AssertJ for Guava

1. Overview

This article focuses on AssertJ’s Guava-related assertions and is the second article in the AssertJ series. If you want some general info about AssertJ, have a look at the first article in the series, Introduction to AssertJ.

2. Maven Dependencies

In order to use AssertJ with Guava, you need to add the following dependency to your pom.xml:

<dependency>
    <groupId>org.assertj</groupId>
    <artifactId>assertj-guava</artifactId>
    <version>3.0.0</version>
    <scope>test</scope>
</dependency>

You can find the latest version here.

And note that since version 3.0.0, AssertJ Guava relies on Java 8 and AssertJ Core 3.x.

3. Guava Assertions in Action

AssertJ has custom assertions for Guava types: ByteSource, Multimap, Optional, Range, RangeMap and Table.
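The examples below assume that the Guava-specific assertion entry point and the entry helper are statically imported – a small sketch; both come from the assertj-guava module:

import static org.assertj.guava.api.Assertions.assertThat;
import static org.assertj.guava.api.Assertions.entry;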

3.1. ByteSource Assertions

Let’s start off by creating two empty temporary files:

File temp1 = File.createTempFile("bael", "dung1");
File temp2 = File.createTempFile("bael", "dung2");

and creating ByteSource instances from them:

ByteSource byteSource1 = Files.asByteSource(temp1);
ByteSource byteSource2 = Files.asByteSource(temp2);

Now we can write the following assertion:

assertThat(byteSource1)
  .hasSize(0)
  .hasSameContentAs(byteSource2);

3.2. Multimap Assertions

Multimaps are maps that can associate more than one value with a given key. The Multimap assertions work much like those for normal Map implementations.

Let’s start by creating a Multimap instance and adding some entries:

Multimap<Integer, String> mmap = Multimaps
  .newMultimap(new HashMap<>(), Sets::newHashSet);
mmap.put(1, "one");
mmap.put(1, "1");

And now we can assert:

assertThat(mmap)
  .hasSize(2)
  .containsKeys(1)
  .contains(entry(1, "one"))
  .contains(entry(1, "1"));

There are also two additional assertions available – with a subtle difference between them:

  • containsAllEntriesOf and
  • hasSameEntriesAs.

Let’s have a look at these two assertions; we’ll start by defining a few maps:

Multimap<Integer, String> mmap1 = ArrayListMultimap.create();
mmap1.put(1, "one");
mmap1.put(1, "1");
mmap1.put(2, "two");
mmap1.put(2, "2");

Multimap<Integer, String> mmap1_clone = Multimaps
  .newSetMultimap(new HashMap<>(), HashSet::new);
mmap1_clone.put(1, "one");
mmap1_clone.put(1, "1");
mmap1_clone.put(2, "two");
mmap1_clone.put(2, "2");

Multimap<Integer, String> mmap2 = Multimaps
  .newSetMultimap(new HashMap<>(), HashSet::new);
mmap2.put(1, "one");
mmap2.put(1, "1");

As you can see, mmap1 and mmap1_clone contain exactly the same entries but are two different objects of two different Multimap types. The Multimap mmap2 contains a subset of the entries that all the maps share. Now the following assertion holds:

assertThat(mmap1)
  .containsAllEntriesOf(mmap2)
  .containsAllEntriesOf(mmap1_clone)
  .hasSameEntriesAs(mmap1_clone);

3.3. Optional Assertions

Assertions for Guava’s Optional involve value presence checking and utilities for extracting the inner value.

Let’s start by creating an Optional instance:

Optional<String> something = Optional.of("something");

And now we can check the value’s presence and assert the Optional‘s content:

assertThat(something)
  .isPresent()
  .extractingValue()
  .isEqualTo("something");

3.4. Range Assertions

Assertions for Guava’s Range class involve checking Range‘s lower and upper bounds, or whether a certain value is within a given range.

Let’s define a simple range of characters by doing the following:

Range<String> range = Range.openClosed("a", "g");

and now we can test:

assertThat(range)
  .hasOpenedLowerBound()
  .isNotEmpty()
  .hasClosedUpperBound()
  .contains("b");

3.5. Table Assertions

AssertJ’s table-specific assertions allow checking the row and column counts and the presence of a cell’s value.

Let’s create a simple Table instance:

Table<Integer, String, String> table = HashBasedTable.create(2, 2);
table.put(1, "A", "PRESENT");
table.put(1, "B", "ABSENT");

and now we can perform the following check:

assertThat(table)
  .hasRowCount(1)
  .containsValues("ABSENT")
  .containsCell(1, "B", "ABSENT");

4. Conclusion

In this article from the AssertJ series, we explored AssertJ’s Guava-related assertions.

The implementation of all the examples and code snippets can be found in a GitHub project.



Introduction to Java Logging


1. Overview

Logging is a powerful aid for understanding and debugging a program’s run-time behavior. Logs capture and persist important data and make it available for analysis at any point in time.

This article discusses the most popular Java logging frameworks, Log4j 2 and Logback, along with their predecessor Log4j, and briefly touches upon SLF4J, a logging facade that provides a common interface for different logging frameworks.

2. Enabling Logging

All the logging frameworks discussed in the article share the notion of loggers, appenders and layouts. Enabling logging inside the project follows three common steps:

  1. Adding needed libraries
  2. Configuration
  3. Placing log statements

The upcoming sections discuss the steps for each framework individually.

3. Log4j 2

Log4j 2 is a new and improved version of the Log4j logging framework. The most compelling improvement is the possibility of asynchronous logging. Log4j 2 requires the following libraries:

<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-api</artifactId>
    <version>2.6.1</version>
</dependency>
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <version>2.6.1</version>
</dependency>

You can find the latest version of log4j-api here and of log4j-core here.

3.1. Configuration

Configuring Log4j 2 is based on the main configuration file, log4j2.xml. The first thing to configure is the appender.

Appenders determine where the log messages will be routed. The destination can be a console, a file, a socket, etc.

Log4j 2 has many appenders for different purposes; you can find more information on the official Log4j 2 site.

Let’s take a look at a simple config example:

<Configuration status="debug" name="baeldung" packages="">
    <Appenders>
        <Console name="stdout" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss} %p %m%n"/>
        </Console>
    </Appenders>
</Configuration>

You can set a name for each appender; for example, use the name console instead of stdout.

Notice the PatternLayout element – this determines what the message should look like. In our example, the pattern is set based on the pattern param, where %d determines the date pattern, %p outputs the log level, %m outputs the logged message and %n adds a new line symbol. You can find more info about patterns on the official Log4j 2 page.

Finally – to enable an appender (or several of them) you need to add it to the <Root> section:

<Root level="error">
    <AppenderRef ref="stdout"/>
</Root>

3.2. Logging to File

Sometimes you will need to log to a file, so we will add a fout appender to our configuration:

<Appenders>
    <File name="fout" fileName="baeldung.log" append="true">
        <PatternLayout>
            <Pattern>%d{yyyy-MM-dd HH:mm:ss} %-5p %m%n</Pattern>
        </PatternLayout>
    </File>
</Appenders>

The File appender has several parameters that can be configured:

  • fileName – determines the name of the log file
  • append – the default value for this param is true, meaning that by default a File appender appends to an existing file rather than truncating it
  • PatternLayout – described in the previous example

In order to enable the File appender, you need to add it to the <Root> section:

<Root level="INFO">
    <AppenderRef ref="stdout" />
    <AppenderRef ref="fout"/>
</Root>

3.3. Asynchronous Logging

If you want to make Log4j 2 logging asynchronous, you need to add the LMAX Disruptor library to your pom.xml. LMAX Disruptor is a lock-free inter-thread communication library.

Adding disruptor to pom.xml:

<dependency>
    <groupId>com.lmax</groupId>
    <artifactId>disruptor</artifactId>
    <version>3.3.4</version>
</dependency>

The latest version of Disruptor can be found here.

If you want to use the LMAX Disruptor, you need to use <AsyncRoot> instead of <Root> in your configuration:

<AsyncRoot level="DEBUG">
    <AppenderRef ref="stdout" />
    <AppenderRef ref="fout"/>
</AsyncRoot>

Or you can enable asynchronous logging by setting the system property Log4jContextSelector to org.apache.logging.log4j.core.async.AsyncLoggerContextSelector.
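For instance, here is a minimal sketch of setting the selector programmatically – note it must happen before the first logger is obtained; passing -DLog4jContextSelector=... on the command line is the more common approach:

import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.LogManager;

public class AsyncLoggingExample {

    public static void main(String[] args) {
        // must be set before LogManager.getLogger() is called for the first time
        System.setProperty("Log4jContextSelector",
          "org.apache.logging.log4j.core.async.AsyncLoggerContextSelector");

        Logger logger = LogManager.getLogger(AsyncLoggingExample.class);
        logger.info("This message is logged asynchronously");
    }
}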

You can of course read more about the configuration of the Log4j 2 async logger and see some performance diagrams on the official Log4j 2 page.

3.4. Usage

The following is a simple example that demonstrates the use of Log4j for logging:

import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.LogManager;

public class Log4jExample {

    private static Logger logger = LogManager.getLogger(Log4jExample.class);

    public static void main(String[] args) {
        logger.debug("Debug log message");
        logger.info("Info log message");
        logger.error("Error log message");
    }
}

After running, the application will log the following messages to both the console and the file named baeldung.log:

2016-06-16 17:02:13 INFO  Info log message
2016-06-16 17:02:13 ERROR Error log message

If you elevate the root log level to ERROR:

<Root level="ERROR">

The output will look like the following:

2016-06-16 17:02:13 ERROR Error log message

As you can see, raising the log level causes messages with lower log levels not to be printed to the appenders.

The logger.error method can also be used to log an exception that occurred:

try {
    // Here some exception can be thrown
} catch (Exception e) {
    logger.error("Error log message", e);
}

3.5. Package Level Configuration

Let’s say you need to show messages with the log level TRACE – for example from a specific package such as com.baeldung.log4j2:

logger.trace("Trace log message");

For all other packages you want to continue logging only INFO messages.

Keep in mind that TRACE is lower than the root log level INFO that we specified in the configuration.

To enable logging only for one of the packages, you need to add the following section before <Root> in your log4j2.xml:

<Logger name="com.baeldung.log4j2" level="TRACE">
    <AppenderRef ref="stdout"/>
</Logger>

This will enable TRACE-level logging for the com.baeldung.log4j2 package, and your output will look like:

2016-06-16 17:02:13 TRACE Trace log message
2016-06-16 17:02:13 DEBUG Debug log message
2016-06-16 17:02:13 INFO  Info log message
2016-06-16 17:02:13 ERROR Error log message

4. Logback

Logback is meant to be an improved version of Log4j, developed by the same developer who made Log4j.

Logback also has a lot more features compared to Log4j, with many of them introduced into Log4j 2 as well. You can take a quick look at all of the advantages of Logback on the official site.

Let’s start by adding the following dependency to the pom.xml:

<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.1.7</version>
</dependency>

This dependency will transitively pull in two other dependencies, logback-core and slf4j-api. Note that the latest version of Logback can be found here.

4.1. Configuration

Let’s now have a look at a Logback configuration example:

<configuration>
  <!-- Console appender -->
  <appender name="stdout" class="ch.qos.logback.core.ConsoleAppender">
    <layout class="ch.qos.logback.classic.PatternLayout">
      <!-- Pattern of log message for console appender -->
      <Pattern>%d{yyyy-MM-dd HH:mm:ss} %-5p %m%n</Pattern>
    </layout>
  </appender>

  <!-- File appender -->
  <appender name="fout" class="ch.qos.logback.core.FileAppender">
    <file>baeldung.log</file>
    <append>false</append>
    <encoder>
      <!-- Pattern of log message for file appender -->
      <pattern>%d{yyyy-MM-dd HH:mm:ss} %-5p %m%n</pattern>
    </encoder>
  </appender>

  <!-- Override log level for specified package -->
  <logger name="com.baeldung.log4j" level="TRACE"/>

  <root level="INFO">
    <appender-ref ref="stdout" />
    <appender-ref ref="fout" />
  </root>
</configuration>

Logback uses SLF4J as an interface, so you need to import SLF4J’s Logger and LoggerFactory.

4.2. SLF4J

SLF4J provides a common interface and abstraction for most of the Java logging frameworks. It acts as a facade and provides a standardized API for accessing the underlying features of the logging framework.

Logback uses SLF4J as its native API. The following is an example of logging with Logback:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LogbackExample {

    private static Logger logger = LoggerFactory.getLogger(LogbackExample.class);

    public static void main(String[] args) {
        logger.debug("Debug log message");
        logger.info("Info log message");
        logger.error("Error log message");
    }
}

The output will remain the same as in previous examples.

5. Log4J

Finally, let’s have a look at the venerable Log4j logging framework.

At this point it’s of course outdated, but worth discussing as it lays the foundation for its more modern successors.

Many of the configuration details match those discussed in Log4j 2 section.

5.1. Configuration

First of all, you need to add the Log4j library to your project’s pom.xml:

<dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.17</version>
</dependency>

You should be able to find the latest version of Log4j here.

Let’s take a look at a complete example of a simple Log4j configuration with only one console appender:

<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd" >
<log4j:configuration debug="false">

    <!--Console appender-->
    <appender name="stdout" class="org.apache.log4j.ConsoleAppender">
        <layout class="org.apache.log4j.PatternLayout">
            <param name="ConversionPattern" 
              value="%d{yyyy-MM-dd HH:mm:ss} %p %m%n" />
        </layout>
    </appender>

    <root>
        <level value="INFO" />
        <appender-ref ref="stdout" />
    </root>

</log4j:configuration>

<log4j:configuration debug="false"> is the opening tag of the whole configuration, and it has one property – debug. It determines whether you want to add Log4j debug information to the logs.

5.2. Usage

After you have added the Log4j library and configuration, you can use the logger in your code. Let’s take a look at a simple example:

import org.apache.log4j.Logger;

public class Log4jExample {
    private static Logger logger = Logger.getLogger(Log4jExample.class);

    public static void main(String[] args) {
        logger.debug("Debug log message");
        logger.info("Info log message");
        logger.error("Error log message");
    }
}

6. Conclusion

This article shows very simple examples of how to use different logging frameworks such as Log4j, Log4j 2 and Logback, covering simple configuration examples for each of them.

The examples that accompany the article are available at GitHub.


Introduction to JSF EL 2


1. Introduction

Expression Language (EL) is a scripting language that’s seen adoption within many Java frameworks, such as Spring with SpEL and JBoss with JBoss EL.

In this article, we’ll focus on JSF’s implementation of this scripting language – Unified EL.

EL is currently in version 3.0, a major upgrade that allows the processing engine to be used in standalone mode – for example, on the Java SE platform. Prior versions were dependent on a Java EE-compliant application server or web container. This article discusses EL version 2.2.

2. Immediate and Deferred Evaluation

The primary function of EL in JSF is to connect the JSF view (usually XHTML markup) and the Java-based back-end. The back-end can be user-created managed beans, or container-managed objects like the HTTP session.

We will be looking at EL 2.2. EL in JSF comes in two general forms: immediate syntax EL and deferred syntax EL.

2.1. Immediate Syntax EL

Otherwise known as JSP EL, this is a scripting format that’s a holdover from the JSP days of Java web application development.

JSP EL expressions start with the dollar sign ($), followed by the left curly bracket ({), then the actual expression, and finally close with the right curly bracket (}):

${ELBean.value > 0}

This syntax:

  1. Is evaluated only once (at the beginning) in the lifecycle of a page. What this means is that the value being read by the expression in the example above must be set before the page is loaded.
  2. Provides read-only access to bean values.
  3. And as a result, requires adherence to the JavaBean naming convention.

For most uses, this form of EL is not very versatile.

2.2. Deferred Execution EL

Deferred Execution EL is the EL designed for JSF proper. Its major syntactic difference from JSP EL is that it’s marked with a “#” instead of a “$“:

#{ELBean.value > 0}

Deferred EL:

  1. Is in sync with the JSF lifecycle. This means that an EL expression in deferred EL is evaluated at different points in the rendering of a JSF page (at the beginning and the end).
  2. Provides read and write access to bean values. This allows one to set a value in a JSF backing-bean (or anywhere else) using EL.
  3. Allows a programmer to invoke arbitrary methods on an object and, depending on the version of EL, pass arguments to such methods.

Unified EL is the specification that unifies both deferred EL and JSP EL, allowing both syntaxes on the same page.

3. Unified EL

Unified EL allows two general flavors of expressions, value expressions and method expressions.

And a quick note – the following sections will show some examples, which are all available in the app (see the Github link at the end) by navigating to:

http://localhost:8080/jsf/el_intro.jsf

3.1. Value Expressions

A value expression allows us to either read or set a managed bean property, depending on where it’s placed.

The following expression reads a managed bean property onto the page:

Hello, #{ELBean.firstName}

The following expression, however, allows us to set a value on the user object:

<h:inputText id="firstName" value="#{ELBean.firstName}" required="true"/>

The variable must follow the JavaBean naming convention to be eligible for this kind of treatment. For the value of the bean to be committed, the enclosing form just needs to be saved.

3.2. Method Expressions

Unified EL provides method expressions to execute public, non-static methods from within a JSF page. The methods may or may not have return values.

Here’s a quick example:

<h:commandButton value="Save" action="#{ELBean.save}"/>

The save() method being referred to is defined on a backing bean named ELBean. 

Starting from EL 2.2, you can also pass arguments to the method that’s accessed using EL. This can allow us to rewrite our example thus:

<h:inputText id="firstName" binding="#{firstName}" required="true"/>
<h:commandButton value="Save"
  action="#{ELBean.saveFirstName(firstName.value.toString().concat('(passed)'))}"/>

What we’ve done here is create a page-scoped binding expression for the inputText component and directly pass its value attribute to the method expression.

Note that the variable is passed to the method without any special notation, curly braces or escape characters.
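For reference, here is a minimal sketch of what such a backing bean might look like – hypothetical code; the bean name and method signatures simply mirror the snippets above:

import javax.faces.bean.ManagedBean;
import javax.faces.bean.SessionScoped;

@ManagedBean(name = "ELBean")
@SessionScoped
public class ELBean {

    private String firstName;

    // JavaBean-style accessors make the property reachable from EL
    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }

    // referenced by #{ELBean.save}
    public void save() {
        // persist the data
    }

    // referenced by #{ELBean.saveFirstName(...)} - uses EL 2.2 parameter passing
    public void saveFirstName(String firstName) {
        this.firstName = firstName;
    }
}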

3.3. Implicit EL Objects

The JSF EL engine provides access to several container-managed objects. Some of them are:

  • #{application}: also available as #{servletContext}, this is the object representing the web application instance
  • #{applicationScope}: a map of variables accessible web application-wide
  • #{cookie}: a map of the HTTP Cookie variables
  • #{facesContext}: the current instance of FacesContext
  • #{flash}: the JSF Flash scoped-object
  • #{header}: a map of the HTTP headers in the current request
  • #{initParam}: a map of the context initialization variables of the web application
  • #{param}: a map of the HTTP request query parameters
  • #{request}: the HttpServletRequest object
  • #{requestScope}: a request-scoped map of variables
  • #{sessionScope}: a session-scoped map of variables
  • #{session}: the HttpSession object
  • #{viewScope}: a view (page-) scoped map of variables

The following simple example lists all the request headers and values by accessing the header implicit object:

<c:forEach items="#{header}" var="header">
   <tr>
       <td>#{header.key}</td>
       <td>#{header.value}</td>
   </tr>
</c:forEach>

4. What You Can Do in EL

In its versatility, EL can be featured in Java code, XHTML markup, JavaScript and even in JSF configuration files like faces-config.xml. Let’s examine some concrete use cases.

4.1. Use EL in Page Markup

EL can be featured in standard HTML tags:

<meta name="description" content="#{ELBean.pageDescription}"/>

4.2. Use EL in JavaScript

EL will be interpreted when encountered in JavaScript or <script> tags:

<script type="text/javascript"> var theVar = #{ELBean.firstName};</script>

A backing bean variable will be set as a JavaScript variable here.

4.3. Evaluate Boolean Logic in EL Using Operators

EL supports fairly advanced comparison operators:

  • eq – equality operator, equivalent to “==”
  • lt – less than operator, equivalent to “<”
  • le – less than or equal to operator, equivalent to “<=”
  • gt – greater than operator, equivalent to “>”
  • ge – greater than or equal to operator, equivalent to “>=”

For example, #{ELBean.value gt 0} evaluates to true only when value is greater than zero.

4.4. Evaluate EL in a Backing Bean

From within the backing bean code, one can evaluate an EL expression using the JSF Application. This opens up a world of possibilities, in connecting the JSF page with the backing bean. You could retrieve implicit EL objects, or retrieve actual HTML page components or their value easily from the backing bean:

FacesContext ctx = FacesContext.getCurrentInstance(); 
Application app = ctx.getApplication(); 
String firstName = app.evaluateExpressionGet(ctx, "#{firstName.value}", String.class); 
HtmlInputText firstNameTextBox = app.evaluateExpressionGet(ctx, "#{firstName}", HtmlInputText.class);

This allows the developer a great deal of flexibility in interacting with a JSF page.

5. What You Cannot Do in EL

EL < 3.0 does have some limitations. The following sections discuss some of them.

5.1. No Overloading

EL doesn’t support the use of overloading. So in a backing bean with the following methods:

public void save(User theUser);
public void save(String username);
public void save(Integer uid);

JSF EL will not be able to properly evaluate the following expression:

<h:commandButton value="Save" action="#{ELBean.save(firstName.value)}"/>

The JSF ELResolver will introspect the class definition of the bean and pick the first method returned by java.lang.Class#getMethods (a method that returns the methods available in a class). The order of the returned methods is not guaranteed, and this will inevitably result in undefined behaviour.

5.2. No Enums or Constant Values

JSF EL < 3.0 doesn’t support the use of constant values or Enums in the script. So, having any of the following:

public static final String USER_ERROR_MESS = "No, you can’t do that";
enum Days { Sat, Sun, Mon, Tue, Wed, Thu, Fri };

means that you won’t be able to do the following:

<h:outputText id="message" value="#{ELBean.USER_ERROR_MESS}"/>
<h:commandButton id="saveButton" value="save" rendered="#{ELBean.offDay == Days.Sun}"/>

5.3. No Built-in Null Safety

JSF EL < v3.0 doesn’t provide implicit null safe access, which some may find odd about a modern scripting engine.

So if person in the expression below is null, the entire expression fails with an unsightly NPE:

Hello Mr, #{ELBean.person.surname}

6. Conclusion

We’ve examined some of the fundamentals of JSF EL, its strengths and its limitations.

This is largely a versatile scripting language with some room for improvement; it’s also the glue that binds the JSF view to the JSF model and controller.

The source code that accompanies this article is available at GitHub.


Hibernate: save, persist, update, merge, saveOrUpdate


1. Introduction

In this article we will discuss the differences between several methods of the Session interface: save, persist, update, merge, saveOrUpdate.

This is not an introduction to Hibernate and you should already know the basics of configuration, object-relational mapping and working with entity instances. For an introductory article to Hibernate, visit our tutorial on Hibernate 4 with Spring.

2. Session as a Persistence Context Implementation

The Session interface has several methods that eventually result in saving data to the database: persist, save, update, merge, saveOrUpdate. To understand the difference between these methods, we must first discuss the purpose of the Session as a persistence context and the difference between the states of entity instances in relation to the Session.

We should also understand the history of Hibernate development that led to some partly duplicated API methods.

2.1. Managing Entity Instances

Apart from object-relational mapping itself, one of the problems that Hibernate was intended to solve is the problem of managing entities during runtime. The notion of “persistence context” is Hibernate’s solution to this problem. Persistence context can be thought of as a container or a first-level cache for all the objects that you loaded or saved to a database during a session.

The session is a logical transaction whose boundaries are defined by your application’s business logic. When you work with the database through a persistence context, and all of your entity instances are attached to this context, you should always have a single entity instance for every database record that you’ve interacted with during the session.
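For example, loading the same record twice within one session yields the same object – a quick sketch, assuming an open session and an existing row with id 1 for the Person entity introduced in section 3:

Person p1 = session.get(Person.class, 1L);
Person p2 = session.get(Person.class, 1L);

// p1 == p2 is true: both point to the same instance held by the persistence context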

In Hibernate, the persistence context is represented by an org.hibernate.Session instance. For JPA, it is the javax.persistence.EntityManager. When we use Hibernate as a JPA provider and operate via the EntityManager interface, the implementation of this interface basically wraps the underlying Session object. However, Hibernate’s Session provides a richer interface with more possibilities, so sometimes it is useful to work with the Session directly.
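If you operate through the JPA API but need a Hibernate-specific feature, you can unwrap the underlying Session – a sketch, assuming an available EntityManager instance:

Session session = entityManager.unwrap(Session.class);
// the richer Hibernate API, e.g. saveOrUpdate, is now accessible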

2.2. States of Entity Instances

Any entity instance in your application appears in one of the three main states in relation to the Session persistence context:

  • transient — this instance is not, and never was, attached to a Session; this instance has no corresponding rows in the database; it’s usually just a new object that you have created to save to the database;
  • persistent — this instance is associated with a unique Session object; upon flushing the Session to the database, this entity is guaranteed to have a corresponding consistent record in the database;
  • detached — this instance was once attached to a Session (in a persistent state), but now it’s not; an instance enters this state if you evict it from the context, clear or close the Session, or put the instance through serialization/deserialization process.

Here is a simplified state diagram with comments on Session methods that make the state transitions happen.

(Diagram: the transient, persistent and detached states, with the Session methods that transition entity instances between them.)

When the entity instance is in the persistent state, all changes that you make to the mapped fields of this instance will be applied to the corresponding database records and fields upon flushing the Session. The persistent instance can be thought of as “online”, whereas the detached instance has gone “offline” and is not monitored for changes.

This means that when you change fields of a persistent object, you don’t have to call save, update or any of those methods to get these changes to the database: all you need is to commit the transaction, or flush or close the session, when you’re done with it.
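Here is a quick sketch of this automatic dirty checking – assuming a configured SessionFactory and an existing Person record with id 1:

Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();

Person person = session.get(Person.class, 1L); // person is in the persistent state
person.setName("Mary");                        // no explicit save/update call needed

tx.commit();  // the flush triggered here issues the UPDATE
session.close();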

2.3. Conformity to JPA Specification

Hibernate was the most successful Java ORM implementation. No wonder that the specification for the Java Persistence API (JPA) was heavily influenced by the Hibernate API. Unfortunately, there were also many differences: some major, some more subtle.

To act as an implementation of the JPA standard, Hibernate’s APIs had to be revised. Several methods were added to the Session interface to match the EntityManager interface. These methods serve the same purpose as the “original” methods, but conform to the specification and thus have some differences.

3. Differences Between the Operations

It is important to understand from the beginning that none of these methods (persist, save, update, merge, saveOrUpdate) immediately results in the corresponding SQL UPDATE or INSERT statements. The actual saving of data to the database occurs on committing the transaction or flushing the Session.

The mentioned methods basically manage the state of entity instances by transitioning them between different states along the lifecycle.

As an example entity, we will use a simple annotation-mapped entity Person:

@Entity
public class Person {

    @Id
    @GeneratedValue
    private Long id;

    private String name;

    // ... getters and setters

}

3.1. Persist

The persist method is intended for adding a new entity instance to the persistence context, i.e. transitioning an instance from transient to persistent state.

We usually call it when we want to add a record to the database (persist an entity instance):

Person person = new Person();
person.setName("John");
session.persist(person);

What happens after the persist method is called? The person object has transitioned from the transient to the persistent state. The object is in the persistence context now, but not yet saved to the database. The generation of INSERT statements will occur only upon committing the transaction, flushing or closing the session.

Notice that the persist method has void return type. It operates on the passed object “in place”, changing its state. The person variable references the actual persisted object.

This method is a later addition to the Session interface. The main differentiating feature of this method is that it conforms to the JSR-220 specification (EJB persistence). The semantics of this method is strictly defined in the specification, which basically states that:

  • a transient instance becomes persistent (and the operation cascades to all of its relations with cascade=PERSIST or cascade=ALL),
  • if an instance is already persistent, then this call has no effect for this particular instance (but it still cascades to its relations with cascade=PERSIST or cascade=ALL),
  • if an instance is detached, you should expect an exception, either upon calling this method, or upon committing or flushing the session.

Notice that there is nothing here that concerns the identifier of an instance. The spec does not state that the id will be generated right away, regardless of the id generation strategy. The specification for the persist method allows the implementation to issue statements for generating id on commit or flush, and the id is not guaranteed to be non-null after calling this method, so you should not rely upon it.

You may call this method on an already persistent instance, and nothing happens. But if you try to persist a detached instance, the implementation is bound to throw an exception. In the following example we persist the entity, evict it from the context so it becomes detached, and then try to persist again. The second call to session.persist() causes an exception, so the following code will not work:

Person person = new Person();
person.setName("John");
session.persist(person);

session.evict(person);

session.persist(person); // PersistenceException!

3.2. Save

The save method is an “original” Hibernate method that does not conform to the JPA specification.

Its purpose is basically the same as persist, but it has different implementation details. The documentation for this method strictly states that it persists the instance, “first assigning a generated identifier”. The method is guaranteed to return the Serializable value of this identifier.

Person person = new Person();
person.setName("John");
Long id = (Long) session.save(person);

The effect of saving an already persisted instance is the same as with persist. The difference comes when you try to save a detached instance:

Person person = new Person();
person.setName("John");
Long id1 = (Long) session.save(person);

session.evict(person);
Long id2 = (Long) session.save(person);

The id2 variable will differ from id1. The call of save on a detached instance creates a new persistent instance and assigns it a new identifier, which results in a duplicate record in the database upon committing or flushing.

3.3. Merge

The main intention of the merge method is to update a persistent entity instance with new field values from a detached entity instance.

For instance, suppose you have a RESTful interface with a method for retrieving a JSON-serialized object by its id to the caller and a method that receives an updated version of this object from the caller. An entity that passed through such serialization/deserialization will appear in a detached state.

After deserializing this entity instance, you need to get a persistent entity instance from a persistence context and update its fields with new values from this detached instance. So the merge method does exactly that:

  • finds an entity instance by the id taken from the passed object (either an existing entity instance from the persistence context is retrieved, or a new instance is loaded from the database);
  • copies fields from the passed object to this instance;
  • returns the newly updated instance.

In the following example we evict (detach) the saved entity from the context, change the name field, and then merge the detached entity.

Person person = new Person(); 
person.setName("John"); 
session.save(person);

session.evict(person);
person.setName("Mary");

Person mergedPerson = (Person) session.merge(person);

Note that the merge method returns an object — it is the mergedPerson object that was loaded into the persistence context and updated, not the person object that you passed as an argument. Those are two different objects, and the person object usually needs to be discarded (in any case, don’t count on it being attached to the persistence context).

As with the persist method, the merge method is specified by JSR-220 to have certain semantics that you can rely upon:

  • if the entity is detached, it is copied onto an existing persistent entity;
  • if the entity is transient, it is copied onto a newly created persistent entity;
  • this operation cascades for all relations with cascade=MERGE or cascade=ALL mapping;
  • if the entity is persistent, then this method call has no effect on it (but the cascading still takes place).

3.4. Update

As with persist and save, the update method is an “original” Hibernate method that was present long before the merge method was added. Its semantics differs in several key points:

  • it acts upon the passed object (its return type is void); the update method transitions the passed object from the detached to the persistent state;
  • this method throws an exception if you pass it a transient entity.

In the following example we save the object, then evict (detach) it from the context, then change its name and call update. Notice that we don’t put the result of the update operation in a separate variable, because the update takes place on the person object itself. Basically we’re reattaching the existing entity instance to the persistence context — something the JPA specification does not allow us to do.

Person person = new Person();
person.setName("John");
session.save(person);
session.evict(person);

person.setName("Mary");
session.update(person);

Trying to call update on a transient instance will result in an exception. The following will not work:

Person person = new Person();
person.setName("John");
session.update(person); // PersistenceException!

3.5. SaveOrUpdate

This method appears only in the Hibernate API and does not have a standardized counterpart. Similar to update, it may also be used for reattaching instances.

Actually, the internal DefaultUpdateEventListener class that processes the update method is a subclass of DefaultSaveOrUpdateListener, just overriding some functionality. The main difference of the saveOrUpdate method is that it does not throw an exception when applied to a transient instance; instead, it makes this transient instance persistent. The following code will persist a newly created instance of Person:

Person person = new Person();
person.setName("John");
session.saveOrUpdate(person);

You may think of this method as a universal tool for making an object persistent regardless of its state, whether it is transient or detached.

4. What to Use?

If you don’t have any special requirements, as a rule of thumb, you should stick to the persist and merge methods, because they are standardized and guaranteed to conform to the JPA specification.

They are also portable in case you decide to switch to another persistence provider, but they may sometimes seem less convenient than the “original” Hibernate methods: save, update and saveOrUpdate.

5. Conclusion

We’ve discussed the purpose of different Hibernate Session methods in relation to managing persistent entities at runtime. We’ve learned how these methods transition entity instances through their lifecycles and why some of these methods have duplicated functionality.

The source code for the article is available on GitHub.



Minification of JS and CSS Assets with Maven


1. Overview

This article shows how to minify JavaScript and CSS assets as a build step and serve the resulting files with Spring MVC.

We will use YUI Compressor as the underlying minification library and YUI Compressor Maven plugin to integrate it into our build process.

2. Maven Plugin Configuration

First, we need to declare that we will use the compressor plugin in our pom.xml file and execute the compress goal. This will compress all .js and .css files under src/main/webapp so that foo.js will be minified as foo-min.js and myCss.css will be minified as myCss-min.css:

<plugin>
    <groupId>net.alchim31.maven</groupId>
    <artifactId>yuicompressor-maven-plugin</artifactId>
    <version>1.5.1</version>
    <executions>
        <execution>
            <goals>
                <goal>compress</goal>
            </goals>
        </execution>
    </executions>
</plugin>

Our src/main/webapp directory contains the following files:

js/
├── foo.js
├── jquery-1.11.1.min.js
resources/
└── myCss.css

After executing mvn clean package, the generated WAR file will contain the following files:

js/
├── foo.js
├── foo-min.js
├── jquery-1.11.1.min.js
├── jquery-1.11.1.min-min.js
resources/
├── myCss.css
└── myCss-min.css

3. Keeping the Filenames the Same

At this stage, when we execute mvn clean package, foo-min.js and myCss-min.css are created by the plugin. Since we have originally used foo.js and myCss.css when referring to the files, our page will still use the original non-minified files, as the minified files have different names than the original.

In order to avoid having both foo.js/foo-min.js and myCss.css/myCss-min.css, and to have the files minified without changing their names, we need to configure the plugin with the nosuffix option as follows:

<plugin>
    <groupId>net.alchim31.maven</groupId>
    <artifactId>yuicompressor-maven-plugin</artifactId>
    <version>1.5.1</version>
    <executions>
        <execution>
            <goals>
                <goal>compress</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <nosuffix>true</nosuffix>
    </configuration>
</plugin>

Now when we execute mvn clean package, we will have the following files in the generated WAR file:

js/
├── foo.js
├── jquery-1.11.1.min.js
resources/
└── myCss.css

4. WAR Plugin Configuration

Keeping the filenames the same has a side effect: it causes the WAR plugin to overwrite the minified foo.js and myCss.css files with the original files, so we don’t have the minified versions of the files in the final output. The foo.js file contains the following lines before minification:

function testing() {
    alert("Testing");
}

When we examine the contents of the foo.js file in the generated WAR file, we see that it has the original content instead of the minified content. To solve this problem, we need to specify a webappDirectory for the compressor plugin and reference it from within the WAR plugin configuration:

<plugin>
    <groupId>net.alchim31.maven</groupId>
    <artifactId>yuicompressor-maven-plugin</artifactId>
    <version>1.5.1</version>
    <executions>
        <execution>
            <goals>
                <goal>compress</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <nosuffix>true</nosuffix>
        <webappDirectory>${project.build.directory}/min</webappDirectory>
    </configuration>
</plugin>
<plugin>
    <artifactId>maven-war-plugin</artifactId>
    <configuration>
        <webResources>
            <resource>
                <directory>${project.build.directory}/min</directory>
            </resource>
        </webResources>
    </configuration>
</plugin>

Here we have specified the min directory as the output directory for the minified files and configured the WAR plugin to include this in the final output.

Now we have the minified files in the generated WAR file, with their original filenames foo.js and myCss.css. We can check foo.js to see that it has the following minified content now:

function testing(){alert("Testing")};

5. Excluding Already Minified Files

Third-party JavaScript and CSS libraries may have minified versions available for download. If you happen to use one of these in your project, you don’t need to process them again.

Including already minified files produces warning messages when building the project.

For example, jquery-1.11.1.min.js is an already minified JavaScript file, and it causes warning messages similar to the following during the build:

[WARNING] .../src/main/webapp/js/jquery-1.11.1.min.js [-1:-1]: 
Using 'eval' is not recommended. Moreover, using 'eval' reduces the level of compression!
execScript||function(b){a. ---> eval <--- .call(a,b);})
[WARNING] ...jquery-1.11.1.min.js:line -1:column -1: 
Using 'eval' is not recommended. Moreover, using 'eval' reduces the level of compression!
execScript||function(b){a. ---> eval <--- .call(a,b);})

To exclude already minified files from the process, configure the compressor plugin with an excludes option as follows:

<plugin>
    <groupId>net.alchim31.maven</groupId>
    <artifactId>yuicompressor-maven-plugin</artifactId>
    <version>1.5.1</version>
    <executions>
        <execution>
            <goals>
                <goal>compress</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <nosuffix>true</nosuffix>
        <webappDirectory>${project.build.directory}/min</webappDirectory>
        <excludes>
            <exclude>**/*.min.js</exclude>
        </excludes>
    </configuration>
</plugin>

This will exclude all files under all directories whose filenames end with .min.js. Executing mvn clean package now doesn’t produce warning messages, and the build doesn’t try to minify already minified files.

6. Conclusion

In this article, we have described a nice way to integrate minification of JavaScript and CSS files into your Maven workflow. To serve these static assets with your Spring MVC application, see our Serve Static Resources with Spring article.

You can find the code for this article on GitHub.


Binary Data Formats in a Spring REST API


1. Overview

While JSON and XML are widely popular data transfer formats when it comes to REST APIs, they’re not the only options available.

There exist many other formats with varying degrees of serialization speed and serialized data size.

In this article we explore how to configure a Spring REST mechanism to use binary data formats – which we illustrate with Kryo.

Moreover, we show how to support multiple data formats by adding support for Google Protocol Buffers.

2. HttpMessageConverter

The HttpMessageConverter interface is basically Spring’s public API for the conversion of REST data formats.

There are different ways to specify the desired converters. Here we extend WebMvcConfigurerAdapter and explicitly provide the converters we want to use in the overridden configureMessageConverters method:

@Configuration
@EnableWebMvc
@ComponentScan({ "org.baeldung.web" })
public class WebConfig extends WebMvcConfigurerAdapter {
    @Override
    public void configureMessageConverters(List<HttpMessageConverter<?>> messageConverters) {
        super.configureMessageConverters(messageConverters);
    }
}

3. Kryo

3.1. Kryo Overview and Maven

Kryo is a binary encoding format that provides good serialization and deserialization speed and smaller transferred data size compared to text-based formats.

While in theory it can be used to transfer data between different kinds of systems, it is primarily designed to work with Java components.

We add the necessary Kryo libraries with the following Maven dependency:

<dependency>
    <groupId>com.esotericsoftware</groupId>
    <artifactId>kryo</artifactId>
    <version>4.0.0</version>
</dependency>

To check the latest version of Kryo, you can have a look here.
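Before wiring Kryo into Spring, it may help to see it in isolation. Here is a minimal sketch of a round trip to a file – assuming Foo is a simple DTO with a no-arg constructor, with exception handling omitted:

Kryo kryo = new Kryo();
kryo.register(Foo.class);

// serialize the object to a file
Output output = new Output(new FileOutputStream("foo.bin"));
kryo.writeObject(output, new Foo());
output.close();

// deserialize it back
Input input = new Input(new FileInputStream("foo.bin"));
Foo foo = kryo.readObject(input, Foo.class);
input.close();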

3.2. Kryo in Spring REST

In order to utilize Kryo as a data transfer format, we create a custom HttpMessageConverter and implement the necessary serialization and deserialization logic. Also, we define a custom media type for Kryo: application/x-kryo. Here is a simplified working example which we use for demonstration purposes:

public class KryoHttpMessageConverter extends AbstractHttpMessageConverter<Object> {

    public static final MediaType KRYO = new MediaType("application", "x-kryo");

    private static final ThreadLocal<Kryo> kryoThreadLocal = new ThreadLocal<Kryo>() {
        @Override
        protected Kryo initialValue() {
            Kryo kryo = new Kryo();
            kryo.register(Foo.class, 1);
            return kryo;
        }
    };

    public KryoHttpMessageConverter() {
        super(KRYO);
    }

    @Override
    protected boolean supports(Class<?> clazz) {
        return Object.class.isAssignableFrom(clazz);
    }

    @Override
    protected Object readInternal(
      Class<? extends Object> clazz, HttpInputMessage inputMessage) throws IOException {
        Input input = new Input(inputMessage.getBody());
        return kryoThreadLocal.get().readClassAndObject(input);
    }

    @Override
    protected void writeInternal(
      Object object, HttpOutputMessage outputMessage) throws IOException {
        Output output = new Output(outputMessage.getBody());
        kryoThreadLocal.get().writeClassAndObject(output, object);
        output.flush();
    }

    @Override
    protected MediaType getDefaultContentType(Object object) {
        return KRYO;
    }
}

The controller method is straightforward (note there is no need for any custom protocol-specific data types; we use a plain Foo DTO):

@RequestMapping(method = RequestMethod.GET, value = "/foos/{id}")
@ResponseBody
public Foo findById(@PathVariable long id) {
    return fooRepository.findById(id);
}

And a quick test to prove that we have wired everything together correctly:

RestTemplate restTemplate = new RestTemplate();
restTemplate.setMessageConverters(Arrays.asList(new KryoHttpMessageConverter()));

HttpHeaders headers = new HttpHeaders();
headers.setAccept(Arrays.asList(KryoHttpMessageConverter.KRYO));
HttpEntity<String> entity = new HttpEntity<String>(headers);

ResponseEntity<Foo> response = restTemplate.exchange("http://localhost:8080/spring-rest/foos/{id}",
  HttpMethod.GET, entity, Foo.class, "1");
Foo resource = response.getBody();

assertThat(resource, notNullValue());

4. Supporting Multiple Data Formats

Often you would want to provide support for multiple data formats for the same service. The clients specify the desired data formats in the Accept HTTP header, and the corresponding message converter is invoked to serialize the data.

Usually, you just have to register another converter for things to work out of the box. Spring picks the appropriate converter automatically based on the value in the Accept header and the supported media types declared in the converters.

For example, to add support for both JSON and Kryo, register both KryoHttpMessageConverter and MappingJackson2HttpMessageConverter:

@Override
public void configureMessageConverters(List<HttpMessageConverter<?>> messageConverters) {
    messageConverters.add(new MappingJackson2HttpMessageConverter());
    messageConverters.add(new KryoHttpMessageConverter());
    super.configureMessageConverters(messageConverters);
}

Now, let’s suppose that we want to add Google Protocol Buffer to the list as well. For this example, we assume there is a class FooProtos.Foo generated with the protoc compiler based on the following proto file:

package baeldung;
option java_package = "org.baeldung.web.dto";
option java_outer_classname = "FooProtos";
message Foo {
    required int64 id = 1;
    required string name = 2;
}

Spring comes with some built-in support for Protocol Buffer. All we need to make it work is to include ProtobufHttpMessageConverter in the list of supported converters:

@Override
public void configureMessageConverters(List<HttpMessageConverter<?>> messageConverters) {
    messageConverters.add(new MappingJackson2HttpMessageConverter());
    messageConverters.add(new KryoHttpMessageConverter());
    messageConverters.add(new ProtobufHttpMessageConverter());
    super.configureMessageConverters(messageConverters);
}

However, we have to define a separate controller method that returns FooProtos.Foo instances (JSON and Kryo both deal with Foos, so no changes are needed in the controller to distinguish the two).

There are two ways to resolve the ambiguity about which method gets called. The first approach is to use different URLs for protobuf and other formats. For example, for protobuf:

@RequestMapping(method = RequestMethod.GET, value = "/fooprotos/{id}")
@ResponseBody
public FooProtos.Foo findProtoById(@PathVariable long id) { … }

and for the others:

@RequestMapping(method = RequestMethod.GET, value = "/foos/{id}")
@ResponseBody
public Foo findById(@PathVariable long id) { … }

Notice that for protobuf we use value = “/fooprotos/{id}” and for the other formats value = “/foos/{id}”.

The second – and better – approach is to use the same URL but to explicitly specify the produced data format in the request mapping for protobuf:

@RequestMapping(
  method = RequestMethod.GET, 
  value = "/foos/{id}", 
  produces = { "application/x-protobuf" })
@ResponseBody
public FooProtos.Foo findProtoById(@PathVariable long id) { … }

Note that by specifying the media type in the produces annotation attribute we give a hint to the underlying Spring mechanism about which mapping needs to be used based on the value in the Accept header provided by clients, so there is no ambiguity about which method needs to be invoked for the “foos/{id}” URL.

The second approach enables us to provide a uniform and consistent REST API to the clients for all data formats.

Finally, if you’re interested in going deeper into using Protocol Buffers with a Spring REST API, have a look at the reference article.

5. Registering Extra Message Converters

It is very important to note that you lose all of the default message converters when you override the configureMessageConverters method – only the ones you provide will be used.

While sometimes this is exactly what you want, in many cases you just want to add new converters, while still keeping the default ones which already take care of standard data formats like JSON. To achieve this, override the extendMessageConverters method:

@Configuration
@EnableWebMvc
@ComponentScan({ "org.baeldung.web" })
public class WebConfig extends WebMvcConfigurerAdapter {
    @Override
    public void extendMessageConverters(List<HttpMessageConverter<?>> messageConverters) {
        messageConverters.add(new ProtobufHttpMessageConverter());
        messageConverters.add(new KryoHttpMessageConverter());
        super.extendMessageConverters(messageConverters);
    }
}

6. Conclusion

In this tutorial, we looked at how easy it is to use any data transfer format in Spring MVC, and we examined this by using Kryo as an example.

We also showed how to add support for multiple formats so that different clients are able to use different formats.

The implementation of this Binary Data Formats in a Spring REST API tutorial is of course on GitHub. This is a Maven-based project, so it should be easy to import and run as it is.


AssertJ’s Java 8 Features


1. Overview

This article focuses on AssertJ‘s Java 8-related features and is the third article in the series.

If you’re looking for general information on its main features, have a look at the first article in the series, Introduction to AssertJ, and then at AssertJ for Guava.

2. Maven Dependencies

Java 8 support has been included in the main AssertJ Core module since version 3.5.1. In order to use it, you will need to include the following section in your pom.xml file:

<dependency>
    <groupId>org.assertj</groupId>
    <artifactId>assertj-core</artifactId>
    <version>3.5.1</version>
    <scope>test</scope>
</dependency>

This dependency covers only the basic Java assertions. If you want to use the advanced assertions, you will need to add additional modules separately.

The latest Core version can be found here.

3. Java 8 Features

AssertJ leverages Java 8 features by providing special helper methods and new assertions for Java 8 types.
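The snippets below assume the usual static imports – a small sketch; asList and ofYearDay are used in the examples that follow:

import static org.assertj.core.api.Assertions.assertThat;
import static java.util.Arrays.asList;
import static java.time.LocalDate.ofYearDay;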

3.1. Optional Assertions

Let’s create a simple Optional instance:

Optional<String> givenOptional = Optional.of("something");

We can now easily check if an Optional contains some value and what the contained value is:

assertThat(givenOptional)
  .isPresent()
  .hasValue("something");

3.2. Predicate Assertions

Let’s create a simple Predicate instance by checking the length of a String:

Predicate<String> predicate = s -> s.length() > 4;

Now you can easily check which Strings are rejected or accepted by the Predicate:

assertThat(predicate)
  .accepts("aaaaa", "bbbbb")
  .rejects("a", "b")
  .acceptsAll(asList("aaaaa", "bbbbb"))
  .rejectsAll(asList("a", "b"));

3.3. LocalDate Assertions

Let’s start by defining two LocalDate objects:

LocalDate givenLocalDate = LocalDate.of(2016, 7, 8);
LocalDate todayDate = LocalDate.now();

You can now easily check if a given date is before/after a given date, or today:

assertThat(givenLocalDate)
  .isBefore(LocalDate.of(2020, 7, 8))
  .isAfterOrEqualTo(LocalDate.of(1989, 7, 8));

assertThat(todayDate)
  .isAfter(LocalDate.of(1989, 7, 8))
  .isToday();

3.4. LocalDateTime Assertions

The LocalDateTime assertions work similarly to LocalDate‘s, but do not share the isToday method.

Let’s create an example LocalDateTime object:

LocalDateTime givenLocalDate = LocalDateTime.of(2016, 7, 8, 12, 0);

And now you can check:

assertThat(givenLocalDate)
  .isBefore(LocalDateTime.of(2020, 7, 8, 11, 2));

3.5. LocalTime Assertions

The LocalTime assertions work similarly to other java.time.* assertions, but they do have one exclusive method: hasSameHourAs.

Let’s create an example LocalTime object:

LocalTime givenLocalTime = LocalTime.of(12, 15);

and now you can assert:

assertThat(givenLocalTime)
  .isAfter(LocalTime.of(1, 0))
  .hasSameHourAs(LocalTime.of(12, 0));

3.6. FlatExtracting Helper Method

The flatExtracting helper is a special utility method that leverages Java 8 lambdas and method references in order to extract properties from the elements of an Iterable.

Let’s create a simple List with LocalDate objects:

List<LocalDate> givenList = asList(ofYearDay(2016, 5), ofYearDay(2015, 6));

Now we can easily check whether this List contains at least one LocalDate object with the year 2015:

assertThat(givenList)
  .flatExtracting(LocalDate::getYear)
  .contains(2015);

The flatExtracting method does not limit us to field extraction; we can provide it with any function:

assertThat(givenList)
  .flatExtracting(LocalDate::isLeapYear)
  .contains(true);

or even:

assertThat(givenList)
  .flatExtracting(Object::getClass)
  .contains(LocalDate.class);

You can also extract multiple properties at once:

assertThat(givenList)
  .flatExtracting(LocalDate::getYear, LocalDate::getDayOfMonth)
  .contains(2015, 6);

3.7. Satisfies Helper Method

The satisfies helper method allows you to quickly check whether an object satisfies all of the provided assertions.

Let’s create an example String instance:

String givenString = "someString";

and now we can provide assertions as a lambda body:

assertThat(givenString)
  .satisfies(s -> {
    assertThat(s).isNotEmpty();
    assertThat(s).hasSize(10);
  });

3.8. HasOnlyOneElementSatisfying Helper Method

The hasOnlyOneElementSatisfying helper method allows checking whether an Iterable instance contains exactly one element and whether that element satisfies the provided assertions.

Let’s create an example List:

List<String> givenList = Arrays.asList("");

and now you can assert:

assertThat(givenList)
  .hasOnlyOneElementSatisfying(s -> assertThat(s).isEmpty());

3.9. Matches Helper Method

The matches helper method allows checking whether a given object matches the given Predicate.

Let’s take an empty String:

String emptyString = "";

and now we can check its state by providing an adequate Predicate lambda:

assertThat(emptyString)
  .matches(String::isEmpty);

4. Conclusion

In this last article of the AssertJ series, we explored AssertJ’s advanced Java 8 features, which concludes the series.

The implementation of all the examples and code snippets can be found in the GitHub project.


Intro to Jedis – the Java Redis Client Library

1. Overview

This article is an introduction to Jedis, a Java client library for Redis – the popular in-memory data structure store that can also persist data on disk. Redis is built around a key-value data model and can be used as a database, cache, message broker, etc.

First we are going to explain in which kinds of situations Jedis is useful and what it is about. In the subsequent sections, we elaborate on the various data structures and explain transactions, pipelining and the publish/subscribe feature. We conclude with connection pooling and Redis Cluster.

2. Why Jedis?

Redis lists the most well-known client libraries on its official site. There are multiple alternatives to Jedis, but only two others are currently worthy of its recommendation star: lettuce and Redisson.

These two clients do have some unique features, like thread safety, transparent reconnection handling and an asynchronous API, all of which Jedis lacks. However, Jedis is small and considerably faster than the other two. Besides, it is the client library of choice of the Spring Framework developers, and it has the biggest community of all three.

3. Maven Dependencies

Let’s start by declaring the only dependency we will need in the pom.xml:

<dependency>
    <groupId>redis.clients</groupId>
    <artifactId>jedis</artifactId>
    <version>2.8.1</version>
</dependency>

If you’re looking for the latest version of the library, check out this page.

4. Redis Installation

You will need to install and fire up one of the latest versions of Redis. We are running the latest stable version at this moment (3.2.1), but any post-3.x version should be okay. You can find more information about Redis for Linux and macOS here; they have very similar basic installation steps. Windows is not officially supported, but this port is well maintained.

Thereafter we can directly dive in and connect to it from our Java code:

Jedis jedis = new Jedis();

The default constructor will work just fine unless you have started the service on a non-default port or on a remote machine, in which case you can configure it correctly by passing the correct values as parameters into the constructor.
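For instance, a minimal sketch of such a custom connection – the host name and port below are placeholders of our own, not values from this article:

// connect to a specific host and port instead of the default localhost:6379
Jedis jedis = new Jedis("redis.example.com", 6450);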

5. Redis Data Structures

Most of the native operation commands are supported and, conveniently enough, they normally share the same method name.

5.1. Strings

Strings are the most basic kind of Redis value, useful for when you need to persist simple key-value data types:

jedis.set("events/city/rome", "32,15,223,828");
String cachedResponse = jedis.get("events/city/rome");

The variable cachedResponse will hold the value 32,15,223,828. Coupled with expiration support, sketched below, this can work as a lightning-fast and simple-to-use cache layer for HTTP requests received by your web application, and for other caching requirements.
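As a small sketch of that expiration support, a value can be stored together with a time-to-live using setex; Redis then evicts the key automatically once the interval elapses:

// cache the response for 60 seconds
jedis.setex("events/city/rome", 60, "32,15,223,828");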

5.2. Lists

Redis Lists are simply lists of strings sorted by insertion order, which makes them an ideal tool to implement, for instance, message queues:

jedis.lpush("queue#tasks", "firstTask");
jedis.lpush("queue#tasks", "secondTask");

String task = jedis.rpop("queue#tasks");

The variable task will hold the value firstTask. Remember that you can serialize any object and persist it as a string, so messages in the queue can carry more complex data when required.

5.3. Sets

Redis Sets are an unordered collection of Strings that come in handy when you want to exclude repeated members:

jedis.sadd("nicknames", "nickname#1");
jedis.sadd("nicknames", "nickname#2");
jedis.sadd("nicknames", "nickname#1");

Set<String> nicknames = jedis.smembers("nicknames");
boolean exists = jedis.sismember("nicknames", "nickname#1");

The Java Set nicknames will have a size of 2, since the second addition of nickname#1 was ignored. Also, the exists variable will have a value of true; the sismember method enables you to quickly check for the existence of a particular member.

5.4. Hashes

Redis Hashes are maps between string fields and string values:

jedis.hset("user#1", "name", "Peter");
jedis.hset("user#1", "job", "politician");
		
String name = jedis.hget("user#1", "name");
		
Map<String, String> fields = jedis.hgetAll("user#1");
String job = fields.get("job");

As you can see, hashes are a very convenient data type when you want to access an object’s properties individually, since you do not need to retrieve the whole object.

5.5. Sorted Sets

Sorted Sets are like a Set in which each member has an associated score that is used for sorting:

Map<String, Double> scores = new HashMap<>();

scores.put("PlayerOne", 3000.0);
scores.put("PlayerTwo", 1500.0);
scores.put("PlayerThree", 8200.0);

for (String player : scores.keySet()) {
    jedis.zadd("ranking", scores.get(player), player);
}
		
String player = jedis.zrevrange("ranking", 0, 1).iterator().next();
long rank = jedis.zrevrank("ranking", "PlayerOne");

The variable player will hold the value PlayerThree because we are retrieving the top 1 player and he is the one with the highest score. The rank variable will have a value of 1 because PlayerOne is the second in the ranking and the ranking is zero-based.

6. Transactions

Transactions guarantee atomicity and thread safety, which means that requests from other clients will never be handled concurrently during a Redis transaction:

String friendsPrefix = "friends#";
String userOneId = "4352523";
String userTwoId = "5552321";

Transaction t = jedis.multi();
t.sadd(friendsPrefix + userOneId, userTwoId);
t.sadd(friendsPrefix + userTwoId, userOneId);
t.exec();

You can even make a transaction success dependent on a specific key by “watching” it right before you instantiate your Transaction:

jedis.watch("friends#deleted#" + userOneId);

If the value of that key changes before the transaction is executed, the transaction will not be completed successfully.
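A minimal sketch of the full pattern – relying on the standard Redis semantics where exec() returns null when a watched key was modified in the meantime:

jedis.watch("friends#deleted#" + userOneId);

Transaction t = jedis.multi();
t.sadd(friendsPrefix + userOneId, userTwoId);
List<Object> results = t.exec();

if (results == null) {
    // the watched key changed, so the transaction was aborted
}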

7. Pipelining

When we have to send multiple commands, we can pack them together in one request and save connection overhead by using pipelines; this is essentially a network optimization. As long as the operations are mutually independent, we can take advantage of this technique:

String userOneId = "4352523";
String userTwoId = "4849888";

Pipeline p = jedis.pipelined();
p.sadd("searched#" + userOneId, "paris");
p.zadd("ranking", 126, userOneId);
p.zadd("ranking", 325, userTwoId);
Response<Boolean> pipeExists = p.sismember("searched#" + userOneId, "paris");
Response<Set<String>> pipeRanking = p.zrange("ranking", 0, -1);
p.sync();

Boolean exists = pipeExists.get();
Set<String> ranking = pipeRanking.get();

Notice that we do not get direct access to the command responses; instead, we are given a Response instance from which we can request the underlying value after the pipeline has been synced.

8. Publish/Subscribe

We can use the Redis message broker functionality to send messages between the different components of our system. Just make sure the subscriber and publisher threads do not share the same Jedis connection.

8.1. Subscriber

Subscribe and listen to messages sent to a channel:

Jedis jSubscriber = new Jedis();
jSubscriber.subscribe(new JedisPubSub() {
    @Override
    public void onMessage(String channel, String message) {
        // handle message
    }
}, "channel");

subscribe is a blocking method; you will need to unsubscribe from the JedisPubSub explicitly. Here we have overridden the onMessage method, but there are many more useful methods available to override.
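For instance, keeping a reference to the listener lets another thread stop the subscription later – a minimal sketch:

final JedisPubSub listener = new JedisPubSub() {
    @Override
    public void onMessage(String channel, String message) {
        // handle message
    }
};

// subscribe blocks the thread it runs on until the listener unsubscribes
new Thread(() -> jSubscriber.subscribe(listener, "channel")).start();

// later, from any other thread
listener.unsubscribe();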

8.2. Publisher

Then simply send messages to that same channel from the publisher’s thread:

Jedis jPublisher = new Jedis();
jPublisher.publish("channel", "test message");

9. Connection Pooling

It is important to know that the way we have been dealing with our Jedis instance is naive. In a real world scenario you do not want to use a single instance in a multi-threaded environment as a single instance is not thread safe.

Luckily enough, we can easily create a pool of connections to Redis to reuse on demand – a pool that is thread-safe and reliable, as long as you return the resource to the pool when you are done with it.

Let’s create the JedisPool:

final JedisPoolConfig poolConfig = buildPoolConfig();
JedisPool jedisPool = new JedisPool(poolConfig, "localhost");

private JedisPoolConfig buildPoolConfig() {
    final JedisPoolConfig poolConfig = new JedisPoolConfig();
    poolConfig.setMaxTotal(128);
    poolConfig.setMaxIdle(128);
    poolConfig.setMinIdle(16);
    poolConfig.setTestOnBorrow(true);
    poolConfig.setTestOnReturn(true);
    poolConfig.setTestWhileIdle(true);
    poolConfig.setMinEvictableIdleTimeMillis(Duration.ofSeconds(60).toMillis());
    poolConfig.setTimeBetweenEvictionRunsMillis(Duration.ofSeconds(30).toMillis());
    poolConfig.setNumTestsPerEvictionRun(3);
    poolConfig.setBlockWhenExhausted(true);
    return poolConfig;
}

Since the pool instance is thread-safe, you can store it somewhere statically, but you should take care of destroying the pool, to avoid leaks, when the application is shut down.
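One way to do that, sketched here with a JVM shutdown hook (any other application lifecycle callback works just as well):

// release all pooled connections when the JVM shuts down
Runtime.getRuntime().addShutdownHook(new Thread(jedisPool::destroy));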

Now we can make use of our pool from anywhere in the application when needed:

try (Jedis jedis = jedisPool.getResource()) {
    // do operations with jedis resource
}

We used the Java try-with-resources statement to avoid having to close the Jedis resource manually, but if you cannot use this statement, you can also close the resource explicitly in a finally clause.
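A minimal sketch of that manual variant:

Jedis jedis = null;
try {
    jedis = jedisPool.getResource();
    // do operations with jedis resource
} finally {
    if (jedis != null) {
        jedis.close(); // returns the connection to the pool
    }
}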

Make sure you use a pool as described here in your application if you do not want to face nasty multi-threading issues. You can obviously tune the pool configuration parameters to find the best setup for your system.

10. Redis Cluster

This Redis implementation provides easy scalability and high availability; we encourage you to read the official specification if you are not familiar with it. We will not cover the Redis Cluster setup, since that is a bit out of scope for this article, but you should have no problems setting one up once you are done with the documentation.

Once we have that ready, we can start using it from our application:

try (JedisCluster jedisCluster = new JedisCluster(new HostAndPort("localhost", 6379))) {
    // use the jedisCluster resource as if it was a normal Jedis resource
} catch (IOException e) {
    // handle the connection error
}

We only need to provide the host and port details of one of our master instances; the client will auto-discover the rest of the instances in the cluster.

This is certainly a very powerful feature, but it is not a silver bullet. When using Redis Cluster, you cannot perform transactions or use pipelines – two important features on which many applications rely for ensuring data integrity.

Transactions are disabled because, in a clustered environment, keys are distributed across multiple instances. Operation atomicity and thread safety cannot be guaranteed for operations that involve command execution on different instances.

Some advanced key creation strategies ensure that data which you want to live in the same instance does get persisted that way. In theory, that should enable you to perform transactions successfully against one of the underlying Jedis instances of the Redis Cluster.

Unfortunately, you currently cannot use Jedis to find out in which Redis instance a particular key is saved (even though this is actually supported natively by Redis), so you do not know which of the instances you must perform the transaction operation against. If you are interested in this, you can find more information here.

11. Conclusion

The vast majority of the features from Redis are already available in Jedis and its development moves forward at a good pace.

It gives you the ability to integrate a powerful in-memory storage engine into your application with very little hassle; just do not forget to set up connection pooling to avoid thread-safety issues.

Using Couchbase in a Spring Application


1. Introduction

In this follow-up to our introduction to Couchbase, we create a set of Spring services that can be used together to create a basic persistence layer for a Spring application without the use of Spring Data.

2. Cluster Service

In order to satisfy the constraint that only a single CouchbaseEnvironment may be active in the JVM, we begin by writing a service that connects to a Couchbase cluster and provides access to data buckets without directly exposing either the Cluster or CouchbaseEnvironment instances.

2.1. Interface

Here is our ClusterService interface:

public interface ClusterService {
    Bucket openBucket(String name, String password);
}

2.2. Implementation

Our implementation class instantiates a DefaultCouchbaseEnvironment and connects to a cluster in a @PostConstruct method during Spring context initialization.

This ensures that the cluster is not null and that it is connected when the class is injected into other service classes, thus enabling them to open one or more data buckets:

@Service
public class ClusterServiceImpl implements ClusterService {
    private Cluster cluster;
    
    @PostConstruct
    private void init() {
        CouchbaseEnvironment env = DefaultCouchbaseEnvironment.create();
        cluster = CouchbaseCluster.create(env, "localhost");
    }
...
}

Next, we provide a ConcurrentHashMap to contain the open buckets and implement the openBucket method:

private Map<String, Bucket> buckets = new ConcurrentHashMap<>();

@Override
synchronized public Bucket openBucket(String name, String password) {
    if(!buckets.containsKey(name)) {
        Bucket bucket = cluster.openBucket(name, password);
        buckets.put(name, bucket);
    }
    return buckets.get(name);
}

3. Bucket Service

Depending on how you architect your application, you may need to provide access to the same data bucket in multiple Spring services.

If we merely attempt to open the same bucket in two or more services during application startup, the second service to attempt this is likely to encounter a ConcurrentTimeoutException.

To avoid this scenario, we define a BucketService interface and an implementation class per bucket. Each implementation class acts as a bridge between the ClusterService and the classes that need direct access to a particular Bucket.

3.1. Interface

Here is our BucketService interface:

public interface BucketService {
    Bucket getBucket();
}

3.2. Implementation

The following class provides access to the “baeldung-tutorial” bucket:

@Service
@Qualifier("TutorialBucketService")
public class TutorialBucketService implements BucketService {

    @Autowired
    private ClusterService couchbase;
    
    private Bucket bucket;
    
    @PostConstruct
    private void init() {
        bucket = couchbase.openBucket("baeldung-tutorial", "");
    }

    @Override
    public Bucket getBucket() {
        return bucket;
    }
}

By injecting the ClusterService in our TutorialBucketService implementation class and opening the bucket in a method annotated with @PostConstruct, we have ensured that the bucket will be ready for use when the TutorialBucketService is then injected into other services.

4. Persistence Layer

Now that we have a service in place to obtain a Bucket instance, we will create a repository-like persistence layer that provides CRUD operations for entity classes to other services without exposing the Bucket instance to them.

4.1. The Person Entity

Here is the Person entity class that we wish to persist:

public class Person {

    private String id;
    private String type;
    private String name;
    private String homeTown;

    // standard getters and setters
}

4.2. Converting Entity Classes To and From JSON

To convert entity classes to and from the JsonDocument objects that Couchbase uses in its persistence operations, we define the JsonDocumentConverter interface:

public interface JsonDocumentConverter<T> {
    JsonDocument toDocument(T t);
    T fromDocument(JsonDocument doc);
}

4.3. Implementing the JSON Converter

Next, we need to implement a JsonDocumentConverter for Person entities:

@Service
public class PersonDocumentConverter
  implements JsonDocumentConverter<Person> {
    ...
}

We could use the Jackson library in conjunction with the JsonObject class’s toJson and fromJson methods to serialize and deserialize the entities; however, there is additional overhead in doing so.
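For illustration, a rough sketch of what that Jackson-based alternative could look like – note that writeValueAsString throws a checked JsonProcessingException that would have to be handled:

ObjectMapper mapper = new ObjectMapper();

// serialize the entity with Jackson, then wrap the resulting JSON string
JsonObject content = JsonObject.fromJson(mapper.writeValueAsString(p));
JsonDocument document = JsonDocument.create(p.getId(), content);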

Instead, for the toDocument method, we’ll use the fluent methods of the JsonObject class to create and populate a JsonObject before wrapping it in a JsonDocument:

@Override
public JsonDocument toDocument(Person p) {
    JsonObject content = JsonObject.empty()
            .put("type", "Person")
            .put("name", p.getName())
            .put("homeTown", p.getHomeTown());
    return JsonDocument.create(p.getId(), content);
}

And for the fromDocument method, we’ll use the JsonObject class’s getString method along with the setters in the Person class:

@Override
public Person fromDocument(JsonDocument doc) {
    JsonObject content = doc.content();
    Person p = new Person();
    p.setId(doc.id());
    p.setType("Person");
    p.setName(content.getString("name"));
    p.setHomeTown(content.getString("homeTown"));
    return p;
}

4.4. CRUD Interface

We now create a generic CrudService interface that defines persistence operations for entity classes:

public interface CrudService<T> {
    void create(T t);
    T read(String id);
    T readFromReplica(String id);
    void update(T t);
    void delete(String id);
    boolean exists(String id);
}

4.5. Implementing the CRUD Service

With the entity and converter classes in place, we now implement the CrudService for the Person entity, injecting the bucket service and document converter shown above and retrieving the bucket during initialization:

@Service
public class PersonCrudService implements CrudService<Person> {
    
    @Autowired
    private TutorialBucketService bucketService;
    
    @Autowired
    private PersonDocumentConverter converter;
    
    private Bucket bucket;
    
    @PostConstruct
    private void init() {
        bucket = bucketService.getBucket();
    }

    @Override
    public void create(Person person) {
        if(person.getId() == null) {
            person.setId(UUID.randomUUID().toString());
        }
        JsonDocument document = converter.toDocument(person);
        bucket.insert(document);
    }

    @Override
    public Person read(String id) {
        JsonDocument doc = bucket.get(id);
        return (doc != null ? converter.fromDocument(doc) : null);
    }

    @Override
    public Person readFromReplica(String id) {
        List<JsonDocument> docs = bucket.getFromReplica(id, ReplicaMode.FIRST);
        return (docs.isEmpty() ? null : converter.fromDocument(docs.get(0)));
    }

    @Override
    public void update(Person person) {
        JsonDocument document = converter.toDocument(person);
        bucket.upsert(document);
    }

    @Override
    public void delete(String id) {
        bucket.remove(id);
    }

    @Override
    public boolean exists(String id) {
        return bucket.exists(id);
    }
}

5. Putting it All Together

Now that we have all of the pieces of our persistence layer in place, here’s a simple example of a registration service that uses the PersonCrudService to persist and retrieve registrants:

@Service
public class RegistrationService {

    @Autowired
    private PersonCrudService crud;
    
    public void registerNewPerson(String name, String homeTown) {
        Person person = new Person();
        person.setName(name);
        person.setHomeTown(homeTown);
        crud.create(person);
    }
    
    public Person findRegistrant(String id) {
        try{
            return crud.read(id);
        }
        catch(CouchbaseException e) {
            return crud.readFromReplica(id);
        }
    }
}

6. Conclusion

We have shown that with a few basic Spring services, it is fairly trivial to incorporate Couchbase into a Spring application and implement a basic persistence layer without using Spring Data.

The source code shown in this tutorial is available in the GitHub project.

You can learn more about the Couchbase Java SDK at the official Couchbase developer documentation site.


Intro to Spring Boot Starters


1. Overview

Dependency management is a critical aspect of any complex project. Doing this manually is less than ideal; the more time you spend on it, the less time you have for the other important aspects of the project.

Spring Boot starters were built to address exactly this problem. Starter POMs are a set of convenient dependency descriptors that you can include in your application. You get a one-stop shop for all the Spring and related technology that you need, without having to hunt through sample code and copy-paste loads of dependency descriptors.

We have more than 30 Boot starters available – let’s see some of them in the following sections.

2. The Web Starter

First, let’s look at developing a REST service; we could use libraries like Spring MVC, Tomcat and Jackson – a lot of dependencies for a single application.

Spring Boot starters can help to reduce the number of manually added dependencies. So instead of specifying the dependencies manually, just add one starter, as in the following example:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>

Now we can create a REST controller. For the sake of simplicity, we won’t use a database and will focus on the REST controller:

@RestController
public class GenericEntityController {
    private List<GenericEntity> entityList = new ArrayList<>();

    @RequestMapping("/entity/all")
    public List<GenericEntity> findAll() {
        return entityList;
    }

    @RequestMapping(value = "/entity", method = RequestMethod.POST)
    public GenericEntity addEntity(GenericEntity entity) {
        entityList.add(entity);
        return entity;
    }

    @RequestMapping("/entity/findby/{id}")
    public GenericEntity findById(@PathVariable Long id) {
        return entityList.stream().
                 filter(entity -> entity.getId().equals(id)).
                   findFirst().get();
    }
}

GenericEntity is a simple bean with an id of type Long and a value of type String.
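For reference, a minimal sketch of what such a bean might look like – the convenience constructor is our own assumption, matching the way the entity is instantiated in the JPA test later on:

public class GenericEntity {

    private Long id;
    private String value;

    public GenericEntity() {
    }

    public GenericEntity(String value) {
        this.value = value;
    }

    // standard getters and setters
}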

That’s it – with the application running, you can access http://localhost:8080/springbootapp/entity/all and check the controller is working.

We have created a REST application with quite minimal configuration.

3. The Test Starter

For testing, we usually use the following set of libraries: Spring Test, JUnit, Hamcrest and Mockito. We could include all of these libraries manually, but the Spring Boot starter can be used to automatically include them in the following way:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
</dependency>

Notice that you don’t need to specify the version number of an artifact. Spring Boot will figure out which version to use – all you need to specify is the version of the spring-boot-starter-parent artifact. If later on you need to upgrade the Boot library and dependencies, just upgrade the Boot version in one place and it will take care of the rest.

Let’s actually test the controller we created in the previous example.

There are two ways to test the controller:

  • Using the mock environment
  • Using the embedded Servlet container (like Tomcat or Jetty)

In this example we’ll use a mock environment:

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = Application.class)
@WebAppConfiguration
public class SpringBootApplicationTest {
    @Autowired
    private WebApplicationContext webApplicationContext;
    private MockMvc mockMvc;

    @Before
    public void setupMockMvc() {
        mockMvc = MockMvcBuilders.webAppContextSetup(webApplicationContext).build();
    }

    @Test
    public void givenRequestHasBeenMade_whenMeetsAllOfGivenConditions_thenCorrect()
      throws Exception { 
        MediaType contentType = new MediaType(MediaType.APPLICATION_JSON.getType(),
        MediaType.APPLICATION_JSON.getSubtype(), Charset.forName("utf8"));
        mockMvc.perform(MockMvcRequestBuilders.get("/entity/all")).
        andExpect(MockMvcResultMatchers.status().isOk()).
        andExpect(MockMvcResultMatchers.content().contentType(contentType)).
        andExpect(jsonPath("$", hasSize(4))); 
    } 
}

What is important here is that the @WebAppConfiguration annotation and MockMvc are part of the spring-test module, hasSize is a Hamcrest matcher, and @Before is a JUnit annotation. These are all available by importing this one starter dependency.

4. The Data JPA Starter

Most web applications have some sort of persistence – and that’s quite often JPA.

Instead of defining all of the associated dependencies manually – let’s go with the starter instead:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <scope>runtime</scope>
</dependency>

Notice that out of the box we have automatic support for at least the following databases: H2, Derby and HSQLDB. In our example, we’ll use H2.

Now let’s create the repository for our entity:

public interface GenericEntityRepository extends JpaRepository<GenericEntity, Long> {}

Time to test the code. Here is the JUnit test:

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = Application.class)
public class SpringBootJPATest {
    
    @Autowired
    private GenericEntityRepository genericEntityRepository;

    @Test
    public void givenGenericEntityRepository_whenSaveAndRetrieveEntity_thenOK() {
        GenericEntity genericEntity = 
          genericEntityRepository.save(new GenericEntity("test"));
        GenericEntity foundEntity = 
          genericEntityRepository.findOne(genericEntity.getId());
        
        assertNotNull(foundEntity);
        assertEquals(genericEntity.getValue(), foundEntity.getValue());
    }
}

We didn’t spend time specifying the database vendor, the connection URL or the credentials. No extra configuration is necessary, as we’re benefiting from the solid Boot defaults; but of course, all of these details can still be configured if necessary.

5. The Mail Starter

A very common task in enterprise development is sending email, and dealing directly with the Java Mail API can usually be difficult.

The Spring Boot starter hides this complexity – mail dependencies can be specified in the following way:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-mail</artifactId>
</dependency>

Now we can directly use the JavaMailSender, so let’s write some tests.

For testing purposes, we need a simple SMTP server. In this example, we’ll use Wiser. This is how we can include it in our POM:

<dependency>
    <groupId>org.subethamail</groupId>
    <artifactId>subethasmtp</artifactId>
    <version>3.1.7</version>
    <scope>test</scope>
</dependency>

The latest version of Wiser can be found in the Maven Central repository.

Here is the source code for the test:

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = Application.class)
public class SpringBootMailTest {
    @Autowired
    private JavaMailSender javaMailSender;

    private Wiser wiser;

    private String userTo = "user2@localhost";
    private String userFrom = "user1@localhost";
    private String subject = "Test subject";
    private String textMail = "Text subject mail";

    @Before
    public void setUp() throws Exception {
        final int TEST_PORT = 25;
        wiser = new Wiser(TEST_PORT);
        wiser.start();
    }

    @After
    public void tearDown() throws Exception {
        wiser.stop();
    }

    @Test
    public void givenMail_whenSendAndReceived_thenCorrect() throws Exception {
        SimpleMailMessage message = composeEmailMessage();
        javaMailSender.send(message);
        List<WiserMessage> messages = wiser.getMessages();

        assertThat(messages, hasSize(1));
        WiserMessage wiserMessage = messages.get(0);
        assertEquals(userFrom, wiserMessage.getEnvelopeSender());
        assertEquals(userTo, wiserMessage.getEnvelopeReceiver());
        assertEquals(subject, getSubject(wiserMessage));
        assertEquals(textMail, getMessage(wiserMessage));
    }

    private String getMessage(WiserMessage wiserMessage)
      throws MessagingException, IOException {
        return wiserMessage.getMimeMessage().getContent().toString().trim();
    }

    private String getSubject(WiserMessage wiserMessage) throws MessagingException {
        return wiserMessage.getMimeMessage().getSubject();
    }

    private SimpleMailMessage composeEmailMessage() {
        SimpleMailMessage mailMessage = new SimpleMailMessage();
        mailMessage.setTo(userTo);
        mailMessage.setReplyTo(userFrom);
        mailMessage.setFrom(userFrom);
        mailMessage.setSubject(subject);
        mailMessage.setText(textMail);
        return mailMessage;
    }
}

In the test, the @Before and @After methods are in charge of starting and stopping the mail server.

Notice that we’re wiring in the JavaMailSender bean – the bean was automatically created by Spring Boot.

Just like any other defaults in Boot, the email settings for the JavaMailSender can be customized in application.properties:

spring.mail.host=localhost
spring.mail.port=25
spring.mail.properties.mail.smtp.auth=false

So we configured the mail server on localhost:25 and we didn’t require authentication.

6. Conclusion

In this article we have given an overview of starters, explained why we need them and provided examples of how to use them in your projects.

Let’s recap the benefits of using Spring Boot starters:

  • increased POM manageability
  • production-ready, tested and supported dependency configurations
  • decreased overall configuration time for the project

The current list of starters can be found here. Source code for the examples can be found here.


What’s New in Spring 4.3?


1. Overview

The Spring 4.3 release brought some nice refinements to the core container, caching, JMS, Web MVC and testing submodules of the framework.

In this post we will discuss a few of these improvements, including:

  • Implicit Constructor Injection
  • Java 8 Default Interface Methods Support
  • Improved Resolution of Dependencies
  • Cache Abstraction Refinements
  • Composed @RequestMapping Variants
  • @RequestScope, @SessionScope, @ApplicationScope annotations
  • @RequestAttribute and @SessionAttribute annotations
  • Libraries/Application Servers Versions Support

2. Implicit Constructor Injection

Consider the following service class:

@Service
public class FooService {

    private final FooRepository repository;

    @Autowired
    public FooService(FooRepository repository) {
        this.repository = repository;
    }
}

Quite a common use case, but if you forget the @Autowired annotation on the constructor, the container will throw an exception looking for a default constructor, unless you explicitly do the wiring.

So as of 4.3, you no longer need to specify an explicit injection annotation in such a single-constructor scenario. This is particularly elegant for classes which do not carry any annotations at all:

public class FooService {

    private final FooRepository repository;

    public FooService(FooRepository repository) {
        this.repository = repository;
    }
}

In Spring 4.2 and below, the following configuration for this bean will not work, because Spring will not be able to find a default constructor for FooService. Spring 4.3 is more clever and will autowire the constructor automatically:

<beans>
    <bean class="com.baeldung.spring43.ctor.FooRepository" />
    <bean class="com.baeldung.spring43.ctor.FooService" />
</beans>

Similarly, you may have noticed that @Configuration classes historically did not support constructor injection. As of 4.3, they do and they obviously allow omitting @Autowired in a single-constructor scenario as well:

@Configuration
public class FooConfiguration {

    private final FooRepository repository;

    public FooConfiguration(FooRepository repository) {
        this.repository = repository;
    }

    @Bean
    public FooService fooService() {
        return new FooService(this.repository);
    }
}

3. Java 8 Default Interface Methods Support

Before Spring 4.3, default interface methods were not supported.

This was not easy to implement, because even the JDK’s own JavaBean introspector did not detect default methods as accessors. Since Spring 4.3, getters and setters implemented as default interface methods are detected during injection, which allows you to use them, for instance, as common preprocessors for accessed properties, as in this example:

public interface IDateHolder {

    void setLocalDate(LocalDate localDate);

    LocalDate getLocalDate();

    default void setStringDate(String stringDate) {
        setLocalDate(LocalDate.parse(stringDate, 
          DateTimeFormatter.ofPattern("dd.MM.yyyy")));
    }

}

This bean may now have the stringDate property injected:

<bean id="dateHolder" 
  class="com.baeldung.spring43.defaultmethods.DateHolder">
    <property name="stringDate" value="15.10.1982"/>
</bean>

The same goes for using test annotations like @BeforeTransaction and @AfterTransaction on default interface methods. JUnit 5 already supports test annotations on default interface methods, and Spring 4.3 follows its lead. Now you can abstract common testing logic in an interface and implement it in test classes. Here is an interface for test cases that logs messages before and after transactions:

public interface ITransactionalTest {

    Logger log = LoggerFactory.getLogger(ITransactionalTest.class);

    @BeforeTransaction
    default void beforeTransaction() {
        log.info("Before opening transaction");
    }

    @AfterTransaction
    default void afterTransaction() {
        log.info("After closing transaction");
    }

}

Another improvement concerning the @BeforeTransaction, @AfterTransaction and @Transactional annotations is the relaxation of the requirement that annotated methods be public – now they may have any visibility level.

4. Improved Resolution of Dependencies

The newest version also introduces ObjectProvider, an extension of the existing ObjectFactory interface with handy signatures such as getIfAvailable and getIfUnique, which retrieve a bean only if it actually exists or if a single candidate can be determined (in particular, a primary candidate in the case of multiple matching beans):

@Service
public class FooService {

    private final FooRepository repository;

    public FooService(ObjectProvider<FooRepository> repositoryProvider) {
        this.repository = repositoryProvider.getIfUnique();
    }
}

You may use such an ObjectProvider handle for custom resolution purposes during initialization, as shown above, or store the handle in a field for late on-demand resolution (as you typically would with an ObjectFactory).
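A minimal sketch of that field-based variant – the deferred lookup in doSomething is our own illustration, not code from the framework:

@Service
public class LazyFooService {

    private final ObjectProvider<FooRepository> repositoryProvider;

    public LazyFooService(ObjectProvider<FooRepository> repositoryProvider) {
        this.repositoryProvider = repositoryProvider;
    }

    public void doSomething() {
        // resolve the dependency only when it is actually needed
        FooRepository repository = repositoryProvider.getIfAvailable();
        if (repository != null) {
            // work with the repository
        }
    }
}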

5. Cache Abstraction Refinements

The cache abstraction is mainly used to cache values that are CPU- and/or IO-intensive to compute. In certain use cases, a given key may be requested by several threads (i.e. clients) in parallel, especially on startup. Synchronized cache support is a long-requested feature that has now been implemented. Assume the following:

@Service
public class FooService {

    @Cacheable(cacheNames = "foos", sync = true)
    public Foo getFoo(String id) { ... }

}

Notice the sync = true attribute, which tells the framework to block any concurrent threads while the value is being computed. This makes sure that the intensive operation is invoked only once in the case of concurrent access.

Spring 4.3 also improves the caching abstraction as follows:

  • SpEL expressions in cache-related annotations can now refer to beans (i.e. @beanName.method()).
  • ConcurrentMapCacheManager and ConcurrentMapCache now support the serialization of cache entries via a new storeByValue attribute.
  • @Cacheable, @CacheEvict, @CachePut, and @Caching may now be used as meta-annotations to create custom composed annotations with attribute overrides (see the sketch below).
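As a sketch of that last point, a hypothetical composed annotation of our own, FooCacheable, could bundle the attributes used earlier:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// reusable shorthand for @Cacheable(cacheNames = "foos", sync = true)
@Cacheable(cacheNames = "foos", sync = true)
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface FooCacheable {
}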

6. Composed @RequestMapping Variants

Spring Framework 4.3 introduces the following method-level composed variants of the @RequestMapping annotation that help to simplify mappings for common HTTP methods and better express the semantics of the annotated handler method.

  • @GetMapping
  • @PostMapping
  • @PutMapping
  • @DeleteMapping
  • @PatchMapping

For example, @GetMapping is a shorter form of saying @RequestMapping(method = RequestMethod.GET). The following example shows an MVC controller that has been simplified with a composed @GetMapping annotation.

@Controller
@RequestMapping("/appointments")
public class AppointmentsController {

    private final AppointmentBook appointmentBook;

    @Autowired
    public AppointmentsController(AppointmentBook appointmentBook) {
        this.appointmentBook = appointmentBook;
    }

    @GetMapping
    public Map<String, Appointment> get() {
        return appointmentBook.getAppointmentsForToday();
    }
}

7. @RequestScope, @SessionScope, @ApplicationScope annotations

When using annotation-driven components or Java Config, the @RequestScope, @SessionScope and @ApplicationScope annotations can be used to assign a component to the required scope. These annotations not only set the scope of the bean, but also set the scoped proxy mode to ScopedProxyMode.TARGET_CLASS.

TARGET_CLASS mode means that CGLIB proxy will be used for proxying of this bean and ensuring that it can be injected in any other bean, even with a broader scope. TARGET_CLASS mode allows proxying not only for interfaces, but for classes too.

@RequestScope
@Component
public class LoginAction {
    // ...
}

@SessionScope
@Component
public class UserPreferences {
    // ...
}

@ApplicationScope
@Component
public class AppPreferences {
    // ...
}

8. @RequestAttribute and @SessionAttribute annotations

Two more annotations for injecting HTTP request parameters into controller methods have appeared, namely @RequestAttribute and @SessionAttribute. They allow you to access pre-existing attributes that are managed globally (i.e. outside the controller). The values for these attributes may be provided, for instance, by registered instances of javax.servlet.Filter or org.springframework.web.servlet.HandlerInterceptor.

Suppose we have registered the following HandlerInterceptor implementation that parses the request and adds a login attribute to the session as well as a query attribute to the request:

public class ParamInterceptor extends HandlerInterceptorAdapter {

    @Override
    public boolean preHandle(HttpServletRequest request, 
      HttpServletResponse response, Object handler) throws Exception {
        request.getSession().setAttribute("login", "john");
        request.setAttribute("query", "invoices");
        return super.preHandle(request, response, handler);
    }

}

Such attributes may then be injected into a controller method with the corresponding annotations on its arguments:

@GetMapping
public String get(@SessionAttribute String login, 
  @RequestAttribute String query) {
    return String.format("login = %s, query = %s", login, query);
}

9. Libraries/Application Servers Versions Support

Spring 4.3 supports the following library versions and server generations:

  • Hibernate ORM 5.2 (still supporting 4.2/4.3 and 5.0/5.1 as well, with 3.6 deprecated now)
  • Jackson 2.8 (minimum raised to Jackson 2.6+ as of Spring 4.3)
  • OkHttp 3.x (still supporting OkHttp 2.x side by side)
  • Netty 4.1
  • Undertow 1.4
  • Tomcat 8.5.2 as well as 9.0 M6

Furthermore, Spring 4.3 embeds the updated ASM 5.1 and Objenesis 2.4 in spring-core.jar.

10. Conclusion

In this article we discussed some of the new features introduced with Spring 4.3.

We’ve covered useful annotations that eliminate boilerplate, new helpful methods of dependency lookup and injection and several important improvements within web and caching facilities.

You can find the source code for the article on GitHub.


Mutation Testing with PITest


1. Overview

Software testing refers to the techniques used to assess the functionality of a software application. In this article, we’re going to discuss some of the metrics used in the software testing industry, such as code coverage and mutation testing, with particular interest in how to perform a mutation test using the PITest library.

For the sake of simplicity, we’re going to base this demonstration on a basic palindrome function – Note that a palindrome is a string that reads the same backward and forward.

2. Maven Dependencies

As you can see in the Maven dependencies configuration, we will use JUnit to run our tests and the PITest library to introduce mutants into our code – don’t worry, we will see in a second what a mutant is. You can always look up the latest dependency version in the Maven Central repository by following this link.

<dependency>
    <groupId>org.pitest</groupId>
    <artifactId>pitest-parent</artifactId>
    <version>1.1.10</version>
    <type>pom</type>
</dependency>

In order to have the PITest library up and running, we also need to include the pitest-maven plugin in our pom.xml configuration file:

<plugin>
    <groupId>org.pitest</groupId>
    <artifactId>pitest-maven</artifactId>
    <version>1.1.10</version>
    <configuration>
        <targetClasses>
            <param>com.baeldung.testing.mutation.*</param>
        </targetClasses>
        <targetTests>
            <param>com.baeldung.mutation.test.*</param>
        </targetTests>
    </configuration>
</plugin>

3. Project Setup

Now that we have our Maven dependencies configured, let’s have a look at this self-explanatory palindrome function:

public boolean isPalindrome(String inputString) {
    if (inputString.length() == 0) {
        return true;
    } else {
        char firstChar = inputString.charAt(0);
        char lastChar = inputString.charAt(inputString.length() - 1);
        String mid = inputString.substring(1, inputString.length() - 1);
        return (firstChar == lastChar) && isPalindrome(mid);
    }
}

All we need now is a simple JUnit test to make sure that our implementation works in the desired way:

@Test
public void acceptsPalindrome() {
    Palindrome palindromeTester = new Palindrome();
    assertTrue(palindromeTester.isPalindrome("noon"));
}

So far so good, we are ready to run our test case successfully as a JUnit test.

Next in this article, we are going to focus on code and mutation coverage using the PITest library.

4. Code Coverage

Code coverage has been used extensively in the software industry to measure what percentage of the execution paths is exercised during automated tests.

We can measure the effective code coverage based on execution paths using tools like EclEmma, available for the Eclipse IDE.

After running TestPalindrome with code coverage, we can easily achieve a 100% coverage score – note that isPalindrome is recursive, so the empty-input length check is obviously going to be covered anyway.

Unfortunately, code coverage metrics can sometimes be quite ineffective, because a 100% code coverage score only means that all lines were exercised at least once; it says nothing about the accuracy of the tests or the completeness of the use cases. That’s why mutation testing actually matters.

5. Mutation Coverage

Mutation testing is a testing technique used to improve the adequacy of tests and to identify defects in code. The idea is to change the production code dynamically and cause the tests to fail.

Good tests shall fail

Each change in the code is called a mutant, and it results in an altered version of the program, called a mutation.

We say that the mutation is killed if it causes a test to fail. We also say that the mutation survived if the mutant could not affect the tests’ behavior.

Now let’s run the test using Maven, with the goal option set to: org.pitest:pitest-maven:mutationCoverage.

We can check the reports in HTML format in the target/pit-reports/YYYYMMDDHHMM directory:

  • 100% line coverage: 7/7
  • 63% mutation coverage: 5/8

Clearly, our test sweeps across all the execution paths; thus, the line coverage score is 100%. On the other hand, the PITest library introduced 8 mutants: 5 of them were killed – i.e. caused a failure – but 3 survived.

We can check the com.baeldung.testing.mutation/Palindrome.java.html report for more details about the mutants created.

These are the mutators active by default when running a mutation coverage test:

  • INCREMENTS_MUTATOR
  • VOID_METHOD_CALL_MUTATOR
  • RETURN_VALS_MUTATOR
  • MATH_MUTATOR
  • NEGATE_CONDITIONALS_MUTATOR
  • INVERT_NEGS_MUTATOR
  • CONDITIONALS_BOUNDARY_MUTATOR

For more details about the PITest mutators, you can check the official documentation page.

Our mutation coverage score reflects the lack of test cases, as we can’t make sure that our palindrome function rejects non-palindromic and near-palindromic string inputs.

6. Improve the Mutation Score

Now that we know what a mutation is, we need to improve our mutation score by killing the surviving mutants.

Let’s take the first mutation – negated conditional – on line 6 as an example. The mutant survived because even if we change the code snippet:

if (inputString.length() == 0) {
    return true;
}

To:

if (inputString.length() != 0) {
    return true;
}

The test will still pass, and that’s why the mutation survived. The idea is to implement a new test that will fail if the mutant is introduced. The same can be done for the remaining mutants:

@Test
public void rejectsNonPalindrome() {
    Palindrome palindromeTester = new Palindrome();
    assertFalse(palindromeTester.isPalindrome("box"));
}

@Test
public void rejectsNearPalindrome() {
    Palindrome palindromeTester = new Palindrome();
    assertFalse(palindromeTester.isPalindrome("neon"));
}

Now we can run our tests again using the mutation coverage plugin, to make sure that all the mutations were killed, as we can see in the PITest report generated in the target directory:

  • 100% line coverage: 7/7
  • 100% mutation coverage: 8/8

7. PITest Tests Configuration

Mutation testing may sometimes be resource-intensive, so we need to put proper configuration in place to keep the tests effective. We can make use of the targetClasses tag to define the list of classes to be mutated. Mutation testing cannot be applied to all the classes of a real-world project, as that would be time-consuming and resource-critical.

It is also important to define the mutators you plan to use during mutation testing, in order to minimize the computing resources needed to perform the tests:

<configuration>
    <targetClasses>
        <param>com.baeldung.testing.mutation.*</param>
    </targetClasses>
    <targetTests>
        <param>com.baeldung.mutation.test.*</param>
    </targetTests>
    <mutators>
        <mutator>CONSTRUCTOR_CALLS</mutator>
        <mutator>VOID_METHOD_CALLS</mutator>
        <mutator>RETURN_VALS</mutator>
        <mutator>NON_VOID_METHOD_CALLS</mutator>
    </mutators>
</configuration>

Moreover, the PITest library offers a variety of options to customize your testing strategies; for example, you can specify the maximum number of mutants introduced per class using the maxMutationsPerClass option. More details about the PITest options can be found in the official Maven quickstart guide.

8. Conclusion

Note that code coverage is still an important metric, but sometimes it is not sufficient to guarantee well-tested code. So in this article, we’ve walked through mutation testing as a more sophisticated way to ensure test quality and endorse test cases, using the PITest library.

We have also seen how to analyze a basic PITest report while improving the mutation coverage score.

Even though mutation testing reveals defects in code, it should be used wisely, because it is an extremely costly and time-consuming process.

You can check out the examples provided in this article in the linked GitHub project.


Java Web Weekly, Issue 133


At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> Stagnation with Java EE 8: Can the Java Community Make a Difference? [infoq.com]

The Java EE story unfolding.

>> JUnit 5 M1 [marcphilipp.de]

The official announcement of the very first milestone of JUnit 5. Good stuff – it looks like it’s time to give this one a real try.

>> JUnit 5 – Dynamic Tests [codefx.org]

And speaking of the first milestone of JUnit 5, Nicolai is of course continuing to explore the very cool aspects of JUnit. This one is about the concept of a dynamic test, which can potentially be quite powerful.

>> How to Get Started with Java Machine Learning [takipi.com]

A quick overview of machine learning in the Java ecosystem. It’s a bit of a shame that Mahout has been abandoned in this space, as it was showing some promise a few years back.

>> Robot Framework Tutorial 2016 – Integration with Jenkins [codecentric.de]

Solid intro to setting up the Robot Framework with a new Jenkins 2.0 server from scratch. Definitely one to bookmark for if/when you have to do that.

>> The best way to detect database connection leaks [vladmihalcea.com]

DB connection leaks are never fun and almost never easy to find and solve. This is a good introduction to how you may handle that.

>> Stackoverflow: 7 of the Best Java Answers That You Haven’t Seen [takipi.com]

A fun and interesting idea – having a look at a few of the top questions about Java on StackOverflow.

Some of these brought me right back to the Operating Systems course in college 🙂

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

>> Jepsen: VoltDB 6.3 [aphyr.com]

Another deep analysis of a persistence solution that doesn’t quite do what it advertises yet, but is on the right track to getting there.

These kinds of in-depth dives into the low-level problems of a persistence implementation are priceless for really learning about the mindset, the approach and the practical way of looking at things when it comes to persistence.

>> The REST Report [tbray.org]

Some thoughts about REST in the ecosystem today.

>> How Collaboration Humanizes the Enterprise [daedtech.com]

This piece is both a bit sad and a bit encouraging.

Organizations do inherently run into well-understood issues with scale, but it’s still so very rare to find an organization that is successfully navigating these issues and not just patching them up.

>> For those of you thinking about moving into the test automation field [ontestautomation.com]

Advice worth reading even if you’re not planning to do test automation full time but simply in parallel to your day to day work.

>> Benefits of Serverless Architectures [martinfowler.com]

Continuing the discussion, this piece has a look at the benefits and reasoning behind a Serverless architecture.

>> Baeldung Q2 2016 Report [meta.baeldung.com]

And the quarterly inside look at the numbers of baeldung.com is out.

If you didn’t know, I publish this report regularly giving curious readers an inside look into how the site is doing.

Also worth reading:

3. Comics

And my favorite Dilberts of the week:

>> You are entitled to a cubicle that is three inches wider [dilbert.com]

>> The extreme level of abstraction [dilbert.com]

>> Hubris? What’s that [dilbert.com]

4. Pick of the Week

>> Getting less done [signalvnoise.com]


3 Common Hibernate Performance Issues and How to Find Them in Your Log File


1. Introduction

You’ve probably read some of the complaints about bad Hibernate performance or maybe you’ve struggled with some of them yourself. I have been using Hibernate for more than 15 years now and I have run into more than enough of these issues.

Over the years, I’ve learned that these problems can be avoided and that you can find a lot of them in your log file. In this post, I want to show you how you can find and fix 3 of them.

If you like this post, make sure to read to the end of it to download a free cheat sheet with even more Hibernate performance tuning tips.

2. Find and Fix Performance Problems

2.1. Log SQL Statements in Production

The first performance issue is extremely easy to spot and often ignored. It’s the logging of SQL statements in a production environment.

Writing some log statements doesn’t sound like a big deal, and there are a lot of applications out there which do exactly that. But it is extremely inefficient, especially via System.out.println, which is how Hibernate does it if you set the show_sql parameter in your Hibernate configuration to true:

Hibernate: select 
    order0_.id as id1_2_, 
    order0_.orderNumber as orderNum2_2_, 
    order0_.version as version3_2_ 
  from purchaseOrder order0_
Hibernate: select 
    items0_.order_id as order_id4_0_0_, 
    items0_.id as id1_0_0_, 
    items0_.id as id1_0_1_, 
    items0_.order_id as order_id4_0_1_, 
    items0_.product_id as product_5_0_1_, 
    items0_.quantity as quantity2_0_1_, 
    items0_.version as version3_0_1_ 
  from OrderItem items0_ 
  where items0_.order_id=?
Hibernate: select 
    items0_.order_id as order_id4_0_0_, 
    items0_.id as id1_0_0_, 
    items0_.id as id1_0_1_, 
    items0_.order_id as order_id4_0_1_, 
    items0_.product_id as product_5_0_1_, 
    items0_.quantity as quantity2_0_1_, 
    items0_.version as version3_0_1_ 
  from OrderItem items0_ 
  where items0_.order_id=?
Hibernate: select 
    items0_.order_id as order_id4_0_0_, 
    items0_.id as id1_0_0_, 
    items0_.id as id1_0_1_, 
    items0_.order_id as order_id4_0_1_, 
    items0_.product_id as product_5_0_1_, 
    items0_.quantity as quantity2_0_1_, 
    items0_.version as version3_0_1_ 
  from OrderItem items0_ 
  where items0_.order_id=?

In one of my projects, I improved the performance by 20% within a few minutes by setting show_sql to false. That’s the kind of achievement you like to report in the next stand-up meeting 🙂

It’s pretty obvious how you can fix this performance issue. Just open your configuration (e.g. your persistence.xml file) and set the show_sql parameter to false. You don’t need this information in production anyway.
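
If you create your EntityManagerFactory programmatically, the same property can be set in code. Here's a minimal sketch (the persistence unit name "my-pu" is just a placeholder):

Map<String, String> props = new HashMap<>();
// don't write SQL statements to System.out in production
props.put("hibernate.show_sql", "false");

EntityManagerFactory emf = Persistence.createEntityManagerFactory("my-pu", props);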

But you might need this information during development. If you don’t use 2 different Hibernate configurations (which you shouldn’t), you deactivated the SQL statement logging there as well. The solution is to use 2 different log configurations for development and production which are optimized for the specific requirements of each runtime environment.

Development Configuration

The development configuration should provide as much useful information as possible so that you can see how Hibernate interacts with the database. You should therefore at least log the generated SQL statements in your development configuration. You can do this by activating DEBUG messages for the org.hibernate.SQL category. If you also want to see the values of your bind parameters, you have to set the log level of org.hibernate.type.descriptor.sql to TRACE:

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{HH:mm:ss,SSS} %-5p [%c] - %m%n
log4j.rootLogger=info, stdout

# basic log level for all messages
log4j.logger.org.hibernate=info

# SQL statements and parameters
log4j.logger.org.hibernate.SQL=debug
log4j.logger.org.hibernate.type.descriptor.sql=trace

The following code snippet shows some example log messages which Hibernate writes with this log configuration. As you can see, you get detailed information about the executed SQL query and all set and retrieved parameter values:

23:03:22,246 DEBUG SQL:92 - select 
    order0_.id as id1_2_, 
    order0_.orderNumber as orderNum2_2_, 
    order0_.version as version3_2_ 
  from purchaseOrder order0_ 
  where order0_.id=1
23:03:22,254 TRACE BasicExtractor:61 - extracted value ([id1_2_] : [BIGINT]) - [1]
23:03:22,261 TRACE BasicExtractor:61 - extracted value ([orderNum2_2_] : [VARCHAR]) - [order1]
23:03:22,263 TRACE BasicExtractor:61 - extracted value ([version3_2_] : [INTEGER]) - [0]

Hibernate provides you with a lot more internal information about a Session if you activate the Hibernate statistics. You can do this by setting the system property hibernate.generate_statistics to true.
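
If you prefer to switch the statistics on programmatically, e.g. in an integration test, a minimal sketch (assuming you can unwrap Hibernate's SessionFactory from your EntityManagerFactory) looks like this:

// unwrap the Hibernate SessionFactory from the JPA EntityManagerFactory
SessionFactory sessionFactory = entityManagerFactory.unwrap(SessionFactory.class);

// enable statistics collection at runtime
sessionFactory.getStatistics().setStatisticsEnabled(true);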

But please, only activate the statistics in your development or test environment. Collecting all this information slows down your application, and you might create performance problems yourself if you activate it in production.

You can see some example statistics in the following code snippet:

23:04:12,123 INFO StatisticalLoggingSessionEventListener:258 - Session Metrics {
 23793 nanoseconds spent acquiring 1 JDBC connections;
 0 nanoseconds spent releasing 0 JDBC connections;
 394686 nanoseconds spent preparing 4 JDBC statements;
 2528603 nanoseconds spent executing 4 JDBC statements;
 0 nanoseconds spent executing 0 JDBC batches;
 0 nanoseconds spent performing 0 L2C puts;
 0 nanoseconds spent performing 0 L2C hits;
 0 nanoseconds spent performing 0 L2C misses;
 9700599 nanoseconds spent executing 1 flushes (flushing a total of 9 entities and 3 collections);
 42921 nanoseconds spent executing 1 partial-flushes (flushing a total of 0 entities and 0 collections)
}

I regularly use these statistics in my daily work to find performance issues before they occur in production and I could write several posts just about that. So let’s just focus on the most important ones.

Lines 2 to 5 show you how many JDBC connections and statements Hibernate used during this session and how much time it spent on them. You should always have a look at these values and compare them with your expectations.

If there are a lot more statements than you expected, you most likely have the most common performance problem, the n+1 select issue. You can find it in nearly all applications, and it might create huge performance issues on a bigger database. I explain this issue in more detail in the next section.

Lines 7 to 9 show how Hibernate interacted with the 2nd level cache. This is one of Hibernate’s 3 caches, and it stores entities in a session-independent way. If you use the 2nd level cache in your application, you should always monitor these statistics to see if Hibernate gets the entities from there.
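
You don't have to scrape these numbers from the log either. The following sketch (assuming access to the same SessionFactory) reads the counters via Hibernate's Statistics API:

Statistics stats = sessionFactory.getStatistics();

// compare the number of statements with your expectations
log.info("Prepared JDBC statements: " + stats.getPrepareStatementCount());

// monitor how often the 2nd level cache actually answers
log.info("2nd level cache hits: " + stats.getSecondLevelCacheHitCount());
log.info("2nd level cache misses: " + stats.getSecondLevelCacheMissCount());
log.info("2nd level cache puts: " + stats.getSecondLevelCachePutCount());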

Production Configuration

The production configuration should be optimized for performance and avoid any messages that are not urgently required. In general, that means that you should only log error messages. If you use Log4j, you can achieve that with the following configuration:

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{HH:mm:ss,SSS} %-5p [%c] - %m%n
log4j.rootLogger=info, stdout

# basic log level for all messages
log4j.logger.org.hibernate=error

2.2. N+1 Select Issue

As I already explained, the n+1 select issue is the most common performance problem. A lot of developers blame the OR-mapping concept for this issue, and they’re not completely wrong. But you can easily avoid it if you understand how Hibernate treats lazily fetched relationships. The developer is, therefore, to blame as well because it’s his responsibility to avoid these kinds of issues. So let me first explain why this issue exists and then show you an easy way to prevent it. If you are already familiar with the n+1 select issue, you can jump directly to the solution.

Hibernate provides a very convenient mapping for relationships between entities. You just need an attribute with the type of the related entity and a few annotations to define it:

@Entity
@Table(name = "purchaseOrder")
public class Order implements Serializable {

    @OneToMany(mappedBy = "order", fetch = FetchType.LAZY)
    private Set<OrderItem> items = new HashSet<OrderItem>();

    ...
}

When you now load an Order entity from the database, you just need to call the getItems() method to get all items of this order. Hibernate hides the required database queries to get the related OrderItem entities from the database.

When you started with Hibernate, you probably learned that you should use FetchType.LAZY for most of the relationships and that it’s the default for to-many relationships. This tells Hibernate to only fetch the related entities if you use the attribute which maps the relationship. Fetching only the data you need is a good thing in general, but it also requires Hibernate to execute an additional query to initialize each relationship. This can result in a huge number of queries, if you work on a list of entities, like I do in the following code snippet:

List<Order> orders = em.createQuery("SELECT o FROM Order o").getResultList();
for (Order order : orders) {
    log.info("Order: " + order.getOrderNumber());
    log.info("Number of items: " + order.getItems().size());
}

You probably wouldn’t expect that these few lines of code can create hundreds or even thousands of database queries. But that’s what happens if you use FetchType.LAZY for the relationship to the OrderItem entity:

22:47:30,065 DEBUG SQL:92 - select 
    order0_.id as id1_2_, 
    order0_.orderNumber as orderNum2_2_, 
    order0_.version as version3_2_ 
  from purchaseOrder order0_
22:47:30,136 INFO NamedEntityGraphTest:58 - Order: order1
22:47:30,140 DEBUG SQL:92 - select 
    items0_.order_id as order_id4_0_0_, 
    items0_.id as id1_0_0_, 
    items0_.id as id1_0_1_, 
    items0_.order_id as order_id4_0_1_, 
    items0_.product_id as product_5_0_1_, 
    items0_.quantity as quantity2_0_1_, 
    items0_.version as version3_0_1_ 
  from OrderItem items0_ 
  where items0_.order_id=?
22:47:30,171 INFO NamedEntityGraphTest:59 - Number of items: 2
22:47:30,171 INFO NamedEntityGraphTest:58 - Order: order2
22:47:30,172 DEBUG SQL:92 - select 
    items0_.order_id as order_id4_0_0_, 
    items0_.id as id1_0_0_, 
    items0_.id as id1_0_1_, 
    items0_.order_id as order_id4_0_1_, 
    items0_.product_id as product_5_0_1_, 
    items0_.quantity as quantity2_0_1_, 
    items0_.version as version3_0_1_ 
  from OrderItem items0_ 
  where items0_.order_id=?
22:47:30,174 INFO NamedEntityGraphTest:59 - Number of items: 2
22:47:30,174 INFO NamedEntityGraphTest:58 - Order: order3
22:47:30,174 DEBUG SQL:92 - select 
    items0_.order_id as order_id4_0_0_, 
    items0_.id as id1_0_0_, 
    items0_.id as id1_0_1_, 
    items0_.order_id as order_id4_0_1_, 
    items0_.product_id as product_5_0_1_, 
    items0_.quantity as quantity2_0_1_, 
    items0_.version as version3_0_1_ 
  from OrderItem items0_ 
  where items0_.order_id=?
22:47:30,176 INFO NamedEntityGraphTest:59 - Number of items: 2

Hibernate performs one query to get all Order entities and an additional one for each of the n Order entities to initialize the orderItem relationship. So now you know why this kind of issue is called the n+1 select issue and why it can create huge performance problems.

What makes it even worse is that you often don’t recognize it on a small test database if you don’t check your Hibernate statistics. The code snippet requires only a few dozen queries if the test database doesn’t contain a lot of orders. But that will be completely different if you use your production database, which contains several thousand of them.

I said earlier that you can easily avoid these issues. And that’s true. You just have to initialize the orderItem relationship when you select the Order entities from the database.

But please, only do that if you use the relationship in your business code, and don’t use FetchType.EAGER to always fetch the related entities. That just replaces your n+1 issue with another performance problem.

Initialize Relationships with a @NamedEntityGraph

There are several different options to initialize relationships. I prefer to use a @NamedEntityGraph, which is one of my favorite features introduced in JPA 2.1. It provides a query-independent way to specify a graph of entities which Hibernate shall fetch from the database. In the following code snippet, you can see an example of a simple graph that lets Hibernate eagerly fetch the items attribute of an entity:

@Entity
@Table(name = "purchase_order")
@NamedEntityGraph(
  name = "graph.Order.items", 
  attributeNodes = @NamedAttributeNode("items"))
public class Order implements Serializable {

    ...
}

There isn’t much you need to do to define an entity graph with a @NamedEntityGraph annotation. You just have to provide a unique name for the graph and one @NamedAttributeNode annotation for each attribute Hibernate shall fetch eagerly. In this example, it’s only the items attribute which maps the relationship between an Order and several OrderItem entities.

Now you can use the entity graph to control the fetching behaviour of a specific query. You, therefore, have to instantiate an EntityGraph based on the @NamedEntityGraph definition and provide it as a hint to the EntityManager.find() method or your query. I do this in the following code snippet where I select the Order entity with id 1 from the database:

EntityGraph graph = this.em.getEntityGraph("graph.Order.items");

Map hints = new HashMap();
hints.put("javax.persistence.fetchgraph", graph);

return this.em.find(Order.class, 1L, hints);
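
The same hint also works for queries. As a sketch, attaching the graph to a JPQL query via setHint looks like this:

EntityGraph graph = this.em.getEntityGraph("graph.Order.items");

List<Order> orders = this.em
  .createQuery("SELECT o FROM Order o", Order.class)
  .setHint("javax.persistence.fetchgraph", graph)
  .getResultList();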

Hibernate uses this information to create one SQL statement which gets the attributes of the Order entity and the attributes of the entity graph from the database:

17:34:51,310 DEBUG [org.hibernate.loader.plan.build.spi.LoadPlanTreePrinter] (pool-2-thread-1) 
  LoadPlan(entity=blog.thoughts.on.java.jpa21.entity.graph.model.Order) 
    - Returns 
      - EntityReturnImpl(
          entity=blog.thoughts.on.java.jpa21.entity.graph.model.Order, 
          querySpaceUid=<gen:0>, 
          path=blog.thoughts.on.java.jpa21.entity.graph.model.Order) 
        - CollectionAttributeFetchImpl(
            collection=blog.thoughts.on.java.jpa21.entity.graph.model.Order.items, 
            querySpaceUid=<gen:1>, 
            path=blog.thoughts.on.java.jpa21.entity.graph.model.Order.items)
          - (collection element) CollectionFetchableElementEntityGraph(
              entity=blog.thoughts.on.java.jpa21.entity.graph.model.OrderItem, 
              querySpaceUid=<gen:2>, 
              path=blog.thoughts.on.java.jpa21.entity.graph.model.Order.items.<elements>) 
            - EntityAttributeFetchImpl(entity=blog.thoughts.on.java.jpa21.entity.graph.model.Product,
                querySpaceUid=<gen:3>, 
                path=blog.thoughts.on.java.jpa21.entity.graph.model.Order.items.<elements>.product) 
    - QuerySpaces 
      - EntityQuerySpaceImpl(uid=<gen:0>, entity=blog.thoughts.on.java.jpa21.entity.graph.model.Order)
        - SQL table alias mapping - order0_ 
        - alias suffix - 0_ 
        - suffixed key columns - {id1_2_0_} 
        - JOIN (JoinDefinedByMetadata(items)) : <gen:0> -> <gen:1> 
          - CollectionQuerySpaceImpl(uid=<gen:1>, 
              collection=blog.thoughts.on.java.jpa21.entity.graph.model.Order.items) 
            - SQL table alias mapping - items1_ 
            - alias suffix - 1_ 
            - suffixed key columns - {order_id4_2_1_} 
            - entity-element alias suffix - 2_ 
            - 2_entity-element suffixed key columns - id1_0_2_ 
            - JOIN (JoinDefinedByMetadata(elements)) : <gen:1> -> <gen:2> 
              - EntityQuerySpaceImpl(uid=<gen:2>, 
                  entity=blog.thoughts.on.java.jpa21.entity.graph.model.OrderItem) 
                - SQL table alias mapping - items1_ 
                - alias suffix - 2_ 
                - suffixed key columns - {id1_0_2_}
                - JOIN (JoinDefinedByMetadata(product)) : <gen:2> -> <gen:3> 
                  - EntityQuerySpaceImpl(uid=<gen:3>, 
                      entity=blog.thoughts.on.java.jpa21.entity.graph.model.Product) 
                    - SQL table alias mapping - product2_ 
                    - alias suffix - 3_ 
                    - suffixed key columns - {id1_1_3_}
17:34:51,311 DEBUG [org.hibernate.loader.entity.plan.EntityLoader] (pool-2-thread-1) 
  Static select for entity blog.thoughts.on.java.jpa21.entity.graph.model.Order [NONE:-1]: 
  select order0_.id as id1_2_0_, 
    order0_.orderNumber as orderNum2_2_0_, 
    order0_.version as version3_2_0_, 
    items1_.order_id as order_id4_2_1_, 
    items1_.id as id1_0_1_, 
    items1_.id as id1_0_2_, 
    items1_.order_id as order_id4_0_2_, 
    items1_.product_id as product_5_0_2_, 
    items1_.quantity as quantity2_0_2_, 
    items1_.version as version3_0_2_, 
    product2_.id as id1_1_3_, 
    product2_.name as name2_1_3_, 
    product2_.version as version3_1_3_ 
  from purchase_order order0_ 
    left outer join OrderItem items1_ on order0_.id=items1_.order_id 
    left outer join Product product2_ on items1_.product_id=product2_.id 
  where order0_.id=?

Initializing only one relationship is good enough for a blog post, but in a real project, you will most likely want to build more complex graphs. So let’s do that.

You can, of course, provide an array of @NamedAttributeNode annotations to fetch multiple attributes of the same entity, and you can use @NamedSubgraph to define the fetching behaviour for an additional level of entities. I use that in the following code snippet to fetch not only all related OrderItem entities but also the Product entity for each OrderItem:

@Entity
@Table(name = "purchase_order")
@NamedEntityGraph(
  name = "graph.Order.items", 
  attributeNodes = @NamedAttributeNode(value = "items", subgraph = "items"), 
  subgraphs = @NamedSubgraph(name = "items", attributeNodes = @NamedAttributeNode("product")))
public class Order implements Serializable {

    ...
}

As you can see, the definition of a @NamedSubgraph is very similar to the definition of a @NamedEntityGraph. You can then reference this subgraph in a @NamedAttributeNode annotation to define the fetching behaviour for this specific attribute.

The combination of these annotations allows you to define complex entity graphs which you can use to initialize all the relationships you use in your use case and avoid n+1 select issues. If you want to specify your entity graph dynamically at runtime, you can also do this via the Java API, as sketched below.
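
As a rough sketch of that runtime API (assuming the same Order, OrderItem and Product model), the equivalent of the annotation-based graph above could be built like this:

// build the graph programmatically instead of via annotations
EntityGraph<Order> graph = em.createEntityGraph(Order.class);
Subgraph<OrderItem> itemsSubgraph = graph.addSubgraph("items");
itemsSubgraph.addAttributeNodes("product");

Map<String, Object> hints = new HashMap<>();
hints.put("javax.persistence.fetchgraph", graph);
Order order = em.find(Order.class, 1L, hints);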

2.3. Update Entities One by One

Updating entities one by one feels very natural if you think in an object oriented way. You just get the entities you want to update and call a few setter methods to change their attributes like you do it with any other object.

This approach works fine if you only change a few entities. But it gets very inefficient when you work with a list of entities, and it is the third performance issue you can easily spot in your log file. You just have to look for a bunch of SQL UPDATE statements that look exactly the same, as you can see in the following log file:

22:58:05,829 DEBUG SQL:92 - select 
  product0_.id as id1_1_, 
  product0_.name as name2_1_, 
  product0_.price as price3_1_, 
  product0_.version as version4_1_ from Product product0_
22:58:05,883 DEBUG SQL:92 - update Product set name=?, price=?, version=? where id=? and version=?
22:58:05,889 DEBUG SQL:92 - update Product set name=?, price=?, version=? where id=? and version=?
22:58:05,891 DEBUG SQL:92 - update Product set name=?, price=?, version=? where id=? and version=?
22:58:05,893 DEBUG SQL:92 - update Product set name=?, price=?, version=? where id=? and version=?
22:58:05,900 DEBUG SQL:92 - update Product set name=?, price=?, version=? where id=? and version=?

The relational representation of the database records is a much better fit for these use cases than the object oriented one. With SQL, you could just write one SQL statement that updates all the records you want to change.

You can do the same with Hibernate if you use JPQL, native SQL or the CriteriaUpdate API. All 3 of them are very similar, so let’s use JPQL in this example.

You can define a JPQL UPDATE statement in a similar way as you know it from SQL. You just define which entity you want to update, how to change the values of its attributes, and limit the affected entities in the WHERE clause.

You can see an example of it in the following code snippet where I increase the price of all products by 10%:

em.createQuery("UPDATE Product p SET p.price = p.price*0.1").executeUpdate();

Hibernate creates an SQL UPDATE statement based on the JPQL statement and sends it to the database which performs the update operation.

It’s pretty obvious that this approach is a lot faster if you have to update a huge number of entities. But it also has a drawback. Hibernate doesn’t know which entities are affected by the update operation and doesn’t update its 1st level cache. You should, therefore, make sure not to read and update an entity with a JPQL statement within the same Hibernate Session, or you have to detach the entity to remove it from the cache.
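
A defensive sketch of that pattern: run the bulk update first and then clear the persistence context, so that no stale entity survives in the 1st level cache:

// run the bulk update first
em.createQuery("UPDATE Product p SET p.price = p.price * 1.1").executeUpdate();

// then detach all managed entities so that subsequent reads
// go to the database and see the updated prices
em.clear();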

3. Summary and Performance Cheat Sheet

Within this post, I’ve shown you 3 Hibernate performance issues which you can find in your log files.

2 of them were caused by a huge number of SQL statements. This is a common reason for performance issues when you work with Hibernate. Hibernate hides the database access behind its API, and that often makes it difficult to guess the actual number of SQL statements. You should therefore always check the executed SQL statements when you make a change to your persistence tier.

If you’d like to learn more about Hibernate performance tuning, make sure to download my cheat sheet with 7 Hibernate Performance Tuning Tips.

Guide to Java 8 Collectors



1. Overview

In this tutorial, we’ll go through Java 8’s Collectors, which are used at the final step of processing a Stream.

If you want to read more about the Stream API itself, check this article.

2. Collect Method

Stream.collect() is one of Java 8’s Stream API terminal methods. It allows us to perform mutable fold operations (repackaging elements into some data structure and applying additional logic, concatenating them, etc.) on data elements held in a Stream instance.

The strategy for this operation is provided via a Collector interface implementation.

3. Collectors

All predefined implementations can be found in the Collectors class. It’s common practice to use the following static import with them in order to increase readability:

import static java.util.stream.Collectors.*;

or to import just the single collectors of your choice:

import static java.util.stream.Collectors.toList;
import static java.util.stream.Collectors.toMap;
import static java.util.stream.Collectors.toSet;

In the examples that follow, we will be reusing this list:

List<String> givenList = Arrays.asList("a", "bb", "ccc", "dd");

3.1. Collectors.toList()

The ToList collector can be used for collecting all Stream elements into a List instance. The important thing to remember is that we can’t assume any particular List implementation with this method. If you want to have more control over this, use toCollection instead.

Let’s create a Stream instance representing a sequence of elements and collect them into a List instance:

List<String> result = givenList.stream()
  .collect(toList());

3.2. Collectors.toSet()

The ToSet collector can be used for collecting all Stream elements into a Set instance. The important thing to remember is that we can’t assume any particular Set implementation with this method. If you want to have more control over this, use toCollection instead.

Let’s create a Stream instance representing a sequence of elements and collect them into a Set instance:

Set<String> result = givenList.stream()
  .collect(toSet());

3.3. Collectors.toCollection()

As you probably already noticed, when using the toSet and toList collectors, you can’t make any assumptions about their implementations. If you want to use a custom implementation, you will need to use the toCollection collector with a provided collection of your choice.

Let’s create a Stream instance representing a sequence of elements and collect them into a LinkedList instance:

List<String> result = givenList.stream()
  .collect(toCollection(LinkedList::new));

Notice that this will not work with immutable collections. In such a case, you would need to either write a custom Collector implementation or use collectingAndThen.
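
For example, here’s a sketch that combines toCollection with collectingAndThen (covered in the next section) to end up with an unmodifiable List backed by a LinkedList:

List<String> result = givenList.stream()
  .collect(collectingAndThen(
    toCollection(LinkedList::new),
    Collections::unmodifiableList));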

3.4. Collectors.toMap()

The ToMap collector can be used to collect Stream elements into a Map instance. To do this, you need to provide two functions:

  • keyMapper
  • valueMapper

keyMapper will be used for extracting a Map key from a Stream element and valueMapper will be used for extracting a value associated with a given key.

Let’s collect those elements into a Map that stores strings as keys and their lengths as values:

Map<String, Integer> result = givenList.stream()
  .collect(toMap(Function.identity(), String::length));

Function.identity() is just a shortcut for defining a function that accepts and returns the same value.

Sometimes you might encounter a situation where you end up with a key collision. In such a case, you should use toMap with another signature:

Map<String, Integer> result = givenList.stream()
  .collect(toMap(Function.identity(), String::length, (i1, i2) -> i1));

The third argument here is a BinaryOperator, where you can specify how you want collisions to be handled. In this case, we just pick either of the two colliding values, because we know that the same strings will always have the same length.

3.5. Collectors.collectingAndThen()

CollectingAndThen is a special collector that allows us to perform another action on a Collector’s result straight after collecting ends.

Let’s collect Stream elements to a List instance and then convert the result into an ImmutableList instance:

List<String> result = givenList.stream()
  .collect(collectingAndThen(toList(), ImmutableList::copyOf));

3.6. Collectors.joining()

The Joining collector can be used for joining Stream<String> elements.

We can join them together by doing:

String result = givenList.stream()
  .collect(joining());

which will result in:

"abbcccdd"

You can also specify custom separators, prefixes, and postfixes:

String result = givenList.stream()
  .collect(joining(" "));

which will result in:

"a bb ccc dd"

or you can write:

String result = givenList.stream()
  .collect(joining(" ", "PRE-", "-POST"));

which will result in:

"PRE-a bb ccc dd-POST"

3.7. Collectors.counting()

Counting is a simple collector that simply counts all Stream elements.

Now we can write:

Long result = givenList.stream()
  .collect(counting());

3.8. Collectors.summarizingDouble/Long/Int()

SummarizingDouble/Long/Int is a collector that returns a special class containing statistical information about numerical data in a Stream of extracted elements.

We can obtain information about string lengths by doing:

DoubleSummaryStatistics result = givenList.stream()
  .collect(summarizingDouble(String::length));

In this case, the following will be true:

assertThat(result.getAverage()).isEqualTo(2);
assertThat(result.getCount()).isEqualTo(4);
assertThat(result.getMax()).isEqualTo(3);
assertThat(result.getMin()).isEqualTo(1);
assertThat(result.getSum()).isEqualTo(8);

3.9. Collectors.averagingDouble/Long/Int()

AveragingDouble/Long/Int is a collector that simply returns an average of extracted elements.

We can get average string length by doing:

Double result = givenList.stream()
  .collect(averagingDouble(String::length));

3.10. Collectors.summingDouble/Long/Int()

SummingDouble/Long/Int is a collector that simply returns a sum of extracted elements.

We can get a sum of all string lengths by doing:

Double result = givenList.stream()
  .collect(summingDouble(String::length));

3.11. Collectors.maxBy()/minBy()

MaxBy/MinBy collectors return the biggest/the smallest element of a Stream according to a provided Comparator instance.

We can pick the biggest element by doing:

Optional<String> result = givenList.stream()
  .collect(maxBy(Comparator.naturalOrder()));

Notice that the returned value is wrapped in an Optional instance. This forces users to rethink the empty-collection corner case.

3.12. Collectors.groupingBy()

The GroupingBy collector is used for grouping objects by some property and storing the results in a Map instance.

We can group the elements by string length and store the grouping results in Set instances:

Map<Integer, Set<String>> result = givenList.stream()
  .collect(groupingBy(String::length, toSet()));

This will result in the following being true:

assertThat(result)
  .containsEntry(1, newHashSet("a"))
  .containsEntry(2, newHashSet("bb", "dd"))
  .containsEntry(3, newHashSet("ccc"));

Notice that the second argument of the groupingBy method is actually a Collector and you are free to use any Collector of your choice.
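
For instance, a quick sketch that combines groupingBy with the counting collector from section 3.7 to count how many strings there are of each length:

Map<Integer, Long> result = givenList.stream()
  .collect(groupingBy(String::length, counting()));

// result: {1=1, 2=2, 3=1}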

3.13. Collectors.partitioningBy

PartitioningBy is a specialized case of groupingBy that accepts a Predicate instance and collects Stream elements into a Map instance that stores Boolean values as keys and collections as values. Under the “true” key, you can find a collection of elements matching the given Predicate and under the “false” key, you can find a collection of elements not matching the given Predicate.

You can write:

Map<Boolean, List<String>> result = givenList.stream()
  .collect(partitioningBy(s -> s.length() > 2));

Which results in a Map containing:

{false=["a", "bb", "dd"], true=["ccc"]}

4. Custom Collectors

If you want to write your own Collector implementation, you need to implement the Collector interface and specify its 3 generic parameters:

public interface Collector<T, A, R> {...}

  1. T – the type of objects that will be available for collection,
  2. A – the type of a mutable accumulator object,
  3. R – the type of the final result.

Let’s write an example Collector for collecting elements into an ImmutableSet instance. We start by specifying the right types:

private class ImmutableSetCollector<T>
  implements Collector<T, ImmutableSet.Builder<T>, ImmutableSet<T>> {...}

Since we need a mutable collection for the internal collection operation handling, we can’t use ImmutableSet directly; we need some other mutable collection (or any other class) that can temporarily accumulate objects for us. In this case, we will go with an ImmutableSet.Builder, and now we need to implement 5 methods:

  • Supplier<ImmutableSet.Builder<T>> supplier()
  • BiConsumer<ImmutableSet.Builder<T>, T> accumulator()
  • BinaryOperator<ImmutableSet.Builder<T>> combiner()
  • Function<ImmutableSet.Builder<T>, ImmutableSet<T>> finisher()
  • Set<Characteristics> characteristics()

The supplier() method returns a Supplier instance that generates an empty accumulator instance, so in this case, we can simply write:

@Override
public Supplier<ImmutableSet.Builder<T>> supplier() {
    return ImmutableSet::builder;
}

The accumulator() method returns a function that is used for adding a new element to an existing accumulator object, so let’s just use the Builder‘s add method:

@Override
public BiConsumer<ImmutableSet.Builder<T>, T> accumulator() {
    return ImmutableSet.Builder::add;
}

The combiner() method returns a function that is used for merging two accumulators together:

@Override
public BinaryOperator<ImmutableSet.Builder<T>> combiner() {
    return (left, right) -> left.addAll(right.build());
}

The finisher() method returns a function that is used for converting an accumulator to the final result type, so in this case, we will just use the Builder‘s build method:

@Override
public Function<ImmutableSet.Builder<T>, ImmutableSet<T>> finisher() {
    return ImmutableSet.Builder::build;
}

The characteristics() method is used to provide the Stream with some additional information that will be used for internal optimizations. In this case, we do not pay attention to the element order in a Set, so we will use Characteristics.UNORDERED. To obtain more information regarding this subject, check the Characteristics JavaDoc.

@Override
public Set<Characteristics> characteristics() {
    return Sets.immutableEnumSet(Characteristics.UNORDERED);
}

Here is the complete implementation along with the usage:

public class ImmutableSetCollector<T>
  implements Collector<T, ImmutableSet.Builder<T>, ImmutableSet<T>> {

    @Override
    public Supplier<ImmutableSet.Builder<T>> supplier() {
        return ImmutableSet::builder;
    }

    @Override
    public BiConsumer<ImmutableSet.Builder<T>, T> accumulator() {
        return ImmutableSet.Builder::add;
    }

    @Override
    public BinaryOperator<ImmutableSet.Builder<T>> combiner() {
        return (left, right) -> left.addAll(right.build());
    }

    @Override
    public Function<ImmutableSet.Builder<T>, ImmutableSet<T>> finisher() {
        return ImmutableSet.Builder::build;
    }

    @Override
    public Set<Characteristics> characteristics() {
        return Sets.immutableEnumSet(Characteristics.UNORDERED);
    }

    public static <T> ImmutableSetCollector<T> toImmutableSet() {
        return new ImmutableSetCollector<>();
    }
}

and here it is in action:

List<String> givenList = Arrays.asList("a", "bb", "ccc", "dddd");

ImmutableSet<String> result = givenList.stream()
  .collect(toImmutableSet());

5. Conclusion

In this article, we explored Java 8’s collectors in depth and showed how to implement a custom collector.

All code examples are available on GitHub.



Supercharge Java Authentication with JSON Web Tokens (JWTs)


Getting ready to build, or struggling with, secure authentication in your Java application? Unsure of the benefits of using tokens (and specifically JSON web tokens), or how they should be deployed? I’m excited to answer these questions, and more, for you in this tutorial!

Before we dive into JSON Web Tokens (JWTs), and the JJWT library (created by Stormpath‘s CTO, Les Hazlewood and maintained by a community of contributors), let’s cover some basics.

1. Authentication vs. Token Authentication

The set of protocols an application uses to confirm user identity is authentication. Applications have traditionally persisted identity through session cookies. This paradigm relies on server-side storage of session IDs which forces developers to create session storage that is either unique and server-specific, or implemented as a completely separate session storage layer.

Token authentication was developed to solve problems server-side session IDs didn’t, and couldn’t. Just like traditional authentication, users present verifiable credentials, but are now issued a set of tokens instead of a session ID. The initial credentials could be the standard username/password pair, API keys, or even tokens from another service. (Stormpath’s API Key Authentication Feature is an example of this.)

1.1 Why Tokens?

Very simply, using tokens in place of session IDs can lower your server load, streamline permission management, and provide better tools for supporting a distributed or cloud-based infrastructure. In the case of JWT, this is primarily accomplished through the stateless nature of these types of tokens (more on that below).

Tokens offer a wide variety of applications, including: Cross Site Request Forgery (CSRF) protection schemes, OAuth 2.0 interactions, session IDs, and (in cookies) as authentication representations. In most cases, standards do not specify a particular format for tokens. Here’s an example of a typical Spring Security CSRF token in an HTML form:

<input name="_csrf" type="hidden" 
  value="f3f42ea9-3104-4d13-84c0-7bcb68202f16"/>

If you try to post that form without the right CSRF token, you get an error response, and that’s the utility of tokens. The above example is a “dumb” token. This means there is no inherent meaning to be gleaned from the token itself. This is also where JWTs make a big difference.

2. What’s in a JWT?

JWTs (pronounced “jots”) are URL-safe, encoded, cryptographically signed (sometimes encrypted) strings that can be used as tokens in a variety of applications. Here’s an example of a JWT being used as a CSRF token:

<input name="_csrf" type="hidden" 
  value="eyJhbGciOiJIUzI1NiJ9.eyJqdGkiOiJlNjc4ZjIzMzQ3ZTM0MTBkYjdlNjg3Njc4MjNiMmQ3MCIsImlhdCI6MTQ2NjYzMzMxNywibmJmIjoxNDY2NjMzMzE3LCJleHAiOjE0NjY2MzY5MTd9.rgx_o8VQGuDa2AqCHSgVOD5G68Ld_YYM7N7THmvLIKc"/>

In this case, you can see that the token is much longer than in our previous example. Just like we saw before, if the form is submitted without the token you get an error response.

So, why JWT?

The above token is cryptographically signed and therefore can be verified, providing proof that it hasn’t been tampered with. Also, JWTs are encoded with a variety of additional information.

Let’s look at the anatomy of a JWT to better understand how we squeeze all this goodness out of it. You may have noticed that there are three distinct sections separated by periods (.):

Header:    eyJhbGciOiJIUzI1NiJ9
Payload:   eyJqdGkiOiJlNjc4ZjIzMzQ3ZTM0MTBkYjdlNjg3Njc4MjNiMmQ3MCIsImlhdCI6MTQ2NjYzMzMxNywibmJmIjoxNDY2NjMzMzE3LCJleHAiOjE0NjY2MzY5MTd9
Signature: rgx_o8VQGuDa2AqCHSgVOD5G68Ld_YYM7N7THmvLIKc

Each section is base64 URL-encoded. This ensures that it can be used safely in a URL (more on this later). Let’s take a closer look at each section individually.

2.1. The Header

If you base64-decode the header, you will get the following JSON string:

{"alg":"HS256"}

This shows that the JWT was signed with HMAC using SHA-256.

2.2. The Payload

If you decode the payload, you get the following JSON string (formatted for clarity):

{
  "jti": "e678f23347e3410db7e68767823b2d70",
  "iat": 1466633317,
  "nbf": 1466633317,
  "exp": 1466636917
}

Within the payload, as you can see, there are a number of keys with values. These keys are called “claims” and the JWT specification has seven of these specified as “registered” claims. They are:

  • iss – Issuer
  • sub – Subject
  • aud – Audience
  • exp – Expiration
  • nbf – Not Before
  • iat – Issued At
  • jti – JWT ID

When building a JWT, you can put in any custom claims you wish. The list above simply represents the claims that are reserved, both in the key that is used and the expected type. Our CSRF token has a JWT ID, an “Issued At” time, a “Not Before” time, and an Expiration time. The expiration time is exactly one hour past the issued-at time.
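
As a sketch of how a payload like this could be produced with the JJWT library covered later in this post (the base64 secret is the example value reused throughout the article):

Date now = new Date();
String jwt = Jwts.builder()
  .setId(UUID.randomUUID().toString().replace("-", ""))    // jti
  .setIssuedAt(now)                                        // iat
  .setNotBefore(now)                                       // nbf
  .setExpiration(new Date(now.getTime() + 3600 * 1000))    // exp: one hour later
  .signWith(
    SignatureAlgorithm.HS256,
    TextCodec.BASE64.decode("Yn2kjibddFAWtnPJ2AFlL8WXmohJMCvigQggaEypa5E="))
  .compact();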

2.3. The Signature

Finally, the signature section is created by taking the header and payload together (with the . in between) and passing it through the specified algorithm (HMAC using SHA-256, in this case) along with a known secret. Note that the secret is always a byte array, and should be of a length that makes sense for the algorithm used. Below, I use a random base64 encoded string (for readability) that’s converted into a byte array.

It looks like this in pseudo-code:

computeHMACSHA256(
    header + "." + payload, 
    base64DecodeToByteArray("4pE8z3PBoHjnV1AhvGk+e8h2p+ShZpOnpr8cwHmMh1w=")
)

As long as you know the secret, you can generate the signature yourself and compare your result to the signature section of the JWT to verify that it has not been tampered with. Technically, a JWT that’s been cryptographically signed is called a JWS. JWTs can also be encrypted and would then be called a JWE. (In actual practice, the term JWT is used to describe JWEs and JWSs.)
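
If you want to see that verification spelled out in plain Java, here's a sketch using only the JDK's javax.crypto and java.util.Base64 classes (the class and method names are made up for illustration):

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.util.Base64;

public class JwtSignatureCheck {

    public static boolean hasValidSignature(String jwt, byte[] secret) throws Exception {
        String[] parts = jwt.split("\\.");

        // recompute HMAC-SHA256 over "header.payload"
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret, "HmacSHA256"));
        byte[] computed = mac.doFinal((parts[0] + "." + parts[1]).getBytes("UTF-8"));

        // compare with the base64 URL-encoded signature section
        // (real code should use a constant-time comparison)
        String expected = Base64.getUrlEncoder().withoutPadding().encodeToString(computed);
        return expected.equals(parts[2]);
    }
}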

This brings us back to the benefits of using a JWT as our CSRF token. We can verify the signature and we can use the information encoded in the JWT to confirm its validity. So, not only does the string representation of the JWT need to match what’s stored server-side, we can ensure that it’s not expired simply by inspecting the exp claim. This saves the server from maintaining additional state.

Well, we’ve covered a lot of ground here. Let’s dive into some code!

3. Setup the JJWT Tutorial

JJWT (https://github.com/jwtk/jjwt) is a Java library providing end-to-end JSON Web Token creation and verification. Forever free and open-source (Apache License, Version 2.0), it was designed with a builder-focused interface hiding most of its complexity.

The primary operations in using JJWT involve building and parsing JWTs. We’ll look at these operations next, then get into some extended features of the JJWT, and finally, we’ll see JWTs in action as CSRF tokens in a Spring Security, Spring Boot application.

The code demonstrated in the following sections can be found here. Note: the project uses Spring Boot from the beginning, as it’s easy to interact with the API that it exposes.

To build the project, execute the following:

git clone https://github.com/eugenp/tutorials.git
cd tutorials/jjwt
mvn clean install

One of the great things about Spring Boot is how easy it is to fire up an application. To run the JJWT Fun application, simply do the following:

java -jar target/*.jar

There are ten endpoints exposed in this example application (I use httpie to interact with the application. It can be found here.)

http localhost:8080
Available commands (assumes httpie - https://github.com/jkbrzt/httpie):

  http http://localhost:8080/
	This usage message

  http http://localhost:8080/static-builder
	build JWT from hardcoded claims

  http POST http://localhost:8080/dynamic-builder-general claim-1=value-1 ... [claim-n=value-n]
	build JWT from passed in claims (using general claims map)

  http POST http://localhost:8080/dynamic-builder-specific claim-1=value-1 ... [claim-n=value-n]
	build JWT from passed in claims (using specific claims methods)

  http POST http://localhost:8080/dynamic-builder-compress claim-1=value-1 ... [claim-n=value-n]
	build DEFLATE compressed JWT from passed in claims

  http http://localhost:8080/parser?jwt=<jwt>
	Parse passed in JWT

  http http://localhost:8080/parser-enforce?jwt=<jwt>
	Parse passed in JWT enforcing the 'iss' registered claim and the 'hasMotorcycle' custom claim

  http http://localhost:8080/get-secrets
	Show the signing keys currently in use.

  http http://localhost:8080/refresh-secrets
	Generate new signing keys and show them.

  http POST http://localhost:8080/set-secrets 
    HS256=base64-encoded-value HS384=base64-encoded-value HS512=base64-encoded-value
	Explicitly set secrets to use in the application.

In the sections that follow, we will examine each of these endpoints and the JJWT code contained in the handlers.

4. Building JWTs with JJWT

Because of JJWT’s fluent interface, the creation of the JWT is basically a three-step process:

  1. The definition of the internal claims of the token, like Issuer, Subject, Expiration, and ID.
  2. The cryptographic signing of the JWT (making it a JWS).
  3. The compaction of the JWT to a URL-safe string, according to the JWT Compact Serialization rules.

The final JWT will be a three-part base64-encoded string, signed with the specified signature algorithm using the provided key. After this point, the token is ready to be shared with another party.

Here’s an example of the JJWT in action:

String jws = Jwts.builder()
  .setIssuer("Stormpath")
  .setSubject("msilverman")
  .claim("name", "Micah Silverman")
  .claim("scope", "admins")
  // Fri Jun 24 2016 15:33:42 GMT-0400 (EDT)
  .setIssuedAt(Date.from(Instant.ofEpochSecond(1466796822L)))
  // Sat Jun 24 2116 15:33:42 GMT-0400 (EDT)
  .setExpiration(Date.from(Instant.ofEpochSecond(4622470422L)))
  .signWith(
    SignatureAlgorithm.HS256,
    TextCodec.BASE64.decode("Yn2kjibddFAWtnPJ2AFlL8WXmohJMCvigQggaEypa5E=")
  )
  .compact();

This is very similar to the code that’s in the StaticJWTController.fixedBuilder method of the code project.

At this point, it’s worth talking about a few anti-patterns related to JWTs and signing. If you’ve ever seen JWT examples before, you’ve likely encountered one of these signing anti-pattern scenarios:

  1. .signWith(
        SignatureAlgorithm.HS256,
       "secret".getBytes("UTF-8")    
    )
  2. .signWith(
        SignatureAlgorithm.HS256,
        "Yn2kjibddFAWtnPJ2AFlL8WXmohJMCvigQggaEypa5E=".getBytes("UTF-8")
    )
  3. .signWith(
        SignatureAlgorithm.HS512,
        TextCodec.BASE64.decode("Yn2kjibddFAWtnPJ2AFlL8WXmohJMCvigQggaEypa5E=")
    )

Any of the HS-type signature algorithms takes a byte array. To keep the secret human-readable, it’s convenient to take a string and convert it to a byte array.

Anti-pattern 1 above demonstrates this. This is problematic because the secret is weakened by being so short, and a plain string is not a byte array in its native form. So, to keep it readable, we can base64-encode the byte array.

However, anti-pattern 2 above takes the base64 encoded string and converts it directly to a byte array. What should be done is to decode the base64 string back into the original byte array.

Number 3 above demonstrates this. So, why is this one also an anti-pattern? It’s a subtle reason in this case. Notice that the signature algorithm is HS512. The byte array is not the maximum length that HS512 can support, making it a weaker secret than what is possible for that algorithm.

The example code includes a class called SecretService that ensures secrets of the proper strength are used for the given algorithm. At application startup time, a new set of secrets is created for each of the HS algorithms. There are endpoints to refresh the secrets as well as to explicitly set the secrets.
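
A rough sketch of how such secrets could be generated (an assumption about the approach, not the project's actual SecretService code): draw random bytes whose length matches the hash output size and base64-encode them for storage:

// e.g. 256 bits for HS256, 384 for HS384, 512 for HS512 (hypothetical helper)
private String generateSecret(int bits) {
    byte[] secret = new byte[bits / 8];
    new SecureRandom().nextBytes(secret);
    return TextCodec.BASE64.encode(secret);
}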

If you have the project running as described above, execute the following so that the JWT examples below match the responses from your project.

http POST localhost:8080/set-secrets \
  HS256="Yn2kjibddFAWtnPJ2AFlL8WXmohJMCvigQggaEypa5E=" \
  HS384="VW96zL+tYlrJLNCQ0j6QPTp+d1q75n/Wa8LVvpWyG8pPZOP6AA5X7XOIlI90sDwx" \
  HS512="cd+Pr1js+w2qfT2BoCD+tPcYp9LbjpmhSMEJqUob1mcxZ7+Wmik4AYdjX+DlDjmE4yporzQ9tm7v3z/j+QbdYg=="

Now, you can hit the /static-builder endpoint:

http http://localhost:8080/static-builder

This produces a JWT that looks like this:

eyJhbGciOiJIUzI1NiJ9.
eyJpc3MiOiJTdG9ybXBhdGgiLCJzdWIiOiJtc2lsdmVybWFuIiwibmFtZSI6Ik1pY2FoIFNpbHZlcm1hbiIsInNjb3BlIjoiYWRtaW5zIiwiaWF0IjoxNDY2Nzk2ODIyLCJleHAiOjQ2MjI0NzA0MjJ9.
kP0i_RvTAmI8mgpIkDFhRX3XthSdP-eqqFKGcU92ZIQ

Now, hit:

http http://localhost:8080/parser?jwt=eyJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJTdG9ybXBhdGgiLCJzdWIiOiJtc2lsdmVybWFuIiwibmFtZSI6Ik1pY2FoIFNpbHZlcm1hbiIsInNjb3BlIjoiYWRtaW5zIiwiaWF0IjoxNDY2Nzk2ODIyLCJleHAiOjQ2MjI0NzA0MjJ9.kP0i_RvTAmI8mgpIkDFhRX3XthSdP-eqqFKGcU92ZIQ

The response has all the claims that we included when we created the JWT.

HTTP/1.1 200 OK
Content-Type: application/json;charset=UTF-8
...
{
    "jws": {
        "body": {
            "exp": 4622470422,
            "iat": 1466796822,
            "iss": "Stormpath",
            "name": "Micah Silverman",
            "scope": "admins",
            "sub": "msilverman"
        },
        "header": {
            "alg": "HS256"
        },
        "signature": "kP0i_RvTAmI8mgpIkDFhRX3XthSdP-eqqFKGcU92ZIQ"
    },
    "status": "SUCCESS"
}

This is the parsing operation, which we’ll get into in the next section.

Now, let’s hit an endpoint that takes claims as parameters and will build a custom JWT for us.

http -v POST localhost:8080/dynamic-builder-general iss=Stormpath sub=msilverman hasMotorcycle:=true

Note: There’s a subtle difference between the hasMotorcycle claim and the other claims. httpie assumes that JSON parameters are strings by default. To submit raw JSON using httpie, you use the := form rather than =. Without that, it would submit “hasMotorcycle”: “true”, which is not what we want.

Here’s the output:

POST /dynamic-builder-general HTTP/1.1
Accept: application/json
...
{
    "hasMotorcycle": true,
    "iss": "Stormpath",
    "sub": "msilverman"
}

HTTP/1.1 200 OK
Content-Type: application/json;charset=UTF-8
...
{
    "jwt": 
      "eyJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJTdG9ybXBhdGgiLCJzdWIiOiJtc2lsdmVybWFuIiwiaGFzTW90b3JjeWNsZSI6dHJ1ZX0.OnyDs-zoL3-rw1GaSl_KzZzHK9GoiNocu-YwZ_nQNZU",
    "status": "SUCCESS"
}

Let’s take a look at the code that backs this endpoint:

@RequestMapping(value = "/dynamic-builder-general", method = POST)
public JwtResponse dynamicBuilderGeneric(@RequestBody Map<String, Object> claims) 
  throws UnsupportedEncodingException {
    String jws =  Jwts.builder()
        .setClaims(claims)
        .signWith(
            SignatureAlgorithm.HS256,
            secretService.getHS256SecretBytes()
        )
        .compact();
    return new JwtResponse(jws);
}

Line 2 ensures that the incoming JSON is automatically converted to a Java Map<String, Object>, which is super handy for JJWT, as the method on line 5 simply takes that Map and sets all the claims at once.

As terse as this code is, we need something more specific to ensure that the claims that are passed are valid. Using the .setClaims(Map<String, Object> claims) method is handy when you already know that the claims represented in the map are valid. This is where the type safety of Java comes into play in the JJWT library.

For each of the Registered Claims defined in the JWT specification, there’s a corresponding Java method in the JJWT that takes the spec-correct type.

Let’s hit another endpoint in our example and see what happens:

http -v POST localhost:8080/dynamic-builder-specific iss=Stormpath sub:=5 hasMotorcycle:=true

Note that we’ve passed in an integer, 5, for the “sub” claim. Here’s the output:

POST /dynamic-builder-specific HTTP/1.1
Accept: application/json
...
{
    "hasMotorcycle": true,
    "iss": "Stormpath",
    "sub": 5
}

HTTP/1.1 400 Bad Request
Connection: close
Content-Type: application/json;charset=UTF-8
...
{
    "exceptionType": "java.lang.ClassCastException",
    "message": "java.lang.Integer cannot be cast to java.lang.String",
    "status": "ERROR"
}

Now, we’re getting an error response because the code is enforcing the type of the Registered Claims. In this case, sub must be a string. Here’s the code that backs this endpoint:

@RequestMapping(value = "/dynamic-builder-specific", method = POST)
public JwtResponse dynamicBuilderSpecific(@RequestBody Map<String, Object> claims) 
  throws UnsupportedEncodingException {
    JwtBuilder builder = Jwts.builder();
    
    claims.forEach((key, value) -> {
        switch (key) {
            case "iss":
                builder.setIssuer((String) value);
                break;
            case "sub":
                builder.setSubject((String) value);
                break;
            case "aud":
                builder.setAudience((String) value);
                break;
            case "exp":
                builder.setExpiration(Date.from(
                    Instant.ofEpochSecond(Long.parseLong(value.toString()))
                ));
                break;
            case "nbf":
                builder.setNotBefore(Date.from(
                    Instant.ofEpochSecond(Long.parseLong(value.toString()))
                ));
                break;
            case "iat":
                builder.setIssuedAt(Date.from(
                    Instant.ofEpochSecond(Long.parseLong(value.toString()))
                ));
                break;
            case "jti":
                builder.setId((String) value);
                break;
            default:
                builder.claim(key, value);
        }
    });
	
    builder.signWith(SignatureAlgorithm.HS256, secretService.getHS256SecretBytes());

    return new JwtResponse(builder.compact());
}

Just like before, the method accepts a Map<String, Object> of claims as its parameter. However, this time, we are calling the specific method for each of the Registered Claims which enforces type.

One refinement to this is to make the error message more specific. Right now, we only know that one of our claims is not the correct type. We don’t know which claim was in error or what it should be. Here’s a method that will give us a more specific error message. It also deals with a bug in the current code.

private void ensureType(String registeredClaim, Object value, Class expectedType) {
    boolean isCorrectType =
        expectedType.isInstance(value) ||
        expectedType == Long.class && value instanceof Integer;

    if (!isCorrectType) {
        String msg = "Expected type: " + expectedType.getCanonicalName() + 
		    " for registered claim: '" + registeredClaim + "', but got value: " + 
			value + " of type: " + value.getClass().getCanonicalName();
        throw new JwtException(msg);
    }
}

Line 3 checks that the passed in value is of the expected type. If not, a JwtException is thrown with the specific error. Let’s take a look at this in action by making the same call we did earlier:

http -v POST localhost:8080/dynamic-builder-specific iss=Stormpath sub:=5 hasMotorcycle:=true
POST /dynamic-builder-specific HTTP/1.1
Accept: application/json
...
User-Agent: HTTPie/0.9.3

{
    "hasMotorcycle": true,
    "iss": "Stormpath",
    "sub": 5
}

HTTP/1.1 400 Bad Request
Connection: close
Content-Type: application/json;charset=UTF-8
...
{
    "exceptionType": "io.jsonwebtoken.JwtException",
    "message": 
      "Expected type: java.lang.String for registered claim: 'sub', but got value: 5 of type: java.lang.Integer",
    "status": "ERROR"
}

Now, we have a very specific error message telling us that the sub claim is the one in error.

Let’s circle back to that bug in our code. The issue has nothing to do with the JJWT library. The issue is that the JSON to Java Object mapper built into Spring Boot is too smart for our own good.

If there’s a method that accepts a Java Object, the JSON mapper will automatically convert a passed in number that is less than or equal to 2,147,483,647 into a Java Integer. Likewise, it will automatically convert a passed in number that is greater than 2,147,483,647 into a Java Long. For the iat, nbf, and exp claims of a JWT, we want our ensureType test to pass whether the mapped Object is an Integer or a Long. That’s why we have the additional clause in determining if the passed in value is the correct type:

 boolean isCorrectType =
     expectedType.isInstance(value) ||
     expectedType == Long.class && value instanceof Integer;

If we’re expecting a Long, but the value is an instance of Integer, we still say it’s the correct type. With an understanding of what’s happening with this validation, we can now integrate it into our dynamicBuilderSpecific method:

@RequestMapping(value = "/dynamic-builder-specific", method = POST)
public JwtResponse dynamicBuilderSpecific(@RequestBody Map<String, Object> claims) 
  throws UnsupportedEncodingException {
    JwtBuilder builder = Jwts.builder();

    claims.forEach((key, value) -> {
        switch (key) {
            case "iss":
                ensureType(key, value, String.class);
                builder.setIssuer((String) value);
                break;
            case "sub":
                ensureType(key, value, String.class);
                builder.setSubject((String) value);
                break;
            case "aud":
                ensureType(key, value, String.class);
                builder.setAudience((String) value);
                break;
            case "exp":
                ensureType(key, value, Long.class);
                builder.setExpiration(Date.from(
				    Instant.ofEpochSecond(Long.parseLong(value.toString()))
				));
                break;
            case "nbf":
                ensureType(key, value, Long.class);
                builder.setNotBefore(Date.from(
					Instant.ofEpochSecond(Long.parseLong(value.toString()))
				));
                break;
            case "iat":
                ensureType(key, value, Long.class);
                builder.setIssuedAt(Date.from(
					Instant.ofEpochSecond(Long.parseLong(value.toString()))
				));
                break;
            case "jti":
                ensureType(key, value, String.class);
                builder.setId((String) value);
                break;
            default:
                builder.claim(key, value);
        }
    });

    builder.signWith(SignatureAlgorithm.HS256, secretService.getHS256SecretBytes());

    return new JwtResponse(builder.compact());
}

Note: In all the example code in this section, JWTs are signed with the HMAC using SHA-256 algorithm. This is to keep the examples simple. The JJWT library supports 12 different signature algorithms that you can take advantage of in your own code.

5. Parsing JWTs with JJWT

We saw earlier that our code example has an endpoint for parsing a JWT. Hitting this endpoint:

http http://localhost:8080/parser?jwt=eyJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJTdG9ybXBhdGgiLCJzdWIiOiJtc2lsdmVybWFuIiwibmFtZSI6Ik1pY2FoIFNpbHZlcm1hbiIsInNjb3BlIjoiYWRtaW5zIiwiaWF0IjoxNDY2Nzk2ODIyLCJleHAiOjQ2MjI0NzA0MjJ9.kP0i_RvTAmI8mgpIkDFhRX3XthSdP-eqqFKGcU92ZIQ

produces this response:

HTTP/1.1 200 OK
Content-Type: application/json;charset=UTF-8
...
{
    "claims": {
        "body": {
            "exp": 4622470422,
            "iat": 1466796822,
            "iss": "Stormpath",
            "name": "Micah Silverman",
            "scope": "admins",
            "sub": "msilverman"
        },
        "header": {
            "alg": "HS256"
        },
        "signature": "kP0i_RvTAmI8mgpIkDFhRX3XthSdP-eqqFKGcU92ZIQ"
    },
    "status": "SUCCESS"
}

The parser method of the StaticJWTController class looks like this:

@RequestMapping(value = "/parser", method = GET)
public JwtResponse parser(@RequestParam String jwt) throws UnsupportedEncodingException {
    Jws<Claims> jws = Jwts.parser()
        .setSigningKeyResolver(secretService.getSigningKeyResolver())
        .parseClaimsJws(jwt);

    return new JwtResponse(jws);
}

Line 4 supplies the same secret that was used to sign the JWT. Line 5 indicates that we expect the incoming string to be a signed JWT (a JWS) and parses the claims from it. Internally, it verifies the signature and will throw an exception if the signature is invalid.

Notice that in this case we are passing in a SigningKeyResolver rather than a key itself. This is one of the most powerful aspects of JJWT. The header of the JWT indicates the algorithm used to sign it; however, we need to verify the JWT before we can trust it, which would seem to be a catch-22. Let’s look at the SecretService.getSigningKeyResolver method:

private SigningKeyResolver signingKeyResolver = new SigningKeyResolverAdapter() {
    @Override
    public byte[] resolveSigningKeyBytes(JwsHeader header, Claims claims) {
        return TextCodec.BASE64.decode(secrets.get(header.getAlgorithm()));
    }
};

With access to the JwsHeader, I can inspect the algorithm and return the proper byte array for the secret that was used to sign the JWT. Now, JJWT will verify that the JWT has not been tampered with, using this byte array as the key.

If I remove the last character of the passed-in JWT (which is part of the signature), this is the response:

HTTP/1.1 400 Bad Request
Connection: close
Content-Type: application/json;charset=UTF-8
Date: Mon, 27 Jun 2016 13:19:08 GMT
Server: Apache-Coyote/1.1
Transfer-Encoding: chunked

{
    "exceptionType": "io.jsonwebtoken.SignatureException",
    "message": 
      "JWT signature does not match locally computed signature. JWT validity cannot be asserted and should not be trusted.",
    "status": "ERROR"
}
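If you are calling the parser directly rather than going through the sample endpoint, a minimal sketch of guarding against a tampered token looks like this (secretBytes is a stand-in for whatever key material was used to sign):

try {
    Jws<Claims> jws = Jwts.parser()
        .setSigningKey(secretBytes) // assumed: the same bytes used to sign the JWT
        .parseClaimsJws(jwt);
    // the signature checked out, so jws.getBody() can be trusted
} catch (SignatureException e) {
    // the JWT was tampered with; reject it
}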

6. JWTs in Practice: Spring Security CSRF Tokens

While the focus of this post is not Spring Security, we are going to delve into it a bit here to showcase some real-world usage of the JJWT library.

Cross-Site Request Forgery (CSRF) is a security vulnerability whereby a malicious website tricks you into submitting requests to a website that you have established trust with. One of the common remedies for this is to implement a synchronizer token pattern. This approach inserts a token into the web form, and the application server checks the incoming token against its repository to confirm that it is correct. If the token is missing or invalid, the server will respond with an error.

Spring Security has the synchronizer token pattern built in. Even better, if you are using Spring Boot and Thymeleaf templates, the synchronizer token is automatically inserted for you.

By default, the token that Spring Security uses is a “dumb” token. It’s just a series of letters and numbers. This approach is just fine and it works. In this section, we enhance the basic functionality by using JWTs as the token. In addition to verifying that the submitted token is the one expected, we validate the JWT to further prove that the token has not been tampered with and to ensure that it is not expired.

To get started, we are going to configure Spring Security using Java configuration. By default, all paths require authentication and all POST endpoints require CSRF tokens. We are going to relax that a bit so that what we’ve built so far still works.

@Configuration
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {

    private String[] ignoreCsrfAntMatchers = {
        "/dynamic-builder-compress",
        "/dynamic-builder-general",
        "/dynamic-builder-specific",
        "/set-secrets"
    };

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .csrf()
                .ignoringAntMatchers(ignoreCsrfAntMatchers)
            .and().authorizeRequests()
                .antMatchers("/**")
                .permitAll();
    }
}

We are doing two things here. First, with the ignoringAntMatchers call, we are saying that CSRF tokens are not required when posting to our REST API endpoints. Second, with antMatchers("/**").permitAll(), we are saying that unauthenticated access should be allowed for all paths.

Let’s confirm that Spring Security is working the way we expect. Fire up the app and hit this url in your browser:

http://localhost:8080/jwt-csrf-form

Here’s the Thymeleaf template for this view:

<!DOCTYPE html>
<html lang="en" xmlns:th="http://www.thymeleaf.org">
    <head>
        <!--/*/ <th:block th:include="fragments/head :: head"/> /*/-->
    </head>
    <body>
        <div class="container-fluid">
            <div class="row">
                <div class="box col-md-6 col-md-offset-3">
                    <p/>
                    <form method="post" th:action="@{/jwt-csrf-form}">
                        <input type="submit" class="btn btn-primary" value="Click Me!"/>
                    </form>
                </div>
            </div>
        </div>
    </body>
</html>

This is a very basic form that will POST to the same endpoint when submitted. Notice that there is no explicit reference to CSRF tokens in the form. If you view the source, you will see something like:

<input type="hidden" name="_csrf" value="5f375db2-4f40-4e72-9907-a290507cb25e" />

This is all the confirmation you need to know that Spring Security is functioning and that the Thymeleaf templates are automatically inserting the CSRF token.

To make the value a JWT, we will enable a custom CsrfTokenRepository. Here’s how our Spring Security configuration changes:

@Configuration
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {

    @Autowired
    CsrfTokenRepository jwtCsrfTokenRepository;

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .csrf()
                .csrfTokenRepository(jwtCsrfTokenRepository)
                .ignoringAntMatchers(ignoreCsrfAntMatchers)
            .and().authorizeRequests()
                .antMatchers("/**")
                .permitAll();
    }
}

To wire this up, we need a configuration that exposes a bean returning the custom token repository. Here’s the configuration:

@Configuration
public class CSRFConfig {

    @Autowired
    SecretService secretService;

    @Bean
    @ConditionalOnMissingBean
    public CsrfTokenRepository jwtCsrfTokenRepository() {
        return new JWTCsrfTokenRepository(secretService.getHS256SecretBytes());
    }
}

And, here’s our custom repository (the important bits):

public class JWTCsrfTokenRepository implements CsrfTokenRepository {

    private static final Logger log = LoggerFactory.getLogger(JWTCsrfTokenRepository.class);
    private byte[] secret;

    public JWTCsrfTokenRepository(byte[] secret) {
        this.secret = secret;
    }

    @Override
    public CsrfToken generateToken(HttpServletRequest request) {
        String id = UUID.randomUUID().toString().replace("-", "");

        Date now = new Date();
        Date exp = new Date(System.currentTimeMillis() + (1000*30)); // 30 seconds

        String token;
        try {
            token = Jwts.builder()
                .setId(id)
                .setIssuedAt(now)
                .setNotBefore(now)
                .setExpiration(exp)
                .signWith(SignatureAlgorithm.HS256, secret)
                .compact();
        } catch (UnsupportedEncodingException e) {
            log.error("Unable to create CSRf JWT: {}", e.getMessage(), e);
            token = id;
        }

        return new DefaultCsrfToken("X-CSRF-TOKEN", "_csrf", token);
    }

    @Override
    public void saveToken(CsrfToken token, HttpServletRequest request, HttpServletResponse response) {
        ...
    }

    @Override
    public CsrfToken loadToken(HttpServletRequest request) {
        ...
    }
}

The generateToken method creates a JWT that expires 30 seconds after it’s created. With this plumbing in place, we can fire up the application again and look at the source of /jwt-csrf-form.

Now, the hidden field looks like this:

<input type="hidden" name="_csrf" 
  value="eyJhbGciOiJIUzI1NiJ9.eyJqdGkiOiIxZjIyMDdiNTVjOWM0MjU0YjZlMjY4MjQwYjIwNzZkMSIsImlhdCI6MTQ2NzA3MDQwMCwibmJmIjoxNDY3MDcwNDAwLCJleHAiOjE0NjcwNzA0MzB9.2kYLO0iMWUheAncXAzm0UdQC1xUC5I6RI_ShJ_74e5o" />

Huzzah! Now our CSRF token is a JWT. That wasn’t too hard.

However, this is only half the puzzle. By default, Spring Security simply saves the CSRF token and confirms that the token submitted in a web form matches the one that’s saved. We want to extend the functionality to validate the JWT and make sure it hasn’t expired. To do that, we’ll add in a filter. Here’s what our Spring Security configuration looks like now:

@Configuration
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {

    ...

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .addFilterAfter(new JwtCsrfValidatorFilter(), CsrfFilter.class)
            .csrf()
                .csrfTokenRepository(jwtCsrfTokenRepository)
                .ignoringAntMatchers(ignoreCsrfAntMatchers)
            .and().authorizeRequests()
                .antMatchers("/**")
                .permitAll();
    }

    ...
}

With the addFilterAfter call, we’ve added a filter and placed it in the filter chain after the default CsrfFilter. So, by the time our filter is hit, the JWT token (as a whole) will have already been confirmed to be the correct value saved by Spring Security.

Here’s the JwtCsrfValidatorFilter (it’s private as it’s an inner class of our Spring Security configuration):

private class JwtCsrfValidatorFilter extends OncePerRequestFilter {

    @Override
    protected void doFilterInternal(
      HttpServletRequest request, 
      HttpServletResponse response, 
      FilterChain filterChain) throws ServletException, IOException {
        // NOTE: A real implementation should have a nonce cache so the token cannot be reused
        CsrfToken token = (CsrfToken) request.getAttribute("_csrf");

        if (
            // only care if it's a POST
            "POST".equals(request.getMethod()) &&
            // ignore if the request path is in our list
            Arrays.binarySearch(ignoreCsrfAntMatchers, request.getServletPath()) < 0 &&
            // make sure we have a token
            token != null
        ) {
            // CsrfFilter already made sure the token matched. 
            // Here, we'll make sure it's not expired
            try {
                Jwts.parser()
                    .setSigningKey(secret.getBytes("UTF-8"))
                    .parseClaimsJws(token.getToken());
            } catch (JwtException e) {
                // most likely an ExpiredJwtException, but this will handle any
                request.setAttribute("exception", e);
                response.setStatus(HttpServletResponse.SC_BAD_REQUEST);
                RequestDispatcher dispatcher = request.getRequestDispatcher("expired-jwt");
                dispatcher.forward(request, response);
            }
        }

        filterChain.doFilter(request, response);
    }
}

Take a look at the try block: we are parsing the JWT as before. In this case, if an exception is thrown, the request is forwarded to the expired-jwt template. If the JWT validates, then processing continues as normal.

This closes the loop on overriding the default Spring Security CSRF token behavior with a JWT token repository and validator.

If you fire up the app, browse to /jwt-csrf-form, wait a little more than 30 seconds and click the button, you will see something like this:

jwt_expired

7. JJWT Extended Features

We’ll close out our JJWT journey with a word on some of the features that extend beyond the specification.

7.1. Enforce Claims

As part of the parsing process, JJWT allows you to specify required claims and values those claims should have. This is very handy if there is certain information in your JWTs that must be present in order for you to consider them valid. It avoids a lot of branching logic to manually validate claims. Here’s the method that serves the /parser-enforce endpoint of our sample project.

@RequestMapping(value = "/parser-enforce", method = GET)
public JwtResponse parserEnforce(@RequestParam String jwt) 
  throws UnsupportedEncodingException {
    Jws<Claims> jws = Jwts.parser()
        .requireIssuer("Stormpath")
        .require("hasMotorcycle", true)
        .setSigningKeyResolver(secretService.getSigningKeyResolver())
        .parseClaimsJws(jwt);

    return new JwtResponse(jws);
}

The requireIssuer and require calls show you the syntax for registered claims as well as custom claims. In this example, the JWT will be considered invalid if the iss claim is not present or does not have the value Stormpath. It will also be invalid if the custom hasMotorcycle claim is not present or does not have the value true.

Let’s first create a JWT that follows the happy path:

http -v POST localhost:8080/dynamic-builder-specific \
  iss=Stormpath hasMotorcycle:=true sub=msilverman
POST /dynamic-builder-specific HTTP/1.1
Accept: application/json
...
{
    "hasMotorcycle": true,
    "iss": "Stormpath",
    "sub": "msilverman"
}

HTTP/1.1 200 OK
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Content-Type: application/json;charset=UTF-8
...
{
    "jwt": 
      "eyJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJTdG9ybXBhdGgiLCJoYXNNb3RvcmN5Y2xlIjp0cnVlLCJzdWIiOiJtc2lsdmVybWFuIn0.qrH-U6TLSVlHkZdYuqPRDtgKNr1RilFYQJtJbcgwhR0",
    "status": "SUCCESS"
}

Now, let’s validate that JWT:

http -v localhost:8080/parser-enforce?jwt=eyJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJTdG9ybXBhdGgiLCJoYXNNb3RvcmN5Y2xlIjp0cnVlLCJzdWIiOiJtc2lsdmVybWFuIn0.qrH-U6TLSVlHkZdYuqPRDtgKNr1RilFYQJtJbcgwhR0
GET /parser-enforce?jwt=eyJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJTdG9ybXBhdGgiLCJoYXNNb3RvcmN5Y2xlIjp0cnVlLCJzdWIiOiJtc2lsdmVybWFuIn0.qrH-U6TLSVlHkZdYuqPRDtgKNr1RilFYQJtJbcgwhR0 HTTP/1.1
Accept: */*
...
HTTP/1.1 200 OK
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Content-Type: application/json;charset=UTF-8
...
{
    "jws": {
        "body": {
            "hasMotorcycle": true,
            "iss": "Stormpath",
            "sub": "msilverman"
        },
        "header": {
            "alg": "HS256"
        },
        "signature": "qrH-U6TLSVlHkZdYuqPRDtgKNr1RilFYQJtJbcgwhR0"
    },
    "status": "SUCCESS"
}

So far, so good. Now, this time, let’s leave the hasMotorcycle claim out:

http -v POST localhost:8080/dynamic-builder-specific iss=Stormpath sub=msilverman

This time, if we try to validate the JWT:

http -v localhost:8080/parser-enforce?jwt=eyJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJTdG9ybXBhdGgiLCJzdWIiOiJtc2lsdmVybWFuIn0.YMONlFM1tNgttUYukDRsi9gKIocxdGAOLaJBymaQAWc

we get:

GET /parser-enforce?jwt=eyJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJTdG9ybXBhdGgiLCJzdWIiOiJtc2lsdmVybWFuIn0.YMONlFM1tNgttUYukDRsi9gKIocxdGAOLaJBymaQAWc HTTP/1.1
Accept: */*
...
HTTP/1.1 400 Bad Request
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Connection: close
Content-Type: application/json;charset=UTF-8
...
{
    "exceptionType": "io.jsonwebtoken.MissingClaimException",
    "message": 
      "Expected hasMotorcycle claim to be: true, but was not present in the JWT claims.",
    "status": "ERROR"
}

This indicates that our hasMotorcycle claim was expected, but was missing.

Let’s do one more example:

http -v POST localhost:8080/dynamic-builder-specific iss=Stormpath hasMotorcycle:=false sub=msilverman

This time, the required claim is present, but it has the wrong value. Let’s see the output of:

http -v localhost:8080/parser-enforce?jwt=eyJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJTdG9ybXBhdGgiLCJoYXNNb3RvcmN5Y2xlIjpmYWxzZSwic3ViIjoibXNpbHZlcm1hbiJ9.8LBq2f0eINB34AzhVEgsln_KDo-IyeM8kc-dTzSCr0c
GET /parser-enforce?jwt=eyJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJTdG9ybXBhdGgiLCJoYXNNb3RvcmN5Y2xlIjpmYWxzZSwic3ViIjoibXNpbHZlcm1hbiJ9.8LBq2f0eINB34AzhVEgsln_KDo-IyeM8kc-dTzSCr0c HTTP/1.1
Accept: */*
...
HTTP/1.1 400 Bad Request
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Connection: close
Content-Type: application/json;charset=UTF-8
...
{
    "exceptionType": "io.jsonwebtoken.IncorrectClaimException",
    "message": "Expected hasMotorcycle claim to be: true, but was: false.",
    "status": "ERROR"
}

This indicates that our hasMotorcycle claim was present, but had a value that was not expected.

MissingClaimException and IncorrectClaimException are your friends when enforcing claims in your JWTs, and this is a feature that only the JJWT library has.

7.2. JWT Compression

If you have a lot of claims on a JWT, it can get big – so big that it might not fit in a GET URL in some browsers.

Let’s make a big JWT:

http -v POST localhost:8080/dynamic-builder-specific \
  iss=Stormpath hasMotorcycle:=true sub=msilverman the=quick brown=fox jumped=over lazy=dog \
  somewhere=over rainbow=way up=high and=the dreams=you dreamed=of

Here’s the JWT that this produces:

eyJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJTdG9ybXBhdGgiLCJoYXNNb3RvcmN5Y2xlIjp0cnVlLCJzdWIiOiJtc2lsdmVybWFuIiwidGhlIjoicXVpY2siLCJicm93biI6ImZveCIsImp1bXBlZCI6Im92ZXIiLCJsYXp5IjoiZG9nIiwic29tZXdoZXJlIjoib3ZlciIsInJhaW5ib3ciOiJ3YXkiLCJ1cCI6ImhpZ2giLCJhbmQiOiJ0aGUiLCJkcmVhbXMiOiJ5b3UiLCJkcmVhbWVkIjoib2YifQ.AHNJxSTiDw_bWNXcuh-LtPLvSjJqwDvOOUcmkk7CyZA

That sucker’s big! Now, let’s hit a slightly different endpoint with the same claims:

http -v POST localhost:8080/dynamic-builder-compress \
  iss=Stormpath hasMotorcycle:=true sub=msilverman the=quick brown=fox jumped=over lazy=dog \
  somewhere=over rainbow=way up=high and=the dreams=you dreamed=of

This time, we get:

eyJhbGciOiJIUzI1NiIsImNhbGciOiJERUYifQ.eNpEzkESwjAIBdC7sO4JegdXnoC2tIk2oZLEGB3v7s84jjse_AFe5FOikc5ZLRycHQ3kOJ0Untu8C43ZigyUyoRYSH6_iwWOyGWHKd2Kn6_QZFojvOoDupRwyAIq4vDOzwYtugFJg1QnJv-5sY-TVjQqN7gcKJ3f-j8c-6J-baDFhEN_uGn58XtnpfcHAAD__w.3_wc-2skFBbInk0YAQ96yGWwr8r1xVdbHn-uGPTFuFE

62 characters shorter! Here’s the code for the method used to generate the JWT:

@RequestMapping(value = "/dynamic-builder-compress", method = POST)
public JwtResponse dynamicBuilderCompress(@RequestBody Map<String, Object> claims) 
  throws UnsupportedEncodingException {
    String jws = Jwts.builder()
        .setClaims(claims)
        .compressWith(CompressionCodecs.DEFLATE)
        .signWith(
            SignatureAlgorithm.HS256,
            secretService.getHS256SecretBytes()
        )
        .compact();
    return new JwtResponse(jws);
}

Notice the compressWith call, where we specify the compression algorithm to use. That’s all there is to it.

What about parsing compressed JWTs? The JJWT library automatically detects the compression and uses the same algorithm to decompress:

GET /parser?jwt=eyJhbGciOiJIUzI1NiIsImNhbGciOiJERUYifQ.eNpEzkESwjAIBdC7sO4JegdXnoC2tIk2oZLEGB3v7s84jjse_AFe5FOikc5ZLRycHQ3kOJ0Untu8C43ZigyUyoRYSH6_iwWOyGWHKd2Kn6_QZFojvOoDupRwyAIq4vDOzwYtugFJg1QnJv-5sY-TVjQqN7gcKJ3f-j8c-6J-baDFhEN_uGn58XtnpfcHAAD__w.3_wc-2skFBbInk0YAQ96yGWwr8r1xVdbHn-uGPTFuFE HTTP/1.1
Accept: */*
...
HTTP/1.1 200 OK
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Content-Type: application/json;charset=UTF-8
...
{
    "claims": {
        "body": {
            "and": "the",
            "brown": "fox",
            "dreamed": "of",
            "dreams": "you",
            "hasMotorcycle": true,
            "iss": "Stormpath",
            "jumped": "over",
            "lazy": "dog",
            "rainbow": "way",
            "somewhere": "over",
            "sub": "msilverman",
            "the": "quick",
            "up": "high"
        },
        "header": {
            "alg": "HS256",
            "calg": "DEF"
        },
        "signature": "3_wc-2skFBbInk0YAQ96yGWwr8r1xVdbHn-uGPTFuFE"
    },
    "status": "SUCCESS"
}

Notice the calg parameter in the header. This was automatically encoded into the JWT, and it provides the hint to the parser about which algorithm to use for decompression.

NOTE: The JWE specification does support compression. In an upcoming release of the JJWT library, we will support JWE and compressed JWEs. We will continue to support compression in other types of JWTs, even though it is not specified.

8. Token Tools for Java Devs

While the core focus of this article was not Spring Boot or Spring Security, using those two technologies made it easy to demonstrate all the features discussed in this article. You should be able to build the project, fire up the server, and start playing with the various endpoints we’ve discussed. Just hit:

http http://localhost:8080

Stormpath is also excited to bring a number of open source developer tools to the Java community. These include:

8.1. JJWT (What we’ve been talking about)

JJWT is an easy-to-use tool for developers to create and verify JWTs in Java. Like many libraries Stormpath supports, JJWT is completely free and open source (Apache License, Version 2.0), so everyone can see what it does and how it does it. Do not hesitate to report any issues, suggest improvements, and even submit some code!

8.2. jsonwebtoken.io and java.jsonwebtoken.io

jsonwebtoken.io is a developer tool we created to make it easy to decode JWTs. Simply paste an existing JWT into the appropriate field to decode its header, payload, and signature. jsonwebtoken.io is powered by nJWT, the cleanest free and open source (Apache License, Version 2.0) JWT library for Node.js developers. You can also see code generated for a variety of languages at this website. The website itself is open-source and can be found here.

java.jsonwebtoken.io is specifically for the JJWT library. You can alter the headers and payload in the upper right box, see the JWT generated by JJWT in the upper left box, and see a sample of the builder and parser Java code in the lower boxes. The website itself is open source and can be found here.

8.3. JWT Inspector

The new kid on the block, JWT Inspector is an open source Chrome extension that allows developers to inspect and debug JWTs directly in-browser. The JWT Inspector will discover JWTs on your site (in cookies, local/session storage, and headers) and make them easily accessible through your navigation bar and DevTools panel.

9. JWT This Down!

JWTs add some intelligence to ordinary tokens. The ability to cryptographically sign and verify tokens, build in expiration times, and encode other information into JWTs sets the stage for truly stateless session management. This has a big impact on the ability to scale applications.

At Stormpath, we use JWTs for OAuth2 tokens, CSRF tokens and assertions between microservices, among other usages.

Once you start using JWTs, you may never go back to the dumb tokens of the past. Have any questions? Hit me up at @afitnerd on Twitter.

Intro to the Jackson ObjectMapper



1. Overview

This writeup is focused on understanding the Jackson ObjectMapper class – and how to serialize Java objects into JSON and deserialize JSON strings into Java objects. To understand more about the Jackson library in general, the Jackson Tutorial is a good place to start.

2. Dependencies

Let’s first add the following dependencies to pom.xml:

<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.7.5</version>
</dependency>

This dependency will transitively add the following libraries to the classpath:

  1. jackson-annotations-2.7.5.jar
  2. jackson-core-2.7.5.jar
  3. jackson-databind-2.7.5.jar

Always use the latest version of Jackson databind, as listed in the Maven Central repository.

3. Reading and Writing Using ObjectMapper

Let’s start with the basic read and write operations.

The simple readValue API of the ObjectMapper is a good entry-point; this can be used to parse or deserialize JSON content into a Java object.

On the write side of things – the writeValue API can be used to serialize any Java object as JSON output.

We’ll use the following Car class with two fields as the object to serialize or deserialize throughout this article:

public class Car {

    private String color;
    private String type;

    // standard getters setters
}

3.1. Java Object to JSON

A simple example of serialization of Java Object into JSON using the writeValue method of ObjectMapper class:

ObjectMapper objectMapper = new ObjectMapper();
Car car = new Car("yellow", "renault");
objectMapper.writeValue(new File("target/car.json"), car);

The output of the above in the file will be the following:

{"color":"yellow","type":"renault"}

The writeValueAsString and writeValueAsBytes methods of the ObjectMapper class generate JSON from a Java object and return the generated JSON as a string or as a byte array:

String carAsString = objectMapper.writeValueAsString(car);
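And writeValueAsBytes works the same way, returning a byte array instead:

byte[] carAsBytes = objectMapper.writeValueAsBytes(car);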

3.2. JSON to Java Object

Below is a simple example of JSON string converted to Java object using the ObjectMapper class:

String json = "{ \"color\" : \"Black\", \"type\" : \"BMW\" }";
Car car = objectMapper.readValue(json, Car.class);	

The readValue function also accepts other forms of input like a file containing JSON string:

Car car = objectMapper.readValue(new File("target/json_car.json"), Car.class);

or a URL:

Car car = objectMapper.readValue(new URL("file:target/json_car.json"), Car.class);
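It also accepts an InputStream; a quick sketch, assuming the same file as above:

try (InputStream in = new FileInputStream("target/json_car.json")) {
    Car car = objectMapper.readValue(in, Car.class);
}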

3.3. JSON to Jackson JsonNode

Alternatively, a JSON document can be parsed into a JsonNode object and used to retrieve data from a specific node:

String json = "{ \"color\" : \"Black\", \"type\" : \"FIAT\" }";
JsonNode jsonNode = objectMapper.readTree(json);
String color = jsonNode.get("color").asText();
// Output: color -> Black
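JsonNode can also navigate nested structures; here’s a quick sketch over a hypothetical nested document:

String nestedJson = "{ \"car\" : { \"color\" : \"Black\", \"type\" : \"FIAT\" } }";
JsonNode root = objectMapper.readTree(nestedJson);
String color = root.path("car").path("color").asText();
// Output: color -> Black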

3.4. Creating Java List from JSON Array String

A JSON array can be parsed into a list of Java objects in the following way:

String jsonCarArray = 
  "[{ \"color\" : \"Black\", \"type\" : \"BMW\" }, { \"color\" : \"Red\", \"type\" : \"FIAT\" }]";
List<Car> listCar = objectMapper.readValue(jsonCarArray, new TypeReference<List<Car>>(){});

3.5. Creating Java Map from JSON String

A JSON string can be parsed into a Java Map object in the following way:

String json = "{ \"color\" : \"Black\", \"type\" : \"BMW\" }";
Map<String, Object> map = objectMapper.readValue(json, new TypeReference<Map<String,Object>>(){});

4. Advanced Features

One of the greatest strengths of the Jackson library is the highly customizable serialization and deserialization process.

In this section we will go through some advanced features, where the input or the output JSON can differ in structure from the object that generates or consumes it.

4.1. Configuring Serialization or Deserialization Feature

While converting JSON to Java classes, if the JSON string has some new fields, the default process will result in an exception:

String jsonString = "{ \"color\" : \"Black\", \"type\" : \"Fiat\", \"year\" : \"1970\" }";

Parsing the JSON string in the above example into the Car class will, by default, result in an UnrecognizedPropertyException.

Through the configure method we can extend the default process to ignore the new fields:

objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
Car car = objectMapper.readValue(jsonString, Car.class);

JsonNode jsonNodeRoot = objectMapper.readTree(jsonString);
JsonNode jsonNodeYear = jsonNodeRoot.get("year");
String year = jsonNodeYear.asText();

Yet another option is FAIL_ON_NULL_FOR_PRIMITIVES, which defines whether null values are allowed for primitive fields:

objectMapper.configure(DeserializationFeature.FAIL_ON_NULL_FOR_PRIMITIVES, false);
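With the feature disabled as above, a null in the JSON simply maps to the primitive’s default value. A quick sketch, using a hypothetical variant of Car with a primitive int field:

public class CarWithYear {
    private String color;
    private int year; // primitive field, cannot hold null

    // standard getters setters
}

String json = "{ \"color\" : \"Black\", \"year\" : null }";
CarWithYear car = objectMapper.readValue(json, CarWithYear.class);
// car.getYear() is 0 (the int default) instead of an exception being thrown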

Similarly, FAIL_ON_NUMBERS_FOR_ENUMS controls whether enum values are allowed to be serialized/deserialized as numbers:

objectMapper.configure(DeserializationFeature.FAIL_ON_NUMBERS_FOR_ENUMS, false);
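With that feature disabled, Jackson maps a JSON number to the enum constant at that ordinal. A quick sketch, using a hypothetical enum:

public enum FuelType {
    PETROL, DIESEL
}

FuelType fuel = objectMapper.readValue("1", FuelType.class);
// fuel is FuelType.DIESEL, the constant at ordinal 1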

You can find the comprehensive list of serialization and deserialization features on the official site.

4.2. Creating Custom Serializer or Deserializer

Another very important feature of the ObjectMapper class is the ability to register a custom serializer or deserializer. Custom serializers and deserializers are very useful in situations where the input or the output JSON is different in structure from the Java class to which or from which it must be mapped.

Below is an example of custom JSON serializer:

public class CustomCarSerializer extends JsonSerializer<Car> {

    @Override
    public void serialize(Car car, JsonGenerator jsonGenerator, SerializerProvider serializer) 
      throws IOException {
        jsonGenerator.writeStartObject();
        // write only the type, under a different property name
        jsonGenerator.writeStringField("car_brand", car.getType());
        jsonGenerator.writeEndObject();
    }
}

This custom serializer can be invoked like this:

ObjectMapper mapper = new ObjectMapper();
SimpleModule module = 
  new SimpleModule("CustomCarSerializer", new Version(1, 0, 0, null, null, null));
module.addSerializer(Car.class, new CustomCarSerializer());
mapper.registerModule(module);
Car car = new Car("yellow", "renault");
String carJson = mapper.writeValueAsString(car);

Here’s what the Car looks like (as JSON output) on the client side:

var carJson = {"car_brand":"renault"}

And here’s an example of a custom JSON deserializer:

public class CustomCarDeserializer extends JsonDeserializer<Car> {

    @Override
    public Car deserialize(JsonParser parser, DeserializationContext deserializer) 
      throws IOException {
        Car car = new Car();
        ObjectCodec codec = parser.getCodec();
        JsonNode node = codec.readTree(parser);

        // read only the color node; other properties are ignored
        JsonNode colorNode = node.get("color");
        String color = colorNode.asText();
        car.setColor(color);
        return car;
    }
}

This custom deserializer can be invoked in the following way:

String json = "{ \"color\" : \"Black\", \"type\" : \"BMW\" }";
ObjectMapper mapper = new ObjectMapper();
SimpleModule module =
  new SimpleModule("CustomCarDeserializer", new Version(1, 0, 0, null, null, null));
module.addDeserializer(Car.class, new CustomCarDeserializer());
mapper.registerModule(module);
Car car = mapper.readValue(json, Car.class);

4.3. Handling Date Formats

The default serialization of java.util.Date produces a number: the epoch timestamp (the number of milliseconds since January 1st, 1970, UTC). This is not very human-readable, so further conversion is needed to display it in a human-readable format.

Let us wrap the Car instance we used so far inside the Request class with the datePurchased property:

public class Request 
{
    private Car car;
    private Date datePurchased;

    // standard getters setters
}

To control the String format of a date, and set it to e.g. yyyy-MM-dd HH:mm a z, consider the following snippet:

ObjectMapper objectMapper = new ObjectMapper();
DateFormat df = new SimpleDateFormat("yyyy-MM-dd HH:mm a z");
objectMapper.setDateFormat(df);
String carAsString = objectMapper.writeValueAsString(request);
// output: {"car":{"color":"yellow","type":"renault"},"datePurchased":"2016-07-03 11:43 AM CEST"}

To learn more about serializing dates with Jackson, read our more in-depth writeup.

4.4. Handling Collections

Another small but useful feature available through DeserializationFeature is the ability to generate the desired type of collection from a JSON array, e.g. as an array:

String jsonCarArray = 
  "[{ \"color\" : \"Black\", \"type\" : \"BMW\" }, { \"color\" : \"Red\", \"type\" : \"FIAT\" }]";
ObjectMapper objectMapper = new ObjectMapper();
objectMapper.configure(DeserializationFeature.USE_JAVA_ARRAY_FOR_JSON_ARRAY, true);
Car[] cars = objectMapper.readValue(jsonCarArray, Car[].class);
// print cars

Or as a List:

String jsonCarArray = 
  "[{ \"color\" : \"Black\", \"type\" : \"BMW\" }, { \"color\" : \"Red\", \"type\" : \"FIAT\" }]";
ObjectMapper objectMapper = new ObjectMapper();
List<Car> listCar = objectMapper.readValue(jsonCarArray, new TypeReference<List<Car>>(){});
// print cars

More about handling collections with Jackson is available here.

5. Conclusion

Jackson ObjectMapper is a solid and mature JSON serialization/deserialization library for Java, and the ObjectMapper API provides a very simple way to parse and generate JSON objects with a lot of flexibility.

The article discussed the main features that make the library so popular. The source code that accompanies the article can be found on GitHub.



A Guide to JMockit Expectations


1. Intro

This article is the second installment in the JMockit series. You may want to read the first article as we are assuming that you are already familiar with JMockit’s basics.

Today we’ll go deeper and focus on expectations. We will show how to define more specific or generic argument matching and more advanced ways of defining values.

2. Argument Values Matching 

The following approaches apply both to Expectations as well as Verifications.

2.1. “Any” Fields

JMockit offers a set of utility fields for making argument matching more generic. One of these utilities is the set of anyX fields.

These will check that any value was passed and there is one for each primitive type (and the corresponding wrapper class), one for strings, and a “universal” one of type Object.

Let’s see an example:

public interface ExpectationsCollaborator {
    String methodForAny1(String s, int i, Boolean b);
    void methodForAny2(Long l, List<String> lst);
}

@Test
public void test(@Mocked ExpectationsCollaborator mock) throws Exception {
    new Expectations() {{
        mock.methodForAny1(anyString, anyInt, anyBoolean); 
        result = "any";
    }};

    Assert.assertEquals("any", mock.methodForAny1("barfooxyz", 0, Boolean.FALSE));
    mock.methodForAny2(2L, new ArrayList<>());

    new FullVerifications() {{
        mock.methodForAny2(anyLong, (List<String>) any);
    }};
}

You must take into account that when using the any field, you need to cast it to the expected type. The complete list of fields is present in the documentation.

2.2. “With” Methods

JMockit also provides several methods to help with generic argument matching. Those are the withX methods.

These allow for a little more advanced matching than the anyX fields. We can see an example here in which we’ll define an expectation for a method that will be triggered with a string containing foo, an integer not equal to 1, a non-null Boolean and any instance of the List class:

public interface ExpectationsCollaborator {
    String methodForWith1(String s, int i);
    void methodForWith2(Boolean b, List<String> l);
}

@Test
public void testForWith(@Mocked ExpectationsCollaborator mock) throws Exception {
    new Expectations() {{
        mock.methodForWith1(withSubstring("foo"), withNotEqual(1));
        result = "with";
    }};

    assertEquals("with", mock.methodForWith1("barfooxyz", 2));
    mock.methodForWith2(Boolean.TRUE, new ArrayList<>());

    new Verifications() {{
        mock.methodForWith2(withNotNull(), withInstanceOf(List.class));
    }};
}

You can see the complete list of withX methods on JMockit’s documentation.

Take into account that the special with(Delegate) and withArgThat(Matcher) will be covered in their own subsection.

2.3. Null Is Not Null

Something that is good to understand sooner than later is that null is not used to define an argument for which null has been passed to a mock.

Actually, null is used as syntactic sugar to define that any object will be passed (so it can only be used for parameters of reference type). To specifically verify that a given parameter receives the null reference, the withNull() matcher can be used.

For the next example, we’ll define the behaviour for a mock, that should be triggered when the arguments passed are: any string, any List, and a null reference:

public interface ExpectationsCollaborator {
    String methodForNulls1(String s, List<String> l);
    void methodForNulls2(String s, List<String> l);
}

@Test
public void testWithNulls(@Mocked ExpectationsCollaborator mock){
    new Expectations() {{
        mock.methodForNulls1(anyString, null); 
        result = "null";
    }};
    
    assertEquals("null", mock.methodForNulls1("blablabla", new ArrayList<String>()));
    mock.methodForNulls2("blablabla", null);
    
    new Verifications() {{
        mock.methodForNulls2(anyString, (List<String>) withNull());
    }};
}

Note the difference: null means any list and withNull() means a null reference to a list. In particular, using null avoids the need to cast the value to the declared parameter type (notice that the withNull() argument had to be cast, but the null argument did not).

The only condition for being able to use this is that at least one explicit argument matcher has been used for the expectation (either a with method or an any field).

2.4. “Times” Field

Sometimes, we want to constrain the number of invocations expected for a mocked method. For this, JMockit has the reserved words times, minTimes and maxTimes (all three allow non-negative integers only).

public interface ExpectationsCollaborator {
    void methodForTimes1();
    void methodForTimes2();
    void methodForTimes3();
}

@Test
public void testWithTimes(@Mocked ExpectationsCollaborator mock) {
    new Expectations() {{
        mock.methodForTimes1(); times = 2;
        mock.methodForTimes2();
    }};
    
    mock.methodForTimes1();
    mock.methodForTimes1();
    mock.methodForTimes2();
    mock.methodForTimes3();
    mock.methodForTimes3();
    mock.methodForTimes3();
    
    new Verifications() {{
        mock.methodForTimes3(); minTimes = 1; maxTimes = 3;
    }};
}

In this example, we’ve defined that exactly two invocations (not one, not three, exactly two) of methodForTimes1() should be done using the line times = 2;.

Then we used the default behavior (if no repetition constraint is given minTimes = 1; is used) to define that at least one invocation will be done to methodForTimes2().

Lastly, using minTimes = 1; followed by maxTimes = 3; we defined that between one and three invocations would occur to methodForTimes3().

Take into account that both minTimes and maxTimes can be specified for the same expectation, as long as minTimes is assigned first. On the other hand, times can only be used alone.

2.5. Custom Argument Matching

Sometimes argument matching is not as direct as simply specifying a value or using some of the predefined utilities (anyX or withX).

For those cases, JMockit relies on Hamcrest‘s Matcher interface. You just need to define a matcher for the specific testing scenario and use it with a withArgThat() call.

Let’s see an example for matching a specific class to a passed object:

public interface ExpectationsCollaborator {
    void methodForArgThat(Object o);
}

public class Model {
    public String getInfo(){
        return "info";
    }
}

@Test
public void testCustomArgumentMatching(@Mocked ExpectationsCollaborator mock) {
    new Expectations() {{
        mock.methodForArgThat(withArgThat(new BaseMatcher<Object>() {
            @Override
            public boolean matches(Object item) {
                return item instanceof Model && "info".equals(((Model) item).getInfo());
            }

            @Override
            public void describeTo(Description description) { }
        }));
    }};
    mock.methodForArgThat(new Model());
}

3. Returning Values

Let’s now look at the return values; keep in mind that the following approaches apply only to Expectations, as no return values can be defined for Verifications.

3.1. Result and Returns (…)

When using JMockit, you have three different ways of defining the expected result of the invocation of a mocked method. Of all three, we’ll talk now about the first two (the simplest ones) which will surely cover 90% of everyday use cases.

These two are the result field and the returns(Object…) method:

  • With the result field, you can define one return value for any non-void returning mocked method. This return value can also be an exception to be thrown (this time working for both non-void and void returning methods).
    • Several result field assignments can be done in order to return more than one value over successive method invocations (you can mix both return values and errors to be thrown).
    • The same behaviour will be achieved when assigning to result a list or an array of values (of the same type as the return type of the mocked method, NO exceptions here).
  • The returns(Object…) method is syntactic sugar for returning several values of the same type.

This is more easily shown with a code snippet:

public interface ExpectationsCollaborator {
    String methodReturnsString();
    int methodReturnsInt();
}

@Test
public void testResultAndReturns(@Mocked ExpectationsCollaborator mock) {
    new StrictExpectations() {{
        mock.methodReturnsString();
        result = "foo";
        result = new Exception();
        result = "bar";
        mock.methodReturnsInt(); result = new int[] { 1, 2, 3 };
        mock.methodReturnsString(); returns("foo", "bar");
        mock.methodReturnsInt(); result = 1;
    }};
    
    assertEquals("Should return foo", "foo", mock.methodReturnsString());
    try {
        mock.methodReturnsString();
    } catch (Exception e) { }
    
    assertEquals("Should return bar", "bar", mock.methodReturnsString());
    assertEquals("Should return 1", 1, mock.methodReturnsInt());
    assertEquals("Should return 2", 2, mock.methodReturnsInt());
    assertEquals("Should return 3", 3, mock.methodReturnsInt());
    assertEquals("Should return foo", "foo", mock.methodReturnsString());
    assertEquals("Should return bar", "bar", mock.methodReturnsString());
    assertEquals("Should return 1", 1, mock.methodReturnsInt());
}

In this example, we have defined that for the first three calls to methodReturnsString() the expected returns are (in order) “foo”, an exception and “bar”. We achieved this using three different assignations to the result field.

Then, using the returns(Object…) method, we defined that for the fourth and fifth calls, “foo” and “bar” should be returned.

For methodReturnsInt(), we first defined that it should return 1, 2 and lastly 3 by assigning an array with the different results to the result field, and then that it should return 1 via a simple assignment to the result field.

As you can see there are several ways of defining return values for mocked methods.

3.2. Delegators

To end the article we’re going to cover the third way of defining the return value: the Delegate interface. This interface is used for defining more complex return values when defining mocked methods.

We’re going to see an example to simplify the explanation:

public interface ExpectationsCollaborator {
    Object methodForDelegate(int i);
}

@Test
public void testDelegate(@Mocked ExpectationsCollaborator mock) {
    new Expectations() {{
        
        mock.methodForDelegate(anyInt);
        result = new Delegate() {
            public int delegate(int i) throws Exception {
                if(i < 3) {
                    return 5;
                } else {
                    throw new Exception();
                }
            }
        };
    }};
    
    assertEquals("Should return 5", 5, mock.methodForDelegate(1));
    try {
        mock.methodForDelegate(3);
    } catch (Exception e) { }
}

The way to use a delegate is to create a new instance of it and assign it to the result field. In this new instance, you should create a new method with the same parameters and return type as the mocked method (you can use any name for it). Inside this new method, use whatever implementation you want in order to return the desired value.

In the example, we did an implementation in which 5 should be returned when the value passed to the mocked method is less than 3, and an exception is thrown otherwise.

It may seem like quite a lot of code, but for some cases, it’ll be the only way to achieve the result we want.

4. Conclusion

With this, we practically showed everything we need to create expectations and verifications for our everyday tests.

We’ll of course publish more articles on JMockit, so stay tuned to learn even more.

And, as always, the full implementation of this tutorial can be found on the GitHub project.


Circular Dependencies in Spring



1. What Is a Circular Dependency?

A circular dependency happens when a bean A depends on another bean B, and the bean B depends on the bean A as well:

Bean A → Bean B → Bean A

Of course, we could have more beans involved:

Bean A → Bean B → Bean C → Bean D → Bean E → Bean A

2. What Happens in Spring

When the Spring context is loading all the beans, it tries to create them in the order needed for them to work completely. For instance, suppose we don’t have a circular dependency, as in the following case:

Bean A → Bean B → Bean C

Spring will create bean C, then create bean B (and inject bean C into it), then create bean A (and inject bean B into it).

But, when there is a circular dependency, Spring cannot decide which of the beans should be created first, since they depend on one another. In these cases, Spring will raise a BeanCurrentlyInCreationException while loading the context.

This can happen in Spring when using constructor injection; if you use other types of injection, you should not run into this problem, since the dependencies will be injected when they are needed rather than at context loading time.

3. A Quick Example

Let’s define two beans that depend on one another (via constructor injection):

@Component
public class CircularDependencyA {

    private CircularDependencyB circB;

    @Autowired
    public CircularDependencyA(CircularDependencyB circB) {
        this.circB = circB;
    }
}
@Component
public class CircularDependencyB {

    private CircularDependencyA circA;

    @Autowired
    public CircularDependencyB(CircularDependencyA circA) {
        this.circA = circA;
    }
}

Now we can write a Configuration class for the tests, let’s call it TestConfig, that specifies the base package to scan for components. Let’s assume our beans are defined in package “com.baeldung.circulardependency”:

@Configuration
@ComponentScan(basePackages = { "com.baeldung.circulardependency" })
public class TestConfig {
}

And finally we can write a JUnit test to check the circular dependency. The test can be empty, since the circular dependency will be detected during the context loading.

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = { TestConfig.class })
public class CircularDependencyTest {

    @Test
    public void givenCircularDependency_whenConstructorInjection_thenItFails() {
        // Empty test; we just want the context to load
    }
}

If you try to run this test, you will get the following exception:

BeanCurrentlyInCreationException: Error creating bean with name 'circularDependencyA':
Requested bean is currently in creation: Is there an unresolvable circular reference?

4. The Workarounds

We will show some of the most popular ways to deal with this problem.

4.1. Redesign

When you have a circular dependency, it’s likely you have a design problem and the responsibilities are not well separated. You should try to redesign the components properly so their hierarchy is well designed and there is no need for circular dependencies.

If you cannot redesign the components (there can be many possible reasons for that: legacy code, code that has already been tested and cannot be modified, not enough time or resources for a complete redesign…), there are some workarounds to try.

4.2. Use @Lazy

A simple way to break the cycle is telling Spring to initialize one of the beans lazily. That is, instead of fully initializing the bean, it will create a proxy and inject it into the other bean. The injected bean will only be fully created when it’s first needed.

To try this with our code, you can change the CircularDependencyA to the following:

@Component
public class CircularDependencyA {

    private CircularDependencyB circB;

    @Autowired
    public CircularDependencyA(@Lazy CircularDependencyB circB) {
        this.circB = circB;
    }
}

If you run the test now, you will see that the error does not happen this time.

4.3. Use Setter Injection

One of the most popular workarounds, and also what the Spring documentation proposes, is using setter injection. You can try changing the constructor injection to setter injection. This way, Spring creates the beans, but the dependencies are not injected until they are needed.

We will change our classes to use setter injections and will add another field (message) to CircularDependencyB so we can make a proper unit test:

@Component
public class CircularDependencyA {

    private CircularDependencyB circB;

    @Autowired
    public void setCircB(CircularDependencyB circB) {
        this.circB = circB;
    }

    public CircularDependencyB getCircB() {
        return circB;
    }
}
@Component
public class CircularDependencyB {

    private CircularDependencyA circA;

    private String message = "Hi!";

    @Autowired
    public void setCircA(CircularDependencyA circA) {
        this.circA = circA;
    }

    public String getMessage() {
        return message;
    }
}

Now we have to make some changes to our unit test:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = { TestConfig.class })
public class CircularDependencyTest {

    @Autowired
    ApplicationContext context;

    @Bean
    public CircularDependencyA getCircularDependencyA() {
        return new CircularDependencyA();
    }

    @Bean
    public CircularDependencyB getCircularDependencyB() {
        return new CircularDependencyB();
    }

    @Test
    public void givenCircularDependency_whenSetterInjection_thenItWorks() {
        CircularDependencyA circA = context.getBean(CircularDependencyA.class);

        Assert.assertEquals("Hi!", circA.getCircB().getMessage());
    }
}

The following explains the annotations seen above:

@Bean: tells the Spring framework that these methods must be used to retrieve an implementation of the beans to inject.

@Test: the test gets the CircularDependencyA bean from the context and asserts that its CircularDependencyB has been injected properly, checking the value of its message property.

4.4. Use @PostConstruct

Another way to break the cycle is injecting a dependency using @Autowired on one of the beans, and then use a method annotated with @PostConstruct to set the other dependency.

Our beans could have the following code:

@Component
public class CircularDependencyA {

    @Autowired
    private CircularDependencyB circB;

    @PostConstruct
    public void init() {
        circB.setCircA(this);
    }

    public CircularDependencyB getCircB() {
        return circB;
    }
}
@Component
public class CircularDependencyB {

    private CircularDependencyA circA;
	
    private String message = "Hi!";

    public void setCircA(CircularDependencyA circA) {
        this.circA = circA;
    }
	
    public String getMessage() {
        return message;
    }
}

And we can run the same test we previously had, so we check that the circular dependency exception is still not being thrown and that the dependencies are properly injected.

4.5. Implement ApplicationContextAware and InitializingBean

If one of the beans implements ApplicationContextAware, the bean has access to the Spring context and can extract the other bean from there. By implementing InitializingBean, we indicate that this bean has to perform some actions after all its properties have been set; in this case, we want to manually set our dependency.

The code of our beans would be:

@Component
public class CircularDependencyA implements ApplicationContextAware, InitializingBean {

    private CircularDependencyB circB;

    private ApplicationContext context;

    public CircularDependencyB getCircB() {
        return circB;
    }

    @Override
    public void afterPropertiesSet() throws Exception {
        circB = context.getBean(CircularDependencyB.class);
    }

    @Override
    public void setApplicationContext(final ApplicationContext ctx) throws BeansException {
        context = ctx;
    }
}
@Component
public class CircularDependencyB {

    private CircularDependencyA circA;

    private String message = "Hi!";

    @Autowired
    public void setCircA(CircularDependencyA circA) {
        this.circA = circA;
    }

    public String getMessage() {
        return message;
    }
}

Again, we can run the previous test and see that the exception is not thrown and that the test is working as expected.

5. In Conclusion

There are many ways to deal with circular dependencies in Spring. The first thing to consider is to redesign your beans so there is no need for circular dependencies: they are usually a symptom of a design that can be improved.

But if you absolutely need to have circular dependencies in your project, you can follow some of the workarounds suggested here.

The preferred method is using setter injection. But there are other alternatives, generally based on stopping Spring from managing the initialization and injection of the beans, and taking care of that yourself using one strategy or another.

You can find the beans shown above here, and the unit test here.


Java Web Weekly, Issue 134



At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> Zero Turnaround releases RebelLabs Developer Productivity Report [infoq.com]

Let’s start with the yearly report from RebelLabs, providing some very interesting insights into the trends of our ecosystem.

>> How we fixed all database connection leaks [in.relation.to]

Very cool, to-the-point walk-through of how the connection leaks in the large Hibernate test suite were handled.

>> JUnit 5 – An Early Test Drive – Part 1 [infoq.com]

An early look at the upcoming JUnit 5.

>> Notes on Reactive Programming Part III: A Simple HTTP Server Application [spring.io]

Reactive programming is coming to Spring with version 5 – we know that by now.

The question is: what are the scenarios where it’s going to make a significant difference, and how can we use it before Spring 5 comes out?

And this new installment does a good job answering both of these questions.

>> Custom Audit Log With Spring And Hibernate [bozho.net]

There are projects where you can use some of the nicer ways of doing audit. And then there are some codebases where that’s not possible without major and painful refactoring. Luckily, there’s a clean, manual way of doing audit as well.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> The Hardest Part About Microservices: Your Data [christianposta.com]

Data is of course the most complex part of doing Microservices well, and in my experience, the number one reason teams fail during these kinds of implementations.

It turns out that, for example – getting transactional boundaries right across multiple systems is a hard problem to solve, especially without a very good understanding of the semantics to achieve in the system and a clear set of self-imposed limitations at the start.

>> An approach to test your user interface more efficiently [ontestautomation.com]

A quick, interesting read using a pattern I knew very little about – Model-View-ViewModel.

Also worth reading:

3. Musings

>> How to De-Brilliant Your Code [daedtech.com]

I enjoy reading through these listener questions, as they’re a nice change of pace.

And, just as a quick side-note, writing a feature without using the if keyword anywhere is certainly a nice way to spend the weekend 🙂

>> How to Add Static Analysis to Your Process [daedtech.com]

An intro to the thinking, expectations and how-to of dipping your toe into the deep waters of static analysis.

>> Managing rapid growth [dandreamsofcoding.com]

During the last couple of years, the company I’m working for has grown from 2 to more than 600-700 (last time I checked).

It’s definitely quite a ride to go through that kind of growth, and this writeup makes some good points about how to do that well.

>> Sources of Inspiration [daedtech.com]

A quick read, a solid reading list and some inspiration.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Why did you reject my friend request on Facebook? [dilbert.com]

>> I trust them like I trust you [dilbert.com]

>> Stop saying what you’re thinking [dilbert.com]

5. Pick of the Week

This week, I finally finished and launched Learn Spring Security.

It’s been one hell of a week – which is why I’m writing this in the nick of time instead of several days before hitting publish, like I usually do.

Here it is:

>> The Master Class of Learn Spring Security [baeldung.com]

 

