Channel: Baeldung

Bootstrapping JPA Programmatically in Java


1. Overview

Most JPA-driven applications make heavy use of the “persistence.xml” file for getting a JPA implementation, such as Hibernate or OpenJPA.

This file provides a centralized mechanism for configuring one or more persistence units and the associated persistence contexts.

And while this approach isn't inherently wrong, it's not suitable for use cases where we need to test, in isolation, the application components that use different persistence units.

On the bright side, it’s possible to bootstrap a JPA implementation without resorting to the “persistence.xml” file at all, by just using plain Java.

In this tutorial, we’ll see how to accomplish this with Hibernate.

2. Implementing the PersistenceUnitInfo Interface

In a typical XML-based JPA configuration, the JPA implementation automatically takes care of implementing the PersistenceUnitInfo interface.

Using all the data gathered by parsing the “persistence.xml” file, the persistence provider uses this implementation to create an entity manager factory. From this factory, we can obtain an entity manager.

Since we won’t rely on the “persistence.xml” file, the first thing that we need to do is to provide our own PersistenceUnitInfo implementation. We’ll use Hibernate for our persistence provider:

public class HibernatePersistenceUnitInfo implements PersistenceUnitInfo {
    
    public static String JPA_VERSION = "2.1";
    private String persistenceUnitName;
    private PersistenceUnitTransactionType transactionType
      = PersistenceUnitTransactionType.RESOURCE_LOCAL;
    private List<String> managedClassNames;
    private List<String> mappingFileNames = new ArrayList<>();
    private Properties properties;
    private DataSource jtaDataSource;
    private DataSource nonjtaDataSource;
    private List<ClassTransformer> transformers = new ArrayList<>();
    
    public HibernatePersistenceUnitInfo(
      String persistenceUnitName, List<String> managedClassNames, Properties properties) {
        this.persistenceUnitName = persistenceUnitName;
        this.managedClassNames = managedClassNames;
        this.properties = properties;
    }

    // standard setters / getters   
}

In a nutshell, the HibernatePersistenceUnitInfo class is just a plain data container, which stores the configuration parameters bound to a specific persistence unit. This includes the persistence unit name, the managed entity classes’ names, the transaction type, the JTA and non-JTA data sources, and so forth.
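For completeness, here's a rough sketch of how a few of the interface's getters might look. The provider class name shown, Hibernate's HibernatePersistenceProvider, is what we'd use with Hibernate; the exact set of methods to implement depends on the JPA version:

```java
// A sketch of a few of the getters required by the PersistenceUnitInfo
// contract, inside HibernatePersistenceUnitInfo
@Override
public String getPersistenceUnitName() {
    return persistenceUnitName;
}

@Override
public String getPersistenceProviderClassName() {
    // Hibernate's JPA provider implementation
    return "org.hibernate.jpa.HibernatePersistenceProvider";
}

@Override
public PersistenceUnitTransactionType getTransactionType() {
    return transactionType;
}

@Override
public List<String> getManagedClassNames() {
    return managedClassNames;
}
```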

3. Creating an Entity Manager Factory with Hibernate’s EntityManagerFactoryBuilderImpl Class

Now that we have a custom PersistenceUnitInfo implementation in place, the last thing that we need to do is get an entity manager factory.

Hibernate makes this process a breeze, with its EntityManagerFactoryBuilderImpl class (a neat implementation of the builder pattern).

To provide a higher level of abstraction, let’s create a class that wraps the functionality of EntityManagerFactoryBuilderImpl.

First, let’s showcase the methods that take care of creating an entity manager factory and an entity manager, using Hibernate’s EntityManagerFactoryBuilderImpl class and our HibernatePersistenceUnitInfo class:

public class JpaEntityManagerFactory {
    private String DB_URL = "jdbc:mysql://databaseurl";
    private String DB_USER_NAME = "username";
    private String DB_PASSWORD = "password";
    private Class[] entityClasses;
    
    public JpaEntityManagerFactory(Class[] entityClasses) {
        this.entityClasses = entityClasses;
    }
    
    public EntityManager getEntityManager() {
        return getEntityManagerFactory().createEntityManager();
    }
    
    protected EntityManagerFactory getEntityManagerFactory() {
        PersistenceUnitInfo persistenceUnitInfo = getPersistenceUnitInfo(
          getClass().getSimpleName());
        Map<String, Object> configuration = new HashMap<>();
        return new EntityManagerFactoryBuilderImpl(
          new PersistenceUnitInfoDescriptor(persistenceUnitInfo), configuration)
          .build();
    }
    
    protected HibernatePersistenceUnitInfo getPersistenceUnitInfo(String name) {
        return new HibernatePersistenceUnitInfo(name, getEntityClassNames(), getProperties());
    }

    // additional methods
}

Next, let’s take a look at the methods that provide the parameters required by EntityManagerFactoryBuilderImpl and HibernatePersistenceUnitInfo.

These parameters include the managed entity classes, the entity classes’ names, Hibernate’s configuration properties, and a MysqlDataSource object:

public class JpaEntityManagerFactory {
    //...
    
    protected List<String> getEntityClassNames() {
        return Arrays.asList(getEntities())
          .stream()
          .map(Class::getName)
          .collect(Collectors.toList());
    }
    
    protected Properties getProperties() {
        Properties properties = new Properties();
        properties.put("hibernate.dialect", "org.hibernate.dialect.MySQLDialect");
        properties.put("hibernate.id.new_generator_mappings", false);
        properties.put("hibernate.connection.datasource", getMysqlDataSource());
        return properties;
    }
    
    protected Class[] getEntities() {
        return entityClasses;
    }
    
    protected DataSource getMysqlDataSource() {
        MysqlDataSource mysqlDataSource = new MysqlDataSource();
        mysqlDataSource.setURL(DB_URL);
        mysqlDataSource.setUser(DB_USER_NAME);
        mysqlDataSource.setPassword(DB_PASSWORD);
        return mysqlDataSource;
    }
}

For simplicity’s sake, we’ve hard-coded the database connection parameters within the JpaEntityManagerFactory class. In production, though, we should store these in a separate properties file.

Furthermore, the getMysqlDataSource() method returns a fully-initialized MysqlDataSource object.

We’ve done this just to keep things easy to follow. In a more realistic, loosely-coupled design, we would inject a DataSource object using EntityManagerFactoryBuilderImpl’s withDataSource() method, rather than creating it within the class.
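As a sketch of that looser design, the factory could accept the DataSource from the outside and hand it to the builder. The withDataSource() call is the builder method mentioned above; the constructor shown here is a hypothetical variant of the one in our class:

```java
// Hypothetical variant of JpaEntityManagerFactory: the DataSource is
// injected rather than created internally
public JpaEntityManagerFactory(DataSource dataSource, Class[] entityClasses) {
    this.dataSource = dataSource;
    this.entityClasses = entityClasses;
}

protected EntityManagerFactory getEntityManagerFactory() {
    PersistenceUnitInfo persistenceUnitInfo =
      getPersistenceUnitInfo(getClass().getSimpleName());
    return new EntityManagerFactoryBuilderImpl(
      new PersistenceUnitInfoDescriptor(persistenceUnitInfo), new HashMap<>())
      .withDataSource(dataSource)
      .build();
}
```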

4. Performing CRUD Operations with an Entity Manager

Finally, let’s see how to use a JpaEntityManagerFactory instance for getting a JPA entity manager and performing CRUD operations. (Note that we’ve omitted the User class for brevity’s sake):

public static void main(String[] args) {
    EntityManager entityManager = getJpaEntityManager();
    User user = entityManager.find(User.class, 1);
    
    entityManager.getTransaction().begin();
    user.setName("John");
    user.setEmail("john@domain.com");
    entityManager.merge(user);
    entityManager.getTransaction().commit();
    
    entityManager.getTransaction().begin();
    entityManager.persist(new User("Monica", "monica@domain.com"));
    entityManager.getTransaction().commit();
 
    // additional CRUD operations
}

private static class EntityManagerHolder {
    private static final EntityManager ENTITY_MANAGER = new JpaEntityManagerFactory(
      new Class[]{User.class})
      .getEntityManager();
}

public static EntityManager getJpaEntityManager() {
    return EntityManagerHolder.ENTITY_MANAGER;
}

5. Conclusion

In this article, we showed how to programmatically bootstrap a JPA entity manager using a custom implementation of JPA’s PersistenceUnitInfo interface and Hibernate’s EntityManagerFactoryBuilderImpl class, without having to rely on the traditional “persistence.xml” file.

As usual, all the code samples shown in this article are available over on GitHub.


Controlling Bean Creation Order with @DependsOn Annotation


1. Overview

Spring, by default, manages beans’ lifecycle and arranges their initialization order.

But we can still customize it based on our needs. We can choose either the SmartLifecycle interface or the @DependsOn annotation for managing initialization order.

This tutorial explores the @DependsOn annotation and its behavior in the case of a missing bean or a circular dependency, as well as the simple case of needing one bean initialized before another.

2. Maven

First of all, let’s import spring-context dependency in our pom.xml file. We should always refer to Maven Central for the latest version of dependencies:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context</artifactId>
    <version>5.0.7.RELEASE</version>
</dependency>

3. @DependsOn

We should use this annotation for specifying bean dependencies. Spring guarantees that the defined beans will be initialized before attempting to initialize the current bean.

Let’s say we have a FileProcessor which depends on a FileReader and FileWriter. In this case, FileReader and FileWriter should be initialized before the FileProcessor.

4. Configuration

The configuration file is a pure Java class with @Configuration annotation:

@Configuration
@ComponentScan("com.baeldung.dependson")
public class Config {
 
    @Bean
    @DependsOn({"fileReader","fileWriter"})
    public FileProcessor fileProcessor(){
        return new FileProcessor();
    }
    
    @Bean("fileReader")
    public FileReader fileReader() {
        return new FileReader();
    }
    
    @Bean("fileWriter")
    public FileWriter fileWriter() {
        return new FileWriter();
    }   
}

FileProcessor specifies its dependencies with @DependsOn. We can also annotate a Component with @DependsOn:

@Component
@DependsOn({"fileReader", "fileWriter"})
public class FileProcessor {}

5. Usage

Let's create a File class. Each of the beans updates the text within File: FileReader updates it as read, FileWriter updates it as write, and FileProcessor updates the text as processed:

@Test
public void whenFileProcessorIsCreated_thenFileTextContainsProcessed() {
    FileProcessor processor = context.getBean(FileProcessor.class);
    assertTrue(processor.process().endsWith("processed"));
}

5.1. Missing Dependency

In the case of a missing dependency, Spring throws a BeanCreationException whose root cause is a NoSuchBeanDefinitionException. Read more about NoSuchBeanDefinitionException here.

For example, the dummyFileProcessor bean depends on a dummyFileWriter bean. Since dummyFileWriter doesn't exist, the container throws a BeanCreationException:

@Test(expected=NoSuchBeanDefinitionException.class)
public void whenDependentBeanNotAvailable_thenThrowsNoSuchBeanDefinitionException() {
    context.getBean("dummyFileProcessor");
}

5.2. Circular Dependency

Also, in this case, it throws BeanCreationException and highlights that the beans have a circular dependency:

@Bean("dummyFileProcessorCircular")
@DependsOn({"dummyFileReaderCircular"})
@Lazy
public FileProcessor dummyFileProcessorCircular() {
    return new FileProcessor(file);
}

Circular dependencies can happen if a bean has an eventual dependency on itself, creating a cycle:

Bean1 -> Bean4 -> Bean6 -> Bean1
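For instance, two beans pointing at each other through @DependsOn form the smallest possible cycle. The bean names here are purely illustrative, and this configuration fails at startup:

```java
// Illustrative only - starting a context with these two beans throws a
// BeanCreationException reporting a circular depends-on relationship
@Bean("beanA")
@DependsOn("beanB")
public Object beanA() {
    return new Object();
}

@Bean("beanB")
@DependsOn("beanA")
public Object beanB() {
    return new Object();
}
```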

6. Key Points

Finally, there are a few points we should take care of while using the @DependsOn annotation:

  • While using @DependsOn, we must use component-scanning
  • If a DependsOn-annotated class is declared via XML, DependsOn annotation metadata is ignored

7. Conclusion

@DependsOn becomes especially useful when building systems with complex dependency requirements.

It facilitates dependency injection, ensuring that Spring handles the initialization of all required beans before loading our dependent class.

As always, the code can be found over on GitHub.

Find the Middle Element of a Linked List


1. Overview

In this tutorial, we’re going to explain how to find the middle element of a linked list in Java.

We’ll introduce the main problems in the next sections, and we’ll show different approaches to solving them.

2. Keeping Track of the Size

This problem can be easily solved just by keeping track of the size when we add new elements to the list. If we know the size, we also know where the middle element is, so the solution is trivial.

Let’s see an example using the Java implementation of a LinkedList:

public static Optional<String> findMiddleElementLinkedList(
  LinkedList<String> linkedList) {
    if (linkedList == null || linkedList.isEmpty()) {
        return Optional.empty();
    }

    return Optional.of(linkedList.get(
      (linkedList.size() - 1) / 2));
}
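As a quick sanity check, here's a self-contained sketch exercising this approach with lists of five and four elements. The class name is just for the demo; the logic is the same as the method above:

```java
import java.util.LinkedList;
import java.util.Optional;

public class MiddleElementDemo {

    // same logic as the method above, repeated so the sketch runs standalone
    static Optional<String> findMiddleElementLinkedList(LinkedList<String> linkedList) {
        if (linkedList == null || linkedList.isEmpty()) {
            return Optional.empty();
        }
        return Optional.of(linkedList.get((linkedList.size() - 1) / 2));
    }

    public static void main(String[] args) {
        LinkedList<String> odd = new LinkedList<>();
        LinkedList<String> even = new LinkedList<>();
        for (int i = 1; i <= 5; i++) {
            odd.add(String.valueOf(i));
            if (i <= 4) {
                even.add(String.valueOf(i));
            }
        }
        System.out.println(findMiddleElementLinkedList(odd).get());  // middle of 1..5
        System.out.println(findMiddleElementLinkedList(even).get()); // middle of 1..4
    }
}
```

Note that for an even number of elements, this picks the first of the two central nodes.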

If we check the internal code of the LinkedList class, we can see that in this example we’re just traversing the list till we reach the middle element:

Node<E> node(int index) {
    if (index < (size >> 1)) {
        Node<E> x = first;
        for (int i = 0; i < index; i++) {
            x = x.next;
        }
        return x;
    } else {
        Node<E> x = last;
        for (int i = size - 1; i > index; i--) {
            x = x.prev;
        }
        return x;
    }
}

3. Finding the Middle Without Knowing the Size

It’s very common that we encounter problems where we only have the head node of a linked list, and we need to find the middle element. In this case, we don’t know the size of the list, which makes this problem harder to solve.

We’ll show in the next sections several approaches to solving this problem, but first, we need to create a class to represent a node of the list.

Let’s create a Node class, which stores String values:

public static class Node {

    private Node next;
    private String data;

    // constructors/getters/setters
  
    public boolean hasNext() {
        return next != null;
    }

    public void setNext(Node next) {
        this.next = next;
    }

    public String toString() {
        return this.data;
    }
}

Also, we’ll use this helper method in our test cases to create a singly linked list using only our nodes:

private static Node createNodesList(int n) {
    Node head = new Node("1");
    Node current = head;

    for (int i = 2; i <= n; i++) {
        Node newNode = new Node(String.valueOf(i));
        current.setNext(newNode);
        current = newNode;
    }

    return head;
}

3.1. Finding the Size First

The simplest approach to tackle this problem is to find the size of the list first, and after that follow the same approach that we used before – to iterate until the middle element.

Let’s see this solution in action:

public static Optional<String> findMiddleElementFromHead(Node head) {
    if (head == null) {
        return Optional.empty();
    }

    // calculate the size of the list
    Node current = head;
    int size = 1;
    while (current.hasNext()) {
        current = current.next();
        size++;
    }

    // iterate till the middle element
    current = head;
    for (int i = 0; i < (size - 1) / 2; i++) {
        current = current.next();
    }

    return Optional.of(current.data());
}

As we can see, this code iterates through the list twice. Therefore, this solution has poor performance and isn't recommended.

3.2. Finding the Middle Element in One Pass Iteratively

We’re now going to improve the previous solution by finding the middle element with only one iteration over the list.

To do that iteratively, we need two pointers to iterate through the list at the same time. One pointer will advance 2 nodes in each iteration, and the other pointer will advance only one node per iteration.

When the faster pointer reaches the end of the list, the slower pointer will be in the middle:

public static Optional<String> findMiddleElementFromHead1PassIteratively(Node head) {
    if (head == null) {
        return Optional.empty();
    }

    Node slowPointer = head;
    Node fastPointer = head;

    while (fastPointer.hasNext() && fastPointer.next().hasNext()) {
        fastPointer = fastPointer.next().next();
        slowPointer = slowPointer.next();
    }

    return Optional.ofNullable(slowPointer.data());
}

We can test this solution with a simple unit test using lists with both odd and even number of elements:

@Test
public void whenFindingMiddleFromHead1PassIteratively_thenMiddleFound() {
 
    assertEquals("3", MiddleElementLookup
      .findMiddleElementFromHead1PassIteratively(
        createNodesList(5)).get());
    assertEquals("2", MiddleElementLookup
      .findMiddleElementFromHead1PassIteratively(
        createNodesList(4)).get());
}

3.3. Finding the Middle Element in One Pass Recursively

Another way to solve this problem in one pass is by using recursion. We can iterate till the end of the list to know the size and, in the callbacks, we just count until the half of the size.

To do this in Java, we’re going to create an auxiliary class to keep the references of the list size and the middle element during the execution of all the recursive calls:

private static class MiddleAuxRecursion {
    Node middle;
    int length = 0;
}

Now, let’s implement the recursive method:

private static void findMiddleRecursively(
  Node node, MiddleAuxRecursion middleAux) {
    if (node == null) {
        // reached the end
        middleAux.length = middleAux.length / 2;
        return;
    }
    middleAux.length++;
    findMiddleRecursively(node.next(), middleAux);

    if (middleAux.length == 0) {
        // found the middle
        middleAux.middle = node;
    }
    
    middleAux.length--;
}

And finally, let’s create a method that calls the recursive one:

public static Optional<String> findMiddleElementFromHead1PassRecursively(Node head) {
 
    if (head == null) {
        return Optional.empty();
    }

    MiddleAuxRecursion middleAux = new MiddleAuxRecursion();
    findMiddleRecursively(head, middleAux);
    return Optional.of(middleAux.middle.data());
}

Again, we can test it in the same way as we did before:

@Test
public void whenFindingMiddleFromHead1PassRecursively_thenMiddleFound() {
    assertEquals("3", MiddleElementLookup
      .findMiddleElementFromHead1PassRecursively(
        createNodesList(5)).get());
    assertEquals("2", MiddleElementLookup
      .findMiddleElementFromHead1PassRecursively(
        createNodesList(4)).get());
}

4. Conclusion

In this article, we’ve introduced the problem of finding the middle element of a linked list in Java, and we’ve shown different ways of solving it.

We’ve started from the simplest approach where we kept track of the size, and after that, we’ve continued with the solutions to find the middle element from the head node of the list.

As always, the full source code of the examples is available over on GitHub.

Convert String to Date in Java


1. Overview

In this tutorial, we’ll explore several ways to convert String objects into Date objects. We’ll start with the new Date-Time API, java.time, introduced in Java 8, before looking at the old java.util.Date data type that’s also used for representing dates.

To finish, we’ll also look at some external libraries for conversion using Joda-Time and the Apache Commons Lang DateUtils class.

2. Converting String to LocalDate or LocalDateTime

LocalDate and LocalDateTime are immutable date-time objects that represent a date, and a date and time, respectively. By default, Java dates are in the ISO-8601 format, so if we have any string that represents a date and time in this format, we can use the parse() API of these classes directly.

Here‘s a bit more detail on this new API.

2.1. Using the Parse API

The Date-Time API provides parse() methods for parsing a String that contains date and time information. To convert String objects to LocalDate and LocalDateTime objects, the String must represent a valid date or time according to ISO_LOCAL_DATE or ISO_LOCAL_DATE_TIME.

Otherwise, a DateTimeParseException will be thrown at runtime.
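For instance, a string that doesn't follow the ISO format fails immediately; here's a small sketch of handling that (the input value is just an example):

```java
import java.time.LocalDate;
import java.time.format.DateTimeParseException;

public class ParseFailureDemo {
    public static void main(String[] args) {
        try {
            // "05/05/2018" doesn't match ISO_LOCAL_DATE (yyyy-MM-dd)
            LocalDate.parse("05/05/2018");
        } catch (DateTimeParseException e) {
            // the exception carries the offending input
            System.out.println("Could not parse: " + e.getParsedString());
        }
    }
}
```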

In our first example, let’s convert a String to a java.time.LocalDate:

LocalDate date = LocalDate.parse("2018-05-05");

A similar approach to the above can be used to convert a String to a java.time.LocalDateTime:

LocalDateTime dateTime = LocalDateTime.parse("2018-05-05T11:50:55");

It is important to note that both the LocalDate and LocalDateTime objects are time-zone agnostic. However, when we need to deal with time-zone-specific dates and times, we can use the ZonedDateTime.parse() method directly to get a time-zone-specific date-time:

ZonedDateTime zonedDateTime = ZonedDateTime.parse("2015-05-05T10:15:30+01:00[Europe/Paris]");

Now let’s have a look at how we convert strings with a custom format.

2.2. Using the Parse API with a Custom Formatter

Converting a String with a custom date format into a Date object is a widespread operation in Java.

For this purpose, we’ll use the DateTimeFormatter class which provides numerous predefined formatters, and allows us to define a formatter.

Let’s start with an example of using one of the predefined formatters of DateTimeFormatter:

String dateInString = "19590709";
LocalDate date = LocalDate.parse(dateInString, DateTimeFormatter.BASIC_ISO_DATE);

In the next example let’s create a formatter that applies the pattern “EEE, d MMM yyyy”. This pattern specifies three characters for the abbreviated day name of the week, one or two digits for the day of the month, three characters for the abbreviated month name, and four digits for the year.

This formatter recognizes strings such as “Fri, 3 Jan 2003” or “Wed, 23 Mar 1994”:

String dateInString = "Mon, 05 May 1980";
DateTimeFormatter formatter = DateTimeFormatter.ofPattern("EEE, d MMM yyyy", Locale.ENGLISH);
LocalDate dateTime = LocalDate.parse(dateInString, formatter);

2.3. Common Date and Time Patterns

Let’s look at some common date and time patterns:

  • y – Year (1996; 96)
  • M – Month in year (July; Jul; 07)
  • d – Day in month (1-31)
  • E – Day name in week (Friday, Sunday)
  • a – Am/pm marker (AM, PM)
  • H – Hour in day (0-23)
  • h – Hour in am/pm (1-12)
  • m – Minute in hour (0-59)
  • s – Second in minute (0-59)

For a full list of symbols that we can use to specify a pattern for parsing click here.
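As a quick illustration, several of these symbols can be combined into a single custom pattern (the input string here is just an example):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

public class PatternSymbolsDemo {
    public static void main(String[] args) {
        // y, M, d, H, m and s from the table above, combined into one pattern
        DateTimeFormatter formatter =
          DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss", Locale.ENGLISH);
        LocalDateTime dateTime = LocalDateTime.parse("2018-05-05 11:50:55", formatter);
        System.out.println(dateTime.getHour() + ":" + dateTime.getMinute());
    }
}
```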

If we need to convert java.time dates into the older java.util.Date object, read this article for more details.

3. Converting String to java.util.Date

Before Java 8, the Java date and time mechanism was provided by the old APIs of java.util.Date, java.util.Calendar, and java.util.TimeZone classes which sometimes we still need to work with.

Let’s see how to convert a String into a java.util.Date object:

SimpleDateFormat formatter = new SimpleDateFormat("dd-MMM-yyyy", Locale.ENGLISH);

String dateInString = "7-Jun-2013";
Date date = formatter.parse(dateInString);

In the above example, we first need to construct a SimpleDateFormat object by passing the pattern describing the date and time format.

Next, we need to invoke the parse() method passing the date String. If the String argument passed is not in the same format as the pattern then a ParseException will be thrown.
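For instance, here's a small sketch of what that failure looks like (the mismatched input is just an example):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Locale;

public class SimpleDateFormatFailureDemo {
    public static void main(String[] args) {
        SimpleDateFormat formatter = new SimpleDateFormat("dd-MMM-yyyy", Locale.ENGLISH);
        try {
            // slashes instead of dashes, so the input doesn't match the pattern
            formatter.parse("07/06/2013");
        } catch (ParseException e) {
            System.out.println("caught ParseException at offset " + e.getErrorOffset());
        }
    }
}
```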

3.1. Adding Time Zone Information to java.util.Date

It’s important to note that the java.util.Date has no concept of time zone, and only represents the number of milliseconds elapsed since the Unix epoch – 1970-01-01T00:00:00Z.

But, when we print the Date object directly, it will always be printed with the Java default system time zone.

In this final example we’ll look at how to format a date, adding time zone information:

SimpleDateFormat formatter = new SimpleDateFormat("dd-M-yyyy hh:mm:ss a", Locale.ENGLISH);
formatter.setTimeZone(TimeZone.getTimeZone("America/New_York"));

String dateInString = "22-01-2015 10:15:55 AM"; 
Date date = formatter.parse(dateInString);
String formattedDateString = formatter.format(date);

We can also change the JVM time zone programmatically, but this isn’t recommended:

TimeZone.setDefault(TimeZone.getTimeZone("GMT"));

4. External Libraries

Now that we have a good understanding of how to convert String objects to Date objects using the new and old APIs offered by core Java let’s take a look at some external libraries.

4.1. Joda-Time Library

An alternative to the core Java Date and Time library is Joda-Time. Although the authors now recommend users to migrate to java.time (JSR-310) if this isn’t possible then the Joda-Time library provides an excellent alternative for working with Date and Time. This library provides pretty much all capabilities supported in the Java 8 Date Time project.

The artifact can be found on Maven Central:

<dependency>
    <groupId>joda-time</groupId>
    <artifactId>joda-time</artifactId>
    <version>2.10</version>
</dependency>

Here’s a quick example working with the standard DateTime:

DateTimeFormatter formatter = DateTimeFormat.forPattern("dd/MM/yyyy HH:mm:ss");

String dateInString = "07/06/2013 10:11:59";
DateTime dateTime = DateTime.parse(dateInString, formatter);

Let’s also see an example of explicitly setting a time zone:

DateTimeFormatter formatter = DateTimeFormat.forPattern("dd/MM/yyyy HH:mm:ss");

String dateInString = "07/06/2013 10:11:59";
DateTime dateTime = DateTime.parse(dateInString, formatter);
DateTime dateTimeWithZone = dateTime.withZone(DateTimeZone.forID("Asia/Kolkata"));

4.2. Apache Commons Lang – DateUtils

The DateUtils class provides many useful utilities making it easier to work with the legacy Calendar and Date objects.

The commons-lang3 artifact is available from Maven Central:

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
    <version>3.7</version>
</dependency>

Let’s convert a date String using an Array of date patterns into a java.util.Date:

String dateInString = "07/06-2013";
Date date = DateUtils.parseDate(dateInString, 
  new String[] { "yyyy-MM-dd HH:mm:ss", "dd/MM-yyyy" });

5. Conclusion

In this article, we illustrated several ways of converting Strings to different types of Date objects (with and without time), both in plain Java as well as using external libraries.

The full source code of the article is available over on GitHub.

Spring Shutdown Callbacks


1. Overview

In this tutorial, we’re going to learn different ways to use shutdown callbacks with Spring.

The main advantage of using a shutdown callback is that it gives us control over a graceful application exit.

2. Shutdown Callback Approaches

Spring supports both the component-level and the context-level shutdown callbacks. We can create these callbacks using:

  • @PreDestroy
  • DisposableBean interface
  • Bean-destroy method
  • Global ServletContextListener

Let’s see all of these approaches with examples.

2.1. Using @PreDestroy

Let’s create a bean that uses @PreDestroy:

@Component
public class Bean1 {

    @PreDestroy
    public void destroy() {
        System.out.println(
          "Callback triggered - @PreDestroy.");
    }
}

During bean initialization, Spring will register all bean methods annotated with @PreDestroy and invoke them when the application shuts down.
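Note that destroy callbacks like this one only fire when the application context actually shuts down. With a plain AnnotationConfigApplicationContext, for example, we can trigger them by closing the context explicitly, or by registering a JVM shutdown hook (the configuration class name here is illustrative):

```java
// Closing the context triggers @PreDestroy methods, DisposableBean.destroy()
// and any configured bean destroy methods
AnnotationConfigApplicationContext context =
  new AnnotationConfigApplicationContext(Config.class);

// either close explicitly...
context.close();

// ...or ask Spring to close the context when the JVM exits
// context.registerShutdownHook();
```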

2.2. Using the DisposableBean Interface

Our second bean will implement the DisposableBean interface to register the shutdown callback:

@Component
public class Bean2 implements DisposableBean {

    @Override
    public void destroy() throws Exception {
        System.out.println(
          "Callback triggered - DisposableBean.");
    }
}

2.3. Declaring a Bean Destroy Method

For this approach, firstly we’ll create a class with a custom destroy method:

public class Bean3 {

    public void destroy() {
        System.out.println(
          "Callback triggered - bean destroy method.");
    }
}

Then, we create the configuration class that initializes the bean and marks its destroy() method as our shutdown callback:

@Configuration
public class ShutdownHookConfiguration {

    @Bean(destroyMethod = "destroy")
    public Bean3 initializeBean3() {
        return new Bean3();
    }
}

The XML way of registering the destroy method is:

<bean class="com.baeldung.shutdownhooks.config.Bean3"
  destroy-method="destroy" />

2.4. Using a Global ServletContextListener

Unlike the other three approaches, which register the callback at bean level, the ServletContextListener registers the callback at context level.

For this let’s create a custom context listener:

public class ExampleServletContextListener
  implements ServletContextListener {

    @Override
    public void contextDestroyed(ServletContextEvent event) {
        System.out.println(
          "Callback triggered - ContextListener.");
    }

    @Override
    public void contextInitialized(ServletContextEvent event) {
        // Triggers when context initializes
    }
}

We need to register it with a ServletListenerRegistrationBean in the configuration class:

@Bean
ServletListenerRegistrationBean<ServletContextListener> servletListener() {
    ServletListenerRegistrationBean<ServletContextListener> srb
      = new ServletListenerRegistrationBean<>();
    srb.setListener(new ExampleServletContextListener());
    return srb;
}

3. Conclusion

We’ve learned about the different ways Spring provides to register shutdown callbacks, both at the bean level and at the context level.

These can be used for shutting down the application gracefully and effectively freeing up the used resources.

As always all the examples mentioned in this article can be found over on Github.

Count with JsonPath


1. Overview

In this quick tutorial, we’ll explore how to use JsonPath to count objects and arrays in a JSON document.

JsonPath provides a standard mechanism to traverse through specific parts of a JSON document. We can say JsonPath is to JSON what XPath is to XML.

2. Required Dependencies

We’re using the following JsonPath Maven dependency, which is, of course, available on Maven Central:

<dependency>
    <groupId>com.jayway.jsonpath</groupId>
    <artifactId>json-path</artifactId>
    <version>2.4.0</version>
</dependency>

3. Sample JSON

The following JSON will be used to illustrate the examples:

{
    "items":{
        "book":[
            {
                "author":"Arthur Conan Doyle",
                "title":"Sherlock Holmes",
                "price":8.99
            },
            {
                "author":"J. R. R. Tolkien",
                "title":"The Lord of the Rings",
                "isbn":"0-395-19395-8",
                "price":22.99
            }
        ],
        "bicycle":{
            "color":"red",
            "price":19.95
        }
    },
    "url":"mystore.com",
    "owner":"baeldung"
}

4. Count JSON Objects

The root element is denoted by the dollar symbol “$”. In the following JUnit test, we call JsonPath.read() with the JSON String and the JSON path “$” that we want to count:

@Test
public void shouldMatchCountOfObjects() {
    Map<String, Object> objectMap = JsonPath.read(json, "$");
    assertEquals(3, objectMap.keySet().size());
}

By counting the size of the resulting Map, we know how many elements are at the given path within the JSON structure.

5. Count JSON Array Size

In the following JUnit test, we query the JSON to find the array containing all books under the items element:

@Test
public void shouldMatchCountOfArrays() {
    JSONArray jsonArray = JsonPath.read(json, "$.items.book[*]");
    assertEquals(2, jsonArray.size());
}
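Alternatively, Jayway's JsonPath also ships a built-in length() function that returns the size directly, so the count can be read in one step (behavior as of version 2.4; we'd assert it the same way):

```java
// length() evaluates to the number of elements in the book array
int bookCount = JsonPath.read(json, "$.items.book.length()");
assertEquals(2, bookCount);
```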

6. Conclusion

In this article, we’ve covered some basic examples on how to count items within a JSON structure.

You can explore more path examples in the official JsonPath docs.

As always, the code examples can be found in the GitHub repository.

Front-End App with Spring Security OAuth – Authorization Code Flow


1. Overview

In this tutorial, we’ll continue our Spring Security OAuth series by building a simple front end for Authorization Code flow.

Keep in mind that the focus here is the client side; have a look at the Spring REST API + OAuth2 + AngularJS write-up to review detailed configuration for both the Authorization and Resource Servers.

2. Authorization Server

Before we get to our front end, we need to add our client details in our Authorization Server configuration:

@Configuration
@EnableAuthorizationServer
public class OAuth2AuthorizationServerConfig extends AuthorizationServerConfigurerAdapter {

    @Override
    public void configure(ClientDetailsServiceConfigurer clients) throws Exception {
        clients.inMemory()
               .withClient("fooClientId")
               .secret(passwordEncoder().encode("secret"))
               .authorizedGrantTypes("authorization_code")
               .scopes("foo", "read", "write")
               .redirectUris("http://localhost:8089/")
...

Note how we now have the Authorization Code grant type enabled, with the following simple details:

  • our client id is fooClientId
  • our scopes are foo, read and write
  • the redirect URI is http://localhost:8089/ (we’re going to use port 8089 for our front-end app)
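Putting these details together, the authorization request our front end will eventually send looks roughly as follows; the sketch below just assembles the URL in plain Java (the authorization server address is the one assumed throughout this article):

```java
import java.net.URLEncoder;

public class AuthorizeUrlDemo {
    public static void main(String[] args) throws Exception {
        // Host and port match the servers used later in this article
        String authorizeUrl = "http://localhost:8081/spring-security-oauth-server/oauth/authorize"
          + "?response_type=code"
          + "&client_id=fooClientId"
          + "&redirect_uri=" + URLEncoder.encode("http://localhost:8089/", "UTF-8");

        System.out.println(authorizeUrl);
    }
}
```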

3. The Front End

Now, let’s start building our simple front-end application.

As we’re going to use to use Angular 6 for our app here, we need to use the frontend-maven-plugin plugin in our Spring Boot application:

<plugin>
    <groupId>com.github.eirslett</groupId>
    <artifactId>frontend-maven-plugin</artifactId>
    <version>1.6</version>

    <configuration>
        <nodeVersion>v8.11.3</nodeVersion>
        <npmVersion>6.1.0</npmVersion>
        <workingDirectory>src/main/resources</workingDirectory>
    </configuration>

    <executions>
        <execution>
            <id>install node and npm</id>
            <goals>
                <goal>install-node-and-npm</goal>
            </goals>
        </execution>

        <execution>
            <id>npm install</id>
            <goals>
                <goal>npm</goal>
            </goals>
        </execution>

        <execution>
            <id>npm run build</id>
            <goals>
                <goal>npm</goal>
            </goals>

            <configuration>
                <arguments>run build</arguments>
            </configuration>
        </execution>
    </executions>
</plugin>

Note that, naturally, we need to install Node.js first on our box; we’ll use the Angular CLI to generate the base for our app:

ng new authCode

4. Angular Module

Now, let’s discuss our Angular Module in detail.

Here’s our simple AppModule:

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { HttpModule } from '@angular/http';
import { RouterModule }   from '@angular/router';

import { AppComponent } from './app.component';
import { HomeComponent } from './home.component';
import { FooComponent } from './foo.component';

@NgModule({
  declarations: [
    AppComponent,
    HomeComponent,
    FooComponent    
  ],
  imports: [
    BrowserModule,
    HttpModule,
    RouterModule.forRoot([
     { path: '', component: HomeComponent, pathMatch: 'full' }], {onSameUrlNavigation: 'reload'})
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

Our Module consists of three Components and one service; we’ll discuss them in the following sections.

4.1. App Component

Let’s start with our AppComponent which is the root component:

import {Component} from '@angular/core';
 
@Component({
    selector: 'app-root',
    template: `<nav class="navbar navbar-default">
  <div class="container-fluid">
    <div class="navbar-header">
      <a class="navbar-brand" href="/">Spring Security Oauth - Authorization Code</a>
    </div>
  </div>
</nav>
<router-outlet></router-outlet>`
})

export class AppComponent {}

4.2. Home Component

Next is our main component, HomeComponent:

import {Component} from '@angular/core';
import {AppService} from './app.service'
 
@Component({
    selector: 'home-header',
    providers: [AppService],
  template: `<div class="container" >
    <button *ngIf="!isLoggedIn" class="btn btn-primary" (click)="login()" type="submit">Login</button>
    <div *ngIf="isLoggedIn" class="content">
        <span>Welcome !!</span>
        <a class="btn btn-default pull-right"(click)="logout()" href="#">Logout</a>
        <br/>
        <foo-details></foo-details>
    </div>
</div>`
})
 
export class HomeComponent {
     public isLoggedIn = false;

    constructor(
        private _service:AppService){}
 
    ngOnInit(){
        this.isLoggedIn = this._service.checkCredentials();    
        let i = window.location.href.indexOf('code');
        if(!this.isLoggedIn && i != -1){
            this._service.retrieveToken(window.location.href.substring(i + 5));
        }
    }

    login() {
        window.location.href = 'http://localhost:8081/spring-security-oauth-server/oauth/authorize?response_type=code&client_id=' + this._service.clientId + '&redirect_uri='+ this._service.redirectUri;
    }
 
    logout() {
        this._service.logout();
    }
}

Note that:

  • If the user is not logged in, only the login button will appear
  • The login button redirects the user to the Authorization URL
  • When the user is redirected back with the Authorization Code, we retrieve the Access Token using this code
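The last step relies on pulling the code parameter out of the redirect URL; the component does this with indexOf and substring(i + 5), skipping the five characters of “code=”. In plain Java, with a made-up redirect URL, the same extraction looks like:

```java
public class CodeExtractionDemo {
    public static void main(String[] args) {
        // Hypothetical redirect URL coming back from the Authorization Server
        String redirect = "http://localhost:8089/?code=AbC123";

        int i = redirect.indexOf("code");
        if (i != -1) {
            // skip the 5 characters of "code=" to isolate the value
            String authorizationCode = redirect.substring(i + 5);
            System.out.println(authorizationCode); // prints AbC123
        }
    }
}
```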

4.3. Foo Component

Our third and final component is the FooComponent; it displays the Foo resources obtained from the Resource Server:

import { Component } from '@angular/core';
import {AppService, Foo} from './app.service'

@Component({
  selector: 'foo-details',
  providers: [AppService],  
  template: `<div class="container">
    <h1 class="col-sm-12">Foo Details</h1>
    <div class="col-sm-12">
        <label class="col-sm-3">ID</label> <span>{{foo.id}}</span>
    </div>
    <div class="col-sm-12">
        <label class="col-sm-3">Name</label> <span>{{foo.name}}</span>
    </div>
    <div class="col-sm-12">
        <button class="btn btn-primary" (click)="getFoo()" type="submit">New Foo</button>        
    </div>
</div>`
})

export class FooComponent {
    public foo = new Foo(1,'sample foo');
    private foosUrl = 'http://localhost:8082/spring-security-oauth-resource/foos/';  

    constructor(private _service:AppService) {}

    getFoo(){
        this._service.getResource(this.foosUrl+this.foo.id)
         .subscribe(
            data => this.foo = data,
            error =>  this.foo.name = 'Error');
    }
}

4.4. App Service

Now, let’s take a look at the AppService:

import {Injectable} from '@angular/core';
import { Cookie } from 'ng2-cookies';
import { Http, Response, Headers, RequestOptions } from '@angular/http';
import { Observable } from 'rxjs/Observable';
import 'rxjs/add/operator/catch';
import 'rxjs/add/operator/map';
 
export class Foo {
  constructor(
    public id: number,
    public name: string) { }
} 

@Injectable()
export class AppService {
   public clientId = 'fooClientId';
   public redirectUri = 'http://localhost:8089/';

  constructor(
    private _http: Http){}

  retrieveToken(code){
    let params = new URLSearchParams();   
    params.append('grant_type','authorization_code');
    params.append('client_id', this.clientId);
    params.append('redirect_uri', this.redirectUri);
    params.append('code',code);

    let headers = new Headers({'Content-type': 'application/x-www-form-urlencoded; charset=utf-8', 'Authorization': 'Basic '+btoa(this.clientId+":secret")});
    let options = new RequestOptions({ headers: headers });
     this._http.post('http://localhost:8081/spring-security-oauth-server/oauth/token', params.toString(), options)
    .map(res => res.json())
    .subscribe(
      data => this.saveToken(data),
      err => alert('Invalid Credentials')
    ); 
  }

  saveToken(token){
    var expireDate = new Date().getTime() + (1000 * token.expires_in);
    Cookie.set("access_token", token.access_token, expireDate);
    console.log('Obtained Access token');
    window.location.href = 'http://localhost:8089';
  }

  getResource(resourceUrl) : Observable<Foo>{
    var headers = new Headers({'Content-type': 'application/x-www-form-urlencoded; charset=utf-8', 'Authorization': 'Bearer '+Cookie.get('access_token')});
    var options = new RequestOptions({ headers: headers });
    return this._http.get(resourceUrl, options)
                   .map((res:Response) => res.json())
                   .catch((error:any) => Observable.throw(error.json().error || 'Server error'));
  }

  checkCredentials(){
    return Cookie.check('access_token');
  } 

  logout() {
    Cookie.delete('access_token');
    window.location.reload();
  }
}

Let’s do a quick rundown of our implementation here:

  • checkCredentials(): to check if the user is logged in
  • retrieveToken(): to obtain access token using Authorization Code
  • saveToken(): to save Access Token in a cookie
  • getResource(): to get Foo details using its ID
  • logout(): to delete Access Token cookie
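One detail worth noting in saveToken() is the unit conversion: expires_in arrives from the token endpoint in seconds, while the cookie expiry is computed as a millisecond timestamp. The same calculation in plain Java, with a hypothetical token lifetime:

```java
public class TokenExpiryDemo {
    public static void main(String[] args) {
        long expiresInSeconds = 3600; // hypothetical expires_in value from the server
        long now = System.currentTimeMillis();

        // seconds -> milliseconds, added to the current epoch timestamp
        long expireAtMillis = now + 1000 * expiresInSeconds;

        System.out.println(expireAtMillis - now); // prints 3600000
    }
}
```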

5. Run the Application

To run our application and make sure everything is working properly, we need to:

  • First, run Authorization Server on port 8081
  • Then, run the Resource Server on port 8082
  • Finally, run the Front End

We’ll need to build our app first:

mvn clean install

Then change directory to src/main/resources:

cd src/main/resources

Then run our app on port 8089:

npm start

6. Conclusion

In this quick article, we explored building a simple front-end application to use with the Authorization Code flow – with Spring Security and Angular 6.

And, as always, the full source code is available over on GitHub.

Implementing a FTP-Client in Java


1. Overview

In this tutorial, we’ll take a look at how to leverage the Apache Commons Net library to interact with an external FTP server.

2. Setup

When using libraries to interact with external systems, it’s often a good idea to write some additional integration tests to make sure we’re using the library correctly.

Nowadays, we’d normally use Docker to spin up those systems for our integration tests. However, especially when used in passive mode, an FTP server isn’t the easiest application to run transparently inside a container if we want to make use of dynamic port mappings (often necessary so that tests can run on a shared CI server).

That’s why we’ll use MockFtpServer instead, a fake/stub FTP server written in Java that provides an extensive API for easy use in JUnit tests:

<dependency>
    <groupId>commons-net</groupId>
    <artifactId>commons-net</artifactId>
    <version>3.6</version>
</dependency>
<dependency> 
    <groupId>org.mockftpserver</groupId> 
    <artifactId>MockFtpServer</artifactId> 
    <version>2.7.1</version> 
    <scope>test</scope> 
</dependency>

It’s recommended to always use the latest version. Those can be found here and here.

3. FTP Support in JDK

Surprisingly, there’s already basic support for FTP in some JDK flavors in the form of sun.net.www.protocol.ftp.FtpURLConnection.

However, we shouldn’t use this class directly and it’s instead possible to use the JDK’s java.net.URL class as an abstraction.

This FTP support is very basic, but leveraging the convenience APIs of java.nio.file.Files, it could be enough for simple use cases:

@Test
public void givenRemoteFile_whenDownloading_thenItIsOnTheLocalFilesystem() throws IOException {
    String ftpUrl = String.format(
      "ftp://user:password@localhost:%d/foobar.txt", fakeFtpServer.getServerControlPort());

    URLConnection urlConnection = new URL(ftpUrl).openConnection();
    InputStream inputStream = urlConnection.getInputStream();
    Files.copy(inputStream, new File("downloaded_buz.txt").toPath());
    inputStream.close();

    assertThat(new File("downloaded_buz.txt")).exists();

    new File("downloaded_buz.txt").delete(); // cleanup
}

Since this basic FTP support is already missing features like file listings, we’re going to use the FTP support in the Apache Commons Net library in the following examples.

4. Connecting

We first need to connect to the FTP server. Let’s start by creating a class FtpClient.

It will serve as an abstraction API to the actual Apache Commons Net FTP client:

class FtpClient {

    private String server;
    private int port;
    private String user;
    private String password;
    private FTPClient ftp;

    // constructor

    void open() throws IOException {
        ftp = new FTPClient();

        ftp.addProtocolCommandListener(new PrintCommandListener(new PrintWriter(System.out)));

        ftp.connect(server, port);
        int reply = ftp.getReplyCode();
        if (!FTPReply.isPositiveCompletion(reply)) {
            ftp.disconnect();
            throw new IOException("Exception in connecting to FTP Server");
        }

        ftp.login(user, password);
    }

    void close() throws IOException {
        ftp.disconnect();
    }
}

We need the server address and the port, as well as the username and the password. After connecting, it’s necessary to check the reply code to make sure the connection was successful. We also add a PrintCommandListener to print to stdout the responses we’d normally see when connecting to an FTP server with command-line tools.
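The reply-code convention this check relies on comes from RFC 959: codes in the 200-299 range signal positive completion. FTPReply.isPositiveCompletion encapsulates roughly the following test (an illustrative sketch, not the library’s actual source):

```java
public class FtpReplyCheck {

    // Illustrative: per RFC 959, reply codes from 200 to 299 indicate
    // that the requested action completed successfully.
    static boolean isPositiveCompletion(int replyCode) {
        return replyCode >= 200 && replyCode < 300;
    }

    public static void main(String[] args) {
        System.out.println(isPositiveCompletion(230)); // e.g. 230 "User logged in" -> true
        System.out.println(isPositiveCompletion(530)); // e.g. 530 "Not logged in" -> false
    }
}
```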

Since our integration tests will have some boilerplate code, like starting/stopping the MockFtpServer and connecting/disconnecting our client, we can do these things in the @Before and @After methods:

public class FtpClientIntegrationTest {

    private FakeFtpServer fakeFtpServer;

    private FtpClient ftpClient;

    @Before
    public void setup() throws IOException {
        fakeFtpServer = new FakeFtpServer();
        fakeFtpServer.addUserAccount(new UserAccount("user", "password", "/data"));

        FileSystem fileSystem = new UnixFakeFileSystem();
        fileSystem.add(new DirectoryEntry("/data"));
        fileSystem.add(new FileEntry("/data/foobar.txt", "abcdef 1234567890"));
        fakeFtpServer.setFileSystem(fileSystem);
        fakeFtpServer.setServerControlPort(0);

        fakeFtpServer.start();

        ftpClient = new FtpClient("localhost", fakeFtpServer.getServerControlPort(), "user", "password");
        ftpClient.open();
    }

    @After
    public void teardown() throws IOException {
        ftpClient.close();
        fakeFtpServer.stop();
    }
}

By setting the mock server control port to 0, we’re starting the mock server on a free random port.

That’s why we have to retrieve the actual port when creating the FtpClient after the server has been started, using fakeFtpServer.getServerControlPort().

5. Listing Files

The first actual use case will be listing files.

Let’s start with the test first, TDD-style:

@Test
public void givenRemoteFile_whenListingRemoteFiles_thenItIsContainedInList() throws IOException {
    Collection<String> files = ftpClient.listFiles("");
    assertThat(files).contains("foobar.txt");
}

The implementation itself is equally straightforward. To keep the returned data structure simple for the sake of this example, we transform the returned FTPFile array into a list of Strings using Java 8 Streams:

Collection<String> listFiles(String path) throws IOException {
    FTPFile[] files = ftp.listFiles(path);
    return Arrays.stream(files)
      .map(FTPFile::getName)
      .collect(Collectors.toList());
}

6. Downloading

For downloading a file from the FTP server, we’re defining an API.

Here we define the source file and the destination on the local filesystem:

@Test
public void givenRemoteFile_whenDownloading_thenItIsOnTheLocalFilesystem() throws IOException {
    ftpClient.downloadFile("/buz.txt", "downloaded_buz.txt");
    assertThat(new File("downloaded_buz.txt")).exists();
    new File("downloaded_buz.txt").delete(); // cleanup
}

The Apache Commons Net FTP client contains a convenient API that writes directly to a given OutputStream, so we can use it directly:

void downloadFile(String source, String destination) throws IOException {
    try (FileOutputStream out = new FileOutputStream(destination)) {
        ftp.retrieveFile(source, out);
    }
}

7. Uploading

The MockFtpServer provides some helpful methods for accessing the content of its filesystem. We can use this feature to write a simple integration test for the uploading functionality:

@Test
public void givenLocalFile_whenUploadingIt_thenItExistsOnRemoteLocation() 
  throws URISyntaxException, IOException {
  
    File file = new File(getClass().getClassLoader().getResource("baz.txt").toURI());
    ftpClient.putFileToPath(file, "/buz.txt");
    assertThat(fakeFtpServer.getFileSystem().exists("/buz.txt")).isTrue();
}

Uploading a file works quite similarly, API-wise, to downloading it, but instead of an OutputStream, we need to provide an InputStream:

void putFileToPath(File file, String path) throws IOException {
    try (FileInputStream in = new FileInputStream(file)) {
        ftp.storeFile(path, in);
    }
}

8. Conclusion

We’ve seen, that using Java together with the Apache Net Commons allows us, to easily interact with an external FTP server, for read as well as write access.

As usual, the complete code for this article is available in our GitHub repository.


Access a File from the Classpath in a Spring Application


1. Introduction

In this tutorial, we’ll see various ways to access and load the contents of a file that’s on the classpath using Spring.

2. Using Resource

The Resource interface helps in abstracting access to low-level resources. In fact, it supports handling of all kinds of file resources in a uniform manner.

Let’s start by looking at various methods to obtain a Resource instance.

2.1. Manually

For accessing a resource from the classpath, we can simply use ClassPathResource:

public Resource loadEmployees() {
    return new ClassPathResource("data/employees.dat");
}

By default, ClassPathResource removes some boilerplate for selecting between the thread’s context classloader and the default system classloader.

However, we can also indicate the classloader to use either directly:

return new ClassPathResource("data/employees.dat", this.getClass().getClassLoader());

Or indirectly through a specified class:

return new ClassPathResource(
  "data/employees.dat", 
  Employee.class.getClassLoader());

Note that from Resource, we can easily jump to Java standard representations like InputStream or File.

2.2. Using @Value

We can also inject a Resource with @Value:

@Value("classpath:data/resource-data.txt")
Resource resourceFile;

And @Value supports other prefixes, too, like file: and url:.

2.3. Using ResourceLoader

Or, if we want to lazily load our resource, we can use ResourceLoader:

@Autowired
ResourceLoader resourceLoader;

And then we retrieve our resource with getResource:

public Resource loadEmployees() {
    return resourceLoader.getResource(
      "classpath:data/employees.dat");
}

Note, too, that ResourceLoader is implemented by all concrete ApplicationContexts, which means we can also simply depend on the ApplicationContext if that suits our situation better:

@Autowired
ApplicationContext context;

public Resource loadEmployees() {
    return context.getResource("classpath:data/employees.dat");
}

3. Using ResourceUtils

As a caveat, there is another way to retrieve resources in Spring, but the ResourceUtils Javadoc is clear that the class is mainly for internal use.

If we see usages of ResourceUtils in our code:

public File loadEmployeesWithSpringInternalClass() 
  throws FileNotFoundException {
    return ResourceUtils.getFile(
      "classpath:data/employees.dat");
}

We should carefully consider the rationale as it’s probably better to use one of the standard approaches above.

4. Reading Resource Data

Once we have a Resource, it’s easy for us to read the contents. As we have already discussed, we can easily obtain a File or an InputStream reference from the Resource.

Let’s imagine we have the following file, data/employees.dat, on the classpath:

Joe Employee,Jan Employee,James T. Employee

4.1. Reading as a File

Now, we can read its contents by calling getFile:

@Test
public void whenResourceAsFile_thenReadSuccessful() 
  throws IOException {
 
    File resource = new ClassPathResource(
      "data/employees.dat").getFile();
    String employees = new String(
      Files.readAllBytes(resource.toPath()));
    assertEquals(
      "Joe Employee,Jan Employee,James T. Employee", 
      employees);
}

Note, though, that this approach expects the resource to be present on the filesystem and not within a jar file.

4.2. Reading as an InputStream

Let’s say, though, that our resource is inside a jar.

Then, we can instead read a Resource as an InputStream:

@Test
public void whenResourceAsStream_thenReadSuccessful() 
  throws IOException {
    InputStream resource = new ClassPathResource(
      "data/employees.dat").getInputStream();
    try ( BufferedReader reader = new BufferedReader(
      new InputStreamReader(resource)) ) {
        String employees = reader.lines()
          .collect(Collectors.joining("\n"));
 
        assertEquals("Joe Employee,Jan Employee,James T. Employee", employees);
    }
}

5. Conclusion

In this quick article, we’ve seen a few ways to access and read a resource from the classpath using Spring, including eager and lazy loading, and reading from the filesystem or from inside a jar.

And, as always, I’ve posted all these examples over on GitHub.

Warning: “The type WebMvcConfigurerAdapter is deprecated”


1. Introduction

In this quick tutorial, we’ll have a look at one of the warnings we may see when working with a Spring 5.x.x version, namely the one referring to the deprecated WebMvcConfigurerAdapter class.

We’ll see why this warning happens and how to handle it.

2. Why the Warning is Present

This warning will appear if we’re using Spring version 5 (or Spring Boot 2), either when upgrading an existing application or building a new application with the old API.

Let’s briefly go through the history behind it.

In earlier versions of Spring, up to and including version 4, if we wanted to configure a web application, we could make use of the WebMvcConfigurerAdapter class:

@Configuration
public class WebConfig extends WebMvcConfigurerAdapter {
    
    // ...
}

This is an abstract class that implements the WebMvcConfigurer interface and contains empty implementations for all the methods inherited.

By subclassing it, we can override its methods, which provide hooks into various MVC configuration elements such as view resolvers, interceptors and more.

However, Java 8 added the concept of default methods in interfaces. Naturally, the Spring team updated the framework to make full use of the new Java language features.
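A minimal sketch (with made-up names) shows why this change makes the empty adapter class unnecessary: an interface with default methods can be implemented without overriding anything:

```java
public class DefaultMethodDemo {

    // Hypothetical configurer interface: every method has a default
    // implementation, so implementors override only what they need.
    interface Configurer {
        default String viewResolver() {
            return "default";
        }
    }

    static String resolveWithDefaults() {
        // Nothing overridden, yet the implementation is complete
        return new Configurer() { }.viewResolver();
    }

    public static void main(String[] args) {
        System.out.println(resolveWithDefaults()); // prints "default"
    }
}
```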

3. Solution

As mentioned, the WebMvcConfigurer interface, starting with Spring 5, contains default implementations for all its methods. As a result, the abstract adapter class was marked as deprecated.

Let’s see how we can start using the interface directly and get rid of the warning:

@Configuration
public class WebConfig implements WebMvcConfigurer {
    // ...
}

And that’s all! The change should be fairly easy to make.

If there are any super() calls to overridden methods, we should remove those as well. Otherwise, we can override any of the configuration callbacks as usual.

While removing the warning is not mandatory, it’s recommended to do so, as the new API is more convenient, and the deprecated class may be removed in future versions.

4. Conclusion

In this short article, we saw how to fix the warning referring to the deprecation of the WebMvcConfigurerAdapter class.

Working with Select and Option in Thymeleaf


1. Overview

Thymeleaf is a very popular templating engine bundled with Spring Boot. We’ve already published a number of articles about it, and we highly recommend going over the Baeldung Thymeleaf series.

In this tutorial, we’re going to look at how to work with select and option tags in Thymeleaf.

2. HTML Basics

In HTML we can build a drop-down list with multiple values:

<select>
    <option value="apple">Apple</option>
    <option value="banana">Banana</option>
    <option value="orange">Orange</option>
    <option value="pear">Pear</option>
</select>

Each list consists of select and nested option tags. By default, the web browser will render a list with the first option preselected.

We can control which value is selected by using the selected attribute:

<option value="orange" selected>Orange</option>

Moreover, we can specify that an option isn’t selectable by using the disabled attribute:

<option disabled>Please select...</option>

3. Thymeleaf

In Thymeleaf, we can use the th:field attribute to bind the view to the model:

<select th:field="*{gender}">
    <option th:value="'M'" th:text="Male"></option>
    <option th:value="'F'" th:text="Female"></option>
</select>

While the above example doesn’t really require using a template engine, in more advanced examples to follow we’ll see the power of Thymeleaf.

3.1. Option Without Selection

If we think about a scenario in which there are more options to choose from, a clean and neat way to display all of them is by using th:each attribute together with th:value and th:text:

<select th:field="*{percentage}">
    <option th:each="i : ${#numbers.sequence(0, 100)}" th:value="${i}" th:text="${i}">
    </option>
</select>

In the above example, we’re using a sequence of numbers from 0 to 100. We assign each number i to the option tag’s value attribute, and we use the same number as the displayed text.

The Thymeleaf code will be rendered in the browser as:

<select id="percentage" name="percentage">
    <option value="0">0</option>
    <option value="1">1</option>
    <option value="2">2</option>
    ...
    <option value="100">100</option>
</select>

Let’s think about this example as create, i.e., we start with a new form, and the percentage value didn’t need to be preselected.

3.2. Selected Option

If we now want to extend our form with update functionality, i.e., we go back to a previously created record and want to populate the form with existing data, then the option needs to be selected.

We can achieve that by adding the th:selected attribute along with a condition:

<select th:field="*{percentage}">
    <option th:each="i : ${#numbers.sequence(0, 100)}" th:value="${i}" th:text="${i}" 
      th:selected="${i==75}"></option>
</select>

In the above example, we want to preselect the value of 75 by checking if i is equal to 75.

However, this code won’t work, and the rendered HTML will be:

<select id="percentage" name="percentage">
    <option value="0">0</option>
    ...
    <option value="74">74</option>
    <option value="75">75</option>
    <option value="76">76</option>
    ...
    <option value="100">100</option>
</select>

To fix it, we need to remove th:field and replace it with name and id attributes:

<select id="percentage" name="percentage">

In the end, we will get:

<select id="percentage" name="percentage">
    <option value="0">0</option>
    ...
    <option value="74">74</option>
    <option value="75" selected="selected">75</option>
    <option value="76">76</option>
    ...
    <option value="100">100</option>
</select>

4. Conclusion

In this short article, we’ve seen how to work with drop-down/list selectors in Thymeleaf. There’s one common pitfall with preselecting values, which we’ve shown the solution for.

As always, the code used during the discussion can be found over on GitHub.

A Guide to DeltaSpike Data Module


1. Overview

Apache DeltaSpike is a project which provides a collection of CDI extensions for Java projects; it requires a CDI implementation to be available at runtime.

Of course, it can work with different implementations of CDI – JBoss Weld or OpenWebBeans. It’s also tested on many application servers.

In this tutorial, we’ll focus on one of the best-known and most useful ones – the Data module.

2. DeltaSpike Data Module Setup

The Apache DeltaSpike Data module is used to simplify the implementation of the repository pattern. It reduces boilerplate code by providing centralized logic for query creation and execution.

It’s very similar to the Spring Data project. To query a database, we need to define a method declaration (without implementation) that either follows a defined naming convention or contains the @Query annotation. The implementation will be provided for us by the CDI extension.

In the next subsections, we’ll cover how to set up the Apache DeltaSpike Data module in our application.

2.1. Required Dependencies

To use the Apache DeltaSpike Data module in an application, we need to set up the required dependencies.

When Maven is our build tool, we use:

<dependency>
    <groupId>org.apache.deltaspike.modules</groupId>
    <artifactId>deltaspike-data-module-api</artifactId>
    <version>1.8.2</version>
    <scope>compile</scope>
</dependency>
<dependency>
    <groupId>org.apache.deltaspike.modules</groupId>
    <artifactId>deltaspike-data-module-impl</artifactId>
    <version>1.8.2</version>
    <scope>runtime</scope>
</dependency>

When we’re using Gradle:

runtime 'org.apache.deltaspike.modules:deltaspike-data-module-impl'
compile 'org.apache.deltaspike.modules:deltaspike-data-module-api'

The Apache DeltaSpike Data module artifacts are available on Maven Central.

To run an application with the Data module, we also need JPA and CDI implementations available at runtime.

Although it’s possible to run Apache DeltaSpike in a Java SE application, in most cases it will be deployed on an application server (e.g., Wildfly or WebSphere).

Application servers have full Java EE support, so we don’t have to do anything more. In the case of a Java SE application, we have to provide these implementations ourselves (e.g., by adding dependencies on Hibernate and JBoss Weld).

Next, we’ll also cover the required configuration for the EntityManager.

2.2. Entity Manager Configuration

The Data module requires an EntityManager to be injected via CDI.

We can achieve this by using a CDI producer:

public class EntityManagerProducer {

    @PersistenceContext(unitName = "primary")
    private EntityManager entityManager;

    @ApplicationScoped
    @Produces
    public EntityManager getEntityManager() {
        return entityManager;
    }
}

The above code assumes that we have a persistence unit named primary defined in the persistence.xml file.

Here’s an example definition:

<persistence-unit name="primary" transaction-type="JTA">
   <jta-data-source>java:jboss/datasources/baeldung-jee7-seedDS</jta-data-source>
   <properties>
      <property name="hibernate.hbm2ddl.auto" value="create-drop" />
      <property name="hibernate.show_sql" value="false" />
   </properties>
</persistence-unit>

The persistence unit in our example uses the JTA transaction type, which means we have to provide the Transaction Strategy we’re going to use.

2.3. Transaction Strategy

In case we’re using JTA transaction type for our data source then we have to define Transaction Strategy that will be used in the Apache DeltaSpike repositories. We can do it inside apache-deltaspike.properties file (under META-INF directory):

globalAlternatives.org.apache.deltaspike.jpa.spi.transaction.TransactionStrategy=org.apache.deltaspike.jpa.impl.transaction.ContainerManagedTransactionStrategy

There are four types of transaction strategies we can define:

  • BeanManagedUserTransactionStrategy
  • ResourceLocalTransactionStrategy
  • ContainerManagedTransactionStrategy
  • EnvironmentAwareTransactionStrategy

All of them implement org.apache.deltaspike.jpa.spi.transaction.TransactionStrategy.

This was the last part of the configuration required for our data module.

Next, we’ll show how to implement the repository pattern classes.

3. Repository Classes

When we’re using Apache DeltaSpike data module any abstract class or interface can become a repository class.

All we have to do is to add an @Repository annotation with a forEntity attribute which defines JPA entity that our repository should handle:

@Entity
public class User {
    // ...
}  

@Repository(forEntity = User.class) 
public interface SimpleUserRepository { 
    // ... 
}

or with an abstract class:

@Repository(forEntity = User.class)
public abstract class SimpleUserRepository { 
    // ... 
}

The Data module discovers classes (or interfaces) with this annotation and processes the methods inside them.

There are a few ways to define the query to execute. We’ll cover them one by one in the following sections.

4. Query From Method Name

The first possibility to define a query is to use a method name which follows a defined naming convention.

It looks like below:

(Entity|Optional<Entity>|List<Entity>|Stream<Entity>) (prefix)(Property[Comparator]){Operator Property [Comparator]}

Next, we’ll focus on each part of this definition.

4.1. Return Type

The return type mainly defines how many objects our query might return. We cannot define a single entity type as the return value if our query might return more than one result.

The following method will throw an exception if there is more than one User with the given name:

public abstract User findByFirstName(String firstName);

The opposite isn’t true – we can define a return value as a Collection even though the result will be just a single entity.

public abstract Collection<User> findAnyByFirstName(String firstName);

The method name prefix which suggests a single value as the return type (e.g., findAny) is ignored if we define the return value as a Collection.

The above query will return all Users with a matching first name, even though the method name prefix suggests otherwise.

Such combinations (a Collection return type with a prefix suggesting a single result) should be avoided because the code becomes unintuitive and hard to understand.

The next section shows more details about method name prefix.

4.2. Prefix for Query Method

The prefix defines the action we want to execute on the repository. The most useful one is finding entities which match given search criteria.

There are many prefixes for this action, such as findBy, findAny, and findAll. For the detailed list, please check the official Apache DeltaSpike documentation:

public abstract User findAnyByLastName(String lastName);

However, there are also other method templates which are used for counting and removing entities. We can count all rows in a table:

public abstract int count();

There is also a remove method template which we can add to our repository:

public abstract void remove(User user);

Support for the countBy and removeBy method prefixes will be added in Apache DeltaSpike 1.9.0, the next version.

The next section shows how we can add more attributes to the queries.

4.3. Query With Many Properties

In the query, we can combine many properties using the and and or operators:

public abstract Collection<User> findByFirstNameAndLastName(
  String firstName, String lastName);
public abstract Collection<User> findByFirstNameOrLastName(
  String firstName, String lastName);

We can combine as many properties as we want. Search for nested properties is also available which we’ll show next.

4.4. Query With Nested Properties

The query can also use nested properties.

In the following example User entity has an address property of type Address and Address entity has a city property:

@Entity
public class Address {
    private String city;
    // ...
}
@Entity
public class User {
    @OneToOne 
    private Address address;
    // ...
}
public abstract Collection<User> findByAddress_city(String city);

4.5. Order in the Query

DeltaSpike allows us to define an order in which result should be returned. We can define both – ascending and descending order:

public abstract List<User> findAllOrderByFirstNameAsc();

As shown above, all we have to do is add a part to the method name containing the property name we want to sort by, plus the short name of the sort direction.

We can combine many orders easily:

public abstract List<User> findAllOrderByFirstNameAscLastNameDesc();

Next, we’ll show how to limit the query result size.

4.6. Limit Query Result Size and Pagination

There are use cases where we want to retrieve only the first few rows of the whole result – a so-called query limit. This is also straightforward with the Data module:

public abstract Collection<User> findTop2OrderByFirstNameAsc();
public abstract Collection<User> findFirst2OrderByFirstNameAsc();

First and top can be used interchangeably.

We can then enable query pagination by providing two additional parameters annotated with @FirstResult and @MaxResults:

public abstract Collection<User> findAllOrderByFirstNameAsc(@FirstResult int start, @MaxResults int size);

We've already defined a lot of methods in the repository. Some of them are generic and should be defined once and used by each repository.

Apache DeltaSpike provides a few base types which give us a lot of methods out of the box.

In the next section, we’ll focus on how to do this.

5. Basic Repository Types

To get some basic repository methods, our repository should extend a base type provided by Apache DeltaSpike. There are several of them, such as EntityRepository and FullEntityRepository:

@Repository
public interface UserRepository 
  extends FullEntityRepository<User, Long> {
    // ...
}

Or using an abstract class:

@Repository
public abstract class UserRepository extends AbstractEntityRepository<User, Long> {
    // ...
}

The above implementation gives us a lot of methods without our writing any additional lines of code, so we've gained what we wanted – a massive reduction in boilerplate code.

In case we’re using base repository type there’s no need to pass an additional forEntity attribute value to our @Repository annotation.

When we’re using abstract classes instead of interfaces for our repositories we get an additional possibility to create a custom query.

Abstract base repository classes, e.g., AbstractEntityRepository gives us an access to fields (via getters) or utility methods which we can use to create a query:

public List<User> findByFirstName(String firstName) {
    return typedQuery("select u from User u where u.firstName = ?1")
      .setParameter(1, firstName)
      .getResultList();
}

In the above example, we used a typedQuery utility method to create a custom implementation.

The last possibility to create a query is to use @Query annotation which we will show next.

6. @Query Annotation

The query to execute can also be defined with the @Query annotation. It's very similar to the Spring solution: we add the annotation to a method, with the query as its value.

By default, this is a JPQL query:

@Query("select u from User u where u.firstName = ?1")
public abstract Collection<User> findUsersWithFirstName(String firstName);

As in the above example, we can easily pass parameters to the query via an index.

If we want to write the query in native SQL instead of JPQL, we need to set the additional query attribute isNative to true:

@Query(value = "select * from User where firstName = ?1", isNative = true)
public abstract Collection<User> findUsersWithFirstNameNative(String firstName);

7. Conclusion

In this article, we covered the basics of Apache DeltaSpike and focused on the most exciting part – the Data module. It's very similar to the Spring Data project.

We explored how to implement the repository pattern. We also introduced three ways to define a query to execute.

As always, the complete code examples used in this article are available over on GitHub.

Java EE Servlet Exception Handling


1. Introduction

In this tutorial, we’re going to handle exceptions in a Java EE Servlet application – in order to provide a graceful and expected outcome whenever an error happens.

2. Java EE Servlet Exceptions

First, we’ll define a Servlet using the API annotations (have a look at the Servlets Intro for more details) with a default GET processor that will throw an exception:

@WebServlet(urlPatterns = "/randomError")
public class RandomErrorServlet extends HttpServlet {

    @Override
    protected void doGet(
      HttpServletRequest req, 
      HttpServletResponse resp) {
        throw new IllegalStateException("Random error");
    }
}

3. Default Error Handling

Let’s now simply deploy the application into our servlet container (we’re going to assume that the application runs under http://localhost:8080/javax-servlets).

When we access the address http://localhost:8080/javax-servlets/randomError, we'll see the default servlet error handling in place.

Default error handling is provided by the servlet container and can be customized at a container or application level.

4. Custom Error Handling

We can define custom error handling using a web.xml file descriptor in which we can define the following types of policies:

  • Status code error handling – it allows us to map HTTP error codes (client and server) to a static HTML error page or an error handling servlet
  • Exception type error handling – it allows us to map exception types to static HTML error pages or an error handling servlet

4.1. Status Code Error Handling with an HTML Page

We can set up our custom error handling policy for HTTP 404 errors in the web.xml:

<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
  xmlns="http://java.sun.com/xml/ns/javaee"
  xsi:schemaLocation="http://java.sun.com/xml/ns/javaee 
    http://java.sun.com/xml/ns/javaee/web-app_3_1.xsd"
  version="3.1">

    <error-page>
        <error-code>404</error-code>
        <location>/error-404.html</location> <!-- /src/main/webapp/error-404.html-->
    </error-page>

</web-app>

Now, access http://localhost:8080/javax-servlets/invalid.html from the browser – to get the static HTML error page.

4.2. Exception Type Error Handling with a Servlet

We can set up our custom error handling policy for java.lang.Exception in web.xml:

<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
  xmlns="http://java.sun.com/xml/ns/javaee"
  xsi:schemaLocation="http://java.sun.com/xml/ns/javaee 
    http://java.sun.com/xml/ns/javaee/web-app_3_1.xsd"
  version="3.1">
    <error-page> 
        <exception-type>java.lang.Exception</exception-type> 
        <location>/errorHandler</location> 
    </error-page>
</web-app>

In ErrorHandlerServlet, we can access the error details using the error attributes provided in the request:

@WebServlet(urlPatterns = "/errorHandler")
public class ErrorHandlerServlet extends HttpServlet {

    @Override
    protected void doGet(
      HttpServletRequest req, 
      HttpServletResponse resp) throws IOException {
 
        resp.setContentType("text/html; charset=utf-8");
        try (PrintWriter writer = resp.getWriter()) {
            writer.write("<html><head><title>Error description</title></head><body>");
            writer.write("<h2>Error description</h2>");
            writer.write("<ul>");
            Arrays.asList(
              ERROR_STATUS_CODE, 
              ERROR_EXCEPTION_TYPE, 
              ERROR_MESSAGE)
              .forEach(e ->
                writer.write("<li>" + e + ":" + req.getAttribute(e) + " </li>")
            );
            writer.write("</ul>");
            writer.write("</body></html>");
        }
    }
}

Now, we can access http://localhost:8080/javax-servlets/randomError to see the custom error servlet working.

Note: Our exception type defined in the web.xml is too broad and we should specify all the exceptions we want to handle in more detail.

We can also use the container-provided servlet logger in our ErrorHandlerServlet component to log additional details:

Exception exception = (Exception) req.getAttribute(ERROR_EXCEPTION);
if (IllegalArgumentException.class.isInstance(exception)) {
    getServletContext()
      .log("Error on an application argument", exception);
}

It’s worth knowing what’s beyond the servlet-provided logging mechanisms, check the guide on slf4j for more details.

5. Conclusion

In this brief article, we have seen default error handling and specified custom error handling in a servlet application, without adding any external components or libraries.

As always, you can find the source code over on Servlets tutorial repository.

Practical Java Examples of the Big O Notation


1. Overview

In this tutorial, we’ll talk about what Big O Notation means. We’ll go through a few examples to investigate its effect on the running time of your code.

2. The Intuition of Big O Notation

We often hear the performance of an algorithm described using Big O Notation.

The study of the performance of algorithms – or algorithmic complexity – falls into the field of algorithm analysis. Algorithm analysis answers the question of how many resources, such as disk space or time, an algorithm consumes.

We’ll be looking at time as a resource. Typically, the less time an algorithm takes to complete, the better.

3. Constant Time Algorithms – O(1)

How does the input size of an algorithm affect its running time? Key to understanding Big O is understanding the rates at which things can grow. The rate in question here is time taken per input size.

Consider this simple piece of code:

int n = 1000;
System.out.println("Hey - your input is: " + n);

Clearly, it doesn’t matter what n is, above. This piece of code takes a constant amount of time to run. It’s not dependent on the size of n.

Similarly:

int n = 1000;
System.out.println("Hey - your input is: " + n);
System.out.println("Hmm.. I'm doing more stuff with: " + n);
System.out.println("And more: " + n);

The above example is also constant time. Even if it takes 3 times as long to run, it doesn’t depend on the size of the input, n. We denote constant time algorithms as follows: O(1). Note that O(2), O(3) or even O(1000) would mean the same thing.

We don’t care about exactly how long it takes to run, only that it takes constant time.

4. Logarithmic Time Algorithms – O(log n)

Constant time algorithms are (asymptotically) the quickest. Logarithmic time is the next quickest. Unfortunately, they’re a bit trickier to imagine.

One common example of a logarithmic time algorithm is the binary search algorithm. To see how to implement binary search in Java, click here.

What is important here is that the running time grows in proportion to the logarithm of the input (in this case, log to the base 2):

for (int i = 1; i < n; i = i * 2){
    System.out.println("Hey - I'm busy looking at: " + i);
}

If n is 8, the output will be the following:

Hey - I'm busy looking at: 1
Hey - I'm busy looking at: 2
Hey - I'm busy looking at: 4

Our simple algorithm ran log(8) = 3 times.
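We can verify this growth rate by counting iterations for different input sizes. The helper below is our own small sketch mirroring the loop above, not part of the original example:

```java
public class LogTimeDemo {

    // Counts how many times the doubling loop from the example runs
    static int countIterations(int n) {
        int count = 0;
        for (int i = 1; i < n; i = i * 2) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countIterations(8));    // 3
        System.out.println(countIterations(1024)); // 10
    }
}
```

Doubling n from 1024 to 2048 adds only a single extra iteration – the hallmark of logarithmic growth.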

5. Linear Time Algorithms – O(n)

After logarithmic time algorithms, we get the next fastest class: linear time algorithms.

If we say something grows linearly, we mean that it grows directly proportional to the size of its inputs.

Think of a simple for loop:

for (int i = 0; i < n; i++) {
    System.out.println("Hey - I'm busy looking at: " + i);
}

How many times does this for loop run? n times, of course! We don’t know exactly how long it will take for this to run – and we don’t worry about that.

What we do know is that the simple algorithm presented above will grow linearly with the size of its input.

We'd prefer a run time of 0.1n to (1000n + 1000), but both are still linear algorithms; they both grow directly in proportion to the size of their inputs.

Again, if the algorithm was changed to the following:

for (int i = 0; i < n; i++) {
    System.out.println("Hey - I'm busy looking at: " + i);
    System.out.println("Hmm.. Let's have another look at: " + i);
    System.out.println("And another: " + i);
}

The runtime would still be linear in the size of its input, n. We denote linear algorithms as follows: O(n).

As with the constant time algorithms, we don’t care about the specifics of the runtime. O(2n+1) is the same as O(n), as Big O Notation concerns itself with growth for input sizes.

6. N Log N Time Algorithms – O(n log n)

n log n is the next class of algorithms. The running time grows in proportion to n log n of the input:

for (int i = 1; i <= n; i++){
    for(int j = 1; j < 8; j = j * 2) {
        System.out.println("Hey - I'm busy looking at: " + i + " and " + j);
    }
}

For example, if n is 8, then this algorithm will run 8 * log(8) = 8 * 3 = 24 times. Whether we have strict inequality or not in the for loop is irrelevant for Big O Notation.
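As a sanity check, we can count the total iterations of the nested loop above with a small helper of our own (not part of the original snippet):

```java
public class NLogNDemo {

    // Total iterations: n outer passes, each running the inner
    // doubling loop log(8) = 3 times
    static int countIterations(int n) {
        int count = 0;
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j < 8; j = j * 2) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countIterations(8)); // 8 * 3 = 24
    }
}
```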

7. Polynomial Time Algorithms – O(n^p)

Next up we’ve got polynomial time algorithms. These algorithms are even slower than n log n algorithms.

The term polynomial is a general term which contains quadratic (n^2), cubic (n^3), quartic (n^4), etc. functions. What's important to know is that O(n^2) is faster than O(n^3), which is faster than O(n^4), etc.

Let’s have a look at a simple example of a quadratic time algorithm:

for (int i = 1; i <= n; i++) {
    for(int j = 1; j <= n; j++) {
        System.out.println("Hey - I'm busy looking at: " + i + " and " + j);
    }
}

For n = 8, this algorithm will run 8^2 = 64 times. Note that if we were to nest another for loop, this would become an O(n^3) algorithm.
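Counting the iterations of the doubly-nested loop makes the quadratic growth visible (this counter is our own illustration):

```java
public class QuadraticDemo {

    // Counts iterations of the doubly-nested loop: n * n
    static int countIterations(int n) {
        int count = 0;
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= n; j++) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countIterations(8));  // 64
        System.out.println(countIterations(16)); // 256
    }
}
```

Notice that doubling n quadruples the work – exactly what O(n^2) predicts.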

8. Exponential Time Algorithms – O(k^n)

Now we are getting into dangerous territory; these algorithms grow in proportion to some factor exponentiated by the input size.

For example, O(2^n) algorithms double with every additional input. So, if n = 2, these algorithms will run four times; if n = 3, they will run eight times (kind of like the opposite of logarithmic time algorithms).

O(3^n) algorithms triple with every additional input, and O(k^n) algorithms get k times bigger with every additional input.

Let’s have a look at a simple example of an O(2^n) time algorithm:

for (int i = 1; i <= Math.pow(2, n); i++){
    System.out.println("Hey - I'm busy looking at: " + i);
}

For n = 8, this algorithm will run 2^8 = 256 times.

9. Factorial Time Algorithms – O(n!)

In most cases, this is pretty much as bad as it’ll get. This class of algorithms has a run time proportional to the factorial of the input size.

A classic example of this is solving the traveling salesman problem using a brute-force approach to solve it.

An explanation of the solution to the traveling salesman problem is beyond the scope of this article.

Instead, let’s look at a simple O(n!) algorithm, as in the previous sections:

for (int i = 1; i <= factorial(n); i++){
    System.out.println("Hey - I'm busy looking at: " + i);
}

where factorial(n) simply calculates n!. If n is 8, this algorithm will run 8! = 40320 times.
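The factorial helper itself isn't shown above; a minimal sketch of it (our own version, using long, which overflows past n = 20) could be:

```java
public class FactorialDemo {

    // Computes n! iteratively; result overflows long for n > 20
    static long factorial(int n) {
        long result = 1;
        for (int i = 2; i <= n; i++) {
            result *= i;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(factorial(8)); // 40320
    }
}
```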

10. Asymptotic Functions

Big O is what is known as an asymptotic function. All this means is that it concerns itself with the performance of an algorithm at the limit – i.e., when a lot of input is thrown at it.

Big O doesn’t care about how well your algorithm does with inputs of small size. It’s concerned with large inputs (think sorting a list of one million numbers vs. sorting a list of 5 numbers).

Another thing to note is that there are other asymptotic functions. Big Θ (theta) and Big Ω (omega) both also describe algorithms at the limit (remember, the limit just means for huge inputs).

To understand the differences between these 3 important functions, we first need to know that each of Big O, Big Θ, and Big Ω describes a set (i.e., a collection of elements).

Here, the members of our sets are algorithms themselves:

  • Big O describes the set of all algorithms that run no worse than a certain speed (it’s an upper bound)
  • Conversely, Big Ω describes the set of all algorithms that run no better than a certain speed (it’s a lower bound)
  • Finally, Big Θ describes the set of all algorithms that run at a certain speed (it’s like equality)

The definitions we’ve put above are not mathematically accurate, but they will aid our understanding.

Usually, you’ll hear things described using Big O, but it doesn’t hurt to know about Big Θ and Big Ω.

11. Conclusion

In this article, we discussed Big O notation, and how understanding the complexity of an algorithm can affect the running time of your code.

A great visualization of the different complexity classes can be found here.

As usual, the code snippets for this tutorial can be found over on GitHub.

Optimistic Locking in JPA


1. Introduction

When it comes to enterprise applications, it’s crucial to manage concurrent access to a database properly. This means we should be able to handle multiple transactions in an effective and most importantly, error-proof way.

What’s more, we need to ensure that data stays consistent between concurrent reads and updates.

To achieve that, we can use the optimistic locking mechanism provided by the Java Persistence API. It ensures that multiple updates made to the same data at the same time do not interfere with each other.

2. Understanding Optimistic Locking

In order to use optimistic locking, we need to have an entity including a property annotated with @Version. Each transaction that reads data holds the value of this version property.

Before the transaction wants to make an update, it checks the version property again.

If the value has changed in the meantime, an OptimisticLockException is thrown. Otherwise, the transaction commits the update and increments the value of the version property.

3. Pessimistic Locking vs. Optimistic Locking

It's good to know that, in contrast to optimistic locking, JPA also gives us pessimistic locking. It's another mechanism for handling concurrent access to data.

We cover pessimistic locking in one of our previous articles – Pessimistic Locking in JPA. Let’s find out what’s the difference and how we can benefit from each type of locking.

As we've said before, optimistic locking is based on detecting changes to entities by checking their version attribute. If any concurrent update takes place, an OptimisticLockException occurs. After that, we can retry updating the data.

We can imagine that this mechanism is suitable for applications which do much more reads than updates or deletes. What is more, it’s useful in situations where entities must be detached for some time and locks cannot be held.

On the contrary, the pessimistic locking mechanism involves locking entities at the database level.

Each transaction can acquire a lock on data. As long as it holds the lock, no transaction can read, delete or make any updates on the locked data. We can presume that using pessimistic locking may result in deadlocks. However, it ensures greater integrity of data than optimistic locking.

4. Version Attributes

Version attributes are properties with @Version annotation. They are necessary for enabling optimistic locking. Let’s see a sample entity class:

@Entity
public class Student {

    @Id
    private Long id;

    private String name;

    private String lastName;

    @Version
    private Integer version;

    // getters and setters

}

There are several rules which we should follow while declaring version attributes:

  • each entity class must have only one version attribute
  • it must be placed in the primary table for an entity mapped to several tables
  • type of a version attribute must be one of the following: int, Integer, long, Long, short, Short, java.sql.Timestamp

We should know that we can retrieve a value of the version attribute via entity, but we mustn’t update or increment it. Only the persistence provider can do that, so data stays consistent.

It’s worth noticing that persistence providers are able to support optimistic locking for entities which don’t have version attributes. Yet, it’s a good idea to always include version attributes when working with optimistic locking.

If we try to lock an entity which doesn't contain such an attribute and the persistence provider doesn't support it, it will result in a PersistenceException.

5. Lock Modes

JPA provides us with two different optimistic lock modes (and two aliases):

  • OPTIMISTIC – it obtains an optimistic read lock for all entities containing a version attribute
  • OPTIMISTIC_FORCE_INCREMENT – it obtains an optimistic lock the same as OPTIMISTIC and additionally increments the version attribute value
  • READ – it’s a synonym for OPTIMISTIC
  • WRITE – it’s a synonym for OPTIMISTIC_FORCE_INCREMENT

We can find all types listed above in the LockModeType class.

5.1. OPTIMISTIC (READ)

As we already know, OPTIMISTIC and READ lock modes are synonyms. However, JPA specification recommends us to use OPTIMISTIC in new applications.

Whenever we request the OPTIMISTIC lock mode, a persistence provider will prevent our data from dirty reads as well as non-repeatable reads.

Put simply, it should ensure any transaction fails to commit any modification on data that another transaction:

  • has updated or deleted but not committed
  • has updated or deleted successfully in the meantime

5.2. OPTIMISTIC_FORCE_INCREMENT (WRITE)

The same as previously, OPTIMISTIC_FORCE_INCREMENT and WRITE are synonyms, but the former is preferable.

OPTIMISTIC_FORCE_INCREMENT must meet the same conditions as the OPTIMISTIC lock mode. Additionally, it increments the value of the version attribute. However, it's not specified whether this should happen immediately or whether it may be put off until commit or flush.

It's worth knowing that a persistence provider is allowed to provide OPTIMISTIC_FORCE_INCREMENT functionality when the OPTIMISTIC lock mode is requested.

6. Using Optimistic Locking

We should remember that for versioned entities optimistic locking is available by default. Yet there are several ways of requesting it explicitly.

6.1. Find

To request optimistic locking, we can pass the proper LockModeType as an argument to the find method of EntityManager:

entityManager.find(Student.class, studentId, LockModeType.OPTIMISTIC);

6.2. Query

Another way to enable locking is using the setLockMode method of Query object:

Query query = entityManager.createQuery("from Student where id = :id");
query.setParameter("id", studentId);
query.setLockMode(LockModeType.OPTIMISTIC_FORCE_INCREMENT);
query.getResultList();

6.3. Explicit Locking

We can set a lock by calling EntityManager's lock method:

Student student = entityManager.find(Student.class, id);
entityManager.lock(student, LockModeType.OPTIMISTIC);

6.4. Refresh

We can call the refresh method in the same way:

Student student = entityManager.find(Student.class, id);
entityManager.refresh(student, LockModeType.READ);

6.5. NamedQuery

The last option is to use @NamedQuery with the lockMode property:

@NamedQuery(name="optimisticLock",
  query="SELECT s FROM Student s WHERE s.id LIKE :id",
  lockMode = WRITE)

7. OptimisticLockException

When the persistence provider discovers an optimistic locking conflict on an entity, it throws an OptimisticLockException. We should be aware that, due to the exception, the active transaction is always marked for rollback.

It’s good to know how we can react to OptimisticLockException. Conveniently, this exception contains a reference to the conflicting entity. However, it’s not mandatory for the persistence provider to supply it in every situation. There is no guarantee that the object will be available.

There is a recommended way of handling the described exception, though. We should retrieve the entity again by reloading or refreshing it, preferably in a new transaction. After that, we can try to update it once more.

8. Conclusion

In this tutorial, we got familiar with a tool which can help us orchestrate concurrent transactions. Optimistic locking uses version attributes included in entities to control concurrent modifications on them.

Therefore, it ensures that any updates or deletes won't be overwritten or lost silently. Unlike pessimistic locking, it doesn't lock entities at the database level and, consequently, isn't vulnerable to DB deadlocks.

We’ve learned that optimistic locking is enabled for versioned entities by default. However, there are several ways of requesting it explicitly by using various lock mode types.

Another fact we should remember is that each time there are conflicting updates on entities, we should expect an OptimisticLockException.

Lastly, the source code of this tutorial is available over on GitHub.


Java System.getProperty vs System.getenv


1. Introduction

The java.lang package is automatically imported in every Java application. This package contains many commonly used classes, from NullPointerException to Object, Math, and String.

The java.lang.System class is final, so it cannot be subclassed. Its constructor is private, so we cannot create an instance of it, and all of its methods are static.

We are going to look at the differences between two System methods for reading system properties and environment variables.

These methods are getProperty and getenv.

2. Using System.getProperty()

The Java platform uses a Properties object to provide information about the local system and configuration; we call these System Properties.

System Properties include information such as the current user, the current version of the Java runtime, and file path-name separator.

In the code below, we use System.getProperty("log_dir") to read the value of the log_dir property. We also make use of the default value parameter, so if the property does not exist, getProperty returns /tmp/log:

String log_dir = System.getProperty("log_dir","/tmp/log");

To update System Properties at runtime, we use the System.setProperty method:

System.setProperty("log_dir", "/tmp/log");

We can pass our own properties or configuration values to the application using the -DpropertyName command line argument. Note that the -D option must come before -jar; otherwise, it's passed to the application as a plain program argument:

java -DpropertyName=value -jar jarName

Setting the property foo to the value bar for app.jar:

java -Dfoo="bar" -jar app.jar

System.getProperty will always return a String.
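Putting this section together, here's a small runnable sketch; the property name log_dir comes from the article's example, while the /var/log/app value is just our illustration:

```java
public class SystemPropertyDemo {

    public static void main(String[] args) {
        // Not set yet (assuming it wasn't passed via -D),
        // so the default value is returned
        String before = System.getProperty("log_dir", "/tmp/log");
        System.out.println(before);

        // After setting it, the real value wins over the default
        System.setProperty("log_dir", "/var/log/app");
        String after = System.getProperty("log_dir", "/tmp/log");
        System.out.println(after);
    }
}
```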

3. Using System.getenv()

Environment Variables are key/value pairs like Properties. Many Operating Systems use Environment Variables to allow configuration information to be passed into applications.

The way to set an environment variable differs from one operating system to another. For example, in Windows, we use a System Utility application from the control panel while in Unix we use shell scripts.

When creating a process, by default it inherits a clone environment of its parent process.

The following code snippet shows how to use a lambda expression to print all Environment Variables:

System.getenv().forEach((k, v) -> {
    System.out.println(k + ":" + v);
});

getenv() returns a read-only Map. Trying to add values to the map throws an UnsupportedOperationException.
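We can observe this read-only behavior directly with a quick sketch of our own:

```java
import java.util.Map;

public class GetenvDemo {

    // Returns true if the environment map rejects modification
    static boolean isReadOnly() {
        Map<String, String> env = System.getenv();
        try {
            env.put("log_dir", "/tmp/log");
            return false;
        } catch (UnsupportedOperationException e) {
            return true; // getenv() returns an unmodifiable map
        }
    }

    public static void main(String[] args) {
        System.out.println(isReadOnly()); // true
    }
}
```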

To obtain a single variable, call getenv with the variable name:

String log_dir = System.getenv("log_dir");

On the other hand, we can create another process from our application and add new variables to its environment.

To create a new process in Java, we use the ProcessBuilder class, which has a method called environment. This method also returns a Map, but this time the map is not read-only, meaning we can add elements to it:

ProcessBuilder pb = new ProcessBuilder(args);
Map<String, String> env = pb.environment();
env.put("log_dir", "/tmp/log");
Process process = pb.start();

4. The Differences

Although both are essentially maps that provide String values for String keys, let's look at a few differences:

  1. We can update Properties at runtime, while Environment Variables are an immutable copy of the Operating System’s variables.
  2. Properties are contained only within the Java platform, while Environment Variables are global at the Operating System level – available to all applications running on the same machine.
  3. Properties are typically set when launching the application, but we can create Environment Variables on the Operating System at almost any point.

5. Conclusion

Although conceptually similar, the application of both Properties and Environment Variables are quite different.

The choice between the two is often a question of scope. Using Environment Variables, the same application can be deployed to multiple machines to run different instances, and can be configured at the Operating System level, or even in the AWS or Azure consoles. This removes the need to rebuild the application to update its configuration.

Always remember that getProperty follows camel-case convention and getenv does not.

Get and Post Lists of Objects with RestTemplate


1. Introduction

The RestTemplate class is the central tool for performing client-side HTTP operations in Spring. It provides several utility methods for building HTTP requests and handling responses.

And, since RestTemplate integrates well with Jackson, it can serialize/deserialize most objects to and from JSON without much effort. However, working with collections of objects is not so straightforward.

In this tutorial, we’ll see how to use RestTemplate to both GET and POST a list of objects.

2. Example Service

We will be using an employee API that has two HTTP endpoints – get all and create:

  • GET /employees
  • POST /employees

For communication between client and server, we’ll use a simple DTO to encapsulate basic employee data:

public class Employee {
    public long id;
    public String title;

    // standard constructor and setters/getters
}

We’re now ready to write code that uses RestTemplate to get and create lists of Employee objects.

3. Get a List of Objects with RestTemplate

Normally, when calling GET, we can use one of the simplified methods in RestTemplate, such as:

getForObject(URI url, Class<T> responseType)

This sends a request to the specified URI using the GET verb and converts the response body into the requested Java type. This works great for most classes, but it has a limitation: we cannot send lists of objects.

The problem is due to type erasure with Java generics. When the application is running, it has no knowledge of what type of object is in the list. This means the data in the list cannot be deserialized into the appropriate type.
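This erasure is easy to observe in plain Java: at runtime, two lists with different element types share exactly the same class, so there is nothing for the deserializer to inspect (a minimal sketch):

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<>();
        List<Integer> numbers = new ArrayList<>();

        // Both are plain ArrayList at runtime; the element type is erased
        System.out.println(strings.getClass() == numbers.getClass()); // true
    }
}
```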

Luckily we have two options to get around this.

3.1. Using ParameterizedTypeReference

Using a combination of ResponseEntity and ParameterizedTypeReference, we can easily get a list of objects using RestTemplate:

RestTemplate restTemplate = new RestTemplate();
ResponseEntity<List<Employee>> response = restTemplate.exchange(
  "http://localhost:8080/employees/",
  HttpMethod.GET,
  null,
  new ParameterizedTypeReference<List<Employee>>(){});
List<Employee> employees = response.getBody();

There are a couple of things happening in the code above. First, we use ResponseEntity as our return type, using it to wrap the list of objects we really want. Second, we are calling RestTemplate.exchange() instead of getForObject().

This is the most generic way to use RestTemplate. It requires us to specify the HTTP method, optional request body, and a response type. In this case, we use an anonymous subclass of ParameterizedTypeReference for the response type.

This last part is what allows us to convert the JSON response into a list of objects that are the appropriate type. When we create an anonymous subclass of ParameterizedTypeReference, it uses reflection to capture information about the class type we want to convert our response to.

It holds on to this information using Java’s Type object, and we no longer have to worry about type erasure.
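The capture mechanism can be reproduced with plain reflection: an anonymous subclass of a generic class keeps its type argument in the generic superclass signature. Here's a minimal sketch (TypeCapture is a hypothetical stand-in, not Spring's ParameterizedTypeReference):

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.List;

// Hypothetical stand-in for ParameterizedTypeReference
abstract class TypeCapture<T> {
    Type captured() {
        // The anonymous subclass records T in its generic superclass signature
        ParameterizedType superType =
          (ParameterizedType) getClass().getGenericSuperclass();
        return superType.getActualTypeArguments()[0];
    }
}

public class CaptureDemo {
    public static void main(String[] args) {
        Type listOfString = new TypeCapture<List<String>>() {}.captured();
        System.out.println(listOfString.getTypeName()); // java.util.List<java.lang.String>
    }
}
```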

The upside of this approach is that we can use all of our existing code. Our Employee object works as-is, as does our application endpoint. The downside is that the code is a little more verbose.

3.2. Using a Wrapper Class

Some APIs will return a top-level object that contains the list of employees instead of returning the list directly. To handle this situation, we can use a wrapper class that contains the list of employees.

public class EmployeeList {
    private List<Employee> employees;

    public EmployeeList() {
        employees = new ArrayList<>();
    }

    // standard constructor and getter/setter
}

Now we can use the simpler getForObject() method to get the list of employees:

EmployeeList response = restTemplate.getForObject(
  "http://localhost:8080/employees",
  EmployeeList.class);
List<Employee> employees = response.getEmployees();

This code is much simpler but requires an additional wrapper object.

4. Post a List of Objects with RestTemplate

Now let’s look at how to send a list of objects from our client to the server. Just like above, RestTemplate provides a simplified method for calling POST:

postForObject(URI url, Object request, Class<T> responseType)

This sends an HTTP POST to the given URI, with the optional request body, and converts the response into the specified type. Unlike the GET scenario above, we don’t have to worry about type erasure.

This is because now we're going from Java objects to JSON. The list of objects and their type are known by the JVM, so they can be properly serialized:

List<Employee> newEmployees = new ArrayList<>();
newEmployees.add(new Employee(3, "Intern"));
newEmployees.add(new Employee(4, "CEO"));

restTemplate.postForObject(
  "http://localhost:8080/employees/",
  newEmployees,
  ResponseEntity.class);

4.1. Using a Wrapper Class

If we need to use a wrapper class to be consistent with the GET scenario above, that's simple too. We can send a new list using RestTemplate:

List<Employee> newEmployees = new ArrayList<>();
newEmployees.add(new Employee(3, "Intern"));
newEmployees.add(new Employee(4, "CEO"));

restTemplate.postForObject(
  "http://localhost:8080/employees",
  new EmployeeList(newEmployees),
  ResponseEntity.class);

5. Conclusion

Using RestTemplate is a simple way of building HTTP clients to communicate with your services.

It provides a number of methods for working with every HTTP method and simple objects. With a little bit of extra code, we can easily use it to work with lists of objects.

As usual, the complete code is available in the GitHub project.

Docker Test Containers in Java Tests


1. Introduction

In this tutorial, we’ll be looking at the Java TestContainers library. It allows us to use Docker containers within our tests. As a result, we can write self-contained integration tests that depend on external resources.

We can use any resource in our tests that has a Docker image. For example, there are images for databases, web browsers, web servers, and message queues. Therefore, we can run them as containers within our tests.

2. Requirements

The TestContainers library can be used with Java 8 and higher. Besides, it is compatible with the JUnit Rules API.

First, let's define the Maven dependency for the core functionality:

<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>testcontainers</artifactId>
    <version>1.8.0</version>
</dependency>

There are also modules for specialized containers. In this tutorial, we’ll be using PostgreSQL and Selenium. 

Let’s add the relevant dependencies:

<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>postgresql</artifactId>
    <version>1.8.0</version>
</dependency>
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>selenium</artifactId>
    <version>1.8.0</version>
</dependency>

We can find the latest versions on Maven Central.

Also, we need Docker to run containers. Refer to Docker documentation for installation instructions.

Make sure you’re able to run Docker containers in your test environment.

3. Usage

Let’s configure a generic container rule:

@ClassRule
public static GenericContainer simpleWebServer
 = new GenericContainer("alpine:3.2")
   .withExposedPorts(80)
   .withCommand("/bin/sh", "-c", "while true; do echo "
     + "\"HTTP/1.1 200 OK\n\nHello World!\" | nc -l -p 80; done");

We construct a GenericContainer test rule by specifying a docker image name. Then, we configure it with builder methods:

  • We use withExposedPorts to expose a port from the container
  • withCommand defines a container command. It will be executed when the container starts.

The rule is annotated with @ClassRule. As a result, it will start the Docker container before any test in that class runs. The container will be destroyed after all methods are executed.

If we apply the @Rule annotation, the GenericContainer rule will start a new container for each test method, and it will stop the container when that test method finishes.

We can use IP address and port to communicate with the process running in the container:

@Test
public void givenSimpleWebServerContainer_whenGetRequest_thenReturnsResponse()
  throws Exception {
    String address = "http://" 
      + simpleWebServer.getContainerIpAddress() 
      + ":" + simpleWebServer.getMappedPort(80);
    String response = simpleGetRequest(address);
    
    assertEquals("Hello World!", response);
}
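The simpleGetRequest helper used above isn't shown in the article; a possible implementation with plain HttpURLConnection might look like this (a sketch under the assumption that the endpoint returns a small text body):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.stream.Collectors;

public class SimpleHttp {

    // Performs a GET request and returns the response body as a String
    public static String simpleGetRequest(String address) throws Exception {
        HttpURLConnection connection =
          (HttpURLConnection) new URL(address).openConnection();
        connection.setRequestMethod("GET");

        try (BufferedReader reader = new BufferedReader(
          new InputStreamReader(connection.getInputStream()))) {
            return reader.lines().collect(Collectors.joining("\n"));
        }
    }
}
```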

4. Usage Modes

There are several usage modes of the test containers. We saw an example of running a GenericContainer.

The TestContainers library also has rule definitions with specialized functionality. These exist for containers of common databases, like MySQL and PostgreSQL, and for others like web clients.

Although we can run them as generic containers, the specializations provide extended convenience methods.

4.1. Databases

Let’s assume we need a database server for data-access-layer integration tests. We can run databases in containers with the help of TestContainers library.

For example, we can fire up a PostgreSQL container with the PostgreSQLContainer rule. Then we're able to use the helper methods getJdbcUrl, getUsername, and getPassword for the database connection:

@Rule
public PostgreSQLContainer postgresContainer = new PostgreSQLContainer();

@Test
public void whenSelectQueryExecuted_thenResultsReturned()
  throws Exception {
    String jdbcUrl = postgresContainer.getJdbcUrl();
    String username = postgresContainer.getUsername();
    String password = postgresContainer.getPassword();
    Connection conn = DriverManager
      .getConnection(jdbcUrl, username, password);
    ResultSet resultSet = 
      conn.createStatement().executeQuery("SELECT 1");
    resultSet.next();
    int result = resultSet.getInt(1);
    
    assertEquals(1, result);
}

It is also possible to run PostgreSQL as a generic container. But it’d be more difficult to configure the connection.

4.2. Web Drivers

Another useful scenario is to run containers with web browsers. BrowserWebDriverContainer rule enables running Chrome and Firefox in docker-selenium containers. Then, we manage them with RemoteWebDriver. 

This is very useful for automating UI/Acceptance tests for web applications:

@Rule
public BrowserWebDriverContainer chrome
  = new BrowserWebDriverContainer()
    .withDesiredCapabilities(DesiredCapabilities.chrome());

@Test
public void whenNavigatedToPage_thenHeadingIsInThePage() {
    RemoteWebDriver driver = chrome.getWebDriver();
    driver.get("https://saucelabs.com/test/guinea-pig");
    String heading = driver
      .findElement(By.xpath("/html/body/h1")).getText();
 
    assertEquals("This page is a Selenium sandbox", heading);
}

4.3. Docker Compose

If the tests require more complex services, we can specify them in a docker-compose file:

simpleWebServer:
  image: alpine:3.2
  command: ["/bin/sh", "-c", "while true; do echo 'HTTP/1.1 200 OK\n\nHello World!' | nc -l -p 80; done"]

Then, we use DockerComposeContainer rule. This rule will start and run services as defined in the compose file.

We use the getServiceHost and getServicePort methods to build the connection address to the service:

@ClassRule
public static DockerComposeContainer compose = 
  new DockerComposeContainer(
    new File("src/test/resources/test-compose.yml"))
      .withExposedService("simpleWebServer_1", 80);

@Test
public void givenSimpleWebServerContainer_whenGetRequest_thenReturnsResponse()
  throws Exception {
 
    String address = "http://" + compose.getServiceHost("simpleWebServer_1", 80)
      + ":" + compose.getServicePort("simpleWebServer_1", 80);
    String response = simpleGetRequest(address);
    
    assertEquals("Hello World!", response);
}

5. Conclusion

We saw how we could use TestContainers library. It eases developing and running integration tests.

We used GenericContainer rule for containers of given docker images. Then, we looked at PostgreSQLContainer, BrowserWebDriverContainer and DockerComposeContainer rules. They give more functionality for specific use cases.

Finally, code samples here can be found over on GitHub.

Introduction to Joda-Time


1. Introduction

Joda-Time was the most widely used date and time processing library before the release of Java 8. Its purpose was to offer an intuitive API for processing dates and times, and also to address the design issues that existed in the Java Date/Time API.

The central concepts implemented in this library were introduced in the JDK core with the release of the Java 8 version. The new date and time API is found in the java.time package (JSR-310). An overview of these features can be found in this article.

After the release of Java 8, the authors consider the project to be mostly finished and advise using the Java 8 API if possible.

2. Why Use Joda-Time?

The date/time API, before Java 8, presented multiple design problems.

Among the problems is the fact that the Date and SimpleDateFormat classes aren’t thread-safe. To address this issue, Joda-Time uses immutable classes for handling date and time.

The Date class doesn’t represent an actual date; instead, it specifies an instant in time with millisecond precision. The year in a Date starts from 1900, while most date operations usually use Epoch time, which starts from January 1st, 1970.

Also, the month and day-of-month offsets of a Date are counterintuitive: months start at 0, while days begin at 1. To access any of them, we must use the Calendar class. Joda-Time offers a clean and fluent API for handling dates and times.

Joda-Time also offers support for eight calendar systems, while Java offers just two: Gregorian (java.util.GregorianCalendar) and Japanese (java.util.JapaneseImperialCalendar).

3. Setup

To include the functionality of the Joda-Time library, we need to add the following dependency from Maven Central:

<dependency>
    <groupId>joda-time</groupId>
    <artifactId>joda-time</artifactId>
    <version>2.10</version>
</dependency>

4. Library Overview

Joda-Time models the concept of date and time using the classes in the org.joda.time package.

Among those classes the most commonly used are:

  • LocalDate – represents a date without time
  • LocalTime – represents the time without the time zone
  • LocalDateTime – represents both the date and time without a time zone
  • Instant – represents an exact point in time in milliseconds from the Java epoch of 1970-01-01T00:00:00Z
  • Duration – represents the duration in milliseconds between 2 points in time
  • Period – similar to Duration, but allowing access to individual components of the date and time object, like years, month, days, etc.
  • Interval – represents the time interval between 2 instants

Other important features are the date parsers and formatters. These can be found in the org.joda.time.format package.

The calendar system and time zone specific classes can be found in the org.joda.time.chrono and org.joda.time.tz packages.

Let’s take a look at some examples in which we use the key features of Joda-Time to handle date and time.

5. Representing Date and Time

5.1. Current Date and Time

The current date, without time information, can be obtained by using the now() method from the LocalDate class:

LocalDate currentDate = LocalDate.now();

When we need just the current time, without date information, we can use the LocalTime class:

LocalTime currentTime = LocalTime.now();

To obtain a representation of the current date and time without considering the time zone, we can use LocalDateTime:

LocalDateTime currentDateAndTime = LocalDateTime.now();

Now, using currentDateAndTime, we can convert it to the other types of objects modeling date and time.

We can obtain a DateTime object (which takes into account the time zone) by using the method toDateTime(). When time is not necessary we can convert it to a LocalDate with the method toLocalDate(), and when we only need the time we can use toLocalTime() to obtain a LocalTime object:

DateTime dateTime = currentDateAndTime.toDateTime();
LocalDate localDate = currentDateAndTime.toLocalDate();
LocalTime localTime = currentDateAndTime.toLocalTime();

All the above methods have an overloaded method which accepts a DateTimeZone object to help us represent the date or time in the specified time zone:

LocalDate currentDate = LocalDate.now(DateTimeZone.forID("America/Chicago"));

Also, Joda-Time offers excellent integration with the Java Date and Time API. The constructors accept a java.util.Date object and also, we can use the toDate() method to return a java.util.Date object:

LocalDateTime currentDateTimeFromJavaDate = new LocalDateTime(new Date());
Date currentJavaDate = currentDateTimeFromJavaDate.toDate();

5.2. Custom Date and Time

To represent custom date and time, Joda-Time provides us with several constructors. We can specify the following objects:

  • an Instant
  • a Java Date object
  • a String representation of the date and time using the ISO format
  • parts of the date and time: year, month, day, hour, minute, second, millisecond

Date oneMinuteAgoDate = new Date(System.currentTimeMillis() - (60 * 1000));
Instant oneMinuteAgoInstant = new Instant(oneMinuteAgoDate);

DateTime customDateTimeFromInstant = new DateTime(oneMinuteAgoInstant);
DateTime customDateTimeFromJavaDate = new DateTime(oneMinuteAgoDate);
DateTime customDateTimeFromString = new DateTime("2018-05-05T10:11:12.123");
DateTime customDateTimeFromParts = new DateTime(2018, 5, 5, 10, 11, 12, 123);

Another way we can define a custom date and time is by parsing a given String representation of a date and time in the ISO format:

DateTime parsedDateTime = DateTime.parse("2018-05-05T10:11:12.123");

We can also parse custom representations of a date and time by defining a custom DateTimeFormatter:

DateTimeFormatter dateTimeFormatter
  = DateTimeFormat.forPattern("MM/dd/yyyy HH:mm:ss");
DateTime parsedDateTimeUsingFormatter
  = DateTime.parse("05/05/2018 10:11:12", dateTimeFormatter);

6. Working with Date and Time

6.1. Using Instant

An Instant represents the number of milliseconds from 1970-01-01T00:00:00Z until a given moment in time. For example, the current moment in time can be obtained using the default constructor or the method now():

Instant instant = new Instant();
Instant instantNow = Instant.now();

To create an Instant for a custom moment in time we can use either one of the constructors or use the methods ofEpochMilli() and ofEpochSecond():

Instant instantFromEpochMilli
  = Instant.ofEpochMilli(milliesFromEpochTime);
Instant instantFromEpocSeconds
  = Instant.ofEpochSecond(secondsFromEpochTime);

The constructors accept a String representing a date and time in the ISO format, a Java Date or a long value representing the number of milliseconds from 1970-01-01T00:00:00Z:

Instant instantFromString
  = new Instant("2018-05-05T10:11:12");
Instant instantFromDate
  = new Instant(oneMinuteAgoDate);
Instant instantFromTimestamp
  = new Instant(System.currentTimeMillis() - (60 * 1000));

When date and time are represented as a String we have the option to parse the String using our desired format:

Instant parsedInstant
  = Instant.parse("05/05/2018 10:11:12", dateTimeFormatter);

Now that we know what Instant represents and how we can create one, let’s see how it can be used.

To compare two Instant objects, we can use compareTo() because Instant implements the Comparable interface. But we can also use the Joda-Time API methods provided in the ReadableInstant interface, which Instant also implements:

assertTrue(instantNow.compareTo(oneMinuteAgoInstant) > 0);
assertTrue(instantNow.isAfter(oneMinuteAgoInstant));
assertTrue(oneMinuteAgoInstant.isBefore(instantNow));
assertTrue(oneMinuteAgoInstant.isBeforeNow());
assertFalse(oneMinuteAgoInstant.isEqual(instantNow));

Another helpful feature is that an Instant can be converted to a DateTime object or even a Java Date:

DateTime dateTimeFromInstant = instant.toDateTime();
Date javaDateFromInstant = instant.toDate();

When we need to access parts of a date and time, like the year, the hour and so on, we can use the get() method and specify a DateTimeField:

int year = instant.get(DateTimeFieldType.year());
int month = instant.get(DateTimeFieldType.monthOfYear());
int day = instant.get(DateTimeFieldType.dayOfMonth());
int hour = instant.get(DateTimeFieldType.hourOfDay());

Now that we covered the Instant class let’s see some examples of how we can use Duration, Period and Interval.

6.2. Using Duration, Period and Interval

A Duration represents the time in milliseconds between two points in time, in this case two Instants. We’ll use it when we need to add or subtract a specific amount of time to or from another Instant without considering chronology and time zones:

long currentTimestamp = System.currentTimeMillis();
long oneHourAgo = currentTimestamp - 60*60*1000;
Duration duration = new Duration(oneHourAgo, currentTimestamp);
Instant.now().plus(duration);

Also, we can determine how many days, hours, minutes, seconds or milliseconds the duration represents:

long durationInDays = duration.getStandardDays();
long durationInHours = duration.getStandardHours();
long durationInMinutes = duration.getStandardMinutes();
long durationInSeconds = duration.getStandardSeconds();
long durationInMilli = duration.getMillis();

The main difference between Period and Duration is that Period is defined in terms of its date and time components (years, months, hours, etc.) and doesn’t represent an exact number of milliseconds. When using Period, date and time calculations will consider the time zone and daylight saving time.

For example, adding a Period of one month to February 1st will result in the date representation of March 1st. By using Period, the library will take leap years into account.

If we were to use a Duration, the result wouldn’t be correct, because a Duration represents a fixed amount of time that doesn’t take chronology or time zones into account:

Period period = new Period().withMonths(1);
LocalDateTime datePlusPeriod = localDateTime.plus(period);
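The same Period-versus-Duration distinction was carried over into java.time, where it can be verified with the standard library alone (a sketch using java.time for illustration, not Joda-Time itself):

```java
import java.time.Duration;
import java.time.LocalDateTime;
import java.time.Period;

public class PeriodVsDurationDemo {
    public static void main(String[] args) {
        LocalDateTime febFirst = LocalDateTime.of(2018, 2, 1, 0, 0);

        // A Period of one month is calendar-aware: Feb 1 + 1 month = Mar 1
        System.out.println(febFirst.plus(Period.ofMonths(1)));  // 2018-03-01T00:00

        // A fixed Duration of 30 days overshoots 28-day February
        System.out.println(febFirst.plus(Duration.ofDays(30))); // 2018-03-03T00:00
    }
}
```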

An Interval, as the name states, represents the date and time interval between two fixed points in time represented by two Instant objects:

Interval interval = new Interval(oneMinuteAgoInstant, instantNow);

The class is useful when we need to check whether two intervals overlap or calculate the gap between them. The overlap() method will return the overlapping Interval or null when they don’t overlap:

Instant startInterval1 = new Instant("2018-05-05T09:00:00.000");
Instant endInterval1 = new Instant("2018-05-05T11:00:00.000");
Interval interval1 = new Interval(startInterval1, endInterval1);
        
Instant startInterval2 = new Instant("2018-05-05T10:00:00.000");
Instant endInterval2 = new Instant("2018-05-05T11:00:00.000");
Interval interval2 = new Interval(startInterval2, endInterval2);

Interval overlappingInterval = interval1.overlap(interval2);

The difference between intervals can be calculated using the gap() method, and when we want to know whether the end of an interval is equal to the start of another interval we can use the abuts() method:

assertTrue(interval1.abuts(new Interval(
  new Instant("2018-05-05T11:00:00.000"),
  new Instant("2018-05-05T13:00:00.000"))));

6.3. Date and Time Operations

Some of the most common operations are to add, subtract and convert date and time. The library provides specific methods for each of the classes LocalDate, LocalTime, LocalDateTime, and DateTime. It’s important to note that these classes are immutable so that every method invocation will create a new object of its type.

Let’s take LocalDateTime for the current moment and try to change its value:

LocalDateTime currentLocalDateTime = LocalDateTime.now();

To add an extra day to currentLocalDateTime we use the plusDays() method:

LocalDateTime nextDayDateTime = currentLocalDateTime.plusDays(1);

We can also use plus() method to add a Period or Duration to our currentLocalDateTime:

Period oneMonth = new Period().withMonths(1);
LocalDateTime nextMonthDateTime = currentLocalDateTime.plus(oneMonth);

The methods are similar for the other date and time components, for example, plusYears() for adding extra years, plusSeconds() for adding more seconds and so on.

To subtract a day from our currentLocalDateTime we can use the minusDays() method:

LocalDateTime previousDayLocalDateTime
  = currentLocalDateTime.minusDays(1);

Besides doing calculations with date and time, we can also set individual parts of the date or time. For example, setting the hour to 10 can be achieved using the withHourOfDay() method. Other methods that start with the prefix “with” can be used to set components of that date or time:

LocalDateTime currentDateAtHour10 = currentLocalDateTime
  .withHourOfDay(10)
  .withMinuteOfHour(0)
  .withSecondOfMinute(0)
  .withMillisOfSecond(0);

Another important aspect is that we can convert from a date and time class type to another. To do this, we can use specific methods provided by the library:

  • toDateTime() – converts LocalDateTime to a DateTime object
  • toLocalDate() – converts LocalDateTime to a LocalDate object
  • toLocalTime() – converts LocalDateTime to a LocalTime object
  • toDate() – converts LocalDateTime to a Java Date object

7. Working with Time Zones

Joda-Time makes it easy for us to work with different time zones and change between them. We have the DateTimeZone abstract class, which is used to represent all aspects of a time zone.

The default time zone used by Joda-Time is selected from the user.timezone Java system property. The library API lets us specify, individually for each class or calculation, which time zone should be used. For example, we can create a LocalDateTime object for a specific time zone by passing a DateTimeZone to its constructor.

When we know that we’ll use a specific time zone across the whole application, we can set the default time zone:

DateTimeZone.setDefault(DateTimeZone.UTC);

From now on, all the date and time operations, if not otherwise specified, will be represented in the UTC time zone.

To see all the available time zones we can use the method getAvailableIDs():

DateTimeZone.getAvailableIDs();

When we need to represent the date or time in a specific time zone we can use any of the classes LocalTime, LocalDate, LocalDateTime, DateTime and specify in the constructor the DateTimeZone object:

DateTime dateTimeInChicago
  = new DateTime(DateTimeZone.forID("America/Chicago"));
DateTime dateTimeInBucharest
  = new DateTime(DateTimeZone.forID("Europe/Bucharest"));
LocalDateTime localDateTimeInChicago
  = new LocalDateTime(DateTimeZone.forID("America/Chicago"));

Also, when converting between those classes we can specify the desired time zone. The method toDateTime() accepts a DateTimeZone object and toDate() accepts java.util.TimeZone object:

DateTime convertedDateTime
  = localDateTimeInChicago.toDateTime(DateTimeZone.forID("Europe/Bucharest"));
Date convertedDate
  = localDateTimeInChicago.toDate(TimeZone.getTimeZone("Europe/Bucharest"));

8. Conclusion

Joda-Time is a fantastic library that started with the main goal of fixing the problems in the JDK regarding date and time operations. It soon became the de facto library for date and time handling, and recently its main concepts were introduced into Java 8.

It’s important to note that the author considers it “to be a largely finished project” and recommends migrating existing code to the Java 8 implementation.

The source code for the article is available over on GitHub.

Introduction to JavaPoet


1. Overview

In this tutorial, we’ll explore the basic functionalities of the JavaPoet library.

JavaPoet, developed by Square, provides APIs to generate Java source code. It can generate primitive types, reference types and their variants (such as classes, interfaces, enumerated types, and anonymous inner classes), fields, methods, parameters, annotations, and Javadocs.

JavaPoet manages the imports of the dependent classes automatically. It also uses the Builder pattern to specify the logic for generating Java code.

2. Maven Dependency

In order to use JavaPoet, we can directly download the latest JAR file, or define the following dependency in our pom.xml:

<dependency>
    <groupId>com.squareup</groupId>
    <artifactId>javapoet</artifactId>
    <version>1.10.0</version>
</dependency>

3. Method Specification

First, let’s go through the method specification. To generate a method, we simply call the methodBuilder() method of MethodSpec class. We specify the generated method name as a String argument of the methodBuilder() method.

We can generate any single logical statement ending with a semicolon using the addStatement() method. Meanwhile, we can define a control flow bounded by curly brackets, such as an if-else block or a for loop, using the beginControlFlow() and endControlFlow() methods.

Here’s a quick example – generating the sumOfTen() method which will calculate the sum of numbers from 0 to 10:

MethodSpec sumOfTen = MethodSpec
  .methodBuilder("sumOfTen")
  .addStatement("int sum = 0")
  .beginControlFlow("for (int i = 0; i <= 10; i++)")
  .addStatement("sum += i")
  .endControlFlow()
  .build();

This will produce the following output:

void sumOfTen() {
    int sum = 0;
    for (int i = 0; i <= 10; i++) {
        sum += i;
    }
}

4. Code Block

We can also wrap one or more control flows and logical statements into one code block:

CodeBlock sumOfTenImpl = CodeBlock
  .builder()
  .addStatement("int sum = 0")
  .beginControlFlow("for (int i = 0; i <= 10; i++)")
  .addStatement("sum += i")
  .endControlFlow()
  .build();

Which generates:

int sum = 0;
for (int i = 0; i <= 10; i++) {
    sum += i;
}

We can simplify the earlier logic in the MethodSpec by calling addCode() and providing the sumOfTenImpl object:

MethodSpec sumOfTen = MethodSpec
  .methodBuilder("sumOfTen")
  .addCode(sumOfTenImpl)
  .build();

A code block is also applicable to other specifications, such as types and Javadocs.

5. Field Specification

Next – let’s explore the field specification logic.

In order to generate a field, we use the builder() method of the FieldSpec class:

FieldSpec name = FieldSpec
  .builder(String.class, "name")
  .addModifiers(Modifier.PRIVATE)
  .build();

This will generate the following field:

private String name;

We can also initialize the default value of a field by calling the initializer() method:

FieldSpec defaultName = FieldSpec
  .builder(String.class, "DEFAULT_NAME")
  .addModifiers(Modifier.PRIVATE, Modifier.STATIC, Modifier.FINAL)
  .initializer("\"Alice\"")
  .build();

Which generates:

private static final String DEFAULT_NAME = "Alice";

6. Parameter Specification

Let’s now explore the parameter specification logic.

In case we want to add a parameter to the method, we can call the addParameter() within the chain of the function calls in the builder.

In case of more complex parameter types, we can make use of ParameterSpec builder:

ParameterSpec strings = ParameterSpec
  .builder(
    ParameterizedTypeName.get(ClassName.get(List.class), TypeName.get(String.class)), 
    "strings")
  .build();

We can also add the modifier of the method, such as public and/or static:

MethodSpec sumOfTen = MethodSpec
  .methodBuilder("sumOfTen")
  .addParameter(int.class, "number")
  .addParameter(strings)
  .addModifiers(Modifier.PUBLIC, Modifier.STATIC)
  .addCode(sumOfTenImpl)
  .build();

Here’s what the generated Java code looks like:

public static void sumOfTen(int number, List<String> strings) {
    int sum = 0;
    for (int i = 0; i <= 10; i++) {
        sum += i;
    }
}

7. Type Specification

After exploring the ways to generate methods, fields, and parameters, now let’s take a look at the type specifications.

To declare a type, we can use the TypeSpec which can build classes, interfaces, and enumerated types.

7.1. Generating a Class

In order to generate a class, we can use the classBuilder() method of the TypeSpec class.

We can also specify its modifiers, for instance, public and final access modifiers. In addition to class modifiers, we can also specify fields and methods using already mentioned FieldSpec and MethodSpec classes.

Note that addField() and addMethod() methods are also available when generating interfaces or anonymous inner classes.

Let’s take a look at the following class builder example:

TypeSpec person = TypeSpec
  .classBuilder("Person")
  .addModifiers(Modifier.PUBLIC)
  .addField(name)
  .addMethod(MethodSpec
    .methodBuilder("getName")
    .addModifiers(Modifier.PUBLIC)
    .returns(String.class)
    .addStatement("return this.name")
    .build())
  .addMethod(MethodSpec
    .methodBuilder("setName")
    .addParameter(String.class, "name")
    .addModifiers(Modifier.PUBLIC)
    .addStatement("this.name = name")
    .build())
  .addMethod(sumOfTen)
  .build();

And here's what the generated code looks like:

public class Person {
    private String name;

    public String getName() {
        return this.name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public static void sumOfTen(int number, List<String> strings) {
        int sum = 0;
        for (int i = 0; i <= 10; i++) {
            sum += i;
        }
    }
}

7.2. Generating an Interface

To generate a Java interface, we use the interfaceBuilder() method of the TypeSpec.

We can also define a default method by specifying the DEFAULT modifier in addModifiers(). Note that JavaPoet requires interface fields to be public, static, and final, so we can't reuse the private defaultName field from Section 5:

TypeSpec person = TypeSpec
  .interfaceBuilder("Person")
  .addModifiers(Modifier.PUBLIC)
  .addField(FieldSpec
    .builder(String.class, "DEFAULT_NAME")
    .addModifiers(Modifier.PUBLIC, Modifier.STATIC, Modifier.FINAL)
    .initializer("\"Alice\"")
    .build())
  .addMethod(MethodSpec
    .methodBuilder("getName")
    .addModifiers(Modifier.PUBLIC, Modifier.ABSTRACT)
    .returns(String.class)
    .build())
  .addMethod(MethodSpec
    .methodBuilder("getDefaultName")
    .addModifiers(Modifier.PUBLIC, Modifier.DEFAULT)
    .returns(String.class)
    .addCode(CodeBlock
      .builder()
      .addStatement("return DEFAULT_NAME")
      .build())
    .build())
  .build();

It will generate the following Java code (modifiers that are implicit for interface members are omitted from the output):

public interface Person {
    String DEFAULT_NAME = "Alice";

    String getName();

    default String getDefaultName() {
        return DEFAULT_NAME;
    }
}

7.3. Generating an Enum

To generate an enumerated type, we can use the enumBuilder() method of the TypeSpec. To specify each enumerated value, we can call the addEnumConstant() method:

TypeSpec gender = TypeSpec
  .enumBuilder("Gender")
  .addModifiers(Modifier.PUBLIC)
  .addEnumConstant("MALE")
  .addEnumConstant("FEMALE")
  .addEnumConstant("UNSPECIFIED")
  .build();

The output of the aforementioned enumBuilder() logic is:

public enum Gender {
    MALE,
    FEMALE,
    UNSPECIFIED
}

7.4. Generating an Anonymous Inner Class

To generate an anonymous inner class, we can use the anonymousClassBuilder() method of the TypeSpec class. Note that we must specify the interface to implement via the addSuperinterface() method (or a parent class via superclass()); otherwise, the anonymous class will extend the default parent class, Object:

TypeSpec comparator = TypeSpec
  .anonymousClassBuilder("")
  .addSuperinterface(ParameterizedTypeName.get(Comparator.class, String.class))
  .addMethod(MethodSpec
    .methodBuilder("compare")
    .addModifiers(Modifier.PUBLIC)
    .addParameter(String.class, "a")
    .addParameter(String.class, "b")
    .returns(int.class)
    .addStatement("return a.length() - b.length()")
    .build())
  .build();

This will generate the following Java code:

new Comparator<String>() {
    public int compare(String a, String b) {
        return a.length() - b.length();
    }
}
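A TypeSpec built this way is typically embedded into another statement as a literal via the $L placeholder, which Section 10 describes in detail. Here's a sketch of that pattern; it assumes JavaPoet (com.squareup:javapoet) is on the classpath, and the class name AnonymousClassUsageDemo is just for illustration:

```java
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import javax.lang.model.element.Modifier;

import com.squareup.javapoet.MethodSpec;
import com.squareup.javapoet.ParameterizedTypeName;
import com.squareup.javapoet.TypeSpec;

public class AnonymousClassUsageDemo {

    // Builds a method that passes the anonymous Comparator to Collections.sort()
    static String generate() {
        TypeSpec comparator = TypeSpec
          .anonymousClassBuilder("")
          .addSuperinterface(ParameterizedTypeName.get(Comparator.class, String.class))
          .addMethod(MethodSpec
            .methodBuilder("compare")
            .addAnnotation(Override.class)
            .addModifiers(Modifier.PUBLIC)
            .addParameter(String.class, "a")
            .addParameter(String.class, "b")
            .returns(int.class)
            .addStatement("return a.length() - b.length()")
            .build())
          .build();

        // The anonymous class is spliced into the statement as a literal via $L
        MethodSpec sortByLength = MethodSpec
          .methodBuilder("sortByLength")
          .addModifiers(Modifier.PUBLIC)
          .addParameter(ParameterizedTypeName.get(List.class, String.class), "strings")
          .addStatement("$T.sort(strings, $L)", Collections.class, comparator)
          .build();

        return sortByLength.toString();
    }

    public static void main(String[] args) {
        System.out.println(generate());
    }
}
```

Note that outside of a JavaFile, toString() prints fully qualified type names; once the method is added to a TypeSpec and emitted through a JavaFile, the imports are resolved as usual.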

8. Annotation Specification

To add an annotation to generated code, we can call the addAnnotation() method in a MethodSpec or FieldSpec builder class:

MethodSpec sumOfTen = MethodSpec
  .methodBuilder("sumOfTen")
  .addAnnotation(Override.class)
  .addParameter(int.class, "number")
  .addParameter(strings)
  .addModifiers(Modifier.PUBLIC, Modifier.STATIC)
  .addCode(sumOfTenImpl)
  .build();

Which generates:

@Override
public static void sumOfTen(int number, List<String> strings) {
    int sum = 0;
    for (int i = 0; i <= 10; i++) {
        sum += i;
    }
}

In case we need to specify the member value, we can call the addMember() method of the AnnotationSpec class:

AnnotationSpec toString = AnnotationSpec
  .builder(ToString.class)
  .addMember("exclude", "\"name\"")
  .build();

This will generate the following annotation:

@ToString(
    exclude = "name"
)

9. Generating Javadocs

Javadoc can be generated using CodeBlock, or by specifying the value directly:

MethodSpec sumOfTen = MethodSpec
  .methodBuilder("sumOfTen")
  .addJavadoc(CodeBlock
    .builder()
    .add("Sum of all integers from 0 to 10")
    .build())
  .addAnnotation(Override.class)
  .addParameter(int.class, "number")
  .addParameter(strings)
  .addModifiers(Modifier.PUBLIC, Modifier.STATIC)
  .addCode(sumOfTenImpl)
  .build();

This will generate the following Java code:

/**
 * Sum of all integers from 0 to 10
 */
@Override
public static void sumOfTen(int number, List<String> strings) {
    int sum = 0;
    for (int i = 0; i <= 10; i++) {
        sum += i;
    }
}

10. Formatting

Let's revisit the FieldSpec initializer example from Section 5, which contains an escape character for the "Alice" String value:

initializer("\"Alice\"")

There is a similar example in Section 8, where we define the excluded member of an annotation:

addMember("exclude", "\"name\"")

Such code becomes unwieldy as our JavaPoet usage grows and accumulates similar String escaping and concatenation statements.

The String formatting feature in JavaPoet makes String handling in the beginControlFlow(), addStatement(), and initializer() methods easier. The syntax is similar to Java's String.format() functionality, and it can help format literals, strings, types, and names.
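To build intuition for how such placeholders behave before looking at each one, here's a rough stdlib-only illustration of the idea behind $L substitution. This is not JavaPoet's actual implementation, and the class name PlaceholderDemo is purely illustrative:

```java
public class PlaceholderDemo {

    // Naively replaces each occurrence of "$L" with the next argument's
    // string representation, similar in spirit to String.format() with %s
    static String substitute(String format, Object... args) {
        StringBuilder out = new StringBuilder();
        int argIndex = 0;
        int i = 0;
        while (i < format.length()) {
            if (i + 1 < format.length()
                    && format.charAt(i) == '$' && format.charAt(i + 1) == 'L') {
                out.append(args[argIndex++]);
                i += 2;
            } else {
                out.append(format.charAt(i));
                i++;
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // prints: for (int i = 0; i <= 100; i++)
        System.out.println(substitute("for (int i = $L; i <= $L; i++)", 0, 100));
    }
}
```

JavaPoet's real formatter additionally validates its arguments and supports the $S, $T, and $N markers, which the following subsections cover.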

10.1. Literal Formatting

JavaPoet replaces $L with a literal value in the output. We can pass any primitive or String value as an argument:

private MethodSpec generateSumMethod(String name, int from, int to, String operator) {
    return MethodSpec
      .methodBuilder(name)
      .returns(int.class)
      .addStatement("int sum = 0")
      .beginControlFlow("for (int i = $L; i <= $L; i++)", from, to)
      .addStatement("sum = sum $L i", operator)
      .endControlFlow()
      .addStatement("return sum")
      .build();
}

In case we call the generateSumMethod() with the following values specified:

generateSumMethod("sumOfOneHundred", 0, 100, "+");

JavaPoet will generate the following output:

int sumOfOneHundred() {
    int sum = 0;
    for (int i = 0; i <= 100; i++) {
        sum = sum + i;
    }
    return sum;
}

10.2. String Formatting

String formatting produces a quoted value, which applies exclusively to the String type in Java. JavaPoet replaces $S with a String value in the output:

private static MethodSpec generateStringSupplier(String methodName, String fieldName) {
    return MethodSpec
      .methodBuilder(methodName)
      .returns(String.class)
      .addStatement("return $S", fieldName)
      .build();
}

In case we call the generateStringSupplier() method and provide these values:

generateStringSupplier("getDefaultName", "Bob");

We will get the following generated Java code:

String getDefaultName() {
    return "Bob";
}

10.3. Type Formatting

JavaPoet replaces $T with a type in the generated Java code, and it automatically handles the corresponding import statement. If we had provided the type as a literal instead, JavaPoet would not manage the import:

MethodSpec getCurrentDateMethod = MethodSpec
  .methodBuilder("getCurrentDate")
  .returns(Date.class)
  .addStatement("return new $T()", Date.class)
  .build();

JavaPoet will generate the following output:

Date getCurrentDate() {
    return new Date();
}

10.4. Name Formatting

In case we need to refer to the name of a variable, parameter, field, or method, we can use $N in JavaPoet's String formatter.

We can reference the previous getCurrentDateMethod from a new method:

MethodSpec dateToString = MethodSpec
  .methodBuilder("getCurrentDateAsString")
  .returns(String.class)
  .addStatement(
    "$T formatter = new $T($S)", 
    DateFormat.class, 
    SimpleDateFormat.class, 
    "MM/dd/yyyy HH:mm:ss")
  .addStatement("return formatter.format($N())", getCurrentDateMethod)
  .build();

Which generates:

String getCurrentDateAsString() {
    DateFormat formatter = new SimpleDateFormat("MM/dd/yyyy HH:mm:ss");
    return formatter.format(getCurrentDate());
}

11. Generating Lambda Expressions

We can make use of the features that we've already explored to generate a lambda expression. For instance, here's a code block that adds the name variable to a list ten times and then prints each entry:

CodeBlock printNameMultipleTimes = CodeBlock
  .builder()
  .addStatement("$T<$T> names = new $T<>()", List.class, String.class, ArrayList.class)
  .addStatement("$T.range($L, $L).forEach(i -> names.add(name))", IntStream.class, 0, 10)
  .addStatement("names.forEach(System.out::println)")
  .build();

That logic generates the following output:

List<String> names = new ArrayList<>();
IntStream.range(0, 10).forEach(i -> names.add(name));
names.forEach(System.out::println);

12. Producing the Output using JavaFile

The JavaFile class helps to configure and produce the output of the generated code. To generate Java code, we simply build the JavaFile, provide the package name and an instance of the TypeSpec object.

12.1. Code Indentation

By default, JavaPoet uses two spaces for indentation. For consistency, all examples in this tutorial are presented with four-space indentation, configured via the indent() method:

JavaFile javaFile = JavaFile
  .builder("com.baeldung.javapoet.person", person)
  .indent("    ")
  .build();

12.2. Static Imports

In case we need to add a static import, we can define the type and specific method name in the JavaFile by calling the addStaticImport() method:

JavaFile javaFile = JavaFile
  .builder("com.baeldung.javapoet.person", person)
  .indent("    ")
  .addStaticImport(Date.class, "UTC")
  .addStaticImport(ClassName.get("java.time", "ZonedDateTime"), "*")
  .build();

Which generates the following static import statements:

import static java.util.Date.UTC;
import static java.time.ZonedDateTime.*;

12.3. Output

The writeTo() method can write the generated code to multiple targets, such as the standard output stream (System.out) or a File.

To write Java code to a standard output stream, we simply call the writeTo() method, and provide the System.out as the argument:

JavaFile javaFile = JavaFile
  .builder("com.baeldung.javapoet.person", person)
  .indent("    ")
  .addStaticImport(Date.class, "UTC")
  .addStaticImport(ClassName.get("java.time", "ZonedDateTime"), "*")
  .build();

javaFile.writeTo(System.out);

The writeTo() method also accepts java.nio.file.Path and java.io.File. We can provide the corresponding Path or File object to generate the Java source file in the destination folder:

Path path = Paths.get(destinationPath);
javaFile.writeTo(path);
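Conceptually, writing to a Path means creating the package directory tree under the given root and placing a <TypeName>.java file inside it. Here's a stdlib-only sketch of that behavior (not JavaPoet's actual implementation; the class and method names are illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class WriteDemo {

    // Mimics JavaFile.writeTo(Path): the package name becomes a directory
    // tree, and the source is written as <typeName>.java inside it
    static Path writeSource(Path root, String packageName, String typeName,
            String source) throws IOException {
        Path dir = root.resolve(packageName.replace('.', '/'));
        Files.createDirectories(dir);
        Path file = dir.resolve(typeName + ".java");
        Files.write(file, source.getBytes());
        return file;
    }

    public static void main(String[] args) throws IOException {
        Path root = Files.createTempDirectory("javapoet-demo");
        Path file = writeSource(root, "com.baeldung.javapoet.person", "Person",
            "package com.baeldung.javapoet.person;\n\npublic class Person {\n}\n");
        // prints: true
        System.out.println(file.endsWith("com/baeldung/javapoet/person/Person.java"));
    }
}
```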

For more detailed information regarding JavaFile, please refer to the Javadoc.

13. Conclusion

This article has been an introduction to JavaPoet's functionality, such as generating methods, fields, parameters, types, annotations, and Javadoc.

JavaPoet is designed for code generation only. If we want to do metaprogramming in Java, note that JavaPoet, as of version 1.10.0, doesn't support code compilation and execution.

As always, the examples and code snippets are available over on GitHub.
