
How to Convert List to Map in Java


1. Overview

Converting a List to a Map is a common task. In this tutorial, we’ll cover several ways to do it.

We’ll assume that each element of the List has an identifier which will be used as a key in the resulting Map.

2. Sample Data Structure

Firstly, let’s model the element:

public class Animal {
    private int id;
    private String name;

    //  constructor/getters/setters
}

Since the id field is unique, we can make it the map key.

Let’s start converting with the traditional way.

3. Before Java 8

First, let’s convert a List to a Map using only core Java methods:

public Map<Integer, Animal> convertListBeforeJava8(List<Animal> list) {
    Map<Integer, Animal> map = new HashMap<>();
    for (Animal animal : list) {
        map.put(animal.getId(), animal);
    }
    return map;
}

Let’s test the conversion:

@Test
public void whenConvertBeforeJava8_thenReturnMapWithTheSameElements() {
    Map<Integer, Animal> map = convertListService
      .convertListBeforeJava8(list);
    
    assertThat(
      map.values(), 
      containsInAnyOrder(list.toArray()));
}

4. With Java 8

Starting with Java 8 we can convert a List into a Map using streams and Collectors:

public Map<Integer, Animal> convertListAfterJava8(List<Animal> list) {
    Map<Integer, Animal> map = list.stream()
      .collect(Collectors.toMap(Animal::getId, animal -> animal));
    return map;
}

Again, let’s make sure the conversion is done correctly:

@Test
public void whenConvertAfterJava8_thenReturnMapWithTheSameElements() {
    Map<Integer, Animal> map = convertListService.convertListAfterJava8(list);
    
    assertThat(
      map.values(), 
      containsInAnyOrder(list.toArray()));
}

5. Using the Guava Library

Besides core Java, we can use 3rd party libraries for the conversion.

5.1. Maven Configuration

Firstly, we need to add the following dependency to our pom.xml:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>23.6.1-jre</version>
</dependency>

The latest version of this library can always be found here.

5.2. Conversion With Maps.uniqueIndex()

Secondly, let’s use Maps.uniqueIndex() method to convert a List into a Map:

public Map<Integer, Animal> convertListWithGuava(List<Animal> list) {
    Map<Integer, Animal> map = Maps
      .uniqueIndex(list, Animal::getId);
    return map;
}

Finally, let’s test the conversion:

@Test
public void whenConvertWithGuava_thenReturnMapWithTheSameElements() {
    Map<Integer, Animal> map = convertListService
      .convertListWithGuava(list);
    
    assertThat(
      map.values(), 
      containsInAnyOrder(list.toArray()));
}

6. Using Apache Commons Library

We can also make a conversion with the method of Apache Commons library.

6.1. Maven Configuration

Firstly, let’s include Maven dependency:

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-collections4</artifactId>
    <version>4.2</version>
</dependency>

The latest version of this dependency is available here.

6.2. MapUtils

Secondly, we’ll make the conversion using MapUtils.populateMap():

public Map<Integer, Animal> convertListWithApacheCommons(List<Animal> list) {
    Map<Integer, Animal> map = new HashMap<>();
    MapUtils.populateMap(map, list, Animal::getId);
    return map;
}

Lastly, let’s make sure it works as expected:

@Test
public void whenConvertWithApacheCommons_thenReturnMapWithTheSameElements() {
    Map<Integer, Animal> map = convertListService
      .convertListWithApacheCommons(list);
    
    assertThat(
      map.values(), 
      containsInAnyOrder(list.toArray()));
}

7. Conflict of Values

Let’s check what happens if the id field isn’t unique.

7.1. List of Animals With Duplicated ids

Firstly, let’s create a List of Animals with non-unique ids:

@Before
public void init() {

    this.duplicatedIdList = new ArrayList<>();

    Animal cat = new Animal(1, "Cat");
    duplicatedIdList.add(cat);
    Animal dog = new Animal(2, "Dog");
    duplicatedIdList.add(dog);
    Animal pig = new Animal(3, "Pig");
    duplicatedIdList.add(pig);
    Animal cow = new Animal(4, "Cow");
    duplicatedIdList.add(cow);
    Animal goat = new Animal(4, "Goat");
    duplicatedIdList.add(goat);
}

As shown above, the cow and the goat have the same id.

7.2. Checking the Behavior

Java Map‘s put() method is implemented so that the latest added value overwrites the previous one with the same key.

For this reason, the traditional conversion and Apache Commons MapUtils.populateMap() behave in the same way:

@Test
public void whenConvertBeforeJava8_thenReturnMapWithRewrittenElement() {

    Map<Integer, Animal> map = convertListService
      .convertListBeforeJava8(duplicatedIdList);

    assertThat(map.values(), hasSize(4));
    assertThat(map.values(), hasItem(duplicatedIdList.get(4)));
}

@Test
public void whenConvertWithApacheCommons_thenReturnMapWithRewrittenElement() {

    Map<Integer, Animal> map = convertListService
      .convertListWithApacheCommons(duplicatedIdList);

    assertThat(map.values(), hasSize(4));
    assertThat(map.values(), hasItem(duplicatedIdList.get(4)));
}

As can be seen, the goat overwrites the cow with the same id.

In contrast, Collectors.toMap() and Guava’s Maps.uniqueIndex() throw an IllegalStateException and an IllegalArgumentException, respectively:

@Test(expected = IllegalStateException.class)
public void givenADupIdList_whenConvertAfterJava8_thenException() {

    convertListService.convertListAfterJava8(duplicatedIdList);
}

@Test(expected = IllegalArgumentException.class)
public void givenADupIdList_whenConvertWithGuava_thenException() {

    convertListService.convertListWithGuava(duplicatedIdList);
}

8. Conclusion

In this quick article, we’ve covered various ways of converting a List to a Map, giving examples with core Java as well as some popular third-party libraries.

As usual, the complete source code is available over on GitHub.


Query Entities by Dates and Times with Spring Data JPA


1. Introduction

In this quick tutorial, we’ll see how to query entities by dates with Spring Data JPA.

We’ll first refresh our memory about how to map dates and times with JPA.

Then, we’ll create an entity with date and time fields as well as a Spring Data repository to query those entities.

2. Mapping Dates and Times with JPA

For starters, we’ll review a bit of theory about mapping dates with JPA. The thing to know is that we need to decide whether we want to represent:

  • A date only
  • A time only
  • Or both

In addition to the (optional) @Column annotation, we’ll need to add the @Temporal annotation to specify what the field represents.

This annotation takes a parameter which is a value of TemporalType enum:

  • TemporalType.DATE
  • TemporalType.TIME
  • TemporalType.TIMESTAMP

A detailed article about dates and times mapping with JPA can be found here.

3. In Practice

In practice, once our entities are correctly set up, there isn’t much work left to query them using Spring Data JPA. We just have to use query methods or the @Query annotation.

Every Spring Data JPA mechanism will work just fine.

Let’s see a couple examples of entities queried by dates and times with Spring Data JPA.

3.1. Set Up an Entity

For starters, let’s say we have an Article entity, with a publication date, a publication time and a creation date and time:

@Entity
public class Article {

    @Id
    @GeneratedValue
    Integer id;
 
    @Temporal(TemporalType.DATE)
    Date publicationDate;
 
    @Temporal(TemporalType.TIME)
    Date publicationTime;
 
    @Temporal(TemporalType.TIMESTAMP)
    Date creationDateTime;
}

We split the publication date and time into two fields for demonstration purposes. That way, all three temporal types are represented.

3.2. Query the Entities

Now that our entity is all set up, let’s create a Spring Data repository to query those articles.

We’ll create three methods, using several Spring Data JPA features:

public interface ArticleRepository 
  extends JpaRepository<Article, Integer> {

    List<Article> findAllByPublicationDate(Date publicationDate);

    List<Article> findAllByPublicationTimeBetween(
      Date publicationTimeStart,
      Date publicationTimeEnd);

    @Query("select a from Article a where a.creationDateTime <= :creationDateTime")
    List<Article> findAllWithCreationDateTimeBefore(
      @Param("creationDateTime") Date creationDateTime);

}

So we defined three methods:

  • findAllByPublicationDate which retrieves articles published on a given date
  • findAllByPublicationTimeBetween which retrieves articles published between two given hours
  • and findAllWithCreationDateTimeBefore which retrieves articles created before a given date and time

The first two methods rely on Spring Data’s query method mechanism, and the last one on the @Query annotation.

In the end, that doesn’t change the way dates will be treated. The first method will only consider the date part of the parameter.

The second will only consider the time of the parameters. And the last will use both date and time.

3.3. Test the Queries

The last thing we have to do is set up some tests to check that these queries work as expected.

We’ll first import some data into our database, and then we’ll create the test class, which will check each method of the repository:

@RunWith(SpringRunner.class)
@DataJpaTest
public class ArticleRepositoryIntegrationTest {

    @Autowired
    private ArticleRepository repository;

    @Test
    public void whenFindByPublicationDate_thenArticles1And2Returned()
      throws Exception {
        List<Article> result = repository.findAllByPublicationDate(
          new SimpleDateFormat("yyyy-MM-dd").parse("2018-01-01"));

        assertEquals(2, result.size());
        assertTrue(result.stream()
          .map(Article::getId)
          .allMatch(id -> Arrays.asList(1, 2).contains(id)));
    }

    @Test
    public void whenFindByPublicationTimeBetween_thenArticles2And3Returned()
      throws Exception {
        List<Article> result = repository.findAllByPublicationTimeBetween(
          new SimpleDateFormat("HH:mm").parse("15:15"),
          new SimpleDateFormat("HH:mm").parse("16:30"));

        assertEquals(2, result.size());
        assertTrue(result.stream()
          .map(Article::getId)
          .allMatch(id -> Arrays.asList(2, 3).contains(id)));
    }

    @Test
    public void whenFindWithCreationDateTimeBefore_thenArticles2And3Returned()
      throws Exception {
        List<Article> result = repository.findAllWithCreationDateTimeBefore(
          new SimpleDateFormat("yyyy-MM-dd HH:mm").parse("2017-12-15 10:00"));

        assertEquals(2, result.size());
        assertTrue(result.stream()
          .map(Article::getId)
          .allMatch(id -> Arrays.asList(2, 3).contains(id)));
    }
}

Each test verifies that only the articles matching the conditions are retrieved.
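As a side note on importing the data: one convenient option is Spring’s @Sql annotation; the script name /data.sql below is an assumption about the test resources, not taken from the article:

@RunWith(SpringRunner.class)
@DataJpaTest
@Sql("/data.sql")
public class ArticleRepositoryIntegrationTest {
    // ...
}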

4. Conclusion

In this short article, we’ve seen how to query entities by their date and time fields with Spring Data JPA.

We learned a bit of theory before using Spring Data mechanisms to query the entities. We saw those mechanisms work the same way with dates and times as they do with other types of data.

The source code for this article is available over on GitHub.

Guide to Java Instrumentation


1. Introduction

In this tutorial, we’re going to talk about the Java Instrumentation API. It provides the ability to add byte-code to existing, compiled Java classes.

We’ll also talk about Java agents and how we use them to instrument our code.

2. Setup

Throughout the article, we’ll build an app using instrumentation.

Our application will consist of two modules:

  1. An ATM app that allows us to withdraw money
  2. And a Java agent that will allow us to measure the performance of our ATM by measuring the time spent withdrawing money

The Java agent will modify the ATM byte-code allowing us to measure withdrawal time without having to modify the ATM app.

Our project will have the following structure:

<groupId>com.baeldung.instrumentation</groupId>
<artifactId>base</artifactId>
<version>1.0.0</version>
<packaging>pom</packaging>
<modules>
    <module>agent</module>
    <module>application</module>
</modules>

Before getting too deep into the details of instrumentation, let’s see what a Java agent is.

3. What is a Java Agent

In general, a Java agent is just a specially crafted jar file. It utilizes the Instrumentation API that the JVM provides to alter existing byte-code loaded in the JVM.

For an agent to work, we need to define two methods:

  • premain – statically loads the agent via the -javaagent parameter at JVM startup
  • agentmain – dynamically loads the agent into an already running JVM using the Java Attach API

An interesting concept to keep in mind is that a JVM implementation, like Oracle, OpenJDK, and others, can provide a mechanism to start agents dynamically, but it is not a requirement.

First, let’s see how we’d use an existing Java agent.

After that, we’ll look at how we can create one from scratch to add the functionality we need in our byte-code.

4. Loading a Java Agent

To be able to use the Java agent, we must first load it.

We have two types of load:

  • static – makes use of the premain to load the agent using -javaagent option
  • dynamic – makes use of the agentmain to load the agent into the JVM using the Java Attach API

Next, we’ll take a look at each type of load and explain how it works.

4.1. Static Load

Loading a Java agent at application startup is called static load. Static load modifies the byte-code at startup time before any code is executed.

Keep in mind that static load uses the premain method, which runs before any application code. To get it running, we can execute:

java -javaagent:agent.jar -jar application.jar

It’s important to note that we should always put the -javaagent parameter before the -jar parameter.

Below are the logs for our command:

22:24:39.296 [main] INFO - [Agent] In premain method
22:24:39.300 [main] INFO - [Agent] Transforming class MyAtm
22:24:39.407 [main] INFO - [Application] Starting ATM application
22:24:41.409 [main] INFO - [Application] Successful Withdrawal of [7] units!
22:24:41.410 [main] INFO - [Application] Withdrawal operation completed in:2 seconds!
22:24:53.411 [main] INFO - [Application] Successful Withdrawal of [8] units!
22:24:53.411 [main] INFO - [Application] Withdrawal operation completed in:2 seconds!

We can see when the premain method ran and when the MyAtm class was transformed. We also see the logs for the two ATM withdrawal transactions, which contain the time each operation took to complete.

Remember that our original application didn’t log the completion time for a transaction; it was added by our Java agent.

4.2. Dynamic Load

The procedure of loading a Java agent into an already running JVM is called dynamic load. The agent is attached using the Java Attach API.

A more complex scenario is when we already have our ATM application running in production and we want to add the total time of transactions dynamically without downtime for our application.

Let’s write a small piece of code to do just that, and we’ll call this class AgentLoader. For simplicity, we’ll put it in the application jar file, so our application jar can both start our application and attach our agent to the running ATM application:

VirtualMachine jvm = VirtualMachine.attach(jvmPid);
jvm.loadAgent(agentFile.getAbsolutePath());
jvm.detach();
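For reference, a minimal sketch of what the whole AgentLoader class could look like; findTargetJvmPid() is a hypothetical helper standing in for however the PID of the running ATM application is discovered, and the agent jar location is an assumption:

public class AgentLoader {

    public void run(String[] args) throws Exception {
        // hypothetical helper: discovers the PID of the running ATM application
        String jvmPid = findTargetJvmPid();
        // assumed location of the agent jar
        File agentFile = new File("agent.jar");

        VirtualMachine jvm = VirtualMachine.attach(jvmPid);
        jvm.loadAgent(agentFile.getAbsolutePath());
        jvm.detach();
    }
}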

Now that we have our AgentLoader, we start our application making sure that in the ten-second pause between transactions, we’ll attach our Java agent dynamically using the AgentLoader.

Let’s also add the glue that will allow us to either start the application or load the agent.

We’ll call this class Launcher and it will be our main jar file class:

public class Launcher {
    public static void main(String[] args) throws Exception {
        if(args[0].equals("StartMyAtmApplication")) {
            new MyAtmApplication().run(args);
        } else if(args[0].equals("LoadAgent")) {
            new AgentLoader().run(args);
        }
    }
}

Starting the Application

java -jar application.jar StartMyAtmApplication
22:44:21.154 [main] INFO - [Application] Starting ATM application
22:44:23.157 [main] INFO - [Application] Successful Withdrawal of [7] units!

Attaching Java Agent

After the first operation, we attach the Java agent to our JVM:

java -jar application.jar LoadAgent
22:44:27.022 [main] INFO - Attaching to target JVM with PID: 6575
22:44:27.306 [main] INFO - Attached to target JVM and loaded Java agent successfully

Check Application Logs

Now that we’ve attached our agent to the JVM, we’ll see that we have the total completion time for the second ATM withdrawal operation.

This means that we added our functionality on the fly, while our application was running:

22:44:27.229 [Attach Listener] INFO - [Agent] In agentmain method
22:44:27.230 [Attach Listener] INFO - [Agent] Transforming class MyAtm
22:44:33.157 [main] INFO - [Application] Successful Withdrawal of [8] units!
22:44:33.157 [main] INFO - [Application] Withdrawal operation completed in:2 seconds!

5. Creating a Java Agent

After learning how to use an agent, let’s see how we can create one. We’ll look at how to use Javassist to change byte-code and we’ll combine this with some instrumentation API methods.

Since a Java agent makes use of the Java Instrumentation API, before getting too deep into creating our agent, let’s see some of the most used methods in this API, with a short description of what they do:

  • addTransformer – adds a transformer to the instrumentation engine
  • getAllLoadedClasses – returns an array of all classes currently loaded by the JVM
  • retransformClasses – facilitates the instrumentation of already loaded classes by adding byte-code
  • removeTransformer – unregisters the supplied transformer
  • redefineClasses – redefines the supplied set of classes using the supplied class files; the class is fully replaced rather than modified, as with retransformClasses

5.1. Create the Premain and Agentmain Methods

We know that every Java agent needs at least one of the premain or agentmain methods. The latter is used for dynamic load, while the former statically loads an agent into the JVM.

Let’s define both of them in our agent so that we’re able to load this agent both statically and dynamically:

public static void premain(
  String agentArgs, Instrumentation inst) {
 
    LOGGER.info("[Agent] In premain method");
    String className = "com.baeldung.instrumentation.application.MyAtm";
    transformClass(className,inst);
}
public static void agentmain(
  String agentArgs, Instrumentation inst) {
 
    LOGGER.info("[Agent] In agentmain method");
    String className = "com.baeldung.instrumentation.application.MyAtm";
    transformClass(className,inst);
}

In each method, we declare the class that we want to change and then dig down to transform that class using the transformClass method.

Below is the code for the transformClass method that we defined to help us transform MyAtm class.

In this method, we find the class we want to transform and then transform it using the transform method. Also, we add the transformer to the instrumentation engine:

private static void transformClass(
  String className, Instrumentation instrumentation) {
    Class<?> targetCls = null;
    ClassLoader targetClassLoader = null;
    // see if we can get the class using forName
    try {
        targetCls = Class.forName(className);
        targetClassLoader = targetCls.getClassLoader();
        transform(targetCls, targetClassLoader, instrumentation);
        return;
    } catch (Exception ex) {
        LOGGER.error("Class [{}] not found with Class.forName");
    }
    // otherwise iterate all loaded classes and find what we want
    for(Class<?> clazz: instrumentation.getAllLoadedClasses()) {
        if(clazz.getName().equals(className)) {
            targetCls = clazz;
            targetClassLoader = targetCls.getClassLoader();
            transform(targetCls, targetClassLoader, instrumentation);
            return;
        }
    }
    throw new RuntimeException(
      "Failed to find class [" + className + "]");
}

private static void transform(
  Class<?> clazz, 
  ClassLoader classLoader,
  Instrumentation instrumentation) {
    AtmTransformer dt = new AtmTransformer(
      clazz.getName(), classLoader);
    instrumentation.addTransformer(dt, true);
    try {
        instrumentation.retransformClasses(clazz);
    } catch (Exception ex) {
        throw new RuntimeException(
          "Transform failed for: [" + clazz.getName() + "]", ex);
    }
}

With this out of the way, let’s define the transformer for MyAtm class.

5.2. Defining our Transformer

A class transformer must implement ClassFileTransformer and override its transform method.

We’ll use Javassist to add byte-code to the MyAtm class, adding a log with the total ATM withdrawal transaction time:

public class AtmTransformer implements ClassFileTransformer {

    private final String targetClassName;
    private final ClassLoader targetClassLoader;

    // the transformer targets a single class loaded by a specific class loader
    public AtmTransformer(String targetClassName, ClassLoader targetClassLoader) {
        this.targetClassName = targetClassName;
        this.targetClassLoader = targetClassLoader;
    }

    @Override
    public byte[] transform(
      ClassLoader loader, 
      String className, 
      Class<?> classBeingRedefined, 
      ProtectionDomain protectionDomain, 
      byte[] classfileBuffer) {
        byte[] byteCode = classfileBuffer;
        String finalTargetClassName = this.targetClassName
          .replaceAll("\\.", "/"); 
        if (!className.equals(finalTargetClassName)) {
            return byteCode;
        }

        if (className.equals(finalTargetClassName) 
              && loader.equals(targetClassLoader)) {
 
            LOGGER.info("[Agent] Transforming class MyAtm");
            try {
                ClassPool cp = ClassPool.getDefault();
                CtClass cc = cp.get(targetClassName);
                CtMethod m = cc.getDeclaredMethod(
                  WITHDRAW_MONEY_METHOD);
                m.addLocalVariable(
                  "startTime", CtClass.longType);
                m.insertBefore(
                  "startTime = System.currentTimeMillis();");

                StringBuilder endBlock = new StringBuilder();

                m.addLocalVariable("endTime", CtClass.longType);
                m.addLocalVariable("opTime", CtClass.longType);
                endBlock.append(
                  "endTime = System.currentTimeMillis();");
                endBlock.append(
                  "opTime = (endTime-startTime)/1000;");

                endBlock.append(
                  "LOGGER.info(\"[Application] Withdrawal operation completed in:" +
                                "\" + opTime + \" seconds!\");");

                m.insertAfter(endBlock.toString());

                byteCode = cc.toBytecode();
                cc.detach();
            } catch (NotFoundException | CannotCompileException | IOException e) {
                LOGGER.error("Exception", e);
            }
        }
        return byteCode;
    }
}

5.3. Creating an Agent Manifest File

Finally, in order to get a working Java agent, we’ll need a manifest file with a couple of attributes.

We can find the full list of manifest attributes in the official documentation of the java.lang.instrument package.

In the final Java agent jar file, we will add the following lines to the manifest file:

Agent-Class: com.baeldung.instrumentation.agent.MyInstrumentationAgent
Can-Redefine-Classes: true
Can-Retransform-Classes: true
Premain-Class: com.baeldung.instrumentation.agent.MyInstrumentationAgent
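One way to produce these entries with Maven is via the maven-jar-plugin in the agent module’s pom.xml; the configuration below is a sketch of such a build setup, not taken verbatim from the project:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-jar-plugin</artifactId>
    <configuration>
        <archive>
            <manifestEntries>
                <Premain-Class>com.baeldung.instrumentation.agent.MyInstrumentationAgent</Premain-Class>
                <Agent-Class>com.baeldung.instrumentation.agent.MyInstrumentationAgent</Agent-Class>
                <Can-Redefine-Classes>true</Can-Redefine-Classes>
                <Can-Retransform-Classes>true</Can-Retransform-Classes>
            </manifestEntries>
        </archive>
    </configuration>
</plugin>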

Our Java instrumentation agent is now complete. To run it, please refer to Loading a Java Agent section of this article.

6. Conclusion

In this article, we talked about the Java Instrumentation API. We looked at how to load a Java agent into a JVM both statically and dynamically.

We also looked at how we would go about creating our own Java agent from scratch.

As always, the full implementation of the example can be found over on GitHub.

A Guide to SqlResultSetMapping


1. Introduction

In this guide, we’ll take a look at SqlResultSetMapping, from the Java Persistence API (JPA).

The core functionality here involves mapping result sets from database SQL statements into Java objects.

2. Setup

Before we look at its usage, let’s do some setup.

2.1. Maven Dependency

Our required Maven dependencies are Hibernate and the H2 database. Hibernate gives us the implementation of the JPA specification, and we use H2 as an in-memory database.

2.2. Database

Next, we’ll create two tables as seen here:

CREATE TABLE EMPLOYEE
(id BIGINT,
 name VARCHAR(10));

The EMPLOYEE table stores our result Entity objects. SCHEDULE_DAYS contains records linked to the EMPLOYEE table by the column employeeId:

CREATE TABLE SCHEDULE_DAYS
(id IDENTITY,
 employeeId BIGINT,
 dayOfWeek  VARCHAR(10));

A script for data creation can be found in the code for this guide.

2.3. Entity Objects

Our Entity objects look like this:

@Entity
public class Employee {
    @Id
    private Long id;
    private String name;
}

Entity objects might be named differently than database tables. We can annotate the class with @Table to explicitly map them:

@Entity
@Table(name = "SCHEDULE_DAYS")
public class ScheduledDay {

    @Id
    @GeneratedValue
    private Long id;
    private Long employeeId;
    private String dayOfWeek;
}

3. Scalar Mapping

Now that we have data we can start mapping query results.

3.1. ColumnResult

While SqlResultSetMapping and Query annotations work on Repository classes as well, we use the annotations on an Entity class in this example.

Every SqlResultSetMapping annotation requires only one property, name. However, without one of the member types, nothing will be mapped. The member types are ColumnResult, ConstructorResult, and EntityResult.

In this case, ColumnResult maps any column to a scalar result type:

@SqlResultSetMapping(
  name="FridayEmployeeResult",
  columns={@ColumnResult(name="employeeId")})

The ColumnResult property name identifies the column in our query:

@NamedNativeQuery(
  name = "FridayEmployees",
  query = "SELECT employeeId FROM schedule_days WHERE dayOfWeek = 'FRIDAY'",
  resultSetMapping = "FridayEmployeeResult")

Note that the value of resultSetMapping in our NamedNativeQuery annotation is important because it matches the name property from our SqlResultSetMapping declaration.

As a result, the NamedNativeQuery result set is mapped as expected. Likewise, StoredProcedure API requires this association.

3.2. ColumnResult Test

We’ll need some Hibernate specific objects to run our code:

@BeforeAll
public static void setup() {
    emFactory = Persistence.createEntityManagerFactory("java-jpa-scheduled-day");
    em = emFactory.createEntityManager();
}

Finally, we call the named query to run our test:

@Test
public void whenNamedQuery_thenColumnResult() {
    List<Long> employeeIds = em.createNamedQuery("FridayEmployees").getResultList();
    assertEquals(2, employeeIds.size());
}

4. Constructor Mapping

Let’s take a look at when we need to map a result set to an entire object.

4.1. ConstructorResult

Similarly to our ColumnResult example, we will add the SqlResultMapping annotation on our Entity class, ScheduledDay. However, in order to map using a constructor, we need to create one:

public ScheduledDay(
  Long id, Long employeeId, String dayOfWeek) {
    this.id = id;
    this.employeeId = employeeId;
    this.dayOfWeek = dayOfWeek;
}

Also, the mapping specifies the target class and columns (both required):

@SqlResultSetMapping(
    name="ScheduleResult",
    classes={
      @ConstructorResult(
        targetClass=com.baeldung.sqlresultsetmapping.ScheduledDay.class,
        columns={
          @ColumnResult(name="id", type=Long.class),
          @ColumnResult(name="employeeId", type=Long.class),
          @ColumnResult(name="dayOfWeek")})})

The order of the ColumnResults is very important. If the columns are out of order, the constructor will fail to be identified. In our example, the ordering matches the table columns, so it wouldn’t actually be required. Next, let’s define the named query that uses this mapping:

@NamedNativeQuery(name = "Schedules",
  query = "SELECT * FROM schedule_days WHERE employeeId = 8",
  resultSetMapping = "ScheduleResult")

Another unique aspect of ConstructorResult is that the resulting objects are instantiated as either “new” or “detached”. The mapped Entity will be in the detached state when a matching primary key exists in the EntityManager; otherwise, it will be new.

Sometimes we may encounter runtime errors because the SQL datatypes don’t match the Java datatypes. Therefore, we can declare the expected type explicitly with the type attribute, as we did above for id and employeeId.

4.2. ConstructorResult Test

Let’s test the ConstructorResult in a unit test:

@Test
public void whenNamedQuery_thenConstructorResult() {
  List<ScheduledDay> scheduleDays
    = Collections.checkedList(
      em.createNamedQuery("Schedules", ScheduledDay.class).getResultList(), ScheduledDay.class);
    assertEquals(3, scheduleDays.size());
    assertTrue(scheduleDays.stream().allMatch(c -> c.getEmployeeId().longValue() == 8));
}

5. Entity Mapping

Finally, for a simple entity mapping with less code, let’s have a look at EntityResult.

5.1. Single Entity

EntityResult requires us to specify the entity class, Employee. We use the optional fields property for more control. Combined with FieldResult, we can map aliases and fields that do not match:

@SqlResultSetMapping(
  name="EmployeeResult",
  entities={
    @EntityResult(
      entityClass = com.baeldung.sqlresultsetmapping.Employee.class,
        fields={
          @FieldResult(name="id",column="employeeNumber"),
          @FieldResult(name="name", column="name")})})

Now our query should include the aliased column:

@NamedNativeQuery(
  name="Employees",
  query="SELECT id as employeeNumber, name FROM EMPLOYEE",
  resultSetMapping = "EmployeeResult")

Similarly to ConstructorResult, EntityResult requires a constructor. However, a default one works here.

5.2. Multiple Entities

Mapping multiple entities is pretty straightforward once we have mapped a single Entity:

@SqlResultSetMapping(
  name = "EmployeeScheduleResults",
  entities = {
    @EntityResult(entityClass = com.baeldung.sqlresultsetmapping.Employee.class),
    @EntityResult(entityClass = com.baeldung.sqlresultsetmapping.ScheduledDay.class)})

5.3. EntityResult Tests

Let’s have a look at EntityResult in action:

@Test
public void whenNamedQuery_thenSingleEntityResult() {
    List<Employee> employees = Collections.checkedList(
      em.createNamedQuery("Employees").getResultList(), Employee.class);
    assertEquals(3, employees.size());
    assertTrue(employees.stream().allMatch(c -> c.getClass() == Employee.class));
}

Since this mapping joins two entities, defining the named query annotation on only one of the classes would be confusing.

For that reason, we define the query in the test:

@Test
public void whenNamedQuery_thenMultipleEntityResult() {
    Query query = em.createNativeQuery(
      "SELECT e.id, e.name, d.id, d.employeeId, d.dayOfWeek "
        + " FROM employee e, schedule_days d "
        + " WHERE e.id = d.employeeId", "EmployeeScheduleResults");
    
    List<Object[]> results = query.getResultList();
    assertEquals(4, results.size());
    assertTrue(results.get(0).length == 2);

    Employee emp = (Employee) results.get(1)[0];
    ScheduledDay day = (ScheduledDay) results.get(1)[1];

    assertTrue(day.getEmployeeId() == emp.getId());
}

6. Conclusion

In this guide, we looked at different options for using the SqlResultSetMapping annotation, a key part of the Java Persistence API.

Code snippets can be found over on GitHub.

Mockito.mock() vs @Mock vs @MockBean


1. Overview

In this quick tutorial, we’ll look at three different ways of creating mock objects and how they differ from each other – with Mockito and with the Spring mocking support.

2. Mockito.mock()

The Mockito.mock() method allows us to create a mock object of a class or an interface.

Then, we can use the mock to stub return values for its methods and verify if they were called.

Let’s look at an example:

@Test
public void givenCountMethodMocked_WhenCountInvoked_ThenMockedValueReturned() {
    UserRepository localMockRepository = Mockito.mock(UserRepository.class);
    Mockito.when(localMockRepository.count()).thenReturn(111L);

    long userCount = localMockRepository.count();

    Assert.assertEquals(111L, userCount);
    Mockito.verify(localMockRepository).count();
}

This method needs no additional setup before it can be used. We can use it to create mocks as class fields, as well as local mocks within a method.

3. Mockito’s @Mock Annotation

This annotation is shorthand for the Mockito.mock() method. Note that we should only use it in a test class. Unlike the mock() method, we need to enable Mockito annotations to use this annotation.

We can do this either by using the MockitoJUnitRunner to run the test or calling the MockitoAnnotations.initMocks() method explicitly.
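For instance, without the runner, we can initialize the annotations in a setup method:

@Before
public void init() {
    // processes the @Mock annotations of this test class
    MockitoAnnotations.initMocks(this);
}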

Let’s look at an example using MockitoJUnitRunner:

@RunWith(MockitoJUnitRunner.class)
public class MockAnnotationUnitTest {
    
    @Mock
    UserRepository mockRepository;
    
    @Test
    public void givenCountMethodMocked_WhenCountInvoked_ThenMockValueReturned() {
        Mockito.when(mockRepository.count()).thenReturn(123L);

        long userCount = mockRepository.count();

        Assert.assertEquals(123L, userCount);
        Mockito.verify(mockRepository).count();
    }
}

Apart from making the code more readable, @Mock makes it easier to find the problem mock in case of a failure, as the name of the field appears in the failure message:

Wanted but not invoked:
mockRepository.count();
-> at org.baeldung.MockAnnotationTest.testMockAnnotation(MockAnnotationTest.java:22)
Actually, there were zero interactions with this mock.

  at org.baeldung.MockAnnotationTest.testMockAnnotation(MockAnnotationTest.java:22)

Also, when used in conjunction with @InjectMocks, it can reduce the amount of setup code significantly.
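For illustration, @Mock pairs with @InjectMocks like this; UserService here is a hypothetical class depending on UserRepository, not one defined in this article:

@Mock
UserRepository mockRepository;

// Mockito instantiates UserService and injects mockRepository into it
@InjectMocks
UserService userService;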

4. Spring Boot’s @MockBean Annotation

We can use @MockBean to add mock objects to the Spring application context. The mock will replace any existing bean of the same type in the application context.

If no bean of the same type is defined, a new one will be added. This annotation is useful in integration tests where a particular bean – for example, an external service – needs to be mocked.

To use this annotation, we have to use SpringRunner to run the test:

@RunWith(SpringRunner.class)
public class MockBeanAnnotationIntegrationTest {
    
    @MockBean
    UserRepository mockRepository;
    
    @Autowired
    ApplicationContext context;
    
    @Test
    public void givenCountMethodMocked_WhenCountInvoked_ThenMockValueReturned() {
        Mockito.when(mockRepository.count()).thenReturn(123L);

        UserRepository userRepoFromContext = context.getBean(UserRepository.class);
        long userCount = userRepoFromContext.count();

        Assert.assertEquals(123L, userCount);
        Mockito.verify(mockRepository).count();
    }
}

When we use the annotation on a field, the mock is registered in the application context as well as injected into the field.

This is evident in the code above. Here, we have used the injected UserRepository mock to stub the count method. We have then used the bean from the application context to verify that it is indeed the mocked bean.

5. Conclusion

In this article, we saw how the three methods for creating mock objects differ and how each can be used.

The source code that accompanies this article is available over on GitHub.

Overriding System Time for Testing in Java


1. Overview

In this quick tutorial, we’ll focus on different ways to override the system time for testing.

Sometimes there’s logic around the current date in our code, perhaps function calls such as new Date() or Calendar.getInstance(), which eventually invoke System.currentTimeMillis().

For an introduction to the use of Java Clock, please refer to this article here. Or, to the use of AspectJ, here.

2. Using Clock in java.time

The java.time package in Java 8 includes an abstract class java.time.Clock with the purpose of allowing alternate clocks to be plugged in as and when required. With that, we can plug our own implementation or find one that is already made to satisfy our needs.

To accomplish our goal, the Clock class includes static factory methods that yield special implementations. We’re going to use two of them, each of which returns an immutable, thread-safe, and serializable implementation.

The first one is fixed. From it, we can obtain a Clock that always returns the same Instant, ensuring that the tests aren’t dependent on the current clock.

To use it, we need an Instant and a ZoneOffset:

Instant.now(Clock.fixed( 
  Instant.parse("2018-08-22T10:00:00Z"),
  ZoneOffset.UTC))

The second static method is offset. Here, a clock wraps another clock, making the returned object capable of producing instants that are later or earlier by the specified duration.

In other words, it’s possible to simulate running in the future, in the past, or in any arbitrary point in time:

Clock constantClock = Clock.fixed(ofEpochMilli(0), ZoneId.systemDefault());

// go to the future:
Clock clock = Clock.offset(constantClock, Duration.ofSeconds(10));
        
// rewind back with a negative value:
clock = Clock.offset(constantClock, Duration.ofSeconds(-5));
 
// the 0 duration returns to the same clock:
clock = Clock.offset(constantClock, Duration.ZERO);

With the Duration class, it’s possible to manipulate from nanoseconds to days. Also, we can negate a duration, which means to get a copy of this duration with the length negated.
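For example, handing a fixed Clock to the now(clock) overloads makes time fully deterministic in a test:

Clock fixedClock = Clock.fixed(
  Instant.parse("2018-08-22T10:00:00Z"), ZoneOffset.UTC);

// every call yields the same point in time, regardless of the real clock
Instant first = Instant.now(fixedClock);
Instant second = Instant.now(fixedClock);
assertEquals(first, second);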


3. Using Aspect-Oriented Programming

Another way to override the system time is with AOP. With this approach, we’re able to weave calls to the System class so they return a predefined value that we can set within our test cases.

Also, it’s possible to weave the application classes to redirect the call to System.currentTimeMillis() or to new Date() to another utility class of our own.

One way to implement this is through the use of AspectJ:

public aspect ChangeCallsToCurrentTimeInMillisMethod {
    long around(): 
      call(public static native long java.lang.System.currentTimeMillis()) 
        && within(user.code.base.pckg.*) {
          return 0;
      }
}

In the above example, we’re catching every call to System.currentTimeMillis() inside a specified package, which in this case is user.code.base.pckg.*, and returning zero every time that this event happens.

It’s in this place where we can declare our own implementation to obtain the desired time in milliseconds.

One advantage of using AspectJ is that it operates directly at the bytecode level, so it doesn’t need the original source code of the classes it weaves.

For that reason, we don’t need to recompile them.

4. Conclusion

In this quick article, we’ve explored different ways to override the system time for testing: first with the java.time package and its Clock class, and then by applying an aspect that weaves calls to the System class.

As always, code samples can be found over on GitHub.

Java Weekly, Issue 239


Here we go…

1. Spring and Java

>> Refining functional Spring [blog.frankel.ch]

A quick writeup touching on a few nuances of writing handlers and routes in this exciting new functional approach to Spring Boot.

>> Improve Application Performance with These Advanced GC Techniques [blog.takipi.com]

A solid primer on how garbage collection works in the JVM and a few tricks you can use to improve your application’s performance. Good stuff.

>> How to query parent rows when all children must match the filtering criteria with SQL and Hibernate [vladmihalcea.com]

A nice tutorial that gradually builds an optimal solution to this problem, first in a native SQL query, and then in a JPQL criteria-based query. Very cool.

>> Only modified files in Jenkins [blog.code-cop.org]

And, an interesting approach that uses a Groovy script to identify all files that have been changed since the last green build.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

>> Update your Database Schema Without Downtime [thoughts-on-java.org]

If you absolutely can’t afford downtime, here are some great strategies to use when rolling out non-backwards-compatible schema updates in conjunction with an application update.

>> The future of WebAssembly – A look at upcoming features and proposals [blog.scottlogic.com]

Looks like some essential enhancements are coming soon to this browser-based VM, including reference types, exception handling, and garbage collection.

Also worth reading:

3. Comics

And my favorite Dilberts of the week:

>> How to Become an Engineer [dilbert.com]

>> Dilbert Joins MENSA [dilbert.com]

>> Upgrades Can Be Risky [dilbert.com]

4. Pick of the Week

>> Why Json Isn’t A Good Configuration Language [lucidchart.com]

Comparing Embedded Servlet Containers in Spring Boot


1. Introduction

The rising popularity of cloud-native applications and microservices generates an increased demand for embedded servlet containers. Spring Boot allows developers to easily build applications or services using the three most mature containers available: Tomcat, Undertow, and Jetty.

In this tutorial, we’ll demonstrate a way to quickly compare container implementations using metrics obtained at startup and under some load.

2. Dependencies

Our setup for each available container implementation will always require that we declare a dependency on spring-boot-starter-web in our pom.xml.

In general, we want to specify our parent as spring-boot-starter-parent, and then include the starters we want:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.0.3.RELEASE</version>
    <relativePath/>
</parent>

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter</artifactId>
    </dependency>

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
</dependencies>

2.1. Tomcat

No further dependencies are required when using Tomcat because it is included by default when using spring-boot-starter-web.

2.2. Jetty

In order to use Jetty, we first need to exclude spring-boot-starter-tomcat from spring-boot-starter-web.

Then, we simply declare a dependency on spring-boot-starter-jetty:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-tomcat</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-jetty</artifactId>
</dependency>

2.3. Undertow

Setting up for Undertow is identical to Jetty, except that we use spring-boot-starter-undertow as our dependency:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-tomcat</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-undertow</artifactId>
</dependency>

2.4. Actuator

We’ll use Spring Boot’s Actuator as a convenient way to both stress the system and query for metrics.

Check out this article for details on Actuator. We simply add a dependency in our pom to make it available:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

2.5. Apache Bench

Apache Bench is an open source load testing utility that comes bundled with the Apache web server.

Windows users can download Apache from one of the 3rd party vendors linked here. If Apache is already installed on your Windows machine, you should be able to find ab.exe in your apache/bin directory.

If you are on a Linux machine, ab can be installed using apt-get with:

$ apt-get install apache2-utils

3. Startup Metrics

3.1. Collection

In order to collect our startup metrics, we’ll register an event handler to fire on Spring Boot’s ApplicationReadyEvent.

We’ll programmatically extract the metrics we’re interested in by directly working with the MeterRegistry used by the Actuator component:

@Component
public class StartupEventHandler {

    // logger, constructor
    
    private String[] METRICS = {
      "jvm.memory.used", 
      "jvm.classes.loaded", 
      "jvm.threads.live"};
    private String METRIC_MSG_FORMAT = "Startup Metric >> {}={}";
    
    private MeterRegistry meterRegistry;

    @EventListener
    public void getAndLogStartupMetrics(
      ApplicationReadyEvent event) {
        Arrays.asList(METRICS)
          .forEach(this::processMetric);
    }

    private void processMetric(String metric) {
        Meter meter = meterRegistry.find(metric).meter();
        Map<Statistic, Double> stats = getSamples(meter);
 
        logger.info(METRIC_MSG_FORMAT, metric, stats.get(Statistic.VALUE).longValue());
    }

    // other methods
}
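The getSamples helper elided above can be written directly against Micrometer’s Meter API; a possible sketch:

private Map<Statistic, Double> getSamples(Meter meter) {
    Map<Statistic, Double> samples = new LinkedHashMap<>();
    // a Meter reports its current values as Measurements,
    // each tagged with the Statistic it represents
    meter.measure().forEach(measurement ->
      samples.put(measurement.getStatistic(), measurement.getValue()));
    return samples;
}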

By logging interesting metrics on startup within our event handler, we avoid the need to manually query Actuator REST endpoints or to run a standalone JMX console.

3.2. Selection

There are a large number of metrics that Actuator provides out of the box. We selected 3 metrics that help to get a high-level overview of key runtime characteristics once the server is up:

  • jvm.memory.used – the total memory used by the JVM since startup
  • jvm.classes.loaded – the total number of classes loaded
  • jvm.threads.live – the total number of active threads. In our test, this value can be viewed as the thread count “at rest”

4. Runtime Metrics

4.1. Collection

In addition to providing startup metrics, we’ll use the /metrics endpoint exposed by the Actuator as the target URL when we run Apache Bench in order to put the application under load.

In order to test a real application under load, we might instead use endpoints provided by our application.

Once the server has started, we’ll get a command prompt and execute ab:

ab -n 10000 -c 10 http://localhost:8080/actuator/metrics

In the command above, we’ve specified a total of 10,000 requests using 10 concurrent threads.

4.2. Selection

Apache Bench is able to very quickly give us some useful information including connection times and the percentage of requests that are served within a certain time.

For our purposes, we focused on requests-per-second and time-per-request (mean).

5. Results

On startup, we found that the memory footprint of Tomcat, Jetty, and Undertow was comparable, with Tomcat requiring slightly more memory than the other two and Jetty requiring the smallest amount.

For our benchmark, we found that the performance of Tomcat, Jetty, and Undertow was comparable, but that Undertow was clearly the fastest, with Jetty only slightly slower.

Metric                          Tomcat    Jetty     Undertow
jvm.memory.used (MB)            168       155       164
jvm.classes.loaded              9869      9784      9787
jvm.threads.live                25        17        19
Requests per second             1542      1627      1650
Average time per request (ms)   6.483     6.148     6.059

Note that the metrics are, naturally, representative of a bare-bones project; the metrics of your own application will almost certainly differ.

6. Benchmark Discussion

Developing appropriate benchmark tests to perform thorough comparisons of server implementations can get complicated. In order to extract the most relevant information, it’s critical to have a clear understanding of what’s important for the use case in question.

It’s important to note that the benchmark measurements collected in this example were taken using a very specific workload consisting of HTTP GET requests to an Actuator endpoint.

It’s expected that different workloads would likely result in different relative measurements across container implementations. If more robust or precise measurements were required, it would be a very good idea to set up a test plan that more closely matched the production use case.

In addition, a more sophisticated benchmarking solution such as JMeter or Gatling would likely yield more valuable insights.

7. Choosing a Container

Selecting the right container implementation should likely be based on many factors that can’t be neatly summarized with a handful of metrics alone. Comfort level, features, available configuration options, and policy are often equally important, if not more so.

8. Conclusion

In this article, we looked at the Tomcat, Jetty, and Undertow embedded servlet container implementations. We examined the runtime characteristics of each container at startup with the default configurations by looking at metrics exposed by the Actuator component.

We executed a contrived workload against the running system and then measured performance using Apache Bench.

Lastly, we discussed the merits of this strategy and mentioned a few things to keep in mind when comparing implementation benchmarks. As always, all source code can be found over on GitHub.


Spring @Primary Annotation


1. Overview

In this quick tutorial, we’ll discuss Spring’s @Primary annotation which was introduced with version 3.0 of the framework.

Simply put, we use @Primary to give higher preference to a bean when there are multiple beans of the same type.

Let’s describe the problem in detail.

2. Why is @Primary Needed?

In some cases, we need to register more than one bean of the same type.

In this example, we have JohnEmployee() and TonyEmployee() beans of the Employee type:

@Configuration
public class Config {

    @Bean
    public Employee JohnEmployee() {
        return new Employee("John");
    }

    @Bean
    public Employee TonyEmployee() {
        return new Employee("Tony");
    }
}

Spring throws NoUniqueBeanDefinitionException if we try to run the application.

To access beans of the same type, we usually use the @Qualifier(“beanName”) annotation.

We apply it at the injection point along with @Autowired. In our case, however, we select the beans at the configuration phase, so @Qualifier can’t be applied here. We can learn more about the @Qualifier annotation by following the link.
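For reference, an injection point using @Qualifier would look something like this; the bean name “JohnEmployee” follows Spring’s default naming for @Bean methods:

@Autowired
@Qualifier("JohnEmployee")
private Employee employee;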

To resolve this issue, Spring offers the @Primary annotation.

3. Use @Primary with @Bean

Let’s have a look at configuration class:

@Configuration
public class Config {

    @Bean
    public Employee JohnEmployee() {
        return new Employee("John");
    }

    @Bean
    @Primary
    public Employee TonyEmployee() {
        return new Employee("Tony");
    }
}

We mark the TonyEmployee() bean with @Primary. Spring will inject the TonyEmployee() bean preferentially over JohnEmployee().

Now, let’s start the application context and get the Employee bean from it:

AnnotationConfigApplicationContext context
  = new AnnotationConfigApplicationContext(Config.class);

Employee employee = context.getBean(Employee.class);
System.out.println(employee);

After we run the application:

Employee{name='Tony'}

From the output, we can see that the TonyEmployee() instance has a preference while autowiring.

4. Use @Primary with @Component

We can use @Primary directly on the beans. Let’s have a look at the following scenario:

public interface Manager {
    String getManagerName();
}

We have a Manager interface and two implementing beans. The first is DepartmentManager:

@Component
public class DepartmentManager implements Manager {
    @Override
    public String getManagerName() {
        return "Department manager";
    }
}

And the GeneralManager bean:

@Component
@Primary
public class GeneralManager implements Manager {
    @Override
    public String getManagerName() {
        return "General manager";
    }
}

They both override the getManagerName() of the Manager interface. Also, note that we mark the GeneralManager bean with @Primary.

This time, @Primary only makes sense when we enable the component scan:

@Configuration
@ComponentScan(basePackages="org.baeldung.primary")
public class Config {
}

Let’s create a service to use dependency injection while finding the right bean:

@Service
public class ManagerService {

    @Autowired
    private Manager manager;

    public Manager getManager() {
        return manager;
    }
}

Here, both beans DepartmentManager and GeneralManager are eligible for autowiring.

As we marked GeneralManager bean with @Primary, it will be selected for dependency injection:

ManagerService service = context.getBean(ManagerService.class);
Manager manager = service.getManager();
System.out.println(manager.getManagerName());

The output is “General manager”.

5. Conclusion

In this article, we learned about Spring’s @Primary annotation. With the code examples, we demonstrated the need and the use cases of the @Primary.

As usual, the complete code for this article is available over on GitHub.

Vue.js Frontend with a Spring Boot Backend


1. Overview

In this tutorial, we’ll go over an example application that renders a single page with a Vue.js frontend, while using Spring Boot as a backend.

We’ll also utilize Thymeleaf to pass information to the template.

2. Spring Boot Setup

The application pom.xml uses the spring-boot-starter-thymeleaf dependency for template rendering along with the usual spring-boot-starter-web:

<dependency> 
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId> 
    <version>2.0.3.RELEASE</version>
</dependency> 
<dependency> 
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-thymeleaf</artifactId> 
    <version>2.0.3.RELEASE</version>
</dependency>

By default, Thymeleaf looks for view templates under templates/, so we’ll add an empty index.html at src/main/resources/templates/index.html. We’ll update its contents in the next section.

Finally, our Spring Boot controller will be in src/main/java:

@Controller
public class MainController {
    @GetMapping("/")
    public String index(Model model) {
        model.addAttribute("eventName", "FIFA 2018");
        return "index";
    }
}

This controller renders a single template with data passed to the view via the Spring Web Model object using model.addAttribute.

Let’s run the application using:

mvn spring-boot:run

Browse to http://localhost:8080 to see the index page. It’ll be empty at this point, of course.

Our goal is to make the page print out something like this:

Name of Event: FIFA 2018

Lionel Messi
Argentina's superstar

Christiano Ronaldo
Portugal top-ranked player

3. Rendering Data Using a Vue.Js Component

3.1. Basic Setup of Template

In the template, let’s load Vue.js and Bootstrap (optional) to render the User Interface:

<!-- in the head tag: include Bootstrap (optional) -->

<!-- ... other markup ... -->

<!-- at the end of the body tag -->
<script
  src="https://cdn.jsdelivr.net/npm/vue@2.5.16/dist/vue.js">
</script>
<script
  src="https://cdnjs.cloudflare.com/ajax/libs/babel-standalone/6.21.1/babel.min.js">
</script>

Here we load Vue.js from a CDN, but you can host it too if that’s preferable.

We load Babel in-browser so that we can write some ES6-compliant code in the page without having to run transpilation steps.

In a real-world application, you’ll likely set up a build process with tools such as Webpack and the Babel transpiler, instead of using in-browser Babel.

Now let’s save the page and restart using the mvn spring-boot:run command. We refresh the browser to see our updates; nothing interesting yet.

Next, let’s set up an empty div element to which we’ll attach our User Interface:

<div id="contents"></div>

Next, we set up a Vue application on the page:

<script type="text/babel">
    var app = new Vue({
        el: '#contents'
    });
</script>

What just happened? This code creates a Vue application on the page. We attach it to the element with CSS selector #contents.

That refers to the empty div element on the page. The application is now set up to use Vue.js!

3.2. Displaying Data in the Template

Next, let’s create a header which shows the ‘eventName‘ attribute we passed from Spring controller, and render it using Thymeleaf’s features:

<div class="lead">
    <strong>Name of Event:</strong>
    <span th:text="${eventName}"></span>
</div>

Now let’s attach a ‘data’ attribute to the Vue application to hold our array of player data, which is a simple JSON array.

Our Vue app now looks like this:

<script type="text/babel">
    var app = new Vue({
        el: '#contents',
        data: {
            players: [
                { id: "1", 
                  name: "Lionel Messi", 
                  description: "Argentina's superstar" },
                { id: "2", 
                  name: "Christiano Ronaldo", 
                  description: "World #1-ranked player from Portugal" }
            ]
        }
    });
</script>

Now Vue.js knows about a data attribute called players.

3.3. Rendering Data with a Vue.js Component

Next, let’s create a Vue.js component named player-card which renders just one player. Remember to register this component before creating the Vue app.

Otherwise, Vue won’t find it:

Vue.component('player-card', {
    props: ['player'],
    template: `<div class="card">
        <div class="card-body">
            <h6 class="card-title">
                {{ player.name }}
            </h6>
            <p class="card-text">
                <div>
                    {{ player.description }}
                </div>
            </p>
        </div>
        </div>`
});

Finally, let’s loop over the set of players in the app object and render a player-card component for each player:

<ul>
    <li style="list-style-type:none" v-for="player in players">
        <player-card
          v-bind:player="player" 
          v-bind:key="player.id">
        </player-card>
    </li>
</ul>

The logic here is the Vue directive called v-for, which will loop over each player in the players data attribute and render a player-card for each player entry inside a <li> element.

v-bind:player means that the player-card component will be given a property called player whose value will be the player loop variable currently being worked with. v-bind:key is required to make each <li> element unique.

Generally, player.id is a good choice since it is already unique.

Now if we reload the page and observe the generated HTML markup in devtools, it will look similar to this:

<ul>
    <li style="list-style-type: none;">
        <div class="card">
            // contents
        </div>
    </li>
    <li style="list-style-type: none;">
        <div class="card">
            // contents
        </div>
    </li>
</ul>

A workflow improvement note: it’ll quickly become cumbersome to have to restart the application and refresh the browser each time you make a change to the code.

Therefore, to make life easier, please refer to this article on how to use Spring Boot devtools and automatic restart.

4. Conclusion

In this quick article, we went over how to set up a web application using Spring Boot for the backend and Vue.js for the frontend. This recipe can form the basis for more powerful and scalable applications; it’s just a starting point.

As usual, code samples can be found over on GitHub.

Remove the First Element from a List


1. Overview

In this super-quick tutorial, we’ll show how to remove the first element from a List.

We’ll perform this operation for two common implementations of the List interface – ArrayList and LinkedList.

2. Creating a List

Firstly, let’s populate our Lists:

@Before
public void init() {
    list.add("cat");
    list.add("dog");
    list.add("pig");
    list.add("cow");
    list.add("goat");

    linkedList.add("cat");
    linkedList.add("dog");
    linkedList.add("pig");
    linkedList.add("cow");
    linkedList.add("goat");
}
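
Here we assume both lists are declared as fields of the test class, for example:

private List<String> list = new ArrayList<>();
private LinkedList<String> linkedList = new LinkedList<>();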

3. ArrayList

Secondly, let’s remove the first element from the ArrayList, and make sure that our list doesn’t contain it any longer:

@Test
public void givenList_whenRemoveFirst_thenRemoved() {
    list.remove(0);

    assertThat(list, hasSize(4));
    assertThat(list, not(contains("cat")));
}

As shown above, we’re using the remove(index) method to remove the first element – this will also work for any implementation of the List interface.

4. LinkedList

LinkedList also implements the remove(index) method (in its own way), but it also has the removeFirst() method.

Let’s make sure that it works as expected:

@Test
public void givenLinkedList_whenRemoveFirst_thenRemoved() {
    linkedList.removeFirst();

    assertThat(linkedList, hasSize(4));
    assertThat(linkedList, not(contains("cat")));
}

5. Time Complexity

Although the methods look similar, their efficiency differs. ArrayList‘s remove() method requires O(n) time, whereas LinkedList‘s removeFirst() method requires O(1) time.

This is because ArrayList uses an array under the hood, and the remove() operation requires shifting the rest of the array toward the beginning. The larger the array is, the more elements need to be shifted.

In contrast, LinkedList uses pointers, meaning that each element points to the next and the previous one.

Hence, removing the first element simply means changing the pointer to the first element. This operation always takes the same time, regardless of the size of the list.

6. Conclusion

In this article, we’ve covered how to remove the first element from a List, and have compared the efficiency of this operation for ArrayList and LinkedList implementations.

As usual, the complete source code is available over on GitHub.

Spring Session With JDBC


1. Overview

In this quick tutorial, we’ll learn how to use Spring Session JDBC to persist session information to a database.

For demonstration purposes, we’ll be using an in-memory H2 database.

2. Configuration Options

The easiest and fastest way to create our sample project is by using Spring Boot. However, we’ll also show a non-boot way to set things up.

Hence, you don’t need to complete both sections 3 and 4; just pick one depending on whether or not you’re using Spring Boot to configure Spring Session.

3. Spring Boot Configuration

First, let’s look at the required configuration for Spring Session JDBC.

3.1. Maven Dependencies

First, we need to add these dependencies to our project:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
</dependency> 
<dependency>
    <groupId>org.springframework.session</groupId>
    <artifactId>spring-session-jdbc</artifactId>
</dependency>
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <version>1.4.197</version>
    <scope>runtime</scope>
</dependency>

Our application runs with Spring Boot, and the parent pom.xml provides versions for each entry. The latest version of each dependency can be found here: spring-boot-starter-web, spring-boot-starter-test, spring-session-jdbc, and h2.

Surprisingly, the only configuration property that we need to enable Spring Session backed by a relational database is in the application.properties:

spring.session.store-type=jdbc

4. Standard Spring Config (no Spring Boot)

Let’s also have a look at integrating and configuring spring-session without Spring Boot – just with plain Spring.

4.1. Maven Dependencies

First, if we’re adding spring-session-jdbc to a standard Spring project, we’ll need to add spring-session-jdbc and h2 to our pom.xml (last two dependencies from the snippet in the previous section).

4.2. Spring Session Configuration

Now let’s add a configuration class for Spring Session JDBC:

@Configuration
@EnableJdbcHttpSession
public class Config
  extends AbstractHttpSessionApplicationInitializer {

    @Bean
    public EmbeddedDatabase dataSource() {
        return new EmbeddedDatabaseBuilder()
          .setType(EmbeddedDatabaseType.H2)
          .addScript("org/springframework/session/jdbc/schema-h2.sql").build();
    }

    @Bean
    public PlatformTransactionManager transactionManager(DataSource dataSource) {
        return new DataSourceTransactionManager(dataSource);
    }
}

As we can see, the differences are minimal. Now we have to define our EmbeddedDatabase and PlatformTransactionManager beans explicitly – Spring Boot does it for us in the previous configuration.

The above ensures that a Spring bean named springSessionRepositoryFilter is registered with our servlet container for every request.

5. A Simple App

Moving on, let’s look at a simple REST API that demonstrates session persistence.

5.1. Controller

First, let’s add a Controller class to store and display information in the HttpSession:

@Controller
public class SpringSessionJdbcController {

    @GetMapping("/")
    public String index(Model model, HttpSession session) {
        List<String> favoriteColors = getFavColors(session);
        model.addAttribute("favoriteColors", favoriteColors);
        model.addAttribute("sessionId", session.getId());
        return "index";
    }

    @PostMapping("/saveColor")
    public String saveMessage
      (@RequestParam("color") String color, 
      HttpServletRequest request) {
 
        List<String> favoriteColors 
          = getFavColors(request.getSession());
        if (!StringUtils.isEmpty(color)) {
            favoriteColors.add(color);
            request.getSession().
              setAttribute("favoriteColors", favoriteColors);
        }
        return "redirect:/";
    }

    private List<String> getFavColors(HttpSession session) {
        List<String> favoriteColors = (List<String>) session
          .getAttribute("favoriteColors");
        
        if (favoriteColors == null) {
            favoriteColors = new ArrayList<>();
        }
        return favoriteColors;
    }
}

6. Testing Our Implementation

Now that we have an API with a GET and POST method, let’s write tests to invoke both methods.

In each case, we should be able to assert that the session information is persisted in the database. To verify this, we’ll query the session database directly.

Let’s first set things up:

@RunWith(SpringRunner.class)
@SpringBootTest(
  webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class SpringSessionJdbcApplicationTests {

    @LocalServerPort
    private int port;

    @Autowired
    private TestRestTemplate testRestTemplate;

    private List<String> getSessionIdsFromDatabase() 
      throws SQLException {
 
        List<String> result = new ArrayList<>();
        ResultSet rs = getResultSet(
          "SELECT * FROM SPRING_SESSION");
        
        while (rs.next()) {
            result.add(rs.getString("SESSION_ID"));
        }
        return result;
    }

    private List<byte[]> getSessionAttributeBytesFromDatabase() 
      throws SQLException {
 
        List<byte[]> result = new ArrayList<>();
        ResultSet rs = getResultSet(
          "SELECT * FROM SPRING_SESSION_ATTRIBUTES");
        
        while (rs.next()) {
            result.add(rs.getBytes("ATTRIBUTE_BYTES"));
        }
        return result;
    }

    private ResultSet getResultSet(String sql) 
      throws SQLException {
 
        Connection conn = DriverManager
          .getConnection("jdbc:h2:mem:testdb", "sa", "");
        Statement stat = conn.createStatement();
        return stat.executeQuery(sql);
    }
}

Note the use of @FixMethodOrder(MethodSorters.NAME_ASCENDING) to control the order of test case execution. Read more about it here.

Let’s begin by asserting that the session tables are empty in the database:

@Test
public void whenH2DbIsQueried_thenSessionInfoIsEmpty() 
  throws SQLException {
 
    assertEquals(
      0, getSessionIdsFromDatabase().size());
    assertEquals(
      0, getSessionAttributeBytesFromDatabase().size());
}

Next, we test the GET endpoint:

@Test
public void whenH2DbIsQueried_thenOneSessionIsCreated() 
  throws SQLException {
 
    assertThat(this.testRestTemplate.getForObject(
      "http://localhost:" + port + "/", String.class))
      .isNotEmpty();
    assertEquals(1, getSessionIdsFromDatabase().size());
}

When the API is invoked for the first time, a session is created and persisted in the database. As we can see, there is only one row in the SPRING_SESSION table at this point.

Finally, we test the POST endpoint by providing a favorite color:

@Test
public void whenH2DbIsQueried_thenSessionAttributeIsRetrieved()
  throws Exception {
 
    MultiValueMap<String, String> map = new LinkedMultiValueMap<>();
    map.add("color", "red");
    this.testRestTemplate.postForObject(
      "http://localhost:" + port + "/saveColor", map, String.class);
    List<byte[]> queryResponse = getSessionAttributeBytesFromDatabase();
    
    assertEquals(1, queryResponse.size());
    ObjectInput in = new ObjectInputStream(
      new ByteArrayInputStream(queryResponse.get(0)));
    List<String> obj = (List<String>) in.readObject();
    assertEquals("red", obj.get(0));
}

As expected, the SPRING_SESSION_ATTRIBUTES table persists the favorite color. Notice that we have to deserialize the contents of ATTRIBUTE_BYTES to a list of String objects, since Spring serializes session attributes when persisting them in the database.

7. How Does It Work?

Looking at the controller, there’s no indication of the database persisting the session information. All the magic happens in the single line we added to the application.properties.

That is, when we specify spring.session.store-type=jdbc, behind the scenes, Spring Boot will apply a configuration that is equivalent to manually adding @EnableJdbcHttpSession annotation.

This creates a Spring Bean named springSessionRepositoryFilter that implements a SessionRepositoryFilter.

Another key point is that the filter intercepts every HttpServletRequest and wraps it into a SessionRepositoryRequestWrapper.

It also calls the commitSession method to persist the session information.

8. Session Information Stored in H2 Database

By adding the properties below, we can take a look at the tables where the session information is stored, at http://localhost:8080/h2-console/:

spring.h2.console.enabled=true
spring.h2.console.path=/h2-console

9. Conclusion

Spring Session is a powerful tool for managing HTTP sessions in a distributed system architecture. Spring takes care of the heavy lifting for simple use cases by providing a predefined schema with minimal configuration. At the same time, it offers the flexibility to come up with our design on how we want to store session information.

Finally, for managing authentication information using Spring Session you can refer to this article – Guide to Spring Session.

As always, you can find the source code over on GitHub.

Get a Random Number in Kotlin


1. Introduction

This short tutorial will demonstrate how to generate a random number using Kotlin.

2. Random number using java.lang.Math

The easiest way to generate a random number in Kotlin is to use java.lang.Math. The example below will generate a random double between 0 and 1.

@Test
fun whenRandomNumberWithJavaUtilMath_thenResultIsBetween0And1() {
    val randomNumber = Math.random()
    assertTrue { randomNumber >= 0 }
    assertTrue { randomNumber < 1 }
}

3. Random number using ThreadLocalRandom

We can also use java.util.concurrent.ThreadLocalRandom to generate a random double, integer or long value. Integer and long values generated this way can be either positive or negative.

ThreadLocalRandom is thread-safe and provides better performance in a multithreaded environment because it provides a separate Random object for every thread and thus reduces contention between threads:

@Test
fun whenRandomNumberWithJavaThreadLocalRandom_thenResultsInDefaultRanges() {
    val randomDouble = ThreadLocalRandom.current().nextDouble()
    val randomInteger = ThreadLocalRandom.current().nextInt()
    val randomLong = ThreadLocalRandom.current().nextLong()
    assertTrue { randomDouble >= 0 }
    assertTrue { randomDouble < 1 }
    assertTrue { randomInteger >= Integer.MIN_VALUE }
    assertTrue { randomInteger < Integer.MAX_VALUE }
    assertTrue { randomLong >= Long.MIN_VALUE }
    assertTrue { randomLong < Long.MAX_VALUE }
}

4. Random number using Kotlin.js

We can also generate a random double using the Math class from the kotlin.js library.

@Test
fun whenRandomNumberWithKotlinJSMath_thenResultIsBetween0And1() {
    val randomDouble = Math.random()
    assertTrue { randomDouble >= 0 }
    assertTrue { randomDouble < 1 }
}

5. Random number in given range using pure Kotlin

Using pure Kotlin, we can create a list of numbers, shuffle it and then take the first element from that list:

@Test
fun whenRandomNumberWithKotlinNumberRange_thenResultInGivenRange() {
    val randomInteger = (1..12).shuffled().first()
    assertTrue { randomInteger >= 1 }
    assertTrue { randomInteger <= 12 }
}

6. Random number in given range using ThreadLocalRandom

ThreadLocalRandom, presented in section 3, can also be used to generate a random number in a given range:

@Test
fun whenRandomNumberWithJavaThreadLocalRandom_thenResultsInGivenRanges() {
    val randomDouble = ThreadLocalRandom.current().nextDouble(1.0, 10.0)
    val randomInteger = ThreadLocalRandom.current().nextInt(1, 10)
    val randomLong = ThreadLocalRandom.current().nextLong(1, 10)
    assertTrue { randomDouble >= 1 }
    assertTrue { randomDouble < 10 }
    assertTrue { randomInteger >= 1 }
    assertTrue { randomInteger < 10 }
    assertTrue { randomLong >= 1 }
    assertTrue { randomLong < 10 }
}

7. Conclusion

In this article, we’ve learned a few ways to generate a random number in Kotlin.

As always, all code presented in this tutorial can be found over on GitHub.

Auto-import Classes in IntelliJ


1. Overview

This brief tutorial will describe each option of IntelliJ IDEA’s ‘auto-import’ feature.

2. Auto-import

There are several options in IntelliJ IDEA that we may configure in Settings > Editor > Auto Import.

Let’s review each of these options.

2.1. Show Import Popup

When enabled, IDEA will underline a class reference in our code and suggest an import to add.

If there are several options to choose from, IDEA will let us choose an import from a list of alternatives.

2.2. Optimize Imports on the Fly

This one will make IDEA remove unused imports automatically and rearrange others according to the ‘Code Style’ preferences.

2.3. Add Unambiguous Imports on the Fly

Also, there is a way to automatically add an import as we add references to classes that need to be imported.

2.4. Show Import Suggestions for Static Methods and Fields

Our final option will enable the import popup feature for statics.

However, note that turning only this option on (without ‘Show import popup’) will not enable import suggestions for classes.

3. Conclusion

Some developers prefer to have total control over the imports in their classes, others rely on the IDE to handle this technical task.

Either of them may benefit from various configuration options that IntelliJ IDEA IDE has, including those for importing behavior.

Add Multiple Items to a Java ArrayList


1. Overview of ArrayList

In this quick tutorial, we’ll show how to add multiple items to an already initialized ArrayList.

For an introduction to the use of the ArrayList, please refer to this article.

2. AddAll

First of all, we’re going to introduce a simple way to add multiple items into an ArrayList.

We’ll start with addAll(), which takes a collection as its argument:

List<Integer> anotherList = Arrays.asList(5, 12, 9, 3, 15, 88);
list.addAll(anotherList);

It’s important to keep in mind that the elements added to the first list will reference the same objects as the elements in anotherList.

For that reason, any change made to one of these elements will affect both lists.
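
To illustrate, here’s a hypothetical snippet with mutable elements, where a change made through one list is visible through the other:

List<StringBuilder> anotherList = new ArrayList<>();
anotherList.add(new StringBuilder("cat"));

List<StringBuilder> list = new ArrayList<>();
list.addAll(anotherList);

// appending through anotherList is visible through list,
// because both hold a reference to the same StringBuilder
anotherList.get(0).append("fish");
// list.get(0).toString() is now "catfish"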

3. Collections.addAll

The Collections class consists exclusively of static methods that operate on or return collections.

One of them is addAll, which needs a destination list, while the items to be added may be specified individually or as an array.

Here’s an example of how to use it with individual elements:

List<Integer> list = new ArrayList<>();
Collections.addAll(list, 1, 2, 3, 4, 5);

And here’s another one demonstrating the operation with an array:

List<Integer> list = new ArrayList<>();
Integer[] otherList = new Integer[] {1, 2, 3, 4, 5};
Collections.addAll(list, otherList);

Similar to the way explained in the section above, the contents of both lists here will refer to the same objects.

4. Using Java 8

Java 8 opens up new possibilities by adding new tools. The one we’ll explore in the next examples is Stream:

List<Integer> source = ...;
List<Integer> target = ...;

source.stream()
  .forEachOrdered(target::add);

The main advantage of this approach is the opportunity to use skip and filter operations. In the next example, we’re going to skip the first element:

source.stream()
  .skip(1)
  .forEachOrdered(target::add);

We can also filter the elements according to our needs. For instance, by their Integer value:

source.stream()
  .filter(i -> i > 10)
  .forEachOrdered(target::add);

Finally, there are scenarios where we want to work in a null-safe way. For those ones, we can use Optional:

Optional.ofNullable(source).ifPresent(target::addAll);

In the above example, we’re adding the elements from source to target via the addAll method.

5. Conclusion

In this article, we’ve explored different ways to add multiple items to an already initialized ArrayList.

As always, code samples can be found over on GitHub.


How to Filter a Collection in Java


1. Overview

In this short tutorial, we’ll have a look at different ways of filtering a Collection in Java – that is, finding all the items that meet a certain condition.

This is a fundamental task that is present in practically any Java application.

For this reason, the number of libraries that provide functionality for this purpose is significant.

Particularly, in this tutorial we’ll cover:

  • Java 8 Streams’ filter() function
  • Java 9 filtering collector
  • Relevant Eclipse Collections APIs
  • Apache’s CollectionUtils filter() method
  • Guava’s Collections2 filter() approach

2. Using Streams

Since Java 8 was introduced, Streams have gained a key role in most cases where we have to process a collection of data.

Consequently, this is the preferred approach in most cases as it is built in Java and requires no additional dependencies.

2.1. Filtering a Collection with Streams

For the sake of simplicity, in all the examples our objective will be to create a method that retrieves only the even numbers from a Collection of Integer values.

Thus, we can express the condition that we’ll use to evaluate each item as ‘value % 2 == 0‘.

In all the cases, we’ll have to define this condition as a Predicate object:

public Collection<Integer> findEvenNumbers(Collection<Integer> baseCollection) {
    Predicate<Integer> streamsPredicate = item -> item % 2 == 0;

    return baseCollection.stream()
      .filter(streamsPredicate)
      .collect(Collectors.toList());
}

It’s important to note that each library we analyze in this tutorial provides its own Predicate implementation; still, all of them are defined as functional interfaces, allowing us to use lambda expressions to declare them.

In this case, we used a predefined Collector provided by Java that accumulates the elements into a List, but we could’ve used others, as discussed in this previous post.

2.2. Filtering After Grouping a Collection in Java 9

Streams allow us to aggregate items using the groupingBy collector.

Yet, if we filter as we did in the last section, some elements might get discarded in an early stage, before this collector comes into play.

For this reason, the filtering collector was introduced with Java 9, with the objective of processing the subcollections after they have been grouped.

Following our example, let’s imagine we want to group our collection based on the number of digits each Integer has, before filtering out the odd numbers:

public Map<Integer, List<Integer>> findEvenNumbersAfterGrouping(
  Collection<Integer> baseCollection) {
 
    Function<Integer, Integer> getQuantityOfDigits = item -> (int) Math.log10(item) + 1;
    
    return baseCollection.stream()
      .collect(groupingBy(
        getQuantityOfDigits,
        filtering(item -> item % 2 == 0, toList())));
}

In short, if we use this collector, we might end up with an empty value entry, whereas if we filter before grouping, the collector wouldn’t create such an entry at all.
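
To make this concrete, here’s a small sketch (reusing the static imports from the example above) with only odd numbers as input:

List<Integer> oddNumbers = Arrays.asList(1, 23);

// grouping first, filtering afterwards: the groups survive, but end up empty
Map<Integer, List<Integer>> groupedThenFiltered = oddNumbers.stream()
  .collect(groupingBy(item -> (int) Math.log10(item) + 1,
    filtering(item -> item % 2 == 0, toList())));
// => {1=[], 2=[]}

// filtering first: the groups are never created
Map<Integer, List<Integer>> filteredThenGrouped = oddNumbers.stream()
  .filter(item -> item % 2 == 0)
  .collect(groupingBy(item -> (int) Math.log10(item) + 1));
// => {}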

Of course, we would choose the approach based on our requirements.

3. Using Eclipse Collections

We can also make use of other third-party libraries to accomplish our objective, either because our application doesn’t support Java 8 or because we want to take advantage of some powerful functionality not provided by Java.

Such is the case of Eclipse Collections, a library that strives to keep up with the new paradigms, evolving and embracing the changes introduced by all the latest Java releases.

We can begin by exploring our Eclipse Collections Introductory post to have a broader knowledge of the functionality provided by this library.

3.1. Dependencies

Let’s begin by adding the following dependency to our project’s pom.xml:

<dependency>
    <groupId>org.eclipse.collections</groupId>
    <artifactId>eclipse-collections</artifactId>
    <version>9.2.0</version>
</dependency>

The eclipse-collections artifact includes all the necessary data structure interfaces and the API itself.

3.2. Filtering a Collection with Eclipse Collections

Let’s now use Eclipse Collections’ filtering functionality on one of its data structures, such as its MutableList:

public Collection<Integer> findEvenNumbers(Collection<Integer> baseCollection) {
    Predicate<Integer> eclipsePredicate
      = item -> item % 2 == 0;
 
    Collection<Integer> filteredList = Lists.mutable
      .ofAll(baseCollection)
      .select(eclipsePredicate);

    return filteredList;
}

As an alternative, we could’ve used the Iterate‘s select() static method to define the filteredList object:

Collection<Integer> filteredList
 = Iterate.select(baseCollection, eclipsePredicate);

4. Using Apache’s CollectionUtils

To get started with Apache’s CollectionUtils library, we can check out this short tutorial where we covered its uses.

In this tutorial, however, we’ll focus on its filter() implementation.

4.1. Dependencies

First, we’ll need the following dependencies in our pom.xml file:

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-collections4</artifactId>
    <version>4.2</version>
</dependency>

4.2. Filtering a Collection with CollectionUtils

We are now ready to use the CollectionUtils‘ methods:

public Collection<Integer> findEvenNumbers(Collection<Integer> baseCollection) {
    Predicate<Integer> apachePredicate = item -> item % 2 == 0;

    CollectionUtils.filter(baseCollection, apachePredicate);
    return baseCollection;
}

We have to take into account that this method modifies the baseCollection by removing every item that doesn’t match the condition.

This means that the base Collection has to be mutable; otherwise, it will throw an exception.
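
For instance, a hypothetical snippet passing an unmodifiable list fails at the first removal:

Collection<Integer> immutableList
  = Collections.unmodifiableList(Arrays.asList(1, 2, 3));

// throws UnsupportedOperationException, since filter() removes items in place
CollectionUtils.filter(immutableList, item -> item % 2 == 0);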

5. Using Guava’s Collections2

As before, we can read our previous post ‘Filtering and Transforming Collections in Guava’ for further information on this subject.

5.1. Dependencies

Let’s start by adding this dependency in our pom.xml file:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>25.1-jre</version>
</dependency>

5.2. Filtering a Collection with Collections2

As we can see, this approach is fairly similar to the one followed in the last section:

public Collection<Integer> findEvenNumbers(Collection<Integer> baseCollection) {
    Predicate<Integer> guavaPredicate = item -> item % 2 == 0;
        
    return Collections2.filter(baseCollection, guavaPredicate);
}

Again, here we define a Guava specific Predicate object.

In this case, Guava doesn’t modify the baseCollection; instead, it returns a filtered view, so we can use an immutable collection as input.

6. Conclusion

In summary, we’ve seen that there are many different ways of filtering collections in Java.

Even though Streams are usually the preferred approach, it’s good to know and keep in mind the functionality offered by other libraries.

This is especially true if we need to support older Java versions. In that case, we have to remember that recent Java features used throughout the tutorial, such as lambdas, should be replaced with anonymous classes.

As usual, we can find all the examples shown in this tutorial in our GitHub repo.

Display RSS Feed with Spring MVC


1. Introduction

This quick tutorial will show how to build a simple RSS feed using Spring MVC and the AbstractRssFeedView class.

Afterward, we’ll also implement a simple REST API – to expose our feed over the wire.

2. RSS Feed

Before going into the implementation details, let’s quickly review what RSS is and how it works.

RSS is a type of web feed which easily allows a user to keep track of updates from a website. Furthermore, RSS feeds are based on an XML file which summarizes the content of a site. A news aggregator can then subscribe to one or more feeds and display the updates by regularly checking if the XML has changed.

3. Dependencies

First of all, since Spring’s RSS support is based on the ROME framework, we’ll need to add it as a dependency to our pom before we can actually use it:

<dependency>
    <groupId>com.rometools</groupId>
    <artifactId>rome</artifactId>
    <version>1.10.0</version>
</dependency>

For a guide to ROME, have a look at our previous article.

4. Feed Implementation

Next up, we’re going to build the actual feed. In order to do that, we’ll extend the AbstractRssFeedView class and implement two of its methods.

The first one will receive a Channel object as input and will populate it with the feed’s metadata.

The other will return a list of items which represents the feed’s content:

@Component
public class RssFeedView extends AbstractRssFeedView {

    @Override
    protected void buildFeedMetadata(Map<String, Object> model, 
      Channel feed, HttpServletRequest request) {
        feed.setTitle("Baeldung RSS Feed");
        feed.setDescription("Learn how to program in Java");
        feed.setLink("http://www.baeldung.com");
    }

    @Override
    protected List<Item> buildFeedItems(Map<String, Object> model, 
      HttpServletRequest request, HttpServletResponse response) {
        Item entryOne = new Item();
        entryOne.setTitle("JUnit 5 @Test Annotation");
        entryOne.setAuthor("donatohan.rimenti@gmail.com");
        entryOne.setLink("http://www.baeldung.com/junit-5-test-annotation");
        entryOne.setPubDate(Date.from(Instant.parse("2017-12-19T00:00:00Z")));
        return Arrays.asList(entryOne);
    }
}

5. Exposing the Feed

Finally, we’re going to build a simple REST service to make our feed available on the web. The service will return the view object that we just created:

@RestController
public class RssFeedController {

    @Autowired
    private RssFeedView view;
    
    @GetMapping("/rss")
    public View getFeed() {
        return view;
    }
}

Also, since we’re using Spring Boot to start up our application, we’ll implement a simple launcher class:

@SpringBootApplication
public class RssFeedApplication {
    public static void main(final String[] args) {
        SpringApplication.run(RssFeedApplication.class, args);
    }
}

After running our application, when performing a request to our service we’ll see the following RSS Feed:

<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
    <channel>
        <title>Baeldung RSS Feed</title>
        <link>http://www.baeldung.com</link>
        <description>Learn how to program in Java</description>
        <item>
            <title>JUnit 5 @Test Annotation</title>
            <link>http://www.baeldung.com/junit-5-test-annotation</link>
            <pubDate>Tue, 19 Dec 2017 00:00:00 GMT</pubDate>
            <author>donatohan.rimenti@gmail.com</author>
        </item>
    </channel>
</rss>

6. Conclusion

This article went through how to build a simple RSS feed with Spring and ROME and make it available for the consumers by using a Web Service.

In our example, we used Spring Boot to start up our application. For more details on this topic, continue reading this introductory article on Spring Boot.

As always, all the code used is available over on GitHub.

Parsing YAML with SnakeYAML


1. Overview

In this tutorial, we’ll learn how to use SnakeYAML library to serialize Java objects to YAML documents and vice versa.

2. Project Setup

In order to use SnakeYAML in our project, we’ll add the following Maven dependency (the latest version can be found here):

<dependency>
    <groupId>org.yaml</groupId>
    <artifactId>snakeyaml</artifactId>
    <version>1.21</version>            
</dependency>

3. Entry Point

The Yaml class is the entry point for the API:

Yaml yaml = new Yaml();

Since the implementation is not thread-safe, different threads must have their own Yaml instance.
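
If we need to parse YAML from several threads, one simple approach – our own sketch, not a library feature – is to give each thread its own instance via a ThreadLocal:

// one Yaml instance per thread, since Yaml itself is not thread-safe
private static final ThreadLocal<Yaml> YAML = ThreadLocal.withInitial(Yaml::new);

public static Yaml yamlForCurrentThread() {
    return YAML.get();
}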

4. Loading a YAML Document

The library provides support for loading a document from a String or an InputStream. The majority of the code samples here are based on parsing an InputStream.

Let’s start by defining a simple YAML document, and naming the file as customer.yaml:

firstName: "John"
lastName: "Doe"
age: 20

4.1. Basic Usage

Now we’ll parse the above YAML document with the Yaml class:

Yaml yaml = new Yaml();
InputStream inputStream = this.getClass()
  .getClassLoader()
  .getResourceAsStream("customer.yaml");
Map<String, Object> obj = yaml.load(inputStream);
System.out.println(obj);

The above code generates the following output:

{firstName=John, lastName=Doe, age=20}

By default, the load() method returns a Map instance. Querying the Map object each time would require us to know the property key names in advance, and it’s also not easy to traverse nested properties.

4.2. Custom Type

The library also provides a way to load the document as a custom class. This option would allow easy traversal of data in memory.

Let’s define a Customer class and try to load the document again:

public class Customer {

    private String firstName;
    private String lastName;
    private int age;

    // getters and setters
}

Assuming the YAML document to be deserialized as a known type, we can specify an explicit global tag in the document.

Let’s update the document and store it in a new file customer_with_type.yaml:

!!com.baeldung.snakeyaml.Customer
firstName: "John"
lastName: "Doe"
age: 20

Note the first line in the document, which holds the info about the class to be used when loading it.

Now we’ll update the code used above, and pass the new file name as input:

Yaml yaml = new Yaml();
InputStream inputStream = this.getClass()
 .getClassLoader()
 .getResourceAsStream("yaml/customer_with_type.yaml");
Customer customer = yaml.load(inputStream);

The load() method now returns an instance of the Customer type. The drawback to this approach is that the type has to be exported as a library in order to be used where needed.

Alternatively, we could use an explicit local tag, for which we aren’t required to export libraries.

Another way of loading a custom type is by using the Constructor class. This way, we can specify the root type for the YAML document to be parsed. Let’s create a Constructor instance with Customer as the root type and pass it to the Yaml instance.

Now, on loading customer.yaml, we’ll get a Customer object:

Yaml yaml = new Yaml(new Constructor(Customer.class));
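
Loading then follows the same pattern as before; for example (a sketch reusing the pattern from section 4.1):

InputStream inputStream = this.getClass()
  .getClassLoader()
  .getResourceAsStream("customer.yaml");
Customer customer = yaml.load(inputStream);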

4.3. Implicit Types

In case there’s no type defined for a given property, the library automatically converts the value to an implicit type.

For example:

1.0 -> Float
42 -> Integer
2009-03-30 -> Date

Let’s test this implicit type conversion using a test case:

@Test
public void whenLoadYAML_thenLoadCorrectImplicitTypes() {
   Yaml yaml = new Yaml();
   Map<Object, Object> document = yaml.load("3.0: 2018-07-22");
 
   assertNotNull(document);
   assertEquals(1, document.size());
   assertTrue(document.containsKey(3.0d));   
}

4.4. Nested Objects and Collections

Given a top-level type, the library automatically detects the types of nested objects, unless they’re an interface or an abstract class, and deserializes the document into the relevant nested type.

Let’s add Contact and Address details to the customer.yaml, and save the new file as customer_with_contact_details_and_address.yaml. 

Now we’ll parse the new YAML document:

firstName: "John"
lastName: "Doe"
age: 31
contactDetails:
   - type: "mobile"
     number: 123456789
   - type: "landline"
     number: 456786868
homeAddress:
   line: "Xyz, DEF Street"
   city: "City Y"
   state: "State Y"
   zip: 345657

The Customer class should also reflect these changes. Here’s the updated class:

public class Customer {
    private String firstName;
    private String lastName;
    private int age;
    private List<Contact> contactDetails;
    private Address homeAddress;    
    // getters and setters
}

Let’s see what the Contact and Address classes look like:

public class Contact {
    private String type;
    private int number;
    // getters and setters
}
public class Address {
    private String line;
    private String city;
    private String state;
    private Integer zip;
    // getters and setters
}

Now we’ll test the Yaml#load() with the given test case:

@Test
public void 
  whenLoadYAMLDocumentWithTopLevelClass_thenLoadCorrectJavaObjectWithNestedObjects() {
 
    Yaml yaml = new Yaml(new Constructor(Customer.class));
    InputStream inputStream = this.getClass()
      .getClassLoader()
      .getResourceAsStream("yaml/customer_with_contact_details_and_address.yaml");
    Customer customer = yaml.load(inputStream);
 
    assertNotNull(customer);
    assertEquals("John", customer.getFirstName());
    assertEquals("Doe", customer.getLastName());
    assertEquals(31, customer.getAge());
    assertNotNull(customer.getContactDetails());
    assertEquals(2, customer.getContactDetails().size());
    
    assertEquals("mobile", customer.getContactDetails()
      .get(0)
      .getType());
    assertEquals(123456789, customer.getContactDetails()
      .get(0)
      .getNumber());
    assertEquals("landline", customer.getContactDetails()
      .get(1)
      .getType());
    assertEquals(456786868, customer.getContactDetails()
      .get(1)
      .getNumber());
    assertNotNull(customer.getHomeAddress());
    assertEquals("Xyz, DEF Street", customer.getHomeAddress()
      .getLine());
}

4.5. Type-Safe Collections

When one or more properties of a given Java class are type-safe (generic) collections, then it’s important to specify the TypeDescription so that the correct parameterized type is identified.

Let’s take one Customer having more than one Contact, and try to load it:

firstName: "John"
lastName: "Doe"
age: 31
contactDetails:
   - { type: "mobile", number: 123456789}
   - { type: "landline", number: 123456789}

In order to load this document, we can specify the TypeDescription for the given property on the top level class:

Constructor constructor = new Constructor(Customer.class);
TypeDescription customTypeDescription = new TypeDescription(Customer.class);
customTypeDescription.addPropertyParameters("contactDetails", Contact.class);
constructor.addTypeDescription(customTypeDescription);
Yaml yaml = new Yaml(constructor);
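
Loading with this Yaml instance then yields a correctly parameterized list; for example (the file name below is our own choice for the document above):

InputStream inputStream = this.getClass()
  .getClassLoader()
  .getResourceAsStream("yaml/customer_with_typesafe_collection.yaml");
Customer customer = yaml.load(inputStream);
Contact firstContact = customer.getContactDetails().get(0); // a real Contact, not a Map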

4.6. Loading Multiple Documents

There could be cases where, in a single File there are several YAML documents, and we want to parse all of them. The Yaml class provides a loadAll() method to do such type of parsing.

By default, the method returns an instance of Iterable<Object> where each object is of type Map<String, Object>. If a custom type is desired, we can use the Constructor instance as discussed above.

Consider the following documents in a single file:

---
firstName: "John"
lastName: "Doe"
age: 20
---
firstName: "Jack"
lastName: "Jones"
age: 25

We can parse the above using the loadAll() method, as shown in the code sample below:

@Test
public void whenLoadMultipleYAMLDocuments_thenLoadCorrectJavaObjects() {
    Yaml yaml = new Yaml(new Constructor(Customer.class));
    InputStream inputStream = this.getClass()
      .getClassLoader()
      .getResourceAsStream("yaml/customers.yaml");

    int count = 0;
    for (Object object : yaml.loadAll(inputStream)) {
        count++;
        assertTrue(object instanceof Customer);
    }
    assertEquals(2, count);
}

5. Dumping YAML Documents

The library also provides a method to dump a given Java object into a YAML document. The output could be a String or a specified file/stream.

5.1. Basic Usage

We’ll start with a simple example of dumping an instance of Map<String, Object> to a YAML document (String):

@Test
public void whenDumpMap_thenGenerateCorrectYAML() {
    Map<String, Object> data = new LinkedHashMap<String, Object>();
    data.put("name", "Silenthand Olleander");
    data.put("race", "Human");
    data.put("traits", new String[] { "ONE_HAND", "ONE_EYE" });
    Yaml yaml = new Yaml();
    StringWriter writer = new StringWriter();
    yaml.dump(data, writer);
    String expectedYaml = "name: Silenthand Olleander\nrace: Human\ntraits: [ONE_HAND, ONE_EYE]\n";

    assertEquals(expectedYaml, writer.toString());
}

The above code produces the following output (note that using an instance of LinkedHashMap preserves the order of the output data):

name: Silenthand Olleander
race: Human
traits: [ONE_HAND, ONE_EYE]

5.2. Custom Java Objects

We can also choose to dump custom Java types into an output stream. This will, however, add the global explicit tag to the output document:

@Test
public void whenDumpACustomType_thenGenerateCorrectYAML() {
    Customer customer = new Customer();
    customer.setAge(45);
    customer.setFirstName("Greg");
    customer.setLastName("McDowell");
    Yaml yaml = new Yaml();
    StringWriter writer = new StringWriter();
    yaml.dump(customer, writer);        
    String expectedYaml = "!!com.baeldung.snakeyaml.Customer {age: 45, contactDetails: null, firstName: Greg,\n  homeAddress: null, lastName: McDowell}\n";

    assertEquals(expectedYaml, writer.toString());
}

With the above approach, we’re still dumping the tag information into the YAML document.

This means we have to export our class as a library for any consumer who deserializes it. In order to avoid the tag name in the output file, we can use the dumpAs() method provided by the library.

So in the above code, we could tweak the following to remove the tag:

yaml.dumpAs(customer, Tag.MAP, null);
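
Note that dumpAs() returns the resulting document as a String; without the global tag, we’d expect output along these lines (our expectation based on the test above):

String yamlString = yaml.dumpAs(customer, Tag.MAP, null);
// {age: 45, contactDetails: null, firstName: Greg, homeAddress: null, lastName: McDowell}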

6. Conclusion

This article illustrated usages of SnakeYAML library to serialize Java objects to YAML and vice versa.

All of the examples can be found in the GitHub project – this is a Maven-based project, so it should be easy to import and run as it is.

A Guide to JavaFaker


1. Overview

JavaFaker is a library that can be used to generate a wide array of real-looking data, from addresses to popular culture references.

In this tutorial, we’ll be looking at how to use JavaFaker’s classes to generate fake data. We’ll start by introducing the Faker class and the FakeValuesService, before moving on to locales to make the data more specific to a single place.

Finally, we’ll discuss how unique the data is. To test JavaFaker’s classes, we’ll make use of regular expressions; you can read more about them here.

2. Dependencies

Below is the single dependency we’ll need to get started with JavaFaker.

First, for Maven-based projects:

<dependency>
    <groupId>com.github.javafaker</groupId>
    <artifactId>javafaker</artifactId>
    <version>0.15</version>
</dependency>

For Gradle users, you can add the following to your build.gradle file:

compile group: 'com.github.javafaker', name: 'javafaker', version: '0.15'

3. FakeValuesService

The FakeValuesService class provides methods for generating random sequences as well as resolving .yml files associated with the locale.

In this section, we’ll cover some of the useful methods that the FakeValuesService has to offer.

3.1. Letterify, Numerify, and Bothify

Three useful methods are Letterify, Numerify, and Bothify. Letterify helps to generate random sequences of alphabetic characters.

Next, Numerify simply generates numeric sequences.

Finally, Bothify is a combination of the two and can create random alphanumeric sequences – useful for mocking things like ID strings.

FakeValuesService requires a valid Locale, as well as a RandomService:

@Test
public void whenBothifyCalled_checkPatternMatches() throws Exception {

    FakeValuesService fakeValuesService = new FakeValuesService(
      new Locale("en-GB"), new RandomService());

    String email = fakeValuesService.bothify("????##@gmail.com");
    Matcher emailMatcher = Pattern.compile("\\w{4}\\d{2}@gmail.com").matcher(email);
 
    assertTrue(emailMatcher.find());
}

In this unit test, we create a new FakeValuesService with a locale of en-GB and use the bothify method to generate a unique fake Gmail address.

It works by replacing ‘?’ with random letters and ‘#’ with random numbers. We can then verify the output with a simple Matcher.
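
The letterify and numerify methods work the same way on their own; for instance (patterns of our own choosing):

String letters = fakeValuesService.letterify("??? street"); // e.g. "uzh street"
String digits = fakeValuesService.numerify("account ###"); // e.g. "account 472"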

3.2. Regexify

Similarly, regexify generates a random sequence based on a chosen regex pattern.

In this snippet, we’ll use the FakeValuesService to create a random sequence following a specified regex:

@Test
public void givenValidService_whenRegexifyCalled_checkPattern() throws Exception {

    FakeValuesService fakeValuesService = new FakeValuesService(
      new Locale("en-GB"), new RandomService());

    String alphaNumericString = fakeValuesService.regexify("[a-z1-9]{10}");
    Matcher alphaNumericMatcher = Pattern.compile("[a-z1-9]{10}").matcher(alphaNumericString);
 
    assertTrue(alphaNumericMatcher.find());
}

Our code creates a lower-case alphanumeric string of length 10. Our pattern checks the generated string against the regex.

4. JavaFaker’s Faker Class

The Faker class allows us to use JavaFaker’s fake data classes.

In this section, we’ll see how to instantiate a Faker object and use it to call some fake data:

Faker faker = new Faker();

String streetName = faker.address().streetName();
String number = faker.address().buildingNumber();
String city = faker.address().city();
String country = faker.address().country();

System.out.println(String.format("%s\n%s\n%s\n%s",
  number,
  streetName,
  city,
  country));

Above, we use the Faker Address object to generate a random address.

When we run this code, we’ll get an example of the output:

3188
Dayna Mountains
New Granvilleborough
Tonga

We can see that the data has no single geographical location since we didn’t specify a locale. To change this, we’ll learn to make the data more relevant to our location in the next section.

We could also use this faker object in a similar way to create data relating to many more objects such as:

  • Business
  • Beer
  • Food
  • PhoneNumber

You can find the full list here.

5. Introducing Locales

Here, we’ll see how to use locales to make the generated data more specific to a single location. We’ll create one Faker with a US locale and one with a UK locale:

@Test
public void givenJavaFakersWithDifferentLocales_thenCheckZipCodesMatchRegex() {

    Faker ukFaker = new Faker(new Locale("en-GB"));
    Faker usFaker = new Faker(new Locale("en-US"));

    System.out.println(String.format("American zipcode: %s", usFaker.address().zipCode()));
    System.out.println(String.format("British postcode: %s", ukFaker.address().zipCode()));

    Pattern ukPattern = Pattern.compile(
      "([Gg][Ii][Rr] 0[Aa]{2})|((([A-Za-z][0-9]{1,2})|"
      + "(([A-Za-z][A-Ha-hJ-Yj-y][0-9]{1,2})|(([A-Za-z][0-9][A-Za-z])|([A-Za-z][A-Ha-hJ-Yj-y]" 
      + "[0-9]?[A-Za-z]))))\\s?[0-9][A-Za-z]{2})");
    Matcher ukMatcher = ukPattern.matcher(ukFaker.address().zipCode());

    assertTrue(ukMatcher.find());

    Matcher usMatcher = Pattern.compile("^\\d{5}(?:[-\\s]\\d{4})?$")
      .matcher(usFaker.address().zipCode());

    assertTrue(usMatcher.find());
}

Above, we see that the two Fakers with locales match their regexes for the countries’ zip codes.

If the locale passed to the Faker does not exist, the Faker throws a LocaleDoesNotExistException.

We’ll test this with the following unit test:

@Test(expected = LocaleDoesNotExistException.class)
public void givenWrongLocale_whenFakerInitialised_testExceptionThrown() {
    Faker wrongLocaleFaker = new Faker(new Locale("en-seaWorld"));
}

6. Uniqueness

While JavaFaker seemingly generates data at random, uniqueness cannot be guaranteed.

JavaFaker supports seeding of its pseudo-random number generator (PRNG) in the form of a RandomService to provide deterministic output for repeated method calls.

Simply put, pseudorandomness is a process that appears random but is not.

We can see how this works by creating two Fakers with the same seed:

@Test
public void givenJavaFakersWithSameSeed_whenNameCalled_CheckSameName() {

    Faker faker1 = new Faker(new Random(24));
    Faker faker2 = new Faker(new Random(24));

    assertEquals(faker1.name().firstName(), faker2.name().firstName());
}

The above code returns the same name from two different fakers.

7. Conclusion

In this tutorial, we’ve explored the JavaFaker library to generate real-looking fake data. We’ve also covered two useful classes: the Faker class and the FakeValuesService class.

We explored how we can use locales to generate location-specific data.

Finally, we discussed how the generated data only seems random, and that its uniqueness is not guaranteed.

As usual, code snippets can be found over on GitHub.

A Simple Guide to Connection Pooling in Java


1. Overview

Connection pooling is a well-known data access pattern, whose main purpose is to reduce the overhead involved in performing database connections and read/write database operations.

In a nutshell, a connection pool is, at the most basic level, a database connection cache implementation, which can be configured to suit specific requirements.

In this tutorial, we’ll make a quick roundup of a few popular connection pooling frameworks, and we’ll learn how to implement our own connection pool from scratch.

2. Why Connection Pooling?

The question is rhetorical, of course.

If we analyze the sequence of steps involved in a typical database connection life cycle, we’ll understand why:

  1. Opening a connection to the database using the database driver
  2. Opening a TCP socket for reading/writing data
  3. Reading / writing data over the socket
  4. Closing the connection
  5. Closing the socket

It becomes evident that database connections are fairly expensive operations, and as such, should be reduced to a minimum in every possible use case (in edge cases, just avoided).

Here’s where connection pooling implementations come into play.

By simply implementing a database connection container that allows us to reuse a number of existing connections, we can effectively save the cost of a huge number of expensive database trips, boosting the overall performance of our database-driven applications.

3. JDBC Connection Pooling Frameworks

From a pragmatic perspective, implementing a connection pool from the ground up is just pointless, considering the number of “enterprise-ready” connection pooling frameworks available out there.

From a didactic one, which is the goal of this article, it’s not.

Even so, before we learn how to implement a basic connection pool, let’s first showcase a few popular connection pooling frameworks.

3.1. Apache Commons DBCP

Let’s start this quick roundup with Apache Commons DBCP Component, a full-featured connection pooling JDBC framework:

public class DBCPDataSource {
    
    private static BasicDataSource ds = new BasicDataSource();
    
    static {
        ds.setUrl("jdbc:h2:mem:test");
        ds.setUsername("user");
        ds.setPassword("password");
        ds.setMinIdle(5);
        ds.setMaxIdle(10);
        ds.setMaxOpenPreparedStatements(100);
    }
    
    public static Connection getConnection() throws SQLException {
        return ds.getConnection();
    }
    
    private DBCPDataSource(){ }
}

In this case, we’ve used a wrapper class with a static block to easily configure DBCP’s properties.

Here’s how to get a pooled connection with the DBCPDataSource class:

Connection con = DBCPDataSource.getConnection();

3.2. HikariCP

Moving on, let’s look at HikariCP, a lightning fast JDBC connection pooling framework created by Brett Wooldridge (for the full details on how to configure and get the most out of HikariCP, please check this article):

public class HikariCPDataSource {
    
    private static HikariConfig config = new HikariConfig();
    private static HikariDataSource ds;
    
    static {
        config.setJdbcUrl("jdbc:h2:mem:test");
        config.setUsername("user");
        config.setPassword("password");
        config.addDataSourceProperty("cachePrepStmts", "true");
        config.addDataSourceProperty("prepStmtCacheSize", "250");
        config.addDataSourceProperty("prepStmtCacheSqlLimit", "2048");
        ds = new HikariDataSource(config);
    }
    
    public static Connection getConnection() throws SQLException {
        return ds.getConnection();
    }
    
    private HikariCPDataSource(){}
}

Similarly, here’s how to get a pooled connection with the HikariCPDataSource class:

Connection con = HikariCPDataSource.getConnection();

3.3. C3P0

Last in this review is C3P0, a powerful JDBC4 connection and statement pooling framework developed by Steve Waldman:

public class C3poDataSource {

    private static ComboPooledDataSource cpds = new ComboPooledDataSource();

    static {
        try {
            cpds.setDriverClass("org.h2.Driver");
            cpds.setJdbcUrl("jdbc:h2:mem:test");
            cpds.setUser("user");
            cpds.setPassword("password");
        } catch (PropertyVetoException e) {
            // handle the exception
        }
    }
    
    public static Connection getConnection() throws SQLException {
        return cpds.getConnection();
    }
    
    private C3poDataSource(){}
}

As expected, getting a pooled connection with the C3poDataSource class is similar to the previous examples:

Connection con = C3poDataSource.getConnection();

4. A Simple Implementation

To better understand the underlying logic of connection pooling, let’s create a simple implementation.

Let’s start out with a loosely-coupled design, based on a single interface:

public interface ConnectionPool {
    Connection getConnection();
    boolean releaseConnection(Connection connection);
    String getUrl();
    String getUser();
    String getPassword();
}

The ConnectionPool interface defines the public API of a basic connection pool.

Now, let’s create an implementation, which provides some basic functionality, including getting and releasing a pooled connection:

public class BasicConnectionPool 
  implements ConnectionPool {

    private String url;
    private String user;
    private String password;
    private List<Connection> connectionPool;
    private List<Connection> usedConnections = new ArrayList<>();
    private static final int INITIAL_POOL_SIZE = 10;
    
    public static BasicConnectionPool create(
      String url, String user, 
      String password) throws SQLException {
 
        List<Connection> pool = new ArrayList<>(INITIAL_POOL_SIZE);
        for (int i = 0; i < INITIAL_POOL_SIZE; i++) {
            pool.add(createConnection(url, user, password));
        }
        return new BasicConnectionPool(url, user, password, pool);
    }
    
    // standard constructors
    
    @Override
    public Connection getConnection() {
        Connection connection = connectionPool
          .remove(connectionPool.size() - 1);
        usedConnections.add(connection);
        return connection;
    }
    
    @Override
    public boolean releaseConnection(Connection connection) {
        connectionPool.add(connection);
        return usedConnections.remove(connection);
    }
    
    private static Connection createConnection(
      String url, String user, String password) 
      throws SQLException {
        return DriverManager.getConnection(url, user, password);
    }
    
    public int getSize() {
        return connectionPool.size() + usedConnections.size();
    }

    // standard getters
}

While pretty naive, the BasicConnectionPool class provides the minimal functionality that we’d expect from a typical connection pooling implementation.

In a nutshell, the class initializes a connection pool based on an ArrayList that stores 10 connections, which can be easily reused.

It’s possible to create JDBC connections with the DriverManager class and with Datasource implementations.

As it’s much better to keep the creation of connections database agnostic, we’ve used the former, within the create() static factory method.

In this case, we’ve placed the method within the BasicConnectionPool, because this is the only implementation of the interface.

In a more complex design with multiple ConnectionPool implementations, it'd be preferable to place it in the interface, thus achieving a more flexible design and a greater level of cohesion.

The most relevant point to stress here is that once the pool is created, connections are fetched from the pool, so there’s no need to create new ones.

Furthermore, when a connection is released, it's actually returned to the pool, so other clients can reuse it.

There’s no any further interaction with the underlying database, such as an explicit call to the Connection’s close() method.

5. Using the BasicConnectionPool Class

As expected, using our BasicConnectionPool class is straightforward.

Let’s create a simple unit test and get a pooled in-memory H2 connection:

@Test
public void whenCalledgetConnection_thenCorrect() throws SQLException {
    ConnectionPool connectionPool = BasicConnectionPool
      .create("jdbc:h2:mem:test", "user", "password");
 
    assertTrue(connectionPool.getConnection().isValid(1));
}

6. Further Improvements and Refactoring

Of course, there’s plenty of room to tweak/extend the current functionality of our connection pooling implementation.

For instance, we could refactor the getConnection() method, and add support for maximum pool size. If all available connections are taken, and the current pool size is less than the configured maximum, the method will create a new connection:

@Override
public Connection getConnection() throws SQLException {
    if (connectionPool.isEmpty()) {
        if (usedConnections.size() < MAX_POOL_SIZE) {
            connectionPool.add(createConnection(url, user, password));
        } else {
            throw new RuntimeException(
              "Maximum pool size reached, no available connections!");
        }
    }

    Connection connection = connectionPool
      .remove(connectionPool.size() - 1);
    usedConnections.add(connection);
    return connection;
}

Note that the method now declares SQLException, which means we'll have to update the ConnectionPool interface signature as well.
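For reference, here's a minimal sketch of the changes this refactoring implies; the MAX_POOL_SIZE constant is new, and its value is an arbitrary choice for illustration:

// in BasicConnectionPool; the value is illustrative
private static final int MAX_POOL_SIZE = 20;

// in the ConnectionPool interface; the signature now declares SQLException
Connection getConnection() throws SQLException;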

Or, we could add a method to gracefully shut down our connection pool instance:

public void shutdown() throws SQLException {
    // iterate over a copy, since releaseConnection() modifies usedConnections
    new ArrayList<>(usedConnections).forEach(this::releaseConnection);
    for (Connection c : connectionPool) {
        c.close();
    }
    connectionPool.clear();
}
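Note that we iterate over a copy of usedConnections, since releaseConnection() removes elements from that list; removing elements while iterating over the original would throw a ConcurrentModificationException.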

In production-ready implementations, a connection pool should provide a bunch of extra features, such as the ability to track the connections that are currently in use, support for prepared statement pooling, and so forth.

As we’ll keep things simple, we’ll omit how to implement these additional features and keep the implementation non-thread-safe for the sake of clarity.

7. Conclusion

In this article, we took an in-depth look at what connection pooling is and learned how to roll our own connection pooling implementation.

Of course, we don’t have to start from scratch every time that we want to add a full-featured connection pooling layer to our applications.

That’s why we made first a simple roundup showing some of the most popular connection pool frameworks, so we can have a clear idea on how to work with them, and pick up the one that best suits our requirements.

As usual, all the code samples shown in this article are available over on GitHub.
