Inheritance with Jackson

1. Overview

In this article we’ll have a look at working with class hierarchies in Jackson.

Two typical use cases are the inclusion of subtype metadata, and ignoring properties inherited from superclasses. We’re going to describe those two scenarios and a couple of circumstances where special treatment of subtypes is needed.

2. Inclusion of Subtype Information

There are two ways to add type information when serializing and deserializing data objects, namely global default typing and per-class annotations.

2.1. Global Default Typing

The following three Java classes will be used to illustrate global inclusion of type metadata.

Vehicle superclass:

public abstract class Vehicle {
    private String make;
    private String model;

    protected Vehicle(String make, String model) {
        this.make = make;
        this.model = model;
    }

    // no-arg constructor, getters and setters
}

Car subclass:

public class Car extends Vehicle {
    private int seatingCapacity;
    private double topSpeed;

    public Car(String make, String model, int seatingCapacity, double topSpeed) {
        super(make, model);
        this.seatingCapacity = seatingCapacity;
        this.topSpeed = topSpeed;
    }

    // no-arg constructor, getters and setters
}

Truck subclass:

public class Truck extends Vehicle {
    private double payloadCapacity;

    public Truck(String make, String model, double payloadCapacity) {
        super(make, model);
        this.payloadCapacity = payloadCapacity;
    }

    // no-arg constructor, getters and setters
}

Global default typing allows type information to be declared just once by enabling it on an ObjectMapper object; the type metadata is then applied to all designated types. As a result, this method is very convenient for adding type metadata, especially when a large number of types is involved. The downside is that it uses fully-qualified Java type names as type identifiers, making it unsuitable for interaction with non-Java systems, and that it is only applicable to a few pre-defined kinds of types.

The Vehicle structure shown above is used to populate an instance of Fleet class:

public class Fleet {
    private List<Vehicle> vehicles;
    
    // getters and setters
}

To embed type metadata, we need to enable the typing functionality on the ObjectMapper object that will be used for serialization and deserialization of data objects later on:

ObjectMapper.enableDefaultTyping(ObjectMapper.DefaultTyping applicability, JsonTypeInfo.As includeAs)

The applicability parameter determines the types requiring type information, and the includeAs parameter is the mechanism for type metadata inclusion. Additionally, two other variants of the enableDefaultTyping method are provided:

  • ObjectMapper.enableDefaultTyping(ObjectMapper.DefaultTyping applicability): allows the caller to specify the applicability, while using WRAPPER_ARRAY as the default value for includeAs
  • ObjectMapper.enableDefaultTyping(): uses OBJECT_AND_NON_CONCRETE as the default value for applicability and WRAPPER_ARRAY as the default value for includeAs
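
For instance, here is a minimal sketch of the fully parameterized variant, restricting default typing to non-final types and embedding the type id as a JSON property (the parameter values here are just an illustration):

ObjectMapper mapper = new ObjectMapper();
// apply default typing to non-final types and write the type id as a property
mapper.enableDefaultTyping(
  ObjectMapper.DefaultTyping.NON_FINAL, 
  JsonTypeInfo.As.PROPERTY);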

Let’s see how it works. To begin, we need to create an ObjectMapper object and enable default typing on it:

ObjectMapper mapper = new ObjectMapper();
mapper.enableDefaultTyping();

The next step is to instantiate and populate the data structure introduced at the beginning of this sub-section. The code to do this will be re-used in the subsequent sub-sections; for the sake of convenience and re-use, we will call it the vehicle instantiation block.

Car car = new Car("Mercedes-Benz", "S500", 5, 250.0);
Truck truck = new Truck("Isuzu", "NQR", 7500.0);

List<Vehicle> vehicles = new ArrayList<>();
vehicles.add(car);
vehicles.add(truck);

Fleet serializedFleet = new Fleet();
serializedFleet.setVehicles(vehicles);

Those populated objects will then be serialized:

String jsonDataString = mapper.writeValueAsString(serializedFleet);

The resulting JSON string:

{
    "vehicles": 
    [
        "java.util.ArrayList",
        [
            [
                "org.baeldung.jackson.inheritance.Car",
                {
                    "make": "Mercedes-Benz",
                    "model": "S500",
                    "seatingCapacity": 5,
                    "topSpeed": 250.0
                }
            ],

            [
                "org.baeldung.jackson.inheritance.Truck",
                {
                    "make": "Isuzu",
                    "model": "NQR",
                    "payloadCapacity": 7500.0
                }
            ]
        ]
    ]
}

During deserialization, objects are recovered from the JSON string with type data preserved:

Fleet deserializedFleet = mapper.readValue(jsonDataString, Fleet.class);

The recreated objects will be the same concrete subtypes as they were before serialization:

assertThat(deserializedFleet.getVehicles().get(0), instanceOf(Car.class));
assertThat(deserializedFleet.getVehicles().get(1), instanceOf(Truck.class));

2.2. Per-Class Annotations

Per-class annotations are a powerful way to include type information, and can be very useful for complex use cases where a significant level of customization is necessary. However, that power comes at the cost of additional complexity. Per-class annotations override global default typing if type information is configured both ways.

To make use of this method, the supertype should be annotated with @JsonTypeInfo and several other relevant annotations. This subsection will use a data model similar to the Vehicle structure in the previous example to illustrate per-class annotations. The only change is the addition of annotations on Vehicle abstract class, as shown below:

@JsonTypeInfo(
  use = JsonTypeInfo.Id.NAME, 
  include = JsonTypeInfo.As.PROPERTY, 
  property = "type")
@JsonSubTypes({ 
  @Type(value = Car.class, name = "car"), 
  @Type(value = Truck.class, name = "truck") 
})
public abstract class Vehicle {
    // fields, constructors, getters and setters
}

Data objects are created using the vehicle instantiation block introduced in the previous subsection, and then serialized:

String jsonDataString = mapper.writeValueAsString(serializedFleet);

The serialization produces the following JSON structure:

{
    "vehicles": 
    [
        {
            "type": "car",
            "make": "Mercedes-Benz",
            "model": "S500",
            "seatingCapacity": 5,
            "topSpeed": 250.0
        },

        {
            "type": "truck",
            "make": "Isuzu",
            "model": "NQR",
            "payloadCapacity": 7500.0
        }
    ]
}

That string is used to re-create data objects:

Fleet deserializedFleet = mapper.readValue(jsonDataString, Fleet.class);

Finally, the whole process is validated:

assertThat(deserializedFleet.getVehicles().get(0), instanceOf(Car.class));
assertThat(deserializedFleet.getVehicles().get(1), instanceOf(Truck.class));

3. Ignoring Properties from a Supertype

Sometimes, properties inherited from superclasses need to be ignored during serialization or deserialization. This can be achieved with one of three methods: annotations, mix-ins, and annotation introspection.

3.1. Annotations

There are two commonly used Jackson annotations for ignoring properties: @JsonIgnore and @JsonIgnoreProperties. The former is applied directly to a type member, telling Jackson to ignore the corresponding property when serializing or deserializing. The latter can be applied at any level, including types and type members, to list the properties that should be ignored.

@JsonIgnoreProperties is more powerful than @JsonIgnore, since it allows us to ignore properties inherited from supertypes that we do not control, such as types in an external library. In addition, it lets us ignore many properties at once, which can lead to more readable code in some cases.

The following class structure is used to demonstrate annotation usage:

public abstract class Vehicle {
    private String make;
    private String model;

    protected Vehicle(String make, String model) {
        this.make = make;
        this.model = model;
    }

    // no-arg constructor, getters and setters
}

@JsonIgnoreProperties({ "model", "seatingCapacity" })
public abstract class Car extends Vehicle {
    private int seatingCapacity;
    
    @JsonIgnore
    private double topSpeed;

    protected Car(String make, String model, int seatingCapacity, double topSpeed) {
        super(make, model);
        this.seatingCapacity = seatingCapacity;
        this.topSpeed = topSpeed;
    }

    // no-arg constructor, getters and setters
}

public class Sedan extends Car {
    public Sedan(String make, String model, int seatingCapacity, double topSpeed) {
        super(make, model, seatingCapacity, topSpeed);
    }

    // no-arg constructor
}

public class Crossover extends Car {
    private double towingCapacity;

    public Crossover(String make, String model, int seatingCapacity, 
      double topSpeed, double towingCapacity) {
        super(make, model, seatingCapacity, topSpeed);
        this.towingCapacity = towingCapacity;
    }

    // no-arg constructor, getters and setters
}

As you can see, @JsonIgnore tells Jackson to ignore the Car.topSpeed property, while @JsonIgnoreProperties ignores Vehicle.model and Car.seatingCapacity.

The behavior of both annotations is validated by the following test. First, we need to instantiate an ObjectMapper and the data classes, then use that ObjectMapper instance to serialize the data objects:

ObjectMapper mapper = new ObjectMapper();

Sedan sedan = new Sedan("Mercedes-Benz", "S500", 5, 250.0);
Crossover crossover = new Crossover("BMW", "X6", 5, 250.0, 6000.0);

List<Vehicle> vehicles = new ArrayList<>();
vehicles.add(sedan);
vehicles.add(crossover);

String jsonDataString = mapper.writeValueAsString(vehicles);

jsonDataString contains the following JSON array:

[
    {
        "make": "Mercedes-Benz"
    },
    {
        "make": "BMW",
        "towingCapacity": 6000.0
    }
]

Finally, we will prove the presence or absence of various property names in the resulting JSON string:

assertThat(jsonDataString, containsString("make"));
assertThat(jsonDataString, not(containsString("model")));
assertThat(jsonDataString, not(containsString("seatingCapacity")));
assertThat(jsonDataString, not(containsString("topSpeed")));
assertThat(jsonDataString, containsString("towingCapacity"));

3.2. Mix-ins

Mix-ins allow us to apply behavior (such as ignoring properties when serializing and deserializing) without the need to directly apply annotations to a class. This is especially useful when dealing with third-party classes, in which we cannot modify the code directly.

This sub-section reuses the class inheritance chain introduced in the previous one, except that the @JsonIgnore and @JsonIgnoreProperties annotations on the Car class have been removed:

public abstract class Car extends Vehicle {
    private int seatingCapacity;
    private double topSpeed;
        
    // fields, constructors, getters and setters
}

In order to demonstrate operations of mix-ins, we will ignore Vehicle.make and Car.topSpeed properties, then use a test to make sure everything works as expected.

The first step is to declare a mix-in type:

private abstract class CarMixIn {
    @JsonIgnore
    public String make;
    @JsonIgnore
    public String topSpeed;
}

Next, the mix-in is bound to a data class through an ObjectMapper object:

ObjectMapper mapper = new ObjectMapper();
mapper.addMixIn(Car.class, CarMixIn.class);

After that, we instantiate data objects and serialize them into a string:

Sedan sedan = new Sedan("Mercedes-Benz", "S500", 5, 250.0);
Crossover crossover = new Crossover("BMW", "X6", 5, 250.0, 6000.0);

List<Vehicle> vehicles = new ArrayList<>();
vehicles.add(sedan);
vehicles.add(crossover);

String jsonDataString = mapper.writeValueAsString(vehicles);

jsonDataString now contains the following JSON:

[
    {
        "model": "S500",
        "seatingCapacity": 5
    },
    {
        "model": "X6",
        "seatingCapacity": 5,
        "towingCapacity": 6000.0
    }
]

Finally, let’s verify the result:

assertThat(jsonDataString, not(containsString("make")));
assertThat(jsonDataString, containsString("model"));
assertThat(jsonDataString, containsString("seatingCapacity"));
assertThat(jsonDataString, not(containsString("topSpeed")));
assertThat(jsonDataString, containsString("towingCapacity"));

3.3. Annotation Introspection

Annotation introspection is the most powerful method to ignore supertype properties, since it allows for detailed customization using the AnnotationIntrospector.hasIgnoreMarker API.

This sub-section makes use of the same class hierarchy as the preceding one. In this use case, we will ask Jackson to ignore Vehicle.model, Crossover.towingCapacity and all properties declared in the Car class. Let's start by declaring a class that extends the JacksonAnnotationIntrospector class:

class IgnoranceIntrospector extends JacksonAnnotationIntrospector {
    @Override
    public boolean hasIgnoreMarker(AnnotatedMember m) {
        // use equals() for string comparisons; the parentheses group the Vehicle.model condition
        return (m.getDeclaringClass() == Vehicle.class && "model".equals(m.getName()))
          || m.getDeclaringClass() == Car.class 
          || "towingCapacity".equals(m.getName()) 
          || super.hasIgnoreMarker(m);
    }
}

The introspector will ignore any properties (that is, it will treat them as if they were marked as ignored via one of the other methods) that match the set of conditions defined in the method.

The next step is to register an instance of the IgnoranceIntrospector class with an ObjectMapper object:

ObjectMapper mapper = new ObjectMapper();
mapper.setAnnotationIntrospector(new IgnoranceIntrospector());

Now we create and serialize data objects in the same way as in section 3.2. The contents of the newly produced string are:

[
    {
        "make": "Mercedes-Benz"
    },
    {
        "make": "BMW"
    }
]

Finally, we’ll verify that the introspector worked as intended:

assertThat(jsonDataString, containsString("make"));
assertThat(jsonDataString, not(containsString("model")));
assertThat(jsonDataString, not(containsString("seatingCapacity")));
assertThat(jsonDataString, not(containsString("topSpeed")));
assertThat(jsonDataString, not(containsString("towingCapacity")));

4. Subtype Handling Scenarios

This section will deal with two interesting scenarios relevant to subclass handling.

4.1. Conversion Between Subtypes

Jackson allows an object to be converted to a type other than the original one. In fact, this conversion may happen between any compatible types, but it is most helpful when used between two subtypes of the same interface or class, to preserve values and functionality.

In order to demonstrate the conversion of one type to another, we will reuse the Vehicle hierarchy from section 2, adding the @JsonIgnore annotation to the subtype-specific properties in Car and Truck to avoid incompatibility:

public class Car extends Vehicle {
    @JsonIgnore
    private int seatingCapacity;

    @JsonIgnore
    private double topSpeed;

    // constructors, getters and setters
}

public class Truck extends Vehicle {
    @JsonIgnore
    private double payloadCapacity;

    // constructors, getters and setters
}

The following code will verify that a conversion is successful, and that the new object preserves data values from the old one:

ObjectMapper mapper = new ObjectMapper();

Car car = new Car("Mercedes-Benz", "S500", 5, 250.0);
Truck truck = mapper.convertValue(car, Truck.class);

assertEquals("Mercedes-Benz", truck.getMake());
assertEquals("S500", truck.getModel());

4.2. Deserialization Without No-arg Constructors

By default, Jackson recreates data objects using their no-arg constructors. This is inconvenient in some cases, such as when a class has non-default constructors and users would have to write no-arg ones just to satisfy Jackson's requirements. It is even more troublesome in a class hierarchy, where a no-arg constructor must be added to the class and to all the classes higher in its inheritance chain. In these cases, creator methods come to the rescue.

This section will use an object structure similar to the one in section 2, with some changes to constructors. Specifically, all no-arg constructors are dropped, and constructors of concrete subtypes are annotated with @JsonCreator and @JsonProperty to make them creator methods.

public class Car extends Vehicle {

    @JsonCreator
    public Car(
      @JsonProperty("make") String make, 
      @JsonProperty("model") String model, 
      @JsonProperty("seating") int seatingCapacity, 
      @JsonProperty("topSpeed") double topSpeed) {
        super(make, model);
        this.seatingCapacity = seatingCapacity;
        this.topSpeed = topSpeed;
    }

    // fields, getters and setters
}

public class Truck extends Vehicle {

    @JsonCreator
    public Truck(
      @JsonProperty("make") String make, 
      @JsonProperty("model") String model, 
      @JsonProperty("payload") double payloadCapacity) {
        super(make, model);
        this.payloadCapacity = payloadCapacity;
    }

    // fields, getters and setters
}

A test will verify that Jackson can deal with objects that lack no-arg constructors:

ObjectMapper mapper = new ObjectMapper();
mapper.enableDefaultTyping();
        
Car car = new Car("Mercedes-Benz", "S500", 5, 250.0);
Truck truck = new Truck("Isuzu", "NQR", 7500.0);

List<Vehicle> vehicles = new ArrayList<>();
vehicles.add(car);
vehicles.add(truck);

Fleet serializedFleet = new Fleet();
serializedFleet.setVehicles(vehicles);

String jsonDataString = mapper.writeValueAsString(serializedFleet);
mapper.readValue(jsonDataString, Fleet.class);
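
To make the round-trip explicit, we could additionally assert the recovered subtypes, mirroring the checks from section 2:

Fleet deserializedFleet = mapper.readValue(jsonDataString, Fleet.class);
assertThat(deserializedFleet.getVehicles().get(0), instanceOf(Car.class));
assertThat(deserializedFleet.getVehicles().get(1), instanceOf(Truck.class));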

5. Conclusion

This tutorial has covered several interesting use cases demonstrating Jackson's support for type inheritance, with a focus on polymorphism and on ignoring supertype properties.

The implementation of all these examples and code snippets can be found in a GitHub project.

Looking for a new Java Technical Editor for Baeldung

We’re looking for a part-time Java Technical Editor to work with new authors.

You will be working with authors to review their articles and help them with feedback (in video form – screencasts).

All articles are focused on Java, and generally also involve various Spring technologies as well.

Budget and Time Commitment

Overall, you’ll spend around 12-14 hours / month, and the budget for the position is $600 / month.

That budget targets a 12-16 articles / month workload (12 x 1250-word or 16 x 1000-word articles).

And the typical article will take about 2 rounds of review until it’s ready to go. All of this usually takes about 30 to 45 minutes of work for a small to medium article and can take 60 to 90 minutes for larger pieces.

Who is the right candidate?

First – you need to be a developer yourself, working or actively involved in the Java and Spring ecosystem. All of these articles are code-centric, so being in the trenches and able to code is instrumental.

Second – you need to be able to provide feedback via screencasts. These videos are informal – they’re generally you explaining what an author needs to improve in their article.

Finally – and it almost goes without saying – you should have a good command of the English language.

What Will You Be Doing?

The goal is to make sure that the article hits a high level of quality before it gets published. More specifically – articles should match the code and style guidelines of the site.

Beyond formatting and style, articles should be code-focused, clean and easy to understand. Sometimes an article is almost there, but not quite – and the author needs to be guided towards a better solution, or a better way of explaining some specific concept.

Apply

If you think you’re well suited for this work, I’d love to work with you to help grow Baeldung.

Email me at eugen@baeldung.com with your details.

Cheers,

Eugen. 

A Guide to Querydsl with JPA

1. Overview

Querydsl is an extensive Java framework that helps with creating and running type-safe queries in a domain-specific language similar to SQL.

In this article we’ll explore Querydsl with the Java Persistence API.

A quick side note here is that HQL for Hibernate was the first target language for Querydsl, but nowadays it supports JPA, JDO, JDBC, Lucene, Hibernate Search, MongoDB, Collections and RDFBean as backends.

2. Preparations

Let’s first add the necessary dependencies into our Maven project:

<properties>
    <querydsl.version>2.5.0</querydsl.version>
</properties>

<dependency>
    <groupId>com.querydsl</groupId>
    <artifactId>querydsl-apt</artifactId>
    <version>${querydsl.version}</version>
    <scope>provided</scope>
</dependency>

<dependency>
    <groupId>com.querydsl</groupId>
    <artifactId>querydsl-jpa</artifactId>
    <version>${querydsl.version}</version>
</dependency>

<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.6.1</version>
</dependency>

And now let’s configure the Maven APT plugin:

<project>
    <build>
    <plugins>
    ...
    <plugin>
        <groupId>com.mysema.maven</groupId>
        <artifactId>apt-maven-plugin</artifactId>
        <version>1.1.3</version>
        <executions>
        <execution>
            <goals>
                <goal>process</goal>
            </goals>
            <configuration>
                <outputDirectory>target/generated-sources</outputDirectory>
                <processor>com.querydsl.apt.jpa.JPAAnnotationProcessor</processor>
            </configuration>
        </execution>
        </executions>
    </plugin>
    ...
    </plugins>
    </build>
</project>

The JPAAnnotationProcessor finds domain types annotated with the javax.persistence.Entity annotation and generates query types for them.

3. Queries With Querydsl

Queries are constructed based on generated query types that reflect the properties of your domain types. Function and method invocations are also constructed in a fully type-safe manner.

The query paths and operations are the same in all implementations and also the Query interfaces have a common base interface.

3.1. An Entity and the Querydsl Query Type

Let’s first define a simple entity we’re going to make use of as we go through examples:

@Entity
public class Person {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @Column
    private String firstname;

    @Column
    private String surname;
    
    Person() {
    }

    public Person(String firstname, String surname) {
        this.firstname = firstname;
        this.surname = surname;
    }

    // standard getters and setters

}

Querydsl will generate a query type with the simple name QPerson into the same package as Person. QPerson can be used as a statically typed variable in Querydsl queries as a representative for the Person type.

First – QPerson has a default instance variable which can be accessed as a static field:

QPerson person = QPerson.person;

Alternatively, you can define your own named variables like this (the constructor argument is the alias used for the variable in the query):

QPerson person = new QPerson("myPerson");
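
For reference, here is a simplified sketch of what the generated QPerson looks like (the exact shape depends on the Querydsl version):

public class QPerson extends EntityPathBase<Person> {

    public static final QPerson person = new QPerson("person");

    public final NumberPath<Long> id = createNumber("id", Long.class);
    public final StringPath firstname = createString("firstname");
    public final StringPath surname = createString("surname");

    public QPerson(String variable) {
        super(Person.class, PathMetadataFactory.forVariable(variable));
    }
}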

3.2. Build Query Using JPAQuery

We can now use JPAQuery instances for our queries:

JPAQuery query = new JPAQuery(entityManager);

Note that the entityManager is a JPA EntityManager.

Let’s now retrieve all the persons with the first name “Kent” as a quick example:

QPerson person = QPerson.person;
List<Person> persons = query.from(person).where(person.firstname.eq("Kent")).list(person);

The from call defines the query source and projection, the where part defines the filter and list tells Querydsl to return all matched elements.

We can also use multiple filters:

query.from(person).where(person.firstname.eq("Kent"), person.surname.eq("Beck"));

Or:

query.from(person).where(person.firstname.eq("Kent").and(person.surname.eq("Beck")));

In native JPQL form the query would be written like this:

select person from Person as person where person.firstname = 'Kent' and person.surname = 'Beck'

If you want to combine the filters via “or” then use the following pattern:

query.from(person).where(person.firstname.eq("Kent").or(person.surname.eq("Beck")));

4. Ordering and Aggregation in Querydsl

Let’s now have a look at how ordering and aggregation work within the Querydsl library.

4.1. Ordering

We’ll start by ordering our results in descending order by the surname field:

QPerson person = QPerson.person;
List<Person> persons = query.from(person)
    .where(person.firstname.eq(firstname))
    .orderBy(person.surname.desc())
    .list(person);

4.2. Aggregation

Let’s now use a simple aggregation, as we do have a few available (Sum, Avg, Max, Min):

QPerson person = QPerson.person;    
int maxAge = query.from(person).list(person.age.max()).get(0);

4.3. Aggregation with GroupBy

The com.mysema.query.group.GroupBy class provides aggregation functionality which we can use to aggregate query results in memory.

Here’s a quick example where the results are returned as a Map with firstname as the key and max age as the value:

QPerson person = QPerson.person;   
Map<String, Integer> results = 
  query.from(person).transform(
      GroupBy.groupBy(person.firstname).as(GroupBy.max(person.age)));

5. Testing With Querydsl

Now, let’s define a DAO implementation using Querydsl – and let’s define the following search operation:

public List<Person> findPersonsByFirstnameQueryDSL(String firstname) {
    JPAQuery query = new JPAQuery(em);
    QPerson person = QPerson.person;
    return query.from(person).where(person.firstname.eq(firstname)).list(person);
}
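
The tests below will also call a findMaxAgeByName() method; here is a minimal sketch of how it might be implemented in the same DAO with the GroupBy class (assuming the Person entity also has an age field):

public Map<String, Integer> findMaxAgeByName() {
    JPAQuery query = new JPAQuery(em);
    QPerson person = QPerson.person;
    // aggregate in memory: max age per first name
    return query.from(person)
      .transform(GroupBy.groupBy(person.firstname).as(GroupBy.max(person.age)));
}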

Now let’s build a few tests using this new DAO: one using Querydsl to search for newly created Person objects (via the PersonDao class), and another testing aggregation with the GroupBy class:

@Autowired
private PersonDao personDao;

@Test
public void givenExistingPersons_whenFindingPersonByFirstName_thenFound() {
    personDao.save(new Person("Erich", "Gamma"));
    Person person = new Person("Kent", "Beck");
    personDao.save(person);
    personDao.save(new Person("Ralph", "Johnson"));

    Person personFromDb =  personDao.findPersonsByFirstnameQueryDSL("Kent").get(0);
    Assert.assertEquals(person.getId(), personFromDb.getId());
}

@Test
public void givenExistingPersons_whenFindingMaxAgeByName_thenFound() {
    personDao.save(new Person("Kent", "Gamma", 20));
    personDao.save(new Person("Ralph", "Johnson", 35));
    personDao.save(new Person("Kent", "Zivago", 30));

    Map<String, Integer> maxAge = personDao.findMaxAgeByName();
    Assert.assertEquals(2, maxAge.size());
    Assert.assertEquals(Integer.valueOf(35), maxAge.get("Ralph"));
    Assert.assertEquals(Integer.valueOf(30), maxAge.get("Kent"));
}

6. Conclusion

This tutorial illustrated how to build a JPA project using Querydsl.

The full implementation of this article can be found in the github project – this is an Eclipse-based Maven project, so it should be easy to import and run as it is.

A quick note: run a simple Maven build (mvn clean install) to generate the query types into target/generated-sources; then, if you’re using Eclipse, include that folder as a source folder of the project.

A Guide to Stored Procedures with JPA

1. Introduction

In this quick tutorial we’ll explore the use of Stored Procedures within the Java Persistence API (JPA).

2. Project Setup

2.1. Maven Setup

We first need to define the following dependencies in our pom.xml:

  • javax.javaee-api – as it includes the JPA API
  • a JPA API implementation – in this example we will use Hibernate, but EclipseLink would be an OK alternative as well
  • the MySQL JDBC driver (mysql-connector-java)
<properties>
    <jee.version>7.0</jee.version>
    <mysql.version>5.1.38</mysql.version>
    <hibernate.version>5.1.0.Final</hibernate.version>
</properties>
<dependencies>
    <dependency>
        <groupId>javax</groupId>
        <artifactId>javaee-api</artifactId>
        <version>${jee.version}</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.hibernate</groupId>
        <artifactId>hibernate-entitymanager</artifactId>
        <version>${hibernate.version}</version>
    </dependency>
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>${mysql.version}</version>
    </dependency>
</dependencies>

2.2. Persistence Unit Definition

The second step is the creation of persistence.xml file – which contains the persistence unit definitions:

<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/persistence
    http://java.sun.com/xml/ns/persistence/persistence_2_1.xsd"
    version="2.1">

    <persistence-unit name="jpa-db">
        <provider>org.hibernate.jpa.HibernatePersistenceProvider</provider>
        <class>com.baeldung.jpa.model.Car</class>
        <properties>
            <property name="javax.persistence.jdbc.driver" 
              value="com.mysql.jdbc.Driver" />
            <property name="javax.persistence.jdbc.url" 
              value="jdbc:mysql://127.0.0.1:3306/baeldung" />
            <property name="javax.persistence.jdbc.user" 
              value="baeldung" />
            <property name="javax.persistence.jdbc.password" 
              value="YourPassword" />
            <property name="hibernate.dialect" 
              value="org.hibernate.dialect.MySQLDialect" />
            <property name="hibernate.show_sql" value="true" />
        </properties>
    </persistence-unit>

</persistence>

None of the Hibernate/JDBC properties above are needed if you refer to a JNDI DataSource instead (in Java EE environments):

<jta-data-source>java:jboss/datasources/JpaStoredProcedure</jta-data-source>

2.3. Table Creation Script

Let’s now create a table (CAR) with three columns: ID, MODEL and YEAR:

CREATE TABLE `car` (
  `ID` int(10) NOT NULL AUTO_INCREMENT,
  `MODEL` varchar(50) NOT NULL,
  `YEAR` int(4) NOT NULL,
  PRIMARY KEY (`ID`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8;

The assumption was, of course, that the DB schema and permissions are already in place.
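
The tests later in this article look up cars from 2015, so let’s also seed at least one matching row (hypothetical sample data):

INSERT INTO `car` (`MODEL`, `YEAR`) VALUES ('Mustang GT', 2015);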

2.4. Stored Procedure Creation on DB

The very last step before jumping into the Java code is creating the stored procedure in our MySQL database:

DELIMITER $$
CREATE DEFINER=`root`@`localhost` PROCEDURE `FIND_CAR_BY_YEAR`(in p_year int)
begin
SELECT ID, MODEL, YEAR
    FROM CAR
    WHERE YEAR = p_year;
end
$$
DELIMITER ;

3. The JPA Stored Procedure

We are now ready to use JPA to communicate with the database and execute the stored procedure we defined.

Once we do that, we’ll also be able to iterate over the results as a List of Car.

3.1. The Car Entity

Below is the Car entity that will be mapped to the CAR database table by the entity manager.

Notice that we’re also defining our stored procedure directly on the entity by using the @NamedStoredProcedureQueries annotation:

@Entity
@Table(name = "CAR")
@NamedStoredProcedureQueries({
  @NamedStoredProcedureQuery(
    name = "findByYearProcedure", 
    procedureName = "FIND_CAR_BY_YEAR", 
    resultClasses = { Car.class }, 
    parameters = { 
        @StoredProcedureParameter(
          name = "p_year", 
          type = Integer.class, 
          mode = ParameterMode.IN) }) 
})
public class Car {

    private long id;
    private String model;
    private Integer year;

    public Car(String model, Integer year) {
        this.model = model;
        this.year = year;
    }

    public Car() {
    }

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "ID", unique = true, nullable = false, scale = 0)
    public long getId() {
        return id;
    }

    @Column(name = "MODEL")
    public String getModel() {
        return model;
    }

    @Column(name = "YEAR")
    public Integer getYear() {
        return year;
    }
    
    // standard setter methods
}

3.2. Accessing the Database

Now, with everything defined and in place, let’s write a couple of tests actually using JPA to execute the procedure.

We are going to retrieve all Cars in a given year:

public class StoredProcedureTest {

    private static EntityManagerFactory factory = null;
    private static EntityManager entityManager = null;

    @BeforeClass
    public static void init() {
        factory = Persistence.createEntityManagerFactory("jpa-db");
        entityManager = factory.createEntityManager();
    }

    @Test
    public void findCarsByYearWithNamedStored() {
        StoredProcedureQuery findByYearProcedure = 
          entityManager.createNamedStoredProcedureQuery("findByYearProcedure");
        
        StoredProcedureQuery storedProcedure = 
          findByYearProcedure.setParameter("p_year", 2015);
        
        storedProcedure.getResultList()
          .forEach(c -> Assert.assertEquals(new Integer(2015), ((Car) c).getYear())); 
    }

    @Test
    public void findCarsByYearNoNamedStored() {
        StoredProcedureQuery storedProcedure = 
          entityManager
            .createStoredProcedureQuery("FIND_CAR_BY_YEAR",Car.class)
            .registerStoredProcedureParameter(1, Integer.class, ParameterMode.IN)
            .setParameter(1, 2015);
       
        storedProcedure.getResultList()
          .forEach(c -> Assert.assertEquals(new Integer(2015), ((Car) c).getYear()));
    }

}

Notice that in the second test, we’re no longer using the stored procedure we defined on the entity. Instead, we’re defining the procedure from scratch.

That can be very useful when you need to use stored procedures but you don’t have the option to modify your entities and recompile them.

4. Conclusion

In this tutorial, we discussed using stored procedures with the Java Persistence API.

The example used in this article is available as a sample project in GitHub.

Spring Data Couchbase

1. Introduction

In this tutorial on Spring Data, we’ll discuss how to set up a persistence layer for Couchbase documents using both the Spring Data repository and template abstractions, as well as the steps required to prepare Couchbase to support these abstractions using views and/or indexes.

2. Maven Dependencies

First, we add the following Maven dependency to our pom.xml file:

<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-couchbase</artifactId>
    <version>2.0.0.RELEASE</version>
</dependency>

Note that by including this dependency, we automatically get a compatible version of the native Couchbase SDK, so we need not include it explicitly.

To add support for JSR-303 bean validation, we also include the following dependency:

<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-validator</artifactId>
    <version>5.2.4.Final</version>
</dependency>

Spring Data Couchbase supports date and time persistence via the traditional Date and Calendar classes, as well as via the Joda Time library, which we include as follows:

<dependency>
    <groupId>joda-time</groupId>
    <artifactId>joda-time</artifactId>
    <version>2.9.2</version>
</dependency>

3. Configuration

Next, we’ll need to configure the Couchbase environment by specifying one or more nodes of our Couchbase cluster and the name and password of the bucket in which we will store our documents.

3.1. Java Configuration

For Java class configuration, we simply extend the AbstractCouchbaseConfiguration class:

@Configuration
@EnableCouchbaseRepositories(basePackages={"org.baeldung.spring.data.couchbase"})
public class MyCouchbaseConfig extends AbstractCouchbaseConfiguration {

    @Override
    protected List<String> getBootstrapHosts() {
        return Arrays.asList("localhost");
    }

    @Override
    protected String getBucketName() {
        return "baeldung";
    }

    @Override
    protected String getBucketPassword() {
        return "";
    }
}

If your project requires more customization of the Couchbase environment, you may provide one by overriding the getEnvironment() method:

@Override
protected CouchbaseEnvironment getEnvironment() {
   ...
}
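
For example, here is a minimal sketch using the SDK's environment builder (the timeout value is an arbitrary illustration):

@Override
protected CouchbaseEnvironment getEnvironment() {
    // build a custom environment with a 10-second connect timeout
    return DefaultCouchbaseEnvironment.builder()
      .connectTimeout(10000)
      .build();
}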

3.2. XML Configuration

Here is the equivalent configuration in XML:

<?xml version="1.0" encoding="UTF-8"?>
<beans:beans xmlns:beans="http://www.springframework.org/schema/beans"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns="http://www.springframework.org/schema/data/couchbase
  xsi:schemaLocation="http://www.springframework.org/schema/beans
    http://www.springframework.org/schema/beans/spring-beans.xsd
    http://www.springframework.org/schema/data/couchbase
    http://www.springframework.org/schema/data/couchbase/spring-couchbase.xsd">

    <couchbase:cluster>
        <couchbase:node>localhost</couchbase:node>
    </couchbase:cluster>

    <couchbase:clusterInfo login="baeldung" password="" />

    <couchbase:bucket bucketName="baeldung" bucketPassword=""/>

    <couchbase:repositories base-package="org.baeldung.spring.data.couchbase"/>
</beans:beans>

Note: the “clusterInfo” node accepts either cluster credentials or bucket credentials and is required so that the library can determine whether or not your Couchbase cluster supports N1QL (a superset of SQL for NoSQL databases, available in Couchbase 4.0 and later).

If your project requires a custom Couchbase environment, you may provide one using the <couchbase:env/> tag.

4. Data Model

Let’s create an entity class representing the JSON document to persist. We first annotate the class with @Document, and then we annotate a String field with @Id to represent the Couchbase document key.

You may use either the @Id annotation from Spring Data or the one from the native Couchbase SDK. Be advised that if you use both @Id annotations in the same class on two different fields, then the field annotated with the Spring Data @Id annotation will take precedence and will be used as the document key.

To represent the JSON documents’ attributes, we add private member variables annotated with @Field. We use the @NotNull annotation to mark certain fields as required:

@Document
public class Person {
    @Id
    private String id;
    
    @Field
    @NotNull
    private String firstName;
    
    @Field
    @NotNull
    private String lastName;
    
    @Field
    @NotNull
    private DateTime created;
    
    @Field
    private DateTime updated;
    
    // standard getters and setters
}

Note that the property annotated with @Id merely represents the document key and is not necessarily part of the stored JSON document unless it is also annotated with @Field as in:

@Id
@Field
private String id;

If you want to name a field in the entity class differently from what is to be stored in the JSON document, simply qualify its @Field annotation, as in this example:

@Field("fname")
private String firstName;

Here is an example showing how a persisted Person document would look:

{
    "firstName": "John",
    "lastName": "Smith",
    "created": 1457193705667
    "_class": "org.baeldung.spring.data.couchbase.model.Person"
}

Notice that Spring Data automatically adds to each document an attribute containing the full class name of the entity. By default, this attribute is named “_class”, although you can override that in your Couchbase configuration class by overriding the typeKey() method.

For example, if you want to designate a field named “dataType” to hold the class names, you would add this to your Couchbase configuration class:

@Override
public String typeKey() {
    return "dataType";
}

Another popular reason to override typeKey() is if you are using a version of Couchbase Mobile that does not support fields prefixed with the underscore. In this case, you can choose your own alternate type field as in the previous example, or you can use an alternate provided by Spring:

@Override
public String typeKey() {
    // use "javaClass" instead of "_class"
    return MappingCouchbaseConverter.TYPEKEY_SYNCGATEWAY_COMPATIBLE;
}

5. Couchbase Repository

Spring Data Couchbase provides the same built-in queries and derived query mechanisms as other Spring Data modules such as JPA.

We declare a repository interface for the Person class by extending CrudRepository<Person, String> and adding a derivable query method:

public interface PersonRepository extends CrudRepository<Person, String> {
    List<Person> findByFirstName(String firstName);
}

6. N1QL Support via Indexes

If using Couchbase 4.0 or later, then by default, custom queries are processed using the N1QL engine (unless their corresponding repository methods are annotated with @View to indicate the use of backing views as described in the next section).

To add support for N1QL, you must create a primary index on the bucket. You may create the index by using the cbq command-line query processor (see your Couchbase documentation on how to launch the cbq tool for your environment) and issuing the following command:

CREATE PRIMARY INDEX ON baeldung USING GSI;

In the above command, GSI stands for global secondary index, which is a type of index particularly suited for optimization of ad hoc N1QL queries in support of OLTP systems and is the default index type if not otherwise specified.

Unlike view-based indexes, GSI indexes are not automatically replicated across all index nodes in a cluster, so if your cluster contains more than one index node, you will need to create each GSI index on each node in the cluster, and you must provide a different index name on each node.

You may also create one or more secondary indexes. When you do, Couchbase will use them as needed in order to optimize its query processing.

For example, to add an index on the firstName field, issue the following command in the cbq tool:

CREATE INDEX idx_firstName ON baeldung(firstName) USING GSI;

7. Backing Views

For each repository interface, you will need to create a Couchbase design document and one or more views in the target bucket. The design document name must be the lowerCamelCase version of the entity class name (e.g. “person”).

Regardless of which version of Couchbase Server you are running, you must create a backing view named “all” to support the built-in “findAll” repository method. Here is the map function for the “all” view for our Person class:

function (doc, meta) {
    if(doc._class == "org.baeldung.spring.data.couchbase.model.Person") {
        emit(meta.id, null);
    }
}

Custom repository methods must each have a backing view when using a Couchbase version prior to 4.0 (the use of backing views is optional in 4.0 or later).

View-backed custom methods must be annotated with @View as in the following example:

@View
List<Person> findByFirstName(String firstName);

The default naming convention for backing views is to use the lowerCamelCase version of that part of the method name following the “find” keyword (e.g. “byFirstName”).

Here is how you would write the map function for the “byFirstName” view:

function (doc, meta) {
    if(doc._class == "org.baeldung.spring.data.couchbase.model.Person"
      && doc.firstName) {
        emit(doc.firstName, null);
    }
}

You can override this naming convention and use your own view names by qualifying each @View annotation with the name of your corresponding backing view. For example:

@View("myCustomView")
List<Person> findByFirstName(String firstName);

8. Service Layer

For our service layer, we define an interface and two implementations: one using the Spring Data repository abstraction, and another using the Spring Data template abstraction. Here is our PersonService interface:

public interface PersonService {
    Person findOne(String id);
    List<Person> findAll();
    List<Person> findByFirstName(String firstName);
    
    void create(Person person);
    void update(Person person);
    void delete(Person person);
}

8.1. Repository Service

Here is an implementation using the repository we defined above:

@Service
@Qualifier("PersonRepositoryService")
public class PersonRepositoryService implements PersonService {
    
    @Autowired
    private PersonRepository repo; 

    public Person findOne(String id) {
        return repo.findOne(id);
    }

    public List<Person> findAll() {
        List<Person> people = new ArrayList<Person>();
        Iterator<Person> it = repo.findAll().iterator();
        while(it.hasNext()) {
            people.add(it.next());
        }
        return people;
    }

    public List<Person> findByFirstName(String firstName) {
        return repo.findByFirstName(firstName);
    }

    public void create(Person person) {
        person.setCreated(DateTime.now());
        repo.save(person);
    }

    public void update(Person person) {
        person.setUpdated(DateTime.now());
        repo.save(person);
    }

    public void delete(Person person) {
        repo.delete(person);
    }
}

8.2. Template Service

For the template-based implementation, we must create the backing views listed in section 7 above. The CouchbaseTemplate object is available in our Spring context and may be injected into the service class.

Here is the implementation using the template abstraction:

@Service
@Qualifier("PersonTemplateService")
public class PersonTemplateService implements PersonService {
    private static final String DESIGN_DOC = "person";
    @Autowired
    private CouchbaseTemplate template;
    
    public Person findOne(String id) {
       return template.findById(id, Person.class);
    }

    public List<Person> findAll() {
        return template.findByView(ViewQuery.from(DESIGN_DOC, "all"), Person.class);
    }

    public List<Person> findByFirstName(String firstName) {
        // restrict the view query to the requested first name
        return template.findByView(
          ViewQuery.from(DESIGN_DOC, "byFirstName").key(firstName), Person.class);
    }

    public void create(Person person) {
        person.setCreated(DateTime.now());
        template.insert(person);
    }

    public void update(Person person) {
        person.setUpdated(DateTime.now());
        template.update(person);
    }

    public void delete(Person person) {
        template.remove(person);
    }
}

9. Conclusion

We have shown how to configure a project to use the Spring Data Couchbase module and how to write a simple entity class and its repository interface. We wrote a simple service interface and provided one implementation using the repository and another implementation using the Spring Data template API.

You can view the complete source code for this tutorial in the github project.

For further information, visit the Spring Data Couchbase project site.

Spring MVC Content Negotiation

1. Overview

This article describes how to implement content negotiation in a Spring MVC project.

Generally, there are three options to determine the media type of a request:

  • Using URL suffixes (extensions) in the request (e.g. .xml or .json)
  • Using a URL parameter in the request (e.g. ?format=json)
  • Using the Accept header in the request

By default, this is the order in which the Spring content negotiation manager will try to use these three strategies. And if none of these are enabled, we can specify a fallback to a default content type.

2. Content Negotiation Strategies

Let’s start with the necessary dependencies – we are working with JSON and XML representations, so for this article we’ll use Jackson for JSON:

<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-core</artifactId>
    <version>2.7.2</version>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.7.2</version>
</dependency>

For XML support we can use either JAXB, XStream or the newer Jackson-XML support.
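
For instance, to go the Jackson XML route, we could add the following dependency (version aligned with the Jackson artifacts above):

<dependency>
    <groupId>com.fasterxml.jackson.dataformat</groupId>
    <artifactId>jackson-dataformat-xml</artifactId>
    <version>2.7.2</version>
</dependency>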

Since we have explained the use of the Accept header in an earlier article on HttpMessageConverters, let’s focus on the first two strategies in depth.

3. The URL Suffix Strategy

By default, this strategy is disabled, but the framework can check for a path extension right from the URL to determine the output content type.

Before going into configurations, let’s have a quick look at an example. We have the following simple API method implementation in a typical Spring controller:

@RequestMapping(
  value = "/employee/{id}", 
  produces = { "application/json", "application/xml" }, 
  method = RequestMethod.GET)
public @ResponseBody Employee getEmployeeById(@PathVariable long id) {
    return employeeMap.get(id);
}

Let’s invoke it making use of the JSON extension to specify the media type of the resource:

curl http://localhost:8080/spring-mvc-java/employee/10.json

Here’s what we might get back if we use a JSON extension:

{
    "id": 10,
    "name": "Test Employee",
    "contactNumber": "999-999-9999"
}

And here’s what the request – response will look like with XML:

curl http://localhost:8080/spring-mvc-java/employee/10.xml

The response body:

<employee>
    <contactNumber>999-999-9999</contactNumber>
    <id>10</id>
    <name>Test Employee</name>
</employee>

Now, if we do not use any extension, or use one that is not configured, the default content type will be returned:

curl http://localhost:8080/spring-mvc-java/employee/10

Let’s now have a look at setting up this strategy – both with Java and XML configurations.

3.1. Java Configuration

public void configureContentNegotiation(ContentNegotiationConfigurer configurer) {
    configurer.favorPathExtension(true).
    favorParameter(false).
    ignoreAcceptHeader(true).
    useJaf(false).
    defaultContentType(MediaType.APPLICATION_JSON); 
}

Let’s go over the details.

First, we’re enabling the path extensions strategy.

Then, we’re disabling the URL parameter strategy as well as the Accept header strategy – because we want to only rely on the path extension way of determining the type of the content.

We’re then turning off the Java Activation Framework; JAF can be used as a fallback mechanism to select the output format if the incoming request is not matching any of the strategies we configured. We’re disabling it because we’re going to configure JSON as the default content type.

And finally – we’re setting up JSON as the default. That means that if neither of the two strategies matches, all incoming requests will be mapped to a controller method that serves JSON.

3.2. XML Configuration

Let’s also have a quick look at the same exact configuration, only using XML:

<bean id="contentNegotiationManager" 
  class="org.springframework.web.accept.ContentNegotiationManagerFactoryBean">
    <property name="favorPathExtension" value="true" />
    <property name="favorParameter" value="false"/>
    <property name="ignoreAcceptHeader" value="true" />
    <property name="defaultContentType" value="application/json" />
    <property name="useJaf" value="false" />
</bean>

4. The URL Parameter Strategy

We’ve used path extensions in the previous section – let’s now set up Spring MVC to make use of a URL parameter instead.

We can enable this strategy by setting the value of the property favorParameter to true.

Let’s have a quick look at how that would work with our previous example:

curl "http://localhost:8080/spring-mvc-java/employee/10?mediaType=json"

And here’s what the JSON response body will be:

{
    "id": 10,
    "name": "Test Employee",
    "contactNumber": "999-999-9999"
}

If we use the xml parameter value instead, the output will be in XML form:

curl "http://localhost:8080/spring-mvc-java/employee/10?mediaType=xml"

The response body:

<employee>
    <contactNumber>999-999-9999</contactNumber>
    <id>10</id>
    <name>Test Employee</name>
</employee>

Now let’s do the configuration – again, first using Java and then XML.

4.1. Java Configuration

public void configureContentNegotiation(ContentNegotiationConfigurer configurer) {
    configurer.favorPathExtension(false).
    favorParameter(true).
    parameterName("mediaType").
    ignoreAcceptHeader(true).
    useJaf(false).
    defaultContentType(MediaType.APPLICATION_JSON).
    mediaType("xml", MediaType.APPLICATION_XML). 
    mediaType("json", MediaType.APPLICATION_JSON); 
}

Let’s read through this configuration.

First, of course the path extension and the Accept header strategies are disabled (as well as JAF).

The rest of the configuration is the same.

4.2. XML Configuration

<bean id="contentNegotiationManager" 
  class="org.springframework.web.accept.ContentNegotiationManagerFactoryBean">
    <property name="favorPathExtension" value="false" />
    <property name="favorParameter" value="true"/>
    <property name="parameterName" value="mediaType"/>
    <property name="ignoreAcceptHeader" value="true" />
    <property name="defaultContentType" value="application/json" />
    <property name="useJaf" value="false" />

    <property name="mediaTypes">
        <map>
            <entry key="json" value="application/json" />
            <entry key="xml" value="application/xml" />
        </map>
    </property>
</bean>

We can also have both strategies (extension and parameter) enabled at the same time:

public void configureContentNegotiation(ContentNegotiationConfigurer configurer) {
    configurer.favorPathExtension(true).
    favorParameter(true).
    parameterName("mediaType").
    ignoreAcceptHeader(true).
    useJaf(false).
    defaultContentType(MediaType.APPLICATION_JSON).
    mediaType("xml", MediaType.APPLICATION_XML). 
    mediaType("json", MediaType.APPLICATION_JSON); 
}

In this case, Spring will look for the path extension first; if that is not present, it will look for the URL parameter. If neither is available in the request, the default content type will be returned.

5. The Accept Header Strategy

If the Accept header strategy is enabled, Spring MVC will look at the value of the Accept header in the incoming request to determine the representation type.

We have to set the value of ignoreAcceptHeader to false to enable this approach, and we’re disabling the other two strategies so that we’re relying on the Accept header alone.

5.1. Java Configuration

public void configureContentNegotiation(ContentNegotiationConfigurer configurer) {
    configurer.favorPathExtension(true).
    favorParameter(false).
    parameterName("mediaType").
    ignoreAcceptHeader(false).
    useJaf(false).
    defaultContentType(MediaType.APPLICATION_JSON).
    mediaType("xml", MediaType.APPLICATION_XML). 
    mediaType("json", MediaType.APPLICATION_JSON); 
}

5.2. XML Configuration

<bean id="contentNegotiationManager" 
  class="org.springframework.web.accept.ContentNegotiationManagerFactoryBean">
    <property name="favorPathExtension" value="true" />
    <property name="favorParameter" value="false"/>
    <property name="parameterName" value="mediaType"/>
    <property name="ignoreAcceptHeader" value="false" />
    <property name="defaultContentType" value="application/json" />
    <property name="useJaf" value="false" />

    <property name="mediaTypes">
        <map>
            <entry key="json" value="application/json" />
            <entry key="xml" value="application/xml" />
        </map>
    </property>
</bean>

Finally, we need to switch on the content negotiation manager by plugging it into the overall configuration:

<mvc:annotation-driven content-negotiation-manager="contentNegotiationManager" />
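
With this in place, the client drives the media type via the Accept header; for example, the following request would yield the XML representation:

curl -H "Accept: application/xml" http://localhost:8080/spring-mvc-java/employee/10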

6. Conclusion

And we’re done. We looked at how content negotiation works in Spring MVC and we focused on a few examples of setting that up to use various strategies to determine the content type.

The full implementation of this article can be found in the github project – this is an Eclipse based project, so it should be easy to import and run as it is.

Jackson vs Gson: A Quick Look At Performance

1. Overview

In this quick article we’ll run some quick tests and determine the performance of two popular JSON processing libraries: Gson and Jackson.

We’ll be running benchmarks for both JSON-to-Java and Java-to-JSON conversion tasks, with both very simple and moderately complex inputs.

The performance characteristics of the libraries are similar in some cases and very different in others, and of course, picking the right one for a given application can have a drastic impact on its performance.

2. Project Setup

First, let’s add the Maven dependencies for both Gson and Jackson, as shown below.

2.1. Gson Dependency

Gson is an open-source Java library for processing JSON. It provides toJson() and fromJson() methods to convert Java objects to JSON and vice-versa:

<dependency>
    <groupId>com.google.code.gson</groupId>
    <artifactId>gson</artifactId>
    <version>2.6.2</version>
</dependency>

2.2. Jackson Dependency

Jackson is another open-source Java library for processing JSON:

<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.7.2</version>
</dependency>

3. Basic Comparison

Thread Safety
  • Jackson: Configuring an ObjectMapper instance is not synchronized or thread-safe, but once it is created, operation is fully thread-safe and synchronized (see the Jackson FAQ on thread safety).
  • Gson: A Gson instance is immutable, and therefore thread-safe.

Null References – Generated JSON
  • Jackson: Includes null references in the generated JSON, thereby generating longer strings; a class can be annotated with @JsonInclude(Include.NON_NULL), or the ObjectMapper can be configured with Include.NON_NULL.
  • Gson: Skips null reference fields, so they will not appear in the generated JSON string.

Exception Handling
  • Jackson: Requires explicit handling of the possible checked JsonParseException and JsonMappingException exceptions.
  • Gson: Does not require checked exception handling.

Additional Elements
  • Jackson: Throws an UnrecognizedPropertyException for extra JSON attributes; a class can be annotated with @JsonIgnoreProperties(ignoreUnknown = true), or the ObjectMapper can be configured to disable FAIL_ON_UNKNOWN_PROPERTIES. There are more ways of doing this – see the Jackson documentation on how to ignore unknown properties.
  • Gson: Ignores extra JSON attributes present in the string that have no matching Java fields.

GitHub URL
  • Jackson: https://github.com/FasterXML/jackson-databind
  • Gson: https://github.com/google/gson
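As the comparison suggests, Jackson can be configured to mimic Gson's defaults for nulls and unknown attributes. Here's a minimal sketch – the two ObjectMapper settings are standard Jackson 2.x API, and the rest is illustrative:

import com.fasterxml.jackson.annotation.JsonInclude;
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

ObjectMapper mapper = new ObjectMapper();

// skip null fields during serialization, as Gson does by default
mapper.setSerializationInclusion(JsonInclude.Include.NON_NULL);

// silently ignore unknown JSON attributes instead of throwing
// an UnrecognizedPropertyException
mapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);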

4. The Scenarios and the Input Data

We will use two different structures in our benchmarks: a simple object and a moderately complex object.

The simple object has one level of nesting: CustomerPortfolioSimple holds a List of Customer objects.

The more complex object has more than one level of nesting; its root is CustomerPortfolioComplex.
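Neither class is listed in full here; a minimal sketch of the simple structure, with assumed field names, could look like this:

public class Customer {
    private String name;
    private String email;

    // getters and setters
}

public class CustomerPortfolioSimple {
    private List<Customer> customers;

    // getters and setters
}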

5. Doing the Conversions

5.1. Converting Java Object to JSON String

Using Gson:

private static String generateJson(CustomerPortfolioComplex customerPortfolioComplex) {
    Gson gson = new Gson();
    return gson.toJson(customerPortfolioComplex);
}

Using Jackson:

private static String generateJson(CustomerPortfolioComplex customerPortfolioComplex)
  throws JsonProcessingException {
    ObjectMapper mapper = new ObjectMapper();
    return mapper.writeValueAsString(customerPortfolioComplex);
}

5.2. JSON String to Map

Using Gson:

private static void parseJsonToMap(String jsonStr) {
    Gson gson = new Gson();
    Map parsedMap = gson.fromJson(jsonStr, Map.class);
}

Using Jackson:

private static void parseJsonToMap(String jsonStr)
  throws JsonParseException, JsonMappingException, IOException {
    ObjectMapper mapper = new ObjectMapper();
    Map parsedMap = mapper.readValue(jsonStr, Map.class);
}

5.3. JSON String to Java Object

Using Gson:

private static void parseJsonToActualObject(String jsonStr) {
    Gson gson = new Gson();
    CustomerPortfolioComplex customerPortfolioComplex = 
      gson.fromJson(jsonStr, CustomerPortfolioComplex.class);
}

Using Jackson:

private static void parseJsonToActualObject(String jsonStr)
  throws JsonParseException, JsonMappingException, IOException {
    ObjectMapper mapper = new ObjectMapper();
    CustomerPortfolioComplex customerPortfolioComplex = 
      mapper.readValue(jsonStr, CustomerPortfolioComplex.class);
}

We can see that there is no difference in the number of lines of code written to use either library. Gson uses the Gson object to parse/transform objects and JSON strings, whereas Jackson uses ObjectMapper.

6. Benchmarking

For the benchmarks, we ran tests with both 10 and 50 iterations on JSON strings of 10 KB, 100 MB and 250 MB (a minimal sketch of the timing approach follows the list below).

Actions Performed

  • Java Object to JSON String
  • JSON String to Map
  • JSON String back to Java Object
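The measurement code itself isn't shown in this article; a minimal sketch of the timing approach – a plain System.nanoTime() loop rather than a rigorous JMH benchmark, with timeJavaToJson as a hypothetical helper name and TimeUnit from java.util.concurrent – could look like this:

private static long timeJavaToJson(CustomerPortfolioComplex portfolio, int iterations)
  throws JsonProcessingException {
    ObjectMapper mapper = new ObjectMapper();
    long start = System.nanoTime();
    for (int i = 0; i < iterations; i++) {
        mapper.writeValueAsString(portfolio);
    }
    // total elapsed time for all iterations, in milliseconds
    return TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
}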

6.1. Result Data

The tables below report the time in seconds recorded for the above operations on both the simple and complex object graphs:

[Table: Jackson vs Gson – Performance Numbers]

From the data, it is clear that for the 10 KB tests, Gson performed significantly better than Jackson. However, in the 100 MB and 250 MB tests, Jackson performed better.

6.2. Benchmark Conclusions

We ran two passes – of 10 and 50 iterations each – on file sizes of 10 KB, 100 MB and 250 MB, for both simple and complex object graphs.

For small amounts of data, GSON seems to be a winner by a good margin.

For more complex object graphs, it looks like GSON's performance dips relative to Jackson as the amount of data increases.

Additionally, for the conversion of a JSON string to a Java Map, we can see a substantial difference in the performance of the two tools: GSON competes well with Jackson, but with huge data files, Jackson performs better.

7. Conclusion

Choosing the right JSON processing library for your application will, in most cases, not be solely based on performance characteristics. However, performance is worth considering as part of that determination.

Our conclusions are simple:

  • If your application works with complex object graphs, Jackson is the stronger performer
  • If your application makes use of simple objects, Gson is extremely efficient

And, of course, this review of these libraries’ performance is simply a point-in-time comparison. There are frequent new releases of both Gson and Jackson with continuous performance improvements. If the performance of JSON parsing is critical to your application, you should continually evaluate the newest releases.

Also note that this was a rather quick and simple test, not an extensive benchmark of the two libraries.

The complete set of code used for this article can be found in a GitHub project.



Introduction to Spring Data REST



1. Overview

This article will explain the basics of Spring Data REST and show how to use it to build a simple REST API.

In general, Spring Data REST is built on top of the Spring Data project and makes it easy to build hypermedia-driven REST web services that connect to Spring Data repositories – all using HAL as the driving hypermedia type.

It takes away a lot of the manual work usually associated with such tasks and makes implementing basic CRUD functionality for web applications quite simple.

2. Maven Dependencies

The following Maven dependencies are required for our simple application:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-rest</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
</dependency>

We decided to use Spring Boot for this example, but classic Spring will also work fine. We also chose to use the H2 embedded database in order to avoid any extra setup, but the example can be applied to any database.

3. Writing the Application

We will start by writing a domain object to represent a user of our website:

@Entity
public class WebsiteUser {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private long id;

    private String name;
    private String email;

    // standard getters and setters
}

Every user has a name and an email, as well as an automatically-generated id. Now we can write a simple repository:

@RepositoryRestResource(collectionResourceRel = "users", path = "users")
public interface UserRepository extends PagingAndSortingRepository<WebsiteUser, Long> {
    List<WebsiteUser> findByName(@Param("name") String name);
}

This is an interface that allows you to perform various operations with WebsiteUser objects. We also defined a custom query that will provide a list of users based on a given name.

The @RepositoryRestResource annotation is optional and is used to customize the REST endpoint. If we decided to omit it, Spring would automatically create an endpoint at “/websiteUsers” instead of “/users“.

Finally, we will write a standard Spring Boot main class to initialize the application:

@SpringBootApplication
public class SpringDataRestApplication {
    public static void main(String[] args) {
        SpringApplication.run(SpringDataRestApplication.class, args);
    }
}

That’s it! We now have a fully-functional REST API. Let’s take a look at it in action.

4. Accessing the REST API

If we run the application and go to http://localhost:8080/ in a browser, we will receive the following JSON:

{
  "_links" : {
    "users" : {
      "href" : "http://localhost:8080/users{?page,size,sort}",
      "templated" : true
    },
    "profile" : {
      "href" : "http://localhost:8080/profile"
    }
  }
}

As you can see, there is a “/users” endpoint available, and it already has the “?page“, “?size” and “?sort” options.

There is also a standard “/profile” endpoint, which provides application metadata. It is important to note that the response is structured in a way that follows the constraints of the REST architecture style. Specifically, it provides a uniform interface and self-descriptive messages. This means that each message contains enough information to describe how to process the message.

There are no users in our application yet, so going to http://localhost:8080/users would just show an empty list of users. Let’s use curl to add a user.

$ curl -i -X POST -H "Content-Type:application/json" \
  -d '{ "name" : "test", "email" : "test@test.com" }' http://localhost:8080/users
{
  "name" : "test",
  "email" : "test@test.com",
  "_links" : {
    "self" : {
      "href" : "http://localhost:8080/users/1"
    },
    "websiteUser" : {
      "href" : "http://localhost:8080/users/1"
    }
  }
}

Let's take a look at the response headers as well:

HTTP/1.1 201 Created
Server: Apache-Coyote/1.1
Location: http://localhost:8080/users/1
Content-Type: application/hal+json;charset=UTF-8
Transfer-Encoding: chunked

You will notice that the returned content type is “application/hal+json“. HAL is a simple format that gives a consistent and easy way to hyperlink between resources in your API. The response also automatically contains the Location header, which is the address we can use to access the newly created user.

We can now access this user at http://localhost:8080/users/1:

{
  "name" : "test",
  "email" : "test@test.com",
  "_links" : {
    "self" : {
      "href" : "http://localhost:8080/users/1"
    },
    "websiteUser" : {
      "href" : "http://localhost:8080/users/1"
    }
  }
}

You can also use curl or any other REST client to issue PUT, PATCH, and DELETE requests. It is also important to note that Spring Data REST automatically follows the principles of HATEOAS. HATEOAS is one of the constraints of the REST architecture style, and it means that hypertext should be used to find your way through the API.
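For instance, renaming our user with a PATCH request might look like this (an illustrative sketch against the same /users/1 resource; the response is omitted):

$ curl -i -X PATCH -H "Content-Type:application/json" \
  -d '{ "name" : "updated" }' http://localhost:8080/users/1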

Finally, let's try to access the custom query that we wrote earlier and find all users with the name “test”. This is done by going to http://localhost:8080/users/search/findByName?name=test:

{
  "_embedded" : {
    "users" : [ {
      "name" : "test",
      "email" : "test@test.com",
      "_links" : {
        "self" : {
          "href" : "http://localhost:8080/users/1"
        },
        "websiteUser" : {
          "href" : "http://localhost:8080/users/1"
        }
      }
    } ]
  },
  "_links" : {
    "self" : {
      "href" : "http://localhost:8080/users/search/findByName?name=test"
    }
  }
}

5. Conclusion

This tutorial demonstrated the basics of creating a simple REST API with Spring Data REST. The example used in this article can be found in the linked GitHub project.



Java 8 Adoption in March 2016


Java 8 Adoption Trend

Java 8 was released on 18 March 2014 and has seen a strong adoption trend right from the start.

In October 2014, Typesafe released early numbers which had the adoption rate of the new version of the language at 27%.

And in May 2015, I ran a survey that puts that number at 38%.

The New 2016 Numbers

Well, things have certainly changed over the course of a year. I ran the “Java and Spring in 2016” survey last week and received 2253 answers from the community.

The Java 8 adoption numbers in March of 2016 are at 64.3% – a strong spike from last year's numbers.

Here’s the full breakdown of Java 6, 7 and 8 usage:

Conclusion

Java 9 is coming out a year from now, and Java 8 is clearly no longer new, having been out for two years at this point.

And the high adoption numbers (on a 2000+ sample size) definitely show a strong trend in the broader ecosystem to jump in and use Java 8 more than we saw before with Java 7.

Using JWT with Spring Security OAuth



1. Overview

In this tutorial we’ll discuss how to get our Spring Security OAuth2 implementation to make use of JSON Web Tokens.

We’re also continuing to built on top of the previous article in this OAuth series.

2. Maven Configuration

First, we need to add the spring-security-jwt dependency to our pom.xml:

<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-jwt</artifactId>
</dependency>

Note that we need to add the spring-security-jwt dependency to both the Authorization Server and the Resource Server.

3. Authorization Server

Next, we will configure our authorization server to use JwtTokenStore – as follows:

@Configuration
@EnableAuthorizationServer
public class OAuth2AuthorizationServerConfig extends AuthorizationServerConfigurerAdapter {
    @Override
    public void configure(AuthorizationServerEndpointsConfigurer endpoints) throws Exception {
        endpoints.tokenStore(tokenStore())
                 .accessTokenConverter(accessTokenConverter())
                 .authenticationManager(authenticationManager);
    }

    @Bean
    public TokenStore tokenStore() {
        return new JwtTokenStore(accessTokenConverter());
    }

    @Bean
    public JwtAccessTokenConverter accessTokenConverter() {
        JwtAccessTokenConverter converter = new JwtAccessTokenConverter();
        converter.setSigningKey("123");
        return converter;
    }

    @Bean
    @Primary
    public DefaultTokenServices tokenServices() {
        DefaultTokenServices defaultTokenServices = new DefaultTokenServices();
        defaultTokenServices.setTokenStore(tokenStore());
        defaultTokenServices.setSupportRefreshToken(true);
        return defaultTokenServices;
    }
}

Note that we used a symmetric key in our JwtAccessTokenConverter to sign our tokens – which means we will need to use the same exact key for the Resource Server as well.

4. Resource Server

Now, let’s take a look at our Resource Server configuration – which is very similar to the config of the Authorization Server:

@Configuration
@EnableResourceServer
public class OAuth2ResourceServerConfig extends ResourceServerConfigurerAdapter {
    @Override
    public void configure(ResourceServerSecurityConfigurer config) {
        config.tokenServices(tokenServices());
    }

    @Bean
    public TokenStore tokenStore() {
        return new JwtTokenStore(accessTokenConverter());
    }

    @Bean
    public JwtAccessTokenConverter accessTokenConverter() {
        JwtAccessTokenConverter converter = new JwtAccessTokenConverter();
        converter.setSigningKey("123");
        return converter;
    }

    @Bean
    @Primary
    public DefaultTokenServices tokenServices() {
        DefaultTokenServices defaultTokenServices = new DefaultTokenServices();
        defaultTokenServices.setTokenStore(tokenStore());
        return defaultTokenServices;
    }
}

Keep in mind that we’re defining these two servers as entirely separate and independently deployable. That’s the reason we need to declare some of the same beans again here, in the new configuration.

5. Custom Claims in the Token

Let’s now set up some infrastructure to be able to add a few custom claims in the Access Token. The standard claims provided by the framework are all well and good, but most of the time we’ll need some extra information in the token to utilize on the client side.

We’ll define a TokenEnhancer to customize our Access Token with these additional claims.

In the following example, we will add an extra field “organization” to our Access Token – with this CustomTokenEnhancer:

public class CustomTokenEnhancer implements TokenEnhancer {
    @Override
    public OAuth2AccessToken enhance(
     OAuth2AccessToken accessToken, 
     OAuth2Authentication authentication) {
        Map<String, Object> additionalInfo = new HashMap<>();
        additionalInfo.put("organization", authentication.getName() + randomAlphabetic(4));
        ((DefaultOAuth2AccessToken) accessToken).setAdditionalInformation(additionalInfo);
        return accessToken;
    }
}

Then, we’ll wire that into our Authorization Server configuration – as follows:

@Override
public void configure(AuthorizationServerEndpointsConfigurer endpoints) throws Exception {
    TokenEnhancerChain tokenEnhancerChain = new TokenEnhancerChain();
    tokenEnhancerChain.setTokenEnhancers(
      Arrays.asList(tokenEnhancer(), accessTokenConverter()));

    endpoints.tokenStore(tokenStore())
             .tokenEnhancer(tokenEnhancerChain)
             .authenticationManager(authenticationManager);
}

@Bean
public TokenEnhancer tokenEnhancer() {
    return new CustomTokenEnhancer();
}

With this new configuration up and running – here's what a token payload would look like:

{
    "user_name": "john",
    "scope": [
        "foo",
        "read",
        "write"
    ],
    "organization": "johnIiCh",
    "exp": 1458126622,
    "authorities": [
        "ROLE_USER"
    ],
    "jti": "e0ad1ef3-a8a5-4eef-998d-00b26bc2c53f",
    "client_id": "fooClientIdPassword"
}

5.1. Use the Access Token in the JS Client

Finally, we’ll want to make use of the token information over in our AngualrJS client application. We’ll use the angular-jwt library for that.

So what we’re going to do is we’re going to make use of the “organization” claim in our index.html:

<p class="navbar-text navbar-right">{{organization}}</p>

<script type="text/javascript" 
  src="https://cdn.rawgit.com/auth0/angular-jwt/master/dist/angular-jwt.js">
</script>

<script>
var app = angular.module('myApp', ["ngResource","ngRoute", "ngCookies", "angular-jwt"]);

app.controller('mainCtrl', function($scope, $cookies, jwtHelper,...) {
    $scope.organization = "";

    function getOrganization() {
        var token = $cookies.get("access_token");
        var payload = jwtHelper.decodeToken(token);
        $scope.organization = payload.organization;
    }
    ...
});

6. Asymmetric KeyPair

In our previous configuration we used symmetric keys to sign our token:

@Bean
public JwtAccessTokenConverter accessTokenConverter() {
    JwtAccessTokenConverter converter = new JwtAccessTokenConverter();
    converter.setSigningKey("123");
    return converter;
}

We can also use asymmetric keys (Public and Private keys) to do the signing process.

6.1. Generate JKS Java KeyStore File

Let’s first generate the keys – and more specifically a .jks file – using the command line tool keytool:

keytool -genkeypair -alias mytest 
                    -keyalg RSA 
                    -keypass mypass 
                    -keystore mytest.jks 
                    -storepass mypass

The command will generate a file called mytest.jks which contains our keys – the Public and Private keys.

Also make sure keypass and storepass are the same.

6.2. Export Public Key

Next, we need to export our Public key from the generated JKS. We can use the following command to do so:

keytool -list -rfc -keystore mytest.jks | openssl x509 -inform pem -pubkey

A sample response will look like this:

-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAgIK2Wt4x2EtDl41C7vfp
OsMquZMyOyteO2RsVeMLF/hXIeYvicKr0SQzVkodHEBCMiGXQDz5prijTq3RHPy2
/5WJBCYq7yHgTLvspMy6sivXN7NdYE7I5pXo/KHk4nz+Fa6P3L8+L90E/3qwf6j3
DKWnAgJFRY8AbSYXt1d5ELiIG1/gEqzC0fZmNhhfrBtxwWXrlpUDT0Kfvf0QVmPR
xxCLXT+tEe1seWGEqeOLL5vXRLqmzZcBe1RZ9kQQm43+a9Qn5icSRnDfTAesQ3Cr
lAWJKl2kcWU1HwJqw+dZRSZ1X4kEXNMyzPdPBbGmU6MHdhpywI7SKZT7mX4BDnUK
eQIDAQAB
-----END PUBLIC KEY-----
-----BEGIN CERTIFICATE-----
MIIDCzCCAfOgAwIBAgIEGtZIUzANBgkqhkiG9w0BAQsFADA2MQswCQYDVQQGEwJ1
czELMAkGA1UECBMCY2ExCzAJBgNVBAcTAmxhMQ0wCwYDVQQDEwR0ZXN0MB4XDTE2
MDMxNTA4MTAzMFoXDTE2MDYxMzA4MTAzMFowNjELMAkGA1UEBhMCdXMxCzAJBgNV
BAgTAmNhMQswCQYDVQQHEwJsYTENMAsGA1UEAxMEdGVzdDCCASIwDQYJKoZIhvcN
AQEBBQADggEPADCCAQoCggEBAICCtlreMdhLQ5eNQu736TrDKrmTMjsrXjtkbFXj
Cxf4VyHmL4nCq9EkM1ZKHRxAQjIhl0A8+aa4o06t0Rz8tv+ViQQmKu8h4Ey77KTM
urIr1zezXWBOyOaV6Pyh5OJ8/hWuj9y/Pi/dBP96sH+o9wylpwICRUWPAG0mF7dX
eRC4iBtf4BKswtH2ZjYYX6wbccFl65aVA09Cn739EFZj0ccQi10/rRHtbHlhhKnj
iy+b10S6ps2XAXtUWfZEEJuN/mvUJ+YnEkZw30wHrENwq5QFiSpdpHFlNR8CasPn
WUUmdV+JBFzTMsz3TwWxplOjB3YacsCO0imU+5l+AQ51CnkCAwEAAaMhMB8wHQYD
VR0OBBYEFOGefUBGquEX9Ujak34PyRskHk+WMA0GCSqGSIb3DQEBCwUAA4IBAQB3
1eLfNeq45yO1cXNl0C1IQLknP2WXg89AHEbKkUOA1ZKTOizNYJIHW5MYJU/zScu0
yBobhTDe5hDTsATMa9sN5CPOaLJwzpWV/ZC6WyhAWTfljzZC6d2rL3QYrSIRxmsp
/J1Vq9WkesQdShnEGy7GgRgJn4A8CKecHSzqyzXulQ7Zah6GoEUD+vjb+BheP4aN
hiYY1OuXD+HsdKeQqS+7eM5U7WW6dz2Q8mtFJ5qAxjY75T0pPrHwZMlJUhUZ+Q2V
FfweJEaoNB9w9McPe1cAiE+oeejZ0jq0el3/dJsx3rlVqZN+lMhRJJeVHFyeb3XF
lLFCUGhA7hxn2xf3x1JW
-----END CERTIFICATE-----

We take only our Public key and copy it to our resource server's src/main/resources/public.txt:

-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAgIK2Wt4x2EtDl41C7vfp
OsMquZMyOyteO2RsVeMLF/hXIeYvicKr0SQzVkodHEBCMiGXQDz5prijTq3RHPy2
/5WJBCYq7yHgTLvspMy6sivXN7NdYE7I5pXo/KHk4nz+Fa6P3L8+L90E/3qwf6j3
DKWnAgJFRY8AbSYXt1d5ELiIG1/gEqzC0fZmNhhfrBtxwWXrlpUDT0Kfvf0QVmPR
xxCLXT+tEe1seWGEqeOLL5vXRLqmzZcBe1RZ9kQQm43+a9Qn5icSRnDfTAesQ3Cr
lAWJKl2kcWU1HwJqw+dZRSZ1X4kEXNMyzPdPBbGmU6MHdhpywI7SKZT7mX4BDnUK
eQIDAQAB
-----END PUBLIC KEY-----

6.3. Maven Configuration

Next, we don’t want the JKS file to be picked up by the maven filtering process – so we’ll make sure to exclude it in the pom.xml:

<build>
    <resources>
        <resource>
            <directory>src/main/resources</directory>
            <filtering>true</filtering>
            <excludes>
                <exclude>*.jks</exclude>
            </excludes>
        </resource>
    </resources>
    ...
</build>

6.4. Authorization Server

Now, we will configure JwtAccessTokenConverter to use our KeyPair from mytest.jks – as follows:

@Bean
public JwtAccessTokenConverter accessTokenConverter() {
    JwtAccessTokenConverter converter = new JwtAccessTokenConverter();
    KeyStoreKeyFactory keyStoreKeyFactory = 
      new KeyStoreKeyFactory(new ClassPathResource("mytest.jks"), "mypass".toCharArray());
    converter.setKeyPair(keyStoreKeyFactory.getKeyPair("mytest"));
    return converter;
}

6.5. Resource Server

Finally, we need to configure our resource server to use the Public key – as follows:

@Bean
public JwtAccessTokenConverter accessTokenConverter() {
    JwtAccessTokenConverter converter = new JwtAccessTokenConverter();
    Resource resource = new ClassPathResource("public.txt");
    String publicKey = null;
    try {
        publicKey = IOUtils.toString(resource.getInputStream());
    } catch (final IOException e) {
        throw new RuntimeException(e);
    }
    converter.setVerifierKey(publicKey);
    return converter;
}

7. Conclusion

In this quick article we focused on setting up our Spring Security OAuth2 project to use JSON Web Tokens.

The full implementation of this tutorial can be found in the GitHub project – this is an Eclipse-based project, so it should be easy to import and run as it is.


PubSub Messaging with Spring Data Redis



1. Overview

In this second article from the series exploring Spring Data Redis, we’ll have a look at the pub/sub message queues.

In Redis, publishers are not programmed to send their messages to specific subscribers. Rather, published messages are characterized into channels, without knowledge of what (if any) subscribers there may be.

Similarly, subscribers express interest in one or more topics and only receive messages that are of interest, without knowledge of what (if any) publishers there are.

This decoupling of publishers and subscribers can allow for greater scalability and a more dynamic network topology.

2. Redis Configuration

Let’s start adding the configuration which is required for the message queues.

First, we’ll define a MessageListenerAdapter bean which contains a custom implementation of the MessageListener interface called RedisMessageSubscriber. This bean acts as a subscriber in the pub-sub messaging model:

@Bean
MessageListenerAdapter messageListener() { 
    return new MessageListenerAdapter(new RedisMessageSubscriber());
}

RedisMessageListenerContainer is a class provided by Spring Data Redis which provides asynchronous behavior for Redis message listeners. It is used internally and, according to the Spring Data Redis documentation, “handles the low level details of listening, converting and message dispatching”:

@Bean
RedisMessageListenerContainer redisContainer() {
    RedisMessageListenerContainer container 
      = new RedisMessageListenerContainer(); 
    container.setConnectionFactory(jedisConnectionFactory()); 
    container.addMessageListener(messageListener(), topic()); 
    return container; 
}

We will also create a bean using a custom-built MessagePublisher interface and a RedisMessagePublisher implementation. This way, we can have a generic message-publishing API, and have the Redis implementation take a redisTemplate and topic as constructor arguments:

@Bean
MessagePublisher redisPublisher() { 
    return new RedisMessagePublisher(redisTemplate(), topic());
}

Finally, we’ll set up a topic to which the publisher will send messages, and the subscriber will receive them:

@Bean
ChannelTopic topic() {
    return new ChannelTopic("messageQueue");
}

3. Publishing Messages

3.1. Defining the MessagePublisher Interface

Spring Data Redis does not provide a MessagePublisher interface to be used for message distribution, so we can define a custom interface which will use redisTemplate in its implementation:

public interface MessagePublisher {
    void publish(String message);
}

3.2. RedisMessagePublisher Implementation

Our next step is to provide an implementation of the MessagePublisher interface, adding message publishing details and using the functions in redisTemplate.

The template contains a very rich set of functions for a wide range of operations – out of which convertAndSend is capable of sending a message to a queue through a topic:

public class RedisMessagePublisher implements MessagePublisher {

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;
    @Autowired
    private ChannelTopic topic;

    public RedisMessagePublisher() {
    }

    public RedisMessagePublisher(
      RedisTemplate<String, Object> redisTemplate, ChannelTopic topic) {
      this.redisTemplate = redisTemplate;
      this.topic = topic;
    }

    public void publish(String message) {
        redisTemplate.convertAndSend(topic.getTopic(), message);
    }
}

As you can see, the publisher implementation is straightforward. It uses the convertAndSend() method of the redisTemplate to format and publish the given message to the configured topic.

A topic implements publish and subscribe semantics: when a message is published, it goes to all the subscribers who are registered to listen on that topic.

4. Subscribing to Messages

RedisMessageSubscriber implements the Spring Data Redis-provided MessageListener interface:

@Service
public class RedisMessageSubscriber implements MessageListener {

    public static List<String> messageList = new ArrayList<String>();

    public void onMessage(Message message, byte[] pattern) {
        messageList.add(message.toString());
        System.out.println("Message received: " + message.toString());
    }
}

Note that there is a second parameter called pattern, which we have not used in this example. The Spring Data Redis documentation states that this parameter represents the, “pattern matching the channel (if specified)”, but that it can be null.

5. Sending and Receiving Messages

Now we’ll put it all together. Let’s create a message and then publish it using the RedisMessagePublisher:

String message = "Message " + UUID.randomUUID();
redisMessagePublisher.publish(message);

When we call publish(message), the content is sent to Redis, where it is routed to the message queue topic defined in our publisher. Then it is distributed to the subscribers of that topic.

You may already have noticed that RedisMessageSubscriber is a listener, which registers itself to the queue for retrieval of messages.

On the arrival of the message, the subscriber's onMessage() method is triggered.

In our example, we can verify that we’ve received messages that have been published by checking the messageList in our RedisMessageSubscriber:

RedisMessageSubscriber.messageList.get(0).contains(message)
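Putting the pieces together, a minimal JUnit sketch of this flow might look as follows – RedisConfig stands in for whatever @Configuration class defines the beans above (an assumed name), and the short sleep gives the listener container time to dispatch the message:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = RedisConfig.class)
public class RedisMessageListenerIntegrationTest {

    @Autowired
    private MessagePublisher redisMessagePublisher;

    @Test
    public void whenMessagePublished_thenSubscriberReceivesIt() throws Exception {
        String message = "Message " + UUID.randomUUID();
        redisMessagePublisher.publish(message);

        // dispatching is asynchronous, so give the listener container a moment
        Thread.sleep(100);
        assertTrue(RedisMessageSubscriber.messageList.get(0).contains(message));
    }
}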

6. Conclusion

In this article, we examined a pub/sub message queue implementation using Spring Data Redis.

The implementation of the above example can be found in a GitHub project.



Elasticsearch Queries with Spring Data



1. Introduction

In a previous article, we demonstrated how to configure and use Spring Data Elasticsearch for a project. In this article we will examine several query types offered by Elasticsearch and we’ll also talk about field analyzers and their impact on search results.

2. Analyzers

All stored string fields are, by default, processed by an analyzer. An analyzer consists of one tokenizer and several token filters, and is usually preceded by one or more character filters.

The default analyzer splits the string by common word separators (such as spaces or punctuation) and puts every token in lowercase. It also ignores common English words.

Elasticsearch can also be configured to regard a field as analyzed and not-analyzed at the same time.

For example, in an Article class, suppose we store the title field as a standard analyzed field. The same field with the suffix verbatim will be stored as a not-analyzed field:

@MultiField(
  mainField = @Field(type = FieldType.String),
  otherFields = {
      @NestedField(index = FieldIndex.not_analyzed, dotSuffix = "verbatim", type = FieldType.String)
  }
)
private String title;

Here, we apply the @MultiField annotation to tell Spring Data that we would like this field to be indexed in several ways. The main field will use the name title and will be analyzed according to the rules described above.

But we also provide a second annotation, @NestedField, which describes an additional indexing of the title field. We use FieldIndex.not_analyzed to indicate that we do not want to use an analyzer when performing the additional indexing of the field, and that this value should be stored using a nested field with the suffix verbatim.

2.1. Analyzed Fields

Let’s look at an example. Suppose an article with the title “Spring Data Elasticsearch” is added to our index. The default analyzer will break up the string at the space characters and produce lowercase tokens: “spring“, “data“, and “elasticsearch“.

Now we may use any combination of these terms to match a document:

SearchQuery searchQuery = new NativeSearchQueryBuilder()
  .withQuery(matchQuery("title", "elasticsearch data"))
  .build();

2.2. Non-analyzed Fields

A non-analyzed field is not tokenized, so it can only be matched as a whole when using match or term queries:

SearchQuery searchQuery = new NativeSearchQueryBuilder()
  .withQuery(matchQuery("title.verbatim", "Second Article About Elasticsearch"))
  .build();

Using a match query, we may only search by the full title, which is also case-sensitive.
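For an exact, case-sensitive lookup we may equally use a term query against the non-analyzed field – a short sketch using the same QueryBuilders API as the match queries above:

SearchQuery searchQuery = new NativeSearchQueryBuilder()
  .withQuery(termQuery("title.verbatim", "Second Article About Elasticsearch"))
  .build();

Since the verbatim field holds a single untouched token, the term query matches only the complete original title.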

3. Match Query

A match query accepts text, numbers and dates.

There are three types of “match” query:

  • boolean
  • phrase
  • phrase_prefix

In this section we will explore the boolean match query.

3.1. Matching with Boolean Operators

boolean is the default type of a match query; you can specify which boolean operator to use (or is the default):

SearchQuery searchQuery = new NativeSearchQueryBuilder()
  .withQuery(matchQuery("title","Search engines").operator(AND))
  .build();
List<Article> articles = getElasticsearchTemplate()
  .queryForList(searchQuery, Article.class);

This query would return an article with the title “Search engines”, since we specify two terms from the title with the and operator. But what will happen if we search with the default (or) operator when only one of the terms matches?

SearchQuery searchQuery = new NativeSearchQueryBuilder()
  .withQuery(matchQuery("title", "Engines Solutions"))
  .build();
List<Article> articles = getElasticsearchTemplate()
  .queryForList(searchQuery, Article.class);
assertEquals(1, articles.size());
assertEquals("Search engines", articles.get(0).getTitle());

The “Search engines” article is still matched, but it will have a lower score because not all of the terms matched.

The scores of the individual matching terms add up to the total score of each resulting document.

There may be situations in which a document containing a rare term entered in the query will rank higher than a document which contains several common terms.

3.2. Fuzziness

When the user makes a typo in a word, it is still possible to match it with a search by specifying a fuzziness parameter, which allows inexact matching.

For string fields, fuzziness means the edit distance: the number of one-character changes that need to be made to one string to make it the same as another string.

SearchQuery searchQuery = new NativeSearchQueryBuilder()
  .withQuery(matchQuery("title", "spring date elasticsearch")
  .operator(AND)
  .fuzziness(Fuzziness.ONE)
  .prefixLength(3))
  .build();

The prefix_length parameter is used to improve performance. In this case, we require that the first three characters should match exactly, which reduces the number of possible combinations.

4. Phrase Search

Phrase search is stricter, although you can control it with the slop parameter. This parameter tells the phrase query how far apart terms are allowed to be while still considering the document a match.

In other words, it represents the number of times you need to move a term in order to make the query and document match:

SearchQuery searchQuery = new NativeSearchQueryBuilder()
  .withQuery(matchPhraseQuery("title", "spring elasticsearch").slop(1))
  .build();

Here the query will match the document with the title “Spring Data Elasticsearch” because we set the slop to one.

5. Multi Match Query

When you want to search in multiple fields, you can use QueryBuilders#multiMatchQuery(), where you specify all the fields to match:

SearchQuery searchQuery = new NativeSearchQueryBuilder()
  .withQuery(multiMatchQuery("tutorial")
    .field("title")
    .field("tags")
    .type(MultiMatchQueryBuilder.Type.BEST_FIELDS))
  .build();

Here we search the title and tags fields for a match.

Notice that here we use the “best fields” scoring strategy. It will take the maximum score among the fields as the document score.

6. Aggregations

In our Article class we have also defined a tags field, which is non-analyzed. We could easily create a tag cloud by using an aggregation.

Remember that, because the field is non-analyzed, the tags will not be tokenized:

TermsBuilder aggregation = AggregationBuilders.terms("top_tags")
  .field("tags")
  .order(Terms.Order.aggregation("_count", false));
SearchResponse response = client.prepareSearch("blog")
  .setTypes("article")
  .addAggregation(aggregation)
  .execute().actionGet();

Map<String, Aggregation> results = response.getAggregations().asMap();
StringTerms topTags = (StringTerms) results.get("top_tags");

List<String> keys = topTags.getBuckets()
  .stream()
  .map(b -> b.getKey())
  .collect(toList());
assertEquals(asList("elasticsearch", "spring data", "search engines", "tutorial"), keys);

7. Summary

In this article we discussed the difference between analyzed and non-analyzed fields, and how this distinction affects search.

We also learned about several types of queries provided by Elasticsearch, such as the match query, phrase match query, full-text search query, and boolean query.

Elasticsearch provides many other types of queries, such as geo queries, script queries and compound queries. You can read about them in the Elasticsearch documentation and explore the Spring Data Elasticsearch API in order to use these queries in your code.

You can find a project containing the examples used in this article in the GitHub repository.



Java Web Weekly, Issue 116



At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> Reactor Core 2.5 becomes a unified Reactive Foundation on Java 8 [spring.io]

The focus and the driving force behind Spring 5 is clearly going to be reactive programming.

So, if you’re doing Spring work, definitely have a quick read and see how the ecosystem is growing and what you can do with the new infrastructure.

>> Java app monitoring with ELK – Part I [balamaci.ro]

I’ve been using the ELK stack for logging visualization and analysis for over two years now and it has definitely been one of the nicer and more useful tools I’m working with. There’s so much you can do with clean, rich logging info and a good analysis tool on top of all that data.

>> Jigsaw Finally Arrives in JDK 9 [infoq.com]

Modularity finally made it into the JDK 9 builds – time to play.

>> Caching de luxe with Spring and Guava [codecentric.de]

A long, slightly weird but ultimately interesting read on actually using caching in real-world scenarios, not just setting it up in a toy project.

>> Ceylon Might Just be the Only (JVM) Language that Got Nulls Right [jooq.org]

A nice way Ceylon handles and works with nulls. If you’re a language aficionado and you haven’t done any work in Ceylon before, definitely have a read.

>> Java EE 8 MVC: Working with bean parameters [mscharhag.com]

The exploration of Java EE 8 goes on, this time with mapping bean parameters in an MVC style application.

>> When to write setters [giorgiosironi.com]

A back-to-basic kind of writeup with the benefit of real-world experience.

>> Adding Type Inference to Java: Good or Evil? [beyondjava.net]

>> Java May Adopt (Really Useful) Type Inference at Last [beyondjava.net]

A bit of a deeper look into the newly proposed JEP that may add type inference to the Java language.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

>> The Most Important Code Metrics You’ve Never Heard Of [daedtech.com]

Developer productivity is, unsurprisingly, very difficult to measure. Putting that aside though – definitely keep track of some of the metrics this writeup talks about – they're highly useful when determining the overall health of your codebase.

>> Trackers [jacquesmattheij.com]

A concerning (and funny) read about the tracking and data driven culture we’re all living in.

>> 10 Lessons from 10 Years of Amazon Web Services [allthingsdistributed.com] and >> Ten Years in the AWS Cloud – How Time Flies! [aws.amazon.com]

Ten years of running one of the more complex, highly distributed systems out there yielded some very interesting lessons.

>> Impressions from Voxxed Days Bucharest 2016 [vladmihalcea.com]

This was definitely a well put together event and I enjoyed speaking about Event Sourcing and meeting a whole lot of cool people.

>> The First Winter [mdswanson.com]

A quick writeup but rich in takeaways. These little things do add up to a good culture.

>> Writing Tests Doesn’t Have to Be Extra Work [daedtech.com]

Done right, tests can and will definitely speed you up – once you get through the productivity hit that does usually occur in the first few weeks after picking up TDD.

>> Firing People [zachholman.com]

A long and personal read that I'm including in the review just because I enjoy Zach's writing.

>> The Trouble with Career Sites [daedtech.com]

And since the last article was about firing people, let’s now look at hiring and be brutally honest about the process and what works and doesn’t work.

Also worth reading:

3. Comics

And my favorite Dilberts of the week (absolutely hilarious):

>> BUILD AN ARK! [dilbert.com]

>> An internet hoax [dilbert.com]

>> It’s sort of an abusive relationship? [dilbert.com]

 

4. Pick of the Week

>> How GitHub Works: Be Asynchronous [zachholman.com]

 



Spring and Spring Boot Adoption in March 2016


The 2015 Numbers

Spring 4 came out in December 2013 and it’s been slowly ramping up ever since.

In May 2015, I ran a survey that puts Spring 4 adoption at 65% and Spring Boot adoption at 34%.

The New 2016 Spring Numbers

Last week I wrapped up the new “Java and Spring in 2016” survey and received 2253 answers from the community.

In March of 2016, 81.1% of Spring users are using Spring 4 – so definitely a lot of growth since last year's 65%:

The Spring Boot Numbers

Spring Boot is even more interesting, growing from 34% last year to 52.8%:

Conclusion

The new numbers are quite interesting, especially the Spring Boot ones, which clearly show that Boot reached critical mass as it broke the 50% mark.

Out of the full 2255 participants, 388 (17.2%) voted as not using Spring – but keep in mind that my audience is skewed towards Spring developers.

And of course Spring 5 is less than a year away and 4.3 is even closer – so things are going to be just as fast-paced in 2016.

More Jackson Annotations



1. Overview

This article covers some additional annotations that were not covered in the previous article, A Guide to Jackson Annotations – we will go through seven of these.

2. @JsonIdentityReference

@JsonIdentityReference is used for customization of references to objects that will be serialized as object identities instead of full POJOs. It works in collaboration with @JsonIdentityInfo to force the use of object identities in every serialization, as opposed to all-but-the-first-time when @JsonIdentityReference is absent. This pair of annotations is most helpful when dealing with circular dependencies among objects. Please refer to section 4 of the Jackson – Bidirectional Relationship article for more information.

In order to demonstrate the use of @JsonIdentityReference, we will define two different bean classes, without and with this annotation.

The bean without @JsonIdentityReference:

@JsonIdentityInfo(generator = ObjectIdGenerators.PropertyGenerator.class, property = "id")
public class BeanWithoutIdentityReference {
    private int id;
    private String name;

    // constructor, getters and setters
}

For the bean using @JsonIdentityReference, we choose the id property to be the object identity:

@JsonIdentityInfo(generator = ObjectIdGenerators.PropertyGenerator.class, property = "id")
@JsonIdentityReference(alwaysAsId = true)
public class BeanWithIdentityReference {
    private int id;
    private String name;
    
    // constructor, getters and setters
}

In the first case, where @JsonIdentityReference is absent, that bean is serialized with full details on its properties:

BeanWithoutIdentityReference bean 
  = new BeanWithoutIdentityReference(1, "Bean Without Identity Reference Annotation");
String jsonString = mapper.writeValueAsString(bean);

The output of the serialization above:

{
    "id": 1,
    "name": "Bean Without Identity Reference Annotation"
}

When @JsonIdentityReference is used, the bean is serialized as a simple identity instead:

BeanWithIdentityReference bean 
  = new BeanWithIdentityReference(1, "Bean With Identity Reference Annotation");
String jsonString = mapper.writeValueAsString(bean);
assertEquals("1", jsonString);

3. @JsonAppend

The @JsonAppend annotation is used to add virtual properties to an object in addition to regular ones when that object is serialized. This is necessary when we want to add supplementary information directly into a JSON string, rather than changing the class definition. For instance, it might be more convenient to insert the version metadata of a bean to the corresponding JSON document than to provide it with an additional property.

Assume we have a bean without @JsonAppend as follows:

public class BeanWithoutAppend {
    private int id;
    private String name;

    // constructor, getters and setters
}

A test will confirm that in the absence of the @JsonAppend annotation, the serialization output does not contain information on the supplementary version property, despite the fact that we attempt to add it via the ObjectWriter object:

BeanWithoutAppend bean = new BeanWithoutAppend(2, "Bean Without Append Annotation");
ObjectWriter writer 
  = mapper.writerFor(BeanWithoutAppend.class).withAttribute("version", "1.0");
String jsonString = writer.writeValueAsString(bean);

The serialization output:

{
    "id": 2,
    "name": "Bean Without Append Annotation"
}

Now, let’s say we have a bean annotated with @JsonAppend:

@JsonAppend(attrs = { 
  @JsonAppend.Attr(value = "version") 
})
public class BeanWithAppend {
    private int id;
    private String name;

    // constructor, getters and setters
}

A similar test to the previous one will verify that when the @JsonAppend annotation is applied, the supplementary property is included after serialization:

BeanWithAppend bean = new BeanWithAppend(2, "Bean With Append Annotation");
ObjectWriter writer 
  = mapper.writerFor(BeanWithAppend.class).withAttribute("version", "1.0");
String jsonString = writer.writeValueAsString(bean);

The output of that serialization shows that the version property has been added:

{
    "id": 2,
    "name": "Bean With Append Annotation",
    "version": "1.0"
}

4. @JsonNaming

The @JsonNaming annotation is used to choose the naming strategies for properties in serialization, overriding the default. Using the value element, we can specify any strategy, including custom ones.

In addition to the default, which is LOWER_CAMEL_CASE (e.g. lowerCamelCase), the Jackson library provides us with four other built-in property naming strategies for convenience:

  • KEBAB_CASE: Name elements are separated by hyphens, e.g. kebab-case.
  • LOWER_CASE: All letters are lowercase with no separators, e.g. lowercase.
  • SNAKE_CASE: All letters are lowercase with underscores as separators between name elements, e.g. snake_case.
  • UPPER_CAMEL_CASE: All name elements, including the first one, start with a capitalized letter, followed by lowercase ones and there are no separators, e.g. UpperCamelCase.

This example will illustrate the way to serialize properties using snake case names, where a property named beanName is serialized as bean_name.

Given a bean definition:

@JsonNaming(PropertyNamingStrategy.SnakeCaseStrategy.class)
public class NamingBean {
    private int id;
    private String beanName;

    // constructor, getters and setters
}

The test below demonstrates that the specified naming rule works as required:

NamingBean bean = new NamingBean(3, "Naming Bean");
String jsonString = mapper.writeValueAsString(bean);        
assertThat(jsonString, containsString("bean_name"));

The jsonString variable contains following data:

{
    "id": 3,
    "bean_name": "Naming Bean"
}

5. @JsonPropertyDescription

The Jackson library is able to create JSON schemas for Java types with the help of a separate module called JSON Schema. The schema is useful when we want to specify expected output when serializing Java objects, or to validate a JSON document before deserialization.

The @JsonPropertyDescription annotation allows a human readable description to be added to the created JSON schema by providing the description field.

This section makes use of the bean declared below to demonstrate the capabilities of @JsonPropertyDescription:

public class PropertyDescriptionBean {
    private int id;
    @JsonPropertyDescription("This is a description of the name property")
    private String name;

    // getters and setters
}

The method for generating a JSON schema with the addition of the description field is shown below:

SchemaFactoryWrapper wrapper = new SchemaFactoryWrapper();
mapper.acceptJsonFormatVisitor(PropertyDescriptionBean.class, wrapper);
JsonSchema jsonSchema = wrapper.finalSchema();
String jsonString = mapper.writeValueAsString(jsonSchema);
assertThat(jsonString, containsString("This is a description of the name property"));

As we can see, the generation of JSON schema was successful:

{
    "type": "object",
    "id": "urn:jsonschema:com:baeldung:jackson:annotation:extra:PropertyDescriptionBean",
    "properties": 
    {
        "name": 
        {
            "type": "string",
            "description": "This is a description of the name property"
        },

        "id": 
        {
            "type": "integer"
        }
    }
}

6. @JsonPOJOBuilder

The @JsonPOJOBuilder annotation is used to configure a builder class to customize deserialization of a JSON document to recover POJOs when the naming convention is different from the default.

Suppose we need to deserialize the following JSON string:

{
    "id": 5,
    "name": "POJO Builder Bean"
}

That JSON source will be used to create an instance of the POJOBuilderBean:

@JsonDeserialize(builder = BeanBuilder.class)
public class POJOBuilderBean {
    private int identity;
    private String beanName;

    // constructor, getters and setters
}

The names of the bean’s properties are different from those of the fields in JSON string. This is where @JsonPOJOBuilder comes to the rescue.

The @JsonPOJOBuilder annotation is accompanied by two properties:

  • buildMethodName: The name of the no-arg method used to instantiate the expected bean after binding JSON fields to that bean’s properties. The default name is build.
  • withPrefix: The name prefix for auto-detection of matching between the JSON and bean’s properties. The default prefix is with.

This example makes use of the BeanBuilder class below, which is used on POJOBuilderBean:

@JsonPOJOBuilder(buildMethodName = "createBean", withPrefix = "construct")
public class BeanBuilder {
    private int idValue;
    private String nameValue;

    public BeanBuilder constructId(int id) {
        idValue = id;
        return this;
    }

    public BeanBuilder constructName(String name) {
        nameValue = name;
        return this;
    }

    public POJOBuilderBean createBean() {
        return new POJOBuilderBean(idValue, nameValue);
    }
}

In the code above, we have configured the @JsonPOJOBuilder to use a build method called createBean and the construct prefix for matching properties.

The application of @JsonPOJOBuilder to a bean is described and tested as follows:

String jsonString = "{\"id\":5,\"name\":\"POJO Builder Bean\"}";
POJOBuilderBean bean = mapper.readValue(jsonString, POJOBuilderBean.class);

assertEquals(5, bean.getIdentity());
assertEquals("POJO Builder Bean", bean.getBeanName());

The result shows that a new data object has been successfully re-created from a JSON source despite a mismatch in property names.

7. @JsonTypeId

The @JsonTypeId annotation is used to indicate that the annotated property should be serialized as the type id when including polymorphic type information, rather than as a regular property. That polymorphic metadata is used during deserialization to recreate objects of the same subtypes as they were before serialization, rather than of the declared supertypes.

For more information on Jackson’s handling of inheritance, see section 2 of the Inheritance in Jackson.

Let’s say we have a bean class definition as follows:

public class TypeIdBean {
    private int id;
    @JsonTypeId
    private String name;

    // constructor, getters and setters
}

The following test validates that @JsonTypeId works as it is meant to:

mapper.enableDefaultTyping(DefaultTyping.NON_FINAL);
TypeIdBean bean = new TypeIdBean(6, "Type Id Bean");
String jsonString = mapper.writeValueAsString(bean);
        
assertThat(jsonString, containsString("Type Id Bean"));

The serialization process’ output:

[
    "Type Id Bean",
    {
        "id": 6
    }
]

8. @JsonTypeIdResolver

The @JsonTypeIdResolver annotation is used to signify a custom type identity handler in serialization and deserialization. That handler is responsible for conversion between Java types and type id included in a JSON document.

Suppose that we want to embed type information in a JSON string when dealing with the following class hierarchy.

The AbstractBean superclass:

@JsonTypeInfo(
  use = JsonTypeInfo.Id.NAME, 
  include = JsonTypeInfo.As.PROPERTY, 
  property = "@type"
)
@JsonTypeIdResolver(BeanIdResolver.class)
public class AbstractBean {
    private int id;

    protected AbstractBean(int id) {
        this.id = id;
    }

    // no-arg constructor, getter and setter
}

The FirstBean subclass:

public class FirstBean extends AbstractBean {
    String firstName;

    public FirstBean(int id, String name) {
        super(id);
        setFirstName(name);
    }

    // no-arg constructor, getter and setter
}

The LastBean subclass:

public class LastBean extends AbstractBean {
    String lastName;

    public LastBean(int id, String name) {
        super(id);
        setLastName(name);
    }

    // no-arg constructor, getter and setter
}

Instances of those classes are used to populate a BeanContainer object:

public class BeanContainer {
    private List<AbstractBean> beans;

    // getter and setter
}

We can see that the AbstractBean class is annotated with @JsonTypeIdResolver, indicating that it uses a custom TypeIdResolver to decide how to include subtype information in serialization and how to make use of that metadata the other way round.

Here is the resolver class to handle inclusion of type information:

public class BeanIdResolver extends TypeIdResolverBase {
    
    private JavaType superType;

    @Override
    public void init(JavaType baseType) {
        superType = baseType;
    }

    @Override
    public Id getMechanism() {
        return Id.NAME;
    }

    @Override
    public String idFromValue(Object obj) {
        return idFromValueAndType(obj, obj.getClass());
    }

    @Override
    public String idFromValueAndType(Object obj, Class<?> subType) {
        String typeId = null;
        switch (subType.getSimpleName()) {
        case "FirstBean":
            typeId = "bean1";
            break;
        case "LastBean":
            typeId = "bean2";
        }
        return typeId;
    }

    @Override
    public JavaType typeFromId(DatabindContext context, String id) {
        Class<?> subType = null;
        switch (id) {
        case "bean1":
            subType = FirstBean.class;
            break;
        case "bean2":
            subType = LastBean.class;
        }
        return context.constructSpecializedType(superType, subType);
    }
}

The two most notable methods are idFromValueAndType and typeFromId: the former determines how type information is included when serializing POJOs, and the latter determines the subtypes of the re-created objects from that metadata.

In order to make sure that both serialization and deserialization work well, let’s write a test to validate the complete process.

First, we need to instantiate a bean container and bean classes, then populate that container with bean instances:

FirstBean bean1 = new FirstBean(1, "Bean 1");
LastBean bean2 = new LastBean(2, "Bean 2");

List<AbstractBean> beans = new ArrayList<>();
beans.add(bean1);
beans.add(bean2);

BeanContainer serializedContainer = new BeanContainer();
serializedContainer.setBeans(beans);

Next, the BeanContainer object is serialized and we confirm that the resulting string contains type information:

String jsonString = mapper.writeValueAsString(serializedContainer);
assertThat(jsonString, containsString("bean1"));
assertThat(jsonString, containsString("bean2"));

The output of serialization is shown below:

{
    "beans": 
    [
        {
            "@type": "bean1",
            "id": 1,
            "firstName": "Bean 1"
        },
        {
            "@type": "bean2",
            "id": 2,
            "lastName": "Bean 2"
        }
    ]
}

That JSON structure will be used to re-create objects of the same subtypes as before serialization. Here are the implementation steps for deserialization:

BeanContainer deserializedContainer = mapper.readValue(jsonString, BeanContainer.class);
List<AbstractBean> beanList = deserializedContainer.getBeans();
assertThat(beanList.get(0), instanceOf(FirstBean.class));
assertThat(beanList.get(1), instanceOf(LastBean.class));

9. Conclusion

This tutorial has explained several less-common Jackson annotations in detail. The implementation of these examples and code snippets can be found in a GitHub project.

I usually post about Jackson and JSON stuff on Twitter - you should follow me there:



Java Web Weekly, Issue 117


I usually post about Dev stuff on Twitter - you can follow me there:

At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> JEP 286 Survey Results for Local Variable Type Inference [infoq.com]

A quick followup to the survey Brian Goetz ran to take the pulse of the community on the best way to implement type inference in Java. Looks like a pretty decisive yes.

>> Simplifying Database Queries with Jinq [infoq.com]

Jinq looks like a clean, nice way to access your SQL data – here’s just a quick example to show you what the library can do.

>> Improve Your JUnit Experience with this Annotation [jooq.org]

Very quick and to the point way to run your tests in a more predictable order – which makes a lot of sense.

I personally like the unpredictable nature of tests – it’s a quick and nice way to flush out any unforeseen connections between them – but I can certainly see the appeal of running them in a clear order.

>> How to call Oracle stored procedures and functions from Hibernate [vladmihalcea.com]

A very practical and useful guide to using stored procedures with Hibernate. A bit annotation-heavy, but if you’re using JPA, you’re already used to that.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Understanding CSRF, the video tutorial edition [troyhunt.com]

Having a solid understanding of CSRF attacks, well beyond the basics, can save your bacon when taking your system to production. Definitely have a look at this one.

>> Uber Bug Bounty: Turning Self-XSS into Good-XSS [fin1te.net]

I enjoy reading through the details of these attacks. I’m saving this one for the weekend but it looks promising, so I’m including it here as well.

>> Writing OpenAPI (Swagger) Specification Tutorial – Part 3 – Simplifying specification file [apihandyman.io]

API documentation is the new hotness, yes, but it’s also necessary. And while I’m using Swagger myself, I’m keeping a close eye on the other tools available out there.

>> Event Sourcing vs CRUD [alexecollins.com]

A very quick and to the point set of questions to ask yourself before deciding if Event Sourcing makes sense for the architecture of your system.

Also worth reading:

3. Musings

>> That Code’s Not Dead — It Went To a Farm Upstate… And You’re Paying For It [daedtech.com]

Removing “dead” code is critical to keep the sanity of your system (and your own while you work on that system).

One of the cleanest and easiest-to-work-with codebases I ever touched early in my career was one where the team lead was ruthless about cutting code that wasn’t used immediately.

>> My Passion Was My Weak Spot [jacquesmattheij.com]

Passion is one thing; allowing it to put you in an unhealthy, one-sided type of work is another.

This piece is definitely worth the read, especially if you’re relatively new to working as a developer.

>> Take a Step Back [techblog.bozho.net]

Some solid advice if there ever was any – think through those little, day to day decisions to keep your system and your codebase clean and nimble.

>> AppDynamics vs Dynatrace: Battle of the Enterprise Monitoring Giants [takipi.com]

If you ever asked the monitoring question for the system you’re working on, you’ve asked yourself this exact question more than once.

My only gripe about this one is that it doesn’t include the other major player in the space – New Relic. Other than that – some solid information over here.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> My PowerPoint slides have a little something for everyone [dilbert.com]

>> I’m in the mood to tweet [dilbert.com]

>> You’re exactly what I’m trying to avoid [dilbert.com]

 

5. Pick of the Week

Every year I run a survey to find out how the adoption of new technologies is going. Here are the new numbers for Spring and Spring Boot:

>> Spring and Spring Boot Adoption in March 2016 [baeldung.com]

 

I usually post about Dev stuff on Twitter - you can follow me there:


XStream User Guide: Converting Objects to XML


1. Overview

In this tutorial, we’ll learn how to use the XStream library to serialize Java objects to XML.

2. Features

There are quite a few interesting benefits to using XStream to serialize and deserialize XML:

  • Configured properly, it produces very clean XML
  • Provides significant opportunities for customization of the XML output
  • Support for object graphs, including circular references
  • For most use cases, the XStream instance is thread-safe, once configured (there are caveats when using annotations)
  • Clear messages are provided during exception handling to help diagnose issues
  • Starting with version 1.4.7, we have security features available to disallow serialization of certain types

3. Project Setup

In order to use XStream in our project we will add the following Maven dependency:

<dependency>
    <groupId>com.thoughtworks.xstream</groupId>
    <artifactId>xstream</artifactId>
    <version>1.4.9</version>
</dependency>

4. Basic Usage

The XStream class is a facade for the API. When creating an instance of XStream, we need to take care of thread safety issues as well:

XStream xstream = new XStream();

Once an instance is created and configured, it may be shared across multiple threads for marshalling/unmarshalling unless you enable annotation processing.

4.1. Drivers

Several drivers are supported, such as DomDriver, StaxDriver, XppDriver, and more. These drivers have different performance and resource usage characteristics.

The XPP3 driver is used by default, but of course we can easily change the driver:

XStream xstream = new XStream(new StaxDriver());

4.2. Generating XML

Let’s start by defining a simple POJO – Customer:

public class Customer {

    private String firstName;
    private String lastName;
    private Date dob;

    // standard constructor, setters, and getters
}

Let’s now generate an XML representation of the object:

Customer customer = new Customer("John", "Doe", new Date());
String dataXml = xstream.toXML(customer);

Using the default settings, the following output is produced:

<com.baeldung.pojo.Customer>
    <firstName>John</firstName>
    <lastName>Doe</lastName>
    <dob>1986-02-14 03:46:16.381 UTC</dob>
</com.baeldung.pojo.Customer>

From this output, we can clearly see that the containing tag uses the fully-qualified class name of Customer by default.

There are many reasons we might decide that the default behavior doesn’t suit our needs. For example, we might not be comfortable exposing the package structure of our application. Also, the XML generated is significantly longer.

5. Aliases

An alias is a name we wish to use for elements rather than using default names.

For example, we can replace com.baeldung.pojo.Customer with customer by registering an alias for the Customer class. We can also add aliases for properties of a class. By using aliases, we can make our XML output much more readable and less Java-specific.

5.1. Class Aliases

Aliases can be registered either programmatically or using annotations.

Let’s now annotate our Customer class with @XStreamAlias:

@XStreamAlias("customer")

Now we need to configure our instance to use this annotation:

xstream.processAnnotations(Customer.class);

Alternatively, if we wish to configure an alias programmatically, we can use the code below:

xstream.alias("customer", Customer.class);

Whether using the alias or programmatic configuration, the output for a Customer object will be much cleaner:

<customer>
    <firstName>John</firstName>
    <lastName>Doe</lastName>
    <dob>1986-02-14 03:46:16.381 UTC</dob>
</customer>

5.2. Field Aliases

We can also add aliases for fields using the same annotation used for aliasing classes. For example, if we wanted the field firstName to be replaced with fn in the XML representation, we could use the following annotation:

@XStreamAlias("fn")
private String firstName;

Alternatively, we can accomplish the same goal programmatically:

xstream.aliasField("fn", Customer.class, "firstName");

The aliasField method accepts three arguments: the alias we wish to use, the class in which the property is defined, and the property name we wish to alias.

Whichever method is used, the output is the same:

<customer>
    <fn>John</fn>
    <lastName>Doe</lastName>
    <dob>1986-02-14 03:46:16.381 UTC</dob>
</customer>

5.3. Default Aliases

There are several aliases pre-registered for classes – here are a few of them:

alias("float", Float.class);
alias("date", Date.class);
alias("gregorian-calendar", Calendar.class);
alias("url", URL.class);
alias("list", List.class);
alias("locale", Locale.class);
alias("currency", Currency.class);

6. Collections

Now we will add a list of ContactDetails inside the Customer class.

private List<ContactDetails> contactDetailsList;

With default settings for collection handling, this is the output:

<customer>
    <firstName>John</firstName>
    <lastName>Doe</lastName>
    <dob>1986-02-14 04:14:05.874 UTC</dob>
    <contactDetailsList>
        <ContactDetails>
            <mobile>6673543265</mobile>
            <landline>0124-2460311</landline>
        </ContactDetails>
        <ContactDetails>
            <mobile>4676543565</mobile>
            <landline>0120-223312</landline>
        </ContactDetails>
    </contactDetailsList>
</customer>

Let’s suppose we need to omit the contactDetailsList parent tag, so that each ContactDetails element is a direct child of the customer element. Let’s modify our example again:

xstream.addImplicitCollection(Customer.class, "contactDetailsList");

Now, when the XML is generated, the wrapper tag is omitted, resulting in the XML below:

<customer>
    <firstName>John</firstName>
    <lastName>Doe</lastName>
    <dob>1986-02-14 04:14:20.541 UTC</dob>
    <ContactDetails>
        <mobile>6673543265</mobile>
        <landline>0124-2460311</landline>
    </ContactDetails>
    <ContactDetails>
        <mobile>4676543565</mobile>
        <landline>0120-223312</landline>
    </ContactDetails>
</customer>

The same can also be achieved using annotations:

@XStreamImplicit
private List<ContactDetails> contactDetailsList;

7. Converters

XStream uses a map of Converter instances, each with its own conversion strategy. These convert supplied data to a particular format in XML and back again.

In addition to using the default converters, we can modify the defaults or register custom converters.

7.1. Modifying an Existing Converter

Suppose we weren’t happy with the way the dob tags were generated using the default settings. We can configure the Date converter provided by XStream (DateConverter) with a custom format:

xstream.registerConverter(new DateConverter("dd-MM-yyyy", null));

The above will produce the output in “dd-MM-yyyy” format:

<customer>
    <firstName>John</firstName>
    <lastName>Doe</lastName>
    <dob>14-02-1986</dob>
</customer>

7.2. Custom Converters

We can also create a custom converter to accomplish the same output as in the previous section:

public class MyDateConverter implements Converter {

    private SimpleDateFormat formatter = new SimpleDateFormat("dd-MM-yyyy");

    @Override
    public boolean canConvert(Class clazz) {
        return Date.class.isAssignableFrom(clazz);
    }

    @Override
    public void marshal(
      Object value, HierarchicalStreamWriter writer, MarshallingContext arg2) {
        Date date = (Date)value;
        writer.setValue(formatter.format(date));
    }

    // other methods
}
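
The Converter interface also requires an unmarshal method for the reading direction. Here is a minimal sketch of what the “other methods” placeholder above could contain for our date format – the exact error handling shown is an assumption:

@Override
public Object unmarshal(
  HierarchicalStreamReader reader, UnmarshallingContext context) {
    try {
        // parse the element's text back into a Date using the same format
        return formatter.parse(reader.getValue());
    } catch (ParseException e) {
        throw new ConversionException("Cannot parse date", e);
    }
}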

Finally, we register our MyDateConverter class as below:

xstream.registerConverter(new MyDateConverter());

We can also create converters that implement the SingleValueConverter interface, which is designed to convert an object into a string:

public class MySingleValueConverter implements SingleValueConverter {

    @Override
    public boolean canConvert(Class clazz) {
        return Customer.class.isAssignableFrom(clazz);
    }

    @Override
    public String toString(Object obj) {
        SimpleDateFormat formatter = new SimpleDateFormat("dd-MM-yyyy");
        Date date = ((Customer) obj).getDob();
        return ((Customer) obj).getFirstName() + "," 
          + ((Customer) obj).getLastName() + ","
          + formatter.format(date);
    }

    // other methods
}

Finally, we register MySingleValueConverter:

xstream.registerConverter(new MySingleValueConverter());

Using MySingleValueConverter, the XML output for a Customer is as follows:

<customer>John,Doe,14-02-1986</customer>

7.3. Converter Priority

When registering Converter objects, it is possible to set their priority level as well.

From the XStream javadocs:

The converters can be registered with an explicit priority. By default they are registered with XStream.PRIORITY_NORMAL. Converters of same priority will be used in the reverse sequence they have been registered. The default converter, i.e. the converter which will be used if no other registered converter is suitable, can be registered with priority XStream.PRIORITY_VERY_LOW. XStream uses by default the ReflectionConverter as the fallback converter.

The API provides several named priority values:

public static final int PRIORITY_NORMAL = 0;
public static final int PRIORITY_LOW = -10;
public static final int PRIORITY_VERY_LOW = -20;
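
For example, a converter can be registered with an explicit priority using the two-argument overload of registerConverter – a quick sketch reusing the MyDateConverter from above:

// equivalent to the single-argument registration, which uses PRIORITY_NORMAL
xstream.registerConverter(new MyDateConverter(), XStream.PRIORITY_NORMAL);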

8. Omitting Fields

We can omit fields from our generated XML using either annotations or programmatic configuration. In order to omit a field using an annotation, we simply apply the @XStreamOmitField annotation to the field in question:

@XStreamOmitField 
private String firstName;

In order to omit the field programmatically, we use the following method:

xstream.omitField(Customer.class, "firstName");

Whichever method we select, the output is the same:

<customer> 
    <lastName>Doe</lastName> 
    <dob>14-02-1986</dob> 
</customer>

9. Attribute Fields

Sometimes we may wish to serialize a field as an attribute of an element rather than as an element itself. Suppose we add a contactType field:

private String contactType;

If we want to set contactType as an XML attribute, we can use the @XStreamAsAttribute annotation:

@XStreamAsAttribute
private String contactType;

Alternatively, we can accomplish the same goal programmatically:

xstream.useAttributeFor(ContactDetails.class, "contactType");

The output of either of the above methods is the same:

<ContactDetails contactType="Office">
    <mobile>6673543265</mobile>
    <landline>0124-2460311</landline>
</ContactDetails>

10. Concurrency

XStream’s processing model presents some concurrency challenges. Once an instance is configured, it is thread-safe for marshalling and unmarshalling.

It is important to note that processing of annotations modifies the configuration just before marshalling/unmarshalling. And so – if we require the instance to be configured on-the-fly using annotations, it is generally a good idea to use a separate XStream instance for each thread.
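
One simple way to achieve that is to hold a separately configured instance per thread. A minimal sketch, assuming the annotated Customer class from earlier:

private static final ThreadLocal<XStream> XSTREAM = ThreadLocal.withInitial(() -> {
    XStream xstream = new XStream();
    // annotation processing mutates the configuration, so do it once per thread
    xstream.processAnnotations(Customer.class);
    return xstream;
});

Each thread then calls XSTREAM.get() to obtain its own safely configured instance.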

11. Conclusion

In this article, we covered the basics of using XStream to convert objects to XML. We also learned about customizations we can use to ensure the XML output meets our needs. Finally, we looked at thread-safety problems with annotations.

In the next article in this series, we will learn about converting XML back to Java objects.

The complete source code for this article can be downloaded from the linked GitHub repository.

Java Web Weekly, Issue 118


I usually post about Dev stuff on Twitter - you can follow me there:

At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> Var and val in Java? [joda.org]

An interesting opinion piece about the introduction of local variable type inference in Java.

>> The Spring Boot Dashboard in STS – Part 5: Working with Launch configurations [spring.io]

Launch configs have always been a bit hard to manage in Eclipse – it’s nice to see the new Boot dashboards make some headway into getting these easier to manage.

>> Jenkins 2.0 Beta Available, Adds New Pipeline Build System [infoq.com] and >> Jenkins 2.0 Overview [jenkins.io]

The Jenkins ecosystem is moving forward and we’ve all but forgotten that Hudson was even a thing.

>> Retry handling with Spring-Retry [mscharhag.com]

Retry logic was something I had to roll out by hand many years back – so having out of the box support for it in Spring is highly useful.

>> 10 Features I Wish Java Would Steal From the Kotlin Language [jooq.org]

A fun read and a whole lot of wishful thinking :)

>> JUnit 5 – Architecture [codefx.org]

A deeper look into the architecture of the upcoming JUnit 5, and how the improvements will help in quite a number of scenarios (including IDEs). Cool stuff.

>> Benchmarking High-Concurrency HTTP Servers on the JVM [paralleluniverse.co]

A very detailed and well researched look at the state of concurrency of our HTTP servers running on the JVM.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

Worth reading:

3. Musings

>> Thanks For Ruining Another Game Forever, Computers [codinghorror.com]

If you’ve been at least mildly interested in the ongoing trend of computers defeating human players in games like chess and, more recently, Go – this is a fun and interesting read.

>> So You Want Some Passive Income [daedtech.com]

A quick and practical read if you’re starting to think about passive(ish) income.

Just keep in mind that passive is an umbrella term, a long-term play and an oversimplification. It’s also, done right, a very good way to pay the bills.

>> Software Can’t Live On Its Own [techblog.bozho.net]

The idea of unsupervised software, much like the concept of passive income, doesn’t quite work out in practice.

And so exploring this concept and being realistic about what it takes to actually support a system that’s seeing real-world use is definitely important.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Oh, it’s on now [dilbert.com]

>> Do you mind while I pretend to be helpful? [dilbert.com]

>> I showed an interest in her opinion [dilbert.com]

 

5. Pick of the Week

>> Sleep deprivation is not a badge of honor [signalvnoise.com]

 

I usually post about Dev stuff on Twitter - you can follow me there:


Introduction to jOOQ with Spring


I usually post about Persistence on Twitter - you can follow me there:

1. Overview

This article will introduce Java Object Oriented Querying – jOOQ – and a simple way to set it up in collaboration with the Spring Framework.

Most Java applications have some sort of SQL persistence and access that layer with the help of higher level tools such as JPA. And while that’s useful, in some cases you really need a finer, more nuanced tool to get to your data, or to actually take advantage of everything the underlying DB has to offer.

jOOQ avoids some typical ORM patterns and generates code that allows us to build typesafe queries, and get a more complete control of the generated SQL via a clean and powerful fluent API.

2. Maven Dependencies

The following dependencies are necessary to run the code in this tutorial.

2.1. jOOQ

<dependency>
    <groupId>org.jooq</groupId>
    <artifactId>jooq</artifactId>
    <version>3.7.3</version>
</dependency>

2.2. Spring

There are several Spring dependencies required for our example; however, to make things simple, we just need to explicitly include two of them in the POM file:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context</artifactId>
    <version>4.2.5.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-jdbc</artifactId>
    <version>4.2.5.RELEASE</version>
</dependency>

2.3. Database

To make things easy for our example, we will make use of the H2 embedded database:

<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <version>1.4.191</version>
</dependency>

3. Code Generation

3.1. Database Structure

Let’s introduce the database structure we will be working with throughout this article. Suppose that we need to create a database for a publisher to store information about the books and authors they manage, where an author may write many books and a book may be co-written by many authors.

To make it simple, we will generate only three tables: book for books, author for authors, and another table called author_book to represent the many-to-many relationship between authors and books. The author table has three columns: id, first_name and last_name. The book table contains only a title column and the id primary key.

The following SQL queries, stored in the intro_schema.sql resource file, will be executed against the database we set up earlier, creating the necessary tables and populating them with sample data:

DROP TABLE IF EXISTS author_book, author, book;

CREATE TABLE author (
  id             INT          NOT NULL PRIMARY KEY,
  first_name     VARCHAR(50),
  last_name      VARCHAR(50)  NOT NULL
);

CREATE TABLE book (
  id             INT          NOT NULL PRIMARY KEY,
  title          VARCHAR(100) NOT NULL
);

CREATE TABLE author_book (
  author_id      INT          NOT NULL,
  book_id        INT          NOT NULL,
  
  PRIMARY KEY (author_id, book_id),
  CONSTRAINT fk_ab_author     FOREIGN KEY (author_id)  REFERENCES author (id)  
    ON UPDATE CASCADE ON DELETE CASCADE,
  CONSTRAINT fk_ab_book       FOREIGN KEY (book_id)    REFERENCES book   (id)
);

INSERT INTO author VALUES 
  (1, 'Kathy', 'Sierra'), 
  (2, 'Bert', 'Bates'), 
  (3, 'Bryan', 'Basham');

INSERT INTO book VALUES 
  (1, 'Head First Java'), 
  (2, 'Head First Servlets and JSP'),
  (3, 'OCA/OCP Java SE 7 Programmer');

INSERT INTO author_book VALUES (1, 1), (1, 3), (2, 1);

3.2. Properties Maven Plugin

We will use three different Maven plugins to generate the jOOQ code. The first of these is the Properties Maven plugin.

This plugin is used to read configuration data from a resource file. It is not required since the data may be directly added to the POM, but it is a good idea to manage the properties externally.

In this section, we will define properties for database connections, including the JDBC driver class, database URL, username and password, in a file named intro_config.properties. Externalizing these properties makes it easy to switch the database or just change the configuration data.
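
For reference, a minimal intro_config.properties for the H2 setup used here might look like the following – the exact values are assumptions and should match your environment:

db.driver=org.h2.Driver
db.url=jdbc:h2:~/jooq-intro
db.username=sa
db.password=
jooq.sql.dialect=H2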

The read-project-properties goal of this plugin should be bound to an early phase so that the configuration data can be prepared for use by other plugins. In this case, it is bound to the initialize phase:

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>properties-maven-plugin</artifactId>
    <version>1.0.0</version>
    <executions>
        <execution>
            <phase>initialize</phase>
            <goals>
                <goal>read-project-properties</goal>
            </goals>
            <configuration>
                <files>
                    <file>src/main/resources/intro_config.properties</file>
                </files>
            </configuration>
        </execution>
    </executions>
</plugin>

3.3. SQL Maven Plugin

The SQL Maven plugin is used to execute SQL statements to create and populate database tables. It will make use of the properties that have been extracted from the intro_config.properties file by the Properties Maven plugin, and take the SQL statements from the intro_schema.sql resource.

The SQL Maven plugin is configured as below:

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>sql-maven-plugin</artifactId>
    <version>1.5</version>
    <executions>
        <execution>
            <phase>initialize</phase>
            <goals>
                <goal>execute</goal>
            </goals>
            <configuration>
                <driver>${db.driver}</driver>
                <url>${db.url}</url>
                <username>${db.username}</username>
                <password>${db.password}</password>
                <srcFiles>
                    <srcFile>src/main/resources/intro_schema.sql</srcFile>
                </srcFiles>
            </configuration>
        </execution>
    </executions>
    <dependencies>
        <dependency>
            <groupId>com.h2database</groupId>
            <artifactId>h2</artifactId>
            <version>1.4.191</version>
        </dependency>
    </dependencies>
</plugin>

Note that this plugin must be placed later than the Properties Maven plugin in the POM file, since their execution goals are both bound to the same phase and Maven executes them in the order they are listed.

3.4. jOOQ Codegen Plugin

The jOOQ Codegen plugin generates Java code from a database table structure. Its generate goal should be bound to the generate-sources phase to ensure the correct order of execution. The plugin metadata looks like the following:

<plugin>
    <groupId>org.jooq</groupId>
    <artifactId>jooq-codegen-maven</artifactId>
    <version>${org.jooq.version}</version>
    <executions>
        <execution>
            <phase>generate-sources</phase>
            <goals>
                <goal>generate</goal>
            </goals>
            <configuration>
                <jdbc>
                    <driver>${db.driver}</driver>
                    <url>${db.url}</url>
                    <user>${db.username}</user>
                    <password>${db.password}</password>
                </jdbc>
                <generator>
                    <target>
                        <packageName>com.baeldung.jooq.introduction.db</packageName>
                        <directory>src/main/java</directory>
                    </target>
                </generator>
            </configuration>
        </execution>
    </executions>
</plugin>

3.5. Generating Code

To finish up the process of source code generation, we need to run the Maven generate-sources phase. In Eclipse, we can do this by right-clicking on the project and choosing Run As –> Maven generate-sources. After the command is completed, source files corresponding to the author, book, author_book tables (and several others for supporting classes) are generated.
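
Outside the IDE, the same can be achieved from the command line:

mvn generate-sources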

Let’s dig into table classes to see what jOOQ produced. Each class has a static field of the same name as the class, except that all letters in the name are capitalized. The following are code snippets taken from the generated classes’ definitions:

The Author class:

public class Author extends TableImpl<AuthorRecord> {
    public static final Author AUTHOR = new Author();

    // other class members
}

The Book class:

public class Book extends TableImpl<BookRecord> {
    public static final Book BOOK = new Book();

    // other class members
}

The AuthorBook class:

public class AuthorBook extends TableImpl<AuthorBookRecord> {
    public static final AuthorBook AUTHOR_BOOK = new AuthorBook();

    // other class members
}

The instances referenced by those static fields will serve as data access objects to represent the corresponding tables when working with other layers in a project.

4. Spring Configuration

4.1. Translating jOOQ Exceptions to Spring

In order to make exceptions thrown from jOOQ execution consistent with Spring support for database access, we need to translate them into subtypes of the DataAccessException class.

Let’s define an implementation of the ExecuteListener interface to convert exceptions:

public class ExceptionTranslator extends DefaultExecuteListener {
    public void exception(ExecuteContext context) {
        SQLDialect dialect = context.configuration().dialect();
        SQLExceptionTranslator translator 
          = new SQLErrorCodeSQLExceptionTranslator(dialect.name());
        context.exception(translator
          .translate("Access database using jOOQ", context.sql(), context.sqlException()));
    }
}

This class will be used by the Spring application context.

4.2. Configuring Spring

This section will go through steps to define a PersistenceContext that contains metadata and beans to be used by the Spring application context.

Let’s get started by applying necessary annotations to the class:

  • @Configuration: Makes the class recognized as a container for beans
  • @ComponentScan: Configures scanning directives, including the value option to declare an array of package names to search for components. In this tutorial, the package to be searched is the one generated by the jOOQ Codegen Maven plugin
  • @EnableTransactionManagement: Enables transactions to be managed by Spring
  • @PropertySource: Indicates the locations of the properties files to be loaded. The value in this article points to the file containing the configuration data and the database dialect – the same intro_config.properties file used by the Maven plugins in section 3

@Configuration
@ComponentScan({"com.baeldung.jooq.introduction.db.public_.tables"})
@EnableTransactionManagement
@PropertySource("classpath:intro_config.properties")
public class PersistenceContext {
    // Other declarations
}

Next, use an Environment object to get the configuration data, which is then used to configure the DataSource bean:

@Autowired
private Environment environment;

@Bean
public DataSource dataSource() {
    JdbcDataSource dataSource = new JdbcDataSource();

    dataSource.setUrl(environment.getRequiredProperty("db.url"));
    dataSource.setUser(environment.getRequiredProperty("db.username"));
    dataSource.setPassword(environment.getRequiredProperty("db.password"));
    return dataSource; 
}

Now we define several beans to work with database access operations:

@Bean
public TransactionAwareDataSourceProxy transactionAwareDataSource() {
    return new TransactionAwareDataSourceProxy(dataSource());
}

@Bean
public DataSourceTransactionManager transactionManager() {
    return new DataSourceTransactionManager(dataSource());
}

@Bean
public DataSourceConnectionProvider connectionProvider() {
    return new DataSourceConnectionProvider(transactionAwareDataSource());
}

@Bean
public ExceptionTranslator exceptionTransformer() {
    return new ExceptionTranslator();
}
    
@Bean
public DefaultDSLContext dsl() {
    return new DefaultDSLContext(configuration());
}

Finally, we provide a jOOQ Configuration implementation and declare it as a Spring bean to be used by the DSLContext class:

@Bean
public DefaultConfiguration configuration() {
    DefaultConfiguration jooqConfiguration = new DefaultConfiguration();
    jooqConfiguration.set(connectionProvider());
    jooqConfiguration.set(new DefaultExecuteListenerProvider(exceptionTransformer()));

    String sqlDialectName = environment.getRequiredProperty("jooq.sql.dialect");
    SQLDialect dialect = SQLDialect.valueOf(sqlDialectName);
    jooqConfiguration.set(dialect);

    return jooqConfiguration;
}

5. Using jOOQ with Spring

This section demonstrates the use of jOOQ in common database access queries. There are two tests, one for commit and one for rollback, for each type of “write” operation: inserting, updating and deleting data. The use of “read” operations is illustrated by selecting data to verify the “write” queries.

We will begin by declaring an auto-wired DSLContext object and instances of jOOQ generated classes to be used by all testing methods:

@Autowired
private DSLContext dsl;

Author author = Author.AUTHOR;
Book book = Book.BOOK;
AuthorBook authorBook = AuthorBook.AUTHOR_BOOK;

5.1. Inserting Data

The first step is to insert data into tables:

dsl.insertInto(author)
  .set(author.ID, 4)
  .set(author.FIRST_NAME, "Herbert")
  .set(author.LAST_NAME, "Schildt")
  .execute();
dsl.insertInto(book)
  .set(book.ID, 4)
  .set(book.TITLE, "A Beginner's Guide")
  .execute();
dsl.insertInto(authorBook)
  .set(authorBook.AUTHOR_ID, 4)
  .set(authorBook.BOOK_ID, 4)
  .execute();

A SELECT query to extract data:

Result<Record3<Integer, String, Integer>> result = dsl
  .select(author.ID, author.LAST_NAME, DSL.count())
  .from(author)
  .join(authorBook)
  .on(author.ID.equal(authorBook.AUTHOR_ID))
  .join(book)
  .on(authorBook.BOOK_ID.equal(book.ID))
  .groupBy(author.LAST_NAME)
  .fetch();

The above query produces the following output:

+----+---------+-----+
|  ID|LAST_NAME|count|
+----+---------+-----+
|   1|Sierra   |    2|
|   2|Bates    |    1|
|   4|Schildt  |    1|
+----+---------+-----+

The result is confirmed by the Assert API:

assertEquals(3, result.size());
assertEquals("Sierra", result.getValue(0, author.LAST_NAME));
assertEquals(Integer.valueOf(2), result.getValue(0, DSL.count()));
assertEquals("Schildt", result.getValue(2, author.LAST_NAME));
assertEquals(Integer.valueOf(1), result.getValue(2, DSL.count()));

When a failure occurs due to an invalid query, an exception is thrown and the transaction rolls back. In the following example, the INSERT query violates a foreign key constraint, resulting in an exception:

@Test(expected = DataAccessException.class)
public void givenInvalidData_whenInserting_thenFail() {
    dsl.insertInto(authorBook)
      .set(authorBook.AUTHOR_ID, 4)
      .set(authorBook.BOOK_ID, 5)
      .execute();
}

5.2. Updating Data

Now let’s update the existing data:

dsl.update(author)
  .set(author.LAST_NAME, "Baeldung")
  .where(author.ID.equal(3))
  .execute();
dsl.update(book)
  .set(book.TITLE, "Building your REST API with Spring")
  .where(book.ID.equal(3))
  .execute();
dsl.insertInto(authorBook)
  .set(authorBook.AUTHOR_ID, 3)
  .set(authorBook.BOOK_ID, 3)
  .execute();

Get the necessary data:

Result<Record3<Integer, String, String>> result = dsl
  .select(author.ID, author.LAST_NAME, book.TITLE)
  .from(author)
  .join(authorBook)
  .on(author.ID.equal(authorBook.AUTHOR_ID))
  .join(book)
  .on(authorBook.BOOK_ID.equal(book.ID))
  .where(author.ID.equal(3))
  .fetch();

The output should be:

+----+---------+----------------------------------+
|  ID|LAST_NAME|TITLE                             |
+----+---------+----------------------------------+
|   3|Baeldung |Building your REST API with Spring|
+----+---------+----------------------------------+

The following test will verify that jOOQ worked as expected:

assertEquals(1, result.size());
assertEquals(Integer.valueOf(3), result.getValue(0, author.ID));
assertEquals("Baeldung", result.getValue(0, author.LAST_NAME));
assertEquals("Building your REST API with Spring", result.getValue(0, book.TITLE));

In case of a failure, an exception is thrown and the transaction rolls back, which we confirm with a test:

@Test(expected = DataAccessException.class)
public void givenInvalidData_whenUpdating_thenFail() {
    dsl.update(authorBook)
      .set(authorBook.AUTHOR_ID, 4)
      .set(authorBook.BOOK_ID, 5)
      .execute();
}

5.3. Deleting Data

The following method deletes some data:

dsl.delete(author)
  .where(author.ID.lt(3))
  .execute();

Here is the query to read the affected table:

Result<Record3<Integer, String, String>> result = dsl
  .select(author.ID, author.FIRST_NAME, author.LAST_NAME)
  .from(author)
  .fetch();

The query output:

+----+----------+---------+
|  ID|FIRST_NAME|LAST_NAME|
+----+----------+---------+
|   3|Bryan     |Basham   |
+----+----------+---------+

The following test verifies the deletion:

assertEquals(1, result.size());
assertEquals("Bryan", result.getValue(0, author.FIRST_NAME));
assertEquals("Basham", result.getValue(0, author.LAST_NAME));

On the other hand, if a query is invalid, it will throw an exception and the transaction rolls back. The following test will prove that:

@Test(expected = DataAccessException.class)
public void givenInvalidData_whenDeleting_thenFail() {
    dsl.delete(book)
      .where(book.ID.equal(1))
      .execute();
}

6. Conclusion

This tutorial introduced the basics of jOOQ, a Java library for working with databases. It covered the steps to generate source code from a database structure and how to interact with that database using the newly created classes.

The implementation of all these examples and code snippets can be found in a GitHub project.

I usually post about Persistence on Twitter - you can follow me there:


XStream User Guide: Converting XML to Objects


I usually post about Dev stuff on Twitter - you can follow me there:

1. Overview

In a previous article, we learned how to use XStream to serialize Java objects to XML. In this tutorial, we will learn how to do the reverse: deserialize XML to Java objects. These tasks can be accomplished using annotations or programmatically.

To learn about the basic requirements for setting up XStream and its dependencies, please reference the previous article.

2. Deserialize an Object from XML

To start with, suppose we have the following XML:

<com.baeldung.pojo.Customer>
    <firstName>John</firstName>
    <lastName>Doe</lastName>
    <dob>1986-02-14 03:46:16.381 UTC</dob>
</com.baeldung.pojo.Customer>

We need to convert this to a Java Customer object:

public class Customer {
 
    private String firstName;
    private String lastName;
    private Date dob;
 
    // standard setters and getters
}

The XML can be input in a number of ways, including FileInputStream, Reader, or String. For simplicity, we’ll assume that we have the XML above in a String object.

Customer convertedCustomer = (Customer) xstream.fromXML(customerXmlString);
Assert.assertTrue(convertedCustomer.getFirstName().equals("John"));

3. Aliases

In the first example, the XML had the fully-qualified name of the class in the outermost XML tag, matching the location of our Customer class. With this setup, XStream easily converts the XML to our object without any extra configuration. But we may not always have these conditions. We might not have control over the XML tag naming, or we might decide to add aliases for fields.

For example, suppose we modified our XML to not use the fully-qualified class name for the outer tag:

<customer>
    <firstName>John</firstName>
    <lastName>Doe</lastName>
    <dob>1986-02-14 03:46:16.381 UTC</dob>
</customer>

We can convert this XML by creating aliases.

3.1. Class Aliases

We register aliases with the XStream instance either programmatically or using annotations. We can annotate our Customer class with @XStreamAlias:

@XStreamAlias("customer")
public class Customer {
    //...
}

Now we need to configure our XStream instance to use this annotation:

xstream.processAnnotations(Customer.class);

Alternatively, if we wish to configure an alias programmatically, we can use the code below:

xstream.alias("customer", Customer.class);

3.2. Field Aliases

Suppose we have the following XML:

<customer>
    <fn>John</fn>
    <lastName>Doe</lastName>
    <dob>1986-02-14 03:46:16.381 UTC</dob>
</customer>

The fn tag doesn’t match any fields in our Customer object, so we will need to define an alias for that field if we wish to deserialize it. We can achieve this using the following annotation:

@XStreamAlias("fn")
private String firstName;

Alternatively, we can accomplish the same goal programmatically:

xstream.aliasField("fn", Customer.class, "firstName");

4. Implicit Collections

Let’s say we have the following XML, containing a simple list of ContactDetails:

<customer>
    <firstName>John</firstName>
    <lastName>Doe</lastName>
    <dob>1986-02-14 04:14:20.541 UTC</dob>
    <ContactDetails>
        <mobile>6673543265</mobile>
        <landline>0124-2460311</landline>
    </ContactDetails>
    <ContactDetails>...</ContactDetails>
</customer>

We want to load the list of ContactDetails into a List<ContactDetails> field in our Java object. We can achieve this by using the following annotation:

@XStreamImplicit
private List<ContactDetails> contactDetailsList;

Alternatively, we can accomplish the same goal programmatically:

xstream.addImplicitCollection(Customer.class, "contactDetailsList");

5. Ignore Fields

Let’s say we have the following XML:

<customer>
    <firstName>John</firstName>
    <lastName>Doe</lastName>
    <dob>1986-02-14 04:14:20.541 UTC</dob>
    <fullName>John Doe</fullName>
</customer>

In the XML above, we have an extra element <fullName>, which is missing from our Java Customer object.

If we try to deserialize the above XML without taking any care of the extra element, the program throws an UnknownFieldException:

No such field com.baeldung.pojo.Customer.fullName

As the exception clearly states, XStream does not recognize the field fullName.

To overcome this problem, we need to configure the XStream instance to ignore unknown elements:

xstream.ignoreUnknownElements();
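
With that setting in place, the unknown element is simply skipped. A quick check – a sketch assuming the XML above is available in customerXmlString:

xstream.ignoreUnknownElements();
Customer customer = (Customer) xstream.fromXML(customerXmlString);
Assert.assertEquals("John", customer.getFirstName());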

6. Attribute Fields

Suppose we have XML with attributes as part of elements that we’d like to deserialize as a field in our object. We will add a contactType attribute to our ContactDetails object:

<ContactDetails contactType="Office">
    <mobile>6673543265</mobile>
    <landline>0124-2460311</landline>
</ContactDetails>

If we want to deserialize the contactType XML attribute, we can use the @XStreamAsAttribute annotation on the field we’d like it to appear in:

@XStreamAsAttribute
private String contactType;

Alternatively, we can accomplish the same goal programmatically:

xstream.useAttributeFor(ContactDetails.class, "contactType");

7. Conclusion

In this article, we explored the options we have available when deserializing XML to Java objects using XStream.

The complete source code for this article can be downloaded from the linked GitHub repository.

I usually post about Dev stuff on Twitter - you can follow me there:

