
Quick Guide to @RestClientTest in Spring Boot


1. Introduction

This article is a quick introduction to a new feature of the upcoming Spring Boot 1.4.0 release — the @RestClientTest annotation.

The new annotation helps simplify and speed up the testing of REST clients in your Spring applications.

And of course it’s already available as part of the first release candidate of Spring Boot 1.4.0.

2. REST Client Support in Spring Boot Pre-1.4

Spring Boot is a handy framework that provides many auto-configured Spring beans with typical settings that allow you to concentrate less on configuration of a Spring application and more on your code and business logic.

But in version 1.3 we don’t get a lot of help when we want to create or test REST service clients; its support for REST clients is not very deep.

To create a client for a REST API, a RestTemplate instance is typically used. It usually has to be configured before use, and since its configuration may vary, Spring Boot does not provide any universally configured RestTemplate bean.
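
So, under 1.3, you would typically declare and configure the bean yourself; here’s a minimal sketch (the configuration class name is just illustrative):

@Configuration
public class RestClientConfig {

    // pre-1.4 there is no auto-configured RestTemplate or builder,
    // so we create and configure the instance ourselves
    @Bean
    public RestTemplate restTemplate() {
        RestTemplate restTemplate = new RestTemplate();
        // register message converters, error handlers, etc. as needed
        return restTemplate;
    }
}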

Same goes for testing REST clients. Before Spring Boot 1.4.0, the procedure for testing a Spring REST client was not very different than in any other Spring-based application. You would create a MockRestServiceServer instance, bind it to the RestTemplate instance under test, and provide it with mock responses to requests, like this:

RestTemplate restTemplate = new RestTemplate();

MockRestServiceServer mockServer =
  MockRestServiceServer.bindTo(restTemplate).build();
mockServer.expect(requestTo("/greeting"))
  .andRespond(withSuccess());

// Test code that uses the above RestTemplate ...

mockServer.verify();

You would also have to initialize the Spring container and make sure that only the needed components are loaded into the context, to speed up the context load time (and consequently, the test execution time).
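
A minimal sketch of what such a pre-1.4 test setup could look like (the configuration class and test names here are hypothetical):

@RunWith(SpringJUnit4ClassRunner.class)
// load only the beans the client under test actually needs
@ContextConfiguration(classes = GreetingClientConfig.class)
public class GreetingClientManualTest {

    @Autowired
    private RestTemplate restTemplate;

    private MockRestServiceServer mockServer;

    @Before
    public void setUp() {
        // bind the mock server to the RestTemplate under test, as shown above
        mockServer = MockRestServiceServer.bindTo(restTemplate).build();
    }
}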

3. New REST Client Features in Spring Boot 1.4

In Spring Boot 1.4, the team has made a solid effort to simplify and speed up the creation and testing of REST clients.

So, let’s check out the new features.

3.1. Adding Spring Boot 1.4.0 to Your Project

First, you’ll need to make sure your project is using Spring Boot 1.4.x:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.4.0.RELEASE</version>
    <relativePath/> <!-- lookup parent from repository -->
</parent>
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>

And if you are reading this article prior to the 1.4.0 release, you should also make sure to include the Spring snapshot and milestone repositories, as described in the article “Spring Maven Repositories”, and use Spring Boot version 1.4.0.RC1. The newest release versions can be found here.

3.2. RestTemplateBuilder

Spring Boot 1.4 brings both the auto-configured RestTemplateBuilder to simplify creating RestTemplates, and the matching @RestClientTest annotation to test the clients built with RestTemplateBuilder. Here’s how you can create a simple REST client with RestTemplateBuilder auto-injected for you:

@Service
public class DetailsServiceClient {

    private final RestTemplate restTemplate;

    public DetailsServiceClient(RestTemplateBuilder restTemplateBuilder) {
        restTemplate = restTemplateBuilder.build();
    }

    public Details getUserDetails(String name) {
        return restTemplate.getForObject("/{name}/details",
          Details.class, name);
    }
}

Notice that we did not explicitly wire the RestTemplateBuilder instance to a constructor. This is possible thanks to a new Spring 4.3 feature called implicit constructor injection, which is discussed in this article. 

RestTemplateBuilder provides convenience methods for registering message converters, error handlers, URI template handlers, and basic authorization, as well as for applying any additional customizers that you need.
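
For example, here’s a sketch of a more customized client – the root URI and the credentials are made-up values, purely to illustrate the builder methods:

@Service
public class DetailsServiceClient {

    private final RestTemplate restTemplate;

    public DetailsServiceClient(RestTemplateBuilder restTemplateBuilder) {
        restTemplate = restTemplateBuilder
          .rootUri("http://localhost:8080")      // hypothetical base URL
          .basicAuthorization("user", "secret")  // hypothetical credentials
          .build();
    }
}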

3.3. @RestClientTest

For testing such a REST client built with RestTemplateBuilder, you may use a SpringRunner-executed test class annotated with @RestClientTest. This annotation disables full auto-configuration and only applies configuration relevant to REST client tests, i.e. Jackson or GSON auto-configuration and @JsonComponent beans, but not regular @Component beans.

@RestClientTest ensures that Jackson and GSON support is auto-configured, and also adds pre-configured RestTemplateBuilder and MockRestServiceServer instances to the context. The bean under test is specified with the value or components attribute of the @RestClientTest annotation:

@RunWith(SpringRunner.class)
@RestClientTest(DetailsServiceClient.class)
public class DetailsServiceClientTest {

    @Autowired
    private DetailsServiceClient client;

    @Autowired
    private MockRestServiceServer server;

    @Autowired
    private ObjectMapper objectMapper;

    @Before
    public void setUp() throws Exception {
        String detailsString = 
          objectMapper.writeValueAsString(new Details("John Smith", "john"));
        
        this.server.expect(requestTo("/john/details"))
          .andRespond(withSuccess(detailsString, MediaType.APPLICATION_JSON));
    }

    @Test
    public void whenCallingGetUserDetails_thenClientMakesCorrectCall() 
      throws Exception {

        Details details = this.client.getUserDetails("john");

        assertThat(details.getLogin()).isEqualTo("john");
        assertThat(details.getName()).isEqualTo("John Smith");
    }
}

Firstly, we need to ensure that this test is run with SpringRunner by adding the @RunWith(SpringRunner.class) annotation.

So, what’s new?

First – the @RestClientTest annotation allows us to specify the exact service under test – in our case it is the DetailsServiceClient class. This service will be loaded into the test context, while everything else is filtered out.

This allows us to autowire the DetailsServiceClient instance inside our test and leave everything else outside, which speeds up the loading of the context.

Second – as the MockRestServiceServer instance is also configured for a @RestClientTest-annotated test (and bound to the DetailsServiceClient instance for us), we can simply inject it and use it.

Finally – the JSON support in @RestClientTest allows us to inject Jackson’s ObjectMapper instance to prepare the MockRestServiceServer’s mock response value.

All that is left to do is to execute the call to our service and verify the results.

4. Conclusion

In this article we’ve discussed the new @RestClientTest annotation that allows easy and quick testing of REST clients built with Spring and is expected in the upcoming Spring Boot 1.4.0 release.

The source code for the article is available on GitHub.



A Guide to REST-assured


1. Introduction

REST-assured was designed to simplify the testing and validation of REST APIs and is highly influenced by testing techniques used in dynamic languages such as Ruby and Groovy.

The library has solid support for HTTP, starting of course with the verbs and standard HTTP operations, but also going well beyond these basics.

In this guide, we are going to explore REST-assured, and we’re going to use Hamcrest to do assertions. If you are not already familiar with Hamcrest, you should first brush up by reading the tutorial: Testing with Hamcrest.

Let’s dive in with a simple example.

2. Simple Example Test

Before we get started ensure that your tests have the following static imports:

io.restassured.RestAssured.*
io.restassured.matcher.RestAssuredMatchers.*
org.hamcrest.Matchers.*

We’ll need that to keep tests simple and have easy access to the main APIs.

Now, let’s get started with the simple example – a basic betting system exposing some data for games:

{
    "id": "390",
    "data": {
        "leagueId": 35,
        "homeTeam": "Norway",
        "visitingTeam": "England",
    },
    "odds": [{
        "price": "1.30",
        "name": "1"
    },
    {
        "price": "5.25",
        "name": "X"
    }]
}

Let’s say that this is the JSON response from hitting the locally deployed API – http://localhost:8080/events?id=390:

Let’s now use REST-assured to verify some interesting features of the response JSON:

@Test
public void givenUrl_whenSuccessOnGetsResponseAndJsonHasRequiredKV_thenCorrect() {
   get("/events?id=390").then().statusCode(200).assertThat()
      .body("data.leagueId", equalTo(35)); 
}

So, what we did here is verify that a call to the endpoint /events?id=390 responds with a body whose data object has a leagueId of 35.

Let’s have a look at a more interesting example. Let’s say you would like to verify that the odds array has records with prices 1.30 and 5.25:

@Test
public void givenUrl_whenJsonResponseHasArrayWithGivenValuesUnderKey_thenCorrect() {
   get("/events?id=390").then().assertThat()
      .body("odds.price", hasItems("1.30", "5.25"));
}

3. REST-assured Setup

If our favorite dependency tool is Maven, we add the following dependency to the pom.xml file:

<dependency>
    <groupId>io.rest-assured</groupId>
    <artifactId>rest-assured</artifactId>
    <version>3.0.0</version>
    <scope>test</scope>
</dependency>

To get the latest version, follow this link.

In addition, we may prefer to validate the JSON response based on a predefined JSON schema. In this case, we need to also include the json-schema-validator module in the pom.xml file:

<dependency>
    <groupId>io.rest-assured</groupId>
    <artifactId>json-schema-validator</artifactId>
    <version>3.0.0</version>
</dependency>

To ensure you have the latest version, follow this link.

We also need another library with the same name but a different author and functionality. It is not a module of REST-assured; rather, it is used under the hood by the json-schema-validator module to perform validation.

Let’s add it like so:

<dependency>
    <groupId>com.github.fge</groupId>
    <artifactId>json-schema-validator</artifactId>
    <version>2.2.6</version>
</dependency>

Its latest version can be found here.

The library, json-schema-validator, may also need the json-schema-core dependency:

<dependency>
    <groupId>com.github.fge</groupId>
    <artifactId>json-schema-core</artifactId>
    <version>1.2.5</version>
</dependency>

and the latest version is always found here.

REST-assured takes advantage of the power of Hamcrest matchers to perform its assertions, so we must include that dependency as well:

<dependency>
    <groupId>org.hamcrest</groupId>
    <artifactId>hamcrest-all</artifactId>
    <version>1.3</version>
</dependency>

The latest version will always be available at this link.

4. JSON Schema Validation

From time to time it may be desirable, without analyzing the response in detail, to know right away whether the JSON body conforms to a certain JSON format.

Let’s have a look at an example. Assume the JSON introduced in the above example has been saved in a file called event_0.json, present on the classpath; we can then use this JSON as the definition of our schema.

Assuming that this is the general format followed by all data returned by our REST API, we can then check a JSON response for conformance like so:

@Test
public void givenUrl_whenJsonResponseConformsToSchema_thenCorrect() {
    get("/events?id=390").then().assertThat()
      .body(matchesJsonSchemaInClasspath("event_0.json"));
}

Notice that we’ll still statically import matchesJsonSchemaInClasspath from io.restassured.module.jsv.JsonSchemaValidator just as we do for all other methods.
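
Concretely, that static import is:

import static io.restassured.module.jsv.JsonSchemaValidator.matchesJsonSchemaInClasspath;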

5. JSON Schema Validation Settings

5.1. Validate a Response

The json-schema-validator module of REST-assured gives us the power to perform fine-grained validation by defining our own custom configuration rules.

Say we want our validation to always use the JSON schema version 4:

@Test
public void givenUrl_whenValidatesResponseWithInstanceSettings_thenCorrect() {
    JsonSchemaFactory jsonSchemaFactory = JsonSchemaFactory.newBuilder()
      .setValidationConfiguration(
        ValidationConfiguration.newBuilder()
          .setDefaultVersion(SchemaVersion.DRAFTV4).freeze())
            .freeze();
    get("/events?id=390").then().assertThat()
      .body(matchesJsonSchemaInClasspath("event_0.json")
        .using(jsonSchemaFactory));
}

We do this by using a JsonSchemaFactory to specify version 4 as the SchemaVersion, and asserting that this schema is used when the response is validated.

5.2. Check Validations

By default, the json-schema-validator runs checked validations on the JSON response String. This means that if the schema defines odds as an array as in the following JSON:

{
    "odds": [{
        "price": "1.30",
        "name": "1"
    },
    {
        "price": "5.25",
        "name": "X"
    }]
}

then the validator will always be expecting an array as the value for odds, hence a response where odds is a String will fail validation. So, if we would like to be less strict with our responses, we can add a custom rule during validation by first making the following static import:

io.restassured.module.jsv.JsonSchemaValidatorSettings.settings;

then execute the test with the validation check set to false:

@Test
public void givenUrl_whenValidatesResponseWithStaticSettings_thenCorrect() {
    get("/events?id=390").then().assertThat().body(matchesJsonSchemaInClasspath
      ("event_0.json").using(settings().with().checkedValidation(false)));
}

5.3. Global Validation Configuration

These customizations are very flexible, but with a large number of tests we would have to define a validation configuration for each test, which is cumbersome and not very maintainable.

To avoid this, we have the freedom to define our configuration just once and let it apply to all tests.

We will configure the validation to be unchecked and to always validate against JSON schema version 3:

JsonSchemaFactory factory = JsonSchemaFactory.newBuilder()
  .setValidationConfiguration(
   ValidationConfiguration.newBuilder()
    .setDefaultVersion(SchemaVersion.DRAFTV3)
      .freeze()).freeze();
JsonSchemaValidator.settings = settings()
  .with().jsonSchemaFactory(factory)
      .and().with().checkedValidation(false);

Then, to remove this configuration, we call the reset method:

JsonSchemaValidator.reset();

6. Anonymous JSON Root Validation

Consider an array that consists of primitives rather than objects:

[1, 2, 3]

This is called an anonymous JSON root: it has no key-value pairs, yet it is still valid JSON data.

We can run validation in such a scenario by using the $ symbol or an empty String ( “” ) as the path. Assuming we expose the above service through http://localhost:8080/json, we can validate it like this with REST-assured:

when().get("/json").then().body("$", hasItems(1, 2, 3));

or like this:

when().get("/json").then().body("", hasItems(1, 2, 3));

7. Floats and Doubles

When we start using REST-assured to test our REST services, we need to understand that floating point numbers in JSON responses are mapped to the primitive type float.

The float type is not interchangeable with double, as is the case in many scenarios in Java.

Case in point is this response:

{
    "odd": {
        "price": "1.30",
        "ck": 12.2,
        "name": "1"
    }
}

Assume we are running the following test on the value of ck:

get("/odd").then().assertThat().body("odd.ck", equalTo(12.2));

This test will fail even if the value we are testing is equal to the value in the response. This is because we are comparing to a double rather than to a float.

To make it work, we have to explicitly specify the operand to the equalTo matcher method as a float, like so:

get("/odd").then().assertThat().body("odd.ck", equalTo(12.2f));

8. XML Response Verification

Not only can you validate a JSON response, you can validate XML as well.

Let’s assume we make a request to http://localhost:8080/employees and we get the following response:

<employees>
    <employee category="skilled">
        <first-name>Jane</first-name>
        <last-name>Daisy</last-name>
        <sex>f</sex>
    </employee>
</employees>

We can verify that the first-name is Jane like so:

@Test
public void givenUrl_whenXmlResponseValueTestsEqual_thenCorrect() {
    post("/employees").then().assertThat()
      .body("employees.employee.first-name", equalTo("Jane"));
}

We can also verify that all values match our expected values by chaining body matchers together like so:

@Test
public void givenUrl_whenMultipleXmlValuesTestEqual_thenCorrect() {
    post("/employees").then().assertThat()
      .body("employees.employee.first-name", equalTo("Jane"))
        .body("employees.employee.last-name", equalTo("Daisy"))
          .body("employees.employee.sex", equalTo("f"));
}

Or using the shorthand version with variable arguments:

@Test
public void givenUrl_whenMultipleXmlValuesTestEqualInShortHand_thenCorrect() {
    post("/employees")
      .then().assertThat().body("employees.employee.first-name", 
        equalTo("Jane"),"employees.employee.last-name", 
          equalTo("Daisy"), "employees.employee.sex", 
            equalTo("f"));
}

9. XPath for XML

We can verify our responses using XPath. Consider the example below that executes a matcher on the first-name:

@Test
public void givenUrl_whenValidatesXmlUsingXpath_thenCorrect() {
    post("/employees").then().assertThat().
      body(hasXPath("/employees/employee/first-name", containsString("Ja")));
}

XPath also accepts an alternate way of running the equalTo matcher:

@Test
public void givenUrl_whenValidatesXmlUsingXpath2_thenCorrect() {
    post("/employees").then().assertThat()
      .body(hasXPath("/employees/employee/first-name[text()='Jane']"));
}

10. Using Groovy

Since REST-assured uses Groovy under the hood, we actually have the opportunity to use raw Groovy syntax to create more powerful test cases. This is where the framework really comes to life.

10.1. Groovy’s Collection API

If Groovy is new to you, read on, otherwise you can skip to the section Validate JSON with Groovy.

We will take a quick look at some basic Groovy concepts, with a few simple examples, to equip us with just what we need.

10.2. The findAll method

In this example, we will just pay attention to methods, closures and the it implicit variable. Let us first create a Groovy collection of words:

def words = ['ant', 'buffalo', 'cat', 'dinosaur']

Let’s now create another collection out of the above, containing only the words whose length exceeds four letters:

def wordsWithSizeGreaterThanFour = words.findAll { it.length() > 4 }

Here, findAll is a method applied to the collection, with a closure passed to the method. The method defines what logic to apply to the collection, and the closure gives the method a predicate to customize the logic.

We are telling Groovy to loop through the collection and find all words whose length is greater than four and return the result into a new collection.

10.3. The it variable

The implicit variable it holds the current word in the loop. The new collection wordsWithSizeGreaterThanFour will contain the words buffalo and dinosaur.

['buffalo', 'dinosaur']

Apart from findAll, there are other Groovy methods.

10.4. The collect iterator

Next, there is collect, which calls the closure on each item in the collection and returns a new collection with the result for each item. Let’s create a new collection out of the sizes of each item in the words collection:

def sizes = words.collect{it.length()}

The result:

[3, 7, 3, 8]

We use sum, as the name suggests, to add up all elements in the collection. We can sum up the items in the sizes collection like so:

def charCount = sizes.sum()

and the result will be 21, the character count of all the items in the words collection.

10.5. The max/min operators

The max/min operators are intuitively named to find the maximum or minimum number in a collection:

def maximum = sizes.max()

The result should be obvious, 8.

10.6. The find iterator

We use find to search for a single collection value matching the closure predicate.

def greaterThanSeven = sizes.find { it > 7 }

The result is 8, the first item in the collection that meets the predicate.

11. Validate JSON with Groovy

Say we have a service at http://localhost:8080/odds that returns a list of odds for our favorite football matches, like this:

{
    "odds": [{
        "price": 1.30,
        "status": 0,
        "ck": 12.2,
        "name": "1"
    },
    {
        "price": 5.25,
        "status": 1,
        "ck": 13.1,
        "name": "X"
    },
    {
        "price": 2.70,
        "status": 0,
        "ck": 12.2,
        "name": "0"
    },
    {
        "price": 1.20,
        "status": 2,
        "ck": 13.1,
        "name": "2"
    }]
}

and if we want to verify that the odds with a status greater than 0 have prices 1.20 and 5.25, then we do this:

@Test
public void givenUrl_whenVerifiesOddPricesAccuratelyByStatus_thenCorrect() {
    get("/odds").then().body("odds.findAll { it.status > 0 }.price",
      hasItems(5.25f, 1.20f));
}

What is happening here is this: we use Groovy syntax to load the JSON array under the key odds. Since it has more than one item, we obtain a Groovy collection. We then invoke the findAll method on this collection.

The closure predicate tells Groovy to create another collection with JSON objects where status is greater than zero.

We end our path with price, which tells Groovy to create another list containing only the prices of the odds in our previous list of JSON objects. We then apply the hasItems Hamcrest matcher to this list.

12. Validate XML with Groovy

Let’s assume we have a service at http://localhost:8080/teachers that returns a list of teachers with their id, department, and subjects taught, as below:

<teachers>
    <teacher department="science" id=309>
        <subject>math</subject>
        <subject>physics</subject>
    </teacher>
    <teacher department="arts" id=310>
        <subject>political education</subject>
        <subject>english</subject>
    </teacher>
</teachers>

Now we can verify that the science teacher returned in the response teaches both math and physics:

@Test
public void givenUrl_whenVerifiesScienceTeacherFromXml_thenCorrect() {
    get("/teachers").then().body(
      "teachers.teacher.find { it.@department == 'science' }.subject",
        hasItems("math", "physics"));
}

We have used the XML path teachers.teacher to get a list of teachers. We then call the find method on this list, matching on the XML attribute department.

Our closure predicate to find ensures we end up with only teachers from the science department. Our XML path terminates at the subject tag.

Since there is more than one subject, we will get a list which we validate with the hasItems Hamcrest matcher.

13. Conclusion

In this tutorial, we have explored the REST-assured framework and looked at its most important features which we can use to test our RESTful services and validate their responses.

The full implementation of all these examples and code snippets can be found in the REST-assured GitHub project.


A Guide to FastJson


1. Overview

FastJson is a lightweight Java library used to effectively convert JSON strings to Java objects and vice versa.

In this article we’re going to dive into several concrete and practical applications of the FastJson library.

2. Maven Configuration

In order to start working with FastJson, we first need to add that to our pom.xml:

<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>fastjson</artifactId>
    <version>1.2.13</version>
</dependency>

And as a quick note – here’s the most updated version of the library on Maven Central.

3. Convert Java Objects to JSON Format

Let’s define the following Person Java bean:

public class Person {
    
    @JSONField(name = "AGE")
    private int age;

    @JSONField(name = "FULL NAME")
    private String fullName;

    @JSONField(name = "DATE OF BIRTH")
    private Date dateOfBirth;

    public Person(int age, String fullName, Date dateOfBirth) {
        super();
        this.age = age;
        this.fullName= fullName;
        this.dateOfBirth = dateOfBirth;
    }

    // standard getters & setters
}

We can use JSON.toJSONString() to convert a Java object to a JSON String:

private List<Person> listOfPersons = new ArrayList<Person>();

@Before
public void setUp() {
    listOfPersons.add(new Person(15, "John Doe", new Date()));
    listOfPersons.add(new Person(20, "Janette Doe", new Date()));
}

@Test
public void whenJavaList_thanConvertToJsonCorrect() {
    String jsonOutput= JSON.toJSONString(listOfPersons);
}

And here’s the result:

[  
    {  
        "AGE":15,
        "DATE OF BIRTH":1468962431394,
        "FULL NAME":"John Doe"
    },
    {  
        "AGE":20,
        "DATE OF BIRTH":1468962431394,
        "FULL NAME":"Janette Doe"
    }
]

We can also go further and start customizing the output and control things like ordering, date formatting, or serialization flags.

For example – let’s update the bean and add a couple more fields:

@JSONField(name="AGE", serialize=false)
private int age;

@JSONField(name="LAST NAME", ordinal = 2)
private String lastName;

@JSONField(name="FIRST NAME", ordinal = 1)
private String firstName;

@JSONField(name="DATE OF BIRTH", format="dd/MM/yyyy", ordinal = 3)
private Date dateOfBirth;

Here’s a list of the most basic parameters that we can use alongside the @JSONField annotation, in order to customize the conversion process:

  • The parameter format is used to properly format the date attribute
  • By default, the FastJson library serializes the Java bean entirely, but we can make use of the parameter serialize to skip serialization for specific fields
  • The parameter ordinal is used to specify the field order

And here’s the new output:

[
    {
        "FIRST NAME":"Doe",
        "LAST NAME":"Jhon",
        "DATE OF BIRTH":"19/07/2016"
    },
    {
        "FIRST NAME":"Doe",
        "LAST NAME":"Janette",
        "DATE OF BIRTH":"19/07/2016"
    }
]

FastJson also supports a very interesting BeanToArray serialization feature:

String jsonOutput= JSON.toJSONString(listOfPersons, SerializerFeature.BeanToArray);

Here’s what the output will look like in this case:

[
    [
        15,
        1469003271063,
        "John Doe"
    ],
    [
        20,
        1469003271063,
        "Janette Doe"
    ]
]

4. Create JSON Objects

As with other JSON libraries, creating a JSON object from scratch is pretty straightforward; it’s only a matter of combining JSONObject and JSONArray objects:

@Test
public void whenGenerateJson_thanGenerationCorrect() throws ParseException {
    JSONArray jsonArray = new JSONArray();
    for (int i = 0; i < 2; i++) {
        JSONObject jsonObject = new JSONObject();
        jsonObject.put("AGE", 10);
        jsonObject.put("FULL NAME", "Doe " + i);
        jsonObject.put("DATE OF BIRTH", "2016/12/12 12:12:12");
        jsonArray.add(jsonObject);
    }
    String jsonOutput = jsonArray.toJSONString();
}

And here’s what the output will look like here:

[
   {
      "AGE":"10",
      "DATE OF BIRTH":"2016/12/12 12:12:12",
      "FULL NAME":"Doe 0"
   },
   {
      "AGE":"10",
      "DATE OF BIRTH":"2016/12/12 12:12:12",
      "FULL NAME":"Doe 1"
   }
]

5. Parse JSON Strings into Java Objects

Now that we know how to create a JSON object from scratch, and how to convert Java objects to their JSON representations, let’s put the focus on how to parse a JSON representation:

@Test
public void whenJson_thanConvertToObjectCorrect() {
    Person person = new Person(20, "John", "Doe", new Date());
    String jsonObject = JSON.toJSONString(person);
    Person newPerson = JSON.parseObject(jsonObject, Person.class);
    
    assertEquals(newPerson.getAge(), 0); // if we set serialize to false
    assertEquals(newPerson.getFullName(), listOfPersons.get(0).getFullName());
}

We can use JSON.parseObject() to get a Java object from a JSON String.

Note that you have to define a no-args or default constructor if you have already declared your own parameterized one; otherwise, a com.alibaba.fastjson.JSONException will be thrown.
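
A minimal sketch of that adjustment to our bean:

public class Person {

    // an explicit no-args constructor is needed for deserialization
    // once a parameterized constructor has been declared; without it,
    // JSON.parseObject() throws com.alibaba.fastjson.JSONException
    public Person() {
    }

    public Person(int age, String fullName, Date dateOfBirth) {
        // ... as before
    }
}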

Here’s the output of this simple test:

Person [age=20, fullName=John Doe, dateOfBirth=Wed Jul 20 08:51:12 WEST 2016]

By using the option deserialize inside the @JSONField annotation, we can ignore deserialization for a specific field; in this case, the default value will apply automatically to the ignored field:

@JSONField(name = "DATE OF BIRTH", deserialize=false)
private Date dateOfBirth;

And here’s the newly created object:

Person [age=20, fullName=John Doe, dateOfBirth=null]

6. Configure JSON Conversion Using ContextValueFilter

In some scenarios, we may need to have more control over the conversion process from Java objects to JSON format.

In this case we can make use of the ContextValueFilter object to apply additional filtering and custom processing to the conversion flow:

@Test
public void givenContextFilter_whenJavaObject_thanJsonCorrect() {
    ContextValueFilter valueFilter = new ContextValueFilter () {
        public Object process(
          BeanContext context, Object object, String name, Object value) {
            if (name.equals("DATE OF BIRTH")) {
                return "NOT TO DISCLOSE";
            }
            if (value.equals("John")) {
                return ((String) value).toUpperCase();
            } else {
                return null;
            }
        }
    };
    String jsonOutput = JSON.toJSONString(listOfPersons, valueFilter);
}

In this example, we hid the DATE OF BIRTH field by forcing a constant value; we also ignored all fields whose values are not John or Doe:

[
    {
        "FULL NAME":"JOHN DOE",
        "DATE OF BIRTH":"NOT TO DISCLOSE"
    }
]

As you can see, this is a pretty basic example, but you can of course use the same concepts for more complex scenarios as well – combining this powerful and lightweight set of tools offered by FastJson in a real-world project.

7. Using NameFilter and SerializeConfig

FastJson offers a set of tools to customize your JSON operations when dealing with arbitrary objects – objects we don’t have the source code of.

Let’s imagine we have a compiled version of the Person Java bean, initially declared in this article, and we need to make some enhancements to field naming and basic formatting:

@Test
public void givenSerializeConfig_whenJavaObject_thanJsonCorrect() {
    NameFilter formatName = new NameFilter() {
        public String process(Object object, String name, Object value) {
            return name.toLowerCase().replace(" ", "_");
        }
    };
    
    SerializeConfig.getGlobalInstance().addFilter(Person.class,  formatName);
    String jsonOutput = 
      JSON.toJSONStringWithDateFormat(listOfPersons, "yyyy-MM-dd");
}

We’ve declared the formatName filter using an anonymous NameFilter class to process field names. The newly created filter is associated with the Person class and then added to a global instance – which is basically a static attribute of the SerializeConfig class.

Now we can comfortably convert our object to JSON format as shown earlier in this article.

Note that we’ve used toJSONStringWithDateFormat() instead of toJSONString() to quickly apply the same formatting rule on date fields.

And here’s the output:

[  
    {  
        "full_name":"John Doe",
        "date_of_birth":"2016-07-21"
    },
    {  
        "full_name":"Janette Doe",
        "date_of_birth":"2016-07-21"
    }
]

As you can see – the field names got changed, and the date value got properly formatted.

Combining SerializeFilter with ContextValueFilter can give full control over the conversion process for arbitrary and complex Java objects.

8. Conclusion

In this article we showed how to use FastJson to convert Java beans to JSON strings and how to go the other way around. We also showed how to use some of the core features of FastJson in order to customize the JSON output.

As you can see, the library offers a relatively simple-to-use but still very powerful API. JSON.toJSONString and JSON.parseObject are all you need to use in order to meet most of your needs – if not all.

You can check out the examples provided in this article in the linked GitHub project.



Introduction To XMLUnit 2.x


1. Overview

XMLUnit 2.x is a powerful library that helps us test and verify XML content, and comes in particularly handy when we know exactly what that XML should contain.

And so we’ll mainly be using XMLUnit inside unit tests to verify that what we have is valid XML, that it contains certain information or conforms to a certain style document.

Additionally, with XMLUnit, we have control over what kind of difference is important to us and which part of the style reference to compare with which part of our comparison XML.

Since we are focusing on XMLUnit 2.x and not XMLUnit 1.x, whenever we use the word XMLUnit, we are strictly referring to 2.x.

Finally, we’ll also be using Hamcrest matchers for assertions, so it’s a good idea to brush up on Hamcrest in case you are not familiar with it.

2. XMLUnit Maven Setup

To use the library in our Maven projects, we need to have the following dependencies in pom.xml:

<dependency>
    <groupId>org.xmlunit</groupId>
    <artifactId>xmlunit-core</artifactId>
    <version>2.2.1</version>
</dependency>

The latest version of xmlunit-core can be found by following this link. And:

<dependency>
    <groupId>org.xmlunit</groupId>
    <artifactId>xmlunit-matchers</artifactId>
    <version>2.2.1</version>
</dependency>

The latest version of xmlunit-matchers is available at this link.

3. Comparing XML

3.1. Simple Difference Examples

Let’s assume we have two pieces of XML. They are deemed to be identical when the content and sequence of the nodes in the documents are exactly the same, so the following test will pass:

@Test
public void given2XMLS_whenIdentical_thenCorrect() {
    String controlXml = "<struct><int>3</int><boolean>false</boolean></struct>";
    String testXml = "<struct><int>3</int><boolean>false</boolean></struct>";
    assertThat(testXml, CompareMatcher.isIdenticalTo(controlXml));
}

This next test fails, as the two pieces of XML are similar but not identical: their nodes occur in a different sequence:

@Test
public void given2XMLSWithSimilarNodesButDifferentSequence_whenNotIdentical_thenCorrect() {
    String controlXml = "<struct><int>3</int><boolean>false</boolean></struct>";
    String testXml = "<struct><boolean>false</boolean><int>3</int></struct>";
    assertThat(testXml, not(isIdenticalTo(controlXml)));
}

3.2. Detailed Difference Example

The differences between the two XML documents above are detected by the Difference Engine.

By default and for efficiency reasons, it stops the comparison process as soon as the first difference is found.

To get all the differences between two pieces of XML we use an instance of the Diff class like so:

@Test
public void given2XMLS_whenGeneratesDifferences_thenCorrect(){
    String controlXml = "<struct><int>3</int><boolean>false</boolean></struct>";
    String testXml = "<struct><boolean>false</boolean><int>3</int></struct>";
    Diff myDiff = DiffBuilder.compare(controlXml).withTest(testXml).build();
    
    Iterator<Difference> iter = myDiff.getDifferences().iterator();
    int size = 0;
    while (iter.hasNext()) {
        iter.next().toString();
        size++;
    }
    assertThat(size, greaterThan(1));
}

If we print the values returned in the while loop, the result is as below:

Expected element tag name 'int' but was 'boolean' - 
  comparing <int...> at /struct[1]/int[1] to <boolean...> 
    at /struct[1]/boolean[1] (DIFFERENT)
Expected text value '3' but was 'false' - 
  comparing <int ...>3</int> at /struct[1]/int[1]/text()[1] to 
    <boolean ...>false</boolean> at /struct[1]/boolean[1]/text()[1] (DIFFERENT)
Expected element tag name 'boolean' but was 'int' - 
  comparing <boolean...> at /struct[1]/boolean[1] 
    to <int...> at /struct[1]/int[1] (DIFFERENT)
Expected text value 'false' but was '3' - 
  comparing <boolean ...>false</boolean> at /struct[1]/boolean[1]/text()[1] 
    to <int ...>3</int> at /struct[1]/int[1]/text()[1] (DIFFERENT)

Each Difference instance describes both the type of difference found between a control node and a test node, and the details of those nodes (including the XPath location of each node).

If we want to force the Difference Engine to stop after the first difference is found and not proceed to enumerate further differences – we need to supply a ComparisonController:

@Test
public void given2XMLS_whenGeneratesOneDifference_thenCorrect(){
    String myControlXML = "<struct><int>3</int><boolean>false</boolean></struct>";
    String myTestXML = "<struct><boolean>false</boolean><int>3</int></struct>";
    
    Diff myDiff = DiffBuilder
      .compare(myControlXML)
      .withTest(myTestXML)
      .withComparisonController(ComparisonControllers.StopWhenDifferent)
       .build();
    
    Iterator<Difference> iter = myDiff.getDifferences().iterator();
    int size = 0;
    while (iter.hasNext()) {
        iter.next().toString();
        size++;
    }
    assertThat(size, equalTo(1));
}

The difference message is simpler:

Expected element tag name 'int' but was 'boolean' - 
  comparing <int...> at /struct[1]/int[1] 
    to <boolean...> at /struct[1]/boolean[1] (DIFFERENT)

4. Input Sources

With XMLUnit, we can pick XML data from a variety of sources that may be convenient for our application’s needs. In this case, we use the Input class with its array of static methods.

To pick input from an XML file located in the project root, we do the following:

@Test
public void givenFileSource_whenAbleToInput_thenCorrect() {
    ClassLoader classLoader = getClass().getClassLoader();
    String testPath = classLoader.getResource("test.xml").getPath();
    String controlPath = classLoader.getResource("control.xml").getPath();
    
    assertThat(
      Input.fromFile(testPath), isSimilarTo(Input.fromFile(controlPath)));
}

To pick an input source from an XML string, like so:

@Test
public void givenStringSource_whenAbleToInput_thenCorrect() {
    String controlXml = "<struct><int>3</int><boolean>false</boolean></struct>";
    String testXml = "<struct><int>3</int><boolean>false</boolean></struct>";
    
    assertThat(
      Input.fromString(testXml),isSimilarTo(Input.fromString(controlXml)));
}

Let’s now use a stream as the input:

@Test
public void givenStreamAsSource_whenAbleToInput_thenCorrect() {
    assertThat(Input.fromStream(XMLUnitTests.class
      .getResourceAsStream("/test.xml")),
        isSimilarTo(
          Input.fromStream(XMLUnitTests.class
            .getResourceAsStream("/control.xml"))));
}

We could also use Input.from(Object) where we pass in any valid source to be resolved by XMLUnit.

For example, we can pass a file in:

@Test
public void givenFileSourceAsObject_whenAbleToInput_thenCorrect() {
    ClassLoader classLoader = getClass().getClassLoader();
    
    assertThat(
      Input.from(new File(classLoader.getResource("test.xml").getFile())), 
      isSimilarTo(Input.from(new File(classLoader.getResource("control.xml").getFile()))));
}

Or a String:

@Test
public void givenStringSourceAsObject_whenAbleToInput_thenCorrect() {
    assertThat(
      Input.from("<struct><int>3</int><boolean>false</boolean></struct>"),
      isSimilarTo(Input.from("<struct><int>3</int><boolean>false</boolean></struct>")));
}

Or a Stream:

@Test
public void givenStreamAsObject_whenAbleToInput_thenCorrect() {
    assertThat(
      Input.from(XMLUnitTest.class.getResourceAsStream("/test.xml")), 
      isSimilarTo(Input.from(XMLUnitTest.class.getResourceAsStream("/control.xml"))));
}

and they will all be resolved.

5. Comparing Specific Nodes

In section 3 above, we only looked at identical XML; similar XML needs a little bit of customization using features from the xmlunit-core library:

@Test
public void given2XMLS_whenSimilar_thenCorrect() {
    String controlXml = "<struct><int>3</int><boolean>false</boolean></struct>";
    String testXml = "<struct><boolean>false</boolean><int>3</int></struct>";
    
    assertThat(testXml, isSimilarTo(controlXml));
}

We might expect the above test to pass since the XMLs have similar nodes; however, it fails. This is because XMLUnit compares control and test nodes at the same depth relative to the root node.

So an isSimilarTo condition is a little bit more interesting to test than an isIdenticalTo condition. The node <int>3</int> in controlXml will be compared with <boolean>false</boolean> in testXml, automatically giving the failure message:

java.lang.AssertionError: 
Expected: Expected element tag name 'int' but was 'boolean' - 
  comparing <int...> at /struct[1]/int[1] to <boolean...> at /struct[1]/boolean[1]:
<int>3</int>
   but: result was: 
<boolean>false</boolean>

This is where the DefaultNodeMatcher and ElementSelector classes of XMLUnit come in handy.

The DefaultNodeMatcher class is consulted by XMLUnit at the comparison stage, as it loops over the nodes of controlXml, to determine which XML node from testXml to compare with the current node it encounters in controlXml.

Before that, DefaultNodeMatcher will have already consulted ElementSelector to decide how to match nodes.

Our test failed because, in the default state, XMLUnit uses a depth-first approach to traverse the XMLs and matches nodes based on document order, hence <int> is matched with <boolean>.

Let’s tweak our test so that it passes:

@Test
public void given2XMLS_whenSimilarWithNodeMatcher_thenCorrect() {
    String controlXml = "<struct><int>3</int><boolean>false</boolean></struct>";
    String testXml = "<struct><boolean>false</boolean><int>3</int></struct>";
    
    assertThat(testXml, 
      isSimilarTo(controlXml).withNodeMatcher(
      new DefaultNodeMatcher(ElementSelectors.byName)));
}

In this case, we are telling the DefaultNodeMatcher that, when XMLUnit asks for a node to compare, the nodes should already have been sorted and matched by their element names.

The initial failing example was equivalent to passing ElementSelectors.Default to the DefaultNodeMatcher.

Alternatively, we could have used a Diff from xmlunit-core rather than using xmlunit-matchers:

@Test
public void given2XMLs_whenSimilarWithDiff_thenCorrect() throws Exception {
    String myControlXML = "<struct><int>3</int><boolean>false</boolean></struct>";
    String myTestXML = "<struct><boolean>false</boolean><int>3</int></struct>";
    Diff myDiffSimilar = DiffBuilder.compare(myControlXML).withTest(myTestXML)
      .withNodeMatcher(new DefaultNodeMatcher(ElementSelectors.byName))
      .checkForSimilar().build();
    
    assertFalse("XML similar " + myDiffSimilar.toString(),
      myDiffSimilar.hasDifferences());
}

6. Custom DifferenceEvaluator

A DifferenceEvaluator makes determinations of the outcome of a comparison. Its role is restricted to determining the severity of a comparison’s outcome.

It’s the class that decides whether two XML pieces are identical, similar or different.

Consider the following XML pieces:

<a>
    <b attr="abc">
    </b>
</a>

and:

<a>
    <b attr="xyz">
    </b>
</a>

In the default state, they are technically evaluated as different because their attr attributes have different values. Let’s take a look at a test:

@Test
public void given2XMLsWithDifferences_whenTestsDifferentWithoutDifferenceEvaluator_thenCorrect(){
    final String control = "<a><b attr=\"abc\"></b></a>";
    final String test = "<a><b attr=\"xyz\"></b></a>";
    Diff myDiff = DiffBuilder.compare(control).withTest(test)
      .checkForSimilar().build();
    assertFalse(myDiff.toString(), myDiff.hasDifferences());
}

Failure message:

java.lang.AssertionError: Expected attribute value 'abc' but was 'xyz' - 
  comparing <b attr="abc"...> at /a[1]/b[1]/@attr 
  to <b attr="xyz"...> at /a[1]/b[1]/@attr

If we don’t really care about the attribute, we can change the behaviour of DifferenceEvaluator to ignore it. We do this by creating our own:

public class IgnoreAttributeDifferenceEvaluator implements DifferenceEvaluator {
    private String attributeName;
    public IgnoreAttributeDifferenceEvaluator(String attributeName) {
        this.attributeName = attributeName;
    }
    
    @Override
    public ComparisonResult evaluate(Comparison comparison, ComparisonResult outcome) {
        if (outcome == ComparisonResult.EQUAL)
            return outcome;
        final Node controlNode = comparison.getControlDetails().getTarget();
        if (controlNode instanceof Attr) {
            Attr attr = (Attr) controlNode;
            if (attr.getName().equals(attributeName)) {
                return ComparisonResult.SIMILAR;
            }
        }
        return outcome;
    }
}

We then rewrite our initial failed test and supply our own DifferenceEvaluator instance, like so:

@Test
public void given2XMLsWithDifferences_whenTestsSimilarWithDifferenceEvaluator_thenCorrect() {
    final String control = "<a><b attr=\"abc\"></b></a>";
    final String test = "<a><b attr=\"xyz\"></b></a>";
    Diff myDiff = DiffBuilder.compare(control).withTest(test)
      .withDifferenceEvaluator(new IgnoreAttributeDifferenceEvaluator("attr"))
      .checkForSimilar().build();
    
    assertFalse(myDiff.toString(), myDiff.hasDifferences());
}

This time it passes.

7. Validation

XMLUnit performs XML validation using the Validator class. You create an instance of it using the forLanguage factory method, passing in the schema language to use in validation.

The schema language is passed in as a URI; XMLUnit abstracts the schema languages it supports as constants in the Languages class.

We typically create an instance of Validator class like so:

Validator v = Validator.forLanguage(Languages.W3C_XML_SCHEMA_NS_URI);

After this step, if we have our own XSD file to validate our XML against, we simply specify its source and then call the Validator’s validateInstance method with our XML file source.

Take for example our students.xsd:

<?xml version = "1.0"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
    <xs:element name='class'>
        <xs:complexType>
            <xs:sequence>
                <xs:element name='student' type='StudentObject'
                   minOccurs='0' maxOccurs='unbounded' />
            </xs:sequence>
        </xs:complexType>
    </xs:element>
    <xs:complexType name="StudentObject">
        <xs:sequence>
            <xs:element name="name" type="xs:string" />
            <xs:element name="age" type="xs:positiveInteger" />
        </xs:sequence>
        <xs:attribute name='id' type='xs:positiveInteger' />
    </xs:complexType>
</xs:schema>

And students.xml:

<?xml version = "1.0"?>
<class>
    <student id="393">
        <name>Rajiv</name>
        <age>18</age>
    </student>
    <student id="493">
        <name>Candie</name>
        <age>19</age>
    </student>
</class>

Let’s then run a test:

@Test
public void givenXml_whenValidatesAgainstXsd_thenCorrect() {
    Validator v = Validator.forLanguage(Languages.W3C_XML_SCHEMA_NS_URI);
    v.setSchemaSource(Input.fromStream(
      XMLUnitTests.class.getResourceAsStream("/students.xsd")).build());
    ValidationResult r = v.validateInstance(Input.fromStream(
      XMLUnitTests.class.getResourceAsStream("/students.xml")).build());
    Iterator<ValidationProblem> probs = r.getProblems().iterator();
    while (probs.hasNext()) {
        probs.next().toString();
    }
    assertTrue(r.isValid());
}

The result of the validation is an instance of ValidationResult which contains a boolean flag indicating whether the document has been validated successfully.

The ValidationResult also contains an Iterable with ValidationProblems in case of failure. Let’s create a new XML file with errors called students_with_error.xml, in which the opening tags are all <studet> instead of <student>:

<?xml version = "1.0"?>
<class>
    <studet id="393">
        <name>Rajiv</name>
        <age>18</age>
    </student>
    <studet id="493">
        <name>Candie</name>
        <age>19</age>
    </student>
</class>

Then run this test against it:

@Test
public void givenXmlWithErrors_whenReturnsValidationProblems_thenCorrect() {
    Validator v = Validator.forLanguage(Languages.W3C_XML_SCHEMA_NS_URI);
    v.setSchemaSource(Input.fromStream(
       XMLUnitTests.class.getResourceAsStream("/students.xsd")).build());
    ValidationResult r = v.validateInstance(Input.fromStream(
      XMLUnitTests.class.getResourceAsStream("/students_with_error.xml")).build());
    Iterator<ValidationProblem> probs = r.getProblems().iterator();
    int count = 0;
    while (probs.hasNext()) {
        count++;
        probs.next().toString();
    }
    assertTrue(count > 0);
}

If we were to print the errors in the while loop, they would look like:

ValidationProblem { line=3, column=19, type=ERROR,message='cvc-complex-type.2.4.a: 
  Invalid content was found starting with element 'studet'. 
    One of '{student}' is expected.' }
ValidationProblem { line=6, column=4, type=ERROR, message='The element type "studet" 
  must be terminated by the matching end-tag "</studet>".' }
ValidationProblem { line=6, column=4, type=ERROR, message='The element type "studet" 
  must be terminated by the matching end-tag "</studet>".' }

8. XPath

When an XPath expression is evaluated against a piece of XML a NodeList is created that contains the matching Nodes.

Consider this piece of XML saved in a file called teachers.xml:

<teachers>
    <teacher department="science" id='309'>
        <subject>math</subject>
        <subject>physics</subject>
    </teacher>
    <teacher department="arts" id='310'>
        <subject>political education</subject>
        <subject>english</subject>
    </teacher>
</teachers>

XMLUnit offers a number of XPath related assertion methods, as demonstrated below.

We can retrieve all the nodes called teacher and perform assertions on them individually:

@Test
public void givenXPath_whenAbleToRetrieveNodes_thenCorrect() {
    Iterable<Node> i = new JAXPXPathEngine()
      .selectNodes("//teacher", Input.fromFile(new File("teachers.xml")).build());
    assertNotNull(i);
    int count = 0;
    for (Iterator<Node> it = i.iterator(); it.hasNext();) {
        count++;
        Node node = it.next();
        assertEquals("teacher", node.getNodeName());
        
        NamedNodeMap map = node.getAttributes();
        assertEquals("department", map.item(0).getNodeName());
        assertEquals("id", map.item(1).getNodeName());
        assertEquals("teacher", node.getNodeName());
    }
    assertEquals(2, count);
}

Notice how we validate the number of child nodes, the name of each node and the attributes in each node. Many more options are available after retrieving the Node.

To verify that a path exists, we can do the following:

@Test
public void givenXmlSource_whenAbleToValidateExistingXPath_thenCorrect() {
    assertThat(Input.fromFile(new File("teachers.xml")), hasXPath("//teachers"));
    assertThat(Input.fromFile(new File("teachers.xml")), hasXPath("//teacher"));
    assertThat(Input.fromFile(new File("teachers.xml")), hasXPath("//subject"));
    assertThat(Input.fromFile(new File("teachers.xml")), hasXPath("//@department"));
}

To verify that a path does not exist, this is what we can do:

@Test
public void givenXmlSource_whenFailsToValidateInExistentXPath_thenCorrect() {
    assertThat(Input.fromFile(new File("teachers.xml")), not(hasXPath("//sujet")));
}

XPaths are especially useful where a document is made up largely of known, unchanging content with only a small amount of changing content created by the system.

9. Conclusion

In this tutorial, we have introduced most of the basic features of XMLUnit 2.x and how to use them to validate XML documents in our applications.

The full implementation of all these examples and code snippets can be found in the XMLUnit GitHub project.


Intro to Spring Security Expressions


1. Introduction

In this tutorial we’ll focus on Spring Security Expressions, and of course on practical examples with these expressions.

Before looking at more complex implementations (such as ACL), it’s important to have a solid grasp on security expressions – as they can be quite flexible and powerful if used correctly.

This article is an extension of Spring Security Expressions – hasRole Example.

2. Maven Dependencies

In order to use Spring Security, you need to include the following section in your pom.xml file:

<dependencies>
    <dependency>
        <groupId>org.springframework.security</groupId>
        <artifactId>spring-security-web</artifactId>
        <version>4.1.1.RELEASE</version>
    </dependency>
</dependencies>

Latest version can be found here.

And a quick note – this dependency only covers Spring Security; don’t forget to add spring-core and spring-context for a full web application.

3. Configuration

First, let’s take a look at a Java configuration.

We’ll extend WebSecurityConfigurerAdapter – so that we have the option to hook into any of the extension points that base class offers:

@Configuration
@EnableAutoConfiguration
@EnableWebSecurity
@EnableGlobalMethodSecurity(prePostEnabled = true)
public class SecurityWithoutCsrfConfig extends WebSecurityConfigurerAdapter {
    ...
}

We can of course do an XML configuration as well:

<?xml version="1.0" encoding="UTF-8"?>
<beans:beans ...>
    <global-method-security pre-post-annotations="enabled"/>
</beans:beans>

4. Web Security Expressions

Now, let’s start looking at the security expressions:

  • hasRole, hasAnyRole
  • hasAuthority, hasAnyAuthority
  • permitAll, denyAll
  • isAnonymous, isRememberMe, isAuthenticated, isFullyAuthenticated
  • principal, authentication
  • hasPermission

And let’s now go over each of these in detail.

4.1. hasRole, hasAnyRole

These expressions are responsible for defining the access control or authorization to specific URLs or methods in your application.

Let’s look at the example:

@Override
protected void configure(final HttpSecurity http) throws Exception {
    ...
    .antMatchers("/auth/admin/*").hasRole("ADMIN")
    .antMatchers("/auth/*").hasAnyRole("ADMIN","USER")
    ...
}

In this example, we restrict access to all URLs starting with /auth/ to users that are logged in with the role USER or the role ADMIN. Moreover, to access URLs starting with /auth/admin/, we need to have the ADMIN role in the system.

The same configuration can be achieved in an XML file by writing:

<http>
    <intercept-url pattern="/auth/admin/*" access="hasRole('ADMIN')"/>
    <intercept-url pattern="/auth/*" access="hasAnyRole('ADMIN','USER')"/>
</http>

4.2. hasAuthority, hasAnyAuthority

Roles and authorities are similar in Spring.

The main difference is that roles have special semantics – starting with Spring Security 4, the ‘ROLE_‘ prefix is automatically added (if it’s not already there) by any role-related method.

So hasAuthority(‘ROLE_ADMIN’) is similar to hasRole(‘ADMIN’) because the ‘ROLE_‘ prefix gets added automatically.
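
In other words, these two rules are effectively equivalent:

...
.antMatchers("/auth/admin/*").hasRole("ADMIN")
// equivalent to:
.antMatchers("/auth/admin/*").hasAuthority("ROLE_ADMIN")
...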

But the good thing about using authorities is that we don’t have to use the ROLE_ prefix at all.

Here’s a quick example where we’re defining users with specific authorities:

@Override
protected void configure(AuthenticationManagerBuilder auth) throws Exception {
    auth.inMemoryAuthentication()
      .withUser("user1").password("user1Pass")
      .authorities("USER")
      .and().withUser("admin").password("adminPass")
      .authorities("ADMIN");
}

We can then of course use these authorities expressions:

@Override
protected void configure(final HttpSecurity http) throws Exception {
    ...
    .antMatchers("/auth/admin/*").hasAuthority("ADMIN")
    .antMatchers("/auth/*").hasAnyAuthority("ADMIN", "USER")
    ...
}

As we can see – we’re not mentioning roles at all here.

Finally – we can of course achieve the same functionality using XML configuration as well:

<authentication-manager>
    <authentication-provider>
        <user-service>
            <user name="user1" password="user1Pass" authorities="ROLE_USER"/>
            <user name="admin" password="adminPass" authorities="ROLE_ADMIN"/>
        </user-service>
    </authentication-provider>
</authentication-manager>

And:

<http>
    <intercept-url pattern="/auth/admin/*" access="hasAuthority('ADMIN')"/>
    <intercept-url pattern="/auth/*" access="hasAnyAuthority('ADMIN','USER')"/>
</http>

4.3. permitAll, denyAll

These two expressions are also quite straightforward – we may either permit access to some URL in our service or deny it.

Let’s have a look at the example:

...
.antMatchers("/*").permitAll()
...

With this config, we will authorize all users (both anonymous and logged in) to access the page starting with ‘/’ (for example, our homepage).

We can also deny access to our entire URL space:

...
.antMatchers("/*").denyAll()
...

And again, the same config can be done with an XML config as well:

<http auto-config="true" use-expressions="true">
    <intercept-url access="permitAll" pattern="/*" /> <!-- Choose only one -->
    <intercept-url access="denyAll" pattern="/*" /> <!-- Choose only one -->
</http>

4.4. isAnonymous, isRememberMe, isAuthenticated, isFullyAuthenticated

In this subsection, we focus on expressions related to the login status of the user. Let’s start with a user that isn’t logged in. By specifying the following in the Java config, we allow all anonymous (not authenticated) users to access our main page:

...
.antMatchers("/*").anonymous()
...

The same in XML config:

<http>
    <intercept-url pattern="/*" access="isAnonymous()"/>
</http>

If we want to secure the website so that everybody who uses it must log in, we need to use the isAuthenticated() expression:

...
.antMatchers("/*").authenticated()
...

or XML version:

<http>
    <intercept-url pattern="/*" access="isAuthenticated()"/>
</http>

Moreover, we have two additional expressions, isRememberMe() and isFullyAuthenticated(). Through the use of cookies, Spring enables remember-me capabilities so there is no need to log into the system each time. You can read more about Remember Me here.

In order to give access to users that were logged in only via the remember-me function, we may use this:

...
.antMatchers("/*").rememberMe()
...

or XML version:

<http>
    <intercept-url pattern="*" access="isRememberMe()"/>
</http>

Finally, some parts of our services require the user to be authenticated again even if the user is already logged in – for example, when the user wants to change settings or payment information; it’s of course good practice to ask for manual authentication in these more sensitive areas of the system.

In order to do that, we may specify isFullyAuthenticated(), which returns true if the user is not an anonymous or a remember-me user:

...
.antMatchers("/*").fullyAuthenticated()
...

or the XML version:

<http>
    <intercept-url pattern="*" access="isFullyAuthenticated()"/>
</http>

4.5. principal, authentication

These expressions allow accessing the principal object representing the currently authenticated (or anonymous) user and the current Authentication object from the SecurityContext, respectively.

We can, for example, use principal to load a user’s email, avatar, or any other data that is accessible for the logged-in user.

And authentication provides information about the full Authentication object, along with its granted authorities.
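
As a quick, hypothetical illustration (not part of the configuration above), a controller method can receive the same Authentication object that the authentication expression exposes, and read the principal from it:

@RequestMapping(value = "/currentUsername", method = RequestMethod.GET)
@ResponseBody
public String currentUserName(Authentication authentication) {
    // assuming the principal is a UserDetails implementation,
    // this is the same object the "principal" expression refers to
    UserDetails principal = (UserDetails) authentication.getPrincipal();
    return principal.getUsername();
}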

Both are described in further detail in the following article: Retrieve User Information in Spring Security.

4.6. hasPermission APIs

This expression is documented and intended to bridge between the expression system and Spring Security’s ACL system, allowing us to specify authorization constraints on individual domain objects, based on abstract permissions.

Let’s have a look at an example. We have a service that allows cooperative writing articles, with a main editor, deciding which article proposed by other authors should be published.

In order to implement such a service, we can create the following method and secure it with access control:

@PreAuthorize("hasPermission(#articleId, 'isEditor')")
public void acceptArticle(Article article) {
   …
}

Only an authenticated user can call this method, and the user needs to have the isEditor permission in the service.

We also need to remember to explicitly configure a PermissionEvaluator in our application context:

<global-method-security pre-post-annotations="enabled">
    <expression-handler ref="expressionHandler"/>
</global-method-security>

<bean id="expressionHandler"
    class="org.springframework.security.access.expression
      .method.DefaultMethodSecurityExpressionHandler">
    <property name="permissionEvaluator" ref="customInterfaceImplementation"/>
</bean>

where customInterfaceImplementation will be the class that implements PermissionEvaluator. 
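
For reference, a bare-bones implementation of that evaluator might look like this – a minimal sketch, where the authority-based check is purely an illustrative assumption:

public class CustomInterfaceImplementation implements PermissionEvaluator {

    @Override
    public boolean hasPermission(
      Authentication auth, Object targetDomainObject, Object permission) {
        if ((auth == null) || (permission == null)) {
            return false;
        }
        // illustrative strategy: grant access when the user holds an authority
        // matching the requested permission
        return auth.getAuthorities().stream()
          .anyMatch(a -> a.getAuthority().equalsIgnoreCase(permission.toString()));
    }

    @Override
    public boolean hasPermission(
      Authentication auth, Serializable targetId, String targetType, Object permission) {
        return hasPermission(auth, (Object) targetType, permission);
    }
}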

Of course we can also do this with Java configuration as well:

@Override
protected MethodSecurityExpressionHandler createExpressionHandler() {
    DefaultMethodSecurityExpressionHandler expressionHandler = 
      new DefaultMethodSecurityExpressionHandler();
    expressionHandler.setPermissionEvaluator(new CustomInterfaceImplementation());
    return expressionHandler;
}

5. Conclusion

This tutorial is a comprehensive introduction and guide to Spring Security Expressions.

All examples discussed here are available on the GitHub project.

Quick Guide to Spring MVC with Velocity


1. Introduction

Velocity is a template engine from the Apache Software Foundation that can work with normal text files, SQL, XML, Java code and many other types.

In this article we’re going to focus on utilizing Velocity with a typical Spring MVC web application.

2. Maven Dependencies

Let’s start by enabling the Velocity support – with the following dependencies:

<dependency>
    <groupId>org.apache.velocity</groupId>
    <artifactId>velocity</artifactId>
    <version>1.7</version>
</dependency>
		
<dependency>
    <groupId>org.apache.velocity</groupId>
    <artifactId>velocity-tools</artifactId>
    <version>2.0</version>
</dependency>

The newest versions of both can be found here: velocity and velocity-tools.

3. Configuration

3.1. Web Config

If we don’t want to use a web.xml, we can configure our web project using Java and an initializer:

public class MainWebAppInitializer implements WebApplicationInitializer {

    @Override
    public void onStartup(ServletContext sc) throws ServletException {
        AnnotationConfigWebApplicationContext root = new AnnotationConfigWebApplicationContext();
        root.register(WebConfig.class);

        sc.addListener(new ContextLoaderListener(root));

        ServletRegistration.Dynamic appServlet = 
          sc.addServlet("mvc", new DispatcherServlet(new GenericWebApplicationContext()));
        appServlet.setLoadOnStartup(1);
        // map the dispatcher servlet to the application root
        // (the web.xml alternative below maps it to "/*")
        appServlet.addMapping("/");
    }
}

Alternatively, we can of course use the traditional web.xml:

<web-app ...>
    <display-name>Spring MVC Velocity</display-name>
    <servlet>
        <servlet-name>mvc</servlet-name>
        <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
        <init-param>
            <param-name>contextConfigLocation</param-name>
            <param-value>/WEB-INF/mvc-servlet.xml</param-value>
        </init-param>
        <load-on-startup>1</load-on-startup>
    </servlet>

    <servlet-mapping>
        <servlet-name>mvc</servlet-name>
        <url-pattern>/*</url-pattern>
    </servlet-mapping>

    <context-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>/WEB-INF/spring-context.xml</param-value>
    </context-param>

    <listener>
        <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
    </listener>
</web-app>

Notice that we mapped our servlet on the “/*” path.

3.2. Spring Config

Let’s now go over a simple Spring configuration – again, starting with Java:

@Configuration
@EnableWebMvc
@ComponentScan(basePackages= {
  "com.baeldung.mvc.velocity.controller",
  "com.baeldung.mvc.velocity.service" }) 
public class WebConfig extends WebMvcConfigurerAdapter {

    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        registry
          .addResourceHandler("/resources/**")
          .addResourceLocations("/resources/");
    }
 
    @Override
    public void configureDefaultServletHandling(DefaultServletHandlerConfigurer configurer) {
        configurer.enable();
    }

    @Bean
    public ViewResolver viewResolver() {
        VelocityLayoutViewResolver bean = new VelocityLayoutViewResolver();
        bean.setCache(true);
        bean.setPrefix("/WEB-INF/views/");
        bean.setLayoutUrl("/WEB-INF/layouts/layout.vm");
        bean.setSuffix(".vm");
        return bean;
    }
    
    @Bean
    public VelocityConfigurer velocityConfig() {
        VelocityConfigurer velocityConfigurer = new VelocityConfigurer();
        velocityConfigurer.setResourceLoaderPath("/");
        return velocityConfigurer;
    }
}

And let’s also have a quick look at the XML version of the configuration:

<beans ...>
    <context:component-scan base-package="com.baeldung.mvc.velocity.*" />
    <context:annotation-config /> 
    <bean id="velocityConfig" 
      class="org.springframework.web.servlet.view.velocity.VelocityConfigurer">
        <property name="resourceLoaderPath">
            <value>/</value>
        </property>
    </bean> 
    <bean id="viewResolver"
      class="org.springframework.web.servlet.view.velocity.VelocityLayoutViewResolver">
        <property name="cache" value="true" />
        <property name="prefix" value="/WEB-INF/views/" />
        <property name="layoutUrl" value="/WEB-INF/layouts/layout.vm" />
        <property name="suffix" value=".vm" />
    </bean>
</beans>

Here we are telling Spring where to look for annotated bean definitions:

<context:component-scan base-package="com.baeldung.mvc.velocity.*" />

And we indicate that we’re going to use annotation-driven configuration in our project with the following line:

<context:annotation-config />

By creating “velocityConfig” and “viewResolver” beans we are telling VelocityConfigurer where to look for templates, and VelocityLayoutViewResolver where to find views and layouts.

4. Velocity Templates

Finally, let’s create our templates – starting with a common header:

<div style="...">
    <div style="float: left">
        <h1>Our tutorials</h1>
    </div>
</div>

and footer:

<div style="...">
    @Copyright baeldung.com
</div>

And let’s define a common layout for our site, where we’re going to include the above fragments with the #parse directive:

<html>
    <head>
        <title>Spring & Velocity</title>  
    </head>
    <body>
        <div>
            #parse("/WEB-INF/fragments/header.vm")
        </div>  
        <div>
            <!-- View index.vm is inserted here -->
            $screen_content
        </div>  
        <div>
            #parse("/WEB-INF/fragments/footer.vm")
        </div>
    </body>
</html>

Note that the $screen_content variable holds the content of the actual view – index.vm, in our case.

Finally, we’ll create a template for the main content:

<h1>Index</h1>
 
<h2>Tutorials list</h2>
<table border="1">
    <tr>
        <th>Tutorial Id</th>
        <th>Tutorial Title</th>
        <th>Tutorial Description</th>
        <th>Tutorial Author</th>
    </tr>
    #foreach($tut in $tutorials)
    <tr>
        <td>$tut.tutId</td>
        <td>$tut.title</td>
        <td>$tut.description</td>
        <td>$tut.author</td>
    </tr>
    #end
</table>

5. Controller Side

We have created a simple controller which returns a list of tutorials as content for our layout to be populated with:

@Controller
@RequestMapping("/")
public class MainController {
 
    @Autowired
    private ITutorialsService tutService;

    @RequestMapping(value ="/", method = RequestMethod.GET)
    public String defaultPage() {
        return "index";
    }

    @RequestMapping(value ="/list", method = RequestMethod.GET)
    public String listTutorialsPage(Model model) { 
        List<Tutorial> list = tutService.listTutorials();
        model.addAttribute("tutorials", list);
        return "index";
    }
}
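
The Tutorial model and the ITutorialsService interface aren’t shown above; a minimal sketch consistent with the property names referenced in the template ($tut.tutId, $tut.title, and so on) might look like this:

public class Tutorial {
    private final int tutId;
    private final String title;
    private final String description;
    private final String author;

    public Tutorial(int tutId, String title, String description, String author) {
        this.tutId = tutId;
        this.title = title;
        this.description = description;
        this.author = author;
    }

    // standard getters – Velocity resolves $tut.title via getTitle(), and so on
}

public interface ITutorialsService {
    List<Tutorial> listTutorials();
}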

Finally, we can access this simple example locally – for example at: localhost:8080/spring-mvc-velocity/

6. Conclusion

In this simple tutorial, we have configured a Spring MVC web application with the Velocity template engine.

The full sample code for this tutorial can be found in our GitHub repository.


Java Web Weekly, Issue 135


At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> Reactive Programming with Spring 5.0 M1 [spring.io]

Reactive support has finally been merged and is included in this cut of the 5.x version of the framework.

If you’ve been holding off from exploring it before, now’s a good time to get your feet wet.

>> Java on Steroids: 5 Super Useful JIT Optimization Techniques [takipi.com]

Interesting notes about what the JVM’s JIT compiler does to optimize the performance of our production code.

>> Java 8 Top Tips [jetbrains.com]

A good read with or without the IntelliJ tips. Of course if you’re using IntelliJ, it’s even better.

>> Oh No, I Forgot Stream::iterate! [codefx.org]

Quick and to the point – some nice APIs are coming along with Java 9.

>> The best way to map a @OneToOne relationship with JPA and Hibernate [vladmihalcea.com]

Hehe, the good ol’ one-to-one relationship.

>> Oracle Paves the Way to Standardise Command Line Options in the JDK [infoq.com]

A quick positive news item slotted for Java 10, among all the negative news about Java EE lately.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

>> DDD Decoded – Don’t Fear Eventual Consistency [sapiensworks.com]

“The only people worried about Eventual Consistency are CRUD programmers”. Priceless.

I could keep quoting, but do yourself a favor and read this one, not only for the technical aspects, but also for the laughs.

>> I wanna go fast: HTTPS’ massive speed advantage [troyhunt.com]

Quite some interesting data here, looking at the speed advantage of HTTP/2.

>> The Business-Personal Value Continuum [daedtech.com]

The writeup explores a tough topic to pin down – the way we approach our work, and the balance between our natural pull towards craftsmanship and actually putting our work out into the world.

I personally strive to keep a healthy tension between the two and, as Sean says – ship when it’s 90% perfect.

>> Logs for SEO [daedtech.com]

A fun read for anyone running any kind of site and dabbling in SEO.

>> 12 great months for Thoughts on Java and some big changes ahead [thoughts-on-java.org]

I always enjoy this kind of post. I think there’s a lot here to unpack, and reading through the thought process someone else has about a tough life choice is quite helpful. Especially when it deals with such important decisions as – should I quit my job?

Also worth reading:

3. Comics

And my favorite Dilberts of the week:

>> Mordac, the preventer of information services [dilbert.com]

>> What do you do for a living? [dilbert.com]

>> This design would be inefficient [dilbert.com]

4. Pick of the Week

My talk from Spring I/O 2016 came out – if you’re working on REST APIs and Hypermedia with Spring, let me know what you think:

>> Get HATEOAS and Hypermedia right with Spring [youtube.com]


Introduction to Spring with Akka


1. Introduction

In this article we’ll focus on integrating Akka with the Spring Framework – to allow injection of Spring-based services into Akka actors.

Before reading this article, a prior knowledge of Akka’s basics is recommended.

2. Dependency Injection in Akka

Akka is a powerful application framework based on the Actor concurrency model. The framework is written in Scala, which of course makes it fully usable in Java-based applications as well. And so we’ll very often want to integrate Akka with an existing Spring-based application, or simply use Spring for wiring beans into actors.

The problem with Spring/Akka integration lies in the difference between the management of beans in Spring and the management of actors in Akka: actors have a specific lifecycle that differs from the typical Spring bean lifecycle.

Moreover, actors are split into the actor itself (which is an internal implementation detail and cannot be managed by Spring) and an actor reference, which is accessible by client code, as well as serializable and portable between different Akka runtimes.

Luckily, Akka provides a mechanism, namely Akka extensions, that makes using external dependency injection frameworks a fairly easy task.

3. Maven Dependencies

To demonstrate the usage of Akka in our Spring project, we’ll need a bare minimum Spring dependency — the spring-context library, and also the akka-actor library. The library versions can be extracted to the <properties> section of the pom:

<properties>
    <spring.version>4.3.1.RELEASE</spring.version>
    <akka.version>2.4.8</akka.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-context</artifactId>
        <version>${spring.version}</version>
    </dependency>

    <dependency>
        <groupId>com.typesafe.akka</groupId>
        <artifactId>akka-actor_2.11</artifactId>
        <version>${akka.version}</version>
    </dependency>

</dependencies>

Make sure to check Maven Central for the latest versions of spring-context and akka-actor dependencies.

And notice that the akka-actor dependency has a _2.11 suffix in its name, which signifies that this version of the Akka framework was built against Scala 2.11. The corresponding version of the Scala library will be transitively included in your build.

4. Injecting Spring Beans into Akka Actors

Let’s create a simple Spring/Akka application consisting of a single actor that answers to a person’s name by issuing a greeting to that person. The greeting logic will be extracted to a separate service, and we’ll want to autowire this service into the actor instance – Spring integration will help us with this task.

4.1. Defining an Actor and a Service

To demonstrate injection of a service into an actor, we’ll create a simple class GreetingActor defined as an untyped actor (extending Akka’s UntypedActor base class). The main method of every Akka actor is the onReceive method, which receives a message and processes it according to some specified logic.

In our case, the GreetingActor implementation checks if the message is of the predefined type Greet, takes the name of the person from the Greet instance, uses the GreetingService to obtain a greeting for this person, and answers the sender with the received greeting string. If the message is of some other unknown type, it is passed to the actor’s predefined unhandled method.

Let’s have a look:

@Component
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public class GreetingActor extends UntypedActor {

    private GreetingService greetingService;

    // constructor

    @Override
    public void onReceive(Object message) throws Throwable {
        if (message instanceof Greet) {
            String name = ((Greet) message).getName();
            getSender().tell(greetingService.greet(name), getSelf());
        } else {
            unhandled(message);
        }
    }

    public static class Greet {

        private String name;

        // standard constructors/getters

    }
}

Notice that the Greet message type is defined as a static inner class inside this actor, which is considered a good practice. Accepted message types should be defined as close to an actor as possible to avoid confusion on which message types this actor can process.

Also notice the Spring annotations @Component and @Scope – these define the class as a Spring-managed bean with the prototype scope.

The scope is very important, because every bean retrieval request should result in a newly created instance, as this behavior matches Akka’s actor lifecycle. If you implement this bean with some other scope, the typical case of restarting actors in Akka will most likely function incorrectly.

Finally, notice that we didn’t have to explicitly @Autowire the GreetingService instance — this is possible due to the new feature of Spring 4.3 called Implicit Constructor Injection.

The implementation of GreetingService is pretty straightforward. Notice that we defined it as a Spring-managed bean by adding the @Component annotation (with the default singleton scope):

@Component
public class GreetingService {

    public String greet(String name) {
        return "Hello, " + name;
    }
}

4.2. Adding Spring Support via Akka Extension

The easiest way to integrate Spring with Akka is through an Akka extension.

An extension is a singleton instance created per actor system. It consists of an extension class itself, which implements the marker interface Extension, and an extension id class which usually inherits AbstractExtensionId.

As these two classes are tightly coupled, it makes sense to implement the Extension class nested within the ExtensionId class:

public class SpringExtension 
  extends AbstractExtensionId<SpringExtension.SpringExt> {

    public static final SpringExtension SPRING_EXTENSION_PROVIDER 
      = new SpringExtension();

    @Override
    public SpringExt createExtension(ExtendedActorSystem system) {
        return new SpringExt();
    }

    public static class SpringExt implements Extension {
        private volatile ApplicationContext applicationContext;

        public void initialize(ApplicationContext applicationContext) {
            this.applicationContext = applicationContext;
        }

        public Props props(String actorBeanName) {
            return Props.create(
              SpringActorProducer.class, applicationContext, actorBeanName);
        }
    }
}

First, SpringExtension implements a single createExtension method from the AbstractExtensionId class – which accounts for the creation of an extension instance, the SpringExt object.

The SpringExtension class also has a static field SPRING_EXTENSION_PROVIDER which holds a reference to its only instance. It often makes sense to add a private constructor to explicitly state that SpringExtension is supposed to be a singleton class, but we’ll omit it for clarity.

Secondly, the static inner class SpringExt is the extension itself. As Extension is simply a marker interface, we may define the contents of this class as we see fit.

In our case, we’re going to need the initialize method for keeping a Spring ApplicationContext instance — this method will be called only once per extension’s initialization.

Also, we’ll require the props method for creating a Props object. A Props instance is a blueprint for an actor, and in our case the Props.create method receives a SpringActorProducer class and constructor arguments for this class. These are the arguments that this class’ constructor will be called with.

The props method will be executed every time we need a Spring-managed actor reference.

The third and last piece of the puzzle is the SpringActorProducer class. It implements Akka’s IndirectActorProducer interface which allows overriding the instantiation process for an actor by implementing the produce and actorClass methods.

As you have probably already guessed, instead of direct instantiation, it will always retrieve an actor instance from Spring’s ApplicationContext. As we’ve made the actor a prototype-scoped bean, every call to the produce method will return a new instance of the actor:

public class SpringActorProducer implements IndirectActorProducer {

    private ApplicationContext applicationContext;

    private String beanActorName;

    public SpringActorProducer(ApplicationContext applicationContext, 
      String beanActorName) {
        this.applicationContext = applicationContext;
        this.beanActorName = beanActorName;
    }

    @Override
    public Actor produce() {
        return (Actor) applicationContext.getBean(beanActorName);
    }

    @Override
    public Class<? extends Actor> actorClass() {
        return (Class<? extends Actor>) applicationContext
          .getType(beanActorName);
    }
}

4.3. Putting It All Together

The only thing that’s left to do is to create a Spring configuration class (marked with @Configuration annotation) which will tell Spring to scan the current package together with all nested packages (this is ensured by the @ComponentScan annotation) and create a Spring container.

We only need to add a single additional bean — the ActorSystem instance — and initialize the Spring extension on this ActorSystem:

@Configuration
@ComponentScan
public class AppConfiguration {

    @Autowired
    private ApplicationContext applicationContext;

    @Bean
    public ActorSystem actorSystem() {
        ActorSystem system = ActorSystem.create("akka-spring-demo");
        SPRING_EXTENSION_PROVIDER.get(system)
          .initialize(applicationContext);
        return system;
    }
}

4.4. Retrieving Spring-Wired Actors

To test that everything works correctly, we may inject the ActorSystem instance into our code (either some Spring-managed application code, or a Spring-based test), create a Props object for an actor using our extension, retrieve a reference to an actor via Props object and try to greet somebody:

ActorRef greeter = system.actorOf(SPRING_EXTENSION_PROVIDER.get(system)
  .props("greetingActor"), "greeter");

FiniteDuration duration = FiniteDuration.create(1, TimeUnit.SECONDS);
Timeout timeout = Timeout.durationToTimeout(duration);

Future<Object> result = ask(greeter, new Greet("John"), timeout);

Assert.assertEquals("Hello, John", Await.result(result, duration));

Here we use the typical akka.pattern.Patterns.ask pattern that returns a Scala Future instance. Once the computation is completed, the Future is resolved with the value that we returned in our GreetingActor.onReceive method.

We may either wait for the result by applying Scala’s Await.result method to the Future or, preferably, build the entire application with asynchronous patterns.
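
For instance, a non-blocking variant could register a callback via Akka’s OnComplete adapter instead of blocking – a quick sketch, with the callback body being our own example:

result.onComplete(new OnComplete<Object>() {
    @Override
    public void onComplete(Throwable failure, Object greeting) {
        if (failure == null) {
            // invoked asynchronously on the actor system's dispatcher
            System.out.println(greeting);
        }
    }
}, system.dispatcher());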

5. Conclusion

In this article we’ve shown how to integrate the Spring Framework with Akka and autowire beans into actors.

The source code for the article is available on GitHub.



Introduction to AutoValue


1. Overview

AutoValue is a source code generator for Java, and more specifically it’s a library for generating source code for value objects or value-typed objects.

In order to generate a value-type object, all you have to do is annotate an abstract class with the @AutoValue annotation and compile your class. What is generated is a value object with accessor methods, a parameterized constructor, and properly overridden toString(), equals(Object) and hashCode() methods.

The following code snippet is a quick example of an abstract class that, when compiled, will result in a value object named AutoValue_Person:

@AutoValue
abstract class Person {
    static Person create(String name, int age) {
        return new AutoValue_Person(name, age);
    }

    abstract String name();
    abstract int age();
}

Let’s continue and find out more about value objects, why we need them, and how AutoValue can help make the task of generating and refactoring code much less time-consuming.

2. Maven Setup

To use AutoValue in a Maven project, you need to include the following dependency in the pom.xml:

<dependency>
    <groupId>com.google.auto.value</groupId>
    <artifactId>auto-value</artifactId>
    <version>1.2</version>
</dependency>

The latest version can be found by following this link.

3. Value-Typed Objects

Value-types are the end product of this library, so to appreciate its place in our development tasks, we must thoroughly understand value-types: what they are, what they are not, and why we need them.

3.1. What Are Value-Types?

Value-type objects are objects whose equality to one another is not determined by identity but rather their internal state. This means that two instances of a value-typed object are considered equal as long as they have equal field values.

Typically, value-types are immutable. Their fields must be made final, and they must not have setter methods, as these would make them changeable after instantiation.

They must consume all field values through a constructor or a factory method.

Value-types are not JavaBeans, because they don’t have a default or zero-argument constructor, nor do they have setter methods; similarly, they are neither Data Transfer Objects nor Plain Old Java Objects.

Additionally, a value-typed class must be final, so that it is not extendable, lest someone override its methods. JavaBeans, DTOs and POJOs need not be final.

3.2. Creating a Value-Type

Assume we want to create a value-type called Foo with fields called text and number. How would we go about it?

We would make a final class and mark all its fields as final. Then we would use the IDE to generate the constructor, the hashCode() method, the equals(Object) method, the getters as mandatory methods and a toString() method, and we would have a class like so:

public final class Foo {
    private final String text;
    private final int number;
    
    public Foo(String text, int number) {
        this.text = text;
        this.number = number;
    }
    
    // standard getters
    
    @Override
    public int hashCode() {
        return Objects.hash(text, number);
    }
    @Override
    public String toString() {
        return "Foo [text=" + text + ", number=" + number + "]";
    }
    @Override
    public boolean equals(Object obj) {
        if (this == obj) return true;
        if (obj == null) return false;
        if (getClass() != obj.getClass()) return false;
        Foo other = (Foo) obj;
        if (number != other.number) return false;
        if (text == null) {
            if (other.text != null) return false;
        } else if (!text.equals(other.text)) {
            return false;
        }
        return true;
    }
}

After creating an instance of Foo, we expect its internal state to remain the same for its entire life cycle.

As we will see in a following subsection, the default hashCode of an object differs from instance to instance; for value-types, however, we have to tie it to the fields which define the internal state of the value object.

Therefore, even changing a field of the same object would change the hashCode value.

3.3. How Value-Types Work

The reason value-types must be immutable is to prevent any change to their internal state by the application after they have been instantiated.

Whenever we want to compare any two value-typed objects, we must therefore use the equals(Object) method of the Object class.

This means that we must always override this method in our own value-types and only return true if the fields of the value objects we are comparing have equal values.

Moreover, for us to use our value objects in hash-based collections like HashSets and HashMaps without breaking, we must properly implement the hashCode() method.
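
As a quick sketch of the expected behavior, using the Foo class above:

Set<Foo> foos = new HashSet<>();
foos.add(new Foo("sample", 1));
foos.add(new Foo("sample", 1));

// both instances have the same internal state, so with equals() and hashCode()
// properly overridden, the set keeps only one element
assertEquals(1, foos.size());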

3.4. Why We Need Value-Types

The need for value-types comes up quite often. These are cases where we would like to override the default behaviour of the original Object class.

As we already know, the default implementation of the Object class considers two objects equal when they have the same identity; however, for our purposes, we consider two objects equal when they have the same internal state.

Assuming we would like to create a money object as follows:

public class MutableMoney {
    private long amount;
    private String currency;
    
    public MutableMoney(long amount, String currency) {
        this.amount = amount;
        this.currency = currency;
    }
    
    // standard getters and setters
    
}

We can run the following test on it to check its equality semantics:

@Test
public void givenTwoSameValueMoneyObjects_whenEqualityTestFails_thenCorrect() {
    MutableMoney m1 = new MutableMoney(10000, "USD");
    MutableMoney m2 = new MutableMoney(10000, "USD");
    assertFalse(m1.equals(m2));
}

Notice the semantics of the test.

We consider it to have passed when the two money objects are not equal. This is because we have not overridden the equals method, so equality is measured by comparing the memory references of the objects – which of course are going to be different, because they are different objects occupying different memory locations.

Each object represents 10,000 USD but Java tells us our money objects are not equal. We want the two objects to test unequal only when either the currency amounts are different or the currency types are different.

Now let us create an equivalent value object and this time we will let the IDE generate most of the code:

public final class ImmutableMoney {
    private final long amount;
    private final String currency;
    
    public ImmutableMoney(long amount, String currency) {
        this.amount = amount;
        this.currency = currency;
    }
    
    @Override
    public int hashCode() {
        final int prime = 31;
        int result = 1;
        result = prime * result + (int) (amount ^ (amount >>> 32));
        result = prime * result	+ ((currency == null) ? 0 : currency.hashCode());
        return result;
    }
    
    @Override
    public boolean equals(Object obj) {
        if (this == obj) return true;
        if (obj == null) return false;
        if (getClass() != obj.getClass()) return false;
        ImmutableMoney other = (ImmutableMoney) obj;
        if (amount != other.amount) return false;
        if (currency == null) {
            if (other.currency != null) return false;
        } else if (!currency.equals(other.currency))
            return false;
        return true;
    }
}

The only difference is that we overrode the equals(Object) and hashCode() methods; now we have control over how Java compares our money objects. Let’s run the equivalent test:

@Test
public void givenTwoSameValueMoneyValueObjects_whenEqualityTestPasses_thenCorrect() {
    ImmutableMoney m1 = new ImmutableMoney(10000, "USD");
    ImmutableMoney m2 = new ImmutableMoney(10000, "USD");
    assertTrue(m1.equals(m2));
}

Notice the semantics of this test, we expect it to pass when both money objects test equal via the equals method.

4. Why AutoValue?

Now that we thoroughly understand value-types and why we need them, we can look at AutoValue and how it comes into the equation.

4.1. Issues With Hand-Coding

When we create value-types like we have done in the preceding section, we will run into a number of issues related with bad design and a lot of boilerplate code.

A two-field class will have nine lines of code: one for the package declaration, two for the class signature and its closing brace, two for the field declarations, two for the constructor signature and its closing brace, and two for initializing the fields. Then we need getters for the fields, each taking three more lines of code – six extra lines.

Overriding the hashCode() and equals(Object) methods requires about nine and 18 lines respectively, and overriding the toString() method adds another five lines.

That means a well-formatted version of our two-field class would take about 50 lines of code.

4.2. IDEs to the Rescue?

This is easy with an IDE like Eclipse or IntelliJ when there are only one or two value-typed classes to create. But with a multitude of such classes to create, would it still be as easy, even with the IDE’s help?

Fast forward some months down the road: assume we have to revisit our code and make amendments to our Money classes, perhaps converting the currency field from the String type to another value-type called Currency.

4.3. IDEs Not Really So Helpful

An IDE like Eclipse can’t simply edit our accessor methods for us, nor the toString(), hashCode() or equals(Object) methods.

This refactoring would have to be done by hand. Editing code increases the potential for bugs, and with every new field we add to the Money class, the amount of boilerplate grows rapidly.

Recognizing that this scenario happens often, and in large volumes, makes us really appreciate the role of AutoValue.

5. AutoValue Example

The problem AutoValue solves is taking all the boilerplate code that we talked about in the preceding section out of our way, so that we never have to write it, edit it or even read it.

We will look at the very same Money example, but this time with AutoValue. We will call this class AutoValueMoney for the sake of consistency:

@AutoValue
public abstract class AutoValueMoney {
    public abstract String getCurrency();
    public abstract long getAmount();
    
    public static AutoValueMoney create(String currency, long amount) {
        return new AutoValue_AutoValueMoney(currency, amount);
    }
}

What happens is that we write an abstract class, define abstract accessors for it but no fields, and annotate the class with @AutoValue – all totalling only eight lines of code – and javac generates a concrete subclass for us which looks like this:

public final class AutoValue_AutoValueMoney extends AutoValueMoney {
    private final String currency;
    private final long amount;
    
    AutoValue_AutoValueMoney(String currency, long amount) {
        if (currency == null) throw new NullPointerException("Null currency");
        this.currency = currency;
        this.amount = amount;
    }
    
    // standard getters
    
    @Override
    public int hashCode() {
        int h = 1;
        h *= 1000003;
        h ^= currency.hashCode();
        h *= 1000003;
        h ^= amount;
        return h;
    }
    
    @Override
    public boolean equals(Object o) {
        if (o == this) {
            return true;
        }
        if (o instanceof AutoValueMoney) {
            AutoValueMoney that = (AutoValueMoney) o;
            return (this.currency.equals(that.getCurrency()))
              && (this.amount == that.getAmount());
        }
        return false;
    }
}

We never have to deal with this class directly at all, and we never have to edit it when we need to add more fields or make changes to existing ones, like in the currency scenario from the previous section.

Javac will always regenerate updated code for us.

When using this new value-type, all callers see only the parent type, as the following unit tests show.

Here is a test that verifies that our fields are being set correctly:

@Test
public void givenValueTypeWithAutoValue_whenFieldsCorrectlySet_thenCorrect() {
    AutoValueMoney m = AutoValueMoney.create("USD", 10000);
    assertEquals(m.getAmount(), 10000);
    assertEquals(m.getCurrency(), "USD");
}

A test to verify that two AutoValueMoney objects with the same currency and same amount test equal follows:

@Test
public void given2EqualValueTypesWithAutoValue_whenEqual_thenCorrect() {
    AutoValueMoney m1 = AutoValueMoney.create("USD", 5000);
    AutoValueMoney m2 = AutoValueMoney.create("USD", 5000);
    assertTrue(m1.equals(m2));
}

When we change the currency type of one money object to GBP, the test: 5000 GBP == 5000 USD is no longer true:

@Test
public void given2DifferentValueTypesWithAutoValue_whenNotEqual_thenCorrect() {
    AutoValueMoney m1 = AutoValueMoney.create("GBP", 5000);
    AutoValueMoney m2 = AutoValueMoney.create("USD", 5000);
    assertFalse(m1.equals(m2));
}

6. AutoValue With Builders

The initial example we have looked at covers the basic usage of AutoValue using a static factory method as our public creation API.

Notice that if all our fields were Strings, it would be easy to interchange them when passing them to the static factory method – for example, placing the amount in the place of the currency and vice versa.

This is especially likely to happen if we have many fields and all are of String type. This problem is made worse by the fact that with AutoValue, all fields are initialized through the constructor.

To solve this problem, we should use the builder pattern. Fortunately, this can be generated by AutoValue.

Our AutoValue class does not really change much, except that the static factory method is replaced by a builder:

@AutoValue
public abstract class AutoValueMoneyWithBuilder {
    public abstract String getCurrency();
    public abstract long getAmount();
    static Builder builder() {
        return new AutoValue_AutoValueMoneyWithBuilder.Builder();
    }
    
    @AutoValue.Builder
    abstract static class Builder {
        abstract Builder setCurrency(String currency);
        abstract Builder setAmount(long amount);
        abstract AutoValueMoneyWithBuilder build();
    }
}

The generated class is just the same as the first one, but a concrete inner class for the builder is generated as well, implementing the abstract builder methods:

static final class Builder extends AutoValueMoneyWithBuilder.Builder {
    private String currency;
    private long amount;
    Builder() {
    }
    Builder(AutoValueMoneyWithBuilder source) {
        this.currency = source.getCurrency();
        this.amount = source.getAmount();
    }
    
    @Override
    public AutoValueMoneyWithBuilder.Builder setCurrency(String currency) {
        this.currency = currency;
        return this;
    }
    
    @Override
    public AutoValueMoneyWithBuilder.Builder setAmount(long amount) {
        this.amount = amount;
        return this;
    }
    
    @Override
    public AutoValueMoneyWithBuilder build() {
        String missing = "";
        if (currency == null) {
            missing += " currency";
        }
        if (amount == 0) {
            missing += " amount";
        }
        if (!missing.isEmpty()) {
            throw new IllegalStateException("Missing required properties:" + missing);
        }
        return new AutoValue_AutoValueMoneyWithBuilder(this.currency,this.amount);
    }
}

Notice also how the test results don’t change.

If we want to know that the field values are actually correctly set through the builder, we can execute this test:

@Test
public void givenValueTypeWithBuilder_whenFieldsCorrectlySet_thenCorrect() {
    AutoValueMoneyWithBuilder m = AutoValueMoneyWithBuilder.builder().
      setAmount(5000).setCurrency("USD").build();
    assertEquals(m.getAmount(), 5000);
    assertEquals(m.getCurrency(), "USD");
}

To test that equality depends on internal state:

@Test
public void given2EqualValueTypesWithBuilder_whenEqual_thenCorrect() {
    AutoValueMoneyWithBuilder m1 = AutoValueMoneyWithBuilder.builder()
      .setAmount(5000).setCurrency("USD").build();
    AutoValueMoneyWithBuilder m2 = AutoValueMoneyWithBuilder.builder()
      .setAmount(5000).setCurrency("USD").build();
    assertTrue(m1.equals(m2));
}

And when the field values are different:

@Test
public void given2DifferentValueTypesBuilder_whenNotEqual_thenCorrect() {
    AutoValueMoneyWithBuilder m1 = AutoValueMoneyWithBuilder.builder()
      .setAmount(5000).setCurrency("USD").build();
    AutoValueMoneyWithBuilder m2 = AutoValueMoneyWithBuilder.builder()
      .setAmount(5000).setCurrency("GBP").build();
    assertFalse(m1.equals(m2));
}

7. Conclusion

In this tutorial, we have introduced most of the basics of Google’s AutoValue library and how to use it to create value-types with very little code on our part.

An alternative to Google’s AutoValue is the Lombok project – you can have a look at the introductory article about using Lombok here.

The full implementation of all these examples and code snippets can be found in the AutoValue GitHub project.

A Custom Security Expression with Spring Security


1. Overview

In this tutorial, we’ll focus on creating a custom security expression with Spring Security.

Sometimes, the expressions available in the framework by default are simply not expressive enough. And, in these cases, it’s relatively simple to build a new expression that is semantically richer than the existing ones.

We’ll first discuss how to create a custom PermissionEvaluator, then a fully custom expression – and finally how to override one of the built-in security expressions.

2. A User Entity

First, let’s prepare the foundation for creating the new security expressions.

Let’s have a look at our User entity – which has Privileges and an Organization:

@Entity
public class User {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    @Column(nullable = false, unique = true)
    private String username;

    private String password;

    @ManyToMany(fetch = FetchType.EAGER) 
    @JoinTable(name = "users_privileges", 
      joinColumns = 
        @JoinColumn(name = "user_id", referencedColumnName = "id"),
      inverseJoinColumns = 
        @JoinColumn(name = "privilege_id", referencedColumnName = "id")) 
    private Set<Privilege> privileges;

    @ManyToOne(fetch = FetchType.EAGER)
    @JoinColumn(name = "organization_id", referencedColumnName = "id")
    private Organization organization;

    // standard getters and setters
}

And here is our simple Privilege:

@Entity
public class Privilege {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    @Column(nullable = false, unique = true)
    private String name;

    // standard getters and setters
}

And our Organization:

@Entity
public class Organization {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    @Column(nullable = false, unique = true)
    private String name;

    // standard setters and getters
}

Finally – we’ll use a simpler custom Principal:

public class MyUserPrincipal implements UserDetails {

    private User user;

    public MyUserPrincipal(User user) {
        this.user = user;
    }

    @Override
    public String getUsername() {
        return user.getUsername();
    }

    @Override
    public String getPassword() {
        return user.getPassword();
    }

    @Override
    public Collection<? extends GrantedAuthority> getAuthorities() {
        List<GrantedAuthority> authorities = new ArrayList<GrantedAuthority>();
        for (Privilege privilege : user.getPrivileges()) {
            authorities.add(new SimpleGrantedAuthority(privilege.getName()));
        }
        return authorities;
    }
    
    ...
}

With all of these classes ready, we’re going to use our custom Principal in a basic UserDetailsService implementation:

@Service
public class MyUserDetailsService implements UserDetailsService {

    @Autowired
    private UserRepository userRepository;

    @Override
    public UserDetails loadUserByUsername(String username) {
        User user = userRepository.findByUsername(username);
        if (user == null) {
            throw new UsernameNotFoundException(username);
        }
        return new MyUserPrincipal(user);
    }
}

As you can see, there’s nothing complicated about these relationships – the user has one or more privileges, and each user belongs to one organization.

3. Data Setup

Next – let’s initialize our database with simple test data:

@Component
public class SetupData {
    @Autowired
    private UserRepository userRepository;

    @Autowired
    private PrivilegeRepository privilegeRepository;

    @Autowired
    private OrganizationRepository organizationRepository;

    @PostConstruct
    public void init() {
        initPrivileges();
        initOrganizations();
        initUsers();
    }
}

Here are our init methods:

private void initPrivileges() {
    Privilege privilege1 = new Privilege("FOO_READ_PRIVILEGE");
    privilegeRepository.save(privilege1);

    Privilege privilege2 = new Privilege("FOO_WRITE_PRIVILEGE");
    privilegeRepository.save(privilege2);
}
private void initOrganizations() {
    Organization org1 = new Organization("FirstOrg");
    organizationRepository.save(org1);
    
    Organization org2 = new Organization("SecondOrg");
    organizationRepository.save(org2);
}
private void initUsers() {
    Privilege privilege1 = privilegeRepository.findByName("FOO_READ_PRIVILEGE");
    Privilege privilege2 = privilegeRepository.findByName("FOO_WRITE_PRIVILEGE");
    
    User user1 = new User();
    user1.setUsername("john");
    user1.setPassword("123");
    user1.setPrivileges(new HashSet<Privilege>(Arrays.asList(privilege1)));
    user1.setOrganization(organizationRepository.findByName("FirstOrg"));
    userRepository.save(user1);
    
    User user2 = new User();
    user2.setUsername("tom");
    user2.setPassword("111");
    user2.setPrivileges(new HashSet<Privilege>(Arrays.asList(privilege1, privilege2)));
    user2.setOrganization(organizationRepository.findByName("SecondOrg"));
    userRepository.save(user2);
}

Note that:

  • User “john” has only FOO_READ_PRIVILEGE
  • User “tom” has both FOO_READ_PRIVILEGE and FOO_WRITE_PRIVILEGE

4. A Custom Permission Evaluator

At this point we’re ready to start implementing our new expression – through a new, custom permission evaluator.

We are going to use the user’s privileges to secure our methods – but instead of using hard-coded privilege names, we want to reach a more open, flexible implementation.

Let’s get started.

4.1. PermissionEvaluator

In order to create our own custom permission evaluator we need to implement the PermissionEvaluator interface:

public class CustomPermissionEvaluator implements PermissionEvaluator {
    @Override
    public boolean hasPermission(
      Authentication auth, Object targetDomainObject, Object permission) {
        if ((auth == null) || (targetDomainObject == null) || !(permission instanceof String)){
            return false;
        }
        String targetType = targetDomainObject.getClass().getSimpleName().toUpperCase();
        
        return hasPrivilege(auth, targetType, permission.toString().toUpperCase());
    }

    @Override
    public boolean hasPermission(
      Authentication auth, Serializable targetId, String targetType, Object permission) {
        if ((auth == null) || (targetType == null) || !(permission instanceof String)) {
            return false;
        }
        return hasPrivilege(auth, targetType.toUpperCase(), 
          permission.toString().toUpperCase());
    }
}

Here is our hasPrivilege() method:

private boolean hasPrivilege(Authentication auth, String targetType, String permission) {
    for (GrantedAuthority grantedAuth : auth.getAuthorities()) {
        if (grantedAuth.getAuthority().startsWith(targetType)) {
            if (grantedAuth.getAuthority().contains(permission)) {
                return true;
            }
        }
    }
    return false;
}

We now have a new security expression available and ready to be used: hasPermission.

And so, instead of using the more hardcoded version:

@PostAuthorize("hasAuthority('FOO_READ_PRIVILEGE')")

We can use:

@PostAuthorize("hasPermission(returnObject, 'read')")

or

@PreAuthorize("hasPermission(#id, 'Foo', 'read')")

Note: #id refers to the method parameter, and ‘Foo‘ refers to the target object type.

4.2. Method Security Configuration

It’s not enough to define the CustomPermissionEvaluator – we also need to use it in our method security configuration:

@Configuration
@EnableGlobalMethodSecurity(prePostEnabled = true)
public class MethodSecurityConfig extends GlobalMethodSecurityConfiguration {

    @Override
    protected MethodSecurityExpressionHandler createExpressionHandler() {
        DefaultMethodSecurityExpressionHandler expressionHandler = 
          new DefaultMethodSecurityExpressionHandler();
        expressionHandler.setPermissionEvaluator(new CustomPermissionEvaluator());
        return expressionHandler;
    }
}

4.3. Example In Practice

Let’s now start making use of the new expression – in a few simple controller methods:

@Controller
public class MainController {
    
    @PostAuthorize("hasPermission(returnObject, 'read')")
    @RequestMapping(method = RequestMethod.GET, value = "/foos/{id}")
    @ResponseBody
    public Foo findById(@PathVariable long id) {
        return new Foo("Sample");
    }

    @PreAuthorize("hasPermission(#foo, 'write')")
    @RequestMapping(method = RequestMethod.POST, value = "/foos")
    @ResponseStatus(HttpStatus.CREATED)
    @ResponseBody
    public Foo create(@RequestBody Foo foo) {
        return foo;
    }
}

And there we go – we’re all set and using the new expression in practice.

4.4. The Live Test 

Let’s now write some simple live tests – hitting the API and making sure everything’s in working order:

@Test
public void givenUserWithReadPrivilegeAndHasPermission_whenGetFooById_thenOK() {
    Response response = givenAuth("john", "123").get("http://localhost:8081/foos/1");
    assertEquals(200, response.getStatusCode());
    assertTrue(response.asString().contains("id"));
}

@Test
public void givenUserWithNoWritePrivilegeAndHasPermission_whenPostFoo_thenForbidden() {
    Response response = givenAuth("john", "123").contentType(MediaType.APPLICATION_JSON_VALUE)
                                                .body(new Foo("sample"))
                                                .post("http://localhost:8081/foos");
    assertEquals(403, response.getStatusCode());
}

@Test
public void givenUserWithWritePrivilegeAndHasPermission_whenPostFoo_thenOk() {
    Response response = givenAuth("tom", "111").contentType(MediaType.APPLICATION_JSON_VALUE)
                                               .body(new Foo("sample"))
                                               .post("http://localhost:8081/foos");
    assertEquals(201, response.getStatusCode());
    assertTrue(response.asString().contains("id"));
}

And here is our givenAuth() method:

private RequestSpecification givenAuth(String username, String password) {
    FormAuthConfig formAuthConfig = 
      new FormAuthConfig("http://localhost:8081/login", "username", "password");
    
    return RestAssured.given().auth().form(username, password, formAuthConfig);
}

5. A New Security Expression

With the previous solution, we were able to define and use the hasPermission expression – which can be quite useful.

However, we’re still somewhat limited here by the name and semantics of the expression itself.

And so, in this section, we’re going to go full custom – and we’re going to implement a security expression called isMember() – checking if the principal is a member of an Organization.

5.1. Custom Method Security Expression

In order to create this new custom expression, we need to start by implementing the expression root, where the evaluation of all security expressions starts:

public class CustomMethodSecurityExpressionRoot 
  extends SecurityExpressionRoot implements MethodSecurityExpressionOperations {

    public CustomMethodSecurityExpressionRoot(Authentication authentication) {
        super(authentication);
    }

    public boolean isMember(Long organizationId) {
        User user = ((MyUserPrincipal) this.getPrincipal()).getUser();
        return user.getOrganization().getId().longValue() == organizationId.longValue();
    }
    }

    ...
}

Note how we provided this new operation right in the expression root; isMember() is used to check whether the current user is a member of the given Organization.

Also note how we extended the SecurityExpressionRoot to include the built-in expressions as well.

5.2. Custom Expression Handler

Next, we need to inject our CustomMethodSecurityExpressionRoot in our expression handler:

public class CustomMethodSecurityExpressionHandler 
  extends DefaultMethodSecurityExpressionHandler {
    private AuthenticationTrustResolver trustResolver = 
      new AuthenticationTrustResolverImpl();

    @Override
    protected MethodSecurityExpressionOperations createSecurityExpressionRoot(
      Authentication authentication, MethodInvocation invocation) {
        CustomMethodSecurityExpressionRoot root = 
          new CustomMethodSecurityExpressionRoot(authentication);
        root.setPermissionEvaluator(getPermissionEvaluator());
        root.setTrustResolver(this.trustResolver);
        root.setRoleHierarchy(getRoleHierarchy());
        return root;
    }
}

5.3. Method Security Configuration

Now, we need to use our CustomMethodSecurityExpressionHandler in the method security configuration:

@Configuration
@EnableGlobalMethodSecurity(prePostEnabled = true)
public class MethodSecurityConfig extends GlobalMethodSecurityConfiguration {
    @Override
    protected MethodSecurityExpressionHandler createExpressionHandler() {
        CustomMethodSecurityExpressionHandler expressionHandler = 
          new CustomMethodSecurityExpressionHandler();
        expressionHandler.setPermissionEvaluator(new CustomPermissionEvaluator());
        return expressionHandler;
    }
}

5.4. Using the New Expression

Here is a simple example to secure our controller method using isMember():

@PreAuthorize("isMember(#id)")
@RequestMapping(method = RequestMethod.GET, value = "/organizations/{id}")
@ResponseBody
public Organization findOrgById(@PathVariable long id) {
    return organizationRepository.findOne(id);
}

5.5. Live Test

Finally, here is a simple live test for user “john“:

@Test
public void givenUserMemberInOrganization_whenGetOrganization_thenOK() {
    Response response = givenAuth("john", "123").get("http://localhost:8081/organizations/1");
    assertEquals(200, response.getStatusCode());
    assertTrue(response.asString().contains("id"));
}

@Test
public void givenUserMemberNotInOrganization_whenGetOrganization_thenForbidden() {
    Response response = givenAuth("john", "123").get("http://localhost:8081/organizations/2");
    assertEquals(403, response.getStatusCode());
}

6. Disable a Built-in Security Expression

Finally, let’s see how to override a built-in security expression – we’ll discuss disabling hasAuthority().

6.1. Custom Security Expression Root

We’ll start similarly by writing our own SecurityExpressionRoot – mainly because the built-in methods are final and so we can’t override them:

public class MySecurityExpressionRoot implements MethodSecurityExpressionOperations {
    public MySecurityExpressionRoot(Authentication authentication) {
        if (authentication == null) {
            throw new IllegalArgumentException("Authentication object cannot be null");
        }
        this.authentication = authentication;
    }

    @Override
    public final boolean hasAuthority(String authority) {
        throw new RuntimeException("method hasAuthority() not allowed");
    }
    ...
}

After defining this root node, we'll have to inject it into the expression handler and then wire that handler into our configuration – just as we did above in Section 5.
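
For reference, here's a minimal sketch of that handler wiring, mirroring the handler from Section 5 (the handler class name is ours; the setters elided by “...” in MySecurityExpressionRoot would be populated the same way):

import org.aopalliance.intercept.MethodInvocation;
import org.springframework.security.access.expression.method.DefaultMethodSecurityExpressionHandler;
import org.springframework.security.access.expression.method.MethodSecurityExpressionOperations;
import org.springframework.security.core.Authentication;

public class MySecurityExpressionHandler extends DefaultMethodSecurityExpressionHandler {

    @Override
    protected MethodSecurityExpressionOperations createSecurityExpressionRoot(
      Authentication authentication, MethodInvocation invocation) {
        // hand Spring Security our custom root instead of the default one
        return new MySecurityExpressionRoot(authentication);
    }
}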

6.2. Example – Using the Expression

Now, if we use hasAuthority() to secure a method – as follows – it will throw a RuntimeException when we try to access the method:

@PreAuthorize("hasAuthority('FOO_READ_PRIVILEGE')")
@RequestMapping(method = RequestMethod.GET, value = "/foos")
@ResponseBody
public Foo findFooByName(@RequestParam String name) {
    return new Foo(name);
}

6.3. Live Test

Finally, here is our simple test:

@Test
public void givenDisabledSecurityExpression_whenGetFooByName_thenError() {
    Response response = givenAuth("john", "123").get("http://localhost:8081/foos?name=sample");
    assertEquals(500, response.getStatusCode());
    assertTrue(response.asString().contains("method hasAuthority() not allowed"));
}

7. Conclusion

In this guide, we did a deep-dive into the various ways we can implement a custom security expression in Spring Security, if the existing ones aren’t enough.

And, as always, the full source code can be found on GitHub.

The Master Class "Learn Spring Security" is out:

>> CHECK OUT THE COURSE

Introduction to Immutables


The Master Class of "Learn Spring Security" is live:

>> CHECK OUT THE COURSE

1. Introduction

In this article we will be showing how to work with the Immutables library.

Immutables consists of annotations and annotation processors for generating and working with serializable and customizable immutable objects.

2. Maven Dependencies

In order to use Immutables in your project, you need to add the following dependency to the dependencies section of your pom.xml file:

<dependency>
    <groupId>org.immutables</groupId>
    <artifactId>value</artifactId>
    <version>2.2.10</version>
    <scope>provided</scope>
</dependency>

As this artifact is not required at runtime, it's advisable to specify the provided scope.

The newest version of the library can be found here.

3. Immutables

The library generates immutable objects from abstract types: Interface, Class, Annotation.

The key to achieving this is the proper use of @Value.Immutable annotation. It generates an immutable version of an annotated type and prefixes its name with the Immutable keyword.

If we try to generate an immutable version of a class named “X“, it will generate a class named “ImmutableX”. Keep in mind that generated classes are not recursively immutable – if an attribute's type is itself mutable, the generated class simply stores that reference.
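
To illustrate that last point, here's a hedged sketch – the Event type is ours, not part of the library – showing a mutable attribute type surviving inside the generated class:

import java.util.Date;

import org.immutables.value.Value;

@Value.Immutable
public abstract class Event {

    abstract String getTitle();

    // java.util.Date is mutable; the generated ImmutableEvent stores this
    // reference as-is, so the date can still be modified through it
    abstract Date getWhen();
}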

And a quick note – because Immutables utilizes annotation processing, you need to remember to enable annotation processing in your IDE.

3.1. Using @Value.Immutable With Abstract Classes and Interfaces

Let’s create a simple abstract class Person consisting of two abstract accessor methods representing the to-be-generated fields, and then annotate the class with the @Value.Immutable annotation:

@Value.Immutable
public abstract class Person {

    abstract String getName();
    abstract Integer getAge();

}

After annotation processing is done, we can find a ready-to-use, newly-generated ImmutablePerson class in a target/generated-sources directory:

@Generated({"Immutables.generator", "Person"})
public final class ImmutablePerson extends Person {

    private final String name;
    private final Integer age;

    private ImmutablePerson(String name, Integer age) {
        this.name = name;
        this.age = age;
    }

    @Override
    String getName() {
        return name;
    }

    @Override
    Integer getAge() {
        return age;
    }

    // toString, hashcode, equals, copyOf and Builder omitted

}

The generated class comes with implemented toString, hashCode and equals methods and with a step builder, ImmutablePerson.Builder. Notice that the generated constructor has private access.

In order to construct an instance of the ImmutablePerson class, we need to use the builder or the static method ImmutablePerson.copyOf, which can create an ImmutablePerson copy from a Person object.
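
As a quick sketch of the copyOf route (we use the builder here only to obtain a Person-typed instance):

Person source = ImmutablePerson.builder()
  .name("John")
  .age(42)
  .build();

// copies the accessor values into an ImmutablePerson; if the argument
// already is an ImmutablePerson, copyOf can simply return it unchanged
ImmutablePerson copy = ImmutablePerson.copyOf(source);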

If we want to construct an instance using the builder, we can simply code:

ImmutablePerson john = ImmutablePerson.builder()
  .age(42)
  .name("John")
  .build();

Generated classes are immutable, which means they can't be modified. If you want to modify an already existing object, you can use one of the “withX” methods, which do not modify the original object but create a new instance with the modified field.

Let’s update john’s age and create a new john43 object:

ImmutablePerson john43 = john.withAge(43);

In such a case the following assertions will be true:

assertThat(john).isNotSameAs(john43);
assertThat(john.getAge()).isEqualTo(42);

4. Additional Utilities

Such class generation would not be very useful without the ability to customize it. The Immutables library comes with a set of additional annotations that can be used for customizing @Value.Immutable‘s output. To see all of them, please refer to Immutables’ documentation.

4.1. The @Value.Parameter Annotation

The @Value.Parameter annotation can be used to specify the fields for which a constructor method should be generated.

If you annotate your class like this:

@Value.Immutable
public abstract class Person {

    @Value.Parameter
    abstract String getName();

    @Value.Parameter
    abstract Integer getAge();
}

It will be possible to instantiate it in the following way:

ImmutablePerson.of("John", 42);

4.2. The @Value.Default Annotation

The @Value.Default annotation allows you to specify a default value that should be used when an initial value is not provided. In order to do this, you need to create a non-abstract accessor method returning a fixed value and annotate it with @Value.Default:

@Value.Immutable
public abstract class Person {

    abstract String getName();

    @Value.Default
    Integer getAge() {
        return 42;
    }
}

The following assertion will be true:

ImmutablePerson john = ImmutablePerson.builder()
  .name("John")
  .build();

assertThat(john.getAge()).isEqualTo(42);

4.3. The @Value.Auxiliary Annotation

The @Value.Auxiliary annotation can be used for annotating a property that will be stored in an object’s instance, but will be ignored by equals, hashCode and toString implementations.

If you annotate your class like this:

@Value.Immutable
public abstract class Person {

    abstract String getName();
    abstract Integer getAge();

    @Value.Auxiliary
    abstract String getAuxiliaryField();

}

The following assertions will be true when using the auxiliary field:

ImmutablePerson john1 = ImmutablePerson.builder()
  .name("John")
  .age(42)
  .auxiliaryField("Value1")
  .build();

ImmutablePerson john2 = ImmutablePerson.builder()
  .name("John")
  .age(42)
  .auxiliaryField("Value2")
  .build();

assertThat(john1.equals(john2)).isTrue();
assertThat(john1.toString()).isEqualTo(john2.toString());
assertThat(john1.hashCode()).isEqualTo(john2.hashCode());

4.4. The @Value.Immutable(prehash = true) Annotation

Since our generated classes are immutable and can never be modified, hashCode results will always remain the same, so the hash code can be computed just once, during the object's instantiation.

If you annotate your class like this:

@Value.Immutable(prehash = true)
public abstract class Person {

    abstract String getName();
    abstract Integer getAge();

}

When inspecting the generated class, you can see that the hashCode value is now precomputed and stored in a field:

@Generated({"Immutables.generator", "Person"})
public final class ImmutablePerson extends Person {

    private final String name;
    private final Integer age;
    private final int hashCode;

    private ImmutablePerson(String name, Integer age) {
        this.name = name;
        this.age = age;
        this.hashCode = computeHashCode();
    }

    // generated methods
 
    @Override
    public int hashCode() {
        return hashCode;
    }
}

The hashCode() method returns the precomputed hash code that was generated when the object was constructed.

5. Conclusion

In this quick tutorial we showed the basic workings of the Immutables library.

All source code and unit tests in the article can be found in the GitHub repository.


Spring JSON-P with Jackson



1. Overview

If you’ve been developing anything on the web, you’re aware of the same-origin policy constraint browsers have when dealing with AJAX requests. The simple overview of the constraint is that any request originating from a different domain, scheme, or port will not be permitted.

One way to relax this browser restriction when working with JSON data is by using JSON with padding (JSON-P).

This article discusses Spring’s support for working with JSON-P data – with the help of AbstractJsonpResponseBodyAdvice.

2. JSON-P in Action

The same-origin policy is not imposed on the <script> tag, which allows scripts to be loaded across different domains. The JSON-P technique takes advantage of this by passing the JSON response as the argument of a JavaScript function.

2.1. Preparation

In our examples, we will use this simple Company class:

public class Company {
 
    private long id;
    private String name;
 
    // standard setters and getters
}

This class will bind the request parameters and will be returned from the server as a JSON representation.

The Controller method is a simple implementation as well – returning the Company instance:

@RestController
public class CompanyController {

    @RequestMapping(value = "/companyRest",
      produces = MediaType.APPLICATION_JSON_VALUE)
    public Company getCompanyRest() {
        Company company = new Company(1, "Xpto");
        return company;
    }
}

On the client side, we can use the jQuery library to create and send an AJAX request:

$.ajax({
    url: 'http://localhost:8080/spring-mvc-java/companyRest',
    data: {
        format: 'json'
    },
    type: 'GET',
    ...
});

Consider an AJAX request against the following URL:

http://localhost:8080/spring-mvc-java/companyRest

The response from the server would be the following:

{"id":1,"name":"Xpto"}

As the request was sent against the same scheme, domain and port, the response will not get blocked, and the JSON data will be allowed by the browser.

2.2. Cross-Origin Request

By changing the request URL to:

http://127.0.0.1:8080/spring-mvc-java/companyRest

the response will get blocked by the browser, because the request is sent from localhost to 127.0.0.1, which is considered a different domain and violates the same-origin policy.

With JSON-P, we are able to add a callback parameter to the request:

http://127.0.0.1:8080/spring-mvc-java/companyRest?callback=getCompanyData

On the client side, it's as easy as adding the following parameters to the AJAX request:

$.ajax({
    ...
    jsonpCallback:'getCompanyData',
    dataType: 'jsonp',
    ...
});

getCompanyData will be the function called when the response is received.

If the server formats the response like the following:

getCompanyData({"id":1,"name":"Xpto"});

the browser will not block it, as it treats the response as a script that was negotiated and agreed upon between the client and the server, on account of getCompanyData matching in both the request and the response.

3. @ControllerAdvice Annotation

Beans annotated with @ControllerAdvice are able to assist all Controllers, or a specific subset of them, and are used to encapsulate cross-cutting behavior shared between different Controllers. Typical usage patterns are related to exception handling, adding attributes to models, or registering binders.

Starting with Spring 4.1, @ControllerAdvice is able to register implementations of the ResponseBodyAdvice interface, which allows changing the response after it's returned by a controller method, but before it's written by a suitable converter.
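
To make that concrete, here's a minimal sketch of a ResponseBodyAdvice implementation – the advice name and the header it adds are ours, purely for illustration:

import org.springframework.core.MethodParameter;
import org.springframework.http.MediaType;
import org.springframework.http.converter.HttpMessageConverter;
import org.springframework.http.server.ServerHttpRequest;
import org.springframework.http.server.ServerHttpResponse;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.servlet.mvc.method.annotation.ResponseBodyAdvice;

@ControllerAdvice
public class VersionHeaderAdvice implements ResponseBodyAdvice<Object> {

    @Override
    public boolean supports(MethodParameter returnType,
      Class<? extends HttpMessageConverter<?>> converterType) {
        // apply to every controller method
        return true;
    }

    @Override
    public Object beforeBodyWrite(Object body, MethodParameter returnType,
      MediaType selectedContentType,
      Class<? extends HttpMessageConverter<?>> selectedConverterType,
      ServerHttpRequest request, ServerHttpResponse response) {
        // last chance to touch the response before the converter writes it
        response.getHeaders().add("X-Api-Version", "1");
        return body;
    }
}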

4. Changing the Response Using AbstractJsonpResponseBodyAdvice

Also starting with Spring 4.1, we now have access to the AbstractJsonpResponseBodyAdvice class – which formats the response according to JSON-P standards.

This section explains how to put the base class into play and change the response without making any changes to the existing Controllers.

In order to enable Spring support for JSON-P, let’s start with the configuration:

@ControllerAdvice
public class JsonpControllerAdvice 
  extends AbstractJsonpResponseBodyAdvice {

    public JsonpControllerAdvice() {
        super("callback");
    }
}

The support is provided by the AbstractJsonpResponseBodyAdvice class. The key passed to the super constructor is the request parameter that will be used in URLs requesting JSON-P data.

With this controller advice, we automatically convert the response to JSON-P.

5. JSON-P with Spring in Practice

With the previously discussed configuration in place, we are able to make our REST applications respond with JSON-P. In the following example, we will return the company's data, so our AJAX request URL should look something like this:

http://127.0.0.1:8080/spring-mvc-java/companyRest?callback=getCompanyData

As a result of the previous configuration, the response will look as follows:

getCompanyData({"id":1,"name":"Xpto"});

As discussed, the response in this format will not get blocked, despite originating from a different domain.

The JsonpControllerAdvice can easily be applied to any method annotated with @ResponseBody, as well as to methods returning a ResponseEntity.
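
As a hedged illustration (this controller is ours, not part of the article's code), a ResponseEntity-returning method gets wrapped the same way:

import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CompanyEntityController {

    // with JsonpControllerAdvice registered, requesting
    // /companyEntity?callback=getCompanyData wraps this JSON body
    // in a getCompanyData(...) function call as well
    @RequestMapping(value = "/companyEntity",
      produces = MediaType.APPLICATION_JSON_VALUE)
    public ResponseEntity<Company> getCompanyEntity() {
        return ResponseEntity.ok(new Company(2, "Acme"));
    }
}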

There should be a function with the same name as the one passed in the callback parameter, getCompanyData, to handle the response.

6. Conclusion

This quick article shows how the otherwise tedious work of formatting a response to take advantage of JSON-P is simplified by the new functionality in Spring 4.1.

The implementation of the examples and code snippets can be found in this GitHub project.

The Master Class of my "REST With Spring" Course is finally out:

>> CHECK OUT THE CLASSES

Quick Guide to Spring Controllers


The Master Class of "Learn Spring Security" is out:

>> CHECK OUT THE COURSE

1. Introduction

In this article we’ll focus on a core concept in Spring MVC – Controllers.

2. Overview

Let’s start by taking a step back and having a look at the concept of the Front Controller in the typical Spring Model View Controller architecture.

At a very high level, here are the main responsibilities we’re looking at:

  • Intercepts incoming requests
  • Converts the payload of the request to the internal structure of the data
  • Sends the data to Model for further processing
  • Gets processed data from the Model and advances that data to the View for rendering

Here’s a quick diagram of the high-level flow in Spring MVC:

[Diagram: the high-level request flow in Spring MVC, with the DispatcherServlet acting as the Front Controller]

As you can see, the DispatcherServlet plays the role of the Front Controller in the architecture.

The diagram is applicable both to typical MVC controllers as well as RESTful controllers – with some small differences (described below).

In the traditional approach, MVC applications are not service-oriented; hence, there is a View Resolver that renders final views based on data received from a Controller.

RESTful applications are designed to be service-oriented and return raw data (JSON/XML typically). Since these applications do not do any view rendering, there are no View Resolvers – the Controller is generally expected to send data directly via the HTTP response.

Let’s start with the MVC-style controllers.

3. Maven Dependencies

In order to be able to work with Spring MVC, let’s deal with the Maven dependencies first:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-webmvc</artifactId>
    <version>4.3.1.RELEASE</version>
</dependency>

To get the latest version of the library, have a look at spring-webmvc on Maven Central.

4. Project Web Config

Now, before looking at the controllers themselves, we first need to set up a simple web project and do a quick Servlet configuration.

Let's first see how the DispatcherServlet can be set up without using web.xml – but instead using an initializer:

public class StudentControllerConfig implements WebApplicationInitializer {

    @Override
    public void onStartup(ServletContext sc) throws ServletException {
        AnnotationConfigWebApplicationContext root = 
          new AnnotationConfigWebApplicationContext();
        root.register(WebConfig.class);

        root.refresh();
        root.setServletContext(sc);

        sc.addListener(new ContextLoaderListener(root));

        DispatcherServlet dv = 
          new DispatcherServlet(new GenericWebApplicationContext());

        ServletRegistration.Dynamic appServlet = sc.addServlet("test-mvc", dv);
        appServlet.setLoadOnStartup(1);
        appServlet.addMapping("/test/*");
    }
}

To set things up with no XML, make sure to have servlet-api 3.1.0 on your classpath.
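
For reference, that dependency would look something like the following (assuming the javax.servlet coordinates; the container provides it at runtime):

<dependency>
    <groupId>javax.servlet</groupId>
    <artifactId>javax.servlet-api</artifactId>
    <version>3.1.0</version>
    <scope>provided</scope>
</dependency>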

Here’s what the web.xml would look like:

<servlet>
    <servlet-name>test-mvc</servlet-name>
    <servlet-class>
      org.springframework.web.servlet.DispatcherServlet
    </servlet-class>
    <load-on-startup>1</load-on-startup>
    <init-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>/WEB-INF/test-mvc.xml</param-value>
    </init-param>
</servlet>

We’re setting the contextConfigLocation property here – pointing to the XML file used to load the Spring context. If the property is not there, Spring will search for a file named {servlet_name}-servlet.xml.

In our case the servlet_name is test-mvc and so, in this example the DispatcherServlet would search for a file called test-mvc-servlet.xml.

Finally, let’s set the DispatcherServlet up and map it to a particular URL – to finish our Front Controller based system here:

<servlet-mapping>
    <servlet-name>test-mvc</servlet-name>
    <url-pattern>/test/*</url-pattern>
</servlet-mapping>

Thus, in this case, the DispatcherServlet would intercept all requests matching the pattern /test/*.

5. Spring MVC Web Config

Let's now look at how the DispatcherServlet can be set up using Spring config:

@Configuration
@EnableWebMvc
@ComponentScan(basePackages= {
  "org.baeldung.controller.controller",
  "org.baeldung.controller.config" }) 
public class WebConfig extends WebMvcConfigurerAdapter {
    
    @Override
    public void configureDefaultServletHandling(
      DefaultServletHandlerConfigurer configurer) {
        configurer.enable();
    }
 
    @Bean
    public ViewResolver viewResolver() {
        InternalResourceViewResolver bean = 
          new InternalResourceViewResolver();
        bean.setPrefix("/WEB-INF/");
        bean.setSuffix(".jsp");
        return bean;
    }
}

Let’s now look at setting up the DispatcherServlet using XML. A snapshot of the DispatcherServlet XML file – the file the DispatcherServlet uses for loading custom controllers and other Spring entities – is shown below:

<context:component-scan base-package="com.baledung.controller" />
<mvc:annotation-driven />
<bean class="org.springframework.web.servlet.view.InternalResourceViewResolver">
    <property name="prefix">
        <value>/WEB-INF/</value>
    </property>
    <property name="suffix">
        <value>.jsp</value>
    </property>
</bean>

Based on this simple configuration, the framework will of course initialize any controller bean that it finds on the classpath.

Notice that we’re also defining the View Resolver, responsible for view rendering – we’ll be using Spring’s InternalResourceViewResolver here. This expects a name of a view to be resolved, which means finding a corresponding page by using prefix and suffix (both defined in the XML configuration).

So, for example, if the Controller returns a view named “welcome“, the view resolver will try to resolve a page called “welcome.jsp“ in the WEB-INF folder.

6. The MVC Controller

Let’s now finally implement the MVC-style controller.

Notice how we’re returning a ModelAndView object – which contains a model map and a view object; both will be used by the View Resolver for data rendering:

@Controller
@RequestMapping(value = "/test")
public class TestController {

    @GetMapping
    public ModelAndView getTestData() {
        ModelAndView mv = new ModelAndView();
        mv.setViewName("welcome");
        mv.getModel().put("data", "Welcome home man");

        return mv;
    }
}

So, what exactly did we set up here?

First, we created a controller called TestController and mapped it to the “/test” path. In the class, we created a method that returns a ModelAndView object and is mapped to a GET request; thus, any URL call ending with “test” would be routed by the DispatcherServlet to the getTestData method in the TestController.

And of course we’re returning the ModelAndView object with some model data for good measure.

The view object has a name set to “welcome“. As discussed above, the View Resolver will search for a page in the WEB-INF folder called “welcome.jsp“.

Below you can see the result of an example GET operation:

[Screenshot: the rendered “welcome“ page returned by the example GET operation]

Note that the URL ends with “test”. The pattern of the URL is “/test/test“.

The first “/test” comes from the Servlet, and the second one comes from the mapping of the controller.

7. More Spring Dependencies for REST

Let’s now start looking at a RESTful controller. Of course, a good place to start is the extra Maven dependencies we need for it:

<dependencies>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-webmvc</artifactId>
        <version>4.3.0.RELEASE</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-web</artifactId>
        <version>4.3.0.RELEASE</version>
    </dependency>
    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-databind</artifactId>
        <version>2.8.0</version>
    </dependency>
</dependencies>

Please refer to the jackson-core, spring-webmvc and spring-web links for the newest versions of those dependencies.

Jackson is of course not mandatory here, but it’s certainly a good way to enable JSON support. If you’re interested in diving deeper into that support, have a look at the message converters article here.

8. The REST Controller

The setup for a Spring RESTful application is the same as the one for the MVC application, with the only difference being that there are no View Resolvers and no model map.

The API will generally simply return raw data back to the client – XML and JSON representations usually – and so the DispatcherServlet bypasses the view resolvers and returns the data right in the HTTP response body.

Let’s have a look at a simple RESTful controller implementation:

@Controller
public class RestController {

    @GetMapping(value = "/student/{studentId}")
    public @ResponseBody Student getTestData(@PathVariable Integer studentId) {
        Student student = new Student();
        student.setName("Peter");
        student.setId(studentId);

        return student;
    }
}

Note the @ResponseBody annotation on the method – which instructs Spring to bypass the view resolver and essentially write out the output directly to the body of the HTTP response.

A quick snapshot of the output is displayed below:

[Screenshot: the JSON response returned for the student with id 1]

The above output is a result of sending the GET request to the API with the student id of 1.

One quick note here – the @RequestMapping annotation is one of those central annotations that you’ll really have to explore in order to use it to its full potential.

9. The @RestController Annotation

The @RestController annotation, introduced in Spring 4.0, is basically a quick shortcut that saves us from always having to define @ResponseBody.

Here’s the previous example controller using this new annotation:

@RestController
public class RestAnnotatedController {
    @GetMapping(value = "/annotated/student/{studentId}")
    public Student getData(@PathVariable Integer studentId) {
        Student student = new Student();
        student.setName("Peter");
        student.setId(studentId);

        return student;
    }
}

10. Conclusion

In this guide, we explored the basics of using controllers in Spring, both from the point of view of a typical MVC application as well as a RESTful API.

Of course all the code in the article is available over on GitHub.

The Master Class of "Learn Spring Security" is out:

>> CHECK OUT THE COURSE

JMockit Advanced Usage


1. Introduction

In this article, we’ll go beyond the JMockit basics and we’ll start looking at some advanced scenarios, such as:

  • Faking (or the MockUp API)
  • The Deencapsulation utility class
  • How to mock more than one interface using only one mock
  • How to reuse expectations and verifications

If you want to discover JMockit’s basics, check other articles from this series. You can find relevant links at the bottom of the page.

2. Private Methods/Inner Classes Mocking

Mocking and testing of private methods or inner classes is often not considered good practice.

The reasoning behind it is that if they’re private, they shouldn’t be tested directly as they’re the innermost guts of the class, but sometimes it still needs to be done, especially when dealing with legacy code.

With JMockit, you have two options to handle these:

  • The MockUp API to alter the real implementation (for the second case)
  • The Deencapsulation utility class, to call any method directly (for the first case)

All the following examples will be done for the class below, and we'll assume that they're run in a test class with the same configuration as the first one (to avoid repeating code):

public class AdvancedCollaborator {
    int i;
    private int privateField = 5;

    // default constructor omitted 
    
    public AdvancedCollaborator(String string) throws Exception{
        i = string.length();
    }

    public String methodThatCallsPrivateMethod(int i) {
        return privateMethod() + i;
    }
    public int methodThatReturnsThePrivateField() {
        return privateField;
    }
    private String privateMethod() {
        return "default:";
    }

    class InnerAdvancedCollaborator {...}
}

2.1. Faking with MockUp

JMockit’s Mockup API provides support for the creation of fake implementations or mock-ups. Typically, a mock-up targets a few methods and/or constructors in the class to be faked, while leaving most other methods and constructors unmodified. This allows for a complete re-write of a class, so any method or constructor (with any access modifier) can be targeted.

Let’s see how we can re-define privateMethod() using the MockUp API:

@RunWith(JMockit.class)
public class AdvancedCollaboratorTest {

    @Tested
    private AdvancedCollaborator mock;

    @Test
    public void testToMockUpPrivateMethod() {
        new MockUp<AdvancedCollaborator>() {
            @Mock
            private String privateMethod() {
                return "mocked: ";
            }
        };
        String res = mock.methodThatCallsPrivateMethod(1);
        assertEquals("mocked: 1", res);
    }
}

In this example we’re defining a new MockUp for the AdvancedCollaborator class, using the @Mock annotation on a method with a matching signature. After this, calls to that method will be delegated to our mocked one.

We can also use this to mock-up the constructor of a class that needs specific arguments or configuration in order to simplify tests:

@Test
public void testToMockUpDifficultConstructor() throws Exception{
    new MockUp<AdvancedCollaborator>() {
        @Mock
        public void $init(Invocation invocation, String string) {
            ((AdvancedCollaborator)invocation.getInvokedInstance()).i = 1;
        }
    };
    AdvancedCollaborator coll = new AdvancedCollaborator(null);
    assertEquals(1, coll.i);
}

In this example, we can see that for constructor mocking you need to mock the $init method. You can pass an extra argument of type Invocation, with which you can access information about the invocation of the mocked method, including the instance on which the invocation is performed.

2.2. Using the Deencapsulation Class

JMockit includes a test utility class: Deencapsulation. As its name indicates, it’s used to de-encapsulate the state of an object; using it, you can simplify testing by accessing fields and methods that could not be accessed otherwise.

You can invoke a method:

@Test
public void testToCallPrivateMethodsDirectly(){
    Object value = Deencapsulation.invoke(mock, "privateMethod");
    assertEquals("default:", value);
}

You can also set fields:

@Test
public void testToSetPrivateFieldDirectly(){
    Deencapsulation.setField(mock, "privateField", 10);
    assertEquals(10, mock.methodThatReturnsThePrivateField());
}

And get fields:

@Test
public void testToGetPrivateFieldDirectly(){
    int value = Deencapsulation.getField(mock, "privateField");
    assertEquals(5, value);
}

And create new instances of classes:

@Test
public void testToCreateNewInstanceDirectly(){
    AdvancedCollaborator coll = Deencapsulation
      .newInstance(AdvancedCollaborator.class, "foo");
    assertEquals(3, coll.i);
}

Even new instances of inner classes:

@Test
public void testToCreateNewInnerClassInstanceDirectly(){
    InnerAdvancedCollaborator inner = Deencapsulation
      .newInnerInstance(InnerAdvancedCollaborator.class, mock);
    assertNotNull(inner);
}

As you can see, the Deencapsulation class is extremely useful when testing air-tight classes. One example could be setting dependencies of a class that uses @Autowired annotations on private fields and has no setters for them, or unit testing inner classes without having to depend on the public interface of their container class.
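
Here's a hedged sketch of that first use case – the MailClient and ReportService types are hypothetical, invented for this illustration:

// hypothetical collaborator and service, for illustration only
public interface MailClient {
    boolean deliver(String report);
}

public class ReportService {

    @Autowired
    private MailClient mailClient; // private field, no setter

    public boolean send(String report) {
        return mailClient.deliver(report);
    }
}

@Test
public void testToInjectPrivateAutowiredField(@Mocked MailClient mailClient) {
    ReportService service = new ReportService();

    // inject the mock into the private field, no setter or Spring context needed
    Deencapsulation.setField(service, "mailClient", mailClient);

    new Expectations() {{
        mailClient.deliver("monthly report"); result = true;
    }};

    assertTrue(service.send("monthly report"));
}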

3. Mocking Multiple Interfaces in a Single Mock

Let’s assume that you want to test a class – not yet implemented – but you know for sure that it will implement several interfaces.

Usually, you wouldn’t be able to test said class before implementing it, but with JMockit you have the ability to prepare tests beforehand by mocking more than one interface using one mock object.

This can be achieved by using generics and defining a type that extends several interfaces. This generic type can be either defined for a whole test class or for just one test method.

For example, we’re going to create a mock for the interfaces List and Comparable, in two ways:

@RunWith(JMockit.class)
public class AdvancedCollaboratorTest<MultiMock
  extends List<String> & Comparable<List<String>>> {
    
    @Mocked
    private MultiMock multiMock;
    
    @Test
    public void testOnClass() {
        new Expectations() {{
            multiMock.get(5); result = "foo";
            multiMock.compareTo((List<String>) any); result = 0;
        }};
        assertEquals("foo", multiMock.get(5));
        assertEquals(0, multiMock.compareTo(new ArrayList<>()));
    }

    @Test
    public <M extends List<String> & Comparable<List<String>>>
      void testOnMethod(@Mocked M mock) {
        new Expectations() {{
            mock.get(5); result = "foo";
            mock.compareTo((List<String>) any); result = 0; 
        }};
        assertEquals("foo", mock.get(5));
        assertEquals(0, mock.compareTo(new ArrayList<>()));
    }
}

As you can see, we can define a new type for the whole test class by using generics on the class name. That way, MultiMock will be available as a type and you’ll be able to create mocks for it using any of JMockit’s annotations.

In the first test, testOnClass(), we can see an example using a mock of the multi-interface type defined for the whole test class.

If you need the multi-interface mock for just one test, you can achieve this by defining the generic type on the method signature and passing a new mock of that generic type as the test method argument. In the second test, testOnMethod(), we can see an example of doing so for the same tested behavior as in the previous test.

4. Reusing Expectations and Verifications

Finally, when testing classes, you may encounter cases where you’re repeating the same Expectations and/or Verifications over and over. To ease that, you can easily reuse both.

We’re going to explain it by an example (we’re using the classes Model, Collaborator, and Performer from our JMockit 101 article):

@RunWith(JMockit.class)
public class ReusingTest {

    @Injectable
    private Collaborator collaborator;
    
    @Mocked
    private Model model;

    @Tested
    private Performer performer;
    
    @Before
    public void setup(){
        new Expectations(){{
           model.getInfo(); result = "foo"; minTimes = 0;
           collaborator.collaborate("foo"); result = true; minTimes = 0; 
        }};
    }

    @Test
    public void testWithSetup() {
        performer.perform(model);
        verifyTrueCalls(1);
    }
    
    protected void verifyTrueCalls(int calls){
        new Verifications(){{
           collaborator.receive(true); times = calls; 
        }};
    }
    
    final class TrueCallsVerification extends Verifications{
        public TrueCallsVerification(int calls){
            collaborator.receive(true); times = calls; 
        }
    }
    
    @Test
    public void testWithFinalClass() {
        performer.perform(model);
        new TrueCallsVerification(1);
    }
}

In this example, you can see in the setup() method that we’re preparing an expectation for every test, so that model.getInfo() always returns “foo” and collaborator.collaborate() always expects “foo” as the argument and returns true. We add the minTimes = 0 statement so that no failures appear when a test doesn’t actually use those expectations.

Also, we’ve created method verifyTrueCalls(int) to simplify verifications to the collaborator.receive(boolean) method when the passed argument is true.

Lastly, you can also create new types of specific expectations and verifications by extending the Expectations or Verifications classes. You then define a constructor if you need to configure the behavior, and create a new instance of said type in a test, as we do with the TrueCallsVerification class in testWithFinalClass().

5. Conclusion

With this installment of the JMockit series, we have touched on several advanced topics that will definitely help you with everyday mocking and testing.

We may do more articles on JMockit, so stay tuned to learn even more.

And, as always, the full implementation of this tutorial can be found on GitHub.

5.1. Articles in the Series

All articles of the series:

A Guide to Mapping With Dozer


The Master Class of "Learn Spring Security" is live:

>> CHECK OUT THE COURSE

1. Overview

Dozer is a Java Bean to Java Bean mapper that recursively copies data from one object to another, attribute by attribute.

The library not only supports mapping between attribute names of Java Beans, but also automatically converts between types – if they’re different.

Most conversion scenarios are supported out of the box, but Dozer also allows you to specify custom conversions via XML.

2. Simple Example

For our first example, let’s assume that the source and destination data objects all share the same common attribute names.

This is the most basic mapping one can do with Dozer:

public class Source {
    private String name;
    private int age;

    public Source() {}

    public Source(String name, int age) {
        this.name = name;
        this.age = age;
    }
    
    // standard getters and setters
}

Then our destination file, Dest.java:

public class Dest {
    private String name;
    private int age;

    public Dest() {}

    public Dest(String name, int age) {
        this.name = name;
        this.age = age;
    }
    
    // standard getters and setters
}

We need to make sure to include a default (zero-argument) constructor, since Dozer uses reflection under the hood.

And, for performance purposes, let’s make our mapper global and create a single object we’ll use throughout our tests:

DozerBeanMapper mapper;

@Before
public void before() throws Exception {
    mapper = new DozerBeanMapper();
}

Now, let’s run our first test to confirm that when we create a Source object, we can map it directly onto a Dest object:

@Test
public void givenSourceObjectAndDestClass_whenMapsSameNameFieldsCorrectly_
  thenCorrect() {
    Source source = new Source("Baeldung", 10);
    Dest dest = mapper.map(source, Dest.class);

    assertEquals(dest.getName(), "Baeldung");
    assertEquals(dest.getAge(), 10);
}

As we can see, after the Dozer mapping, the result will be a new instance of the Dest object that contains values for all fields that have the same field name as the Source object.

Alternatively, instead of passing mapper the Dest class, we could just have created the Dest object and passed mapper its reference:

@Test
public void givenSourceObjectAndDestObject_whenMapsSameNameFieldsCorrectly_
  thenCorrect() {
    Source source = new Source("Baeldung", 10);
    Dest dest = new Dest();
    mapper.map(source, dest);

    assertEquals(dest.getName(), "Baeldung");
    assertEquals(dest.getAge(), 10);
}

3. Maven Setup

Now that we have a basic understanding of how Dozer works, let’s add the following dependency to the pom.xml:

<dependency>
    <groupId>net.sf.dozer</groupId>
    <artifactId>dozer</artifactId>
    <version>5.5.1</version>
</dependency>

The latest version is available here.

4. Data Conversion Example

As we already know, Dozer can map an existing object to another as long as it finds attributes of the same name in both classes.

However, that’s not always the case; and so, if any of the mapped attributes are of different data types, the Dozer mapping engine will automatically perform a data type conversion.

Let’s see this new concept in action:

public class Source2 {
    private String id;
    private double points;

    public Source2() {}

    public Source2(String id, double points) {
        this.id = id;
        this.points = points;
    }
    
    // standard getters and setters
}

And the destination class:

public class Dest2 {
    private int id;
    private int points;

    public Dest2() {}

    public Dest2(int id, int points) {
        super();
        this.id = id;
        this.points = points;
    }
    
    // standard getters and setters
}

Notice that the attribute names are the same but their data types are different.

In the source class, id is a String and points is a double, whereas in the destination class, id and points are both integers.

Let’s now see how Dozer correctly handles the conversion:

@Test
public void givenSourceAndDestWithDifferentFieldTypes_
  whenMapsAndAutoConverts_thenCorrect() {
    Source2 source = new Source2("320", 15.2);
    Dest2 dest = mapper.map(source, Dest2.class);

    assertEquals(dest.getId(), 320);
    assertEquals(dest.getPoints(), 15);
}

We passed “320” and 15.2, a String and a double, into the source object, and the result had 320 and 15, both integers, in the destination object.

5. Basic Custom Mappings Via XML

In all the previous examples we have seen, both the source and destination data objects have the same field names, which allows for easy mapping on our side.

However, in real world applications, there will be countless times where the two data objects we’re mapping won’t have fields that share a common property name.

To solve this, Dozer gives us an option to create a custom mapping configuration in XML.

In this XML file, we can define class mapping entries which the Dozer mapping engine will use to decide what source attribute to map to what destination attribute.

Let’s have a look at an example: we’ll try mapping data objects from an application built by a French programmer onto objects that follow an English style of naming.

We have a Person object with name, nickname and age fields:

public class Person {
    private String name;
    private String nickname;
    private int age;

    public Person() {}

    public Person(String name, String nickname, int age) {
        super();
        this.name = name;
        this.nickname = nickname;
        this.age = age;
    }
    
    // standard getters and setters
}

The object we are unmarshalling is named Personne and has fields nom, surnom and age:

public class Personne {
    private String nom;
    private String surnom;
    private int age;

    public Personne() {}

    public Personne(String nom, String surnom, int age) {
        super();
        this.nom = nom;
        this.surnom = surnom;
        this.age = age;
    }
    
    // standard getters and setters
}

These objects really achieve the same purpose but we have a language barrier. In order to help with that barrier, we can use Dozer to map the French Personne object to our Person object.

We only have to create a custom mapping file to help Dozer do this; we’ll call it dozer_mapping.xml:

<?xml version="1.0" encoding="UTF-8"?>
<mappings xmlns="http://dozer.sourceforge.net" 
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://dozer.sourceforge.net
      http://dozer.sourceforge.net/schema/beanmapping.xsd">
    <mapping>
        <class-a>com.baeldung.dozer.Personne</class-a>
        <class-b>com.baeldung.dozer.Person</class-b>
        <field>
            <a>nom</a>
            <b>name</b>
        </field>
        <field>
            <a>surnom</a>
            <b>nickname</b>
        </field>
    </mapping>
</mappings>

This is the simplest example of a custom XML mapping file we can have.

For now, it’s enough to notice that we have <mappings> as our root element, which has a child <mapping>; we can have as many of these children inside <mappings> as there are class pairs that need custom mapping.

Notice also how we specify the source and destination classes inside the <mapping></mapping> tags. This is followed by a <field></field> for each source and destination field pair that needs custom mapping.

Finally, notice that we have not included the field age in our custom mapping file. The French word for age is still age, which brings us to another important feature of Dozer.

Properties that are of the same name do not need to be specified in the mapping XML file. Dozer automatically maps all fields with the same property name from the source object into the destination object.

We will then place our custom XML file on the classpath directly under the src folder. However, wherever we place it on the classpath, Dozer will search the entire classpath looking for the specified file.

Let's create a helper method to add mapping files to our mapper:

public void configureMapper(String... mappingFileUrls) {
    mapper.setMappingFiles(Arrays.asList(mappingFileUrls));
}

Let’s now test the code:

@Test
public void givenSrcAndDestWithDifferentFieldNamesWithCustomMapper_
  whenMaps_thenCorrect() {
    configureMapper("dozer_mapping.xml");
    Personne frenchAppPerson = new Personne("Sylvester Stallone", "Rambo", 70);
    Person englishAppPerson = mapper.map(frenchAppPerson, Person.class);

    assertEquals(englishAppPerson.getName(), frenchAppPerson.getNom());
    assertEquals(englishAppPerson.getNickname(), frenchAppPerson.getSurnom());
    assertEquals(englishAppPerson.getAge(), frenchAppPerson.getAge());
}

As shown in the test, DozerBeanMapper accepts a list of custom XML mapping files and decides when to use each at runtime.

Assume we now start mapping these data objects back and forth between our English app and the French app. We don’t need to create another mapping in the XML file; Dozer is smart enough to map the objects both ways with only one mapping configuration:

@Test
public void givenSrcAndDestWithDifferentFieldNamesWithCustomMapper_
  whenMapsBidirectionally_thenCorrect() {
    configureMapper("dozer_mapping.xml");
    Person englishAppPerson = new Person("Dwayne Johnson", "The Rock", 44);
    Personne frenchAppPerson = mapper.map(englishAppPerson, Personne.class);

    assertEquals(frenchAppPerson.getNom(), englishAppPerson.getName());
    assertEquals(frenchAppPerson.getSurnom(),englishAppPerson.getNickname());
    assertEquals(frenchAppPerson.getAge(), englishAppPerson.getAge());
}

This example test exercises another feature of Dozer – the fact that the Dozer mapping engine is bi-directional, so if we want to map the destination object back to the source object, we do not need to add another class mapping to the XML file.

We can also load a custom mapping file from outside the classpath, if we need to, using the “file:” prefix in the resource name.

On a Windows environment (such as the test below), we’ll of course use the Windows-specific file syntax.

On a Linux box, we may store the file under /home and then:

configureMapper("file:/home/dozer_mapping.xml");

And on Mac OS:

configureMapper("file:/Users/me/dozer_mapping.xml");

If you are running the unit tests from the GitHub project (which you should), you can copy the mapping file to the appropriate location and change the input of the configureMapper method.

The mapping file is available under the test/resources folder of the GitHub project:

@Test
public void givenMappingFileOutsideClasspath_whenMaps_thenCorrect() {
    configureMapper("file:E:\\dozer_mapping.xml");
    Person englishAppPerson = new Person("Marshall Bruce Mathers III","Eminem", 43);
    Personne frenchAppPerson = mapper.map(englishAppPerson, Personne.class);

    assertEquals(frenchAppPerson.getNom(), englishAppPerson.getName());
    assertEquals(frenchAppPerson.getSurnom(),englishAppPerson.getNickname());
    assertEquals(frenchAppPerson.getAge(), englishAppPerson.getAge());
}

6. Wildcards and Further XML Customization

Let’s create a second custom mapping file called dozer_mapping2.xml:

<?xml version="1.0" encoding="UTF-8"?>
<mappings xmlns="http://dozer.sourceforge.net" 
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://dozer.sourceforge.net 
      http://dozer.sourceforge.net/schema/beanmapping.xsd">
    <mapping wildcard="false">
        <class-a>com.baeldung.dozer.Personne</class-a>
        <class-b>com.baeldung.dozer.Person</class-b>
        <field>
            <a>nom</a>
            <b>name</b>
        </field>
        <field>
            <a>surnom</a>
            <b>nickname</b>
        </field>
    </mapping>
</mappings>

Notice that we have added a wildcard attribute to the <mapping></mapping> element, which was not there before.

By default, wildcard is true. It tells the Dozer engine that we want all fields in the source object to be mapped to their appropriate destination fields.

When we set it to false, we are telling Dozer to only map fields we have explicitly specified in the XML.

So in the above configuration, we only want two fields mapped, leaving out age:

@Test
public void givenSrcAndDest_whenMapsOnlySpecifiedFields_thenCorrect() {
    configureMapper("dozer_mapping2.xml");
    Person englishAppPerson = new Person("Shawn Corey Carter","Jay Z", 46);
    Personne frenchAppPerson = mapper.map(englishAppPerson, Personne.class);

    assertEquals(frenchAppPerson.getNom(), englishAppPerson.getName());
    assertEquals(frenchAppPerson.getSurnom(),englishAppPerson.getNickname());
    assertEquals(frenchAppPerson.getAge(), 0);
}

As we can see in the last assertion, the destination age field remained 0.

7. Custom Mapping Via Annotations

For simple mapping cases and cases where we also have write access to the data objects we would like to map, we may not need to use XML mapping.

Mapping differently-named fields via annotations is very simple – we have to write much less code than with XML mapping – but it can only help us in simple cases.

Let’s replicate our data objects into Person2.java and Personne2.java without changing the fields at all.

To implement this, we only need to add the @Mapping(“destinationFieldName”) annotation to the getter methods in the source object, like so:

@Mapping("name")
public String getNom() {
    return nom;
}

@Mapping("nickname")
public String getSurnom() {
    return surnom;
}

This time we are treating Personne2 as the source, but it does not matter due to the bi-directional nature of the Dozer Engine.

Now, with all the XML-related code stripped out, our test code is shorter:

@Test
public void givenAnnotatedSrcFields_whenMapsToRightDestField_thenCorrect() {
    Person2 englishAppPerson = new Person2("Jean-Claude Van Damme", "JCVD", 55);
    Personne2 frenchAppPerson = mapper.map(englishAppPerson, Personne2.class);

    assertEquals(frenchAppPerson.getNom(), englishAppPerson.getName());
    assertEquals(frenchAppPerson.getSurnom(), englishAppPerson.getNickname());
    assertEquals(frenchAppPerson.getAge(), englishAppPerson.getAge());
}

We can also test for bi-directionality:

@Test
public void givenAnnotatedSrcFields_whenMapsToRightDestFieldBidirectionally_
  thenCorrect() {
    Personne2 frenchAppPerson = new Personne2("Jason Statham", "transporter", 49);
    Person2 englishAppPerson = mapper.map(frenchAppPerson, Person2.class);

    assertEquals(englishAppPerson.getName(), frenchAppPerson.getNom());
    assertEquals(englishAppPerson.getNickname(), frenchAppPerson.getSurnom());
    assertEquals(englishAppPerson.getAge(), frenchAppPerson.getAge());
}

8. Custom API Mapping

In our previous examples, where we were unmarshalling data objects from a French application, we used XML and annotations to customize our mapping.

Another alternative available in Dozer, similar to annotation mapping, is API mapping. They are similar in that we eliminate XML configuration and strictly use Java code.

In this case, we use the BeanMappingBuilder class, defined in our simplest case like so:

BeanMappingBuilder builder = new BeanMappingBuilder() {
    @Override
    protected void configure() {
        mapping(Person.class, Personne.class)
          .fields("name", "nom")
            .fields("nickname", "surnom");
    }
};

As we can see, we have an abstract method, configure(), which we must override to define our configurations. Then, just like our <mapping></mapping> tags in XML, we define as many TypeMappingBuilders as we require.

These builders tell Dozer which source fields we are mapping to which destination fields. We then pass the BeanMappingBuilder to the DozerBeanMapper as we would the XML mapping file, only with a different API:

@Test
public void givenApiMapper_whenMaps_thenCorrect() {
    mapper.addMapping(builder);
 
    Personne frenchAppPerson = new Personne("Sylvester Stallone", "Rambo", 70);
    Person englishAppPerson = mapper.map(frenchAppPerson, Person.class);

    assertEquals(englishAppPerson.getName(), frenchAppPerson.getNom());
    assertEquals(englishAppPerson.getNickname(), frenchAppPerson.getSurnom());
    assertEquals(englishAppPerson.getAge(), frenchAppPerson.getAge());
}

The mapping API is also bi-directional:

@Test
public void givenApiMapper_whenMapsBidirectionally_thenCorrect() {
    mapper.addMapping(builder);
 
    Person englishAppPerson = new Person("Sylvester Stallone", "Rambo", 70);
    Personne frenchAppPerson = mapper.map(englishAppPerson, Personne.class);

    assertEquals(frenchAppPerson.getNom(), englishAppPerson.getName());
    assertEquals(frenchAppPerson.getSurnom(), englishAppPerson.getNickname());
    assertEquals(frenchAppPerson.getAge(), englishAppPerson.getAge());
}

Or we can choose to only map explicitly specified fields with this builder configuration:

BeanMappingBuilder builderMinusAge = new BeanMappingBuilder() {
    @Override
    protected void configure() {
        mapping(Person.class, Personne.class)
          .fields("name", "nom")
            .fields("nickname", "surnom")
              .exclude("age");
    }
};

and our age==0 test is back:

@Test
public void givenApiMapper_whenMapsOnlySpecifiedFields_thenCorrect() {
    mapper.addMapping(builderMinusAge); 
    Person englishAppPerson = new Person("Sylvester Stallone", "Rambo", 70);
    Personne frenchAppPerson = mapper.map(englishAppPerson, Personne.class);

    assertEquals(frenchAppPerson.getNom(), englishAppPerson.getName());
    assertEquals(frenchAppPerson.getSurnom(), englishAppPerson.getNickname());
    assertEquals(frenchAppPerson.getAge(), 0);
}

9. Custom Converters

Another scenario we may face in mapping is where we would like to perform custom mapping between two objects.

We have looked at scenarios where the source and destination field names are different, like in the French Personne object. This section solves a different problem.

What if the data object we're unmarshalling represents its date-and-time field as a long (Unix time), like so:

1182882159000

But our own equivalent data object represents the same date-and-time value as a String in ISO format:

2007-06-26T21:22:39Z

The default converter would simply map the long value to a String like so:

"1182882159000"

This would definitely introduce a bug in our app. So how do we solve it? By adding a configuration block to the mapping XML file and specifying our own converter.

First, let’s replicate the remote application’s person DTO, with a name and a date-and-time-of-birth field, dtob:

public class Personne3 {
    private String name;
    private long dtob;

    public Personne3(String name, long dtob) {
        super();
        this.name = name;
        this.dtob = dtob;
    }
    
    // standard getters and setters
}

and here is our own:

public class Person3 {
    private String name;
    private String dtob;

    public Person3(String name, String dtob) {
        super();
        this.name = name;
        this.dtob = dtob;
    }
    
    // standard getters and setters
}

Notice the type difference of dtob in the source and destination DTOs.

Let’s also create our own CustomConverter to pass to Dozer in the mapping XML:

public class MyCustomConvertor implements CustomConverter {
    @Override
    public Object convert(Object dest, Object source, Class<?> destClass, Class<?> sourceClass) {
        if (source == null) {
            return null;
        }

        if (source instanceof Personne3) {
            Personne3 person = (Personne3) source;
            Date date = new Date(person.getDtob());
            DateFormat format = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'");
            String isoDate = format.format(date);
            return new Person3(person.getName(), isoDate);
        } else if (source instanceof Person3) {
            Person3 person = (Person3) source;
            DateFormat format = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'");
            try {
                // parse() throws a checked ParseException, so we wrap it
                Date date = format.parse(person.getDtob());
                return new Personne3(person.getName(), date.getTime());
            } catch (ParseException e) {
                throw new MappingException("Converter MyCustomConvertor used incorrectly", e);
            }
        }
        // every branch must return or throw; an unexpected type is an error
        throw new MappingException("Converter MyCustomConvertor used with unexpected source type: "
          + source.getClass());
    }
}

We only have to override the convert() method and return whatever we want from it. The source and destination objects and their class types are made available to us.

Notice how we have taken care of bi-directionality by assuming the source can be either of the two classes we are mapping.

We will create a new mapping file for clarity, dozer_custom_convertor.xml:

<?xml version="1.0" encoding="UTF-8"?>
<mappings xmlns="http://dozer.sourceforge.net" 
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://dozer.sourceforge.net
      http://dozer.sourceforge.net/schema/beanmapping.xsd">
    <configuration>
        <custom-converters>
            <converter type="com.baeldung.dozer.MyCustomConvertor">
                <class-a>com.baeldung.dozer.Personne3</class-a>
                <class-b>com.baeldung.dozer.Person3</class-b>
            </converter>
        </custom-converters>
    </configuration>
</mappings>

This is the normal mapping file we have seen in preceding sections; we have only added a <configuration></configuration> block, within which we can define as many custom converters as we require, each with its source and destination data classes.

Let’s test our new CustomConverter code:

@Test
public void givenSrcAndDestWithDifferentFieldTypes_whenAbleToCustomConvert_
  thenCorrect() {

    configureMapper("dozer_custom_convertor.xml");
    String dateTime = "2007-06-26T21:22:39Z";
    long timestamp = 1182882159000L;
    Person3 person = new Person3("Rich", dateTime);
    Personne3 person0 = mapper.map(person, Personne3.class);

    assertEquals(timestamp, person0.getDtob());
}

We can also test to ensure it is bi-directional:

@Test
public void givenSrcAndDestWithDifferentFieldTypes_
  whenAbleToCustomConvertBidirectionally_thenCorrect() {
    configureMapper("dozer_custom_convertor.xml");
    String dateTime = "2007-06-26T21:22:39Z";
    long timestamp = 1182882159000L;
    Personne3 person = new Personne3("Rich", timestamp);
    Person3 person0 = mapper.map(person, Person3.class);

    assertEquals(dateTime, person0.getDtob());
}

10. Conclusion

In this tutorial, we have introduced most of the basics of the Dozer Mapping library and how to use it in our applications.

The full implementation of all these examples and code snippets can be found in the Dozer GitHub project.



Java Web Weekly, Issue 136


At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> Groovy for Java Developers?! Meet Gradle, Grails and Spock [takipi.com]

A good intro to Groovy and the many tools on that side of the ecosystem.

I’ve been selectively using some of these tools in my day-to-day work, but there’s a whole bunch of tools I haven’t tried out yet that look potentially quite useful.

>> How to fetch multiple entities by id with Hibernate 5 [thoughts-on-java.org]

A basic operation that I and most of the ORM-using world have needed at some point or another. A very nice addition to Hibernate.

>> Resizing the HashMap: dangers ahead [plumbr.eu]

The HashMap is the workhorse of so many Java codebases that it’s not even funny.

So, whether you’re using it as a blunt tool or as a sharp instrument, you definitely need to understand it well. A solid writeup overall.

>> SpringOne Platform 2016 Recap: Day 1 [spring.io] and >> SpringOne Platform 2016 Recap: Day 2

A bit of fun from SpringOne.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> DDD Decoded – Entities and Value Objects Explained [sapiensworks.com]

Another solid intro to DDD article here. This series is shaping up to be great reference material.

>> Writing OpenAPI (Swagger) Specification Tutorial – Part 8 – Splitting specification file [apihandyman.io]

I thoroughly enjoy this deep-dive into Swagger – the entire series is chock full of solid info, and these last few installments have been exploring some aspects of Swagger I had no idea about. Very cool.

Also worth reading:

3. Musings

>> Hiring Engineers [dandreamsofcoding.com]

A high level intro to hiring engineers that’s well worth reading.

There are definitely a lot of ways you can go about the process – some better than others – but it’s worth understanding that some of the traditional approaches can work if done well.

>> The Human Cost of Tech Debt [daedtech.com]

Unmanaged technical debt goes way beyond just the technical downsides and always has a deep impact on teams.

And given enough time, it will give a strong nudge to developers to get past the unpleasantness of looking for a new job.

>> Combine smart people with crazily hard projects [lemire.me]

Some interesting musings on the huge benefits of stepping out of your comfort zone, tackling a hard problem and getting help.

>> Is Your Source Control Usage Conducive to Code Review? [daedtech.com]

That is a fantastic question to ask. And the answer to it is ultimately rooted in discipline and respect for your team, trying to make the review job easier.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Breakout groups to fantasize about being relevant [dilbert.com]

>> I love getting rich at your expense … and golfing [dilbert.com]

>> I can’t remember if we’re cheap or smart [dilbert.com]

5. Pick of the Week

>> Keep earning your title, or it expires [sivers.org]


A Guide to Spring Cloud Configuration


1. Overview

Spring Cloud Config is Spring’s client/server approach for storing and serving distributed configurations across multiple applications and environments.

This configuration store is ideally versioned under Git version control and can be modified at application runtime. While it fits very well in Spring applications using all the supported configuration file formats together with constructs like Environment, PropertySource or @Value, it can be used in any environment running any programming language.

In this write-up, we’ll focus on an example of how to setup a Git-backed config server, use it in a simple REST application server and setup a secured environment including encrypted property values.

2. Project Setup and Dependencies

To get ready for writing some code, we first create two new Maven projects. The server project relies on the spring-cloud-config-server module, as well as the spring-boot-starter-security and spring-boot-starter-web starter bundles:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-config-server</artifactId>
    <version>1.1.2.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-security</artifactId>
    <version>1.4.0.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <version>1.4.0.RELEASE</version>
</dependency>

For the client project, however, we only need the spring-cloud-starter-config and the spring-boot-starter-web modules:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-config</artifactId>
    <version>1.1.2.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <version>1.4.0.RELEASE</version>
</dependency>

3. A Config Server Implementation

The main part of the application is a config class – more specifically a @SpringBootApplication – which pulls in all the required setup through the auto-configure annotation @EnableConfigServer.

To secure our config server with Basic-Authentication, we additionally annotate the class with @EnableWebSecurity:

@SpringBootApplication
@EnableConfigServer
@EnableWebSecurity
public class ConfigServer {
    
    public static void main(String[] arguments) {
        SpringApplication.run(ConfigServer.class, arguments);
    }
}

Now we need to configure the server port on which our server is listening and a Git-url which provides our version-controlled configuration content. The latter can be used with protocols like http, ssh or a simple file on a local filesystem.

Tip: If you are planning to use multiple config server instances pointing to the same config repository, you can configure the server to clone your repo into a local temporary folder. But be aware of private repositories with two-factor authentication; they are difficult to handle! In such a case, it is easier to clone them on your local filesystem and work with the copy.
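For example, a minimal sketch of such a setup in the server’s application.properties, assuming the git.basedir property of Spring Cloud Config 1.x:

spring.cloud.config.server.git.uri=ssh://localhost/config-repo
spring.cloud.config.server.git.basedir=/tmp/config-repo-clone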

There are also some placeholder variables and search patterns for configuring the repository-url available; but this is beyond the scope of our article. If you are interested, the official documentation is a good place to start.

We also need to set a username and a password for the Basic-Authentication in our application.properties to avoid an auto-generated password on every application restart:

server.port=8888
spring.cloud.config.server.git.uri=ssh://localhost/config-repo
spring.cloud.config.server.git.clone-on-start=true
security.user.name=root
security.user.password=s3cr3t

4. A Git Repository as Configuration Storage

To complete our server, we have to initialize a Git repository under the configured URL, create some new properties files and populate them with some values.

The name of the configuration file is composed like a normal Spring application.properties, but the word ‘application’ is replaced by a configured name, e.g. the value of the client’s ‘spring.application.name’ property, followed by a dash and the active profile. For example:

$> git init
$> echo 'user.role=Developer' > config-client-development.properties
$> echo 'user.role=User'      > config-client-production.properties
$> git add .
$> git commit -m 'Initial config-client properties'

Troubleshooting: If you run into ssh-related authentication issues, double check ~/.ssh/known_hosts and ~/.ssh/authorized_keys on your ssh server!

5. Querying the Configuration

Now we’re able to start our server. The Git-backed configuration API provided by our server can be queried using the following paths:

/{application}/{profile}[/{label}]
/{application}-{profile}.yml
/{label}/{application}-{profile}.yml
/{application}-{profile}.properties
/{label}/{application}-{profile}.properties

Here the {label} placeholder refers to a Git branch, {application} to the client’s application name and {profile} to the client’s current active application profile.

So we can retrieve the configuration for our planned config client running under the development profile in branch master via:

$> curl http://root:s3cr3t@localhost:8888/config-client/development/master
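The response is a JSON document listing the matching property sources. A trimmed sketch of what it may look like (the exact source names and any version hash will differ with your repository):

{
  "name": "config-client",
  "profiles": ["development"],
  "label": "master",
  "propertySources": [{
    "name": "ssh://localhost/config-repo/config-client-development.properties",
    "source": {
      "user.role": "Developer"
    }
  }]
}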

6. The Client Implementation

Next, let’s take care of the client. This will be a very simple client application, consisting of a REST controller with one GET method.

The configuration to fetch our server must be placed in a resource file named bootstrap.properties, because this file (as its name implies) is loaded very early in the application’s startup:

@SpringBootApplication
@RestController
public class ConfigClient {
    
    @Value("${user.role}")
    private String role;

    public static void main(String[] args) {
        SpringApplication.run(ConfigClient.class, args);
    }

    @RequestMapping(
      value = "/whoami/{username}", 
      method = RequestMethod.GET, 
      produces = MediaType.TEXT_PLAIN_VALUE)
    public String whoami(@PathVariable("username") String username) {
        return String.format("Hello! You're %s and you'll become a(n) %s...\n",
          username, role);
    }
}

In addition to the application name, we also put the active profile and the connection-details in our bootstrap.properties:

spring.application.name=config-client
spring.profiles.active=development
spring.cloud.config.uri=http://localhost:8888
spring.cloud.config.username=root
spring.cloud.config.password=s3cr3t

To test whether the configuration is properly received from our server and the role value gets injected into our controller method, we simply curl the client after booting it:

$> curl http://localhost:8080/whoami/Mr_Pink

If the response is as follows, our Spring Cloud Config Server and its client are working fine for now:

Hello! You're Mr_Pink and you'll become a(n) Developer...

7. Encryption and Decryption

Requirement: To use cryptographically strong keys together with Spring encryption and decryption features you need the ‘Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files’ installed in your JVM. These can be downloaded for example from Oracle. To install follow the instructions included in the download. Some Linux distributions also provide an installable package through their package managers.

Since the config server supports encryption and decryption of property values, you can use public repositories as storage for sensitive data like usernames and passwords. Encrypted values are prefixed with the string {cipher} and can be generated by a REST call to the path ‘/encrypt’, if the server is configured to use a symmetric key or a key pair.

An endpoint to decrypt is also available. Both endpoints accept a path containing placeholders for the name of the application and its current profile: ‘/*/{name}/{profile}’. This is especially useful for controlling cryptography per client. However, before they become useful, you have to configure a cryptographic key, which we will do in the next section.

Tip: If you use curl to call the en-/decryption API, it’s better to use the --data-urlencode option (instead of --data/-d), or to set the ‘Content-Type’ header explicitly to ‘text/plain’. This ensures correct handling of special characters like ‘+’ in the encrypted values.
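As a quick illustration of that tip, and assuming a key is already configured as shown in the next section, an encrypt/decrypt round trip would look roughly like this (the cipher text shown is a placeholder):

$> curl -X POST --data-urlencode mysecret \
       http://root:s3cr3t@localhost:8888/encrypt
AQAeCa...

$> curl -X POST --data-urlencode 'AQAeCa...' \
       http://root:s3cr3t@localhost:8888/decrypt
mysecret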

If a value can’t be decrypted automatically while being fetched through the client, its key is renamed to the name itself, prefixed by the word ‘invalid’. This should prevent, for example, the usage of an encrypted value as a password.

Tip: When setting up a repository containing YAML files, you have to surround your encrypted and prefixed values with single quotes! With properties files this is not the case.

7.1. Key Management

By default, the config server is able to encrypt property values symmetrically or asymmetrically.

To use symmetric cryptography, you simply have to set the property ‘encrypt.key’ in your application.properties to a secret of your choice. Alternatively, you can pass in the environment variable ENCRYPT_KEY.
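A symmetric setup is thus a one-liner in application.properties; the key value here is, of course, just a placeholder:

encrypt.key=my-symmetric-s3cr3t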

For asymmetric cryptography, you can set ‘encrypt.key’ to a PEM-encoded string value or configure a keystore to use.

Because we need a highly secured environment for our demo server, we choose the latter option and generate a new keystore, including an RSA key pair, with the Java keytool first:

$> keytool -genkeypair -alias config-server-key \
       -keyalg RSA -keysize 4096 -sigalg SHA512withRSA \
       -dname 'CN=Config Server,OU=Spring Cloud,O=Baeldung' \
       -keypass my-k34-s3cr3t -keystore config-server.jks \
       -storepass my-s70r3-s3cr3t

After that, we add the created keystore to our server’s application.properties and restart the server:

encrypt.key-store.location=classpath:/config-server.jks
encrypt.key-store.password=my-s70r3-s3cr3t
encrypt.key-store.alias=config-server-key
encrypt.key-store.secret=my-k34-s3cr3t

As a next step, we can query the encryption endpoint and add the response as a value to a configuration in our repository:

$> export PASSWORD=$(curl -X POST --data-urlencode d3v3L \
       http://root:s3cr3t@localhost:8888/encrypt)
$> echo "user.password=$PASSWORD" >> config-client-development.properties
$> git commit -am 'Added encrypted password'
$> curl -X POST http://root:s3cr3t@localhost:8888/refresh

To test whether our setup works correctly, we modify the ConfigClient class and restart our client:

@SpringBootApplication
@RestController
public class ConfigClient {

    ...
    
    @Value("${user.password}")
    private String password;

    ...
    public String whoami(@PathVariable("username") String username) {
        return String.format("Hello! You're %s and you'll become a(n) %s, " +
          "but only if your password is '%s'!\n", username, role, password);
    }
}

A final query against our client will show whether our configuration value is being correctly decrypted:

$> curl http://localhost:8080/whoami/Mr_Pink
Hello! You're Mr_Pink and you'll become a(n) Developer, \
  but only if your password is 'd3v3L'!

7.2. Using Multiple Keys

If you want to use multiple keys for encryption and decryption (for example, a dedicated one for each served application), you can add another prefix in the form {name:value} between the {cipher} prefix and the Base64-encoded property value.

The config server understands prefixes like {secret:my-crypto-secret} or {key:my-key-alias} nearly out-of-the-box. The latter option needs a configured keystore in your application.properties. This keystore is searched for a matching key alias. For example:

user.password={cipher}{secret:my-499-s3cr3t}AgAMirj1DkQC0WjRv...
user.password={cipher}{key:config-client-key}AgAMirj1DkQC0WjRv...

For scenarios without a keystore, you have to implement a @Bean of type TextEncryptorLocator, which handles the lookup and returns a TextEncryptor object for each key.
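Here is a minimal sketch of such a bean, assuming Java 8 and the Encryptors helper from Spring Security’s crypto module; the secretFor lookup is a hypothetical helper you would supply yourself:

@Bean
public TextEncryptorLocator textEncryptorLocator() {
    // 'keys' holds the parsed {name:value} prefixes, e.g. key=my-key-alias
    return keys -> Encryptors.text(
      secretFor(keys.get("key")), // hypothetical per-key secret lookup
      "deadbeef");                // hex-encoded salt
}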

7.3. Serving Encrypted Properties

If you want to disable server-side cryptography and handle decryption of property-values locally, you can put the following in your server’s application.properties:

spring.cloud.config.server.encrypt.enabled=false

Furthermore, you can delete all the other ‘encrypt.*’ properties to disable the REST endpoints.

8. Conclusion

Now we are able to create a configuration server to provide a set of configuration files from a Git repository to client applications. There are a few other things you can do with such a server.

For example:

  • Serve configuration in YAML or properties format instead of JSON, also with placeholders resolved. This can be useful in non-Spring environments, where the configuration is not directly mapped to a PropertySource.
  • Serve plain text configuration files, in turn optionally with resolved placeholders. This can be useful, for example, to provide an environment-dependent logging configuration.
  • Embed the config server into an application, where it configures itself from a Git repository, instead of running as a standalone application serving clients. To do so, some bootstrap properties must be set and/or the @EnableConfigServer annotation must be removed, depending on the use case.
  • Make the config server available via Spring Netflix Eureka service discovery and enable automatic server discovery in config clients. This becomes important if the server has no fixed location or may move.

And to wrap up, you’ll find the source code to this article on GitHub.


A Guide to JaCoCo


1. Overview

Code coverage is a software metric used to measure how many lines of our code are executed during automated tests.

In this article, we’re going to stroll through some practical aspects of using JaCoCo, a code coverage report generator for Java projects.

2. Maven Configuration

In order to get up and running with JaCoCo, we need to declare the Maven plugin in our pom.xml file:

<plugin>
    <groupId>org.jacoco</groupId>
    <artifactId>jacoco-maven-plugin</artifactId>
    <version>0.7.7.201606060606</version>
    <executions>
        <execution>
            <goals>
                <goal>prepare-agent</goal>
            </goals>
        </execution>
        <execution>
            <id>report</id>
            <phase>prepare-package</phase>
            <goals>
                <goal>report</goal>
            </goals>
        </execution>
    </executions>
</plugin>

The link provided above will always lead you to the latest version of the plugin in the Maven Central repository.

3. Code Coverage Reports

Before we start looking at JaCoCo’s code coverage capabilities, we need to have a code sample. Here’s a simple Java function that checks whether a string reads the same backward and forward:

public boolean isPalindrome(String inputString) {
    if (inputString.length() == 0) {
        return true;
    } else {
        char firstChar = inputString.charAt(0);
        char lastChar = inputString.charAt(inputString.length() - 1);
        String mid = inputString.substring(1, inputString.length() - 1);
        return (firstChar == lastChar) && isPalindrome(mid);
    }
}

All we need now is a simple JUnit test:

@Test
public void whenEmptyString_thenAccept() {
    Palindrome palindromeTester = new Palindrome();
    assertTrue(palindromeTester.isPalindrome(""));
}

Running the test with JUnit will automatically set the JaCoCo agent in motion; it will create a coverage report in binary format in the target directory, target/jacoco.exec.

Obviously, we cannot interpret the binary output ourselves, but other tools and plugins can, e.g. SonarQube.

The good news is that we can use the jacoco:report goal in order to generate readable code coverage reports in several formats – e.g. HTML, CSV, and XML.

We can now take a look, for example, at the target/site/jacoco/index.html page to see what the generated report looks like:

[coverage report screenshot]

Following the Palindrome.java link provided in the report, we can drill into a more detailed view for each Java class.

Note that you can straightforwardly manage code coverage using JaCoCo inside Eclipse with zero configuration, thanks to EclEmma Eclipse plugin.

4. Report Analysis

Our report shows 21% instruction coverage, 17% branch coverage, 3/5 for cyclomatic complexity and so on.

The 38 instructions shown by JaCoCo in the report refer to bytecode instructions, as opposed to ordinary Java code statements.

JaCoCo reports help to visually analyze code coverage by using diamond colors for branches and background colors for lines:

  • A red diamond means that no branches have been exercised during the test phase.
  • A yellow diamond shows that the code is partially covered – some branches have not been exercised.
  • A green diamond means that all branches have been exercised during the test.

The same color code applies to the background color, but for line coverage.

JaCoCo mainly provides three important metrics:

  • Line coverage reflects the amount of code that has been exercised, based on the number of Java bytecode instructions called by the tests.
  • Branch coverage shows the percentage of exercised branches in the code – typically related to if/else and switch statements.
  • Cyclomatic complexity reflects the complexity of the code by giving the number of paths needed to cover all the possible paths through linear combination.

To take a trivial example, if there are no if or switch statements in the code, the cyclomatic complexity will be 1, as we only need one execution path to cover the entire code.

Generally, the cyclomatic complexity reflects the number of test cases we need to implement in order to cover the entire code.
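To make that concrete, here is a small illustrative sketch: square has a single execution path (complexity 1), while the if in abs adds a second path (complexity 2), so full branch coverage requires two test cases:

public int square(int x) {
    // no branches: one path, complexity 1
    return x * x;
}

public int abs(int x) {
    // one if: two paths, complexity 2
    if (x < 0) {
        return -x;
    }
    return x;
}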

5. Concept Breakdown

JaCoCo runs as a Java agent; it is responsible for instrumenting the bytecode while the tests run. JaCoCo drills into each instruction and shows which lines are exercised during each test.

To gather coverage data, JaCoCo uses ASM for code instrumentation on the fly, receiving events from the JVM Tool Interface in the process:

[diagram: JaCoCo instrumentation concept]

It is also possible to run the JaCoCo agent in server mode; in this case, we can run our tests with jacoco:dump as a goal to initiate a dump request.
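As a rough sketch, assuming the plugin’s standard jacoco.* user properties and the agent’s default TCP port 6300, that could look like:

# run the tests with the agent listening in TCP server mode
$> mvn clean test -Djacoco.output=tcpserver

# from another shell, pull the execution data into target/jacoco.exec
$> mvn jacoco:dump -Djacoco.address=localhost -Djacoco.port=6300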

You can follow the official documentation link for more in-depth details about JaCoCo design.

6. Code Coverage Score

Now that we know a bit about how JaCoCo works, let’s improve our code coverage score.

In order to achieve 100% code coverage, we need to introduce tests that cover the missing parts shown in the initial report:

@Test
public void whenPalindrome_thenAccept() {
    Palindrome palindromeTester = new Palindrome();
    assertTrue(palindromeTester.isPalindrome("noon"));
}
    
@Test
public void whenNotPalindrome_thenReject() {
    Palindrome palindromeTester = new Palindrome();
    assertFalse(palindromeTester.isPalindrome("neon"));
}

Now we can say that we have enough tests to cover the entire code, but to make sure, let’s run the Maven command mvn jacoco:report to publish the coverage report:

[coverage report screenshot]

As you can see, all lines/branches/paths in our code are fully covered:

[coverage report screenshot]

In a real-world project, as development goes on, we need to keep track of the code coverage score.

JaCoCo offers a simple way of declaring minimum requirements that should be met, otherwise the build will fail.

We can do that by adding the following check goal in our pom.xml file:

<execution>
    <id>jacoco-check</id>
    <goals>
        <goal>check</goal>
    </goals>
    <configuration>
        <rules>
            <rule>
                <element>PACKAGE</element>
                <limits>
                    <limit>
                        <counter>LINE</counter>
                        <value>COVEREDRATIO</value>
                        <minimum>0.50</minimum>
                    </limit>
                </limits>
            </rule>
        </rules>
    </configuration>
</execution>

As you can probably guess, we’re setting the minimum score for line coverage to 50% here.

The jacoco:check goal is bound to verify, so we can run the Maven command – mvn clean verify to check whether the rules are respected or not. The logs will show something like:

[ERROR] Failed to execute goal org.jacoco:jacoco-maven-plugin:0.7.7.201606060606:check 
  (jacoco-check) on project mutation-testing: Coverage checks have not been met.

7. Conclusion

In this article we’ve seen how to make use of JaCoCo maven plugin to generate code coverage reports for Java projects.

Keep in mind though, 100% code coverage does not necessarily reflect effective testing, as it only reflects the amount of code exercised during tests. In a previous article, we’ve talked about mutation testing as a more sophisticated way to track test effectiveness compared to ordinary code coverage.

You can check out the example provided in this article in the linked GitHub project.


Introduction To Orika


1. Overview

Orika is a Java Bean mapping framework that recursively copies data from one object to another. It can be very useful when developing multi-layered applications.

While moving data objects back and forth between these layers, it is common to find that we need to convert objects from one type to another to accommodate different APIs.

Some ways to achieve this are hard-coding the copying logic or implementing bean mappers like Dozer. Orika, however, can be used to simplify the process of mapping between one object layer and another.

Orika uses bytecode generation to create fast mappers with minimal overhead, making it much faster than other reflection-based mappers like Dozer.

2. Simple Example

The basic cornerstone of the mapping framework is the MapperFactory class. This is the class we will use to configure mappings and obtain the MapperFacade instance which performs the actual mapping work.

We create a MapperFactory object like so:

MapperFactory mapperFactory = new DefaultMapperFactory.Builder().build();

Then assuming we have a source data object, Source.java, with two fields:

public class Source {
    private String name;
    private int age;
    
    public Source(String name, int age) {
        this.name = name;
        this.age = age;
    }
    
    // standard getters and setters
}

And a similar destination data object, Dest.java:

public class Dest {
    private String name;
    private int age;
    
    public Dest(String name, int age) {
        this.name = name;
        this.age = age;
    }
    
    // standard getters and setters
}

This is the most basic bean mapping using Orika:

@Test
public void givenSrcAndDest_whenMaps_thenCorrect() {
    mapperFactory.classMap(Source.class, Dest.class);
    MapperFacade mapper = mapperFactory.getMapperFacade();
    Source src = new Source("Baeldung", 10);
    Dest dest = mapper.map(src, Dest.class);

    assertEquals(dest.getAge(), src.getAge());
    assertEquals(dest.getName(), src.getName());
}

As we can observe, we have created a Dest object with fields identical to Source, simply by mapping. Bidirectional or reverse mapping is also possible by default:

@Test
public void givenSrcAndDest_whenMapsReverse_thenCorrect() {
    mapperFactory.classMap(Source.class, Dest.class).byDefault();
    MapperFacade mapper = mapperFactory.getMapperFacade();
    Dest src = new Dest("Baeldung", 10);
    Source dest = mapper.map(src, Source.class);

    assertEquals(dest.getAge(), src.getAge());
    assertEquals(dest.getName(), src.getName());
}

3. Maven Setup

To use the Orika mapper in our Maven projects, we need to have the orika-core dependency in pom.xml:

<dependency>
    <groupId>ma.glasnost.orika</groupId>
    <artifactId>orika-core</artifactId>
    <version>1.4.6</version>
</dependency>

The latest version can always be found here.

4. Working With MapperFactory

The general pattern of mapping with Orika involves creating a MapperFactory object, configuring it in case we need to tweak the default mapping behavior, obtaining a MapperFacade object from it and, finally, doing the actual mapping.

We will observe this pattern in all our examples, though our very first example showed the default behavior of the mapper without any tweaks on our side.

4.1. The BoundMapperFacade vs MapperFacade

One thing to note is that we could choose BoundMapperFacade over the default MapperFacade, which is comparatively slow, in cases where we have a specific pair of types to map.

Our initial test would thus become:

@Test
public void givenSrcAndDest_whenMapsUsingBoundMapper_thenCorrect() {
    BoundMapperFacade<Source, Dest> 
      boundMapper = mapperFactory.getMapperFacade(Source.class, Dest.class);
    Source src = new Source("baeldung", 10);
    Dest dest = boundMapper.map(src);

    assertEquals(dest.getAge(), src.getAge());
    assertEquals(dest.getName(), src.getName());
}

However, for BoundMapperFacade to map bi-directionally, we have to explicitly call the mapReverse method rather than the map method we used with the default MapperFacade:

@Test
public void givenSrcAndDest_whenMapsUsingBoundMapperInReverse_thenCorrect() {
    BoundMapperFacade<Source, Dest> 
      boundMapper = mapperFactory.getMapperFacade(Source.class, Dest.class);
    Dest src = new Dest("baeldung", 10);
    Source dest = boundMapper.mapReverse(src);

    assertEquals(dest.getAge(), src.getAge());
    assertEquals(dest.getName(), src.getName());
}

The test will fail otherwise.

4.2. Configure Field Mappings

The examples we have looked at so far involve source and destination classes with identical field names. This subsection tackles the case where there is a difference between the two.

Consider a source object, Person, with three fields, namely name, nickname and age:

public class Person {
    private String name;
    private String nickname;
    private int age;
    
    public Person(String name, String nickname, int age) {
        this.name = name;
        this.nickname = nickname;
        this.age = age;
    }
    
    // standard getters and setters
}

Then another layer of the application has a similar object, but written by a French programmer. Let’s say it’s called Personne, with fields nom, surnom and age, all corresponding to the above three:

public class Personne {
    private String nom;
    private String surnom;
    private int age;
    
    public Personne(String nom, String surnom, int age) {
        this.nom = nom;
        this.surnom = surnom;
        this.age = age;
    }
    
    // standard getters and setters
}

Orika cannot automatically resolve these differences. But we can use the ClassMapBuilder API to register these unique mappings.

We have already used it before, but we have not tapped into any of its powerful features yet. The first line of each of our preceding tests using the default MapperFacade was using the ClassMapBuilder API to register the two classes we wanted to map:

mapperFactory.classMap(Source.class, Dest.class);

We could also map all fields using the default configuration, to make it clearer:

mapperFactory.classMap(Source.class, Dest.class).byDefault()

By adding the byDefault() method call, we are already configuring the behavior of the mapper using the ClassMapBuilder API.

Now we want to be able to map Personne to Person, so we also configure field mappings onto the mapper using ClassMapBuilder API:

@Test
public void givenSrcAndDestWithDifferentFieldNames_whenMaps_thenCorrect() {
    mapperFactory.classMap(Personne.class, Person.class)
      .field("nom", "name").field("surnom", "nickname")
      .field("age", "age").register();
    MapperFacade mapper = mapperFactory.getMapperFacade();
    Personne frenchPerson = new Personne("Claire", "cla", 25);
    Person englishPerson = mapper.map(frenchPerson, Person.class);

    assertEquals(englishPerson.getName(), frenchPerson.getNom());
    assertEquals(englishPerson.getNickname(), frenchPerson.getSurnom());
    assertEquals(englishPerson.getAge(), frenchPerson.getAge());
}

Don’t forget to call the register() API method in order to register the configuration with the MapperFactory.

Even if only one field differs, going down this route means we must explicitly register all field mappings, including age, which is the same in both objects; otherwise the unregistered field will not be mapped and the test will fail.

This will soon become tedious. What if we only want to map one field out of 20? Do we need to configure all of their mappings?

No, not when we tell the mapper to use its default mapping configuration in cases where we have not explicitly defined a mapping:

mapperFactory.classMap(Personne.class, Person.class)
  .field("nom", "name").field("surnom", "nickname").byDefault().register();

Here, we have not defined a mapping for the age field, but nevertheless the test will pass.

4.3. Exclude a Field

Assuming we would like to exclude the nom field of Personne from the mapping – so that the Person object only receives new values for fields that are not excluded:

@Test
public void givenSrcAndDest_whenCanExcludeField_thenCorrect() {
    mapperFactory.classMap(Personne.class, Person.class).exclude("nom")
      .field("surnom", "nickname").field("age", "age").register();
    MapperFacade mapper = mapperFactory.getMapperFacade();
    Personne frenchPerson = new Personne("Claire", "cla", 25);
    Person englishPerson = mapper.map(frenchPerson, Person.class);

    assertEquals(null, englishPerson.getName());
    assertEquals(englishPerson.getNickname(), frenchPerson.getSurnom());
    assertEquals(englishPerson.getAge(), frenchPerson.getAge());
}

Notice how we exclude the field in the MapperFactory configuration, and note the first assertion, where we expect the value of name in the Person object to remain null as a result of being excluded from the mapping.

5. Collections Mapping

Sometimes the destination object may have unique attributes while the source object just maintains every property in a collection.

5.1. Lists and Arrays

Consider a source data object that only has one field, a list of a person’s names:

public class PersonNameList {
    private List<String> nameList;
    
    public PersonNameList(List<String> nameList) {
        this.nameList = nameList;
    }
}

Now consider our destination data object which separates firstName and lastName into separate fields:

public class PersonNameParts {
    private String firstName;
    private String lastName;

    public PersonNameParts(String firstName, String lastName) {
        this.firstName = firstName;
        this.lastName = lastName;
    }
}

Let’s assume we are very sure that at index 0 there will always be the firstName of the person and at index 1 there will always be their lastName.

Orika allows us to use the bracket notation to access members of a collection:

@Test
public void givenSrcWithListAndDestWithPrimitiveAttributes_whenMaps_thenCorrect() {
    mapperFactory.classMap(PersonNameList.class, PersonNameParts.class)
      .field("nameList[0]", "firstName")
      .field("nameList[1]", "lastName").register();
    MapperFacade mapper = mapperFactory.getMapperFacade();
    List<String> nameList = Arrays.asList(new String[] { "Sylvester", "Stallone" });
    PersonNameList src = new PersonNameList(nameList);
    PersonNameParts dest = mapper.map(src, PersonNameParts.class);

    assertEquals(dest.getFirstName(), "Sylvester");
    assertEquals(dest.getLastName(), "Stallone");
}

Even if we had PersonNameArray instead of PersonNameList, the same test would pass for an array of names.
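A sketch of what such a source class could look like, assuming standard getters and setters as before; the field expressions then simply become nameArray[0] and nameArray[1]:

public class PersonNameArray {
    private String[] nameArray;

    public PersonNameArray(String[] nameArray) {
        this.nameArray = nameArray;
    }

    // standard getters and setters
}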

5.2. Maps

Assume our source object has a map of values. We know there is a key in that map, first, whose value represents a person’s firstName in our destination object.

Likewise, we know that there is another key, last, in the same map whose value represents a person’s lastName in the destination object:

public class PersonNameMap {
    private Map<String, String> nameMap;

    public PersonNameMap(Map<String, String> nameMap) {
        this.nameMap = nameMap;
    }
}

Similar to the case in the preceding section, we use bracket notation, but instead of passing in an index, we pass in the key whose value we want to map to the given destination field.

Orika accepts two ways of retrieving the key; both are represented in the following test:

@Test
public void givenSrcWithMapAndDestWithPrimitiveAttributes_whenMaps_thenCorrect() {
    mapperFactory.classMap(PersonNameMap.class, PersonNameParts.class)
      .field("nameMap['first']", "firstName")
      .field("nameMap[\"last\"]", "lastName")
      .register();
    MapperFacade mapper = mapperFactory.getMapperFacade();
    Map<String, String> nameMap = new HashMap<>();
    nameMap.put("first", "Leornado");
    nameMap.put("last", "DiCaprio");
    PersonNameMap src = new PersonNameMap(nameMap);
    PersonNameParts dest = mapper.map(src, PersonNameParts.class);

    assertEquals(dest.getFirstName(), "Leornado");
    assertEquals(dest.getLastName(), "DiCaprio");
}

We can use either single quotes or double quotes, but we must escape the latter.

6. Map Nested Fields

Following on from the preceding collections examples, assume that inside our source data object there is another Data Transfer Object (DTO) that holds the values we want to map:

public class PersonContainer {
    private Name name;
    
    public PersonContainer(Name name) {
        this.name = name;
    }
}
public class Name {
    private String firstName;
    private String lastName;
    
    public Name(String firstName, String lastName) {
        this.firstName = firstName;
        this.lastName = lastName;
    }
}

To be able to access the properties of the nested DTO and map them onto our destination object, we use dot notation, like so:

@Test
public void givenSrcWithNestedFields_whenMaps_thenCorrect() {
    mapperFactory.classMap(PersonContainer.class, PersonNameParts.class)
      .field("name.firstName", "firstName")
      .field("name.lastName", "lastName").register();
    MapperFacade mapper = mapperFactory.getMapperFacade();
    PersonContainer src = new PersonContainer(new Name("Nick", "Canon"));
    PersonNameParts dest = mapper.map(src, PersonNameParts.class);

    assertEquals(dest.getFirstName(), "Nick");
    assertEquals(dest.getLastName(), "Canon");
}

7. Mapping Null Values

In some cases, you may wish to control whether nulls are mapped or ignored when they are encountered. By default, Orika will map null values when encountered:

@Test
public void givenSrcWithNullField_whenMapsThenCorrect() {
    mapperFactory.classMap(Source.class, Dest.class).byDefault();
    MapperFacade mapper = mapperFactory.getMapperFacade();
    Source src = new Source(null, 10);
    Dest dest = mapper.map(src, Dest.class);

    assertEquals(dest.getAge(), src.getAge());
    assertEquals(dest.getName(), src.getName());
}

This behavior can be customized at different levels depending on how specific we would like to be.

7.1. Global Configuration

We can configure our mapper to map nulls or ignore them at the global level before creating the global MapperFactory. Remember how we created this object in our very first example? This time we add an extra call during the build process:

MapperFactory mapperFactory = new DefaultMapperFactory.Builder()
  .mapNulls(false).build();

We can run a test to confirm that indeed, nulls are not getting mapped:

@Test
public void givenSrcWithNullAndGlobalConfigForNoNull_whenFailsToMap_ThenCorrect() {
    mapperFactory.classMap(Source.class, Dest.class);
    MapperFacade mapper = mapperFactory.getMapperFacade();
    Source src = new Source(null, 10);
    Dest dest = new Dest("Clinton", 55);
    mapper.map(src, dest);

    assertEquals(dest.getAge(), src.getAge());
    assertEquals(dest.getName(), "Clinton");
}

By default, nulls are mapped: even if a field value in the source object is null and the corresponding field in the destination object has a meaningful value, the latter will be overwritten.

In our case, since we disabled null mapping globally, the destination field is not overwritten when its corresponding source field has a null value.

7.2. Local Configuration

Mapping of null values can be controlled on a ClassMapBuilder by using mapNulls(true|false), or mapNullsInReverse(true|false) for controlling the mapping of nulls in the reverse direction.

By setting this value on a ClassMapBuilder instance, all field mappings created on the same ClassMapBuilder, after the value is set, will take on that same value.

Let’s illustrate this with an example test:

@Test
public void givenSrcWithNullAndLocalConfigForNoNull_whenFailsToMap_ThenCorrect() {
    mapperFactory.classMap(Source.class, Dest.class).field("age", "age")
      .mapNulls(false).field("name", "name").byDefault().register();
    MapperFacade mapper = mapperFactory.getMapperFacade();
    Source src = new Source(null, 10);
    Dest dest = new Dest("Clinton", 55);
    mapper.map(src, dest);

    assertEquals(dest.getAge(), src.getAge());
    assertEquals(dest.getName(), "Clinton");
}

Notice how we call mapNulls just before registering the name field; this causes all fields following the mapNulls call to be ignored when they hold a null value.

Bi-directional mapping also maps null values by default:

@Test
public void givenDestWithNullReverseMappedToSource_whenMapsByDefault_thenCorrect() {
    mapperFactory.classMap(Source.class, Dest.class).byDefault();
    MapperFacade mapper = mapperFactory.getMapperFacade();
    Dest src = new Dest(null, 10);
    Source dest = new Source("Vin", 44);
    mapper.map(src, dest);

    assertEquals(dest.getAge(), src.getAge());
    assertEquals(dest.getName(), src.getName());
}

We can also prevent this by calling mapNullsInReverse and passing in false:

@Test
public void 
  givenDestWithNullReverseMappedToSourceAndLocalConfigForNoNull_whenFailsToMap_thenCorrect() {
    mapperFactory.classMap(Source.class, Dest.class).field("age", "age")
      .mapNullsInReverse(false).field("name", "name").byDefault()
      .register();
    MapperFacade mapper = mapperFactory.getMapperFacade();
    Dest src = new Dest(null, 10);
    Source dest = new Source("Vin", 44);
    mapper.map(src, dest);

    assertEquals(dest.getAge(), src.getAge());
    assertEquals(dest.getName(), "Vin");
}

7.3. Field Level Configuration

We can configure this at the field level using fieldMap, like so:

mapperFactory.classMap(Source.class, Dest.class).field("age", "age")
  .fieldMap("name", "name").mapNulls(false).add().byDefault().register();

In this case, the configuration will only affect the name field, as we have applied it at the field level:

@Test
public void givenSrcWithNullAndFieldLevelConfigForNoNull_whenFailsToMap_ThenCorrect() {
    mapperFactory.classMap(Source.class, Dest.class).field("age", "age")
      .fieldMap("name", "name").mapNulls(false).add().byDefault().register();
    MapperFacade mapper = mapperFactory.getMapperFacade();
    Source src = new Source(null, 10);
    Dest dest = new Dest("Clinton", 55);
    mapper.map(src, dest);

    assertEquals(dest.getAge(), src.getAge());
    assertEquals(dest.getName(), "Clinton");
}

8. Orika Custom Mapping

So far, we have looked at simple custom mapping examples using the ClassMapBuilder API. We shall still use the same API but customize our mapping using Orika’s CustomMapper class.

Assume we have two data objects, each with a field called dtob, representing a person’s date and time of birth.

One data object represents this value as a datetime String in the following ISO format:

2007-06-26T21:22:39Z

and the other represents the same value as a long in the following Unix timestamp format:

1182882159000

Clearly, none of the customizations we have covered so far suffice to convert between the two formats during the mapping process; not even Orika’s built-in converters can handle the job. This is where we have to write a CustomMapper to perform the required conversion during mapping.

Let us create our first data object:

public class Person3 {
    private String name;
    private String dtob;
    
    public Person3(String name, String dtob) {
        this.name = name;
        this.dtob = dtob;
    }
}

then our second data object:

public class Personne3 {
    private String name;
    private long dtob;
    
    public Personne3(String name, long dtob) {
        this.name = name;
        this.dtob = dtob;
    }
}

We will not label which is source and which is destination right now as the CustomMapper enables us to cater for bi-directional mapping.

Here is our concrete implementation of the CustomMapper abstract class:

class PersonCustomMapper extends CustomMapper<Personne3, Person3> {

    @Override
    public void mapAtoB(Personne3 a, Person3 b, MappingContext context) {
        Date date = new Date(a.getDtob());
        DateFormat format = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'");
        String isoDate = format.format(date);
        b.setDtob(isoDate);
    }

    @Override
    public void mapBtoA(Person3 b, Personne3 a, MappingContext context) {
        DateFormat format = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'");
        try {
            // parse() throws a checked ParseException, so we wrap it
            Date date = format.parse(b.getDtob());
            a.setDtob(date.getTime());
        } catch (ParseException e) {
            throw new IllegalArgumentException("Invalid ISO date: " + b.getDtob(), e);
        }
    }
}

Notice that we have implemented methods mapAtoB and mapBtoA. Implementing both makes our mapping function bi-directional.

Each method exposes the data objects we are mapping, and we take care of copying the field values from one to the other.

That is where we write the custom code that manipulates the source data according to our requirements before writing it to the destination object.

Let’s run a test to confirm that our custom mapper works:

@Test
public void givenSrcAndDest_whenCustomMapperWorks_thenCorrect() {
    mapperFactory.classMap(Personne3.class, Person3.class)
      .customize(customMapper).register();
    MapperFacade mapper = mapperFactory.getMapperFacade();
    String dateTime = "2007-06-26T21:22:39Z";
    long timestamp = 1182882159000L;
    Personne3 personne3 = new Personne3("Leornardo", timestamp);
    Person3 person3 = mapper.map(personne3, Person3.class);

    assertEquals(person3.getDtob(), dateTime);
}

Notice that we still pass the custom mapper to Orika’s mapper via ClassMapBuilder API, just like all other simple customizations.

We can confirm too that bi-directional mapping works:

@Test
public void givenSrcAndDest_whenCustomMapperWorksBidirectionally_thenCorrect() {
    mapperFactory.classMap(Personne3.class, Person3.class)
      .customize(customMapper).register();
    MapperFacade mapper = mapperFactory.getMapperFacade();
    String dateTime = "2007-06-26T21:22:39Z";
    long timestamp = 1182882159000L;
    Person3 person3 = new Person3("Leornardo", dateTime);
    Personne3 personne3 = mapper.map(person3, Personne3.class);

    assertEquals(personne3.getDtob(), timestamp);
}

9. Conclusion

In this article, we have explored the most important features of the Orika mapping framework.

There are definitely more advanced features that give us much more control, but in most use cases, the ones covered here will be more than enough.

The full project code and all examples can be found in the GitHub project. Don’t forget to check out our tutorial on the Dozer mapping framework as well, since both frameworks solve more or less the same problem.


Asynchronous Batch Operations in Couchbase


1. Introduction

In this follow-up to our tutorial on using Couchbase in a Spring application, we explore the asynchronous nature of the Couchbase SDK and how it may be used to perform persistence operations in batches, thus allowing our application to make optimal use of Couchbase resources.

1.1. CrudService Interface

First, we augment our generic CrudService interface to include batch operations:

public interface CrudService<T> {
    ...
    
    List<T> readBulk(Iterable<String> ids);

    void createBulk(Iterable<T> items);

    void updateBulk(Iterable<T> items);

    void deleteBulk(Iterable<String> ids);

    boolean exists(String id);
}

1.2. CouchbaseEntity Interface

We define an interface for the entities that we want to persist:

public interface CouchbaseEntity {

    String getId();
    
    void setId(String id);
    
}

1.3. AbstractCrudService Class

Then we will implement each of these methods in a generic abstract class. This class is derived from the PersonCrudService class that we used in the previous tutorial and begins as follows:

public abstract class AbstractCrudService<T extends CouchbaseEntity> implements CrudService<T> {
    private BucketService bucketService;
    private Bucket bucket;
    private JsonDocumentConverter<T> converter;

    public AbstractCrudService(BucketService bucketService, JsonDocumentConverter<T> converter) {
        this.bucketService = bucketService;
        this.converter = converter;
    }

    protected void loadBucket() {
        bucket = bucketService.getBucket();
    }
    
    ...
}

2. The Asynchronous Bucket Interface

The Couchbase SDK provides the AsyncBucket interface for performing asynchronous operations. Given a Bucket instance, you can obtain its asynchronous version via the async() method:

AsyncBucket asyncBucket = bucket.async();

3. Batch Operations

To perform batch operations using the AsyncBucket interface, we employ the RxJava library.

3.1. Batch Read

Here we implement the readBulk method. First, we use the AsyncBucket and RxJava’s flatMap mechanism to retrieve the documents asynchronously into an Observable<JsonDocument>; then we use RxJava’s toBlocking mechanism to convert these into a list of entities:

@Override
public List<T> readBulk(Iterable<String> ids) {
    AsyncBucket asyncBucket = bucket.async();
    Observable<JsonDocument> asyncOperation = Observable
      .from(ids)
      .flatMap(new Func1<String, Observable<JsonDocument>>() {
          public Observable<JsonDocument> call(String key) {
              return asyncBucket.get(key);
          }
    });

    List<T> items = new ArrayList<T>();
    try {
        asyncOperation.toBlocking()
          .forEach(new Action1<JsonDocument>() {
              public void call(JsonDocument doc) {
                  T item = converter.fromDocument(doc);
                  items.add(item);
              }
        });
    } catch (Exception e) {
        logger.error("Error during bulk get", e);
    }

    return items;
}

3.2. Batch Insert

We again use RxJava’s flatMap construct to implement the createBulk method.

Since bulk mutation requests are produced faster than their responses can be generated, sometimes resulting in an overload condition, we institute a retry with exponential delay whenever a BackpressureException is encountered:

@Override
public void createBulk(Iterable<T> items) {
    AsyncBucket asyncBucket = bucket.async();
    Observable
      .from(items)
      .flatMap(new Func1<T, Observable<JsonDocument>>() {
          @SuppressWarnings("unchecked")
          @Override
          public Observable<JsonDocument> call(final T t) {
              if(t.getId() == null) {
                  t.setId(UUID.randomUUID().toString());
              }
              JsonDocument doc = converter.toDocument(t);
              return asyncBucket.insert(doc)
                .retryWhen(RetryBuilder
                  .anyOf(BackpressureException.class)
                  .delay(Delay.exponential(TimeUnit.MILLISECONDS, 100))
                  .max(10)
                  .build());
          }
      })
      .last()
      .toBlocking()
      .single();
}

3.3. Batch Update

We use a similar mechanism in the updateBulk method:

@Override
public void updateBulk(Iterable<T> items) {
    AsyncBucket asyncBucket = bucket.async();
    Observable
      .from(items)
      .flatMap(new Func1<T, Observable<JsonDocument>>() {
          @SuppressWarnings("unchecked")
          @Override
          public Observable<JsonDocument> call(final T t) {
              JsonDocument doc = converter.toDocument(t);
              return asyncBucket.upsert(doc)
                .retryWhen(RetryBuilder
                  .anyOf(BackpressureException.class)
                  .delay(Delay.exponential(TimeUnit.MILLISECONDS, 100))
                  .max(10)
                  .build());
          }
      })
      .last()
      .toBlocking()
      .single();
}

3.4. Batch Delete

And we write the deleteBulk method as follows:

@Override
public void deleteBulk(Iterable<String> ids) {
    AsyncBucket asyncBucket = bucket.async();
    Observable
      .from(ids)
      .flatMap(new Func1<String, Observable<JsonDocument>>() {
          @SuppressWarnings("unchecked")
          @Override
          public Observable<JsonDocument> call(String key) {
              return asyncBucket.remove(key)
                .retryWhen(RetryBuilder
                  .anyOf(BackpressureException.class)
                  .delay(Delay.exponential(TimeUnit.MILLISECONDS, 100))
                  .max(10)
                  .build());
          }
      })
      .last()
      .toBlocking()
      .single();
}

4. PersonCrudService

Finally, we write a Spring service, PersonCrudService, that extends our AbstractCrudService for the Person entity.

Since all of the Couchbase interaction is implemented in the abstract class, the implementation for an entity class is trivial, as we only need to ensure that all our dependencies are injected and our bucket loaded:

@Service
public class PersonCrudService extends AbstractCrudService<Person> {

    @Autowired
    public PersonCrudService(
      @Qualifier("TutorialBucketService") BucketService bucketService,
      PersonDocumentConverter converter) {
        super(bucketService, converter);
    }

    @PostConstruct
    private void init() {
        loadBucket();
    }
}
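As a usage sketch, assuming Person exposes an id and a name as in the previous tutorial (the two-argument constructor here is an assumption), the bulk operations can then be driven like this:

// create entities without ids; createBulk generates missing ids
List<Person> people = Arrays.asList(
  new Person(null, "John"), new Person(null, "Jane"));
personCrudService.createBulk(people);

// collect the generated ids and read the documents back in one batch
List<String> ids = new ArrayList<>();
for (Person person : people) {
    ids.add(person.getId());
}
List<Person> found = personCrudService.readBulk(ids);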

5. Conclusion

The source code shown in this tutorial is available in the GitHub project.

You can learn more about the Couchbase Java SDK at the official Couchbase developer documentation site.


