
Java Web Weekly, Issue 129


At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> Notes on Reactive Programming Part II: Writing Some Code [spring.io]

A reactive system is an entirely different beast, and a great fit for a specific, small set of scenarios.

A great writeup to go through if you want to skate where the puck is going.

>> Java EE 8 in Crisis [xenonique.co.uk]

And since we’re talking about where we’re headed, the state and outlook of Java EE 8 are significant for the entire Java community and ecosystem.

>> How to implement a custom String-based sequence identifier generator with Hibernate [vladmihalcea.com]

A super practical, focused solution for generating String ids with Hibernate (something I actually wondered about in the past).

And, as always, Vlad’s picking his topics with the help and involvement of the community, which is really a solid way to go about things. A cool resource.

>> How To Implement hashCode Correctly [codefx.org]

The next back-to-basics writeup after we had a look at equals last week. I knew this one was coming.

>> Configure Once, Run Everywhere: Decoupling Configuration and Runtime [infoq.com]

Real-world project configuration is never as easy as we might initially think.

Now – I’m not sure if a standard is the answer, but the practical approach described here looks interesting.

>> It’s Time to Unlearn Everything You Know About Java Exceptions [takipi.com]

A high-level piece about how to actually do exceptions well. It also reads well; I think Alex had some fun writing this one.

>> Should you use JPA for your next project? [thoughts-on-java.org]

Hibernate and JPA are certainly not a good fit for every type of project out there, but they’re a solid base for a lot of them. And if you really get to know the tool well, it can be surprising how far you can go.

This interactive writeup can be helpful in making the decision when you’re starting up a new project, or at least give you some context around that decision.

>> JDK 9 is not (yet) Feature Complete — how will we get there? [mail.openjdk.java.net]

Yeah.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Mutation Testing [cleancoder.com]

A good intro to mutation testing and also to a library that might be interesting to explore – pitest.

>> Serverless Architectures [martinfowler.com]

>> Serverless Reference Architectures with AWS Lambda [allthingsdistributed.com]

I don’t know much about this architectural style, and these writeups were a good way to get started.

Also worth reading:

3. Musings

>> Things I learned from doing my first workshop [swizec.com]

I like this writeup, mostly because it resonates with my own experiences, but also because I learn a lot from seeing other people leveling up and getting a glimpse into how they think. Good stuff.

>> Creating Your Code Review Checklist [daedtech.com]

Some good aspects to think about on your next code review.

>> Three Martini Open Office Plans [daedtech.com]

A fun exploration of whether or not open office plans make sense, from the POV of an outsider.

For me personally it’s been long enough since I last had the experience of trying to get work done in an open office, so I can half-laugh about it. But I very distinctly remember it wasn’t easy to pull off.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> What do you think management is? [dilbert.com]

>> He’s wearing headphones, what do I do? [dilbert.com]

>> All roads headed in this direction – I just took the shortest one [dilbert.com]

5. Pick of the Week

The “Hibernate Performance Tuning” course only opens up a few times a year, and the early-bird pricing only lasts until next Friday.

So, basically – if you want to level up in your understanding and command of Hibernate – definitely go through this material:

>> Hibernate Performance Tuning Online Training [thoughts-on-java.org]

If you’ve been reading Java Web Weekly for a while, you know that I very rarely pick products here. That’s simply because there aren’t too many solid courses to pick in our ecosystem. I know two of them that I feel comfortable picking here and sending out to twenty thousand readers.

This is one of the two, and I’ll definitely pick the other one when it gets close to being live.



Guava Set + Function = Map


1. Overview

In this tutorial – we will illustrate one of many useful features in Guava‘s collect package: how to apply a Function to a Guava Set and obtain a Map.

We’ll discuss two approaches – first creating an immutable map and a live map using the built-in Guava operations, and then building a custom live Map implementation.

2. Setup

First, we’ll add the Guava library as a dependency in pom.xml:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>19.0</version>
</dependency>

A quick note – you can check if there’s a newer version here.

3. The Mapping Function

Let’s first define the function that we’ll apply to the set’s elements:

Function<Integer, String> function = new Function<Integer, String>() {
    @Override
    public String apply(Integer from) {
        return Integer.toBinaryString(from.intValue());
    }
};

The function is simply converting the value of an Integer to its binary String representation.
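
For instance, applying it to 32 (binary 100000) looks like this:

String binary = function.apply(32);
assertEquals("100000", binary);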

4. Guava toMap()

Guava’s Maps utility class offers, among others, two operations that can be used to convert a Set to a Map by applying the defined Function.

The following snippet shows creating an immutable Map:

Map<Integer, String> immutableMap = Maps.toMap(set, function);

The following test asserts that the set is properly converted:

@Test
public void givenIntSetAndSimpleMap_whenMapsToBinaryValue_thenCorrect() {
    Set<Integer> set = new TreeSet<>(Arrays.asList(32, 64, 128));
    Map<Integer, String> immutableMap = Maps.toMap(set, function);
    assertTrue(immutableMap.get(32).equals("100000")
      && immutableMap.get(64).equals("1000000")
      && immutableMap.get(128).equals("10000000"));
}

The problem with the created map is that if an element is added to the source set, the derived map is not updated.
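
A quick sketch illustrating this (the test name is ours, for illustration) – since Maps.toMap returns an immutable snapshot, later additions to the set are simply not visible in the map:

@Test
public void givenImmutableMap_whenSourceSetChanges_thenMapNotUpdated() {
    Set<Integer> set = new TreeSet<>(Arrays.asList(32, 64, 128));
    Map<Integer, String> immutableMap = Maps.toMap(set, function);

    set.add(256);

    // the snapshot still contains only the original three entries
    assertNull(immutableMap.get(256));
    assertTrue(immutableMap.size() == 3);
}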

5. Guava asMap()

If we use the previous example and create a map using the Maps.asMap method:

Map<Integer, String> liveMap = Maps.asMap(set, function);

We’ll get a live map view – meaning that the changes to the originating Set will be reflected in the map as well:

@Test
public void givenStringSet_whenMapsToElementLength_thenCorrect() {
    Set<Integer> set = new TreeSet<Integer>(Arrays.asList(32, 64, 128));
    Map<Integer, String> liveMap = Maps.asMap(set, function);
    assertTrue(liveMap.get(32).equals("100000")
            && liveMap.get(64).equals("1000000")
            && liveMap.get(128).equals("10000000"));
    
    set.add(256);
    assertTrue(liveMap.get(256).equals("100000000") && liveMap.size() == 4);
}

Note that the assertions hold even though we added the new element through the set and looked it up in the map.

6. Building Custom Live Map

When we talk of the Map View of a Set, we are basically extending the capability of the Set using a Guava Function.

In the live Map view, changes to the Set should update the Map’s EntrySet in real time. We will create our own generic Map, sub-classing AbstractMap<K,V>, like so:

public class GuavaMapFromSet<K, V> extends AbstractMap<K, V> {
    public GuavaMapFromSet(Set<K> keys, 
        Function<? super K, ? extends V> function) { 
    }
}

Worthy of note is that the main contract of all sub-classes of AbstractMap is to implement the entrySet method, as we’ll do below. We will then look at two critical parts of the code in the following sub-sections.

6.1. Entries

Another attribute in our Map will be entries, representing our EntrySet:

private Set<Entry<K, V>> entries;

The entries field will always be initialized using the input Set from the constructor; we also store the function and prepare the cache, both of which are used later:

private Function<? super K, ? extends V> function;

public GuavaMapFromSet(Set<K> keys, Function<? super K, ? extends V> function) {
    this.function = function;
    this.cache = new WeakHashMap<>();
    this.entries = new MyEntrySet(keys); // wraps the input Set's iterator
}

A quick note here – in order to maintain a live view, we will use the input Set’s own iterator for the subsequent Map’s EntrySet.

In fulfilling the contract of AbstractMap<K,V>, we implement the entrySet method in which we then return entries:

@Override
public Set<java.util.Map.Entry<K, V>> entrySet() {
    return this.entries;
}

6.2. Cache

This Map stores the values obtained by applying the Function to the Set’s elements; being a WeakHashMap, it lets cached entries be garbage-collected once their keys are no longer referenced anywhere else:

private WeakHashMap<K, V> cache;

7. The Set Iterator

We will use the input Set‘s iterator for the subsequent Map‘s EntrySet. To do this, we use a customized EntrySet as well as a customized Entry class.

7.1. The Entry Class

First, let’s see what a single entry in the Map will look like:

private class SingleEntry implements Entry<K, V> {
    private K key;

    public SingleEntry(K key) {
        this.key = key;
    }

    @Override
    public K getKey() {
        return this.key;
    }

    @Override
    public V getValue() {
        V value = GuavaMapFromSet.this.cache.get(this.key);
        if (value == null) {
            value = GuavaMapFromSet.this.function.apply(this.key);
            GuavaMapFromSet.this.cache.put(this.key, value);
        }
        return value;
    }

    @Override
    public V setValue(V value) {
        throw new UnsupportedOperationException();
    }
}

Clearly, in this code, we don’t allow modifications through the Map view, as a call to setValue throws an UnsupportedOperationException.

Pay close attention to getValue – this is the crux of our live-view functionality. We check the cache inside our Map for the current key (Set element).

If we find a cached value, we return it; otherwise, we apply our function to the current key, store the resulting value in the cache, and return it.

This way, whenever the Set has a new element, the map is up to date since the new values are computed on the fly.

7.2. The EntrySet

We will now implement the EntrySet:

private class MyEntrySet extends AbstractSet<Entry<K, V>> {
    private Set<K> keys;
    public MyEntrySet(Set<K> keys) {
        this.keys = keys;
    }
    @Override
    public Iterator<Map.Entry<K, V>> iterator() {
        return new LiveViewIterator();
    }
    @Override
    public int size() {
        return this.keys.size();
    }
}

We have fulfilled the contract for extending AbstractSet by overriding the iterator and size methods. But there’s more.

Remember the instance of this EntrySet will form the entries in our Map View. Normally, the EntrySet of a map simply returns a complete Entry for each iteration.

However, in our case, we need to use the iterator from the input Set to maintain our live view. Since that iterator only returns the Set’s elements, we also need a custom iterator.

7.3. The Iterator

Here is the implementation of our iterator for the above EntrySet:

public class LiveViewIterator implements Iterator<Entry<K, V>> {
    private Iterator<K> inner;
    
    public LiveViewIterator() {
        this.inner = MyEntrySet.this.keys.iterator();
    }
    
    @Override
    public boolean hasNext() {
        return this.inner.hasNext();
    }
    @Override
    public Map.Entry<K, V> next() {
        K key = this.inner.next();
        return new SingleEntry(key);
    }
    @Override
    public void remove() {
        throw new UnsupportedOperationException();
    }
}

LiveViewIterator must reside inside the MyEntrySet class; this way, we can share the Set’s iterator at initialization.

When looping through GuavaMapFromSet‘s entries using the iterator, a call to next simply retrieves the key from the Set‘s iterator and constructs a SingleEntry.

8. Putting It All Together

After stitching together what we have covered in this tutorial, let’s replace the liveMap variable from the previous samples with our custom map:

@Test
public void givenIntSet_whenMapsToElementBinaryValue_thenCorrect() {
    Set<Integer> set = new TreeSet<>(Arrays.asList(32, 64, 128));
    Map<Integer, String> customMap = new GuavaMapFromSet<Integer, String>(set, function);
    
    assertTrue(customMap.get(32).equals("100000")
      && customMap.get(64).equals("1000000")
      && customMap.get(128).equals("10000000"));
}

Changing the content of the input Set, we will see the Map update in real time:

@Test
public void givenIntSet_whenCustomMapUpdatesLive_thenCorrect() {
    Set<Integer> set = new TreeSet<>(Arrays.asList(32, 64, 128));
    Map<Integer, String> customMap = new GuavaMapFromSet<>(set, function);
    
    assertTrue(customMap.get(32).equals("100000")
      && customMap.get(64).equals("1000000")
      && customMap.get(128).equals("10000000"));
    
    set.add(256);
    assertTrue(customMap.get(256).equals("100000000") && customMap.size() == 4);
}

9. Conclusion

In this tutorial, we have looked at the different ways that one can leverage Guava operations and obtain a Map view from a Set by applying a Function.

The full implementation of all these examples and code snippets can be found in my Guava GitHub project – this is an Eclipse-based project, so it should be easy to import and run as it is.


Mockito vs EasyMock vs JMockit


1. Introduction

1.1. Overview

In this post, we’re going to talk about mocking: what it is, why use it, and several examples of how to mock the same test case using some of the most used mocking libraries for Java.

We’ll start with some formal/semi-formal definitions of mocking concepts, then we’ll present the case under test, follow up with examples for each library and end up with some conclusions. The chosen libraries are Mockito, EasyMock and JMockit.

If you feel that you already know the basics of mocking, you can probably skip to section 2 without reading the next three sections.

1.2. Reasons to Use Mocks

We’ll start by assuming that you already code following some kind of development methodology centred on tests (TDD, ATDD or BDD), or simply that you want to create a test for an existing class that relies on dependencies to achieve its functionality.

In any case, when unit-testing a class, we want to test only its functionality and not that of its dependencies (either because we trust their implementation or because we’ll test it ourselves).

In order to achieve this, we need to provide the object under test with a replacement for that dependency that we can control. This way we can force extreme return values, exception throwing, or simply reduce time-consuming methods to a fixed return value.

This controlled replacement is the mock, and it will help you simplify test coding and reduce test execution time.

1.3. Mock Concepts and Definition

Let’s see four definitions from an article written by Martin Fowler that sums up the basics everyone should know about mocks:

  • Dummy objects are passed around but never actually used. Usually, they are just used to fill parameter lists.
  • Fake objects actually have working implementations, but usually take some shortcut which makes them not suitable for production (an in memory database is a good example).
  • Stubs provide canned answers to calls made during the test, usually not responding at all to anything outside what’s programmed in for the test. Stubs may also record information about calls, such as an email gateway stub that remembers the messages it ‘sent’, or maybe only how many messages it ‘sent’.
  • Mocks are what we are talking about here: objects pre-programmed with expectations which form a specification of the calls they are expected to receive.
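
To make the last two definitions concrete, here is a minimal sketch (the MailGateway interface and all names are ours, purely for illustration; Mockito, formally introduced later, plays the mock here):

interface MailGateway {
    void send(String message);
}

// A stub: canned behaviour, optionally recording state for later assertions
class RecordingMailGatewayStub implements MailGateway {
    int sentCount = 0;

    @Override
    public void send(String message) {
        sentCount++; // remembers how many messages it 'sent'
    }
}

@Test
public void stubVersusMock() {
    // with the stub, we assert on recorded state
    RecordingMailGatewayStub stub = new RecordingMailGatewayStub();
    stub.send("hello");
    Assert.assertEquals(1, stub.sentCount);

    // with a mock, we assert on the interaction itself
    MailGateway mock = Mockito.mock(MailGateway.class);
    mock.send("hello");
    Mockito.verify(mock).send("hello");
}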

1.4. To Mock or Not to Mock: That Is the Question

Not everything must be mocked. Sometimes it’s better to do an integration test, as mocking that method/feature would be a lot of work for little actual benefit. In our test case (shown in the next section), that would be testing the LoginDao.

The LoginDao would use some kind of third-party library for DB access, and mocking it would only amount to asserting that parameters had been prepared for the call; we would still need to test that the call returns the data we really wanted.

For that reason, it won’t be included in this example (although we could actually write both a unit test with mock calls for the third-party library AND an integration test with DBUnit for testing the actual behavior of the third-party library).

2. Test Case

With everything in the previous section in mind, let’s propose a quite typical test case and how we’ll test it using mocks (when it makes sense to actually use them). This will help us have a common scenario for comparing the different mocking libraries later on.

2.1. Proposed Case

The proposed test case will be the login process in an application with a layered architecture.

The login request will be handled by a controller, which uses a service, which uses a DAO (that looks for user credentials in a DB). We won’t go too deep into each layer’s implementation and will focus more on the interactions between the components of each layer.

This way, we’ll have a LoginController, a LoginService and a LoginDAO. Let’s see a diagram for clarification:

Test case diagram

2.2. Implementation

We’ll follow now with the implementation used for the test case, so we can understand what’s happening (or what should happen) on the tests.

We’ll start with the model used for all operations, UserForm, that will only hold the user’s name and password (we’re using public access modifiers to simplify) and a getter method for the username field in order to allow mocking for that property:

public class UserForm {
    public String password;
    public String username;
    public String getUsername(){
        return username;
    }
}

Let’s follow with LoginDao, which will be devoid of real functionality, as we only want its methods to be there so we can mock them when needed:

public class LoginDao {
    public int login(UserForm userForm){
        return 0;
    }
}

LoginDao will be used by LoginService in its login method. LoginService will also have a setCurrentUser method that returns void, in order to test that kind of mocking, and a setLoginDao setter used later by the partial-mocking examples:

public class LoginService {
    private LoginDao loginDao;
    private String currentUser;

    public boolean login(UserForm userForm) {
        assert null != userForm;
        int loginResults = loginDao.login(userForm);
        switch (loginResults) {
            case 1:
                return true;
            default:
                return false;
        }
    }

    public void setCurrentUser(String username) {
        if (null != username) {
            this.currentUser = username;
        }
    }

    // setter needed later by the partial-mocking examples to inject the DAO
    public void setLoginDao(LoginDao loginDao) {
        this.loginDao = loginDao;
    }
}

Finally, LoginController will use LoginService for its login method. This will include:

  • a case in which no calls to the mocked service will be done.
  • a case in which only one method will be called.
  • a case in which all methods will be called.
  • a case in which exception throwing will be tested.

public class LoginController {
    public LoginService loginService;

    public String login(UserForm userForm){
        if(null == userForm){
            return "ERROR";
        }else{
            boolean logged;

            try {
                logged = loginService.login(userForm);
            } catch (Exception e) {
                return "ERROR";
            }

            if(logged){
                loginService.setCurrentUser(userForm.getUsername());
                return "OK";
            }else{
                return "KO";
            }
        }
    }
}

Now that we’ve seen what it is that we’re trying to test, let’s see how we’ll mock it with each library.

3. Test Setup

3.1. Mockito

For Mockito, we’ll be using version 1.10.19, as version 2 is still beta (at the time of writing).

The easiest way of creating and using mocks is via the @Mock and @InjectMocks annotations. The first one creates a mock of the class used to define the field, and the second one tries to inject said created mocks into the annotated object under test.

There are more annotations such as @Spy that lets you create a partial mock (a mock that uses the normal implementation in non-mocked methods).

That being said, for all of this “magic” to work, you need to call MockitoAnnotations.initMocks(this) before executing any tests that use said mocks. This is usually done in a @Before annotated method. You can also use the MockitoJUnitRunner.

public class LoginControllerTest {

    @Mock
    private LoginDao loginDao;

    @Spy
    @InjectMocks
    private LoginService spiedLoginService;

    @Mock
    private LoginService loginService;

    @InjectMocks
    private LoginController loginController;

    @Before
    public void setUp() {
        loginController = new LoginController();
        MockitoAnnotations.initMocks(this);
    }
}

3.2. EasyMock

For EasyMock, we’ll be using version 3.4 (Javadoc). Note that with EasyMock, for mocks to start “working”, you must call EasyMock.replay(mock) in every test method, or you will receive an exception.

Mocks and tested classes can also be defined via annotations, but in this case instead of calling a static method for it to work, we’ll be using the EasyMockRunner for the test class.

Mocks are created with the @Mock annotation and the tested object with the @TestSubject one (which will get its dependencies injected from created mocks). The tested object must be created in-line.

@RunWith(EasyMockRunner.class)
public class LoginControllerTest {

    @Mock
    private LoginDao loginDao;

    @Mock
    private LoginService loginService;

    @TestSubject
    private LoginController loginController = new LoginController();
}

3.3. JMockit

For JMockit we’ll be using version 1.24 (Javadoc) as version 1.25 hasn’t been released yet (at least while writing this).

Setup for JMockit is as easy as with Mockito, with the exception that there is no specific annotation for partial mocks (and really no need either) and that you must use JMockit as the test runner.

Mocks are defined using the @Injectable annotation (that will create only one mock instance) or with @Mocked annotation (that will create mocks for every instance of the class of the annotated field).

The tested instance gets created (and its mocked dependencies injected) using the @Tested annotation.

@RunWith(JMockit.class)
public class LoginControllerTest {

    @Injectable
    private LoginDao loginDao;

    @Injectable
    private LoginService loginService;

    @Tested
    private LoginController loginController;
}

4. Verifying No Calls to Mock

4.1. Mockito

For verifying that a mock received no calls in Mockito you have the method verifyZeroInteractions() that accepts a mock.

@Test
public void assertThatNoMethodHasBeenCalled() {
    loginController.login(null);
    Mockito.verifyZeroInteractions(loginService);
}

4.2. EasyMock

For verifying that a mock received no calls, you simply don’t specify behaviour, replay the mock, and lastly verify it.

@Test
public void assertThatNoMethodHasBeenCalled() {
    EasyMock.replay(loginService);
    loginController.login(null);
    EasyMock.verify(loginService);
}

4.3. JMockit

For verifying that a mock received no calls, you simply don’t specify expectations for it and create a new FullVerifications(mock) block for said mock.

@Test
public void assertThatNoMethodHasBeenCalled() {
    loginController.login(null);
    new FullVerifications(loginService) {};
}

5. Defining Mocked Method Calls and Verifying Calls to Mocks

5.1. Mockito

For mocking method calls, you can use Mockito.when(mock.method(args)).thenReturn(value). Here you can return different values for more than one call just by adding them as more parameters: thenReturn(value1, value2, …, valueN).

Note that you can’t mock void-returning methods with this syntax. In those cases, you’ll instead verify that the method was called (as done with setCurrentUser in the first test below).
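
If a void method needs to do something other than nothing – throwing, for example – Mockito’s doThrow family covers it. A minimal sketch (the exception choice is ours):

// make the void setCurrentUser method throw when called with "foo"
Mockito.doThrow(new IllegalStateException())
  .when(loginService).setCurrentUser("foo");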

For verifying calls to a mock you can use Mockito.verify(mock).method(args) and you can also verify that no more calls were done to a mock using verifyNoMoreInteractions(mock).

For verifying args, you can pass specific values or use predefined matchers like any(), anyString(), anyInt(). There are a lot more of those kinds of matchers, and even the possibility to define your own, which we’ll see in the following examples.

@Test
public void assertTwoMethodsHaveBeenCalled() {
    UserForm userForm = new UserForm();
    userForm.username = "foo";
    Mockito.when(loginService.login(userForm)).thenReturn(true);

    String login = loginController.login(userForm);

    Assert.assertEquals("OK", login);
    Mockito.verify(loginService).login(userForm);
    Mockito.verify(loginService).setCurrentUser("foo");
}

@Test
public void assertOnlyOneMethodHasBeenCalled() {
    UserForm userForm = new UserForm();
    userForm.username = "foo";
    Mockito.when(loginService.login(userForm)).thenReturn(false);

    String login = loginController.login(userForm);

    Assert.assertEquals("KO", login);
    Mockito.verify(loginService).login(userForm);
    Mockito.verifyNoMoreInteractions(loginService);
}

5.2. EasyMock

For mocking method calls, you use EasyMock.expect(mock.method(args)).andReturn(value).

For verifying calls to a mock, you can use EasyMock.verify(mock) but you must call it always after calling EasyMock.replay(mock).

For verifying args, you can pass specific values, or you have predefined matchers like isA(Class.class), anyString(), anyInt(), a lot more of that kind of matchers, and again the possibility to define your own.

@Test
public void assertTwoMethodsHaveBeenCalled() {
    UserForm userForm = new UserForm();
    userForm.username = "foo";
    EasyMock.expect(loginService.login(userForm)).andReturn(true);
    loginService.setCurrentUser("foo");
    EasyMock.replay(loginService);

    String login = loginController.login(userForm);

    Assert.assertEquals("OK", login);
    EasyMock.verify(loginService);
}

@Test
public void assertOnlyOneMethodHasBeenCalled() {
    UserForm userForm = new UserForm();
    userForm.username = "foo";
    EasyMock.expect(loginService.login(userForm)).andReturn(false);
    EasyMock.replay(loginService);

    String login = loginController.login(userForm);

    Assert.assertEquals("KO", login);
    EasyMock.verify(loginService);
}

5.3. JMockit

With JMockit, you have clearly defined steps for testing: record, replay and verify.

Record is done in a new Expectations(){{}} block (into which you can define actions for several mocks), replay is done simply by invoking a method of the tested class (that should call some mocked object) and verification is done inside a new Verifications(){{}} block (into which you can define verifications for several mocks).

For mocking method calls, you can use mock.method(args); result = value; inside any Expectations block. Here you can return different values for more than one call just by using returns(value1, value2, …, valueN); instead of result = value;.
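
For example, here is a sketch of an expectation that returns different values on two consecutive calls (the values are ours, for illustration):

new Expectations() {{
    loginDao.login((UserForm) any);
    returns(1, 0); // first call succeeds, second call fails
}};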

For verifying calls to a mock you can use new Verifications(){{mock.call(value)}} or new Verifications(mock){{}} to verify every expected call previously defined.

For verifying args, you can pass specific values, or you have predefined values like any, anyString, anyLong, a lot more of that kind of special values, and again the possibility to define your own matchers (which must be Hamcrest matchers).

@Test
public void assertTwoMethodsHaveBeenCalled() {
    UserForm userForm = new UserForm();
    userForm.username = "foo";
    new Expectations() {{
        loginService.login(userForm); result = true;
        loginService.setCurrentUser("foo");
    }};

    String login = loginController.login(userForm);

    Assert.assertEquals("OK", login);
    new FullVerifications(loginService) {};
}

@Test
public void assertOnlyOneMethodHasBeenCalled() {
    UserForm userForm = new UserForm();
    userForm.username = "foo";
    new Expectations() {{
        loginService.login(userForm); result = false;
        // no expectation for setCurrentUser
    }};

    String login = loginController.login(userForm);

    Assert.assertEquals("KO", login);
    new FullVerifications(loginService) {};
}

6. Mocking Exception Throwing

6.1. Mockito

Exception throwing can be mocked using .thenThrow(ExceptionClass.class) after a Mockito.when(mock.method(args)).

@Test
public void mockExceptionThrowing() {
    UserForm userForm = new UserForm();
    Mockito.when(loginService.login(userForm)).thenThrow(IllegalArgumentException.class);

    String login = loginController.login(userForm);

    Assert.assertEquals("ERROR", login);
    Mockito.verify(loginService).login(userForm);
    Mockito.verifyNoMoreInteractions(loginService);
}

6.2. EasyMock

Exception throwing can be mocked using .andThrow(new ExceptionClass()) after an EasyMock.expect(…) call.

@Test
public void mockExceptionThrowing() {
    UserForm userForm = new UserForm();
    EasyMock.expect(loginService.login(userForm)).andThrow(new IllegalArgumentException());
    EasyMock.replay(loginService);

    String login = loginController.login(userForm);

    Assert.assertEquals("ERROR", login);
    EasyMock.verify(loginService);
}

6.3. JMockit

Mocking exception throwing with JMockit is especially easy. Just return an Exception as the result of a mocked method call instead of the “normal” return.

@Test
public void mockExceptionThrowing() {
    UserForm userForm = new UserForm();
    new Expectations() {{
        loginService.login(userForm); result = new IllegalArgumentException();
        // no expectation for setCurrentUser
    }};

    String login = loginController.login(userForm);

    Assert.assertEquals("ERROR", login);
    new FullVerifications(loginService) {};
}

7. Mocking an Object to Pass Around

7.1. Mockito

You can also create a mock to pass as an argument for a method call. With Mockito, you can do that with a one-liner.

@Test
public void mockAnObjectToPassAround() {
    UserForm userForm = Mockito.when(Mockito.mock(UserForm.class).getUsername())
      .thenReturn("foo").getMock();
    Mockito.when(loginService.login(userForm)).thenReturn(true);

    String login = loginController.login(userForm);

    Assert.assertEquals("OK", login);
    Mockito.verify(loginService).login(userForm);
    Mockito.verify(loginService).setCurrentUser("foo");
}

7.2. EasyMock

Mocks can be created in-line with EasyMock.mock(Class.class). Afterwards, you can use EasyMock.expect(mock.method()) to prepare it for execution, always remembering to call EasyMock.replay(mock) before using it.

@Test
public void mockAnObjectToPassAround() {
    UserForm userForm = EasyMock.mock(UserForm.class);
    EasyMock.expect(userForm.getUsername()).andReturn("foo");
    EasyMock.expect(loginService.login(userForm)).andReturn(true);
    loginService.setCurrentUser("foo");
    EasyMock.replay(userForm);
    EasyMock.replay(loginService);

    String login = loginController.login(userForm);

    Assert.assertEquals("OK", login);
    EasyMock.verify(userForm);
    EasyMock.verify(loginService);
}

7.3. JMockit

To mock an object for just one test, you can simply pass it as a mocked parameter to the test method. Then you can create expectations as with any other mock.

@Test
public void mockAnObjectToPassAround(@Mocked UserForm userForm) {
    new Expectations() {{
        userForm.getUsername(); result = "foo";
        loginService.login(userForm); result = true;
        loginService.setCurrentUser("foo");
    }};
    
    String login = loginController.login(userForm);

    Assert.assertEquals("OK", login);
    new FullVerifications(loginService) {};
    new FullVerifications(userForm) {};
}

8. Custom Argument Matching

8.1. Mockito

Sometimes argument matching for mocked calls needs to be a little more complex than just a fixed value or anyString(). For those cases, Mockito has its own matcher class that is used with argThat(ArgumentMatcher<>).

@Test
public void argumentMatching() {
    UserForm userForm = new UserForm();
    userForm.username = "foo";
    // default matcher
    Mockito.when(loginService.login(Mockito.any(UserForm.class))).thenReturn(true);

    String login = loginController.login(userForm);

    Assert.assertEquals("OK", login);
    Mockito.verify(loginService).login(userForm);
    // complex matcher
    Mockito.verify(loginService).setCurrentUser(Mockito.argThat(
        new ArgumentMatcher<String>() {
            @Override
            public boolean matches(Object argument) {
                return argument instanceof String && 
                  ((String) argument).startsWith("foo");
            }
        }
    ));
}

8.2. EasyMock

Custom argument matching is a little bit more complicated with EasyMock as you need to create a static method in which you create the actual matcher and then report it with EasyMock.reportMatcher(IArgumentMatcher).

Once this method is created, you use it in your mock expectation with a call to the method (as seen in the example below, in the call to specificArgumentMatching).

@Test
public void argumentMatching() {
    UserForm userForm = new UserForm();
    userForm.username = "foo";
    // default matcher
    EasyMock.expect(loginService.login(EasyMock.isA(UserForm.class))).andReturn(true);
    // complex matcher
    loginService.setCurrentUser(specificArgumentMatching("foo"));
    EasyMock.replay(loginService);

    String login = loginController.login(userForm);

    Assert.assertEquals("OK", login);
    EasyMock.verify(loginService);
}

private static String specificArgumentMatching(String expected) {
    EasyMock.reportMatcher(new IArgumentMatcher() {
        @Override
        public boolean matches(Object argument) {
            return argument instanceof String 
              && ((String) argument).startsWith(expected);
        }

        @Override
        public void appendTo(StringBuffer buffer) {
            //NOOP
        }
    });
    return null;
}

8.3. JMockit

Custom argument matching with JMockit is done with the special withArgThat(Matcher) method (that receives Hamcrest‘s Matcher objects).

@Test
public void argumentMatching() {
    UserForm userForm = new UserForm();
    userForm.username = "foo";
    // default matcher
    new Expectations() {{
        loginService.login((UserForm) any);
        result = true;
        // complex matcher
        loginService.setCurrentUser(withArgThat(new BaseMatcher<String>() {
            @Override
            public boolean matches(Object item) {
                return item instanceof String && ((String) item).startsWith("foo");
            }

            @Override
            public void describeTo(Description description) {
                //NOOP
            }
        }));
    }};

    String login = loginController.login(userForm);

    Assert.assertEquals("OK", login);
    new FullVerifications(loginService) {};
}

9. Partial Mocking

9.1. Mockito

Mockito allows partial mocking (a mock that uses the real implementation instead of mocked method calls in some of its methods) in two ways.

You can either use .thenCallRealMethod() in a normal mock method call definition or you can create a spy instead of a mock in which case the default behaviour for that will be to call the real implementation in all non-mocked methods.

@Test
public void partialMocking() {
    // use partial mock
    loginController.loginService = spiedLoginService;
    UserForm userForm = new UserForm();
    userForm.username = "foo";
    // let service's login use implementation so let's mock DAO call
    Mockito.when(loginDao.login(userForm)).thenReturn(1);

    String login = loginController.login(userForm);

    Assert.assertEquals("OK", login);
    // verify mocked call
    Mockito.verify(spiedLoginService).setCurrentUser("foo");
}

9.2. EasyMock

Partial mocking also gets a little more complicated with EasyMock, as you need to define which methods will be mocked when creating the mock.

This is done with EasyMock.partialMockBuilder(Class.class).addMockedMethod("methodName").createMock(). Once this is done, you can use the mock as any other non-partial mock.

@Test
public void partialMocking() {
    UserForm userForm = new UserForm();
    userForm.username = "foo";
    // use partial mock
    LoginService loginServicePartial = EasyMock.partialMockBuilder(LoginService.class)
      .addMockedMethod("setCurrentUser").createMock();
    loginServicePartial.setCurrentUser("foo");
    // let service's login use implementation so let's mock DAO call
    EasyMock.expect(loginDao.login(userForm)).andReturn(1);

    loginServicePartial.setLoginDao(loginDao);
    loginController.loginService = loginServicePartial;
    
    EasyMock.replay(loginDao);
    EasyMock.replay(loginServicePartial);

    String login = loginController.login(userForm);

    Assert.assertEquals("OK", login);
    // verify mocked call
    EasyMock.verify(loginServicePartial);
    EasyMock.verify(loginDao);
}

9.3. JMockit

Partial mocking with JMockit is especially easy. Every method call for which no mocked behaviour has been defined in an Expectations(){{}} block uses the “real” implementation.

In this case, as no expectation is given for LoginService.login(UserForm), the actual implementation (and the call to LoginDao.login(UserForm)) is performed.

@Test
public void partialMocking() {
    // use partial mock
    LoginService partialLoginService = new LoginService();
    partialLoginService.setLoginDao(loginDao);
    loginController.loginService = partialLoginService;

    UserForm userForm = new UserForm();
    userForm.username = "foo";
    // let service's login use implementation so let's mock DAO call
    new Expectations() {{
        loginDao.login(userForm); result = 1;
        // no expectation for loginService.login
        partialLoginService.setCurrentUser("foo");
    }};

    String login = loginController.login(userForm);

    Assert.assertEquals("OK", login);
    // verify mocked call
    new FullVerifications(partialLoginService) {};
    new FullVerifications(loginDao) {};
}

10. Conclusion

In this post, we’ve compared three Java mocking libraries, each one with its strong points and downsides.

  • All three of them are easily configured with annotations to help you define mocks and the object-under-test, with runners to make mock injection as painless as possible.
    • We’d say Mockito would win here as it has a special annotation for partial mocks, but JMockit actually doesn’t even need it, so let’s say that it’s a tie between those two.
  • All three of them follow more or less the record-replay-verify pattern, but in our opinion, the best one to do so is JMockit as it actually forces you to use those in blocks, so tests get more structured.
  • Ease of use is important so that you can work as little as possible to define your tests. JMockit would be the chosen option for its fixed, always-the-same structure.
  • Mockito is more or less THE best known, so its community will be bigger.
  • Having to call replay every time you want to use a mock is a clear no-go so we’ll put a minus one for EasyMock.
  • Consistency/simplicity is also important to us. We loved JMockit’s way of returning results, which is the same for “normal” results as for exceptions.

With all this said, we’re going to choose JMockit as a kind of winner. Even though we’ve been using Mockito up till now, we’ve been captivated by JMockit’s simplicity and fixed structure, and we’ll try to use it from now on.

The full implementation of this tutorial can be found in the GitHub project, so feel free to download it and play with it.

Spring REST API with Protocol Buffers


1. Overview

Protocol Buffers is a language and platform neutral mechanism for serialization and deserialization of structured data, which is proclaimed by Google, its creator, to be much faster, smaller and simpler than other types of payloads, such as XML and JSON.

This tutorial guides you through setting up a REST API to take advantage of this binary-based message structure.

2. Protocol Buffers

This section gives some basic information on Protocol Buffers and how they are applied in the Java ecosystem.

2.1. Introduction to Protocol Buffers

In order to make use of Protocol Buffers, we need to define message structures in .proto files. Each file is a description of the data that might be transferred from one node to another, or stored in data sources. Here is an example of a .proto file, named baeldung.proto, which lives in the src/main/resources directory and will be used later in this tutorial:

syntax = "proto3";
package baeldung;
option java_package = "com.baeldung.protobuf";
option java_outer_classname = "BaeldungTraining";

message Course {
    int32 id = 1;
    string course_name = 2;
    repeated Student student = 3;
}
message Student {
    int32 id = 1;
    string first_name = 2;
    string last_name = 3;
    string email = 4;
    repeated PhoneNumber phone = 5;
    message PhoneNumber {
        string number = 1;
        PhoneType type = 2;
    }
    enum PhoneType {
        MOBILE = 0;
        LANDLINE = 1;
    }
}

In this tutorial, we use version 3 of both the protocol buffer compiler and the protocol buffer language; therefore, the .proto file must start with the syntax = “proto3” declaration. If version 2 of the compiler were in use, this declaration would be omitted. Next comes the package declaration, which is the namespace for this message structure, to avoid naming conflicts with other projects.

The following two declarations are used for Java only: the java_package option specifies the package for our generated classes to live in, and the java_outer_classname option indicates the name of the class enclosing all the types defined in this .proto file.

Subsection 2.3 below will describe the remaining elements and how those are compiled into Java code.

2.2. Protocol Buffers with Java

After a message structure is defined, we need a compiler to convert this language neutral content to Java code. You can follow the instructions in the Protocol Buffers repository in order to get an appropriate compiler version. Alternatively, you may download a pre-built binary compiler from the Maven central repository by searching for the com.google.protobuf:protoc artifact, then picking up an appropriate version for your platform.

Next, copy the compiler to the src/main directory of your project and execute the following command in the command line:

protoc --java_out=java resources/baeldung.proto

This should generate a source file for the BaeldungTraining class within the com.baeldung.protobuf package, as specified in the option declarations of the baeldung.proto file.

In addition to the compiler, Protocol Buffers runtime is required. This can be achieved by adding the following dependency to the Maven POM file:

<dependency>
    <groupId>com.google.protobuf</groupId>
    <artifactId>protobuf-java</artifactId>
    <version>3.0.0-beta-3</version>
</dependency>

We may use another version of the runtime, provided that it is the same as the compiler’s version. For the latest one, please check out this link.

2.3. Compiling a Message Description

Using the compiler, messages in a .proto file are compiled into static nested Java classes. In the above example, the Course and Student messages are converted to Course and Student Java classes, respectively. At the same time, messages’ fields are compiled into JavaBeans-style getters and setters inside those generated types. The marker at the end of each field declaration, composed of an equals sign and a number, is the unique tag used to encode the associated field in binary form.

We will walk through typed fields of the messages to see how those are converted to accessor methods.

Let’s start with the Course message. It has two simple fields, including id and course_name. Their protocol buffer types, int32 and string, are translated into Java int and String types. Here are their associated getters after compilation (with implementations being left out for brevity):

public int getId();
public java.lang.String getCourseName();

Note that names of typed fields should be in snake case (individual words separated by underscore characters) to maintain compatibility with other languages. The compiler will convert those names to camel case according to Java conventions.

The last field of the Course message, student, is of the Student complex type, which will be described below. This field is marked with the repeated keyword, meaning that it may be repeated any number of times. The compiler generates some methods associated with the student field as follows (without implementations):

public java.util.List<com.baeldung.protobuf.BaeldungTraining.Student> getStudentList();
public int getStudentCount();
public com.baeldung.protobuf.BaeldungTraining.Student getStudent(int index);

Now we will move on to the Student message, which is used as the complex type of the student field of the Course message. Its simple fields, including id, first_name, last_name and email, are used to create Java accessor methods:

public int getId();
public java.lang.String getFirstName();
public java.lang.String getLastName();
public java.lang.String getEmail();

The last field, phone, is of the PhoneNumber complex type. Similar to the student field of the Course message, this field is repeated and has several associated methods:

public java.util.List<com.baeldung.protobuf.BaeldungTraining.Student.PhoneNumber> getPhoneList();
public int getPhoneCount();
public com.baeldung.protobuf.BaeldungTraining.Student.PhoneNumber getPhone(int index);

The PhoneNumber message is compiled into the BaeldungTraining.Student.PhoneNumber nested type, with two getters corresponding to the message’s fields:

public java.lang.String getNumber();
public com.baeldung.protobuf.BaeldungTraining.Student.PhoneType getType();

PhoneType, the complex type of the type field of the PhoneNumber message, is an enumeration type, which will be transformed into a Java enum type nested within the BaeldungTraining.Student class:

public enum PhoneType implements com.google.protobuf.ProtocolMessageEnum {
    MOBILE(0),
    LANDLINE(1),
    UNRECOGNIZED(-1),
    ;
    // Other declarations
}

3. Protobuf In Spring REST API

This section will guide you through setting up a REST service using Spring Boot.

3.1. Bean Declaration

Let’s start with the definition of our main @SpringBootApplication:

@SpringBootApplication
public class Application {
    @Bean
    ProtobufHttpMessageConverter protobufHttpMessageConverter() {
        return new ProtobufHttpMessageConverter();
    }

    @Bean
    public CourseRepository createTestCourses() {
        Map<Integer, Course> courses = new HashMap<>();
        Course course1 = Course.newBuilder()
          .setId(1)
          .setCourseName("REST with Spring")
          .addAllStudent(createTestStudents())
          .build();
        Course course2 = Course.newBuilder()
          .setId(2)
          .setCourseName("Learn Spring Security")
          .addAllStudent(new ArrayList<Student>())
          .build();
        courses.put(course1.getId(), course1);
        courses.put(course2.getId(), course2);
        return new CourseRepository(courses);
    }

    // Other declarations
}

The ProtobufHttpMessageConverter bean is used to convert responses returned by @RequestMapping annotated methods to protocol buffer messages.

The other bean, CourseRepository, contains some test data for our API.

What’s important here is that we’re operating with Protocol Buffer specific data – not with standard POJOs.

Here’s the simple implementation of the CourseRepository:

public class CourseRepository {
    Map<Integer, Course> courses;
    
    public CourseRepository (Map<Integer, Course> courses) {
        this.courses = courses;
    }
    
    public Course getCourse(int id) {
        return courses.get(id);
    }
}

3.2. Controller Configuration

We can define the @Controller class for a test URL as follows:

@RestController
public class CourseController {
    @Autowired
    CourseRepository courseRepo;

    @RequestMapping("/courses/{id}")
    Course customer(@PathVariable Integer id) {
        return courseRepo.getCourse(id);
    }
}

And again – the important thing here is that the Course DTO that we’re returning from the controller layer is not a standard POJO. That’s the trigger for it to be converted to protocol buffer messages before being transferred back to the client.

4. REST Clients and Testing

Now that we’ve had a look at the simple API implementation, let’s illustrate deserialization of protocol buffer messages on the client side, using two methods.

The first one takes advantage of the RestTemplate API with a pre-configured ProtobufHttpMessageConverter bean to automatically convert messages.

The second is using protobuf-java-format to manually transform protocol buffer responses into JSON documents.

To begin, we need to set up the context for an integration test and instruct Spring Boot to find configuration information in the Application class by declaring a test class as follows:

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = Application.class)
@WebIntegrationTest
public class ApplicationTest {
    // Other declarations
}

All code snippets in this section will be placed in the ApplicationTest class.

4.1. Expected Response

The first step to access a REST service is to determine the request URL:

private static final String COURSE1_URL = "http://localhost:8080/courses/1";

This COURSE1_URL will be used for getting the first test course from the REST service we created before. After a GET request is sent to the above URL, the corresponding response is verified using the following assertions:

private void assertResponse(String response) {
    assertThat(response, containsString("id"));
    assertThat(response, containsString("course_name"));
    assertThat(response, containsString("REST with Spring"));
    assertThat(response, containsString("student"));
    assertThat(response, containsString("first_name"));
    assertThat(response, containsString("last_name"));
    assertThat(response, containsString("email"));
    assertThat(response, containsString("john.doe@baeldung.com"));
    assertThat(response, containsString("richard.roe@baeldung.com"));
    assertThat(response, containsString("jane.doe@baeldung.com"));
    assertThat(response, containsString("phone"));
    assertThat(response, containsString("number"));
    assertThat(response, containsString("type"));
}

We will make use of this helper method in both test cases covered in the succeeding sub-sections.

4.2. Testing with RestTemplate

Here is how we create a client, send a GET request to the designated destination, receive the response in the form of protocol buffer messages and verify it using the RestTemplate API:

@Autowired
private RestTemplate restTemplate;

@Test
public void whenUsingRestTemplate_thenSucceed() {
    ResponseEntity<Course> course = restTemplate.getForEntity(COURSE1_URL, Course.class);
    assertResponse(course.toString());
}

To make this test case work, we need a bean of the RestTemplate type to be registered in a configuration class:

@Bean
RestTemplate restTemplate(ProtobufHttpMessageConverter hmc) {
    return new RestTemplate(Arrays.asList(hmc));
}

Another bean of the ProtobufHttpMessageConverter type is also required to automatically transform the received protocol buffer messages. This bean is the same as the one defined in sub-section 3.1. Since the client and server share the same application context in this tutorial, we may declare the RestTemplate bean in the Application class and re-use the ProtobufHttpMessageConverter bean.

4.3. Testing with HttpClient

The first step to use the HttpClient API and manually convert protocol buffer messages is adding the following two dependencies to the Maven POM file:

<dependency>
    <groupId>com.googlecode.protobuf-java-format</groupId>
    <artifactId>protobuf-java-format</artifactId>
    <version>1.4</version>
</dependency>
<dependency>
    <groupId>org.apache.httpcomponents</groupId>
    <artifactId>httpclient</artifactId>
    <version>4.5.2</version>
</dependency>

For the latest versions of these dependencies, please have a look at protobuf-java-format and httpclient artifacts in Maven central repository.

Let’s move on to create a client, execute a GET request and convert the associated response to an InputStream instance using the given URL:

private InputStream executeHttpRequest(String url) throws IOException {
    CloseableHttpClient httpClient = HttpClients.createDefault();
    HttpGet request = new HttpGet(url);
    HttpResponse httpResponse = httpClient.execute(request);
    return httpResponse.getEntity().getContent();
}

Now, we will convert protocol buffer messages in the form of an InputStream object to a JSON document:

private String convertProtobufMessageStreamToJsonString(InputStream protobufStream) throws IOException {
    JsonFormat jsonFormat = new JsonFormat();
    Course course = Course.parseFrom(protobufStream);
    return jsonFormat.printToString(course);
}

And here is how a test case uses private helper methods declared above and validates the response:

@Test
public void whenUsingHttpClient_thenSucceed() throws IOException {
    InputStream responseStream = executeHttpRequest(COURSE1_URL);
    String jsonOutput = convertProtobufMessageStreamToJsonString(responseStream);
    assertResponse(jsonOutput);
}

4.4. Response in JSON

To make things clear, the responses we received in the tests described in the previous sub-sections are included here:

id: 1
course_name: "REST with Spring"
student {
    id: 1
    first_name: "John"
    last_name: "Doe"
    email: "john.doe@baeldung.com"
    phone {
        number: "123456"
    }
}
student {
    id: 2
    first_name: "Richard"
    last_name: "Roe"
    email: "richard.roe@baeldung.com"
    phone {
        number: "234567"
        type: LANDLINE
    }
}
student {
    id: 3
    first_name: "Jane"
    last_name: "Doe"
    email: "jane.doe@baeldung.com"
    phone {
        number: "345678"
    }
    phone {
        number: "456789"
        type: LANDLINE
    }
}

5. Conclusion

This tutorial quickly introduced Protocol Buffers and illustrated setting up a REST API using the format with Spring. We then moved on to client support and the serialization/deserialization mechanism.

The implementation of all the examples and code snippets can be found in a GitHub project.


Java Web Weekly, Issue 130


At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> High-Performance Java Persistence – Part Two [vladmihalcea.com]

I’ve been following the progress of this book for a while, and it’s clear that it’s going to be THE reference material for developers aiming to learn Hibernate and JPA for many years to come.

It’s cool that it’s finally almost ready to go out and make some waves.

>> Bean Validation and the Jigsaw Liaison [in.relation.to]

Very nice, code-heavy look into what’s coming to Java, and specifically how the modularization work is going to play with bean validation.

>> Java 9 Additions To Stream [codefx.org]

Java 9 is a-coming, and the improvements to the Stream APIs and general functionality definitely look interesting.

>> Java vs .NET vs Python vs Ruby vs Node.JS: Who Reigns the Job Market? [takipi.com]

Another interesting, data-driven writeup, this time on the state of the job market.

The guys and gals from Takipi have been on a number crunching rampage lately. My best guess is that they hired someone really good at pulling insights out of data.

>> The Fault in Our JARs: Why We Stopped Building Fat JARs [product.hubspot.com]

A different perspective on the fat jar approach to packaging and deploying applications, which makes a lot of sense at real scale.

>> Java EE Guardians Unite to Save Java EE [infoq.com]

A good, quick intro to what the Java EE Guardians group is all about, and of course the general state of the Java EE ecosystem right now.

>> I do not use a debugger [lemire.me]

An interesting take on the concept of self-imposed limitations.

And a personal note here, on learning. I was once forced to not touch my mouse, in a 2-day TDD workshop. It was frustrating, but also a huge boost in learning to get better on the keyboard.

>> Eclipse Foundation Releases Neon [infoq.com]

Yep, it’s that time of year. The new Eclipse is out.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Why I Prefer Merge Over Rebase [techblog.bozho.net]

Yes, discussing the best way to approach a git workflow can be a black hole.

This writeup though is pretty to the point, and actually offers a concrete opinion, so it’s well worth a read.

Also worth reading:

3. Musings

>> What It Really Means to Niche Down [daedtech.com]

Intelligent, intentional positioning is rare in our space, which is truly unfortunate. A good read talking about the way to niche down and the benefits that come with such an initially scary prospect.

>> Updating the PerfectTablePlan website [successfulsoftware.net]

This is a cool read about the impact of re-designing a product page.

And it has nothing to do with Java (just FYI).

>> What isn’t Serverless? [martinfowler.com]

>> Serverless [martinfowler.com]

>> Unpacking ‘Function as a Service’ [martinfowler.com]

Different points in the serverless discussion.

Even though I’ve not experimented with most of these concepts in practice, these writeups definitely clear things up and start making the possibilities clearer for when I will.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Don’t blame me for not knowing [dilbert.com]

>> Are you trying to be a jerk? [dilbert.com]

>> Engineer thinks news is magic [dilbert.com]

5. Pick of the Week

>> You don’t have my permission [signalvnoise.com]

Get the early-bird price (20% Off) of my upcoming "Learn Spring Security" Course:

>> CHECK OUT THE COURSE

Migrating to the New Java 8 Date Time API


I just released the Starter Class of "Learn Spring Security":

>> CHECK OUT THE COURSE

1. Overview

In this tutorial, you will learn how to refactor your code to leverage the new Date/Time API introduced in Java 8.

2. New API at a Glance

Working with dates in Java used to be hard. The old date library provided by the JDK included only three classes: java.util.Date, java.util.Calendar and java.util.TimeZone.

These were only suitable for the most basic tasks. For anything even remotely complex, the developers had to either use third-party libraries or write tons of custom code.

Java 8 introduced a completely new Date/Time API (in the java.time package) that is loosely based on the popular Joda-Time library. This new API dramatically simplified date and time processing and fixed many shortcomings of the old date library.

2.1. API Clarity

A first advantage of the new API is clarity – the API is very clear, concise and easy to understand. It does not have a lot of the inconsistencies found in the old library, such as the field numbering (in Calendar, months are zero-based, but days of the week are one-based).

2.2. API Flexibility

Another advantage is flexibility – working with multiple representations of time. The old date library included only a single time representation class – java.util.Date, which despite its name, is actually a timestamp. It only stores the number of milliseconds elapsed since the Unix epoch.

The new API has many different time representations, each suitable for different use cases (a few of them are instantiated in the sketch after this list):

  • Instant – represents a point in time (timestamp)
  • LocalDate – represents a date (year, month, day)
  • LocalDateTime – same as LocalDate, but includes time with nanosecond precision
  • OffsetDateTime – same as LocalDateTime, but with time zone offset
  • LocalTime – time with nanosecond precision and without date information
  • ZonedDateTime – same as OffsetDateTime, but includes a time zone ID
  • OffsetTime – same as LocalTime, but with time zone offset
  • MonthDay – month and day, without year or time
  • YearMonth – month and year, without day or time
  • Duration – an amount of time measured in seconds, minutes and hours, with nanosecond precision
  • Period – amount of time represented in days, months and years
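
As a quick illustration, here is how a few of these types can be instantiated (the values are arbitrary):

// all of these types live in the java.time package
Instant timestamp = Instant.now();                   // a point on the time-line
LocalDate date = LocalDate.of(2016, Month.JUNE, 20); // a date without time
LocalTime time = LocalTime.of(6, 30);                // a time without date
YearMonth june = YearMonth.of(2016, Month.JUNE);     // year and month only
Duration tenMinutes = Duration.ofMinutes(10);        // a time-based amount
Period twoWeeks = Period.ofWeeks(2);                 // a date-based amount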

2.3. Immutability and Thread-Safety

Another advantage is that all time representations in the Java 8 Date/Time API are immutable and thus thread-safe.

All mutating methods return a new copy instead of modifying the state of the original object.

Old classes such as java.util.Date were not thread-safe and could introduce very subtle concurrency bugs.
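
Here is a small sketch illustrating the point – calling a “mutating” method returns a new instance and leaves the original untouched:

LocalDate date = LocalDate.of(2016, 6, 20);
LocalDate nextWeek = date.plusWeeks(1); // returns a new object

System.out.println(date);     // 2016-06-20 – unchanged
System.out.println(nextWeek); // 2016-06-27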

2.4. Method Chaining

All mutating methods can be chained together, allowing us to implement complex transformations in a single line of code:

ZonedDateTime nextFriday = LocalDateTime.now()
  .plusHours(1)
  .with(TemporalAdjusters.next(DayOfWeek.FRIDAY))
  .atZone(ZoneId.of("America/Los_Angeles")); // "PST" is not a valid region id, so we use the full zone id

3. Examples

The examples below demonstrate how to perform common tasks with both the old and the new APIs.

Getting current time

// Old
Date now = new Date();

// New
ZonedDateTime now = ZonedDateTime.now();

Representing specific time

// Old
Date birthDay = new GregorianCalendar(1990, Calendar.DECEMBER, 15).getTime();

// New
LocalDate birthDay = LocalDate.of(1990, Month.DECEMBER, 15);

Extracting specific fields

// Old
int month = new GregorianCalendar().get(Calendar.MONTH);

// New
Month month = LocalDateTime.now().getMonth();

Adding and subtracting time

// Old
GregorianCalendar calendar = new GregorianCalendar();
calendar.add(Calendar.HOUR_OF_DAY, -5);
Date fiveHoursBefore = calendar.getTime();

// New
LocalDateTime fiveHoursBefore = LocalDateTime.now().minusHours(5);

Altering specific fields

// Old
GregorianCalendar calendar = new GregorianCalendar();
calendar.set(Calendar.MONTH, Calendar.JUNE);
Date inJune = calendar.getTime();

// New
LocalDateTime inJune = LocalDateTime.now().withMonth(Month.JUNE.getValue());

Truncating

Truncating resets all time fields smaller than the specified field. In the example below, minutes and everything below are set to zero:

// Old
Calendar now = Calendar.getInstance();
now.set(Calendar.MINUTE, 0);
now.set(Calendar.SECOND, 0);
now.set(Calendar.MILLISECOND, 0);
Date truncated = now.getTime();

// New
LocalTime truncated = LocalTime.now().truncatedTo(ChronoUnit.HOURS);

Time zone conversion

// Old
GregorianCalendar calendar = new GregorianCalendar();
calendar.setTimeZone(TimeZone.getTimeZone("CET"));
Date centralEastern = calendar.getTime();

// New
ZonedDateTime centralEastern = LocalDateTime.now().atZone(ZoneId.of("CET"));

Getting time span between two points in time

// Old
GregorianCalendar calendar = new GregorianCalendar();
Date now = new Date();
calendar.add(Calendar.HOUR, 1);
Date hourLater = calendar.getTime();
long elapsed = hourLater.getTime() - now.getTime();

// New
LocalDateTime now = LocalDateTime.now();
LocalDateTime hourLater = LocalDateTime.now().plusHours(1);
Duration span = Duration.between(now, hourLater);

Time formatting and parsing

DateTimeFormatter is a replacement for the old SimpleDateFormat that is thread-safe and provides additional functionality.

// Old
SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd");
Date now = new Date();
String formattedDate = dateFormat.format(now);
Date parsedDate = dateFormat.parse(formattedDate);

// New
LocalDate now = LocalDate.now();
DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd");
String formattedDate = now.format(formatter);
LocalDate parsedDate = LocalDate.parse(formattedDate, formatter);

Number of days in a month

// Old
Calendar calendar = new GregorianCalendar(1990, Calendar.FEBRUARY, 20);
int daysInMonth = calendar.getActualMaximum(Calendar.DAY_OF_MONTH);

// New
int daysInMonth = YearMonth.of(1990, 2).lengthOfMonth();

4. Interacting with Legacy Code

In many cases, we might need to ensure interoperability with third-party libraries that rely on the old date library.

In Java 8, the old date library classes have been extended with methods that convert them to the corresponding objects from the new Date/Time API, and the new classes provide similar conversion functionality.

Instant instantFromCalendar = GregorianCalendar.getInstance().toInstant();
ZonedDateTime zonedDateTimeFromCalendar = new GregorianCalendar().toZonedDateTime();
Date dateFromInstant = Date.from(Instant.now());
GregorianCalendar calendarFromZonedDateTime = GregorianCalendar.from(ZonedDateTime.now());
Instant instantFromDate = new Date().toInstant();
ZoneId zoneIdFromTimeZone = TimeZone.getTimeZone("PST").toZoneId();

5. Conclusion

In this article we explored the new Date/Time API available in Java 8. We took a look at its advantages compared to the legacy API and pointed out the differences using multiple examples.

Note that we barely scratched the surface of the capabilities of the new Date/Time API. Make sure to read through the official documentation to discover the full range of tools offered by the new API.

Code examples can be found in the GitHub project.

Get the early-bird price (20% Off) of my upcoming "Learn Spring Security" Course:

>> CHECK OUT THE COURSE

Introduction to AssertJ


I just released the Starter Class of "Learn Spring Security":

>> CHECK OUT THE COURSE

1. Overview

In this article we will be exploring AssertJ – an open-source, community-driven library used for writing fluent and rich assertions in Java tests.

This article focuses on tools available in the basic AssertJ module called AssertJ-core.

2. Maven Dependencies

In order to use AssertJ, you need to include the following section in your pom.xml file:

<dependency>
    <groupId>org.assertj</groupId>
    <artifactId>assertj-core</artifactId>
    <version>3.4.1</version>
    <scope>test</scope>
</dependency>

This dependency covers only the basic Java assertions. If you want to use advanced assertions, you will need to add additional modules separately.

Note that for Java 7 and earlier you should use AssertJ core version 2.x.x.
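
For reference, a 2.x declaration might look like the following – the version shown here is only an example, so check Maven Central for the latest 2.x release:

<dependency>
    <groupId>org.assertj</groupId>
    <artifactId>assertj-core</artifactId>
    <version>2.4.1</version>
    <scope>test</scope>
</dependency>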

Latest versions can be found here.

3. Introduction

AssertJ provides a set of classes and utility methods that allow us to write fluent and beautiful assertions easily for:

  • Standard Java
  • Java 8
  • Guava
  • Joda Time
  • Neo4J and
  • Swing components

A detailed list of all modules is available on the project’s website.

Let’s start with a few examples, right from the AssertJ’s documentation:

assertThat(frodo)
  .isNotEqualTo(sauron)
  .isIn(fellowshipOfTheRing);

assertThat(frodo.getName())
  .startsWith("Fro")
  .endsWith("do")
  .isEqualToIgnoringCase("frodo");

assertThat(fellowshipOfTheRing)
  .hasSize(9)
  .contains(frodo, sam)
  .doesNotContain(sauron);

The above examples are only the tip of the iceberg, but they give us an overview of what writing assertions with this library looks like.

4. AssertJ in Action

In this section we’ll focus on setting up AssertJ and exploring its possibilities.

4.1. Getting Started

With the jar of the library on the classpath, enabling assertions is as easy as adding a single static import to your test class:

import static org.assertj.core.api.Assertions.*;

4.2. Writing Assertions

In order to write an assertion, you always need to start by passing your object to the Assertions.assertThat() method and then you follow with the actual assertions.

It’s important to remember that unlike some other libraries, the code below does not actually assert anything yet and will never fail a test:

assertThat(anyReferenceOrValue);

If you leverage your IDE’s code completion features, writing AssertJ assertions becomes incredibly easy due to its very descriptive methods. This is how it looks in IntelliJ IDEA 16:

IDE’s code completion features

As you can see, you have dozens of contextual methods to choose from, and those are available only for the String type. Let’s explore some of this API in detail and look at some specific assertions.

4.3. Object Assertions

Objects can be compared in various ways either to determine equality of two objects or to examine the fields of an object.

Let’s look at two ways that we can compare the equality of two objects. Given the following two Dog objects fido and fidosClone:

public class Dog { 
    private String name; 
    private Float weight;
    
    // standard constructor, getters and setters
}

Dog fido = new Dog("Fido", 5.25f);

Dog fidosClone = new Dog("Fido", 5.25f);

We can compare equality with the following assertion:

assertThat(fido).isEqualTo(fidosClone);

This will fail as isEqualTo() compares object references. If we want to compare their content instead, we can use isEqualToComparingFieldByFieldRecursively() like so:

assertThat(fido).isEqualToComparingFieldByFieldRecursively(fidosClone);

fido and fidosClone are equal when doing a recursive field-by-field comparison, because each field of one object is compared to the corresponding field in the other object.

There are many other assertion methods that provide different ways to compare and contrast objects and to examine and assert on their fields. In order to discover them all, refer to the official AbstractObjectAssert documentation.

4.4. Boolean Assertions

Some simple methods exist for truth testing:

  • isTrue()
  • isFalse()

Let’s see them in action:

assertThat("".isEmpty()).isTrue();

4.5. Iterable/Array Assertions

For an Iterable or an Array there are multiple ways of asserting on their contents. One of the most common assertions is to check whether an Iterable or Array contains a given element:

List<String> list = Arrays.asList("1", "2", "3");

assertThat(list).contains("1");

or if a List is not empty:

assertThat(list).isNotEmpty();

or if a List starts with a given element, for example “1”:

assertThat(list).startsWith("1");

Keep in mind that if you want to create more than one assertion for the same object, you can join them together easily.

Here is an example of an assertion that checks that a provided list is not empty, contains the element “1”, does not contain any nulls, and contains the sequence of elements “2”, “3”:

assertThat(list)
  .isNotEmpty()
  .contains("1")
  .doesNotContainNull()
  .containsSequence("2", "3");

Of course many more possible assertions exist for those types. In order to discover them all, refer to the official AbstractIterableAssert documentation.

4.6. Character Assertions

Assertions for character types mostly involve comparisons and even checking if a given character is in the Unicode table.

Here is an example of an assertion that checks that a provided character is not ‘a’, is in the Unicode table, is greater than or equal to ‘b’ and is lowercase:

assertThat(someCharacter)
  .isNotEqualTo('a')
  .inUnicode()
  .isGreaterThanOrEqualTo('b')
  .isLowerCase();

For a detailed list of all character types’ assertions, see AbstractCharacterAssert documentation.

4.7. Class Assertions

Assertions for the Class type are mostly about checking its fields, Class types, presence of annotations and class finality.

If you want to assert that the Runnable class is an interface, you simply need to write:

assertThat(Runnable.class).isInterface();

or if you want to check if one class is assignable from the other:

assertThat(Exception.class).isAssignableFrom(NoSuchElementException.class);

All possible Class assertions can be viewed in the AbstractClassAssert documentation.

4.8. File Assertions

File assertions are all about checking whether a given File instance exists, is a directory or a file, has certain content, is readable, or has a given extension.

Here you can see an example of an assertion that checks whether a given file exists, is a file and not a directory, and is readable and writable:

assertThat(someFile)
  .exists()
  .isFile()
  .canRead()
  .canWrite();

All possible File assertions can be viewed in the AbstractFileAssert documentation.

4.9. Double/Float/Integer Assertions

Numeric assertions are all about comparing numeric values with or without a given offset. For example, if we want to check whether two values are equal according to a given precision, we can do the following:

assertThat(5.1).isEqualTo(5, withPrecision(1d));

Notice that we are using the already-imported withPrecision(Double offset) helper method to generate the Offset object.

For more assertions, visit AbstractDoubleAssert documentation.

4.10. InputStream Assertions

There is only one InputStream-specific assertion available:

  • hasSameContentAs(InputStream expected)

and in action:

assertThat(given).hasSameContentAs(expected);

4.11. Map Assertions

Map assertions allow you to check whether a map contains a certain entry, a set of entries, or keys/values separately.

And here you can see an example of an assertion that checks that a given map is not empty, contains the numeric key 2, does not contain the numeric key 10, and contains the entry with key 2 and value “a”:

assertThat(map)
  .isNotEmpty()
  .containsKey(2)
  .doesNotContainKeys(10)
  .contains(entry(2, "a"));

For more assertions, see AbstractMapAssert documentation.

4.12. Throwable Assertions

Throwable assertions allow us, for example, to inspect an exception’s message and stack trace, check its cause, or verify whether an exception has already been thrown.

Let’s have a look at the example of an assertion that checks if a given exception was thrown and has a message ending with “c”:

assertThat(ex).hasNoCause().hasMessageEndingWith("c");

For more assertions, see AbstractThrowableAssert documentation.

5. Describing Assertions

In order to achieve an even higher verbosity level, you can create dynamically generated custom descriptions for your assertions. The key to doing this is the as(String description, Object… args) method.

If you define your assertion like this:

assertThat(person.getAge())
  .as("%s's age should be equal to 100", person.getName())
  .isEqualTo(100);

this is what you will get when running tests:

[Alex's age should be equal to 100] expected:<100> but was:<34>

6. Java 8

AssertJ takes full advantage of Java 8’s functional programming features. Let’s dive into an example and see it in action. First let’s see how we do it in Java 7:

assertThat(fellowshipOfTheRing)
  .filteredOn("race", HOBBIT)
  .containsOnly(sam, frodo, pippin, merry);

Here we are filtering a collection on the race Hobbit and in Java 8 we can do something like this:

assertThat(fellowshipOfTheRing)
  .filteredOn(character -> character.getRace().equals(HOBBIT))
  .containsOnly(sam, frodo, pippin, merry);

We will be exploring AssertJ’s Java 8 capabilities in a future article in this series. The above examples were taken from AssertJ’s website.

7. Conclusion

In this article we briefly explored the possibilities that AssertJ gives us along with the most popular assertions for core Java types.

The implementation of all the examples and code snippets can be found in a GitHub project.

Get the early-bird price (20% Off) of my upcoming "Learn Spring Security" Course:

>> CHECK OUT THE COURSE

Spring MVC and the @ModelAttribute Annotation


I just released the Starter Class of "Learn Spring Security":

>> CHECK OUT THE COURSE

1. Overview

One of the most important Spring-MVC annotations is the @ModelAttribute annotation.

@ModelAttribute is an annotation that binds a method parameter or method return value to a named model attribute, and then exposes it to a web view.

In the following example, we will demonstrate the usability and functionality of the annotation through a common concept: a form submitted by a company’s employee.

2. The @ModelAttribute in Depth

As the introductory paragraph revealed, @ModelAttribute can be used either as a method parameter or at the method level.

2.1 At the Method Level

When the annotation is used at the method level, it indicates that the purpose of the method is to add one or more model attributes. Such methods support the same argument types as @RequestMapping methods, but they cannot be mapped directly to requests.

Let’s have a look at a quick example here to start understanding how this works:

@ModelAttribute
public void addAttributes(Model model) {
    model.addAttribute("msg", "Welcome to the Netherlands!");
}

In the example, we show a method that adds an attribute named msg to all models defined in the controller class.

Of course we’ll see this in action later on in the article.

In general, Spring MVC will always make a call to that method first, before it calls any request handler methods. That is, @ModelAttribute methods are invoked before the controller methods annotated with @RequestMapping are invoked. The logic behind this sequence is that the model object has to be created before any processing starts inside the controller methods.

It is also important to annotate the respective class with @ControllerAdvice. This way, you can add values to the Model that are identified as global, meaning a default value exists for every request and every handler method.
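
As a minimal, illustrative sketch – the class name and attribute values here are hypothetical – such a globally applied class could look like this:

@ControllerAdvice
public class GlobalModelAttributes {

    // invoked before the @RequestMapping methods of every controller,
    // so every view can reference ${appName}
    @ModelAttribute
    public void addGlobalAttributes(Model model) {
        model.addAttribute("appName", "employee-portal");
    }
}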

2.2 As a Method Argument

When used as a method argument, it indicates that the argument should be retrieved from the model. If not present, the argument should first be instantiated and then added to the model. Once present in the model, the argument’s fields should be populated from all request parameters that have matching names.

In the code snippet that follows the employee model attribute is populated with data from a form submitted to the addEmployee endpoint. Spring MVC does this behind the scenes before invoking the submit method:

@RequestMapping(value = "/addEmployee", method = RequestMethod.POST)
public String submit(@ModelAttribute("employee") Employee employee) {
    // Code that uses the employee object

    return "employeeView";
}

Later on in this article we will see a complete example of how to use the employee object to populate the employeeView template.

So, it binds the form data with a bean. The controller annotated with @RequestMapping can have custom class argument(s) annotated with @ModelAttribute.

This is what is commonly known as data binding in Spring MVC, a common mechanism that saves you from having to parse each form field individually.

3. Form Example

In this section we will provide the example referred to in the overview section: a very basic form that prompts a user (employee of a company, in our specific example), to enter some personal information (specifically name and id). After the submission is completed and without any errors, the user expects to see the previously submitted data, displayed on another screen.

3.1 The View

Let’s first create a simple form with id and name fields:

<form:form method="POST" action="/spring-mvc-java/addEmployee" 
  modelAttribute="employee">
    <form:label path="name">Name</form:label>
    <form:input path="name" />
		
    <form:label path="id">Id</form:label>
    <form:input path="id" />
		
    <input type="submit" value="Submit" />
</form:form>

3.2 The Controller

Here is the controller class, where the logic for the afore-mentioned view is being implemented:

@Controller
@ControllerAdvice
public class EmployeeController {

    private Map<Long, Employee> employeeMap = new HashMap<>();

    @RequestMapping(value = "/addEmployee", method = RequestMethod.POST)
    public String submit(
      @ModelAttribute("employee") Employee employee,
      BindingResult result, ModelMap model) {
        if (result.hasErrors()) {
            return "error";
        }
        model.addAttribute("name", employee.getName());
        model.addAttribute("id", employee.getId());

        employeeMap.put(employee.getId(), employee);

        return "employeeView";
    }

    @ModelAttribute
    public void addAttributes(Model model) {
        model.addAttribute("msg", "Welcome to the Netherlands!");
    }
}

In the submit() method we have an Employee object bound to our View. Can you see the power of this annotation? You can map your form fields to an object model as simply as that. In the method we are fetching values from the form and setting them on the ModelMap.

In the end we return employeeView, which means that the respective JSP file is going to be called as a View representative.

Furthermore, there is also an addAttributes() method. Its purpose is to add values to the Model that are identified globally; that is, a default value is returned as a response for every request to every controller method. We also have to annotate the specific class with @ControllerAdvice.

3.3 The Model

As mentioned before, the model object is very simple and contains all the attributes required by the “front-end”. Now, let’s have a look at an example:

@XmlRootElement
public class Employee {

    private long id;
    private String name;

    public Employee(long id, String name) {
        this.id = id;
        this.name = name;
    }

    // standard getters and setters removed
}

3.4 Wrap Up

@ControllerAdvice assists a controller and, in particular, @ModelAttribute methods that apply to all @RequestMapping methods. Of course, our addAttributes() method will be the very first to run, prior to the rest of the @RequestMapping methods.

Keeping that in mind, and after both submit() and addAttributes() have run, we can refer to the values in the View returned from the Controller class by referencing their given name inside a ${} expression, for example ${name}.

3.5 Results View

Let’s now print what we received from the form:

<h3>${msg}</h3>
Name : ${name}
ID : ${id}

4. Conclusion

In this tutorial we investigated the usage of @ModelAttribute annotation, for both method arguments and method level usage cases.

The implementation of this simple tutorial can be found in the github project – this is a Maven-based project, so it should be easy to import and run as it is.

Get the early-bird price (20% Off) of my upcoming "Learn Spring Security" Course:

>> CHECK OUT THE COURSE


XML Libraries Support in Java


1. Introduction

In this article we will be comparing Java XML libraries and APIs.

This is the second article from the series about Java support for XML, if you want to go deeper into the XPath support in Java have a look at the previous article.

2. Overview

Now we’re going to dig deeper into Java’s XML support, and for that we’re going to start by explaining, as simply as possible, all the subject-related acronyms.

In Java’s XML support we can find a few API definitions, each with its pros and cons.

SAX: an event-based parsing API that provides low-level access. It is memory-efficient and faster than DOM, since it doesn’t load the whole document tree into memory, but it doesn’t provide support for navigation like the one provided by XPath. Although it is more efficient, it is also harder to use.

DOM: a model-based parser that loads a tree-structured document into memory, so we have the original element order, we can navigate the document in both directions, and we get an API for both reading and writing. It offers XML manipulation and is very easy to use, although the price is a high strain on memory resources.

StAX: it offers the ease of DOM and the efficiency of SAX, but it lacks some functionality provided by DOM, like XML manipulation, and it only allows us to navigate the document forward.

JAXB: it allows us to navigate the document in both directions, it is more efficient than DOM, it allows conversion from XML to Java types, and it supports XML manipulation, but it can only parse a valid XML document.

You may still find some references to JAXP, but the last release of this project is from March 2013 and it is practically dead.

XML APIs Table

3. The XML

In this section we are going to look at the most popular implementations, so that we can test real working samples and check the differences between them.

In the following examples we will be working with a simple XML file with a structure like this:

<tutorials>
    <tutorial tutId="01" type="java">
        <title>Guava</title>
        <description>Introduction to Guava</description>
        <date>04/04/2016</date>
        <author>GuavaAuthor</author>
    </tutorial>
    ...
</tutorials>

4. DOM4J

We’re going to start by taking a look at what we can do with DOM4J, and for this example we need to add the latest version of this dependency.
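
For reference, the declaration might look like the following (the version shown is illustrative – check Maven Central for the latest):

<dependency>
    <groupId>dom4j</groupId>
    <artifactId>dom4j</artifactId>
    <version>1.6.1</version>
</dependency>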

This is one of the most popular libraries to work with XML files, since it allows us to perform bi-directional reading, create new documents and update existing ones.

DOM4J can work with DOM, SAX, XPath and XSLT. SAX is supported via JAXP.

Let’s take a look, for example, at how we can select an element by filtering on a given id:

SAXReader reader = new SAXReader();
Document document = reader.read(file);
List<Node> elements = document.selectNodes("//*[@tutId='" + id + "']");
return elements.get(0);

The SAXReader class is responsible for creating a DOM4J tree from SAX parsing events. Once we have a org.dom4j.Document we just need to call the necessary method and pass to it the XPath expression as a String.

We can load an existing document, make changes to its content and then update the original file.

for (Node node : nodes) {
    Element element = (Element)node;
    Iterator<Element> iterator = element.elementIterator("title");
    while (iterator.hasNext()) {
        Element title =(Element)iterator.next();
        title.setText(title.getText() + " updated");
    }
}
XMLWriter writer = new XMLWriter(
  new FileWriter(new File("src/test/resources/example_updated.xml")));
writer.write(document);
writer.close();

In the example above, we are changing every title’s content and creating a new file.

Notice here how simple it is to get every title node in a list by calling elementIterator and passing the name of the node.

Once we have our content modified, we will use the XMLWriter that takes a DOM4J tree and formats it to a stream as XML.

Creating a new document from scratch is as simple as we see below:

Document document = DocumentHelper.createDocument();
Element root = document.addElement("XMLTutorials");
Element tutorialElement = root.addElement("tutorial").addAttribute("tutId", "01");
tutorialElement.addAttribute("type", "xml");
tutorialElement.addElement("title").addText("XML with Dom4J");
...
OutputFormat format = OutputFormat.createPrettyPrint();
XMLWriter writer = new XMLWriter(
  new FileWriter(new File("src/test/resources/example_new.xml")), format);
writer.write(document);
writer.close();

DocumentHelper provides us with a collection of methods used by DOM4J, such as createDocument, which creates an empty document to start working with.

We can create as many attributes or elements as we need with the methods provided by DOM4J, and once we have our document completed we just write it to a file as we did with the update case before.

5. JDOM

In order to work with JDOM, we have to add this dependency to our pom.
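
For reference, that declaration might look like this (check Maven Central for the latest version):

<dependency>
    <groupId>org.jdom</groupId>
    <artifactId>jdom2</artifactId>
    <version>2.0.6</version>
</dependency>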

JDOM’s working style is pretty similar to DOM4J’s, so we are going to take a look at just a couple of examples:

SAXBuilder builder = new SAXBuilder();
Document doc = builder.build(this.getFile());
Element tutorials = doc.getRootElement();
List<Element> titles = tutorials.getChildren("tutorial");

In the example above, we are retrieving all elements from the root element in a very simple way as we can do with DOM4J:

SAXBuilder builder = new SAXBuilder();
Document document = (Document) builder.build(file);
String filter = "//*[@tutId='" + id + "']";
XPathFactory xFactory = XPathFactory.instance();
XPathExpression<Element> expr = xFactory.compile(filter, Filters.element());
List<Element> node = expr.evaluate(document);

Again, here in the code above, we have a SAXBuilder creating a Document instance from a given file. We are retrieving an element by its tutId attribute by passing an XPath expression to the XPathFactory provided by JDOM2.

6. StAX

Now we are going to see how we can retrieve all elements from our root element using the StAX API. StAX has been included in the JDK since Java 6, so you don’t need to add any dependencies.

Firstly, we need to create a Tutorial class:

public class Tutorial {
    private String tutId;
    private String type;
    private String title;
    private String description;
    private String date;
    private String author;
    
    // standard getters and setters
}

and then we are ready to follow with:

List<Tutorial> tutorials = new ArrayList<>();
XMLInputFactory factory = XMLInputFactory.newInstance();
XMLEventReader eventReader = factory.createXMLEventReader(new FileReader(this.getFile()));
Tutorial current;
while (eventReader.hasNext()) {
    XMLEvent event = eventReader.nextEvent();
    switch (event.getEventType()) {
        case XMLStreamConstants.START_ELEMENT:
            StartElement startElement = event.asStartElement();
            String qName = startElement.getName().getLocalPart();
            ...
            break;
        case XMLStreamConstants.CHARACTERS:
            Characters characters = event.asCharacters();
            ...
            break;
        case XMLStreamConstants.END_ELEMENT:
            EndElement endElement = event.asEndElement();
            
            // check if we found the closing element
            // close resources that need to be explicitly closed
            break;
    }
}

In the example above, in order to help us retrieve the information, we needed to create a class to store the retrieved data in.

To read the document, we declared what are called event handlers, and we used them to navigate our document forward. Remember that SAX-style implementations don’t provide bi-directional navigation. As you can see here, a lot of work needs to be done just to retrieve a simple list of elements.

7. JAXB

JAXB is included with the JDK, so initially we might think that we don’t need to add any dependency to our project, but this is not completely true: JAXB needs XercesImpl to work correctly, so we should add this dependency to make it work.

It’s very simple to load, create and manipulate information from an XML file using JAXB.

We just need to create the correct Java entities to bind the XML, and that’s it:

JAXBContext jaxbContext = JAXBContext.newInstance(Tutorials.class);
Unmarshaller jaxbUnmarshaller = jaxbContext.createUnmarshaller();
Tutorials tutorials = (Tutorials) jaxbUnmarshaller.unmarshal(this.getFile());

In the example above, we load our XML file into our object, and from there we can handle everything as a normal Java structure.

Creating a new document is as simple as reading one, just the other way around, as done in the code below.

Firstly, we are going to modify our Tutorial class to add JAXB annotations to getters and setters:

public class Tutorial {
    ...
    
    public String getTutId() {
        return tutId;
    }
	
    @XmlAttribute
    public void setTutId(String tutId) {
        this.tutId = tutId;
    }
    ...
    @XmlElement
    public void setTitle(String title) {
        this.title = title;
    }
    ...
}

@XmlRootElement
public class Tutorials {
    private List<Tutorial> tutorial;

    // standard getters and setters with @XmlElement annotation
}

With @XmlRootElement we define what object is going to represent the root node of our document and then we use @XmlAttribute or @XmlElement to define whether that attribute represents an attribute of a node or an element of the document.

Then we can follow with:

Tutorials tutorials = new Tutorials();
tutorials.setTutorial(new ArrayList<>());
Tutorial tut = new Tutorial();
tut.setTutId("01");
...
tutorials.getTutorial().add(tut);
JAXBContext jaxbContext = JAXBContext.newInstance(Tutorials.class);
Marshaller jaxbMarshaller = jaxbContext.createMarshaller();
jaxbMarshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
jaxbMarshaller.marshal(tutorials, file);

As you can see, binding an XML file to Java objects is the easiest way to work with this kind of file.

8. XPath Expression Support

To create complex XPath expressions, we can use Jaxen. This is an open source XPath library adaptable to many different object models, including DOM, XOM, DOM4J, and JDOM.

We can create XPath expressions and compile them against many supported documents.

String expression = "/tutorials/tutorial";
XPath path = new DOMXPath(expression);
List result = path.selectNodes(xmlDocument);

To make it work, we’ll need to add this dependency to our project.
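
For reference (the version shown is illustrative – check Maven Central for the latest):

<dependency>
    <groupId>jaxen</groupId>
    <artifactId>jaxen</artifactId>
    <version>1.1.6</version>
</dependency>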

9. Conclusion

As you can see, there are many options for working with XML. Depending on the requirements of your application, you could work with any of them, or you may have to choose between efficiency and simplicity.

You can find the full working samples for this article in our git repository here.

Keep Track of Logged In Users with Spring Security


I just released the Starter Class of "Learn Spring Security":

>> CHECK OUT THE COURSE

1. Overview

In this quick tutorial, we’re going to show an example of how we can track the currently logged in users in an application using Spring Security.

For this purpose, we’re going to keep track of a list of logged in users by adding the user when they log in and removing them when they log out.

We’ll leverage HttpSessionBindingListener to update the list of logged-in users whenever user information is added to or removed from the session, i.e. whenever a user logs into or out of the system.

2. Active User Store

For simplicity, we will define a class that acts as an in-memory store for the logged-in users:

public class ActiveUserStore {

    public List<String> users;

    public ActiveUserStore() {
        users = new ArrayList<String>();
    }

    // standard getter and setter
}

We’ll define this as a standard bean in the Spring context:

@Bean
public ActiveUserStore activeUserStore(){
    return new ActiveUserStore();
}

3. The HttpSessionBindingListener

Now, we’re going to make use of the HttpSessionBindingListener interface and create a wrapper class to represent a user that is currently logged in.

This will basically listen to events of type HttpSessionBindingEvent, which are triggered whenever a value is bound to or unbound from the HTTP session:

@Component
public class LoggedUser implements HttpSessionBindingListener {

    private String username; 
    private ActiveUserStore activeUserStore;
    
    public LoggedUser(String username, ActiveUserStore activeUserStore) {
        this.username = username;
        this.activeUserStore = activeUserStore;
    }
    
    public LoggedUser() {}

    @Override
    public void valueBound(HttpSessionBindingEvent event) {
        List<String> users = activeUserStore.getUsers();
        LoggedUser user = (LoggedUser) event.getValue();
        if (!users.contains(user.getUsername())) {
            users.add(user.getUsername());
        }
    }

    @Override
    public void valueUnbound(HttpSessionBindingEvent event) {
        List<String> users = activeUserStore.getUsers();
        LoggedUser user = (LoggedUser) event.getValue();
        if (users.contains(user.getUsername())) {
            users.remove(user.getUsername());
        }
    }

    // standard getter and setter
}

The listener has two methods that need to be implemented, valueBound() and valueUnbound() for the two types of actions that trigger the event it is listening for. Whenever a value of the type that implements the listener is set or removed from the session, or the session is invalidated, these two methods will be invoked.

In our case, the valueBound() method will be called when the user logs in and the valueUnbound() method will be called when the user logs out or when the session expires.

In each of the methods we retrieve the value associated with the event, then add or remove the username from our list of logged in users, depending on whether the value was bound or unbound from the session.

4. Tracking Login and Logout

Now we need to keep track of when the user successfully logs in or logs out, so that we can add the user to or remove them from the session. In a Spring Security application, this can be achieved by implementing the AuthenticationSuccessHandler and LogoutSuccessHandler interfaces.

4.1. Implementing AuthenticationSuccessHandler

For the login action, we will set the username of the user logging in as an attribute on the session by overriding the onAuthenticationSuccess() method which provides us access to the session and authentication objects:

@Component("myAuthenticationSuccessHandler")
public class MySimpleUrlAuthenticationSuccessHandler implements AuthenticationSuccessHandler {

    @Autowired
    ActiveUserStore activeUserStore;
    
    @Override
    public void onAuthenticationSuccess(HttpServletRequest request, 
      HttpServletResponse response, Authentication authentication) 
      throws IOException {
        HttpSession session = request.getSession(false);
        if (session != null) {
            LoggedUser user = new LoggedUser(authentication.getName(), activeUserStore);
            session.setAttribute("user", user);
        }
    }
}

4.2. Implementing LogoutSuccessHandler

For the logout action, we will remove the user attribute by overriding the onLogoutSuccess() method of the LogoutSuccessHandler interface:

@Component("myLogoutSuccessHandler")
public class MyLogoutSuccessHandler implements LogoutSuccessHandler{
    @Override
    public void onLogoutSuccess(HttpServletRequest request, 
      HttpServletResponse response, Authentication authentication)
      throws IOException, ServletException {
        HttpSession session = request.getSession();
        if (session != null){
            session.removeAttribute("user");
        }
    }
}
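
Note that both handlers still have to be registered with Spring Security. As a minimal, illustrative Java-config sketch (assuming the two handler beans above are picked up by component scanning), the wiring might look like this:

@Configuration
@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Autowired
    private AuthenticationSuccessHandler myAuthenticationSuccessHandler;

    @Autowired
    private LogoutSuccessHandler myLogoutSuccessHandler;

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests()
            .anyRequest().authenticated()
            .and()
            // register the custom success handler for form login
            .formLogin().successHandler(myAuthenticationSuccessHandler)
            .and()
            // register the custom logout success handler
            .logout().logoutSuccessHandler(myLogoutSuccessHandler);
    }
}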

5. Controller and View

In order to see all of the above in action, we will create a controller mapping for the url “/loggedUsers” that will retrieve the list of users, add it as a model attribute and return the users.html view:

5.1. Controller

@Controller
public class UserController {
    
    @Autowired
    ActiveUserStore activeUserStore;

    @RequestMapping(value = "/loggedUsers", method = RequestMethod.GET)
    public String getLoggedUsers(Locale locale, Model model) {
        model.addAttribute("users", activeUserStore.getUsers());
        return "users";
    }
}

5.2. users.html

<html>
<body>
    <h2>Currently logged in users</h2>
    <div th:each="user : ${users}">
        <p th:text="${user}">user</p>
    </div>
</body>
</html>

6. Conclusion

In this article, we have demonstrated how we can determine who the currently logged in users are in a Spring Security application.

The implementation of this tutorial can be found in the github project – this is a Maven-based project, so it should be easy to import and run as it is.

Get the early-bird price (20% Off) of my upcoming "Learn Spring Security" Course:

>> CHECK OUT THE COURSE

Java Web Weekly, Issue 131


I just released the Starter Class of "Learn Spring Security":

>> CHECK OUT THE COURSE

At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> Java 9 Additions To Optional [codefx.org]

Some interesting stuff is definitely coming to Optional in the JDK.

>> 5 Common Hibernate Exceptions and How to Fix Them [takipi.com]

I like to go through these exception focused articles – they usually have new insights I can glean for when I do get the exception.

>> Managing Secrets with Vault [spring.io]

Storing secret configuration data is almost always an important thing to get right in the overall architecture of a system.

It’s also one of the most common questions I get from readers when it comes to project configuration. So this writeup is an interesting solution to that question. Not the only solution, but certainly an interesting one.

>> Turn Around. Don’t Use JPA’s loadgraph and fetchgraph Hints. Use SQL Instead. [jooq.org]

A different perspective on picking the persistence solution of your next greenfield project, talking about preferring plain SQL over something higher level such as JPA.

>> 14 High-Performance Java Persistence Tips [vladmihalcea.com]

Some low-hanging fruit (and not so low-hanging) to improve the performance of a Hibernate implementation.

>> “Micro Profile in Enterprise Java” Announced ! [antoniogoncalves.org] and >> The Enterprise Java Future Is Bright: Java EE 8 MicroProfile Launched [adam-bien.com]

Big announcements in the Java EE world (seems like every week now).

>> Close Encounters of The Java Memory Model Kind [shipilev.net]

A fantastic deep-dive into the JMM (still reading through it now). Definitely one to bookmark.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

>> Code Review and How Enterprises Can Miss The Point [daedtech.com]

An insightful analysis of the motivations of different players in a large organization when it comes to code reviews and to getting something useful out of the practice. Well worth reading.

>> How I prepared for the NDC keynote (and other speaker tips) [troyhunt.com]

Some solid, to the point advice on speaking well.

I feel that speaking is a life-long journey and there’s always a lot to learn. And delivering a good presentation is such an important skill that it really makes sense to spend time and learn how to do it well, as much as possible.

>> Learning a Healthy Fear of Legacy Code [daedtech.com]

Here be dragons.

>> Expanding the Cloud: Introducing the AWS Asia Pacific (Mumbai) Region [allthingsdistributed.com]

Yeah, one more region to play with, after Frankfurt.

>> Special Skills [dandreamsofcoding.com]

There’s a time to study the foundations and there’s a time to specialize. And while foundations are important, specialization and niching down are more and more critical today.

>> Jepsen: Crate 0.54.9 version divergence [aphyr.com]

Who knew that the Elasticsearch data consistency problems (which are quite real) would go beyond the core product and spread to other solutions as well. It’s not that surprising though.

>> Amazon Elastic File System – Production-Ready in Three Regions [aws.amazon.com]

>> Elastic Network Adapter – High Performance Network Interface for Amazon EC2 [aws.amazon.com]

Two important announcements of new AWS goodness in a single week.

Also worth reading:

3. Comics

And my favorite Dilberts of the week:

>> Nothing about you is normal [dilbert.com]

>> Two good ways to avoid listening to others [dilbert.com]

>> Did someone tell you Twitter was a video game? [dilbert.com]

4. Pick of the Week

>> This I Believe – 25 Thoughts for Life [conversionxl.com]

Get the early-bird price (20% Off) of my upcoming "Learn Spring Security" Course:

>> CHECK OUT THE COURSE

Introduction to the Java 8 Date/Time API


I just released the Starter Class of "Learn Spring Security":

>> CHECK OUT THE COURSE

1. Overview

Java 8 introduced new APIs for Date and Time to address the shortcomings of the older java.util.Date and java.util.Calendar.

As part of this article, let’s start with the issues in the existing Date and Calendar APIs and discuss how the new Java 8 Date and Time APIs address them.

We will also look at some of the core classes of the new Java 8 API that are part of the java.time package, like LocalDate, LocalTime, LocalDateTime, ZonedDateTime, Period, Duration and their supported APIs.

2. Issues with the Existing Date/Time APIs

  • Thread Safety – The Date and Calendar classes are not thread-safe, leaving developers to deal with the headache of hard-to-debug concurrency issues and to write additional code to handle thread safety. The new Date and Time APIs introduced in Java 8, on the contrary, are immutable and thread-safe, taking that concurrency headache away from developers (see the sketch after this list).
  • API Design and Ease of Understanding – The Date and Calendar APIs are poorly designed, with inadequate methods to perform day-to-day operations. The new Date/Time API is ISO-centric and follows consistent domain models for dates, times, durations and periods. There is a wide variety of utility methods that support the most common operations.
  • ZonedDate and Time – Developers had to write additional logic to handle time zones with the old APIs, whereas with the new APIs, handling of time zones can be done with the Local and ZonedDate/Time APIs.
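
As a small sketch of the thread-safety point mentioned above, an immutable DateTimeFormatter can be stored and shared freely between threads, which was never safe with SimpleDateFormat:

// immutable and thread-safe – can be kept in a constant and shared freely,
// unlike the old SimpleDateFormat
DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd");
String formatted = LocalDate.now().format(formatter);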

3. Using LocalDate, LocalTime and LocalDateTime

The most commonly used classes are LocalDate, LocalTime and LocalDateTime. As their names indicate, they represent the local date/time from the context of the observer.

These classes are mainly used when a time zone is not required to be explicitly specified in the context. As part of this section, we will cover the most commonly used APIs.

3.1. Working with LocalDate

The LocalDate represents a date in ISO format (yyyy-MM-dd) without time.

It can be used to store dates like birthdays and paydays.

An instance of the current date can be created from the system clock as below:

LocalDate localDate = LocalDate.now();

A LocalDate representing a specific day, month and year can be obtained using the of method or the parse method. For example, the below code snippets represent the LocalDate for 20 February 2015:

LocalDate.of(2015, 02, 20);

LocalDate.parse("2015-02-20");

LocalDate provides various utility methods to obtain a variety of information. Let’s have a quick peek at some of these API methods.

The following code snippet gets the current local date and adds one day:

LocalDate tomorrow = LocalDate.now().plusDays(1);

This example obtains the current date and subtracts one month. Note how it accepts an enum as the time unit:

LocalDate previousMonthSameDay = LocalDate.now().minus(1, ChronoUnit.MONTHS);

In the following two code examples we parse the date “2016-06-12” and get the day of the week and the day of the month respectively. Note the return values: the first is an object representing the DayOfWeek, while the second is an int representing the ordinal value of the month:

DayOfWeek sunday = LocalDate.parse("2016-06-12").getDayOfWeek();

int twelve = LocalDate.parse("2016-06-12").getDayOfMonth();

We can test whether a date occurs in a leap year. In this example we test whether the current date occurs in a leap year:

boolean leapYear = LocalDate.now().isLeapYear();

We can also determine the relationship of one date to another, i.e. whether it occurs before or after another date:

boolean notBefore = LocalDate.parse("2016-06-12")
  .isBefore(LocalDate.parse("2016-06-11"));

boolean isAfter = LocalDate.parse("2016-06-12").isAfter(LocalDate.parse("2016-06-11"));

Date boundaries can be obtained from a given date. In the following two examples we get the LocalDateTime that represents the beginning of the day (2016-06-12T00:00) of the given date and the LocalDate that represents the beginning of the month (2016-06-01) respectively:

LocalDateTime beginningOfDay = LocalDate.parse("2016-06-12").atStartOfDay();
LocalDate firstDayOfMonth = LocalDate.parse("2016-06-12")
  .with(TemporalAdjusters.firstDayOfMonth());

Now let’s have a look at how we work with local time.

3.2. Working with LocalTime

The LocalTime represents time without a date.

Similar to LocalDate, an instance of LocalTime can be created from the system clock or by using the parse and of methods. Here’s a quick look at some of the commonly used APIs below.

An instance of current LocalTime can be created from the system clock as below:

LocalTime now = LocalTime.now();

In the below code sample, we create a LocalTime representing 06:30 AM by parsing a string representation:

LocalTime sixThirty = LocalTime.parse("06:30");

The factory method of can be used to create a LocalTime. For example, the below code creates a LocalTime representing 06:30 AM using the factory method:

LocalTime sixThirty = LocalTime.of(6, 30);

The below example creates a LocalTime by parsing a string and adds an hour to it using the plus API. The result is a LocalTime representing 07:30 AM:

LocalTime sevenThirty = LocalTime.parse("06:30").plus(1, ChronoUnit.HOURS);

Various getter methods are available to obtain specific units of time, like hour, minute and second, as below:

int six = LocalTime.parse("06:30").getHour();

We can also check whether a specific time is before or after another specific time. The below code sample compares two LocalTime instances, for which the result would be true:

boolean isbefore = LocalTime.parse("06:30").isBefore(LocalTime.parse("07:30"));

The max, min and noon times of a day can be obtained via constants in the LocalTime class. This is very useful when performing database queries to find records within a given span of time. For example, the below code represents 23:59:59.999999999:

LocalTime maxTime = LocalTime.MAX;

Now let’s dive into LocalDateTime.

3.3. Working with LocalDateTime

The LocalDateTime is used to represent a combination of date and time.

This is the most commonly used class when we need a combination of date and time. The class offers a variety of APIs and we will look at some of the most commonly used ones.

An instance of LocalDateTime can be obtained from the system clock similar to LocalDate and LocalTime:

LocalDateTime.now();

The below code samples explain how to create an instance using the factory “of” and “parse” methods. The result would be a LocalDateTime instance representing 20 February 2015, 06:30 AM:

LocalDateTime.of(2015, Month.FEBRUARY, 20, 06, 30);
LocalDateTime.parse("2015-02-20T06:30:00");

There are utility APIs that support the addition and subtraction of specific units of time, like days, months, years and minutes. The below code samples demonstrate the usage of the plus and minus methods. These APIs behave exactly like their counterparts in LocalDate and LocalTime:

localDateTime.plusDays(1);
localDateTime.minusHours(2);

Getter methods are available to extract specific units, similar to the date and time classes. Given the above instance of LocalDateTime, the below code sample returns the month February:

localDateTime.getMonth();

4. Using ZonedDateTime API

Java 8 provides ZonedDateTime for when we need to deal with time-zone-specific dates and times. ZoneId is an identifier used to represent different zones; there are several hundred region-based zone IDs, and they are used as follows.

In this code snippet we create a Zone for Paris:

ZoneId zoneId = ZoneId.of("Europe/Paris");

A set of all zone IDs can be obtained as below:

Set<String> allZoneIds = ZoneId.getAvailableZoneIds();

The LocalDateTime can be converted to a specific zone:

ZonedDateTime zonedDateTime = ZonedDateTime.of(localDateTime, zoneId);

ZonedDateTime provides a parse method to obtain a time-zone-specific date-time:

ZonedDateTime.parse("2015-05-03T10:15:30+01:00[Europe/Paris]");

Another way to work with time zones is by using OffsetDateTime. OffsetDateTime is an immutable representation of a date-time with an offset. This class stores all date and time fields, to a precision of nanoseconds, as well as the offset from UTC/Greenwich.

An OffsetDateTime instance can be created as below using a ZoneOffset. Here we create a LocalDateTime representing 6:30 AM on 20 February 2015:

LocalDateTime localDateTime = LocalDateTime.of(2015, Month.FEBRUARY, 20, 06, 30);

Then we create a ZoneOffset of two hours and combine it with the localDateTime instance:

ZoneOffset offset = ZoneOffset.of("+02:00");

OffsetDateTime offSetByTwo = OffsetDateTime
  .of(localDateTime, offset);

We now have an OffsetDateTime of 2015-02-20T06:30+02:00. Now let’s move on to how to modify date and time values using the Period and Duration classes.

5. Using Period and Duration

The Period class represents a quantity of time in terms of years, months and days and the Duration class represents a quantity of time in terms of seconds and nano seconds.

5.1. Working with Period

The Period class is widely used to modify the values of a given date or to obtain the difference between two dates:

LocalDate initialDate = LocalDate.parse("2007-05-10");

The Date can be manipulated using Period as shown in the following code snippet:

LocalDate finalDate = initialDate.plus(Period.ofDays(5));

The Period class has various getter methods such as getYears, getMonths and getDays to get values from a Period object. The below code example returns an int value of 5, as we get the difference between the two dates in days:

int five = Period.between(initialDate, finalDate).getDays();

The period between two dates can also be obtained in a specific unit such as days, months or years, using ChronoUnit.between:

long five = ChronoUnit.DAYS.between(initialDate, finalDate);

This code example returns five days. Let’s continue by taking a look at the Duration class.

5.2. Working with Duration

Similar to Period, the Duration class is used to deal with time. In the following code we create a LocalTime of 6:30 AM and then add a duration of 30 seconds to get a LocalTime of 06:30:30 AM:

LocalTime initialTime = LocalTime.of(6, 30, 0);

LocalTime finalTime = initialTime.plus(Duration.ofSeconds(30));

The Duration between two instants can be obtained either as a Duration or as a specific unit. In the first code snippet we use the between() method of the Duration class to find the time difference between initialTime and finalTime and return the difference in seconds:

long thirty = Duration.between(initialTime, finalTime).getSeconds();

In the second example we use the between() method of the ChronoUnit class to perform the same operation:

long thirty = ChronoUnit.SECONDS.between(initialTime, finalTime);

Now we will look at how to convert existing Date and Calendar to new Date/Time.

6. Compatibility with Date and Calendar

Java 8 has added the toInstant() method, which helps convert existing Date and Calendar instances to the new Date/Time API, as in the following code snippet:

LocalDateTime.ofInstant(date.toInstant(), ZoneId.systemDefault());
LocalDateTime.ofInstant(calendar.toInstant(), ZoneId.systemDefault());
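
Going the other way is just as simple; as a quick sketch, assuming the default system zone, a LocalDateTime can be converted back to a legacy Date via an Instant:

// Date.from(Instant) was added in Java 8 for exactly this purpose
Date legacyDate = Date.from(
  localDateTime.atZone(ZoneId.systemDefault()).toInstant());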

The LocalDateTime can also be constructed from epoch seconds, as below. The result of the below code would be a LocalDateTime representing 2016-06-13T11:34:50:

LocalDateTime.ofEpochSecond(1465817690, 0, ZoneOffset.UTC);

Now let’s move on to Date and Time formatting.

7. Date and Time Formatting

Java 8 provides APIs for the easy formatting of Date and Time:

LocalDateTime localDateTime = LocalDateTime.of(2015, Month.JANUARY, 25, 6, 30);

The below code passes an ISO date format to format the local date. The result would be 2015-01-25:

String localDateString = localDateTime.format(DateTimeFormatter.ISO_DATE);

The DateTimeFormatter provides various standard formatting options. Custom patterns can be provided to the format method as well, like below, which would return the date as the String 2015/01/25:

localDateTime.format(DateTimeFormatter.ofPattern("yyyy/MM/dd"));

We can pass in a formatting style of SHORT, LONG or MEDIUM as part of the formatting options. The below code sample would give an output representing the LocalDateTime as 25-Jan-2015 06:30:00:

localDateTime.format(
  DateTimeFormatter.ofLocalizedDateTime(FormatStyle.MEDIUM)
    .withLocale(Locale.UK));

Let’s take a look at the alternatives available to the core Java 8 Date/Time APIs.

8. Backport and Alternate Options

8.1. Using Project Threeten

For organizations that are on the path of moving to Java 8 from Java 7 or Java 6 and want to use the date and time API, the ThreeTen project provides backport capability. Developers can use the classes available in this project to achieve the same functionality as that of the new Java 8 Date and Time API, and once they move to Java 8, the packages can be switched. The artifact for the ThreeTen project can be found in the Maven Central repository:

<dependency>
    <groupId>org.threeten</groupId>
    <artifactId>threetenbp</artifactId>
    <version>1.3.1</version>
</dependency>
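
As a minimal sketch of how the backport is used on Java 6/7, assuming the dependency above: the classes mirror java.time, and only the package name differs.

import org.threeten.bp.LocalDate;

// identical to the java.time API, just a different package
LocalDate date = LocalDate.parse("2015-02-20");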

8.2. Joda-Time Library

Another alternative to the Java 8 Date and Time library is the Joda-Time library. In fact, the Java 8 Date/Time API effort was led jointly by the author of the Joda-Time library (Stephen Colebourne) and Oracle. This library provides pretty much all of the capabilities supported by the Java 8 Date/Time project. The artifact can be found in Maven Central by including the below pom dependency in your project:

<dependency>
    <groupId>joda-time</groupId>
    <artifactId>joda-time</artifactId>
    <version>2.9.4</version>
</dependency>
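
And a minimal sketch of the equivalent Joda-Time usage, assuming the dependency above:

import org.joda.time.DateTime;

// 20 February 2015, 06:30 AM in Joda-Time
DateTime dateTime = new DateTime(2015, 2, 20, 6, 30);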

9. Conclusion

Java 8 provides a rich set of APIs with a consistent API design that makes development easier.

The code samples for the above article can be found in the Java 8 Date/Time GitHub repository.

Introduction to Spring Data Neo4j

1. Overview

This article is an introduction to Spring Data Neo4j, the Spring module for the popular Neo4j graph database.

Spring Data Neo4j enables POJO-based development for the Neo4j graph database, using familiar Spring concepts such as template classes for core API usage, and provides an annotation-based programming model.

Also, a lot of developers don’t really know if Neo4j will actually be a good match for their specific needs; here’s a solid overview on Stackoverflow discussing why to use Neo4j and the pros and cons.

2. Maven Dependencies

Let’s start by declaring the Spring Data Neo4j dependencies in the pom.xml. The below-mentioned modules are also required for Spring Data Neo4j:

<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-neo4j</artifactId>
    <version>${spring-data-neo4j.version}</version>
</dependency>
<dependency>
    <groupId>org.neo4j</groupId>
    <artifactId>neo4j-ogm-test</artifactId>
    <version>${neo4j-ogm-test.version}</version>
    <scope>test</scope>
</dependency>

These dependencies also include the required modules for testing, along with an embedded server.

Note that the last dependency is scoped as ‘test’. But also note that, in real-world application development, you’re more likely to have a full Neo4j server running.

3. Neo4j Configuration

The Neo4j configuration is very straightforward and defines the connection settings for the application to connect to the server. Similar to most of the other Spring Data modules, this is a Spring configuration that can be defined as XML or Java configuration.

In this tutorial, we’ll use Java-based configuration only:

public static final String URL = 
  System.getenv("NEO4J_URL") != null ? 
  System.getenv("NEO4J_URL") : "http://neo4j:movies@localhost:7474";

@Bean
public org.neo4j.ogm.config.Configuration getConfiguration() {
    org.neo4j.ogm.config.Configuration config = new org.neo4j.ogm.config.Configuration();
    config.driverConfiguration().setDriverClassName(
      "org.neo4j.ogm.drivers.http.driver.HttpDriver").setURI(URL);
    return config;
}

@Override
public SessionFactory getSessionFactory() {
    return new SessionFactory(getConfiguration(), 
      "com.baeldung.spring.data.neo4j.domain");
}

As mentioned above, the config is simple and contains only two settings. First, the SessionFactory references the models that we created to represent the data objects. Second, the connection properties define the server endpoint and access credentials.

Please note that in this example, the connection-related properties are configured directly in the code; however, in a production application, these should be properly externalized and part of the standard configuration of the project.

4. Neo4j Repositories

Aligning with the Spring Data framework, Neo4j supports the Spring Data repository abstraction. That means access to the underlying persistence mechanism is abstracted away by the built-in GraphRepository, which a simple project can directly extend to use the provided operations out of the box.

The repositories are extensible with annotated, named or derived finder methods. Spring Data Neo4j repositories are also based on Neo4jTemplate, so the underlying functionality is identical.

4.1. Creating the MovieRepository and PersonRepository

We use two repositories in this tutorial for data persistence:

@Repository
public interface MovieRepository extends GraphRepository<Movie> {

    Movie findByTitle(@Param("title") String title);

    @Query("MATCH (m:Movie) WHERE m.title =~ ('(?i).*'+{title}+'.*') RETURN m")
    Collection<Movie> 
      findByTitleContaining(@Param("title") String title);

    @Query("MATCH (m:Movie)<-[:ACTED_IN]-(a:Person) 
      RETURN m.title as movie, collect(a.name) as cast LIMIT {limit}")
    List<Map<String,Object>> graph(@Param("limit") int limit);
}

As you can see, the repository contains some custom operations as well as the standard ones inherited from the base class.

Next, we have the simpler PersonRepository, which just has the standard operations:

@Repository
public interface PersonRepository extends GraphRepository<Person> {
    //
}

You may have already noticed that PersonRepository is just a standard Spring Data interface. That is because, in this simple example, the built-in operations are almost sufficient, since our operation set is related to the Movie entity. However, you can always add custom operations here, which may wrap one or more built-in operations, as in the sketch below.
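
For illustration only, a hypothetical derived finder (not part of the sample project) could look like this; Spring Data Neo4j derives the query from the method name:

@Repository
public interface PersonRepository extends GraphRepository<Person> {

    // hypothetical derived finder: the query is built from the method name
    Person findByName(String name);
}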

4.2. Configuring Neo4jRepositories

As the next step, we have to let Spring know the relevant repositories by indicating them in the Neo4jConfiguration class created in section 3:

@Configuration
@ComponentScan("com.baeldung.spring.data.neo4j")
@EnableNeo4jRepositories(
  basePackages = "com.baeldung.spring.data.neo4j.repostory")
public class LibraryNeo4jConfiguration extends Neo4jConfiguration {
    //
}

5. The Full Data Model

We already started looking at the data model, so let’s now lay it all out – the full Movie, Role and Person. The Person entity references the Movie entity through the Role relationship.

@NodeEntity
public class Movie {

    @GraphId
    Long id;

    private String title;

    private int released;

    private String tagline;

    @Relationship(type="ACTED_IN", direction = Relationship.INCOMING)

    private List<Role> roles;

    // standard constructor, getters and setters 
}

Notice how we’ve annotated Movie with @NodeEntity to indicate that this class is directly mapped to a node in Neo4j.

@JsonIdentityInfo(generator=JSOGGenerator.class)
@NodeEntity
public class Person {

    @GraphId
    Long id;

    private String name;

    private int born;

    @Relationship(type = "ACTED_IN")
    private List<Movie> movies;

    // standard constructor, getters and setters 
}

@JsonIdentityInfo(generator=JSOGGenerator.class)
@RelationshipEntity(type = "ACTED_IN")
public class Role {

    @GraphId
    Long id;

    private Collection<String> roles;

    @StartNode
    private Person person;

    @EndNode
    private Movie movie;

    // standard constructor, getters and setters 
}

Of course, these last couple of classes are similarly annotated, and the movies reference links the Person to the Movie class by the “ACTED_IN” relationship.

6. Data Access using MovieRepository

6.1. Saving a New Movie Object

Let’s save some data – first, a new Movie, then a Person and of course a Role – including all the relation data we have as well:

Movie italianJob = new Movie();
italianJob.setTitle("The Italian Job");
italianJob.setReleased(1999);
movieRepository.save(italianJob);

Person mark = new Person();
mark.setName("Mark Wahlberg");
personRepository.save(mark);

Role charlie = new Role();
charlie.setMovie(italianJob);
charlie.setPerson(mark);
Collection<String> roleNames = new HashSet<>();
roleNames.add("Charlie Croker");
charlie.setRoles(roleNames);
List<Role> roles = new ArrayList<>();
roles.add(charlie);
italianJob.setRoles(roles);
movieRepository.save(italianJob);

6.2. Retrieving an Existing Movie Object by Title

Let’s now verify the inserted movie by retrieving it using the defined title, which is a custom operation:

Movie result = movieRepository.findByTitle(title);

6.3. Retrieving an Existing Movie Object by a Part of the Title

It is possible to search for an existing movie using just a part of its title:

Collection<Movie> result = movieRepository.findByTitleContaining("Italian");

6.4. Retrieving All the Movies

All the movies can be retrieved at once and checked for the correct count:

Collection<Movie> result = (Collection<Movie>) movieRepository.findAll();

However, there are a number of find methods provided with default behavior that are useful for custom requirements; not all of them are described here.

6.5. Counting the Existing Movie Objects

After inserting several movie objects, we can get the existing movie count:

long movieCount = movieRepository.count();

6.6. Deleting an Existing Movie

movieRepository.delete(movieRepository.findByTitle("The Italian Job"));

After deleting the inserted movie, we can search for the movie object and verify the result is null:

assertNull(movieRepository.findByTitle("The Italian Job"));

6.7. Deleting All the Inserted Data

It is possible to delete all the elements in the database, leaving it empty:

movieRepository.deleteAll();

This operation quickly removes all data from the database.

7. Conclusion

In this tutorial, we went through the basics of Spring Data Neo4j using a very simple example.

However, Neo4j is capable of catering to very advanced and complex applications with huge sets of relationships and networks. And Spring Data Neo4j also offers advanced features to map annotated entity classes to the Neo4j graph database.

The implementation of the above code snippets and examples can be found in the GitHub project – this is a Maven-based project, so it should be easy to import and run as is.

Testing with Hamcrest

1. Overview

Hamcrest is a well-known framework used for unit testing in the Java ecosystem. It is bundled in JUnit and, simply put, it uses existing predicates – called matcher classes – for making assertions.

In this tutorial, we will explore the Hamcrest API and learn how to take advantage of it to write neater and more intuitive unit tests for our software.

2. Hamcrest Setup

We can use Hamcrest with Maven by adding the following dependency to our pom.xml file:

<dependency>
    <groupId>org.hamcrest</groupId>
    <artifactId>hamcrest-all</artifactId>
    <version>1.3</version>
</dependency>

The latest version of this library can always be found here.

3. An Example Test

Hamcrest is commonly used with JUnit and other testing frameworks for making assertions. Specifically, instead of using JUnit‘s numerous assert methods, we only use the API’s single assertThat statement with appropriate matchers.

Let’s look at an example that tests two Strings for equality regardless of case. This should give us a clear idea of how Hamcrest fits into a testing method:

public class StringMatcherTest {
    
    @Test
    public void given2Strings_whenEqual_thenCorrect() {
        String a = "foo";
        String b = "FOO";
        assertThat(a, equalToIgnoringCase(b));
    }
}

In the following sections we shall take a look at several other common matchers Hamcrest offers.

4. The Object Matcher

Hamcrest provides matchers for making assertions on arbitrary Java objects.

To assert that the toString method of an Object returns a specified String:

@Test
public void givenBean_whenToStringReturnsRequiredString_thenCorrect(){
    Person person = new Person("Barrack", "Washington");
    String str = person.toString();
    assertThat(person, hasToString(str));
}

We can also check that one class is a sub-class of another:

@Test
public void given2Classes_whenOneInheritsFromOther_thenCorrect(){
    assertThat(Cat.class, typeCompatibleWith(Animal.class));
}

5. The Bean Matcher

We can use Hamcrest‘s Bean matcher to inspect properties of a Java bean.

Assume the following Person bean:

public class Person {
    String name;
    String address;

    public Person(String personName, String personAddress) {
        name = personName;
        address = personAddress;
    }
}

We can check if the bean has the property name, like so:

@Test
public void givenBean_whenHasValue_thenCorrect() {
    Person person = new Person("Baeldung", 25);
    assertThat(person, hasProperty("name"));
}

We can also check if Person has the address property, initialized to New York:

@Test
public void givenBean_whenHasCorrectValue_thenCorrect() {
    Person person = new Person("Baeldung", "New York");
    assertThat(person, hasProperty("address", equalTo("New York")));
}

Likewise, we can check whether two Person objects are constructed with the same values:

@Test
public void given2Beans_whenHavingSameValues_thenCorrect() {
    Person person1 = new Person("Baeldung", "New York");
    Person person2 = new Person("Baeldung", "New York");
    assertThat(person1, samePropertyValuesAs(person2));
}

6. The Collection Matcher

Hamcrest provides matchers for inspecting Collections.

Simple check to find out if a Collection is empty:

@Test
public void givenCollection_whenEmpty_thenCorrect() {
    List<String> emptyList = new ArrayList<>();
    assertThat(emptyList, empty());
}

To check the size of a Collection:

@Test
public void givenAList_whenChecksSize_thenCorrect() {
    List<String> hamcrestMatchers = Arrays.asList(
      "collections", "beans", "text", "number");
    assertThat(hamcrestMatchers, hasSize(4));
}

We can also use it to assert that an array has a required size:

@Test
public void givenArray_whenChecksSize_thenCorrect() {
    String[] hamcrestMatchers = { "collections", "beans", "text", "number" };
    assertThat(hamcrestMatchers, arrayWithSize(4));
}

To check if a Collection contains given members, regardless of order:

@Test
public void givenAListAndValues_whenChecksListForGivenValues_thenCorrect() {
    List<String> hamcrestMatchers = Arrays.asList(
      "collections", "beans", "text", "number");
    assertThat(hamcrestMatchers,
      containsInAnyOrder("beans", "text", "collections", "number"));
}

To further assert that the Collection members are in given order:

@Test
public void givenAListAndValues_whenChecksListForGivenValuesWithOrder_thenCorrect() {
    List<String> hamcrestMatchers = Arrays.asList(
      "collections", "beans", "text", "number");
    assertThat(hamcrestMatchers,
      contains("collections", "beans", "text", "number"));
}

To check if an array has a single given element:

@Test
public void givenArrayAndValue_whenValueFoundInArray_thenCorrect() {
    String[] hamcrestMatchers = { "collections", "beans", "text", "number" };
    assertThat(hamcrestMatchers, hasItemInArray("text"));
}

We can also use an alternative matcher for the same test:

@Test
public void givenValueAndArray_whenValueIsOneOfArrayElements_thenCorrect() {
    String[] hamcrestMatchers = { "collections", "beans", "text", "number" };
    assertThat("text", isOneOf(hamcrestMatchers));
}

Or we can do the same with yet another matcher, like so:

@Test
public void givenValueAndArray_whenValueFoundInArray_thenCorrect() {
    String[] array = new String[] { "collections", "beans", "text",
      "number" };
    assertThat("beans", isIn(array));
}

We can also check if the array contains given elements regardless of order:

@Test
public void givenArrayAndValues_whenValuesFoundInArray_thenCorrect() {
    String[] hamcrestMatchers = { "collections", "beans", "text", "number" };
    assertThat(hamcrestMatchers,
      arrayContainingInAnyOrder("beans", "collections", "number", "text"));
}

To check if the array contains given elements but in the given order:

@Test
public void givenArrayAndValues_whenValuesFoundInArrayInOrder_thenCorrect() {
    String[] hamcrestMatchers = { "collections", "beans", "text", "number" };
    assertThat(hamcrestMatchers,
      arrayContaining("collections", "beans", "text", "number"));
}

When we are working with a Map, we can use the following matchers for these respective checks:

To check if it contains a given key:

@Test
public void givenMapAndKey_whenKeyFoundInMap_thenCorrect() {
    Map<String, String> map = new HashMap<>();
    map.put("blogname", "baeldung");
    assertThat(map, hasKey("blogname"));
}

and a given value:

@Test
public void givenMapAndValue_whenValueFoundInMap_thenCorrect() {
    Map<String, String> map = new HashMap<>();
    map.put("blogname", "baeldung");
    assertThat(map, hasValue("baeldung"));
}

and finally a given entry (key, value):

@Test
public void givenMapAndEntry_whenEntryFoundInMap_thenCorrect() {
    Map<String, String> map = new HashMap<>();
    map.put("blogname", "baeldung");
    assertThat(map, hasEntry("blogname", "baeldung"));
}

7. The Number Matcher

The Number matchers are used to perform assertions on variables of the Number class.

To check greaterThan condition:

@Test
public void givenAnInteger_whenGreaterThan0_thenCorrect() {
    assertThat(1, greaterThan(0));
}

To check greaterThan or equalTo condition:

@Test
public void givenAnInteger_whenGreaterThanOrEqTo5_thenCorrect() {
    assertThat(5, greaterThanOrEqualTo(5));
}

To check lessThan condition:

@Test
public void givenAnInteger_whenLessThan0_thenCorrect() {
    assertThat(-1, lessThan(0));
}

To check lessThan or equalTo condition:

@Test
public void givenAnInteger_whenLessThanOrEqTo5_thenCorrect() {
    assertThat(-1, lessThanOrEqualTo(5));
}

To check closeTo condition:

@Test
public void givenADouble_whenCloseTo_thenCorrect() {
    assertThat(1.2, closeTo(1, 0.5));
}

Let’s pay close attention to the last matcher, closeTo. The first argument, the operand, is the value to which the target is compared, and the second argument is the allowable deviation from the operand. This means that if the target lies anywhere between operand - deviation and operand + deviation, the test will pass.

8. The Text Matcher

Assertions on Strings are made easier, neater and more intuitive with Hamcrest‘s text matchers. We are going to take a look at them in this section.

To check if a String is empty:

@Test
public void givenString_whenEmpty_thenCorrect() {
    String str = "";
    assertThat(str, isEmptyString());
}

To check if a String is empty or null:

@Test
public void givenString_whenEmptyOrNull_thenCorrect() {
    String str = null;
    assertThat(str, isEmptyOrNullString());
}

To check for equality of two Strings while ignoring white space:

@Test
public void given2Strings_whenEqualRegardlessWhiteSpace_thenCorrect() {
    String str1 = "text";
    String str2 = " text ";
    assertThat(str1, equalToIgnoringWhiteSpace(str2));
}

We can also check for the presence of one or more sub-strings in a given String in a given order:

@Test
public void givenString_whenContainsGivenSubstring_thenCorrect() {
    String str = "calligraphy";
    assertThat(str, stringContainsInOrder(Arrays.asList("call", "graph")));
}

Finally, we can check for equality of two Strings regardless of case:

@Test
public void given2Strings_whenEqual_thenCorrect() {
    String a = "foo";
    String b = "FOO";
    assertThat(a, equalToIgnoringCase(b));
}

9. The Core API

The Hamcrest core API is to be used by third-party framework providers. However, it offers us some great constructs to make our unit tests more readable and also some core matchers that can be used just as easily.

Readability with the is construct on a matcher:

@Test
public void given2Strings_whenIsEqualRegardlessWhiteSpace_thenCorrect() {
    String str1 = "text";
    String str2 = " text ";
    assertThat(str1, is(equalToIgnoringWhiteSpace(str2)));
}

The is construct on a simple data type:

@Test
public void given2Strings_whenIsEqual_thenCorrect() {
    String str1 = "text";
    String str2 = "text";
    assertThat(str1, is(str2));
}

Negation with the not construct on a matcher:

@Test
public void given2Strings_whenIsNotEqualRegardlessWhiteSpace_thenCorrect() {
    String str1 = "text";
    String str2 = " texts ";
    assertThat(str1, not(equalToIgnoringWhiteSpace(str2)));
}

The not construct on a simple data type:

@Test
public void given2Strings_whenNotEqual_thenCorrect() {
    String str1 = "text";
    String str2 = "texts";
    assertThat(str1, not(str2));
}

Check if a String contains a given sub-string:

@Test
public void givenAStrings_whenContainsAnotherGivenString_thenCorrect() {
    String str1 = "calligraphy";
    String str2 = "call";
    assertThat(str1, containsString(str2));
}

Check if a String starts with a given sub-string:

@Test
public void givenAString_whenStartsWithAnotherGivenString_thenCorrect() {
    String str1 = "calligraphy";
    String str2 = "call";
    assertThat(str1, startsWith(str2));
}

Check if a String ends with a given sub-string:

@Test
public void givenAString_whenEndsWithAnotherGivenString_thenCorrect() {
    String str1 = "calligraphy";
    String str2 = "phy";
    assertThat(str1, endsWith(str2));
}

Check if two object references point to the same instance:

@Test
public void given2Objects_whenSameInstance_thenCorrect() {
    Cat cat = new Cat();
    assertThat(cat, sameInstance(cat));
}

Check if an Object is an instance of a given class:

@Test
public void givenAnObject_whenInstanceOfGivenClass_thenCorrect() {
    Cat cat = new Cat();
    assertThat(cat, instanceOf(Cat.class));
}

Check if all members of a Collection meet a condition:

@Test
public void givenList_whenEachElementGreaterThan0_thenCorrect() {
    List<Integer> list = Arrays.asList(1, 2, 3);
    int baseCase = 0;
    assertThat(list, everyItem(greaterThan(baseCase)));
}

Check that a String is not null:

@Test
public void givenString_whenNotNull_thenCorrect() {
    String str = "notnull";
    assertThat(str, notNullValue());
}

Chain conditions together; the test passes when the target meets any of the conditions, similar to a logical OR:

@Test
public void givenString_whenMeetsAnyOfGivenConditions_thenCorrect() {
    String str = "calligraphy";
    String start = "call";
    String end = "foo";
    assertThat(str, anyOf(startsWith(start), containsString(end)));
}

Chain conditions together; the test passes only when the target meets all the conditions, similar to a logical AND:

@Test
public void givenString_whenMeetsAllOfGivenConditions_thenCorrect() {
    String str = "calligraphy";
    String start = "call";
    String end = "phy";
    assertThat(str, allOf(startsWith(start), endsWith(end)));
}

10. A Custom Matcher

We can define our own matcher by extending TypeSafeMatcher. In this section, we will create a custom matcher which allows a test to pass only when the target is a positive integer.

public class IsPositiveInteger extends TypeSafeMatcher<Integer> {

    public void describeTo(Description description) {
        description.appendText("a positive integer");
    }

    @Factory
    public static Matcher<Integer> isAPositiveInteger() {
        return new IsPositiveInteger();
    }

    @Override
    protected boolean matchesSafely(Integer integer) {
        return integer > 0;
    }

}

We only need to implement the matchesSafely method, which checks that the target is indeed a positive integer, and the describeTo method, which produces a failure message in case the test does not pass.

Here is a test that uses our new custom matcher:

@Test
public void givenInteger_whenAPositiveValue_thenCorrect() {
    int num = 1;
    assertThat(num, isAPositiveInteger());
}

And here is the failure message we get when we pass in a non-positive integer:

java.lang.AssertionError: Expected: a positive integer but: was <-1>

11. Conclusion

In this tutorial, we have explored the Hamcrest API and learned how we can write better and more maintainable unit tests with it.

The full implementation of all these examples and code snippets can be found in the Hamcrest GitHub project – this is an Eclipse-based project, so it should be easy to import and run as is.

Introduction to Couchbase SDK for Java

1. Introduction

In this introduction to the Couchbase SDK for Java, we demonstrate how to interact with a Couchbase document database, covering basic concepts such as creating a Couchbase environment, connecting to a cluster, opening data buckets, using the basic persistence operations, and working with document replicas.

2. Maven Dependencies

If you are using Maven, add the following to your pom.xml file:

<dependency>
    <groupId>com.couchbase.client</groupId>
    <artifactId>java-client</artifactId>
    <version>2.2.6</version>
</dependency>

3. Getting Started

The SDK provides the CouchbaseEnvironment interface and an implementation class DefaultCouchbaseEnvironment containing default settings for managing access to clusters and buckets. The default environment settings can be overridden if necessary, as we will see in section 3.2.

Important: The official Couchbase SDK documentation cautions users to ensure that only one CouchbaseEnvironment is active in the JVM, since the use of two or more may result in unpredictable behavior.

3.1. Connecting to a Cluster with a Default Environment

To have the SDK automatically create a CouchbaseEnvironment with default settings and associate it with our cluster, we can connect to the cluster simply by providing the IP address or hostname of one or more nodes in the cluster.

In this example, we connect to a single-node cluster on our local workstation:

Cluster cluster = CouchbaseCluster.create("localhost");

To connect to a multi-node cluster, we would specify at least two nodes in case one of them is unavailable when the application attempts to establish the connection:

Cluster cluster = CouchbaseCluster.create("192.168.4.1", "192.168.4.2");

Note: It is not necessary to specify every node in the cluster when creating the initial connection. The CouchbaseEnvironment will query the cluster once the connection is established in order to discover the remaining nodes (if any).

3.2. Using a Custom Environment

If your application requires fine tuning of any of the settings provided by DefaultCouchbaseEnvironment, you can create a custom environment and then use that environment when connecting to your cluster.

Here’s an example that connects to a single-node cluster using a custom CouchbaseEnvironment with a ten-second connection timeout and a three-second key-value lookup timeout:

CouchbaseEnvironment env = DefaultCouchbaseEnvironment.builder()
  .connectTimeout(10000)
  .kvTimeout(3000)
  .build();
Cluster cluster = CouchbaseCluster.create(env, "localhost");

And to connect to a multi-node cluster with the custom environment:

Cluster cluster = CouchbaseCluster.create(env,
  "192.168.4.1", "192.168.4.2");

3.3. Opening a Bucket

Once you have connected to the Couchbase cluster, you can open one or more buckets.

When you first set up a Couchbase cluster, the installation package automatically creates a bucket named “default” with a blank password.

Here’s one way to open the “default” bucket when it has a blank password:

Bucket bucket = cluster.openBucket();

You can also specify the bucket name when opening it:

Bucket bucket = cluster.openBucket("default");

For any other bucket with a blank password, you must supply the bucket name:

Bucket myBucket = cluster.openBucket("myBucket");

To open a bucket that has a non-blank password, you must supply the bucket name and password:

Bucket bucket = cluster.openBucket("bucketName", "bucketPassword");

4. Persistence Operations

In this section, we show how to perform CRUD operations in Couchbase. In our examples, we will be working with simple JSON documents representing a person, as in this sample document:

{
  "name": "John Doe",
  "type": "Person",
  "email": "john.doe@mydomain.com",
  "homeTown": "Chicago"
}

The “type” attribute is not required; however, it is common practice to include an attribute specifying the document type in case one decides to store multiple types in the same bucket.

4.1. Document IDs

Each document stored in Couchbase is associated with an id that is unique to the bucket in which the document is being stored. The document id is analogous to the primary key column in a traditional relational database row.

Document id values must be UTF-8 strings of 250 or fewer bytes.

Since Couchbase does not provide a mechanism for automatically generating the id on insertion, we must provide our own.

Common strategies for generating ids include key-derivation using a natural key, such as the “email” attribute shown in our sample document, and the use of UUID strings.

For our examples, we will generate random UUID strings.
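
As a small hedged sketch of the natural-key strategy (the "Person::" prefix is purely an illustrative convention, not a Couchbase requirement):

String email = "john.doe@mydomain.com";
// derive the id from the document type plus the natural key
String naturalId = "Person::" + email;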

4.2. Inserting a Document

Before we can insert a new document into our bucket, we must first create a JsonObject containing the document’s contents:

JsonObject content = JsonObject.empty()
  .put("name", "John Doe")
  .put("type", "Person")
  .put("email", "john.doe@mydomain.com")
  .put("homeTown", "Chicago");

Next, we create a JsonDocument consisting of an id value and the JsonObject:

String id = UUID.randomUUID().toString();
JsonDocument document = JsonDocument.create(id, content);

To add a new document to the bucket, we use the insert method:

JsonDocument inserted = bucket.insert(document);

The JsonDocument returned contains all of the properties of the original document, plus a value known as the “CAS” (compare-and-swap) value that Couchbase uses for version tracking.

If a document with the supplied id already exists in the bucket, Couchbase throws a DocumentAlreadyExistsException.
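
A minimal sketch of guarding against that, assuming the bucket and document from the example above:

try {
    bucket.insert(document);
} catch (DocumentAlreadyExistsException e) {
    // a document with this id is already stored; recover, retry with a new id, or report
}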

We can also use the upsert method, which will either insert the document (if the id is not found) or update the document (if the id is found):

JsonDocument upserted = bucket.upsert(document);

4.3. Retrieving a Document

To retrieve a document by its id, we use the get method:

JsonDocument retrieved = bucket.get(id);

If no document exists with the given id, the method returns null.
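
So a defensive lookup, as a minimal sketch, might look like this:

JsonDocument retrieved = bucket.get(id);
if (retrieved == null) {
    // no document is stored under this id
}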

4.4. Updating or Replacing a Document

We can update an existing document using the upsert method:

JsonObject content = document.content();
content.put("homeTown", "Kansas City");
JsonDocument upserted = bucket.upsert(document);

As we mentioned in section 4.2, upsert will succeed whether a document with the given id was found or not.

If enough time has passed between the time we originally retrieved the document and our attempt to upsert the revised document, there is a possibility that the original document will have been deleted from the bucket by another process or user.

If we need to guard against this scenario in our application, we can instead use the replace method, which fails with a DocumentDoesNotExistException if a document with the given id is not found in Couchbase:

JsonDocument replaced = bucket.replace(document);

4.5. Deleting a Document

To delete a Couchbase document, we use the remove method:

JsonDocument removed = bucket.remove(document);

You may also remove by id:

JsonDocument removed = bucket.remove(id);

The JsonDocument object returned has only the id and CAS properties set; all other properties (including the JSON content) are removed from the returned object.

If no document exists with the given id, Couchbase throws a DocumentDoesNotExistException.

5. Working with Replicas

This section discusses Couchbase’s virtual bucket and replica architecture and introduces a mechanism for retrieving a replica of a document in the event that a document’s primary node is unavailable.

5.1. Virtual Buckets and Replicas

Couchbase distributes a bucket’s documents across a collection of 1024 virtual buckets, or vbuckets, using a hashing algorithm on the document id to determine the vbucket in which to store each document.

Each Couchbase bucket can also be configured to maintain one or more replicas of each vbucket. Whenever a document is inserted or updated and written to its vbucket, Couchbase initiates a process to replicate the new or updated document to its replica vbucket.

In a multi-node cluster, Couchbase distributes vbuckets and replica vbuckets among all the data nodes in the cluster. A vbucket and its replica vbucket are kept on separate data nodes in order to achieve a certain measure of high-availability.

5.2. Retrieving a Document From a Replica

When retrieving a document by its id, if the document’s primary node is down or otherwise unreachable due to a network error, Couchbase throws an exception.

You can have your application catch the exception and attempt to retrieve one or more replicas of the document using the getFromReplica method.

The following code would use the first replica found:

JsonDocument doc;
try {
    doc = bucket.get(id);
} catch (CouchbaseException e) {
    List<JsonDocument> list = bucket.getFromReplica(id, ReplicaMode.FIRST);
    if (!list.isEmpty()) {
        doc = list.get(0);
    }
}

Note that it is possible, when writing your application, to have write operations block until persistence and replication are complete. However, the more common practice, for reasons of performance, is to have the application return from writes immediately after writing to the memory of the document’s primary node, because disk writes are inherently slower than memory writes.

When using the latter approach, if a recently updated document’s primary node should fail or go offline before the updates have been fully replicated, replica reads may or may not return the latest version of the document.
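
If stronger guarantees are needed for a particular write, the 2.x SDK also overloads the mutation methods with durability requirements. A hedged sketch that blocks until the document is persisted on the primary node and replicated to at least one replica:

JsonDocument written = bucket.upsert(document, PersistTo.MASTER, ReplicateTo.ONE);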

It is also worth noting that Couchbase retrieves replicas (if any are found) asynchronously. Therefore if your bucket is configured for multiple replicas, there is no guarantee as to the order in which the SDK returns them, and you may want to loop through all the replicas found in order to ensure that your application has the latest replica version available:

long maxCasValue = -1;
for(JsonDocument replica : bucket.getFromReplica(id, ReplicaMode.ALL)) {
    if(replica.cas() > maxCasValue) {
        doc = replica;
        maxCasValue = replica.cas();
    }
}

6. Conclusion

We have introduced some basic usage scenarios that you will need in order to get started with the Couchbase SDK.

Code snippets presented in this tutorial can be found in the github project.

You can learn more about the SDK at the official Couchbase SDK developer documentation site.

JMockit 101

1. Introduction

With this article, we’ll be starting a new series centered around the mocking toolkit JMockit.

In this first installment we’ll talk about what JMockit is, its characteristics, and how mocks are created and used with it.

Later articles will focus on and go deeper into its capabilities.

2. JMockit

2.1. Introduction

First of all, let’s talk about what JMockit is: a Java framework for mocking objects in tests (you can use it for both JUnit and TestNG ones).

It uses Java’s instrumentation APIs to modify the classes’ bytecode during runtime in order to dynamically alter their behavior. Some of its strong points are its expressibility and its out-of-the-box ability to mock static and private methods.

Maybe you’re new to JMockit, but that’s definitely not because the framework is new. JMockit’s development started in June 2006 and its first stable release dates to December 2012, so it’s been around for some time now (the current version is 1.24 at the time of writing this article).

2.2. The Expressibility of JMockit

As mentioned before, one of the strongest points of JMockit is its expressibility. To create mocks and define their behavior, instead of calling methods from the mocking API, you just need to define them directly.

This means that you won’t do things like:

API.expect(mockInstance.method()).andThenReturn(value).times(2);

Instead, expect things like:

new Expectations() {{
    mockInstance.method();
    result = value;
    times = 2;
}};

It might seem like more code, but you could simply put all three lines on one. The really important part is that you don’t end up with a big “train” of chained method calls. Instead, you end up with a definition of how you want the mock to behave when called.

If you take into account that on the result = value part you could return anything (fixed values, dynamically generated values, exceptions, etc.), the expressiveness of JMockit gets even more evident.
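
For instance, a recorded invocation can be made to throw an exception simply by assigning a Throwable to result; a minimal sketch:

new Expectations() {{
    mockInstance.method();
    result = new IllegalStateException("error"); // the replayed call will throw this
}};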

2.3. The Record-Replay-Verify Model

Tests using JMockit are divided into three differentiated stages: record, replay and verify.

  1. In the record phase, during test preparation and before the invocations of the methods we want to be executed, we define the expected behavior for all mocks to be used during the next stage.
  2. The replay phase is the one in which the code under test is executed. The invocations of mocked methods/constructors recorded in the previous stage will now be replayed.
  3. Lastly, in the verify phase, we assert that the result of the test was the one we expected (and that the mocks behaved and were used according to what was defined in the record phase).

With a code example, a wireframe for a test would look something like this:

@Test
public void testWireframe() {
   // preparation code not specific to JMockit, if any

   new Expectations() {{ 
       // define expected behaviour for mocks
   }};

   // execute code-under-test

   new Verifications() {{ 
       // verify mocks
   }};

   // assertions
}

3. Creating Mocks

3.1. JMockit’s Annotations

When using JMockit, the easiest way to use mocks is with annotations. There are three for creating mocks (@Mocked, @Injectable and @Capturing) and one to specify the class under test (@Tested).

When we use the @Mocked annotation on a field, JMockit will create mocked instances of each and every new object of that particular class.

On the other hand, with the @Injectable annotation, only one mocked instance will be created.

The last annotation, @Capturing, will behave like @Mocked, but will extend its reach to every subclass extending or implementing the annotated field’s type.
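
A minimal sketch showing the three mock-creating annotations side by side (Collaborator is just a placeholder type):

public class AnnotationsSketchTest {

    @Mocked
    private Collaborator allInstancesMocked;     // every Collaborator instance is mocked

    @Injectable
    private Collaborator onlyThisInstanceMocked; // a single mocked instance

    @Capturing
    private Collaborator subclassesAlsoMocked;   // also mocks any subclass or implementation
}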

3.2. Passing Arguments to Tests

When using JMockit, it is possible to pass mocks as test parameters. This is quite useful for creating a mock just for one particular test, like some complex model object that needs specific behavior for a single test. It would be something like this:

@RunWith(JMockit.class)
public class TestPassingArguments {
   
   @Injectable
   private Foo mockForEveryTest;

   @Tested
   private Bar bar;

   @Test
   public void testExample(@Mocked Xyz mockForJustThisTest) {
       new Expectations() {{
           mockForEveryTest.someMethod("foo");
           mockForJustThisTest.someOtherMethod();
       }};

       bar.codeUnderTest();
   }
}

This way of creating a mock by passing it as a parameter, instead of having to call some API method, again shows us the expressibility we’ve been talking about since the beginning.

3.3. Complete Example

To end this article, we’ll be including a complete example of a test using JMockit.

In this example, we’ll be testing a Performer class that uses Collaborator in its perform() method. This perform() method receives a Model object as a parameter and calls its getInfo() method, which returns a String. This String is then passed to the collaborate() method of Collaborator, which returns true for this particular test, and that value is finally passed to the receive() method of Collaborator.

So, the tested classes will look like this:

public class Model {
    public String getInfo(){
        return "info";
    }
}

public class Collaborator {
    public boolean collaborate(String string){
        return false;
    }
    public void receive(boolean bool){
        // NOOP
    }
}

public class Performer {
    private Collaborator collaborator;

    public void perform(Model model) {
        boolean value = collaborator.collaborate(model.getInfo());
        collaborator.receive(value);
    }
}

And the test code will end up looking like this:

@RunWith(JMockit.class)
public class PerformerTest {

    @Injectable
    private Collaborator collaborator;

    @Tested
    private Performer performer;

    @Test
    public void testThePerformMethod(@Mocked Model model) {
        new Expectations() {{
            model.getInfo(); result = "bar";
            collaborator.collaborate("bar"); result = true;
        }};
        performer.perform(model);
        new Verifications() {{
            collaborator.receive(true);
        }};
    }
}

4. Conclusion

With this, we’ll wrap up our practical intro to JMockit. If you want to learn more about JMockit, stay tuned for future articles.

The full implementation of this tutorial can be found on the GitHub project.

Intro to QueryDSL

1. Introduction

This is an introductory article to get you up and running with the powerful QueryDSL API for data persistence.

The goal here is to give you the practical tools to add QueryDSL into your project, understand the structure and purpose of the generated classes, and get a basic understanding of how to write type-safe database queries for most common scenarios.

2. The Purpose of QueryDSL

Object-relational mapping frameworks are at the core of Enterprise Java. They compensate for the mismatch between the object-oriented approach and the relational database model. They also allow developers to write cleaner and more concise persistence code and domain logic.

However, one of the most difficult design choices for an ORM framework is the API for building correct and type-safe queries.

One of the most widely used Java ORM frameworks, Hibernate (along with the closely related JPA standard), proposes a string-based query language, HQL (JPQL), very similar to SQL. The obvious drawbacks of this approach are the lack of type safety and the absence of static query checking. Also, in more complex cases (for instance, when the query needs to be constructed at runtime depending on some conditions), building an HQL query typically involves the concatenation of strings, which is usually very unsafe and error-prone.

The JPA 2.0 standard brought an improvement in the form of the Criteria Query API, a new and type-safe method of building queries that takes advantage of metamodel classes generated during annotation preprocessing. Unfortunately, despite being groundbreaking in its essence, the Criteria Query API ended up very verbose and practically unreadable. Here’s an example from the Java EE tutorial for generating a query as simple as SELECT p FROM Pet p:

EntityManager em = ...;
CriteriaBuilder cb = em.getCriteriaBuilder();
CriteriaQuery<Pet> cq = cb.createQuery(Pet.class);
Root<Pet> pet = cq.from(Pet.class);
cq.select(pet);
TypedQuery<Pet> q = em.createQuery(cq);
List<Pet> allPets = q.getResultList();

No wonder a more adequate alternative, the QueryDSL library, soon emerged, based on the same idea of generated metadata classes, yet implemented with a fluent and readable API.

3. QueryDSL Class Generation

Let’s start with generating and exploring the magical metaclasses that account for the fluent API of QueryDSL.

3.1. Adding QueryDSL to Maven Build

Including QueryDSL in your project is as simple as adding several dependencies to your build file and configuring a plugin for processing JPA annotations. Let’s start with the dependencies. The version of QueryDSL libraries should be extracted to a separate property inside the <project><properties> section, as follows (for the latest version of QueryDSL libraries, check the Maven Central repository):

<properties>
    <querydsl.version>4.1.3</querydsl.version>
</properties>

Next, add the following dependencies to the <project><dependencies> section of your pom.xml file:

<dependencies>

    <dependency>
        <groupId>com.querydsl</groupId>
        <artifactId>querydsl-apt</artifactId>
        <version>${querydsl.version}</version>
        <scope>provided</scope>
    </dependency>

    <dependency>
        <groupId>com.querydsl</groupId>
        <artifactId>querydsl-jpa</artifactId>
        <version>${querydsl.version}</version>
    </dependency>

</dependencies>

The querydsl-apt dependency is an annotation processing tool (APT), an implementation of the corresponding Java API that allows processing of annotations in source files before they move on to the compilation stage. This tool generates the so-called Q-types, classes that directly relate to the entity classes of your application but are prefixed with the letter Q. For instance, if you have a User class marked with the @Entity annotation in your application, then the generated Q-type will reside in a QUser.java source file.

The provided scope of the querydsl-apt dependency means that this jar should be made available only at build time and not included in the application artifact.

The querydsl-jpa library is the QueryDSL itself, designed to be used together with a JPA application.

To configure the annotation processing plugin that takes advantage of querydsl-apt, add the following plugin configuration to your pom, inside the <project><build><plugins> element:

<plugin>
    <groupId>com.mysema.maven</groupId>
    <artifactId>apt-maven-plugin</artifactId>
    <version>1.1.3</version>
    <executions>
        <execution>
            <goals>
                <goal>process</goal>
            </goals>
            <configuration>
                <outputDirectory>target/generated-sources/java</outputDirectory>
                <processor>com.querydsl.apt.jpa.JPAAnnotationProcessor</processor>
            </configuration>
        </execution>
    </executions>
</plugin>

This plugin makes sure that the Q-types are generated during the process goal of the Maven build. The outputDirectory configuration property points to the directory where the Q-type source files will be generated. The value of this property will be useful later on, when you go exploring the Q-files.

You should also add this directory to the source folders of the project, if your IDE does not do this automatically — consult the documentation for your favorite IDE on how to do that.

For this article we will use a simple JPA model of a blog service, consisting of Users and their BlogPosts with a one-to-many relationship between them:

@Entity
public class User {

    @Id
    @GeneratedValue
    private Long id;

    private String login;

    private Boolean disabled;

    @OneToMany(cascade = CascadeType.PERSIST, mappedBy = "user")
    private Set<BlogPost> blogPosts = new HashSet<>(0);

    // getters and setters

}

@Entity
public class BlogPost {

    @Id
    @GeneratedValue
    private Long id;

    private String title;

    private String body;

    @ManyToOne
    private User user;

    // getters and setters

}

To generate Q-types for your model, simply run:

mvn compile

3.2. Exploring Generated Classes

Now go to the directory specified in the outputDirectory property of the apt-maven-plugin (target/generated-sources/java in our example). You will see a package and class structure that directly mirrors your domain model, except that all the classes start with the letter Q (QUser and QBlogPost in our case).

Open the file QUser.java. This is your entry point to building all queries that have User as a root entity. The first thing you’ll notice is the @Generated annotation, which means that this file was automatically generated and should not be edited manually. Should you change any of your domain model classes, you will have to run mvn compile again to regenerate all of the corresponding Q-types.

Aside from several QUser constructors present in this file, you should also take notice of a public static final instance of the QUser class:

public static final QUser user = new QUser("user");

This is the instance that you can use in most of your QueryDSL queries against this entity, except when you need to write more complex queries, like joining several different instances of a table in a single query.

The last thing that should be noted is that for every field of the entity class there is a corresponding *Path field in the Q-type, like the NumberPath id, StringPath login and SetPath blogPosts fields in the QUser class (notice that the name of the field corresponding to the Set is pluralized). These fields are used as parts of the fluent query API that we will encounter later on.

4. Querying with QueryDSL

4.1. Simple Querying and Filtering

To build a query, first we’ll need an instance of a JPAQueryFactory, which is the preferred way of starting the building process. The only thing that JPAQueryFactory needs is an EntityManager, which should already be available in your JPA application via the EntityManagerFactory.createEntityManager() call or @PersistenceContext injection.

EntityManagerFactory emf =
  Persistence.createEntityManagerFactory("org.baeldung.querydsl.intro");
EntityManager em = emf.createEntityManager();
JPAQueryFactory queryFactory = new JPAQueryFactory(em);

Now let’s create our first query:

QUser user = QUser.user;

User c = queryFactory.selectFrom(user)
  .where(user.login.eq("David"))
  .fetchOne();

Notice we’ve defined a local variable QUser user and initialized it with the QUser.user static instance. This is done purely for brevity; alternatively, you may statically import the QUser.user field.

The selectFrom method of the JPAQueryFactory starts building a query. We pass it the QUser instance and continue building the conditional clause of the query with the .where() method. The user.login is a reference to the StringPath field of the QUser class that we’ve seen before. The StringPath object also has the .eq() method, which allows us to fluently continue building the query by specifying the field equality condition.

Finally, to fetch the value from the database into the persistence context, we end the building chain with a call to the fetchOne() method. This method returns null if the object can’t be found, but throws a NonUniqueResultException if there are multiple entities satisfying the .where() condition.

4.2. Ordering and Grouping

Now let’s fetch all users in a list, sorted by their login in ascending order:

List<User> c = queryFactory.selectFrom(user)
  .orderBy(user.login.asc())
  .fetch();

This syntax is possible because the *Path classes have the .asc() and .desc() methods. You can also specify several arguments for the .orderBy() method to sort by multiple fields.
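
For instance, a quick sketch sorting by login ascending and then by id descending:

List<User> users = queryFactory.selectFrom(user)
  .orderBy(user.login.asc(), user.id.desc())
  .fetch();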

Now let’s try something more difficult. Suppose we need to group all posts by title and count the duplicate titles. This is done with the .groupBy() clause. We’ll also want to order the titles by the resulting occurrence count:

NumberPath<Long> count = Expressions.numberPath(Long.class, "c");

List<Tuple> userTitleCounts = queryFactory.select(
  blogPost.title, blogPost.id.count().as(count))
  .from(blogPost)
  .groupBy(blogPost.title)
  .orderBy(count.desc())
  .fetch();

We selected the blog post title and count of duplicates, grouping by title and then ordering by aggregated count. Notice we first created an alias for the count() field in the .select() clause, because we needed to reference it in the .orderBy() clause.

4.3. Complex Queries with Joins and Subqueries

Let’s find all users that wrote a post titled “Hello World!”. For such a query, we could use an inner join. Notice we’ve created an alias blogPost for the joined table to reference it in the .on() clause:

QBlogPost blogPost = QBlogPost.blogPost;

List<User> users = queryFactory.selectFrom(user)
  .innerJoin(user.blogPosts, blogPost)
  .on(blogPost.title.eq("Hello World!"))
  .fetch();

Now let’s try to achieve the same with a subquery:

List<User> users = queryFactory.selectFrom(user)
  .where(user.id.in(
    JPAExpressions.select(blogPost.user.id)
      .from(blogPost)
      .where(blogPost.title.eq("Hello World!"))))
  .fetch();

As we can see, subqueries are very similar to queries, and they are also quite readable, but they start with JPAExpressions factory methods. To connect subqueries with the main query, as always, we reference the aliases defined and used earlier.

4.4. Modifying Data

JPAQueryFactory allows not only constructing queries but also modifying and deleting records. Let’s change the user’s login and disable the account:

queryFactory.update(user)
  .where(user.login.eq("Ash"))
  .set(user.login, "Ash2")
  .set(user.disabled, true)
  .execute();

We can have any number of .set() clauses we want for different fields. The .where() clause is not necessary, so we can update all the records at once.

To delete the records matching a certain condition, we can use a similar syntax:

queryFactory.delete(user)
  .where(user.login.eq("David"))
  .execute();

The .where() clause is also not necessary, but be careful, because omitting the .where() clause results in deleting all of the entities of a certain type.

You may wonder why JPAQueryFactory doesn’t have an .insert() method. This is a limitation of the JPA Query interface. The underlying javax.persistence.Query.executeUpdate() method is capable of executing update and delete but not insert statements. To insert data, you should simply persist the entities with the EntityManager, as in the sketch below.
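
A minimal sketch of the plain-JPA insert, assuming the standard setters of our User entity:

em.getTransaction().begin();

User newUser = new User();
newUser.setLogin("David");
newUser.setDisabled(false);
em.persist(newUser);

em.getTransaction().commit();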

If you still want to take advantage of a similar QueryDSL syntax for inserting data, you can use the SQLQueryFactory class that resides in the querydsl-sql library.

5. Conclusion

In this article we’ve discovered a powerful and type-safe API for persistent object manipulation that is provided by QueryDSL.

We’ve learned how to add QueryDSL to a project and explored the generated Q-types. We’ve also covered some typical use cases and enjoyed their conciseness and readability.

All the source code for the examples can be found in the GitHub repository.

Finally, there are of course many more features that QueryDSL provides, including working with raw SQL, non-persistent collections, NoSQL databases and full-text search – and we’ll explore some of these in future articles.

I usually post about Persistence on Twitter - you can follow me there:


Java Web Weekly, Issue 132


I just released the Starter Class of "Learn Spring Security":

>> CHECK OUT THE COURSE

At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> Implementing HAL hypermedia REST API using Spring HATEOAS [opencredo.com]

I’ve been talking about HATEOAS for such a long time now and consistently see clients get value out of it for not a lot of effort. And so, of course, this writeup gets the first spot here in the review.

A solid, practical article detailing quite a bit of what you have to know when implementing a Hypermedia API with Spring.

>> Playing with HTTP/2 [kaczmarzyk.net]

Very nice primer on starting down the HTTP/2 path in the Java ecosystem, while we’re waiting for the long overdue Servlet 4 specification.

>> How I Caused Confusion about Spring Boot [codecentric.de]

A quick writeup going beyond the simple use case and discussing some good practices for how configuration should be handled with Spring Boot.

>> How Functional Programming will (Finally) do Away With the GoF Patterns [jooq.org]

There’s a quote I can’t place right now – that goes something like this: Design Patterns are missing language features.

Java 8 gave us a much more powerful language, which of course changed the landscape when it comes to needing patterns. So I fully expect to keep seeing this style of writeup as Java 8 gets adopted and understood more and more.

>> Tabs vs Spaces: How They Write Java at Google, Twitter, Mozilla and Pied Piper [takipi.com]

Yeah, you read that right – tabs vs spaces! Back to trolling basics 🙂 – it made me reconsider my life choices.

Joking aside, it’s a fun read.

>> Spring Sweets: Using Groovy Configuration As PropertySource [jdriven.com]

Some interesting Groovy alternative configuration for handling properties in Spring.

>> Java 9 on the Brink of a Delivery Date and Scope Review [infoq.com]

Looks like we’re close to getting the real release date for Java 9.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Goldilocks Microservices [vanilla-java.github.io]

Sizing your microservices right and keeping the overall architecture flexible can definitely make or break an implementation; this article is about making the pragmatic choices that make sense for your particular scenario.

>> Adding service virtualization to your Continuous Delivery pipeline [ontestautomation.com]

A quick intro to a highly useful technique and trend that’s been picking up lots of momentum lately, and for good reason – making heavy use of virtualization within a CD pipeline.

Also worth reading:

3. Musings

>> Security insanity: how we keep failing at the basics [troyhunt.com]

A fantastic deep-dive into broken password security rules.

>> Does Github Enhance the Need for Code Review? [daedtech.com]

A three-decade look at the proprietary vs open source software world from the vantage point of the seminal work The Cathedral and the Bazaar.

>> Surviving The Dreaded Company Framework [daedtech.com]

Internal frameworks are a pain point for so many developers, given that for every one that makes sense, a hundred that don’t are built. I cringed when I first read this title.

>> With Commercial Licensing, Invest in Innovation, not Protection [jooq.org]

That’s good advice, and also scary if you actually have a product that the advice applies to. It’s also worth mentioning that the advice comes out of practical experience and not just out of “thinking about it a bit”.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> I always plan my schedule around your incompetence [dilbert.com]

>> My productivity plunges whenever you learn new jargon [dilbert.com]

>> Yeah, that’s how it works [dilbert.com]

5. Pick of the Week

>> Don’t let anyone overpay you [m.signalvnoise.com]

Get the early-bird price (20% Off) of my upcoming "Learn Spring Security" Course:

>> CHECK OUT THE COURSE

Wiring in Spring: @Autowired, @Resource and @Inject


I just released the Starter Class of "Learn Spring Security":

>> CHECK OUT THE COURSE

1. Overview

This Spring Framework article will demonstrate the use of annotations related to dependency injection, namely the @Resource, @Inject, and @Autowired annotations. These annotations provide classes with a declarative way to resolve dependencies. For example:

@Autowired 
ArbitraryClass arbObject;

as opposed to instantiating them directly (the imperative way), for example:

ArbitraryClass arbObject = new ArbitraryClass();

Two of the three annotations belong to the Java extension package: javax.annotation.Resource and javax.inject.Inject. The @Autowired annotation belongs to the org.springframework.beans.factory.annotation package.
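
For reference, the corresponding import statements look as follows:

import javax.annotation.Resource;
import javax.inject.Inject;
import org.springframework.beans.factory.annotation.Autowired;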

Each of these annotations can resolve dependencies either by field injection or by setter injection. A simplified but practical example will be used to demonstrate the distinction between the three annotations, based on the execution paths each one takes.

The examples will focus on how to use the three injection annotations during integration testing. The dependency required by the test can either be an arbitrary file or an arbitrary class.

2. The @Resource Annotation

The @Resource annotation is part of the JSR-250 annotation collection and is packaged with Java EE. This annotation has the following execution paths, listed by precedence:

  1. Match by Name
  2. Match by Type
  3. Match by Qualifier

These execution paths are applicable to both setter and field injection.

2.1. Field Injection

Resolving dependencies by field injection is achieved by annotating an instance variable with the @Resource annotation.

2.1.1. Match by Name

The integration test used to demonstrate match-by-name field injection is listed as follows:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(
  loader=AnnotationConfigContextLoader.class,
  classes=ApplicationContextTestResourceNameType.class)
public class FieldResourceInjectionTest {

    @Resource(name="namedFile")
    private File defaultFile;

    @Test
    public void givenResourceAnnotation_WhenOnField_ThenDependencyValid(){
        assertNotNull(defaultFile);
        assertEquals("namedFile.txt", defaultFile.getName());
    }
}

Let’s go through the code. In the FieldResourceInjectionTest integration test, the dependency is resolved by name by passing the bean name as an attribute value to the @Resource annotation:

@Resource(name="namedFile")
private File defaultFile;

This configuration will resolve dependencies using the match-by-name execution path. The bean namedFile must be defined in the ApplicationContextTestResourceNameType application context.

Note that the bean id and the corresponding reference attribute value must match:

@Configuration
public class ApplicationContextTestResourceNameType {

    @Bean(name="namedFile")
    public File namedFile() {
        File namedFile = new File("namedFile.txt");
        return namedFile;
    }
}

Failure to define the bean in the application context will result in an org.springframework.beans.factory.NoSuchBeanDefinitionException being thrown. We can demonstrate this by changing the attribute value passed to the @Bean annotation in the ApplicationContextTestResourceNameType application context, or by changing the attribute value passed to the @Resource annotation in the FieldResourceInjectionTest integration test.

2.1.2. Match by Type

To demonstrate the match-by-type execution path, just remove the attribute value from the @Resource annotation in the FieldResourceInjectionTest integration test so that it looks as follows:

@Resource
private File defaultFile;

and run the test again.

The test will still pass, because if the @Resource annotation does not receive a bean name as an attribute value, the Spring Framework proceeds with the next level of precedence, match-by-type, to try to resolve the dependency.

2.1.3. Match by Qualifier

To demonstrate the match-by-qualifier execution path, the integration testing scenario will be modified so that there are two beans defined in the ApplicationContextTestResourceQualifier application context:

@Configuration
public class ApplicationContextTestResourceQualifier {

    @Bean(name="defaultFile")
    public File defaultFile() {
        File defaultFile = new File("defaultFile.txt");
        return defaultFile;
    }

    @Bean(name="namedFile")
    public File namedFile() {
        File namedFile = new File("namedFile.txt");
        return namedFile;
    }
}

The QualifierResourceInjectionTest integration test will be used to demonstrate match-by-qualifier dependency resolution. In this scenario, a specific bean dependency needs to be injected into each reference variable:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(
  loader=AnnotationConfigContextLoader.class,
  classes=ApplicationContextTestResourceQualifier.class)
public class QualifierResourceInjectionTest {

    @Resource
    private File dependency1;
	
    @Resource
    private File dependency2;

    @Test
    public void givenResourceAnnotation_WhenField_ThenDependency1Valid(){
        assertNotNull(dependency1);
        assertEquals("defaultFile.txt", dependency1.getName());
    }

    @Test
    public void givenResourceQualifier_WhenField_ThenDependency2Valid(){
        assertNotNull(dependency2);
        assertEquals("namedFile.txt", dependency2.getName());
    }
}

Run the integration test, and an org.springframework.beans.factory.NoUniqueBeanDefinitionException is thrown. This exception is thrown because the application context has found two bean definitions of type File and does not know which bean should resolve the dependency.

To resolve this issue, refer to the two field declarations in the QualifierResourceInjectionTest integration test:

@Resource
private File dependency1;

@Resource
private File dependency2;

and add the following lines of code:

@Qualifier("defaultFile")

@Qualifier("namedFile")

so that the code block looks as follows:

@Resource
@Qualifier("defaultFile")
private File dependency1;

@Resource
@Qualifier("namedFile")
private File dependency2;

Run the integration test again; this time it should pass. The objective of this test was to demonstrate that even if multiple beans of the same type are defined in an application context, the @Qualifier annotation clears any confusion by allowing specific dependencies to be injected into a class.

2.2. Setter Injection

The execution paths taken when injecting dependencies on a field also apply to setter-based injection.

2.2.1. Match by Name

The only difference is that the MethodResourceInjectionTest integration test uses a setter method:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(
  loader=AnnotationConfigContextLoader.class,
  classes=ApplicationContextTestResourceNameType.class)
public class MethodResourceInjectionTest {

    private File defaultFile;

    @Resource(name="namedFile")
    protected void setDefaultFile(File defaultFile) {
        this.defaultFile = defaultFile;
    }

    @Test
    public void givenResourceAnnotation_WhenSetter_ThenDependencyValid(){
        assertNotNull(defaultFile);
        assertEquals("namedFile.txt", defaultFile.getName());
    }
}

Resolving dependencies by setter injection is done by annotating a reference variable’s corresponding setter method. Pass the name of the bean dependency as an attribute value to the @Resource annotation:

private File defaultFile;

@Resource(name="namedFile")
protected void setDefaultFile(File defaultFile) {
    this.defaultFile = defaultFile;
}

The namedFile bean dependency will be reused in this example. The bean name and the corresponding attribute value must match.

Run the integration test as-is and it will pass.

To see that the dependency was indeed resolved by the match-by-name execution path, change the attribute value passed to the @Resource annotation to a value of your choice and run the test again. This time, the test fails with a NoSuchBeanDefinitionException.
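
For example, a hypothetical mismatch (nonExistentFile is a made-up name with no matching bean definition) that triggers the exception:

@Resource(name="nonExistentFile")
protected void setDefaultFile(File defaultFile) {
    this.defaultFile = defaultFile;
}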

2.2.2. Match by Type

To demonstrate setter-based, match-by-type execution, we will use the MethodByTypeResourceTest integration test:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(
  loader=AnnotationConfigContextLoader.class,
  classes=ApplicationContextTestResourceNameType.class)
public class MethodByTypeResourceTest {

    private File defaultFile;

    @Resource
    protected void setDefaultFile(File defaultFile) {
        this.defaultFile = defaultFile;
    }

    @Test
    public void givenResourceAnnotation_WhenSetter_ThenValidDependency(){
        assertNotNull(defaultFile);
        assertEquals("namedFile.txt", defaultFile.getName());
    }
}

Run this test as-is, and it will pass.

In order to verify that the File dependency was indeed resolved by the match-by-type execution path, change the class type of the defaultFile variable to another class type like String. Execute the MethodByTypeResourceTest integration test again and this time a NoSuchBeanDefinitionException will be thrown.

The exception verifies that match-by-type was indeed used to resolve the File dependency. The NoSuchBeanDefinitionException confirms that the reference variable name does not need to match the bean name. Instead, dependency resolution depends on the bean’s class type matching the reference variable’s class type.

2.2.3. Match by Qualifier

The MethodByQualifierResourceTest integration test will be used to demonstrate the match-by-qualifier execution path:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(
  loader=AnnotationConfigContextLoader.class,
  classes=ApplicationContextTestResourceQualifier.class)
public class MethodByQualifierResourceTest {

    private File arbDependency;
    private File anotherArbDependency;

    @Test
    public void givenResourceQualifier_WhenSetter_ThenValidDependencies(){
        assertNotNull(arbDependency);
        assertEquals("namedFile.txt", arbDependency.getName());
        assertNotNull(anotherArbDependency);
        assertEquals("defaultFile.txt", anotherArbDependency.getName());
    }

    @Resource
    @Qualifier("namedFile")
    public void setArbDependency(File arbDependency) {
        this.arbDependency = arbDependency;
    }

    @Resource
    @Qualifier("defaultFile")
    public void setAnotherArbDependency(File anotherArbDependency) {
        this.anotherArbDependency = anotherArbDependency;
    }
}

The objective of this test is to demonstrate that even if multiple bean implementations of a particular type are defined in an application context, a @Qualifier annotation can be used together with the @Resource annotation to resolve a dependency.

Similar to field-based dependency injection, if there are multiple beans defined in an application context, a NoUniqueBeanDefinitionException is thrown if no @Qualifier annotation is used to specify which bean should be used to resolve dependencies.

3. The @Inject Annotation

The @Inject annotation belongs to the JSR-330 annotations collection. This annotation has the following execution paths, listed by precedence:

  1. Match by Type
  2. Match by Qualifier
  3. Match by Name

These execution paths are applicable to both setter and field injection. In order to access the @Inject annotation, the javax.inject library has to be declared as a Gradle or Maven dependency.

For Gradle:

testCompile group: 'javax.inject', name: 'javax.inject', version: '1'

For Maven:

<dependency>
    <groupId>javax.inject</groupId>
    <artifactId>javax.inject</artifactId>
    <version>1</version>
</dependency>

3.1. Field Injection

3.1.1. Match by Type

The integration test example will be modified to use another type of dependency, namely the ArbitraryDependency class. The ArbitraryDependency class merely serves as a simple dependency and holds no further significance. It is listed as follows:

@Component
public class ArbitraryDependency {

    private final String label = "Arbitrary Dependency";

    public String toString() {
        return label;
    }
}

The FieldInjectTest integration test in question is listed as follows:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(
  loader=AnnotationConfigContextLoader.class,
  classes=ApplicationContextTestInjectType.class)
public class FieldInjectTest {

    @Inject
    private ArbitraryDependency fieldInjectDependency;

    @Test
    public void givenInjectAnnotation_WhenOnField_ThenValidDependency(){
        assertNotNull(fieldInjectDependency);
        assertEquals("Arbitrary Dependency",
          fieldInjectDependency.toString());
    }
}

Unlike the @Resource annotation, which resolves dependencies by name first, the default behavior of the @Inject annotation is to resolve dependencies by type.

This means that even if a class reference variable name differs from the bean name, the dependency will still be resolved, provided that the bean is defined in the application context. Note how the reference variable name in the following test:

@Inject
private ArbitraryDependency fieldInjectDependency;

differs from the bean name configured in the application context:

@Bean
public ArbitraryDependency injectDependency() {
    ArbitraryDependency injectDependency = new ArbitraryDependency();
    return injectDependency;
}

and when the test is executed, it is able to resolve the dependency.

3.1.2. Match by Qualifier

But what if there are multiple implementations of a particular class type, and a certain class requires a specific bean? Let’s modify the integration testing example so that another dependency is required.

In this example, we subclass the ArbitraryDependency class, used in the match-by-type example, to create the AnotherArbitraryDependency class:

public class AnotherArbitraryDependency extends ArbitraryDependency {

    private final String label = "Another Arbitrary Dependency";

    public String toString() {
        return label;
    }
}

The objective of each test case is to ensure that each dependency is injected correctly into each reference variable:

@Inject
private ArbitraryDependency defaultDependency;

@Inject
private ArbitraryDependency namedDependency;

The FieldQualifierInjectTest integration test used to demonstrate match by qualifier is listed as follows:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(
  loader=AnnotationConfigContextLoader.class,
  classes=ApplicationContextTestInjectQualifier.class)
public class FieldQualifierInjectTest {

    @Inject
    private ArbitraryDependency defaultDependency;

    @Inject
    private ArbitraryDependency namedDependency;

    @Test
    public void givenInjectQualifier_WhenOnField_ThenDefaultFileValid(){
        assertNotNull(defaultDependency);
        assertEquals("Arbitrary Dependency",
          defaultDependency.toString());
    }

    @Test
    public void givenInjectQualifier_WhenOnField_ThenNamedFileValid(){
        assertNotNull(namedDependency);
        assertEquals("Another Arbitrary Dependency",
          namedDependency.toString());
    }
}

If there are multiple implementations of a particular class in an application context and the FieldQualifierInjectTest integration test attempts to inject the dependencies in the manner listed below:

@Inject 
private ArbitraryDependency defaultDependency;

@Inject 
private ArbitraryDependency namedDependency;

a NoUniqueBeanDefinitionException will be thrown.

Throwing this exception is the Spring Framework’s way of pointing out that there are multiple implementations of a certain class and it does not know which one to use. To clear up the confusion, go to the two field declarations of the FieldQualifierInjectTest integration test:

@Inject
private ArbitraryDependency defaultDependency;

@Inject
private ArbitraryDependency namedDependency;

and pass the required bean name to the @Qualifier annotation, which is used together with the @Inject annotation. The code block should now look as follows:

@Inject
@Qualifier("defaultFile")
private ArbitraryDependency defaultDependency;

@Inject
@Qualifier("namedFile")
private ArbitraryDependency namedDependency;

The @Qualifier annotation expects a strict match when receiving a bean name. Ensure that the bean name is passed to the @Qualifier correctly; otherwise a NoUniqueBeanDefinitionException will be thrown. Run the test again, and this time it should pass.

3.1.3. Match by Name

The FieldByNameInjectTest integration test used to demonstrate match-by-name is similar to the match-by-type example. The only difference is that now a specific bean is required, as opposed to a specific type. In this example, we subclass the ArbitraryDependency class again to produce the YetAnotherArbitraryDependency class:

public class YetAnotherArbitraryDependency extends ArbitraryDependency {

    private final String label = "Yet Another Arbitrary Dependency";

    public String toString() {
        return label;
    }
}

In order to demonstrate the match-by-name execution path, we will use the following integration test:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(
  loader=AnnotationConfigContextLoader.class,
  classes=ApplicationContextTestInjectName.class)
public class FieldByNameInjectTest {

    @Inject
    @Named("yetAnotherFieldInjectDependency")
    private ArbitraryDependency yetAnotherFieldInjectDependency;

    @Test
    public void givenInjectQualifier_WhenSetOnField_ThenDependencyValid(){
        assertNotNull(yetAnotherFieldInjectDependency);
        assertEquals("Yet Another Arbitrary Dependency",
          yetAnotherFieldInjectDependency.toString());
    }
}

The application context is listed as follows:

@Configuration
public class ApplicationContextTestInjectName {

    @Bean
    public ArbitraryDependency yetAnotherFieldInjectDependency() {
        ArbitraryDependency yetAnotherFieldInjectDependency =
          new YetAnotherArbitraryDependency();
        return yetAnotherFieldInjectDependency;
    }
}

Run the integration test as-is, and it will pass.

In order to verify that the dependency was indeed injected by the match-by-name execution path, change the value, yetAnotherFieldInjectDependency, that was passed in to the @Named annotation to another name of your choice. Run the test again – this time, a NoSuchBeanDefinitionException is thrown.

3.2. Setter Injection

Setter-based injection for the @Inject annotation is similar to the approach used for @Resource setter-based injection: instead of annotating the reference variable, the corresponding setter method is annotated. The execution paths followed by field-based dependency injection also apply to setter-based injection, as the sketch below shows.
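
A minimal sketch, reusing the ArbitraryDependency bean from the field-injection examples (the setter name here is our own):

private ArbitraryDependency setterInjectDependency;

// @Inject on the setter instead of the field; resolution still starts with match-by-type
@Inject
public void setSetterInjectDependency(ArbitraryDependency setterInjectDependency) {
    this.setterInjectDependency = setterInjectDependency;
}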

4. The @Autowired Annotation

The behavior of the @Autowired annotation is similar to that of the @Inject annotation. The only difference is that the @Autowired annotation is part of the Spring Framework. It has the same execution paths as the @Inject annotation, listed in order of precedence:

  1. Match by Type
  2. Match by Qualifier
  3. Match by Name

These execution paths are applicable to both setter and field injection.

4.1. Field Injection

4.1.1. Match by Type

The integration testing example used to demonstrate the @Autowired match-by-type execution path will be similar to the test used to demonstrate the @Inject match-by-type execution path. The FieldAutowiredTest integration test used to demonstrate match-by-type using the @Autowired annotation is listed as follows:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(
  loader=AnnotationConfigContextLoader.class,
  classes=ApplicationContextTestAutowiredType.class)
public class FieldAutowiredTest {

    @Autowired
    private ArbitraryDependency fieldDependency;

    @Test
    public void givenAutowired_WhenSetOnField_ThenDependencyResolved() {
        assertNotNull(fieldDependency);
        assertEquals("Arbitrary Dependency", fieldDependency.toString());
    }
}

The application context for this integration test is listed as follows:

@Configuration
public class ApplicationContextTestAutowiredType {

    @Bean
    public ArbitraryDependency autowiredFieldDependency() {
        ArbitraryDependency autowiredFieldDependency =
          new ArbitraryDependency();
        return autowiredFieldDependency;
    }
}

The objective of the integration test is to demonstrate that match-by-type takes precedence over the other execution paths. Notice how the reference variable name in the FieldAutowiredTest integration test:

@Autowired
private ArbitraryDependency fieldDependency;

differs from the bean name in the application context:

@Bean
public ArbitraryDependency autowiredFieldDependency() {
    ArbitraryDependency autowiredFieldDependency =
      new ArbitraryDependency();
    return autowiredFieldDependency;
}

When the test is run, it will pass.

In order to confirm that the dependency was indeed resolved using the match-by-type execution path, change the type of the fieldDependency reference variable and run the integration test again. This time, the FieldAutowiredTest integration test fails with a NoSuchBeanDefinitionException, which verifies that match-by-type was used to resolve the dependency.

4.1.2. Match by Qualifier

What if we’re faced with a situation in which multiple bean implementations have been defined in the application context, like the one listed below:

@Configuration
public class ApplicationContextTestAutowiredQualifier {

    @Bean
    public ArbitraryDependency autowiredFieldDependency() {
        ArbitraryDependency autowiredFieldDependency =
          new ArbitraryDependency();
        return autowiredFieldDependency;
    }

    @Bean
    public ArbitraryDependency anotherAutowiredFieldDependency() {
        ArbitraryDependency anotherAutowiredFieldDependency =
          new AnotherArbitraryDependency();
        return anotherAutowiredFieldDependency;
    }
}

If the FieldQualifierAutowiredTest integration test, listed below, is executed:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(
  loader=AnnotationConfigContextLoader.class,
  classes=ApplicationContextTestAutowiredQualifier.class)
public class FieldQualifierAutowiredTest {

    @Autowired
    private ArbitraryDependency fieldDependency1;

    @Autowired
    private ArbitraryDependency fieldDependency2;

    @Test
    public void givenAutowiredQualifier_WhenOnField_ThenDep1Valid(){
        assertNotNull(fieldDependency1);
        assertEquals("Arbitrary Dependency", fieldDependency1.toString());
    }

    @Test
    public void givenAutowiredQualifier_WhenOnField_ThenDep2Valid(){
        assertNotNull(fieldDependency2);
        assertEquals("Another Arbitrary Dependency",
          fieldDependency2.toString());
    }
}

a NoUniqueBeanDefinitionException will be thrown.

The exception is due to the ambiguity caused by the two beans defined in the application context; the Spring Framework does not know which bean dependency should be autowired to which reference variable. Resolve this issue by adding the @Qualifier annotation to the two field declarations of the FieldQualifierAutowiredTest integration test:

@Autowired
private ArbitraryDependency fieldDependency1;

@Autowired
private ArbitraryDependency fieldDependency2;

so that the code block looks as follows:

@Autowired
@Qualifier("autowiredFieldDependency")
private ArbitraryDependency fieldDependency1;

@Autowired
@Qualifier("anotherAutowiredFieldDependency")
private ArbitraryDependency fieldDependency2;

Run the test again, and this time it will pass.

4.1.3. Match by Name

The same integration test scenario will be used to demonstrate the match-by-name execution path when using the @Autowired annotation to inject a field dependency. When autowiring dependencies by name, the @ComponentScan annotation must be used with the application context, ApplicationContextTestAutowiredName:

@Configuration
@ComponentScan(basePackages={"com.baeldung.dependency"})
public class ApplicationContextTestAutowiredName {
}

The @ComponentScan annotation will search packages for Java classes that have been annotated with the @Component annotation. For example, in the application context, the com.baeldung.dependency package will be scanned for classes that have been annotated with the @Component annotation. In this scenario, the Spring Framework must detect the ArbitraryDependency class, which has the @Component annotation:

@Component(value="autowiredFieldDependency")
public class ArbitraryDependency {

    private final String label = "Arbitrary Dependency";

    public String toString() {
        return label;
    }
}

The attribute value, autowiredFieldDependency, passed into the @Component annotation, tells the Spring Framework that the ArbitraryDependency class is a component named autowiredFieldDependency. In order for the @Autowired annotation to resolve dependencies by name, the component name must correspond to the field name defined in the FieldAutowiredNameTest integration test:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(
  loader=AnnotationConfigContextLoader.class,
  classes=ApplicationContextTestAutowiredName.class)
public class FieldAutowiredNameTest {

    @Autowired
    private ArbitraryDependency autowiredFieldDependency;

    @Test
    public void givenAutowiredAnnotation_WhenOnField_ThenDepValid(){
        assertNotNull(autowiredFieldDependency);
        assertEquals("Arbitrary Dependency",
          autowiredFieldDependency.toString());
    }
}

When the FieldAutowiredNameTest integration test is run as-is, it will pass.

But how do we know that the @Autowired annotation really did invoke the match-by-name execution path? Change the name of the reference variable autowiredFieldDependency to another name of your choice, then run the test again.

This time, the test will fail and a NoUniqueBeanDefinitionException is thrown. A similar check would be to change the @Component attribute value, autowiredFieldDependency, to another value of your choice and run the test again. A NoUniqueBeanDefinitionException will also be thrown.

This exception is proof that if an incorrect bean name is used, no valid bean will be found. Therefore, the match-by-name execution path was invoked.

4.2. Setter Injection

Setter-based injection for the @Autowired annotation is similar to the approach demonstrated for @Resource setter-based injection: instead of annotating the reference variable with the @Autowired annotation, the corresponding setter is annotated. The execution paths followed by field-based dependency injection also apply to setter-based injection, as the sketch below shows.
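
A minimal sketch, again reusing the ArbitraryDependency bean (the setter name is our own):

private ArbitraryDependency setterDependency;

// @Autowired on the setter; resolution follows the same match-by-type-first precedence
@Autowired
public void setSetterDependency(ArbitraryDependency setterDependency) {
    this.setterDependency = setterDependency;
}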

5. Applying These Annotations

This raises the question: which annotation should be used, and under what circumstances? The answer depends on the design scenario faced by the application in question, and on how the developer wishes to leverage polymorphism based on the default execution paths of each annotation.

5.1. Application-Wide Use of Singletons Through Polymorphism

If the design is such that application behaviors are based on implementations of an interface or an abstract class, and these behaviors are used throughout the application, then use either the @Inject or @Autowired annotation.

The benefit of this approach is that when the application is upgraded, or a patch needs to be applied to fix a bug, classes can be swapped out with minimal negative impact on the overall application behavior. In this scenario, the primary default execution path is match-by-type.
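
A hypothetical illustration (NotificationSender and EmailSender are invented names, not from the examples above): consumers depend only on the interface, so a patched implementation can be dropped in by changing a single bean definition.

// An application-wide behavior expressed as an interface
public interface NotificationSender {
    void send(String message);
}

@Component
public class EmailSender implements NotificationSender {
    public void send(String message) {
        // deliver the message by e-mail ...
    }
}

// Consumers autowire the interface; match-by-type finds the single implementation
@Autowired
private NotificationSender notificationSender;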

5.2. Fine-Grained Application Behavior Configuration Through Polymorphism

If the design is such that the application has complex behavior, each behavior is based on different interfaces/abstract classes, and usage of each of these implementations varies across the application, then use the @Resource annotation. In this scenario, the primary default execution path is match-by-name.

5.3. Dependency Injection Should Be Handled Solely by the Java EE Platform

If there is a design mandate for all dependencies to be injected by the Java EE Platform and not Spring, then the choice is between the @Resource annotation and the @Inject annotation. You should narrow down the final decision between the two annotations, based on which default execution path is required.

5.4. Dependency Injection Should Be Handled Solely by the Spring Framework

If the mandate is for all dependencies to be handled by the Spring Framework, the only choice is the @Autowired annotation.

5.5. Discussion Summary

The table below summarizes the discussion.

Scenario                                                               | @Resource | @Inject | @Autowired
Application-wide use of singletons through polymorphism                |           |    ✓    |     ✓
Fine-grained application behavior configuration through polymorphism  |     ✓     |         |
Dependency injection handled solely by the Java EE platform           |     ✓     |    ✓    |
Dependency injection handled solely by the Spring Framework           |           |         |     ✓

6. Conclusion

This article aimed to provide a deeper insight into the behavior of each annotation. Understanding how each annotation behaves will contribute to better overall application design and maintenance.

The code used during the discussion can be found on GitHub.

Get the early-bird price (20% Off) of my upcoming "Learn Spring Security" Course:

>> CHECK OUT THE COURSE

Introduction to JSON Schema in Java


I usually post about Jackson and JSON stuff on Twitter - you can follow me there:

1. Overview

JSON Schema is a declarative language for validating the format and structure of a JSON Object. It allows us to specify a number of special primitives to describe exactly what a valid JSON Object will look like.

The JSON Schema specification is divided into three parts:

  • JSON Schema Core: The JSON Schema Core specification is where the terminology for a schema is defined.
  • JSON Schema Validation: The JSON Schema Validation specification is the document that defines the valid ways to define validation constraints. This document also defines a set of keywords that can be used to specify validations for a JSON API. In the examples that follow, we’ll be using some of these keywords.
  • JSON Hyper-Schema: This is another extension of the JSON Schema spec, wherein the hyperlink- and hypermedia-related keywords are defined.

2. Defining a JSON Schema

Now that we have defined what a JSON Schema is used for, let’s create a JSON Object and the corresponding JSON Schema describing it.

The following is a simple JSON Object representing a product catalog:

{
    "id": 1,
    "name": "Lampshade",
    "price": 0
}

We could define its JSON Schema as follows:

{
    "$schema": "http://json-schema.org/draft-04/schema#",
    "title": "Product",
    "description": "A product from the catalog",
    "type": "object",
    "properties": {
        "id": {
            "description": "The unique identifier for a product",
            "type": "integer"
        },
        "name": {
            "description": "Name of the product",
            "type": "string"
        },
        "price": {
            "type": "number",
            "minimum": 0,
            "exclusiveMinimum": true
        }
    },
    "required": ["id", "name", "price"]
}

As we can see, a JSON Schema is itself a JSON document, and that document MUST be an object. Object members (or properties) defined by JSON Schema are called keywords.

Let’s explain the keywords that we have used in our sample:

  • The $schema keyword states that this schema is written according to the draft v4 specification.
  • The title and description keywords are descriptive only; they do not add constraints to the data being validated. The intent of the schema is stated with these two keywords: it describes a product.
  • The type keyword defines the first constraint on our JSON data: it has to be a JSON Object.

Also, a JSON Schema MAY contain properties that are not schema keywords. In our case, id, name and price will be members (or properties) of the JSON Object.

For each property we can define a type. We defined id as an integer, name as a string, and price as a number. In JSON Schema, a number can have a minimum; by default this minimum is inclusive, so to reject a price of exactly zero we also need to specify exclusiveMinimum.

Finally, the schema states that id, name and price are required.

3. Validation with JSON Schema

With our JSON Schema in place, we can validate our JSON Object.

There are many libraries that can accomplish this task. For our example, we’ve chosen the Everit json-schema library.

First of all we need to add the following dependency to our pom.xml:

<dependency>
    <groupId>org.everit.json</groupId>
    <artifactId>org.everit.json.schema</artifactId>
    <version>1.3.0</version>
</dependency>

Finally, we can write a couple of simple test cases to validate our JSON Object:

@Test(expected = ValidationException.class)
public void givenInvalidInput_whenValidating_thenInvalid() {
    JSONObject jsonSchema = new JSONObject(
      new JSONTokener(JSONSchemaTest.class.getResourceAsStream("/schema.json")));
    JSONObject jsonSubject = new JSONObject(
      new JSONTokener(JSONSchemaTest.class.getResourceAsStream("/product_invalid.json")));

    Schema schema = SchemaLoader.load(jsonSchema);
    schema.validate(jsonSubject); // throws ValidationException for the invalid product
}

In this case, the thrown ValidationException points to #/price. If you look at the console, it will print the following output:

#/price: 0.0 is not higher than 0

The second test looks like the following:

@Test
public void givenValidInput_whenValidating_thenValid() {
    JSONObject jsonSchema = new JSONObject(
      new JSONTokener(JSONSchemaTest.class.getResourceAsStream("/schema.json")));
    JSONObject jsonSubject = new JSONObject(
      new JSONTokener(JSONSchemaTest.class.getResourceAsStream("/product_valid.json")));

    Schema schema = SchemaLoader.load(jsonSchema);
    schema.validate(jsonSubject);
}

Since we use a valid JSON Object, no validation error is thrown.

4. Conclusion

In this article we’ve defined what a JSON Schema is and covered some of the relevant keywords that help us define our own schemas.

By coupling a JSON Schema with its corresponding JSON Object representation, we can perform validation tasks.

The simple test cases from this article can be found in the GitHub project.

I usually post about Jackson and JSON stuff on Twitter - you should follow me there:

