Channel: Baeldung

Guide to Guava’s Reflection Utilities


1. Overview

In this article, we’ll be looking at the Guava reflection API – which is definitely more versatile compared to the standard Java reflection API.

We’ll be using Guava to capture generic types at runtime, and we’ll be making good use of Invokable as well.

2. Capturing Generic Type at Runtime

In Java, generics are implemented with type erasure. That means that the generic type information is only available at compile time and, at runtime – it’s no longer available.

For example, List<String>, the information about generic type gets erased at runtime. Due to that fact, it is not safe to pass around generic Class objects at runtime.

We might end up assigning two lists that have different generic types to the same reference, which is clearly not a good idea:

List<String> stringList = Lists.newArrayList();
List<Integer> intList = Lists.newArrayList();

boolean result = stringList.getClass()
  .isAssignableFrom(intList.getClass());

assertTrue(result);

Because of type erasure, the method isAssignableFrom() cannot know the actual generic types of the lists. It basically compares two types that are both just List, without any information about the actual element type.

By using the standard Java reflection API we can detect the generic types of methods and classes. If we have a method that returns a List<String>, we can use reflection to obtain the return type of that method – a ParameterizedType representing List<String>.
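As a quick sketch of that standard-reflection approach (the class and method here are illustrative, not part of the Guava API):

```java
import java.lang.reflect.Method;
import java.lang.reflect.ParameterizedType;
import java.util.Collections;
import java.util.List;

public class GenericReturnTypeDemo {

    // illustrative method whose generic return type we want to inspect
    public List<String> namesList() {
        return Collections.emptyList();
    }

    public static void main(String[] args) throws Exception {
        Method method = GenericReturnTypeDemo.class.getMethod("namesList");

        // the *generic* return type still carries the String type argument
        ParameterizedType type = (ParameterizedType) method.getGenericReturnType();

        System.out.println(type.getRawType());                // interface java.util.List
        System.out.println(type.getActualTypeArguments()[0]); // class java.lang.String
    }
}
```

The generic signature survives erasure in the class file’s metadata, which is exactly what getGenericReturnType() reads.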

The TypeToken class uses this workaround to allow the manipulation of generic types. We can use the TypeToken class to capture the actual type of a generic list and check whether two such lists can really be referenced by the same reference:

TypeToken<List<String>> stringListToken
  = new TypeToken<List<String>>() {};
TypeToken<List<Integer>> integerListToken
  = new TypeToken<List<Integer>>() {};
TypeToken<List<? extends Number>> numberTypeToken
  = new TypeToken<List<? extends Number>>() {};

assertFalse(stringListToken.isSubtypeOf(integerListToken));
assertFalse(numberTypeToken.isSubtypeOf(integerListToken));
assertTrue(integerListToken.isSubtypeOf(numberTypeToken));

Only the integerListToken can be assigned to a reference of type numberTypeToken, because Integer extends Number.

3. Capturing Complex Types Using TypeToken

Let’s say that we want to create a generic parameterized class, and we want to have information about a generic type at runtime. We can create a class that has a TypeToken as a field to capture that information:

abstract class ParametrizedClass<T> {
    TypeToken<T> type = new TypeToken<T>(getClass()) {};
}

Then, when creating an instance of that class, the generic type will be available at runtime:

ParametrizedClass<String> parametrizedClass = new ParametrizedClass<String>() {};

assertEquals(parametrizedClass.type, TypeToken.of(String.class));

We can also create a TypeToken of a complex type that has more than one generic type, and retrieve information about each of those types at runtime:

TypeToken<Function<Integer, String>> funToken
  = new TypeToken<Function<Integer, String>>() {};

TypeToken<?> funResultToken = funToken
  .resolveType(Function.class.getTypeParameters()[1]);

assertEquals(funResultToken, TypeToken.of(String.class));

We get the actual return type of the Function, which is String. We can even get the type of a map’s entry set:

TypeToken<Map<String, Integer>> mapToken
  = new TypeToken<Map<String, Integer>>() {};

TypeToken<?> entrySetToken = mapToken
  .resolveType(Map.class.getMethod("entrySet")
  .getGenericReturnType());

assertEquals(
  entrySetToken,
  new TypeToken<Set<Map.Entry<String, Integer>>>() {});

Here we use the standard Java reflection method getMethod() to capture the return type of a method.

4. Invokable

Invokable is a fluent wrapper of java.lang.reflect.Method and java.lang.reflect.Constructor. It provides a simpler API on top of the standard Java reflection API. Let’s say that we have a class with two public methods, one of which is final:

class CustomClass {
    public void somePublicMethod() {}

    public final void notOverridablePublicMethod() {}
}

Now let’s examine somePublicMethod() using the Guava API and the standard Java reflection API:

Method method = CustomClass.class.getMethod("somePublicMethod");
Invokable<CustomClass, ?> invokable 
  = new TypeToken<CustomClass>() {}
  .method(method);

boolean isPublicStandardJava = Modifier.isPublic(method.getModifiers());
boolean isPublicGuava = invokable.isPublic();

assertTrue(isPublicStandardJava);
assertTrue(isPublicGuava);

There is not much difference between these two variants, but checking if a method is overridable is a really non-trivial task in Java. Fortunately, the isOverridable() method from the Invokable class makes it easier:

Method method = CustomClass.class.getMethod("notOverridablePublicMethod");
Invokable<CustomClass, ?> invokable
 = new TypeToken<CustomClass>() {}.method(method);

boolean isOverridableStandardJava = (!(Modifier.isFinal(method.getModifiers()) 
  || Modifier.isPrivate(method.getModifiers())
  || Modifier.isStatic(method.getModifiers())
  || Modifier.isFinal(method.getDeclaringClass().getModifiers())));
boolean isOverridableGuava = invokable.isOverridable();

assertFalse(isOverridableStandardJava);
assertFalse(isOverridableGuava);

We see that even such a simple operation needs a lot of checks with the standard reflection API. The Invokable class hides this behind an API that is simple to use and very concise.

5. Conclusion

In this article, we looked at the Guava reflection API and compared it to standard Java reflection. We saw how to capture generic types at runtime, and how the Invokable class provides an elegant, easy-to-use API for code that uses reflection.

The implementation of all these examples and code snippets can be found in the GitHub project – this is a Maven project, so it should be easy to import and run as it is.


Mockito’s Java 8 Features


1. Overview

Java 8 introduced a range of new, awesome features, like lambda expressions and streams. Naturally, Mockito leveraged these innovations in its second major version.

In this article, we are going to explore everything this powerful combination has to offer.

2. Mocking Interface With a Default Method

From Java 8 onwards, we can write method implementations in our interfaces. This might be great new functionality, but its introduction violated a strong concept that had been part of Java since its inception.
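As a quick refresher, here is a minimal sketch of a default method (the interface and names are illustrative):

```java
public class DefaultMethodDemo {

    interface Greeter {
        String name();

        // since Java 8, an interface may carry a method body
        default String greet() {
            return "Hello, " + name();
        }
    }

    public static void main(String[] args) {
        // only the abstract method needs an implementation
        Greeter greeter = () -> "World";
        System.out.println(greeter.greet()); // Hello, World
    }
}
```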

Mockito version 1 was not ready for this change, because it didn’t allow us to ask it to call real methods on interfaces.

Imagine that we have an interface with two method declarations: the first one is the old-fashioned method signature we’re all used to, and the other is a brand new default method:

public interface JobService {
 
    Optional<JobPosition> findCurrentJobPosition(Person person);
    
    default boolean assignJobPosition(Person person, JobPosition jobPosition) {
        if(!findCurrentJobPosition(person).isPresent()) {
            person.setCurrentJobPosition(jobPosition);
            
            return true;
        } else {
            return false;
        }
    }
}

Notice that the assignJobPosition() default method has a call to the unimplemented findCurrentJobPosition() method.

Now, suppose we want to test our implementation of assignJobPosition() without writing an actual implementation of findCurrentJobPosition(). We could simply create a mocked version of JobService, then tell Mockito to return a known value from the call to our unimplemented method and call the real method when assignJobPosition() is called:

public class JobServiceUnitTest {
 
    @Mock
    private JobService jobService;

    @Test
    public void givenDefaultMethod_whenCallRealMethod_thenNoExceptionIsRaised() {
        Person person = new Person();

        when(jobService.findCurrentJobPosition(person))
              .thenReturn(Optional.of(new JobPosition()));

        doCallRealMethod().when(jobService)
          .assignJobPosition(
            Mockito.any(Person.class), 
            Mockito.any(JobPosition.class)
        );

        assertFalse(jobService.assignJobPosition(person, new JobPosition()));
    }
}

This is perfectly reasonable, and it would work just fine if we were using an abstract class instead of an interface.

However, the inner workings of Mockito 1 were just not ready for this structure. If we were to run this code with Mockito prior to version 2, we would get this nicely descriptive error:

org.mockito.exceptions.base.MockitoException:
Cannot call real method on java interface. Interface does not have any implementation!
Calling real methods is only possible when mocking concrete classes.

Mockito is doing its job and telling us it can’t call real methods on interfaces since this operation was unthinkable before Java 8.

The good news is that just by changing the version of Mockito we’re using we can make this error go away. Using Maven, for example, we could use version 2.7.5 (the latest Mockito version can be found here):

<dependency>
    <groupId>org.mockito</groupId>
    <artifactId>mockito-core</artifactId>
    <version>2.7.5</version>
    <scope>test</scope>
</dependency>

There is no need to make any changes to the code. The next time we run our test, the error will no longer occur.

3. Return Default Values for Optional and Stream

Optional and Stream are two other new Java 8 additions. One similarity between the two classes is that each has a special value representing an empty object. This empty object makes it easier to avoid the thus-far omnipresent NullPointerException.
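A quick plain-Java illustration of those empty values (the class name is ours):

```java
import java.util.Optional;
import java.util.stream.Stream;

public class EmptyDefaults {

    public static void main(String[] args) {
        // an empty Optional can be queried safely, unlike a null reference
        Optional<String> none = Optional.empty();
        System.out.println(none.isPresent());        // false
        System.out.println(none.orElse("fallback")); // fallback

        // an empty Stream supports the whole pipeline API without null checks
        System.out.println(Stream.empty().count()); // 0
    }
}
```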

3.1. Example with Optional

Consider a service that injects the JobService described in the previous section and has a method that calls JobService#findCurrentJobPosition():

public class UnemploymentServiceImpl implements UnemploymentService {
 
    private JobService jobService;
    
    public UnemploymentServiceImpl(JobService jobService) {
        this.jobService = jobService;
    }

    @Override
    public boolean personIsEntitledToUnemploymentSupport(Person person) {
        Optional<JobPosition> optional = jobService.findCurrentJobPosition(person);
        
        return !optional.isPresent();
    }
}

Now, assume we want to create a test to check that, when a person has no current job position, they are entitled to unemployment support.

In that case, we would force findCurrentJobPosition() to return an empty Optional. Before Mockito 2, we were required to mock the call to that method:

public class UnemploymentServiceImplUnitTest {
 
    @Mock
    private JobService jobService;

    @InjectMocks
    private UnemploymentServiceImpl unemploymentService;

    @Test
    public void givenReturnIsOfTypeOptional_whenMocked_thenValueIsEmpty() {
        Person person = new Person();

        when(jobService.findCurrentJobPosition(any(Person.class)))
          .thenReturn(Optional.empty());
        
        assertTrue(unemploymentService.personIsEntitledToUnemploymentSupport(person));
    }
}

This when(…).thenReturn(…) instruction is necessary because Mockito’s default return value for any method call on a mocked object is null. Version 2 changed that behavior.

Since we rarely handle null values when dealing with Optional, Mockito now returns an empty Optional by default. That is the exact same value as the return of a call to Optional.empty().

So, when using Mockito version 2, we could get rid of this stubbing and our test would still be successful:

public class UnemploymentServiceImplUnitTest {
 
    @Test
    public void givenReturnIsOptional_whenDefaultValueIsReturned_thenValueIsEmpty() {
        Person person = new Person();
 
        assertTrue(unemploymentService.personIsEntitledToUnemploymentSupport(person));
    }
}

3.2. Example with Stream

The same behavior occurs when we mock a method that returns a Stream.

Let’s add a new method to our JobService interface that returns a Stream representing all the job positions that a person has ever worked at:

public interface JobService {
    Stream<JobPosition> listJobs(Person person);
}

This method is used by another new method that queries whether a person has ever worked at a job matching a given search string:

public class UnemploymentServiceImpl implements UnemploymentService {
   
    @Override
    public Optional<JobPosition> searchJob(Person person, String searchString) {
        return jobService.listJobs(person)
          .filter((j) -> j.getTitle().contains(searchString))
          .findFirst();
    }
}

So, assume we want to properly test the implementation of searchJob() without having to write listJobs(), and that we want to test the scenario where the person hasn’t worked at any job yet. In that case, we want listJobs() to return an empty Stream.

Before Mockito 2, we would need to mock the call to listJobs() to write such a test:

public class UnemploymentServiceImplUnitTest {
 
    @Test
    public void givenReturnIsOfTypeStream_whenMocked_thenValueIsEmpty() {
        Person person = new Person();
        when(jobService.listJobs(any(Person.class))).thenReturn(Stream.empty());
        
        assertFalse(unemploymentService.searchJob(person, "").isPresent());
    }
}

If we upgrade to version 2, we could drop the when(…).thenReturn(…) call, because now Mockito will return an empty Stream on mocked methods by default:

public class UnemploymentServiceImplUnitTest {
 
    @Test
    public void givenReturnIsStream_whenDefaultValueIsReturned_thenValueIsEmpty() {
        Person person = new Person();
        
        assertFalse(unemploymentService.searchJob(person, "").isPresent());
    }
}

4. Leveraging Lambda Expressions

With Java 8’s lambda expressions we can make statements much more compact and easier to read. When working with Mockito, two very nice examples of the simplicity brought by lambda expressions are ArgumentMatchers and custom Answers.

4.1. Combination of Lambda and ArgumentMatcher

Before Java 8, we needed to create a class that implemented ArgumentMatcher and write our custom rule in its matches() method.

With Java 8, we can replace the inner class with a simple lambda expression:

public class ArgumentMatcherWithLambdaUnitTest {
 
    @Test
    public void whenPersonWithJob_thenIsNotEntitled() {
        Person peter = new Person("Peter");
        Person linda = new Person("Linda");
        
        JobPosition teacher = new JobPosition("Teacher");

        when(jobService.findCurrentJobPosition(
          ArgumentMatchers.argThat(p -> p.getName().equals("Peter"))))
          .thenReturn(Optional.of(teacher));
        
        assertTrue(unemploymentService.personIsEntitledToUnemploymentSupport(linda));
        assertFalse(unemploymentService.personIsEntitledToUnemploymentSupport(peter));
    }
}

4.2. Combination of Lambda and Custom Answer

The same effect can be achieved when combining lambda expressions with Mockito’s Answer.

For example, if we wanted to simulate calls to the listJobs() method in order to make it return a Stream containing a single JobPosition if the Person‘s name is “Peter”, and an empty Stream otherwise, we would have to create a class (anonymous or inner) that implemented the Answer interface.

Again, using a lambda expression allows us to write all the mock behavior inline:

public class CustomAnswerWithLambdaUnitTest {
 
    @Before
    public void init() {
        MockitoAnnotations.initMocks(this);

        when(jobService.listJobs(any(Person.class))).then((i) ->
          Stream.of(new JobPosition("Teacher"))
          .filter(p -> ((Person) i.getArgument(0)).getName().equals("Peter")));
    }
}

Notice that, in the implementation above, there is no need for the PersonAnswer inner class.

5. Conclusion

In this article, we covered how to leverage Java 8 and Mockito 2 features together to write cleaner, simpler and shorter code. If you are not familiar with some of the Java 8 features we saw here, check out our other Java 8 articles.

Also, check the accompanying code on our GitHub repository.

AngularJS CRUD Application with Spring Data REST


1. Overview

In this tutorial, we’re going to create an example of a simple CRUD application using AngularJS for the front-end and Spring Data REST for the back-end.

2. Creating the REST Data Service

In order to create the support for persistence, we’ll make use of the Spring Data REST specification that will enable us to perform CRUD operations on a data model.

You can find all the necessary information on how to set up the REST endpoints in the introduction to Spring Data REST. In this article, we will reuse the existing project we set up for that introduction tutorial.

For persistence, we will use the H2 in-memory database.

As a data model, the previous article defines a WebsiteUser class, with id, name and email properties and a repository interface called UserRepository.

Defining this interface instructs Spring to expose REST collection resources and item resources for us. Let’s take a closer look at the endpoints that are now available, which we will later call from AngularJS.

2.1. The Collection Resources

A list of all the users will be available to us at the endpoint /users. This URL can be called using the GET method and will return JSON objects of the form:

{
  "_embedded" : {
    "users" : [ {
      "name" : "Bryan",
      "email" : "bryan@yahoo.com",
      "_links" : {
        "self" : {
          "href" : "http://localhost:8080/users/1"
        },
        "User" : {
          "href" : "http://localhost:8080/users/1"
        }
      }
    }, 
...
    ]
  }
}

2.2. The Item Resources

A single WebsiteUser object can be manipulated by accessing URLs of the form /users/{userID} with different HTTP methods and request payloads.

For retrieving a WebsiteUser object, we can access /users/{userID} with the GET method. This returns a JSON object of the form:

{
  "name" : "Bryan",
  "email" : "bryan@yahoo.com",
  "_links" : {
    "self" : {
      "href" : "http://localhost:8080/users/1"
    },
    "User" : {
      "href" : "http://localhost:8080/users/1"
    }
  }
}

To add a new WebsiteUser, we will need to call /users with the POST method. The attributes of the new WebsiteUser record are added to the request body as a JSON object:

{name: "Bryan", email: "bryan@yahoo.com"}

If there are no errors, this URL returns a status code 201 CREATED.

If we want to update the attributes of the WebsiteUser record, we need to call the URL /users/{userID} with the PATCH method and a request body containing the new values:

{name: "Bryan", email: "bryan@gmail.com"}

To delete a WebsiteUser record, we can call the URL /users/{userID} with the DELETE method. If there are no errors, this returns status code 204 NO CONTENT.
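To make the calls above concrete, here is a rough sketch of how a Java client might build the POST request described in this section, using the java.net.http API available from Java 11 (the base URL and payload are just the examples above, not part of the project code):

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class UserRequests {

    // builds the POST request that creates a new WebsiteUser record;
    // the base URL is the local example address used in this article
    static HttpRequest addUser(String name, String email) {
        String json = String.format(
          "{\"name\": \"%s\", \"email\": \"%s\"}", name, email);

        return HttpRequest.newBuilder()
          .uri(URI.create("http://localhost:8080/users"))
          .header("Content-Type", "application/json")
          .POST(HttpRequest.BodyPublishers.ofString(json))
          .build();
    }

    public static void main(String[] args) {
        HttpRequest request = addUser("Bryan", "bryan@yahoo.com");
        System.out.println(request.method() + " " + request.uri()); // POST http://localhost:8080/users
    }
}
```

Sending the request with HttpClient.newHttpClient().send(…) against a running server would then return the 201 CREATED status on success.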

2.3. MVC Configuration

We’ll also add a basic MVC configuration to display HTML files in our application:

@Configuration
@EnableWebMvc
public class MvcConfig extends WebMvcConfigurerAdapter{
    
    public MvcConfig(){
        super();
    }
    
    @Override
    public void configureDefaultServletHandling(
      DefaultServletHandlerConfigurer configurer) {
        configurer.enable();
    }
}

2.4. Allowing Cross Origin Requests

If we want to deploy the AngularJS front-end application separately from the REST API, we need to enable cross-origin requests.

Spring Data REST has added support for this starting with version 1.5.0.RELEASE. To allow requests from a different domain, all you have to do is add the @CrossOrigin annotation to the repository:

@CrossOrigin
@RepositoryRestResource(collectionResourceRel = "users", path = "users")
public interface UserRepository extends CrudRepository<WebsiteUser, Long> {}

As a result, an Access-Control-Allow-Origin header will be added to every response from the REST endpoints.

3. Creating the AngularJS Client

For creating the front end of our CRUD application, we’ll use AngularJS – a well-known JavaScript framework that eases the creation of front-end applications.

In order to use AngularJS, we first need to include the angular.min.js file in our HTML page, which will be called users.html:

<script 
  src="https://ajax.googleapis.com/ajax/libs/angularjs/1.5.6/angular.min.js">
</script>

Next, we need to create an Angular module, controller, and service that will call the REST endpoints and display the returned data.

These will be placed in a JavaScript file called app.js that also needs to be included in the users.html page:

<script src="view/app.js"></script>

3.1. Angular Service

First, let’s create an Angular service called UserCRUDService that will make use of the injected AngularJS $http service to make calls to the server. Each call will be placed in a separate method.

Let’s take a look at defining the method for retrieving a user by id using the /users/{userID} endpoint:

app.service('UserCRUDService', [ '$http', function($http) {

    this.getUser = function getUser(userId) {
        return $http({
            method : 'GET',
            url : 'users/' + userId
        });
    }
} ]);

Next, let’s define the addUser method which makes a POST request to the /users URL and sends the user values in the data attribute:

this.addUser = function addUser(name, email) {
    return $http({
        method : 'POST',
        url : 'users',
        data : {
            name : name,
            email: email
        }
    });
}

The updateUser method is similar to the one above, except it will have an id parameter and makes a PATCH request:

this.updateUser = function updateUser(id, name, email) {
    return $http({
        method : 'PATCH',
        url : 'users/' + id,
        data : {
            name : name,
            email: email
        }
    });
}

The method for deleting a WebsiteUser record will make a DELETE request:

this.deleteUser = function deleteUser(id) {
    return $http({
        method : 'DELETE',
        url : 'users/' + id
    })
}

And finally, let’s take a look at the method for retrieving the entire list of users:

this.getAllUsers = function getAllUsers() {
    return $http({
        method : 'GET',
        url : 'users'
    });
}

All of these service methods will be called by an AngularJS controller.

3.2. Angular Controller

We will create a UserCRUDCtrl AngularJS controller that has a UserCRUDService injected. The controller will use the service methods to obtain the response from the server, handle success and error cases, and set $scope variables containing the response data for display in the HTML page.

Let’s take a look at the getUser() function that calls the getUser(userId) service function and defines two callback methods in case of success and error. If the server request succeeds, then the response is saved in a user variable; otherwise, error messages are handled:

app.controller('UserCRUDCtrl', ['$scope','UserCRUDService', 
  function ($scope,UserCRUDService) {
      $scope.getUser = function () {
          var id = $scope.user.id;
          UserCRUDService.getUser($scope.user.id)
            .then(function success(response) {
                $scope.user = response.data;
                $scope.user.id = id;
                $scope.message='';
                $scope.errorMessage = '';
            },
            function error(response) {
                $scope.message = '';
                if (response.status === 404){
                    $scope.errorMessage = 'User not found!';
                }
                else {
                    $scope.errorMessage = "Error getting user!";
                }
            });
      };
}]);

The addUser() function will call the corresponding service function and handle the response:

$scope.addUser = function () {
    if ($scope.user != null && $scope.user.name) {
        UserCRUDService.addUser($scope.user.name, $scope.user.email)
          .then (function success(response){
              $scope.message = 'User added!';
              $scope.errorMessage = '';
          },
          function error(response){
              $scope.errorMessage = 'Error adding user!';
              $scope.message = '';
        });
    }
    else {
        $scope.errorMessage = 'Please enter a name!';
        $scope.message = '';
    }
}

The updateUser() and deleteUser() functions are similar to the one above:

$scope.updateUser = function () {
    UserCRUDService.updateUser($scope.user.id,
      $scope.user.name, $scope.user.email)
      .then(function success(response) {
          $scope.message = 'User data updated!';
          $scope.errorMessage = '';
      },
      function error(response) {
          $scope.errorMessage = 'Error updating user!';
          $scope.message = '';
      });
}

$scope.deleteUser = function () {
    UserCRUDService.deleteUser($scope.user.id)
      .then (function success(response) {
          $scope.message = 'User deleted!';
          $scope.user = null;
          $scope.errorMessage='';
      },
      function error(response) {
          $scope.errorMessage = 'Error deleting user!';
          $scope.message='';
      });
}

And finally, let’s define the function that retrieves a list of users, and stores it in the users variable:

$scope.getAllUsers = function () {
    UserCRUDService.getAllUsers()
      .then(function success(response) {
          $scope.users = response.data._embedded.users;
          $scope.message='';
          $scope.errorMessage = '';
      },
      function error (response) {
          $scope.message='';
          $scope.errorMessage = 'Error getting users!';
      });
}

3.3. HTML Page

The users.html page will make use of the controller functions defined in the previous section and the stored variables.

First, in order to use the Angular module, we need to set the ng-app property:

<html ng-app="app">

Then, to avoid typing UserCRUDCtrl.getUser() every time we use a function of the controller, we can wrap our HTML elements in a div with a ng-controller property set:

<div ng-controller="UserCRUDCtrl">

Let’s create the form that will input and display the values of the WebsiteUser object we want to manipulate. Each input will have an ng-model attribute set, which binds it to the corresponding attribute of the user object:

<table>
    <tr>
        <td width="100">ID:</td>
        <td><input type="text" id="id" ng-model="user.id" /></td>
    </tr>
    <tr>
        <td width="100">Name:</td>
        <td><input type="text" id="name" ng-model="user.name" /></td>
    </tr>
    <tr>
        <td width="100">Email:</td>
        <td><input type="text" id="email" ng-model="user.email" /></td>
    </tr>
</table>

Binding the id input to the user.id variable, for example, means that whenever the value of the input is changed, this value is set in the user.id variable and vice versa.

Next, let’s use the ng-click attribute to define the links that will trigger each of the CRUD controller functions we defined:

<a ng-click="getUser(user.id)">Get User</a>
<a ng-click="updateUser(user.id,user.name,user.email)">Update User</a>
<a ng-click="addUser(user.name,user.email)">Add User</a>
<a ng-click="deleteUser(user.id)">Delete User</a>

Finally, let’s display the entire list of users, with each user’s name and email:

<a ng-click="getAllUsers()">Get all Users</a><br/><br/>
<div ng-repeat="usr in users">
    {{usr.name}} {{usr.email}}
</div>

4. Conclusion

In this tutorial, we have shown how you can create a CRUD application using AngularJS and the Spring Data REST specification.

The complete code for the above example can be found in the GitHub project.

To run the application, you can use the command mvn spring-boot:run and access the URL /users.html.

A guide to the “when{}” block in Kotlin


1. Introduction

This tutorial introduces the when{} block in the Kotlin language and demonstrates the various ways it can be used.

To understand the material in this article, basic knowledge of the Kotlin language is needed. You can have a look at the introduction to the Kotlin Language article on Baeldung to learn more about the language.

2. Kotlin’s when{} Block

The when{} block is essentially an advanced form of the switch-case statement known from Java.

In Kotlin, if a matching case is found, only the code in the respective case block is executed, and execution continues with the next statement after the when block. This essentially means that no break statements are needed at the end of each case block.

To demonstrate the usage of when{}, let’s define an enum class that holds the first letter in the permissions field for some of the file types in Unix:

enum class UnixFileType {
    D, HYPHEN_MINUS, L
}

Let’s also define a hierarchy of classes that model the respective Unix file types:

sealed class UnixFile {

    abstract fun getFileType(): UnixFileType

    class RegularFile(val content: String) : UnixFile() {
        override fun getFileType(): UnixFileType {
            return UnixFileType.HYPHEN_MINUS
        }
    }

    class Directory(val children: List<UnixFile>) : UnixFile() {
        override fun getFileType(): UnixFileType {
            return UnixFileType.D
        }
    }

    class SymbolicLink(val originalFile: UnixFile) : UnixFile() {
        override fun getFileType(): UnixFileType {
            return UnixFileType.L
        }
    }
}

2.1. When{} as an Expression

A big difference from Java’s switch statement is that the when{} block in Kotlin can be used both as a statement and as an expression. Kotlin follows the principle, common to other functional languages, that flow-control structures are expressions, and the result of their evaluation can be returned to the caller.

If the value returned is assigned to a variable, the compiler will check that the type of the returned value is compatible with the type expected by the client, and will inform us if it is not:

@Test
fun testWhenExpression() {
    val directoryType = UnixFileType.D

    val objectType = when (directoryType) {
        UnixFileType.D -> "d"
        UnixFileType.HYPHEN_MINUS -> "-"
        UnixFileType.L -> "l"
    }

    assertEquals("d", objectType)
}

There are two things to notice when using when as an expression in Kotlin.

First, the value that is returned to the caller is the value of the matching case block, or in other words, the last defined value in that block.

The second thing to notice is that we need to guarantee that the caller gets a value. For this to happen, we need to ensure that the cases in the when block cover every possible value that can be assigned to the argument.

2.2. When{} as an Expression with Default Case

A default case will match any argument value not matched by a normal case, and in Kotlin it is declared using the else clause. In any case, the Kotlin compiler assumes that every possible argument value is covered by the when block, and will complain if it is not.

Let’s add a default case to Kotlin’s when expression:

@Test
fun testWhenExpressionWithDefaultCase() {
    val fileType = UnixFileType.L

    val result = when (fileType) {
        UnixFileType.L -> "linking to another file"
        else -> "not a link"
    }

    assertEquals("linking to another file", result)
}

2.3. When{} Expression with a Case that Throws an Exception

In Kotlin, throw returns a value of type Nothing.

In this case, Nothing is used to declare that the expression failed to compute a value. Nothing is a subtype of every user-defined and built-in type in Kotlin.

Therefore, since the type is compatible with any argument we would use in a when block, it is perfectly valid to throw an exception from a case, even if the when block is used as an expression.

Let’s define a when expression where one of the cases throws an exception:

@Test(expected = IllegalArgumentException::class)
fun testWhenExpressionWithThrowException() {
    val fileType = UnixFileType.L

    val result: Boolean = when (fileType) {
        UnixFileType.HYPHEN_MINUS -> true
        else -> throw IllegalArgumentException("Wrong type of file")
    }
}

2.4. When{} Used as a Statement

We can also use the when block as a statement.

In this case, we do not need to cover every possible value for the argument and the value computed in each case block, if any, is just ignored. When used as a statement, the when block can be used similarly to how the switch statement is used in Java.

Let’s use the when block as a statement:

@Test
fun testWhenStatement() {
    val fileType = UnixFileType.HYPHEN_MINUS

    when (fileType) {
        UnixFileType.HYPHEN_MINUS -> println("Regular file type")
        UnixFileType.D -> println("Directory file type")
    }
}

We can see from the example that it is not mandatory to cover all possible argument values when we are using when as a statement.

2.5. Combining When{} Cases

Kotlin’s when expression allows us to combine different cases into one by concatenating the matching conditions with a comma.

Only one case has to match for the respective block of code to be executed, so comma acts as an OR operator.

Let’s create a case that combines two conditions:

@Test
fun testCaseCombination() {
    val fileType = UnixFileType.D

    val frequentFileType: Boolean = when (fileType) {
        UnixFileType.HYPHEN_MINUS, UnixFileType.D -> true
        else -> false
    }

    assertTrue(frequentFileType)
}

2.6. When{} Used Without an Argument

Kotlin allows us to omit the argument value in the when block.

This essentially turns when into a simple if-else-if chain that sequentially checks the cases and executes the block of code of the first matching case. If we omit the argument in the when block, then the case expressions should evaluate to either true or false.

Let’s create a when block that omits the argument:

@Test
fun testWhenWithoutArgument() {
    val fileType = UnixFileType.L

    val objectType = when {
        fileType === UnixFileType.L -> "l"
        fileType === UnixFileType.HYPHEN_MINUS -> "-"
        fileType === UnixFileType.D -> "d"
        else -> "unknown file type"
    }

    assertEquals("l", objectType)
}

2.7. Dynamic Case Expressions

In Java, the switch statement can only be used with primitives and their boxed types, enums and the String class. In contrast, Kotlin allows us to use the when block with any built-in or user defined type. 

In addition, it is not required that the cases are constant expressions as in Java. Cases in Kotlin can be dynamic expressions that are evaluated at runtime. For example, cases could be the result of a function as long as the function return type is compatible with the type of the when block argument.

Let’s define a when block with dynamic case expressions:

@Test
fun testDynamicCaseExpression() {
    val unixFile = UnixFile.SymbolicLink(UnixFile.RegularFile("Content"))

    when {
        unixFile.getFileType() == UnixFileType.D -> println("It's a directory!")
        unixFile.getFileType() == UnixFileType.HYPHEN_MINUS -> println("It's a regular file!")
        unixFile.getFileType() == UnixFileType.L -> println("It's a soft link!")
    }
}

2.8. Range and Collection Case Expressions

It is possible to define a case in a when block that checks if a given collection or a range of values contains the argument.

For this reason, Kotlin provides the in operator, which is syntactic sugar for the contains() method. This means that, behind the scenes, Kotlin translates the case expression element in collection to collection.contains(element).

To check if the argument is in a list:

@Test
fun testCollectionCaseExpressions() {
    val regularFile = UnixFile.RegularFile("Test Content")
    val symbolicLink = UnixFile.SymbolicLink(regularFile)
    val directory = UnixFile.Directory(listOf(regularFile, symbolicLink))

    val isRegularFileInDirectory = when (regularFile) {
        in directory.children -> true
        else -> false
    }

    val isSymbolicLinkInDirectory = when {
        symbolicLink in directory.children -> true
        else -> false
    }

    assertTrue(isRegularFileInDirectory)
    assertTrue(isSymbolicLinkInDirectory)
}
To check that the argument is in a range:

@Test
fun testRangeCaseExpressions() {
    val fileType = UnixFileType.HYPHEN_MINUS

    val isCorrectType = when (fileType) {
        in UnixFileType.D..UnixFileType.L -> true
        else -> false
    }

    assertTrue(isCorrectType)
}

Even though the HYPHEN_MINUS type is not explicitly listed as a case, its ordinal is between the ordinals of D and L, and therefore the test is successful.
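The range check relies on the enum's natural ordering. As a sketch, the Kotlin expression fileType in UnixFileType.D..UnixFileType.L boils down to two compareTo calls, shown here in Java with an assumed mirror of the enum:

```java
public class EnumRangeDemo {

    // Hypothetical Java mirror of the Kotlin UnixFileType enum;
    // declaration order determines the ordinals: D=0, HYPHEN_MINUS=1, L=2
    enum UnixFileType { D, HYPHEN_MINUS, L }

    public static void main(String[] args) {
        UnixFileType fileType = UnixFileType.HYPHEN_MINUS;

        // Equivalent of Kotlin's `fileType in UnixFileType.D..UnixFileType.L`
        boolean isCorrectType = UnixFileType.D.compareTo(fileType) <= 0
          && fileType.compareTo(UnixFileType.L) <= 0;

        System.out.println(isCorrectType);
    }
}
```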

2.9. Is Case Operator and Smart Cast

We can use Kotlin’s is operator to check if the argument is an instance of a specified type. The is operator is similar to the instanceof operator in Java.

However, Kotlin provides us with a feature called “smart cast”. After we check if the argument is an instance of a given type, we do not have to explicitly cast the argument to that type since the compiler does that for us.

Therefore, we can use the methods and properties defined in the given type directly in the case block.

To use the is operator with the “smart cast” feature in a when block:

@Test
fun testWhenWithIsOperatorWithSmartCase() {
    val unixFile: UnixFile = UnixFile.RegularFile("Test Content")

    val result = when (unixFile) {
        is UnixFile.RegularFile -> unixFile.content
        is UnixFile.Directory -> unixFile.children.map { it.getFileType() }.joinToString(", ")
        is UnixFile.SymbolicLink -> unixFile.originalFile.getFileType()
    }

    assertEquals("Test Content", result)
}

Without explicitly casting unixFile to RegularFile, Directory or SymbolicLink, we were able to use RegularFile.content, Directory.children, and SymbolicLink.originalFile respectively.

3. Conclusion

In this article, we have seen several examples of how to use the when block offered by the Kotlin language.

Even though it’s not possible to do pattern matching using when in Kotlin, as is the case with the corresponding structures in Scala and other JVM languages, the when block is versatile enough to make us totally forget about these features.

The complete implementation of the examples for this article can be found over on GitHub.

Intro to Jasypt


1. Overview

In this article, we’ll be looking at the Jasypt (Java Simplified Encryption) library.

Jasypt is a Java library which allows developers to add basic encryption capabilities to projects with minimum effort, and without needing in-depth knowledge of the implementation details of encryption protocols.

2. Using Simple Encryption

Suppose we’re building a web application in which users submit private account data. We need to store that data in the database, but storing it as plain text would be insecure.

One way to deal with this is to store encrypted data in the database and decrypt it when retrieving it for a particular user.

To perform encryption and decryption using a very simple algorithm, we can use a BasicTextEncryptor class from the Jasypt library:

BasicTextEncryptor textEncryptor = new BasicTextEncryptor();
String privateData = "secret-data";
textEncryptor.setPasswordCharArray("some-random-data".toCharArray());

Then we can use an encrypt() method to encrypt the plain text:

String myEncryptedText = textEncryptor.encrypt(privateData);
assertNotSame(privateData, myEncryptedText);

If we want to store the private data of a given user in the database, we can store myEncryptedText without violating any security restrictions. Should we want to decrypt the data back to plain text, we can use the decrypt() method:

String plainText = textEncryptor.decrypt(myEncryptedText);
 
assertEquals(plainText, privateData);

We see that decrypted data is equal to plain text data that was previously encrypted.
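Under the hood, this is password-based encryption (PBE). As background, here is a minimal JDK-only sketch of the same encrypt/decrypt round trip; it is not Jasypt’s actual implementation, which additionally generates a random salt per message and manages the output encoding for us:

```java
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.PBEParameterSpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class PbeRoundTrip {

    public static void main(String[] args) throws Exception {
        char[] password = "some-random-data".toCharArray();
        // Fixed salt for illustration only; Jasypt uses a random salt per message
        byte[] salt = { 1, 2, 3, 4, 5, 6, 7, 8 };

        // Derive a secret key from the password
        SecretKey key = SecretKeyFactory.getInstance("PBEWithMD5AndDES")
          .generateSecret(new PBEKeySpec(password));
        PBEParameterSpec spec = new PBEParameterSpec(salt, 1000);

        // Encrypt and Base64-encode the private data
        Cipher cipher = Cipher.getInstance("PBEWithMD5AndDES");
        cipher.init(Cipher.ENCRYPT_MODE, key, spec);
        String encrypted = Base64.getEncoder().encodeToString(
          cipher.doFinal("secret-data".getBytes(StandardCharsets.UTF_8)));

        // Decrypt back to the original plain text
        cipher.init(Cipher.DECRYPT_MODE, key, spec);
        String decrypted = new String(
          cipher.doFinal(Base64.getDecoder().decode(encrypted)), StandardCharsets.UTF_8);

        System.out.println(decrypted);
    }
}
```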

3. One-way Encryption

The previous example is not an ideal way to handle authentication, that is, when we want to store a user password. Ideally, we want to encrypt the password in a way that cannot be reversed. When the user tries to log into our service, we encrypt the entered password and compare it with the encrypted password stored in the database. That way we never need to operate on the plain-text password.

We can use a BasicPasswordEncryptor class to perform the one-way encryption:

String password = "secret-pass";
BasicPasswordEncryptor passwordEncryptor = new BasicPasswordEncryptor();
String encryptedPassword = passwordEncryptor.encryptPassword(password);

Then, we can compare the already encrypted password with the password of a user performing the login process, without needing to decrypt the password that is already stored in the database:

boolean result = passwordEncryptor.checkPassword("secret-pass", encryptedPassword);

assertTrue(result);
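To see the idea behind this kind of one-way check, here is a simplified JDK-only sketch: hash the password with a random salt, store salt plus digest, and re-hash at check time. This is an illustration only, not Jasypt’s actual implementation (BasicPasswordEncryptor also applies many hash iterations):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Arrays;
import java.util.Base64;

public class OneWayPassword {

    // Hash the password with a fresh random salt and store "salt:digest"
    static String encryptPassword(String password) throws Exception {
        byte[] salt = new byte[8];
        new SecureRandom().nextBytes(salt);
        return Base64.getEncoder().encodeToString(salt) + ":"
          + Base64.getEncoder().encodeToString(digest(salt, password));
    }

    // Re-hash the candidate password with the stored salt and compare digests
    static boolean checkPassword(String plain, String stored) throws Exception {
        String[] parts = stored.split(":");
        byte[] salt = Base64.getDecoder().decode(parts[0]);
        return Arrays.equals(digest(salt, plain), Base64.getDecoder().decode(parts[1]));
    }

    static byte[] digest(byte[] salt, String password) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        md.update(salt);
        return md.digest(password.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws Exception {
        String stored = encryptPassword("secret-pass");
        System.out.println(checkPassword("secret-pass", stored));
        System.out.println(checkPassword("wrong-pass", stored));
    }
}
```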

4. Configuring Algorithm for Encryption

We can use a stronger encryption algorithm but we need to remember to install Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files for our JVM (installation instructions are included in the download).

In Jasypt we can use strong encryption by using a StandardPBEStringEncryptor class and customize it using a setAlgorithm() method:

StandardPBEStringEncryptor encryptor = new StandardPBEStringEncryptor();
String privateData = "secret-data";
encryptor.setPassword("some-random-password");
encryptor.setAlgorithm("PBEWithMD5AndTripleDES");

Here, we set the encryption algorithm to PBEWithMD5AndTripleDES.

Next, the process of encryption and decryption looks the same as the previous one using a BasicTextEncryptor class:

String encryptedText = encryptor.encrypt(privateData);
assertNotSame(privateData, encryptedText);

String plainText = encryptor.decrypt(encryptedText);
assertEquals(plainText, privateData);

5. Using Multi-Threaded Decryption

When we’re operating on a multi-core machine, we may want to handle decryption in parallel. To achieve good performance, we can use a PooledPBEStringEncryptor and the setPoolSize() API to create a pool of digesters. Each of them can be used by a different thread in parallel:

PooledPBEStringEncryptor encryptor = new PooledPBEStringEncryptor();
encryptor.setPoolSize(4);
encryptor.setPassword("some-random-data");
encryptor.setAlgorithm("PBEWithMD5AndTripleDES");

It’s good practice to set the pool size equal to the number of cores of the machine. The code for encryption and decryption is the same as in the previous examples.
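To follow that practice, we can derive the pool size from the JVM’s view of the core count. The availableProcessors() call below is plain JDK; passing its result to setPoolSize() is the assumed usage:

```java
public class PoolSizing {

    public static void main(String[] args) {
        // Number of logical cores visible to the JVM; this is the value
        // we would pass to encryptor.setPoolSize(...)
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println(cores >= 1);
    }
}
```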

6. Usage in Other Frameworks

A quick final note is that the Jasypt library can be integrated with a lot of other libraries, including of course the Spring Framework.

We only need to create a configuration to add encryption support into our Spring application. And if we want to store sensitive data into the database and we are using Hibernate as the data access framework, we can also integrate Jasypt with it.

Instructions about these integrations, as well as with some other frameworks, can be found in the Guides section on the Jasypt’s home page.

7. Conclusion

In this article, we looked at the Jasypt library, which helps us create more secure applications by using well-known and tested cryptographic algorithms. It exposes a simple API that is easy to use.

The implementation of all these examples and code snippets can be found in the GitHub project – this is a Maven project, so it should be easy to import and run as it is.

HBase with Java


1. Overview

In this article, we’ll be looking at the HBase database Java Client library. HBase is a distributed database that uses the Hadoop file system for storing data.

We’ll create a Java example client and a table to which we will add some simple records.

2. HBase Data Structure

In HBase, data is grouped into column families. All column members of a column family have the same prefix.

For example, the columns family1:qualifier1 and family1:qualifier2 are both members of the family1 column family. All column family members are stored together on the filesystem.

Inside a column family, we can put data under a specified qualifier. We can think of a qualifier as a kind of column name.

Let's see an example record from HBase:

Family1:{  
   'Qualifier1':'row1:cell_data',
   'Qualifier2':'row2:cell_data',
   'Qualifier3':'row3:cell_data'
}
Family2:{  
   'Qualifier1':'row1:cell_data',
   'Qualifier2':'row2:cell_data',
   'Qualifier3':'row3:cell_data'
}

We have two column families, each of them has three qualifiers with some cell data in it. Each row has a row key – it is a unique row identifier. We will be using the row key to insert, retrieve and delete the data.

3. HBase Client Maven Dependency

Before we connect to the HBase, we need to add hbase-client and hbase dependencies:

<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-client</artifactId>
    <version>${hbase.version}</version>
</dependency>
<dependency>
     <groupId>org.apache.hbase</groupId>
     <artifactId>hbase</artifactId>
     <version>${hbase.version}</version>
</dependency>

4. HBase Setup

We need to set up HBase so that we can connect to it from the Java client library. The installation is out of the scope of this article, but you can check out some of the HBase installation guides online.

Next, we need to start an HBase master locally by executing:

hbase master start

5. Connecting to HBase from Java 

To connect programmatically from Java to HBase, we need to define an XML configuration file. We started our HBase instance on localhost so we need to enter that into a configuration file:

<configuration>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>localhost</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2181</value>
    </property>
</configuration>

Now we need to point an HBase client to that configuration file:

Configuration config = HBaseConfiguration.create();

String path = this.getClass()
  .getClassLoader()
  .getResource("hbase-site.xml")
  .getPath();
config.addResource(new Path(path));

Next, we’re checking if a connection to HBase was successful – in the case of a failure, the MasterNotRunningException will be thrown:

HBaseAdmin.checkHBaseAvailable(config);

6. Creating a Database Structure

Before we start adding data to HBase, we need to create the data structure for inserting rows. We will create one table with two column families:

private TableName table1 = TableName.valueOf("Table1");
private String family1 = "Family1";
private String family2 = "Family2";

Firstly, we need to create a connection to the database and get an Admin object, which we will use for manipulating the database structure:

Connection connection = ConnectionFactory.createConnection(config);
Admin admin = connection.getAdmin();

Then, we can create a table by passing an instance of the HTableDescriptor class to a createTable() method on the admin object:

HTableDescriptor desc = new HTableDescriptor(table1);
desc.addFamily(new HColumnDescriptor(family1));
desc.addFamily(new HColumnDescriptor(family2));
admin.createTable(desc);

7. Adding and Retrieving Elements 

With the table created, we can add new data to it by creating a Put object and calling a put() method on the Table object:

byte[] row1 = Bytes.toBytes("row1");
byte[] qualifier1 = Bytes.toBytes("Qualifier1");
Table table = connection.getTable(table1);

Put p = new Put(row1);
p.addImmutable(family1.getBytes(), qualifier1, Bytes.toBytes("cell_data"));
table.put(p);

Retrieving the previously created row can be achieved by using the Get class:

Get g = new Get(row1);
Result r = table.get(g);
byte[] value = r.getValue(family1.getBytes(), qualifier1);

The row1 is the row identifier; we can use it to retrieve a specific row from the database. When calling:

Bytes.toString(value)

the returned result will be the previously inserted cell_data.

8. Scanning and Filtering

We can scan the table, retrieving all elements inside of a given qualifier by using a Scan object (note that ResultScanner implements Closeable, so be sure to call close() on it when you’re done):

Scan scan = new Scan();
scan.addColumn(family1.getBytes(), qualifier1);

ResultScanner scanner = table.getScanner(scan);
for (Result result : scanner) {
    System.out.println("Found row: " + result);
}

That operation will print all rows inside of a qualifier1 with some additional information like timestamp:

Found row: keyvalues={Row1/Family1:Qualifier1/1488202127489/Put/vlen=9/seqid=0}

We can retrieve specific records by using filters.

Firstly, we are creating two filters. The filter1 specifies that the scan will retrieve rows whose key starts with row1, and filter2 specifies that we are interested only in cells whose qualifier is greater than or equal to qualifier1:

Filter filter1 = new PrefixFilter(row1);
Filter filter2 = new QualifierFilter(
  CompareOp.GREATER_OR_EQUAL, 
  new BinaryComparator(qualifier1));
List<Filter> filters = Arrays.asList(filter1, filter2);

Then we can get a result set from a Scan query:

Scan scan = new Scan();
scan.setFilter(new FilterList(Operator.MUST_PASS_ALL, filters));

try (ResultScanner scanner = table.getScanner(scan)) {
    for (Result result : scanner) {
        System.out.println("Found row: " + result);
    }
}

When creating a FilterList we passed an Operator.MUST_PASS_ALL – it means that all filters must be satisfied. We can choose an Operation.MUST_PASS_ONE if only one filter needs to be satisfied. In the resulting set, we will have only rows that matched specified filters.
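In other words, MUST_PASS_ALL behaves like a logical AND over the filters, and MUST_PASS_ONE like a logical OR. The following is a plain-JDK sketch of those semantics, using predicates as hypothetical stand-ins for HBase filters:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;

public class FilterListSemantics {

    // Like Operator.MUST_PASS_ALL: every filter must accept the row
    static boolean mustPassAll(List<Predicate<String>> filters, String row) {
        return filters.stream().allMatch(f -> f.test(row));
    }

    // Like Operator.MUST_PASS_ONE: a single accepting filter is enough
    static boolean mustPassOne(List<Predicate<String>> filters, String row) {
        return filters.stream().anyMatch(f -> f.test(row));
    }

    public static void main(String[] args) {
        Predicate<String> prefixFilter = row -> row.startsWith("row1");
        Predicate<String> qualifierFilter = row -> row.endsWith("Qualifier1");
        List<Predicate<String>> filters = Arrays.asList(prefixFilter, qualifierFilter);

        System.out.println(mustPassAll(filters, "row1/Qualifier1"));
        System.out.println(mustPassAll(filters, "row2/Qualifier1"));
        System.out.println(mustPassOne(filters, "row2/Qualifier1"));
    }
}
```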

9. Deleting Rows

Finally, to delete a row, we can use a Delete class:

Delete delete = new Delete(row1);
delete.addColumn(family1.getBytes(), qualifier1);
table.delete(delete);

We’re deleting a row1 that resides inside of a family1.

10. Conclusion

In this quick tutorial, we focused on communicating with an HBase database. We saw how to connect to HBase from the Java client library and how to run various basic operations.

The implementation of all these examples and code snippets can be found in the GitHub project; this is a Maven project, so it should be easy to import and run as it is.

Spring Cloud – Tracing Services with Zipkin


1. Overview

In this article, we are going to add Zipkin to our Spring Cloud project. Zipkin is an open-source project that provides mechanisms for sending, receiving, storing, and visualizing traces. This allows us to correlate activity between servers and get a much clearer picture of exactly what is happening in our services.

This article is not an introductory article to distributed tracing or Spring Cloud. If you would like more information about distributed tracing, read our introduction to Spring Sleuth.

2. Zipkin Service

Our Zipkin service will serve as the store for all our spans. Each span is sent to this service and collected into traces for future identification.

2.1. Setup

Create a new Spring Boot project and add these dependencies to pom.xml:

<dependency>
    <groupId>io.zipkin.java</groupId>
    <artifactId>zipkin-server</artifactId>
</dependency>
<dependency>
    <groupId>io.zipkin.java</groupId>
    <artifactId>zipkin-autoconfigure-ui</artifactId>
    <scope>runtime</scope>
</dependency>

For reference: you can find the latest version on Maven Central (zipkin-server, zipkin-autoconfigure-ui). Versions of the dependencies are inherited from spring-boot-starter-parent.

2.2. Enabling Zipkin Server

To enable the Zipkin server, we must add some annotations to the main application class:

@SpringBootApplication
@EnableZipkinServer
public class ZipkinApplication {...}

The new annotation @EnableZipkinServer will set up this server to listen for incoming spans and act as our UI for querying.

2.3. Configuration

First, let’s create a file called bootstrap.properties in src/main/resources. Remember that this file is needed to fetch our configuration from our config server.

Let’s add these properties to it:

spring.cloud.config.name=zipkin
spring.cloud.config.discovery.service-id=config
spring.cloud.config.discovery.enabled=true
spring.cloud.config.username=configUser
spring.cloud.config.password=configPassword

eureka.client.serviceUrl.defaultZone=
  http://discUser:discPassword@localhost:8082/eureka/

Now let’s add a configuration file to our config repo, located at c:\Users\{username}\ on Windows or /Users/{username}/ on *nix.

In this directory let’s add a file named zipkin.properties and add these contents:

spring.application.name=zipkin
server.port=9411
eureka.client.region=default
eureka.client.registryFetchIntervalSeconds=5
logging.level.org.springframework.web=debug

Remember to commit the changes in this directory so that the config service will detect the changes and load the file.

2.4. Run

Now let’s run our application and navigate to http://localhost:9411. We should be greeted with Zipkin’s homepage:

Great! Now we are ready to add some dependencies and configuration to our services that we want to trace.

3. Service Configuration

The setup for the resource servers is pretty much the same. In the following sections, we will detail how to set up the book-service. We will follow that up by explaining the modifications needed to apply these updates to the rating-service and gateway-service.

3.1. Setup

To begin sending spans to our Zipkin server we will add this dependency to our pom.xml file:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-zipkin</artifactId>
</dependency>

For reference: you can find the latest version on Maven Central (spring-cloud-starter-zipkin).

3.2. Spring Config

We need to add some configuration so that book-service will use Eureka to find our Zipkin service. Open BookServiceApplication.java and add this code to the file:

@Autowired
private EurekaClient eurekaClient;
 
@Autowired
private SpanMetricReporter spanMetricReporter;
 
@Autowired
private ZipkinProperties zipkinProperties;
 
@Value("${spring.sleuth.web.skipPattern}")
private String skipPattern;

// ... the main method goes here

@Bean
public ZipkinSpanReporter makeZipkinSpanReporter() {
    return new ZipkinSpanReporter() {
        private HttpZipkinSpanReporter delegate;
        private String baseUrl;

        @Override
        public void report(Span span) {
 
            InstanceInfo instance = eurekaClient
              .getNextServerFromEureka("zipkin", false);
            if (!(baseUrl != null && 
              instance.getHomePageUrl().equals(baseUrl))) {
                baseUrl = instance.getHomePageUrl();
                delegate = new HttpZipkinSpanReporter(baseUrl,
                  zipkinProperties.getFlushInterval(),
                  zipkinProperties.getCompression().isEnabled(),
                  spanMetricReporter);
            }

            if (!span.name.matches(skipPattern)) delegate.report(span);
        }
    };
}

The above configuration registers a custom ZipkinSpanReporter that gets its URL from eureka. This code also keeps track of the existing URL and only updates the HttpZipkinSpanReporter if the URL changes. This way no matter where we deploy our Zipkin server to we will always be able to locate it without restarting the service.

We also import the default Zipkin properties that are loaded by spring boot and use them to manage our custom reporter.

3.3. Configuration

Now let’s add some configuration to our book-service.properties file in the config repository:

spring.sleuth.sampler.percentage=1.0
spring.sleuth.web.skipPattern=(^cleanup.*)

Zipkin works by sampling actions on a server. By setting the spring.sleuth.sampler.percentage to 1.0, we are setting the sampling rate to 100%. The skip pattern is simply a regex used for excluding spans whose name matches.

The skip pattern will block all spans from being reported that start with the word ‘cleanup’. This is to stop spans originating from the spring session code base.
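As a quick sanity check of that pattern, here is a sketch of how the regex classifies span names, using plain java.lang.String.matches() (which matches against the whole string):

```java
public class SkipPatternDemo {

    public static void main(String[] args) {
        // The same regex we configured as spring.sleuth.web.skipPattern
        String skipPattern = "(^cleanup.*)";

        // Span names starting with "cleanup" match, so they would be skipped
        System.out.println("cleanup.expired_sessions".matches(skipPattern));
        // Other span names do not match, so they would be reported
        System.out.println("get-books".matches(skipPattern));
    }
}
```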

3.4. Rating Service

Follow the exact same steps from the book-service section above, applying the changes to the equivalent files for rating-service.

3.5. Gateway Service

Follow the same steps as for book-service, but when adding the configuration to gateway.properties, add these instead:

spring.sleuth.sampler.percentage=1.0
spring.sleuth.web.skipPattern=(^cleanup.*|.+favicon.*)

This will configure the gateway service to not send spans about the favicon or spring session.

3.6. Run

If you haven’t done so already, start the config, discovery, gateway, book, rating, and zipkin services.

Navigate to http://localhost:8080/book-service/books.

Open a new tab and navigate to http://localhost:9411. Select book-service and press the ‘Find Traces’ button. You should see a trace appear in the search results. Click that trace to open it:

On the trace page, we can see the request broken down by service. The first two spans are created by the gateway and the last is created by the book-service. This shows us how much time the request spent processing on the book-service, 18.379 ms, and on the gateway, 87.961 ms.

4. Conclusion

We have seen how easy it is to integrate Zipkin into our cloud application.

This gives us some much-needed insight into how communication travels through our application. As our application grows in complexity, Zipkin can provide us with much-needed information on where requests are spending their time. This can help us determine where things are slowing down and indicate what areas of our application need improvement.

As always you can find the source code over on Github.

Array Processing with Apache Commons Lang 3


1. Overview

The Apache Commons Lang 3 library provides support for manipulation of core classes of the Java APIs. This support includes methods for handling strings, numbers, dates, concurrency, object reflection and more.

In this quick tutorial, we’ll focus on array processing with the very useful ArrayUtils utility class.

2. Maven Dependency

In order to use the Commons Lang 3 library, just pull it from the central Maven repository using the following dependency:

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
    <version>3.5</version>
</dependency>

You can find the latest version of this library here.

3. ArrayUtils

The ArrayUtils class provides utility methods for working with arrays. These methods try to handle the input gracefully by preventing an exception from being thrown when a null value is passed in.

This section illustrates some methods defined in the ArrayUtils class. Note that all of these methods can work with any element type.

For convenience, their overloaded flavors are also defined for handling arrays containing primitive types.

4. add and addAll

The add method copies a given array and inserts a given element at a given position in the new array. If the position is not specified, the new element is added at the end of the array.

The following code fragment inserts the number zero at the first position of the oldArray array and verifies the result:

int[] oldArray = { 2, 3, 4, 5 };
int[] newArray = ArrayUtils.add(oldArray, 0, 1);
int[] expectedArray = { 1, 2, 3, 4, 5 };
 
assertArrayEquals(expectedArray, newArray);

If the position is not specified, the additional element is added at the end of oldArray:

int[] oldArray = { 2, 3, 4, 5 };
int[] newArray = ArrayUtils.add(oldArray, 1);
int[] expectedArray = { 2, 3, 4, 5, 1 };
 
assertArrayEquals(expectedArray, newArray);

The addAll method adds all elements at the end of a given array. The following fragment illustrates this method and confirms the result:

int[] oldArray = { 0, 1, 2 };
int[] newArray = ArrayUtils.addAll(oldArray, 3, 4, 5);
int[] expectedArray = { 0, 1, 2, 3, 4, 5 };
 
assertArrayEquals(expectedArray, newArray);

5. remove and removeAll

The remove method removes an element at a specified position from a given array. All subsequent elements are shifted to the left. Note that this is true for all removal operations.

This method returns a new array instead of making changes to the original one:

int[] oldArray = { 1, 2, 3, 4, 5 };
int[] newArray = ArrayUtils.remove(oldArray, 1);
int[] expectedArray = { 1, 3, 4, 5 };
 
assertArrayEquals(expectedArray, newArray);

The removeAll method removes all elements at specified positions from a given array:

int[] oldArray = { 1, 2, 3, 4, 5 };
int[] newArray = ArrayUtils.removeAll(oldArray, 1, 3);
int[] expectedArray = { 1, 3, 5 };
 
assertArrayEquals(expectedArray, newArray);

6. removeElement and removeElements

The removeElement method removes the first occurrence of a specified element from a given array.

Instead of throwing an exception, the removal operation is ignored if such an element does not exist in the given array:

int[] oldArray = { 1, 2, 3, 3, 4 };
int[] newArray = ArrayUtils.removeElement(oldArray, 3);
int[] expectedArray = { 1, 2, 3, 4 };
 
assertArrayEquals(expectedArray, newArray);

The removeElements method removes the first occurrences of specified elements from a given array.

Instead of throwing an exception, the removal operation is ignored if a specified element does not exist in the given array:

int[] oldArray = { 1, 2, 3, 3, 4 };
int[] newArray = ArrayUtils.removeElements(oldArray, 2, 3, 5);
int[] expectedArray = { 1, 3, 4 };
 
assertArrayEquals(expectedArray, newArray);

7. The removeAllOccurences API

The removeAllOccurences method removes all occurrences of the specified element from the given array.

Instead of throwing an exception, the removal operation is ignored if such an element does not exist in the given array:

int[] oldArray = { 1, 2, 2, 2, 3 };
int[] newArray = ArrayUtils.removeAllOccurences(oldArray, 2);
int[] expectedArray = { 1, 3 };
 
assertArrayEquals(expectedArray, newArray);

8. The contains API

The contains method checks if a value exists in a given array. Here is a code example, including verification of the result:

int[] array = { 1, 3, 5, 7, 9 };
boolean evenContained = ArrayUtils.contains(array, 2);
boolean oddContained = ArrayUtils.contains(array, 7);
 
assertEquals(false, evenContained);
assertEquals(true, oddContained);

9. The reverse API

The reverse method reverses the element order within a specified range of a given array. This method makes changes to the passed-in array instead of returning a new one.

Let’s have a look at a quick example:

int[] originalArray = { 1, 2, 3, 4, 5 };
ArrayUtils.reverse(originalArray, 1, 4);
int[] expectedArray = { 1, 4, 3, 2, 5 };
 
assertArrayEquals(expectedArray, originalArray);

If a range is not specified, the order of all elements is reversed:

int[] originalArray = { 1, 2, 3, 4, 5 };
ArrayUtils.reverse(originalArray);
int[] expectedArray = { 5, 4, 3, 2, 1 };
 
assertArrayEquals(expectedArray, originalArray);

10. The shift API

The shift method shifts a series of elements in a given array a number of positions. This method makes changes to the passed-in array instead of returning a new one.

The following code fragment shifts all elements between the elements at index 1 (inclusive) and index 4 (exclusive) one position to the right and confirms the result:

int[] originalArray = { 1, 2, 3, 4, 5 };
ArrayUtils.shift(originalArray, 1, 4, 1);
int[] expectedArray = { 1, 4, 2, 3, 5 };
 
assertArrayEquals(expectedArray, originalArray);

If the range boundaries are not specified, all elements of the array are shifted:

int[] originalArray = { 1, 2, 3, 4, 5 };
ArrayUtils.shift(originalArray, 1);
int[] expectedArray = { 5, 1, 2, 3, 4 };
 
assertArrayEquals(expectedArray, originalArray);

11. The subarray API

The subarray method creates a new array containing elements within a specified range of the given array. The following example illustrates the method and asserts the result:

int[] oldArray = { 1, 2, 3, 4, 5 };
int[] newArray = ArrayUtils.subarray(oldArray, 2, 7);
int[] expectedArray = { 3, 4, 5 };
 
assertArrayEquals(expectedArray, newArray);

Notice that when the passed-in index is greater than the length of the array, it is demoted to the array length rather than having the method throw an exception. Similarly, if a negative index is passed in, it is promoted to zero.
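That clamping behavior can be mimicked with plain JDK calls. The following sketch illustrates the rule; it is not the actual ArrayUtils implementation:

```java
import java.util.Arrays;

public class SubarrayClamp {

    // A sketch of the clamping ArrayUtils.subarray performs:
    // end indexes past the array are demoted to the array length,
    // negative start indexes are promoted to zero.
    static int[] subarray(int[] array, int start, int end) {
        start = Math.max(start, 0);
        end = Math.min(end, array.length);
        if (start >= end) {
            return new int[0];
        }
        return Arrays.copyOfRange(array, start, end);
    }

    public static void main(String[] args) {
        int[] oldArray = { 1, 2, 3, 4, 5 };
        // End index 7 is clamped to the length 5
        System.out.println(Arrays.toString(subarray(oldArray, 2, 7)));
        // Start index -3 is clamped to 0
        System.out.println(Arrays.toString(subarray(oldArray, -3, 2)));
    }
}
```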

12. The swap API

The swap method swaps a series of elements at specified positions in the given array.

The following code fragment swaps two groups of elements starting at the indexes 0 and 3, with each group containing two elements:

int[] originalArray = { 1, 2, 3, 4, 5 };
ArrayUtils.swap(originalArray, 0, 3, 2);
int[] expectedArray = { 4, 5, 3, 1, 2 };
 
assertArrayEquals(expectedArray, originalArray);

If no length argument is passed in, only one element at each position is swapped:

int[] originalArray = { 1, 2, 3, 4, 5 };
ArrayUtils.swap(originalArray, 0, 3);
int[] expectedArray = { 4, 2, 3, 1, 5 };
assertArrayEquals(expectedArray, originalArray);

13. Conclusion

This tutorial introduces the core array processing utility in Apache Commons Lang 3 – ArrayUtils.

As always, the implementation of all examples and code snippets given above can be found in the GitHub project.


Spring Security and OpenID Connect


1. Overview

In this quick tutorial, we’ll focus on setting up OpenID Connect with a Spring Security OAuth2 implementation.

OpenID Connect is a simple identity layer built on top of the OAuth 2.0 protocol.

And, more specifically, we’ll learn how to authenticate users using the OpenID Connect implementation from Google.

2. Maven Configuration

First, we need to add the following dependencies to our Spring Boot application:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-security</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.security.oauth</groupId>
    <artifactId>spring-security-oauth2</artifactId>
</dependency>

3. The Id Token

Before we dive into the implementation details, let’s have a quick look at how OpenID works, and how we’ll interact with it.

At this point, it’s of course important to already have an understanding of OAuth2, since OpenID is built on top of OAuth.

First, in order to use the identity functionality, we’ll make use of a new OAuth2 scope called openid. This will result in an extra field in our Access Token – “id_token“.

The id_token is a JWT (JSON Web Token) that contains identity information about the user, signed by the identity provider (in our case, Google).

Finally, both the server (Authorization Code) and implicit flows are the most commonly used ways of obtaining an id_token; in our example, we’ll use the server flow.
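Structurally, an id_token is just three Base64URL-encoded segments joined by dots. The minimal plain-JDK sketch below extracts the claims segment without verifying the signature (the token here is fabricated for illustration; in a real application, the signature must be validated before the claims are trusted):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class IdTokenDecodeDemo {

    // Extracts the claims (middle segment) of a JWT without verifying the
    // signature -- just enough to inspect the identity information it carries.
    static String decodeClaims(String jwt) {
        String payload = jwt.split("\\.")[1];
        return new String(Base64.getUrlDecoder().decode(payload), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // A made-up, unsigned token: header.payload.signature
        String header = Base64.getUrlEncoder().withoutPadding()
          .encodeToString("{\"alg\":\"none\"}".getBytes(StandardCharsets.UTF_8));
        String claims = Base64.getUrlEncoder().withoutPadding()
          .encodeToString("{\"sub\":\"12345678\",\"email\":\"example@gmail.com\"}"
            .getBytes(StandardCharsets.UTF_8));
        String idToken = header + "." + claims + ".signature";

        System.out.println(decodeClaims(idToken));
    }
}
```

In the filter we build later, this decoding work is delegated to Spring Security’s JwtHelper instead.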

3. OAuth2 Client Configuration

Next, let’s configure our OAuth2 client – as follows:

@Configuration
@EnableOAuth2Client
public class GoogleOpenIdConnectConfig {
    @Value("${google.clientId}")
    private String clientId;

    @Value("${google.clientSecret}")
    private String clientSecret;

    @Value("${google.accessTokenUri}")
    private String accessTokenUri;

    @Value("${google.userAuthorizationUri}")
    private String userAuthorizationUri;

    @Value("${google.redirectUri}")
    private String redirectUri;

    @Bean
    public OAuth2ProtectedResourceDetails googleOpenId() {
        AuthorizationCodeResourceDetails details = new AuthorizationCodeResourceDetails();
        details.setClientId(clientId);
        details.setClientSecret(clientSecret);
        details.setAccessTokenUri(accessTokenUri);
        details.setUserAuthorizationUri(userAuthorizationUri);
        details.setScope(Arrays.asList("openid", "email"));
        details.setPreEstablishedRedirectUri(redirectUri);
        details.setUseCurrentUri(false);
        return details;
    }

    @Bean
    public OAuth2RestTemplate googleOpenIdTemplate(OAuth2ClientContext clientContext) {
        return new OAuth2RestTemplate(googleOpenId(), clientContext);
    }
}

And here is application.properties:

google.clientId=<your app clientId>
google.clientSecret=<your app clientSecret>
google.accessTokenUri=https://www.googleapis.com/oauth2/v3/token
google.userAuthorizationUri=https://accounts.google.com/o/oauth2/auth
google.redirectUri=http://localhost:8081/google-login

Note that:

  • You first need to obtain OAuth 2.0 credentials for your Google web app from Google Developers Console.
  • We used scope openid to obtain id_token.
  • We also used an extra scope, email, to include the user’s email in the id_token identity information.
  • The redirect URI http://localhost:8081/google-login is the same one used in our Google web app.

4. Custom OpenID Connect Filter

Now, we need to create our own custom OpenIdConnectFilter to extract authentication from id_token – as follows:

public class OpenIdConnectFilter extends AbstractAuthenticationProcessingFilter {
    @Override
    public Authentication attemptAuthentication(
      HttpServletRequest request, HttpServletResponse response) 
      throws AuthenticationException, IOException, ServletException {
        OAuth2AccessToken accessToken;
        try {
            accessToken = restTemplate.getAccessToken();
        } catch (OAuth2Exception e) {
            throw new BadCredentialsException("Could not obtain access token", e);
        }
        try {
            String idToken = accessToken.getAdditionalInformation().get("id_token").toString();
            Jwt tokenDecoded = JwtHelper.decode(idToken);
            Map<String, String> authInfo = new ObjectMapper().readValue(tokenDecoded.getClaims(), Map.class);

            OpenIdConnectUserDetails user = new OpenIdConnectUserDetails(authInfo, accessToken);
            return new UsernamePasswordAuthenticationToken(user, null, user.getAuthorities());
        } catch (InvalidTokenException e) {
            throw new BadCredentialsException("Could not obtain user details from token", e);
        }
    }
}

And here is our simple OpenIdConnectUserDetails:

public class OpenIdConnectUserDetails implements UserDetails {
    private String userId;
    private String username;
    private OAuth2AccessToken token;

    public OpenIdConnectUserDetails(Map<String, String> userInfo, OAuth2AccessToken token) {
        this.userId = userInfo.get("sub");
        this.username = userInfo.get("email");
        this.token = token;
    }
}

Note that:

  • We used Spring Security’s JwtHelper to decode the id_token.
  • The id_token always contains the “sub” field, which is a unique identifier for the user.
  • The id_token will also contain an “email” field, as we added the email scope to our request.

5. Security Configuration

Next, let’s discuss our security configuration:

@Configuration
@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {
    @Autowired
    private OAuth2RestTemplate restTemplate;

    @Bean
    public OpenIdConnectFilter openIdConnectFilter() {
        OpenIdConnectFilter filter = new OpenIdConnectFilter("/google-login");
        filter.setRestTemplate(restTemplate);
        return filter;
    }

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
        .addFilterAfter(new OAuth2ClientContextFilter(), 
          AbstractPreAuthenticatedProcessingFilter.class)
        .addFilterAfter(openIdConnectFilter(), 
          OAuth2ClientContextFilter.class)
        .httpBasic()
        .authenticationEntryPoint(new LoginUrlAuthenticationEntryPoint("/google-login"))
        .and()
        .authorizeRequests()
        .anyRequest().authenticated();
    }
}

Note that:

  • We added our custom OpenIdConnectFilter after OAuth2ClientContextFilter
  • We used a simple security configuration to redirect users to “/google-login” to get authenticated by Google

6. User Controller

Next, here is a simple controller to test our app:

@Controller
public class HomeController {
    @RequestMapping("/")
    @ResponseBody
    public String home() {
        String username = SecurityContextHolder.getContext().getAuthentication().getName();
        return "Welcome, " + username;
    }
}

Sample response (after the redirect to Google to approve the app’s authorities):

Welcome, example@gmail.com

7. Sample OpenID Connect Process

Finally, let’s take a look at a sample OpenID Connect authentication process.

First, we’re going to send an Authentication Request:

https://accounts.google.com/o/oauth2/auth?
    client_id=sampleClientID
    response_type=code&
    scope=openid%20email&
    redirect_uri=http://localhost:8081/google-login&
    state=abc

The response (after user approval) is a redirect to:

http://localhost:8081/google-login?state=abc&code=xyz

Next, we’re going to exchange the code for an Access Token and id_token:

POST https://www.googleapis.com/oauth2/v3/token 
    code=xyz&
    client_id=sampleClientID&
    client_secret=sampleClientSecret&
    redirect_uri=http://localhost:8081/google-login&
    grant_type=authorization_code

Here’s a sample Response:

{
    "access_token": "SampleAccessToken",
    "id_token": "SampleIdToken",
    "token_type": "bearer",
    "expires_in": 3600,
    "refresh_token": "SampleRefreshToken"
}

Finally, here’s what the information of the actual id_token looks like:

{
    "iss":"accounts.google.com",
    "at_hash":"AccessTokenHash",
    "sub":"12345678",
    "email_verified":true,
    "email":"example@gmail.com",
     ...
}

So you can immediately see just how useful the user information inside the token is for providing identity information to our own application.

8. Conclusion

In this quick intro tutorial, we learned how to authenticate users using the OpenID Connect implementation from Google.

And, as always, you can find the source code over on GitHub.

String Processing with Apache Commons Lang 3


1. Overview

The Apache Commons Lang 3 library provides support for manipulation of core classes of the Java APIs. This support includes methods for handling strings, numbers, dates, concurrency, object reflection and more.

In addition to providing a general introduction to the library, this tutorial demonstrates methods of two of the most commonly used classes, namely ArrayUtils and StringUtils. ArrayUtils is used for operations on arrays, while StringUtils is used for manipulation of String instances.

2. Maven Dependency

In order to use the Commons Lang 3 library, just pull it from the central Maven repository using the following dependency:

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
    <version>3.5</version>
</dependency>

You can find the latest version of this library here.

3. StringUtils

The StringUtils class provides methods for null-safe operations on strings.

Many methods of this class have corresponding ones defined in class java.lang.String, which are not null-safe. However, this section will instead focus on several methods that do not have equivalents in the String class.
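The null-safe contract can be sketched with a plain-Java stand-in. The helper below is our own illustration, not the library source; the real StringUtils methods behave analogously, returning a sensible default instead of throwing a NullPointerException:

```java
public class NullSafeDemo {

    // A sketch of the null-safe style StringUtils follows: where
    // String#contains would throw a NullPointerException, a null input
    // here simply yields false.
    static boolean containsSafe(String str, String search) {
        if (str == null || search == null) {
            return false;
        }
        return str.contains(search);
    }

    public static void main(String[] args) {
        System.out.println(containsSafe("baeldung.com", "baeldung")); // true
        System.out.println(containsSafe(null, "baeldung"));           // false
    }
}
```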

4. The containsAny Method

The containsAny method checks if a given String contains any character in the given set of characters. The set of search characters can be passed in either as a String or as char varargs.

The following code fragment demonstrates the use of two overloaded flavors of this method with result verification:

String string = "baeldung.com";
boolean contained1 = StringUtils.containsAny(string, 'a', 'b', 'c');
boolean contained2 = StringUtils.containsAny(string, 'x', 'y', 'z');
boolean contained3 = StringUtils.containsAny(string, "abc");
boolean contained4 = StringUtils.containsAny(string, "xyz");
 
assertTrue(contained1);
assertFalse(contained2);
assertTrue(contained3);
assertFalse(contained4);

5. The containsIgnoreCase Method

The containsIgnoreCase method checks if a given String contains another String in a case insensitive manner.

The following code fragment verifies that the String “baeldung.com” contains “BAELDUNG” when upper and lower case is ignored:

String string = "baeldung.com";
boolean contained = StringUtils.containsIgnoreCase(string, "BAELDUNG");
 
assertTrue(contained);

6. The countMatches Method

The countMatches method counts how many times a character or substring appears in a given String.

The following is a demonstration of this method, confirming that ‘w’ appears four times and “com” appears twice in the String “welcome to www.baeldung.com”:

String string = "welcome to www.baeldung.com";
int charNum = StringUtils.countMatches(string, 'w');
int stringNum = StringUtils.countMatches(string, "com");
 
assertEquals(4, charNum);
assertEquals(2, stringNum);

7. Appending and Prepending Method

The appendIfMissing and appendIfMissingIgnoreCase methods append a suffix to the end of a given String if it does not already end with any of the passed-in suffixes in a case sensitive and insensitive manner respectively.

Similarly, the prependIfMissing and prependIfMissingIgnoreCase methods prepend a prefix to the beginning of a given String if it does not start with any of the passed-in prefixes.

In the following example, the appendIfMissing and prependIfMissing methods are used to add a suffix and prefix to the String “baeldung.com” without these affixes being repeated:

String string = "baeldung.com";
String stringWithSuffix = StringUtils.appendIfMissing(string, ".com");
String stringWithPrefix = StringUtils.prependIfMissing(string, "www.");
 
assertEquals("baeldung.com", stringWithSuffix);
assertEquals("www.baeldung.com", stringWithPrefix);

8. Case Changing Method

The String class already defines methods to convert all characters of a String to uppercase or lowercase. This subsection only illustrates the use of methods changing the case of a String in other ways, including swapCase, capitalize and uncapitalize.

The swapCase method swaps the case of a String, changing uppercase to lowercase and lowercase to uppercase:

String originalString = "baeldung.COM";
String swappedString = StringUtils.swapCase(originalString);
 
assertEquals("BAELDUNG.com", swappedString);

The capitalize method converts the first character of a given String to uppercase, leaving all remaining characters unchanged:

String originalString = "baeldung";
String capitalizedString = StringUtils.capitalize(originalString);
 
assertEquals("Baeldung", capitalizedString);

The uncapitalize method converts the first character of the given String to lowercase, leaving all remaining characters unchanged:

String originalString = "Baeldung";
String uncapitalizedString = StringUtils.uncapitalize(originalString);
 
assertEquals("baeldung", uncapitalizedString);

9. Reversing Method

The StringUtils class defines two methods for reversing strings: reverse and reverseDelimited. The reverse method rearranges all characters of a String in the opposite order, while the reverseDelimited method reorders groups of characters, separated by a specified delimiter.

The following code fragment reverses the string “baeldung” and validates the outcome:

String originalString = "baeldung";
String reversedString = StringUtils.reverse(originalString);
 
assertEquals("gnudleab", reversedString);

With the reverseDelimited method, characters are reversed in groups instead of individually:

String originalString = "www.baeldung.com";
String reversedString = StringUtils.reverseDelimited(originalString, '.');
 
assertEquals("com.baeldung.www", reversedString);

10. The rotate() Method

The rotate() method circularly shifts characters of a String a number of positions. The code fragment below moves all characters of the String “baeldung” four positions to the right and verifies the result:

String originalString = "baeldung";
String rotatedString = StringUtils.rotate(originalString, 4);
 
assertEquals("dungbael", rotatedString);

11. The difference Method

The difference method compares two strings, returning the remainder of the second String, starting from the position where it is different from the first. The following code fragment compares two Strings: “Baeldung Tutorials” and “Baeldung Courses” in both directions and validates the outcome:

String tutorials = "Baeldung Tutorials";
String courses = "Baeldung Courses";
String diff1 = StringUtils.difference(tutorials, courses);
String diff2 = StringUtils.difference(courses, tutorials);
 
assertEquals("Courses", diff1);
assertEquals("Tutorials", diff2);

12. Conclusion

This tutorial introduces String processing in the Apache Commons Lang 3 and goes over the main APIs we can use out of the StringUtils library class.

As always, the implementation of all examples and code snippets given above can be found in the GitHub project.

Java Web Weekly, Issue 167


Lots of interesting writeups on Spring stuff going on this week.

Let’s jump right in…

1. Spring and Java

>> Spring Boot – Configure Log Level in Runtime Using Actuator Endpoint [codeleak.pl]

Starting with Spring Boot 1.5, we can configure log levels at runtime by performing simple POST requests.

>> 7 Tips and Tricks We Learned From the Java Community [takipi.com]

Community is a great source of knowledge 🙂

>> A use-case for Spring component scan [frankel.ch]

A quick refresher of how @ComponentScan works followed by a practical example.

>> Deep Dive into Java 9’s Stack-Walking API [sitepoint.com]

Java 9 includes a brand new Stack-Walking API that provides access to the execution stack. Hopefully, we’ll no longer need to hack our way through frames.

>> Java EE 8 – February recap [oracle.com]

A short overview of what is going on around Java EE 8.

>> Public Review of JSON-P Specification 1.1 is Now Open [infoq.com]

Very cool – the JSON-P JSR-374 1.1 spec is now public.

>> Getting Started with Thymeleaf 3 Text Templates [codeleak.pl]

And a quick-start guide to templating with Thymeleaf 3.

>> The best way to soft delete with Hibernate [vladmihalcea.com]

With a little bit of effort, soft deletes are achievable with Hibernate.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Easy RSA signatures and encryption with JWK [insaneprogramming.be]

Exchanging data in the REST age can be painful but the JWK standard made it much easier. This short tutorial shows how to use asymmetrical RSA with the JWK.

>> Writing Integration Tests with Docker Compose and JUnit [codecentric.de]

A quick and practical guide to wiring Docker containers up for integration testing.

>> Be aware that bcrypt has a maximum password length [mscharhag.com]

It’s good to remember that bcrypt has its limitations.

>> Continuous Delivery With Kubernetes, Docker, and CircleCI [alexecollins.com]

A presentation of a CD setup with Kubernetes, Docker, and CircleCI. Definitely a useful setup.

Also worth reading:

3. Musings

>> The Whiteboard Interview: Adulthood Deferred [daedtech.com]

The recent “whiteboard interview” topic has sparked a lot of discussion about the efficiency of contemporary tech interviews.

>> On false negatives and false positives [ontestautomation.com]

A short write-up recalling the importance of fast identification of false negatives and false positives.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> I think I’ll go add value somewhere else [dilbert.com]

>> Self-deprecating joke to underscore my true power [dilbert.com]

>> Big fan of low self-esteem [dilbert.com]

5. Pick of the Week

>> Why we choose profit [m.signalvnoise.com]

Introduction to Java 9 StackWalking API


1. Introduction

In this quick article, we will have a look at Java 9’s StackWalking API.

The new functionality provides access to a Stream of StackFrames, allowing us to easily browse a thread’s stack directly while making good use of the powerful Stream API introduced in Java 8.

2. Advantages of a StackWalker

In Java 8, Throwable::getStackTrace and Thread::getStackTrace return an array of StackTraceElements. Without a lot of manual code, there was no way to discard the unwanted frames and keep only the ones we are interested in.

In addition to this, the Thread::getStackTrace may return a partial stack trace. This is because the specification allows the VM implementation to omit some stack frames for the sake of performance.
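To see what that manual code looks like, here is a pre-Java-9 sketch (class and method names below are our own) that materializes the whole StackTraceElement array and filters it by hand:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class PreJava9StackDemo {

    // Before Java 9, filtering frames meant capturing the full
    // StackTraceElement array first and discarding the rest by hand.
    static List<String> framesFrom(String classNamePrefix) {
        return Arrays.stream(new Throwable().getStackTrace())
          .filter(e -> e.getClassName().startsWith(classNamePrefix))
          .map(e -> e.getClassName() + "#" + e.getMethodName())
          .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Keeps only this class's frames: framesFrom and main
        System.out.println(framesFrom("PreJava9StackDemo"));
    }
}
```

Note that the entire stack has already been walked and copied before the first filter runs, which is exactly the overhead StackWalker avoids.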

In Java 9, using the walk() method of the StackWalker, we can traverse a few frames that we are interested in or the complete stack trace.

Of course, the new functionality is thread-safe; this allows multiple threads to share a single StackWalker instance for accessing their respective stacks.

As described in JEP 259, the JVM will be enhanced to allow efficient lazy access to additional stack frames when required.

3. StackWalker in Action

Let’s start by creating a class containing a chain of method calls:

public class StackWalkerDemo {

    public void methodOne() {
        this.methodTwo();
    }

    public void methodTwo() {
        this.methodThree();
    }

    public void methodThree() {
        // stack walking code
    }
}

3.1. Capture the Entire Stack Trace

Let’s move ahead and add some stack walking code:

public void methodThree() {
    List<StackFrame> stackTrace = StackWalker.getInstance()
      .walk(this::walkExample);
}

The StackWalker::walk method accepts a functional reference, creates a Stream of StackFrames for the current thread, applies the function to the Stream, and closes the Stream.

Now let’s define the StackWalkerDemo::walkExample method:

public List<StackFrame> walkExample(Stream<StackFrame> stackFrameStream) {
    return stackFrameStream.collect(Collectors.toList());
}

This method simply collects the StackFrames and returns them as a List<StackFrame>. To test this example, please run a JUnit test:

@Test
public void giveStalkWalker_whenWalkingTheStack_thenShowStackFrames() {
    new StackWalkerDemo().methodOne();
}

The only reason to run it as a JUnit test is to have more frames in our stack:

class com.baeldung.java9.stackwalker.StackWalkerDemo#methodThree, Line 20
class com.baeldung.java9.stackwalker.StackWalkerDemo#methodTwo, Line 15
class com.baeldung.java9.stackwalker.StackWalkerDemo#methodOne, Line 11
class com.baeldung.java9.stackwalker
  .StackWalkerDemoTest#giveStalkWalker_whenWalkingTheStack_thenShowStackFrames, Line 9
class org.junit.runners.model.FrameworkMethod$1#runReflectiveCall, Line 50
class org.junit.internal.runners.model.ReflectiveCallable#run, Line 12
  ...more org.junit frames...
class org.junit.runners.ParentRunner#run, Line 363
class org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference#run, Line 86
  ...more org.eclipse frames...
class org.eclipse.jdt.internal.junit.runner.RemoteTestRunner#main, Line 192

In the entire stack trace, we are only interested in the top four frames. The remaining frames from org.junit and org.eclipse are nothing but noise.

3.2. Filtering the StackFrames

Let’s enhance our stack walking code and remove the noise:

public List<StackFrame> walkExample2(Stream<StackFrame> stackFrameStream) {
    return stackFrameStream
      .filter(f -> f.getClassName().contains("com.baeldung"))
      .collect(Collectors.toList());
}

Using the power of the Stream API, we are keeping only the frames that we are interested in. This will clear out the noise, leaving the top four lines in the stack log:

class com.baeldung.java9.stackwalker.StackWalkerDemo#methodThree, Line 27
class com.baeldung.java9.stackwalker.StackWalkerDemo#methodTwo, Line 15
class com.baeldung.java9.stackwalker.StackWalkerDemo#methodOne, Line 11
class com.baeldung.java9.stackwalker
  .StackWalkerDemoTest#giveStalkWalker_whenWalkingTheStack_thenShowStackFrames, Line 9

Let’s now identify the JUnit test that initiated the call:

public String walkExample3(Stream<StackFrame> stackFrameStream) {
    return stackFrameStream
      .filter(frame -> frame.getClassName()
        .contains("com.baeldung") && frame.getClassName().endsWith("Test"))
      .findFirst()
      .map(f -> f.getClassName() + "#" + f.getMethodName() 
        + ", Line " + f.getLineNumber())
      .orElse("Unknown caller");
}

Please note that here, we are only interested in a single StackFrame, which is mapped to a String. The output will only be the line containing StackWalkerDemoTest class.

3.3. Capturing the Reflection Frames

In order to capture the reflection frames, which are hidden by default, the StackWalker needs to be configured with an additional option SHOW_REFLECT_FRAMES:

List<StackFrame> stackTrace = StackWalker
  .getInstance(StackWalker.Option.SHOW_REFLECT_FRAMES)
  .walk(this::walkExample);

Using this option, all the reflection frames, including Method.invoke() and Constructor.newInstance(), will be captured:

com.baeldung.java9.stackwalker.StackWalkerDemo#methodThree, Line 40
com.baeldung.java9.stackwalker.StackWalkerDemo#methodTwo, Line 16
com.baeldung.java9.stackwalker.StackWalkerDemo#methodOne, Line 12
com.baeldung.java9.stackwalker
  .StackWalkerDemoTest#giveStalkWalker_whenWalkingTheStack_thenShowStackFrames, Line 9
jdk.internal.reflect.NativeMethodAccessorImpl#invoke0, Line -2
jdk.internal.reflect.NativeMethodAccessorImpl#invoke, Line 62
jdk.internal.reflect.DelegatingMethodAccessorImpl#invoke, Line 43
java.lang.reflect.Method#invoke, Line 547
org.junit.runners.model.FrameworkMethod$1#runReflectiveCall, Line 50
  ...eclipse and junit frames...
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner#main, Line 192

As we can see, the jdk.internal frames are the new ones captured by the SHOW_REFLECT_FRAMES option.

3.4. Capturing Hidden Frames

In addition to the reflection frames, a JVM implementation may choose to hide implementation specific frames.

However, those frames are not hidden from the StackWalker:

Runnable r = () -> {
    List<StackFrame> stackTrace2 = StackWalker
      .getInstance(StackWalker.Option.SHOW_HIDDEN_FRAMES)
      .walk(this::walkExample);
    printStackTrace(stackTrace2);
};
r.run();

Note that we are assigning a lambda reference to a Runnable in this example. The only reason is that JVM will create some hidden frames for the lambda expression.

This is clearly visible in the stack trace:

com.baeldung.java9.stackwalker.StackWalkerDemo#lambda$0, Line 47
com.baeldung.java9.stackwalker.StackWalkerDemo$$Lambda$39/924477420#run, Line -1
com.baeldung.java9.stackwalker.StackWalkerDemo#methodThree, Line 50
com.baeldung.java9.stackwalker.StackWalkerDemo#methodTwo, Line 16
com.baeldung.java9.stackwalker.StackWalkerDemo#methodOne, Line 12
com.baeldung.java9.stackwalker
  .StackWalkerDemoTest#giveStalkWalker_whenWalkingTheStack_thenShowStackFrames, Line 9
jdk.internal.reflect.NativeMethodAccessorImpl#invoke0, Line -2
jdk.internal.reflect.NativeMethodAccessorImpl#invoke, Line 62
jdk.internal.reflect.DelegatingMethodAccessorImpl#invoke, Line 43
java.lang.reflect.Method#invoke, Line 547
org.junit.runners.model.FrameworkMethod$1#runReflectiveCall, Line 50
  ...junit and eclipse frames...
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner#main, Line 192

The top two frames are the lambda proxy frames, which JVM created internally. It is worthwhile to note that the reflection frames that we captured in the previous example are still retained with SHOW_HIDDEN_FRAMES option. This is because SHOW_HIDDEN_FRAMES is a superset of SHOW_REFLECT_FRAMES.

3.5. Identifying the Calling Class

The RETAIN_CLASS_REFERENCE option retains the Class object in all the StackFrames walked by the StackWalker. This allows us to call the methods StackWalker::getCallerClass and StackFrame::getDeclaringClass.

Let’s identify the calling class using the StackWalker::getCallerClass method:

public void findCaller() {
    Class<?> caller = StackWalker
      .getInstance(StackWalker.Option.RETAIN_CLASS_REFERENCE)
      .getCallerClass();
    System.out.println(caller.getCanonicalName());
}

This time, we’ll call this method directly from a separate JUnit test:

@Test
public void giveStalkWalker_whenInvokingFindCaller_thenFindCallingClass() {
    new StackWalkerDemo().findCaller();
}

The output of caller.getCanonicalName() will be:

com.baeldung.java9.stackwalker.StackWalkerDemoTest

Please note that StackWalker::getCallerClass should not be called from the method at the bottom of the stack, as it will result in an IllegalCallerException being thrown.
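As a minimal, self-contained sketch of the happy path (requires Java 9+; the class and method names are our own), a helper one level down from main reliably reports its immediate caller:

```java
public class CallerDemo {

    // With RETAIN_CLASS_REFERENCE, getCallerClass() reports the class whose
    // frame invoked the current method. Calling it directly from the very
    // bottom frame of a stack would instead throw IllegalCallerException.
    static Class<?> whoCalledMe() {
        return StackWalker.getInstance(StackWalker.Option.RETAIN_CLASS_REFERENCE)
          .getCallerClass();
    }

    public static void main(String[] args) {
        // main invoked whoCalledMe, so the caller class is CallerDemo itself
        System.out.println(whoCalledMe().getSimpleName()); // CallerDemo
    }
}
```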

4. Conclusion

With this article, we’ve seen how easy it is to deal with StackFrames using the power of the StackWalker combined with the Stream API.

Of course, there are various other functionalities we can explore – such as skipping, dropping, and limiting the StackFrames. The official documentation contains a few solid examples for additional use cases.
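For instance, limiting the walk to the frames closest to the top of the stack is a one-liner. A small sketch, assuming Java 9+ (names are our own):

```java
import java.util.List;
import java.util.stream.Collectors;

public class LimitDemo {

    // Returns the method names of the n frames closest to the top of the
    // stack; limit() lets the walk stop early instead of visiting every frame.
    static List<String> topMethods(int n) {
        return StackWalker.getInstance()
          .walk(frames -> frames.limit(n)
            .map(StackWalker.StackFrame::getMethodName)
            .collect(Collectors.toList()));
    }

    public static void main(String[] args) {
        System.out.println(topMethods(2)); // [topMethods, main]
    }
}
```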

And, as always, you can get the complete source code for this article over on GitHub.

Java Money and Currency API


1. Overview

JSR 354 – “Currency and Money” addresses the standardization of currencies and monetary amounts in Java.

Its goal is to add a flexible and extensible API to the Java ecosystem, and make working with monetary amounts simpler and safer.

The JSR did not make its way into JDK 9 but is a candidate for future JDK releases.

2. Setup

First, let’s add the dependency to our pom.xml file:

<dependency>
    <groupId>org.javamoney</groupId>
    <artifactId>moneta</artifactId>
    <version>1.1</version>
</dependency>

The latest version of the dependency can be checked here.

3. JSR-354 Features

The goals of “Currency and Money” API:

  • To provide an API for handling and calculating monetary amounts
  • To define classes representing currencies and monetary amounts, as well as monetary rounding
  • To deal with currency exchange rates
  • To deal with formatting and parsing of currencies and monetary amounts

4. Model

The main classes of the JSR-354 specification are depicted in the following diagram:

The model holds two main interfaces CurrencyUnit and MonetaryAmount, explained in the following sections.

5. CurrencyUnit

CurrencyUnit models the minimal properties of a currency. Its instances can be obtained using the Monetary.getCurrency method:

@Test
public void givenCurrencyCode_whenString_thanExist() {
    CurrencyUnit usd = Monetary.getCurrency("USD");

    assertNotNull(usd);
    assertEquals(usd.getCurrencyCode(), "USD");
    assertEquals(usd.getNumericCode(), 840);
    assertEquals(usd.getDefaultFractionDigits(), 2);
}

We create a CurrencyUnit using a String representation of the currency; this could lead to a situation where we try to create a currency with a nonexistent code. Creating currencies with nonexistent codes raises an UnknownCurrencyException:

@Test(expected = UnknownCurrencyException.class)
public void givenCurrencyCode_whenNoExist_thanThrowsError() {
    Monetary.getCurrency("AAA");
}

6. MonetaryAmount

MonetaryAmount is a numeric representation of a monetary amount. It’s always associated with CurrencyUnit and defines a monetary representation of a currency.

The amount can be implemented in different ways, focusing on the behavioral requirements of each concrete use case. For example, Money and FastMoney are implementations of the MonetaryAmount interface.

FastMoney implements MonetaryAmount using a long as its numeric representation, and is faster than the BigDecimal-based Money at the cost of precision; it can be used when we need performance and precision isn’t an issue.
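To illustrate the trade-off, here is a plain-Java sketch of a FastMoney-style fixed-point representation with five fraction digits (matching the “USD 2.00000” output we’ll see below); the helper names are our own, not the Moneta implementation:

```java
public class FixedPointDemo {

    // FastMoney-style representation: one unit = 10^5, i.e. five fraction digits.
    private static final long SCALE = 100_000L;

    static long toFixedPoint(double amount) {
        return Math.round(amount * SCALE);
    }

    static double fromFixedPoint(long fixed) {
        return (double) fixed / SCALE;
    }

    public static void main(String[] args) {
        long twoDollars = toFixedPoint(2.0);
        System.out.println(twoDollars);                 // 200000
        System.out.println(fromFixedPoint(twoDollars)); // 2.0
        // Anything below 10^-5 is lost by this representation:
        System.out.println(fromFixedPoint(toFixedPoint(0.000001))); // 0.0
    }
}
```

Long arithmetic on the scaled value is cheap, which is where the speed comes from; the fixed scale is the precision cost.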

A generic instance can be created using a default factory. Let’s show the different ways of obtaining MonetaryAmount instances:

@Test
public void givenAmounts_whenStringified_thanEquals() {
 
    CurrencyUnit usd = Monetary.getCurrency("USD");
    MonetaryAmount fstAmtUSD = Monetary.getDefaultAmountFactory()
      .setCurrency(usd).setNumber(200).create();
    Money moneyof = Money.of(12, usd);
    FastMoney fastmoneyof = FastMoney.of(2, usd);

    assertEquals("USD", usd.toString());
    assertEquals("USD 200", fstAmtUSD.toString());
    assertEquals("USD 12", moneyof.toString());
    assertEquals("USD 2.00000", fastmoneyof.toString());
}

7. Monetary Arithmetic

We can perform monetary arithmetic between Money and FastMoney but we need to be careful when we combine instances of these two classes.

For example, when we compare one Euro instance of FastMoney with one Euro instance of Money, the result is that they are not the same:

@Test
public void givenCurrencies_whenCompared_thanNotequal() {
    MonetaryAmount oneDolar = Monetary.getDefaultAmountFactory()
      .setCurrency("USD").setNumber(1).create();
    Money oneEuro = Money.of(1, "EUR");

    assertFalse(oneEuro.equals(FastMoney.of(1, "EUR")));
    assertTrue(oneDolar.equals(Money.of(1, "USD")));
}

We can perform add, subtract, multiply, divide and other monetary arithmetic operations using the methods provided by the MonetaryAmount class.

Arithmetic operations throw an ArithmeticException if the operation exceeds the capabilities of the numeric representation type used. For example, if we try to divide one by three, we get an ArithmeticException because the result would be a non-terminating decimal expansion:

@Test(expected = ArithmeticException.class)
public void givenAmount_whenDivided_thanThrowsException() {
    MonetaryAmount oneDolar = Monetary.getDefaultAmountFactory()
      .setCurrency("USD").setNumber(1).create();
    oneDolar.divide(3);
}
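This behavior mirrors java.math.BigDecimal, which backs the default MonetaryAmount implementation: dividing 1 by 3 without an explicit scale throws, while supplying a scale and a RoundingMode makes the result well-defined. A library-free sketch:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class DivisionDemo {

    // dividing without a scale fails for non-terminating quotients
    public static boolean divideWithoutScaleThrows() {
        try {
            BigDecimal.ONE.divide(new BigDecimal("3"));
            return false;
        } catch (ArithmeticException e) {
            return true;
        }
    }

    // providing a scale and a RoundingMode makes the division well-defined
    public static String divideWithScale() {
        return BigDecimal.ONE.divide(new BigDecimal("3"), 5, RoundingMode.HALF_UP).toString();
    }

    public static void main(String[] args) {
        System.out.println(divideWithoutScaleThrows()); // true
        System.out.println(divideWithScale());          // 0.33333
    }
}
```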

When adding or subtracting amounts, it’s better to use parameters which are instances of MonetaryAmount, as we need to ensure that both amounts have the same currency to perform operations between amounts.

7.1. Calculating Amounts

A total of amounts can be calculated in multiple ways; one way is simply to chain the amounts with add():

@Test
public void givenAmounts_whenSummed_thanCorrect() {
    MonetaryAmount[] monetaryAmounts = new MonetaryAmount[] {
      Money.of(100, "CHF"), Money.of(10.20, "CHF"), Money.of(1.15, "CHF")};

    Money sumAmtCHF = Money.of(0, "CHF");
    for (MonetaryAmount monetaryAmount : monetaryAmounts) {
        sumAmtCHF = sumAmtCHF.add(monetaryAmount);
    }

    assertEquals("CHF 111.35", sumAmtCHF.toString());
}
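The same accumulation can also be expressed with the Streams API. To keep the sketch library-free, it reduces plain BigDecimal values; with MonetaryAmount instances, the analogous reduction would use their add method:

```java
import java.math.BigDecimal;
import java.util.stream.Stream;

public class SumDemo {

    public static String sum() {
        // reduce the amounts into a single total, starting from zero
        return Stream.of("100", "10.20", "1.15")
          .map(BigDecimal::new)
          .reduce(BigDecimal.ZERO, BigDecimal::add)
          .toString();
    }

    public static void main(String[] args) {
        System.out.println(sum()); // 111.35
    }
}
```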

Chaining can also be applied to subtracting:

Money calcAmtUSD = Money.of(1, "USD").subtract(fstAmtUSD);

Multiplying:

MonetaryAmount multiplyAmount = oneDolar.multiply(0.25);

Or dividing:

MonetaryAmount divideAmount = oneDolar.divide(0.25);

Let’s verify our arithmetic results using Strings, since the String representation also contains the currency:

@Test
public void givenArithmetic_whenStringified_thanEqualsAmount() {
    CurrencyUnit usd = Monetary.getCurrency("USD");

    Money moneyof = Money.of(12, usd);
    MonetaryAmount fstAmtUSD = Monetary.getDefaultAmountFactory()
      .setCurrency(usd).setNumber(200.50).create();
    MonetaryAmount oneDolar = Monetary.getDefaultAmountFactory()
      .setCurrency("USD").setNumber(1).create();
    Money subtractedAmount = Money.of(1, "USD").subtract(fstAmtUSD);
    MonetaryAmount multiplyAmount = oneDolar.multiply(0.25);
    MonetaryAmount divideAmount = oneDolar.divide(0.25);

    assertEquals("USD", usd.toString());
    assertEquals("USD 1", oneDolar.toString());
    assertEquals("USD 200.5", fstAmtUSD.toString());
    assertEquals("USD 12", moneyof.toString());
    assertEquals("USD -199.5", subtractedAmount.toString());
    assertEquals("USD 0.25", multiplyAmount.toString());
    assertEquals("USD 4", divideAmount.toString());
}

8. Monetary Rounding

Monetary rounding is nothing more than a conversion from an amount with undetermined precision to a rounded amount.

We’ll use the getDefaultRounding API provided by the Monetary class to make the conversion. The default rounding values are provided by the currency:

@Test
public void givenAmount_whenRounded_thanEquals() {
    MonetaryAmount fstAmtEUR = Monetary.getDefaultAmountFactory()
      .setCurrency("EUR").setNumber(1.30473908).create();
    MonetaryAmount roundEUR = fstAmtEUR.with(Monetary.getDefaultRounding());
    
    assertEquals("EUR 1.30473908", fstAmtEUR.toString());
    assertEquals("EUR 1.3", roundEUR.toString());
}
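Conceptually, the default rounding is driven by the currency's default number of fraction digits — two for EUR. A plain-JDK sketch of the equivalent operation (the HALF_EVEN mode here is an assumption for illustration; the actual mode depends on the rounding provider in use):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;
import java.util.Currency;

public class RoundingDemo {

    public static String round(String amount, String currencyCode) {
        // the currency's default fraction digits drive the scale (2 for EUR)
        int digits = Currency.getInstance(currencyCode).getDefaultFractionDigits();
        return new BigDecimal(amount).setScale(digits, RoundingMode.HALF_EVEN).toString();
    }

    public static void main(String[] args) {
        System.out.println(round("1.30473908", "EUR")); // 1.30
    }
}
```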

9. Currency Conversion

Currency conversion is an important aspect of dealing with money. Unfortunately, these conversions have a great variety of different implementations and use cases.

The API focuses on the common aspects of currency conversion based on the source, target currency, and exchange rate.

Currency conversion or the access of exchange rates can be parametrized:

@Test
public void givenAmount_whenConversion_thenNotNull() {
    MonetaryAmount oneDollar = Monetary.getDefaultAmountFactory().setCurrency("USD")
      .setNumber(1).create();

    CurrencyConversion conversionEUR = MonetaryConversions.getConversion("EUR");

    MonetaryAmount convertedAmountUSDtoEUR = oneDollar.with(conversionEUR);

    assertEquals("USD 1", oneDollar.toString());
    assertNotNull(convertedAmountUSDtoEUR);
}

A conversion is always bound to a currency. A MonetaryAmount can simply be converted by passing a CurrencyConversion to the amount’s with() method.

10. Currency Formatting

Formatting allows access to formats based on java.util.Locale. Contrary to the JDK, the formatters defined by this API are thread-safe:

@Test
public void givenLocale_whenFormatted_thanEquals() {
    MonetaryAmount oneDollar = Monetary.getDefaultAmountFactory()
      .setCurrency("USD").setNumber(1).create();

    MonetaryAmountFormat formatUSD = MonetaryFormats.getAmountFormat(Locale.US);
    String usFormatted = formatUSD.format(oneDollar);

    assertEquals("USD 1", oneDollar.toString());
    assertNotNull(formatUSD);
    assertEquals("USD1.00", usFormatted);
}

Here we’re using a predefined format, and next we’ll create a custom format for our currencies. The use of the standard format is straightforward via the format method of the MonetaryFormats class. We define our custom format by setting the pattern property of the format query builder.

As before, because the currency is included in the result, we verify our results using Strings:

@Test
public void givenAmount_whenCustomFormat_thanEquals() {
    MonetaryAmount oneDollar = Monetary.getDefaultAmountFactory()
            .setCurrency("USD").setNumber(1).create();

    MonetaryAmountFormat customFormat = MonetaryFormats.getAmountFormat(AmountFormatQueryBuilder.
      of(Locale.US).set(CurrencyStyle.NAME).set("pattern", "00000.00 ¤").build());
    String customFormatted = customFormat.format(oneDollar);

    assertNotNull(customFormat);
    assertEquals("USD 1", oneDollar.toString());
    assertEquals("00001.00 US Dollar", customFormatted);
}
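For comparison, the JDK's own java.text.NumberFormat renders a locale-specific symbol rather than the currency code, and, unlike the formatters above, its instances are not thread-safe, so each thread should create its own:

```java
import java.text.NumberFormat;
import java.util.Locale;

public class JdkFormatDemo {

    public static String formatUsDollar(double amount) {
        // NumberFormat instances are not thread-safe; create one per use (or per thread)
        NumberFormat usFormat = NumberFormat.getCurrencyInstance(Locale.US);
        return usFormat.format(amount);
    }

    public static void main(String[] args) {
        System.out.println(formatUsDollar(1)); // $1.00
    }
}
```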

11. Summary

In this quick article, we’ve covered the basics of the Java Money & Currency JSR.

Monetary values are used everywhere, and Java is now starting to support creating and handling monetary values, monetary arithmetic, and currency conversion.

As always, you can find the code from the article over on GitHub.

Concurrent Test Execution in Spring 5


1. Introduction

Starting with JUnit 4, tests can be run in parallel to gain speed for larger suites. The problem was that concurrent test execution was not fully supported by the Spring TestContext Framework prior to Spring 5.

In this quick article, we’ll show how to use Spring 5 to run our tests in Spring projects concurrently.

2. Maven Setup

As a reminder, to run JUnit tests in parallel, we need to configure the maven-surefire-plugin to enable the feature:

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-surefire-plugin</artifactId>
            <version>2.19.1</version>
            <configuration>
                <parallel>methods</parallel>
                <useUnlimitedThreads>true</useUnlimitedThreads>
            </configuration>
        </plugin>
    </plugins>
</build>

You can check out the reference documentation for more detailed configuration of parallel test execution.

3. Concurrent Test

The following example test would fail when running in parallel for versions prior to Spring 5.

However, it will run smoothly in Spring 5:

@RunWith(SpringRunner.class)
@ContextConfiguration(classes = Spring5JUnit4ConcurrentTest.SimpleConfiguration.class)
public class Spring5JUnit4ConcurrentTest implements ApplicationContextAware, InitializingBean {

    @Configuration
    public static class SimpleConfiguration {}

    private ApplicationContext applicationContext;

    private boolean beanInitialized = false;

    @Override
    public void afterPropertiesSet() throws Exception {
        this.beanInitialized = true;
    }

    @Override
    public void setApplicationContext(
      final ApplicationContext applicationContext) throws BeansException {
        this.applicationContext = applicationContext;
    }

    @Test
    public void whenTestStarted_thenContextSet() throws Exception {
        TimeUnit.SECONDS.sleep(2);
 
        assertNotNull(
          "The application context should have been set due to ApplicationContextAware semantics.",
          this.applicationContext);
    }

    @Test
    public void whenTestStarted_thenBeanInitialized() throws Exception {
        TimeUnit.SECONDS.sleep(2);
 
        assertTrue(
          "This test bean should have been initialized due to InitializingBean semantics.",
          this.beanInitialized);
    }
}

When running sequentially, the tests above would take around 6 seconds to pass. With concurrent execution, it will only take about 4.5 seconds – which is quite typical for how much time we can expect to save in larger suites as well.

4. Under the Hood

The primary reason prior versions of the framework didn’t support running tests concurrently was due to the management of TestContext by the TestContextManager.

In Spring 5, the TestContextManager uses a thread-local TestContext to ensure that operations on TestContexts in different threads do not interfere with one another. Thus, thread-safety is guaranteed for most method-level and class-level concurrent tests:

public class TestContextManager {

    // ...
    private final TestContext testContext;

    private final ThreadLocal<TestContext> testContextHolder = new ThreadLocal<TestContext>() {
        protected TestContext initialValue() {
            return copyTestContext(TestContextManager.this.testContext);
        }
    };

    public final TestContext getTestContext() {
        return this.testContextHolder.get();
    }

    // ...
}
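Under Spring 5, each thread therefore works on its own copy of the TestContext. The isolation guarantee that ThreadLocal itself provides can be sketched with plain JDK classes:

```java
public class ThreadLocalDemo {

    // each thread gets its own StringBuilder, analogous to the per-thread TestContext copy
    private static final ThreadLocal<StringBuilder> CONTEXT =
      ThreadLocal.withInitial(StringBuilder::new);

    public static String mainThreadView() {
        Runnable task = () -> CONTEXT.get().append(Thread.currentThread().getName());

        Thread t1 = new Thread(task, "A");
        Thread t2 = new Thread(task, "B");
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }

        // the worker threads mutated only their own copies; this thread's copy is untouched
        return CONTEXT.get().toString();
    }

    public static void main(String[] args) {
        System.out.println("'" + mainThreadView() + "'"); // ''
    }
}
```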

Note that the concurrency support does not apply to all kinds of tests; we need to exclude tests that:

  • change external shared states, such as states in caches, databases, message queues, etc.
  • require specific execution orders, for example, tests that use JUnit‘s @FixMethodOrder
  • modify the ApplicationContext, which are generally marked by @DirtiesContext

5. Summary

In this quick tutorial, we’ve shown a basic example of using Spring 5 to run tests in parallel.

As always, the example code can be found over on GitHub.

A Guide to the Axon Framework


1. Overview

In this article, we’ll be looking at the Axon framework and how it helps us implement an architecture based on CQRS (Command Query Responsibility Segregation) and potentially Event Sourcing.

Note that a lot of these concepts come right out of DDD, which is beyond the scope of this article.

2. Maven Dependencies

Before we start creating our sample application, we need to add the axon-core and axon-test dependencies to our pom.xml:

<dependency>
    <groupId>org.axonframework</groupId>
    <artifactId>axon-core</artifactId>
    <version>${axon.version}</version>
</dependency>
<dependency>
    <groupId>org.axonframework</groupId>
    <artifactId>axon-test</artifactId>
    <version>${axon.version}</version>
    <scope>test</scope>
</dependency>

<properties>
    <axon.version>3.0.2</axon.version>
</properties>

3. Message Service – Commands 

With the initial goal of doing CQRS in our system, we’ll define two types of actions that the user can perform:

  1.  create a new text message
  2.  mark text message as read

Naturally, these will be two commands that model our domain – CreateMessageCommand and MarkReadMessageCommand:

public class CreateMessageCommand {
 
    @TargetAggregateIdentifier
    private String id;
    private String text;
 
    public CreateMessageCommand(String id, String text) {
        this.id = id;
        this.text = text;
    }
 
    // ...
}
public class MarkReadMessageCommand {
 
    @TargetAggregateIdentifier
    private String id;
 
    public MarkReadMessageCommand(String id) {
        this.id = id;
    }
    
    // ...
}

The TargetAggregateIdentifier annotation tells Axon that the annotated field is the id of the given aggregate. We’ll briefly touch on aggregates soon.

4. Events

Our aggregate will be reacting to the above-created commands by producing MessageCreatedEvent and MessageReadEvent events:

public class MessageCreatedEvent {
 
    private String id;
    private String text;
 
    // standard constructors, getters, setters 
}
 
public class MessageReadEvent {
 
    private String id;
 
    // standard constructors, getters, setters
}

5. Aggregates – Producing Events on Commands

Now that we’ve modeled our commands, we need to create handlers that will produce events for commands.

Let’s create an aggregate class:

public class MessagesAggregate {

    @AggregateIdentifier
    private String id;

    @CommandHandler
    public MessagesAggregate(CreateMessageCommand command) {
        apply(new MessageCreatedEvent(command.getId(), command.getText()));
    }

    @EventHandler
    public void on(MessageCreatedEvent event) {
        this.id = event.getId();
    }

    @CommandHandler
    public void markRead(MarkReadMessageCommand command) {
        apply(new MessageReadEvent(id));
    }
    
    // standard constructors
}

Each aggregate needs to have an id field, and we specify this by using an AggregateIdentifier annotation.

Our aggregate is created when a CreateMessageCommand arrives – receiving that command produces a MessageCreatedEvent.

At this point, the aggregate is in the messageCreated state. When the MarkReadMessageCommand arrives, the aggregate produces a MessageReadEvent.

6. Testing our Setup

Firstly, we need to set up our test by creating a FixtureConfiguration for the MessagesAggregate:

private FixtureConfiguration<MessagesAggregate> fixture;

@Before
public void setUp() throws Exception {
    fixture = 
      new AggregateTestFixture<MessagesAggregate>(MessagesAggregate.class);
}

The first test case should cover the simplest situation – when the CreateMessageCommand arrives in our aggregate, it should produce the MessageCreatedEvent:

String eventText = "Hello, how is your day?";
String id = UUID.randomUUID().toString();
fixture.given()
  .when(new CreateMessageCommand(id, eventText))
  .expectEvents(new MessageCreatedEvent(id, eventText));

Next, we’ll test the situation where the aggregate has already produced a MessageCreatedEvent and a MarkReadMessageCommand arrives. It should produce a MessageReadEvent:

String id = UUID.randomUUID().toString();

fixture.given(new MessageCreatedEvent(id, "Hello"))
  .when(new MarkReadMessageCommand(id))
  .expectEvents(new MessageReadEvent(id));

7. Putting Everything Together

We’ve created commands, events, and aggregates. To start our application, we need to glue everything together.

First, we need to create a command bus to which commands will be sent:

CommandBus commandBus = new SimpleCommandBus();
CommandGateway commandGateway = new DefaultCommandGateway(commandBus);

Next, we need to set up an event store to which the produced events will be sent:

EventStore eventStore = new EmbeddedEventStore(new InMemoryEventStorageEngine());

EventSourcingRepository<MessagesAggregate> repository
  = new EventSourcingRepository<>(MessagesAggregate.class, eventStore);

Events should be persistent, so we need to define a repository for storing them.

In this simple example, we’re storing events in memory. In a production system, it should of course be a database or some other type of persistence store.

If we’re doing Event Sourcing, the EventStore is the central component of the architecture. All events produced by the aggregate need to be persisted into the store to keep a master record of all changes in the system.

Events are immutable, so once they are saved in the event store, they cannot be modified or deleted. Using events, we can recreate the state of the system at any point in time, by replaying all events that were produced up to that specific point in time.
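The replay idea can be sketched in a few lines of plain Java – the event classes below are illustrative stand-ins, not part of Axon's API:

```java
import java.util.Arrays;
import java.util.List;

public class ReplayDemo {

    // minimal, hypothetical event types - not part of Axon's API
    public static class MessageCreated {
        final String text;
        public MessageCreated(String text) { this.text = text; }
    }

    public static class MessageRead { }

    // replaying the full event list reproduces the aggregate's current state
    public static String replay(List<Object> events) {
        String text = null;
        boolean read = false;
        for (Object event : events) {
            if (event instanceof MessageCreated) {
                text = ((MessageCreated) event).text;
            } else if (event instanceof MessageRead) {
                read = true;
            }
        }
        return text + (read ? " [read]" : " [unread]");
    }

    public static void main(String[] args) {
        List<Object> log = Arrays.asList(new MessageCreated("Hello"), new MessageRead());
        System.out.println(replay(log)); // Hello [read]
    }
}
```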

Before starting the application we need to set up the aggregate that will be handling commands and producing events:

AggregateAnnotationCommandHandler<MessagesAggregate> handler = 
  new AggregateAnnotationCommandHandler<MessagesAggregate>(
    MessagesAggregate.class, repository);
handler.subscribe(commandBus);

The last thing we should declare is a handler that will subscribe to the events produced by the aggregate.

We’ll create a message event handler that will handle both messages – the MessageCreatedEvent and the MessageReadEvent:

public class MessagesEventHandler {

    @EventHandler
    public void handle(MessageCreatedEvent event) {
        System.out.println("Message received: " + event.getText() + " (" + event.getId() + ")");
    }

    @EventHandler
    public void handle(MessageReadEvent event) {
        System.out.println("Message read: " + event.getId());
    }
}

We need to subscribe this listener to the events by invoking the subscribe() method on the event store:

AnnotationEventListenerAdapter annotationEventListenerAdapter
  = new AnnotationEventListenerAdapter(new MessagesEventHandler());
eventStore.subscribe(eventMessages -> eventMessages.forEach(e -> {
    try {
        annotationEventListenerAdapter.handle(e);
    } catch (Exception e1) {
        throw new RuntimeException(e1);
    }
}));

We’ve completed the setup, so now we can send some commands to the commandGateway:

String itemId = UUID.randomUUID().toString();
commandGateway.send(new CreateMessageCommand(itemId, "Hello, how is your day?"));
commandGateway.send(new MarkReadMessageCommand(itemId));

After running our application, the MessagesEventHandler should handle the events produced by the MessagesAggregate class, and we should see output similar to:

Message received: Hello, how is your day? (d2ba9cbe-1a44-428e-a710-13b1bdc67c4b)
Message read: d2ba9cbe-1a44-428e-a710-13b1bdc67c4b

8. Conclusion

In this article, we introduced the Axon framework as a powerful base for building a CQRS and Event Sourcing system architecture.

We implemented a simple message application using the framework – to show how that should be structured in practice.

The implementation of all these examples and code snippets can be found over on GitHub; this is a Maven project, so it should be easy to import and run as it is.


Ant Colony Optimization


1. Introduction

The aim of this series is to explain the idea of genetic algorithms and show the best-known implementations.

In this tutorial, we’ll describe the concept of the ant colony optimization (ACO), followed by the code example.

2. How ACO Works

ACO is a genetic algorithm inspired by the natural behavior of ants. To fully understand the ACO algorithm, we need to get familiar with its basic concepts:

  • ants use pheromones to find the shortest path between home and food source
  • pheromones evaporate quickly
  • ants prefer to use shorter paths with denser pheromone

Let’s show a simple example of ACO used in the Traveling Salesman Problem. In the following case, we need to find the shortest path between all nodes in the graph:

 

Following their natural behavior, ants will start to explore new paths during the search. A stronger blue color indicates the paths that are used more often than the others, whereas the green color indicates the current shortest path found:

 

As a result, we’ll achieve the shortest path between all nodes:

 

A nice GUI-based tool for ACO testing can be found here.

3. Java Implementation

3.1. ACO Parameters

Let’s discuss the main parameters for the ACO algorithm, declared in the AntColonyOptimization class:

private double c = 1.0;
private double alpha = 1;
private double beta = 5;
private double evaporation = 0.5;
private double Q = 500;
private double antFactor = 0.8;
private double randomFactor = 0.01;

Parameter c indicates the original number of trails at the start of the simulation. Furthermore, alpha controls the pheromone importance, while beta controls the distance priority. In general, the beta parameter should be greater than alpha for the best results.

Next, the evaporation variable shows the percentage of pheromone that evaporates in every iteration, whereas Q provides information about the total amount of pheromone left on the trail by each Ant, and antFactor tells us how many ants we’ll use per city.

Finally, we need to have a little bit of randomness in our simulations, and this is covered by randomFactor.

3.2. Create Ants

Each Ant will be able to visit a specific city, remember all visited cities, and keep track of the trail length:

public void visitCity(int currentIndex, int city) {
    trail[currentIndex + 1] = city;
    visited[city] = true;
}

public boolean visited(int i) {
    return visited[i];
}

public double trailLength(double graph[][]) {
    double length = graph[trail[trailSize - 1]][trail[0]];
    for (int i = 0; i < trailSize - 1; i++) {
        length += graph[trail[i]][trail[i + 1]];
    }
    return length;
}
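Note how trailLength() closes the tour by first adding the edge from the last city back to the starting city. As a standalone sketch with a small distance matrix:

```java
public class TrailLengthDemo {

    // total length of a closed tour: consecutive edges plus the returning edge
    public static double trailLength(double[][] graph, int[] trail) {
        double length = graph[trail[trail.length - 1]][trail[0]];
        for (int i = 0; i < trail.length - 1; i++) {
            length += graph[trail[i]][trail[i + 1]];
        }
        return length;
    }

    public static void main(String[] args) {
        double[][] graph = {
            {0, 1, 4},
            {1, 0, 2},
            {4, 2, 0}
        };
        // tour 0 -> 1 -> 2 -> back to 0: 1 + 2 + 4
        System.out.println(trailLength(graph, new int[] {0, 1, 2})); // 7.0
    }
}
```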

3.3. Setup Ants

At the very beginning, we need to initialize our ACO code implementation by providing trails and ants matrices:

graph = generateRandomMatrix(noOfCities);
numberOfCities = graph.length;
numberOfAnts = (int) (numberOfCities * antFactor);

trails = new double[numberOfCities][numberOfCities];
probabilities = new double[numberOfCities];
ants = new Ant[numberOfAnts];
IntStream.range(0, numberOfAnts).forEach(i -> ants[i] = new Ant(numberOfCities));

Next, we need to setup the ants matrix to start with a random city:

public void setupAnts() {
    IntStream.range(0, numberOfAnts)
      .forEach(i -> {
          ants[i].clear();
          ants[i].visitCity(-1, random.nextInt(numberOfCities));
      });
    currentIndex = 0;
}

For each iteration of the loop, we’ll perform the following operations:

IntStream.range(0, maxIterations).forEach(i -> {
    moveAnts();
    updateTrails();
    updateBest();
});

3.4. Move Ants

Let’s start with the moveAnts() method. We need to choose the next city for all ants, remembering that each ant tries to follow other ants’ trails:

public void moveAnts() {
    IntStream.range(currentIndex, numberOfCities - 1).forEach(i -> {
        for (Ant ant : ants) {
            ant.visitCity(currentIndex, selectNextCity(ant));
        }
        currentIndex++;
    });
}

The most important part is to properly select the next city to visit. We should select the next town based on probability logic. First, we can check if the Ant should visit a random city:

int t = random.nextInt(numberOfCities - currentIndex);
if (random.nextDouble() < randomFactor) {
    OptionalInt cityIndex = IntStream.range(0, numberOfCities)
      .filter(i -> i == t && !ant.visited(i))
      .findFirst();
    if (cityIndex.isPresent()) {
        return cityIndex.getAsInt();
    }
}

If we didn’t select any random city, we need to calculate probabilities to select the next city, remembering that ants prefer to follow stronger and shorter trails. We can do this by storing the probability of moving to each city in the array:

public void calculateProbabilities(Ant ant) {
    int i = ant.trail[currentIndex];
    double pheromone = 0.0;
    for (int l = 0; l < numberOfCities; l++) {
        if (!ant.visited(l)){
            pheromone
              += Math.pow(trails[i][l], alpha) * Math.pow(1.0 / graph[i][l], beta);
        }
    }
    for (int j = 0; j < numberOfCities; j++) {
        if (ant.visited(j)) {
            probabilities[j] = 0.0;
        } else {
            double numerator
              = Math.pow(trails[i][j], alpha) * Math.pow(1.0 / graph[i][j], beta);
            probabilities[j] = numerator / pheromone;
        }
    }
}

After we calculate the probabilities, we can decide which city to go to:

double r = random.nextDouble();
double total = 0;
for (int i = 0; i < numberOfCities; i++) {
    total += probabilities[i];
    if (total >= r) {
        return i;
    }
}
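This is classic roulette-wheel selection: the random value r lands in the cumulative-probability slot of exactly one city. Extracted into a standalone method, with a guard for floating-point round-off:

```java
public class RouletteDemo {

    // returns the index whose cumulative probability first reaches r
    public static int select(double[] probabilities, double r) {
        double total = 0;
        for (int i = 0; i < probabilities.length; i++) {
            total += probabilities[i];
            if (total >= r) {
                return i;
            }
        }
        // guard against floating-point round-off leaving total slightly below 1.0
        return probabilities.length - 1;
    }

    public static void main(String[] args) {
        double[] probabilities = {0.2, 0.5, 0.3};
        System.out.println(select(probabilities, 0.1));  // 0
        System.out.println(select(probabilities, 0.6));  // 1
        System.out.println(select(probabilities, 0.95)); // 2
    }
}
```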

3.5. Update Trails

In this step, we should update the trails and the remaining pheromone:

public void updateTrails() {
    for (int i = 0; i < numberOfCities; i++) {
        for (int j = 0; j < numberOfCities; j++) {
            trails[i][j] *= evaporation;
        }
    }
    for (Ant a : ants) {
        double contribution = Q / a.trailLength(graph);
        for (int i = 0; i < numberOfCities - 1; i++) {
            trails[a.trail[i]][a.trail[i + 1]] += contribution;
        }
        trails[a.trail[numberOfCities - 1]][a.trail[0]] += contribution;
    }
}

3.6. Update the Best Solution

This is the last step of each iteration. We need to update the best solution in order to keep a reference to it:

private void updateBest() {
    if (bestTourOrder == null) {
        bestTourOrder = ants[0].trail;
        bestTourLength = ants[0].trailLength(graph);
    }
    for (Ant a : ants) {
        if (a.trailLength(graph) < bestTourLength) {
            bestTourLength = a.trailLength(graph);
            bestTourOrder = a.trail.clone();
        }
    }
}

After all iterations, the final result will indicate the best path found by ACO. Please note that by increasing the number of cities, the probability of finding the shortest path decreases. 

4. Conclusion

This tutorial introduces the Ant Colony Optimization algorithm. You can learn about genetic algorithms without any previous knowledge of this area, having only basic computer programming skills.

The complete source code for the code snippets in this tutorial is available in the GitHub project.

For all articles in the series, including other examples of genetic algorithms, check out the following links:

Introduction to Twitter4J


1. Overview

In this article, we will have a look at using Twitter4J in a Java application to communicate with Twitter.

2. Twitter4J

Twitter4J is an open source Java library, which provides a convenient API for accessing the Twitter API.

Simply put, here’s how we can interact with the Twitter API; we can:

  • Post a tweet
  • Get the timeline of a user, with a list of the latest tweets
  • Send and receive direct messages
  • Search for tweets and much more

This library ensures that we can easily do these operations, and it also ensures the security and privacy of a user – for which we naturally need to have OAuth credentials configured in our app.

3. Maven Dependencies

We need to start by defining the dependency for Twitter4J in our pom.xml:

<dependency>
    <groupId>org.twitter4j</groupId>
    <artifactId>twitter4j-stream</artifactId>
    <version>4.0.6</version>
</dependency>

To check if any new version of the library has been released – track the releases here.

4. Configuration

Configuring Twitter4J is easy and can be done in various ways – for example in a plain text file or a Java class or even using environment variables.

Let’s look at each of these ways, one at a time.

4.1. Plain Text File

We can use a plain text file – named twitter4j.properties – to hold our configuration details. Let’s look at the properties which need to be provided:

oauth.consumerKey =       // your key
oauth.consumerSecret =    // your secret
oauth.accessToken =       // your token
oauth.accessTokenSecret = // your token secret

All these attributes can be obtained from the Twitter Developer Console after we create a new app.
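Twitter4J picks this file up from the classpath automatically. Loading such a key-value file is essentially what java.util.Properties does, which we can sketch without the library:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class PropertiesDemo {

    // loads key=value pairs, the same structure twitter4j.properties uses
    public static String readKey(String content, String key) {
        Properties props = new Properties();
        try {
            props.load(new StringReader(content));
        } catch (IOException e) {
            return null;
        }
        return props.getProperty(key);
    }

    public static void main(String[] args) {
        String content = "oauth.consumerKey=abc123\noauth.consumerSecret=s3cr3t";
        System.out.println(readKey(content, "oauth.consumerKey")); // abc123
    }
}
```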

4.2. Java Class

We can also use a ConfigurationBuilder class to configure Twitter4J programmatically in Java:

ConfigurationBuilder cb = new ConfigurationBuilder();
cb.setDebugEnabled(true)
  .setOAuthConsumerKey("your consumer key")
  .setOAuthConsumerSecret("your consumer secret")
  .setOAuthAccessToken("your access token")
  .setOAuthAccessTokenSecret("your access token secret");
TwitterFactory tf = new TwitterFactory(cb.build());
Twitter twitter = tf.getInstance();

Note that we’ll be using the Twitter instance in next section – when we start to fetch data.

4.3. Environment Variables

Configuring through environment variables is another choice we have. If we do that, note that we’ll need a twitter4j prefix in our variables:

$ export twitter4j.oauth.consumerKey=       # your key
$ export twitter4j.oauth.consumerSecret=    # your secret
$ export twitter4j.oauth.accessToken=       # your access token
$ export twitter4j.oauth.accessTokenSecret= # your access token secret

5. Adding / Retrieving Real-Time Tweet Data

With a fully configured application, we can finally interact with Twitter.

Let’s look at few examples.

5.1. Post a Tweet

We’ll start by updating a tweet on Twitter:

public String createTweet(String tweet) throws TwitterException {
    Twitter twitter = getTwitterinstance();
    Status status = twitter.updateStatus(tweet);
    return status.getText();
}

By using status.getText(), we can retrieve the tweet just posted.

5.2. Get the Timeline

We can also fetch a list of tweets from the user’s timeline:

public List<String> getTimeLine() throws TwitterException {
    Twitter twitter = getTwitterinstance();
    
    return twitter.getHomeTimeline().stream()
      .map(item -> item.getText())
      .collect(Collectors.toList());
}

By using twitter.getHomeTimeline(), we get all tweets posted by the current account ID.

5.3. Send a Direct Message

Sending and receiving direct messages to and from followers is also possible using Twitter4J:

public static String sendDirectMessage(String recipientName, String msg) 
  throws TwitterException {
 
    Twitter twitter = getTwitterinstance();
    DirectMessage message = twitter.sendDirectMessage(recipientName, msg);
    return message.getText();
}

The sendDirectMessage method takes two parameters:

  • RecipientName: the twitter username of a message recipient
  • msg: message content

If the recipient is not found, sendDirectMessage will throw an exception with error code 150.

5.4. Search for Tweets

We can also search for tweets containing some text. By doing this, we’ll get a list of tweets together with the usernames of their authors.

Let’s see how such a search can be performed:

public static List<String> searchtweets() throws TwitterException {
 
    Twitter twitter = getTwitterinstance();
    Query query = new Query("source:twitter4j baeldung");
    QueryResult result = twitter.search(query);
    
    return result.getTweets().stream()
      .map(item -> item.getText())
      .collect(Collectors.toList());
}

Clearly, we can iterate over each tweet received in a QueryResult and fetch relative data.

5.5. The Streaming API

Twitter Streaming API is useful when updates are required in real-time; it handles thread creation and listens to events.

Let’s create a listener which listens to tweet updates from a user:

public static void streamFeed() {

    StatusListener listener = new StatusListener() {

        @Override
        public void onException(Exception e) {
            e.printStackTrace();
        }
        @Override
        public void onDeletionNotice(StatusDeletionNotice arg) {
        }
        @Override
        public void onScrubGeo(long userId, long upToStatusId) {
        }
        @Override
        public void onStallWarning(StallWarning warning) {
        }
        @Override
        public void onStatus(Status status) {
        }
        @Override
        public void onTrackLimitationNotice(int numberOfLimitedStatuses) {
        }
    };

    TwitterStream twitterStream = new TwitterStreamFactory().getInstance();

    twitterStream.addListener(listener);

    twitterStream.sample();
}

We can put some println() statements in each of the methods to check the output tweet stream. All tweets have location metadata associated with them.

Please note that all the tweet data fetched by the API is in UTF-8 format and, since Twitter is a multi-language platform, some data may be unrecognizable based upon its origin.

6. Conclusion

This article was a quick but comprehensive introduction to using Twitter4J with Java.

The implementation of the shown examples can be found on GitHub – this is a Maven-based project, so it should be easy to import and run as it is. The only change we need to make is to insert our own OAuth credentials.

Guide to @Immutable Annotation in Hibernate

1. Overview

In this article, we’ll talk about how we can make an entity, a collection, or an attribute immutable in Hibernate.

By default, fields are mutable, which means we’re able to perform operations on them that change their state.

2. Maven

To get our project up and running, we first need to add the necessary dependencies into our pom.xml. And as we’re working with Hibernate, we are going to add the corresponding dependency:

<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-core</artifactId>
    <version>5.2.8.Final</version>
</dependency>

And, because we are working with HSQLDB, we also need:

<dependency>
    <groupId>org.hsqldb</groupId>
    <artifactId>hsqldb</artifactId>
    <version>2.3.4</version>
</dependency>

3. Annotation on Entities

First, let’s define a simple entity class:

@Entity
@Immutable
@Table(name = "events")
public class Event {
 
    @Id
    @Column(name = "event_id")
    @GeneratedValue(generator = "increment")
    @GenericGenerator(name = "increment", strategy = "increment")
    private Long id;

    @Column(name = "title")
    private String title;

    // standard setters and getters
}

As you may have noticed, we’ve already added the @Immutable annotation to our entity, so if we try to save an Event:

@Test
public void addEvent() {
    Event event = new Event();
    event.setTitle("My Event");
    event.setGuestList(Sets.newHashSet("guest"));
    session.save(event);
    session.getTransaction().commit();
}

Then we should get the output:

Hibernate: insert into events (title, event_id) values (?, ?)

The output is the same even if we remove the annotation, meaning the annotation has no effect when we insert an entity.

3.1. Updating the Entity

Now that we’ve saved an entity without issues, let’s try to update it:

@Test
public void updateEvent() {
    Event event = (Event) session.createQuery(
      "FROM Event WHERE title='My Event'").list().get(0);
    event.setTitle("Public Event");
    session.saveOrUpdate(event);
    session.getTransaction().commit();
}

Hibernate will simply ignore the update operation without throwing an exception. However, if we remove the @Immutable annotation we get a different result:

Hibernate: select ... from events where title='My Event'
Hibernate: update events set title=? where event_id=?

This tells us that our object is now mutable (mutable is the default if we don’t include the annotation), so the update is allowed to do its job.

3.2. Deleting an Entity

When it comes to deleting an entity:

@Test
public void deleteEvent() {
    Event event = (Event) session.createQuery(
      "FROM Event WHERE title='My Event'").list().get(0);
    session.delete(event);
    session.getTransaction().commit();
}

We’ll be able to perform the delete regardless of whether the entity is mutable or not:

Hibernate: select ... from events where title='My Event'
Hibernate: delete from events where event_id=?

4. Annotation on Collections

So far we’ve seen what the annotation does to entities, but as we mentioned in the beginning, it can also be applied to collections.

First, let’s add a collection to our Event class:

@Immutable
public Set<String> getGuestList() {
    return guestList;
}

As before, we’ve added the annotation beforehand, so if we go ahead and try to add an element to our collection:

org.hibernate.HibernateException: 
  changed an immutable collection instance: [com.baeldung.entities.Event.guestList#1]

This time we get an exception, because with immutable collections we’re not allowed to add or remove elements.
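Conceptually, this is the same guarantee the JDK’s unmodifiable collection wrappers give (Hibernate raises its own HibernateException rather than the JDK’s UnsupportedOperationException, and the class and variable names below are ours, purely for illustration):

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class UnmodifiableGuestList {

    public static void main(String[] args) {
        // A read-only view: any mutation attempt throws at runtime
        Set<String> guestList = Collections.unmodifiableSet(
          new HashSet<>(Set.of("guest")));

        try {
            guestList.add("another guest");
        } catch (UnsupportedOperationException e) {
            System.out.println("modification rejected");
        }
    }
}
```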

4.1. Deleting Collections

The other scenario in which an immutable collection throws an exception is when we try to delete elements from it, even with the @Cascade annotation set.

So, whenever @Immutable is present and we attempt to delete:

@Test
public void deleteCascade() {
    Event event = (Event) session.createQuery(
      "FROM Event WHERE title='Public Event'").list().get(0);
    String guest = event.getGuestList().iterator().next();
    event.getGuestList().remove(guest);
    session.saveOrUpdate(event);
    session.getTransaction().commit();
}

Output:

org.hibernate.HibernateException: 
  changed an immutable collection instance:
  [com.baeldung.entities.Event.guestList#1]

5. XML Notes

Finally, the configuration can also be done using XML through the mutable=false attribute:

<hibernate-mapping>
    <class name="com.baeldung.entities.Event" mutable="false">
        <id name="id" column="event_id">
            <generator class="increment"/>
        </id>
        <property name="title"/>
    </class>
</hibernate-mapping>

However, since we implemented the examples using annotations, we won’t go into the XML details.

6. Conclusion

In this quick article, we explored Hibernate’s useful @Immutable annotation and how it can help us define better semantics and constraints on our data.

As always, the implementation of all of these examples and snippets can be found in the GitHub project. This is a Maven-based project so it should be easy to import and run.

Spring LDAP Overview

1. Overview

LDAP directory servers are read-optimized hierarchical data stores. Typically, they’re used for storing user-related information required for user authentication and authorization.

In this article, we’ll explore the Spring LDAP APIs to authenticate and search for users, as well as to create and modify users in the directory server. The same set of APIs can be used for managing any other type of entries in LDAP.

2. Maven Dependencies

Let’s begin by adding the required Maven dependency:

<dependency>
    <groupId>org.springframework.ldap</groupId>
    <artifactId>spring-ldap-core</artifactId>
    <version>2.3.1.RELEASE</version>
</dependency>

The latest version of this dependency can be found at spring-ldap-core.

3. Data Preparation

For the purpose of this article, let’s first create the following LDAP entry:

ou=users,dc=example,dc=com (objectClass=organizationalUnit)

Under this node, we will create new users, modify existing users, authenticate existing users and search for information.

4. Spring LDAP APIs

4.1. ContextSource & LdapTemplate Bean Definition

ContextSource is used for creating the LdapTemplate. We will see the use of ContextSource during user authentication in the next section:

@Bean
public LdapContextSource contextSource() {
    LdapContextSource contextSource = new LdapContextSource();
    
    contextSource.setUrl(env.getRequiredProperty("ldap.url"));
    contextSource.setBase(
      env.getRequiredProperty("ldap.partitionSuffix"));
    contextSource.setUserDn(
      env.getRequiredProperty("ldap.principal"));
    contextSource.setPassword(
      env.getRequiredProperty("ldap.password"));
    
    return contextSource;
}

LdapTemplate is used for creation and modification of LDAP entries:

@Bean
public LdapTemplate ldapTemplate() {
    return new LdapTemplate(contextSource());
}

4.2. User Authentication

Let’s now implement a simple piece of logic to authenticate an existing user:

public void authenticate(String username, String password) {
    contextSource
      .getContext(
        "cn=" + 
         username + 
         ",ou=users," + 
         env.getRequiredProperty("ldap.partitionSuffix"), password);
}

4.3. User Creation

Next, let’s create a new user and store an SHA hash of the password in LDAP.

At the time of authentication, the LDAP server generates the SHA hash of the supplied password and compares it to the stored one:

public void create(String username, String password) {
    Name dn = LdapNameBuilder
      .newInstance()
      .add("ou", "users")
      .add("cn", username)
      .build();
    DirContextAdapter context = new DirContextAdapter(dn);

    context.setAttributeValues(
      "objectclass", 
      new String[] 
        { "top", 
          "person", 
          "organizationalPerson", 
          "inetOrgPerson" });
    context.setAttributeValue("cn", username);
    context.setAttributeValue("sn", username);
    context.setAttributeValue
      ("userPassword", digestSHA(password));

    ldapTemplate.bind(context);
}

digestSHA() is a custom method that returns the Base64-encoded SHA hash of the supplied password.

Finally, the bind() method of LdapTemplate is used to create an entry in the LDAP server.
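The article doesn’t show digestSHA() itself; a minimal sketch using only the JDK might look like the following (the class name, the choice of SHA-1, and the {SHA} prefix are our assumptions, following the common convention for LDAP userPassword values):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

public class LdapPasswordUtil {

    // Hashes the password with SHA-1 and Base64-encodes the digest,
    // prefixed with the {SHA} scheme marker used in LDAP userPassword values
    public static String digestSHA(String password) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-1");
            byte[] hash = digest.digest(password.getBytes(StandardCharsets.UTF_8));
            return "{SHA}" + Base64.getEncoder().encodeToString(hash);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-1 not available", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(digestSHA("secret"));
    }
}
```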

4.4. User Modification

We can modify an existing user or entry with the following method:

public void modify(String username, String password) {
    Name dn = LdapNameBuilder.newInstance()
      .add("ou", "users")
      .add("cn", username)
      .build();
    DirContextOperations context 
      = ldapTemplate.lookupContext(dn);

    context.setAttributeValues
      ("objectclass", 
          new String[] 
            { "top", 
              "person", 
              "organizationalPerson", 
              "inetOrgPerson" });
    context.setAttributeValue("cn", username);
    context.setAttributeValue("sn", username);
    context.setAttributeValue("userPassword", 
      digestSHA(password));

    ldapTemplate.modifyAttributes(context);
}

The lookupContext() method is used to find the supplied user.

4.5. User Search

We can search for existing users using search filters:

public List<String> search(String username) {
    return ldapTemplate
      .search(
        "ou=users", 
        "cn=" + username, 
        (AttributesMapper<String>) attrs -> (String) attrs.get("cn").get());
}

The AttributesMapper is used to get the desired attribute value from the entries found. Internally, Spring LdapTemplate invokes the AttributesMapper for all the entries found and creates a list of the attribute values.

5. Testing

spring-ldap-test provides an embedded LDAP server based on ApacheDS 1.5.5. To set up the embedded LDAP server for testing, we need to configure the following Spring bean:

@Bean
public TestContextSourceFactoryBean testContextSource() {
    TestContextSourceFactoryBean contextSource 
      = new TestContextSourceFactoryBean();
    
    contextSource.setDefaultPartitionName(
      env.getRequiredProperty("ldap.partition"));
    contextSource.setDefaultPartitionSuffix(
      env.getRequiredProperty("ldap.partitionSuffix"));
    contextSource.setPrincipal(
      env.getRequiredProperty("ldap.principal"));
    contextSource.setPassword(
      env.getRequiredProperty("ldap.password"));
    contextSource.setLdifFile(
      resourceLoader.getResource(
        env.getRequiredProperty("ldap.ldiffile")));
    contextSource.setPort(
      Integer.valueOf(
        env.getRequiredProperty("ldap.port")));
    return contextSource;
}

Let’s test our user search method with JUnit:

@Test
public void 
  givenLdapClient_whenCorrectSearchFilter_thenEntriesReturned() {
    List<String> users = ldapClient
      .search(SEARCH_STRING);
 
    assertThat(users, Matchers.containsInAnyOrder(USER2, USER3));
}

6. Conclusion

In this article, we have introduced Spring LDAP APIs and developed simple methods for user authentication, user search, user creation and modification in an LDAP server.

As always, the full source code is available in this GitHub project. The tests are created under the Maven profile “live” and hence can be run using the option “-P live”.

Java 9 CompletableFuture API Improvements

1. Introduction

Java 9 comes with some changes to the CompletableFuture class. Such changes were introduced as part of JEP 266 in order to address common complaints and suggestions since its introduction in JDK 8, more specifically, support for delays and timeouts, better support for subclassing and a few utility methods.

Code-wise, the API comes with eight new instance methods and five new static methods. To enable these additions, approximately 1500 out of 2400 lines of code were changed (per OpenJDK).

2. Instance API Additions

As mentioned, the instance API comes with eight new additions; they are:

  1. Executor defaultExecutor()
  2. CompletableFuture<U> newIncompleteFuture()
  3. CompletableFuture<T> copy()
  4. CompletionStage<T> minimalCompletionStage()
  5. CompletableFuture<T> completeAsync(Supplier<? extends T> supplier, Executor executor)
  6. CompletableFuture<T> completeAsync(Supplier<? extends T> supplier)
  7. CompletableFuture<T> orTimeout(long timeout, TimeUnit unit)
  8. CompletableFuture<T> completeOnTimeout(T value, long timeout, TimeUnit unit)

2.1. Method defaultExecutor()

Signature: Executor defaultExecutor()

Returns the default Executor used for async methods that do not specify an Executor.

new CompletableFuture().defaultExecutor()

Subclasses may override this method to return an executor that provides at least one independent thread.

2.2. Method newIncompleteFuture()

Signature: CompletableFuture<U> newIncompleteFuture()

The newIncompleteFuture, also known as the “virtual constructor”, is used to get a new CompletableFuture instance of the same type.

new CompletableFuture().newIncompleteFuture()

This method is especially useful when subclassing CompletableFuture, mainly because it is used internally in almost all methods returning a new CompletionStage, allowing subclasses to control what subtype gets returned by such methods.

2.3. Method copy()

Signature: CompletableFuture<T> copy()

This method returns a new CompletableFuture which:

  • When this gets completed normally, the new one gets completed normally also
  • When this gets completed exceptionally with exception X, the new one is also completed exceptionally with a CompletionException with X as cause

new CompletableFuture().copy()

This method may be useful as a form of “defensive copying”, to prevent clients from completing, while still being able to arrange dependent actions on a specific instance of CompletableFuture.
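A short runnable sketch of the defensive-copying idea (the variable names are ours): completing the copy does not affect the original, while the original still propagates its own result.

```java
import java.util.concurrent.CompletableFuture;

public class CopyDemo {

    public static void main(String[] args) {
        CompletableFuture<String> original = new CompletableFuture<>();
        // Hand the copy out to clients; completing it does not touch the original
        CompletableFuture<String> handout = original.copy();

        handout.complete("client value");
        original.complete("real value");

        System.out.println(original.join()); // real value
        System.out.println(handout.join());  // client value
    }
}
```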

2.4. Method minimalCompletionStage()

Signature: CompletionStage<T> minimalCompletionStage()

This method returns a new CompletionStage which behaves in the exact same way as described by the copy method; however, the new instance throws UnsupportedOperationException on every attempt to retrieve or set the resolved value directly.

new CompletableFuture().minimalCompletionStage()

A new CompletableFuture with all methods available can be retrieved by using the toCompletableFuture method available on the CompletionStage API.
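Both behaviors can be seen in a small sketch (the cast below is only to demonstrate the restriction; class and variable names are ours):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

public class MinimalStageDemo {

    public static void main(String[] args) {
        CompletionStage<String> minimal =
          CompletableFuture.completedFuture("done").minimalCompletionStage();

        // Direct value access on the minimal stage is rejected
        try {
            ((CompletableFuture<String>) minimal).join();
        } catch (UnsupportedOperationException e) {
            System.out.println("direct access is not supported");
        }

        // toCompletableFuture() yields a fully usable CompletableFuture
        System.out.println(minimal.toCompletableFuture().join()); // done
    }
}
```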

2.5. Methods completeAsync()

The completeAsync method should be used to complete the CompletableFuture asynchronously using the value given by the Supplier provided.

Signatures:

CompletableFuture<T> completeAsync(Supplier<? extends T> supplier, Executor executor)
CompletableFuture<T> completeAsync(Supplier<? extends T> supplier)

The difference between these two overloaded methods is the second argument, where the Executor running the task can be specified. If none is provided, the default executor (returned by the defaultExecutor method) will be used.
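For instance, using the single-argument overload (the class name and the supplied value are ours):

```java
import java.util.concurrent.CompletableFuture;

public class CompleteAsyncDemo {

    public static void main(String[] args) {
        CompletableFuture<String> future = new CompletableFuture<>();
        // No executor given, so the supplier runs on defaultExecutor()
        future.completeAsync(() -> "computed asynchronously");

        System.out.println(future.join()); // computed asynchronously
    }
}
```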

2.6. Methods orTimeout()

Signature: CompletableFuture<T> orTimeout(long timeout, TimeUnit unit)

new CompletableFuture().orTimeout(1, TimeUnit.SECONDS)

Resolves the CompletableFuture exceptionally with TimeoutException, unless it is completed before the specified timeout.

2.7. Method completeOnTimeout()

Signature: CompletableFuture<T> completeOnTimeout(T value, long timeout, TimeUnit unit)

new CompletableFuture().completeOnTimeout(value, 1, TimeUnit.SECONDS)

Completes the CompletableFuture normally with the specified value unless it is completed before the specified timeout.

3. Static API Additions

Some utility methods were also added. They are:

  1. Executor delayedExecutor(long delay, TimeUnit unit, Executor executor)
  2. Executor delayedExecutor(long delay, TimeUnit unit) 
  3. <U> CompletionStage<U> completedStage(U value) 
  4. <U> CompletionStage<U> failedStage(Throwable ex) 
  5. <U> CompletableFuture<U> failedFuture(Throwable ex)

3.1. Methods delayedExecutor

Signatures:

Executor delayedExecutor(long delay, TimeUnit unit, Executor executor)
Executor delayedExecutor(long delay, TimeUnit unit)

Returns a new Executor that submits a task to the given base executor after the given delay (or with no delay if the delay is non-positive). Each delay commences upon invocation of the returned executor’s execute method. If no executor is specified, the default executor (ForkJoinPool.commonPool()) will be used.

3.2. Methods completedStage and failedStage

Signatures:

<U> CompletionStage<U> completedStage(U value)
<U> CompletionStage<U> failedStage(Throwable ex)

These utility methods return already-resolved CompletionStage instances, either completed normally with a value (completedStage) or completed exceptionally with the given exception (failedStage).
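A quick sketch of both (class and variable names are ours; note that the minimal stages returned here must be converted via toCompletableFuture() before joining):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

public class ResolvedStagesDemo {

    public static void main(String[] args) {
        CompletionStage<String> completed = CompletableFuture.completedStage("value");
        System.out.println(completed.toCompletableFuture().join()); // value

        CompletionStage<String> failed =
          CompletableFuture.failedStage(new IllegalStateException("boom"));
        // Recover from the failure with a fallback value
        String recovered = failed
          .exceptionally(ex -> "recovered: " + ex.getMessage())
          .toCompletableFuture()
          .join();
        System.out.println(recovered);
    }
}
```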

3.3. Method failedFuture

Signature: <U> CompletableFuture<U> failedFuture(Throwable ex)

The failedFuture method adds the ability to specify an already exceptionally completed CompletableFuture instance.
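A failed future pairs naturally with exceptionally() for recovery; a minimal sketch (names are ours):

```java
import java.util.concurrent.CompletableFuture;

public class FailedFutureDemo {

    public static void main(String[] args) {
        CompletableFuture<String> failed =
          CompletableFuture.failedFuture(new IllegalArgumentException("bad input"));

        // Recover with a fallback value instead of propagating the exception
        String result = failed
          .exceptionally(ex -> "fallback: " + ex.getMessage())
          .join();
        System.out.println(result); // fallback: bad input
    }
}
```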

4. Example Use Cases

In this section, we’ll look at some examples of how to use the new API.

4.1. Delay

This example will show how to delay the completion of a CompletableFuture with a specific value by one second. That can be achieved by using the completeAsync method together with the delayedExecutor.

CompletableFuture<Object> future = new CompletableFuture<>();
future.completeAsync(() -> input, CompletableFuture.delayedExecutor(1, TimeUnit.SECONDS));
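Making the fragment self-contained (input here is just a placeholder value we define ourselves), we can observe the delay with join():

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class DelayDemo {

    public static void main(String[] args) {
        String input = "delayed result";
        long start = System.nanoTime();

        CompletableFuture<Object> future = new CompletableFuture<>();
        future.completeAsync(() -> input,
          CompletableFuture.delayedExecutor(1, TimeUnit.SECONDS));

        Object result = future.join(); // blocks until the delayed supplier runs
        long elapsedMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);

        System.out.println(result);
        System.out.println("elapsed ms: " + elapsedMs); // roughly 1000 or more
    }
}
```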

4.2. Complete with Value on Timeout

Another way to achieve a delayed result is to use the completeOnTimeout method. This example defines a CompletableFuture that will be resolved with a given input if it stays unresolved after 1 second.

CompletableFuture<Object> future = new CompletableFuture<>();
future.completeOnTimeout(input, 1, TimeUnit.SECONDS);

4.3. Timeout

Another possibility is a timeout, which resolves the future exceptionally with a TimeoutException. For example, we can have the CompletableFuture time out after 1 second if it isn’t completed before then.

CompletableFuture<Object> future = new CompletableFuture<>();
future.orTimeout(1, TimeUnit.SECONDS);
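To see the exception in action, the fragment above can be wrapped into a small runnable sketch (the class name is ours):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutDemo {

    public static void main(String[] args) {
        CompletableFuture<Object> future = new CompletableFuture<>();
        future.orTimeout(1, TimeUnit.SECONDS);

        try {
            future.join(); // never completed normally, so the timeout fires
        } catch (CompletionException e) {
            // join() wraps the TimeoutException in a CompletionException
            System.out.println(e.getCause() instanceof TimeoutException); // true
        }
    }
}
```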

5. Conclusion

In conclusion, Java 9 comes with several additions to the CompletableFuture API. It now has better support for subclassing: thanks to the newIncompleteFuture virtual constructor, it is possible to take control over the CompletionStage instances returned by most of the CompletionStage API.

It also has definitely better support for delays and timeouts, as shown previously. The utility methods added follow a sensible pattern, giving CompletableFuture a convenient way to specify resolved instances.

The examples used in this article can be found in our GitHub repository.
