
Introduction to GraphQL


1. Overview

GraphQL is a query language created by Facebook to let client applications describe their data requirements and interactions with an intuitive and flexible syntax.

One of the primary challenges with traditional REST calls is the inability of the client to request a customized (limited or expanded) set of data. In most cases, once the client requests information from the server, it either gets all or none of the fields.

Another difficulty is working with and maintaining multiple endpoints. As a platform grows, the number of endpoints will increase accordingly, so clients often need to ask for data from several different endpoints.

When building a GraphQL server, it is only necessary to have one URL for all data fetching and mutating. Thus, a client can request a set of data by sending a query string, describing what they want, to a server.
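For example, a client that needs only a user's first name could send a query like the following (the id argument and field names here are purely illustrative):

```graphql
{
    user(id: "1") {
        firstName
    }
}
```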

2. Basic GraphQL Nomenclature

Let’s have a look at GraphQL’s basic terminology.

  • Query: is a read-only operation requested to a GraphQL server
  • Mutation: is a read-write operation requested to a GraphQL server
  • Resolver: In GraphQL, the Resolver is responsible for mapping the operation to the code running on the backend that handles the request. It is analogous to an MVC backend in a RESTful application
  • Type: A Type defines the shape of response data that can be returned from the GraphQL server, including fields that are edges to other Types
  • Input: like a Type, but defines the shape of input data that is sent to a GraphQL server
  • Scalar: is a primitive Type, such as a String, Int, Boolean, Float, etc
  • Interface: An Interface will store the names of the fields and their arguments, so GraphQL objects can inherit from it, assuring the use of specific fields
  • Schema: In GraphQL, the Schema manages queries and mutations, defining what is allowed to be executed in the GraphQL server

2.1. Schema Loading

There are two ways of loading a schema into GraphQL server:

  1. by using GraphQL’s Interface Definition Language (IDL)
  2. by using one of the supported programming languages

Let’s demonstrate an example using IDL:

type User {
    firstName: String
}

Now, an example of schema definition using Java code:

GraphQLObjectType userType = newObject()
  .name("User")  
  .field(newFieldDefinition()
    .name("firstName")
    .type(GraphQLString))
  .build();

3. Interface Definition Language

Interface Definition Language (IDL) or Schema Definition Language (SDL) is the most concise way to specify a GraphQL Schema. The syntax is well-defined and will be adopted in the official GraphQL Specification.

For instance, let's see how a GraphQL schema for Users and their Emails could be specified:

schema {
    query: QueryType
}

enum Gender {
    MALE
    FEMALE
}

type User {
    id: String!
    firstName: String!
    lastName: String!
    createdAt: DateTime!
    age: Int! @default(value: 0)
    gender: [Gender]!
    emails: [Email!]! @relation(name: "Emails")
}

type Email {
    id: String!
    email: String!
    default: Int! @default(value: 0)
    user: User @relation(name: "Emails")
}

4. GraphQL-java

GraphQL-java is an implementation based on the specification and the JavaScript reference implementation. Note that it requires at least Java 8 to run properly.

4.1. GraphQL-java Annotations

GraphQL-java also makes it possible to use Java annotations to generate a schema definition without all the boilerplate code required by the traditional IDL approach.

4.2. Dependencies

To create our example, let's first import the required dependency, which relies on the graphql-java-annotations module:

<dependency>
    <groupId>com.graphql-java</groupId>
    <artifactId>graphql-java-annotations</artifactId>
    <version>3.0.3</version>
</dependency>

We'll also use an HTTP library to ease the setup of our application. We are going to use Ratpack (although it could be implemented just as well with Vert.x, Spark, Dropwizard, Spring Boot, etc.).

Let’s also import the Ratpack dependency:

<dependency>
    <groupId>io.ratpack</groupId>
    <artifactId>ratpack-core</artifactId>
    <version>1.4.6</version>
</dependency>

4.3. Implementation

Let’s create our example: a simple API that provides a “CRUDL” (Create, Retrieve, Update, Delete, and List) for users. First, let’s create our User POJO:

@GraphQLName("user")
public class User {

    @GraphQLField
    private Long id;
 
    @GraphQLField
    private String name;
 
    @GraphQLField
    private String email;
  
    // getters, setters, constructors, and helper methods omitted
}

In this POJO, the @GraphQLName("user") annotation indicates that the class is mapped by GraphQL, with each exposed field annotated with @GraphQLField.

Next, we'll create the UserHandler class. This class implements the handler method inherited from the chosen HTTP connector library (in our case, Ratpack), which manages and invokes GraphQL's Resolver feature, redirecting the request (a JSON payload) to the proper query or mutation operation:

@Override
public void handle(Context context) throws Exception {
    context.parse(Map.class)
      .then(payload -> {
          Map<String, Object> parameters = (Map<String, Object>)
            payload.get("parameters");
          ExecutionResult executionResult = graphql
            .execute(payload.get(SchemaUtils.QUERY)
              .toString(), null, this, parameters);
          Map<String, Object> result = new LinkedHashMap<>();
          if (executionResult.getErrors().isEmpty()) {
              result.put(SchemaUtils.DATA, executionResult.getData());
          } else {
              result.put(SchemaUtils.ERRORS, executionResult.getErrors());
              LOGGER.warning("Errors: " + executionResult.getErrors());
          }
          context.render(json(result));
      });
}

Now, the class that will support the query operations, i.e., UserQuery. As mentioned, all methods that retrieve data from the server to the client are managed by this class:

@GraphQLName("query")
public class UserQuery {

    @GraphQLField
    public static User retrieveUser(
     DataFetchingEnvironment env,
      @NotNull @GraphQLName("id") String id) {
        // return user
    }
    
    @GraphQLField
    public static List<User> listUsers(DataFetchingEnvironment env) {
        // return list of users
    }
}

Similarly to UserQuery, now we create UserMutation, which will manage all the operations that intend to change some given data stored on the server side:

@GraphQLName("mutation")
public class UserMutation {
 
    @GraphQLField
    public static User createUser(
      DataFetchingEnvironment env,
      @NotNull @GraphQLName("name") String name,
      @NotNull @GraphQLName("email") String email) {
      //create user information
    }
}

It is worth noticing the annotations on both the UserQuery and UserMutation classes: @GraphQLName("query") and @GraphQLName("mutation"). These annotations define the query and mutation operations, respectively.

With the GraphQL-java server able to run the query and mutation operations, we can use the following JSON payloads to test the request of the client against the server:

  • For the CREATE operation:
{
    "query": "mutation($name: String! $email: String!){
       createUser (name: $name email: $email) { id name email age } }",
    "parameters": {
        "name": "John",
        "email": "john@email.com"
     }
}

As the response from the server for this operation:

{
    "data": {
        "createUser": {
            "id": 1,
            "name": "John",
            "email": "john@email.com"
        }
    } 
}
  • For the RETRIEVE operation:
{
    "query": "query($id: String!){ retrieveUser (id: $id) {name email} }",
    "parameters": {
        "id": 1
    }
}

As the response from the server for this operation:

{
    "data": {
        "retrieveUser": {
            "name": "John",
            "email": "john@email.com"
        }
    }
}

GraphQL lets the client customize the response. So, in the last RETRIEVE operation used as the example, instead of returning the name and email, we can, for instance, return only the email:

{
    "query": "query($id: String!){ retrieveUser (id: $id) {email} }",
    "parameters": {
        "id": 1
    }
}

The GraphQL server will then return only the requested data:

{
    "data": {
        "retrieveUser": {
            "email": "john@email.com"
        }
    }
}

5. Conclusion

GraphQL is an easy and quite attractive way of minimizing complexity between client and server, and an appealing alternative to the REST approach.

As always, the example is available at our GitHub repository.


Difference Between Two Dates in Java


1. Overview

In this quick write-up, we’ll explore multiple possibilities of calculating the difference between two dates in Java.

2. Core Java

2.1. java.util.Date

Let’s start by using the core Java APIs to do the calculation and determine the number of days between the two dates:

@Test
public void givenTwoDatesBeforeJava8_whenDifferentiating_thenWeGetSix()
  throws ParseException {
 
    SimpleDateFormat sdf = new SimpleDateFormat("MM/dd/yyyy", Locale.ENGLISH);
    Date firstDate = sdf.parse("06/24/2017");
    Date secondDate = sdf.parse("06/30/2017");

    long diffInMillies = Math.abs(secondDate.getTime() - firstDate.getTime());
    long diff = TimeUnit.DAYS.convert(diffInMillies, TimeUnit.MILLISECONDS);

    assertEquals(6, diff);
}

2.2. java.time – Since Java 8

Now, calculating the difference is more intuitive if we use LocalDate or LocalDateTime to represent the two dates (with or without time). Note that Duration works with time-based units, so for a date-only LocalDate we use Period instead:

The LocalDate difference:

@Test
public void givenTwoDatesInJava8_whenDifferentiating_thenWeGetSix() {
    LocalDate now = LocalDate.now();
    LocalDate sixDaysBehind = now.minusDays(6);

    Period period = Period.between(now, sixDaysBehind);
    long diff = Math.abs(period.getDays());

    assertEquals(6, diff);
}

The LocalDateTime case:

@Test
public void givenTwoDateTimesInJava8_whenDifferentiating_thenWeGetSix() {
    LocalDateTime now = LocalDateTime.now();
    LocalDateTime sixMinutesBehind = now.minusMinutes(6);

    Duration duration = Duration.between(now, sixMinutesBehind);
    long diff = Math.abs(duration.toMinutes());

    assertEquals(6, diff);
}
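Alternatively, the ChronoUnit enum from the same java.time package can compute the difference directly in the unit we want; a minimal sketch (the class and method names are ours):

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

public class ChronoUnitDemo {

    // Number of whole days between two dates, computed directly with ChronoUnit
    static long daysBetween(LocalDate start, LocalDate end) {
        return Math.abs(ChronoUnit.DAYS.between(start, end));
    }

    public static void main(String[] args) {
        LocalDate now = LocalDate.now();
        LocalDate sixDaysBehind = now.minusDays(6);
        System.out.println(daysBetween(now, sixDaysBehind)); // 6
    }
}
```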

Here's a bit more detail on this API.

3. External Libraries

3.1. JodaTime

We can also do a relatively straightforward implementation with JodaTime:

<dependency>
    <groupId>joda-time</groupId>
    <artifactId>joda-time</artifactId>
    <version>2.9.9</version>
</dependency>

You can get the latest version of Joda-time from Maven Central.

LocalDate case:

@Test
public void givenTwoDatesInJodaTime_whenDifferentiating_thenWeGetSix() {
    LocalDate now = LocalDate.now();
    LocalDate sixDaysBehind = now.minusDays(6);

    Period period = new Period(now, sixDaysBehind);
    long diff = Math.abs(period.getDays());

    assertEquals(6, diff);
}

Similarly, with LocalDateTime it is:

@Test
public void givenTwoDateTimesInJodaTime_whenDifferentiating_thenWeGetSix() {
    LocalDateTime now = LocalDateTime.now();
    LocalDateTime sixMinutesBehind = now.minusMinutes(6);

    Period period = new Period(now, sixMinutesBehind);
    long diff = Math.abs(period.getMinutes());

    assertEquals(6, diff);
}

3.2. Date4J

Date4j also provides a straightforward and simple implementation, with the note that, in this case, we need to explicitly provide a TimeZone.

Let’s start with the Maven dependency:

<dependency>
    <groupId>com.darwinsys</groupId>
    <artifactId>hirondelle-date4j</artifactId>
    <version>1.5.1</version>
</dependency>

Here’s a quick test working with the standard DateTime:

@Test
public void givenTwoDatesInDate4j_whenDifferentiating_thenWeGetSix() {
    DateTime now = DateTime.now(TimeZone.getDefault());
    DateTime sixDaysBehind = now.minusDays(6);
 
    long diff = Math.abs(now.numDaysFrom(sixDaysBehind));

    assertEquals(6, diff);
}

4. Conclusion

In this tutorial, we illustrated a few ways of calculating the difference between dates (with and without time), both in plain Java as well as using external libraries.

The full source code of the article is available over on GitHub.

Runnable vs. Callable in Java


1. Overview

Since the early days of Java, multithreading has been a major aspect of the language. Runnable is the core interface provided for representing multi-threaded tasks and Callable is an improved version of Runnable that was added in Java 1.5.

In this article, we’ll explore the differences and the applications of both interfaces.

2. Execution Mechanism

Both interfaces are designed to represent a task that can be executed by multiple threads. A Runnable task can be run using the Thread class or an ExecutorService, whereas a Callable can be run only using the latter.
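To make the distinction concrete, here's a minimal sketch (the class and method names are ours): a Runnable handed straight to a Thread, and a Callable submitted to an ExecutorService:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ExecutionDemo {

    // A Callable must go through an ExecutorService; returns the task's result
    static int runCallable() {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            Callable<Integer> callable = () -> 42;
            Future<Integer> future = executor.submit(callable);
            return future.get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            executor.shutdown();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // A Runnable, by contrast, can be handed straight to a Thread
        Thread thread = new Thread(() -> System.out.println("running"));
        thread.start();
        thread.join();

        System.out.println(runCallable()); // 42
    }
}
```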

3. Return Values

Let’s have a deeper look at the way these interfaces handle return values.

3.1. With Runnable

The Runnable interface is a functional interface and has a single run() method that doesn't accept any parameters or return any value. This is suitable for situations where we aren't looking for a result of the thread execution, for example, logging incoming events:

public interface Runnable {
    public void run();
}

Let’s understand this with an example:

public class EventLoggingTask implements Runnable {
    private Logger logger
      = LoggerFactory.getLogger(EventLoggingTask.class);

    @Override
    public void run() {
        logger.info("Message");
    }
}

In this example, the thread will just log a message. There's no value returned from the task; it can be launched using an ExecutorService:

public void executeTask() {
    ExecutorService executorService = Executors.newSingleThreadExecutor();
    Future<?> future = executorService.submit(new EventLoggingTask());
    executorService.shutdown();
}

In this case, the Future object will not hold any value.

3.2. With Callable

The Callable interface is a generic interface containing a single call() method – which returns a generic value V:

public interface Callable<V> {
    V call() throws Exception;
}

Let’s have a look at calculating the factorial of a number:

public class FactorialTask implements Callable<Integer> {
    int number;

    // standard constructors

    public Integer call() throws InvalidParameterException {
        int fact = 1;
        // ...
        for(int count = number; count > 1; count--) {
            fact = fact * count;
        }

        return fact;
    }
}

The result of call() method is returned within a Future object:

@Test
public void whenTaskSubmitted_ThenFutureResultObtained() throws Exception {
    FactorialTask task = new FactorialTask(5);
    Future<Integer> future = executorService.submit(task);
 
    assertEquals(120, future.get().intValue());
}

4. Exception Handling

Let’s see how suitable they are for exception handling.

4.1. With Runnable

Since the run() method signature does not have a throws clause specified, there is no way to propagate checked exceptions further.
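A common workaround, shown here as our own sketch (WrappingTask and its mightThrow() method are hypothetical, not from the article's code), is to wrap the checked exception in an unchecked one inside run():

```java
import java.io.IOException;
import java.io.UncheckedIOException;

public class WrappingTask implements Runnable {

    @Override
    public void run() {
        try {
            mightThrow();
        } catch (IOException e) {
            // run() can't declare checked exceptions, so we rethrow unchecked
            throw new UncheckedIOException(e);
        }
    }

    // A hypothetical operation that declares a checked exception
    private void mightThrow() throws IOException {
        throw new IOException("boom");
    }

    public static void main(String[] args) {
        try {
            new WrappingTask().run();
        } catch (UncheckedIOException e) {
            // The original checked exception is still available as the cause
            System.out.println(e.getCause().getMessage());
        }
    }
}
```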

4.2. With Callable

Callable's call() method declares a throws Exception clause, so we can easily propagate checked exceptions further:

public class FactorialTask implements Callable<Integer> {
    // ...
    public Integer call() throws InvalidParameterException {

        if(number < 0) {
            throw new InvalidParameterException("Number should be positive");
        }
    // ...
    }
}

When running a Callable using an ExecutorService, the exceptions are collected in the Future object, which can be checked by making a call to the Future.get() method. This call will throw an ExecutionException, which wraps the original exception:

@Test(expected = ExecutionException.class)
public void whenException_ThenCallableThrowsIt() throws Exception {
 
    FactorialCallableTask task = new FactorialCallableTask(-5);
    Future<Integer> future = executorService.submit(task);
    Integer result = future.get().intValue();
}

In the above test, the ExecutionException is being thrown as we are passing an invalid number. We can call the getCause() method on this exception object to get the original checked exception.
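As a sketch of that unwrapping (our own code; we use the built-in IllegalArgumentException in place of the article's custom exception):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class GetCauseDemo {

    // Returns the message of the original exception wrapped by ExecutionException
    static String originalMessage() {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<Integer> future = executor.submit((Callable<Integer>) () -> {
            throw new IllegalArgumentException("Number should be positive");
        });
        try {
            future.get();
            return null;
        } catch (ExecutionException e) {
            // getCause() yields the exception thrown inside call()
            return e.getCause().getMessage();
        } catch (InterruptedException e) {
            return null;
        } finally {
            executor.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(originalMessage());
    }
}
```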

If we don't call the get() method of the Future class, then the exception thrown by the call() method will not be reported back, and the task will still be marked as completed:

@Test
public void whenException_ThenCallableDoesntThrowItIfGetIsNotCalled() {
    FactorialCallableTask task = new FactorialCallableTask(-5);
    Future<Integer> future = executorService.submit(task);
 
    assertEquals(true, future.isDone());
}

The above test will pass successfully even though we’ve thrown an exception for the negative values of the parameter to FactorialCallableTask.

5. Conclusion

In this article, we’ve explored the differences between the Runnable and Callable interfaces.

As always, the complete code for this article is available over on GitHub.

Introduction to EJB JNDI Lookup on WildFly Application Server


1. Overview

Enterprise Java Beans (EJB) are the core part of the Java EE specification, aimed at simplifying the development of distributed enterprise-level applications. The life-cycle of EJBs is handled by an application server, such as JBoss WildFly or Oracle GlassFish.

EJBs provide a robust programming model that facilitates the implementation of enterprise-level software modules, as it’s up to the application server to handle non-business logic related issues such as transaction handling, component lifecycle management or dependency injection.

Furthermore, we’ve already published two articles covering EJB’s basic concepts, so feel free to check them out here and here.

In this tutorial, we’ll show how to implement a basic EJB module on WildFly and call an EJB from a remote client via a JNDI.

2. Implementing the EJB Module

Business logic is implemented by either one or multiple local/remote business interfaces (also known as local/remote views) or directly through classes that don’t implement any interface (non-view interfaces).

It’s worth noting that local business interfaces are used when the bean is going to be accessed from clients that reside in the same environment, i.e. the same EAR or WAR file, whereas remote business interfaces are required when the bean will be accessed from a different environment, i.e. a different JVM or application server.

Let’s create a basic EJB module, which will be made up of just one bean. The bean’s business logic will be straightforward, limited to converting a given String to its uppercase version.

2.1. Defining a Remote Business Interface

Let’s first define one single remote business interface, decorated with the @Remote annotation. This is mandatory, according to the EJB 3.x specification, as the bean is going to be accessed from a remote client:

@Remote
public interface TextProcessorRemote {
    String processText(String text);
}

2.2. Defining a Stateless Bean

Next, let’s realize the business logic by implementing the aforementioned remote interface:

@Stateless
public class TextProcessorBean implements TextProcessorRemote {
    public String processText(String text) {
        return text.toUpperCase();
    }
}

The TextProcessorBean class is a simple Java class, decorated with the @Stateless annotation.

Stateless beans, by definition, don't maintain any conversational state with their clients, even though they can maintain instance state across different requests. Their counterpart, stateful beans, do preserve their conversational state, and as such, they're more expensive for the application server to create.

Since the above class doesn't have any instance state, it can be made stateless. If it did have state, sharing it across different client requests wouldn't make sense at all.

The bean’s behavior is deterministic, i.e., it has no side effects, as a well-designed bean should be: it just takes an input String and returns the uppercase version of it.

2.3. Maven Dependencies

Next, we need to add the javaee-api Maven artifact to the module, which provides all the Java EE 7 specification APIs, including the ones required for EJBs:

<dependency>
    <groupId>javax</groupId>
    <artifactId>javaee-api</artifactId>
    <version>7.0</version>
    <scope>provided</scope>
</dependency>

At this point, we’ve managed to create a basic, yet functional EJB module. To make it available to all potential clients, we have to add the artifact into our local Maven repository as a JAR file.

2.4. Installing the EJB Module into the Local Repository

There are several methods for achieving this. The most straightforward one consists of executing the Maven clean and install build phases:

mvn clean install

This command installs the EJB module as ejbmodule-1.0.jar (or any arbitrary artifact id specified in the pom.xml file), into our local repository. For further information on how to install a local JAR with Maven, check out this article.

Assuming that the EJB module has been correctly installed into our local repository, the next step is to develop a remote client application that makes use of our TextProcessorBean API.

3. Remote EJB Client

We’ll keep the remote EJB client’s business logic extremely simple: first, it performs a JNDI lookup to get a TextProcessorBean proxy. After that, it invokes the proxy’s processText() method.

3.1. Maven Dependencies

We need to include the following Maven artifacts for the EJB client to work as expected:

<dependency>
    <groupId>javax</groupId>
    <artifactId>javaee-api</artifactId>
    <version>7.0</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.wildfly</groupId>
    <artifactId>wildfly-ejb-client-bom</artifactId>
    <version>10.1.0.Final</version>
</dependency>
<dependency>
    <groupId>com.baeldung.ejbmodule</groupId>
    <artifactId>ejbmodule</artifactId>
    <version>1.0</version>
</dependency>

While it’s pretty obvious why we include the javaee-api artifact, the inclusion of wildfly-ejb-client-bom is not. The artifact is required for performing remote EJB invocations on WildFly.

Last but not least, we need to make the previous EJB module available to the client, so that’s why we added the ejbmodule dependency too.

3.2. EJB Client Class

Considering that the EJB client calls a proxy of TextProcessorBean, we’ll be very pragmatic and name the client class TextApplication:

public class TextApplication {

    public static void main(String[] args) throws NamingException {
        TextProcessorRemote textProcessor = EJBFactory
          .createTextProcessorBeanFromJNDI("ejb:");
        System.out.print(textProcessor.processText("sample text"));
    }

    private static class EJBFactory {

        private static TextProcessorRemote createTextProcessorBeanFromJNDI
          (String namespace) throws NamingException {
            return lookupTextProcessorBean(namespace);
        }

        private static TextProcessorRemote lookupTextProcessorBean
          (String namespace) throws NamingException {
            Context ctx = createInitialContext();
            String appName = "";
            String moduleName = "EJBModule";
            String distinctName = "";
            String beanName = TextProcessorBean.class.getSimpleName();
            String viewClassName = TextProcessorRemote.class.getName();
            return (TextProcessorRemote) ctx.lookup(namespace 
              + appName + "/" + moduleName 
              + "/" + distinctName + "/" + beanName + "!" + viewClassName);
        }

        private static Context createInitialContext() throws NamingException {
            Properties jndiProperties = new Properties();
            jndiProperties.put(Context.INITIAL_CONTEXT_FACTORY, 
              "org.jboss.naming.remote.client.InitialContextFactory");
            jndiProperties.put(Context.URL_PKG_PREFIXES, 
              "org.jboss.ejb.client.naming");
            jndiProperties.put(Context.PROVIDER_URL, 
               "http-remoting://localhost:8080");
            jndiProperties.put("jboss.naming.client.ejb.context", true);
            return new InitialContext(jndiProperties);
        }
    }
}

Simply put, all the TextApplication class does is retrieve the bean proxy and call its processText() method with a sample string.

The actual lookup is performed by the nested class EJBFactory, which first creates a JNDI InitialContext instance, then passes the required JNDI parameters to the constructor, and finally uses it for looking up the bean proxy.

Notice that the lookup is performed by using WildFly’s proprietary “ejb:” namespace. This optimizes the lookup process, as the client defers the connection to the server until the proxy is explicitly invoked.

It's worth noting as well that it's possible to look up the bean proxy without resorting to the "ejb:" namespace at all. However, we'd miss all the additional benefits of lazy network connections, thus making the client a lot less performant.

3.3. Setting Up the EJB Context

The client should know what host and port to establish a connection with to perform the bean lookup. To this extent, the client requires setting up the proprietary WildFly EJB context, which is defined with the jboss-ejb-client.properties file placed in its classpath, usually under the src/main/resources folder:

endpoint.name=client-endpoint
remote.connectionprovider.create.options.org.xnio.Options.SSL_ENABLED=false
remote.connections=default
remote.connection.default.host=127.0.0.1
remote.connection.default.port=8080
remote.connection.default.connect.options.org.xnio.Options
  .SASL_POLICY_NOANONYMOUS=false
remote.connection.default.username=myusername
remote.connection.default.password=mypassword

The file is pretty self-explanatory, as it provides all the parameters required for establishing a connection to WildFly, including the default number of remote connections, the default host and port, and the user credentials. In this case, the connection isn't encrypted, but it can be once SSL is enabled.

The last thing to take into account is that if the connection requires authentication, it’s necessary to add a user to WildFly via the add-user.sh/add-user.bat utility.

4. Conclusion

Performing EJB lookups on WildFly is straightforward, as long as we strictly adhere to the outlined process.

As usual, all the examples included in this article are available on GitHub here and here.

Introduction to javax.measure


1. Overview

In this article, we’ll introduce the javax.measure library – which provides a unified way of representing measures and units in Java.

While working with a program containing physical quantities, we need to remove the uncertainty about units used. It’s essential that we manage both the number and its unit to prevent errors in calculations.

JSR-275, also known as the javax.measure library, helps us save development time and, at the same time, makes the code more readable.

2. Maven Dependency

Let’s simply start with the Maven dependency to pull in the library:

<dependency>
    <groupId>javax.measure</groupId>
    <artifactId>jsr-275</artifactId>
    <version>0.9.1</version>
</dependency>

The latest version can be found over on Maven Central.

3. Exploring javax.measure

Let’s have a look at the example where we want to store water in a tank.

The legacy implementation would look like this:

public class WaterTank {
    public void setWaterQuantity(double quantity);
}

As we can see, the above code does not mention the unit of quantity of water and is not suitable for precise calculations because of the presence of the double type.

If a developer mistakenly passes the value with a different unit of measure than the one we’re expecting, it can lead to serious errors in calculations. Such errors are very hard to detect and resolve.

javax.measure provides us with Measure/Measurable, which resolves this confusion and leaves these kinds of errors out of our program’s scope.

3.1. Simple Example

Now, let’s explore and see how this can be useful in our example.

As mentioned earlier, javax.measure contains a dedicated abstract class for representing measurements, Measure, made up simply of a numeric value and a unit, and an interface, Measurable, which represents a measurable, countable, or comparable quantity, e.g., altitude or delay.

We can define the Measure object, which should store the quantity of water:

public class WaterTank {
    public void setWaterQuantity(Measure<Volume> quantity);
}

We can also implement Measurable, to achieve the same thing:

public class WaterTank implements Measurable<Volume> { ... }

Measurable is equivalent to Number and provides methods for conversion to primitive types.

Let’s see an example to set the value for the quantity of water:

@Test
public void givenMeasure_whenGetUnitAndConvertValue_thenSuccess() {
    waterTank.setQuantity(Measure.valueOf(9.2, LITRE));
    Measure<Volume> waterQuantity = waterTank.getQuantity();  
   
    assertEquals(LITRE, waterQuantity.getUnit());
    assertEquals(9.2, waterQuantity.getValue().doubleValue(), 0.0f);
}

We can also convert this Volume in Litre to any other unit quickly:

double volumeInMilliLitre = waterQuantity.doubleValue(SI.MILLI(LITRE));
 
assertEquals(9200.0, volumeInMilliLitre, 0.0f);

But, when we try to convert the amount of water into another unit – which is not of type Volume, we get a compilation error:

// Compilation Error 
double volumeInLitre = waterQuantity.doubleValue(SI.KILOGRAM);

There’s no restriction on which one to use out of Measure and Measurable. Measurable is more flexible, whereas Measure is straightforward and easy to use.

Just note that, if we want to retrieve the original numeric value with the original unit, Measure is the only choice.

3.2. Class Parameterization

To maintain the dimension consistency, the framework naturally takes advantage of generics.

Classes and interfaces are parameterized by their quantity type, which makes it possible to have our units checked at compile time. The compiler will give an error or warning based on what it can identify:

Unit<Length> inch = CENTI(METER).times(3.64); // OK
Unit<Length> inch = CENTI(LITRE).times(3.64); // Compile error

There’s always a possibility of bypassing the type check using the asType() method:

Unit<Length> inch = CENTI(METER).times(2.54).asType(Length.class);

We can also use a wildcard if we are not sure of the type of quantity:

Unit<?> kelvinPerSec = KELVIN.divide(SECOND);

4. Unit Conversion

Units can be retrieved from SystemOfUnits. There are some subclasses of this: SI, NonSI, etc. We can also create an entirely new custom unit or create a unit by applying algebraic operations on existing units.

The benefit of using a standard unit is that we don’t run into the conversion pitfalls.

The SI class provides prefixes, or multipliers, like KILO(Unit<Q> unit) and CENTI(Unit<Q> unit), which are equivalent to multiplying and dividing by a power of 10, respectively.

For example, we can define “Kilometer” and “Centimeter” as:

Unit<Length> kilometer = SI.KILO(METER);
Unit<Length> centimeter = SI.CENTI(METER);

These can be used when a unit we want is not available directly.

4.1. Custom Units

In any case, if a unit doesn’t exist in the SI or NonSI system of units, we can create new units with new symbols:

  • AlternateUnit – a new unit with the same dimension but different symbol and nature
  • CompoundUnit – a combination of several units

Let’s create some custom units using these classes. An example of AlternateUnit for pressure:

@Test
public void givenMeasure_whenAlternateMeasure_ThenGetAlternateMeasure() {
    Unit<Pressure> PASCAL = NEWTON.divide(METER.pow(2))
     .alternate("Pa");
 
    assertTrue(Unit.valueOf("Pa").equals(PASCAL));
}

Similarly, an example of CompoundUnit and its conversion:

@Test
public void givenMeasure_whenCompoundMeasure_ThenGetCompoundMeasure() {
    Unit<Duration> HOUR_MINUTE_SECOND = HOUR.compound(MINUTE)
      .compound(SECOND);
    Measure<Duration> duration = Measure.valueOf(12345, SECOND);
 
    assertEquals("3h25min45s",duration.to(HOUR_MINUTE_SECOND).toString());
}

Here, we have created a compound unit for the duration which consists of hours, minutes and seconds.

The framework also provides a UnitConverter class, which helps us convert one unit to another, or create a new derived unit called TransformedUnit.

Let’s see an example to turn the unit of a double value, from miles to kilometers:

@Test
public void givenMiles_whenConvertToKilometer_ThenConverted() {
    double distanceInMiles = 50.0;
    UnitConverter mileToKilometer = MILE.getConverterTo(KILO(METER));
    double distanceInKilometers = mileToKilometer.convert(distanceInMiles);
 
    assertEquals(80.4672, distanceInKilometers, 0.00f);
}

Some units are location-specific and some are not. To parse and format both these types, we have two abstract classes derived from java.util.Format – UnitFormat and MeasureFormat.

Both of these classes provide the locale-sensitive instances as well as locale-insensitive instances to facilitate unambiguous electronic communication of quantities with their units.

The UnitFormat recognizes some of the SI prefixes, which are used to form decimal multiples and submultiples of SI units:

@Test
public void givenSymbol_WhenCompareToSystemUnit_ThenSuccess() {
    assertTrue(Unit.valueOf("kW")
      .equals(SI.KILO(SI.WATT)));
    assertTrue(Unit.valueOf("ms").equals(SI.SECOND.divide(1000)));
}

Note that the Unit.valueOf(String) and Unit.toString() methods are locale-sensitive. Thus, they should be avoided when dealing with locale-specific units.

5. Conclusion

In this article, we saw that javax.measure gives us a convenient measurement model. And, apart from the usage of Measure and Measurable, we also saw how convenient it is to convert one unit to another, in a number of ways.

For further information, you can always check out documentation here.

And, as always, the entire code is available over on GitHub.

Spring Yarg Integration

1. Overview

Yet Another Report Generator (YARG) is an open source reporting library for Java, developed by Haulmont. It allows creating templates in the most common formats (.doc, .docx, .xls, .xlsx, .html, .ftl, .csv), or in custom text formats, and filling them with data loaded by SQL, Groovy, or JSON.

In this article, we’re going to demonstrate how to use a Spring @RestController that outputs a .docx document with JSON-loaded data.

2. Setting up the Example

In order to start using YARG, we need to add the following dependencies to our pom:

<repositories>
    <repository>
        <snapshots>
            <enabled>false</enabled>
        </snapshots>
        <id>bintray-cuba-platform-main</id>
        <name>bintray</name>
        <url>http://dl.bintray.com/cuba-platform/main</url>
    </repository>
</repositories>
...
<dependency> 
    <groupId>com.haulmont.yarg</groupId> 
    <artifactId>yarg</artifactId> 
    <version>2.0.4</version> 
</dependency>

Next, we need a template for our data; we’re going to use a simple Letter.docx:

${Main.title}

Hello ${Main.name},

${Main.content}

Notice how YARG uses a markup/templating language – which allows the insertion of content in the different sections. These sections are divided in terms of the data group that they belong to.

In this example, we have a “Main” group that contains the title, name, and content of the letter.

These groups are called ReportBand in YARG and they are very useful for separating the different types of data you can have.
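To make the grouping idea concrete, here is a rough, hand-rolled sketch of how a ${Group.field} placeholder can be resolved from nested maps. This is purely an illustration of the concept; YARG’s actual template engine is far more capable:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PlaceholderSketch {

    // Resolves ${Group.field} tokens from a map of band name -> field map.
    // An illustration of the placeholder idea only, not YARG's implementation.
    static String fill(String template, Map<String, Map<String, String>> bands) {
        Matcher m = Pattern.compile("\\$\\{(\\w+)\\.(\\w+)}").matcher(template);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            String value = bands.getOrDefault(m.group(1), Map.of())
              .getOrDefault(m.group(2), "");
            m.appendReplacement(sb, Matcher.quoteReplacement(value));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, Map<String, String>> bands =
          Map.of("Main", Map.of("name", "Baeldung"));
        System.out.println(fill("Hello ${Main.name},", bands));
    }
}
```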

3. Integrating Spring With YARG

One of the best ways to use a report generator is to create a service that can return the document for us.

And so, we’ll use Spring and implement a simple @RestController that will be responsible for reading the template, getting the JSON, loading it into the document and returning a formatted .docx.

First, let’s create a DocumentController:

@RestController
public class DocumentController {

    @GetMapping("/generate/doc")
    public void generateDocument(HttpServletResponse response)
      throws IOException {
    }
}

This will expose the creation of the document as a service.

Now we will add the loading logic for the template:

ReportBuilder reportBuilder = new ReportBuilder();
ReportTemplateBuilder reportTemplateBuilder = new ReportTemplateBuilder()
  .documentPath("./src/main/resources/Letter.docx")
  .documentName("Letter.docx")
  .outputType(ReportOutputType.docx)
  .readFileFromPath();
reportBuilder.template(reportTemplateBuilder.build());

The ReportBuilder class is going to be responsible for the creation of the report, grouping up the template and the data. ReportTemplateBuilder loads our previously defined Letter.docx template by specifying the path, name, and output-type of the document.

Then we’ll add the loaded template to the report builder.

Now we need to define the data that is going to be inserted in the document; this will be a Data.json file with the following content:

{
    "main": {
        "title" : "INTRODUCTION TO YARG",
        "name" : "Baeldung",
        "content" : "This is the content of the letter, can be anything we like."
    }
}

This is a simple JSON structure with a “main” object containing the title, name, and content that our template needs.

Now, let’s continue to load the data into our ReportBuilder:

BandBuilder bandBuilder = new BandBuilder();
String json = FileUtils.readFileToString(
  new File("./src/main/resources/Data.json"));
ReportBand main = bandBuilder.name("Main")
  .query("Main", "parameter=param1 $.main", "json")
  .build();
reportBuilder.band(main);
Report report = reportBuilder.build();

Here we define a BandBuilder in order to create a ReportBand, which is the abstraction that YARG uses for the groups of data that we defined earlier in the template document.

We can see that we give the band the exact same name as the template section, “Main”. Then we use the query method to find that section and declare a parameter that will be used to locate the data needed to fill the template.

It’s important to note that YARG uses JsonPath to traverse the JSON, which is why we see this “$.main” syntax.

Next, let’s specify in the query that the format of the data is “json”, then add the band to the report and finally, build it.

The last step is to define the Reporting object, which is responsible for inserting the data into the template and generating the final document:

Reporting reporting = new Reporting();
reporting.setFormatterFactory(new DefaultFormatterFactory());
reporting.setLoaderFactory(
  new DefaultLoaderFactory().setJsonDataLoader(new JsonDataLoader()));
response.setContentType(
 "application/vnd.openxmlformats-officedocument.wordprocessingml.document");
reporting.runReport(
  new RunParams(report).param("param1", json),
  response.getOutputStream());

We use a DefaultFormatterFactory that supports the common formats listed at the beginning of the article. After that, we set the JsonDataLoader that is going to be responsible for parsing the JSON.

In the final step, we set the appropriate content type for the .docx format and run the report. This will wire up the JSON data and insert it into the template outputting the result into the response output stream.

Now we can access the /generate/doc URL to download the document and check the result in our generated .docx.

4. Conclusion

In this article, we showed how we can easily integrate YARG with Spring and use its powerful API to create documents in a simple way.

We used JSON as the data input, but Groovy and SQL are also supported.

If you want to learn more about it you can find the documentation here.

And as always, you can find the complete example over on GitHub.

Introduction to Vavr’s Validation API

1. Overview

Validation is a frequently occurring task in Java applications, and hence a lot of effort has been put into the development of validation libraries.

Vavr (formerly known as Javaslang) provides a full-fledged validation API. It allows us to validate data in a straightforward manner, by using an object-functional programming style. If you want to peek at what this library offers out of the box, feel free to check this article.

In this tutorial, we take an in-depth look at the library’s validation API and learn how to use its most relevant methods.

2. The Validation Interface

Vavr’s validation interface is based on a functional programming concept known as an applicative functor. It executes a sequence of functions while accumulating the results, even if some or all of these functions fail during the execution chain.

The library’s applicative functor is built upon the implementers of its Validation interface. This interface provides methods for accumulating validation errors and validated data, allowing us to process both of them as a batch.
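To see what “accumulating” means in plain Java (a hand-rolled sketch, not Vavr’s API): every check runs, and all failures are collected instead of stopping at the first one:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class AccumulatingValidation {

    record Check<T>(Predicate<T> test, String error) {}

    // Runs every check and accumulates the errors instead of throwing.
    static <T> List<String> validate(T value, List<Check<T>> checks) {
        List<String> errors = new ArrayList<>();
        for (Check<T> check : checks) {
            if (!check.test().test(value)) {
                errors.add(check.error());
            }
        }
        return errors;
    }

    public static void main(String[] args) {
        List<Check<String>> checks = List.of(
          new Check<>(s -> !s.isBlank(), "must not be blank"),
          new Check<>(s -> s.contains("@"), "must contain @"));
        System.out.println(validate(" ", checks)); // both failures reported
    }
}
```

Vavr’s Validation wraps this idea in a richer, type-safe API, as we’ll see next.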

3. Validating User Input

Validating user input (e.g., data collected from a web layer) is smooth using the validation API, as it boils down to creating a custom validation class that validates the data while accumulating resulting errors if any.

Let’s validate a user’s name and email, which have been submitted via a login form. First, we need to include Vavr’s Maven artifact into the pom.xml file:

<dependency>
    <groupId>io.vavr</groupId>
    <artifactId>vavr</artifactId>
    <version>0.9.0</version>
</dependency>

Next, let’s create a domain class that models user objects:

public class User {
    private String name;
    private String email;
    
    // standard constructors, setters and getters, toString
}

Finally, let’s define our custom validator:

public class UserValidator {
    private static final String NAME_PATTERN = ...
    private static final String NAME_ERROR = ...
    private static final String EMAIL_PATTERN = ...
    private static final String EMAIL_ERROR = ...
	
    public Validation<Seq<String>, User> validateUser(
      String name, String email) {
        return Validation
          .combine(
            validateField(name, NAME_PATTERN, NAME_ERROR),
            validateField(email, EMAIL_PATTERN, EMAIL_ERROR))
          .ap(User::new);
    }
	
    private Validation<String, String> validateField
      (String field, String pattern, String error) {
 
        return CharSeq.of(field)
          .replaceAll(pattern, "")
          .transform(seq -> seq.isEmpty() 
            ? Validation.valid(field) 
            : Validation.invalid(error));		
    }
}

The UserValidator class validates the supplied name and email individually with the validateField() method. In this case, this method performs a typical regular expression based pattern matching.

The essence of this example is the use of the valid(), invalid(), and combine() methods.

4. The valid(), invalid() and combine() Methods

If the supplied name and email match the given regular expressions, the validateField() method calls valid(). This method returns an instance of Validation.Valid. Conversely, if the values are invalid, the counterpart invalid() method returns an instance of Validation.Invalid.

This simple mechanism, based on creating different Validation instances depending on the validation results, should give us at least a basic idea of how to process the results (more on this in section 5).

The most relevant facet of the validation process is the combine() method. Internally, this method uses the Validation.Builder class, which allows us to combine up to 8 different Validation instances that can be computed with different methods:

static <E, T1, T2> Builder<E, T1, T2> combine(
  Validation<E, T1> validation1, Validation<E, T2> validation2) {
    Objects.requireNonNull(validation1, "validation1 is null");
    Objects.requireNonNull(validation2, "validation2 is null");
    return new Builder<>(validation1, validation2);
}

The simplest Validation.Builder class takes two validation instances:

final class Builder<E, T1, T2> {

    private Validation<E, T1> v1;
    private Validation<E, T2> v2;

    // standard constructors

    public <R> Validation<Seq<E>, R> ap(Function2<T1, T2, R> f) {
        return v2.ap(v1.ap(Validation.valid(f.curried())));
    }

    public <T3> Builder3<E, T1, T2, T3> combine(
      Validation<E, T3> v3) {
        return new Builder3<>(v1, v2, v3);
    }
}

Validation.Builder, along with the ap(Function) method, returns a single result holding the combined validation results. If all results are valid, the ap(Function) method maps the results onto a single value. This value is stored in a Valid instance by using the function specified in its signature.

In our example, if the supplied name and email are valid, a new User object is created. Of course, it is possible to do something entirely different with a valid result, e.g., store it in a database, send it by email, and so forth.

5. Processing Validation Results

It’s pretty easy to implement different mechanisms for processing validation results. But how do we validate data in the first place? To this extent, we use the UserValidator class:

UserValidator userValidator = new UserValidator(); 
Validation<Seq<String>, User> validation = userValidator
  .validateUser("John", "john@domain.com");

Once an instance of Validation is obtained, we can leverage the flexibility of the validation API and process results in several ways.

Let’s elaborate on the most commonly encountered approaches.

5.1. The Valid and Invalid Instances

This approach is the simplest one by far. It consists of checking validation results with the Valid and Invalid instances:

@Test
public void 
  givenInvalidUserParams_whenValidated_thenInvalidInstance() {
    assertThat(
      userValidator.validateUser(" ", "no-email"), 
      instanceOf(Invalid.class));
}
	
@Test
public void 
  givenValidUserParams_whenValidated_thenValidInstance() {
    assertThat(
      userValidator.validateUser("John", "john@domain.com"), 
      instanceOf(Valid.class));
}

Rather than checking the validity of results against the Valid and Invalid instances, we can go one step further and use the isValid() and isInvalid() methods.

5.2. The isValid() and isInvalid() APIs

Using the tandem isValid() / isInvalid() is analogous to the previous approach, with the difference that these methods return true or false, depending on the validation results:

@Test
public void 
  givenInvalidUserParams_whenValidated_thenIsInvalidIsTrue() {
    assertTrue(userValidator
      .validateUser("John", "no-email")
      .isInvalid());
}

@Test
public void 
  givenValidUserParams_whenValidated_thenIsValidMethodIsTrue() {
    assertTrue(userValidator
      .validateUser("John", "john@domain.com")
      .isValid());
}

The Invalid instance contains all the validation errors. They can be fetched with the getError() method:

@Test
public void 
  givenInValidUserParams_withGetErrorMethod_thenGetErrorMessages() {
    assertEquals(
      "Name contains invalid characters, Email must be a well-formed email address", 
      userValidator.validateUser("John", "no-email")
        .getError()
        .intersperse(", ")
        .fold("", String::concat));
 }

Conversely, if the results are valid, a User instance can be grabbed with the get() method:

@Test
public void 
  givenValidUserParams_withGetMethod_thenGetUserInstance() {
    assertThat(userValidator.validateUser("John", "john@domain.com")
      .get(), instanceOf(User.class));
 }

This approach works as expected, but the code still looks pretty verbose and lengthy. We can compact it further using the toEither() method.

5.3. The toEither() API

The toEither() method constructs Left and Right instances of the Either interface. This complementary interface has several convenience methods that can be used for shortening the processing of validation results.

If the results are valid, the result is stored in the Right instance. In our example, this would amount to a valid User object. Conversely, if the results are invalid, the errors are stored in the Left instance:

@Test
public void 
  givenValidUserParams_withtoEitherMethod_thenRightInstance() {
    assertThat(userValidator.validateUser("John", "john@domain.com")
      .toEither(), instanceOf(Right.class));
}

The code now looks much more concise and streamlined. But we’re not done yet. The Validation interface provides the fold() method, which applies one custom function to valid results and another to invalid ones.

5.4. The fold() API

Let’s see how to use the fold() method for processing validation results:

@Test
public void 
  givenValidUserParams_withFoldMethod_thenEqualstoParamsLength() {
    assertEquals(2, (int) userValidator.validateUser(" ", " ")
      .fold(Seq::length, User::hashCode));
}

The use of fold() reduces the processing of validation results to just a one-liner.

It’s worth stressing that the functions’ return types passed as arguments to the method must be the same. Moreover, the functions must be supported by the type parameters defined in the validation class, i.e., Seq<String> and User.

6. Conclusion

In this article, we explored Vavr’s validation API in depth and learned how to use some of its most relevant methods. For a full list, check out the official API docs.

Vavr’s validation control provides a very appealing alternative to more traditional implementations of Java Beans Validation, such as Hibernate Validator.

As usual, all the examples shown in the article are available over on GitHub.

Delete a Directory Recursively in Java

1. Introduction

In this article, we’ll illustrate how to delete a directory recursively in plain Java. We’ll also look at some alternatives for deleting directories using external libraries.

2. Deleting a Directory Recursively

Java has an option to delete a directory. However, this requires the directory to be empty. So, we need to use recursion to delete a particular non-empty directory:

  1. Get all the contents of the directory to be deleted
  2. Delete all children that are not a directory (exit from recursion)
  3. For each subdirectory of the current directory, start with step 1 (recursive step)
  4. Delete the directory

Let’s implement this simple algorithm:

boolean deleteDirectory(File directoryToBeDeleted) {
    File[] allContents = directoryToBeDeleted.listFiles();
    if (allContents != null) {
        for (File file : allContents) {
            deleteDirectory(file);
        }
    }
    return directoryToBeDeleted.delete();
}

This method can be tested using a straightforward test case:

@Test
public void givenDirectory_whenDeletedWithRecursion_thenIsGone() 
  throws IOException {
 
    Path pathToBeDeleted = TEMP_DIRECTORY.resolve(DIRECTORY_NAME);

    boolean result = deleteDirectory(pathToBeDeleted.toFile());

    assertTrue(result);
    assertFalse(
      "Directory still exists", 
      Files.exists(pathToBeDeleted));
}

The @Before method of our test class creates a directory tree with subdirectories and files at the pathToBeDeleted location, and the @After method cleans up the directory if required.

Next, let’s have a look at how we can achieve deletion using two of the most commonly used libraries – Apache’s commons-io and Spring Framework’s spring-core. Both of these libraries allow us to delete the directories using just a single line of code.

3. Using FileUtils from commons-io

First, we need to add the commons-io dependency to the Maven project:

<dependency>
    <groupId>commons-io</groupId>
    <artifactId>commons-io</artifactId>
    <version>2.5</version>
</dependency>

The latest version of the dependency can be found here.

Now, we can use FileUtils to perform any file-based operations including deleteDirectory() with just one statement:

FileUtils.deleteDirectory(file);

4. Using FileSystemUtils from Spring

Alternatively, we can add spring-core dependency to the Maven project:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-core</artifactId>
    <version>4.3.10.RELEASE</version>
</dependency>

The latest version of the dependency can be found here.

We can use the deleteRecursively() method in FileSystemUtils to perform the deletion:

boolean result = FileSystemUtils.deleteRecursively(file);

The recent releases of Java offer newer ways of performing such IO operations described in the following sections.

5. Using NIO2 with Java 7

Java 7 introduced a whole new way of performing file operations using Files. It allows us to traverse a directory tree and use callbacks for actions to be performed.

@Test
public void whenDeletedWithNIO2WalkFileTree_thenIsGone() 
  throws IOException {
 
    Path pathToBeDeleted = TEMP_DIRECTORY.resolve(DIRECTORY_NAME);

    Files.walkFileTree(pathToBeDeleted, 
      new SimpleFileVisitor<Path>() {
        @Override
        public FileVisitResult postVisitDirectory(
          Path dir, IOException exc) throws IOException {
            Files.delete(dir);
            return FileVisitResult.CONTINUE;
        }
        
        @Override
        public FileVisitResult visitFile(
          Path file, BasicFileAttributes attrs) 
          throws IOException {
            Files.delete(file);
            return FileVisitResult.CONTINUE;
        }
    });

    assertFalse("Directory still exists", 
      Files.exists(pathToBeDeleted));
}

The Files.walkFileTree() method traverses a file tree and emits events. We need to specify callbacks for these events. So, in this case, we will define SimpleFileVisitor to take the following actions for the generated events:

  1. Visiting a file – delete it
  2. Visiting a directory before processing its entries – do nothing
  3. Visiting a directory after processing its entries – delete the directory, as all entries within this directory would have been processed (or deleted) by now
  4. Unable to visit a file – rethrow IOException that caused the failure

Please refer to Introduction to the Java NIO2 File API for more details on NIO2 APIs on handling file operations.

6. Using NIO2 with Java 8

Since Java 8, Stream API offers an even better way of deleting a directory:

@Test
public void whenDeletedWithFilesWalk_thenIsGone() 
  throws IOException {
    Path pathToBeDeleted = TEMP_DIRECTORY.resolve(DIRECTORY_NAME);

    Files.walk(pathToBeDeleted)
      .sorted(Comparator.reverseOrder())
      .map(Path::toFile)
      .forEach(File::delete);

    assertFalse("Directory still exists", 
      Files.exists(pathToBeDeleted));
}

Here, Files.walk() returns a Stream of Path that we sort in reverse order. This places the paths denoting the contents of directories before the directories themselves. Thereafter, it maps each Path to a File and deletes it.
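A quick self-contained sketch (using a temporary directory created on the fly) shows why the reverse ordering matters — the deepest paths come first, so every delete targets a file or an already-emptied directory:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;

public class WalkOrderDemo {
    public static void main(String[] args) throws IOException {
        // Build a tiny tree: <temp>/child/a.txt
        Path root = Files.createTempDirectory("walk-demo");
        Path child = Files.createDirectory(root.resolve("child"));
        Files.createFile(child.resolve("a.txt"));

        // Reverse order yields a.txt, then child, then the root itself.
        Files.walk(root)
          .sorted(Comparator.reverseOrder())
          .peek(p -> System.out.println(p.getFileName()))
          .map(Path::toFile)
          .forEach(File::delete);

        System.out.println(Files.exists(root)); // false
    }
}
```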

7. Conclusion

In this quick tutorial, we explored different ways of deleting a directory. While we saw how to use recursion to delete, we also looked at some libraries, NIO2 leveraging events and Java 8 Path Stream employing a functional programming paradigm.

All source code and test cases for this article are available over on GitHub.


“Sneaky Throws” in Java

1. Overview

In Java, the sneaky throw concept allows us to throw any checked exception without defining it explicitly in the method signature. This allows the omission of the throws declaration, effectively imitating the characteristics of a runtime exception.

In this article, we’ll see how this is done in practice, by looking at some code examples.

2. About Sneaky Throws

Checked exceptions are part of Java, not the JVM. In the bytecode, we can throw any exception from anywhere, without restrictions.

Java 8 brought a new type inference rule that states that a throws T is inferred as RuntimeException whenever allowed. This gives the ability to implement sneaky throws without the helper method.

A problem with sneaky throws is that you probably want to catch the exceptions eventually, but the Java compiler doesn’t allow you to catch sneakily thrown checked exceptions using exception handler for their particular exception type.

3. Sneaky Throws in Action

As we already mentioned, the compiler and the Java runtime can see different things:

@SuppressWarnings("unchecked")
public static <E extends Throwable> void sneakyThrow(Throwable e) throws E {
    throw (E) e;
}

private static void throwsSneakyIOException() {
    sneakyThrow(new IOException("sneaky"));
}

The compiler sees the signature with the throws T inferred to a RuntimeException type, so it allows the unchecked exception to propagate. The Java runtime doesn’t see any type in the throws clause, since all throws are the same: a simple throw e.

This quick test demonstrates the scenario:

@Test
public void whenCallSneakyMethod_thenThrowSneakyException() {
    try {
        SneakyThrows.throwsSneakyIOException();
    } catch (Exception ex) {
        assertEquals("sneaky", ex.getMessage());
    }
}

It’s possible to throw a checked exception using bytecode manipulation, or Thread.stop(Throwable), but it’s messy and not recommended.
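Note also that the compiler rejects a catch (IOException e) handler around throwsSneakyIOException(), since the method declares no checked exceptions. A hedged workaround (repeating the helper from above in a standalone class for illustration) is to catch broadly and inspect the type:

```java
import java.io.IOException;

public class SneakyCatchDemo {

    @SuppressWarnings("unchecked")
    static <E extends Throwable> void sneakyThrow(Throwable e) throws E {
        throw (E) e;
    }

    static void throwsSneakyIOException() {
        // throws T is inferred to RuntimeException, so no throws clause needed
        sneakyThrow(new IOException("sneaky"));
    }

    public static void main(String[] args) {
        // catch (IOException e) here would not compile, so we catch
        // the broad Exception type and inspect the actual class.
        try {
            throwsSneakyIOException();
        } catch (Exception e) {
            System.out.println(e instanceof IOException); // true
        }
    }
}
```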

4. Using Lombok Annotations

The @SneakyThrows annotation from Lombok allows you to throw checked exceptions without using the throws declaration. This comes in handy when you need to raise an exception from a method within very restrictive interfaces like Runnable.

Say we throw an exception from within a Runnable; it will only be passed to the Thread’s unhandled exception handler.

This code will throw the Exception instance, so there is no need for you to wrap it in a RuntimeException:

public class SneakyRunnable implements Runnable {
    @SneakyThrows(InterruptedException.class)
    public void run() {
        throw new InterruptedException();
    }
}

A drawback with this code is that we cannot catch a checked exception that is not declared; such code will not compile.

Here’s the correct form for throwing a sneaky exception:

@SneakyThrows
public void run() {
    try {
        throw new InterruptedException();
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}

And here’s the test for this behavior:

@Test
public void whenCallSneakyRunnableMethod_thenThrowException() {
    try {
        new SneakyRunnable().run();
    } catch (Exception e) {
        assertEquals(InterruptedException.class, e.getClass());
    }
}

5. Conclusion

As we have seen in this article, the Java compiler can be tricked into treating checked exceptions as unchecked.

As always the code is available over on GitHub.

OutOfMemoryError: GC Overhead Limit Exceeded

1. Overview

Simply put, the JVM takes care of freeing up memory when objects are no longer being used; this process is called Garbage Collection (GC).

The GC Overhead Limit Exceeded error is one from the family of java.lang.OutOfMemoryError and is an indication of a resource (memory) exhaustion.

In this quick article, we’ll look at what causes the java.lang.OutOfMemoryError: GC Overhead Limit Exceeded error and how it can be solved.

2. GC Overhead Limit Exceeded Error

OutOfMemoryError is a subclass of java.lang.VirtualMachineError; it’s thrown by the JVM when it encounters a problem related to utilizing resources. More specifically, the error occurs when the JVM spent too much time performing Garbage Collection and was only able to reclaim very little heap space.

According to the Java docs, by default, the JVM is configured to throw this error if the Java process spends more than 98% of its time doing GC and less than 2% of the heap is recovered in each run. In other words, this means that our application has exhausted nearly all the available memory, and the Garbage Collector has spent too much time trying to clean it up while failing repeatedly.
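These thresholds are governed by HotSpot’s GC overhead limit check, which can be switched off entirely with a JVM flag (rarely advisable, as it merely postpones the failure; the class name below is a placeholder):

```
java -XX:-UseGCOverheadLimit com.xyz.TheClassName
```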

In this situation, users experience extreme slowness of the application. Certain operations, which usually complete in milliseconds, take more time to complete. This is because the CPU is using its entire capacity for Garbage Collection and hence cannot perform any other tasks.

3. Error in Action

Let’s look at a piece of code that throws java.lang.OutOfMemoryError: GC Overhead Limit Exceeded.

We can achieve that, for example, by adding key-value pairs in an unterminated loop:

public class OutOfMemoryGCLimitExceed {
    public static void addRandomDataToMap() {
        Map<Integer, String> dataMap = new HashMap<>();
        Random r = new Random();
        while (true) {
            dataMap.put(r.nextInt(), String.valueOf(r.nextInt()));
        }
    }
}

When this method is invoked, with the JVM arguments as -Xmx100m -XX:+UseParallelGC (Java heap size is set to 100MB and the GC Algorithm is ParallelGC), we get a java.lang.OutOfMemoryError: GC Overhead Limit Exceeded error. To get a better understanding of different Garbage Collection Algorithms we can check Oracle’s Java Garbage Collection Basics tutorial.

We’ll get a java.lang.OutOfMemoryError: GC Overhead Limit Exceeded error very quickly by running the following command from the root of the project:

mvn exec:exec

It should also be noted that in some situations we might encounter a heap space error before encountering the GC Overhead Limit Exceeded error.

4. Solving GC Overhead Limit Exceeded Error

The ideal solution is to find the underlying problem with the application by examining the code for any memory leaks.

The following questions need to be addressed:

  • What are the objects in the application that occupy large portions of the heap?
  • In which parts of the source code are these objects being allocated?

We can also use automated graphical tools such as JConsole, which help detect performance problems in the code, including java.lang.OutOfMemoryErrors.
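For a quick programmatic check, the standard java.lang.management API exposes per-collector statistics; a minimal sketch that prints how often each collector has run and how much time it has spent:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcStats {
    public static void main(String[] args) {
        // Each collector reports its cumulative collection count and the
        // total time spent collecting - a cheap way to spot GC pressure.
        for (GarbageCollectorMXBean gc :
               ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(
              gc.getName() + ": " + gc.getCollectionCount()
                + " collections, " + gc.getCollectionTime() + " ms");
        }
    }
}
```

A steadily climbing collection time with little headroom recovered is the symptom that precedes this error.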

The last resort would be to increase the heap size by altering the JVM launch configuration. For example, this gives 1GB heap space for the Java application:

java -Xmx1024m com.xyz.TheClassName

However, this will not solve the problem if there are memory leaks in the actual application code. Instead, we will just postpone the error. Therefore, it is more advisable to re-assess the memory usage of the application thoroughly.

5. Conclusion

In this tutorial, we examined the java.lang.OutOfMemoryError: GC Overhead Limit Exceeded and the reasons behind it.

As always, the source code related to this article can be found over on GitHub.

Spring MVC Setup with Kotlin

1. Overview

In this quick tutorial, we’ll take a look at what it takes to create a simple Spring MVC project with the Kotlin language.

2. Maven

For the Maven configuration, we need to add the following Kotlin dependencies:

<dependency>
    <groupId>org.jetbrains.kotlin</groupId>
    <artifactId>kotlin-stdlib-jre8</artifactId>
    <version>1.1.4</version>
</dependency>

We also need to add the following Spring dependencies:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-web</artifactId>
    <version>4.3.10.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-webmvc</artifactId>
    <version>4.3.10.RELEASE</version>
</dependency>

To compile our code, we need to specify our source directory and configure the Kotlin Maven Plugin in the build section of our pom.xml:

<plugin>
    <artifactId>kotlin-maven-plugin</artifactId>
    <groupId>org.jetbrains.kotlin</groupId>
    <version>1.1.4</version>
    <executions>
        <execution>
            <id>compile</id>
            <phase>compile</phase>
            <goals>
                <goal>compile</goal>
            </goals>
        </execution>
        <execution>
            <id>test-compile</id>
            <phase>test-compile</phase>
            <goals>
                <goal>test-compile</goal>
            </goals>
        </execution>
    </executions>
</plugin>

3. Spring MVC Configuration

We can use either the Kotlin annotation configuration or an XML configuration.

3.1. Kotlin Configuration

The annotation configuration is pretty simple. We set up view controllers, the template resolver, and the template engine. Thereafter, we can use them to configure the view resolver:

@EnableWebMvc
@Configuration
open class ApplicationWebConfig : WebMvcConfigurerAdapter(), 
  ApplicationContextAware {

    private var applicationContext: ApplicationContext? = null

    override fun setApplicationContext(applicationContext: 
      ApplicationContext?) {
        this.applicationContext = applicationContext
    }

    override fun addViewControllers(registry:
      ViewControllerRegistry?) {
        super.addViewControllers(registry)

        registry!!.addViewController("/welcome.html")
    }
    @Bean
    open fun templateResolver(): SpringResourceTemplateResolver {
        return SpringResourceTemplateResolver()
          .apply { prefix = "/WEB-INF/view/" }
          .apply { suffix = ".html"}
          .apply { templateMode = TemplateMode.HTML }
          .apply { setApplicationContext(applicationContext) }
    }

    @Bean
    open fun templateEngine(): SpringTemplateEngine {
        return SpringTemplateEngine()
          .apply { setTemplateResolver(templateResolver()) }
    }

    @Bean
    open fun viewResolver(): ThymeleafViewResolver {
        return ThymeleafViewResolver()
          .apply { templateEngine = templateEngine() }
          .apply { order = 1 }
    }
}

Next, let’s create a ServletInitializer class. The class should extend AbstractAnnotationConfigDispatcherServletInitializer. This is a replacement for the traditional web.xml configuration:

class ApplicationWebInitializer: 
  AbstractAnnotationConfigDispatcherServletInitializer() {

    override fun getRootConfigClasses(): Array<Class<*>>? {
        return null
    }

    override fun getServletMappings(): Array<String> {
        return arrayOf("/")
    }

    override fun getServletConfigClasses(): Array<Class<*>> {
        return arrayOf(ApplicationWebConfig::class.java)
    }
}

3.2. XML Configuration

The XML equivalent for the ApplicationWebConfig class is:

<beans xmlns="...">
    <context:component-scan base-package="com.baeldung.kotlin.mvc" />

    <mvc:view-controller path="/welcome.html"/>

    <mvc:annotation-driven />

    <bean id="templateResolver" 
      class="org.thymeleaf.spring4.templateresolver.SpringResourceTemplateResolver">
        <property name="prefix" value="/WEB-INF/view/" />
        <property name="suffix" value=".html" />
        <property name="templateMode" value="HTML" />
    </bean>

    <bean id="templateEngine"
          class="org.thymeleaf.spring4.SpringTemplateEngine">
        <property name="templateResolver" ref="templateResolver" />
    </bean>


    <bean class="org.thymeleaf.spring4.view.ThymeleafViewResolver">
        <property name="templateEngine" ref="templateEngine" />
        <property name="order" value="1" />
    </bean>

</beans>

In this case, we do have to specify the web.xml configuration as well:

<web-app xmlns=...>

    <display-name>Spring Kotlin MVC Application</display-name>

    <servlet>
        <servlet-name>spring-web-mvc</servlet-name>
        <servlet-class>
            org.springframework.web.servlet.DispatcherServlet
        </servlet-class>
        <load-on-startup>1</load-on-startup>
        <init-param>
            <param-name>contextConfigLocation</param-name>
            <param-value>/WEB-INF/spring-web-config.xml</param-value>
        </init-param>
    </servlet>

    <servlet-mapping>
        <servlet-name>spring-web-mvc</servlet-name>
        <url-pattern>/</url-pattern>
    </servlet-mapping>
</web-app>

4. The HTML Views

The corresponding HTML resource is located under the /WEB-INF/view directory. In the view controller configuration above, we defined a basic view controller, welcome.html. The content of the corresponding resource is:

<html>
    <head>
        <title>Welcome</title>
    </head>

    <body>
        <h1>Body of the welcome view</h1>
    </body>
</html>

5. Conclusion

In this article, we configured a simple Spring MVC project using both a Kotlin and an XML configuration.

After running the project, we can access the configured welcome page at http://localhost:8080/welcome.html.

The complete source code is available over on GitHub.

Guide to Collections API in Vavr


1. Overview

The Vavr library, formerly known as Javaslang, is a functional library for Java. In this article, we explore its powerful collections API.

To get more information about this library, please read this article.

2. Persistent Collections

A persistent collection, when modified, produces a new version of the collection while preserving the current version.

Maintaining multiple versions of the same collection might lead to inefficient CPU and memory usage. However, the Vavr collection library overcomes this by sharing data structure across different versions of a collection.

This is fundamentally different from Java’s unmodifiableCollection() from the Collections utility class, which merely provides a wrapper around an underlying collection.

Trying to modify such a collection results in UnsupportedOperationException instead of a new version being created. Moreover, the underlying collection is still mutable through its direct reference.
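To see the contrast concretely, here's a plain-Java snippet (standard library only) that demonstrates both limitations of the wrapper approach at once: modification attempts throw instead of producing a new version, and changes to the underlying collection show through the view:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class UnmodifiableViewDemo {
    public static void main(String[] args) {
        List<String> backing = new ArrayList<>(Arrays.asList("a", "b"));
        List<String> view = Collections.unmodifiableList(backing);

        boolean rejected = false;
        try {
            view.add("c"); // the wrapper throws instead of producing a new version
        } catch (UnsupportedOperationException e) {
            rejected = true;
        }

        backing.add("c"); // ...but the underlying list is still mutable

        if (!rejected || view.size() != 3) {
            throw new AssertionError("unexpected wrapper behavior");
        }
        System.out.println(view); // [a, b, c] - the change shows through the view
    }
}
```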

3. Traversable

Traversable is the base type of all Vavr collections – this interface defines methods that are shared among all data structures.

It provides some useful default methods such as size(), get(), filter(), isEmpty() and others which are inherited by sub-interfaces.

Let’s explore the collections library further.

4. Seq

We’ll start with sequences.

The Seq interface represents sequential data structures. It is the parent interface for List, Stream, Queue, Array, Vector, and CharSeq.  All these data structures have their own unique properties which we’ll be exploring below.

4.1. List

A List is an eagerly-evaluated sequence of elements extending the LinearSeq interface.

Persistent Lists are formed recursively from a head and a tail:

  • Head – the first element
  • Tail – a list containing remaining elements (that list is also formed from a head and a tail)
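This head/tail structure is what makes persistence cheap: prepending allocates a single new node and shares the entire existing tail. A minimal sketch in plain Java (a hypothetical illustration, not Vavr's actual implementation):

```java
public class ConsList<T> {
    final T head;
    final ConsList<T> tail; // shared between versions, never copied

    ConsList(T head, ConsList<T> tail) {
        this.head = head;
        this.tail = tail;
    }

    // "modification" returns a new version; the old list is untouched
    ConsList<T> prepend(T value) {
        return new ConsList<>(value, this);
    }

    public static void main(String[] args) {
        ConsList<String> v1 = new ConsList<>("b", new ConsList<>("a", null));
        ConsList<String> v2 = v1.prepend("c");

        // both versions coexist and share the same tail nodes
        if (v2.tail != v1 || !"b".equals(v1.head) || !"c".equals(v2.head)) {
            throw new AssertionError();
        }
        System.out.println("structural sharing works");
    }
}
```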

There are static factory methods in the List API that can be used for creating a List. We can use the static of() method to create an instance of List from one or more objects.

We can also use the static empty() to create an empty List and ofAll() to create a List from an Iterable type:

List<String> list = List.of(
  "Java", "PHP", "Jquery", "JavaScript", "JShell", "JAVA");

Let’s take a look at some examples on how to manipulate lists.

We can use drop() and its variants to remove the first n elements:

List list1 = list.drop(2);                                      
assertFalse(list1.contains("Java") && list1.contains("PHP"));   
                                                                
List list2 = list.dropRight(2);                                 
assertFalse(list2.contains("JAVA") && list2.contains("JShell"));
                                                                
List list3 = list.dropUntil(s -> s.contains("Shell"));          
assertEquals(list3.size(), 2);                                  
                                                                
List list4 = list.dropWhile(s -> s.length() > 0);               
assertTrue(list4.isEmpty());

drop(int n) removes n number of elements from the list starting from the first element while the dropRight() does the same starting from the last element in the list.

dropUntil() continues removing elements from the list until the predicate evaluates to true whereas the dropWhile() continues dropping elements while the predicate is true.

There are also dropRightWhile() and dropRightUntil(), which start removing elements from the right.
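Conceptually, dropRightWhile() behaves like dropWhile() applied to a reversed list. Here's a plain-Java sketch of that semantics (an illustration built on Java 9's Stream.dropWhile, not Vavr's implementation):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class DropRightDemo {
    // drop elements from the right end while the predicate holds
    static <T> List<T> dropRightWhile(List<T> list, Predicate<T> p) {
        List<T> reversed = new ArrayList<>(list);
        Collections.reverse(reversed);
        List<T> kept = reversed.stream()
          .dropWhile(p)                 // Java 9+: drop from the (reversed) front
          .collect(Collectors.toList());
        Collections.reverse(kept);
        return kept;
    }

    public static void main(String[] args) {
        List<Integer> result = dropRightWhile(Arrays.asList(1, 2, 5, 6), i -> i > 4);
        if (!result.equals(Arrays.asList(1, 2))) {
            throw new AssertionError(result.toString());
        }
        System.out.println(result); // [1, 2]
    }
}
```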

Next, take(int n) is used to grab elements from a list. It takes n number of elements from the list and then stops. There’s also a takeRight(int n) that starts taking elements from the end of the list:

List list5 = list.take(1);                       
assertEquals(list5.single(), "Java");            
                                                 
List list6 = list.takeRight(1);                  
assertEquals(list6.single(), "JAVA");            
                                                 
List list7 = list.takeUntil(s -> s.length() > 6);
assertEquals(list7.size(), 3);

Finally, takeUntil() continues taking elements from the list until the predicate is true. There’s a takeWhile() variant that takes a predicate argument as well.

Moreover, there are other useful methods in the API, e.g., distinct(), which returns a list of non-duplicate elements, as well as distinctBy(), which accepts a Comparator to determine equality.

Very interestingly, there’s also the intersperse() that inserts an element in between every element of a list. It can be very handy for String operations:

List list8 = list
  .distinctBy((s1, s2) -> s1.startsWith(s2.charAt(0) + "") ? 0 : 1);
assertEquals(list8.size(), 2);

String words = List.of("Boys", "Girls")
  .intersperse("and")
  .reduce((s1, s2) -> s1.concat( " " + s2 ))
  .trim();  
assertEquals(words, "Boys and Girls");

Want to divide a list into categories? Well, there’s an API for that too:

Iterator<List<String>> iterator = list.grouped(2);
assertEquals(iterator.head().size(), 2);

Map<Boolean, List<String>> map = list.groupBy(e -> e.startsWith("J"));
assertEquals(map.size(), 2);
assertEquals(map.get(false).get().size(), 1);
assertEquals(map.get(true).get().size(), 5);

The grouped(int n) divides a List into groups of n elements each. The groupBy() accepts a Function that contains the logic for dividing the list and returns a Map with two entries – true and false.

The true key maps to a List of elements that satisfy the condition specified in the Function; the false key maps to a List of elements that do not.

As expected, when mutating a List, the original List is not actually modified. Instead, a new version of the List is always returned.

We can also interact with a List using stack semantics – last-in-first-out (LIFO) retrieval of elements. To this end, there are API methods for manipulating a stack such as peek(), pop() and push():

List<Integer> intList = List.empty();

List<Integer> intList1 = intList.pushAll(List.rangeClosed(5,10));

assertEquals(intList1.peek(), Integer.valueOf(10));

List intList2 = intList1.pop();
assertEquals(intList2.size(), (intList1.size() - 1) );

The pushAll() function is used to insert a range of integers onto the stack, while the peek() is used to get the head of the stack. There’s also the peekOption() that can wrap the result in an Option object.

There are other interesting and really useful methods in the List interface that are neatly documented in the Java docs.

4.2. Queue

An immutable Queue stores elements allowing a first-in-first-out (FIFO) retrieval.

A Queue internally consists of two linked lists, a front List, and a rear List. The front List contains the elements that are dequeued, and the rear List contains the elements that are enqueued.

This allows enqueue and dequeue operations to perform in O(1). When the front List runs out of elements, the front and rear Lists are swapped, and the rear List is reversed.
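The same two-list trick can be sketched in plain Java with two stacks (a mutable, simplified illustration; Vavr's Queue is persistent, but the swap-and-reverse mechanics are the same):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// FIFO queue built from two LIFO stacks: enqueue pushes onto "rear";
// dequeue pops from "front"; when "front" empties, "rear" is reversed into it
public class TwoStackQueue<T> {
    private final Deque<T> front = new ArrayDeque<>();
    private final Deque<T> rear = new ArrayDeque<>();

    public void enqueue(T value) {
        rear.push(value); // O(1)
    }

    public T dequeue() {
        if (front.isEmpty()) {
            while (!rear.isEmpty()) {
                front.push(rear.pop()); // the reversal happens only occasionally
            }
        }
        return front.pop(); // amortized O(1)
    }

    public static void main(String[] args) {
        TwoStackQueue<Integer> q = new TwoStackQueue<>();
        q.enqueue(1); q.enqueue(2); q.enqueue(3);
        if (q.dequeue() != 1 || q.dequeue() != 2 || q.dequeue() != 3) {
            throw new AssertionError();
        }
        System.out.println("FIFO order preserved");
    }
}
```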

Let’s create a queue:

Queue<Integer> queue = Queue.of(1, 2, 3);
Queue<Integer> secondQueue = queue.enqueueAll(List.of(4,5));

assertEquals(3, queue.size());
assertEquals(5, secondQueue.size());

Tuple2<Integer, Queue<Integer>> result = secondQueue.dequeue();
assertEquals(Integer.valueOf(1), result._1);

Queue<Integer> tailQueue = result._2;
assertFalse(tailQueue.contains(secondQueue.get(0)));

The dequeue function removes the head element from the Queue and returns a Tuple2<T, Q>. The tuple contains the head element that has been removed as the first entry and the remaining elements of the Queue as the second entry.

We can use the combinations(n) to get all the possible N combinations of elements in the Queue:

Queue<Queue<Integer>> queue1 = queue.combinations(2);
assertEquals(queue1.get(2).toCharSeq(), CharSeq.of("23"));

Again, we can see that the original Queue is not modified while enqueuing/dequeuing elements.

4.3. Stream

Stream is an implementation of a lazy linked list and is quite different from java.util.stream. Unlike java.util.stream, the Vavr Stream stores data and lazily evaluates the next elements.

Let’s say we have a Stream of integers:

Stream<Integer> s = Stream.of(2, 1, 3, 4);

Printing the result of s.toString() to the console will only show Stream(2, ?). This means that only the head of the Stream has been evaluated, while the tail has not.

Invoking s.get(3) and subsequently displaying the result of s.tail() returns Stream(1, 3, 4, ?). On the contrary, if we don’t invoke s.get(3) first (which causes the Stream to evaluate the last element), the result of s.tail() will only be Stream(1, ?); just the first element of the tail has been evaluated.

This behaviour can improve performance and makes it possible to use Stream to represent sequences that are (theoretically) infinitely long.

Vavr Stream is immutable and may be Empty or Cons. A Cons consists of a head element and a lazy computed tail Stream. Unlike a List, for a Stream, only the head element is kept in memory. The tail elements are computed on demand.
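This memoized head-plus-lazy-tail idea can be sketched in plain Java with a Supplier (a hypothetical illustration, not Vavr's actual code):

```java
import java.util.function.Supplier;

// a lazily-evaluated cons cell: the head is computed, the tail is a thunk
public class LazyStream<T> {
    final T head;
    private Supplier<LazyStream<T>> thunk;
    private LazyStream<T> tail; // memoized after the first access

    LazyStream(T head, Supplier<LazyStream<T>> thunk) {
        this.head = head;
        this.thunk = thunk;
    }

    LazyStream<T> tail() {
        if (thunk != null) {
            tail = thunk.get(); // evaluated on demand, exactly once
            thunk = null;
        }
        return tail;
    }

    // a conceptually infinite stream of integers: n, n+1, n+2, ...
    static LazyStream<Integer> from(int n) {
        return new LazyStream<>(n, () -> from(n + 1));
    }

    public static void main(String[] args) {
        LazyStream<Integer> s = from(2);
        if (s.head != 2 || s.tail().head != 3 || s.tail().tail().head != 4) {
            throw new AssertionError();
        }
        System.out.println("only the accessed cells were computed");
    }
}
```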

Let’s create a Stream of 10 positive integers and compute the sum of the even numbers:

Stream<Integer> intStream = Stream.iterate(0, i -> i + 1)
  .take(10);

assertEquals(10, intStream.size());

long evenSum = intStream.filter(i -> i % 2 == 0)
  .sum()
  .longValue();

assertEquals(20, evenSum);

As opposed to Java 8 Stream API, Vavr’s Stream is a data structure for storing a sequence of elements.

Thus, it has methods like get(), append(), insert() and others for manipulating its elements. The drop(), distinct() and some other methods considered earlier are also available.

Finally, let’s quickly demonstrate the tabulate() in a Stream. This method returns a Stream of length n, which contains elements that are the result of applying a function:

Stream<Integer> s1 = Stream.tabulate(5, (i)-> i + 1);
assertEquals(s1.get(2).intValue(), 3);

We can also use the zip() to generate a Stream of Tuple2<Integer, Integer>, which contains elements that are formed by combining two Streams:

Stream<Integer> s = Stream.of(2,1,3,4);

Stream<Tuple2<Integer, Integer>> s2 = s.zip(List.of(7,8,9));
Tuple2<Integer, Integer> t1 = s2.get(0);
 
assertEquals(t1._1().intValue(), 2);
assertEquals(t1._2().intValue(), 7);

4.4. Array

An Array is an immutable, indexed sequence that allows efficient random access. It is backed by a Java array of objects. Essentially, it is a Traversable wrapper for an array of objects of type T.

We can instantiate an Array by using the static method of(). We can also generate a range of elements by using the static range() and rangeBy() methods. The rangeBy() has a third parameter that lets us define the step.

The range() and rangeBy() methods will only generate elements from the start value up to the end value minus one. If we need to include the end value, we can use either the rangeClosed() or rangeClosedBy():

Array<Integer> rArray = Array.range(1, 5);
assertFalse(rArray.contains(5));

Array<Integer> rArray2 = Array.rangeClosed(1, 5);
assertTrue(rArray2.contains(5));

Array<Integer> rArray3 = Array.rangeClosedBy(1,6,2);
assertEquals(rArray3.size(), 3);

Let’s manipulate the elements by index:

Array<Integer> intArray = Array.of(1, 2, 3);
Array<Integer> newArray = intArray.removeAt(1);

assertEquals(3, intArray.size());
assertEquals(2, newArray.size());
assertEquals(3, newArray.get(1).intValue());

Array<Integer> array2 = intArray.replace(1, 5);
assertEquals(array2.get(0).intValue(), 5);

4.5. Vector

A Vector sits in between an Array and a List, providing another indexed sequence of elements that allows both random access and modification in constant time:

Vector<Integer> intVector = Vector.range(1, 5);
Vector<Integer> newVector = intVector.replace(2, 6);

assertEquals(4, intVector.size());
assertEquals(4, newVector.size());

assertEquals(2, intVector.get(1).intValue());
assertEquals(6, newVector.get(1).intValue());

4.6. CharSeq

CharSeq is a collection object to express a sequence of primitive characters. It is essentially a String wrapper with the addition of collection operations.

To create a CharSeq:

CharSeq chars = CharSeq.of("vavr");
CharSeq newChars = chars.replace('v', 'V');

assertEquals(4, chars.size());
assertEquals(4, newChars.size());

assertEquals('v', chars.charAt(0));
assertEquals('V', newChars.charAt(0));
assertEquals("Vavr", newChars.mkString());

5. Set

In this section, we elaborate on various Set implementations in the collections library. The unique feature of the Set data structure is that it doesn’t allow duplicate values.

There are, however, different implementations of Set –  the HashSet being the basic one. The TreeSet doesn’t allow duplicate elements and can be sorted. The LinkedHashSet maintains the insertion order of its elements.
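These ordering guarantees mirror their java.util counterparts, which we can use for a quick, standard-library-only demonstration of the behavioral differences:

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

public class SetOrderingDemo {
    public static void main(String[] args) {
        List<String> data = Arrays.asList("Red", "Green", "Blue", "Red");

        Set<String> sorted = new TreeSet<>(data);          // sorted order
        Set<String> insertion = new LinkedHashSet<>(data); // insertion order

        // every Set implementation silently drops the duplicate "Red"
        if (sorted.size() != 3 || insertion.size() != 3) {
            throw new AssertionError();
        }
        if (!sorted.toString().equals("[Blue, Green, Red]")
          || !insertion.toString().equals("[Red, Green, Blue]")) {
            throw new AssertionError();
        }
        System.out.println(sorted + " vs " + insertion);
    }
}
```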

Let’s have a closer look at these implementations one by one.

5.1. HashSet

HashSet has static factory methods for creating new instances – some of which we have explored previously in this article – like of(), ofAll() and variations of range() methods.

We can get the difference between two sets by using the diff() method. Also, the union() and intersect() methods return the union set and intersection set of the two sets:

HashSet<Integer> set0 = HashSet.rangeClosed(1,5);
HashSet<Integer> set1 = HashSet.rangeClosed(3, 6);

assertEquals(set0.union(set1), HashSet.rangeClosed(1,6));
assertEquals(set0.diff(set1), HashSet.rangeClosed(1,2));
assertEquals(set0.intersect(set1), HashSet.rangeClosed(3,5));

We can also perform basic operations such as adding and removing elements:

HashSet<String> set = HashSet.of("Red", "Green", "Blue");
HashSet<String> newSet = set.add("Yellow");

assertEquals(3, set.size());
assertEquals(4, newSet.size());
assertTrue(newSet.contains("Yellow"));

The HashSet implementation is backed by a hash array mapped trie (HAMT), which offers superior performance compared to an ordinary hash table, and its structure makes it suitable for backing a persistent collection.

5.2. TreeSet

An immutable TreeSet is an implementation of the SortedSet interface. It stores a Set of sorted elements and is implemented using binary search trees. All its operations run in O(log n) time.

By default, elements of a TreeSet are sorted in their natural order.

Let’s create a SortedSet using natural sorting order:

SortedSet<String> set = TreeSet.of("Red", "Green", "Blue");
assertEquals("Blue", set.head());

SortedSet<Integer> intSet = TreeSet.of(1,2,3);
assertEquals(2, intSet.average().get().intValue());

To order elements in a customized manner, pass a Comparator instance while creating a TreeSet. We can also generate a string from the set elements:

SortedSet<String> reversedSet
  = TreeSet.of(Comparator.reverseOrder(), "Green", "Red", "Blue");
assertEquals("Red", reversedSet.head());

String str = reversedSet.mkString(" and ");
assertEquals("Red and Green and Blue", str);

5.3. BitSet

Vavr collections also contain an immutable BitSet implementation. The BitSet interface extends the SortedSet interface. BitSet can be instantiated using static methods in BitSet.Builder.

Like other implementations of the Set data structure, BitSet does not allow duplicate entries to be added to the set.

It inherits methods for manipulation from the Traversable interface. Note that it is different from the java.util.BitSet in the standard Java library. BitSet data can’t contain String values.

Let’s see how to create a BitSet instance using the factory method of():

BitSet<Integer> bitSet = BitSet.of(1,2,3,4,5,6,7,8);
BitSet<Integer> bitSet1 = bitSet.takeUntil(i -> i > 4);
assertEquals(bitSet1.size(), 4);

We use the takeUntil() to select the first four elements of the BitSet; the operation returns a new instance. Note that the takeUntil() is defined in the Traversable interface, which is a parent interface of BitSet.

Other methods and operations demonstrated above that are defined in the Traversable interface are applicable to BitSet as well.

6. Map

A map is a key-value data structure. Vavr’s Map is immutable and has implementations for HashMap, TreeMap, and LinkedHashMap.

Generally, map contracts don’t allow duplicate keys – though there may be duplicate values mapped to different keys.

6.1. HashMap

A HashMap is an implementation of an immutable Map interface. It stores key-value pairs using the hash code of the keys.

Vavr’s Map uses Tuple2 to represent key-value pairs instead of a traditional Entry type:

Map<Integer, List<Integer>> map = List.rangeClosed(0, 10)
  .groupBy(i -> i % 2);
        
assertEquals(2, map.size());
assertEquals(6, map.get(0).get().size());
assertEquals(5, map.get(1).get().size());

Similar to HashSet, a HashMap implementation is backed by a hash array mapped trie (HAMT) resulting in constant time for almost all operations.

We can filter map entries by keys, using the filterKeys() method or by values, using the filterValues() method. Both methods accept a Predicate as an argument:

Map<String, String> map1
  = HashMap.of("key1", "val1", "key2", "val2", "key3", "val3");
        
Map<String, String> fMap
  = map1.filterKeys(k -> k.contains("1") || k.contains("2"));
assertFalse(fMap.containsKey("key3"));
        
Map<String, String> fMap2
  = map1.filterValues(v -> v.contains("3"));
assertEquals(fMap2.size(), 1);
assertTrue(fMap2.containsValue("val3"));

We can also transform map entries by using the map() method. Let’s, for example, transform map1 to a Map<String, Integer>:

Map<String, Integer> map2 = map1.map(
  (k, v) -> Tuple.of(k, Integer.valueOf(v.charAt(v.length() - 1) + "")));
assertEquals(map2.get("key1").get().intValue(), 1);

6.2. TreeMap

An immutable TreeMap is an implementation of the SortedMap interface. Similar to TreeSet, a Comparator instance is used to custom sort elements of a TreeMap.

Let’s demonstrate the creation of a SortedMap:

SortedMap<Integer, String> map
  = TreeMap.of(3, "Three", 2, "Two", 4, "Four", 1, "One");

assertEquals(1, map.keySet().toJavaArray()[0]);
assertEquals("Four", map.get(4).get());

By default, entries of TreeMap are sorted in the natural order of the keys. We can, however, specify a Comparator that will be used for sorting:

TreeMap<Integer, String> treeMap2 =
  TreeMap.of(Comparator.reverseOrder(), 3,"three", 6, "six", 1, "one");
assertEquals(treeMap2.keySet().mkString(), "631");

As with TreeSet, a TreeMap implementation is also modeled using a tree; hence its operations run in O(log n) time. The map.get(key) method returns an Option that wraps the value at the specified key in the map.

7. Interoperability with Java

The collection API is fully interoperable with Java’s collection framework. Let’s see how this is done in practice.

7.1. Java to Vavr Conversion

Each collection implementation in Vavr has a static factory method ofAll() that takes a java.util.Iterable. This allows us to create a Vavr collection out of a Java collection. Likewise, another factory method ofAll() takes a Java Stream directly.

To convert a Java List to an immutable List:

java.util.List<Integer> javaList = java.util.Arrays.asList(1, 2, 3, 4);
List<Integer> vavrList = List.ofAll(javaList);

java.util.stream.Stream<Integer> javaStream = javaList.stream();
Set<Integer> vavrSet = HashSet.ofAll(javaStream);

Another useful function is the collector() that can be used in conjunction with Stream.collect() to obtain a Vavr collection:

List<Integer> vavrList = IntStream.range(1, 10)
  .boxed()
  .filter(i -> i % 2 == 0)
  .collect(List.collector());

assertEquals(4, vavrList.size());
assertEquals(2, vavrList.head().intValue());

7.2. Vavr to Java Conversion

Value interface has many methods to convert a Vavr type to a Java type. These methods are of the format toJavaXXX().

Let’s address a couple of examples:

Integer[] array = List.of(1, 2, 3)
  .toJavaArray(Integer.class);
assertEquals(3, array.length);

java.util.Map<String, Integer> map = List.of("1", "2", "3")
  .toJavaMap(i -> Tuple.of(i, Integer.valueOf(i)));
assertEquals(2, map.get("2").intValue());

We can also use Java 8 Collectors to collect elements from Vavr collections:

java.util.Set<Integer> javaSet = List.of(1, 2, 3)
  .collect(Collectors.toSet());
        
assertEquals(3, javaSet.size());
assertEquals(1, javaSet.toArray()[0]);

7.3. Java Collection Views

Alternatively, the library provides so-called collection views that perform better when converting to Java collections. The conversion methods from the previous section iterate through all the elements to build a Java collection.

Views, on the other hand, implement standard Java interfaces and delegate method calls to the underlying Vavr collection.

As of this writing, only the List view is supported. Each sequential collection has two methods, one to create an immutable view and another for a mutable view.

Calling mutator methods on an immutable view results in an UnsupportedOperationException.

Let’s look at an example:

@Test(expected = UnsupportedOperationException.class)
public void givenVavrList_whenViewConverted_thenException() {
    java.util.List<Integer> javaList = List.of(1, 2, 3)
      .asJava();
    
    assertEquals(3, javaList.get(2).intValue());
    javaList.add(4);
}

To create a mutable view:

java.util.List<Integer> javaList = List.of(1, 2, 3)
  .asJavaMutable();
javaList.add(4);

assertEquals(4, javaList.get(3).intValue());

8. Conclusion

In this tutorial, we’ve learned about various functional data structures provided by Vavr’s Collection API. There are more useful and productive API methods that can be found in Vavr’s collections JavaDoc and the user guide.

Finally, it’s important to note that the library also defines Try, Option, Either, and Future that extend the Value interface and as a consequence implement Java’s Iterable interface. This implies that they can behave as a collection in some situations.

The complete source code for all the examples in this article can be found over on GitHub.

Guide to JDeferred


1. Overview

JDeferred is a small Java library (which also supports Groovy) for implementing asynchronous topologies without writing boilerplate code. The framework is inspired by jQuery’s Promise/Ajax feature and Android’s Deferred Object pattern.

In this tutorial, we’ll show how to use JDeferred and its different utilities.

2. Maven Dependency

We can start using JDeferred in any application by adding the following dependency into our pom.xml:

<dependency>
    <groupId>org.jdeferred</groupId>
    <artifactId>jdeferred-core</artifactId>
    <version>1.2.6</version>
</dependency>

We can check the latest version of the JDeferred project in the Central Maven Repository.

3. Promises

Let’s have a look at a simple use case: invoking an error-prone asynchronous REST API call and performing some task based on the data returned by the API.

In jQuery, the above scenario can be addressed in the following way:

$.ajax("/GetEmployees")
    .done(
        function() {
            alert( "success" );
        }
     )
    .fail(
        function() {
            alert( "error" );
        }
     )
    .always(
        function() {
            alert( "complete" );
        }
    );

Similarly, JDeferred comes with the Promise and Deferred interfaces, which register a thread-independent hook on the corresponding object that triggers different customizable actions based on that object’s status.

Here, Deferred acts as the trigger and the Promise acts as the observer.

We can easily create this type of asynchronous workflow:

Deferred<String, String, String> deferred
  = new DeferredObject<>();
Promise<String, String, String> promise = deferred.promise();

promise.done(result -> System.out.println("Job done"))
  .fail(rejection -> System.out.println("Job fail"))
  .progress(progress -> System.out.println("Job is in progress"))
  .always((state, result, rejection) -> 
    System.out.println("Job execution started"));

deferred.resolve("msg");
deferred.notify("notice");
deferred.reject("oops");

Here, each method has different semantics:

  • done() – triggers only when the pending actions on the deferred object complete successfully
  • fail() – triggers when an exception is raised while performing the pending actions on the deferred object
  • progress() – triggers as soon as the pending actions on the deferred object start to execute
  • always() – triggers regardless of the deferred object’s state

By default, a deferred object’s status can be PENDING, REJECTED, or RESOLVED. We can check the status using the deferred.state() method.

The point to note here is that once a deferred object’s status changes to RESOLVED, we can’t perform a reject operation on that object.

Similarly, once the object’s status changes to REJECTED, we can’t perform a resolve or notify operation on that object. Any violation will result in an IllegalStateException.
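These transition rules amount to a small state machine. A hypothetical plain-Java sketch of the constraint (not JDeferred's actual code):

```java
public class DeferredState {
    enum State { PENDING, RESOLVED, REJECTED }

    private State state = State.PENDING;

    void resolve() {
        if (state != State.PENDING) {
            throw new IllegalStateException("already " + state);
        }
        state = State.RESOLVED;
    }

    void reject() {
        if (state != State.PENDING) {
            throw new IllegalStateException("already " + state);
        }
        state = State.REJECTED;
    }

    public static void main(String[] args) {
        DeferredState d = new DeferredState();
        d.resolve(); // PENDING -> RESOLVED
        try {
            d.reject(); // illegal: RESOLVED is a terminal state
            throw new AssertionError("expected IllegalStateException");
        } catch (IllegalStateException expected) {
            System.out.println("reject after resolve is refused");
        }
    }
}
```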

4. Filters

Before retrieving the final result, we can perform filtering on the deferred object with a DoneFilter.

Once the filtering is done, we’ll get a thread-safe deferred object:

private static String modifiedMsg;

static String filter(String msg) {
    Deferred<String, ?, ?> d = new DeferredObject<>();
    Promise<String, ?, ?> p = d.promise();
    Promise<String, ?, ?> filtered = p.then((result) -> {
        modifiedMsg = "Hello " + result;
    });

    filtered.done(r -> System.out.println("filtering done"));

    d.resolve(msg);
    return modifiedMsg;
}

5. Pipes

Similar to filters, JDeferred offers the DonePipe interface to perform sophisticated post-filtering actions once the deferred object’s pending actions are resolved:

public enum Result { 
    SUCCESS, FAILURE 
}

private static Result status; 

public static Result validate(int num) { 
    Deferred<Integer, ?, ?> d = new DeferredObject<>(); 
    Promise<Integer, ?, ?> p = d.promise(); 
    
    p.then((DonePipe<Integer, Integer, Exception, Void>) result -> {
        if (result < 90) {
            return new DeferredObject<Integer, Exception, Void>()
              .resolve(result);
        } else {
            return new DeferredObject<Integer, Exception, Void>()
              .reject(new Exception("Unacceptable value"));
        }
    }).done(r -> status = Result.SUCCESS)
      .fail(r -> status = Result.FAILURE);

    d.resolve(num);
    return status;
}

Here, based on the value of the actual result, we’ve raised an exception to reject the result.

6. Deferred Manager

In a real-world scenario, we need to deal with multiple deferred objects observed by multiple promises. In such a scenario, it’s pretty difficult to manage multiple promises separately.

That’s why JDeferred comes with the DeferredManager interface, which creates a common observer for all of the promises. Hence, using this common observer, we can create common actions for all of the promises:

Deferred<String, String, String> deferred = new DeferredObject<>();
DeferredManager dm = new DefaultDeferredManager();
Promise<String, String, String> p1 = deferred.promise(), 
  p2 = deferred.promise(), 
  p3 = deferred.promise();
dm.when(p1, p2, p3)
  .done(result -> ... )
  .fail(result -> ... );
deferred.resolve("Hello Baeldung");

We can also assign an ExecutorService with a custom thread pool to the DeferredManager:

ExecutorService executor = Executors.newFixedThreadPool(10);
DeferredManager dm = new DefaultDeferredManager(executor);

In fact, we can skip the use of Promise completely and directly define a Callable to complete the task:

DeferredManager dm = new DefaultDeferredManager();
dm.when(() -> {
    // return something and raise an exception to interrupt the task
}).done(result -> ... )
  .fail(e -> ... );

7. Thread-Safe Action

Although most of the time we need to deal with asynchronous workflows, sometimes we need to wait for the results of all of the parallel tasks.

In this type of scenario, we can use Object‘s wait() method to wait for all deferred tasks to finish:

DeferredManager dm = new DefaultDeferredManager();
Deferred<String, String, String> deferred = new DeferredObject<>();
Promise<String, String, String> p1 = deferred.promise();
Promise<String, String, String> p = dm
  .when(p1)
  .done(result -> ... )
  .fail(result -> ... );

synchronized (p) {
    while (p.isPending()) {
        try {
            p.wait();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}

deferred.resolve("Hello Baeldung");

Alternatively, we can use the Promise interface’s waitSafely() method to achieve the same:

try {
    p.waitSafely();
} catch (InterruptedException e) {
    e.printStackTrace();
}

Although both of the above methods do pretty much the same thing, it’s advisable to use the second one, since waitSafely() doesn’t require synchronization.

8. Android Integration

JDeferred can be easily integrated with Android applications using the Android Maven plugin.

For APKLIB build, we need to add the following dependency in the pom.xml:

<dependency>
    <groupId>org.jdeferred</groupId>
    <artifactId>jdeferred-android</artifactId>
    <version>1.2.6</version>
    <type>apklib</type>
</dependency>

For AAR build, we need to add the following dependency in the pom.xml:

<dependency>
    <groupId>org.jdeferred</groupId>
    <artifactId>jdeferred-android-aar</artifactId>
    <version>1.2.6</version>
    <type>aar</type>
</dependency>

9. Conclusion

In this tutorial, we explored JDeferred and its different utilities.

As always, the full source code is available over on GitHub.

Java Weekly, Issue 193


Lots of interesting writeups on Java 9 this week.

Here we go…

1. Spring and Java

>> Java to Move to 6-Monthly Release Cadence [infoq.com]

Moving forward, Java will be released twice a year.

This is a big step, which should make it possible to constantly introduce new, smaller features without waiting for the completion of big ones (JPMS, Lambda Expressions).

>> Code Smells: Multi-Responsibility Methods [blog.jetbrains.com]

It is not a secret that “god” methods are hard to test, maintain, and refactor.

>> Spring Boot 2.0 Will Feature Improved Actuator Endpoints [infoq.com]

The new Spring Boot will bring slightly redesigned Actuator Endpoints – with improved security.

>> Streaming large JSON file with Jackson – RxJava FAQ [nurkiewicz.com]

An interesting use case of RxJava for streaming large JSON files without risking memory overload.

>> Bypassing Kotlin’s Null-Safety [4comprehension.com]

There are situations when Kotlin’s null safety won’t always protect us – this is especially the case with libraries that use sun.misc.Unsafe.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Type of Mocks [blog.tremblay.pro]

Keep in mind that not every substitute implementation is a mock; we also have spies, dummies, stubs, and fakes.

Also worth reading:

3. Musings

>> Getting Started with Behavior-Driven Development [daedtech.com]

BDD ideas bridge the gap between engineers and business when it comes to testing. Definitely worth having a look.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Work from home [dilbert.com]

>> Legacy System [dilbert.com]

>> Excuses [dilbert.com]

5. Pick of the Week

>> Finally Getting the Most out of the Java Thread Pool [stackify.com]

How to Iterate Over a Stream With Indices


1. Overview

Java 8 Streams are not collections and elements cannot be accessed using their indices, but there are still a few tricks to make this possible.

In this short article, we’re going to look at how to iterate over a Stream using IntStream, StreamUtils, EntryStream, and Vavr‘s Stream.

2. Using Plain Java

We can navigate through a Stream using an Integer range, and also benefit from the fact that the original elements are in an array or a collection accessible by indices.

Let’s implement a method which iterates with indices and demonstrates this approach.

Simply put, we want to get an array of Strings and only select even indexed elements:

public List<String> getEvenIndexedStrings(String[] names) {
    List<String> evenIndexedNames = IntStream
      .range(0, names.length)
      .filter(i -> i % 2 == 0)
      .mapToObj(i -> names[i])
      .collect(Collectors.toList());
    
    return evenIndexedNames;
}

Let’s now test out the implementation:

@Test
public void whenCalled_thenReturnListOfEvenIndexedStrings() {
    String[] names 
      = {"Afrim", "Bashkim", "Besim", "Lulzim", "Durim", "Shpetim"};
    List<String> expectedResult 
      = Arrays.asList("Afrim", "Besim", "Durim");
    List<String> actualResult 
      = StreamIndices.getEvenIndexedStrings(names);
   
    assertEquals(expectedResult, actualResult);
}

3. Using StreamUtils

Another way to iterate with indices is to use the zipWithIndex() method of StreamUtils from the proton-pack library (the latest version can be found here).

First, you need to add it to your pom.xml:

<dependency>
    <groupId>com.codepoetics</groupId>
    <artifactId>protonpack</artifactId>
    <version>1.13</version>
</dependency>

Now, let’s look at the code:

public List<Indexed<String>> getEvenIndexedStrings(List<String> names) {
    List<Indexed<String>> list = StreamUtils
      .zipWithIndex(names.stream())
      .filter(i -> i.getIndex() % 2 == 0)
      .collect(Collectors.toList());
    
    return list;
}

The following tests this method and passes successfully:

@Test
public void whenCalled_thenReturnListOfEvenIndexedStrings() {
    List<String> names = Arrays.asList(
      "Afrim", "Bashkim", "Besim", "Lulzim", "Durim", "Shpetim");
    List<Indexed<String>> expectedResult = Arrays.asList(
      Indexed.index(0, "Afrim"), 
      Indexed.index(2, "Besim"), 
      Indexed.index(4, "Durim"));
    List<Indexed<String>> actualResult 
      = StreamIndices.getEvenIndexedStrings(names);
    
    assertEquals(expectedResult, actualResult);
}

4. Using StreamEx

We can also iterate with indices using the filterKeyValue() method of the EntryStream class from the StreamEx library (the latest version can be found here). First, we need to add it to our pom.xml:

<dependency>
    <groupId>one.util</groupId>
    <artifactId>streamex</artifactId>
    <version>0.6.5</version>
</dependency>

Let’s see a simple application of this method using our previous example:

public List<String> getEvenIndexedStringsVersionTwo(List<String> names) {
    return EntryStream.of(names)
      .filterKeyValue((index, name) -> index % 2 == 0)
      .values()
      .toList();
}

We’ll use a similar test to test this:

@Test
public void whenCalled_thenReturnListOfEvenIndexedStringsVersionTwo() {
    String[] names 
      = {"Afrim", "Bashkim", "Besim", "Lulzim", "Durim", "Shpetim"};
    List<String> expectedResult 
      = Arrays.asList("Afrim", "Besim", "Durim");
    List<String> actualResult 
      = StreamIndices.getEvenIndexedStrings(names);
   
   assertEquals(expectedResult, actualResult);
}

5. Iteration Using Vavr’s Stream

Another plausible way of iterating is to use the zipWithIndex() method of the Stream implementation in Vavr (previously known as Javaslang):

public List<String> getOddIndexedStringsVersionTwo(String[] names) {
    return Stream
      .of(names)
      .zipWithIndex()
      .filter(tuple -> tuple._2 % 2 == 1)
      .map(tuple -> tuple._1)
      .toJavaList();
}

We can test this example with the following method:

@Test
public void whenCalled_thenReturnListOfOddStringsVersionTwo() {
    String[] names 
      = {"Afrim", "Bashkim", "Besim", "Lulzim", "Durim", "Shpetim"};
    List<String> expectedResult 
      = Arrays.asList("Bashkim", "Lulzim", "Shpetim");
    List<String> actualResult 
      = StreamIndices.getOddIndexedStringsVersionTwo(names);

    assertEquals(expectedResult, actualResult);
}

If you want to read more about Vavr, check this article.

6. Conclusion

In this quick tutorial, we saw four approaches to iterating through streams using indices. Streams have gotten a lot of attention, and being able to iterate through them with indices can be helpful.

There are a lot of features that are included in Java 8 Streams, some of which are already covered on Baeldung.

The code for all the examples explained here, and much more can be found over on GitHub.


Integrating Retrofit with RxJava


1. Overview

This article focuses on how to implement a simple RxJava-ready REST Client using Retrofit.

We’ll build an example application interacting with the GitHub API – using the standard Retrofit approach, and then we’ll enhance it using RxJava to leverage the advantages of Reactive Programming.

2. Plain Retrofit

Let’s first build an example with Retrofit. We’ll use the GitHub APIs to get a sorted list of all the contributors that have more than 100 contributions in any repository.

2.1. Maven Dependencies

To start a project with Retrofit, let’s include these Maven artifacts:

<dependency>
    <groupId>com.squareup.retrofit2</groupId>
    <artifactId>retrofit</artifactId>
    <version>2.3.0</version>
</dependency>

<dependency>
    <groupId>com.squareup.retrofit2</groupId>
    <artifactId>converter-gson</artifactId>
    <version>2.3.0</version>
</dependency>

For the latest versions, have a look at retrofit and converter-gson on Maven Central repository.

2.2. API Interface

Let’s create a simple interface:

public interface GitHubBasicApi {

    @GET("users/{user}/repos")
    Call<List<Repository>> listRepos(@Path("user") String user);
    
    @GET("repos/{user}/{repo}/contributors")
    Call<List<Contributor>> listRepoContributors(
      @Path("user") String user,
      @Path("repo") String repo);   
}

The listRepos() method retrieves a list of repositories for a given user passed as a path parameter.

The listRepoContributors() method retrieves a list of contributors for a given user and repository, both passed as path parameters.

2.3. Logic

Let’s implement the required logic using Retrofit Call objects and normal Java code:

class GitHubBasicService {

    private GitHubBasicApi gitHubApi;

    GitHubBasicService() {
        Retrofit retrofit = new Retrofit.Builder()
          .baseUrl("https://api.github.com/")
          .addConverterFactory(GsonConverterFactory.create())
          .build();

        gitHubApi = retrofit.create(GitHubBasicApi.class);
    }

    List<String> getTopContributors(String userName) throws IOException {
        List<Repository> repos = gitHubApi
          .listRepos(userName)
          .execute()
          .body();

        repos = repos != null ? repos : Collections.emptyList();

        return repos.stream()
          .flatMap(repo -> getContributors(userName, repo))
          .sorted((a, b) -> b.getContributions() - a.getContributions())
          .map(Contributor::getName)
          .distinct()
          .sorted()
          .collect(Collectors.toList());
    }

    private Stream<Contributor> getContributors(String userName, Repository repo) {
        List<Contributor> contributors = null;
        try {
            contributors = gitHubApi
              .listRepoContributors(userName, repo.getName())
              .execute()
              .body();
        } catch (IOException e) {
            e.printStackTrace();
        }

        contributors = contributors != null ? contributors : Collections.emptyList();

        return contributors.stream()
          .filter(c -> c.getContributions() > 100);
    }
}

3. Integrating with RxJava

Retrofit lets us receive call results with custom handlers instead of the normal Call object by using Retrofit Call adapters. This makes it possible to use RxJava Observables and Flowables here.

3.1. Maven Dependencies

To use RxJava adapter, we need to include this Maven artifact:

<dependency>
    <groupId>com.squareup.retrofit2</groupId>
    <artifactId>adapter-rxjava</artifactId>
    <version>2.3.0</version>
</dependency>

For the latest version please check adapter-rxjava in Maven central repository.

3.2. Register RxJava Call Adapter

Let’s add RxJavaCallAdapter to the builder:

Retrofit retrofit = new Retrofit.Builder()
  .baseUrl("https://api.github.com/")
  .addConverterFactory(GsonConverterFactory.create())
  .addCallAdapterFactory(RxJavaCallAdapterFactory.create())
  .build();

3.3. API Interface

At this point, we can change the return type of the interface methods to use Observable<…> rather than Call<…>. We may also use other Rx types like Flowable, Single, Maybe, and Completable.

Let’s modify our API interface to use Observable:

public interface GitHubRxApi {

    @GET("users/{user}/repos")
    Observable<List<Repository>> listRepos(@Path("user") String user);
    
    @GET("repos/{user}/{repo}/contributors")
    Observable<List<Contributor>> listRepoContributors(
      @Path("user") String user,
      @Path("repo") String repo);   
}

3.4. Logic

Let’s implement it using RxJava:

class GitHubRxService {

    private GitHubRxApi gitHubApi;

    GitHubRxService() {
        Retrofit retrofit = new Retrofit.Builder()
          .baseUrl("https://api.github.com/")
          .addConverterFactory(GsonConverterFactory.create())
          .addCallAdapterFactory(RxJavaCallAdapterFactory.create())
          .build();

        gitHubApi = retrofit.create(GitHubRxApi.class);
    }

    Observable<String> getTopContributors(String userName) {
        return gitHubApi.listRepos(userName)
          .flatMapIterable(x -> x)
          .flatMap(repo -> gitHubApi.listRepoContributors(userName, repo.getName()))
          .flatMapIterable(x -> x)
          .filter(c -> c.getContributions() > 100)
          .sorted((a, b) -> b.getContributions() - a.getContributions())
          .map(Contributor::getName)
          .distinct();
    }
}

4. Conclusion

Comparing the code before and after using RxJava, we’ve found that it has been improved in the following ways:

  • Reactive – as our data now flows in streams, it enables us to do asynchronous stream processing with non-blocking back pressure
  • Clear – due to its declarative nature
  • Concise – the whole operation can be represented as one operation chain

All the code in this article is available over on GitHub.

The package com.baeldung.retrofit.basic contains the basic retrofit example while the package com.baeldung.retrofit.rx contains the retrofit example with RxJava integration.

StringBuilder and StringBuffer in Java


1. Overview

In this short article, we’re going to look at similarities and differences between StringBuilder and StringBuffer in Java.

Simply put, StringBuilder was introduced in Java 1.5 as a replacement for StringBuffer.

2. Similarities

Both StringBuilder and StringBuffer create objects that hold a mutable sequence of characters. Let’s see how this works, and how it compares to an immutable String class:

String immutable = "abc";
immutable = immutable + "def";

Even though it may look like we’re modifying the same object by appending “def”, we’re actually creating a new one because String instances can’t be modified.
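To make the object identity explicit, here's a small self-contained sketch (the grow() helper is ours, purely for illustration): concatenation hands back a brand-new String, while the original reference still sees "abc".

```java
public class StringImmutabilityDemo {

    // returns a brand-new String; the argument itself is never modified
    static String grow(String s) {
        return s + "def";
    }

    public static void main(String[] args) {
        String original = "abc";
        String grown = grow(original);

        System.out.println(original);          // abc - untouched
        System.out.println(grown);             // abcdef - a new object
        System.out.println(original == grown); // false - two distinct instances
    }
}
```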

When using either StringBuffer or StringBuilder, we can use the append() method:

StringBuffer sb = new StringBuffer("abc");
sb.append("def");

In this case, there was no new object created. We have called the append() method on sb instance and modified its content. StringBuffer and StringBuilder are mutable objects.

3. Differences

StringBuffer is synchronized and therefore thread-safe. StringBuilder is compatible with StringBuffer API but with no guarantee of synchronization.

Because it’s not a thread-safe implementation, StringBuilder is faster, and it’s recommended in places where there’s no need for thread safety.
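To see the synchronization guarantee in action, here's a minimal sketch (the thread and iteration counts are arbitrary choices of ours): four threads appending concurrently to a StringBuffer never lose an update, whereas swapping in a StringBuilder here could produce a length below 4000, since its append() is unsynchronized.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BufferConcurrencyDemo {

    static int concurrentAppends() {
        StringBuffer buffer = new StringBuffer();
        ExecutorService pool = Executors.newFixedThreadPool(4);

        for (int t = 0; t < 4; t++) {
            pool.submit(() -> {
                for (int i = 0; i < 1000; i++) {
                    buffer.append("x"); // append() is synchronized - no lost updates
                }
            });
        }

        pool.shutdown();
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return buffer.length();
    }

    public static void main(String[] args) {
        // every append survives: 4 threads x 1000 chars
        System.out.println(concurrentAppends()); // 4000
    }
}
```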

3.1. Performance

In small iterations, the performance difference is insignificant. Let’s do a quick micro-benchmark with JMH:

@State(Scope.Benchmark)
public static class MyState {
    int iterations = 1000;
    String initial = "abc";
    String suffix = "def";
}

@Benchmark
public StringBuffer benchmarkStringBuffer(MyState state) {
    StringBuffer stringBuffer = new StringBuffer(state.initial);
    for (int i = 0; i < state.iterations; i++) {
        stringBuffer.append(state.suffix);
    }
    return stringBuffer;
}

@Benchmark
public StringBuilder benchmarkStringBuilder(MyState state) {
    StringBuilder stringBuilder = new StringBuilder(state.initial);
    for (int i = 0; i < state.iterations; i++) {
        stringBuilder.append(state.suffix);
    }
    return stringBuilder;
}

We have used the default Throughput mode – i.e. operations per unit of time (higher score is better), which gives:

Benchmark                                          Mode  Cnt      Score      Error  Units
StringBufferStringBuilder.benchmarkStringBuffer   thrpt  200  86169.834 ±  972.477  ops/s
StringBufferStringBuilder.benchmarkStringBuilder  thrpt  200  91076.952 ± 2818.028  ops/s

If we increase the number of iterations from 1k to 1m then we get:

Benchmark                                          Mode  Cnt   Score   Error  Units
StringBufferStringBuilder.benchmarkStringBuffer   thrpt  200  77.178 ± 0.898  ops/s
StringBufferStringBuilder.benchmarkStringBuilder  thrpt  200  85.769 ± 1.966  ops/s

However, let’s bear in mind that this is a micro-benchmark, which may or may not have a real impact on the actual, real-world performance of an application.

4. Conclusions

Simply put, the StringBuffer is a thread-safe implementation and therefore slower than the StringBuilder.

In single-threaded programs, we can take advantage of the StringBuilder. Yet, the performance gain of StringBuilder over StringBuffer may be too small to justify replacing it everywhere. It’s always a good idea to profile the application and understand its runtime performance characteristics before doing any kind of work to replace one implementation with another.

Finally, as always, the code used during the discussion can be found over on GitHub.

New in Spring Security OAuth2 – Verify Claims


1. Overview

In this quick tutorial, we’ll work with a Spring Security OAuth2 implementation and we’ll learn how to verify JWT claims using the new JwtClaimsSetVerifier – introduced in Spring Security OAuth 2.2.0.RELEASE.

2. Maven Configuration

First, we need to add the latest version of spring-security-oauth2 into our pom.xml:

<dependency>
    <groupId>org.springframework.security.oauth</groupId>
    <artifactId>spring-security-oauth2</artifactId>
    <version>2.2.0.RELEASE</version>
</dependency>

3. Token Store Configuration

Next, let’s configure our TokenStore in the Resource Server:

@Bean
public TokenStore tokenStore() {
    return new JwtTokenStore(accessTokenConverter());
}

@Bean
public JwtAccessTokenConverter accessTokenConverter() {
    JwtAccessTokenConverter converter = new JwtAccessTokenConverter();
    converter.setSigningKey("123");
    converter.setJwtClaimsSetVerifier(jwtClaimsSetVerifier());
    return converter;
}

Note how we’re adding the new verifier to our JwtAccessTokenConverter.

For more details on how to configure JwtTokenStore, check out the writeup about using JWT with Spring Security OAuth.

Now, in the following sections, we’ll discuss the different types of claim verifiers and how to make them work together.

4. IssuerClaimVerifier

We’ll start simple – by verifying the Issuer “iss” claim using IssuerClaimVerifier – as follows:

@Bean
public JwtClaimsSetVerifier issuerClaimVerifier() {
    try {
        return new IssuerClaimVerifier(new URL("http://localhost:8081"));
    } catch (MalformedURLException e) {
        throw new RuntimeException(e);
    }
}

In this example, we added a simple IssuerClaimVerifier to verify our issuer. If the JWT token contains a different value for the issuer “iss” claim, an InvalidTokenException will be thrown.

Naturally, if the token contains the expected issuer in the “iss” claim, no exception is thrown and the token is considered valid.

5. Custom Claim Verifier

But, what’s interesting here is that we can also build our custom claim verifier:

@Bean
public JwtClaimsSetVerifier customJwtClaimVerifier() {
    return new CustomClaimVerifier();
}

Here’s a simple implementation of what this can look like – to check if the user_name claim exists in our JWT token:

public class CustomClaimVerifier implements JwtClaimsSetVerifier {
    @Override
    public void verify(Map<String, Object> claims) throws InvalidTokenException {
        String username = (String) claims.get("user_name");
        if ((username == null) || (username.length() == 0)) {
            throw new InvalidTokenException("user_name claim is empty");
        }
    }
}

Notice how we’re simply implementing the JwtClaimsSetVerifier interface here and providing a completely custom implementation of the verify() method – which gives us full flexibility for any kind of check we need.
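For instance, the same pattern could cover the standard “exp” (expiration) claim. The check itself boils down to comparing the claim against the current time; the sketch below isolates that logic in plain Java (the ExpClaimCheck class and isExpired() helper are ours for illustration – in a real verifier, this logic would live inside verify() and throw InvalidTokenException instead):

```java
import java.util.HashMap;
import java.util.Map;

public class ExpClaimCheck {

    // true if the "exp" claim (seconds since the epoch) lies in the past;
    // a missing claim is treated as "nothing to verify"
    static boolean isExpired(Map<String, Object> claims, long nowSeconds) {
        Object exp = claims.get("exp");
        if (exp == null) {
            return false;
        }
        return ((Number) exp).longValue() < nowSeconds;
    }

    public static void main(String[] args) {
        Map<String, Object> claims = new HashMap<>();
        claims.put("exp", 1000L);

        System.out.println(isExpired(claims, 2000L));          // true - expired
        System.out.println(isExpired(new HashMap<>(), 2000L)); // false - no claim
    }
}
```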

6. Combine Multiple Claim Verifiers 

Finally, let’s see how to combine multiple claim verifiers using DelegatingJwtClaimsSetVerifier – as follows:

@Bean
public JwtClaimsSetVerifier jwtClaimsSetVerifier() {
    return new DelegatingJwtClaimsSetVerifier(Arrays.asList(
      issuerClaimVerifier(), customJwtClaimVerifier()));
}

DelegatingJwtClaimsSetVerifier takes a list of JwtClaimsSetVerifier objects and delegates the claim verification process to these verifiers.
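Conceptually, such a delegating verifier just runs its delegates in order and lets the first failure abort verification. Here's a hedged, self-contained sketch of that idea (this is our own illustration, not Spring's actual source; the ClaimsVerifier interface and IllegalStateException stand in for Spring's JwtClaimsSetVerifier and InvalidTokenException):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DelegatingVerifierDemo {

    interface ClaimsVerifier {
        void verify(Map<String, Object> claims);
    }

    static class DelegatingVerifier implements ClaimsVerifier {
        private final List<ClaimsVerifier> delegates;

        DelegatingVerifier(List<ClaimsVerifier> delegates) {
            this.delegates = delegates;
        }

        @Override
        public void verify(Map<String, Object> claims) {
            // run every delegate; the first one that throws aborts verification
            for (ClaimsVerifier v : delegates) {
                v.verify(claims);
            }
        }
    }

    public static void main(String[] args) {
        ClaimsVerifier requireIssuer = claims -> {
            if (!claims.containsKey("iss")) {
                throw new IllegalStateException("missing iss");
            }
        };
        ClaimsVerifier requireUsername = claims -> {
            if (!claims.containsKey("user_name")) {
                throw new IllegalStateException("missing user_name");
            }
        };

        ClaimsVerifier combined = new DelegatingVerifier(
          Arrays.asList(requireIssuer, requireUsername));

        Map<String, Object> claims = new HashMap<>();
        claims.put("iss", "http://localhost:8081");
        claims.put("user_name", "john");

        combined.verify(claims); // both checks pass - no exception
        System.out.println("all claims verified");
    }
}
```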

7. Simple Integration Test

Now that we’re done with the implementation, let’s test our claims verifiers with a simple integration test:

@RunWith(SpringRunner.class)
@SpringBootTest(
  classes = ResourceServerApplication.class, 
  webEnvironment = WebEnvironment.RANDOM_PORT)
public class JwtClaimsVerifierIntegrationTest {

    @Autowired
    private JwtTokenStore tokenStore;

    ...
}

We’ll start with a token that doesn’t contain an issuer (but contains a user_name) – which should be valid:

@Test
public void whenTokenDontContainIssuer_thenSuccess() {
    String tokenValue = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9....";
    OAuth2Authentication auth = tokenStore.readAuthentication(tokenValue);
    
    assertTrue(auth.isAuthenticated());
}

The reason this is valid is simple – the first verifier is only active if an issuer claim exists in the token. If that claim doesn’t exist – the verifier doesn’t kick in.

Next, let’s have a look at a token which contains a valid issuer (http://localhost:8081) and a user_name as well. This should also be valid:

@Test
public void whenTokenContainValidIssuer_thenSuccess() {
    String tokenValue = "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9....";
    OAuth2Authentication auth = tokenStore.readAuthentication(tokenValue);
    
    assertTrue(auth.isAuthenticated());
}

When the token contains an invalid issuer (http://localhost:8082) – then it’s going to be verified and determined to be invalid:

@Test(expected = InvalidTokenException.class)
public void whenTokenContainInvalidIssuer_thenException() {
    String tokenValue = "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9....";
    OAuth2Authentication auth = tokenStore.readAuthentication(tokenValue);
    
    assertTrue(auth.isAuthenticated());
}

Next, when the token doesn’t contain a user_name claim, it’s going to be invalid:

@Test(expected = InvalidTokenException.class)
public void whenTokenDontContainUsername_thenException() {
    String tokenValue = "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9....";
    OAuth2Authentication auth = tokenStore.readAuthentication(tokenValue);
    
    assertTrue(auth.isAuthenticated());
}

And finally, when the token contains an empty user_name claim, then it’s also invalid:

@Test(expected = InvalidTokenException.class)
public void whenTokenContainEmptyUsername_thenException() {
    String tokenValue = "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9....";
    OAuth2Authentication auth = tokenStore.readAuthentication(tokenValue);
    
    assertTrue(auth.isAuthenticated());
}

8. Conclusion

In this quick article, we had a look at the new verifier functionality in the Spring Security OAuth.

As always, the full source code is available over on GitHub.

Introduction to MBassador


1. Overview

Simply put, MBassador is a high-performance event bus utilizing the publish-subscribe semantics.

Messages are broadcast to one or more peers without prior knowledge of how many subscribers there are, or how they use the message.

2. Maven Dependency

Before we can use the library, we need to add the mbassador dependency:

<dependency>
    <groupId>net.engio</groupId>
    <artifactId>mbassador</artifactId>
    <version>1.3.1</version>
</dependency>

3. Basic Event Handling

3.1. Simple Example

We’ll start with a simple example of publishing a message:

private MBassador<Object> dispatcher = new MBassador<>();
private String messageString;

@Before
public void prepareTests() {
    dispatcher.subscribe(this);
}

@Test
public void whenStringDispatched_thenHandleString() {
    dispatcher.post("TestString").now();
 
    assertNotNull(messageString);
    assertEquals("TestString", messageString);
}

@Handler
public void handleString(String message) {
    messageString = message;
}

At the top of this test class, we see the creation of a MBassador with its default constructor. Next, in the @Before method, we call subscribe() and pass in a reference to the class itself.

In subscribe(), the dispatcher inspects the subscriber for @Handler annotations.

And, in the first test, we call dispatcher.post(…).now() to dispatch the message – which results in handleString() being called.

This initial test demonstrates several important concepts. Any Object can be a subscriber, as long as it has one or more methods annotated with @Handler. A subscriber can have any number of handlers.

We’re using test objects that subscribe to themselves for the sake of simplicity, but in most production scenarios, message dispatchers will be in different classes from the consumers.

Handler methods have only one input parameter – the message, and can’t throw any checked exceptions.

Similar to the subscribe() method, the post method accepts any Object. This Object is delivered to subscribers.

When a message is posted, it is delivered to any listeners that have subscribed to the message type.

Let’s add another message handler and send a different message type:

private Integer messageInteger; 

@Test
public void whenIntegerDispatched_thenHandleInteger() {
    dispatcher.post(42).now();
 
    assertNull(messageString);
    assertNotNull(messageInteger);
    assertTrue(42 == messageInteger);
}

@Handler
public void handleInteger(Integer message) {
    messageInteger = message;
}

As expected, when we dispatch an Integer, handleInteger() is called, and handleString() is not. A single dispatcher can be used to send more than one message type.

3.2. Dead Messages

So where does a message go when there is no handler for it? Let’s add a new event handler and then send a third message type:

private Object deadEvent; 

@Test
public void whenLongDispatched_thenDeadEvent() {
    dispatcher.post(42L).now();
 
    assertNull(messageString);
    assertNull(messageInteger);
    assertNotNull(deadEvent);
    assertTrue(deadEvent instanceof Long);
    assertTrue(42L == (Long) deadEvent);
} 

@Handler
public void handleDeadEvent(DeadMessage message) {
    deadEvent = message.getMessage();
}

In this test, we dispatch a Long instead of an Integer. Neither handleInteger() nor handleString() are called, but handleDeadEvent() is.

When there are no handlers for a message, it gets wrapped in a DeadMessage object. Since we added a handler for DeadMessage, we capture it.

DeadMessage can be safely ignored; if an application does not need to track dead messages, they can be allowed to go nowhere.

4. Using an Event Hierarchy

Sending String and Integer events is limiting. Let’s create a few message classes:

public class Message {}

public class AckMessage extends Message {}

public class RejectMessage extends Message {
    int code;

    // setters and getters
}

We have a simple base class and two classes that extend it.

4.1. Sending a Base Class Message

We’ll start with Message events:

private MBassador<Message> dispatcher = new MBassador<>();

private Message message;
private AckMessage ackMessage;
private RejectMessage rejectMessage;

@Before
public void prepareTests() {
    dispatcher.subscribe(this);
}

@Test
public void whenMessageDispatched_thenMessageHandled() {
    dispatcher.post(new Message()).now();
    assertNotNull(message);
    assertNull(ackMessage);
    assertNull(rejectMessage);
}

@Handler
public void handleMessage(Message message) {
    this.message = message;
}

@Handler
public void handleRejectMessage(RejectMessage message) {
   rejectMessage = message;
}

@Handler
public void handleAckMessage(AckMessage message) {
    ackMessage = message;
}

Note that the dispatcher is now declared as MBassador<Message>. This limits us to using Messages but adds an additional layer of type safety.

When we send a Message, handleMessage() receives it. The other two handlers do not.

4.2. Sending a Subclass Message

Let’s send a RejectMessage:

@Test
public void whenRejectDispatched_thenMessageAndRejectHandled() {
    dispatcher.post(new RejectMessage()).now();
 
    assertNotNull(message);
    assertNotNull(rejectMessage);
    assertNull(ackMessage);
}

When we send a RejectMessage both handleRejectMessage() and handleMessage() receive it.

Since RejectMessage extends Message, the Message handler received it, in addition to the RejectMessage handler.

Let’s verify this behavior with an AckMessage:

@Test
public void whenAckDispatched_thenMessageAndAckHandled() {
    dispatcher.post(new AckMessage()).now();
 
    assertNotNull(message);
    assertNotNull(ackMessage);
    assertNull(rejectMessage);
}

Just as we expected, when we send an AckMessage, both handleAckMessage() and handleMessage() receive it.

5. Filtering Messages

Organizing messages by type is already a powerful feature, but we can filter them even more.

5.1. Filter on Class and Subclass

When we posted a RejectMessage or AckMessage, we received the event in both the event handler for the particular type and in the base class.

We can solve this type hierarchy issue by making Message abstract and creating a class such as GenericMessage. But what if we don’t have this luxury?

We can use message filters:

private Message baseMessage;
private Message subMessage;

@Test
public void whenMessageDispatched_thenMessageFiltered() {
    dispatcher.post(new Message()).now();
 
    assertNotNull(baseMessage);
    assertNull(subMessage);
}

@Test
public void whenRejectDispatched_thenRejectFiltered() {
    dispatcher.post(new RejectMessage()).now();
 
    assertNotNull(subMessage);
    assertNull(baseMessage);
}

@Handler(filters = { @Filter(Filters.RejectSubtypes.class) })
public void handleBaseMessage(Message message) {
    this.baseMessage = message;
}

@Handler(filters = { @Filter(Filters.SubtypesOnly.class) })
public void handleSubMessage(Message message) {
    this.subMessage = message;
}

The filters parameter for the @Handler annotation accepts a Class that implements IMessageFilter. The library offers two examples:

The Filters.RejectSubtypes does as its name suggests: it will filter out any subtypes. In this case, we see that RejectMessage is not handled by handleBaseMessage().

The Filters.SubtypesOnly also does as its name suggests: it will filter out any base types. In this case, we see that Message is not handled by handleSubMessage().

5.2. IMessageFilter

The Filters.RejectSubtypes and the Filters.SubtypesOnly both implement IMessageFilter.

RejectSubtypes compares the class of the message to the handler’s defined message types and only allows through messages that equal one of those types, as opposed to any subclasses.

5.3. Filter with Conditions

Fortunately, there is an easier way of filtering messages. MBassador supports a subset of Java EL expressions as conditions for filtering messages.

Let’s filter a String message based on its length:

private String testString;

@Test
public void whenLongStringDispatched_thenStringFiltered() {
    dispatcher.post("foobar!").now();
 
    assertNull(testString);
}

@Handler(condition = "msg.length() < 7")
public void handleStringMessage(String message) {
    this.testString = message;
}

The “foobar!” message is seven characters long and is filtered. Let’s send a shorter String:

@Test
public void whenShortStringDispatched_thenStringHandled() {
    dispatcher.post("foobar").now();
 
    assertNotNull(testString);
}

Now, “foobar” is only six characters long, so it passes through.

Our RejectMessage contains a field with an accessor. Let’s write a filter for that:

private RejectMessage rejectMessage;

@Test
public void whenWrongRejectDispatched_thenRejectFiltered() {

    RejectMessage testReject = new RejectMessage();
    testReject.setCode(-1);

    dispatcher.post(testReject).now();
 
    assertNull(rejectMessage);
    assertNotNull(subMessage);
    assertEquals(-1, ((RejectMessage) subMessage).getCode());
}

@Handler(condition = "msg.getCode() != -1")
public void handleRejectMessage(RejectMessage rejectMessage) {
    this.rejectMessage = rejectMessage;
}

Here again, we can query a method on an object and either filter the message or not.

5.4. Capture Filtered Messages

Similar to DeadEvents, we may want to capture and process filtered messages. MBassador provides a dedicated mechanism for this too, and treats filtered events differently from “dead” events.

Let’s write a test that illustrates this:

private String testString;
private FilteredMessage filteredMessage;
private DeadMessage deadMessage;

@Test
public void whenLongStringDispatched_thenStringFiltered() {
    dispatcher.post("foobar!").now();
 
    assertNull(testString);
    assertNotNull(filteredMessage);
    assertTrue(filteredMessage.getMessage() instanceof String);
    assertNull(deadMessage);
}

@Handler(condition = "msg.length() < 7")
public void handleStringMessage(String message) {
    this.testString = message;
}

@Handler
public void handleFilterMessage(FilteredMessage message) {
    this.filteredMessage = message;
}

@Handler
public void handleDeadMessage(DeadMessage deadMessage) {
    this.deadMessage = deadMessage;
}

With the addition of a FilteredMessage handler, we can track Strings that are filtered because of their length. The filteredMessage contains our too-long String while deadMessage remains null.

6. Asynchronous Message Dispatch and Handling

So far all of our examples have used synchronous message dispatch; when we called post.now() the messages were delivered to each handler in the same thread we called post() from.

6.1. Asynchronous Dispatch

MBassador.post() returns a SyncAsyncPostCommand. This class offers several methods, including:

  • now() – dispatch messages synchronously; the call will block until all messages have been delivered
  • asynchronously() – executes the message publication asynchronously

Let’s use asynchronous dispatch in a sample class. We’ll use Awaitility in these tests to simplify the code:

private MBassador<Message> dispatcher = new MBassador<>();
private String testString;
private AtomicBoolean ready = new AtomicBoolean(false);

@Test
public void whenAsyncDispatched_thenMessageReceived() {
    dispatcher.post("foobar").asynchronously();
 
    await().untilAtomic(ready, equalTo(true));
    assertNotNull(testString);
}

@Handler
public void handleStringMessage(String message) {
    this.testString = message;
    ready.set(true);
}

We call asynchronously() in this test, and use an AtomicBoolean as a flag with await() to wait for the delivery thread to deliver the message.

If we comment out the call to await(), we risk the test failing, because we check testString before the delivery thread completes.

6.2. Asynchronous Handler Invocation

Asynchronous dispatch allows the message provider to return to message processing before the messages are delivered to each handler, but it still calls each handler in order, and each handler has to wait for the previous to finish.

This can lead to problems if one handler performs an expensive operation.
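The difference can be sketched in plain Java with an ExecutorService (a simplified model, not MBassador's actual dispatch machinery):

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

public class AsyncInvocationSketch {

    // Synchronous invocation: each handler runs on the caller's thread, in order,
    // so a slow handler delays every handler after it
    static void dispatchSync(String message, List<Consumer<String>> handlers) {
        handlers.forEach(h -> h.accept(message));
    }

    // Asynchronous invocation: each handler is submitted to a pool,
    // so handlers no longer wait for each other
    static void dispatchAsync(String message, List<Consumer<String>> handlers,
                              ExecutorService pool) {
        handlers.forEach(h -> pool.submit(() -> h.accept(message)));
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        CountDownLatch done = new CountDownLatch(1);
        String caller = Thread.currentThread().getName();

        dispatchAsync("hello", List.of(msg -> {
            // runs on a pool thread, not the dispatching thread
            System.out.println(!Thread.currentThread().getName().equals(caller));
            done.countDown();
        }), pool);

        done.await();
        pool.shutdown();
    }
}
```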

MBassador provides a mechanism for asynchronous handler invocation. Handlers configured for this receive messages in their own threads:

private Integer testInteger;
private String invocationThreadName;
private AtomicBoolean ready = new AtomicBoolean(false);

@Test
public void whenHandlerAsync_thenHandled() {
    dispatcher.post(42).now();
 
    await().untilAtomic(ready, equalTo(true));
    assertNotNull(testInteger);
    assertFalse(Thread.currentThread().getName().equals(invocationThreadName));
}

@Handler(delivery = Invoke.Asynchronously)
public void handleIntegerMessage(Integer message) {
 
    this.invocationThreadName = Thread.currentThread().getName();
    this.testInteger = message;
    ready.set(true);
}

Handlers can request asynchronous invocation with the delivery = Invoke.Asynchronously property on the Handler annotation. We verify this in our test by comparing the Thread names in the dispatching method and the handler.

7. Customizing MBassador

So far we’ve been using an instance of MBassador with its default configuration. The dispatcher’s behavior can be modified with annotations, similar to those we have seen so far; we’ll cover a few more to finish this tutorial.

7.1. Exception Handling

Handlers cannot throw checked exceptions. Instead, we can provide the dispatcher with an IPublicationErrorHandler as an argument to its constructor:

public class MBassadorConfigurationTest
  implements IPublicationErrorHandler {

    private MBassador<String> dispatcher;
    private String messageString;
    private Throwable errorCause;

    @Before
    public void prepareTests() {
        dispatcher = new MBassador<String>(this);
        dispatcher.subscribe(this);
    }

    @Test
    public void whenErrorOccurs_thenErrorHandler() {
        dispatcher.post("Error").now();
 
        assertNull(messageString);
        assertNotNull(errorCause);
    }

    @Test
    public void whenNoErrorOccurs_thenStringHandler() {
        dispatcher.post("Error").now();
 
        assertNull(errorCause);
        assertNotNull(messageString);
    }

    @Handler
    public void handleString(String message) {
        if ("Error".equals(message)) {
            throw new Error("BOOM");
        }
        messageString = message;
    }

    @Override
    public void handleError(PublicationError error) {
        errorCause = error.getCause().getCause();
    }
}

When handleString() throws an Error, it is saved to errorCause.
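The double getCause() is needed because the Throwable thrown by the handler arrives wrapped in an invocation exception inside the PublicationError. A small sketch of that unwrapping (the wrapper type here is illustrative):

```java
public class CauseUnwrapSketch {
    public static void main(String[] args) {
        // the handler's original Error...
        Throwable boom = new Error("BOOM");
        // ...is wrapped by the reflective handler invocation;
        // PublicationError.getCause() would return this wrapper
        Throwable wrapper = new RuntimeException("could not invoke handler", boom);

        // a second getCause() hop reaches the original Error
        System.out.println(wrapper.getCause().getMessage()); // BOOM
    }
}
```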

7.2. Handler Priority

Handlers are called in reverse order of how they are added, but this isn’t behavior we want to rely on. Even with the ability to call handlers in their threads, we may still need to know what order they will be called in.

We can set handler priority explicitly:

private LinkedList<Integer> list = new LinkedList<>();

@Test
public void whenRejectDispatched_thenPriorityHandled() {
    dispatcher.post(new RejectMessage()).now();

    // Items should pop() off in reverse priority order
    assertTrue(1 == list.pop());
    assertTrue(3 == list.pop());
    assertTrue(5 == list.pop());
}

@Handler(priority = 5)
public void handleRejectMessage5(RejectMessage rejectMessage) {
    list.push(5);
}

@Handler(priority = 3)
public void handleRejectMessage3(RejectMessage rejectMessage) {
    list.push(3);
}

@Handler(priority = 2, rejectSubtypes = true)
public void handleMessage(Message rejectMessage) {
    // never invoked for RejectMessage because of rejectSubtypes = true
    list.push(2);
}

@Handler(priority = 0)
public void handleRejectMessage0(RejectMessage rejectMessage) {
    list.push(1);
}

Handlers are called from highest priority to lowest. Handlers with the default priority, which is zero, are called last. We see that the handler numbers pop() off in reverse order.
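The list in the test is used as a stack, which is why the assertions read in reverse priority order; the same behavior with a plain Deque:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class PriorityOrderSketch {
    public static void main(String[] args) {
        Deque<Integer> list = new ArrayDeque<>();
        // handlers push in priority order: 5 first, then 3, then 1 (priority 0)
        list.push(5);
        list.push(3);
        list.push(1);
        // pop() returns the most recently pushed element first
        System.out.println(list.pop()); // 1
        System.out.println(list.pop()); // 3
        System.out.println(list.pop()); // 5
    }
}
```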

7.3. Reject Subtypes, the Easy Way

What happened to handleMessage() in the test above? We don't have to use Filters.RejectSubtypes.class to filter out subtypes.

rejectSubtypes is a boolean property on the @Handler annotation that provides the same filtering as the class, but with better performance than the IMessageFilter implementation.

We still need to use the filter-based implementation for accepting subtypes only, though.

8. Conclusion

MBassador is a simple and straightforward library for passing messages between objects. Messages can be organized in a variety of ways and can be dispatched synchronously or asynchronously.

And, as always, the example is available in this GitHub project.

Example of Vertx and RxJava Integration


1. Overview

RxJava is a popular library for creating asynchronous and event-based programs; it takes inspiration from the main ideas brought forward by the Reactive Extensions initiative.

Vert.x, a project under Eclipse‘s umbrella, offers several components designed from the ground up to fully leverage the reactive paradigm.

Used together, they could prove to be a valid foundation for any Java program that needs to be reactive.

In this article, we’ll load a file with a list of city names, and for each of them we’ll print out how long the day is, from sunrise to sunset.

We’ll use data published by the public www.metaweather.com REST API to calculate the length of daylight, and we’ll use RxJava with Vert.x to do it in a purely reactive way.

2. Maven Dependency

Let’s start by importing vertx-rx-java2:

<dependency>
    <groupId>io.vertx</groupId>
    <artifactId>vertx-rx-java2</artifactId>
    <version>3.5.0-Beta1</version>
</dependency>

At the time of writing, the integration between Vert.x and the newer RxJava 2 is available only as a beta release, which is, however, stable enough for the program we’re building.

Note that io.vertx:vertx-rx-java2 depends on io.reactivex.rxjava2:rxjava so there’s no need to import any RxJava related package explicitly.

The latest version of the Vert.x integration with RxJava can be found on Maven Central.

3. Set Up

As in every application that uses Vert.x, we’ll start creating the vertx object, the main entry point to all Vert.x features:

Vertx vertx = io.vertx.reactivex.core.Vertx.vertx();

The vertx-rx-java2 library provides two classes: io.vertx.core.Vertx and io.vertx.reactivex.core.Vertx. While the first is the usual entry point for applications that are uniquely based on Vert.x, the latter is the one we have to use to get the integration with RxJava.

We go on defining objects we’ll use later:

FileSystem fileSystem = vertx.fileSystem();
HttpClient httpClient = vertx.createHttpClient();

Vert.x‘s FileSystem gives access to the file system in a reactive way, while Vert.x‘s HttpClient does the same for HTTP.

4. Reactive Chain

It’s easy in a reactive context to concatenate several simpler reactive operators to obtain a meaningful computation.

Let’s do that for our example:

fileSystem
  .rxReadFile("cities.txt").toFlowable()
  .flatMap(buffer -> Flowable.fromArray(buffer.toString().split("\\r?\\n")))
  .flatMap(city -> searchByCityName(httpClient, city))
  .flatMap(HttpClientResponse::toFlowable)
  .map(extractingWoeid())
  .flatMap(cityId -> getDataByPlaceId(httpClient, cityId))
  .flatMap(toBufferFlowable())
  .map(Buffer::toJsonObject)
  .map(toCityAndDayLength())
  .subscribe(System.out::println, Throwable::printStackTrace);

Let’s now explore how each logical chunk of code works.

5. City Names

The first step is to read a file containing a list of city names, one name per line:

fileSystem
 .rxReadFile("cities.txt").toFlowable()
 .flatMap(buffer -> Flowable.fromArray(buffer.toString().split("\\r?\\n")))

The method rxReadFile() reactively reads a file and returns an RxJava Single<Buffer>. So we get the integration we’re looking for: the asynchronicity of Vert.x in a data structure from RxJava.

There’s only one file, so we’ll get a single emission of a Buffer with the full content of the file. We convert that input into an RxJava Flowable and flat-map the lines of the file into a Flowable that emits an event for each city name instead.
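The split regex used in the chain handles both Unix and Windows line endings; here is what it does to the buffer's content, sketched in plain Java:

```java
public class SplitLinesSketch {
    public static void main(String[] args) {
        // mixed line endings, as a file edited on different systems might have
        String fileContent = "Chicago\r\nMilan\nCairo";

        // "\\r?\\n" matches both "\r\n" and "\n"
        String[] cities = fileContent.split("\\r?\\n");

        for (String city : cities) {
            System.out.println(city);
        }
        // prints Chicago, Milan, Cairo on separate lines
    }
}
```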

6. JSON City Descriptor

Having the city name, the next step is to use the Metaweather REST API to get the identifier code for that city. This identifier will then be used to get the sunrise and sunset times for the city. Let’s continue the chain of invocations:

.flatMap(city -> searchByCityName(httpClient, city))
.flatMap(HttpClientResponse::toFlowable)

The searchByCityName() method uses the HttpClient we created in the first step – to invoke the REST service that gives the identifier of a city. Then with the second flatMap() we get the Buffer containing the response.

Let’s complete this step writing the searchByCityName()‘s body:

Flowable<HttpClientResponse> searchByCityName(HttpClient httpClient, String cityName) {
    HttpClientRequest req = httpClient.get(
        new RequestOptions()
          .setHost("www.metaweather.com")
          .setPort(443)
          .setSsl(true)
          .setURI(format("/api/location/search/?query=%s", cityName)));
    return req
      .toFlowable()
      .doOnSubscribe(subscription -> req.end());
}

Vert.x‘s HttpClient returns an RxJava Flowable that emits the reactive HTTP response. This in turn emits the body of the response split into Buffers.

We created a new reactive request for the proper URL. Note, however, that Vert.x requires the HttpClientRequest.end() method to be invoked to signal that the request can be sent, and that at least one subscription must exist before end() can be successfully invoked.

A solution to accomplish that is to use RxJava‘s doOnSubscribe() to invoke end() as soon as a consumer subscribes.

7. City Identifiers

We now just need to get the value of the woeid property of the returned JSON object, which uniquely identifies the city through a custom method:

.map(extractingWoeid())

The extractingWoeid() method returns a function that extracts the city identifier from the JSON contained in the REST service response:

private static Function<Buffer, Long> extractingWoeid() {
    return cityBuffer -> cityBuffer
      .toJsonArray()
      .getJsonObject(0)
      .getLong("woeid");
}

Note that we can use the handy toJson…() methods provided by Buffer to quickly get access to the properties we need.

8. City Details

Let’s continue the reactive chain to retrieve the details we need from the REST API:

.flatMap(cityId -> getDataByPlaceId(httpClient, cityId))
.flatMap(toBufferFlowable())

Let’s detail the getDataByPlaceId() method:

static Flowable<HttpClientResponse> getDataByPlaceId(
  HttpClient httpClient, long placeId) {
 
    return autoPerformingReq(
      httpClient,
      format("/api/location/%s/", placeId));
}

Here, we used the same approach we’ve put in place in the previous step. getDataByPlaceId() returns a Flowable<HttpClientResponse>. The HttpClientResponse, in turn, will emit the API response in chunks if it is longer than a few bytes.

With the toBufferFlowable() method we reduce the response chunks into a single Buffer to have access to the full JSON object:

static Function<HttpClientResponse, Publisher<? extends Buffer>>
  toBufferFlowable() {
    return response -> response
      .toObservable()
      .reduce(
        Buffer.buffer(),
        Buffer::appendBuffer).toFlowable();
}
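The reduction is just repeated concatenation: start from an empty accumulator and append each chunk as it arrives. A plain-Java sketch of the same idea, with Strings standing in for Buffers (the sample chunks are made up for illustration):

```java
import java.util.List;

public class ReduceChunksSketch {
    public static void main(String[] args) {
        // the HTTP response body may arrive in several chunks
        List<String> chunks = List.of("{\"title\":", "\"Milan\"", "}");

        // mirrors reduce(Buffer.buffer(), Buffer::appendBuffer):
        // empty accumulator, then append every chunk in order
        String full = chunks.stream().reduce("", String::concat);

        System.out.println(full); // {"title":"Milan"}
    }
}
```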

9. Sunset and Sunrise Times

Let’s keep on adding to the reactive chain, retrieving the information we’re interested in from the JSON object:

.map(toCityAndDayLength())

Let’s write the toCityAndDayLength() method:

static Function<JsonObject, CityAndDayLength> toCityAndDayLength() {
    return json -> {
        ZonedDateTime sunRise = ZonedDateTime.parse(json.getString("sun_rise"));
        ZonedDateTime sunSet = ZonedDateTime.parse(json.getString("sun_set"));
        String cityName = json.getString("title");
        return new CityAndDayLength(
          cityName, sunSet.toEpochSecond() - sunRise.toEpochSecond());
    };
}

It returns a function that maps the information contained in a JSON to create a POJO that simply calculates the time in hours between sunrise and sunset.
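The core of that calculation, isolated with java.time and sample timestamps (the exact timestamp format returned by the API may differ; these are illustrative ISO-8601 values):

```java
import java.time.ZonedDateTime;

public class DayLengthSketch {
    public static void main(String[] args) {
        ZonedDateTime sunRise = ZonedDateTime.parse("2024-03-21T06:00:00+01:00");
        ZonedDateTime sunSet  = ZonedDateTime.parse("2024-03-21T18:15:00+01:00");

        // difference in epoch seconds, converted to fractional hours
        long seconds = sunSet.toEpochSecond() - sunRise.toEpochSecond();
        double hours = seconds / 3600.0;

        System.out.printf("%.2f hours of light%n", hours); // 12.25 hours of light
    }
}
```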

10. Subscription

The reactive chain is completed. We can now subscribe to the resulting Flowable with a handler that prints out the emitted instances of CityAndDayLength, or the stack trace in case of errors:

.subscribe(
  System.out::println, 
  Throwable::printStackTrace)

When we run the application, we’ll see a result like this, depending on the cities contained in the list and the date on which the application runs:

In Chicago there are 13.3 hours of light.
In Milan there are 13.5 hours of light.
In Cairo there are 12.9 hours of light.
In Moscow there are 14.1 hours of light.
In Santiago there are 11.3 hours of light.
In Auckland there are 11.2 hours of light.

The cities could appear in a different order than the one specified in the file because all the requests to the HTTP API are executed asynchronously.

11. Conclusion

In this article, we saw how easy it is to mix Vert.x reactive modules with the operators and logical constructs provided by RxJava.

The reactive chain we built, albeit long, showed how it makes a complex scenario fairly easy to write.

As always, the complete source code is available over on GitHub.
