
Guide to Guava RangeMap


1. Overview

In this tutorial, we’ll show how to use Google Guava’s RangeMap interface and its implementations.

A RangeMap is a special kind of mapping from disjoint, non-empty ranges to non-null values. Using queries, we can look up the value for any particular key that falls within one of those ranges.

The basic implementation of RangeMap is TreeRangeMap. Internally, the map uses a TreeMap to store each key as a range and each value as any custom Java object.

2. Google Guava’s RangeMap

Let’s have a look at how to use RangeMap.

2.1. Maven Dependency

Let’s start by adding Google’s Guava library dependency in the pom.xml:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>21.0</version>
</dependency>

The latest version of the dependency can be checked here.

3. Creating

Some of the ways in which we may create an instance of RangeMap are:

  • Use the create method from the TreeRangeMap class to create a mutable map:
RangeMap<Integer, String> experienceRangeDesignationMap
  = TreeRangeMap.create();
  • If we intend to create an immutable range map, use the ImmutableRangeMap class (which follows the builder pattern):
RangeMap<Integer, String> experienceRangeDesignationMap
  = new ImmutableRangeMap.<Integer, String>builder()
  .put(Range.closed(0, 2), "Associate")
  .build();

4. Using

Let’s start with a simple example showing the usage of RangeMap.

4.1. Retrieval Based on Input Within a Range

We can get a value associated with a value within a range of integers:

@Test
public void givenRangeMap_whenQueryWithinRange_returnsSuccessfully() {
    RangeMap<Integer, String> experienceRangeDesignationMap 
     = TreeRangeMap.create();

    experienceRangeDesignationMap.put(
      Range.closed(0, 2), "Associate");
    experienceRangeDesignationMap.put(
      Range.closed(3, 5), "Senior Associate");
    experienceRangeDesignationMap.put(
      Range.closed(6, 8),  "Vice President");
    experienceRangeDesignationMap.put(
      Range.closed(9, 15), "Executive Director");

    assertEquals("Vice President", 
      experienceRangeDesignationMap.get(6));
    assertEquals("Executive Director", 
      experienceRangeDesignationMap.get(15));
}

Note:

  • The closed method of the Range class creates a closed range; Range.closed(0, 2), for example, covers the integer values from 0 to 2 (both inclusive)
  • The Range in the above example consists of integers. We may use a range of any type, as long as it implements the Comparable interface, such as String, Character, floating-point types, etc.
  • RangeMap returns null when we try to get the value for a key that no range in the map covers (see the sketch after this list)
  • In the case of an ImmutableRangeMap, a range of one key cannot overlap with the range of a key that needs to be inserted. If that happens, we get an IllegalArgumentException
  • Both keys and values in the RangeMap cannot be null. If either of them is null, we get a NullPointerException
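
Here is a minimal sketch demonstrating the last three points, using only the Guava classes introduced above:

RangeMap<Integer, String> map = TreeRangeMap.create();
map.put(Range.closed(0, 2), "Associate");

map.get(20); // null: no range in the map covers 20

// overlapping ranges are rejected by ImmutableRangeMap at build time:
// new ImmutableRangeMap.Builder<Integer, String>()
//   .put(Range.closed(0, 2), "Associate")
//   .put(Range.closed(1, 3), "Senior Associate")
//   .build(); // throws IllegalArgumentException

// map.put(null, "Associate");        // throws NullPointerException
// map.put(Range.closed(3, 5), null); // throws NullPointerException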

4.2. Removing a Value Based on a Range

Let’s see how we can remove values. In this example, we show how to remove a value associated with an entire range. We also show how to remove a value based on a partial key range:

@Test
public void givenRangeMap_whenRemoveRangeIsCalled_removesSuccessfully() {
    RangeMap<Integer, String> experienceRangeDesignationMap 
      = TreeRangeMap.create();

    experienceRangeDesignationMap.put(
      Range.closed(0, 2), "Associate");
    experienceRangeDesignationMap.put(
      Range.closed(3, 5), "Senior Associate");
    experienceRangeDesignationMap.put(
      Range.closed(6, 8), "Vice President");
    experienceRangeDesignationMap.put(
      Range.closed(9, 15), "Executive Director");
 
    experienceRangeDesignationMap.remove(Range.closed(9, 15));
    experienceRangeDesignationMap.remove(Range.closed(1, 4));
  
    assertNull(experienceRangeDesignationMap.get(9));
    assertEquals("Associate", 
      experienceRangeDesignationMap.get(0));
    assertEquals("Senior Associate", 
      experienceRangeDesignationMap.get(5));
    assertNull(experienceRangeDesignationMap.get(1));
}

As we can see, even after partially removing values from a range, we can still get values for the parts of the ranges that remain.

4.3. Span of Key Range

In case we would like to know what the overall span of a RangeMap is, we may use the span method:

@Test
public void givenRangeMap_whenSpanIsCalled_returnsSuccessfully() {
    RangeMap<Integer, String> experienceRangeDesignationMap = 
      TreeRangeMap.create();

    experienceRangeDesignationMap.put(
      Range.closed(0, 2), "Associate");
    experienceRangeDesignationMap.put(
      Range.closed(3, 5), "Senior Associate");
    experienceRangeDesignationMap.put(
      Range.closed(6, 8), "Vice President");
    experienceRangeDesignationMap.put(
      Range.closed(9, 15), "Executive Director");
    Range<Integer> experienceSpan = experienceRangeDesignationMap.span();

    assertEquals(0, experienceSpan.lowerEndpoint().intValue());
    assertEquals(15, experienceSpan.upperEndpoint().intValue());
}

4.4. Getting a SubRangeMap 

When we want to select a part from a RangeMap, we may use the subRangeMap method:

@Test
public void givenRangeMap_whenSubRangeMapIsCalled_returnsSuccessfully() {
    RangeMap<Integer, String> experienceRangeDesignationMap 
      = TreeRangeMap.create();

    experienceRangeDesignationMap.put(
      Range.closed(0, 2), "Associate");
    experienceRangeDesignationMap.put(
      Range.closed(3, 5), "Senior Associate");
    experienceRangeDesignationMap.put(
      Range.closed(6, 8), "Vice President");
    experienceRangeDesignationMap.put(
      Range.closed(9, 15), "Executive Director");
    RangeMap<Integer, String> experiencedSubRangeDesignationMap         
      = experienceRangeDesignationMap.subRangeMap(Range.closed(4, 14));
        
    assertNull(experiencedSubRangeDesignationMap.get(3));
    assertEquals("Executive Director", 
      experiencedSubRangeDesignationMap.get(14));
    assertEquals("Vice President", 
      experiencedSubRangeDesignationMap.get(7));
}

4.5. Getting an Entry

Finally, if we are looking for an Entry from a RangeMap, we use the getEntry method:

@Test
public void givenRangeMap_whenGetEntryIsCalled_returnsEntrySuccessfully() {
    RangeMap<Integer, String> experienceRangeDesignationMap 
      = TreeRangeMap.create();

    experienceRangeDesignationMap.put(
      Range.closed(0, 2), "Associate");
    experienceRangeDesignationMap.put(
      Range.closed(3, 5), "Senior Associate");
    experienceRangeDesignationMap.put(
      Range.closed(6, 8), "Vice President");
    experienceRangeDesignationMap.put(
      Range.closed(9, 15), "Executive Director");
    Map.Entry<Range<Integer>, String> experienceEntry 
      = experienceRangeDesignationMap.getEntry(10);
       
    assertEquals(Range.closed(9, 15), experienceEntry.getKey());
    assertEquals("Executive Director", experienceEntry.getValue());
}

5. Conclusion

In this tutorial, we illustrated examples of using RangeMap from the Guava library. It is predominantly used to look up a value based on a key that falls within one of the ranges in the map.

The implementation of these examples can be found in the GitHub project – this is a Maven-based project, so it should be easy to import and run as is.


A Guide to TreeMap in Java


1. Overview

In this article, we are going to explore the TreeMap implementation of the Map interface from the Java Collections Framework (JCF).

TreeMap is a map implementation that keeps its entries sorted according to the natural ordering of its keys or, better still, using a comparator if one is provided by the user at construction time.

Previously, we covered the HashMap and LinkedHashMap implementations, and quite a bit of how those classes work applies here as well.

The mentioned articles are highly recommended reading before going forth with this one.

2. Default Sorting in TreeMap

By default, TreeMap sorts all its entries according to their natural ordering. For an integer, this would mean ascending order and for strings, alphabetical order.

Let’s see the natural ordering in a test:

@Test
public void givenTreeMap_whenOrdersEntriesNaturally_thenCorrect() {
    TreeMap<Integer, String> map = new TreeMap<>();
    map.put(3, "val");
    map.put(2, "val");
    map.put(1, "val");
    map.put(5, "val");
    map.put(4, "val");

    assertEquals("[1, 2, 3, 4, 5]", map.keySet().toString());
}

Notice that we placed the integer keys in no particular order, but on retrieving the key set, we confirm that they are indeed maintained in ascending order. This is the natural ordering of integers.

Likewise, when we use strings, they will be sorted in their natural order, i.e. alphabetically:

@Test
public void givenTreeMap_whenOrdersEntriesNaturally_thenCorrect2() {
    TreeMap<String, String> map = new TreeMap<>();
    map.put("c", "val");
    map.put("b", "val");
    map.put("a", "val");
    map.put("e", "val");
    map.put("d", "val");

    assertEquals("[a, b, c, d, e]", map.keySet().toString());
}

TreeMap, unlike a hash map and linked hash map, does not employ the hashing principle anywhere since it does not use an array to store its entries.
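
To make this concrete, here is a small sketch: since entries are ordered by comparison rather than hashing, a key type with no natural ordering (and no supplied comparator) fails at runtime:

TreeMap<Object, String> map = new TreeMap<>();
map.put(new Object(), "val"); // throws ClassCastException: Object is not Comparable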

3. Custom Sorting in TreeMap

If we’re not satisfied with the natural ordering of TreeMap, we can also define our own rule for ordering by means of a comparator during construction of a tree map.

In the example below, we want the integer keys to be ordered in descending order:

@Test
public void givenTreeMap_whenOrdersEntriesByComparator_thenCorrect() {
    TreeMap<Integer, String> map = 
      new TreeMap<>(Comparator.reverseOrder());
    map.put(3, "val");
    map.put(2, "val");
    map.put(1, "val");
    map.put(5, "val");
    map.put(4, "val");
        
    assertEquals("[5, 4, 3, 2, 1]", map.keySet().toString());
}

A hash map does not guarantee the order of keys stored and specifically does not guarantee that this order will remain the same over time, but a tree map guarantees that the keys will always be sorted according to the specified order.

4. Importance of TreeMap Sorting

We now know that TreeMap stores all of its entries in sorted order. Because of this attribute of tree maps, we can perform queries like: find the “largest”, find the “smallest”, find all keys less than or greater than a certain value, etc.

The code below only covers a small percentage of these cases:

@Test
public void givenTreeMap_whenPerformsQueries_thenCorrect() {
    TreeMap<Integer, String> map = new TreeMap<>();
    map.put(3, "val");
    map.put(2, "val");
    map.put(1, "val");
    map.put(5, "val");
    map.put(4, "val");
        
    Integer highestKey = map.lastKey();
    Integer lowestKey = map.firstKey();
    Set<Integer> keysLessThan3 = map.headMap(3).keySet();
    Set<Integer> keysGreaterThanEqTo3 = map.tailMap(3).keySet();

    assertEquals(new Integer(5), highestKey);
    assertEquals(new Integer(1), lowestKey);
    assertEquals("[1, 2]", keysLessThan3.toString());
    assertEquals("[3, 4, 5]", keysGreaterThanEqTo3.toString());
}

5. Internal Implementation of TreeMap

TreeMap implements the NavigableMap interface and bases its internal working on the principles of red-black trees:

public class TreeMap<K,V> extends AbstractMap<K,V>
  implements NavigableMap<K,V>, Cloneable, java.io.Serializable

The principle of red-black trees is beyond the scope of this article; however, there are key things to remember in order to understand how they fit into TreeMap.

First of all, a red-black tree is a data structure that consists of nodes; picture an inverted mango tree with its root in the sky and the branches growing downward. The root will contain the first element added to the tree.

The rule is that starting from the root, any element in the left branch of any node is always less than the element in the node itself. Those on the right are always greater. What defines greater or less than is determined by the natural ordering of the elements or the defined comparator at construction as we saw earlier.

This rule guarantees that the entries of a tree map will always be in a sorted and predictable order.

Secondly, a red-black tree is a self-balancing binary search tree. This attribute, together with the above, guarantees that basic operations like search, get, put and remove take logarithmic time, O(log n).

Being self-balancing is key here. As we keep inserting and deleting entries, picture the tree growing longer on one edge or shorter on the other.

This would mean that an operation would take a shorter time on the shorter branch and longer time on the branch which is furthest from the root, something we would not want to happen.

Therefore, this is taken care of in the design of red-black trees. For every insertion and deletion, the maximum height of the tree on any edge is maintained at O(log n) i.e. the tree balances itself continuously.

Just like hash map and linked hash map, a tree map is not synchronized and therefore the rules for using it in a multi-threaded environment are similar to those in the other two map implementations.
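
For example, one standard approach is the java.util.Collections wrapper; a minimal sketch:

SortedMap<Integer, String> syncMap
  = Collections.synchronizedSortedMap(new TreeMap<>());
syncMap.put(1, "val"); // individual operations are now synchronized

// iteration still requires manual synchronization on the wrapper:
synchronized (syncMap) {
    syncMap.keySet().forEach(System.out::println);
}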

6. Choosing the Right Map

Having looked at HashMap and LinkedHashMap implementations previously and now TreeMap, it is important to make a brief comparison between the three to guide us on which one fits where.

A hash map is good as a general-purpose map implementation that provides rapid storage and retrieval operations. However, it falls short because of its chaotic and disordered arrangement of entries.

This causes it to perform poorly in scenarios where there is a lot of iteration, as the entire capacity of the underlying array affects traversal, not just the number of entries.

A linked hash map possesses the good attributes of hash maps and adds order to the entries. It performs better where there is a lot of iteration because only the number of entries is taken into account regardless of capacity.

A tree map takes ordering to the next level by providing complete control over how the keys should be sorted. On the flip side, it offers worse general performance than the other two alternatives.

We could say a linked hash map reduces the chaos in the ordering of a hash map without incurring the performance penalty of a tree map.

7. Conclusion

In this article, we have explored Java TreeMap class and its internal implementation. Since it is the last in a series of common Map interface implementations, we also went ahead to briefly discuss where it fits best in relation to the other two.

The full source code for all the examples used in this article can be found in the GitHub project.

Intro to Spring Remoting with HTTP Invokers


1. Overview

In some cases, we need to decompose a system into several processes, each taking responsibility for a different aspect of our application. In these scenarios, it is not uncommon that one of the processes needs to synchronously get data from another one.

The Spring Framework offers a range of tools collectively called Spring Remoting that allow us to invoke remote services as if they were, at least to some extent, available locally.

In this article, we will set up an application based on Spring’s HTTP invoker, which leverages native Java serialization and HTTP to provide remote method invocation between a client and a server application.

2. Service Definition

Let’s suppose we have to implement a system that allows users to book a ride in a cab.

Let’s also suppose that we choose to build two distinct applications to obtain this goal:

  • a booking engine application to check whether a cab request can be served, and
  • a front-end web application that allows customers to book their rides, ensuring the availability of a cab has been confirmed

2.1. Service Interface

When we use Spring Remoting with an HTTP invoker, we have to define our remotely callable service through an interface, to let Spring create proxies at both the client and the server side that encapsulate the technicalities of the remote call. So let’s start with the interface of a service that allows us to book a cab:

public interface CabBookingService {
    Booking bookRide(String pickUpLocation) throws BookingException;
}

When the service is able to allocate a cab, it returns a Booking object with a reservation code. Booking has to be serializable because Spring’s HTTP invoker has to transfer its instances from the server to the client:

public class Booking implements Serializable {
    private String bookingCode;

    @Override public String toString() {
        return format("Ride confirmed: code '%s'.", bookingCode);
    }

    // standard getters/setters and a constructor
}

If the service is not able to book a cab, a BookingException is thrown. In this case, there’s no need to mark the class as Serializable because Exception already implements it:

public class BookingException extends Exception {
    public BookingException(String message) {
        super(message);
    }
}

2.2. Packaging the Service

The service interface along with all custom classes used as arguments, return types and exceptions have to be available in both client’s and server’s classpath. One of the most effective ways to do that is to pack all of them in a .jar file that can be later included as a dependency in the server’s and client’s pom.xml.

Let’s thus put all the code in a dedicated Maven module, called “api”; we’ll use the following Maven coordinates for this example:

<groupId>com.baeldung</groupId>
<artifactId>api</artifactId>
<version>1.0-SNAPSHOT</version>

3. Server Application

Let’s build the booking engine application to expose the service using Spring Boot.

3.1. Maven Dependencies

First, you’ll need to make sure your project is using Spring Boot:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.4.3.RELEASE</version>
</parent>

You can find the latest Spring Boot version here. We then need the Web starter module:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>

And we need the service definition module that we assembled in the previous step:

<dependency>
    <groupId>com.baeldung</groupId>
    <artifactId>api</artifactId>
    <version>1.0-SNAPSHOT</version>
</dependency>

3.2. Service Implementation

We first define a class that implements the service’s interface:

public class CabBookingServiceImpl implements CabBookingService {

    @Override public Booking bookRide(String pickUpLocation) throws BookingException {
        if (random() < 0.3) throw new BookingException("Cab unavailable");
        return new Booking(randomUUID().toString());
    }
}

Let’s pretend that this is a realistic implementation. Thanks to the random value, we’ll be able to reproduce both the successful scenario ─ when an available cab has been found and a reservation code returned ─ and the failing scenario ─ when a BookingException is thrown to indicate that there is no available cab.

3.3. Exposing the Service

We then need to define an application with a bean of type HttpInvokerServiceExporter in the context. It will take care of exposing an HTTP entry point in the web application that will be later invoked by the client:

@Configuration
@ComponentScan
@EnableAutoConfiguration
public class Server {

    @Bean(name = "/booking") HttpInvokerServiceExporter accountService() {
        HttpInvokerServiceExporter exporter = new HttpInvokerServiceExporter();
        exporter.setService( new CabBookingServiceImpl() );
        exporter.setServiceInterface( CabBookingService.class );
        return exporter;
    }

    public static void main(String[] args) {
        SpringApplication.run(Server.class, args);
    }
}

It is worth noting that Spring’s HTTP invoker uses the name of the HttpInvokerServiceExporter bean as a relative path for the HTTP endpoint URL.

We can now start the server application and keep it running while we set up the client application.

4. Client Application

Let’s now write the client application.

4.1. Maven Dependencies

We’ll use the same service definition and the same Spring Boot version we used on the server side. We still need the web starter dependency, but since we don’t need to automatically start an embedded container, we can exclude the Tomcat starter from the dependency:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-tomcat</artifactId>
        </exclusion>
    </exclusions>
</dependency>

4.2. Client Implementation

Let’s implement the client:

@Configuration
public class Client {

    @Bean
    public HttpInvokerProxyFactoryBean invoker() {
        HttpInvokerProxyFactoryBean invoker = new HttpInvokerProxyFactoryBean();
        invoker.setServiceUrl("http://localhost:8080/booking");
        invoker.setServiceInterface(CabBookingService.class);
        return invoker;
    }

    public static void main(String[] args) throws BookingException {
        CabBookingService service = SpringApplication
          .run(Client.class, args)
          .getBean(CabBookingService.class);
        out.println(service.bookRide("13 Seagate Blvd, Key Largo, FL 33037"));
    }
}

The @Bean annotated invoker() method creates an instance of HttpInvokerProxyFactoryBean. Through the setServiceUrl() method, we provide the URL at which the remote server responds.

Similarly to what we did for the server, we should also provide the interface of the service we want to invoke remotely through the setServiceInterface() method.

HttpInvokerProxyFactoryBean implements Spring’s FactoryBean. A FactoryBean is defined as a bean, but the Spring IoC container will inject the object it creates, not the factory itself. You can find more details about FactoryBean in our factory bean article.

The main() method bootstraps the standalone application and obtains an instance of CabBookingService from the context. Under the hood, this object is just a proxy created by the HttpInvokerProxyFactoryBean that takes care of all the technicalities involved in the execution of the remote invocation. Thanks to it, we can now use the proxy as easily as we would if the service implementation were available locally.

Let’s run the application multiple times to execute several remote calls to verify how the client behaves when a cab is available and when it is not.

5. Caveat Emptor

When we work with technologies that allow remote invocations, there are some pitfalls we should be well aware of.

5.1. Beware of Network Related Exceptions

We should always expect the unexpected when we work with an unreliable resource such as the network.

Let’s suppose the client invokes the server while it cannot be reached ─ either because of a network problem or because the server is down ─ then Spring Remoting will raise a RemoteAccessException, which is a RuntimeException.

The compiler will thus not force us to include the invocation in a try-catch block, but we should always consider doing so, to properly manage network problems.
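
As a hedged sketch, reusing the service proxy from our client above, such a guard could look like this:

try {
    out.println(service.bookRide("13 Seagate Blvd, Key Largo, FL 33037"));
} catch (RemoteAccessException e) {
    // the server was unreachable or the invocation failed in transit
    System.err.println("Booking service unavailable: " + e.getMessage());
}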

5.2. Objects are Transferred by Value, not by Reference

Spring Remoting’s HTTP invoker marshals method arguments and return values to transmit them over the network. This means that the server acts upon a copy of the provided argument, and the client acts upon a copy of the result created by the server.

So we cannot expect, for instance, that invoking a method on the resulting object will change the state of the same object on the server side, because there is no shared object between client and server.
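
For instance, assuming the standard setter mentioned earlier for the Booking class (an assumption, since the article only lists “standard getters/setters”), the following would only mutate the client-side copy:

Booking booking = service.bookRide("13 Seagate Blvd, Key Largo, FL 33037");
booking.setBookingCode("edited-locally"); // changes the local copy only;
// no Booking instance on the server side is affected by this call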

5.3. Beware of Fine-Grained Interfaces

Invoking a method across network boundaries is significantly slower than invoking it on an object in the same process.

For this reason, it is usually a good practice to define services that should be remotely invoked with coarser grained interfaces that are able to complete business transactions requiring fewer interactions, even at the expense of a more cumbersome interface.

6. Conclusion

With this example, we saw how it is easy with Spring Remoting to invoke a remote process.

The solution is slightly less open than other widespread mechanisms like REST or web services, but in scenarios where all the components are developed with Spring, it can represent a viable and far quicker alternative.

As usual, you’ll find the sources over on GitHub.

Guide to EJB Set-up


1. Overview

In this article, we’re going to discuss how to get started with Enterprise JavaBean (EJB) development.

Enterprise JavaBeans are used for developing scalable, distributed, server-side components and typically encapsulate the business logic of the application.

We’ll use WildFly 10.1.0 as our preferred server solution, however, you are free to use any Java Enterprise application server of your choice.

2. Setup

Let’s start by discussing the Maven dependencies required for EJB 3.2 development and how to configure the WildFly application server using either the Maven Cargo plugin or manually.

2.1. Maven Dependency

In order to use EJB 3.2, make sure you add the latest version to the dependencies section of your pom.xml file:

<dependency>
    <groupId>javax</groupId>
    <artifactId>javaee-api</artifactId>
    <version>7.0</version>
    <scope>provided</scope>
</dependency>

You will find the latest dependency in the Maven Repository. This dependency ensures that all Java EE 7 APIs are available during compile time. The provided scope ensures that, once deployed, the dependency will be provided by the container where the application has been deployed.

2.2. WildFly Setup With Maven Cargo

Let’s talk about how to use the Maven Cargo plugin to set up the server.

Here is the code for the Maven profile that provisions the WildFly server:

<profile>
    <id>wildfly-standalone</id>
    <build>
        <plugins>
            <plugin>
                <groupId>org.codehaus.cargo</groupId>
                <artifactId>cargo-maven2-plugin</artifactId>
                <version>${cargo-maven2-plugin.version}</version>
                <configuration>
                    <container>
                        <containerId>wildfly10x</containerId>
                        <zipUrlInstaller>
                            <url>
                                http://download.jboss.org/
                                  wildfly/10.1.0.Final/
                                    wildfly-10.1.0.Final.zip
                            </url>
                        </zipUrlInstaller>
                    </container>
                    <configuration>
                        <properties>
                            <cargo.hostname>127.0.0.1</cargo.hostname>
                            <cargo.jboss.management-http.port>
                                9990
                            </cargo.jboss.management-http.port>
                            <cargo.servlet.users>
                                testUser:admin1234!
                            </cargo.servlet.users>
                        </properties>
                    </configuration>
                </configuration>
            </plugin>
        </plugins>
    </build>
</profile>

We use the plugin to download the WildFly 10.1 zip directly from WildFly’s website. The server is then configured by making sure that the hostname is 127.0.0.1 and that the management port is set to 9990.

Then we create a test user, by using the cargo.servlet.users property, with the user id testUser and the password admin1234!.

Now that the configuration of the plugin is complete, we should be able to call a Maven target and have the server downloaded, installed, and launched, and the application deployed.

To do this, navigate to the ejb-remote directory and run the following command:

mvn clean package cargo:run

When you run this command for the first time, it will download the WildFly 10.1 zip file, extract it, execute the installation, and then launch the server. It will also add the test user discussed above. Any further executions will not download the zip file again.

2.3. Manual Setup of WildFly

In order to set up WildFly manually, you must download the installation zip file yourself from the wildfly.org website. The following steps are a high-level view of the WildFly server setup process:

After downloading and unzipping the file’s contents to the location where you want to install the server, configure the following environment variables:

JBOSS_HOME=/Users/$USER/../wildfly.x.x.Final
JAVA_HOME=`/usr/libexec/java_home -v 1.8`

Then in the bin directory, run the ./standalone.sh for Linux based operating systems or ./standalone.bat for Windows.

After this, you will have to add a user. This user will be used to connect to the remote EJB bean. To find out how to add a user (typically via the add-user.sh or add-user.bat script in the bin directory), you should take a look at the ‘add a user’ documentation.

For detailed setup instructions please visit WildFly’s Getting Started documentation.

The project POM has been configured to work with both the Cargo plugin and manual server configuration by setting up two profiles. By default, the Cargo plugin is selected. However, to deploy the application to an already installed, configured, and running WildFly server, execute the following command in the ejb-remote directory:

mvn clean install wildfly:deploy -Pwildfly-runtime

3. Remote vs Local

A business interface for a bean can be either local or remote.

A @Local annotated bean can only be accessed if it is in the same application as the bean that makes the invocation, i.e. if they reside in the same .ear or .war.

A @Remote annotated bean can be accessed from a different application, i.e. an application residing in a different JVM or application server.

There are some important points to keep in mind when designing a solution that includes EJBs:

  • The java.io.Serializable and java.io.Externalizable interfaces, as well as the interfaces defined by the javax.ejb package, are always excluded when a bean is declared with @Local or @Remote
  • If a bean class is remote, then all implemented interfaces are to be remote
  • If a bean class contains no annotation, or if the @Local annotation is specified, then all implemented interfaces are assumed to be local
  • Any interface that is explicitly defined for a bean which contains no interface must be declared as @Local
  • The EJB 3.2 release tends to provide more granularity for situations where local and remote interfaces need to be explicitly defined, as sketched below
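
Here is a minimal sketch of explicitly declared views; the interface and bean names are hypothetical, chosen purely for illustration:

@Local
public interface CounterLocal {
    int count();
}

@Remote
public interface CounterRemote {
    int count();
}

// the bean states explicitly which view each interface belongs to
@Stateless
public class CounterBean implements CounterLocal, CounterRemote {
    @Override
    public int count() {
        return 42;
    }
}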

4. Creating the Remote EJB

Let’s first create the bean’s interface and call it HelloWorld:

@Remote
public interface HelloWorld {
    String getHelloWorld();
}

Now we will implement the above interface and name the concrete implementation HelloWorldBean:

@Stateless(name = "HelloWorld")
public class HelloWorldBean implements HelloWorld {

    @Resource
    private SessionContext context;

    @Override
    public String getHelloWorld() {
        return "Welcome to EJB Tutorial!";
    }
}

Note the @Stateless annotation on the class declaration. It denotes that this bean is a stateless session bean. This kind of bean does not have any associated client state, but it may preserve its instance state and is normally used to do independent operations.

The @Resource annotation injects the session context into the remote bean.

The SessionContext interface provides access to the runtime session context that the container provides for a session bean instance. The container then passes the SessionContext interface to an instance after the instance has been created. The session context remains associated with that instance for its lifetime.

The EJB container normally creates a pool of stateless bean objects and uses these objects to process client requests. As a result of this pooling mechanism, instance variable values are not guaranteed to be maintained across lookup method calls.
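
A hedged illustration of that caveat, using a hypothetical bean and interface:

@Stateless(name = "Greeting")
public class GreetingBean implements Greeting {

    // instance state in a stateless bean: successive calls may be served by
    // different pooled instances, so this value must not be relied upon
    private String lastCaller;

    @Override
    public String greet(String caller) {
        lastCaller = caller;
        return "Hello " + caller;
    }
}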

5. Remote Setup

In this section, we will discuss how to setup Maven to build and run the application on the server.

Let’s look at the plugins one by one.

5.1. The EJB Plugin

The EJB plugin, which is given below, is used to package an EJB module. We have specified the EJB version as 3.2.

The following plugin configuration is used to setup the target JAR for the bean:

<plugin>
    <artifactId>maven-ejb-plugin</artifactId>
    <version>2.4</version>
    <configuration>
        <ejbVersion>3.2</ejbVersion>
    </configuration>
</plugin>

5.2. Deploy the Remote EJB

To deploy the bean in a WildFly server, ensure that the server is up and running.

Then to execute the remote setup we will need to run the following Maven commands against the pom file in the ejb-remote project:

mvn clean install

Then we should run:

mvn wildfly:deploy

Alternatively, we can deploy it manually as an admin user from the admin console of the application server.

6. Client Setup 

After creating the remote bean we should test the deployed bean by creating a client.

First, let’s discuss the Maven setup for the client project.

6.1. Client-Side Maven Setup

In order to launch the EJB3 client, we need to add the following dependency:

<dependency>
    <groupId>org.wildfly</groupId>
    <artifactId>wildfly-ejb-client-bom</artifactId>
    <type>pom</type>
    <scope>import</scope>
</dependency>

We depend on the EJB remote business interfaces of this application to run the client. So we need to specify the EJB client JAR dependency. We add the following in the parent pom:

<dependency>
    <groupId>com.baeldung.ejb</groupId>
    <artifactId>ejb-remote</artifactId>
    <type>ejb</type>
</dependency>

The <type> is specified as ejb.

6.2. Accessing The Remote Bean

We need to create a file named jboss-ejb-client.properties under src/main/resources that will contain all the properties required to access the deployed bean:

remote.connections=default
remote.connection.default.host=127.0.0.1
remote.connection.default.port=8080
remote.connection.default.connect.options.org.xnio.Options
  .SASL_POLICY_NOANONYMOUS = false
remote.connection.default.connect.options.org.xnio.Options
  .SASL_POLICY_NOPLAINTEXT = false
remote.connection.default.connect.options.org.xnio.Options
  .SASL_DISALLOWED_MECHANISMS = ${host.auth:JBOSS-LOCAL-USER}
remote.connection.default.username=testUser
remote.connection.default.password=admin1234!

7. Creating the Client

The class that will access and use the remote HelloWorld bean has been created in EJBClient.java which is in the com.baeldung.ejb.client package.

7.1. Remote Bean URL

The remote bean is located via a URL that conforms to the following format:

ejb:${appName}/${moduleName}/${distinctName}/${beanName}!${viewClassName}
  • The ${appName} is the application name of the deployment. Here we have not used any EAR file but a simple JAR or WAR deployment, so the application name will be empty
  • The ${moduleName} is the name we set for our deployment earlier, so it is ejb-remote
  • The ${distinctName} is a specific name which can be optionally assigned to the deployments that are deployed on the server. If a deployment doesn’t use distinct-name then we can use an empty String in the JNDI name, for the distinct-name, as we did in our example
  • The ${beanName} variable is the simple name of the implementation class of the EJB, so in our example it is HelloWorld
  • ${viewClassName} denotes the fully-qualified interface name of the remote interface

7.2. Look-up Logic

Next, let’s have a look at our simple look-up logic:

public HelloWorld lookup() throws NamingException { 
    String appName = ""; 
    String moduleName = "ejb-remote"; 
    String distinctName = ""; 
    String beanName = "HelloWorld"; 
    String viewClassName = HelloWorld.class.getName();
    String toLookup = String.format("ejb:%s/%s/%s/%s!%s",
      appName, moduleName, distinctName, beanName, viewClassName);
    return (HelloWorld) context.lookup(toLookup);
}

In order to connect to the bean we just created, we will need a URL which we can feed into the context.

7.3. The Initial Context

We’ll now create and initialize the JNDI initial context:

public void createInitialContext() throws NamingException {
    Properties prop = new Properties();
    prop.put(Context.URL_PKG_PREFIXES, "org.jboss.ejb.client.naming");
    prop.put(Context.INITIAL_CONTEXT_FACTORY, 
      "org.jboss.naming.remote.client.InitialContextFacto[ERROR]
    prop.put(Context.PROVIDER_URL, "http-remoting://127.0.0.1:8080");
    prop.put(Context.SECURITY_PRINCIPAL, "testUser");
    prop.put(Context.SECURITY_CREDENTIALS, "admin1234!");
    prop.put("jboss.naming.client.ejb.context", false);
    context = new InitialContext(prop);
}

To connect to the remote bean, we need a JNDI context. The context factory is provided by the Maven artifact org.jboss:jboss-remote-naming, and it creates a JNDI context which will resolve the URL constructed in the lookup method into proxies to the remote application server process.

7.4. Define Lookup Parameters

We define the factory class with the parameter Context.INITIAL_CONTEXT_FACTORY.

The Context.URL_PKG_PREFIXES parameter is used to define a package to scan for additional naming contexts.

The parameter org.jboss.ejb.client.scoped.context = false tells the context to read the connection parameters (such as the connection host and port) from the provided map instead of from a classpath configuration file. This is especially helpful if we want to create a JAR bundle that should be able to connect to different hosts.

The parameter Context.PROVIDER_URL defines the connection schema and should start with http-remoting://.
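
Putting the pieces together, here is a hedged sketch of how EJBClient could combine the context creation and the lookup; the getEJBRemoteMessage() method matches the name used by the test in the next section, but its body here is our assumption rather than the article’s verbatim code:

public class EJBClient {

    private Context context;

    public String getEJBRemoteMessage() {
        try {
            createInitialContext();
            return lookup().getHelloWorld();
        } catch (NamingException e) {
            throw new RuntimeException("EJB lookup failed", e);
        }
    }

    // createInitialContext() and lookup() as shown above
}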

8. Testing

To test the deployment and check the setup, we can run the following test to make sure everything works properly:

@Test
public void testEJBClient() {
    EJBClient ejbClient = new EJBClient();
    HelloWorldBean bean = new HelloWorldBean();
    
    assertEquals(bean.getHelloWorld(), ejbClient.getEJBRemoteMessage());
}

With the test passing, we can now be sure everything is working as expected.

9. Conclusion

So we have created an EJB server and a client which invokes a method on a remote EJB. The project can be run on any application server by properly adding the dependencies for that server.

The entire project can be found over on GitHub.

Java Web Weekly, Issue 161


1. Spring and Java

>> Bean Validation 2.0 Progress Report [beanvalidation.org]

The new features of the Bean Validation 2.0 definitely look promising.

>> Swift for Beans – var, let and Type Inference [knitelius.com]

Swift-like features are making their way into Java.

>> New JEP Would Simplify Java Type Variance [infoq.com]

Simplified Type Variance possibly in JDK 10.

>> Declutter Your POJOs with Lombok [sitepoint.com]

A short overview of Lombok – the Java boilerplate killer.

>> Pivotal Releases First Milestone of Next-Generation Spring Data Featuring Reactive Database Access [infoq.com]

The first milestone of the new Spring Data was already released.

It looks like it will be possible to create “reactive” repositories making use of the Spring Reactor project.

>> Hibernate Tips: Use query comments to identify a query [thoughts-on-java.org]

A quick and very practical write-up about leveraging query comments in Hibernate.

>> JDK 9 is the End of the Road for Some Features [marxsoftware.com]

Most articles focus on JDK 9 additions. This one goes through the list of features to be removed from the JVM.

>> Protecting JAX-RS Resources with RBAC and Apache Shiro [stormpath.com]

Implementing a fine-grained Role-Based Access Control with Apache Shiro.

>> Flyway Tutorial – Execute Migrations using Maven [codecentric.de]

Another short write-up about doing database migrations with Flyway. This time, focusing on the maven-flyway-plugin.

>> Building Reactive Applications with Akka Actors and Java 8 [infoq.com]

It turns out you do not need to use Scala in order to be able to use Akka 🙂

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Navigating the microservice architecture pattern language – part 1 [plainoldobjects.com]

A short write-up exploring and explaining the semantics of microservices.

>> Better performance: the case for timeouts [odino.org]

A very detailed experiment showing that such basic things as timeouts can noticeably impact the performance.

>> Exploring data sets with Kibana [frankel.ch]

The title says all 🙂

>> AWS Serverless Lambda Scheduled Events to Store Tweets in Couchbase [couchbase.com]

A short tutorial showing how to use Couchbase in a tweet-fetching AWS Lambda application.

Also worth reading:

3. Musings

>> Software Development and the Gig Economy [henrikwarne.com]

A few thoughts about the Software Development market and the direction it’s heading.

>> Managing a To Do list [kylecordes.com]

>> Really Managing a To Do list [kylecordes.com]

Tips on how to effectively manage your TODOs.

>> Automate Your Documentation [daedtech.com]

How to write documentation as easy as possible 🙂

>> Collaborating with Outsiders to the Dev Team [daedtech.com]

Trying to teach developers how to live with other forms of life 🙂

>> Nights and Weekends [swizec.com]

Building something interesting in your off hours isn’t supposed to be easy.

>> SyntheticMonitoring [martinfowler.com]

An explanation of the Synthetic Monitoring technique which revolves around running tests on a live system.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Secret Goals [dilbert.com]

>> Totally different [dilbert.com]

>> Work-life balance [dilbert.com]

5. Pick of the Week

>> How to do what you love and make good money [sivers.org]

Guide to CountDownLatch in Java


1. Introduction

In this article, we’ll give a guide to the CountDownLatch class and demonstrate how it can be used in a few practical examples.

Essentially, by using a CountDownLatch we can cause a thread to block until other threads have completed a given task.

2. Usage in Concurrent Programming

Simply put, a CountDownLatch has a counter field, which we can decrement as required. We can then use it to block a calling thread until the counter has been counted down to zero.

If we were doing some parallel processing, we could instantiate the CountDownLatch with the counter set to the same value as the number of threads we want to work across. Then, we could just call countDown() after each thread finishes, guaranteeing that a dependent thread calling await() will block until the worker threads are finished.
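
As a minimal sketch of that pattern (inside a method that declares throws InterruptedException):

CountDownLatch latch = new CountDownLatch(3);
for (int i = 0; i < 3; i++) {
    new Thread(latch::countDown).start(); // each worker counts down once
}
latch.await(); // blocks until the counter reaches zero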

3. Waiting for a Pool of Threads to Complete

Let’s try out this pattern by creating a Worker and using a CountDownLatch field to signal when it has completed:

public class Worker implements Runnable {
    private List<String> outputScraper;
    private CountDownLatch countDownLatch;

    public Worker(List<String> outputScraper, CountDownLatch countDownLatch) {
        this.outputScraper = outputScraper;
        this.countDownLatch = countDownLatch;
    }

    @Override
    public void run() {
        doSomeWork();
        outputScraper.add("Counted down");
        countDownLatch.countDown(); // count down only after recording the output
    }
}

Then, let’s create a test in order to prove that we can get a CountDownLatch to wait for the Worker instances to complete:

@Test
public void whenParallelProcessing_thenMainThreadWillBlockUntilCompletion()
  throws InterruptedException {

    List<String> outputScraper = Collections.synchronizedList(new ArrayList<>());
    CountDownLatch countDownLatch = new CountDownLatch(5);
    List<Thread> workers = Stream
      .generate(() -> new Thread(new Worker(outputScraper, countDownLatch)))
      .limit(5)
      .collect(toList());

    workers.forEach(Thread::start);
    countDownLatch.await(); 
    outputScraper.add("Latch released");

    assertThat(outputScraper)
      .containsExactly(
        "Counted down",
        "Counted down",
        "Counted down",
        "Counted down",
        "Counted down",
        "Latch released"
      );
}

Naturally, “Latch released” will always be the last output, as it’s dependent on the CountDownLatch releasing.

Note that if we didn’t call await(), we wouldn’t be able to guarantee the ordering of the execution of the threads, so the test would randomly fail.

4. A Pool of Threads Waiting to Begin

If we took the previous example, but this time started thousands of threads instead of five, it’s likely that many of the earlier ones will have finished processing before we have even called start() on the later ones. This could make it difficult to try and reproduce a concurrency problem, as we wouldn’t be able to get all our threads to run in parallel.

To get around this, let’s make the CountDownLatch work differently than in the previous example. Instead of blocking a parent thread until some child threads have finished, we can block each child thread until all the others have started.

Let’s modify our run() method so it blocks before processing:

public class WaitingWorker implements Runnable {

    private List<String> outputScraper;
    private CountDownLatch readyThreadCounter;
    private CountDownLatch callingThreadBlocker;
    private CountDownLatch completedThreadCounter;

    public WaitingWorker(
      List<String> outputScraper,
      CountDownLatch readyThreadCounter,
      CountDownLatch callingThreadBlocker,
      CountDownLatch completedThreadCounter) {

        this.outputScraper = outputScraper;
        this.readyThreadCounter = readyThreadCounter;
        this.callingThreadBlocker = callingThreadBlocker;
        this.completedThreadCounter = completedThreadCounter;
    }

    @Override
    public void run() {
        readyThreadCounter.countDown();
        try {
            callingThreadBlocker.await();
            doSomeWork();
            outputScraper.add("Counted down");
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            completedThreadCounter.countDown();
        }
    }
}

Now, let’s modify our test so it blocks until all the Workers have started, unblocks the Workers, and then blocks until the Workers have finished:

@Test
public void whenDoingLotsOfThreadsInParallel_thenStartThemAtTheSameTime()
 throws InterruptedException {
 
    List<String> outputScraper = Collections.synchronizedList(new ArrayList<>());
    CountDownLatch readyThreadCounter = new CountDownLatch(5);
    CountDownLatch callingThreadBlocker = new CountDownLatch(1);
    CountDownLatch completedThreadCounter = new CountDownLatch(5);
    List<Thread> workers = Stream
      .generate(() -> new Thread(new WaitingWorker(
        outputScraper, readyThreadCounter, callingThreadBlocker, completedThreadCounter)))
      .limit(5)
      .collect(toList());

    workers.forEach(Thread::start);
    readyThreadCounter.await(); 
    outputScraper.add("Workers ready");
    callingThreadBlocker.countDown(); 
    completedThreadCounter.await(); 
    outputScraper.add("Workers complete");

    assertThat(outputScraper)
      .containsExactly(
        "Workers ready",
        "Counted down",
        "Counted down",
        "Counted down",
        "Counted down",
        "Counted down",
        "Workers complete"
      );
}

This pattern is really useful for trying to reproduce concurrency bugs, as it can be used to force thousands of threads to try and perform some logic in parallel.

5. Terminating a CountDownLatch Early

Sometimes, we may run into a situation where the Workers terminate in error before counting down the CountDownLatch. This could result in it never reaching zero and await() never terminating:

@Override
public void run() {
    if (true) {
        throw new RuntimeException("Oh dear, I'm a BrokenWorker");
    }
    outputScraper.add("Counted down");
    countDownLatch.countDown();
}

Let’s modify our earlier test to use a BrokenWorker, in order to show how await() will block forever:

@Test
public void whenFailingToParallelProcess_thenMainThreadShouldNotGetStuck()
  throws InterruptedException {
 
    List<String> outputScraper = Collections.synchronizedList(new ArrayList<>());
    CountDownLatch countDownLatch = new CountDownLatch(5);
    List<Thread> workers = Stream
      .generate(() -> new Thread(new BrokenWorker(outputScraper, countDownLatch)))
      .limit(5)
      .collect(toList());

    workers.forEach(Thread::start);
    countDownLatch.await();
}

Clearly, this is not the behaviour we want – it would be much better for the application to continue than to block infinitely.

To get around this, let’s add a timeout argument to our call to await().

boolean completed = countDownLatch.await(3L, TimeUnit.SECONDS);
assertThat(completed).isFalse();

As we can see, the test will eventually time out and await() will return false.

6. Conclusion

In this quick guide, we’ve demonstrated how we can use a CountDownLatch in order to block a thread until other threads have finished some processing.

We’ve also shown how it can be used to help debug concurrency issues by making sure threads run in parallel.

The implementation of these examples can be found over on GitHub; this is a Maven-based project, so should be easy to run as is.

Introduction to the Kotlin Language


1. Overview

In this tutorial, we’re going to take a look at Kotlin, a new language in the JVM world, and some of its basic features, including classes, inheritance, conditional statements, and looping constructs. Then we will look at some of the main features that make Kotlin an attractive language, including null safety, data classes, extension functions, and String templates.

2. Maven Dependencies

To use Kotlin in your Maven project, you need to add the Kotlin standard library to your pom.xml:

<dependency>
    <groupId>org.jetbrains.kotlin</groupId>
    <artifactId>kotlin-stdlib</artifactId>
    <version>1.0.4</version>
</dependency>

To add JUnit support for Kotlin, you will also need to include the following dependencies:

<dependency>
    <groupId>org.jetbrains.kotlin</groupId>
    <artifactId>kotlin-test-junit</artifactId>
    <version>1.0.4</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
    <scope>test</scope>
</dependency>

You can find the latest versions of kotlin-stdlib, kotlin-test-junit, and junit on Maven Central.

Finally, you will need to configure the source directories and Kotlin plugin in order to perform a Maven build:

<build>
    <sourceDirectory>${project.basedir}/src/main/kotlin</sourceDirectory>
    <testSourceDirectory>${project.basedir}/src/test/kotlin</testSourceDirectory>
    <plugins>
        <plugin>
            <artifactId>kotlin-maven-plugin</artifactId>
            <groupId>org.jetbrains.kotlin</groupId>
            <version>1.0.4</version>
            <executions>
                <execution>
                    <id>compile</id>
                    <goals>
                        <goal>compile</goal>
                    </goals>
                </execution>
                <execution>
                    <id>test-compile</id>
                    <goals>
                        <goal>test-compile</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

You can find the latest version of kotlin-maven-plugin in Maven Central.

3. Basic Syntax

Let’s look at the basic building blocks of the Kotlin language.

There is some similarity to Java (e.g., packages are defined in the same way). Let’s take a look at the differences.

3.1. Defining Functions

Let’s define a function having two Int parameters and an Int return type:

fun sum(a: Int, b: Int): Int {
    return a + b
}

3.2. Defining Local Variables

Assign-once (read-only) local variable:

val a: Int = 1  // immediate assignment
val b = 1       // Int type is inferred
val c: Int      // type required when no initializer is provided
c = 1           // deferred assignment

Note that the type of variable b is inferred by the Kotlin compiler. We could also define mutable variables:

var x = 5 
x += 1

4. Optional Fields

Kotlin has basic syntax for defining a field that can be nullable (optional). When we want to declare that the type of a field is nullable, we need to use the type suffixed with a question mark:

val email: String?

When we define a nullable field, it is perfectly valid to assign null to it:

val email: String? = null

That means that the email field could hold a null. If we write:

val email: String = "value"

then we need to assign a value to the email field in the same statement in which we declare it. It cannot hold a null value. We will get back to Kotlin null safety in a later section.

5. Classes

Let’s demonstrate how to create a simple class for managing a specific category of a product. Our ItemManager class below has a default constructor that populates two fields — categoryId and dbConnection — and an optional email field:

class ItemManager(val categoryId: String, val dbConnection: String) {
    var email = ""
    // ...
}

The ItemManager(…) construct creates a constructor and two fields in our class: categoryId and dbConnection.

Note that our constructor uses the val keyword for its arguments — this means that the corresponding fields will be final and immutable. If we had used the var keyword (as we did when defining the email field), then those fields would be mutable.

Let’s create an instance of ItemManager using the default constructor:

ItemManager("cat_id", "db://connection")

We could also construct ItemManager using named parameters. This is very useful when, as in this example, a function takes two parameters of the same type, e.g. String, and you do not want to confuse their order. Using named parameters, you can explicitly state which parameter is assigned to what. In the class ItemManager there are two fields, categoryId and dbConnection, so both can be referenced using named parameters:

ItemManager(categoryId = "catId", dbConnection = "db://Connection")

It is very useful when we need to pass more arguments to a function.

If you need additional constructors, you would define them using the constructor keyword. Let’s define another constructor that also sets the email field:

constructor(categoryId: String, dbConnection: String, email: String) 
  : this(categoryId, dbConnection) {
    this.email = email
}

Note that this constructor invokes the default constructor that we defined above before setting the email field. And since we already defined categoryId and dbConnection to be immutable using the val keyword in the default constructor, we do not need to repeat the val keyword in the additional constructor.

Now, let’s create an instance using the additional constructor:

ItemManager("cat_id", "db://connection", "foo@bar.com")

If you want to define an instance method on ItemManager, you would do so using the fun keyword:

fun isFromSpecificCategory(catId: String): Boolean {
    return categoryId == catId
}

6. Inheritance

By default, Kotlin’s classes are closed for extension — the equivalent of a class marked final in Java.

In order to specify that a class is open for extension, you would use the open keyword when defining the class.

Let’s define an Item class that is open for extension:

open class Item(val id: String, val name: String = "unknown_name") {
    open fun getIdOfItem(): String {
        return id
    }
}

Note that we also denoted the getIdOfItem() method as open. This allows it to be overridden.

Now, let’s extend the Item class and override the getIdOfItem() method:

class ItemWithCategory(id: String, name: String, val categoryId: String) : Item(id, name) {
    override fun getIdOfItem(): String {
        return id + name
    }
}

7. Conditional Statements

In Kotlin, the conditional statement if is an expression, i.e. it returns a value. Let’s look at an example:

fun makeAnalyisOfCategory(catId: String): Unit {
    val result = if (catId == "100") "Yes" else "No"
    println(result)
}

In this example, we see that if catId is equal to “100”, the conditional block returns “Yes”; otherwise, it returns “No”. The returned value gets assigned to result.

You could also create a normal if…else block:

val number = 2
if (number < 10) {
    println("number less that 10")
} else if (number > 10) {
    println("number is greater that 10")
}

Kotlin also has a very useful when construct that acts like an advanced switch statement:

val name = "John"
when (name) {
    "John" -> println("Hi man")
    "Alice" -> println("Hi lady")
}

8. Collections

There are two types of collections in Kotlin: mutable and immutable. When we create an immutable collection, it means that it is read-only:

val items = listOf(1, 2, 3, 4)

There is no add function on that list.

When we want to create a mutable list that can be altered, we need to use the mutableListOf() method:

val rwList = mutableListOf(1, 2, 3)
rwList.add(5)

A mutable list has an add() method, so we can append elements to it. There are also equivalent methods for the other collection types: mutableMapOf(), mapOf(), setOf(), mutableSetOf().

9. Exceptions

The exception handling mechanism is very similar to the one in Java.

All exception classes extend Throwable. An exception has a message, a stack trace, and an optional cause. Every exception in Kotlin is unchecked, meaning that the compiler does not force us to catch them.

To throw an exception object, we need to use the throw-expression:

throw Exception("msg")

Exceptions are handled using a try…catch block (the finally block is optional):

try {
    // code that may throw an exception
}
catch (e: SomeException) {
    // handle the exception
}
finally {
    // optional cleanup that always executes
}

10. Lambdas

In Kotlin, we can define lambda functions and pass them as arguments to other functions.

Let’s see how to define a simple lambda:

val sumLambda = { a: Int, b: Int -> a + b }

We defined a sumLambda function that takes two arguments of type Int and returns an Int.

We could pass a lambda around:

@Test
fun givenListOfNumber_whenDoingOperationsUsingLambda_shouldReturnProperResult() {
    // given
    val listOfNumbers = listOf(1, 2, 3)

    // when
    val sum = listOfNumbers.reduce { a, b -> a + b }

    // then
    assertEquals(6, sum)
}

11. Looping Constructs

In Kotlin, looping through collections can be done using the standard for..in construct:

val numbers = arrayOf("first", "second", "third", "fourth")
for (n in numbers) {
    println(n)
}

If we want to iterate over a range of integers, we can use a range construct:

for (i in 2..9 step 2) {
    println(i)
}

Note that the range in the example above is inclusive on both sides. The step parameter is optional; here it is equivalent to incrementing the counter by two in each iteration. The output will be the following:

2
4
6
8

We can also use the rangeTo() function that is defined on the Int class:

1.rangeTo(10).map{ it * 2 }

The result will contain (note that rangeTo() is also inclusive):

[2, 4, 6, 8, 10, 12, 14, 16, 18, 20]

12. Null Safety

Let’s look at one of the key features of Kotlin – null safety, which is built into the language. To illustrate why this is useful, we will create a simple service that returns an Item object:

class ItemService {
    fun findItemNameForId(id: String): Item? {
        val itemId = UUID.randomUUID().toString()
        return Item(itemId, "name-$itemId")
    }
}

The important thing to notice is the return type of the method: Item followed by a question mark. This is a Kotlin language construct meaning that the Item returned from this method could be null, and we need to decide at compile time what to do with that object (it is more or less equivalent to Java 8’s Optional&lt;T&gt; type).

If the method signature has type without question mark:

fun findItemNameForId(id: String): Item

then the calling code will not need to handle the null case, because the compiler and the Kotlin language guarantee that the returned object cannot be null.

Otherwise, if there is a nullable object passed to a method, and that case is not handled, it will not compile.

Let’s write a test case for Kotlin type-safety:

val id = "item_id"
val itemService = ItemService()

val result = itemService.findItemNameForId(id)

assertNotNull(result?.let { it.id })
assertNotNull(result!!.id)

We see here that after executing findItemNameForId(), the returned value has a nullable type. To access a field of that object (id), we need to handle the null case at compile time. The let() block will execute only if result is non-null, so the id field can be accessed safely inside the lambda.

Another way to access a field of that nullable object is to use the Kotlin !! operator. It is equivalent to:

if (result == null){
    throwNpe(); 
}
return result;

Kotlin checks whether the object is null; if so, it throws a NullPointerException, otherwise it returns the proper object. The throwNpe() function is a Kotlin internal function.

13. Data Classes

A very nice language construct found in Kotlin is the data class (equivalent to a case class in Scala). The purpose of such classes is to hold only data. In our example, we had an Item class that only holds data:

data class Item(val id: String, val name: String)

The compiler will create the hashCode(), equals(), and toString() methods for us, as well as copy() and the componentN() functions. It is good practice to make data classes immutable by using the val keyword. Data classes can have default field values:

data class Item(val id: String, val name: String = "unknown_name")

We see that the name field has a default value of “unknown_name”.

14. Extension Functions

Suppose that we have a class that is part of a 3rd party library, but we want to extend it with an additional method. Kotlin allows us to do this by using extension functions.

Let’s consider an example in which we have a list of elements and we want to pick a random element from that list. We want to add a new random() function to the 3rd party List class.

Here’s how it looks in Kotlin:

fun <T> List<T>.random(): T? {
    if (this.isEmpty()) return null
    return get(ThreadLocalRandom.current().nextInt(count()))
}

The most important thing to notice here is the signature of the function: it is prefixed with the name of the class that we are adding the extra method to.

Inside the extension function, we operate in the scope of the list, so using this gives us access to list instance methods like isEmpty() or count(). We are then able to call the random() method on any list that is in scope:

fun <T> getRandomElementOfList(list: List<T>): T? {
    return list.random()
}

We created a method that takes a list and then executes the custom extension function random() that was previously defined. Let’s write a test case for our new function:

val elements = listOf("a", "b", "c")

val result = ListExtension().getRandomElementOfList(elements)

assertTrue(elements.contains(result))

The possibility of defining functions that “extend” 3rd party classes is a very powerful feature and can make our code more concise and readable.

15. String Templates

A very nice feature of the Kotlin language is the possibility to use templates for Strings. This is very useful because we do not need to concatenate Strings manually:

val firstName = "Tom"
val secondName = "Mary"
val concatOfNames = "$firstName + $secondName"
val sum = "four: ${2 + 2}"

We can also evaluate an expression inside the ${} block:

val itemManager = ItemManager("cat_id", "db://connection")
val result = "function result: ${itemManager.isFromSpecificCategory("1")}"

16. Kotlin/Java Interoperability

Kotlin – Java interoperability is seamless. Let’s suppose that we have a Java class with a method that operates on a String:

class StringUtils {
    public static String toUpperCase(String name) {
        return name.toUpperCase();
    }
}

Now we want to execute that code from our Kotlin class. We only need to import the class, and we can execute the Java method from Kotlin without any problems:

val name = "tom"

val res = StringUtils.toUpperCase(name)

assertEquals(res, "TOM")

As we see, we used a Java method from Kotlin code.

Calling Kotlin code from Java is also very easy. Let’s define a simple Kotlin function:

class MathematicsOperations {
    fun addTwoNumbers(a: Int, b: Int): Int {
        return a + b
    }
}

Executing addTwoNumbers() from Java code is very easy:

int res = new MathematicsOperations().addTwoNumbers(2, 4);

assertEquals(6, res);

We see that the call to the Kotlin code was transparent to us.

When a method defined in Java has the void return type, its returned value seen from Kotlin will be of the Unit type.

Some Kotlin keywords (is, object, in, ..) are valid identifiers in Java, so when we want to use them as names in Kotlin code, they need to be escaped with backticks. For example, we can define a method named object(), but we need to remember to escape that name, as object is a keyword in Kotlin:

fun `object`(): String {
    return "this is object"
}

Then we could execute that method:

`object`()

17. Conclusion

This article is an introduction to the Kotlin language and its key features. It starts with simple concepts like loops, conditional statements, and defining classes, then shows more advanced features like extension functions and null safety.

The implementation of all these examples and code snippets can be found in the GitHub project – this is a Maven project, so it should be easy to import and run as it is.

New Stream Collectors in Java 9


1. Overview

Collectors were added in Java 8 to help accumulate input elements into mutable containers such as Map, List, and Set.

In this article, we’re going to explore two new collectors added in Java 9: Collectors.filtering and Collectors.flatMapping, used in combination with Collectors.groupingBy to provide more intelligent grouping of elements.

2. Filtering Collector

The Collectors.filtering is similar to the Stream filter(); it’s used for filtering input elements, but is intended for different scenarios. The Stream’s filter is used in the stream chain, whereas filtering is a Collector which was designed to be used along with groupingBy.

With Stream’s filter, the values are filtered first and then grouped. That way, the filtered-out values are gone and leave no trace. If we need a trace, we need to group first and then apply the filtering, which is exactly what Collectors.filtering does.

The Collectors.filtering takes a function for filtering the input elements and a collector to collect the filtered elements:

@Test
public void givenList_whenSatisfyPredicate_thenMapValueWithOccurrences() {
    List<Integer> numbers = List.of(1, 2, 3, 5, 5);

    Map<Integer, Long> result = numbers.stream()
      .filter(val -> val > 3)
      .collect(Collectors.groupingBy(i -> i, Collectors.counting()));

    assertEquals(1, result.size());

    result = numbers.stream()
      .collect(Collectors.groupingBy(i -> i,
        Collectors.filtering(val -> val > 3, Collectors.counting())));

    assertEquals(4, result.size());
}

3. FlatMapping Collector

The Collectors.flatMapping is similar to Collectors.mapping, but has a more fine-grained objective. Both collectors take a function and a collector into which the elements are collected, but the flatMapping function accepts a Stream of elements, which is then accumulated by the collector.

Let’s see the following model class:

class Blog {
    private String authorName;
    private List<String> comments;
      
    // constructor and getters
}

Collectors.flatMapping lets us skip intermediate collection and write directly to a single container which is mapped to that group defined by the Collectors.groupingBy:

@Test
public void givenListOfBlogs_whenAuthorName_thenMapAuthorWithComments() {
    Blog blog1 = new Blog("1", "Nice", "Very Nice");
    Blog blog2 = new Blog("2", "Disappointing", "Ok", "Could be better");
    List<Blog> blogs = List.of(blog1, blog2);
        
    Map<String, List<List<String>>> authorComments1 = blogs.stream()
     .collect(Collectors.groupingBy(Blog::getAuthorName, 
       Collectors.mapping(Blog::getComments, Collectors.toList())));
       
    assertEquals(2, authorComments1.size());
    assertEquals(2, authorComments1.get("1").get(0).size());
    assertEquals(3, authorComments1.get("2").get(0).size());

    Map<String, List<String>> authorComments2 = blogs.stream()
      .collect(Collectors.groupingBy(Blog::getAuthorName, 
        Collectors.flatMapping(blog -> blog.getComments().stream(), 
        Collectors.toList())));

    assertEquals(2, authorComments2.size());
    assertEquals(2, authorComments2.get("1").size());
    assertEquals(3, authorComments2.get("2").size());
}

The Collectors.mapping maps each grouped author’s comment list into the collector’s container, i.e. a List, whereas flatMapping removes this intermediate collection, as it gives a direct stream of the comment lists to be mapped to the collector’s container.

4. Conclusion

This article illustrates the use of the new Collectors introduced in Java 9, i.e. Collectors.filtering() and Collectors.flatMapping(), used in combination with Collectors.groupingBy().

These Collectors can also be used along with Collectors.partitioningBy(), but that only creates two partitions based on a predicate, so the real power of the collectors isn’t leveraged; hence it’s left out of this tutorial.

The complete source code for the code snippets in this tutorial is available over on GitHub.


Spring Data MongoDB: Projections and Aggregations


1. Overview

Spring Data MongoDB provides simple high-level abstractions over the MongoDB native query language. In this article, we will explore the support for the Projections and Aggregation framework.

If you’re new to this topic, refer to our introductory article Introduction to Spring Data MongoDB.

2. Projection

In MongoDB, Projections are a way to fetch only the required fields of a document from a database. This reduces the amount of data that has to be transferred from database server to client and hence increases performance.

With Spring Data MongoDB, projections can be used with both MongoTemplate and MongoRepository.

Before we move further, let’s look at the data model we will be using:

@Document
public class User {
    @Id
    private String id;
    private String name;
    private Integer age;
    
    // standard getters and setters
}

2.1. Projections Using MongoTemplate

The include() and exclude() methods on the Field class are used to include and exclude fields respectively:

Query query = new Query();
query.fields().include("name").exclude("id");
List<User> john = mongoTemplate.find(query, User.class);

These methods can be chained together to include or exclude multiple fields. The field marked as @Id (_id in the database) is always fetched unless explicitly excluded.
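For instance, here is a minimal sketch of chaining several fields in one projection (it simply reuses the User model and the mongoTemplate shown above):

Query query = new Query();
// include name and age, drop the identifier from the result
query.fields().include("name").include("age").exclude("id");
List<User> users = mongoTemplate.find(query, User.class);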

Excluded fields are null in the model class instance when records are fetched with a projection. Where a field is of a primitive type or its wrapper class, the value of the excluded field is the default value of the primitive type.

For example, String would be null, int/Integer would be 0, and boolean/Boolean would be false.

Thus, in the first example above, the name field would be John, id would be null, and age would be 0.

2.2. Projections Using MongoRepository

While using MongoRepositories, the fields of @Query annotation can be defined in JSON format:

@Query(value="{}", fields="{name : 1, _id : 0}")
List<User> findNameAndExcludeId();

The result would be the same as when using the MongoTemplate. The value=”{}” denotes no filters, and hence all the documents will be fetched.

3. Aggregation

Aggregation in MongoDB was built to process data and return computed results. Data is processed in stages and the output of one stage is provided as input to the next stage. This ability to apply transformations and do computations on data in stages makes aggregation a very powerful tool for analytics.

Spring Data MongoDB provides an abstraction for native aggregation queries using the three classes Aggregation which wraps an aggregation query, AggregationOperation which wraps individual pipeline stages and AggregationResults which is the container of the result produced by aggregation.

To perform an aggregation, first create the aggregation pipeline stages using the static builder methods on the Aggregation class, then create an instance of Aggregation using the newAggregation() method on the Aggregation class, and finally run the aggregation using MongoTemplate:

MatchOperation matchStage = Aggregation.match(new Criteria("foo").is("bar"));
ProjectionOperation projectStage = Aggregation.project("foo", "bar.baz");
        
Aggregation aggregation 
  = Aggregation.newAggregation(matchStage, projectStage);

AggregationResults<OutType> output 
  = mongoTemplate.aggregate(aggregation, "foobar", OutType.class);

Please note that both MatchOperation and ProjectionOperation implement AggregationOperation. There are similar implementations for the other aggregation pipeline stages. OutType is the data model for the expected output.

Now, we will look at a few examples and their explanations to cover the major aggregation pipelines and operators.

The dataset which we will be using in this article lists details about all the zip codes in the US; it can be downloaded from the MongoDB repository.

Let’s look at a sample document after importing it into a collection called zips in the test database.

{
    "_id" : "01001",
    "city" : "AGAWAM",
    "loc" : [
        -72.622739,
        42.070206
    ],
    "pop" : 15338,
    "state" : "MA"
}

For the sake of simplicity and to make code concise, in the next code snippets, we will assume that all the static methods of Aggregation class are statically imported.
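In other words, the snippets below assume a static import along these lines:

import static org.springframework.data.mongodb.core.aggregation.Aggregation.*;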

3.1. Get All the States With a Population Greater Than 10 Million, Ordered by Population Descending

Here we will have three pipeline stages:

  1. $group stage summing up the population of all zip codes for each state
  2. $match stage to keep only the states with a total population over 10 million
  3. $sort stage to sort all the documents in descending order of population

The expected output will have a field _id as state and a field statePop with the total state population. Let’s create a data model for this and run the aggregation:

public class StatePopulation {
 
    @Id
    private String state;
    private Integer statePop;
 
    // standard getters and setters
}

The @Id annotation will map the _id field from output to state in the model:

GroupOperation groupByStateAndSumPop = group("state")
  .sum("pop").as("statePop");
MatchOperation filterStates = match(new Criteria("statePop").gt(10000000));
SortOperation sortByPopDesc = sort(new Sort(Direction.DESC, "statePop"));

Aggregation aggregation = newAggregation(
  groupByStateAndSumPop, filterStates, sortByPopDesc);
AggregationResults<StatePopulation> result = mongoTemplate.aggregate(
  aggregation, "zips", StatePopulation.class);

The AggregationResults class implements Iterable and hence we can iterate over it and print the results.
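For instance, a minimal sketch that prints each mapped result (assuming the standard getters on the StatePopulation model above):

for (StatePopulation statePopulation : result) {
    // each element is already mapped to the output model
    System.out.println(
      statePopulation.getState() + ": " + statePopulation.getStatePop());
}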

If the output data model is not known, the standard MongoDB classes Document or DBObject can be used.

3.2. Get Smallest State by Average City Population

For this problem, we will need four stages:

  1. $group to sum the total population of each city
  2. $group to calculate average population of each state
  3. $sort stage to order states by their average city population in ascending order
  4. $limit to get the first state with lowest average city population

Although it’s not strictly required, we will use an additional $project stage to reformat the document as per our StatePopulation data model:

GroupOperation sumTotalCityPop = group("state", "city")
  .sum("pop").as("cityPop");
GroupOperation averageStatePop = group("_id.state")
  .avg("cityPop").as("avgCityPop");
SortOperation sortByAvgPopAsc = sort(new Sort(Direction.ASC, "avgCityPop"));
LimitOperation limitToOnlyFirstDoc = limit(1);
ProjectionOperation projectToMatchModel = project()
  .andExpression("_id").as("state")
  .andExpression("avgCityPop").as("statePop");

Aggregation aggregation = newAggregation(
  sumTotalCityPop, averageStatePop, sortByAvgPopAsc,
  limitToOnlyFirstDoc, projectToMatchModel);

AggregationResults<StatePopulation> result = mongoTemplate
  .aggregate(aggregation, "zips", StatePopulation.class);
StatePopulation smallestState = result.getUniqueMappedResult();

In this example, we already know that there will be only one document in the result since we limit the number of output documents to 1 in the last stage. As such, we can invoke getUniqueMappedResult() to get the required StatePopulation instance.

Another thing to notice is that, instead of relying on the @Id annotation to map _id to state, we have explicitly done it in projection stage.

3.3. Get the State with Maximum and Minimum Zip Codes

For this example, we need three stages:

  1. $group to count the number of zip codes for each state
  2. $sort to order the states by the number of zip codes
  3. $group to find the state with max and min zip codes using $first and $last operators

GroupOperation sumZips = group("state").count().as("zipCount");
SortOperation sortByCount = sort(Direction.ASC, "zipCount");
GroupOperation groupFirstAndLast = group().first("_id").as("minZipState")
  .first("zipCount").as("minZipCount").last("_id").as("maxZipState")
  .last("zipCount").as("maxZipCount");

Aggregation aggregation = newAggregation(sumZips, sortByCount, groupFirstAndLast);

AggregationResults<DBObject> result = mongoTemplate
  .aggregate(aggregation, "zips", DBObject.class);
DBObject dbObject = result.getUniqueMappedResult();

Here we have not used any model, but used the DBObject already provided with the MongoDB driver.

4. Conclusion

In this article, we learned how to fetch specified fields of a document in MongoDB using projections in Spring Data MongoDB.

We also learned about the MongoDB aggregation framework support in Spring Data. We covered major aggregation phases – group, project, sort, limit, and match and looked at some examples of its practical applications. The complete source code is available over on GitHub.

MaxUploadSizeExceededException in Spring


1. Overview

In the Spring framework, a MaxUploadSizeExceededException is thrown when an application attempts to upload a file whose size exceeds a certain threshold as specified in the configuration.

In this tutorial, we will take a look at how to specify a maximum upload size. Then we will show a simple file upload controller and discuss different methods for handling this exception.

2. Setting a Maximum Upload Size

By default, there is no limit on the size of files that can be uploaded. In order to set a maximum upload size, you have to declare a bean of type MultipartResolver.

Let’s see an example that limits the file size to 5 MB:

@Bean
public MultipartResolver multipartResolver() {
    CommonsMultipartResolver multipartResolver
      = new CommonsMultipartResolver();
    multipartResolver.setMaxUploadSize(5242880); // 5 MB = 5 * 1024 * 1024 bytes
    return multipartResolver;
}

3. File Upload Controller

Next, let’s define a controller method that handles the uploading and saving to the server of a file:

@RequestMapping(value = "/uploadFile", method = RequestMethod.POST)
public ModelAndView uploadFile(MultipartFile file) throws IOException {
 
    ModelAndView modelAndView = new ModelAndView("file");
    InputStream in = file.getInputStream();
    File currDir = new File(".");
    String path = currDir.getAbsolutePath();
    FileOutputStream f = new FileOutputStream(
      path.substring(0, path.length()-1)+ file.getOriginalFilename());
    int ch = 0;
    while ((ch = in.read()) != -1) {
        f.write(ch);
    }
    
    f.flush();
    f.close();
    
    modelAndView.getModel().put("message", "File uploaded successfully!");
    return modelAndView;
}

If the user attempts to upload a file with a size greater than 5 MB, the application will throw an exception of type MaxUploadSizeExceededException.

4. Handling MaxUploadSizeExceededException

In order to handle this exception, we can have our controller implement the interface HandlerExceptionResolver, or we can create a @ControllerAdvice annotated class.

4.1. Implementing HandlerExceptionResolver

The HandlerExceptionResolver interface declares a method called resolveException() where exceptions of different types can be handled.

Let’s override the resolveException() method to display a message in case the exception caught is of type MaxUploadSizeExceededException:

@Override
public ModelAndView resolveException(
  HttpServletRequest request,
  HttpServletResponse response, 
  Object object,
  Exception exc) {   
     
    ModelAndView modelAndView = new ModelAndView("file");
    if (exc instanceof MaxUploadSizeExceededException) {
        modelAndView.getModel().put("message", "File size exceeds limit!");
    }
    return modelAndView;
}

4.2. Creating a Controller Advice Interceptor

There are a couple of advantages of handling the exception through an interceptor rather than in the controller itself. One is that we can apply the same exception handling logic to multiple controllers.

Another is that we can create a method that targets only the exception we want to handle, allowing the framework to delegate the exception handling without our having to use instanceof to check what type of exception was thrown:

@ControllerAdvice
public class FileUploadExceptionAdvice {
     
    @ExceptionHandler(MaxUploadSizeExceededException.class)
    public ModelAndView handleMaxSizeException(
      MaxUploadSizeExceededException exc, 
      HttpServletRequest request,
      HttpServletResponse response) {
 
        ModelAndView modelAndView = new ModelAndView("file");
        modelAndView.getModel().put("message", "File too large!");
        return modelAndView;
    }
}

5. Tomcat Configuration

If you are deploying to Tomcat server version 7 and above, there is a configuration property called maxSwallowSize that you may have to set or change.

This property specifies the maximum number of bytes that Tomcat will “swallow” for an upload from the client when it knows the server will ignore the file.

The default value of the property is 2097152 (2 MB). If left unchanged or if set below the 5 MB limit that we set in our MultipartResolver, Tomcat will reject any attempt to upload a file over 2 MB, and our custom exception handling will never be invoked.

In order for the request to be successful and for the error message from the application to be displayed, you need to set maxSwallowSize property to a negative value. This instructs Tomcat to swallow all failed uploads regardless of file size.

This is done in the TOMCAT_HOME/conf/server.xml file:

<Connector port="8080" protocol="HTTP/1.1"
  connectionTimeout="20000"
  redirectPort="8443" 
  maxSwallowSize = "-1"/>

6. Conclusion

In this article, we have demonstrated how to configure a maximum file upload size in Spring and how to handle the MaxUploadSizeExceededException that results when a client attempts to upload a file exceeding this size limit.

The full source code for this article can be found in the GitHub project.

Guide to java.util.concurrent.BlockingQueue


1. Overview

In this article, we will look at one of the most useful constructs in java.util.concurrent for solving the concurrent producer-consumer problem. We’ll look at the API of the BlockingQueue interface and how methods from that interface make writing concurrent programs easier.

Later in the article, we will show an example of a simple program that has multiple producer threads and multiple consumer threads.

2. BlockingQueue Types

We can distinguish two types of BlockingQueue:

  • unbounded queue – can grow almost indefinitely
  • bounded queue – with maximal capacity defined

2.1. Unbounded Queue

Creating unbounded queues is simple:

BlockingQueue<String> blockingQueue = new LinkedBlockingDeque<>();

The capacity of blockingQueue will be set to Integer.MAX_VALUE. All operations that add an element to the unbounded queue will never block, so the queue can grow to a very large size.

The most important thing when designing a producer-consumer program using an unbounded BlockingQueue is that consumers should be able to consume messages as quickly as producers add them to the queue. Otherwise, the memory could fill up and we would get an OutOfMemoryError.

2.2. Bounded Queue

The second type of queues is the bounded queue. We can create such queues by passing the capacity as an argument to a constructor:

BlockingQueue<String> blockingQueue = new LinkedBlockingDeque<>(10);

Here we have a blockingQueue that has a capacity equal to 10. It means that when a producer tries to add an element to an already full queue then, depending on the method used to add it, it will either block until space becomes available (put()), return false (offer()), or throw an exception (add()).

Using a bounded queue is a good way to design concurrent programs, because when we insert an element into an already full queue, that operation needs to wait until consumers catch up and make some space available in the queue. This gives us throttling without any effort on our part.

3. BlockingQueue API

There are two types of methods in the BlockingQueue interface – methods responsible for adding elements to a queue and methods that retrieve those elements. Each method from those two groups behaves differently in case the queue is full/empty.

3.1. Adding Elements

  • add() – returns true if insertion was successful, otherwise throws an IllegalStateException
  • put() – inserts the specified element into a queue, waiting for a free slot if necessary
  • offer() – returns true if insertion was successful, otherwise false
  • offer(E e, long timeout, TimeUnit unit) – tries to insert element into a queue and waits for an available slot within a specified timeout

3.2. Retrieving Elements

  • take() – waits for a head element of a queue and removes it. If the queue is empty, it blocks and waits for an element to become available
  • poll(long timeout, TimeUnit unit) – retrieves and removes the head of the queue, waiting up to the specified wait time if necessary for an element to become available; returns null after a timeout

These methods are the most important building blocks from BlockingQueue interface when building producer-consumer programs.
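As a quick, hedged illustration of the timed variants on a bounded queue (the values and timeouts here are made up for the example, and both timed calls declare InterruptedException):

BlockingQueue<String> queue = new LinkedBlockingQueue<>(1);

queue.add("first"); // succeeds; the queue is now full

// offer() with a timeout gives up and returns false once the wait time elapses
boolean added = queue.offer("second", 100, TimeUnit.MILLISECONDS); // false

// poll() with a timeout returns the head, or null if nothing arrives in time
String head = queue.poll(100, TimeUnit.MILLISECONDS); // "first"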

4. Multithreaded Producer-Consumer Example

Let’s create a program that consists of two parts – a Producer and a Consumer.

The Producer will be producing random numbers from 0 to 99 and will put each number in a BlockingQueue. We’ll have 4 producer threads and use the put() method to block until there’s space available in the queue.

The important thing to remember is that we need to stop our consumer threads from waiting for an element to appear in a queue indefinitely.

A good technique to signal from the producers to the consumers that there are no more messages to process is to send a special message called a poison pill. We need to send as many poison pills as we have consumers. Then, when a consumer takes that special poison pill message from the queue, it will finish execution gracefully.

Let’s look at a producer class:

public class NumbersProducer implements Runnable {
    private BlockingQueue<Integer> numbersQueue;
    private final int poisonPill;
    private final int poisonPillPerProducer;
    
    public NumbersProducer(BlockingQueue<Integer> numbersQueue, int poisonPill, int poisonPillPerProducer) {
        this.numbersQueue = numbersQueue;
        this.poisonPill = poisonPill;
        this.poisonPillPerProducer = poisonPillPerProducer;
    }
    public void run() {
        try {
            generateNumbers();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
    
    private void generateNumbers() throws InterruptedException {
        for (int i = 0; i < 100; i++) {
            numbersQueue.put(ThreadLocalRandom.current().nextInt(100));
        }
        for (int j = 0; j < poisonPillPerProducer; j++) {
            numbersQueue.put(poisonPill);
        }
     }
}

Our producer constructor takes as an argument the BlockingQueue that is used to coordinate processing between the producer and the consumer. We see that the method generateNumbers() will put 100 elements in the queue. It also takes the poison pill message, so it knows what message to put into the queue when execution is finished. That message needs to be put into the queue poisonPillPerProducer times.

Each consumer will take an element from the BlockingQueue using the take() method, so it will block until there is an element in the queue. After taking an Integer from the queue, it checks whether the message is a poison pill; if yes, the execution of the thread is finished. Otherwise, it will print out the result on the standard output along with the current thread’s name.

This will give us insight into inner workings of our consumers:

public class NumbersConsumer implements Runnable {
    private BlockingQueue<Integer> queue;
    private final int poisonPill;
    
    public NumbersConsumer(BlockingQueue<Integer> queue, int poisonPill) {
        this.queue = queue;
        this.poisonPill = poisonPill;
    }
    public void run() {
        try {
            while (true) {
                Integer number = queue.take();
                if (number.equals(poisonPill)) {
                    return;
                }
                System.out.println(Thread.currentThread().getName() + " result: " + number);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

The important thing to notice is the usage of a queue. Same as in the producer constructor, a queue is passed as an argument. We can do it because BlockingQueue can be shared between threads without any explicit synchronization.

Now that we have our producer and consumer, we can start our program. We need to define the queue’s capacity, and we set it to 10 elements.

We want to have 4 producer threads, and the number of consumer threads will be equal to the number of available processors:

int BOUND = 10;
int N_PRODUCERS = 4;
int N_CONSUMERS = Runtime.getRuntime().availableProcessors();
int poisonPill = Integer.MAX_VALUE;
int poisonPillPerProducer = N_CONSUMERS / N_PRODUCERS;

BlockingQueue<Integer> queue = new LinkedBlockingQueue<>(BOUND);

for (int i = 0; i < N_PRODUCERS; i++) {
    new Thread(new NumbersProducer(queue, poisonPill, poisonPillPerProducer)).start();
}

for (int j = 0; j < N_CONSUMERS; j++) {
    new Thread(new NumbersConsumer(queue, poisonPill)).start();
}

The BlockingQueue is created using the constructor with a capacity. We’re creating 4 producers and N consumers. We specify our poison pill message to be Integer.MAX_VALUE, because such a value will never be sent by our producers under normal working conditions. The most important thing to notice here is that the BlockingQueue is used to coordinate work between them.

When we run the program, 4 producer threads will be putting random Integers in a BlockingQueue and consumers will be taking those elements from the queue. Each thread will print to standard output the name of the thread together with a result.

5. Conclusion

This article shows a practical use of BlockingQueue and explains methods that are used to add and retrieve elements from it. Also, we’ve shown how to build a multithreaded producer-consumer program using BlockingQueue to coordinate work between producers and consumers.

The implementation of all these examples and code snippets can be found in the GitHub project – this is a Maven-based project, so it should be easy to import and run as it is.

Intro to Dropwizard Metrics


1. Introduction

Metrics is a Java library which provides measuring instruments for Java applications.

It has several modules, and in this article, we will elaborate on the metrics-core, metrics-healthchecks, metrics-servlets, and metrics-servlet modules, and sketch out the rest for your reference.

2. Module metrics-core

2.1. Maven Dependencies

To use the metrics-core module, there’s only one dependency required which needs to be added to the pom.xml file:

<dependency>
    <groupId>io.dropwizard.metrics</groupId>
    <artifactId>metrics-core</artifactId>
    <version>3.1.2</version>
</dependency>

And you can find its latest version here.

2.2. MetricRegistry

Simply put, we’ll use the MetricRegistry class to register one or several metrics.

We can use one metrics registry for all of our metrics, but if we want to use different reporting methods for different metrics, we can also divide our metrics into groups and use different metrics registries for each group.

Let’s create a MetricRegistry now:

MetricRegistry metricRegistry = new MetricRegistry();

And then we can register some metrics with this MetricRegistry:

Meter meter1 = new Meter();
metricRegistry.register("meter1", meter1);

Meter meter2 = metricRegistry.meter("meter2");

There are two basic ways of creating a new metric: instantiating one yourself or obtaining one from the metric registry. As you can see, we used both of them in the example above: we instantiate the Meter object “meter1” ourselves, and we obtain another Meter object “meter2”, which is created by the metricRegistry.

In a metric registry, every metric has a unique name, as we used “meter1” and “meter2” as metric names above. MetricRegistry also provides a set of static helper methods to help us create proper metric names:

String name1 = MetricRegistry.name(Filter.class, "request", "count");
String name2 = MetricRegistry.name("CustomFilter", "response", "count");

If we need to manage a set of metric registries, we can use SharedMetricRegistries class, which is a singleton and thread-safe. We can add a metric register into it, retrieve this metric register from it, and remove it:

SharedMetricRegistries.add("default", metricRegistry);
MetricRegistry retrievedMetricRegistry = SharedMetricRegistries.getOrCreate("default");
SharedMetricRegistries.remove("default");

3. Metrics Concepts

The metrics-core module provides several commonly used metric types: Meter, Gauge, Counter, Histogram and Timer, and Reporter to output metrics’ values.

3.1. Meter

A Meter measures event occurrences count and rate:

Meter meter = new Meter();
long initCount = meter.getCount();
assertThat(initCount, equalTo(0L));

meter.mark();
assertThat(meter.getCount(), equalTo(1L));

meter.mark(20);
assertThat(meter.getCount(), equalTo(21L));

double meanRate = meter.getMeanRate();
double oneMinRate = meter.getOneMinuteRate();
double fiveMinRate = meter.getFiveMinuteRate();
double fifteenMinRate = meter.getFifteenMinuteRate(); 

The getCount() method returns the event occurrence count, and the mark() method adds 1 or n to it. The Meter object provides four rates which represent average rates for the whole Meter lifetime, for the recent one minute, for the recent five minutes, and for the recent fifteen minutes, respectively.

3.2. Gauge

Gauge is an interface which is simply used to return a particular value. The metrics-core module provides several implementations of it: RatioGauge, CachedGauge, DerivativeGauge and JmxAttributeGauge.

RatioGauge is an abstract class and it measures the ratio of one value to another.

Let’s see how to use it. First, we implement a class AttendanceRatioGauge:

public class AttendanceRatioGauge extends RatioGauge {
    private int attendanceCount;
    private int courseCount;

    @Override
    protected Ratio getRatio() {
        return Ratio.of(attendanceCount, courseCount);
    }
    
    // standard constructors
}

And then we test it:

RatioGauge ratioGauge = new AttendanceRatioGauge(15, 20);

assertThat(ratioGauge.getValue(), equalTo(0.75));

CachedGauge is another abstract class which can cache a value; it is therefore quite useful when the value is expensive to calculate. To use it, we need to implement a class ActiveUsersGauge:

public class ActiveUsersGauge extends CachedGauge<List<Long>> {
    
    @Override
    protected List<Long> loadValue() {
        return getActiveUserCount();
    }
 
    private List<Long> getActiveUserCount() {
        List<Long> result = new ArrayList<Long>();
        result.add(12L);
        return result;
    }

    // standard constructors
}

Then we test it to see if it works as expected:

Gauge<List<Long>> activeUsersGauge = new ActiveUsersGauge(15, TimeUnit.MINUTES);
List<Long> expected = new ArrayList<>();
expected.add(12L);

assertThat(activeUsersGauge.getValue(), equalTo(expected));

We set the cache’s expiration time to 15 minutes when instantiating the ActiveUsersGauge.

DerivativeGauge is also an abstract class, and it allows you to derive a value from another Gauge.

Let’s look at an example:

public class ActiveUserCountGauge extends DerivativeGauge<List<Long>, Integer> {
    
    @Override
    protected Integer transform(List<Long> value) {
        return value.size();
    }

    // standard constructors
}

This Gauge derives its value from an ActiveUsersGauge, so we expect it to be the size of the base list:

Gauge<List<Long>> activeUsersGauge = new ActiveUsersGauge(15, TimeUnit.MINUTES);
Gauge<Integer> activeUserCountGauge = new ActiveUserCountGauge(activeUsersGauge);

assertThat(activeUserCountGauge.getValue(), equalTo(1));

JmxAttributeGauge is used when we need to access other libraries’ metrics exposed via JMX.
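For instance, here is a minimal sketch that reads an attribute of the JVM’s standard platform memory MBean (the MBean name belongs to the JVM itself, not to this article’s code; note that creating the ObjectName declares a checked MalformedObjectNameException):

// reads the HeapMemoryUsage attribute of the platform memory MBean
JmxAttributeGauge heapUsage = new JmxAttributeGauge(
  new ObjectName("java.lang:type=Memory"), "HeapMemoryUsage");

Object value = heapUsage.getValue();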

3.3. Counter

The Counter is used for recording increments and decrements:

Counter counter = new Counter();
long initCount = counter.getCount();
assertThat(initCount, equalTo(0L));

counter.inc();
assertThat(counter.getCount(), equalTo(1L));

counter.inc(11);
assertThat(counter.getCount(), equalTo(12L));

counter.dec();
assertThat(counter.getCount(), equalTo(11L));

counter.dec(6);
assertThat(counter.getCount(), equalTo(5L));

3.4. Histogram

Histogram is used for keeping track of a stream of Long values and it analyzes their statistical characteristics such as max, min, mean, median, standard deviation, 75th percentile and so on:

Histogram histogram = new Histogram(new UniformReservoir());
histogram.update(5);
long count1 = histogram.getCount();
assertThat(count1, equalTo(1L));

Snapshot snapshot1 = histogram.getSnapshot();
assertThat(snapshot1.getValues().length, equalTo(1));
assertThat(snapshot1.getValues()[0], equalTo(5L));

histogram.update(20);
long count2 = histogram.getCount();
assertThat(count2, equalTo(2L));

Snapshot snapshot2 = histogram.getSnapshot();
assertThat(snapshot2.getValues().length, equalTo(2));
assertThat(snapshot2.getValues()[1], equalTo(20L));
assertThat(snapshot2.getMax(), equalTo(20L));
assertThat(snapshot2.getMean(), equalTo(12.5));
assertEquals(10.6, snapshot2.getStdDev(), 0.1);
assertThat(snapshot2.get75thPercentile(), equalTo(20.0));
assertThat(snapshot2.get999thPercentile(), equalTo(20.0));

Histogram samples the data by using reservoir sampling, and when we instantiate a Histogram object, we need to set its reservoir explicitly.

Reservoir is an interface, and metrics-core provides four implementations of it: ExponentiallyDecayingReservoir, UniformReservoir, SlidingTimeWindowReservoir, and SlidingWindowReservoir.

In the section above, we mentioned that a metric can also be created by MetricRegistry, besides using a constructor. When we use metricRegistry.histogram(), it returns a Histogram instance with ExponentiallyDecayingReservoir implementation.
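As a hedged sketch of the difference between the two creation styles (the metric names here are made up for the example):

MetricRegistry metricRegistry = new MetricRegistry();

// obtained from the registry: backed by an ExponentiallyDecayingReservoir
Histogram defaultHistogram = metricRegistry.histogram("m03-histogram");

// instantiated directly: the reservoir is chosen explicitly
Histogram slidingHistogram = new Histogram(new SlidingWindowReservoir(100));
metricRegistry.register("m04-histogram", slidingHistogram);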

3.5. Timer

Timer is used for keeping track of multiple timing durations which are represented by Context objects, and it also provides their statistical data:

Timer timer = new Timer();
Timer.Context context1 = timer.time();
TimeUnit.SECONDS.sleep(5);
long elapsed1 = context1.stop();

assertEquals(5000000000L, elapsed1, 1000000);
assertThat(timer.getCount(), equalTo(1L));
assertEquals(0.2, timer.getMeanRate(), 0.1);

Timer.Context context2 = timer.time();
TimeUnit.SECONDS.sleep(2);
context2.close();

assertThat(timer.getCount(), equalTo(2L));
assertEquals(0.3, timer.getMeanRate(), 0.1);

3.6. Reporter

When we need to output our measurements, we can use Reporter. This is an interface, and the metrics-core module provides several implementations of it, such as ConsoleReporter, CsvReporter, Slf4jReporter, JmxReporter and so on.

Here we use ConsoleReporter as an example:

MetricRegistry metricRegistry = new MetricRegistry();

Meter meter = metricRegistry.meter("meter");
meter.mark();
meter.mark(200);
Histogram histogram = metricRegistry.histogram("histogram");
histogram.update(12);
histogram.update(17);
Counter counter = metricRegistry.counter("counter");
counter.inc();
counter.dec();

ConsoleReporter reporter = ConsoleReporter.forRegistry(metricRegistry).build();
reporter.start(5, TimeUnit.SECONDS);
reporter.report();

Here is the sample output of the ConsoleReporter:

-- Histograms ------------------------------------------------------------------
histogram
count = 2
min = 12
max = 17
mean = 14.50
stddev = 2.50
median = 17.00
75% <= 17.00
95% <= 17.00
98% <= 17.00
99% <= 17.00
99.9% <= 17.00

-- Meters ----------------------------------------------------------------------
meter
count = 201
mean rate = 1756.87 events/second
1-minute rate = 0.00 events/second
5-minute rate = 0.00 events/second
15-minute rate = 0.00 events/second

4. Module metrics-healthchecks

Metrics has an extension metrics-healthchecks module for dealing with health checks.

4.1. Maven Dependencies

To use the metrics-healthchecks module, we need to add this dependency to the pom.xml file:

<dependency>
    <groupId>io.dropwizard.metrics</groupId>
    <artifactId>metrics-healthchecks</artifactId>
    <version>3.1.2</version>
</dependency>

And you can find its latest version here.

4.2. Usage

First, we need several classes which are responsible for specific health check operations, and these classes must implement HealthCheck.

For example, we use DatabaseHealthCheck and UserCenterHealthCheck:

public class DatabaseHealthCheck extends HealthCheck {
 
    @Override
    protected Result check() throws Exception {
        return Result.healthy();
    }
}

public class UserCenterHealthCheck extends HealthCheck {
 
    @Override
    protected Result check() throws Exception {
        return Result.healthy();
    }
}

Then, we need a HealthCheckRegistry (which is just like MetricRegistry), and register the DatabaseHealthCheck and UserCenterHealthCheck with it:

HealthCheckRegistry healthCheckRegistry = new HealthCheckRegistry();
healthCheckRegistry.register("db", new DatabaseHealthCheck());
healthCheckRegistry.register("uc", new UserCenterHealthCheck());

assertThat(healthCheckRegistry.getNames().size(), equalTo(2));

We can also unregister the HealthCheck:

healthCheckRegistry.unregister("uc");
 
assertThat(healthCheckRegistry.getNames().size(), equalTo(1));

We can run all the HealthCheck instances:

Map<String, HealthCheck.Result> results = healthCheckRegistry.runHealthChecks();
for (Map.Entry<String, HealthCheck.Result> entry : results.entrySet()) {
    assertThat(entry.getValue().isHealthy(), equalTo(true));
}

Finally, we can run a specific HealthCheck instance:

healthCheckRegistry.runHealthCheck("db");

5. Module metrics-servlets

Metrics provides us with a handful of useful servlets which allow us to access metrics-related data through HTTP requests.

5.1. Maven Dependencies

To use the metrics-servlets module, we need to add this dependency to the pom.xml file:

<dependency>
    <groupId>io.dropwizard.metrics</groupId>
    <artifactId>metrics-servlets</artifactId>
    <version>3.1.2</version>
</dependency>

And you can find its latest version here.

5.2. HealthCheckServlet Usage

HealthCheckServlet provides health check results. First, we need to create a ServletContextListener which exposes our HealthCheckRegistry:

public class MyHealthCheckServletContextListener
  extends HealthCheckServlet.ContextListener {
 
    public static HealthCheckRegistry HEALTH_CHECK_REGISTRY
      = new HealthCheckRegistry();

    static {
        HEALTH_CHECK_REGISTRY.register("db", new DatabaseHealthCheck());
    }

    @Override
    protected HealthCheckRegistry getHealthCheckRegistry() {
        return HEALTH_CHECK_REGISTRY;
    }
}

Then, we add both this listener and HealthCheckServlet into the web.xml file:

<listener>
    <listener-class>com.baeldung.metrics.servlets.MyHealthCheckServletContextListener</listener-class>
</listener>
<servlet>
    <servlet-name>healthCheck</servlet-name>
    <servlet-class>com.codahale.metrics.servlets.HealthCheckServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>healthCheck</servlet-name>
    <url-pattern>/healthcheck</url-pattern>
</servlet-mapping>

Now we can start the web application, and send a GET request to “http://localhost:8080/healthcheck” to get health check results. Its response should be like this:

{
  "db": {
    "healthy": true
  }
}

5.3. ThreadDumpServlet Usage

ThreadDumpServlet provides information about all live threads in the JVM, their states, their stack traces, and the state of any locks they may be waiting for.

If we want to use it, we simply need to add these into the web.xml file:

<servlet>
    <servlet-name>threadDump</servlet-name>
    <servlet-class>com.codahale.metrics.servlets.ThreadDumpServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>threadDump</servlet-name>
    <url-pattern>/threaddump</url-pattern>
</servlet-mapping>

Thread dump data will be available at “http://localhost:8080/threaddump”.

5.4. PingServlet Usage

PingServlet can be used to test if the application is running. We add these into the web.xml file:

<servlet>
    <servlet-name>ping</servlet-name>
    <servlet-class>com.codahale.metrics.servlets.PingServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>ping</servlet-name>
    <url-pattern>/ping</url-pattern>
</servlet-mapping>

And then send a GET request to “http://localhost:8080/ping”. The response’s status code is 200 and its content is “pong”.

5.5. MetricsServlet Usage

MetricsServlet provides metrics data. First, we need to create a ServletContextListener which exposes our MetricRegistry:

public class MyMetricsServletContextListener
  extends MetricsServlet.ContextListener {
    private static MetricRegistry METRIC_REGISTRY
     = new MetricRegistry();

    static {
        Counter counter = METRIC_REGISTRY.counter("m01-counter");
        counter.inc();

        Histogram histogram = METRIC_REGISTRY.histogram("m02-histogram");
        histogram.update(5);
        histogram.update(20);
        histogram.update(100);
    }

    @Override
    protected MetricRegistry getMetricRegistry() {
        return METRIC_REGISTRY;
    }
}

Both this listener and MetricsServlet need to be added into web.xml:

<listener>
    <listener-class>com.baeldung.metrics.servlets.MyMetricsServletContextListener</listener-class>
</listener>
<servlet>
    <servlet-name>metrics</servlet-name>
    <servlet-class>com.codahale.metrics.servlets.MetricsServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>metrics</servlet-name>
    <url-pattern>/metrics</url-pattern>
</servlet-mapping>

This will be exposed in our web application at “http://localhost:8080/metrics”. Its response should contain various metrics data:

{
  "version": "3.0.0",
  "gauges": {},
  "counters": {
    "m01-counter": {
      "count": 1
    }
  },
  "histograms": {
    "m02-histogram": {
      "count": 3,
      "max": 100,
      "mean": 41.66666666666666,
      "min": 5,
      "p50": 20,
      "p75": 100,
      "p95": 100,
      "p98": 100,
      "p99": 100,
      "p999": 100,
      "stddev": 41.69998667732268
    }
  },
  "meters": {},
  "timers": {}
}

5.6. AdminServlet Usage

AdminServlet aggregates HealthCheckServlet, ThreadDumpServlet, MetricsServlet, and PingServlet.

Let’s add these into the web.xml:

<servlet>
    <servlet-name>admin</servlet-name>
    <servlet-class>com.codahale.metrics.servlets.AdminServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>admin</servlet-name>
    <url-pattern>/admin/*</url-pattern>
</servlet-mapping>

It can now be accessed at “http://localhost:8080/admin”. We will get a page containing four links, one for each of those four servlets.

Note that if we want to run health checks and access metrics data, those two listeners are still needed.

6. Module metrics-servlet

The metrics-servlet module provides a Filter which has several metrics: meters for status codes, a counter for the number of active requests, and a timer for request duration.

6.1. Maven Dependencies

To use this module, let’s first add the dependency into the pom.xml:

<dependency>
    <groupId>io.dropwizard.metrics</groupId>
    <artifactId>metrics-servlet</artifactId>
    <version>3.1.2</version>
</dependency>

And you can find its latest version here.

6.2. Usage

To use it, we need to create a ServletContextListener which exposes our MetricRegistry to the InstrumentedFilter:

public class MyInstrumentedFilterContextListener
  extends InstrumentedFilterContextListener {
 
    public static MetricRegistry REGISTRY = new MetricRegistry();

    @Override
    protected MetricRegistry getMetricRegistry() {
        return REGISTRY;
    }
}

Then, we add these into the web.xml:

<listener>
     <listener-class>
         com.baeldung.metrics.servlet.MyInstrumentedFilterContextListener
     </listener-class>
</listener>

<filter>
    <filter-name>instrumentFilter</filter-name>
    <filter-class>
        com.codahale.metrics.servlet.InstrumentedFilter
    </filter-class>
</filter>
<filter-mapping>
    <filter-name>instrumentFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>

Now the InstrumentedFilter can work. If we want to access its metrics data, we can do it through its MetricRegistry REGISTRY.
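For example, a minimal sketch that dumps whatever the filter has recorded so far (assuming the listener above has been registered):

ConsoleReporter reporter = ConsoleReporter
  .forRegistry(MyInstrumentedFilterContextListener.REGISTRY)
  .build();
reporter.report(); // prints the filter's meters, counters, and timers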

7. Other Modules

Besides the modules we introduced above, Metrics has some other modules for different purposes:

  • metrics-jvm: provides several useful metrics for instrumenting JVM internals
  • metrics-ehcache: provides InstrumentedEhcache, a decorator for Ehcache caches
  • metrics-httpclient: provides classes for instrumenting Apache HttpClient (4.x version)
  • metrics-log4j: provides InstrumentedAppender, a Log4j Appender implementation for log4j 1.x which records the rate of logged events by their logging level
  • metrics-log4j2: is similar to metrics-log4j, just for log4j 2.x
  • metrics-logback: provides InstrumentedAppender, a Logback Appender implementation which records the rate of logged events by their logging level
  • metrics-json: provides HealthCheckModule and MetricsModule for Jackson

In addition to these main project modules, some third-party libraries provide integration with other libraries and frameworks.

8. Conclusion

Instrumenting applications is a common requirement, so in this article, we introduced Metrics, hoping that it can help you to solve your problem.

As always, the complete source code for the example is available over on GitHub.

Apache Maven Tutorial


1. Introduction

Building a software project typically consists of such tasks as downloading dependencies, putting additional jars on a classpath, compiling source code into binary code, running tests, packaging compiled code into deployable artifacts such as JAR, WAR, and ZIP files, and deploying these artifacts to an application server or repository.

Apache Maven automates these tasks, minimizing the risk of humans making errors while building the software manually and separating the work of compiling and packaging our code from that of code construction.

In this tutorial, we’re going to explore this powerful tool for describing, building, and managing Java software projects using a central piece of information — the Project Object Model (POM) — that is written in XML.

2. Why Use Maven?

The key features of Maven are:

  • simple project setup that follows best practices: Maven tries to avoid as much configuration as possible, by supplying project templates (named archetypes)
  • dependency management: it includes automatic updating, downloading and validating the compatibility, as well as reporting the dependency closures (known also as transitive dependencies)
  • isolation between project dependencies and plugins: with Maven, project dependencies are retrieved from the dependency repositories while any plugin’s dependencies are retrieved from the plugin repositories, resulting in fewer conflicts when plugins start to download additional dependencies
  • central repository system: project dependencies can be loaded from the local file system or public repositories, such as Maven Central

In order to learn how to install Maven on your system, please check this tutorial on Baeldung.

3. Project Object Model

The configuration of a Maven project is done via a Project Object Model (POM), represented by a pom.xml file. The POM describes the project, manages dependencies, and configures plugins for building the software.

The POM also defines the relationships among modules of multi-module projects. Let’s look at the basic structure of a typical POM file:

<project>
    <modelVersion>4.0.0</modelVersion>
    <groupId>org.baeldung</groupId>
    <artifactId>org.baeldung</artifactId>
    <packaging>jar</packaging>
    <version>1.0-SNAPSHOT</version>
    <name>org.baeldung</name>
    <url>http://maven.apache.org</url>
    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
            <scope>test</scope>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
            <!-- ... -->
            </plugin>
        </plugins>
    </build>
</project>

Let’s take a closer look at these constructs.

3.1. Project Identifiers

Maven uses a set of identifiers, also called coordinates, to uniquely identify a project and specify how the project artifact should be packaged:

  • groupId – a unique base name of the company or group that created the project
  • artifactId – a unique name of the project
  • version – a version of the project
  • packaging – a packaging method (e.g. WAR/JAR/ZIP)

The first three of these (groupId:artifactId:version) combine to form the unique identifier and are the mechanism by which you specify which versions of external libraries (e.g. JARs) your project will use.

3.2. Dependencies

These external libraries that a project uses are called dependencies. The dependency management feature in Maven ensures automatic download of those libraries from a central repository, so you don’t have to store them locally.

This is a key feature of Maven and provides the following benefits:

  • uses less storage by significantly reducing the number of downloads off remote repositories
  • makes checking out a project quicker
  • provides an effective platform for exchanging binary artifacts within your organization and beyond without the need for building artifacts from source every time

In order to declare a dependency on an external library, you need to provide the groupId, artifactId, and the version of the library. Let’s take a look at an example:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-core</artifactId>
    <version>4.3.5.RELEASE</version>
</dependency>

As Maven processes the dependencies, it will download Spring Core library into your local Maven repository.

3.3. Repositories

A repository in Maven is used to hold build artifacts and dependencies of varying types. The default local repository is located in the .m2/repository folder under the home directory of the user.

If an artifact or a plug-in is available in the local repository, Maven uses it. Otherwise, it is downloaded from a central repository and stored in the local repository. The default central repository is Maven Central.

Some libraries, such as JBoss server, are not available at the central repository but are available at an alternate repository. For those libraries, you need to provide the URL to the alternate repository inside pom.xml file:

<repositories>
    <repository>
        <id>JBoss repository</id>
        <url>http://repository.jboss.org/nexus/content/groups/public/</url>
    </repository>
</repositories>

Please note that you can use multiple repositories in your projects.

3.4. Properties

Custom properties can help to make your pom.xml file easier to read and maintain. In the classic use case, you would use custom properties to define versions for your project’s dependencies.

Maven properties are value-placeholders and are accessible anywhere within a pom.xml by using the notation ${name}, where name is the property.

Let’s see an example:

<properties>
    <spring.version>4.3.5.RELEASE</spring.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-core</artifactId>
        <version>${spring.version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-context</artifactId>
        <version>${spring.version}</version>
    </dependency>
</dependencies>

Now if you want to upgrade Spring to a newer version, you only have to change the value inside the <spring.version> property tag, and all the dependencies using that property in their <version> tags will be updated.

Properties are also often used to define build path variables:

<properties>
    <project.build.folder>${project.build.directory}/tmp/</project.build.folder>
</properties>

<plugin>
    //...
    <outputDirectory>${project.build.folder}</outputDirectory>
    //...
</plugin>

3.5. Build

The build section is also a very important section of the Maven POM. It provides information about the default Maven goal, the directory for the compiled project, and the final name of the application. The default build section looks like this:

<build>
    <defaultGoal>install</defaultGoal>
    <directory>${basedir}/target</directory>
    <finalName>${project.artifactId}-${project.version}</finalName>
    <filters>
      <filter>filters/filter1.properties</filter>
    </filters>
    //...
</build>

The default output folder for compiled artifacts is named target, and the final name of the packaged artifact consists of the artifactId and version, but you can change it at any time.

3.6. Using Profiles

Another important feature of Maven is its support for profiles. A profile is basically a set of configuration values. By using profiles, you can customize the build for different environments such as Production/Test/Development:

<profiles>
    <profile>
        <id>production</id>
        <build>
            <plugins>
                <plugin>
                //...
                </plugin>
            </plugins>
        </build>
    </profile>
    <profile>
        <id>development</id>
        <activation>
            <activeByDefault>true</activeByDefault>
        </activation>
        <build>
            <plugins>
                <plugin>
                //...
                </plugin>
            </plugins>
        </build>
     </profile>
 </profiles>

As you can see in the example above, the default profile is set to development. If you want to run the production profile, you can use the following Maven command:

mvn clean install -Pproduction

4. Maven Build Lifecycles

Every Maven build follows a specified lifecycle. You can execute several build lifecycle goals, including the ones to compile the project’s code, create a package, and install the archive file in the local Maven dependency repository.

4.1. Lifecycle Phases

The following list shows the most important Maven lifecycle phases:

  • validate – checks the correctness of the project
  • compile – compiles the provided source code into binary artifacts
  • test – executes unit tests
  • package – packages compiled code into an archive file
  • integration-test – executes additional tests, which require the packaging
  • verify – checks if the package is valid
  • install – installs the package file into the local Maven repository
  • deploy – deploys the package file to a remote server or repository

4.2. Plugins and Goals

A Maven plugin is a collection of one or more goals. Goals are executed in phases, which helps to determine the order in which the goals are executed.

The rich list of plugins that are officially supported by Maven is available here. There is also an interesting article on Baeldung about how to build an executable JAR using various plugins.

To gain a better understanding of which goals are run in which phases by default, take a look at the default Maven lifecycle bindings.

To go through any one of the above phases, we just have to call one command:

mvn <phase>

For example, mvn clean install will remove the previously created jar/war/zip files and compiled classes (clean) and execute all the phases necessary to install new archive (install).

Please note that goals provided by plugins can be associated with different phases of the lifecycle.
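
A goal can also be invoked directly, outside of the lifecycle, using the plugin:goal syntax. For example, to run only the compile goal of the maven-compiler-plugin:

mvn compiler:compile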

5. Your First Maven Project

In this section, we will use the command line functionality of Maven to create a Java project.

5.1. Generating a Simple Java Project

In order to build a simple Java project, let’s run the following command:

mvn archetype:generate
  -DgroupId=org.baeldung
  -DartifactId=org.baeldung.java 
  -DarchetypeArtifactId=maven-archetype-quickstart
  -DinteractiveMode=false

The groupId is a parameter indicating the group or individual that created a project, which is often a reversed company domain name. The artifactId is the base package name used in the project, and we use the standard archetype.

Since we didn’t specify the version and the packaging type, these will be set to default values — the version will be set to 1.0-SNAPSHOT, and the packaging will be set to jar.

If you don’t know which parameters to provide, you can always specify interactiveMode=true, so that Maven asks for all the required parameters.

After the command completes, we have a Java project containing an App.java class, which is just a simple “Hello World” program, in the src/main/java folder.

We also have an example test class in src/test/java. The pom.xml of this project will look similar to this:

<project>
    <modelVersion>4.0.0</modelVersion>
    <groupId>org.baeldung</groupId>
    <artifactId>org.baeldung.java</artifactId>
    <packaging>jar</packaging>
    <version>1.0-SNAPSHOT</version>
    <name>org.baeldung.java</name>
    <url>http://maven.apache.org</url>
    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
            <scope>test</scope>
        </dependency>
    </dependencies>
</project>

As you can see, the junit dependency is provided by default.

5.2. Compiling and Packaging a Project

The next step is to compile the project:

mvn compile

Maven will run through all lifecycle phases needed by the compile phase to build the project’s sources. If you want to run only the test phase, you can use:

mvn test

Now let’s invoke the package phase, which will produce the compiled archive jar file:

mvn package

5.3. Executing an Application

Finally, we are going to execute our Java project with the exec-maven-plugin. Let’s configure the necessary plugins in the pom.xml:

<build>
    <sourceDirectory>src</sourceDirectory>
    <plugins>
        <plugin>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.6.1</version>
            <configuration>
                <source>1.8</source>
                <target>1.8</target>
            </configuration>
        </plugin>
        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>exec-maven-plugin</artifactId>
            <version>1.5.0</version>
            <configuration>
                <mainClass>org.baeldung.java.App</mainClass>
            </configuration>
        </plugin>
    </plugins>
</build>

The first plugin, maven-compiler-plugin, is responsible for compiling the source code using Java version 1.8. The exec-maven-plugin searches for the mainClass in our project.

To execute the application, we run the following command:

mvn exec:java

6. Multi-Module Projects

The mechanism in Maven that handles multi-module projects (also called aggregator projects) is called Reactor.

The Reactor collects all available modules to build, then sorts projects into the correct build order, and finally, builds them one by one.

Let’s see how to create a multi-module parent project.

6.1. Create Parent Project

First of all, we need to create a parent project. In order to create a new project with the name parent-project, we use the following command:

mvn archetype:generate -DgroupId=org.baeldung -DartifactId=parent-project

Next, we update the packaging type inside the pom.xml file to indicate that this is a parent module:

<packaging>pom</packaging>

6.2. Create Submodule Projects

In the next step, we create submodule projects from the directory of parent-project:

cd parent-project
mvn archetype:generate -DgroupId=org.baeldung  -DartifactId=core
mvn archetype:generate -DgroupId=org.baeldung  -DartifactId=service
mvn archetype:generate -DgroupId=org.baeldung  -DartifactId=webapp

To verify if we created the submodules correctly, we look in the parent-project pom.xml file, where we should see three modules:

<modules>
    <module>core</module>
    <module>service</module>
    <module>webapp</module>
</modules>

Moreover, a parent section will be added in each submodule’s pom.xml:

<parent>
    <groupId>org.baeldung</groupId>
    <artifactId>parent-project</artifactId>
    <version>1.0-SNAPSHOT</version>
</parent>

6.3. Enable Dependency Management in Parent Project

Dependency management is a mechanism for centralizing the dependency information for a multi-module parent project and its children.

When you have a set of projects or modules that inherit a common parent, you can put all the required information about the dependencies in the common pom.xml file. This will simplify the references to the artifacts in the child POMs.

Let’s take a look at a sample parent’s pom.xml:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-core</artifactId>
            <version>4.3.5.RELEASE</version>
        </dependency>
        //...
    </dependencies>
</dependencyManagement>

By declaring the spring-core version in the parent, all submodules that depend on spring-core can declare the dependency using only the groupId and artifactId, and the version will be inherited:

<dependencies>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-core</artifactId>
    </dependency>
    //...
</dependencies>

Moreover, you can provide exclusions for dependency management in parent’s pom.xml, so that specific libraries will not be inherited by child modules:

<exclusions>
    <exclusion>
        <groupId>org.springframework</groupId>
        <artifactId>spring-context</artifactId>
    </exclusion>
</exclusions>

Finally, if a child module needs to use a different version of a managed dependency, you can override the managed version in child’s pom.xml file:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-core</artifactId>
    <version>4.2.1.RELEASE</version>
</dependency>

Please note that while child modules inherit from their parent project, a parent project does not necessarily have any modules that it aggregates. On the other hand, a parent project may also aggregate projects that do not inherit from it.

For more information on inheritance and aggregation please refer to this documentation.

6.4. Updating the Submodules and Building a Project

We can change the packaging type of each submodule. For example, let’s change the packaging of the webapp module to WAR by updating the pom.xml file:

<packaging>war</packaging>

Now we can test the build of our project by using the mvn clean install command. The output of the Maven logs should be similar to this:

[INFO] Scanning for projects...
[INFO] Reactor build order:
[INFO]   parent-project
[INFO]   core
[INFO]   service
[INFO]   webapp
//.............
[INFO] -----------------------------------------
[INFO] Reactor Summary:
[INFO] -----------------------------------------
[INFO] parent-project .................. SUCCESS [2.041s]
[INFO] core ............................ SUCCESS [4.802s]
[INFO] service ......................... SUCCESS [3.065s]
[INFO] webapp .......................... SUCCESS [6.125s]
[INFO] -----------------------------------------

7. Conclusion

In this article, we discussed some of the more popular features of the Apache Maven build tool.

All code examples on Baeldung are built using Maven, so you can easily check our GitHub project website to see various Maven configurations.

CORS with Spring


1. Overview

In any modern browser, Cross-Origin Resource Sharing (CORS) has become a relevant specification with the emergence of HTML5 and JS clients that consume data via REST APIs.

In many cases, the host that serves the JS (e.g. example.com) is different from the host that serves the data (e.g. api.example.com). In such a case, CORS enables the cross-domain communication.

Spring provides first class support for CORS, offering an easy and powerful way for configuring it.

2. Controller Method CORS Configuration

Enabling CORS is straightforward – just add the annotation @CrossOrigin.

We may implement this in a number of different ways.

2.1. @CrossOrigin on a @RequestMapping-Annotated Handler Method

@RestController
@RequestMapping("/account")
public class AccountController {

    @CrossOrigin
    @RequestMapping("/{id}")
    public Account retrieve(@PathVariable Long id) {
        // ...
    }

    @RequestMapping(method = RequestMethod.DELETE, path = "/{id}")
    public void remove(@PathVariable Long id) {
        // ...
    }
}

In the example above, CORS is only enabled for the retrieve() method. We can see that we didn’t set any configuration for the @CrossOrigin annotation, so the default configuration applies:

  • All origins are allowed
  • The HTTP methods allowed are those specified in the @RequestMapping annotation (for this example, GET)
  • The time that the preflight response is cached (maxAge) is 30 minutes

2.2. @CrossOrigin on the Controller

@CrossOrigin(origins = "http://example.com", maxAge = 3600)
@RestController
@RequestMapping("/account")
public class AccountController {

    @RequestMapping("/{id}")
    public Account retrieve(@PathVariable Long id) {
        // ...
    }

    @RequestMapping(method = RequestMethod.DELETE, path = "/{id}")
    public void remove(@PathVariable Long id) {
        // ...
    }
}

Since @CrossOrigin is added to the controller, both the retrieve() and remove() methods have it enabled. We can customize the configuration by specifying the value of one of the annotation attributes: origins, methods, allowedHeaders, exposedHeaders, allowCredentials or maxAge.

2.3. @CrossOrigin on Controller and Handler Method 

@CrossOrigin(maxAge = 3600)
@RestController
@RequestMapping("/account")
public class AccountController {

    @CrossOrigin("http://example.com")
    @RequestMapping("/{id}")
    public Account retrieve(@PathVariable Long id) {
        // ...
    }

    @RequestMapping(method = RequestMethod.DELETE, path = "/{id}")
    public void remove(@PathVariable Long id) {
        // ...
    }
}

Spring will combine attributes from both annotations to create a merged CORS configuration.

In this example both methods will have a maxAge of 3600 seconds, the method remove() will allow all origins but the method retrieve() will only allow origins from http://example.com.

3. Global CORS Configuration

As an alternative to the fine-grained annotation-based configuration, Spring lets us define a global CORS configuration outside of our controllers. This is similar to using a Filter-based solution, but it can be declared within Spring MVC and combined with fine-grained @CrossOrigin configuration.

By default all origins and GET, HEAD and POST methods are allowed.

3.1. JavaConfig

@Configuration
@EnableWebMvc
public class WebConfig extends WebMvcConfigurerAdapter {

    @Override
    public void addCorsMappings(CorsRegistry registry) {
        registry.addMapping("/**");
    }
}

The example above enables CORS requests from any origin to any endpoint in the application.

If we want to lock this down a bit more, the registry.addMapping method returns a CorsRegistration object which can be used for additional configuration. There’s also an allowedOrigins method which you can use to specify an array of allowed origins. This can be useful if you need to load this array from an external source at runtime.

Additionally, there’s also allowedMethods, allowedHeaders, exposedHeaders, maxAge, and allowCredentials which can be used to set the response headers and provide us with more customization options.
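
For instance, here's a sketch of a more restrictive mapping that uses those fluent methods; the origins and header names are just placeholders:

@Override
public void addCorsMappings(CorsRegistry registry) {
    registry.addMapping("/api/**")
      .allowedOrigins("http://domain1.com", "http://domain2.com")
      .allowedMethods("GET", "PUT")
      .allowedHeaders("header1", "header2", "header3")
      .exposedHeaders("header1", "header2")
      .allowCredentials(false)
      .maxAge(3600);
}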

3.2. XML Namespace

This minimal XML configuration enables CORS on a /** path pattern with the same default properties as the JavaConfig one:

<mvc:cors>
    <mvc:mapping path="/**" />
</mvc:cors>

It is also possible to declare several CORS mappings with customized properties:

<mvc:cors>

    <mvc:mapping path="/api/**"
        allowed-origins="http://domain1.com, http://domain2.com"
        allowed-methods="GET, PUT"
        allowed-headers="header1, header2, header3"
        exposed-headers="header1, header2" allow-credentials="false"
        max-age="123" />

    <mvc:mapping path="/resources/**"
        allowed-origins="http://domain1.com" />

</mvc:cors>

4. How It Works

CORS requests are automatically dispatched to the various HandlerMappings that are registered. They handle CORS preflight requests and intercept CORS simple and actual requests by means of a CorsProcessor implementation (DefaultCorsProcessor by default) in order to add the relevant CORS response headers (such as Access-Control-Allow-Origin).

CorsConfiguration allows one to specify how the CORS requests should be processed: allowed origins, headers and methods among others. This may be provided in various ways:

  • AbstractHandlerMapping#setCorsConfigurations() allows one to specify a Map with several CorsConfigurations mapped onto path patterns such as /api/** (see the sketch after this list)
  • Subclasses may provide their own CorsConfiguration by overriding the AbstractHandlerMapping#getCorsConfiguration(Object, HttpServletRequest) method
  • Handlers may implement the CorsConfigurationSource interface (like ResourceHttpRequestHandler now does) in order to provide a CorsConfiguration for each request
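
As a sketch of the first option, here's how a CorsConfiguration map might be registered; handlerMapping is assumed to be a reference to an AbstractHandlerMapping subclass obtained elsewhere:

CorsConfiguration config = new CorsConfiguration();
config.addAllowedOrigin("http://example.com");
config.addAllowedMethod("GET");
config.setMaxAge(3600L);

// map the configuration onto a path pattern and register it
Map<String, CorsConfiguration> corsConfigurations = new HashMap<>();
corsConfigurations.put("/api/**", config);
handlerMapping.setCorsConfigurations(corsConfigurations);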

5. Conclusion

In this article, we showed how Spring provides support for enabling CORS in our application.

We started with the configuration of the controller. We saw that we only need to add the annotation @CrossOrigin in order to enable CORS either to one particular method or the entire controller.

Finally, we also saw that if we want to control the CORS configuration outside of the controllers we can perform this easily in the configuration files – either using JavaConfig or XML.

Querying Couchbase with MapReduce Views


1. Overview

In this tutorial, we will introduce some simple MapReduce views and demonstrate how to query them using the Couchbase Java SDK.

2. Maven Dependency

To work with Couchbase in a Maven project, import the Couchbase SDK into your pom.xml:

<dependency>
    <groupId>com.couchbase.client</groupId>
    <artifactId>java-client</artifactId>
    <version>2.4.0</version>
</dependency>

You can find the latest version on Maven Central.

3. MapReduce Views

In Couchbase, a MapReduce view is a type of index that can be used to query a data bucket. It is defined using a JavaScript map function and an optional reduce function.

3.1. The map Function

The map function is run against each document one time. When the view is created, the map function is run once against each document in the bucket, and the results are stored in the bucket.

Once a view is created, the map function is run only against newly inserted or updated documents in order to update the view incrementally.

Because the map function’s results are stored in the data bucket, queries against a view exhibit low latencies.

Let’s look at an example of a map function that creates an index on the name field of all documents in the bucket whose type field is equal to “StudentGrade”:

function (doc, meta) {
    if(doc.type == "StudentGrade" && doc.name) {    
        emit(doc.name, null);
    }
}

The emit function tells Couchbase which data field(s) to store in the index key (first parameter) and what value (second parameter) to associate with the indexed document.

In this case, we are storing only the document name property in the index key. And since we are not interested in associating any particular value with each entry, we pass null as the value parameter.

As Couchbase processes the view, it creates an index of the keys that are emitted by the map function, associating each key with all documents for which that key was emitted.

For example, if three documents have the name property set to “John Doe”, then the index key “John Doe” would be associated with those three documents.

3.2. The reduce Function

The reduce function is used to perform aggregate calculations using the results of a map function. The Couchbase Admin UI provides an easy way to apply the built-in reduce functions “_count”, “_sum”, and “_stats” to your map function.

You can also write your own reduce functions for more complex aggregations. We will see examples of using the built-in reduce functions later in the tutorial.

4. Working with Views and Queries

4.1. Organizing the Views

Views are organized into one or more design documents per bucket. In theory, there is no limit to the number of views per design document. However, for optimal performance, it has been suggested that you should limit each design document to fewer than ten views.

When you first create a view within a design document, Couchbase designates it as a development view. You can run queries against a development view to test its functionality. Once you are satisfied with the view, you would publish the design document, and the view becomes a production view.
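
While views are usually created through the Admin UI, the Java SDK can also create and publish them via the BucketManager. Here's a minimal sketch, assuming bucket is an open Bucket as shown later in Section 5:

DesignDocument designDoc = DesignDocument.create(
  "studentGrades",
  Arrays.asList(DefaultView.create(
    "findByCourse",
    "function (doc, meta) { "
      + "if(doc.type == \"StudentGrade\" && doc.course) "
      + "{ emit(doc.course, null); } }")));

BucketManager manager = bucket.bucketManager();
manager.insertDesignDocument(designDoc, true);   // insert as a development view
manager.publishDesignDocument("studentGrades");  // promote to production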

4.2. Constructing Queries

In order to construct a query against a Couchbase view, you need to provide its design document name and view name to create a ViewQuery object:

ViewQuery query = ViewQuery.from("design-document-name", "view-name");

When executed, this query will return all rows of the view. We will see in later sections how to restrict the result set based on the key values.

To construct a query against a development view, you can apply the development() method when creating the query:

ViewQuery query 
  = ViewQuery.from("design-doc-name", "view-name").development();

4.3. Executing the Query

Once we have a ViewQuery object, we can execute the query to obtain a ViewResult:

ViewResult result = bucket.query(query);

4.4. Processing Query Results

And now that we have a ViewResult, we can iterate over the rows to get the document ids and/or content:

for(ViewRow row : result.allRows()) {
    JsonDocument doc = row.document();
    String id = doc.id();
    String json = doc.content().toString();
}

5. Sample Application

For the remainder of the tutorial, we will write MapReduce views and queries for a set of student grade documents having the following format, with grades constrained to the range 0 to 100:

{ 
    "type": "StudentGrade",
    "name": "John Doe",
    "course": "History",
    "hours": 3,
    "grade": 95
}

We will store these documents in the “baeldung-tutorial” bucket and all views in a design document named “studentGrades.” Let’s look at the code needed to open the bucket so that we can query it:

Bucket bucket = CouchbaseCluster.create("127.0.0.1")
  .openBucket("baeldung-tutorial");

6. Exact Match Queries

Suppose you want to find all student grades for a particular course or set of courses. Let’s write a view called “findByCourse” using the following map function:

function (doc, meta) {
    if(doc.type == "StudentGrade" && doc.course && doc.grade) {
        emit(doc.course, null);
    }
}

Note that in this simple view, we only need to emit the course field.

6.1. Matching on a Single Key

To find all grades for the History course, we apply the key method to our base query:

ViewQuery query 
  = ViewQuery.from("studentGrades", "findByCourse").key("History");

6.2. Matching on Multiple Keys

If you want to find all grades for Math and Science courses, you can apply the keys method to the base query, passing it an array of key values:

ViewQuery query = ViewQuery
  .from("studentGrades", "findByCourse")
  .keys(JsonArray.from("Math", "Science"));

7. Range Queries

In order to query for documents containing a range of values for one or more fields, we need a view that emits the field(s) we are interested in, and we must specify a lower and/or upper bound for the query.

Let’s take a look at how to perform range queries involving a single field and multiple fields.

7.1. Queries Involving a Single Field

To find all documents with a range of grade values regardless of the value of the course field, we need a view that emits only the grade field. Let’s write the map function for the “findByGrade” view:

function (doc, meta) {
    if(doc.type == "StudentGrade" && doc.grade) {
        emit(doc.grade, null);
    }
}

Let’s write a query in Java using this view to find all grades equivalent to a “B” letter grade (80 to 89 inclusive):

ViewQuery query = ViewQuery.from("studentGrades", "findByGrade")
  .startKey(80)
  .endKey(89)
  .inclusiveEnd(true);

Note that the start key value in a range query is always treated as inclusive.

And if all the grades are known to be integers, then the following query will yield the same results:

ViewQuery query = ViewQuery.from("studentGrades", "findByGrade")
  .startKey(80)
  .endKey(90)
  .inclusiveEnd(false);

To find all “A” grades (90 and above), we only need to specify the lower bound:

ViewQuery query = ViewQuery
  .from("studentGrades", "findByGrade")
  .startKey(90);

And to find all failing grades (below 60), we only need to specify the upper bound:

ViewQuery query = ViewQuery
  .from("studentGrades", "findByGrade")
  .endKey(60)
  .inclusiveEnd(false);

7.2. Queries Involving Multiple Fields

Now, suppose we want to find all students in a specific course whose grade falls into a certain range. This query requires a new view that emits both the course and grade fields.

With multi-field views, each index key is emitted as an array of values. Since our query involves a fixed value for course and a range of grade values, we will write the map function to emit each key as an array of the form [course, grade].

Let’s look at the map function for the view “findByCourseAndGrade“:

function (doc, meta) {
    if(doc.type == "StudentGrade" && doc.course && doc.grade) {
        emit([doc.course, doc.grade], null);
    }
}

When this view is populated in Couchbase, the index entries are sorted by course and grade. Here’s a subset of keys in the “findByCourseAndGrade” view shown in their natural sort order:

["History", 80]
["History", 90]
["History", 94]
["Math", 82]
["Math", 88]
["Math", 97]
["Science", 78]
["Science", 86]
["Science", 92]

Since the keys in this view are arrays, you would also use arrays of this format when specifying the lower and upper bounds of a range query against this view.

This means that in order to find all students who got a “B” grade (80 to 89) in the Math course, you would set the lower bound to:

["Math", 80]

and the upper bound to:

["Math", 89]

Let’s write the range query in Java:

ViewQuery query = ViewQuery
  .from("studentGrades", "findByCourseAndGrade")
  .startKey(JsonArray.from("Math", 80))
  .endKey(JsonArray.from("Math", 89))
  .inclusiveEnd(true);

If we want to find all students who received an “A” grade (90 and above) in Math, then we would write:

ViewQuery query = ViewQuery
  .from("studentGrades", "findByCourseAndGrade")
  .startKey(JsonArray.from("Math", 90))
  .endKey(JsonArray.from("Math", 100));

Note that because we are fixing the course value to “Math“, we have to include an upper bound with the highest possible grade value. Otherwise, our result set would also include all documents whose course value is lexicographically greater than “Math“.

And to find all failing Math grades (below 60):

ViewQuery query = ViewQuery
  .from("studentGrades", "findByCourseAndGrade")
  .startKey(JsonArray.from("Math", 0))
  .endKey(JsonArray.from("Math", 60))
  .inclusiveEnd(false);

Much like the previous example, we must specify a lower bound with the lowest possible grade. Otherwise, our result set would also include all grades where the course value is lexicographically less than “Math“.

Finally, to find the five highest Math grades (barring any ties), you can tell Couchbase to perform a descending sort and to limit the size of the result set:

ViewQuery query = ViewQuery
  .from("studentGrades", "findByCourseAndGrade")
  .descending()
  .startKey(JsonArray.from("Math", 100))
  .endKey(JsonArray.from("Math", 0))
  .inclusiveEnd(true)
  .limit(5);

Note that when performing a descending sort, the startKey and endKey values are reversed, because Couchbase applies the sort before it applies the limit.

8. Aggregate Queries

A major strength of MapReduce views is that they are highly efficient for running aggregate queries against large datasets. In our student grades dataset, for example, we can easily calculate the following aggregates:

  • number of students in each course
  • sum of credit hours for each student
  • grade point average for each student across all courses

Let’s build a view and query for each of these calculations using built-in reduce functions.

8.1. Using the count() Function

First, let’s write the map function for a view to count the number of students in each course:

function (doc, meta) {
    if(doc.type == "StudentGrade" && doc.course && doc.name) {
        emit([doc.course, doc.name], null);
    }
}

We’ll call this view “countStudentsByCourse” and designate that it is to use the built-in “_count” function. And since we are only performing a simple count, we can still emit null as the value for each entry.

To count the number of students in each course:

ViewQuery query = ViewQuery
  .from("studentGrades", "countStudentsByCourse")
  .reduce()
  .groupLevel(1);

Extracting data from aggregate queries is different from what we’ve seen up to this point. Instead of extracting a matching Couchbase document for each row in the result, we are extracting the aggregate keys and results.

Let’s run the query and extract the counts into a java.util.Map:

ViewResult result = bucket.query(query);
Map<String, Long> numStudentsByCourse = new HashMap<>();
for(ViewRow row : result.allRows()) {
    JsonArray keyArray = (JsonArray) row.key();
    String course = keyArray.getString(0);
    long count = Long.valueOf(row.value().toString());
    numStudentsByCourse.put(course, count);
}

8.2. Using the sum() Function

Next, let’s write a view that calculates the sum of each student’s credit hours attempted. We’ll call this view “sumHoursByStudent” and designate that it is to use the built-in “_sum” function:

function (doc, meta) {
    if(doc.type == "StudentGrade"
         && doc.name
         && doc.course
         && doc.hours) {
        emit([doc.name, doc.course], doc.hours);
    }
}

Note that when applying the “_sum” function, we have to emit the value to be summed — in this case, the number of credits — for each entry.

Let’s write a query to find the total number of credits for each student:

ViewQuery query = ViewQuery
  .from("studentGrades", "sumCreditsByStudent")
  .reduce()
  .groupLevel(1);

And now, let’s run the query and extract the aggregated sums into a java.util.Map:

ViewResult result = bucket.query(query);
Map<String, Long> hoursByStudent = new HashMap<>();
for(ViewRow row : result.allRows()) {
    // with groupLevel(1), the key is a JsonArray containing the student name
    String name = ((JsonArray) row.key()).getString(0);
    long sum = Long.valueOf(row.value().toString());
    hoursByStudent.put(name, sum);
}

8.3. Calculating Grade Point Averages

Suppose we want to calculate each student’s grade point average (GPA) across all courses, using the conventional grade point scale based on the grades obtained and the number of credit hours that the course is worth (A=4 points per credit hour, B=3 points per credit hour, C=2 points per credit hour, and D=1 point per credit hour).

There is no built-in reduce function to calculate average values, so we’ll combine the output from two views to compute the GPA.

We already have the “sumHoursByStudent” view that sums the number of credit hours each student attempted. Now we need the total number of grade points each student earned.

Let’s create a view called “sumGradePointsByStudent” that calculates the number of grade points earned for each course taken. We’ll use the built-in “_sum” function to reduce the following map function:

function (doc, meta) {
    if(doc.type == "StudentGrade"
         && doc.name
         && doc.hours
         && doc.grade) {
        if(doc.grade >= 90) {
            emit(doc.name, 4*doc.hours);
        }
        else if(doc.grade >= 80) {
            emit(doc.name, 3*doc.hours);
        }
        else if(doc.grade >= 70) {
            emit(doc.name, 2*doc.hours);
        }
        else if(doc.grade >= 60) {
            emit(doc.name, doc.hours);
        }
        else {
            emit(doc.name, 0);
        }
    }
}

Now let’s query this view and extract the sums into a java.util.Map:

ViewQuery query = ViewQuery.from(
  "studentGrades",
  "sumGradePointsByStudent")
  .reduce()
  .groupLevel(1);
ViewResult result = bucket.query(query);

Map<String, Long> gradePointsByStudent = new HashMap<>();
for(ViewRow row : result.allRows()) {
    String name = (String) row.key();
    long sum = Long.valueOf(row.value().toString());
    gradePointsByStudent.put(name, sum);
}

Finally, let’s combine the two Maps in order to calculate GPA for each student:

Map<String, Float> result = new HashMap<>();
for(Entry<String, Long> creditHoursEntry : hoursByStudent.entrySet()) {
    String name = creditHoursEntry.getKey();
    long totalHours = creditHoursEntry.getValue();
    long totalGradePoints = gradePointsByStudent.get(name);
    result.put(name, ((float) totalGradePoints / totalHours));
}

9. Conclusion

We have demonstrated how to write some basic MapReduce views in Couchbase, how to construct and execute queries against the views, and how to extract the results.

The code presented in this tutorial can be found in the GitHub project.

You can learn more about MapReduce views and how to query them in Java at the official Couchbase developer documentation site.


A Guide to Spring Mobile


1. Overview

Spring Mobile is a modern extension to the popular Spring Web MVC framework that helps to simplify the development of web applications that need to be fully or partially compatible with multiple device platforms, with minimal effort and less boilerplate coding.

In this article, we’ll learn about the Spring Mobile project, and we’ll build a sample project to highlight its uses.

2. Features of Spring Mobile

  • Automatic Device Detection: Spring Mobile has a built-in server-side device resolver abstraction layer. It analyzes all incoming requests and detects sender device information, for example, the device type, the operating system etc.
  • Site Preference Management: using Site Preference Management, Spring Mobile allows users to choose the mobile/tablet/normal view of the website. It’s a comparatively deprecated technique, since by using the DeviceDelegatingViewResolver we can serve the right view layer depending on the device type without demanding any input from the user’s side
  • Site Switcher: the Site Switcher is capable of automatically switching users to the most appropriate view according to their device type (i.e. mobile, desktop etc.)
  • Device Aware View Manager: usually, depending on the device type, we forward the user request to a specific site meant to handle a specific device. Spring Mobile’s View Manager gives developers the flexibility to put all of the views in a pre-defined format, and Spring Mobile auto-manages the different views based on the device type

3. Building an Application

Let’s now create a demo application using Spring Mobile with Spring Boot and Freemarker Template Engine and try to capture device details with a minimal amount of coding.

3.1. Maven Dependencies

Before we start we need to add following Spring Mobile dependency in the pom.xml:

<dependency>
    <groupId>org.springframework.mobile</groupId>
    <artifactId>spring-mobile-device</artifactId>
    <version>1.1.5.RELEASE</version>
</dependency>

We can check the latest version of Spring Mobile in Central Maven Repository.

3.2. Create Freemarker Templates

First, let’s create our index page using Freemarker. Don’t forget to add the necessary dependency to enable autoconfiguration for Freemarker.

Since we are trying to detect the sender device and route the request accordingly, we need to create three separate Freemarker files to address this: one to handle a mobile request, another to handle a tablet request, and the last one (default) to handle normal browser requests.

We need to create two folders named ‘mobile‘ and ‘tablet‘ under src/main/resources/templates and put the Freemarker files accordingly. The final structure should look like this:

└── src
    └── main
        └── resources
            └── templates
                └── index.ftl
                └── mobile
                    └── index.ftl
                └── tablet
                    └── index.ftl

Now, let’s put the following HTML inside index.ftl files:

<h1>You are into browser version</h1>

Depending on the device type, we’ll change the content inside the <h1> tag.

3.3. Enable DeviceDelegatingViewResolver

To enable the Spring Mobile DeviceDelegatingViewResolver service, we need to put the following property inside application.properties:

spring.mobile.devicedelegatingviewresolver.enabled: true

Site preference functionality is enabled by default in Spring Boot when you include the Spring Mobile starter. However, it can be disabled by setting the following property to false:

spring.mobile.sitepreference.enabled: false

3.4. Create a Controller

Now we need to create a Controller class to handle the incoming request. We’ll use the simple @GetMapping annotation to handle the request:

@Controller
public class IndexController {

    @GetMapping("/")
    public String greeting(Device device) {
		
        String deviceType = "browser";
        String platform = "browser";
		
        if (device.isNormal()) {
            deviceType = "browser";
        } else if (device.isMobile()) {
            deviceType = "mobile";
        } else if (device.isTablet()) {
            deviceType = "tablet";
        }
        
        platform = device.getDevicePlatform().name();
        
        if (platform.equalsIgnoreCase("UNKNOWN")) {
            platform = "browser";
        }
     	
        return "index";
    }
}

A couple of things to note here:

  • In the handler mapping method, we are passing org.springframework.mobile.device.Device. This is the device information injected with each and every request by Spring Mobile’s device resolution support, which Spring Boot auto-configures alongside the DeviceDelegatingViewResolver we enabled in application.properties
  • The org.springframework.mobile.device.Device has a couple of built-in methods like isMobile(), isTablet(), getDevicePlatform() etc. Using these, we can capture all the device information we need and use it (a quick sketch follows this list)
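
If we need the device information outside of a controller method signature — say, in a filter or a service — Spring Mobile also exposes it through DeviceUtils. A minimal sketch, assuming the auto-configured device resolver has already stored the resolved Device on the request:

public String resolveDeviceType(HttpServletRequest request) {
    // returns null if no Device was resolved for this request
    Device device = DeviceUtils.getCurrentDevice(request);
    if (device == null) {
        return "unknown";
    }
    if (device.isMobile()) {
        return "mobile";
    }
    return device.isTablet() ? "tablet" : "browser";
}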

3.5. Java Config

We are almost done. One last thing to do is to build a Spring Boot config class to start the application:

@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

4. Testing the Application

Once we start the application, it will run on http://localhost:8080.

We will use Google Chrome’s Developer Console to emulate different kinds of device. We can enable it by pressing ctrl + shift + i or by pressing F12.

By default, if we open the main page, we can see that Spring Mobile detects the device as a desktop browser. We should see the following result:

Now, on the console panel, we click the second icon on the top left. This enables a mobile view of the browser.

We’ll see a drop-down appear in the top-left corner of the browser. In the drop-down, we can choose from different kinds of device types. To emulate a mobile device, let’s choose Nexus 6P and refresh the page.

As soon as we refresh the page, we’ll notice that the content of the page changes, because the DeviceDelegatingViewResolver has already detected that the last request came from a mobile device and has served the index.ftl file from the mobile folder in the templates.

Here’s the result:

In the same way, we are going to emulate a tablet version. Let’s choose iPad from the drop-down just like last time and refresh the page. The content changes, and it should be treated as a tablet view:

Now, we’ll see if the Site Preference functionality is working as expected.

To simulate a real-world scenario where the user wants to view the site in a mobile-friendly way, just add the following URL parameter to the end of the default URL:

?site_preference=mobile

Once refreshed, the view should automatically switch to the mobile view, i.e. the following text would be displayed: ‘You are into mobile version’.

In the same way, to simulate the tablet preference, just add the following URL parameter to the end of the default URL:

?site_preference=tablet

And just like last time, the view should automatically refresh to the tablet view.

Please note that the default URL remains the same, and if the user visits the default URL again, he or she will be redirected to the respective view based on the device type.

5. Conclusion

We just created a web application and implemented cross-platform functionality. From the productivity perspective, it’s a tremendous boost: Spring Mobile eliminates a lot of front-end scripting otherwise needed to handle cross-browser behavior, thus reducing development time.

As always, the updated source code is over on GitHub.

Guide to Guava Table


1. Overview

In this tutorial, we’ll show how to use the Google Guava’s Table interface and its multiple implementations.

Guava’s Table is a collection that represents a table like structure containing rows, columns and the associated cell values. The row and the column act as an ordered pair of keys.

2. Google Guava’s Table

Let’s have a look at how to use the Table class.

2.1. Maven Dependency

Let’s start by adding Google’s Guava library dependency in the pom.xml:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>21.0</version>
</dependency>

The latest version of the dependency can be checked here.

2.2. About

If we were to represent Guava’s Table using Collections present in core Java, then the structure would be a map of rows where each row contains a map of columns with associated cell values. 

Table represents a special map where two keys can be specified in combined fashion to refer to a single value.

It’s similar to creating a map of maps, for example, Map<UniversityName, Map<CoursesOffered, SeatAvailable>>. A Table would also be a perfect way of representing a Battleships game board.
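
To make the comparison concrete, here's a quick sketch contrasting the nested-map approach with Table; the university/course data mirrors the examples used below:

// with plain maps, we have to manage the inner map ourselves
Map<String, Map<String, Integer>> seatMap = new HashMap<>();
seatMap.computeIfAbsent("Mumbai", k -> new HashMap<>()).put("IT", 60);
Integer seats = seatMap.get("Mumbai").get("IT");

// with Table, both keys are handled in a single call
Table<String, String, Integer> seatTable = HashBasedTable.create();
seatTable.put("Mumbai", "IT", 60);
Integer seatsFromTable = seatTable.get("Mumbai", "IT");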

3. Creating

You can create an instance of Table in multiple ways:

  • Using the create method from the class HashBasedTable which uses LinkedHashMap internally:
    Table<String, String, Integer> universityCourseSeatTable 
      = HashBasedTable.create();
  • If we need a Table whose row keys and the column keys need to be ordered by their natural ordering or by supplying comparators, you can create an instance of a Table using the create method from a class called TreeBasedTable, which uses TreeMap internally:
    Table<String, String, Integer> universityCourseSeatTable
      = TreeBasedTable.create();
    
  • If we know the row keys and the column keys beforehand and the table size is fixed, use the create method from the class ArrayTable:
    List<String> universityRowTable 
      = Lists.newArrayList("Mumbai", "Harvard");
    List<String> courseColumnTables 
      = Lists.newArrayList("Chemical", "IT", "Electrical");
    Table<String, String, Integer> universityCourseSeatTable
      = ArrayTable.create(universityRowTable, courseColumnTables);
    
  • If we intend to create an immutable instance of Table whose internal data are never going to change, use the ImmutableTable class (creating which follows a builder pattern):
    Table<String, String, Integer> universityCourseSeatTable
      = ImmutableTable.<String, String, Integer> builder()
      .put("Mumbai", "Chemical", 120).build();
    

4. Using

Let’s start with a simple example showing the usage of Table.

4.1. Retrieval

If we know the row key and the column key then we can get the value associated with the row and the column key:

@Test
public void givenTable_whenGet_returnsSuccessfully() {
    Table<String, String, Integer> universityCourseSeatTable 
      = HashBasedTable.create();
    universityCourseSeatTable.put("Mumbai", "Chemical", 120);
    universityCourseSeatTable.put("Mumbai", "IT", 60);
    universityCourseSeatTable.put("Harvard", "Electrical", 60);
    universityCourseSeatTable.put("Harvard", "IT", 120);

    int seatCount 
      = universityCourseSeatTable.get("Mumbai", "IT");
    Integer seatCountForNoEntry 
      = universityCourseSeatTable.get("Oxford", "IT");

    assertThat(seatCount).isEqualTo(60);
    assertThat(seatCountForNoEntry).isEqualTo(null);
}

4.2. Checking for an Entry

We can check the presence of an entry in a Table based on:

  • Row key
  • Column key
  • Both row key and column key
  • Value

Let’s see how to check for the presence of an entry:

@Test
public void givenTable_whenContains_returnsSuccessfully() {
    Table<String, String, Integer> universityCourseSeatTable 
      = HashBasedTable.create();
    universityCourseSeatTable.put("Mumbai", "Chemical", 120);
    universityCourseSeatTable.put("Mumbai", "IT", 60);
    universityCourseSeatTable.put("Harvard", "Electrical", 60);
    universityCourseSeatTable.put("Harvard", "IT", 120);

    boolean entryIsPresent
      = universityCourseSeatTable.contains("Mumbai", "IT");
    boolean courseIsPresent 
      = universityCourseSeatTable.containsColumn("IT");
    boolean universityIsPresent 
      = universityCourseSeatTable.containsRow("Mumbai");
    boolean seatCountIsPresent 
      = universityCourseSeatTable.containsValue(60);

    assertThat(entryIsPresent).isEqualTo(true);
    assertThat(courseIsPresent).isEqualTo(true);
    assertThat(universityIsPresent).isEqualTo(true);
    assertThat(seatCountIsPresent).isEqualTo(true);
}

4.3. Removal

We can remove an entry from the Table by supplying both the row key and column key:

@Test
public void givenTable_whenRemove_returnsSuccessfully() {
    Table<String, String, Integer> universityCourseSeatTable
      = HashBasedTable.create();
    universityCourseSeatTable.put("Mumbai", "Chemical", 120);
    universityCourseSeatTable.put("Mumbai", "IT", 60);

    int seatCount 
      = universityCourseSeatTable.remove("Mumbai", "IT");

    assertThat(seatCount).isEqualTo(60);
    assertThat(universityCourseSeatTable.remove("Mumbai", "IT")).
      isEqualTo(null);
}

4.4. Row Key to Cell Value Map

We can get a Map representation with the key as a row and the value as a CellValue by providing the column key:

@Test
public void givenTable_whenColumn_returnsSuccessfully() {
    Table<String, String, Integer> universityCourseSeatTable 
      = HashBasedTable.create();
    universityCourseSeatTable.put("Mumbai", "Chemical", 120);
    universityCourseSeatTable.put("Mumbai", "IT", 60);
    universityCourseSeatTable.put("Harvard", "Electrical", 60);
    universityCourseSeatTable.put("Harvard", "IT", 120);

    Map<String, Integer> universitySeatMap 
      = universityCourseSeatTable.column("IT");

    assertThat(universitySeatMap).hasSize(2);
    assertThat(universitySeatMap.get("Mumbai")).isEqualTo(60);
    assertThat(universitySeatMap.get("Harvard")).isEqualTo(120);
}

4.5. Map Representation of a Table

We can get a Map<UniversityName, Map<CoursesOffered, SeatAvailable>> representation by using the columnMap method:

@Test
public void givenTable_whenColumnMap_returnsSuccessfully() {
    Table<String, String, Integer> universityCourseSeatTable 
      = HashBasedTable.create();
    universityCourseSeatTable.put("Mumbai", "Chemical", 120);
    universityCourseSeatTable.put("Mumbai", "IT", 60);
    universityCourseSeatTable.put("Harvard", "Electrical", 60);
    universityCourseSeatTable.put("Harvard", "IT", 120);

    Map<String, Map<String, Integer>> courseKeyUniversitySeatMap 
      = universityCourseSeatTable.columnMap();

    assertThat(courseKeyUniversitySeatMap).hasSize(3);
    assertThat(courseKeyUniversitySeatMap.get("IT")).hasSize(2);
    assertThat(courseKeyUniversitySeatMap.get("Electrical")).hasSize(1);
    assertThat(courseKeyUniversitySeatMap.get("Chemical")).hasSize(1);
}

4.6. Column Key to Cell Value Map

We can get a Map representation with the key as a column and the value as a CellValue by providing row key:

@Test
public void givenTable_whenRow_returnsSuccessfully() {
    Table<String, String, Integer> universityCourseSeatTable 
      = HashBasedTable.create();
    universityCourseSeatTable.put("Mumbai", "Chemical", 120);
    universityCourseSeatTable.put("Mumbai", "IT", 60);
    universityCourseSeatTable.put("Harvard", "Electrical", 60);
    universityCourseSeatTable.put("Harvard", "IT", 120);

    Map<String, Integer> courseSeatMap 
      = universityCourseSeatTable.row("Mumbai");

    assertThat(courseSeatMap).hasSize(2);
    assertThat(courseSeatMap.get("IT")).isEqualTo(60);
    assertThat(courseSeatMap.get("Chemical")).isEqualTo(120);
}

4.7. Get Distinct Row Key

We can get all the row keys from a table using the rowKeySet method:

@Test
public void givenTable_whenRowKeySet_returnsSuccessfully() {
    Table<String, String, Integer> universityCourseSeatTable
      = HashBasedTable.create();
    universityCourseSeatTable.put("Mumbai", "Chemical", 120);
    universityCourseSeatTable.put("Mumbai", "IT", 60);
    universityCourseSeatTable.put("Harvard", "Electrical", 60);
    universityCourseSeatTable.put("Harvard", "IT", 120);

    Set<String> universitySet = universityCourseSeatTable.rowKeySet();

    assertThat(universitySet).hasSize(2);
}

4.8. Get Distinct Column Key

We can get all column keys from a table using the columnKeySet method:

@Test
public void givenTable_whenColKeySet_returnsSuccessfully() {
    Table<String, String, Integer> universityCourseSeatTable
      = HashBasedTable.create();
    universityCourseSeatTable.put("Mumbai", "Chemical", 120);
    universityCourseSeatTable.put("Mumbai", "IT", 60);
    universityCourseSeatTable.put("Harvard", "Electrical", 60);
    universityCourseSeatTable.put("Harvard", "IT", 120);

    Set<String> courseSet = universityCourseSeatTable.columnKeySet();

    assertThat(courseSet).hasSize(3);
}

5. Conclusion

In this tutorial, we illustrated the methods of the Table class from the Guava library. The Table class provides a collection that represents a table like structure containing rows, columns and associated cell values.

The code belonging to the above examples can be found in the GitHub project – this is a Maven-based project, so it should be easy to import and run as is.

Guide to java.util.concurrent.Future


1. Overview

In this article, we are going to learn about Future, an interface that’s been around since Java 1.5 and that can be quite useful when working with asynchronous calls and concurrent processing.

2. Creating Futures

Simply put, the Future class represents a future result of an asynchronous computation – a result that will eventually appear in the Future after the processing is complete.

Let’s see how to write methods that create and return a Future instance.

Long running methods are good candidates for asynchronous processing and the Future interface. This enables us to execute some other process while we are waiting for the task encapsulated in Future to complete.

Some examples of operations that would leverage the async nature of Future are:

  • computational intensive processes (mathematical and scientific calculations)
  • manipulating large data structures (big data)
  • remote method calls (downloading files, HTML scraping, web services).

2.1. Implementing Futures with FutureTask

For our example, we are going to create a very simple class that calculates the square of an Integer. This definitely doesn’t fit the “long-running” methods category, but we are going to add a Thread.sleep() call to it so that it takes 1 second to complete:

public class SquareCalculator {    
    
    private ExecutorService executor 
      = Executors.newSingleThreadExecutor();
    
    public Future<Integer> calculate(Integer input) {        
        return executor.submit(() -> {
            Thread.sleep(1000);
            return input * input;
        });
    }
}

The bit of code that actually performs the calculation is contained within the call() method, supplied as a lambda expression. As you can see there’s nothing special about it, except for the sleep() call mentioned earlier.

It gets more interesting when we direct our attention to the usage of Callable and ExecutorService.

Callable is an interface representing a task that returns a result. Here, we’ve created an anonymous class and placed our business logic in the call() method.

Creating an instance of Callable does not take us anywhere on its own; we still have to pass this instance to an executor that will take care of starting the task in a new thread and give us back the valuable Future object. That’s where ExecutorService comes in.

There are a few ways we can get ahold of an ExecutorService instance; most of them are provided by the utility class Executors’ static factory methods. In this example, we’ve used the basic newSingleThreadExecutor(), which gives us an ExecutorService capable of handling a single thread at a time.

Once we have an ExecutorService object, we just need to call submit() passing our Callable as an argument. submit() will take care of starting the task and return a FutureTask object, which is an implementation of the Future interface.
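
To make the FutureTask visible, here's a minimal sketch that skips the executor and runs the task directly on a Thread (exception handling omitted for brevity):

Callable<Integer> callable = () -> {
    Thread.sleep(1000);
    return 10 * 10;
};

// FutureTask implements both Runnable and Future
FutureTask<Integer> futureTask = new FutureTask<>(callable);
new Thread(futureTask).start();

Integer result = futureTask.get(); // blocks until the task completes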

3. Consuming Futures

Up to this point, we’ve learned how to create an instance of Future. In this section, we’ll learn how to work with this instance by exploring all the methods that are part of Future’s API.

3.1. Using isDone() and get() to Obtain Results

Now we need to call calculate() and use the returned Future to get the resulting Integer. Two methods from the Future API will help us with this task.

Future.isDone() tells us if the executor has finished processing the task. If the task is complete, it will return true; otherwise, it returns false.

The method that returns the actual result from the calculation is Future.get(). Notice that this method blocks the execution until the task is complete, but in our example, this won’t be an issue since we’ll check first if the task is completed by calling isDone().

By using these two methods we can run some other code while we wait for the main task to finish:

Future<Integer> future = new SquareCalculator().calculate(10);

while(!future.isDone()) {
    System.out.println("Calculating...");
    Thread.sleep(300);
}

Integer result = future.get();

In this example, we write a simple message on the output to let the user know the program is performing the calculation.

The method get() will block the execution until the task is complete. But we don’t have to worry about that here, since our example only gets to the point where get() is called after making sure that the task is finished. So, in this scenario, future.get() will always return immediately.

It is worth mentioning that get() has an overloaded version that takes a timeout and a TimeUnit as arguments:

Integer result = future.get(500, TimeUnit.MILLISECONDS);

The difference between get(long, TimeUnit) and get() is that the former will throw a TimeoutException if the task doesn’t return before the specified timeout period.
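
To make that behavior concrete, here is a hedged sketch of how we might handle the TimeoutException; the 500 ms value and the decision to cancel the task are just illustrative choices:

try {
    Integer result = future.get(500, TimeUnit.MILLISECONDS);
    System.out.println("Result: " + result);
} catch (TimeoutException e) {
    // the task didn't finish within 500 ms; it keeps running unless we cancel it
    future.cancel(true);
} catch (InterruptedException | ExecutionException e) {
    // we were interrupted while waiting, or the task itself threw an exception
    throw new RuntimeException(e);
}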

3.2. Canceling a Future with cancel()

Suppose we’ve triggered a task but, for some reason, we don’t care about the result anymore. We can use Future.cancel(boolean) to tell the executor to stop the operation and interrupt its underlying thread:

Future<Integer> future = new SquareCalculator().calculate(4);

boolean canceled = future.cancel(true);

Our instance of Future from the code above would never complete its operation. In fact, if we try to call get() on that instance after the call to cancel(), the outcome would be a CancellationException. Future.isCancelled() will tell us if a Future was already canceled, which can be quite useful to avoid getting a CancellationException.

It is possible that a call to cancel() fails. In that case, its returned value will be false. Notice that cancel() takes a boolean value as an argument – this controls whether the thread executing this task should be interrupted or not.
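
Putting the two together, here is a small sketch of the guard described above:

Future<Integer> future = new SquareCalculator().calculate(4);

future.cancel(true);

if (future.isCancelled()) {
    System.out.println("Task was canceled, skipping get()");
} else {
    Integer result = future.get(); // safe: no CancellationException here
}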

4. More Multithreading with Thread Pools

Our current ExecutorService is single-threaded, since it was obtained with Executors.newSingleThreadExecutor(). To highlight this single-threadedness, let’s trigger two calculations simultaneously:

SquareCalculator squareCalculator = new SquareCalculator();

Future<Integer> future1 = squareCalculator.calculate(10);
Future<Integer> future2 = squareCalculator.calculate(100);

while (!(future1.isDone() && future2.isDone())) {
    System.out.println(
      String.format(
        "future1 is %s and future2 is %s", 
        future1.isDone() ? "done" : "not done", 
        future2.isDone() ? "done" : "not done"
      )
    );
    Thread.sleep(300);
}

Integer result1 = future1.get();
Integer result2 = future2.get();

System.out.println(result1 + " and " + result2);

squareCalculator.shutdown();

Now let’s analyze the output for this code:

calculating square for: 10
future1 is not done and future2 is not done
future1 is not done and future2 is not done
future1 is not done and future2 is not done
future1 is not done and future2 is not done
calculating square for: 100
future1 is done and future2 is not done
future1 is done and future2 is not done
future1 is done and future2 is not done
100 and 10000

It is clear that the process is not parallel. Notice how the second task only starts once the first task is completed, making the whole process take around 2 seconds to finish.

To make our program really multi-threaded we should use a different flavor of ExecutorService. Let’s see how the behavior of our example changes if we use a thread pool, provided by the factory method Executors.newFixedThreadPool():

public class SquareCalculator {
 
    private ExecutorService executor = Executors.newFixedThreadPool(2);
    
    //...
}

With a simple change in our SquareCalculator class, we now have an executor which is able to use 2 simultaneous threads.

If we run the exact same client code again, we’ll get the following output:

calculating square for: 10
calculating square for: 100
future1 is not done and future2 is not done
future1 is not done and future2 is not done
future1 is not done and future2 is not done
future1 is not done and future2 is not done
100 and 10000

This is looking much better now. Notice how the 2 tasks start and finish running simultaneously, and the whole process takes around 1 second to complete.

There are other factory methods that can be used to create thread pools, like Executors.newCachedThreadPool(), which reuses previously created Threads when they are available, and Executors.newScheduledThreadPool(), which schedules commands to run after a given delay.
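
As a brief illustration of the latter, a schedule() call returns a ScheduledFuture, which is itself a Future. Here is a small sketch with an arbitrary 500 ms delay:

ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);

// the Callable runs after a 500 ms delay; ScheduledFuture extends Future
ScheduledFuture<Integer> scheduled
  = scheduler.schedule(() -> 42, 500, TimeUnit.MILLISECONDS);

Integer result = scheduled.get(); // blocks until the delayed task completes
scheduler.shutdown();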

For more information about ExecutorService, read our article dedicated to the subject.

5. Overview of ForkJoinTask

ForkJoinTask is an abstract class which implements Future and is capable of running a large number of tasks hosted by a small number of actual threads in a ForkJoinPool.

In this section, we are going to quickly cover the main characteristics of ForkJoinPool. For a comprehensive guide about the topic, check our Guide to the Fork/Join Framework in Java.

The main characteristic of a ForkJoinTask is that it usually spawns new subtasks as part of the work required to complete its main task. It generates new tasks by calling fork() and gathers all the results with join(), thus the name of the class.

There are two abstract classes that extend ForkJoinTask: RecursiveTask, which returns a value upon completion, and RecursiveAction, which doesn’t return anything. As the names imply, those classes are to be used for recursive tasks, like, for example, file-system navigation or complex mathematical computations.

Let’s expand our previous example to create a class that, given an Integer, will calculate the sum of the squares of all integers from that number down to 1. So, for instance, if we pass the number 4 to our calculator, we should get the result of the sum 4² + 3² + 2² + 1², which is 30.

First of all, we need to create a concrete implementation of RecursiveTask and implement its compute() method. This is where we’ll write our business logic:

public class FactorialSquareCalculator extends RecursiveTask<Integer> {
 
    private Integer n;

    public FactorialSquareCalculator(Integer n) {
        this.n = n;
    }

    @Override
    protected Integer compute() {
        if (n <= 1) {
            return n;
        }

        FactorialSquareCalculator calculator 
          = new FactorialSquareCalculator(n - 1);

        calculator.fork();

        return n * n + calculator.join();
    }
}

Notice how we achieve recursiveness by creating a new instance of FactorialSquareCalculator within compute(). By calling fork(), a non-blocking method, we ask ForkJoinPool to initiate the execution of this subtask.

The join() method will return the result from that calculation, to which we add the square of the number we are currently visiting.

Now we just need to create a ForkJoinPool to handle the execution and thread management:

ForkJoinPool forkJoinPool = new ForkJoinPool();

FactorialSquareCalculator calculator = new FactorialSquareCalculator(10);

forkJoinPool.execute(calculator);
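
Note that execute() only submits the task asynchronously. To actually read the computed value, we can call join() on the task itself, or use forkJoinPool.invoke(calculator), which submits and waits in a single call:

// join() waits for the computation to complete and returns its result
Integer result = calculator.join();

System.out.println(result); // prints 385 for n = 10 (10² + 9² + ... + 1²)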

6. Conclusion

In this article, we had a comprehensive view of the Future interface, visiting all its methods. We’ve also learned how to leverage the power of thread pools to trigger multiple parallel operations. The main methods from the ForkJoinTask class, fork() and join() were briefly covered as well.

We have many other great articles on parallel and asynchronous operations in Java. Here are three of them that are closely related to the Future interface (some of them are already mentioned in the article):

Check the source code used in this article in our GitHub repository.

Building an API With the Spark Java Framework

1. Introduction

In this article, we will have a quick introduction to the Spark framework. Spark framework is a rapid development web framework inspired by the Sinatra framework for Ruby and built around the Java 8 Lambda Expression philosophy, making applications written with it less verbose than those written in most other Java frameworks.

It’s a good choice if you want a Node.js-like experience when developing a web API or microservices in Java. With Spark, you can have a REST API ready to serve JSON in less than ten lines of code.

We will have a quick start with a “Hello World” example, followed by a simple REST API.

2. Maven Dependencies

2.1. Spark Framework

Include the following Maven dependency in your pom.xml:

<dependency>
    <groupId>com.sparkjava</groupId>
    <artifactId>spark-core</artifactId>
    <version>2.5.4</version>
</dependency>

You can find the latest version of Spark on Maven Central.

2.2. Gson Library 

At various places in the example, we will be using the Gson library for JSON operations. To include Gson in your project, include this dependency in your pom.xml:

<dependency>
    <groupId>com.google.code.gson</groupId>
    <artifactId>gson</artifactId>
    <version>2.8.0</version>
</dependency>

You can find the latest version of Gson on Maven Central.

3. Getting Started with Spark Framework

Let’s take a look at the basic building blocks of a Spark application and demonstrate a quick web service.

3.1. Routes

Web services in Spark Java are built upon routes and their handlers. Routes are essential elements in Spark. As per the documentation, each route is made up of three simple pieces – a verb, a path, and a callback.

  1. The verb is a method corresponding to an HTTP method. Verb methods include: get, post, put, delete, head, trace, connect, and options
  2. The path (also called a route pattern) determines which URI(s) the route should listen to and provide a response for
  3. The callback is a handler function that is invoked for a given verb and path in order to generate and return a response to the corresponding HTTP request. A callback takes a request object and response object as arguments

Here we show the basic structure for a route that uses the get verb:

get("/your-route-path/", (request, response) -> {
    // your callback code
});

3.2. Hello World API

Let’s create a simple web service that has two routes for GET requests and returns “Hello” messages in response. These routes use the get method, which is a static import from the class spark.Spark:

import static spark.Spark.*;

public class HelloWorldService {
    public static void main(String[] args) {
 
        get("/hello", (req, res)->"Hello, world");
        
        get("/hello/:name", (req,res)->{
            return "Hello, "+ req.params(":name");
        });
    }
}

The first argument to the get method is the path for the route. The first route contains a static path representing only a single URI (“/hello”).

The second route’s path (“/hello/:name”) contains a placeholder for the “name” parameter, as denoted by prefacing the parameter with a colon (“:”). This route will be invoked in response to GET requests to URIs such as “/hello/Joe” and “/hello/Mary”.

The second argument to the get method is a lambda expression giving a functional programming flavor to this framework.

The lambda expression has request and response as arguments and helps return the response. We will put our controller logic in the lambda expression for the REST API routes, as we shall see later in this tutorial.

3.3. Testing the Hello World API

After running the class HelloWorldService as a normal Java class, you will be able to access the service on its default port of 4567 using the routes defined with the get method above.
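
As a side note, if the default port doesn’t suit you, Spark also provides a static port() method to change it; the only caveat is that it must be called before any route is mapped. A small sketch, where 8080 is just an example value:

import static spark.Spark.*;

public class HelloWorldService {
    public static void main(String[] args) {
        port(8080); // must be set before the first route is declared

        get("/hello", (req, res) -> "Hello, world");
    }
}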

Let’s look at the request and response for the first route:

Request:

GET http://localhost:4567/hello

Response:

Hello, world

Let’s test the second route, passing the name parameter in its path:

Request:

GET http://localhost:4567/hello/baeldung

Response:

Hello, baeldung

See how the placement of the text “baeldung” in the URI was used to match the route pattern “/hello/:name” – causing the second route’s callback handler function to be invoked.

4. Designing a RESTful Service

In this section, we will design a simple REST web service for the following User entity:

public class User {
    private String id;
    private String firstName;
    private String lastName;
    private String email;

    // constructors, getters and setters
}

4.1. Routes

Let’s list the routes that make up our API:

  • GET /users —  get list of all users
  • GET /users/:id — get user with given id
  • POST /users — add a user
  • PUT /users/:id — edit a particular user
  • OPTIONS /users/:id — check whether a user exists with given id
  • DELETE /users/:id — delete a particular user

4.2. The User Service

Below is the UserService interface declaring the CRUD operations for the User entity:

public interface UserService {
 
    public void addUser (User user);
    
    public Collection<User> getUsers ();
    public User getUser (String id);
    
    public User editUser (User user) 
      throws UserException;
    
    public void deleteUser (String id);
    
    public boolean userExist (String id);
}

For demonstration purposes, we provide a Map implementation of this UserService interface in the GitHub code to simulate persistence. You can supply your own implementation with the database and persistence layer of your choice.
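
To give a rough idea of what that might look like, below is a minimal in-memory sketch. The class name MapUserService and the exception message are our own illustrative choices (and we assume UserException accepts a String message), so the actual implementation in the GitHub project may differ:

import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

public class MapUserService implements UserService {

    // in-memory store keyed by user id, simulating a persistence layer
    private final Map<String, User> users = new HashMap<>();

    public void addUser(User user) {
        users.put(user.getId(), user);
    }

    public Collection<User> getUsers() {
        return users.values();
    }

    public User getUser(String id) {
        return users.get(id);
    }

    public User editUser(User user) throws UserException {
        if (user.getId() == null || !users.containsKey(user.getId())) {
            throw new UserException("User not found");
        }
        users.put(user.getId(), user);
        return user;
    }

    public void deleteUser(String id) {
        users.remove(id);
    }

    public boolean userExist(String id) {
        return users.containsKey(id);
    }
}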

4.3. The JSON Response Structure

Below is the JSON structure of the responses used in our REST service:

{
    status: <STATUS>
    message: <TEXT-MESSAGE>
    data: <JSON-OBJECT>
}

The status field value can be either SUCCESS or ERROR. The data field will contain the JSON representation of the return data, such as a User or collection of Users.

When there is no data being returned, or if the status is ERROR, we will populate the message field to convey a reason for the error or lack of return data.

Let’s represent the above JSON structure using a Java class:

public class StandardResponse {
 
    private StatusResponse status;
    private String message;
    private JsonElement data;
    
    public StandardResponse(StatusResponse status) {
        // ...
    }
    public StandardResponse(StatusResponse status, String message) {
        // ...
    }
    public StandardResponse(StatusResponse status, JsonElement data) {
        // ...
    }
    
    // getters and setters
}

where StatusResponse is an enum defined as below:

public enum StatusResponse {
    SUCCESS ("Success"),
    ERROR ("Error");
 
    private String status;       
    // constructors, getters
}

5. Implementing RESTful Services

Now let’s implement the routes and handlers for our REST API.

5.1. Creating Controllers

The following Java class contains the routes for our API, including the verbs and paths and an outline of the handlers for each route:

public class SparkRestExample {
    public static void main(String[] args) {
        post("/users", (request, response) -> {
            //...
        });
        get("/users", (request, response) -> {
            //...
        });
        get("/users/:id", (request, response) -> {
            //...
        });
        put("/users/:id", (request, response) -> {
            //...
        });
        delete("/users/:id", (request, response) -> {
            //...
        });
        options("/users/:id", (request, response) -> {
            //...
        });
    }
}

We will show the full implementation of each route handler in the following subsections.

5.2. Add User

Below is the post method response handler which will add a User:

post("/users", (request, response) -> {
    response.type("application/json");
    User user = new Gson().fromJson(request.body(), User.class);
    userService.addUser(user);

    return new Gson()
      .toJson(new StandardResponse(StatusResponse.SUCCESS));
});

Note: In this example, the JSON representation of the User object is passed as the raw body of a POST request.

Let’s test the route:

Request:

POST http://localhost:4567/users
{
    "id": "1012", 
    "email": "your-email@your-domain.com", 
    "firstName": "Mac",
    "lastName": "Mason1"
}

Response:

{
    "status":"SUCCESS"
}

5.3. Get All Users

Below is the get method response handler which returns all users from the UserService:

get("/users", (request, response) -> {
    response.type("application/json");
    return new Gson().toJson(
      new StandardResponse(StatusResponse.SUCCESS,new Gson()
        .toJsonTree(userService.getUsers())));
});

Now let’s test the route:

Request:

GET http://localhost:4567/users

Response:

{
    "status":"SUCCESS",
    "data":[
        {
            "id":"1014",
            "firstName":"John",
            "lastName":"Miller",
            "email":"your-email@your-domain.com"
        },
        {
            "id":"1012",
            "firstName":"Mac",
            "lastName":"Mason1",
            "email":"your-email@your-domain.com"
        }
    ]
}

5.4. Get User by Id

Below is the get method response handler which returns a User with the given id:

get("/users/:id", (request, response) -> {
    response.type("application/json");
    return new Gson().toJson(
      new StandardResponse(StatusResponse.SUCCESS,new Gson()
        .toJsonTree(userService.getUser(request.params(":id")))));
});

Now let’s test the route:

Request:

GET http://localhost:4567/users/1012

Response:

{
    "status":"SUCCESS",
    "data":{
        "id":"1012",
        "firstName":"Mac",
        "lastName":"Mason1",
        "email":"your-email@your-domain.com"
    }
}

5.5. Edit a User

Below is the put method response handler, which edits the user having the id supplied in the route pattern:

put("/users/:id", (request, response) -> {
    response.type("application/json");
    User toEdit = new Gson().fromJson(request.body(), User.class);
    User editedUser = userService.editUser(toEdit);
            
    if (editedUser != null) {
        return new Gson().toJson(
          new StandardResponse(StatusResponse.SUCCESS,new Gson()
            .toJsonTree(editedUser)));
    } else {
        return new Gson().toJson(
          new StandardResponse(StatusResponse.ERROR,new Gson()
            .toJson("User not found or error in edit")));
    }
});

Note: In this example, the data are passed in the raw body of a PUT request as a JSON object whose property names match the fields of the User object to be edited.

Let’s test the route:

Request:

PUT http://localhost:4567/users/1012
{
    "lastName": "Mason"
}

Response:

{
    "status":"SUCCESS",
    "data":{
        "id":"1012",
        "firstName":"Mac",
        "lastName":"Mason",
        "email":"your-email@your-domain.com"
    }
}

5.6. Delete a User

Below is the delete method response handler, which will delete the User with the given id:

delete("/users/:id", (request, response) -> {
    response.type("application/json");
    userService.deleteUser(request.params(":id"));
    return new Gson().toJson(
      new StandardResponse(StatusResponse.SUCCESS, "user deleted"));
});

Now, let’s test the route:

Request:

DELETE http://localhost:4567/users/1012

Response:

{
    "status":"SUCCESS",
    "message":"user deleted"
}

5.7. Check if User Exists

The options method is a good choice for conditional checking. Below is the options method response handler which will check whether a User with the given id exists:

options("/users/:id", (request, response) -> {
    response.type("application/json");
    return new Gson().toJson(
      new StandardResponse(StatusResponse.SUCCESS, 
        (userService.userExist(
          request.params(":id"))) ? "User exists" : "User does not exist" ));
});

Now let’s test the route:

Request:

OPTIONS http://localhost:4567/users/1012

Response:

{
    "status":"SUCCESS",
    "message":"User exists"
}

6. Conclusion

In this article, we had a quick introduction to the Spark framework for rapid web development.

This framework is mainly promoted for generating microservices in Java. Node.js developers with Java knowledge who want to leverage libraries built on the JVM should feel at home using this framework.

And as always, you can find all the sources for this tutorial in the GitHub project.

Java 9 Convenience Factory Methods for Collections

1. Overview

Java 9 brings the long-awaited syntactic sugar for creating small unmodifiable Collection instances in a single concise line of code. As per JEP 269, new convenience factory methods will be included in JDK 9.

In this article, we will cover its usage along with the implementation details.

2. History and Motivation

Creating a small immutable Collection in Java using the traditional way is very verbose. Let’s take an example of a Set:

Set<String> set = new HashSet<>();
set.add("foo");
set.add("bar");
set.add("baz");
set = Collections.unmodifiableSet(set);

That’s too much code for such a simple task, and it should be possible to do it in a single expression.

The same is also true for a Map. However, for List, there’s a factory method:

List<String> list = Arrays.asList("foo", "bar", "baz");

Although this List creation is better than the constructor initialization, it is less obvious, as the common intuition would not be to look into the Arrays class for methods to create a List.

There are other ways to reduce verbosity like the double brace technique:

Set<String> set = Collections.unmodifiableSet(new HashSet<String>() {{
    add("foo"); add("bar"); add("baz");
}});

or by using Java 8 Streams:

Stream.of("foo", "bar", "baz")
  .collect(collectingAndThen(toSet(), Collections::unmodifiableSet));

The double brace technique is only a little less verbose but greatly reduces the readability.

The Java 8 version, though a one-line expression, has some problems too. First, it’s not obvious and intuitive; second, it’s still verbose; third, it involves the creation of unnecessary objects; and fourth, this method can’t be used to create a Map.

To summarize the shortcomings, none of the above approaches treat the specific use case of creating a small unmodifiable Collection as a first-class problem.

3. Description and Usage

Static methods have been provided on the List, Set, and Map interfaces which take the elements as arguments and return an instance of List, Set, and Map, respectively. The method is named of(…) for all three interfaces.

3.1. List and Set

The signatures and characteristics of the List and Set factory methods are the same:

static <E> List<E> of(E e1, E e2, E e3)
static <E> Set<E>  of(E e1, E e2, E e3)

Here’s the usage of these methods:

List<String> list = List.of("foo", "bar", "baz");
Set<String> set = Set.of("foo", "bar", "baz");

As you can see, it’s very simple, short and concise.

In the example, we have used the method that takes exactly three elements as parameters and returns a List / Set of size 3. But there are 12 overloaded versions of this method – eleven with 0 to 10 parameters and one with var-args:

static <E> List<E> of()
static <E> List<E> of(E e1)
static <E> List<E> of(E e1, E e2)
// ....and so on

static <E> List<E> of(E... elems)

For most practical purposes, 10 elements would be sufficient but if more are required, the var-args version can be used.

Now you may ask, what is the point of having 11 extra methods if there’s a var-args version that can work for any number of elements? The answer is performance. Every var-args method call implicitly creates an array. Having the overloaded methods avoids unnecessary object creation and the garbage collection overhead thereof.

During the creation of a Set using a factory method, if duplicate elements are passed as parameters, then IllegalArgumentException is thrown at runtime:

@Test(expected = IllegalArgumentException.class)
public void onDuplicateElem_IfIllegalArgExp_thenSuccess() {
    Set.of("foo", "bar", "baz", "foo");
}

An important point to note here is that since the factory methods use generics, primitive types are autoboxed.

If an array of a primitive type is passed, a List holding an array of that primitive type is returned. For example:

int[] arr = { 1, 2, 3, 4, 5 };
List<int[]> list = List.of(arr);

In this case, a List<int[]> of size 1 is returned and the element at index 0 contains the array.
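
If what we actually want is a List<Integer>, we can pass the values individually and let autoboxing do the work:

List<Integer> list = List.of(1, 2, 3, 4, 5); // a List<Integer> of size 5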

3.2. Map

The signature of Map factory method is:

static <K,V> Map<K,V> of(K k1, V v1, K k2, V v2, K k3, V v3)

and the usage:

Map<String, String> map = Map.of("foo", "a", "bar", "b", "baz", "c");

Similarly to List and Set, the of(…) method is overloaded to have 0 to 10 key-value pairs.

In the case of Map, there is a different method for more than 10 key-value pairs:

static <K,V> Map<K,V> ofEntries(Map.Entry<? extends K,? extends V>... entries)

and its usage:

Map<String, String> map = Map.ofEntries(
  new AbstractMap.SimpleEntry<>("foo", "a"),
  new AbstractMap.SimpleEntry<>("bar", "b"),
  new AbstractMap.SimpleEntry<>("baz", "c"));
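
Java 9 also ships a more compact static factory, Map.entry(k, v), which we can use instead of AbstractMap.SimpleEntry:

Map<String, String> map = Map.ofEntries(
  Map.entry("foo", "a"),
  Map.entry("bar", "b"),
  Map.entry("baz", "c"));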

Passing in duplicate keys would throw an IllegalArgumentException:

@Test(expected = IllegalArgumentException.class)
public void givenDuplicateKeys_ifIllegalArgExp_thenSuccess() {
    Map.of("foo", "a", "foo", "b");
}

Again, in the case of Map too, the primitive types are autoboxed.

4. Implementation Notes

The collections created using the factory methods are not the most commonly used implementations.

For example, the List is not an ArrayList and the Map is not a HashMap. They are different implementations, which were introduced in Java 9. These implementations are internal, and their constructors are not made public.

In this section, we will see some important implementation differences which are common to all the three types of collections.

4.1. Immutable

The collections created using the factory methods are immutable, and changing an element, adding new elements, or removing an element throws an UnsupportedOperationException:

@Test(expected = UnsupportedOperationException.class)
public void onElemAdd_ifUnSupportedOpExpnThrown_thenSuccess() {
    Set<String> set = Set.of("foo", "bar");
    set.add("baz");
}
@Test(expected = UnsupportedOperationException.class)
public void onElemModify_ifUnSupportedOpExpnThrown_thenSuccess() {
    List<String> list = List.of("foo", "bar");
    list.set(0, "baz");
}
@Test(expected = UnsupportedOperationException.class)
public void onElemRemove_ifUnSupportedOpExpnThrown_thenSuccess() {
    Map<String, String> map = Map.of("foo", "a", "bar", "b");
    map.remove("foo");
}

4.2. No null Element Allowed

In the case of List and Set, no elements can be null. In the case of a Map, neither keys nor values can be null. Passing a null argument throws a NullPointerException:

@Test(expected = NullPointerException.class)
public void onNullElem_ifNullPtrExpnThrown_thenSuccess() {
    List.of("foo", "bar", null);
}

4.3. Value-Based Instances

The instances created by the factory methods are value-based. This means that the factories are free to create a new instance or return an existing one. Hence, if we create Lists with the same values, they may or may not refer to the same object on the heap:

List<String> list1 = List.of("foo", "bar");
List<String> list2 = List.of("foo", "bar");

In this case, list1 == list2 may or may not evaluate to true depending on the JVM.
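
Regardless of identity, equality behaves as usual – two such lists with the same elements in the same order are always equal:

assertTrue(list1.equals(list2)); // always true
// list1 == list2, by contrast, may be true or false depending on the JVM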

4.4. Serialization

Collections created from factory methods are Serializable if the elements of the collection are Serializable.

5. Conclusion

In this article, we introduced the new factory methods for Collections in Java 9. We showed why this feature is a welcome change by going over some past methods for creating unmodifiable collections. We covered its usage and highlighted key points to be considered while using these methods. Finally, we clarified that these collections are different from the commonly used implementations and pointed out the key differences.

The complete source code for this article and the unit tests are available over on GitHub.
