Channel: Baeldung

[NEWS] AssertJ 3.6.X – Interview with Joel Costigliola


1. Introduction

AssertJ is a library that provides fluent assertions for Java. You can read more about it here and here.

Recently, the 3.6.0 version was released along with two small bug-fix releases 3.6.1 and 3.6.2.

Today, Joel Costigliola – the creator of the library – is with us and will tell you a little bit more about the release and future plans.

“We are trying to make AssertJ really community-oriented”

2. Versions 2.6.0 and 3.6.0 were released pretty much at the same time. What is the difference between them? 

2.x versions target Java 7 while 3.x target Java 8. Another way of seeing this is that 3.x = 2.x + Java 8 specific features.

3. What are the most notable changes/additions that appeared in 3.6.0/2.6.0?

2.6.0 ended up having various small features but no big additions. If I had to choose, the most interesting ones would be those related to suppressed exceptions:

  • hasSuppressedException()
  • hasNoSuppressedExceptions()

3.6.0 additionally got a way of checking multiple assertions on array/iterable/map entry elements:
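As a sketch of that style of assertion, using AssertJ's allSatisfy (treat the exact method set as something to verify against the 3.6.0 release notes):

```java
import static org.assertj.core.api.Assertions.assertThat;

import java.util.Arrays;
import java.util.List;

public class MultipleAssertionsExample {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Joel", "John");

        // run several assertions against every element of the iterable
        assertThat(names).allSatisfy(name -> {
            assertThat(name).startsWith("Jo");
            assertThat(name).hasSize(4);
        });
    }
}
```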

4. Since the release of 3.6.0 two bugfix releases appeared (3.6.1, 3.6.2). Can you tell us a little bit more what happened there and what needed to be fixed?

In 3.6.1, filteredOn(Predicate) was only working with List but not with Iterable, which was pretty bad.

In 3.6.2, we had not thought of extracting properties from Java 8 default getter methods; it turned out this did not work out of the box after some internal refactoring.

I asked users whether they could wait for the next release; the bug reporter told me he was ok to wait, but another user wanted it, so I released a new version. We are trying to make AssertJ really community-oriented, and since cutting a release is cheap (well, except for the documentation part) I usually don’t see any problem releasing.

5. Did you encounter any interesting technical challenges when working on the newest release?

I will point out a problem I encountered working on the next release, 3.7.0, which should be out in a few weeks.

Java 8 is picky about “ambiguous” method signatures. We added a new assertThat method that takes a ThrowingCallable (a simple class that is a Callable throwing an exception), and it turned out that Java 8 confuses it with another assertThat method that takes an Iterable!

That was the most surprising part to me, as I don’t see any ambiguity between the two.

6. Are you planning any new major release soon? Anything that will utilize Java 9 additions?

In the next few weeks to a month. We usually try to have releases every few months, or when there are major additions.

Pascal Schumacher, who has joined the AssertJ team, has done some work on Java 9 to check compatibility; a few things don’t work, mainly the ones that rely on introspection, since Java 9 changes the access rules. What we will do is start a 4.x branch that will be Java 9 focused, following the same strategy as 3.x vs 2.x: we will have 4.x = 3.x + Java 9 features.

Once 4.0 is officially released, we will likely drop active 2.x development but keep accepting PRs, as we do not have the capacity to maintain three versions in sync. By that I mean we port any change from the n.x version to the n+1.x version, so adding something in 2.x would need to be ported to both 3.x and 4.x, and that is too much work at the moment.


A Guide to Redis with Redisson


1. Overview

Redisson is a Redis client for Java. In this article, we will explore some of its features, and demonstrate how it could facilitate building distributed business applications.

Redisson constitutes an in-memory data grid that offers distributed Java objects and services backed by Redis. Its distributed in-memory data model allows sharing of domain objects and services across applications and servers.

This article will guide us through setting up Redisson, understanding how it operates, and exploring some of Redisson’s objects and services.

2. Maven Dependencies

Let’s get started by importing Redisson to our project by adding the section below to our pom.xml:

<dependency>
    <groupId>org.redisson</groupId>
    <artifactId>redisson</artifactId>
    <version>3.3.0</version>
</dependency>

The latest version of this dependency can be found here.

3. Configuration

Before we get started, we must ensure we have the latest version of Redis set up and running. If you don’t have Redis and you use Linux or macOS, you can follow the information here to get it set up. If you are a Windows user, you can set up Redis using this unofficial port.

We need to configure Redisson to connect to Redis. Redisson supports connections to the following Redis configurations:

  • Single node
  • Master with slave nodes
  • Sentinel nodes
  • Clustered nodes
  • Replicated nodes

Redisson supports Amazon Web Services (AWS) ElastiCache Cluster and Azure Redis Cache for Clustered and Replicated Nodes.

Let’s connect to a single node instance of Redis. This instance is running locally on the default port, 6379:

RedissonClient client = Redisson.create();

You can pass different configurations to the Redisson object’s create method. This could be configurations to have it connect to a different port, or maybe, to connect to a Redis cluster. This configuration could be in Java code or loaded from an external configuration file.

3.1. Java Configuration

Let’s configure Redisson in Java code:

Config config = new Config();
config.useSingleServer()
  .setAddress("127.0.0.1:6379");

RedissonClient client = Redisson.create(config);

We specify Redisson configurations in an instance of a Config object and then pass it to the create method. Above, we specified to Redisson that we want to connect to a single node instance of Redis. To do this we used the Config object’s useSingleServer method. This returns a reference to a SingleServerConfig object.

The SingleServerConfig object has settings that Redisson uses to connect to a single node instance of Redis. Here, we use its setAddress method to configure the address setting. This sets the address of the node we are connecting to. Some other settings include retryAttempts, connectionTimeout and clientName. These settings are configured using their corresponding setter methods.
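As a minimal sketch, these settings can be chained on the same builder (the setter names here, such as setRetryAttempts, setConnectTimeout and setClientName, are assumptions to verify against the SingleServerConfig API of your Redisson version):

```java
Config config = new Config();
config.useSingleServer()
  .setAddress("127.0.0.1:6379")
  .setRetryAttempts(5)           // retry a failed command up to 5 times
  .setConnectTimeout(10000)      // connection timeout, in milliseconds
  .setClientName("baeldung-demo");

RedissonClient client = Redisson.create(config);
```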

We can configure Redisson for different Redis configurations in a similar way using the Config object’s following methods:

  • useSingleServer – for single node instance. Get single node settings here
  • useMasterSlaveServers – for master with slave nodes. Get master-slave node settings here
  • useSentinelServers – for sentinel nodes. Get sentinel node settings here
  • useClusterServers – for clustered nodes. Get clustered node settings here
  • useReplicatedServers – for replicated nodes. Get replicated node settings here

3.2. File Configuration

Redisson can load configurations from external JSON or YAML files:

Config config = Config.fromJSON(new File("singleNodeConfig.json"));  
RedissonClient client = Redisson.create(config);

The Config object’s fromJSON method can load configurations from a string, file, input stream or URL.

Here is the sample configuration in the singleNodeConfig.json file:

{
    "singleServerConfig": {
        "idleConnectionTimeout": 10000,
        "pingTimeout": 1000,
        "connectTimeout": 10000,
        "timeout": 3000,
        "retryAttempts": 3,
        "retryInterval": 1500,
        "reconnectionTimeout": 3000,
        "failedAttempts": 3,
        "password": null,
        "subscriptionsPerConnection": 5,
        "clientName": null,
        "address": "redis://127.0.0.1:6379",
        "subscriptionConnectionMinimumIdleSize": 1,
        "subscriptionConnectionPoolSize": 50,
        "connectionMinimumIdleSize": 10,
        "connectionPoolSize": 64,
        "database": 0,
        "dnsMonitoring": false,
        "dnsMonitoringInterval": 5000
    },
    "threads": 0,
    "nettyThreads": 0,
    "codec": null,
    "useLinuxNativeEpoll": false
}

Here is a corresponding YAML configuration file:

singleServerConfig:
    idleConnectionTimeout: 10000
    pingTimeout: 1000
    connectTimeout: 10000
    timeout: 3000
    retryAttempts: 3
    retryInterval: 1500
    reconnectionTimeout: 3000
    failedAttempts: 3
    password: null
    subscriptionsPerConnection: 5
    clientName: null
    address: "redis://127.0.0.1:6379"
    subscriptionConnectionMinimumIdleSize: 1
    subscriptionConnectionPoolSize: 50
    connectionMinimumIdleSize: 10
    connectionPoolSize: 64
    database: 0
    dnsMonitoring: false
    dnsMonitoringInterval: 5000
threads: 0
nettyThreads: 0
codec: !<org.redisson.codec.JsonJacksonCodec> {}
useLinuxNativeEpoll: false

We can configure other Redis configurations from a file in a similar manner, using settings specific to that configuration; each configuration has its own JSON and YAML file format.

To save a Java configuration to JSON or YAML format, we can use the toJSON or toYAML methods of the Config object:

Config config = new Config();
// ... we configure multiple settings here in Java
String jsonFormat = config.toJSON();
String yamlFormat = config.toYAML();

Now that we know how to configure Redisson, let’s look at how Redisson executes operations.

4. Operation

Redisson supports synchronous, asynchronous and reactive interfaces. Operations over these interfaces are thread-safe.

All entities (objects, collections, locks and services) generated by a RedissonClient have synchronous and asynchronous methods. The asynchronous variants normally bear the same name as their synchronous counterparts, with “Async” appended. Let’s look at a synchronous method of the RAtomicLong object:

RedissonClient client = Redisson.create();
RAtomicLong myLong = client.getAtomicLong("myLong");

The asynchronous variant of the synchronous compareAndSet method would be:

RFuture<Boolean> isSet = myLong.compareAndSetAsync(6, 27);

The asynchronous variant of the method returns an RFuture object. We can set listeners on this object to get back the result when it becomes available:

isSet.handle((result, exception) -> {
    // handle the result or exception here.
});

To generate reactive objects, we would need to use the RedissonReactiveClient:

RedissonReactiveClient client = Redisson.createReactive();
RAtomicLongReactive myLong = client.getAtomicLong("myLong");

Publisher<Boolean> isSetPublisher = myLong.compareAndSet(5, 28);

This method returns reactive objects based on the Reactive Streams Standard for Java 9.

Let’s explore some of the distributed objects provided by Redisson.

5. Objects

An individual instance of a Redisson object is serialized and stored in any of the available Redis nodes backing Redisson. These objects could be distributed in a cluster across multiple nodes and can be accessed by a single application or multiple applications/servers.

These distributed objects follow specifications from the java.util.concurrent.atomic package. They support lock-free, thread-safe and atomic operations on objects stored in Redis. Data consistency between applications/servers is ensured as values are not updated while another application is reading the object.

Redisson objects are bound to Redis keys. We can manage these keys through the RKeys interface. We access our Redisson objects using these keys.

We can get all keys:

RKeys keys = client.getKeys();

We can extract all key names as iterable string collections:

Iterable<String> allKeys = keys.getKeys();

We can get keys conforming to a pattern:

Iterable<String> keysByPattern = keys.getKeysByPattern("key*");

The RKeys interface also allows deleting keys, deleting keys by pattern and other useful key-based operations that we could use to manage our keys and objects.
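A short sketch of such key management, using the client from above (the delete and deleteByPattern methods are assumptions to check against the RKeys API; the key names are made up):

```java
RKeys keys = client.getKeys();

// delete a single key and the object bound to it
keys.delete("obsoleteKey");

// delete every key matching a pattern; returns the number of keys removed
long removed = keys.deleteByPattern("temp:*");
```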

Distributed objects provided by Redisson include:

  • ObjectHolder
  • BinaryStreamHolder
  • GeospatialHolder
  • BitSet
  • AtomicLong
  • AtomicDouble
  • Topic
  • BloomFilter
  • HyperLogLog

Let’s take a look at three of these objects: ObjectHolder, AtomicLong, and Topic.

5.1. Object Holder

Represented by the RBucket class, this object can hold any type of object. This object has a maximum size of 512MB:

RBucket<Ledger> bucket = client.getBucket("ledger");
bucket.set(new Ledger());
Ledger ledger = bucket.get();

The RBucket object can perform atomic operations such as compareAndSet and getAndSet on objects it holds.
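A brief sketch of these atomic operations (the values are illustrative):

```java
RBucket<String> bucket = client.getBucket("greeting");
bucket.set("hello");

// replace the value only if it currently equals "hello"
boolean swapped = bucket.compareAndSet("hello", "hi");

// atomically set a new value and return the previous one
String previous = bucket.getAndSet("hey");
```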

5.2. AtomicLong

Represented by the RAtomicLong class, this object closely resembles the java.util.concurrent.atomic.AtomicLong class and represents a long value that can be updated atomically:

RAtomicLong atomicLong = client.getAtomicLong("myAtomicLong");
atomicLong.set(5);
atomicLong.incrementAndGet();

5.3. Topic

The Topic object supports Redis’ publish/subscribe mechanism. To listen for published messages:

RTopic<CustomMessage> subscribeTopic = client.getTopic("baeldung");
subscribeTopic.addListener(
  (channel, customMessage) 
  -> future.complete(customMessage.getMessage()));

Above, the Topic is registered to listen to messages from the “baeldung” channel. We then add a listener to the topic to handle incoming messages from that channel. We can add multiple listeners to a channel.

Let’s publish messages to the “baeldung” channel:

RTopic<CustomMessage> publishTopic = client.getTopic("baeldung");
long clientsReceivedMessage
  = publishTopic.publish(new CustomMessage("This is a message"));

This could be published from another application or server. The CustomMessage object will be received by the listener and processed as defined in the onMessage method.

We can learn more about other Redisson objects here.

6. Collections

We handle Redisson collections in the same fashion we handle objects.

Distributed collections provided by Redisson include:

  • Map
  • Multimap
  • Set
  • SortedSet
  • ScoredSortedSet
  • LexSortedSet
  • List
  • Queue
  • Deque
  • BlockingQueue
  • BoundedBlockingQueue
  • BlockingDeque
  • BlockingFairQueue
  • DelayedQueue
  • PriorityQueue
  • PriorityDeque

Let’s take a look at three of these collections: Map, Set, and List.

6.1. Map

Redisson-based maps implement the java.util.concurrent.ConcurrentMap and java.util.Map interfaces. Redisson has four map implementations: RMap, RMapCache, RLocalCachedMap and RClusteredMap.

Let’s create a map with Redisson:

RMap<String, Ledger> map = client.getMap("ledger");
Ledger newLedger = map.put("123", new Ledger());

RMapCache supports map entry eviction. RLocalCachedMap allows local caching of map entries. RClusteredMap allows data from a single map to be split across Redis cluster master nodes.
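For example, entry eviction with RMapCache can be sketched as follows, assuming the TTL-accepting put overload (this fragment also needs java.util.concurrent.TimeUnit):

```java
RMapCache<String, Ledger> mapCache = client.getMapCache("ledgerCache");

// the entry is evicted automatically 60 seconds after insertion
mapCache.put("123", new Ledger(), 60, TimeUnit.SECONDS);
```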

We can learn more about Redisson maps here.

6.2. Set

The Redisson-based Set implements the java.util.Set interface.

Redisson has three Set implementations, RSet, RSetCache, and RClusteredSet, with similar functionality to their map counterparts.

Let’s create a Set with Redisson:

RSet<Ledger> ledgerSet = client.getSet("ledgerSet");
ledgerSet.add(new Ledger());

We can learn more about Redisson sets here.

6.3. List

Redisson-based Lists implement the java.util.List interface.

Let’s create a List with Redisson:

RList<Ledger> ledgerList = client.getList("ledgerList");
ledgerList.add(new Ledger());

We can learn more about other Redisson collections here.

7. Locks and Synchronizers

Redisson’s distributed locks allow for thread synchronization across applications/servers. Redisson’s list of locks and synchronizers include:

  • Lock
  • FairLock
  • MultiLock
  • ReadWriteLock
  • Semaphore
  • PermitExpirableSemaphore
  • CountDownLatch

Let’s take a look at Lock and MultiLock.

7.1. Lock

Redisson’s Lock implements the java.util.concurrent.locks.Lock interface.

Let’s implement a lock, represented by the RLock class:

RLock lock = client.getLock("lock");
lock.lock();
// perform some long operations...
lock.unlock();
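In practice we often prefer a timed acquisition; a sketch using the tryLock variant (it throws InterruptedException, which the caller must handle or declare):

```java
RLock lock = client.getLock("lock");

// wait up to 10 seconds to acquire; auto-release after 60 seconds as a safety net
boolean acquired = lock.tryLock(10, 60, TimeUnit.SECONDS);
if (acquired) {
    try {
        // perform some long operations...
    } finally {
        lock.unlock();
    }
}
```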

7.2. MultiLock

Redisson’s RedissonMultiLock groups multiple RLock objects and treats them as a single lock:

RLock lock1 = clientInstance1.getLock("lock1");
RLock lock2 = clientInstance2.getLock("lock2");
RLock lock3 = clientInstance3.getLock("lock3");

RedissonMultiLock lock = new RedissonMultiLock(lock1, lock2, lock3);
lock.lock();
// perform long running operation...
lock.unlock();

We can learn more about other locks here.

8. Services

Redisson exposes four types of distributed services: Remote Service, Live Object Service, Executor Service and Scheduled Executor Service. Let’s look at the Remote Service and the Live Object Service.

8.1. Remote Service

This service provides Java remote method invocation facilitated by Redis. A Redisson remote service consists of a server-side (worker instance) and client-side implementation. The server-side implementation executes a remote method invoked by the client. Calls from a remote service can be synchronous or asynchronous.
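The snippets that follow assume a simple service interface along these lines (its exact shape is not shown here, so this definition is illustrative):

```java
import java.util.List;

public interface LedgerServiceInterface {
    List<String> getEntries(int count);
}
```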

The server-side registers an interface for remote invocation:

RRemoteService remoteService = client.getRemoteService();
LedgerServiceImpl ledgerServiceImpl = new LedgerServiceImpl();

remoteService.register(LedgerServiceInterface.class, ledgerServiceImpl);

The client-side calls a method of the registered remote interface:

RRemoteService remoteService = client.getRemoteService();
LedgerServiceInterface ledgerService
  = remoteService.get(LedgerServiceInterface.class);

List<String> entries = ledgerService.getEntries(10);

We can learn more about remote services here.

8.2. Live Object Service

Redisson Live Objects extend the concept of standard Java objects that could only be accessed from a single JVM to enhanced Java objects that could be shared between different JVMs in different machines. This is accomplished by mapping an object’s fields to a Redis hash. This mapping is made through a runtime-constructed proxy class. Field getters and setters are mapped to Redis hget/hset commands.

Redisson Live Objects support atomic field access as a result of Redis’ single-threaded nature.

Creating a Live Object is simple:

@REntity
public class LedgerLiveObject {
    @RId
    private String name;

    // getters and setters...
}

We annotate our class with @REntity and a unique or identifying field with @RId. Once we have done this, we can use our Live Object in our application:

RLiveObjectService service = client.getLiveObjectService();

LedgerLiveObject ledger = new LedgerLiveObject();
ledger.setName("ledger1");

ledger = service.persist(ledger);

We create our Live Object like standard Java objects using the new keyword. We then use an instance of RLiveObjectService to save the object to Redis using its persist method.

If the object has previously been persisted to Redis, we can retrieve the object:

LedgerLiveObject returnLedger
  = service.get(LedgerLiveObject.class, "ledger1");

We use the RLiveObjectService to get our Live Object using the field annotated with @RId.

We can learn more about Redisson Live Objects here.

We can also learn more about other Redisson services here.

9. Pipelining

Redisson supports pipelining. Multiple operations can be batched as a single atomic operation. This is facilitated by the RBatch class. Multiple commands are aggregated against an RBatch object instance before they are executed:

RBatch batch = client.createBatch();
batch.getMap("ledgerMap").fastPutAsync("1", "2");
batch.getMap("ledgerMap").putAsync("2", "5");

List<?> result = batch.execute();

10. Scripting

Redisson supports Lua scripting. We can execute Lua scripts against Redis:

client.getBucket("foo").set("bar");
String result = client.getScript().eval(Mode.READ_ONLY,
  "return redis.call('get', 'foo')", RScript.ReturnType.VALUE);

11. Low-Level Client

It is possible that we might want to perform Redis operations not yet supported by Redisson. Redisson provides a low-level client that allows execution of native Redis commands:

RedisClient client = new RedisClient("localhost", 6379);
RedisConnection conn = client.connect();
conn.sync(StringCodec.INSTANCE, RedisCommands.SET, "test", 0);

conn.closeAsync();
client.shutdown();

The low-level client also supports asynchronous operations.

12. Conclusion

This article showcased Redisson and some of the features that make it ideal for developing distributed applications. We explored its distributed objects, collections, locks and services. We also explored some of its other features such as pipelining, scripting and its low-level client.

Redisson also provides integration with other frameworks such as the JCache API, Spring Cache, Hibernate Cache and Spring Sessions. We can learn more about its integration with other frameworks here.

You can find code samples in the GitHub project.

Guide to Google Guice


1. Introduction

This article will examine the fundamentals of Google Guice. We’ll look at approaches to completing basic Dependency Injection (DI) tasks in Guice.

We will also compare and contrast the Guice approach to those of more established DI frameworks like Spring and Contexts and Dependency Injection (CDI).

This article presumes the reader has an understanding of the fundamentals of the Dependency Injection pattern.

2. Setup

In order to use Google Guice in your Maven project, you will need to add the following dependency to your pom.xml:

<dependency>
    <groupId>com.google.inject</groupId>
    <artifactId>guice</artifactId>
    <version>4.1.0</version>
</dependency>

There is also a collection of Guice extensions (we will cover those a little later) here, as well as third-party modules to extend the capabilities of Guice (mainly by providing integration to more established Java frameworks).

3. Basic Dependency Injection With Guice

3.1. Our Sample Application

We will be working with a scenario where we design classes that support three means of communication in a helpdesk business: Email, SMS, and IM.

Consider the class:

public class Communication {
 
    @Inject 
    private Logger logger;
    
    @Inject
    private Communicator communicator;

    public Communication(Boolean keepRecords) {
        if (keepRecords) {
            System.out.println("Message logging enabled");
        }
    }
 
    public boolean sendMessage(String message) {
        return communicator.sendMessage(message);
    }

}

This Communication class is the basic unit of communication. An instance of this class is used to send messages via the available communications channels. As shown above, Communication has a Communicator which we use to do the actual message transmission.

The basic entry point into Guice is the Injector:

public static void main(String[] args){
    Injector injector = Guice.createInjector(new BasicModule());
    Communication comms = injector.getInstance(Communication.class);
}

This main method retrieves an instance of our Communication class. It also introduces a fundamental concept of Guice: the Module (using BasicModule in this example). The Module is the basic unit of definition of bindings (or wiring, as it’s known in Spring).

Guice has adopted a code-first approach for dependency injection and management so you won’t be dealing with a lot of XML out-of-the-box.

In the example above, the dependency tree of Communication will be implicitly injected using a feature called just-in-time binding, provided the classes have the default no-arg constructor. This has been a feature of Guice since its inception, but has only been available in Spring since v4.3.

3.2. Guice Bindings

Binding is to Guice as wiring is to Spring. With bindings, you define how Guice is going to inject dependencies into a class.

A binding is defined in an implementation of com.google.inject.AbstractModule:

public class BasicModule extends AbstractModule {
 
    @Override
    protected void configure() {
        bind(Communicator.class).to(DefaultCommunicatorImpl.class);
    }
}

This module implementation specifies that an instance of DefaultCommunicatorImpl is to be injected wherever a Communicator variable is found.

Another incarnation of this mechanism is the named binding. Consider the following variable declaration:

@Inject @Named("DefaultCommunicator")
Communicator communicator;

For this, we will have the following binding definition:

@Override
protected void configure() {
    bind(Communicator.class)
      .annotatedWith(Names.named("DefaultCommunicator"))
      .to(Communicator.class);
}

This binding will provide an instance of Communicator to a variable annotated with the @Named(“DefaultCommunicator”) annotation.

You’ll notice the @Inject and @Named annotations appear to be borrowed from JavaEE’s CDI, and they are. They live in the com.google.inject.* packages, so you should be careful to import from the right package when using an IDE.

Tip: While we just said to use the Guice-provided @Inject and @Named, it’s worthwhile to note that Guice does provide support for javax.inject.Inject and javax.inject.Named, among other JavaEE annotations.

You can also inject a dependency that doesn’t have a default no-arg constructor using constructor binding:

public class BasicModule extends AbstractModule {
 
    @Override
    protected void configure() {
        bind(Boolean.class).toInstance(true);
        try {
            bind(Communication.class).toConstructor(
              Communication.class.getConstructor(Boolean.TYPE));
        } catch (NoSuchMethodException e) {
            // getConstructor throws a checked exception; report it to Guice
            addError(e);
        }
    }
}

The snippet above will inject an instance of Communication using the constructor that takes a boolean argument. We supply the true argument to the constructor by defining an untargeted binding of the Boolean class.

This untargeted binding will be eagerly supplied to any constructor in the binding that accepts a boolean parameter. With this approach, all dependencies of Communication are injected.

Another approach to constructor-specific binding is the instance binding, where we provide an instance directly in the binding:

public class BasicModule extends AbstractModule {
 
    @Override
    protected void configure() {
        bind(Communication.class)
          .toInstance(new Communication(true));
    }    
}

This binding will provide an instance of the Communication class wherever a Communication variable is declared.

In this case, however, the dependency tree of the class will not be automatically wired. You should limit the use of this mode to cases where no heavy initialization or dependency injection is necessary.

4. Types of Dependency Injection

Guice supports the standard types of injections you would have come to expect with the DI pattern. In the Communicator class, we need to inject different types of CommunicationMode.

4.1. Field Injection

@Inject @Named("SMSComms")
CommunicationMode smsComms;

Use the optional @Named annotation as a qualifier to implement targeted injection based on the name.

4.2. Method Injection

Here we use a setter method to achieve the injection:

@Inject
public void setEmailCommunicator(@Named("EmailComms") CommunicationMode emailComms) {
    this.emailComms = emailComms;
}

4.3. Constructor Injection

You can also inject dependencies using a constructor:

@Inject
public Communication(@Named("IMComms") CommunicationMode imComms) {
    this.imComms = imComms;
}

4.4. Implicit Injections

Guice will implicitly inject some general purpose components like the Injector and an instance of java.util.logging.Logger, among others. You’ll notice we are using loggers all through the samples, but you won’t find an actual binding for them.

5. Scoping in Guice

Guice supports the scopes and scoping mechanisms we have grown used to in other DI frameworks. Guice defaults to providing a new instance of a defined dependency.

5.1. Singleton

Let’s inject a singleton into our application:

bind(Communicator.class).annotatedWith(Names.named("AnotherCommunicator"))
  .to(Communicator.class).in(Scopes.SINGLETON);

The in(Scopes.SINGLETON) call specifies that any Communicator field annotated with @Named(“AnotherCommunicator”) will get a singleton injected. This singleton is lazily initialized by default.

5.2. Eager Singleton

Now, let’s inject an eager singleton:

bind(Communicator.class).annotatedWith(Names.named("AnotherCommunicator"))
  .to(Communicator.class)
  .asEagerSingleton();

The asEagerSingleton() call defines the singleton as eagerly instantiated.

In addition to these two scopes, Guice supports custom scopes as well as the web-only @RequestScoped and @SessionScoped annotations, supplied by JavaEE (there are no Guice-supplied versions of those annotations).

6. Aspect Oriented Programming in Guice

Guice is compliant with the AOPAlliance’s specifications for aspect-oriented programming. We can implement the quintessential logging interceptor, which we will use to track message sending in our example, in only four steps.

Step 1 –  Implement the AOPAlliance’s MethodInterceptor:

public class MessageLogger implements MethodInterceptor {

    @Inject
    Logger logger;

    @Override
    public Object invoke(MethodInvocation invocation) throws Throwable {
        Object[] objectArray = invocation.getArguments();
        for (Object object : objectArray) {
            logger.info("Sending message: " + object.toString());
        }
        return invocation.proceed();
    }
}

Step 2 – Define a plain java annotation:

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface MessageSentLoggable {
}

Step 3 – Define a binding for a Matcher:

Matcher is a Guice class that we use to specify the components our AOP annotation will apply to. In this case, we want the annotation to apply to implementations of CommunicationMode:

public class AOPModule extends AbstractModule {

    @Override
    protected void configure() {
        bindInterceptor(
            Matchers.any(),
            Matchers.annotatedWith(MessageSentLoggable.class),
            new MessageLogger()
        );
    }
}

We have specified a Matcher here that will apply our MessageLogger interceptor to any class that has the MessageSentLoggable annotation applied to its methods.

Step 4 – Apply our annotation to our CommunicationMode and load our Module

@Override
@MessageSentLoggable
public boolean sendMessage(String message) {
    logger.info("SMS message sent");
    return true;
}

public static void main(String[] args) {
    Injector injector = Guice.createInjector(new BasicModule(), new AOPModule());
    Communication comms = injector.getInstance(Communication.class);
}

7. Conclusion

Having looked at basic Guice functionality, we can see the parallels between Guice and Spring.

Along with its support for JSR-330, Guice aims to be an injection-focused DI framework (whereas Spring provides a whole ecosystem for programming convenience not necessarily just DI), targeted at developers who want DI flexibility.

Guice is also highly extensible, allowing programmers to write portable plugins that result in flexible and creative uses of the framework. This is in addition to the extensive integration that Guice already provides for most popular frameworks and platforms like Servlets, JSF, JPA, and OSGi, to name a few.

You can find all of the source code used in this tutorial in our GitHub project.

Multiple Entry Points in Spring Security


1. Overview

In this quick tutorial, we’re going to take a look at how to define multiple entry points in a Spring Security application.

This mainly entails defining multiple http blocks in an XML configuration file or multiple HttpSecurity instances by extending the WebSecurityConfigurerAdapter class multiple times.

2. Maven Dependencies

For development, we will need the following dependencies:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-security</artifactId>
    <version>1.5.2.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-thymeleaf</artifactId>
    <version>1.5.2.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <version>1.5.2.RELEASE</version>
</dependency>    
<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-test</artifactId>
    <version>4.2.2.RELEASE</version>
</dependency>

The latest versions of spring-boot-starter-security, spring-boot-starter-thymeleaf, spring-boot-starter-test, and spring-security-test can be downloaded from Maven Central.

3. Defining Multiple Entry Points

3.1. Java Configuration

Let’s define the main configuration class that will hold a user source:

@Configuration
@EnableWebSecurity
public class MultipleEntryPointsSecurityConfig {

    @Bean
    public UserDetailsService userDetailsService() throws Exception {
        InMemoryUserDetailsManager manager = new InMemoryUserDetailsManager();
        manager.createUser(User
          .withUsername("user")
          .password("userPass")
          .roles("USER").build());
        manager.createUser(User
          .withUsername("admin")
          .password("adminPass")
          .roles("ADMIN").build());
        return manager;
    }
}

When using Java configuration, the way to define multiple security realms is to define multiple configuration classes extending the WebSecurityConfigurerAdapter – each having its own security configuration. These can be static classes inside the main @Configuration class.

The main motivation for having multiple entry points in one application is if there are different types of users that can access different portions of the application.

Let’s look at an example – and define a configuration with three entry points, each having different permissions and authentication modes: one for administrative users using HTTP Basic Authentication, one for regular users that use form authentication, and one for guest users that do not require authentication.

The entry point defined for administrative users secures URLs of the form /admin/** to only allow users with a role of ADMIN and requires HTTP Basic Authentication:

@Configuration
@Order(1)
public static class App1ConfigurationAdapter extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.antMatcher("/admin/**")
            .authorizeRequests().anyRequest().hasRole("ADMIN")
            .and().httpBasic()
            .and().exceptionHandling().accessDeniedPage("/403");
    }
}

The @Order annotation on each static class indicates the order in which the configurations will be considered to find one that matches the requested URL. The order value for each class must be unique.
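Conceptually, the configurations behave like an ordered list of matchers where the first match wins. The following framework-free sketch illustrates this selection logic (the class and method names here are hypothetical, purely for illustration — Spring Security's actual matching is done internally by its filter chain proxy):

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// Simplified illustration of how Spring Security selects a configuration:
// the adapters are consulted in @Order sequence and the first antMatcher
// pattern that matches the request URL wins.
public class ChainSelectionDemo {

    static class Chain {
        final int order;
        final String prefix;
        final String name;

        Chain(int order, String prefix, String name) {
            this.order = order;
            this.prefix = prefix;
            this.name = name;
        }
    }

    // Deliberately unsorted here; the @Order value, not declaration
    // order, decides precedence.
    private static final List<Chain> CHAINS = Arrays.asList(
        new Chain(3, "/guest/", "guestChain"),
        new Chain(1, "/admin/", "adminChain"),
        new Chain(2, "/user/", "userChain"));

    // Returns the name of the first matching chain, in @Order sequence.
    public static String select(String url) {
        return CHAINS.stream()
          .sorted(Comparator.comparingInt(c -> c.order))
          .filter(c -> url.startsWith(c.prefix))
          .map(c -> c.name)
          .findFirst()
          .orElse("no matching chain");
    }

    public static void main(String[] args) {
        System.out.println(select("/admin/myAdminPage")); // adminChain
        System.out.println(select("/guest/myGuestPage")); // guestChain
    }
}
```

This is also why unique, well-chosen order values matter: if the /guest/** configuration had a lower order and a broader pattern, it could shadow the more specific chains.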

Next, let’s define the configuration for URLs of the form /user/** that can be accessed by regular users with a USER role using form authentication:

@Configuration
@Order(2)
public static class App2ConfigurationAdapter extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.antMatcher("/user/**")
            .authorizeRequests().anyRequest().hasRole("USER")
            .and()
            .formLogin().loginPage("/userLogin").loginProcessingUrl("/user/login")
            .failureUrl("/userLogin?error=loginError")
            .defaultSuccessUrl("/user/myUserPage")
            .and()
            .logout().logoutUrl("/user/logout")
            .logoutSuccessUrl("/multipleHttpLinks")
            .deleteCookies("JSESSIONID")
            .and()
            .exceptionHandling().accessDeniedPage("/403")
            .and()
            .csrf().disable();
    }
}

This configuration will also require defining the /userLogin MVC mapping and a page that contains a standard login form.

For the form authentication, it’s very important to remember that any URL necessary for the configuration, such as the login processing URL, also needs to follow the /user/** format or be otherwise configured to be accessible.

Both of the above configurations will redirect to a /403 URL if a user without the appropriate role attempts to access a protected URL.

Finally, let’s define the third configuration for URLs of the form /guest/** that will allow all types of users, including unauthenticated ones:

@Configuration
@Order(3)
public static class App3ConfigurationAdapter extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.antMatcher("/guest/**").authorizeRequests().anyRequest().permitAll();
    }
}

3.2. XML Configuration

Let’s take a look at the equivalent XML configuration for the three HttpSecurity instances in the previous section.

This will contain three separate XML <http> blocks.

For the /admin/** URLs the XML configuration will be:

<security:http pattern="/admin/**" use-expressions="true" auto-config="true">
    <security:intercept-url pattern="/**" access="hasRole('ROLE_ADMIN')"/>
    <security:http-basic/>
    <security:access-denied-handler error-page="/403"/>
</security:http>

Of note here is that if using XML configuration, the roles have to be of the form ROLE_<ROLE_NAME>.

The configuration for the /user/** URLs is:

<security:http pattern="/user/**" use-expressions="true" auto-config="true">
    <security:intercept-url pattern="/**" access="hasRole('ROLE_USER')"/>
    <security:form-login login-page="/userLogin" login-processing-url="/user/login" 
      authentication-failure-url="/userLogin?error=loginError"
      default-target-url="/user/myUserPage"/>
    <security:csrf disabled="true"/>
    <security:access-denied-handler error-page="/403"/>
    <security:logout logout-url="/user/logout" delete-cookies="JSESSIONID" 
      logout-success-url="/multipleHttpLinks"/>
</security:http>

For the /guest/** URLs we will have the http element:

<security:http pattern="/**" use-expressions="true" auto-config="true">
    <security:intercept-url pattern="/guest/**" access="permitAll()"/>  
</security:http>

Also important here is that at least one XML <http> block must match the /** pattern.

4. Accessing Protected URLs

4.1. MVC Configuration

Let’s create request mappings that match the URL patterns we have secured:

@Controller
public class PagesController {

    @RequestMapping("/admin/myAdminPage")
    public String getAdminPage() {
        return "multipleHttpElems/myAdminPage";
    }

    @RequestMapping("/user/myUserPage")
    public String getUserPage() {
        return "multipleHttpElems/myUserPage";
    }

    @RequestMapping("/guest/myGuestPage")
    public String getGuestPage() {
        return "multipleHttpElems/myGuestPage";
    }

    @RequestMapping("/403")
    public String getAccessDeniedPage() {
        return "403";
    }
    @RequestMapping("/multipleHttpLinks")
    public String getMultipleHttpLinksPage() {
        return "multipleHttpElems/multipleHttpLinks";
    }
}

The /multipleHttpLinks mapping will return a simple HTML page with links to the protected URLs:

<a th:href="@{/admin/myAdminPage}">Admin page</a>
<a th:href="@{/user/myUserPage}">User page</a>
<a th:href="@{/guest/myGuestPage}">Guest page</a>

Each of the HTML pages corresponding to the protected URLs will have a simple text and a back link:

Welcome admin!

<a th:href="@{/multipleHttpLinks}" >Back to links</a>

4.2. Initializing the Application

We will run our example as a Spring Boot application, so let’s define a class with the main method:

@SpringBootApplication
public class MultipleEntryPointsApplication {
    public static void main(String[] args) {
        SpringApplication.run(MultipleEntryPointsApplication.class, args);
    }
}

If we want to use the XML configuration, we also need to add the @ImportResource({“classpath*:spring-security-multiple-entry.xml”}) annotation to our main class.

4.3. Testing the Security Configuration

Let’s set up a JUnit test class that we can use to test our protected URLs:

@RunWith(SpringRunner.class)
@WebAppConfiguration
@SpringBootTest(classes = MultipleEntryPointsApplication.class)
public class MultipleEntryPointsTest {
 
    @Autowired
    private WebApplicationContext wac;

    @Autowired
    private FilterChainProxy springSecurityFilterChain;

    private MockMvc mockMvc;

    @Before
    public void setup() {
        this.mockMvc = MockMvcBuilders.webAppContextSetup(this.wac)
          .addFilter(springSecurityFilterChain).build();
    }
}

Next, let’s test the URLs using the admin user.

When requesting the /admin/myAdminPage URL without HTTP Basic Authentication, we should expect to receive an Unauthorized status code, and after adding the authentication the status code should be 200 OK.

If attempting to access the /user/myUserPage URL with the admin user, we should receive status 403 Forbidden:

@Test
public void whenTestAdminCredentials_thenOk() throws Exception {
    mockMvc.perform(get("/admin/myAdminPage")).andExpect(status().isUnauthorized());

    mockMvc.perform(get("/admin/myAdminPage")
      .with(httpBasic("admin", "adminPass"))).andExpect(status().isOk());

    mockMvc.perform(get("/user/myUserPage")
      .with(user("admin").password("adminPass").roles("ADMIN")))
      .andExpect(status().isForbidden());
}

Let’s create a similar test using the regular user credentials to access the URLs:

@Test
public void whenTestUserCredentials_thenOk() throws Exception {
    mockMvc.perform(get("/user/myUserPage")).andExpect(status().isFound());

    mockMvc.perform(get("/user/myUserPage")
      .with(user("user").password("userPass").roles("USER")))
      .andExpect(status().isOk());

    mockMvc.perform(get("/admin/myAdminPage")
      .with(user("user").password("userPass").roles("USER")))
      .andExpect(status().isForbidden());
}

In the second test, we can see that missing the form authentication will result in a status of 302 Found instead of Unauthorized, as Spring Security will redirect to the login form.

Finally, let’s create a test in which we access the /guest/myGuestPage URL with all three types of authentication and verify that we receive a status of 200 OK:

@Test
public void givenAnyUser_whenGetGuestPage_thenOk() throws Exception {
    mockMvc.perform(get("/guest/myGuestPage")).andExpect(status().isOk());

    mockMvc.perform(get("/guest/myGuestPage")
      .with(user("user").password("userPass").roles("USER")))
      .andExpect(status().isOk());

    mockMvc.perform(get("/guest/myGuestPage")
      .with(httpBasic("admin", "adminPass")))
      .andExpect(status().isOk());
}

5. Conclusion

In this tutorial, we have demonstrated how to configure multiple entry points when using Spring Security.

The complete source code for the examples can be found over on GitHub. To run the application, uncomment the MultipleEntryPointsApplication start-class tag in the pom.xml, run the command mvn spring-boot:run, and then access the /multipleHttpLinks URL.

Note that it is not possible to log out when using HTTP Basic Authentication, so you will have to close and reopen the browser to remove this authentication.

To run the JUnit test, use the defined Maven profile entryPoints with the following command:

mvn clean install -PentryPoints

A Guide to the Java API for WebSocket


1. Overview

WebSocket overcomes the limitations of traditional request/response communication between the server and the web browser by providing bi-directional, full-duplex, real-time client/server communications. The server can send data to the client at any time. Because it runs over TCP, it also provides low-latency, low-level communication and reduces the overhead of each message.

In this article, we’ll take a look at the Java API for WebSockets by creating a chat-like application.

2. JSR 356

JSR 356, or the Java API for WebSocket, specifies an API that Java developers can use to integrate WebSockets within their applications – both on the server side as well as on the Java client side.

This Java API provides both server and client side components:

  • Server: everything in the javax.websocket.server package.
  • Client: the content of javax.websocket package, which consists of client side APIs and also common libraries to both server and client.

3. Building a Chat Using WebSockets

We will build a very simple chat-like application. Any user will be able to open the chat from any browser, type a name, log into the chat and start communicating with everybody connected to the chat.

We’ll start by adding the latest dependency to the pom.xml file:

<dependency>
    <groupId>javax.websocket</groupId>
    <artifactId>javax.websocket-api</artifactId>
    <version>1.1</version>
</dependency>

The latest version may be found here.

In order to convert Java Objects into their JSON representations and vice versa, we’ll use Gson:

<dependency>
    <groupId>com.google.code.gson</groupId>
    <artifactId>gson</artifactId>
    <version>2.8.0</version>
</dependency>

The latest version is available in the Maven Central repository.

3.1. Endpoint Configuration

There are two ways of configuring endpoints: annotation-based and programmatic. You can either extend the javax.websocket.Endpoint class or use dedicated method-level annotations. As the annotation model leads to cleaner code compared to the programmatic model, annotations have become the conventional choice. In this case, WebSocket endpoint lifecycle events are handled by the following annotations:

  • @ServerEndpoint: If decorated with @ServerEndpoint, the container ensures availability of the class as a WebSocket server listening to a specific URI space
  • @ClientEndpoint: A class decorated with this annotation is treated as a WebSocket client
  • @OnOpen: A Java method with @OnOpen is invoked by the container when a new WebSocket connection is initiated
  • @OnMessage: A Java method, annotated with @OnMessage, receives the information from the WebSocket container when a message is sent to the endpoint
  • @OnError: A method with @OnError is invoked when there is a problem with the communication
  • @OnClose: Used to decorate a Java method that is called by the container when the WebSocket connection closes

3.2. Writing the Server Endpoint

We declare a Java class WebSocket server endpoint by annotating it with @ServerEndpoint. We also specify the URI where the endpoint is deployed. The URI is defined relatively to the root of the server container and must begin with a forward slash:

@ServerEndpoint(value = "/chat/{username}")
public class ChatEndpoint {

    @OnOpen
    public void onOpen(Session session) throws IOException {
        // Get session and WebSocket connection
    }

    @OnMessage
    public void onMessage(Session session, Message message) throws IOException {
        // Handle new messages
    }

    @OnClose
    public void onClose(Session session) throws IOException {
        // WebSocket connection closes
    }

    @OnError
    public void onError(Session session, Throwable throwable) {
        // Do error handling here
    }
}

The code above is the server endpoint skeleton for our chat-like application. As you can see, we have 4 annotations mapped to their respective methods. Below you can see the implementation of such methods:

@ServerEndpoint(value="/chat/{username}")
public class ChatEndpoint {
 
    private Session session;
    private static Set<ChatEndpoint> chatEndpoints 
      = new CopyOnWriteArraySet<>();
    private static HashMap<String, String> users = new HashMap<>();

    @OnOpen
    public void onOpen(
      Session session, 
      @PathParam("username") String username) throws IOException {
 
        this.session = session;
        chatEndpoints.add(this);
        users.put(session.getId(), username);

        Message message = new Message();
        message.setFrom(username);
        message.setContent("Connected!");
        broadcast(message);
    }

    @OnMessage
    public void onMessage(Session session, Message message) 
      throws IOException {
 
        message.setFrom(users.get(session.getId()));
        broadcast(message);
    }

    @OnClose
    public void onClose(Session session) throws IOException {
 
        chatEndpoints.remove(this);
        Message message = new Message();
        message.setFrom(users.get(session.getId()));
        message.setContent("Disconnected!");
        broadcast(message);
    }

    @OnError
    public void onError(Session session, Throwable throwable) {
        // Do error handling here
    }

    private static void broadcast(Message message) {
 
        chatEndpoints.forEach(endpoint -> {
            synchronized (endpoint) {
                try {
                    endpoint.session.getBasicRemote().
                      sendObject(message);
                } catch (IOException | EncodeException e) {
                    e.printStackTrace();
                }
            }
        });
    }
}

When a new user logs in (@OnOpen), he is immediately mapped to a data structure of active users. Then, a message is created and sent to all endpoints using the broadcast method.

This method is also used whenever a new message is sent (@OnMessage) by any of the users connected – this is the main purpose of the chat.

If at some point an error occurs, the method with the annotation @OnError handles it. You can use this method to log the information about the error and clear the endpoints.

Finally, when a user is no longer connected to the chat, the method @OnClose clears the endpoint and broadcasts to all users that a user has been disconnected.

4. Message Types

The WebSocket specification supports two on-wire data formats – text and binary. The API supports both these formats, adds capabilities to work with Java objects and health check messages (ping-pong) as defined in the specification:

  • Text: Any textual data (java.lang.String, primitives or their equivalent wrapper classes)
  • Binary: Binary data (e.g. audio, image etc.) represented by a java.nio.ByteBuffer or a byte[] (byte array)
  • Java objects: The API makes it possible to work with native (Java object) representations in your code and use custom transformers (encoders/decoders) to convert them into compatible on-wire formats (text, binary) allowed by the WebSocket protocol
  • Ping-Pong: A javax.websocket.PongMessage is an acknowledgment sent by a WebSocket peer in response to a health check (ping) request
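As a quick illustration of the binary representation, the following plain-Java sketch (no WebSocket container involved) shows how a payload round-trips through the java.nio.ByteBuffer that a binary @OnMessage handler would receive:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Plain-Java sketch of the binary message representation: the WebSocket API
// delivers binary frames as ByteBuffer (or byte[]); here we wrap a payload
// into a ByteBuffer and read it back the way a handler might.
public class BinaryMessageDemo {

    public static ByteBuffer toBuffer(String text) {
        return ByteBuffer.wrap(text.getBytes(StandardCharsets.UTF_8));
    }

    public static String fromBuffer(ByteBuffer buffer) {
        byte[] bytes = new byte[buffer.remaining()];
        buffer.get(bytes);
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        ByteBuffer frame = toBuffer("ping");
        System.out.println(fromBuffer(frame)); // ping
    }
}
```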

For our application, we’ll be using Java Objects. We’ll create the classes for encoding and decoding messages.

4.1. Encoder

An encoder takes a Java object and produces a typical representation suitable for transmission as a message such as JSON, XML or binary representation. Encoders can be used by implementing the Encoder.Text<T> or Encoder.Binary<T> interfaces.

In the code below we define the class Message to be encoded and in the method encode we use Gson for encoding the Java object to JSON:

public class MessageEncoder implements Encoder.Text<Message> {

    private static Gson gson = new Gson();

    @Override
    public String encode(Message message) throws EncodeException {
        return gson.toJson(message);
    }

    @Override
    public void init(EndpointConfig endpointConfig) {
        // Custom initialization logic
    }

    @Override
    public void destroy() {
        // Close resources
    }
}

4.2. Decoder

A decoder is the opposite of an encoder and is used to transform data back into a Java object. Decoders can be implemented using the Decoder.Text<T> or Decoder.Binary<T> interfaces.

As we saw with the encoder, the decode method is where we take the JSON retrieved in the message sent to the endpoint and use Gson to transform it to a Java class called Message:

public class MessageDecoder implements Decoder.Text<Message> {

    private static Gson gson = new Gson();

    @Override
    public Message decode(String s) throws DecodeException {
        return gson.fromJson(s, Message.class);
    }

    @Override
    public boolean willDecode(String s) {
        return (s != null);
    }

    @Override
    public void init(EndpointConfig endpointConfig) {
        // Custom initialization logic
    }

    @Override
    public void destroy() {
        // Close resources
    }
}

4.3. Setting Encoder and Decoder in Server Endpoint

Let’s put everything together by adding the classes created for encoding and decoding the data at the class level annotation @ServerEndpoint:

@ServerEndpoint( 
  value="/chat/{username}", 
  decoders = MessageDecoder.class, 
  encoders = MessageEncoder.class )

Every time messages are sent to the endpoint, they will automatically either be converted to JSON or Java objects.
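To see what this conversion amounts to, here is a dependency-free round trip for a minimal Message class. In the application above, Gson performs this work inside MessageEncoder and MessageDecoder; the hand-rolled parsing below is purely illustrative and assumes the exact JSON shape it produces:

```java
// Illustrative only: a minimal Message and a hand-rolled JSON round trip,
// standing in for what the Gson-based encoder/decoder pair does.
public class MessageRoundTripDemo {

    public static class Message {
        public String from;
        public String content;
    }

    // Mimics MessageEncoder.encode(): Java object -> JSON text
    public static String encode(Message m) {
        return String.format("{\"from\":\"%s\",\"content\":\"%s\"}",
          m.from, m.content);
    }

    // Mimics MessageDecoder.decode(): JSON text -> Java object
    // (naive parsing; only handles the shape produced by encode above)
    public static Message decode(String json) {
        Message m = new Message();
        String body = json.substring(1, json.length() - 1);
        for (String pair : body.split(",")) {
            String[] kv = pair.split(":", 2);
            String key = kv[0].replace("\"", "");
            String value = kv[1].replace("\"", "");
            if (key.equals("from")) {
                m.from = value;
            } else {
                m.content = value;
            }
        }
        return m;
    }

    public static void main(String[] args) {
        Message m = new Message();
        m.from = "user";
        m.content = "Hello!";
        Message back = decode(encode(m));
        System.out.println(back.from + ": " + back.content); // user: Hello!
    }
}
```

In the real application, of course, we let Gson handle arbitrary JSON rather than relying on a fixed field layout.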

5. Conclusion

In this article, we looked at the Java API for WebSocket and how it can help us build applications such as this real-time chat.

We saw the two programming models for creating an endpoint: annotations and programmatic. We defined an endpoint using the annotation model for our application along with the life cycle methods.

Also, in order to be able to communicate back and forth between the server and client, we saw that we need encoders and decoders to convert Java objects to JSON and vice versa.

The JSR 356 API is very simple, and the annotation-based programming model makes it very easy to build WebSocket applications.

To run the application we built in the example, all we need to do is deploy the war file in a web server and go to the URL: http://localhost:8080/java-websocket/. You can find the link to the repository here.

Overview of Spring-Boot Dev Tools


1. Introduction

Spring Boot gives us the ability to quickly setup and run services.

To enhance the development experience further, Spring released the spring-boot-devtools module as part of Spring Boot 1.3. This article covers the benefits we can achieve using this functionality.

We’ll cover the following topics:

  • Property defaults
  • Automatic Restart
  • Live Reload
  • Global settings
  • Remote applications

1.1. Add Spring-Boot-Devtools in a Project

Adding spring-boot-devtools in a project is as simple as adding any other spring-boot module. In an existing spring-boot project, add the following dependency:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-devtools</artifactId>
</dependency>

Do a clean build of the project, and you are now integrated with spring-boot-devtools. The newest version can be fetched from here and all versions can be found here.

2. Property Defaults

Spring Boot does a lot of auto-configuration, including enabling caching by default to improve performance. One such example is the caching of templates used by template engines, e.g. Thymeleaf. But during development, it’s more important to see changes as quickly as possible.

The default caching behavior can be disabled for Thymeleaf using the property spring.thymeleaf.cache=false in the application.properties file. We do not need to do this manually – introducing spring-boot-devtools does this automatically for us.

3. Automatic Restart

In a typical application development environment, a developer would make some changes, build the project and deploy/start the application for new changes to take effect, or else try to leverage JRebel, etc.

Using spring-boot-devtools, this process is also automated. Whenever files change on the classpath, spring-boot-devtools automatically restarts the application. The benefit of this feature is that the time required to verify changes is considerably reduced:

19:45:44.804 ... - Included patterns for restart : []
19:45:44.809 ... - Excluded patterns for restart : [/spring-boot-starter/target/classes/, /spring-boot-autoconfigure/target/classes/, /spring-boot-starter-[\w-]+/, /spring-boot/target/classes/, /spring-boot-actuator/target/classes/, /spring-boot-devtools/target/classes/]
19:45:44.810 ... - Matching URLs for reloading : [file:/.../target/test-classes/, file:/.../target/classes/]

 :: Spring Boot ::        (v1.5.2.RELEASE)

2017-03-12 19:45:45.174  ...: Starting Application on machine with PID 7724 (<some path>\target\classes started by user in <project name>)
2017-03-12 19:45:45.175  ...: No active profile set, falling back to default profiles: default
2017-03-12 19:45:45.510  ...: Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@385c3ca3: startup date [Sun Mar 12 19:45:45 IST 2017]; root of context hierarchy

As seen in the logs, the thread that spawned the application is not the main thread but a restartedMain thread. Any change made in the project, be it a Java file change, will cause an automated restart of the project:

2017-03-12 19:53:46.204  ...: Closing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@385c3ca3: startup date [Sun Mar 12 19:45:45 IST 2017]; root of context hierarchy
2017-03-12 19:53:46.208  ...: Unregistering JMX-exposed beans on shutdown


 :: Spring Boot ::        (v1.5.2.RELEASE)

2017-03-12 19:53:46.587  ...: Starting Application on machine with PID 7724 (<project path>\target\classes started by user in <project name>)
2017-03-12 19:53:46.588  ...: No active profile set, falling back to default profiles: default
2017-03-12 19:53:46.591  ...: Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@acaf4a1: startup date [Sun Mar 12 19:53:46 IST 2017]; root of context hierarchy

4. Live Reload

spring-boot-devtools module includes an embedded LiveReload server that is used to trigger a browser refresh when a resource is changed.

For this to happen in the browser, we need to install a LiveReload plugin; one such implementation is Remote Live Reload for Chrome.

5. Global Settings

spring-boot-devtools provides a way to configure global settings that are not coupled with any application. This file is named .spring-boot-devtools.properties and is located at $HOME.

6. Remote Applications

6.1. Remote Debugging via HTTP (Remote Debug Tunnel)

spring-boot-devtools provides out-of-the-box remote debugging capabilities via HTTP. For this feature to work, spring-boot-devtools must be packaged as part of the application. This can be achieved by disabling the excludeDevtools configuration in the Maven plugin.

Here’s a quick sample:

<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
            <configuration>
                <excludeDevtools>false</excludeDevtools>
            </configuration>
        </plugin>
    </plugins>
</build>

Now, for remote debugging via HTTP to work, the following steps have to be taken:

  1. An application being deployed and started on the server, should be started with Remote Debugging enabled:
    -Xdebug -Xrunjdwp:server=y,transport=dt_socket,suspend=n

    As we can see, the remote debugging port is not mentioned here. Hence, java will choose a random port

  2. For the same project, open the Launch configurations, choose the following options:
    Select main class: org.springframework.boot.devtools.RemoteSpringApplication
    In program arguments, add the URL for the application, e.g. http://localhost:8080
  3. Default port for debugger via spring-boot application is 8000 and can be overridden via:
    spring.devtools.remote.debug.local-port=8010
  4. Now create a remote-debug configuration, setting up the port as 8010 as configured via properties or 8000, if sticking to defaults

Here’s what the log will look like:

  .   ____          _                                              __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _          ___               _      \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` |        | _ \___ _ __  ___| |_ ___ \ \ \ \
 \\/  ___)| |_)| | | | | || (_| []::::::[]   / -_) '  \/ _ \  _/ -_) ) ) ) )
  '  |____| .__|_| |_|_| |_\__, |        |_|_\___|_|_|_\___/\__\___|/ / / /
 =========|_|==============|___/===================================/_/_/_/
 :: Spring Boot Remote ::  (v1.5.2.RELEASE)

2017-03-12 22:24:11.089  ...: Starting RemoteSpringApplication v1.5.2.RELEASE on machine with PID 10476 (..\org\springframework\boot\spring-boot-devtools\1.5.2.RELEASE\spring-boot-devtools-1.5.2.RELEASE.jar started by user in project)
2017-03-12 22:24:11.097  ...: No active profile set, falling back to default profiles: default
2017-03-12 22:24:11.357  ...: Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@11e21d0e: startup date [Sun Mar 12 22:24:11 IST 2017]; root of context hierarchy
2017-03-12 22:24:11.869  ...: The connection to http://localhost:8080 is insecure. You should use a URL starting with 'https://'.
2017-03-12 22:24:11.949  ...: LiveReload server is running on port 35729
2017-03-12 22:24:11.983  ...: Started RemoteSpringApplication in 1.24 seconds (JVM running for 1.802)
2017-03-12 22:24:34.324  ...: Remote debug connection opened

6.2. Remote Update

The remote client monitors the application classpath for changes, as is done for the remote restart feature. Any change in the classpath causes the updated resource to be pushed to the remote application, and a restart is triggered.

Changes are pushed when the remote client is up and running, as monitoring for changed files is only possible then.

Here’s what that looks like in the logs:

2017-03-12 22:33:11.613  INFO 1484 ...: Remote debug connection opened
2017-03-12 22:33:21.869  INFO 1484 ...: Uploaded 1 class resource

7. Conclusion

With this quick article, we have just demonstrated how we can leverage the spring-boot-devtools module to make the developer experience better and reduce the development time by automating a lot of activities.

Introduction to Google Protocol Buffer


1. Overview

In this article, we’ll be looking at the Google Protocol Buffer (protobuf) – a well-known language-agnostic binary data format. We can define a file with a protocol and next, using that protocol, we can generate code in languages like Java, C++, C#, Go, or Python.

This is an introductory article to the format itself; if you want to see how to use the format with a Spring web application, have a look at this article.

2. Defining Maven Dependencies

To use protocol buffers in Java, we need to add a Maven dependency on the protobuf-java library:

<dependency>
    <groupId>com.google.protobuf</groupId>
    <artifactId>protobuf-java</artifactId>
    <version>${protobuf.version}</version>
</dependency>

<properties>
    <protobuf.version>3.2.0</protobuf.version>
</properties>

3. Defining a Protocol

Let’s start with an example. We can define a very simple protocol in a protobuf format:

message Person {
    required string name = 1;
}

This is a protocol for a simple message of Person type that has only one required field – name, which has a string type.

Let’s look at the more complex example of defining a protocol. Let’s say that we need to store person details in a protobuf format:

package protobuf;

option java_package = "com.baeldung.protobuf";
option java_outer_classname = "AddressBookProtos";

message Person {
    required string name = 1;
    required int32 id = 2;
    optional string email = 3;

    repeated string numbers = 4;
}

message AddressBook {
    repeated Person people = 1;
}

Our protocol consists of two types of data: a Person and an AddressBook. After generating the code (more on this in the later section), those classes will be the inner classes inside the AddressBookProtos class.

When we want to define a field that is required – meaning that creating an object without such a field will cause an Exception – we need to use the required keyword.

Creating a field with the optional keyword means that this field doesn’t need to be set. The repeated keyword is an array type of variable size.

All fields are indexed: the field denoted with number 1 will be saved as the first field in the binary output, the field marked with 2 will be saved next, and so on. This gives us better control over how fields are laid out on the wire.
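Per the protobuf encoding rules, each serialized field is prefixed with a tag computed as (fieldNumber << 3) | wireType, where wire type 2 is length-delimited (strings) and wire type 0 is varint (int32). The sketch below (not part of the generated code, just an illustration of the encoding) computes the tag bytes for our Person fields:

```java
// Illustration of how the field numbers end up on the wire: each field is
// prefixed with a tag computed as (fieldNumber << 3) | wireType.
// Wire type 2 = length-delimited (strings), wire type 0 = varint (int32).
public class ProtoTagDemo {

    public static int tag(int fieldNumber, int wireType) {
        return (fieldNumber << 3) | wireType;
    }

    public static void main(String[] args) {
        // "required string name = 1" is written with tag 0x0A
        System.out.printf("name tag: 0x%02X%n", tag(1, 2));
        // "required int32 id = 2" is written with tag 0x10
        System.out.printf("id tag:   0x%02X%n", tag(2, 0));
    }
}
```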

4. Generating Java Code From a Protobuf File

Once we define a file, we can generate code from it.

Firstly, we need to install protobuf on our machine. Once we do this, we can generate code by executing a protoc command:

protoc -I=. --java_out=. addressbook.proto

The protoc command will generate a Java output file from our addressbook.proto file. The -I option specifies the directory in which the proto file resides, while --java_out specifies the directory where the generated class will be created.

The generated class will have setters, getters, constructors and builders for our defined messages. It will also have some utility methods for saving protobuf objects and deserializing them from the binary format to Java classes.

5. Creating an Instance of Protobuf Defined Messages

We can easily use the generated code to create a Java instance of the Person class:

String email = "j@baeldung.com";
int id = new Random().nextInt();
String name = "Michael Program";
String number = "01234567890";
AddressBookProtos.Person person =
  AddressBookProtos.Person.newBuilder()
    .setId(id)
    .setName(name)
    .setEmail(email)
    .addNumbers(number)
    .build();

assertEquals(person.getEmail(), email);
assertEquals(person.getId(), id);
assertEquals(person.getName(), name);
assertEquals(person.getNumbers(0), number);

We can create a fluent builder by using a newBuilder() method on the desired message type. After setting up all required fields, we can call a build() method to create an instance of a Person class.

6. Serializing and Deserializing Protobuf 

Once we create an instance of our Person class, we want to save it on disk in a binary format that is compatible with the created protocol. Let’s say that we want to create an instance of the AddressBook class and add one person to that object.

Next, we want to save that file on disk – there is a writeTo() utility method in the auto-generated code that we can use:

AddressBookProtos.AddressBook addressBook 
  = AddressBookProtos.AddressBook.newBuilder().addPeople(person).build();
FileOutputStream fos = new FileOutputStream(filePath);
addressBook.writeTo(fos);

After executing that method, our object will be serialized to the binary format and saved on disk. To load that data from disk and deserialize it back into an AddressBook object, we can use the mergeFrom() method:

AddressBookProtos.AddressBook deserialized
  = AddressBookProtos.AddressBook.newBuilder()
    .mergeFrom(new FileInputStream(filePath)).build();
 
assertEquals(deserialized.getPeople(0).getEmail(), email);
assertEquals(deserialized.getPeople(0).getId(), id);
assertEquals(deserialized.getPeople(0).getName(), name);
assertEquals(deserialized.getPeople(0).getNumbers(0), number);

7. Conclusion

In this quick article, we introduced a standard for describing and storing data in a binary format – Google Protocol Buffer.

We created a simple protocol and created a Java instance that complies with the defined protocol. Next, we saw how to serialize and deserialize objects using protobuf.

The implementation of all these examples and code snippets can be found in the GitHub project – this is a Maven project, so it should be easy to import and run as it is.

Introduction to Javatuples

1. Overview

A tuple is a collection of several elements that may or may not be related to each other. In other words, tuples can be considered anonymous objects.

For example, [“RAM”, 16, “Astra”] is a tuple containing three elements.

In this article, we will have a quick look at a really simple library that allows us to work with tuple-based data structures, named javatuples.
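For context, a plain-JDK sketch (not part of javatuples): the closest thing the standard library offers to a tuple is Map.Entry, which is limited to exactly two elements – one reason a dedicated tuple library is handy:

```java
import java.util.AbstractMap;
import java.util.Map;

public class JdkPairSketch {
    public static void main(String[] args) {
        // An immutable two-element "tuple" using only the JDK
        Map.Entry<String, Integer> ram =
          new AbstractMap.SimpleImmutableEntry<>("RAM", 16);

        System.out.println(ram.getKey());   // RAM
        System.out.println(ram.getValue()); // 16
    }
}
```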

2. Built-in Javatuples Classes

This library provides us with ten different classes that will suffice for most of our requirements related to tuples:

  • Unit<A>
  • Pair<A,B>
  • Triplet<A,B,C>
  • Quartet<A,B,C,D>
  • Quintet<A,B,C,D,E>
  • Sextet<A,B,C,D,E,F>
  • Septet<A,B,C,D,E,F,G>
  • Octet<A,B,C,D,E,F,G,H>
  • Ennead<A,B,C,D,E,F,G,H,I>
  • Decade<A,B,C,D,E,F,G,H,I,J>

In addition to the classes above, there are two additional classes, KeyValue<A,B> and LabelValue<A,B>, which provide functionalities similar to Pair<A,B>, but differ in semantics.

As per the official site, all the classes in javatuples are typesafe and immutable. Each of the tuple classes implements the Iterable, Serializable, and Comparable interfaces.

3. Adding Maven Dependency

Let’s add the Maven dependency to our pom.xml:

<dependency>
    <groupId>org.javatuples</groupId>
    <artifactId>javatuples</artifactId>
    <version>1.2</version>
</dependency>

Please check the Central Maven repository for the latest version.

4. Creating Tuples

Creating a tuple is really simple. We can use the corresponding constructors:

Pair<String, Integer> pair = new Pair<String, Integer>("A pair", 55);

There is also a little less verbose and semantically elegant way of creating a tuple:

Triplet<String, Integer, Double> triplet = Triplet.with("hello", 23, 1.2);

We can also create tuples from an Iterable:

List<String> listOfNames = Arrays.asList("john", "doe", "anne", "alex");
Quartet<String, String, String, String> quartet
  = Quartet.fromCollection(listOfNames);

Please note that the number of items in the collection should match the type of the tuple that we want to create. For example, we cannot create a Quintet using the above collection as it requires exactly five elements. The same is true for any other tuple class of a higher order than Quintet.

However, we can create a lower-order tuple like a Pair or a Triplet using the above collection, by specifying a starting index in the fromIterable() method:

Pair<String, String> pairFromList = Pair.fromIterable(listOfNames, 2);

The above code will result in creating a Pair containing “anne” and “alex“.

Tuples can be conveniently created from any array as well:

String[] names = new String[] {"john", "doe", "anne"};
Triplet<String, String, String> triplet2 = Triplet.fromArray(names);

5. Getting Values from Tuples

Every class in javatuples has a getValueX() method for getting the values from tuples, where X specifies the order of the element inside the tuple. Like the indexes in arrays, the value of X starts from zero.

Let’s create a new Quartet and fetch some values:

Quartet<String, Double, Integer, String> quartet 
  = Quartet.with("john", 72.5, 32, "1051 SW");

String name = quartet.getValue0();
Integer age = quartet.getValue2();
 
assertThat(name).isEqualTo("john");
assertThat(age).isEqualTo(32);

As we can see, the position of “john” is zero, “72.5” is one, and so on.

Note that the getValueX() methods are type-safe. That means, no casting is required.

An alternative to this is the getValue(int pos) method. It takes a zero-based position of the element to be fetched. This method is not type-safe and requires explicit casting:

Quartet<String, Double, Integer, String> quartet 
  = Quartet.with("john", 72.5, 32, "1051 SW");

String name = (String) quartet.getValue(0);
Integer age = (Integer) quartet.getValue(2);
 
assertThat(name).isEqualTo("john");
assertThat(age).isEqualTo(32);

Please note that the classes KeyValue and LabelValue have their corresponding methods getKey()/getValue() and getLabel()/getValue().
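To see why a positional getter like getValue(int pos) cannot be type-safe, here is a minimal plain-Java sketch (the class name and fields are hypothetical, not javatuples code): a single method can declare only one return type, so a heterogeneous container must fall back to Object:

```java
public class PositionalSketch {

    private final Object[] values;

    PositionalSketch(Object... values) {
        this.values = values;
    }

    // One return type for every position - the best we can do is Object,
    // so the caller must cast, just like with getValue(int pos)
    Object getValue(int pos) {
        return values[pos];
    }

    public static void main(String[] args) {
        PositionalSketch quartet = new PositionalSketch("john", 72.5, 32, "1051 SW");
        String name = (String) quartet.getValue(0); // explicit cast required
        Integer age = (Integer) quartet.getValue(2);
        System.out.println(name + " is " + age); // john is 32
    }
}
```

The generated getValueX() methods sidestep this by declaring a distinct, fully typed method per position.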

6. Setting Values to Tuples

Similar to getValueX(), all classes in javatuples have setAtX() methods. Again, X is the zero-based position of the element that we want to set:

Pair<String, Integer> john = Pair.with("john", 32);
Pair<String, Integer> alex = john.setAt0("alex");

assertThat(john.toString()).isNotEqualTo(alex.toString());

The important thing here is that the return type of setAtX() method is the tuple type itself. This is because the javatuples are immutable. Setting any new value will leave the original instance intact.
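The copy-on-write pattern behind setAtX() can be sketched in plain Java (MiniPair is a hypothetical stand-in, not the javatuples implementation):

```java
public final class MiniPair {

    private final String value0;
    private final Integer value1;

    public MiniPair(String value0, Integer value1) {
        this.value0 = value0;
        this.value1 = value1;
    }

    // Like setAt0(): returns a brand-new instance, never mutates this one
    public MiniPair setAt0(String newValue0) {
        return new MiniPair(newValue0, value1);
    }

    @Override
    public String toString() {
        return "[" + value0 + ", " + value1 + "]";
    }

    public static void main(String[] args) {
        MiniPair john = new MiniPair("john", 32);
        MiniPair alex = john.setAt0("alex");

        System.out.println(john); // [john, 32] - original untouched
        System.out.println(alex); // [alex, 32]
    }
}
```

Because the fields are final, the original instance can never change out from under a caller.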

7. Adding and Removing Elements from Tuples

We can conveniently add new elements to the tuples. However, this will result in a new tuple of one order higher being created:

Pair<String, Integer> pair1 = Pair.with("john", 32);
Triplet<String, Integer, String> triplet1 = pair1.add("1051 SW");

assertThat(triplet1.contains("john")).isTrue();
assertThat(triplet1.contains(32)).isTrue();
assertThat(triplet1.contains("1051 SW")).isTrue();

It is clear from the above example that adding one element to a Pair will create a new Triplet. Similarly, adding one element to a Triplet will create a new Quartet.

The example above also demonstrates the use of contains() method provided by all the classes in javatuples. This is a really handy method for verifying if the tuple contains a given value.

It is also possible to add one tuple to another using the add() method:

Pair<String, Integer> pair1 = Pair.with("john", 32);
Pair<String, Integer> pair2 = Pair.with("alex", 45);
Quartet<String, Integer, String, Integer> quartet2 = pair1.add(pair2);

assertThat(quartet2.containsAll(pair1)).isTrue();
assertThat(quartet2.containsAll(pair2)).isTrue();

Note the use of containsAll() method. It will return true if all the elements of pair1 are present in quartet2.

By default, the add() method adds the element as the last element of the tuple. However, it is possible to add an element at a given position using the addAtX() method, where X is the zero-based position where we want to add the element:

Pair<String, Integer> pair1 = Pair.with("john", 32);
Triplet<String, String, Integer> triplet2 = pair1.addAt1("1051 SW");

assertThat(triplet2.indexOf("john")).isEqualTo(0);
assertThat(triplet2.indexOf("1051 SW")).isEqualTo(1);
assertThat(triplet2.indexOf(32)).isEqualTo(2);

This example adds the String at position 1, which is then verified by the indexOf() method. Please note the difference in the order of the types for the Pair<String, Integer> and the Triplet<String, String, Integer> after the addAt1() call.

We can also add multiple elements using any of add() or addAtX() methods:

Pair<String, Integer> pair1 = Pair.with("john", 32);
Quartet<String, Integer, String, Integer> quartet1 = pair1.add("alex", 45);

assertThat(quartet1.containsAll("alex", "john", 32, 45)).isTrue();

In order to remove an element from the tuple, we can use the removeFromX() method. Again, X specifies the zero-based position of the element to be removed:

Pair<String, Integer> pair1 = Pair.with("john", 32);
Unit<Integer> unit = pair1.removeFrom0();

assertThat(unit.contains(32)).isTrue();

8. Converting Tuples to List/Array

We have already seen how to convert a List to a tuple. Let’s now see how to convert a tuple to a List:

Quartet<String, Double, Integer, String> quartet
  = Quartet.with("john", 72.5, 32, "1051 SW");
List<Object> list = quartet.toList();

assertThat(list.size()).isEqualTo(4);

It is fairly simple. The only thing to note here is that we will always get a List<Object>, even if the tuple contains elements all of the same type.

Finally, let’s convert the tuple to an array:

Quartet<String, Double, Integer, String> quartet
  = Quartet.with("john", 72.5, 32, "1051 SW");
Object[] array = quartet.toArray();

assertThat(array.length).isEqualTo(4);

Similarly, the toArray() method always returns an Object[].

9. Conclusion

In this article, we have explored the javatuples library and observed its simplicity. It provides elegant semantics and is really easy to use.

Make sure you check out the complete source code for this article over on GitHub. The complete source code contains a few more examples than the ones covered here. After reading this article, the additional examples should be easy to understand.


Introduction to Javassist

1. Overview

In this article, we will be looking at the Javassist (Java Programming Assistant) library.

Simply put, this library simplifies the process of manipulating Java bytecode by offering a higher-level API than the one provided by the JDK.

2. Maven Dependency

To add the Javassist library to our project we need to add javassist into our pom:

<dependency>
    <groupId>org.javassist</groupId>
    <artifactId>javassist</artifactId>
    <version>${javaassist.version}</version>
</dependency>

<properties>
    <javaassist.version>3.21.0-GA</javaassist.version>
</properties>

3. What is the Bytecode?

At a very high level, every Java class is written in a plain text format and compiled to bytecode – an instruction set that can be processed by the Java Virtual Machine. The JVM translates bytecode instructions into machine-level assembly instructions.

Let’s say that we have a Point class:

public class Point {
    private int x;
    private int y;

    public void move(int x, int y) {
        this.x = x;
        this.y = y;
    }

    // standard constructors/getters/setters
}

After compilation, the Point.class file containing the bytecode will be created. We can see the bytecode of that class by executing the javap command:

javap -c Point.class

This will print the following output:

public class com.baeldung.javasisst.Point {
  public com.baeldung.javasisst.Point(int, int);
    Code:
       0: aload_0
       1: invokespecial #1                  // Method java/lang/Object."<init>":()V
       4: aload_0
       5: iload_1
       6: putfield      #2                  // Field x:I
       9: aload_0
      10: iload_2
      11: putfield      #3                  // Field y:I
      14: return

  public void move(int, int);
    Code:
       0: aload_0
       1: iload_1
       2: putfield      #2                  // Field x:I
       5: aload_0
       6: iload_2
       7: putfield      #3                  // Field y:I
      10: return
}

All those instructions are specified by the Java Virtual Machine specification; a large number of them are available.

Let’s analyze the bytecode instructions of the move() method:

  • the aload_0 instruction loads a reference onto the stack from local variable 0
  • iload_1 loads an int value from local variable 1
  • putfield sets the field x of our object; the operations for field y are analogous
  • the last instruction, return, exits the method

Every line of Java code is compiled to bytecode with proper instructions. The Javassist library makes manipulating that bytecode relatively easy.
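To make the compiled format tangible before we start manipulating it, the following self-contained snippet (plain Java, not Javassist) reads the first bytes of its own compiled .class file – every class file starts with the magic number 0xCAFEBABE, followed by the minor and major version numbers:

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ClassFilePeek {

    // Reads the 4-byte magic number from this class's own bytecode on the classpath
    static int readMagic() throws IOException {
        try (InputStream raw = ClassFilePeek.class.getResourceAsStream("ClassFilePeek.class");
             DataInputStream in = new DataInputStream(raw)) {
            return in.readInt(); // every valid class file starts with 0xCAFEBABE
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.printf("magic = %x%n", readMagic()); // magic = cafebabe
    }
}
```

Libraries like Javassist parse exactly this structure for us, so we can work at the level of methods and fields instead of raw bytes.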

4. Generating a Java Class

The Javassist library can be used to generate new Java class files.

Let’s say that we want to generate a JavassistGeneratedClass class that implements a java.lang.Cloneable interface. We want that class to have an id field of int type. The ClassFile class is used to create a new class file and FieldInfo is used to add a new field to a class:

ClassFile cf = new ClassFile(
  false, "com.baeldung.JavassistGeneratedClass", null);
cf.setInterfaces(new String[] {"java.lang.Cloneable"});

FieldInfo f = new FieldInfo(cf.getConstPool(), "id", "I");
f.setAccessFlags(AccessFlag.PUBLIC);
cf.addField(f);

After we create a JavassistGeneratedClass.class we can assert that it actually has an id field:

ClassPool classPool = ClassPool.getDefault();
Field[] fields = classPool.makeClass(cf).toClass().getFields();
 
assertEquals(fields[0].getName(), "id");

5. Loading Bytecode Instructions of Class

If we want to load bytecode instructions of an already existing class method, we can get a CodeAttribute of a specific method of the class. Then we can get a CodeIterator to iterate over all bytecode instructions of that method.

Let’s load all bytecode instructions of the move() method of the Point class:

ClassPool cp = ClassPool.getDefault();
ClassFile cf = cp.get("com.baeldung.javasisst.Point")
  .getClassFile();
MethodInfo minfo = cf.getMethod("move");
CodeAttribute ca = minfo.getCodeAttribute();
CodeIterator ci = ca.iterator();

List<String> operations = new LinkedList<>();
while (ci.hasNext()) {
    int index = ci.next();
    int op = ci.byteAt(index);
    operations.add(Mnemonic.OPCODE[op]);
}

assertEquals(operations,
  Arrays.asList(
  "aload_0", 
  "iload_1", 
  "putfield", 
  "aload_0", 
  "iload_2",  
  "putfield", 
  "return"));

We can see all bytecode instructions of the move() method by aggregating bytecodes to the list of operations, as shown in the assertion above.

6. Adding Fields to Existing Class Bytecode

Let’s say that we want to add a field of int type to the bytecode of the existing class. We can load that class using ClassPool and add the field to it:

ClassFile cf = ClassPool.getDefault()
  .get("com.baeldung.javasisst.Point").getClassFile();

FieldInfo f = new FieldInfo(cf.getConstPool(), "id", "I");
f.setAccessFlags(AccessFlag.PUBLIC);
cf.addField(f);

We can use reflection to verify that the id field exists on the Point class:

ClassPool classPool = ClassPool.getDefault();
Field[] fields = classPool.makeClass(cf).toClass().getFields();
List<String> fieldsList = Stream.of(fields)
  .map(Field::getName)
  .collect(Collectors.toList());
 
assertTrue(fieldsList.contains("id"));

7. Adding Constructor to Class Bytecode

We can add a constructor to the existing class mentioned in one of the previous examples by using the addInvokespecial() method.

We can add a parameterless constructor by invoking the <init> method of the java.lang.Object class:

ClassFile cf = ClassPool.getDefault()
  .get("com.baeldung.javasisst.Point").getClassFile();
Bytecode code = new Bytecode(cf.getConstPool());
code.addAload(0);
code.addInvokespecial("java/lang/Object", MethodInfo.nameInit, "()V");
code.addReturn(null);

MethodInfo minfo = new MethodInfo(
  cf.getConstPool(), MethodInfo.nameInit, "()V");
minfo.setCodeAttribute(code.toCodeAttribute());
cf.addMethod(minfo);

We can check for the presence of the newly created constructor by iterating over bytecode:

CodeIterator ci = code.toCodeAttribute().iterator();
List<String> operations = new LinkedList<>();
while (ci.hasNext()) {
    int index = ci.next();
    int op = ci.byteAt(index);
    operations.add(Mnemonic.OPCODE[op]);
}

assertEquals(operations,
  Arrays.asList("aload_0", "invokespecial", "return"));

8. Conclusion

In this article, we introduced the Javassist library, with the goal of making bytecode manipulation easier.

We focused on the core features and generated a class file from Java code; we also made some bytecode manipulation of an already created Java class.

The implementation of all these examples and code snippets can be found in the GitHub project – this is a Maven project, so it should be easy to import and run as it is.

Java Web Weekly, Issue 168

Lots of interesting writeups on Java 9 this week.

Here we go…

1. Spring and Java

>> Troubleshooting Memory Issues in Java Applications [infoq.com]

Fixing memory issues can be challenging. This comprehensive guide will give you an idea of where to start looking when you encounter them.

>> Pipeline as code with a Spring Boot application [pragmaticintegrator.wordpress.com]

“Infrastructure as Code” is not a new approach, but definitely still very interesting for the significant advantages and maturity it brings.

>> JSR 269 Maintenance Review for Java SE 9 [oracle.com]

A few updates regarding Pluggable Annotation Processing API for Java SE 9.

>> An update on GlassFish 5 [oracle.com]

The first promoted build of GF5 was released recently.

>> Spring Boot and hypermedia, part 1: HAL [insaneprogramming.be]

A short guide to building a self-discoverable API with Spring Boot.

>> Types Are Mightier Than Tests [sitepoint.com]

TDD is a powerful and necessary tool, although sometimes a weak one when it comes to checking the correctness of imperative programs. Higher abstractions coupled with strong type system can make your life easier by decreasing the number of spots where mistakes can even be made.

>> Coping with stringly-typed [frankel.ch]

In the world of strong static typing, sometimes it’s easy to abuse String type. There are some solutions to dealing with such situations.

>> 5 new features in Hibernate 5 every developer should know [thoughts-on-java.org]

There are a few new interesting features in the newest Hibernate release.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Kotlin 1.1 Adds Coroutines, Type Aliases, Improved JavaScript Support [infoq.com]

Looks like Kotlin is getting even more interesting features. I am definitely curious how this one will develop over time.

>> SelfEncapsulation [martinfowler.com]

An interesting approach where you restrict yourself to using getters/setters when possible instead of accessing fields directly. This can make refactoring much easier if some additional non-standard logic needs to be performed when accessing fields.

>> Protecting Sensitive Data [techblog.bozho.net]

A few tips for increasing security of your highly sensitive data.

>> Is An Agile Java Standard Possible? [sitepoint.com]

And some interesting thoughts about the state of the Java platform’s development. It turns out that making the whole process Agile might not be that easy.

Also worth reading:

3. Musings

>> Programmer Career Planning [henrikwarne.com]

Sometimes it’s worth leaving the comfort zone in order to learn something new and to increase your position in the market.

>> Password Rules Are Bullshit [codinghorror.com]

Strict password policies can be irritating especially when your randomly generated password does not match all required criteria 🙂

>> The Case for a Team Standard [daedtech.com]

It’s important to make sure that your standards not only exist but are also high.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> I can’t feel my legs [dilbert.com]

>> The vision was the hard part [dilbert.com]

>> I like everything you said except the “we” part [dilbert.com]

5. Pick of the Week

>> Humans are born irrational, and that has made us better decision-makers [qz.com]

Using Custom Banners in Spring Boot

1. Overview

By default, Spring Boot comes with a banner which shows up as soon as the application starts.

In this article, we’ll learn how to create a custom banner and use it in Spring Boot applications.

2. Creating a Banner

Before we start, we need to create the custom banner which will be displayed at application start-up. We can create the custom banner from scratch or use various tools that will do this for us.

There is a site available here, where you can upload any image and convert it into the ANSI charset in plain-text format.

In this example we used Baeldung’s official logo:

However, in some situations, we might prefer to use the banner in plain text format since it’s relatively easier to maintain.

The plain-text custom banner which we used in this example is available here.

Point to note here is that the ANSI charset has the ability to display colorful text in the console. This can’t be done with the simple plain text format.

3. Using the Custom Banner

Since we have the custom banner ready, we need to create a file named banner.txt in the src/main/resources directory and paste the banner content into it.

Point to note here is that banner.txt is the default expected banner file name, which Spring Boot uses. However, if we want to choose any other location or another name for the banner, we need to set banner.location property in the application.properties file:

banner.location=classpath:/path/to/banner/bannername.txt

We can use images as banners too. Same as with banner.txt, Spring Boot expects the banner image’s name to be banner.gif. Additionally, we can set different image properties such as height, width, etc. in the application.properties:

banner.image.location=classpath:banner.gif
banner.image.width=  //TODO
banner.image.height= //TODO
banner.image.margin= //TODO
banner.image.invert= //TODO

However, it’s always better to use the text format because the application start-up time will increase drastically if a complex image is used.

4. Conclusion

In this quick article, we showed how to use a custom banner in Spring Boot applications.

Like always, the full source code is available over on GitHub.

Introduction to Ratpack

1. Overview

Ratpack is a set of JVM-based libraries built for modern-day high-performance real-time applications. It’s built on top of the embedded Netty event-driven networking engine and is fully compliant with the reactive design pattern.

In this article, we’ll learn how to use Ratpack and we’ll build a small application using it.

2. Why Ratpack?

The main advantages of Ratpack:

  • it’s very lightweight, fast and scalable
  • it consumes less memory than other frameworks such as DropWizard; an interesting benchmark comparison result can be found here
  • since it’s built on top of Netty, Ratpack is totally event-driven and non-blocking in nature
  • it has support for Guice dependency management
  • much like Spring Boot, Ratpack has its own testing libraries to quickly setup test-cases

3. Creating an Application

To understand how Ratpack works, let’s start by creating a small application with it.

3.1. Maven Dependencies

First, let’s add the following dependencies into our pom.xml:

<dependency>
    <groupId>io.ratpack</groupId>
    <artifactId>ratpack-core</artifactId>
    <version>1.4.5</version>
</dependency>
<dependency>
    <groupId>io.ratpack</groupId>
    <artifactId>ratpack-test</artifactId>
    <version>1.4.5</version>
</dependency>

You can check the latest version on Maven Central.

Note that although we are using Maven as our build system, as per the Ratpack recommendation, it’s better to use Gradle as the build tool since Ratpack has first-class Gradle support provided via Ratpack’s Gradle plugin.

We can use the following build Gradle script:

buildscript {
    repositories {
      jcenter()
    }
    dependencies {
      classpath "io.ratpack:ratpack-gradle:1.4.5"
    }
}
 
apply plugin: "io.ratpack.ratpack-java"
repositories {
    jcenter()
}
dependencies {
    testCompile 'junit:junit:4.11'
    runtime "org.slf4j:slf4j-simple:1.7.21"
}
test {
    testLogging {
      events 'started', 'passed'
    }
}

3.2. Building the Application

Once our build management is configured, we need to create a class to start the embedded Netty server and build a simple context to handle the default requests:

public class Application {
	
    public static void main(String[] args) throws Exception {
        RatpackServer.start(server -> server.handlers(chain -> chain
          .get(ctx -> ctx.render("Welcome to Baeldung ratpack!!!"))));
    }
}

As we can see, by using RatpackServer we can now start the server (default port 5050). The handlers() method takes a function that receives a Chain object, which maps all the respective incoming requests. This “Handler Chain API” is used for building the response handling strategy.

If we run this code snippet and hit the browser at http://localhost:5050, “Welcome to Baeldung ratpack!!!” should be displayed.

Similarly, we can map an HTTP POST request.

3.3. Handling URL Path Parameters

In the next example, we need to capture some URL path parameters in our application. In Ratpack, we use PathTokens to capture them:

RatpackServer.start(server -> server
  .handlers(chain -> chain
  .get(":name", ctx -> ctx.render("Hello " 
  + ctx.getPathTokens().get("name") + " !!!"))));

Here, we’re mapping the name URL parameter. Whenever a request like http://localhost:5050/John comes in, the response will be “Hello John !!!”.

3.4. Request/Response Header Modification with/without Filter

Sometimes, we need to modify the HTTP response headers based on our needs. Ratpack has MutableHeaders to customize outgoing responses.

For example, we need to alter the following headers in the response: Access-Control-Allow-Origin, Accept-Language, and Accept-Charset:

RatpackServer.start(server -> server.handlers(chain -> chain.all(ctx -> {
    MutableHeaders headers = ctx.getResponse().getHeaders();
    headers.set("Access-Control-Allow-Origin", "*");
    headers.set("Accept-Language", "en-us");
    headers.set("Accept-Charset", "UTF-8");
    ctx.next();
}).get(":name", ctx -> ctx
    .render("Hello " + ctx.getPathTokens().get("name") + "!!!"))));

By using MutableHeaders we are setting the three headers and pushing them into the Chain.

In the same way, we can check the incoming request headers too:

ctx.getRequest().getHeaders().get("//TODO")

The same can be achieved by creating a filter. Ratpack has a Handler interface, which can be implemented to create a filter. It has only one method handle(), which takes the current Context as a parameter:

public class RequestValidatorFilter implements Handler {

    @Override
    public void handle(Context ctx) throws Exception {
        MutableHeaders headers = ctx.getResponse().getHeaders();
        headers.set("Access-Control-Allow-Origin", "*");
        ctx.next();
    }
}

We can use this filter in the following way:

RatpackServer.start(
    server -> server.handlers(chain -> chain
      .all(new RequestValidatorFilter())
      .get(ctx -> ctx.render("Welcome to baeldung ratpack!!!"))));

3.5. JSON Parser

Ratpack internally uses FasterXML’s Jackson for JSON parsing. We can use the Jackson module to parse any object to JSON.

Let’s create a simple POJO class which will be used for parsing:

public class Employee {

    private Long id;
    private String title;
    private String name;

    // getters and setters 

}

Here, we have created a simple POJO class named Employee, which has three fields: id, title, and name. Now we will use this Employee object, convert it into JSON, and return the JSON when a certain URL is hit:

List<Employee> employees = new ArrayList<Employee>();
employees.add(new Employee(1L, "Mr", "John Doe"));
employees.add(new Employee(2L, "Mr", "White Snow"));

RatpackServer.start(
    server -> server.handlers(chain -> chain
      .get("data/employees",
      ctx -> ctx.render(Jackson.json(employees)))));

As we can see, we are manually adding two Employee objects into a list and parsing them as JSON using Jackson module. As soon as the /data/employees URL is hit, the JSON object will be returned.

Point to note here is that we are not using ObjectMapper at all since Ratpack’s Jackson module handles the conversion on the fly.

3.6. In-Memory Database

Ratpack has first-class support for in-memory databases. It uses HikariCP for JDBC connection pooling. In order to use it, we need to add Ratpack’s HikariCP module dependency to the pom.xml:

<dependency>
    <groupId>io.ratpack</groupId>
    <artifactId>ratpack-hikari</artifactId>
    <version>1.4.5</version>
</dependency>

If we are using Gradle, the same needs to be added in the Gradle build file:

compile ratpack.dependency('hikari')

Now, we need to create an SQL file with table DDL statements so that the tables are created as soon as the server is up and running. We’ll create the DDL.sql file in the src/main/resources directory and add some DDL statements into it.

Since we’re using H2 database, we have to add dependencies for that too.

Now, by using HikariModule, we can initialize the database at runtime:

RatpackServer.start(
    server -> server.registry(Guice.registry(bindings -> 
      bindings.module(HikariModule.class, config -> {
          config.setDataSourceClassName("org.h2.jdbcx.JdbcDataSource");
          config.addDataSourceProperty("URL",
          "jdbc:h2:mem:baeldung;INIT=RUNSCRIPT FROM 'classpath:/DDL.sql'");
      }))).handlers(...));

4. Testing

As mentioned earlier, Ratpack has first-class support for JUnit test cases. Using MainClassApplicationUnderTest we can easily create test cases and test the endpoints:

@RunWith(JUnit4.class)
public class ApplicationTest {

    MainClassApplicationUnderTest appUnderTest
      = new MainClassApplicationUnderTest(Application.class);

    @Test
    public void givenDefaultUrl_getStaticText() {
        assertEquals("Welcome to baeldung ratpack!!!", 
          appUnderTest.getHttpClient().getText("/"));
    }

    @Test
    public void givenDynamicUrl_getDynamicText() {
        assertEquals("Hello dummybot!!!", 
          appUnderTest.getHttpClient().getText("/dummybot"));
    }

    @Test
    public void givenUrl_getListOfEmployee() 
      throws JsonProcessingException {
 
        List<Employee> employees = new ArrayList<Employee>();
        ObjectMapper mapper = new ObjectMapper();
        employees.add(new Employee(1L, "Mr", "John Doe"));
        employees.add(new Employee(2L, "Mr", "White Snow"));

        assertEquals(mapper.writeValueAsString(employees), 
          appUnderTest.getHttpClient().getText("/data/employees"));
    }
 
    @After
    public void shutdown() {
        appUnderTest.close();
    }

}

Please note that we need to manually terminate the running MainClassApplicationUnderTest instance by calling the close() method, as it may otherwise unnecessarily block JVM resources. That’s why we have used the @After annotation to terminate the instance once the test cases have executed.

5. Conclusion

In this article, we saw the simplicity of using Ratpack.

As always, the full source code is available over on GitHub.

Testing an OAuth Secured API with the Spring MVC Test Support


1. Overview

In this article, we’re going to show how we can test an API which is secured using OAuth with the Spring MVC test support.

2. Authorization and Resource Server

For a tutorial on how to setup an authorization and resource server, look through this previous article: Spring REST API + OAuth2 + AngularJS.

Our authorization server uses JdbcTokenStore, defines a client with the id “fooClientIdPassword” and the password “secret”, and supports the password grant type.

The resource server restricts the /employee URL to the ADMIN role.
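As a sketch, the restriction could be expressed in a resource server configuration like the following (the class name and the exact rule set are assumptions; the actual setup lives in the referenced tutorial):

```java
// Hypothetical resource server configuration illustrating the rule:
// only ADMINs may reach /employee, everything else just needs a token.
@Configuration
@EnableResourceServer
public class ResourceServerConfig extends ResourceServerConfigurerAdapter {

    @Override
    public void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests()
          .antMatchers("/employee").hasRole("ADMIN")
          .anyRequest().authenticated();
    }
}
```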

Starting with Spring Boot version 1.5.0 the security adapter takes priority over the OAuth resource adapter, so in order to reverse the order, we have to annotate the WebSecurityConfigurerAdapter class with @Order(SecurityProperties.ACCESS_OVERRIDE_ORDER).

Otherwise, Spring will attempt to access requested URLs based on the Spring Security rules instead of Spring OAuth rules, and we would receive a 403 error when using token authentication.
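Applied to the security configuration, the annotation might look like this (a sketch; the class name is illustrative):

```java
// Pushing the Spring Security adapter behind the OAuth resource
// adapter so that token authentication is evaluated first.
@Configuration
@Order(SecurityProperties.ACCESS_OVERRIDE_ORDER)
public class SecurityConfig extends WebSecurityConfigurerAdapter {
    // ... the usual security rules
}
```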

3. Defining a Sample API

First, let’s create a simple POJO called Employee with two properties that we will manipulate through the API:

public class Employee {
    private String email;
    private String name;
    
    // standard constructor, getters, setters
}

Next, let’s define a controller with two request mappings, for getting and saving an Employee object to a list:

@Controller
public class EmployeeController {

    private List<Employee> employees = new ArrayList<>();

    @GetMapping("/employee")
    @ResponseBody
    public Optional<Employee> getEmployee(@RequestParam String email) {
        return employees.stream()
          .filter(x -> x.getEmail().equals(email)).findAny();
    }

    @PostMapping("/employee")
    @ResponseStatus(HttpStatus.CREATED)
    public void postMessage(@RequestBody Employee employee) {
        employees.add(employee);
    }
}

Keep in mind that in order to make this work, we need the additional JDK8 Jackson module. Otherwise, the Optional class will not be serialized/deserialized properly. The latest version of jackson-datatype-jdk8 can be downloaded from Maven Central.
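The dependency would look something like this (the version shown matches the Jackson line used elsewhere in this tutorial; align it with your jackson-databind version):

```xml
<dependency>
    <groupId>com.fasterxml.jackson.datatype</groupId>
    <artifactId>jackson-datatype-jdk8</artifactId>
    <version>2.8.7</version>
</dependency>
```

With Spring Boot, the module is registered automatically once it is on the classpath.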

4. Testing the API

4.1. Setting Up the Test Class

To test our API, we will create a test class annotated with @SpringBootTest that uses the AuthorizationServerApplication class to read the application configuration.

For testing a secured API with the Spring MVC test support, we need to inject the WebApplicationContext and Spring Security Filter Chain beans. We’ll use these to obtain a MockMvc instance before the tests are run:

@RunWith(SpringRunner.class)
@WebAppConfiguration
@SpringBootTest(classes = AuthorizationServerApplication.class)
public class OAuthMvcTest {

    @Autowired
    private WebApplicationContext wac;

    @Autowired
    private FilterChainProxy springSecurityFilterChain;

    private MockMvc mockMvc;

    @Before
    public void setup() {
        this.mockMvc = MockMvcBuilders.webAppContextSetup(this.wac)
          .addFilter(springSecurityFilterChain).build();
    }
}

4.2. Obtaining an Access Token

Simply put, an API secured with OAuth2 expects to receive the Authorization header with a value of Bearer <access_token>.

In order to send the required Authorization header, we first need to obtain a valid access token by making a POST request to the /oauth/token endpoint. This endpoint requires HTTP Basic authentication, with the id and secret of the OAuth client, and a list of parameters specifying the client_id, grant_type, username, and password.

Using Spring MVC test support, the parameters can be wrapped in a MultiValueMap and the client authentication can be sent using the httpBasic method.

Let’s create a method that sends a POST request to obtain the token and reads the access_token value from the JSON response:

private String obtainAccessToken(String username, String password) throws Exception {
 
    MultiValueMap<String, String> params = new LinkedMultiValueMap<>();
    params.add("grant_type", "password");
    params.add("client_id", "fooClientIdPassword");
    params.add("username", username);
    params.add("password", password);

    ResultActions result 
      = mockMvc.perform(post("/oauth/token")
        .params(params)
        .with(httpBasic("fooClientIdPassword","secret"))
        .accept("application/json;charset=UTF-8"))
        .andExpect(status().isOk())
        .andExpect(content().contentType("application/json;charset=UTF-8"));

    String resultString = result.andReturn().getResponse().getContentAsString();

    JacksonJsonParser jsonParser = new JacksonJsonParser();
    return jsonParser.parseMap(resultString).get("access_token").toString();
}

4.3. Testing GET and POST Requests

The access token can be added to a request using the header(“Authorization”, “Bearer “+ accessToken) method.

Let’s attempt to access one of our secured mappings without an Authorization header and verify that we receive an unauthorized status code:

@Test
public void givenNoToken_whenGetSecureRequest_thenUnauthorized() throws Exception {
    mockMvc.perform(get("/employee")
      .param("email", EMAIL))
      .andExpect(status().isUnauthorized());
}

We have specified that only users with a role of ADMIN can access the /employee URL. Let’s create a test in which we obtain an access token for a user with USER role and verify that we receive a forbidden status code:

@Test
public void givenInvalidRole_whenGetSecureRequest_thenForbidden() throws Exception {
    String accessToken = obtainAccessToken("user1", "pass");
    mockMvc.perform(get("/employee")
      .header("Authorization", "Bearer " + accessToken)
      .param("email", "jim@yahoo.com"))
      .andExpect(status().isForbidden());
}

Next, let’s test our API using a valid access token, by sending a POST request to create an Employee object, then a GET request to read the object created:

@Test
public void givenToken_whenPostGetSecureRequest_thenOk() throws Exception {
    String accessToken = obtainAccessToken("admin", "nimda");

    String employeeString = "{\"email\":\"jim@yahoo.com\",\"name\":\"Jim\"}";
        
    mockMvc.perform(post("/employee")
      .header("Authorization", "Bearer " + accessToken)
      .contentType("application/json;charset=UTF-8")
      .content(employeeString)
      .accept("application/json;charset=UTF-8"))
      .andExpect(status().isCreated());

    mockMvc.perform(get("/employee")
      .param("email", "jim@yahoo.com")
      .header("Authorization", "Bearer " + accessToken)
      .accept("application/json;charset=UTF-8"))
      .andExpect(status().isOk())
      .andExpect(content().contentType("application/json;charset=UTF-8"))
      .andExpect(jsonPath("$.name", is("Jim")));
}

5. Conclusion

In this quick tutorial, we have demonstrated how we can test an OAuth-secured API using the Spring MVC test support.

The full source code of the examples can be found in the GitHub project.

To run the test, the project has an mvc profile that can be executed using the command mvn clean install -Pmvc.

Guide to Internationalization in Spring Boot


1. Overview

In this quick tutorial, we’re going to take a look at how we can add internationalization to a Spring Boot application.

2. Maven Dependencies

For development, we need the following dependency:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-thymeleaf</artifactId>
    <version>1.5.2.RELEASE</version>
</dependency>

The latest version of spring-boot-starter-thymeleaf can be downloaded from Maven Central.

3. LocaleResolver

In order for our application to be able to determine which locale is currently being used, we need to add a LocaleResolver bean:

@Bean
public LocaleResolver localeResolver() {
    SessionLocaleResolver slr = new SessionLocaleResolver();
    slr.setDefaultLocale(Locale.US);
    return slr;
}

The LocaleResolver interface has implementations that determine the current locale based on the session, cookies, the Accept-Language header, or a fixed value.

In our example, we have used the session-based resolver SessionLocaleResolver and set a default locale with the value US.
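If we wanted the user’s choice to survive session expiry, one option is the cookie-based resolver; a sketch (the cookie name and max age are assumptions):

```java
// Cookie-based alternative: the chosen locale is stored client-side,
// so it survives session expiry; setLocale is still supported, so a
// LocaleChangeInterceptor keeps working with this resolver.
@Bean
public LocaleResolver localeResolver() {
    CookieLocaleResolver resolver = new CookieLocaleResolver();
    resolver.setDefaultLocale(Locale.US);
    resolver.setCookieName("app-locale");
    resolver.setCookieMaxAge(3600); // keep the choice for one hour
    return resolver;
}
```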

4. LocaleChangeInterceptor

Next, we need to add an interceptor bean that will switch to a new locale based on the value of the lang parameter appended to a request:

@Bean
public LocaleChangeInterceptor localeChangeInterceptor() {
    LocaleChangeInterceptor lci = new LocaleChangeInterceptor();
    lci.setParamName("lang");
    return lci;
}

In order to take effect, this bean needs to be added to the application’s interceptor registry.

To achieve this, our @Configuration class has to extend the WebMvcConfigurerAdapter class and override the addInterceptors() method:

@Override
public void addInterceptors(InterceptorRegistry registry) {
    registry.addInterceptor(localeChangeInterceptor());
}

5. Defining the Message Sources

By default, a Spring Boot application will look for message files containing internationalization keys and values in the src/main/resources folder.

The file for the default locale will have the name messages.properties, and files for each locale will be named messages_XX.properties, where XX is the locale code.

The keys for the values that will be localized have to be the same in every file, with values appropriate to the language they correspond to.

If a key does not exist in a certain requested locale, then the application will fall back to the default locale value.
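The lookup location and base name can be customized through configuration; a sketch in application.properties, assuming we want to keep the files in an i18n subfolder:

```properties
# resolves to src/main/resources/i18n/messages*.properties
spring.messages.basename=i18n/messages
```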

Let’s define a default message file for the English language called messages.properties:

greeting=Hello! Welcome to our website!
lang.change=Change the language
lang.eng=English
lang.fr=French

Next, let’s create a file called messages_fr.properties for the French language with the same keys:

greeting=Bonjour! Bienvenue sur notre site!
lang.change=Changez la langue
lang.eng=Anglais
lang.fr=Francais

6. Controller and HTML Page

Let’s create a controller mapping that will return a simple HTML page called international.html that we want to see in two different languages:

@Controller
public class PageController {

    @GetMapping("/international")
    public String getInternationalPage() {
        return "international";
    }
}

Since we are using Thymeleaf to display the HTML page, the locale-specific values will be accessed using the keys with the syntax #{key}:

<h1 th:text="#{greeting}"></h1>

If using JSP files, the syntax is:

<h1><spring:message code="greeting" text="default"/></h1>

If we want to access the page with the two different locales we have to add the parameter lang to the URL in the form: /international?lang=fr

If no lang parameter is present on the URL, the application will use the default locale, in our case US locale.

Let’s add a drop-down to our HTML page with the two locales whose names are also localized in our properties files:

<span th:text="#{lang.change}"></span>:
<select id="locales">
    <option value=""></option>
    <option value="en" th:text="#{lang.eng}"></option>
    <option value="fr" th:text="#{lang.fr}"></option>
</select>

Then we can add a jQuery script that will call the /international URL with the respective lang parameter depending on which drop-down option is selected:

<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.1.1/jquery.min.js">
</script>
<script type="text/javascript">
$(document).ready(function() {
    $("#locales").change(function () {
        var selectedOption = $('#locales').val();
        if (selectedOption != ''){
            window.location.replace('international?lang=' + selectedOption);
        }
    });
});
</script>

7. Running the Application

In order to initialize our application, we have to add the main class annotated with @SpringBootApplication:

@SpringBootApplication
public class InternationalizationApp {
    
    public static void main(String[] args) {
        SpringApplication.run(InternationalizationApp.class, args);
    }
}

Depending on the selected locale, we will view the page in either English or French when running the application.

Let’s see the English version:

screen shot in English

And now let’s see the French version:

screen shot in French

8. Conclusion

In this tutorial, we have shown how we can use the support for internationalization in a Spring Boot application.

The full source code for the example can be found over on GitHub.

Introduction to JiBX


1. Overview

JiBX is a tool for binding XML data to Java objects. It provides solid performance compared to other common tools such as JAXB.

JiBX is also quite flexible when compared to other Java-XML tools, using binding definitions to decouple the Java structure from XML representation so that each can be changed independently.

In this article, we’ll explore the different ways provided by JiBX of binding the XML to Java objects.

2. Components of JiBX

2.1. Binding Definition Document

The binding definition document specifies how your Java objects are converted to or from XML.

The JiBX binding compiler takes one or more binding definitions as input, along with the actual class files. It compiles the binding definitions into Java bytecode and adds it to the class files. Once the class files have been enhanced with this compiled binding definition code, they are ready to work with the JiBX runtime.
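At runtime, the enhanced classes are driven through the JiBX runtime API. A minimal unmarshalling sketch, assuming a compiled binding for the Customer class used in the examples below and an XML document named customer.xml:

```java
// Look up the binding compiled into Customer.class and use it
// to turn an XML document back into a Customer instance.
IBindingFactory factory = BindingDirectory.getFactory(Customer.class);
IUnmarshallingContext context = factory.createUnmarshallingContext();

try (FileInputStream in = new FileInputStream("customer.xml")) {
    Customer customer = (Customer) context.unmarshalDocument(in, null);
    // ... work with the customer
}
```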

2.2. Tools

There are three main tools that we are going to use:

  • BindGen –  to generate the binding and matching schema definitions from Java code
  • CodeGen –  to create the Java code and a binding definition from an XML schema
  • JiBX2Wsdl – to make the binding definition and a matching WSDL along with a schema definition from existing Java code

3. Maven Configuration

3.1. Dependencies

We need to add the jibx-run dependency in the pom.xml:

<dependency>
    <groupId>org.jibx</groupId>
    <artifactId>jibx-run</artifactId>
    <version>1.3.1</version>
</dependency>

The latest version of this dependency can be found here.

3.2. Plugins

To perform the different steps in JiBX like code generation or binding generation, we need to configure maven-jibx-plugin in pom.xml.

For the case when we need to start from the Java code and generate the binding and schema definition, let’s configure the plugin:

<plugin>
    <groupId>org.jibx</groupId>
    <artifactId>maven-jibx-plugin</artifactId>
    ...
    <configuration>
        <directory>src/main/resources</directory>
        <includes>
            <include>*-binding.xml</include>
        </includes>
        <excludes>
            <exclude>template-binding.xml</exclude>
        </excludes>
        <verbose>true</verbose>
    </configuration>
    <executions>
        <execution>
            <phase>process-classes</phase>
            <goals>
                <goal>bind</goal>
            </goals>
        </execution>
    </executions>
</plugin>

When we have a schema and we generate the Java code and binding definition, the maven-jibx-plugin is configured with the information about schema file path and path to the source code directory:

<plugin>
    <groupId>org.jibx</groupId>
    <artifactId>maven-jibx-plugin</artifactId>
    ...
    <executions>
        <execution>
            <id>generate-java-code-from-schema</id>
            <goals>
                <goal>schema-codegen</goal>
            </goals>
            <configuration>
                <directory>src/main/jibx</directory>
                <includes>
                    <include>customer-schema.xsd</include>
                </includes>
                <verbose>true</verbose>
            </configuration>
        </execution>
        <execution>
            <id>compile-binding</id>
            <goals>
                <goal>bind</goal>
            </goals>
            <configuration>
                <directory>target/generated-sources</directory>
                <load>true</load>
                <validate>true</validate>
                <verify>true</verify>
            </configuration>
        </execution>
    </executions>
</plugin>

4. Binding Definitions

Binding definitions are the core part of JiBX. A basic binding file specifies the mapping between XML and Java object fields:

<binding>
    <mapping name="customer" class="com.baeldung.xml.jibx.Customer">
        ...
        <value name="city" field="city" />
    </mapping>
</binding>

4.1. Structure Mapping

Structure mapping makes the XML structure look similar to object structure:

<binding>
    <mapping name="customer" class="com.baeldung.xml.jibx.Customer">
    ...
    <structure name="person" field="person">
        ...
        <value name="last-name" field="lastName" />
    </structure>
    ...    
    </mapping>
</binding>

The corresponding classes for this structure are going to be:

public class Customer {
    
    private Person person;
    ...
    
    // standard getters and setters

}

public class Person {
    
    private String lastName;
    ...
    
    // standard getters and setters

}

4.2. Collection and Array Mappings

JiBX binding provides an easy way for working with a collection of objects:

<mapping class="com.baeldung.xml.jibx.Order" name="Order">
    <collection get-method="getAddressList" 
      set-method="setAddressList" usage="optional" 
      createtype="java.util.ArrayList">
        
        <structure type="com.baeldung.xml.jibx.Order$Address" 
          name="Address">
            <value style="element" name="Name" 
              get-method="getName" set-method="setName"/>
              ...
        </structure>
     ...
</mapping>

Let’s see corresponding mapping Java objects:

public class Order {
    List<Address> addressList = new ArrayList<>();
    ...
 
    // getters and setters here
}

public static class Address {
    private String name;
    
    ...
    // standard getters and setter
    
}

4.3. Advanced Mappings

So far we have seen a basic mapping definition. JiBX mapping provides different flavors of mapping like abstract mapping and mapping inheritance.

Let’s see how we can define an abstract mapping:

<binding>
    <mapping name="customer" 
      class="com.baeldung.xml.jibx.Customer">

        <structure name="person" field="person">
            ...
            <value name="name" field="name" />
        </structure>
        <structure name="home-phone" field="homePhone" />
        <structure name="office-phone" field="officePhone" />
        <value name="city" field="city" />
    </mapping>
 
    <mapping name="phone" 
      class="com.baeldung.xml.jibx.Phone" abstract="true">
        <value name="number" field="number"/>
    </mapping>
</binding>

Let’s see how this binds to Java objects:

public class Customer {
    private Person person;
    ...
    private Phone homePhone;
    private Phone officePhone;
    
    // standard getters and setters
    
}

Here we have specified multiple Phone fields in Customer class. The Phone itself is again a POJO:

public class Phone {

    private String number;
    
    // standard getters and setters
}

In addition to regular mappings, we can also define extensions. Each extension mapping refers to some base mapping. At the time of marshaling, the actual object type decides which XML mapping is applied.

Let’s see how the extensions work:

<binding>
    <mapping class="com.baeldung.xml.jibx.Identity" 
      abstract="true">
        <value name="customer-id" field="customerId"/>
    </mapping>
    ...  
    <mapping name="person" 
      class="com.baeldung.xml.jibx.Person" 
      extends="com.baeldung.xml.jibx.Identity">
        <structure map-as="com.baeldung.xml.jibx.Identity"/>
        ...
    </mapping>
    ...
</binding>

Let’s look at the corresponding Java objects:

public class Identity {

    private long customerId;
    
    // standard getters and setters
}

5. Conclusion

In this quick article, we have explored different ways in which we can use JiBX for converting XML to/from Java objects. We have also seen how we can make use of binding definitions to work with different representations.

Full code for this article is available over on GitHub.


Spring Boot Authentication Auditing Support


1. Overview

In this short article, we’ll explore the Spring Boot Actuator module and the support for publishing authentication and authorization events in conjunction with Spring Security.

2. Maven Dependencies

First, we need to add the spring-boot-starter-actuator to our pom.xml:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
    <version>1.5.2.RELEASE</version>
</dependency>

The latest version is available in the Maven Central repository.

3. Listening for Authentication and Authorization Events

To log all authentication and authorization attempts in a Spring Boot application, we can just define a bean with a listener method:

@Component
public class LoginAttemptsLogger {

    @EventListener
    public void auditEventHappened(
      AuditApplicationEvent auditApplicationEvent) {
        
        AuditEvent auditEvent = auditApplicationEvent.getAuditEvent();
        System.out.println("Principal " + auditEvent.getPrincipal() 
          + " - " + auditEvent.getType());

        WebAuthenticationDetails details = 
          (WebAuthenticationDetails) auditEvent.getData().get("details");
        System.out.println("Remote IP address: " 
          + details.getRemoteAddress());
        System.out.println("  Session Id: " + details.getSessionId());
    }
}

Note that we’re just outputting some of the things that are available in AuditApplicationEvent to show what information is available. In an actual application, you might want to store that information in a repository or cache to process it further.

Note that any Spring bean will work; the basics of the new Spring event support are quite simple:

  • annotate the method with @EventListener
  • add the AuditApplicationEvent as the sole argument of the method

The output of running the application will look something like this:

Principal anonymousUser - AUTHORIZATION_FAILURE
  Remote IP address: 0:0:0:0:0:0:0:1
  Session Id: null
Principal user - AUTHENTICATION_FAILURE
  Remote IP address: 0:0:0:0:0:0:0:1
  Session Id: BD41692232875A5A65C5E35E63D784F6
Principal user - AUTHENTICATION_SUCCESS
  Remote IP address: 0:0:0:0:0:0:0:1
  Session Id: BD41692232875A5A65C5E35E63D784F6

In this example, three AuditApplicationEvents have been received by the listener:

  1. Without logging on, access has been requested to a restricted page
  2. A wrong password has been used while logging on
  3. A correct password has been used the second time around

4. An Authentication Audit Listener

If the information exposed by Spring Boot’s AuthorizationAuditListener is not enough, you can create your own bean to expose more information.

Let’s have a look at an example, where we also expose the request URL that was accessed when the authorization fails:

@Component
public class ExposeAttemptedPathAuthorizationAuditListener 
  extends AbstractAuthorizationAuditListener {

    public static final String AUTHORIZATION_FAILURE 
      = "AUTHORIZATION_FAILURE";

    @Override
    public void onApplicationEvent(AbstractAuthorizationEvent event) {
        if (event instanceof AuthorizationFailureEvent) {
            onAuthorizationFailureEvent((AuthorizationFailureEvent) event);
        }
    }

    private void onAuthorizationFailureEvent(
      AuthorizationFailureEvent event) {
        Map<String, Object> data = new HashMap<>();
        data.put(
          "type", event.getAccessDeniedException().getClass().getName());
        data.put("message", event.getAccessDeniedException().getMessage());
        data.put(
          "requestUrl", ((FilterInvocation) event.getSource()).getRequestUrl());
        
        if (event.getAuthentication().getDetails() != null) {
            data.put("details", 
              event.getAuthentication().getDetails());
        }
        publish(new AuditEvent(event.getAuthentication().getName(), 
          AUTHORIZATION_FAILURE, data));
    }
}

We can now log the request URL in our listener:

@Component
public class LoginAttemptsLogger {

    @EventListener
    public void auditEventHappened(
      AuditApplicationEvent auditApplicationEvent) {
        AuditEvent auditEvent = auditApplicationEvent.getAuditEvent();
 
        System.out.println("Principal " + auditEvent.getPrincipal() 
          + " - " + auditEvent.getType());

        WebAuthenticationDetails details
          = (WebAuthenticationDetails) auditEvent.getData().get("details");
 
        System.out.println("  Remote IP address: " 
          + details.getRemoteAddress());
        System.out.println("  Session Id: " + details.getSessionId());
        System.out.println("  Request URL: " 
          + auditEvent.getData().get("requestUrl"));
    }
}

As a result, the output now contains the requested URL:

Principal anonymousUser - AUTHORIZATION_FAILURE
  Remote IP address: 0:0:0:0:0:0:0:1
  Session Id: null
  Request URL: /hello

Note that we extended from the abstract AbstractAuthorizationAuditListener in this example, so we can use the publish method from that base class in our implementation.

If you want to test it, check out the source code and run:

mvn clean spring-boot:run

Thereafter you can point your browser to http://localhost:8080/.

5. Storing Audit Events

By default, Spring Boot stores the audit events in an AuditEventRepository. If you don’t create a bean with your own implementation, then an InMemoryAuditEventRepository will be wired for you.

The InMemoryAuditEventRepository is a kind of circular buffer that stores the last 4000 audit events in memory. Those events can then be accessed via the management endpoint http://localhost:8080/auditevents.

This returns a JSON representation of the audit events:

{
  "events": [
    {
      "timestamp": "2017-03-09T19:21:59+0000",
      "principal": "anonymousUser",
      "type": "AUTHORIZATION_FAILURE",
      "data": {
        "requestUrl": "/auditevents",
        "details": {
          "remoteAddress": "0:0:0:0:0:0:0:1",
          "sessionId": null
        },
        "type": "org.springframework.security.access.AccessDeniedException",
        "message": "Access is denied"
      }
    },
    {
      "timestamp": "2017-03-09T19:22:00+0000",
      "principal": "anonymousUser",
      "type": "AUTHORIZATION_FAILURE",
      "data": {
        "requestUrl": "/favicon.ico",
        "details": {
          "remoteAddress": "0:0:0:0:0:0:0:1",
          "sessionId": "18FA15865F80760521BBB736D3036901"
        },
        "type": "org.springframework.security.access.AccessDeniedException",
        "message": "Access is denied"
      }
    },
    {
      "timestamp": "2017-03-09T19:22:03+0000",
      "principal": "user",
      "type": "AUTHENTICATION_SUCCESS",
      "data": {
        "details": {
          "remoteAddress": "0:0:0:0:0:0:0:1",
          "sessionId": "18FA15865F80760521BBB736D3036901"
        }
      }
    }
  ]
}
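If the default capacity or the in-memory storage is not a good fit, we can register a repository bean of our own; a minimal sketch that simply shrinks the buffer (the capacity value is an assumption):

```java
// Replace the auto-configured repository with one that keeps
// only the last 500 audit events in memory.
@Bean
public AuditEventRepository auditEventRepository() {
    return new InMemoryAuditEventRepository(500);
}
```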

6. Conclusion

With the actuator support in Spring Boot, it becomes trivial to log the authentication and authorization attempts from users. The reader is also referred to production-ready auditing for some additional information.

The code from this article can be found over on GitHub.

Property Testing Example With Javaslang


1. Overview

In this article, we’ll be looking at the concept of Property Testing and its implementation in the javaslang-test library.

Property-based testing (PBT) allows us to specify the high-level behavior of a program regarding the invariants it should adhere to.

2. What is Property Testing?

A property is the combination of an invariant with an input value generator. For each generated value, the invariant is treated as a predicate and checked whether it yields true or false for that value.

As soon as there is one value which yields false, the property is said to be falsified, and checking is aborted. If a property cannot be invalidated after a specific amount of sample data, the property is assumed to be satisfied.

Thanks to that behavior, our tests fail fast if a condition is not satisfied, without doing unnecessary work.
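The fail-fast loop described above can be sketched in plain Java; everything here is illustrative and not part of any library API:

```java
import java.util.OptionalInt;
import java.util.function.IntPredicate;
import java.util.stream.IntStream;

// Minimal sketch of a property check: feed generated samples to the
// invariant and stop at the first counter-example.
public class PropertySketch {

    // Returns the first sample that falsifies the invariant,
    // or an empty OptionalInt if the property held for all samples.
    public static OptionalInt falsify(IntStream samples, IntPredicate invariant) {
        return samples.filter(invariant.negate()).findFirst();
    }

    public static void main(String[] args) {
        // "every doubled number is even" holds for all samples
        System.out.println(falsify(
            IntStream.range(0, 1_000).map(i -> i * 2), i -> i % 2 == 0));
        // "every number is even" is falsified by the very first odd sample
        System.out.println(falsify(
            IntStream.range(0, 1_000), i -> i % 2 == 0));
    }
}
```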

3. Maven Dependency

First, we need to add a Maven dependency to the javaslang-test library:

<dependency>
    <groupId>io.javaslang</groupId>
    <artifactId>javaslang-test</artifactId>
    <version>${javaslang.test.version}</version>
</dependency>

<properties>
    <javaslang.test.version>2.0.5</javaslang.test.version> 
</properties>

4. Writing Property Based Tests

Let’s consider a function that returns a stream of strings. It is an infinite stream from 0 upwards that maps numbers to strings based on a simple rule. We are using here an interesting Javaslang feature called Pattern Matching:

private static Predicate<Integer> divisibleByTwo = i -> i % 2 == 0;
private static Predicate<Integer> divisibleByFive = i -> i % 5 == 0;

private Stream<String> stringsSupplier() {
    return Stream.from(0).map(i -> Match(i).of(
      Case($(divisibleByFive.and(divisibleByTwo)), "DividedByTwoAndFiveWithoutRemainder"),
      Case($(divisibleByFive), "DividedByFiveWithoutRemainder"),
      Case($(divisibleByTwo), "DividedByTwoWithoutRemainder"),
      Case($(), "")));
}

Writing a unit test for such a method would be error-prone because there is a high probability that we’ll forget about some edge case and not cover all possible scenarios.

Fortunately, we can write a property-based test that will cover all edge cases for us. First, we need to define which kind of numbers should be an input for our test:

Arbitrary<Integer> multiplesOf2 = Arbitrary.integer()
  .filter(i -> i > 0)
  .filter(i -> i % 2 == 0 && i % 5 != 0);

We specified that the input number needs to fulfill two conditions – it needs to be greater than zero, and it needs to be divisible by two without a remainder, but not by five.

Next, we need to define a condition that checks if the function under test returns the proper value for a given argument:

CheckedFunction1<Integer, Boolean> mustEquals
  = i -> stringsSupplier().get(i).equals("DividedByTwoWithoutRemainder");

To start a property-based test, we need to use the Property class:

CheckResult result = Property
  .def("Every second element must equal to DividedByTwoWithoutRemainder")
  .forAll(multiplesOf2)
  .suchThat(mustEquals)
  .check(10_000, 100);

result.assertIsSatisfied();

We’re specifying that, for all arbitrary integers that are multiples of 2, the mustEquals predicate must be satisfied. The check() method takes the maximum size of the generated input and the number of times the test will be run.

We can quickly write another test that will verify if the stringsSupplier() function returns a DividedByTwoAndFiveWithoutRemainder string for every input number that is divisible by both two and five without a remainder.

The Arbitrary supplier and CheckedFunction need to be changed:

Arbitrary<Integer> multiplesOf5 = Arbitrary.integer()
  .filter(i -> i > 0)
  .filter(i -> i % 5 == 0 && i % 2 == 0);

CheckedFunction1<Integer, Boolean> mustEquals
  = i -> stringsSupplier().get(i).endsWith("DividedByTwoAndFiveWithoutRemainder");

Then we can run the property-based test for one thousand iterations:

Property.def("Every fifth element must equal to DividedByTwoAndFiveWithoutRemainder")
  .forAll(multiplesOf5)
  .suchThat(mustEquals)
  .check(10_000, 1_000)
  .assertIsSatisfied();

5. Conclusion

In this quick article, we had a look at the concept of property-based testing.

We created tests using the javaslang-test library, making use of the Arbitrary, CheckedFunction, and Property classes to define property-based tests.

The implementation of all these examples and code snippets can be found over on GitHub – this is a Maven project, so it should be easy to import and run as it is.

Form Validation with AngularJS and Spring MVC


1. Overview

Validation is never quite as straightforward as we expect. And of course validating the values entered by a user into an application is very important for preserving the integrity of our data.

In the context of a web application, data input is usually done using HTML forms and requires both client-side and server-side validation.

In this tutorial, we’ll have a look at implementing client-side validation of form input using AngularJS and server-side validation using the Spring MVC framework.

2. Maven Dependencies

To start off, let’s add the following dependencies:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-webmvc</artifactId>
    <version>4.3.7.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-validator</artifactId>
    <version>5.4.0.Final</version>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.8.7</version>
</dependency>

The latest versions of spring-webmvc, hibernate-validator and jackson-databind can be downloaded from Maven Central.

3. Validation Using Spring MVC

An application should never rely solely on client-side validation, as this can be easily circumvented. To prevent incorrect or malicious values from being saved or causing improper execution of the application logic, it is important to validate input values on the server side as well.

Spring MVC offers support for server-side validation by using JSR 349 Bean Validation specification annotations. For this example, we will use the reference implementation of the specification, which is hibernate-validator.

3.1. The Data Model

Let’s create a User class that has properties annotated with appropriate validation annotations:

public class User {

    @NotNull
    @Email
    private String email;

    @NotNull
    @Size(min = 4, max = 15)
    private String password;

    @NotBlank
    private String name;

    @Min(18)
    @Digits(integer = 2, fraction = 0)
    private int age;

    // standard constructor, getters, setters
}

The annotations used above belong to the JSR 349 specification, with the exception of @Email and @NotBlank, which are specific to the hibernate-validator library.

3.2. Spring MVC Controller

Let’s create a controller class that defines a /user endpoint, which will be used to save a new User object to a List.

In order to enable validation of the User object received through request parameters, the declaration must be preceded by the @Valid annotation, and the validation errors will be held in a BindingResult instance.

To determine if the object contains invalid values, we can use the hasErrors() method of BindingResult.

If hasErrors() returns true, we can return a JSON array containing the error messages associated with the validations that did not pass. Otherwise, we will add the object to the list:

@PostMapping(value = "/user")
@ResponseBody
public ResponseEntity<Object> saveUser(@Valid User user, 
  BindingResult result, Model model) {
    if (result.hasErrors()) {
        List<String> errors = result.getAllErrors().stream()
          .map(DefaultMessageSourceResolvable::getDefaultMessage)
          .collect(Collectors.toList());
        return new ResponseEntity<>(errors, HttpStatus.OK);
    } else {
        if (users.stream().anyMatch(it -> user.getEmail().equals(it.getEmail()))) {
            return new ResponseEntity<>(
              Collections.singletonList("Email already exists!"), 
              HttpStatus.CONFLICT);
        } else {
            users.add(user);
            return new ResponseEntity<>(HttpStatus.CREATED);
        }
    }
}

As you can see, server-side validation offers the added advantage of being able to perform checks that are not possible on the client side.

In our case, we can verify whether a user with the same email already exists – and return a status of 409 CONFLICT if that’s the case.
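In isolation, that duplicate check boils down to a single stream predicate. Here's a minimal plain-Java sketch (the trimmed User class is a stand-in for the article's full model):

```java
import java.util.Arrays;
import java.util.List;

public class DuplicateEmailCheck {

    // trimmed stand-in for the article's User class (email field only)
    static class User {
        final String email;
        User(String email) { this.email = email; }
    }

    // true if any existing user already has the given email
    static boolean emailExists(List<User> users, String email) {
        return users.stream().anyMatch(it -> email.equals(it.email));
    }

    public static void main(String[] args) {
        List<User> users = Arrays.asList(
          new User("ana@yahoo.com"), new User("bob@yahoo.com"));
        System.out.println(emailExists(users, "ana@yahoo.com")); // true
        System.out.println(emailExists(users, "new@yahoo.com")); // false
    }
}
```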

We also need to define our list of users and initialize it with a few values:

private List<User> users = new ArrayList<>(Arrays.asList(
  new User("ana@yahoo.com", "pass", "Ana", 20),
  new User("bob@yahoo.com", "pass", "Bob", 30),
  new User("john@yahoo.com", "pass", "John", 40),
  new User("mary@yahoo.com", "pass", "Mary", 30)));

Note that we wrap the list in an ArrayList: Arrays.asList() alone returns a fixed-size list, which would throw an UnsupportedOperationException when the controller tries to add a new user.

Let’s also add a mapping for retrieving the list of users as a JSON object:

@GetMapping(value = "/users")
@ResponseBody
public List<User> getUsers() {
    return users;
}

The final item we need in our Spring MVC controller is a mapping to return the main page of our application:

@GetMapping("/userPage")
public String getUserProfilePage() {
    return "user";
}

We will take a look at the user.html page in more detail in the AngularJS section.

3.3. Spring MVC Configuration

Let’s add a basic MVC configuration to our application:

@Configuration
@EnableWebMvc
@ComponentScan(basePackages = "com.baeldung.springmvcforms")
class ApplicationConfiguration extends WebMvcConfigurerAdapter {

    @Override
    public void configureDefaultServletHandling(
      DefaultServletHandlerConfigurer configurer) {
        configurer.enable();
    }

    @Bean
    public InternalResourceViewResolver htmlViewResolver() {
        InternalResourceViewResolver bean = new InternalResourceViewResolver();
        bean.setPrefix("/WEB-INF/html/");
        bean.setSuffix(".html");
        return bean;
    }
}

3.4. Initializing the Application

Let’s create a class that implements the WebApplicationInitializer interface to run our application:

public class WebInitializer implements WebApplicationInitializer {

    public void onStartup(ServletContext container) throws ServletException {

        AnnotationConfigWebApplicationContext ctx
          = new AnnotationConfigWebApplicationContext();
        ctx.register(ApplicationConfiguration.class);
        ctx.setServletContext(container);
        container.addListener(new ContextLoaderListener(ctx));

        ServletRegistration.Dynamic servlet 
          = container.addServlet("dispatcher", new DispatcherServlet(ctx));
        servlet.setLoadOnStartup(1);
        servlet.addMapping("/");
    }
}

3.5. Testing Spring MVC Validation Using cURL

Before we implement the AngularJS client section, we can test our API using cURL with the command:

curl -i -X POST -H "Accept:application/json" \
  "localhost:8080/spring-mvc-forms/user?email=aaa&password=12&age=12"

The response is an array containing the default error messages:

[
    "not a well-formed email address",
    "size must be between 4 and 15",
    "may not be empty",
    "must be greater than or equal to 18"
]

4. AngularJS validation

Client-side validation is useful in creating a better user experience, as it provides the user with information on how to successfully submit valid data and enables them to be able to continue to interact with the application.

The AngularJS library has great support for adding validation requirements on form fields, handling error messages, and styling valid and invalid forms.

First, let’s create an AngularJS module that injects the ngMessages module, which is used for validation messages:

var app = angular.module('app', ['ngMessages']);

Next, let’s create an AngularJS service and controller that will consume the API built in the previous section.

4.1. The AngularJS Service

Our service will have two methods that call the MVC controller methods — one to save a user, and one to retrieve the list of users:

app.service('UserService',['$http', function ($http) {
	
    this.saveUser = function saveUser(user){
        return $http({
          method: 'POST',
          url: 'user',
          params: {email:user.email, password:user.password, 
            name:user.name, age:user.age},
          headers: {'Accept': 'application/json'}
        });
    }
	
    this.getUsers = function getUsers(){
        return $http({
          method: 'GET',
          url: 'users',
          headers: {'Accept': 'application/json'}
        }).then( function(response){
        	return response.data;
        } );
    }

}]);

4.2. The AngularJS Controller

The UserCtrl controller injects the UserService, calls the service methods, and handles the response and error messages:

app.controller('UserCtrl', ['$scope','UserService', function ($scope,UserService) {
	
	$scope.submitted = false;
	
	$scope.getUsers = function() {
		   UserService.getUsers().then(function(data) {
		       $scope.users = data;
	       });
	   }
    
    $scope.saveUser = function() {
    	$scope.submitted = true;
    	  if ($scope.userForm.$valid) {
            UserService.saveUser($scope.user)
              .then (function success(response) {
                  $scope.message = 'User added!';
                  $scope.errorMessage = '';
                  $scope.getUsers();
                  $scope.user = null;
                  $scope.submitted = false;
              },
              function error(response) {
                  if (response.status == 409) {
                    $scope.errorMessage = response.data[0];
            	  }
            	  else {
                    $scope.errorMessage = 'Error adding user!';
            	  }
                  $scope.message = '';
            });
    	  }
    }
   
   $scope.getUsers();
}]);

We can see in the example above that the service method is called only if the $valid property of userForm is true. Still, in this case, there is the additional check for duplicate emails, which can only be done on the server and is handled separately in the error() function.

Also notice that there is a submitted variable defined which will tell us if the form has been submitted or not.

Initially this variable will be false, and on invocation of the saveUser() method it becomes true. If we don’t want validation messages to show before the user submits the form, we can use the submitted variable to prevent this.

4.3. Form Using AngularJS Validation

In order to make use of the AngularJS library and our AngularJS module, we will need to add the scripts to our user.html page:

<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.5.6/angular.min.js">
</script>
<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.5.6/angular-messages.js">
</script>
<script src="js/app.js"></script>

Then we can use our module and controller by setting the ng-app and ng-controller properties:

<body ng-app="app" ng-controller="UserCtrl">

Let’s create our HTML form:

<form name="userForm" method="POST" novalidate 
  ng-class="{'form-error':submitted}" ng-submit="saveUser()" >
...
</form>

Note that we have to set the novalidate attribute on the form in order to prevent default HTML5 validation and replace it with our own.

The ng-class attribute adds the form-error CSS class dynamically to the form if the submitted variable has a value of true.

The ng-submit attribute defines the AngularJS controller function that will be called when the form is submitted. Using ng-submit instead of ng-click has the advantage that it also responds to submitting the form using the ENTER key.

Now let’s add the four input fields for the User attributes:

<label class="form-label">Email:</label>
<input type="email" name="email" required ng-model="user.email" class="form-input"/>

<label class="form-label">Password:</label>
<input type="password" name="password" required ng-model="user.password" 
  ng-minlength="4" ng-maxlength="15" class="form-input"/>

<label class="form-label">Name:</label>
<input type="text" name="name" ng-model="user.name" ng-trim="true" 
  required class="form-input" />

<label class="form-label">Age:</label>
<input type="number" name="age" ng-model="user.age" ng-min="18"
  class="form-input" required/>

Each input field has a binding to a property of the user variable through the ng-model attribute.

For setting validation rules, we use the HTML5 required attribute and several AngularJS-specific attributes: ng-minlength, ng-maxlength, ng-min, and ng-trim.

For the email field, we also use the type attribute with a value of email for client-side email validation.

In order to add error messages corresponding to each field, AngularJS offers the ng-messages directive, which loops through an input’s $error object and displays messages based on each validation rule.

Let’s add the directive for the email field right after the input definition:

<div ng-messages="userForm.email.$error" 
  ng-show="submitted && userForm.email.$invalid" class="error-messages">
    <p ng-message="email">Invalid email!</p>
    <p ng-message="required">Email is required!</p>
</div>

Similar error messages can be added for the other input fields.

We can control when the directive is displayed for the email field using the ng-show property with a boolean expression. In our example, we display the directive when the field has an invalid value, meaning the $invalid property is true, and the submitted variable is also true.

Only one error message will be displayed at a time for a field.

We can also add a check mark sign (the character ✓) after the input field in case the field is valid, depending on the $valid property:

<div class="check" ng-show="userForm.email.$valid">✓</div>

AngularJS validation also offers support for styling using CSS classes such as ng-valid and ng-invalid or more specific ones like ng-invalid-required and ng-invalid-minlength.

Let’s add the CSS property border-color:red for invalid inputs inside the form’s form-error class:

.form-error input.ng-invalid {
    border-color:red;
}

We can also show the error messages in red using a CSS class:

.error-messages {
    color:red;
}

After putting everything together, let’s see an example of how our client-side form validation will look when filled out with a mix of valid and invalid values:

AngularJS form validation example

5. Conclusion

In this tutorial, we’ve shown how we can combine client-side and server-side validation using AngularJS and Spring MVC.

As always, the full source code for the examples can be found over on GitHub.

To view the application, access the /userPage URL after running it.

New Stream, Comparator and Collector Functionality in Guava 21


1. Introduction

This article is the first in a series about the new features launched with Version 21 of the Google Guava library. We’ll discuss the newly added classes and some major changes from previous versions of Guava.

More specifically, we’ll discuss additions and changes in the common.collect package.

Guava 21 introduces some new and useful functionality in the common.collect package; let’s have a quick look at some of these new utilities and how we can get the most out of them.

2. Streams

We’re all excited about the latest addition of java.util.stream.Stream in Java 8. Well, Guava is now making good use of streams and provides what Oracle may have missed.

Streams is a static utility class, with some much-needed utilities for handling Java 8 streams.

2.1. Streams.stream()

Streams class provides four ways to create streams using Iterable, Iterator, Optional and Collection.

However, stream creation using a Collection is deprecated, as it’s provided by Java 8 out of the box:

List<Integer> numbers = Arrays.asList(1,2,3,4,5,6,7,8,9,10);
Stream<Integer> streamFromCollection = Streams.stream(numbers);
Stream<Integer> streamFromIterator = Streams.stream(numbers.iterator());
Stream<Integer> streamFromIterable = Streams.stream((Iterable<Integer>) numbers);
Stream<Integer> streamFromOptional = Streams.stream(Optional.of(1));
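For comparison, the deprecated Collection flavor has a direct JDK 8 equivalent that needs no Guava at all:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Stream;

public class PlainJdkStream {
    public static void main(String[] args) {
        List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5);
        // plain JDK 8: Collection.stream() replaces Streams.stream(Collection)
        Stream<Integer> stream = numbers.stream();
        System.out.println(stream.count()); // 5
    }
}
```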

The Streams class also provides flavors for OptionalDouble, OptionalLong and OptionalInt. These methods return a stream containing only that element, or an empty stream otherwise:

LongStream streamFromOptionalLong = Streams.stream(OptionalLong.of(1));
IntStream streamFromOptionalInt = Streams.stream(OptionalInt.of(1));
DoubleStream streamFromOptionalDouble = Streams.stream(OptionalDouble.of(1.0));

2.2. Streams.concat()

The Streams class provides methods for concatenating more than one homogeneous stream:

Stream<Integer> concatenatedStreams = Streams.concat(
  streamFromCollection, streamFromIterable, streamFromIterator);

The concat functionality comes in a few flavors – LongStream, IntStream and DoubleStream.

2.3. Streams.findLast()

The Streams class has a utility method to find the last element in a stream: findLast().

This method returns either the last element, or Optional.empty() if the stream contains no elements:

List<Integer> integers = Arrays.asList(1,2,3,4,5,6,7,8,9,10);
Optional<Integer> lastItem = Streams.findLast(integers.stream());

The findLast() method works for LongStream, IntStream and DoubleStream.
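As a side note, a similar result can be obtained with plain JDK 8 streams by reducing each pair of elements to the second one – a sketch, not part of the Guava API:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;

public class LastElementViaReduce {
    public static void main(String[] args) {
        List<Integer> integers = Arrays.asList(1, 2, 3, 4, 5);
        // keep discarding the earlier element; the reduction ends on the last one
        Optional<Integer> last = integers.stream()
          .reduce((first, second) -> second);
        System.out.println(last.orElse(-1)); // 5
    }
}
```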

2.4. Streams.mapWithIndex()

By using the mapWithIndex() method, each element of the stream carries information about its respective position (index):

Streams.mapWithIndex(Stream.of("a", "b", "c"), (str, index) -> str + ":" + index)

This will return Stream.of(“a:0″,”b:1″,”c:2”).

The same can be achieved with IntStream, LongStream and DoubleStream using the overloaded variants of mapWithIndex().
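Without Guava, the same effect is commonly achieved by streaming over an index range – a plain-JDK sketch:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class IndexedMapping {
    public static void main(String[] args) {
        List<String> source = Arrays.asList("a", "b", "c");
        // pair each element with its index by streaming over the indices
        List<String> indexed = IntStream.range(0, source.size())
          .mapToObj(i -> source.get(i) + ":" + i)
          .collect(Collectors.toList());
        System.out.println(indexed); // [a:0, b:1, c:2]
    }
}
```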

2.5. Streams.zip()

In order to map corresponding elements of two streams using some function, we can use the zip() method of Streams:

Streams.zip(
  Stream.of("candy", "chocolate", "bar"),
  Stream.of("$1", "$2","$3"),
  (arg1, arg2) -> arg1 + ":" + arg2
);

This will return Stream.of(“candy:$1″,”chocolate:$2″,”bar:$3”);

The resulting stream will only be as long as the shorter of the two input streams; if one stream is longer, its extra elements will be ignored.

3. Comparators

The Guava Ordering class is deprecated and slated for deletion in newer versions. Most of the functionality of the Ordering class is already available in JDK 8.

Guava introduces Comparators to provide additional features of Ordering which are not yet provided by the Java 8 standard libs.

Let’s have a quick look at these.

3.1. Comparators.isInOrder()

This method returns true if each element in the Iterable is greater than or equal to the preceding one, as specified by the Comparator:

List<Integer> integers = Arrays.asList(1,2,3,4,4,6,7,8,9,10);
boolean isInAscendingOrder = Comparators.isInOrder(
  integers, Comparator.naturalOrder());

3.2. Comparators.isInStrictOrder()

This is quite similar to the isInOrder() method, but it holds the condition strictly: an element cannot be equal to the preceding one; it has to be greater. The previous code would return false with this method.
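Conceptually, isInOrder() boils down to a pairwise comparison. Here's a plain-Java sketch of the non-strict variant (illustrative only, not Guava's actual implementation):

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class InOrderCheck {

    // non-strict: each element must be >= its predecessor
    static <T> boolean isInOrder(List<T> list, Comparator<? super T> cmp) {
        for (int i = 1; i < list.size(); i++) {
            if (cmp.compare(list.get(i - 1), list.get(i)) > 0) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        List<Integer> integers = Arrays.asList(1, 2, 3, 4, 4, 6);
        System.out.println(isInOrder(integers, Comparator.naturalOrder())); // true
        // the strict variant would reject the repeated 4: change "> 0" to ">= 0"
    }
}
```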

3.3. Comparators.lexicographical()

This API returns a new Comparator instance – which sorts in lexicographical (dictionary) order comparing corresponding elements pairwise. Internally, it creates a new instance of LexicographicalOrdering<S>().

4. MoreCollectors

MoreCollectors contains some very useful Collectors which are not present in Java 8’s java.util.stream.Collectors and are not tied to any com.google.common type.

Let’s go over a few of these.

4.1. MoreCollectors.toOptional()

Here, Collector converts a stream containing zero or one element to an Optional:

List<Integer> numbers = Arrays.asList(1);
Optional<Integer> number = numbers.stream()
  .map(e -> e * 2)
  .collect(MoreCollectors.toOptional());

If the stream contains more than one element, the collector will throw an IllegalArgumentException.

4.2. MoreCollectors.onlyElement()

With this API, the Collector takes a stream containing just one element and returns that element; if the stream contains more than one element, it throws an IllegalArgumentException, and if the stream contains zero elements, it throws a NoSuchElementException.
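The contract is easy to mirror in plain Java – an illustrative sketch of the same semantics, not the library's implementation:

```java
import java.util.Arrays;
import java.util.List;
import java.util.NoSuchElementException;

public class OnlyElement {

    // plain-Java sketch of MoreCollectors.onlyElement() semantics
    static <T> T onlyElement(List<T> list) {
        if (list.isEmpty()) {
            throw new NoSuchElementException("empty input");
        }
        if (list.size() > 1) {
            throw new IllegalArgumentException("more than one element");
        }
        return list.get(0);
    }

    public static void main(String[] args) {
        System.out.println(onlyElement(Arrays.asList(42))); // 42
    }
}
```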

5. Interners.InternerBuilder

This is a builder class for the already existing Interners in the Guava library. It provides some handy methods to define the concurrency level and the type (weak or strong) of the Interner you prefer:

Interner<String> interner = Interners.newBuilder()
  .concurrencyLevel(2)
  .weak()
  .build();

6. Conclusion

In this quick article, we explored the newly added functionality in the common.collect package of Guava 21.

The code for this article can be found over on GitHub, as always.

Introduction to JSONassert


1. Overview

In this article, we’ll have a look at the JSONAssert library – a library focused on understanding JSON data and writing complex JUnit tests using that data.

2. Maven Dependency

First, let’s add the Maven dependency:

<dependency>
    <groupId>org.skyscreamer</groupId>
    <artifactId>jsonassert</artifactId>
    <version>1.5.0</version>
</dependency>

Please check out the latest version of the library here.

3. Working with Simple JSON Data

3.1. Using the LENIENT Mode

Let’s start our tests with a simple JSON string comparison:

String actual = "{id:123, name:\"John\"}";
JSONAssert.assertEquals(
  "{id:123,name:\"John\"}", actual, JSONCompareMode.LENIENT);

The test will pass, as the expected JSON string and the actual JSON string are the same.

The comparison mode LENIENT means that even if the actual JSON contains extended fields, the test will still pass:

String actual = "{id:123, name:\"John\", zip:\"33025\"}";
JSONAssert.assertEquals(
  "{id:123,name:\"John\"}", actual, JSONCompareMode.LENIENT);

As we can see, the actual JSON contains an additional zip field that is not present in the expected String. Still, the test will pass.

This concept is useful in application development: it means our APIs can grow and return additional fields as required, without breaking the existing tests.

3.2. Using the STRICT Mode

The behavior mentioned in the previous sub-section can be easily changed by using the STRICT comparison mode:

String actual = "{id:123,name:\"John\"}";
JSONAssert.assertNotEquals(
  "{name:\"John\"}", actual, JSONCompareMode.STRICT);

Please note the use of assertNotEquals() in the above example.

3.3. Using a Boolean Instead of JSONCompareMode

The compare mode can also be defined by using an overloaded method that takes boolean instead of JSONCompareMode where LENIENT = false and STRICT = true:

String actual = "{id:123,name:\"John\",zip:\"33025\"}";
JSONAssert.assertEquals(
  "{id:123,name:\"John\"}", actual, JSONCompareMode.LENIENT);
JSONAssert.assertEquals(
  "{id:123,name:\"John\"}", actual, false);

actual = "{id:123,name:\"John\"}";
JSONAssert.assertNotEquals(
  "{name:\"John\"}", actual, JSONCompareMode.STRICT);
JSONAssert.assertNotEquals(
  "{name:\"John\"}", actual, true);

3.4. The Logical Comparison

As described earlier, JSONAssert makes a logical comparison of the data. This means that the ordering of elements does not matter while dealing with JSON objects:

String result = "{id:1,name:\"John\"}";
JSONAssert.assertEquals(
  "{name:\"John\",id:1}", result, JSONCompareMode.STRICT);
JSONAssert.assertEquals(
  "{name:\"John\",id:1}", result, JSONCompareMode.LENIENT);

Strict or not, the above test will pass in both cases.

Another example of logical comparison can be demonstrated by using different types for the same value:

JSONObject expected = new JSONObject();
JSONObject actual = new JSONObject();
expected.put("id", Integer.valueOf(12345));
actual.put("id", Double.valueOf(12345));

JSONAssert.assertEquals(expected, actual, JSONCompareMode.LENIENT);

The first thing to note here is that we are using JSONObject instead of a String as we did in the earlier examples. The next thing is that we have used an Integer for expected and a Double for actual. The test will pass irrespective of the types, because the logical value 12345 is the same for both of them.

Even in the case when we have nested object representation, this library works pretty well:

String result = "{id:1,name:\"Juergen\","
  + "address:{city:\"Hollywood\", state:\"LA\", zip:91601}}";
JSONAssert.assertEquals("{id:1,name:\"Juergen\","
  + "address:{city:\"Hollywood\", state:\"LA\", zip:91601}}", result, false);

3.5. Assertions with User Specified Messages

All the assertEquals() and assertNotEquals() methods accept a String message as the first parameter. This message adds some customization to our test cases by providing a meaningful description in the case of test failure:

String actual = "{id:123,name:\"John\"}";
String failureMessage = "Only one field is expected: name";
try {
    JSONAssert.assertEquals(failureMessage, 
      "{name:\"John\"}", actual, JSONCompareMode.STRICT);
} catch (AssertionError ae) {
    assertThat(ae.getMessage()).containsIgnoringCase(failureMessage);
}

In the case of any failure, the entire error message will make more sense:

Only one field is expected: name 
Unexpected: id

The first line is the user specified message and the second line is the additional message provided by the library.

4. Working with JSON Arrays

The comparison rules for JSON arrays differ a little, compared to JSON objects.

4.1. The Order of the Elements in an Array

The first difference is that, in STRICT comparison mode, the order of elements in an array has to be exactly the same. However, for LENIENT comparison mode, the order does not matter:

String result = "[Alex, Barbera, Charlie, Xavier]";
JSONAssert.assertEquals(
  "[Charlie, Alex, Xavier, Barbera]", result, JSONCompareMode.LENIENT);
JSONAssert.assertEquals(
  "[Alex, Barbera, Charlie, Xavier]", result, JSONCompareMode.STRICT);
JSONAssert.assertNotEquals(
  "[Charlie, Alex, Xavier, Barbera]", result, JSONCompareMode.STRICT);

This is pretty useful in the scenario where the API returns an array of sorted elements, and we want to verify if the response is sorted.

4.2. The Extended Elements in an Array

Another difference is that extended elements are not allowed when dealing with JSON arrays:

String result = "[1,2,3,4,5]";
JSONAssert.assertEquals(
  "[1,2,3,4,5]", result, JSONCompareMode.LENIENT);
JSONAssert.assertNotEquals(
  "[1,2,3]", result, JSONCompareMode.LENIENT);
JSONAssert.assertNotEquals(
  "[1,2,3,4,5,6]", result, JSONCompareMode.LENIENT);

The above example clearly demonstrates that, even with the LENIENT comparison mode, the items in the expected array have to match the items in the actual array exactly. Adding or removing even a single element will result in a failure.

4.3. Array Specific Operations

We also have a couple of other techniques to verify the contents of the arrays further.

Suppose we want to verify the size of the array. This can be achieved by using a concrete syntax as the expected value:

String names = "{names:[Alex, Barbera, Charlie, Xavier]}";
JSONAssert.assertEquals(
  "{names:[4]}", 
  names, 
  new ArraySizeComparator(JSONCompareMode.LENIENT));

The String “{names:[4]}” specifies the expected size of the array.

Let’s have a look at another comparison technique:

String ratings = "{ratings:[3.2,3.5,4.1,5,1]}";
JSONAssert.assertEquals(
  "{ratings:[1,5]}", 
  ratings, 
  new ArraySizeComparator(JSONCompareMode.LENIENT));

The above example verifies that the ratings array must contain between 1 and 5 elements – with ArraySizeComparator, the expected value [1,5] specifies the minimum and maximum allowed array size, not a range for the element values. If the array had fewer than 1 or more than 5 elements, the test would fail.

5. Advanced Comparison Example

Consider the use case where our API returns multiple ids, each one being an Integer value. This means that all the ids can be verified using a simple regular expression ‘\d‘.

The above regex can be combined with a CustomComparator and applied to all the values of all the ids. If any of the ids does not match the regex, the test will fail:

JSONAssert.assertEquals("{entry:{id:x}}", "{entry:{id:1, id:2}}", 
  new CustomComparator(
  JSONCompareMode.STRICT, 
  new Customization("entry.id", 
  new RegularExpressionValueMatcher<Object>("\\d"))));

JSONAssert.assertNotEquals("{entry:{id:x}}", "{entry:{id:1, id:as}}", 
  new CustomComparator(JSONCompareMode.STRICT, 
  new Customization("entry.id", 
  new RegularExpressionValueMatcher<Object>("\\d"))));

The “{id:x}” in the above example is nothing but a placeholder – the x can be replaced by anything, as it is the spot where the regex pattern ‘\d‘ will be applied. Since the id itself is inside another field, entry, the Customization specifies the position of the id so that the CustomComparator can perform the comparison.
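The single-digit behavior of the ‘\d‘ pattern is easy to confirm with plain java.util.regex (a stdlib sketch; note that matches() requires the entire value to match):

```java
import java.util.regex.Pattern;

public class DigitRegexDemo {
    public static void main(String[] args) {
        Pattern digit = Pattern.compile("\\d");
        System.out.println(digit.matcher("1").matches());  // true
        System.out.println(digit.matcher("as").matches()); // false
        // a multi-digit id would need "\\d+" instead
        System.out.println(digit.matcher("12").matches()); // false
    }
}
```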

6. Conclusion

In this quick article, we looked at various scenarios where JSONAssert can be helpful. We started with a super simple example and moved to more complex comparisons.

Of course, as always, the full source code of all the examples discussed here can be found over on GitHub.
