
Java Web Weekly, Issue 165


Lots of interesting writeups on Java 9 this week.

Here we go…

1. Spring and Java

>> From Microservices to Distributed Systems – Survival guide for Java Developers [eisele.net]

Another solid way of doing a microservice implementation 🙂

>> What’s the Top Java Logging Method on GitHub? String Concatenation vs Parameterized Logging [takipi.com]

Should we parameterize or concatenate? As usual, the answer is “it depends”.

>> Deterministic Execution on the JVM [infoq.com]

A very interesting article exploring JVM determinism, using Corda's deterministic classloading as a case study.

>> The Future of Java in the Enterprise – InfoQ’s Opinion [infoq.com]

InfoQ are going over the JVM landscape and checking which technologies have already crossed the chasm 🙂

>> Should I Implement the Arcane Iterator.remove() Method? Yes You (Probably) Should [jooq.org]

Just in case, it's better not to ignore the Iterator.remove() method.

>> Java Web Frameworks Index by RebelLabs [zeroturnaround.com]

The RebelLabs guys created a ranking of Java web frameworks by researching Stack Overflow, LinkedIn, GitHub, etc. Quite interesting data here.

>> The Dangers of Race Conditions in Five Minutes [sitepoint.com]

Revisiting the basics and consequences of race conditions.

>> Lazy Computations in Java with a Lazy Type [sitepoint.com]

If you're missing a tool in Java, you can always build it yourself. The article goes through a case study of the design and implementation of a Lazy type in Java.

>> Java 9 Will Adjust Memory Limits if Running with Docker [infoq.com]

The JVM is not aware that it is running in a container, which can cause multiple problems. Java 9 brings a solution to this.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> CockroachDB beta-20160829 [jepsen.io]

A deep dive into the CockroachDB persistence guarantees.

This one, like many of the Jepsen articles, is an insightful read even if you're not using CockroachDB (which you probably aren't).

>> ElasticSearch API cheatsheet [frankel.ch]

The most important ElasticSearch API operations in one place.

>> CQRS and Event Sourcing with Lagom [codecentric.de]

And yet another approach to CQRS and Event Sourcing – this time with Lagom from Lightbend – the company behind Scala and Akka.

>> MariaDB Dialects [in.relation.to]

A super short overview of MariaDB Dialects.

>> The MySQL Dialect refactoring [in.relation.to]

And some very nice simplifications of dialects in Hibernate – a good example of how a mature framework keeps evolving.

Also worth reading:

3. Musings

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> The power will corrupt you in 3, 2, 1… [dilbert.com]

>> Can I create my own job? [dilbert.com]

>> No idea why he succeeded. [dilbert.com]

5. Pick of the Week

>> One Thing [randsinrepose.com]


Finding Max/Min of a List or Collection


1. Introduction

A quick intro on how to find the min/max value from a given list/collection with the powerful Stream API in Java 8.

2. Find Max in a List of Integers

We can use the max() method provided by the java.util.stream.Stream interface. In the example below, we first map to an IntStream, whose max() takes no arguments:

@Test
public void whenListIsOfIntegerThenMaxCanBeDoneUsingIntegerComparator() {
    // given
    List<Integer> listOfIntegers = Arrays.asList(1, 2, 3, 4, 56, 7, 89, 10);
    Integer expectedResult = 89;

    // then
    Integer max = listOfIntegers
      .stream()
      .mapToInt(v -> v)
      .max().orElseThrow(NoSuchElementException::new);

    assertEquals("Should be 89", expectedResult, max);
}

Let’s take a closer look at the code:

  1. Calling stream() method on the list to get a stream of values from the list
  2. Calling mapToInt(value -> value) on the stream to get an IntStream
  3. Calling max() method on the stream to get the max value
  4. Calling orElseThrow() to throw an exception if no value is received from max()
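
Alternatively, a quick sketch of the same lookup that stays on the Stream<Integer> and passes a comparator to max() directly, avoiding the IntStream detour:

Integer max = listOfIntegers
  .stream()
  .max(Comparator.naturalOrder())
  .orElseThrow(NoSuchElementException::new);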

3. Find Min with Custom Objects

In order to find the min/max of custom objects, we can provide a comparator, either as a lambda expression or built with Comparator.comparing(), for our preferred sorting logic.

Let’s first define the custom POJO:

class Person {
    String name;
    Integer age;
      
    // standard constructors, getters and setters
}

We want to find the Person object with the minimum age:

@Test
public void whenListIsOfPersonObjectThenMinCanBeDoneUsingCustomComparatorThroughLambda() {
    // given
    Person alex = new Person("Alex", 23);
    Person john = new Person("John", 40);
    Person peter = new Person("Peter", 32);
    List<Person> people = Arrays.asList(alex, john, peter);

    // then
    Person minByAge = people
      .stream()
      .min(Comparator.comparing(Person::getAge))
      .orElseThrow(NoSuchElementException::new);

    assertEquals("Should be Alex", alex, minByAge);
}

Let’s have a look at this logic:

  1. Calling stream() method on the list to get a stream of values from the list
  2. Calling the min() method on the stream to get the minimum value. We pass a comparator created with Comparator.comparing(), which defines the sorting logic used to decide the minimum value
  3. Calling orElseThrow() to throw an exception if no value is received from min()
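
For completeness, here's a sketch of the symmetric case, finding the oldest Person with max() (the test name and expected value are illustrative):

@Test
public void whenListIsOfPersonObjectThenMaxCanBeDoneUsingCustomComparatorThroughLambda() {
    // given
    Person alex = new Person("Alex", 23);
    Person john = new Person("John", 40);
    Person peter = new Person("Peter", 32);
    List<Person> people = Arrays.asList(alex, john, peter);

    // then
    Person maxByAge = people
      .stream()
      .max(Comparator.comparing(Person::getAge))
      .orElseThrow(NoSuchElementException::new);

    assertEquals("Should be John", john, maxByAge);
}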

4. Conclusion

In this quick article, we explored how the max() and min() methods from Java 8’s Stream API can be used to find the maximum and minimum value from a List/Collection.

As always, the code is available over on GitHub.

Introduction to jOOL


1. Overview

In this article, we will be looking at the jOOL library – another product from jOOQ.

2. Maven Dependency

Let’s start by adding a Maven dependency to your pom.xml:

<dependency>
    <groupId>org.jooq</groupId>
    <artifactId>jool</artifactId>
    <version>0.9.12</version>
</dependency>

You can find the latest version here.

3. Functional Interfaces

In Java 8, functional interfaces are quite limited. They accept at most two parameters and do not offer many additional features.

jOOL fixes that by providing a set of new functional interfaces that can accept up to 16 parameters (from Function1 up to Function16) and are enriched with additional handy methods.

For example, to create a function that takes three arguments, we can use Function3:

Function3<String, String, String, Integer> lengthSum
  = (v1, v2, v3) -> v1.length() + v2.length() + v3.length();
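
Applying it works as expected; a quick sketch:

Integer sum = lengthSum.apply("a", "bb", "ccc");

assertEquals(sum, (Integer) 6);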

In pure Java, you would need to implement such an interface yourself. Besides that, functional interfaces from jOOL have an applyPartially() method that allows us to perform partial application easily:

Function2<Integer, Integer, Integer> addTwoNumbers = (v1, v2) -> v1 + v2;
Function1<Integer, Integer> addToTwo = addTwoNumbers.applyPartially(2);

Integer result = addToTwo.apply(5);

assertEquals(result, (Integer) 7);

When we have a method that is of a Function2 type, we can transform it easily to a standard Java BiFunction by using a toBiFunction() method:

BiFunction<Integer, Integer, Integer> biFunc = addTwoNumbers.toBiFunction();

Similarly, there is a toFunction() method in Function1 type.
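
For example, a minimal sketch of converting a Function1 to a standard java.util.function.Function:

Function1<Integer, Integer> increment = i -> i + 1;
Function<Integer, Integer> javaFunction = increment.toFunction();

assertEquals(javaFunction.apply(5), (Integer) 6);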

4. Tuples

A tuple is a very important construct in a functional programming world. It’s a typed container for values where each value can have a different type. Tuples are often used as function arguments.

They’re also very useful when doing transformations on a stream of events. In jOOL, we have tuples that can wrap from one up to sixteen values, provided by Tuple1 up to Tuple16 types:

tuple(2, 2)

And for four values:

tuple(1,2,3,4);
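
Tuple values can be accessed through their positional fields (v1, v2, and so on); a quick sketch:

Tuple4<Integer, Integer, Integer, Integer> t = tuple(1, 2, 3, 4);

assertEquals(t.v1, (Integer) 1);
assertEquals(t.v4, (Integer) 4);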

Let's consider an example in which we have a sequence of tuples that carry three values:

Seq<Tuple3<String, String, Integer>> personDetails = Seq.of(
  tuple("michael", "similar", 49),
  tuple("jodie", "variable", 43));
Tuple2<String, String> tuple = tuple("winter", "summer");

List<Tuple4<String, String, String, String>> result = personDetails
  .map(t -> t.limit2().concat(tuple)).toList();

assertEquals(
  result,
  Arrays.asList(tuple("michael", "similar", "winter", "summer"), tuple("jodie", "variable", "winter", "summer"))
);

We can use different kinds of transformations on tuples. First, we call the limit2() method to take only the first two values from the Tuple3. Then, we call the concat() method to concatenate the two tuples.

As a result, we get values of the Tuple4 type.

5. Seq 

The Seq construct adds higher-level methods on top of a Stream, while often using its methods underneath.

5.1. Contains Operations

There are a couple of variants of methods that check for the presence of elements in a Seq. Some of them use the anyMatch() method from the Stream class:

assertTrue(Seq.of(1, 2, 3, 4).contains(2));

assertTrue(Seq.of(1, 2, 3, 4).containsAll(2, 3));

assertTrue(Seq.of(1, 2, 3, 4).containsAny(2, 5));

5.2. Join Operations

When we have two streams and we want to join them (similar to a SQL join operation on two datasets), the standard Stream class does not offer a very elegant way to do this:

Stream<Integer> left = Stream.of(1, 2, 4);
Stream<Integer> right = Stream.of(1, 2, 3);

List<Integer> rightCollected = right.collect(Collectors.toList());
List<Integer> collect = left
  .filter(rightCollected::contains)
  .collect(Collectors.toList());

assertEquals(collect, Arrays.asList(1, 2));

We need to collect the right stream into a list to prevent a java.lang.IllegalStateException: stream has already been operated upon or closed. Next, we need to perform a side-effecting operation by accessing the rightCollected list from the filter method. This is an error-prone and inelegant way to join two data sets.

Fortunately, Seq has useful methods to do inner, left and right joins on data sets. Those methods hide the implementation details and expose an elegant API.

We can do an inner join by using an innerJoin() method:

assertEquals(
  Seq.of(1, 2, 4).innerJoin(Seq.of(1, 2, 3), (a, b) -> a == b).toList(),
  Arrays.asList(tuple(1, 1), tuple(2, 2))
);

We can do right and left joins accordingly:

assertEquals(
  Seq.of(1, 2, 4).leftOuterJoin(Seq.of(1, 2, 3), (a, b) -> a == b).toList(),
  Arrays.asList(tuple(1, 1), tuple(2, 2), tuple(4, null))
);

assertEquals(
  Seq.of(1, 2, 4).rightOuterJoin(Seq.of(1, 2, 3), (a, b) -> a == b).toList(),
  Arrays.asList(tuple(1, 1), tuple(2, 2), tuple(null, 3))
);

There is even a crossJoin() method that makes it possible to create a Cartesian product of two datasets:

assertEquals(
  Seq.of(1, 2).crossJoin(Seq.of("A", "B")).toList(),
  Arrays.asList(tuple(1, "A"), tuple(1, "B"), tuple(2, "A"), tuple(2, "B"))
);

5.3. Manipulating a Seq

Seq has many useful methods for manipulating sequences of elements. Let’s look at some of them.

We can use the cycle() method to repeatedly take elements from a source sequence. It creates an infinite stream, so we need to be careful when collecting results into a list; we use the limit() method to transform the infinite sequence into a finite one:

assertEquals(
  Seq.of(1, 2, 3).cycle().limit(9).toList(),
  Arrays.asList(1, 2, 3, 1, 2, 3, 1, 2, 3)
);

Let's say that we want to duplicate all elements of a sequence into a second sequence. The duplicate() method does exactly that:

assertEquals(
  Seq.of(1, 2, 3).duplicate().map((first, second) -> tuple(first.toList(), second.toList())),
  tuple(Arrays.asList(1, 2, 3), Arrays.asList(1, 2, 3))
);

The return type of the duplicate() method is a tuple of two sequences.

Let’s say that we have a sequence of integers and we want to split that sequence into two sequences using some predicate. We can use a partition() method:

assertEquals(
  Seq.of(1, 2, 3, 4).partition(i -> i > 2)
    .map((first, second) -> tuple(first.toList(), second.toList())),
  tuple(Arrays.asList(3, 4), Arrays.asList(1, 2))
);

5.4. Grouping Elements

Grouping elements by a key using the Stream API is cumbersome and non-intuitive because we need to use the collect() method with a Collectors.groupingBy collector.

Seq hides that code behind a groupBy() method that returns a Map, so there is no need to use the collect() method explicitly:

Map<Integer, List<Integer>> expectedAfterGroupBy = new HashMap<>();
expectedAfterGroupBy.put(1, Arrays.asList(1, 3));
expectedAfterGroupBy.put(0, Arrays.asList(2, 4));

assertEquals(
  Seq.of(1, 2, 3, 4).groupBy(i -> i % 2),
  expectedAfterGroupBy
);

5.5. Skipping Elements

Let's say that we have a sequence of elements and we want to skip elements while a predicate is matched. Once the predicate is no longer satisfied, the remaining elements land in the resulting sequence.

We can use a skipWhile() method for that:

assertEquals(
  Seq.of(1, 2, 3, 4, 5).skipWhile(i -> i < 3).toList(),
  Arrays.asList(3, 4, 5)
);

We can achieve the same result using a skipUntil() method:

assertEquals(
  Seq.of(1, 2, 3, 4, 5).skipUntil(i -> i == 3).toList(),
  Arrays.asList(3, 4, 5)
);

5.6. Zipping Sequences

When we’re processing sequences of elements, often there is a need to zip them into one sequence.

The zip() API can be used to zip two sequences into one:

assertEquals(
  Seq.of(1, 2, 3).zip(Seq.of("a", "b", "c")).toList(),
  Arrays.asList(tuple(1, "a"), tuple(2, "b"), tuple(3, "c"))
);

The resulting sequence contains tuples of two elements.

When we want to zip two sequences in a specific way, we can pass a BiFunction to the zip() method to define how the elements are combined:

assertEquals(
  Seq.of(1, 2, 3).zip(Seq.of("a", "b", "c"), (x, y) -> x + ":" + y).toList(),
  Arrays.asList("1:a", "2:b", "3:c")
);

Sometimes, it is useful to zip a sequence with the indexes of its elements, via the zipWithIndex() API:

assertEquals(
  Seq.of("a", "b", "c").zipWithIndex().toList(),
  Arrays.asList(tuple("a", 0L), tuple("b", 1L), tuple("c", 2L))
);

6. Converting Checked Exceptions to Unchecked

Let’s say that we have a method that takes a string and can throw a checked exception:

public Integer methodThatThrowsChecked(String arg) throws Exception {
    return arg.length();
}

Then we want to map the elements of a Stream by applying that method to each element. There is no way to propagate the checked exception higher, so we need to handle it inside the map() call:

List<Integer> collect = Stream.of("a", "b", "c").map(elem -> {
    try {
        return methodThatThrowsChecked(elem);
    } catch (Exception e) {
        e.printStackTrace();
        throw new RuntimeException(e);
    }
}).collect(Collectors.toList());

assertEquals(
    collect,
    Arrays.asList(1, 1, 1)
);

There is not much we can do with that exception because of the design of functional interfaces in Java, so in the catch clause, we convert the checked exception into an unchecked one.

Fortunately, jOOL offers an Unchecked class with methods that can convert checked exceptions into unchecked exceptions:

List<Integer> collect = Stream.of("a", "b", "c")
  .map(Unchecked.function(elem -> methodThatThrowsChecked(elem)))
  .collect(Collectors.toList());

assertEquals(
  collect,
  Arrays.asList(1, 1, 1)
);

We wrap the call to methodThatThrowsChecked() in Unchecked.function(), which handles the conversion of exceptions underneath.

7. Conclusion

This article shows how to use the jOOL library that adds useful additional methods to the Java standard Stream API.

The implementation of all these examples and code snippets can be found in the GitHub project – this is a Maven project, so it should be easy to import and run as it is.

Introduction to Cobertura


1. Overview

In this article, we will demonstrate several aspects of generating code coverage reports using Cobertura.

Simply put, Cobertura is a reporting tool that calculates test coverage for a codebase – the percentage of branches/lines accessed by unit tests in a Java project.

2. Maven Plugin

2.1. Maven Configuration

In order to start calculating code coverage in your Java project, you need to declare the Cobertura Maven plugin in your pom.xml file under the reporting section:

<reporting>
    <plugins>
        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>cobertura-maven-plugin</artifactId>
            <version>2.7</version>
        </plugin>
    </plugins>
</reporting>

You can always check the latest version of the plugin in the Maven central repository.

Once done, go ahead and run Maven specifying cobertura:cobertura as a goal.
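
For example, from the project's root directory:

mvn cobertura:cobertura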

This will create a detailed HTML-style report showing code coverage statistics gathered via code instrumentation.

The line coverage metric shows how many statements are executed by the unit test run, while the branch coverage metric focuses on how many branches are covered by those tests.

For each conditional, you have two branches, so basically, you’ll end up having twice as many branches as conditionals.

The complexity factor reflects the complexity of the code — it goes up when the number of branches in code increases.

In theory, the more branches you have, the more tests you need to implement in order to increase the branch coverage score.

2.2. Configuring Code Coverage Calculation and Checks

You can ignore/exclude a specific set of classes from code instrumentation using the ignore and the exclude tags:

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>cobertura-maven-plugin</artifactId>
    <version>2.7</version>
    <configuration>
        <instrumentation>
            <ignores>
                <ignore>com/baeldung/algorithms/dijkstra/*</ignore>
            </ignores>
            <excludes>
                <exclude>com/baeldung/algorithms/dijkstra/*</exclude>
            </excludes>
        </instrumentation>
    </configuration>
</plugin>

After calculating the code coverage comes the check phase. The check phase ensures that a certain level of code coverage is reached.

Here’s a basic example on how to configure the check phase:

<configuration>
    <check>
        <haltOnFailure>true</haltOnFailure>
        <branchRate>75</branchRate>
        <lineRate>85</lineRate>
        <totalBranchRate>75</totalBranchRate>
        <totalLineRate>85</totalLineRate>
        <packageLineRate>75</packageLineRate>
        <packageBranchRate>85</packageBranchRate>
        <regexes>
            <regex>
                <pattern>com.baeldung.algorithms.dijkstra.*</pattern>
                <branchRate>60</branchRate>
                <lineRate>50</lineRate>
             </regex>
        </regexes>
    </check>
</configuration>

When using the haltOnFailure flag, Cobertura will cause the build to fail if one of the specified checks fails.

The branchRate/lineRate tags specify the minimum acceptable branch/line coverage score required after code instrumentation. These checks can be expanded to the package level using the packageLineRate/packageBranchRate tags.

It is also possible to declare specific rule checks for classes with names following a specific pattern by using the regex tag. In the example above, we ensure that a specific line/branch coverage score must be reached for classes in the com.baeldung.algorithms.dijkstra package and below.

3. Eclipse Plugin

3.1. Installation

Cobertura is also available as an Eclipse plugin called eCobertura. In order to install eCobertura for Eclipse, you need to follow the steps below and have Eclipse version 3.5 or greater installed:

Step 1: From the Eclipse menu, select Help → Install New Software. Then, in the Work with field, enter http://ecobertura.johoop.de/update/:

Step 2: Select eCobertura Code Coverage, click “next”, and then follow the steps in the installation wizard.

Now that eCobertura is installed, restart Eclipse and show the coverage session view under Window → Show View → Other → Cobertura.

3.2. Using Eclipse Kepler or Later

For the newer version of Eclipse (Kepler, Luna, etc.), the installation of eCobertura may cause some problems related to JUnit — the newer version of JUnit packaged with Eclipse is not fully compatible with eCobertura‘s dependencies checker:

Cannot complete the install because one or more required items could not be found.
  Software being installed: eCobertura 0.9.8.201007202152 (ecobertura.feature.group
     0.9.8.201007202152)
  Missing requirement: eCobertura UI 0.9.8.201007202152 (ecobertura.ui 
     0.9.8.201007202152) requires 'bundle org.junit4 0.0.0' but it could not be found
  Cannot satisfy dependency:
    From: eCobertura 0.9.8.201007202152 
    (ecobertura.feature.group 0.9.8.201007202152)
    To: ecobertura.ui [0.9.8.201007202152]

As a workaround, you can download an older version of JUnit and place it into the Eclipse plugins folder.

This can be done by deleting the folder org.junit.*** from %ECLIPSE_HOME%/plugins, and then copying the same folder from an older Eclipse installation that is compatible with eCobertura.

Once done, restart your Eclipse IDE and re-install the plugin using the corresponding update site.

3.3. Code Coverage Reports in Eclipse

In order to calculate code coverage by a Unit Test, right-click your project/test to open the context menu, then choose the option Cover As → JUnit Test.

Under the Coverage Session view, you can check the line/branch coverage report per class.

Java 8 users may encounter a common error when calculating code coverage:

java.lang.VerifyError: Expecting a stackmap frame at branch target ...

In this case, Java is complaining about some methods not having a proper stack map, due to the stricter bytecode verifier introduced in newer versions of Java.

This issue can be solved by disabling verification in the Java Virtual Machine.

To do so, right-click your project to open the context menu, select Cover As, and then open the Coverage Configurations view. In the arguments tab, add the -noverify flag as a VM argument. Finally, click on the coverage button to launch coverage calculation.

You can also use the flag -XX:-UseSplitVerifier, but this only works with Java 6 and 7, as the split verifier is no longer supported in Java 8.

4. Conclusion

In this article, we have shown briefly how to use Cobertura to calculate code coverage in a Java project. We have also described the steps required to install eCobertura in your Eclipse environment.

Cobertura is a great yet simple code coverage tool, but it is no longer actively maintained and is currently outclassed by newer, more powerful tools like JaCoCo.

Finally, you can check out the example provided in this article in the GitHub project.

Introduction to RabbitMQ


1. Overview

Decoupling of software components is one of the most important parts of software design. One way of achieving this is by using messaging systems, which provide an asynchronous way of communication between components (services). In this article, we will cover one such system: RabbitMQ.

RabbitMQ is a message broker that implements the Advanced Message Queuing Protocol (AMQP). It provides client libraries for major programming languages.

Besides decoupling software components, RabbitMQ can be used for:

  • Performing background operations
  • Performing asynchronous operations

2. Messaging Model

First, let’s have a quick, high-level look at how messaging works.

Simply put, there are two kinds of applications interacting with a messaging system: producers and consumers. Producers send (publish) messages to a broker, and consumers receive messages from the broker. Usually, these programs (software components) run on different machines, and RabbitMQ acts as the communication middleware between them.

In this article, we will discuss a simple example with two services which will communicate using RabbitMQ. One of the services will publish messages to RabbitMQ and the other one will consume them.

3. Setup

To begin, let's run RabbitMQ using the official setup guide here.

We’ll naturally use the Java client for interacting with RabbitMQ server; the Maven dependency for this client is:

<dependency>
    <groupId>com.rabbitmq</groupId>
    <artifactId>amqp-client</artifactId>
    <version>4.0.0</version>
</dependency>

After running the RabbitMQ broker using the official guide, we need to connect to it using the Java client:

ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost");
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();

We use the ConnectionFactory to set up the connection with the server; it takes care of the protocol (AMQP) and authentication as well. Here we connect to the server on localhost; we can modify the host name by using the setHost function.

We can use setPort to set the port if the default port is not used by the RabbitMQ server; the default AMQP port for RabbitMQ is 5672:

factory.setPort(15678);

We can also set the username and the password:

factory.setUsername("user1");
factory.setPassword("MyPassword");

Further, we will use this connection for publishing and consuming messages.

4. Producer

Consider a simple scenario where a web application allows users to add new products to a website. Any time a new product is added, we need to send an email to customers.

First, let’s define a queue:

channel.queueDeclare("products_queue", false, false, false, null);
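// queueDeclare(queue, durable, exclusive, autoDelete, arguments)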

Each time a user adds a new product, we publish a message to the queue:

String message = "product details"; 
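// publish to the default ("") exchange; the routing key selects the queue by name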
channel.basicPublish("", "products_queue", null, message.getBytes());

Lastly, we close the channel and the connection:

channel.close();
connection.close();

This message will be consumed by another service, which is responsible for sending emails to customers.

5. Consumer

Let's see how we can implement the consumer side; we're going to declare the same queue:

channel.queueDeclare("products_queue", false, false, false, null);

Here's how we define the consumer that will process messages from the queue asynchronously (the second argument to basicConsume() enables automatic acknowledgment):

Consumer consumer = new DefaultConsumer(channel) {
    @Override
     public void handleDelivery(
        String consumerTag,
        Envelope envelope, 
        AMQP.BasicProperties properties, 
        byte[] body) throws IOException {
 
            String message = new String(body, "UTF-8");
            // process the message
     }
};
channel.basicConsume("products_queue", true, consumer);

6. Conclusion

This simple article covered the basic concepts of RabbitMQ and discussed a simple example using it.

The full implementation of this tutorial can be found in the GitHub project.

AWS Lambda Using DynamoDB With Java


1. Introduction

AWS Lambda is a serverless computing service provided by Amazon Web Services, and AWS DynamoDB is a NoSQL database service also provided by Amazon.

Interestingly, DynamoDB supports both document store and key-value store and is fully managed by AWS.

Before we start, note that this tutorial requires a valid AWS account (you can create one here). Also, it’s a good idea to first read the AWS Lambda with Java article.

2. Maven Dependencies

To enable Lambda, we need the following dependency, which can be found on Maven Central:

<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-lambda-java-core</artifactId>
    <version>1.1.0</version>
</dependency>

To use different AWS resources, we need the following dependency, which can also be found on Maven Central:

<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-lambda-java-events</artifactId>
    <version>1.3.0</version>
</dependency>

And to build the application, we’re going to use the Maven Shade Plugin:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>3.0.0</version>
    <configuration>
        <createDependencyReducedPom>false</createDependencyReducedPom>
    </configuration>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
        </execution>
    </executions>
</plugin>

3. Lambda Code

There are different ways of creating handlers in a lambda application:

  • MethodHandler
  • RequestHandler
  • RequestStreamHandler

We will use the RequestHandler interface in our application. We'll accept a PersonRequest in JSON format, and the response will be a PersonResponse, also in JSON format:

public class PersonRequest {
    private String firstName;
    private String lastName;
    
    // standard getters and setters
}
public class PersonResponse {
    private String message;
    
    // standard getters and setters
}

Next is our entry point class, which implements the RequestHandler interface:

public class SavePersonHandler 
  implements RequestHandler<PersonRequest, PersonResponse> {
    
    private DynamoDB dynamoDb;
    private String DYNAMODB_TABLE_NAME = "Person";
    private Regions REGION = Regions.US_WEST_2;

    public PersonResponse handleRequest(
      PersonRequest personRequest, Context context) {
 
        this.initDynamoDbClient();

        persistData(personRequest);

        PersonResponse personResponse = new PersonResponse();
        personResponse.setMessage("Saved Successfully!!!");
        return personResponse;
    }

    private PutItemOutcome persistData(PersonRequest personRequest) 
      throws ConditionalCheckFailedException {
        return this.dynamoDb.getTable(DYNAMODB_TABLE_NAME)
          .putItem(
            new PutItemSpec().withItem(new Item()
              .withString("firstName", personRequest.getFirstName())
              .withString("lastName", personRequest.getLastName());
    }

    private void initDynamoDbClient() {
        AmazonDynamoDBClient client = new AmazonDynamoDBClient();
        client.setRegion(Region.getRegion(REGION));
        this.dynamoDb = new DynamoDB(client);
    }
}

When we implement the RequestHandler interface, we need to implement handleRequest() for the actual processing of the request. As for the rest of the code, we have:

  • PersonRequest object – which will contain the request values passed in JSON format
  • Context object – used to get information from the Lambda execution environment
  • PersonResponse – which is the response object for the lambda request

When creating a DynamoDB object, we’ll first create the AmazonDynamoDBClient object and use that to create a DynamoDB object. Note that the region is mandatory.

To add items to the DynamoDB table, we make use of a PutItemSpec object, specifying the attribute names and their values.

We don't need any predefined schema in the DynamoDB table; we just need to define the primary key column name, which is "id" in our case.
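
Since the table's primary key is "id", the item we put should include it as well. Here's a minimal sketch, assuming PersonRequest also exposes an id field (the POJO shown earlier only includes firstName and lastName):

new PutItemSpec().withItem(new Item()
  .withNumber("id", personRequest.getId())
  .withString("firstName", personRequest.getFirstName())
  .withString("lastName", personRequest.getLastName()))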

4. Building the Deployment File

To build the lambda application, we need to execute the following Maven command:

mvn clean package shade:shade

The Lambda application will be compiled and packaged into a JAR file under the target folder.

5. Creating the DynamoDB Table

Follow these steps to create the DynamoDB table:

  • Login to AWS Account
  • Click “DynamoDB” that can be located under “All Services”
  • This page will show already created DynamoDB tables (if any)
  • Click “Create Table” button
  • Provide “Table name” and “Primary Key” with its datatype as “Number”
  • Click on “Create” button
  • Table will be created

6. Creating the Lambda Function

Follow these steps to create the Lambda function:

  • Login to AWS Account
  • Click “Lambda” that can be located under “All Services”
  • This page will show already created Lambda functions (if any); if none exist, click on “Get Started Now”
  • “Select blueprint” -> Select “Blank Function”
  • “Configure triggers” -> Click “Next” button
  • “Configure function”
    • “Name”: SavePerson
    • “Description”: Save Person to DDB
    • “Runtime”: Select “Java 8”
    • “Upload”: Click “Upload” button and select the jar file of lambda application
  • “Handler”: com.baeldung.lambda.dynamodb.SavePersonHandler
  • “Role”: Select “Create a custom role”
  • A new window will pop and will allow configuring IAM role for lambda execution and we need to add the DynamoDB grants in it. Once done, click “Allow” button
  • Click “Next” button
  • “Review”: Review the configuration
  • Click “Create function” button

7. Testing the Lambda Function

The next step is to test the Lambda function:

  • Click the “Test” button
  • The “Input test event” window will be shown. Here, we’ll provide the JSON input for our request:
{
  "id": 1,
  "firstName": "John",
  "lastName": "Doe",
  "age": 30,
  "address": "United States"
}
  • Click “Save and test” or “Save” button
  • The output can be seen on “Execution result” section:
{
  "message": "Saved Successfully!!!"
}
  • We also need to check in DynamoDB that the record is persisted:
    • Go to “DynamoDB” Management Console
    • Select the table “Person”
    • Select the “Items” tab
    • Here you can see the person’s details that were passed in the request to the Lambda application
  • So the request is successfully processed by our lambda application

8. Conclusion

In this quick article, we have learned how to create a Lambda application with DynamoDB and Java 8. The detailed instructions should give you a head start in setting everything up.

And, as always, the full source code for the example app can be found over on GitHub.

Guide to java.util.concurrent.Locks


1. Overview

Simply put, a lock is a more flexible and sophisticated thread synchronization mechanism than the standard synchronized block.

The Lock interface has been around since Java 1.5. It is defined inside the java.util.concurrent.locks package, and it provides extensive operations for locking.

In this article, we’ll explore different implementations of the Lock interface and their applications.

2. Differences between Lock and Synchronized block

There are a few differences between using a synchronized block and using the Lock API:

  • A synchronized block is fully contained within a method, whereas the Lock API's lock() and unlock() operations can be placed in separate methods
  • A synchronized block does not support fairness; any thread can acquire the lock once it is released, and no preference can be specified. We can achieve fairness within the Lock APIs by specifying the fairness property, which makes sure that the longest-waiting thread is given access to the lock (see the sketch after this list)
  • A thread gets blocked if it can't get access to the synchronized block. The Lock API provides the tryLock() method: the thread acquires the lock only if it's available and not held by any other thread, which reduces the blocking time of threads waiting for the lock
  • A thread that is in the "waiting" state to acquire access to a synchronized block can't be interrupted. The Lock API provides a method lockInterruptibly() that can be used to interrupt a thread while it is waiting for the lock
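
For instance, fairness is requested through the ReentrantLock constructor; a quick sketch:

ReentrantLock fairLock = new ReentrantLock(true); // true enables the fairness policy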

3. Lock API

Let’s take a look at the methods in the Lock interface:

  • void lock() – acquire the lock if it’s available; if the lock is not available a thread gets blocked until the lock is released
  • void lockInterruptibly() – this is similar to the lock(), but it allows the blocked thread to be interrupted and resume the execution through a thrown java.lang.InterruptedException
  • boolean tryLock() – this is the non-blocking version of the lock() method; it attempts to acquire the lock immediately and returns true if locking succeeds
  • boolean tryLock(long timeout, TimeUnit timeUnit) – this is similar to tryLock(), except that it waits up to the given timeout before giving up trying to acquire the Lock
  • void unlock() –  unlocks the Lock instance

A locked instance should always be unlocked to avoid a deadlock condition. The recommended usage wraps the critical section in a try block and releases the lock in a finally block:

Lock lock = ...; 
lock.lock();
try {
    // access to the shared resource
} finally {
    lock.unlock();
}

In addition to the Lock interface, we have a ReadWriteLock interface, which maintains a pair of locks: one for read-only operations and one for write operations. The read lock may be simultaneously held by multiple threads as long as there is no write.

ReadWriteLock declares methods to acquire read or write locks:

  • Lock readLock() – returns the lock that’s used for reading
  • Lock writeLock() – returns the lock that’s used for writing

4. Lock implementations

4.1. ReentrantLock

The ReentrantLock class implements the Lock interface. It offers the same concurrency and memory semantics as the implicit monitor lock accessed using synchronized methods and statements, with extended capabilities.

Let's see how we can use ReentrantLock for synchronization:

public class SharedObject {
    //...
    ReentrantLock lock = new ReentrantLock();
    int counter = 0;

    public void perform() {
        lock.lock();
        try {
            // Critical section here
            counter++;
        } finally {
            lock.unlock();
        }
    }
    //...
}

We need to make sure that we wrap the lock() and unlock() calls in a try-finally block to avoid deadlock situations.

Let’s see how the tryLock() works:

public void performTryLock() throws InterruptedException {
    //...
    boolean isLockAcquired = lock.tryLock(1, TimeUnit.SECONDS);
    
    if(isLockAcquired) {
        try {
            //Critical section here
        } finally {
            lock.unlock();
        }
    }
    //...
}

In this case, the thread calling tryLock() will wait for one second and will give up waiting if the lock is not available.

4.2. ReentrantReadWriteLock

ReentrantReadWriteLock class implements the ReadWriteLock interface.

Let's see the rules for a thread acquiring the read lock or the write lock:

  • Read Lock – if no thread has acquired the write lock or requested it, multiple threads can acquire the read lock
  • Write Lock – if no threads are reading or writing, only one thread can acquire the write lock

Let’s see how to make use of the ReadWriteLock:

public class SynchronizedHashMapWithReadWriteLock {

    Map<String,String>  syncHashMap = new HashMap<>();
    ReadWriteLock lock = new ReentrantReadWriteLock();
    //...
    Lock writeLock = lock.writeLock();

    public void put(String key, String value) {
        try {
            writeLock.lock();
            syncHashMap.put(key, value);
        } finally {
            writeLock.unlock();
        }
    }
    //...
    public String remove(String key){
        try {
            writeLock.lock();
            return syncHashMap.remove(key);
        } finally {
            writeLock.unlock();
        }
    }
    //...
}

For both write methods, we need to surround the critical section with the write lock; only one thread can get access to it. The read methods use the read lock instead:

Lock readLock = lock.readLock();
//...
public String get(String key){
    try {
        readLock.lock();
        return syncHashMap.get(key);
    } finally {
        readLock.unlock();
    }
}

public boolean containsKey(String key) {
    try {
        readLock.lock();
        return syncHashMap.containsKey(key);
    } finally {
        readLock.unlock();
    }
}

For both read methods, we need to surround the critical section with the read lock. Multiple threads can get access to this section if no write operation is in progress.

4.3. StampedLock

StampedLock was introduced in Java 8. It also supports both read and write locks. However, its lock acquisition methods return a stamp that is used to release the lock or to check if the lock is still valid:

public class StampedLockDemo {
    Map<String,String> map = new HashMap<>();
    private StampedLock lock = new StampedLock();

    public void put(String key, String value){
        long stamp = lock.writeLock();
        try {
            map.put(key, value);
        } finally {
            lock.unlockWrite(stamp);
        }
    }

    public String get(String key) throws InterruptedException {
        long stamp = lock.readLock();
        try {
            return map.get(key);
        } finally {
            lock.unlockRead(stamp);
        }
    }
}

Another feature provided by StampedLock is optimistic locking. Most of the time, read operations don't need to wait for a write operation to complete, and as a result, the full-fledged read lock isn't required. Instead, we can try an optimistic read and upgrade to a read lock only if validation fails:

public String readWithOptimisticLock(String key) {
    long stamp = lock.tryOptimisticRead();
    String value = map.get(key);

    if(!lock.validate(stamp)) {
        stamp = lock.readLock();
        try {
            return map.get(key);
        } finally {
            lock.unlock(stamp);               
        }
    }
    return value;
}

5. Working with Conditions

The Condition interface provides the ability for a thread to wait for some condition to occur while executing the critical section.

This can occur when a thread acquires the access to the critical section but doesn’t have the necessary condition to perform its operation. For example, a reader thread can get access to the lock of a shared queue, which still doesn’t have any data to consume.

Traditionally Java provides wait(), notify() and notifyAll() methods for thread intercommunication. Conditions have similar mechanisms, but in addition, we can specify multiple conditions:

public class ReentrantLockWithCondition {

    Stack<String> stack = new Stack<>();
    int CAPACITY = 5;

    ReentrantLock lock = new ReentrantLock();
    Condition stackEmptyCondition = lock.newCondition();
    Condition stackFullCondition = lock.newCondition();

    public void pushToStack(String item) throws InterruptedException {
        try {
            lock.lock();
            while(stack.size() == CAPACITY){
                stackFullCondition.await();
            }
            stack.push(item);
            stackEmptyCondition.signalAll();
        } finally {
            lock.unlock();
        }
    }

    public String popFromStack() throws InterruptedException {
        try {
            lock.lock();
            while(stack.size() == 0){
                stackEmptyCondition.await();
            }
            return stack.pop();
        } finally {
            // signal while still holding the lock, then release it
            stackFullCondition.signalAll();
            lock.unlock();
        }
    }
    }
}

6. Conclusion

In this article, we have seen different implementations of the Lock interface and the newly introduced StampedLock class. We also explored how we can make use of the Condition interface to work with multiple conditions.

The complete code for this tutorial is available over on GitHub.

Spring Remoting with Hessian and Burlap


1. Overview

In the previous article, “Intro to Spring Remoting with HTTP Invokers”, we saw how easy it is to set up a client/server application that leverages remote method invocation (RMI) through Spring Remoting.

In this article, we will show how Spring Remoting supports the implementation of RMI using Hessian and Burlap instead.

2. Maven Dependencies

Both Hessian and Burlap are provided by the following library, which you will need to include explicitly in your pom.xml file:

<dependency>
    <groupId>com.caucho</groupId>
    <artifactId>hessian</artifactId>
    <version>4.0.38</version>
</dependency>

You can find the latest version on Maven Central.

3. Hessian

Hessian is a lightweight binary protocol from Caucho, the makers of the Resin application server. Hessian implementations exist for several platforms and languages, Java included.

In the following subsections, we will modify the “cab booking” example presented in the previous article to make the client and the server communicate using Hessian instead of the Spring Remoting HTTP-based protocol.

3.1. Exposing the Service

Let’s expose the service by configuring a RemoteExporter of type HessianServiceExporter, replacing the HttpInvokerServiceExporter previously used:

@Bean(name = "/booking") 
RemoteExporter bookingService() {
    HessianServiceExporter exporter = new HessianServiceExporter();
    exporter.setService(new CabBookingServiceImpl());
    exporter.setServiceInterface( CabBookingService.class );
    return exporter;
}

We can now start the server and keep it active while we prepare the client.

3.2. Client Application

Let’s implement the client. Here again, the modifications are quite simple — we need to replace the HttpInvokerProxyFactoryBean with a HessianProxyFactoryBean:

@Configuration
public class HessianClient {

    @Bean
    public HessianProxyFactoryBean hessianInvoker() {
        HessianProxyFactoryBean invoker = new HessianProxyFactoryBean();
        invoker.setServiceUrl("http://localhost:8080/booking");
        invoker.setServiceInterface(CabBookingService.class);
        return invoker;
    }

    public static void main(String[] args) throws BookingException {
        CabBookingService service
          = SpringApplication.run(HessianClient.class, args)
              .getBean(CabBookingService.class);
        out.println(
          service.bookRide("13 Seagate Blvd, Key Largo, FL 33037"));
    }
}

Let’s now run the client to make it connect to the server using Hessian.

4. Burlap

Burlap is another lightweight protocol from Caucho, based on XML. Caucho stopped maintaining it a long time ago, and for that reason, its support has been deprecated in the newest Spring releases, even though it is still present.

Therefore, you should continue using Burlap only if you have applications that are already distributed with it and that cannot easily be migrated to another Spring Remoting implementation.

4.1. Exposing the Service

We can use Burlap exactly in the same way that we used Hessian — we just have to choose the proper implementation:

@Bean(name = "/booking") 
RemoteExporter burlapService() {
    BurlapServiceExporter exporter = new BurlapServiceExporter();
    exporter.setService(new CabBookingServiceImpl());
    exporter.setServiceInterface( CabBookingService.class );
    return exporter;
}

As you can see, we just changed the type of exporter from HessianServiceExporter to BurlapServiceExporter. All the setup code can be left unchanged.

Again, let’s start the server and let’s keep it running while we work on the client.

4.2. Client Implementation

We can likewise swap Hessian for Burlap on the client side, replacing HessianProxyFactoryBean with BurlapProxyFactoryBean:

@Bean
public BurlapProxyFactoryBean burlapInvoker() {
    BurlapProxyFactoryBean invoker = new BurlapProxyFactoryBean();
    invoker.setServiceUrl("http://localhost:8080/booking");
    invoker.setServiceInterface(CabBookingService.class);
    return invoker;
}

We can now run the client and see how it connects successfully to the server application using Burlap.

5. Conclusion

With these quick examples, we showed how easy it is with Spring Remoting to choose among different technologies for implementing remote method invocation, and how you can develop an application that is completely unaware of the technical details of the protocol used to represent the remote method invocation.

As usual, you’ll find the sources over on GitHub, with clients for both Hessian and Burlap and the JUnit test CabBookingServiceTest.java that will take care of running both the server and the clients.


Introduction to cglib


1. Overview

In this article, we will be looking at the cglib (Code Generation Library) library. It is a bytecode instrumentation library used in many Java frameworks such as Hibernate or Spring. Bytecode instrumentation allows manipulating or creating classes after the compilation phase of a program.

2. Maven Dependency

To use cglib in your project, just add a Maven dependency (latest version can be found here):

<dependency>
    <groupId>cglib</groupId>
    <artifactId>cglib</artifactId>
    <version>3.2.4</version>
</dependency>

3. Cglib

Classes in Java are loaded dynamically at runtime. Cglib uses this feature of the Java language to make it possible to add new classes to an already running Java program.

Hibernate uses cglib for the generation of dynamic proxies. For example, it will not return the full object stored in the database; instead, it will return an instrumented version of the stored class that lazily loads values from the database on demand.

Popular mocking frameworks, like Mockito, use cglib for mocking methods. The mock is an instrumented class where methods are replaced by empty implementations.

We will be looking at the most useful constructs from cglib.

4. Implementing Proxy Using cglib

Let’s say that we have a PersonService class that has two methods:

public class PersonService {
    public String sayHello(String name) {
        return "Hello " + name;
    }

    public Integer lengthOfName(String name) {
        return name.length();
    }
}

Notice that the first method returns a String and the second one an Integer.

4.1. Returning the Same Value

We want to create a simple proxy class that will intercept calls to the sayHello() method. The Enhancer class allows us to create a proxy by dynamically extending the PersonService class, using the setSuperclass() method from the Enhancer class:

Enhancer enhancer = new Enhancer();
enhancer.setSuperclass(PersonService.class);
enhancer.setCallback((FixedValue) () -> "Hello Tom!");
PersonService proxy = (PersonService) enhancer.create();

String res = proxy.sayHello(null);

assertEquals("Hello Tom!", res);

FixedValue is a callback interface that simply returns a value from the proxied method. Executing the sayHello() method on the proxy returns the value specified in the callback.

4.2. Returning Value Depending on a Method Signature

The first version of our proxy has some drawbacks because we are not able to decide which methods the proxy should intercept and which methods should be invoked from the superclass. We can use the MethodInterceptor interface to intercept all calls to the proxy and decide whether we want to return a specific value or invoke the method from the superclass:

Enhancer enhancer = new Enhancer();
enhancer.setSuperclass(PersonService.class);
enhancer.setCallback((MethodInterceptor) (obj, method, args, proxy) -> {
    if (method.getDeclaringClass() != Object.class && method.getReturnType() == String.class) {
        return "Hello Tom!";
    } else {
        return proxy.invokeSuper(obj, args);
    }
});

PersonService proxy = (PersonService) enhancer.create();

assertEquals("Hello Tom!", proxy.sayHello(null));
int lengthOfName = proxy.lengthOfName("Mary");
 
assertEquals(4, lengthOfName);

In this example, we intercept all calls whose method signature is not from the Object class, meaning that methods such as toString() or hashCode() will not be intercepted. Besides that, we only intercept methods from PersonService that return a String. A call to the lengthOfName() method will not be intercepted because its return type is Integer.

5. Bean Creator

Another useful construct from cglib is the BeanGenerator class. It allows us to dynamically create beans and to add fields together with setter and getter methods. It can be used by code generation tools to generate simple POJO objects:

BeanGenerator beanGenerator = new BeanGenerator();

beanGenerator.addProperty("name", String.class);
Object myBean = beanGenerator.create();
Method setter = myBean.getClass().getMethod("setName", String.class);
setter.invoke(myBean, "some string value set by a cglib");

Method getter = myBean.getClass().getMethod("getName");
assertEquals("some string value set by a cglib", getter.invoke(myBean));

6. Creating Mixin

A mixin is a construct that allows combining multiple objects into one. We can combine the behavior of a couple of classes and expose that behavior as a single class or interface. The cglib mixins allow the combination of several objects into a single object. However, in order to do so, all objects that are included within a mixin must be backed by interfaces.

Let’s say that we want to create a mixin of two interfaces. We need to define both interfaces and their implementations:

public interface Interface1 {
    String first();
}

public interface Interface2 {
    String second();
}

public class Class1 implements Interface1 {
    @Override
    public String first() {
        return "first behaviour";
    }
}

public class Class2 implements Interface2 {
    @Override
    public String second() {
        return "second behaviour";
    }
}

To compose implementations of Interface1 and Interface2 we need to create an interface that extends both of them:

public interface MixinInterface extends Interface1, Interface2 { }

By using a create() method from the Mixin class we can include behaviors of Class1 and Class2 into a MixinInterface:

Mixin mixin = Mixin.create(
  new Class[]{ Interface1.class, Interface2.class, MixinInterface.class },
  new Object[]{ new Class1(), new Class2() }
);
MixinInterface mixinDelegate = (MixinInterface) mixin;

assertEquals("first behaviour", mixinDelegate.first());
assertEquals("second behaviour", mixinDelegate.second());

Calling methods on the mixinDelegate will invoke implementations from Class1 and Class2.

7. Conclusion

In this article, we looked at cglib and its most useful constructs. We created a proxy using the Enhancer class, used a BeanGenerator, and finally created a Mixin that included the behaviors of other classes.

Cglib is used extensively by the Spring framework. One example of Spring using a cglib proxy is adding security constraints to method calls. Instead of calling a method directly, Spring Security will first check (via a proxy) whether a specified security check passes, and delegate to the actual method only if the verification was successful. In this article, we saw how to create such a proxy for our own purposes.

The implementation of all these examples and code snippets can be found in the GitHub project – this is a Maven project, so it should be easy to import and run as it is.

Working with Relationships in Spring Data REST


1. Overview

In this article, we’re going to take a look at how to work with relationships between entities in Spring Data REST.

We will focus on the association resources that Spring Data REST exposes for a repository, considering each type of relationship that can be defined.

To avoid any extra setup, we will use the H2 embedded database for the examples. You can see the list of required dependencies in our Introduction to Spring Data REST article.

2. One-to-One Relationship

2.1. The Data Model

Let's define two entity classes, Library and Address, having a one-to-one relationship, using the @OneToOne annotation. The association is owned by the Library end:

@Entity
public class Library {

    @Id
    @GeneratedValue
    private long id;

    @Column
    private String name;

    @OneToOne
    @JoinColumn(name = "address_id")
    @RestResource(path = "libraryAddress", rel="address")
    private Address address;
    
    // standard constructor, getters, setters
}
@Entity
public class Address {

    @Id
    @GeneratedValue
    private long id;

    @Column(nullable = false)
    private String location;

    @OneToOne(mappedBy = "address")
    private Library library;

    // standard constructor, getters, setters
}

The @RestResource annotation is optional and can be used to customize the endpoint.

We must be careful to have different names for each association resource. Otherwise, we will encounter a JsonMappingException with the message: “Detected multiple association links with same relation type! Disambiguate association”.

The association name defaults to the property name and can be customized using the rel attribute of @RestResource annotation:

@OneToOne
@JoinColumn(name = "secondary_address_id")
@RestResource(path = "libraryAddress", rel="address")
private Address secondaryAddress;

If we were to add the secondaryAddress property above to the Library class, we would have two resources named address, and we would encounter a conflict.

We can resolve this by specifying a different value for the rel attribute or by omitting the RestResource annotation so that the resource name defaults to secondaryAddress.

2.2. The Repositories

In order to expose these entities as resources, let's create a repository interface for each of them, extending the CrudRepository interface:

public interface LibraryRepository extends CrudRepository<Library, Long> {}
public interface AddressRepository extends CrudRepository<Address, Long> {}

2.3. Creating the Resources

First, let’s add a Library instance to work with:

curl -i -X POST -H "Content-Type:application/json" 
  -d '{"name":"My Library"}' http://localhost:8080/libraries

The API returns the JSON object:

{
  "name" : "My Library",
  "_links" : {
    "self" : {
      "href" : "http://localhost:8080/libraries/1"
    },
    "library" : {
      "href" : "http://localhost:8080/libraries/1"
    },
    "address" : {
      "href" : "http://localhost:8080/libraries/1/libraryAddress"
    }
  }
}

Note that if you’re using curl on Windows, you have to escape the double-quote character inside the String that represents the JSON body:

-d "{\"name\":\"My Library\"}"

We can see in the response body that an association resource has been exposed at the libraries/{libraryId}/libraryAddress endpoint.

Before we create an association, sending a GET request to this endpoint will return an empty object.

However, before we can add the association, we must first create an Address instance:

curl -i -X POST -H "Content-Type:application/json" 
  -d '{"location":"Main Street nr 5"}' http://localhost:8080/addresses

The result of the POST request is a JSON object containing the Address record:

{
  "location" : "Main Street nr 5",
  "_links" : {
    "self" : {
      "href" : "http://localhost:8080/addresses/1"
    },
    "address" : {
      "href" : "http://localhost:8080/addresses/1"
    },
    "library" : {
      "href" : "http://localhost:8080/addresses/1/library"
    }
  }
}

2.4. Creating the Associations

After persisting both instances, we can establish the relationship by using one of the association resources.

This is done using the HTTP method PUT, which supports a media type of text/uri-list, and a body containing the URI of the resource to bind to the association.

Since the Library entity is the owner of the association, let’s add an address to a library:

curl -i -X PUT -d "http://localhost:8080/addresses/1" 
  -H "Content-Type:text/uri-list" http://localhost:8080/libraries/1/address

If successful, this returns status 204. To verify, let’s check the library association resource of the address:

curl -i -X GET http://localhost:8080/addresses/1/library

This should return the Library JSON object with name “My Library”.

To remove an association, we can call the endpoint with DELETE method, making sure to use the association resource of the owner of the relationship:

curl -i -X DELETE http://localhost:8080/libraries/1/libraryAddress

3. One-to-Many Relationship

A one-to-many relationship is defined using the @OneToMany and @ManyToOne annotations and can have the optional @RestResource annotation to customize the association resource.

3.1. The Data Model

To exemplify a one-to-many relationship, let’s add a new Book entity that will represent the “many” end of a relationship with the Library entity:

@Entity
public class Book {

    @Id
    @GeneratedValue
    private long id;
    
    @Column(nullable=false)
    private String title;
    
    @ManyToOne
    @JoinColumn(name="library_id")
    private Library library;
    
    // standard constructor, getter, setter
}

Let’s add the relationship to the Library class as well:

public class Library {
 
    //...
 
    @OneToMany(mappedBy = "library")
    private List<Book> books;
 
    //...
 
}

3.2. The Repository

We also need to create a BookRepository:

public interface BookRepository extends CrudRepository<Book, Long> { }

3.3. The Association Resources

In order to add a book to a library, we need to create a Book instance first by using the /books collection resource:

curl -i -X POST -d "{\"title\":\"Book1\"}" 
  -H "Content-Type:application/json" http://localhost:8080/books

And here is the response from the POST request:

{
  "title" : "Book1",
  "_links" : {
    "self" : {
      "href" : "http://localhost:8080/books/1"
    },
    "book" : {
      "href" : "http://localhost:8080/books/1"
    },
    "bookLibrary" : {
      "href" : "http://localhost:8080/books/1/library"
    }
  }
}

In the response body, we can see the association endpoint /books/{bookId}/library has been created.

Let’s associate the book with the library created in the previous section by sending a PUT request to the association resource that contains the URI of the library resource:

curl -i -X PUT -H "Content-Type:text/uri-list" 
-d "http://localhost:8080/libraries/1" http://localhost:8080/books/1/library

We can verify the books in the library by using GET method on the library’s /books association resource:

curl -i -X GET http://localhost:8080/libraries/1/books

The returned JSON object will contain a books array:

{
  "_embedded" : {
    "books" : [ {
      "title" : "Book1",
      "_links" : {
        "self" : {
          "href" : "http://localhost:8080/books/1"
        },
        "book" : {
          "href" : "http://localhost:8080/books/1"
        },
        "bookLibrary" : {
          "href" : "http://localhost:8080/books/1/library"
        }
      }
    } ]
  },
  "_links" : {
    "self" : {
      "href" : "http://localhost:8080/libraries/1/books"
    }
  }
}

To remove an association, we can use the DELETE method on the association resource:

curl -i -X DELETE http://localhost:8080/books/1/library

4. Many-to-Many Relationship

A many-to-many relationship is defined using the @ManyToMany annotation, to which we can add @RestResource.

4.1. The Data Model

To create an example of a many-to-many relationship, let’s add a new model class Author that will have a many-to-many relationship with the Book entity:

@Entity
public class Author {

    @Id
    @GeneratedValue
    private long id;

    @Column(nullable = false)
    private String name;

    @ManyToMany(cascade = CascadeType.ALL)
    @JoinTable(name = "book_author", 
      joinColumns = @JoinColumn(name = "book_id", referencedColumnName = "id"), 
      inverseJoinColumns = @JoinColumn(name = "author_id", 
      referencedColumnName = "id"))
    private List<Book> books;

    //standard constructors, getters, setters
}

Let’s add the association in the Book class as well:

public class Book {
 
    //...
 
    @ManyToMany(mappedBy = "books")
    private List<Author> authors;
 
    //...
}

4.2. The Repository

Let’s create a repository interface to manage the Author entity:

public interface AuthorRepository extends CrudRepository<Author, Long> { }

4.3. The Association Resources

As in the previous sections, we must first create the resources before we can establish the association.

Let’s first create an Author instance by sending a POST request to the /authors collection resource:

curl -i -X POST -H "Content-Type:application/json" 
  -d "{\"name\":\"author1\"}" http://localhost:8080/authors

Next, let’s add a second Book record to our database:

curl -i -X POST -H "Content-Type:application/json" 
  -d "{\"title\":\"Book 2\"}" http://localhost:8080/books

Let’s execute a GET request on our Author record to view the association URL:

{
  "name" : "author1",
  "_links" : {
    "self" : {
      "href" : "http://localhost:8080/authors/1"
    },
    "author" : {
      "href" : "http://localhost:8080/authors/1"
    },
    "books" : {
      "href" : "http://localhost:8080/authors/1/books"
    }
  }
}

Now we can create an association between the two Book records and the Author record by using the endpoint authors/1/books with the PUT method, which supports a media type of text/uri-list and can receive more than one URI.

To send multiple URIs, we have to separate them by a line break:

curl -i -X PUT -H "Content-Type:text/uri-list" 
  --data-binary @uris.txt http://localhost:8080/authors/1/books

The uris.txt file contains the URIs of the books, each on a separate line:

http://localhost:8080/books/1
http://localhost:8080/books/2

To verify both books have been associated with the author, we can send a GET request to the association endpoint:

curl -i -X GET http://localhost:8080/authors/1/books

And we receive this response:

{
  "_embedded" : {
    "books" : [ {
      "title" : "Book 1",
      "_links" : {
        "self" : {
          "href" : "http://localhost:8080/books/1"
        }
      //...
      }
    }, {
      "title" : "Book 2",
      "_links" : {
        "self" : {
          "href" : "http://localhost:8080/books/2"
        }
      //...
      }
    } ]
  },
  "_links" : {
    "self" : {
      "href" : "http://localhost:8080/authors/1/books"
    }
  }
}

To remove an association, we can send a request with DELETE method to the URL of the association resource followed by {bookId}:

curl -i -X DELETE http://localhost:8080/authors/1/books/1

5. Testing the Endpoints with TestRestTemplate

Let’s create a test class that injects a TestRestTemplate instance and defines the constants we will use:

@RunWith(SpringRunner.class)
@SpringBootTest(classes = SpringDataRestApplication.class, 
  webEnvironment = WebEnvironment.DEFINED_PORT)
public class SpringDataRelationshipsTest {

    @Autowired
    private TestRestTemplate template;

    private static final String BOOK_ENDPOINT = "http://localhost:8080/books";
    private static final String AUTHOR_ENDPOINT = "http://localhost:8080/authors";
    private static final String ADDRESS_ENDPOINT = "http://localhost:8080/addresses";
    private static final String LIBRARY_ENDPOINT = "http://localhost:8080/libraries";

    private static final String LIBRARY_NAME = "My Library";
    private static final String AUTHOR_NAME = "George Orwell";
}

5.1. Testing the One-to-One Relationship

Let’s create a @Test method that saves Library and Address objects by making POST requests to the collection resources.

Then it saves the relationship with a PUT request to the association resource and verifies that it has been established with a GET request to the same resource:

@Test
public void whenSaveOneToOneRelationship_thenCorrect() {
    Library library = new Library(LIBRARY_NAME);
    template.postForEntity(LIBRARY_ENDPOINT, library, Library.class);
   
    Address address = new Address("Main street, nr 1");
    template.postForEntity(ADDRESS_ENDPOINT, address, Address.class);
    
    HttpHeaders requestHeaders = new HttpHeaders();
    requestHeaders.add("Content-type", "text/uri-list");
    HttpEntity<String> httpEntity 
      = new HttpEntity<>(ADDRESS_ENDPOINT + "/1", requestHeaders);
    template.exchange(LIBRARY_ENDPOINT + "/1/libraryAddress", 
      HttpMethod.PUT, httpEntity, String.class);

    ResponseEntity<Library> libraryGetResponse 
      = template.getForEntity(ADDRESS_ENDPOINT + "/1/library", Library.class);
    assertEquals("library is incorrect", 
      libraryGetResponse.getBody().getName(), LIBRARY_NAME);
}

5.2. Testing the One-to-Many Relationship

Let’s create a @Test method that saves a Library instance and two Book instances, sends a PUT request to each Book object’s /library association resource, and verifies that the relationship has been saved:

@Test
public void whenSaveOneToManyRelationship_thenCorrect() {
    Library library = new Library(LIBRARY_NAME);
    template.postForEntity(LIBRARY_ENDPOINT, library, Library.class);

    Book book1 = new Book("Dune");
    template.postForEntity(BOOK_ENDPOINT, book1, Book.class);

    Book book2 = new Book("1984");
    template.postForEntity(BOOK_ENDPOINT, book2, Book.class);

    HttpHeaders requestHeaders = new HttpHeaders();
    requestHeaders.add("Content-Type", "text/uri-list");    
    HttpEntity<String> bookHttpEntity 
      = new HttpEntity<>(LIBRARY_ENDPOINT + "/1", requestHeaders);
    template.exchange(BOOK_ENDPOINT + "/1/library", 
      HttpMethod.PUT, bookHttpEntity, String.class);
    template.exchange(BOOK_ENDPOINT + "/2/library", 
      HttpMethod.PUT, bookHttpEntity, String.class);

    ResponseEntity<Library> libraryGetResponse = 
      template.getForEntity(BOOK_ENDPOINT + "/1/library", Library.class);
    assertEquals("library is incorrect", 
      libraryGetResponse.getBody().getName(), LIBRARY_NAME);
}

5.3. Testing the Many-to-Many Relationship

For testing the many-to-many relationship between Book and Author entities, we will create a test method that saves one Author record and two Book records.

Then it sends a PUT request to the /books association resource with the URIs of the two Books and verifies that the relationship has been established:

@Test
public void whenSaveManyToManyRelationship_thenCorrect() {
    Author author1 = new Author(AUTHOR_NAME);
    template.postForEntity(AUTHOR_ENDPOINT, author1, Author.class);

    Book book1 = new Book("Animal Farm");
    template.postForEntity(BOOK_ENDPOINT, book1, Book.class);

    Book book2 = new Book("1984");
    template.postForEntity(BOOK_ENDPOINT, book2, Book.class);

    HttpHeaders requestHeaders = new HttpHeaders();
    requestHeaders.add("Content-type", "text/uri-list");
    HttpEntity<String> httpEntity = new HttpEntity<>(
      BOOK_ENDPOINT + "/1\n" + BOOK_ENDPOINT + "/2", requestHeaders);
    template.exchange(AUTHOR_ENDPOINT + "/1/books", 
      HttpMethod.PUT, httpEntity, String.class);

    String jsonResponse = template
      .getForObject(BOOK_ENDPOINT + "/1/authors", String.class);
    JSONObject jsonObj = new JSONObject(jsonResponse).getJSONObject("_embedded");
    JSONArray jsonArray = jsonObj.getJSONArray("authors");
    assertEquals("author is incorrect", 
      jsonArray.getJSONObject(0).getString("name"), AUTHOR_NAME);
}

6. Conclusion

In this tutorial, we have demonstrated the use of different types of relationships with Spring Data REST.

The full source code of the examples can be found over on GitHub.

Introducing nudge4j


1. Overview

nudge4j allows developers to see the impact of any operation straight-away and provides an environment in which they can explore, learn, and ultimately spend less time debugging and redeploying their application.

In this article, we will explore what nudge4j is, how it works, and how any Java application in development might benefit from it.

2. How nudge4j Works

2.1. A REPL In Disguise

nudge4j is essentially a read-eval-print-loop (REPL) in which you talk to your Java application within a browser window via a simple page containing just two elements:

  • an editor
  • the Execute on JVM button

You can talk to your JVM in a typical REPL cycle:

  • Type any code into the editor and press Execute on JVM
  • The browser posts the code to your JVM, which then runs the code
  • The result is returned (as a string) and displayed below the button

nudge4j comes with a few examples to try straight-away, like querying how long the JVM has been running and how much memory is currently available. I suggest you start with these before writing your own code.

2.2. The JavaScript Engine

The code which is sent by the browser to the JVM is JavaScript that manipulates Java objects (not to be confused with any JavaScript that runs on the browser). The JavaScript is executed by the built-in JavaScript engine Nashorn.

Don’t worry if you don’t know (or like) JavaScript – for your nudge4j needs, you can just think of it as an untyped Java dialect.

Note that I am aware that saying that “JavaScript is untyped Java” is a huge simplification. But I want Java developers (who may be prejudiced against anything JavaScript) to give nudge4j a fair chance.

2.3. Scope of JVM Interaction

nudge4j lets you access any Java class which is accessible from your JVM, allowing you to call methods, create objects, etc. This is very powerful, but it might not be sufficient while working with your application.

In some situations, you might want to reach one or more objects, specific to your application only, so that you can manipulate them. nudge4j allows for that. Any object that needs to be exposed can be passed as an argument at instantiation time.

2.4. Exception Handling

The design of nudge4j recognizes the possibility that users of the tool might make mistakes or cause errors on the JVM. In both of these cases, the tool is designed to report the full stack trace in order to guide the user to rectify the mistake or error.

For example, when an executed snippet of code results in an Exception being thrown, the full stack trace is reported back in the browser.

3. Adding nudge4j to Your Application

3.1. Just Copy and Paste

The integration with nudge4j is achieved somewhat unconventionally, as there are no jar files to add to your classpath, and there are no dependencies to add to a Maven or Gradle build.

Instead, you are required to simply copy and paste a small snippet of Java code – around 100 lines – anywhere into your own code before you run it.

You’ll find the snippet on the nudge4j home page – there’s even a button on the page that you can click to copy the snippet to your clipboard.

This snippet of code might appear quite abstruse at first. There are a few reasons for that:

  • The nudge4j snippet can be dropped into any class; therefore, it could not make any assumption regarding the imports, and any class it contained had to be fully qualified
  • To avoid potential clashes with variables already defined, the code is wrapped in a function
  • Access to the built-in JDK HttpServer is done via introspection in order to avoid restrictions which exist with some IDEs (e.g. Eclipse) for packages beginning with  “com.sun.*”

So, even though Java is already a verbose language, it had to be made even more verbose to provide for a seamless integration.

3.2. Sample Application

Let’s start with a standard JVM application where we pretend that a simple java.util.HashMap holds most of the information that we want to play with:

public class MyApp {
    public static void main(String args[]) {
        Map<String, Object> map = new HashMap<>();
        map.put("health", 60);
        map.put("strength", 4);
        map.put("tools", Arrays.asList("hammer"));
        map.put("places", Arrays.asList("savannah","tundra"));
        map.put("location-x", -42 );
        map.put("location-y", 32);
 
        // paste original code from nudge4j below
        (new java.util.function.Consumer<Object[]>() {
            public void accept(Object args[]) {
                ...
                ...
            }
        }).accept(new Object[] { 
            5050,  // <-- the port
            map    // <-- the map is passed as a parameter.
        });
    }
}

As you can see from this example, you simply paste in the nudge4j snippet at the end of your own code. The abbreviated Consumer block at the end of main() serves as a placeholder for the full snippet.

Now, let’s point the browser to http://localhost:5050/. The map is now accessible as args[1] in the editor from the browser by simply typing:

args[1];

This will provide a summary of our Map (in this case relying on the toString() method of the Map and its keys and values).

Suppose we want to examine and modify the Map entry with the key value “tools”.

To get a list of all available tools in the Map, you would write:

map = args[1];
map.get("tools");

And to add a new tool to the Map, you would write:

map = args[1];
map.get("tools").add("axe");

In general, a few lines of code should be sufficient to probe any Java application.

4. Conclusion

By combining two simple APIs within the JDK (Nashorn and the built-in HTTP server), nudge4j gives you the ability to probe into any Java 8 application.

In a way, nudge4j is just a modern take on an old idea: give developers access to the facilities of an existing system via a scripting language – an idea that can make an impact on how Java developers could spend their day coding.

Intro to Log4j2 – Appenders, Layouts and Filters


1. Overview

Logging events is a critical aspect of software development. While there are lots of frameworks available in the Java ecosystem, Log4J has been the most popular for decades, due to the flexibility and simplicity it provides.

Log4j 2 is a new and improved version of the classic Log4j framework.

In this article, we’ll introduce the most common appenders, layouts, and filters via practical examples.

In Log4J2, an appender is simply a destination for log events; it can be as simple as a console or as complex as any RDBMS. Layouts determine how the logs will be presented, and filters filter the data according to various criteria.
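
Throughout the examples we’ll obtain loggers through the standard Log4J 2 API; as a quick reminder, a minimal sketch of that usage looks like this (the class name is arbitrary):

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class LoggingExample {

    private static final Logger logger = LogManager.getLogger(LoggingExample.class);

    public static void main(String[] args) {
        // the {} placeholder is replaced by the argument at runtime
        logger.info("Application started at {}", System.currentTimeMillis());
    }
}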

2. Setup

In order to understand several logging components and their configuration, let’s set up different test use-cases, each consisting of a log4j2.xml configuration file and a JUnit 4 test class.

Two Maven dependencies are common to all examples:

<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <version>2.7</version>
</dependency>
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <version>2.7</version>
    <type>test-jar</type>
    <scope>test</scope>
</dependency>

Besides the main log4j-core dependency, we also need to include its ‘test jar’ to gain access to a context rule needed for testing with uncommonly named configuration files.
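
For instance, a test class can load an uncommonly named configuration file through the LoggerContextRule shipped in that test jar; a minimal sketch, with a hypothetical file name:

import org.apache.logging.log4j.junit.LoggerContextRule;
import org.junit.Rule;

public class CustomConfigurationTest {

    // loads the given configuration file before each test method runs
    @Rule
    public LoggerContextRule contextRule
      = new LoggerContextRule("log4j2-custom-config.xml");
}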

3. Default Configuration

ConsoleAppender is the default configuration of the Log4J 2 core package. It logs messages to the system console in a simple pattern:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
    <Appenders>
        <Console name="ConsoleAppender" target="SYSTEM_OUT">
            <PatternLayout 
              pattern="%d [%t] %-5level %logger{36} - %msg%n%throwable"/>
        </Console>
    </Appenders>
    <Loggers>
        <Root level="ERROR">
            <AppenderRef ref="ConsoleAppender"/>
        </Root>
    </Loggers>
</Configuration>

Let’s analyze the tags in this simple XML configuration:

  • Configuration: The root element of a Log4J 2 configuration file; the status attribute is the level of the internal Log4J events that we want to log
  • Appenders: This element holds one or more appenders. Here we’ll configure an appender that outputs to the system console at standard out
  • Loggers: This element can consist of multiple configured Logger elements. With the special Root tag, you can configure a nameless standard logger that will receive all log messages from the application. Each logger can be set to a minimum log level
  • AppenderRef: This element defines a reference to an element from the Appenders section. Therefore the ‘ref‘ attribute is linked with an appender’s ‘name‘ attribute

The corresponding unit test will be similarly simple. We’ll obtain a Logger reference and print two messages:

@Test
public void givenLoggerWithDefaultConfig_whenLogToConsole_thanOK()
  throws Exception {
    Logger logger = LogManager.getLogger(getClass());
    Exception e = new RuntimeException("This is only a test!");

    logger.info("This is a simple message at INFO level. " +
      "It will be hidden.");
    logger.error("This is a simple message at ERROR level. " +
    "This is the minimum visible level.", e);
}

4. ConsoleAppender with PatternLayout

Let’s define a new console appender with a customized color pattern in a separate XML file, and include that in our main configuration:

<?xml version="1.0" encoding="UTF-8"?>
<Console name="ConsoleAppender" target="SYSTEM_OUT">
    <PatternLayout pattern="%style{%date{DEFAULT}}{yellow}
      %highlight{%-5level}{FATAL=bg_red, ERROR=red, WARN=yellow, INFO=green} 
      %message"/>
</Console>

This file uses some pattern variables that get replaced by Log4J 2 at runtime:

  • %style{…}{colorname}: This will print the text in the first bracket pair in a given color (colorname).
  • %highlight{…}{FATAL=colorname, …}: This is similar to the ‘style’ variable, but a different color can be given for each log level.
  • %date{format}: This gets replaced by the current date in the specified format. Here we’re using the ‘DEFAULT’ DateTime format, ‘yyyy-MM-dd HH:mm:ss,SSS’.
  • %-5level: Prints the level of the log message in a left-aligned fashion, padded to five characters.
  • %message: Represents the raw log message

There are many more variables and formats available in the PatternLayout; you can refer to the Log4J 2 official documentation for details.

Now we’ll include the defined console appender into our main configuration:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN" xmlns:xi="http://www.w3.org/2001/XInclude">
    <Appenders>
        <xi:include href="log4j2-includes/
          console-appender_pattern-layout_colored.xml"/>
    </Appenders>
    <Loggers>
        <Root level="DEBUG">
            <AppenderRef ref="ConsoleAppender"/>
        </Root>
    </Loggers>
</Configuration>

The unit test:

@Test
public void givenLoggerWithConsoleConfig_whenLogToConsoleInColors_thanOK() 
  throws Exception {
    Logger logger = LogManager.getLogger("CONSOLE_PATTERN_APPENDER_MARKER");
    logger.trace("This is a colored message at TRACE level.");
    ...
}

5. Async File Appender with JSONLayout and BurstFilter

Sometimes it’s useful to write log messages in an asynchronous manner, for example when application performance has priority over the availability of logs.

In such use-cases, we can use an AsyncAppender.

For our example, we’re configuring an asynchronous JSON log file. Furthermore, we’ll include a burst filter that limits the log output at a specified rate:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
    <Appenders>
        ...
        <File name="JSONLogfileAppender" fileName="target/logfile.json">
            <JSONLayout compact="true" eventEol="true"/>
            <BurstFilter level="INFO" rate="2" maxBurst="10"/>
        </File>
        <Async name="AsyncAppender" bufferSize="80">
            <AppenderRef ref="JSONLogfileAppender"/>
        </Async>
    </Appenders>
    <Loggers>
        ...
        <Logger name="ASYNC_JSON_FILE_APPENDER" level="INFO"
          additivity="false">
            <AppenderRef ref="AsyncAppender" />
        </Logger>
        <Root level="INFO">
            <AppenderRef ref="ConsoleAppender"/>
        </Root>
    </Loggers>
</Configuration>

Notice that:

  • The JSONLayout is configured in a way that writes one log event per row
  • The BurstFilter limits events at ‘INFO’ level and below to an average rate of two per second, allowing bursts of up to 10 events before it starts dropping them
  • The AsyncAppender is set to a buffer of 80 log messages; after that, the buffer is flushed to the log file

Let’s take a look at the corresponding unit test. We’re filling the appender’s buffer in a loop, letting it write to disk, and inspecting the line count of the log file:

@Test
public void givenLoggerWithAsyncConfig_whenLogToJsonFile_thanOK() 
  throws Exception {
    Logger logger = LogManager.getLogger("ASYNC_JSON_FILE_APPENDER");

    final int count = 88;
    for (int i = 0; i < count; i++) {
        logger.info("This is async JSON message #{} at INFO level.", count);
    }
    
    long logEventsCount 
      = Files.lines(Paths.get("target/logfile.json")).count();
    assertTrue(logEventsCount > 0 && logEventsCount <= count);
}

6. RollingFile Appender and XMLLayout

Next, we’ll create a rolling log file. Once the configured file size is reached, the log file gets compressed and rotated.

This time we’re using an XML layout:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
    <Appenders>
        <RollingFile name="XMLRollingfileAppender"
          fileName="target/logfile.xml"
          filePattern="target/logfile-%d{yyyy-MM-dd}-%i.log.gz">
            <XMLLayout/>
            <Policies>
                <SizeBasedTriggeringPolicy size="17 kB"/>
            </Policies>
        </RollingFile>
    </Appenders>
    <Loggers>
        <Logger name="XML_ROLLING_FILE_APPENDER" 
       level="INFO" additivity="false">
            <AppenderRef ref="XMLRollingfileAppender" />
        </Logger>
        <Root level="TRACE">
            <AppenderRef ref="ConsoleAppender"/>
        </Root>
    </Loggers>
</Configuration>

Notice that:

  • The RollingFile appender has a ‘filePattern’ attribute, which is used to name rotated log files and can be configured with placeholder variables. In our example, it should contain a date and a counter before the file suffix.
  • The default configuration of XMLLayout will write single log event objects without the root element.
  • We’re using a size based policy for rotating our log files.

Our unit test class will look like the one from the previous section:

@Test
public void givenLoggerWithRollingFileConfig_whenLogToXMLFile_thanOK()
  throws Exception {
    Logger logger = LogManager.getLogger("XML_ROLLING_FILE_APPENDER");
    final int count = 88;
    for (int i = 0; i < count; i++) {
        logger.info(
          "This is rolling file XML message #{} at INFO level.", i);
    }
}

7. Syslog Appender

Let’s say we need to send log events to a remote machine over the network. The simplest way to do that with Log4J2 is to use its Syslog appender:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
    <Appenders>
        ...
        <Syslog name="Syslog" 
          format="RFC5424" host="localhost" port="514" 
          protocol="TCP" facility="local3" connectTimeoutMillis="10000" 
          reconnectionDelayMillis="5000">
        </Syslog>
    </Appenders>
    <Loggers>
        ...
        <Logger name="FAIL_OVER_SYSLOG_APPENDER" 
          level="INFO" 
          additivity="false">
            <AppenderRef ref="FailoverAppender" />
        </Logger>
        <Root level="TRACE">
            <AppenderRef ref="Syslog" />
        </Root>
    </Loggers>
</Configuration>

The attributes in the Syslog tag:

  • name: defines the name of the appender; it must be unique, since we can have multiple Syslog appenders for the same application and configuration
  • format: can be set to either BSD or RFC5424, and the Syslog records will be formatted accordingly
  • host & port: the hostname and port of the remote Syslog server machine
  • protocol: whether to use TCP or UDP
  • facility: the Syslog facility to which the event will be written
  • connectTimeoutMillis: the time period to wait for an established connection, defaults to zero
  • reconnectionDelayMillis: the time to wait before re-attempting a connection

8. FailoverAppender

Now there may be instances where one appender fails to process the log events and we do not want to lose the data. In such cases, the FailoverAppender comes in handy.

For example, if the Syslog appender fails to send events to the remote machine, instead of losing that data we might fall back to FileAppender temporarily.

The FailoverAppender takes a primary appender and a number of secondary appenders. In case the primary fails, it tries to process the log event with the secondary ones, in order, until one succeeds or there are no more secondaries to try:

<Failover name="FailoverAppender" primary="Syslog">
    <Failovers>
        <AppenderRef ref="ConsoleAppender" />
    </Failovers>
</Failover>

Let’s test it:

@Test
public void givenLoggerWithFailoverConfig_whenLog_thanOK()
  throws Exception {
    Logger logger = LogManager.getLogger("FAIL_OVER_SYSLOG_APPENDER");
    Exception e = new RuntimeException("This is only a test!"); 

    logger.trace("This is a syslog message at TRACE level.");
    logger.debug("This is a syslog message at DEBUG level.");
    logger.info("This is a syslog message at INFO level. 
      This is the minimum visible level.");
    logger.warn("This is a syslog message at WARN level.");
    logger.error("This is a syslog message at ERROR level.", e);
    logger.fatal("This is a syslog message at FATAL level.");
}

9. JDBC Appender

The JDBC appender sends log events to an RDBMS, using standard JDBC. The connection can be obtained either from a JNDI DataSource or from any connection factory.

The basic configuration consists of a DataSource or ConnectionFactory, ColumnConfigs, and tableName:

<JDBC name="JDBCAppender" tableName="logs">
    <ConnectionFactory 
      class="com.baeldung.logging.log4j2.tests.jdbc.ConnectionFactory" 
      method="getConnection" />
    <Column name="when" isEventTimestamp="true" />
    <Column name="logger" pattern="%logger" />
    <Column name="level" pattern="%level" />
    <Column name="message" pattern="%message" />
    <Column name="throwable" pattern="%ex{full}" />
</JDBC>
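
The ConnectionFactory referenced above is a plain class exposing a static getConnection() method; here is a minimal sketch, assuming an in-memory H2 database with a pre-created logs table (in practice a pooled DataSource is preferable, since the appender requests connections frequently):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class ConnectionFactory {

    private ConnectionFactory() {
    }

    // invoked by the JDBC appender via the configured method name;
    // assumes the 'logs' table has been created beforehand
    public static Connection getConnection() throws SQLException {
        return DriverManager.getConnection(
          "jdbc:h2:mem:logs;DB_CLOSE_DELAY=-1", "sa", "");
    }
}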

Now let’s try it out:

@Test
public void givenLoggerWithJdbcConfig_whenLogToDataSource_thanOK()
  throws Exception {
    Logger logger = LogManager.getLogger("JDBC_APPENDER");
    final int count = 88;
    for (int i = 0; i < count; i++) {
        logger.info("This is JDBC message #{} at INFO level.", count);
    }

    Connection connection = ConnectionFactory.getConnection();
    ResultSet resultSet = connection.createStatement()
      .executeQuery("SELECT COUNT(*) AS ROW_COUNT FROM logs");
    int logCount = 0;
    if (resultSet.next()) {
        logCount = resultSet.getInt("ROW_COUNT");
    }
    assertEquals(count, logCount);
}

10. Conclusion

This article shows very simple examples of how you can use different logging appenders, filters, and layouts with Log4J2, as well as ways to configure them.

The examples that accompany the article are available over on GitHub.

Java 9 – Exploring the REPL


1. Introduction

This article is about jshell, an interactive REPL (Read-Evaluate-Print-Loop) console that is bundled with the JDK for the upcoming Java 9 release. For those not familiar with the concept, a REPL allows you to interactively run arbitrary snippets of code and evaluate their results.

A REPL can be useful for things such as quickly checking the viability of an idea or figuring out, for example, a format string for String.format or SimpleDateFormat.

2. Running

To get started we need to run the REPL, which is done by invoking:

$JAVA_HOME/bin/jshell

If more detailed messaging from the shell is desired, a -v flag can be used:

$JAVA_HOME/bin/jshell -v

Once it is ready, we will be greeted by a friendly message and a familiar Unix-style prompt at the bottom.

3. Defining and Invoking Methods

Methods can be added by typing their signature and body:

jshell> void helloWorld() { System.out.println("Hello world");}
|  created method helloWorld()

Here we defined the ubiquitous “hello world” method.  It can be invoked using normal Java syntax:

jshell> helloWorld()
Hello world

4. Variables

Variables can be defined with the normal Java declaration syntax:

jshell> int i = 0;
i ==> 0
|  created variable i : int

jshell> String company = "Baeldung"
company ==> "Baeldung"
|  created variable company : String

jshell> Date date = new Date()
date ==> Sun Feb 26 06:30:16 EST 2017
|  created variable date : Date

Note that semicolons are optional.  Variables can also be declared without initialization:

jshell> File file
file ==> null
|  created variable file : File

5. Expressions

Any valid Java expression is accepted and the result of the evaluation will be shown. If no explicit receiver of the result is provided, “scratch” variables will be created:

jshell> String.format("%d of bottles of beer", 100)
$6 ==> "100 of bottles of beer"
|  created scratch variable $6 : String

The REPL is quite helpful here by informing us that it created a scratch variable named $6, whose value is “100 of bottles of beer on the wall” and whose type is String.

Multi-line expressions are also possible.  Jshell is smart enough to know when an expression is incomplete and will prompt the user to continue on a new line:

jshell> int i =
   ...> 5;
i ==> 5
|  modified variable i : int
|    update overwrote variable i : int

Note how the prompt changed to an indented …> to signify the continuation of an expression.

6. Commands

Jshell provides quite a few meta-commands that aren’t related to evaluating Java statements.  They all start with a forward-slash (/) to be distinguished from normal operations. For example, we can request a list of all available commands by issuing /help or /?.

Let’s take a look at some of them.

6.1. Imports

To list all the imports active in the current session we can use the /import command:

jshell> /import
|    import java.io.*
|    import java.math.*
|    import java.net.*
|    import java.nio.file.*
|    import java.util.*
|    import java.util.concurrent.*
|    import java.util.function.*
|    import java.util.prefs.*
|    import java.util.regex.*
|    import java.util.stream.*

As we can see, the shell starts with quite a few useful imports already added.

6.2. Lists

Working in a REPL is not nearly as easy as having a full-featured IDE at our fingertips: it is easy to forget what variables have which values, what methods have been defined and so on.  To check the state of the shell we can use /var, /methods, /list or /history:

jshell> /var
| int i = 0
| String company = "Baeldung"
| Date date = Sun Feb 26 06:30:16 EST 2017
| File file = null
| String $6 = "100 of bottles of beer on the wall"

jshell> /methods
| void helloWorld()

jshell> /list

 1 : void helloWorld() { System.out.println("Hello world");}
 2 : int i = 0;
 3 : String company = "Baeldung";
 4 : Date date = new Date();
 5 : File file;
 6 : String.format("%d of bottles of beer on the wall", 100)

jshell> /history

void helloWorld() { System.out.println("Hello world");}
int i = 0;
String company = "Baeldung"
Date date = new Date()
File file
String.format("%d of bottles of beer on the wall", 100)
/var
/methods
/list
/history

The difference between /list and /history is that the latter shows commands in addition to expressions.

6.3. Saving

To save the expression history the /save command can be used:

jshell> /save repl.java

This saves our expression history into repl.java in the same directory from which we ran the jshell command.

6.4. Loading

To load a previously saved file we can use the /open command:

jshell> /open repl.java

A loaded session can then be verified by issuing /var, /methods or /list.

6.5. Exiting

When we are done with the work, the /exit command can terminate the shell:

jshell> /exit
|  Goodbye

Goodbye jshell.

7. Conclusion

In this article, we took a look at Java 9 REPL. Since Java has been around for over 20 years already, perhaps it arrived a little late. However, it should prove to be another valuable tool in our Java toolbox.

Spring Security – Redirect to the Previous URL After Login


1. Overview

This article will focus on how to redirect a user back to the originally requested URL – after they log in.

Previously, we’ve seen how to redirect to different pages after login with Spring Security for different types of users and covered various types of redirections with Spring MVC.

The article is based on top of the Spring Security Login tutorial.

2. Common Practice

The most common ways to implement redirection logic after login are:

  • using HTTP Referer header
  • saving the original request in the session
  • appending original URL to the redirected login URL

Using the HTTP Referer header is a straightforward way, as most browsers and HTTP clients set Referer automatically. However, as the Referer is forgeable and relies on the client implementation, using the HTTP Referer header to implement redirection is generally not suggested.

Saving the original request in the session is a safer and more robust approach. Besides the original URL, we can store original request attributes and any custom properties in the session.

Appending the original URL to the login URL is usually seen in SSO implementations. When authenticated via an SSO service, users are redirected to the originally requested page, with that URL appended. We must ensure the appended URL is properly encoded.
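
As a quick illustration of that encoding step, here’s a minimal sketch of building such a login URL (the redirectTo parameter name is hypothetical):

import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class LoginUrlBuilder {

    // appends the encoded original URL as a query parameter of the login URL
    static String buildLoginUrl(String loginUrl, String originalUrl)
      throws UnsupportedEncodingException {
        return loginUrl + "?redirectTo=" + URLEncoder.encode(originalUrl, "UTF-8");
    }
}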

Another similar implementation is to put the original request URL in a hidden field inside the login form. But this is no better than using the HTTP Referer header.

In Spring Security, the first two approaches are natively supported.
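
Behind the scenes, the session-based approach relies on Spring Security’s RequestCache abstraction; here’s a minimal sketch of using it directly, assuming the request and response objects are at hand (for example, inside a filter):

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.security.web.savedrequest.HttpSessionRequestCache;
import org.springframework.security.web.savedrequest.RequestCache;
import org.springframework.security.web.savedrequest.SavedRequest;

public class SavedRequestSketch {

    void saveAndRestore(HttpServletRequest request, HttpServletResponse response) {
        RequestCache requestCache = new HttpSessionRequestCache();

        // store the current request in the HTTP session
        requestCache.saveRequest(request, response);

        // later, retrieve it and extract the original URL (null checks omitted)
        SavedRequest savedRequest = requestCache.getRequest(request, response);
        String originalUrl = savedRequest.getRedirectUrl();
    }
}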

3. AuthenticationSuccessHandler

In form-based authentication, redirection happens right after login, which is handled in an AuthenticationSuccessHandler instance in Spring Security.

Three default implementations are provided: SimpleUrlAuthenticationSuccessHandler, SavedRequestAwareAuthenticationSuccessHandler and ForwardAuthenticationSuccessHandler. We’ll focus on the first two implementations.

3.1. SavedRequestAwareAuthenticationSuccessHandler

SavedRequestAwareAuthenticationSuccessHandler makes use of the saved request stored in the session. After a successful login, users will be redirected to the URL saved in the original request.

For form login, SavedRequestAwareAuthenticationSuccessHandler is used as the default AuthenticationSuccessHandler:

@Configuration
@EnableWebSecurity
public class RedirectionSecurityConfig extends WebSecurityConfigurerAdapter {

    //...

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
          .authorizeRequests()
          .antMatchers("/login*")
          .permitAll()
          .anyRequest()
          .authenticated()
          .and()
          .formLogin();
    }
    
}

And the equivalent XML would be:

<http>
    <intercept-url pattern="/login" access="permitAll"/>
    <intercept-url pattern="/**" access="isAuthenticated()"/>
    <form-login />
</http>

Suppose we have a secured resource at the location “/secured”. On first access to the resource, we’ll be redirected to the login page; after filling in credentials and posting the login form, we’ll be redirected back to our originally requested resource location:

@Test
public void givenAccessSecuredResource_whenAuthenticated_thenRedirectedBack() 
  throws Exception {
 
    MockHttpServletRequestBuilder securedResourceAccess = get("/secured");
    MvcResult unauthenticatedResult = mvc
      .perform(securedResourceAccess)
      .andExpect(status().is3xxRedirection())
      .andReturn();

    MockHttpSession session = (MockHttpSession) unauthenticatedResult
      .getRequest()
      .getSession();
    String loginUrl = unauthenticatedResult
      .getResponse()
      .getRedirectedUrl();
    mvc
      .perform(post(loginUrl)
        .param("username", userDetails.getUsername())
        .param("password", userDetails.getPassword())
        .session(session)
        .with(csrf()))
      .andExpect(status().is3xxRedirection())
      .andExpect(redirectedUrlPattern("**/secured"))
      .andReturn();

    mvc
      .perform(securedResourceAccess.session(session))
      .andExpect(status().isOk());
}

3.2. SimpleUrlAuthenticationSuccessHandler

Compared to the SavedRequestAwareAuthenticationSuccessHandler, SimpleUrlAuthenticationSuccessHandler gives us more options on redirection decisions.

We can enable Referer-based redirection with setUseReferer(true):

public class RefererRedirectionAuthenticationSuccessHandler 
  extends SimpleUrlAuthenticationSuccessHandler
  implements AuthenticationSuccessHandler {

    public RefererRedirectionAuthenticationSuccessHandler() {
        super();
        setUseReferer(true);
    }

}

Then use it as the AuthenticationSuccessHandler in RedirectionSecurityConfig:

@Override
protected void configure(HttpSecurity http) throws Exception {
    http
      .authorizeRequests()
      .antMatchers("/login*")
      .permitAll()
      .anyRequest()
      .authenticated()
      .and()
      .formLogin()
      .successHandler(new RefererRedirectionAuthenticationSuccessHandler());
}

And for XML configuration:

<http>
    <intercept-url pattern="/login" access="permitAll"/>
    <intercept-url pattern="/**" access="isAuthenticated()"/>
    <form-login authentication-success-handler-ref="refererHandler" />
</http>

<beans:bean 
  class="RefererRedirectionAuthenticationSuccessHandler" 
  name="refererHandler"/>

3.3. Under the Hood

There is no magic in these easy-to-use features of Spring Security. When a secured resource is requested, the request will be filtered by a chain of various filters. Authentication principals and permissions will be checked. If the request session is not authenticated yet, an AuthenticationException will be thrown.

The AuthenticationException will be caught in the ExceptionTranslationFilter, in which an authentication process will be commenced, resulting in a redirection to the login page.

public class ExceptionTranslationFilter extends GenericFilterBean {

    //...

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
      throws IOException, ServletException {
        //...

        handleSpringSecurityException(request, response, chain, ase);

        //...
    }

    private void handleSpringSecurityException(HttpServletRequest request,
      HttpServletResponse response, FilterChain chain, RuntimeException exception)
      throws IOException, ServletException {

        if (exception instanceof AuthenticationException) {

            sendStartAuthentication(request, response, chain,
              (AuthenticationException) exception);

        }

        //...
    }

    protected void sendStartAuthentication(HttpServletRequest request,
      HttpServletResponse response, FilterChain chain,
      AuthenticationException reason) throws ServletException, IOException {
       
       SecurityContextHolder.getContext().setAuthentication(null);
       requestCache.saveRequest(request, response);
       authenticationEntryPoint.commence(request, response, reason);
    }

    //... 

}

After login, we can customize behaviors in an AuthenticationSuccessHandler, as shown above.

4. Conclusion

In this Spring Security example, we discussed common practice for redirection after login and explained implementations using Spring Security.

Note that all the implementations we mentioned are vulnerable to certain attacks if no validation or extra method controls are applied. Users might be redirected to a malicious site by such attacks.

OWASP has provided a cheat sheet to help us handle unvalidated redirects and forwards. This would help a lot if we need to build an implementation on our own.
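
As a simple illustration of such validation, a redirect target can be checked against a whitelist before use; a sketch, with hypothetical allowed paths:

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class RedirectValidator {

    private static final Set<String> ALLOWED_TARGETS
      = new HashSet<>(Arrays.asList("/", "/secured", "/admin"));

    // falls back to the home page for any target that is not whitelisted
    static String safeTarget(String requestedTarget) {
        return ALLOWED_TARGETS.contains(requestedTarget) ? requestedTarget : "/";
    }
}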

The full implementation code of this article can be found over on GitHub.

Java 9 Process API Improvements


1. Overview

The process API in Java had been quite primitive prior to Java 5; the only way to spawn a new process was to use the Runtime.getRuntime().exec() API. Then, in Java 5, the ProcessBuilder API was introduced, which supported a cleaner way of spawning new processes.

Java 9 is adding a new way of getting information about current and any spawned processes.

In this article, we will look at both of these enhancements.

2. Current Java Process Information

We can now obtain a lot of information about the current process via the java.lang.ProcessHandle.Info API:

  • the command used to start the process
  • the arguments of the command
  • time instant when the process was started
  • the total CPU time spent by it and the user who created it

Here’s how we can do that:

@Test
public void givenCurrentProcess_whenInvokeGetInfo_thenSuccess() 
  throws IOException {
 
    ProcessHandle processHandle = ProcessHandle.current();
    ProcessHandle.Info processInfo = processHandle.info();
 
    assertNotNull(processHandle.getPid());
    assertEquals(false, processInfo.arguments().isPresent());
    assertEquals(true, processInfo.command().isPresent());
    assertTrue(processInfo.command().get().contains("java"));
    assertEquals(true, processInfo.startInstant().isPresent());
    assertEquals(true, 
      processInfo.totalCpuDuration().isPresent());
    assertEquals(true, processInfo.user().isPresent());
}

It is important to note that java.lang.ProcessHandle.Info is a public interface defined within another interface, java.lang.ProcessHandle. The JDK provider (Oracle JDK, OpenJDK, Zulu or others) should provide implementations of these interfaces in such a way that they return the relevant information for the processes.

3. Spawned Process Information

It is also possible to get the process information of a newly spawned process. In this case, after we spawn the process and get an instance of the java.lang.Process, we invoke the toHandle() method on it to get an instance of java.lang.ProcessHandle.

The rest of the details remain the same as in the section above:

String javaCmd = ProcessUtils.getJavaCmd().getAbsolutePath();
ProcessBuilder processBuilder = new ProcessBuilder(javaCmd, "-version");
Process process = processBuilder.inheritIO().start();
ProcessHandle processHandle = process.toHandle();

4. Enumerating Live Processes in the System

We can list all the processes currently in the system, which are visible to the current process. The returned list is a snapshot at the time when the API was invoked, so it’s possible that some processes terminated after taking the snapshot or some new processes were added.

In order to do that, we can use the static method allProcesses() available in the java.lang.ProcessHandle interface which returns us a Stream of ProcessHandle:

@Test
public void givenLiveProcesses_whenInvokeGetInfo_thenSuccess() {
    Stream<ProcessHandle> liveProcesses = ProcessHandle.allProcesses();
    liveProcesses.filter(ProcessHandle::isAlive)
      .forEach(ph -> {
 
        assertNotNull(ph.getPid());
        assertEquals(true, ph.info()
          .command()
          .isPresent());
      });
}

5. Enumerating Child Processes

There are two variants to do this:

  • get direct children of the current process
  • get all the descendants of the current process

The former is achieved by using the method children() and the latter is achieved by using the method descendants():

@Test
public void givenProcess_whenGetChildProcess_thenSuccess() 
  throws IOException{
 
    int childProcessCount = 5;
    for (int i = 0; i < childProcessCount; i++){
        String javaCmd = ProcessUtils.getJavaCmd()
          .getAbsolutePath();
        ProcessBuilder processBuilder 
          = new ProcessBuilder(javaCmd, "-version");
        processBuilder.inheritIO().start();
    }
    Stream<ProcessHandle> children
      = ProcessHandle.current().children();

    children.filter(ProcessHandle::isAlive)
      .forEach(ph -> log.info("PID: {}, Cmd: {}",
        ph.getPid(), ph.info().command()));

    // and for descendants
    Stream<ProcessHandle> descendants
      = ProcessHandle.current().descendants();
    descendants.filter(ProcessHandle::isAlive)
      .forEach(ph -> log.info("PID: {}, Cmd: {}",
        ph.getPid(), ph.info().command()));
}

6. Triggering Dependent Actions on Process Termination

We might want to run something on termination of the process. This can be achieved by using the onExit() method in the java.lang.ProcessHandle interface. The method returns us a CompletableFuture which provides the ability to trigger dependent operations when the CompletableFuture is completed.

Here, the CompletableFuture indicates that the process has completed, regardless of whether it completed successfully or not. We invoke the get() method on the CompletableFuture to wait for its completion:

@Test
public void givenProcess_whenAddExitCallback_thenSuccess() 
  throws Exception {
 
    String javaCmd = ProcessUtils.getJavaCmd()
      .getAbsolutePath();
    ProcessBuilder processBuilder 
      = new ProcessBuilder(javaCmd, "-version");
    Process process = processBuilder.inheritIO()
      .start();
    ProcessHandle processHandle = process.toHandle();

    log.info("PID: {} has started", processHandle.getPid());
    CompletableFuture<ProcessHandle> onProcessExit 
      = processHandle.onExit();
    onProcessExit.get();
    assertEquals(false, processHandle.isAlive());
    onProcessExit.thenAccept(ph -> {
        log.info("PID: {} has stopped", ph.getPid());
    });
}

The onExit() method is available in the java.lang.Process interface as well.
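
So, reusing the process and log references from the example above, the same kind of callback can be attached without going through a ProcessHandle (a minimal sketch):

process.onExit().thenAccept(p -> log.info("Process has stopped: {}", p));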

7. Conclusion

In this tutorial, we covered interesting additions to the Process API in Java 9 that give us much more control over the running and spawned processes.

The code used in this article can be found over on GitHub.


Spring Security with Stormpath


1. Overview

Stormpath has developed solid support for Spring Boot and Spring Security – to make the integration with their infrastructure and services quite straightforward.

In this article, we’re going to have a look at a minimalistic setup and integration of Stormpath with Spring Security.

2. Setting Up Stormpath

Before we can really integrate Stormpath, we need to create an API key in Stormpath’s cloud. For that, we need to sign up on their website. Please remember that for development purposes we’ll need to sign up as a developer – which gives us 10,000 API calls per month on the free tier.

Of course, if we already have an active Stormpath account, we can use that and log in directly.

Now, we need to create the API keys; by clicking the “Manage API Keys” link inside Developer Tools, we’ll see a button named “Create API Key”.

We need to click this button to generate the API key. When we do, we’ll be prompted to download a properties file containing the API key details. The content will look like this:

apiKey.id = xxxxxxxxxxx
apiKey.secret = xxxxxxxxxxxx

We need to store these details very carefully since this data can’t be fetched again from the server.

3. Building The Application

3.1. Maven Dependencies

In order to use Stormpath API, we need to use their Java SDK. For that, we need to integrate the following dependency in the pom.xml:

<dependency>
    <groupId>com.stormpath.spring</groupId>
    <artifactId>stormpath-default-spring-boot-starter</artifactId>
    <version>1.5.4</version>
</dependency>

You can find the latest version of the stormpath-default-spring-boot-starter in Central Maven Repository.

3.2. Spring Security Configuration

One of the advantages of using Stormpath is that we don’t need to add much boilerplate code to configure Spring Security. The following couple of lines of code are all we need to fully configure the application:

@Configuration
public class SecurityConfiguration extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.apply(stormpath());
    }
}

stormpath() is a static method, and applying it is actually enough for a simple integration with Spring Security.
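
For reference, stormpath() is typically statically imported from the starter’s configurer class (the exact location may vary between starter versions):

import static com.stormpath.spring.config.StormpathWebSecurityConfigurer.stormpath;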

What’s even more interesting here is that we don’t have to create any additional HTML pages for login, sign-up, etc. Stormpath will generate those pages; however, depending on our needs, we may create custom pages and integrate Stormpath’s functionalities.

3.3. Application.properties

We are almost done building this bare-bones application. We just need to add the API key details we created earlier to the application.properties file:

stormpath.client.apiKey.id = // your api id
stormpath.client.apiKey.secret = // your api secret

As per the Stormpath guidelines, it’s always a best practice to put sensitive data in JVM environment variables instead of keeping it in the application.properties file.

We can declare them as JVM parameters:

-Dstormpath.client.apiKey.id=[api_id] -Dstormpath.client.apiKey.secret=[api_secret]

Now, we’re ready to start the application and see the results. We can check the following URLs to test Stormpath’s functionalities:

  • /login – Login page
  • /register – Registration page
  • /forgot – Forgot password page

3.4. Other Options

There’s also an interesting option to check on the login page: the Forgot Password link at the login box. When clicking this link, we’ll be redirected to the /forgot page, where we can provide the email address we used to sign up. This will trigger an automatic email containing a link to reset the password.

However, we need to do the following configuration in the Stormpath Admin Panel to enable this:

  • Click on the Directories link at the top of the page. It should show all of the directories created with this account. By default, after sign up, Stormpath automatically creates a directory named Stormpath Administrator. However, we can create other directories and use them.
  • In the left panel, click on the Workflows & Emails link to see the password reset option. By default, it’s disabled. We need to click on the Enabled button to use it.
  • In the Link Base URL field, we need to give the URL of our application; this URL will be attached to the password reset email.

4. Conclusion

In this quick article, we learned how to easily integrate Spring Security with Stormpath.

There are plenty of other configurations, like email verification, that can be set up via the Stormpath Admin Console; using those, we can build a secured application quite quickly.

And, like always, you can find the full source code on GitHub.

Guide To Solr in Java With Apache SolrJ


1. Overview

Apache Solr is an open-source search platform built on top of Lucene. Apache SolrJ is a Java-based client for Solr that provides interfaces for the main features of search like indexing, querying, and deleting documents.

In this article, we’re going to explore how to interact with an Apache Solr server using SolrJ.

2. Setup

In order to install a Solr server on your machine, please refer to the Solr QuickStart Guide.

The installation process is simple — just download the zip/tar package, extract the contents, and start the server from the command line. For this article, we’ll create a Solr server with a core called ‘bigboxstore’:

bin/solr start
bin/solr create -c 'bigboxstore'

By default, Solr listens to port 8983 for incoming HTTP queries. You can verify that it is successfully launched by opening the http://localhost:8983/solr/#/bigboxstore URL in a browser and observing the Solr Dashboard.

3. Maven Configuration

Now that we have our Solr server up and running, let’s jump straight to the SolrJ Java client. To use SolrJ in your project, you will need to have the following Maven dependency declared in your pom.xml file:

<dependency>
    <groupId>org.apache.solr</groupId>
    <artifactId>solr-solrj</artifactId>
    <version>6.4.0</version>
</dependency>

You can always find the latest version hosted by Maven Central.

4. Apache SolrJ Java API

Let’s initiate the SolrJ client by connecting to our Solr server:

String urlString = "http://localhost:8983/solr/bigboxstore";
HttpSolrClient solr = new HttpSolrClient.Builder(urlString).build();
solr.setParser(new XMLResponseParser());

Note: SolrJ uses a binary format, rather than XML, as its default response format. For compatibility with Solr, we need to explicitly set the parser to XML by invoking setParser() as shown above. More details on this can be found here.

4.1. Indexing Documents

Let’s define the data to be indexed using a SolrInputDocument and add it to our index using the add() method:

SolrInputDocument document = new SolrInputDocument();
document.addField("id", "123456");
document.addField("name", "Kenmore Dishwasher");
document.addField("price", "599.99");
solr.add(document);
solr.commit();

Note: Any action that modifies the Solr database must be followed by commit().
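
For example, when indexing several documents, we can add them as a batch and commit once at the end (the field values below are made up):

SolrInputDocument doc1 = new SolrInputDocument();
doc1.addField("id", "123457");
doc1.addField("name", "Kenmore Refrigerator");
doc1.addField("price", "1299.99");

SolrInputDocument doc2 = new SolrInputDocument();
doc2.addField("id", "123458");
doc2.addField("name", "GE Dryer");
doc2.addField("price", "499.99");

// a single add() call for the whole batch, then one commit
solr.add(Arrays.asList(doc1, doc2));
solr.commit();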

4.2. Indexing with Beans

You can also index Solr documents using beans. Let’s define a ProductBean whose properties are annotated with @Field:

public class ProductBean {

    String id;
    String name;
    String price;

    @Field("id")
    protected void setId(String id) {
        this.id = id;
    }

    @Field("name")
    protected void setName(String name) {
        this.name = name;
    }

    @Field("price")
    protected void setPrice(String price) {
        this.price = price;
    }

    // getters and constructor omitted for space
}

Then, let’s add the bean to our index:

solr.addBean(new ProductBean("888", "Apple iPhone 6s", "299.99"));
solr.commit();
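
We can verify that the bean landed in the index the same way as any other document, for instance by fetching it by the id used above:

SolrDocument doc = solr.getById("888");
assertEquals((String) doc.getFieldValue("name"), "Apple iPhone 6s");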

4.3. Querying Indexed Documents by Field and Id

Let’s verify our document is added by using SolrQuery to query our Solr server.

The QueryResponse from the server will contain a list of SolrDocument objects matching any query with the format field:value. In this example, we query by price:

SolrQuery query = new SolrQuery();
query.set("q", "price:599.99");
QueryResponse response = solr.query(query);

SolrDocumentList docList = response.getResults();
assertEquals(docList.getNumFound(), 1);

for (SolrDocument doc : docList) {
     assertEquals((String) doc.getFieldValue("id"), "123456");
     assertEquals((Double) doc.getFieldValue("price"), (Double) 599.99);
}

A simpler option is to query by id using getById(), which will return only one document if a match is found:

SolrDocument doc = solr.getById("123456");
assertEquals((String) doc.getFieldValue("name"), "Kenmore Dishwasher");
assertEquals((Double) doc.getFieldValue("price"), (Double) 599.99);

4.4. Deleting Documents

When we want to remove a document from the index, we can use deleteById() and verify it has been removed:

solr.deleteById("123456");
solr.commit();
SolrQuery query = new SolrQuery();
query.set("q", "id:123456");
QueryResponse response = solr.query(query);
SolrDocumentList docList = response.getResults();
assertEquals(docList.getNumFound(), 0);

We also have the option to use deleteByQuery(), so let’s try deleting any document with a specific name:

solr.deleteByQuery("name:Kenmore Dishwasher");
solr.commit();
SolrQuery query = new SolrQuery();
query.set("q", "id:123456");
QueryResponse response = solr.query(query);
SolrDocumentList docList = response.getResults();
assertEquals(docList.getNumFound(), 0);

5. Conclusion

In this quick article, we’ve seen how to use the SolrJ Java API to perform some of the common interactions with the Apache Solr full-text search engine.

You can check out the examples provided in this article over on GitHub.

Java Web Weekly, Issue 166


Lots of interesting writeups on Java 9 this week.

Here we go…

1. Spring and Java

>> Spring Framework 5.0 M5 Update [spring.io]

Very interesting functionality in the latest Spring 5 pre-release.

>> A use-case for local class declaration [frankel.ch]

From the engineering point of view, there are some nice use cases for defining classes locally but those should be used with caution because they might violate PoLA.

>> Integration testing strategies for Spring Boot microservices part 2 [codecentric.de]

The second part of the series on testing strategies for microservices architectures done in Spring Boot.

>> How to encrypt and decrypt data with Hibernate [vladmihalcea.com]

A short and to-the-point write-up on how to do data encryption with Hibernate.

>> LRU Cache From LinkedHashMap [javaspecialists.eu]

LinkedHashMap can be used for building lightweight LRU caches.

Should you build your own cache? Definitely not, but it’s a fantastic learning tool.

>> Testing RxJava2 [infoq.com]

Testing RxJava is easier than it seems when using dedicated solutions like TestSubscriber, TestScheduler or RxJavaPlugins.

The Awaitility library might come in handy too.

>> Profile-based optimization techniques in the JVM [advancedweb.hu]

A new installment from a deep dive series into optimization techniques for the JVM.

>> The Last Frontier in Java Performance: Remove the Garbage Collector [infoq.com]

Very interesting article about potential ideas for decreasing the GC’s overhead.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> How does MVCC (Multi-Version Concurrency Control) work [vladmihalcea.com]

A short overview of the MVCC technique – applied of course to database systems, but potentially to other types of systems as well.

>> Secrets of Maintainable Codebases [daedtech.com]

Everyone is talking about developing clean and maintainable codebases, but what does that actually mean?

Also worth reading:

3. Musings

>> Excited about a ‘2.0’ tech stack for microservices [christianposta.com]

A few thoughts about a new generation of tools for building microservices.

>> Tech jobs are already largely automated [lemire.me]

Very interesting points regarding the reality of our industry and how software is impacting the overall job market.

>> What’s in a Name? Spelling Matters in Code [daedtech.com]

In the age of advanced IDEs, there is no justification for having grammar errors or typos in your codebase.

>> First steps as a test automation coach [ontestautomation.com]

Thoughts about starting to coach teams, in this case towards better testing.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Are you sure my data is correct? [dilbert.com]

>> Giving 110% [dilbert.com]

>> How to look busy [dilbert.com]

5. Pick of the Week

A really good episode on the important topic of doing deep work:

>> SPI 255: Deep Work with Cal Newport [smartpassiveincome.com]

How to Register a Servlet in Java


1. Introduction

This article will provide an overview of how to register a servlet within Java EE and Spring Boot. Specifically, we will look at two ways to register a Java Servlet in Java EE — one using a web.xml file, and the other using annotations. Then we’ll register servlets in Spring Boot using XML configuration, Java configuration, and through configurable properties.

A great introductory article on servlets can be found here.

2. Registering Servlets in Java EE

Let’s go over two ways to register a servlet in Java EE. First, we can register a servlet via web.xml. Alternatively, we can use the Java EE @WebServlet annotation.

2.1. Via web.xml

The most common way to register a servlet within your Java EE application is to add it to your web.xml file:

<welcome-file-list>
    <welcome-file>index.html</welcome-file>
    <welcome-file>index.htm</welcome-file>
    <welcome-file>index.jsp</welcome-file>
</welcome-file-list>
<servlet>
    <servlet-name>Example</servlet-name>
    <servlet-class>com.baeldung.Example</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>Example</servlet-name>
    <url-pattern>/Example</url-pattern>
</servlet-mapping>

As you can see, this involves two steps: (1) adding our servlet to the servlet tag, making sure to specify the fully qualified name of the class the servlet resides in, and (2) specifying the URL path the servlet will be exposed on in the url-pattern tag.

The Java EE web.xml file is usually found in WebContent/WEB-INF.

2.2. Via Annotations

Now let’s register our servlet using the @WebServlet annotation on our custom servlet class. This eliminates the need for servlet mappings and servlet registration in web.xml:

@WebServlet(
  name = "AnnotationExample",
  description = "Example Servlet Using Annotations",
  urlPatterns = {"/AnnotationExample"}
)
public class Example extends HttpServlet {	
 
    @Override
    protected void doGet(
      HttpServletRequest request, 
      HttpServletResponse response) throws ServletException, IOException {
 
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<p>Hello World!</p>");
    }
}

The code above demonstrates how to add the annotation directly to a servlet. The servlet is now available at the URL path given in urlPatterns, with no XML required.

3. Registering Servlets in Spring Boot

Now that we’ve shown how to register servlets in Java EE, let’s take a look at several ways to register servlets in a Spring Boot application.

3.1. Programmatic Registration

Spring Boot supports 100% programmatic configuration of a web application.

First, we’ll implement the WebApplicationInitializer interface and use a subclass of WebMvcConfigurerAdapter, which lets us override only the defaults we care about instead of specifying every configuration setting, saving time and giving us several tried-and-true settings out of the box.

Let’s look at a sample WebApplicationInitializer implementation:

public class WebAppInitializer implements WebApplicationInitializer {
 
    public void onStartup(ServletContext container) throws ServletException {
        AnnotationConfigWebApplicationContext ctx
          = new AnnotationConfigWebApplicationContext();
        ctx.register(WebMvcConfigure.class);
        ctx.setServletContext(container);

        ServletRegistration.Dynamic servlet = container.addServlet(
          "dispatcherExample", new DispatcherServlet(ctx));
        servlet.setLoadOnStartup(1);
        servlet.addMapping("/");
     }
}

Next, let’s extend the WebMvcConfigurerAdapter class:

@Configuration
public class WebMvcConfigure extends WebMvcConfigurerAdapter {

    @Bean
    public ViewResolver getViewResolver() {
        InternalResourceViewResolver resolver
          = new InternalResourceViewResolver();
        resolver.setPrefix("/WEB-INF/");
        resolver.setSuffix(".jsp");
        return resolver;
    }

    @Override
    public void configureDefaultServletHandling(
      DefaultServletHandlerConfigurer configurer) {
        configurer.enable();
    }

    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        registry.addResourceHandler("/resources/**")
          .addResourceLocations("/resources/").setCachePeriod(3600)
          .resourceChain(true).addResolver(new PathResourceResolver());
    }
}

Above we specify some of the default settings for JSP servlets explicitly in order to support .jsp views and static resource serving.

3.2. XML Configuration

Another way to configure and register servlets within Spring Boot is through web.xml:

<servlet>
    <servlet-name>dispatcher</servlet-name>
    <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
    <init-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>/WEB-INF/spring/dispatcher.xml</param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
</servlet>

<servlet-mapping>
    <servlet-name>dispatcher</servlet-name>
    <url-pattern>/</url-pattern>
</servlet-mapping>

The web.xml used to specify configuration in Spring is similar to that found in Java EE. Above, you can see how we specify a few more parameters via elements nested under the servlet tag.

Here we use another XML to complete the configuration:

<beans ...>
    
    <context:component-scan base-package="com.baeldung"/>

    <bean 
      class="org.springframework.web.servlet.view.InternalResourceViewResolver">
        <property name="prefix" value="/WEB-INF/jsp/"/>
        <property name="suffix" value=".jsp"/>
    </bean>
</beans>

Remember that your Spring web.xml will usually live in src/main/webapp/WEB-INF.

3.3. Combining XML and Programmatic Registration

Let’s mix an XML configuration approach with Spring’s programmatic configuration:

public void onStartup(ServletContext container) throws ServletException {
   XmlWebApplicationContext xctx = new XmlWebApplicationContext();
   xctx.setConfigLocation("classpath:/context.xml");
   xctx.setServletContext(container);

   ServletRegistration.Dynamic servlet = container.addServlet(
     "dispatcher", new DispatcherServlet(xctx));
   servlet.setLoadOnStartup(1);
   servlet.addMapping("/");
}

And here’s the context.xml we referenced above:

<beans ...>

    <context:component-scan base-package="com.baeldung"/>
    <bean class="com.baeldung.configuration.WebAppInitializer"/>
</beans>

3.4. Registration by Bean

We can also programmatically configure and register our servlets using a ServletRegistrationBean. Below we’ll do so in order to register an HttpServlet (which implements the javax.servlet.Servlet interface):

@Bean
public ServletRegistrationBean exampleServletBean() {
    ServletRegistrationBean bean = new ServletRegistrationBean(
      new CustomServlet(), "/exampleServlet/*");
    bean.setLoadOnStartup(1);
    return bean;
}

The main advantage of this approach is that it enables you to add both multiple servlets as well as different kinds of servlets to your Spring application.
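
For instance, nothing stops us from declaring a second registration bean alongside the first (the servlet class and mapping here are hypothetical):

@Bean
public ServletRegistrationBean anotherServletBean() {
    ServletRegistrationBean bean = new ServletRegistrationBean(
      new AnotherCustomServlet(), "/anotherServlet/*");
    bean.setLoadOnStartup(2);
    return bean;
}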

In section 3.1, our programmatic WebApplicationInitializer approach registered a DispatcherServlet, a more specific kind of HttpServlet and the most common one used. Here, instead, we’ll use a simpler HttpServlet subclass, which exposes the four basic HTTP operations through four methods: doGet(), doPost(), doPut(), and doDelete(), just like in Java EE.

Remember that HttpServlet is an abstract class (so it can’t be instantiated). We can whip up a custom extension easily, though:

public class CustomServlet extends HttpServlet{
    ...
}
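
For instance, the elided body above could contain a simple doGet() override (the response text is purely illustrative):

@Override
protected void doGet(HttpServletRequest request, HttpServletResponse response)
  throws ServletException, IOException {
    // purely illustrative response body
    response.setContentType("text/html");
    response.getWriter().println("<p>Hello from CustomServlet!</p>");
}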

4. Registering Servlets with Properties

Another, though uncommon, way to configure and register your servlets is to use a custom properties file loaded into the app via a PropertyLoader, PropertySource, or PropertySources instance object.

This provides an intermediate kind of configuration and an alternative to application.properties, which offers little direct configuration for non-embedded servlets.

4.1. System Properties Approach

We can add some custom settings to our application.properties file or another properties file. Let’s add a few settings to configure our DispatcherServlet:

servlet.name=dispatcherExample
servlet.mapping=/dispatcherExampleURL

We can point our application at that file by setting a system property:

System.setProperty("custom.config.location", "classpath:custom.properties");

And we can read the configured location back via:

System.getProperty("custom.config.location");
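
To tie the two together, here is a sketch (assuming the property points at a loadable classpath resource) that resolves the location and reads the settings:

// hypothetical glue code: resolve the configured location and load the file
String location = System.getProperty("custom.config.location");
Resource resource = new ClassPathResource(location.replace("classpath:", ""));
Properties props = PropertiesLoaderUtils.loadProperties(resource);
String servletName = props.getProperty("servlet.name");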

4.2. Custom Properties Approach

Let’s start with a custom.properties file:

servlet.name=dispatcherExample
servlet.mapping=/dispatcherExampleURL

We can then use a run-of-the-mill Property Loader:

public Properties getProperties(String file) throws IOException {
  Properties prop = new Properties();
  // try-with-resources guarantees the stream is closed, even on failure
  try (InputStream input = getClass().getResourceAsStream(file)) {
      prop.load(input);
  }
  return prop;
}

And now we can add these custom properties as constants to our WebApplicationInitializer implementation:

private static final PropertyLoader pl = new PropertyLoader(); 
private static final Properties springProps
  = pl.getProperties("custom_spring.properties"); 

public static final String SERVLET_NAME
  = springProps.getProperty("servlet.name"); 
public static final String SERVLET_MAPPING
  = springProps.getProperty("servlet.mapping");

We can then use them to, for example, configure our dispatcher servlet:

ServletRegistration.Dynamic servlet = container.addServlet(
  SERVLET_NAME, new DispatcherServlet(ctx));
servlet.setLoadOnStartup(1);
servlet.addMapping(SERVLET_MAPPING);

The advantage of this approach is that we avoid .xml maintenance while keeping easy-to-modify configuration settings that don’t require redeploying the codebase.

4.3. The PropertySource Approach

A faster way to accomplish the above is to make use of Spring’s PropertySource, which allows a configuration file to be accessed and loaded.

PropertyResolver is an interface implemented by ConfigurableEnvironment, which makes application properties available at servlet startup and initialization:

@Configuration 
@PropertySource("classpath:/com/yourapp/custom.properties") 
public class ExampleCustomConfig { 
    @Autowired 
    ConfigurableEnvironment env; 

    public String getProperty(String key) { 
        return env.getProperty(key); 
    } 
}

Above, we autowire a dependency into the class and specify the location of our custom properties file. We can then fetch any property of interest by calling getProperty() with its key.
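
As a quick usage sketch, assuming ExampleCustomConfig has been picked up as a bean in an application context ctx, reading a key then looks like this:

ExampleCustomConfig config = ctx.getBean(ExampleCustomConfig.class);
String servletName = config.getProperty("servlet.name");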

4.4. The PropertySource Programmatic Approach

We can combine the above approach (which involves fetching property values) with the approach below (which allows us to programmatically specify those values):

ConfigurableEnvironment env = new StandardEnvironment();
MutablePropertySources props = env.getPropertySources();
Map<String, Object> map = new HashMap<>();
map.put("key", "value");
props.addFirst(new MapPropertySource("Map", map));

We’ve created a map linking a key to a value, then added that map to the PropertySources so it can be looked up as needed.

5. Registering Embedded Servlets

Lastly, we’ll also take a look at basic configuration and registration of embedded servlets within Spring Boot.

An embedded servlet provides full web container (Tomcat, Jetty, etc.) functionality without having to install or maintain the web container separately.

Wherever such functionality is supported, you can add the required dependencies and configuration for a simple live server deployment painlessly, compactly, and quickly.

We’ll only look at how to do this with Tomcat, but the same approach can be used for Jetty and other alternatives.

Let’s specify the dependency for an embedded Tomcat 8 web container in pom.xml:

<dependency>
    <groupId>org.apache.tomcat.embed</groupId>
    <artifactId>tomcat-embed-core</artifactId>
    <version>8.5.11</version>
</dependency>

Now let’s add the tags required to successfully add Tomcat to the .war produced by Maven at build-time:

<build>
    <finalName>embeddedTomcatExample</finalName>
    <plugins>
        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>appassembler-maven-plugin</artifactId>
            <version>2.0.0</version>
            <configuration>
                <assembleDirectory>target</assembleDirectory>
                <programs>
                    <program>
                        <mainClass>launch.Main</mainClass>
                        <name>webapp</name>
                    </program>
                </programs>
            </configuration>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>assemble</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

If you are using Spring Boot, you can instead add Spring’s spring-boot-starter-tomcat dependency to your pom.xml:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-tomcat</artifactId>
    <scope>provided</scope>
</dependency>

5.1. Registration Through Properties

Spring Boot supports configuring most possible Spring settings through application.properties. After adding the necessary embedded servlet dependencies to your pom.xml, you can customize and configure your embedded servlet using several such configuration options:

server.jsp-servlet.class-name=org.apache.jasper.servlet.JspServlet 
server.jsp-servlet.registered=true
server.port=8080
server.servlet-path=/

Above are some of the application settings that can be used to configure the DispatcherServlet and static resource sharing. Settings for embedded servlets, SSL support, and sessions are also available.

There are really too many configuration parameters to list here, but you can see the full list in the Spring Boot documentation.

5.2. Configuration Through YAML

Similarly, we can configure our embedded servlet container using YAML. This requires a specialized YAML property loader, the YamlPropertySourceLoader, which exposes our YAML and makes its keys and values available within our app.
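
For illustration, a YAML counterpart of the properties from section 5.1 might look like this (the file name and nesting below are our own assumptions):

server:
  port: 8080
  servlet-path: /
  jsp-servlet:
    registered: true
    class-name: org.apache.jasper.servlet.JspServlet

We can then load it programmatically, assuming resource points at that YAML file: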

YamlPropertySourceLoader sourceLoader = new YamlPropertySourceLoader();
PropertySource<?> yamlProps = sourceLoader.load("yamlProps", resource, null);

5.3. Programmatic Configuration Through TomcatEmbeddedServletContainerFactory

Programmatic configuration of an embedded servlet container is possible through a subclassed instance of EmbeddedServletContainerFactory. For example, you can use the TomcatEmbeddedServletContainerFactory to configure your embedded Tomcat servlet.

The TomcatEmbeddedServletContainerFactory wraps the org.apache.catalina.startup.Tomcat object providing additional configuration options:

@Bean
public EmbeddedServletContainerFactory servletContainer() {
    TomcatEmbeddedServletContainerFactory tomcatContainerFactory
      = new TomcatEmbeddedServletContainerFactory();
    return tomcatContainerFactory;
}

Then we can configure the returned instance:

tomcatContainerFactory.setPort(9000);
tomcatContainerFactory.setContextPath("/springboottomcatexample");

Each of those particular settings can be made configurable using any of the methods previously described.

We can also directly access and manipulate the org.apache.catalina.startup.Tomcat object:

Tomcat tomcat = new Tomcat();
tomcat.setPort(port);
tomcat.setContextPath("/springboottomcatexample");
tomcat.start();

6. Conclusion

In this article, we’ve reviewed several ways to register a Servlet in a Java EE and Spring Boot application.

The source code used in this tutorial is available in the GitHub project.

Intro To Reactor Core


1. Introduction

Reactor Core is a Java 8 library which implements the reactive programming model. It’s built on top of the Reactive Streams Specification, a standard for building reactive applications.

Coming from a background of non-reactive Java development, going reactive can be quite a steep learning curve. This becomes more challenging when comparing it to the Java 8 Stream API, as the two could be mistaken for the same high-level abstraction.

In this article, we’ll attempt to demystify this paradigm. We’ll take small steps through Reactor until we’ve built a picture of how to compose reactive code, laying the foundation for more advanced articles to come in a later series.

2. Reactive Streams Specification

Before we look at Reactor, we should look at the Reactive Streams Specification. This is what Reactor implements, and it lays the groundwork for the library.

Essentially, Reactive Streams is a specification for asynchronous stream processing.

In other words, a system where lots of events are being produced and consumed asynchronously. Think about a stream of thousands of stock updates per second coming into a financial application, and for it to have to respond to those updates in a timely manner.

One of the main goals of this is to address the problem of back pressure. If we have a producer which is emitting events to a consumer faster than it can process them, then eventually the consumer will be overwhelmed with events, running out of system resources. Backpressure means that our consumer should be able to tell the producer how much data to send in order to prevent this, and this is what is laid out in the specification.
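
Concretely, the specification captures this in its Subscription abstraction, through which the consumer signals demand:

// from the Reactive Streams specification (org.reactivestreams)
public interface Subscription {
    void request(long n); // ask the producer for up to n more elements
    void cancel();        // stop receiving data altogether
}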

3. Maven Dependencies

Before we get started, let’s add our Maven dependencies:

<dependency>
    <groupId>io.projectreactor</groupId>
    <artifactId>reactor-core</artifactId>
    <version>3.0.5.RELEASE</version>
</dependency>

<dependency> 
    <groupId>ch.qos.logback</groupId> 
    <artifactId>logback-classic</artifactId> 
    <version>1.1.3</version> 
</dependency>

We’re also adding Logback as a dependency. This is because we’ll be logging the output of Reactor in order to better understand the flow of data.

4. Producing a Stream of Data

In order for an application to be reactive, the first thing it must be able to do is to produce a stream of data. This could be something like the stock update example that we gave earlier. Without this data, we wouldn’t have anything to react to, which is why this is a logical first step. Reactor Core gives us two data types that enable us to do this.

4.1. Flux

The first way of doing this is with a Flux.  It’s a stream which can emit 0..n elements. Let’s try creating a simple one:

Flux<String> just = Flux.just("1", "2", "3");

In this case, we have a static stream of three elements.

4.2. Mono

The second way of doing this is with a Mono, which is a stream of 0..1 elements. Let’s try instantiating one:

Mono<String> just = Mono.just("foo");

This looks and behaves almost exactly the same as the Flux, only this time we are limited to no more than one element.
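
The zero-element case is just as easy to express; for example, a Mono that completes without ever emitting a value:

Mono<String> empty = Mono.empty();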

4.3. Why Not Just Flux?

Before experimenting further, it’s worth highlighting why we have these two data types.

First, it should be noted that both a Flux and Mono are implementations of the Reactive Streams Publisher interface. Both classes are compliant with the specification, and we could use this interface in their place:

Publisher<String> just = Mono.just("foo");

But really, knowing this cardinality is useful. This is because a few operations only make sense for one of the two types, and because it can be more expressive (imagine findOne() in a repository).

5. Subscribing to a Stream

Now that we have a high-level overview of how to produce a stream of data, we need to subscribe to it in order for it to emit the elements.

5.1. Collecting Elements

Let’s use the subscribe() method to collect all the elements in a stream:

List<Integer> elements = new ArrayList<>();

Flux.just(1, 2, 3, 4)
  .log()
  .subscribe(elements::add);

assertThat(elements).containsExactly(1, 2, 3, 4);

The data won’t start flowing until we subscribe. Notice that we have added some logging as well; this will be helpful when we look at what’s happening behind the scenes.

5.2. The Flow of Elements

With logging in place, we can use it to visualize how the data is flowing through our stream:

20:25:19.550 [main] INFO  reactor.Flux.Array.1 - | onSubscribe([Synchronous Fuseable] FluxArray.ArraySubscription)
20:25:19.553 [main] INFO  reactor.Flux.Array.1 - | request(unbounded)
20:25:19.553 [main] INFO  reactor.Flux.Array.1 - | onNext(1)
20:25:19.553 [main] INFO  reactor.Flux.Array.1 - | onNext(2)
20:25:19.553 [main] INFO  reactor.Flux.Array.1 - | onNext(3)
20:25:19.553 [main] INFO  reactor.Flux.Array.1 - | onNext(4)
20:25:19.553 [main] INFO  reactor.Flux.Array.1 - | onComplete()

First of all, everything is running on the main thread. Let’s not go into any details about this, as we’ll be taking a further look at concurrency later on in this article. It does make things simple, though, as we can deal with everything in order.

Now let’s go through the sequence that we have logged one by one:

  1. onSubscribe() – This is called when we subscribe to our stream
  2. request(unbounded) – When we call subscribe(), behind the scenes we are creating a Subscription. This subscription requests elements from the stream. In this case, it defaults to unbounded, meaning it requests every single element available
  3. onNext() – This is called on every single element
  4. onComplete() – This is called last, after receiving the last element. There’s actually an onError() as well, which would be called if there is an exception, but in this case, there isn’t

This is the flow laid out in the Subscriber interface as part of the Reactive Streams Specification, and in reality, that’s what’s been instantiated behind the scenes in our call to subscribe(). It’s a useful method, but to better understand what’s happening, let’s provide a Subscriber implementation directly:

Flux.just(1, 2, 3, 4)
  .log()
  .subscribe(new Subscriber<Integer>() {
    @Override
    public void onSubscribe(Subscription s) {
      s.request(Long.MAX_VALUE);
    }

    @Override
    public void onNext(Integer integer) {
      elements.add(integer);
    }

    @Override
    public void onError(Throwable t) {}

    @Override
    public void onComplete() {}
});

We can see that each possible stage in the above flow maps to a method in the Subscriber implementation. It just happens that the Flux has provided us with a helper method to reduce this verbosity.

5.3. Comparison to Java 8 Streams

It still might appear that we have something analogous to a Java 8 Stream doing a collect:

List<Integer> collected = Stream.of(1, 2, 3, 4)
  .collect(toList());

Only we don’t.

The core difference is that Reactive is a push model, whereas the Java 8 Streams are a pull model. In the reactive approach, events are pushed to the subscribers as they come in.

The next thing to notice is that a Stream’s terminal operator is just that, terminal: it pulls all the data and returns a result. With Reactive we could have an infinite stream coming in from an external resource, with multiple subscribers attached and removed on an ad hoc basis. We can also do things like combine streams, throttle streams, and apply backpressure, which we will cover next.

6. Backpressure

The next thing we should consider is backpressure. In our example, the subscriber is telling the producer to push every single element at once. This could end up becoming overwhelming for the subscriber, consuming all of its resources.

Backpressure is when a downstream can tell an upstream to send it less data in order to prevent it from being overwhelmed.

We can modify our Subscriber implementation to apply backpressure. Let’s tell the upstream to only send two elements at a time by using request():

Flux.just(1, 2, 3, 4)
  .log()
  .subscribe(new Subscriber<Integer>() {
    private Subscription s;
    int onNextAmount;

    @Override
    public void onSubscribe(Subscription s) {
        this.s = s;
        s.request(2);
    }

    @Override
    public void onNext(Integer integer) {
        elements.add(integer);
        onNextAmount++;
        if (onNextAmount % 2 == 0) {
            s.request(2);
        }
    }

    @Override
    public void onError(Throwable t) {}

    @Override
    public void onComplete() {}
});

Now if we run our code again, we’ll see that request(2) is called, followed by two onNext() calls, then request(2) again:

23:31:15.395 [main] INFO  reactor.Flux.Array.1 - | onSubscribe([Synchronous Fuseable] FluxArray.ArraySubscription)
23:31:15.397 [main] INFO  reactor.Flux.Array.1 - | request(2)
23:31:15.397 [main] INFO  reactor.Flux.Array.1 - | onNext(1)
23:31:15.398 [main] INFO  reactor.Flux.Array.1 - | onNext(2)
23:31:15.398 [main] INFO  reactor.Flux.Array.1 - | request(2)
23:31:15.398 [main] INFO  reactor.Flux.Array.1 - | onNext(3)
23:31:15.398 [main] INFO  reactor.Flux.Array.1 - | onNext(4)
23:31:15.398 [main] INFO  reactor.Flux.Array.1 - | request(2)
23:31:15.398 [main] INFO  reactor.Flux.Array.1 - | onComplete()

Essentially, this is reactive pull backpressure. We are requesting the upstream to only push a certain number of elements, and only when we are ready. If we imagine we were being streamed tweets from Twitter, it would then be up to the upstream to decide what to do. If tweets were coming in but there were no requests from the downstream, the upstream could drop items, store them in a buffer, or apply some other strategy.

7. Operating on a Stream

We can also perform operations on the data in our stream, responding to events as we see fit.

7.1. Mapping Data in a Stream

A simple operation that we can perform is applying a transformation. In this case, let’s just double all the numbers in our stream:

Flux.just(1, 2, 3, 4)
  .log()
  .map(i -> i * 2)
  .subscribe(elements::add);

map() will be applied when onNext() is called.
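
The collected list should consequently contain the doubled values:

assertThat(elements).containsExactly(2, 4, 6, 8);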

7.2. Combining two Streams

We can then make things more interesting by combining another stream with this one. Let’s try this by using zip() function:

Flux.just(1, 2, 3, 4)
  .log()
  .map(i -> i * 2)
  .zipWith(Flux.range(0, Integer.MAX_VALUE), 
    (two, one) -> String.format("First Flux: %d, Second Flux: %d", one, two))
  .subscribe(elements::add);

assertThat(elements).containsExactly(
  "First Flux: 0, Second Flux: 2",
  "First Flux: 1, Second Flux: 4",
  "First Flux: 2, Second Flux: 6",
  "First Flux: 3, Second Flux: 8");

Here, we are creating another Flux which keeps incrementing by one and streaming it together with our original one. We can see how these work together by inspecting the logs:

20:04:38.064 [main] INFO  reactor.Flux.Array.1 - | onSubscribe([Synchronous Fuseable] FluxArray.ArraySubscription)
20:04:38.065 [main] INFO  reactor.Flux.Array.1 - | onNext(1)
20:04:38.066 [main] INFO  reactor.Flux.Range.2 - | onSubscribe([Synchronous Fuseable] FluxRange.RangeSubscription)
20:04:38.066 [main] INFO  reactor.Flux.Range.2 - | onNext(0)
20:04:38.067 [main] INFO  reactor.Flux.Array.1 - | onNext(2)
20:04:38.067 [main] INFO  reactor.Flux.Range.2 - | onNext(1)
20:04:38.067 [main] INFO  reactor.Flux.Array.1 - | onNext(3)
20:04:38.067 [main] INFO  reactor.Flux.Range.2 - | onNext(2)
20:04:38.067 [main] INFO  reactor.Flux.Array.1 - | onNext(4)
20:04:38.067 [main] INFO  reactor.Flux.Range.2 - | onNext(3)
20:04:38.067 [main] INFO  reactor.Flux.Array.1 - | onComplete()
20:04:38.067 [main] INFO  reactor.Flux.Array.1 - | cancel()
20:04:38.067 [main] INFO  reactor.Flux.Range.2 - | cancel()

Note how we now have one subscription per Flux. The onNext() calls are also alternated, so the index of each element in the stream will match when we apply the zip() function.

8. Hot Streams

So far, we’ve focused primarily on cold streams. These are static, fixed-length streams which are easy to deal with. A more realistic use case for reactive might be something that happens infinitely. For example, we could have a stream of mouse movements which constantly needs to be reacted to, or a Twitter feed. These types of streams are called hot streams, as they are always running and can be subscribed to at any point in time, missing the start of the data.

8.1. Creating a ConnectableFlux

One way to create a hot stream is by converting a cold stream into one. Let’s create a Flux that lasts forever, outputting the results to the console, which would simulate an infinite stream of data coming from an external resource:

ConnectableFlux<Object> publish = Flux.create(fluxSink -> {
    while(true) {
        fluxSink.next(System.currentTimeMillis());
    }
})
  .publish();

By calling publish() we are given a ConnectableFlux. This means that calling subscribe() won’t cause it to start emitting, allowing us to add multiple subscriptions:

publish.subscribe(System.out::println);        
publish.subscribe(System.out::println);

If we try running this code, nothing will happen. It’s not until we call connect() that the Flux will start emitting, regardless of whether anyone has subscribed.
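
Kicking off the emission for all attached subscribers is then a one-liner:

publish.connect();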

8.2. Throttling

If we run our code, our console will be overwhelmed with logging. This is simulating a situation where too much data is being passed to our consumers. Let’s try getting around this with throttling:

ConnectableFlux<Object> publish = Flux.create(fluxSink -> {
    while(true) {
        fluxSink.next(System.currentTimeMillis());
    }
})
  .sample(ofSeconds(2))
  .publish();

Here, we’ve introduced a sample() method with an interval of two seconds. Now values will only be pushed to our subscriber every two seconds, meaning the console will be a lot less hectic.

Of course, there are multiple strategies to reduce the amount of data sent downstream, such as windowing and buffering, but they are outside the scope of this article.

9. Concurrency

All of our examples so far have run on the main thread. However, we can control which thread our code runs on if we want. The Scheduler interface provides an abstraction around asynchronous code, and many implementations are provided for us. Let’s try running our subscription on a thread other than main:

Flux.just(1, 2, 3, 4)
  .log()
  .map(i -> i * 2)
  .subscribeOn(Schedulers.parallel())
  .subscribe(elements::add);

The parallel scheduler causes our subscription to be run on a different thread, which we can prove by looking at the logs:

20:03:27.505 [main] DEBUG reactor.util.Loggers$LoggerFactory - Using Slf4j logging framework
20:03:27.529 [parallel-1] INFO  reactor.Flux.Array.1 - | onSubscribe([Synchronous Fuseable] FluxArray.ArraySubscription)
20:03:27.531 [parallel-1] INFO  reactor.Flux.Array.1 - | request(unbounded)
20:03:27.531 [parallel-1] INFO  reactor.Flux.Array.1 - | onNext(1)
20:03:27.531 [parallel-1] INFO  reactor.Flux.Array.1 - | onNext(2)
20:03:27.531 [parallel-1] INFO  reactor.Flux.Array.1 - | onNext(3)
20:03:27.531 [parallel-1] INFO  reactor.Flux.Array.1 - | onNext(4)
20:03:27.531 [parallel-1] INFO  reactor.Flux.Array.1 - | onComplete()

Concurrency gets more interesting than this, and it will be worth exploring in another article.

10. Conclusion

In this article, we’ve given a high-level, end-to-end overview of Reactor Core. We’ve explained how we can publish and subscribe to streams, apply backpressure, operate on streams, and handle data asynchronously. This should hopefully lay a foundation for us to write reactive applications.

Later articles in this series will cover more advanced concurrency and other reactive concepts. There’s also another article covering Reactor with Spring.

The source code for our application is available over on GitHub; this is a Maven project which should be able to run as is.
