
A Guide to JGit


1. Introduction

JGit is a lightweight, pure Java library implementation of the Git version control system – including repository access routines, network protocols, and core version control algorithms.

JGit is a relatively full-featured implementation of Git written in Java and is widely used in the Java community. The JGit project is under the Eclipse umbrella, and its home can be found at JGit.

In this tutorial, we’ll explain how to work with it.

2. Getting Started

There are a number of ways to connect your project with JGit and start writing code. Probably the easiest way is to use Maven – the integration is accomplished by adding the following snippet to the <dependencies> tag in our pom.xml file:

<dependency>
    <groupId>org.eclipse.jgit</groupId>
    <artifactId>org.eclipse.jgit</artifactId>
    <version>4.6.0.201612231935-r</version>
</dependency>

Please visit the Maven Central repository for the newest version of JGit. Once this step is done, Maven will automatically acquire and use the JGit libraries that we’ll need.

If you prefer OSGi bundles, there is also a p2 repository. Please visit Eclipse JGit for the necessary information on how to integrate this library.

3. Creating a Repository

JGit has two basic levels of API: plumbing and porcelain. The terminology for these comes from Git itself. JGit is divided into the same areas:

  • porcelain APIs – front-end for common user-level actions (similar to the Git command-line tool)
  • plumbing APIs – direct interaction with low-level repository objects

The starting point for most JGit sessions is the Repository class. The first thing we're going to do is create a new Repository instance.

The init command will let us create an empty repository:

Git git = Git.init().setDirectory(new File("/path/to/repo")).call();

This will create a repository with a working directory at the location given to setDirectory().

An existing repository can be cloned with the cloneRepository command:

Git git = Git.cloneRepository()
  .setURI("https://github.com/eclipse/jgit.git")
  .setDirectory(new File("/path/to/repo"))
  .call();

The code above will clone the JGit repository into the local directory /path/to/repo.

4. Git Objects

All objects are represented by an SHA-1 id in the Git object model. In JGit, this is represented by the AnyObjectId and ObjectId classes.

There are four types of objects in the Git object model:

  • blob – used for storing file data
  • tree –  a directory; it references other trees and blobs
  • commit – points to a single tree
  • tag – marks a commit as special; generally used for marking specific releases

To resolve an object from a repository, simply pass the right revision as in the following function:

ObjectId head = repository.resolve("HEAD");

4.1. Ref

The Ref is a variable that holds a single object identifier. The object identifier can be any valid Git object (blob, tree, commit, or tag).

For example, to query for the reference to the master branch, you can simply call:

Ref HEAD = repository.getRef("refs/heads/master");

4.2. RevWalk

The RevWalk walks a commit graph and produces the matching commits in order:

RevWalk walk = new RevWalk(repository);
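
For example, a minimal sketch of walking the history from HEAD might look like this (assuming repository is an open Repository instance with at least one commit):

ObjectId head = repository.resolve("HEAD");

RevWalk walk = new RevWalk(repository);
// start the walk at the commit that HEAD points to
walk.markStart(walk.parseCommit(head));
for (RevCommit rev : walk) {
    System.out.println(rev.getShortMessage());
}
walk.dispose();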

4.3. RevCommit

The RevCommit represents a commit in the Git object model. To parse a commit, use a RevWalk instance:

RevWalk walk = new RevWalk(repository);
RevCommit commit = walk.parseCommit(objectIdOfCommit);

4.4. RevTag

The RevTag represents a tag in the Git object model. You can use a RevWalk instance to parse a tag:

RevWalk walk = new RevWalk(repository);
RevTag tag = walk.parseTag(objectIdOfTag);

4.5. RevTree

The RevTree represents a tree in the Git object model. A RevWalk instance is also used to parse a tree:

RevWalk walk = new RevWalk(repository);
RevTree tree = walk.parseTree(objectIdOfTree);

5. Porcelain API

While JGit contains a lot of low-level code to work with Git repositories, it also contains a higher level API that mimics some of the Git porcelain commands in the org.eclipse.jgit.api package.

5.1. AddCommand (git-add)

The AddCommand allows you to add files to the index via:

  • addFilepattern()

Here’s a quick example of how to add a set of files to the index using the porcelain API:

Git git = new Git(db);
AddCommand add = git.add();
add.addFilepattern("someDirectory").call();

5.2. CommitCommand (git-commit)

The CommitCommand allows you to perform commits and has the following options available:

  • setAuthor()
  • setCommitter()
  • setAll()

Here’s a quick example of how to commit using the porcelain API:

Git git = new Git(db);
CommitCommand commit = git.commit();
commit.setMessage("initial commit").call();

5.3. TagCommand (git-tag)

The TagCommand supports a variety of tagging options:

  • setName()
  • setMessage()
  • setTagger()
  • setObjectId()
  • setForceUpdate()
  • setSigned()

Here’s a quick example of tagging a commit using the porcelain API:

Git git = new Git(db);
RevCommit commit = git.commit().setMessage("initial commit").call();
Ref tag = git.tag().setName("tag").call();

5.4. LogCommand (git-log)

The LogCommand allows you to easily walk a commit graph:

  • add(AnyObjectId start)
  • addRange(AnyObjectId since, AnyObjectId until)

Here’s a quick example of how to get some log messages:

Git git = new Git(db);
Iterable<RevCommit> log = git.log().call();
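
To restrict the walk, a sketch using addRange() might look like this (assuming the repository has enough history for HEAD~3 to resolve):

ObjectId since = repository.resolve("HEAD~3");
ObjectId until = repository.resolve("HEAD");

// log only the commits between the two boundaries
Iterable<RevCommit> log = git.log().addRange(since, until).call();
for (RevCommit commit : log) {
    System.out.println(commit.getShortMessage());
}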

6. Ant Tasks

JGit also has some common Ant tasks contained in the org.eclipse.jgit.ant bundle.

To use those tasks:

<taskdef resource="org/eclipse/jgit/ant/ant-tasks.properties">
    <classpath>
        <pathelement location="path/to/org.eclipse.jgit.ant-VERSION.jar"/>
        <pathelement location="path/to/org.eclipse.jgit-VERSION.jar"/>
        <pathelement location="path/to/jsch-0.1.44-1.jar"/>
    </classpath>
</taskdef>

This would provide the git-clone, git-init and git-checkout tasks.

6.1. git-clone

<git-clone uri="http://egit.eclipse.org/jgit.git" />

The following attributes are required:

  • uri: the URI to clone from

The following attributes are optional:

  • dest: the destination to clone to (defaults to a human-readable directory name based on the last path component of the URI)
  • bare: true/false/yes/no to indicate whether the cloned repository should be bare (defaults to false)
  • branch: the initial branch to check out when cloning the repository (defaults to HEAD)

6.2. git-init

<git-init />

No attributes are required to run the git-init task.

The following attributes are optional:

  • dest: the path where a git repository is initialized (defaults to $GIT_DIR or the current directory)
  • bare: true/false/yes/no to indicate whether the repository should be bare (defaults to false)

6.3. git-checkout

<git-checkout src="path/to/repo" branch="origin/newbranch" />

The following attributes are required:

  • src: the path to the git repository
  • branch: the initial branch to checkout

The following attributes are optional:

  • createbranch: true/false/yes/no to indicate whether the branch should be created if it does not already exist (defaults to false)
  • force: true/false/yes/no – if true/yes and a branch with the given name already exists, the start-point of the existing branch will be set to a new start-point; if false, the existing branch will not be changed (defaults to false)

7. Conclusion

The high-level JGit API isn’t hard to understand. If you know what git command to use, you can easily guess which classes and methods to use in JGit.

There is a collection of ready-to-run JGit code snippets available here.

If you still have difficulties or questions, please leave a comment here or ask the JGit community for assistance.


Creating PDF Files in Java


1. Introduction

In this quick article, we'll focus on creating a PDF document from scratch using the popular iText and PdfBox libraries.

2. Maven Dependencies

Let's take a look at the Maven dependencies which need to be included in our project:

<dependency>
    <groupId>com.itextpdf</groupId>
    <artifactId>itextpdf</artifactId>
    <version>5.5.10</version>
</dependency>
<dependency>
    <groupId>org.apache.pdfbox</groupId>
    <artifactId>pdfbox</artifactId>
    <version>2.0.4</version>
</dependency>

The latest version of the libraries can be found here: iText and PdfBox.

One extra dependency is necessary in case our file needs to be encrypted. The Bouncy Castle Provider package contains implementations of cryptographic algorithms and is required by both libraries:

<dependency>
    <groupId>org.bouncycastle</groupId>
    <artifactId>bcprov-jdk15on</artifactId>
    <version>1.56</version>
</dependency>

The latest version of the library can be found here: The Bouncy Castle Provider.

3. Overview

Both iText and PdfBox are Java libraries used for the creation and manipulation of PDF files. Although the final output of the libraries is the same, they operate in somewhat different ways. Let's take a look at them.

4. Create Pdf in IText

4.1. Insert Text in Pdf

Let's have a look at how a new PDF file with the “Hello World” text is created:

Document document = new Document();
PdfWriter.getInstance(document, new FileOutputStream("iTextHelloWorld.pdf"));

document.open();
Font font = FontFactory.getFont(FontFactory.COURIER, 16, BaseColor.BLACK);
Chunk chunk = new Chunk("Hello World", font);

document.add(chunk);
document.close();

Creating a PDF with the iText library is based on manipulating objects implementing the Element interface in a Document (in version 5.5.10 there are 45 of those implementations).

The smallest element which can be added to the document is called a Chunk, which is basically a string with an applied font.

Additionally, Chunks can be combined with other elements like Paragraph, Section, etc., resulting in nice-looking documents.

4.2. Inserting Image

The iText library provides an easy way to add an image to the document. We simply need to create an Image instance and add it to the Document.

Path path = Paths.get(ClassLoader.getSystemResource("Java_logo.png").toURI());

Document document = new Document();
PdfWriter.getInstance(document, new FileOutputStream("iTextImageExample.pdf"));
document.open();
Image img = Image.getInstance(path.toAbsolutePath().toString());
document.add(img);

document.close();

4.3. Inserting Table

We might face a problem when we'd like to add a table to our PDF. Luckily, iText provides such functionality out of the box.

First, we need to create a PdfPTable object, providing the number of columns for our table in its constructor. Then we can simply add new cells by calling the addCell method on the newly created table object.

iText creates table rows as long as complete rows can be formed; this means that if you create a table with 3 columns and add 8 cells to it, only 2 rows with 3 cells each will be displayed.

Let’s take a look at the example:

Document document = new Document();
PdfWriter.getInstance(document, new FileOutputStream("iTextTable.pdf"));

document.open();

PdfPTable table = new PdfPTable(3);
addTableHeader(table);
addRows(table);
addCustomRows(table);

document.add(table);
document.close();

We create a new table with 3 columns and 3 rows. We'll treat the first row as a table header, with a changed background color and border width:

private void addTableHeader(PdfPTable table) {
    Stream.of("column header 1", "column header 2", "column header 3")
      .forEach(columnTitle -> {
        PdfPCell header = new PdfPCell();
        header.setBackgroundColor(BaseColor.LIGHT_GRAY);
        header.setBorderWidth(2);
        header.setPhrase(new Phrase(columnTitle));
        table.addCell(header);
    });
}

The second row will be composed of three cells with just text and no extra formatting:

private void addRows(PdfPTable table) {
    table.addCell("row 1, col 1");
    table.addCell("row 1, col 2");
    table.addCell("row 1, col 3");
}

We can include not only text in cells but also images. Additionally, each cell might be formatted individually, in the example presented below we apply horizontal and vertical alignment adjustments:

private void addCustomRows(PdfPTable table) 
  throws URISyntaxException, BadElementException, IOException {
    Path path = Paths.get(ClassLoader.getSystemResource("Java_logo.png").toURI());
    Image img = Image.getInstance(path.toAbsolutePath().toString());
    img.scalePercent(10);

    PdfPCell imageCell = new PdfPCell(img);
    table.addCell(imageCell);

    PdfPCell horizontalAlignCell = new PdfPCell(new Phrase("row 2, col 2"));
    horizontalAlignCell.setHorizontalAlignment(Element.ALIGN_CENTER);
    table.addCell(horizontalAlignCell);

    PdfPCell verticalAlignCell = new PdfPCell(new Phrase("row 2, col 3"));
    verticalAlignCell.setVerticalAlignment(Element.ALIGN_BOTTOM);
    table.addCell(verticalAlignCell);
}

4.4. File Encryption

In order to apply permissions using the iText library, we need an already created PDF document. In our example, we will use the iTextHelloWorld.pdf file generated previously.

Once we load the file using PdfReader, we need to create a PdfStamper, which is used to apply additional content to the file, like metadata, encryption, etc.:

PdfReader pdfReader = new PdfReader("iTextHelloWorld.pdf");
PdfStamper pdfStamper 
  = new PdfStamper(pdfReader, new FileOutputStream("encryptedPdf.pdf"));

pdfStamper.setEncryption(
  "userpass".getBytes(),
  ".getBytes(),
  0,
  PdfWriter.ENCRYPTION_AES_256
);

pdfStamper.close();

In our example, we encrypted the file with two passwords: the user password (“userpass”), with which a user has only read-only rights and no possibility to print, and the owner password (“ownerpass”), which is used as a master key allowing full access to the PDF.

If we want to allow the user to print the PDF, instead of 0 (the third parameter of setEncryption) we can pass:

PdfWriter.ALLOW_PRINTING

Of course, we can mix different permissions like:

PdfWriter.ALLOW_PRINTING | PdfWriter.ALLOW_COPY
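
Putting it together, a hedged variant of the earlier setEncryption call that grants both permissions might look like this:

pdfStamper.setEncryption(
  "userpass".getBytes(),
  "ownerpass".getBytes(),
  PdfWriter.ALLOW_PRINTING | PdfWriter.ALLOW_COPY,
  PdfWriter.ENCRYPTION_AES_256
);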

Keep in mind that when using iText to set access permissions, we are also creating a temporary PDF which should be deleted; if it is not, it could be fully accessible to anybody.

5. Create Pdf in PdfBox

5.1. Insert Text in Pdf

As opposed to iText, the PdfBox library provides an API based on stream manipulation. There are no classes like Chunk or Paragraph; the PDDocument class is an in-memory PDF representation in which the user writes data by manipulating the PDPageContentStream class.

Let’s take a look at the code example:

PDDocument document = new PDDocument();
PDPage page = new PDPage();
document.addPage(page);

PDPageContentStream contentStream = new PDPageContentStream(document, page);

contentStream.setFont(PDType1Font.COURIER, 12);
contentStream.beginText();
contentStream.showText("Hello World");
contentStream.endText();
contentStream.close();

document.save("pdfBoxHelloWorld.pdf");
document.close();

5.2. Inserting Image

Inserting images is straightforward.

First, we need to load the file and create a PDImageXObject, and subsequently draw it on the document (we need to provide the exact x, y coordinates).

That’s all:

PDDocument document = new PDDocument();
PDPage page = new PDPage();
document.addPage(page);

Path path = Paths.get(ClassLoader.getSystemResource("Java_logo.png").toURI());
PDPageContentStream contentStream = new PDPageContentStream(document, page);
PDImageXObject image 
  = PDImageXObject.createFromFile(path.toAbsolutePath().toString(), document);
contentStream.drawImage(image, 0, 0);
contentStream.close();

document.save("pdfBoxImage.pdf");
document.close();

5.3. Inserting a Table

Unfortunately, PdfBox does not provide any out-of-the-box methods for creating tables. What we can do in such a situation is draw the table manually, literally drawing each line until our drawing resembles the table we want.
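
For illustration, here is a minimal sketch of that manual approach, drawing an empty 3×3 grid with moveTo/lineTo/stroke; all dimensions are arbitrary choices, and cell text would be written with beginText()/showText() as in section 5.1:

PDDocument document = new PDDocument();
PDPage page = new PDPage();
document.addPage(page);

PDPageContentStream contentStream = new PDPageContentStream(document, page);

int rows = 3;
int cols = 3;
float margin = 50;
float rowHeight = 20;
float yStart = 700;
float tableWidth = page.getMediaBox().getWidth() - 2 * margin;
float colWidth = tableWidth / cols;

// horizontal grid lines
for (int i = 0; i <= rows; i++) {
    contentStream.moveTo(margin, yStart - i * rowHeight);
    contentStream.lineTo(margin + tableWidth, yStart - i * rowHeight);
    contentStream.stroke();
}

// vertical grid lines
for (int i = 0; i <= cols; i++) {
    contentStream.moveTo(margin + i * colWidth, yStart);
    contentStream.lineTo(margin + i * colWidth, yStart - rows * rowHeight);
    contentStream.stroke();
}

contentStream.close();
document.save("pdfBoxTable.pdf");
document.close();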

5.4. File Encryption

The PdfBox library provides the ability to encrypt a file and adjust its permissions for the user. Compared to iText, it does not require an already existing file, as we simply use a PDDocument. The PDF file permissions are handled by the AccessPermission class, where we can set whether a user will be able to modify, extract content from, or print a file.

Subsequently, we create a StandardProtectionPolicy object, which adds password-based protection to the document. We can specify two types of passwords: the user password, with which a person will be able to open the file with the applied access permissions, and the owner password, which has no limitations on the file:

PDDocument document = new PDDocument();
PDPage page = new PDPage();
document.addPage(page);

AccessPermission accessPermission = new AccessPermission();
accessPermission.setCanPrint(false);
accessPermission.setCanModify(false);

StandardProtectionPolicy standardProtectionPolicy 
  = new StandardProtectionPolicy("ownerpass", "userpass", accessPermission);
document.protect(standardProtectionPolicy);
document.save("pdfBoxEncryption.pdf");
document.close();

Our example presents a situation where, if a user opens the file with the user password, the file cannot be modified or printed.

6. Conclusions

In this tutorial, we discussed ways of creating a PDF file with two popular Java libraries.

Full examples can be found in the Maven based project over on GitHub.

Guide to Pattern Matching in Javaslang


1. Overview

In this article, we are going to focus on Pattern Matching with Javaslang. If you are not familiar with Javaslang, please read the Javaslang Overview first.

Pattern matching is a feature that is not natively available in Java. One could think of it as the advanced form of a switch-case statement.

The advantage of Javaslang’s pattern matching is that it saves us from writing stacks of switch cases or if-then-else statements. It, therefore, reduces the amount of code and represents conditional logic in a human-readable way.

We can use the pattern matching API by making the following import from Javaslang 2.0 onwards:

import static javaslang.API.*;

2. How Pattern Matching Works

As we saw in the previous article, pattern matching can be used to replace a switch block:

@Test
public void whenSwitchWorksAsMatcher_thenCorrect() {
    int input = 2;
    String output;
    switch (input) {
    case 0:
        output = "zero";
        break;
    case 1:
        output = "one";
        break;
    case 2:
        output = "two";
        break;
    case 3:
        output = "three";
        break;
    default:
        output = "unknown";
        break;
    }

    assertEquals("two", output);
}

Or multiple if statements:

@Test
public void whenIfWorksAsMatcher_thenCorrect() {
    int input = 3;
    String output;
    if (input == 0) {
        output = "zero";
    }
    if (input == 1) {
        output = "one";
    }
    if (input == 2) {
        output = "two";
    }
    if (input == 3) {
        output = "three";
    } else {
        output = "unknown";
    }

    assertEquals("three", output);
}

The snippets we have seen so far are verbose and therefore error-prone. When using pattern matching, we use three main building blocks: the two static methods Match and Case, and atomic patterns.

Atomic patterns represent the condition that should be evaluated to return a boolean value:

  • $(): a wild-card pattern that is similar to the default case in a switch statement. It handles a scenario where no match is found
  • $(value): this is the equals pattern where a value is simply equals-compared to the input.
  • $(predicate): this is the conditional pattern where a predicate function is applied to the input and the resulting boolean is used to make a decision.

The switch and if approaches could be replaced by a shorter and more concise piece of code as below:

@Test
public void whenMatchworks_thenCorrect() {
    int input = 2;
    String output = Match(input).of(
      Case($(1), "one"), 
      Case($(2), "two"), 
      Case($(3), "three"), 
      Case($(), "?"));
        
    assertEquals("two", output);
}

If the input does not get a match, the wild-card pattern gets evaluated:

@Test
public void whenMatchesDefault_thenCorrect() {
    int input = 5;
    String output = Match(input).of(
      Case($(1), "one"), 
      Case($(), "unknown"));

    assertEquals("unknown", output);
}

If there is no wild-card pattern and the input does not get matched, we will get a match error:

@Test(expected = MatchError.class)
public void givenNoMatchAndNoDefault_whenThrows_thenCorrect() {
    int input = 5;
    Match(input).of(
      Case($(1), "one"), 
      Case($(2), "two"));
}

In this section, we have covered the basics of Javaslang pattern matching and the following sections will cover various approaches to tackling different cases we are likely to encounter in our code.

3. Match With Option

As we saw in the previous section, the wild-card pattern $() matches default cases where no match is found for the input.

However, another alternative to including a wild-card pattern is wrapping the return value of a match operation in an Option instance:

@Test
public void whenMatchWorksWithOption_thenCorrect() {
    int i = 10;
    Option<String> s = Match(i)
      .option(Case($(0), "zero"));

    assertTrue(s.isEmpty());
    assertEquals("None", s.toString());
}

To get a better understanding of Option in Javaslang, you can refer to the introductory article.

4. Match With Inbuilt Predicates

Javaslang ships with some inbuilt predicates that make our code more human-readable. Therefore, our initial examples can be improved further with predicates:

@Test
public void whenMatchWorksWithPredicate_thenCorrect() {
    int i = 3;
    String s = Match(i).of(
      Case(is(1), "one"), 
      Case(is(2), "two"), 
      Case(is(3), "three"),
      Case($(), "?"));

    assertEquals("three", s);
}

Javaslang offers more predicates than this. For example, we can make our condition check the class of the input instead:

@Test
public void givenInput_whenMatchesClass_thenCorrect() {
    Object obj=5;
    String s = Match(obj).of(
      Case(instanceOf(String.class), "string matched"), 
      Case($(), "not string"));

    assertEquals("not string", s);
}

Or whether the input is null or not:

@Test
public void givenInput_whenMatchesNull_thenCorrect() {
    Object obj=5;
    String s = Match(obj).of(
      Case(isNull(), "no value"), 
      Case(isNotNull(), "value found"));

    assertEquals("value found", s);
}

Instead of matching values in equals style, we can use contains style. This way, we can check if an input exists in a list of values with the isIn predicate:

@Test
public void givenInput_whenContainsWorks_thenCorrect() {
    int i = 5;
    String s = Match(i).of(
      Case(isIn(2, 4, 6, 8), "Even Single Digit"), 
      Case(isIn(1, 3, 5, 7, 9), "Odd Single Digit"), 
      Case($(), "Out of range"));

    assertEquals("Odd Single Digit", s);
}

There is more we can do with predicates, like combining multiple predicates into a single match case. To match only when the input passes all of a given group of predicates, we can AND the predicates using the allOf predicate.

A practical case would be where we want to check if a number is contained in a list, as we did in the previous example. The problem is that the list contains nulls as well. So, we want to apply a filter that, apart from rejecting numbers which are not in the list, will also reject nulls:

@Test
public void givenInput_whenMatchAllWorks_thenCorrect() {
    Integer i = null;
    String s = Match(i).of(
      Case(allOf(isNotNull(),isIn(1,2,3,null)), "Number found"), 
      Case($(), "Not found"));

    assertEquals("Not found", s);
}

To match when an input matches any of a given group, we can OR the predicates using the anyOf predicate.

Assume we are screening candidates by their year of birth, and we want only candidates who were born in 1990, 1991, or 1992.

If no such candidate is found, then we can only accept those born in 1986 and we want to make this clear in our code too:

@Test
public void givenInput_whenMatchesAnyOfWorks_thenCorrect() {
    Integer year = 1990;
    String s = Match(year).of(
      Case(anyOf(isIn(1990, 1991, 1992), is(1986)), "Age match"), 
      Case($(), "No age match"));
    assertEquals("Age match", s);
}

Finally, we can also negate predicates using the noneOf predicate, so that an input gets a match only when all the given conditions evaluate to false.

To demonstrate this, we can negate the condition in the previous example such that we get candidates who are not in the above age groups:

@Test
public void givenInput_whenMatchesNoneOfWorks_thenCorrect() {
    Integer year = 1990;
    String s = Match(year).of(
      Case(noneOf(isIn(1990, 1991, 1992), is(1986)), "Age match"), 
      Case($(), "No age match"));

    assertEquals("No age match", s);
}

5. Match With Custom Predicates

In the previous section, we explored the inbuilt predicates of Javaslang. But Javaslang does not stop there. With the knowledge of lambdas, we can build and use our own predicates or even just write them inline.

With this new knowledge, we can inline a predicate in the first example of the previous section and rewrite it like this:

@Test
public void whenMatchWorksWithCustomPredicate_thenCorrect() {
    int i = 3;
    String s = Match(i).of(
      Case(n -> n == 1, "one"), 
      Case(n -> n == 2, "two"), 
      Case(n -> n == 3, "three"), 
      Case($(), "?"));
    assertEquals("three", s);
}

We can also apply a functional interface in place of a predicate in case we need more parameters. The contains example can be rewritten like this; albeit a little more verbose, it gives us more power over what our predicate does:

@Test
public void givenInput_whenContainsWorks_thenCorrect2() {
    int i = 5;
    BiFunction<Integer, List<Integer>, Boolean> contains 
      = (t, u) -> u.contains(t);

    String s = Match(i).of(
      Case(o -> contains
        .apply(i, Arrays.asList(2, 4, 6, 8)), "Even Single Digit"), 
      Case(o -> contains
        .apply(i, Arrays.asList(1, 3, 5, 7, 9)), "Odd Single Digit"), 
      Case($(), "Out of range"));

    assertEquals("Odd Single Digit", s);
}

In the above example, we created a Java 8 BiFunction which simply checks whether the list contains the given element.

You could have used Javaslang’s FunctionN for this as well. Therefore, if the inbuilt predicates do not quite match your requirements or you want to have control over the whole evaluation, then use custom predicates.

6. Object Decomposition

Object decomposition is the process of breaking a Java object into its component parts. For example, consider the case of abstracting an employee’s bio-data alongside employment information:

public class Employee {

    private String name;
    private String id;

    //standard constructor, getters and setters
}

We can decompose an Employee’s record into its component parts: name and id. This is quite obvious in Java:

@Test
public void givenObject_whenDecomposesJavaWay_whenCorrect() {
    Employee person = new Employee("Carl", "EMP01");

    String result = "not found";
    if (person != null && "Carl".equals(person.getName())) {
        String id = person.getId();
        result="Carl has employee id "+id;
    }

    assertEquals("Carl has employee id EMP01", result);
}

We create an employee object, then we first check if it is null before applying a filter to ensure we end up with the record of an employee whose name is Carl. We then go ahead and retrieve his id. The Java way works but it is verbose and error prone.

What we are basically doing in the above example is matching what we know with what is coming in. We know we want an employee called Carl, so we try to match this name to the incoming object.

We then break down his details to get a human readable output. The null checks are simply defensive overheads we don’t need.

With Javaslang’s Pattern Matching API, we can forget about unnecessary checks and simply focus on what is important, resulting in very compact and readable code.

To use this provision, we must add the additional javaslang-match dependency to our project. You can get it by following this link.

The above code can then be written as below:

@Test
public void givenObject_whenDecomposesJavaslangWay_thenCorrect() {
    Employee person = new Employee("Carl", "EMP01");

    String result = Match(person).of(
      Case(Employee($("Carl"), $()),
        (name, id) -> "Carl has employee id "+id),
      Case($(),
        () -> "not found"));
         
    assertEquals("Carl has employee id EMP01", result);
}

The key constructs in the above example are the atomic patterns $(“Carl”) and $(), the value pattern and the wildcard pattern, respectively. We discussed these in detail in the Javaslang introductory article.

Both patterns retrieve values from the matched object and store them in the lambda parameters. The value pattern $(“Carl”) can only match when the retrieved value equals what is inside it, i.e. Carl.

On the other hand, the wild card pattern $() matches any value at its position and retrieves the value into the id lambda parameter.

For this decomposition to work, we need to define decomposition patterns or what is formally known as unapply patterns.

This means that we must teach the pattern matching API how to decompose our objects, resulting in one entry for each object to be decomposed:

@Patterns
class Demo {
    @Unapply
    static Tuple2<String, String> Employee(Employee employee) {
        return Tuple.of(employee.getName(), employee.getId());
    }

    // other unapply patterns
}

The annotation processing tool will generate a class called DemoPatterns.java which we have to statically import to wherever we want to apply these patterns:

import static com.baeldung.javaslang.DemoPatterns.*;

We can also decompose inbuilt Java objects.

For instance, java.time.LocalDate can be decomposed into a year, month and day of the month. Let us add its unapply pattern to Demo.java:

@Unapply
static Tuple3<Integer, Integer, Integer> LocalDate(LocalDate date) {
    return Tuple.of(
      date.getYear(), date.getMonthValue(), date.getDayOfMonth());
}

Then the test:

@Test
public void givenObject_whenDecomposesJavaslangWay_whenCorrect2() {
    LocalDate date = LocalDate.of(2017, 2, 13);

    String result = Match(date).of(
      Case(LocalDate($(2016), $(3), $(13)), 
        () -> "2016-02-13"),
      Case(LocalDate($(2016), $(), $()),
        (y, m, d) -> "month " + m + " in 2016"),
      Case(LocalDate($(), $(), $()),  
        (y, m, d) -> "month " + m + " in " + y),
      Case($(), 
        () -> "(catch all)")
    );

    assertEquals("month 2 in 2017",result);
}

7. Side Effects in Pattern Matching

By default, Match acts like an expression, meaning it returns a result. However, we can force it to produce a side-effect by using the helper function run within a lambda.

It takes a method reference or a lambda expression and returns Void. 

Consider a scenario where we want to print something when an input is a single digit even integer and another thing when the input is a single digit odd number and throw an exception when the input is none of these.

The even number printer:

public void displayEven() {
    System.out.println("Input is even");
}

The odd number printer:

public void displayOdd() {
    System.out.println("Input is odd");
}

And the match function:

@Test
public void whenMatchCreatesSideEffects_thenCorrect() {
    int i = 4;
    Match(i).of(
      Case(isIn(2, 4, 6, 8), o -> run(this::displayEven)), 
      Case(isIn(1, 3, 5, 7, 9), o -> run(this::displayOdd)), 
      Case($(), o -> run(() -> {
          throw new IllegalArgumentException(String.valueOf(i));
      })));
}

Which would print:

Input is even

8. Conclusion

In this article, we have explored the most important parts of the Pattern Matching API in Javaslang. Indeed we can now write simpler and more concise code without the verbose switch and if statements, thanks to Javaslang.

To get the full source code for this article, you can check out the GitHub project.

Overview of AI Libraries in Java


1. Introduction

In this article, we’ll go over an overview of Artificial Intelligence (AI) libraries in Java.

Since this article is about libraries, we won't make any introduction to AI itself. Additionally, a theoretical background in AI is necessary in order to use the libraries presented in this article.

AI is a very wide field, so we will focus on some of the most popular fields today, like Natural Language Processing, Machine Learning, and Neural Networks. At the end, we'll mention a few interesting AI challenges where you can practice your understanding of AI.

2. Expert Systems

2.1. Apache Jena

Apache Jena is an open source Java framework for building semantic web and linked data applications from RDF data. The official website provides a detailed tutorial on how to use this framework with a quick introduction to RDF specification.

2.2. PowerLoom Knowledge Representation and Reasoning System

PowerLoom is a platform for the creation of intelligent, knowledge-based applications. It provides a Java API with detailed documentation, which can be found at this link.

2.3. d3web 

d3web is an open source reasoning engine for developing, testing and applying problem-solving knowledge onto a given problem situation, with many algorithms already included. The official website provides a quick introduction to the platform with many examples and documentation.

2.4. Eye

Eye is an open source reasoning engine for performing semi-backward reasoning.

2.5. Tweety

Tweety is a collection of Java frameworks for logical aspects of AI and knowledge representation. The official website provides documentation and many examples.

3. Neural Networks

3.1. Neuroph

Neuroph is an open source Java framework for neural network creation. Users can create networks through the provided GUI or through Java code. Neuroph provides API documentation which also explains what a neural network actually is and how it works.

3.2. Deeplearning4j

Deeplearning4j is a deep learning library for the JVM, but it also provides an API for neural network creation. The official website provides many tutorials and simple theoretical explanations for deep learning and neural networks.

4. Natural Language Processing

4.1. Apache OpenNLP

Apache OpenNLP library is a machine learning based toolkit for the processing of natural language text. The official website provides API documentation with information on how to use the library.

4.2. Stanford CoreNLP

Stanford CoreNLP is the most popular Java NLP framework which provides various tools for performing NLP tasks. The official website provides tutorials and documentation with information on how to use this framework.

5. Machine Learning

5.1. Java Machine Learning Library (Java-ML)

Java-ML is an open source Java framework which provides various machine learning algorithms specifically for programmers. The official website provides API documentation with many code samples and tutorials.

5.2. RapidMiner

RapidMiner is a data science platform which provides various machine learning algorithms through a GUI and a Java API. It has a very big community, many available tutorials, and extensive documentation.

5.3. Weka 

Weka is a collection of machine learning algorithms which can be applied directly to a dataset, through the provided GUI, or called through the provided API. As with RapidMiner, the community is very big, providing various tutorials for Weka and machine learning itself.

5.4. Encog Machine Learning Framework

Encog is a Java machine learning framework which supports many machine learning algorithms. It's developed by Jeff Heaton from Heaton Research. The official website provides documentation and many examples.

6. Genetic algorithms

6.1. Jenetics 

Jenetics is an advanced genetic algorithm library written in Java. It provides a clear separation of the genetic algorithm concepts. The official website provides documentation and a user guide for new users.

6.2. Watchmaker Framework

Watchmaker Framework is a framework for implementing genetic algorithms in Java. The official website provides documentation, examples, and additional information about the framework itself.

6.3. ECJ 23

ECJ 23 is a Java based research framework with strong algorithmic support for genetic algorithms. ECJ is developed at George Mason University’s ECLab Evolutionary Computation Laboratory. The official website provides extensive documentation and tutorials.

6.4. Java Genetic Algorithms Package (JGAP)

JGAP is a genetic programming component provided as a Java framework. The official website provides documentation and tutorials.

7. Automatic programming

7.1. Spring Roo

Spring Roo is a lightweight developer tool from Spring. It uses AspectJ mixins to provide separation of concerns during round-trip maintenance.

7.2. Acceleo

Acceleo is an open source code generator for Eclipse which generates code from EMF models defined from any metamodel (UML, SysML, etc.).

8. Challenges

Since AI is very interesting and popular topic, there are many challenges and competitions online. This is a list of some interesting competitions where you can train and test your skills:

9. Conclusion

In this article, we presented various Java AI frameworks which can be used in everyday work.

We also saw that AI is a very wide field with many frameworks and services – all of which can make your applications better and more innovative.

Guide to Spring Email


1. Overview

In this article, we’ll walk through the steps needed to send emails from both a plain vanilla Spring application as well as from a Spring Boot application, the former using the JavaMail library and the latter using the spring-boot-starter-mail dependency.

2. Maven Dependencies

First, we need to add the dependencies to our pom.xml.

2.1. Spring

For use in the plain vanilla Spring framework we’ll add:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context-support</artifactId>
    <version>4.3.5.RELEASE</version>
</dependency>

The latest version may be found here.

2.2. Spring Boot

And for Spring Boot:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-mail</artifactId>
    <version>1.4.3.RELEASE</version>
</dependency>

The latest version is available in the Maven Central repository.

3. Mail Server Properties

The interfaces and classes for Java mail support in the Spring framework are organized as follows:

  1. MailSender interface: The top-level interface that provides basic functionality for sending simple emails
  2. JavaMailSender interface: the subinterface of the above MailSender. It supports MIME messages and is mostly used in conjunction with the MimeMessageHelper class for the creation of a MimeMessage. It’s recommended to use the MimeMessagePreparator mechanism with this interface
  3. JavaMailSenderImpl class: provides an implementation of the JavaMailSender interface. It supports the MimeMessage and SimpleMailMessage
  4. SimpleMailMessage class: used to create a simple mail message including the from, to, cc, subject and text fields
  5. MimeMessagePreparator interface: provides a callback interface for the preparation of MIME messages
  6. MimeMessageHelper class: helper class for the creation of MIME messages. It offers support for images, typical mail attachments and text content in an HTML layout

In the following sections we show how these interfaces and classes are used.
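
As a quick preview of item 5, a send using the MimeMessagePreparator callback might look like the following sketch; the mailSender bean comes from the configuration shown next, and the recipient address is a placeholder:

mailSender.send(new MimeMessagePreparator() {
    @Override
    public void prepare(MimeMessage mimeMessage) throws Exception {
        MimeMessageHelper helper = new MimeMessageHelper(mimeMessage, true);
        helper.setTo("recipient@example.com");
        helper.setSubject("Greetings");
        helper.setText("Hello from Spring");
    }
});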

3.1. Spring Mail Server Properties

Mail properties that are needed to specify, e.g., the SMTP server may be defined using JavaMailSenderImpl.

For example, for Gmail this can be configured as shown below:

@Bean
public JavaMailSender getJavaMailSender() {
    JavaMailSenderImpl mailSender = new JavaMailSenderImpl();
    mailSender.setHost("smtp.gmail.com");
    mailSender.setPort(587);
    
    mailSender.setUsername("my.gmail@gmail.com");
    mailSender.setPassword("password");
    
    Properties props = mailSender.getJavaMailProperties();
    props.put("mail.transport.protocol", "smtp");
    props.put("mail.smtp.auth", "true");
    props.put("mail.smtp.starttls.enable", "true");
    props.put("mail.debug", "true");
    
    return mailSender;
}

3.2. Spring Boot Mail Server Properties

Once the dependency is in place, the next step is to specify the mail server properties in the application.properties file using the spring.mail.* namespace.

For example, the properties for Gmail SMTP Server can be specified as:

spring.mail.host=smtp.gmail.com
spring.mail.port=587
spring.mail.username=<login user to smtp server>
spring.mail.password=<login password to smtp server>
spring.mail.properties.mail.smtp.auth=true
spring.mail.properties.mail.smtp.starttls.enable=true

Some SMTP servers require a TLS connection, so the property spring.mail.properties.mail.smtp.starttls.enable is used to enable a TLS-protected connection.

3.2.1. Gmail SMTP Properties

We can send an email via Gmail SMTP server. Have a look at the documentation to see the Gmail outgoing mail SMTP server properties.

Our application.properties file is already configured to use the Gmail SMTP server (see the previous section).

Note that the password for your account should not be an ordinary password, but an application password generated for your Google account. Follow this link to see the details and to generate your Google App Password.

3.2.2. SES SMTP Properties

To send emails using Amazon SES Service, set your application.properties as we do below:

spring.mail.host=email-smtp.us-west-2.amazonaws.com
spring.mail.username=username
spring.mail.password=password
spring.mail.properties.mail.transport.protocol=smtp
spring.mail.properties.mail.smtp.port=25
spring.mail.properties.mail.smtp.auth=true
spring.mail.properties.mail.smtp.starttls.enable=true
spring.mail.properties.mail.smtp.starttls.required=true

Please be aware that Amazon requires you to verify your credentials before using them. Follow the link to verify your username and password.

4. Sending Email

Once dependency management and configuration are in place, we can use the aforementioned JavaMailSender to send an email.

Since both the plain vanilla Spring framework and the Boot version of it handle the composing and sending of e-mails in a similar way, we won't have to distinguish between the two in the subsections below.

4.1. Sending Simple Emails

Let’s first compose and send a simple email message without any attachments:

@Component
public class EmailServiceImpl implements EmailService {
 
    @Autowired
    public JavaMailSender emailSender;

    public void sendSimpleMessage(
      String to, String subject, String text) {
        ...
        SimpleMailMessage message = new SimpleMailMessage(); 
        message.setTo(to); 
        message.setSubject(subject); 
        message.setText(text);
        emailSender.send(message);
        ...
    }
}

4.2. Sending Emails with Attachments

Sometimes Spring’s simple messaging is not enough for our use cases.

For example, we want to send an order confirmation email with an invoice attached. In this case, we should use a MIME multipart message from JavaMail library instead of SimpleMailMessage. Spring supports JavaMail messaging with the org.springframework.mail.javamail.MimeMessageHelper class.

First of all, we’ll add a method to the EmailServiceImpl to send emails with attachments:

@Override
public void sendMessageWithAttachment(
  String to, String subject, String text, String pathToAttachment) {
    // ...
    
    MimeMessage message = emailSender.createMimeMessage();
     
    MimeMessageHelper helper = new MimeMessageHelper(message, true);
    
    helper.setTo(to);
    helper.setSubject(subject);
    helper.setText(text);
        
    FileSystemResource file 
      = new FileSystemResource(new File(pathToAttachment));
    helper.addAttachment("Invoice", file);

    emailSender.send(message);
    // ...
}

4.3. Simple Email Template

SimpleMailMessage class supports text formatting. We can create a template for emails by defining a template bean in our configuration:

@Bean
public SimpleMailMessage templateSimpleMessage() {
    SimpleMailMessage message = new SimpleMailMessage();
    message.setText(
      "This is the test email template for your email:\n%s\n");
    return message;
}

Now we can use this bean as a template for email and only need to provide necessary parameters to the template:

@Autowired
public SimpleMailMessage template;
...
String text = String.format(template.getText(), templateArgs);  
sendSimpleMessage(to, subject, text);

5. Handling Send Errors

JavaMail provides SendFailedException to handle situations when a message cannot be sent. But it is possible that you won't get this exception even while sending an email to an incorrect address. The reason is the following:

The SMTP protocol spec (RFC 821) specifies the 550 return code that the SMTP server should return when attempting to send an email to an incorrect address. But most public SMTP servers don't do this. Instead, they send a “delivery failed” email to your inbox, or give no feedback at all.

For example, the Gmail SMTP server sends a “delivery failed” message, and you get no exceptions in your program.

So, there are a few options you can go through to handle this case:

  1. Catch the SendFailedException – though, as described above, it may never be thrown (see the sketch after this list)
  2. Check your sender mailbox for a “delivery failed” message for some period of time. This is not straightforward, and the time period is not determined
  3. If your mail server gives no feedback at all, you can do nothing
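
For completeness, here is a sketch of option 1, with message being the SimpleMailMessage from section 4.1; note that Spring translates JavaMail failures, including SendFailedException, into its unchecked MailException hierarchy:

try {
    emailSender.send(message);
} catch (MailException e) {
    // rarely triggered by public SMTP servers, as explained above
    System.err.println("Failed to send email: " + e.getMessage());
}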

6. Conclusion

In this quick article, we showed how to set up and send emails from both a plain vanilla Spring application and a Spring Boot application.

The implementation of all these examples and code snippets can be found in the GitHub project; this is a Maven-based project, so it should be easy to import and run as it is.

Messaging With Spring AMQP


1. Overview

In this article, we will explore messaging-based communication over the AMQP protocol using the Spring AMQP framework. First, we'll cover some of the key concepts of messaging, and then we'll move on to practical examples in section 5.

1.1. Maven Dependencies

To use spring-amqp and spring-rabbit in your project, just add these dependencies:

<dependencies>
    <dependency>
        <groupId>org.springframework.amqp</groupId>
        <artifactId>spring-amqp</artifactId>
        <version>1.6.6.RELEASE</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.amqp</groupId>
        <artifactId>spring-rabbit</artifactId>
        <version>1.6.6.RELEASE</version>
    </dependency>
</dependencies>

You will find the newest versions in the Maven repository.

2. Messaging Based Communication

Messaging is a technique for inter-application communication that relies on asynchronous message-passing instead of a synchronous request-response architecture. Producers and consumers of messages are decoupled by an intermediate messaging layer known as the message broker. The message broker provides features like persistent storage of messages, message filtering, message transformation, etc.

For intercommunication between applications written in Java, the JMS (Java Message Service) API is commonly used for sending and receiving messages. But for interoperability between different vendors and platforms, we won't be able to use JMS clients and brokers. This is where AMQP comes in handy.

3. AMQP – Advanced Message Queuing Protocol

AMQP is an open standard wire specification for asynchronous messaging-based communication. It provides a description of how a message should be constructed; every byte of the transmitted data is specified.

3.1. How AMQP is Different From JMS

Since AMQP is a platform-neutral binary protocol standard, libraries can be written in different programming languages and run on different operating systems and CPU architectures.

There is no vendor-based protocol lock-in, as there is when migrating from one JMS broker to another. For more details, refer to JMS vs AMQP and Understanding AMQP. Some of the widely used AMQP brokers are RabbitMQ, OpenAMQ, and StormMQ.

3.2. AMQP Entities

AMQP entities comprise Exchanges, Queues, and Bindings:

  • Exchanges are like post offices or mailboxes and clients always publish a message to an AMQP exchange
  • Queues are bound to an exchange using a binding key. A binding is a “link” that you set up to bind a queue to an exchange
  • Messages are sent to message broker/exchange with a routing key. The exchange then distributes copies of messages to queues. Exchange provides the abstraction to achieve different messaging routing topologies like fanout, hierarchical routing, etc.

3.3. Exchange Types

Exchanges are the AMQP entities to which messages are sent. Exchanges take a message and route it to zero or more queues. There are four built-in exchange types:

  • Direct Exchange
  • Fanout Exchange
  • Topic Exchange
  • Headers Exchange

For more details, have a look at AMQP Concepts and Routing Topologies.
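
As a small illustration, the entities used later in section 5 could also be declared programmatically with spring-amqp's core classes; this is a sketch, assuming amqpAdmin is a configured RabbitAdmin:

// a topic exchange, a queue, and a binding between them
TopicExchange exchange = new TopicExchange("myExchange");
Queue queue = new Queue("myQueue");
Binding binding = BindingBuilder.bind(queue).to(exchange).with("foo.*");

amqpAdmin.declareExchange(exchange);
amqpAdmin.declareQueue(queue);
amqpAdmin.declareBinding(binding);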

4. Spring AMQP

Spring AMQP comprises two modules: spring-amqp and spring-rabbit, each represented by a jar in the distribution.

spring-amqp contains the base abstractions; here you will find the classes that represent the core AMQP model: Exchange, Queue, and Binding.

We will be covering spring-rabbit specific features below in the article.

4.1. spring-amqp Features

  • Template-based abstraction – the AmqpTemplate interface defines all basic operations for sending/receiving messages, with RabbitTemplate as the implementation
  • Publisher Confirmations/Consumer Acknowledgements support
  • AmqpAdmin tool that allows performing basic operations

4.2. spring-rabbit Features

The AMQP model and the related entities we discussed above are generic and applicable to all implementations. But there are some features which are specific to each implementation. A couple of those spring-rabbit features are explained below.

Connection Management Support – the org.springframework.amqp.rabbit.connection.ConnectionFactory interface is the central component for managing connections to the RabbitMQ broker. The responsibility of the CachingConnectionFactory implementation of the ConnectionFactory interface is to provide an instance of the org.springframework.amqp.rabbit.connection.Connection interface. Please note that spring-rabbit provides the Connection interface as a wrapper over the RabbitMQ client's com.rabbitmq.client.Connection interface.

Asynchronous Message Consumption – for asynchronous message consumption, two key concepts are a callback and a container. The callback is where your application code is integrated with the messaging system.

One of the ways to code a callback is to provide an implementation of the MessageListener interface:

public interface MessageListener {
    void onMessage(Message message);
}

In order to have a clear separation between application code and the messaging API, Spring AMQP also provides the MessageListenerAdapter. Here's an example of configuring a POJO-based listener in Java:

MessageListenerAdapter listener = new MessageListenerAdapter(somePojo);
listener.setDefaultListenerMethod("myMethod");

Now that we've seen the various options for the message-listening callback, we can turn our attention to the container. Basically, the container handles the “active” responsibilities so that the listener callback can remain passive. The container is an example of a lifecycle component: it provides methods for starting and stopping.

When configuring the container, you are essentially bridging the gap between an AMQP Queue and the MessageListener instance. By default, the listener container will start a single consumer which will receive messages from the queues.
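
Programmatically, that bridging might look like the following sketch, assuming connectionFactory is a configured CachingConnectionFactory and listener is the MessageListenerAdapter from above:

SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
container.setConnectionFactory(connectionFactory);
container.setQueueNames("myQueue");
container.setMessageListener(listener);
container.start();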

5. Send and Receive Messages Using Spring AMQP

Here are the steps to send and receive a message via Spring AMQP:

  1. Set up and start the RabbitMQ broker – RabbitMQ installation and setup are straightforward; just follow the steps mentioned here
  2. Set up the Java project – Create a Maven-based Java project with the dependencies mentioned above
  3. Implement Message Producer – We can use RabbitTemplate to send a “Hello, world!” message:
    AbstractApplicationContext ctx
      = new ClassPathXmlApplicationContext("beans.xml");
    AmqpTemplate template = ctx.getBean(RabbitTemplate.class);
    template.convertAndSend("Hello, world!");
    
  4. Implement the Message Consumer – As discussed earlier, we can implement the message consumer as a POJO:
    public class Consumer {
        public void listen(String foo) {
            System.out.println(foo);
        }
    }
  5. Wire Dependencies – We will be using the following Spring bean configuration for setting up queues, the exchange, and other entities. Most of the entries are self-explanatory. The queue named “myQueue” is bound to the topic exchange “myExchange” using “foo.*” as the binding key. The RabbitTemplate has been set up to send messages to the “myExchange” exchange with “foo.bar” as the default routing key. The listener container ensures asynchronous delivery of messages from the “myQueue” queue to the listen() method of the Consumer class:
    <rabbit:connection-factory id="connectionFactory"
      host="localhost" username="guest" password="guest" />
    
    <rabbit:template id="amqpTemplate" connection-factory="connectionFactory"
        exchange="myExchange" routing-key="foo.bar" />
    
    <rabbit:admin connection-factory="connectionFactory" />
    
    <rabbit:queue name="myQueue" />
    
    <rabbit:topic-exchange name="myExchange">
        <rabbit:bindings>
            <rabbit:binding queue="myQueue" pattern="foo.*" />
        </rabbit:bindings>
    </rabbit:topic-exchange>
    
    <rabbit:listener-container connection-factory="connectionFactory">
        <rabbit:listener ref="consumer" method="listen" queue-names="myQueue" />
    </rabbit:listener-container>
    
    <bean id="consumer" class="com.baeldung.springamqp.consumer.Consumer" />

    Note: By default, in order to connect to local RabbitMQ instance, use username “guest” and password “guest”.

  6. Run the Application: 
  • The first step is to make sure RabbitMQ is running, default port being 5672
  • Run the application by running Producer.java, executing the main() method
  • Producer sends the message to the “myExchange” with “foo.bar” as the routing key
  • Since the routing key matches the binding key of “myQueue”, the queue receives the message
  • The Consumer class, whose listen() method is the callback for “myQueue” messages, receives the message and prints it to the console

6. Conclusion

In this article, we have covered messaging-based architecture over the AMQP protocol, using Spring AMQP for communication between applications.

The complete source code and all code snippets for this article are available on the GitHub project.

Guide to Guava Multimap


1. Overview

In this article, we will look at one of the Map implementations from the Google Guava library – Multimap. It is a collection that maps keys to values, similar to java.util.Map, but in which each key may be associated with multiple values.

2. Maven Dependency

First, let’s add a dependency:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>21.0</version>
</dependency>

The latest version can be found here.

3. Multimap Implementation

In the case of Guava Multimap, if we add two values for the same key, the second value will not override the first value. Instead, we will have two values in the resulting map. Let’s look at a test case:

String key = "a-key";
Multimap<String, String> map = ArrayListMultimap.create();

map.put(key, "firstValue");
map.put(key, "secondValue");

assertEquals(2, map.size());

Printing the map‘s content will output:

{a-key=[firstValue, secondValue]}

When we get the values for the key “a-key”, we get a Collection<String> that contains “firstValue” and “secondValue” as a result:

Collection<String> values = map.get(key);

Printing values will output:

[firstValue, secondValue]

4. Compared to the Standard Map

The standard Map from the java.util package doesn't give us the ability to assign multiple values to the same key. Let's consider a simple case where we put() two values into a Map using the same key:

String key = "a-key";
Map<String, String> map = new LinkedHashMap<>();

map.put(key, "firstValue");
map.put(key, "secondValue");

assertEquals(1, map.size());

The resulting map has only one element (“secondValue”) because the second put() operation overrides the first value. Should we want to achieve the same behavior as with Guava's Multimap, we would need to create a Map with a List<String> as the value type:

String key = "a-key";
Map<String, List<String>> map = new LinkedHashMap<>();

List<String> values = map.get(key);
if (values == null) {
    values = new LinkedList<>();
    map.put(key, values);
}

values.add("firstValue");
values.add("secondValue");

assertEquals(1, map.size());

Obviously, this is not very convenient to use, and if we have such a need in our code, then Guava’s Multimap can be a better choice than java.util.Map.

One thing to notice here is that, although we have a list with two elements in it, the size() method returns 1. In a Multimap, size() returns the actual number of values stored, while keySet().size() returns the number of distinct keys.

5. Pros of Multimap

Multimaps are commonly used in places where a Map<K, Collection<V>> would otherwise have appeared. The differences include:

  • There is no need to populate an empty collection before adding an entry with put()
  • The get() method never returns null, only an empty collection (we do not need to check against null as in the Map<String, Collection<V>> test case; see the sketch after this list)
  • A key is contained in the Multimap if and only if it maps to at least one value. Any operation that causes a key to have zero associated values has the effect of removing that key from the Multimap (in Map<String, Collection<V>>, even if we remove all values from the collection, we still keep an empty Collection as a value, which is unnecessary memory overhead)
  • The total entry values count is available as size()
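
To make a couple of these points concrete, here is a short sketch (the key names are arbitrary) showing that get() on an absent key returns an empty collection, and that removing the last value of a key also removes the key itself:

Multimap<String, String> map = ArrayListMultimap.create();

// get() never returns null, only an empty collection
assertTrue(map.get("absent-key").isEmpty());

map.put("a-key", "firstValue");
map.remove("a-key", "firstValue");

// a key with zero associated values is removed from the Multimap
assertFalse(map.containsKey("a-key"));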

6. Conclusion

This article shows how and when to use Guava’s Multimap. It compares it to the standard java.util.Map and shows the pros of the Multimap.

All these examples and code snippets can be found in the GitHub project – this is a Maven project, so it should be easy to import and run as it is.

Exceptions in Java 8 Lambda Expressions


1. Overview

In Java 8, Lambda Expressions started to facilitate functional programming by providing a concise way to express behavior. However, the Functional Interfaces provided by the JDK don’t deal with exceptions very well – and the code becomes verbose and cumbersome when it comes to handling them.

In this article, we’ll explore some ways to deal with exceptions when writing lambda expressions.

2. Handling Unchecked Exceptions

First, let’s understand the problem with an example.

We have a List<Integer> and we want to divide a constant, say 50, by every element of this list and print the results:

List<Integer> integers = Arrays.asList(3, 9, 7, 6, 10, 20);
integers.forEach(i -> System.out.println(50 / i));

This expression works, but there’s one problem: if any of the elements in the list is 0, then we get an ArithmeticException: / by zero. Let’s fix that by using a traditional try-catch block, so that we log any such exception and continue execution for the next elements:

List<Integer> integers = Arrays.asList(3, 9, 7, 0, 10, 20);
integers.forEach(i -> {
    try {
        System.out.println(50 / i);
    } catch (ArithmeticException e) {
        System.err.println(
          "Arithmetic Exception occured : " + e.getMessage());
    }
});

The use of try-catch solves the problem, but the conciseness of a Lambda Expression is lost and it’s no longer a small function as it’s supposed to be.

To deal with this problem, we can write a lambda wrapper for the lambda function. Let’s look at the code to see how it works:

static Consumer<Integer> lambdaWrapper(Consumer<Integer> consumer) {
    return i -> {
        try {
            consumer.accept(i);
        } catch (ArithmeticException e) {
            System.err.println(
              "Arithmetic Exception occured : " + e.getMessage());
        }
    };
}
List<Integer> integers = Arrays.asList(3, 9, 7, 0, 10, 20);
integers.forEach(lambdaWrapper(i -> System.out.println(50 / i)));

First, we wrote a wrapper method that is responsible for handling the exception, and then we passed the lambda expression as a parameter to this method.

The wrapper method works as expected, but you may argue that it basically removes the try-catch block from the lambda expression and moves it to another method, and that it doesn’t reduce the actual number of lines of code being written.

This is true in this case, where the wrapper is specific to a particular use case, but we can make use of generics to improve this method and use it for a variety of other scenarios:

static <T, E extends Exception> Consumer<T>
  consumerWrapper(Consumer<T> consumer, Class<E> clazz) {
 
    return i -> {
        try {
            consumer.accept(i);
        } catch (Exception ex) {
            try {
                E exCast = clazz.cast(ex);
                System.err.println(
                  "Exception occured : " + exCast.getMessage());
            } catch (ClassCastException ccEx) {
                throw ex;
            }
        }
    };
}
List<Integer> integers = Arrays.asList(3, 9, 7, 0, 10, 20);
integers.forEach(
  consumerWrapper(
    i -> System.out.println(50 / i), 
    ArithmeticException.class));

As we can see, this iteration of our wrapper method takes two arguments: the lambda expression and the type of Exception to be caught. This lambda wrapper is capable of handling all data types, not just Integers, and can catch any specific type of exception rather than the superclass Exception.

Also, notice that we have changed the name of the method from lambdaWrapper to consumerWrapper. That’s because this method only handles lambda expressions for the Functional Interface of type Consumer. We can write similar wrapper methods for other Functional Interfaces like Function, BiFunction, BiConsumer and so on; one possible sibling for Function is sketched below.
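
As a sketch of how such a sibling might look for Function, following exactly the same pattern (returning null on a matching exception is an assumption of this sketch, not part of the original example):

static <T, R, E extends Exception> Function<T, R>
  functionWrapper(Function<T, R> function, Class<E> clazz) {
 
    return t -> {
        try {
            return function.apply(t);
        } catch (Exception ex) {
            try {
                E exCast = clazz.cast(ex);
                System.err.println(
                  "Exception occurred : " + exCast.getMessage());
                return null;
            } catch (ClassCastException ccEx) {
                // not the exception we were asked to handle, rethrow it
                throw ex;
            }
        }
    };
}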

3. Handling Checked Exceptions

Let’s consider the example from the previous section, but instead of dividing the integers and printing them to the console, we want to write them to a file. The operation of writing to a file throws an IOException:

static void writeToFile(Integer integer) throws IOException {
    // logic to write to file which throws IOException
}
List<Integer> integers = Arrays.asList(3, 9, 7, 0, 10, 20);
integers.forEach(i -> writeToFile(i));

On compilation, we get the following error:

java.lang.Error: Unresolved compilation problem: Unhandled exception type IOException

Since IOException is a checked exception, it must be handled. Now there are two options: we may want to throw the exception and handle it somewhere else, or handle it inside the method that has the lambda expression. Let’s look at each of them one by one.

3.1. Throwing Checked Exception from Lambda Expressions

Let’s throw the exception from the method in which the lambda expression is written, in this case, the main:

public static void main(String[] args) throws IOException {
    List<Integer> integers = Arrays.asList(3, 9, 7, 0, 10, 20);
    integers.forEach(i -> writeToFile(i));
}

Still, while compiling, we get the same error of an unhandled IOException. This is because lambda expressions are similar to Anonymous Inner Classes. In this case, the lambda expression is an implementation of the accept(T t) method from the Consumer<T> interface.

Throwing the exception from main doesn’t help: since the accept method in the parent interface doesn’t declare any exception, its implementation can’t throw one either:

Consumer<Integer> consumer = new Consumer<Integer>() {
 
    @Override
    public void accept(Integer integer) throws Exception {
        writeToFile(integer);
    }
};

The above code doesn’t compile because the implementation of accept method can’t throw any Exception.

The most straightforward way would be to use a try-catch and wrap the checked exception into an unchecked exception and rethrow:

List<Integer> integers = Arrays.asList(3, 9, 7, 0, 10, 20);
integers.forEach(i -> {
    try {
        writeToFile(i);
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
});

This approach gets the code to compile and run but has the same problem as the example in the case of unchecked exceptions in the previous section.

Since we just want to throw the exception, we need to write our own Consumer Functional Interface which can throw an exception, and then a wrapper method using it. Let’s call it ThrowingConsumer:

@FunctionalInterface
public interface ThrowingConsumer<T, E extends Exception> {
    void accept(T t) throws E;
}
static <T> Consumer<T> throwingConsumerWrapper(
  ThrowingConsumer<T, Exception> throwingConsumer) {
 
    return i -> {
        try {
            throwingConsumer.accept(i);
        } catch (Exception ex) {
            throw new RuntimeException(ex);
        }
    };
}

Now we can write our lambda expression, which can throw exceptions, without losing the conciseness:

List<Integer> integers = Arrays.asList(3, 9, 7, 0, 10, 20);
integers.forEach(throwingConsumerWrapper(i -> writeToFile(i)));

3.2. Handling a Checked Exception in Lambda Expression

In this final section, we will modify the wrapper to handle checked exceptions. Since our ThrowingConsumer interface uses generics, we can handle any specific exception:

static <T, E extends Exception> Consumer<T> handlingConsumerWrapper(
  ThrowingConsumer<T, E> throwingConsumer, Class<E> exceptionClass) {
 
    return i -> {
        try {
            throwingConsumer.accept(i);
        } catch (Exception ex) {
            try {
                E exCast = exceptionClass.cast(ex);
                System.err.println(
                  "Exception occured : " + exCast.getMessage());
            } catch (ClassCastException ccEx) {
                throw new RuntimeException(ex);
            }
        }
    };
}

We can use this wrapper in our example to handle only the IOException and throw any other checked exception by wrapping it in an unchecked exception:

List<Integer> integers = Arrays.asList(3, 9, 7, 0, 10, 20);
integers.forEach(handlingConsumerWrapper(
  i -> writeToFile(i), IOException.class));

Similar to the case of unchecked exceptions, throwing siblings for other Functional Interfaces, like ThrowingFunction, ThrowingBiFunction, ThrowingBiConsumer etc., can be written along with their corresponding wrapper methods.

4. Conclusion

In this article, we covered how to handle a specific exception in lambda expressions without losing the conciseness, by using wrapper methods. We also learned how to write throwing alternatives for the Functional Interfaces present in the JDK, to either rethrow a checked exception wrapped in an unchecked exception or to handle it.

The complete source code of the Functional Interfaces and wrapper methods can be downloaded from here and the test classes from here, over on GitHub.

If you are looking for the out-of-the-box working solutions, Javaslang and ThrowingFunction are worth checking out.


JAX-RS is just an API!


1. Overview

The REST paradigm has been around for quite a few years now and it’s still getting a lot of attention.

A RESTful API can be implemented in Java in a number of ways: you can use Spring, JAX-RS, or you might just write your own bare servlets if you’re good and brave enough. All you need is the ability to expose HTTP methods – the rest is all about how you organize them and how you guide the client when making calls to your API.

As you can make out from the title, this article will cover JAX-RS. But what does “just an API” mean? It means that the focus here is on clarifying the confusion between JAX-RS and its implementations and on offering an example of what a proper JAX-RS webapp looks like.

2. Inclusion in Java EE

JAX-RS is nothing more than a specification, a set of interfaces and annotations offered by Java EE. And then, of course, we have the implementations; some of the better known are RESTEasy and Jersey.

Also, if you ever decide to build a JEE-compliant application server, the guys from Oracle will tell you that, among many other things, your server should provide a JAX-RS implementation for the deployed apps to use. That’s why it’s called Java Enterprise Edition Platform.

Another good example of specification and implementation is JPA and Hibernate.

2.1. Lightweight Wars

So how does all this help us, the developers? The help is that our deployables can and should be very thin, letting the application server provide the needed libraries. This applies when developing a RESTful API as well: the final artifact should not contain any information about the JAX-RS implementation used.

Sure, we can provide the implementation (here‘s a tutorial for RESTeasy). But then we cannot call our application “Java EE app” anymore. If tomorrow someone comes and says “Ok, time to switch to Glassfish or Payara, JBoss became too expensive!“, we might be able to do it, but it won’t be an easy job.

If we provide our own implementation, we have to make sure the server knows to exclude its own – this usually happens by having a proprietary XML file inside the deployable. Needless to say, such a file would contain all sorts of tags and instructions that nobody knows anything about, except the developers who left the company three years ago.

2.2. Always Know Your Server

We said so far that we should take advantage of the platform that we’re offered.

Before deciding on a server to use, we should check what JAX-RS implementation (name, vendor, version and known bugs) it provides, at least for production environments. For instance, GlassFish comes with Jersey, while WildFly and JBoss come with RESTEasy.

This, of course, means a little time spent on research, but it’s supposed to be done only once, at the beginning of the project or when migrating it to another server.

3. An Example

If you want to start playing with JAX-RS, the shortest path is: have a Maven webapp project with the following dependency in the pom.xml:

<dependency>
    <groupId>javax</groupId>
    <artifactId>javaee-api</artifactId>
    <version>7.0</version>
    <scope>provided</scope>
</dependency>

We’re using Java EE 7 since there are already plenty of application servers implementing it. That API jar contains the annotations that you need to use, located in the package javax.ws.rs. Why is the scope “provided”? Because this jar doesn’t need to be in the final build either – we need it at compile time, and it is provided by the server at runtime.

After the dependency is added, we first have to write the entry class: an empty class which extends javax.ws.rs.core.Application and is annotated with javax.ws.rs.ApplicationPath:

@ApplicationPath("/api")
public class RestApplication extends Application {
}

We defined the entry path as being /api. Whatever other paths we declare for our resources, they will be prefixed with /api.

Next, let’s see a resource:

@Path("/notifications")
public class NotificationsResource {
    @GET
    @Path("/ping")
    public Response ping() {
        return Response.ok().entity("Service online").build();
    }

    @GET
    @Path("/get/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public Response getNotification(@PathParam("id") int id) {
        return Response.ok()
          .entity(new Notification(id, "john", "test notification"))
          .build();
    }

    @POST
    @Path("/post/")
    @Consumes(MediaType.APPLICATION_JSON)
    @Produces(MediaType.APPLICATION_JSON)
    public Response postNotification(Notification notification) {
        return Response.status(201).entity(notification).build();
    }
}

We have a simple ping endpoint to call and check if our app is running, plus a GET and a POST for a Notification (this is just a POJO with attributes plus getters and setters; a sketch of it follows below).
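
For reference, here is a minimal sketch of what such a Notification POJO might look like; the field names mirror the JSON used in the curl commands below, and the no-argument constructor is needed for JSON unmarshalling:

public class Notification {

    private int id;
    private String username;
    private String text;

    public Notification() {
    }

    public Notification(int id, String username, String text) {
        this.id = id;
        this.username = username;
        this.text = text;
    }

    // standard getters and setters
}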

Deploy this war on any application server implementing JEE7 and the following commands will work:

curl http://localhost:8080/simple-jaxrs-ex/api/notifications/ping/

curl http://localhost:8080/simple-jaxrs-ex/api/notifications/get/1

curl -X POST -d '{"id":23,"text":"lorem ipsum","username":"johana"}' 
  http://localhost:8080/simple-jaxrs-ex/api/notifications/post/ 
  --header "Content-Type:application/json"

Where simple-jaxrs-ex is the context-root of the webapp.

This was tested with GlassFish 4.1.0 and WildFly 9.0.1.Final. Please note that the last two commands won’t work with GlassFish 4.1.1, because of this bug. It is apparently a known issue in this GlassFish version regarding the serialization of JSON (if you have to use this server version, you’ll have to manage JSON marshaling on your own).

4. Conclusion

At the end of this article, just keep in mind that JAX-RS is a powerful API, and most (if not all) of the stuff that you need is already implemented by your application server. No need to turn your deployable into an unmanageable pile of libraries.

This write-up presents a simple example, and things might get more complicated. For instance, you might want to write your own marshalers. When that’s needed, look for tutorials that solve your problem with JAX-RS, not with Jersey, RESTEasy or another concrete implementation. It’s very likely that your problem can be solved with one or two annotations.

Guide to Spring Retry


1. Overview

Spring Retry provides an ability to automatically re-invoke a failed operation. This is helpful where the errors may be transient in nature (like a momentary network glitch). Spring Retry provides declarative control of the process and policy-based behavior that is easy to extend and customize.

In this article, we’ll see how to use Spring Retry to implement retry logic in Spring applications. We’ll also configure listeners to receive additional callbacks.

2. Maven Dependencies

Let’s begin by adding the dependency into our pom.xml:

<dependency>
    <groupId>org.springframework.retry</groupId>
    <artifactId>spring-retry</artifactId>
    <version>1.1.5.RELEASE</version>
</dependency>

We can check the latest version of spring-retry in Maven Central.

3. Enabling Spring Retry

To enable Spring Retry in an application, we need to add the @EnableRetry annotation to our @Configuration class:

@Configuration
@EnableRetry
public class AppConfig { ... }

4. Retry with Annotations

We can make a method call to be retried on failure by using annotations.

4.1. @Retryable

To add retry functionality to methods, @Retryable can be used:

@Service
public interface MyService {
    @Retryable(
      value = { SQLException.class }, 
      maxAttempts = 2,
      backoff = @Backoff(delay = 5000))
    void retryService(String sql) throws SQLException;
    ...
}

Here, the retry behavior is customized using the attributes of @Retryable. In this example, a retry will be attempted only if the method throws an SQLException. The method will be attempted at most twice, with a delay of 5000 milliseconds between attempts.

If @Retryable is used without any attributes and the method fails with an exception, it will be attempted up to three times, with a delay of one second between attempts.

4.2. @Recover

The @Recover annotation is used to define a separate recovery method when a @Retryable method fails with a specified exception:

@Service
public interface MyService {
    ...
    @Recover
    void recover(SQLException e, String sql);
}

So, if the retryService() method throws an SQLException, the recover() method will be called. A suitable recovery handler has its first parameter of type Throwable (optional) and the same return type as the @Retryable method; subsequent arguments are populated from the argument list of the failed method, in the same order.

5. RetryTemplate

5.1. RetryOperations

Spring Retry provides the RetryOperations interface, which supplies a set of execute() methods:

public interface RetryOperations {
    <T, E extends Throwable> T execute(RetryCallback<T, E> retryCallback)
      throws E;

    ...
}

The RetryCallback, which is a parameter of execute(), is an interface that allows the insertion of business logic that needs to be retried upon failure:

public interface RetryCallback<T, E extends Throwable> {
    T doWithRetry(RetryContext context) throws E;
}

5.2. RetryTemplate Configuration 

The RetryTemplate is an implementation of RetryOperations. Let’s configure a RetryTemplate bean in our @Configuration class:

@Configuration
public class AppConfig {
    //...
    @Bean
    public RetryTemplate retryTemplate() {
        RetryTemplate retryTemplate = new RetryTemplate();
		
        FixedBackOffPolicy fixedBackOffPolicy = new FixedBackOffPolicy();
        fixedBackOffPolicy.setBackOffPeriod(2000L);
        retryTemplate.setBackOffPolicy(fixedBackOffPolicy);

        SimpleRetryPolicy retryPolicy = new SimpleRetryPolicy();
        retryPolicy.setMaxAttempts(2);
        retryTemplate.setRetryPolicy(retryPolicy);
		
        return retryTemplate;
    }
}

RetryPolicy determines when an operation should be retried. A SimpleRetryPolicy is used to retry a fixed number of times.

BackOffPolicy is used to control back off between retry attempts. A FixedBackOffPolicy pauses for a fixed period of time before continuing.

5.3. Using the RetryTemplate

To run code with retry handling, we can call the retryTemplate.execute() method:

retryTemplate.execute(new RetryCallback<Void, RuntimeException>() {
    @Override
    public Void doWithRetry(RetryContext arg0) {
        myService.templateRetryService();
        ...
    }
});

The same could be achieved using a lambda expression instead of an anonymous class:

retryTemplate.execute(arg0 -> {
    myService.templateRetryService();
    return null;
});

6. XML Configuration

Spring Retry can be configured by XML using the Spring AOP namespace.

6.1. Adding XML File

In the classpath, let’s add retryadvice.xml:

...
<beans>
    <aop:config>
        <aop:pointcut id="transactional"
          expression="execution(* MyService.xmlRetryService(..))" />
        <aop:advisor pointcut-ref="transactional"
          advice-ref="taskRetryAdvice" order="-1" />
    </aop:config>

    <bean id="taskRetryAdvice"
      class="org.springframework.retry.interceptor.
        RetryOperationsInterceptor">
        <property name="RetryOperations" ref="taskRetryTemplate" />
    </bean>

    <bean id="taskRetryTemplate"
      class="org.springframework.retry.support.RetryTemplate">
        <property name="retryPolicy" ref="taskRetryPolicy" />
        <property name="backOffPolicy" ref="exponentialBackOffPolicy" />
    </bean>

    <bean id="taskRetryPolicy"
        class="org.springframework.retry.policy.SimpleRetryPolicy">
        <constructor-arg index="0" value="5" />
        <constructor-arg index="1">
            <map>
                <entry key="java.lang.RuntimeException" value="true" />
            </map>
        </constructor-arg>
    </bean>

    <bean id="exponentialBackOffPolicy"
      class="org.springframework.retry.backoff.ExponentialBackOffPolicy">
        <property name="initialInterval" value="300">
        </property>
        <property name="maxInterval" value="30000">
        </property>
        <property name="multiplier" value="2.0">
        </property>
    </bean>
</beans>
...

This example uses a custom RetryTemplate inside the interceptor of the xmlRetryService method.

6.2. Using XML Configuration

Import retryadvice.xml from the classpath and enable @AspectJ support:

@Configuration
@EnableRetry
@EnableAspectJAutoProxy
@ImportResource("classpath:/retryadvice.xml")
public class AppConfig { ... }

7. Listeners

Listeners provide additional callbacks upon retries. They can be used for various cross-cutting concerns across different retries.

7.1. Adding Callbacks

The callbacks are provided in a RetryListener interface:

public class DefaultListenerSupport extends RetryListenerSupport {
    @Override
    public <T, E extends Throwable> void close(RetryContext context,
      RetryCallback<T, E> callback, Throwable throwable) {
        logger.info("onClose);
        ...
        super.close(context, callback, throwable);
    }

    @Override
    public <T, E extends Throwable> void onError(RetryContext context,
      RetryCallback<T, E> callback, Throwable throwable) {
        logger.info("onError"); 
        ...
        super.onError(context, callback, throwable);
    }

    @Override
    public <T, E extends Throwable> boolean open(RetryContext context,
      RetryCallback<T, E> callback) {
        logger.info("onOpen);
        ...
        return super.open(context, callback);
    }
}

The open and close callbacks come before and after the entire retry, and onError applies to the individual RetryCallback calls.

7.2. Registering the Listener

Next, we register our listener (DefaultListenerSupport) to our RetryTemplate bean:

@Configuration
public class AppConfig {
    ...

    @Bean
    public RetryTemplate retryTemplate() {
        RetryTemplate retryTemplate = new RetryTemplate();
        ...
        retryTemplate.registerListener(new DefaultListenerSupport());
        return retryTemplate;
    }
}

8. Testing the Results

Let’s verify the results:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(
  classes = AppConfig.class,
  loader = AnnotationConfigContextLoader.class)
public class SpringRetryTest {

    @Autowired
    private MyService myService;

    @Autowired
    private RetryTemplate retryTemplate;

    @Test(expected = RuntimeException.class)
    public void givenTemplateRetryService_whenCallWithException_thenRetry() {
        retryTemplate.execute(arg0 -> {
            myService.templateRetryService();
            return null;
        });
    }
}
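
For this test to exercise the retry path, templateRetryService() needs to fail. A minimal sketch of such an implementation (the class name and log message mirror the log output below; the rest is an assumption):

@Service
public class MyServiceImpl implements MyService {

    private static final Logger logger
      = LoggerFactory.getLogger(MyServiceImpl.class);

    @Override
    public void templateRetryService() {
        logger.info("throw RuntimeException in method templateRetryService()");
        throw new RuntimeException("retry this operation");
    }

    // ...
}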

When we run the test case, the log text below means that we have successfully configured the RetryTemplate and the listener:

2017-01-09 20:04:10 [main] INFO  o.b.s.DefaultListenerSupport - onOpen 
2017-01-09 20:04:10 [main] INFO  o.baeldung.springretry.MyServiceImpl
- throw RuntimeException in method templateRetryService() 
2017-01-09 20:04:10 [main] INFO  o.b.s.DefaultListenerSupport - onError 
2017-01-09 20:04:12 [main] INFO  o.baeldung.springretry.MyServiceImpl
- throw RuntimeException in method templateRetryService() 
2017-01-09 20:04:12 [main] INFO  o.b.s.DefaultListenerSupport - onError 
2017-01-09 20:04:12 [main] INFO  o.b.s.DefaultListenerSupport - onClose

9. Conclusion

In this article, we have introduced Spring Retry. We have seen examples of retry using annotations and RetryTemplate. We have then configured additional callbacks using listeners.

You can find the source code for this article over on GitHub.

Guide to the Guava BiMap


1. Overview

In this tutorial, we’ll show how to use Google Guava’s BiMap interface and its multiple implementations.

A BiMap (or “bidirectional map”) is a special kind of map which maintains an inverse view of the map, while ensuring that no duplicate values are present and that a value can always be used safely to get the key back.

The basic implementation of BiMap is HashBiMap, which internally makes use of two Maps, one for the key-to-value mapping and the other for the value-to-key mapping.

2. Google Guava’s BiMap

Let’s have a look at how to use the BiMap class.

We’ll start by adding the Google Guava library dependency in the pom.xml:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>21.0</version>
</dependency>

The latest version of the dependency can be checked here.

3. Creating a BiMap

We can create an instance of a BiMap in multiple ways:

  • If you are going to deal with a custom Java object, use the create method from the class HashBiMap:
BiMap<String, String> capitalCountryBiMap = HashBiMap.create();
  • If we already have an existing map, we can create an instance of a BiMap using an overloaded version of the create method from the class HashBiMap:
Map<String, String> capitalCountryBiMap = new HashMap<>();
//...
HashBiMap.create(capitalCountryBiMap);
  • If you are going to deal with a key of type Enum, use the create method from the class EnumHashBiMap:
BiMap<MyEnum, String> operationStringBiMap = EnumHashBiMap.create(MyEnum.class);
  • If you intend to create an immutable map, use the ImmutableBiMap class (which follows a builder pattern):
BiMap<String, String> capitalCountryBiMap
  = new ImmutableBiMap.Builder<>()
    .put("New Delhi", "India")
    .build();

4. Using the BiMap

Let’s start with a simple example showing the usage of BiMap, where we can get a key based on a value and a value based on a key:

@Test
public void givenBiMap_whenQueryByValue_shouldReturnKey() {
    BiMap<String, String> capitalCountryBiMap = HashBiMap.create();
    capitalCountryBiMap.put("New Delhi", "India");
    capitalCountryBiMap.put("Washington, D.C.", "USA");
    capitalCountryBiMap.put("Moscow", "Russia");

    String keyFromBiMap = capitalCountryBiMap.inverse().get("Russia");
    String valueFromBiMap = capitalCountryBiMap.get("Washington, D.C.");
 
    assertEquals("Moscow", keyFromBiMap);
    assertEquals("USA", valueFromBiMap);
}

Note: the inverse method above returns the inverse view of the BiMap, which maps each of the BiMap’s values to its associated key.
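
It’s worth noting that the inverse view is backed by the original BiMap rather than being a copy, so changes made through either one are visible through the other. A quick sketch, using assumed capital/country values:

BiMap<String, String> capitalCountryBiMap = HashBiMap.create();
capitalCountryBiMap.put("Moscow", "Russia");

BiMap<String, String> countryCapitalBiMap = capitalCountryBiMap.inverse();
countryCapitalBiMap.put("India", "New Delhi");

// the mapping added through the inverse view is visible in the original
assertEquals("India", capitalCountryBiMap.get("New Delhi"));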

A BiMap throws an IllegalArgumentException when we try to store a duplicate value.

Let’s see an example of the same:

@Test(expected = IllegalArgumentException.class)
public void givenBiMap_whenSameValueIsPresent_shouldThrowException() {
    BiMap<String, String> capitalCountryBiMap = HashBiMap.create();
    capitalCountryBiMap.put("Mumbai", "India");
    capitalCountryBiMap.put("Washington, D.C.", "USA");
    capitalCountryBiMap.put("Moscow", "Russia");
    capitalCountryBiMap.put("New Delhi", "India");
}

If we wish to override the value already present in BiMap, we can make use of the forcePut method:

@Test
public void givenSameValueIsPresent_whenForcePut_completesSuccessfully() {
    BiMap<String, String> capitalCountryBiMap = HashBiMap.create();
    capitalCountryBiMap.put("Mumbai", "India");
    capitalCountryBiMap.put("Washington, D.C.", "USA");
    capitalCountryBiMap.put("Moscow", "Russia");
    capitalCountryBiMap.forcePut("New Delhi", "India");

    assertEquals("USA", capitalCountryBiMap.get("Washington, D.C."));
    assertEquals("Washington, D.C.", capitalCountryBiMap.inverse().get("USA"));
}

5. Conclusion

In this concise tutorial, we illustrated examples of using the BiMap from the Guava library. It is predominantly used to get a key based on a value in the map.

The implementation of these examples can be found in the GitHub project – this is a Maven-based project, so it should be easy to import and run as is.

Java Web Weekly, Issue 160


Lots of solid reactive focused talks this week.

Here we go…

1. Spring and Java

>> Java 10 Could Bring Upgraded Lambdas [infoq.com]

A short report about a cool possible enhancement of Lambda Expressions in Java 10.

>> Reflection vs Encapsulation [blog.codefx.org]

The introduction of modularity to the JVM casts a new light on the age-old Reflection vs Encapsulation discussions.

>> Open your classes and methods in Kotlin [blog.frankel.ch]

Kotlin’s features can sometimes be quite unhandy when working with Spring Boot.

>> Web frameworks and how to survive them [blog.codecentric.de]

Most web frameworks don’t stand the test of time – here are just a couple of reasons why that’s usually the case.

>> How to TDD FizzBuzz with JUnit Theories [opencredo.com]

This is how you overengineer FizzBuzz 🙂

>> Ultimate Guide to JPQL Queries with JPA and Hibernate [thoughts-on-java.org]

A comprehensive guide to JPQL with JPA / Hibernate.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Deploying Pull Requests with Docker [blog.codecentric.de]

A good way to make your Pull Requests easily testable, by making good use of Docker containerization.

>> A Probably Incomplete, Comprehensive Guide to the Many Different Ways to JOIN Tables in SQL [blog.jooq.org]

A solid reference to JOINing in SQL.

>> Microservice using AWS API Gateway, AWS Lambda and Couchbase [blog.couchbase.com]

Short tutorial showing how to create a less standard style of microservice – using AWS API Gateway, AWS Lambda and Couchbase.

>> Flyway Tutorial – Managing Database Migrations [blog.codecentric.de]

Quick write-up showcasing Flyway – a database migration tool that uses immutable migration files.

>> Property Based Testing with Javaslang [sitepoint.com]

It turns out you can do Property Testing with Javaslang too 🙂

Also worth reading:

3. Musings

>> Types and Tests  [blog.cleancoder.com]

Continuation of the discussion about types and pros/cons of Static Typing.

>> Technodiversity [pointersgonewild.com]

Looks like technological diversity has more ‘pros’ than ‘cons’. Definitely an interesting read.

>> Couchbase Customer Advisory Note – Security  [blog.couchbase.com]

A few security “rules of thumb” for Couchbase users.

Considering just how many production instances seem to be wide-open – this one’s surprisingly relevant. And not just for Couchbase.

>> How to Turn Requirements into User Stories [daedtech.com]

A short guide to the effective conversion of requirements into User Stories.

Throughout my career, this has been an interesting skill to track, because it looks deceptively simple, but it’s generally quite the opposite.

>> 5 Code Review Tricks the Experts Use – Based on 3.2 Million Lines of Code [blog.takipi.com]

The title says everything 🙂

>> Top Heavy Department Growth [daedtech.com]

A few interesting insights about how organizations grow.

There are a few good ways to organically grow an organization well, and a whole lot of not so good ways as well.

>> Forget ISO-8859-1 [techblog.bozho.net]

Arguments for sticking to UTF-8.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Optimist employees [dilbert.com]

>> CEO Wisdom [dilbert.com]

>> Why are you wearing gloves? [dilbert.com]

5. Pick of the Week

>> Laws of 10x found everywhere. For good reason? [asmartbear.com]

Iterable to Stream in Java


1. Overview

In this short tutorial, let’s convert a Java Iterable object into a Stream and perform some standard operations on it.

2. Converting Iterable to Stream

The Iterable interface is designed with generality in mind and does not provide any stream() method on its own.

Simply put, we can pass it to the StreamSupport.stream() method and get a Stream from the given Iterable instance.

Let’s consider our Iterable instance:

Iterable<String> iterable 
  = Arrays.asList("Testing", "Iterable", "conversion", "to", "Stream");

And here’s how we can convert this Iterable instance into a Stream:

StreamSupport.stream(iterable.spliterator(), false);

Note that the second param in StreamSupport.stream() determines whether the resulting Stream should be parallel or sequential; we set it to true for a parallel Stream.

Now let’s test our implementation:

@Test
public void givenIterable_whenConvertedToStream_thenNotNull() {
    Iterable<String> iterable 
      = Arrays.asList("Testing", "Iterable", "conversion", "to", "Stream");

    Assert.assertNotNull(StreamSupport.stream(iterable.spliterator(), false));
}

Also, a quick side-note – streams are not reusable, while an Iterable is; the latter also provides a spliterator() method, which returns a java.util.Spliterator instance over the elements described by the given Iterable.
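
A quick sketch of that difference – a consumed Stream cannot be reused, but we can always obtain a fresh Stream from the same Iterable:

Iterable<String> iterable = Arrays.asList("Testing", "Iterable");
Stream<String> stream = StreamSupport.stream(iterable.spliterator(), false);

stream.forEach(System.out::println);
// stream.forEach(System.out::println); // would throw IllegalStateException

// the Iterable can be converted again, as many times as needed
StreamSupport.stream(iterable.spliterator(), false)
  .forEach(System.out::println);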

3. Performing Stream Operation

Let’s perform a simple stream operation:

@Test
public void whenConvertedToList_thenCorrect() {
    Iterable<String> iterable 
      = Arrays.asList("Testing", "Iterable", "conversion", "to", "Stream");

    List<String> result = StreamSupport.stream(iterable.spliterator(), false)
      .map(String::toUpperCase)
      .collect(Collectors.toList());

    assertThat(
      result, contains("TESTING", "ITERABLE", "CONVERSION", "TO", "STREAM"));
}

4. Conclusion

This simple tutorial shows how you can convert an Iterable instance into a Stream instance and perform standard operations on it, just like you would have done for any other Collection instance.

The implementation of all the code snippets can be found in the Github project.

Concurrency with LMAX Disruptor – An Introduction


1. Overview

This article introduces the LMAX Disruptor and talks about how it helps to achieve software concurrency with low latency. We will also see a basic usage of the Disruptor library.

2. What is a Disruptor?

Disruptor is an open source Java library written by LMAX. It is a concurrent programming framework for processing a large number of transactions with low latency (and without the complexities of concurrent code). The performance optimization is achieved by a software design that exploits the efficiency of the underlying hardware.

2.1. Mechanical Sympathy

Let’s start with the core concept of mechanical sympathy – that is all about understanding how the underlying hardware operates and programming in a way that best works with that hardware.

For example, let’s see how CPU and memory organization can impact software performance. The CPU has several layers of cache between it and main memory. When the CPU is performing an operation, it first looks in L1 for the data, then L2, then L3, and finally, the main memory.  The further it has to go, the longer the operation will take.

If the same operation is performed on a piece of data multiple times (for example, a loop counter), it makes sense to load that data into a place very close to the CPU.

Some indicative figures for the cost of cache misses:

Latency from CPU to    CPU cycles       Time
Main memory            Multiple         ~60-80 ns
L3 cache               ~40-45 cycles    ~15 ns
L2 cache               ~10 cycles       ~3 ns
L1 cache               ~3-4 cycles      ~1 ns
Register               1 cycle          Very, very quick

2.2. Why not Queues

Queue implementations tend to have write contention on the head, tail, and size variables. Queues are typically always close to full or close to empty due to the differences in pace between consumers and producers.  They very rarely operate in a balanced middle ground where the rate of production and consumption is evenly matched.

To deal with the write contention, a queue often uses locks, which can cause a context switch to the kernel. When this happens the processor involved is likely to lose the data in its caches.

To get the best caching behavior, the design should have only one core writing to any memory location (multiple readers are fine, as processors often use special high-speed links between their caches). Queues fail the one-writer principle.

If two separate threads are writing to two different values, each core invalidates the cache line of the other (data is transferred between main memory and cache in blocks of fixed size, called cache lines).  That is a write-contention between the two threads even though they’re writing to two different variables. This is called false sharing, because every time the head is accessed, the tail gets accessed too, and vice versa.

2.3. How the Disruptor Works

(Figure: ring buffer overview and its API)

The Disruptor has an array-based circular data structure (ring buffer). It is an array that has a pointer to the next available slot, and it is filled with pre-allocated transfer objects. Producers and consumers perform writing and reading of data to the ring without locking or contention.

In a Disruptor, all events are published to all consumers (multicast), for parallel consumption through separate downstream queues. Due to parallel processing by consumers, it is necessary to coordinate dependencies between the consumers (dependency graph).

Each producer and consumer has a sequence counter to indicate which slot in the buffer it is currently working on. Each one can write only its own sequence counter but can read the others’. Producers and consumers read the counters to make sure that the slot they want to write to is available, without taking any locks.

3. Using the Disruptor Library

3.1. Maven Dependency

Let’s start by adding Disruptor library dependency in pom.xml:

<dependency>
    <groupId>com.lmax</groupId>
    <artifactId>disruptor</artifactId>
    <version>3.3.6</version>
</dependency>

The latest version of the dependency can be checked here.

3.2. Defining an Event

Let’s define the event that carries the data:

public static class ValueEvent {
    private int value;
    public final static EventFactory<ValueEvent> EVENT_FACTORY 
      = () -> new ValueEvent();

    // standard getters and setters
}

The EventFactory lets the Disruptor preallocate the events.

3.3. Consumer

Consumers read data from the ring buffer. Let’s define a consumer that will handle the events:

public class SingleEventPrintConsumer {
    ...

    public EventHandler<ValueEvent>[] getEventHandler() {
        EventHandler<ValueEvent> eventHandler 
          = (event, sequence, endOfBatch) 
            -> print(event.getValue(), sequence);
        return new EventHandler[] { eventHandler };
    }
 
    private void print(int id, long sequenceId) {
        logger.info("Id is " + id 
          + " sequence id that was used is " + sequenceId);
    }
}

In our example, the consumer is just printing to a log.

3.4. Constructing the Disruptor

Construct the Disruptor:

ThreadFactory threadFactory = DaemonThreadFactory.INSTANCE;

WaitStrategy waitStrategy = new BusySpinWaitStrategy();
Disruptor<ValueEvent> disruptor 
  = new Disruptor<>(
    ValueEvent.EVENT_FACTORY, 
    16, 
    threadFactory, 
    ProducerType.SINGLE, 
    waitStrategy);

In the constructor of Disruptor, the following are defined:

  • Event Factory – Responsible for generating objects which will be stored in ring buffer during initialization
  • The size of the Ring Buffer – We have defined 16 as the size of the ring buffer. It has to be a power of 2, else an exception is thrown during initialization. This is important because it makes it easy to perform most of the operations using fast bitwise operators, e.g. the mod operation used to wrap around the ring
  • Thread Factory – Factory to create threads for event processors
  • Producer Type – Specifies whether we will have single or multiple producers
  • Waiting strategy – Defines how we would like to handle a slow subscriber that doesn’t keep up with the producer’s pace

Connect the consumer handler:

disruptor.handleEventsWith(getEventHandler());

It is possible to supply multiple consumers to the Disruptor to handle the data produced by the producer. In the example above, we have just one consumer, a.k.a. an event handler.

3.5. Starting the Disruptor

To start the Disruptor:

RingBuffer<ValueEvent> ringBuffer = disruptor.start();

3.6. Producing and Publishing Events

Producers place the data in the ring buffer in a sequence. Producers have to be aware of the next available slot so that they don’t overwrite data that is not yet consumed.

Use the RingBuffer from Disruptor for publishing:

for (int eventCount = 0; eventCount < 32; eventCount++) {
    long sequenceId = ringBuffer.next();
    ValueEvent valueEvent = ringBuffer.get(sequenceId);
    valueEvent.setValue(eventCount);
    ringBuffer.publish(sequenceId);
}

Here, the producer is producing and publishing items in sequence. It is important to note that the Disruptor works similarly to a two-phase commit protocol: the producer first claims a new sequenceId, then publishes it; the next call should yield sequenceId + 1 as the next sequenceId.
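
Once all events have been published and consumed, the Disruptor can be stopped. As a sketch of a typical teardown, shutdown() waits until the events already in the ring buffer have been processed, while halt() stops the event processors immediately:

// waits for already-published events to be processed, then stops the workers
disruptor.shutdown();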

4. Conclusion

In this tutorial, we have seen what a Disruptor is and how it achieves concurrency with low latency. We have seen the concept of mechanical sympathy and how it may be exploited to achieve low latency. We have then seen an example using the Disruptor library.

The example code can be found in the GitHub project – this is a Maven based project, so it should be easy to import and run as is.

A Guide to LinkedHashMap in Java


1. Overview

In this article, we are going to explore the internal implementation of the LinkedHashMap class. LinkedHashMap is a common implementation of the Map interface.

This particular implementation is a subclass of HashMap and therefore shares the core building blocks of the HashMap implementation. As a result, it’s highly recommended to brush up on that before proceeding with this article.

2. LinkedHashMap vs HashMap

The LinkedHashMap class is very similar to HashMap in most aspects. However, the linked hash map is based on both a hash table and a linked list to enhance the functionality of the hash map.

It maintains a doubly-linked list running through all its entries in addition to an underlying array of default size 16.

To maintain the order of elements, the linked hashmap modifies the Map.Entry class of HashMap by adding pointers to the next and previous entries:

static class Entry<K,V> extends HashMap.Node<K,V> {
    Entry<K,V> before, after;
    Entry(int hash, K key, V value, Node<K,V> next) {
        super(hash, key, value, next);
    }
}

Notice that the Entry class simply adds two pointers, before and after, which enable it to hook itself into the linked list. Aside from that, it uses the Entry class implementation of HashMap.

Finally, remember that this linked list defines the order of iteration, which by default is the order of insertion of elements (insertion-order).

3. Insertion-Order LinkedHashMap

Let’s have a look at a linked hash map instance which orders its entries according to how they’re inserted into the map. It also guarantees that this order will be maintained throughout the life cycle of the map:

@Test
public void givenLinkedHashMap_whenGetsOrderedKeyset_thenCorrect() {
    LinkedHashMap<Integer, String> map = new LinkedHashMap<>();
    map.put(1, null);
    map.put(2, null);
    map.put(3, null);
    map.put(4, null);
    map.put(5, null);

    Set<Integer> keys = map.keySet();
    Integer[] arr = keys.toArray(new Integer[0]);

    for (int i = 0; i < arr.length; i++) {
        assertEquals(new Integer(i + 1), arr[i]);
    }
}

Here, we’re simply making a rudimentary, non-conclusive test on the ordering of entries in the linked hash map.

We can guarantee that this test will always pass as the insertion order will always be maintained. We cannot make the same guarantee for a HashMap.

This attribute can be of great advantage in an API that receives any map, makes a copy to manipulate, and returns it to the calling code. If the client needs the returned map to be ordered the same way as before calling the API, then a linked hashmap is the way to go.

Insertion order is not affected if a key is re-inserted into the map.

4. Access-Order LinkedHashMap

LinkedHashMap provides a special constructor which enables us to specify, among custom load factor (LF) and initial capacity, a different ordering mechanism/strategy called access-order:

LinkedHashMap<Integer, String> map = new LinkedHashMap<>(16, .75f, true);

The first parameter is the initial capacity, followed by the load factor, and the last param is the ordering mode. By passing in true, we turned on access-order, whereas the default is insertion-order.

This mechanism ensures that the order of iteration of elements is the order in which the elements were last accessed, from least-recently accessed to most-recently accessed.

And so, building a Least Recently Used (LRU) cache is quite easy and practical with this kind of map. A successful put or get operation results in an access for the entry:

@Test
public void givenLinkedHashMap_whenAccessOrderWorks_thenCorrect() {
    LinkedHashMap<Integer, String> map 
      = new LinkedHashMap<>(16, .75f, true);
    map.put(1, null);
    map.put(2, null);
    map.put(3, null);
    map.put(4, null);
    map.put(5, null);

    Set<Integer> keys = map.keySet();
    assertEquals("[1, 2, 3, 4, 5]", keys.toString());
 
    map.get(4);
    assertEquals("[1, 2, 3, 5, 4]", keys.toString());
 
    map.get(1);
    assertEquals("[2, 3, 5, 4, 1]", keys.toString());
 
    map.get(3);
    assertEquals("[2, 5, 4, 1, 3]", keys.toString());
}

Notice how the order of elements in the key set is transformed as we perform access operations on the map.

Simply put, any access operation on the map results in an order such that the element that was accessed would appear last if an iteration were to be carried out right away.

After the above examples, it should be obvious that a putAll operation generates one entry access for each of the mappings in the specified map.

Naturally, iteration over a view of the map doesn’t affect the order of iteration of the backing map; only explicit access operations on the map will affect the order.

LinkedHashMap also provides a mechanism for maintaining a fixed number of mappings by dropping off the oldest entries when a new one needs to be added.

The removeEldestEntry method may be overridden to enforce this policy for removing stale mappings automatically.

To see this in practice, let us create our own linked hash map class, for the sole purpose of enforcing the removal of stale mappings by extending LinkedHashMap:

public class MyLinkedHashMap<K, V> extends LinkedHashMap<K, V> {

    private static final int MAX_ENTRIES = 5;

    public MyLinkedHashMap(
      int initialCapacity, float loadFactor, boolean accessOrder) {
        super(initialCapacity, loadFactor, accessOrder);
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry eldest) {
        return size() > MAX_ENTRIES;
    }

}

Our override above will allow the map to grow to a maximum size of 5 entries. When the size exceeds that, each new entry will be inserted at the cost of losing the eldest entry in the map, i.e. the entry whose last-access time precedes all the other entries:

@Test
public void givenLinkedHashMap_whenRemovesEldestEntry_thenCorrect() {
    LinkedHashMap<Integer, String> map
      = new MyLinkedHashMap<>(16, .75f, true);
    map.put(1, null);
    map.put(2, null);
    map.put(3, null);
    map.put(4, null);
    map.put(5, null);
    Set<Integer> keys = map.keySet();
    assertEquals("[1, 2, 3, 4, 5]", keys.toString());
 
    map.put(6, null);
    assertEquals("[2, 3, 4, 5, 6]", keys.toString());
 
    map.put(7, null);
    assertEquals("[3, 4, 5, 6, 7]", keys.toString());
 
    map.put(8, null);
    assertEquals("[4, 5, 6, 7, 8]", keys.toString());
}

Notice how the oldest entries at the start of the key set keep dropping off as we add new ones to the map.

5. Performance Considerations

Just like HashMap, LinkedHashMap performs the basic Map operations of add, remove and contains in constant-time, as long as the hash function is well-dimensioned. It also accepts a null key as well as null values.

However, this constant-time performance of LinkedHashMap is likely to be a little worse than the constant-time of HashMap due to the added overhead of maintaining a doubly-linked list.

Iteration over collection views of LinkedHashMap also takes linear time O(n) similar to that of HashMap. On the flip side, LinkedHashMap‘s linear time performance during iteration is better than HashMap‘s linear time.

This is because, for LinkedHashMap, n in O(n) is only the number of entries in the map, regardless of the capacity. Whereas, for HashMap, n is the capacity plus the size, O(size + capacity).

Load Factor and Initial Capacity are defined precisely as for HashMap. Note, however, that the penalty for choosing an excessively high value for initial capacity is less severe for LinkedHashMap than for HashMap, as iteration times for this class are unaffected by capacity.

6. Concurrency

Just like HashMap, the LinkedHashMap implementation is not synchronized. So, if you are going to access it from multiple threads and at least one of these threads is likely to change it structurally, then it must be externally synchronized.

It’s best to do this at creation:

Map m = Collections.synchronizedMap(new LinkedHashMap());

The difference from HashMap lies in what entails a structural modification. In access-ordered linked hash maps, merely calling the get API results in a structural modification, alongside operations like put and remove; the sketch below shows why this matters.
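
As a short sketch of this behavior (the keys and values here are arbitrary), even a plain read can break iteration over an unsynchronized access-ordered map:

Map<Integer, String> map = new LinkedHashMap<>(16, .75f, true);
map.put(1, "one");
map.put(2, "two");

for (Integer key : map.keySet()) {
    // in access-order mode, get() counts as a structural modification,
    // so the iterator fails fast:
    map.get(key); // throws ConcurrentModificationException
}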

7. Conclusion

In this article, we have explored the Java LinkedHashMap class as one of the foremost implementations of the Map interface in terms of usage. We have also explored its internal workings in terms of its difference from HashMap, which is its superclass.

Hopefully, after having read this post, you can make more informed and effective decisions as to which Map implementation to employ in your use case.

The full source code for all the examples used in this article can be found in the GitHub project.


findFirst() and findAny() in the Java 8 Stream API


1. Introduction

The Java 8 Stream API introduced two methods that are often misunderstood: findAny() and findFirst().

In this quick tutorial, we will be looking at the difference between these two methods and when to use them.

2. Using the Stream.findAny()

As the name suggests, the findAny() method allows us to find any element from a Stream. We use it when we are looking for an element without paying attention to the encounter order.

The method returns an Optional instance, which is empty if the Stream is empty:

@Test
public void createStream_whenFindAnyResultIsPresent_thenCorrect() {
    List<String> list = Arrays.asList("A","B","C","D");

    Optional<String> result = list.stream().findAny();

    assertTrue(result.isPresent());
    assertThat(result.get(), anyOf(is("A"), is("B"), is("C"), is("D")));
}

In a non-parallel operation, it will most likely return the first element in the Stream, but there is no guarantee of this.

When processing a parallel operation, the result cannot be reliably determined, as the implementation is free to pick any element for maximum performance:

@Test
public void createParallelStream_whenFindAnyResultIsNotFirst_thenCorrect() {
    List<Integer> list = Arrays.asList(1, 2, 3, 4, 5);
    Optional<Integer> result = list
      .stream().parallel()
      .filter(num -> num < 4).findAny();

    assertTrue(result.isPresent());
    assertThat(result.get(), anyOf(is(1), is(2), is(3)));
}

3. Using the Stream.findFirst()

The findFirst() method finds the first element in a Stream. Obviously, this method is used when you specifically want the first element from a sequence.

When there is no encounter order, it returns any element from the Stream. The java.util.stream package documentation says:

Streams may or may not have a defined encounter order. It depends on the source and the intermediate operations.

The return type is also an Optional instance, which is empty if the Stream is empty:

@Test
public void createStream_whenFindFirstResultIsPresent_thenCorrect() {

    List<String> list = Arrays.asList("A", "B", "C", "D");

    Optional<String> result = list.stream().findFirst();

    assertTrue(result.isPresent());
    assertThat(result.get(), is("A"));
}

The behavior of the findFirst() method does not change in the parallel scenario: if an encounter order exists, it will always behave deterministically.

4. Conclusion

In this tutorial, we looked at the findAny() and findFirst() methods of the Java 8 Streams API. The findAny() method returns any element from a Stream while the findFirst() method returns the first element in a Stream.

You can find the complete source code and all code snippets for this article over on GitHub.

Spring Cloud Sleuth in a Monolith Application


1. Overview

In this article, we’re introducing Spring Cloud Sleuth – a powerful tool for enhancing logs in any application, but especially in a system built up of multiple services.

And for this write-up, we’re going to focus on using Sleuth in a monolith application, not across microservices.

We’ve all had the unfortunate experience of trying to diagnose a problem with a scheduled task, a multi-threaded operation, or a complex web request. Often, even when there is logging, it is hard to tell what actions need to be correlated together to create a single request.

This can make diagnosing a complex action very difficult or even impossible, often resulting in solutions like passing a unique id to each method in the request to identify the logs.

In comes Sleuth. This library makes it possible to identify logs pertaining to a specific job, thread, or request. Sleuth integrates effortlessly with logging frameworks like Logback and SLF4J to add unique identifiers that help track and diagnose issues using logs.

Let’s take a look at how it works.

2. Setup

We’ll start by creating a Spring Boot web project in our favorite IDE and adding this dependency to our pom.xml file:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>

Our application runs with Spring Boot and the parent pom provides versions for each entry. The latest version of this dependency can be found here: spring-cloud-starter-sleuth. To see the entire POM check out the project on Github.

Additionally, let’s add an application name to instruct Sleuth to identify this application’s logs.

In our application.properties file add this line:

spring.application.name=Baeldung Sleuth Tutorial

3. Sleuth Configurations

Sleuth is capable of enhancing logs in many situations. The library comes ready with filters that add unique ids to each web request that enters our application. Furthermore, the Spring team has added support for sharing these ids across thread boundaries.

A trace can be thought of as a single request or job that is triggered in an application. All the various steps in that request, even across application and thread boundaries, will have the same traceId.

Spans, on the other hand, can be thought of as sections of a job or request. A single trace can be composed of multiple spans, each correlating to a specific step or section of the request. Using trace and span ids, we can pinpoint exactly when and where our application is as it processes a request, which makes reading our logs much easier.

In our examples, we will explore these capabilities in a single application.

3.1. Simple Web Request

First, let’s create a controller class to be an entry point to work with:

@RestController
public class SleuthController {

    private Logger logger = LoggerFactory.getLogger(this.getClass());

    @GetMapping("/")
    public String helloSleuth() {
        logger.info("Hello Sleuth");
        return "success";
    }
}

Let’s run our application and navigate to “http://localhost:8080”. Watch the logs for output that looks like:

2017-01-10 22:36:38.254  INFO 
  [Baeldung Sleuth Tutorial,4e30f7340b3fb631,7fd8bd536e28479b,false] 12516 
  --- [nio-8080-exec-1] c.b.spring.session.SleuthController : Hello Sleuth

This looks like a normal log, except for the part in the beginning between the brackets. This is the core information that Spring Sleuth has added. This data follows the format of:

[application name, traceId, spanId, export]

  • Application name – This is the name we set in the properties file and can be used to aggregate logs from multiple instances of the same application.
  • TraceId – This is an id that is assigned to a single request, job, or action. For example, each unique user-initiated web request will have its own traceId.
  • SpanId – Tracks a unit of work. Think of a request that consists of multiple steps. Each step could have its own spanId and be tracked individually.
  • Export – This property is a boolean that indicates whether or not this log was exported to an aggregator like Zipkin. Zipkin is beyond the scope of this article but plays an important role in analyzing logs created by Sleuth.

By now, you should have some idea of the power of this library. Let’s take a look at another example to further demonstrate how integral this library is to logging.

3.2. Simple Web Request with Service Access

Let’s start by creating a service with a single method:

@Service
public class SleuthService {

    private Logger logger = LoggerFactory.getLogger(this.getClass());

    public void doSomeWorkSameSpan() throws InterruptedException {
        Thread.sleep(1000L);
        logger.info("Doing some work");
    }
}

Now let’s inject our service into our controller and add a request mapping method that accesses it:

@Autowired
private SleuthService sleuthService;

@GetMapping("/same-span")
public String helloSleuthSameSpan() throws InterruptedException {
    logger.info("Same Span");
    sleuthService.doSomeWorkSameSpan();
    return "success";
}

Finally, restart the application and navigate to “http://localhost:8080/same-span”. Watch for log output that looks like:

2017-01-10 22:51:47.664  INFO 
  [Baeldung Sleuth Tutorial,b77a5ea79036d5b9,661b8087cd9d8f51,false] 12516 
  --- [nio-8080-exec-3] c.b.spring.session.SleuthController      : Same Span
2017-01-10 22:51:48.664  INFO 
  [Baeldung Sleuth Tutorial,b77a5ea79036d5b9,661b8087cd9d8f51,false] 12516 
  --- [nio-8080-exec-3] c.baeldung.spring.session.SleuthService  : Doing some work

Take note that the trace and span ids are the same between the two logs even though the messages originate from two different classes. This makes it trivial to identify each log during a request by searching for the traceId of that request.

This is the default behavior: one request gets a single traceId and spanId. But we can manually add spans as we see fit. Let’s take a look at an example that uses this feature.

3.3. Manually Adding a Span

To start, let’s add a new controller:

@GetMapping("/new-span")
public String helloSleuthNewSpan() {
    logger.info("New Span");
    sleuthService.doSomeWorkNewSpan();
    return "success";
}

And now let’s add the new method inside our service:

@Autowired
private Tracer tracer;
// ...
public void doSomeWorkNewSpan() throws InterruptedException {
    logger.info("I'm in the original span");

    Span newSpan = tracer.createSpan("newSpan");
    try {
        Thread.sleep(1000L);
        logger.info("I'm in the new span doing some cool work that needs its own span");
    } finally {
        tracer.close(newSpan);
    }

    logger.info("I'm in the original span");
}

Note that we also added a new object, Tracer. The tracer instance is created by Spring Sleuth during startup and is made available to our class through dependency injection.

Manually created spans must be started and closed explicitly. To accomplish this, the code that runs in such a span is placed inside a try-finally block to ensure the span is closed regardless of the operation’s success.

Restart the application and navigate to “http://localhost:8080/new-span”. Watch for the log output that looks like:

2017-01-11 21:07:54.924  
  INFO [Baeldung Sleuth Tutorial,9cdebbffe8bbbade,1e706f252a0ee9c2,false] 12516 
  --- [nio-8080-exec-6] c.b.spring.session.SleuthController      : New Span
2017-01-11 21:07:54.924  
  INFO [Baeldung Sleuth Tutorial,9cdebbffe8bbbade,1e706f252a0ee9c2,false] 12516 
  --- [nio-8080-exec-6] c.baeldung.spring.session.SleuthService  : 
  I'm in the original span
2017-01-11 21:07:55.924  
  INFO [Baeldung Sleuth Tutorial,9cdebbffe8bbbade,9e9ddea8f2a7c8ce,false] 12516 
  --- [nio-8080-exec-6] c.baeldung.spring.session.SleuthService  : 
  I'm in the new span doing some cool work that needs its own span
2017-01-11 21:07:55.924  
  INFO [Baeldung Sleuth Tutorial,9cdebbffe8bbbade,1e706f252a0ee9c2,false] 12516 
  --- [nio-8080-exec-6] c.baeldung.spring.session.SleuthService  : 
  I'm in the original span

We can see that the third log shares the traceId with the others, but it has a unique spanId. This can be used to locate different sections within a single request for more fine-grained tracing.

Now let’s take a look at Sleuth’s support for threads.

3.4. Spanning Runnables

To demonstrate the threading capabilities of Sleuth let’s first add a configuration class to set up a thread pool:

@Configuration
public class ThreadConfig {

    @Autowired
    private BeanFactory beanFactory;

    @Bean
    public Executor executor() {
        ThreadPoolTaskExecutor threadPoolTaskExecutor
         = new ThreadPoolTaskExecutor();
        threadPoolTaskExecutor.setCorePoolSize(1);
        threadPoolTaskExecutor.setMaxPoolSize(1);
        threadPoolTaskExecutor.initialize();

        return new LazyTraceExecutor(beanFactory, threadPoolTaskExecutor);
    }
}

It is important to note here the use of LazyTraceExecutor. This class comes from the Sleuth library and is a special kind of executor that will propagate our traceIds to new threads and create new spanIds in the process.

Now let’s wire this executor into our controller and use it in a new request mapping method:

@Autowired
private Executor executor;

@GetMapping("/new-thread")
public String helloSleuthNewThread() {
    logger.info("New Thread");
    Runnable runnable = () -> {
        try {
            Thread.sleep(1000L);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        logger.info("I'm inside the new thread - with a new span");
    };
    executor.execute(runnable);

    logger.info("I'm done - with the original span");
    return "success";
}

With our runnable in place, let’s restart our application and navigate to “http://localhost:8080/new-thread”. Watch for log output that looks like:

2017-01-11 21:18:15.949  
  INFO [Baeldung Sleuth Tutorial,96076a78343c364d,5179d4eeb0037a86,false] 12516 
  --- [nio-8080-exec-9] c.b.spring.session.SleuthController      : New Thread
2017-01-11 21:18:15.950  
  INFO [Baeldung Sleuth Tutorial,96076a78343c364d,5179d4eeb0037a86,false] 12516 
  --- [nio-8080-exec-9] c.b.spring.session.SleuthController      : 
  I'm done - with the original span
2017-01-11 21:18:16.953  
  INFO [Baeldung Sleuth Tutorial,96076a78343c364d,e3b6a68013ddfeea,false] 12516 
  --- [lTaskExecutor-1] c.b.spring.session.SleuthController      : 
  I'm inside the new thread - with a new span

Much like the previous example, we can see that all the logs share the same traceId. But the log coming from the runnable has a unique span that will track the work done in that thread. Remember that this happens because of the LazyTraceExecutor; if we were to use a normal executor, we would continue to see the same spanId used in the new thread.

Now let’s look into Sleuth’s support for @Async methods.

3.5. @Async Support

To add async support let’s first modify our ThreadConfig class to enable this feature:

@Configuration
@EnableAsync
public class ThreadConfig extends AsyncConfigurerSupport {
    
    //...
    @Override
    public Executor getAsyncExecutor() {
        ThreadPoolTaskExecutor threadPoolTaskExecutor = new ThreadPoolTaskExecutor();
        threadPoolTaskExecutor.setCorePoolSize(1);
        threadPoolTaskExecutor.setMaxPoolSize(1);
        threadPoolTaskExecutor.initialize();

        return new LazyTraceExecutor(beanFactory, threadPoolTaskExecutor);
    }
}

Note that we extend AsyncConfigurerSupport to specify our async executor and use LazyTraceExecutor to ensure traceIds and spanIds are propagated correctly. We have also added @EnableAsync to the top of our class.

Let’s now add an async method to our service:

@Async
public void asyncMethod() throws InterruptedException {
    logger.info("Start Async Method");
    Thread.sleep(1000L);
    logger.info("End Async Method");
}

Now let’s call into this method from our controller:

@GetMapping("/async")
public String helloSleuthAsync() {
    logger.info("Before Async Method Call");
    sleuthService.asyncMethod();
    logger.info("After Async Method Call");
    
    return "success";
}

Finally, let’s restart our service and navigate to “http://localhost:8080/async”. Watch for the log output that looks like:

2017-01-11 21:30:40.621  
  INFO [Baeldung Sleuth Tutorial,c187f81915377fff,65f7e9a59b52e82d,false] 10072 
  --- [nio-8080-exec-2] c.b.spring.session.SleuthController      : 
  Before Async Method Call
2017-01-11 21:30:40.622  
  INFO [Baeldung Sleuth Tutorial,c187f81915377fff,65f7e9a59b52e82d,false] 10072 
  --- [nio-8080-exec-2] c.b.spring.session.SleuthController      : 
  After Async Method Call
2017-01-11 21:30:40.622  
  INFO [Baeldung Sleuth Tutorial,c187f81915377fff,8a9f3f097dca6a9e,false] 10072 
  --- [lTaskExecutor-1] c.baeldung.spring.session.SleuthService  : 
  Start Async Method
2017-01-11 21:30:41.622  
  INFO [Baeldung Sleuth Tutorial,c187f81915377fff,8a9f3f097dca6a9e,false] 10072 
  --- [lTaskExecutor-1] c.baeldung.spring.session.SleuthService  : 
  End Async Method

We can see here that much like our runnable example, Sleuth propagates the traceId into the async method and adds a unique spanId.

Let’s now work through an example using Spring’s support for scheduled tasks.

3.6. @Scheduled Support

Finally, let’s look at how Sleuth works with @Scheduled methods. To do this let’s update our ThreadConfig class to enable scheduling:

@Configuration
@EnableAsync
@EnableScheduling
public class ThreadConfig extends AsyncConfigurerSupport
  implements SchedulingConfigurer {
 
    //...
    
    @Override
    public void configureTasks(ScheduledTaskRegistrar scheduledTaskRegistrar) {
        scheduledTaskRegistrar.setScheduler(schedulingExecutor());
    }

    @Bean(destroyMethod = "shutdown")
    public Executor schedulingExecutor() {
        return Executors.newScheduledThreadPool(1);
    }
}

Note that we have implemented the SchedulingConfigurer interface and overridden its configureTasks method. We have also added @EnableScheduling to the top of our class.

Next, let’s add a service for our scheduled tasks:

@Service
public class SchedulingService {

    private Logger logger = LoggerFactory.getLogger(this.getClass());
 
    @Autowired
    private SleuthService sleuthService;

    @Scheduled(fixedDelay = 30000)
    public void scheduledWork() throws InterruptedException {
        logger.info("Start some work from the scheduled task");
        sleuthService.asyncMethod();
        logger.info("End work from scheduled task");
    }
}

In this class, we have created a single scheduled task with a fixed delay of 30 seconds.

Let’s now restart our application and wait for our task to be executed. Watch the console for output like this:

2017-01-11 21:30:58.866  
  INFO [Baeldung Sleuth Tutorial,3605f5deaea28df2,3605f5deaea28df2,false] 10072 
  --- [pool-1-thread-1] c.b.spring.session.SchedulingService     : 
  Start some work from the scheduled task
2017-01-11 21:30:58.866  
  INFO [Baeldung Sleuth Tutorial,3605f5deaea28df2,3605f5deaea28df2,false] 10072 
  --- [pool-1-thread-1] c.b.spring.session.SchedulingService     : 
  End work from scheduled task

We can see here that Sleuth has created new trace and span ids for our task. Each instance of a task will get its own trace and span by default.

4. Conclusion

In conclusion, we have seen how Spring Sleuth can be used in a variety of situations inside a single web application. We can use this technology to easily correlate logs from a single request, even when that request spans multiple threads.

By now we can see how Spring Cloud Sleuth can help us keep our sanity when debugging a multi-threaded environment. By identifying each operation with a traceId and each step with a spanId, we can really begin to break down our analysis of complex jobs in our logs.

Even if we don’t go to the cloud, Spring Sleuth is likely a critical dependency in almost any project; it’s seamless to integrate and is a massive addition of value.

From here you may want to investigate other features of Sleuth. It can support tracing in distributed systems using RestTemplate, across messaging protocols used by RabbitMQ and Redis, and through a gateway like Zuul.

As always, you can find the source code over on GitHub.

JSON Processing in Java EE 7


1. Overview

This article will show you how to process JSON using only core Java EE, without the use of third-party dependencies like Jersey or Jackson. Pretty much everything we’ll be using is provided by the javax.json package.

2. Writing an Object to JSON String

Converting a Java object into a JSON String is super easy. Let’s assume we have a simple Person class:

public class Person {
    private String firstName;
    private String lastName;
    private Date birthdate;

    // getters and setters
}

To convert an instance of that class to a JSON String, first we need to create an instance of JsonObjectBuilder and add property/value pairs using the add() method:

JsonObjectBuilder objectBuilder = Json.createObjectBuilder()
  .add("firstName", person.getFirstName())
  .add("lastName", person.getLastName())
  .add("birthdate", new SimpleDateFormat("MM/dd/yyyy")
  .format(person.getBirthdate()));

Notice that the add() method has a few overloaded versions. It can receive most of the primitive types (as well as boxed objects) as its second parameter.
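
For instance, here is a quick sketch of a few of those overloads (the values are purely illustrative):

JsonObject sample = Json.createObjectBuilder()
  .add("name", "Michael")   // String value
  .add("age", 42)           // int value
  .add("height", 1.85)      // double value
  .add("active", true)      // boolean value
  .build();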

Once we’re done setting the properties we just need to write the object into a String:

JsonObject jsonObject = objectBuilder.build();
        
String jsonString;
try(Writer writer = new StringWriter()) {
    Json.createWriter(writer).write(jsonObject);
    jsonString = writer.toString();
}

And that’s it! The generated String will look like this:

{"firstName":"Michael","lastName":"Scott","birthdate":"06/15/1978"}

2.1. Using JsonArrayBuilder to Build Arrays

Now, to add a little more complexity to our example, let’s assume that the Person class was modified to add a new property called emails which will contain a list of email addresses:

public class Person {
    private String firstName;
    private String lastName;
    private Date birthdate;
    private List<String> emails;
    
    // getters and setters

}

To add all the values from that list to the JsonObjectBuilder we’ll need the help of JsonArrayBuilder:

JsonArrayBuilder arrayBuilder = Json.createArrayBuilder();
                
for(String email : person.getEmails()) {
    arrayBuilder.add(email);
}
        
objectBuilder.add("emails", arrayBuilder);

Notice that we’re using yet another overloaded version of the add() method that takes a JsonArrayBuilder object as its second parameter.

So, let’s look at the generated String for a Person object with two email addresses:

{"firstName":"Michael","lastName":"Scott","birthdate":"06/15/1978",
 "emails":["michael.scott@dd.com","michael.scarn@gmail.com"]}

2.2. Formatting the Output with PRETTY_PRINTING

So we have successfully converted a Java object to a valid JSON String. Now, before moving to the next section, let’s add some simple formatting to make the output more “JSON-like” and easier to read.

In the previous examples, we created a JsonWriter using the straightforward Json.createWriter() static method. In order to get more control over the generated String, we will leverage Java EE 7’s JsonWriterFactory and its ability to create a writer with a specific configuration:

Map<String, Boolean> config = new HashMap<>();

config.put(JsonGenerator.PRETTY_PRINTING, true);
        
JsonWriterFactory writerFactory = Json.createWriterFactory(config);
        
String jsonString;
 
try(Writer writer = new StringWriter()) {
    writerFactory.createWriter(writer).write(jsonObject);
    jsonString = writer.toString();
}

The code may look a bit verbose, but it really doesn’t do much.

First, it creates an instance of JsonWriterFactory by passing a configuration map to the Json.createWriterFactory() method. The map contains only one entry, which sets the PRETTY_PRINTING property to true. Then, we use that factory instance to create a writer, instead of using Json.createWriter().

The new output will contain the distinctive line breaks and indentation that characterize a pretty-printed JSON String:

{
    "firstName":"Michael",
    "lastName":"Scott",
    "birthdate":"06/15/1978",
    "emails":[
        "michael.scott@dd.com",
        "michael.scarn@gmail.com"
    ]
}

3. Building a Java Object From a String

Now let’s do the opposite operation: convert a JSON String into a Java object.

The main part of the conversion process revolves around JsonObject. To create an instance of this class, use the static method Json.createReader() followed by readObject():

JsonReader reader = Json.createReader(new StringReader(jsonString));

JsonObject jsonObject = reader.readObject();

The createReader() method accepts either a Reader or an InputStream as a parameter. In this example, we’re using a StringReader, since our JSON is contained in a String object, but the same method could be used to read content from a file, for example, using a FileInputStream.
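
As a minimal sketch, assuming a local file named person.json exists, reading from a file could look like this:

// the enclosing method needs to handle IOException
try (JsonReader fileReader = Json.createReader(new FileInputStream("person.json"))) {
    JsonObject fromFile = fileReader.readObject();
}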

With an instance of JsonObject at hand, we can read the properties using the getString() method and assign the obtained values to a newly created instance of our Person class:

Person person = new Person();

DateFormat dateFormat = new SimpleDateFormat("MM/dd/yyyy");

person.setFirstName(jsonObject.getString("firstName"));
person.setLastName(jsonObject.getString("lastName"));
person.setBirthdate(dateFormat.parse(jsonObject.getString("birthdate")));

3.1. Using JsonArray to Get List Values

We’ll need to use a special class, called JsonArray to extract list values from JsonObject:

JsonArray emailsJson = jsonObject.getJsonArray("emails");

List<String> emails = new ArrayList<>();

for (JsonString j : emailsJson.getValuesAs(JsonString.class)) {
    emails.add(j.getString());
}

person.setEmails(emails);

That’s it! We have created a complete instance of Person from a JSON String.

4. Querying for Values 

Now, let’s assume we are interested in a very specific piece of data that lies inside a JSON String.

Consider the JSON below representing a client from a pet shop. Let’s say that, for some reason, you need to get the name of the third pet from the pets list:

{
    "ownerName": "Robert",
    "pets": [{
        "name": "Kitty",
        "type": "cat"
    }, {
        "name": "Rex",
        "type": "dog"
    }, {
        "name": "Jake",
        "type": "dog"
    }]
}

Converting the whole text into a Java object just to get a single value wouldn’t be very efficient. So, let’s check a couple of strategies to query JSON Strings without having to go through the whole conversion ordeal.

4.1. Querying Using Object Model API

Querying for a property’s value with a known location in the JSON structure is straightforward. We can use an instance of JsonObject, the same class used in previous examples:

JsonReader reader = Json.createReader(new StringReader(jsonString));

JsonObject jsonObject = reader.readObject();

String searchResult = jsonObject
  .getJsonArray("pets")
  .getJsonObject(2)
  .getString("name");

The key here is to navigate through the jsonObject properties using the correct sequence of get*() methods.

In this example, we first get a reference to the “pets” list using getJsonArray(), which returns a list with 3 records. Then, we use getJsonObject() method, which takes an index as a parameter, returning another JsonObject representing the third item in the list. Finally, we use getString() to get the string value we are looking for.

4.2. Querying Using Streaming API

Another way to perform precise queries on a JSON String is using the Streaming API, which has JsonParser as its main class.

JsonParser provides extremely fast, read-only, forward access to JSON data, with the drawback of being somewhat more complicated than the Object Model:

JsonParser jsonParser = Json.createParser(new StringReader(jsonString));

int count = 0;
String result = null;

while(jsonParser.hasNext()) {
    Event e = jsonParser.next();
    
    if (e == Event.KEY_NAME) {
        if(jsonParser.getString().equals("name")) {
            jsonParser.next();
           
            if(++count == 3) {
                result = jsonParser.getString();
                break;
            }
        }   
    }
}

This example delivers the same result as the previous one. It returns the name from the third pet in the pets list.

Once a JsonParser is created using Json.createParser(), we need to use an iterator (hence the “forward access” nature of the JsonParser) to navigate through the JSON tokens until we get to the property (or properties) we are looking for.

Every time we step through the iterator we move to the next token of the JSON data. So we have to be careful to check if the current token has the expected type. This is done by checking the Event returned by the next() call.

There are many different types of tokens. In this example, we are interested in the KEY_NAME types, which represent the name of a property (e.g. “ownerName”, “pets”, “name”, “type”). Once we’ve stepped through a KEY_NAME token with a value of “name” for the third time, we know that the next token will contain a string value representing the name of the third pet from the list.

This is definitely harder than using the Object Model API, especially for more complicated JSON structures. The choice between one or the other, as always, depends on the specific scenario you will be dealing with.

5. Conclusion

We have covered a lot of ground on the Java EE JSON Processing API with a couple of simple examples. To learn other cool stuff about JSON processing, check our series of Jackson articles.

Check the source code of the classes used in this article, as well as some unit tests, in our GitHub repository.

Guide to Guava RangeSet


1. Overview

In this tutorial, we’ll show how to use Google Guava’s RangeSet interface and its implementations.

A RangeSet is a set comprising zero or more non-empty, disconnected ranges. When adding a range to a mutable RangeSet, any connected ranges are merged together, while empty ranges are ignored.

The basic implementation of RangeSet is a TreeRangeSet.
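
For example, here is a quick sketch of that merging behavior using TreeRangeSet:

RangeSet<Integer> rangeSet = TreeRangeSet.create();

rangeSet.add(Range.closed(1, 3));
rangeSet.add(Range.closed(3, 6)); // connected to [1..3], so the two ranges are merged

// the set now contains the single range [1..6]
assertTrue(rangeSet.encloses(Range.closed(1, 6)));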

2. Google Guava’s RangeSet

Let’s have a look at how to use the RangeSet class.

2.1. Maven Dependency

Let’s start by adding Google’s Guava library dependency in the pom.xml:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>21.0</version>
</dependency>

The latest version of the dependency can be checked here.

3. Creation

Let’s explore some of the ways in which we can create an instance of RangeSet.

First, we can use the create method from the class TreeRangeSet to create a mutable set:

RangeSet<Integer> numberRangeSet = TreeRangeSet.create();

If we already have a collection of ranges in place, we can create a mutable set by passing that collection to the create method of the TreeRangeSet class:

List<Range<Integer>> numberList = Arrays.asList(Range.closed(0, 2));
RangeSet<Integer> numberRangeSet = TreeRangeSet.create(numberList);

Finally, if we need to create an immutable range set, we can use the ImmutableRangeSet class (whose creation follows the builder pattern):

RangeSet<Integer> numberRangeSet 
  = new ImmutableRangeSet.<Integer>builder().add(Range.closed(0, 2)).build();

4. Usage

Let’s start with a simple example that shows the usage of RangeSet.

4.1. Adding Ranges

After adding range items to a set, we can check whether a supplied value falls within any of the ranges present in it:

@Test
public void givenRangeSet_whenQueryWithinRange_returnsSucessfully() {
    RangeSet<Integer> numberRangeSet = TreeRangeSet.create();

    numberRangeSet.add(Range.closed(0, 2));
    numberRangeSet.add(Range.closed(3, 5));
    numberRangeSet.add(Range.closed(6, 8));

    assertTrue(numberRangeSet.contains(1));
    assertFalse(numberRangeSet.contains(9));
}

Notes:

  • The closed method of the Range class creates a range of integer values between 0 and 2 (both inclusive)
  • The Range in the above example consists of integers. We can use a range of any type, as long as it implements the Comparable interface, such as String, Character, floating-point numbers, etc.
  • In the case of an ImmutableRangeSet, a range item present in the set cannot overlap with a range item that one would like to add. If that happens, we get an IllegalArgumentException (see the sketch after this list)
  • Range input to a RangeSet cannot be null. If the input is null, we will get a NullPointerException
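
Here is the sketch for the ImmutableRangeSet overlap case mentioned above (the test name is illustrative):

@Test(expected = IllegalArgumentException.class)
public void givenImmutableRangeSet_whenOverlappingRangeAdded_throwsException() {
    ImmutableRangeSet.<Integer>builder()
      .add(Range.closed(0, 2))
      .add(Range.closed(1, 3)) // overlaps [0..2]
      .build();
}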

4.2. Removing a Range

Let’s see how we can remove values from a RangeSet:

@Test
public void givenRangeSet_whenRemoveRangeIsCalled_removesSucessfully() {
    RangeSet<Integer> numberRangeSet = TreeRangeSet.create();

    numberRangeSet.add(Range.closed(0, 2));
    numberRangeSet.add(Range.closed(3, 5));
    numberRangeSet.add(Range.closed(6, 8));
    numberRangeSet.add(Range.closed(9, 15));
    numberRangeSet.remove(Range.closed(3, 5));
    numberRangeSet.remove(Range.closed(7, 10));

    assertTrue(numberRangeSet.contains(1));
    assertFalse(numberRangeSet.contains(9));
    assertTrue(numberRangeSet.contains(12));
}

As can be seen, after removal we can still access values present in any of the range items left in the set.

4.3. Range Span

Let’s now see what the overall span of a RangeSet is:

@Test
public void givenRangeSet_whenSpanIsCalled_returnsSucessfully() {
    RangeSet<Integer> numberRangeSet = TreeRangeSet.create();

    numberRangeSet.add(Range.closed(0, 2));
    numberRangeSet.add(Range.closed(3, 5));
    numberRangeSet.add(Range.closed(6, 8));
    Range<Integer> experienceSpan = numberRangeSet.span();

    assertEquals(0, experienceSpan.lowerEndpoint().intValue());
    assertEquals(8, experienceSpan.upperEndpoint().intValue());
}

4.4. Getting a Subrange

If we wish to get part of RangeSet based on a given Range, we can use the subRangeSet method:

@Test
public void 
  givenRangeSet_whenSubRangeSetIsCalled_returnsSubRangeSucessfully() {
  
    RangeSet<Integer> numberRangeSet = TreeRangeSet.create();

    numberRangeSet.add(Range.closed(0, 2));
    numberRangeSet.add(Range.closed(3, 5));
    numberRangeSet.add(Range.closed(6, 8));
    RangeSet<Integer> numberSubRangeSet 
      = numberRangeSet.subRangeSet(Range.closed(4, 14));

    assertFalse(numberSubRangeSet.contains(3));
    assertFalse(numberSubRangeSet.contains(14));
    assertTrue(numberSubRangeSet.contains(7));
}

4.5. Complement Method

Next, let’s get all the values outside the ranges present in the RangeSet, using the complement method:

@Test
public void givenRangeSet_whenComplementIsCalled_returnsSucessfully() {
    RangeSet<Integer> numberRangeSet = TreeRangeSet.create();

    numberRangeSet.add(Range.closed(0, 2));
    numberRangeSet.add(Range.closed(3, 5));
    numberRangeSet.add(Range.closed(6, 8));
    RangeSet<Integer> numberRangeComplementSet
      = numberRangeSet.complement();

    assertTrue(numberRangeComplementSet.contains(-1000));
    assertFalse(numberRangeComplementSet.contains(2));
    assertFalse(numberRangeComplementSet.contains(3));
    assertTrue(numberRangeComplementSet.contains(1000));
}

4.6. Intersection with a Range

Finally, when we would like to check whether a range interval present in RangeSet intersects with some or all the values in another given range, we can make use of the intersect method:

@Test
public void givenRangeSet_whenIntersectsWithinRange_returnsSucessfully() {
    RangeSet<Integer> numberRangeSet = TreeRangeSet.create();

    numberRangeSet.add(Range.closed(0, 2));
    numberRangeSet.add(Range.closed(3, 10));
    numberRangeSet.add(Range.closed(15, 18));

    assertTrue(numberRangeSet.intersects(Range.closed(4, 17)));
    assertFalse(numberRangeSet.intersects(Range.closed(19, 200)));
}

5. Conclusion

In this tutorial, we illustrated the RangeSet of the Guava library using some examples. The RangeSet is predominantly used to check whether a value falls within a certain range present in the set.

The implementation of these examples can be found in the GitHub project – this is a Maven-based project, so it should be easy to import and run as is.

Simple Inheritance with Jackson


1. Overview

In this tutorial, we’re going to take a look at inheritance and in particular how to handle JSON serialization and deserialization of Java classes that extend a superclass.

In order to get started with Jackson, you can have a look at this article here.

2. JSON Inheritance – Real World Scenario

Let’s say that we have an abstract Event class which is used for some sort of event processing:

abstract public class Event {
    private String id;
    private Long timestamp;

    // standard constructors, getters/setters
}

There are two subclasses that extend the Event class, the first is ItemIdAddedToUser, and the second is ItemIdRemovedFromUser.

The problem is that if we want to represent these classes as JSON, we would need to serialize their type information in addition to their ordinary fields. Otherwise, when deserializing into an Event there would be no way of knowing what subclass the JSON represents. Fortunately, there is a mechanism to achieve this in the Jackson library.
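
To make the problem concrete, here is a minimal sketch of what happens without any type information (assuming a plain ObjectMapper and the Event class before we enrich it below):

ObjectMapper mapper = new ObjectMapper();
try {
    // Jackson has no way to pick a concrete subclass for the abstract Event
    mapper.readValue("{\"id\":\"1\",\"timestamp\":12345567}", Event.class);
} catch (IOException e) {
    // fails with a JsonMappingException: abstract types can't be instantiated
}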

3. Implementing Inheritance Using Jackson

There is a @JsonTypeInfo annotation that allows us to store an object’s type information as a JSON field.

Let’s enrich the Event class with the annotation:

@JsonTypeInfo(
  use = JsonTypeInfo.Id.MINIMAL_CLASS,
  include = JsonTypeInfo.As.PROPERTY,
  property = "eventType")
abstract public class Event {
    private String id;
    private Long timestamp;

    @JsonCreator
    public Event(
      @JsonProperty("id") String id,
      @JsonProperty("timestamp") Long timestamp) {
 
        this.id = id;
        this.timestamp = timestamp;
    }
    
    // standard getters
}
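
For reference, a minimal sketch of the ItemIdRemovedFromUser subclass might look like this (its fields are inferred from the JSON output shown below, mirroring the ItemIdAddedToUser class later in this article):

public class ItemIdRemovedFromUser extends Event {
    private String itemId;
    private Long quantity;

    @JsonCreator
    public ItemIdRemovedFromUser(
      @JsonProperty("id") String id,
      @JsonProperty("timestamp") Long timestamp,
      @JsonProperty("itemId") String itemId,
      @JsonProperty("quantity") Long quantity) {

        super(id, timestamp);
        this.itemId = itemId;
        this.quantity = quantity;
    }

    // standard getters
}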

In this case, using the annotation will cause the ObjectMapper to add an additional field called eventType to the resulting JSON, with a value derived from the object’s type. Using JsonTypeInfo.Id.MINIMAL_CLASS means that the value of the eventType property will be the class name with a minimal package prefix relative to the base class (hence the leading dot in the output below). Let’s serialize an instance of ItemIdRemovedFromUser:

Event event = new ItemIdRemovedFromUser("1", 12345567L, "item_1", 2L);
ObjectMapper objectMapper = new ObjectMapper();
String eventJson = objectMapper.writeValueAsString(event);

When we print the JSON we will now see that the additional type information is stored:

{  
    "eventType":".ItemIdRemovedFromUser",
    "id":"1",
    "timestamp":12345567,
    "itemId":"item_1",
    "quantity":2
}

Let’s validate that the deserialization works by asserting that the ObjectMapper creates an instance of an ItemIdRemovedFromUser class:

@Test
public void givenRemoveItemJson_whenDeserialize_shouldHaveProperClassType()
  throws IOException {
 
    //given
    Event event = new ItemIdRemovedFromUser("1", 12345567L, "item_1", 2L);
    ObjectMapper objectMapper = new ObjectMapper();
    String eventJson = objectMapper.writeValueAsString(event);

    //when
    Event result = new ObjectMapper().readValue(eventJson, Event.class);

    //then
    assertTrue(result instanceof ItemIdRemovedFromUser);
    assertEquals("item_1", ((ItemIdRemovedFromUser) result).getItemId());
}

4. Ignoring Fields From a Super-Class

Let’s say that we want to extend an Event class, but we want our ObjectMapper to ignore its id field so that it is not present in the resulting JSON.

We can quite easily achieve this by using the @JsonIgnoreProperties annotation:

@JsonIgnoreProperties("id")
public class ItemIdAddedToUser extends Event {
    private String itemId;
    private Long quantity;

    @JsonCreator
    public ItemIdAddedToUser(
      @JsonProperty("id") String id,
      @JsonProperty("timestamp") Long timestamp,
      @JsonProperty("itemId") String itemId,
      @JsonProperty("quantity") Long quantity) {
 
        super(id, timestamp);
        this.itemId = itemId;
        this.quantity = quantity;
    }

    // standard getters
}

Let’s serialize an ItemIdAddedToUser event and see which fields are ignored by the ObjectMapper:

Event event = new ItemIdAddedToUser("1", 12345567L, "item_1", 2L);
ObjectMapper objectMapper = new ObjectMapper();
String eventJson = objectMapper.writeValueAsString(event);

The resulting JSON will look like this (note that there is no id field from the Event super-class):

{  
    "eventType":".ItemIdAddedToUser",
    "timestamp":12345567,
    "itemId":"item_1",
    "quantity":2
}

Let’s assert that the id field is missing with a test case:

@Test
public void givenAdddItemJson_whenSerialize_shouldIgnoreIdPropertyFromSuperclass()
  throws IOException {
 
    // given
    Event event = new ItemIdAddedToUser("1", 12345567L, "item_1", 2L);
    ObjectMapper objectMapper = new ObjectMapper();
        
    // when
    String eventJson = objectMapper.writeValueAsString(event);

    // then
    assertFalse(eventJson.contains("id"));
}

5. Conclusion

This article demonstrates how to handle inheritance with the Jackson library.

The implementation of all these examples and code snippets can be found in the GitHub project – this is a Maven project, so it should be easy to import and run as it is.
