A Simple Tagging Implementation with MongoDB

1. Overview

In this tutorial, we’ll take a look at a simple tagging implementation using Java and MongoDB.

For those unfamiliar with the concept, a tag is a keyword used as a “label” to group documents into different categories. This allows users to quickly navigate through similar content, and it’s especially useful when dealing with a large amount of data.

That being said, it’s not surprising that this technique is very commonly used in blogs. In this scenario, each post has one or more tags according to the topics covered. When readers finish reading, they can follow one of the tags to view more content related to that topic.

Let’s see how we can implement this scenario.

2. Dependency

In order to query the database, we’ll have to include the MongoDB driver dependency in our pom.xml:

<dependency>
    <groupId>org.mongodb</groupId>
    <artifactId>mongo-java-driver</artifactId>
    <version>3.6.3</version>
</dependency>

The current version of this dependency can be found here.

3. Data Model

First of all, let’s start by planning out what a post document should look like.

To keep it simple, our data model will only have a title, which we’ll also use as the document id, an author, and some tags.

We’ll store the tags inside an array since a post will probably have more than just one:

{
    "_id" : "Java 8 and MongoDB",
    "author" : "Donato Rimenti",
    "tags" : ["Java", "MongoDB", "Java 8", "Stream API"]
}

We’ll also create the corresponding Java model class:

public class Post {
    private String title;
    private String author;
    private List<String> tags;

    // getters and setters
}

4. Updating Tags

Now that we have set up the database and inserted a couple of sample posts, let’s see how we can update them.
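
For context, here’s a minimal sketch of how such a repository might obtain its collection; the connection details, database name, and collection name are assumptions:

MongoClient mongoClient = new MongoClient("localhost", 27017);
MongoDatabase database = mongoClient.getDatabase("blog");
MongoCollection<Document> collection = database.getCollection("posts");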

Our repository class will include two methods to handle the addition and removal of tags by using the title to find them. We’ll also return a boolean to indicate whether the query updated an element or not:

public boolean addTags(String title, List<String> tags) {
    UpdateResult result = collection.updateOne(
      new BasicDBObject(DBCollection.ID_FIELD_NAME, title), 
      Updates.addEachToSet(TAGS_FIELD, tags));
    return result.getModifiedCount() == 1;
}

public boolean removeTags(String title, List<String> tags) {
    UpdateResult result = collection.updateOne(
      new BasicDBObject(DBCollection.ID_FIELD_NAME, title), 
      Updates.pullAll(TAGS_FIELD, tags));
    return result.getModifiedCount() == 1;
}

We used the addEachToSet method instead of push for the addition so that if the tags are already there, we won’t add them again.

Notice also that the addToSet operator wouldn’t work either, since it would add the new tags as a nested array, which is not what we want.
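
To illustrate, here’s a hypothetical misuse: passing the whole list as a single value treats it as one element, producing a nested array:

// Hypothetical misuse: the list is treated as a single value,
// yielding e.g. { "tags" : ["Java", ["JUnit 5", "TDD"]] }
collection.updateOne(
  new BasicDBObject(DBCollection.ID_FIELD_NAME, title),
  Updates.addToSet(TAGS_FIELD, tags));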

Another way we can perform our updates is through the Mongo shell. For instance, let’s update the post JUnit 5 with Java. In particular, we want to add the tags Java and JUnit5 and remove the tags Spring and REST:

db.posts.updateOne(
    { _id : "JUnit 5 with Java" }, 
    { $addToSet : 
        { "tags" : 
            { $each : ["Java", "JUnit5"] }
        }
});

db.posts.updateOne(
    {_id : "JUnit 5 with Java" },
    { $pull : 
        { "tags" : { $in : ["Spring", "REST"] }
    }
});

5. Queries

Last but not least, let’s go through some of the most common queries we may be interested in while working with tags. For this purpose, we’ll take advantage of three array operators in particular:

  • $in – returns the documents where a field contains any value of the specified array
  • $nin – returns the documents where a field doesn’t contain any value of the specified array
  • $all – returns the documents where a field contains all the values of the specified array

We’ll define three methods to query the posts in relation to a collection of tags passed as arguments. They will return the posts that match at least one tag, all the tags, and none of the tags, respectively. We’ll also create a mapping method to handle the conversion between a document and our model using Java 8’s Stream API:

public List<Post> postsWithAtLeastOneTag(String... tags) {
    FindIterable<Document> results = collection
      .find(Filters.in(TAGS_FIELD, tags));
    return StreamSupport.stream(results.spliterator(), false)
      .map(TagRepository::documentToPost)
      .collect(Collectors.toList());
}

public List<Post> postsWithAllTags(String... tags) {
    FindIterable<Document> results = collection
      .find(Filters.all(TAGS_FIELD, tags));
    return StreamSupport.stream(results.spliterator(), false)
      .map(TagRepository::documentToPost)
      .collect(Collectors.toList());
}

public List<Post> postsWithoutTags(String... tags) {
    FindIterable<Document> results = collection
      .find(Filters.nin(TAGS_FIELD, tags));
    return StreamSupport.stream(results.spliterator(), false)
      .map(TagRepository::documentToPost)
      .collect(Collectors.toList());
}

private static Post documentToPost(Document document) {
    Post post = new Post();
    post.setTitle(document.getString(DBCollection.ID_FIELD_NAME));
    post.setAuthor(document.getString("author"));
    post.setTags((List<String>) document.get(TAGS_FIELD));
    return post;
}

Again, let’s also take a look at the equivalent shell queries. We’ll fetch three different sets of posts: those tagged with MongoDB or Stream API, those tagged with both Java 8 and JUnit 5, and those tagged with neither Groovy nor Scala:

db.posts.find({
    "tags" : { $in : ["MongoDB", "Stream API" ] } 
});

db.posts.find({
    "tags" : { $all : ["Java 8", "JUnit 5" ] } 
});

db.posts.find({
    "tags" : { $nin : ["Groovy", "Scala" ] } 
});

6. Conclusion

In this article, we showed how to build a tagging mechanism. Of course, we can reuse and adapt this same methodology for purposes other than a blog.

If you are interested further in learning MongoDB, we encourage you to read this introductory article.

As always, all the code in the example is available over on the GitHub project.


Hamcrest Object Matchers

1. Overview

Hamcrest provides static matchers for making unit test assertions simpler and more legible. You can get started exploring some of the available matchers here.

In this quick tutorial, we’ll dive deeper into object matchers.

2. Setup

To get Hamcrest, we just need to add the following Maven dependency to our pom.xml:

<dependency>
    <groupId>org.hamcrest</groupId>
    <artifactId>java-hamcrest</artifactId>
    <version>2.0.0.0</version>
    <scope>test</scope>
</dependency>

The latest Hamcrest version can be found over on Maven Central.

3. Object Matchers

Object matchers are meant to perform checks over an object’s properties.

Before looking into the matchers, we’ll create a couple of beans to make the examples simple to understand.

Our first object is called Location and has no properties:

public class Location {}

We’ll name our second bean City and add the following implementation to it:

public class City extends Location {
    
    String name;
    String state;

    // standard constructor, getters and setters

    @Override
    public String toString() {
        if (this.name == null && this.state == null) {
            return null;
        }
        StringBuilder sb = new StringBuilder();
        sb.append("[");
        sb.append("Name: ");
        sb.append(this.name);
        sb.append(", ");
        sb.append("State: ");
        sb.append(this.state);
        sb.append("]");
        return sb.toString();
    }
}

Note that City extends Location. We’ll make use of that later. Now, let’s start with the object matchers!

3.1. hasToString

As the name says, the hasToString method verifies that a certain object has a toString method that returns a specific String:

@Test
public void givenACity_whenHasToString_thenCorrect() {
    City city = new City("San Francisco", "CA");
    
    assertThat(city, hasToString("[Name: San Francisco, State: CA]"));
}

So, we’re creating a City and verifying that its toString method returns the String that we want. We can take this one step further and instead of checking for equality, check for some other condition:

@Test
public void givenACity_whenHasToStringEqualToIgnoringCase_thenCorrect() {
    City city = new City("San Francisco", "CA");

    assertThat(city, hasToString(
      equalToIgnoringCase("[NAME: SAN FRANCISCO, STATE: CA]")));
}

As we can see, hasToString is overloaded and can receive either a String or a text matcher as a parameter. So, we can also do things like:

@Test
public void givenACity_whenHasToStringEmptyOrNullString_thenCorrect() {
    City city = new City(null, null);
    
    assertThat(city, hasToString(emptyOrNullString()));
}

You can find more information on text matchers here. Now let’s move to the next object matcher.

3.2. typeCompatibleWith

This matcher represents an is-a relationship. This is where our Location superclass comes into play:

@Test
public void givenACity_whenTypeCompatibleWithLocation_thenCorrect() {
    City city = new City("San Francisco", "CA");

    assertThat(city.getClass(), is(typeCompatibleWith(Location.class)));
}

This is saying that City is-a Location, which is true, so this test should pass. Also, if we wanted to test the negative case:

@Test
public void givenACity_whenTypeNotCompatibleWithString_thenCorrect() {
    City city = new City("San Francisco", "CA");

    assertThat(city.getClass(), is(not(typeCompatibleWith(String.class))));
}

Of course, our City class is not a String.

Finally, note that all Java objects should pass the following test:

@Test
public void givenACity_whenTypeCompatibleWithObject_thenCorrect() {
    City city = new City("San Francisco", "CA");

    assertThat(city.getClass(), is(typeCompatibleWith(Object.class)));
}

Please remember that the is matcher consists of a wrapper over another matcher, with the purpose of making the whole assertion more readable.
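
For instance, these two assertions are equivalent; is() simply decorates the inner matcher for readability:

assertThat(city.getClass(), is(typeCompatibleWith(Location.class)));
assertThat(city.getClass(), typeCompatibleWith(Location.class));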

4. Conclusion

Hamcrest provides a simple and clean way of creating assertions. There is a wide variety of matchers that make every developer’s life simpler as well as every project more readable.

And object matchers are definitely a straightforward way of checking class properties. 

As always, you’ll find the full implementation over on the GitHub project.

Introduction to CheckStyle

1. Overview

Checkstyle is an open source tool that checks code against a configurable set of rules.

In this tutorial, we’re going to look at how to integrate Checkstyle into a Java project via Maven and by using IDE plugins.

The plugins mentioned in the sections below aren’t dependent on each other and can be integrated individually into our build or IDEs. For example, the Maven plugin isn’t needed in our pom.xml to run the validations in our Eclipse IDE.

2. Checkstyle Maven Plugin

2.1. Maven Configuration

To add Checkstyle to a project, we need to add the plugin in the reporting section of a pom.xml:

<reporting>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-checkstyle-plugin</artifactId>
            <version>3.0.0</version>
            <configuration>
                <configLocation>checkstyle.xml</configLocation>
            </configuration>
        </plugin>
    </plugins>
</reporting>

This plugin comes with two predefined checks: a Sun-style check and a Google-style check. The default check for a project is sun_checks.xml.

To use our custom configuration, we can specify our configuration file as shown in the sample above. Using this config, the plugin will now read our custom configuration instead of the default one provided.
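
For example, to use the bundled Google-style check instead, we can point configLocation at the packaged file (a sketch; this file ships inside the plugin, so no extra download is needed):

<configuration>
    <configLocation>google_checks.xml</configLocation>
</configuration>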

The latest version of the plugin can be found on Maven Central.

2.2. Report Generation

Now that our Maven plugin is configured, we can generate a report for our code by running the mvn site command. Once the build finishes, the report is available in the target/site folder under the name checkstyle.html.

There are three major parts to a Checkstyle report:

Files: This section of the report provides us with the list of files in which the violations have happened. It also shows us the counts of the violations against their severity levels. Here is how the files section of the report looks:

Rules: This part of the report gives us an overview of the rules that were used to check for violations. It shows the category of the rules, the number of violations and the severity of those violations. Here is a sample of the report that shows the rules section:

Details: Finally, the details section of the report provides us the details of the violations that have happened. The details provided are at line number level. Here is a sample details section of the report:

2.3. Build Integration

If there’s a need to have stringent checks on the coding style, we can configure the plugin in such a way that the build fails when the code doesn’t adhere to the standards.

We do this by adding an execution goal to our plugin definition:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-checkstyle-plugin</artifactId>
    <version>${checkstyle-maven-plugin.version}</version>
    <configuration>
        <configLocation>checkstyle.xml</configLocation>
    </configuration>
    <executions>
        <execution>
            <goals>
                <goal>check</goal>
            </goals>
        </execution>
    </executions>
</plugin>

The configLocation attribute defines which configuration file to refer to for the validations.

In our case, the config file is checkstyle.xml. The goal check mentioned in the execution section asks the plugin to run in the verify phase of the build and forces a build failure when a violation of coding standards occurs.

Now, if we run the mvn clean install command, it will scan the files for violations and the build will fail if any violations are found.
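
We can also trigger the validation on its own, without running a full build, by invoking the plugin’s goal directly:

mvn checkstyle:check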

3. Eclipse Plugin

3.1. Configurations

Just like with the Maven integration, Eclipse enables us to use our custom configuration.

To import our configuration, go to Window -> Preferences -> Checkstyle. In the Global Check Configurations section, click on New.

This will open up a dialog that provides options to specify our custom configuration file.

3.2. Reports Browsing

Now that our plugin is configured we can use it to analyze our code.

To check coding style for a project, right-click the project in the Eclipse Project Explorer and select CheckStyle -> Check Code with Checkstyle.

The plugin will give us feedback on our Java code within the Eclipse text editor. It will also generate a violation report for the project, which is available as a view in Eclipse.

To view the violation report, go to Window -> Show View -> Other, and search for Checkstyle. Options for Violations and Violations Chart should be displayed.

Selecting either option will give us a representation of violations grouped by type. Here is the violation pie chart for a sample project:

Clicking on a section of the pie chart would take us to the list of actual violations in the code.

Alternatively, we can open the Problems view of the Eclipse IDE and check the problems reported by the plugin.

Here is a sample Problems view of the Eclipse IDE:

Clicking on any of the warnings will take us to the code where the violation has happened.

4. IntelliJ IDEA Plugin

4.1. Configuration

Like Eclipse, IntelliJ IDEA also enables us to use our own custom configurations with a project.

In the IDE, open Settings and search for Checkstyle. A window with the option to select our checks is shown. Click the + button, and a window will open that lets us specify the location of the file to be used.

Now, we select a configuration XML file and click Next. This will open up the previous window and show our newly added custom configuration option. We select the new configuration and click on OK to start using it in our project.

4.2. Reports Browsing

Now that our plugin is configured, let’s use it to check for violations. To check for violations of a particular project, go to Analyze -> Inspect Code.

The Inspections Results will give us a view of the violations under the Checkstyle section. Here is a sample report:

Clicking on the violations will take us to the exact lines on the file where the violations have happened.

5. Custom Checkstyle Configuration

In the Maven report generation section (Section 2.2), we used a custom configuration file to perform our own coding standard checks.

We have a way to create our own custom configuration XML file if we don’t want to use the packaged Google or Sun checks.

Here is the custom configuration file used for the above checks:

<!DOCTYPE module PUBLIC
  "-//Puppy Crawl//DTD Check Configuration 1.3//EN"
  "http://www.puppycrawl.com/dtds/configuration_1_3.dtd">
<module name="Checker">
    <module name="TreeWalker">
        <module name="AvoidStarImport">
            <property name="severity" value="warning" />
        </module>
    </module>
</module>

5.1. DOCTYPE Definition 

The first line of the file, i.e., the DOCTYPE definition, is an important part of it; it tells the system where to download the DTD from so that the configuration can be understood.

If we don’t include this definition, our configuration file won’t be valid.

5.2. Modules

A config file is primarily composed of modules. A module has a name attribute that represents what the module does. The value of the name attribute corresponds to a class in the plugin’s code, which is executed when the plugin is run.

Let’s learn about the different modules present in the config above.

5.3. Module Details

  • Checker: Modules are structured in a tree that has the Checker module at the root. This module defines the properties that are inherited by all other modules of the configuration.
  • TreeWalker: This module checks the individual Java source files and defines properties that are applicable to checking such files.
  • AvoidStarImport: This module sets a standard for not using star imports in our Java code. It also has a property that asks the plugin to report the severity of such issues as a warning. Thus, whenever such violations are found in the code, a warning will be flagged against them. We’ll extend this configuration in the sketch below.
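
To extend the configuration, we simply nest more modules. As a sketch, here’s the same file with a line-length rule added; note that the placement of individual modules can vary between Checkstyle versions, so it’s worth checking the documentation for the version in use:

<module name="Checker">
    <module name="TreeWalker">
        <module name="AvoidStarImport">
            <property name="severity" value="warning" />
        </module>
        <module name="LineLength">
            <property name="max" value="120" />
        </module>
    </module>
</module>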

To read more about custom configurations follow this link.

6. Report Analysis for the Spring-Rest Project

In this section, we’re going to shed some light on an analysis done by Checkstyle, using the custom configuration created in section 5 above, on the spring-rest project available on GitHub as an example.

6.1. Violation Report Generation

We’ve imported the configuration to Eclipse IDE and here is the violation report that is generated for the project:

The warnings reported here say that wildcard imports should be avoided in the code. We have two files that don’t comply with this standard. When we click on a warning, it takes us to the Java file that has the violation.

Here is how the HeavyResourceController.java file shows the warning reported:

6.2. Issue Resolution

Using star imports is not a good practice in general, as it can create conflicts when two or more packages contain the same class.

As an example, consider the class List, which is available in both the java.util and java.awt packages. If we import both java.util.* and java.awt.* and then reference List, the compiler will fail to compile the code, as the reference is ambiguous.

To resolve the issue mentioned above, we organize the imports in both files and save them. Now, when we run the plugin again, we don’t see the violations, and our code follows the standards set in our custom configuration.

7. Conclusion

In this article, we’ve covered basics for integrating Checkstyle in our Java project.

We’ve learned that it is a simple yet powerful tool that’s used to make sure that developers adhere to the coding standards set by the organization.

The sample code we used for static analysis is available over on GitHub.

The “final” Keyword in Java

1. Overview

While inheritance enables us to reuse existing code, sometimes we do need to set limitations on extensibility for various reasons; the final keyword allows us to do exactly that.

In this tutorial, we’ll take a look at what the final keyword means for classes, methods, and variables.

2. Final Classes

Classes marked as final can’t be extended. If we look at the code of Java core libraries, we’ll find many final classes there. One example is the String class.

Consider what would happen if we could extend the String class, override any of its methods, and substitute all the String instances with instances of our specific String subclass.

The results of operations over String objects would then become unpredictable. And given that the String class is used everywhere, that’s unacceptable. That’s why the String class is marked as final.

Any attempt to inherit from a final class will cause a compiler error. To demonstrate this, let’s create the final class Cat:

public final class Cat {

    private int weight;

    // standard getter and setter
}

And let’s try to extend it:

public class BlackCat extends Cat {
}

We’ll see the compiler error:

The type BlackCat cannot subclass the final class Cat

Note that the final keyword in a class declaration doesn’t mean that the objects of this class are immutable. We can change the fields of a Cat object freely:

Cat cat = new Cat();
cat.setWeight(1);

assertEquals(1, cat.getWeight());

We just can’t extend it.

If we follow the rules of good design strictly, we should create and document a class carefully or declare it final for safety reasons. However, we should use caution when creating final classes.

Notice that making a class final means that no other programmer can improve it. Imagine that we’re using a class and don’t have the source code for it, and there’s a problem with one method.

If the class is final, we can’t extend it to override the method and fix the problem. In other words, we lose extensibility, one of the benefits of object-oriented programming.

3. Final Methods

Methods marked as final cannot be overridden. When we design a class and feel that a method shouldn’t be overridden, we can make this method final. We can also find many final methods in Java core libraries.

Sometimes we don’t need to prohibit a class extension entirely, but only prevent overriding of some methods. A good example of this is the Thread class. It’s legal to extend it and thus create a custom thread class. But its isAlive() method is final.

This method checks if a thread is alive. It’s impossible to override the isAlive() method correctly for many reasons. One of them is that this method is native. Native code is implemented in another programming language and is often specific to the operating system and hardware it’s running on.

Let’s create a Dog class and make its sound() method final:

public class Dog {
    public final void sound() {
        // ...
    }
}

Now let’s extend the Dog class and try to override its sound() method:

public class BlackDog extends Dog {
    public void sound() {
    }
}

We’ll see the compiler error:

Cannot override the final method from Dog

If some methods of our class are called by other methods, we should consider making the called methods final. Otherwise, overriding them can affect the work of callers and cause surprising results.

If our constructor calls other methods, we should generally declare these methods final for the above reason.
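
To see why, consider this sketch (the class names are made up): the base class constructor calls an overridable method, and the override runs before the subclass’s fields are initialized:

public class Parent {
    public Parent() {
        init(); // the subclass override runs before Child's fields exist
    }

    protected void init() {
    }
}

public class Child extends Parent {
    private String name = "child";

    @Override
    protected void init() {
        // prints "null", not "child": the field initializer hasn't run yet
        System.out.println(name);
    }
}

Declaring init() as final in Parent would rule this out at compile time.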

What’s the difference between making all methods of the class final and marking the class itself final? In the first case, we can extend the class and add new methods to it.

In the second case, we can’t do this.

4. Final Variables

Variables marked as final can’t be reassigned. Once a final variable is initialized, it can’t be altered.

4.1. Final Primitive Variables

Let’s declare a primitive final variable i, then assign 1 to it.

And let’s try to assign a value of 2 to it:

public void whenFinalVariableAssign_thenOnlyOnce() {
    final int i = 1;
    //...
    i = 2;
}

The compiler says:

The final local variable i may already have been assigned

4.2. Final Reference Variables

If we have a final reference variable, we can’t reassign it either. But this doesn’t mean that the object it refers to is immutable. We can change the properties of this object freely.

To demonstrate this, let’s declare the final reference variable cat and initialize it:

final Cat cat = new Cat();

If we try to reassign it we’ll see a compiler error:

The final local variable cat cannot be assigned. It must be blank and not using a compound assignment

But we can change the properties of the Cat instance:

cat.setWeight(5);

assertEquals(5, cat.getWeight());

4.3. Final Fields

Final fields can be either constants or write-once fields. To distinguish them, we should ask a question: would we include this field if we were to serialize the object? If not, then it’s not part of the object, but a constant.

Note that according to naming conventions, class constants should be uppercase, with components separated by underscore (“_”) characters:

static final int MAX_WIDTH = 999;

Note that any final field must be initialized before the constructor completes.

For static final fields, this means that we can initialize them:

  • upon declaration as shown in the above example
  • in the static initializer block

For instance final fields, this means that we can initialize them:

  • upon declaration
  • in the instance initializer block
  • in the constructor

Otherwise, the compiler will give us an error.
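
Here’s a compact sketch showing the allowed initialization points for instance final fields:

public class Circle {
    private final String unit = "cm"; // upon declaration
    private final double area;        // in the instance initializer block
    private final double radius;      // in the constructor

    {
        area = 0.0;
    }

    public Circle(double radius) {
        this.radius = radius;
    }
}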

4.4. Final Arguments

It’s also legal to put the final keyword before method arguments. A final argument can’t be changed inside a method:

public void methodWithFinalArguments(final int x) {
    x = 1;
}

The above assignment causes the compiler error:

The final local variable x cannot be assigned. It must be blank and not using a compound assignment

5. Conclusion

In this article, we learned what the final keyword means for classes, methods, and variables. Although we may not use the final keyword often in our internal code, it may be a good design solution.

As always, the complete code for this article can be found in the GitHub project.

Headers, Cookies and Parameters with REST-assured

1. Overview

In this quick tutorial, we’ll explore some advanced REST-assured scenarios. We explored REST-assured before, in our Guide to REST-assured tutorial.

To continue, we’ll cover examples that show how to set headers, cookies, and parameters for our requests.

The setup is the same as in the previous article, so let’s dive into our examples.

2. Setting Parameters

Now, let’s discuss how to specify different parameters to our request – starting with path parameters.

2.1. Path Parameters

We can use pathParam(parameter-name, value) to specify a path parameter:

@Test
public void whenUsePathParam_thenOK() {
    given().pathParam("user", "eugenp")
      .when().get("/users/{user}/repos")
      .then().statusCode(200);
}

To add multiple path parameters we’ll use the pathParams() method:

@Test
public void whenUseMultiplePathParam_thenOK() {
    given().pathParams("owner", "eugenp", "repo", "tutorials")
      .when().get("/repos/{owner}/{repo}")
      .then().statusCode(200);

    given().pathParams("owner", "eugenp")
      .when().get("/repos/{owner}/{repo}","tutorials")
      .then().statusCode(200);
}

In this example, we’ve used named path parameters, but we can also add unnamed parameters, and even combine the two:

given().pathParams("owner", "eugenp")
  .when().get("/repos/{owner}/{repo}", "tutorials")
  .then().statusCode(200);

The resulting URL, in this case, is https://api.github.com/repos/eugenp/tutorials.

Note that the unnamed parameters are index-based.

2.2. Query Parameters

Next, let’s see how we can specify query parameters using queryParam():

@Test
public void whenUseQueryParam_thenOK() {
    given().queryParam("q", "john").when().get("/search/users")
      .then().statusCode(200);

    given().param("q", "john").when().get("/search/users")
      .then().statusCode(200);
}

The param() method will act like queryParam() with GET requests.

For adding multiple query parameters, we can either chain several queryParam() methods, or add the parameters to a queryParams() method:

@Test
public void whenUseMultipleQueryParam_thenOK() {
 
    int perPage = 20;
    given().queryParam("q", "john").queryParam("per_page",perPage)
      .when().get("/search/users")
      .then().body("items.size()", is(perPage));   
     
    given().queryParams("q", "john","per_page",perPage)
      .when().get("/search/users")
      .then().body("items.size()", is(perPage));
}

2.3. Form Parameters

Finally, we can specify form parameters using formParam():

@Test
public void whenUseFormParam_thenSuccess() {
 
    given().formParams("username", "john","password","1234").post("/");

    given().params("username", "john","password","1234").post("/");
}

The param() method will act like formParam() for POST requests.

Also note that formParam() adds a Content-Type header with the value “application/x-www-form-urlencoded“.

3. Setting Headers

Next, we can customize our request headers using header():

@Test
public void whenUseCustomHeader_thenOK() {
 
    given().header("User-Agent", "MyAppName").when().get("/users/eugenp")
      .then().statusCode(200);
}

In this example, we’ve used header() to set the User-Agent header.

We can also add a header with multiple values using the same method:

@Test
public void whenUseMultipleHeaderValues_thenOK() {
 
    given().header("My-Header", "val1", "val2")
      .when().get("/users/eugenp")
      .then().statusCode(200);
}

In this example, we’ll have a request with two headers: My-Header:val1 and My-Header:val2.

For adding multiple headers, we’ll use the headers() method:

@Test
public void whenUseMultipleHeaders_thenOK() {
 
    given().header("User-Agent", "MyAppName", "Accept-Charset", "utf-8")
      .when().get("/users/eugenp")
      .then().statusCode(200);
}

4. Adding Cookies

We can also specify a custom cookie for our request using cookie():

@Test
public void whenUseCookie_thenOK() {
 
    given().cookie("session_id", "1234").when().get("/users/eugenp")
      .then().statusCode(200);
}

We can also customize our cookie using the Cookie.Builder class:

@Test
public void whenUseCookieBuilder_thenOK() {
    Cookie myCookie = new Cookie.Builder("session_id", "1234")
      .setSecured(true)
      .setComment("session id cookie")
      .build();

    given().cookie(myCookie)
      .when().get("/users/eugenp")
      .then().statusCode(200);
}

5. Conclusion

In this article, we’ve shown how we can specify request parameters, headers, and cookies when using REST-assured.

And, as always, the full source code for the examples is available over on GitHub.

JSON Schema Validation with REST-assured

1. Overview

The REST-assured library provides support for testing REST APIs, usually in JSON format.

From time to time, it may be desirable to know up front, without analyzing the response in detail, whether the JSON body conforms to a certain JSON format.

In this quick tutorial, we’ll take a look at how we can validate a JSON response based on a predefined JSON schema.

2. Setup

The initial REST-assured setup is the same as in our previous article.

In addition, we also need to include the json-schema-validator module in the pom.xml file:

<dependency>
    <groupId>io.rest-assured</groupId>
    <artifactId>json-schema-validator</artifactId>
    <version>3.0.0</version>
</dependency>

To ensure you have the latest version, follow this link.

We also need another library with the same name but a different author and functionality. It’s not a module from REST-assured; rather, it’s used under the hood by the json-schema-validator module to perform validation:

<dependency>
    <groupId>com.github.fge</groupId>
    <artifactId>json-schema-validator</artifactId>
    <version>2.2.6</version>
</dependency>

Its latest version can be found here.

The library, json-schema-validator, may also need the json-schema-core dependency:

<dependency>
    <groupId>com.github.fge</groupId>
    <artifactId>json-schema-core</artifactId>
    <version>1.2.5</version>
</dependency>

And the latest version is always found here.

3. JSON Schema Validation

Let’s have a look at an example.

As a JSON schema, we’ll use a JSON saved in a file called event_0.json, which is present in the classpath:

{
    "id": "390",
    "data": {
        "leagueId": 35,
        "homeTeam": "Norway",
        "visitingTeam": "England",
    },
    "odds": [{
        "price": "1.30",
        "name": "1"
    },
    {
        "price": "5.25",
        "name": "X"
    }]
}

Then, assuming that this is the general format followed by all data returned by our REST API, we can check a JSON response for conformance like so:

@Test
public void givenUrl_whenJsonResponseConformsToSchema_thenCorrect() {
    get("/events?id=390").then().assertThat()
      .body(matchesJsonSchemaInClasspath("event_0.json"));
}

Notice that we statically import matchesJsonSchemaInClasspath from io.restassured.module.jsv.JsonSchemaValidator.

4. JSON Schema Validation Settings

4.1. Validate a Response

The json-schema-validator module of REST-assured gives us the power to perform fine-grained validation by defining our own custom configuration rules.

Say we want our validation to always use the JSON schema version 4:

@Test
public void givenUrl_whenValidatesResponseWithInstanceSettings_thenCorrect() {
    JsonSchemaFactory jsonSchemaFactory = JsonSchemaFactory.newBuilder()
      .setValidationConfiguration(
        ValidationConfiguration.newBuilder()
          .setDefaultVersion(SchemaVersion.DRAFTV4).freeze())
            .freeze();
    get("/events?id=390").then().assertThat()
      .body(matchesJsonSchemaInClasspath("event_0.json")
        .using(jsonSchemaFactory));
}

We do this by using a JsonSchemaFactory to specify version 4 of the schema, and then using that factory when the body is validated.

4.2. Check Validations

By default, the json-schema-validator runs checked validations on the JSON response String. This means that if the schema defines odds as an array as in the following JSON:

{
    "odds": [{
        "price": "1.30",
        "name": "1"
    },
    {
        "price": "5.25",
        "name": "X"
    }]
}

then the validator will always expect an array as the value for odds; hence, a response where odds is a String will fail validation. So, if we would like to be less strict with our responses, we can add a custom rule during validation by first making the following static import:

import static io.restassured.module.jsv.JsonSchemaValidatorSettings.settings;

then execute the test with the validation check set to false:

@Test
public void givenUrl_whenValidatesResponseWithStaticSettings_thenCorrect() {
    get("/events?id=390").then().assertThat().body(matchesJsonSchemaInClasspath
      ("event_0.json").using(settings().with().checkedValidation(false)));
}

4.3. Global Validation Configuration

These customizations are very flexible, but with a large number of tests, we would have to define a validation for each test, which is cumbersome and not very maintainable.

To avoid this, we have the freedom to define our configuration just once and let it apply to all tests.

We’ll configure the validation to be unchecked and to always use it against JSON schema version 3:

JsonSchemaFactory factory = JsonSchemaFactory.newBuilder()
  .setValidationConfiguration(
   ValidationConfiguration.newBuilder()
    .setDefaultVersion(SchemaVersion.DRAFTV3)
      .freeze()).freeze();
JsonSchemaValidator.settings = settings()
  .with().jsonSchemaFactory(factory)
      .and().with().checkedValidation(false);

Then, to remove this configuration, we call the reset method:

JsonSchemaValidator.reset();

5. Conclusion

In this article, we’ve shown how we can validate a JSON response against a schema when using REST-assured.

As always, the full source code for the example is available over on GitHub.

Handling Daylight Savings Time in Java

1. Overview

Daylight Saving Time, or DST, is the practice of advancing clocks during the summer months in order to leverage an additional hour of natural light (saving heating power, lighting power, enhancing the mood, and so on).

It’s used by several countries and needs to be taken into account when working with dates and timestamps.

In this tutorial, we’ll see how to correctly handle DST in Java according to different locations.

2. JRE and DST Mutability

First, it’s extremely important to understand that worldwide DST zones change very often and there’s no central authority coordinating it.

A country, or in some cases even a city, can decide if and how to apply or revoke it.

Every time it happens, the change is recorded in the IANA Time Zone Database, and the update will be rolled out in a future release of the JRE.

In case it’s not possible to wait, we can force the modified time zone data containing the new DST settings into the JRE through an official Oracle tool called the Java Time Zone Updater Tool, available on the Java SE download page.

3. The Wrong Way: Three-Letter Timezone ID

Back in the JDK 1.1 days, the API allowed three-letter time zone IDs, but this led to several problems.

First, this was because the same three-letter ID could refer to multiple time zones. For example, CST could be U.S. “Central Standard Time”, but also “China Standard Time”.  The Java platform could then only recognize one of them.

Another issue was that standard time zones never take Daylight Saving Time into account. Multiple areas/regions/cities can have their local DST inside the same standard time zone, so the standard time doesn’t observe it.

Due to backward compatibility, it’s still possible to instantiate a java.util.TimeZone with a three-letter ID. However, this method is deprecated and shouldn’t be used anymore.
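
For instance, both of these calls compile, but only the second is unambiguous (which mapping the JDK picks for a three-letter ID is an implementation detail):

// Legacy and ambiguous: is this U.S. Central or China Standard Time?
TimeZone legacy = TimeZone.getTimeZone("CST");

// Preferred: an explicit TZDB identifier
TimeZone central = TimeZone.getTimeZone("America/Chicago");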

4. The Right Way: TZDB Timezone ID

The right way to handle DST in Java is to instantiate a TimeZone with a specific TZDB time zone ID, e.g., “Europe/Rome”.

Then, we’ll use this in conjunction with time-specific classes like java.util.Calendar to get a proper configuration of the TimeZone’s raw offset (relative to the GMT time zone) and automatic DST shift adjustments.

Let’s see how the shift from GMT+1 to GMT+2 (which happens in Italy on March 25, 2018, at 02:00 am) is automatically handled when using the right TimeZone:

TimeZone tz = TimeZone.getTimeZone("Europe/Rome");
TimeZone.setDefault(tz);
Calendar cal = Calendar.getInstance(tz, Locale.ITALIAN);
DateFormat df = new SimpleDateFormat("yyyy-MM-dd HH:mm", Locale.ITALIAN);
Date dateBeforeDST = df.parse("2018-03-25 01:55");
cal.setTime(dateBeforeDST);
 
assertThat(cal.get(Calendar.ZONE_OFFSET)).isEqualTo(3600000);
assertThat(cal.get(Calendar.DST_OFFSET)).isEqualTo(0);

As we can see, ZONE_OFFSET is 60 minutes (because Italy is GMT+1) while DST_OFFSET is 0 at that time.

Let’s add ten minutes to the Calendar:

cal.add(Calendar.MINUTE, 10);

Now DST_OFFSET has become 60 minutes too, and the country has transitioned its local time from CET (Central European Time) to CEST (Central European Summer Time) which is GMT+2:

Date dateAfterDST = cal.getTime();
 
assertThat(cal.get(Calendar.DST_OFFSET))
  .isEqualTo(3600000);
assertThat(dateAfterDST)
  .isEqualTo(df.parse("2018-03-25 03:05"));

If we display the two dates in the console, we’ll see the time zone change as well:

Before DST (00:55 UTC - 01:55 GMT+1) = Sun Mar 25 01:55:00 CET 2018
After DST (01:05 UTC - 03:05 GMT+2) = Sun Mar 25 03:05:00 CEST 2018

As a final test, we can measure the distance between the two Dates, 1:55 and 3:05:

Long deltaBetweenDatesInMillis = dateAfterDST.getTime() - dateBeforeDST.getTime();
Long tenMinutesInMillis = (1000L * 60 * 10);
 
assertThat(deltaBetweenDatesInMillis)
  .isEqualTo(tenMinutesInMillis);

As we’d expect, the distance is 10 minutes instead of 70.

We’ve seen how to avoid falling into the common pitfalls that we can encounter when working with Date through the correct usage of TimeZone and Locale.

5. The Best Way: Java 8 Date/Time API

Working with these thread-unsafe and not always user-friendly java.util classes has always been tough, especially due to compatibility concerns that prevented them from being properly refactored.

For this reason, Java 8 introduced a brand new package, java.time, and a whole new API set, the Date/Time API. This is ISO-centric, fully thread-safe and heavily inspired by the famous library Joda-Time.

Let’s take a closer look at these new classes, starting with the successor of java.util.Date: java.time.LocalDateTime:

LocalDateTime localDateTimeBeforeDST = LocalDateTime
  .of(2018, 3, 25, 1, 55);
 
assertThat(localDateTimeBeforeDST.toString())
  .isEqualTo("2018-03-25T01:55");

We can observe how a LocalDateTime conforms to the ISO 8601 profile, a standard and widely adopted date-time notation.

It’s completely unaware of zones and offsets, though, which is why we need to convert it into a fully DST-aware java.time.ZonedDateTime:

ZoneId italianZoneId = ZoneId.of("Europe/Rome");
ZonedDateTime zonedDateTimeBeforeDST = localDateTimeBeforeDST
  .atZone(italianZoneId);
 
assertThat(zonedDateTimeBeforeDST.toString())
  .isEqualTo("2018-03-25T01:55+01:00[Europe/Rome]"); 

As we can see, now the date incorporates two fundamental trailing pieces of information: +01:00 is the ZoneOffset, while [Europe/Rome] is the ZoneId.

Like in the previous example, let’s trigger DST through the addition of ten minutes:

ZonedDateTime zonedDateTimeAfterDST = zonedDateTimeBeforeDST
  .plus(10, ChronoUnit.MINUTES);
 
assertThat(zonedDateTimeAfterDST.toString())
  .isEqualTo("2018-03-25T03:05+02:00[Europe/Rome]");

Again, we see how both the time and the zone offset shift forward while still keeping the same distance:

Long deltaBetweenDatesInMinutes = ChronoUnit.MINUTES
  .between(zonedDateTimeBeforeDST,zonedDateTimeAfterDST);
assertThat(deltaBetweenDatesInMinutes)
  .isEqualTo(10);

6. Conclusion

We’ve seen what Daylight Saving Time is and how to handle it through some practical examples in different versions of Java core API.

When working with Java 8 and above, the usage of the new java.time package is encouraged thanks to the ease of use and to its standard, thread-safe nature.

As always, the full source code is available over on GitHub.

Combining Observables in RxJava

1. Introduction

In this quick tutorial, we’ll discuss different ways of combining Observables in RxJava.

If you’re new to RxJava, definitely check out this intro tutorial first.

Now, let’s jump right in.

2. Observables

Observable sequences, or simply Observables, are representations of asynchronous data streams.

These are based on the Observer pattern, wherein an object called an Observer subscribes to items emitted by an Observable.

The subscription is non-blocking as the Observer stands to react to whatever the Observable will emit in the future. This, in turn, facilitates concurrency.

Here’s a simple demonstration in RxJava:

Observable
  .from(new String[] { "John", "Doe" })
  .subscribe(name -> System.out.println("Hello " + name));

3. Combining Observables

When programming using a reactive framework, it’s a common use-case to combine various Observables.

In a web application, for example, we may need to get two sets of asynchronous data streams that are independent of each other.

Instead of waiting for the previous stream to complete before requesting the next stream, we can call both at the same time and subscribe to the combined streams.

In this section, we’ll discuss some of the different ways we can combine multiple Observables in RxJava and the different use-cases to which each method applies.

3.1. Merge

We can use the merge operator to combine the output of multiple Observables so that they act like one:

@Test
public void givenTwoObservables_whenMerged_shouldEmitCombinedResults() {
    TestSubscriber<String> testSubscriber = new TestSubscriber<>();

    Observable.merge(
      Observable.from(new String[] {"Hello", "World"}),
      Observable.from(new String[] {"I love", "RxJava"})
    ).subscribe(testSubscriber);

    testSubscriber.assertValues("Hello", "World", "I love", "RxJava");
}

3.2. MergeDelayError

The mergeDelayError method is the same as merge in that it combines multiple Observables into one, but if errors occur during the merge, it allows error-free items to continue before propagating the errors:

@Test
public void givenMutipleObservablesOneThrows_whenMerged_thenCombineBeforePropagatingError() {
    TestSubscriber<String> testSubscriber = new TestSubscriber<>();
        
    Observable.mergeDelayError(
      Observable.from(new String[] { "hello", "world" }),
      Observable.error(new RuntimeException("Some exception")),
      Observable.from(new String[] { "rxjava" })
    ).subscribe(testSubscriber);

    testSubscriber.assertValues("hello", "world", "rxjava");
    testSubscriber.assertError(RuntimeException.class);
}

The above example emits all the error-free values:

hello
world
rxjava

Note that if we use merge instead of mergeDelayError, the String “rxjava” won’t be emitted because merge immediately stops the flow of data from Observables when an error occurs.

3.3. Zip

The zip operator brings together two sequences of values as pairs:

@Test
public void givenTwoObservables_whenZipped_thenReturnCombinedResults() {
    List<String> zippedStrings = new ArrayList<>();

    Observable.zip(
      Observable.from(new String[] { "Simple", "Moderate", "Complex" }), 
      Observable.from(new String[] { "Solutions", "Success", "Hierarchy"}),
      (str1, str2) -> str1 + " " + str2).subscribe(zippedStrings::add);
        
    assertThat(zippedStrings).isNotEmpty();
    assertThat(zippedStrings.size()).isEqualTo(3);
    assertThat(zippedStrings).contains("Simple Solutions", "Moderate Success", "Complex Hierarchy");
}

3.4. Zip with Interval

In this example, we’ll zip a stream with an interval, which in effect delays the emission of elements of the first stream:

@Test
public void givenAStream_whenZippedWithInterval_shouldDelayStreamEmmission() {
    TestSubscriber<String> testSubscriber = new TestSubscriber<>();
        
    Observable<String> data = Observable.just("one", "two", "three", "four", "five");
    Observable<Long> interval = Observable.interval(1L, TimeUnit.SECONDS);
        
    Observable
      .zip(data, interval, (strData, tick) -> String.format("[%d]=%s", tick, strData))
      .toBlocking().subscribe(testSubscriber);
        
    testSubscriber.assertCompleted();
    testSubscriber.assertValueCount(5);
    testSubscriber.assertValues("[0]=one", "[1]=two", "[2]=three", "[3]=four", "[4]=five");
}

4. Summary

In this article, we’ve seen a few of the methods for combining Observables with RxJava. You can learn about other methods, like combineLatest, join, groupJoin, and switchOnNext, in the official RxJava documentation.

As always, the source code for this article is available in our GitHub repo.


The Spring @Controller and @RestController Annotations

1. Overview

In this quick tutorial, we’ll discuss the difference between @Controller and @RestController annotations in Spring MVC.

The first annotation is used for traditional Spring controllers and has been part of the framework for a very long time.

The @RestController annotation was introduced in Spring 4.0 to simplify the creation of RESTful web services. It’s a convenience annotation that combines @Controller and @ResponseBody – which eliminates the need to annotate every request handling method of the controller class with the @ResponseBody annotation.

2. Spring MVC @Controller

Classic controllers can be annotated with the @Controller annotation. This is simply a specialization of the @Component annotation, and it allows implementation classes to be autodetected through classpath scanning.

@Controller is typically used in combination with @RequestMapping annotations on request handling methods.

Let’s see a quick example of the Spring MVC controller:

@Controller
@RequestMapping("books")
public class SimpleBookController {

    @GetMapping(value = "/{id}", produces = "application/json")
    public @ResponseBody Book getBook(@PathVariable int id) {
        return findBookById(id);
    }

    private Book findBookById(int id) {
        // ...
    }
}

The request handling method is annotated with @ResponseBody. This annotation enables automatic serialization of the return object into the HttpResponse.

3. Spring MVC @RestController

@RestController is a specialized version of the controller. It combines the @Controller and @ResponseBody annotations and, as a result, simplifies the controller implementation:

@RestController
@RequestMapping("books-rest")
public class SimpleBookRestController {
    
    @GetMapping(value = "/{id}", produces = "application/json")
    public Book getBook(@PathVariable int id) {
        return findBookById(id);
    }

    private Book findBookById(int id) {
        // ...
    }
}

The controller is annotated with the @RestController annotation; therefore, the @ResponseBody isn’t required.

Every request handling method of the controller class automatically serializes return objects into HttpResponse.

4. Conclusion

In this article, we saw the classic and specialized REST controllers available in the Spring Framework.

The complete source code for the example is available in the GitHub project; this is a Maven project, so it can be imported and used as-is.

Command-Line Arguments in Spring Boot

1. Overview

In this quick tutorial, we’ll discuss how to pass command-line arguments to a Spring Boot application.

We can use command-line arguments to configure our application, override application properties or pass custom arguments.

2. Maven Command-Line Arguments

First, let’s see how we can pass arguments while running our application using the Maven plugin.

Later, we’ll see how to access the arguments in our code.

2.1. Spring Boot 1.x

For Spring Boot 1.x, we can pass the arguments to our application using -Drun.arguments:

mvn spring-boot:run -Drun.arguments=--customArgument=custom

We can also pass multiple parameters to our app:

mvn spring-boot:run -Drun.arguments=--spring.main.banner-mode=off,--customArgument=custom

Note that:

  • Arguments should be comma separated
  • Each argument should be prefixed with --
  • We can also pass configuration properties, like spring.main.banner-mode shown in the example above

2.2. Spring Boot 2.x

For Spring Boot 2.x, we can pass the arguments using -Dspring-boot.run.arguments:

mvn spring-boot:run -Dspring-boot.run.arguments=--spring.main.banner-mode=off,--customArgument=custom

3. Gradle Command-Line Arguments

Next, let’s discover how to pass arguments while running our application using the Gradle plugin.

We’ll need to configure our bootRun task in the build.gradle file:

bootRun {
    if (project.hasProperty('args')) {
        args project.args.split(',')
    }
}

Now, we can pass the command-line arguments as follows:

./gradlew bootRun -Pargs=--spring.main.banner-mode=off,--customArgument=custom

4. Overriding System Properties

Other than passing custom arguments, we can also override system properties.

For example, here’s our application.properties file:

server.port=8081
spring.application.name=SampleApp

To override the server.port value, we need to pass the new value in the following manner (for Spring Boot 1.x):

mvn spring-boot:run -Drun.arguments=--server.port=8085

Similarly for Spring Boot 2.x:

mvn spring-boot:run -Dspring-boot.run.arguments=--server.port=8085

Note that:

  • Spring Boot converts command-line arguments to properties and adds them to the Spring Environment
  • We can use short command-line arguments, --port=8085 instead of --server.port=8085, by using a placeholder in our application.properties:
    server.port=${port:8080}
  • Command-line arguments take precedence over application.properties values

If needed, we can stop our application from converting command-line arguments to properties:

@SpringBootApplication
public class Application extends SpringBootServletInitializer {
    public static void main(String[] args) {
        SpringApplication application = new SpringApplication(Application.class);
        application.setAddCommandLineProperties(false);
        application.run(args);
    }
}

5. Accessing Command-Line Arguments

Let’s see how we can access the command-line arguments from our application’s main() method:

@SpringBootApplication
public class Application extends SpringBootServletInitializer {
    public static void main(String[] args) {
        for(String arg:args) {
            System.out.println(arg);
        }
        SpringApplication.run(Application.class, args);
    }
}

This will print the arguments we passed to our application from the command line, but we could also use them later in our application.
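
Beyond the raw String array, Spring Boot also parses the arguments into an ApplicationArguments bean that we can inject wherever we need it; the component below is just a sketch:

@Component
public class ArgumentsInspector {

    public ArgumentsInspector(ApplicationArguments args) {
        // "--customArgument=custom" shows up as an option named "customArgument"
        System.out.println(args.getOptionValues("customArgument"));

        // arguments without the "--" prefix are non-option arguments
        System.out.println(args.getNonOptionArgs());
    }
}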

6. Conclusion

In this article, we learned how to pass arguments to our Spring Boot application from the command line, and how to do it using both Maven and Gradle.

We’ve also shown how you can access those arguments from your code, in order to configure your application.

Publish and Receive Messages with Nats Java Client

1. Overview

In this tutorial, we’ll use the Java client for NATS to connect to a NATS server and publish and receive messages.

NATS offers three primary modes of message exchange. Publish/Subscribe semantics delivers messages to all subscribers of a topic. Request/Reply messaging sends requests via topics and routes responses back to the requestor.

Subscribers can also join message queue groups when they subscribe to a topic. Messages sent to the associated topic are only delivered to one subscriber in the queue group.

2. Setup

2.1. Maven Dependency

First, we need to add the NATS library to our pom.xml:

<dependency>
    <groupId>io.nats</groupId>
    <artifactId>jnats</artifactId>
    <version>1.0</version>
</dependency>

The latest version of the library can be found here, and the GitHub project is here.

2.2. NATS Server

Second, we’ll need a NATS server for exchanging messages. There are instructions for all major platforms here.

We assume that there’s a server running on localhost:4222.

3. Connect and Exchange Messages

3.1. Connect to NATS

The connect() method in the Nats class creates Connections.

If we want a connection with default options, listening at localhost on port 4222, we can use the default method:

Connection natsConnection = Nats.connect();

But Connections have many configurable options, a few of which we want to override.

We’ll create an Options object and pass it to Nats:

private Connection initConnection() {
    Options options = new Options.Builder()
      .errorCb(ex -> log.error("Connection Exception: ", ex))
      .disconnectedCb(event -> log.error("Channel disconnected: {}", event.getConnection()))
      .reconnectedCb(event -> log.error("Reconnected to server: {}", event.getConnection()))
      .build();

    return Nats.connect(uri, options);
}

NATS Connections are durable. The API will attempt to reconnect a lost connection.

We’ve installed callbacks to notify us when a disconnect occurs and when the connection is restored. In this example, we’re using lambdas, but for applications that need to do more than simply log the event, we can install objects that implement the required interfaces.

We can run a quick test. Create a connection and add a sleep for 60 seconds to keep the process running:

Connection natsConnection = initConnection();
Thread.sleep(60000);

Run this. Then stop and start your NATS server:

[jnats-callbacks] ERROR com.baeldung.nats.NatsClient 
  - Channel disconnected: io.nats.client.ConnectionImpl@79428dc1
[reconnect] WARN io.nats.client.ConnectionImpl 
  - couldn't connect to nats://localhost:4222 (nats: connection read error)
[jnats-callbacks] ERROR com.baeldung.nats.NatsClient 
  - Reconnected to server: io.nats.client.ConnectionImpl@79428dc1

We can see the callbacks log the disconnection and reconnect.

3.2. Subscribe to Messages

Now that we have a connection, we can work on message processing.

A NATS Message is a container for an array of bytes. In addition to the expected setData(byte[]) and byte[] getData() methods, there are methods for setting and getting the message destination and reply-to topics.

We subscribe to topics, which are Strings.

NATS supports both synchronous and asynchronous subscriptions.

Let’s take a look at an asynchronous subscription:

AsyncSubscription subscription = natsConnection
  .subscribe( topic, msg -> log.info("Received message on {}", msg.getSubject()));

The API delivers Messages to our MessageHandler() in its own thread.

Some applications may want to control the thread that processes messages instead:

SyncSubscription subscription = natsConnection.subscribeSync("foo.bar");
Message message = subscription.nextMessage(1000);

SyncSubscription has a blocking nextMessage() method that will wait for up to the specified number of milliseconds. We’ll use synchronous subscriptions for our tests to keep the test cases simple.

AsyncSubscription and SyncSubscription both have an unsubscribe() method that we can use to close the subscription:

subscription.unsubscribe();

3.3. Publish Messages

Publishing Messages can be done in several ways.

The simplest method requires only a topic String and the message bytes:

natsConnection.publish("foo.bar", "Hi there!".getBytes());

If a publisher wishes a response or wants to provide specific information about the source of a message, it may also send a message with a reply-to topic:

natsConnection.publish("foo.bar", "bar.foo", "Hi there!".getBytes());

There are also overloads for a few other combinations such as passing in a Message instead of bytes.
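
For instance, using the overload that takes a Message, the instance we sketched earlier could be published directly:

natsConnection.publish(msg);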

3.4. A Simple Message Exchange

Given a valid Connection, we can write a test that verifies message exchange:

SyncSubscription fooSubscription = natsConnection.subscribe("foo.bar");
SyncSubscription barSubscription = natsConnection.subscribe("bar.foo");
natsConnection.publish("foo.bar", "bar.foo", "hello there".getBytes());

Message message = fooSubscription.nextMessage();
assertNotNull("No message!", message);
assertEquals("hello there", new String(message.getData()));

natsConnection
  .publish(message.getReplyTo(), message.getSubject(), "hello back".getBytes());

message = barSubscription.nextMessage();
assertNotNull("No message!", message);
assertEquals("hello back", new String(message.getData()));

We start by subscribing to two topics with synchronous subscriptions since they work much better inside a JUnit test. Then we send a message to one of them, specifying the other as a replyTo address.

After reading the message from the first destination we “flip” the topics to send a response.

3.5. Wildcard Subscriptions

The NATS server supports topic wildcards.

Wildcards operate on topic tokens that are separated with the ’.’ character. The asterisk character ‘*’ matches an individual token. The greater-than symbol ‘>’ is a wildcard match for the remainder of a topic, which may be more than one token.

For example:

  • foo.* matches foo.bar, foo.requests, but not foo.bar.requests
  • foo.> matches foo.bar, foo.requests, foo.bar.requests, foo.bar.baeldung, etc.

Let’s try a few tests:

SyncSubscription fooSubscription = client.subscribeSync("foo.*");

client.publishMessage("foo.bar", "bar.foo", "hello there");

Message message = fooSubscription.nextMessage(200);
assertNotNull("No message!", message);
assertEquals("hello there", new String(message.getData()));

client.publishMessage("foo.bar.plop", "bar.foo", "hello there");
message = fooSubscription.nextMessage(200);
assertNull("Got message!", message);

SyncSubscription barSubscription = client.subscribeSync("foo.>");

client.publishMessage("foo.bar.plop", "bar.foo", "hello there");

message = barSubscription.nextMessage(200);
assertNotNull("No message!", message);
assertEquals("hello there", new String(message.getData()));

4. Request/Reply Messaging

Our message exchange test resembled a common idiom in pub/sub messaging systems: request/reply. NATS has explicit support for this request/reply messaging.

Responders can install a handler for requests using the asynchronous subscription method we used above:

AsyncSubscription subscription = natsConnection
  .subscribe("foo.bar.requests", new MessageHandler() {
    @Override
    public void onMessage(Message msg) {
        // reply is the response payload prepared by the handler
        natsConnection.publish(msg.getReplyTo(), reply.getBytes());
    }
});

Or they can respond to requests as they arrive.

The API provides a request() method:

Message reply = natsConnection.request("foo.bar.requests", request.getBytes(), 100);

This method creates a temporary mailbox for the response and writes the reply-to address for us.

The request() method returns the response, or null if the request times out. The last argument is the number of milliseconds to wait.

We can modify our test for request/reply:

natsConnection.subscribe("salary.requests", message -> {
    natsConnection.publish(message.getReplyTo(), "denied!".getBytes());
});
Message reply = natsConnection.request("salary.requests", "I need a raise.".getBytes(), 100);
assertNotNull("No message!", reply);
assertEquals("denied!", new String(reply.getData()));

5. Message Queues

Subscribers may specify queue groups at subscription time. When a message is published to the group, NATS will deliver it to one, and only one, subscriber.

Queue groups do not persist messages. If no listeners are available, the message is discarded.

5.1. Subscribing to Queues

Subscribers specify a queue group name as a String:

SyncSubscription subscription = natsConnection.subscribe("topic", "queue name");

There is also an asynchronous version, of course:

AsyncSubscription subscription = natsConnection
  .subscribe("topic", "queue name", new MessageHandler() {
    @Override
    public void onMessage(Message msg) {
        log.info("Received message on {}", msg.getSubject());
    }
});

The subscription creates the queue on the NATS server.

5.2. Publishing to Queues

Publishing a message to a queue group simply requires publishing to the associated topic:

natsConnection.publish("foo", "queue message".getBytes());

The NATS server will route the message to the queue and select a message receiver.

We can verify this with a test:

SyncSubscription queue1 = natsConnection.subscribe("foo", "queue name");
SyncSubscription queue2 = natsConnection.subscribe("foo", "queue name");

natsConnection.publish("foo", "foobar".getBytes());

List<Message> messages = new ArrayList<>();

Message message = queue1.nextMessage(200);
if (message != null) messages.add(message);

message = queue2.nextMessage(200);
if (message != null) messages.add(message);

assertEquals(1, messages.size());

We only receive one message.

If we change the first two lines to a normal subscription:

SyncSubscription queue1 = natsConnection.subscribe("foo");
SyncSubscription queue2 = natsConnection.subscribe("foo");

The test fails because the message is delivered to both subscribers.

6. Conclusion

In this brief introduction, we connected to a NATS server and sent both pub/sub messages and load-balanced queue messages. We looked at NATS support for wildcard subscriptions. We also used request/reply messaging.

Code samples, as always, can be found over on GitHub.

Integration Testing with a Local DynamoDB Instance

1. Overview

If we develop an application which uses Amazon’s DynamoDB, it can be tricky to develop integration tests without having a local instance.

In this tutorial, we’ll explore multiple ways of configuring, starting and stopping a local DynamoDB for our integration tests.

This tutorial also complements our existing DynamoDB article.

2. Configuration

2.1. Maven Setup

DynamoDB Local is a tool developed by Amazon which supports all the DynamoDB APIs. It doesn’t manipulate the actual DynamoDB tables in production; instead, it performs all operations locally.

First, we add the DynamoDB Local dependency to the list of dependencies in our Maven configuration:

<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>DynamoDBLocal</artifactId>
    <version>1.11.86</version>
    <scope>test</scope>
</dependency>

Next, we also need to add the Amazon DynamoDB repository, since the dependency doesn’t exist in the Maven Central repository.

We can select the closest Amazon server to our current IP address geolocation:

<repository>
    <id>dynamodb-local</id>
    <name>DynamoDB Local Release Repository</name>
    <url>https://s3-us-west-2.amazonaws.com/dynamodb-local/release</url>
</repository>

2.2. Add SQLite4Java Dependency

DynamoDB Local uses the SQLite4Java library internally; thus, we also need to include the library files when we run the test. The SQLite4Java library files depend on the environment where the test is running, but Maven can pull them in transitively once we declare the DynamoDBLocal dependency.

Next, we need to add a new build step to copy native libraries into a specific folder that we’ll define in the JVM system property later on.

Let’s copy the transitively-pulled SQLite4Java library files to a folder named native-libs:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-dependency-plugin</artifactId>
    <version>2.10</version>
    <executions>
        <execution>
            <id>copy</id>
            <phase>test-compile</phase>
            <goals>
                <goal>copy-dependencies</goal>
            </goals>
            <configuration>
                <includeScope>test</includeScope>
                <includeTypes>so,dll,dylib</includeTypes>
                <outputDirectory>${project.basedir}/native-libs</outputDirectory>
            </configuration>
        </execution>
    </executions>
</plugin>

2.3. Set the SQLite4Java System Property

Now, we’ll reference the previously created folder (where the SQLite4Java libraries are located), using a JVM system property named sqlite4java.library.path:

System.setProperty("sqlite4java.library.path", "native-libs");

In order to successfully run the test later, it’s mandatory to have all the SQLite4Java libraries in the folder defined by the sqlite4java.library.path system property. We must run Maven test-compile (mvn test-compile) at least once to fulfill the prerequisite.

3. Setting up the Test Database’s Lifecycle

We can define the code to create and start the local DynamoDB server in a setup method annotated with @BeforeClass; and, symmetrically, stop the server in a teardown method annotated with @AfterClass.

In the following example, we’ll start up the local DynamoDB server on port 8000 and make sure it’s stopped again after running our tests:

public class ProductInfoDAOIntegrationTest {
    private static DynamoDBProxyServer server;

    @BeforeClass
    public static void setupClass() throws Exception {
        System.setProperty("sqlite4java.library.path", "native-libs");
        String port = "8000";
        server = ServerRunner.createServerFromCommandLineArgs(
          new String[]{"-inMemory", "-port", port});
        server.start();
        //...
    }

    @AfterClass
    public static void teardownClass() throws Exception {
        server.stop();
    }

    //...
}

We can also run the local DynamoDB server on any available port instead of a fixed port using java.net.ServerSocket. In this case, we must also configure the test to set the endpoint to the correct DynamoDB port:

public String getAvailablePort() throws IOException {
    try (ServerSocket serverSocket = new ServerSocket(0)) {
        return String.valueOf(serverSocket.getLocalPort());
    }
}
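
As a sketch (this client setup isn’t part of the original configuration), we could then point an AmazonDynamoDB client at the local instance using the port obtained above; the signing region is arbitrary because DynamoDB Local doesn’t validate it:

AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
  .withEndpointConfiguration(
    new AwsClientBuilder.EndpointConfiguration("http://localhost:" + port, "us-west-2"))
  .build();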

4. Alternative Approach: Using @ClassRule

We can wrap the previous logic in a JUnit rule which performs the same action:

public class LocalDbCreationRule extends ExternalResource {
    private DynamoDBProxyServer server;

    public LocalDbCreationRule() {
        System.setProperty("sqlite4java.library.path", "native-libs");
    }

    @Override
    protected void before() throws Exception {
        String port = "8000";
        server = ServerRunner.createServerFromCommandLineArgs(
          new String[]{"-inMemory", "-port", port});
        server.start();
    }

    @Override
    protected void after() {
        this.stopUnchecked(server);
    }

    protected void stopUnchecked(DynamoDBProxyServer dynamoDbServer) {
        try {
            dynamoDbServer.stop();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }    
    }
}

To use our custom rule, we’ll have to create and annotate an instance with @ClassRule as shown below. Again, the test will create and start the local DynamoDB server prior to the test class initialization.

Note that the access modifier of the test rule must be public in order to run the test:

public class ProductInfoRepositoryIntegrationTest {
    @ClassRule
    public static LocalDbCreationRule dynamoDB = new LocalDbCreationRule();

    //...
}

Before wrapping up, a very quick note – since DynamoDB Local uses the SQLite database internally, its performance doesn’t reflect the real performance in production.

5. Conclusion

In this article, we’ve seen how to set up and configure DynamoDB Local to run integration tests.

As always, the source code and the configuration example can be found over on GitHub.

Find Sum and Average in a Java Array

1. Introduction

In this quick tutorial, we’ll cover how to calculate the sum and average of the elements in an array, using both Java standard loops and the Stream API.

2. Find Sum of Array Elements

2.1. Sum Using a For Loop

In order to find the sum of all elements in an array, we can simply iterate the array and add each element to a sum accumulating variable.

This very simply starts with a sum of 0 and adds each item in the array as we go:

public static int findSumWithoutUsingStream(int[] array) {
    int sum = 0;
    for (int value : array) {
        sum += value;
    }
    return sum;
}

2.2. Sum With the Java Stream API

We can use the Stream API for achieving the same result:

public static int findSumUsingStream(int[] array) {
    return Arrays.stream(array).sum();
}

It’s important to know that the sum() method only supports primitive type streams.

If we want to use a stream on a boxed Integer value, we must first convert the stream into IntStream using the mapToInt method.

After that, we can apply the sum() method to our newly converted IntStream:

public static int findSumUsingStream(Integer[] array) {
    return Arrays.stream(array)
      .mapToInt(Integer::intValue)
      .sum();
}

You can read a lot more about the Stream API here.

3. Find Average in a Java Array

3.1. Average Without the Stream API

Once we know how to calculate the sum of array elements, finding the average is pretty easy – as Average = Sum of Elements / Number of Elements:

public static double findAverageWithoutUsingStream(int[] array) {
    int sum = findSumWithoutUsingStream(array);
    return (double) sum / array.length;
}

Notes:

  1. Dividing an int by another int returns an int result. To get an accurate average, we first cast sum to double.
  2. Java Array has a length field which stores the number of elements in the array.

3.2. Average Using the Java Stream API

public static double findAverageUsingStream(int[] array) {
    return Arrays.stream(array).average().orElse(Double.NaN);
}

IntStream.average() returns an OptionalDouble, which may not contain a value and which needs special handling.
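
For instance, a minimal sketch of that explicit handling:

OptionalDouble average = Arrays.stream(array).average();
if (average.isPresent()) {
    double result = average.getAsDouble();
    // use the result
}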

Read more about Optionals in this article and about the OptionalDouble class in the Java 8 Documentation.

4. Conclusion

In this article, we explored how to find sum/average of int array elements.

As always, the code is available over on GitHub.

Handling Cookies and a Session in a Java Servlet

1. Overview

In this tutorial, we’ll cover the handling of cookies and sessions in Java, using Servlets.

Additionally, we’ll shortly describe what a cookie is, and explore some sample use cases for it.

2. Cookie Basics

Simply put, a cookie is a small piece of data stored on the client-side which servers use when communicating with clients.

They’re used to identify a client when sending a subsequent request. They can also be used for passing some data from one servlet to another.

For more details, please refer to this article.

2.1. Create a Cookie

The Cookie class is defined in the javax.servlet.http package.

To send it to the client, we need to create one and add it to the response:

Cookie uiColorCookie = new Cookie("color", "red");
response.addCookie(uiColorCookie);

However, its API is a lot broader – let’s explore it.

2.2. Set the Cookie Expiration Date

We can set the max age (with the setMaxAge(int) method), which defines how many seconds a given cookie should be valid for:

uiColorCookie.setMaxAge(60*60);

We set the max age to one hour. After this time, the cookie cannot be used by a client (browser) when sending a request, and it should also be removed from the browser cache.

2.3. Set the Cookie Domain

Another useful method in the Cookie API is setDomain(String).

This allows us to specify the domain names to which the cookie should be delivered by the client. The behavior also depends on whether we specify the domain name explicitly.

Let’s set the domain for a cookie:

uiColorCookie.setDomain("example.com");

The cookie will be delivered with each request made to example.com and its subdomains.

If we don’t specify a domain explicitly, it will be set to the domain name which created a cookie.

For example, if we create a cookie from example.com and leave the domain name empty, then it’ll be delivered to www.example.com only (without subdomains).

Along with a domain name, we can also specify a path. Let’s have a look at that next.

2.4. Set the Cookie Path

The path specifies where a cookie will be delivered.

If we specify a path explicitly, then a Cookie will be delivered to the given URL and all its subdirectories:

uiColorCookie.setPath("/welcomeUser");

Implicitly, it’ll be set to the URL which created a cookie and all its subdirectories.

Now let’s focus on how we can retrieve their values inside a Servlet.

2.5. Read Cookies in the Servlet

Cookies are added to the request by the client. The client checks the cookie’s parameters and decides whether it can deliver it to the current URL.

We can get all cookies by calling getCookies() on the request (HttpServletRequest) passed to the Servlet.

We can iterate through this array and search for the one we need, e.g., by comparing their names:

public Optional<String> readCookie(String key) {
    return Arrays.stream(request.getCookies())
      .filter(c -> key.equals(c.getName()))
      .map(Cookie::getValue)
      .findAny();
}

2.6. Remove a Cookie

To remove a cookie from a browser, we have to add a new one to the response with the same name, but with a maxAge value set to 0:

Cookie userNameCookieRemove = new Cookie("userName", "");
userNameCookieRemove.setMaxAge(0);
response.addCookie(userNameCookieRemove);

A sample use case for removing cookies is a user logout action – we may need to remove some data which was stored for an active user session.

Now we know how we can handle cookies inside a Servlet.

Next, we’ll cover another important object which we access very often from a Servlet – a Session object.

3. HttpSession Object

The HttpSession is another option for storing user-related data across different requests. A session is a server-side storage holding contextual data.

Data isn’t shared between different session objects (a client can access data from its own session only). It also contains key-value pairs, but in comparison to a cookie, a session can contain an object as a value. The storage implementation mechanism is server-dependent.

A session is matched with a client by a cookie or request parameters. More info can be found here.

3.1. Getting a Session

We can obtain an HttpSession straight from a request:

HttpSession session = request.getSession();

The above code will create a new session in case it doesn’t exist. We can achieve the same by calling:

request.getSession(true)

In case we just want to obtain an existing session and not create a new one, we need to use:

request.getSession(false)

If we access the JSP page for the first time, then a new session gets created by default. We can disable this behavior by setting the session attribute to false:

<%@ page contentType="text/html;charset=UTF-8" session="false" %>

In most cases, a web server uses cookies for session management. When a session object is created, the server creates a cookie with the JSESSIONID key and a value that identifies the session.

3.2. Session Attributes

The session object provides a bunch of methods for accessing (create, read, modify, remove) attributes created for a given user session:

  • setAttribute(String, Object) which creates or replaces a session attribute with a key and a new value
  • getAttribute(String) which reads an attribute value with a given name (key)
  • removeAttribute(String) which removes an attribute with a given name

We can also easily check already existing session attributes by calling getAttributeNames().
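
For example, we can iterate over all the existing attribute names:

Enumeration<String> attributeNames = session.getAttributeNames();
while (attributeNames.hasMoreElements()) {
    String attributeName = attributeNames.nextElement();
    // process each attribute name
}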

As we already mentioned, we can retrieve a session object from the request. Once we have it, we can quickly use the methods mentioned above.

We can create an attribute:

HttpSession session = request.getSession();
session.setAttribute("attributeKey", "Sample Value");

The attribute value can be obtained by its key (name):

session.getAttribute("attributeKey");

We can remove an attribute when we don’t need it anymore:

session.removeAttribute("attributeKey");

A well-known use case for a user session is to invalidate all the data it stores when a user logs out from our website. The session object provides a solution for it:

session.invalidate();

This method removes the whole session from the web server, so we cannot access its attributes anymore.

The HttpSession object has more methods, but the ones we mentioned are the most common.

4. Conclusion

In this article, we covered two mechanisms that allow us to store user data between subsequent requests to the server – the cookie and the session.

Keep in mind that the HTTP protocol is stateless, so these mechanisms are a must for maintaining state across requests.

As always, code snippets are available over on GitHub.

Shutdown a Spring Boot Application

1. Overview

Managing the lifecycle of a Spring Boot application is very important for a production-ready system. The Spring container handles the creation, initialization, and destruction of all the beans with the help of the ApplicationContext.

The emphasis of this write-up is the destruction phase of the lifecycle. More specifically, we’ll have a look at different ways to shut down a Spring Boot application.

To learn more about how to set up a project using Spring Boot, check out the Spring Boot Starter article, or go over the Spring Boot Configuration.

2. Shutdown Endpoint

By default, all the endpoints are enabled in a Spring Boot application except /shutdown; this endpoint is, naturally, part of the Actuator endpoints.

Here’s the Maven dependency to set these up:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

And, if we want to also set up security support, we need:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-security</artifactId>
</dependency>

Lastly, we enable the shutdown endpoint in application.properties file:

# Spring Boot 2.x
management.endpoint.shutdown.enabled=true
# Spring Boot 1.x
endpoints.shutdown.enabled=true

To shut down the Spring Boot application, we simply call a POST method like this:

curl -X POST localhost:port/shutdown

And a quick test fragment, using the Spring MVC testing support:

mockMvc.perform(post("/shutdown")).andExpect(status().isOk());

3. Close Application Context

We can also call the close() method directly using the application context:

ConfigurableApplicationContext ctx = new SpringApplicationBuilder(Application.class).web(false).run();
System.out.println("Spring Boot application started");
ctx.getBean(TerminateBean.class);
ctx.close();

This destroys all the beans, releases the locks, then closes the bean factory. To verify the application shutdown, we use Spring’s standard lifecycle callback with the @PreDestroy annotation:

@PreDestroy
public void onDestroy() throws Exception {
    System.out.println("Spring Container is destroyed!");
}

Here’s the output after running this example:

Spring Boot application started
Closing AnnotationConfigApplicationContext@39b43d60
DefaultLifecycleProcessor - Stopping beans in phase 0
Unregistering JMX-exposed beans on shutdown
Spring Container is destroyed!

The important thing here to keep in mind: while closing the application context, the parent context isn’t affected due to separate lifecycles.

4. Exit SpringApplication

SpringApplication registers a shutdown hook with the JVM to make sure the application exits appropriately.

Beans may implement the ExitCodeGenerator interface to return a specific error code:

ConfigurableApplicationContext ctx = new SpringApplicationBuilder(Application.class).web(false).run();

int exitCode = SpringApplication.exit(ctx, new ExitCodeGenerator() {
    @Override
    public int getExitCode() {
        // return the error code
        return 0;
    }
});

System.exit(exitCode);

The same code, using a Java 8 lambda:

SpringApplication.exit(ctx, () -> 0);

After calling System.exit(exitCode), the program terminates with a 0 return code:

Process finished with exit code 0

5. Kill the App Process

Finally, we can also shut down a Spring Boot application from outside the application by using a bash script. Our first step for this option is to have the application context write its PID into a file:

SpringApplicationBuilder app = new SpringApplicationBuilder(Application.class).web(false);
app.build().addListeners(new ApplicationPidFileWriter("./bin/shutdown.pid"));
app.run();

Next, create a shutdown.sh file with the following content:

kill $(cat ./bin/shutdown.pid)

The execution of shutdown.sh extracts the process ID from the shutdown.pid file and uses the kill command to terminate the Boot application.

6. Conclusion

In this quick write-up, we’ve covered a few simple methods that can be used to shut down a running Spring Boot application.

While it’s up to the developer to choose an appropriate method, all of these methods should be used by design and on purpose.

For example, .exit() is preferred when we need to pass an error code to another environment, say the JVM, for further actions. Using the application PID gives more flexibility, as we can also start or restart the application with the use of a bash script.

Finally, /shutdown is here to make it possible to terminate the applications externally via HTTP. For all the other cases .close() will work perfectly.

As usual, the complete code for this article is available over on the GitHub project.


Creating and Deploying Smart Contracts with Solidity

1. Overview

The ability to run smart contracts is what has made the Ethereum blockchain so popular and disruptive.

Before we explain what a smart contract is, let’s start with a definition of blockchain:

Blockchain is a public database that keeps a permanent record of digital transactions. It operates as a trustless transactional system, a framework in which individuals can make peer-to-peer transactions without needing to trust a third party or one another.

Let’s see how we can create smart contracts on Ethereum with Solidity.

2. Ethereum

Ethereum is a platform that allows people to write decentralized applications using blockchain technology efficiently.

A decentralized application (Dapp) is a tool that allows people and organizations on different sides of an interaction to come together without any centralized intermediary. Early examples of Dapps include BitTorrent (file sharing) and Bitcoin (currency).

We can describe Ethereum as a blockchain with a built-in programming language.

2.1. Ethereum Virtual Machine (EVM)

From a practical standpoint, the EVM can be thought of as a large, decentralized system containing millions of objects, called accounts, which can maintain an internal database, execute code and talk to each other.

The first type of account is probably the most familiar for the average user who uses the network. Its name is EOA (Externally Owned Account); it is used to transmit value (such as Ether) and is controlled by a private key.

On the other hand, there is another type of account: the contract. Let’s go ahead and see what this is about.

3. What is a Smart Contract?

In simple terms, we can see a smart contract as a collection of code stored in the blockchain network that defines conditions to which all parties using the contract agree upon.

This enables developers to create things that haven’t been invented yet. Think about it for a second – there is no need for a middleman, and there is also no counterparty risk. We can create new markets, store registries of debts or promises, and rest assured that we have the consensus of the network that validates the transactions.

Anyone can deploy a smart contract to the decentralized database for a fee proportional to the storage size of the contained code. Nodes wishing to use the smart contract must somehow indicate the result of their participation to the rest of the network.

3.1. Solidity

The main language used in Ethereum is Solidity – a JavaScript-like language developed specifically for writing smart contracts. Solidity is statically typed and supports inheritance, libraries and complex user-defined types, among other features.

The Solidity compiler turns code into EVM bytecode, which can then be sent to the Ethereum network as a deployment transaction. Such deployments have more substantial transaction fees than smart contract interactions and must be paid by the owner of the contract.

4. Creating a Smart Contract with Solidity

The first line in a Solidity contract sets the source code version. This ensures that the contract doesn’t suddenly behave differently with a new compiler version.

pragma solidity ^0.4.0;

For our example, the name of the contract is Greeting, and as we can see, its creation is similar to declaring a class in Java or another object-oriented programming language:

contract Greeting {
    address creator;
    string message;

    // functions that interact with state variables
}

In this example, we declared two state variables: creator and message. In Solidity, we use the data type named address to store addresses of accounts.

Next, we need to initialize both variables in the constructor.

4.1. Constructor

We declare a constructor by using the function keyword followed by the name of the contract (just like in Java).

The constructor is a special function that is invoked only once when a contract is first deployed to the Ethereum blockchain. We can only declare a single constructor for a contract:

function Greeting(string _message) {
    message = _message;
    creator = msg.sender;
}

We also inject the initial string _message as a parameter into the constructor and set it to the message state variable.

In the second line of the constructor, we initialize the creator variable to a value called msg.sender. There’s no need to inject msg into the constructor because msg is a global variable that provides specific information about the message, such as the address of the account sending it.

We could potentially use this information to implement access control for certain functions.

4.2. Setter and Getter Methods

Finally, we implement the setter and getter methods for the message:

function greet() constant returns (string) {
    return message;
}

function setGreeting(string _message) {
    message = _message;
}

Invoking the function greet will simply return the currently saved message. We use the constant keyword to specify that this function doesn’t modify the contract state and doesn’t trigger any writes to the blockchain.

We can now change the value of the state in the contract by calling the function setGreeting. Anyone can alter the value just by calling this function. This method doesn’t have a return type but does take a string as a parameter.

Now that we’ve created our first smart contract, the next step is to deploy it to the Ethereum blockchain so everybody can use it. We can use Remix, which is currently the best online IDE and is effortless to use.

5. Interacting with a Smart Contract

To interact with a smart contract in the decentralized network (blockchain) we need to have access to one of the clients.

There are two ways to do this: we can run our own node, or we can use a service like Infura that exposes remote nodes.

Infura is the most straightforward option, so we’ll request a free access token. Once we sign up, we need to pick the URL of the Rinkeby test network: “https://rinkeby.infura.io/<token>”.

To be able to transact with the smart contract from Java, we need to use a library called Web3j. Here is the Maven dependency:

<dependency>
    <groupId>org.web3j</groupId>
    <artifactId>core</artifactId>
    <version>3.3.1</version>
</dependency>

And in Gradle:

compile ('org.web3j:core:3.3.1')

Before starting to write code, there are some things that we need to do first.

5.1. Creating a Wallet

Web3j allows us to use some of its functionality from the command line:

  • Wallet creation
  • Wallet password management
  • Transfer of funds from one wallet to another
  • Generate Solidity smart contract function wrappers

Command line tools can be obtained as a zip file/tarball from the releases page of the project repository, under the downloads section, or, for OS X users, via Homebrew:

brew tap web3j/web3j
brew install web3j

To generate a new Ethereum wallet we simply type the following on the command line:

$ web3j wallet create

It will ask us for a password and a location where we can save our wallet. The file is in JSON format, and the main thing to keep in mind is the Ethereum address.

We’ll use it in the next step to request some Ether.

5.2. Requesting Ether in the Rinkeby Testnet

We can request free Ether here. To prevent malicious actors from exhausting all available funds, they ask us to provide a public link to one social media post with our Ethereum address.

This is a very simple step; almost instantly, they provide the Ether so we can run the tests.

5.3. Generating the Smart Contract Wrapper

Web3j can auto-generate smart contract wrapper code to deploy and interact with smart contracts without leaving the JVM.

To generate the wrapper code, we need to compile our smart contract. We can find the instruction to install the compiler here. From there, we type the following on the command line:

$ solc Greeting.sol --bin --abi --optimize -o <output_dir>/

This will create two files: Greeting.bin and Greeting.abi. Now, we can generate the wrapper code using web3j’s command line tools:

$ web3j solidity generate /path/to/Greeting.bin 
  /path/to/Greeting.abi -o /path/to/src/main/java -p com.your.organisation.name

With this, we’ll now have the Java class to interact with the contract in our main code.

6. Interacting with the Smart Contract

In our main class, we start by creating a new web3j instance to connect to remote nodes on the network:

Web3j web3j = Web3j.build(
  new HttpService("https://rinkeby.infura.io/<your_token>"));

We then need to load our Ethereum wallet file:

Credentials credentials = WalletUtils.loadCredentials(
  "<password>",
  "/path/to/<walletfile>");

Now let’s deploy our smart contract:

Greeting contract = Greeting.deploy(
  web3j, credentials,
  ManagedTransaction.GAS_PRICE, Contract.GAS_LIMIT,
  "Hello blockchain world!").send();

Deploying the contract may take a while, depending on the load of the network. Once it’s deployed, we might want to store the address where the contract was deployed. We can obtain the address this way:

String contractAddress = contract.getContractAddress();

All the transactions made with the contract can be seen at the URL “https://rinkeby.etherscan.io/address/<contract_address>”.

On the other hand, we can modify the value stored in the smart contract by performing a transaction:

TransactionReceipt transactionReceipt = contract.setGreeting("Hello again").send();

Finally, if we want to view the new value stored, we can simply write:

String newValue = contract.greet().send();

7. Conclusion

In this tutorial, we saw that Solidity is a statically-typed programming language designed for developing smart contracts that run on the EVM.

We also created a straightforward contract with this language and saw that it’s very similar to other programming languages.

“Smart contract” is just a phrase used to describe computer code that can facilitate the exchange of value. When running on the blockchain, a smart contract becomes a self-operating computer program that automatically executes when specific conditions are met.

We saw in this article that the ability to run code on the blockchain is the main differentiator of Ethereum because it allows developers to build a new type of application that goes way beyond anything we have seen before.

As always, code samples can be found over on GitHub.

Introduction to Atlassian Fugue

1. Introduction

Fugue is a Java library by Atlassian; it’s a collection of utilities supporting Functional Programming.

In this write-up, we’ll focus on and explore the most important of Fugue’s APIs.

2. Getting Started with Fugue

To start using Fugue in our projects, we need to add the following dependency:

<dependency>
    <groupId>io.atlassian.fugue</groupId>
    <artifactId>fugue</artifactId>
    <version>4.5.1</version>
</dependency>

We can find the most recent version of Fugue over on Maven Central.

3. Option

Let’s start our journey by looking at the Option class which is Fugue’s answer to java.util.Optional.

As we can guess by the name, Option’s a container representing a potentially absent value.

In other words, an Option is either Some value of a certain type or None:

Option<Object> none = Option.none();
assertFalse(none.isDefined());

Option<String> some = Option.some("value");
assertTrue(some.isDefined());
assertEquals("value", some.get());

Option<Integer> maybe = Option.option(someInputValue);

3.1. The map Operation

One of the standard Functional Programming APIs is the map() method which allows applying a provided function to underlying elements.

The method applies the provided function to the Option‘s value if it’s present:

Option<String> some = Option.some("value") 
  .map(String::toUpperCase);
assertEquals("VALUE", some.get());

3.2. Option and a Null Value

Besides the naming differences, Atlassian did make a few design choices for Option that differ from Optional; let’s look at them now.

We cannot directly create a non-empty Option holding a null value:

Option.some(null);

The above throws an exception.

However, we can get one as a result of using the map() operation:

Option<Object> some = Option.some("value")
  .map(x -> null);
assertNull(some.get());

This isn’t possible when simply using java.util.Optional.

3.3. Option is Iterable

Option can be treated as a collection that holds at most one element, so it makes sense for it to implement the Iterable interface.

This greatly increases interoperability when working with collections/streams.

Now, for example, an Option can be concatenated with another collection:

Option<String> some = Option.some("value");
Iterable<String> strings = Iterables
  .concat(some, Arrays.asList("a", "b", "c"));

3.4. Converting Option to Stream

Since an Option is an Iterable, it can be converted to a Stream easily too.

After converting, the Stream instance will have exactly one element if the Option is present, or zero otherwise:

assertEquals(0, Option.none().toStream().count());
assertEquals(1, Option.some("value").toStream().count());

3.5. java.util.Optional Interoperability

If we need a standard Optional implementation, we can obtain it easily using the toOptional() method:

Optional<Object> optional = Option.none()
  .toOptional();
assertTrue(Option.fromOptional(optional)
  .isEmpty());

3.6. The Options Utility Class

Finally, Fugue provides some utility methods for working with Options in the aptly named Options class.

It features methods such as filterNone for removing empty Options from a collection, and flatten for turning a collection of Options into a collection of enclosed objects, filtering out empty Options.
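
Here’s a quick sketch of those two methods (assuming, as their descriptions suggest, that both accept an Iterable of Options):

List<Option<String>> options = Arrays.asList(
  Option.some("a"), Option.<String>none(), Option.some("b"));

Iterable<Option<String>> nonEmpty = Options.filterNone(options);
Iterable<String> values = Options.flatten(options);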

Additionally, it features several variants of the lift method that lifts a Function<A,B> into a Function<Option<A>, Option<B>>:

Function<Integer, Integer> f = (Integer x) -> x > 0 ? x + 1 : null;
Function<Option<Integer>, Option<Integer>> lifted = Options.lift(f);

assertEquals(2, (long) lifted.apply(Option.some(1)).get());
assertTrue(lifted.apply(Option.none()).isEmpty());

This is useful when we want to pass a function that is unaware of Option to a method that uses Option.

Note that, just like the map method, lift doesn’t map null to None:

assertEquals(null, lifted.apply(Option.some(0)).get());

4. Either for Computations With Two Possible Outcomes

As we’ve seen, the Option class allows us to deal with the absence of a value in a functional manner.

However, sometimes we need to return more information than “no value”; for example, we might want to return either a legitimate value or an error object.

The Either class covers that use case.

An instance of Either can be a Right or a Left but never both at the same time.

By convention, the right is the result of a successful computation, while the left is the exceptional case.

4.1. Constructing an Either

We can obtain an Either instance by calling one of its two static factory methods.

We call right if we want an Either containing the Right value:

Either<Integer, String> right = Either.right("value");

Otherwise, we call left:

Either<Integer, String> left = Either.left(-1);

Here, our computation can either return a String or an Integer.

4.2. Using an Either

When we have an Either instance, we can check whether it’s left or right and act accordingly:

if (either.isRight()) {
    ...
}

More interestingly, we can chain operations using a functional style:

either
  .map(String::toUpperCase)
  .getOrNull();

4.3. Projections

The main thing that differentiates Either from other monadic tools like Option and Try is the fact that it’s often unbiased. Simply put, if we call the map() method, Either doesn’t know whether to work with the Left or the Right side.

This is where projections come in handy.

Left and right projections are specular views of an Either that focus on the left or right value, respectively:

either.left()
  .map(x -> decodeSQLErrorCode(x));

In the above code snippet, if the Either is a Left, decodeSQLErrorCode() will get applied to the underlying element. If it’s a Right, it won’t. The same applies, mirrored, when using the right projection.
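
For instance, here’s a sketch using the right projection, with decodePayload() standing in as a hypothetical function that only gets applied to Right values:

either.right()
  .map(x -> decodePayload(x));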

4.4. Utility Methods

As with Options, Fugue provides a class full of utilities for Eithers, as well, and it’s called just like that: Eithers.

It contains methods for filtering, casting and iterating over collections of Eithers.

5. Exception Handling with Try

We conclude our tour of either-this-or-that data types in Fugue with another variation called Try.

Try is similar to Either, but it differs in that it’s dedicated for working with exceptions.

Like Option and unlike Either, Try is parameterized over a single type, because the “other” type is fixed to Exception (while for Option it’s implicitly Void).

So, a Try can be either a Success or a Failure:

assertTrue(Try.failure(new Exception("Fail!")).isFailure());
assertTrue(Try.successful("OK").isSuccess());

5.1. Instantiating a Try

Often, we won’t be creating a Try explicitly as a success or a failure; rather, we’ll create one from a method call.

Checked.of calls a given function and returns a Try encapsulating its return value or any thrown exception:

assertTrue(Checked.of(() -> "ok").isSuccess());
assertTrue(Checked.of(() -> { throw new Exception("ko"); }).isFailure());

Another method, Checked.lift, takes a potentially throwing function and lifts it to a function returning a Try:

Checked.Function<String, Object, Exception> throwException = (String x) -> {
    throw new Exception(x);
};
        
assertTrue(Checked.lift(throwException).apply("ko").isFailure());

5.2. Working with Try

Once we have a Try, the three most common things we might ultimately want to do with it are:

  1. extracting its value
  2. chaining some operation to the successful value
  3. handling the exception with a function

Obviously, besides discarding the Try or passing it along to other methods, the above three aren’t the only options that we have; however, all the other built-in methods are just conveniences over these three.

5.3. Extracting the Successful Value

To extract the value, we use the getOrElse method:

assertEquals(42, failedTry.getOrElse(() -> 42));

It returns the successful value if present, or some computed value otherwise.

There is no getOrThrow or similar method, but since getOrElse doesn’t catch any exceptions, we can easily write it:

someTry.getOrElse(() -> {
    throw new NoSuchElementException("Nothing to get");
});

5.4. Chaining Calls After Success

In a functional style, we can apply a function to the success value (if present) without extracting it explicitly first.

This is the typical map method we find in Option, Either and most other containers and collections:

Try<Integer> aTry = Try.successful(42).map(x -> x + 1);

It returns a Try so we can chain further operations.

Of course, we also have the flatMap variety:

Try.successful(42).flatMap(x -> Try.successful(x + 1));

5.5. Recovering From Exceptions

We have analogous mapping operations that work with the exception of a Try (if present), rather than its successful value.

However, those methods differ in that their purpose is to recover from the exception, i.e. to produce a successful Try in the default case.

Thus, we can produce a new value with recover:

Try<Object> recover = Try
  .failure(new Exception("boo!"))
  .recover((Exception e) -> e.getMessage() + " recovered.");

assertTrue(recover.isSuccess());
assertEquals("boo! recovered.", recover.getOrElse(() -> null));

As we can see, the recovery function takes the exception as its only argument.

If the recovery function itself throws, the result is another failed Try:

Try<Object> failure = Try.failure(new Exception("boo!")).recover(x -> {
    throw new RuntimeException(x);
});

assertTrue(failure.isFailure());

The analogous to flatMap is called recoverWith:

Try<Object> recover = Try
  .failure(new Exception("boo!"))
  .recoverWith((Exception e) -> Try.successful("recovered again!"));

assertTrue(recover.isSuccess());
assertEquals("recovered again!", recover.getOrElse(() -> null));

6. Other Utilities

Let’s now have a quick look at some of the other utilities in Fugue, before we wrap it up.

6.1. Pairs

A Pair is a really simple and versatile data structure, made of two equally important components, which Fugue calls left and right:

Pair<Integer, String> pair = Pair.pair(1, "a");
        
assertEquals(1, (int) pair.left());
assertEquals("a", pair.right());

Fugue doesn’t provide many built-in methods on Pairs, besides mapping and the applicative functor pattern.

However, Pairs are used throughout the library and they are readily available for user programs.

The next poor person’s implementation of Lisp is just a few keystrokes away!

6.2. Unit

Unit is an enum with a single value which is meant to represent “no value”.

It’s a replacement for the void return type and the Void class that does away with null:

Unit doSomething() {
    System.out.println("Hello! Side effect");
    return Unit();
}

Quite surprisingly, however, Option doesn’t understand Unit, and treats it like some value instead of none.

6.3. Static Utilities

We have a few classes packed full of static utility methods that we won’t have to write and test.

The Functions class offers methods that use and transform functions in various ways: composition, application, currying, partial functions using Option, weak memoization et cetera.

The Suppliers class provides a similar, but more limited, collection of utilities for Suppliers, that is, functions of no arguments.

Iterables and Iterators, finally, contain a host of static methods for manipulating those two widely used standard Java interfaces.

7. Conclusion

In this article, we’ve given an overview of the Fugue library by Atlassian.

We haven’t touched on the algebra-heavy classes like Monoid and Semigroups because they don’t fit in a generalist article.

However, you can read about them and more in the Fugue javadocs and source code.

We also haven’t touched on any of the optional modules, which offer, for example, integrations with Guava and Scala.

The implementation of all these examples and code snippets can be found in the GitHub project – this is a Maven project, so it should be easy to import and run as is.

Java Weekly, Issue 223

Here we go…

1. Spring and Java

>> Monitor and troubleshoot Java applications and services with Datadog 

Optimize performance with end-to-end tracing and out-of-the-box support for popular Java frameworks, application servers, and databases. Try it free.

>> The 30-seconds “State of Java in 2018” Survey [docs.google.com]

I’m running the annual Java survey, to get a clear idea of the state of the Java ecosystem right now.

If you haven’t – definitely take the 30 seconds and fill it in. Thanks.

>> Java 10: Parallel Full GC in G1GC [javaspecialists.eu]

JDK 10 finally fixed the problem with G1 which would do the full GC using a single thread.

>> Why I Moved Back from Gradle to Maven [blog.philipphauer.de]

Just like any tool out there, Gradle isn’t flaw-free. It’s always a good idea to weigh and understand the tool before committing to it for your project.

>> CountDownLatch vs Phaser [javaspecialists.eu]

Definitely, Phaser is harder to understand but easier to use 🙂

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

>> Why I Deleted My IDE and How It Changed My Life For the Better [blog.takipi.com]

Sometimes it can be beneficial to ditch the technology and go back to basics. Or try a better IDE 🙂

Also worth reading:

3. Comics

And my favorite Dilberts of the week:

>> Spare Time [dilbert.com]

>> Anyone Fired Lately [dilbert.com]

>> Meetings [dilbert.com]

4. Pick of the Week

>> The Mistakes I Made As a Beginner Programmer [medium.com]

New Password Storage In Spring Security 5

1. Introduction

With the latest Spring Security release, a lot has changed. One of those changes is how we can handle password encoding in our applications.

In this tutorial, we’re going to explore some of these changes.

Later, we’ll see how to configure the new delegation mechanism and how to update our existing password encoding without our users noticing.

2. Relevant Changes in Spring Security 5.x

The Spring Security team declared the PasswordEncoder in org.springframework.security.authentication.encoding as deprecated. It was a logical move, as the old interface wasn’t designed for a randomly generated salt. Consequently, version 5 removed this interface.

Additionally, Spring Security changed the way it handles encoded passwords. In previous versions, each application employed only one password encoding algorithm.

By default, StandardPasswordEncoder dealt with that. It used SHA-256 for the encoding. By changing the password encoder, we could switch to another algorithm. But our application had to stick to exactly one algorithm.

Version 5.0 introduces the concept of password encoding delegation. Now, we can use different encodings for different passwords. Spring recognizes the algorithm by an identifier prefixing the encoded password.

Here’s an example of a bcrypt encoded password:

{bcrypt}$2b$12$FaLabMRystU4MLAasNOKb.HUElBAabuQdX59RWHq5X.9Ghm692NEi

Note how bcrypt is specified in curly braces in the very beginning.

3. Delegation Configuration

If the password hash has no prefix, the delegation process uses a default encoder. Hence, by default, we get the StandardPasswordEncoder.

That makes it compatible with the default configuration of previous Spring Security versions.

With version 5, Spring Security introduces PasswordEncoderFactories.createDelegatingPasswordEncoder(). This factory method returns a configured instance of DelegatingPasswordEncoder.

For passwords without a prefix, that instance ensures the default behavior just mentioned. And for password hashes that contain a prefix, the delegation is done accordingly.
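
To illustrate (a quick sketch, not part of the original configuration – the password value is arbitrary), encoding with the factory-returned instance produces exactly this kind of prefixed hash, and matches() picks the right algorithm based on the prefix:

PasswordEncoder encoder = PasswordEncoderFactories.createDelegatingPasswordEncoder();
String hash = encoder.encode("baeldung");
// the hash starts with "{bcrypt}"

assertTrue(encoder.matches("baeldung", hash));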

The Spring Security team lists the supported algorithms in the latest version of the corresponding JavaDoc.

Of course, Spring lets us configure this behavior.

Let’s assume we want to support:

  • bcrypt as our new default
  • scrypt as an alternative
  • SHA-256 as the currently used algorithm.

The configuration for this set-up will look like this:

@Bean
public PasswordEncoder delegatingPasswordEncoder() {
    PasswordEncoder defaultEncoder = new StandardPasswordEncoder();
    Map<String, PasswordEncoder> encoders = new HashMap<>();
    encoders.put("bcrypt", new BCryptPasswordEncoder());
    encoders.put("scrypt", new SCryptPasswordEncoder());

    DelegatingPasswordEncoder passwordEncoder = new DelegatingPasswordEncoder(
      "bcrypt", encoders);
    passwordEncoder.setDefaultPasswordEncoderForMatches(defaultEncoder);

    return passwordEncoder;
}

4. Migrating the Password Encoding Algorithm

In the previous section, we explored how to configure password encoding according to our needs. Therefore, now we’ll work on how to switch an already encoded password to a new algorithm.

Let’s imagine we want to change the encoding from SHA-256 to bcrypt; however, we don’t want our users to change their passwords.

One possible solution is to use the login request. At this point, we can access the credentials in plain text. That is the moment we can take the current password and re-encode it.

Consequently, we can use Spring’s AuthenticationSuccessEvent for that. This event fires after a user successfully logs into our application.

Here is the example code:

@Bean
public ApplicationListener<AuthenticationSuccessEvent>
  authenticationSuccessListener(PasswordEncoder encoder) {
    return (AuthenticationSuccessEvent event) -> {
        Authentication auth = event.getAuthentication();

        if (auth instanceof UsernamePasswordAuthenticationToken
          && auth.getCredentials() != null) {

            CharSequence clearTextPass = (CharSequence) auth.getCredentials();
            String newPasswordHash = encoder.encode(clearTextPass);

            // [...] Update user's password

            ((UsernamePasswordAuthenticationToken) auth).eraseCredentials();
        }
    };
}

In the previous snippet:

  • We retrieved the user password in clear text from the provided authentication details
  • Created a new password hash with the new algorithm
  • Removed the clear text password from the authentication token

By default, extracting the password in clear text wouldn’t be possible because Spring Security deletes it as soon as possible.

Hence, we need to configure Spring so that it keeps the cleartext version of the password.

Additionally, we need to register our encoding delegation:

@Configuration
public class PasswordStorageWebSecurityConfigurer
  extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(AuthenticationManagerBuilder auth) 
      throws Exception {
        auth.eraseCredentials(false)
          .passwordEncoder(delegatingPasswordEncoder());
    }

    // ...
}

5. Conclusion

In this quick article, we talked about some of the new password encoding features available in Spring Security 5.x.

We also saw how to configure multiple password encoding algorithms to encode our passwords. Furthermore, we explored a way to change the password encoding, without breaking the existing one.

Lastly, we described how to use Spring events to update encrypted user passwords transparently, allowing us to seamlessly change our encoding strategy without disclosing that to our users.

As always, all the code examples are available in our GitHub repository.

Introduction to EasyMock

1. Introduction

In the past, we’ve talked extensively about JMockit and Mockito.

In this tutorial, we’ll give an introduction to another mocking tool – EasyMock.

2. Maven Dependencies

Before we dive in, let’s add the following dependency to our pom.xml:

<dependency>
    <groupId>org.easymock</groupId>
    <artifactId>easymock</artifactId>
    <version>3.5.1</version>
    <scope>test</scope>
</dependency>

The latest version can always be found here.

3. Core Concepts

When generating a mock, we can simulate the target object, specify its behavior, and finally verify whether it’s used as expected.

Working with EasyMock’s mocks involves four steps:

  1. creating a mock of the target class
  2. recording its expected behavior, including the action, result, exceptions, etc.
  3. using mocks in tests
  4. verifying if it’s behaving as expected

After our recording finishes, we switch it to “replay” mode, so that the mock behaves as recorded when collaborating with any object that will be using it.

Eventually, we verify if everything goes as expected.

The four steps mentioned above relate to methods in org.easymock.EasyMock:

  1. mock(…): generates a mock of the target class, be it a concrete class or an interface. Once created, a mock is in “recording” mode, meaning that EasyMock will record any actions the mock object takes and replay them in “replay” mode
  2. expect(…): with this method, we can set expectations, including calls, results, and exceptions, for associated recording actions
  3. replay(…): switches a given mock to “replay” mode. Then, any action triggering previously recorded method calls will replay the “recorded results”
  4. verify(…): verifies that all expectations were met and that no unexpected call was performed on a mock

In the next section, we’ll show how these steps work in action, using real-world examples.

4. A Practical Example of Mocking

Before we continue, let’s take a look at the example context: say we have a reader of the Baeldung blog who likes to browse articles on the website and then tries to write articles of their own.

Let’s start by creating the following model:

public class BaeldungReader {

    private ArticleReader articleReader;
    private IArticleWriter articleWriter;

    // constructors

    public BaeldungArticle readNext(){
        return articleReader.next();
    }

    public List<BaeldungArticle> readTopic(String topic){
        return articleReader.ofTopic(topic);
    }

    public String write(String title, String content){
        return articleWriter.write(title, content);
    }
}

In this model, we have two private members: the articleReader (a concrete class) and the articleWriter (an interface).

Next, we’ll mock them to verify BaeldungReader‘s behavior.

5. Mock with Java Code

Let’s begin with mocking an ArticleReader.

5.1. Typical Mocking

We expect the articleReader.next() method to be called when a reader skips an article:

@Test
public void whenReadNext_thenNextArticleRead(){
    ArticleReader mockArticleReader = mock(ArticleReader.class);
    BaeldungReader baeldungReader
      = new BaeldungReader(mockArticleReader);

    expect(mockArticleReader.next()).andReturn(null);
    replay(mockArticleReader);

    baeldungReader.readNext();

    verify(mockArticleReader);
}

In the sample code above, we stick strictly to the 4-step procedure and mock the ArticleReader class.

Although we really don’t care what mockArticleReader.next() returns, we still need to specify a return value for it by using expect(…).andReturn(…).

With expect(…), EasyMock is expecting the method to return a value or throw an Exception.

If we simply do:

mockArticleReader.next();
replay(mockArticleReader);

EasyMock will complain about this, as it requires a call on expect(…).andReturn(…) if the method returns anything.

If it’s a void method, we can expect its action using expectLastCall() like this:

mockArticleReader.someVoidMethod();
expectLastCall();
replay(mockArticleReader);

5.2. Replay Order

If we need actions to be replayed in a specific order, we can be more strict:

@Test
public void whenReadNextAndSkimTopics_thenAllAllowed(){
    ArticleReader mockArticleReader
      = strictMock(ArticleReader.class);
    BaeldungReader baeldungReader
      = new BaeldungReader(mockArticleReader);

    expect(mockArticleReader.next()).andReturn(null);
    expect(mockArticleReader.ofTopic("easymock")).andReturn(null);
    replay(mockArticleReader);

    baeldungReader.readNext();
    baeldungReader.readTopic("easymock");

    verify(mockArticleReader);
}

In this snippet, we use strictMock(…) to check the order of method calls. For mocks created by mock(…) and strictMock(…), any unexpected method calls would cause an AssertionError.

To allow any method call for the mock, we can use niceMock(…):

@Test
public void whenReadNextAndOthers_thenAllowed(){
    ArticleReader mockArticleReader = niceMock(ArticleReader.class);
    BaeldungReader baeldungReader = new BaeldungReader(mockArticleReader);

    expect(mockArticleReader.next()).andReturn(null);
    replay(mockArticleReader);

    baeldungReader.readNext();
    baeldungReader.readTopic("easymock");

    verify(mockArticleReader);
}

Here we didn’t expect baeldungReader.readTopic(…) to be called, but EasyMock won’t complain. With niceMock(…), EasyMock only cares whether the target object performed the expected actions or not.

5.3. Mocking Exception Throws

Now, let’s continue with mocking the interface IArticleWriter and see how to handle expected Throwables. For the sketch below, we’ll assume a BaeldungReader constructor that accepts only the writer:

@Test
public void whenWriteMaliciousContent_thenArgumentIllegal() {
    // mocking and initialization, assuming a writer-only constructor
    IArticleWriter mockArticleWriter = mock(IArticleWriter.class);
    BaeldungReader baeldungReader = new BaeldungReader(mockArticleWriter);

    expect(mockArticleWriter
      .write("easymock","<body onload=alert('baeldung')>"))
      .andThrow(new IllegalArgumentException());
    replay(mockArticleWriter);

    // write malicious content and capture the exception as expectedException
    Exception expectedException = null;
    try {
        baeldungReader.write("easymock", "<body onload=alert('baeldung')>");
    } catch (Exception e) {
        expectedException = e;
    }

    verify(mockArticleWriter);
    assertEquals(
      IllegalArgumentException.class, 
      expectedException.getClass());
}

In the snippet above, we expect the articleWriter to be solid enough to detect XSS (Cross-Site Scripting) attacks.

So when the reader tries to inject malicious code into the article content, the writer should throw an IllegalArgumentException. We recorded this expected behavior using expect(…).andThrow(…).

6. Mock with Annotation

EasyMock also supports injecting mocks using annotations. To use them, we need to run our unit tests with EasyMockRunner so that it processes @Mock and @TestSubject annotations.

Let’s rewrite previous snippets:

@RunWith(EasyMockRunner.class)
public class BaeldungReaderAnnotatedTest {

    @Mock
    ArticleReader mockArticleReader;

    @TestSubject
    BaeldungReader baeldungReader = new BaeldungReader();

    @Test
    public void whenReadNext_thenNextArticleRead() {
        expect(mockArticleReader.next()).andReturn(null);
        replay(mockArticleReader);
        baeldungReader.readNext();
        verify(mockArticleReader);
    }
}

Just like with mock(…), a mock will be created for each field annotated with @Mock. And these mocks will then be injected into fields of the class annotated with @TestSubject.

In the snippet above, we didn’t explicitly initialize the articleReader field in baeldungReader. When calling baeldungReader.readNext(), we can infer that it implicitly called mockArticleReader.

That’s because mockArticleReader was injected into the articleReader field.

Note that if we want to use another test runner instead of EasyMockRunner, we can use the JUnit test rule EasyMockRule:

public class BaeldungReaderAnnotatedWithRuleTest {

    @Rule
    public EasyMockRule mockRule = new EasyMockRule(this);

    //...

    @Test
    public void whenReadNext_thenNextArticleRead(){
        expect(mockArticleReader.next()).andReturn(null);
        replay(mockArticleReader);
        baeldungReader.readNext();
        verify(mockArticleReader);
    }

}

7. Mock with EasyMockSupport

Sometimes we need to introduce multiple mocks in a single test, and we have to repeat the same calls manually:

replay(A);
replay(B);
replay(C);
//...
verify(A);
verify(B);
verify(C);

This is ugly, and we need an elegant solution.

Luckily, we have a class EasyMockSupport in EasyMock to help deal with this. It helps keep track of mocks, such that we can replay and verify them in a batch like this:

//...
public class BaeldungReaderMockSupportTest extends EasyMockSupport{

    //...

    @Test
    public void whenReadAndWriteSequentially_thenWorks(){
        expect(mockArticleReader.next()).andReturn(null)
          .times(2).andThrow(new NoSuchElementException());
        expect(mockArticleWriter.write("title", "content"))
          .andReturn("BAEL-201801");
        replayAll();

        // execute read and write operations consecutively
 
        verifyAll();
 
        assertEquals(
          NoSuchElementException.class, 
          expectedException.getClass());
        assertEquals("BAEL-201801", articleId);
    }

}

Here we mocked both articleReader and articleWriter. When setting these mocks to “replay” mode, we used the replayAll() method our test class inherits from EasyMockSupport, and used verifyAll() to verify their behaviors in a batch.

We also introduced the times(…) method in the expect phase. It helps specify how many times we expect the method to be called, so that we can avoid duplicating code.
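
Besides times(n), EasyMock’s expectation setters offer a few related call-count methods, for instance:

// each line shows an alternative call-count expectation
expect(mockArticleReader.next()).andReturn(null).anyTimes();
expect(mockArticleReader.next()).andReturn(null).atLeastOnce();
expect(mockArticleReader.next()).andReturn(null).times(1, 3);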

We can also use EasyMockSupport through delegation:

EasyMockSupport easyMockSupport = new EasyMockSupport();

@Test
public void whenReadAndWriteSequentially_thenWorks(){
    ArticleReader mockArticleReader = easyMockSupport
      .createMock(ArticleReader.class);
    IArticleWriter mockArticleWriter = easyMockSupport
      .createMock(IArticleWriter.class);
    BaeldungReader baeldungReader = new BaeldungReader(
      mockArticleReader, mockArticleWriter);

    expect(mockArticleReader.next()).andReturn(null);
    expect(mockArticleWriter.write("title", "content"))
      .andReturn("");
    easyMockSupport.replayAll();

    baeldungReader.readNext();
    baeldungReader.write("title", "content");

    easyMockSupport.verifyAll();
}

Previously, we used static methods or annotations to create and manage mocks. Under the hood, these static and annotated mocks are controlled by a global EasyMockSupport instance.

Here, we instantiated it explicitly and took all these mocks under our own control, through delegation. This may help avoid confusion if there are any name conflicts in our test code with EasyMock, or in other similar cases.

8. Conclusion

In this article, we briefly introduced the basic usage of EasyMock: how to generate mock objects, record and replay their behaviors, and verify whether they behaved correctly.

In case you’re interested, check out this article for a comparison of EasyMock, Mockito, and JMockit.

As always, the full implementation can be found over on GitHub.
