Channel: Baeldung

The Basics of JUnit 5 – A Preview


I just released the Starter Class of "Learn Spring Security":

>> CHECK OUT THE COURSE

1. Overview

JUnit is one of the most popular unit testing frameworks for Java, so it is a big deal in the developer community when new major versions come out. An alpha version of JUnit 5 was released in early February, and it contains a number of exciting innovations.

This article will explore the new features of this release, and the main differences from previous versions.

2. Dependencies and Setup

Installing JUnit 5 is pretty straightforward. Just add the following dependency to your pom.xml:

<dependency>
    <groupId>org.junit</groupId>
    <artifactId>junit5-api</artifactId>
    <version>5.0.0-ALPHA</version>
</dependency>

However, at the moment no IDEs support JUnit 5 yet, so you will also have to configure the Maven Surefire plugin with the JUnit 5 provider:

<plugin>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.19</version>
    <dependencies>
        <dependency>
            <groupId>org.junit</groupId>
            <artifactId>surefire-junit5</artifactId>
            <version>5.0.0-SNAPSHOT</version>
        </dependency>
    </dependencies>
</plugin>

It is important to note that this version requires Java 8 to work.

When creating a test, make sure you import org.junit.gen5.api.Test, not org.junit.Test. Also, the test methods no longer need to be public; package-private visibility will do.

3. What’s New

JUnit 5 tries to take full advantage of the new features from Java 8, especially lambda expressions.

3.1. Assertions

Assertions have been moved to org.junit.gen5.api.Assertions and have been improved significantly. As mentioned earlier, you can now use lambdas in assertions:

@Test
void lambdaExpressions() {
    List<Integer> numbers = Arrays.asList(1, 2, 3);
    assertTrue(numbers
        .stream()
        .mapToInt(i -> i)
        .sum() > 5, () -> "Sum should be greater than 5");
}

Although the example above is trivial, one advantage of using the lambda expression for the assertion message is that it is lazily evaluated, which can save time and resources if the message construction is expensive.
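To see that laziness in isolation, here is a small plain-Java sketch, independent of JUnit, using a hypothetical check() helper that accepts a Supplier-based message; the supplier is only invoked when the condition fails:

```java
import java.util.function.Supplier;

public class LazyMessageDemo {

    static int messageBuilds = 0;

    // hypothetical stand-in for an assertion that takes a lazy message
    static void check(boolean condition, Supplier<String> message) {
        if (!condition) {
            throw new AssertionError(message.get());
        }
    }

    static String expensiveMessage() {
        messageBuilds++; // count how often the message is actually constructed
        return "Sum should be greater than 5";
    }

    public static void main(String[] args) {
        check(1 + 2 + 3 > 5, LazyMessageDemo::expensiveMessage);
        // the assertion passed, so the message supplier was never invoked
        System.out.println(messageBuilds); // prints 0
    }
}
```

JUnit's Supplier-accepting assertion overloads follow the same pattern: the message is only built when the assertion actually fails.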

It is also now possible to group assertions with assertAll(), which will report any failed assertions within the group with a MultipleFailuresError:

@Test
void groupAssertions() {
    int[] numbers = {0, 1, 2, 3, 4};
    assertAll("numbers",
        () -> assertEquals(1, numbers[0]),
        () -> assertEquals(3, numbers[3]),
        () -> assertEquals(1, numbers[4])
    );
}

This means it is now safer to make more complex assertions, as you will be able to pinpoint the exact location of any failure.

3.2. Assumptions

Assumptions are used to run tests only if certain conditions are met. This is typically used for external conditions that are required for the test to run properly, but which are not directly related to whatever is being tested.

You can declare an assumption with assumeTrue(), assumeFalse(), and assumingThat(). 

@Test
void trueAssumption() {
    assumeTrue(5 > 1);
    assertEquals(5 + 2, 7);
}

@Test
void falseAssumption() {
    assumeFalse(5 < 1);
    assertEquals(5 + 2, 7);
}

@Test
void assumptionThat() {
    String someString = "Just a string";
    assumingThat(
        someString.equals("Just a string"),
        () -> assertEquals(2 + 2, 4)
    );
}

If an assumption fails, a TestAbortedException is thrown and the test is simply skipped.

Assumptions also understand lambda expressions.
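To make the skip-versus-fail distinction concrete, here is a plain-Java sketch; the TestAbortedException and the tiny runner below are hypothetical stand-ins, not JUnit's own classes, showing how a runner can treat an aborted test differently from a failed one:

```java
public class AssumptionDemo {

    // hypothetical stand-in for JUnit's TestAbortedException
    static class TestAbortedException extends RuntimeException {
        TestAbortedException(String message) {
            super(message);
        }
    }

    static void assumeTrue(boolean assumption) {
        if (!assumption) {
            throw new TestAbortedException("Assumption failed; test skipped");
        }
    }

    // returns "passed", "failed", or "skipped" depending on how the body ends
    static String runTest(Runnable testBody) {
        try {
            testBody.run();
            return "passed";
        } catch (TestAbortedException e) {
            return "skipped"; // aborted tests are reported as skipped, not failed
        } catch (AssertionError e) {
            return "failed";
        }
    }

    public static void main(String[] args) {
        System.out.println(runTest(() -> assumeTrue(5 > 1))); // prints "passed"
        System.out.println(runTest(() -> assumeTrue(5 < 1))); // prints "skipped"
    }
}
```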

3.3. Exceptions

JUnit 5 improves support for working with exceptions. An expectThrows() method has been added that verifies that a piece of code throws an exception of a given type:

@Test
void shouldThrowException() {
    Throwable exception = expectThrows(UnsupportedOperationException.class, () -> {
        throw new UnsupportedOperationException("Not supported");
    });
    assertEquals("Not supported", exception.getMessage());
}

As the example demonstrates, JUnit 5 offers more control over the thrown exceptions than JUnit 4 used to. The most obvious implication is that it is now possible to easily get any information we might need about the exception, as we did in our example by inspecting the exception message.

3.4. Nested Tests

Nested tests have been added to allow developers to express complex relationships between different groups of tests. The syntax is quite straightforward – all you have to do is annotate an inner class with @Nested.

The JUnit documentation offers an elaborate example, which illustrates one of the possible uses.

3.5. Disabling Tests

Tests can now be disabled with the @Disabled annotation:

@Test
@Disabled
void disabledTest() {
    assertTrue(false);
}

This test will not be run. The @Disabled annotation can be applied to either a test class or a test method and is equivalent to @Ignore from JUnit 4.

3.6. Tagging

Tags are the replacement for Categories from JUnit 4. They are applied with the @Tag annotation and allow developers to group and filter tests.

@Tag("Test case")
public class TaggedTest {

    @Test
    @Tag("Method")
    void testMethod() {
        assertEquals(4, 2 + 2);
    }
}

4. Conclusion

This writeup was a quick overview of the changes that are coming with JUnit 5.

It is important to note that at the time of writing only the Alpha build was available, so some things might still change before release.

The examples used in this article can be found in the linked GitHub project.



Introduction to Spring REST Docs


1. Overview

Spring REST Docs generates documentation for RESTful services that is both accurate and readable. It combines hand-written documentation with auto-generated document snippets produced with Spring MVC tests.

2. Advantages

One major philosophy behind the project is the use of tests to produce the documentation. This ensures that the documentation generated always accurately matches the actual behavior of the API. Additionally, the output is ready to be processed by Asciidoctor, a publishing toolchain centered around the AsciiDoc syntax. This is the same tool that is used to generate the Spring Framework’s documentation.

These approaches reduce the limitations imposed by other frameworks. Spring REST Docs produces documentation that is accurate, concise, and well-structured. This documentation then allows the web service consumers to get the information they need with a minimum of fuss.

The tool has some other advantages, such as:

  • curl and http request snippets are generated
  • easy to package documentation in the project's jar file
  • easy to add extra information to the snippets
  • supports both JSON and XML

3. Dependencies

The ideal way to get started with Spring REST Docs in a project is through a dependency management system. Here, we are using Maven as the build tool, so the dependency below can be copied and pasted into your POM:

<dependency>
    <groupId>org.springframework.restdocs</groupId>
    <artifactId>spring-restdocs-mockmvc</artifactId>
    <version>1.0.1.RELEASE</version>
</dependency>

You can also check Maven Central for the latest version of the dependency.

4. Configuration

Spring REST Docs uses the Spring MVC Test framework to make requests to the REST services that are to be documented. This produces documentation snippets for the request and the resulting response.

The very first step in generating documentation snippets is to declare a public RestDocumentation field that is annotated as a JUnit @Rule. The RestDocumentation rule is configured with the output directory into which the generated snippets should be saved. For example, this directory can be the build output directory of Maven:

@Rule
public RestDocumentation restDocumentation = new RestDocumentation("target/generated-snippets");

Next, we set up the MockMvc context so that it will be configured to produce documentation.

@Autowired
private WebApplicationContext context;

private MockMvc mockMvc;

@Before
public void setUp(){
    this.mockMvc = MockMvcBuilders.webAppContextSetup(this.context)
      .apply(documentationConfiguration(this.restDocumentation))
      .build();
}

The MockMvc object is configured using a RestDocumentationMockMvcConfigurer. An instance of this class can be obtained from the static documentationConfiguration() method on org.springframework.restdocs.mockmvc.MockMvcRestDocumentation.

Let’s create a CRUD RESTful service that we can document:

@RestController
@RequestMapping("/crud")
public class CRUDController {

    @RequestMapping(method = RequestMethod.GET)
    public List<CrudInput> read() {
        List<CrudInput> returnList = new ArrayList<CrudInput>();
        return returnList;
    }

    @ResponseStatus(HttpStatus.CREATED)
    @RequestMapping(method = RequestMethod.POST)
    public HttpHeaders save(@RequestBody CrudInput crudInput) {
        HttpHeaders httpHeaders = new HttpHeaders();
        httpHeaders.setLocation(linkTo(CRUDController.class).slash(crudInput.getTitle()).toUri());
        return httpHeaders;
    }

    @RequestMapping(value = "/{id}", method = RequestMethod.DELETE)
    void delete(@PathVariable("id") long id) {
        // delete the resource with the given id
    }
}

Back in the tests, a MockMvc instance has already been created. It can be used to call the service we just created, and will document the request and response. The method below will generate the document in the configured directory:


@Test
public void index() throws Exception {
    this.mockMvc.perform(
      get("/")
        .accept(MediaTypes.HAL_JSON))
        .andExpect(status().isOk())
        .andExpect(jsonPath("_links.crud", is(notNullValue())));
}

It is time to create the document snippets for the REST service:

@Test
public void indexExample() throws Exception {
    this.document.snippets(
      links(
        linkWithRel("notes").description("The <<resources-notes,Notes resource>>"), 
        linkWithRel("tags").description("The <<resources-tags,Tags resource>>")
      ),
      responseFields(
        fieldWithPath("_links").description("<<resources-index-links,Links>> to other resources")
      )
    );
    this.mockMvc.perform(get("/rest/api")).andExpect(status().isOk());
}

5. Output

Once the Maven build runs successfully, the documentation snippets are generated and saved to the target/generated-snippets folder.

The generated output contains information about the service: example curl calls, the HTTP request and response, and the links/endpoints exposed by the service:

CURL Command
----
$ curl 'http://localhost:8080/api' -i
----

HTTP - REST Response
[source,http]
[source,http]
----
HTTP/1.1 200 OK
Content-Type: application/hal+json
Content-Length: 160

{
  "_links" : {
    "notes" : {
      "href" : "http://localhost:8080/testapi"
    },
    "tags" : {
      "href" : "http://localhost:8080/testapi"
    }
  }
}
----

6. Using Snippets to Create Documentation

In order to use the snippets in a larger document, you can reference them using Asciidoc includes. In our case, we have created a document in src/docs called api-guide.adoc.

In that document, if we wish to reference the response headers snippet, we can include it using a placeholder {snippets} that will be replaced by Maven when it processes the document:

[[overview-headers]]
== Headers

Every response has the following header(s):

include::{snippets}/headers-example/response-headers.adoc[]

7. Asciidoctor Maven Plugin

To convert the API guide from Asciidoc to a readable format, we can add a Maven plugin to the build lifecycle. There are several steps to enable this:

  1. Apply the Asciidoctor plugin to the pom.xml
  2. Add a dependency on spring-restdocs-mockmvc in test scope as mentioned in the dependencies section
  3. Configure a property to define the output location for generated snippets
  4. Configure the test task to add the snippets directory as an output
  5. Configure the asciidoctor task
  6. Define an attribute named snippets that can be used when including the generated snippets in your documentation
  7. Make the task depend on the test task so that the tests are run before the documentation is created
  8. Configure the snippets directory as an input. All the generated snippets will be created under this directory

Add the snippet directory as a property in pom.xml so the Asciidoctor plugin can use this path to generate the snippets under this folder:

<properties>
    <snippetsDirectory>${project.build.directory}/generated-snippets</snippetsDirectory>
</properties>

The Maven plugin configuration in the pom.xml to generate the Asciidoc snippets from the build is as below:

<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-surefire-plugin</artifactId>
            <configuration>
                <includes>
                    <include>**/*Documentation.java</include>
                </includes>
            </configuration>
        </plugin>
        <plugin>
            <groupId>org.asciidoctor</groupId>
            <artifactId>asciidoctor-maven-plugin</artifactId>
            <version>1.5.2</version>
            <executions>
                <execution>
                    <id>generate-docs</id>
                    <phase>package</phase>
                    <goals>
                        <goal>process-asciidoc</goal>
                    </goals>
                    <configuration>
                        <backend>html</backend>
                        <doctype>book</doctype>
                        <attributes>
                            <snippets>${snippetsDirectory}</snippets>
                        </attributes>
                        <sourceDirectory>src/docs/asciidocs</sourceDirectory>
                        <outputDirectory>target/generated-docs</outputDirectory>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

8. API Doc Generation Process

When the Maven build runs and the tests are executed, all the snippets are generated under the configured target/generated-snippets directory. Once the snippets are generated, the build process generates the HTML output.

The generated HTML file is formatted and readable, so the REST documentation is ready to use. Every time the Maven build runs, the documentation is regenerated with the latest updates.

9. Conclusion

Having no documentation is better than wrong documentation, and Spring REST Docs will help generate accurate documentation for RESTful services.

As an official Spring project, it accomplishes its goals by using the Spring MVC Test library. This method of generating documentation can help support a test-driven approach to developing and documenting RESTful APIs.

You can find an example project based on the code in this article in the linked GitHub repository.


Java Web Weekly, Issue 126


At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> How We Migrated Our Backend to Spring Boot in 3 Weeks [stormpath.com]

Definitely an interesting writeup of how Stormpath basically migrated their entire codebase to Spring Boot.

>> We Crunched 1 Billion Java Logged Errors – Here’s What Causes 97% of Them [takipi.com]

Some fun numbers and insights, despite the link-baity title.

>> How to persist additional attributes for a relationship with JPA and Hibernate [thoughts-on-java.org]

Some advanced Hibernate/JPA goodness about more complex relationship information and understanding how to handle these well.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Hash and Salt Your Users’ Passwords [martinfowler.com]

When it comes to password storage, this writeup covers the basics, sure – but it does so in a well structured, clear way. Definitely worth a quick read.

Also worth reading:

3. Musings

>> Observations and thoughts on the LinkedIn data breach [troyhunt.com]

Very interesting, in-depth analysis of the LinkedIn data breach.

>> Solve Small Problems [daedtech.com]

This is definitely one to read, as it sheds some light into how real progress is usually made – one unassuming brick at a time.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Don’t turn your back [dilbert.com]

>> Just do your job and leave the strategy to management [dilbert.com]

>> I’ll use this dummy to demonstrate … [dilbert.com]

5. Pick of the Week

>> “We only hire the best” [signalvnoise.com]


Java Web Weekly, Issue 127


At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> Zero Downtime Deployment with a Database [spring.io]

It’s no coincidence that this is first on the list this week – Marcin wrote a well researched, solid and referenceable resource for handling deployments in production.

>> Back to the CompletableFuture: Java 8 Feature Highlight [takipi.com]

Brushing up on the basics is always a good idea, and the CompletableFuture was such a useful addition in Java 8.

The fact that the examples are built using Marvel superheroes is just gravy.

>> JVM JIT optimization techniques [advancedweb.hu]

A comprehensive introduction to the underpinnings of how the JVM actually optimizes and runs code.

>> The Open Session In View Anti-Pattern [vladmihalcea.com]

A low level and highly useful deep-dive into how using the Open Session In View “solution” is essentially a code smell for a root problem in the architecture of the system.

>> Oracle Moves In Strange Ways [adam-bien.com]

A very interesting lesson in the history of Java EE and a quick read.

>> Why Microservices Should Be Event Driven: Autonomy vs Authority [christianposta.com]

As it’s becoming clearer and clearer after the ruckus has died down – microservices require a fundamentally different way of architecting our systems.

>> How to use PostgreSQL’s JSONB data type with Hibernate [thoughts-on-java.org]

Some Hibernate goodness with the JSON support in PostgreSQL.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Modelling Microservice Patterns in Code [vanilla-java.github.io]

A writeup that can help define and clarify the mental model of certain client-server interactions.

Also worth reading:

3. Musings

>> The emergence of historical mega breaches [troyhunt.com] and >> Dating the ginormous MySpace breach [troyhunt.com]

Some very interesting things happening in the security ecosystem this week, with a few unprecedented data breaches seeing the light of day all at once.

>> Bridging the Communication Gap Between Developers and Architects [daedtech.com]

Consensus about what an “architect” should be is unlikely, but defining a few useful things that they should definitely do is easier. Some interesting take-aways here.

>> OutcomeOriented [martinfowler.com] and >> ActivityOriented [martinfowler.com]

Organizing teams well is a tough nut to crack. If you’re working on cracking it – these two short writeups are a good read.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> How did people do sarcasm before the internet? [dilbert.com]

>> I remember a time when I had to listen [dilbert.com]

>> The least important thing I do is more important than all of you put together [dilbert.com]

5. Pick of the Week

>> Immortality Begins at Forty [ribbonfarm.com]


JavaServer Faces (JSF) with Spring


1. Overview

In this article we will look at a recipe for accessing beans defined in Spring from within a JSF managed bean and a JSF page, for the purpose of delegating the execution of business logic to the Spring beans.

This article presumes the reader has a prior understanding of both JSF and Spring separately. The article is based on the Mojarra implementation of JSF.

2. In Spring

Let’s have the following bean defined in Spring. The UserManagementDAO bean adds a username to an in-memory store, and it’s defined by the following interface:

public interface UserManagementDAO {
    boolean createUser(String newUserData);
}

The implementation of the bean is configured using the following Java config:

public class SpringCoreConfig {
    @Bean
    public UserManagementDAO userManagementDAO() {
        return new UserManagementDAOImpl();
    }
}

Or using the following XML configuration:

<bean class="org.springframework.context.annotation.CommonAnnotationBeanPostProcessor" />
<bean class="com.baeldung.dao.UserManagementDAOImpl" id="userManagementDAO"/>

We define the bean in XML, and register CommonAnnotationBeanPostProcessor to ensure that the @PostConstruct annotation is picked up.
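The article does not show UserManagementDAOImpl itself; a minimal in-memory sketch might look like the following (the interface is repeated here so the snippet is self-contained, and the Set-backed store with duplicate rejection is an assumption):

```java
import java.util.HashSet;
import java.util.Set;

interface UserManagementDAO {
    boolean createUser(String newUserData);
}

public class UserManagementDAOImpl implements UserManagementDAO {

    // simple in-memory store for usernames
    private final Set<String> users = new HashSet<>();

    @Override
    public boolean createUser(String newUserData) {
        if (newUserData == null || newUserData.isEmpty()) {
            return false;
        }
        // Set.add() returns false when the username is already present
        return users.add(newUserData);
    }
}
```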

3. Configuration

The following sections explain the configuration items that enable the integration of the Spring and JSF contexts.

3.1. Java Configuration Without web.xml

By implementing WebApplicationInitializer, we are able to programmatically configure the ServletContext. The following is the onStartup() implementation inside the MainWebAppInitializer class:

public void onStartup(ServletContext sc) throws ServletException {
    AnnotationConfigWebApplicationContext root = new AnnotationConfigWebApplicationContext();
    root.register(SpringCoreConfig.class);
    sc.addListener(new ContextLoaderListener(root));
}

The AnnotationConfigWebApplicationContext bootstraps the Spring context and adds the beans by registering the SpringCoreConfig class.

Similarly, in the Mojarra implementation there is a FacesInitializer class that configures the FacesServlet. To use this configuration, it is enough to extend FacesInitializer. The complete implementation of MainWebAppInitializer is now as follows:

public class MainWebAppInitializer extends FacesInitializer implements WebApplicationInitializer {
    public void onStartup(ServletContext sc) throws ServletException {
        AnnotationConfigWebApplicationContext root = new AnnotationConfigWebApplicationContext();
        root.register(SpringCoreConfig.class);
        sc.addListener(new ContextLoaderListener(root));
    }
}

3.2. With web.xml

We’ll start by configuring the ContextLoaderListener in the web.xml file of the application:

<listener>
    <listener-class>
        org.springframework.web.context.ContextLoaderListener
    </listener-class>
</listener>

This listener is responsible for starting up the Spring application context when the web application starts up. By default, it will look for a Spring configuration file named applicationContext.xml.
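If your Spring configuration file has a different name or location, that default can be overridden with a contextConfigLocation context parameter in web.xml (the path below is only an example):

```xml
<context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>/WEB-INF/spring/app-context.xml</param-value>
</context-param>
```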

3.3. faces-config.xml

We now configure the SpringBeanFacesELResolver in the faces-config.xml file:

<el-resolver>org.springframework.web.jsf.el.SpringBeanFacesELResolver</el-resolver>

An EL resolver is a pluggable component supported by the JSF framework, allowing us to customize the behavior of the JSF runtime when it evaluates Expression Language (EL) expressions. This EL resolver will allow the JSF runtime to access Spring components via EL expressions defined in JSF.

4. Accessing Spring Beans in JSF

At this point, our JSF web application is primed to access our Spring bean from either a JSF backing bean, or from a JSF page.

4.1. From a Backing Bean in JSF 2.0

The Spring bean can now be accessed from a JSF backing bean. Depending on the version of JSF you’re running, there are two possible methods. With JSF 2.0, you use the @ManagedProperty annotation on the JSF managed bean.

@ManagedBean(name = "registration")
@RequestScoped
public class RegistrationBean implements Serializable {
    @ManagedProperty(value = "#{userManagementDAO}")
    private transient UserManagementDAO theUserDao;

    private String userName;
    // getters and setters
}

Note that the getter and setter are mandatory when using @ManagedProperty.
Now, to assert the accessibility of a Spring bean from a managed bean, we will add the createNewUser() method:

public void createNewUser() {
    FacesContext context = FacesContext.getCurrentInstance();
    boolean operationStatus = theUserDao.createUser(userName);
    context.isValidationFailed();
    if (operationStatus) {
        operationMessage = "User " + userName + " created";
    }
}

The gist of the method is using the injected theUserDao Spring bean to access its functionality.

4.2. From a Backing Bean in JSF 2.2

Another approach, valid only in JSF 2.2 and above, is to use CDI’s @Inject annotation. This is applicable to JSF managed beans (with the @ManagedBean annotation) and CDI-managed beans (with the @Named annotation).

Indeed, with a CDI annotation, this is the only valid method of injecting the bean:

@Named("registration")
@RequestScoped
public class RegistrationBean implements Serializable {
    @Inject
    UserManagementDAO theUserDao;
}

With this approach, the getter and setter are not necessary. Also note that the EL expression is absent.

4.3. From a JSF View

The createNewUser() method will be triggered from the following JSF page:

<h:form>
    <h:panelGrid id="theGrid" columns="3">
        <h:outputText value="Username"/>
        <h:inputText id="firstName" binding="#{userName}" required="true"
          requiredMessage="#{msg['message.valueRequired']}" value="#{registration.userName}"/>
        <h:message for="firstName" style="color:red;"/>
        <h:commandButton value="#{msg['label.saveButton']}" action="#{registration.createNewUser}"
          process="@this"/>
        <h:outputText value="#{registration.operationMessage}" style="color:green;"/>
    </h:panelGrid>
</h:form>

To render the page, start the server and navigate to:

http://localhost:8080/jsf/index.jsf

We can also use EL in the JSF view to access the Spring bean. To test it, it is enough to change the command button line in the JSF page shown above to:

<h:commandButton value="Save"
  action="#{registration.userDao.createUser(userName.value)}"/>

Here, we call the createUser method directly on the Spring DAO, passing the bound value of userName to the method from within the JSF page, circumventing the managed bean altogether.

5. Conclusion

We examined a basic integration between the Spring and JSF contexts, where we’re able to access a Spring bean in a JSF bean and page.

It’s worth noting that while the JSF runtime provides the pluggable architecture that enables the Spring framework to provide integration components, the annotations from the Spring framework cannot be used in a JSF context and vice versa.

What this means is that you’ll not be able to use annotations like @Autowired or @Component etc. in a JSF managed bean, or use the @ManagedBean annotation on a Spring managed bean. You can, however, use the @Inject annotation in both a JSF 2.2+ managed bean and a Spring bean (because Spring supports JSR-330).

The source code that accompanies this article is available at GitHub.


Multiple Buckets and Spatial View Queries in Spring Data Couchbase


1. Introduction

In this third tutorial on Spring Data Couchbase, we demonstrate the configuration required to support a Couchbase data model that spans multiple buckets, and we introduce the use of Spatial views for querying multi-dimensional data.

2. Data Model

In addition to the Person entity from our first tutorial and the Student entity from our second tutorial, we define a Campus entity for this tutorial:

@Document
public class Campus {
    @Id
    private String id;

    @Field
    @NotNull
    private String name;

    @Field
    @NotNull
    private Point location;

    // standard getters and setters
}

3. Java Configuration for Multiple Couchbase Buckets

In order to use multiple buckets in your project, you will need version 2.0.0 or later of the Spring Data Couchbase module, and you will need to use Java-based configuration, because the XML-based configuration supports only single-bucket scenarios.

Here is the dependency that we include in our Maven pom.xml file:

<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-couchbase</artifactId>
    <version>2.1.1.RELEASE</version>
</dependency>

3.1. Defining the Bucket Bean

In our Introduction to Spring Data Couchbase tutorial, we designated “baeldung” as the name of our default Couchbase bucket for use with Spring Data.

We will store Campus entities in the “baeldung2” bucket.

To make use of a second bucket, we first must define a @Bean for the Bucket itself in our Couchbase configuration class:

@Bean
public Bucket campusBucket() throws Exception {
    return couchbaseCluster().openBucket("baeldung2", "");
}

3.2. Configuring the Template Bean

Next, we define a @Bean for the CouchbaseTemplate to be used with this bucket:

@Bean
public CouchbaseTemplate campusTemplate() throws Exception {
    CouchbaseTemplate template = new CouchbaseTemplate(
      couchbaseClusterInfo(), campusBucket(),
      mappingCouchbaseConverter(), translationService());
    template.setDefaultConsistency(getDefaultConsistency());
    return template;
}

3.3. Mapping the Repositories

Finally, we define a custom mapping of Couchbase repository operations so that the Campus entity class will use the new template and bucket, while other entity classes will continue to use the default template and bucket:

@Override
public void configureRepositoryOperationsMapping(
  RepositoryOperationsMapping baseMapping) {
    try {
        baseMapping.mapEntity(Campus.class, campusTemplate());
    } catch (Exception e) {
        //custom Exception handling
    }
}

4. Querying Spatial or Multi-Dimensional Data

Couchbase provides native support for running bounding box queries against two-dimensional data, such as geographical data, using a special type of view known as a Spatial view.

A bounding box query is a range query that uses the southwestern-most [x,y] point of the box as its startRange parameter and the northeastern-most [x,y] point as its endRange parameter.
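As a plain-Java illustration of these semantics (not the Couchbase API), a point lies inside the box exactly when each of its coordinates falls between the corresponding startRange and endRange values:

```java
public class BoundingBoxDemo {

    // true if point lies within the box spanned by startRange (southwest)
    // and endRange (northeast), inclusive of the edges
    static boolean inBoundingBox(double[] point, double[] startRange, double[] endRange) {
        for (int i = 0; i < point.length; i++) {
            if (point[i] < startRange[i] || point[i] > endRange[i]) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        double[] southwest = { -122.5, 37.6 };  // [x,y] = [longitude, latitude]
        double[] northeast = { -122.3, 37.9 };

        System.out.println(inBoundingBox(new double[] { -122.4, 37.8 }, southwest, northeast)); // true
        System.out.println(inBoundingBox(new double[] { -121.9, 37.8 }, southwest, northeast)); // false
    }
}
```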

Spring Data extends Couchbase’s native bounding box query feature to queries involving circles and polygons using an algorithm that seeks to eliminate false positive matches, and it also provides support for queries involving more than two dimensions.

Spring Data simplifies the creation of multi-dimensional queries through a set of keywords that can be used to define derived queries in Couchbase repositories.

4.1. Supported Data Types

Spring Data Couchbase repository queries support data types from the org.springframework.data.geo package, including Point, Box, Circle, Polygon, and Distance.

4.2. Derived Query Keywords

In addition to the standard Spring Data repository keywords, Couchbase repositories support the following keywords in derived queries involving two dimensions:

  • Within, InWithin (takes two Point parameters defining a bounding box)
  • Near, IsNear (takes a Point and Distance as parameters)

And the following keywords may be used for queries involving more than two dimensions:

  • Between (for adding a single numerical value to both the startRange and endRange)
  • GreaterThan, GreaterThanEqual, After (for adding a single numerical value to the startRange)
  • LessThan, LessThanEqual, Before (for adding a single numerical value to the endRange)

Here are some examples of derived query methods using these keywords:

  • findByLocationNear
  • findByLocationWithin
  • findByLocationNearAndPopulationGreaterThan
  • findByLocationWithinAndAreaLessThan
  • findByLocationNearAndTuitionBetween

5. Defining the Repository

Repository methods backed by Spatial views must be decorated with the @Dimensional annotation, which specifies the design document name, view name, and number of dimensions used to define the view’s key (default 2 if not otherwise specified).

5.1. The CampusRepository Interface

Here in our CampusRepository interface, we declare two methods — one that uses traditional Spring Data keywords, backed by a MapReduce view, and one that uses dimensional Spring Data keywords, backed by a Spatial view:

public interface CampusRepository extends CrudRepository<Campus, String> {

    @View(designDocument="campus", viewName="byName")
    Set<Campus> findByName(String name);

    @Dimensional(dimensions=2, designDocument="campus_spatial",
      spatialViewName="byLocation")
    Set<Campus> findByLocationNear(Point point, Distance distance);
}

5.2. Spatial Views

Spatial views are written as JavaScript functions, much like MapReduce views. Unlike MapReduce views, which consist of both a map function and a reduce function, Spatial views consist of only a spatial function and may not coexist in the same Couchbase design document as MapReduce views.

For our Campus entities, we will create a design document named “campus_spatial” containing a Spatial view named “byLocation” with the following function:

function (doc) {
  if (doc.location &&
      doc._class == "org.baeldung.spring.data.couchbase.model.Campus") {
    emit([doc.location.x, doc.location.y], null);
  }
}

As this example demonstrates, when you write a Spatial view function, the key used in the emit function call must be an array of two or more values.

5.3. MapReduce Views

To provide full support for our repository, we must create a design document named “campus” containing two MapReduce views: “all” and “byName”.

Here is the map function for the “all” view:

function (doc, meta) {
  if(doc._class == "org.baeldung.spring.data.couchbase.model.Campus") {    
    emit(meta.id, null);
  }
}

And here is the map function for the “byName” view:

function (doc, meta) {
  if(doc._class == "org.baeldung.spring.data.couchbase.model.Campus" &&
     doc.name) {    
    emit(doc.name, null);
  }
}

6. Conclusion

We showed how to configure your Spring Data Couchbase project to support the use of multiple buckets, and we demonstrated how to use the repository abstraction to write spatial view queries against multi-dimensional data.

You can view the complete source code for this tutorial in the GitHub project.

To learn more about Spring Data Couchbase, visit the official Spring Data Couchbase project site.


Guide to Spring @Autowired


1. Overview

Starting with Spring 2.5, the framework introduced a new style of Dependency Injection driven by the @Autowired annotation. This annotation allows Spring to resolve and inject collaborating beans into your bean.

In this tutorial, we will look at how to enable autowiring, the various ways to wire in beans, how to make beans optional, and how to resolve bean conflicts using the @Qualifier annotation, along with potential exception scenarios.

2. Enabling @Autowired Annotations

If you are using Java-based configuration in your application, you can enable annotation-driven injection by using AnnotationConfigApplicationContext to load your Spring configuration as below:

@Configuration
@ComponentScan("com.baeldung.autowire.sample")
public class AppConfig {}

As an alternative, it can be enabled by declaring <context:annotation-config/> in the Spring XML configuration file.

3. Using @Autowired

Once annotation injection is enabled, autowiring can be used on properties, setters and constructors.

3.1. @Autowired on Properties

The annotation can be used directly on properties, therefore eliminating the need for getters and setters:

@Component("fooFormatter")
public class FooFormatter {

    public String format() {
        return "foo";
    }
}
@Component
public class FooService {
    
    @Autowired
    private FooFormatter fooFormatter;

}

In the above example, Spring looks for and injects fooFormatter when FooService is created.

3.2. @Autowired on Setters

The @Autowired annotation can also be used on setter methods. In the below example, the setter method is called with an instance of FooFormatter when FooService is created:

public class FooService {

    private FooFormatter fooFormatter;

    @Autowired
    public void setFooFormatter(FooFormatter fooFormatter) {
            this.fooFormatter = fooFormatter;
    }
}

3.3. @Autowired on Constructors

The @Autowired annotation can also be used on constructors. In the below example, an instance of FooFormatter is injected as a constructor argument when FooService is created:

public class FooService {

    private FooFormatter fooFormatter;

    @Autowired
    public FooService(FooFormatter fooFormatter) {
        this.fooFormatter = fooFormatter;
    }
}

4. @Autowired and Optional Dependencies

Spring expects @Autowired dependencies to be available when the dependent bean is being constructed. If the framework cannot resolve a bean for wiring, it will throw the following exception and prevent the Spring container from starting successfully:

Caused by: org.springframework.beans.factory.NoSuchBeanDefinitionException: 
No qualifying bean of type [com.autowire.sample.FooDAO] found for dependency: 
expected at least 1 bean which qualifies as autowire candidate for this dependency. 
Dependency annotations: 
{@org.springframework.beans.factory.annotation.Autowired(required=true)}

To prevent this, the dependency can be marked as optional:

public class FooService {

    @Autowired(required = false)
    private FooDAO dataAccessor; 
    
}

5. Autowire Disambiguation

By default, Spring resolves @Autowired entries by type. If more than one bean of the same type is available in the container, the framework will throw a fatal exception indicating that more than one bean is available for autowiring.
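The by-type resolution and its failure mode can be sketched with a plain-Java toy (illustrative only; the class and exception message are hypothetical stand-ins, not Spring's container code):

```java
import java.util.ArrayList;
import java.util.List;

public class ByTypeLookupSketch {

    // Toy "container" lookup: resolve a dependency purely by assignable type.
    static <T> T resolveByType(Class<T> type, List<Object> beans) {
        List<T> matches = new ArrayList<>();
        for (Object bean : beans) {
            if (type.isInstance(bean)) {
                matches.add(type.cast(bean));
            }
        }
        if (matches.size() != 1) {
            // Mirrors the spirit of Spring's NoUniqueBeanDefinitionException
            throw new IllegalStateException(
              "expected single matching bean but found " + matches.size());
        }
        return matches.get(0);
    }

    public static void main(String[] args) {
        List<Object> beans = List.of("fooFormatter", "barFormatter", 42);
        System.out.println(resolveByType(Integer.class, beans)); // unique match: 42
        try {
            resolveByType(String.class, beans); // two Strings: ambiguous
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```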

5.1. Autowiring by @Qualifier

The @Qualifier annotation can be used to hint at and narrow down the required bean:

@Component("fooFormatter")
public class FooFormatter implements Formatter {

    public String format() {
        return "foo";
    }
}
@Component("barFormatter")
public class BarFormatter implements Formatter {

    public String format() {
        return "bar";
    }
}
public class FooService {
    
    @Autowired
    private Formatter formatter;

}

Since there are two concrete implementations of Formatter available for the Spring container to inject, Spring will throw a NoUniqueBeanDefinitionException when constructing FooService:

Caused by: org.springframework.beans.factory.NoUniqueBeanDefinitionException: 
No qualifying bean of type [com.autowire.sample.Formatter] is defined: 
expected single matching bean but found 2: barFormatter,fooFormatter

This can be avoided by narrowing the implementation using a @Qualifier annotation:

public class FooService {
    
    @Autowired
    @Qualifier("fooFormatter")
    private Formatter formatter;

}

By specifying the @Qualifier with the name of the specific implementation, in this case fooFormatter, we can avoid ambiguity when Spring finds multiple beans of the same type.

Please note that the value inside the @Qualifier annotation matches with the name declared in the @Component annotation of our FooFormatter implementation.

5.2. Autowiring by Custom Qualifier

Spring allows us to create our own qualifier annotations. To create a custom qualifier, define an annotation and meta-annotate it with @Qualifier as below:

@Qualifier
@Target({
  ElementType.FIELD, ElementType.METHOD, ElementType.TYPE, ElementType.PARAMETER})
@Retention(RetentionPolicy.RUNTIME)
public @interface FormatterType {
    
    String value();

}

Once defined, the FormatterType can be used within various implementations to specify a custom value:

@FormatterType("Foo")
@Component
public class FooFormatter implements Formatter {

    public String format() {
        return "foo";
    }
}
@FormatterType("Bar")
@Component
public class BarFormatter implements Formatter {

    public String format() {
        return "bar";
    }
}

Once the implementations are annotated, the custom Qualifier annotation can be used as below:

@Component
public class FooService {
    
    @Autowired
    @FormatterType("Foo")
    private Formatter formatter;
    
}

The values specified in the @Target annotation restrict where the qualifier can be applied to mark injection points.

In the above code snippet, the qualifier can be applied to a field, a method, a type, or a parameter where Spring injects the bean.
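The restriction can be observed with plain reflection; the following sketch uses a hypothetical annotation (unrelated to Spring) to show that the @Target values are enforced at compile time and readable at runtime:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.Arrays;

public class TargetSketch {

    // Hypothetical annotation restricted to fields and parameters only;
    // using it on a class or method would be a compile-time error.
    @Target({ElementType.FIELD, ElementType.PARAMETER})
    @Retention(RetentionPolicy.RUNTIME)
    @interface FieldOnlyQualifier {}

    public static void main(String[] args) {
        Target target = FieldOnlyQualifier.class.getAnnotation(Target.class);
        System.out.println(Arrays.toString(target.value())); // [FIELD, PARAMETER]
    }
}
```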

5.3. Autowiring by Name

As a fallback, Spring uses the bean name as a default qualifier value.

So by defining the bean property name, in this case as fooFormatter, Spring matches that to the FooFormatter implementation and injects that specific implementation when FooService is constructed:

public class FooService {
    
    @Autowired
    private Formatter fooFormatter;
    
}

6. Conclusion

Although both @Qualifier and the bean-name fallback can be used to narrow down to a specific bean, autowiring is really all about injection by type, and that is how this container feature is best used.

The source code of this tutorial can be found in the GitHub project – this is an Eclipse-based project, so it should be easy to import and run as it is.


Spring Boot Support for jOOQ


1. Overview

This tutorial is a follow-up to the Introduction to jOOQ with Spring article, covering the ways that jOOQ can be used within a Spring Boot application.

If you have not gone through that tutorial, please take a look at it and follow the instructions in section 2 on Maven Dependencies and in section 3 on Code Generation. This will generate source code for Java classes representing tables in the sample database, including Author, Book and AuthorBook.

2. Maven Configuration

In addition to the dependencies and plugins from the previous tutorial, several other components need to be included in the Maven POM file to make jOOQ work with Spring Boot.

2.1. Dependency Management

The most common way to make use of Spring Boot is to inherit from the spring-boot-starter-parent project by declaring it in the parent element. However, this method is not always suitable, as it imposes an inheritance chain that may not be what users want in many cases.

This tutorial uses another approach: delegating the dependency management to Spring Boot. To make it happen, just add the following dependencyManagement element to the POM file:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-dependencies</artifactId>
            <version>1.3.5.RELEASE</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

2.2. Dependencies

In order for Spring Boot to control jOOQ, a dependency on the spring-boot-starter-jooq artifact needs to be declared:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-jooq</artifactId>
    <version>1.3.5.RELEASE</version>
</dependency>

3. Spring Boot Configuration

3.1. Initial Boot Config

Before we get to the jOOQ support, we’re going to start preparing things with Spring Boot.

First, we’re going to take advantage of the persistence support and improvements in Boot and define our data access information in the standard application.properties file. That way, we can skip defining the beans and making them configurable via a separate properties file.

We’ll add the URL and credentials here to define our embedded H2 database:

spring.datasource.url=jdbc:h2:~/jooq
spring.datasource.username=sa
spring.datasource.password=

We’re also going to define a simple Boot application:

@SpringBootApplication
@EnableTransactionManagement
public class Application {
    
}

We’ll leave this one simple and empty and we’ll define all other bean declarations in another configuration class –  InitialConfiguration.

3.2. Bean Configuration

Let’s now define this InitialConfiguration class:

@Configuration
public class InitialConfiguration {
    // Other declarations
}

Spring Boot has automatically generated and configured the dataSource bean based on properties set in the application.properties file, so we do not need to register it manually. The following code lets the auto-configured DataSource bean be injected into a field and shows how this bean is used:

@Autowired
private DataSource dataSource;

@Bean
public DataSourceConnectionProvider connectionProvider() {
    return new DataSourceConnectionProvider
      (new TransactionAwareDataSourceProxy(dataSource));
}

Since a bean named transactionManager has also been automatically created and configured by Spring Boot, we do not need to declare any other bean of the DataSourceTransactionManager type as in the previous tutorial to take advantage of Spring transaction support.

The DSLContext bean is created in the same way as in the PersistenceContext class of the preceding tutorial:

@Bean
public DefaultDSLContext dsl() {
    return new DefaultDSLContext(configuration());
}

Lastly, a Configuration implementation needs to be provided to the DSLContext. Since Spring Boot is able to recognize the SQL dialect in use through the existence of the H2 artifact on the classpath, a dialect configuration is no longer necessary:

public DefaultConfiguration configuration() {
    DefaultConfiguration jooqConfiguration = new DefaultConfiguration();
    jooqConfiguration.set(connectionProvider());
    jooqConfiguration
      .set(new DefaultExecuteListenerProvider(exceptionTransformer()));

    return jooqConfiguration;
}

4. Using Spring Boot with jOOQ

To make the demonstration of Spring Boot support for jOOQ easier to follow, the test cases from the prequel of this tutorial are reused with a small change to their class-level annotations:

@SpringApplicationConfiguration(Application.class)
@Transactional("transactionManager")
@RunWith(SpringJUnit4ClassRunner.class)
public class SpringBootTest {
    // Other declarations
}

Note that rather than adopting the @ContextConfiguration annotation, Spring Boot uses @SpringApplicationConfiguration to take advantage of the SpringApplicationContextLoader context loader for testing applications.

Test methods for inserting, updating and deleting data are exactly the same as in the previous tutorial. Please have a look at section 5 of that article on Using jOOQ with Spring for more information. All the tests should be successfully executed with the new configuration, proving that jOOQ is fully supported by Spring Boot.

5. Conclusion

This tutorial dug deeper into the use of jOOQ with Spring. It introduced the ways for a Spring Boot application to take advantage of jOOQ to interact with a database in a type-safe manner.

The implementation of all these examples and code snippets can be found in a GitHub project.



REST API Testing with Cucumber


1. Overview

This tutorial gives an introduction to Cucumber, a commonly used tool for user acceptance testing, and how to use it in REST API tests.

In addition, to make the article self-contained and independent of any external REST services, we will use WireMock, a stubbing and mocking web service library. If you want to know more about this library, please refer to the introduction to WireMock.

2. Gherkin – The Language of Cucumber

Cucumber is a testing framework that supports Behavior Driven Development (BDD), allowing users to define application operations in plain text. It works based on the Gherkin Domain Specific Language (DSL). This simple but powerful syntax of Gherkin lets developers and testers write complex tests while keeping it comprehensible to even non-technical users.

2.1. Introduction to Gherkin

Gherkin is a line-oriented language using line endings, indentation and keywords to define documents. Each non-blank line usually starts with a Gherkin keyword, followed by arbitrary text, which is usually a description of the keyword.

The whole structure must be written into a file with the .feature extension to be recognized by Cucumber.

Here is a simple Gherkin document example:

Feature: A short description of the desired functionality

  Scenario: A business situation
    Given a precondition
    And another precondition
    When an event happens
    And another event happens too
    Then a testable outcome is achieved
    And something else is also completed

The subsections below will describe a couple of the most important elements in a Gherkin structure.

2.2. Feature

A Gherkin file is used to describe an application feature that needs to be tested. The file contains the Feature keyword at the very beginning, followed by the feature name on the same line and an optional description that may span multiple lines underneath.

All the text, except for the Feature keyword, is skipped by the Cucumber parser and included for the purpose of documentation only.

2.3. Scenarios and Steps

A Gherkin structure may consist of one or more scenarios, recognized by the Scenario keyword. A scenario is basically a test allowing users to validate a capability of the application. It should describe an initial context, events that may happen and expected outcomes created by those events.

These things are done using steps, identified by one of the five keywords: Given, When, Then, And, and But.

  • Given: This step is to put the system into a well-defined state before users start interacting with the application. A Given clause can be considered a precondition for the use case.
  • When: A When step is used to describe an event that happens to the application. This can be an action taken by users, or an event triggered by another system.
  • Then: This step is to specify an expected outcome of the test. The outcome should be related to business values of the feature under test.
  • And and But: These keywords can be used to replace the above step keywords when there are multiple steps of the same type.

Cucumber does not actually distinguish these keywords; however, they are still there to make the feature more readable and consistent with the BDD structure.

3. Cucumber-JVM Implementation

Cucumber was originally written in Ruby and has been ported to Java with the Cucumber-JVM implementation, which is the subject of this section.

3.1. Maven Dependencies

In order to make use of Cucumber-JVM in a Maven project, the following dependency needs to be included in the POM:

<dependency>
    <groupId>info.cukes</groupId>
    <artifactId>cucumber-java</artifactId>
    <version>1.2.4</version>
    <scope>test</scope>
</dependency>

To facilitate JUnit testing with Cucumber, we need to have one more dependency:

<dependency>
    <groupId>info.cukes</groupId>
    <artifactId>cucumber-junit</artifactId>
    <version>1.2.4</version>
</dependency>

Alternatively, we can use another artifact to take advantage of lambda expressions in Java 8, which will not be covered in this tutorial.

3.2. Step Definitions

Gherkin scenarios would be useless if they were not translated into actions and this is where step definitions come into play. Basically, a step definition is an annotated Java method with an attached pattern whose job is to convert Gherkin steps in plain text to executable code. After parsing a feature document, Cucumber will search for step definitions that match predefined Gherkin steps to execute.

In order to make it clearer, let’s take a look at the following step:

Given I have registered a course in Baeldung

And a step definition:

@Given("I have registered a course in Baeldung")
public void verifyAccount() {
    // method implementation
}

When Cucumber reads the given step, it will look for step definitions whose annotation patterns match the Gherkin text. In our illustration, the verifyAccount method is found to be a match, and its code is then executed.

4. Creating and Running Tests

4.1. Writing a Feature File

Let’s start with declaring scenarios and steps in a file with the name ending in the .feature extension:

Feature: Testing a REST API
  Users should be able to submit GET and POST requests to a web service, 
  represented by WireMock

  Scenario: Data Upload to a web service
    When users upload data on a project
    Then the server should handle it and return a success status

  Scenario: Data retrieval from a web service
    When users want to get information on the Cucumber project
    Then the requested data is returned

We now save this file in a directory named Feature, provided that the directory will be loaded into the classpath at runtime, e.g. src/main/resources.

4.2. Configuring JUnit to Work with Cucumber

In order for JUnit to be aware of Cucumber and read feature files when running, the Cucumber class must be declared as the Runner. We also need to tell JUnit the place to search for feature files and step definitions.

@RunWith(Cucumber.class)
@CucumberOptions(features = "classpath:Feature")
public class CucumberTest {
    
}

As you can see, the features element of CucumberOptions locates the feature file created before. Another important element, called glue, provides paths to step definitions. However, if the test case and step definitions are in the same package as in this tutorial, that element may be dropped.

4.3. Writing Step Definitions

When Cucumber parses steps, it will search for methods annotated with Gherkin keywords to locate the matching step definitions. In this tutorial, these step definitions are defined in a class within the same package as CucumberTest.

The following is a method that fully matches a Gherkin step. The method will be used to post data to a REST web service:

@When("^users upload data on a project$")
public void usersUploadDataOnAProject() throws IOException {
    
}

And here is a method that matches a Gherkin step and takes an argument from the text, which will be used to get information from a REST web service:

@When("^users want to get information on the (.+) project$")
public void usersGetInformationOnAProject(String projectName) throws IOException {
    
}

As you can see, the usersGetInformationOnAProject method takes a String argument, which is the project name. This argument is declared by (.+) in the annotation, and here it corresponds to Cucumber in the step text.
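The argument extraction can be reproduced with plain java.util.regex (a sketch of the matching mechanics, not Cucumber's internals):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class StepPatternSketch {

    public static void main(String[] args) {
        // The same pattern used in the @When annotation above
        Pattern step = Pattern.compile("^users want to get information on the (.+) project$");
        Matcher matcher = step.matcher("users want to get information on the Cucumber project");

        if (matcher.matches()) {
            // Group 1 is what Cucumber passes as the projectName argument
            System.out.println(matcher.group(1)); // prints "Cucumber"
        }
    }
}
```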

The working code for both above methods will be provided in the next section.

4.4. Creating and Running Tests

First, we will begin with a JSON structure to illustrate the data uploaded to the server by a POST request, and downloaded to the client using a GET. This structure is saved in the jsonString field, and shown below:

{
    "testing-framework": "cucumber",
    "supported-language": 
    [
        "Ruby",
        "Java",
        "Javascript",
        "PHP",
        "Python",
        "C++"
    ],

    "website": "cucumber.io"
}

To demonstrate a REST API, a WireMock server comes into play:

WireMockServer wireMockServer = new WireMockServer();

In addition, this tutorial will make use of Apache HttpClient API to represent the client used to connect to the server:

CloseableHttpClient httpClient = HttpClients.createDefault();

Now, let’s move on to writing testing code within step definitions. We will do this for the usersUploadDataOnAProject method first.

The server should be running before the client connects to it:

wireMockServer.start();

Using the WireMock API to stub the REST service:

configureFor("localhost", 8080);
stubFor(post(urlEqualTo("/create"))
  .withHeader("content-type", equalTo("application/json"))
  .withRequestBody(containing("testing-framework"))
  .willReturn(aResponse().withStatus(200)));

Now, send a POST request with the content taken from the jsonString field declared above to the server:

HttpPost request = new HttpPost("http://localhost:8080/create");
StringEntity entity = new StringEntity(jsonString);
request.addHeader("content-type", "application/json");
request.setEntity(entity);
HttpResponse response = httpClient.execute(request);

The following code asserts that the POST request has been successfully received and handled:

assertEquals(200, response.getStatusLine().getStatusCode());
verify(postRequestedFor(urlEqualTo("/create"))
  .withHeader("content-type", equalTo("application/json")));

The server should stop after being used:

wireMockServer.stop();

The second method we will implement herein is usersGetInformationOnAProject(String projectName). Similar to the first test, we need to start the server and then stub the REST service:

wireMockServer.start();

configureFor("localhost", 8080);
stubFor(get(urlEqualTo("/projects/cucumber"))
  .withHeader("accept", equalTo("application/json"))
  .willReturn(aResponse().withBody(jsonString)));

Submitting a GET request and receiving a response:

HttpGet request = new HttpGet("http://localhost:8080/projects/" + projectName.toLowerCase());
request.addHeader("accept", "application/json");
HttpResponse httpResponse = httpClient.execute(request);

We will convert the httpResponse variable to a String using a helper method:

String responseString = convertResponseToString(httpResponse);

Here is the implementation of that conversion helper method:

private String convertResponseToString(HttpResponse response) throws IOException {
    InputStream responseStream = response.getEntity().getContent();
    Scanner scanner = new Scanner(responseStream, "UTF-8");
    String responseString = scanner.useDelimiter("\\Z").next();
    scanner.close();
    return responseString;
}
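The \Z-delimiter technique used in this helper can be exercised against an in-memory stream, independent of any HTTP call:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class StreamToStringSketch {

    // Same technique as convertResponseToString: with \Z (end of input)
    // as the delimiter, next() consumes the whole stream as one token.
    static String readAll(InputStream in) {
        Scanner scanner = new Scanner(in, "UTF-8");
        String result = scanner.useDelimiter("\\Z").next();
        scanner.close();
        return result;
    }

    public static void main(String[] args) {
        InputStream in = new ByteArrayInputStream(
          "{\"website\": \"cucumber.io\"}".getBytes(StandardCharsets.UTF_8));
        System.out.println(readAll(in)); // prints the JSON string unchanged
    }
}
```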

The following verifies the whole process:

assertThat(responseString, containsString("\"testing-framework\": \"cucumber\""));
assertThat(responseString, containsString("\"website\": \"cucumber.io\""));
verify(getRequestedFor(urlEqualTo("/projects/cucumber"))
  .withHeader("accept", equalTo("application/json")));

Finally, stop the server as described before.

5. Conclusion

This tutorial covered the basics of Cucumber and how this framework uses the Gherkin domain specific language for testing a REST API.

The implementation of all these examples and code snippets can be found in a GitHub project.


Introduction to WireMock


1. Overview

WireMock is a library for stubbing and mocking web services. It constructs an HTTP server that we can connect to as we would an actual web service.

When a WireMock server is in action, we can set up expectations, call the service, and then verify its behaviors.

2. Maven Dependencies

In order to be able to take advantage of the WireMock library, we need to include the following dependency in the POM:

<dependency>
    <groupId>com.github.tomakehurst</groupId>
    <artifactId>wiremock</artifactId>
    <version>1.58</version>
    <scope>test</scope>
</dependency>

3. Programmatically Managed Server

This section will cover the way to manually configure a WireMock server, i.e. without the support of JUnit auto-configuration. The usage is demonstrated by a very simple stub.

3.1. Server Setting-Up

A WireMock server can be instantiated with a given port number:

WireMockServer wireMockServer = new WireMockServer(port);

In case no arguments are provided, the server host defaults to localhost and the server port to 8080.

The server may then be started and stopped using two simple methods:

wireMockServer.start();

And:

wireMockServer.stop();

3.2. Basic Usage

The WireMock library will be firstly demonstrated by a basic usage, where a stub for an exact URL without any further configuration is provided. Let’s create a server instance:

WireMockServer wireMockServer = new WireMockServer();

WireMock server must be running before the client connects to it:

wireMockServer.start();

The web service is then stubbed:

configureFor("localhost", 8080);
stubFor(get(urlEqualTo("/baeldung")).willReturn(aResponse().withBody("Welcome to Baeldung!")));

This tutorial makes use of the Apache HttpClient API to represent a client connecting to the server:

CloseableHttpClient httpClient = HttpClients.createDefault();

A request is executed and a response is returned, respectively, afterwards:

HttpGet request = new HttpGet("http://localhost:8080/baeldung");
HttpResponse httpResponse = httpClient.execute(request);

We will convert the httpResponse variable to a String using a helper method:

String responseString = convertResponseToString(httpResponse);

Here is the implementation of that conversion helper method:

private String convertResponseToString(HttpResponse response) throws IOException {
    InputStream responseStream = response.getEntity().getContent();
    Scanner scanner = new Scanner(responseStream, "UTF-8");
    String responseString = scanner.useDelimiter("\\Z").next();
    scanner.close();
    return responseString;
}

The following code verifies that the server has got a request to the expected URL and the response arriving at the client is exactly what was sent:

verify(getRequestedFor(urlEqualTo("/baeldung")));
assertEquals("Welcome to Baeldung!", responseString);

Finally, the WireMock server should be stopped to release system resources:

wireMockServer.stop();

4. JUnit Managed Server

In contrast to section 3, this section illustrates the usage of a WireMock server with the help of JUnit Rule.

4.1. Server Setting-Up

A WireMock server can be integrated into JUnit test cases by using the @Rule annotation. This allows JUnit to manage the lifecycle, starting the server prior to each test method and stopping it after the method returns.

Similar to the programmatically managed server, a JUnit managed WireMock server can be created as a Java object with the given port number:

@Rule
public WireMockRule wireMockRule = new WireMockRule(port);

If no arguments are supplied, server port will take the default value, 8080. Server host, defaulting to localhost, and other configurations may be specified using the Options interface.

4.2. URL Matching

After setting up a WireMockRule instance, the next step is to configure a stub. In this subsection, we will provide a REST stub for a service endpoint using regular expression:

stubFor(get(urlPathMatching("/baeldung/.*"))
  .willReturn(aResponse()
  .withStatus(200)
  .withHeader("Content-Type", "application/json")
  .withBody("\"testing-library\": \"WireMock\"")));

Let’s move on to creating an HTTP client, executing a request and receiving a response:

CloseableHttpClient httpClient = HttpClients.createDefault();
HttpGet request = new HttpGet("http://localhost:8080/baeldung/wiremock");
HttpResponse httpResponse = httpClient.execute(request);
String stringResponse = convertHttpResponseToString(httpResponse);

The above code snippet takes advantage of a conversion helper method:

private String convertHttpResponseToString(HttpResponse httpResponse) throws IOException {
    InputStream inputStream = httpResponse.getEntity().getContent();
    return convertInputStreamToString(inputStream);
}

This in turn makes use of another private method:

private String convertInputStreamToString(InputStream inputStream) {
    Scanner scanner = new Scanner(inputStream, "UTF-8");
    String string = scanner.useDelimiter("\\Z").next();
    scanner.close();
    return string;
}
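As an aside, the Scanner with the "\\Z" delimiter is only one way to drain a stream (and note that next() throws a NoSuchElementException on an empty stream); an equivalent plain-IO sketch (the class name is ours, not part of the article's code) could be:

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class StreamToString {

    // reads the whole stream into a String, character by character
    static String convert(InputStream inputStream) throws IOException {
        StringBuilder builder = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(inputStream, StandardCharsets.UTF_8))) {
            int c;
            while ((c = reader.read()) != -1) {
                builder.append((char) c);
            }
        }
        return builder.toString();
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream(
            "Welcome to Baeldung!".getBytes(StandardCharsets.UTF_8));
        System.out.println(convert(in)); // Welcome to Baeldung!
    }
}
```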

The stub’s operations are verified by the testing code below:

verify(getRequestedFor(urlEqualTo("/baeldung/wiremock")));
assertEquals(200, httpResponse.getStatusLine().getStatusCode());
assertEquals("application/json", httpResponse.getFirstHeader("Content-Type").getValue());
assertEquals("\"testing-library\": \"WireMock\"", stringResponse);

4.3. Request Header Matching

Now we will demonstrate how to stub a REST API with the matching of headers. Let’s start with the stub configuration:

stubFor(get(urlPathEqualTo("/baeldung/wiremock"))
  .withHeader("Accept", matching("text/.*"))
  .willReturn(aResponse()
  .withStatus(503)
  .withHeader("Content-Type", "text/html")
  .withBody("!!! Service Unavailable !!!")));

Similar to the preceding subsection, we illustrate the HTTP interaction using the HttpClient API, with the help of the same helper methods:

CloseableHttpClient httpClient = HttpClients.createDefault();
HttpGet request = new HttpGet("http://localhost:8080/baeldung/wiremock");
request.addHeader("Accept", "text/html");
HttpResponse httpResponse = httpClient.execute(request);
String stringResponse = convertHttpResponseToString(httpResponse);

The following verification and assertions confirm the behavior of the stub we created before:

verify(getRequestedFor(urlEqualTo("/baeldung/wiremock")));
assertEquals(503, httpResponse.getStatusLine().getStatusCode());
assertEquals("text/html", httpResponse.getFirstHeader("Content-Type").getValue());
assertEquals("!!! Service Unavailable !!!", stringResponse);

4.4. Request Body Matching

The WireMock library can also be used to stub a REST API with body matching. Here is the configuration for a stub of this kind:

stubFor(post(urlEqualTo("/baeldung/wiremock"))
  .withHeader("Content-Type", equalTo("application/json"))
  .withRequestBody(containing("\"testing-library\": \"WireMock\""))
  .withRequestBody(containing("\"creator\": \"Tom Akehurst\""))
  .withRequestBody(containing("\"website\": \"wiremock.org\""))
  .willReturn(aResponse()
  .withStatus(200)));

Now, it is time to create a StringEntity object that will be used as the body of a request:

InputStream jsonInputStream 
  = this.getClass().getClassLoader().getResourceAsStream("wiremock_intro.json");
String jsonString = convertInputStreamToString(jsonInputStream);
StringEntity entity = new StringEntity(jsonString);

The code above uses one of the conversion helper methods defined before, convertInputStreamToString.

Here is the content of the wiremock_intro.json file on the classpath:

{
    "testing-library": "WireMock",
    "creator": "Tom Akehurst",
    "website": "wiremock.org"
}

HTTP requests and responses can be configured and executed as follows:

CloseableHttpClient httpClient = HttpClients.createDefault();
HttpPost request = new HttpPost("http://localhost:8080/baeldung/wiremock");
request.addHeader("Content-Type", "application/json");
request.setEntity(entity);
HttpResponse response = httpClient.execute(request);

This is the testing code used to validate the stub:

verify(postRequestedFor(urlEqualTo("/baeldung/wiremock"))
  .withHeader("Content-Type", equalTo("application/json")));
assertEquals(200, response.getStatusLine().getStatusCode());

4.5. Stub Priority

The previous subsections dealt with situations where an HTTP request matches exactly one stub. Things get more complicated when more than one stub matches a request. By default, the most recently added stub takes precedence in such a case. However, users may customize that behavior to take more control of WireMock stubs.

We will demonstrate the behavior of a WireMock server when an incoming request matches two different stubs, both with and without a priority level being set. Both scenarios will use the following private helper method:

private HttpResponse generateClientAndReceiveResponseForPriorityTests() throws IOException {
    CloseableHttpClient httpClient = HttpClients.createDefault();
    HttpGet request = new HttpGet("http://localhost:8080/baeldung/wiremock");
    request.addHeader("Accept", "text/xml");
    return httpClient.execute(request);
}

Firstly, configure two stubs without consideration of the priority level:

stubFor(get(urlPathMatching("/baeldung/.*"))
  .willReturn(aResponse()
  .withStatus(200)));
stubFor(get(urlPathEqualTo("/baeldung/wiremock"))
  .withHeader("Accept", matching("text/.*"))
  .willReturn(aResponse()
  .withStatus(503)));

Next, create an HTTP client and execute a request using the helper method described right above:

HttpResponse httpResponse = generateClientAndReceiveResponseForPriorityTests();

The following code snippet verifies that when a request matches both stubs, the most recently configured one is applied:

verify(getRequestedFor(urlEqualTo("/baeldung/wiremock")));
assertEquals(503, httpResponse.getStatusLine().getStatusCode());

Let’s move on to stubs with priority levels being set, where a lower number represents a higher priority:

stubFor(get(urlPathMatching("/baeldung/.*"))
  .atPriority(1)
  .willReturn(aResponse()
  .withStatus(200)));
stubFor(get(urlPathEqualTo("/baeldung/wiremock"))
  .atPriority(2)
  .withHeader("Accept", matching("text/.*"))
  .willReturn(aResponse()
  .withStatus(503)));

Creation and execution of an HTTP request:

HttpResponse httpResponse = generateClientAndReceiveResponseForPriorityTests();

The following code validates the effect of priority levels, where the first configured stub is applied instead of the last:

verify(getRequestedFor(urlEqualTo("/baeldung/wiremock")));
assertEquals(200, httpResponse.getStatusLine().getStatusCode());

5. Conclusion

This tutorial introduced WireMock and showed how to set up and configure the library for testing REST APIs using various techniques, including URL, request header and request body matching.

The implementation of all the examples and code snippets can be found in a GitHub project.


Intro to Gatling


1. Overview

Gatling is a load testing tool that comes with excellent support for the HTTP protocol – which makes it a really good choice for load testing any HTTP server.

This quick guide will show you how to set up a simple scenario for load testing an HTTP server.

Gatling simulation scripts are written in Scala, but don’t worry – the tool comes with a GUI that lets us record the scenario. Once we have finished recording, the GUI generates the Scala script representing the simulation.

After running the simulation, we have ready-to-present HTML reports.

Last but not least, Gatling’s architecture is asynchronous. This kind of architecture lets us implement virtual users as messages instead of dedicated threads, making them very resource cheap. Thus, running thousands of concurrent virtual users is not an issue.

It’s also worth noting though that the core engine is actually protocol agnostic, so it’s perfectly possible to implement support for other protocols. For example, Gatling currently also ships with JMS support.

2. Creating a Project Using the Archetype

Although we can get Gatling bundles as a .zip, we choose to use Gatling’s Maven archetype. This allows us to integrate Gatling into an IDE, run it from there, and makes it easy to maintain the project in a version control system. Be careful, as Gatling requires JDK 8.

From the command line, type:

mvn archetype:generate

Then, when prompted:

Choose a number or apply filter (format: [groupId:]artifactId, case sensitive contains):

Type:

gatling

You should then see:

Choose archetype:
1: remote -> 
  io.gatling.highcharts:gatling-highcharts-maven-archetype (gatling-highcharts-maven-archetype)

Type:

1

to select the archetype, then select the version to use (choose the latest version).

Select the groupId, artifactId, version and package name for the classes before confirming the archetype creation.

Finish by importing the archetype into an IDE – for example into the Scala IDE (based on Eclipse) or into IntelliJ IDEA.

3. Define a Scenario

Before launching the recorder, we need to define a scenario. It will be a representation of what really happens when users navigate a web application.

In this tutorial, we will use the application provided by the Gatling team for sample purposes and hosted at the URL http://computer-database.gatling.io.

Our simple scenario could be:

  • A user arrives at the application.
  • The user searches for ‘amstrad’.
  • The user opens one of the related models.
  • The user goes back to home page.
  • The user iterates through pages.

4. Configuring the Recorder

First of all, launch the Recorder class from the IDE. Once launched, the GUI lets you configure how requests and responses will be recorded. Choose the following options:

  • 8000 as listening port
  • org.baeldung.simulation package
  • RecordedSimulation class name
  • Follow Redirects? checked
  • Automatic Referers? checked
  • Black list first filter strategy selected
  • .*\.css, .*\.js and .*\.ico in the black list filters

Now we have to configure our browser to use the port (8000) defined during the configuration. This is the port our browser must connect to so that the Recorder can capture our navigation.

Here is how to do it with Firefox: open the browser’s Advanced settings, then go to the Network panel and update the connection settings:


5. Recording the Scenario

Now that everything is configured, we can record the scenario that we defined above. The steps are the following:

  1. Initiate the recording by clicking the ‘Start’ button
  2. Go to the website: http://computer-database.gatling.io
  3. Search for models with ‘amstrad’ in their name
  4. Select ‘Amstrad CPC 6128’
  5. Go back to home page
  6. Iterate several times through the model pages by clicking on the Next button
  7. Click on ‘Stop & save’ button

The simulation will be generated in the org.baeldung.simulation package defined during the configuration, under the name RecordedSimulation.scala.

6. Run a Simulation with Maven

To run our recorded simulation we need to update our pom.xml:

<plugin>
    <groupId>io.gatling</groupId>
    <artifactId>gatling-maven-plugin</artifactId>
    <version>2.2.0</version>
    <executions>
        <execution>
            <phase>test</phase>
            <goals><goal>execute</goal></goals>
        </execution>
    </executions>
</plugin>

This lets us execute the simulation at the test phase. To start the test, just run:

mvn test

When the simulation is done, the console will display the path to the HTML reports.

7. Reviewing the Result

If we open index.html at the suggested location, the report looks as follows:


8. Conclusion

In this tutorial we have explored load testing an HTTP server with Gatling. The tool allows us to record a simulation based on a defined scenario with the help of a GUI. After the recording is done we can launch our test. The test report is produced as an HTML summary.

To build up our example we chose to use a Maven archetype. This helps us integrate Gatling into an IDE, run it from there, and makes it easy to maintain the project in a version control system.

The example code can be found in the GitHub project.

Intro to XPath with Java



1. Overview

In this article we’re going to go over the basics of XPath with the support available in the standard Java JDK.

We are going to use a simple XML document, process it and see how to go over the document to extract the information we need from it.

XPath is a standard syntax recommended by the W3C; it is a set of expressions to navigate XML documents. You can find a full XPath reference here.

2. A Simple XPath Parser

import java.io.File;

import javax.xml.namespace.NamespaceContext;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathExpressionException;
import javax.xml.xpath.XPathFactory;

import org.w3c.dom.Document;

public class DefaultParser {
    
    private File file;

    public DefaultParser(File file) {
        this.file = file;
    }
}

Now let’s take a closer look at the elements you will find in the DefaultParser:

FileInputStream fileIS = new FileInputStream(this.getFile());
DocumentBuilderFactory builderFactory = DocumentBuilderFactory.newInstance();
DocumentBuilder builder = builderFactory.newDocumentBuilder();
Document xmlDocument = builder.parse(fileIS);
XPath xPath = XPathFactory.newInstance().newXPath();
String expression = "/Tutorials/Tutorial";
NodeList nodeList = (NodeList) xPath.compile(expression).evaluate(xmlDocument, XPathConstants.NODESET);

Let’s break that down:

DocumentBuilderFactory builderFactory = DocumentBuilderFactory.newInstance();

We will use this object to produce a DOM object tree from our xml document:

DocumentBuilder builder = builderFactory.newDocumentBuilder();

Having an instance of this class, we can parse XML documents from many different input sources like InputStream, File, URL and SAX:

Document xmlDocument = builder.parse(fileIS);

A Document (org.w3c.dom.Document) represents the entire XML document; it is the root of the document tree and provides our first access to the data:

XPath xPath = XPathFactory.newInstance().newXPath();

From the XPath object we’ll access the expressions and execute them over our document to extract what we need from it:

xPath.compile(expression).evaluate(xmlDocument, XPathConstants.NODESET);

We can compile an XPath expression passed as a String and define what kind of data we are expecting to receive, such as a NODESET, NODE or String, for example.
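The return type really does change the Java type we get back; here is a small self-contained sketch (the class and method names are ours) showing a plain String result versus a NUMBER result:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class ReturnTypesDemo {

    static Document parse(String xml) throws Exception {
        return DocumentBuilderFactory.newInstance()
          .newDocumentBuilder()
          .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
    }

    static String firstTitle(String xml) throws Exception {
        XPath xPath = XPathFactory.newInstance().newXPath();
        // evaluate(Object) with no explicit return type defaults to a String
        return xPath.compile("/Tutorials/Tutorial[1]/title").evaluate(parse(xml));
    }

    static double tutorialCount(String xml) throws Exception {
        XPath xPath = XPathFactory.newInstance().newXPath();
        // NUMBER yields a Double; NODESET would yield a NodeList instead
        return (Double) xPath.compile("count(//Tutorial)")
          .evaluate(parse(xml), XPathConstants.NUMBER);
    }

    public static void main(String[] args) throws Exception {
        String xml = "<Tutorials><Tutorial><title>Guava</title></Tutorial>"
          + "<Tutorial><title>XML</title></Tutorial></Tutorials>";
        System.out.println(firstTitle(xml));    // Guava
        System.out.println(tutorialCount(xml)); // 2.0
    }
}
```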

3. Let’s Start

Now that we have taken a look at the base components, let’s start with some code using some simple XML for testing purposes:

<?xml version="1.0"?>
<Tutorials>
    <Tutorial tutId="01" type="java">
        <title>Guava</title>
        <description>Introduction to Guava</description>
        <date>04/04/2016</date>
        <author>GuavaAuthor</author>
    </Tutorial>
    <Tutorial tutId="02" type="java">
        <title>XML</title>
        <description>Introduction to XPath</description>
        <date>04/05/2016</date>
        <author>XMLAuthor</author>
    </Tutorial>
</Tutorials>

3.1. Retrieve a Basic List of Elements

The first method is a simple use of an XPath expression to retrieve a list of nodes from the XML:

FileInputStream fileIS = new FileInputStream(this.getFile());
DocumentBuilderFactory builderFactory = DocumentBuilderFactory.newInstance();
DocumentBuilder builder = builderFactory.newDocumentBuilder();
Document xmlDocument = builder.parse(fileIS);
XPath xPath = XPathFactory.newInstance().newXPath();
String expression = "/Tutorials/Tutorial";
NodeList nodeList = (NodeList) xPath.compile(expression).evaluate(xmlDocument, XPathConstants.NODESET);

We can retrieve the tutorial list contained in the root node by using the expression above. Alternatively, the expression “//Tutorial” would retrieve all <Tutorial> nodes in the document starting from the current node, no matter at what level of the tree they are located.

By specifying NODESET as the return type of the compile instruction, we get back a NodeList: an ordered collection of nodes that can be accessed by passing an index as a parameter.
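A short stand-alone example of walking such a NodeList by index (the class and method names are ours):

```java
import java.io.ByteArrayInputStream;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class NodeListDemo {

    // collects the text of every /Tutorials/Tutorial/title node, in document order
    static List<String> tutorialTitles(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
          .newDocumentBuilder()
          .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        NodeList nodes = (NodeList) XPathFactory.newInstance().newXPath()
          .compile("/Tutorials/Tutorial/title")
          .evaluate(doc, XPathConstants.NODESET);
        List<String> titles = new ArrayList<>();
        for (int i = 0; i < nodes.getLength(); i++) {
            titles.add(nodes.item(i).getTextContent());
        }
        return titles;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<Tutorials><Tutorial><title>Guava</title></Tutorial>"
          + "<Tutorial><title>XML</title></Tutorial></Tutorials>";
        System.out.println(tutorialTitles(xml)); // [Guava, XML]
    }
}
```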

3.2. Retrieving a Specific Node by Its ID

We can look for an element based on any given id just by filtering:

DocumentBuilderFactory builderFactory = DocumentBuilderFactory.newInstance();
DocumentBuilder builder = builderFactory.newDocumentBuilder();
Document xmlDocument = builder.parse(this.getFile());
XPath xPath = XPathFactory.newInstance().newXPath();
String expression = "/Tutorials/Tutorial[@tutId=" + "'" + id + "'" + "]";
Node node = (Node) xPath.compile(expression).evaluate(xmlDocument, XPathConstants.NODE);

By using this kind of expression, we can filter for whatever element we need, just by using the correct syntax. These kinds of expressions are called predicates, and they are an easy way to locate specific data in a document, for example:

/Tutorials/Tutorial[1]

/Tutorials/Tutorial[last()]

/Tutorials/Tutorial[position()<4]

You can find a complete reference of predicates here.

3.3. Retrieving Nodes by a Specific Tag Name

Now we’re going further by introducing axes; let’s see how this works by using one in an XPath expression:

Document xmlDocument = builder.parse(this.getFile());
this.clean(xmlDocument);
XPath xPath = XPathFactory.newInstance().newXPath();
String expression = "//Tutorial[descendant::title[text()=" + "'" + name + "'" + "]]";
NodeList nodeList = (NodeList) xPath.compile(expression).evaluate(xmlDocument, XPathConstants.NODESET);

With the expression used above, we are looking for every <Tutorial> element that has a descendant <title> with the text passed as a parameter in the “name” variable.

Following the sample XML provided for this article, we could look for a <title> containing the text “Guava” or “XML”, and we would retrieve the whole <Tutorial> element with all its data.

Axes provide a very flexible way to navigate an XML document, and you can find the full documentation on the official site.

3.4. Manipulating Data in Expressions

XPath also allows us to manipulate data in the expressions, if needed.

XPath xPath = XPathFactory.newInstance().newXPath();
String expression = "//Tutorial[number(translate(date, '/', '')) > " + date + "]";
NodeList nodeList = (NodeList) xPath.compile(expression).evaluate(xmlDocument, XPathConstants.NODESET);

In this expression, we pass our method a simple string representing a date in the “ddmmyyyy” format, while the XML stores this data in the “dd/mm/yyyy” format. To match a result, the expression uses the translate function provided by XPath to strip the slashes from the document’s value, so it can be compared as a number with the date we passed in.
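Here is a compact, runnable sketch of the same idea (the class name and sample data are ours):

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class XPathDateFilterDemo {

    // counts <Tutorial> elements whose date (dd/mm/yyyy) exceeds the given ddmmyyyy number
    static int countNewerThan(String xml, String date) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
          .newDocumentBuilder()
          .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        // translate strips the slashes so the value can be compared as a number
        String expression = "//Tutorial[number(translate(date, '/', '')) > " + date + "]";
        NodeList nodes = (NodeList) XPathFactory.newInstance().newXPath()
          .compile(expression)
          .evaluate(doc, XPathConstants.NODESET);
        return nodes.getLength();
    }

    public static void main(String[] args) throws Exception {
        String xml = "<Tutorials>"
          + "<Tutorial><date>04/04/2016</date></Tutorial>"
          + "<Tutorial><date>04/05/2016</date></Tutorial>"
          + "</Tutorials>";
        System.out.println(countNewerThan(xml, "04042016")); // 1
    }
}
```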

3.5. Retrieving Elements from a Document with Namespace Defined

If our XML document has a namespace defined, as the example_namespace.xml used here does, the rules to retrieve the data we need change, since our XML starts like this:

<?xml version="1.0"?>
<Tutorials xmlns="http://www.baeldung.com/full_archive">

</Tutorials>

Now, when we use an expression similar to “//Tutorial”, we are not going to get any results. That XPath expression returns all <Tutorial> elements that aren’t under any namespace, and in our new example_namespace.xml all <Tutorial> elements are defined in the namespace http://www.baeldung.com/full_archive.

Let’s see how to handle namespaces.

First of all, we need to set the namespace context so XPath knows where we are looking for our data:

xPath.setNamespaceContext(new NamespaceContext() {
    @Override
    public Iterator getPrefixes(String arg0) {
        return null;
    }
    @Override
    public String getPrefix(String arg0) {
        return null;
    }
    @Override
    public String getNamespaceURI(String arg0) {
        if ("bdn".equals(arg0)) {
            return "http://www.baeldung.com/full_archive";
        }
        return null;
    }
});

In the method above, we define “bdn” as the prefix for our namespace “http://www.baeldung.com/full_archive“, and from now on we need to add “bdn” to the XPath expressions used to locate elements:

String expression = "/bdn:Tutorials/bdn:Tutorial";
NodeList nodeList = (NodeList) xPath.compile(expression).evaluate(xmlDocument, XPathConstants.NODESET);

Using the expression above, we are able to retrieve all <Tutorial> elements under the “bdn” namespace.

3.6. Avoiding Empty Text Nodes Troubles

As you may have noticed, in the code in section 3.3 a method is called right after parsing our XML into a Document object: this.clean(xmlDocument);

Sometimes, when we iterate through elements, child nodes and so on, if our document has empty text nodes we can run into unexpected behavior in the results.

We call node.getFirstChild() while iterating over all <Tutorial> elements looking for the <title> information, but instead of what we are looking for, we just get an empty “#text” node.

To fix the problem we can navigate through our document and remove those empty nodes, like this:

NodeList childs = node.getChildNodes();
for (int n = childs.getLength() - 1; n >= 0; n--) {
    Node child = childs.item(n);
    short nodeType = child.getNodeType();
    if (nodeType == Node.ELEMENT_NODE) {
        clean(child);
    }
    else if (nodeType == Node.TEXT_NODE) {
        String trimmedNodeVal = child.getNodeValue().trim();
        if (trimmedNodeVal.length() == 0){
            node.removeChild(child);
        }
        else {
            child.setNodeValue(trimmedNodeVal);
        }
    } else if (nodeType == Node.COMMENT_NODE) {
        node.removeChild(child);
    }
}

By doing this, we can check the type of each node we find and remove the ones we don’t need.
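To see the effect, here is a trimmed-down, stand-alone version of the cleaning logic (the class and method names are ours); without the clean call, getFirstChild() on <Tutorial> would return a “#text” node instead of <title>:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

public class WhitespaceCleaner {

    // recursively removes whitespace-only text nodes and comments
    static void clean(Node node) {
        NodeList children = node.getChildNodes();
        for (int n = children.getLength() - 1; n >= 0; n--) {
            Node child = children.item(n);
            short type = child.getNodeType();
            if (type == Node.ELEMENT_NODE) {
                clean(child);
            } else if (type == Node.TEXT_NODE
                    && child.getNodeValue().trim().isEmpty()) {
                node.removeChild(child);
            } else if (type == Node.COMMENT_NODE) {
                node.removeChild(child);
            }
        }
    }

    // parses, cleans, and reports the name of the root element's first child
    static String firstChildName(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
          .newDocumentBuilder()
          .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        clean(doc);
        return doc.getDocumentElement().getFirstChild().getNodeName();
    }

    public static void main(String[] args) throws Exception {
        String xml = "<Tutorial>\n  <title>Guava</title>\n</Tutorial>";
        System.out.println(firstChildName(xml)); // title (would be #text without clean)
    }
}
```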

4. Conclusions

Here we just introduced the default XPath support in the JDK, but there are many popular libraries such as JDOM, Saxon, XQuery, JAXP, Jaxen or even Jackson. There are also libraries for specific HTML parsing, such as JSoup.

It’s not limited to Java; XPath expressions can also be used by the XSLT language to navigate XML documents.

As you can see, there is a wide range of possibilities on how to handle these kind of files.

There is great standard support by default for parsing, reading and processing XML/HTML documents. You can find the full working sample here.


A Guide to Java Enums



1. Overview

In this article we will see what Java enums are, what problems they solve, and how they can be used to implement some design patterns in practice.

The enum keyword was introduced in Java 5. It denotes a special type of class that always extends the java.lang.Enum class. For official guidance on their usage, have a look at the documentation.

Constants defined this way make the code more readable, allow compile-time checking, document the list of accepted values upfront, and avoid unexpected behavior due to invalid values being passed in.

Here’s a quick and simple example of an enum that defines the status of an order for a pizza; the order status can be ORDERED, READY or DELIVERED:

public enum PizzaStatus {
    ORDERED,
    READY, 
    DELIVERED; 
}

Additionally, they come with many useful methods, which you would otherwise have to write yourself if you were using traditional public static final constants.
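For example, every enum automatically provides values(), valueOf(), name() and ordinal(); a quick stand-alone illustration (the class name is ours):

```java
public class EnumBuiltinsDemo {

    enum PizzaStatus { ORDERED, READY, DELIVERED }

    public static void main(String[] args) {
        PizzaStatus status = PizzaStatus.valueOf("READY"); // lookup by exact name
        System.out.println(status.name());               // READY
        System.out.println(status.ordinal());            // 1, its position in the declaration
        System.out.println(PizzaStatus.values().length); // 3
    }
}
```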

1.1. An Enum with a Custom API

OK, so now that we have a basic understanding of what enums are and how you can use them, let’s take our previous example to the next level by defining some extra API methods on the enum:

public class Pizza {
    private PizzaStatus status;
    public enum PizzaStatus {
        ORDERED,
        READY,
        DELIVERED;
    }

    public boolean isDeliverable() {
        return getStatus() == PizzaStatus.READY;
    }
    
    // Methods that set and get the status variable.
}

1.2. Comparing Enum Types using “==” Operator

Since enum types ensure that only one instance of each constant exists in the JVM, we can safely use the “==” operator to compare two variables, as seen in the example above; moreover, the “==” operator provides compile-time and run-time safety.

Let’s first have a look at run-time safety in the following snippet, where the “==” operator is used to compare statuses: a NullPointerException will not be thrown if either value is null. Conversely, a NullPointerException would be thrown if the equals method were invoked on a null reference:

if(testPz.getStatus().equals(Pizza.PizzaStatus.DELIVERED)); 
if(testPz.getStatus() == Pizza.PizzaStatus.DELIVERED); 
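A small stand-alone sketch makes the null-safety point concrete (the class and method names are ours):

```java
public class EnumComparisonDemo {

    enum PizzaStatus { ORDERED, READY, DELIVERED }

    // "==" never throws, even when status is null; equals would NPE on a null receiver
    static boolean isDelivered(PizzaStatus status) {
        return status == PizzaStatus.DELIVERED;
    }

    public static void main(String[] args) {
        System.out.println(isDelivered(null));                  // false, no exception
        System.out.println(isDelivered(PizzaStatus.DELIVERED)); // true
    }
}
```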

As for compile-time safety, let’s have a look at another example, where enums of different types are compared using the equals method. The comparison is determined to be true because the values of the two enums happen to coincide, while logically it should be false. This issue is avoided by using the “==” operator, which the compiler flags as an incompatibility error:

if(testPz.getStatus().equals(TestColor.GREEN));
if(testPz.getStatus() == TestColor.GREEN);

1.3. Using Enum Types in Switch Statements

Enum types can also be used in switch statements:

public int getDeliveryTimeInDays() {
    switch (status) {
        case ORDERED: return 5;
        case READY: return 2;
        case DELIVERED: return 0;
    }
    return 0;
}

1.4. Fields, Methods and Constructors in Enums

You can define constructors, methods and fields inside enum types, which makes them very powerful.

Let’s extend the example above, implement the transition from one stage of a pizza to another, and see how we can get rid of the if and switch statements used before:

public class Pizza {

    private PizzaStatus status;
    public enum PizzaStatus {
        ORDERED (5){
            @Override
            public boolean isOrdered() {
                return true;
            }
        },
        READY (2){
            @Override
            public boolean isReady() {
                return true;
            }
        },
        DELIVERED (0){
            @Override
            public boolean isDelivered() {
                return true;
            }
        };

        private int timeToDelivery;

        public boolean isOrdered() {return false;}

        public boolean isReady() {return false;}

        public boolean isDelivered(){return false;}

        public int getTimeToDelivery() {
            return timeToDelivery;
        }

        PizzaStatus (int timeToDelivery) {
            this.timeToDelivery = timeToDelivery;
        }
    }

    public boolean isDeliverable() {
        return this.status.isReady();
    }

    public void printTimeToDeliver() {
        System.out.println("Time to delivery is " + 
          this.getStatus().getTimeToDelivery());
    }
    
    // Methods that set and get the status variable.
}

The test snippet below demonstrates how this works:

@Test
public void givenPizaOrder_whenReady_thenDeliverable() {
    Pizza testPz = new Pizza();
    testPz.setStatus(Pizza.PizzaStatus.READY);
    assertTrue(testPz.isDeliverable());
}

2. EnumSet and EnumMap

2.1. EnumSet

The EnumSet is a specialized Set implementation meant to be used with Enum types.

It is a very efficient and compact representation of a particular Set of enum constants when compared to a HashSet, owing to the internal bit vector representation that is used. It also provides a type-safe alternative to traditional int-based “bit flags”, allowing us to write concise code that is more readable and maintainable.

The EnumSet is an abstract class that has two implementations called RegularEnumSet and JumboEnumSet, one of which is chosen depending on the number of constants in the enum at the time of instantiation.

Therefore, it is a good idea to use EnumSet whenever we want to work with a collection of enum constants in most scenarios (such as subsetting, adding, removing, and bulk operations like containsAll and removeAll), and to use Enum.values() if you just want to iterate over all possible constants.
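A quick stand-alone illustration of EnumSet’s factory methods (the class name is ours):

```java
import java.util.EnumSet;

public class EnumSetDemo {

    enum PizzaStatus { ORDERED, READY, DELIVERED }

    public static void main(String[] args) {
        EnumSet<PizzaStatus> undelivered = EnumSet.of(PizzaStatus.ORDERED, PizzaStatus.READY);
        // complementOf derives the remaining constants of the enum type
        EnumSet<PizzaStatus> delivered = EnumSet.complementOf(undelivered);

        System.out.println(undelivered.contains(PizzaStatus.READY)); // true
        System.out.println(delivered);                               // [DELIVERED]
        System.out.println(EnumSet.allOf(PizzaStatus.class).size()); // 3
    }
}
```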

In the code snippet below, you can see how EnumSet is used to create a subset of constants, and how that subset is then used:

public class Pizza {

    private static EnumSet<PizzaStatus> undeliveredPizzaStatuses =
      EnumSet.of(PizzaStatus.ORDERED, PizzaStatus.READY);

    private PizzaStatus status;

    public enum PizzaStatus {
        ...
    }

    public boolean isDeliverable() {
        return this.status.isReady();
    }

    public void printTimeToDeliver() {
        System.out.println("Time to delivery is " + 
          this.getStatus().getTimeToDelivery() + " days");
    }

    public static List<Pizza> getAllUndeliveredPizzas(List<Pizza> input) {
        return input.stream().filter(
          (s) -> undeliveredPizzaStatuses.contains(s.getStatus()))
            .collect(Collectors.toList());
    }

    public static EnumMap<PizzaStatus, List<Pizza>> 
      groupPizzaByStatus(List<Pizza> pzList) {
        return pzList.stream().collect(
          Collectors.groupingBy(Pizza::getStatus,
          () -> new EnumMap<>(PizzaStatus.class), Collectors.toList()));
    }

    public void deliver() { 
        if (isDeliverable()) { 
            PizzaDeliverySystemConfiguration.getInstance().getDeliveryStrategy()
              .deliver(this); 
            this.setStatus(PizzaStatus.DELIVERED); 
        } 
    }
    
    // Methods that set and get the status variable.
}

Executing the following test demonstrates the power of the EnumSet implementation of the Set interface:

@Test
public void givenPizaOrders_whenRetrievingUnDeliveredPzs_thenCorrectlyRetrieved() {
    List<Pizza> pzList = new ArrayList<>();
    Pizza pz1 = new Pizza();
    pz1.setStatus(Pizza.PizzaStatus.DELIVERED);

    Pizza pz2 = new Pizza();
    pz2.setStatus(Pizza.PizzaStatus.ORDERED);

    Pizza pz3 = new Pizza();
    pz3.setStatus(Pizza.PizzaStatus.ORDERED);

    Pizza pz4 = new Pizza();
    pz4.setStatus(Pizza.PizzaStatus.READY);

    pzList.add(pz1);
    pzList.add(pz2);
    pzList.add(pz3);
    pzList.add(pz4);

    List<Pizza> undeliveredPzs = Pizza.getAllUndeliveredPizzas(pzList); 
    assertTrue(undeliveredPzs.size() == 3); 
}

2.2. EnumMap

EnumMap is a specialized Map implementation meant to be used with enum constants as keys. It is an efficient and compact implementation compared to its counterpart HashMap, and is internally represented as an array:

EnumMap<Pizza.PizzaStatus, Pizza> map;
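As a quick stand-alone aside (the names are ours), note that an EnumMap iterates its keys in the enum’s declaration order, regardless of insertion order:

```java
import java.util.EnumMap;

public class EnumMapDemo {

    enum PizzaStatus { ORDERED, READY, DELIVERED }

    static EnumMap<PizzaStatus, Integer> deliveryDays() {
        EnumMap<PizzaStatus, Integer> days = new EnumMap<>(PizzaStatus.class);
        // insertion order does not matter: iteration follows declaration order
        days.put(PizzaStatus.DELIVERED, 0);
        days.put(PizzaStatus.ORDERED, 5);
        days.put(PizzaStatus.READY, 2);
        return days;
    }

    public static void main(String[] args) {
        System.out.println(deliveryDays()); // {ORDERED=5, READY=2, DELIVERED=0}
    }
}
```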

Let’s have a quick look at a real example that shows how it can be used in practice:

public static EnumMap<PizzaStatus, List<Pizza>> 
  groupPizzaByStatus(List<Pizza> pizzaList) {
    EnumMap<PizzaStatus, List<Pizza>> pzByStatus = 
      new EnumMap<PizzaStatus, List<Pizza>>(PizzaStatus.class);
    
    for (Pizza pz : pizzaList) {
        PizzaStatus status = pz.getStatus();
        if (pzByStatus.containsKey(status)) {
            pzByStatus.get(status).add(pz);
        } else {
            List<Pizza> newPzList = new ArrayList<Pizza>();
            newPzList.add(pz);
            pzByStatus.put(status, newPzList);
        }
    }
    return pzByStatus;
}

Executing the following test demonstrates the power of the EnumMap implementation of the Map interface:

@Test
public void givenPizaOrders_whenGroupByStatusCalled_thenCorrectlyGrouped() {
    List<Pizza> pzList = new ArrayList<>();
    Pizza pz1 = new Pizza();
    pz1.setStatus(Pizza.PizzaStatus.DELIVERED);

    Pizza pz2 = new Pizza();
    pz2.setStatus(Pizza.PizzaStatus.ORDERED);

    Pizza pz3 = new Pizza();
    pz3.setStatus(Pizza.PizzaStatus.ORDERED);

    Pizza pz4 = new Pizza();
    pz4.setStatus(Pizza.PizzaStatus.READY);

    pzList.add(pz1);
    pzList.add(pz2);
    pzList.add(pz3);
    pzList.add(pz4);

    EnumMap<Pizza.PizzaStatus,List<Pizza>> map = Pizza.groupPizzaByStatus(pzList);
    assertTrue(map.get(Pizza.PizzaStatus.DELIVERED).size() == 1);
    assertTrue(map.get(Pizza.PizzaStatus.ORDERED).size() == 2);
    assertTrue(map.get(Pizza.PizzaStatus.READY).size() == 1);
}

3. Implement Design Patterns using Enums

3.1. Singleton Pattern

Normally, implementing a class using the Singleton pattern is quite non-trivial. Enums provide an easy and quick way of implementing singletons. In addition, since every enum implements the Serializable interface under the hood, the class is guaranteed by the JVM to remain a singleton, unlike the conventional implementation, where we have to ensure that no new instances are created during deserialization.

In the code snippet below, we see how we can implement singleton pattern:

public enum PizzaDeliverySystemConfiguration {
    INSTANCE;
    PizzaDeliverySystemConfiguration() {
        // Initialization configuration which involves
        // overriding defaults like delivery strategy
    }

    private PizzaDeliveryStrategy deliveryStrategy = PizzaDeliveryStrategy.NORMAL;

    public static PizzaDeliverySystemConfiguration getInstance() {
        return INSTANCE;
    }

    public PizzaDeliveryStrategy getDeliveryStrategy() {
        return deliveryStrategy;
    }
}
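A quick, self-contained sketch demonstrates why the enum approach guarantees a single instance (the names here are simplified stand-ins for the article's configuration class):

```java
public class EnumSingletonDemo {

    enum Config {
        INSTANCE;

        private String deliveryStrategy = "NORMAL";

        String getDeliveryStrategy() {
            return deliveryStrategy;
        }
    }

    public static void main(String[] args) {
        // Both references resolve to the one and only instance
        Config a = Config.INSTANCE;
        Config b = Config.valueOf("INSTANCE");

        System.out.println(a == b);                  // true
        System.out.println(Config.values().length);  // 1
        System.out.println(a.getDeliveryStrategy()); // NORMAL
    }
}
```

Even reflection cannot create additional instances: Constructor.newInstance throws an exception for enum types, which is what closes the loopholes a conventional singleton has to guard against manually.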

3.2. Strategy Pattern

Conventionally, the Strategy pattern is written by having an interface that is implemented by different classes, so adding a new strategy means adding a new implementation class. With enums, this is achieved with less effort: adding a new strategy means defining just another enum constant with its own implementation.

The code snippet below shows how to implement the Strategy pattern:

public enum PizzaDeliveryStrategy {
    EXPRESS {
        @Override
        public void deliver(Pizza pz) {
            System.out.println("Pizza will be delivered in express mode");
        }
    },
    NORMAL {
        @Override
        public void deliver(Pizza pz) {
            System.out.println("Pizza will be delivered in normal mode");
        }
    };

    public abstract void deliver(Pizza pz);
}

Add the following method to the Pizza class:

public void deliver() {
    if (isDeliverable()) {
        PizzaDeliverySystemConfiguration.getInstance().getDeliveryStrategy()
          .deliver(this);
        this.setStatus(PizzaStatus.DELIVERED);
    }
}

The following test verifies that the configured strategy is applied and the pizza's status changes:

@Test
public void givenPizaOrder_whenDelivered_thenPizzaGetsDeliveredAndStatusChanges() {
    Pizza pz = new Pizza();
    pz.setStatus(Pizza.PizzaStatus.READY);
    pz.deliver();
    assertTrue(pz.getStatus() == Pizza.PizzaStatus.DELIVERED);
}

4. Java 8 and Enums

The Pizza class can be rewritten in Java 8, and you can see how the methods getAllUndeliveredPizzas() and groupPizzaByStatus() become much more concise with the use of lambdas and the Stream API:

public static List<Pizza> getAllUndeliveredPizzas(List<Pizza> input) {
    return input.stream().filter(
      (s) -> !deliveredPizzaStatuses.contains(s.getStatus()))
        .collect(Collectors.toList());
}

public static EnumMap<PizzaStatus, List<Pizza>> 
  groupPizzaByStatus(List<Pizza> pzList) {
    EnumMap<PizzaStatus, List<Pizza>> map = pzList.stream().collect(
      Collectors.groupingBy(Pizza::getStatus,
      () -> new EnumMap<>(PizzaStatus.class), Collectors.toList()));
    return map;
}

5. JSON Representation of Enum

Using the Jackson library, it is possible to have a JSON representation of enum types as if they were POJOs. The code snippet below shows the Jackson annotations that can be used for this purpose:

@JsonFormat(shape = JsonFormat.Shape.OBJECT)
public enum PizzaStatus {
    ORDERED (5){
        @Override
        public boolean isOrdered() {
            return true;
        }
    },
    READY (2){
        @Override
        public boolean isReady() {
            return true;
        }
    },
    DELIVERED (0){
        @Override
        public boolean isDelivered() {
            return true;
        }
    };

    private int timeToDelivery;

    public boolean isOrdered() {return false;}

    public boolean isReady() {return false;}

    public boolean isDelivered(){return false;}

    @JsonProperty("timeToDelivery")
    public int getTimeToDelivery() {
        return timeToDelivery;
    }

    private PizzaStatus (int timeToDelivery) {
        this.timeToDelivery = timeToDelivery;
    }
}

We can use the Pizza and PizzaStatus as follows:

Pizza pz = new Pizza();
pz.setStatus(Pizza.PizzaStatus.READY);
System.out.println(Pizza.getJsonString(pz));

to generate the following JSON representation of the Pizza's status:

{
  "status" : {
    "timeToDelivery" : 2,
    "ready" : true,
    "ordered" : false,
    "delivered" : false
  },
  "deliverable" : true
}

For more information on JSON serialization/deserialization (including customization) of enum types, refer to Jackson – Serialize Enums as JSON Objects.

6. Conclusion

In this article we explored the Java enum, from the language basics to more advanced and interesting real-world use cases.

Code snippets from this article can be found in the Core Java 8 main GitHub repository, and tests can be found in the Core Java 8 test GitHub repository.


Java Web Weekly, Issue 128


At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> Notes on Reactive Programming Part I: The Reactive Landscape [spring.io]

A solid intro to reactive programming.

And no, it’s no coincidence that this is first.

>> The Top 10 Exception Types in Production Java Applications – Based on 1B Events [takipi.com]

Another set of insights out of an interesting dataset – with the venerable NullPointerException of course at number one.

>> How To Implement equals Correctly [codefx.org]

A back-to-basics look at equals – nicely done.

>> How to implement equals and hashCode using the entity identifier (primary key) [vladmihalcea.com]

And since we were just talking about equals, this writeup definitely fits well into that narrative with a look from the persistence side of things.

>> Observations From A History of Java Backwards Incompatibility [marxsoftware.com]

You might argue that keeping full backwards compatibility is what made Java as popular as it is today, or that it’s what keeps Java from actually moving forward well.

Either way – here’s a quick look at what it means to keep that compatibility for over 20 years.

>> Spring-Reactive samples [java-allandsundry.com]

The reactive programming model is coming to Spring, no two ways about it. And there isn’t a whole lot of information about it out there – so this piece looks quite interesting in terms of filling that gap.

>> Netflix OSS, Spring Cloud, or Kubernetes? How About All of Them! [christianposta.com]

The Netflix ecosystem of tools is based on practical usage at scale, so it’s always super useful to go deep into understanding their tools.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Practical Event Sourcing And CQRS Benefits [sapiensworks.com]

If you’re literally just starting out, then this may be too early, but if you’ve been building systems for a while now in one form or another and haven’t explored things like DDD, Event Sourcing and CQRS – well, this is as good a time as any to start.

Yes, it’s a significantly different way of building a system, but then again, really leveling up probably won’t happen from doing CRUD marginally better.

Also worth reading:

3. Musings

>> Why I switched to making products [swizec.com]

A quick and fun intro to why it’s well worth doing products. This was a fun read for me, since I made the jump into products almost one year ago today.

>> Why is Github Taking over the World? [daedtech.com]

A discussion around the history and the Why of Github.

>> Creating virtual assets in service virtualization: record and playback or behaviour modeling? [ontestautomation.com]

Definitely an interesting read exploring the two alternatives of driving the testing and explorations of a system, either by using a recorder, or programmatically. A bit high level but well worth reading.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> How are you doing on your unspoken objectives? [dilbert.com]

>> A monkey could do your assignment while eating a banana [dilbert.com]

>> Moving to a shared leadership model [dilbert.com]

5. Pick of the Week

>> Happiness is the Only Logical Pursuit [mrmoneymustache.com]


Load Testing Baeldung with Gatling


1. Overview

In the previous tutorial, we’ve seen how to use Gatling to load test a custom web application.

In this article, we’ll make use of the Gatling stress tool to measure the performance of the staging environment of this website.

2. The Test Scenario

Let’s first set up our main usage scenario – one that comes close to a typical user that might be browsing the site:

  1. Go to the Home Page
  2. Open an Article from Home Page
  3. Go to Guides/REST
  4. Go to the REST Category
  5. Go to the Full Archive
  6. Open an Article from the Archive

3. Record the Scenario

Now, we’ll record our scenario using the Gatling recorder – as follows:

$GATLING_HOME/bin/recorder.sh

And for Windows users:

%GATLING_HOME%\bin\recorder.bat

Note: GATLING_HOME is your Gatling installation directory.

There are two modes for Gatling Recorder: HTTP Proxy and HAR Converter.

We discussed the HTTP Proxy mode in detail in the previous tutorial – so let’s now have a look at the HAR Converter option.

3.1. HAR Converter

HAR is short for HTTP Archive – which is a format that basically records the full information about a browsing session.

We can obtain HAR files from the browser, then use the Gatling Recorder to convert them into a Simulation.

We’ll create our HAR file with the help of the Chrome Developer Tools:

  • Menu -> More Tools -> Developer Tools
  • Go to Network Tab
  • Make sure Preserve log is checked
  • After you finish navigating the website, right click on the requests you want to export
  • Then, select Copy All as HAR
  • Paste them in a file, then import it from the Gatling recorder

After you finish adjusting the Gatling recorder to your preferences, click Start.

Note that the output folder is by default GATLING_HOME/user-files/simulations.

4. The Simulation

The generated simulation file is, like the one in the previous tutorial, written in Scala. It’s generally OK but not super readable, so we’ll make some adjustments to clean it up. Here is our final Simulation:

class RestSimulation extends Simulation {

    val httpProtocol = http.baseURL("http://staging.baeldung.com")

    val scn = scenario("RestSimulation")
      .exec(http("home").get("/"))
      .pause(23)
      .exec(http("article_1").get("/spring-rest-api-metrics"))
      .pause(39)
      .exec(http("rest_series").get("/rest-with-spring-series"))
      .pause(60)
      .exec(http("rest_category").get("/category/rest/"))
      .pause(26)
      .exec(http("archive").get("/full_archive"))
      .pause(70)
      .exec(http("article_2").get("/spring-data-rest-intro"))

    setUp(scn.inject(atOnceUsers(1))).protocols(httpProtocol)
}

An important note here is that the full simulation file is much larger; here, we didn’t include static resources for simplicity.

5. Run the Load Test

Now, we can run our simulation – as follows:

$GATLING_HOME/bin/gatling.sh

And for Windows users:

%GATLING_HOME%\bin\gatling.bat

The Gatling tool will scan GATLING_HOME/user-files/simulations and list all the simulations it finds for us to choose from.

After running the simulation here’s what the results look like:

For one user:

> request count                                304 (OK=304    KO=0)
> min response time                             75 (OK=75     KO=-)
> max response time                          13745 (OK=13745  KO=-)
> mean response time                          1102 (OK=1102   KO=-)
> std deviation                               1728 (OK=1728   KO=-)
> response time 50th percentile                660 (OK=660    KO=-)
> response time 75th percentile               1006 (OK=1006   KO=-)
> mean requests/sec                           0.53 (OK=0.53   KO=-)
---- Response Time Distribution ------------------------------------
> t < 800 ms                                           183 ( 60%)
> 800 ms < t < 1200 ms                                  54 ( 18%)
> t > 1200 ms                                           67 ( 22%)
> failed                                                 0 (  0%)

For 5 simultaneous users:

> request count                               1520 (OK=1520   KO=0)
> min response time                             70 (OK=70     KO=-)
> max response time                          30289 (OK=30289  KO=-)
> mean response time                          1248 (OK=1248   KO=-)
> std deviation                               2079 (OK=2079   KO=-)
> response time 50th percentile                504 (OK=504    KO=-)
> response time 75th percentile               1440 (OK=1440   KO=-)
> mean requests/sec                          2.411 (OK=2.411  KO=-)
---- Response Time Distribution ------------------------------------
> t < 800 ms                                           943 ( 62%)
> 800 ms < t < 1200 ms                                 138 (  9%)
> t > 1200 ms                                          439 ( 29%)
> failed                                                 0 (  0%)

For 10 simultaneous users:

> request count                               3058 (OK=3018   KO=40)
> min response time                              0 (OK=69     KO=0)
> max response time                          44916 (OK=44916  KO=30094)
> mean response time                          2193 (OK=2063   KO=11996)
> std deviation                               4185 (OK=3953   KO=7888)
> response time 50th percentile                506 (OK=494    KO=13670)
> response time 75th percentile               2035 (OK=1976   KO=15835)
> mean requests/sec                          3.208 (OK=3.166  KO=0.042)
---- Response Time Distribution ----------------------------------------
> t < 800 ms                                          1752 ( 57%)
> 800 ms < t < 1200 ms                                 220 (  7%)
> t > 1200 ms                                         1046 ( 34%)
> failed                                                40 (  1%)

Note that some of the requests failed when testing with 10 simultaneous users – simply because the staging environment isn’t capable of handling that kind of load.

6. Conclusion

In this quick article, we explored the HAR option for recording test scenarios in Gatling, and ran a simple initial load test of baeldung.com.


Introduction to Project Lombok


Lombok is one of the first tools I drop into just about every project build. I couldn’t imagine programming Java without it these days. I really hope you discover its power while reading this article!

1. Avoid Repetitive Code

Java is a great language, but it sometimes gets too verbose for common tasks we have to perform in our code or for compliance with some framework practices. These very often bring no real value to the business side of our programs – and this is where Lombok comes in to make our lives happier and ourselves more productive.

The way it works is by plugging into your build process and autogenerating Java bytecode into your .class files, driven by a number of annotations you introduce in your code.

Including it in your build, whichever system you are using, is very straightforward. The project page has detailed instructions on the specifics. Most of my projects are Maven based, so I typically just drop in the dependency in the provided scope and I’m good to go:

<dependencies>
    ...
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <version>1.16.8</version>
        <scope>provided</scope>
    </dependency>
    ...
</dependencies>

Check for the most recent available version here.

Note that depending on Lombok won’t make users of your .jars depend on it as well, as it is a pure build dependency, not a runtime one.

2. Getters/Setters, Constructors – So Repetitive

Encapsulating object properties via public getter and setter methods is such a common practice in the Java world, and lots of frameworks rely on this “Java Bean” pattern extensively: a class with an empty constructor and get/set methods for “properties”.

This is so common that most IDE’s support autogenerating code for these patterns (and more). This code however needs to live in your sources and also be maintained when, say, a new property is added or a field renamed.

Let’s consider this class we want to use as a JPA entity as an example:

@Entity
public class User implements Serializable {

    private @Id Long id; // will be set when persisting

    private String firstName;
    private String lastName;
    private int age;

    public User() {
    }

    public User(String firstName, String lastName, int age) {
        this.firstName = firstName;
        this.lastName = lastName;
        this.age = age;
    }

    // getters and setters: ~30 extra lines of code
}

This is a rather simple class, but consider: if we added the extra code for getters and setters, we’d end up with a definition containing more boilerplate, zero-value code than the relevant business information: “a User has first and last names, and an age.”

Let us now Lombok-ize this class:

@Entity
@Getter @Setter @NoArgsConstructor // <--- THIS is it
public class User implements Serializable {

    private @Id Long id; // will be set when persisting

    private String firstName;
    private String lastName;
    private int age;

    public User(String firstName, String lastName, int age) {
        this.firstName = firstName;
        this.lastName = lastName;
        this.age = age;
    }
}

By adding the @Getter and @Setter annotations we told Lombok to, well, generate these for all the fields of the class. @NoArgsConstructor will lead to an empty constructor generation.

Note this is the whole class code; I am not omitting anything, as opposed to the version above with the // getters and setters comment. For a class with three relevant attributes, this is a significant saving in code!

If you further add attributes (properties) to your User class, the same will happen: you applied the annotations to the type itself, so they will apply to all fields by default.

What if you want to refine the visibility of some properties? For example, I like to keep my entities’ id field setters package-private or protected, because the id is expected to be read but not explicitly set by application code. Just use a finer-grained @Setter for this particular field:

private @Id @Setter(AccessLevel.PROTECTED) Long id;

3. Value Classes/DTO’s

There are many situations in which we want to define a data type with the sole purpose of representing complex “values” or as “Data Transfer Objects”, most of the time in the form of immutable data structures we build once and never want to change.

We’ll design a class to represent a successful login operation. We want all fields to be non-null and objects to be immutable so that we can thread-safely access their properties:

public class LoginResult {

    private final Instant loginTs;

    private final String authToken;
    private final Duration tokenValidity;
    
    private final URL tokenRefreshUrl;

    // constructor taking every field and checking nulls

    // read-only accessor, not necessarily as get*() form
}

Again, the amount of code we’d have to write for the commented sections would be of a much larger volume than the information we want to encapsulate, which is what has real value for us. We can use Lombok again to improve this:

@RequiredArgsConstructor
@Accessors(fluent = true) @Getter
public class LoginResult {

    private final @NonNull Instant loginTs;

    private final @NonNull String authToken;
    private final @NonNull Duration tokenValidity;
    
    private final @NonNull URL tokenRefreshUrl;

}

Just add the @RequiredArgsConstructor annotation and you get a constructor for all the final fields in the class, just as you declared them. Adding @NonNull to attributes makes our constructor check for nullability and throw NullPointerExceptions accordingly. This would also happen if the fields were non-final and we added @Setter for them.

Don’t like the boring old get*() form for your properties? Because we added @Accessors(fluent=true) in this example, the “getters” have the same method name as the properties: getAuthToken() simply becomes authToken().

This “fluent” form also applies to setters for non-final fields, and allows for chained calls:

// Imagine fields were no longer final now
return new LoginResult()
  .loginTs(Instant.now())
  .authToken("asdasd")
  . // and so on

4. Core Java Boilerplate

Another situation in which we end up writing code we need to maintain is when generating toString(), equals() and hashCode() methods. IDEs try to help with templates for autogenerating these in terms of our class attributes.

We can automate this by means of other Lombok class-level annotations:

  • @ToString: will generate a toString() method including all class attributes. No need to write one ourselves and maintain it as we enrich our data model.
  • @EqualsAndHashCode: will generate both equals() and hashCode() methods, by default considering all relevant fields, and according to very well-thought-out semantics.

These generators ship with very handy configuration options. For example, if your annotated classes are part of a hierarchy, you can just use the callSuper=true parameter and the parent’s results will be considered when generating the method’s code.

More on this: say we had our User JPA entity example include a reference to events associated to this user:

@OneToMany(mappedBy = "user")
private List<UserEvent> events;

We wouldn’t like to have the whole list of events dumped whenever we call the toString() method of our User, just because we used the @ToString annotation. No problem: just parameterize it like this: @ToString(exclude = {“events”}), and that won’t happen. This is also helpful to avoid circular references if, for example, UserEvent had a reference back to a User.

For the LoginResult example, we may want to define equality and hash code calculation just in terms of the token itself and not the other final attributes in our class. Then, simply write something like @EqualsAndHashCode(of = {“authToken”}).

Bonus: if you liked the features from the annotations we’ve reviewed so far you may want to examine @Data and @Value annotations as they behave as if a set of them had been applied to our classes. After all, these discussed usages are very commonly put together in many cases.

5. The Builder Pattern

The following could make for a sample configuration class for a REST API client:

public class ApiClientConfiguration {

    private String host;
    private int port;
    private boolean useHttps;

    private long connectTimeout;
    private long readTimeout;

    private String username;
    private String password;

    // Whatever other options you may think of.

    // Empty constructor? All combinations?

    // getters... and setters?
}

We could take an initial approach based on the class’s default empty constructor and setter methods for every field. However, we’d ideally want configurations not to be modified once they have been built (instantiated), effectively making them immutable. We therefore want to avoid setters, but writing such a potentially long-argument constructor is an anti-pattern.

Instead, we can tell the tool to generate a builder pattern, saving us from writing an extra Builder class and the associated fluent setter-like methods, by simply adding the @Builder annotation to our ApiClientConfiguration:

@Builder
public class ApiClientConfiguration {

    // ... everything else remains the same

}

Leaving the class definition above as is (no declared constructors or setters, plus @Builder), we can end up using it as:

ApiClientConfiguration config =
    ApiClientConfiguration.builder()
        .host("api.server.com")
        .port(443)
        .useHttps(true)
        .connectTimeout(15_000L)
        .readTimeout(5_000L)
        .username("myusername")
        .password("secret")
    .build();
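Under the hood, @Builder generates roughly the following: a nested Builder class with fluent setter-like methods, reached through a generated static builder() method. A hand-rolled, simplified sketch (only three of the fields, illustrative names):

```java
public class BuilderSketch {

    static class ApiClientConfiguration {
        private final String host;
        private final int port;
        private final boolean useHttps;

        private ApiClientConfiguration(String host, int port, boolean useHttps) {
            this.host = host;
            this.port = port;
            this.useHttps = useHttps;
        }

        // Lombok generates this static factory for the builder
        static Builder builder() {
            return new Builder();
        }

        static class Builder {
            private String host;
            private int port;
            private boolean useHttps;

            Builder host(String host) { this.host = host; return this; }
            Builder port(int port) { this.port = port; return this; }
            Builder useHttps(boolean useHttps) { this.useHttps = useHttps; return this; }

            ApiClientConfiguration build() {
                return new ApiClientConfiguration(host, port, useHttps);
            }
        }
    }

    public static void main(String[] args) {
        ApiClientConfiguration config = ApiClientConfiguration.builder()
            .host("api.server.com")
            .port(443)
            .useHttps(true)
            .build();
        System.out.println(config.host + ":" + config.port); // api.server.com:443
    }
}
```

The fields stay final and the constructor stays private, so instances are immutable once built – exactly the property we wanted from avoiding setters.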

6. Checked Exceptions Burden

Lots of Java APIs are designed so that they can throw a number of checked exceptions which client code is forced to either catch or declare in a throws clause. How many times have you turned these exceptions you know won’t happen into something like this?

public String resourceAsString() {
    try (InputStream is = this.getClass().getResourceAsStream("sure_in_my_jar.txt")) {
        BufferedReader br = new BufferedReader(new InputStreamReader(is, "UTF-8"));
        return br.lines().collect(Collectors.joining("\n"));
    } catch (IOException ex) {
        // If this ever happens, then it's a bug.
        throw new RuntimeException(ex); // <-- encapsulate into a RuntimeException
    }
}

If you want to avoid these code patterns because the compiler won’t otherwise be happy (and, after all, you know the checked errors cannot happen), use the aptly named @SneakyThrows:

@SneakyThrows
public String resourceAsString() {
    try (InputStream is = this.getClass().getResourceAsStream("sure_in_my_jar.txt")) {
        BufferedReader br = new BufferedReader(new InputStreamReader(is, "UTF-8"));
        return br.lines().collect(Collectors.joining("\n"));
    } 
}
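@SneakyThrows works because checked exceptions are a compiler-only concept: the JVM happily throws any Throwable, and the generated bytecode simply rethrows the original exception unchanged. The trick can be reproduced in plain Java with a generic "sneaky throw" helper (a sketch of the mechanism, not Lombok's actual code):

```java
import java.io.IOException;

public class SneakyDemo {

    // The compiler infers E as RuntimeException at the call site,
    // so callers need no throws clause -- yet the original exception
    // is thrown as-is at runtime.
    @SuppressWarnings("unchecked")
    static <E extends Throwable> RuntimeException sneakyThrow(Throwable t) throws E {
        throw (E) t;
    }

    static String mightFail() { // note: no "throws IOException" here
        throw sneakyThrow(new IOException("boom"));
    }

    public static void main(String[] args) {
        try {
            mightFail();
        } catch (Exception ex) {
            // The genuine IOException arrives, not a wrapper
            System.out.println(ex.getClass().getSimpleName() + ": " + ex.getMessage());
        }
    }
}
```

Note that, unlike the RuntimeException-wrapping pattern above, the exception is propagated unwrapped, so stack traces and catch blocks see the original type.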

7. Ensure Your Resources are Released

Java 7 introduced the try-with-resources block to ensure that resources held by instances of anything implementing java.lang.AutoCloseable are released on exit.

Lombok provides an alternative, more flexible way of achieving this via @Cleanup. Use it for any local variable whose resources you want to make sure are released. There’s no need for them to implement any particular interface; you’ll just get their close() method called.

@Cleanup InputStream is = this.getClass().getResourceAsStream("res.txt");

Does your releasing method have a different name? No problem, just customize the annotation:

@Cleanup("dispose") JFrame mainFrame = new JFrame("Main Window");
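What @Cleanup expands to is essentially the pre-Java-7 try/finally idiom, with a null check before invoking the releasing method. A simplified, runnable sketch of the equivalent hand-written code (using an in-memory stream as a stand-in for the resource):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class CleanupSketch {

    public static void main(String[] args) throws IOException {
        // Equivalent of: @Cleanup InputStream is = ...
        InputStream is = new ByteArrayInputStream("res.txt contents".getBytes());
        try {
            // ... use the resource ...
            System.out.println(is.read() != -1);
        } finally {
            if (is != null) {
                is.close(); // or the custom method named in @Cleanup("dispose")
            }
        }
    }
}
```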

8. Annotate Your Class To Get a Logger

Many of us add logging statements to our code by creating an instance of a Logger from our framework of choice. Say, SLF4J:

public class ApiClientConfiguration {

    private static Logger LOG = LoggerFactory.getLogger(ApiClientConfiguration.class);

    // LOG.debug(), LOG.info(), ...

}

This is such a common pattern that Lombok developers have cared to simplify it for us:

@Slf4j // or: @Log @CommonsLog @Log4j @Log4j2 @XSlf4j
public class ApiClientConfiguration {

    // log.debug(), log.info(), ...

}

Many logging frameworks are supported and of course you can customize the instance name, topic, etc.

9. Write Thread-Safer Methods

In Java, you can use the synchronized keyword to implement critical sections. However, this is not a 100% safe approach: other client code can also synchronize on your instance, potentially leading to unexpected deadlocks.

This is where @Synchronized comes in: annotate your methods (both instance and static) with it, and you’ll get an autogenerated, private, unexposed field that your implementation will use for locking:

@Synchronized
public /* better than: synchronized */ void putValueInCache(String key, Object value) {
    // whatever here will be thread-safe code
}
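The expansion is roughly the following: a private lock object that nothing outside the class can synchronize on, with the method body wrapped in a synchronized block. This hand-written sketch mimics the generated code (Lombok names the instance field $lock; the field name and cache logic here are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class SynchronizedSketch {

    // Lombok generates something like: private final Object $lock = new Object[0];
    private final Object lock = new Object[0];

    private final Map<String, Object> cache = new HashMap<>();

    public void putValueInCache(String key, Object value) {
        synchronized (lock) { // clients cannot acquire this lock from outside
            cache.put(key, value);
        }
    }

    public Object getFromCache(String key) {
        synchronized (lock) {
            return cache.get(key);
        }
    }

    public static void main(String[] args) {
        SynchronizedSketch s = new SynchronizedSketch();
        s.putValueInCache("a", 1);
        System.out.println(s.getFromCache("a"));
    }
}
```

Since the lock field is private and never exposed, external code cannot synchronize on it, which removes the deadlock risk described above.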

10. Automate Objects Composition

Java does not have language-level constructs to smooth out a “favor composition over inheritance” approach. Other languages have built-in concepts such as Traits or Mixins to achieve this.

Lombok’s @Delegate comes in very handy when you want to use this programming pattern. Let’s consider an example:

  • We want Users and Customers to share some common attributes for naming and phone number
  • We define both an interface and an adapter class for these fields
  • We’ll have our models implement the interface and @Delegate to their adapter, effectively composing them with our contact information

First, let’s define an interface:

public interface HasContactInformation {

    String getFirstName();
    void setFirstName(String firstName);

    String getFullName();

    String getLastName();
    void setLastName(String lastName);

    String getPhoneNr();
    void setPhoneNr(String phoneNr);

}

And now an adapter as a support class:

@Data
public class ContactInformationSupport implements HasContactInformation {

    private String firstName;
    private String lastName;
    private String phoneNr;

    @Override
    public String getFullName() {
        return getFirstName() + " " + getLastName();
    }
}

The interesting part comes now: see how easy it is to compose contact information into both model classes:

public class User implements HasContactInformation {

    // Whichever other User-specific attributes

    @Delegate(types = {HasContactInformation.class})
    private final ContactInformationSupport contactInformation =
            new ContactInformationSupport();

    // User itself will implement all contact information by delegation
    
}
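For each method of HasContactInformation, @Delegate effectively generates a forwarding method on User. The runnable sketch below shows the hand-written equivalent, trimmed to two interface methods for brevity:

```java
public class DelegateSketch {

    interface HasContactInformation {
        String getFirstName();
        void setFirstName(String firstName);
    }

    static class ContactInformationSupport implements HasContactInformation {
        private String firstName;

        public String getFirstName() { return firstName; }
        public void setFirstName(String firstName) { this.firstName = firstName; }
    }

    // What @Delegate generates: User implements the interface by
    // forwarding every method to the composed support object
    static class User implements HasContactInformation {
        private final ContactInformationSupport contactInformation =
                new ContactInformationSupport();

        public String getFirstName() {
            return contactInformation.getFirstName();
        }

        public void setFirstName(String firstName) {
            contactInformation.setFirstName(firstName);
        }
    }

    public static void main(String[] args) {
        User user = new User();
        user.setFirstName("Ada");
        System.out.println(user.getFirstName());
    }
}
```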

The case for Customer is so similar that we omit the sample for brevity.

11. Rolling Lombok Back?

Short answer: not a problem at all, really.

You may be worried that you’ll use Lombok in one of your projects but later want to roll back that decision. You’d then have a possibly large number of classes annotated with it… what could you do?

I have never really regretted it, but who knows about you, your team or your organization. For these cases, you’re covered thanks to the delombok tool from the same project.

By delombok-ing your code, you get autogenerated Java source code with exactly the same features as the bytecode Lombok built. You may then simply replace your original annotated code with these new delomboked files, and no longer depend on it.
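As an example, delombok-ing the @Getter/@Setter/@NoArgsConstructor User from section 2 would produce plain source along these lines (a simplified sketch, with the JPA annotations and the protected id setter left out):

```java
public class DelombokedUser {

    private Long id;
    private String firstName;
    private String lastName;
    private int age;

    public DelombokedUser() {
    }

    public Long getId() { return id; }
    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }
    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }
    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }

    public static void main(String[] args) {
        DelombokedUser u = new DelombokedUser();
        u.setFirstName("Jane");
        u.setAge(30);
        System.out.println(u.getFirstName() + " " + u.getAge());
    }
}
```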

This is something you can integrate in your build and I have done this in the past to just study the generated code or to integrate Lombok with some other Java source code based tool.

12. Conclusion

There are some other features we have not presented in this article; I’d encourage you to take a deeper dive into the feature overview for more details and use cases.

Also, most features we’ve shown have a number of customization options that you may find handy for getting the tool to generate things most compliant with your team’s practices for naming, etc. The available built-in configuration system can also help you with that.

I hope you have found the motivation to give Lombok a chance to get into your Java development toolset. Give it a try and boost your productivity!

The example code can be found in the GitHub project.


Scheduling in Java EE


1. Overview

In a previous article, we demonstrated how to schedule tasks in Spring using the @Scheduled annotation. In this article, we will demonstrate how to achieve the same by using the timer service in a Java Enterprise Edition application, for each case presented in the previous article.

2. Enable Support for Scheduling

In a Java EE application there is no need to enable support for timed tasks. The timer service is a container-managed service that allows applications to call methods scheduled for time-based events. As an example, an application may have to run some daily reports at a certain hour in order to generate statistics.

There are two types of timers:

  • Programmatic timers: the timer service can be injected into any bean (except a stateful session bean), and the business logic should be placed in a method annotated with @Timeout. The timer can be initialized by a method of the bean annotated with @PostConstruct, or simply by calling a method.
  • Automatic timers: the business logic is placed in any method annotated with @Schedule or @Schedules. These timers are initialized as soon as the application starts.

So let’s get started with our first example.

3. Schedule Task with a Fixed Delay

In Spring this is done simply by using the @Scheduled(fixedDelay = 1000) annotation. In this case, the duration between the end of the last execution and the start of next execution is fixed. The task always waits until the previous one is finished.

Doing exactly the same thing in Java EE is a little harder to achieve, because there is no similar built-in mechanism; nevertheless a similar scenario can be implemented with a bit of extra coding. Let's have a look at how this is done:

@Singleton
public class FixedTimerBean {

    @EJB
    private WorkerBean workerBean;

    @Lock(LockType.READ)
    @Schedule(second = "*/5", minute = "*", hour = "*", persistent = false)
    public void atSchedule() throws InterruptedException {
        workerBean.doTimerWork();
    }
}
@Singleton
public class WorkerBean {

    private AtomicBoolean busy = new AtomicBoolean(false);

    @Lock(LockType.READ)
    public void doTimerWork() throws InterruptedException {
        if (!busy.compareAndSet(false, true)) {
            return;
        }
        try {
            Thread.sleep(20000L);
        } finally {
            busy.set(false);
        }
    }
}

As you can see, the timer is scheduled to be triggered every five seconds. However, the method triggered in our case simulates a 20-second response time by calling sleep() on the current thread.

As a consequence, the container will continue to call doTimerWork() every five seconds, but the condition at the beginning of the method, busy.compareAndSet(false, true), will return immediately if the previous call has not finished. In this way we ensure that the next task will be executed only after the previous one has finished.
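The guard itself can be exercised outside a container; here is a minimal plain-Java sketch of the same compareAndSet pattern (the BusyGuard class and tryDoWork() method are illustrative names, not part of the beans above):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class BusyGuard {

    private final AtomicBoolean busy = new AtomicBoolean(false);

    // Returns true if the work ran, false if a previous run is still in progress
    public boolean tryDoWork(Runnable work) {
        if (!busy.compareAndSet(false, true)) {
            return false;
        }
        try {
            work.run();
            return true;
        } finally {
            busy.set(false);
        }
    }

    public static void main(String[] args) {
        BusyGuard guard = new BusyGuard();

        // First invocation acquires the flag and runs
        System.out.println("first run executed: " + guard.tryDoWork(() -> {}));

        // Simulate an overlapping trigger: the flag is held, so the call is skipped
        guard.busy.set(true);
        System.out.println("overlapping run executed: " + guard.tryDoWork(() -> {}));
    }
}
```

The try/finally block matters: the flag must be released even if the work throws, otherwise all subsequent runs would be skipped forever.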

4. Schedule Task at a Fixed Rate

One way of doing this is to use the timer service, which is injected by using @Resource and configured in a method annotated with @PostConstruct. The method annotated with @Timeout will be called when the timer expires.

As mentioned in the previous article, the beginning of the task execution doesn't wait for the completion of the previous execution. This option should be used when each execution of the task is independent. The following code snippet creates a timer that fires every second:

@Startup
@Singleton
public class ProgrammaticAtFixedRateTimerBean {

    @Inject
    Event<TimerEvent> event;

    @Resource
    TimerService timerService;

    @PostConstruct
    public void initialize() {
        timerService.createTimer(0,1000, "Every second timer with no delay");
    }

    @Timeout
    public void programmaticTimout(Timer timer) {
        event.fire(new TimerEvent(timer.getInfo().toString()));
    }
}

Another way is to use the @Schedule annotation. In the following code snippet we fire a timer every five seconds:

@Startup
@Singleton
public class ScheduleTimerBean {

    @Inject
    Event<TimerEvent> event;

    @Schedule(hour = "*", minute = "*", second = "*/5", info = "Every 5 seconds timer")
    public void automaticallyScheduled(Timer timer) {
        fireEvent(timer);
    }


    private void fireEvent(Timer timer) {
        event.fire(new TimerEvent(timer.getInfo().toString()));
    }
}

5. Schedule Task with Initial Delay

If our use case scenario requires the timer to start with a delay, we can do that too using the timer service. Let's have a look at an example where the timer has an initial delay of 10 seconds and then fires every five seconds:

@Startup
@Singleton
public class ProgrammaticWithInitialFixedDelayTimerBean {

    @Inject
    Event<TimerEvent> event;

    @Resource
    TimerService timerService;

    @PostConstruct
    public void initialize() {
        timerService.createTimer(10000, 5000, "Delay 10 seconds then every 5 seconds timer");
    }

    @Timeout
    public void programmaticTimout(Timer timer) {
        event.fire(new TimerEvent(timer.getInfo().toString()));
    }
}

The createTimer method used in our sample has the following signature: createTimer(long initialDuration, long intervalDuration, java.io.Serializable info), where initialDuration is the number of milliseconds that must elapse before the first timer expiration notification, and intervalDuration is the number of milliseconds that must elapse between timer expiration notifications.

In this example we're using an initialDuration of 10 seconds and an intervalDuration of five seconds. The task will be executed for the first time after the initialDuration has elapsed, and will then continue to be executed at every intervalDuration.
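Outside a container, the same initial-delay-plus-interval behavior can be sketched with the JDK's ScheduledExecutorService; this is only a plain-Java illustration of the two durations, not the EJB timer service itself, and the short delays are chosen just to keep the demo fast:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class InitialDelayDemo {

    // Returns true if a task with a 100 ms initial delay and a 50 ms interval
    // fires at least twice within the given timeout
    static boolean firesTwiceWithin(long timeoutMs) throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        CountDownLatch firedTwice = new CountDownLatch(2);

        // First argument after the task: initial delay; second: interval
        scheduler.scheduleAtFixedRate(firedTwice::countDown, 100, 50, TimeUnit.MILLISECONDS);
        try {
            return firedTwice.await(timeoutMs, TimeUnit.MILLISECONDS);
        } finally {
            scheduler.shutdownNow();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("timer fired at least twice: " + firesTwiceWithin(5000));
    }
}
```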

6. Schedule Task using Cron Expressions

All the schedulers that we have seen, both programmatic and automatic, allow the use of cron expressions. Let’s see an example:

@Schedules ({
   @Schedule(dayOfMonth="Last"),
   @Schedule(dayOfWeek="Fri", hour="23")
})
public void doPeriodicCleanup() { ... }

In this example the method doPeriodicCleanup() will be called every Friday at 23:00 and on the last day of the month.

7. Conclusion

In this article we have looked at various ways to schedule tasks in the Java EE environment, using as a starting point a previous article where the samples were implemented using Spring.

Code samples can be found in the GitHub repository.


The Market Share of Java IDEs in Q2 2016


The adoption numbers in the Java IDE ecosystem have always been interesting to watch.

So, this year, when I ran the regular Java and Spring survey, I decided to include the IDE question:

What is your main IDE?

And 2255 responses later – here’s what the market share looks like for the major players:

It definitely is a tight race between Eclipse and IntelliJ – together they effectively hold most of the market, at roughly half each.

What’s even more interesting is understanding these numbers in the context of the ZeroTurnaround 2014 survey – which had a similar sample size – 2164 answers.

The 2014 data has Eclipse at a slightly higher market share – 52% – and IntelliJ at only 33% of the market.

The trend is clear – Eclipse has been slowly shedding users, and IntelliJ IDEA has been picking them up, along with a good chunk of the NetBeans and other-IDE numbers.

The Java 8 Stream API Tutorial


1. Overview

In this in-depth tutorial we will go through the practical usage of Java 8 Streams, from creation to parallel execution.

To understand this material, the reader should have a basic knowledge of Java 8 (lambda expressions, Optional, method references) and of the Stream API. If you aren't familiar with these topics, please take a look at our previous articles – New Features in Java 8 and Introduction to Java 8 Streams.

2. Stream Creation

There are many ways to create a stream instance from different sources. Once created, the instance will not modify its source, therefore allowing the creation of multiple instances from a single source.

2.1. Empty Stream

The empty() method should be used to create an empty stream:

Stream<String> streamEmpty = Stream.empty();

It's often the case that the empty() method is used upon creation to avoid returning null for streams with no elements:

public Stream<String> streamOf(List<String> list) {
    return list == null || list.isEmpty() ? Stream.empty() : list.stream();
}

2.2. Stream of Collection

A stream can also be created from any type of Collection (Collection, List, Set):

Collection<String> collection = Arrays.asList("a", "b", "c");
Stream<String> streamOfCollection = collection.stream();

2.3. Stream of Array

An array can also be the source of a stream:

Stream<String> streamOfArray = Stream.of("a", "b", "c");

Streams can also be created from an existing array or from part of an array:

String[] arr = new String[]{"a", "b", "c"};
Stream<String> streamOfArrayFull = Arrays.stream(arr);
Stream<String> streamOfArrayPart = Arrays.stream(arr, 1, 3);

2.4. Stream.builder()

When the builder is used, the desired type should additionally be specified in the right-hand part of the statement, otherwise the build() method will create an instance of Stream<Object>:

Stream<String> streamBuilder =
  Stream.<String>builder().add("a").add("b").add("c").build();

2.5. Stream.generate()

The generate() method accepts a Supplier<T> for element generation. As the resulting stream is infinite, the developer should specify the desired size, or the generate() method will work until it reaches the memory limit:

Stream<String> streamGenerated =
  Stream.generate(() -> "element").limit(10);

The code above creates a sequence of ten strings with the value – “element”.

2.6. Stream.iterate()

Another way of creating an infinite stream is by using the iterate() method:

Stream<Integer> streamIterated = Stream.iterate(40, n -> n + 2).limit(20);

The first element of the resulting stream is the first parameter of the iterate() method. When creating every subsequent element, the specified function is applied to the previous element. In the example above the second element will be 42.

2.7. Stream of Primitives

Java 8 offers the possibility to create streams out of three primitive types: int, long and double. As Stream<T> is a generic interface and there is no way to use primitives as a type parameter with generics, three new special interfaces were created: IntStream, LongStream, DoubleStream.

Using the new interfaces avoids unnecessary auto-boxing and increases productivity:

IntStream intStream = IntStream.range(1, 3);
LongStream longStream = LongStream.rangeClosed(1, 3);

The range(int startInclusive, int endExclusive) method creates an ordered stream from the first parameter to the second parameter. It increments the value of subsequent elements with a step equal to 1. The result doesn't include the last parameter; it is just an upper bound of the sequence.

The rangeClosed(int startInclusive, int endInclusive) method does the same with only one difference – the second parameter is included. These two methods can be used to generate any of the three types of streams of primitives.
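The difference between the two bounds is easy to verify by collecting each stream into an array (a small sketch added for illustration):

```java
import java.util.Arrays;
import java.util.stream.IntStream;

public class RangeDemo {

    public static void main(String[] args) {
        // Upper bound excluded: 1, 2
        int[] range = IntStream.range(1, 3).toArray();

        // Upper bound included: 1, 2, 3
        int[] rangeClosed = IntStream.rangeClosed(1, 3).toArray();

        System.out.println(Arrays.toString(range));       // [1, 2]
        System.out.println(Arrays.toString(rangeClosed)); // [1, 2, 3]
    }
}
```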

Since Java 8, the Random class provides a wide range of methods for generating streams of primitives. For example, the following code creates a DoubleStream which has three elements:

Random random = new Random();
DoubleStream doubleStream = random.doubles(3);

2.8. Stream of String

A String can also be used as a source for creating a stream, with the help of the chars() method of the String class. Since there is no CharStream interface in the JDK, an IntStream is used to represent a stream of chars instead:

IntStream streamOfChars = "abc".chars();
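Since the elements of such a stream are ints, a mapToObj() call can turn them back into characters when needed (a small illustrative sketch):

```java
import java.util.List;
import java.util.stream.Collectors;

public class CharsDemo {

    public static void main(String[] args) {
        // chars() yields an IntStream of code points; cast each back to char
        List<Character> chars = "abc".chars()
          .mapToObj(c -> (char) c)
          .collect(Collectors.toList());

        System.out.println(chars); // [a, b, c]
    }
}
```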

The following example breaks a String into sub-strings according to a specified RegEx:

Stream<String> streamOfString =
  Pattern.compile(", ").splitAsStream("a, b, c");

2.9. Stream of File

The Java NIO class Files allows us to generate a Stream<String> from a text file through the lines() method. Every line of the text becomes an element of the stream:

Path path = Paths.get("C:\\file.txt");
Stream<String> streamOfStrings = Files.lines(path);
Stream<String> streamWithCharset = 
  Files.lines(path, Charset.forName("UTF-8"));

The Charset can be specified as an argument of the lines() method.

3. Referencing a Stream

It is possible to instantiate a stream and have an accessible reference to it, as long as only intermediate operations have been called. Executing a terminal operation makes the stream inaccessible.

To demonstrate this, we will forget for a while that the best practice is to chain the sequence of operations. Besides being unnecessarily verbose, technically the following code is valid:

Stream<String> stream = 
  Stream.of("a", "b", "c").filter(element -> element.contains("b"));
Optional<String> anyElement = stream.findAny();

But an attempt to reuse the same reference after calling the terminal operation will trigger the IllegalStateException:

Optional<String> firstElement = stream.findFirst();

As the IllegalStateException is a RuntimeException, the compiler will not signal the problem. So it is very important to remember that Java 8 streams can't be reused.

This kind of behavior is logical, because streams were designed to apply a finite sequence of operations to the source of elements in a functional style, not to store elements.

So, to make the previous code work properly, some changes should be made:

List<String> elements =
  Stream.of("a", "b", "c").filter(element -> element.contains("b"))
    .collect(Collectors.toList());
Optional<String> anyElement = elements.stream().findAny();
Optional<String> firstElement = elements.stream().findFirst();

4. Stream Pipeline

To perform a sequence of operations over the elements of the data source and aggregate their results, three parts are needed – the source, intermediate operation(s) and a terminal operation.

Intermediate operations return a new modified stream. For example, to create a new stream of the existing one without a few elements, the skip() method should be used:

Stream<String> onceModifiedStream =
  Stream.of("abcd", "bbcd", "cbcd").skip(1);

If more than one modification is needed, intermediate operations can be chained. Assume that we also need to substitute every element of the current Stream<String> with a sub-string of its first few chars. This can be done by chaining the skip() and map() methods:

Stream<String> twiceModifiedStream =
  stream.skip(1).map(element -> element.substring(0, 3));

As you can see, the map() method takes a lambda expression as a parameter. If you want to learn more about lambdas take a look at our tutorial Lambda Expressions and Functional Interfaces: Tips and Best Practices.

A stream by itself is worthless; the real thing a user is interested in is the result of the terminal operation, which can be a value of some type or an action applied to every element of the stream. Only one terminal operation can be used per stream.

The correct and most convenient way to use streams is by a stream pipeline, which is a chain of the stream source, intermediate operations and a terminal operation. For example:

List<String> list = Arrays.asList("abc1", "abc2", "abc3");
long size = list.stream().skip(1)
  .map(element -> element.substring(0, 3)).sorted().count();

5. Lazy Invocation

Intermediate operations are lazy. This means that they will be invoked only if it is necessary for the terminal operation execution.

To demonstrate this, imagine that we have a method wasCalled() which increments an inner counter every time it is called:

private long counter;
 
private void wasCalled() {
    counter++;
}

Let's call the method wasCalled() from the operation filter():

List<String> list = Arrays.asList("abc1", "abc2", "abc3");
counter = 0;
Stream<String> stream = list.stream().filter(element -> {
    wasCalled();
    return element.contains("2");
});

As we have a source of three elements, we can assume that the method filter() will be called three times and the value of the counter variable will be 3. However, running this code doesn't change counter at all; it is still zero, so the filter() method wasn't called even once. The reason is that the terminal operation is missing.

Let's rewrite this code a little bit by adding a map() operation and a terminal operation – findFirst(). We will also add the ability to track the order of method calls with the help of logging:

Optional<String> stream = list.stream().filter(element -> {
    log.info("filter() was called");
    return element.contains("2");
}).map(element -> {
    log.info("map() was called");
    return element.toUpperCase();
}).findFirst();

The resulting log shows that the filter() method was called twice and the map() method just once. This is because the pipeline executes vertically. In our example the first element of the stream didn't satisfy the filter's predicate, so the filter() method was invoked for the second element, which passed the filter. Without calling filter() for the third element, we went down through the pipeline to the map() method. The findFirst() operation is satisfied by just one element. So, in this particular example, lazy invocation allowed us to avoid two method calls – one for filter() and one for map().

6. Order of Execution

From the performance point of view, the right order is one of the most important aspects of chaining operations in the stream pipeline:

long size = list.stream().map(element -> {
    wasCalled();
    return element.substring(0, 3);
}).skip(2).count();

Execution of this code will increase the value of the counter by three. This means that the map() method of the stream was called three times, but the value of size is one. So the resulting stream has just one element, and we executed the expensive map() operation for no reason two out of three times.

If we change the order of the skip() and map() methods, the counter will increase only by one, so the map() method will be called just once:

long size = list.stream().skip(2).map(element -> {
    wasCalled();
    return element.substring(0, 3);
}).count();

This brings us to the rule: intermediate operations which reduce the size of the stream should be placed before operations which apply to each element. So, keep methods such as skip(), filter() and distinct() at the top of your stream pipeline.

7. Stream Reduction

The API has many terminal operations which aggregate a stream to a type or to a primitive, for example count(), max(), min() and sum(), but these operations work according to predefined implementations. So what if a developer needs to customize a stream's reduction mechanism? There are two methods which allow us to do this – the reduce() and collect() methods.

7.1. The reduce() Method

There are three variations of this method, which differ by their signatures and return types. They can have the following parameters:

  • identity – the initial value for the accumulator, or a default value if the stream is empty and there is nothing to accumulate;
  • accumulator – a function which specifies the logic of aggregating elements. As the accumulator creates a new value for every step of reducing, the number of new values equals the stream's size and only the last value is useful. This is not very good for performance;
  • combiner – a function which aggregates the results of the accumulator. The combiner is called only in parallel mode, to reduce the results of accumulators from different threads.

So, let's look at these three variations in action:

OptionalInt reduced =
  IntStream.range(1, 4).reduce((a, b) -> a + b);

reduced = 6 (1 + 2 + 3)

int reducedTwoParams =
  IntStream.range(1, 4).reduce(10, (a, b) -> a + b);

reducedTwoParams = 16 (10 + 1 + 2 + 3)

int reducedParams = Stream.of(1, 2, 3)
  .reduce(10, (a, b) -> a + b, (a, b) -> {
     log.info("combiner was called");
     return a + b;
  });

The result will be the same as in the previous example (16), and there will be no log output, which means the combiner wasn't called. To make the combiner work, the stream should be parallel:

int reducedParallel = Arrays.asList(1, 2, 3).parallelStream()
    .reduce(10, (a, b) -> a + b, (a, b) -> {
       log.info("combiner was called");
       return a + b;
    });

The result here is different (36) and the combiner was called twice. Here the reduction works by the following algorithm: the accumulator runs three times, adding the identity value to every element of the stream. These actions are done in parallel. As a result, we have (10 + 1 = 11; 10 + 2 = 12; 10 + 3 = 13). Now the combiner can merge these three results. It needs two iterations for that (12 + 13 = 25; 25 + 11 = 36).
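To see why the identity value matters, here is a small runnable sketch (added for illustration): the sequential result with the seed 10 is deterministic, while the parallel result depends on how the stream is split, so a safer pattern is to reduce with the real identity (0) and apply the offset once afterwards:

```java
import java.util.stream.Stream;

public class ReduceIdentityDemo {

    // Sequential reduce with a non-identity seed: 10 + 1 + 2 + 3
    static int seededSum() {
        return Stream.of(1, 2, 3).reduce(10, Integer::sum);
    }

    // Safer equivalent: reduce with the real identity, then add the offset once
    static int offsetSum() {
        return 10 + Stream.of(1, 2, 3).reduce(0, Integer::sum);
    }

    public static void main(String[] args) {
        System.out.println(seededSum()); // 16
        System.out.println(offsetSum()); // 16
    }
}
```

Run the first pipeline with parallelStream() instead and it would typically yield 36, while the second keeps returning 16, because 0 really is the identity of addition.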

7.2. The collect() Method

Reduction of a stream can also be executed by another terminal operation – the collect() method. It accepts an argument of type Collector, which specifies the mechanism of reduction. Predefined collectors for the most common operations are already available; they can be accessed with the help of the Collectors type.

In this section we will use the following List as a source for all streams:

List<Product> productList = Arrays.asList(new Product(23, "potatoes"),
  new Product(14, "orange"), new Product(13, "lemon"),
  new Product(23, "bread"), new Product(13, "sugar"));

Converting a stream to a Collection (Collection, List or Set):

List<String> collectorCollection = 
  productList.stream().map(Product::getName).collect(Collectors.toList());

Reducing to String:

String listToString = productList.stream().map(Product::getName)
  .collect(Collectors.joining(", ", "[", "]"));

The joining() method can have from one to three parameters (delimiter, prefix, suffix). The handiest thing about joining() is that the developer doesn't need to check whether the stream has reached its end in order to apply the suffix or to skip a trailing delimiter; the collector takes care of that.

Calculating the average value of all numeric elements of the stream:

double averagePrice = productList.stream()
  .collect(Collectors.averagingInt(Product::getPrice));

Calculating the sum of all numeric elements of the stream:

int summingPrice = productList.stream()
  .collect(Collectors.summingInt(Product::getPrice));

The methods averagingXX(), summingXX() and summarizingXX() can work with primitives (int, long, double) as well as with their wrapper classes (Integer, Long, Double). One more powerful feature of these methods is that they provide the mapping, so the developer doesn't need to use an additional map() operation before the collect() method.
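That built-in mapping means the two pipelines below are equivalent; the minimal Product class is redeclared here only to keep the sketch self-contained:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class SummingDemo {

    // Minimal stand-in for the article's Product class
    static class Product {
        final int price;
        Product(int price) { this.price = price; }
        int getPrice() { return price; }
    }

    static List<Product> products() {
        return Arrays.asList(new Product(23), new Product(14), new Product(13));
    }

    // Mapping done by the collector itself
    static int sumViaCollector() {
        return products().stream().collect(Collectors.summingInt(Product::getPrice));
    }

    // Equivalent pipeline with an explicit mapping step
    static int sumViaMapToInt() {
        return products().stream().mapToInt(Product::getPrice).sum();
    }

    public static void main(String[] args) {
        System.out.println(sumViaCollector()); // 50
        System.out.println(sumViaMapToInt());  // 50
    }
}
```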

Collecting statistical information about stream’s elements:

IntSummaryStatistics statistics = productList.stream()
  .collect(Collectors.summarizingInt(Product::getPrice));

By using the resulting instance of type IntSummaryStatistics, the developer can create a statistical report by applying the toString() method. The result will be a String similar to “IntSummaryStatistics{count=5, sum=86, min=13, average=17.200000, max=23}”. It is also easy to extract separate values for count, sum, min, average and max from this object by applying the methods getCount(), getSum(), getMin(), getAverage() and getMax(). All these values can be extracted from a single pipeline.
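Extracting the separate values then looks like this (a self-contained sketch using the same prices as the productList above):

```java
import java.util.IntSummaryStatistics;
import java.util.stream.IntStream;

public class StatisticsDemo {

    public static void main(String[] args) {
        // Same five prices as the Product list: 23, 14, 13, 23, 13
        IntSummaryStatistics statistics = IntStream.of(23, 14, 13, 23, 13).summaryStatistics();

        System.out.println(statistics.getCount());   // 5
        System.out.println(statistics.getSum());     // 86
        System.out.println(statistics.getMin());     // 13
        System.out.println(statistics.getAverage()); // 17.2
        System.out.println(statistics.getMax());     // 23
    }
}
```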

Grouping the stream's elements according to a specified function:

Map<Integer, List<Product>> collectorMapOfLists = productList.stream()
  .collect(Collectors.groupingBy(Product::getPrice));

In the example above the stream was reduced to a Map which groups all products by their price.

Dividing the stream's elements into groups according to some predicate:

Map<Boolean, List<Product>> mapPartioned = productList.stream()
  .collect(Collectors.partitioningBy(element -> element.getPrice() > 15));

Pushing the collector to perform additional transformation:

Set<Product> unmodifiableSet = productList.stream()
  .collect(Collectors.collectingAndThen(Collectors.toSet(),
  Collections::unmodifiableSet));

In this particular case the collector has converted a stream to a Set and then created the unmodifiable Set out of it.
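The resulting set really is read-only; attempting to modify it throws an UnsupportedOperationException (a runnable sketch using plain strings instead of Product):

```java
import java.util.Collections;
import java.util.Set;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class UnmodifiableDemo {

    static Set<String> unmodifiableSet() {
        // Collect to a Set, then wrap it in an unmodifiable view
        return Stream.of("a", "b", "c")
          .collect(Collectors.collectingAndThen(Collectors.toSet(),
            Collections::unmodifiableSet));
    }

    public static void main(String[] args) {
        try {
            unmodifiableSet().add("d");
        } catch (UnsupportedOperationException e) {
            System.out.println("set is unmodifiable");
        }
    }
}
```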

Custom collector:

If, for some reason, a custom collector has to be created, the easiest and least verbose way of doing so is to use the method of() of the type Collector:

Collector<Product, ?, LinkedList<Product>> toLinkedList =
  Collector.of(LinkedList::new, LinkedList::add, 
    (first, second) -> { 
       first.addAll(second); 
       return first; 
    });

LinkedList<Product> linkedListOfPersons =
  productList.stream().collect(toLinkedList);

In this example the stream was reduced to a LinkedList<Product> with the help of the custom collector.

8. Parallel Streams

Before Java 8, parallelization was complex. The emergence of the ExecutorService and the ForkJoin framework simplified a developer's life a little, but they still had to keep in mind how to create a specific executor, how to run it, and so on. Java 8 introduced a way of accomplishing parallelism in a functional style.

The API allows us to create parallel streams, which perform operations in parallel mode. When the source of a stream is a Collection or an array, this can be achieved with the help of the parallelStream() method:

Stream<Product> streamOfCollection = productList.parallelStream();
boolean isParallel = streamOfCollection.isParallel();
boolean bigPrice = streamOfCollection
  .map(product -> product.getPrice() * 12)
  .anyMatch(price -> price > 200);

If the source of the stream is something other than a Collection or an array, the parallel() method should be used:

IntStream intStreamParallel = IntStream.range(1, 150).parallel();
boolean isParallel = intStreamParallel.isParallel();

Under the hood, the Stream API automatically uses the ForkJoin framework to execute operations in parallel. By default the common thread pool is used, and there is no way (at least for now) to assign a custom thread pool to it.

When using streams in parallel mode, avoid blocking operations, and use parallel mode only when tasks need a similar amount of time to execute (if one task lasts much longer than another, it can slow down the whole app's workflow).

A stream in parallel mode can be converted back to sequential mode by using the sequential() method:

IntStream intStreamSequential = intStreamParallel.sequential();
boolean isParallel = intStreamSequential.isParallel();

9. Conclusions

The Stream API is a powerful but simple-to-understand set of tools for processing sequences of elements. It allows us to reduce a huge amount of boilerplate code, create more readable programs, and improve the app's productivity when used properly.

In most of the code samples shown in this article, the streams were left unconsumed (we didn't apply the close() method or a terminal operation). In a real app, don't leave instantiated streams unconsumed, as that can lead to memory leaks.

The complete code samples that accompany the article are available at GitHub.


A Quick Guide to Spring MVC Matrix Variables


1. Overview

The URI specification RFC 3986 defines URI path parameters as name-value pairs. Matrix variables are a Spring-coined term and an alternative implementation for passing and parsing URI path parameters.

Matrix variables support became available in Spring MVC 3.2 and is meant to simplify requests with a large number of parameters.

In this article, we will show how we can simplify complex GET requests that use either variable or optional path parameters inside the different path segments of a URI.

2. Configuration

In order to enable Spring MVC Matrix Variables, let’s start with the configuration:

@Configuration
public class WebConfig extends WebMvcConfigurerAdapter {

    @Override
    public void configurePathMatch(PathMatchConfigurer configurer) {
        UrlPathHelper urlPathHelper = new UrlPathHelper();
        urlPathHelper.setRemoveSemicolonContent(false);
        configurer.setUrlPathHelper(urlPathHelper);
    }
}

Otherwise, they’re disabled by default.

3. How to Use Matrix Variables

These variables can appear in any part of the path: the equals character (“=”) is used for giving values and the semicolon (“;”) for delimiting each matrix variable. On the same path, we can also repeat the same variable name or separate different values using the comma character (“,”).
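To make the syntax concrete, here is a plain-Java sketch that splits one path segment into its name-value pairs along the rules above; this is only an illustration of the format, as Spring performs the parsing for us:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class MatrixSegmentDemo {

    // ';' delimits matrix variables, '=' separates name and value,
    // ',' separates multiple values of a single variable
    static Map<String, List<String>> parse(String segment) {
        Map<String, List<String>> vars = new LinkedHashMap<>();
        for (String pair : segment.split(";")) {
            String[] nameAndValues = pair.split("=", 2);
            vars.computeIfAbsent(nameAndValues[0], k -> new ArrayList<>())
                .addAll(Arrays.asList(nameAndValues[1].split(",")));
        }
        return vars;
    }

    public static void main(String[] args) {
        System.out.println(parse("workingArea=rh;workingArea=informatics,admin"));
        // {workingArea=[rh, informatics, admin]}
    }
}
```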

Our example has a controller that provides information about employees. Each employee has a working area, and we are able to search by that attribute. The following request could be used for searching:

http://localhost:8080/spring-mvc-java/employeeArea/workingArea=rh,informatics,admin

or like this:

http://localhost:8080/spring-mvc-java
  /employeeArea/workingArea=rh;workingArea=informatics;workingArea=admin

When we want to refer to these variables in Spring MVC, we should use the annotation @MatrixVariable.

In our examples, we will use the Employee class:

public class Employee {

    private long id;
    private String name;
    private String contactNumber;

    // standard setters and getters 
}

And also the Company class:

public class Company {

    private long id;
    private String name;

    // standard setters and getters
}

These two classes will bind the request parameters.

4. Defining Matrix Variable Properties

We are able to define required or default properties for a variable. In the following example, the contactNumber is required, so it must be included in our path, like this:

http://localhost:8080/spring-mvc-java/employeesContacts/contactNumber=223334411

The request will be handled by the following method:

@RequestMapping(value = "/employeesContacts/{contactNumber}", 
  method = RequestMethod.GET)
@ResponseBody
public ResponseEntity<List<Employee>> getEmployeeBycontactNumber(
  @MatrixVariable(required = true) String contactNumber) {
    List<Employee> employeesList = new ArrayList<Employee>();
    ...
    return new ResponseEntity<List<Employee>>(employeesList, HttpStatus.OK);
}

As a result, we will get all the employees which have the contact number 223334411.

5. Complement Parameter

Matrix variables can complement path variables.

For example, we are searching for an employee by his/her name, but we can also include the starting digits of his/her contact number.

The request for this search should be like this:

http://localhost:8080/spring-mvc-java/employees/John;beginContactNumber=22001

The request will be handled by the following method:

@RequestMapping(value = "/employees/{name}", method = RequestMethod.GET)
@ResponseBody
public ResponseEntity<List<Employee>> getEmployeeByNameAndBeginContactNumber(
  @PathVariable String name, @MatrixVariable String beginContactNumber) {
    List<Employee> employeesList = new ArrayList<Employee>();
    ...
    return new ResponseEntity<>(employeesList, HttpStatus.OK);
}

As a result, we will get all the employees whose name is John and whose contact number begins with 22001.

6. Binding All Matrix Variables

If, for some reason, we want to get all the variables that are available on the path, we can bind them to a Map:

http://localhost:8080/spring-mvc-java/employeeData/id=1;name=John;contactNumber=2200112334

This request will be handled by the following method:

@RequestMapping(value = "employeeData/{employee}",
  method = RequestMethod.GET)
@ResponseBody
public ResponseEntity<Map<String, String>> getEmployeeData(
  @MatrixVariable Map<String, String> matrixVars) {
    return new ResponseEntity<>(matrixVars, HttpStatus.OK);
}

Of course, we are able to restrict the binding to the matrix variables of a specific part of the path. For example, if we have a request like this:

http://localhost:8080/spring-mvc-java/
  companyEmployee/id=2;name=Xpto/employeeData/id=1;name=John;
  contactNumber=2200112334

And we only want to get the variables that belong to employeeData, then we should use the following as an input parameter:

@RequestMapping(
 value = "/companyEmployee/{company}/employeeData/{employee}",
 method = RequestMethod.GET)
@ResponseBody
public ResponseEntity<Map<String, String>> getEmployeeDataFromCompany(
  @MatrixVariable(pathVar = "employee") Map<String, String> matrixVars) {
  ...
}

7. Partial Binding

Apart from simplicity, flexibility is another gain: matrix variables can be used in a variety of different ways. For example, we are able to get each variable from each path segment. Consider the following request:

http://localhost:8080/spring-mvc-java/
  companyData/id=2;name=Xpto/employeeData/id=1;name=John;
  contactNumber=2200112334

If we only want to know the matrix variable name of the companyData segment, then we should use the following as an input parameter:

@MatrixVariable(value="name", pathVar="company") String name

8. Conclusion

This article illustrated some of the various ways that matrix variables can be used.

It's very important to understand how this new tool can deal with requests that are too complex, or help us add more parameters to delimit our searches.

The implementation of all these examples and code snippets can be found in a GitHub project – this is an Eclipse-based project, so it should be easy to import and run as-is.

