Introduction to Open Liberty

1. Overview

With the popularity of microservice architecture and cloud-native application development, there's a growing need for a fast and lightweight application server.

In this introductory tutorial, we'll explore the Open Liberty framework to create and consume a RESTful web service. We'll also examine a few of the essential features that it provides.

2. Open Liberty

Open Liberty is an open framework for the Java ecosystem that allows developing microservices using features of the Eclipse MicroProfile and Jakarta EE platforms.

It is a flexible, fast, and lightweight Java runtime that seems promising for cloud-native microservices development.

The framework allows us to configure only the features our app needs, resulting in a smaller memory footprint during startup. Also, it is deployable on any cloud platform using containers like Docker and Kubernetes.

It supports rapid development by live reloading of the code for quick iteration.

3. Build and Run

First, we'll create a simple Maven-based project named open-liberty and then add the latest liberty-maven-plugin plugin to the pom.xml:

<plugin>
    <groupId>io.openliberty.tools</groupId>
    <artifactId>liberty-maven-plugin</artifactId>
    <version>3.1</version>
</plugin>

Or, we can add the latest openliberty-runtime Maven dependency as an alternative to the liberty-maven-plugin:

<dependency>
    <groupId>io.openliberty</groupId>
    <artifactId>openliberty-runtime</artifactId>
    <version>20.0.0.1</version>
    <type>zip</type>
</dependency>

Similarly, we can add the latest Gradle dependency to the build.gradle:

dependencies {
    libertyRuntime group: 'io.openliberty', name: 'openliberty-runtime', version: '20.0.0.1'
}

Then, we'll add the latest jakarta.jakartaee-web-api and microprofile Maven dependencies:

<dependency>
    <groupId>jakarta.platform</groupId>
    <artifactId>jakarta.jakartaee-web-api</artifactId>
    <version>8.0.0</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.eclipse.microprofile</groupId>
    <artifactId>microprofile</artifactId>
    <version>3.2</version>
    <type>pom</type>
    <scope>provided</scope>
</dependency>

Then, let's add the default HTTP port properties to the pom.xml:

<properties>
    <liberty.var.default.http.port>9080</liberty.var.default.http.port>
    <liberty.var.default.https.port>9443</liberty.var.default.https.port>
</properties>

Next, we'll create the server.xml file in the src/main/liberty/config directory:

<server description="Baeldung Open Liberty server">
    <featureManager>
        <feature>mpHealth-2.0</feature>
    </featureManager>
    <webApplication location="open-liberty.war" contextRoot="/" />
    <httpEndpoint host="*" httpPort="${default.http.port}" 
      httpsPort="${default.https.port}" id="defaultHttpEndpoint" />
</server>

Here, we've added the mpHealth-2.0 feature to check the health of the application.

That's it for the basic setup. Let's run the Maven command to compile the files for the first time:

mvn clean package

Last, let's run the server using a Liberty-provided Maven command:

mvn liberty:dev

Voila! Our application is started and is accessible at localhost:9080.

Also, we can access the health of the app at localhost:9080/health:

{"checks":[],"status":"UP"}

The liberty:dev command starts the Open Liberty server in development mode, which hot-reloads any changes made to the code or configuration without restarting the server.

Similarly, the liberty:run command is available to start the server in production mode.

Also, we can use liberty:start-server and liberty:stop-server to start/stop the server in the background.

4. Servlet

To use servlets in the app, we'll add the servlet-4.0 feature to the server.xml:

<featureManager>
    ...
    <feature>servlet-4.0</feature>
</featureManager>

If we're using the openliberty-runtime Maven dependency in the pom.xml, we'll also need to add the latest servlet-4.0 Maven dependency:

<dependency>
    <groupId>io.openliberty.features</groupId>
    <artifactId>servlet-4.0</artifactId>
    <version>20.0.0.1</version>
    <type>esa</type>
</dependency>

However, if we're using the liberty-maven-plugin plugin, this isn't necessary.

Then, we'll create the AppServlet class extending the HttpServlet class:

@WebServlet(urlPatterns="/app")
public class AppServlet extends HttpServlet {
    private static final long serialVersionUID = 1L;

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response) 
      throws ServletException, IOException {
        String htmlOutput = "<html><h2>Hello! Welcome to Open Liberty</h2></html>";
        response.getWriter().append(htmlOutput);
    }
}

Here, we've added the @WebServlet annotation that will make the AppServlet available at the specified URL pattern.

Let's access the servlet at localhost:9080/app to see the welcome message.

5. Create a RESTful Web Service

First, let's add the jaxrs-2.1 feature to the server.xml:

<featureManager>
    ...
    <feature>jaxrs-2.1</feature>
</featureManager>

Then, we'll create the ApiApplication class, which provides endpoints to the RESTful web service:

@ApplicationPath("/api")
public class ApiApplication extends Application {
}

Here, we've used the @ApplicationPath annotation for the URL path.

Then, let's create the Person class that serves as the model:

public class Person {
    private String username;
    private String email;

    // getters and setters
    // constructors
}

Next, we'll create the PersonResource class to define the HTTP mappings:

@RequestScoped
@Path("persons")
public class PersonResource {
    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public List<Person> getAllPersons() {
        return Arrays.asList(new Person(1, "normanlewis", "normanlewis@email.com"));
    }
}

Here, we've added the getAllPersons method for the GET mapping to the /api/persons endpoint. So, we're ready with a RESTful web service, and the liberty:dev command will load changes on-the-fly.

Let's access the /api/persons RESTful web service using a curl GET request:

curl --request GET --url http://localhost:9080/api/persons

Then, we'll get a JSON array in response:

[{"id":1, "username":"normanlewis", "email":"normanlewis@email.com"}]

Similarly, we can add the POST mapping by creating the addPerson method:

@POST
@Consumes(MediaType.APPLICATION_JSON)
public Response addPerson(Person person) {
    String respMessage = "Person " + person.getUsername() + " received successfully.";
    return Response.status(Response.Status.CREATED).entity(respMessage).build();
}

Now, we can invoke the endpoint with a curl POST request:

curl --request POST --url http://localhost:9080/api/persons \
  --header 'content-type: application/json' \
  --data '{"username": "normanlewis", "email": "normanlewis@email.com"}'

The response will look like:

Person normanlewis received successfully.

6. Persistence

6.1. Configuration

Let's add persistence support to our RESTful web services.

First, we'll add the derby Maven dependency to the pom.xml:

<dependency>
    <groupId>org.apache.derby</groupId>
    <artifactId>derby</artifactId>
    <version>10.14.2.0</version>
</dependency>

Then, we'll add a few features like jpa-2.2, jsonp-1.1, and cdi-2.0 to the server.xml:

<featureManager>
    ...
    <feature>jpa-2.2</feature> 
    <feature>jsonp-1.1</feature>
    <feature>cdi-2.0</feature>
</featureManager>

Here, the jsonp-1.1 feature provides the Java API for JSON Processing, and the cdi-2.0 feature handles the scopes and dependency injection.
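
For instance, with jsonp-1.1 enabled, we could build a JSON payload programmatically through the javax.json API. This snippet is only an illustration and isn't part of our application code:

JsonObject json = Json.createObjectBuilder()
  .add("username", "normanlewis")
  .add("email", "normanlewis@email.com")
  .build();
String payload = json.toString();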

Next, we'll create the persistence.xml in the src/main/resources/META-INF directory:

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.2"
    xmlns="http://xmlns.jcp.org/xml/ns/persistence"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence
                        http://xmlns.jcp.org/xml/ns/persistence/persistence_2_2.xsd">
    <persistence-unit name="jpa-unit" transaction-type="JTA">
        <jta-data-source>jdbc/jpadatasource</jta-data-source>
        <properties>
            <property name="eclipselink.ddl-generation" value="create-tables"/>
            <property name="eclipselink.ddl-generation.output-mode" value="both" />
        </properties>
    </persistence-unit>
</persistence>

Here, we've used the EclipseLink DDL generation to create our database schema automatically. We can also use other alternatives like Hibernate.

Then, let's add the dataSource configuration to the server.xml:

<library id="derbyJDBCLib">
    <fileset dir="${shared.resource.dir}" includes="derby*.jar"/> 
</library>
<dataSource id="jpadatasource" jndiName="jdbc/jpadatasource">
    <jdbcDriver libraryRef="derbyJDBCLib" />
    <properties.derby.embedded databaseName="libertyDB" createDatabase="create" />
</dataSource>

Note that the jndiName matches the jta-data-source value in the persistence.xml.

6.2. Entity and DAO

Then, we'll add the @Entity annotation and an identifier to our Person class:

@Entity
public class Person {
    @GeneratedValue(strategy = GenerationType.AUTO)
    @Id
    private int id;
    
    private String username;
    private String email;

    // getters and setters
}

Next, let's create the PersonDao class that will interact with the database using the EntityManager instance:

@RequestScoped
public class PersonDao {
    @PersistenceContext(name = "jpa-unit")
    private EntityManager em;

    public Person createPerson(Person person) {
        em.persist(person);
        return person;
    }

    public Person readPerson(int personId) {
        return em.find(Person.class, personId);
    }
}

Note that the @PersistenceContext name refers to the same persistence-unit defined in the persistence.xml.

Now, we'll inject the PersonDao dependency in the PersonResource class:

@RequestScoped
@Path("person")
public class PersonResource {
    @Inject
    private PersonDao personDao;

    // ...
}

Here, we've used the @Inject annotation provided by the CDI feature.

Last, we'll update the addPerson method of the PersonResource class to persist the Person object:

@POST
@Consumes(MediaType.APPLICATION_JSON)
@Transactional
public Response addPerson(Person person) {
    personDao.createPerson(person);
    String respMessage = "Person #" + person.getId() + " created successfully.";
    return Response.status(Response.Status.CREATED).entity(respMessage).build();
}

Here, the addPerson method is annotated with the @Transactional annotation to control transactions on CDI managed beans.

Let's invoke the endpoint with the already discussed curl POST request:

curl --request POST --url http://localhost:9080/api/persons \
  --header 'content-type: application/json' \
  --data '{"username": "normanlewis", "email": "normanlewis@email.com"}'

Then, we'll receive a text response:

Person #1 created successfully.

Similarly, let's add the getPerson method with GET mapping to fetch a Person object:

@GET
@Path("{id}")
@Produces(MediaType.APPLICATION_JSON)
@Transactional
public Person getPerson(@PathParam("id") int id) {
    Person person = personDao.readPerson(id);
    return person;
}

Let's invoke the endpoint using a curl GET request:

curl --request GET --url http://localhost:9080/api/persons/1

Then, we'll get the Person object as JSON response:

{"email":"normanlewis@email.com","id":1,"username":"normanlewis"}

7. Consume RESTful Web Service using JSON-B

First, we'll enable the ability to directly serialize and deserialize models by adding the jsonb-1.0 feature to the server.xml:

<featureManager>
    ...
    <feature>jsonb-1.0</feature>
</featureManager>

Then, let's create the RestConsumer class with the consumeWithJsonb method:

public class RestConsumer {
    public static String consumeWithJsonb(String targetUrl) {
        Client client = ClientBuilder.newClient();
        Response response = client.target(targetUrl).request().get();
        String result = response.readEntity(String.class);
        response.close();
        client.close();
        return result;
    }
}

Here, we've used the ClientBuilder class to request the RESTful web service endpoints.

Last, let's write a unit test to consume the /api/persons RESTful web service and verify the response:

@Test
public void whenConsumeWithJsonb_thenGetPerson() {
    String url = "http://localhost:9080/api/persons/1";
    String result = RestConsumer.consumeWithJsonb(url);        
    
    Person person = JsonbBuilder.create().fromJson(result, Person.class);
    assertEquals(1, person.getId());
    assertEquals("normanlewis", person.getUsername());
    assertEquals("normanlewis@email.com", person.getEmail());
}

Here, we've used the JsonbBuilder class to parse the String response into the Person object.

Also, we can use MicroProfile Rest Client by adding the mpRestClient-1.3 feature to consume the RESTful web services. It provides the RestClientBuilder interface to request the RESTful web service endpoints.
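
For example, with the mpRestClient-1.3 feature enabled, we could declare a client interface and build it programmatically. The PersonClient interface below is a hypothetical sketch, not part of the code above:

@Path("/persons")
public interface PersonClient {

    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    Person getPerson(@PathParam("id") int id);
}

We can then build a type-safe client and call the endpoint:

PersonClient personClient = RestClientBuilder.newBuilder()
  .baseUri(URI.create("http://localhost:9080/api"))
  .build(PersonClient.class);
Person person = personClient.getPerson(1);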

8. Conclusion

In this article, we explored the Open Liberty framework — a fast and lightweight Java runtime that provides full features of the Eclipse MicroProfile and Jakarta EE platforms.

To begin with, we created a RESTful web service using JAX-RS. Then, we enabled persistence using features like JPA and CDI.

Last, we consumed the RESTful web service using JSON-B.

As usual, all the code implementations are available over on GitHub.


What’s New in Gradle 6.0

1. Overview

The Gradle 6.0 release brings several new features that will help make our builds more efficient and robust. These features include improved dependency management, module metadata publishing, task configuration avoidance, and support for JDK 13.

In this tutorial, we'll introduce the new features available in Gradle 6.0. Our example build files will use Gradle's Kotlin DSL.

2. Dependency Management Improvements

With each release in recent years, Gradle has made incremental improvements to how projects manage dependencies. These dependency improvements culminate in Gradle 6.0. Let's review dependency management improvements that are now stable.

2.1. API and Implementation Separation

The java-library plugin helps us to create a reusable Java library. The plugin encourages us to separate dependencies that are part of our library's public API from dependencies that are implementation details. This separation makes builds more stable because users won't accidentally refer to types that are not part of a library's public API.

The java-library plugin and its api and implementation configurations were introduced in Gradle 3.4. While this plugin is not new to Gradle 6.0, the enhanced dependency management capabilities it provides are part of the comprehensive dependency management realized in Gradle 6.0.

2.2. Rich Versions

Our project dependency graphs often have multiple versions of the same dependency. When this happens, Gradle needs to select which version of the dependency the project will ultimately use.

Gradle 6.0 allows us to add rich version information to our dependencies. Rich version information helps Gradle make the best possible choice when resolving dependency conflicts.

For example, consider a project that depends on Guava. Suppose further that this project uses Guava version 28.1-jre, even though we know that it only uses Guava APIs that have been stable since version 10.0.

We can use the require declaration to tell Gradle that this project may use any version of Guava since 10.0, and we use the prefer declaration to tell Gradle that it should use 28.1-jre if no other constraints are preventing it from doing so. The because declaration adds a note explaining this rich version information:

implementation("com.google.guava:guava") {
    version {
        require("10.0")
        prefer("28.1-jre")
        because("Uses APIs introduced in 10.0. Tested with 28.1-jre")
    }
}

How does this help make our builds more stable? Suppose this project also relies on a dependency foo that must use Guava version 16.0. The build file for the foo project would declare that dependency as:

dependencies {
    implementation("com.google.guava:guava:16.0")
}

Since the foo project depends on Guava 16.0, and our project depends on both Guava version 28.1-jre and foo, we have a conflict. Gradle's default behavior is to pick the latest version. In this case, however, picking the latest version is the wrong choice, because foo must use version 16.0.

Prior to Gradle 6.0, users had to solve conflicts on their own. Because Gradle 6.0 allows us to tell Gradle that our project may use Guava versions as low as 10.0, Gradle will correctly resolve this conflict and choose version 16.0.

In addition to the require and prefer declarations, we can use the strictly and reject declarations. The strictly declaration describes a dependency version range that our project must use. The reject declaration describes dependency versions that are incompatible with our project.

If our project relied on an API that we know will be removed in Guava 29, then we use the strictly declaration to prevent Gradle from using a version of Guava greater than 28. Likewise, if we know there is a bug in Guava 27.0 that causes problems for our project, we use reject to exclude it:

implementation("com.google.guava:guava") {
    version {
        strictly("[10.0, 28[")
        prefer("28.1-jre")
        reject("27.0")
        because("""
            Uses APIs introduced in 10.0 but removed in 29. Tested with 28.1-jre.
            Known issues with 27.0
        """)
    }
}

2.3. Platforms

The java-platform plugin allows us to reuse a set of dependency constraints across projects. A platform author declares a set of tightly coupled dependencies whose versions are controlled by the platform.

Projects that depend on the platform do not need to specify versions for any of the dependencies controlled by the platform. Maven users will find this similar to a Maven parent POM's dependencyManagement feature.

Platforms are especially useful in multi-project builds. Each project in the multi-project build may use the same external dependencies, and we don't want the versions of those dependencies to be out of sync.

Let's create a new platform to make sure our multi-project build uses the same version of Apache HTTP Client across projects. First, we create a project, httpclient-platform, that uses the java-platform plugin:

plugins {
    `java-platform`
}

Next, we declare constraints for the dependencies included in this platform. In this example, we'll pick the versions of the Apache HTTP Components that we want to use in our project:

dependencies {
    constraints {
        api("org.apache.httpcomponents:fluent-hc:4.5.10")
        api("org.apache.httpcomponents:httpclient:4.5.10")
    }
}

Finally, let's add a person-rest-client project that uses the Apache HTTP Client Fluent API. Here, we're adding a dependency on our httpclient-platform project using the platform method. We'll also add a dependency on org.apache.httpcomponents:fluent-hc. This dependency does not include a version because the httpclient-platform determines the version to use:

plugins {
    `java-library`
}

dependencies {
    api(platform(project(":httpclient-platform")))
    implementation("org.apache.httpcomponents:fluent-hc")
}

The java-platform plugin helps to avoid unwelcome surprises at runtime due to misaligned dependencies in the build.

2.4. Test Fixtures

Prior to Gradle 6.0, build authors who wanted to share test fixtures across projects extracted those fixtures to another library project. Now, build authors can publish test fixtures from their project using the java-test-fixtures plugin.

Let's build a library that defines an abstraction and publishes test fixtures that verify the contract expected by that abstraction.

In this example, our abstraction is a Fibonacci sequence generator, and the test fixture is a JUnit 5 test mix-in. Implementors of the Fibonacci sequence generator may use the test mix-in to verify they have implemented the sequence generator correctly.
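
The abstraction itself isn't listed in this article; a minimal shape consistent with the test mix-in shown below would be a single-method interface:

public interface FibonacciSequenceGenerator {
    int generate(int index);
}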

First, let's create a new project, fibonacci-spi, for our abstraction and test fixtures. This project requires the java-library and java-test-fixtures plugins:

plugins {
    `java-library`
    `java-test-fixtures`
}

Next, let's add JUnit 5 dependencies to our test fixtures. Just as the java-library plugin defines the api and implementation configurations, the java-test-fixtures plugin defines the testFixturesApi and testFixturesImplementation configurations:

dependencies {
    testFixturesApi("org.junit.jupiter:junit-jupiter-api:5.5.2")
    testFixturesImplementation("org.junit.jupiter:junit-jupiter-engine:5.5.2")
}

With our dependencies in place, let's add a JUnit 5 test mix-in to the src/testFixtures/java source set created by the java-test-fixtures plugin. This test mix-in verifies the contract of our FibonacciSequenceGenerator abstraction:

public interface FibonacciSequenceGeneratorFixture {

    FibonacciSequenceGenerator provide();

    @Test
    default void whenSequenceIndexIsNegative_thenThrows() {
        FibonacciSequenceGenerator generator = provide();
        assertThrows(IllegalArgumentException.class, () -> generator.generate(-1));
    }

    @Test
    default void whenGivenIndex_thenGeneratesFibonacciNumber() {
        FibonacciSequenceGenerator generator = provide();
        int[] sequence = { 0, 1, 1, 2, 3, 5, 8 };
        for (int i = 0; i < sequence.length; i++) {
            assertEquals(sequence[i], generator.generate(i));
        }
    }
}

This is all we need to do to share this test fixture with other projects.

Now, let's create a new project, fibonacci-recursive, which will reuse this test fixture. This project will declare a dependency on the test fixtures from our fibonacci-spi project using the testFixtures method in our dependencies block:

dependencies {
    api(project(":fibonacci-spi"))
    
    testImplementation(testFixtures(project(":fibonacci-spi")))
    testImplementation("org.junit.jupiter:junit-jupiter-api:5.5.2")
    testRuntimeOnly("org.junit.jupiter:junit-jupiter-engine:5.5.2")
}

Finally, we can now use the test mix-in defined in the fibonacci-spi project to create a new test for our recursive fibonacci sequence generator:

class RecursiveFibonacciUnitTest implements FibonacciSequenceGeneratorFixture {
    @Override
    public FibonacciSequenceGenerator provide() {
        return new RecursiveFibonacci();
    }
}
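
The RecursiveFibonacci implementation isn't shown in the article either; one possible version that satisfies the fixture's contract is:

public class RecursiveFibonacci implements FibonacciSequenceGenerator {

    @Override
    public int generate(int index) {
        if (index < 0) {
            throw new IllegalArgumentException("index must not be negative");
        }
        // base cases: fib(0) = 0 and fib(1) = 1
        if (index <= 1) {
            return index;
        }
        return generate(index - 1) + generate(index - 2);
    }
}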

The Gradle 6.0 java-test-fixtures plugin gives build authors more flexibility to share their test fixtures across projects.

3. Gradle Module Metadata Publishing

Traditionally, Gradle projects publish build artifacts to Ivy or Maven repositories. This includes generating ivy.xml or pom.xml metadata files respectively.

The ivy.xml and pom.xml models cannot store the rich dependency information that we've discussed in this article. This means that downstream projects do not benefit from this rich dependency information when we publish our library to a Maven or Ivy repository.

Gradle 6.0 addresses this gap by introducing the Gradle Module Metadata specification. The Gradle Module Metadata specification is a JSON format that supports storing all the enhanced module dependency metadata introduced in Gradle 6.0.

Projects can build and publish this metadata file to Ivy and Maven repositories in addition to traditional ivy.xml and pom.xml metadata files. This backward compatibility allows Gradle 6.0 projects to take advantage of this module metadata if it is present without breaking legacy tools.

To publish the Gradle Module Metadata files, projects must use the new Maven Publish Plugin or Ivy Publish Plugin. As of Gradle 6.0, these plugins publish the Gradle Module Metadata file by default. These plugins replace the legacy publishing system.

3.1. Publishing Gradle Module Metadata to Maven

Let's configure a build to publish Gradle Module Metadata to Maven. First, we include the maven-publish plugin in our build file:

plugins {
    `java-library`
    `maven-publish`
}

Next, we configure a publication. A publication can include any number of artifacts. Let's add the artifact associated with the java component:

publishing {
    publications {
        register("mavenJava", MavenPublication::class) {
            from(components["java"])
        }
    }
}

The maven-publish plugin adds the publishToMavenLocal task. Let's use this task to test our Gradle Module Metadata publication:

./gradlew publishToMavenLocal

Next, let's list the directory for this artifact in our local Maven repository:

ls ~/.m2/repository/com/baeldung/gradle-6/1.0.0/
gradle-6-1.0.0.jar	gradle-6-1.0.0.module	gradle-6-1.0.0.pom

As we can see in the console output, Gradle generates the Module Metadata file in addition to the Maven POM.

4. Configuration Avoidance API

Since version 5.1, Gradle has encouraged plugin developers to make use of the new, incubating Configuration Avoidance APIs. These APIs help builds avoid relatively slow task configuration steps when possible. Gradle calls this performance improvement Task Configuration Avoidance. Gradle 6.0 promotes this incubating API to stable.

While the Configuration Avoidance feature mostly affects plugin authors, build authors who create any custom Configuration, Task, or Property in their build are also affected. Plugin authors and build authors alike can now use the new lazy configuration APIs to wrap objects with the Provider type, so that Gradle will avoid “realizing” these objects until they're needed.

Let's add a custom task using the lazy APIs. First, we register the task using the TaskContainer.registering extension method. Since registering returns a TaskProvider, the creation of the Task instance is deferred until Gradle or the build author calls TaskProvider.get(). Lastly, we provide a closure that will configure our Task after Gradle creates it:

val copyExtraLibs by tasks.registering(Copy::class) {
    from(extralibs)
    into(extraLibsDir)
}

Gradle's Task Configuration Avoidance Migration Guide helps plugin authors and build authors migrate to the new APIs. The most common migrations for build authors include:

  • tasks.register instead of tasks.create
  • tasks.named instead of tasks.getByName
  • configurations.register instead of configurations.create
  • project.layout.buildDirectory.dir("foo") instead of File(project.buildDir, "foo")

5. JDK 13 Support

Gradle 6.0 introduces support for building projects with JDK 13. We can configure our Java build to use Java 13 with the familiar sourceCompatibility and targetCompatibility settings:

sourceCompatibility = JavaVersion.VERSION_13
targetCompatibility = JavaVersion.VERSION_13

Some of JDK 13's most exciting language features, such as Text Blocks, are still in preview status. Let's configure the tasks in our Java build to enable these preview features:

tasks.compileJava {
    options.compilerArgs.add("--enable-preview")
}
tasks.test {
    jvmArgs.add("--enable-preview")
}
tasks.javadoc {
    val javadocOptions = options as CoreJavadocOptions
    javadocOptions.addStringOption("source", "13")
    javadocOptions.addBooleanOption("-enable-preview", true)
}

6. Conclusion

In this article, we discussed some of the new features in Gradle 6.0.

We covered enhanced dependency management, publishing Gradle Module Metadata, Task Configuration Avoidance, and how early adopters can configure their builds to use Java 13 preview language features.

As always, the code for this article is over on GitHub.

Intro to OpenCV with Java

1. Introduction

In this tutorial, we'll learn how to install and use the OpenCV computer vision library and apply it to real-time face detection.

2. Installation

To use the OpenCV library in our project, we need to add the opencv Maven dependency to our pom.xml:

<dependency>
    <groupId>org.openpnp</groupId>
    <artifactId>opencv</artifactId>
    <version>3.4.2-0</version>
</dependency>

For Gradle users, we'll need to add the dependency to our build.gradle file:

compile group: 'org.openpnp', name: 'opencv', version: '3.4.2-0'

After adding the library to our dependencies, we can use the features provided by OpenCV.

3. Using the Library

To start using OpenCV, we need to initialize the library, which we can do in our main method:

OpenCV.loadShared();

OpenCV is a class that holds methods related to loading native packages required by the OpenCV library for various platforms and architectures.

It's worth noting that the documentation does things slightly differently:

System.loadLibrary(Core.NATIVE_LIBRARY_NAME)

Both of those method calls will actually load the required native libraries.

The difference here is that the latter requires the native libraries to be installed. The former, however, can install the libraries to a temporary folder if they are not available on a given machine. Due to this difference, the loadShared method is usually the best way to go.

Now that we've initialized the library, let's see what we can do with it.

4. Loading Images

To start, let's load the sample image from the disk using OpenCV:

public static Mat loadImage(String imagePath) {
    // imread is a static method of the Imgcodecs class
    return Imgcodecs.imread(imagePath);
}

This method will load the given image as a Mat object, which is a matrix representation.

To save the previously loaded image, we can use the imwrite() method of the Imgcodecs class:

public static void saveImage(Mat imageMatrix, String targetPath) {
    Imgcodecs.imwrite(targetPath, imageMatrix);
}

5. Haar Cascade Classifier

Before diving into facial recognition, let's understand the core concepts that make this possible.

Simply put, a classifier is a program that seeks to place a new observation into a group dependent on past experience. Cascading classifiers seek to do this using a concatenation of several classifiers. Each subsequent classifier uses the output from the previous as additional information, improving the classification greatly.

5.1. Haar Features

Face detection in OpenCV is done by Haar-feature-based cascade classifiers.

Haar features are filters that are used to detect edges and lines on the image. The filters are seen as squares with black and white colors:

Haar Features

These filters are applied multiple times to an image, pixel by pixel, and the result is collected as a single value. This value is the difference between the sum of pixels under the black square and the sum of pixels under the white square.
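
To make the arithmetic concrete, here is a rough sketch, not part of the detection code that follows, that computes such a value for two regions of a grayscale Mat using Core.sumElems:

double haarFeatureValue(Mat grayImage, Rect blackRegion, Rect whiteRegion) {
    // sum the pixel intensities under each region (channel 0 of a grayscale image)
    double blackSum = Core.sumElems(grayImage.submat(blackRegion)).val[0];
    double whiteSum = Core.sumElems(grayImage.submat(whiteRegion)).val[0];
    return blackSum - whiteSum;
}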

6. Face Detection

Generally, the cascade classifier needs to be pre-trained to be able to detect anything at all.

Since the training process can be long and would require a big dataset, we're going to use one of the pre-trained models offered by OpenCV. We'll place this XML file in our resources folder for easy access.

Let's step through the process of detecting a face:

Face To Detect

We'll attempt to detect the face by outlining it with a red rectangle.

To get started, we need to load the image in Mat format from our source path:

Mat loadedImage = loadImage(sourceImagePath);

Then, we'll declare a MatOfRect object to store the faces we find:

MatOfRect facesDetected = new MatOfRect();

Next, we need to initialize the CascadeClassifier to do the recognition:

CascadeClassifier cascadeClassifier = new CascadeClassifier(); 
int minFaceSize = Math.round(loadedImage.rows() * 0.1f); 
cascadeClassifier.load("./src/main/resources/haarcascades/haarcascade_frontalface_alt.xml"); 
cascadeClassifier.detectMultiScale(loadedImage, 
  facesDetected, 
  1.1, 
  3, 
  Objdetect.CASCADE_SCALE_IMAGE, 
  new Size(minFaceSize, minFaceSize), 
  new Size() 
);

Above, the parameter 1.1 denotes the scale factor we want to use, specifying how much the image size is reduced at each image scale. The next parameter, 3, is minNeighbors. This is the number of neighbors a candidate rectangle should have in order to retain it.

Finally, we'll loop through the faces and save the result:

Rect[] facesArray = facesDetected.toArray(); 
for(Rect face : facesArray) { 
    Imgproc.rectangle(loadedImage, face.tl(), face.br(), new Scalar(0, 0, 255), 3); 
} 
saveImage(loadedImage, targetImagePath);

When we input our source image, we should now receive the output image with all the faces marked with a red rectangle:

Face Detected

7. Accessing the Camera Using OpenCV

So far, we've seen how to perform face detection on loaded images. But most of the time, we want to do it in real-time. To be able to do that, we need to access the camera.

However, to be able to show an image from a camera, we need a few additional things, apart from the obvious — a camera. To show the images, we'll use JavaFX.

Since we'll be using an ImageView to display the pictures our camera has taken, we need a way to translate an OpenCV Mat to a JavaFX Image:

public Image mat2Img(Mat mat) {
    MatOfByte bytes = new MatOfByte();
    Imgcodecs.imencode("img", mat, bytes);
    InputStream inputStream = new ByteArrayInputStream(bytes.toArray());
    return new Image(inputStream);
}

Here, we are converting our Mat into bytes, and then converting the bytes into an Image object.

We'll start by streaming the camera view to a JavaFX Stage.

Now, let's initialize the library using the loadShared method:

OpenCV.loadShared();

Next, we'll create the stage with a VideoCapture and an ImageView to display the Image:

VideoCapture capture = new VideoCapture(0); 
ImageView imageView = new ImageView(); 
HBox hbox = new HBox(imageView); 
Scene scene = new Scene(hbox);
stage.setScene(scene); 
stage.show();

Here, 0 is the ID of the camera we want to use. We also need to create an AnimationTimer to handle setting the image:

new AnimationTimer() { 
    @Override public void handle(long l) { 
        imageView.setImage(getCapture()); 
    } 
}.start();

Finally, our getCapture method handles converting the Mat to an Image:

public Image getCapture() { 
    Mat mat = new Mat(); 
    capture.read(mat); 
    return mat2Img(mat); 
}

The application should now create a window and then live-stream the view from the camera to the imageView window.

8. Real-Time Face Detection

Finally, we can connect all the dots to create an application that detects a face in real-time.

The code from the previous section is responsible for grabbing the image from the camera and displaying it to the user. Now, all we have to do is to process the grabbed images before showing them on screen by using our CascadeClassifier class.

Let's simply modify our getCapture method to also perform face detection:

public Image getCaptureWithFaceDetection() {
    Mat mat = new Mat();
    capture.read(mat);
    Mat haarClassifiedImg = detectFace(mat);
    return mat2Img(haarClassifiedImg);
}
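
The detectFace method simply reuses the classifier logic from section 6. A sketch of it, assuming a cascadeClassifier field loaded with the same pre-trained model as before, might look like this:

private Mat detectFace(Mat inputImage) {
    MatOfRect facesDetected = new MatOfRect();
    int minFaceSize = Math.round(inputImage.rows() * 0.1f);
    cascadeClassifier.detectMultiScale(inputImage,
      facesDetected,
      1.1,
      3,
      Objdetect.CASCADE_SCALE_IMAGE,
      new Size(minFaceSize, minFaceSize),
      new Size());
    // outline every detected face with a red rectangle
    for (Rect face : facesDetected.toArray()) {
        Imgproc.rectangle(inputImage, face.tl(), face.br(), new Scalar(0, 0, 255), 3);
    }
    return inputImage;
}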

Now, if we run our application, the face should be marked with the red rectangle.

We can also see a disadvantage of the cascade classifiers. If we turn our face too much in any direction, then the red rectangle disappears. This is because we've used a specific classifier that was trained only to detect the front of the face.

9. Summary

In this tutorial, we learned how to use OpenCV in Java.

We used a pre-trained cascade classifier to detect faces on the images. With the help of JavaFX, we managed to make the classifiers detect the faces in real-time with images from a camera.

As always all the code samples can be found over on GitHub.

Java Weekly, Issue 320

1. Spring and Java

>> Java 14 Feature Spotlight: Records [infoq.com]

A deep dive into the records preview feature with Java Language Architect Brian Goetz.

>> Multitenancy Applications with Spring Boot and Flyway [reflectoring.io]

A basic example of how to bind an incoming request to a tenant and its data source, with practical tips on managing multi-tenant database migrations.

>> How to map a PostgreSQL ARRAY to a Java List with JPA and Hibernate [vladmihalcea.com]

You'll need to update to version 2.9 of the Hibernate Types project to take advantage of this enhancement.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> TDD Classic State Based UI [blog.code-cop.org]

A practical application of TDD to heavyweight, state-based UI frameworks, using a Java Swing example.

Also worth reading:

3. Musings

>> The Laboring Strategist, A Free-Agent Anti-Pattern (And How to Fix) [daedtech.com]

An intro to the certified Solo Content Marketer and its parallels to the freelance software engineer who fancies himself a consultant.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Cancelled Presentation [dilbert.com]

>> Slide Deck Too Well Designed [dilbert.com]

>> Making The World A Better Place [dilbert.com]

5. Pick of the Week

>> You can have two Big Things, but not three [asmartbear.com]

Spring Projects Version Naming Scheme

1. Overview

It is common to use Semantic Versioning when naming release versions. For example, these rules apply for a version format such as MAJOR.MINOR.REVISION:

  • MAJOR: Major features and potential breaking changes
  • MINOR: Backward compatible features
  • REVISION: Backward compatible fixes and improvements

Together with Semantic Versioning, projects often use labels to further clarify the state of a particular release. In fact, by using these labels we give hints about the build lifecycle or where artifacts are published.

In this quick article, we'll examine the version-naming schemes adopted by major Spring projects.

2. Spring Framework and Spring Boot

In addition to Semantic Versioning, we can see that Spring Framework and Spring Boot use these labels:

  • BUILD-SNAPSHOT
  • M[number]
  • RC[number]
  • RELEASE

BUILD-SNAPSHOT is the current development release. The Spring team builds this artifact every day and deploys it to https://maven.springframework.org/snapshot.

A Milestone release (M1, M2, M3, …) marks a significant stage in the release process. The team builds this artifact when a development iteration is completed and deploys it to https://maven.springframework.org/milestone.

A Release Candidate (RC1, RC2, RC3, …) is the last step before building the final release. To minimize code changes, only bug fixes should occur at this stage. It is also deployed to https://maven.springframework.org/milestone.

At the very end of the release process, the Spring team produces a RELEASE. Consequently, this is usually the only production-ready artifact. We can also refer to this release as GA, for General Availability.

These labels are alphabetically ordered to make sure that build and dependency managers correctly determine if a version is more recent than another. For example, Maven 2 wrongly considered 1.0-SNAPSHOT as more recent than 1.0-RELEASE. Maven 3 fixed this behavior. As a consequence, we can experience strange behaviors when our naming scheme is not optimal.

3. Umbrella Projects

Umbrella projects, like Spring Cloud and Spring Data, are collections of independent but related sub-projects. To avoid conflicts with these sub-projects, an umbrella project adopts a different naming scheme. Instead of a numbered version, each Release Train has a special name.

In alphabetical order, the London Subway Stations are the inspiration for the Spring Cloud release names — for starters, Angel, Brixton, Finchley, Greenwich, and Hoxton.

In addition to the Spring labels shown above, Spring Cloud also defines a Service Release label (SR1, SR2, …). If a critical bug is found, a Service Release can be produced.

It is important to realize that a Spring Cloud release is only compatible with a specific Spring Boot version. As a reference, the Spring Cloud Project page contains the compatibility table.

4. Conclusion

As shown above, having a clear version-naming scheme is important. While some releases like Milestones or Release Candidates may be stable, we should always use production-ready artifacts. What is your naming scheme? What advantages does it have over this one?

Breaking YAML Strings Over Multiple Lines

1. Overview

In this article, we'll learn about breaking YAML strings over multiple lines.

In order to parse and test our YAML files, we'll make use of the SnakeYAML library.

2. Multi-Line Strings

Before we begin, let's create a method to simply read a YAML key from a file into a String:

String parseYamlKey(String fileName, String key) {
    InputStream inputStream = this.getClass()
      .getClassLoader()
      .getResourceAsStream(fileName);
    Yaml yaml = new Yaml();
    Map<String, String> parsed = yaml.load(inputStream);
    return parsed.get(key);
}

In the next subsections, we'll look over a few strategies for splitting strings over multiple lines.

We'll also learn how YAML handles leading and ending line breaks represented by empty lines at the beginning and end of a block.

3. Literal Style

The literal operator is represented by the pipe (“|”) symbol. It keeps our line breaks but reduces empty lines at the end of the string down to a single line break.

Let's take a look at the YAML file literal.yaml:

key: |
  Line1
  Line2
  Line3

We can see that our line breaks are preserved:

String key = parseYamlKey("literal.yaml", "key");
assertEquals("Line1\nLine2\nLine3", key);

Next, let's take a look at literal2.yaml, which has some leading and ending line breaks:

key: |


  Line1

  Line2

  Line3


...

We can see that every line break is present except for ending line breaks, which are reduced to one:

String key = parseYamlKey("literal2.yaml", "key");
assertEquals("\n\nLine1\n\nLine2\n\nLine3\n", key);

Next, we'll talk about block chomping and how it gives us more control over starting and ending line breaks.

We can change the default behavior by using two chomping methods: keep and strip.

3.1. Keep

Keep is represented by “+” as we can see in literal_keep.yaml:

key: |+
  Line1
  Line2
  Line3


...

By overriding the default behavior, we can see that every ending empty line is kept:

String key = parseYamlKey("literal_keep.yaml", "key");
assertEquals("Line1\nLine2\nLine3\n\n", key);

3.2. Strip

Strip is represented by “-” as we can see in literal_strip.yaml:

key: |-
  Line1
  Line2
  Line3

...

As we might've expected, this results in removing every ending empty line:

String key = parseYamlKey("literal_strip.yaml", "key");
assertEquals("Line1\nLine2\nLine3", key);

4. Folded Style

The folded operator is represented by “>” as we can see in folded.yaml:

key: >
  Line1
  Line2
  Line3

By default, line breaks are replaced by space characters for consecutive non-empty lines:

String key = parseYamlKey("folded.yaml", "key");
assertEquals("Line1 Line2 Line3", key);

Let's look at a similar file, folded2.yaml, which has a few ending empty lines:

key: >
  Line1
  Line2


  Line3


...

We can see that empty lines are preserved, but ending line breaks are also reduced to one:

String key = parseYamlKey("folded2.yaml", "key");
assertEquals("Line1 Line2\n\nLine3\n", key);

We should keep in mind that block chomping affects the folded style in the same way it affects the literal style.

5. Quoting

Let's have a quick look at splitting strings with the help of double and single quotes.

5.1. Double Quotes

With double quotes, we can easily create multi-line strings by using “\n“:

key: "Line1\nLine2\nLine3"
String key = parseYamlKey("plain_double_quotes.yaml", "key");
assertEquals("Line1\nLine2\nLine3", key);

5.2. Single Quotes

On the other hand, single-quoting treats “\n” as part of the string, so the only way to insert a line break is by using an empty line:

key: 'Line1\nLine2

  Line3'
String key = parseYamlKey("plain_single_quotes.yaml", "key");
assertEquals("Line1\\nLine2\nLine3", key);

6. Conclusion

In this quick tutorial, we've looked over multiple ways of breaking YAML strings over multiple lines through quick and practical examples.

As always, the code is available over on GitHub.

Add Build Properties to a Spring Boot Application

1. Introduction

Usually, our project's build configuration contains quite a lot of information about our application. Some of this information might be needed in the application itself. So, rather than hard-code this information, we can use it from the existing build configuration.

In this article, we'll see how to use information from the project's build configuration in a Spring Boot application.

2. The Build Information

Let's say we want to display the application description and version on our website's home page.

Usually, this information is present in pom.xml:

<project xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <artifactId>spring-boot</artifactId>
    <name>spring-boot</name>
    <packaging>war</packaging>
    <description>This is simple boot application for Spring boot actuator test</description>
    <version>0.0.1-SNAPSHOT</version>
...
</project>

3. Referencing the Information in the Application Properties File

Now, to use the above information in our application, we'll have to first reference it in one of our application properties files:

application-description=@project.description@
application-version=@project.version@

Here, we've used the value of the build property project.description to set the application property application-description. Similarly, application-version is set using project.version.

The most significant bit here is the use of the @ character around the property name. This tells the build to expand the named property from the Maven project.

Now, when we build our project, these properties will be replaced with their values from pom.xml.

This expansion is also referred to as resource filtering. It's worth noting that this kind of filtering is only applied to the production configuration. Consequently, we cannot use the build properties in the files under src/test/resources.

Another thing to note is that if we use the addResources flag, the spring-boot:run goal adds src/main/resources directly to the classpath. Although this is useful for hot reloading purposes, it circumvents resource filtering and, consequently, this feature, too.

Now, the above property expansion works out-of-the-box only if we use spring-boot-starter-parent.
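
Once the values are available as application properties, we can read them in code like any other property, for example by injecting them with @Value. The BuildInfoContributor class below is just a hypothetical sketch of such a consumer:

@Component
public class BuildInfoContributor {

    @Value("${application-description}")
    private String description;

    @Value("${application-version}")
    private String version;

    // these getters could feed a controller or a view model for the home page
    public String getDescription() {
        return description;
    }

    public String getVersion() {
        return version;
    }
}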

3.1. Expanding Properties Without spring-boot-starter-parent

Let's see how we can enable this feature without using the spring-boot-starter-parent dependency.

First, we have to enable resource filtering inside the <build/> element in our pom.xml:

<resources>
    <resource>
        <directory>src/main/resources</directory>
        <filtering>true</filtering>
    </resource>
</resources>

Here, we've enabled resource filtering under src/main/resources only.

Then, we can add the delimiter configuration for the maven-resources-plugin:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-resources-plugin</artifactId>
    <configuration>
        <delimiters>
            <delimiter>@</delimiter>
        </delimiters>
        <useDefaultDelimiters>false</useDefaultDelimiters>
    </configuration>
</plugin>

Note that we've specified the useDefaultDelimiters property as false. This ensures that the standard Spring placeholders such as ${placeholder} are not expanded by the build.

4. Using the Build Information in YAML Files

If we're using YAML to store application properties, we might not be able to use @ to specify the build properties. This is because @ is a reserved character in YAML.

But, we can overcome this by either configuring a different delimiter in maven-resources-plugin:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-resources-plugin</artifactId>
    <configuration>
        <delimiters>
            <delimiter>^</delimiter>
        </delimiters>
        <useDefaultDelimiters>false</useDefaultDelimiters>
    </configuration>
</plugin>

Or, simply by overriding the resource.delimiter property in the properties block of our pom.xml:

<properties>
    <resource.delimiter>^</resource.delimiter>
</properties>

Then, we can use ^ in our YAML file:

application-description: ^project.description^
application-version: ^project.version^

5. Conclusion

In this article, we saw how we could use Maven project information in our application. This can help us to avoid hardcoding the information that's already present in the project build configuration in our application properties files.

And of course, the code that accompanies this tutorial can be found over on GitHub.

The Java Headless Mode

1. Overview

On occasion, we need to work with graphics-based applications in Java without an actual display, keyboard, or mouse, let's say, on a server or a container.

In this short tutorial, we're going to learn about Java's headless mode to address this scenario. We'll also look at what we can do in headless mode and what we can't.

2. Setting up Headless Mode

There are many ways we can set up headless mode in Java explicitly:

  • Programmatically setting the system property java.awt.headless to true
  • Using the command line argument: java -Djava.awt.headless=true
  • Adding -Djava.awt.headless=true to the JAVA_OPTS environment variable in a server startup script

If the environment is actually headless, the JVM is aware of it implicitly. However, there will be subtle differences in some scenarios. We'll see them shortly.

3. Examples of UI Components in Headless Mode

A typical use case of UI components running in a headless environment could be an image converter app. Though it needs graphics data for image processing, a display is not really necessary. The app could be run on a server and converted files saved or sent over the network to another machine for display.

Let's see this in action.

First, we'll turn the headless mode on programmatically in a JUnit class:

@Before
public void setUpHeadlessMode() {
    System.setProperty("java.awt.headless", "true");
}

To make sure it is set up correctly, we can use java.awt.GraphicsEnvironment#isHeadless:

@Test
public void whenSetUpSuccessful_thenHeadlessIsTrue() {
    assertThat(GraphicsEnvironment.isHeadless()).isTrue();
}

We should bear in mind that the above test will succeed in a headless environment even if the mode is not explicitly turned on.

Now let's see our simple image converter:

@Test
public void whenHeadlessMode_thenImagesWork() {
    boolean result = false;
    try (InputStream inStream = HeadlessModeUnitTest.class.getResourceAsStream(IN_FILE); 
      FileOutputStream outStream = new FileOutputStream(OUT_FILE)) {
        BufferedImage inputImage = ImageIO.read(inStream);
        result = ImageIO.write(inputImage, FORMAT, outStream);
    }

    assertThat(result).isTrue();
}

In this next sample, we can see that information of all fonts, including font metrics, is also available to us:

@Test
public void whenHeadless_thenFontsWork() {
    GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
    String fonts[] = ge.getAvailableFontFamilyNames();
      
    assertThat(fonts).isNotEmpty();

    Font font = new Font(fonts[0], Font.BOLD, 14);
    FontMetrics fm = (new Canvas()).getFontMetrics(font);
        
    assertThat(fm.getHeight()).isGreaterThan(0);
    assertThat(fm.getAscent()).isGreaterThan(0);
    assertThat(fm.getDescent()).isGreaterThan(0);
}

4. HeadlessException

There are components that require peripheral devices and won't work in the headless mode. They throw a HeadlessException when used in a non-interactive environment:

Exception in thread "main" java.awt.HeadlessException
	at java.awt.GraphicsEnvironment.checkHeadless(GraphicsEnvironment.java:204)
	at java.awt.Window.<init>(Window.java:536)
	at java.awt.Frame.<init>(Frame.java:420)

This test asserts that using Frame in a headless mode will indeed throw a HeadlessException:

@Test
public void whenHeadlessmode_thenFrameThrowsHeadlessException() {
    assertThatExceptionOfType(HeadlessException.class).isThrownBy(() -> {
        Frame frame = new Frame();
        frame.setVisible(true);
        frame.setSize(120, 120);
    });
}

As a rule of thumb, remember that top-level components such as Frame and Button always need an interactive environment and will throw this exception. However, it will be thrown as an irrecoverable Error if the headless mode is not explicitly set.

5. Bypassing Heavyweight Components in Headless Mode

At this point, we might be asking a question to ourselves – but what if we have code with GUI components to run on both types of environments – a headed production machine and a headless source code analysis server?

In the above examples, we have seen that the heavyweight components won't work on the server and will throw an exception.

So, we can use a conditional approach:

public void FlexibleApp() {
    if (GraphicsEnvironment.isHeadless()) {
        System.out.println("Hello World");
    } else {
        JOptionPane.showMessageDialog(null, "Hello World");
    }
}

Using this pattern, we can create a flexible app that adjusts its behavior as per the environment.

6. Conclusion

With different code samples, we saw the how and why of headless mode in Java. Oracle's technical article on headless mode provides a complete list of what can be done while operating in it.

As usual, the source code for the above examples is available over on GitHub.


MongoDB Aggregations Using Java

1. Overview

In this tutorial, we'll take a dive into the MongoDB Aggregation framework using the MongoDB Java driver.

We'll first look at what aggregation means conceptually, and then set up a dataset. Finally, we'll see various aggregation techniques in action using the Aggregates builder.

2. What are Aggregations?

Aggregations are used in MongoDB to analyze data and derive meaningful information out of it. Aggregation is usually performed in various stages, and these stages together form a pipeline, such that the output of one stage is passed on as input to the next stage.

The most commonly used stages can be summarized as:

  • project (SQL: SELECT) – selects only the required fields, can also be used to compute and add derived fields to the collection
  • match (SQL: WHERE) – filters the collection as per specified criteria
  • group (SQL: GROUP BY) – gathers input together as per the specified criteria (e.g. count, sum) to return a document for each distinct grouping
  • sort (SQL: ORDER BY) – sorts the results in ascending or descending order of a given field
  • count (SQL: COUNT) – counts the documents the collection contains
  • limit (SQL: LIMIT) – limits the result to a specified number of documents, instead of returning the entire collection
  • out (SQL: SELECT INTO NEW_TABLE) – writes the result to a named collection; this stage is only acceptable as the last in a pipeline


The SQL Equivalent for each aggregation stage is included above to give us an idea of what the said operation means in the SQL world.

We'll look at Java code samples for all of these stages shortly. But before that, we need a database.

3. Database Setup

3.1. Dataset

The first and foremost requirement for learning anything database-related is the dataset itself!

For the purpose of this tutorial, we'll use a publicly available RESTful API endpoint that provides comprehensive information about all the countries of the world. This API gives us a lot of data points for a country in a convenient JSON format. Some of the fields that we'll be using in our analysis are:

  • name – the name of the country; for example, United States of America
  • alpha3Code – a shortcode for the country name; for example, IND (for India)
  • region – the region the country belongs to; for example, Europe
  • area – the geographical area of the country
  • languages – official languages of the country in an array format; for example, English
  • borders – an array of neighboring countries' alpha3Codes

Now let's see how to convert this data into a collection in a MongoDB database.

3.2. Importing to MongoDB

First, we need to hit the API endpoint to get all countries and save the response locally in a JSON file. The next step is to import it into MongoDB using the mongoimport command:

mongoimport.exe --db <db_name> --collection <collection_name> --file <path_to_file> --jsonArray

Successful import should give us a collection with 250 documents.

4. Aggregation Samples in Java

Now that we have the bases covered, let's get into deriving some meaningful insights from the data we have for all the countries. We'll use several JUnit tests for this purpose.

But before we do that, we need to make a connection to the database:

@BeforeClass
public static void setUpDB() throws IOException {
    mongoClient = MongoClients.create();
    database = mongoClient.getDatabase(DATABASE);
    collection = database.getCollection(COLLECTION);
}
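For reference, this setup assumes a few static fields and constants along these lines (the database and collection names are illustrative):

// uses com.mongodb.client.* and org.bson.Document
private static final String DATABASE = "world";
private static final String COLLECTION = "countries";

private static MongoClient mongoClient;
private static MongoDatabase database;
private static MongoCollection<Document> collection;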

In all the examples that follow, we'll be using the Aggregates helper class provided by the MongoDB Java driver.

For better readability of our snippets, we can add a static import:

import static com.mongodb.client.model.Aggregates.*;

4.1. match and count

Let's start with something simple. Earlier, we noted that the dataset contains information about languages.

Now, let's say we want to check the number of countries in the world where English is an official language:

@Test
public void givenCountryCollection_whenEnglishSpeakingCountriesCounted_thenNinetyOne() {
    Document englishSpeakingCountries = collection.aggregate(Arrays.asList(
      match(Filters.eq("languages.name", "English")),
      count())).first();
    
    assertEquals(91, englishSpeakingCountries.get("count"));
}

Here we are using two stages in our aggregation pipeline: match and count.

First, we filter the collection to match only those documents that contain English in their languages field. These documents can be imagined as a temporary or intermediate collection that becomes the input for our next stage, count, which counts the number of documents produced by the previous stage.

Another point to note in this sample is the use of the method first. Since we know that the output of the last stage, count, is going to be a single record, this is a guaranteed way to extract the lone resulting document.

4.2. group (with sum) and sort

In this example, our objective is to find out the geographical region containing the maximum number of countries:

@Test
public void givenCountryCollection_whenCountedRegionWise_thenMaxInAfrica() {
    Document maxCountriedRegion = collection.aggregate(Arrays.asList(
      group("$region", Accumulators.sum("tally", 1)),
      sort(Sorts.descending("tally")))).first();
    
    assertTrue(maxCountriedRegion.containsValue("Africa"));
}

As is evident, we are using group and sort to achieve our objective here.

First, we gather the number of countries in each region by accumulating a sum of their occurrences in a variable called tally. This gives us an intermediate collection of documents, each containing two fields: the region and the tally of countries in it. Then we sort it in descending order and extract the first document, which gives us the region with the maximum number of countries.

4.3. sort, limit, and out

Now let's use sort, limit and out to extract the seven largest countries area-wise and write them into a new collection:

@Test
public void givenCountryCollection_whenAreaSortedDescending_thenSuccess() {
    collection.aggregate(Arrays.asList(
      sort(Sorts.descending("area")), 
      limit(7),
      out("largest_seven"))).toCollection();

    MongoCollection<Document> largestSeven = database.getCollection("largest_seven");

    assertEquals(7, largestSeven.countDocuments());

    Document usa = largestSeven.find(Filters.eq("alpha3Code", "USA")).first();

    assertNotNull(usa);
}

Here, we first sorted the given collection in descending order of area. Then, we used the Aggregates#limit method to restrict the result to seven documents only. Finally, we used the out stage to write this data into a new collection called largest_seven. This collection can now be used in the same way as any other – for example, to check whether it contains USA.

4.4. project, group (with max), match

In our last sample, let's try something trickier. Say we need to find out how many borders each country shares with others, and what is the maximum such number.

Now in our dataset, we have a borders field, which is an array listing alpha3Codes for all bordering countries of the nation, but there isn't any field directly giving us the count. So we'll need to derive the number of borderingCountries using project:

@Test
public void givenCountryCollection_whenNeighborsCalculated_thenMaxIsFifteenInChina() {
    Bson borderingCountriesCollection = project(Projections.fields(Projections.excludeId(), 
      Projections.include("name"), Projections.computed("borderingCountries", 
        Projections.computed("$size", "$borders"))));
    
    int maxValue = collection.aggregate(Arrays.asList(borderingCountriesCollection, 
      group(null, Accumulators.max("max", "$borderingCountries"))))
      .first().getInteger("max");

    assertEquals(15, maxValue);

    Document maxNeighboredCountry = collection.aggregate(Arrays.asList(borderingCountriesCollection,
      match(Filters.eq("borderingCountries", maxValue)))).first();
       
    assertTrue(maxNeighboredCountry.containsValue("China"));
}

After that, as we saw before, we'll group the projected collection to find the max value of borderingCountries. One thing to point out here is that the max accumulator gives us the maximum value as a number, not the entire Document containing the maximum value. We need to perform match to filter out the desired Document if any further operations are to be performed.

5. Conclusion

In this article, we saw what MongoDB aggregations are and how to apply them in Java using an example dataset.

We used four samples to illustrate the various aggregation stages and form a basic understanding of the concept. The framework offers many more possibilities for data analytics that can be explored further.

For further reading, Spring Data MongoDB provides an alternative way to handle projections and aggregations in Java.

As always, source code is available over on GitHub.

The BeanDefinitionOverrideException in Spring Boot


1. Introduction

The Spring Boot 2.1 upgrade surprised several people with unexpected occurrences of BeanDefinitionOverrideException. It can confuse developers and leave them wondering what happened to the bean overriding behavior in Spring.

In this tutorial, we'll unravel this issue and see how best to address it.

2. Maven Dependencies

For our example Maven project, we need to add the Spring Boot Starter dependency:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter</artifactId>
    <version>2.2.3.RELEASE</version>
</dependency>

3. Bean Overriding

Spring beans are identified by their names within an ApplicationContext.

Thus, bean overriding is a default behavior that happens when we define a bean within an ApplicationContext which has the same name as another bean. It works by simply replacing the former bean in case of a name conflict.

Starting with Spring 5.1, the BeanDefinitionOverrideException was introduced to let developers have the framework throw an exception automatically and thus prevent any unexpected bean overriding. By default, the original behavior, which allows bean overriding, is still in place.
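For a plain (non-Boot) Spring application, a minimal sketch of opting into the stricter behavior ourselves could look like this (AppConfig1 and AppConfig2 are placeholder configuration classes):

AnnotationConfigApplicationContext context = new AnnotationConfigApplicationContext();
// duplicate bean names now fail fast instead of silently overriding each other
context.setAllowBeanDefinitionOverriding(false);
context.register(AppConfig1.class, AppConfig2.class);
context.refresh(); // throws if both configurations declare a bean with the same name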

4. Configuration Change for Spring Boot 2.1

Spring Boot 2.1 disabled bean overriding by default as a defensive approach. The main purpose is to notice the duplicate bean names in advance to prevent overriding beans accidentally.

Therefore, if our Spring Boot application relies on bean overriding, it is very likely to encounter the BeanDefinitionOverrideException after we upgrade the Spring Boot version to 2.1 and later.

In the next sections, we'll look at an example where the BeanDefinitionOverrideException would occur, and then we will discuss some solutions.

5. Identifying the Beans in Conflict

Let's create two different Spring configurations, each with a testBean() method, to produce the BeanDefinitionOverrideException:

@Configuration
public class TestConfiguration1 {

    class TestBean1 {
        private String name;

        // standard getters and setters

    }

    @Bean
    public TestBean1 testBean(){
        return new TestBean1();
    }
}
@Configuration
public class TestConfiguration2 {

    class TestBean2 {
        private String name;

        // standard getters and setters

    }

    @Bean
    public TestBean2 testBean(){
        return new TestBean2();
    }
}

Next, we will create our Spring Boot test class:

@RunWith(SpringRunner.class)
@SpringBootTest(classes = {TestConfiguration1.class, TestConfiguration2.class})
public class SpringBootBeanDefinitionOverrideExceptionIntegrationTest {

    @Autowired
    private ApplicationContext applicationContext;

    @Test
    public void whenBeanOverridingAllowed_thenTestBean2OverridesTestBean1() {
        Object testBean = applicationContext.getBean("testBean");

        assertThat(testBean.getClass()).isEqualTo(TestConfiguration2.TestBean2.class);
    }
}

Running the test produces a BeanDefinitionOverrideException. However, the exception provides us with some helpful information:

Invalid bean definition with name 'testBean' defined in ... 
... com.baeldung.beandefinitionoverrideexception.TestConfiguration2 ...
Cannot register bean definition [ ... defined in ... 
... com.baeldung.beandefinitionoverrideexception.TestConfiguration2] for bean 'testBean' ...
There is already [ ... defined in ...
... com.baeldung.beandefinitionoverrideexception.TestConfiguration1] bound.

Notice that the exception reveals two important pieces of information.

The first one is the conflicting bean name, testBean:

Invalid bean definition with name 'testBean' ...

And the second shows us the full path of the configurations affected:

... com.baeldung.beandefinitionoverrideexception.TestConfiguration2 ...
... com.baeldung.beandefinitionoverrideexception.TestConfiguration1 ...

As a result, we can see that two different beans are identified as testBean causing a conflict. Additionally, the beans are contained inside the configuration classes TestConfiguration1 and TestConfiguration2.

6. Possible Solutions

Depending on our configuration, Spring Beans have default names unless we set them explicitly.

Therefore, the first possible solution is to rename our beans.

There are some common ways to set bean names in Spring.

6.1. Changing Method Names

By default, Spring takes the name of the annotated methods as bean names.

Therefore, if we have beans defined in a configuration class, like our example, then simply changing the method names will prevent the BeanDefinitionOverrideException:

@Bean
public TestBean1 testBean1() {
    return new TestBean1();
}
@Bean
public TestBean2 testBean2() {
    return new TestBean2();
}

6.2. @Bean Annotation

Spring's @Bean annotation is a very common way of defining a bean.

Thus, another option is to set the name property of @Bean annotation:

@Bean("testBean1")
public TestBean1 testBean() {
    return new TestBean1();
}
@Bean("testBean2")
public TestBean1 testBean() {
    return new TestBean2();
}

6.3. Stereotype Annotations

Another way to define a bean is with stereotype annotations. With Spring's @ComponentScan feature enabled, we can define our bean names at the class level using the @Component annotation:

@Component("testBean1")
class TestBean1 {

    private String name;

    // standard getters and setters

}
@Component("testBean2")
class TestBean2 {

    private String name;

    // standard getters and setters

}

6.4. Beans Coming From 3rd Party Libraries

In some cases, it's possible to encounter a name conflict caused by beans originating from third-party Spring-supported libraries.

When this happens, we should attempt to identify which conflicting bean belongs to our application, to determine if any of the above solutions can be used.

However, if we are unable to alter any of the bean definitions, then configuring Spring Boot to allow bean overriding can be a workaround.

To enable bean overriding, let's set the spring.main.allow-bean-definition-overriding property to true in our application.properties file:

spring.main.allow-bean-definition-overriding=true

By doing this, we are telling Spring Boot to allow bean overriding without any change to bean definitions.

As a final note, we should be aware that it is difficult to predict which bean will have priority because the bean creation order is determined by dependency relationships that are mostly resolved at runtime. Therefore, allowing bean overriding can produce unexpected behavior unless we know the dependency hierarchy of our beans well enough.

7. Conclusion

In this tutorial, we explained what BeanDefinitionOverrideException means in Spring, why it suddenly appears, and how to address it after the Spring Boot 2.1 upgrade.

Like always, the complete source code of this article can be found over on GitHub.

Jenkins Slack Integration


1. Overview

When our teams are responsible for DevOps practices, we often need to monitor builds and other automated jobs.

In this tutorial, we'll see how to configure two popular platforms, Jenkins and Slack, to work together and tell us what's happening while our CI/CD pipelines are running.

2. Setting up Slack

Let's start by configuring Slack so Jenkins can send messages to it. To do this, we'll create a custom Slack app, which requires an Administrator account.

In Slack, we'll create an application and generate an OAuth token:

  • Visit https://api.slack.com
  • Login to the desired workspace
  • Click the Start Building button
  • Name the application Jenkins and click Create App
  • Click on OAuth & Permissions
  • In the Bot Token Scopes section, add the chat:write scope
  • Click the Install App to Workspace button
  • Click the Accept button

When this is done, we'll see a summary screen:

Now, we need to take note of the OAuth token, as we'll need it later when we configure Jenkins. We should treat it as a sensitive credential and keep it safe.

To complete the Slack setup, we must invite the new Jenkins user into the channels we wish it to use. One easy way to do this is to mention the new user with the @ character inside each channel.

3. Setting up Jenkins

To set up Jenkins, we'll need an administrator account.

First, let's start by logging into Jenkins and navigating to Manage Jenkins > Plugin Manager.

Then, on the Available tab, we'll search for Slack:

Let's select the checkbox for Slack Notification and click Install without restart.

Now, we need to configure new credentials. Let's navigate to Jenkins > Credentials > System > Global Credentials and add a new Secret text credential:

We'll put the OAuth token from Slack into the Secret field. We should also give these credentials a meaningful ID and description to help us easily identify them later. The Jenkins credentials store is a safe place to keep this token.

Once we save the credentials, there is one more global configuration to set. Under Jenkins > Manage Jenkins > Configure System, we need to check the Custom slack app bot user checkbox under the Slack section:

Now that we've completed the Jenkins setup, let's look at how to configure Jenkins jobs and pipelines to send Slack messages.

4. Configuring a Traditional Jenkins Job

Traditional Jenkins jobs usually execute one or more actions to accomplish their goals. These are configured via the Jenkins user interface.

In order to integrate a traditional job with Slack, we'll use a post-build action.

Let's pick any job, or create a new one. When we drop down the Add post-build action menu, we'll find Slack Notifications:

Once selected, there are lots of available inputs to the Slack Notification action. Generally, most of the default values are sufficient. However, there are a few required pieces of information:

  • Which build phases to send messages for (start, success, failure, etc)
  • The name of the credentials to use – the ones we added previously
  • The Slack channel name or member ID to send messages to

We can also specify additional fields if desired, such as commit information used for the Jenkins job, custom messages, custom bot icons, and more:

When setting things up via the UI, we can use the Test Connection button to ensure that Jenkins can reach Slack. If successful, we'll see a test message in the Slack channel from the Jenkins user:

If the message doesn't show up, the Jenkins log files are useful for troubleshooting. Generally, we need to double-check that the post-build action has all required fields, that the OAuth token was copied correctly, and that the token was granted the proper scopes when we configured Slack.

5. Configuring a Jenkins Pipeline

Jenkins Pipelines differ from traditional jobs. They use a single Groovy script, broken into stages, to define a build. They also don't have post-build actions, so we use the pipeline script itself to send Slack messages.

The following snippet sends a message to Slack from a Jenkins pipeline:

slackSend botUser: true, 
  channel: 'builds', 
  color: '#00ff00', 
  message: 'Testing Jenkins with Slack', 
  tokenCredentialId: 'slack-token'

Just like with the traditional Jenkins job setup, we must still specify a channel name and the name of the credential to use.

Via a Jenkins pipeline, we can also use a variety of additional Slack features, such as file upload, message threads, and more.

One downside to using Jenkins pipelines is that there's no test button. To test the integration with Slack, we have to execute the whole pipeline.

When first setting things up, we can create a new pipeline that contains only the Slack command while we're getting things working.

6. Additional Considerations

Now that we've got Jenkins and Slack connected, there are some additional considerations.

Firstly, a single Jenkins instance can communicate with multiple Slack workspaces. All we have to do is create a custom application and generate a new token for each workspace. As long as each token is stored as its own credential in Jenkins, different jobs can post to different workspaces.

Along those same lines, a different Jenkins job can post to different Slack channels. This is a per-job setting in the post-build actions we configure. For example, jobs related to software builds could post to a development-only channel. And jobs related to test or production could go to their own dedicated channels.

Finally, while we've looked at one of the more popular Slack plugins for Jenkins, which provides fine-grained control over what to send, there are a number of other plugins that serve different purposes. For example, if we want every Jenkins job to send the same notification, the Global Slack Notifier plugin might be better suited.

7. Conclusion

In this article, we've seen how to integrate Jenkins and Slack to gain feedback on our CI/CD pipelines.

Using a Jenkins plugin, along with a custom Slack application, we were able to send messages from Jenkins to Slack. This allows teams to notice the status of Jenkins jobs and address issues more quickly.

Design Patterns in the Spring Framework


1. Introduction

Design patterns are an essential part of software development. These solutions not only solve recurring problems but also help developers understand the design of a framework by recognizing common patterns.

In this tutorial, we'll look at four of the most common design patterns used in the Spring Framework:

  1. Singleton pattern
  2. Factory Method pattern
  3. Proxy pattern
  4. Template pattern

We'll also look at how Spring uses these patterns to reduce the burden on developers and help users quickly perform tedious tasks.

2. Singleton Pattern

The singleton pattern is a mechanism that ensures only one instance of an object exists per application. This pattern can be useful when managing shared resources or providing cross-cutting services, such as logging.
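Outside of Spring, a classic eagerly initialized singleton might look like this (LoggerService is a made-up example):

public class LoggerService {

    private static final LoggerService INSTANCE = new LoggerService();

    private LoggerService() {
        // a private constructor prevents external instantiation
    }

    public static LoggerService getInstance() {
        return INSTANCE;
    }
}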

2.1. Singleton Beans

Generally, a singleton is globally unique for an application, but in Spring, this constraint is relaxed. Instead, Spring restricts a singleton to one object per Spring IoC container. In practice, this means Spring will only create one bean for each type per application context.

Spring's approach differs from the strict definition of a singleton since an application can have more than one Spring container. Therefore, multiple objects of the same class can exist in a single application if we have multiple containers.

 

 

By default, Spring creates all beans as singletons.

2.2. Autowired Singletons

For example, we can create two controllers within a single application context and inject a bean of the same type into each.

First, we create a BookRepository that manages our Book domain objects.

Next, we create LibraryController, which uses the BookRepository to return the number of books in the library:

@RestController
public class LibraryController {
    
    @Autowired
    private BookRepository repository;

    @GetMapping("/count")
    public Long findCount() {
        System.out.println(repository);
        return repository.count();
    }
}

Lastly, we create a BookController, which focuses on Book-specific actions, such as finding a book by its ID:

@RestController
public class BookController {
     
    @Autowired
    private BookRepository repository;
 
    @GetMapping("/book/{id}")
    public Book findById(@PathVariable long id) {
        System.out.println(repository);
        return repository.findById(id).get();
    }
}

We then start this application and perform a GET on /count and /book/1:

curl -X GET http://localhost:8080/count
curl -X GET http://localhost:8080/book/1

In the application output, we see that both BookRepository objects have the same object ID:

com.baeldung.spring.patterns.singleton.BookRepository@3ea9524f
com.baeldung.spring.patterns.singleton.BookRepository@3ea9524f

The BookRepository object IDs in the LibraryController and BookController are the same, proving that Spring injected the same bean into both controllers.

We can create separate instances of the BookRepository bean by changing the bean scope from singleton to prototype using the @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE) annotation.

Doing so instructs Spring to create separate objects for each of the BookRepository beans it creates. Therefore, if we inspect the object ID of the BookRepository in each of our controllers again, we see that they are no longer the same.
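As a rough sketch, assuming BookRepository is a plain Spring component rather than a Spring Data interface, the prototype scope could be declared like this:

@Component
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public class BookRepository {
    // repository logic ...
}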

3. Factory Method Pattern

The factory method pattern entails a factory class with an abstract method for creating the desired object.

Often, we want to create different objects based on a particular context.

For example, our application may require a vehicle object. In a nautical environment, we want to create boats, but in an aerospace environment, we want to create airplanes:

 

 

To accomplish this, we can create a factory implementation for each desired object and return the desired object from the concrete factory method.
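As a quick, framework-free sketch of the idea (all class and factory names here are made up for illustration):

interface Vehicle {
}

class Boat implements Vehicle {
}

class Airplane implements Vehicle {
}

abstract class VehicleFactory {
    // the factory method: each concrete factory decides which Vehicle to create
    abstract Vehicle createVehicle();
}

class NauticalVehicleFactory extends VehicleFactory {
    @Override
    Vehicle createVehicle() {
        return new Boat();
    }
}

class AerospaceVehicleFactory extends VehicleFactory {
    @Override
    Vehicle createVehicle() {
        return new Airplane();
    }
}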

3.1. Application Context

Spring uses this technique at the root of its Dependency Injection (DI) framework.

Fundamentally, Spring treats a bean container as a factory that produces beans.

Thus, Spring defines the BeanFactory interface as an abstraction of a bean container:

public interface BeanFactory {

    <T> T getBean(Class<T> requiredType);
    <T> T getBean(Class<T> requiredType, Object... args);
    Object getBean(String name);

    // ...
}

Each of the getBean methods is considered a factory method, which returns a bean matching the criteria supplied to the method, like the bean's type and name.

Spring then extends BeanFactory with the ApplicationContext interface, which introduces additional application configuration. Spring uses this configuration to start up a bean container based on some external configuration, such as an XML file or Java annotations.

Using the ApplicationContext class implementations like AnnotationConfigApplicationContext, we can then create beans through the various factory methods inherited from the BeanFactory interface.

First, we create a simple application configuration:

@Configuration
@ComponentScan(basePackageClasses = ApplicationConfig.class)
public class ApplicationConfig {
}

Next, we create a simple class, Foo, that accepts no constructor arguments:

@Component
public class Foo {
}

Then create another class, Bar, that accepts a single constructor argument:

@Component
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public class Bar {
 
    private String name;
     
    public Bar(String name) {
        this.name = name;
    }
     
    // Getter ...
}

Lastly, we create our beans through the AnnotationConfigApplicationContext implementation of ApplicationContext:

@Test
public void whenGetSimpleBean_thenReturnConstructedBean() {
    
    ApplicationContext context = new AnnotationConfigApplicationContext(ApplicationConfig.class);
    
    Foo foo = context.getBean(Foo.class);
    
    assertNotNull(foo);
}

@Test
public void whenGetPrototypeBean_thenReturnConstructedBean() {
    
    String expectedName = "Some name";
    ApplicationContext context = new AnnotationConfigApplicationContext(ApplicationConfig.class);
    
    Bar bar = context.getBean(Bar.class, expectedName);
    
    assertNotNull(bar);
    assertThat(bar.getName(), is(expectedName));
}

Using the getBean factory method, we can create configured beans using just the class type and — in the case of Bar — constructor parameters.

3.2. External Configuration

This pattern is versatile because we can completely change the application's behavior based on external configuration.

If we wish to change the implementation of the autowired objects in the application, we can adjust the ApplicationContext implementation we use.

 

For example, we can change the AnnotationConfigApplicationContext to an XML-based configuration class, such as ClassPathXmlApplicationContext:

@Test 
public void givenXmlConfiguration_whenGetPrototypeBean_thenReturnConstructedBean() { 

    String expectedName = "Some name";
    ApplicationContext context = new ClassPathXmlApplicationContext("context.xml");
 
    // Same test as before ...
}

4. Proxy Pattern

Proxies are a handy tool in our digital world, and we use them very often outside of software (such as network proxies). In code, the proxy pattern is a technique that allows one object — the proxy — to control access to another object — the subject or service.

 

 

4.1. Transactions

To create a proxy, we create an object that implements the same interface as our subject and contains a reference to the subject.

We can then use the proxy in place of the subject.
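For illustration, a minimal hand-written proxy could look like this (BookService and the logging behavior are made-up examples, not Spring code):

interface BookService {
    String findTitle(long id);
}

class BookServiceImpl implements BookService {
    public String findTitle(long id) {
        return "Design Patterns";
    }
}

class LoggingBookServiceProxy implements BookService {

    private final BookService subject;

    LoggingBookServiceProxy(BookService subject) {
        this.subject = subject;
    }

    public String findTitle(long id) {
        // the proxy controls access to the subject and can add behavior around the call
        System.out.println("Looking up book " + id);
        return subject.findTitle(id);
    }
}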

In Spring, beans are proxied to control access to the underlying bean. We see this approach when using transactions:

@Service
public class BookManager {
    
    @Autowired
    private BookRepository repository;

    @Transactional
    public Book create(String author) {
        System.out.println(repository.getClass().getName());
        return repository.create(author);
    }
}

In our BookManager class, we annotate the create method with the @Transactional annotation. This annotation instructs Spring to atomically execute our create method. Without a proxy, Spring wouldn't be able to control access to our BookRepository bean and ensure its transactional consistency.

4.2. CGLib Proxies

Instead, Spring creates a proxy that wraps our BookRepository bean and instruments our bean to execute our create method atomically.

When we call our BookManager#create method, we can see the output:

com.baeldung.patterns.proxy.BookRepository$$EnhancerBySpringCGLIB$$3dc2b55c

Typically, we would expect to see a standard BookRepository object ID; instead, we see an EnhancerBySpringCGLIB object ID.

Behind the scenes, Spring has wrapped our BookRepository object inside an EnhancerBySpringCGLIB object. Spring thus controls access to our BookRepository object (ensuring transactional consistency).

Generally, Spring uses two types of proxies:

  1. CGLib Proxies – Used when proxying classes
  2. JDK Dynamic Proxies – Used when proxying interfaces

While we used transactions to expose the underlying proxies, Spring will use proxies for any scenario in which it must control access to a bean.
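To see how an interface-based proxy can be created without bytecode generation, here is a rough sketch using the JDK's dynamic proxy API (it reuses the made-up BookService interface from the earlier sketch):

// uses java.lang.reflect.Proxy
BookService target = new BookServiceImpl();

BookService proxy = (BookService) Proxy.newProxyInstance(
  BookService.class.getClassLoader(),
  new Class<?>[] { BookService.class },
  (proxyInstance, method, args) -> {
      // runs around every call on the proxied interface
      System.out.println("Before " + method.getName());
      Object result = method.invoke(target, args);
      System.out.println("After " + method.getName());
      return result;
  });

System.out.println(proxy.findTitle(1L));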

5. Template Method Pattern

In many frameworks, a significant portion of the code is boilerplate code.

For example, when executing a query on a database, the same series of steps must be completed:

  1. Establish a connection
  2. Execute query
  3. Perform cleanup
  4. Close the connection

These steps are an ideal scenario for the template method pattern.

5.1. Templates & Callbacks

The template method pattern is a technique that defines the steps required for some action, implements the boilerplate steps, and leaves the customizable steps abstract. Subclasses can then implement this abstract class and provide a concrete implementation for the missing steps.

We can create a template in the case of our database query:

public abstract class DatabaseQuery {

    public void execute() {
        Connection connection = createConnection();
        executeQuery(connection);
        closeConnection(connection);
    } 

    protected Connection createConnection() {
        // Connect to database...
    }

    protected void closeConnection(Connection connection) {
        // Close connection...
    }

    protected abstract void executeQuery(Connection connection);
}

Alternatively, we can provide the missing step by supplying a callback method.

A callback method is a method that allows the subject to signal to the client that some desired action has completed.

In some cases, the subject can use this callback to perform actions — such as mapping results.

 

 

For example, instead of having an executeQuery method, we can supply the execute method a query string and a callback method to handle the results.

First, we create the callback method that takes a Results object and maps it to an object of type T:

public interface ResultsMapper<T> {
    public T map(Results results);
}

Then we change our DatabaseQuery class to utilize this callback:

public abstract class DatabaseQuery {

    public <T> T execute(String query, ResultsMapper<T> mapper) {
        Connection connection = createConnection();
        Results results = executeQuery(connection, query);
        closeConnection(connection);
        return mapper.map(results);
    }

    protected Results executeQuery(Connection connection, String query) {
        // Perform query...
    }
}

This callback mechanism is precisely the approach that Spring uses with the JdbcTemplate class.

5.2. JdbcTemplate

The JdbcTemplate class provides the query method, which accepts a query String and ResultSetExtractor object:

public class JdbcTemplate {

    public <T> T query(final String sql, final ResultSetExtractor<T> rse) throws DataAccessException {
        // Execute query...
    }

    // Other methods...
}

The ResultSetExtractor converts the ResultSet object — representing the result of the query — into a domain object of type T:

@FunctionalInterface
public interface ResultSetExtractor<T> {
    T extractData(ResultSet rs) throws SQLException, DataAccessException;
}

Spring further reduces boilerplate code by creating more specific callback interfaces.

For example, the RowMapper interface is used to convert a single row of SQL data into a domain object of type T:

@FunctionalInterface
public interface RowMapper<T> {
    T mapRow(ResultSet rs, int rowNum) throws SQLException;
}

To adapt the RowMapper interface to the expected ResultSetExtractor, Spring creates the RowMapperResultSetExtractor class:

public class JdbcTemplate {

    public <T> List<T> query(String sql, RowMapper<T> rowMapper) throws DataAccessException {
        return result(query(sql, new RowMapperResultSetExtractor<>(rowMapper)));
    }

    // Other methods...
}

Instead of providing logic for converting an entire ResultSet object, including iteration over the rows, we can provide logic for how to convert a single row:

public class BookRowMapper implements RowMapper<Book> {

    @Override
    public Book mapRow(ResultSet rs, int rowNum) throws SQLException {

        Book book = new Book();
        
        book.setId(rs.getLong("id"));
        book.setTitle(rs.getString("title"));
        book.setAuthor(rs.getString("author"));
        
        return book;
    }
}

With this converter, we can then query a database using the JdbcTemplate and map each resulting row:

JdbcTemplate template = // create template...
template.query("SELECT * FROM books", new BookRowMapper());

Apart from JDBC database management, Spring also uses templates for several other operations, for example:

  • JMS messaging (JmsTemplate)
  • REST calls (RestTemplate)
  • Transactions (TransactionTemplate)
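For instance, a minimal RestTemplate call hides all of the HTTP plumbing behind a single method (the URL is just a placeholder):

RestTemplate restTemplate = new RestTemplate();
String books = restTemplate.getForObject("https://example.com/api/books", String.class);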

6. Conclusion

In this tutorial, we looked at four of the most common design patterns applied in the Spring Framework.

We also explored how Spring utilizes these patterns to provide rich features while reducing the burden on developers.

The code from this article can be found over on GitHub.

Cache Headers in Spring MVC


1. Overview

In this tutorial, we'll learn about HTTP caching. We'll also look at various ways to implement this mechanism between a client and a Spring MVC application.

2. Introducing HTTP Caching

When we open a web page on a browser, it usually downloads a lot of resources from the webserver:

For instance, in this example, a browser needs to download three resources for one /login page. It's common for a browser to make multiple HTTP requests for every web page. Now, if we request such pages very frequently, it causes a lot of network traffic and takes longer to serve these pages.

To reduce network load, the HTTP protocol allows browsers to cache some of these resources. If enabled, browsers can save a copy of a resource in the local cache. As a result, browsers can serve these pages from the local storage instead of requesting it over the network:

A web server can direct the browser to cache a particular resource by adding a Cache-Control header in the response.

Since the resources are cached as a local copy, there is a risk of serving stale content from the browser. Therefore, web servers usually add an expiration time in the Cache-Control header.

In the following sections, we'll add this header in a response from the Spring MVC controller. Later, we'll also see Spring APIs to validate the cached resources based on the expiration time.

3. Cache-Control in Controller's Response

3.1. Using ResponseEntity

The most straightforward way to do this is to use the CacheControl builder class provided by Spring:

@GetMapping("/hello/{name}")
@ResponseBody
public ResponseEntity<String> hello(@PathVariable String name) {
    CacheControl cacheControl = CacheControl.maxAge(60, TimeUnit.SECONDS)
      .noTransform()
      .mustRevalidate();
    return ResponseEntity.ok()
      .cacheControl(cacheControl)
      .body("Hello " + name);
}

This will add a Cache-Control header in the response:

@Test
void whenHome_thenReturnCacheHeader() throws Exception {
    this.mockMvc.perform(MockMvcRequestBuilders.get("/hello/baeldung"))
      .andDo(MockMvcResultHandlers.print())
      .andExpect(MockMvcResultMatchers.status().isOk())
      .andExpect(MockMvcResultMatchers.header()
        .string("Cache-Control","max-age=60, must-revalidate, no-transform"));
}

3.2. Using HttpServletResponse

Often, the controllers need to return the view name from the handler method. However, the ResponseEntity class doesn't allow us to return the view name and deal with the request body at the same time.

Alternatively, for such controllers we can set the Cache-Control header in the HttpServletResponse directly:

@GetMapping(value = "/home/{name}")
public String home(@PathVariable String name, final HttpServletResponse response) {
    response.addHeader("Cache-Control", "max-age=60, must-revalidate, no-transform");
    return "home";
}

This will also add a Cache-Control header in the HTTP response similar to the last section:

@Test
void whenHome_thenReturnCacheHeader() throws Exception {
    this.mockMvc.perform(MockMvcRequestBuilders.get("/home/baeldung"))
      .andDo(MockMvcResultHandlers.print())
      .andExpect(MockMvcResultMatchers.status().isOk())
      .andExpect(MockMvcResultMatchers.header()
        .string("Cache-Control","max-age=60, must-revalidate, no-transform"))
      .andExpect(MockMvcResultMatchers.view().name("home"));
}

4. Cache-Control for Static Resources

Generally, our Spring MVC application serves a lot of static resources like HTML, CSS, and JS files. Since such files consume a lot of network bandwidth, it's important for browsers to cache them. We'll again enable this with the Cache-Control header in the response.

Spring allows us to control this caching behavior in resource mapping:

@Override
public void addResourceHandlers(final ResourceHandlerRegistry registry) {
    registry.addResourceHandler("/resources/**").addResourceLocations("/resources/")
      .setCacheControl(CacheControl.maxAge(60, TimeUnit.SECONDS)
        .noTransform()
        .mustRevalidate());
}

This ensures that all resources defined under /resources are returned with a Cache-Control header in the response.

5. Cache-Control in Interceptors

We can use interceptors in our Spring MVC application to do some pre- and post-processing for every request. This is another place where we can control the caching behavior of the application.

Now instead of implementing a custom interceptor, we'll use the WebContentInterceptor provided by Spring:

@Override
public void addInterceptors(InterceptorRegistry registry) {
    WebContentInterceptor interceptor = new WebContentInterceptor();
    interceptor.addCacheMapping(CacheControl.maxAge(60, TimeUnit.SECONDS)
      .noTransform()
      .mustRevalidate(), "/login/*");
    registry.addInterceptor(interceptor);
}

Here, we registered the WebContentInterceptor and added the Cache-Control header similar to the last few sections. Notably, we can add different Cache-Control headers for different URL patterns.

In the above example, for all requests starting with /login, we'll add this header:

@Test
void whenInterceptor_thenReturnCacheHeader() throws Exception {
    this.mockMvc.perform(MockMvcRequestBuilders.get("/login/baeldung"))
      .andDo(MockMvcResultHandlers.print())
      .andExpect(MockMvcResultMatchers.status().isOk())
      .andExpect(MockMvcResultMatchers.header()
        .string("Cache-Control","max-age=60, must-revalidate, no-transform"));
}

6. Cache Validation in Spring MVC

So far, we've discussed various ways of including a Cache-Control header in the response. This instructs clients or browsers to cache resources based on configuration properties like max-age.

It's generally a good idea to add a cache expiry time with each resource. As a result, browsers can avoid serving expired resources from the cache.

Although browsers should always check for expiry, it may not be necessary to re-fetch the resource every time. If a browser can validate that a resource hasn't changed on the server, it can continue to serve the cached version of it. And for this purpose, HTTP provides us with two response headers:

  1. ETag – an HTTP response header that stores a unique hash value used to determine whether a cached resource has changed on the server – a corresponding If-None-Match request header must carry the last ETag value
  2. Last-Modified – an HTTP response header that stores the timestamp of when the resource was last updated – a corresponding If-Unmodified-Since request header must carry the last modified date

We can use either of these headers to check if an expired resource needs to be re-fetched. After validating the headers, the server can either re-send the resource or send a 304 HTTP code to signify no change. For the latter scenario, browsers can continue to use the cached resource.

The Last-Modified header can only store time intervals up to seconds precision. This can be a limitation in cases where a shorter expiry is required. For this reason, it's recommended to use ETag instead. Since the ETag header stores a hash value, it's possible to create a unique hash at finer intervals, such as nanoseconds.

That said, let's check out what it looks like to use Last-Modified.

Spring provides some utility methods to check if the request contains an expiration header or not:

@GetMapping(value = "/productInfo/{name}")
public ResponseEntity<String> validate(@PathVariable String name, WebRequest request) {
 
    ZoneId zoneId = ZoneId.of("GMT");
    long lastModifiedTimestamp = LocalDateTime.of(2020, 02, 4, 19, 57, 45)
      .atZone(zoneId).toInstant().toEpochMilli();
     
    if (request.checkNotModified(lastModifiedTimestamp)) {
        return ResponseEntity.status(304).build();
    }
     
    return ResponseEntity.ok().body("Hello " + name);
}

Spring provides the checkNotModified() method to check if a resource has been modified since the last request:

@Test
void whenValidate_thenReturnCacheHeader() throws Exception {
    HttpHeaders headers = new HttpHeaders();
    headers.add(IF_UNMODIFIED_SINCE, "Tue, 04 Feb 2020 19:57:25 GMT");
    this.mockMvc.perform(MockMvcRequestBuilders.get("/productInfo/baeldung").headers(headers))
      .andDo(MockMvcResultHandlers.print())
      .andExpect(MockMvcResultMatchers.status().is(304));
}
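For completeness, here is a rough sketch of the equivalent ETag-based validation; the endpoint and the way the ETag value is derived are illustrative only:

@GetMapping(value = "/productDetails/{name}")
public ResponseEntity<String> validateWithEtag(@PathVariable String name, WebRequest request) {

    String etag = "\"" + name.hashCode() + "\""; // assumption: a version hash derived from the resource

    if (request.checkNotModified(etag)) {
        // Spring has already set the 304 status and the ETag header for us
        return null;
    }

    return ResponseEntity.ok()
      .eTag(etag)
      .body("Hello " + name);
}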

7. Conclusion

In this article, we learned about HTTP caching by using the Cache-Control response header in Spring MVC. We can either add the header in the controller's response using the ResponseEntity class or through resource mapping for static resources.

We can also add this header for particular URL patterns using Spring interceptors.

As always, the code is available over on GitHub.

How to Handle Java SocketException


1. Introduction

In this tutorial, we'll learn the causes of SocketException with an example. We’ll also discuss how to handle the exception.

2. Causes of SocketException

The most common cause of SocketException is writing or reading data to or from a closed socket connection. Another cause of it is closing the connection before reading all data in the socket buffer.

Let's take a closer look at some common underlying reasons.

2.1. Slow Network

A poor network connection might be the underlying problem. Setting a higher socket connection timeout can decrease the rate of SocketException for slow connections:

socket.setSoTimeout(30000); // timeout set to 30,000 ms

2.2. Firewall Intervention

A network firewall can close socket connections. If we have access to the firewall, we can turn it off and see if it solves the problem.

Otherwise, we can use a network monitoring tool such as Wireshark to check firewall activities.

2.3. Long Idle Connection

Idle connections might get forgotten by the other end (to save resources). If we have to use a connection for a long time, we can send heartbeat messages to prevent idle state.
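A rough sketch of such a heartbeat, assuming we already hold a PrintWriter (out) around an open socket:

ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
// periodically send a small message so the other end doesn't treat the connection as idle
scheduler.scheduleAtFixedRate(() -> out.println("ping"), 0, 30, TimeUnit.SECONDS);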

2.4. Application Error

Last but not least, SocketException can occur because of mistakes or bugs in our code.

To demonstrate this, let's start a server on port 6699:

SocketServer server = new SocketServer();
server.start(6699);

When the server is started, we'll wait for a message from the client:

serverSocket = new ServerSocket(port);
clientSocket = serverSocket.accept();
out = new PrintWriter(clientSocket.getOutputStream(), true);
in = new BufferedReader(new InputStreamReader(clientSocket.getInputStream()));
String msg = in.readLine();

Once we get it, we'll respond and close the connection:

out.println("hi");
in.close();
out.close();
clientSocket.close();
serverSocket.close();

So, let's say a client connects to our server and sends “hi”:

SocketClient client = new SocketClient();
client.startConnection("127.0.0.1", 6699);
client.sendMessage("hi");

So far, so good.

But, if the client sends another message:

client.sendMessage("hi again");

Since the client sends “hi again” to the server after the connection has already been closed, a SocketException occurs.

3. Handling of a SocketException

Handling SocketException is pretty straightforward. Similar to any other checked exception, we must either declare it with a throws clause or surround it with a try-catch block.

Let's handle the exception in our example:

try {
    client.sendMessage("hi");
    client.sendMessage("hi again");
} catch (SocketException e) {
    client.stopConnection();
}

Here, we've closed the client connection after the exception occurred. Retrying won't work, because the connection is already closed. We should start a new connection instead:

client.startConnection("127.0.0.1", 6699);
client.sendMessage("hi again");

4. Conclusion

In this article, we learned what causes SocketException and how to handle it.

As always, the code is available over on Github.

Arrays.deepEquals


1. Overview

In this tutorial, we'll dive into the details of the deepEquals method from the Arrays class. We'll see when we should use this method, and we'll go through some simple examples.

To learn more about the different methods in the java.util.Arrays class, check out our quick guide.

2. Purpose

We should use the deepEquals method when we want to check the equality between two nested or multidimensional arrays. Also, when we want to compare two arrays composed of user-defined objects, as we'll see later, we must override the equals method.

Now, let's find out more details about the deepEquals method.

2.1. Syntax

We'll start by having a look at the method signature:

public static boolean deepEquals(Object[] a1, Object[] a2)

From the method signature, we notice that we cannot use deepEquals to compare two unidimensional arrays of primitive data types. For this, we must either box the primitive array to its corresponding wrapper or use the Arrays.equals method, which has overloaded methods for primitive arrays.
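For instance, a quick sketch of the two options for one-dimensional primitive arrays:

int[] primitives = { 1, 2, 3 };
int[] otherPrimitives = { 1, 2, 3 };
assertTrue(Arrays.equals(primitives, otherPrimitives)); // overload for int[]

Integer[] boxed = { 1, 2, 3 };
Integer[] otherBoxed = { 1, 2, 3 };
assertTrue(Arrays.deepEquals(boxed, otherBoxed)); // boxed arrays are Object[]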

2.2. Implementation

By analyzing the method's internal implementation, we can see that the method not only checks the top-level elements of the arrays but also checks recursively every subelement of it.

Therefore, we should avoid using the deepEquals method with arrays that have a self-reference because this will result in a java.lang.StackOverflowError.
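To illustrate, a small sketch of the kind of input we should avoid:

Object[] first = new Object[1];
Object[] second = new Object[1];
first[0] = first;   // each array contains itself
second[0] = second;

// Arrays.deepEquals(first, second) recurses indefinitely and throws StackOverflowError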

Next, let's find out what output we can get from this method.

3. Output

The Arrays.deepEquals method returns:

  • true if both parameters are the same object (have the same reference)
  • true if both parameters are null
  • false if only one of the two parameters is null
  • false if the arrays have different lengths
  • true if both arrays are empty
  • true if the arrays contain the same number of elements and every pair of subelements are deeply equal
  • false in other cases

In the next section, we'll have a look at some code examples.

4. Examples

Now it's time to see the deepEquals method in action. Moreover, we'll compare it with the equals method from the same Arrays class.

4.1. Unidimensional Arrays

Firstly, let's start with a simple example and compare two unidimensional arrays of type Object:

    Object[] anArray = new Object[] { "string1", "string2", "string3" };
    Object[] anotherArray = new Object[] { "string1", "string2", "string3" };

    assertTrue(Arrays.equals(anArray, anotherArray));
    assertTrue(Arrays.deepEquals(anArray, anotherArray));

We see that both equals and deepEquals methods return true. Let's find out what happens if one element of our arrays is null:

    Object[] anArray = new Object[] { "string1", null, "string3" };
    Object[] anotherArray = new Object[] { "string1", null, "string3" };

    assertTrue(Arrays.equals(anArray, anotherArray));
    assertTrue(Arrays.deepEquals(anArray, anotherArray));

We see that both assertions are passing. Hence, we can conclude that when using the deepEquals method, null values are accepted at any depth of the input arrays.

But let's try one more thing and let's check the behavior with nested arrays:

    Object[] anArray = new Object[] { "string1", null, new String[] {"nestedString1", "nestedString2" }};
    Object[] anotherArray = new Object[] { "string1", null, new String[] {"nestedString1", "nestedString2" } };

    assertFalse(Arrays.equals(anArray, anotherArray));
    assertTrue(Arrays.deepEquals(anArray, anotherArray));

Here we find out that the deepEquals returns true while equals returns false. This is because deepEquals calls itself recursively when encountering an array, while equals just compares the references of the sub-arrays.

4.2. Multidimensional Arrays of Primitive Types

Next, let's check the behavior using multidimensional arrays. In the next example, the two methods have different outputs, emphasizing the fact that we should use deepEquals instead of the equals method when we are comparing multidimensional arrays:

    int[][] anArray = { { 1, 2, 3 }, { 4, 5, 6, 9 }, { 7 } };
    int[][] anotherArray = { { 1, 2, 3 }, { 4, 5, 6, 9 }, { 7 } };

    assertFalse(Arrays.equals(anArray, anotherArray));
    assertTrue(Arrays.deepEquals(anArray, anotherArray));

4.3. Multidimensional Arrays of User-Defined Objects

Finally, let's check the behavior of deepEquals and equals methods when testing the equality of two multidimensional arrays for a user-defined object:

Let's start by creating a simple Person class:

    class Person {
        private int id;
        private String name;
        private int age;

        // constructor & getters & setters

        @Override
        public boolean equals(Object obj) {
            if (this == obj) {
                return true;
            }
            if (obj == null) {
                return false;
            }
            if (!(obj instanceof Person))
                return false;
            Person person = (Person) obj;
            return id == person.id && name.equals(person.name) && age == person.age;
        }
    }

It is necessary to override the equals method for our Person class. Otherwise, the default equals method will compare only the references of the objects.

Also, let's take into consideration that, even though it's not relevant for our example, we should always override hashCode when we override the equals method so that we don't violate their contracts.

Next, we can compare two multidimensional arrays of the Person class:

    Person personArray1[][] = { { new Person(1, "John", 22), new Person(2, "Mike", 23) },
      { new Person(3, "Steve", 27), new Person(4, "Gary", 28) } };
    Person personArray2[][] = { { new Person(1, "John", 22), new Person(2, "Mike", 23) }, 
      { new Person(3, "Steve", 27), new Person(4, "Gary", 28) } };
        
    assertFalse(Arrays.equals(personArray1, personArray2));
    assertTrue(Arrays.deepEquals(personArray1, personArray2));

As a result of recursively comparing the subelements, the two methods again have different results.

Finally, it is worth mentioning that the Objects.deepEquals method executes the Arrays.deepEquals method internally when it is called with two Object arrays:

    assertTrue(Objects.deepEquals(personArray1, personArray2));

5. Conclusion

In this quick tutorial, we learned that we should use the Arrays.deepEquals method when we want to compare the equality between two nested or multi-dimensional arrays of objects or primitive types.

As always, the full source code of the article is available over on GitHub.


Java Weekly, Issue 321


1. Spring and Java

>> Announcing: The NEW Spring Website! [spring.io]

A fresh, clean look to the official Spring site, with a renewed focus on making the site feel more welcoming and comfortable.

>> Micro optimising class.getName [alblue.bandlem.com]

A deep dive into bytecode optimization and inline methods.

>> Java Records: A Closer Look [alidg.me]

And an under-the-hood look at the Java Records preview feature.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Stream processing for computing approximations [blog.frankel.ch]

And a way to use Hazelcast Jet's stream model to calculate mathematical approximations for infinite series.

Also worth reading:

3. Musings

>> Mob programming and shared everything [blog.codecentric.de]

And while progress may seem slow at first, these techniques pay huge dividends in the long run.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Who Is The Fool [dilbert.com]

>> Buy An Adapter [dilbert.com]

>> Before Or After Firing [dilbert.com]

5. Pick of the Week

>> Productivity [samaltman.com]

Getting Started with CRaSH


1. Introduction

CRaSH is a reusable shell that deploys in a JVM and helps us interact with the JVM.

In this tutorial, we'll see how to install CRaSH as a standalone application. Also, we'll embed it in a Spring web application and create some custom commands.

2. Standalone Installation

Let's install CRaSH as a standalone application by downloading the distribution from CRaSH's official website.

The CRaSH directory structure contains three important directories cmd, bin, and conf:

 

The bin directory contains the standalone CLI scripts to start CRaSH.

The cmd directory holds all the commands that it supports out of the box. Also, this is where we can put our custom commands. We'll look into that in the later sections of this article.

To start the CLI, we go to bin and start the standalone instance with either the crash.bat or crash.sh:

3. Embedding CRaSH in a Spring Web Application

Let's embed CRaSH into a Spring web application. First, we'll need some dependencies:

<dependency>
    <groupId>org.crashub</groupId>
    <artifactId>crash.embed.spring</artifactId>
    <version>1.3.2</version>
</dependency>
<dependency>
    <groupId>org.crashub</groupId>
    <artifactId>crash.cli</artifactId>
    <version>1.3.2</version>
</dependency>
<dependency>
    <groupId>org.crashub</groupId>
    <artifactId>crash.connectors.telnet</artifactId>
    <version>1.3.2</version>
</dependency>

We can check for the latest version in Maven Central.

CRaSH supports both Java and Groovy, so we'll need to add Groovy for the Groovy scripts to work:

<dependency>
    <groupId>org.codehaus.groovy</groupId>
    <artifactId>groovy</artifactId>
    <version>3.0.0-rc-3</version>
</dependency>

Its latest version is also in Maven Central.

Next, we need to add a listener in our web.xml:

<listener>
    <listener-class>org.crsh.plugin.WebPluginLifeCycle</listener-class>
</listener>

With the listener now ready, let's add properties and commands in the WEB-INF directory. We'll create a directory named crash and put commands and properties inside it:

Once we deploy the application, we can connect to the shell via telnet:

telnet localhost 5000

We can change the telnet port in the crash.properties file using crash.telnet.port property.

Alternatively, we can also create a Spring bean to configure the properties and override the command's directory locations:

<bean class="org.crsh.spring.SpringWebBootstrap">
    <property name="cmdMountPointConfig" value="war:/WEB-INF/crash/commands/" />
    <property name="confMountPointConfig" value="war:/WEB-INF/crash/" />
    <property name="config">
        <props>
             <prop key="crash.telnet.port">5000</prop>
         </props>
     </property>
</bean>

4. CRaSH and Spring Boot

Spring Boot used to offer CRaSH as an embedded shell, via its remote shell starter:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-remote-shell</artifactId>
</dependency>

Unfortunately, the support is now deprecated. If we still want to use the shell along with a Spring Boot application, we can use the attach mode. In the attach mode, CRaSH hooks into the JVM of the Spring Boot application instead of its own:

crash.sh <PID>

Here, <PID> is the process ID of that JVM instance. We can retrieve the process IDs of the JVMs running on a host using the jps command.

5. Creating a Custom Command

Now, let's create a custom command for our CRaSH shell. There are two ways to create and use commands: with Groovy and with Java. We'll look at each in turn.

5.1. Command with Groovy

First, let's create a simple command with Groovy:

// CRaSH CLI annotations
import org.crsh.cli.Command
import org.crsh.cli.Option
import org.crsh.cli.Usage

class message {
	
    @Usage("show my own message")
    @Command
    Object main(@Usage("custom message") @Option(names=["m","message"]) String message) {
        if (message == null) {
            message = "No message given...";
        }
        return message;
    }
}

The @Command annotation marks the method as a command, @Usage is used to display the usage and parameters of the command, and finally, the @Option is for any parameters to be passed to the command.

Let's test the command:

5.2. Command with Java

Let's create the same command with Java:

import org.crsh.cli.Command;
import org.crsh.cli.Option;
import org.crsh.cli.Usage;
import org.crsh.command.BaseCommand;

public class message2 extends BaseCommand {
    @Usage("show my own message using java")
    @Command
    public Object main(@Usage("custom message") 
      @Option(names = { "m", "message" }) String message) {
        if (message == null) {
            message = "No message given...";
        }
        return message;
    }
}

The command is similar to that of Groovy, but here we need to extend the org.crsh.command.BaseCommand.

So, let's test again:

6. Conclusion

In this tutorial, we looked into installing CRaSH as a standalone application and embedding it in a Spring web application. We also created custom commands with both Groovy and Java.

As always, the code is available over on GitHub.

Difference Between Docker Images and Containers

1. Overview

Docker is a tool for creating, deploying, and running applications easily. It allows us to package our applications with all the dependencies, and distribute them as individual bundles. Docker guarantees that our application will run in the same way on every Docker instance.

When we start using Docker, there are two main concepts we need to be clear on — images and containers.

In this tutorial, we'll learn what they are and how they differ.

2. Docker Images

An image is a file that represents a packaged application with all the dependencies needed to run correctly. In other words, we could say that a Docker image is like a Java class.

Images are built as a series of layers. Layers are assembled on top of one another. So, what is a layer? Simply put, a layer is an image.

Let's say we want to create a Docker image of a Hello World Java application. The first thing we need to think about is what our application needs.

To start, it is a Java application, so we will need a JVM. OK, this seems easy, but what does a JVM need to run? It needs an Operating System. Therefore, our Docker image will have an Operating System layer, a JVM, and our Hello World application.
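
As a sketch, a Dockerfile for such an image could simply build on an existing JRE base image (the base image tag and file names here are illustrative):

# base image providing the OS and JVM layers
FROM openjdk:8-jre
# our application layer
COPY hello-world.jar /app/hello-world.jar
# command executed when a container starts
CMD ["java", "-jar", "/app/hello-world.jar"]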

A major advantage of Docker is its large community. If we want to build on top of an existing image, we can go to Docker Hub and check whether the image we need is available.

Let's say we want to create a database using PostgreSQL. We don't need to create a new PostgreSQL image from scratch. We just go to Docker Hub, search for postgres, which is the official Docker image name for PostgreSQL, choose the version we need, and run it.
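
For instance, pulling a specific version could look like this (the tag is just an example):

docker pull postgres:11.6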

Every image we create or pull from Docker Hub is stored in our filesystem and is identified by its name and tag. It can also be identified by its image ID.

Using the docker images command, we can view a list of images we have available in our filesystem:

$ docker images
REPOSITORY           TAG                 IMAGE ID            CREATED             SIZE
postgres             11.6                d3d96b1e5d48        4 weeks ago         332MB
mongo                latest              9979235fc504        6 weeks ago         364MB
rabbitmq             3-management        44c4867e4a8b        8 weeks ago         180MB
mysql                8.0.18              d435eee2caa5        2 months ago        456MB
jboss/wildfly        18.0.1.Final        bfc71fe5d7d1        2 months ago        757MB
flyway/flyway        6.0.8               0c11020ffd69        3 months ago        247MB
java                 8-jre               e44d62cf8862        3 years ago         311MB

3. Running Docker Images

An image is run using the docker run command with the image name and tag. Let's say we want to run the postgres 11.6 image:

docker run -d postgres:11.6

Notice we provided the -d option. This tells Docker to run the image in the background — also known as detached mode.

Using the docker ps command, we can check whether our container is running:

$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
3376143f0991        postgres:11.6       "docker-entrypoint.s…"   3 minutes ago       Up 3 minutes        5432/tcp            tender_heyrovsky

Notice the CONTAINER ID in the output above. Let's take a look at what a container is and how it is related to an image.

4. Docker Containers

A container is an instance of an image. Each container can be identified by its ID. Going back to our Java development analogy, we could say that a container is like an instance of a class.

Docker defines seven states for a container: created, restarting, running, removing, paused, exited, and dead. This is important to know. Since a container is just an instance of the image, it doesn't need to be running.

Now let's think again about the run command we have seen above. We have said it is used to run images, but that is not totally accurate. The truth is that the run command is used to create and start a new container of the image.

One big advantage is that containers are like lightweight VMs. Their behaviors are completely isolated from each other. This means that we can run multiple containers of the same image, having each one in a different state with different data and different IDs.

Being able to run multiple containers of the same image at the same time is a great advantage because it allows us an easy way of scaling applications. For example, let's think about microservices. If every service is packaged as a Docker image, then that means that new services can be deployed as containers on demand.
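
For instance, starting two containers of the same image is just a matter of running it twice under different names (the image and container names here are hypothetical):

docker run -d --name payment-service-1 payment-service:1.0
docker run -d --name payment-service-2 payment-service:1.0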

5. Container Lifecycle

Earlier, we mentioned the seven states of a container. Now, let's see how we can use the docker command-line tool to move a container through the different lifecycle states.

Starting up a new container requires us to create it and then start it. This means it has to go through the created state before it can be running. We can do this by creating and starting the container explicitly:

docker container create <image_name>:<tag>
docker container start <container_id>

Or we can easily do this with the run command:

docker run <image_name>:<tag>

We can pause a running container and then put it back in the running state:

docker pause <container_id>
docker unpause <container_id>

A paused container will show “Paused” as the status when we check the processes:

$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                  PORTS               NAMES
9bef2edcad7b        postgres:11.6       "docker-entrypoint.s…"   5 minutes ago       Up 4 minutes (Paused)   5432/tcp            tender_heyrovsky

We can also stop a running container and then start it again:

docker stop <container_id>
docker start <container_id>

And finally, we can remove a container:

docker container rm <container_id>

Only containers in the stopped or created state can be removed.

For more information regarding the Docker commands, we can refer to the Docker Command Line Reference.

6. Conclusion

In this article, we discussed Docker images and containers and how they differ. Images describe the applications and how they can be run. Containers are the image instances, where multiple containers of the same image can be run, each in a different state.

We have also talked about the containers' lifecycle and learned the basic commands to manage them.

Now that we know the basics, it's time to learn more about the exciting world of Docker and to start increasing our knowledge!

Modifying the Response Body in a Zuul Filter

1. Overview

In this tutorial, we're going to look at Netflix Zuul's post filter. Netflix Zuul is an edge service provider that sits between an API client and a plethora of microservices.

The post filter runs before the final response is sent to the API client. This gives us the opportunity to act on the raw response body and do things like logging and any other data transformations we desire.

2. Dependencies

We're going to be working with Zuul in a Spring Cloud environment. So let's add the following to the dependency management section of our pom.xml:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>Hoxton.SR1</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-netflix-zuul</artifactId>
        <version>2.2.1.RELEASE</version>
    </dependency>
</dependencies>

The latest version of the Spring Cloud dependencies and spring-cloud-starter-netflix-zuul can be found on Maven Central.

3. Creating a Post Filter

A post filter is a regular class that extends the abstract class ZuulFilter and has a filter type of post:

public class ResponseLogFilter extends ZuulFilter {
    
    @Override
    public String filterType() {
        return POST_TYPE;
    }

    @Override
    public int filterOrder() {
        return 0;
    }

    @Override
    public boolean shouldFilter() {
        return true;
    }

    @Override
    public Object run() throws ZuulException {
        return null;
    }
}
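
For Zuul to pick the filter up, it also has to be exposed as a Spring bean. A minimal sketch (the configuration class name is ours):

@Configuration
public class ZuulFilterConfig {

    // Registering the filter as a bean is enough for Zuul to add it to its filter chain
    @Bean
    public ResponseLogFilter responseLogFilter() {
        return new ResponseLogFilter();
    }
}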

Please note that we returned POST_TYPE in the filterType() method. This is what actually differentiates this filter from other types.

Another important method to take note of is the shouldFilter() method. We're returning true here since we want the filter to be run in the filter chain.

In a production-ready application, we may externalize this configuration for better flexibility.
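
For example, we could drive shouldFilter() from a property. This is only a sketch: the property name is purely illustrative, and it assumes the filter is a Spring-managed bean:

// inside ResponseLogFilter, using org.springframework.beans.factory.annotation.Value
@Value("${response.log.filter.enabled:true}")
private boolean enabled;

@Override
public boolean shouldFilter() {
    return enabled;
}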

Let's take a closer look at the run() method, which gets called whenever our filter runs.

4. Modifying the Response Body

As previously stated, Zuul sits between microservices and their clients. Consequently, it can access the response body and optionally modify it before passing it down.

For example, we can read the response body and log its content:

@Override
public Object run() throws ZuulException {

    RequestContext context = RequestContext.getCurrentContext();
    try (final InputStream responseDataStream = context.getResponseDataStream()) {

        if(responseDataStream == null) {
            logger.info("BODY: {}", "");
            return null;
        }

        String responseData = CharStreams.toString(new InputStreamReader(responseDataStream, "UTF-8"));
        logger.info("BODY: {}", responseData);

        context.setResponseBody(responseData);
    }
    catch (Exception e) {
        throw new ZuulException(e, INTERNAL_SERVER_ERROR.value(), e.getMessage());
    }

    return null;
}

The snippet above shows the full implementation of the run() method in the ResponseLogFilter we created earlier. First, we obtain an instance of the RequestContext. From that context, we get the response data InputStream in a try-with-resources construct.

Note that the response input stream can be null, which is why we check for it. This can be due to service timeout or other unexpected exceptions on the microservice. In our case, we just log an empty response body when this occurs.

Next, we read the input stream into a String, which we then log.

Most importantly, we add the response body back to the context for further processing using context.setResponseBody(responseData). If we omit this step, we'll get an IOException along the following lines: java.io.IOException: Attempted read on a closed stream.

5. Conclusion

In conclusion, post filters in Zuul offer an opportunity for developers to do something with the service response before sending it to the client.

However, we have to be cautious not to expose sensitive information accidentally which can lead to a breach. Moreover, we should be wary of doing long-running tasks within our post filter as it can add considerably to the response time.

As usual, the source code is available over on GitHub.

Introduction to the jcabi-aspects AOP Annotations Library

1. Overview

In this quick tutorial, we'll explore the jcabi-aspects Java library, a collection of handy annotations that modify the behavior of a Java application using aspect-oriented programming (AOP).

The jcabi-aspects library provides annotations like @Async, @Loggable, and @RetryOnFailure, that are useful in performing certain operations efficiently using AOP. At the same time, they help to reduce the amount of boilerplate code in our application. The library requires AspectJ to weave the aspects into compiled classes.

2. Setup

First, we'll add the latest jcabi-aspects Maven dependency to the pom.xml:

<dependency>
    <groupId>com.jcabi</groupId>
    <artifactId>jcabi-aspects</artifactId>
    <version>0.22.6</version>
</dependency>

The jcabi-aspects library requires AspectJ runtime support to work. Therefore, let's add the aspectjrt Maven dependency:

<dependency>
    <groupId>org.aspectj</groupId>
    <artifactId>aspectjrt</artifactId>
    <version>1.9.2</version>
    <scope>runtime</scope>
</dependency>

Next, let's add the jcabi-maven-plugin, which weaves the binaries with AspectJ aspects at compile time. The plugin provides the ajc goal that does the automatic weaving:

<plugin>
    <groupId>com.jcabi</groupId>
    <artifactId>jcabi-maven-plugin</artifactId>
    <version>0.14.1</version>
    <executions>
        <execution>
            <goals>
                <goal>ajc</goal>
            </goals>
        </execution>
    </executions>
    <dependencies>
        <dependency>
            <groupId>org.aspectj</groupId>
            <artifactId>aspectjtools</artifactId>
            <version>1.9.2</version>
        </dependency>
        <dependency>
            <groupId>org.aspectj</groupId>
            <artifactId>aspectjweaver</artifactId>
            <version>1.9.2</version>
        </dependency>
    </dependencies>
</plugin>

Last, let's compile the classes using the Maven command:

mvn clean package

The logs generated by the jcabi-maven-plugin at compilation will look like:

[INFO] --- jcabi-maven-plugin:0.14.1:ajc (default) @ jcabi ---
[INFO] jcabi-aspects 0.18/55a5c13 started new daemon thread jcabi-loggable for watching of 
  @Loggable annotated methods
[INFO] Unwoven classes will be copied to /jcabi/target/unwoven
[INFO] Created temp dir /jcabi/target/jcabi-ajc
[INFO] jcabi-aspects 0.18/55a5c13 started new daemon thread jcabi-cacheable for automated
  cleaning of expired @Cacheable values
[INFO] ajc result: 11 file(s) processed, 0 pointcut(s) woven, 0 error(s), 0 warning(s)

Now that we know how to add the library to our project, let's see some of its annotations in action.

3. @Async

The @Async annotation allows executing a method asynchronously. However, it is only compatible with methods that return void or a Future type.

Let's write a displayFactorial method that displays the factorial of a number asynchronously:

@Async
public static void displayFactorial(int number) {
    long result = factorial(number);
    System.out.println(result);
}

Then, we'll recompile the class to let Maven weave the aspect for the @Async annotation. Last, we can run our example:

[main] INFO com.jcabi.aspects.aj.NamedThreads - 
jcabi-aspects 0.22.6/3f0a1f7 started new daemon thread jcabi-async for Asynchronous method execution

As we can see from the log, the library creates a separate daemon thread jcabi-async to perform all asynchronous operations.

Now, let's use the @Async annotation to return a Future instance:

@Async
public static Future<Long> getFactorial(int number) {
    Future<Long> factorialFuture = CompletableFuture.supplyAsync(() -> factorial(number));
    return factorialFuture;
}

If we use @Async on a method that does not return void or Future, an exception will be thrown at runtime when we invoke it.
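
As a sketch, a method like the following would compile but is expected to fail at runtime when invoked, because its return type is neither void nor Future:

@Async
public static String getGreeting() {
    // not allowed: @Async requires a void or Future return type
    return "hello";
}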

4. @Cacheable

The @Cacheable annotation allows caching a method's results to avoid duplicate calculations.

For instance, let's write a cacheExchangeRates method that returns the latest exchange rates:

@Cacheable(lifetime = 2, unit = TimeUnit.SECONDS)
public static String cacheExchangeRates() {
    String result = null;
    try {
        URL exchangeRateUrl = new URL("https://api.exchangeratesapi.io/latest");
        URLConnection con = exchangeRateUrl.openConnection();
        BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));
        result = in.readLine();
    } catch (IOException e) {
        e.printStackTrace();
    }
    return result;
}

Here, the cached result will have a lifetime of 2 seconds. Similarly, we can make a result cacheable forever by using:

@Cacheable(forever = true)

Once we recompile the class and execute it again, the library will log the details of two daemon threads that handle the caching mechanism:

[main] INFO com.jcabi.aspects.aj.NamedThreads - 
jcabi-aspects 0.22.6/3f0a1f7 started new daemon thread jcabi-cacheable-clean for automated 
  cleaning of expired @Cacheable values
[main] INFO com.jcabi.aspects.aj.NamedThreads - 
jcabi-aspects 0.22.6/3f0a1f7 started new daemon thread jcabi-cacheable-update for async 
  update of expired @Cacheable values

When we invoke our cacheExchangeRates method, the library will cache the result and log the details of the execution:

[main] INFO com.baeldung.jcabi.JcabiAspectJ - #cacheExchangeRates(): 
'{"rates":{"CAD":1.458,"HKD":8.5039,"ISK":137.9,"P..364..:4.5425},"base":"EUR","date":"2020-02-10"}'
  cached in 560ms, valid for 2s

So, if invoked again (within 2 seconds), cacheExchangeRates will return the result from the cache:

[main] INFO com.baeldung.jcabi.JcabiAspectJ - #cacheExchangeRates(): 
'{"rates":{"CAD":1.458,"HKD":8.5039,"ISK":137.9,"P..364..:4.5425},"base":"EUR","date":"2020-02-10"}'
  from cache (hit #1, 563ms old)

If the method throws an exception, the result won't be cached.

5. @Loggable

The library provides the @Loggable annotation for simple logging using the SLF4J logging facility.

Let's add the @Loggable annotation to our displayFactorial and cacheExchangeRates methods:

@Loggable
@Async
public static void displayFactorial(int number) {
    ...
}

@Loggable
@Cacheable(lifetime = 2, unit = TimeUnit.SECONDS)
public static String cacheExchangeRates() {
    ...
}

Then, after recompilation, the annotation will log the method name, return value, and execution time:

[main] INFO com.baeldung.jcabi.JcabiAspectJ - #displayFactorial(): in 1.16ms
[main] INFO com.baeldung.jcabi.JcabiAspectJ - #cacheExchangeRates(): 
'{"rates":{"CAD":1.458,"HKD":8.5039,"ISK":137.9,"P..364..:4.5425},"base":"EUR","date":"2020-02-10"}'
  in 556.92ms

6. @LogExceptions

Similar to @Loggable, we can use the @LogExceptions annotation to log only the exceptions thrown by a method.

Let's use @LogExceptions on a method divideByZero that will throw an ArithmeticException:

@LogExceptions
public static void divideByZero() {
    int x = 1/0;
}

The execution of the method will log the exception and also throw the exception:

[main] WARN com.baeldung.jcabi.JcabiAspectJ - java.lang.ArithmeticException: / by zero
    at com.baeldung.jcabi.JcabiAspectJ.divideByZero_aroundBody12(JcabiAspectJ.java:77)

java.lang.ArithmeticException: / by zero
    at com.baeldung.jcabi.JcabiAspectJ.divideByZero_aroundBody12(JcabiAspectJ.java:77)
    ...

7. @Quietly

The @Quietly annotation is similar to @LogExceptions, except that it doesn't propagate any exception thrown by the method. Instead, it just logs them.

Let's add the @Quietly annotation to our divideByZero method:

@Quietly
public static void divideByZero() {
    int x = 1/0;
}

Hence, the annotation will swallow the exception and only log the details of the exception that would've otherwise been thrown:

[main] WARN com.baeldung.jcabi.JcabiAspectJ - java.lang.ArithmeticException: / by zero
    at com.baeldung.jcabi.JcabiAspectJ.divideByZero_aroundBody12(JcabiAspectJ.java:77)

The @Quietly annotation is only compatible with methods that have a void return type.

8. @RetryOnFailure

The @RetryOnFailure annotation allows us to repeat the execution of a method in the event of an exception or failure.

For example, let's add the @RetryOnFailure annotation to our divideByZero method:

@RetryOnFailure(attempts = 2)
@Quietly
public static void divideByZero() {
    int x = 1/0;
}

So, if the method throws an exception, the AOP advice will attempt to execute it twice:

[main] WARN com.baeldung.jcabi.JcabiAspectJ - 
#divideByZero(): attempt #1 of 2 failed in 147µs with java.lang.ArithmeticException: / by zero
[main] WARN com.baeldung.jcabi.JcabiAspectJ - 
#divideByZero(): attempt #2 of 2 failed in 110µs with java.lang.ArithmeticException: / by zero

Also, we can define other parameters like delay, unit, and types, while declaring the @RetryOnFailure annotation:

@RetryOnFailure(attempts = 3, delay = 5, unit = TimeUnit.SECONDS, 
  types = {java.lang.NumberFormatException.class})

In this case, the AOP advice will attempt the method thrice, with a delay of 5 seconds between attempts, only if the method throws a NumberFormatException.

9. @UnitedThrow

The @UnitedThrow annotation allows us to catch all exceptions thrown by a method and wrap them in an exception we specify. Thus, it unifies the exceptions thrown by the method.

For instance, let's create a method processFile that throws IOException and InterruptedException:

@UnitedThrow(IllegalStateException.class)
public static void processFile() throws IOException, InterruptedException {
    BufferedReader reader = new BufferedReader(new FileReader("baeldung.txt"));
    reader.readLine();
    // additional file processing
}

Here, we've added the annotation to wrap all exceptions into IllegalStateException. Therefore, when the method is invoked, the stack trace of the exception will look like:

java.lang.IllegalStateException: java.io.FileNotFoundException: baeldung.txt (No such file or directory)
    at com.baeldung.jcabi.JcabiAspectJ.processFile(JcabiAspectJ.java:92)
    at com.baeldung.jcabi.JcabiAspectJ.main(JcabiAspectJ.java:39)
Caused by: java.io.FileNotFoundException: baeldung.txt (No such file or directory)
    at java.io.FileInputStream.open0(Native Method)
    ...

10. Conclusion

In this article, we've explored the jcabi-aspects Java library.

First, we've seen a quick way to set up the library in our Maven project using jcabi-maven-plugin.

Then, we examined a few handy annotations, like @Async, @Loggable, and @RetryOnFailure, that modify the behavior of the Java application using AOP.

As usual, all the code implementations are available over on GitHub.
