
Extending an Array’s Length


1. Overview

In this tutorial, we’ll take a look at the different ways in which we can extend a Java array.

Since arrays are a contiguous block of memory, the answer may not be readily apparent, but let’s unpack that now.

2. Using Arrays.copyOf

First, let’s look at Arrays.copyOf. We’ll copy the array and add a new element to the copy:

public Integer[] addElementUsingArraysCopyOf(Integer[] srcArray, int elementToAdd) {
    Integer[] destArray = Arrays.copyOf(srcArray, srcArray.length + 1);
    destArray[destArray.length - 1] = elementToAdd;
    return destArray;
}

The way Arrays.copyOf works is that it takes the srcArray and the desired length of the new array; it then internally creates a new array of that size and copies over as many elements from the source as will fit.

One thing to notice is that when the length argument is greater than the size of the source array, Arrays.copyOf will fill the extra elements in the destination array with null.

Depending on the data type, the filling behavior will differ. For example, if we use a primitive numeric type in place of Integer, then the extra elements are filled with zeros. In the case of char, Arrays.copyOf will fill the extra elements with the null character '\u0000', and in the case of boolean, with false.
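As a quick illustration of this padding behavior (a minimal sketch using the primitive overloads of Arrays.copyOf):

Integer[] boxed = Arrays.copyOf(new Integer[] { 1, 2 }, 4);   // [1, 2, null, null]
int[] primitives = Arrays.copyOf(new int[] { 1, 2 }, 4);      // [1, 2, 0, 0]
boolean[] flags = Arrays.copyOf(new boolean[] { true }, 3);   // [true, false, false]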

3. Using ArrayList

The next way we’ll look at is using ArrayList.

We’ll first convert the array to an ArrayList and then add the element. Then we’ll convert the ArrayList back to an array:

public Integer[] addElementUsingArrayList(Integer[] srcArray, int elementToAdd) {
    Integer[] destArray = new Integer[srcArray.length + 1];
    ArrayList<Integer> arrayList = new ArrayList<>(Arrays.asList(srcArray));
    arrayList.add(elementToAdd);
    return arrayList.toArray(destArray);
}

Note that we’ve passed the srcArray by converting it to a Collection. The srcArray will populate the underlying array in the ArrayList.

Another thing to note is that we've passed the destination array as an argument to toArray. This method copies the list's elements into destArray and returns it.
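It's worth knowing that toArray reuses the array we pass in when it's large enough, and otherwise allocates a new one of the same runtime type. A quick sketch of both cases:

Integer[] bigEnough = arrayList.toArray(new Integer[arrayList.size()]); // fills and returns the passed array
Integer[] allocated = arrayList.toArray(new Integer[0]);                // too small, so a new array is returned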

4. Using System.arraycopy

Finally, we’ll take a look at System.arraycopy, which is quite similar to Arrays.copyOf:

public Integer[] addElementUsingSystemArrayCopy(Integer[] srcArray, int elementToAdd) {
    Integer[] destArray = new Integer[srcArray.length + 1];
    System.arraycopy(srcArray, 0, destArray, 0, srcArray.length);
    destArray[destArray.length - 1] = elementToAdd;
    return destArray;
}

One interesting fact is that Arrays.copyOf internally uses this method.

Here, we copy the elements from the srcArray to the destArray and then add the new element to the destArray.
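For example, each of the three helpers behaves the same way from the caller's perspective (a quick usage sketch):

Integer[] src = { 1, 2, 3 };
Integer[] result = addElementUsingSystemArrayCopy(src, 4);
// result is [1, 2, 3, 4], while src itself is left untouched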

5. Performance

One thing common in all the solutions is that we have to create a new array one way or another. The reason for it lies in how arrays are allocated in memory. An array holds a contiguous block of memory for super-fast lookup, which is why we cannot simply resize it.

This, of course, has a performance impact, especially for large arrays. This is why ArrayList over-allocates, effectively reducing the number of times the JVM needs to reallocate memory.
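For example, when we know the target size up front, we can pre-size an ArrayList so that no intermediate reallocation happens at all (a small illustration):

List<Integer> list = new ArrayList<>(1_000_000); // a single backing-array allocation
for (int i = 0; i < 1_000_000; i++) {
    list.add(i); // never triggers a resize
}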

But, if we are doing a lot of inserts, an array might not be the right data structure, and we should consider a LinkedList.

6. Conclusion

In this article, we have explored the different ways of adding elements to the end of an array.

The entire code is available over on GitHub.


Java Weekly, Issue 279


Here we go…

1. Spring and Java

>> Analyzing dependencies in IntelliJ IDEA [vojtechruzicka.com]

A couple of solid Maven tools available to IntelliJ IDEA Ultimate users — the Dependency Structure Matrix and dependency graph diagrams.

>> Service Virtualization Meets Java: Hoverfly Tutorial [infoq.com]

A JUnit 5 integration with Hoverfly can help you test communication points in your microservices architecture. Very cool.

>> Toppling the Giant: Kotlin vs. Java [blog.scottlogic.com]

And if you’re considering switching from Java, you’ll find value in this summary of Kotlin’s pros and cons.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

>> Test Automation: Prevention or Cure? [infoq.com]

And as one team learned, breaking up work into smaller batches proved more effective than test automation in speeding up their software delivery cycle.

>> How to check if your Reserved Instances are used [advancedweb.hu]

Since this doesn’t come with AWS out-of-the-box, here’s a way you can keep tabs on them through budget alerts.

Also worth reading:

3. Comics

And my favorite Dilberts of the week:

>> Two Step Reorg [dilbert.com]

>> Dogbert Starts a Podcast [dilbert.com]

>> Objective Reality [dilbert.com]

4. Pick of the Week

>> Why software projects take longer than you think – a statistical model [erikbern.com]

Guide to Maven Profiles


1. Overview

Maven profiles can be used to create customized build configurations, like targeting a level of test granularity or a specific deployment environment.

In this tutorial, we’ll learn how to work with Maven profiles.

2. A Basic Example

Normally when we run mvn package, the unit tests are executed as well. But what if we want to quickly package the artifact and run it to see if it works?

First, we’ll create a no-tests profile which sets the maven.test.skip property to true:

<profile>
    <id>no-tests</id>
    <properties>
        <maven.test.skip>true</maven.test.skip>
    </properties>
</profile>

Next, we’ll execute the profile by running the mvn package -Pno-tests command. Now the artifact is created and the tests are skipped. In this case, the mvn package -Dmaven.test.skip command would have been easier.

However, this was just an introduction to Maven profiles. Let’s take a look at some more complex setups.

3. Declaring Profiles

In the previous section, we saw how to create one profile. We can configure as many profiles as we want by giving them unique ids.

Let’s say we wanted to create a profile that only ran our integration tests and another for a set of mutation tests.

We would begin by specifying an id for each one in our pom.xml file:

<profiles>
    <profile>
        <id>integration-tests</id>
    </profile>
    <profile>
        <id>mutation-tests</id>
    </profile>
</profiles>

Within each profile element, we can configure many elements such as dependencies, plugins, resources, finalName.

So, for the example above, we could add plugins and their dependencies separately for integration-tests and mutation-tests.

Separating tests into profiles can make the default build faster by having it focus, say, on just the unit tests.

3.1. Profile Scope

Now, we just placed these profiles in our pom.xml file, which declares them only for our project.

But, in Maven 3, we can actually add profiles to any of three locations:

  1. Project-specific profiles go into the project’s pom.xml file
  2. User-specific profiles go into the user’s settings.xml file
  3. Global profiles go into the global settings.xml file

Note that Maven 2 did support a fourth location, but this was removed in Maven 3.

We try to configure profiles in the pom.xml whenever possible. The reason is that we want to use the profiles both on our development machines and on the build machines. Using the settings.xml is more difficult and error-prone as we have to distribute it across build environments ourselves.

4. Activating Profiles

After we create one or more profiles we can start using them, or in other words, activating them.

4.1. Seeing Which Profiles Are Active

Let’s use the help:active-profiles goal to see which profiles are active in our default build:

mvn help:active-profiles

Actually, since we haven’t activated anything yet, we get:

The following profiles are active:

Well, nothing.

We’ll activate them in just a moment. But quickly, another way to see what is activated is to include the maven-help-plugin in our pom.xml and tie the active-profiles goal to the compile phase:

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-help-plugin</artifactId>
            <version>3.2.0</version>
            <executions>
                <execution>
                    <id>show-profiles</id>
                    <phase>compile</phase>
                    <goals>
                        <goal>active-profiles</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

Now, let’s get to using them! We’ll look at a few different ways.

4.2. Using -P

Actually, we already saw one way at the beginning, which is that we can activate profiles with the -P argument.

So let’s begin by enabling the integration-tests profile:

mvn package -P integration-tests

If we verify the active profiles with the maven-help-plugin or the mvn help:active-profiles -P integration-tests command, we’ll get the following result:

The following profiles are active:

 - integration-tests

In case we want to activate multiple profiles at the same time, we use a comma-separated list of profiles:

mvn package -P integration-tests,mutation-tests

4.3. Active by Default

If we always want to execute a profile, we can make one active by default:

<profile>
    <id>integration-tests</id>
    <activation>
        <activeByDefault>true</activeByDefault>
    </activation>
</profile>

Then, we can run mvn package without specifying the profiles, and we can verify that the integration-tests profile is active.

However, if we run the Maven command and enable another profile, then the activeByDefault profile is skipped. So, when we run mvn package -P mutation-tests, only the mutation-tests profile is active.

When we activate in other ways, the activeByDefault profile is also skipped as we’ll see in the next sections.

4.4. Based on a Property

We can activate profiles on the command line. However, sometimes it’s more convenient if they’re activated automatically. For instance, we can base the activation on a -D system property:

<profile>
    <id>active-on-property-environment</id>
    <activation>
        <property>
            <name>environment</name>
        </property>
    </activation>
</profile>

We now activate the profile with the mvn package -Denvironment command.

It’s also possible to activate a profile if a property is not present:

<property>
    <name>!environment</name>
</property>

Or we can activate the profile if the property has a specific value:

<property>
    <name>environment</name>
    <value>test</value>
</property>

We can now run the profile with mvn package -Denvironment=test.

Lastly, we can activate the profile if the property has a value other than the specified value:

<property>
    <name>environment</name>
    <value>!test</value>
</property>

4.5. Based on the JDK Version

Another option is to enable a profile based on the JDK running on the machine. In this case, we want to enable the profile if the JDK version starts with 11:

<profile>
    <id>active-on-jdk-11</id>
    <activation>
        <jdk>11</jdk>
    </activation>
</profile>

We can also use ranges for the JDK version as explained in Maven Version Range Syntax.

4.6. Based on the Operating System

Alternatively, we can activate the profile based on some operating system information.

And if we aren’t sure of that, we can first use the mvn enforcer:display-info command which gives the following output on my machine:

Maven Version: 3.5.4
JDK Version: 11.0.2 normalized as: 11.0.2
OS Info: Arch: amd64 Family: windows Name: windows 10 Version: 10.0

After that, we can configure a profile which is activated only on Windows 10:

<profile>
    <id>active-on-windows-10</id>
    <activation>
        <os>
            <name>windows 10</name>
            <family>Windows</family>
            <arch>amd64</arch>
            <version>10.0</version>
        </os>
    </activation>
</profile>

4.7. Based on a File

Another option is to run a profile if a file exists or is missing.

So, let’s create a test profile that only executes if the testreport.html is not yet present:

<activation>
    <file>
        <missing>target/testreport.html</missing>
    </file>
</activation>

5. Deactivating a Profile

We’ve seen many ways to activate profiles, but sometimes we need to disable one as well.

To disable a profile, we can prefix its id with ‘!’ or ‘-’.

So, to disable the active-on-jdk-11 profile we execute the mvn compile -P -active-on-jdk-11 command.

6. Conclusion

In this article, we’ve seen how to work with Maven profiles, so we can create different build configurations.

The profiles help to execute specific elements of the build when we need them. This optimizes our build process and helps to give faster feedback to developers.

Feel free to have a look at the finished pom.xml file over on GitHub.

Convert Time to Milliseconds in Java


1. Overview

In this quick tutorial, we’ll illustrate multiple ways of converting time into Unix-epoch milliseconds in Java.

More specifically, we’ll use:

  • Core Java’s java.util.Date and Calendar
  • Java 8’s Date and Time API
  • Joda-Time library

2. Core Java

2.1. Using Date

Firstly, let’s define a millis property holding a random value of milliseconds:

long millis = 1556175797428L; // April 25, 2019 7:03:17.428 UTC

We’ll use this value to initialize our various objects and verify our results.

Next, let’s start with a Date object:

Date date = // implementation details

Now, we’re ready to convert the date into milliseconds by simply invoking the getTime() method:

Assert.assertEquals(millis, date.getTime());

2.2. Using Calendar

Likewise, if we have a Calendar object, we can use the getTimeInMillis() method:

Calendar calendar = // implementation details
Assert.assertEquals(millis, calendar.getTimeInMillis());
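Both snippets above elide the construction details; one minimal way to initialize these objects from the millis value (a sketch, assuming we want them to represent that same timestamp) is:

Date date = new Date(millis);

Calendar calendar = Calendar.getInstance();
calendar.setTimeInMillis(millis);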

3. Java 8 Date Time API

3.1. Using Instant

Simply put, Instant is a point in Java’s epoch timeline.

We can get the epoch time in milliseconds from the Instant:

java.time.Instant instant = // implementation details
Assert.assertEquals(millis, instant.toEpochMilli());

As a result, the toEpochMilli() method returns the same number of milliseconds as we defined earlier.

3.2. Using LocalDateTime

Similarly, we can use Java 8’s Date and Time API to convert a LocalDateTime into milliseconds:

LocalDateTime localDateTime = // implementation details
ZonedDateTime zdt = ZonedDateTime.of(localDateTime, ZoneId.systemDefault());
Assert.assertEquals(millis, zdt.toInstant().toEpochMilli());

First, we combined our LocalDateTime with the system default time zone to obtain a ZonedDateTime. After that, we used the toEpochMilli() method to convert the ZonedDateTime into milliseconds.

As we know, LocalDateTime doesn’t contain information about the time zone. In other words, we can’t get milliseconds directly from a LocalDateTime instance.
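As with the earlier snippets, the construction details are elided; one way to initialize both objects from our millis value (a minimal sketch) is:

Instant instant = Instant.ofEpochMilli(millis);

LocalDateTime localDateTime = LocalDateTime
  .ofInstant(Instant.ofEpochMilli(millis), ZoneId.systemDefault());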

4. Joda-Time

While Java 8 adds much of Joda-Time’s functionality, we may want to use this option if we are on Java 7 or earlier.

4.1. Using Instant

Firstly, we can obtain the epoch milliseconds from a Joda-Time Instant instance using the getMillis() method:

Instant jodaInstant = // implementation details
Assert.assertEquals(millis, jodaInstant.getMillis());

4.2. Using DateTime

Additionally, if we have a Joda-Time DateTime instance:

DateTime jodaDateTime = // implementation details

Then we can retrieve the milliseconds with the getMillis() method:

Assert.assertEquals(millis, jodaDateTime.getMillis());
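Again, assuming both Joda-Time objects are built from the same millis value, their constructors accept epoch milliseconds directly (a sketch):

Instant jodaInstant = new Instant(millis);
DateTime jodaDateTime = new DateTime(millis);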

5. Conclusion

In conclusion, this article demonstrates how to convert time into milliseconds in Java.

Finally, as always, the complete code for this article is available over on GitHub.

Writing Clojure Webapps with Ring


1. Introduction

Ring is a library for writing web applications in Clojure. It supports everything needed to write fully-featured web apps and has a thriving ecosystem to make it even more powerful.

In this tutorial, we’ll give an introduction to Ring, and show some of the things that we can achieve with it.

Ring isn’t a framework designed for creating REST APIs, like so many modern toolkits. It’s a lower-level framework to handle HTTP requests in general, with a focus on traditional web development. However, some libraries build on top of it to support many other desired application structures.

2. Dependencies

Before we can start working with Ring, we need to add it to our project. The minimum dependencies we need are ring-core and ring-jetty-adapter, which we can add to our Leiningen project:

  :dependencies [[org.clojure/clojure "1.10.0"]
                 [ring/ring-core "1.7.1"]
                 [ring/ring-jetty-adapter "1.7.1"]]

We can then add this to a minimal project:

(ns ring.core
  (:use ring.adapter.jetty))

(defn handler [request]
  {:status 200
   :headers {"Content-Type" "text/plain"}
   :body "Hello World"})

(defn -main
  [& args]
  (run-jetty handler {:port 3000}))

Here, we’ve defined a handler function – which we’ll cover soon – which always returns the string “Hello World”. Also, we’ve added our main function to use this handler – it’ll listen for requests on port 3000.

3. Core Concepts

Ring has a few core concepts around which everything builds: Requests, Responses, Handlers, and Middleware.

3.1. Requests

Requests are a representation of incoming HTTP requests. Ring represents a request as a map, allowing our Clojure application to interact with the individual fields easily. There’s a standard set of keys in this map, including but not limited to:

  • :uri – The full URI path.
  • :query-string – The full query string.
  • :request-method – The request method, one of :get, :head, :post, :put, :delete or :options.
  • :headers – A map of all the HTTP headers provided to the request.
  • :body – An InputStream representing the request body, if present.

Middleware may add more keys to this map as needed.

3.2. Responses

Similarly, responses are a representation of the outgoing HTTP responses. Ring also represents these as maps with three standard keys:

  • :status – The status code to send back
  • :headers – A map of all the HTTP headers to send back
  • :body – The optional body to send back

As before, Middleware may alter this between our handler producing it and the final result getting sent to the client.

Ring also provides some helpers to make building the responses easier.

The most basic of these is the ring.util.response/response function, which creates a simple response with a status code of 200 OK:

ring.core=> (ring.util.response/response "Hello")
{:status 200, :headers {}, :body "Hello"}

There are a few other methods that go along with this for common status codes – for example, bad-request, not-found and redirect:

ring.core=> (ring.util.response/bad-request "Hello")
{:status 400, :headers {}, :body "Hello"}
ring.core=> (ring.util.response/created "/post/123")
{:status 201, :headers {"Location" "/post/123"}, :body nil}
ring.core=> (ring.util.response/redirect "https://ring-clojure.github.io/ring/")
{:status 302, :headers {"Location" "https://ring-clojure.github.io/ring/"}, :body ""}

We also have the status method that will convert an existing response to any arbitrary status code:

ring.core=> (ring.util.response/status (ring.util.response/response "Hello") 409)
{:status 409, :headers {}, :body "Hello"}

We then have some methods to adjust other features of the response similarly – for example, content-type, header or set-cookie:

ring.core=> (ring.util.response/content-type (ring.util.response/response "Hello") "text/plain")
{:status 200, :headers {"Content-Type" "text/plain"}, :body "Hello"}
ring.core=> (ring.util.response/header (ring.util.response/response "Hello") "X-Tutorial-For" "Baeldung")
{:status 200, :headers {"X-Tutorial-For" "Baeldung"}, :body "Hello"}
ring.core=> (ring.util.response/set-cookie (ring.util.response/response "Hello") "User" "123")
{:status 200, :headers {}, :body "Hello", :cookies {"User" {:value "123"}}}

Note that the set-cookie method adds a whole new entry to the response map. For this to work, we need the wrap-cookies middleware to process the entry correctly.

3.3. Handlers

Now that we understand requests and responses, we can start to write our handler function to tie it together.

A handler is a simple function that takes the incoming request as a parameter and returns the outgoing response. What we do in this function is entirely up to our application, as long as it fits this contract.

At the very simplest, we could write a function that always returns the same response:

(defn handler [request] (ring.util.response/response "Hello"))

We can interact with the request as needed as well.

For example, we could write a handler to return the incoming IP Address:

(defn check-ip-handler [request]
    (ring.util.response/content-type
        (ring.util.response/response (:remote-addr request))
        "text/plain"))

3.4. Middleware

Middleware is a name that’s common in some languages but less so in the Java world. Conceptually they are similar to Servlet Filters and Spring Interceptors.

In Ring, middleware refers to simple functions that wrap the main handler and adjust some aspect of it. This could mean mutating the incoming request before it’s processed, mutating the outgoing response after it’s generated, or potentially doing nothing more than logging how long the processing took.

In general, middleware functions take the handler to wrap as their first parameter and return a new handler function with the new functionality.

The middleware can use as many other parameters as needed. For example, we could use the following to set the Content-Type header on every response from the wrapped handler:

(defn wrap-content-type [handler content-type]
  (fn [request]
    (let [response (handler request)]
      (assoc-in response [:headers "Content-Type"] content-type))))

Reading through it, we can see that we return a function that takes a request – this is the new handler. It then calls the provided handler and returns a mutated version of the response.

We can use this to produce a new handler by simply chaining them together:

(def app-handler (wrap-content-type handler "text/html"))

Clojure also offers a more natural way to chain many of these together – Threading Macros. These provide a way to list functions to call, each receiving the output of the previous one.

In particular, we want the Thread First macro, ->. This will allow us to call each middleware with the provided value as the first parameter:

(def app-handler
  (-> handler
      (wrap-content-type "text/html")
      wrap-keyword-params
      wrap-params))

This has then produced a handler that’s the original handler wrapped in three different middleware functions.

4. Writing Handlers

Now that we understand the components that make up a Ring application, we need to know what we can do with the actual handlers. These are the heart of the entire application and are where the majority of the business logic will go.

We can put whatever code we wish into these handlers, including database access or calling other services. Ring gives us some additional abilities for working directly with the incoming requests or outgoing responses that are very useful as well.

4.1. Serving Static Resources

One of the simplest functions that any web application can perform is to serve up static resources. Ring provides two middleware functions to make this easy – wrap-file and wrap-resource.

The wrap-file middleware takes a directory on the filesystem. If the incoming request matches a file in this directory then that file gets returned instead of calling the handler function:

(use 'ring.middleware.file)
(def app-handler (wrap-file your-handler "/var/www/public"))

In a very similar manner, the wrap-resource middleware takes a classpath prefix in which it looks for the files:

(use 'ring.middleware.resource)
(def app-handler (wrap-resource your-handler "public"))

In both cases, the wrapped handler function is only ever called if a file isn’t found to return to the client.

Ring also provides additional middleware to make these cleaner to use over the HTTP API:

(use 'ring.middleware.resource
     'ring.middleware.content-type
     'ring.middleware.not-modified)

(def app-handler
  (-> your-handler
      (wrap-resource "public")
      wrap-content-type
      wrap-not-modified))

The wrap-content-type middleware will automatically determine the Content-Type header to set, based on the extension of the requested filename. The wrap-not-modified middleware compares the If-Modified-Since header to the Last-Modified value to support HTTP caching, only returning the file if it’s needed.

4.2. Accessing Request Parameters

When processing a request, there are some important ways that the client can provide information to the server. These include query string parameters, included in the URL, and form parameters, submitted as the request payload of POST and PUT requests.

Before we can use parameters, we must use the wrap-params middleware to wrap the handler. This correctly parses the parameters, supporting URL encoding, and makes them available to the request. We can optionally specify the character encoding to use; it defaults to UTF-8 if not specified:

(def app-handler
  (-> your-handler
      (wrap-params {:encoding "UTF-8"})
  ))

Once done, the request will get updated to make the parameters available. These go into appropriate keys in the incoming request:

  • :query-params – The parameters parsed out of the query string
  • :form-params – The parameters parsed out of the form body
  • :params – The combination of both :query-params and :form-params

We can make use of this in our request handler exactly as expected:

(defn echo-handler [{params :params}]
    (ring.util.response/content-type
        (ring.util.response/response (get params "input"))
        "text/plain"))

This handler will return a response containing the value from the parameter input.

Parameters map to a single string if only one value is present, or to a list if multiple values are present.

For example, we get the following parameter maps:

;; /echo?input=hello
{"input" "hello"}

;; /echo?input=hello&name=Fred
{"input" "hello", "name" "Fred"}

;; /echo?input=hello&input=world
{"input" ["hello" "world"]}

4.3. Receiving File Uploads

Often we want to be able to write web applications that users can upload files to. In the HTTP protocol, this is typically handled using Multipart requests. These allow for a single request to contain both form parameters and a set of files.

Ring comes with a middleware called wrap-multipart-params to handle this kind of request. This is similar to the way that wrap-params parses simple requests.

wrap-multipart-params automatically decodes and stores any uploaded files onto the file system and tells the handler where they are for it to work with them:

(def app-handler
  (-> your-handler
      wrap-params
      wrap-multipart-params
  ))

By default, the uploaded files get stored in the temporary system directory and automatically deleted after an hour. Note that this does require that the JVM is still running for the next hour to perform the cleanup.

If preferred, there’s also an in-memory store, though obviously, this risks running out of memory if large files get uploaded.

We can also write our own storage engine if needed, as long as it fulfills the API requirements:

(def app-handler
  (-> your-handler
      wrap-params
      (wrap-multipart-params {:store ring.middleware.multipart-params.byte-array/byte-array-store})
  ))

Once this middleware is set up, the uploaded files are available on the incoming request object under the params key. This is the same as using the wrap-params middleware. This entry is a map containing the details needed to work with the file, depending on the store used.

For example, the default temporary file store returns values:

  {"file" {:filename     "words.txt"
           :content-type "text/plain"
           :tempfile     #object[java.io.File ...]
           :size         51}}

Where the :tempfile entry is a java.io.File object that directly represents the file on the file system.

4.4. Working with Cookies

Cookies are a mechanism where the server can provide a small amount of data that the client will continue to send back on subsequent requests. This is typically used for session IDs, access tokens, or persistent user data such as the configured localization settings.

Ring has middleware that will allow us to work with cookies easily. This will automatically parse cookies on incoming requests, and will also allow us to create new cookies on outgoing responses.

Configuring this middleware follows the same patterns as before:

(def app-handler
  (-> your-handler
      wrap-cookies
  ))

At this point, all incoming requests will have their cookies parsed and put into the :cookies key in the request. This will contain a map of the cookie name and value:

{"session_id" {:value "session-id-hash"}}

We can then add cookies to outgoing responses by adding the :cookies key to the outgoing response. We can do this by creating the response directly:

{:status 200
 :headers {}
 :cookies {"session_id" {:value "session-id-hash"}}
 :body "Setting a cookie."}

There’s also a helper function that we can use to add cookies to responses, in a similar way to how earlier we could set status codes or headers:

(ring.util.response/set-cookie 
    (ring.util.response/response "Setting a cookie.") 
    "session_id" 
    "session-id-hash")

Cookies can also have additional options set on them, as needed for the HTTP specification. If we’re using set-cookie, then we provide these as a map parameter after the key and value. The keys of this map are:

  • :domain – The domain to restrict the cookie to
  • :path – The path to restrict the cookie to
  • :secure – true to only send the cookie on HTTPS connections
  • :http-only – true to make the cookie inaccessible to JavaScript
  • :max-age – The number of seconds after which the browser deletes the cookie
  • :expires – A specific timestamp after which the browser deletes the cookie
  • :same-site – If set to :strict, then the browser won’t send this cookie back with cross-site requests

For example:
(ring.util.response/set-cookie
    (ring.util.response/response "Setting a cookie.")
    "session_id"
    "session-id-hash"
    {:secure true :http-only true :max-age 3600})

4.5. Sessions

Cookies give us the ability to store bits of information that the client sends back to the server on every request. A more powerful way of achieving this is to use sessions. These get stored entirely on the server, but the client maintains the identifier that determines which session to use.

As with everything else here, sessions are implemented using a middleware function:

(def app-handler
  (-> your-handler
      wrap-session
  ))

By default, this stores session data in memory. We can change this if needed, and Ring comes with an alternative store that uses cookies to store all of the session data.

As with uploading files, we can provide our own storage function if needed:

(def app-handler
  (-> your-handler
      wrap-cookies
      (wrap-session {:store (cookie-store {:key "a 16-byte secret"})})
  ))

We can also adjust the details of the cookie used to store the session key.

For example, to make it so that the session cookie persists for one hour we could do:

(def app-handler
  (-> your-handler
      wrap-cookies
      (wrap-session {:cookie-attrs {:max-age 3600}})
  ))

The cookie attributes here are the same as supported by the wrap-cookies middleware.

Sessions can often act as data stores to work with. This doesn’t always work as well in a functional programming model, so Ring implements them slightly differently.

Instead, we access the session data from the request, and we return a map of data to store into it as part of the response. This is the entire session state to store, not only the changed values.

For example, the following keeps a running count of how many times the handler has been requested:

(defn handler [{session :session}]
  (let [count   (:count session 0)
        session (assoc session :count (inc count))]
    (-> (response (str "You accessed this page " count " times."))
        (assoc :session session))))

Working this way, we can remove data from the session simply by not including the key. We can also delete the entire session by returning nil for the new map:

(defn handler [request]
  (-> (response "Session deleted.")
      (assoc :session nil)))

5. Leiningen Plugin

Ring provides a plugin for the Leiningen build tool to aid both development and production.

We set up the plugin by adding the correct plugin details to the project.clj file:

  :plugins [[lein-ring "0.12.5"]]
  :ring {:handler ring.core/handler}

It’s important that the version of lein-ring is correct for the version of Ring. Here we’ve been using Ring 1.7.1, which means we need lein-ring 0.12.5. In general, it’s safest to just use the latest version of both, as seen on Maven central or with the lein search command:

$ lein search ring-core
Searching clojars ...
[ring/ring-core "1.7.1"]
  Ring core libraries.

$ lein search lein-ring
Searching clojars ...
[lein-ring "0.12.5"]
  Leiningen Ring plugin

The :handler parameter to the :ring call is the fully-qualified name of the handler that we want to use. This can include any middleware that we’ve defined.

Using this plugin means that we no longer need a main function. We can use Leiningen to run in development mode, or else we can build a production artifact for deployment purposes. Our code now comes down exactly to our logic and nothing more.

5.1. Building a Production Artifact

Once this is set up, we can now build a WAR file that we can deploy to any standard servlet container:

$ lein ring uberwar
2019-04-12 07:10:08.033:INFO::main: Logging initialized @1054ms to org.eclipse.jetty.util.log.StdErrLog
Created ./clojure/ring/target/uberjar/ring-0.1.0-SNAPSHOT-standalone.war

We can also build a standalone JAR file that will run our handler exactly as expected:

$ lein ring uberjar
Compiling ring.core
2019-04-12 07:11:27.669:INFO::main: Logging initialized @3016ms to org.eclipse.jetty.util.log.StdErrLog
Created ./clojure/ring/target/uberjar/ring-0.1.0-SNAPSHOT.jar
Created ./clojure/ring/target/uberjar/ring-0.1.0-SNAPSHOT-standalone.jar

This JAR file will include a main class that will start the handler in the embedded container that we included. This will also honor an environment variable of PORT allowing us to easily run it in a production environment:

PORT=2000 java -jar ./clojure/ring/target/uberjar/ring-0.1.0-SNAPSHOT-standalone.jar
2019-04-12 07:14:08.954:INFO::main: Logging initialized @1009ms to org.eclipse.jetty.util.log.StdErrLog
WARNING: seqable? already refers to: #'clojure.core/seqable? in namespace: clojure.core.incubator, being replaced by: #'clojure.core.incubator/seqable?
2019-04-12 07:14:10.795:INFO:oejs.Server:main: jetty-9.4.z-SNAPSHOT; built: 2018-08-30T13:59:14.071Z; git: 27208684755d94a92186989f695db2d7b21ebc51; jvm 1.8.0_77-b03
2019-04-12 07:14:10.863:INFO:oejs.AbstractConnector:main: Started ServerConnector@44a6a68e{HTTP/1.1,[http/1.1]}{0.0.0.0:2000}
2019-04-12 07:14:10.863:INFO:oejs.Server:main: Started @2918ms
Started server on port 2000

5.2. Running in Development Mode

For development purposes, we can run the handler directly from Leiningen without needing to build and run it manually. This makes things easier for testing our application in a real browser:

$ lein ring server
2019-04-12 07:16:28.908:INFO::main: Logging initialized @1403ms to org.eclipse.jetty.util.log.StdErrLog
2019-04-12 07:16:29.026:INFO:oejs.Server:main: jetty-9.4.12.v20180830; built: 2018-08-30T13:59:14.071Z; git: 27208684755d94a92186989f695db2d7b21ebc51; jvm 1.8.0_77-b03
2019-04-12 07:16:29.092:INFO:oejs.AbstractConnector:main: Started ServerConnector@69886d75{HTTP/1.1,[http/1.1]}{0.0.0.0:3000}
2019-04-12 07:16:29.092:INFO:oejs.Server:main: Started @1587ms

This also honors the PORT environment variable if we’ve set that.

Additionally, there’s a Ring Development library that we can add to our project. If this is available, then the development server will attempt to automatically reload any detected source changes. This can give us an efficient workflow of changing the code and seeing it live in our browser. This requires adding the ring-devel dependency:

[ring/ring-devel "1.7.1"]

6. Conclusion

In this article, we gave a brief introduction to the Ring library as a means to write web applications in Clojure. Why not try it on the next project?

Examples of some of the concepts we’ve covered here can be seen over on GitHub.

Guide to FastUtil


1. Introduction

In this tutorial, we’ll be looking at the FastUtil library.

First, we’ll code some examples of its type-specific collections.

Then, we’ll analyze the performance that gives FastUtil its name.

Finally, we’ll take a peek at FastUtil’s BigArray utilities.

2. Features

The FastUtil Java library seeks to extend the Java Collections Framework. It provides type-specific maps, sets, lists and queues with a smaller memory footprint and fast access and insertion. FastUtil also provides a set of utilities for working with and manipulating large (64-bit) arrays, sets and lists.

The library also includes a multitude of practical Input/Output classes for binary and text files.

Its latest release, FastUtil 8, also released a host of type-specific functions, extending the JDK’s Functional Interfaces.
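For example, here's a small sketch using one of these interfaces, Int2IntFunction (from the it.unimi.dsi.fastutil.ints package), which avoids the Integer boxing that a plain Function<Integer, Integer> would require:

Int2IntFunction doubler = x -> x * 2;
int result = doubler.get(5); // 10, computed without any Integer boxing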

2.1. Speed

In many cases, the FastUtil implementations are the fastest available. The authors have even provided their own in-depth benchmark report, comparing it against similar libraries, including HPPC and Trove.

In this tutorial, we’ll define our own benchmarks using the Java Microbenchmark Harness (JMH).

3. Full-Sized Dependency

On top of the usual JUnit dependency, we’ll be using the FastUtil and JMH dependencies in this tutorial.

We’ll need the following dependencies in our pom.xml file:

<dependency>
    <groupId>it.unimi.dsi</groupId>
    <artifactId>fastutil</artifactId>
    <version>8.2.2</version>
</dependency>
<dependency>
    <groupId>org.openjdk.jmh</groupId>
    <artifactId>jmh-core</artifactId>
    <version>1.19</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.openjdk.jmh</groupId>
    <artifactId>jmh-generator-annprocess</artifactId>
    <version>1.19</version>
    <scope>test</scope>
</dependency>

Or for Gradle users:

testCompile group: 'org.openjdk.jmh', name: 'jmh-core', version: '1.19'
testCompile group: 'org.openjdk.jmh', name: 'jmh-generator-annprocess', version: '1.19'
compile group: 'it.unimi.dsi', name: 'fastutil', version: '8.2.2'

3.1. Customized Jar File

Due to the lack of generics, FastUtil generates a large number of type-specific classes. Unfortunately, this leads to a huge jar file.

However, luckily for us, FastUtil includes a find-deps.sh script which allows us to generate smaller, more focused jars comprising only the classes we want to use in our application.

4. Type-Specific Collections

Before we begin, let’s take a quick peek at the simple process of instantiating a type-specific collection. Let’s pick a HashMap that stores keys and values using doubles. 

For this purpose, FastUtil provides a Double2DoubleMap interface and a Double2DoubleOpenHashMap implementation:

Double2DoubleMap d2dMap = new Double2DoubleOpenHashMap();

Now that we’ve instantiated our class, we can simply populate data as we would with any Map from the Java Collections API:

d2dMap.put(2.0, 5.5);
d2dMap.put(3.0, 6.6);

Finally, we can check that the data has been added correctly:

assertEquals(5.5, d2dMap.get(2.0));

4.1. Performance

FastUtil focuses on performant implementations. In this section, we’ll make use of JMH to verify that fact. Let’s compare the Java Collections HashSet<Integer> implementation against FastUtil’s IntOpenHashSet.

First, let’s see how to implement the IntOpenHashSet:

@Param({"100", "1000", "10000", "100000"})
public int setSize;

@Benchmark
public IntSet givenFastUtilsIntSetWithInitialSizeSet_whenPopulated_checkTimeTaken() {
    IntSet intSet = new IntOpenHashSet(setSize);
    for(int i = 0; i < setSize; i++) {
        intSet.add(i);
    }
    return intSet; 
}

Above, we’ve simply declared the IntOpenHashSet implementation of the IntSet interface. We’ve also declared the initial size setSize with the @Param annotation.

Put simply, these numbers are fed into JMH to produce a series of benchmark tests with different set sizes.

Next, let’s do the same thing using the Java Collections implementation:

@Benchmark
public Set<Integer> givenCollectionsHashSetWithInitialSizeSet_whenPopulated_checkTimeTaken() {
    Set<Integer> intSet = new HashSet<>(setSize);
    for(int i = 0; i < setSize; i++) {
        intSet.add(i);
    }
    return intSet;
}

Finally, let’s run the benchmark and compare the two implementations:

Benchmark                                     (setSize)  Mode  Cnt     Score   Units
givenCollectionsHashSetWithInitialSizeSet...        100  avgt    2     1.460   us/op
givenCollectionsHashSetWithInitialSizeSet...       1000  avgt    2    12.740   us/op
givenCollectionsHashSetWithInitialSizeSet...      10000  avgt    2   109.803   us/op
givenCollectionsHashSetWithInitialSizeSet...     100000  avgt    2  1870.696   us/op
givenFastUtilsIntSetWithInitialSizeSet...           100  avgt    2     0.369   us/op
givenFastUtilsIntSetWithInitialSizeSet...          1000  avgt    2     2.351   us/op
givenFastUtilsIntSetWithInitialSizeSet...         10000  avgt    2    37.789   us/op
givenFastUtilsIntSetWithInitialSizeSet...        100000  avgt    2   896.467   us/op

These results make it clear that the FastUtil implementation is much more performant than the Java Collections alternative.

5. Big Collections

Another important feature of FastUtil is the ability to use 64-bit arrays. Arrays in Java, by default, are limited to 32-bit indexing.

To get started, let’s take a look at the BigArrays class for int types. IntBigArrays provides static methods for working with 2-dimensional int arrays. By using these provided methods, we can essentially treat such a 2-dimensional array as a more user-friendly, big 1-dimensional array.

Let’s take a look at how this works.

First, we’ll start by initializing a 1-dimensional array, and converting it into a 2-dimensional array using IntBigArray’s wrap method:

int[] oneDArray = new int[] { 2, 1, 5, 2, 1, 7 };
int[][] twoDArray = IntBigArrays.wrap(oneDArray.clone());

We should make sure to use the clone method to ensure a deep copy of the array.

Now, as we’d do with a List or a Map, we can gain access to the elements using the get method:

int firstIndex = IntBigArrays.get(twoDArray, 0);
int lastIndex = IntBigArrays.get(twoDArray, IntBigArrays.length(twoDArray)-1);

Finally, let’s add some checks to ensure our IntBigArray returns the correct values:

assertEquals(2, firstIndex);
assertEquals(7, lastIndex);

6. Conclusion

In this article, we’ve taken a dive into FastUtil’s core features.

We looked at some of the type-specific collections that FastUtil offers, before playing around with some BigCollections.

As always, the code can be found over on GitHub.

Multi-Module Maven Application with Java Modules


1. Overview

The Java Platform Module System (JPMS) adds more reliability, better separation of concerns, and stronger encapsulation to Java applications. However, it’s not a build tool, hence it lacks the ability for automatically managing project dependencies.

Of course, we may wonder whether if we can use well-established build tools, like Maven or Gradle, in modularized applications.

Actually, we can! In this tutorial, we’ll learn how to create a multi-module Maven application using Java modules.

2. Encapsulating Maven Modules in Java Modules

Since modularity and dependency management are not mutually exclusive concepts in Java, we can seamlessly integrate the JPMS, for instance, with Maven, thus leveraging the best of both worlds.

In a standard multi-module Maven project, we add one or more child Maven modules by placing them under the project’s root folder and declaring them in the parent POM, within the <modules> section.

In turn, we edit each child module’s POM and specify its dependencies via the standard <groupId>, <artifactId> and <version> coordinates.

The reactor mechanism in Maven — responsible for handling multi-module projects — takes care of building the whole project in the right order.

In this case, we’ll basically use the same design methodology, but with one subtle yet fundamental variant: we’ll wrap each Maven module into a Java module by adding to it the module descriptor file, module-info.java.

3. The Parent Maven Module

To demonstrate how modularity and dependency management work great together, we’ll build a basic demo multi-module Maven project, whose functionality will be narrowed to just fetching some domain objects from a persistence layer.

To keep the code simple, we’ll use a plain Map as the underlying data structure for storing the domain objects. Of course, we can easily switch further down the road to a fully-fledged relational database.

Let’s start by defining the parent Maven module. To accomplish this, let’s create a root project directory called, for instance, multimodulemavenproject (but it could be anything else), and add to it the parent pom.xml file:

<groupId>com.baeldung.multimodulemavenproject</groupId>
<artifactId>multimodulemavenproject</artifactId>
<version>1.0</version>
<packaging>pom</packaging>
<name>multimodulemavenproject</name>
<build>
    <pluginManagement>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.8.0</version>
                <configuration>
                    <source>11</source>
                    <target>11</target>
                </configuration>
            </plugin>
        </plugins>
    </pluginManagement>
</build>
<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>

There are a few details worth noting in the definition of the parent POM.

First off, since we’re using Java 11, we’ll need at least Maven 3.5.0 on our system, as Maven supports Java 9 and higher from that version onward.

And, we’ll also need at least version 3.8.0 of the Maven compiler plugin. Therefore, let’s make sure to check the latest version of the plugin on Maven Central.

4. The Child Maven Modules

Notice that up to this point, the parent POM doesn’t declare any child modules.

Since our demo project will fetch some domain objects from the persistence layer, we’ll create four child Maven modules:

  1. entitymodule: will contain a simple domain class
  2. daomodule: will hold the interface required for accessing the persistence layer (a basic DAO contract)
  3. userdaomodule: will include an implementation of the daomodule‘s interface
  4. mainappmodule: the project’s entry point

4.1. The entitymodule Maven Module

Now, let’s add the first child Maven module, which just includes a basic domain class.

Under the project’s root directory, let’s create the entitymodule/src/main/java/com/baeldung/entity directory structure and add a User class:

public class User {

    private final String name;

    // standard constructor / getter / toString

}

Next, let’s include the module’s pom.xml file:

<parent>
    <groupId>com.baeldung.multimodulemavenproject</groupId>
    <artifactId>multimodulemavenproject</artifactId>
    <version>1.0</version>
</parent>
<groupId>com.baeldung.entitymodule</groupId>
<artifactId>entitymodule</artifactId>
<version>1.0</version>
<packaging>jar</packaging>
<name>entitymodule</name>

As we can see, the Entity module doesn’t have any dependencies on other modules, nor does it require additional Maven artifacts, as it only includes the User class.

Now, we need to encapsulate the Maven module into a Java module. To achieve this, let’s simply place the following module descriptor file (module-info.java) under the entitymodule/src/main/java directory:

module com.baeldung.entitymodule {
    exports com.baeldung.entitymodule;
}

Finally, let’s add the child Maven module to the parent POM:

<modules>
    <module>entitymodule</module>
</modules>

4.2. The daomodule Maven Module

Let’s create a new Maven module that will contain a simple interface. This is convenient for defining an abstract contract for fetching generic types from the persistence layer.

As a matter of fact, there’s a very compelling reason to place this interface in a separate Java module. By doing so, we have an abstract, highly-decoupled contract, which is easy to reuse in different contexts. At the core, this is an alternative implementation of the Dependency Inversion Principle, which yields a more flexible design.

Therefore, let’s create the daomodule/src/main/java/com/baeldung/dao directory structure under the project’s root directory, and add to it the Dao<T> interface:

public interface Dao<T> {

    Optional<T> findById(int id);

    List<T> findAll();

}

Now, let’s define the module’s pom.xml file:

<parent>
    // parent coordinates
</parent>
<groupId>com.baeldung.daomodule</groupId>
<artifactId>daomodule</artifactId>
<version>1.0</version>
<packaging>jar</packaging>
<name>daomodule</name>

The new module doesn’t require other modules or artifacts either, so we’ll just wrap it up into a Java module. Let’s create the module descriptor under the daomodule/src/main/java directory:

module com.baeldung.daomodule {
    exports com.baeldung.daomodule;
}

Finally, let’s add the module to the parent POM:

<modules>
    <module>entitymodule</module>
    <module>daomodule</module>
</modules>

4.3. The userdaomodule Maven Module

Next, let’s define the Maven module that holds an implementation of the Dao interface.

Under the project’s root directory, let’s create the userdaomodule/src/main/java/com/baeldung/userdao directory structure, and add to it the following UserDao class:

public class UserDao implements Dao<User> {

    private final Map<Integer, User> users;

    // standard constructor

    @Override
    public Optional<User> findById(int id) {
        return Optional.ofNullable(users.get(id));
    }

    @Override
    public List<User> findAll() {
        return new ArrayList<>(users.values());
    }
}

Simply put, the UserDao class provides a basic API that allows us to fetch User objects from the persistence layer.

To keep things simple, we used a Map as the backing data structure for persisting the domain objects. Of course, it’s possible to provide a more thorough implementation that uses, for instance, Hibernate’s entity manager.

Now, let’s define the Maven module’s POM:

<parent>
    // parent coordinates
</parent>
<groupId>com.baeldung.userdaomodule</groupId>
<artifactId>userdaomodule</artifactId>
<version>1.0</version>
<packaging>jar</packaging>
<name>userdaomodule</name>
<dependencies>
    <dependency>
        <groupId>com.baeldung.entitymodule</groupId>
        <artifactId>entitymodule</artifactId>
        <version>1.0</version>
    </dependency>
    <dependency>
        <groupId>com.baeldung.daomodule</groupId>
        <artifactId>daomodule</artifactId>
        <version>1.0</version>
    </dependency>
</dependencies>

In this case, things are slightly different, as the userdaomodule module requires the entitymodule and daomodule modules. That’s why we added them as dependencies in the pom.xml file.

We still need to encapsulate this Maven module into a Java module. So, let’s add the following module descriptor under the userdaomodule/src/main/java directory:

module com.baeldung.userdaomodule {
    requires com.baeldung.entitymodule;
    requires com.baeldung.daomodule;
    provides com.baeldung.daomodule.Dao with com.baeldung.userdaomodule.UserDao;
    exports com.baeldung.userdaomodule;
}

Finally, we need to add this new module to the parent POM:

<modules>
    <module>entitymodule</module>
    <module>daomodule</module>
    <module>userdaomodule</module>
</modules>

From a high-level view, it’s easy to see that the pom.xml file and the module descriptor play different roles. Even so, they complement each other nicely.

Let’s say that we need to update the versions of the entitymodule and daomodule Maven artifacts. We can easily do this without having to change the dependencies in the module descriptor. Maven will take care of including the right artifacts for us.

Similarly, we can change the service implementation that the module provides by modifying the “provides..with” directive in the module descriptor.

We gain a lot when we use Maven and Java modules together. The former brings the functionality of automatic, centralized dependency management, while the latter provides the intrinsic benefits of modularity.

4.4. The mainappmodule Maven Module

Additionally, we need to define the Maven module that contains the project’s main class.

As we did before, let’s create the mainappmodule/src/main/java/com/baeldung/mainapp directory structure under the root directory, and add to it the following Application class:

public class Application {
    
    public static void main(String[] args) {
        Map<Integer, User> users = new HashMap<>();
        users.put(1, new User("Julie"));
        users.put(2, new User("David"));
        Dao<User> userDao = new UserDao(users);
        userDao.findAll().forEach(System.out::println);
    }   
}

The Application class’s main() method is quite simple. First, it populates a HashMap with a couple of User objects. Next, it uses a UserDao instance for fetching them from the Map, and then it displays them to the console.

In addition, we also need to define the module’s pom.xml file:

<parent>
    // parent coordinates
</parent>
<groupId>com.baeldung.mainappmodule</groupId>
<artifactId>mainappmodule</artifactId>
<version>1.0</version>
<packaging>jar</packaging>
<name>mainappmodule</name>
    
<dependencies>
    <dependency>
        <groupId>com.baeldung.entitymodule</groupId>
        <artifactId>entitymodule</artifactId>
        <version>1.0</version>
    </dependency>
    <dependency>
        <groupId>com.baeldung.daomodule</groupId>
        <artifactId>daomodule</artifactId>
        <version>1.0</version>
    </dependency>
    <dependency>
        <groupId>com.baeldung.userdaomodule</groupId>
        <artifactId>userdaomodule</artifactId>
        <version>1.0</version>
    </dependency>
</dependencies>

The module’s dependencies are pretty self-explanatory. So, we just need to place the module inside a Java module. Therefore, under the mainappmodule/src/main/java directory structure, let’s include the module descriptor:

module com.baeldung.mainappmodule {
    requires com.baeldung.entitymodule;
    requires com.baeldung.userdaomodule;
    requires com.baeldung.daomodule;
    uses com.baeldung.daomodule.Dao;
}

Finally, let’s add this module to the parent POM:

<modules>
    <module>entitymodule</module>
    <module>daomodule</module>
    <module>userdaomodule</module>
    <module>mainappmodule</module>
</modules>

With all the child Maven modules already in place, and neatly encapsulated in Java modules, here’s how the project’s structure looks:

multimodulemavenproject (the root directory)
|-- pom.xml
|-- entitymodule
|   |-- pom.xml
|   |-- src
|       |-- main
|           |-- java
|               |-- module-info.java
|               |-- com
|                   |-- baeldung
|                       |-- entity
|                           |-- User.java
|-- daomodule
|   |-- pom.xml
|   |-- src
|       |-- main
|           |-- java
|               |-- module-info.java
|               |-- com
|                   |-- baeldung
|                       |-- dao
|                           |-- Dao.java
|-- userdaomodule
|   |-- pom.xml
|   |-- src
|       |-- main
|           |-- java
|               |-- module-info.java
|               |-- com
|                   |-- baeldung
|                       |-- userdao
|                           |-- UserDao.java
|-- mainappmodule
    |-- pom.xml
    |-- src
        |-- main
            |-- java
                |-- module-info.java
                |-- com
                    |-- baeldung
                        |-- mainapp
                            |-- Application.java

5. Running the Application

Finally, let’s run the application, either from within our IDE or from a console.

As we might expect, we should see a couple of User objects printed out to the console when the application starts up:

User{name=Julie}
User{name=David}

6. Conclusion

In this tutorial, we learned in a pragmatic way how to put Maven and the JPMS to work side-by-side, in the development of a basic multi-module Maven project that uses Java modules.

As usual, all the code samples shown in this tutorial are available over on GitHub.

Batch Insert/Update with Hibernate/JPA

1. Overview

In this tutorial, we’ll look at how we can batch insert or update entities using Hibernate/JPA.

Batching allows us to send a group of SQL statements to the database in a single network call. This way, we can optimize the network and memory usage of our application.
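To get an intuition for what this means at the JDBC level, here’s a minimal plain-JDBC sketch of a batched insert; it’s roughly what Hibernate does for us once batching is enabled. The dataSource and the school table are assumptions matching the examples below:

public void insertSchoolsInOneBatch(DataSource dataSource) throws SQLException {
    try (Connection connection = dataSource.getConnection();
         PreparedStatement statement = connection.prepareStatement(
           "insert into school (name, id) values (?, ?)")) {
        for (long i = 1; i <= 10; i++) {
            statement.setString(1, "School" + i);
            statement.setLong(2, i);
            // queue the statement locally instead of sending it right away
            statement.addBatch();
        }
        // send all queued statements to the database in a single call
        statement.executeBatch();
    }
}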

2. Setup

2.1. Sample Data Model

Let’s look at our sample data model that we’ll use in the examples.

Firstly, we’ll create a School entity:

@Entity
public class School {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE)
    private long id;

    private String name;

    @OneToMany(mappedBy = "school")
    private List<Student> students;

    // Getters and setters...
}

Each School will have zero or more Students:

@Entity
public class Student {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE)
    private long id;

    private String name;

    @ManyToOne
    private School school;

    // Getters and setters...
}

2.2. Tracing SQL Queries

When running our examples, we’ll need to verify that insert/update statements are indeed sent in batches. Unfortunately, we can’t tell from Hibernate’s log statements whether the SQL statements are batched or not. Because of this, we’ll be using a data source proxy to trace Hibernate/JPA SQL statements:

private static class ProxyDataSourceInterceptor implements MethodInterceptor {
    private final DataSource dataSource;
    public ProxyDataSourceInterceptor(final DataSource dataSource) {
        this.dataSource = ProxyDataSourceBuilder.create(dataSource)
            .name("Batch-Insert-Logger")
            .asJson().countQuery().logQueryToSysOut().build();
    }
    
    // Other methods...
}

3. Default Behaviour

Hibernate doesn’t enable batching by default. This means that it’ll send a separate SQL statement for each insert/update operation:

@Transactional
@Test
public void whenNotConfigured_ThenSendsInsertsSeparately() {
    for (int i = 0; i < 10; i++) {
        School school = createSchool(i);
        entityManager.persist(school);
    }
    entityManager.flush();
}

Here, we’ve persisted 10 School entities. If we look at the query logs, we can see that Hibernate sends each insert statement separately:

"querySize":1, "batchSize":0, "query":["insert into school (name, id) values (?, ?)"], 
  "params":[["School1","1"]]
"querySize":1, "batchSize":0, "query":["insert into school (name, id) values (?, ?)"], 
  "params":[["School2","2"]]
"querySize":1, "batchSize":0, "query":["insert into school (name, id) values (?, ?)"], 
  "params":[["School3","3"]]
"querySize":1, "batchSize":0, "query":["insert into school (name, id) values (?, ?)"], 
  "params":[["School4","4"]]
"querySize":1, "batchSize":0, "query":["insert into school (name, id) values (?, ?)"], 
  "params":[["School5","5"]]
"querySize":1, "batchSize":0, "query":["insert into school (name, id) values (?, ?)"], 
  "params":[["School6","6"]]
"querySize":1, "batchSize":0, "query":["insert into school (name, id) values (?, ?)"], 
  "params":[["School7","7"]]
"querySize":1, "batchSize":0, "query":["insert into school (name, id) values (?, ?)"], 
  "params":[["School8","8"]]
"querySize":1, "batchSize":0, "query":["insert into school (name, id) values (?, ?)"], 
  "params":[["School9","9"]]
"querySize":1, "batchSize":0, "query":["insert into school (name, id) values (?, ?)"], 
  "params":[["School10","10"]]

Hence, we should configure Hibernate to enable batching. For this purpose, we should set the hibernate.jdbc.batch_size property to a number bigger than 0.

If we’re creating EntityManager manually, we should add hibernate.jdbc.batch_size to the Hibernate properties:

public Properties hibernateProperties() {
    Properties properties = new Properties();
    properties.put("hibernate.jdbc.batch_size", "5");
    
    // Other properties...
    return properties;
}

If we’re using Spring Boot, we can define it as an application property:

spring.jpa.properties.hibernate.jdbc.batch_size=5

4. Batch Insert for Single Table

4.1. Batch Insert without Explicit Flush

Let’s first look at how we can use batch inserts when we’re dealing with only one entity type.

We’ll use the previous code sample, but this time batching is enabled:

@Transactional
@Test
public void whenInsertingSingleTypeOfEntity_thenCreatesSingleBatch() {
    for (int i = 0; i < 10; i++) {
        School school = createSchool(i);
        entityManager.persist(school);
    }
}

Here we’ve persisted 10 School entities. When we look at the logs, we can verify that Hibernate sends insert statements in batches:

"batch":true, "querySize":1, "batchSize":5, "query":["insert into school (name, id) values (?, ?)"], 
  "params":[["School1","1"],["School2","2"],["School3","3"],["School4","4"],["School5","5"]]
"batch":true, "querySize":1, "batchSize":5, "query":["insert into school (name, id) values (?, ?)"], 
  "params":[["School6","6"],["School7","7"],["School8","8"],["School9","9"],["School10","10"]]

One important thing to mention here is memory consumption. When we persist an entity, Hibernate stores it in the persistence context. For example, if we persist 100,000 entities in one transaction, we’ll end up having 100,000 entity instances in memory, possibly causing an OutOfMemoryError.

4.2. Batch Insert with Explicit Flush

Now, we’ll look at how we can optimize memory usage during batching operations. Let’s dig deep into the persistence context’s role.

First of all, the persistence context stores newly created entities and also the modified ones in memory. Hibernate sends these changes to the database when the transaction is synchronized. This generally happens at the end of a transaction. However, calling EntityManager.flush() also triggers a transaction synchronization.

Secondly, the persistence context serves as an entity cache, which is why it’s also referred to as the first-level cache. To clear entities in the persistence context, we can call EntityManager.clear().

So, to reduce the memory load during batching, we can call EntityManager.flush() and EntityManager.clear() in our application code whenever the batch size is reached:

@Transactional
@Test
public void whenFlushingAfterBatch_ThenClearsMemory() {
    for (int i = 0; i < 10; i++) {
        if (i > 0 && i % BATCH_SIZE == 0) {
            entityManager.flush();
            entityManager.clear();
        }
        School school = createSchool(i);
        entityManager.persist(school);
    }
}

Here we’re flushing the entities in the persistence context thus making Hibernate send queries to the database. Furthermore, by clearing the persistence context, we’re removing the School entities from memory. Batching behavior will remain the same.

5. Batch Insert for Multiple Tables

Now let’s see how we can configure batch inserts when dealing with multiple entity types in one transaction.

When we want to persist the entities of several types, Hibernate creates a different batch for each entity type. This is because there can be only one type of entity in a single batch.

Additionally, as Hibernate collects insert statements, whenever it encounters an entity type different from the one in the current batch, it creates a new batch. This is the case even though there is already a batch for that entity type:

@Transactional
@Test
public void whenThereAreMultipleEntities_ThenCreatesNewBatch() {
    for (int i = 0; i < 10; i++) {
        if (i > 0 && i % BATCH_SIZE == 0) {
            entityManager.flush();
            entityManager.clear();
        }
        School school = createSchool(i);
        entityManager.persist(school);
        Student firstStudent = createStudent(school);
        Student secondStudent = createStudent(school);
        entityManager.persist(firstStudent);
        entityManager.persist(secondStudent);
    }
}

Here, we’re inserting a School and assigning it two Students and repeating this process 10 times.

In the logs, we see that Hibernate sends School insert statements in several batches of size 1 while we were expecting only 2 batches of size 5. Moreover, Student insert statements are also sent in several batches of size 2 instead of 4 batches of size 5:

"batch":true, "querySize":1, "batchSize":1, "query":["insert into school (name, id) values (?, ?)"], 
  "params":[["School1","1"]]
"batch":true, "querySize":1, "batchSize":2, "query":["insert into student (name, school_id, id) 
  values (?, ?, ?)"], "params":[["Student-School1","1","2"],["Student-School1","1","3"]]
"batch":true, "querySize":1, "batchSize":1, "query":["insert into school (name, id) values (?, ?)"], 
  "params":[["School2","4"]]
"batch":true, "querySize":1, "batchSize":2, "query":["insert into student (name, school_id, id) 
  values (?, ?, ?)"], "params":[["Student-School2","4","5"],["Student-School2","4","6"]]
"batch":true, "querySize":1, "batchSize":1, "query":["insert into school (name, id) values (?, ?)"], 
  "params":[["School3","7"]]
"batch":true, "querySize":1, "batchSize":2, "query":["insert into student (name, school_id, id) 
  values (?, ?, ?)"], "params":[["Student-School3","7","8"],["Student-School3","7","9"]]
Other log lines...

To batch all insert statements of the same entity type, we should configure the hibernate.order_inserts property.

We can configure the Hibernate property manually using EntityManagerFactory:

public Properties hibernateProperties() {
    Properties properties = new Properties();
    properties.put("hibernate.order_inserts", "true");
    
    // Other properties...
    return properties;
}

If we’re using Spring Boot, we can configure the property in application.properties:

spring.jpa.properties.hibernate.order_inserts=true

After adding this property, we’ll have 1 batch for School inserts and 2 batches for Student inserts:

"batch":true, "querySize":1, "batchSize":5, "query":["insert into school (name, id) values (?, ?)"], 
  "params":[["School6","16"],["School7","19"],["School8","22"],["School9","25"],["School10","28"]]
"batch":true, "querySize":1, "batchSize":5, "query":["insert into student (name, school_id, id) 
  values (?, ?, ?)"], "params":[["Student-School6","16","17"],["Student-School6","16","18"],
  ["Student-School7","19","20"],["Student-School7","19","21"],["Student-School8","22","23"]]
"batch":true, "querySize":1, "batchSize":5, "query":["insert into student (name, school_id, id) 
  values (?, ?, ?)"], "params":[["Student-School8","22","24"],["Student-School9","25","26"],
  ["Student-School9","25","27"],["Student-School10","28","29"],["Student-School10","28","30"]]

6. Batch Update

Now, let’s move on to batch updates. Similar to batch inserts, we can group several update statements and send them to the database in one go.

To enable this, we’ll configure hibernate.order_updates and hibernate.jdbc.batch_versioned_data properties.

If we’re creating our EntityManagerFactory manually, we can set the properties programmatically:

public Properties hibernateProperties() {
    Properties properties = new Properties();
    properties.put("hibernate.order_updates", "true");
    properties.put("hibernate.batch_versioned_data", "true");
    
    // Other properties...
    return properties;
}

And if we’re using Spring Boot, we’ll just add them to application.properties:

spring.jpa.properties.hibernate.order_updates=true
spring.jpa.properties.hibernate.jdbc.batch_versioned_data=true

After configuring these properties, Hibernate should group update statements in batches:

@Transactional
@Test
public void whenUpdatingEntities_thenCreatesBatch() {
    TypedQuery<School> schoolQuery = 
      entityManager.createQuery("SELECT s from School s", School.class);
    List<School> allSchools = schoolQuery.getResultList();
    for (School school : allSchools) {
        school.setName("Updated_" + school.getName());
    }
}

Here, we’ve updated the School entities, and Hibernate sends the SQL statements in 2 batches of size 5:

"batch":true, "querySize":1, "batchSize":5, "query":["update school set name=? where id=?"], 
  "params":[["Updated_School1","1"],["Updated_School2","2"],["Updated_School3","3"],
  ["Updated_School4","4"],["Updated_School5","5"]]
"batch":true, "querySize":1, "batchSize":5, "query":["update school set name=? where id=?"], 
  "params":[["Updated_School6","6"],["Updated_School7","7"],["Updated_School8","8"],
  ["Updated_School9","9"],["Updated_School10","10"]]

7. @Id Generation Strategy

When we want to use batching for inserts/updates, we should be aware of the primary key generation strategy. If our entities use the GenerationType.IDENTITY identifier generator, Hibernate will silently disable batch inserts.
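For instance, if we mapped the id like this (a hypothetical variation of our entities), Hibernate would have to execute each insert immediately to obtain the generated key, so the statements could no longer be grouped:

@Id
// with IDENTITY, the database generates the key on insert,
// which forces Hibernate to execute the statements one by one
@GeneratedValue(strategy = GenerationType.IDENTITY)
private long id;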

Since entities in our examples use GenerationType.SEQUENCE identifier generator, Hibernate enables batch operations:

@Id
@GeneratedValue(strategy = GenerationType.SEQUENCE)
private long id;

8. Summary

In this article, we looked at batch inserts and updates using Hibernate/JPA.

Check out the code samples for this article over on GitHub.


Run JAR Application With Command Line Arguments

1. Overview

Typically, every meaningful application includes one or more JAR files as dependencies. However, there are times when a JAR file itself represents a standalone application or a web application.

We’ll focus on the standalone application scenario in this article. Hereafter, we’ll refer to it as a JAR application.

In this tutorial, we’ll first learn how to create a JAR application. Later, we’ll learn how to run a JAR application with or without command-line arguments.

2. Create a JAR Application

A JAR file can contain one or more main classes. Each main class is the entry point of an application. So, theoretically, a JAR file can contain more than one application, but it has to contain at least one main class to be able to run.

A JAR file can have one entry point set in its manifest file. In this case, the JAR file is an executable JAR. The main class has to be included in that JAR file.

First of all, let’s see a quick example of how to compile our classes and create an executable JAR with a manifest file:

$ javac com/baeldung/jarArguments/*.java
$ jar cfm JarExample.jar ../resources/example_manifest.txt com/baeldung/jarArguments/*.class
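The example_manifest.txt file referenced above only needs to point at the entry point; at a minimum, it contains a Main-Class attribute like the following (note that the manifest text must end with a newline):

Main-Class: com.baeldung.jarArguments.JarExample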

A non-executable JAR is simply a JAR file that doesn’t have a Main-Class defined in the manifest file. As we’ll see later, we can still run a main class that’s contained in the JAR file itself.

Here’s how we would create a non-executable JAR without a manifest file:

$ jar cf JarExample2.jar com/baeldung/jarArguments/*.class

3. Java Command Line Arguments

Just like any application, a JAR application accepts any number of arguments, including zero arguments. It all depends on the application’s need.

This allows the user to specify configuration information when the application is launched.

As a result, the application can avoid hardcoded values, and it still can handle many different use cases.

An argument can contain any alphanumeric characters, Unicode characters, and possibly some special characters allowed by the shell, for example ‘@’.

Arguments are separated by one or more spaces. If an argument needs to contain spaces, it has to be enclosed in quotes. Either single quotes or double quotes work fine.

Usually, for a typical Java application, when invoking the application, the user enters command-line arguments after the name of the class.

However, it’s not always the case for JAR applications.

As we have already discussed, the entry point of a Java main class is the main method. The arguments are all Strings and are passed to the main method as a String array.

That said, inside the application, we can convert any element of the String array to other data types, such as char, int, double, their wrapper classes, or other appropriate types.
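To make this concrete, here’s a minimal sketch of what the JarExample main class used in the following sections might look like; the real class may differ slightly, but it prints a greeting followed by every argument it receives:

public class JarExample {

    public static void main(String[] args) {
        System.out.println("Hello Baeldung Reader in JarExample!");
        if (args == null || args.length == 0) {
            System.out.println("There are no arguments!");
            return;
        }
        System.out.println("There are " + args.length + " argument(s)!");
        for (int i = 0; i < args.length; i++) {
            System.out.println("Argument(" + (i + 1) + "):" + args[i]);
        }
    }
}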

4. Run an Executable JAR with Arguments

Let’s see the basic syntax for running an executable JAR file with arguments:

java -jar jar-file-name [args …]

The executable JAR created earlier is a simple application that just prints out the arguments passed in. We can run it with any number of arguments. Below is an example with two arguments:

$ java -jar JarExample.jar "arg 1" arg2@

We’ll see the following output in the console:

Hello Baeldung Reader in JarExample!
There are 2 argument(s)!
Argument(1):arg 1
Argument(2):arg2@

So, when invoking an executable JAR, we don’t need to specify the main class name on the command line. We simply add our arguments after the JAR file name. If we do provide a class name after the executable JAR file name, it simply becomes the first argument to the actual main class.

Most of the time, a JAR application is an executable JAR. An executable JAR can have a maximum of one main class defined in the manifest file.

Consequently, other applications in the same executable JAR file can’t be set in the manifest file, but we can still run them from the command line just like we would for a non-executable JAR. We’ll see exactly how in the next section.

5. Run a Non-Executable JAR with Arguments

To run an application in a non-executable JAR file, we have to use the -cp option instead of -jar. We’ll use the -cp option (short for classpath) to specify the JAR file that contains the class file we want to execute:

java -cp jar-file-name main-class-name [args …]

As we can see, in this case, we’ll have to include the main class name in the command line, followed by the arguments.

The non-executable JAR created earlier contains the same simple application. We can run it with any (including zero) arguments. Here’s an example with two arguments:

$ java -cp JarExample2.jar com.baeldung.jarArguments.JarExample "arg 1" arg2@

And, just like we saw above, we’ll see the following output:

Hello Baeldung Reader in JarExample!
There are 2 argument(s)!
Argument(1):arg 1
Argument(2):arg2@

6. Conclusion

In this tutorial, we learned two ways of running a JAR application on the command line with or without arguments.

We also demonstrated that an argument could contain spaces and special characters (when allowed by the shell).

As always, the code for the examples is available over on GitHub.

RestTemplate Post Request with JSON

1. Introduction

In this tutorial, we’ll illustrate how to use Spring’s RestTemplate to make POST requests sending JSON content.

2. Setting Up the Example

Let’s start by adding a simple Person model class to represent the data to be posted:

public class Person {
    private Integer id;
    private String name;

    // standard constructor, getters, setters
}

To work with Person objects, we’ll add a PersonService interface and implementation with 2 methods:

public interface PersonService {

    public Person saveUpdatePerson(Person person);
    public Person findPersonById(Integer id);
}

The implementation of these methods will simply return an object. We’re using a dummy implementation of this layer here so we can focus on the web layer.
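As a sketch, such a dummy implementation might look like the following; the class name is an assumption, and we assume Person exposes a default constructor along with its setters:

public class PersonServiceImpl implements PersonService {

    @Override
    public Person saveUpdatePerson(Person person) {
        // no real persistence; we simply echo the object back
        return person;
    }

    @Override
    public Person findPersonById(Integer id) {
        // return a stub Person so the web layer has something to serialize
        Person person = new Person();
        person.setId(id);
        person.setName("John");
        return person;
    }
}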

3. Setting Up the REST API

Let’s define a simple REST API for our Person class:

@PostMapping(value = "/createPerson", consumes = "application/json", produces = "application/json")
public Person createPerson(@RequestBody Person person) {
    return personService.saveUpdatePerson(person);
}

@PostMapping(value = "/updatePerson", consumes = "application/json", produces = "application/json")
public Person updatePerson(@RequestBody Person person, HttpServletResponse response) {
    response.setHeader("Location", ServletUriComponentsBuilder.fromCurrentContextPath()
      .path("/findPerson/" + person.getId()).toUriString());
    return personService.saveUpdatePerson(person);
}

As we remember, we want to post the data in JSON format. In order to do that, we added the consumes attribute in the @PostMapping annotation with the value of “application/json” for both methods.

Similarly, we set the produces attribute to “application/json” to tell Spring that we want the response body in JSON format.

We annotated the person parameter with the @RequestBody annotation for both methods. This will tell Spring that the person object will be bound to the body of the HTTP request.

Lastly, both methods return a Person object that will be bound to the response body. Let’s note that we’ll annotate our API class with @RestController, which implicitly adds the @ResponseBody annotation to all API methods.
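For completeness, here’s a sketch of the enclosing controller class; its name is an assumption:

@RestController
public class PersonController {

    @Autowired
    private PersonService personService;

    // the createPerson and updatePerson handler methods shown above
}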

4. Using RestTemplate

Now we can write a few unit tests to test our Person REST API. Here, we’ll try to send POST requests to the Person API by using the POST methods provided by the RestTemplate: postForObject, postForEntity, and postForLocation.

Before we start to implement our unit tests, let’s define a setup method to initialize the objects that we’ll use in all our unit test methods:

@BeforeClass
public static void runBeforeAllTestMethods() {
    createPersonUrl = "http://localhost:8082/spring-rest/createPerson";
    updatePersonUrl = "http://localhost:8082/spring-rest/updatePerson";

    restTemplate = new RestTemplate();
    headers = new HttpHeaders();
    headers.setContentType(MediaType.APPLICATION_JSON);
    personJsonObject = new JSONObject();
    personJsonObject.put("id", 1);
    personJsonObject.put("name", "John");
}

Besides this setup method, note that we’ll refer to the following mapper to convert the JSON String to a JsonNode object in our unit tests:

private final ObjectMapper objectMapper = new ObjectMapper();

As we previously mentioned, we want to post the data in JSON format. In order to achieve this, we’ll add a Content-Type header to our request with the APPLICATION_JSON media type.

Spring’s HttpHeaders class provides different methods to access the headers. Here, we set the Content-Type header to application/json by calling the setContentType method. We’ll attach the headers object to our requests.

4.1. Posting JSON with postForObject

RestTemplate‘s postForObject method creates a new resource by posting an object to the given URI template. It returns the result automatically converted to the type specified in the responseType parameter.

Let’s say that we want to make a POST request to our Person API to create a new Person object and return this newly created object in the response.

First, we’ll build the request object of type HttpEntity based on the personJsonObject and the headers containing the Content-Type. This allows the postForObject method to send a JSON request body:

@Test
public void givenDataIsJson_whenDataIsPostedByPostForObject_thenResponseBodyIsNotNull()
  throws IOException {
    HttpEntity<String> request = new HttpEntity<String>(personJsonObject.toString(), headers);
    
    String personResultAsJsonStr = restTemplate.postForObject(createPersonUrl, request, String.class);
    JsonNode root = objectMapper.readTree(personResultAsJsonStr);
    
    assertNotNull(personResultAsJsonStr);
    assertNotNull(root);
    assertNotNull(root.path("name").asText());
}

In this example, the postForObject() method returns the response body as a String type. We can also return the response as a Person object by setting the responseType parameter:

Person person = restTemplate.postForObject(createPersonUrl, request, Person.class);

assertNotNull(person);
assertNotNull(person.getName());

Actually, our request handler method matching with the createPersonUrl URI produces the response body in JSON format. But this is not a limitation for us — postForObject is able to automatically convert the response body into the requested Java type (e.g. String, Person) specified in the responseType parameter.

4.2. Posting JSON with postForEntity

Compared to postForObject(), postForEntity() returns the response as a ResponseEntity object. Other than that, both methods do the same job.

Let’s say that we want to make a POST request to our Person API to create a new Person object and return the response as a ResponseEntity. We can make use of the postForEntity method to implement this:

@Test
public void givenDataIsJson_whenDataIsPostedByPostForEntity_thenResponseBodyIsNotNull()
  throws IOException {
    HttpEntity<String> request = new HttpEntity<String>(personJsonObject.toString(), headers);
    
    ResponseEntity<String> responseEntityStr = restTemplate.
      postForEntity(createPersonUrl, request, String.class);
    JsonNode root = objectMapper.readTree(responseEntityStr.getBody());
 
    assertNotNull(responseEntityStr.getBody());
    assertNotNull(root.path("name").asText());
}

Similar to the postForObject, postForEntity has the responseType parameter to convert the response body to the requested Java type.

Here, we were able to return the response body as a ResponseEntity<String>.

We can also return the response as a ResponseEntity<Person> object by setting the responseType parameter to Person.class:

ResponseEntity<Person> responseEntityPerson = restTemplate.
  postForEntity(createPersonUrl, request, Person.class);

assertNotNull(responseEntityPerson.getBody());
assertNotNull(responseEntityPerson.getBody().getName());

4.3. Posting JSON with postForLocation

Similar to the postForObject and postForEntity methods, postForLocation also creates a new resource by posting the given object to the given URI. The only difference is that it returns the value of the Location header.

Remember, we already saw how to set the Location header of a response in our updatePerson REST API method above:

response.setHeader("Location", ServletUriComponentsBuilder.fromCurrentContextPath()
  .path("/findPerson/" + person.getId()).toUriString());

Now, let’s imagine that we want to return the Location header of the response after updating the person object we posted. We can implement this by using the postForLocation method:

@Test
public void givenDataIsJson_whenDataIsPostedByPostForLocation_thenResponseBodyIsTheLocationHeader() 
  throws JsonProcessingException {
    HttpEntity<String> request = new HttpEntity<String>(personJsonObject.toString(), headers);
    URI locationHeader = restTemplate.postForLocation(updatePersonUrl, request);

    assertNotNull(locationHeader);
}

5. Conclusion

In this quick tutorial, we explored how to use RestTemplate to make a POST request with JSON.

As always, all the examples and code snippets can be found over on GitHub.

Guide to Classgraph Library

1. Overview

In this brief tutorial, we’ll talk about the Classgraph library — what it helps with and how we can use it.

Classgraph helps us to find target resources in the Java classpath, builds metadata about the resources found, and provides convenient APIs for working with the metadata.

This use-case is very popular in Spring-based applications, where components marked with stereotype annotations are automatically registered in the application context. However, we can exploit that approach for custom tasks as well. For example, we might want to find all classes with a particular annotation, or all resource files with a certain name.

The cool thing is that Classgraph is fast, as it works on the byte-code level, meaning the inspected classes are not loaded to the JVM, and it doesn’t use reflection for processing.

2. Maven Dependencies

First, let’s add the classgraph library to our pom.xml:

<dependency>
    <groupId>io.github.classgraph</groupId>
    <artifactId>classgraph</artifactId>
    <version>4.8.28</version>
</dependency>

In the next sections, we’ll look into several practical examples with the library’s API.

3. Basic Usage

There are three basic steps to using the library:

  1. Set up scan options – for example, target package(s)
  2. Perform the scan
  3. Work with the scan results

Let’s create the following domain for our example setup:

@Target({TYPE, METHOD, FIELD})
@Retention(RetentionPolicy.RUNTIME)
public @interface TestAnnotation {

    String value() default "";
}

@TestAnnotation
public class ClassWithAnnotation {
}

Now, let’s see the three steps above in an example that looks for classes with the @TestAnnotation:

try (ScanResult result = new ClassGraph().enableClassInfo().enableAnnotationInfo()
  .whitelistPackages(getClass().getPackage().getName()).scan()) {
    
    ClassInfoList classInfos = result.getClassesWithAnnotation(TestAnnotation.class.getName());
    
    assertThat(classInfos).extracting(ClassInfo::getName).contains(ClassWithAnnotation.class.getName());
}

Let’s break down the example above:

  • we started by setting up the scan options (we’ve configured the scanner to parse only class and annotation info, as well as instructing it to parse only files from the target package)
  • we performed the scan using the ClassGraph.scan() method
  • we used the ScanResult to find annotated classes by calling the getClassesWithAnnotation() method

As we’ll also see in the next examples, the ScanResult object can contain a lot of information about the APIs we want to inspect, such as the ClassInfoList.

4. Filtering by Method Annotation

Let’s expand our example to method annotations:

public class MethodWithAnnotation {

    @TestAnnotation
    public void service() {
    }
}

We can find all classes that have methods marked by the target annotation using a similar method — getClassesWithMethodAnnotation():

try (ScanResult result = new ClassGraph().enableAllInfo()
  .whitelistPackages(getClass().getPackage().getName()).scan()) {
    
    ClassInfoList classInfos = result.getClassesWithMethodAnnotation(TestAnnotation.class.getName());
    
    assertThat(classInfos).extracting(ClassInfo::getName).contains(MethodWithAnnotation.class.getName());
}

The method returns a ClassInfoList object containing information about the classes that match the scan.

5. Filtering by Annotation Parameter

Let’s also see how we can find all classes with methods marked by the target annotation and with a target annotation parameter value.

First, let’s define classes containing methods with the @TestAnnotation, with 2 different parameter values:

public class MethodWithAnnotationParameterDao {

    @TestAnnotation("dao")
    public void service() {
    }
}

public class MethodWithAnnotationParameterWeb {

    @TestAnnotation("web")
    public void service() {
    }
}

Now, let’s iterate through the ClassInfoList result, and verify each method’s annotations:

try (ScanResult result = new ClassGraph().enableAllInfo()
  .whitelistPackages(getClass().getPackage().getName()).scan()) {

    ClassInfoList classInfos = result.getClassesWithMethodAnnotation(TestAnnotation.class.getName());
    ClassInfoList webClassInfos = classInfos.filter(classInfo -> {
        return classInfo.getMethodInfo().stream().anyMatch(methodInfo -> {
            AnnotationInfo annotationInfo = methodInfo.getAnnotationInfo(TestAnnotation.class.getName());
            if (annotationInfo == null) {
                return false;
            }
            return "web".equals(annotationInfo.getParameterValues().getValue("value"));
        });
    });

    assertThat(webClassInfos).extracting(ClassInfo::getName)
      .contains(MethodWithAnnotationParameterWeb.class.getName());
}

Here, we’ve used the AnnotationInfo and MethodInfo metadata classes to find metadata on the methods and annotations we want to check.

6. Filtering by Field Annotation

We can also use the getClassesWithFieldAnnotation() method to filter a ClassInfoList result based on field annotations:

public class FieldWithAnnotation {

    @TestAnnotation
    private String s;
}

try (ScanResult result = new ClassGraph().enableAllInfo()
  .whitelistPackages(getClass().getPackage().getName()).scan()) {

    ClassInfoList classInfos = result.getClassesWithFieldAnnotation(TestAnnotation.class.getName());
 
    assertThat(classInfos).extracting(ClassInfo::getName).contains(FieldWithAnnotation.class.getName());
}

7. Finding Resources

Finally, we’ll have a look at how we can find information on classpath resources.

Let’s create a resource file in the classgraph classpath root directory — for example, src/test/resources/classgraph/my.config — and give it some content:

my data

We can now find the resource and get its contents:

try (ScanResult result = new ClassGraph().whitelistPaths("classgraph").scan()) {
    ResourceList resources = result.getResourcesWithExtension("config");
    assertThat(resources).extracting(Resource::getPath).containsOnly("classgraph/my.config");
    assertThat(resources.get(0).getContentAsString()).isEqualTo("my data");
}

We can see here we’ve used the ScanResult’s getResourcesWithExtension() method to look for our specific file. The class has a few other useful resource-related methods, such as getAllResources(), getResourcesWithPath() and getResourcesMatchingPattern().

These methods return a ResourceList object, which can be further used to iterate through and manipulate Resource objects.
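For instance, here’s a quick sketch that prints the path of every resource found under the whitelisted directory:

try (ScanResult result = new ClassGraph().whitelistPaths("classgraph").scan()) {
    result.getAllResources()
      .forEach(resource -> System.out.println(resource.getPath()));
}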

8. Instantiation

When we want to instantiate the classes we’ve found, it’s very important not to do that via Class.forName, but to use the library method ClassInfo.loadClass.

The reason is that Classgraph uses its own class loader to load classes from some JAR files. So, if we use Class.forName, the same class might be loaded more than once by different class loaders, and this might lead to non-trivial bugs.
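For example, here’s a sketch of how we might instantiate the annotated classes found earlier (inside a method that declares throws Exception), assuming they have public no-argument constructors:

try (ScanResult result = new ClassGraph().enableClassInfo().enableAnnotationInfo()
  .whitelistPackages(getClass().getPackage().getName()).scan()) {

    for (ClassInfo classInfo : result.getClassesWithAnnotation(TestAnnotation.class.getName())) {
        // loadClass() delegates to Classgraph's own class loader when needed,
        // so the class isn't loaded a second time by a different loader
        Class<?> loadedClass = classInfo.loadClass();
        Object instance = loadedClass.getDeclaredConstructor().newInstance();
    }
}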

9. Conclusion

In this article, we learned how to effectively find classpath resources and inspect their contents with the Classgraph library.

As usual, the complete source code for this article is available over on GitHub.

Guide to QuarkusIO

1. Introduction

Today, it’s very common to write an application and deploy to the cloud and not worry about the infrastructure. Serverless and FaaS have become very popular. In this type of environment, where instances are created and destroyed frequently, the time to boot and time to first request are extremely important, as they can create a completely different user experience.

Languages such as JavaScript and Python are always in the spotlight in this type of scenario. In other words, Java, with its fat JARs and long booting time, was never a top contender.

In this tutorial, we’ll present Quarkus and discuss if it’s an alternative for bringing Java more effectively to the cloud.

2. QuarkusIO

QuarkusIO, the Supersonic Subatomic Java, promises to deliver small artifacts, extremely fast boot time, and lower time-to-first-request. When combined with GraalVM, Quarkus will compile ahead-of-time (AOT).

And, since Quarkus is built on top of standards, we don’t need to learn anything new. Consequently, we can use CDI and JAX-RS, among others. Also, Quarkus has a lot of extensions, including ones that support Hibernate, Kafka, OpenShift, Kubernetes, and Vert.x.

3. Our First Application

The easiest way to create a new Quarkus project is to open a terminal and type:

mvn io.quarkus:quarkus-maven-plugin:0.13.1:create \
    -DprojectGroupId=com.baeldung.quarkus \
    -DprojectArtifactId=quarkus-project \
    -DclassName="com.baeldung.quarkus.HelloResource" \
    -Dpath="/hello"

This will generate the project skeleton, a HelloResource with a /hello endpoint exposed, configuration, Maven project, and Dockerfiles.

Once imported into our IDE, we’ll see the generated project structure.

Let’s examine the content of the HelloResource class:

@Path("/hello")
public class HelloResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        return "hello";
    }
}

Everything looks good so far. At this point, we have a simple application with a single RESTEasy JAX-RS endpoint. Let’s go ahead and test it by opening a terminal and running the command:

./mvnw compile quarkus:dev

Our REST endpoint should be exposed at localhost:8080/hello. Let’s test it with the curl command:

$ curl localhost:8080/hello
hello

4. Hot Reload

When running in development mode (./mvnw compile quarkus:dev), Quarkus provides a hot-reload capability. In other words, changes made to Java files or to configuration files will automatically be compiled once the browser is refreshed. The most impressive feature here is that we don’t need to save our files. This could be good or bad, depending on our preference.

We’ll now modify our example to demonstrate the hot-reload capability. If the application is stopped, we can simply restart it in dev mode. We’ll use the same example as before as our starting point.

First, we’ll create a HelloService class:

@ApplicationScoped
public class HelloService {
    public String politeHello(String name){
        return "Hello Mr/Mrs " + name;
    }
}

Now, we’ll modify the HelloResource class, injecting the HelloService and adding a new method:

@Inject
HelloService helloService;

@GET
@Produces(MediaType.APPLICATION_JSON)
@Path("/polite/{name}")
public String greeting(@PathParam("name") String name) {
    return helloService.politeHello(name);
}

Next, let’s test our new endpoint:

$ curl localhost:8080/hello/polite/Baeldung
Hello Mr/Mrs Baeldung

We’ll make one more change to demonstrate that the same can be applied to property files. Let’s edit the application.properties file and add one more key:

greeting=Good morning

After that, we’ll modify the HelloService to use our new property:

@ConfigProperty(name = "greeting")
private String greeting;

public String politeHello(String name){
    return greeting + " " + name;
}

If we execute the same curl command, we should now see:

Good morning Baeldung

We can easily package the application by running:

./mvnw package

This will generate 2 jar files inside the target directory:

  • quarkus-project-1.0-SNAPSHOT-runner.jar — an executable jar with the dependencies copied to target/lib
  • quarkus-project-1.0-SNAPSHOT.jar — contains classes and resource files

We can now run the packaged application:

java -jar target/quarkus-project-1.0-SNAPSHOT-runner.jar

5. Native Image

Next, we’ll produce a native image of our application. A native image will improve start-up time and time to first response, as it contains everything it needs to run, including the minimal JVM necessary to run the application.

To start with, we need to have GraalVM installed and the GRAALVM_HOME environment variable configured.
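On a Unix-like shell, that configuration might look like this, with the installation path being an assumption:

export GRAALVM_HOME=/path/to/graalvm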

We’ll now stop the application (Ctrl + C), if not stopped already, and run the command:

./mvnw package -Pnative

This can take a few seconds to complete. Because the native image build compiles all the code ahead-of-time so that the application boots faster, we end up with longer build times.

We can run ./mvnw verify -Pnative to verify that our native artifact was properly constructed.

Secondly, we’ll create a container image using our native executable. For that, we must have a container runtime (e.g., Docker) running on our machine. Let’s open up a terminal window and execute:

./mvnw package -Pnative -Dnative-image.docker-build=true

This will create a Linux 64-bit executable, so if we’re using a different OS, we might not be able to run it directly. That’s okay for now.

The project generation created a Dockerfile.native for us:

FROM registry.fedoraproject.org/fedora-minimal
WORKDIR /work/
COPY target/*-runner /work/application
RUN chmod 775 /work
EXPOSE 8080
CMD ["./application", "-Dquarkus.http.host=0.0.0.0"]

If we examine the file, we have a hint at what comes next. First, we’ll create a docker image:

docker build -f src/main/docker/Dockerfile.native -t quarkus/quarkus-project .

Now, we can run the container using:

docker run -i --rm -p 8080:8080 quarkus/quarkus-project


The container started in an incredibly low time of 0.009s. That’s one of the strengths of Quarkus.

Finally, we should test our modified REST endpoint to validate our application:

$ curl localhost:8080/hello/polite/Baeldung
Good morning Baeldung

6. Deploying to OpenShift

Once we’re done testing locally using Docker, we’ll deploy our container to OpenShift. Assuming we have the Docker image on our registry, we can deploy the application following the steps below:

oc new-build --binary --name=quarkus-project -l app=quarkus-project
oc patch bc/quarkus-project -p '{"spec":{"strategy":{"dockerStrategy":{"dockerfilePath":"src/main/docker/Dockerfile.native"}}}}'
oc start-build quarkus-project --from-dir=. --follow
oc new-app --image-stream=quarkus-project:latest
oc expose service quarkus-project

Now, we can get the application URL by running:

oc get route

Lastly, we’ll access the same endpoint (note that the URL might be different, depending on our IP address):

$ curl http://quarkus-project-myproject.192.168.64.2.nip.io/hello/polite/Baeldung
Good morning Baeldung

7. Conclusion

In this article, we demonstrated that Quarkus is a great addition that can bring Java more effectively to the cloud. For example, it’s possible now to imagine Java on AWS Lambda. Also, Quarkus is based on standards such as JPA and JAX-RS. Therefore, we don’t need to learn anything new.

Quarkus has caught a lot of attention lately, and lots of new features are being added every day. There are several quickstart projects for us to try Quarkus at the Quarkus GitHub repository.

As always, the code for this article is available on GitHub. Happy coding!

Void Type in Java

1. Overview

As Java developers, we might have encountered the Void type at some point and wondered what its purpose was.

In this quick tutorial, we’ll learn about this peculiar class and see when and how to use it as well as how to avoid using it when possible.

2. What’s the Void Type

Since JDK 1.1, Java provides us with the Void type. Its purpose is simply to represent the void return type as a class and contain a Class<Void> public value. It’s not instantiable as its only constructor is private.

Therefore, the only value we can assign to a Void variable is null. It may seem a little bit useless, but we’ll now see when and how to use this type.
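To make that concrete, this is essentially everything the type offers us:

Void noResult = null;               // null is the only value we can assign
Class<Void> voidClass = Void.TYPE;  // the public Class<Void> value mentioned above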

3. Usages

There are some situations when using the Void type can be interesting.

3.1. Reflection

First, we could use it when doing reflection. Indeed, the return type of any void method will match the Void.TYPE variable that holds the Class<Void> value mentioned earlier.

Let’s imagine a simple Calculator class:

public class Calculator {
    private int result = 0;

    public int add(int number) {
        return result += number;
    }

    public int sub(int number) {
        return result -= number;
    }

    public void clear() {
        result = 0;
    }

    public void print() {
        System.out.println(result);
    }
}

Some methods are returning an integer, some are not returning anything. Now, let’s say we have to retrieve, by reflection, all methods that don’t return any result. We’ll achieve this by using the Void.TYPE variable:

@Test
void givenCalculator_whenGettingVoidMethodsByReflection_thenOnlyClearAndPrint() {
    Method[] calculatorMethods = Calculator.class.getDeclaredMethods();
    List<Method> calculatorVoidMethods = Arrays.stream(calculatorMethods)
      .filter(method -> method.getReturnType().equals(Void.TYPE))
      .collect(Collectors.toList());

    assertThat(calculatorVoidMethods)
      .allMatch(method -> Arrays.asList("clear", "print").contains(method.getName()));
}

As we can see, only the clear() and print() methods have been retrieved.

3.2. Generics

Another usage of the Void type is with generic classes. Let’s suppose we are calling a method which requires a Callable parameter:

public class Defer {
    public static <V> V defer(Callable<V> callable) throws Exception {
        return callable.call();
    }
}

But, the Callable we want to pass doesn’t have to return anything. Therefore, we can pass a Callable<Void>:

@Test
void givenVoidCallable_whenDiffer_thenReturnNull() throws Exception {
    Callable<Void> callable = new Callable<Void>() {
        @Override
        public Void call() {
            System.out.println("Hello!");
            return null;
        }
    };

    assertThat(Defer.defer(callable)).isNull();
}

We could either have used an arbitrary type (e.g. Callable<Integer>) and returned null, or used no type at all (a raw Callable), but using Void states our intentions clearly.
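Since Callable is a functional interface, the same Callable can also be written more compactly as a lambda:

Callable<Void> callable = () -> {
    System.out.println("Hello!");
    return null;
};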

We can apply the same idea to other functional interfaces. Let’s imagine a method requiring a Function, but we want to use a Function that doesn’t return anything. Then we just have to make it return Void:

public static <T, R> R defer(Function<T, R> function, T arg) {
    return function.apply(arg);
}

@Test
void givenVoidFunction_whenDiffer_thenReturnNull() {
    Function<String, Void> function = s -> {
        System.out.println("Hello " + s + "!");
        return null;
    };

    assertThat(Defer.defer(function, "World")).isNull();
}

4. How to Avoid Using It?

Now, we’ve seen some usages of the Void type. However, even if the first usage is totally fine, we might want to avoid using Void in generics if possible. Indeed, encountering a return type that represents the absence of a result and can only contain null can be cumbersome.

We’ll now see how to avoid these situations. First, let’s consider our method with the Callable parameter. In order to avoid using a Callable<Void>, we might offer another method taking a Runnable parameter instead:

public static void defer(Runnable runnable) {
    runnable.run();
}

So, we can pass it a Runnable which doesn’t return any value and thus get rid of the useless return null:

Runnable runnable = new Runnable() {
    @Override
    public void run() {
        System.out.println("Hello!");
    }
};

Defer.defer(runnable);

But then, what if the Defer class is not ours to modify? Then we can either stick to the Callable<Void> option or create another class taking a Runnable and deferring the call to the Defer class:

public class MyOwnDefer {
    public static void defer(Runnable runnable) throws Exception {
        Defer.defer(new Callable<Void>() {
            @Override
            public Void call() {
                runnable.run();
                return null;
            }
        });
    }
}

By doing that, we encapsulate the cumbersome part once and for all in our own method, allowing future developers to use a simpler API.

Of course, the same can be achieved for Function. In our example, the Function doesn’t return anything, thus we can provide another method taking a Consumer instead:

public static <T> void defer(Consumer<T> consumer, T arg) {
    consumer.accept(arg);
}

Then, what if our function doesn’t take any parameter? We can either use a Runnable or create our own functional interface (if that seems clearer):

public interface Action {
    void execute();
}

Then, we overload the defer() method again:

public static void defer(Action action) {
    action.execute();
}

Action action = () -> System.out.println("Hello!");

Defer.defer(action);

5. Conclusion

In this short article, we covered the Java Void class. We saw what was its purpose and how to use it. We also learned some alternatives to its usage.

As usual, the full code of this article can be found on our GitHub.

Skipping Tests with Maven

1. Introduction

Skipping tests is often a bad idea. However, there are some situations where it could be useful — maybe when we’re developing new code and want to run intermediate builds in which the tests are not passing or compiling. Only in these kinds of situations might we skip the tests to avoid the overhead of compiling and running them. However, consider that not running tests can lead to bad coding practices.

In this quick tutorial, we’ll explore all the possible commands and options to skip tests using Maven.

2. Maven Lifecycle

Before getting into the details of how to skip tests, we must understand when tests are compiled or run. In the article about Maven goals and phases, we go deeper into the concept of the Maven lifecycle, but for the purpose of this article, it’s important to know that Maven can:

  1. Ignore tests
  2. Compile tests
  3. Run tests

In our examples, we’ll use the package phase, which includes compiling and running the tests. The options explored throughout this tutorial belong to the Maven Surefire Plugin.

3. Using Command Line Flags

3.1. Skipping the Test Compilation

First, let’s look at an example of a test that doesn’t compile:

@Test
public void thisDoesntCompile() {
    baeldung;
}

When we run the command-line command:

mvn package

We’ll get an error:

[INFO] -------------------------------------------------------------
[ERROR] COMPILATION ERROR :
[INFO] -------------------------------------------------------------
[ERROR] /Users/baeldung/skip-tests/src/test/java/com/antmordel/skiptests/PowServiceTest.java:[11,9] not a statement
[INFO] 1 error

Therefore, let’s explore how to skip the compilation phase for the test’s sources. In Maven, we can use the maven.test.skip flag:

mvn -Dmaven.test.skip package

As a result, the test sources are not compiled and, therefore, are not executed.

3.2. Skipping the Test Execution

As a second option, let’s see how we can compile the test folder but skip the run process. This is useful for those cases where we’re not changing the signature of the methods or classes, but we’ve changed the business logic and broken the tests as a result. Let’s consider a contrived test case like the one below, which will always fail:

@Test
public void thisTestFails() {
    fail("This is a failed test case");
}

Since we included the fail() statement, if we run the package phase, the build will fail with the error:

[ERROR] Failures:
[ERROR]   PowServiceTest.thisTestFails:16 This is a failed test case
[INFO]
[ERROR] Tests run: 2, Failures: 1, Errors: 0, Skipped: 0

Let’s imagine we want to skip running the tests but still compile them. In this case, we can use the -DskipTests flag:

mvn -DskipTests package

and the package phase will succeed.

Finally, it’s worth mentioning that the now-deprecated flag -Dmaven.test.skip.exec will also compile the test classes but will not run them.
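For reference, that flag follows the same command-line pattern as the others:

mvn -Dmaven.test.skip.exec=true package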

4. Using Maven Configuration

In the case that we need to exclude compiling or running the tests for a longer period of time, we can modify the pom.xml file in order to include the proper configuration.

4.1. Skipping the Test Compilation

As we did in the previous section, let’s examine how we can avoid compiling the test folder. In this case, we’ll use the pom.xml file. Let’s add the following property:

<properties>
    <maven.test.skip>true</maven.test.skip>
</properties>

Keep in mind that we can override that value by adding the opposite flag in the command line:

mvn -Dmaven.test.skip=false package

4.2. Skipping the Test Execution

Again, as a second step, let’s explore how we can build the test folder but skip the test execution using the Maven configuration. In order to do that, we have to configure the Maven Surefire Plugin with a property:

<properties>
    <tests.skip>true</tests.skip>
</properties>

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.22.1</version>
    <configuration>
        <skipTests>${tests.skip}</skipTests>
    </configuration>
</plugin>

The Maven property tests.skip is a custom property that we previously defined. Therefore, we can override it if we want to execute the tests:

mvn -Dtests.skip=false package

5. Conclusion

In this quick tutorial, we’ve explored all the alternatives that Maven offers in order to skip compiling and/or running the tests. We went through the Maven command line options and the Maven configuration options.

Create a Java Command Line Program with Picocli

1. Introduction

In this tutorial, we’ll take a look at the picocli library, which allows us to easily create command line programs in Java.

We’ll first get started by creating a Hello World command. We’ll then take a deep dive into the key features of the library by reproducing, partially, the git command.

2. Hello World Command

Let’s begin with something easy: a Hello World command!

First things first, we need to add the dependency to the picocli project:

<dependency>
    <groupId>info.picocli</groupId>
    <artifactId>picocli</artifactId>
    <version>3.9.6</version>
</dependency>

As we can see, we’ll use the 3.9.6 version of the library, though a 4.0.0 version is under construction (currently available in alpha test).

Now that the dependency is set up, let’s create our Hello World command. In order to do that, we’ll use the @Command annotation from the library:

@Command(
  name = "hello",
  description = "Says hello"
)
public class HelloWorldCommand {
}

As we can see, the annotation can take parameters. We’re only using two of them here. Their purpose is to provide information about the current command and text for the automatic help message.
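To see what that help text looks like, we could ask picocli to print it (from any main method); here’s a quick sketch:

CommandLine.usage(new HelloWorldCommand(), System.out);

This prints a usage message built from the name and description we declared.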

At the moment, there’s not much we can do with this command. To make it do something, we need to add a main method calling the convenience CommandLine.run(Runnable, String[]) method. This takes two parameters: an instance of our command, which thus has to implement the Runnable interface, and a String array representing the command arguments (options, parameters, and subcommands):

public class HelloWorldCommand implements Runnable {
    public static void main(String[] args) {
        CommandLine.run(new HelloWorldCommand(), args);
    }

    @Override
    public void run() {
        System.out.println("Hello World!");
    }
}

Now, when we run the main method, we’ll see that the console outputs “Hello World!”

When packaged to a jar, we can run our Hello World command using the java command:

java -cp "pathToPicocliJar;pathToCommandJar" com.baeldung.picoli.helloworld.HelloWorldCommand

Unsurprisingly, that also outputs the “Hello World!” string to the console.

3. A Concrete Use Case

Now that we’ve seen the basics, we’ll deep dive into the picocli library. In order to do that, we’re going to reproduce, partially, a popular command: git.

Of course, the purpose won’t be to implement the git command behavior but to reproduce the possibilities of the git command — which subcommands exist and which options are available for a peculiar subcommand.

First, we have to create a GitCommand class as we did for our Hello World command:

@Command
public class GitCommand implements Runnable {
    public static void main(String[] args) {
        CommandLine.run(new GitCommand(), args);
    }

    @Override
    public void run() {
        System.out.println("The popular git command");
    }
}

4. Adding Subcommands

The git command offers a lot of subcommands: add, commit, remote, and many more. We’ll focus here on add and commit.

So, our goal here will be to declare those two subcommands to the main command. Picocli offers three ways to achieve this.

4.1. Using the @Command Annotation on Classes

The @Command annotation offers the possibility to register subcommands through the subcommands parameter:

@Command(
  subcommands = {
      GitAddCommand.class,
      GitCommitCommand.class
  }
)

In our case, we add two new classes: GitAddCommand and GitCommitCommand. Both are annotated with @Command and implement Runnable. It’s important to give them a name, as the names will be used by picocli to recognize which subcommand(s) to execute:

@Command(
  name = "add"
)
public class GitAddCommand implements Runnable {
    @Override
    public void run() {
        System.out.println("Adding some files to the staging area");
    }
}

@Command(
  name = "commit"
)
public class GitCommitCommand implements Runnable {
    @Override
    public void run() {
        System.out.println("Committing files in the staging area, how wonderful?");
    }
}

Thus, if we run our main command with add as an argument, the console will output “Adding some files to the staging area”.

4.2. Using the @Command Annotation on Methods

Another way to declare subcommands is to create @Command-annotated methods representing those commands in the GitCommand class:

@Command(name = "add")
public void addCommand() {
    System.out.println("Adding some files to the staging area");
}

@Command(name = "commit")
public void commitCommand() {
    System.out.println("Committing files in the staging area, how wonderful?");
}

That way, we can directly implement our business logic into the methods and not create separate classes to handle it.

4.3. Adding Subcommands Programmatically

Finally, picocli offers us the possibility to register our subcommands programmatically. This one’s a bit trickier, as we have to create a CommandLine object wrapping our command and then add the subcommands to it:

CommandLine commandLine = new CommandLine(new GitCommand());
commandLine.addSubcommand("add", new GitAddCommand());
commandLine.addSubcommand("commit", new GitCommitCommand());

After that, we still have to run our command, but we can’t make use of the CommandLine.run() method anymore. Now, we have to call the parseWithHandler() method on our newly created CommandLine object:

commandLine.parseWithHandler(new RunLast(), args);

We should note the use of the RunLast class, which tells picocli to run the most specific subcommand. There are two other command handlers provided by picocli: RunFirst and RunAll. The former runs the topmost command, while the latter runs all of them.

When using the convenience method CommandLine.run(), the RunLast handler is used by default.
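For comparison, if we wanted the parent command to execute as well, a sketch with RunAll would be:

// runs GitCommand itself, then the matched subcommand
commandLine.parseWithHandler(new RunAll(), args);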

5. Managing Options Using the @Option Annotation

5.1. Option with No Argument

Let’s now see how to add some options to our commands. Indeed, we would like to tell our add command that it should add all modified files. To achieve that, we’ll add a field annotated with the @Option annotation to our GitAddCommand class:

@Option(names = {"-A", "--all"})
private boolean allFiles;

@Override
public void run() {
    if (allFiles) {
        System.out.println("Adding all files to the staging area");
    } else {
        System.out.println("Adding some files to the staging area");
    }
}

As we can see, the annotation takes a names parameter, which gives the different names of the option. Therefore, calling the add command with either -A or --all will set the allFiles field to true. So, if we run the command with the option, the console will show “Adding all files to the staging area”.
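To illustrate, passing the arguments to our main method, both spellings of the option behave identically:

add -A
Adding all files to the staging area

add --all
Adding all files to the staging area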

5.2. Option with an Argument

As we just saw, for options without arguments, their presence or absence is always evaluated to a boolean value.

However, it’s possible to register options that take arguments. We can do this simply by declaring our field to be of a different type. Let’s add a message option to our commit command:

@Option(names = {"-m", "--message"})
private String message;

@Override
public void run() {
    System.out.println("Committing files in the staging area, how wonderful?");
    if (message != null) {
        System.out.println("The commit message is " + message);
    }
}

Unsurprisingly, when given the message option, the command will show the commit message on the console. Later in the article, we’ll cover which types are handled by the library and how to handle other types.

5.3. Option with Multiple Arguments

But now, what if we want our command to take multiple messages, as the real git commit command does? No worries: let’s make our field an array or a Collection, and we’re pretty much done:

@Option(names = {"-m", "--message"})
private String[] messages;

@Override
public void run() {
    System.out.println("Committing files in the staging area, how wonderful?");
    if (messages != null) {
        System.out.println("The commit message is");
        for (String message : messages) {
            System.out.println(message);
        }
    }
}

Now, we can use the message option multiple times:

commit -m "My commit is great" -m "My commit is beautiful"

However, we might also want to give the option only once and separate the different values with a regex delimiter. For that, we can use the split parameter of the @Option annotation:

@Option(names = {"-m", "--message"}, split = ",")
private String[] messages;

Now, we can pass -m "My commit is great","My commit is beautiful" to achieve the same result as above.

5.4. Required Option

Sometimes, we might have an option that is required. The required parameter of the @Option annotation, which defaults to false, allows us to do that:

@Option(names = {"-m", "--message"}, required = true)
private String[] messages;

Now it’s impossible to call the commit command without specifying the message option. If we try to do that, picocli will print an error:

Missing required option '--message=<messages>'
Usage: git commit -m=<messages> [-m=<messages>]...
  -m, --message=<messages>

6. Managing Positional Parameters

6.1. Capture Positional Parameters

Now, let’s focus on our add command because it’s not very powerful yet. We can only decide to add all files, but what if we wanted to add specific files?

We could use another option to do that, but a better choice here would be to use positional parameters. Indeed, positional parameters are meant to capture command arguments that occupy specific positions and are neither subcommands nor options.

In our example, this would enable us to do something like:

add file1 file2

In order to capture positional parameters, we’ll make use of the @Parameters annotation:

@Parameters
private List<Path> files;

@Override
public void run() {
    if (allFiles) {
        System.out.println("Adding all files to the staging area");
    }

    if (files != null) {
        files.forEach(path -> System.out.println("Adding " + path + " to the staging area"));
    }
}

Now, our command from earlier would print:

Adding file1 to the staging area
Adding file2 to the staging area

6.2. Capture a Subset of Positional Parameters

It’s possible to be more fine-grained about which positional parameters to capture, thanks to the index parameter of the annotation. The index is zero-based. Thus, if we define:

@Parameters(index="2..*")

This would capture arguments that don’t match options or subcommands, from the third one to the end.

The index can be either a range or a single number, representing a single position.
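As a sketch, a single position and an open-ended range can be combined in the same command (the field names are illustrative):

@Parameters(index = "0")
private String first;

@Parameters(index = "1..*")
private List<String> remaining;

Here, first captures the first positional parameter, while remaining captures all the others.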

7. A Word About Type Conversion

As we’ve seen earlier in this tutorial, picocli handles some type conversion by itself. For example, it maps multiple values to arrays or Collections, but it can also map arguments to specific types like when we use the Path class for the add command.

As a matter of fact, picocli comes with a bunch of pre-handled types. This means we can use those types directly without having to think about converting them ourselves.

However, we might need to map our command arguments to types other than those that are already handled. Fortunately for us, this is possible thanks to the ITypeConverter interface and the CommandLine#registerConverter method, which associates a type to a converter.

Let’s imagine we want to add the config subcommand to our git command, but we don’t want users to change a configuration element that doesn’t exist. So, we decide to map those elements to an enum:

public enum ConfigElement {
    USERNAME("user.name"),
    EMAIL("user.email");

    private final String value;

    ConfigElement(String value) {
        this.value = value;
    }

    public String value() {
        return value;
    }

    public static ConfigElement from(String value) {
        return Arrays.stream(values())
          .filter(element -> element.value.equals(value))
          .findFirst()
          .orElseThrow(() -> new IllegalArgumentException("The argument " 
          + value + " doesn't match any ConfigElement"));
    }
}

Plus, in our newly created GitConfigCommand class, let’s add two positional parameters:

@Parameters(index = "0")
private ConfigElement element;

@Parameters(index = "1")
private String value;

@Override
public void run() {
    System.out.println("Setting " + element.value() + " to " + value);
}

This way, we make sure that users won’t be able to change non-existent configuration elements.

Finally, we have to register our converter. What’s beautiful is that, if using Java 8 or higher, we don’t even have to create a class implementing the ITypeConverter interface. We can just pass a lambda or method reference to the registerConverter() method:

CommandLine commandLine = new CommandLine(new GitCommand());
commandLine.registerConverter(ConfigElement.class, ConfigElement::from);

commandLine.parseWithHandler(new RunLast(), args);

This happens in the GitCommand main() method. Note that we had to let go of the convenience CommandLine.run() method.

When used with an unhandled configuration element, the command would show the help message plus a piece of information telling us that it wasn’t possible to convert the parameter to a ConfigElement:

Invalid value for positional parameter at index 0 (<element>): 
cannot convert 'user.phone' to ConfigElement 
(java.lang.IllegalArgumentException: The argument user.phone doesn't match any ConfigElement)
Usage: git config <element> <value>
      <element>
      <value>

8. Integrating with Spring Boot

Finally, let’s see how to Springify all that!

Indeed, we might be working within a Spring Boot environment and want to benefit from it in our command-line program. In order to do that, we must create a SpringBootApplication implementing the CommandLineRunner interface:

@SpringBootApplication
public class Application implements CommandLineRunner {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    @Override
    public void run(String... args) {
    }
}

Plus, let’s annotate all our commands and subcommands with the Spring @Component annotation and autowire all that in our Application:

private GitCommand gitCommand;
private GitAddCommand addCommand;
private GitCommitCommand commitCommand;
private GitConfigCommand configCommand;

public Application(GitCommand gitCommand, GitAddCommand addCommand, 
  GitCommitCommand commitCommand, GitConfigCommand configCommand) {
    this.gitCommand = gitCommand;
    this.addCommand = addCommand;
    this.commitCommand = commitCommand;
    this.configCommand = configCommand;
}

Note that we had to autowire every subcommand. Unfortunately, this is because, for now, picocli is not yet able to retrieve subcommands from the Spring context when they are declared with annotations. Thus, we’ll have to do that wiring ourselves, in a programmatic way:

@Override
public void run(String... args) {
    CommandLine commandLine = new CommandLine(gitCommand);
    commandLine.addSubcommand("add", addCommand);
    commandLine.addSubcommand("commit", commitCommand);
    commandLine.addSubcommand("config", configCommand);

    commandLine.parseWithHandler(new CommandLine.RunLast(), args);
}

And now, our command line program works like a charm with Spring components. Therefore, we could create some service classes and use them in our commands, and let Spring take care of the dependency injection.
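As a sketch of that idea, a subcommand could delegate its work to a hypothetical Spring-managed service (StagingAreaService and its addFiles() method are our own invention):

@Component
@Command(name = "add")
public class GitAddCommand implements Runnable {

    // hypothetical service doing the actual work, injected by Spring
    private final StagingAreaService stagingAreaService;

    public GitAddCommand(StagingAreaService stagingAreaService) {
        this.stagingAreaService = stagingAreaService;
    }

    @Override
    public void run() {
        stagingAreaService.addFiles();
    }
}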

9. Conclusion

In this article, we’ve seen some key features of the picocli library. We’ve learned how to create a new command and add some subcommands to it. We’ve seen many ways to deal with options and positional parameters. Plus, we’ve learned how to implement our own type converters to make our commands strongly typed. Finally, we’ve seen how to bring Spring Boot into our commands.

Of course, there are many more things to discover about it. The library provides complete documentation.

As for the full code of this article, it can be found on our GitHub.


Defining JPA Entities


1. Introduction

In this tutorial, we’ll learn about the basics of entities along with various annotations that define and customize an entity in JPA.

2. Entity

Entities in JPA are nothing but POJOs representing data that can be persisted to the database. An entity represents a table stored in a database. Every instance of an entity represents a row in the table.

2.1. The Entity Annotation

Let’s say we have a POJO called Student, which represents the data of a student, and we would like to store it in the database:

public class Student {
    
    // fields, getters and setters
    
}

In order to do this, we should define an entity so that JPA is aware of it.

So let’s define it by making use of the @Entity annotation. We must specify this annotation at the class level. We must also ensure that the entity has a no-arg constructor and a primary key: 

@Entity
public class Student {
    
    // fields, getters and setters
    
}

The entity name defaults to the name of the class. We can change it using the name element:

@Entity(name="student")
public class Student {
    
    // fields, getters and setters
    
}

Because various JPA implementations will try subclassing our entity in order to provide their functionality, entity classes must not be declared final.

2.2. The Id Annotation

Each JPA entity must have a primary key which uniquely identifies it. The @Id annotation defines the primary key. We can generate the identifiers in different ways which are specified by the @GeneratedValue annotation.

We can choose from four id generation strategies with the strategy element. The value can be AUTO, TABLE, SEQUENCE, or IDENTITY.

@Entity
public class Student {
    @Id
    @GeneratedValue(strategy=GenerationType.AUTO)
    private Long id;
    
    private String name;
    
    // getters and setters
}

If we specify GenerationType.AUTO, the JPA provider will use any strategy it wants to generate the identifiers.

If we annotate the entity’s fields, the JPA provider will use these fields to get and set the entity’s state. In addition to Field Access, we can also do Property Access or Mixed Access, which enables us to use both Field and Property access in the same entity.
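For instance, a minimal sketch of Property Access, where the mapping annotations sit on the getter instead of the field:

@Entity
public class Student {

    private Long id;

    // with property access, JPA reads and writes the state through the getter/setter pair
    @Id
    @GeneratedValue(strategy=GenerationType.AUTO)
    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    // other fields, getters and setters
}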

2.3. The Table Annotation

In most cases, the name of the table in the database and the name of the entity will not be the same.

In these cases, we can specify the table name using the @Table annotation:

@Entity
@Table(name="STUDENT")
public class Student {
    
    // fields, getters and setters
    
}

We can also specify the schema using the schema element:

@Entity
@Table(name="STUDENT", schema="SCHOOL")
public class Student {
    
    // fields, getters and setters
    
}

The schema name helps us distinguish one set of tables from another.

If we do not use the @Table annotation, the name of the entity will be considered the name of the table.

2.4. The Column Annotation

Just like the @Table annotation, we can use the @Column annotation to specify the details of a column in the table.

The @Column annotation has many elements such as name, length, nullable, and unique:

@Entity
@Table(name="STUDENT")
public class Student {
    @Id
    @GeneratedValue(strategy=GenerationType.AUTO)
    private Long id;
    
    @Column(name="STUDENT_NAME", length=50, nullable=false, unique=false)
    private String name;
    
    // other fields, getters and setters
}

The name element specifies the name of the column in the table. The length element specifies its length. The nullable element specifies whether the column is nullable or not, and the unique element specifies whether the column is unique.

If we don’t specify this annotation, the name of the field will be considered the name of the column in the table.

2.5. The Transient Annotation

Sometimes, we may want to make a field non-persistent. We can use the @Transient annotation to do so. It specifies that the field will not be persisted.

For instance, we can calculate the age of a student from the date of birth.

So let’s annotate the field age with the @Transient annotation:

@Entity
@Table(name="STUDENT")
public class Student {
    @Id
    @GeneratedValue(strategy=GenerationType.AUTO)
    private Long id;
    
    @Column(name="STUDENT_NAME", length=50, nullable=false)
    private String name;
    
    @Transient
    private Integer age;
    
    // other fields, getters and setters
}

As a result, the field age will not be persisted to the table.

2.6. The Temporal Annotation

In some cases, we may have to save temporal values in our table.

For this, we have the @Temporal annotation:

@Entity
@Table(name="STUDENT")
public class Student {
    @Id
    @GeneratedValue(strategy=GenerationType.AUTO)
    private Long id;
    
    @Column(name="STUDENT_NAME", length=50, nullable=false, unique=false)
    private String name;
    
    @Transient
    private Integer age;
    
    @Temporal(TemporalType.DATE)
    private Date birthDate;
    
    // other fields, getters and setters
}

However, with JPA 2.2, we also have support for java.time.LocalDate, java.time.LocalTime, java.time.LocalDateTime, java.time.OffsetTime and java.time.OffsetDateTime.
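For instance, on JPA 2.2 we could map the birth date directly as a LocalDate, with no @Temporal annotation needed (a sketch):

// with JPA 2.2, java.time types are mapped without @Temporal
private LocalDate birthDate;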

2.7. The Enumerated Annotation

Sometimes, we may want to persist a Java enum type.

We can use the @Enumerated annotation to specify whether the enum should be persisted by name or by ordinal (the default):

public enum Gender {
    MALE, 
    FEMALE
}

@Entity
@Table(name="STUDENT")
public class Student {
    @Id
    @GeneratedValue(strategy=GenerationType.AUTO)
    private Long id;
    
    @Column(name="STUDENT_NAME", length=50, nullable=false, unique=false)
    private String name;
    
    @Transient
    private Integer age;
    
    @Temporal(TemporalType.DATE)
    private Date birthDate;
    
    @Enumerated(EnumType.STRING)
    private Gender gender;
    
    // other fields, getters and setters
}

Actually, we don’t have to specify the @Enumerated annotation at all if we are going to persist the Gender by the enum‘s ordinal.

However, to persist the Gender by enum name, we’ve configured the annotation with EnumType.STRING.

3. Conclusion

In this article, we learned what JPA entities are and how to create them. We also learned about the different annotations that can be used to customize the entity further.

The complete code for this article can be found over on GitHub.

Tagging and Filtering JUnit Tests


1. Overview

It’s very common to execute all our JUnits automatically as a part of the CI build using Maven. This, however, is often time-consuming.

Therefore, we often want to filter our tests and execute either unit tests or integration tests or both at various stages of the build process.

In this tutorial, we’ll explore a few techniques for filtering test cases with JUnit5. In the following sections, we’ll also look at the filtering mechanisms that existed before JUnit5.

2. JUnit5 Tags

2.1. Annotating JUnits with Tag

With JUnit5 we can filter JUnits by tagging a subset of them under a unique tag name. For example, suppose we have both unit tests and integration tests implemented using JUnit5. We can add tags on both sets of test cases:

@Test
@Tag("IntegrationTest")
public void testAddEmployeeUsingSimpelJdbcInsert() {
}

@Test
@Tag("UnitTest")
public void givenNumberOfEmployeeWhenCountEmployeeThenCountMatch() {
}

From then on, we can execute all JUnits under a particular tag name separately. We can also tag a class instead of its methods, thereby including all tests in the class under that tag.

In the next few sections, we’ll see various ways of filtering and executing the tagged JUnits.

2.2. Filtering Tags with Test Suite

JUnit5 allows us to implement test suites through which we can execute tagged test cases:

@RunWith(JUnitPlatform.class)
@SelectPackages("com.baeldung.tags")
@IncludeTags("UnitTest")
public class EmployeeDAOUnitTestSuite {
}

Now, if we run this suite, all JUnits under the tag UnitTest would be executed. Similarly, we can exclude JUnits with the ExcludeTags annotation.

2.3. Filtering Tags with Maven Surefire Plugin

For filtering JUnits within the various phases of the Maven build, we can use the Maven Surefire plugin. The Surefire plugin allows us to include or exclude the tags in the plugin configuration:

<plugin>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.20.1</version>
    <configuration>
        <groups>UnitTest</groups>
    </configuration>
</plugin>

If we now execute this plugin, it will execute all JUnits which are tagged as UnitTest. Similarly, we can exclude test cases under a tag name:

<excludedGroups>IntegrationTest</excludedGroups>

2.4. Filtering Tags with an IDE

IDEs now allow filtering the JUnits by tags. This way we can execute a specific set of tagged JUnits directly from our IDE.

IntelliJ allows such filtering through a custom Run/Debug Configuration:

JUnit5 Tags in IntelliJ

As shown in this image, we selected the Test Kind as tags and the tag to be executed in the Tag Expression.

JUnit5 allows various Tag Expressions which can be used to filter the tags. For example, to run everything but the integration tests, we could use !IntegrationTest as the Tag Expression. Or for executing both UnitTest and IntegrationTest, we can use UnitTest | IntegrationTest.

Similarly, Eclipse also allows including or excluding tags in the JUnit Run/Debug configurations:

JUnit5 Tags in Eclipse
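These tag expressions can also be used with the Maven Surefire plugin configuration we saw earlier; for instance, to run everything except the integration tests (a sketch, assuming a recent plugin version):

<groups>!IntegrationTest</groups>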

3. JUnit4 Categories

3.1. Categorizing JUnits

JUnit4 allows us to execute a subset of JUnit tests by adding them into different categories. As a result, we can execute the test cases in a particular category while excluding other categories.

We can create as many categories as we need by implementing marker interfaces, where the name of the marker interface represents the name of the category. For our example, we’ll implement two categories, UnitTest:

public interface UnitTest {
}

and IntegrationTest:

public interface IntegrationTest {
}

Now, we can categorize our JUnit by annotating it with Category annotation:

@Test
@Category(IntegrationTest.class)
public void testAddEmployeeUsingSimpelJdbcInsert() {
}

@Test
@Category(UnitTest.class)
public void givenNumberOfEmployeeWhenCountEmployeeThenCountMatch() {
}

In our example, we put the Category annotation on the test methods. Similarly, we can also add this annotation on the test class, thus adding all tests into one category.

3.2. Categories Runner

In order to execute JUnits in a category, we need to implement a test suite class:

@RunWith(Categories.class)
@IncludeCategory(UnitTest.class)
@SuiteClasses(EmployeeDAOCategoryIntegrationTest.class)
public class EmployeeDAOUnitTestSuite {
}

This test suite can be executed from an IDE and would execute all JUnits under the UnitTest category. Similarly, we can also exclude a category of JUnits in the suite:

@RunWith(Categories.class)
@ExcludeCategory(IntegrationTest.class)
@SuiteClasses(EmployeeDAOCategoryIntegrationTest.class)
public class EmployeeDAOUnitTestSuite {
}

3.3. Excluding or Including Categories in Maven

Finally, we can also include or exclude the categories of JUnit tests from the Maven build. Thus, we can execute different categories of JUnit tests in different Maven profiles.

We’ll use the Maven Surefire plugin for this:

<plugin>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.20.1</version>
    <configuration>
        <groups>com.baeldung.categories.UnitTest</groups>
    </configuration>
</plugin>

And similarly we can exclude a category from the Maven build:

<plugin>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.20.1</version>
    <configuration>
        <excludedGroups>com.baeldung.categories.IntegrationTest</excludedGroups>
    </configuration>
</plugin>

This is similar to the example we discussed in the previous section. The only difference is that we replaced the tag name with the fully qualified name of the Category implementation.

4. Filtering JUnits with Maven Surefire Plugin

Both of the approaches we’ve discussed have been implemented with the JUnit library. An implementation-agnostic way of filtering test cases is to follow a naming convention. For our example, we’ll use the UnitTest suffix for unit tests and the IntegrationTest suffix for integration tests.

Now we’ll use the Maven Surefire Plugin for executing either the unit tests or the integration tests:

<plugin>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.20.1</version>
    <configuration>
        <excludes>
            <exclude>**/*IntegrationTest.java</exclude>
        </excludes>
    </configuration>
</plugin>

The excludes tag here filters all integration tests and executes only the unit tests. Such a configuration would save a considerable amount of build time.

Furthermore, we can execute the Surefire plugin within various Maven profiles with different exclusions or inclusions.
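For instance, a sketch of a hypothetical profile (the profile id is our own invention) that skips the integration tests for faster local builds:

<profile>
    <id>fast-build</id>
    <build>
        <plugins>
            <plugin>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>2.20.1</version>
                <configuration>
                    <excludes>
                        <exclude>**/*IntegrationTest.java</exclude>
                    </excludes>
                </configuration>
            </plugin>
        </plugins>
    </build>
</profile>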

Although Surefire works well for filtering, it is recommended to use the Failsafe Plugin for executing integration tests in Maven.

5. Conclusion

In this article, we saw a way to tag and filter test cases with JUnit5. We used the Tag annotation and also saw various ways for filtering the JUnits with a specific tag through the IDE or in the build process using Maven.

We also discussed some of the filtering mechanisms before JUnit5.

All examples are available over on GitHub.

Mockito Strict Stubbing and The UnnecessaryStubbingException


1. Overview

In this quick tutorial, we’ll learn about the Mockito UnnecessaryStubbingException. This exception is one of the common exceptions we’ll likely encounter when using stubs incorrectly.

We’ll start by explaining the philosophy behind strict stubbing and why Mockito encourages its use by default. Next, we’ll take a look at exactly what this exception means and under what circumstances it can occur. To conclude, we’ll see an example of how we can suppress this exception in our tests.

To learn more about testing with Mockito, check out our comprehensive Mockito series.

2. Strict Stubbing

With version 1.x of Mockito, it was possible to configure and interact with mocks without any kind of restriction. This meant that, over time, tests would often become overcomplicated and at times harder to debug.

Since version 2.+, Mockito has been introducing new features that nudge the framework towards “strictness”. The main goals behind this are:

  • Detect unused stubs in the test code
  • Reduce test code duplication and unnecessary test code
  • Promote cleaner tests by removing ‘dead’ code
  • Help improve debuggability and productivity

Following these principles helps us create cleaner tests by eliminating unnecessary test code. They also help avoid copy-paste errors as well as other developer oversights.

To summarise, strict stubbing reports unnecessary stubs, detects stubbing argument mismatch and makes our tests more DRY (Don’t Repeat Yourself). This facilitates a clean and maintainable codebase.

2.1. Configuring Strict Stubs

Since Mockito 2.+, strict stubbing is used by default when initializing our mocks using either of:

  • MockitoJUnitRunner
  • MockitoJUnit.rule()
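For example, a minimal sketch using the runner (the class and field names are illustrative):

@RunWith(MockitoJUnitRunner.class)
public class StrictStubbingUnitTest {

    // mocks initialized by the runner are validated for unnecessary stubbings
    @Mock
    private List<String> mockList;
}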

Mockito strongly recommends the use of either of the above. However, there is also another way to enable strict stubbing in our tests when we’re not leveraging the Mockito rule or runner:

Mockito.mockitoSession()
  .initMocks(this)
  .strictness(Strictness.STRICT_STUBS)
  .startMocking();

One last important point to make is that in Mockito 3.0, all stubbings will be “strict” and validated by default.

3. UnnecessaryStubbingException Example

Simply put, an unnecessary stub is a stubbed method call that was never realized during test execution.

Let’s take a look at a simple example:

@Test
public void givenUnusedStub_whenInvokingGetThenThrowUnnecessaryStubbingException() {
    when(mockList.add("one")).thenReturn(true); // this won't get called
    when(mockList.get(anyInt())).thenReturn("hello");
    assertEquals("List should contain hello", "hello", mockList.get(1));
}

When we run this unit test, Mockito will detect the unused stub and throw an UnnecessaryStubbingException:

org.mockito.exceptions.misusing.UnnecessaryStubbingException: 
Unnecessary stubbings detected.
Clean & maintainable test code requires zero unnecessary code.
Following stubbings are unnecessary (click to navigate to relevant line of code):
  1. -> at com.baeldung.mockito.misusing.MockitoUnecessaryStubUnitTest.givenUnusedStub_whenInvokingGetThenThrowUnnecessaryStubbingException(MockitoUnecessaryStubUnitTest.java:37)
Please remove unnecessary stubbings or use 'lenient' strictness. More info: javadoc for UnnecessaryStubbingException class.

Thankfully, it’s quite clear from the error message what the problem is here. We can also see that the exception message even points us to the exact line which causes the error.

Why does this happen? Well, the first when invocation configures our mock to return true when we call the add method with the argument “one”. However, we do not then invoke this method during the rest of the unit test execution.

Mockito is telling us that our first when line is redundant and perhaps we made an error when configuring our stubs.

Although this example is trivial, it’s easy to imagine when mocking a complex hierarchy of objects how this kind of message can assist debugging and be otherwise very helpful.

4. Bypassing Strict Stubbing

Finally, let’s see how to bypass strict stubs. This is also known as lenient stubbing.

Sometimes we need to configure specific stubbing to be lenient while maintaining all the other stubbings and mocks to use strict stubbing:

@Test
public void givenLenientdStub_whenInvokingGetThenThrowUnnecessaryStubbingException() {
    lenient().when(mockList.add("one")).thenReturn(true);
    when(mockList.get(anyInt())).thenReturn("hello");
    assertEquals("List should contain hello", "hello", mockList.get(1));
}

In the above example, we use the static method Mockito.lenient() to enable the lenient stubbing on the add method of our mock list.

Lenient stubs bypass “strict stubbing” validation rules. For example, when stubbing is declared as lenient, it won’t be checked for potential stubbing problems such as the unnecessary stubbing described earlier.
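Alternatively, recent Mockito versions (2.23 and later) let us mark a whole mock as lenient rather than individual stubbings (a sketch):

// every stubbing on this mock is treated leniently
@Mock(lenient = true)
private List<String> mockList;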

5. Conclusion

In this brief article, we began by introducing the concept of strict stubbing in Mockito and understood the philosophy behind why it was introduced and why it’s important.

Next, we looked at an example of the UnnecessaryStubbingException before finishing with an example of how to enable lenient stubbing in our tests.

As always, the full source code of the article is available over on GitHub.

Get the Path of the /src/test/resources Directory in JUnit


1. Overview

Sometimes during unit testing, we might need to read some file from the classpath or pass a file to an object under test. Or, we may have a file in src/test/resources with data for stubs that could be used by libraries like WireMock.

In this tutorial, we’ll show how to read the path of the /src/test/resources directory.

2. Maven Dependencies

First, we’ll need to add JUnit 5 to our Maven dependencies:

<dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter-engine</artifactId>
    <version>5.4.2</version>
</dependency>

We can find the latest version of JUnit 5 on Maven Central.

3. Using java.io.File

The simplest approach uses an instance of the java.io.File class to read the /src/test/resources directory, by calling the getAbsolutePath() method:

String path = "src/test/resources";

File file = new File(path);
String absolutePath = file.getAbsolutePath();

System.out.println(absolutePath);

assertTrue(absolutePath.endsWith("src/test/resources"));

Note that this path is relative to the current working directory, meaning the project directory.

Let’s see an example output when running the test on macOS:

/Users/user.name/my_projects/tutorials/testing-modules/junit-5-configuration/src/test/resources

4. Using Path

Next, we can use the Path class, which was introduced in Java 7.

First, we need to call a static factory method – Paths.get(). Then, we’ll convert Path to File. In the end, we just need to call getAbsolutePath(), as in the previous example:

Path resourceDirectory = Paths.get("src","test","resources");
String absolutePath = resourceDirectory.toFile().getAbsolutePath();

System.out.println(absolutePath);
 
assertTrue(absolutePath.endsWith("src/test/resources"));

And, we’d get the same output as in the previous example:

/Users/user.name/my_projects/tutorials/testing-modules/junit-5-configuration/src/test/resources

5. Using ClassLoader

Finally, we can also use a ClassLoader:

String resourceName = "example_resource.txt";

ClassLoader classLoader = getClass().getClassLoader();
File file = new File(classLoader.getResource(resourceName).getFile());
String absolutePath = file.getAbsolutePath();

System.out.println(absolutePath);

assertTrue(absolutePath.endsWith("/example_resource.txt"));

And, let’s have a look at the output:

/Users/user.name/my_projects/tutorials/testing-modules/junit-5-configuration/target/test-classes/example_resource.txt

Note that this time, we have a /junit-5-configuration/target/test-classes/example_resource.txt file, which differs from the results of the previous methods.

This is because the ClassLoader looks for the resources on the classpath. In Maven, the compiled classes and resources are put in the /target/ directory. That’s why this time, we got a path to a classpath resource.

6. Conclusion

To sum up, in this quick tutorial we’ve discussed how to read a /src/test/resources directory in JUnit 5.

Depending on our needs, we can achieve our goal with multiple methods: by using File, Paths, or ClassLoader classes.

As always, you can find all of our examples on our GitHub project!

Authenticating with Amazon Cognito Using Spring Security


1. Introduction

In this tutorial, we will look at how we can use Spring Security‘s OAuth 2.0 support to authenticate with Amazon Cognito.

Along the way, we’ll briefly take a look at what Amazon Cognito is and what kind of OAuth 2.0 flows it supports.

In the end, we’ll have a simple one-page application. Nothing fancy.

2. What is Amazon Cognito?

Cognito is a user identity and data synchronization service that makes it easy for us to manage user data for our apps across multiple devices.

With Amazon Cognito, we can:

  • create, authenticate, and authorize users for our applications
  • create identities for users of our apps who use other public identity providers like Google, Facebook, or Twitter
  • save our app’s user data in key-value pairs

3. Setup

3.1. Amazon Cognito Setup

As an Identity Provider, Cognito supports the authorization_code, implicit, and client_credentials grants. For our purposes, let’s set things up to use the authorization_code grant type.

First, we need a bit of Cognito setup.

In the configuration of the application client, make sure the CallbackURL matches the redirectUriTemplate from the Spring config file. In our case, this will be:

http://localhost:8080/login/oauth2/code/cognito

The Allowed OAuth flow should be Authorization code grant. Then, on the same page, we need to set the Allowed OAuth scope to openid.

3.2. Spring Setup

Since we want to use OAuth 2.0 Login, we’ll need to add the appropriate Spring Security dependencies to our application:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-oauth2-client</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-oauth2-jose</artifactId>
</dependency>

And then, we’ll need some configuration to bind everything together:

spring:
  security:
    oauth2:
      client:
        registration:
          cognito:
            clientId: clientId
            clientSecret: clientSecret
            scope: openid
            redirectUriTemplate: "http://localhost:8080/login/oauth2/code/cognito"
            clientName: cognito-client-name
        provider:
          cognito:
            issuerUri: https://cognito-idp.{region}.amazonaws.com/{poolId}
            usernameAttribute: cognito:username

And with that, we should have Spring and Amazon Cognito set up! The rest of the tutorial is just tying up a couple of loose ends.

4. Add a Landing Page

Next, we add a simple Thymeleaf landing page so that we know when we’re logged in:

<div>
    <h1 class="title">OAuth 2.0 Spring Security Cognito Demo</h1>
    <div sec:authorize="isAuthenticated()">
        <div class="box">
            Hello, <strong th:text="${#authentication.name}"></strong>!
        </div>
    </div>
    <div sec:authorize="isAnonymous()">
        <div class="box">
            <a class="button login is-primary" th:href="@{/oauth2/authorization/cognito}">
              Log in with Amazon Cognito</a>
        </div>
    </div>
</div>

Simply put, this will display our user name when we’re logged in or a login link when we’re not. Pay close attention to what the link looks like since it picks up the cognito part from our configuration file.

And then let’s make sure we tie the application root to our welcome page:

@Configuration
public class CognitoWebConfiguration implements WebMvcConfigurer {
    @Override
    public void addViewControllers(ViewControllerRegistry registry) {
        registry.addViewController("/").setViewName("home");
    }
}

5. Run the App

This is the class that will put everything related to auth in motion:

@SpringBootApplication
public class SpringCognitoApplication {
    public static void main(String[] args) {
        SpringApplication.run(SpringCognitoApplication.class, args);
    }
}

Now we can start our application, go to http://localhost:8080, and click the login link.

6. Conclusion

In this tutorial, we looked at how we can integrate Spring Security with Amazon Cognito with just some simple configuration. And then we put everything together with just a few pieces of code.

As always, the code presented in this article is available over on GitHub.
