
An Introduction to Java.util.Hashtable Class


1. Overview

Hashtable is the oldest implementation of a hash table data structure in Java. HashMap is the second such implementation, introduced in JDK 1.2.

Both classes provide similar functionality, but there are also small differences, which we’ll explore in this tutorial.

2. When to Use Hashtable

Let’s say we have a dictionary, where each word has its definition. Also, we need to get, insert and remove words from the dictionary quickly.

Hence, Hashtable (or HashMap) makes sense. Words will be the keys in the Hashtable, as they are supposed to be unique. Definitions, on the other hand, will be the values.

3. Example of Use

Let’s continue with the dictionary example. We’ll model Word as a key:

public class Word {
    private String name;

    public Word(String name) {
        this.name = name;
    }
    
    // ...
}

Let’s say the values are Strings. Now we can create a Hashtable:

Hashtable<Word, String> table = new Hashtable<>();

First, let’s add an entry:

Word word = new Word("cat");
table.put(word, "an animal");

Also, to get an entry:

String definition = table.get(word);

Finally, let’s remove an entry:

definition = table.remove(word);

There are many more methods in the class, and we’ll describe some of them later.

But first, let’s talk about some requirements for the key object.

4. The Importance of hashCode()

To be used as a key in a Hashtable, the object mustn’t violate the hashCode() contract. In short, equal objects must return the same hash code. To understand why, let’s look at how the hash table is organized.

Hashtable uses an array. Each position in the array is a “bucket”, which can be either null or contain one or more key-value pairs. The index for each pair is calculated from its key.

But why not store elements sequentially, adding new elements to the end of the array?

The point is that finding an element by its index is much quicker than iterating through the elements and comparing each one sequentially. Hence, we need a function that maps keys to indexes.

4.1. Direct Address Table

The simplest example of such a mapping is the direct-address table. Here keys are used as indexes:

index(k)=k,
where k is a key

Keys are unique, that is, each bucket contains exactly one key-value pair. This technique works well for integer keys when their possible range is reasonably small.
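To make the idea concrete, here’s a minimal direct-address table sketch (the class and method names are our own, for illustration only):

```java
// A minimal direct-address table for small, non-negative integer keys.
// The key itself is the array index, so put, get and remove are each
// a single array access.
class DirectAddressTable {
    private final String[] values;

    DirectAddressTable(int maxKey) {
        values = new String[maxKey + 1];
    }

    void put(int key, String value) {
        values[key] = value;
    }

    String get(int key) {
        return values[key];
    }

    void remove(int key) {
        values[key] = null;
    }
}
```

Note that the array must be as large as the biggest possible key, which is exactly the waste problem described next.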

But we have two problems here:

  • First, our keys are not integers, but Word objects
  • Second, even if they were integers, nobody could guarantee they’d be small. Imagine that the keys are 1, 2 and 1,000,000. We’d have a big array of size 1,000,000 with only three elements, and the rest would be wasted space

The hashCode() method solves the first problem.

The logic for data manipulation in the Hashtable solves the second problem.

Let’s discuss this in depth.

4.2. hashCode() Method

Any Java object inherits the hashCode() method, which returns an int value. This value is calculated from the internal memory address of the object. By default, hashCode() returns distinct integers for distinct objects.

Thus any key object can be converted to an integer using hashCode(). But this integer may be big.

4.3. Reducing the Range

The get(), put() and remove() methods contain the code that solves the second problem – reducing the range of possible integers.

The formula calculates an index for the key:

int index = (hash & 0x7FFFFFFF) % tab.length;

Here, tab.length is the array size, and hash is the number returned by the key’s hashCode() method.

As we can see, the index is the remainder of dividing the hash by the array size. Note that equal hash codes produce the same index.
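We can mirror that calculation in a small helper to see how a hash code maps to a bucket index; the table size of 11 below is just an assumption for illustration (it happens to be Hashtable’s default initial capacity):

```java
class IndexCalculation {

    // Mirrors the index formula Hashtable uses internally
    static int index(int hash, int tableLength) {
        // The mask clears the sign bit, so the result is never negative,
        // even for negative hash codes
        return (hash & 0x7FFFFFFF) % tableLength;
    }

    public static void main(String[] args) {
        System.out.println(index("cat".hashCode(), 11));
        System.out.println(index(-42, 11)); // still a valid, non-negative index
    }
}
```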

4.4. Collisions

Furthermore, even different hash codes can produce the same index. We refer to this as a collision. To resolve collisions, Hashtable stores a LinkedList of key-value pairs in each bucket.

Such a data structure is called a hash table with chaining.
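For example, with an assumed table size of 11, the Integer keys 5 and 16 produce different hash codes (an Integer’s hash code is its value) but the same index, and chaining keeps both entries retrievable:

```java
import java.util.Hashtable;

class CollisionDemo {

    // Same index formula as in section 4.3
    static int index(int hash, int tableLength) {
        return (hash & 0x7FFFFFFF) % tableLength;
    }

    public static void main(String[] args) {
        // 5 % 11 and 16 % 11 are both 5: a collision
        System.out.println(index(5, 11));  // 5
        System.out.println(index(16, 11)); // 5

        // Chaining resolves it: both entries remain accessible
        Hashtable<Integer, String> table = new Hashtable<>(11);
        table.put(5, "five");
        table.put(16, "sixteen");
        System.out.println(table.get(5) + " / " + table.get(16));
    }
}
```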

4.5. Load Factor

It’s easy to guess that collisions slow down operations on elements. To get an entry, it’s not enough to know its index; we also need to go through the list and perform a comparison with each item.

Therefore, it’s important to reduce the number of collisions. The bigger the array, the smaller the chance of a collision. The load factor determines the balance between the array size and performance. By default, it’s 0.75, which means that the array size doubles when 75% of the buckets become non-empty. The rehash() method performs this operation.

But let’s return to the keys.

4.6. Overriding equals() and hashCode()

When we put an entry into a Hashtable and get it back out, we expect that the value can be obtained not only with the same instance of the key but also with an equal key:

Word word = new Word("cat");
table.put(word, "an animal");
String extracted = table.get(new Word("cat"));

To set the rules of equality, we override the key’s equals() method:

public boolean equals(Object o) {
    if (o == this)
        return true;
    if (!(o instanceof Word))
        return false;

    Word word = (Word) o;
    return word.getName().equals(this.name);
}

But if we don’t override hashCode() when overriding equals(), then two equal keys may end up in different buckets because Hashtable calculates the key’s index using its hash code.

Let’s take a closer look at the above example. What happens if we don’t override hashCode()?

  • Two instances of Word are involved here – the first is for putting the entry and the second is for getting it. Although these instances are equal, their hashCode() methods return different numbers
  • The index for each key is calculated by the formula from section 4.3. According to this formula, different hash codes may produce different indexes
  • This means that we put the entry into one bucket and then try to get it out of another. Such logic breaks the Hashtable
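We can see this failure with a hypothetical key class that overrides equals() but not hashCode():

```java
import java.util.Hashtable;

class MissingHashCodeDemo {

    // Overrides equals() but NOT hashCode(), so two equal instances
    // almost certainly report different (identity) hash codes
    static class BrokenWord {
        private final String name;

        BrokenWord(String name) {
            this.name = name;
        }

        @Override
        public boolean equals(Object o) {
            return o instanceof BrokenWord && ((BrokenWord) o).name.equals(name);
        }
    }

    public static void main(String[] args) {
        Hashtable<BrokenWord, String> table = new Hashtable<>();
        table.put(new BrokenWord("cat"), "an animal");

        // An equal key, but the lookup goes to the wrong bucket
        System.out.println(table.get(new BrokenWord("cat"))); // typically null
    }
}
```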

Equal keys must return equal hash codes; that’s why we override the hashCode() method:

public int hashCode() {
    return name.hashCode();
}

Note that it’s also recommended to make unequal keys return different hash codes; otherwise, they end up in the same bucket. This hurts performance, losing some of the advantages of a Hashtable.

Also, note that we don’t need to worry about keys of type String, Integer, Long or another wrapper type. Both the equals() and hashCode() methods are already overridden in the wrapper classes.

5. Iterating Hashtables

There are a few ways to iterate over Hashtables. In this section, we’ll talk about them and explain some of the implications.

5.1. Fail-Fast: Iteration

Fail-fast iteration means that if a Hashtable is modified after its Iterator is created, then a ConcurrentModificationException will be thrown. Let’s demonstrate this.

First, we’ll create a Hashtable and add entries to it:

Hashtable<Word, String> table = new Hashtable<Word, String>();
table.put(new Word("cat"), "an animal");
table.put(new Word("dog"), "another animal");

Second, we’ll create an Iterator:

Iterator<Word> it = table.keySet().iterator();

And third, we’ll modify the table:

table.remove(new Word("dog"));

Now if we try to iterate through the table, we’ll get a ConcurrentModificationException:

while (it.hasNext()) {
    Word key = it.next();
}
java.util.ConcurrentModificationException
	at java.util.Hashtable$Enumerator.next(Hashtable.java:1378)

ConcurrentModificationException helps to find bugs and thus avoid unpredictable behavior when, for example, one thread is iterating through the table and another is trying to modify it at the same time.

5.2. Not Fail-Fast: Enumeration

Enumeration in a Hashtable is not fail-fast. Let’s look at an example.

First, let’s create a Hashtable and add entries to it:

Hashtable<Word, String> table = new Hashtable<Word, String>();
table.put(new Word("1"), "one");
table.put(new Word("2"), "two");

Second, let’s create an Enumeration:

Enumeration<Word> enumKey = table.keys();

Third, let’s modify the table:

table.remove(new Word("1"));

Now, if we iterate through the table, it won’t throw an exception:

while (enumKey.hasMoreElements()) {
    Word key = enumKey.nextElement();
}

5.3. Unpredictable Iteration Order

Also, note that iteration order in a Hashtable is unpredictable and does not match the order in which the entries were added.

This is understandable, as Hashtable calculates each index using the key’s hash code. Moreover, rehashing takes place from time to time, rearranging the order of the data structure.

Hence, let’s add some entries and check the output:

Hashtable<Word, String> table = new Hashtable<Word, String>();
table.put(new Word("1"), "one");
table.put(new Word("2"), "two");
// ...
table.put(new Word("8"), "eight");

Iterator<Map.Entry<Word, String>> it = table.entrySet().iterator();
while (it.hasNext()) {
    Map.Entry<Word, String> entry = it.next();
    // ...
}
five
four
three
two
one
eight
seven

6. Hashtable vs. HashMap

Hashtable and HashMap provide very similar functionality.

Both of them provide:

  • Fail-fast iteration
  • Unpredictable iteration order

But there are some differences too:

  • HashMap doesn’t provide any Enumeration, while Hashtable provides a not-fail-fast Enumeration
  • Hashtable doesn’t allow null keys or null values, while HashMap allows one null key and any number of null values
  • Hashtable’s methods are synchronized, while HashMap’s methods are not
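The null-handling difference is easy to check directly; a quick sketch:

```java
import java.util.HashMap;
import java.util.Hashtable;

class NullHandlingDemo {

    public static void main(String[] args) {
        HashMap<String, String> map = new HashMap<>();
        map.put(null, "a null key is fine in HashMap");
        map.put("dog", null); // a null value is fine too

        Hashtable<String, String> table = new Hashtable<>();
        try {
            table.put(null, "boom");
        } catch (NullPointerException e) {
            System.out.println("Hashtable rejects null keys");
        }
        try {
            table.put("dog", null);
        } catch (NullPointerException e) {
            System.out.println("Hashtable rejects null values");
        }
    }
}
```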

7. Hashtable API in Java 8

Java 8 introduced new methods that help make our code cleaner. In particular, we can get rid of some if blocks. Let’s demonstrate this.

7.1. getOrDefault()

Let’s say we need to get the definition of the word “dog” and assign it to a variable if it’s in the table. Otherwise, we’ll assign “not found” to the variable.

Before Java 8:

Word key = new Word("dog");
String definition;

if (table.containsKey(key)) {
     definition = table.get(key);
} else {
     definition = "not found";
}

After Java 8:

definition = table.getOrDefault(key, "not found");

7.2. putIfAbsent()

Let’s say we need to put the word “cat” only if it’s not in the dictionary yet.

Before Java 8:

if (!table.containsKey(new Word("cat"))) {
    table.put(new Word("cat"), definition);
}

After Java 8:

table.putIfAbsent(new Word("cat"), definition);

7.3. boolean remove()

Let’s say we need to remove the word “cat”, but only if its definition is “an animal”.

Before Java 8:

if ("an animal".equals(table.get(new Word("cat")))) {
    table.remove(new Word("cat"));
}

After Java 8:

boolean result = table.remove(new Word("cat"), "an animal");

Finally, while the old remove() method returns the removed value, the new method returns a boolean.

7.4. replace()

Let’s say we need to replace a definition of “cat”, but only if its old definition is “a small domesticated carnivorous mammal”.

Before Java 8:

if (table.containsKey(new Word("cat")) 
    && table.get(new Word("cat")).equals("a small domesticated carnivorous mammal")) {
    table.put(new Word("cat"), definition);
}

After Java 8:

table.replace(new Word("cat"), "a small domesticated carnivorous mammal", definition);

7.5. computeIfAbsent()

This method is similar to putIfAbsent(). But putIfAbsent() takes the value directly, while computeIfAbsent() takes a mapping function. It calculates the value only after it checks the key, which is more efficient, especially if the value is difficult to obtain.

table.computeIfAbsent(new Word("cat"), key -> "an animal");

Hence, the above line is equivalent to:

if (!table.containsKey(new Word("cat"))) {
    String definition = "an animal"; // note that the calculation takes place inside the if block
    table.put(new Word("cat"), definition);
}
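To see the laziness at work, here’s a sketch that counts how many times the mapping function actually runs:

```java
import java.util.Hashtable;
import java.util.concurrent.atomic.AtomicInteger;

class LazyComputeDemo {

    public static void main(String[] args) {
        Hashtable<String, String> table = new Hashtable<>();
        table.put("cat", "an animal");

        AtomicInteger calls = new AtomicInteger();

        // The key is already present, so the function is never invoked
        table.computeIfAbsent("cat", key -> {
            calls.incrementAndGet();
            return "an expensive value";
        });
        System.out.println(calls.get()); // 0

        // The key is absent, so the function runs exactly once
        table.computeIfAbsent("dog", key -> {
            calls.incrementAndGet();
            return "another animal";
        });
        System.out.println(calls.get()); // 1
    }
}
```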

7.6. computeIfPresent()

This method is similar to the replace() method. But, again, replace() takes the value directly, while computeIfPresent() takes a mapping function. It calculates the value inside the if block, which is why it’s more efficient.

Let’s say we need to change the definition:

Word cat = new Word("cat");
table.computeIfPresent(cat, (key, value) -> key.getName() + " - " + value);

Hence, the above line is equivalent to:

if (table.containsKey(cat)) {
    String concatenation = cat.getName() + " - " + table.get(cat);
    table.put(cat, concatenation);
}

7.7. compute()

Now we’ll solve another task. Let’s say we have an array of Strings whose elements are not unique. Let’s calculate how many occurrences of each String the array contains. Here is the array:

String[] animals = { "cat", "dog", "dog", "cat", "bird", "mouse", "mouse" };

Also, we want to create a Hashtable that contains each animal as a key and the number of its occurrences as a value.

Here is a solution:

Hashtable<String, Integer> table = new Hashtable<String, Integer>();

for (String animal : animals) {
    table.compute(animal, 
        (key, value) -> (value == null ? 1 : value + 1));
}

Finally, let’s make sure the table contains two cats, two dogs, one bird and two mice:

assertThat(table.values(), hasItems(2, 2, 2, 1));

7.8. merge()

There is another way to solve the above task:

for (String animal : animals) {
    table.merge(animal, 1, (oldValue, value) -> (oldValue + value));
}

The second argument, 1, is the value that’s mapped to the key if the key isn’t in the table yet. If the key is already in the table, then we calculate the new value as oldValue + 1.
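One more merge() detail worth knowing: if the remapping function returns null, the entry is removed from the table. A quick sketch:

```java
import java.util.Hashtable;

class MergeRemoveDemo {

    public static void main(String[] args) {
        Hashtable<String, Integer> counts = new Hashtable<>();
        counts.put("cat", 1);

        // Absent key: merge() simply inserts the given value
        counts.merge("dog", 1, Integer::sum);

        // Present key, remapping function returns null: the entry is removed
        counts.merge("cat", 1, (oldValue, value) -> null);

        System.out.println(counts.containsKey("cat")); // false
        System.out.println(counts.get("dog"));         // 1
    }
}
```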

7.9. forEach()

This is a new way to iterate through the entries. Let’s print all the entries:

table.forEach((k, v) -> System.out.println(k.getName() + " - " + v));

7.10. replaceAll()

Additionally, we can replace all the values without iteration:

table.replaceAll((k, v) -> k.getName() + " - " + v);

8. Conclusion

In this article, we’ve described the purpose of the hash table structure and showed how to evolve a direct-address table into a hash table.

Additionally, we’ve covered what collisions are and what a load factor is in a Hashtable. Also, we’ve learned why to override equals() and hashCode() for key objects.

Finally, we’ve talked about Hashtable‘s properties and Java 8 specific API.

As usual, the complete source code is available over on GitHub.


Kotlin and JavaScript


1. Overview

Kotlin is a next-generation programming language developed by JetBrains. It has gained popularity in the Android development community as a replacement for Java.

Another exciting feature of Kotlin is the support of server- and client-side JavaScript. In this article, we’re going to discuss how to write server-side JavaScript applications using Kotlin.

Kotlin can be transpiled (source code written in one language is transformed into another language) to JavaScript. This gives users the option of targeting the JVM and JavaScript using the same language.

In the coming sections, we’re going to develop a node.js application using Kotlin.

2. node.js

Node.js is a lean, fast, cross-platform runtime environment for JavaScript. It’s useful for both server and desktop applications.

Let’s start by setting up a node.js environment and project.

2.1. Installing node.js

Node.js can be downloaded from the Node website. It comes with the npm package manager. After installing it, we need to set up the project.

In the empty directory, let’s run:

npm init

It will ask a few questions about the package name, version, description, and entry point. Provide “kotlinNode” as the name, “Kotlin Node Example” as the description and “crypto.js” as the entry point. For the rest of the values, we’ll keep the defaults.

This process will generate a package.json file.

After this, we need to install a few dependency packages:

npm install
npm install kotlin --save
npm install express --save

This will install the modules required by our example application in the current project directory.

3. Creating a node.js Application Using Kotlin

In this section, we’re going to create a crypto API server using node.js in Kotlin. The API will serve some self-generated cryptocurrency rates.

3.1. Setting up the Kotlin Project

Now let’s set up the Kotlin project. We’ll be using Gradle here, which is the recommended and easiest approach. Gradle can be installed by following the instructions provided on the Gradle site.

Let’s start by creating the build.gradle file in the same directory:

buildscript {
    ext.kotlin_version = '1.2.41'
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
    }
}

group 'com.baeldung'
version '1.0-SNAPSHOT'
apply plugin: 'kotlin2js'

repositories {
    mavenCentral()
}

dependencies {
    compile "org.jetbrains.kotlin:kotlin-stdlib-js:$kotlin_version"
    testCompile "org.jetbrains.kotlin:kotlin-test-js:$kotlin_version"
}

compileKotlin2Js.kotlinOptions {
    moduleKind = "commonjs"
    outputFile = "node/crypto.js"
}

There are two important points to highlight in the build.gradle file. First, we apply the kotlin2js plugin to do the transpilation.

Then, in kotlinOptions, we set moduleKind to “commonjs” to work with node.js. With the outputFile option, we control where the transpiled code will be generated.

3.2. Creating an API

Let’s start implementing our application by creating the source folder src/main/kotlin.

In this folder, we create the file CryptoRate.kt.

In this file, we first need to import the require function and then create the main method:

external fun require(module: String): dynamic

fun main(args: Array<String>) {
    
}

Next, we import the required modules and create a server that listens on port 3000:

val express = require("express")

val app = express()
app.listen(3000, {
    println("Listening on port 3000")
})

Finally, we add the API endpoint “/crypto”. It will generate and return the data:

app.get("/crypto", { _, res ->
    res.send(generateCryptoRates())
})

data class CryptoCurrency(var name: String, var price: Float)

fun generateCryptoRates(): Array<CryptoCurrency> {
    return arrayOf<CryptoCurrency>(
      CryptoCurrency("Bitcoin", 90000F),
      CryptoCurrency("ETH",1000F),
      CryptoCurrency("TRX",10F)
    );
}

We’ve used the node.js express module to create the API endpoint. 

4. Run the Application

Running the application will be a two-part process. We need to transpile the Kotlin code into JavaScript before we can start our application with Node.

To create the JavaScript code, we use the Gradle build task:

gradlew build

This will generate the source code in the node directory.

Next, we execute the generated code file crypto.js using Node.js:

node node/crypto.js

This will launch the server running on port 3000. In the browser, let’s access the API by invoking http://localhost:3000/crypto to get this JSON result:

[
  {
    "name": "Bitcoin",
    "price": 90000
  },
  {
    "name": "ETH",
    "price": 1000
  },
  {
    "name": "TRX",
    "price": 10
  }
]

Alternatively, we can use tools like Postman or SoapUI to consume the API.

5. Conclusion

In this article, we’ve learned how to write node.js applications using Kotlin.

We’ve built a small service in a few minutes without using any boilerplate code.

As always, the code samples can be found over on GitHub.

Spring Data Annotations


1. Introduction

Spring Data provides an abstraction over data storage technologies. Therefore, our business logic code can be much more independent of the underlying persistence implementation. Also, Spring simplifies the handling of implementation-dependent details of data storage.

In this tutorial, we’ll see the most common annotations of the Spring Data, Spring Data JPA, and Spring Data MongoDB projects.

2. Common Spring Data Annotations

2.1. @Transactional

When we want to configure the transactional behavior of a method, we can do it with:

@Transactional
void pay() {}

If we apply this annotation at the class level, it works on all methods inside the class. However, we can override its effects by applying it to a specific method.

It has many configuration options, which can be found in this article.

2.2. @NoRepositoryBean

Sometimes we want to create repository interfaces whose only goal is to provide common methods for child repositories.

Of course, we don’t want Spring to create a bean of these repositories, since we won’t inject them anywhere. @NoRepositoryBean does exactly this: when we mark a child interface of org.springframework.data.repository.Repository with it, Spring won’t create a bean out of it.

For example, if we want an Optional<T> findById(ID id) method in all of our repositories, we can create a base repository:

@NoRepositoryBean
interface MyUtilityRepository<T, ID extends Serializable> extends CrudRepository<T, ID> {
    Optional<T> findById(ID id);
}

This annotation doesn’t affect the child interfaces; hence Spring will create a bean for the following repository interface:

@Repository
interface PersonRepository extends MyUtilityRepository<Person, Long> {}

Note that the example above isn’t necessary since Spring Data version 2, which includes this method, replacing the older T findOne(ID id).

2.3. @Param

We can pass named parameters to our queries using @Param:

@Query("FROM Person p WHERE p.name = :name")
Person findByName(@Param("name") String name);

Note that we refer to the parameter with the :name syntax.

For further examples, please visit this article.

2.4. @Id

@Id marks a field in a model class as the primary key:

class Person {

    @Id
    Long id;

    // ...
    
}

Since it’s implementation-independent, it makes a model class easy to use with multiple data store engines.

2.5. @Transient

We can use this annotation to mark a field in a model class as transient. Hence the data store engine won’t read or write this field’s value:

class Person {

    // ...

    @Transient
    int age;

    // ...

}

Like @Id, @Transient is also implementation-independent, which makes it convenient to use with multiple data store implementations.

2.6. @CreatedBy, @LastModifiedBy, @CreatedDate, @LastModifiedDate

With these annotations, we can audit our model classes: Spring automatically populates the annotated fields with the principal who created the object, the principal who last modified it, the date of creation, and the date of last modification:

public class Person {

    // ...

    @CreatedBy
    User creator;
    
    @LastModifiedBy
    User modifier;
    
    @CreatedDate
    Date createdAt;
    
    @LastModifiedDate
    Date modifiedAt;

    // ...

}

Note that if we want Spring to populate the principals, we need to use Spring Security as well.

For a more thorough description, please visit this article.

3. Spring Data JPA Annotations

3.1. @Query

With @Query, we can provide a JPQL implementation for a repository method:

@Query("SELECT COUNT(*) FROM Person p")
long getPersonCount();

Also, we can use named parameters:

@Query("FROM Person p WHERE p.name = :name")
Person findByName(@Param("name") String name);

Besides, we can use native SQL queries, if we set the nativeQuery argument to true:

@Query(value = "SELECT AVG(p.age) FROM person p", nativeQuery = true)
double getAverageAge();

For more information, please visit this article.

3.2. @Procedure

With Spring Data JPA we can easily call stored procedures from repositories.

First, we need to declare the stored procedure on the entity class using standard JPA annotations:

@NamedStoredProcedureQueries({ 
    @NamedStoredProcedureQuery(
        name = "count_by_name", 
        procedureName = "person.count_by_name", 
        parameters = { 
            @StoredProcedureParameter(
                mode = ParameterMode.IN, 
                name = "name", 
                type = String.class),
            @StoredProcedureParameter(
                mode = ParameterMode.OUT, 
                name = "count", 
                type = Long.class) 
            }
    ) 
})

class Person {}

After this, we can refer to it in the repository with the name we declared in the name argument:

@Procedure(name = "count_by_name")
long getCountByName(@Param("name") String name);

3.3. @Lock

We can configure the lock mode when we execute a repository query method:

@Lock(LockModeType.NONE)
@Query("SELECT COUNT(*) FROM Person p")
long getPersonCount();

The available lock modes:

  • READ
  • WRITE
  • OPTIMISTIC
  • OPTIMISTIC_FORCE_INCREMENT
  • PESSIMISTIC_READ
  • PESSIMISTIC_WRITE
  • PESSIMISTIC_FORCE_INCREMENT
  • NONE

3.4. @Modifying

We can modify data with a repository method if we annotate it with @Modifying:

@Modifying
@Query("UPDATE Person p SET p.name = :name WHERE p.id = :id")
void changeName(@Param("id") long id, @Param("name") String name);

For more information, please visit this article.

3.5. @EnableJpaRepositories

To use JPA repositories, we have to indicate this to Spring. We can do so with @EnableJpaRepositories.

Note that we have to use this annotation with @Configuration:

@Configuration
@EnableJpaRepositories
class PersistenceJPAConfig {}

Spring will look for repositories in the subpackages of this @Configuration class.

We can alter this behavior with the basePackages argument:

@Configuration
@EnableJpaRepositories(basePackages = "org.baeldung.persistence.dao")
class PersistenceJPAConfig {}

Also note that Spring Boot does this automatically if it finds Spring Data JPA on the classpath.

4. Spring Data Mongo Annotations

Spring Data makes working with MongoDB much easier. In the next sections, we’ll explore the most basic features of Spring Data MongoDB.

For more information, please visit our article about Spring Data MongoDB.

4.1. @Document

This annotation marks a class as being a domain object that we want to persist to the database:

@Document
class User {}

It also allows us to choose the name of the collection we want to use:

@Document(collection = "user")
class User {}

Note that this annotation is the Mongo equivalent of @Entity in JPA.

4.2. @Field

With @Field, we can configure the name of a field we want to use when MongoDB persists the document:

@Document
class User {

    // ...

    @Field("email")
    String emailAddress;

    // ...

}

Note that this annotation is the Mongo equivalent of @Column in JPA.

4.3. @Query

With @Query, we can provide a finder query on a MongoDB repository method:

@Query("{ 'name' : ?0 }")
List<User> findUsersByName(String name);

4.4. @EnableMongoRepositories

To use MongoDB repositories, we have to indicate this to Spring. We can do so with @EnableMongoRepositories.

Note that we have to use this annotation with @Configuration:

@Configuration
@EnableMongoRepositories
class MongoConfig {}

Spring will look for repositories in the subpackages of this @Configuration class. We can alter this behavior with the basePackages argument:

@Configuration
@EnableMongoRepositories(basePackages = "org.baeldung.repository")
class MongoConfig {}

Also note that Spring Boot does this automatically if it finds Spring Data MongoDB on the classpath.

5. Conclusion

In this article, we saw the most important annotations we need to handle data in general using Spring. In addition, we looked into the most common JPA and MongoDB annotations.

As usual, examples are available over on GitHub here for common and JPA annotations, and here for MongoDB annotations.

Spring Data REST Events with @RepositoryEventHandler


1. Introduction

While working with an entity, the REST exporter emits events for create, save, and delete operations. We can use an ApplicationListener to listen to these events and execute a function when a particular action is performed.

Alternatively, we can use an annotated handler, which filters events based on domain type.

2. Writing an Annotated Handler

The ApplicationListener doesn’t distinguish between entity types, but with an annotated handler, we can filter events based on domain type.

We can declare an annotation-based event handler by adding the @RepositoryEventHandler annotation to a POJO. As a result, this informs the BeanPostProcessor that the POJO needs to be inspected for handler methods.

In the example below, we annotate the class with @RepositoryEventHandler for the entity Author, and declare methods pertaining to the different before and after events of the Author entity in the AuthorEventHandler class:

@RepositoryEventHandler(Author.class) 
public class AuthorEventHandler {
    Logger logger = Logger.getLogger("Class AuthorEventHandler");
    
    @HandleBeforeCreate
    public void handleAuthorBeforeCreate(Author author){
        logger.info("Inside Author Before Create....");
        String name = author.getName();
    }

    @HandleAfterCreate
    public void handleAuthorAfterCreate(Author author){
        logger.info("Inside Author After Create ....");
        String name = author.getName();
    }

    @HandleBeforeDelete
    public void handleAuthorBeforeDelete(Author author){
        logger.info("Inside Author Before Delete ....");
        String name = author.getName();
    }

    @HandleAfterDelete
    public void handleAuthorAfterDelete(Author author){
        logger.info("Inside Author After Delete ....");
        String name = author.getName();
    }
}

Here, the different methods of the AuthorEventHandler class will be invoked based on the operation performed on the Author entity.

On finding a class with the @RepositoryEventHandler annotation, Spring iterates over its methods to find annotations corresponding to the before and after events listed below:

Before* Event Annotations – the associated handler methods are called before the event is emitted:

  • BeforeCreateEvent
  • BeforeDeleteEvent
  • BeforeSaveEvent
  • BeforeLinkSaveEvent

After* Event Annotations – the associated handler methods are called after the event is emitted:

  • AfterLinkSaveEvent
  • AfterSaveEvent
  • AfterCreateEvent
  • AfterDeleteEvent

We can also declare methods for different entity types corresponding to the same event type in a single class:

@RepositoryEventHandler
public class BookEventHandler {

    @HandleBeforeCreate
    public void handleBookBeforeCreate(Book book){
        // code for before create book event
    }

    @HandleBeforeCreate
    public void handleAuthorBeforeCreate(Author author){
        // code for before create author event
    }
}

Here, the BookEventHandler class deals with more than one entity. On finding the class with the @RepositoryEventHandler annotation, Spring iterates over its methods and calls the respective handler before the create event of the respective entity.

Also, we need to declare the event handlers in a @Configuration class, which will inspect the beans for handlers and match them with the right events:

@Configuration
public class RepositoryConfiguration{
    
    public RepositoryConfiguration(){
        super();
    }

    @Bean
    AuthorEventHandler authorEventHandler() {
        return new AuthorEventHandler();
    }

    @Bean
    BookEventHandler bookEventHandler(){
        return new BookEventHandler();
    }
}

3. Conclusion

In this quick tutorial, we learned how to use the @RepositoryEventHandler annotation to handle the various events corresponding to an entity type.

And, as always, find the complete code samples over on GitHub.

Example of Downloading File in a Servlet


1. Overview

A common feature of web applications is the ability to expose a file for download.

In this tutorial, we’ll cover a simple example of creating a downloadable file and serving it from a Java Servlet application.

The file we are using will be from the webapp resources.

2. Maven Dependencies

If using Java EE, then we wouldn’t need to add any dependencies. However, if we’re using Java SE, we’ll need the javax.servlet-api dependency:

<dependency>
    <groupId>javax.servlet</groupId>
    <artifactId>javax.servlet-api</artifactId>
    <version>4.0.1</version>
    <scope>provided</scope>
</dependency>

The latest version of the dependency can be found here.

3. Servlet

Let's have a look at the code first and then find out what's going on:

@WebServlet("/download")
public class DownloadServlet extends HttpServlet {
    private final int ARBITRARY_SIZE = 1048;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
        resp.setContentType("text/plain");
        resp.setHeader("Content-disposition", "attachment; filename=sample.txt");
        
        InputStream in = req.getServletContext().getResourceAsStream("/WEB-INF/sample.txt");
        OutputStream out = resp.getOutputStream();

        byte[] buffer = new byte[ARBITRARY_SIZE];
        
        int numBytesRead;
        while ((numBytesRead = in.read(buffer)) != -1) {
            out.write(buffer, 0, numBytesRead);
        }
        
        in.close();
        out.flush();
    }
}

3.1. Request End Point

The @WebServlet(“/download”) annotation marks the DownloadServlet class to serve requests directed at the “/download” end-point.

Alternatively, we can do this by describing the mapping in the web.xml file.
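For reference, the equivalent web.xml mapping might look like the following sketch (the servlet class's package name here is an assumption):

```xml
<servlet>
    <servlet-name>DownloadServlet</servlet-name>
    <servlet-class>com.example.DownloadServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>DownloadServlet</servlet-name>
    <url-pattern>/download</url-pattern>
</servlet-mapping>
```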

3.2. Response Content-Type

The HttpServletResponse object has a method called setContentType, which we can use to set the Content-Type header of the HTTP response.

Content-Type is the historical name of the header. Another name was the MIME type (Multipurpose Internet Mail Extensions); we now simply refer to the value as the Media Type.

This value could be “application/pdf”, “text/plain”, “text/html”, “image/jpeg”, etc.; the official list is maintained by the Internet Assigned Numbers Authority (IANA) and can be found here.

For our example, we are using a simple text file. The Content-Type for a text file is “text/plain”.

3.3. Response Content-Disposition

Setting the Content-Disposition header in the response object tells the browser how to handle the file it is accessing.

Browsers understand the use of Content-Disposition as a convention, but it's not actually a part of the HTTP standard. W3 has a memo on the use of Content-Disposition available to read here.

The Content-Disposition values for the main body of a response will be either “inline” (for webpage content to be rendered) or “attachment” (for a downloadable file).

If not specified, the default Content-Disposition is “inline”.

Using an optional header parameter, we can specify the filename “sample.txt”.

Some browsers will immediately download the file using the given filename and others will show a download dialog containing our predefined value.

The exact action taken will depend on the browser.

3.4. Reading From File and Writing to Output Stream

In the remaining lines of code, we take the ServletContext from the request, and use it to obtain the file at “/WEB-INF/sample.txt”.

Using HttpServletResponse#getOutputStream(), we then read from the input stream of the resource and write to the response’s OutputStream.

The size of the byte array we use is arbitrary. We can choose the size based on the amount of memory that is reasonable to allocate for passing the data from the InputStream to the OutputStream; the smaller the number, the more loops; the bigger the number, the higher the memory usage.

This cycle continues until read() returns -1, as that indicates the end of the file has been reached.

3.5. Close and Flush

close will close the Stream and release any resources that it is currently holding.

flush will write any remaining buffered bytes to the OutputStream destination.

We use these two methods to release memory, ensuring that the data we have prepared is sent out from our application.
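As a side note, try-with-resources can handle the closing for us automatically. Here is a minimal, self-contained sketch of the same copy loop using in-memory streams (the servlet version would use the resource's InputStream and the response's OutputStream instead):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class StreamCopy {

    public static void copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[1024];
        int numBytesRead;
        // read() returns -1 at the end of the stream
        while ((numBytesRead = in.read(buffer)) != -1) {
            out.write(buffer, 0, numBytesRead);
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "sample content".getBytes();
        ByteArrayOutputStream result = new ByteArrayOutputStream();
        // both streams are closed automatically, even if copy() throws
        try (InputStream in = new ByteArrayInputStream(data);
             OutputStream out = result) {
            copy(in, out);
        }
        System.out.println(result.toString());
    }
}
```

With try-with-resources, both streams are closed even when an exception occurs mid-copy, which the explicit close() call above does not guarantee.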

3.6. Downloading the File

With everything in place, we are now ready to run our Servlet.

Now when we visit the relative end-point “/download”, our browser will attempt to download the file as “sample.txt”.

4. Conclusion

Downloading a file from a Servlet is a simple process. Using streams allows us to pass out the data as bytes, and the Media Type informs the client browser what type of data to expect.

It is down to the browser to determine how to handle the response, however, we can give some guidelines with the Content-Disposition header.

All code in this article can be found over on GitHub.

Java Weekly, Issue 231


Here we go…

1. Spring and Java

>> From Java to Kotlin and Back [allegro.tech]

A controversial but interesting read about one team’s story which migrated from Java 8 to Kotlin… and then to Java 10.

>> Getting Started with Kafka in Spring Boot [e4developer.com]

Although Kafka can be an intimidating technology, Spring makes it much easier to get started using it.

>> Structuring and Testing Modules and Layers with Spring Boot [reflectoring.io]

A very interesting showcase of testing multiple application layers in a Spring Boot application.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

>> Let’s Encrypt tips [advancedweb.hu]

A really good set of tips to keep top of mind when setting up certificates with Let's Encrypt.

>> UTC is enough for everyone…right? [zachholman.com]

Reinventing the calendar, apparently 🙂 – with all the complexity that comes with that.

Also worth reading:

3. Comics

And my favorite Dilberts of the week:

>> Wifi In Slide Deck [dilbert.com]

>> Boring And Needy Children [dilbert.com]

>> Two Jobs Forever [dilbert.com]

4. Pick of the Week

Not necessarily light reading:

>> OAuth 2.0 Security Best Current Practice [tools.ietf.org]

Spring Boot Annotations


1. Overview

Spring Boot made configuring Spring easier with its auto-configuration feature.

In this quick tutorial, we’ll explore the annotations from the org.springframework.boot.autoconfigure and org.springframework.boot.autoconfigure.condition packages.

2. @SpringBootApplication

We use this annotation to mark the main class of a Spring Boot application:

@SpringBootApplication
class VehicleFactoryApplication {

    public static void main(String[] args) {
        SpringApplication.run(VehicleFactoryApplication.class, args);
    }
}

@SpringBootApplication encapsulates @Configuration, @EnableAutoConfiguration, and @ComponentScan annotations with their default attributes.

3. @EnableAutoConfiguration

@EnableAutoConfiguration, as its name says, enables auto-configuration. It means that Spring Boot looks for auto-configuration beans on its classpath and automatically applies them.

Note that we have to use this annotation with @Configuration:

@Configuration
@EnableAutoConfiguration
class VehicleFactoryConfig {}

4. Auto-Configuration Conditions

Usually, when we write our custom auto-configurations, we want Spring to use them conditionally. We can achieve this with the annotations in this section.

We can place the annotations in this section on @Configuration classes or @Bean methods.

In the next sections, we’ll only introduce the basic concept behind each condition. For further information, please visit this article.

4.1. @ConditionalOnClass and @ConditionalOnMissingClass

Using these conditions, Spring will only use the marked auto-configuration bean if the class in the annotation’s argument is present/absent:

@Configuration
@ConditionalOnClass(DataSource.class)
class MySQLAutoconfiguration {
    //...
}

4.2. @ConditionalOnBean and @ConditionalOnMissingBean

We can use these annotations when we want to define conditions based on the presence or absence of a specific bean:

@Bean
@ConditionalOnBean(name = "dataSource")
LocalContainerEntityManagerFactoryBean entityManagerFactory() {
    // ...
}

4.3. @ConditionalOnProperty

With this annotation, we can make conditions on the values of properties:

@Bean
@ConditionalOnProperty(
    name = "usemysql", 
    havingValue = "local"
)
DataSource dataSource() {
    // ...
}

4.4. @ConditionalOnResource

We can make Spring use a definition only when a specific resource is present:

@ConditionalOnResource(resources = "classpath:mysql.properties")
Properties additionalProperties() {
    // ...
}

4.5. @ConditionalOnWebApplication and @ConditionalOnNotWebApplication

With these annotations, we can create conditions based on whether the current application is or isn't a web application:

@ConditionalOnWebApplication
HealthCheckController healthCheckController() {
    // ...
}

4.6. @ConditionalOnExpression

We can use this annotation in more complex situations. Spring will use the marked definition when the SpEL expression evaluates to true:

@Bean
@ConditionalOnExpression("${usemysql} && ${mysqlserver == 'local'}")
DataSource dataSource() {
    // ...
}

4.7. @Conditional

For even more complex conditions, we can create a class evaluating the custom condition. We tell Spring to use this custom condition with @Conditional:

@Conditional(HibernateCondition.class)
Properties additionalProperties() {
    //...
}

5. Conclusion

In this article, we saw an overview of how we can fine-tune the auto-configuration process and provide conditions for custom auto-configuration beans.

As usual, the examples are available over on GitHub.

Spring Web Annotations


1. Overview

In this tutorial, we’ll explore Spring Web annotations from the org.springframework.web.bind.annotation package.

2. @RequestMapping

Simply put, @RequestMapping marks request handler methods inside @Controller classes; it can be configured using:

  • path, or its aliases, name, and value: which URL the method is mapped to
  • method: compatible HTTP methods
  • params: filters requests based on presence, absence, or value of HTTP parameters
  • headers: filters requests based on presence, absence, or value of HTTP headers
  • consumes: which media types the method can consume in the HTTP request body
  • produces: which media types the method can produce in the HTTP response body

Here’s a quick example of what that looks like:

@Controller
class VehicleController {

    @RequestMapping(value = "/vehicles/home", method = RequestMethod.GET)
    String home() {
        return "home";
    }
}

We can provide default settings for all handler methods in a @Controller class if we apply this annotation at the class level. The only exception is the URL, which Spring won't override with method-level settings; instead, it appends the two path parts.

For example, the following configuration has the same effect as the one above:

@Controller
@RequestMapping(value = "/vehicles", method = RequestMethod.GET)
class VehicleController {

    @RequestMapping("/home")
    String home() {
        return "home";
    }
}

Moreover, @GetMapping, @PostMapping, @PutMapping, @DeleteMapping, and @PatchMapping are different variants of @RequestMapping with the HTTP method already set to GET, POST, PUT, DELETE, and PATCH respectively.

These are available since the Spring 4.3 release.
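For instance, the GET mapping from the first example can be written more concisely with the shorthand variant (a sketch; the class mirrors the earlier example):

```java
@Controller
class VehicleController {

    // equivalent to @RequestMapping(value = "/vehicles/home", method = RequestMethod.GET)
    @GetMapping("/vehicles/home")
    String home() {
        return "home";
    }
}
```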

3. @RequestBody

Let’s move on to @RequestBody – which maps the body of the HTTP request to an object:

@PostMapping("/save")
void saveVehicle(@RequestBody Vehicle vehicle) {
    // ...
}

The deserialization is automatic and depends on the content type of the request.

4. @PathVariable

Next, let’s talk about @PathVariable.

This annotation indicates that a method argument is bound to a URI template variable. We can specify the URI template with the @RequestMapping annotation and bind a method argument to one of the template parts with @PathVariable.

We can achieve this with the name or its alias, the value argument:

@RequestMapping("/{id}")
Vehicle getVehicle(@PathVariable("id") long id) {
    // ...
}

If the name of the part in the template matches the name of the method argument, we don’t have to specify it in the annotation:

@RequestMapping("/{id}")
Vehicle getVehicle(@PathVariable long id) {
    // ...
}

Moreover, we can mark a path variable optional by setting the argument required to false:

@RequestMapping("/{id}")
Vehicle getVehicle(@PathVariable(required = false) long id) {
    // ...
}

5. @RequestParam

We use @RequestParam for accessing HTTP request parameters:

@RequestMapping
Vehicle getVehicleByParam(@RequestParam("id") long id) {
    // ...
}

It has the same configuration options as the @PathVariable annotation.

In addition to those settings, with @RequestParam we can specify an injected value when Spring finds no or empty value in the request. To achieve this, we have to set the defaultValue argument.

Providing a default value implicitly sets required to false:

@RequestMapping("/buy")
Car buyCar(@RequestParam(defaultValue = "5") int seatCount) {
    // ...
}

Besides parameters, there are other HTTP request parts we can access: cookies and headers. We can access them with the annotations @CookieValue and @RequestHeader respectively.

We can configure them the same way as @RequestParam.
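As a hedged sketch (the cookie and header names here are illustrative), both annotations bind their values to method arguments the same way @RequestParam does:

```java
@RequestMapping("/info")
String requestInfo(
  @CookieValue("sessionId") String sessionId,
  @RequestHeader("User-Agent") String userAgent) {
    // ...
}
```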

6. Response Handling Annotations

In the next sections, we will see the most common annotations to manipulate HTTP responses in Spring MVC.

6.1. @ResponseBody

If we mark a request handler method with @ResponseBody, Spring treats the result of the method as the response itself:

@ResponseBody
@RequestMapping("/hello")
String hello() {
    return "Hello World!";
}

If we annotate a @Controller class with this annotation, all request handler methods will use it.

6.2. @ExceptionHandler

With this annotation, we can declare a custom error handler method. Spring calls this method when a request handler method throws any of the specified exceptions.

The caught exception can be passed to the method as an argument:

@ExceptionHandler(IllegalArgumentException.class)
void onIllegalArgumentException(IllegalArgumentException exception) {
    // ...
}

6.3. @ResponseStatus

We can specify the desired HTTP status of the response if we annotate a request handler method with this annotation. We can declare the status code with the code argument, or its alias, the value argument.

Also, we can provide a reason using the reason argument.
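For example, a standalone usage might look like this sketch (the endpoint and reason text are illustrative):

```java
@ResponseStatus(code = HttpStatus.CREATED, reason = "vehicle created")
@PostMapping("/vehicles")
void createVehicle(@RequestBody Vehicle vehicle) {
    // ...
}
```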

We can also use it along with @ExceptionHandler:

@ExceptionHandler(IllegalArgumentException.class)
@ResponseStatus(HttpStatus.BAD_REQUEST)
void onIllegalArgumentException(IllegalArgumentException exception) {
    // ...
}

For more information about HTTP response status, please visit this article.

7. Other Web Annotations

Some annotations don’t manage HTTP requests or responses directly. In the next sections, we’ll introduce the most common ones.

7.1. @RestController

The @RestController combines @Controller and @ResponseBody.

Therefore, the following declarations are equivalent:

@Controller
@ResponseBody
class VehicleRestController {
    // ...
}
@RestController
class VehicleRestController {
    // ...
}

7.2. @ModelAttribute

With this annotation we can access elements that are already in the model of an MVC @Controller, by providing the model key:

@PostMapping("/assemble")
void assembleVehicle(@ModelAttribute("vehicle") Vehicle vehicleInModel) {
    // ...
}

Like with @PathVariable and @RequestParam, we don’t have to specify the model key if the argument has the same name:

@PostMapping("/assemble")
void assembleVehicle(@ModelAttribute Vehicle vehicle) {
    // ...
}

Besides, @ModelAttribute has another use: if we annotate a method with it, Spring will automatically add the method’s return value to the model:

@ModelAttribute("vehicle")
Vehicle getVehicle() {
    // ...
}

Like before, we don’t have to specify the model key; Spring uses the method’s name by default:

@ModelAttribute
Vehicle vehicle() {
    // ...
}

Before Spring calls a request handler method, it invokes all @ModelAttribute annotated methods in the class.

More information about @ModelAttribute can be found in this article.

7.3. @CrossOrigin

@CrossOrigin enables cross-domain communication for the annotated request handler methods:

@CrossOrigin
@RequestMapping("/hello")
String hello() {
    return "Hello World!";
}

If we mark a class with it, it applies to all request handler methods in it.

We can fine-tune CORS behavior with this annotation’s arguments.
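For instance, we can restrict the allowed origins and cache the preflight response (the origin and max-age values in this sketch are illustrative):

```java
@CrossOrigin(origins = "http://example.com", maxAge = 3600)
@RequestMapping("/hello")
String hello() {
    return "Hello World!";
}
```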

For more details, please visit this article.

8. Conclusion

In this article, we saw how we can handle HTTP requests and responses with Spring MVC.

As usual, the examples are available over on GitHub.


Spring Scheduling Annotations


1. Overview

When single-threaded execution isn’t enough, we can use annotations from the org.springframework.scheduling.annotation package.

In this quick tutorial, we’re going to explore the Spring Scheduling Annotations.

2. @EnableAsync

With this annotation, we can enable asynchronous functionality in Spring.

We must use it with @Configuration:

@Configuration
@EnableAsync
class VehicleFactoryConfig {}

Now that we've enabled asynchronous calls, we can use @Async to define the methods supporting them.

3. @EnableScheduling

With this annotation, we can enable scheduling in the application.

We also have to use it in conjunction with @Configuration:

@Configuration
@EnableScheduling
class VehicleFactoryConfig {}

As a result, we can now run methods periodically with @Scheduled.

4. @Async

We can define methods we want to execute on a different thread, hence run them asynchronously.

To achieve this, we can annotate the method with @Async:

@Async
void repairCar() {
    // ...
}

If we apply this annotation to a class, then all methods will be called asynchronously.

Note that we need to enable asynchronous calls for this annotation to work, either with @EnableAsync or XML configuration.

More information about @Async can be found in this article.

5. @Scheduled

If we need a method to execute periodically, we can use this annotation:

@Scheduled(fixedRate = 10000)
void checkVehicle() {
    // ...
}

We can use it to execute a method at fixed intervals, or we can fine-tune it with cron-like expressions.

@Scheduled leverages the Java 8 repeating annotations feature, which means we can mark a method with it multiple times:

@Scheduled(fixedRate = 10000)
@Scheduled(cron = "0 * * * * MON-FRI")
void checkVehicle() {
    // ...
}

Note that a method annotated with @Scheduled should have a void return type.

Moreover, we have to enable scheduling for this annotation to work, for example with @EnableScheduling or XML configuration.

For more information about scheduling read this article.

6. @Schedules

We can use this annotation to specify multiple @Scheduled rules:

@Schedules({ 
  @Scheduled(fixedRate = 10000), 
  @Scheduled(cron = "0 * * * * MON-FRI")
})
void checkVehicle() {
    // ...
}

Note that since Java 8 we can achieve the same with the repeating annotations feature, as described above.

7. Conclusion

In this article, we saw an overview of the most common Spring scheduling annotations.

As usual, the examples are available over on GitHub.

A Guide to Spring Data Key Value


1. Introduction

The Spring Data Key Value framework makes it easy to write Spring applications that use key-value stores.

It minimizes redundant tasks and boilerplate code required for interacting with the store. The framework works well for key-value stores like Redis and Riak.

In this tutorial, we’ll cover how we can use Spring Data Key Value with the default java.util.Map based implementation.

2. Requirements

The Spring Data Key Value 1.x binaries require JDK level 6.0 or above, and Spring Framework 3.0.x or above.

3. Maven Dependency

To work with Spring Data Key Value, we need to add the following dependency:

<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-keyvalue</artifactId>
    <version>2.0.6.RELEASE</version>
</dependency>

The latest version can be found here.

4. Creating an Entity

Let’s create an Employee entity:

@KeySpace("employees")
public class Employee {

    @Id
    private Integer id;

    private String name;

    private String department;

    private String salary;

    // constructors/ standard getters and setters

}

Keyspaces define in which part of the data structure the entity should be kept. This concept is very similar to collections in MongoDB and Elasticsearch, cores in Solr and tables in JPA.

By default, the keyspace of an entity is extracted from its type.

5. Repository

Similar to other Spring Data frameworks, we will need to activate Spring Data repositories using the @EnableMapRepositories annotation.

By default, the repositories will use the ConcurrentHashMap-based implementation:

@SpringBootApplication
@EnableMapRepositories
public class SpringDataKeyValueApplication {
}

It’s possible to change the default ConcurrentHashMap implementation and use some other java.util.Map implementations:

@EnableMapRepositories(mapType = WeakHashMap.class)

Creating repositories with Spring Data Key Value works the same way as with other Spring Data frameworks:

@Repository
public interface EmployeeRepository
  extends CrudRepository<Employee, Integer> {
}

For learning more about Spring Data repositories we can have a look at this article.

6. Using the Repository

By extending CrudRepository in EmployeeRepository, we get a complete set of persistence methods that perform CRUD functionality.

Now, we’ll see how we can use some of the available persistence methods.

6.1. Saving an Object

Let’s save a new Employee object to the data store using the repository:

Employee employee = new Employee(1, "Mike", "IT", "5000");
employeeRepository.save(employee);

6.2. Retrieving an Existing Object

We can verify the correct save of the employee in the previous section by fetching the employee:

Optional<Employee> savedEmployee = employeeRepository.findById(1);

6.3. Updating an Existing Object

CrudRepository doesn’t provide a dedicated method for updating an object.

Instead, we can use the save() method:

employee.setName("Jack");
employeeRepository.save(employee);

6.4. Deleting an Existing Object

We can delete the inserted object using the repository:

employeeRepository.deleteById(1);

6.5. Fetch All Objects

We can fetch all the saved objects:

Iterable<Employee> employees = employeeRepository.findAll();

7. KeyValueTemplate

Another way of performing operations on the data structure is by using KeyValueTemplate.

In very basic terms, the KeyValueTemplate uses a MapAdapter wrapping a java.util.Map implementation to perform queries and sorting:

@Bean
public KeyValueOperations keyValueTemplate() {
    return new KeyValueTemplate(keyValueAdapter());
}

@Bean
public KeyValueAdapter keyValueAdapter() {
    return new MapKeyValueAdapter(WeakHashMap.class);
}

Note that if we have used @EnableMapRepositories, we don't need to specify a KeyValueTemplate; it will be created by the framework itself.

8. Using KeyValueTemplate

Using KeyValueTemplate, we can perform the same operations as we did with the repository.

8.1. Saving an Object

Let’s see how to save a new Employee object to the data store using a template:

Employee employee = new Employee(1, "Mile", "IT", "5000");
keyValueTemplate.insert(employee);

8.2. Retrieving an Existing Object

We can verify the insertion of the object by fetching it from the structure using template:

Optional<Employee> savedEmployee = keyValueTemplate
  .findById(id, Employee.class);

8.3. Updating an Existing Object

Unlike CrudRepository, the template provides a dedicated method to update an object:

employee.setName("Jacek");
keyValueTemplate.update(employee);

8.4. Deleting an Existing Object

We can delete an object with a template:

keyValueTemplate.delete(id, Employee.class);

8.5. Fetch All Objects

We can fetch all the saved objects using a template:

Iterable<Employee> employees = keyValueTemplate
  .findAll(Employee.class);

8.6. Sorting the Objects

In addition to the basic functionality, the template also supports KeyValueQuery for writing custom queries.

For example, we can use a query to get a sorted list of Employees based on their salary:

KeyValueQuery<Employee> query = new KeyValueQuery<Employee>();
query.setSort(new Sort(Sort.Direction.DESC, "salary"));
Iterable<Employee> employees 
  = keyValueTemplate.find(query, Employee.class);

9. Conclusion

This article showcased how we can use Spring Data KeyValue framework with the default Map implementation using Repository or KeyValueTemplate.

There are more Spring Data Frameworks like Spring Data Redis which are written on top of Spring Data Key Value. Refer to this article for an introduction to Spring Data Redis.

And, as always, all code samples shown here are available over on GitHub.

Processing JSON with Kotlin and Klaxon


1. Overview

Klaxon is one of the open source libraries that we can use to parse JSON in Kotlin.

In this tutorial, we’re going to look at its features.

2. Maven Dependency

First, we’ll need to add the library dependency to our Maven project:

<dependency>
    <groupId>com.beust</groupId>
    <artifactId>klaxon</artifactId>
    <version>3.0.4</version>
</dependency>

The latest version can be found at jcenter or in the Spring Plugins Repository.

3. API Features

Klaxon has four APIs to work with JSON documents. We’ll explore these in the following sections.

4. Object Binding API

With this API, we can bind JSON documents to Kotlin objects and vice-versa.
To start, let’s define the following JSON document:

{
    "name": "HDD"
}

Next, we’ll create the Product class for binding:

class Product(val name: String)

Now, we can test serialization:

@Test
fun givenProduct_whenSerialize_thenGetJsonString() {
    val product = Product("HDD")
    val result = Klaxon().toJsonString(product)

    assertThat(result).isEqualTo("""{"name" : "HDD"}""")
}

And we can test deserialization:

@Test
fun givenJsonString_whenDeserialize_thenGetProduct() {
    val result = Klaxon().parse<Product>(
    """
        {
            "name" : "RAM"
        }
    """)

    assertThat(result?.name).isEqualTo("RAM")
}

This API also supports working with data classes, as well as mutable and immutable classes.

Klaxon allows us to customize the mapping process with the @Json annotation. This annotation has two properties:

  • name – for setting a different name for the fields
  • ignored – for ignoring fields of the mapping process

Let’s create a CustomProduct class to see how these work:

class CustomProduct(
    @Json(name = "productName")
    val name: String,
    @Json(ignored = true)
    val id: Int)

Now, let’s verify it with a test:

@Test
fun givenCustomProduct_whenSerialize_thenGetJsonString() {
    val product = CustomProduct("HDD", 1)
    val result = Klaxon().toJsonString(product)

    assertThat(result).isEqualTo("""{"productName" : "HDD"}""")
}

As we can see, the name property is serialized as productName, and the id property is ignored.

5. Streaming API

With the Streaming API, we can handle huge JSON documents by reading from a stream. This feature allows our code to process JSON values while the document is still being read.

We need to use the JsonReader class from the API to read a JSON stream. This class has two special functions to handle streaming:

  • beginObject() – makes sure that the next token is the beginning of an object
  • beginArray() – makes sure that the next token is the beginning of an array

With these functions, we can be sure the stream is correctly positioned and that it’s closed after consuming the object or array.

Let’s test the streaming API against an array of the following ProductData class:

data class ProductData(val name: String, val capacityInGb: Int)
@Test
fun givenJsonArray_whenStreaming_thenGetProductArray() {
    val jsonArray = """
    [
        { "name" : "HDD", "capacityInGb" : 512 },
        { "name" : "RAM", "capacityInGb" : 16 }
    ]"""
    val expectedArray = arrayListOf(
      ProductData("HDD", 512),
      ProductData("RAM", 16))
    val klaxon = Klaxon()
    val productArray = arrayListOf<ProductData>()
    JsonReader(StringReader(jsonArray)).use { 
        reader -> reader.beginArray {
            while (reader.hasNext()) {
                val product = klaxon.parse<ProductData>(reader)
                productArray.add(product!!)
            }
        }
    }

    assertThat(productArray).hasSize(2).isEqualTo(expectedArray)
}

6. JSON Path Query API

Klaxon supports the element location feature from the JSON Path specification. With this API, we can define path matchers to locate specific entries in our documents.

Note that this API is streaming, too, and we’ll be notified after an element is found and parsed.

We need to use the PathMatcher interface. Klaxon invokes it when the JSON path matches the regular expression we provide.

To use this, we need to implement its methods:

  • pathMatches() – return true if we want to observe this path
  • onMatch() – fired when the path is found; note that the value can only be a basic type (e.g., int, String) and never JsonObject or JsonArray

Let’s make a test to see it in action.

First, let’s define an inventory JSON document as a source of data:

{
    "inventory" : {
        "disks" : [
            {
                "type" : "HDD",
                "sizeInGb" : 1000
            },
            {
                "type" : "SDD",
                "sizeInGb" : 512
            }
        ]
    }
}

Now, we implement the PathMatcher interface as follows:

val pathMatcher = object : PathMatcher {
    override fun pathMatches(path: String)
      = Pattern.matches(".*inventory.*disks.*type.*", path)

    override fun onMatch(path: String, value: Any) {
        when (path) {
            "$.inventory.disks[0].type"
              -> assertThat(value).isEqualTo("HDD")
            "$.inventory.disks[1].type"
              -> assertThat(value).isEqualTo("SDD")
        }
    }
}

Note that we defined the regex to match the disk types in our inventory document.

Now, we are ready to define our test:

@Test
fun givenDiskInventory_whenRegexMatches_thenGetTypes() {
    val jsonString = """..."""
    val pathMatcher = //...
    Klaxon().pathMatcher(pathMatcher)
      .parseJsonObject(StringReader(jsonString))
}

7. Low-Level API

With Klaxon, we can process JSON documents like a Map or a List. To do this, we can use the classes JsonObject and JsonArray from the API.

Let’s make a test to see the JsonObject in action:

@Test
fun givenJsonString_whenParser_thenGetJsonObject() {
    val jsonString = StringBuilder("""
        {
            "name" : "HDD",
            "capacityInGb" : 512,
            "sizeInInch" : 2.5
        }
    """)
    val parser = Parser()
    val json = parser.parse(jsonString) as JsonObject

    assertThat(json)
      .hasSize(3)
      .containsEntry("name", "HDD")
      .containsEntry("capacityInGb", 512)
      .containsEntry("sizeInInch", 2.5)
}

Now, let’s make a test to see the JsonArray functionality:

@Test
fun givenJsonStringArray_whenParser_thenGetJsonArray() {
    val jsonString = StringBuilder("""
    [
        { "name" : "SDD" },
        { "madeIn" : "Taiwan" },
        { "warrantyInYears" : 5 }
    ]""")
    val parser = Parser()
    val json = parser.parse(jsonString) as JsonArray<JsonObject>

    assertSoftly({
        softly ->
            softly.assertThat(json).hasSize(3)
            softly.assertThat(json[0]["name"]).isEqualTo("SDD")
            softly.assertThat(json[1]["madeIn"]).isEqualTo("Taiwan")
            softly.assertThat(json[2]["warrantyInYears"]).isEqualTo(5)
    })
}

As we can see in both cases, we made the conversions without the definition of specific classes.

8. Conclusion

In this article, we explored the Klaxon library and its APIs to handle JSON documents.

As always, the source code is available over on GitHub.

Guide to the this Java Keyword


1. Introduction

In this tutorial, we’ll take a look at the this Java keyword.

In Java, the this keyword is a reference to the current object whose method is being called.

Let’s explore how and when we can use the keyword.

2. Disambiguating Field Shadowing

The keyword is useful for disambiguating instance variables from local parameters. The most common reason is when we have constructor parameters with the same name as instance fields:

public class KeywordTest {

    private String name;
    private int age;
    
    public KeywordTest(String name, int age) {
        this.name = name;
        this.age = age;
    }
}

As we can see here, we’re using this with the name and age instance fields – to distinguish them from parameters.

We can also use this when a local variable hides or shadows a field in the local scope. An example can be found in the Variable and Method Hiding article.

3. Referencing Constructors of the Same Class

From a constructor, we can use this() to call a different constructor of the same class. Here, we use this() for constructor chaining to reduce code duplication.

The most common use case is to call a default constructor from the parameterized constructor:

public KeywordTest(String name, int age) {
    this();
    
    // the rest of the code
}

Or, we can call the parameterized constructor from the no-argument constructor and pass some arguments:

public KeywordTest() {
    this("John", 27);
}

Note that this() must be the first statement in the constructor; otherwise, a compilation error will occur.
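Putting both directions together, here's a minimal self-contained sketch of constructor chaining; the Rectangle class is a hypothetical example, not from the snippets above:

```java
class Rectangle {

    private final int width;
    private final int height;

    // the no-argument constructor delegates to the parameterized one
    Rectangle() {
        this(1, 1);
    }

    Rectangle(int width, int height) {
        this.width = width;
        this.height = height;
    }

    int area() {
        return width * height;
    }
}
```

With this, new Rectangle() produces a 1x1 rectangle by chaining into the two-argument constructor, so the initialization logic lives in one place.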

4. Passing this as a Parameter

Here we have a printInstance() method that takes an instance of the enclosing class as an argument:

public KeywordTest() {
    printInstance(this);
}

public void printInstance(KeywordTest thisKeyword) {
    System.out.println(thisKeyword);
}

Inside the constructor, we invoke the printInstance() method and pass this, a reference to the current instance.

5. Returning this

We can also use the this keyword to return the current class instance from a method.

To avoid duplicating code, here's a full practical example of how this is implemented in the builder design pattern.
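As a quick sketch of the idea (the PizzaBuilder class here is purely illustrative, not taken from the linked article), each builder method returns this so calls can be chained fluently:

```java
class PizzaBuilder {

    private String size = "medium";
    private boolean extraCheese;

    // returning this from each setter enables method chaining
    PizzaBuilder size(String size) {
        this.size = size;
        return this;
    }

    PizzaBuilder extraCheese() {
        this.extraCheese = true;
        return this;
    }

    String build() {
        return size + (extraCheese ? " with extra cheese" : "");
    }
}
```

A call like new PizzaBuilder().size("large").extraCheese().build() then reads as a single fluent expression.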

6. The this Keyword Within the Inner Class

We also use this to access the outer class instance from within the inner class:

public class KeywordTest {

    private String name;

    class ThisInnerClass {

        boolean isInnerClass = true;

        public ThisInnerClass() {
            KeywordTest thisKeyword = KeywordTest.this;
            String outerString = KeywordTest.this.name;
        }
    }
}

Here, inside the constructor, we can get a reference to the KeywordTest instance with the KeywordTest.this expression. We can go even deeper and access instance variables like the KeywordTest.this.name field.

7. Conclusion

In this article, we explored the this keyword in Java.

As usual, the complete code is available over on GitHub.

Guide to the super Java Keyword

1. Introduction

In this quick tutorial, we’ll take a look at the super Java keyword.

Simply put, we can use the super keyword to access the parent class. 

Let’s explore the applications of the core keyword in the language.

2. The super Keyword with Constructors

We can use super() to call a parent constructor, with or without arguments. It must be the first statement in a constructor.

In our example, we use super(message) with the String argument:

public class SuperSub extends SuperBase {

    public SuperSub(String message) {
        super(message);
    }
}

Let’s create a child class instance and see what’s happening behind the scenes:

SuperSub child = new SuperSub("message from the child class");

The new keyword invokes the SuperSub constructor, which in turn calls the parent constructor first, passing the String argument to it.

3. Accessing Parent Class Variables

Let’s create a parent class with the message instance variable:

public class SuperBase {
    String message = "super class";
}

Now, we create a child class with the variable of the same name:

public class SuperSub extends SuperBase {

    String message = "child class";

    public void getParentMessage() {
        System.out.println(super.message);
    }
}

We can access the parent variable from the child class by using the super keyword.
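A condensed, self-contained sketch of the two classes above (with hypothetical Base and Child names, and methods returning rather than printing the values so they're easy to check) illustrates the difference between this.message and super.message:

```java
class Base {
    String message = "super class";
}

class Child extends Base {

    // this field shadows the parent's field of the same name
    String message = "child class";

    // super selects the parent's copy of the shadowed field
    String parentMessage() {
        return super.message;
    }

    String ownMessage() {
        return this.message;
    }
}
```

Both fields exist on a Child instance; super simply picks the one declared in the parent class.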

4. The super Keyword with Method Overriding

Before going further, we advise reviewing our method overriding guide.

Let’s add an instance method to our parent class:

public class SuperBase {

    String message = "super class";

    public void printMessage() {
        System.out.println(message);
    }
}

And override the printMessage() method in our child class:

public class SuperSub extends SuperBase {

    String message = "child class";

    public SuperSub() {
        super.printMessage();
        printMessage();
    }

    public void printMessage() {
        System.out.println(message);
    }
}

We can use super to access the overridden method from the child class. The super.printMessage() call in the constructor invokes the parent implementation from SuperBase.

5. Conclusion

In this article, we explored the super keyword.

As usual, the complete code is available over on GitHub.

Returning a JSON Response from a Servlet

1. Introduction

In this quick tutorial, we’ll create a small web application and explore how to return a JSON response from a Servlet.

2. Maven

For our web application, we’ll include javax.servlet-api and Gson dependencies in our pom.xml:

<dependency>
    <groupId>javax.servlet</groupId>
    <artifactId>javax.servlet-api</artifactId>
    <version>${javax.servlet.version}</version>
</dependency>
<dependency>
    <groupId>com.google.code.gson</groupId>
    <artifactId>gson</artifactId>
    <version>${gson.version}</version>
</dependency>

The latest versions of the dependencies can be found here: javax.servlet-api and gson.

We also need to configure a Servlet container to deploy our application to. This article is a good place to start on how to deploy a WAR on Tomcat.

3. Creating an Entity

Let’s create an Employee entity which will later be returned from the Servlet as JSON:

public class Employee {

    private int id;

    private String name;

    private String department;

    private long salary;

    // constructors
    // standard getters and setters
}

4. Entity to JSON

To send a JSON response from the Servlet, we first need to convert the Employee object into its JSON representation.

There are many Java libraries available to convert an object to its JSON representation and vice versa. The most prominent of them are the Gson and Jackson libraries. To learn about the differences between Gson and Jackson, have a look at this article.

A quick sample of converting an object to its JSON representation with Gson:

String employeeJsonString = new Gson().toJson(employee);

5. Response and Content Type

For HTTP Servlets, the correct procedure for populating the response is to:

  1. Retrieve an output stream from the response
  2. Fill in the response headers
  3. Write content to the output stream
  4. Commit the response

In the response, the Content-Type header tells the client the media type of the returned content.

For a JSON response, the content type should be application/json:

response.setContentType("application/json");
response.setCharacterEncoding("UTF-8");
PrintWriter out = response.getWriter();
out.print(employeeJsonString);
out.flush();

Response headers must always be set before the response is committed; the web container will ignore any attempt to set or add headers after that. Similarly, the character encoding must be set before getWriter() is called in order to take effect.

Calling flush() on the PrintWriter commits the response.

6. Example Servlet

Now let’s see an example Servlet that returns a JSON response:

@WebServlet(name = "EmployeeServlet", urlPatterns = "/employeeServlet")
public class EmployeeServlet extends HttpServlet {

    private Gson gson = new Gson();

    @Override
    protected void doGet(
      HttpServletRequest request, 
      HttpServletResponse response) throws IOException {
        
        Employee employee = new Employee(1, "Karan", "IT", 5000);
        String employeeJsonString = this.gson.toJson(employee);

        response.setContentType("application/json");
        response.setCharacterEncoding("UTF-8");

        PrintWriter out = response.getWriter();
        out.print(employeeJsonString);
        out.flush();
    }
}

7. Conclusion

This article showcased how to return a JSON response from a Servlet. This is helpful in web applications that use Servlets to implement REST Services.

All code samples shown here can be found on GitHub.

Spring Data Reactive Repositories with MongoDB

1. Introduction

In this tutorial, we’re going to see how to configure and implement database operations using Reactive Programming through Spring Data Reactive Repositories with MongoDB.

We’ll go over the basic usages of ReactiveCrudRepository, ReactiveMongoRepository, as well as ReactiveMongoTemplate.

Even though these implementations use reactive programming, that isn’t the primary focus of this tutorial.

2. Environment

In order to use Reactive MongoDB, we need to add the dependency to our pom.xml.

We’ll also add an embedded MongoDB for testing:

<dependencies>
    <!-- ... -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-mongodb-reactive</artifactId>
    </dependency>
    <dependency>
        <groupId>de.flapdoodle.embed</groupId>
        <artifactId>de.flapdoodle.embed.mongo</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>

3. Configuration

In order to activate the reactive support, we need to use @EnableReactiveMongoRepositories along with some infrastructure setup:

@EnableReactiveMongoRepositories
public class MongoReactiveApplication
  extends AbstractReactiveMongoConfiguration {

    @Bean
    public MongoClient mongoClient() {
        return MongoClients.create();
    }

    @Override
    protected String getDatabaseName() {
        return "reactive";
    }
}

Note that the above would be necessary if we were using the standalone MongoDB installation. But, as we’re using Spring Boot with embedded MongoDB in our example, the above configuration is not necessary.

4. Creating a Document

For the examples below, let’s create an Account class and annotate it with @Document to use it in the database operations:

@Document
public class Account {
 
    @Id
    private String id;
    private String owner;
    private Double value;
 
    // getters and setters
}

5. Using Reactive Repositories

We are already familiar with the repository programming model, with the CRUD methods already defined plus support for other common operations.

Now with the Reactive model, we get the same set of methods and specifications, except that we’ll deal with the results and parameters in a reactive way.

5.1. ReactiveCrudRepository

We can use this repository the same way as the blocking CrudRepository:

@Repository
public interface AccountCrudRepository 
  extends ReactiveCrudRepository<Account, String> {
 
    Flux<Account> findAllByValue(Double value);
    Mono<Account> findFirstByOwner(Mono<String> owner);
}

We can pass different types of arguments like plain (String), wrapped (Optional, Stream), or reactive (Mono, Flux) as we can see in the findFirstByOwner() method.

5.2. ReactiveMongoRepository

There’s also the ReactiveMongoRepository interface, which inherits from ReactiveCrudRepository and adds some new query methods:

@Repository
public interface AccountReactiveRepository 
  extends ReactiveMongoRepository<Account, String> { }

Using the ReactiveMongoRepository, we can query by example:

Flux<Account> accountFlux = repository
  .findAll(Example.of(new Account(null, "owner", null)));

As a result, we’ll get every Account that matches the example passed.

Our repositories also come with predefined methods for common database operations that we don’t need to implement:

Mono<Account> accountMono 
  = repository.save(new Account(null, "owner", 12.3));
Mono<Account> accountMono2 = repository
  .findById("123456");

5.3. RxJava2CrudRepository

With RxJava2CrudRepository, we have the same behavior as the ReactiveCrudRepository, but with the results and parameter types from RxJava:

@Repository
public interface AccountRxJavaRepository 
  extends RxJava2CrudRepository<Account, String> {
 
    Observable<Account> findAllByValue(Double value);
    Single<Account> findFirstByOwner(Single<String> owner);
}

5.4. Testing Our Basic Operations

In order to test our repository methods, we’ll use the test subscriber:

@Test
public void givenValue_whenFindAllByValue_thenFindAccount() {
    repository.save(new Account(null, "Bill", 12.3)).block();
    Flux<Account> accountFlux = repository.findAllByValue(12.3);

    StepVerifier
      .create(accountFlux)
      .assertNext(account -> {
          assertEquals("Bill", account.getOwner());
          assertEquals(Double.valueOf(12.3) , account.getValue());
          assertNotNull(account.getId());
      })
      .expectComplete()
      .verify();
}

@Test
public void givenOwner_whenFindFirstByOwner_thenFindAccount() {
    repository.save(new Account(null, "Bill", 12.3)).block();
    Mono<Account> accountMono = repository
      .findFirstByOwner(Mono.just("Bill"));

    StepVerifier
      .create(accountMono)
      .assertNext(account -> {
          assertEquals("Bill", account.getOwner());
          assertEquals(Double.valueOf(12.3) , account.getValue());
          assertNotNull(account.getId());
      })
      .expectComplete()
      .verify();
}

@Test
public void givenAccount_whenSave_thenSaveAccount() {
    Mono<Account> accountMono = repository.save(new Account(null, "Bill", 12.3));

    StepVerifier
      .create(accountMono)
      .assertNext(account -> assertNotNull(account.getId()))
      .expectComplete()
      .verify();
}

6. ReactiveMongoTemplate

Besides the repositories approach, we have the ReactiveMongoTemplate.

First of all, we need to register ReactiveMongoTemplate as a bean:

@Configuration
public class ReactiveMongoConfig {
 
    @Autowired
    MongoClient mongoClient;

    @Bean
    public ReactiveMongoTemplate reactiveMongoTemplate() {
        return new ReactiveMongoTemplate(mongoClient, "test");
    }
}

And then, we can inject this bean into our service to perform the database operations:

@Service
public class AccountTemplateOperations {
 
    @Autowired
    ReactiveMongoTemplate template;

    public Mono<Account> findById(String id) {
        return template.findById(id, Account.class);
    }
 
    public Flux<Account> findAll() {
        return template.findAll(Account.class);
    }

    public Mono<Account> save(Mono<Account> account) {
        return template.save(account);
    }
}

ReactiveMongoTemplate also has a number of methods that don’t relate to our domain; you can check them out in the documentation.

7. Conclusion

In this brief tutorial, we’ve covered the use of repositories and templates for reactive programming with MongoDB using the Spring Data Reactive Repositories framework.

The full source code for the examples is available over on GitHub.


Visitor Design Pattern in Java

1. Overview

In this tutorial, we’ll introduce one of the behavioral GoF design patterns – the Visitor.

First, we’ll explain its purpose and the problem it tries to solve.

Next, we’ll have a look at the Visitor’s UML diagram and the implementation of a practical example.

2. Visitor Design Pattern

The purpose of the Visitor pattern is to define a new operation without modifying an existing object structure.

Imagine that we have a composite object which consists of components. The object’s structure is fixed – we either can’t change it, or we don’t plan to add new types of elements to the structure.

Now, how could we add new functionality to our code without modification of existing classes?

The Visitor design pattern might be an answer. Simply put, all we have to do is add a method which accepts the visitor class to each element of the structure.

That way our components will allow the visitor implementation to “visit” them and perform any required action on that element.

In other words, we’ll extract the algorithm which will be applied to the object structure from the classes.

Consequently, we’ll make good use of the Open/Closed principle as we won’t modify the code, but we’ll still be able to extend the functionality by providing a new Visitor implementation.

3. UML Diagram

On the UML diagram above, we have two implementation hierarchies, specialized visitors, and concrete elements.

First of all, the client uses a Visitor implementation and applies it to the object structure. The composite object iterates over its components and applies the visitor to each of them.

Now, especially relevant is that concrete elements (ConcreteElementA and ConcreteElementB) are accepting a Visitor, simply allowing it to visit them.

Lastly, the accept() method is the same for all elements in the structure; it performs double dispatch by passing the element itself (via the this keyword) to the visitor’s visit method.
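To make the double dispatch concrete, here is a minimal self-contained sketch; the Shape, Circle, and Square names are illustrative and not part of the article’s example:

```java
interface Shape {
    String accept(ShapeVisitor v);
}

interface ShapeVisitor {
    String visit(Circle c);
    String visit(Square s);
}

class Circle implements Shape {
    // first dispatch selected this accept(); passing this selects
    // the visit(Circle) overload - the second dispatch
    public String accept(ShapeVisitor v) {
        return v.visit(this);
    }
}

class Square implements Shape {
    public String accept(ShapeVisitor v) {
        return v.visit(this);
    }
}

class NameVisitor implements ShapeVisitor {
    public String visit(Circle c) {
        return "circle";
    }

    public String visit(Square s) {
        return "square";
    }
}
```

Even when an element is held through a Shape reference, calling accept() routes processing to the visit() overload matching the element’s concrete type.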

4. Implementation

Our example will be a custom Document object that consists of JSON and XML concrete elements; the elements have a common abstract superclass, Element.

The Document class:

public class Document extends Element {

    List<Element> elements = new ArrayList<>();

    // ...

    @Override
    public void accept(Visitor v) {
        for (Element e : this.elements) {
            e.accept(v);
        }
    }
}

The Element class has an abstract method which accepts the Visitor interface:

public abstract void accept(Visitor v);

Therefore, when creating a new element, say the JsonElement, we’ll have to provide an implementation of this method.

However, due to the nature of the Visitor pattern, the implementation will be the same, so in most cases we can simply copy the boilerplate code from another existing element:

public class JsonElement extends Element {

    // ...

    public void accept(Visitor v) {
        v.visit(this);
    }
}

Since our elements can be visited by any visitor, let’s say that we want to process our Document elements, each of them in a different way depending on its class type.

Therefore, our visitor will have a separate method for the given type:

public class ElementVisitor implements Visitor {

    @Override
    public void visit(XmlElement xe) {
        System.out.println(
          "processing an XML element with uuid: " + xe.uuid);
    }

    @Override
    public void visit(JsonElement je) {
        System.out.println(
          "processing a JSON element with uuid: " + je.uuid);
    }
}

Here, our concrete visitor implements two methods, correspondingly one per each type of the Element.

This gives us access to the particular object of the structure on which we can perform necessary actions.

5. Testing

For testing purposes, let’s have a look at the VisitorDemo class:

public class VisitorDemo {

    public static void main(String[] args) {

        Visitor v = new ElementVisitor();

        Document d = new Document(generateUuid());
        d.elements.add(new JsonElement(generateUuid()));
        d.elements.add(new JsonElement(generateUuid()));
        d.elements.add(new XmlElement(generateUuid()));

        d.accept(v);
    }

    // ...
}

First, we create an ElementVisitor; it holds the algorithm we will apply to our elements.

Next, we set up our Document with proper components and apply the visitor which will be accepted by every element of an object structure.

The output would be like this:

processing a JSON element with uuid: fdbc75d0-5067-49df-9567-239f38f01b04
processing a JSON element with uuid: 81e6c856-ddaf-43d5-aec5-8ef977d3745e
processing an XML element with uuid: 091bfcb8-2c68-491a-9308-4ada2687e203

It shows that the visitor has visited each element of our structure; depending on the Element type, it dispatched the processing to the appropriate method and could retrieve the data from every underlying object.

6. Downsides

Like every design pattern, the Visitor has its downsides. In particular, its usage makes it more difficult to maintain the code if we need to add new elements to the object structure.

For example, if we add a new YamlElement, we need to update all existing visitors with a new method for processing this element. If we have ten or more concrete visitors, updating all of them may be cumbersome.

Other than this, when using this pattern, the business logic related to one particular object gets spread over all visitor implementations.

7. Conclusion

The Visitor pattern is great for separating the algorithm from the classes on which it operates. Besides that, it makes adding a new operation easier: we just provide a new implementation of the Visitor.

Furthermore, we don’t depend on component interfaces; if they differ, that’s fine, since we have a separate algorithm for processing each concrete element.

Moreover, the Visitor can eventually aggregate data based on the element it traverses.

To see a more specialized version of the Visitor design pattern, check out visitor pattern in Java NIO – the usage of the pattern in the JDK.

As usual, the complete code is available on the GitHub project.

The Thread.join() Method in Java

1. Overview

In this tutorial, we’ll discuss the different join() methods in the Thread class. We’ll go into the details of these methods and some example code.

Like the wait() and notify() methods, join() is another mechanism of inter-thread synchronization.

You can have a quick look at this tutorial to read more about wait() and notify().

2. The Thread.join() Method

The join method is defined in the Thread class:

“public final void join() throws InterruptedException
Waits for this thread to die.”

When we invoke the join() method on a thread, the calling thread goes into a waiting state. It remains in a waiting state until the referenced thread terminates.

We can see this behavior in the following code:

class SampleThread extends Thread {
    public int processingCount = 0;

    SampleThread(int processingCount) {
        this.processingCount = processingCount;
        LOGGER.info("Thread Created");
    }

    @Override
    public void run() {
        LOGGER.info("Thread " + this.getName() + " started");
        while (processingCount > 0) {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                LOGGER.info("Thread " + this.getName() + " interrupted");
            }
            processingCount--;
        }
        LOGGER.info("Thread " + this.getName() + " exiting");
    }
}

@Test
public void givenStartedThread_whenJoinCalled_waitsTillCompletion() 
  throws InterruptedException {
    Thread t2 = new SampleThread(1);
    t2.start();
    LOGGER.info("Invoking join");
    t2.join();
    LOGGER.info("Returned from join");
    assertFalse(t2.isAlive());
}

We should expect results similar to the following when executing the code:

INFO: Thread Created
INFO: Invoking join
INFO: Thread Thread-1 started
INFO: Thread Thread-1 exiting
INFO: Returned from join

The join() method may also return if the referenced thread is interrupted; in this case, the method throws an InterruptedException.

Finally, if the referenced thread has already terminated or hasn’t been started, the call to the join() method returns immediately:

Thread t1 = new SampleThread(0);
t1.join();  //returns immediately

3. Thread.join() Methods with Timeout

The join() method will keep waiting if the referenced thread is blocked or is taking too long to process. This can become an issue as the calling thread will become non-responsive. To handle these situations, we use overloaded versions of the join() method that allow us to specify a timeout period.

There are two timed versions which overload the join() method:

“public final void join(long millis) throws InterruptedException
Waits at most millis milliseconds for this thread to die. A timeout of 0 means to wait forever.”

“public final void join(long millis,int nanos) throws InterruptedException
Waits at most millis milliseconds plus nanos nanoseconds for this thread to die.”

We can use the timed join() as below:

@Test
public void givenStartedThread_whenTimedJoinCalled_waitsUntilTimedout()
  throws InterruptedException {
    Thread t3 = new SampleThread(10);
    t3.start();
    t3.join(1000);
    assertTrue(t3.isAlive());
}

In this case, the calling thread waits for roughly 1 second for the thread t3 to finish. If the thread t3 does not finish in this time period, the join() method returns control to the calling method.

Timed join() is dependent on the OS for timing. So, we cannot assume that join() will wait exactly as long as specified.

4. Thread.join() Methods and Synchronization

In addition to waiting until termination, calling the join() method has a synchronization effect. join() creates a happens-before relationship:

“All actions in a thread happen-before any other thread successfully returns from a join() on that thread.”

This means that when a thread t1 calls t2.join(), then all changes done by t2 are visible in t1 on return. However, if we do not invoke join() or use other synchronization mechanisms, we do not have any guarantee that changes in the other thread will be visible to the current thread even if the other thread has completed.

Hence, even though a join() call on a thread in the terminated state returns immediately, we still need to call it in some situations for its synchronization effect.

We can see an example of improperly synchronized code below:

SampleThread t4 = new SampleThread(10);
t4.start();
// busy-wait; not guaranteed to stop even if t4 finishes, since the
// change to processingCount may never become visible to this thread
do {
    // no synchronization with t4
} while (t4.processingCount > 0);

To properly synchronize the above code, we can add a timed t4.join() inside the loop or use some other synchronization mechanism.
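For instance, a minimal self-contained sketch using a plain Thread (rather than the article’s SampleThread) shows how join() both blocks until completion and guarantees visibility of the worker’s writes:

```java
class JoinVisibilityDemo {

    static int result = 0; // deliberately not volatile

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> result = 42);
        worker.start();

        // join() blocks until worker dies and establishes a happens-before
        // edge, so the write to result is guaranteed to be visible here
        worker.join();

        System.out.println(result); // prints 42
    }
}
```

Without the join() (or another synchronization mechanism), the main thread might never observe the worker’s write to result.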

5. Conclusion

The join() method is quite useful for inter-thread synchronization. In this article, we discussed the join() methods and their behavior, and reviewed code that uses them.

As always, the full source code can be found over on GitHub.

Method Parameter Reflection in Java

1. Overview

Method Parameter Reflection support was added in Java 8. Simply put, it provides support for getting the names of parameters at runtime.

In this quick tutorial, we’ll take a look at how to access parameter names for constructors and methods at runtime – using reflection.

2. Compiler Argument 

In order to be able to access parameter name information, we must opt in explicitly.

To do this, we specify the -parameters option during compilation.

For a Maven project, we can declare this option in the pom.xml:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <version>3.1</version>
  <configuration>
    <source>1.8</source>
    <target>1.8</target>
    <compilerArgument>-parameters</compilerArgument>
  </configuration>
</plugin>

3. Example Class

We’ll use a contrived Person class with a single property called fullName to demonstrate:

public class Person {

    private String fullName;

    public Person(String fullName) {
        this.fullName = fullName;
    }

    public void setFullName(String fullName) {
        this.fullName = fullName;
    }

    // other methods
}

4. Usage

The Parameter class is new in Java 8 and has a variety of interesting methods. If the -parameters compiler option was provided, the isNamePresent() method will return true.

To access the name of a parameter, we can simply call getName():

@Test
public void whenGetConstructorParams_thenOk() 
  throws NoSuchMethodException, SecurityException {
 
    List<Parameter> parameters 
        = Arrays.asList(Person.class.getConstructor(String.class).getParameters());
    Optional<Parameter> parameter 
        = parameters.stream().filter(Parameter::isNamePresent).findFirst();
    assertThat(parameter.get().getName()).isEqualTo("fullName");
}

@Test
public void whenGetMethodParams_thenOk() 
  throws NoSuchMethodException, SecurityException {
 
    List<Parameter> parameters = Arrays.asList(
      Person.class.getMethod("setFullName", String.class).getParameters());
    Optional<Parameter> parameter= parameters.stream()
      .filter(Parameter::isNamePresent)
      .findFirst();
 
    assertThat(parameter.get().getName()).isEqualTo("fullName");
}

5. Conclusion

In this quick article, we looked at the new reflection support for parameter names that became available in Java 8.

The most obvious use case for this information is to help implement auto-complete support within editors.

As always, the source code can be found over on GitHub.

Performance of Java Mapping Frameworks

1. Introduction

Creating large Java applications composed of multiple layers requires using multiple models, such as a persistence model, a domain model, or so-called DTOs. Using multiple models for different application layers requires us to provide a way of mapping between beans.

Doing this manually can quickly produce a lot of boilerplate code and consume a lot of time. Luckily for us, there are multiple object mapping frameworks for Java.

In this tutorial, we’re going to compare the performance of the most popular Java mapping frameworks.

2. Mapping Frameworks

2.1. Dozer

Dozer is a mapping framework that uses recursion to copy data from one object to another. The framework is able not only to copy properties between beans but also to automatically convert between different types.

To use the Dozer framework, we need to add the following dependency to our project:

<dependency>
    <groupId>net.sf.dozer</groupId>
    <artifactId>dozer</artifactId>
    <version>5.5.1</version>
</dependency>

More information about the usage of the Dozer framework can be found in this article.

The documentation of the framework can be found here.

2.2. Orika

Orika is a bean to bean mapping framework that recursively copies data from one object to another.

Orika’s general principle of operation is similar to Dozer’s. The main difference between the two is that Orika uses bytecode generation, which allows generating faster mappers with minimal overhead.

To use it, we need to add the following dependency to our project:

<dependency>
    <groupId>ma.glasnost.orika</groupId>
    <artifactId>orika-core</artifactId>
    <version>1.5.2</version>
</dependency>

More detailed information about the usage of the Orika can be found in this article.

The actual documentation of the framework can be found here.

2.3. MapStruct

MapStruct is a code generator that generates bean mapper classes automatically.

MapStruct also has the ability to convert between different data types. More information on how to use it can be found in this article.

To add MapStruct to our project, we need to include the following dependency:

<dependency>
    <groupId>org.mapstruct</groupId>
    <artifactId>mapstruct-processor</artifactId>
    <version>1.2.0.Final</version>
</dependency>

The documentation of the framework can be found here.

2.4. ModelMapper

ModelMapper is a framework that aims to simplify object mapping by determining how objects map to each other based on conventions. It provides a type-safe and refactoring-safe API.

More information about the framework can be found in the documentation.

To include the ModelMapper to our project we need to add the following dependency:

<dependency>
  <groupId>org.modelmapper</groupId>
  <artifactId>modelmapper</artifactId>
  <version>1.1.0</version>
</dependency>

2.5. JMapper

JMapper is a mapping framework that aims to provide easy-to-use, high-performance mapping between Java beans.

The framework aims to apply the DRY principle using annotations and relational mapping.

The framework allows for different ways of configuration: annotation-based, XML-based, or API-based.

More information about the framework can be found in its documentation.

To include the JMapper in our project we need to add its dependency:

<dependency>
    <groupId>com.googlecode.jmapper-framework</groupId>
    <artifactId>jmapper-core</artifactId>
    <version>1.6.0.1</version>
</dependency>

3. Testing Model

To be able to test mapping properly we need to have a source and target models. We’ve created two testing models.

The first one is a simple POJO with one String field. This allows us to compare the frameworks in a simpler case and to check whether anything changes when we use more complicated beans.

The simple source model looks like this:

public class SourceCode {
    String code;
    // getter and setter
}

And its destination is quite similar:

public class DestinationCode {
    String code;
    // getter and setter
}
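
For reference, here's a hypothetical hand-written baseline for the simple model, with the getters and setters spelled out; copying each matching property by hand is exactly the boilerplate the frameworks below generate or automate for us:

```java
// Hand-written mapping baseline for the simple model.
// The class and field names mirror the models above.
class SourceCode {
    private String code;
    public String getCode() { return code; }
    public void setCode(String code) { this.code = code; }
}

class DestinationCode {
    private String code;
    public String getCode() { return code; }
    public void setCode(String code) { this.code = code; }
}

class ManualConverter {
    // Copies each matching property explicitly -- the work
    // the mapping frameworks automate for us.
    DestinationCode convert(SourceCode source) {
        DestinationCode destination = new DestinationCode();
        destination.setCode(source.getCode());
        return destination;
    }
}
```

With one field this is trivial, but for the real-life order model below the same approach means a line per field, which is where the frameworks start to pay off.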

The real-life example of a source bean looks like this:

public class SourceOrder {
    private String orderFinishDate;
    private PaymentType paymentType;
    private Discount discount;
    private DeliveryData deliveryData;
    private User orderingUser;
    private List<Product> orderedProducts;
    private Shop offeringShop;
    private int orderId;
    private OrderStatus status;
    private LocalDate orderDate;
    // standard getters and setters
}

And the target class looks like this:

public class Order {
    private User orderingUser;
    private List<Product> orderedProducts;
    private OrderStatus orderStatus;
    private LocalDate orderDate;
    private LocalDate orderFinishDate;
    private PaymentType paymentType;
    private Discount discount;
    private int shopId;
    private DeliveryData deliveryData;
    private Shop offeringShop;
    // standard getters and setters
}

The whole model structure can be found here.

4. Converters

To simplify the design of the testing setup, we've created the Converter interface, which looks like this:

public interface Converter {
    Order convert(SourceOrder sourceOrder);
    DestinationCode convert(SourceCode sourceCode);
}

And all our custom mappers will implement this interface.

4.1. OrikaConverter

Orika allows for a full API implementation, which greatly simplifies the creation of the mapper:

public class OrikaConverter implements Converter {
    private MapperFacade mapperFacade;

    public OrikaConverter() {
        MapperFactory mapperFactory = new DefaultMapperFactory
          .Builder().build();

        mapperFactory.classMap(Order.class, SourceOrder.class)
          .field("orderStatus", "status").byDefault().register();
        mapperFacade = mapperFactory.getMapperFacade();
    }

    @Override
    public Order convert(SourceOrder sourceOrder) {
        return mapperFacade.map(sourceOrder, Order.class);
    }

    @Override
    public DestinationCode convert(SourceCode sourceCode) {
        return mapperFacade.map(sourceCode, DestinationCode.class);
    }
}

4.2. DozerConverter

Dozer requires an XML mapping file with the following sections:

<mappings xmlns="http://dozer.sourceforge.net"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://dozer.sourceforge.net
  http://dozer.sourceforge.net/schema/beanmapping.xsd">

    <mapping>
        <class-a>com.baeldung.performancetests.model.source.SourceOrder</class-a>
        <class-b>com.baeldung.performancetests.model.destination.Order</class-b>
        <field>
            <a>status</a>
            <b>orderStatus</b>
        </field>
    </mapping>
    <mapping>
        <class-a>com.baeldung.performancetests.model.source.SourceCode</class-a>
        <class-b>com.baeldung.performancetests.model.destination.DestinationCode</class-b>
    </mapping>
</mappings>

After defining the XML mapping, we can use it from code:

public class DozerConverter implements Converter {
    private final Mapper mapper;

    public DozerConverter() {
        DozerBeanMapper mapper = new DozerBeanMapper();
        mapper.addMapping(
          DozerConverter.class.getResourceAsStream("/dozer-mapping.xml"));
        this.mapper = mapper;
    }

    @Override
    public Order convert(SourceOrder sourceOrder) {
        return mapper.map(sourceOrder,Order.class);
    }

    @Override
    public DestinationCode convert(SourceCode sourceCode) {
        return mapper.map(sourceCode, DestinationCode.class);
    }
}

4.3. MapStructConverter

The MapStruct definition is quite simple, as it's entirely based on code generation:

@Mapper
public interface MapStructConverter extends Converter {
    MapStructConverter MAPPER = Mappers.getMapper(MapStructConverter.class);

    @Mapping(source = "status", target = "orderStatus")
    @Override
    Order convert(SourceOrder sourceOrder);

    @Override
    DestinationCode convert(SourceCode sourceCode);
}

4.4. JMapperConverter

JMapperConverter requires more work. We start by implementing the interface:

public class JMapperConverter implements Converter {
    JMapper realLifeMapper;
    JMapper simpleMapper;
 
    public JMapperConverter() {
        JMapperAPI api = new JMapperAPI()
          .add(JMapperAPI.mappedClass(Order.class));
        realLifeMapper = new JMapper(Order.class, SourceOrder.class, api);
        JMapperAPI simpleApi = new JMapperAPI()
          .add(JMapperAPI.mappedClass(DestinationCode.class));
        simpleMapper = new JMapper(
          DestinationCode.class, SourceCode.class, simpleApi);
    }

    @Override
    public Order convert(SourceOrder sourceOrder) {
        return (Order) realLifeMapper.getDestination(sourceOrder);
    }

    @Override
    public DestinationCode convert(SourceCode sourceCode) {
        return (DestinationCode) simpleMapper.getDestination(sourceCode);
    }
}

We also need to add @JMap annotations to each field of the target class. Also, JMapper can't convert between enum types on its own, so it requires us to create custom mapping functions:

@JMapConversion(from = "paymentType", to = "paymentType")
public PaymentType conversion(com.baeldung.performancetests.model.source.PaymentType type) {
    PaymentType paymentType = null;
    switch(type) {
        case CARD:
            paymentType = PaymentType.CARD;
            break;

        case CASH:
            paymentType = PaymentType.CASH;
            break;

        case TRANSFER:
            paymentType = PaymentType.TRANSFER;
            break;
    }
    return paymentType;
}

4.5. ModelMapperConverter

ModelMapperConverter only requires us to provide the classes that we want to map:

public class ModelMapperConverter implements Converter {
    private ModelMapper modelMapper;

    public ModelMapperConverter() {
        modelMapper = new ModelMapper();
    }

    @Override
    public Order convert(SourceOrder sourceOrder) {
       return modelMapper.map(sourceOrder, Order.class);
    }

    @Override
    public DestinationCode convert(SourceCode sourceCode) {
        return modelMapper.map(sourceCode, DestinationCode.class);
    }
}

5. Simple Model Testing

For the performance testing, we can use the Java Microbenchmark Harness (JMH); more information about how to use it can be found in this article.

We've created a separate benchmark for each Converter, specifying BenchmarkMode as Mode.All.
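
JMH takes care of warmup, forking, and statistical aggregation for us. To give a flavor of what the AverageTime mode measures, here's a deliberately naive, hand-rolled sketch; the mapping lambda is just a stand-in for a converter call, and this is an illustration, not a substitute for JMH:

```java
import java.util.function.UnaryOperator;

public class NaiveAverageTime {
    public static void main(String[] args) {
        // Stand-in for a mapping call; a real benchmark would invoke
        // something like converter.convert(sourceOrder) here.
        UnaryOperator<String> convert = s -> s.toUpperCase();

        // Warm up so the JIT compiles the hot path before we measure.
        for (int i = 0; i < 100_000; i++) {
            convert.apply("code");
        }

        int iterations = 1_000_000;
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            convert.apply("code");
        }
        double avgNs = (System.nanoTime() - start) / (double) iterations;
        System.out.println("avg ns/op: " + avgNs);
    }
}
```

JMH's AverageTime mode performs the same kind of per-operation averaging, but with proper process forking, dead-code-elimination guards, and statistical reporting, which is why we use it instead of a loop like this.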

5.1. AverageTime

JMH returned the following results for average running time (less is better):

 

This benchmark shows clearly that MapStruct has by far the best average running time.

5.2. Throughput

In this mode, the benchmark returns the number of operations per second. We received the following results (more is better):

Again, MapStruct was the fastest among all the frameworks.

5.3. SingleShotTime

This mode allows measuring the time of a single operation from its beginning to its end. The benchmark gave the following result (less is better):

Again, MapStruct was the fastest, however, ModelMapper gave better results than in previous tests.

5.4. SampleTime

This mode allows sampling the time of each operation. The results for three different percentiles look like this:

All benchmarks have shown that MapStruct has the best performance. JMapper is also quite a good choice, although it gave significantly worse results for SingleShotTime.

6. Real Life Model Testing

Just like with the simple model, we've used JMH and created a separate benchmark for each Converter, specifying BenchmarkMode as Mode.All.

6.1. AverageTime

JMH returned the following results for average running time (less is better):

6.2. Throughput

In this mode, the benchmark returns the number of operations per second. For each of the mappers, we received the following results (more is better):

6.3. SingleShotTime

This mode allows measuring the time of a single operation from its beginning to its end. The benchmark gave the following results (less is better):

6.4. SampleTime

This mode allows sampling the time of each operation. Sampling results are split into percentiles; we'll present the results for three different percentiles: p0.90, p0.999, and p1.00:

While the exact results of the simple example and the real-life example were clearly different, they do follow the same trend. Both examples gave similar results in terms of which framework is the fastest and which is the slowest.

The best performance clearly belongs to MapStruct, and the worst to Orika.

7. Summary

In this article, we've conducted performance tests of five popular Java bean mapping frameworks: ModelMapper, MapStruct, Orika, Dozer, and JMapper.

As always, code samples can be found over on GitHub.

Spring – Injecting Collections


1. Introduction

In this tutorial, we’re going to show how to inject Java collections using the Spring framework.

Simply put, we'll demonstrate examples with the List, Map, and Set collection interfaces.

2. List with @Autowired

Let’s create an example bean:

public class CollectionsBean {

    @Autowired
    private List<String> nameList;

    public void printNameList() {
        System.out.println(nameList);
    }
}

Here, we declared the nameList property to hold a List of String values.

In this example, we use field injection for nameList, so we annotate it with @Autowired.

To learn more about the dependency injection or different ways to implement it, check out this guide.

Afterward, we register the CollectionsBean in the configuration class:

@Configuration
public class CollectionConfig {

    @Bean
    public CollectionsBean getCollectionsBean() {
        return new CollectionsBean();
    }

    @Bean
    public List<String> nameList() {
        return Arrays.asList("John", "Adam", "Harry");
    }
}

Besides registering the CollectionsBean, we also define the list itself by initializing and returning it from a separate @Bean method.

Now, we can test the results:

ApplicationContext context = new AnnotationConfigApplicationContext(CollectionConfig.class);
CollectionsBean collectionsBean = context.getBean(
  CollectionsBean.class);
collectionsBean.printNameList();

The output of the printNameList() method:

[John, Adam, Harry]
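
If we're using XML-based configuration instead of Java config, the same list can be declared with Spring's util namespace; a minimal sketch, assuming the bean id matches the nameList field used above:

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/util
           http://www.springframework.org/schema/util/spring-util.xsd">

    <util:list id="nameList" value-type="java.lang.String">
        <value>John</value>
        <value>Adam</value>
        <value>Harry</value>
    </util:list>
</beans>
```

Spring then resolves the @Autowired List<String> against this util:list bean, just as it does with the @Bean method above.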

3. Set with Constructor Injection

To set up the same example with the Set collection, let’s modify the CollectionsBean class:

public class CollectionsBean {

    private Set<String> nameSet;

    public CollectionsBean(Set<String> strings) {
        this.nameSet = strings;
    }

    public void printNameSet() {
        System.out.println(nameSet);
    }
}

This time, we want to use constructor injection to initialize the nameSet property. This also requires changes in the configuration class:

@Bean
public CollectionsBean getCollectionsBean() {
    return new CollectionsBean(new HashSet<>(Arrays.asList("John", "Adam", "Harry")));
}

4. Map with Setter Injection

Following the same logic, let’s add the nameMap field to demonstrate the map injection:

public class CollectionsBean {

    private Map<Integer, String> nameMap;

    @Autowired
    public void setNameMap(Map<Integer, String> nameMap) {
        this.nameMap = nameMap;
    }

    public void printNameMap() {
        System.out.println(nameMap);
    }
}

This time, we use a setter method to apply setter-based dependency injection. We also need to add the Map initialization code to the configuration class:

@Bean
public Map<Integer, String> nameMap(){
    Map<Integer, String>  nameMap = new HashMap<>();
    nameMap.put(1, "John");
    nameMap.put(2, "Adam");
    nameMap.put(3, "Harry");
    return nameMap;
}

The results after invoking the printNameMap() method:

{1=John, 2=Adam, 3=Harry}

5. Injecting Bean References

Let’s look at an example where we inject bean references as elements of the collection.

First, let’s create the bean:

public class BaeldungBean {

    private String name;

    // constructor
}

And add a List of BaeldungBean as a property to the CollectionsBean class:

public class CollectionsBean {

    @Autowired(required = false)
    private List<BaeldungBean> beanList;

    public void printBeanList() {
        System.out.println(beanList);
    }
}

Next, we add the Java configuration factory methods for each BaeldungBean element:

@Configuration
public class CollectionConfig {

    @Bean
    public BaeldungBean getElement() {
        return new BaeldungBean("John");
    }

    @Bean
    public BaeldungBean getAnotherElement() {
        return new BaeldungBean("Adam");
    }

    @Bean
    public BaeldungBean getOneMoreElement() {
        return new BaeldungBean("Harry");
    }

    // other factory methods
}

The Spring container injects the individual beans of the BaeldungBean type into one collection.

To test this, we invoke the collectionsBean.printBeanList() method. The output shows the bean names as list elements:

[John, Harry, Adam]

Now, let's consider a scenario where no BaeldungBean is registered in the application context. In that case, Spring will throw an exception because the required dependency is missing.

We can use @Autowired(required = false) to mark the dependency as optional. Then, instead of throwing an exception, Spring leaves beanList uninitialized, and its value stays null.

If we need an empty list instead of null, we can initialize beanList with a new ArrayList:

@Autowired(required = false)
private List<BaeldungBean> beanList = new ArrayList<>();

5.1. Using @Order to Sort Beans

We can specify the order of the beans while injecting into the collection.

For that purpose, we use the @Order annotation and specify the index:

@Configuration
public class CollectionConfig {

    @Bean
    @Order(2)
    public BaeldungBean getElement() {
        return new BaeldungBean("John");
    }

    @Bean
    @Order(3)
    public BaeldungBean getAnotherElement() {
        return new BaeldungBean("Adam");
    }

    @Bean
    @Order(1)
    public BaeldungBean getOneMoreElement() {
        return new BaeldungBean("Harry");
    }
}

The Spring container will first inject the bean with the name “Harry”, as it has the lowest order value.

It will then inject the “John”, and finally, the “Adam” bean:

[Harry, John, Adam]
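
Under the hood, Spring sorts the candidate beans in ascending order of their order value before injecting them. A plain-Java sketch of that rule (illustrative only, not Spring's actual OrderComparator) looks like this:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class OrderingSketch {

    // A (order, name) pair standing in for an @Order-annotated bean definition.
    record OrderedBean(int order, String name) {}

    public static void main(String[] args) {
        // Declaration order matches the configuration class above.
        List<OrderedBean> beans = new ArrayList<>(List.of(
            new OrderedBean(2, "John"),
            new OrderedBean(3, "Adam"),
            new OrderedBean(1, "Harry")));

        // The lowest order value is injected first, as in the output above.
        beans.sort(Comparator.comparingInt(OrderedBean::order));

        beans.forEach(b -> System.out.println(b.name()));
        // prints Harry, John, Adam (one name per line)
    }
}
```

This is why the declaration order of the @Bean methods doesn't matter once @Order is present: only the order values decide the final position in the collection.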

Learn more about @Order in this guide.

5.2. Using @Qualifier to Select Beans

We can use @Qualifier to select the beans to be injected into a specific collection: only the beans marked with a matching @Qualifier name will be included.

Here’s how we use it for the injection point:

@Autowired
@Qualifier("CollectionsBean")
private List<BaeldungBean> beanList;

Then, we mark with the same @Qualifier the beans that we want to inject into the List:

@Configuration
public class CollectionConfig {

    @Bean
    @Qualifier("CollectionsBean")
    public BaeldungBean getElement() {
        return new BaeldungBean("John");
    }

    @Bean
    public BaeldungBean getAnotherElement() {
        return new BaeldungBean("Adam");
    }

    @Bean
    public BaeldungBean getOneMoreElement() {
        return new BaeldungBean("Harry");
    }

    // other factory methods
}

In this example, we specify that the bean with the name “John” will be injected into the List qualified as “CollectionsBean”. Here's how we test the results:

ApplicationContext context = new AnnotationConfigApplicationContext(CollectionConfig.class);
CollectionsBean collectionsBean = context.getBean(CollectionsBean.class);
collectionsBean.printBeanList();

From the output, we see that our collection has only one element:

[John]

6. Summary

In this guide, we learned how to inject different types of Java collections using the Spring framework.

We also examined injection with reference types and how to select or order them inside of the collection.

As usual, the complete code is available in the GitHub project.
