
Java Localization – Formatting Messages


1. Introduction

In this tutorial, we’ll consider how we can localize and format messages based on Locale.

We’ll use both Java’s MessageFormat and the third-party library, ICU.

2. Localization Use Case

When our application acquires a wide audience of users from all over the world, we may naturally want to show different messages based on the user’s preferences.

The first and most important aspect is the language that the user speaks. Others might include currency, number and date formats. Last but not least are cultural preferences: what is acceptable for users from one country might be intolerable for others.

Suppose that we have an email client and we want to show notifications when a new message arrives.

A simple example of such a message might be this one:

Alice has sent you a message.

It’s fine for English-speaking users, but non-English speakers might not be that happy. For example, French-speaking users would prefer to see this message:

Alice vous a envoyé un message.

While Polish people would be pleased by seeing this one:

Alice wysłała ci wiadomość.

What if we want a properly formatted notification even when Alice sends not just one message, but a few of them?

We might be tempted to address the issue by concatenating various pieces in a single string, like this:

String message = "Alice has sent " + quantity + " messages";

The situation easily gets out of control once not only Alice but also Bob might send the messages:

Bob has sent two messages.
Bob a envoyé deux messages.
Bob wysłał dwie wiadomości.

Notice how the verb changes in the Polish version (wysłała vs wysłał). It illustrates that naive string concatenation is rarely acceptable for localizing messages.

As we see, we get two types of issues: one is related to translations and the other is related to formats. Let’s address them in the following sections.

3. Message Localization

We may define the localization, or l10n, of an application as the process of adapting the application to the user’s comfort. Sometimes, the term internationalization, or i18n, is also used.

In order to localize the application, first of all, let’s eliminate all hardcoded messages by moving them into our resources folder:

Each file should contain key-value pairs with the messages in the corresponding language. For example, file messages_en.properties should contain the following pair:

label=Alice has sent you a message.

messages_pl.properties should contain the following pair:

label=Alice wysłała ci wiadomość.

Similarly, other files assign appropriate values to the key label. Now, in order to pick up the English version of the notification, we can use ResourceBundle:

ResourceBundle bundle = ResourceBundle.getBundle("messages", Locale.UK);
String message = bundle.getString("label");

The value of the variable message will be “Alice has sent you a message.”

Java’s Locale class contains shortcuts to frequently used languages and countries.

In the case of the Polish language, we might write the following:

ResourceBundle bundle
  = ResourceBundle.getBundle("messages", Locale.forLanguageTag("pl-PL"));
String message = bundle.getString("label");

Let’s just mention that if we provide no locale, the system will use the default one. We may find more details on this issue in our article “Internationalization and Localization in Java 8“. Then, among the available translations, the system will choose the one that is most similar to the currently active locale.

Placing the messages in the resource files is a good step towards rendering the application more user-friendly. It makes it easier to translate the whole application for the following reasons:

  1. a translator does not have to look through the application in search of the messages
  2. a translator can see the whole phrase which helps to grasp the context and hence facilitates a better translation
  3. we don’t have to recompile the whole application when a translation for a new language is ready

4. Message Format

Even though we have moved the messages from the code into a separate location, they still contain some hardcoded information. It would be nice to be able to customize the names and numbers in the messages in such a way that they remain grammatically correct.

We may define the formatting as a process of rendering the string template by substituting the placeholders by their values.

In the following sections, we’ll consider two solutions that allow us to format the messages.

4.1. Java’s MessageFormat

In order to format strings, Java defines numerous format methods in java.lang.String. But we can get even more support via java.text.MessageFormat.

To illustrate, let’s create a pattern and feed it to a MessageFormat instance:

String pattern = "On {0, date}, {1} sent you "
  + "{2, choice, 0#no messages|1#a message|2#two messages|2<{2, number, integer} messages}.";
MessageFormat formatter = new MessageFormat(pattern, Locale.UK);

The pattern string has slots for three placeholders.

If we supply each value:

String message = formatter.format(new Object[] {date, "Alice", 2});

Then MessageFormat will fill in the template and render our message:

On 27-Apr-2019, Alice sent you two messages.

4.2. MessageFormat Syntax

From the example above, we see that the message pattern:

pattern = "On {...}, {..} sent you {...}.";

contains placeholders which are the curly brackets {…} with a required argument index and two optional arguments, type and style:

{index}
{index, type}
{index, type, style}

The placeholder’s index corresponds to the position of an element from the array of objects that we want to insert.

When present, the type and style may take the following values:

type      style
number    integer, currency, percent, custom format
date      short, medium, long, full, custom format
time      short, medium, long, full, custom format
choice    custom format

The names of the types and styles largely speak for themselves, but we can consult the official documentation for more details.

Let’s take a closer look, though, at the custom format.

In the example above, we used the following format expression:

{2, choice, 0#no messages|1#a message|2#two messages|2<{2, number, integer} messages}

In general, the choice style has the form of options separated by the vertical bar (or pipe):

{index, choice, k1#v1|k2#v2|...|kn#vn}

Inside each option, the match value ki and the string vi are separated by #, except for the last option. Notice that we may nest other patterns into the string vi, as we did for the last option:

{2, choice, ...|2<{2, number, integer} messages}

The choice type is numeric-based, so there is a natural ordering k1 < k2 < … < kn of the match values that splits the number line into intervals:

If we pass a value k that belongs to the interval [ki, ki+1) (the left end is included, the right one is excluded), then the value vi is selected.

Let’s consider the ranges of the choice style in more detail. To this end, we take this pattern:

pattern = "You''ve got "
  + "{0, choice, 0#no messages|1#a message|2#two messages|2<{0, number, integer} messages}.";

and pass various values for its unique placeholder:

n            message
-1, 0, 0.5   You’ve got no messages.
1, 1.5       You’ve got a message.
2            You’ve got two messages.
2.5          You’ve got 2 messages.
5            You’ve got 5 messages.
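To double-check these ranges programmatically, we can build a formatter from the pattern above and feed it a few sample quantities (a quick sketch reusing the pattern variable from this section):

MessageFormat formatter = new MessageFormat(pattern, Locale.UK);
String none = formatter.format(new Object[] { 0 }); // "You've got no messages."
String one = formatter.format(new Object[] { 1 });  // "You've got a message."
String many = formatter.format(new Object[] { 5 }); // "You've got 5 messages."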

4.3. Making Things Better

So, we’re now formatting our messages. But, the message itself remains hardcoded.

From the previous section, we know that we should extract the string patterns into the resources. To separate our concerns, let’s create another bunch of resource files called formats:

In those, we’ll create a key called label with language-specific content.

For example, in the English version, we’ll put the following string:

label=On {0, date, full} {1} has sent you \
  {2, choice, 0#nothing|1#a message|2#two messages|2<{2,number,integer} messages}.

We should slightly modify the French version because of the zero message case:

label={0, date, short}, {1}{2, choice, 0# ne|0<} vous a envoyé \
  {2, choice, 0#aucun message|1#un message|2#deux messages|2<{2,number,integer} messages}.

And we’d need to do similar modifications as well in the Polish and Italian versions.
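Putting the pieces together, the lookup-and-format step might look like the following sketch (assuming the files above form a bundle named formats on the classpath):

Locale locale = Locale.forLanguageTag("fr-FR");
ResourceBundle bundle = ResourceBundle.getBundle("formats", locale);
MessageFormat formatter = new MessageFormat(bundle.getString("label"), locale);
// {0} is the date, {1} the sender and {2} the number of messages
String message = formatter.format(new Object[] { new Date(), "Alice", 0 });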

In fact, the Polish version exhibits yet another problem. According to the grammar of the Polish language (and many others), the verb has to agree in gender with the subject. We could resolve this problem by using the choice type, but let’s consider another solution.

4.4. ICU’s MessageFormat

Let’s use the International Components for Unicode (ICU) library. We have already mentioned it in our Convert a String to Title Case tutorial. It’s a mature and widely-used solution that allows us to customize the application for various languages.

Here, we’re not going to explore it in full details. We’ll just limit ourselves to what our toy application needs. For the most comprehensive and updated information, we should check the ICU’s official site.

At the time of writing, the latest version of ICU for Java (ICU4J) is 64.2. As usual, in order to start using it, we should add it as a dependency to our project:

<dependency>
    <groupId>com.ibm.icu</groupId>
    <artifactId>icu4j</artifactId>
    <version>64.2</version>
</dependency>

Suppose that we want to have a properly formed notification in various languages and for different numbers of messages:

N     English                            Polish
0     Alice has sent you no messages.    Alice nie wysłała ci żadnej wiadomości.
      Bob has sent you no messages.      Bob nie wysłał ci żadnej wiadomości.
1     Alice has sent you a message.      Alice wysłała ci wiadomość.
      Bob has sent you a message.        Bob wysłał ci wiadomość.
> 1   Alice has sent you N messages.     Alice wysłała ci N wiadomości.
      Bob has sent you N messages.       Bob wysłał ci N wiadomości.

First of all, we should create a pattern in the locale-specific resource files.

Let’s re-use the file formats.properties and add there a key label-icu with the following content:

label-icu={0} has sent you \
  {2, plural, =0 {no messages} =1 {a message} \
  other {{2, number, integer} messages}}.

It contains three placeholders, which we fill by passing in a three-element array:

Object[] data = new Object[] { "Alice", "female", 0 };

We see that in the English version, the gender-valued placeholder is of no use, while in the Polish one:

label-icu={0} {2, plural, =0 {nie} other {}} \
  {1, select, male {wysłał} female {wysłała} other {wysłało}} \
  ci {2, plural, =0 {żadnych wiadomości} =1 {wiadomość} \
  other {{2, number, integer} wiadomości}}.

we use it in order to distinguish between wysłał/wysłała/wysłało.
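The formatting step then mirrors the plain Java version, except that we use ICU’s com.ibm.icu.text.MessageFormat (a sketch, assuming the label-icu key lives in the same formats bundle):

Locale locale = Locale.forLanguageTag("pl-PL");
ResourceBundle bundle = ResourceBundle.getBundle("formats", locale);
com.ibm.icu.text.MessageFormat formatter
  = new com.ibm.icu.text.MessageFormat(bundle.getString("label-icu"), locale);
// {0} is the sender, {1} the gender consumed by select, {2} the number of messages
String message = formatter.format(new Object[] { "Alice", "female", 0 });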

5. Conclusion

In this tutorial, we considered how to localize and format the messages that we demonstrate to the users of our applications.

As always, the code snippets for this tutorial are on our GitHub repository.


Guide to JVM Platform Annotations in Kotlin


1. Introduction

Kotlin provides several annotations to facilitate compatibility of Kotlin classes with Java.

In this tutorial, we’ll specifically explore Kotlin’s JVM annotations, how we can use them and what effect they have when we use Kotlin classes in Java.

2. Kotlin’s JVM Annotations

Kotlin’s JVM annotations affect the way Kotlin code is compiled to bytecode and how the resulting classes can be used in Java.

Most of the JVM annotations have no impact when we use Kotlin only. However, @JvmName and @JvmDefault also have an effect in pure Kotlin code.

3. @JvmName

We can apply the @JvmName annotation to files, functions, properties, getters, and setters.

In all cases, @JvmName defines the name of the target in the bytecode, which is also the name we can use when referring to the target from Java.

The annotation doesn’t change the name of a class, function, getter, or setter when we call it from Kotlin itself.

Let’s look at each possible target in more detail.

3.1. File Names

By default, all top-level functions and properties in a Kotlin file are compiled to filenameKt.class and all classes are compiled to className.class.

Let’s say we have a file named message.kt, which contains top-level declarations and a class named Message:

package jvmannotation

fun getMyName() : String {
    return "myUserId"
}

class Message {
}

The compiler will create two class files: MessageKt.class and Message.class. We can now call both from Java:

Message m = new Message();
String me = MessageKt.getMyName();

If we want to give MessageKt.class a different name, we can add the @JvmName annotation in the first line of the file:

@file:JvmName("MessageHelper") 
package jvmannotation

In Java, we can now use the name which is defined in the annotation:

String me = MessageHelper.getMyName();

The annotation doesn’t change the name of the class file. It’ll remain as Message.class.

3.2. Function Names

The @JvmName annotation changes the name of a function in the bytecode. We can call the following function:

@JvmName("getMyUsername")
fun getMyName() : String {
    return "myUserId"
}

And then from Java, we can use the name we supplied in the annotation:

String username = MessageHelper.getMyUsername();

While in Kotlin, we’ll use the actual name:

val username = getMyName()

There are two interesting use cases where @JvmName can come in handy – with functions and with type erasure.

3.3. Function Name Conflicts

The first use case is a function with the same name as an auto-generated getter or setter method.

The following code:

val sender = "me" 
fun getSender() : String = "from:$sender"

Will produce a compile-time error:

Platform declaration clash: The following declarations have the same JVM signature (getSender()Ljava/lang/String;)
public final fun <get-sender>(): String defined in jvmannotation.Message
public final fun getSender(): String defined in jvmannotation.Message

The reason for the error is that Kotlin automatically generates a getter method and we cannot have an additional function with the same name.

If we want to have a function with that name, we can use @JvmName to tell the Kotlin compiler to rename the function at the bytecode level:

@JvmName("getSenderName")
fun getSender() : String = "from:$sender"

We can now call the function from Kotlin by the actual name and access the member variable as usual:

val formattedSender = message.getSender()
val sender = message.sender

From Java we can call the function by the name defined in the annotation and access the member variable by the generated getter method:

String formattedSender = m.getSenderName();
String sender = m.getSender();

At this point, we should note that renaming getters like this should be avoided as much as possible, as it might cause naming confusion.

3.4. Type Erasure Conflicts

The second use case is when a name clashes due to generic type erasure.

Here we’ll look at a quick example. The following two methods cannot be defined within the same class, as their method signature is the same in the JVM:

fun setReceivers(receiverNames : List<String>) {
}

fun setReceivers(receiverNames : List<Int>) {
}

We’ll see a compilation error:

Platform declaration clash: The following declarations have the same JVM signature (setReceivers(Ljava/util/List;)V)

If we want to have the same name for both functions in Kotlin, we can annotate one of the functions with @JvmName:

@JvmName("setReceiverIds")
fun setReceivers(receiverNames : List<Int>) {
}

Now we can call both functions from Kotlin with their declared name setReceivers() since Kotlin considers both signatures as different. From Java, we can call the two functions by two separate names, setReceivers() and setReceiverIds().
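From the Java side, the calls might then look like this quick sketch (assuming the two functions are members of the Message class used throughout this section):

Message m = new Message();
m.setReceivers(Arrays.asList("alice", "bob")); // the List<String> overload keeps its name
m.setReceiverIds(Arrays.asList(1, 2));         // the List<Int> overload, renamed by @JvmName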

3.5. Getters and Setters

We can also apply the @JvmName annotation to change the names of the default getters and setters.

Let’s look at the following class definition in Kotlin:

class Message {
    val sender = "me"
    var text = ""
    private val id = 0
    var hasAttachment = true
    var isEncrypted = true
}

From Kotlin, we can refer to the class members directly, for example, we can assign a value to text:

val message = Message()
message.text = "my message"
val copy = message.text

From Java, however, we call getter and setter methods which are auto-generated by the Kotlin compiler:

Message m = new Message();
m.setText("my message");
String copy = m.getText();

If we want to change the name of a generated getter or setter method, we can add the @JvmName annotation to the class member:

@get:JvmName("getContent")
@set:JvmName("setContent")
var text = ""

Now, we can access the text in Java by the defined getter and setter names:

Message m = new Message();
m.setContent("my message");
String copy = m.getContent();

However, the @JvmName annotation doesn’t change the way we access the class member from Kotlin. We can still directly access the variable:

message.text = "my message"

In Kotlin, the following will still result in a compilation error:

m.setContent("my message");

3.6. Naming Conventions

The @JvmName annotation also comes in handy when we want to conform to certain naming conventions when calling our Kotlin class from Java.

As we’ve seen, the compiler adds the prefix get to generated getter methods. However, this is not the case for fields with names that start with is. In Java, we can access the two booleans in our message class in the following way:

Message message = new Message();
boolean isEncrypted = message.isEncrypted();
boolean hasAttachment = message.getHasAttachment();

As we can see, the compiler doesn’t prefix the getter method for isEncrypted. That’s what one would expect, as a getter named getIsEncrypted() would sound unnatural.

However, that only applies to properties starting with is. We still have getHasAttachment(). Here, we can add the @JvmName annotation:

@get:JvmName("hasAttachment")
var hasAttachment = true

And we’ll get a more Java-idiomatic getter:

boolean hasAttachment = message.hasAttachment();

3.7. Access Modifier Restrictions

Note that the annotations can only be applied to class members with the appropriate access rights.

If we attempt to add @set:JvmName to an immutable member:

@set:JvmName("setSender")
val sender = "me"

We’ll get a compile-time error:

Error:(11, 5) Kotlin: '@set:' annotations could be applied only to mutable properties

And if we attempt to add @get:JvmName or @set:JvmName to a private member:

@get:JvmName("getId")
private val id = 0

We’ll see only a warning:

An accessor will not be generated for 'id', so the annotation will not be written to the class file

And the Kotlin compiler will ignore the annotation and not generate any getter or setter method.

4. @JvmStatic and @JvmField

We already have articles describing the @JvmField and @JvmSynthetic annotations, so we won’t cover those in detail here.

However, we’ll have a quick look at @JvmField to point out the differences between constants and the @JvmStatic annotation.

4.1. @JvmStatic

The @JvmStatic annotation can be applied to a function or a property of a named object or a companion object.

Let’s begin with an unannotated MessageBroker:

object MessageBroker {
    var totalMessagesSent = 0
    fun clearAllMessages() { }
}

In Kotlin, we can access these properties and functions in a static way:

val total = MessageBroker.totalMessagesSent
MessageBroker.clearAllMessages()

However, if we want to do the same in Java, we need to do so via the INSTANCE of that object:

int total = MessageBroker.INSTANCE.getTotalMessagesSent();
MessageBroker.INSTANCE.clearAllMessages();

This doesn’t look very idiomatic in Java. Therefore, we can use the @JvmStatic annotation:

object MessageBroker {
    @JvmStatic
    var totalMessagesSent = 0
    @JvmStatic
    fun clearAllMessages() { }
}

Now we see static properties and methods in Java as well:

int total = MessageBroker.getTotalMessagesSent();
MessageBroker.clearAllMessages();

4.2. @JvmField, @JvmStatic and Constants

To better understand the difference between @JvmField, @JvmStatic and a constant in Kotlin, let’s look at the following example:

object MessageBroker {
    @JvmStatic
    var totalMessagesSent = 0

    @JvmField
    var maxMessagePerSecond = 0

    const val maxMessageLength = 0
}

A named object is the Kotlin implementation of a singleton. It’s compiled to a final class with a private constructor and a public static INSTANCE field. The Java equivalent of the above class is:

public final class MessageBroker {
    private static int totalMessagesSent = 0;
    public static int maxMessagePerSecond = 0;
    public static final int maxMessageLength = 0;
    public static final MessageBroker INSTANCE = new MessageBroker();
    
    private MessageBroker() {
    }
    
    public static int getTotalMessagesSent() {
        return totalMessagesSent;
    }
    
    public static void setTotalMessagesSent(int totalMessagesSent) {
        MessageBroker.totalMessagesSent = totalMessagesSent;
    }
}

We see that a property annotated with @JvmStatic is the equivalent of a private static field and corresponding getter and setter methods. A field annotated with @JvmField is the equivalent of a public static field and a constant is the equivalent of a public static final field.
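A short Java sketch makes the three access patterns visible side by side:

int sent = MessageBroker.getTotalMessagesSent(); // @JvmStatic property: exposed via static getter/setter
int rate = MessageBroker.maxMessagePerSecond;    // @JvmField: plain public static field
int limit = MessageBroker.maxMessageLength;      // const val: public static final field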

5. @JvmOverloads

In Kotlin, we can provide default values for the parameters of a function. This helps to reduce the number of necessary overloads and keeps function calls short.

Let’s look at the following named object:

object MessageBroker {
    @JvmStatic
    fun findMessages(sender : String, type : String = "text", maxResults : Int = 10) : List<Message> {
        return ArrayList()
    }
}

We can call findMessages in multiple different ways by successively leaving out the parameters with default values, from right to left:

MessageBroker.findMessages("me", "text", 5);
MessageBroker.findMessages("me", "text");
MessageBroker.findMessages("me");

Note that we cannot skip the value for the first parameter, sender, as it does not have a default value.

However, from Java, we need to provide the values for all parameters:

MessageBroker.findMessages("me", "text", 10);

We see that, when using our Kotlin function in Java, we don’t benefit from the default parameter values, but need to provide all values explicitly.

If we want to have multiple method overloads in Java as well, we can add the @JvmOverloads annotation:

@JvmStatic
@JvmOverloads
fun findMessages(sender : String, type : String = "text", maxResults : Int = 10) : List<Message> {
    return ArrayList()
}

The annotation instructs the Kotlin compiler to generate (n + 1) overloaded methods for n parameters with default values:

  1. One overloaded method with all parameters.
  2. One method per default parameter by successively leaving out parameters with a default value from right to left.

The Java equivalents of these functions are:

public static List<Message> findMessages(String sender, String type, int maxResults)
public static List<Message> findMessages(String sender, String type)
public static List<Message> findMessages(String sender)

Since our function has two parameters with default values, we can now call it from Java in the same ways as from Kotlin:

MessageBroker.findMessages("me", "text", 10);
MessageBroker.findMessages("me", "text");
MessageBroker.findMessages("me");

6. @JvmDefault

In Kotlin, like in Java 8, we can define default methods for an interface:

interface Document {
    fun getType() = "document"
}

class TextDocument : Document

fun main() {
    val myDocument = TextDocument()
    println("${myDocument.getType()}")
}

This even works if we run on a Java 7 JVM. Kotlin achieves this by generating a static inner class that holds the default implementation.

In this tutorial, we won’t look deeper into the generated bytecode. Instead, we’ll focus on how we can use these interfaces in Java. Furthermore, we’ll see the impact of @JvmDefault on interface delegation.

6.1. Kotlin Default Interface Methods and Java

Let’s look at a Java class which implements our interface:

public class HtmlDocument implements Document {
}

We’ll get a compilation error, saying:

Class 'HtmlDocument' must either be declared abstract or implement abstract method 'getType()' in 'Document'

In Java 7 or below, this is what we’d expect, since default interface methods were only introduced in Java 8. In Java 8, however, we expect the default implementation to be available. We can achieve this by annotating the method:

interface Document {
    @JvmDefault
    fun getType() = "document"
}

To be able to use the @JvmDefault annotation, we need to add one of the following two arguments to the Kotlin compiler:

  • -Xjvm-default=enable – Only the default method of the interface is generated
  • -Xjvm-default=compatibility – Both the default method and the static inner class are generated

6.2. @JvmDefault and Interface Delegation

Methods annotated with @JvmDefault are excluded from interface delegation. That means that the annotation also changes the way we can use such a method in Kotlin itself.

Let’s look at what that actually means.

The class TextDocument implements the interface Document and overrides getType():

interface Document {
    @JvmDefault
    fun getTypeDefault() = "document"

    fun getType() = "document"
}

class TextDocument : Document {
    override fun getType() = "text"
}

We can define another class, which delegates the implementation to TextDocument:

class XmlDocument(d : Document) : Document by d

Both classes will use the method which is implemented in our TextDocument class:

@Test
fun testDefaultMethod() {
    val myDocument = TextDocument()
    val myTextDocument = XmlDocument(myDocument)

    assertEquals("text", myDocument.getType())
    assertEquals("text", myTextDocument.getType())
    assertEquals("document", myTextDocument.getTypeDefault())
}

We see that getType() returns the same value for both classes, while getTypeDefault(), which is annotated with @JvmDefault, returns a different value. This is because getTypeDefault() is not delegated, and since XmlDocument does not override it, the default implementation is called.

7. @Throws

7.1. Exceptions in Kotlin

Kotlin doesn’t have checked exceptions, which means a surrounding try-catch is always optional:

fun findMessages(sender : String, type : String = "text", maxResults : Int = 10) : List<Message> {
    if(sender.isEmpty()) {
        throw IllegalArgumentException()
    }
    return ArrayList()
}

We can call the function either with or without a surrounding try-catch:

MessageBroker.findMessages("me")
    
try {
    MessageBroker.findMessages("me")
} catch(e : IllegalArgumentException) {
}

If we call our Kotlin function from Java, the try-catch is also optional:

MessageBroker.findMessages("");

try {
    MessageBroker.findMessages("");
} catch (Exception e) {
    e.printStackTrace();
}

7.2. Creating Checked Exceptions for Use in Java

If we want to have a checked exception when using our function in Java, we can add the @Throws annotation:

@Throws(Exception::class)
fun findMessages(sender : String, type : String = "text", maxResults : Int = 10) : List<Message> {
    if(sender.isEmpty()) {
        throw IllegalArgumentException()
    }
    return ArrayList()
}

This annotation instructs the Kotlin compiler to create the equivalent of:

public static List<Message> findMessages(String sender, String type, int maxResults) throws Exception {
    if(sender.length() == 0) {
        throw new IllegalArgumentException();
    }
    return new ArrayList<>();
}

If we now omit the try-catch in Java, we get a compile-time error:

Unhandled exception: java.lang.Exception

However, if we use the function in Kotlin, we can still omit the try-catch, as the annotation only changes the way that it’s called from Java.

8. @JvmWildcard and @JvmSuppressWildcards

8.1. Generic Wildcards

In Java, we need wildcards to handle generics in combination with inheritance. Even though Integer extends Number, the following assignment leads to a compilation error:

List<Number> numberList = new ArrayList<Integer>();

We can solve the problem by using a wildcard:

List<? extends Number> numberList = new ArrayList<Integer>();

In Kotlin, there are no wildcards, and we can simply write:

val numberList : List<Number> = ArrayList<Int>()

This leads to the question of what happens if we use a Kotlin class which contains such a list.

As an example, let’s look at a function which takes a list as a parameter:

fun transformList(list : List<Number>) : List<Number>

In Kotlin, we can call this function with any list whose parameter extends Number:

val list = transformList(ArrayList<Long>())

Of course, if we want to call this function from Java, we expect this to be possible as well. This indeed works, since from a Java perspective, the function looks like this:

public List<Number> transformList(List<? extends Number> list)

The Kotlin compiler implicitly created a function with a wildcard.

Let’s see when this happens, and when not.

8.2. Kotlin’s Wildcard Rule

Here, the basic rule is that, by default, Kotlin only produces a wildcard where necessary.

If the type parameter is a final class, there is no wildcard:

fun transformList(list : List<String>) // Kotlin
public void transformList(List<String> list) // Java

Here, there is no need for “? extends String”, because no class can extend String. However, if the class can be extended, we’ll have a wildcard. Number is not a final class, so we’ll have:

fun transformList(list : List<Number>) // Kotlin
public void transformList(List<? extends Number> list) // Java

Furthermore, return types don’t have a wildcard:

fun transformList() : List<Number> // Kotlin 
public List<Number> transformList() // Java

8.3. Wildcard Configuration

However, there might be situations where we want to change the default behavior. To do so, we can use the JVM annotations: @JvmWildcard ensures that the annotated type parameter always gets a wildcard, while @JvmSuppressWildcards ensures that it won’t get one.

Let’s annotate the above function:

fun transformList(list : List<@JvmSuppressWildcards Number>) : List<@JvmWildcard Number>

And look at the method signature as seen from Java, which shows the effect of the annotations:

public List<? extends Number> transformList(List<Number> list)

Finally, we should note that wildcards in return types are generally considered bad practice in Java. However, there might be situations where we need them, and then the Kotlin JVM annotations come in handy.

9. @JvmMultifileClass

We already saw how we can apply the @JvmName annotation to a file in order to define the name of the class where all top-level declarations are compiled to. Of course, the name we provide has to be unique.

Suppose we have two Kotlin files in the same package, both with the @JvmName annotation and the same target class name. The first file MessageConverter.kt with the following code:

@file:JvmName("MessageHelper")
package jvmannotation
fun convert(message: Message) = // conversion code

And the second file Message.kt with the following code:

@file:JvmName("MessageHelper") 
package jvmannotation
fun archiveMessage() =  // archiving code

If we do this, we’ll get an error:

// Error:(1, 1) Kotlin: Duplicate JVM class name 'jvmannotation/MessageHelper' 
//  generated from: package-fragment jvmannotation, package-fragment jvmannotation

This is because the Kotlin compiler attempts to create two classes with the same name.

If we want to combine all top-level declarations of both files in one single class with the name MessageHelper.class, we can add the @JvmMultifileClass to both files.

Let’s add @JvmMultifileClass to MessageConverter.kt:

@file:JvmName("MessageHelper")
@file:JvmMultifileClass
package jvmannotation
fun convert(message: Message) = // conversion code

And then, we’ll add it to Message.kt as well:

@file:JvmName("MessageHelper") 
@file:JvmMultifileClass
package jvmannotation
fun archiveMessage() =  // archiving code

In Java, we can see all top-level declarations from both Kotlin files are now unified into MessageHelper:

MessageHelper.archiveMessage();
MessageHelper.convert(new Message());

The annotation does not affect how we call the functions from Kotlin.

10. @JvmPackageName

All JVM platform annotations are defined in the package kotlin.jvm. When we look at this package, we notice that there’s another annotation: @JvmPackageName.

This annotation can change the package name much like @file:JvmName changes the name of the generated class file.

However, the annotation is marked as internal, which means that it cannot be used outside the Kotlin library classes. Therefore, we won’t go into more detail on it in this article.

11. Annotation Target Cheat Sheet

A good source to find all the information about the JVM annotations available in Kotlin is the official documentation. Another good place to find all the details is the code itself. The definitions (including JavaDoc) can be found in the package kotlin.jvm in kotlin-stdlib.jar.

The following table summarizes which annotations can be used with which target:

12. Conclusion

In this article, we had a look at Kotlin’s JVM annotations. The full source code for the examples is available over on GitHub.

Default Column Values in JPA


1. Introduction

In this tutorial, we’ll look into default column values in JPA.

We’ll learn how to set them as default property values in the entity, as well as directly in the SQL table definition.

2. While Creating an Entity

The first way to set a default column value is to set it directly as an entity property value:

@Entity
public class User {
    @Id
    private Long id;
    private String name = "John Snow";
    private Integer age = 25;
    private Boolean locked = false;
}

Now, every time we create an entity using the new operator, it’ll have the default values we’ve provided:

@Test
void saveUser_shouldSaveWithDefaultFieldValues() {
    User user = new User();
    user = userRepository.save(user);
    
    assertEquals(user.getName(), "John Snow");
    assertEquals(user.getAge(), 25);
    assertFalse(user.getLocked());
}

There is one drawback of this solution. When we take a look at the SQL table definition we won’t see any default value in it:

create table user
(
    id     bigint not null constraint user_pkey primary key,
    name   varchar(255),
    age    integer,
    locked boolean
);

So, if we override them with null, the entity will be saved without any error:

@Test
void saveUser_shouldSaveWithNullName() {
    User user = new User();
    user.setName(null);
    user.setAge(null);
    user.setLocked(null);
    user = userRepository.save(user);

    assertNull(user.getName());
    assertNull(user.getAge());
    assertNull(user.getLocked());
}

3. In the Schema Definition

To create a default value directly in the SQL table definition we can use the @Column annotation and set its columnDefinition parameter:

@Entity
public class User {
    @Id
    Long id;

    @Column(columnDefinition = "varchar(255) default 'John Snow'")
    private String name;

    @Column(columnDefinition = "integer default 25")
    private Integer age;

    @Column(columnDefinition = "boolean default false")
    private Boolean locked;
}

Using this method the default value will be present in the SQL table definition:

create table user
(
    id     bigint not null constraint user_pkey primary key,
    name   varchar(255) default 'John Snow',
    age    integer      default 25,
    locked boolean      default false
);

And the entity will be saved properly with the default values:

@Test
void saveUser_shouldSaveWithDefaultSqlValues() {
    User user = new User();
    user = userRepository.save(user);

    assertEquals(user.getName(), "John Snow");
    assertEquals(user.getAge(), 25);
    assertFalse(user.getLocked());
}

Remember that by using this solution, we won’t be able to set a given column to null when saving the entity for the first time. If we don’t provide any value, the default one will be set automatically.

4. Summary

In this short tutorial, we’ve learned how to set default column values in JPA.

As always all source code is available on GitHub.

Java Weekly, Issue 280


Here we go…

1. Spring and Java

>> Multiple Cache Configurations with Caffeine and Spring Boot [techblog.bozho.net]

A novel extension of the CaffeineCacheManager lets you configure caches with different specs, all managed by the same CacheManager. Very cool.

>> Running Kotlin Tests With Gradle [petrikainulainen.net]

With a bit of configuration, you can run both unit and integration tests in Kotlin — or either in isolation — during a Gradle build.

>> Eclipse and Oracle Unable to Agree on Terms for javax Package Namespace and Trademarks [infoq.com]

And a mind-boggling decision results in a clear departure from the long history of Java SE and EE compatibility. And some of the FAQ on the developing situation.

 

Also worth reading:

Webinars and presentations:

Time to upgrade:

 

2. Technical and Musings

>> Surviving the Frequency of Open Source Vulnerabilities [tomitribe.com]

With an estimated half of all web sites containing critical security vulnerabilities, no company is immune from cyberattacks.

>> CloudFormation CLI workflows [advancedweb.hu]

And though managing stacks via the console is tedious at best, a few basic tools and scripts can take away some of the pain.

 

Also worth reading:

 

3. Comics

And my favorite Dilberts of the week:

>> Paying the Replacement More [dilbert.com]

>> Dogbert Narrates [dilbert.com]

>> Engineers Don’t Lie [dilbert.com]

 

4. Pick of the Week

>> Protecting Yourself from Identity Theft [schneier.com]

JPA @Basic Annotation


1. Overview

In this quick tutorial, we’ll explore the JPA @Basic annotation. We’ll also discuss the difference between @Basic and @Column JPA annotations.

2. Basic Types

JPA supports various Java data types as persistable fields of an entity, often known as the basic types.

A basic type maps directly to a column in the database. These include Java primitives and their wrapper classes, String, java.math.BigInteger and java.math.BigDecimal, various available date-time classes, enums, and any other type that implements java.io.Serializable.

Hibernate, like any other ORM vendor, maintains a registry of basic types and uses it to resolve a column’s specific org.hibernate.type.Type.

3. @Basic Annotation

We can use the @Basic annotation to mark a basic type property:

@Entity
public class Course {

    @Basic
    @Id
    private int id;

    @Basic
    private String name;
    ...
}

In other words, the @Basic annotation on a field or a property signifies that it’s a basic type and Hibernate should use the standard mapping for its persistence.

Note that it’s an optional annotation. And so, we can rewrite our Course entity as:

@Entity
public class Course {

    @Id
    private int id;

    private String name;
    ...
}

When we don’t specify the @Basic annotation for a basic type attribute, it is implicitly assumed, and the default values of this annotation apply.

4. Why Use @Basic Annotation?

The @Basic annotation has two attributes, optional and fetch. Let’s take a closer look at each one.

The optional attribute is a boolean parameter that defines whether the marked field or property allows null. It defaults to true. So, if the field is not a primitive type, the underlying column is assumed to be nullable by default.

The fetch attribute accepts a member of the enumeration Fetch, which specifies whether the marked field or property should be lazily loaded or eagerly fetched. It defaults to FetchType.EAGER, but we can permit lazy loading by setting it to FetchType.LAZY.

Lazy loading will only make sense when we have a large Serializable object mapped as a basic type, as in that case, the field access cost can be significant.

We have a detailed tutorial covering Eager/Lazy loading in Hibernate that takes a deeper dive into the topic.

Now, let’s say we don’t want to allow nulls for our Course‘s name, and we also want to load that property lazily. Then, we’ll define our Course entity as:

@Entity
public class Course {
    
    @Id
    private int id;
    
    @Basic(optional = false, fetch = FetchType.LAZY)
    private String name;
    ...
}

We should explicitly use the @Basic annotation when we want to deviate from the default values of the optional and fetch parameters. We can specify either one or both of these attributes, depending on our needs.

5. JPA @Basic vs @Column

Let’s look at the differences between @Basic and @Column annotations:

  • Attributes of the @Basic annotation are applied to JPA entities, whereas the attributes of @Column are applied to the database columns
  • @Basic annotation’s optional attribute defines whether the entity field can be null or not; on the other hand, @Column annotation’s nullable attribute specifies whether the corresponding database column can be null
  • We can use @Basic to indicate that a field should be lazily loaded
  • The @Column annotation allows us to specify the name of the mapped database column
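Because the two annotations operate at different levels, nothing stops us from combining them on the same field. For example, a hedged sketch (the column name course_name is just an illustrative choice):

@Basic(optional = false, fetch = FetchType.LAZY)
@Column(name = "course_name", nullable = false)
private String name;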

6. Conclusion

In this article, we learned when and how to use JPA’s @Basic annotation. We also talked about how it differs from the @Column annotation.

As usual, code examples are available over on GitHub.

OData Protocol Guide


1. Introduction

In this tutorial, we’ll explore OData, a standard protocol that allows easy access to data sets using a RESTful API.

2. What is OData?

OData is an OASIS and ISO/IEC Standard for accessing data using a RESTful API. As such, it allows a consumer to discover and navigate through data sets using standard HTTP calls. For instance, we can access one of the publicly available OData services with a simple curl one-liner:

curl -s https://services.odata.org/V2/Northwind/Northwind.svc/Regions
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<feed xml:base="https://services.odata.org/V2/Northwind/Northwind.svc/" 
  xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" 
  xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" 
  xmlns="http://www.w3.org/2005/Atom">
    <title type="text">Regions</title>
    <id>https://services.odata.org/V2/Northwind/Northwind.svc/Regions</id>
... rest of xml response omitted

As of this writing, the OData protocol is at its 4th version – 4.01 to be more precise. OData V4 reached the OASIS standard level in 2014, but it has a longer history. We can trace its roots to a Microsoft project called Astoria, which was renamed to ADO.Net Data Services in 2007. The original blog entry announcing this project is still available at Microsoft’s OData blog.

Having a standards-based protocol to access data sets brings some benefits over standard APIs such as JDBC or ODBC. As an end-user level consumer, we can use popular tools such as Excel to retrieve data from any compatible provider. Programming is also facilitated by a large number of available REST client libraries.

As providers, adopting OData also has benefits: once we’ve created a compatible service, we can focus on providing valuable data sets that end users can consume using the tools of their choice. Since it is an HTTP-based protocol, we can also leverage aspects such as security mechanisms, monitoring, and logging.

Those characteristics made OData a popular choice for government agencies when implementing public data services, as we can check by taking a look at this directory.

3. OData Concepts

At the core of the OData protocol is the concept of an Entity Data Model – or EDM for short. The EDM describes the data exposed by an OData provider through a metadata document containing a number of meta-entities:

  • Entity types and their properties (e.g. Person, Customer, Order) and keys
  • Relationships between entities
  • Complex types used to describe structured types embedded into entities (say, an address type which is part of a Customer type)
  • Entity Sets, which aggregate entities of a given type

The spec mandates that this metadata document must be available at the standard location $metadata at the root URL used to access the service. For instance, if we have an OData service available at http://example.org/odata.svc/, then its metadata document will be available at http://example.org/odata.svc/$metadata.

The returned document contains a bunch of XML describing the schemas supported by this server:

<?xml version="1.0"?>
<edmx:Edmx 
  xmlns:edmx="http://schemas.microsoft.com/ado/2007/06/edmx" 
  Version="1.0">
    <edmx:DataServices 
      xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" 
      m:DataServiceVersion="1.0">
    ... schema elements omitted
    </edmx:DataServices>
</edmx:Edmx>

Let’s tear down this document into its main sections.

The top-level element, <edmx:Edmx> can have only one child, the <edmx:DataServices> element. The important thing to notice here is the namespace URI since it allows us to identify which OData version the server uses. In this case, the namespace indicates that we have an OData V2 server, which uses Microsoft’s identifiers.

DataServices element can have one or more Schema elements, each describing an available dataset. Since a full description of the available elements in a Schema is beyond the scope of this article, we’ll focus on the most important ones: EntityTypes, Associations, and EntitySets.

3.1. EntityType element

This element defines the available properties of a given entity, including its primary key. It may also contain information about relationships with other schema types and, by looking at an example – a  CarMaker – we’ll be able to see that it is not very different from descriptions found in other ORM technologies, such as JPA:

<EntityType Name="CarMaker">
    <Key>
        <PropertyRef Name="Id"/>
    </Key>
    <Property Name="Id" Type="Edm.Int64" 
      Nullable="false"/>
    <Property Name="Name" Type="Edm.String" 
      Nullable="true" 
      MaxLength="255"/>
    <NavigationProperty Name="CarModelDetails" 
      Relationship="default.CarModel_CarMaker_Many_One0" 
      FromRole="CarMaker" 
      ToRole="CarModel"/>
</EntityType>

Here, our CarMaker has only two properties – Id and Name – and an association to another EntityType. The Key sub-element defines the entity’s primary key to be its Id property, and each Property element contains data about an entity’s property such as its name, type or nullability.

A NavigationProperty is a special kind of property that describes an “access point” to a related entity.

3.2. Association element

An Association element describes an association between two entities, which includes the multiplicity on each end and optionally a referential integrity constraint:

<Association Name="CarModel_CarMaker_Many_One0">
    <End Type="default.CarModel" Multiplicity="*" Role="CarModel"/>
    <End Type="default.CarMaker" Multiplicity="1" Role="CarMaker"/>
    <ReferentialConstraint>
        <Principal Role="CarMaker">
            <PropertyRef Name="Id"/>
        </Principal>
        <Dependent Role="CarModel">
            <PropertyRef Name="Maker"/>
        </Dependent>
    </ReferentialConstraint>
</Association>

Here, the Association element defines a one-to-many relationship between a CarModel and CarMaker entities, where the former acts as the dependent party.

3.3. EntitySet element

The final schema concept we’ll explore is the EntitySet element, which represents a collection of entities of a given type. While it’s easy to think of them as analogous to a table – and in many cases, they’re just that – a better analogy is that of a view. The reason for that is that we can have multiple EntitySet elements for the same EntityType, each representing a different subset of the available data.

The EntityContainer element, which is a top-level schema element, groups all available EntitySets:

<EntityContainer Name="defaultContainer" 
  m:IsDefaultEntityContainer="true">
    <EntitySet Name="CarModels" 
      EntityType="default.CarModel"/>
    <EntitySet Name="CarMakers" 
      EntityType="default.CarMaker"/>
</EntityContainer>

In our simple example, we have just two EntitySets, but we could also add additional views, such as ForeignCarMakers or HistoricCarMakers.

4. OData URLs and Methods

In order to access data exposed by an OData service, we use the regular HTTP verbs:

  • GET returns one or more entities
  • POST adds a new entity to an existing Entity Set
  • PUT replaces a given entity
  • PATCH replaces specific properties of a given entity
  • DELETE removes a given entity

All those operations require a resource path to act upon. The resource path may define an entity set, an entity or even a property within an entity.

Let’s take a look at an example URL used to access our previous OData service:

http://example.org/odata/CarMakers

The first part of this URL, starting with the protocol up to the odata/ path segment, is known as the service root URL and is the same for all resource paths of this service. Since the service root is always the same, we’ll replace it in the following URL samples with an ellipsis (“…”).

CarMakers, in this case, refers to one of the declared EntitySets in the service metadata. We can use a regular browser to access this URL, which should then return a document containing all existing entities of this type:

<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" 
  xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" 
  xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" 
  xml:base="http://localhost:8080/odata/">
    <id>http://localhost:8080/odata/CarMakers</id>
    <title type="text">CarMakers</title>
    <updated>2019-04-06T17:51:33.588-03:00</updated>
    <author>
        <name/>
    </author>
    <link href="CarMakers" rel="self" title="CarMakers"/>
    <entry>
      <id>http://localhost:8080/odata/CarMakers(1L)</id>
      <title type="text">CarMakers</title>
      <updated>2019-04-06T17:51:33.589-03:00</updated>
      <category term="default.CarMaker" 
        scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme"/>
      <link href="CarMakers(1L)" rel="edit" title="CarMaker"/>
      <link href="CarMakers(1L)/CarModelDetails" 
        rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/CarModelDetails" 
        title="CarModelDetails" 
        type="application/atom+xml;type=feed"/>
        <content type="application/xml">
            <m:properties>
                <d:Id>1</d:Id>
                <d:Name>Special Motors</d:Name>
            </m:properties>
        </content>
    </entry>  
  ... other entries omitted
</feed>

The returned document contains an entry element for each CarMaker instance.

Let’s take a closer look at what information we have available to us:

  • id: a link to this specific entity
  • title/author/updated: metadata about this entry
  • link elements: Links used to point to a resource used to edit the entity (rel=”edit”) or to related entities. In this case, we have a link that takes us to the set of CarModel entities associated with this particular CarMaker.
  • content: property values of the CarMaker entity

An important point to notice here is the use of the key-value pair to identify a particular entity within an entity set. In our example, the key is numeric, so a resource path like CarMakers(1L) refers to the entity with a primary key value equal to 1 – the “L” here just denotes a long value and could be omitted.

5. Query Options

We can pass query options to a resource URL in order to modify a number of aspects of the returned data, such as to limit the size of the returned set or its ordering. The OData spec defines a rich set of options, but here we’ll focus on the most common ones.

As a general rule, query options can be combined with each other, thus allowing clients to easily implement common functionalities such as paging, filtering and ordering result lists.

5.1. $top and $skip

We can navigate through a large dataset using the $top and $skip query options:

.../CarMakers?$top=10&$skip=10

$top tells the service that we want only the first 10 records of the CarMakers entity set, while $skip, which is applied before $top, tells the server to skip the first 10 records.
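Combining the two is all a client needs for simple paging. A minimal Java sketch (the serviceRoot variable and the zero-based page index are assumptions for the example):

int pageSize = 10;
int page = 3; // zero-based page index
String url = serviceRoot + "/CarMakers"
  + "?$top=" + pageSize
  + "&$skip=" + (page * pageSize);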

It’s usually useful to know the size of a given Entity Set and, for this purpose, we can use the $count sub-resource:

.../CarMakers/$count

This resource produces a text/plain document containing the size of the corresponding set. Here, we must pay attention to the specific OData version supported by a provider. While OData V2 supports $count as a sub-resource from a collection, V4 allows it to be used as a query parameter. In this case, $count is a Boolean, so we need to change the URL accordingly:

.../CarMakers?$count=true

5.2. $filter

We use the $filter query option to limit the returned entities from a given Entity Set to those matching given criteria. The value of $filter is a logical expression that supports basic operators, grouping and a number of useful functions. For instance, let’s build a query that returns all CarMaker instances whose Name attribute starts with the letter ‘B’:

.../CarMakers?$filter=startswith(Name,'B')

Now, let’s combine a few logical operators to search for CarModels of a particular Year and Maker:

.../CarModels?$filter=Year eq 2008 and CarMakerDetails/Name eq 'BWM'

Here, we’ve used the equality operator eq to specify values for the properties. We can also see how to use properties from a related entity in the expression.
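Since filter expressions contain spaces, quotes and slashes, a client should URL-encode them before sending the request. A minimal sketch (serviceRoot is again an assumed variable):

String filter = "Year eq 2008 and CarMakerDetails/Name eq 'BWM'";
String url = serviceRoot + "/CarModels?$filter="
  + URLEncoder.encode(filter, StandardCharsets.UTF_8); // the Charset overload requires Java 10+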

5.3. $expand

By default, an OData query does not return data for related entities, which is usually OK. We can use the $expand query option to request that data from a given related entity be included inline with the main content.

Using our sample domain, let’s build a URL that returns data from a given model and its maker, thus avoiding an additional round-trip to the server:

.../CarModels(1L)?$expand=CarMakerDetails

The returned document now includes the CarMaker data as part of the related entity:

<?xml version="1.0" encoding="utf-8"?>
<entry xmlns="http://www.w3.org/2005/Atom" 
  xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" 
  xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" 
  xml:base="http://localhost:8080/odata/">
    <id>http://example.org/odata/CarModels(1L)</id>
    <title type="text">CarModels</title>
    <updated>2019-04-07T11:33:38.467-03:00</updated>
    <category term="default.CarModel" 
      scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme"/>
    <link href="CarModels(1L)" rel="edit" title="CarModel"/>
    <link href="CarModels(1L)/CarMakerDetails" 
      rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/CarMakerDetails" 
      title="CarMakerDetails" 
      type="application/atom+xml;type=entry">
        <m:inline>
            <entry xml:base="http://localhost:8080/odata/">
                <id>http://example.org/odata/CarMakers(1L)</id>
                <title type="text">CarMakers</title>
                <updated>2019-04-07T11:33:38.492-03:00</updated>
                <category term="default.CarMaker" 
                  scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme"/>
                <link href="CarMakers(1L)" rel="edit" title="CarMaker"/>
                <link href="CarMakers(1L)/CarModelDetails" 
                  rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/CarModelDetails" 
                  title="CarModelDetails" 
                  type="application/atom+xml;type=feed"/>
                <content type="application/xml">
                    <m:properties>
                        <d:Id>1</d:Id>
                        <d:Name>Special Motors</d:Name>
                    </m:properties>
                </content>
            </entry>
        </m:inline>
    </link>
    <content type="application/xml">
        <m:properties>
            <d:Id>1</d:Id>
            <d:Maker>1</d:Maker>
            <d:Name>Muze</d:Name>
            <d:Sku>SM001</d:Sku>
            <d:Year>2018</d:Year>
        </m:properties>
    </content>
</entry>

5.4. $select

We use the $select query option to inform the OData service that it should only return the values for the given properties. This is useful in scenarios where our entities have a large number of properties, but we’re only interested in some of them.

Let’s use this option in a query that returns only the Name and Sku properties:

.../CarModels(1L)?$select=Name,Sku

The resulting document now has only the requested properties:

... xml omitted
    <content type="application/xml">
        <m:properties>
            <d:Name>Muze</d:Name>
            <d:Sku>SM001</d:Sku>
        </m:properties>
    </content>
... xml omitted

We can also see that even related entities were omitted. In order to include them, we’d need to include the name of the relation in the $select option.
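
For instance, assuming the service supports combining the two options, a URL along these lines (an illustrative sketch, not output from the sample service) should bring the maker back along with the two selected properties:

.../CarModels(1L)?$select=Name,Sku,CarMakerDetails&$expand=CarMakerDetails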

5.5. $orderby

The $orderby option works pretty much like its SQL counterpart. We use it to specify the order in which we want the server to return a given set of entities. In its simplest form, its value is just a list of property names from the selected entity, each optionally followed by a sort direction:

.../CarModels?$orderby=Name asc,Sku desc

This query will result in a list of CarModels ordered by their names and SKUs, in ascending and descending directions, respectively.

An important detail here is the case of the direction keywords: while the spec mandates that servers must support any combination of upper- and lowercase letters for asc and desc, it also mandates that clients use only lowercase.

5.6. $format

This option defines the data representation format that the server should use, which takes precedence over any HTTP content-negotiation header, such as Accept. Its value must be a full MIME-Type or a format-specific short form.

For instance, we can use json as an abbreviation for application/json:

.../CarModels?$format=json

This URL instructs our service to return data using JSON format, instead of XML, as we’ve seen before. When this option is not present, the server will use the value of the Accept header, if present. When neither is available, the server is free to choose any representation – usually XML or JSON.

Regarding JSON specifically, it’s fundamentally schemaless. However, OData 4.01 defines a JSON schema for metadata endpoints as well. This means that we can now write clients that can get totally rid of XML processing, if they choose to do so.

6. Conclusion

In this brief introduction to OData, we’ve covered its basic semantics and how to perform simple data set navigation. Our follow-up article will continue where we left off and go straight into the Olingo library. We’ll then see how to implement sample services using this library.

Code examples, as always, are available on GitHub.

Derived Query Methods in Spring Data JPA Repositories

1. Introduction

For simple queries, it’s easy to derive what the query should be just by looking at the corresponding method name in our code.

In this tutorial, we’ll explore how Spring Data JPA leverages this idea in the form of a method naming convention.

2. Structure of Derived Query Methods in Spring

Derived method names have two main parts separated by the first By keyword:

List<User> findByName(String name)

The first part – like find – is the introducer and the rest – like ByName – is the criteria.

Spring Data JPA supports find, read, query, count and get. So, for example, we could have done queryByName and Spring Data would behave the same.
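
For example, the following declarations (hypothetical methods, shown only to illustrate the interchangeable introducers) all derive the same condition on name, with count returning the number of matching users instead of the entities themselves:

List<User> readByName(String name);
List<User> queryByName(String name);
List<User> getByName(String name);
long countByName(String name);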

We can also use Distinct, First, or Top to remove duplicates or limit our result set:

List<User> findTop3ByAge(Integer age);

The criteria part contains the entity-specific condition expressions of the query. We can use the condition keywords along with the entity’s property names. We can also combine the expressions with And and Or, as we’ll see in just a moment.

3. Sample Application

First, we’ll, of course, need an application using Spring Data JPA.

In that application, let’s define an entity class:

@Table(name = "users")
@Entity
class User {
    @Id
    @GeneratedValue
    private Integer id;
    
    private String name;
    private Integer age;
    private ZonedDateTime birthDate;
    private Boolean active;

    // standard getters and setters
}

And, let’s also define a repository. It’ll extend JpaRepository, one of the Spring Data Repository types:

interface UserRepository extends JpaRepository<User, Integer> {}

This is where we’ll place all our derived query methods.

4. Equality Condition Keywords

Exact equality is one of the most-used conditions in queries. We have several options to express = or IS operators in the query.

We can just append the property name without any keyword for an exact match condition:

List<User> findByName(String name);

And we can add Is or Equals for readability:

List<User> findByNameIs(String name);
List<User> findByNameEquals(String name);

This extra readability comes in handy when we need to express inequality instead:

List<User> findByNameIsNot(String name);

This is quite a bit more readable than findByNameNot(String)!

As null equality is a special case, we shouldn’t use the = operator. Spring Data JPA handles null parameters by default. So, when we pass a null value for an equality condition, Spring interprets the query as IS NULL in the generated SQL.
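
For instance, with the repository above, passing null (a quick usage sketch rather than one of this article’s tests) results in a query with IS NULL:

List<User> usersWithoutName = userRepository.findByName(null);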

We can also use the IsNull keyword to add IS NULL criteria to the query:

List<User> findByNameIsNull();
List<User> findByNameIsNotNull();

Note that neither IsNull nor IsNotNull requires a method argument.

There are also two more keywords that don’t require any arguments. We can use True and False keywords to add equality conditions for boolean types:

List<User> findByActiveTrue();
List<User> findByActiveFalse();

Of course, sometimes we want something more lenient than exact equality, so let’s see what else we can do.

5. Similarity Condition Keywords

When we need to query the results with a pattern of a property, we have a few options.

We can find names that start with a value using StartingWith:

List<User> findByNameStartingWith(String prefix);

Roughly, this translates to “WHERE name LIKE ‘value%’“.

If we want names that end with a value, then EndingWith is what we want:

List<User> findByNameEndingWith(String suffix);

Or, we can find which names contain a value with Containing:

List<User> findByNameContaining(String infix);

Note that all the conditions above are predefined pattern expressions, so we don’t need to add any wildcard characters to the argument when we call these methods.
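
For instance, we can call them with plain values (a usage sketch against the repository above, not one of this article’s tests):

List<User> startingWithA = userRepository.findByNameStartingWith("A");
List<User> containingAl = userRepository.findByNameContaining("al");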

But, let’s suppose we are doing something more complex. Say we need to fetch the users whose names start with an a, contain b, and end with c.

For that, we can add our own LIKE with the Like keyword:

List<User> findByNameLike(String likePattern);

And we can then hand in our LIKE pattern when we call the method:

String likePattern = "a%b%c";
userRepository.findByNameLike(likePattern);

That’s enough about names for now. Let’s try some other values in User.

6. Comparison Condition Keywords

Furthermore, we can use LessThan and LessThanEqual keywords to compare the records with the given value using the < and <= operators:

List<User> findByAgeLessThan(Integer age);
List<User> findByAgeLessThanEqual(Integer age);

On the other hand, in the opposite situation, we can use GreaterThan and GreaterThanEqual keywords:

List<User> findByAgeGreaterThan(Integer age);
List<User> findByAgeGreaterThanEqual(Integer age);

Or, we can find users who are between two ages with Between:

List<User> findByAgeBetween(Integer startAge, Integer endAge);

We can also supply a collection of ages to match against using In:

List<User> findByAgeIn(Collection<Integer> ages);

Since we know the users’ birthdates, we might want to query for users who were born before or after a given date. We’d use Before and After for that:

List<User> findByBirthDateAfter(ZonedDateTime birthDate);
List<User> findByBirthDateBefore(ZonedDateTime birthDate);

7. Multiple Condition Expressions

We can combine as many expressions as we need by using And and Or keywords:

List<User> findByNameOrBirthDate(String name, ZonedDateTime birthDate);
List<User> findByNameOrBirthDateAndActive(String name, ZonedDateTime birthDate, Boolean active);

And has higher precedence than Or, just like in Java.

While Spring Data JPA imposes no limit on how many expressions we can add, we shouldn’t go crazy here. Long names are unreadable and hard to maintain. For complex queries, take a look at the @Query annotation instead.
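
As a quick illustration (a hypothetical sketch rather than one of the methods from this article), a JPQL query declared with @Query keeps the method name short while still combining several conditions:

@Query("SELECT u FROM User u WHERE u.name = :name AND u.active = true AND u.age >= :age")
List<User> findActiveUsersByNameAndMinAge(@Param("name") String name, @Param("age") Integer age);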

8. Sorting the Results

Next up is sorting. We could ask that the users be sorted alphabetically by their name using OrderBy:

List<User> findByNameOrderByName(String name);
List<User> findByNameOrderByNameAsc(String name);

Ascending order is the default sorting option, but we can use Desc instead to sort them in reverse:

List<User> findByNameOrderByNameDesc(String name);
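
If we’d rather decide the ordering at call time, we can also accept Spring Data’s Sort as a parameter; a small sketch, assuming the same UserRepository as above:

List<User> findByName(String name, Sort sort);

List<User> users = userRepository.findByName("Alice", Sort.by("birthDate").descending());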

9. findOne vs findById in a CrudRepository

The Spring team made some major changes in CrudRepository with Spring Boot 2.x. One of them is renaming findOne to findById.

Previously with Spring Boot 1.x, we’d call findOne when we wanted to retrieve an entity by its primary key:

User user = userRepository.findOne(1);

Since Spring Boot 2.x we can do the same with findById:

Optional<User> user = userRepository.findById(1);

Note that the findById() method is already defined in CrudRepository for us. So we don’t have to define it explicitly in custom repositories that extend CrudRepository.
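
Since findById() returns an Optional, a caller typically unwraps it explicitly; a minimal usage sketch:

User user = userRepository.findById(1)
  .orElseThrow(() -> new IllegalStateException("User not found"));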

10. Conclusion

In this article, we explained the query derivation mechanism in Spring Data JPA. We used the property condition keywords to write derived query methods in Spring Data JPA repositories.

The source code of this tutorial is available on the GitHub project.

How to Find an Exception’s Root Cause in Java

1. Introduction

It’s pretty common in Java to work with nested exceptions as they can help us track the source of an error.

When we deal with these kinds of exceptions, sometimes we may want to know the original problem that caused the exception so our application can respond differently for each case. This is especially useful when we work with frameworks that wrap the root exceptions into their own.

In this short article, we’ll show how to get the root cause exception using plain Java as well as external libraries such as Apache Commons Lang and Google Guava.

2. An Age Calculator App

Our application will be an age calculator that tells us how old a person is from a given date received as String in ISO format. We’ll handle 2 possible error cases when parsing the date: a poorly-formatted date and a date in the future.

Let’s first create exceptions for our error cases:

static class InvalidFormatException extends DateParseException {

    InvalidFormatException(String input, Throwable thr) {
        super("Invalid date format: " + input, thr);
    }
}

static class DateOutOfRangeException extends DateParseException {

    DateOutOfRangeException(String date) {
        super("Date out of range: " + date);
    }

}

Both exceptions inherit from a common parent exception that will make our code a bit clearer:

static class DateParseException extends RuntimeException {

    DateParseException(String input) {
        super(input);
    }

    DateParseException(String input, Throwable thr) {
        super(input, thr);
    }
}

After that, we can implement the AgeCalculator class with a method to parse the date:

static class AgeCalculator {

    private static LocalDate parseDate(String birthDateAsString) {
        LocalDate birthDate;
        try {
            birthDate = LocalDate.parse(birthDateAsString);
        } catch (DateTimeParseException ex) {
            throw new InvalidFormatException(birthDateAsString, ex);
        }

        if (birthDate.isAfter(LocalDate.now())) {
            throw new DateOutOfRangeException(birthDateAsString);
        }

        return birthDate;
    }
}

As we can see, when the format is wrong we wrap the DateTimeParseException into our custom InvalidFormatException.

Finally, let’s add a public method to our class that receives the date, parses it and then calculates the age:

public static int calculateAge(String birthDate) {
    if (birthDate == null || birthDate.isEmpty()) {
        throw new IllegalArgumentException();
    }

    try {
        return Period
          .between(parseDate(birthDate), LocalDate.now())
          .getYears();
    } catch (DateParseException ex) {
        throw new CalculationException(ex);
    }
}

As shown, we’re wrapping the exceptions again. In this case, we wrap them into a CalculationException that we have to create:

static class CalculationException extends RuntimeException {

    CalculationException(DateParseException ex) {
        super(ex);
    }
}

Now, we’re ready to use our calculator by passing it any date in ISO format:

AgeCalculator.calculateAge("2019-10-01");

And if the calculation fails, it would be useful to know what the problem was, wouldn’t it? Keep reading to find out how we can do that.

3. Find the Root Cause Using Plain Java

The first way we’ll use to find the root cause exception is by creating a custom method that loops through all the causes until it reaches the root:

public static Throwable findCauseUsingPlainJava(Throwable throwable) {
    Objects.requireNonNull(throwable);
    Throwable rootCause = throwable;
    while (rootCause.getCause() != null && rootCause.getCause() != rootCause) {
        rootCause = rootCause.getCause();
    }
    return rootCause;
}

Notice that we’ve added an extra condition in our loop to avoid infinite loops when handling recursive causes.

If we pass an invalid format to our AgeCalculator, we’ll get the DateTimeParseException as the root cause:

try {
    AgeCalculator.calculateAge("010102");
} catch (CalculationException ex) {
    assertTrue(findCauseUsingPlainJava(ex) instanceof DateTimeParseException);
}

However, if we use a future date we’ll get a DateOutOfRangeException:

try {
    AgeCalculator.calculateAge("2020-04-04");
} catch (CalculationException ex) {
    assertTrue(findCauseUsingPlainJava(ex) instanceof DateOutOfRangeException);
}

Furthermore, our method also works for non-nested exceptions:

try {
    AgeCalculator.calculateAge(null);
} catch (Exception ex) {
    assertTrue(findCauseUsingPlainJava(ex) instanceof IllegalArgumentException);
}

In this case, we get an IllegalArgumentException since we passed in null.

4. Find the Root Cause Using Apache Commons Lang

We’ll now demonstrate finding the root cause using third-party libraries instead of writing our custom implementation.

Apache Commons Lang provides an ExceptionUtils class which provides some utility methods to work with exceptions.

We’ll use the getRootCause() method with our previous example:

try {
    AgeCalculator.calculateAge("010102");
} catch (CalculationException ex) {
    assertTrue(ExceptionUtils.getRootCause(ex) instanceof DateTimeParseException);
}

We get the same root cause as before. The same behavior applies to the other examples that we’ve listed above.

5. Find the Root Cause Using Guava

The last way we’re going to try is by using Guava. Similar to Apache Commons Lang, it provides a Throwables class with a getRootCause() utility method.

Let’s try it out with the same example:

try {
    AgeCalculator.calculateAge("010102");
} catch (CalculationException ex) {
    assertTrue(Throwables.getRootCause(ex) instanceof DateTimeParseException);
}

The behavior is exactly the same as with the other methods.

6. Conclusion

In this article, we’ve demonstrated how to use nested exceptions in our application and implemented a utility method to find the root cause exception. We’ve also shown how to do the same by using third-party libraries like Apache Commons Lang and Google Guava.

As always, the full source code for the examples is available over on GitHub.


Template Engines in Groovy

1. Overview

In this introductory tutorial, we’ll explore the concept of template engines in Groovy.

In Groovy, we can use GStrings to generate dynamic text easily. However, the template engines provide a better way of handling dynamic text using static templates.

These templates are convenient in defining static templates for various notifications like SMS and emails.

2. What is Groovy’s TemplateEngine?

Groovy’s TemplateEngine is an abstract class that contains the createTemplate method.

All template framework engines available in Groovy extend TemplateEngine and implement createTemplate. Additionally, every engine returns the Template interface object.

The Template interface has a method make, which takes a map for binding variables. Therefore, it must be implemented by every template framework.

Let’s discuss the functionality and behavior of all the available template frameworks in Groovy.

3. SimpleTemplateEngine

The SimpleTemplateEngine generates dynamic text using String interpolation and scriptlets. This engine is quite useful for simple notifications like SMS and simple text emails.

For example:

def smsTemplate = 'Dear <% print user %>, Thanks for reading our Article. ${signature}'
def bindMap = [user: "Norman", signature: "Baeldung"]
def smsText = new SimpleTemplateEngine().createTemplate(smsTemplate).make(bindMap)

assert smsText.toString() == "Dear Norman, Thanks for reading our Article. Baeldung"

4. StreamingTemplateEngine

In a general sense, the StreamingTemplateEngine works similarly to SimpleTemplateEngine. However, internally it uses Writable closures to generate a template.

For this reason, it has advantages when working with larger Strings (> 64K), which makes it more efficient than SimpleTemplateEngine.

Let’s write a quick example to generate a dynamic email content using a static template.

Firstly, we’ll create a static articleEmail template:

Dear <% out << (user) %>,
Please read the requested article below.
<% out << (articleText) %>
From,
<% out << (signature) %>

Here, we’re using <% %> scriptlets for dynamic text and out for the writer.

Now, we’ll generate the content of an email using StreamingTemplateEngine:

def articleEmailTemplate = new File('src/main/resources/articleEmail.template')
def bindMap = [user: "Norman", signature: "Baeldung"]

bindMap.articleText = """1. Overview
This is a tutorial article on Template Engines...""" //can be a string larger than 64k

def articleEmailText = new StreamingTemplateEngine().createTemplate(articleEmailTemplate).make(bindMap)

assert articleEmailText.toString() == """Dear Norman,
Please read the requested article below.
1. Overview
This is a tutorial article on Template Engines...
From,
Baeldung"""

5. GStringTemplateEngine

As the name suggests, GStringTemplateEngine uses GString to generate dynamic text from static templates.

Firstly, let’s write a simple email template using GString:

Dear $user,
Thanks for subscribing our services.
${signature}

Now, we’ll use GStringTemplateEngine to create dynamic content:

def emailTemplate = new File('src/main/resources/email.template')
def emailText = new GStringTemplateEngine().createTemplate(emailTemplate).make(bindMap)

6. XmlTemplateEngine

The XmlTemplateEngine is useful when we want to create dynamic XML output. It requires an XML schema as input and allows two special tags, <gsp:scriptlet> to inject a script and <gsp:expression> to inject an expression.

For example, let’s convert the already discussed email template to XML:

def emailXmlTemplate = '''
<xs xmlns:gsp='groovy-server-pages'>
    <gsp:scriptlet>def emailContent = "Thanks for subscribing our services."</gsp:scriptlet>
    <email>
        <greet>Dear ${user}</greet>
        <content><gsp:expression>emailContent</gsp:expression></content>
        <signature>${signature}</signature>
    </email>
</xs>'''

def emailXml = new XmlTemplateEngine().createTemplate(emailXmlTemplate).make(bindMap)

Hence, the emailXml will have XML rendered, and the content will be:

<xs>
  <email>
    <greet>
      Dear Norman
    </greet>
    <content>
      Thanks for subscribing our services.
    </content>
    <signature>
      Baeldung
    </signature>
  </email>
</xs>

It’s interesting to note that the XML output is automatically indented and beautified by the template framework.

7. MarkupTemplateEngine

This template framework is a complete package to generate HTML and other markup languages.

Additionally, it uses Domain Specific Language to process the templates and is the most optimized among all template frameworks available in Groovy.

7.1. HTML

Let’s write a quick example to render HTML for the already discussed email template:

def emailHtmlTemplate = """
html {
    head {
        title('Service Subscription Email')
    }
    body {
        p('Dear Norman')
        p('Thanks for subscribing our services.')
        p('Baeldung')
    }
}"""
def emailHtml = new MarkupTemplateEngine().createTemplate(emailHtmlTemplate).make()

Therefore, the content of emailHtml will be:

<html><head><title>Service Subscription Email</title></head>
<body><p>Dear Norman</p><p>Thanks for subscribing our services.</p><p>Baeldung</p></body></html>

7.2. XML

Likewise, we can render XML:

def emailXmlTemplate = """
xmlDeclaration()  
    xs{
        email {
            greet('Dear Norman')
            content('Thanks for subscribing our services.')
            signature('Baeldung')
        }  
    }"""
def emailXml = new MarkupTemplateEngine().createTemplate(emailXmlTemplate).make()

Therefore, the content of emailXml will be:

<?xml version='1.0'?>
<xs><email><greet>Dear Norman</greet><content>Thanks for subscribing our services.</content>
<signature>Baeldung</signature></email></xs>

7.3. TemplateConfiguration

Note that, unlike XmlTemplateEngine, this framework does not indent and beautify the template output by itself.

For such configuration, we’ll use the TemplateConfiguration class:

TemplateConfiguration config = new TemplateConfiguration()
config.autoIndent = true
config.autoEscape = true
config.autoNewLine = true
                               
def templateEngine = new MarkupTemplateEngine(config)

7.4. Internationalization

Additionally, TemplateConfiguration provides a locale property to enable internationalization support.

Firstly, we’ll create a static template file email.tpl and copy the already discussed emailHtmlTemplate string into it. This will be treated as the default template.

Likewise, we’ll create locale-based template files like email_ja_JP.tpl for Japanese, email_fr_FR.tpl for French, etc.

Finally, all we need is to set the locale in the TemplateConfiguration object:

config.locale = Locale.JAPAN

Hence, the corresponding locale-based template will be picked.

8. Conclusion

In this article, we’ve seen various template frameworks available in Groovy.

We can leverage these handy template engines to generate dynamic text using static templates. Therefore, they can be helpful in the dynamic generation of various kinds of notifications or on-screen messages and errors.

As usual, the code implementations of this tutorial are available on the GitHub project.

Converting Between Stream and Array in Java

1. Introduction

It’s common to need to convert various dynamic data structures into arrays.

In this tutorial, we’ll demonstrate how to convert a Stream to an array and vice versa in Java.

2. Converting a Stream to an Array

2.1. Method Reference

The best way to convert a Stream into an array is to use Stream’s toArray() method:

public String[] usingMethodReference(Stream<String> stringStream) {
    return stringStream.toArray(String[]::new);
}

Now, we can easily test if the conversion was successful:

Stream<String> stringStream = Stream.of("baeldung", "convert", "to", "string", "array");
assertArrayEquals(new String[] { "baeldung", "convert", "to", "string", "array" },
    usingMethodReference(stringStream));

2.2. Lambda Expression

Another equivalent is to pass a lambda expression to the toArray() method:

public static String[] usingLambda(Stream<String> stringStream) {
    return stringStream.toArray(size -> new String[size]);
}

This would give us the same result as with using the method reference.

2.3. Custom Class

Or, we can go all out and create a full-blown class.

As we can see from the Stream documentation, toArray() takes an IntFunction as an argument, which receives the array size as input and returns an array of that size.

Of course, IntFunction is an interface so we can implement it:

class MyArrayFunction implements IntFunction<String[]> {
    @Override
    public String[] apply(int size) {
        return new String[size];
    }
};

We can then construct and use as normal:

public String[] usingCustomClass(Stream<String> stringStream) {
    return stringStream.toArray(new MyArrayFunction());
}

Consequently, we can make the same assertion as earlier.

2.4. Primitive Arrays

In the previous sections, we explored how to convert a String Stream to a String array. In fact, we can perform the conversion this way for any Object and it would look very similar to the String examples above.

It’s a bit different for primitives, though. If we have a Stream of Integers that we want to convert to int[], for example, we first need to call the mapToInt() method:

public int[] intStreamToPrimitiveIntArray(Stream<Integer> integerStream) {
    return integerStream.mapToInt(i -> i).toArray();
}

There are also mapToLong() and mapToDouble() methods at our disposal. Also, note that we didn’t pass any argument to toArray() this time.

Finally, let’s do the equality assertion and confirm that we’ve got our int array correctly:

Stream<Integer> integerStream = IntStream.rangeClosed(1, 7).boxed();
assertArrayEquals(new int[]{1, 2, 3, 4, 5, 6, 7}, intStreamToPrimitiveIntArray(integerStream));

What if we need to do the opposite, though? Let’s take a look.

3. Converting an Array to a Stream

We can, of course, go the other way, too. And Java has some dedicated methods for that.

3.1. Array of Objects

We can convert the array to a Stream using Arrays.stream() or Stream.of() methods:

public Stream<String> stringArrayToStreamUsingArraysStream(String[] stringArray) {
    return Arrays.stream(stringArray);
}

public Stream<String> stringArrayToStreamUsingStreamOf(String[] stringArray) {
    return Stream.of(stringArray);
}

We should note that, in both cases, our Stream is of the same type as our array.

3.2. Array of Primitives

Similarly, we can convert an array of primitives:

public IntStream primitiveIntArrayToStreamUsingArraysStream(int[] intArray) {
    return Arrays.stream(intArray);
}

public Stream<int[]> primitiveIntArrayToStreamUsingStreamOf(int[] intArray) {
    return Stream.of(intArray);
}

But, in contrast to converting Object arrays, there is an important difference: when converting a primitive array, Arrays.stream() returns an IntStream, while Stream.of() returns a Stream<int[]>.

3.3. Arrays.stream vs. Stream.of

In order to understand the differences mentioned in earlier sections, we’ll take a look at the implementation of the corresponding methods.

Let’s first take a peek at Java’s implementation of these two methods:

public <T> Stream<T> stream(T[] array) {
    return stream(array, 0, array.length);
}

public <T> Stream<T> of(T... values) {
    return Arrays.stream(values);
}

We can see that Stream.of() is actually calling Arrays.stream() internally and that’s obviously the reason why we get the same results.

Now, we’ll check out the methods in the case when we want to convert an array of primitives:

public IntStream stream(int[] array) {
    return stream(array, 0, array.length);
}

public <T> Stream<T> of(T t) {
    return StreamSupport.stream(new Streams.StreamBuilderImpl<>(t), false);
}

This time, Stream.of() is not calling the Arrays.stream().

4. Conclusion

In this article, we saw how we can convert Streams to arrays in Java and the other way round. We also explained why we get different results when converting an array of Objects and when we use an array of primitives.

As always, complete source code can be found over on GitHub.

Java Weekly, Issue 281

Here we go…

1. Spring and Java

>> Test-Driven Development: Really, It’s a Design Technique [infoq.com]

A step-by-step walkthrough of TDD using a simple Java example.

>> Property-based Testing in Java: PBT and Test-driven Development [blog.johanneslink.net]

Another Java-based TDD example, this time using a technique where you first define the desired properties of a solution and then iteratively develop and test the solution until all properties are realized.

>> Jakarta EE, javax, And A Week Of Turmoil [blog.codefx.org]

And finally, a compilation of reactions from the Java community regarding last week’s announcement.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

>> How to Move Beyond a Monolithic Data Lake to a Distributed Data Mesh [martinfowler.com]

An introduction to the domain-driven distributed data mesh, a paradigm shift from the centralized, monolithic, domain-agnostic data lakes that proliferate enterprise data today.

>> Increasing access to blockchain and ledger databases [allthingsdistributed.com]

The time-tested ledger data store is a natural fit for blockchain technology, and AWS Managed Blockchain aims to make it easier for companies to adopt.

>> The Potential for Using a Service Mesh for Event-Driven Messaging [infoq.com]

And a quick look at how existing service-mesh offerings are trying to address the need for event-driven messaging support.

Also worth reading:

3. Comics

And my favorite Dilberts of the week:

>> Various Anonymous Sources [dilbert.com]

>> Twitch Gets You More Work [dilbert.com]

>> Bad Planning [dilbert.com]

4. Pick of the Week

>> It is perfectly OK to only code at work, you can have a life too [zeroequalsfalse.press]

String Initialization in Java

1. Introduction

Java String is one of the most important classes and we’ve already covered a lot of its aspects in our String-related series of tutorials.

In this tutorial, we’ll focus on String initialization in Java.

2. Creation

First of all, we should remember how Strings are created in Java.

We can use the new keyword or the literal syntax:

String usingNew = new String("baeldung");
String usingLiteral = "baeldung";

And, it’s also important that we understand how Strings are managed in a pool.

3. String Declaration Only

First, let’s just declare a String, without assigning a value explicitly.

We can either do this locally or as a member variable:

public class StringInitialization {

    String fieldString;

    void printDeclaredOnlyString() {
        String localVarString;
        
        // System.out.println(localVarString); -> compilation error
        System.out.println(fieldString);
    }
}

As we can see, if we try to use localVarString before giving it a value, we’ll get a compilation error. On the other hand, the console will show “null” for fieldString‘s value.

See, member variables are initialized with a default value when the class is constructed, null in String‘s case. But, we have to initialize local variables ourselves.

If we give localVarString a value of null, we’ll see that the two are, indeed, now equal:

String localVarString = null;
assertEquals(fieldString, localVarString);

4. String Initialization Using Literals

Let’s now create two Strings using the same literal:

String literalOne = "Baeldung";
String literalTwo = "Baeldung";

We’ll confirm that only one object is created by comparing the references:

assertTrue(literalOne == literalTwo);

The reason for this harks back to the fact that Strings are stored in a pool: literalOne adds the String “Baeldung” to the pool, and literalTwo reuses it.

5. String Initialization Using new

We’ll see some different behavior, though, if we use the new keyword.

String newStringOne = new String("Baeldung");
String newStringTwo = new String("Baeldung");

Although the value of both Strings will be the same as earlier, we’ll have two different objects this time:

assertFalse(newStringOne == newStringTwo);
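
If we do want to get back to the pooled instance, we can call intern() on a String created with new; a small sketch, relying on the fact that the "Baeldung" literal is already in the pool:

assertTrue(newStringOne.intern() == literalOne);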

6. Empty Strings

Let’s now create three empty Strings:

String emptyLiteral = "";
String emptyNewString = new String("");
String emptyNewStringTwo = new String();

As we know by now, the emptyLiteral will be added to the String pool, while the other two go directly onto the heap.

Although these won’t be the same objects, all of them will have the same value:

assertFalse(emptyLiteral == emptyNewString);
assertFalse(emptyLiteral == emptyNewStringTwo);
assertFalse(emptyNewString == emptyNewStringTwo);
assertEquals(emptyLiteral, emptyNewString);
assertEquals(emptyNewString, emptyNewStringTwo);

7. null Values

Finally, let’s see how null Strings behave.

Let’s declare and initialize a null String:

String nullValue = null;

If we printed nullValue, we’d see the word “null”, as we previously saw. And, if we tried to invoke any methods on nullValue, we’d get a NullPointerException, as expected.

But why is “null” being printed? And what is null, actually?

Well, the JVM specification says that null is the default value for all references, so it’s not specifically tied to the String. And actually, the specification doesn’t mandate any concrete value encoding for null.

So, where is “null” coming from for printing a String then?

If we take a look at the PrintStream#println implementation, we’ll see it calls String#valueOf:

public void println(Object x) {
    String s = String.valueOf(x);
    synchronized (this) {
        print(s);
        newLine();
    }
}

And, if we look at String#valueOf, we get our answer:

public static String valueOf(Object obj) {
    return (obj == null) ? "null" : obj.toString();
}

And, obviously, that’s the reason for “null”.

8. Conclusion

In this article, we explored String initialization. We explained the difference between declaration and initialization. We also touched on using new and using the literal syntax.

Finally, we took a look at what it means to assign a null value to a String, how the null value is represented in memory, and how it looks when we print it.

All code samples used in the article are available over on GitHub.

Introduction to RxKotlin

1. Overview

In this tutorial, we’re going to review the use of Reactive Extensions (Rx) in idiomatic Kotlin using the RxKotlin library.

RxKotlin is not an implementation of Reactive Extensions, per se. Instead, it’s mostly a collection of extension methods. That is, RxKotlin augments the RxJava library with an API designed with Kotlin in mind. Therefore, we’ll use concepts from our article, Introduction to RxJava, as well as the concept of Flowables we’ve presented in a dedicated article.

2. RxKotlin Setup

To use RxKotlin in our Maven project, we’ll need to add the rxkotlin dependency to our pom.xml:

<dependency>
    <groupId>io.reactivex.rxjava2</groupId>
    <artifactId>rxkotlin</artifactId>
    <version>2.3.0</version>
</dependency>

Or, for a Gradle project, to our build.gradle:

implementation 'io.reactivex.rxjava2:rxkotlin:2.3.0'

Here, we’re using RxKotlin 2.x, which targets RxJava 2. Projects using RxJava 1 should use RxKotlin 1.x. The same concepts apply to both versions.

Note that RxKotlin depends on RxJava, but its dependency isn’t always updated to the latest release. So, we recommend explicitly including the specific RxJava version we’re going to depend on, as detailed in our RxJava article.

3. Creating Observables in RxKotlin

RxKotlin includes a number of extension methods to create Observable and Flowable objects from collections.

In particular, every type of array has a toObservable() method and a toFlowable() method:

val observable = listOf(1, 1, 2, 3).toObservable()
observable.test().assertValues(1, 1, 2, 3)
val flowable = listOf(1, 1, 2, 3).toFlowable()
flowable.buffer(2).test().assertValues(listOf(1, 1), listOf(2, 3))

3.1. Completables

RxKotlin also provides some methods to create Completable instances. In particular, we can convert Actions, Callables, Futures, and zero-arity functions to Completable with the extension method toCompletable:

var value = 0
val completable = { value = 3 }.toCompletable()
assertFalse(completable.test().isCancelled())
assertEquals(3, value)

4. Observable and Flowable to Map and Multimap

When we have an Observable or Flowable that produces Pair instances, we can transform them into a Single observable that produces a Map:

val list = listOf(Pair("a", 1), Pair("b", 2), Pair("c", 3), Pair("a", 4))
val observable = list.toObservable()
val map = observable.toMap()
assertEquals(mapOf(Pair("a", 4), Pair("b", 2), Pair("c", 3)), map.blockingGet())

As we can see in the previous example, toMap overwrites values emitted earlier with later values if they have the same key.

If we want to accumulate all the values associated with a key into a collection, we use toMultimap instead:

val list = listOf(Pair("a", 1), Pair("b", 2), Pair("c", 3), Pair("a", 4))
val observable = list.toObservable()
val map = observable.toMultimap()
assertEquals(
  mapOf(Pair("a", listOf(1, 4)), Pair("b", listOf(2)), Pair("c", listOf(3))),
  map.blockingGet())

5. Combining Observables and Flowables

One of the selling points of Rx is the possibility to combine Observables and Flowables in various ways. Indeed, RxJava provides a number of operators out of the box.

In addition to that, RxKotlin includes a few more extension methods for combining Observables and the like.

5.1. Combining Observable Emissions

When we have an Observable that emits other Observables, we can use one of the extension methods in RxKotlin to combine together the emitted values.

In particular, mergeAll combines the observables with flatMap:

val subject = PublishSubject.create<Observable<String>>()
val observable = subject.mergeAll()

Which would be the same as:

val observable = subject.flatMap { it }

The resulting Observable will emit all the values of the source Observables in an unspecified order.

Similarly, concatAll uses concatMap (the values are emitted in the same order as the sources), while switchLatest uses switchMap (values are emitted from the last emitted Observable).

As we’ve seen so far, all the above methods are provided for Flowable sources as well, with the same semantics.

5.2. Combining Completables, Maybes, and Singles

When we have an Observable that emits instances of Completable, Maybe, or Single, we can combine those with the appropriate mergeAllXs method like, for example, mergeAllMaybes:

val subject = PublishSubject.create<Maybe<Int>>()
val observable = subject.mergeAllMaybes()
subject.onNext(Maybe.just(1))
subject.onNext(Maybe.just(2))
subject.onNext(Maybe.empty())
subject.onNext(Maybe.error(Exception("error")))
subject.onNext(Maybe.just(3))
observable.test().assertValues(1, 2).assertError(Exception::class.java)

5.3. Combining Iterables of Observables

For collections of Observable or Flowable instances instead, RxKotlin has a couple of other operators, merge and mergeDelayError. They both have the effect of combining all the Observables or Flowables into one that will emit all the values in sequence:

val observables = mutableListOf(Observable.just("first", "second"))
val observable = observables.merge()
observables.add(Observable.just("third", "fourth"))
observable.test().assertValues("first", "second", "third", "fourth")

The difference between the two operators — which are directly derived from the same-named operators in RxJava — is their treatment of errors.

The merge method emits errors as soon as they’re emitted by the source:

// ...
observables.add(Observable.error(Exception("e")))
observables.add(Observable.just("fifth"))
// ...
observable.test().assertValues("first", "second", "third", "fourth")

Whereas mergeDelayError emits them at the end of the stream:

// ...
observables.add(Observable.error(Exception("e")))
observables.add(Observable.just("fifth"))
// ...
observable.test().assertValues("first", "second", "third", "fourth", "fifth")

6. Handling Values of Different Types

Let’s now look at the extension methods in RxKotlin for dealing with values of different types.

These are variants of RxJava methods, that make use of Kotlin’s reified generics. In particular, we can:

  • cast emitted values from one type to another, or
  • filter out values that are not of a certain type

So, we could, for example, cast an Observable of Numbers to one of Ints:

val observable = Observable.just<Number>(1, 1, 2, 3)
observable.cast<Int>().test().assertValues(1, 1, 2, 3)

Here, the cast is unnecessary. However, when combining different observables together, we might need it.

With ofType, instead, we can filter out values that aren’t of the type we expect:

val observable = Observable.just(1, "and", 2, "and")
observable.ofType<Int>().test().assertValues(1, 2)

As always, cast and ofType are applicable to both Observables and Flowables.

Furthermore, Maybe supports these methods as well. The Single class, instead, only supports cast.

7. Other Helper Methods

Finally, RxKotlin includes several helper methods. Let’s have a quick look.

We can use subscribeBy instead of subscribe – it allows named parameters:

Observable.just(1).subscribeBy(onNext = { println(it) })

Similarly, for blocking subscriptions we can use blockingSubscribeBy.

Additionally, RxKotlin includes some methods that mimic those in RxJava but work around a limitation of Kotlin’s type inference.

For example, when using Observable#zip, specifying the zipper doesn’t look so great:

Observable.zip(Observable.just(1), Observable.just(2), BiFunction<Int, Int, Int> { a, b -> a + b })

So, RxKotlin adds Observables#zip for more idiomatic usage:

Observables.zip(Observable.just(1), Observable.just(2)) { a, b -> a + b }

Notice the final “s” in Observables. Similarly, we have Flowables, Singles, and Maybes.

8. Conclusions

In this article, we’ve thoroughly reviewed the RxKotlin library, which augments RxJava to make its API look more like idiomatic Kotlin.

For further information, please refer to the RxKotlin GitHub page. For more examples, we recommend RxKotlin tests.

The implementation of all these examples and code snippets can be found in the GitHub project as a Maven and Gradle project, so it should be easy to import and run as is.

Difference Between a Java Keystore and a Truststore

1. Overview

In this article, we’ll provide an overview of the differences between a Java keystore and a Java truststore.

2. Concepts

In most cases, we use a keystore and a truststore when our application needs to communicate over SSL/TLS.

Usually, these are password-protected files that sit on the same file system as our running application. The default format used for these files was JKS until Java 8.

Since Java 9, though, the default keystore format is PKCS12. The biggest difference between JKS and PKCS12 is that JKS is a format specific to Java, while PKCS12 is a standardized and language-neutral way of storing encrypted private keys and certificates.

3. Java KeyStore

A Java keystore stores private key entries, certificates with public keys or just secret keys that we may use for various cryptographic purposes. It stores each by an alias for ease of lookup.

Generally speaking, keystores hold keys that our application owns that we can use to prove the integrity of a message and the authenticity of the sender, say by signing payloads.

Usually, we’ll use a keystore when we are a server and want to use HTTPS. During an SSL handshake, the server looks up the private key from the keystore and presents its corresponding public key and certificate to the client.

Correspondingly, if the client also needs to authenticate itself – a situation called mutual authentication – then the client also has a keystore and also presents its public key and certificate.

There’s no default keystore, so if we want to use an encrypted channel, we’ll have to set javax.net.ssl.keyStore and javax.net.ssl.keyStorePassword. If our keystore format is different than the default, we could use javax.net.ssl.keyStoreType to customize it.
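
A minimal sketch of setting these properties at startup might look like this (the path, password, and type below are placeholder values):

System.setProperty("javax.net.ssl.keyStore", "/path/to/keystore.p12");
System.setProperty("javax.net.ssl.keyStorePassword", "changeit");
System.setProperty("javax.net.ssl.keyStoreType", "PKCS12");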

Of course, we can use these keys to service other needs as well. Private keys can sign or decrypt data, and public keys can verify or encrypt data. Secret keys can perform these functions as well. A keystore is a place that we can hold onto these keys.

We can also interact with the keystore programmatically.
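
For instance, here’s a minimal sketch of loading a keystore and looking up a certificate by its alias, assuming a PKCS12 file named keystore.p12 and an alias my-alias:

// load the keystore from disk
KeyStore keyStore = KeyStore.getInstance("PKCS12");
try (InputStream is = Files.newInputStream(Paths.get("keystore.p12"))) {
    keyStore.load(is, "changeit".toCharArray());
}

// retrieve a stored certificate by its alias
Certificate certificate = keyStore.getCertificate("my-alias");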

4. Java TrustStore

A truststore is the opposite – while a keystore typically holds onto certificates that identify us, a truststore holds onto certificates that identify others.

In Java, we use it to trust the third party we’re about to communicate with.

Take our earlier example. If a client talks to a Java-based server over HTTPS, the server will look up the associated key from its keystore and present the public key and certificate to the client.

We, the client, then look up the associated certificate in our truststore. If the certificate or Certificate Authority presented by the external server is not in our truststore, we’ll get an SSLHandshakeException and the connection won’t be set up successfully.

Java has bundled a truststore called cacerts and it resides in the $JAVA_HOME/jre/lib/security directory.

It contains default, trusted Certificate Authorities:

$ keytool -list -keystore cacerts
Enter keystore password:
Keystore type: JKS
Keystore provider: SUN

Your keystore contains 92 entries

verisignclass2g2ca [jdk], 2018-06-13, trustedCertEntry,
Certificate fingerprint (SHA1): B3:EA:C4:47:76:C9:C8:1C:EA:F2:9D:95:B6:CC:A0:08:1B:67:EC:9D

We see here that the truststore contains 92 trusted certificate entries, and that one of them is the verisignclass2g2ca entry. This means that the JVM will automatically trust certificates signed by verisignclass2g2ca.

Here, we can override the default truststore location via the javax.net.ssl.trustStore property. Similarly, we can set javax.net.ssl.trustStorePassword and javax.net.ssl.trustStoreType to specify the truststore’s password and type.
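
As a programmatic alternative to these properties, we can wire a custom truststore into an SSLContext ourselves; a minimal sketch, assuming a PKCS12 file named truststore.p12:

// load the truststore from disk
KeyStore trustStore = KeyStore.getInstance("PKCS12");
try (InputStream is = Files.newInputStream(Paths.get("truststore.p12"))) {
    trustStore.load(is, "changeit".toCharArray());
}

// build an SSLContext that trusts only the certificates in that truststore
TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
tmf.init(trustStore);

SSLContext sslContext = SSLContext.getInstance("TLS");
sslContext.init(null, tmf.getTrustManagers(), null);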

5. Conclusion

In this tutorial, we discussed the main differences between a Java keystore and a Java truststore, as well as their purposes.

Also, we showed how the defaults can be overridden with system properties.

Next, we could have a look at the following SSL guide or the JSSE Reference Guide to learn more details about encrypted communication in Java.

String API Updates in Java 12

1. Introduction

Java 12 added a couple of useful APIs to the String class. In this tutorial, we will explore these new APIs with examples.

2. indent()

The indent() method adjusts the indentation of each line of the string based on the argument passed to it.

When indent() is called on a string, the following actions are taken:

  1. The string is conceptually separated into lines using lines(), a String API introduced in Java 11.
  2. Each line is then adjusted based on the int argument n passed to it and then suffixed with a line feed “\n”.
    1. If n > 0, then n spaces are inserted at the beginning of each line.
    2. If n < 0, then up to n white space characters are removed from the beginning of each line. In case a given line does not contain sufficient white space, then all leading white space characters are removed.
    3. If n == 0, then the line remains unchanged. However, line terminators are still normalized.
  3. The resulting lines are then concatenated and returned.

For example:

@Test
public void whenPositiveArgument_thenReturnIndentedString() {
    String multilineStr = "This is\na multiline\nstring.";
    String outputStr = "   This is\n   a multiline\n   string.\n";

    String postIndent = multilineStr.indent(3);

    assertThat(postIndent, equalTo(outputStr));
}

We can also pass a negative int to reduce the indentation of the string. For example:

@Test
public void whenNegativeArgument_thenReturnReducedIndentedString() {
    String multilineStr = "   This is\n   a multiline\n   string.";
    String outputStr = " This is\n a multiline\n string.\n";

    String postIndent = multilineStr.indent(-2);

    assertThat(postIndent, equalTo(outputStr));
}

3. transform()

We can apply a function to a given string using the transform() method. The function should expect a single String argument and produce a result:

@Test
public void whenTransformUsingLamda_thenReturnTransformedString() {
    String result = "hello".transform(input -> input + " world!");

    assertThat(result, equalTo("hello world!"));
}

The output doesn’t necessarily have to be a String, though. For example:

@Test
public void whenTransformUsingParseInt_thenReturnInt() {
    int result = "42".transform(Integer::parseInt);

    assertThat(result, equalTo(42));
}

4. Conclusion

In this article, we explored the new String APIs in Java 12. As usual, code snippets can be found over on GitHub.


Removing Stopwords from a String in Java

1. Overview

In this tutorial, we’ll discuss different ways to remove stopwords from a String in Java. This is a useful operation in cases where we want to remove unwanted or disallowed words from a text, such as comments or reviews added by users of an online site.

We’ll use a simple loop, Collection.removeAll() and regular expressions.

Finally, we’ll compare their performance using the Java Microbenchmark Harness.

2. Loading Stopwords

First, we’ll load our stopwords from a text file.

Here we have the file english_stopwords.txt, which contains a list of words we consider stopwords, such as I, he, she, and the.

We’ll load the stopwords into a List of String using Files.readAllLines():

@BeforeClass
public static void loadStopwords() throws IOException {
    stopwords = Files.readAllLines(Paths.get("english_stopwords.txt"));
}

3. Removing Stopwords Manually

For our first solution, we’ll remove stopwords manually by iterating over each word and checking if it’s a stopword:

@Test
public void whenRemoveStopwordsManually_thenSuccess() {
    String original = "The quick brown fox jumps over the lazy dog"; 
    String target = "quick brown fox jumps lazy dog";
    String[] allWords = original.toLowerCase().split(" ");

    StringBuilder builder = new StringBuilder();
    for(String word : allWords) {
        if(!stopwords.contains(word)) {
            builder.append(word);
            builder.append(' ');
        }
    }
    
    String result = builder.toString().trim();
    assertEquals(result, target);
}

4. Using Collection.removeAll()

Next, instead of iterating over each word in our String, we can use Collection.removeAll() to remove all stopwords at once:

@Test
public void whenRemoveStopwordsUsingRemoveAll_thenSuccess() {
    ArrayList<String> allWords = 
      Stream.of(original.toLowerCase().split(" "))
            .collect(Collectors.toCollection(ArrayList<String>::new));
    allWords.removeAll(stopwords);

    String result = allWords.stream().collect(Collectors.joining(" "));
    assertEquals(result, target);
}

In this example, after splitting our String into an array of words, we’ll transform it into an ArrayList to be able to apply the removeAll() method.

5. Using Regular Expressions

Finally, we can create a regular expression from our stopwords list, then use it to replace stopwords in our String:

@Test
public void whenRemoveStopwordsUsingRegex_thenSuccess() {
    String stopwordsRegex = stopwords.stream()
      .collect(Collectors.joining("|", "\\b(", ")\\b\\s?"));

    String result = original.toLowerCase().replaceAll(stopwordsRegex, "");
    assertEquals(result, target);
}

The resulting stopwordsRegex will have the format “\\b(he|she|the|…)\\b\\s?”. In this regex, “\b” refers to a word boundary, to avoid replacing “he” in “heat” for example, while “\s?” refers to zero or one space, to delete the extra space after replacing a stopword.

6. Performance Comparison

Now, let’s see which method has the best performance.

First, let’s set up our benchmark. We’ll use a rather big text file, shakespeare-hamlet.txt, as the source of our String:

@Setup
public void setup() throws IOException {
    data = new String(Files.readAllBytes(Paths.get("shakespeare-hamlet.txt")));
    data = data.toLowerCase();
    stopwords = Files.readAllLines(Paths.get("english_stopwords.txt"));
    stopwordsRegex = stopwords.stream().collect(Collectors.joining("|", "\\b(", ")\\b\\s?"));
}

Then we’ll have our benchmark methods, starting with removeManually():

@Benchmark
public String removeManually() {
    String[] allWords = data.split(" ");
    StringBuilder builder = new StringBuilder();
    for(String word : allWords) {
        if(!stopwords.contains(word)) {
            builder.append(word);
            builder.append(' ');
        }
    }
    return builder.toString().trim();
}

Next, we have the removeAll() benchmark:

@Benchmark
public String removeAll() {
    ArrayList<String> allWords = 
      Stream.of(data.split(" "))
            .collect(Collectors.toCollection(ArrayList<String>::new));
    allWords.removeAll(stopwords);
    return allWords.stream().collect(Collectors.joining(" "));
}

Finally, we’ll add the benchmark for replaceRegex():

@Benchmark
public String replaceRegex() {
    return data.replaceAll(stopwordsRegex, "");
}

And here’s the result of our benchmark:

Benchmark                           Mode  Cnt   Score    Error  Units
removeAll                           avgt   60   7.782 ±  0.076  ms/op
removeManually                      avgt   60   8.186 ±  0.348  ms/op
replaceRegex                        avgt   60  42.035 ±  1.098  ms/op

It seems like using Collection.removeAll() has the fastest execution time while using regular expressions is the slowest.

7. Conclusion

In this quick article, we learned different methods to remove stopwords from a String in Java. We also benchmarked them to see which method has the best performance.

The full source code for the examples is available over on GitHub.

LIKE Queries in Spring JPA Repositories

1. Introduction

In this quick tutorial, we’re going to cover various ways of creating LIKE queries in Spring JPA Repositories.

We’ll start by looking at the various keywords we can use while creating query methods. Then, we’ll cover the @Query annotation with named and ordered parameters.

2. Setup

For our example, we’ll be querying a movie table.

Let’s define our Movie entity:

@Entity
public class Movie {
    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE)
    private Long id;
    private String title;
    private String director;
    private String rating;
    private int duration;

    // standard getters and setters
}

With our Movie entity defined, let’s create some sample insert statements:

INSERT INTO movie(id, title, director, rating, duration) 
    VALUES(1, 'Godzilla: King of the Monsters', ' Michael Dougherty', 'PG-13', 132);
INSERT INTO movie(id, title, director, rating, duration) 
    VALUES(2, 'Avengers: Endgame', 'Anthony Russo', 'PG-13', 181);
INSERT INTO movie(id, title, director, rating, duration) 
    VALUES(3, 'Captain Marvel', 'Anna Boden', 'PG-13', 123);
INSERT INTO movie(id, title, director, rating, duration) 
    VALUES(4, 'Dumbo', 'Tim Burton', 'PG', 112);
INSERT INTO movie(id, title, director, rating, duration) 
    VALUES(5, 'Booksmart', 'Olivia Wilde', 'R', 102);
INSERT INTO movie(id, title, director, rating, duration) 
    VALUES(6, 'Aladdin', 'Guy Ritchie', 'PG', 128);
INSERT INTO movie(id, title, director, rating, duration) 
    VALUES(7, 'The Sun Is Also a Star', 'Ry Russo-Young', 'PG-13', 100);

3. LIKE Query Methods

For many simple LIKE query scenarios, we can take advantage of a variety of keywords to create query methods in our repositories.

Let’s explore them now.

3.1. Containing, Contains, IsContaining and Like

Let’s look at how we can perform the following LIKE query with a query method:

SELECT * FROM movie WHERE title LIKE '%in%';

First, let’s define query methods using Containing, Contains, and IsContaining:

List<Movie> findByTitleContaining(String title);
List<Movie> findByTitleContains(String title);
List<Movie> findByTitleIsContaining(String title);

Let’s call our query methods with the partial title in:

List<Movie> results = movieRepository.findByTitleContaining("in");
assertEquals(3, results.size());

results = movieRepository.findByTitleIsContaining("in");
assertEquals(3, results.size());

results = movieRepository.findByTitleContains("in");
assertEquals(3, results.size());

We can expect each of the three methods to return the same results.

Spring also provides us with a Like keyword, but it behaves slightly differently in that we’re required to provide the wildcard character with our search parameter.

Let’s define a LIKE query method:

List<Movie> findByTitleLike(String title);

Now, let’s call our findByTitleLike method with the same value we used before but including the wildcard characters:

results = movieRepository.findByTitleLike("%in%");
assertEquals(3, results.size());

3.2. StartsWith

Now, let’s look at the following query:

SELECT * FROM Movie WHERE Rating LIKE 'PG%';

Let’s use the StartsWith keyword to create a query method:

List<Movie> findByRatingStartsWith(String rating);

With our method defined, let’s call it with the value PG:

List<Movie> results = movieRepository.findByRatingStartsWith("PG");
assertEquals(6, results.size());

3.3. EndsWith

Spring provides us with the opposite functionality with the EndsWith keyword.

Let’s consider this query:

SELECT * FROM Movie WHERE director LIKE '%Burton';

Now, let’s define an EndsWith query method:

List<Movie> findByDirectorEndsWith(String director);

Once we’ve defined our method, let’s call it with the Burton parameter:

List<Movie> results = movieRepository.findByDirectorEndsWith("Burton");
assertEquals(1, results.size());

3.4. Case Insensitivity

We often want to find all the records containing a certain string regardless of case. In SQL, we might accomplish this by forcing both the column and the value we’re querying to all upper- or lowercase letters.

With Spring JPA, we can use the IgnoreCase keyword combined with one of our other keywords:

List<Movie> findByTitleContainingIgnoreCase(String title);

Now we can call the method with the value the and expect results that contain it in both lower- and uppercase:

List<Movie> results = movieRepository.findByTitleContainingIgnoreCase("the");
assertEquals(2, results.size());

3.5. Not

Sometimes we want to find all the records that don’t contain a particular string. We can use the NotContains, NotContaining, and NotLike keywords to do that.

Let’s define a query using NotContaining to find movies with ratings that don’t contain PG:

List<Movie> findByRatingNotContaining(String rating);

Now, let’s call our newly defined method:

List<Movie> results = movieRepository.findByRatingNotContaining("PG");
assertEquals(1, results.size());

To achieve functionality that finds records where the director doesn’t start with a particular string, let’s use the NotLike keyword to retain control over our wildcard placement:

List<Movie> findByDirectorNotLike(String director);

Finally, let’s call the method to find all the movies where the director’s name starts with something other than An:

List<Movie> results = movieRepository.findByDirectorNotLike("An%");
assertEquals(5, results.size());

We can use NotLike in a similar way to accomplish a Not combined with the EndsWith kind of functionality.
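
For instance, reusing the findByDirectorNotLike method above with a leading wildcard gives us that Not plus EndsWith behavior; with the sample data from our setup, we’d expect the six movies not directed by Tim Burton:

// directors whose names do not end with "Burton"
List<Movie> results = movieRepository.findByDirectorNotLike("%Burton");
assertEquals(6, results.size());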

4. Using @Query

Sometimes we need to create queries that are too complicated for Query Methods or would result in absurdly long method names. In those cases, we can use the @Query annotation to query our database.

4.1. Named Parameters

For comparison purposes, let’s create a query that’s equivalent to the findByTitleContaining method we defined earlier:

@Query("SELECT m FROM Movie m WHERE m.title LIKE %:title%")
List<Movie> searchByTitleLike(@Param("title") String title);

We include our wildcards in the query we supply. The @Param annotation is important here because we’re using a named parameter.
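
Calling this method with the same partial title as before should mirror the earlier findByTitleContaining results; a minimal usage sketch:

List<Movie> results = movieRepository.searchByTitleLike("in");
assertEquals(3, results.size());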

4.2. Ordered Parameters

In addition to named parameters, we can use ordered parameters in our queries:

@Query("SELECT m FROM Movie m WHERE m.rating LIKE ?1%")
List<Movie> searchByRatingStartsWith(String rating);

We have control of our wildcards, so this query is the equivalent of the findByRatingStartsWith query method.

Let’s find all the movies with a rating starting with PG:

List<Movie> results = movieRepository.searchByRatingStartsWith("PG");
assertEquals(6, results.size());

When we use ordered parameters in LIKE queries with untrusted data, we should escape incoming search values.

If we’re using Spring Boot 2.4.1 or later, we can use the SpEL escape method:

@Query("SELECT m FROM Movie m WHERE m.director LIKE %?#{escape([0])} escape ?#{escapeCharacter()}")
List<Movie> searchByDirectorEndsWith(String director);

Now, let’s call our method with the value Burton:

List<Movie> results = movieRepository.searchByDirectorEndsWith("Burton");
assertEquals(1, results.size());

5. Conclusion

In this short tutorial, we learned how to create LIKE queries in Spring JPA Repositories.

First, we learned how to use the provided keywords to create query methods. Then, we learned how to accomplish the same tasks using the @Query annotation with both named and ordered parameters.

The full example code is available over on GitHub.

Persisting Enums in JPA


1. Introduction

In JPA version 2.0 and below, there’s no convenient way to map Enum values to a database column. Each option has its limitations and drawbacks. These issues can be avoided by using JPA 2.1 features.

In this tutorial, we’ll take a look at the different possibilities we have to persist enums in a database using JPA. We’ll also describe their advantages and disadvantages as well as provide simple code examples.

2. Using @Enumerated Annotation

The most common option to map an enum value to and from its database representation in JPA before 2.1 is to use the @Enumerated annotation. This way, we can instruct a JPA provider to convert an enum to its ordinal or String value.

We’ll explore both options in this section.

But first, let’s create a simple @Entity that we’ll be using throughout this tutorial:

@Entity
public class Article {
    @Id
    private int id;

    private String title;

    // standard constructors, getters and setters
}

2.1. Mapping Ordinal Value

If we put the @Enumerated(EnumType.ORDINAL) annotation on the enum field, JPA will use the Enum.ordinal() value when persisting a given entity in the database.

Let’s introduce the first enum:

public enum Status {
    OPEN, REVIEW, APPROVED, REJECTED;
}

Next, let’s add it to the Article class and annotate it with @Enumerated(EnumType.ORDINAL):

@Entity
public class Article {
    @Id
    private int id;

    private String title;

    @Enumerated(EnumType.ORDINAL)
    private Status status;
}

Now, when persisting an Article entity:

Article article = new Article();
article.setId(1);
article.setTitle("ordinal title");
article.setStatus(Status.OPEN);

JPA will trigger the following SQL statement:

insert 
into
    Article
    (status, title, id) 
values
    (?, ?, ?)
binding parameter [1] as [INTEGER] - [0]
binding parameter [2] as [VARCHAR] - [ordinal title]
binding parameter [3] as [INTEGER] - [1]

A problem with this kind of mapping arises when we need to modify our enum. If we add a new value in the middle or rearrange the enum’s order, we’ll break the existing data model.

Such issues might be hard to catch, as well as problematic to fix, as we would have to update all the database records.

2.2. Mapping String Value

Analogously, JPA will use the Enum.name() value when storing an entity if we annotate the enum field with @Enumerated(EnumType.STRING).

Let’s create the second enum:

public enum Type {
    INTERNAL, EXTERNAL;
}

And let’s add it to our Article class and annotate it with @Enumerated(EnumType.STRING):

@Entity
public class Article {
    @Id
    private int id;

    private String title;

    @Enumerated(EnumType.ORDINAL)
    private Status status;

    @Enumerated(EnumType.STRING)
    private Type type;
}

Now, when persisting an Article entity:

Article article = new Article();
article.setId(2);
article.setTitle("string title");
article.setType(Type.EXTERNAL);

JPA will execute the following SQL statement:

insert 
into
    Article
    (status, title, type, id) 
values
    (?, ?, ?, ?)
binding parameter [1] as [INTEGER] - [null]
binding parameter [2] as [VARCHAR] - [string title]
binding parameter [3] as [VARCHAR] - [EXTERNAL]
binding parameter [4] as [INTEGER] - [2]

With @Enumerated(EnumType.STRING), we can safely add new enum values or change our enum’s order. However, renaming an enum value will still break the database data.

Additionally, even though this data representation is far more readable compared to the @Enumerated(EnumType.ORDINAL) option, it also consumes a lot more space than necessary. This might turn out to be a significant issue when we need to deal with a high volume of data.

3. Using @PostLoad and @PrePersist Annotations

Another option we have to deal with persisting enums in a database is to use standard JPA callback methods. We can map our enums back and forth in the @PostLoad and @PrePersist events.

The idea is to have two attributes in an entity. The first one is mapped to a database value, and the second one is a @Transient field that holds a real enum value. The transient attribute is then used by the business logic code.

To better understand the concept, let’s create a new enum and use its int value in the mapping logic:

public enum Priority {
    LOW(100), MEDIUM(200), HIGH(300);

    private int priority;

    private Priority(int priority) {
        this.priority = priority;
    }

    public int getPriority() {
        return priority;
    }

    public static Priority of(int priority) {
        return Stream.of(Priority.values())
          .filter(p -> p.getPriority() == priority)
          .findFirst()
          .orElseThrow(IllegalArgumentException::new);
    }
}

We’ve also added the Priority.of() method to make it easy to get a Priority instance based on its int value.

Now, to use it in our Article class, we need to add two attributes and implement callback methods:

@Entity
public class Article {

    @Id
    private int id;

    private String title;

    @Enumerated(EnumType.ORDINAL)
    private Status status;

    @Enumerated(EnumType.STRING)
    private Type type;

    @Basic
    private int priorityValue;

    @Transient
    private Priority priority;

    @PostLoad
    void fillTransient() {
        if (priorityValue > 0) {
            this.priority = Priority.of(priorityValue);
        }
    }

    @PrePersist
    void fillPersistent() {
        if (priority != null) {
            this.priorityValue = priority.getPriority();
        }
    }
}

Now, when persisting an Article entity:

Article article = new Article();
article.setId(3);
article.setTitle("callback title");
article.setPriority(Priority.HIGH);

JPA will trigger the following SQL query:

insert 
into
    Article
    (priorityValue, status, title, type, id) 
values
    (?, ?, ?, ?, ?)
binding parameter [1] as [INTEGER] - [300]
binding parameter [2] as [INTEGER] - [null]
binding parameter [3] as [VARCHAR] - [callback title]
binding parameter [4] as [VARCHAR] - [null]
binding parameter [5] as [INTEGER] - [3]

Even though this option gives us more flexibility in choosing the database value’s representation compared to the previously described solutions, it isn’t ideal. It just doesn’t feel right to have two attributes representing a single enum in the entity. Additionally, if we use this type of mapping, we aren’t able to use the enum’s value in JPQL queries.

4. Using JPA 2.1 @Converter Annotation

To overcome the limitations of the solutions shown above, JPA 2.1 release introduced a new standardized API that can be used to convert an entity attribute to a database value and vice versa. All we need to do is to create a new class that implements javax.persistence.AttributeConverter and annotate it with @Converter.

Let’s see a practical example. But first, as usual, we’ll create a new enum:

public enum Category {
    SPORT("S"), MUSIC("M"), TECHNOLOGY("T");

    private String code;

    private Category(String code) {
        this.code = code;
    }

    public String getCode() {
        return code;
    }
}

We also need to add it to the Article class:

@Entity
public class Article {

    @Id
    private int id;

    private String title;

    @Enumerated(EnumType.ORDINAL)
    private Status status;

    @Enumerated(EnumType.STRING)
    private Type type;

    @Basic
    private int priorityValue;

    @Transient
    private Priority priority;

    private Category category;
}

Now, let’s create a new CategoryConverter:

@Converter(autoApply = true)
public class CategoryConverter implements AttributeConverter<Category, String> {
 
    @Override
    public String convertToDatabaseColumn(Category category) {
        if (category == null) {
            return null;
        }
        return category.getCode();
    }

    @Override
    public Category convertToEntityAttribute(String code) {
        if (code == null) {
            return null;
        }

        return Stream.of(Category.values())
          .filter(c -> c.getCode().equals(code))
          .findFirst()
          .orElseThrow(IllegalArgumentException::new);
    }
}

We’ve set the @Converter’s autoApply value to true so that JPA will automatically apply the conversion logic to all mapped attributes of a Category type. Otherwise, we’d have to put the @Convert annotation directly on the entity’s field.
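
For completeness, here’s a minimal sketch of that non-auto-applied variant, using JPA’s @Convert annotation on the entity field:

@Entity
public class Article {
    // ... other fields as before

    // needed only when the converter is not declared with autoApply = true
    @Convert(converter = CategoryConverter.class)
    private Category category;
}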

Let’s now persist an Article entity:

Article article = new Article();
article.setId(4);
article.setTitle("converted title");
article.setCategory(Category.MUSIC);

Then JPA will execute the following SQL statement:

insert 
into
    Article
    (category, priorityValue, status, title, type, id) 
values
    (?, ?, ?, ?, ?, ?)
Converted value on binding : MUSIC -> M
binding parameter [1] as [VARCHAR] - [M]
binding parameter [2] as [INTEGER] - [0]
binding parameter [3] as [INTEGER] - [null]
binding parameter [4] as [VARCHAR] - [converted title]
binding parameter [5] as [VARCHAR] - [null]
binding parameter [6] as [INTEGER] - [4]
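
Reading the entity back goes through convertToEntityAttribute; a minimal sketch, assuming an available EntityManager, might look like this:

Article persisted = entityManager.find(Article.class, 4);
// the stored code "M" is converted back to the enum constant
assertEquals(Category.MUSIC, persisted.getCategory());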

As we can see, we can simply set our own rules of converting enums to a corresponding database value if we use the AttributeConverter interface. Moreover, we can safely add new enum values or change the existing ones without breaking the already persisted data.

The overall solution is simple to implement and addresses all the drawbacks of the options presented in the earlier sections.

5. Conclusion

In this tutorial, we’ve covered various ways of persisting enum values in a database. We’ve presented options we have when using JPA in version 2.0 and below, as well as a new API available in JPA 2.1 and above.

It’s worth noting that these aren’t the only possibilities to deal with enums in JPA. Some databases, like PostgreSQL, provide a dedicated column type to store enum values. However, such solutions are outside the scope of this article.

As a rule of thumb, we should always use the AttributeConverter interface and @Converter annotation if we’re using JPA 2.1 or later.

As usual, all the code examples are available over on our GitHub repository.

The Difference Between CDI and EJB Singleton


1. Overview

In this tutorial, we’ll take a closer look at two types of singletons available in Java EE. We’ll explain and demonstrate the differences and see the usages suitable for each one.

First, let’s see what singletons are all about before getting into the details.

2. Singleton Design Pattern

Recall that a common way to implement Singleton Pattern is with a static instance and private constructor:

public final class Singleton {
    private static final Singleton instance = new Singleton();

    private Singleton() {}

    public static Singleton getInstance() {
        return instance;
    }
}

But, alas, this isn’t really object-oriented. And while this eagerly-initialized form is thread-safe, lazily-initialized variants of the pattern are prone to multi-threading issues.

CDI and EJB containers give us an object-oriented alternative, though.

3. CDI Singleton

With CDI (Contexts and Dependency Injection), we can easily create singletons using the @Singleton annotation. This annotation is a part of the javax.inject package. It instructs the container to instantiate the singleton once and pass its reference to other objects during injection.

As we can see, singleton implementation with CDI is very simple:

@Singleton
public class CarServiceSingleton {
    // ...
}

Our class simulates a car service shop. We have a lot of instances of various Cars, but they all use the same shop for servicing. Therefore, Singleton is a good fit.

We can verify it is the same instance with a simple JUnit test that asks the context for the class twice. Note that we’ve got a getBean helper method here for readability:

@Test
public void givenASingleton_whenGetBeanIsCalledTwice_thenTheSameInstanceIsReturned() {       
    CarServiceSingleton one = getBean(CarServiceSingleton.class);
    CarServiceSingleton two = getBean(CarServiceSingleton.class);
    assertTrue(one == two);
}

Because of the @Singleton annotation, the container will return the same reference both times. If we try this with a plain managed bean, however, the container will provide a different instance each time.
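
The getBean helper itself isn’t shown in the article; a minimal sketch, assuming a running container and the javax.enterprise.inject.spi.CDI API, might look like this:

private <T> T getBean(Class<T> type) {
    // programmatic lookup from the current CDI container
    return CDI.current().select(type).get();
}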

And while this works the same for either javax.inject.Singleton or javax.ejb.Singleton, there’s a key difference between these two.

4. EJB Singleton

To create an EJB singleton we use the @Singleton annotation from the javax.ejb package. This way we create a Singleton Session Bean.
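
The bean class itself isn’t shown in the article; a minimal sketch, assuming the javax.ejb.Singleton annotation, might look like this:

@Singleton
public class CarServiceEjbSingleton {
    // by default, every public business method is guarded by a container-managed write lock
}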

We can test this implementation the same way we tested the CDI implementation in the previous example, and the result will be the same. EJB singletons, as expected, provide the single instance of the class.

However, EJB Singletons also provide additional functionality in the form of container-managed concurrency control.

When we use this type of implementation, the EJB container ensures that every public method of the class is accessed by a single thread at a time. If multiple threads try to access the same method, only one thread gets to use it while others wait for their turn.

We can verify this behavior with a simple test. We’ll introduce a service queue simulation for our singleton classes:

private static int serviceQueue;

public int service(Car car) {
    serviceQueue++;
    try {
        Thread.sleep(100); // simulate the time the service takes
    } catch (InterruptedException ie) {
        Thread.currentThread().interrupt();
    }
    car.setServiced(true);
    serviceQueue--;
    return serviceQueue;
}

serviceQueue is implemented as a plain static integer that increases when a car “enters” the service and decreases when it “leaves”. If proper locking is provided by the container, this variable should be equal to zero before and after the service, and equal to one during the service.

We can check that behavior with a simple test:

@Test
public void whenEjb_thenLockingIsProvided() throws InterruptedException {
    List<Thread> threads = new ArrayList<>();
    for (int i = 0; i < 10; i++) {
        Thread thread = new Thread(() -> {
            int serviceQueue = carServiceEjbSingleton.service(new Car("Speedster xyz"));
            assertEquals(0, serviceQueue);
        });
        thread.start();
        threads.add(thread);
    }
    // wait for all service threads to finish before the test ends
    for (Thread thread : threads) {
        thread.join();
    }
}

This test starts 10 parallel threads and waits for them to finish. Each thread instantiates a car and tries to service it. After the service, it asserts that the value of the serviceQueue is back to zero.

If we, for instance, execute a similar test on the CDI singleton, our test will fail.

5. Conclusion

In this article, we went through two types of singleton implementations available in Java EE. We saw their advantages and disadvantages and we also demonstrated how and when to use each one.

And, as always, the complete source code is available over on GitHub.

How to Delay Code Execution in Java


1. Introduction

It is relatively common for Java programs to add a delay or pause in their operation. This can be useful for task pacing or to pause execution until another task completes.

This tutorial will describe two ways to implement delays in Java.

2. A Thread-Based Approach

When a Java program runs, it spawns a process that runs on the host machine. This process contains at least one thread – the main thread – in which the program runs. Furthermore, Java supports multithreading, which enables applications to create new threads that run in parallel with, or asynchronously to, the main thread.

2.1. Using Thread.sleep

A quick and dirty way to pause in Java is to tell the current thread to sleep for a specified amount of time. This can be done using Thread.sleep(milliseconds):

try {
    Thread.sleep(secondsToSleep * 1000);
} catch (InterruptedException ie) {
    Thread.currentThread().interrupt();
}

It is good practice to wrap the sleep method in a try/catch block in case another thread interrupts the sleeping thread. In this case, we catch the InterruptedException and explicitly interrupt the current thread, so it can be caught later and handled. This is more important in a multi-threaded program, but still good practice in a single-threaded program in case we add other threads later.

2.2. Using TimeUnit.sleep

For better readability, we can use TimeUnit.XXX.sleep(y), where XXX is the time unit to sleep for (SECONDS, MINUTES, etc.), and y is the number of that unit to sleep for. This uses Thread.sleep behind the scenes. Here’s an example of the TimeUnit syntax:

try {
    TimeUnit.SECONDS.sleep(secondsToSleep);
} catch (InterruptedException ie) {
    Thread.currentThread().interrupt();
}

However, there are some disadvantages to using these thread-based methods:

  • The sleep times are not exactly precise, especially when using smaller time increments like milliseconds and nanoseconds
  • When used inside of loops, sleep will drift slightly between loop iterations due to other code execution so the execution time could get imprecise after many iterations

3. An ExecutorService-Based Approach

Java provides the ScheduledExecutorService interface, which is a more robust and precise solution. This interface can schedule code to run once after a specified delay or at fixed time intervals.

To run a piece of code once after a delay, we can use the schedule method:

ScheduledExecutorService executorService = Executors.newSingleThreadScheduledExecutor();

executorService.schedule(Classname::someTask, delayInSeconds, TimeUnit.SECONDS);

The Classname::someTask part is where we specify the method that will run after the delay:

  • someTask is the name of the method we want to execute
  • Classname is the name of the class that contains the someTask method
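
For example, a hypothetical task that prints a message after a five-second delay might look like this; a lambda works just as well as a method reference:

ScheduledExecutorService executorService = Executors.newSingleThreadScheduledExecutor();

// hypothetical task: print a message after a 5-second delay
executorService.schedule(() -> System.out.println("Delayed task executed"), 5, TimeUnit.SECONDS);

// delayed tasks that are already scheduled still run after shutdown() by default
executorService.shutdown();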

To run a task at fixed time intervals, we can use the scheduleAtFixedRate method:

ScheduledExecutorService executorService = Executors.newSingleThreadScheduledExecutor();

executorService.scheduleAtFixedRate(Classname::someTask, 0, delayInSeconds, TimeUnit.SECONDS);

This will repeatedly call the someTask method, starting successive executions delayInSeconds apart, measured from the start of each run. If we instead want a fixed pause between the end of one execution and the start of the next, scheduleWithFixedDelay is the better fit, as sketched below.
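
A minimal sketch of that fixed-delay variant, reusing the same placeholder names:

// waits delayInSeconds after each run finishes before starting the next one
executorService.scheduleWithFixedDelay(Classname::someTask, 0, delayInSeconds, TimeUnit.SECONDS);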

Besides allowing more timing options, the ScheduledExecutorService method yields more precise time intervals, since it prevents issues with drift.

4. Conclusion

In this article, we discussed two methods for creating delays in Java programs.

The full code for this article can be found over on GitHub. This is a Maven-based project, so it should be easy to import and run as it is.
