Channel: Baeldung

Array to String Conversions


1. Overview

In this short tutorial, we’re going to look at converting an array of strings or integers to a string and back again.

We can achieve this with vanilla Java and Java utility classes from commonly used libraries.

2. Convert Array to String

Sometimes we need to convert an array of strings or integers into a string, but unfortunately, there is no direct method to perform this conversion.

The default implementation of the toString() method on an array returns something like [Ljava.lang.String;@74a10858, which only informs us of the object’s type and hash code.

However, the java.util.Arrays utility class supports array and string manipulation, including a toString() method for arrays.

Arrays.toString() returns a string with the content of the input array. The new string created is a comma-delimited list of the array’s elements, surrounded with square brackets (“[]”):

String[] strArray = { "one", "two", "three" };
String joinedString = Arrays.toString(strArray);
assertEquals("[one, two, three]", joinedString);
int[] intArray = { 1, 2, 3, 4, 5 };
joinedString = Arrays.toString(intArray);
assertEquals("[1, 2, 3, 4, 5]", joinedString);

And, while it’s great that the Arrays.toString(int[]) method buttons up this task for us so nicely, let’s compare it to different methods that we can implement on our own.

2.1. StringBuilder.append()

To start, let’s look at how to do this conversion with StringBuilder.append():

String[] strArray = { "Convert", "Array", "With", "Java" };
StringBuilder stringBuilder = new StringBuilder();
for (int i = 0; i < strArray.length; i++) {
    stringBuilder.append(strArray[i]);
}
String joinedString = stringBuilder.toString();
assertEquals("ConvertArrayWithJava", joinedString);

Additionally, to convert an array of integers, we can use the same approach; since StringBuilder overloads append() for int, we can append intArray[i] directly, or box it first with Integer.valueOf(intArray[i]).
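To make that concrete, here's a minimal sketch of the integer variant (class and method names are ours, not from the original example):

```java
public class IntArrayJoiner {

    // joins an int array into a single string, mirroring the StringBuilder approach above
    static String join(int[] intArray) {
        StringBuilder stringBuilder = new StringBuilder();
        for (int i = 0; i < intArray.length; i++) {
            // StringBuilder.append(int) handles the conversion for us,
            // though Integer.valueOf(intArray[i]) would work equally well
            stringBuilder.append(intArray[i]);
        }
        return stringBuilder.toString();
    }

    public static void main(String[] args) {
        System.out.println(join(new int[] { 1, 2, 3, 4, 5 })); // prints 12345
    }
}
```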

2.2. Java Streams API

Java 8 and above offer the String.join() method, which produces a new string by joining elements with the specified delimiter – in our case, just the empty string:

String joinedString = String.join("", new String[]{ "Convert", "With", "Java", "Streams" });
assertEquals("ConvertWithJavaStreams", joinedString);

Additionally, we can use the Collectors.joining() method from the Java Streams API that joins strings from the Stream in the same order as its source array:

String joinedString = Arrays
    .stream(new String[]{ "Convert", "With", "Java", "Streams" })
    .collect(Collectors.joining());
assertEquals("ConvertWithJavaStreams", joinedString);

2.3. StringUtils.join()

And Apache Commons Lang is never to be left out of tasks like these.

The StringUtils class has several StringUtils.join() methods that can be used to change an array of strings into a single string:

String joinedString = StringUtils.join(new String[]{ "Convert", "With", "Apache", "Commons" });
assertEquals("ConvertWithApacheCommons", joinedString);

2.4. Joiner.join()

And not to be outdone, Guava accommodates the same with its Joiner class. The Joiner class offers a fluent API and provides a handful of helper methods to join data.

For example, we can add a delimiter or skip null values:

String joinedString = Joiner.on("")
        .skipNulls()
        .join(new String[]{ "Convert", "With", "Guava", null });
assertEquals("ConvertWithGuava", joinedString);

3. Convert String to Array of Strings

Similarly, we sometimes need to split a string into an array that contains some subset of the input string, split by the specified delimiter. Let’s see how we can do this, too.

3.1. String.split()

Firstly, let’s split a string into its individual characters by calling String.split() with an empty string as the delimiter:

String[] strArray = "loremipsum".split("");

Which produces:

["l", "o", "r", "e", "m", "i", "p", "s", "u", "m"]
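String.split() also accepts a regular-expression delimiter. A quick sketch (class and method names are ours) splitting on one or more whitespace characters:

```java
import java.util.Arrays;

public class SplitExample {

    // splits the input on runs of whitespace using a regex delimiter
    static String[] splitWords(String input) {
        return input.split("\\s+");
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(splitWords("lorem ipsum dolor sit amet")));
        // prints [lorem, ipsum, dolor, sit, amet]
    }
}
```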

3.2. StringUtils.split()

Secondly, let’s look again at the StringUtils class from Apache’s Commons Lang library.

Among many null-safe methods on string objects, we can find StringUtils.split(). By default, it assumes a whitespace delimiter:

String[] splitted = StringUtils.split("lorem ipsum dolor sit amet");

Which results in:

["lorem", "ipsum", "dolor", "sit", "amet"]

But, we can also provide a delimiter if we want.

3.3. Splitter.split()

Finally, we can also use Guava with its Splitter fluent API:

List<String> resultList = Splitter.on(' ')
    .trimResults()
    .omitEmptyStrings()
    .splitToList("lorem ipsum dolor sit amet");   
String[] strArray = resultList.toArray(new String[0]);

Which generates:

["lorem", "ipsum", "dolor", "sit", "amet"]

4. Conclusion

In this article, we illustrated how to convert an array to string and back again using core Java and popular utility libraries.

Of course, the implementation of all these examples and code snippets can be found over on GitHub.


Spring Data with Reactive Cassandra


1. Introduction

In this tutorial, we’ll learn how to use reactive data access features of Spring Data Cassandra.

Particularly, this is the third article of the Spring Data Cassandra article series. In this one, we’ll expose a Cassandra database using a REST API.

We can read more about Spring Data Cassandra in the first and second articles of the series.

2. Maven Dependencies

As a matter of fact, Spring Data Cassandra supports Project Reactor and RxJava reactive types. To demonstrate, we’ll use Project Reactor’s reactive types Flux and Mono in this tutorial.

To start with, let’s add the dependencies needed for our tutorial:

<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-cassandra</artifactId>
    <version>2.1.2.RELEASE</version>
</dependency>
<dependency>
    <groupId>io.projectreactor</groupId>
    <artifactId>reactor-core</artifactId>
</dependency>

The latest version of the spring-data-cassandra can be found here.

Now, we’re going to expose SELECT operations from the database via a REST API. So, let’s add the dependency for RestController, too:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>

3. Implementing our App

Since we will be persisting data, let’s first define our entity object:

@Table
public class Employee {
    @PrimaryKey
    private int id;
    private String name;
    private String address;
    private String email;
    private int age;
}

Next, it’s time to create an EmployeeRepository that extends ReactiveCassandraRepository. It’s important to note that this interface enables the support for reactive types:

public interface EmployeeRepository extends ReactiveCassandraRepository<Employee, Integer> {
    @AllowFiltering
    Flux<Employee> findByAgeGreaterThan(int age);
}

3.1. Rest Controller for CRUD Operations

For the purpose of illustration, we’ll expose some basic SELECT operations using a simple Rest Controller:

@RestController
@RequestMapping("employee")
public class EmployeeController {

    @Autowired
    EmployeeService employeeService;

    @PostConstruct
    public void saveEmployees() {
        List<Employee> employees = new ArrayList<>();
        employees.add(new Employee(123, "John Doe", "Delaware", "jdoe@xyz.com", 31));
        employees.add(new Employee(324, "Adam Smith", "North Carolina", "asmith@xyz.com", 43));
        employees.add(new Employee(355, "Kevin Dunner", "Virginia", "kdunner@xyz.com", 24));
        employees.add(new Employee(643, "Mike Lauren", "New York", "mlauren@xyz.com", 41));
        employeeService.initializeEmployees(employees);
    }

    @GetMapping("/list")
    public Flux<Employee> getAllEmployees() {
        Flux<Employee> employees = employeeService.getAllEmployees();
        return employees;
    }

    @GetMapping("/{id}")
    public Mono<Employee> getEmployeeById(@PathVariable int id) {
        return employeeService.getEmployeeById(id);
    }

    @GetMapping("/filterByAge/{age}")
    public Flux<Employee> getEmployeesFilterByAge(@PathVariable int age) {
        return employeeService.getEmployeesFilterByAge(age);
    }
}

Finally, let’s add a simple EmployeeService:

@Service
public class EmployeeService {

    @Autowired
    EmployeeRepository employeeRepository;

    public void initializeEmployees(List<Employee> employees) {
        Flux<Employee> savedEmployees = employeeRepository.saveAll(employees);
        savedEmployees.subscribe();
    }

    public Flux<Employee> getAllEmployees() {
        Flux<Employee> employees =  employeeRepository.findAll();
        return employees;
    }

    public Flux<Employee> getEmployeesFilterByAge(int age) {
        return employeeRepository.findByAgeGreaterThan(age);
    }

    public Mono<Employee> getEmployeeById(int id) {
        return employeeRepository.findById(id);
    }
}

3.2. Database Configuration

Then, let’s specify the keyspace and port to use for connecting with Cassandra in application.properties:

spring.data.cassandra.keyspace-name=practice
spring.data.cassandra.port=9042

4. Testing the Endpoints

Finally, it’s time to test our API endpoints.

4.1. Manual Testing

To begin with, let’s fetch the employee records from the database:

curl localhost:8080/employee/list

As a result, we get all the employees:

[
    {
        "id": 324,
        "name": "Adam Smith",
        "address": "North Carolina",
        "email": "asmith@xyz.com",
        "age": 43
    },
    {
        "id": 123,
        "name": "John Doe",
        "address": "Delaware",
        "email": "jdoe@xyz.com",
        "age": 31
    },
    {
        "id": 355,
        "name": "Kevin Dunner",
        "address": "Virginia",
        "email": "kdunner@xyz.com",
        "age": 24
    },
    {
        "id": 643,
        "name": "Mike Lauren",
        "address": "New York",
        "email": "mlauren@xyz.com",
        "age": 41
    }
]

Moving on, let’s try to find a specific employee by his id:

curl localhost:8080/employee/643

As a result, we get Mr. Mike Lauren back:

{
    "id": 643,
    "name": "Mike Lauren",
    "address": "New York",
    "email": "mlauren@xyz.com",
    "age": 41
}

Finally, let’s see if our age filter works:

curl localhost:8080/employee/filterByAge/35

And as expected, we get all the employees whose age is greater than 35:

[
    {
        "id": 324,
        "name": "Adam Smith",
        "address": "North Carolina",
        "email": "asmith@xyz.com",
        "age": 43
    },
    {
        "id": 643,
        "name": "Mike Lauren",
        "address": "New York",
        "email": "mlauren@xyz.com",
        "age": 41
    }
]

4.2. Integration Testing

Additionally, let’s test the same functionality by writing a test case:

@RunWith(SpringRunner.class)
@SpringBootTest
public class ReactiveEmployeeRepositoryIntegrationTest {

    @Autowired
    EmployeeRepository repository;

    @Before
    public void setUp() {
        Flux<Employee> deleteAndInsert = repository.deleteAll()
          .thenMany(repository.saveAll(Flux.just(
            new Employee(111, "John Doe", "Delaware", "jdoe@xyz.com", 31),
            new Employee(222, "Adam Smith", "North Carolina", "asmith@xyz.com", 43),
            new Employee(333, "Kevin Dunner", "Virginia", "kdunner@xyz.com", 24),
            new Employee(444, "Mike Lauren", "New York", "mlauren@xyz.com", 41))));

        StepVerifier
          .create(deleteAndInsert)
          .expectNextCount(4)
          .verifyComplete();
    }

    @Test
    public void givenRecordsAreInserted_whenDbIsQueried_thenShouldIncludeNewRecords() {
        Mono<Long> saveAndCount = repository.count()
          .doOnNext(System.out::println)
          .thenMany(repository
            .saveAll(Flux.just(
            new Employee(325, "Kim Jones", "Florida", "kjones@xyz.com", 42),
            new Employee(654, "Tom Moody", "New Hampshire", "tmoody@xyz.com", 44))))
          .last()
          .flatMap(v -> repository.count())
          .doOnNext(System.out::println);

        StepVerifier
          .create(saveAndCount)
          .expectNext(6L)
          .verifyComplete();
    }

    @Test
    public void givenAgeForFilter_whenDbIsQueried_thenShouldReturnFilteredRecords() {
        StepVerifier
          .create(repository.findByAgeGreaterThan(35))
          .expectNextCount(2)
          .verifyComplete();
    }
}

5. Conclusion

In summary, we learned how to use reactive types using Spring Data Cassandra to build a non-blocking application.

As always, check out the source code for this tutorial over on GitHub.

Java Weekly, Issue 257


Here we go…

1. Spring and Java

>> Practice Mock Interviews & Coding Problems with Pramp 

If you’re looking to improve your interview game, definitely have a look at the Pramp mock interviews on Data Structures and Algorithms, System Design, etc. Get unlimited tries.

>> Hibernate Tips: How to Exclude Deactivated Elements from an Association [thoughts-on-java.org]

A brief example of applying Hibernate’s @Where annotation to a one-to-many JPA mapping.

>> Java 12 Raw String Literals [vojtechruzicka.com]

A look ahead at an upcoming and long overdue feature: multi-line String literals. Very cool.

>> Java optional parameters [dolszewski.com]

An interesting write-up explores several patterns and anti-patterns for handling optional method and constructor parameters.

>> Introducing Swagger Brake [blog.arnoldgalovics.com]

And a quick overview of a tool you can use to identify API breaking changes between two versions of a Swagger specification.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

>> Designing Headers for HTTP Compression [mnot.net]

A good write-up on how HPACK compression works and how to optimize HTTP/2 headers accordingly.

>> Sentiment Analysis: What’s with the Tone? [infoq.com]

A fascinating read comparing two approaches that can help identify the emotion behind a text.

Also worth reading:

3. Comics

And my favorite Dilberts of the week:

>> Being More Nimble [dilbert.com]

>> Blockchain vs Databases [dilbert.com]

>> AI Can Control Minds [dilbert.com]

4. Pick of the Week

>> 5 Principles for Making Better Life Decisions [markmanson.net]

Ways to Iterate Over a List in Java


1. Introduction

Iterating over the elements of a list is one of the most common tasks in a program.

In this tutorial, we’re going to review different ways to do this in Java. We’ll be focusing on iterating through the list in order, though going in reverse is simple, too.

2. for Loop

Firstly, let’s review some for loop options.

Let’s begin by defining a list of countries for our examples:

List<String> countries = Arrays.asList("Germany", "Panama", "Australia");

2.1. Basic for Loop

The most common flow control statement for iteration is the basic for loop.

The for loop defines three types of statements separated with semicolons. The first statement is the initialization statement. The second one defines the termination condition. The last statement is the update clause.

Here, we’re simply using an integer variable as an index:

for (int i = 0; i < countries.size(); i++) {
    System.out.println(countries.get(i));
}

In the initialization, we must declare an integer variable to specify the starting point. This variable typically acts as the list index.

The termination condition is an expression that evaluates to a boolean; once this expression evaluates to false, the loop finishes.

The update clause is used to modify the current state of the index variable, increasing it or decreasing it until the point of termination.

2.2. Enhanced for loop

The enhanced for loop is a simple structure that allows us to visit every element of a list. It is similar to the basic for loop but more readable and compact. Consequently, it is one of the most commonly used forms to traverse a list.

Notice that the enhanced for loop is simpler than the basic for loop:

for (String country : countries) {
    System.out.println(country); 
}

3. Iterators

An Iterator is a design pattern that offers us a standard interface to traverse a data structure without having to worry about the internal representation.

This way of traversing data structures offers many advantages, among which we can emphasize that our code does not depend on the implementation.

Therefore, the structure can be a binary tree or a doubly linked list since the Iterator abstracts us from the way of performing the traversal. In this way, we can easily replace data structures in our code without unpleasant problems.

3.1. Iterator

In Java, the Iterator pattern is reflected in the java.util.Iterator interface. It is widely used in Java Collections. There are two key methods in an Iterator, the hasNext() and next() methods.

Here, we demonstrate the usage of both:

Iterator<String> countriesIterator = countries.iterator();

while(countriesIterator.hasNext()) {
    System.out.println(countriesIterator.next()); 
}

The hasNext() method checks if there are any elements remaining in the list.

The next() method returns the next element in the iteration.

3.2. ListIterator

A ListIterator allows us to traverse a list of elements in either forward or backward order.

Traversing a list forward with a ListIterator follows a mechanism similar to the one used by Iterator: we move the iterator forward with the next() method, and we find the end of the list using the hasNext() method.

As we can see, the ListIterator looks very similar to the Iterator that we used previously:

ListIterator<String> listIterator = countries.listIterator();

while(listIterator.hasNext()) {
    System.out.println(listIterator.next());
}
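The backward direction isn’t shown above; a minimal sketch (class and method names are ours) that starts a ListIterator past the last element and steps back with hasPrevious() and previous():

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.ListIterator;

public class ReverseIterationExample {

    // collects the elements of the input list in reverse order
    static List<String> reversed(List<String> input) {
        List<String> result = new ArrayList<>();
        // listIterator(size) positions the cursor just past the last element
        ListIterator<String> listIterator = input.listIterator(input.size());
        while (listIterator.hasPrevious()) {
            result.add(listIterator.previous());
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> countries = Arrays.asList("Germany", "Panama", "Australia");
        reversed(countries).forEach(System.out::println);
        // prints Australia, Panama, Germany
    }
}
```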

4. forEach()

4.1. Iterable.forEach()

Since Java 8, we can use the forEach() method to iterate over the elements of a list. This method is defined in the Iterable interface and can accept lambda expressions as a parameter.

The syntax is pretty simple:

countries.forEach(System.out::println);

Before the forEach function, all iterators in Java were active; that is, they involved a for or a while loop that traversed the data collection until a certain condition was met.

With the introduction of forEach as a function in the Iterable interface, all classes that implement Iterable have the forEach function added.

4.2. Stream.forEach()

We can also convert a collection of values to a Stream, gaining access to operations such as forEach(), map(), and filter().

Here, we demonstrate a typical usage for streams:

countries.stream().forEach((c) -> System.out.println(c));
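Since map() and filter() were mentioned but not shown, here is a hedged sketch combining them (the class and method names, and the length-based filter, are ours):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StreamOpsExample {

    // keeps only names longer than six characters, then upper-cases them
    static List<String> longNamesUpperCased(List<String> countries) {
        return countries.stream()
          .filter(c -> c.length() > 6)
          .map(String::toUpperCase)
          .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> countries = Arrays.asList("Germany", "Panama", "Australia");
        longNamesUpperCased(countries).forEach(System.out::println);
        // prints GERMANY, then AUSTRALIA ("Panama" has only six characters)
    }
}
```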

5. Conclusion

In this article, we showed the different ways to iterate over the elements of a list using the Java API. Among these, we mentioned the for loop, the enhanced for loop, the Iterator, the ListIterator and the forEach() method (included in Java 8).

In addition, we also showed how to use the forEach() method with Streams.

Finally, all the code used in this article is available in our GitHub repo.

Auto-Generated Field for MongoDB using Spring Boot


1. Overview

In this tutorial, we’re going to learn how to implement a sequential, auto-generated field for MongoDB in Spring Boot.

When we’re using MongoDB as the database for a Spring Boot application, we can’t use the @GeneratedValue annotation in our models, as it’s not available. Hence, we need a method to produce the same effect as we’d have if we were using JPA and an SQL database.

The general solution to this problem is simple. We’ll create a collection (table) that’ll store the generated sequence for other collections. During the creation of a new record, we’ll use it to fetch the next value.
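Stripped of Spring and MongoDB, the pattern boils down to an atomic read-and-increment per sequence name. A hypothetical in-memory stand-in (class name ours, not the actual implementation) to illustrate the semantics:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class InMemorySequences {

    // one counter per sequence name, mimicking the database_sequences collection
    private final Map<String, AtomicLong> sequences = new ConcurrentHashMap<>();

    // atomically returns the next value for the given sequence, starting at 1;
    // MongoDB achieves the same effect with findAndModify plus an upsert
    public long generateSequence(String seqName) {
        return sequences.computeIfAbsent(seqName, k -> new AtomicLong()).incrementAndGet();
    }

    public static void main(String[] args) {
        InMemorySequences gen = new InMemorySequences();
        System.out.println(gen.generateSequence("users_sequence")); // 1
        System.out.println(gen.generateSequence("users_sequence")); // 2
    }
}
```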

2. Dependencies

Let’s add the following spring-boot starters to our pom.xml:

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
        <version>2.1.0.RELEASE</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-mongodb</artifactId>
        <version>2.1.0.RELEASE</version>
    </dependency>
</dependencies>

The latest version for the dependencies is managed by spring-boot-starter-parent.

3. Collections

As discussed in the overview, we’ll create a collection that’ll store the auto-incremented sequence for other collections. We’ll call this collection database_sequences. It can be created using either the mongo shell or MongoDB Compass. Let’s create a corresponding model class:

@Document(collection = "database_sequences")
public class DatabaseSequence {

    @Id
    private String id;

    private long seq;

    //getters and setters omitted
}

Let’s then create a users collection, and a corresponding model object, that’ll store the details of people that are using our system:

@Document(collection = "users")
public class User {

    @Transient
    public static final String SEQUENCE_NAME = "users_sequence";

    @Id
    private long id;

    private String email;

    //getters and setters omitted
}

In the User model created above, we added a static field SEQUENCE_NAME, which is a unique reference to the auto-incremented sequence for the users collection.

We also annotate it with @Transient to prevent it from being persisted alongside the other properties of the model.

4. Creating a New Record

So far, we’ve created the required collections and models. Now, we’ll create a service that’ll generate the auto-incremented value that can be used as id for our entities.

Let’s create a SequenceGeneratorService that has generateSequence():

public long generateSequence(String seqName) {
    DatabaseSequence counter = mongoOperations.findAndModify(query(where("_id").is(seqName)),
      new Update().inc("seq",1), options().returnNew(true).upsert(true),
      DatabaseSequence.class);
    return !Objects.isNull(counter) ? counter.getSeq() : 1;
}

Now, we can use the generateSequence() while creating a new record:

User user = new User();
user.setId(sequenceGenerator.generateSequence(User.SEQUENCE_NAME));
user.setEmail("john.doe@example.com");
userRepository.save(user);

To list all the users, we’ll use the UserRepository:

List<User> storedUsers = userRepository.findAll();
storedUsers.forEach(System.out::println);

As it is now, we have to set the id field every time we create a new instance of our model. We can circumvent this process by creating a listener for Spring Data MongoDB lifecycle events.

To do that, we’ll create a UserModelListener that extends AbstractMongoEventListener<User> and then we’ll override the onBeforeConvert():

@Override
public void onBeforeConvert(BeforeConvertEvent<User> event) {
    event.getSource().setId(sequenceGenerator.generateSequence(User.SEQUENCE_NAME));
}

Now, every time we save a new User, the id will be set automatically.

5. Conclusion

In conclusion, we’ve seen how to generate sequential, auto-incremented values for the id field and simulate the same behavior as seen in SQL databases.

Hibernate uses a similar method for generating auto-incremented values by default.

As usual, the complete source code is available over on GitHub.

Kotlin Contracts


1. Overview

In this tutorial, we’ll talk about Kotlin Contracts. Their syntax is not stable yet, but the binary implementation is, and the Kotlin stdlib itself is already putting them to use.

Basically, Kotlin contracts are a way to inform the compiler about the behavior of a function.

2. Maven Setup

This feature is introduced in Kotlin 1.3, so we need to use this version or a newer one. For this tutorial, we’ll use the latest version available – 1.3.10.

Please refer to our introduction to Kotlin for more details about setting that up.

3. Motivation for Contracts

As smart as the compiler is, it doesn’t always come to the best conclusion.

Consider the example below:

data class Request(val arg: String)

class Service {

    fun process(request: Request?) {
        validate(request)
        println(request.arg) // Doesn't compile because request might be null
    }
}

private fun validate(request: Request?) {
    if (request == null) {
        throw IllegalArgumentException("Undefined request")
    }
    if (request.arg.isBlank()) {
        throw IllegalArgumentException("No argument is provided")
    }
}

Any programmer can read this code and know that request is not null if a call to validate doesn’t throw an exception. In other words, it’s impossible for our println instruction to throw a NullPointerException.

Unfortunately, the compiler is unaware of that and doesn’t allow us to reference request.arg.

However, we can enhance validate with a contract which defines that if the function returns successfully – that is, it doesn’t throw an exception – then the given argument is not null:

@ExperimentalContracts
class Service {

    fun process(request: Request?) {
        validate(request)
        println(request.arg) // Compiles fine now
    }
}

@ExperimentalContracts
private fun validate(request: Request?) {
    contract {
        returns() implies (request != null)
    }
    if (request == null) {
        throw IllegalArgumentException("Undefined request")
    }
    if (request.arg.isBlank()) {
        throw IllegalArgumentException("No argument is provided")
    }
}

Next, let’s have a look at this feature in more detail.

4. The Contracts API

The general contract form is:

function {
    contract {
        Effect
    }
}

We can read this as “invoking the function produces the Effect”.

In the following sections, let’s have a look at the types of effects that the language supports now.

4.1. Making Guarantees Based on the Return Value

Here we specify that if the target function returns, the target condition is satisfied. We used this in the Motivation section.

We can also specify a value in the returns – that would instruct Kotlin compiler that the condition is fulfilled only if the target value is returned:

data class MyEvent(val message: String)

@ExperimentalContracts
fun processEvent(event: Any?) {
    if (isInterested(event)) {
        println(event.message) 
    }
}

@ExperimentalContracts
fun isInterested(event: Any?): Boolean {
    contract { 
        returns(true) implies (event is MyEvent)
    }
    return event is MyEvent
}

This helps the compiler make a smart cast in the processEvent function.

Note that, for now, returns contracts allow only true, false, and null on the right-hand side of implies.

And even though implies takes a Boolean argument, only a subset of valid Kotlin expressions is accepted: namely, null-checks (== null, != null), instance-checks (is, !is), and logic operators (&&, ||, !).

There is also a variation which targets any non-null returned value:

contract {
    returnsNotNull() implies (event is MyEvent)
}

4.2. Making Guarantees About a Function’s Usage

The callsInPlace contract expresses the following guarantees:

  • the callable won’t be invoked after the owner-function is finished
  • it also won’t be passed to another function without the contract

This helps us in situations like below:

inline fun <R> myRun(block: () -> R): R {
    return block()
}

fun callsInPlace() {
    val i: Int
    myRun {
        i = 1 // Is forbidden due to possible re-assignment
    }
    println(i) // Is forbidden because the variable might be uninitialized
}

We can fix the errors by helping the compiler to ensure that the given block is guaranteed to be called and called only once:

@ExperimentalContracts
inline fun <R> myRun(block: () -> R): R {
    contract {
        callsInPlace(block, InvocationKind.EXACTLY_ONCE)
    }
    return block()
}

Standard Kotlin utility functions run, with, apply, etc already define such contracts.

Here we used InvocationKind.EXACTLY_ONCE. Other options are AT_LEAST_ONCE, AT_MOST_ONCE, and UNKNOWN.

5. Limitations of Contracts

While Kotlin contracts look promising, the current syntax is still unstable, and it may change completely in the future.

Also, they have a few limitations:

  • We can only apply contracts on top-level functions with a body, i.e. we can’t use them on fields and class functions.
  • The contract call must be the first statement in the function body.
  • The compiler trusts contracts unconditionally; this means the programmer is responsible for writing correct and sound contracts. A future version may implement verification.

And finally, contract descriptions only allow references to parameters. For example, the code below doesn’t compile:

data class Request(val arg: String?)

@ExperimentalContracts
private fun validate(request: Request?) {
    contract {
        // We can't reference request.arg here
        returns() implies (request != null && request.arg != null)
    }
    if (request == null) {
        throw IllegalArgumentException("Undefined request")
    }
    if (request.arg.isBlank()) {
        throw IllegalArgumentException("No argument is provided")
    }
}

6. Conclusion

The feature looks rather interesting and even though its syntax is in the prototype stage, the binary representation is stable enough and is a part of stdlib already. It won’t change without a graceful migration cycle, and that means that we can depend on binary artifacts with contracts (e.g. stdlib) to have all the usual compatibility guarantees.

That’s why our recommendation is that it’s worth using contracts even now – it wouldn’t be too hard changing contract declarations if and when their DSL changes.

As usual, the source code used in this article is available over on GitHub.

Operator Overloading in Kotlin


1. Overview

In this tutorial, we’re going to talk about the conventions that Kotlin provides to support operator overloading.

2. The operator Keyword

In Java, operators are tied to specific Java types. For example, String and numeric types in Java can use the + operator for concatenation and addition, respectively. No other Java type can reuse this operator for its own benefit. Kotlin, on the contrary, provides a set of conventions to support limited Operator Overloading.

Let’s start with a simple data class:

data class Point(val x: Int, val y: Int)

We’re going to enhance this data class with a few operators.

In order to turn a Kotlin function with a pre-defined name into an operator, we should mark the function with the operator modifier.  For example, we can overload the “+” operator:

operator fun Point.plus(other: Point) = Point(x + other.x, y + other.y)

This way we can add two Points with “+”:

>> val p1 = Point(0, 1)
>> val p2 = Point(1, 2)
>> println(p1 + p2)
Point(x=1, y=3)

3. Overloading for Unary Operations

Unary operations are those that work on just one operand. For example, -a, a++ or !a are unary operations. Generally, functions that are going to overload unary operators take no parameters.

3.1. Unary Plus

How about constructing a Shape of some kind with a few Points:

val s = shape { 
    +Point(0, 0)
    +Point(1, 1)
    +Point(2, 2)
    +Point(3, 4)
}

In Kotlin, that’s perfectly possible with the unaryPlus operator function.

Since a Shape is just a collection of Points, then we can write a class, wrapping a few Points with the ability to add more:

class Shape {
    private val points = mutableListOf<Point>()

    operator fun Point.unaryPlus() {
        points.add(this)
    }
}

And note that what gave us the shape {…} syntax was to use a Lambda with Receivers:

fun shape(init: Shape.() -> Unit): Shape {
    val shape = Shape()
    shape.init()

    return shape
}

3.2. Unary Minus

Suppose we have a Point named “p” and we’re going to negate its coordinates using something like “-p”. Then, all we have to do is define an operator function named unaryMinus on Point:

operator fun Point.unaryMinus() = Point(-x, -y)

Then, every time we add a “-“ prefix before an instance of Point, the compiler translates it to a unaryMinus function call:

>> val p = Point(4, 2)
>> println(-p)
Point(x=-4, y=-2)

3.3. Increment

We can increment each coordinate by one just by implementing an operator function named inc:

operator fun Point.inc() = Point(x + 1, y + 1)

The postfix “++” operator first returns the current value and then increases the value by one:

>> var p = Point(4, 2)
>> println(p++)
>> println(p)
Point(x=4, y=2)
Point(x=5, y=3)

In contrast, the prefix “++” operator first increases the value and then returns the newly incremented value:

>> println(++p)
Point(x=6, y=4)

Also, since the “++” operator re-assigns the variable it’s applied to, we can’t use it with vals.

3.4. Decrement

Quite similar to increment, we can decrement each coordinate by implementing the dec operator function:

operator fun Point.dec() = Point(x - 1, y - 1)

dec also supports the familiar semantics for pre- and post-decrement operators as for regular numeric types:

>> var p = Point(4, 2)
>> println(p--)
>> println(p)
>> println(--p)
Point(x=4, y=2)
Point(x=3, y=1)
Point(x=2, y=0)

Also, like “++”, we can’t use “--” with vals.

3.5. Not

How about flipping the coordinates just by !p? We can do this with not:

operator fun Point.not() = Point(y, x)

Simply put, the compiler translates any “!p” to a function call to the “not” unary operator function:

>> val p = Point(4, 2)
>> println(!p)
Point(x=2, y=4)

4. Overloading for Binary Operations

Binary operators, as their name suggests, are those that work on two operands. So, functions overloading binary operators should accept at least one argument.

Let’s start with the arithmetic operators.

4.1. Plus Arithmetic Operator

As we saw earlier, we can overload basic mathematic operators in Kotlin. We can use “+”  to add two Points together:

operator fun Point.plus(other: Point): Point = Point(x + other.x, y + other.y)

Then we can write:

>> val p1 = Point(1, 2)
>> val p2 = Point(2, 3)
>> println(p1 + p2)
Point(x=3, y=5)

Since plus is a binary operator function, we should declare a parameter for the function.

Now, most of us have experienced the inelegance of adding together two BigIntegers:

BigInteger zero = BigInteger.ZERO;
BigInteger one = BigInteger.ONE;
one = one.add(zero);

As it turns out, there is a better way to add two BigIntegers in Kotlin:

>> val one = BigInteger.ONE
>> println(one + one)

This works because the Kotlin standard library itself adds its fair share of extension operators on built-in types like BigInteger.

4.2. Other Arithmetic Operators

Similar to plus, the subtraction, multiplication, division, and remainder operators work the same way:

operator fun Point.minus(other: Point): Point = Point(x - other.x, y - other.y)
operator fun Point.times(other: Point): Point = Point(x * other.x, y * other.y)
operator fun Point.div(other: Point): Point = Point(x / other.x, y / other.y)
operator fun Point.rem(other: Point): Point = Point(x % other.x, y % other.y)

Then, the Kotlin compiler translates any call to “-”, “*”, “/”, or “%” to “minus”, “times”, “div”, or “rem”, respectively:

>> val p1 = Point(2, 4)
>> val p2 = Point(1, 4)
>> println(p1 - p2)
>> println(p1 * p2)
>> println(p1 / p2)
Point(x=1, y=0)
Point(x=2, y=16)
Point(x=2, y=1)

Or, how about scaling a Point by a numeric factor:

operator fun Point.times(factor: Int): Point = Point(x * factor, y * factor)

This way we can write something like “p1 * 2”:

>> val p1 = Point(1, 2)
>> println(p1 * 2)
Point(x=2, y=4)

As we can spot from the preceding example, there is no obligation for two operands to be of the same type. The same is true for return types.

4.3. Commutativity

Overloaded operators are not always commutative. That is, we can’t simply swap the operands and expect things to keep working.

For example, we can scale a Point by an integral factor by multiplying it by an Int, say “p1 * 2”, but not the other way around.

The good news is, we can define operator functions on Kotlin or Java built-in types. In order to make the “2 * p1” work, we can define an operator on Int:

operator fun Int.times(point: Point): Point = Point(point.x * this, point.y * this)

Now we can happily use “2 * p1” as well:

>> val p1 = Point(1, 2)
>> println(2 * p1)
Point(x=2, y=4)

4.4. Compound Assignments

Now that we can add two BigIntegers with the “+” operator, we may be able to use the compound assignment for “+” which is “+=”. Let’s try this idea:

var one = BigInteger.ONE
one += one

By default, when we implement one of the arithmetic operators, say “plus”, Kotlin not only supports the familiar “+” operator, it also does the same thing for the corresponding compound assignment, which is “+=”.

This means, without any more work, we can also do:

var point = Point(0, 0)
point += Point(2, 2)
point -= Point(1, 1)
point *= Point(2, 2)
point /= Point(1, 1)
point %= Point(2, 2)
point *= 2

But sometimes this default behavior is not what we’re looking for. Suppose we’re going to use “+=” to add an element to a MutableCollection. 

For these scenarios, we can be explicit about it by implementing an operator function named plusAssign:

operator fun <T> MutableCollection<T>.plusAssign(element: T) {
    add(element)
}

For each arithmetic operator, there is a corresponding compound assignment operator which all have the “Assign” suffix. That is, there are plusAssign, minusAssign, timesAssign, divAssign, and remAssign:

>> val colors = mutableListOf("red", "blue")
>> colors += "green"
>> println(colors)
[red, blue, green]

All compound assignment operator functions must return Unit.

4.5. Equals Convention

If we override the equals method, then we can use the “==” and “!=” operators, too:

class Money(val amount: BigDecimal, val currency: Currency) : Comparable<Money> {

    // omitted
    
    override fun equals(other: Any?): Boolean {
        if (this === other) return true
        if (other !is Money) return false

        if (amount != other.amount) return false
        if (currency != other.currency) return false

        return true
    }

    // An equals compatible hashcode implementation
}

Kotlin translates any use of the “==” and “!=” operators into an equals function call; to make “!=” work, the result of that call is inverted. Note that in this case, we don’t need the operator keyword.

4.6. Comparison Operators

It’s time to bash on BigInteger again!

Suppose we’re going to run some logic conditionally if one BigInteger is greater than the other. In Java, the solution is not all that clean:

if (BigInteger.ONE.compareTo(BigInteger.ZERO) > 0) {
    // some logic
}

When using the very same BigInteger in Kotlin, we can magically write this:

if (BigInteger.ONE > BigInteger.ZERO) {
    // the same logic
}

This magic is possible because Kotlin has a special treatment of Java’s Comparable.

Simply put, we can call the compareTo method in the Comparable interface by a few Kotlin conventions. In fact, any comparisons made by “<“, “<=”, “>”, or “>=”  would be translated to a compareTo function call.

In order to use comparison operators on a Kotlin type, we need to implement its Comparable interface:

class Money(val amount: BigDecimal, val currency: Currency) : Comparable<Money> {

    override fun compareTo(other: Money): Int =
      convert(Currency.DOLLARS).compareTo(other.convert(Currency.DOLLARS))

    fun convert(currency: Currency): BigDecimal = // omitted
}

Then we can compare monetary values as simply as:

val oneDollar = Money(BigDecimal.ONE, Currency.DOLLARS)
val tenDollars = Money(BigDecimal.TEN, Currency.DOLLARS)
if (oneDollar < tenDollars) {
    // omitted
}

Since the compareTo function in the Comparable interface is already marked with the operator modifier, we don’t need to add it ourselves.

4.7. In Convention

In order to check if an element belongs to a Page, we can use the “in” convention:

operator fun <T> Page<T>.contains(element: T): Boolean = element in elements()

Again, the compiler would translate “in” and “!in” conventions to a function call to the contains operator function:

>> val page = firstPageOfSomething()
>> "This" in page
>> "That" !in page

The object on the left-hand side of “in” is passed as an argument to contains, and the contains function is called on the right-hand side operand.

4.8. Get Indexer

Indexers allow instances of a type to be indexed just like arrays or collections. Suppose we’re going to model a paginated collection of elements as Page<T>, shamelessly ripping off an idea from Spring Data:

interface Page<T> {
    fun pageNumber(): Int
    fun pageSize(): Int
    fun elements(): MutableList<T>
}

Normally, in order to retrieve an element from a Page, we should first call the elements function:

>> val page = firstPageOfSomething()
>> page.elements()[0]

Since the Page itself is just a fancy wrapper for another collection, we can use the indexer operators to enhance its API:

operator fun <T> Page<T>.get(index: Int): T = elements()[index]

The Kotlin compiler replaces any page[index] on a Page to a get(index) function call:

>> val page = firstPageOfSomething()
>> page[0]

We can go even further by adding as many arguments as we want to the get method declaration.

Suppose we’re going to retrieve part of the wrapped collection:

operator fun <T> Page<T>.get(start: Int, endExclusive: Int): 
  List<T> = elements().subList(start, endExclusive)

Then we can slice a Page like:

>> val page = firstPageOfSomething()
>> page[0, 3]

Also, we can use any parameter types for the get operator function, not just Int.

4.9. Set Indexer

In addition to using indexers for implementing get-like semantics, we can utilize them to mimic set-like operations, too. All we have to do is to define an operator function named set with at least two arguments:

operator fun <T> Page<T>.set(index: Int, value: T) {
    elements()[index] = value
}

When we declare a set function with just two arguments, the first one is used inside the brackets and the second one after the assignment:

val page: Page<String> = firstPageOfSomething()
page[2] = "Something new"

The set function can have more than just two arguments, too. If so, the last parameter is the value and the rest of the arguments should be passed inside the brackets.

4.10. Iterator Convention

How about iterating a Page like other collections? We just have to declare an operator function named iterator with Iterator<T> as the return type:

operator fun <T> Page<T>.iterator() = elements().iterator()

Then we can iterate through a Page:

val page = firstPageOfSomething()
for (e in page) {
    // Do something with each element
}

4.11. Range Convention

In Kotlin, we can create a range using the “..” operator. For example, “1..42” creates a range with numbers between 1 and 42.

Sometimes it’s sensible to use the range operator on other non-numeric types. The Kotlin standard library provides a rangeTo convention on all Comparables:

operator fun <T : Comparable<T>> T.rangeTo(that: T): ClosedRange<T> = ComparableRange(this, that)

We can use this to get a few consecutive days as a range:

val now = LocalDate.now()
val days = now..now.plusDays(42)

As with other operators, the Kotlin compiler replaces any “..” with a rangeTo function call.

5. Use Operators Judiciously

Operator overloading is a powerful feature in Kotlin which enables us to write more concise and sometimes more readable code. However, with great power comes great responsibility.

Operator overloading can make our code confusing or even hard to read when it’s used too frequently or occasionally misused.

Thus, before adding a new operator to a particular type, first, ask whether the operator is semantically a good fit for what we’re trying to achieve. Or ask if we can achieve the same effect with normal and less magical abstractions.

6. Conclusion

In this article, we learned more about the mechanics of operator overloading in Kotlin and how it uses a set of conventions to achieve it.

The implementation of all these examples and code snippets can be found in the GitHub project – it’s a Maven project, so it should be easy to import and run as it is.

Get All Data from a Table with Hibernate


1. Overview

In this quick tutorial, we’ll look at how to get all data from a table with Hibernate using JPQL or the Criteria API.

JPQL provides us with a quicker and simpler implementation while using the Criteria API is more dynamic and robust.

2. JPQL

JPQL provides a simple and straightforward way to get all entities from a table.

Let’s see what it might look like to retrieve all students from a table using JPQL:

public List<Student> findAllStudentsWithJpql() {
    return session.createQuery("SELECT a FROM Student a", Student.class).getResultList();      
}

Our Hibernate session’s createQuery() method receives a typed query string as the first argument and the entity’s type as the second. We execute the query with a call to the getResultList() method which returns the results as a typed List.

Simplicity is the advantage of this approach. JPQL is very close to SQL, and is, therefore, easier to write and understand.

3. Criteria API

The Criteria API provides a dynamic approach for building JPA queries.

It allows us to build queries by instantiating Java objects that represent query elements. And it’s a cleaner solution if queries are constructed from many optional fields because it eliminates a lot of string concatenations.

We just saw a select-all query using JPQL. Let’s take a look at its equivalent using the Criteria API:

public List<Student> findAllStudentsWithCriteriaQuery() {
    CriteriaBuilder cb = session.getCriteriaBuilder();
    CriteriaQuery<Student> cq = cb.createQuery(Student.class);
    Root<Student> rootEntry = cq.from(Student.class);
    CriteriaQuery<Student> all = cq.select(rootEntry);

    TypedQuery<Student> allQuery = session.createQuery(all);
    return allQuery.getResultList();
}

First, we get a CriteriaBuilder which we use to create a typed CriteriaQuery. Later, we set the root entry for the query. And lastly, we execute it with a getResultList() method.

Now, this approach is similar to what we did earlier. But, it gives us complete access to the Java language to articulate greater nuance in formulating the query.

In addition to being similar, JPQL queries and JPA criteria-based queries are equivalently performant.

4. Conclusion

In this article, we demonstrated how to get all entities from a table using JPQL or Criteria API.

The complete source code for the example is available over on GitHub.


Guide to Character Encoding


1. Overview

In this tutorial, we’ll discuss the basics of character encoding and how we handle it in Java.

2. Importance of Character Encoding

We often have to deal with texts belonging to multiple languages with diverse writing scripts like Latin or Arabic. Every character in every language needs to somehow be mapped to a set of ones and zeros. Really, it’s a wonder that computers can process all of our languages correctly.

To do this properly, we need to think about character encoding. Not doing so can often lead to data loss and even security vulnerabilities.

To understand this better, let’s define a method to decode a text in Java:

String decodeText(String input, String encoding) throws IOException {
    return 
      new BufferedReader(
        new InputStreamReader(
          new ByteArrayInputStream(input.getBytes()), 
          Charset.forName(encoding)))
        .readLine();
}

Note that the input text we feed here uses the default platform encoding.

If we run this method with input as “The façade pattern is a software design pattern.” and encoding as “US-ASCII”, it’ll output:

The fa��ade pattern is a software design pattern.

Well, not exactly what we expected.

What could have gone wrong? We’ll try to understand and correct this in the rest of this tutorial.

3. Fundamentals

Before digging deeper, though, let’s quickly review three terms: encoding, charsets, and code point.

3.1. Encoding

Computers can only understand binary representations like 1 and 0. Processing anything else requires some kind of mapping from the real-world text to its binary representation. This mapping is what we know as character encoding or simply just as encoding.

For example, the first letter in our message, “T”, in US-ASCII encodes to “01010100”.

3.2. Charsets

The mapping of characters to their binary representations can vary greatly in terms of the characters they include. The number of characters included in a mapping can vary from only a few to all the characters in practical use. The set of characters that are included in a mapping definition is formally called a charset.

For example, ASCII has a charset of 128 characters.

3.3. Code Point

A code point is an abstraction that separates a character from its actual encoding. A code point is an integer reference to a particular character.

We can represent the integer itself in plain decimal or in alternate bases like hexadecimal or octal. We use alternate bases for ease of referring to large numbers.

For example, the first letter in our message, T, in Unicode has a code point “U+0054” (or 84 in decimal).
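To make this concrete, here’s a minimal Java sketch (the class name is ours, purely for illustration) that prints a character’s code point both in decimal and in the conventional “U+” hexadecimal form:

```java
public class CodePointDemo {
    public static void main(String[] args) {
        int codePoint = "T".codePointAt(0);
        // Decimal value of the code point
        System.out.println(codePoint);
        // Conventional "U+" form: hexadecimal, zero-padded to four digits
        System.out.println("U+" + String.format("%04X", codePoint));
    }
}
```

Running this prints 84 and U+0054, matching the values above.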

4. Understanding Encoding Schemes

A character encoding can take various forms depending upon the number of characters it encodes.

The number of characters encoded has a direct relationship to the length of each representation which typically is measured as the number of bytes. Having more characters to encode essentially means needing lengthier binary representations.

Let’s go through some of the popular encoding schemes in practice today.

4.1. Single-Byte Encoding

One of the earliest encoding schemes, called ASCII (American Standard Code for Information Exchange) uses a single byte encoding scheme. This essentially means that each character in ASCII is represented with seven-bit binary numbers. This still leaves one bit free in every byte!

ASCII’s 128-character set covers English alphabets in lower and upper cases, digits, and some special and control characters.

Let’s define a simple method in Java to display the binary representation for a character under a particular encoding scheme:

String convertToBinary(String input, String encoding) 
      throws UnsupportedEncodingException {
    byte[] encodedInput = input.getBytes(encoding);
    return IntStream.range(0, encodedInput.length)
      .map(i -> encodedInput[i] & 0xFF) // mask the sign extension to treat each byte as unsigned
      .mapToObj(Integer::toBinaryString)
      .map(e -> String.format("%1$" + Byte.SIZE + "s", e).replace(" ", "0"))
      .collect(Collectors.joining(" "));
}

Now, character ‘T’ has a code point of 84 in US-ASCII (ASCII is referred to as US-ASCII in Java).

And if we use our utility method, we can see its binary representation:

assertEquals("01010100", convertToBinary("T", "US-ASCII"));

This, as we expected, is a seven-bit binary representation for the character ‘T’.

The original ASCII left the most significant bit of every byte unused. At the same time, ASCII had left quite a lot of characters unrepresented, especially for non-English languages.

This led to an effort to utilize that unused bit and include an additional 128 characters.

There were several variations of the ASCII encoding scheme proposed and adopted over time. These loosely came to be referred to as “ASCII extensions”.

Many of the ASCII extensions had different levels of success but obviously, this was not good enough for wider adoption as many characters were still not represented.

One of the more popular ASCII extensions was ISO-8859-1, also referred to as “ISO Latin 1”.
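To see what that extra bit buys us, here’s a small sketch (class name ours) confirming that ‘ç’ (U+00E7), which US-ASCII cannot represent, fits into a single ISO-8859-1 byte:

```java
import java.nio.charset.StandardCharsets;

public class Latin1Demo {
    public static void main(String[] args) {
        // "\u00E7" is the character 'ç', written as an escape to stay encoding-neutral
        byte[] latin1 = "\u00E7".getBytes(StandardCharsets.ISO_8859_1);
        System.out.println(latin1.length);
        // ISO-8859-1 conveniently maps U+00E7 to the single byte 0xE7
        System.out.println(Integer.toHexString(latin1[0] & 0xFF));
    }
}
```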

4.2. Multi-Byte Encoding

As the need to accommodate more and more characters grew, single-byte encoding schemes like ASCII were not sustainable.

This gave rise to multi-byte encoding schemes which have a much better capacity albeit at the cost of increased space requirements.

BIG5 and SHIFT-JIS are examples of multi-byte character encoding schemes which started to use one as well as two bytes to represent wider charsets. Most of these were created for the need to represent Chinese and similar scripts which have a significantly higher number of characters.

Let’s now call the method convertToBinary with input as ‘語’, a Chinese character, and encoding as “Big5”:

assertEquals("10111011 01111001", convertToBinary("語", "Big5"));

The output above shows that Big5 encoding uses two bytes to represent the character ‘語’.

A comprehensive list of character encodings, along with their aliases, is maintained by the Internet Assigned Numbers Authority (IANA).

5. Unicode

It is not difficult to understand that while encoding is important, decoding is equally vital to make sense of the representations. This is only possible in practice if a consistent or compatible encoding scheme is used widely.

Different encoding schemes developed in isolation and practiced in local geographies started to become challenging.

This challenge gave rise to a singular encoding standard called Unicode which has the capacity for every possible character in the world. This includes the characters which are in use and even those which are defunct!

Well, doesn’t that require several bytes to store each character? Honestly, yes, but Unicode has an ingenious solution.

Unicode as a standard defines code points for every possible character in the world. The code point for character ‘T’ in Unicode is 84 in decimal. We generally refer to this as “U+0054” in Unicode which is nothing but U+ followed by the hexadecimal number.

We use hexadecimal as the base for code points in Unicode as there are 1,114,112 points, which is a pretty large number to communicate conveniently in decimal!
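We can double-check this figure against the standard library, since Character.MAX_CODE_POINT marks the upper end of the Unicode range:

```java
public class UnicodeRangeDemo {
    public static void main(String[] args) {
        // Code points run from U+0000 up to U+10FFFF inclusive
        System.out.println(Integer.toHexString(Character.MAX_CODE_POINT));
        // ... which gives 1,114,112 code points in total
        System.out.println(Character.MAX_CODE_POINT + 1);
    }
}
```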

How these code points are encoded into bits is left to specific encoding schemes within Unicode. We will cover some of these encoding schemes in the sub-sections below.

5.1. UTF-32

UTF-32 is an encoding scheme for Unicode that employs four bytes to represent every code point defined by Unicode. Obviously, it is space inefficient to use four bytes for every character.

Let’s see how a simple character like ‘T’ is represented in UTF-32. We will use the method convertToBinary introduced earlier:

assertEquals("00000000 00000000 00000000 01010100", convertToBinary("T", "UTF-32"));

The output above shows the usage of four bytes to represent the character ‘T’ where the first three bytes are just wasted space.

5.2. UTF-8

UTF-8 is another encoding scheme for Unicode which employs a variable number of bytes per character. While it generally uses a single byte to encode a character, it can use up to four bytes when needed, thus saving space.

Let’s again call the method convertToBinary with input as ‘T’ and encoding as “UTF-8”:

assertEquals("01010100", convertToBinary("T", "UTF-8"));

The output is identical to ASCII, using just a single byte. In fact, UTF-8 is completely backward compatible with ASCII.

Let’s again call the method convertToBinary with input as ‘語’ and encoding as “UTF-8”:

assertEquals("11101000 10101010 10011110", convertToBinary("語", "UTF-8"));

As we can see here UTF-8 uses three bytes to represent the character ‘語’. This is known as variable-width encoding.

UTF-8, due to its space efficiency, is the most common encoding used on the web.

6. Encoding Support in Java

Java supports a wide array of encodings and conversions between them. The class Charset defines a set of standard encodings which every implementation of the Java platform is mandated to support.

This includes US-ASCII, ISO-8859-1, UTF-8, and UTF-16 to name a few. A particular implementation of Java may optionally support additional encodings.

There are some subtleties in the way Java picks up a charset to work with. Let’s go through them in more detail.

6.1. Default Charset

The Java platform depends heavily on a property called the default charset. The Java Virtual Machine (JVM) determines the default charset during start-up.

This depends on the locale and the charset of the underlying operating system on which the JVM is running. For example, on macOS, the default charset is UTF-8.

Let’s see how we can determine the default charset:

Charset.defaultCharset().displayName();

If we run this code snippet on a Windows machine, the output we get is:

windows-1252

Now, “windows-1252” is the default charset of the Windows platform in English, which in this case has determined the default charset of JVM which is running on Windows.

6.2. Who Uses the Default Charset?

Many of the Java APIs make use of the default charset as determined by the JVM. To name a few:

  • InputStreamReader and OutputStreamWriter, when created without an explicit charset
  • FileReader and FileWriter
  • String.getBytes() and the String(byte[]) constructor

So, this means that if we’d run our example without specifying the charset:

new BufferedReader(new InputStreamReader(new ByteArrayInputStream(input.getBytes()))).readLine();

then it would use the default charset to decode it.

And there are several APIs that make this same choice by default.

The default charset hence assumes an importance which we cannot safely ignore.

6.3. Problems with the Default Charset

As we’ve seen, the default charset in Java is determined dynamically when the JVM starts. This makes the platform less reliable and more error-prone when used across different operating systems.

For example, if we run

new BufferedReader(new InputStreamReader(new ByteArrayInputStream(input.getBytes()))).readLine();

on macOS, it will use UTF-8.

If we try the same snippet on Windows, it will use Windows-1252 to decode the same text.

Or, imagine writing a file on macOS, and then reading that same file on Windows.

It’s not difficult to understand that because of different encoding schemes, this may lead to data loss or corruption.
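We can simulate such a mismatch inside a single JVM by encoding with one charset and decoding with another (the class name is illustrative, and ISO-8859-1 stands in for a Windows-style single-byte charset):

```java
import java.nio.charset.StandardCharsets;

public class CharsetMismatchDemo {
    public static void main(String[] args) {
        String original = "fa\u00E7ade"; // "façade", written with an escape
        // Encode on the "macOS side" as UTF-8 ...
        byte[] utf8Bytes = original.getBytes(StandardCharsets.UTF_8);
        // ... but decode on the "Windows side" as ISO-8859-1
        String misread = new String(utf8Bytes, StandardCharsets.ISO_8859_1);
        System.out.println(original.equals(misread));
        // The two-byte UTF-8 sequence for 'ç' became two separate characters
        System.out.println(misread.length());
    }
}
```

The round trip prints false, and the six-character string has become seven characters: the data is corrupted.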

6.4. Can We Override the Default Charset?

The determination of the default charset in Java leads to two system properties:

  • file.encoding: The value of this system property is the name of the default charset
  • sun.jnu.encoding: The value of this system property is the name of the charset used when encoding/decoding file paths

Now, it’s intuitive to override these system properties through command line arguments:

-Dfile.encoding="UTF-8"
-Dsun.jnu.encoding="UTF-8"

However, it is important to note that these properties are treated as read-only in Java. Their usage as above is not documented, and overriding them may not have the desired or predictable behavior.

Hence, we should avoid overriding the default charset in Java.

6.5. Why Is Java Not Solving This?

There is a JDK Enhancement Proposal (JEP) which prescribes using “UTF-8” as the default charset in Java instead of basing it on the locale and operating system charset.

This JEP is in a draft state as of now and when it (hopefully!) goes through it will solve most of the issues we discussed earlier.

Note that newer APIs, like those in java.nio.file.Files, do not use the default charset. The methods in these APIs read and write character streams using UTF-8 rather than the default charset.
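For example, the following sketch (class name ours) round-trips an accented string through java.nio.file.Files without ever naming a charset, relying on these methods defaulting to UTF-8:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class FilesUtf8Demo {
    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("charset-demo", ".txt");
        // Files.write(Path, Iterable) encodes the lines as UTF-8, not the default charset
        Files.write(tmp, List.of("fa\u00E7ade"));
        // Files.readAllLines(Path) likewise decodes as UTF-8
        String line = Files.readAllLines(tmp).get(0);
        System.out.println("fa\u00E7ade".equals(line));
        Files.deleteIfExists(tmp);
    }
}
```

The comparison prints true regardless of the platform the snippet runs on.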

6.6. Solving This Problem in Our Programs

We should normally choose to specify a charset when dealing with text instead of relying on the default settings. We can explicitly declare the encoding we want to use in classes which deal with character-to-byte conversions.

Luckily, our example is already specifying the charset. We just need to select the right one and let Java do the rest.

We should realize by now that accented characters like ‘ç’ are not present in the ASCII encoding scheme, and hence we need an encoding which includes them. Perhaps UTF-8?

Let’s try that and run the method decodeText with the same input but “UTF-8” as the encoding:

The façade pattern is a software-design pattern.

Bingo! We can see the output we were hoping to see now.

Here we have set the encoding we think best suits our need in the constructor of InputStreamReader. This is usually the safest way of dealing with character and byte conversions in Java.

Similarly, OutputStreamWriter and many other APIs support setting an encoding scheme through their constructor.
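As a quick sketch (class name ours), pinning the charset in the OutputStreamWriter constructor makes the byte output predictable regardless of the platform default; note that ‘ç’ takes two bytes in UTF-8:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;

public class ExplicitCharsetDemo {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        // The second constructor argument fixes the encoding explicitly
        try (Writer writer = new OutputStreamWriter(bytes, StandardCharsets.UTF_8)) {
            writer.write("fa\u00E7ade"); // "façade"
        }
        System.out.println("fa\u00E7ade".length()); // six chars ...
        System.out.println(bytes.size());           // ... but seven UTF-8 bytes
    }
}
```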

7. Other Places Where Encoding is Important

We don’t just need to consider character encoding while programming. Text can get irreversibly garbled in many other places.

The most common cause of problems in these cases is the conversion of text from one encoding scheme to another, thereby possibly introducing data loss.

Let’s quickly go through a few places where we may encounter issues when encoding or decoding text.

7.1. Text Editors

In most cases, a text editor is where text originates. There are numerous popular text editors, including vi, Notepad, and MS Word. Most of these editors allow us to select the encoding scheme, so we should always make sure it is appropriate for the text we are handling.

7.2. File System

After we create texts in an editor, we need to store them in some file system. The file system depends on the operating system on which it is running. Most operating systems have inherent support for multiple encoding schemes. However, there may still be cases where an encoding conversion leads to data loss.

7.3. Network

Texts when transferred over a network using a protocol like File Transfer Protocol (FTP) also involve conversion between character encodings. For anything encoded in Unicode, it’s safest to transfer over as binary to minimize the risk of loss in conversion. However, transferring text over a network is one of the less frequent causes of data corruption.

7.4. Databases

Most of the popular databases like Oracle and MySQL support the choice of the character encoding scheme at the installation or creation of databases. We must choose this in accordance with the texts we expect to store in the database. This is one of the more frequent places where the corruption of text data happens due to encoding conversions.

7.5. Browsers

Finally, in most web applications, we create text and pass it through different layers with the intention of viewing it in a user interface, like a browser. Here as well, it is imperative for us to choose the right character encoding so that characters display properly. Most popular browsers, like Chrome and Edge, allow choosing the character encoding through their settings.

8. Conclusion

In this article, we discussed how encoding can be an issue while programming.

We further discussed the fundamentals including encoding and charsets. Moreover, we went through different encoding schemes and their uses.

We also picked up an example of incorrect character encoding usage in Java and saw how to get that right. Finally, we discussed some other common error scenarios related to character encoding.

As always, the code for the examples is available over on GitHub.

Customizing Authorization and Token Requests with Spring Security 5.1 Client


1. Overview

Sometimes OAuth2 APIs can diverge a little from the standard, in which case we need to do some customizations to the standard OAuth2 requests.

Spring Security 5.1 provides support for customizing OAuth2 authorization and token requests.

In this tutorial, we’ll see how to customize request parameters and response handling.

2. Custom Authorization Request

First, we’ll customize the OAuth2 authorization request. We can modify standard parameters and add extra parameters to the authorization request as we need.

To do so, we need to implement our own OAuth2AuthorizationRequestResolver:

public class CustomAuthorizationRequestResolver 
  implements OAuth2AuthorizationRequestResolver {
    
    private OAuth2AuthorizationRequestResolver defaultResolver;

    public CustomAuthorizationRequestResolver(
      ClientRegistrationRepository repo, String authorizationRequestBaseUri) {
        defaultResolver = new DefaultOAuth2AuthorizationRequestResolver(repo, authorizationRequestBaseUri);
    }
    
    // ...
}

Note that we used the DefaultOAuth2AuthorizationRequestResolver to provide base functionality.

We’ll also override the resolve() methods to add our customization logic:

public class CustomAuthorizationRequestResolver 
  implements OAuth2AuthorizationRequestResolver {

    //...

    @Override
    public OAuth2AuthorizationRequest resolve(HttpServletRequest request) {
        OAuth2AuthorizationRequest req = defaultResolver.resolve(request);
        if(req != null) {
            req = customizeAuthorizationRequest(req);
        }
        return req;
    }

    @Override
    public OAuth2AuthorizationRequest resolve(HttpServletRequest request, String clientRegistrationId) {
        OAuth2AuthorizationRequest req = defaultResolver.resolve(request, clientRegistrationId);
        if(req != null) {
            req = customizeAuthorizationRequest(req);
        }
        return req;
    }

    private OAuth2AuthorizationRequest customizeAuthorizationRequest(
      OAuth2AuthorizationRequest req) {
        // ...
    }

}

We’ll add our customizations later on in the customizeAuthorizationRequest() method, as we’ll discuss in the next sections.

After implementing our custom OAuth2AuthorizationRequestResolver, we need to add it to our security configuration:

@Configuration
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.oauth2Login()
          .authorizationEndpoint()
          .authorizationRequestResolver(
            new CustomAuthorizationRequestResolver(
              clientRegistrationRepository(), "/oauth2/authorize-client"))
        //...
    }
}

Here we used oauth2Login().authorizationEndpoint().authorizationRequestResolver() to inject our custom OAuth2AuthorizationRequestResolver.

3. Customizing Authorization Request Standard Parameters

Now, let’s discuss the actual customization. We can modify OAuth2AuthorizationRequest as much as we want.

For starters, we can modify a standard parameter for each authorization request.

We can, for example, generate our own “state” parameter:

private OAuth2AuthorizationRequest customizeAuthorizationRequest(
  OAuth2AuthorizationRequest req) {
    return OAuth2AuthorizationRequest
      .from(req).state("xyz").build();
}

4. Authorization Request Extra Parameters

We can also add extra parameters to our OAuth2AuthorizationRequest using the additionalParameters() method of the OAuth2AuthorizationRequest and passing in a Map:

private OAuth2AuthorizationRequest customizeAuthorizationRequest(
  OAuth2AuthorizationRequest req) {
    Map<String,Object> extraParams = new HashMap<String,Object>();
    extraParams.putAll(req.getAdditionalParameters()); 
    extraParams.put("test", "extra");
    
    return OAuth2AuthorizationRequest
      .from(req)
      .additionalParameters(extraParams)
      .build();
}

We also have to make sure that we include the old additionalParameters before we add our new ones.

Let’s see a more practical example by customizing the authorization request used with the Okta Authorization Server.

4.1. Custom Okta Authorize Request

Okta supports extra optional parameters for the authorization request that provide the user with more functionality. One example is idp, which indicates the identity provider.

The identity provider is Okta by default, but we can customize it using the idp parameter:

private OAuth2AuthorizationRequest customizeOktaReq(OAuth2AuthorizationRequest req) {
    Map<String,Object> extraParams = new HashMap<String,Object>();
    extraParams.putAll(req.getAdditionalParameters()); 
    extraParams.put("idp", "https://idprovider.com");
    return OAuth2AuthorizationRequest
      .from(req)
      .additionalParameters(extraParams)
      .build();
}

5. Custom Token Request

Now, we’ll see how to customize the OAuth2 token request.

We can customize the token request by customizing OAuth2AccessTokenResponseClient.

The default implementation for OAuth2AccessTokenResponseClient is DefaultAuthorizationCodeTokenResponseClient.

We can customize the token request itself by providing a custom RequestEntityConverter, and we can even customize the token response handling by customizing the RestOperations of DefaultAuthorizationCodeTokenResponseClient:

@Configuration
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.tokenEndpoint()
          .accessTokenResponseClient(accessTokenResponseClient())
            //...
    }

    @Bean
    public OAuth2AccessTokenResponseClient<OAuth2AuthorizationCodeGrantRequest> accessTokenResponseClient(){
        DefaultAuthorizationCodeTokenResponseClient accessTokenResponseClient = 
          new DefaultAuthorizationCodeTokenResponseClient(); 
        accessTokenResponseClient.setRequestEntityConverter(new CustomRequestEntityConverter()); 

        OAuth2AccessTokenResponseHttpMessageConverter tokenResponseHttpMessageConverter = 
          new OAuth2AccessTokenResponseHttpMessageConverter(); 
        tokenResponseHttpMessageConverter.setTokenResponseConverter(new CustomTokenResponseConverter()); 
        RestTemplate restTemplate = new RestTemplate(Arrays.asList(
          new FormHttpMessageConverter(), tokenResponseHttpMessageConverter)); 
        restTemplate.setErrorHandler(new OAuth2ErrorResponseErrorHandler()); 
        
        accessTokenResponseClient.setRestOperations(restTemplate); 
        return accessTokenResponseClient;
    }
}

We can inject our own OAuth2AccessTokenResponseClient using tokenEndpoint().accessTokenResponseClient().

To customize the token request parameters, we’ll implement CustomRequestEntityConverter. Similarly, to customize token response handling, we’ll implement CustomTokenResponseConverter.

We’ll discuss both CustomRequestEntityConverter and CustomTokenResponseConverter in the following sections.

6. Token Request Extra Parameters

Now, we’ll see how to add extra parameters to our token request by building a custom Converter:

public class CustomRequestEntityConverter implements 
  Converter<OAuth2AuthorizationCodeGrantRequest, RequestEntity<?>> {

    private OAuth2AuthorizationCodeGrantRequestEntityConverter defaultConverter;
    
    public CustomRequestEntityConverter() {
        defaultConverter = new OAuth2AuthorizationCodeGrantRequestEntityConverter();
    }
    
    @Override
    public RequestEntity<?> convert(OAuth2AuthorizationCodeGrantRequest req) {
        RequestEntity<?> entity = defaultConverter.convert(req);
        MultiValueMap<String, String> params = (MultiValueMap<String,String>) entity.getBody();
        params.add("test2", "extra2");
        return new RequestEntity<>(params, entity.getHeaders(), 
          entity.getMethod(), entity.getUrl());
    }

}

Our Converter transforms OAuth2AuthorizationCodeGrantRequest to a RequestEntity.

We used the default converter, OAuth2AuthorizationCodeGrantRequestEntityConverter, to provide base functionality, and added extra parameters to the RequestEntity body.

7. Custom Token Response Handling

Now, we’ll customize handling the token response.

We can use the default token response converter OAuth2AccessTokenResponseHttpMessageConverter as a starting point.

We’ll implement CustomTokenResponseConverter to handle the “scope” parameter differently:

public class CustomTokenResponseConverter implements 
  Converter<Map<String, String>, OAuth2AccessTokenResponse> {
    private static final Set<String> TOKEN_RESPONSE_PARAMETER_NAMES = Stream.of(
        OAuth2ParameterNames.ACCESS_TOKEN, 
        OAuth2ParameterNames.TOKEN_TYPE, 
        OAuth2ParameterNames.EXPIRES_IN, 
        OAuth2ParameterNames.REFRESH_TOKEN, 
        OAuth2ParameterNames.SCOPE).collect(Collectors.toSet());

    @Override
    public OAuth2AccessTokenResponse convert(Map<String, String> tokenResponseParameters) {
        String accessToken = tokenResponseParameters.get(OAuth2ParameterNames.ACCESS_TOKEN);

        Set<String> scopes = Collections.emptySet();
        if (tokenResponseParameters.containsKey(OAuth2ParameterNames.SCOPE)) {
            String scope = tokenResponseParameters.get(OAuth2ParameterNames.SCOPE);
            scopes = Arrays.stream(StringUtils.delimitedListToStringArray(scope, ","))
                .collect(Collectors.toSet());
        }

        //...
        return OAuth2AccessTokenResponse.withToken(accessToken)
          .tokenType(accessTokenType)
          .expiresIn(expiresIn)
          .scopes(scopes)
          .refreshToken(refreshToken)
          .additionalParameters(additionalParameters)
          .build();
    }

}

The token response converter transforms Map to OAuth2AccessTokenResponse.

In this example, we parsed the “scope” parameter as a comma-delimited String instead of a space-delimited one.

Let’s go through another practical example by customizing the token response using LinkedIn as an authorization server.

7.1. LinkedIn Token Response Handling

Finally, let’s see how to handle the LinkedIn token response. This contains only access_token and expires_in, but we also need token_type.

We can simply implement our own token response converter and set token_type manually:

public class LinkedinTokenResponseConverter 
  implements Converter<Map<String, String>, OAuth2AccessTokenResponse> {

    @Override
    public OAuth2AccessTokenResponse convert(Map<String, String> tokenResponseParameters) {
        String accessToken = tokenResponseParameters.get(OAuth2ParameterNames.ACCESS_TOKEN);
        long expiresIn = Long.valueOf(tokenResponseParameters.get(OAuth2ParameterNames.EXPIRES_IN));
        
        OAuth2AccessToken.TokenType accessTokenType = OAuth2AccessToken.TokenType.BEARER;

        return OAuth2AccessTokenResponse.withToken(accessToken)
          .tokenType(accessTokenType)
          .expiresIn(expiresIn)
          .build();
    }
}

8. Conclusion

In this article, we learned how to customize OAuth2 authorization and token requests by adding or modifying request parameters.

The full source code for the examples is available over on GitHub.

Remove Leading and Trailing Characters from a String


1. Introduction

In this short tutorial, we’ll see several ways to remove leading and trailing characters from a String. For the sake of simplicity, we’ll remove zeroes in the examples.

With each implementation, we’ll create two methods: one for leading, and one for trailing zeroes.

This problem has an edge case: what do we want to do when the input contains only zeroes? Return an empty String, or a String containing a single zero? We’ll see implementations for both use cases in each of the solutions.

We have unit tests for each implementation, which you can find on GitHub.

2. Using StringBuilder

In our first solution, we’ll create a StringBuilder with the original String, and we’ll delete the unnecessary characters from the beginning or the end:

String removeLeadingZeroes(String s) {
    StringBuilder sb = new StringBuilder(s);
    while (sb.length() > 0 && sb.charAt(0) == '0') {
        sb.deleteCharAt(0);
    }
    return sb.toString();
}

String removeTrailingZeroes(String s) {
    StringBuilder sb = new StringBuilder(s);
    while (sb.length() > 0 && sb.charAt(sb.length() - 1) == '0') {
        sb.setLength(sb.length() - 1);
    }
    return sb.toString();
}

Note that we use StringBuilder.setLength() instead of StringBuilder.deleteCharAt() when we remove trailing zeroes: it deletes the last few characters in a single step, and it’s more performant.

If we don’t want to return an empty String when the input contains only zeroes, the only thing we need to do is to stop the loop if there’s only a single character left.

Therefore, we change the loop condition:

String removeLeadingZeroes(String s) {
    StringBuilder sb = new StringBuilder(s);
    while (sb.length() > 1 && sb.charAt(0) == '0') {
        sb.deleteCharAt(0);
    }
    return sb.toString();
}

String removeTrailingZeroes(String s) {
    StringBuilder sb = new StringBuilder(s);
    while (sb.length() > 1 && sb.charAt(sb.length() - 1) == '0') {
        sb.setLength(sb.length() - 1);
    }
    return sb.toString();
}

3. Using String.subString()

In this solution, when we remove leading or trailing zeroes, we find the position of the first or last non-zero character.

After that, we only have to call substring(), to return the remaining parts:

String removeLeadingZeroes(String s) {
    int index;
    for (index = 0; index < s.length(); index++) {
        if (s.charAt(index) != '0') {
            break;
        }
    }
    return s.substring(index);
}

String removeTrailingZeroes(String s) {
    int index;
    for (index = s.length() - 1; index >= 0; index--) {
        if (s.charAt(index) != '0') {
            break;
        }
    }
    return s.substring(0, index + 1);
}

Note that we have to declare the variable index before the for loop because we want to use the variable outside the loop’s scope.

Also note that we have to look for non-zero characters manually, since String.indexOf() and String.lastIndexOf() work only for exact matching.

If we don’t want to return an empty String, we have to do the same thing as before: change the loop condition:

String removeLeadingZeroes(String s) {
    int index;
    for (index = 0; index < s.length() - 1; index++) {
        if (s.charAt(index) != '0') {
            break;
        }
    }
    return s.substring(index);
}

String removeTrailingZeroes(String s) {
    int index;
    for (index = s.length() - 1; index > 0; index--) {
        if (s.charAt(index) != '0') {
            break;
        }
    }
    return s.substring(0, index + 1);
}

4. Using Apache Commons

Apache Commons has many useful classes, including org.apache.commons.lang3.StringUtils. To be more precise, this class is in Apache Commons Lang 3.

4.1. Dependencies

We can use Apache Commons Lang3 by inserting this dependency into our pom.xml file:

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
    <version>3.8.1</version>
</dependency>

4.2. Implementation

In the StringUtils class, we have the methods stripStart() and stripEnd(). They remove leading and trailing characters respectively.

Since it’s exactly what we need, our solution is pretty straightforward:

String removeLeadingZeroes(String s) {
    return StringUtils.stripStart(s, "0");
}

String removeTrailingZeroes(String s) {
    return StringUtils.stripEnd(s, "0");
}

Unfortunately, we can’t configure whether we want to remove all occurrences or not. Therefore, we need to control it manually.

If the input wasn’t empty, but the stripped String is empty, then we have to return exactly one zero:

String removeLeadingZeroes(String s) {
    String stripped = StringUtils.stripStart(s, "0");
    if (stripped.isEmpty() && !s.isEmpty()) {
        return "0";
    }
    return stripped;
}

String removeTrailingZeroes(String s) {
    String stripped = StringUtils.stripEnd(s, "0");
    if (stripped.isEmpty() && !s.isEmpty()) {
        return "0";
    }
    return stripped;
}

Note that these methods accept a String as their second parameter. This String represents a set of characters, not a sequence we want to remove.

For example, if we pass “01”, they’ll remove any leading or trailing characters that are either ‘0’ or ‘1’.
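To make that set-based behavior concrete, here’s a minimal plain-Java sketch of what stripStart() does conceptually. Note that StripDemo and its helper are our own illustration, not the actual Commons Lang implementation:

```java
public class StripDemo {

    // Removes leading characters that appear anywhere in stripChars,
    // mimicking the set-based semantics of StringUtils.stripStart()
    static String stripStart(String s, String stripChars) {
        int i = 0;
        while (i < s.length() && stripChars.indexOf(s.charAt(i)) >= 0) {
            i++;
        }
        return s.substring(i);
    }

    public static void main(String[] args) {
        // '0' and '1' are both stripped, in any order, until another character is hit
        System.out.println(stripStart("0101abc01", "01")); // prints abc01
    }
}
```

The trailing “01” is untouched because this sketch only handles the leading side; stripEnd() works analogously from the other end.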

5. Using Guava

Guava also provides many utility classes. For this problem, we can use com.google.common.base.CharMatcher, which offers utility methods for matching characters.

5.1. Dependencies

To use Guava, we should add the following dependencies to our pom.xml file:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>27.0.1-jre</version>
</dependency>

Note that if we want to use Guava in an Android application, we should use version 27.0-android instead.

5.2. Implementation

In our case, we’re interested in trimLeadingFrom() and trimTrailingFrom().

As their name suggests, they remove any leading or trailing character respectively from a String, which matches the CharMatcher:

String removeLeadingZeroes(String s) {
    return CharMatcher.is('0').trimLeadingFrom(s);
}

String removeTrailingZeroes(String s) {
    return CharMatcher.is('0').trimTrailingFrom(s);
}

They have the same characteristics as the Apache Commons methods we saw.

Therefore, if we don’t want to remove all zeroes, we can use the same trick:

String removeLeadingZeroes(String s) {
    String stripped = CharMatcher.is('0').trimLeadingFrom(s);
    if (stripped.isEmpty() && !s.isEmpty()) {
        return "0";
    }
    return stripped;
}

String removeTrailingZeroes(String s) {
    String stripped = CharMatcher.is('0').trimTrailingFrom(s);
    if (stripped.isEmpty() && !s.isEmpty()) {
        return "0";
    }
    return stripped;
}

Note that with CharMatcher, we can create more complex matching rules.

6. Using Regular Expressions

Since this is a pattern matching problem, we can use regular expressions: we want to match all zeroes at the beginning or the end of a String.

On top of that, we want to remove those matching zeroes. In other words, we want to replace them with nothing, that is, an empty String.

We can do exactly that, with the String.replaceAll() method:

String removeLeadingZeroes(String s) {
    return s.replaceAll("^0+", "");
}

String removeTrailingZeroes(String s) {
    return s.replaceAll("0+$", "");
}

If we don’t want to remove all zeroes, we could use the same solution we used with Apache Commons and Guava. However, there’s a pure regular expression way to do this: we have to provide a pattern that doesn’t match the whole String.

That way, if the input contains only zeroes, the regex engine will leave exactly one of them out of the match. We can do this with the following patterns:

String removeLeadingZeroes(String s) {
    return s.replaceAll("^0+(?!$)", "");
}

String removeTrailingZeroes(String s) {
    return s.replaceAll("(?!^)0+$", "");
}

Note that “(?!^)” and “(?!$)” mean that we’re not at the beginning or the end of the String, respectively.
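A quick demo of both lookahead patterns, runnable with just the JDK:

```java
public class TrimZeroes {
    public static void main(String[] args) {
        // the negative lookaheads guarantee at least one character survives
        System.out.println("00700".replaceAll("^0+(?!$)", ""));  // prints 700
        System.out.println("00700".replaceAll("(?!^)0+$", ""));  // prints 007
        System.out.println("0000".replaceAll("^0+(?!$)", ""));   // prints 0
        System.out.println("0000".replaceAll("(?!^)0+$", ""));   // prints 0
    }
}
```

On the all-zero input, the greedy 0+ first matches the entire String, then backtracks one character so the lookahead succeeds, leaving a single zero behind.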

7. Conclusion

In this tutorial, we saw several ways to remove leading and trailing characters from a String. The choice between these implementations is often simply personal preference.

As usual, the examples are available over on GitHub.

JPA Entity Graph


1. Overview

JPA 2.1 has introduced the Entity Graph feature as a more sophisticated method of dealing with performance loading.

It allows defining a template by grouping the related persistence fields which we want to retrieve and lets us choose the graph type at runtime.

In this tutorial, we’ll explain in more detail how to create and use this feature.

2. What The Entity Graph Tries to Resolve

Until JPA 2.0, to load an entity association, we usually used the FetchType.LAZY and FetchType.EAGER fetching strategies. These instruct the JPA provider whether or not to additionally fetch the related association. Unfortunately, this meta configuration is static and doesn’t allow us to switch between the two strategies at runtime.

The main goal of the JPA Entity Graph is then to improve the runtime performance when loading the entity’s related associations and basic fields.

Briefly put, the JPA provider loads the whole graph in one SELECT query and then avoids fetching associations with additional SELECT queries. This is considered a good approach for improving application performance.

3. Defining the Model

Before we start exploring the Entity Graph, we need to define the model entities we’re working with. Let’s say we want to create a blog site where users can comment on and share posts.

So, first we’ll have a User entity:

@Entity
public class User {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    private String name;
    private String email;

    //...
}

The user can share various posts, so we also need a Post entity:

@Entity
public class Post {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    private String subject;
    @OneToMany(mappedBy = "post")
    private List<Comment> comments = new ArrayList<>();

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn
    private User user;
    
    //...
}

The user can also comment on the shared posts, so, finally, we’ll add a Comment entity:

@Entity
public class Comment {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String reply;

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn
    private Post post;

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn
    private User user;
    
    //...
}

As we can see, the Post entity has an association with the Comment and User entities. The Comment entity has an association to the Post and User entities.

The goal is then to load the following graph using various ways:

Post  ->  user:User
      ->  comments:List<Comment>
            comments[0]:Comment -> user:User
            comments[1]:Comment -> user:User

4. Loading Related Entities with FetchType Strategies

The FetchType method defines two strategies for fetching data from the database:

  • FetchType.EAGER: The persistence provider must load the related annotated field or property. This is the default behavior for @Basic, @ManyToOne, and @OneToOne annotated fields.
  • FetchType.LAZY: The persistence provider should load the data when it’s first accessed, but it may still be loaded eagerly. This is the default behavior for @OneToMany, @ManyToMany, and @ElementCollection annotated fields.

For example, when we load a Post entity, the related Comment entities are not loaded as the default FetchType since @OneToMany is LAZY. We can override this behavior by changing the FetchType to EAGER:

@OneToMany(mappedBy = "post", fetch = FetchType.EAGER)
private List<Comment> comments = new ArrayList<>();

By comparison, when we load a Comment entity, its parent Post entity is loaded as the default mode for @ManyToOne, which is EAGER. We can also choose not to load the Post entity by changing this annotation to LAZY:

@ManyToOne(fetch = FetchType.LAZY) 
@JoinColumn(name = "post_id") 
private Post post;

Note that since LAZY is a hint rather than a requirement, the persistence provider can still load the Post entity eagerly if it wants. So to use this strategy properly, we should refer to the official documentation of the corresponding persistence provider.

Now, because we’ve used annotations to describe our fetching strategy, our definition is static and there is no way to switch between the LAZY and EAGER at runtime.

This is where the Entity Graph comes into play, as we’ll see in the next section.

5. Defining an Entity Graph

To define an Entity Graph, we can either use the annotations on the entity or we can proceed programmatically using the JPA API.

5.1. Defining an Entity Graph with Annotations

The @NamedEntityGraph annotation allows specifying the attributes to include when we want to load the entity and the related associations.

So let’s first define an Entity Graph that loads the Post and its related User and Comment entities:

@NamedEntityGraph(
  name = "post-entity-graph",
  attributeNodes = {
    @NamedAttributeNode("subject"),
    @NamedAttributeNode("user"),
    @NamedAttributeNode("comments"),
  }
)
@Entity
public class Post {

    @OneToMany(mappedBy = "post")
    private List<Comment> comments = new ArrayList<>();
    
    //...
}

In this example, we’ve used the @NamedAttributeNode to define the related entities to be loaded when the root entity is loaded.

Let’s now define a more complicated Entity Graph where we also want to load the Users related to the Comments.

For this purpose, we’ll use the @NamedAttributeNode subgraph attribute. This allows referencing a named subgraph defined through the @NamedSubgraph annotation:

@NamedEntityGraph(
  name = "post-entity-graph-with-comment-users",
  attributeNodes = {
    @NamedAttributeNode("subject"),
    @NamedAttributeNode("user"),
    @NamedAttributeNode(value = "comments", subgraph = "comments-subgraph"),
  },
  subgraphs = {
    @NamedSubgraph(
      name = "comments-subgraph",
      attributeNodes = {
        @NamedAttributeNode("user")
      }
    )
  }
)
@Entity
public class Post {

    @OneToMany(mappedBy = "post")
    private List<Comment> comments = new ArrayList<>();
    //...
}

The definition of the @NamedSubgraph annotation is similar to that of @NamedEntityGraph and allows us to specify attributes of the related association. Doing so, we can construct a complete graph.

In the example above, with the defined ‘post-entity-graph-with-comment-users’ graph, we can load the Post, the related User, the Comments, and the Users related to the Comments.

Finally, note that we can alternatively add the definition of the Entity Graph using the orm.xml deployment descriptor:

<entity-mappings>
  <entity class="com.baeldung.jpa.entitygraph.Post" name="Post">
    ...
    <named-entity-graph name="post-entity-graph">
            <named-attribute-node name="comments" />
    </named-entity-graph>
  </entity>
  ...
</entity-mappings>

5.2. Defining an Entity Graph with the JPA API

We can also define the Entity Graph through the EntityManager API by calling the createEntityGraph() method:

EntityGraph<Post> entityGraph = entityManager.createEntityGraph(Post.class);

To specify the attributes of the root entity, we use the addAttributeNodes() method:

entityGraph.addAttributeNodes("subject");
entityGraph.addAttributeNodes("user");

Similarly, to include attributes from the related entity, we use addSubgraph() to construct an embedded Entity Graph, and then we call addAttributeNodes() as we did above:

entityGraph.addSubgraph("comments")
  .addAttributeNodes("user");

Now that we have seen how to create the Entity Graph, we’ll explore how to use it in the next section.

6. Using The Entity Graph

6.1. Types of Entity Graphs

JPA defines two properties, or hints, from which the persistence provider can choose in order to load or fetch the Entity Graph at runtime:

  • javax.persistence.fetchgraph – Only the specified attributes are retrieved from the database. As we are using Hibernate in this tutorial, we can note that in contrast to the JPA specs, attributes statically configured as EAGER are also loaded.
  • javax.persistence.loadgraph – In addition to the specified attributes, attributes statically configured as EAGER are also retrieved.

In either case, the primary key and the version if any are always loaded.

6.2. Loading an Entity Graph

We can retrieve the Entity Graph using various ways.

Let’s start by using the EntityManager.find() method. As we’ve already shown, the default mode is based on the static meta-strategies FetchType.EAGER and FetchType.LAZY.

So let’s invoke the find() method and inspect the log:

Post post = entityManager.find(Post.class, 1L);

Here is the log provided by Hibernate implementation:

select
    post0_.id as id1_1_0_,
    post0_.subject as subject2_1_0_,
    post0_.user_id as user_id3_1_0_ 
from
    Post post0_ 
where
    post0_.id=?

As we can see from the log, the User and Comment entities are not loaded.

We can override this default behavior by invoking the overloaded find() method which accepts hints as a Map. We can then provide the graph type which we want to load:

EntityGraph entityGraph = entityManager.getEntityGraph("post-entity-graph");
Map<String, Object> properties = new HashMap<>();
properties.put("javax.persistence.fetchgraph", entityGraph);
Post post = entityManager.find(Post.class, id, properties);

If we look again in the log, we can see that these entities are now loaded and only in one select query:

select
    post0_.id as id1_1_0_,
    post0_.subject as subject2_1_0_,
    post0_.user_id as user_id3_1_0_,
    comments1_.post_id as post_id3_0_1_,
    comments1_.id as id1_0_1_,
    comments1_.id as id1_0_2_,
    comments1_.post_id as post_id3_0_2_,
    comments1_.reply as reply2_0_2_,
    comments1_.user_id as user_id4_0_2_,
    user2_.id as id1_2_3_,
    user2_.email as email2_2_3_,
    user2_.name as name3_2_3_ 
from
    Post post0_ 
left outer join
    Comment comments1_ 
        on post0_.id=comments1_.post_id 
left outer join
    User user2_ 
        on post0_.user_id=user2_.id 
where
    post0_.id=?

Let’s see how we can achieve the same thing using JPQL:

EntityGraph entityGraph = entityManager.getEntityGraph("post-entity-graph-with-comment-users");
Post post = entityManager.createQuery("select p from Post p where p.id = :id", Post.class)
  .setParameter("id", id)
  .setHint("javax.persistence.fetchgraph", entityGraph)
  .getSingleResult();

And finally, let’s have a look at a Criteria API example:

EntityGraph entityGraph = entityManager.getEntityGraph("post-entity-graph-with-comment-users");
CriteriaBuilder criteriaBuilder = entityManager.getCriteriaBuilder();
CriteriaQuery<Post> criteriaQuery = criteriaBuilder.createQuery(Post.class);
Root<Post> root = criteriaQuery.from(Post.class);
criteriaQuery.where(criteriaBuilder.equal(root.<Long>get("id"), id));
TypedQuery<Post> typedQuery = entityManager.createQuery(criteriaQuery);
typedQuery.setHint("javax.persistence.loadgraph", entityGraph);
Post post = typedQuery.getSingleResult();

In each of these, the graph type is given as a hint. While in the first example we used the Map, in the latter two examples we’ve used the setHint() method.

7. Conclusion

In this article, we’ve explored using the JPA Entity Graph to dynamically fetch an Entity and its associations.

The decision is made at runtime, where we choose whether or not to load the related associations.

Performance is obviously a key factor to take into account when designing JPA entities. The JPA documentation recommends using the FetchType.LAZY strategy whenever possible, and the Entity Graph when we need to load an association.

As usual, all the code is available over on GitHub.

Formatting with printf() in Java


1. Introduction

In this tutorial, we’ll demonstrate different examples of formatting with the printf() method.

The method is part of the java.io.PrintStream class and provides String formatting similar to the printf() function in C.

2. Syntax

We can use one of the following PrintStream methods to format the output:

System.out.printf(format, arguments);
System.out.printf(locale, format, arguments);

We specify the formatting rules using the format parameter. Rules start with the ‘%’ character.

Let’s look at a quick example before we dive into the details of the various formatting rules:

System.out.printf("Hello %s!%n", "World");

This produces the following output:

Hello World!

As shown above, the format string contains plain text and two formatting rules. The first rule is used to format the string argument. The second rule adds a newline character to the end of the string.

2.1. Format Rules

Let’s have a look at the format string more closely. It consists of literals and format specifiers. Format specifiers include flags, width, precision, and conversion characters, in the following sequence:

%[flags][width][.precision]conversion-character

Specifiers in the brackets are optional.

Internally, printf() uses the java.util.Formatter class to parse the format string and generate the output. Additional format string options can be found in the Formatter Javadoc.

2.2. Conversion Characters

The conversion-character is required and determines how the argument is formatted. Conversion characters are only valid for certain data types. Some common ones are:

  • s – formats strings
  • d – formats decimal integers
  • f – formats floating-point numbers
  • t – formats date/time values

We’ll explore these and a few others later in the article.

2.3. Optional Modifiers

The [flags] define standard ways to modify the output and are most common for formatting integers and floating point numbers.

The [width] specifies the field width for outputting the argument. It represents the minimum number of characters written to the output.

The [.precision] specifies the number of digits of precision when outputting floating-point values. Additionally, we can use it to define the length of a substring to extract from a String.
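As a quick preview of how these modifiers combine, here’s a small sketch; we pass Locale.US explicitly so the decimal separator is always a dot, regardless of the default locale:

```java
import java.util.Locale;

public class PrintfModifiers {
    public static void main(String[] args) {
        // width 8, precision 3: rounded to 3 decimals, right-justified in 8 chars
        System.out.printf(Locale.US, "'%8.3f'%n", 3.14159);  // '   3.142'
        // the '-' flag left-justifies within the same width
        System.out.printf(Locale.US, "'%-8.3f'%n", 3.14159); // '3.142   '
        // the '0' flag pads with zeroes instead of spaces
        System.out.printf(Locale.US, "'%08.3f'%n", 3.14159); // '0003.142'
    }
}
```

We’ll revisit each of these modifiers in more detail in the sections below.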

3. Line Separator

To break the string into separate lines, we have a %n specifier:

System.out.printf("baeldung%nline%nterminator");

The code snippet above will produce the following output:

baeldung
line
terminator

For the %n specifier, printf() automatically inserts the host system’s native line separator.
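We can verify this with a short sketch: %n expands to System.lineSeparator(), while a literal \n is always a line feed:

```java
public class SeparatorDemo {
    public static void main(String[] args) {
        // %n produces the platform-specific separator ("\r\n" on Windows, "\n" elsewhere)
        System.out.println(String.format("%n").equals(System.lineSeparator())); // true
    }
}
```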

4. Boolean Formatting

To format boolean values, we use the %b format. It works the following way: If the input value is true, the output is true. Otherwise, the output is false.

So, if we do:

System.out.printf("%b%n", null);
System.out.printf("%B%n", false);
System.out.printf("%B%n", 5.3);
System.out.printf("%b%n", "random text");

Then we’ll see:

false
FALSE
TRUE
true

Notice that we can use %B for uppercase formatting.

5. String Formatting

To format a simple string, we’ll use the %s combination. Additionally, we can make the string uppercase:

printf("'%s' %n", "baeldung");
printf("'%S' %n", "baeldung");

And the output is:

'baeldung' 
'BAELDUNG'

Also, to specify a minimum length, we can specify a width:

printf("'%15s' %n", "baeldung");

Which gives us:

'       baeldung'

If we need to left-justify our string, then we can use the ‘-‘ flag:

printf("'%-10s' %n", "baeldung");

And the output is:

'baeldung  '

Even more, we can limit the number of characters in our output by specifying a precision:

System.out.printf("%2.2s", "Hi there!");

In the %x.ys syntax, the first number, ‘x’, is the minimum field width, and ‘y’ is the maximum number of characters to print.

For our example here, the output is Hi.
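A quick sketch combining width and precision for strings (class name is illustrative only):

```java
public class PrecisionDemo {
    public static void main(String[] args) {
        // precision alone truncates the string to at most five characters
        System.out.println(String.format("'%.5s'", "Hi there!"));  // 'Hi th'
        // combined with a width of 8, the truncated value is right-justified
        System.out.println(String.format("'%8.5s'", "Hi there!")); // '   Hi th'
    }
}
```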

6. Char Formatting

The result of %c is a Unicode character:

System.out.printf("%c%n", 's');
System.out.printf("%C%n", 's');

The capital letter C will uppercase the result:

s
S

But, if we give it an invalid argument, then Formatter will throw IllegalFormatConversionException.

7. Number Formatting

7.1. Integer Formatting

The printf() method accepts all the integer types available in the language (byte, short, int, long, and BigInteger) if we use %d:

System.out.printf("simple integer: %d%n", 10000L);

With the help of the ‘d’ character, we’ll have:

simple integer: 10000

In case we need to format our number with the thousands separator, we can use the ‘,’ flag. And we can also format our results for different locales:

System.out.printf(Locale.US, "%,d %n", 10000);
System.out.printf(Locale.ITALY, "%,d %n", 10000);

As we see, the formatting in the US is different than in Italy:

10,000 
10.000

7.2. Float and Double Formatting

To format a float number, we’ll need the ‘f’ format:

System.out.printf("%f%n", 5.1473);

Which will output:

5.147300

Of course, the first thing that comes to mind is to control the precision:

System.out.printf("'%5.2f'%n", 5.1473);

Here we define the width of our number as 5, and the length of the decimal part is 2:

' 5.15'

Here, one space of padding is added before the number to honor the predefined width of 5.

To have our output in scientific notation, we just use the ‘e’ conversion character:

System.out.printf("'%5.2e'%n", 5.1473);

And the result is the following:

'5.15e+00'

8. Date and Time Formatting

For date and time formatting, the conversion string is a sequence of two characters: the ‘t’ or ‘T’ character and the conversion suffix. Let’s explore the most common time and date formatting suffix characters with the examples.

Definitely, for more advanced formatting we can use DateTimeFormatter which has been available since Java 8.

8.1. Time Formatting

First, let’s see the list of some useful suffix characters for Time Formatting:

  • ‘H’, ‘M’, ‘S’ – extract the hours, minutes, and seconds from the input Date
  • ‘L’, ‘N’ – represent the time in milliseconds and nanoseconds, respectively
  • ‘p’ – adds am/pm formatting
  • ‘z’ – prints out the timezone offset

Now, let’s say we wanted to print out the time part of a Date:

Date date = new Date();
System.out.printf("%tT%n", date);

The code above, with the ‘%tT’ combination, produces the following output:

13:51:15

In case we need more detailed formatting, we can call for different time segments:

System.out.printf("hours %tH: minutes %tM: seconds %tS%n", date, date, date);

Having used ‘H’, ‘M’, and ‘S’ we get:

hours 13: minutes 51: seconds 15

However, listing the date multiple times is a pain. Alternatively, to get rid of the repeated arguments, we can use the index reference of our input parameter, which is 1$ in our case:

System.out.printf("%1$tH:%1$tM:%1$tS %1$tp %1$tL %1$tN %1$tz %n", date);

Here, the output shows the current time, the am/pm marker, the time in milliseconds and nanoseconds, and the timezone offset:

13:51:15 pm 061 061000000 +0400

8.2. Date Formatting

Like time formatting, we have special formatting characters for date formatting:

  • ‘A’ – prints out the full day of the week
  • ‘d’ – formats a two-digit day of the month
  • ‘B’ – is for the full month name
  • ‘m’ – formats a two-digit month
  • ‘Y’ – outputs a year in four digits
  • ‘y’ – outputs the last two digits of the year

So, if we wanted to show the day of the week, followed by the month:

System.out.printf("%1$tA, %1$tB %1$tY %n", date);

Then, using ‘A’, ‘B’, and ‘Y’, we’d get:

Thursday, November 2018

To have our results all in numeric format, we can replace the ‘A’, ‘B’, ‘Y’ letters with ‘d’, ‘m’, ‘y’:

System.out.printf("%1$td.%1$tm.%1$ty %n", date);

Which will result in:

22.11.18

9. Summary

In this article, we discussed how to use the PrintStream#printf method to format output. We looked at the different format patterns used to control the output for common data types.

Finally, as always, the code used during the discussion can be found over on GitHub.

Spring Boot With SQLite


1. Overview

In this quick tutorial, we’ll go through steps to use an SQLite database in a JPA-enabled Spring Boot application.

Spring Boot supports a few well-known in-memory databases out of the box, but SQLite requires a bit more from us.

Let’s have a look at what it takes.

2. Project Setup

For our illustration, we’ll start with a Spring Data Rest app we’ve used in past tutorials.

In the pom, we need to add the sqlite-jdbc dependency:

<dependency>
    <groupId>org.xerial</groupId>
    <artifactId>sqlite-jdbc</artifactId>
    <version>3.25.2</version>
</dependency>

This dependency gives us what we need to use JDBC to communicate with SQLite. But, if we are going to use an ORM, it’s not enough.

3. SQLite Dialect

See, Hibernate doesn’t ship with a Dialect for SQLite. We need to create one ourselves.

3.1. Extending Dialect

Our first step is to extend org.hibernate.dialect.Dialect class to register the data types provided by SQLite:

public class SQLiteDialect extends Dialect {

    public SQLiteDialect() {
        registerColumnType(Types.BIT, "integer");
        registerColumnType(Types.TINYINT, "tinyint");
        registerColumnType(Types.SMALLINT, "smallint");
        registerColumnType(Types.INTEGER, "integer");
        // other data types
    }
}

There are several, so definitely check out the sample code for the rest.

Next, we’ll need to override some default Dialect behaviors.

3.2. Identity Column Support

For example, we need to tell Hibernate how SQLite handles @Id columns, which we can do with a custom IdentityColumnSupport implementation:

public class SQLiteIdentityColumnSupport extends IdentityColumnSupportImpl {

    @Override
    public boolean supportsIdentityColumns() {
        return true;
    }

    @Override
    public String getIdentitySelectString(String table, String column, int type) 
      throws MappingException {
        return "select last_insert_rowid()";
    }

    @Override
    public String getIdentityColumnString(int type) throws MappingException {
        return "integer";
    }
}

To keep things simple here, let’s limit the identity column type to integer only. And to get the next available identity value, we specify SQLite’s last_insert_rowid() function as the mechanism.

Then, we simply override the corresponding method in our growing SQLiteDialect class:

@Override
public IdentityColumnSupport getIdentityColumnSupport() {
    return new SQLiteIdentityColumnSupport();
}

3.3. Disable Constraints Handling

Also, SQLite doesn’t support database constraints, so we’ll need to disable constraint handling by again overriding the appropriate methods for both primary and foreign keys:

@Override
public boolean hasAlterTable() {
    return false;
}

@Override
public boolean dropConstraints() {
    return false;
}

@Override
public String getDropForeignKeyString() {
    return "";
}

@Override
public String getAddForeignKeyConstraintString(String cn, 
  String[] fk, String t, String[] pk, boolean rpk) {
    return "";
}

@Override
public String getAddPrimaryKeyConstraintString(String constraintName) {
    return "";
}

And, in just a moment, we’ll be able to reference this new dialect in our Spring Boot configuration.

4. DataSource Configuration

Since Spring Boot doesn’t provide configuration support for SQLite out of the box, we also need to expose our own DataSource bean:

@Autowired Environment env;

@Bean
public DataSource dataSource() {
    final DriverManagerDataSource dataSource = new DriverManagerDataSource();
    dataSource.setDriverClassName(env.getProperty("driverClassName"));
    dataSource.setUrl(env.getProperty("url"));
    dataSource.setUsername(env.getProperty("username"));
    dataSource.setPassword(env.getProperty("password"));
    return dataSource;
}

And finally, we’ll configure the following properties in our persistence.properties file:

driverClassName=org.sqlite.JDBC
url=jdbc:sqlite:memory:myDb?cache=shared
username=sa
password=sa
hibernate.dialect=com.baeldung.dialect.SQLiteDialect
hibernate.hbm2ddl.auto=create-drop
hibernate.show_sql=true

Note that we need to keep the cache as shared in order to keep the database updates visible across multiple database connections.

So, with the above configurations, the app will start and will launch an in-memory database called myDb, which the remaining Spring Data Rest configuration can take up.

5. Conclusion

In this article, we took a sample Spring Data Rest application and pointed it at an SQLite database. However, to do so, we had to create a custom Hibernate dialect.

Make sure to check out the application over on GitHub. Just run with mvn -Dspring.profiles.active=sqlite spring-boot:run and browse to http://localhost:8080.

Immutable Map Implementations in Java


1. Overview

It is sometimes preferable to disallow modifications to a java.util.Map, such as when sharing read-only data across threads. For this purpose, we can use either an Unmodifiable Map or an Immutable Map.

In this quick tutorial, we’ll see the difference between them. Then, we’ll present various ways in which we can create an Immutable Map.

2. Unmodifiable vs Immutable

An Unmodifiable Map is just a wrapper over a modifiable map and it doesn’t allow modifications to it directly:

Map<String, String> mutableMap = new HashMap<>();
mutableMap.put("USA", "North America");

Map<String, String> unmodifiableMap = Collections.unmodifiableMap(mutableMap);
assertThrows(UnsupportedOperationException.class,
  () -> unmodifiableMap.put("Canada", "North America"));

But the underlying mutable map can still be changed and the modifications are reflected in the Unmodifiable map as well:

mutableMap.remove("USA");
assertFalse(unmodifiableMap.containsKey("USA"));
		
mutableMap.put("Mexico", "North America");
assertTrue(unmodifiableMap.containsKey("Mexico"));

An Immutable Map, on the other hand, contains its own private data and doesn’t allow modifications to it. Therefore, the data cannot change in any way once an instance of the Immutable Map is created.
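As an aside, assuming Java 10 or later is available, the JDK itself can create such an Immutable Map with Map.copyOf(), which takes a snapshot of the source map’s data; a minimal sketch:

```java
import java.util.HashMap;
import java.util.Map;

public class JdkImmutableMapDemo {
    public static void main(String[] args) {
        Map<String, String> mutable = new HashMap<>();
        mutable.put("USA", "North America");

        // Map.copyOf (Java 10+) copies the entries into its own private storage
        Map<String, String> immutable = Map.copyOf(mutable);

        mutable.remove("USA"); // does not affect the immutable copy
        System.out.println(immutable.containsKey("USA")); // true

        try {
            immutable.put("Canada", "North America");
        } catch (UnsupportedOperationException e) {
            System.out.println("modification rejected");
        }
    }
}
```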

3. Guava’s Immutable Map

Guava provides an immutable version of java.util.Map with ImmutableMap. It throws an UnsupportedOperationException whenever we try to modify it.

Since it contains its own private data, this data won’t change when the original map is changed.

We’ll now discuss various ways of creating instances of the ImmutableMap.

3.1. Using the copyOf() Method

First, let’s use the ImmutableMap.copyOf() method, which returns a copy of all the entries of the original map:

ImmutableMap<String, String> immutableMap = ImmutableMap.copyOf(mutableMap);
assertTrue(immutableMap.containsKey("USA"));

It cannot be modified directly or indirectly:

assertThrows(UnsupportedOperationException.class,
  () -> immutableMap.put("Canada", "North America"));
		
mutableMap.remove("USA");
assertTrue(immutableMap.containsKey("USA"));
		
mutableMap.put("Mexico", "North America");
assertFalse(immutableMap.containsKey("Mexico"));

3.2. Using the builder() Method

We can also use the ImmutableMap.builder() method to create a copy of all the entries of the original map.

Moreover, we can use this method to add additional entries that are not present in the original map:

ImmutableMap<String, String> immutableMap = ImmutableMap.<String, String>builder()
  .putAll(mutableMap)
  .put("Costa Rica", "North America")
  .build();
assertTrue(immutableMap.containsKey("USA"));
assertTrue(immutableMap.containsKey("Costa Rica"));

The same as in the previous example, we cannot modify it directly or indirectly:

assertThrows(UnsupportedOperationException.class,
  () -> immutableMap.put("Canada", "North America"));
		
mutableMap.remove("USA");
assertTrue(immutableMap.containsKey("USA"));
		
mutableMap.put("Mexico", "North America");
assertFalse(immutableMap.containsKey("Mexico"));

3.3. Using the of() Method

Finally, we can use the ImmutableMap.of() method to create an immutable map from entries provided on the fly. It supports at most five key/value pairs:

ImmutableMap<String, String> immutableMap
  = ImmutableMap.of("USA", "North America", "Costa Rica", "North America");
assertTrue(immutableMap.containsKey("USA"));
assertTrue(immutableMap.containsKey("Costa Rica"));

We cannot modify it either:

assertThrows(UnsupportedOperationException.class,
  () -> immutableMap.put("Canada", "North America"));

4. Conclusion

In this quick article, we discussed the differences between an Unmodifiable Map and an Immutable Map.

We also had a look at different ways of creating Guava’s ImmutableMap.

And, as always, the complete code examples are available over on GitHub.


Java Compound Operators


1. Overview

In this tutorial, we’ll have a look at Java compound operators, their types and how Java evaluates them.

We’ll also explain how implicit casting works.

2. Compound Assignment Operators

An assignment operator is a binary operator that assigns the result of the right-hand side to the variable on the left-hand side. The simplest is the “=” assignment operator:

int x = 5;

This statement declares a new variable x, assigns x the value of 5 and returns 5.

Compound Assignment Operators are a shorter way to apply an arithmetic or bitwise operation and to assign the value of the operation to the variable on the left-hand side.

For example, the following two multiplication statements are equivalent, meaning a and b will have the same value:

int a = 3, b = 3, c = -2;
a = a * c; // Simple assignment operator
b *= c; // Compound assignment operator

It’s important to note that the variable on the left-hand of a compound assignment operator must be already declared. In other words, compound operators can’t be used to declare a new variable.

Like the “=” assignment operator, compound operators return the assigned result of the expression:

long x = 1;
long y = (x+=2);

Both x and y will hold the value 3.

The assignment (x+=2) does two things: first, it adds 2 to the value of the variable x, which becomes 3; second, it returns the value of the assignment, which is also 3.

3. Types of Compound Assignment Operators

Java supports 11 compound assignment operators. We can group these into arithmetic and bitwise operators.

Let’s go through the arithmetic operators and the operations they perform:

  • Incrementation: +=
  • Decrementation: -=
  • Multiplication: *=
  • Division: /=
  • Modulus: %=

Then, we also have the bitwise operators:

  • AND, binary: &=
  • Exclusive OR, binary: ^=
  • Inclusive OR, binary: |=
  • Left Shift, binary: <<=
  • Right Shift, binary: >>=
  • Shift right zero fill: >>>=

Let’s have a look at a few examples of these operations:

// Simple assignment
int x = 5; // x is 5

// Incrementation
x += 5; // x is 10

// Multiplication
x *= 2; // x is 20

// Modulus
x %= 3; // x is 2

// Binary AND
x &= 4; // x is 0

// Binary inclusive OR
x |= 8; // x is 8

As we can see here, the syntax to use these operators is consistent.
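The shift variants follow the same pattern; a quick sketch showing the difference between the arithmetic and logical right shifts:

```java
public class ShiftDemo {
    public static void main(String[] args) {
        int x = -8;
        x <<= 1;  // left shift: x is -16
        x >>= 2;  // arithmetic right shift preserves the sign: x is -4
        System.out.println(x); // -4
        x >>>= 1; // logical right shift fills with zeros: x is 2147483646
        System.out.println(x); // 2147483646
    }
}
```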

4. Evaluation of Compound Assignment Operations

There are two ways Java evaluates the compound operations.

First, when the left-hand operand is not an array, then Java will, in order:

  1. Verify the operand is a declared variable
  2. Save the value of the left-hand operand
  3. Evaluate the right-hand operand
  4. Perform the binary operation as indicated by the compound operator
  5. Convert the result of the binary operation to the type of the left-hand variable (implicit casting)
  6. Assign the converted result to the left-hand variable

Next, when the left-hand operand is an array, the steps to follow are a bit different:

  1. Verify the array expression on the left-hand side and throw a NullPointerException or ArrayIndexOutOfBoundsException if it’s incorrect
  2. Save the array element in the index
  3. Evaluate the right-hand operand
  4. Check if the array component selected is a primitive type or reference type and then continue with the same steps as the first list, as if the left-hand operand is a variable.

If any step of the evaluation fails, Java doesn’t continue to perform the following steps.

Let’s give some examples related to the evaluation of these operations to an array element:

int[] numbers = null;

// Trying Incrementation
numbers[2] += 5;

As we’d expect, this will throw a NullPointerException.

However, if we assign an initial value to the array:

int[] numbers = {0, 1};

// Trying Incrementation
numbers[2] += 5;

We would get rid of the NullPointerException, but we’d still get an ArrayIndexOutOfBoundsException, as the index used is not correct.

If we fix that, the operation will be completed successfully:

int[] numbers = {0, 1};

// Incrementation
numbers[1] += 5; // numbers[1] is now 6

Finally, numbers[1] will hold the value 6 at the end of the assignment.

5. Implicit Casting

One of the reasons compound operators are useful is that they not only provide a shorter way to write operations, but they also implicitly cast the result.

Formally, a compound assignment expression of the form:

E1 op= E2

is equivalent to:

E1 = (T)(E1 op E2)

where T is the type of E1.

Let’s consider the following example:

long number = 10;
int i = 2;
i = i * number; // Does not compile

Let’s review why the last line won’t compile.

Java automatically promotes smaller data types to larger ones when they appear together in an operation, but it reports an error when we try to assign a larger type to a smaller one without a cast.

So, first, i is promoted to long, and the multiplication produces a long result. That long result cannot be assigned to i, which is an int, and this causes a compilation error.

This could be fixed with an explicit cast:

i = (int) (i * number);

Java compound assignment operators are perfect in this case because they do an implicit casting:

i *= number;

This statement works just fine, casting the multiplication result to int and assigning the value to the left-hand side variable, i.
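The same implicit cast shows up with smaller integer types, for example byte; a minimal sketch:

```java
public class ByteCastDemo {
    public static void main(String[] args) {
        byte b = 10;
        // b = b + 5; // does not compile: b + 5 is promoted to int
        b += 5;       // compiles: the int result is implicitly cast back to byte
        System.out.println(b); // 15
    }
}
```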

6. Conclusion

In this article, we looked at compound operators in Java, giving some examples and different types of them. We explained how Java evaluates these operations.

Finally, we also reviewed implicit casting, one of the reasons these shorthand operators are useful.

As always, all of the code snippets mentioned in this article can be found in our GitHub repository.

Java Weekly, Issue 258


Here we go…

1. Spring and Java

>> The best way to initialize LAZY entity and collection proxies with JPA and Hibernate [vladmihalcea.com]

A good write-up on when and how to use lazy initialization with Hibernate’s second-level cache.

>> Take a sneak peek at the future of Hibernate Search with 6.0.0.Alpha1! [in.relation.to]

A quick look at what’s coming in Hibernate Search 6.0, including a new Search DSL and ORM integration. And a word of caution: this is truly an alpha release and not ready for production.

>> Spring bean thread safety guide [dolszewski.com]

A nice review of the ins and outs of Spring bean scopes with respect to thread safety. Good stuff.

>> Dynamic Flow Control during Backpressure with RxJava and RSockets [medium.com]

A deep dive into how to handle backpressure with RSockets.

>> Beware the Attach API [blog.frankel.ch]

And a reminder that with great power comes great risk. Use with caution, or learn how to disable it altogether.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

>> Test-driven infrastructure development with Ansible & Molecule [blog.codecentric.de]

A solid piece introducing two tools for integrating TDD with Infrastructure as Code.

>> Three years as a Hibernate Developer Advocate [vladmihalcea.com]

And a report summarizing recent accomplishments for the Hibernate project and its community.

Also worth reading:

3. Comics

And my favorite Dilberts of the week:

>> Wally’s Doctor Note [dilbert.com]

>> Exclamation Mark [dilbert.com]

>> Everyone Is Their Own Boss [dilbert.com]

4. Pick of the Week

>> You probably won’t make it to the top [m.signalvnoise.com]

Find the Longest Substring without Repeating Characters


1. Overview

In this tutorial, we’ll compare ways to find the longest substring of unique letters using Java. For example, the longest substring of unique letters in “CODINGISAWESOME” is “NGISAWE”.

2. Brute Force Approach

Let’s start with a naive approach. To begin with, we can examine each substring whether it contains unique characters:

String getUniqueCharacterSubstringBruteForce(String input) {
    String output = "";
    for (int start = 0; start < input.length(); start++) {
        Set<Character> visited = new HashSet<>();
        int end = start;
        for (; end < input.length(); end++) {
            char currChar = input.charAt(end);
            if (visited.contains(currChar)) {
                break;
            } else {
                visited.add(currChar);
            }
        }
        if (output.length() < end - start) {
            output = input.substring(start, end);
        }
    }
    return output;
}

Since there are n*(n+1)/2 possible substrings, the time complexity of this approach is O(n^2).

3. Optimized Approach

Now, let’s take a look at an optimized approach. We start traversing the string from left to right and keep track of:

  1. the current substring with non-repeating characters with the help of a start and end index
  2. the longest non-repeating substring output
  3. a lookup table of already visited characters

String getUniqueCharacterSubstring(String input) {
    Map<Character, Integer> visited = new HashMap<>();
    String output = "";
    for (int start = 0, end = 0; end < input.length(); end++) {
        char currChar = input.charAt(end);
        if (visited.containsKey(currChar)) {
            start = Math.max(visited.get(currChar)+1, start);
        }
        if (output.length() < end - start + 1) {
            output = input.substring(start, end + 1);
        }
        visited.put(currChar, end);
    }
    return output;
}

For every new character, we look for it in the already visited characters. If the character has already been visited and is part of the current substring with non-repeating characters, we update the start index. Otherwise, we’ll continue traversing the string.

Since we are traversing the string only once, the time complexity will be linear, or O(n).

This approach is also known as the sliding window pattern.

4. Testing

Finally, let’s thoroughly test our implementation to make sure it works:

@Test
void givenString_whenGetUniqueCharacterSubstringCalled_thenResultFoundAsExpected() {
    assertEquals("", getUniqueCharacterSubstring(""));
    assertEquals("A", getUniqueCharacterSubstring("A"));
    assertEquals("ABCDEF", getUniqueCharacterSubstring("AABCDEF"));
    assertEquals("ABCDEF", getUniqueCharacterSubstring("ABCDEFF"));
    assertEquals("NGISAWE", getUniqueCharacterSubstring("CODINGISAWESOME"));
    assertEquals("be coding", getUniqueCharacterSubstring("always be coding"));
}

Here, we test boundary conditions as well as the more typical use cases.

5. Conclusion

In this tutorial, we have learned how to use the sliding window technique to find the longest substring with non-repeating characters. As always, the source code is available on GitHub.

Inline Functions in Kotlin


1. Overview

In Kotlin, functions are first-class citizens, so we can pass functions around or return them just like other normal types. However, the representation of these functions at runtime sometimes may cause a few limitations or performance complications.

In this tutorial, we’ll first enumerate two seemingly unrelated issues about lambdas and generics. Then, after introducing inline functions, we’ll see how they can address both of those concerns. So, let’s get started!

2. Trouble in Paradise

2.1. The Overhead of Lambdas in Kotlin

One of the perks of functions being first-class citizens in Kotlin is that we can pass a piece of behavior to other functions. Passing functions as lambdas lets us express our intentions in a more concise and elegant way, but that’s only one part of the story.

To explore the dark side of lambdas, let’s reinvent the wheel by declaring an extension function to filter collections:

fun <T> Collection<T>.filter(predicate: (T) -> Boolean): Collection<T> = // Omitted

Now, let’s see how the function above compiles into Java. Focus on the predicate function that is being passed as a parameter:

public static final <T> Collection<T> filter(Collection<T>, kotlin.jvm.functions.Function1<T, Boolean>);

Notice how the predicate is handled by using the Function1 interface?

Now, if we call this in Kotlin:

sampleCollection.filter { it == 1 }

Something similar to the following will be produced to wrap the lambda code:

filter(sampleCollection, new Function1<Integer, Boolean>() {
  @Override
  public Boolean invoke(Integer param) {
    return param == 1;
  }
});

Every time we declare a higher-order function, at least one instance of those special Function* types will be created.

Why does Kotlin do this instead of, say, using invokedynamic like how Java 8 does with lambdas? Simply put, Kotlin goes for Java 6 compatibility, and invokedynamic isn’t available until Java 7.

But this is not the end of it. As we might guess, just creating an instance of a type isn’t enough.

In order to actually perform the operation encapsulated in a Kotlin lambda, the higher-order function – filter in this case – will need to call the special method named invoke on the new instance. The result is more overhead due to the extra call.

So, just to recap, when we’re passing a lambda to a function, the following happens under the hood:

  1. At least one instance of a special type is created and stored in the heap
  2. An extra method call will always happen

One more instance allocation and one more virtual method call doesn’t seem that bad, right?

2.2. Closures

As we saw earlier, when we pass a lambda to a function, an instance of a function type will be created, similar to anonymous inner classes in Java.

Just like with the latter, a lambda expression can access its closure, that is, variables declared in the outer scope. When a lambda captures a variable from its closure, Kotlin stores the variable along with the capturing lambda code.

The extra memory allocations get even worse when a lambda captures a variable: The JVM creates a function type instance on every invocation. For non-capturing lambdas, there will be only one instance, a singleton, of those function types.

How are we so sure about this? Let’s reinvent another wheel by declaring a function to apply a function on each collection element:

fun <T> Collection<T>.each(block: (T) -> Unit) {
    for (e in this) block(e)
}

As silly as it may sound, here we’re gonna multiply each collection element by a random number:

fun main() {
    val numbers = listOf(1, 2, 3, 4, 5)
    val random = random()

    numbers.each { println(random * it) } // capturing the random variable
}

And if we take a peek inside the bytecode using javap:

>> javap -c MainKt
public final class MainKt {
  public static final void main();
    Code:
      // Omitted
      51: new           #29                 // class MainKt$main$1
      54: dup
      55: fload_1
      56: invokespecial #33                 // Method MainKt$main$1."<init>":(F)V
      59: checkcast     #35                 // class kotlin/jvm/functions/Function1
      62: invokestatic  #41                 // Method CollectionsKt.each:(Ljava/util/Collection;Lkotlin/jvm/functions/Function1;)V
      65: return

Then we can spot at index 51 that the JVM creates a new instance of the MainKt$main$1 inner class for each invocation. Also, index 56 shows how Kotlin captures the random variable. This means that each captured variable is passed as a constructor argument, generating a memory overhead.

2.3. Type Erasure

When it comes to generics on the JVM, it’s never been a paradise, to begin with! Anyway, Kotlin erases the generic type information at runtime. That is, an instance of a generic class doesn’t preserve its type parameters at runtime.

For example, when declaring a few collections like List<Int> or List<String>, all we have at runtime are just raw Lists. This seems unrelated to the previous issues, as promised, but we’ll see how inline functions are the common solution for both problems.

3. Inline Functions

3.1. Removing the Overhead of Lambdas

When using lambdas, the extra memory allocations and extra virtual method call introduce some runtime overhead. So, if we were executing the same code directly, instead of using lambdas, our implementation would be more efficient.

Do we have to choose between abstraction and efficiency?

As it turns out, with inline functions in Kotlin we can have both! We can write our nice and elegant lambdas, and the compiler generates the inlined and direct code for us. All we have to do is put an inline on it:

inline fun <T> Collection<T>.each(block: (T) -> Unit) {
    for (e in this) block(e)
}

When using inline functions, the compiler inlines the function body. That is, it substitutes the body directly into the places where the function gets called. By default, the compiler inlines the code for both the function itself and the lambdas passed to it.

For example, the compiler translates:

val numbers = listOf(1, 2, 3, 4, 5)
numbers.each { println(it) }

To something like:

val numbers = listOf(1, 2, 3, 4, 5)
for (number in numbers)
    println(number)

When using inline functions, there is no extra object allocation and no extra virtual method calls.

However, we should not overuse inline functions, especially for long functions, since inlining may cause the generated code to grow quite a bit.

3.2. No Inline

By default, all lambdas passed to an inline function are inlined, too. However, we can mark some of the lambdas with the noinline keyword to exclude them from inlining:

inline fun foo(inlined: () -> Unit, noinline notInlined: () -> Unit) { ... }

3.3. Inline Reification

As we saw earlier, Kotlin erases the generic type information at runtime, but for inline functions, we can avoid this limitation. That is, the compiler can reify generic type information for inline functions.

All we have to do is to mark the type parameter with the reified keyword:

inline fun <reified T> Any.isA(): Boolean = this is T

Without inline and reified, the isA function wouldn’t compile, as we thoroughly explain in our Kotlin Generics article.
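To see reification in action, here's a quick usage sketch (we repeat the isA definition so the snippet stands alone):

```kotlin
inline fun <reified T> Any.isA(): Boolean = this is T

fun main() {
    // With reified T, the is-check works against the actual type argument
    println("baeldung".isA<String>())       // prints "true"
    println("baeldung".isA<CharSequence>()) // prints "true", supertypes match too
    println(42.isA<String>())               // prints "false"
}
```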

4. Limitations

Generally, we can inline functions with lambda parameters only if the lambda is either called directly or passed to another inline function. Otherwise, the compiler prevents inlining with an error.

For example, let’s take a look at the replace function in Kotlin standard library:

inline fun CharSequence.replace(regex: Regex, noinline transform: (MatchResult) -> CharSequence): String =
    regex.replace(this, transform) // passing to a normal function

The snippet above passes the lambda, transform, to a normal function, replace, hence the noinline.

5. Conclusion

In this article, we dove into issues with lambda performance and type erasure in Kotlin. Then, after introducing inline functions, we saw how these can address both issues.

However, we should try not to overuse this type of function, especially when the function body is large, as the generated bytecode size may grow and we may also lose a few JVM optimizations along the way.

As usual, the sample code is available on our GitHub project, so make sure to check it out!

One-to-One Relationship in JPA


1. Introduction

In this tutorial, we’ll have a look at different ways of creating one-to-one mappings in JPA.

We’ll need a basic understanding of the Hibernate framework, so please check out our Guide to Hibernate 5 with Spring for extra background.

2. Description

Let’s suppose we are building a User Management System and our boss asks us to store a mailing address for each user. A user will have one mailing address, and a mailing address will have only one user tied to it.

This is an example of a one-to-one relationship, in this case between user and address entities.

Let’s see how we can implement this in the subsequent sections.

3. Using a Foreign Key

3.1. Modeling with a Foreign Key

Let’s have a look at the following ER diagram which represents a foreign key based one-to-one mapping:

An ER Diagram mapping Users to Addresses via an address_id foreign key

In this example, the address_id column in users is the foreign key to address.
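As a rough sketch, the underlying schema might look like this (MySQL-flavored DDL is an assumption on our part, with column names taken from the diagram; the UNIQUE constraint is our addition to enforce one-to-one at the database level):

```sql
CREATE TABLE address (
    id BIGINT AUTO_INCREMENT PRIMARY KEY
);

CREATE TABLE users (
    id BIGINT AUTO_INCREMENT PRIMARY KEY,
    -- the foreign key column lives on the users side
    address_id BIGINT UNIQUE,
    FOREIGN KEY (address_id) REFERENCES address(id)
);
```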

3.2. Implementing with a Foreign Key in JPA

First, let’s create the User class and annotate it appropriately:

@Entity
@Table(name = "users")
public class User {
    
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @Column(name = "id")
    private Long id;
    //... 

    @OneToOne(cascade = CascadeType.ALL)
    @JoinColumn(name = "address_id", referencedColumnName = "id")
    private Address address;

    // ... getters and setters
}

Note that we place the @OneToOne annotation on the related entity field, Address.

Also, we need to place the @JoinColumn annotation to configure the name of the column in the users table that maps to the primary key in the address table. If we don’t provide a name, then Hibernate will follow some rules to select a default one.

Finally, note that we won't use the @JoinColumn annotation in the next entity. That's because we only need it on the owning side of the foreign key relationship. Simply put, whoever owns the foreign key column gets the @JoinColumn annotation.

The Address entity turns out a bit simpler:

@Entity
@Table(name = "address")
public class Address {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @Column(name = "id")
    private Long id;
    //...

    @OneToOne(mappedBy = "address")
    private User user;

    //... getters and setters
}

We need to place the @OneToOne annotation here, too, because this is a bidirectional relationship. The address side of the relationship is called the non-owning side.

4. Using a Shared Primary Key

4.1. Modeling with a Shared Primary Key

In this strategy, instead of creating a new column address_id, we’ll mark the primary key column (user_id) of the address table as the foreign key to the users table:

An ER diagram with Users Tied to Addresses where they share the same primary key values

We’ve optimized the storage space by utilizing the fact that these entities have a one-to-one relationship between them.
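A minimal DDL sketch (MySQL-flavored, an assumption on our part, with names taken from the diagram) shows how the primary key of address doubles as the foreign key to users:

```sql
CREATE TABLE users (
    id BIGINT AUTO_INCREMENT PRIMARY KEY
);

CREATE TABLE address (
    -- user_id is both the primary key and the foreign key
    user_id BIGINT PRIMARY KEY,
    FOREIGN KEY (user_id) REFERENCES users(id)
);
```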

4.2. Implementing with a Shared Primary Key in JPA

Notice that our definitions change only slightly:

@Entity
@Table(name = "users")
public class User {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @Column(name = "id")
    private Long id;

    //...

    @OneToOne(mappedBy = "user", cascade = CascadeType.ALL)
    private Address address;

    //... getters and setters
}
@Entity
@Table(name = "address")
public class Address {

    @Id
    @Column(name = "id")
    private Long id;

    //...

    @OneToOne
    @MapsId
    private User user;
   
    //... getters and setters
}

@MapsId tells Hibernate to use the id column of address as both primary key and foreign key. Also, notice that the @Id column of the Address entity no longer uses the @GeneratedValue annotation.

The mappedBy attribute is now moved to the User class since the foreign key is now present in the address table.

5. Using a Join Table

One-to-one mappings can be of two types – Optional and Mandatory. So far, we’ve seen only mandatory relationships.

Now, let's imagine that our employees get associated with a workstation. It's one-to-one, but sometimes an employee might not have a workstation and vice versa.

5.1. Modeling with a Join Table

The strategies that we have discussed so far force us to put null values in the foreign key column to handle optional relationships.

Typically, we think of many-to-many relationships when we consider a join table, but using a join table in this case can help us eliminate these null values:

An ER diagram relating Employees to Workstations via a Join Table

Now, whenever we have a relationship, we’ll make an entry in the emp_workstation table and avoid nulls altogether.
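Sketched as DDL (MySQL-flavored, an assumption on our part), the join table can even enforce the one-to-one semantics itself, since a unique constraint on each column prevents an employee or a workstation from appearing in more than one pairing:

```sql
CREATE TABLE emp_workstation (
    employee_id BIGINT NOT NULL UNIQUE,
    workstation_id BIGINT NOT NULL UNIQUE,
    FOREIGN KEY (employee_id) REFERENCES employee(id),
    FOREIGN KEY (workstation_id) REFERENCES workstation(id)
);
```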

5.2. Implementing with a Join Table in JPA

Our first example used @JoinColumn. This time, we’ll use @JoinTable:

@Entity
@Table(name = "employee")
public class Employee {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @Column(name = "id")
    private Long id;

    //...

    @OneToOne(cascade = CascadeType.ALL)
    @JoinTable(name = "emp_workstation", 
      joinColumns = { @JoinColumn(name = "employee_id", referencedColumnName = "id") },
      inverseJoinColumns = { @JoinColumn(name = "workstation_id", referencedColumnName = "id") })
    private WorkStation workStation;

    //... getters and setters
}
@Entity
@Table(name = "workstation")
public class WorkStation {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @Column(name = "id")
    private Long id;

    //...

    @OneToOne(mappedBy = "workStation")
    private Employee employee;

    //... getters and setters
}

@JoinTable instructs Hibernate to employ the join table strategy while maintaining the relationship.

Also, Employee is the owner of this relationship as we chose to use the join table annotation on it.

6. Conclusion

In this tutorial, we learned different ways of maintaining a one-to-one association in JPA and Hibernate and when to use each.

The source code of this tutorial can be found over on GitHub.
