Sequences in Kotlin

1. Overview

The Kotlin language introduces sequences as a way to work with collections. They are quite similar to Java Streams; however, they use different key concepts under the hood. In this tutorial, we'll briefly discuss what sequences are and why we need them.

2. Understanding Sequences

A sequence is represented by the Sequence<T> interface, where T is the type of its elements. The interface offers intermediate operations like map() and filter(), as well as terminal operations like count() and find().

Like Streams in Java, Sequences in Kotlin execute lazily. The difference is that when we process a collection with several chained operations, a sequence doesn't materialize an intermediate result at the end of each step. Thus, we won't introduce a new collection after each processing step.

This has tremendous potential to boost application performance when working with large collections. On the other hand, sequences add overhead when processing small collections.

3. Creating a Sequence

3.1. From Elements

To create a sequence from elements, we just use the sequenceOf() function:

val seqOfElements = sequenceOf("first", "second", "third")

3.2. From a Function

To create an infinite sequence, we can call the generateSequence() function:

val seqFromFunction = generateSequence(Instant.now()) { it.plusSeconds(1) }

3.3. From Chunks

We can also create a sequence from chunks of arbitrary length. Let's see an example using yield(), which takes a single element, and yieldAll(), which takes a collection:

val seqFromChunks = sequence {
    yield(1)
    yieldAll((2..5).toList())
}

It's worth mentioning here that all chunks produce elements one after another. In other words, if we have an infinite collection generator, we should put it at the end.

3.4. From a Collection

To create a sequence from a collection implementing the Iterable interface, we should use the asSequence() function:

val seqFromIterable = (1..10).asSequence()

4. Lazy and Eager Processing

Let's compare two implementations. The first one, without a sequence, is eager:

val withoutSequence = (1..10).filter { it % 2 == 1 }.map { it * 2 }.toList()

And the second, with a sequence, is lazy:

val withSequence = (1..10).asSequence().filter { it % 2 == 1 }.map { it * 2 }.toList()

How many intermediate collections were introduced in each case?

In the first example, each operator introduces an intermediate collection, so all ten elements pass to the map() function. In the second example, no intermediate collections are introduced, so the map() function receives only the five elements that passed the filter.

5. Conclusion

In this tutorial, we briefly discussed sequences in Kotlin. We've seen how to create a sequence in different ways. Also, we've seen the difference between processing a collection with a sequence and without one.

All code examples are available over on GitHub.


Balanced Brackets Algorithm in Java

1. Overview

Balanced Brackets, also known as Balanced Parentheses, is a common programming problem.

In this tutorial, we will validate whether the brackets in a given string are balanced or not.

2. Problem Statement

A bracket is considered to be any of the following characters: "(", ")", "[", "]", "{", "}".

A set of brackets is considered to be a matched pair if an opening bracket, "(", "[", or "{", occurs to the left of the corresponding closing bracket, ")", "]", or "}", respectively.

However, a string containing bracket pairs is not balanced if the set of brackets it encloses is not matched.

Similarly, a string containing non-bracket characters like a-z, A-Z, 0-9, or special characters like #, $, @ is also considered to be unbalanced.

For example, if the input is "{[(])}", the pair of square brackets, "[]", encloses a single unbalanced opening round bracket, "(". Similarly, the pair of round brackets, "()", encloses a single unbalanced closing square bracket, "]". Thus, the input string "{[(])}" is unbalanced.

Therefore, a string containing bracket characters is said to be balanced if:

  1. A matching opening bracket occurs to the left of each corresponding closing bracket
  2. Brackets enclosed within balanced brackets are also balanced
  3. It does not contain any non-bracket characters

There are a couple of special cases to keep in mind: null is considered to be unbalanced, while the empty string is considered to be balanced.

To further illustrate our definition of balanced brackets, let's see some examples of balanced brackets:

()
[()]
{[()]}
([{{[(())]}}])

And a few that are not balanced:

abc[](){}
{{[]()}}}}
{[(])}

Now that we have a better understanding of our problem, let's see how to solve it!

3. Solution Approaches

There are different ways to solve this problem. In this tutorial, we will look at two approaches:

  1. Using methods of the String class
  2. Using Deque implementation

4. Basic Setup and Validations

Let's first create a method that will return true if the input is balanced and false if the input is unbalanced:

public boolean isBalanced(String str)

Let's consider the basic validations for the input string:

  1. If a null input is passed, then it's not balanced.
  2. For a string to be balanced, the pairs of opening and closing brackets should match. Therefore, it would be safe to say that an input string whose length is odd will not be balanced as it will contain at least one non-matched bracket.
  3. As per the problem statement, the balanced behavior should be checked between brackets. Therefore, any input string containing non-bracket characters is an unbalanced string.

Given these rules, we can implement the validations:

if (null == str || ((str.length() % 2) != 0)) {
    return false;
} else {
    char[] ch = str.toCharArray();
    for (char c : ch) {
        if (!(c == '{' || c == '[' || c == '(' || c == '}' || c == ']' || c == ')')) {
            return false;
        }
    }
}

Now that the input string is validated, we can move on to solving this problem.

5. Using String.replaceAll Method

In this approach, we'll loop through the input string removing occurrences of “()”, “[]”, and “{}” from the string using String.replaceAll. We continue this process until no further occurrences are found in the input string.

Once the process is complete, if the length of our string is zero, then all matching pairs of brackets have been removed and the input string is balanced. If, however, the length is not zero, then some unmatched opening or closing brackets are still present in the string. Therefore, the input string is unbalanced.

Let's see the complete implementation:

while (str.contains("()") || str.contains("[]") || str.contains("{}")) {
    str = str.replaceAll("\\(\\)", "")
      .replaceAll("\\[\\]", "")
      .replaceAll("\\{\\}", "");
}
return (str.length() == 0);
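
To see why this works, it helps to trace the reduction for a balanced input such as "{[()]}":

// "{[()]}" -> "{[]}"  after removing "()"
// "{[]}"   -> "{}"    after removing "[]"
// "{}"     -> ""      after removing "{}"
// the final length is 0, so the input is balanced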

6. Using Deque

A Deque is a form of queue that supports add, retrieve, and peek operations at both ends. We will leverage the Last-In-First-Out (LIFO) behavior of this data structure to check the input string for balance.

First, let's construct our Deque:

Deque<Character> deque = new LinkedList<>();

Note that we have used a LinkedList here because it provides an implementation for the Deque interface.

Now that our deque is constructed, we will loop through each character of the input string one by one. If the character is an opening bracket, then we will add it as the first element in the Deque:

if (ch == '{' || ch == '[' || ch == '(') { 
    deque.addFirst(ch); 
}

But, if the character is a closing bracket, then we will perform some checks on the LinkedList.

First, we check whether the LinkedList is empty or not. An empty list means that the closing bracket is unmatched. Therefore, the input string is unbalanced. So we return false.

However, if the LinkedList is not empty, then we peek at its last-in character using the peekFirst method. If it can be paired with the closing bracket, then we remove this top-most character from the list using the removeFirst method and move on to the next iteration of the loop:

if (!deque.isEmpty() 
    && ((deque.peekFirst() == '{' && ch == '}') 
    || (deque.peekFirst() == '[' && ch == ']') 
    || (deque.peekFirst() == '(' && ch == ')'))) { 
    deque.removeFirst(); 
} else { 
    return false; 
}

By the end of the loop, all characters have been checked. However, unmatched opening brackets may still remain in the deque, so we return true only if the deque is empty. Below is a complete implementation of the Deque-based approach:

Deque<Character> deque = new LinkedList<>();
for (char ch: str.toCharArray()) {
    if (ch == '{' || ch == '[' || ch == '(') {
        deque.addFirst(ch);
    } else {
        if (!deque.isEmpty()
            && ((deque.peekFirst() == '{' && ch == '}')
            || (deque.peekFirst() == '[' && ch == ']')
            || (deque.peekFirst() == '(' && ch == ')'))) {
            deque.removeFirst();
        } else {
            return false;
        }
    }
}
return deque.isEmpty();
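
As a quick sanity check, here's a usage sketch; validator is assumed to be an instance of a class exposing the isBalanced method defined in Section 4:

assertTrue(validator.isBalanced("{[()]}"));
assertFalse(validator.isBalanced("{[(])}"));
assertFalse(validator.isBalanced("abc[](){}"));
assertTrue(validator.isBalanced(""));    // the empty string is balanced
assertFalse(validator.isBalanced(null)); // null is unbalanced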

7. Conclusion

In this tutorial, we discussed the problem statement of Balanced Brackets and solved it using two different approaches.

As always, the code is available over on GitHub.

Guide to Work Stealing in Java

1. Overview

In this tutorial, we'll look at the concept of work stealing in Java.

2. What Is Work Stealing?

Work stealing was introduced in Java with the aim of reducing contention in multi-threaded applications. This is done using the fork/join framework.

2.1. Divide and Conquer Approach

In the fork/join framework, problems or tasks are recursively broken down into sub-tasks. The sub-tasks are then solved individually, with the sub-results combined to form the result:

Result solve(Problem problem) {
    if (problem is small)
        directly solve problem
    else {
        split problem into independent parts
        fork new subtasks to solve each part
        join all subtasks
        compose result from subresults
    }
}

2.2. Worker Threads

The broken-down task is solved with the help of worker threads provided by a thread pool. Each worker thread will have sub-tasks it's responsible for. These are stored in double-ended queues (deques).

Each worker thread gets sub-tasks from its deque by continuously popping a sub-task off the top of the deque. When a worker thread's deque is empty, it means that all the sub-tasks have been popped off and completed.

At this point, the worker thread randomly selects a peer thread in the pool that it can “steal” work from. It then uses a first-in, first-out (FIFO) approach to take sub-tasks from the tail end of the victim's deque.

3. Fork/Join Framework Implementation

We can create a work-stealing thread pool using either the ForkJoinPool class or the Executors class:

ForkJoinPool commonPool = ForkJoinPool.commonPool();
ExecutorService workStealingPool = Executors.newWorkStealingPool();

The Executors class has an overloaded newWorkStealingPool method, which takes an integer argument representing the level of parallelism.

Executors.newWorkStealingPool is an abstraction of ForkJoinPool.commonPool. The only difference is that Executors.newWorkStealingPool creates a pool in asynchronous mode and ForkJoinPool.commonPool doesn't.
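
To make this relationship concrete, here's roughly how the factory method is defined in the OpenJDK sources (a sketch based on JDK 8; the last constructor argument is the asyncMode flag):

public static ExecutorService newWorkStealingPool() {
    return new ForkJoinPool(
      Runtime.getRuntime().availableProcessors(),
      ForkJoinPool.defaultForkJoinWorkerThreadFactory,
      null,   // no handler for internal worker thread errors
      true);  // asyncMode = true: FIFO order for queued tasks
}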

4. Synchronous vs Asynchronous Thread Pools

ForkJoinPool.commonPool uses a last-in, first-out (LIFO) queue configuration, whereas Executors.newWorkStealingPool uses a first-in, first-out (FIFO) one.

According to Doug Lea, the FIFO approach has these advantages over LIFO:

  • It reduces contention by having stealers operate on the opposite side of the deque as owners
  • It exploits the property of recursive divide-and-conquer algorithms of generating “large” tasks early

The second point above means that a thread that steals an older, larger task can break it down further.

As per the Java documentation, setting asyncMode to true may be suitable for use with event-style tasks that are never joined.

5. Working Example – Finding Prime Numbers

We'll use the example of finding prime numbers from a collection of numbers to show the computation time benefits of the work-stealing framework. We'll also show the differences between using synchronous and asynchronous thread pools.

5.1. The Prime Numbers Problem

Finding prime numbers from a collection of numbers can be a computationally expensive process. This is mainly due to the size of the collection of numbers.

The PrimeNumbers class helps us find prime numbers:

public class PrimeNumbers extends RecursiveAction {

    private int lowerBound;
    private int upperBound;
    private int granularity;
    static final List<Integer> GRANULARITIES
      = Arrays.asList(1, 10, 100, 1000, 10000);
    private AtomicInteger noOfPrimeNumbers;

    PrimeNumbers(int lowerBound, int upperBound, int granularity, AtomicInteger noOfPrimeNumbers) {
        this.lowerBound = lowerBound;
        this.upperBound = upperBound;
        this.granularity = granularity;
        this.noOfPrimeNumbers = noOfPrimeNumbers;
    }

    // other constructors and methods

    private List<PrimeNumbers> subTasks() {
        List<PrimeNumbers> subTasks = new ArrayList<>();

        for (int i = 1; i <= this.upperBound / granularity; i++) {
            int upper = i * granularity;
            int lower = (upper - granularity) + 1;
            subTasks.add(new PrimeNumbers(lower, upper, noOfPrimeNumbers));
        }
        return subTasks;
    }

    @Override
    protected void compute() {
        if (((upperBound + 1) - lowerBound) > granularity) {
            ForkJoinTask.invokeAll(subTasks());
        } else {
            findPrimeNumbers();
        }
    }

    void findPrimeNumbers() {
        for (int num = lowerBound; num <= upperBound; num++) {
            if (isPrime(num)) {
                noOfPrimeNumbers.getAndIncrement();
            }
        }
    }

    public int noOfPrimeNumbers() {
        return noOfPrimeNumbers.intValue();
    }
}

A few important things to note about this class:

  • It extends RecursiveAction, which allows us to implement the compute method used in computing tasks using a thread pool
  • It recursively breaks down tasks into sub-tasks based on the granularity value
  • The constructors take lower and upper bound values which control the range of numbers we want to determine prime numbers for
  • It enables us to determine prime numbers using either a work-stealing thread pool or a single thread

5.2. Solving the Problem Faster with Thread Pools

Let's determine prime numbers in a single-threaded manner and also using work-stealing thread pools.

First, let's see the single-threaded approach:

PrimeNumbers primes = new PrimeNumbers(10000);
primes.findPrimeNumbers();

And now, the ForkJoinPool.commonPool approach:

PrimeNumbers primes = new PrimeNumbers(10000);
ForkJoinPool pool = ForkJoinPool.commonPool();
pool.invoke(primes);
pool.shutdown();

Finally, we'll have a look at the Executors.newWorkStealingPool approach:

PrimeNumbers primes = new PrimeNumbers(10000);
int parallelism = ForkJoinPool.getCommonPoolParallelism();
ForkJoinPool stealer = (ForkJoinPool) Executors.newWorkStealingPool(parallelism);
stealer.invoke(primes);
stealer.shutdown();

We use the invoke method of the ForkJoinPool class to pass tasks to the thread pool. This method takes instances of sub-classes of RecursiveAction. Using the Java Microbenchmark Harness (JMH), we benchmark these different approaches against each other in terms of the average time per operation:

# Run complete. Total time: 00:04:50

Benchmark                                                      Mode  Cnt    Score   Error  Units
PrimeNumbersUnitTest.Benchmarker.commonPoolBenchmark           avgt   20  119.885 ± 9.917  ms/op
PrimeNumbersUnitTest.Benchmarker.newWorkStealingPoolBenchmark  avgt   20  119.791 ± 7.811  ms/op
PrimeNumbersUnitTest.Benchmarker.singleThread                  avgt   20  475.964 ± 7.929  ms/op

It is clear that both ForkJoinPool.commonPool and Executors.newWorkStealingPool allow us to determine prime numbers faster than with a single-threaded approach.

The fork/join pool framework lets us break down the task into sub-tasks. We broke down the collection of 10,000 integers into batches of 1-100, 101-200, 201-300 and so on. We then determined prime numbers for each batch and made the total number of prime numbers available with our noOfPrimeNumbers method.

5.3. Stealing Work to Compute

With the synchronous ForkJoinPool.commonPool, threads remain in the pool as long as a task is still in progress. As a result, the level of work stealing is not dependent on the level of task granularity.

The asynchronous Executors.newWorkStealingPool is more managed, allowing the level of work stealing to be dependent on the level of task granularity.

We get the level of work stealing using the getStealCount method of the ForkJoinPool class:

long steals = forkJoinPool.getStealCount();

Determining the work-stealing count for Executors.newWorkStealingPool and ForkJoinPool.commonPool gives us dissimilar behavior:

Executors.newWorkStealingPool ->
Granularity: [1], Steals: [6564]
Granularity: [10], Steals: [572]
Granularity: [100], Steals: [56]
Granularity: [1000], Steals: [60]
Granularity: [10000], Steals: [1]

ForkJoinPool.commonPool ->
Granularity: [1], Steals: [6923]
Granularity: [10], Steals: [7540]
Granularity: [100], Steals: [7605]
Granularity: [1000], Steals: [7681]
Granularity: [10000], Steals: [7681]

When granularity changes from fine to coarse (1 to 10,000) for Executors.newWorkStealingPool, the level of work stealing decreases. Therefore, the steal count is one when the task is not broken down (granularity of 10,000).

The ForkJoinPool.commonPool has a different behavior. The level of work stealing is always high and not influenced much by the change in task granularity.

Technically speaking, our prime numbers example is one that supports asynchronous processing of event-style tasks. This is because our implementation does not enforce the joining of results.

A case can be made that Executors.newWorkStealingPool offers the best use of resources in solving the problem.

6. Conclusion

In this article, we looked at work stealing and how to apply it using the fork/join framework. We also looked at the examples of work stealing and how it can improve processing time and use of resources.

As always, the full source code of the example is available over on GitHub.

Creating a LocalDate with Values in Java

1. Overview

Creating a date in Java was redefined with the advent of Java 8. Moreover, the new Date & Time API from the java.time package is much easier to use than the old one from the java.util package. In this tutorial, we'll see how it makes a huge difference.

The LocalDate class from the java.time package helps us achieve this. LocalDate is an immutable, thread-safe class. Moreover, a LocalDate can hold only date values and cannot have a time component.

Let's now see all the variants of creating one with values.

2. Create a Custom LocalDate with of()

Let's look at a few ways of creating a LocalDate representing January 8, 2020. We can create one by passing values to the of() factory method:

LocalDate date = LocalDate.of(2020, 1, 8);

The month can also be specified using the Month enum:

LocalDate date = LocalDate.of(2020, Month.JANUARY, 8);

We can also try to get it using the epoch day:

LocalDate date = LocalDate.ofEpochDay(18269);

And finally, let's create one with year and day-of-year values:

LocalDate date = LocalDate.ofYearDay(2020, 8);
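
All four factory calls above produce the same date, which we can confirm with JUnit's assertEquals:

LocalDate byValues = LocalDate.of(2020, 1, 8);

// each alternative represents January 8, 2020
assertEquals(byValues, LocalDate.of(2020, Month.JANUARY, 8));
assertEquals(byValues, LocalDate.ofEpochDay(18269));
assertEquals(byValues, LocalDate.ofYearDay(2020, 8));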

3. Create a LocalDate by Parsing a String

The last option is to create a date by parsing a string. We can use the parse method with just a single argument to parse a date in the ISO yyyy-MM-dd format:

LocalDate date = LocalDate.parse("2020-01-08");

We can also specify a different pattern to get one using the DateTimeFormatter class as the second parameter of the parse method:

LocalDate date = LocalDate.parse("8-Jan-2020", DateTimeFormatter.ofPattern("d-MMM-yyyy"));

4. Conclusion

In this article, we've seen all the variants of creating a LocalDate with values in Java. The Date & Time API articles can help us understand more.

The examples are available over on GitHub.

Working with Lazy Element Collections in JPA

1. Overview

The JPA specification provides two different fetch strategies: eager and lazy. While the lazy approach helps avoid unnecessarily loading data we don't need, we sometimes need to read that data after the Persistence Context has closed. Accessing lazy element collections in a closed Persistence Context is a common problem.

In this tutorial, we'll focus on how to load data from lazy element collections. We'll explore three different solutions: one involving the JPA query language, another with the use of entity graphs, and the last one with transaction propagation.

2. The Element Collection Problem

By default, JPA uses the lazy fetch strategy in associations of type @ElementCollection. Thus, any access to the collection in a closed Persistence Context will result in an exception.

To understand the problem, let's define a domain model based on the relationship between the employee and its phone list:

@Entity
public class Employee {
    @Id
    private int id;
    private String name;
    @ElementCollection
    @CollectionTable(name = "employee_phone", joinColumns = @JoinColumn(name = "employee_id"))
    private List<Phone> phones;

    // standard constructors, getters, and setters
}

@Embeddable
public class Phone {
    private String type;
    private String areaCode;
    private String number;

    // standard constructors, getters, and setters
}

Our model specifies that an employee can have many phones. The phone list is a collection of embeddable types. Let's use a Spring Repository with this model:

@Repository
public class EmployeeRepository {

    @PersistenceContext
    private EntityManager em;

    public Employee findById(int id) {
        return em.find(Employee.class, id);
    }

    // additional properties and auxiliary methods
}

Now, let's reproduce the problem with a simple JUnit test case:

public class ElementCollectionIntegrationTest {

    @Before
    public void init() {
        Employee employee = new Employee(1, "Fred");
        employee.setPhones(
          Arrays.asList(new Phone("work", "+55", "99999-9999"), new Phone("home", "+55", "98888-8888")));
        employeeRepository.save(employee);
    }

    @After
    public void clean() {
        employeeRepository.remove(1);
    }

    @Test(expected = org.hibernate.LazyInitializationException.class)
    public void whenAccessLazyCollection_thenThrowLazyInitializationException() {
        Employee employee = employeeRepository.findById(1);
 
        assertThat(employee.getPhones().size(), is(2));
    }
}

This test throws an exception when we try to access the phone list because the Persistence Context is closed.

We can solve this problem by changing the fetch strategy of the @ElementCollection to use the eager approach. However, fetching the data eagerly isn't necessarily the best solution, since the phone data will always be loaded, whether we need it or not.

3. Loading Data with JPA Query Language

The JPA query language allows us to customize the projected information. Therefore, we can define a new method in our EmployeeRepository to select the employee and its phones:

public Employee findByJPQL(int id) {
    return em.createQuery("SELECT u FROM Employee AS u JOIN FETCH u.phones WHERE u.id=:id", Employee.class)
        .setParameter("id", id).getSingleResult();
}

The above query uses an inner join operation to fetch the phone list for each employee returned.
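
With this method in place, a variant of our earlier test passes even though the Persistence Context is closed, since the phones are fetched together with the employee (a sketch mirroring the failing test above):

@Test
public void whenUseJPQL_thenFetchResult() {
    Employee employee = employeeRepository.findByJPQL(1);

    assertThat(employee.getPhones().size(), is(2));
}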

4. Loading Data with Entity Graph

Another possible solution is to use the entity graph feature from JPA. The entity graph makes it possible for us to choose which fields will be projected by JPA queries. Let's define one more method in our repository:

public Employee findByEntityGraph(int id) {
    EntityGraph<Employee> entityGraph = em.createEntityGraph(Employee.class);
    entityGraph.addAttributeNodes("name", "phones");
    Map<String, Object> properties = new HashMap<>();
    properties.put("javax.persistence.fetchgraph", entityGraph);
    return em.find(Employee.class, id, properties);
}

We can see that our entity graph includes two attributes: name and phones. So, when JPA translates this to SQL, it'll project the related columns.
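
Again, an analogous test confirms that the collection is readable after the Persistence Context has closed (a sketch):

@Test
public void whenUseEntityGraph_thenFetchResult() {
    Employee employee = employeeRepository.findByEntityGraph(1);

    assertThat(employee.getPhones().size(), is(2));
}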

5. Loading Data in a Transactional Scope

Finally, we're going to explore one last solution. So far, we've seen that the problem is related to the Persistence Context life cycle.

What happens is that our Persistence Context is transaction-scoped and will remain open until the transaction finishes. The transaction life cycle spans from the beginning to the end of the execution of the repository method.

So, let's create another test case and configure our Persistence Context to bind to a transaction started by our test method. We'll keep the Persistence Context open until the test ends:

@Test
@Transactional
public void whenUseTransaction_thenFetchResult() {
    Employee employee = employeeRepository.findById(1);
    assertThat(employee.getPhones().size(), is(2));
}

The @Transactional annotation configures a transactional proxy around the instance of the related test class. Moreover, the transaction is associated with the thread executing it. Considering the default transaction propagation setting, every Persistence Context created from this method joins to this same transaction. Consequently, the transaction persistence context is bound to the transaction scope of the test method.

6. Conclusion

In this tutorial, we evaluated three different solutions to address the problem of reading data from lazy associations in a closed Persistence Context. First, we used the JPA query language to fetch the element collection. Next, we defined an entity graph to retrieve the necessary data. Finally, we used Spring's transaction support to keep the Persistence Context open while reading the needed data.

As always, the example code for this tutorial is available over on GitHub.

Generating Barcodes and QR Codes in Java

1. Overview

Barcodes are used to convey information visually. We'll most likely provide an appropriate barcode image in a web page, email, or a printable document.

In this tutorial, we're going to look at how to generate the most common types of barcodes in Java.

First, we'll learn about the internals of several types of barcodes. Next, we'll explore the most popular Java libraries for generating barcodes. Finally, we'll see how to integrate barcodes into our application by serving them from a web service.

2. Types of Barcodes

Barcodes encode information such as product numbers, serial numbers, and batch numbers. Also, they enable parties like retailers, manufacturers, and transport providers to track assets through the entire supply chain.

We can group the many different barcode symbologies into two primary categories:

  • linear barcodes
  • 2D barcodes

2.1. UPC (Universal Product Code) Codes

UPC Codes are some of the most commonly used 1D barcodes, and we mostly find them in the United States.

The UPC-A is a numeric-only code that contains 12 digits: a manufacturer identification number (6 digits), an item number (5 digits), and a check digit. There is also a UPC-E code that has only 8 digits and is used for small packages.
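
As a quick illustration of this structure, the check digit can be derived from the first 11 digits with the standard UPC checksum rule. Here's a minimal sketch (not part of any of the libraries discussed below):

// Computes the UPC-A check digit for an 11-digit payload,
// e.g. "03600029145" yields 2, completing the code 036000291452.
static int upcACheckDigit(String first11Digits) {
    int sum = 0;
    for (int i = 0; i < 11; i++) {
        int digit = first11Digits.charAt(i) - '0';
        // digits at 0-based even indexes carry a weight of 3
        sum += (i % 2 == 0) ? digit * 3 : digit;
    }
    return (10 - (sum % 10)) % 10;
}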

2.2. EAN Codes

EAN Codes are known worldwide as both European Article Number and International Article Number. They're designed for Point-of-Sale scanning. There are also a few different variations of the EAN code, including EAN-13, EAN-8, JAN-13, and ISBN.

The EAN-13 code is the most commonly used EAN standard and is similar to the UPC code. It's made of 13 digits — a leading “0” followed by the UPC-A code.

2.3. Code 128

The Code 128 barcode is a compact, high-density linear code used in the logistics and transportation industries for ordering and distribution. It can encode all 128 characters of ASCII, and its length is variable.

2.4. PDF417

PDF417 is a stacked linear barcode consisting of multiple 1D barcodes stacked one on top of another. Hence, it can be read with a traditional linear scanner.

We might expect to find it on a variety of applications such as travel (boarding passes), identification cards, and inventory management.

PDF417 uses Reed-Solomon error correction instead of check digits. This error correction allows the symbol to endure some damage without loss of data. However, it can be large – up to four times larger than other 2D barcodes such as Data Matrix and QR codes.

2.5. QR Codes

QR Codes are becoming the most widely recognized 2D barcodes worldwide. The big benefit of the QR code is that we can store large amounts of data in a limited space.

They use four standardized encoding modes to store data efficiently:

  • numeric
  • alphanumeric
  • byte/binary
  • kanji

Moreover, they are flexible in size and are easily scanned using a smartphone. Similar to PDF417, a QR code can withstand some damage without causing loss of data.

3. Barcode Libraries

We're going to explore several libraries:

  • Barbecue
  • Barcode4j
  • ZXing
  • QRGen

Barbecue is an open-source Java library that supports an extensive set of 1D barcode formats. Also, the barcodes can be output to PNG, GIF, JPEG, and SVG.

Barcode4j is also an open-source library. In addition, it offers 2D barcode formats – like DataMatrix and PDF417 – and more output formats. The PDF417 format is available in both libraries. But, unlike Barcode4j, Barbecue considers it a linear barcode.

ZXing (“zebra crossing”) is an open-source, multi-format 1D/2D barcode image processing library implemented in Java, with ports to other languages. This is the main library that supports QR codes in Java.

The QRGen library offers a simple QR code generation API built on top of ZXing. It provides separate modules for Java and Android.

4. Generating Linear Barcodes

Let's create a barcode image generator for each library and barcode pair. We'll retrieve the image in the PNG format, but we could also use other formats like GIF or JPEG.

4.1. Using the Barbecue Library

As we'll see, Barbecue provides the simplest API for generating barcodes. We only need to provide the barcode text as minimal input, but we can optionally set a font and a resolution (dots per inch). The font is used to display the barcode text under the image.

First, we need to add the Barbecue Maven dependency:

<dependency>
    <groupId>net.sourceforge.barbecue</groupId>
    <artifactId>barbecue</artifactId>
    <version>1.5-beta1</version>
</dependency>

Let's create a generator for an EAN13 barcode:

public static BufferedImage generateEAN13BarcodeImage(String barcodeText) throws Exception {
    Barcode barcode = BarcodeFactory.createEAN13(barcodeText);
    barcode.setFont(BARCODE_TEXT_FONT);

    return BarcodeImageHandler.getImage(barcode);
}

We can generate images for the rest of the linear barcode types in a similar manner.

We should note that we do not need to provide the checksum digit for EAN/UPC barcodes, as it is automatically added by the library.

4.2. Using the Barcode4j Library

Let's start by adding the Barcode4j Maven Dependency:

<dependency>
    <groupId>net.sf.barcode4j</groupId>
    <artifactId>barcode4j</artifactId>
    <version>2.1</version>
</dependency>

Likewise, let's build a generator for an EAN13 barcode:

public static BufferedImage generateEAN13BarcodeImage(String barcodeText) {
    EAN13Bean barcodeGenerator = new EAN13Bean();
    BitmapCanvasProvider canvas = 
      new BitmapCanvasProvider(160, BufferedImage.TYPE_BYTE_BINARY, false, 0);

    barcodeGenerator.generateBarcode(canvas, barcodeText);
    return canvas.getBufferedImage();
}

The BitmapCanvasProvider constructor takes several parameters: resolution, image type, whether to enable anti-aliasing, and image orientation. Also, we don't need to set a font because the text under the image is displayed by default.

4.3. Using the ZXing Library

Here, we need to add two Maven dependencies: the core image library and the Java client:

<dependency>
    <groupId>com.google.zxing</groupId>
    <artifactId>core</artifactId>
    <version>3.3.0</version>
</dependency>
<dependency>
    <groupId>com.google.zxing</groupId>
    <artifactId>javase</artifactId>
    <version>3.3.0</version>
</dependency>

Let's create an EAN13 generator:

public static BufferedImage generateEAN13BarcodeImage(String barcodeText) throws Exception {
    EAN13Writer barcodeWriter = new EAN13Writer();
    BitMatrix bitMatrix = barcodeWriter.encode(barcodeText, BarcodeFormat.EAN_13, 300, 150);

    return MatrixToImageWriter.toBufferedImage(bitMatrix);
}

Here, we need to provide several parameters as input, such as the barcode text, the barcode format, and the barcode dimensions. Unlike with the other two libraries, we must also supply the checksum digit for EAN barcodes; for UPC-A barcodes, however, the checksum is optional.

Moreover, this library will not display barcode text under the image.

5. Generating 2D Barcodes

5.1. Using the ZXing Library

We're going to use this library to generate a QR Code. The API is similar to that of the linear barcodes:

public static BufferedImage generateQRCodeImage(String barcodeText) throws Exception {
    QRCodeWriter barcodeWriter = new QRCodeWriter();
    BitMatrix bitMatrix = 
      barcodeWriter.encode(barcodeText, BarcodeFormat.QR_CODE, 200, 200);

    return MatrixToImageWriter.toBufferedImage(bitMatrix);
}

5.2. Using the QRGen Library

The library is no longer deployed to Maven Central, but we can find it on jitpack.io.

First, we need to add the jitpack repository and the QRGen dependency to our pom.xml:

<repositories>
    <repository>
        <id>jitpack.io</id>
        <url>https://jitpack.io</url>
    </repository>
</repositories>

<dependencies>
    <dependency>
        <groupId>com.github.kenglxn.qrgen</groupId>
        <artifactId>javase</artifactId>
        <version>2.6.0</version>
    </dependency>
</dependencies>

Let's create a method that generates a QR Code:

public static BufferedImage generateQRCodeImage(String barcodeText) throws Exception {
    ByteArrayOutputStream stream = QRCode
      .from(barcodeText)
      .withSize(250, 250)
      .stream();
    ByteArrayInputStream bis = new ByteArrayInputStream(stream.toByteArray());

    return ImageIO.read(bis);
}

As we can see, the API is based on the Builder pattern and it provides two types of output: File and OutputStream. We can use the ImageIO library to convert it to a BufferedImage.

6. Building a REST Service

Let's look at how to serve barcodes from a web service. We'll start with a RestController:

@RestController
@RequestMapping("/barcodes")
public class BarcodesController {

    @GetMapping(value = "/barbecue/ean13/{barcode}", produces = MediaType.IMAGE_PNG_VALUE)
    public ResponseEntity<BufferedImage> barbecueEAN13Barcode(@PathVariable("barcode") String barcode)
    throws Exception {
        return okResponse(BarbecueBarcodeGenerator.generateEAN13BarcodeImage(barcode));
    }
    //...
}

Also, we need to manually register a message converter for BufferedImage HTTP responses because none is provided by default:

@Bean
public HttpMessageConverter<BufferedImage> createImageHttpMessageConverter() {
    return new BufferedImageHttpMessageConverter();
}

Finally, we can use Postman or a browser to view the generated barcodes.

6.1. Generating a UPC-A Barcode

Let's call the UPC-A web service using the Barbecue library:

[GET] http://localhost:8080/barcodes/barbecue/upca/12345678901

Here's the result:

6.2. Generating an EAN13 Barcode

Similarly, we're going to call the EAN13 web service:

[GET] http://localhost:8080/barcodes/barbecue/ean13/012345678901

And here's our barcode:

6.3. Generating a Code128 Barcode

In this case, we're going to use the POST method. Let's call the Code128 web service using the Barbecue library:

[POST] http://localhost:8080/barcodes/barbecue/code128

We'll provide the request body, containing the data:

Lorem ipsum dolor sit amet, consectetur adipiscing elit,
 sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.

Let's see the result:

6.4. Generating a PDF417 Barcode

Here, we're going to call the PDF417 web service, which is similar to Code128:

[POST] http://localhost:8080/barcodes/barbecue/pdf417

We'll provide the request body, containing the data:

Lorem ipsum dolor sit amet, consectetur adipiscing elit,
 sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.

And here's the resulting barcode:

6.5. Generating a QR Code Barcode

Let's call the QR Code web service using the ZXing library:

[POST] http://localhost:8080/barcodes/zxing/qrcode

We'll provide the request body, containing the data:

Lorem ipsum dolor sit amet, consectetur adipiscing elit,
 sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam,
 quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.

Here's our QR code:

Here, we can see the power of QR codes to store large amounts of data in a limited space.

7. Conclusion

In this article, we learned how to generate the most common types of barcodes in Java.

First, we studied the formats of several types of linear and 2D barcodes. Next, we explored the most popular Java libraries for generating them. Though we tried some simple examples, we can study the libraries further for more customized implementations.

Finally, we saw how to integrate the barcode generators into a REST service, and how to test them.

As always, the example code from this tutorial is available over on GitHub.

Guide to @CurrentSecurityContext in Spring Security

1. Overview

Spring Security handles receiving and parsing authentication credentials for us.

In this short tutorial, we're going to look at how to get the SecurityContext information from a request, within our handler code.

2. The @CurrentSecurityContext Annotation

We could use some boilerplate code to read the security context:

SecurityContext context = SecurityContextHolder.getContext();
Authentication authentication = context.getAuthentication();

However, there is now a @CurrentSecurityContext annotation to help us.

Furthermore, using annotations makes the code more declarative and makes the authentication object injectable. With @CurrentSecurityContext, we can also access the Principal implementation of the current user.

In the examples below, we're going to look at a couple of ways to get security context data, like the Authentication and the name of the Principal. We'll also see how to test our code.

3. Dependencies

We first need the dependency for spring-security-core:

<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-core</artifactId>
    <version>5.2.1.RELEASE</version>
</dependency>

4. Implementing with @CurrentSecurityContext

We can use SpEL (Spring Expression Language) with @CurrentSecurityContext to inject the Authentication object or the Principal. SpEL works together with type lookup. The type check is not enforced by default, but we can enable it via the errorOnInvalidType parameter of the @CurrentSecurityContext annotation.
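
For instance, to fail fast when the expression resolves to an incompatible type, we can switch that flag on (a brief sketch; the endpoint path is hypothetical):

@GetMapping("/principal-checked")
public String getCheckedPrincipal(@CurrentSecurityContext(
  expression = "authentication.principal", errorOnInvalidType = true) Principal principal) {
    return principal.getName();
}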

4.1. Obtaining the Authentication Object

Let's read the Authentication object so that we can return its details:

@GetMapping("/authentication")
public Object getAuthentication(@CurrentSecurityContext(expression = "authentication") 
  Authentication authentication) {
    return authentication.getDetails();
}

Note that the SpEL expression refers to the authentication object itself.

Let's test it:

@Test
public void givenOAuth2Context_whenAccessingAuthentication_ThenRespondTokenDetails() {
    ClientCredentialsResourceDetails resourceDetails = 
      getClientCredentialsResourceDetails("baeldung", singletonList("read"));
    OAuth2RestTemplate restTemplate = getOAuth2RestTemplate(resourceDetails);

    String authentication = executeGetRequest(restTemplate, "/authentication");

    Pattern pattern = Pattern.compile("\\{\"remoteAddress\":\".*"
      + "\",\"sessionId\":null,\"tokenValue\":\".*"
      + "\",\"tokenType\":\"Bearer\",\"decodedDetails\":null}");
    assertTrue("authentication", pattern.matcher(authentication).matches());
}

We should note that, in this example, we're getting all the details of our connection. As our test code cannot predict the remoteAddress or tokenValue, we're using a regular expression to check the resulting JSON.

4.2. Obtaining the Principal

If we only want the Principal from our authentication data, we can change the SpEL expression and the injected object:

@GetMapping("/principal")
public String getPrincipal(@CurrentSecurityContext(expression = "authentication.principal") 
  Principal principal) { 
    return principal.getName(); 
}

In this case, we're returning only the Principal name using the getName method.

Let's test it:

@Test
public void givenOAuth2Context_whenAccessingPrincipal_ThenRespondBaeldung() {
    ClientCredentialsResourceDetails resourceDetails = 
       getClientCredentialsResourceDetails("baeldung", singletonList("read"));
    OAuth2RestTemplate restTemplate = getOAuth2RestTemplate(resourceDetails);

    String principal = executeGetRequest(restTemplate, "/principal");

    assertEquals("baeldung", principal);
}

Here we see the name baeldung, which was added to the client credentials, being found and returned from inside the Principal object injected into the handler.

5. Conclusion

In this article, we've seen how to access properties within the current security context and inject them into parameters in our handler methods.

We've done this by taking advantage of SpEL and the @CurrentSecurityContext annotation.

As always, the full source code for the examples is available over on GitHub.

Parsing Command-Line Parameters with Airline

1. Introduction

In this tutorial, we'll introduce Airline — an annotation-driven Java library for building Command-Line Interfaces (CLIs).

2. Scenario

When building a command-line application, it's natural to create a simple interface to allow the user to mold the output as needed. Almost everyone has played with Git CLI and can relate to how powerful, yet simple, it is. Alas, few tools come in handy when building such an interface.

Airline aims to reduce the boilerplate code typically associated with CLIs in Java, as most common behaviors can be achieved with annotations and zero user code.

We're going to implement a small Java program that exploits Airline's functionality to mimic a common CLI. It'll expose user commands for setting up our program configuration, like defining the database URL, credentials, and logger verbosity. We'll also dig beneath the library's surface, going beyond the basics to see whether it can handle some complexity.

3. Setup

To get started, let's add the Airline dependency to our pom.xml:

<dependency>
    <groupId>com.github.rvesse</groupId>
    <artifactId>airline</artifactId>
    <version>2.7.2</version>
</dependency>

4. A Simple CLI

Let's create our entry point for the application — the CommandLine class:

@Cli(name = "baeldung-cli",
  description = "Baeldung Airline Tutorial",
  defaultCommand = Help.class)
public class CommandLine {
    public static void main(String[] args) {
        Cli<Runnable> cli = new Cli<>(CommandLine.class);
        Runnable cmd = cli.parse(args);
        cmd.run();
    }
}

Through a simple @Cli annotation, we have defined the default command that will run on our application – the Help command.

The Help class comes as part of the Airline library and exposes a default help command via the -h or --help options.

Just like that, the basic setup is done.

5. Our First Command

Let's implement our first command, a simple LoggingCommand class that will control the verbosity of our logs. We'll annotate the class with @Command to ensure that the correct command is applied when the user calls setup-log:

@Command(name = "setup-log", description = "Setup our log")
public class LoggingCommand implements Runnable {

    @Inject
    private HelpOption<LoggingCommand> help;
	
    @Option(name = { "-v", "--verbose" }, 
      description = "Set log verbosity on/off")
    private boolean verbose = false;

    @Override
    public void run() {
        if (!help.showHelpIfRequested()) {
            System.out.println("Verbosity: " + verbose);
        }
    }
}

Let's take a closer look at our example command.

First, we've set a description so that our helper, thanks to the injection, will display our command options when requested.

Then we declared a boolean variable, verbose, and annotated it with @Option to give it a name, a description, and the aliases -v/--verbose to represent our command-line option controlling verbosity.

Finally, inside the run method, we instructed our command to show help instead of running its logic whenever the user asks for it.

So far, so good. Now, we need to add our new command to the main interface by modifying the @Cli annotation:

@Cli(name = "baeldung-cli",
description = "Baeldung Airline Tutorial",
defaultCommand = Help.class,
commands = { LoggingCommand.class, Help.class })
public class CommandLine {
    public static void main(String[] args) {
        Cli<Runnable> cli = new Cli<>(CommandLine.class);
        Runnable cmd = cli.parse(args);
        cmd.run();
    }
}

Now, if we pass setup-log -v to our program, it will run our logic.

6. Constraints and More

We have seen how Airline generates a CLI flawlessly, but… there's more!

We can specify constraints (or restrictions) for our parameters to handle allowed values, requirements or dependencies, and more.

We're going to create a DatabaseSetupCommand class, which will respond to the setup-db command; same as we did earlier, but we'll add some spice.

First, we'll request the type of database, accepting only 3 valid values through @AllowedRawValues:

@AllowedRawValues(allowedValues = { "mysql", "postgresql", "mongodb" })
@Option(type = OptionType.COMMAND,
  name = {"-d", "--database"},
  description = "Type of RDBMS.",
  title = "RDBMS type: mysql|postgresql|mongodb")
protected String rdbmsMode;

When using a database connection, without any doubt, users should supply an endpoint and some credentials to access it. We'll let CLI handle this through one (URL mode) or more parameters (host mode). For this, we'll use the @MutuallyExclusiveWith annotation, marking each parameter with the same tag:

@Option(type = OptionType.COMMAND,
  name = {"--rdbms:url", "--url"},
  description = "URL to use for connection to RDBMS.",
  title = "RDBMS URL")
@MutuallyExclusiveWith(tag="mode")
@Pattern(pattern="^(http://.*):(\\d*)(.*)u=(.*)&p=(.*)")
protected String rdbmsUrl = "";
	
@Option(type = OptionType.COMMAND,
  name = {"--rdbms:host", "--host"},
  description = "Host to use for connection to RDBMS.",
  title = "RDBMS host")
@MutuallyExclusiveWith(tag="mode")
protected String rdbmsHost = "";

Note that we used the @Pattern annotation, which helps us define the format of the URL string.

If we look at the project documentation, we'll find other valuable tools for handling requirements, occurrences, allowed values, specific cases, and more, enabling us to define our custom rules.

Finally, if the user selected the host mode, we should ask them to provide their credentials. In this way, one option is dependent on another. We can achieve this behavior with the @RequiredOnlyIf annotation:

@RequiredOnlyIf(names={"--rdbms:host", "--host"})
@Option(type = OptionType.COMMAND,
  name = {"--rdbms:user", "-u", "--user"},
  description = "User for login to RDBMS.",
  title = "RDBMS user")
protected String rdbmsUser;

@RequiredOnlyIf(names={"--rdbms:host", "--host"})
@Option(type = OptionType.COMMAND,
  name = {"--rdbms:password", "--password"},
  description = "Password for login to RDBMS.",
  title = "RDBMS password")
protected String rdbmsPassword;

What if we need to use some drivers to handle the DB connection? Or suppose we need to receive more than one value in a single parameter? We can simply change the option type to OptionType.ARGUMENTS or – even better – accept a list of values:

@Option(type = OptionType.COMMAND,
  name = {"--driver", "--jars"},
  description = "List of drivers",
  title = "--driver <PATH_TO_YOUR_JAR> --driver <PATH_TO_YOUR_JAR>")
protected List<String> jars = new ArrayList<>();

Now, let's not forget to add the database setup command to our main class. Otherwise, it won't be available on CLI.
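
For completeness, the updated annotation would look like this (a sketch using the DatabaseSetupCommand class defined above):

@Cli(name = "baeldung-cli",
  description = "Baeldung Airline Tutorial",
  defaultCommand = Help.class,
  commands = { LoggingCommand.class, DatabaseSetupCommand.class, Help.class })
public class CommandLine {
    // main method unchanged
}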

7. Run

We did it! We finished our project, and now we can run it.

As expected, without passing any parameters, Help is invoked:

$ baeldung-cli

usage: baeldung-cli <command> [ <args> ]

Commands are:
    help        Display help information
    setup-db    Setup our database
    setup-log   Setup our log

See 'baeldung-cli help <command>' for more information on a specific command.

If we instead execute setup-log --help, we get:

$ baeldung-cli setup-log --help

NAME
        baeldung-cli setup-log - Setup our log

SYNOPSIS
        baeldung-cli setup-log [ {-h | --help} ] [ {-v | --verbose} ]

OPTIONS
        -h, --help
            Display help information

        -v, --verbose
            Set log verbosity on/off

Finally, supplying parameters to these commands will run the correct business logic.

8. Conclusion

In this article, we have built a simple yet powerful command-line interface with very little coding.

The Airline library, with its powerful functionalities, simplifies building a CLI, providing us with a general, clean, and reusable infrastructure. It allows us developers to concentrate on our business logic rather than spending time designing what should be trivial.

As always, the code can be found over on GitHub.


Guide to the Cactoos Library

1. Introduction

Cactoos is a library of object-oriented Java primitive types.

In this tutorial, we'll take a look at some of the classes available as a part of this library.

2. Cactoos

The Cactoos library's repertoire is pretty rich, ranging from string manipulation to data structures. The primitive types and their corresponding methods offered by this library are similar to the ones provided by other libraries like Guava and Apache Commons but are more focused on object-oriented design principles.

2.1. Comparison With Apache Commons

The Cactoos library is equipped with classes that provide the same functionality as the static methods of the Apache Commons library.

Let's take a look at some of these static methods that are part of the StringUtils package and their equivalent classes in Cactoos:

Static method of StringUtils    Equivalent Cactoos class
isBlank()                       IsBlank
lowerCase()                     Lowered
upperCase()                     Upper
rotate()                        Rotated
swapCase()                      SwappedCase
stripStart()                    TrimmedLeft
stripEnd()                      TrimmedRight

More information on this can be found in the official documentation. We'll take a look at the implementation of some of these in the subsequent section.

3. The Maven Dependency

Let's start by adding the required Maven dependency. The latest version of this library can be found on Maven Central:

<dependency>
    <groupId>org.cactoos</groupId>
    <artifactId>cactoos</artifactId>
    <version>0.43</version>
</dependency>

4. Strings

Cactoos has a wide range of classes for manipulating the String object.

4.1. String Object Creation

Let's look at how a String object can be created using the TextOf class:

String testString = new TextOf("Test String").asString();

4.2. Formatted String

In case a formatted String needs to be created, we can use the FormattedText class:

String formattedString = new FormattedText("Hello %s", stringToFormat).asString();

Let's verify that this method, in fact, returns the formatted String:

StringMethods obj = new StringMethods();

String formattedString = obj.createdFormattedString("John");
assertEquals("Hello John", formattedString);

4.3. Lower/Upper Case Strings

The Lowered class converts a String to its lower case using its TextOf object:

String lowerCaseString = new Lowered(new TextOf(testString)).asString();

Similarly, a given String can be converted to its upper case using the Upper class:

String upperCaseString = new Upper(new TextOf(testString)).asString();

Let's verify the outputs of these methods using a test string:

StringMethods obj = new StringMethods();

String lowerCaseString = obj.toLowerCase("TeSt StrIng");
String upperCaseString = obj.toUpperCase("TeSt StrIng"); 

assertEquals("test string", lowerCaseString);
assertEquals("TEST STRING", upperCaseString);

4.4. Check for an Empty String

As discussed earlier, the Cactoos library provides an IsBlank class to check for a null or empty String:

boolean blank = new IsBlank(new TextOf(testString)).value();
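
The other equivalents from the comparison table work in the same object-oriented style. For instance, here's a sketch with SwappedCase, assuming it follows the same decorator pattern:

String swapped = new SwappedCase(new TextOf("Hello World")).asString(); // "hELLO wORLD"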

5. Collections

This library also provides several classes for working on Collections. Let's take a look at a few of these.

5.1. Iterating a Collection

We can iterate a list of strings, using the utility class And:

new And((String input) -> LOGGER.info(new FormattedText("%s\n", input).asString()), strings).value();

The above code is a functional way of iterating over the list of Strings, writing each element to the logger.

5.2. Filtering a Collection

The Filtered class can be used to filter a collection based on specific criteria:

Collection<String> filteredStrings = new ListOf<>(new Filtered<>(string -> string.length() == 5, new IterableOf<>(strings)));

Let's test this method by passing in a few arguments, only 3 of which satisfy the criteria:

CollectionUtils obj = new CollectionUtils(); 

List<String> strings = new ArrayList<String>() {{
    add("Hello");
    add("John");
    add("Smith");
    add("Eric");
    add("Dizzy");
}};

int size = obj.getFilteredList(strings).size(); 

assertEquals(3, size);

Some other classes for Collections provided by this library can be found in the official documentation.

6. Conclusion

In this tutorial, we have looked at Cactoos library and some of the classes it provides for string and data structure manipulation.

In addition to these, the library also provides other utility classes for IO operations and also Date and Time.

As usual, the code samples used in this tutorial are available over on GitHub.

Merge Cells in Excel Using Apache POI

1. Overview

In this tutorial, we'll show how to merge cells in Excel with Apache POI.

2. Apache POI

To begin with, we first need to add the poi dependency to our project pom.xml file:

<dependency>
    <groupId>org.apache.poi</groupId>
    <artifactId>poi</artifactId>
    <version>4.1.1</version>
</dependency>

Apache POI uses the Workbook interface to represent an Excel file. It also uses the Sheet, Row, and Cell interfaces to model different levels of elements in an Excel file.

3. Merge Cells

In Excel, we sometimes want to display a string across two or more cells. For example, we can merge several cells horizontally to create a table title that spans several columns:

To achieve this, we can use addMergedRegion to merge several cells defined by a CellRangeAddress. There are two ways to set the cell range. First, we can use four zero-based indexes to define the top-left cell location and the bottom-right cell location:

sheet = // existing Sheet setup
int firstRow = 0;
int lastRow = 0;
int firstCol = 0;
int lastCol = 2;
sheet.addMergedRegion(new CellRangeAddress(firstRow, lastRow, firstCol, lastCol));

We can also use a cell range reference string to provide the merged region:

sheet = // existing Sheet setup
sheet.addMergedRegion(CellRangeAddress.valueOf("A1:C1"));

If cells have data before we merge them, Excel will use the top-left cell value as the merged region value. For the other cells, Excel will discard their data.

When we add multiple merged regions on an Excel file, we should not create any overlaps. Otherwise, Apache POI will throw an exception at runtime.
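
Putting it all together, here's a minimal sketch that writes a title into the top-left cell and then merges the first three columns of the first row (the workbook type and names are arbitrary):

Workbook workbook = new HSSFWorkbook();
Sheet sheet = workbook.createSheet("report");

Cell title = sheet.createRow(0).createCell(0);
title.setCellValue("Quarterly Report"); // the top-left value survives the merge

sheet.addMergedRegion(CellRangeAddress.valueOf("A1:C1"));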

4. Summary

In this quick article, we showed how to merge several cells with Apache POI. We also discussed two ways to define the merged region. As always, the source code for the article is available over on GitHub.

java.net.UnknownHostException: Invalid Hostname for Server

1. Introduction

In this tutorial, we'll learn the cause of UnknownHostException with an example. We'll also discuss possible ways of preventing and handling the exception.

2. When is the Exception Thrown?

UnknownHostException indicates that the IP address of a hostname could not be determined. It can happen because of a typo in the hostname:

String hostname = "http://locaihost";
URL url = new URL(hostname);
HttpURLConnection con = (HttpURLConnection) url.openConnection();
con.getResponseCode();

The above code throws an UnknownHostException since the misspelled locaihost doesn't point to any IP addresses.

Another possible reason for UnknownHostException is DNS propagation delay or DNS misconfiguration.

It might take up to 48 hours for a new DNS entry to be propagated all around the Internet.

3. How to Prevent It?

Preventing the exception from occurring in the first place is better than handling it afterward. A few tips to prevent the exception are:

  1. Double-check the hostname: Make sure there is no typo, and trim all whitespaces.
  2. Check the system's DNS settings: Make sure the DNS server is up and reachable, and if the hostname is new, wait for the DNS server to catch up.

4. How to Handle It?

UnknownHostException extends IOException, which is a checked exception. Like any other checked exception, we must either declare it in our method's throws clause or handle it with a try-catch block.

Let's handle the exception in our example:

try {
    con.getResponseCode();
} catch (UnknownHostException e) {
    con.disconnect();
}

It's a good practice to close the connection when UnknownHostException occurs. A lot of wasteful open connections can cause the application to run out of memory.
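
If the failure might be transient — for example, due to DNS propagation delay — a bounded retry is a pragmatic option. Here's a minimal sketch that reuses the url from our earlier example; the attempt count and delay are illustrative, and the enclosing method is assumed to declare IOException and InterruptedException:

int maxAttempts = 3;
for (int attempt = 1; attempt <= maxAttempts; attempt++) {
    HttpURLConnection con = (HttpURLConnection) url.openConnection();
    try {
        con.getResponseCode();
        break; // success - stop retrying
    } catch (UnknownHostException e) {
        con.disconnect(); // free the connection before retrying
        if (attempt == maxAttempts) {
            throw e;
        }
        Thread.sleep(1000L * attempt); // simple linear backoff
    }
}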

5. Conclusion

In this article, we learned what causes UnknownHostException, how to prevent it, and how to handle it.

As always, the code is available over on GitHub.

Introduction to Dropwizard


1. Overview

Dropwizard is an open-source Java framework used for the fast development of high-performance RESTful web services. It bundles several popular libraries into a lightweight package. The main libraries that it uses are Jetty, Jersey, Jackson, JUnit, and Guava. Furthermore, it uses its own library called Metrics.

In this tutorial, we'll learn how to configure and run a simple Dropwizard application. When we're done, our application will expose a RESTful API that allows us to obtain a list of stored brands.

2. Maven Dependencies

Firstly, the dropwizard-core dependency is all we need in order to create our service. Let's add it to our pom.xml:

<dependency>
    <groupId>io.dropwizard</groupId>
    <artifactId>dropwizard-core</artifactId>
    <version>2.0.0</version>
</dependency>

3. Configuration

Now, we'll create the necessary classes that are needed for every Dropwizard application to run.

Dropwizard applications store properties in YML files. Therefore, we'll create the introduction-config.yml file in the resource directory:

defaultSize: 5

We can access values in that file by creating a class that extends io.dropwizard.Configuration:

public class BasicConfiguration extends Configuration {
    @NotNull private final int defaultSize;

    @JsonCreator
    public BasicConfiguration(@JsonProperty("defaultSize") int defaultSize) {
        this.defaultSize = defaultSize;
    }

    public int getDefaultSize() {
        return defaultSize;
    }
}

Dropwizard uses Jackson to deserialize the configuration file into our class. Hence, we've used Jackson annotations.

Next, let's create the main application class, which is responsible for preparing our service for usage:

public class IntroductionApplication extends Application<BasicConfiguration> {

    public static void main(String[] args) throws Exception {
        new IntroductionApplication().run("server", "introduction-config.yml");
    }

    @Override
    public void run(BasicConfiguration basicConfiguration, Environment environment) {
        //register classes
    }

    @Override
    public void initialize(Bootstrap<BasicConfiguration> bootstrap) {
        bootstrap.setConfigurationSourceProvider(new ResourceConfigurationSourceProvider());
        super.initialize(bootstrap);
    }
}

Firstly, the main method is responsible for running the application. We could either pass the command-line args through to the run method or, as we've done here, supply the arguments ourselves.

The first argument can be either server or check. The check option validates the configuration, while the server option runs the application. The second argument is the location of the configuration file. 

Furthermore, the initialize method sets the configuration provider to the ResourceConfigurationSourceProvider, which allows the application to find a given configuration file in the resource directory. It isn't obligatory to override this method.

Lastly, the run method allows us to access both the Environment and the BasicConfiguration, which we'll use later in this article.

4. Resource

Firstly, let's create a domain class for our brand:

public class Brand {
    private final Long id;
    private final String name;

    // all args constructor and getters
}

Secondly, let's create a BrandRepository class that'll be responsible for returning brands:

public class BrandRepository {
    private final List<Brand> brands;

    public BrandRepository(List<Brand> brands) {
        this.brands = ImmutableList.copyOf(brands);
    }

    public List<Brand> findAll(int size) {
        return brands.stream()
          .limit(size)
          .collect(Collectors.toList());
    }

    public Optional<Brand> findById(Long id) {
        return brands.stream()
          .filter(brand -> brand.getId().equals(id))
          .findFirst();
    }
}

Additionally, we were able to use the ImmutableList from Guava because Guava is part of Dropwizard itself.

Thirdly, we'll create a BrandResource class. Dropwizard uses JAX-RS by default, with Jersey as the implementation. Therefore, we'll make use of annotations from this specification to expose our REST API endpoints:

@Path("/brands")
@Produces(MediaType.APPLICATION_JSON)
public class BrandResource {
    private final int defaultSize;
    private final BrandRepository brandRepository;

    public BrandResource(int defaultSize, BrandRepository brandRepository) {
        this.defaultSize = defaultSize;
        this.brandRepository = brandRepository;
    }

    @GET
    public List<Brand> getBrands(@QueryParam("size") Optional<Integer> size) {
        return brandRepository.findAll(size.orElse(defaultSize));
    }

    @GET
    @Path("/{id}")
    public Brand getById(@PathParam("id") Long id) {
        return brandRepository
          .findById(id)
          .orElseThrow(RuntimeException::new);
    }
}

Additionally, we've defined size as Optional in order to use defaultSize from our configuration if the argument is not provided.

Lastly, we'll register BrandResource in the IntroductionApplication class. In order to do that, let's implement the run method:

@Override
public void run(BasicConfiguration basicConfiguration, Environment environment) {
    int defaultSize = basicConfiguration.getDefaultSize();
    BrandRepository brandRepository = new BrandRepository(initBrands());
    BrandResource brandResource = new BrandResource(defaultSize, brandRepository);

    environment
      .jersey()
      .register(brandResource);
}

All created resources should be registered in this method.
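
Note that run references an initBrands() helper that we haven't shown. A minimal sketch, assuming a couple of hardcoded sample brands, could be:

private List<Brand> initBrands() {
    List<Brand> brands = new ArrayList<>();
    brands.add(new Brand(1L, "Brand1"));
    brands.add(new Brand(2L, "Brand2"));
    return brands;
}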

5. Running Application

In this section, we'll learn how to run the application from the command line.

First, we'll configure our project to build a JAR file using the maven-shade-plugin:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <configuration>
        <createDependencyReducedPom>true</createDependencyReducedPom>
        <filters>
            <filter>
                <artifact>*:*</artifact>
                <excludes>
                    <exclude>META-INF/*.SF</exclude>
                    <exclude>META-INF/*.DSA</exclude>
                    <exclude>META-INF/*.RSA</exclude>
                </excludes>
            </filter>
        </filters>
    </configuration>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <transformers>
                    <transformer
                      implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
                    <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                        <mainClass>com.baeldung.dropwizard.introduction.IntroductionApplication</mainClass>
                    </transformer>
                </transformers>
            </configuration>
        </execution>
    </executions>
</plugin>

This is the suggested configuration of the plugin. Additionally, we've included the path to our main class in the <mainClass> element.

Finally, we'll build the application with Maven. Once we have our JAR file, we can run the application:

java -jar target/dropwizard-0.0.1-SNAPSHOT.jar

There's no need to pass the parameters because we've already included them in the IntroductionApplication class.

After that, the console log should end with:

INFO  [2020-01-08 18:55:06,527] org.eclipse.jetty.server.Server: Started @1672ms

Now, the application is listening on port 8080, and we can access our brand endpoint at http://localhost:8080/brands.
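
We can quickly verify the endpoint with curl. The exact payload depends on the data returned by initBrands(), so the output below is illustrative:

$ curl "http://localhost:8080/brands?size=1"
[{"id":1,"name":"Brand1"}]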

6. Health Check

When starting the application, we were informed that the application doesn't have any health checks. Fortunately, Dropwizard provides an easy solution to add health checks to our application.

Let's start by adding a simple class that extends com.codahale.metrics.health.HealthCheck:

public class ApplicationHealthCheck extends HealthCheck {
    @Override
    protected Result check() throws Exception {
        return Result.healthy();
    }
}

This simple method will return information about the healthiness of our component. We could create multiple health checks, and some of them might fail in certain situations. For instance, we would return Result.unhealthy() if the connection to the database failed.
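
For illustration, a database-oriented check might look like the following sketch, where Database is a hypothetical dependency exposing an isConnected() method:

public class DatabaseHealthCheck extends HealthCheck {
    private final Database database; // hypothetical connection holder

    public DatabaseHealthCheck(Database database) {
        this.database = database;
    }

    @Override
    protected Result check() {
        return database.isConnected()
          ? Result.healthy()
          : Result.unhealthy("Cannot connect to the database");
    }
}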

Lastly, we need to register our health check in the run method of our IntroductionApplication class:

environment
  .healthChecks()
  .register("application", new ApplicationHealthCheck());

After running the application, we can check the health check response under http://localhost:8081/healthcheck:

{
  "application": {
    "healthy": true,
    "duration": 0
  },
  "deadlocks": {
    "healthy": true,
    "duration": 0
  }
}

As we can see, our health check has been registered under the application tag.

7. Conclusion

In this article, we've learned how to set up the Dropwizard application with Maven.

We've discovered that the base setup of the application is really easy and fast. Additionally, Dropwizard includes every library that we need to run the high-performance RESTful web service.

As always, the code for these examples is available over on GitHub.

Looking for a Java Developer with Spring Security Experience (Remote) (Part Time)


Who?

I'm looking for a Java developer with extensive Spring and Spring Security experience.

Some experience with OAuth is a strong plus.

On the non-technical side – a good level of command over the English language is also important.

The Work

You're going to be working with me and the team on developing projects for teaching purposes – naturally with a strong focus on security.

The Admin Details

Type of Engagement: Fully Remote

Time: 10 Hours / Week

Systems we use: JIRA, Slack, GitHub, Email

Budget: $20 – $23 / hour

Apply

You can apply with a quick message (and a link to your LinkedIn profile), here.

Best of luck,

Eugen.

JPA Entity Lifecycle Events


1. Introduction

When working with JPA, there are several events that we can be notified of during an entity's lifecycle. In this tutorial, we'll discuss the JPA entity lifecycle events and how we can use annotations to handle the callbacks and execute code when these events occur.

We'll start by annotating methods on the entity itself and then move on to using an entity listener.

2. JPA Entity Lifecycle Events

JPA specifies seven optional lifecycle events that are called:

  • before persist is called for a new entity – @PrePersist
  • after persist is called for a new entity – @PostPersist
  • before an entity is removed – @PreRemove
  • after an entity has been deleted – @PostRemove
  • before the update operation – @PreUpdate
  • after an entity is updated – @PostUpdate
  • after an entity has been loaded – @PostLoad

There are two approaches for using the lifecycle event annotations: annotating methods in the entity and creating an EntityListener with annotated callback methods. We can also use both at the same time. Regardless of where they are, callback methods are required to have a void return type.

So, if we create a new entity and call the save method of our repository, our method annotated with @PrePersist is called, then the record is inserted into the database, and finally, our @PostPersist method is called. If we're using @GeneratedValue to automatically generate our primary keys, we can expect that key to be available in the @PostPersist method.

For the @PostPersist, @PostRemove and @PostUpdate operations, the documentation mentions that these events can happen right after the operation occurs, after a flush, or at the end of a transaction.

We should note that the @PreUpdate callback is only called if the data is actually changed — that is, if there's an actual SQL update statement to run. The @PostUpdate callback is called regardless of whether anything actually changed.

If any of our callbacks for persisting or removing an entity throw an exception, the transaction will be rolled back.

3. Annotating the Entity

Let's start by using the callback annotations directly in our entity. In our example, we're going to leave a log trail when User records are changed, so we're going to add simple logging statements in our callback methods.

Additionally, we want to make sure we assemble the user's full name after they're loaded from the database. We'll do that by annotating a method with @PostLoad.

We'll start by defining our User entity:

@Entity
public class User {
    private static Log log = LogFactory.getLog(User.class);

    @Id
    @GeneratedValue
    private int id;
    
    private String userName;
    private String firstName;
    private String lastName;
    @Transient
    private String fullName;

    // Standard getters/setters
}

Next, we need to create a UserRepository interface:

public interface UserRepository extends JpaRepository<User, Integer> {
    public User findByUserName(String userName);
}

Now, let's return to our User class and add our callback methods:

@PrePersist
public void logNewUserAttempt() {
    log.info("Attempting to add new user with username: " + userName);
}
    
@PostPersist
public void logNewUserAdded() {
    log.info("Added user '" + userName + "' with ID: " + id);
}
    
@PreRemove
public void logUserRemovalAttempt() {
    log.info("Attempting to delete user: " + userName);
}
    
@PostRemove
public void logUserRemoval() {
    log.info("Deleted user: " + userName);
}

@PreUpdate
public void logUserUpdateAttempt() {
    log.info("Attempting to update user: " + userName);
}

@PostUpdate
public void logUserUpdate() {
    log.info("Updated user: " + userName);
}

@PostLoad
public void logUserLoad() {
    fullName = firstName + " " + lastName;
}

When we run our tests, we'll see a series of logging statements coming from our annotated methods. Additionally, we can reliably expect our user's full name to be populated when we load a user from the database.
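
For instance, a minimal test — assuming a Spring Data JPA test context with an injected UserRepository and EntityManager — could look like this:

@Test
public void whenUserIsSavedAndLoaded_thenCallbacksFire() {
    User user = new User();
    user.setUserName("jdoe");
    user.setFirstName("John");
    user.setLastName("Doe");

    userRepository.saveAndFlush(user); // logs from @PrePersist and @PostPersist

    entityManager.clear(); // detach everything so the next query hits the database

    User found = userRepository.findByUserName("jdoe"); // triggers @PostLoad
    assertEquals("John Doe", found.getFullName());
}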

4. Annotating an EntityListener

We're going to expand on our example now and use a separate EntityListener to handle our update callbacks. We might favor this approach over placing the methods in our entity if we have some operation we want to apply to all of our entities.

Let's create our AuditTrailListener to log all the activity on the User table:

public class AuditTrailListener {
    private static Log log = LogFactory.getLog(AuditTrailListener.class);
    
    @PrePersist
    @PreUpdate
    @PreRemove
    private void beforeAnyUpdate(User user) {
        if (user.getId() == 0) {
            log.info("[USER AUDIT] About to add a user");
        } else {
            log.info("[USER AUDIT] About to update/delete user: " + user.getId());
        }
    }
    
    @PostPersist
    @PostUpdate
    @PostRemove
    private void afterAnyUpdate(User user) {
        log.info("[USER AUDIT] add/update/delete complete for user: " + user.getId());
    }
    
    @PostLoad
    private void afterLoad(User user) {
        log.info("[USER AUDIT] user loaded from database: " + user.getId());
    }
}

As we can see from the example, we can apply multiple annotations to a method.

Now, we need to go back to our User entity and add the @EntityListeners annotation to the class:

@EntityListeners(AuditTrailListener.class)
@Entity
public class User {
    //...
}

And, when we run our tests, we'll get two sets of log messages for each update action and a log message after a user is loaded from the database.

5. Conclusion

In this article, we've learned what the JPA entity lifecycle callbacks are and when they're called. We looked at the annotations and talked about the rules for using them. We've also experimented with using them in both an entity class and with an EntityListener class.

The example code is available over on GitHub.

Final vs Effectively Final in Java


1. Introduction

One of the most interesting features introduced in Java 8 is effectively final. It allows us to not write the final modifier for variables, fields, and parameters that are effectively treated and used like final ones.

In this tutorial, we'll explore this feature's origin and how it's treated by the compiler compared to the final keyword. Furthermore, we'll explore a solution to use regarding a problematic use-case of effectively final variables.

2. Effectively Final Origin

In simple terms, objects or primitive values are effectively final if we do not change their values after initialization. In the case of objects, if we do not change the reference of an object, then it is effectively final — even if a change occurs in the state of the referenced object.

Prior to its introduction, we could not use a non-final local variable in an anonymous class. We still cannot use variables that have more than one value assigned to them inside anonymous classes, inner classes, and lambda expressions. The introduction of this feature allows us to not have to use the final modifier on variables that are effectively final, saving us a few keystrokes.

Anonymous classes are inner classes and they cannot access non-final or non-effectively-final variables or mutate them in their enclosing scopes as specified by JLS 8.1.3. The same limitation applies to lambda expressions, as having access can potentially produce concurrency issues.
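
For example, the following snippet deliberately fails to compile, because counter is assigned more than once and is therefore neither final nor effectively final:

public void printCounter() {
    int counter = 0;
    counter++; // second assignment - counter is no longer effectively final

    // compile-time error: local variables referenced from a lambda expression
    // must be final or effectively final
    Runnable r = () -> System.out.println(counter);
}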

3. Final vs Effectively Final

The simplest way to understand whether a variable is effectively final is to ask whether adding the final keyword to its declaration would still allow the code to compile and run:

@FunctionalInterface
public interface MyFunctionalInterface {
    void testEffectivelyFinal();

    default void test() {
        int effectivelyFinalInt = 10;
        MyFunctionalInterface functionalInterface 
            = () -> System.out.println("Value of effectively final variable is : " + effectivelyFinalInt);
    }
}

Reassigning a new value to the above effectively final variable would make the code invalid, regardless of where the reassignment occurs.

3.1. Compiler Treatment

JLS 4.12.4 states that if we remove the final modifier from a method parameter or a local variable without introducing compile-time errors, then it's effectively final. Moreover, if we add the final keyword to a variable's declaration in a valid program, then it's effectively final.

The Java compiler doesn't perform the additional optimizations for effectively final variables that it does for final variables.

Let's consider a simple example that declares two final String variables but only uses them for concatenation:

public static void main(String[] args) {
    final String hello = "hello";
    final String world = "world";
    String test = hello + " " + world;
    System.out.println(test);
}

The compiler would change the code executed in the main method above to:

public static void main(String[] var0) {
    String var1 = "hello world";
    System.out.println(var1);
}

On the other hand, if we remove the final modifiers, the variables are still considered effectively final, but the compiler won't fold them into a single constant — the concatenation then happens at runtime.

4. Atomic Modification

Generally, it's not a good practice to modify variables used in lambda expressions and anonymous classes. We cannot know how these variables are going to be used inside method blocks. Mutating them might lead to unexpected results in multithreading environments.

We already have a tutorial explaining the best practices when using lambda expressions and another that explains common anti-patterns when we modify them. But there's an alternative approach for such cases that lets us modify variables while achieving thread-safety through atomicity.

The package java.util.concurrent.atomic offers classes such as AtomicReference and AtomicInteger. We can use them to atomically modify variables inside lambda expressions:

public static void main(String[] args) {
    AtomicInteger effectivelyFinalInt = new AtomicInteger(10);
    MyFunctionalInterface functionalInterface = effectivelyFinalInt::incrementAndGet;
}

5. Conclusion

In this tutorial, we learned about the most notable differences between final and effectively final variables. In addition, we provided a safe alternative that allows us to modify variables inside lambda functions.


Java Weekly, Issue 318


1. Spring and Java

>> Creating Docker images with Spring Boot 2.3.0.M1 [spring.io]

A quick look at Spring Boot's upcoming support for buildpacks and layered jars — two new features that make it easier to create optimized Docker images.

>> Creating an API Gateway with Zuul and Spring Boot [mscharhag.com]

A sample Zuul proxy application demonstrates route configuration and the use of filters to customize routing behavior.

>> IntelliJ IDEA best plugins [vojtechruzicka.com]

And a handful of cool plugins, from keyboard-shortcut helpers and color-coded bracket matching to security vulnerability warnings for third-party library dependencies, and many more.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> An introduction to REST API testing in Go with Resty [ontestautomation.com]

And it's easy to add assertions via the Testify library, which also provides support for setup/teardown, mocks, and test suites.

Also worth reading:

3. Musings

>> Manage dependencies and risks diligently [martinfowler.com]

When teams collaborate on a project, front-loading early sprints to build a “walking-skeleton” can help to decouple their backlogs and ultimately speed up delivery.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Looks Like A Duck [dilbert.com]

>> Data Can Only Mean One Thing [dilbert.com]

>> Mind Reader [dilbert.com]

5. Pick of the Week

>> Recording more events… But where will we store them? [medium.com]

Difference Between Java Matcher find() and matches()


1. Overview

When working with regular expressions in Java, we typically want to search a character sequence for a given Pattern. To facilitate this, the Java Regular Expressions API provides the Matcher class, which we can use to match a given regular expression against a text.

As a general rule, we'll almost always want to use one of two popular methods of the Matcher class:

  • find()
  • matches()

In this quick tutorial, we'll learn about the differences between these methods using a simple set of examples.

2. The find() Method

Put simply, the find() method tries to find the occurrence of a regex pattern within a given string. If multiple occurrences are found in the string, then the first call to find() will jump to the first occurrence. Thereafter, each subsequent call to the find() method will go to the next matching occurrence, one by one.

Let's imagine we want to search the provided string “goodbye 2019 and welcome 2020” for four-digit numbers only.

For this, we'll be using the pattern “\\d\\d\\d\\d”:

@Test
public void whenFindFourDigitWorks_thenCorrect() {
    Pattern stringPattern = Pattern.compile("\\d\\d\\d\\d");
    Matcher m = stringPattern.matcher("goodbye 2019 and welcome 2020");

    assertTrue(m.find());
    assertEquals(8, m.start());
    assertEquals("2019", m.group());
    assertEquals(12, m.end());
    
    assertTrue(m.find());
    assertEquals(25, m.start());
    assertEquals("2020", m.group());
    assertEquals(29, m.end());
    
    assertFalse(m.find());
}

As we have two occurrences in this example – 2019 and 2020 – the find() method will return true twice, and once it reaches the end of the match region, it'll return false.

Once we find any match, we can then use methods like start(), group(), and end() to get more details about the match, as shown above.

The start() method will give the start index of the match, end() will return the offset after the last character of the match, and group() will return the actual value of the match.
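
A common idiom is to iterate over all occurrences by calling find() in a loop:

Pattern stringPattern = Pattern.compile("\\d\\d\\d\\d");
Matcher m = stringPattern.matcher("goodbye 2019 and welcome 2020");

while (m.find()) {
    System.out.println(m.group() + " found at index " + m.start());
}
// prints:
// 2019 found at index 8
// 2020 found at index 25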

3. The find(int) Method

We also have an overloaded version of the find method — find(int). It takes a start index as a parameter and uses it as the starting point to look for occurrences in the string.

Let's see how to use this method in the same example as before:

@Test
public void givenStartIndex_whenFindFourDigitWorks_thenCorrect() {
    Pattern stringPattern = Pattern.compile("\\d\\d\\d\\d");
    Matcher m = stringPattern.matcher("goodbye 2019 and welcome 2020");

    assertTrue(m.find(20));
    assertEquals(25, m.start());
    assertEquals("2020", m.group());
    assertEquals(29, m.end());  
}

As we have provided a start index of 20, we can see that there is now only one occurrence found — 2020, which occurs as expected after this index. And, as is the case with find(), we can use methods like start(), group(), and end() to extract more details about the match.

4. The matches() Method

On the other hand, the matches() method tries to match the whole string against the pattern.

For the same example, matches() will return false:

@Test
public void whenMatchFourDigitWorks_thenFail() {
    Pattern stringPattern = Pattern.compile("\\d\\d\\d\\d");
    Matcher m = stringPattern.matcher("goodbye 2019 and welcome 2020");
 
    assertFalse(m.matches());
}

This is because it will try to match “\\d\\d\\d\\d” against the whole string “goodbye 2019 and welcome 2020” — unlike the find() and find(int) methods, both of which will find the occurrence of the pattern anywhere within the string.

If we change the string to the four-digit number “2019”, then matches() will return true:

@Test
public void whenMatchFourDigitWorks_thenCorrect() {
    Pattern stringPattern = Pattern.compile("\\d\\d\\d\\d");
    Matcher m = stringPattern.matcher("2019");
    
    assertTrue(m.matches());
    assertEquals(0, m.start());
    assertEquals("2019", m.group());
    assertEquals(4, m.end());
    assertTrue(m.matches());
}

As shown above, we can also use methods like start(), group(), and end() to gather more details about the match. One interesting point to note is that calling find() multiple times may return different output after calling these methods, as we saw in our first example, but matches() will always return the same value.

5. Conclusion

In this article, we’ve seen how find(), find(int), and matches() differ from each other with a practical example. We've also seen how various methods like start(), group(), and end() can help us extract more details about a given match.

As always, the full source code of the article is available over on GitHub.

How to Create a Slack Plugin in Java


1. Introduction

Slack is a popular chat system used by people and companies around the world. One of the things that makes it so popular is the ability to write our own custom plugins that can interact with people and channels within a single Slack workspace. This uses Slack's HTTP API.

Slack doesn't offer an official SDK for writing plugins with Java. However, there is an officially endorsed community SDK that we are going to use. This gives us access to almost all of the Slack API from a Java codebase without our needing to concern ourselves with the exact details of the API.

We'll make use of this to build a small system monitoring bot. This will periodically retrieve the disk space for the local computer and alert people if any drives are getting too full.

2. Obtaining API Credentials

Before we can do anything with Slack, we need to create a new App and a Bot and connect it to our channels.

Firstly, let's visit https://api.slack.com/apps. This is the base from where we manage our Slack apps. From here we can create a new app.

When we do this, we need to enter a name for the app and a Slack workspace to create it in.

Once we've done this, the app has been created and is ready for us to work with. The next screen allows us to create a Bot. This is a fake user that the plugin will be acting as.

As with any normal user, we need to give this a display name and a username. These are the settings that other users in the Slack workspace will see for this bot user if they ever interact with it.

Now that we've done this, we can select “Install App” from the side menu and add the App into our Slack workspace. Once we've done this, the app can interact with our workspace.

This will then give us the tokens that we need for our plugin to communicate with Slack.


Each bot interacting with a different Slack workspace will have a different set of tokens. Our application needs the “Bot User OAuth Access Token” value for when we run it.

Finally, we need to invite the bot to any channels it should be involved in. This works by simply messaging it from the channel — @system_monitoring in this case.

3. Adding Slack to our Project

Before we can use it, we first need to add the Slack SDK dependencies to our pom.xml file:

<dependency>
    <groupId>com.hubspot.slack</groupId>
    <artifactId>slack-base</artifactId>
    <version>${slack.version}</version>
</dependency>
<dependency>
    <groupId>com.hubspot.slack</groupId>
    <artifactId>slack-java-client</artifactId>
    <version>${slack.version}</version>
</dependency>

4. Application Structure

The core of our application is the ability to check for errors on the system. We'll represent this with the concept of an Error Checker. This is a simple interface with a single method, triggered to check for errors and report them:

public interface ErrorChecker {
    void check();
}

We also want to have the means to report any errors that have been found. This is another simple interface that will take a problem statement and report it appropriately:

public interface ErrorReporter {
    void reportProblem(String problem);
}

The use of an interface here allows us to have different ways of reporting problems. For example, we might have one that sends emails, contacts an error reporting system, or sends messages to our Slack system for people to get an immediate notification.

The design behind this is that each ErrorChecker instance is given its own ErrorReporter to use. This gives us the flexibility to have different error reporters for different checkers because some errors might be more important than others. For example, if the disks are over 90% full, that may warrant a message to a Slack channel, but if they are over 98% full, then we might instead want to send private messages to specific people.

5. Checking Disk Space

Our error checker will check the amount of disk space on the local system. Any file system that has less than a particular percentage free is considered to be an error and will be reported as such.

We'll make use of the NIO2 FileStore API introduced in Java 7 to obtain this information in a cross-platform manner.

Now, let's take a look at our error checker:

public class DiskSpaceErrorChecker implements ErrorChecker {
    private static final Logger LOG = LoggerFactory.getLogger(DiskSpaceErrorChecker.class);

    private ErrorReporter errorReporter;

    private double limit;

    public DiskSpaceErrorChecker(ErrorReporter errorReporter, double limit) {
        this.errorReporter = errorReporter;
        this.limit = limit;
    }

    @Override
    public void check() {
        FileSystems.getDefault().getFileStores().forEach(fileStore -> {
            try {
                long totalSpace = fileStore.getTotalSpace();
                long usableSpace = fileStore.getUsableSpace();
                double usablePercentage = ((double) usableSpace) / totalSpace;

                if (totalSpace > 0 && usablePercentage < limit) {
                    String error = String.format("File store %s only has %d%% usable disk space",
                        fileStore.name(), (int)(usablePercentage * 100));
                    errorReporter.reportProblem(error);
                }
            } catch (IOException e) {
                LOG.error("Error getting disk space for file store {}", fileStore, e);
            }
        });
    }
}

Here, we're obtaining the list of all file stores on the local system and then checking each one individually. Any that has less than our defined limit as usable space will generate an error using our error reporter.

6. Sending Errors to Slack Channels

We now need to be able to report our errors. Our first reporter will be one that sends messages to a Slack channel. This allows anyone in the channel to see the message, in the hope that somebody will react to it.

This uses a SlackClient, from the Slack SDK, and the name of the channel to send the messages to. It also implements our ErrorReporter interface so that we can easily plug it into whichever error checker wants to use it:

public class SlackChannelErrorReporter implements ErrorReporter {
    private SlackClient slackClient;

    private String channel;

    public SlackChannelErrorReporter(SlackClient slackClient, String channel) {
        this.slackClient = slackClient;
        this.channel = channel;
    }

    @Override
    public void reportProblem(String problem) {
        slackClient.postMessage(
          ChatPostMessageParams.builder()
            .setText(problem)
            .setChannelId(channel)
            .build()
        ).join().unwrapOrElseThrow();
    }
}

7. Application Wiring

We are now in a position to wire up the application and have it monitor our system. For the sake of this tutorial, we're going to use the Java Timer and TimerTask that are part of the core JVM, but we could just as easily use Spring or any other framework to build this.

For now, this will have a single DiskSpaceErrorChecker that reports any disks that are under 10% usable to our “general” channel, and which runs every 5 minutes:

public class MainClass {
    public static final long MINUTES = 1000 * 60;

    public static void main(String[] args) throws IOException {
        SlackClientRuntimeConfig runtimeConfig = SlackClientRuntimeConfig.builder()
          .setTokenSupplier(() -> "<Your API Token>")
          .build();

        SlackClient slackClient = SlackClientFactory.defaultFactory().build(runtimeConfig);

        ErrorReporter slackChannelErrorReporter = new SlackChannelErrorReporter(slackClient, "general");

        ErrorChecker diskSpaceErrorChecker10pct = 
          new DiskSpaceErrorChecker(slackChannelErrorReporter, 0.1);

        Timer timer = new Timer();
        timer.scheduleAtFixedRate(new TimerTask() {
            @Override
            public void run() {
                diskSpaceErrorChecker10pct.check();
            }
        }, 0, 5 * MINUTES);
    }
}

We need to replace “<Your API Token>” with the token that was obtained earlier, and then we're ready to run. As soon as we do, if everything is correct, our plugin will check the local drives and message Slack if there are any errors.

8. Sending Errors as Private Messages

Next, we're going to add an error reporter that sends private messages instead. This can be useful for more urgent errors since it will immediately ping a specific user instead of relying on someone in the channel to react.

Our error reporter here is more complicated because it needs to interact with a single, targeted user:

public class SlackUserErrorReporter implements ErrorReporter {
    private SlackClient slackClient;

    private String user;

    public SlackUserErrorReporter(SlackClient slackClient, String user) {
        this.slackClient = slackClient;
        this.user = user;
    }

    @Override
    public void reportProblem(String problem) {
        UsersInfoResponse usersInfoResponse = slackClient
            .lookupUserByEmail(UserEmailParams.builder()
              .setEmail(user)
              .build()
            ).join().unwrapOrElseThrow();

        ImOpenResponse imOpenResponse = slackClient.openIm(ImOpenParams.builder()
            .setUserId(usersInfoResponse.getUser().getId())
            .build()
        ).join().unwrapOrElseThrow();

        imOpenResponse.getChannel().ifPresent(channel -> {
            slackClient.postMessage(
                ChatPostMessageParams.builder()
                  .setText(problem)
                  .setChannelId(channel.getId())
                  .build()
            ).join().unwrapOrElseThrow();
        });
    }
}

What we have to do here is to find the user that we are messaging — looked up by email address, since this is the one thing that can't be changed. Next, we open an IM channel to the user, and then we post our error message to that channel.

This can then be wired up in the main method, and we will alert a single user directly:

ErrorReporter slackUserErrorReporter = new SlackUserErrorReporter(slackClient, "testuser@baeldung.com");

ErrorChecker diskSpaceErrorChecker2pct = new DiskSpaceErrorChecker(slackUserErrorReporter, 0.02);

timer.scheduleAtFixedRate(new TimerTask() {
    @Override
    public void run() {
        diskSpaceErrorChecker2pct.check();
    }
}, 0, 5 * MINUTES);

Once done, we can run this up and get private messages for errors as well.

9. Conclusion

We've seen here how we can incorporate Slack into our tooling so that we can have feedback sent to either the entire team or to individual members. There's much more we can do with the Slack API, so why not see what else we can incorporate?

As usual, the source code for this article can be found over on GitHub.

Spring Cloud Gateway Routing Predicate Factories


1. Introduction

In a previous article, we covered what the Spring Cloud Gateway is and how to use the built-in predicates to implement basic routing rules. Sometimes, however, those built-in predicates might not be enough. For instance, our routing logic might require a database lookup.

For those cases, Spring Cloud Gateway allows us to define custom predicates. Once defined, we can use them as any other predicate, meaning we can define routes using the fluent API and/or the DSL.

2. Anatomy of a Predicate

In a nutshell, a Predicate in Spring Cloud Gateway is an object that tests if the given request fulfills a given condition. For each route, we can define one or more predicates that, if satisfied, will accept requests for the configured backend after applying any filters.

Before writing our predicate, let's take a look at the source code of an existing predicate or, more precisely, the code for an existing PredicateFactory. As the name already hints, Spring Cloud Gateway uses the popular Factory Method Pattern as a mechanism to support the creation of Predicate instances in an extensible way.

We can pick any one of the built-in predicate factories, which are available in the org.springframework.cloud.gateway.handler.predicate package of the spring-cloud-gateway-core module. We can easily spot the existing ones since their names all end in RoutePredicateFactory. HeaderRoutePredicateFactory is a good example:

public class HeaderRoutePredicateFactory extends 
  AbstractRoutePredicateFactory<HeaderRoutePredicateFactory.Config> {

    // ... setup code omitted
    @Override
    public Predicate<ServerWebExchange> apply(Config config) {
        return new GatewayPredicate() {
            @Override
            public boolean test(ServerWebExchange exchange) {
                // ... predicate logic omitted
            }
        };
    }

    @Validated
    public static class Config {
        public Config(boolean isGolden, String customerIdCookie ) {
          // ... constructor details omitted
        }
        // ...getters/setters omitted
    }
}

There are a few key points we can observe in the implementation:

  • It extends the AbstractRoutePredicateFactory<T>, which, in turn, implements the RoutePredicateFactory interface used by the gateway
  • The apply method returns an instance of the actual Predicate – a GatewayPredicate in this case
  • The predicate defines an inner Config class, which is used to store static configuration parameters used by the test logic

If we take a look at the other available predicate factories, we'll see that the pattern is basically the same:

  1. Define a Config class to hold configuration parameters
  2. Extend the AbstractRoutePredicateFactory, using the configuration class as its template parameter
  3. Override the apply method, returning a Predicate that implements the desired test logic

3. Implementing a Custom Predicate Factory

For our implementation, let's suppose the following scenario: for a given API call, we have to choose between two possible backends. “Golden” customers, who are our most valued ones, should be routed to a powerful server, with access to more memory, more CPU, and fast disks. Non-golden customers go to a less powerful server, which results in slower response times.

To determine whether the request comes from a golden customer, we'll need to call a service that takes the customerId associated with the request and returns its status. As for the customerId, in our simple scenario, we'll assume it is available in a cookie.

With all this information, we can now write our custom predicate. We'll keep the existing naming convention and name our class GoldenCustomerRoutePredicateFactory:

public class GoldenCustomerRoutePredicateFactory extends 
  AbstractRoutePredicateFactory<GoldenCustomerRoutePredicateFactory.Config> {

    private final GoldenCustomerService goldenCustomerService;
    
    // ... constructor omitted

    @Override
    public Predicate<ServerWebExchange> apply(Config config) {        
        return (ServerWebExchange t) -> {
            List<HttpCookie> cookies = t.getRequest()
              .getCookies()
              .get(config.getCustomerIdCookie());
              
            boolean isGolden; 
            if ( cookies == null || cookies.isEmpty()) {
                isGolden = false;
            } else {                
                String customerId = cookies.get(0).getValue();                
                isGolden = goldenCustomerService.isGoldenCustomer(customerId);
            }              
            return config.isGolden() ? isGolden : !isGolden;           
        };        
    }
    
    @Validated
    public static class Config {        
        boolean isGolden = true;        
        @NotEmpty
        String customerIdCookie = "customerId";
        // ...constructors and mutators omitted   
    }    
}

As we can see, the implementation is quite simple. Our apply method returns a lambda that implements the required logic using the ServerWebExchange passed to it. First, it checks for the presence of the customerId cookie. If it cannot find it, then this is a normal customer. Otherwise, we use the cookie value to call the isGoldenCustomer service method.

Next, we combine the client's type with the configured isGolden parameter to determine the return value. This allows us to use the same predicate to create both routes described before, by just changing the value of the isGolden parameter.

4. Registering the Custom Predicate Factory

Once we've coded our custom predicate factory, we need a way to make Spring Cloud Gateway aware of it. Since we're using Spring, this is done in the usual way: we declare a bean of type GoldenCustomerRoutePredicateFactory.

Since our type implements RoutePredicateFactory through its base class, it will be picked up by Spring at context initialization time and made available to Spring Cloud Gateway.

Here, we'll create our bean using a @Configuration class:

@Configuration
public class CustomPredicatesConfig {
    @Bean
    public GoldenCustomerRoutePredicateFactory goldenCustomer(
      GoldenCustomerService goldenCustomerService) {
        return new GoldenCustomerRoutePredicateFactory(goldenCustomerService);
    }
}

We assume here we have a suitable GoldenCustomerService implementation available in the Spring's context. In our case, we have just a dummy implementation that compares the customerId value with a fixed one — not realistic, but useful for demonstration purposes.
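
For reference, a sketch of such a dummy implementation — consistent with the customerId=baeldung cookie we'll send in the tests below — could be:

@Service
public class GoldenCustomerService {
    public boolean isGoldenCustomer(String customerId) {
        // dummy check: a single hardcoded "golden" customer id, for demonstration only
        return "baeldung".equals(customerId);
    }
}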

5. Using the Custom Predicate

Now that we have our “Golden Customer” predicate implemented and available to Spring Cloud Gateway, we can start using it to define routes. First, we'll use the fluent API to define a route, then we'll do it in a declarative way using YAML.

5.1. Defining a Route with the Fluent API

Fluent APIs are a popular design choice when we have to programmatically create complex objects. In our case, we define routes in a @Bean that creates a RouteLocator object using a RouteLocatorBuilder and our custom predicate factory:

@Bean
public RouteLocator routes(RouteLocatorBuilder builder, GoldenCustomerRoutePredicateFactory gf ) {
    return builder.routes()
      .route("golden_route", r -> r.path("/api/**")
        .uri("https://fastserver")
        .predicate(gf.apply(new Config(true, "customerId"))))
      .route("common_route", r -> r.path("/api/**")
        .uri("https://slowserver")
        .predicate(gf.apply(new Config(false, "customerId"))))                
      .build();
}

Notice how we've used two distinct Config configurations in each route. In the first case, the first argument is true, so the predicate also evaluates to true when we have a request from a golden customer. As for the second route, we pass false in the constructor so our predicate will return true for non-golden customers.

5.2. Defining a Route in YAML

We can achieve the same result as before in a declarative way using properties or YAML files. Here, we'll use YAML, as it's a bit easier to read:

spring:
  cloud:
    gateway:
      routes:
      - id: golden_route
        uri: https://fastserver
        predicates:
        - Path=/api/**
        - GoldenCustomer=true
      - id: common_route
        uri: https://slowserver
        predicates:
        - Path=/api/**
        - name: GoldenCustomer
          args:
            golden: false
            customerIdCookie: customerId

Here we've defined the same routes as before, using the two available options to define predicates. The first one, golden_route, uses a compact representation that takes the form Predicate=[param[,param]+]. Here, Predicate is the predicate's name, which is derived automatically from the factory class name by removing the RoutePredicateFactory suffix. Following the “=” sign, we have parameters used to populate the associated Config instance.

This compact syntax is fine when our predicate requires just simple values, but this might not always be the case. For those scenarios, we can use the long format, depicted in the second route. In this case, we supply an object with two properties: name and args. name contains the predicate name, and args is used to populate the Config instance. Since this time args is an object, our configuration can be as complex as required.

6. Testing

Now, let's check if everything is working as expected using curl to test our gateway. For those tests, we've set up our routes just like previously shown, but we'll use the publicly available httpbin.org service as our dummy backend. This is a quite useful service that we can use to quickly check if our rules are working as expected, available both online and as a docker image that we can use locally.

Our test configuration also includes the standard AddRequestHeader filter. We use it to add a custom Goldencustomer header to the request, with a value that corresponds to the predicate result. We also add a StripPrefix filter, since we want to remove the /api prefix from the request URI before calling the backend.

First, let's test the “common client” scenario. With our gateway up and running, we use curl to invoke httpbin's headers API, which will simply echo all received headers:

$ curl http://localhost:8080/api/headers
{
  "headers": {
    "Accept": "*/*",
    "Forwarded": "proto=http;host=\"localhost:8080\";for=\"127.0.0.1:51547\"",
    "Goldencustomer": "false",
    "Host": "httpbin.org",
    "User-Agent": "curl/7.55.1",
    "X-Forwarded-Host": "localhost:8080",
    "X-Forwarded-Prefix": "/api"
  }
}

As expected, we see that the Goldencustomer header was sent with a false value. Let's try now with a “Golden” customer:

$ curl -b customerId=baeldung http://localhost:8080/api/headers
{
  "headers": {
    "Accept": "*/*",
    "Cookie": "customerId=baeldung",
    "Forwarded": "proto=http;host=\"localhost:8080\";for=\"127.0.0.1:51651\"",
    "Goldencustomer": "true",
    "Host": "httpbin.org",
    "User-Agent": "curl/7.55.1",
    "X-Forwarded-Host": "localhost:8080",
    "X-Forwarded-Prefix": "/api"
  }
}

This time, Goldencustomer is true, as we've sent a customerId cookie with a value that our dummy service recognizes as valid for a golden customer.

7. Conclusion

In this article, we've covered how to add custom predicate factories to Spring Cloud Gateway and use them to define routes using arbitrary logic.

As usual, all code is available over on GitHub.

Implement Health Checks in OpenShift


1. Overview

In this tutorial, we're going to illustrate how to keep an application deployed within OpenShift healthy. 

2. What's a Healthy Application?

First, let's try to understand what it means to keep an application healthy.

Very often, applications within pods experience problems. In particular, the application may stop responding or start responding incorrectly. 

An application may become unhealthy due to temporary problems, such as configuration errors or connectivity issues with external components like databases, storage, or other applications.

The first step in building a resilient application is to implement automatic health checks on the pods. In the event of a problem, the pods will be restarted automatically, without manual intervention.

3. Health Checks Using Probes

Kubernetes and, therefore, OpenShift, offers two types of probes: liveness probes and readiness probes. 

We use liveness probes to know when it's necessary to restart a container. OpenShift restarts the pod when the health check fails and the pod becomes unavailable. 

Readiness probes verify the availability of a container to accept traffic. We consider a pod ready when all its containers are ready. The service load balancers remove the pod when it isn't in the ready state.

In case a container takes a long time to start, the mechanism allows us to route connections to the pods that are ready to accept the required traffic. This can occur, for example, when there's a need to initialize a dataset or establish a connection to another container or an external service.

We can configure the liveness probe in two ways:

  • Editing the pod deployment file
  • Using the OpenShift wizard

In both cases, the result obtained remains the same. The first mechanism is inherited directly from Kubernetes, already discussed in another tutorial.

In this tutorial, instead, we show how to configure the probes using the OpenShift graphical user interface.

4. Liveness Probes

We define a liveness probe as a parameter for specific containers within the deployment configuration file. All the containers inside the pod will inherit this configuration. 

In case the probe has been created as an HTTP or TCP check, the probe will be executed from the node the container is running on. OpenShift executes the probe inside the container when the probe is created as a script.

So, let's add a new liveness probe to an application deployed within OpenShift:

  1. Let's select the project to which the application belongs
  2. From the left panel, we can click on Applications -> Deployments
  3. Let's select the chosen application
  4. In the Application Deployment Configuration tab, we select the link in the Add Health Checks alert. The alert is present only in case no health check has been configured for the application
  5. From the new page, let's select Add Liveness Probe
  6. Then, let's configure the liveness probe to our liking:

Let's break down what each of these Liveness Probe settings means:

  • Type: the type of health check. We can choose between an HTTP(S) check, a container execution check, or a socket check
  • Use HTTPS: select this checkbox only if the liveness service is exposed over HTTPS
  • Path:  the path on which the application exposes the liveness probe
  • Port: the port on which the application exposes the liveness probe
  • Initial Delay: the number of seconds after the start of the container before the probe is executed – if left blank, it defaults to 0
  • Timeout: the number of seconds after which a probe timeout is detected – defaults to 1 second if blank

OpenShift creates a new DeploymentConfig for the application. The new DeploymentConfig will contain the definition of the newly configured probe.
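
Although we configured the probe through the wizard, it's useful to know what the generated definition looks like. A minimal sketch of the container's probe stanza — assuming an HTTP check against a hypothetical /health endpoint on port 8080; the delay and timeout values are illustrative — might be:

livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 30
  timeoutSeconds: 1

A readiness probe, which we'll configure next, produces an analogous readinessProbe stanza.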

5. Readiness Probes

We can configure readiness probes to ensure that the container is ready to receive traffic before it is considered active. Unlike the liveness probe, if a container fails the readiness check, that container remains active but is unable to serve traffic. 

The readiness probe is essential to perform zero-downtime deployments.

As in the case of the liveness probe, we can configure the readiness probe using the OpenShift wizard, or by directly editing the pod deployment file.

Since we've already configured the liveness probe, let's now configure the readiness probe:

  1. Select the project to which the application belongs
  2. From the left panel, we can click on Applications -> Deployments
  3. Let's select the chosen application
  4. Inside the Application Deployment Configuration tab, we can click on the Actions button in the top right corner, and select Edit Health Checks
  5. From the new page, let's select Add Readiness Probe
  6. Then, let's configure the readiness probe to our liking:


As seen for the liveness probe, the configurable parameters are as follows:

  • Type: type of health check. We can choose between an HTTP(S) check, a container execution check, or a socket check
  • Use HTTPS: select the checkbox only if the readiness service is exposed in HTTPS
  • Path: the path on which the application exposes the readiness probe
  • Port: port on which the application exposes the readiness probe
  • Initial Delay: number of seconds after the start of the container before the probe is executed (default is 0)
  • Timeout: number of seconds after which a probe timeout is detected (default is 1)

Again, OpenShift creates a new DeploymentConfig – containing the readiness probe – for our application.

6. Wrap It Up

It's time to test what we've presented. Suppose we have a Spring Boot application to deploy within an OpenShift cluster. To do this, we can refer to our tutorial, where the test application is presented as a step-by-step deployment.

Once the application has been correctly deployed, we can start by setting up the probes, following what we presented in the previous sections. The Spring Boot application uses Spring Boot Actuator to expose the health check endpoints. We can find more information about configuring the Actuator in our dedicated tutorial.

At the end of the setup, the Deployment configuration page will show the information about the newly configured probes:

Now it's time to check that the readiness and liveness probes are working properly.

6.1. Test The Readiness Probe

Let's try to simulate the deployment of a new version of the application. The readiness probe allows us to deploy with zero downtime. In this case, when we deploy a new version, OpenShift will create a new pod corresponding to the new version. The old pod will continue to serve traffic until the new pod is ready to receive it — that is, until the readiness probe of the new pod returns a positive result.

From the OpenShift dashboard, inside the page of our project, if we look in the middle of the deployment phase, we can see the representation of the zero-downtime deploy:

6.2. Test The Liveness Probe

Let's simulate the failure of the liveness probe instead.

Let's suppose that the health check performed by the application at this point will return a negative result. This indicates the unavailability of a resource necessary for the work of the pod.

OpenShift will kill the pod n times (by default, n is set to 3) and recreate it. If the problems are solved during this phase, the pod is restored to its state of health. Otherwise, OpenShift considers the pod in Failed status if dependent resources continue to be unavailable during the attempts.

Let's verify this behavior. Let's open the page containing the list of events related to the pod. We should view a screen similar to the following one:

As we can see, OpenShift recorded the health check failure, killed the pod, and tried to restart it.

7. Conclusion

In this tutorial, we explored OpenShift's two types of probes.

We used readiness and liveness probes in parallel in the same container. We used both of them to ensure that OpenShift restarts the containers when there are problems and doesn't allow traffic to reach them when they're not yet ready to serve. The complete source code of our examples here is, as always, over on GitHub.
