
Count Spaces in a Java String


1. Overview

When we work with Java strings, sometimes we would like to count how many spaces are in a string.

There are various ways to get the result. In this quick tutorial, we'll see how to get it done through examples.

2. The Example Input String

First of all, let's prepare an input string as the example:

String INPUT_STRING = "  This string has nine spaces and a Tab:'	'";

The string above contains nine spaces and a tab character wrapped by single quotes. Our goal is to count space characters only in the given input string.

Therefore, our expected result is:

int EXPECTED_COUNT = 9;

Next, let's explore various solutions to get the right result.

We'll first solve the problem using the Java standard library, then we'll solve it using some popular external libraries.

Finally, we'll verify each solution with a unit test method.

3. Using Java Standard Library

3.1. The Classic Solution: Looping and Counting

This is probably the most straightforward idea to solve the problem.

We go through all the characters in the input string. Also, we maintain a counter variable and increment the counter once we see a space character.

Finally, we'll get the count of the spaces in the string:

@Test
void givenString_whenCountSpaceByLooping_thenReturnsExpectedCount() {
    int spaceCount = 0;
    for (char c : INPUT_STRING.toCharArray()) {
        if (c == ' ') {
            spaceCount++;
        }
    }
    assertThat(spaceCount).isEqualTo(EXPECTED_COUNT);
}

3.2. Using Java 8's Stream API

Stream API has been around since Java 8.

Additionally, since Java 9, a new chars() method has been added to the String class to convert the char values from the String into an IntStream instance.

If we're working with Java 9 or later, we can combine the two features to solve the problem in a one-liner:

@Test
void givenString_whenCountSpaceByJava8StreamFilter_thenReturnsExpectedCount() {
    long spaceCount = INPUT_STRING.chars().filter(c -> c == (int) ' ').count();
    assertThat(spaceCount).isEqualTo(EXPECTED_COUNT);
}

3.3. Using Regex's Matcher.find() Method

So far, we've seen solutions that count by searching for space characters in the given string, using the check character == ' ' to decide whether a character is a space.

Regular Expression (Regex) is another powerful weapon to search strings, and Java has good support for Regex.

Therefore, we can define a single space as a pattern and use the Matcher.find() method to check if the pattern is found in the input string.

Also, to get the count of spaces, we increment a counter every time the pattern is found:

@Test
void givenString_whenCountSpaceByRegexMatcher_thenReturnsExpectedCount() {
    Pattern pattern = Pattern.compile(" ");
    Matcher matcher = pattern.matcher(INPUT_STRING);
    int spaceCount = 0;
    while (matcher.find()) {
        spaceCount++;
    }
    assertThat(spaceCount).isEqualTo(EXPECTED_COUNT);
}

3.4. Using the String.replaceAll() Method

Using the Matcher.find() method to search and find spaces is pretty straightforward. However, since we're talking about Regex, there can be other quick ways to count spaces.

We know that we can do “search and replace” using the String.replaceAll() method.

Therefore, if we replace all non-space characters in the input string with an empty string, only the spaces from the input will remain.

So, if we want to get the count, the length of the resulting string will be the answer. Next, let's give this idea a try:

@Test
void givenString_whenCountSpaceByReplaceAll_thenReturnsExpectedCount() {
    int spaceCount = INPUT_STRING.replaceAll("[^ ]", "").length();
    assertThat(spaceCount).isEqualTo(EXPECTED_COUNT);
}

As the code above shows, we have just one line to get the count.

It's worthwhile to mention that, in the String.replaceAll() call, we've used the pattern “[^ ]” instead of “\\S”. This is because we would like to replace non-space characters instead of just the non-whitespace characters.

3.5. Using the String.split() Method

We've seen that the solution with the String.replaceAll() method is neat and compact. Now, let's see another idea to solve the problem: using the String.split() method.

As we know, we can pass a pattern to the String.split() method and get an array of strings that split by the pattern.

So, the idea is that we can split the input string by a single space. Then, the count of spaces in the original string will be one less than the length of the resulting string array. Note that String.split() discards trailing empty strings, so this approach assumes the input doesn't end with a space, which is the case for our example.

Now, let's see if this idea works:

@Test
void givenString_whenCountSpaceBySplit_thenReturnsExpectedCount() {
    int spaceCount = INPUT_STRING.split(" ").length - 1;
    assertThat(spaceCount).isEqualTo(EXPECTED_COUNT);
}

4. Using External Libraries

The Apache Commons Lang 3 library is widely used in Java projects. Also, Spring is a popular framework among Java enthusiasts.

Both libraries have provided a handy string utility class.

Now, let's see how to count spaces in an input string using these libraries.

4.1. Using the Apache Commons Lang 3 Library

The Apache Commons Lang 3 library provides a StringUtils class that contains many convenient string-related methods.

To count the spaces in a string, we can use the countMatches() method in this class.

Before we start using the StringUtils class, we should make sure the library is on the classpath. We can add the dependency with its latest version to our pom.xml:

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
    <version>3.12.0</version>
</dependency>

Now, let's create a unit test to show how to use this method:

@Test
void givenString_whenCountSpaceUsingApacheCommons_thenReturnsExpectedCount() {
    int spaceCount = StringUtils.countMatches(INPUT_STRING, " ");
    assertThat(spaceCount).isEqualTo(EXPECTED_COUNT);
}

4.2. Using Spring

Today, a lot of Java projects are based on the Spring framework. So, if we're working with Spring, a handy string utility provided by Spring is already available to us: StringUtils.

Yes, it has the same name as the class in Apache Commons Lang 3. Moreover, it provides a countOccurrencesOf() method to count the occurrence of a character in a string.

This is exactly what we're looking for:

@Test
void givenString_whenCountSpaceUsingSpring_thenReturnsExpectedCount() {
    int spaceCount = StringUtils.countOccurrencesOf(INPUT_STRING, " ");
    assertThat(spaceCount).isEqualTo(EXPECTED_COUNT);
}

5. Conclusion

In this article, we've addressed different approaches to counting space characters in an input string.

As always, the code for the article can be found over on GitHub.

       

Java Sound API – Capturing Microphone


1. Overview

In this article, we'll see how to capture a microphone and record incoming audio in Java to save it to a WAV file. To capture the incoming sound from a microphone, we use the Java Sound API, part of the Java ecosystem.

The Java Sound API is a powerful API to capture, process, and play back audio, and it consists of 4 packages. We'll focus on the javax.sound.sampled package, which provides all the interfaces and classes needed to capture incoming audio.

2. What Is the TargetDataLine?

The TargetDataLine is a type of DataLine that we use to capture and read audio-related data from audio capture devices such as microphones. The interface provides all the methods necessary for reading and capturing data, and it reads the data from the target data line's buffer.

We can obtain such a line by invoking AudioSystem's getLine() method and passing it a DataLine.Info object; the returned line provides all the transport-control methods for audio. The Oracle documentation explains in detail how the Java Sound API works.

Let's go through the steps we need to capture audio from a microphone in Java.

3. Steps to Capture Sound

To save captured audio, Java supports the AU, AIFF, AIFC, SND, and WAVE file formats. We'll be using the WAVE (.wav) file format to save our files.

The first step in the process is to initialize the AudioFormat instance. The AudioFormat notifies Java how to interpret and handle the bits of information in the incoming sound stream. We use the following AudioFormat class constructor in our example:

AudioFormat(AudioFormat.Encoding encoding, float sampleRate, int sampleSizeInBits, int channels, int frameSize, float frameRate, boolean bigEndian)

After that, we open a DataLine.Info object. This object holds all the information related to the data line (input). Using the DataLine.Info object, we can create an instance of the TargetDataLine, which will read all the incoming data into an audio stream. For generating the TargetDataLine instance, we use the AudioSystem.getLine() method and pass the DataLine.Info object:

line = (TargetDataLine) AudioSystem.getLine(info);

The line is a TargetDataLine instance, and the info is the DataLine.Info instance.

Once created, we can open the line to read all the incoming sounds. We can use an AudioInputStream to read the incoming data. In conclusion, we can write this data into a WAV file and close all the streams.

To understand this process, we'll look at a small program to record input sound.

4. Example Application

To see the Java Sound API in action, let's create a simple program. We will break it down into three sections, first building the AudioFormat, second building the TargetDataLine, and lastly, saving the data as a file.

4.1. Building the AudioFormat

The AudioFormat class defines what kind of data can be captured by the TargetDataLine instance. So, the first step is to initialize the AudioFormat instance even before we open a new data line. The App class is the main class of the application and makes all the calls. We define the properties of the AudioFormat in a constants class called ApplicationProperties. We build the AudioFormat instance by passing all the necessary parameters:

public static AudioFormat buildAudioFormatInstance() {
    ApplicationProperties aConstants = new ApplicationProperties();
    AudioFormat.Encoding encoding = aConstants.ENCODING;
    float rate = aConstants.RATE;
    int channels = aConstants.CHANNELS;
    int sampleSize = aConstants.SAMPLE_SIZE;
    boolean bigEndian = aConstants.BIG_ENDIAN;
    return new AudioFormat(encoding, rate, sampleSize, channels, (sampleSize / 8) * channels, rate, bigEndian);
}

Now that we have our AudioFormat ready, we can move ahead and build the TargetDataLine instance.

4.2. Building the TargetDataLine

We use the TargetDataLine class to read audio data from our microphone. In our example, we get and run the TargetDataLine in the SoundRecorder class. The getTargetDataLineForRecord() method builds the TargetDataLine instance.

We read and process the audio input and dump it into an AudioInputStream object. Here's how we create a TargetDataLine instance:

private TargetDataLine getTargetDataLineForRecord() {
    TargetDataLine line;
    DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);
    if (!AudioSystem.isLineSupported(info)) {
        return null;
    }
    line = (TargetDataLine) AudioSystem.getLine(info);
    line.open(format, line.getBufferSize());
    return line;
}

4.3. Building and Filling the AudioInputStream

So far in our example, we have created an AudioFormat instance, applied it to the TargetDataLine, and opened the data line to read audio data. We have also created a thread to run the SoundRecorder instance. When the thread runs, we first build a byte output stream and then convert it to an AudioInputStream instance. The parameters we need for building the AudioInputStream instance are:

int frameSizeInBytes = format.getFrameSize();
int bufferLengthInFrames = line.getBufferSize() / 8;
final int bufferLengthInBytes = bufferLengthInFrames * frameSizeInBytes;

Notice that in the code above we divided the buffer size by 8. We do so to make the buffer and the array the same length so that the recorder can deliver the data to the line as soon as it is read.

Now that we have initialized all the needed parameters, the next step is to build the byte output stream and then convert the generated output stream (the captured sound data) to an AudioInputStream instance:

buildByteOutputStream(out, line, frameSizeInBytes, bufferLengthInBytes);
this.audioInputStream = new AudioInputStream(line);
setAudioInputStream(convertToAudioIStream(out, frameSizeInBytes));
audioInputStream.reset();

Before we set the InputStream, we'll build the byte OutputStream:

public void buildByteOutputStream(final ByteArrayOutputStream out, final TargetDataLine line, int frameSizeInBytes, final int bufferLengthInBytes) throws IOException {
    final byte[] data = new byte[bufferLengthInBytes];
    int numBytesRead;
    line.start();
    while (thread != null) {
        if ((numBytesRead = line.read(data, 0, bufferLengthInBytes)) == -1) {
            break;
        }
        out.write(data, 0, numBytesRead);
    }
}

We then convert the byte OutputStream to an AudioInputStream:

public AudioInputStream convertToAudioIStream(final ByteArrayOutputStream out, int frameSizeInBytes) {
    byte[] audioBytes = out.toByteArray();
    ByteArrayInputStream bais = new ByteArrayInputStream(audioBytes);
    AudioInputStream audioStream = new AudioInputStream(bais, format, audioBytes.length / frameSizeInBytes);
    long milliseconds = (long) ((audioInputStream.getFrameLength() * 1000) / format.getFrameRate());
    duration = milliseconds / 1000.0;
    return audioStream;
}

4.4. Saving the AudioInputStream to a Wav File

We have created and filled in the AudioInputStream and stored it as a member variable of the SoundRecorder class. We will retrieve this AudioInputStream in the App class by using the SoundRecorder instance getter property and pass it to the WaveDataUtil class:

wd.saveToFile("/SoundClip", AudioFileFormat.Type.WAVE, soundRecorder.getAudioInputStream());

The WaveDataUtil class has the code to convert the AudioInputStream into a .wav file:

AudioSystem.write(audioInputStream, fileType, myFile);
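
The saveToFile() method itself isn't shown in this article, so here's a minimal sketch of what such a utility method might look like; the method signature follows the call shown earlier, and the error handling and return type are purely illustrative:

public boolean saveToFile(String name, AudioFileFormat.Type fileType, AudioInputStream audioInputStream) {
    // derive the output file name from the requested audio file format
    File myFile = new File(name + "." + fileType.getExtension());
    try {
        AudioSystem.write(audioInputStream, fileType, myFile);
    } catch (IOException e) {
        return false;
    }
    return true;
}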

5. Conclusion

This article showed a quick example of using the Java Sound API to capture and record audio using a microphone. The entire code for this tutorial is available over on GitHub.

       

Undo and Revert Commits in Git


1. Introduction

We often find ourselves needing to undo or revert a commit while using Git, whether it's to roll back to a particular point in time or to revert a particularly troublesome commit. In this tutorial, we'll go through the most common commands to undo and revert commits in Git. We'll also demonstrate how there are subtle differences in the way these commands function.

2. Reviewing Old Commits with git checkout

To start with, we're able to review the state of a project at a particular commit by using the git checkout command. We can review the history of a Git repository by using the git log command. Each commit has a unique SHA-1 identifying hash, which we're able to use with git checkout in order to revisit any commit in the timeline.

In this example, we'll revisit a commit that has an identifying hash of e0390cd8d75dc0f1115ca9f350ac1a27fddba67d:

git checkout e0390cd8d75dc0f1115ca9f350ac1a27fddba67d

Our working directory will now match the exact state of our specified commit. Thus, we're able to view the project in its historical state and edit files without worrying about losing the current project state. Nothing we do here saves to the repository. This is known as a detached HEAD state.

We can use git checkout on locally modified files to restore them to their working copy versions.

3. Reverting a Commit with git revert

We revert a commit in Git by using the git revert command. It's important to remember that this command is not a traditional undo operation. Instead, it inverts changes introduced by the commit and generates a new commit with the inverse content.

This means that git revert should only be used if we want to apply the inverse of a particular commit. It does not revert to the previous state of a project by removing all subsequent commits — it undoes a single commit.

git revert does not move any ref pointers to the commit that we're reverting, in contrast to other 'undo' commands such as git checkout and git reset, which do move the HEAD ref pointer to the specified commit.

Let's go through an example of reverting a commit:

mkdir git_revert_example
cd git_revert_example/
git init .
touch test_file
echo "Test content" >> test_file 
git add test_file
git commit -m "Adding content to test file"
echo "More test content" >> test_file 
git add test_file
git commit -m "Adding more test content"
git log
git revert e0390cd8d75dc0f1115ca9f350ac1a27fddba67d
cat test_file

In this example, we've created a test_file, added some content, and committed it. Then, we've added and committed more content to the file before running a git log to identify the commit hash of the commit we want to revert.

In this instance, we're reverting the most recent commit. Finally, we've run git revert and verified that the changes in the commit were reverted by outputting the file's contents.

4. Reverting to Previous Project State with git reset

Reverting to a previous state in a project with Git is achieved by using the git reset command. This tool undoes more complex changes. It has three primary forms of invocation that relate to Git's internal state management system: --hard, --soft, and --mixed. Understanding which invocation to use is the most complicated part of performing a git reset.

git reset is similar in behavior to git checkout. However, git reset moves both the HEAD ref pointer and the current branch ref pointer, whereas git checkout only moves the HEAD ref pointer and leaves the branch ref untouched.

To understand the different invocations, we'll look at Git's internal state management system — also known as Git's three trees.

The first tree is the working directory. This tree is in sync with the local file system and represents immediate changes made to content in files and directories.

Next, we have the staging index tree. This tree tracks changes in the working directory — in other words, changes that have been selected with git add to be stored in the next commit.

The final tree is the commit history. The git commit command adds changes to a permanent snapshot that is stored in the commit history.

4.1. --hard

This is the most dangerous, yet most frequently used, option. With this invocation, the commit history ref pointers are updated to the specified commit. After this, the staging index and working directory are reset to match the state of that commit. Any previously pending changes to the staging index and working directory are discarded, so we lose any pending or uncommitted work.

Following on from the example above, let's commit some more content to the file and also commit a brand new file to the repository:

echo "Text to be committed" >> test_file
git add test_file
touch new_test_file
git add new_test_file
git commit -m "More text added to test_file, added new_test_file"

Let's say we then decide to revert to the first commit in the repository. We'll achieve that by running the command:

git reset --hard 9d6bedfd771f73373348f8337cf60915372d7954

Git will tell us that the HEAD is now at the commit hash specified. Looking at the contents of test_file shows us that our latest text additions are not present, and our new_test_file no longer exists. This data loss is irreversible, so it is critical that we understand how --hard works with Git's three trees.

4.2. --soft

When we invoke git reset with --soft, only the ref pointers are updated, and the reset stops there. Thus, the staging index and working directory remain in the same state.

In our previous example, the changes we added to the staging index would not have been deleted if we had used the --soft argument. We're still able to commit our changes in the staging index.

4.3. --mixed

--mixed is the default operating mode if no argument is passed; it offers a middle ground between the --soft and --hard invocations. The staging index is reset to the state of the specified commit and the ref pointers are updated. Any undone changes from the staging index are moved to the working directory.

Using --mixed in our example above means that our local changes to files are not deleted. Unlike --soft, however, the changes are undone from the staging index and await further action.

5. Conclusion

A simple way to compare the two methods is that git revert is safe, and git reset is dangerous. As we've seen in our example, there is a possibility of losing work with git reset. With git revert, we can safely undo a public commit, whereas git reset is tailored towards undoing local changes in the working directory and staging index.

git reset will move the HEAD ref pointer, whereas git revert will simply revert a commit and apply the undo via a new commit to the HEAD. It's also important to note that we should never use git reset when any subsequent snapshots have been pushed to a shared repository. We must assume that other developers are reliant upon published commits.

       

New Features in Java 16


1. Overview

Java 16, released on the 16th of March 2021, is the latest short-term incremental release building on Java 15. This release comes with some interesting features, such as records and sealed classes.

In this article, we'll explore some of these new features.

2. Invoke Default Methods From Proxy Instances (JDK-8159746)

As an enhancement to default methods in interfaces, with the release of Java 16, java.lang.reflect.InvocationHandler now supports invoking the default methods of an interface via a dynamic proxy using reflection.

To illustrate this, let's look at a simple default method example:

interface HelloWorld {
    default String hello() {
        return "world";
    }
}

With this enhancement, we can invoke the default method on a proxy of that interface using reflection:

Object proxy = Proxy.newProxyInstance(getSystemClassLoader(), new Class<?>[] { HelloWorld.class },
    (prox, method, args) -> {
        if (method.isDefault()) {
            return InvocationHandler.invokeDefault(prox, method, args);
        }
        // ...
    }
);
Method method = proxy.getClass().getMethod("hello");
assertThat(method.invoke(proxy)).isEqualTo("world");

3. Day Period Support (JDK-8247781)

A new addition to the DateTimeFormatter is the period-of-day symbol “B“, which provides an alternative to the am/pm format:

LocalTime date = LocalTime.parse("15:25:08.690791");
DateTimeFormatter formatter = DateTimeFormatter.ofPattern("h B");
assertThat(date.format(formatter)).isEqualTo("3 in the afternoon");

Instead of something like “3pm”, we get an output of “3 in the afternoon”. We can also use the “B”, “BBBB”, or “BBBBB” DateTimeFormatter patterns for the short, full, and narrow styles, respectively.

4. Add Stream.toList Method (JDK-8180352)

The aim is to reduce the boilerplate with some commonly used Stream collectors, such as Collectors.toList and Collectors.toSet:

List<String> integersAsString = Arrays.asList("1", "2", "3");
List<Integer> ints = integersAsString.stream().map(Integer::parseInt).collect(Collectors.toList());
List<Integer> intsEquivalent = integersAsString.stream().map(Integer::parseInt).toList();

Our ints example works the old way, but the intsEquivalent has the same result and is more concise.

5. Vector API Incubator (JEP-338)

The Vector API is in its initial incubation phase for Java 16. The idea of this API is to provide a means of vector computations that will ultimately be able to perform more optimally (on supporting CPU architectures) than the traditional scalar method of computations.

Let's look at how we might traditionally multiply two arrays:

int[] a = {1, 2, 3, 4};
int[] b = {5, 6, 7, 8};
var c = new int[a.length];
for (int i = 0; i < a.length; i++) {
    c[i] = a[i] * b[i];
}

This example of a scalar computation will, for an array of length 4, execute in 4 cycles. Now, let's look at the equivalent vector-based computation:

int[] a = {1, 2, 3, 4};
int[] b = {5, 6, 7, 8};
var c = new int[a.length];
var vectorA = IntVector.fromArray(IntVector.SPECIES_128, a, 0);
var vectorB = IntVector.fromArray(IntVector.SPECIES_128, b, 0);
var vectorC = vectorA.mul(vectorB);
vectorC.intoArray(c, 0);

The first thing we do in the vector-based code is create two IntVectors from our input arrays using this class's static factory method fromArray. The first parameter is the species of the vector, which determines its size in bits, followed by the array and the offset (here set to 0). The most important thing here is the vector size, which we're setting to 128 bits. In Java, each int takes 4 bytes to hold.

Since we have an input array of 4 ints, it takes 128 bits to store. Our single Vector can store the whole array.

On certain architectures, the compiler will be able to optimize the bytecode to reduce the computation from 4 cycles to just 1. These optimizations benefit areas such as machine learning and cryptography.

We should note that being in the incubation stage means this Vector API is subject to change with newer releases.

6. Records (JEP-395)

Records were introduced in Java 14. Java 16 brings some incremental changes.

Records are similar to enums in the fact that they are a restricted form of class. Defining a record is a concise way of defining an immutable data holding object.

6.1. Example Without Records

First, let's define a Book class:

public final class Book {
    private final String title;
    private final String author;
    private final String isbn;
    public Book(String title, String author, String isbn) {
        this.title = title;
        this.author = author;
        this.isbn = isbn;
    }
    public String getTitle() {
        return title;
    }
    public String getAuthor() {
        return author;
    }
    public String getIsbn() {
        return isbn;
    }
    @Override
    public boolean equals(Object o) {
        // ...
    }
    @Override
    public int hashCode() {
        return Objects.hash(title, author, isbn);
    }
}

Creating simple data holding classes in Java requires a lot of boilerplate code. This can be cumbersome and lead to bugs where developers don't provide all the necessary methods, such as equals and hashCode.

Similarly, sometimes developers skip the necessary steps for creating proper immutable classes. Sometimes we end up reusing a general-purpose class rather than defining a specialist one for each different use case.

Most modern IDEs provide an ability to auto-generate code (such as setters, getters, constructors, etc.) that helps mitigate these issues and reduces the overhead on a developer writing the code. However, Records provide an inbuilt mechanism to reduce the boilerplate code and create the same result.

6.2. Example with Records

Here is Book re-written as a Record:

public record Book(String title, String author, String isbn) {
}

By using the record keyword, we have reduced the Book class to two lines. This makes it a lot easier and less error-prone.

6.3. New Additions to Records in Java 16

With the release of Java 16, we can now define records as class members of inner classes. This is possible because of relaxed restrictions that didn't make it into Java 15's incremental release under JEP-384:

class OuterClass {
    class InnerClass {
        Book book = new Book("Title", "author", "isbn");
    }
}

7. Pattern Matching for instanceof (JEP-394)

Pattern matching for the instanceof keyword has been added as of Java 16.

Previously we might write code like this:

Object obj = "TEST";
if (obj instanceof String) {
    String t = (String) obj;
    // do some logic...
}

Instead of purely focusing on the logic needed for the application, this code must first check the type of obj with instanceof, then cast the object to a String and assign it to a new variable t.

With the introduction of pattern matching, we can re-write this code:

Object obj = "TEST";
if (obj instanceof String t) {
    // do some logic
}

We can now declare a variable – in this instance t – as part of the instanceof check.

8. Sealed Classes (JEP-397)

Sealed classes, first introduced in Java 15, provide a mechanism to determine which sub-classes are allowed to extend or implement a parent class or interface.

8.1. Example

Let's illustrate this by defining an interface and two implementing classes:

public sealed interface JungleAnimal permits Monkey, Snake  {
}
public final class Monkey implements JungleAnimal {
}
public non-sealed class Snake implements JungleAnimal {
}

The sealed keyword is used in conjunction with the permits keyword to determine exactly which classes are allowed to implement this interface. In our example, this is Monkey and Snake. 

All inheriting classes of a sealed class must be marked with one of the following:

  • sealed – meaning they must define what classes are permitted to inherit from them, using the permits keyword
  • final – preventing any further subclasses
  • non-sealed – allowing any class to inherit from them

A significant benefit of sealed classes is that they allow for exhaustive pattern matching checking without the need for a catch for all non-covered cases. For example, using our defined classes, we can have logic to cover all possible subclasses of JungleAnimal:

JungleAnimal j = // some JungleAnimal instance
if (j instanceof Monkey m) {
    // do logic
} else if (j instanceof Snake s) {
    // do logic
}

We don't need an else block as the sealed classes only allow the two possible subtypes of Monkey and Snake.

8.2. New Additions to Sealed Classes in Java 16

There are a few additions to sealed classes in Java 16:

  • The Java language recognizes sealed, non-sealed, and permits as contextual keywords (similar to abstract and extends)
  • Restrict the ability to create local classes that are subclasses of a sealed class (similar to the inability to create anonymous classes of sealed classes).
  • Stricter checks when casting sealed classes and classes derived from sealed classes

9. Other Changes

Continuing from JEP-383 in the Java 15 release, the Foreign Linker API provides a flexible way to access native code on the host machine. Initially aimed at C language interoperability, it may in the future be adapted to other languages such as C++ or Fortran. The goal of this feature is to eventually replace the Java Native Interface.

Another important change is that JDK internals are now strongly encapsulated by default. They have still been accessible since Java 9, albeit with warning messages. Now, however, the JVM requires the argument --illegal-access=permit to allow such access. This will affect all libraries and apps (particularly when it comes to testing) that currently use JDK internals directly and simply ignore the warning messages.

10. Conclusion

In this article, we covered some of the features and changes introduced as part of the incremental Java 16 release. The complete list of changes in Java 16 is in the JDK release notes.

As always, all the code in this post can be found over on GitHub.

       

Streaming with gRPC in Java


1. Overview

gRPC is a platform to do inter-process Remote Procedure Calls (RPC). It follows a client-server model, is highly performant, and supports the most important programming languages. Check out our article Introduction to gRPC for a good review.

In this tutorial, we'll focus on gRPC streams. Streaming allows multiplexing messages between servers and clients, creating very efficient and flexible inter-process communications.

2. Basics of gRPC Streaming

gRPC uses the HTTP/2 network protocol to do inter-service communications. One key advantage of HTTP/2 is that it supports streams. Each stream can multiplex multiple bidirectional messages sharing a single connection.

In gRPC, we can have streaming with three functional call types:

  1. Server streaming RPC: The client sends a single request to the server and gets back several messages that it reads sequentially.
  2. Client streaming RPC: The client sends a sequence of messages to the server. The client waits for the server to process the messages and reads the returned response.
  3. Bidirectional streaming RPC: The client and server can send multiple messages back and forth. The messages are received in the same order that they were sent. However, the server or client can respond to the received messages in the order that they choose.

To demonstrate how to use these procedural calls, we'll write a simple client-server application example that exchanges information on stock securities.

3. Service Definition

We use stock_quote.proto to define the service interface and the structure of the payload messages:

service StockQuoteProvider {
  
  rpc serverSideStreamingGetListStockQuotes(Stock) returns (stream StockQuote) {}
  rpc clientSideStreamingGetStatisticsOfStocks(stream Stock) returns (StockQuote) {}
  
  rpc bidirectionalStreamingGetListsStockQuotes(stream Stock) returns (stream StockQuote) {}
}
message Stock {
   string ticker_symbol = 1;
   string company_name = 2;
   string description = 3;
}
message StockQuote {
   double price = 1;
   int32 offer_number = 2;
   string description = 3;
}

The StockQuoteProvider service has three method types that support message streaming. In the next section, we'll cover their implementations.

We see from the service's method signatures that the client queries the server by sending Stock messages. The server sends the response back using StockQuote messages.

We use the protobuf-maven-plugin defined in the pom.xml file to generate the Java code from the stock_quote.proto IDL file.

The plugin generates client-side stubs and server-side code in the target/generated-sources/protobuf/java and /grpc-java directories.

We're going to leverage the generated code to implement our server and client.

4. Server Implementation

The StockServer constructor uses the gRPC Server to listen to and dispatch incoming requests:

public class StockServer {
    private int port;
    private io.grpc.Server server;
    public StockServer(int port) throws IOException {
        this.port = port;
        server = ServerBuilder.forPort(port)
          .addService(new StockService())
          .build();
    }
    //...
}
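
The rest of the class (elided above) typically starts the server and keeps it running until the JVM shuts down. A minimal sketch of what that could look like, using only the standard io.grpc.Server methods:

public void start() throws IOException, InterruptedException {
    server.start();
    // stop the server gracefully when the JVM is shut down
    Runtime.getRuntime().addShutdownHook(new Thread(server::shutdown));
    // block the main thread until the server terminates
    server.awaitTermination();
}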

We add StockService to the io.grpc.Server. StockService extends StockQuoteProviderImplBase, which the protobuf plugin generated from our proto file. Therefore, StockQuoteProviderImplBase has stubs for the three streaming service methods.

StockService needs to override these stub methods to do the actual implementation of our service.

Next, we're going to see how this is done for the three streaming cases.

4.1. Server-Side Streaming

The client sends a single request for a quote and gets back several responses, each with different prices offered for the commodity:

@Override
public void serverSideStreamingGetListStockQuotes(Stock request, StreamObserver<StockQuote> responseObserver) {
    for (int i = 1; i <= 5; i++) {
        StockQuote stockQuote = StockQuote.newBuilder()
          .setPrice(fetchStockPriceBid(request))
          .setOfferNumber(i)
          .setDescription("Price for stock:" + request.getTickerSymbol())
          .build();
        responseObserver.onNext(stockQuote);
    }
    responseObserver.onCompleted();
}

The method creates a StockQuote, fetches the prices, and marks the offer number. For each offer, it sends a message to the client by invoking responseObserver::onNext. It uses responseObserver::onCompleted to signal that it's done with the RPC.

4.2. Client-Side Streaming

The client sends multiple stocks and the server returns back a single StockQuote:

@Override
public StreamObserver<Stock> clientSideStreamingGetStatisticsOfStocks(StreamObserver<StockQuote> responseObserver) {
    return new StreamObserver<Stock>() {
        int count;
        double price = 0.0;
        StringBuffer sb = new StringBuffer();
        @Override
        public void onNext(Stock stock) {
            count++;
            price += fetchStockPriceBid(stock);
            sb.append(":")
                .append(stock.getTickerSymbol());
        }
        @Override
        public void onCompleted() {
            responseObserver.onNext(StockQuote.newBuilder()
                .setPrice(price / count)
                .setDescription("Statistics-" + sb.toString())
                .build());
            responseObserver.onCompleted();
        }
        // handle onError() ...
    };
}

The method gets a StreamObserver<StockQuote> as a parameter to respond to the client. It returns a StreamObserver<Stock>, where it processes the client request messages.

The returned StreamObserver<Stock> overrides onNext() to get notified each time the client sends a request.

The method StreamObserver<Stock>.onCompleted() is called when the client has finished sending all the messages. With all the Stock messages that we have received, we find the average of the fetched stock prices, create a StockQuote, and invoke responseObserver::onNext to deliver the result to the client.

Finally, we override StreamObserver<Stock>.onError() to handle abnormal terminations.

4.3. Bidirectional Streaming

The client sends several stocks and the server returns a set of prices for each request:

@Override
public StreamObserver<Stock> bidirectionalStreamingGetListsStockQuotes(StreamObserver<StockQuote> responseObserver) {
    return new StreamObserver<Stock>() {
        @Override
        public void onNext(Stock request) {
            for (int i = 1; i <= 5; i++) {
                StockQuote stockQuote = StockQuote.newBuilder()
                  .setPrice(fetchStockPriceBid(request))
                  .setOfferNumber(i)
                  .setDescription("Price for stock:" + request.getTickerSymbol())
                  .build();
                responseObserver.onNext(stockQuote);
            }
        }
        @Override
        public void onCompleted() {
            responseObserver.onCompleted();
        }
        //handle OnError() ...
    };
}

We have the same method signature as in the previous example. What changes is the implementation: We don't wait for the client to send all the messages before we respond.

In this case, we invoke responseObserver::onNext immediately after receiving each incoming message, and in the same order that it was received.

It's important to notice that we could have easily changed the order of the responses if needed.

5. Client Implementation

The constructor of StockClient takes a gRPC channel and instantiates the stub classes generated by the gRPC Maven plugin:

public class StockClient {
    private StockQuoteProviderBlockingStub blockingStub;
    private StockQuoteProviderStub nonBlockingStub;
    public StockClient(Channel channel) {
        blockingStub = StockQuoteProviderGrpc.newBlockingStub(channel);
        nonBlockingStub = StockQuoteProviderGrpc.newStub(channel);
    }
    // ...
}

StockQuoteProviderBlockingStub and StockQuoteProviderStub support making synchronous and asynchronous client method requests.
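
For completeness, the Channel we pass in is typically a ManagedChannel created in the client's main method. Here's a minimal sketch; the host and port are purely illustrative:

ManagedChannel channel = ManagedChannelBuilder.forAddress("localhost", 8980)
  .usePlaintext()
  .build();
StockClient client = new StockClient(channel);
// ... invoke the streaming client methods, then release the channel's resources
channel.shutdown().awaitTermination(5, TimeUnit.SECONDS);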

We're going to see the client implementation for the three streaming RPCs next.

5.1. Client RPC with Server-Side Streaming

The client makes a single call to the server requesting a stock price and gets back a list of quotes:

public void serverSideStreamingListOfStockPrices() {
    Stock request = Stock.newBuilder()
      .setTickerSymbol("AU")
      .setCompanyName("Austich")
      .setDescription("server streaming example")
      .build();
    Iterator<StockQuote> stockQuotes;
    try {
        logInfo("REQUEST - ticker symbol {0}", request.getTickerSymbol());
        stockQuotes = blockingStub.serverSideStreamingGetListStockQuotes(request);
        for (int i = 1; stockQuotes.hasNext(); i++) {
            StockQuote stockQuote = stockQuotes.next();
            logInfo("RESPONSE - Price #" + i + ": {0}", stockQuote.getPrice());
        }
    } catch (StatusRuntimeException e) {
        logInfo("RPC failed: {0}", e.getStatus());
    }
}

We use blockingStub::serverSideStreamingGetListStockQuotes to make a synchronous request. We get back the list of StockQuotes through the returned Iterator.

5.2. Client RPC with Client-Side Streaming

The client sends a stream of Stocks to the server and gets back a StockQuote with some statistics:

public void clientSideStreamingGetStatisticsOfStocks() throws InterruptedException {
    StreamObserver<StockQuote> responseObserver = new StreamObserver<StockQuote>() {
        @Override
        public void onNext(StockQuote summary) {
            logInfo("RESPONSE, got stock statistics - Average Price: {0}, description: {1}", summary.getPrice(), summary.getDescription());
        }
        @Override
        public void onCompleted() {
            logInfo("Finished clientSideStreamingGetStatisticsOfStocks");
        }
        // Override OnError ...
    };
    
    StreamObserver<Stock> requestObserver = nonBlockingStub.clientSideStreamingGetStatisticsOfStocks(responseObserver);
    try {
        for (Stock stock : stocks) {
            logInfo("REQUEST: {0}, {1}", stock.getTickerSymbol(), stock.getCompanyName());
            requestObserver.onNext(stock);
        }
    } catch (RuntimeException e) {
        requestObserver.onError(e);
        throw e;
    }
    requestObserver.onCompleted();
}

As we did with the server example, we use StreamObservers to send and receive messages.

The requestObserver uses the non-blocking stub to send the list of Stocks to the server.

With responseObserver, we get back the StockQuote with some statistics.

5.3. Client RPC with Bidirectional Streaming

The client sends a stream of Stocks and gets back a list of prices for each Stock.

public void bidirectionalStreamingGetListsStockQuotes() throws InterruptedException{
    StreamObserver<StockQuote> responseObserver = new StreamObserver<StockQuote>() {
        @Override
        public void onNext(StockQuote stockQuote) {
            logInfo("RESPONSE price#{0} : {1}, description:{2}", stockQuote.getOfferNumber(), stockQuote.getPrice(), stockQuote.getDescription());
        }
        @Override
        public void onCompleted() {
            logInfo("Finished bidirectionalStreamingGetListsStockQuotes");
        }
        //Override onError() ...
    };
    
    StreamObserver<Stock> requestObserver = nonBlockingStub.bidirectionalStreamingGetListsStockQuotes(responseObserver);
    try {
        for (Stock stock : stocks) {
            logInfo("REQUEST: {0}, {1}", stock.getTickerSymbol(), stock.getCompanyName());
            requestObserver.onNext(stock);
            Thread.sleep(200);
        }
    } catch (RuntimeException e) {
        requestObserver.onError(e);
        throw e;
    }
    requestObserver.onCompleted();
}

The implementation is quite similar to the client-side streaming case. We send the Stocks with the requestObserver — the only difference is that now we get multiple responses with the responseObserver. The responses are decoupled from the requests — they can arrive in any order.

6. Running the Server and Client

After using Maven to compile the code, we just need to open two command windows.

To run the server:

mvn exec:java -Dexec.mainClass=com.baeldung.grpc.streaming.StockServer

To run the client:

mvn exec:java -Dexec.mainClass=com.baeldung.grpc.streaming.StockClient

7. Conclusion

In this article, we've seen how to use streaming in gRPC. Streaming is a powerful feature that allows clients and servers to communicate by sending multiple messages over a single connection. Furthermore, the messages are received in the same order as they were sent, but either side can read or write the messages in any order they wish.

The source code of the examples can be found over on GitHub.

       

Java Weekly, Issue 404


1. Spring and Java

>> JDK 17 G1/Parallel GC changes [tschatzl.github.io]

One more reason to upgrade to Java 17 – reduced GC pauses, significant memory savings, and enhanced Windows support on G1 and Parallel GC.

>> Finalizing the Foreign APIs [inside.java]

A detailed take on making the Foreign Memory API in Java tighter, simpler, and much safer to use!

>> Running Scheduled Jobs in Spring Boot [reflectoring.io]

A practical guide on different approaches for scheduling jobs in Spring Boot: fixed-delay, fixed-rates, good old CRON, and distributed jobs!

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical & Musings

>> What is an A/B Test? [netflixtechblog.com]

Turning ideas to testable hypotheses at Netflix's scale – covering the basics of A/B testing and its superiority over plain old blind rollouts!

>> Introducing Single Pod Access Mode for PersistentVolumes [kubernetes.io]

Let's get familiar with a new PV and PVC access mode in K8S 1.22: mounting a volume as read-write by a single pod.

Also worth reading:

3. Comics

And my favorite Dilberts of the week:

>> Roi Of Lying [dilbert.com]

>> Updated Opinion  [dilbert.com]

>> Long Range Plan [dilbert.com]

4. Pick of the Week

>> The 1000 Day Rule : What Living the Dream Really Looks Like [tropicalmba.com]

       

Consistency Levels in Cassandra


1. Overview

Apache Cassandra is an open-source, NoSQL, highly available, and scalable distributed database. To achieve high availability, Cassandra relies on the replication of data across clusters.

In this tutorial, we'll learn how Cassandra gives us control over the consistency of data while replicating it for high availability.

2. Data Replication

Data replication refers to storing copies of each row in multiple nodes. The reason for data replication is to ensure reliability and fault tolerance. Consequently, if any node fails for any reason, the replication strategy makes sure that the same data is available in other nodes.

The replication factor (RF) specifies how many nodes across the cluster would store the replicas.

There are two available replication strategies:

The SimpleStrategy is used for a single data center and one rack topology. First, Cassandra uses partitioner logic to determine the node to place the row. Then it puts additional replicas on the next nodes clockwise in the ring.

The NetworkTopologyStrategy is generally used for multiple datacenters and multiple racks. Additionally, it allows you to specify a different replication factor for each data center. Within a data center, it allocates replicas to different racks to maximize availability.

3. Consistency Level

Consistency indicates how recent and in-sync all replicas of a row of data are. With the replication of data across the distributed system, achieving data consistency is a very complicated task.

Cassandra prefers availability over consistency. It doesn't optimize for consistency. Instead, it gives you the flexibility to tune the consistency depending on your use case. In most use cases, Cassandra relies on eventual consistency.

Let's look at consistency level impact during the write and read of data.

4. Consistency Level (CL) On Write

For write operations, the consistency level specifies how many replica nodes must acknowledge back before the coordinator successfully reports back to the client. More importantly, the number of nodes that acknowledge (for a given consistency level) and the number of nodes storing replicas (for a given RF) are mostly different.

For example, with the consistency level ONE and RF = 3, even though only one replica node acknowledges back for a successful write operation, Cassandra asynchronously replicates the data to 2 other nodes in the background.

Let's look at some of the consistency level options available for the write operation to be successful.

The consistency level ONE means it needs acknowledgment from only one replica node. Since only one replica needs to acknowledge, the write operation is fastest in this case.

The consistency level QUORUM means it needs acknowledgment from 51% or a majority of replica nodes across all datacenters.

The consistency level of LOCAL_QUORUM means it needs acknowledgment from 51% or a majority of replica nodes just within the same datacenter as the coordinator. Thus, it avoids the latency of inter-datacenter communication.

The consistency level of ALL means it needs acknowledgment from all the replica nodes. Since all replica nodes need to acknowledge, the write operation is the slowest in this case. Moreover, if one of the replica nodes is down during the write operation, it fails, and availability suffers. Therefore, the best practice is not to use this option in production deployment.

We can configure the consistency level for each write query or at the global query level.
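
For instance, with the DataStax Java driver (4.x syntax; the session, table, and values here are just an illustration), the consistency level can be set on an individual statement:

SimpleStatement insert = SimpleStatement
  .newInstance("INSERT INTO orders (id, status) VALUES (?, ?)", orderId, "NEW")
  .setConsistencyLevel(DefaultConsistencyLevel.LOCAL_QUORUM);
session.execute(insert);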

The diagram below shows a couple of examples of CL on write:

5. Consistency Level (CL) On Read

For read operations, the consistency level specifies how many replica nodes must respond with the latest consistent data before the coordinator successfully sends the data back to the client.

Let's look at some of the consistency level options available for the read operation where Cassandra successfully returns data.

The consistency level ONE means only one replica node returns the data. The data retrieval is fastest in this case.

The consistency level QUORUM means 51% or a majority of replica nodes across all datacenters respond. Then the coordinator returns the data to the client. In the case of multiple data centers, the latency of inter-datacenter communication results in a slow read.

The consistency level of LOCAL_QUORUM means 51% or a majority of the replica nodes within the same datacenter as the coordinator respond. Then the coordinator returns the data to the client. Thus, it avoids the latency of inter-datacenter communication.

The consistency level of ALL means all the replica nodes respond, then the coordinator returns the data to the client. Since all replica nodes need to acknowledge, the read operation is the slowest in this case. Moreover, if one of the replica nodes is down during the read operation, it fails, and availability suffers. The best practice is not to use this option in production deployment.

We can configure the consistency level for each read query or at the global query level.

The diagram below shows a couple of examples of CL on read:

6. Strong Consistency

Strong consistency means we always read the most recently written data in the cluster, no matter how much time has passed between the latest write and a subsequent read.

We saw in the earlier sections how we could specify desired consistency level (CL) for writes and reads.

Strong consistency can be achieved if W + R > RF, where R is the read CL replica count, W is the write CL replica count, and RF is the replication factor.

In this scenario, we get strong consistency since every client read always fetches the most recently written data.

Let's look at a couple of examples of strong consistency levels:

6.1. Write CL = QUORUM and Read CL = QUORUM

If RF = 3, W = QUORUM or LOCAL_QUORUM, R = QUORUM or LOCAL_QUORUM, then W (2) + R (2) > RF (3)

In this case, the write operation makes sure two replicas have the latest data. Then the read operation also makes sure it receives the data successfully only if at least two replicas respond with consistent latest data.

6.2. Write CL = ALL and Read CL = ONE

If RF = 3, W = ALL, R = ONE, then W (3) + R (1) > RF (3)

In this case, once the coordinator writes data to all the replicas, the write operation is successful. Then it's enough to read the data from one of those replicas to make sure we read the latest written data.

But as we learned earlier, write CL of ALL is not fault-tolerant, and the availability suffers.

7. Conclusion

In this article, we looked at data replication in Cassandra. We also learned about the different consistency level options available on data write and read. Additionally, we looked at a couple of examples to achieve strong consistency.

       

Snapshotting Aggregates in Axon


1. Overview

In this article, we'll be looking at how Axon supports aggregate snapshotting.

We consider this article to be an expansion of our main guide on Axon. As such, we'll utilize both Axon Framework and Axon Server again. We'll use the former for this article's implementation, with the latter serving as the event store and message router.

2. Aggregate Snapshotting

Let's start by understanding what snapshotting an aggregate means. When we start with Event Sourcing in an application, a natural question is how do I keep sourcing an aggregate performant in my application? Although there are several optimization options, the most straightforward is to introduce snapshotting.

Aggregate snapshotting is the process of storing a snapshot of the aggregate state to improve loading. When snapshotting is incorporated, loading an aggregate before command handling becomes a two-step process:

  1. Retrieve the most recent snapshot, if any, and use it to source the aggregate. The snapshot carries a sequence number, defining up until which point it represents the aggregate's state.
  2. Retrieve the remainder of events starting from the snapshot's sequence, and source the rest of the aggregate.

If snapshotting should be enabled, a process that triggers the creation of snapshots is required. The snapshot creation process should ensure the snapshot resembles the entire aggregate state at its creation point. Lastly, the aggregate loading mechanism (read: the repository) should first load a snapshot, and then, any remaining events.

3. Aggregate Snapshotting in Axon

Axon Framework supports snapshotting of aggregates. For a complete overview of this process, check out this section of Axon's reference guide.

Within the framework, the snapshotting process consists of two main components:

The Snapshotter is the component that constructs the snapshot for an aggregate instance. By default, the framework will use the entire aggregate's state as the snapshot.

The SnapshotTriggerDefinition defines the trigger towards the Snapshotter to construct a snapshot. A trigger can be:

  • after a set amount of events, or
  • once loading takes a certain amount, or
  • at set moments in time.

The storage and retrieval of snapshots reside with the event store and the aggregate's Repository. To that end, the event store contains a distinct section to store the snapshots. In Axon Server, a separate snapshots file reflects this section.

Snapshot loading is done by the repository, consulting the event store for this. As such, loading an aggregate, and incorporating a snapshot, are wholly taken care of by the framework.

4. Configuring Snapshotting

We will be looking at the Order domain introduced in the previous article. Snapshot construction, storage, and loading are already taken care of by the Snapshotter, event store, and repository.

Hence, to introduce snapshotting to the OrderAggregate, we only have to configure the SnapshotTriggerDefinition.

4.1. Defining a Snapshot Trigger

 Since the application uses Spring, we can add a SnapshotTriggerDefinition to the Application Context. To that end, we add a Configuration class:

@Configuration
public class OrderApplicationConfiguration {
    @Bean
    public SnapshotTriggerDefinition orderAggregateSnapshotTriggerDefinition(
      Snapshotter snapshotter,
      @Value("${axon.aggregate.order.snapshot-threshold:250}") int threshold) {
        return new EventCountSnapshotTriggerDefinition(snapshotter, threshold);
    }
}

In this case, we chose the EventCountSnapshotTriggerDefinition. This definition triggers the creation of a snapshot once the event count for an aggregate reaches the threshold. Note that the threshold is configurable through a property.

The definition also needs the Snapshotter, which Axon adds to the Application Context automatically. Hence, it can be wired as a parameter when constructing the trigger definition.

Another implementation we could've used is the AggregateLoadTimeSnapshotTriggerDefinition. This definition triggers the creation of a snapshot if loading the aggregate exceeds the loadTimeMillisThreshold. Lastly, since it's a snapshot trigger, it also requires the Snapshotter to construct the snapshot.
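
If we wanted that behavior instead, the bean definition would look very similar. Here's a sketch, where the 500 ms threshold is purely illustrative:

@Bean
public SnapshotTriggerDefinition orderAggregateLoadTimeSnapshotTriggerDefinition(Snapshotter snapshotter) {
    // create a snapshot whenever sourcing the aggregate takes longer than 500 ms
    return new AggregateLoadTimeSnapshotTriggerDefinition(snapshotter, 500);
}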

4.2. Using the Snapshot Trigger

Now that the SnapshotTriggerDefinition is part of the application, we need to set it for the OrderAggregate. Axon's Aggregate annotation allows us to specify the bean name of the snapshot trigger. 

Setting the bean name on the annotation will automatically configure the trigger definition for the aggregate:

@Aggregate(snapshotTriggerDefinition = "orderAggregateSnapshotTriggerDefinition")
public class OrderAggregate {
    // state, command handlers and event sourcing handlers omitted
}

By setting the snapshotTriggerDefinition to equal the bean name of the constructed definition, we instruct the framework to configure it for this aggregate.

5. Snapshotting in Action

The configuration sets the trigger definition threshold to ‘250.' This setting means that the framework constructs a snapshot after 250 events are published. Although this is a reasonable default for most applications, this prolongs our test.

So to perform a test, we will adjust the axon.aggregate.order.snapshot-threshold property to ‘5.' Now, we can more easily test whether snapshotting works.

To that end, we start Axon Server and the Order application. After issuing sufficient commands to an OrderAggregate to generate five events, we can check if the application stored a snapshot by searching in the Axon Server Dashboard.

To search for snapshots, we need to click the ‘Search' button in the left tab, select ‘Snapshots' in the top left corner, and click the orange ‘Search' button to the right. The resulting table should then show a single snapshot entry.

6. Conclusion

In this article, we looked at what aggregate snapshotting is and how Axon Framework supports this concept.

The only thing required to enable snapshotting is the configuration of a SnapshotTriggerDefinition on the aggregate. The job of creation, storage, and retrieval of snapshots, is all taken care of for us.

You can find the implementation of the Order application and the code snippets over on GitHub. For any additional questions on this topic, also check out Discuss AxonIQ.

       

Differences Between applicationContext.xml and spring-servlet.xml in Spring


1. Introduction

When developing a Spring application, it is necessary to tell the framework where to look for beans. When the application starts, the framework locates and registers all of them for further execution. Similarly, we need to define the mapping where all incoming requests to the web application will be processed.

All Java web frameworks are built on top of the Servlet API. In a web application, three files play a vital role. Usually, we chain them in this order: web.xml -> applicationContext.xml -> spring-servlet.xml

In this article, we'll look at the differences between the applicationContext and spring-servlet.

2. applicationContext.xml

Inversion of control (IoC) is the core of the Spring framework. In an IoC-enabled framework, a container is usually responsible for creating, configuring, and destroying objects. In Spring, applicationContext plays the role of an IoC container.

When developing a standard J2EE application, we declare the ContextLoaderListener in the web.xml file. In addition, a contextConfigLocation is also defined to indicate the XML configuration file.

<context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>/WEB-INF/applicationContext*.xml</param-value>
</context-param>

When the application starts, Spring loads this configuration file and uses it to create a WebApplicationContext object. In the absence of contextConfigLocation, the system will look for /WEB-INF/applicationContext.xml to load by default.

In short, applicationContext is the central interface in Spring. It provides configuration information for an application.

In this file, we provide the configurations related to the application. Usually, those are the basic data source, the property placeholder file, and the message source for project localization, among other enhancements.

Let's look at the sample file:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns:c="http://www.springframework.org/schema/c"
  xmlns:p="http://www.springframework.org/schema/p"
  xmlns:context="http://www.springframework.org/schema/context"
  xsi:schemaLocation="http://www.springframework.org/schema/beans
  http://www.springframework.org/schema/beans/spring-beans-4.1.xsd
  http://www.springframework.org/schema/context
  http://www.springframework.org/schema/context/spring-context-4.1.xsd">
    <context:property-placeholder location="classpath:/database.properties" />
    <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource">
        <property name="driverClassName" value="${jdbc.driverClassName}" />
        <property name="url" value="${jdbc.url}" />
        <property name="username" value="${jdbc.username}" />
        <property name="password" value="${jdbc.password}" />
        <property name="initialSize" value="5" />
        <property name="maxActive" value="10" />
    </bean>
    <bean id="messageSource"
        class="org.springframework.context.support.ResourceBundleMessageSource">
        <property name="basename" value="messages" />
    </bean>
</beans>

ApplicationContext is a complete superset of the BeanFactory interface and, hence, provides all the functionalities of BeanFactory. It also provides the integrated lifecycle management, automatic registration of BeanPostProcessor and BeanFactoryPostProcessor, convenient access to MessageSource, and publication of ApplicationEvent.
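To illustrate the container role, we can retrieve the beans declared above programmatically through the ApplicationContext. The following sketch assumes the XML file is available on the classpath; in a web application, the WebApplicationContext created from this file plays the same role:

ApplicationContext context = new ClassPathXmlApplicationContext("applicationContext.xml");
// "dataSource" and "messageSource" are the bean ids declared in the XML above
BasicDataSource dataSource = context.getBean("dataSource", BasicDataSource.class);
MessageSource messageSource = context.getBean("messageSource", MessageSource.class);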

3. spring-servlet.xml

In Spring, a single front servlet takes the incoming requests and delegates them to the appropriate controller methods. The front servlet, based on the front controller design pattern, handles all the HTTP requests of a particular web application. This front servlet has full control over the incoming requests.

Similarly, spring-servlet acts as a front controller servlet and provides a single entry point. It takes the incoming URI. Behind the scenes, it uses a HandlerMapping implementation to define the mapping between requests and handler objects.

Let's look at the sample code:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
  xmlns:mvc="http://www.springframework.org/schema/mvc"
  xmlns:context="http://www.springframework.org/schema/context"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="
    http://www.springframework.org/schema/beans     
    http://www.springframework.org/schema/beans/spring-beans.xsd
    http://www.springframework.org/schema/mvc 
    http://www.springframework.org/schema/mvc/spring-mvc.xsd
    http://www.springframework.org/schema/context 
    http://www.springframework.org/schema/context/spring-context.xsd">
    <mvc:annotation-driven />
    <context:component-scan base-package="com.baeldung.controller" />
    <bean id="viewResolver"
      class="org.springframework.web.servlet.view.UrlBasedViewResolver">
	<property name="viewClass"
          value="org.springframework.web.servlet.view.JstlView" />
	<property name="prefix" value="/WEB-INF/jsp/" />
	<property name="suffix" value=".jsp" />
    </bean>
</beans>
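The component scan above picks up annotated controllers from the com.baeldung.controller package. As a hypothetical example, a simple handler whose view name the UrlBasedViewResolver would resolve to /WEB-INF/jsp/greeting.jsp might look like this:

@Controller
public class GreetingController {

    @RequestMapping(value = "/greeting", method = RequestMethod.GET)
    public String greeting() {
        // the returned view name resolves to /WEB-INF/jsp/greeting.jsp
        return "greeting";
    }
}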

4. applicationContext.xml vs. spring-servlet.xml

Let's look at a summarized view:

Feature | applicationContext.xml | spring-servlet.xml
Framework | It is part of the Spring framework. | It is part of the Spring MVC framework.
Purpose | A container that defines Spring beans. | A front controller that processes incoming requests.
Scope | It defines the beans that are shared among all servlets. | It defines servlet-specific beans only.
Manages | Global things like data sources and connection factories are defined in it. | Only web-related things like controllers and view resolvers are defined in it.
References | It cannot access the beans of spring-servlet. | It can access the beans defined in applicationContext.
Sharing | Properties common to the whole application go here. | Properties that are specific to one servlet only go here.
Scanning | We define the filters to include/exclude packages. | We declare the component scans for controllers.
Occurrence | It is common to define multiple context files in an application. | Similarly, we can define multiple files in a web application.
Loading | The file applicationContext.xml is loaded by ContextLoaderListener. | The file spring-servlet.xml is loaded by DispatcherServlet.
Required | Optional | Mandatory

5. Conclusion

In this tutorial, we learned about the applicationContext and spring-servlet files. Then, we discussed their role and responsibilities in a Spring application. In the end, we looked at the differences between them.

       

Get a Submap From a HashMap in Java


1. Overview

In our previous tutorial, A Guide to Java HashMap, we showed how to use HashMap in Java.

In this short tutorial, we'll learn how to get a submap from a HashMap based on a list of keys.

2. Use Java 8 Stream

For example, suppose we have a HashMap and a list of keys:

Map<Integer, String> map = new HashMap<>();
map.put(1, "A");
map.put(2, "B");
map.put(3, "C");
map.put(4, "D");
map.put(5, "E");
List<Integer> keyList = Arrays.asList(1, 2, 3);

We can use Java 8 streams to get a submap based on keyList:

Map<Integer, String> subMap = map.entrySet().stream()
  .filter(x -> keyList.contains(x.getKey()))
  .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
System.out.println(subMap);

The output will look like:

{1=A, 2=B, 3=C}

3. Use retainAll() Method

We can get the map's keySet and use the retainAll() method to remove all entries whose key is not in keyList:

map.keySet().retainAll(keyList);

Note that this method will edit the original map. If we don't want to affect the original map, we can create a new map first using a copy constructor of HashMap:

Map<Integer, String> newMap = new HashMap<>(map);
newMap.keySet().retainAll(keyList);
System.out.println(newMap);

The output is the same as above.

4. Conclusion

In summary, we've learned two methods to get a submap from a HashMap based on a list of keys.

       

Share Docker Images Without Using the Docker Hub


1. Overview

Let's suppose we need to share a Docker image that is present locally on our machine. To solve this problem, Docker Hub comes to the rescue.

Docker Hub is a cloud-based central repository where Docker images can be stored. So all we need to do is push our Docker image to the Docker Hub, and later, anyone can pull the same Docker image.

Being a cloud-based repository, Docker Hub requires additional network bandwidth to upload and download Docker images. Also, as the image size increases, the time needed to upload/download the image also increases. Hence, this method of sharing Docker images is not always useful.

In this tutorial, we'll discuss a way to share Docker images without using Docker Hub. This approach proves handy when the sender and receiver are connected to the same private network.

2. Save Docker Image as a tar Archive

Suppose there is a Docker image baeldung which we need to transfer from machine A to machine B. To achieve this, first, we'll convert the Docker image to a .tar file using the docker save command:

$ docker save --output baeldung.tar baeldung

The above command will create a tar archive named baeldung.tar. Alternatively, we can also use file redirection to achieve similar results:

$ docker save baeldung > baeldung.tar

The docker save command can also bundle multiple Docker images into a single tar archive:

$ docker save -o ubuntu.tar ubuntu:18.04 ubuntu:16.04 ubuntu:latest

3. Transfer the tar Archive

The tar archive that we created is present on machine A. Let's now transfer the baeldung.tar file to machine B. We can use protocols like scp or ftp.

This step is highly flexible and depends significantly on the environment where machine A and machine B are present.

4. Load tar Archive into the Docker Image

So far, we have created the tar archive of the Docker image and moved it to our target machine B.

Now, we'll create the actual Docker image from the tar archive baeldung.tar using the docker load command:

$ docker load --input baeldung.tar 
Loaded image: baeldung:latest

Again, we can also use redirection from the file to convert the tar archive:

$ docker load < baeldung.tar
Loaded image: baeldung:latest

Let's now verify whether the image is successfully loaded by running the docker images command:

$ docker images
baeldung                                        latest                            277bcd6563ce        About a minute ago       466MB

Note that if the Docker image, baeldung, is already present on the target machine (machine B in our example), then the docker load command will rename the tag of the existing image to an empty string <none>:

$ docker load --input baeldung.tar 
cfd97936a580: Loading layer [==================================================>]  466MB/466MB
The image baeldung:latest already exists, renaming the old one with ID sha256:
  277bcd6563ce2b71e43b7b6b7e12b830f5b329d21ab690d59f0fd85b01045574 to empty string

5. Drawbacks

Using this approach, we lose the freedom to reuse the cached layers of the Docker image. So, each time we run the docker save command, it'll create the tar archive of the entire Docker image.

Another drawback is that we need to maintain the Docker image versions manually by saving all the tar archives.

Hence, it is recommended to use this approach in the testing environment or when we have restricted access to Docker Hub.

6. Conclusion

In this tutorial, we learned about the docker save and docker load commands and how to transfer a Docker image using these commands.

We also went through the downsides involved and the ideal situations where this approach could prove efficient.

Email Validation in Java


1. Overview

In this tutorial, we'll learn how to validate email addresses in Java using regular expressions.

2. Email Validation in Java

Email validation is required in nearly every application that has user registration in place.

An email address is divided into three main parts: the local-part, an @ symbol, and a domain. For example, if “username@domain.com” is an email address, then:

  • local-part = username
  • @ = @
  • domain = domain.com

It can take a lot of effort to validate an email address through string manipulation techniques, as we typically need to count and check all the character types and lengths. But in Java, by using a regular expression, it can be much easier.

As we know, a regular expression is a sequence of characters to match patterns. In the following sections, we'll see how email validation can be performed by using several different regular expression methods.

3. Simple Regular Expression Validation

The simplest regular expression to validate an email address is ^(.+)@(\S+)$.

It only checks for the presence of the @ symbol in the email address. If it's present, the validation returns true; otherwise, it returns false. However, this regular expression doesn't check the local-part and domain of the email.

For example, according to this regular expression, username@domain.com will pass the validation, but username#domain.com will fail the validation.

Let's define a simple helper method to match the regex pattern:

public static boolean patternMatches(String emailAddress, String regexPattern) {
    return Pattern.compile(regexPattern)
      .matcher(emailAddress)
      .matches();
}

Now, let's also write the code to validate the email address using this regular expression:

@Test
public void testUsingSimpleRegex() {
    emailAddress = "username@domain.com";
    regexPattern = "^(.+)@(\\S+)$";
    assertTrue(EmailValidation.patternMatches(emailAddress, regexPattern));
}

The absence of the @ symbol in the email address will also fail the validation.

4. Strict Regular Expression Validation

Now let's write a more strict regular expression that will check the local-part as well as the domain-part of the email:

^(?=.{1,64}@)[A-Za-z0-9_-]+(\\.[A-Za-z0-9_-]+)*@[^-][A-Za-z0-9-]+(\\.[A-Za-z0-9-]+)*(\\.[A-Za-z]{2,})$

This regex imposes the following restrictions on the email address's local-part:

  • It allows numeric values from 0 to 9
  • Both uppercase and lowercase letters from a to z are allowed
  • Underscore “_”, hyphen “-”, and dot “.” are allowed
  • Dot isn't allowed at the start and end of the local-part
  • Consecutive dots aren't allowed
  • For the local-part, a maximum of 64 characters are allowed

Restrictions for the domain-part in this regular expression include:

  • It allows numeric values from 0 to 9
  • We allow both uppercase and lowercase letters from a to z
  • Hyphen “-” and dot “.” aren't allowed at the start and end of the domain-part
  • No consecutive dots

Let's also write the code to test out this regular expression:

@Test
public void testUsingStrictRegex() {
    emailAddress = "username@domain.com";
    regexPattern = "^(?=.{1,64}@)[A-Za-z0-9_-]+(\\.[A-Za-z0-9_-]+)*@" 
        + "[^-][A-Za-z0-9-]+(\\.[A-Za-z0-9-]+)*(\\.[A-Za-z]{2,})$";
    assertTrue(EmailValidation.patternMatches(emailAddress, regexPattern));
}

So some of the email addresses that'll be valid via this email validation technique are:

  • username@domain.com
  • user.name@domain.com
  • user-name@domain.com
  • username@domain.co.in
  • user_name@domain.com

Here is a short list of some email addresses that'll be invalid via this email validation:

  • username.@domain.com
  • .user.name@domain.com
  • user-name@domain.com.
  • username@.com

5. Regular Expression for Validating Non-Latin or Unicode Character Emails

The regex that we just saw in the previous section will work well for email addresses written in the English language, but it won't work for Non-Latin email addresses.

So let's write a regular expression that we can use to validate Unicode characters as well:

^(?=.{1,64}@)[\\p{L}0-9_-]+(\\.[\\p{L}0-9_-]+)*@[^-][\\p{L}0-9-]+(\\.[\\p{L}0-9-]+)*(\\.[\\p{L}]{2,})$

We can use this regex for validating the Unicode or Non-Latin email addresses to support all languages.

As we can see, this regex is similar to the strict regex that we built in the previous section, except that we have replaced the “A-Za-z” part with “\\p{L}”. This enables support for Unicode characters.

Let's check this regex by writing the test:

@Test
public void testUsingUnicodeRegex() {
    emailAddress = "用户名@领域.电脑";
    regexPattern = "^(?=.{1,64}@)[\\p{L}0-9_-]+(\\.[\\p{L}0-9_-]+)*@" 
        + "[^-][\\p{L}0-9-]+(\\.[\\p{L}0-9-]+)*(\\.[\\p{L}]{2,})$";
    assertTrue(EmailValidation.patternMatches(emailAddress, regexPattern));
}

Now, this regex not only takes a stricter approach to validating email addresses but also supports non-Latin characters.

6. Regular Expression by RFC 5322 for Email Validation

Instead of writing a custom regex to validate email addresses, we can use one provided by the RFC standards.

The RFC 5322, which is an updated version of RFC 822, provides a regular expression for email validation.

Let's check it out:

^[a-zA-Z0-9_!#$%&'*+/=?`{|}~^.-]+@[a-zA-Z0-9.-]+$

As we can see, it's a very simple regex that allows all the characters in the email.

However, it doesn't allow the pipe character (|) and single quote (‘) as these present a potential SQL injection risk when passed from the client site to the server.

Let's write the code to validate an email with this regex:

@Test
public void testUsingRFC5322Regex() {
    emailAddress = "username@domain.com";
    regexPattern = "^[a-zA-Z0-9_!#$%&'*+/=?`{|}~^.-]+@[a-zA-Z0-9.-]+$";
    assertTrue(EmailValidation.patternMatches(emailAddress, regexPattern));
}

7. Regular Expression to Check Characters in the Top-Level Domain

We have written regex to verify the email address's local and domain parts. Now let's also write a regex that checks the top-level domain of the email.

The below regular expression validates the top-level domain-part of the email address:

^[\\w!#$%&'*+/=?`{|}~^-]+(?:\\.[\\w!#$%&'*+/=?`{|}~^-]+)*@(?:[a-zA-Z0-9-]+\\.)+[a-zA-Z]{2,6}$

This regex basically checks that the domain part of the email address contains at least one dot and that the top-level domain contains a minimum of two and a maximum of six characters.

Let's also write some code to verify the email address by using this regex:

@Test
public void testTopLevelDomain() {
    emailAddress = "username@domain.com";
    regexPattern = "^[\\w!#$%&'*+/=?`{|}~^-]+(?:\\.[\\w!#$%&'*+/=?`{|}~^-]+)*" 
        + "@(?:[a-zA-Z0-9-]+\\.)+[a-zA-Z]{2,6}$";
    assertTrue(EmailValidation.patternMatches(emailAddress, regexPattern));
}

8. Regular Expression to Restrict Consecutive, Trailing, and Leading Dots

Now let's write a regex that restricts the usage of dots in email addresses:

^[a-zA-Z0-9_!#$%&'*+/=?`{|}~^-]+(?:\\.[a-zA-Z0-9_!#$%&'*+/=?`{|}~^-]+)*@[a-zA-Z0-9-]+(?:\\.[a-zA-Z0-9-]+)*$

The above regular expression is used to restrict consecutive, leading, and trailing dots. Thus, an email can contain more than one dot, but no consecutive dots, in both the local and domain parts.

Let's take a look at the code:

@Test
public void testRestrictDots() {
    emailAddress = "username@domain.com";
    regexPattern = "^[a-zA-Z0-9_!#$%&'*+/=?`{|}~^-]+(?:\\.[a-zA-Z0-9_!#$%&'*+/=?`{|}~^-]+)*@" 
        + "[a-zA-Z0-9-]+(?:\\.[a-zA-Z0-9-]+)*$";
    assertTrue(EmailValidation.patternMatches(emailAddress, regexPattern));
}

9. OWASP Validation Regular Expression

This regular expression is provided by the OWASP validation regex repository to check the email validation:

^[a-zA-Z0-9_+&*-]+(?:\\.[a-zA-Z0-9_+&*-]+)*@(?:[a-zA-Z0-9-]+\\.)+[a-zA-Z]{2,7}$

This regex also supports most validations of the standard email structure.

Let's also verify the email address by using the below code:

@Test
public void testOwaspValidation() {
    emailAddress = "username@domain.com";
    regexPattern = "^[a-zA-Z0-9_+&*-]+(?:\\.[a-zA-Z0-9_+&*-]+)*@(?:[a-zA-Z0-9-]+\\.)+[a-zA-Z]{2,7}$";
    assertTrue(EmailValidation.patternMatches(emailAddress, regexPattern));
}

10. Gmail Special Case for Emails

There's one special case that applies only to the Gmail domain: permission to use the + character in the local-part of the email. For the Gmail domain, the two email addresses username+something@gmail.com and username@gmail.com are the same.

In other words, user+name@gmail.com is delivered to the same mailbox as user@gmail.com.

We must implement a slightly different regex that will pass the email validation for this special case as well.

^(?=.{1,64}@)[A-Za-z0-9+_-]+(\\.[A-Za-z0-9+_-]+)*@[^-][A-Za-z0-9+-]+(\\.[A-Za-z0-9+-]+)*(\\.[A-Za-z]{2,})$

Let's also write an example to test this use case:

@Test
public void testGmailSpecialCase() {
    emailAddress = "username+something@domain.com";
    regexPattern = "^(?=.{1,64}@)[A-Za-z0-9\\+_-]+(\\.[A-Za-z0-9\\+_-]+)*@" 
        + "[^-][A-Za-z0-9\\+-]+(\\.[A-Za-z0-9\\+-]+)*(\\.[A-Za-z]{2,})$";
    assertTrue(EmailValidation.patternMatches(emailAddress, regexPattern));
}

11. Apache Commons Validator for Email

The Apache Commons Validator is a validation package that contains standard validation rules. So, by importing this package, we can apply email validation.

We can use the EmailValidator class to validate email, which uses RFC 822 standards. This Validator contains a mixture of custom code and regular expressions to validate an email. It not only supports the special characters but also supports the Unicode characters we've discussed.

Let's add the commons-validator dependency in our project:

<dependency>
    <groupId>commons-validator</groupId>
    <artifactId>commons-validator</artifactId>
    <version>${validator.version}</version>
</dependency>

Now, we can validate email addresses using the below code:

@Test
public void testUsingEmailValidator() {
    emailAddress = "username@domain.com";
    assertTrue(EmailValidator.getInstance()
      .isValid(emailAddress));
}

12. Which Regex Should I Use?

In this tutorial, we've looked at a variety of regex-based solutions for email address validation. Which solution we should use depends on how strict we want our validation to be and on our exact requirements.

For example, we can use the simple regex from section 3 if we just need to check for the presence of an @ symbol in an email. However, for more detailed validation, we can opt for the stricter regex solution from section 6 based on the RFC 5322 standard.

Finally, if we are dealing with Unicode characters in an email, we can go for the regex solution provided in section 5 of this tutorial.

13. Conclusion

In this tutorial, we've learned various ways to validate email addresses in Java using regular expressions.

The complete code for this tutorial is available over on GitHub.

       

JUnit 4 on How to Ignore a Base Test Class


1. Overview

This tutorial will discuss possible solutions to skip running tests from base test classes in JUnit 4. For the purposes of this tutorial, a base class contains only helper methods, while child classes extend it and run the actual tests.

2. Bypass Base Test Class

Let's assume we have a BaseUnitTest class with some helper methods:

public class BaseUnitTest {
    public void helperMethod() {
        // ...
    }
}

Now, let's extend it with a class that has tests:

public class ExtendedBaseUnitTest extends BaseUnitTest {
    @Test
    public void whenDoTest_thenAssert() {
        // ...        
    }
}

If we run tests, either with an IDE or a Maven build, we might get an error telling us about no runnable test methods in BaseUnitTest. We don't want to run tests in this class, so we're looking for a way to avoid this error.

We'll take a look at three different possibilities. If running tests with an IDE, we might get different results, depending on our IDE's plugin and how we configure it to run JUnit tests.

2.1. Rename Class

We can rename our class to a name that the build convention will exclude from running as a test. For example, if we're using Maven, we can check the defaults of the Maven Surefire Plugin.

We could change the name from BaseUnitTest to BaseUnitTestHelper or similar:

public class BaseUnitTestHelper {
    public void helperMethod() {
        // ...
    }
}

2.2. Ignore

A second option would be to disable tests temporarily using the JUnit @Ignore annotation. We can add it at the class level to disable all tests in a class:

@Ignore("Class not ready for tests")
public class IgnoreClassUnitTest {
    @Test
    public void whenDoTest_thenAssert() {
        // ...
    }
}

Likewise, we can add it at the method level, in case we still need to run other tests within the class but only exclude one or a few:

public class IgnoreMethodTest {
    @Ignore("This method not ready yet")
    @Test
    public void whenMethodIsIgnored_thenTestsDoNotRun() {
        // ...
    }
}

If running with Maven, we will see output like:

Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.041 s - in com.baeldung.IgnoreMethodTest

Since JUnit 5, the @Disabled annotation replaces @Ignore.

2.3. Make the Base Class abstract

Possibly the best approach is to make a base test class abstract. Abstraction will require a concrete class to extend it. That's why JUnit will not consider it a test instance in any case.

Let's make our BaseUnitTest class abstract:

public abstract class BaseUnitTest {
    public void helperMethod() {
        // ...
    }
}

3. Conclusion

In this article, we've seen some examples of excluding a base test class from running tests in JUnit 4. The best approach is to create abstract classes.

JUnit's @Ignore annotation is also widely used but is considered a bad practice. By ignoring tests, we might forget about them and the reason they were ignored.

As always, the code presented in this article is available over on GitHub.

       

Introducing KivaKit


1. Overview

KivaKit is a modular Java application framework designed to make developing microservices and applications quicker and easier. KivaKit has been developed at Telenav since 2011. It is now available as an Apache Licensed, Open Source project on GitHub.

In this article, we'll explore the design of KivaKit as a collection of “mini-frameworks” that work together. In addition, we'll take a look at the essential features of each mini-framework.

2. KivaKit Mini-frameworks

Looking in the kivakit and kivakit-extensions repositories, we can see that KivaKit 1.0 contains 54 modules. We could find this overwhelming. However, if we take things one step at a time, it's not so bad. For starters, we can pick and choose what we want to include in our projects. Each module in KivaKit is designed to be used on its own.

Some KivaKit modules contain mini-frameworks. A mini-framework is a simple, abstract design that addresses a common problem. If we examine KivaKit's mini-frameworks, we will find that they have straightforward, broadly applicable interfaces. As a result, they are a bit like Legos™. That is to say, they are simple pieces that snap together.

Here, we can see KivaKit's mini-frameworks and how they relate to each other:

 

 

Mini-framework | Module | Description
Application | kivakit-application | Base components for applications and servers
Command-Line Parsing | kivakit-commandline | Switch and argument parsing using the conversion and validation mini-frameworks
Component | kivakit-component | Base functionality for implementing KivaKit components, including applications
Conversion | kivakit-kernel | An abstraction for implementing robust, modular type converters
Extraction | kivakit-kernel | Extraction of objects from a data source
Interfaces | kivakit-kernel | Generic interfaces used as integration points between frameworks
Logging | kivakit-kernel, kivakit-logs-* | Core logging functionality, log service provider interface (SPI), and log implementations
Messaging | kivakit-kernel | Enables components to transmit and receive status information
Mixins | kivakit-kernel | An implementation of stateful traits
Resource | kivakit-resource, kivakit-network-*, kivakit-filesystems-* | Abstractions for files, folders, and streamed resources
Service Locator | kivakit-configuration | An implementation of the service locator pattern for finding components and settings information
Settings | kivakit-configuration | Provides easy access to component configuration information
Validation | kivakit-kernel | Base functionality for checking the consistency of objects

We can find these frameworks in the kivakit repository. On the other hand, we will find less important modules like service providers in kivakit-extensions.

2.1. Messaging

As the diagram above shows, messaging is the central point of integration. In KivaKit, messaging formalizes status reporting. As Java programmers, we're used to status information being logged. Sometimes, we also see this information returned to callers or thrown as exceptions. By contrast, status information in KivaKit is contained in Messages. We can write components that broadcast these messages. Also, we can write components that listen to them.

We can see that this design allows components to focus on reporting status consistently. To a component, where status messages go is unimportant. In one case, we might direct messages to a Logger. In another, we might include them in a statistic. We could even display them to an end-user. The component doesn't care. It just reports issues to whoever might be interested.

KivaKit components that broadcast messages can be connected to one or more message repeaters, forming a listener chain:

 

In KivaKit, an Application is usually the end of a listener chain. Therefore, the Application class logs any messages it receives. Along the way, components in the listener chain may have other uses for these messages.
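As a rough sketch of this idea, a component can broadcast status while a listener higher up the chain decides what to do with it. The information() and problem() convenience methods below are assumptions based on the messaging support described in this section; the exact KivaKit method names may differ:

// A component reports status without caring where the messages end up
public class PayloadReader extends BaseComponent {
    public void read() {
        information("Reading payload");    // assumed convenience method
        // ...
        problem("Could not read payload"); // assumed convenience method
    }
}

// An Application (or any other listener) at the end of the chain logs what it hears:
// listenTo(new PayloadReader()).read();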

2.2. Mixins

Another integration feature of KivaKit is the mixins mini-framework. KivaKit mixins allow base classes to be “mixed in” to types through interface inheritance.  Sometimes, mixins are known as “stateful traits“.

For example, the BaseComponent class in KivaKit provides a foundation for building components. BaseComponent provides convenience methods for sending messages. In addition, it offers easy access to resources, settings, and registered objects.

But we quickly run into a problem with this design. As we know, in Java, a class that already has a base class cannot also extend BaseComponent. KivaKit mixins allow BaseComponent functionality to be added to a component that already has a base class. For example:

public class MyComponent extends MyBaseClass implements ComponentMixin { [...] }

We can see here that the interface ComponentMixin extends both Mixin and Component:

 

The Mixin interface provides a state() method to ComponentMixin. First, this method is used to create and associate a BaseComponent with the object implementing ComponentMixin. Second, ComponentMixin implements each method of the Component interface as a Java default method. Third, each default method delegates to the associated BaseComponent. In this way, implementing ComponentMixin provides the same methods as extending BaseComponent.

2.3. Service Locator

The service locator class Registry allows us to wire components together. A Registry provides roughly the same functionality as dependency injection (DI). However, it differs from the typical use of DI in one important way. In the service locator pattern, components reach out for the interfaces that they need. On the other hand, DI pushes interfaces into components. As a result, the service locator approach improves encapsulation. It also reduces the scope of references. For example, Registry can be used neatly within a method:

class MyData extends BaseComponent {
    [...]
    public void save() {
        var database = require(Database.class);
        database.save(this);
    }
}

The BaseComponent.require(Class) method here looks up objects in a Registry. When our save() method returns, the database reference leaves scope. This ensures that no external code can obtain our reference.

When our application starts, we can register service objects with one of the BaseComponent.registerObject() methods. Later, code elsewhere in our application can look them up with require(Class).
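Putting the two pieces together, a minimal sketch (with a hypothetical Database implementation called PostgresDatabase) might register the service at startup so that the save() method above can later require() it:

public class ServiceRegistration extends BaseComponent {
    public void registerServices() {
        // Register a (hypothetical) Database implementation with the Registry
        registerObject(new PostgresDatabase());
    }
}

// Later, any component can look the service up:
// var database = require(Database.class);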

2.4. Resources and Filesystems

The kivakit-resource module provides abstractions for reading and writing streamed resources and accessing filesystems. We can see here a few of the more important resource types that KivaKit includes:

  • Files (local, Zip, S3, and HDFS)
  • Package resources
  • Network protocols (sockets, HTTP, HTTPS, and FTP)
  • Input and output streams

We get two valuable benefits from this abstraction. We can:

2.5. Components

The kivakit-component module gives us ready access to common functionality. We can:

  • Send and receive messages
  • Access packages and packaged resources
  • Register and lookup objects, components, and settings

The Component interface is implemented by both BaseComponent and ComponentMixin. As a result, we can add “component nature” to any object.

2.6. Logging

Listener chains formed by KivaKit components often terminate in a Logger. A Logger writes the messages it receives to one or more Logs. In addition, the kivakit-kernel module provides a service provider interface (SPI) for implementing Logs. We can see the full design of the logging mini-framework in UML here.

Using the logging SPI, the kivakit-extensions repository provides us with some Log implementations:

Provider | Module
ConsoleLog | kivakit-kernel
FileLog | kivakit-logs-file
EmailLog | kivakit-logs-email

One or more logs can be selected and configured from the command line. This is done by defining the KIVAKIT_LOG system property.

2.7. Conversion and Validation

The kivakit-kernel module contains mini-frameworks for type conversion and object validation. These frameworks are integrated with KivaKit messaging. This allows them to report problems consistently. It also simplifies usage. To implement a type Converter like a StringConverter, we only need to write the conversion code; we don't need to worry about exceptions, empty strings, or null values.

 

We can see converters in use in many places in KivaKit, including:

  • Switch and argument parsing
  • Loading settings objects from properties files
  • Formatting objects as debug strings
  • Reading objects from CSV files

Messages broadcast by Validatables are captured by the validation mini-framework. Subsequently, they are analyzed to provide us with easy access to error statistics and validation problems.

 

2.8. Applications, Command Lines, and Settings

The kivakit-application, kivakit-configuration, and kivakit-commandline modules provide a simple, consistent model for developing applications.

The kivakit-application project supplies the Application base class. An Application is a Component. It provides settings information using kivakit-configuration. In addition, it provides command-line parsing with kivakit-commandline.

The kivakit-configuration project uses the kivakit-resource module to load settings information from .properties resources (and other sources in the future). It converts the properties in these resources to objects using kivakit-kernel converters. The converted objects are then validated with the validation mini-framework.

Command-line arguments and switches for an application are parsed by the kivakit-commandline module using KivaKit converters and validators. We can see issues that arise in the application's ConsoleLog.

 

2.9. Microservices

So far, we've discussed KivaKit features that are generally useful to any application. In addition, KivaKit also provides functionality in kivakit-extensions that is explicitly targeted at microservices. Let's take a quick look at kivakit-web.

The kivakit-web project contains modules for rapidly developing a simple REST and web interface to a microservice. The JettyServer class provides us with a way to plug in servlets and filters with a minimum of hassle. Plugins that we can use with JettyServer include:

Plugin | Description
JettyJersey | REST application support
JettySwagger | Swagger automatic REST documentation
JettyWicket | Support for the Apache Wicket web framework

These plugins can be combined to provide a RESTful microservice with Swagger documentation and a web interface:

var application = new MyRestApplication();
listenTo(new JettyServer())
    .port(8080)
    .add("/*", new JettyWicket(MyWebApplication.class))
    .add("/open-api/*", new JettySwaggerOpenApi(application))
    .add("/docs/*", new JettySwaggerIndex(port))
    .add("/webapp/*", new JettySwaggerStaticResources())
    .add("/webjar/*", new JettySwaggerWebJar(application))
    .add("/*", new JettyJersey(application))
    .start();

KivaKit 1.1 will include a dedicated microservices mini-framework. This will make it even easier for us to build microservices.

3. Documentation and Lexakai

The documentation for KivaKit is generated by Lexakai. Lexakai creates UML diagrams (guided by annotations when desired) and updates README.md markdown files. In the readme file for each project, Lexakai updates a standard header and footer. In addition, it maintains indexes for the generated UML diagrams and Javadoc documentation. Lexakai is an open-source project distributed under Apache License.

4. Building KivaKit

KivaKit targets a Java 11 or higher virtual machine (but can be used from Java 8 source code). We can find all the artifacts for KivaKit modules on Maven Central. However, we might want to modify KivaKit or contribute to the open-source project. In this case, we'll need to build it.

To get started, let's set up Git, Git Flow, Java 16 JDK, and Maven 3.8.1 or higher.

First, we clone the kivakit repository into our workspace:

mkdir ~/Workspace
cd ~/Workspace
git clone --branch develop https://github.com/Telenav/kivakit.git

Next, we copy the sample bash profile to our home folder:

cp kivakit/setup/profile ~/.profile

Then we modify ~/.profile to point to our workspace, and our Java and Maven installations:

export KIVAKIT_WORKSPACE=$HOME/Workspace 
export JAVA_HOME=/Library/Java/JavaVirtualMachines/temurin-16.jdk/Contents/Home 
export M2_HOME=$HOME/Developer/apache-maven-3.8.2

After our profile is set up, we ensure that we are running bash (on macOS, zsh is the default now):

chsh -s /bin/bash

And finally, we restart our terminal program and execute the command:

$KIVAKIT_HOME/setup/setup.sh

The setup script will clone kivakit-extensions and some other related repositories. Afterward, it will initialize git-flow and build all our KivaKit projects.

5. Conclusion

In this article, we took a brief look at the design of KivaKit. We also toured some of the more important functions it provides. KivaKit is ideally suited for developing microservices. It has been designed to be learned and used in easy-to-digest, independent pieces.

       

Add a Reference to Method Parameters in Javadoc


1. Overview

In the Java language, we can generate documentation in HTML format from Java source code using Javadoc. In this tutorial, we'll learn about different ways to add a reference to method parameters in Javadoc.

2. Different Ways to Add a Reference to a Method Parameter

In this section, we'll talk about adding a reference to a method parameter in Javadoc. We'll see the usage of the inline tag {@code} and the HTML style tag <code> in Javadoc.

Further, we'll see how the {@code} and <code> tags take care of a few special cases:

  • displaying special characters ‘<‘, ‘>', and ‘@'
  • indentation and line breaks
  • handling of escaping of HTML codes, for example, &lt; which translates to the symbol ‘<‘

2.1. The {@code} Tag

{@code text} is an inline tag that was included in JDK 1.5.

The {@code} tag formats literal text in the code font. {@code abc} is equivalent to <code>{@literal abc}</code>.

We don't need to manually escape any special character used inside the {@code} tag.

When we use the {@code} tag, it:

  • displays ‘<‘ and ‘>' correctly
  • displays ‘@' correctly
  • does not need to escape special characters via HTML number codes
  • is more readable and concise

Let's create a simple method in a class and add some Javadoc using the {@code} tag:

/**
  * This method takes a {@code String} 
  * and searches in the given list {@code List<String>}
  * 
  * @param name
  *        Name of the person
  * @param avengers
  *        list of Avengers names
  * @return true if found, false otherwise
  */
public Boolean isAvenger(String name, List<String> avengers) {
    return avengers.contains(name);
}

Here, we can see that we don't need to escape the special characters ‘<‘ and ‘>'.

The generated Javadoc will render the HTML output as:

 

Similarly, we can see that we don't need to escape the ‘@' character:

/**
  * This is sample for showing @ use without any manual escape.
  * {@code @AnyAnnotaion}
  * 
  */
public void javadocTest() {
}

This will render to HTML Javadoc as:

 

In the case of a multiline code snippet in Javadoc, {@code} will not maintain indentation and line breaks. To overcome this, we can use the HTML tag <pre> along with {@code}. However, we need to escape the ‘@' character in this case.

2.2. The <code> Tag

<code> is an HTML style tag supported by Javadoc.

When we use the <code> tag, it:

  • doesn't display ‘<‘ and ‘>' correctly
  • requires escaping special characters via HTML number codes
  • is not so readable

Let's consider the same example again. We can see that the generated Javadoc HTML is missing the <String> parameterized type after List in our paragraph:

/**
  * This method takes a <code>String</code>
  * and searches in the given <code>List<String></code>
  * 
  * @param name
  *        Name of the person
  * @param avengers
  *        list of Avengers names
  * @return true if found, false otherwise
  */
public Boolean isAvenger(String name, List<String> avengers) {
    return avengers.contains(name);
}


Here, if we escape the special characters ‘<‘ and ‘>' in our method comment, then it will render the correct <String> in Javadoc:

/**
  * This method takes a <code>String</code>
  * and searches in the given <code>List&lt;String&gt;</code>
  * 
  * @param name
  *        Name of the person
  * @param avengers
  *        list of Avengers names
  * @return true if found, false otherwise
  */
public Boolean isAvenger(String name, List<String> avengers) {
    return avengers.contains(name);
}

 

3. Conclusion

In this tutorial, we first talked about how to use {@code} and <code> to reference method parameters in Javadoc. Then we described how these tags handle special characters. In conclusion, we now understand how to add references to method parameters in Javadoc, and we can see that {@code} is generally preferable to <code>.

       

REST vs. gRPC


1. Overview

In this article, we’ll compare REST and gRPC, two architectural styles for web APIs.

2. What Is REST?

REST (Representational State Transfer) is an architectural style that provides guidelines for designing web APIs.

It uses standard HTTP 1.1 methods like GET, POST, PUT, and DELETE to work with server-side resources. Additionally, REST APIs provide pre-defined URLs that the client must use to connect with the server.

3. What Is gRPC?

gRPC (Remote Procedure Call) is an open-source data exchange technology developed by Google using the HTTP/2 protocol.

It uses the Protocol Buffers binary format (Protobuf) for data exchange. Also, this architectural style enforces rules that a developer must follow to develop or consume web APIs.

4. REST vs. gRPC

4.1. Guidelines vs. Rules

REST is a set of guidelines for designing web APIs without enforcing anything. On the other hand, gRPC enforces rules by defining a .proto file that must be adhered to by both client and server for data exchange.

4.2. Underlying HTTP Protocol

REST provides a request-response communication model built on the HTTP 1.1 protocol. Therefore, when multiple requests reach the server, it is bound to handle each of them, one at a time.

However, gRPC follows a client-response model of communication for designing web APIs that rely on HTTP/2. Hence, gRPC allows streaming communication and serves multiple requests simultaneously. In addition to that, gRPC also supports unary communication similar to REST.

4.3. Data Exchange Format

REST typically uses JSON and XML formats for data transfer. However, gRPC relies on Protobuf for an exchange of data over the HTTP/2 protocol.

4.4. Serialization vs. Strong Typing

REST, in most cases, uses JSON or XML that requires serialization and conversion into the target programming language for both client and server, thereby increasing response time and the possibility of errors while parsing the request/response.

However, gRPC provides strongly typed messages automatically converted using the Protobuf exchange format to the chosen programming language.

4.5. Latency

REST utilizes HTTP 1.1, which requires a TCP handshake for each request. Hence, REST APIs can suffer from latency issues.

On the other hand, gRPC relies on HTTP/2 protocol, which uses multiplexed streams. Therefore, several clients can send multiple requests simultaneously without establishing a new TCP connection for each one. Also, the server can send push notifications to clients via the established connection.

4.6. Browser Support

REST, which relies on HTTP 1.1, has universal browser support.

However, gRPC has limited browser support because numerous browsers have no mature support for HTTP/2. So, it may require gRPC-web and a proxy layer to perform conversions between HTTP 1.1 and HTTP/2. Therefore, at the moment, gRPC is primarily used for internal services.

4.7. Code Generation Features

REST provides no built-in code generation features. However, we can use third-party tools like Swagger or Postman to produce code for API requests.

On the other hand, gRPC, using its protoc compiler, comes with native code generation features, compatible with several programming languages.

5. Conclusion

In this article, we compared two architectural styles for APIs, REST and gRPC.

We conclude that REST is handy in integrating microservices and third-party applications with the core systems.

However, gRPC can find its application in various systems, like IoT systems that require lightweight message transmission, mobile applications with no browser support, and applications that need multiplexed streams.

       

Java Weekly, Issue 405


1. Spring and Java

>> Multiple ways to configure Spring [blog.frankel.ch]

A practical guide on different approaches to configure Spring applications: from files to config classes to Kotlin DSLs!

>> JDK 18: Code Snippets in Java API Documentation [marxsoftware.com]

No more unreadable pre tags in Javadocs – Java 18 will ship the new snippet tag to include code snippets.

>> VMware Overhauls Spring 6 & Spring Boot 3 for Another Decade [infoq.com]

Exciting plans that shape the future of Spring, Spring Boot, and the Java ecosystem: Java 17 baseline, JPMS, native executables, observability, and a lot more

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Netflix Cloud Packaging in the Terabyte Era [netflixtechblog.com]

How Netflix supports inspecting, encoding, and packaging of media content at terabytes scale.

Also worth reading:

3. Musings

>> Atomic Habits in Software Development [reflectoring.io]

A few pieces of advice to perfect the workflow and achieve peak productivity for software engineers.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Zoom Tips 2 [dilbert.com]

>> Cracked Phone Screen [dilbert.com]

>> True Nature Of Reality [dilbert.com]

5. Pick of the Week

>> SnykCon is coming up [snyk.io]

       

Spring Security – Request Rejected Exception


1. Introduction

Spring Framework versions 5.0 to 5.0.4, 4.3 to 4.3.14, and other older versions had a directory or path traversal security vulnerability on Windows systems.

Misconfiguring the static resources allows malicious users to access the server's file system. For instance, serving static resources using the file: protocol provides illegal access to the file system on Windows.

The Spring Framework acknowledged the vulnerability and fixed it in the later releases.

Consequently, this fix guards the applications against path traversal attacks. However, with this fix, a few of the earlier URLs now throw an org.springframework.security.web.firewall.RequestRejectedException exception.

Finally, in this tutorial, let's learn about org.springframework.security.web.firewall.RequestRejectedException and StrictHttpFirewall in the context of path traversal attacks.

2. Path Traversal Vulnerabilities

A path traversal or directory traversal vulnerability enables illegal access outside the web document root directory. For instance, manipulating the URL can provide unauthorized access to the files outside the document root.

Though most of the latest and popular web servers offset these attacks, attackers can still use URL-encoding of special characters like “./” and “../” to circumvent the web server's security and gain illegal access.

Also, OWASP discusses the Path Traversal vulnerabilities and the ways to address them.

3. Spring Framework Vulnerability

Now, let's try to replicate this vulnerability before we learn how to fix it.

First, let's clone the Spring Framework MVC examples. Later, let's modify the pom.xml and replace the existing Spring Framework version with a vulnerable version. 

Clone the repository:

git clone git@github.com:spring-projects/spring-mvc-showcase.git

Inside the cloned directory, edit the pom.xml to include 5.0.0.RELEASE as the Spring Framework version:

<org.springframework-version>5.0.0.RELEASE</org.springframework-version>

Next, edit the web configuration class WebMvcConfig and modify the addResourceHandlers method to map resources to a local file directory using  file:

@Override
public void addResourceHandlers(ResourceHandlerRegistry registry) {
    registry
      .addResourceHandler("/resources/**")
      .addResourceLocations("file:./src/", "/resources/");
}

Later, build the artifact and run our web app:

mvn jetty:run

Now, when the server starts up, invoke the URL:

curl 'http://localhost:8080/spring-mvc-showcase/resources/%255c%255c%252e%252e%255c/%252e%252e%255c/%252e%252e%255c/%252e%252e%255c/%252e%252e%255c/windows/system.ini'

%252e%252e%255c is a double-encoded form of  ..\ and %255c%255c is a double-encoded form of \\.

Precariously, the response will be the contents of the Windows system file system.ini.

4. Spring Security HttpFirewall Interface

The Servlet specification does not precisely define the distinction between servletPath and pathInfo. Hence, there is an inconsistency among the Servlet containers in the translation of these values.

For instance, on Tomcat 9, for the URL http://localhost:8080/api/v1/users/1, the URI /1 is intended to be a path variable.

On the other hand, the following returns /api/v1/users/1:

request.getServletPath()

However, the command below returns a null:

request.getPathInfo()

Being unable to distinguish path variables from the URI can lead to potential attacks like Path Traversal/Directory Traversal attacks. For instance, a user can exploit system files on the server by including \\, /../, or ..\ in the URL. Unfortunately, only some Servlet containers normalize these URLs.

Spring Security to the rescue. Spring Security behaves consistently across containers and normalizes these kinds of malicious URLs utilizing an HttpFirewall interface. This interface has two implementations:

4.1. DefaultHttpFirewall

In the first place, let's not get confused with the name of the implementation class. In other words, this is not the default HttpFirewall implementation.

The firewall tries to sanitize or normalize the URLs and standardizes the servletPath and pathInfo across the containers. Also, we can override the default HttpFirewall behavior by explicitly declaring a @Bean:

@Bean
public HttpFirewall getHttpFirewall() {
    return new DefaultHttpFirewall();
}

However, StrictHttpFirewall provides a robust and secure implementation and is the recommended one.

4.2. StrictHttpFirewall

StrictHttpFirewall is the default and stricter implementation of HttpFirewall. Unlike DefaultHttpFirewall, StrictHttpFirewall rejects any un-normalized URL, providing more stringent protection. In addition, this implementation protects the application from several other attacks like Cross-Site Tracing (XST) and HTTP Verb Tampering.

Moreover, this implementation is customizable and has sensible defaults. In other words, we can disable (not recommended) a few of the features like allowing semicolons as part of the URI:

@Bean
public HttpFirewall getHttpFirewall() {
    StrictHttpFirewall strictHttpFirewall = new StrictHttpFirewall();
    strictHttpFirewall.setAllowSemicolon(true);
    return strictHttpFirewall;
}

In short, StrictHttpFirewall rejects suspicious requests with an org.springframework.security.web.firewall.RequestRejectedException.

Finally, let's develop a User Management application with CRUD operations on Users using Spring REST and Spring Security, and see StrictHttpFirewall in action.

5. Dependencies

Let's declare Spring Security and Spring Web dependencies:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-security</artifactId>
    <version>2.5.4</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <version>2.5.4</version>
</dependency>

6. Spring Security Configuration

Next, let's secure our application with Basic Authentication by creating a configuration class that extends WebSecurityConfigurerAdapter:

@Configuration
public class SpringSecurityHttpFirewallConfiguration extends WebSecurityConfigurerAdapter {
    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
          .csrf()
          .disable()
          .authorizeRequests()
            .antMatchers("/error").permitAll()
          .anyRequest()
          .authenticated()
          .and()
          .httpBasic();
    }
}

Spring Security provides a default password that changes on every restart. Hence, let's create a default username and password in the application.properties:

spring.security.user.name=user
spring.security.user.password=password

Henceforth, we'll access our secured REST APIs using these credentials.

7. Building a Secured REST API

Now, let's build our User Management REST API:

@PostMapping
public ResponseEntity<Response> createUser(@RequestBody User user) {
    userService.saveUser(user);
    Response response = new Response()
      .withTimestamp(System.currentTimeMillis())
      .withCode(HttpStatus.CREATED.value())
      .withMessage("User created successfully");
    URI location = URI.create("/users/" + user.getId());
    return ResponseEntity.created(location).body(response);
}
 
@DeleteMapping("/{userId}")
public ResponseEntity<Response> deleteUser(@PathVariable("userId") String userId) {
    userService.deleteUser(userId);
    return ResponseEntity.ok(new Response(200,
      "The user has been deleted successfully", System.currentTimeMillis()));
}
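
For reference, the two handler methods above live inside a typical @RestController. Here's a minimal sketch of the surrounding class; the base path and the UserService wiring are assumptions inferred from the cURL calls in the next section rather than part of the original listing:

@RestController
@RequestMapping("/api/v1/users")
public class UserController {

    // Hypothetical service used by the handler methods above
    @Autowired
    private UserService userService;

    // createUser() and deleteUser() from the listing above go here
}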

Now, let's build and run the application:

mvn spring-boot:run

8. Testing the APIs

Now, let's start by creating a User using cURL:

curl -i --user user:password -d @request.json -H "Content-Type: application/json" 
     -H "Accept: application/json" http://localhost:8080/api/v1/users

Here is a request.json:

{
    "id":"1",
    "username":"navuluri",
    "email":"bhaskara.navuluri@mail.com"
}

Consequently, the response is:

HTTP/1.1 201
Location: /users/1
Content-Type: application/json
{
  "code":201,
  "message":"User created successfully",
  "timestamp":1632808055618
}

Now, let's configure our StrictHttpFirewall to reject requests for all HTTP methods:

@Bean
public HttpFirewall configureFirewall() {
    StrictHttpFirewall strictHttpFirewall = new StrictHttpFirewall();
    strictHttpFirewall
      .setAllowedHttpMethods(Collections.emptyList());
    return strictHttpFirewall;
}

Next, let's invoke the API again. Since we configured StrictHttpFirewall to reject all HTTP methods, this time we get an error.

In the logs, we have this exception:

org.springframework.security.web.firewall.RequestRejectedException: 
The request was rejected because the HTTP method "POST" was not included
  within the list of allowed HTTP methods []

Since Spring Security v5.4, we can use RequestRejectedHandler to customize the HTTP Status when there is a RequestRejectedException:

@Bean
public RequestRejectedHandler requestRejectedHandler() {
   return new HttpStatusRequestRejectedHandler();
}

Note that the default HTTP status code when using a HttpStatusRequestRejectedHandler is 400. However, we can customize this by passing a status code in the constructor of the HttpStatusRequestRejectedHandler class.
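
For example, here's a minimal sketch that would replace the bean above and return 403 instead; the choice of status code is purely illustrative:

@Bean
public RequestRejectedHandler requestRejectedHandler() {
    // 403 Forbidden instead of the default 400 Bad Request
    return new HttpStatusRequestRejectedHandler(HttpServletResponse.SC_FORBIDDEN);
}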

Now, let's reconfigure the StrictHttpFirewall to allow \\ in the URL and HTTP GET, POST, DELETE, and OPTIONS methods:

strictHttpFirewall.setAllowBackSlash(true);
strictHttpFirewall.setAllowedHttpMethods(Arrays.asList("GET", "POST", "DELETE", "OPTIONS"));

Next, invoke the API:

curl -i --user user:password -d @request.json -H "Content-Type: application/json" 
     -H "Accept: application/json" http://localhost:8080/api<strong>\\</strong>v1/users

And here we have a response:

{
  "code":201,
  "message":"User created successfully",
  "timestamp":1632812660569
}

Finally, let's revert to the original strict functionality of StrictHttpFirewall by deleting the @Bean declaration.

Next, let's try to invoke our API with suspicious URLs:

curl -i --user user:password -d @request.json -H "Content-Type: application/json" 
      -H "Accept: application/json" http://localhost:8080/api/v1<strong>//</strong>users
curl -i --user user:password -d @request.json -H "Content-Type: application/json" 
      -H "Accept: application/json" http://localhost:8080/api/v1<strong>\\</strong>users

Straightaway, all of the above requests fail with the following error in the logs:

org.springframework.security.web.firewall.RequestRejectedException: 
The request was rejected because the URL contained a potentially malicious String "//"

9. Conclusion

This article explains Spring Security's protection against malicious URLs that may lead to Path Traversal/Directory Traversal attacks.

DefaultHttpFirewall tries to normalize malicious URLs, whereas StrictHttpFirewall rejects such requests with a RequestRejectedException. Along with Path Traversal attacks, StrictHttpFirewall protects us from several other attacks. Hence, it's highly recommended to use StrictHttpFirewall with its default configuration.

As always, the complete source code is available over on GitHub.

       

Build a Dashboard With Cassandra, Astra and CQL – Mapping Event Data


1. Introduction

In our previous article, we looked at augmenting our dashboard to store and display individual events from the Avengers using DataStax Astra, a serverless DBaaS powered by Apache Cassandra using Stargate to offer additional APIs for working with it.

In this article, we'll make use of the exact same data in a different way. We're going to allow the user to select which of the Avengers to display and the time period of interest, and then display those events on an interactive map. Unlike in the previous article, this lets the user see how the data points relate to each other in both geography and time.

In order to follow along with this article, it is assumed that you have already read the first and second articles in this series and that you have a working knowledge of Java 16, Spring, and at least an understanding of what Cassandra can offer for data storage and access. It may also be easier to have the code from GitHub open alongside the article to follow along.

2. Service Setup

We will be retrieving the data using the CQL API, using queries in the Cassandra Query Language. This requires some additional setup for us to be able to talk to the server.

2.1. Downloading the Secure Connect Bundle

In order to connect to the Cassandra database hosted by DataStax Astra via CQL, we need to download the “Secure Connect Bundle”. This is a zip file containing SSL certificates and connection details for this exact database, allowing the connection to be made securely.

This is available from the Astra dashboard, found under the “Connect” tab for our exact database, and then the “Java” option under “Connect using a driver”:

For pragmatic reasons, we're going to put this file into src/main/resources so that we can access it from the classpath. In a normal deployment situation, you would need to be able to provide different files to connect to different databases – for example, to have different databases for development and production environments.

2.2. Creating Client Credentials

We also need to have some client credentials in order to connect to our database. Unlike the APIs that we've used in previous articles, which use an access token, the CQL API requires a “username” and “password”. These are actually a Client ID and Client Secret that we generate from the “Manage Tokens” section under “Organizations”:

Once this is done, we need to add the generated Client ID and Client Secret to our application.properties:

ASTRA_DB_CLIENT_ID=clientIdHere
ASTRA_DB_CLIENT_SECRET=clientSecretHere

2.3. Google Maps API Key

In order to render our map, we're going to use Google Maps, so we'll need a Google API key to be able to use this API.

After signing up for a Google account, we need to visit the Google Cloud Platform Dashboard. Here we can create a new project:

We then need to enable the Google Maps JavaScript API for this project. Let's search for it and enable it:

Finally, we need an API key to be able to use this. For this, we need to navigate to the “Credentials” pane on the sidebar, click on “Create Credentials” at the top and select API Key:

We now need to add this key to our application.properties file:

GOOGLE_CLIENT_ID=someRandomClientId

3. Building The Client Layer Using Astra and CQL

In order to communicate with the database via CQL, we need to write our client layer. This will be a class called CqlClient that wraps the DataStax CQL APIs, abstracting away the connection details:

@Repository
public class CqlClient {
  @Value("${ASTRA_DB_CLIENT_ID}")
  private String clientId;
  @Value("${ASTRA_DB_CLIENT_SECRET}")
  private String clientSecret;
  public List<Row> query(String cql, Object... binds) {
    try (CqlSession session = connect()) {
      var statement = session.prepare(cql);
      var bound = statement.bind(binds);
      var rs = session.execute(bound);
      return rs.all();
    }
  }
  private CqlSession connect() {
    return CqlSession.builder()
      .withCloudSecureConnectBundle(CqlClient.class.getResourceAsStream("/secure-connect-baeldung-avengers.zip"))
      .withAuthCredentials(clientId, clientSecret)
      .build();
  }
}

This gives us a single public method that will connect to the database and execute an arbitrary CQL query, allowing for some bind values to be provided to it.

Connecting to the database makes use of our Secure Connect Bundle and client credentials that we generated earlier. The Secure Connect Bundle needs to have been placed in src/main/resources/secure-connect-baeldung-avengers.zip, and the client ID and secret need to have been put into application.properties with the appropriate property names.
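
If we needed to swap bundles per environment, as mentioned in section 2.1, one rough sketch is to read the bundle location from configuration instead of the classpath; the property name and path below are illustrative assumptions, and java.nio.file.Paths is used to build the path:

@Value("${ASTRA_SECURE_CONNECT_BUNDLE:/etc/astra/secure-connect-dev.zip}")
private String secureConnectBundlePath;

private CqlSession connect() {
  return CqlSession.builder()
    // the builder also accepts a filesystem Path for the bundle
    .withCloudSecureConnectBundle(Paths.get(secureConnectBundlePath))
    .withAuthCredentials(clientId, clientSecret)
    .build();
}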

Note that this implementation loads every row from the query into memory and returns them as a single list. This keeps the example simple, but it's not as efficient as it could be. We could, for example, fetch and process each row individually as it's returned, or even wrap the entire query in a java.util.stream.Stream to be processed.
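
As a rough illustration of the row-by-row approach, here's a sketch of an additional method we could add to CqlClient; it reuses the connect() helper from above, and the method name and the java.util.function.Consumer callback are our own assumptions:

public void forEachRow(String cql, Consumer<Row> rowHandler, Object... binds) {
  try (CqlSession session = connect()) {
    var statement = session.prepare(cql);
    var resultSet = session.execute(statement.bind(binds));
    // The driver fetches result pages lazily while we iterate,
    // so we never hold the full result set in memory at once
    for (Row row : resultSet) {
      rowHandler.accept(row);
    }
  }
}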

4. Fetching the Required Data

Once we have our client to be able to interact with the CQL API, we need our service layer to actually fetch the data we are going to display.

Firstly, we need a Java Record to represent each row we are fetching from the database:

public record Location(String avenger, 
  Instant timestamp, 
  BigDecimal latitude, 
  BigDecimal longitude, 
  BigDecimal status) {}

And then we need our service layer to retrieve the data:

@Service
public class MapService {
  @Autowired
  private CqlClient cqlClient;
  // To be implemented.
}

Into this, we're going to write our functions to actually query the database – using the CqlClient that we've just written – and return the appropriate details.

4.1. Generate a List of Avengers

Our first function is to get a list of all the Avengers that we are able to display the details of:

public List<String> listAvengers() {
  var rows = cqlClient.query("select distinct avenger from avengers.events");
  return rows.stream()
    .map(row -> row.getString("avenger"))
    .sorted()
    .collect(Collectors.toList());
}

This just gets the distinct values of the avenger column from our events table. Because this is our partition key, the query is very efficient. CQL only allows us to order results when we filter on the partition key, so we do the sorting in Java code instead. That's fine here because we know only a small number of rows are returned, so the sort is inexpensive.

4.2. Generate Location Details

Our other function is to get a list of all the location details that we wish to display on the map. This takes a list of avengers, and a start and end time and returns all of the events for them grouped as appropriate:

public Map<String, List<Location>> getPaths(List<String> avengers, Instant start, Instant end) {
  var rows = cqlClient.query("select avenger, timestamp, latitude, longitude, status from avengers.events where avenger in ? and timestamp >= ? and timestamp <= ?", 
    avengers, start, end);
  var result = rows.stream()
    .map(row -> new Location(
      row.getString("avenger"), 
      row.getInstant("timestamp"), 
      row.getBigDecimal("latitude"), 
      row.getBigDecimal("longitude"),
      row.getBigDecimal("status")))
    .collect(Collectors.groupingBy(Location::avenger));
  for (var locations : result.values()) {
    Collections.sort(locations, Comparator.comparing(Location::timestamp));
  }
  return result;
}

The CQL bind values automatically expand the IN clause to handle multiple Avengers correctly, and because we're again filtering by the partition and clustering keys, the query is efficient to execute. We then map the rows into our Location record, group them by the avenger field, and ensure each group is sorted by timestamp.

5. Displaying the Map

Now that we have the ability to fetch our data, we need to actually let the user see it. This will first involve writing our controller for getting the data:

5.1. Map Controller

@Controller
public class MapController {
  @Autowired
  private MapService mapService;
  @Value("${GOOGLE_CLIENT_ID}")
  private String googleClientId;
  @ModelAttribute("googleClientId")
  String getGoogleClientId() {
    return googleClientId;
  }
  @GetMapping("/map")
  public ModelAndView showMap(@RequestParam(name = "avenger", required = false) List<String> avenger,
  @RequestParam(required = false) String start, @RequestParam(required = false) String end) throws Exception {
    var result = new ModelAndView("map");
    result.addObject("inputStart", start);
    result.addObject("inputEnd", end);
    result.addObject("inputAvengers", avenger);
    
    result.addObject("avengers", mapService.listAvengers());
    if (avenger != null && !avenger.isEmpty() && start != null && end != null) {
      var paths = mapService.getPaths(avenger, 
        LocalDateTime.parse(start).toInstant(ZoneOffset.UTC), 
        LocalDateTime.parse(end).toInstant(ZoneOffset.UTC));
      result.addObject("paths", paths);
    }
    return result;
  }
}

This uses our service layer to get the list of avengers, and if we have inputs provided then it also gets the list of locations for those inputs. We also have a ModelAttribute that will provide the Google Client ID to the view for it to use.

5.2. Map Template

Once we've written our controller, we need a template to actually render the HTML. This will be written using Thymeleaf as in the previous articles:

<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1" />
  <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.0.0-beta3/dist/css/bootstrap.min.css" rel="stylesheet"
    integrity="sha384-eOJMYsd53ii+scO/bJGFsiCZc+5NDVN2yr8+0RDqr0Ql0h+rP48ckxlpbzKgwra6" crossorigin="anonymous" />
  <title>Avengers Status Map</title>
</head>
<body>
  <nav class="navbar navbar-expand-lg navbar-dark bg-dark">
    <div class="container-fluid">
      <a class="navbar-brand" href="#">Avengers Status Map</a>
    </div>
  </nav>
  <div class="container-fluid mt-4">
    <div class="row">
      <div class="col-3">
        <form action="/map" method="get">
          <div class="mb-3">
            <label for="avenger" class="form-label">Avengers</label>
            <select class="form-select" multiple name="avenger" id="avenger" required>
              <option th:each="avenger: ${avengers}" th:text="${avenger}" th:value="${avenger}"
                th:selected="${inputAvengers != null && inputAvengers.contains(avenger)}"></option>
            </select>
          </div>
          <div class="mb-3">
            <label for="start" class="form-label">Start Time</label>
            <input type="datetime-local" class="form-control" name="start" id="start" th:value="${inputStart}"
              required />
          </div>
          <div class="mb-3">
            <label for="end" class="form-label">End Time</label>
            <input type="datetime-local" class="form-control" name="end" id="end" th:value="${inputEnd}" required />
          </div>
          <button type="submit" class="btn btn-primary">Submit</button>
        </form>
      </div>
      <div class="col-9">
        <div id="map" style="width: 100%; height: 40em;"></div>
      </div>
    </div>
  </div>
  <script src="https://cdn.jsdelivr.net/npm/bootstrap@5.0.0-beta3/dist/js/bootstrap.bundle.min.js"
    integrity="sha384-JEW9xMcG8R+pH31jmWH6WWP0WintQrMb4s7ZOdauHnUtxwoG2vI5DkLtS3qm9Ekf" crossorigin="anonymous">
    </script>
  <script type="text/javascript" th:inline="javascript">
    /*<![CDATA[*/
    let paths = /*[[${paths}]]*/ {};
    let map;
    let openInfoWindow;
    function initMap() {
      let averageLatitude = 0;
      let averageLongitude = 0;
      if (paths) {
        let numPaths = 0;
        for (const path of Object.values(paths)) {
          let last = path[path.length - 1];
          averageLatitude += last.latitude;
          averageLongitude += last.longitude;
          numPaths++;
        }
        averageLatitude /= numPaths;
        averageLongitude /= numPaths;
      } else {
        // We had no data, so let's just tidy things up:
        paths = {};
        averageLatitude = 40.730610;
        averageLongitude = -73.935242;
      }
      map = new google.maps.Map(document.getElementById("map"), {
        center: { lat: averageLatitude, lng: averageLongitude },
        zoom: 16,
      });
      for (const avenger of Object.keys(paths)) {
        const path = paths[avenger];
        const color = getColor(avenger);
        new google.maps.Polyline({
          path: path.map(point => ({ lat: point.latitude, lng: point.longitude })),
          geodesic: true,
          strokeColor: color,
          strokeOpacity: 1.0,
          strokeWeight: 2,
          map: map,
        });
        path.forEach((point, index) => {
          const infowindow = new google.maps.InfoWindow({
            content: "<dl><dt>Avenger</dt><dd>" + avenger + "</dd><dt>Timestamp</dt><dd>" + point.timestamp + "</dd><dt>Status</dt><dd>" + Math.round(point.status * 10000) / 100 + "%</dd></dl>"
          });
          const marker = new google.maps.Marker({
            position: { lat: point.latitude, lng: point.longitude },
            icon: {
              path: google.maps.SymbolPath.FORWARD_CLOSED_ARROW,
              strokeColor: color,
              scale: index == path.length - 1 ? 5 : 3
            },
            map: map,
          });
          marker.addListener("click", () => {
            if (openInfoWindow) {
              openInfoWindow.close();
              openInfoWindow = undefined;
            }
            openInfoWindow = infowindow;
            infowindow.open({
              anchor: marker,
              map: map,
              shouldFocus: false,
            });
          });
        });
      }
    }
    function getColor(avenger) {
      return {
        wanda: '#ff2400',
        hulk: '#008000',
        hawkeye: '#9370db',
        falcon: '#000000'
      }[avenger];
    }
    /*]]>*/
  </script>
  <script
    th:src="${'https://maps.googleapis.com/maps/api/js?key=' + googleClientId + '&callback=initMap&libraries=&v=weekly'}"
    async></script>
</body>
</html>

We are injecting the data retrieved from Cassandra, as well as some other details. Thymeleaf automatically handles converting the objects within the script block into valid JSON. Once this is done, our JavaScript then renders a map using the Google Maps API and adds some routes and markers onto it to show our selected data.

At this point, we have a fully working application. In it, we can select some Avengers to display and a date and time range of interest, and then see what was happening with our data.

6. Conclusion

Here we've seen an alternative way to visualize data retrieved from our Cassandra database, using the Astra CQL API to obtain it.

All of the code from this article can be found over on GitHub.

       

Convert Long to String in Java


1. Overview

In this short tutorial, we'll learn how to convert Long to String in Java.

2. Use Long.toString()

For example, suppose we have two variables of type long and Long (one of primitive type and the other of reference type):

long l = 10L;
Long obj = 15L;

We can simply use the toString() method of the Long class to convert them to String:

String str1 = Long.toString(l);
String str2 = Long.toString(obj);
System.out.println(str1);
System.out.println(str2);

The output will look like:

10
15

If our obj object is null, we'll get a NullPointerException.

3. Use String.valueOf()

We can use the valueOf() method of the String class to achieve the same goal:

String str1 = String.valueOf(l);
String str2 = String.valueOf(obj);

When obj is null, the method will set str2 to “null” instead of throwing a NullPointerException.
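
To make the contrast with Long.toString() concrete, here's a tiny sketch; the variable names are only for illustration:

Long nullableValue = null;

// String.valueOf(Object) returns the text "null" for a null reference
String viaValueOf = String.valueOf(nullableValue);

// Long.toString(long) would need to unbox the value,
// so uncommenting the next line would throw a NullPointerException at runtime
// String viaToString = Long.toString(nullableValue);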

4. Use String.format()

Besides the valueOf() method of the String class, we can also use the format() method:

String str1 = String.format("%d", l);
String str2 = String.format("%d", obj);

str2 will also be “null” if obj is null.

5. Use toString() Method of Long Object

Our obj object can use its toString() method to get the String representation:

String str = obj.toString();

Of course, we'll get a NullPointerException if obj is null.

6. Use the + Operator

We can simply use the + operator with an empty String to get the same result:

String str1 = "" + l;
String str2 = "" + obj;

str2 will be “null” if obj is null.

7. Use StringBuilder or StringBuffer

StringBuilder and StringBuffer objects can be used to convert Long to String:

String str1 = new StringBuilder().append(l).toString();
String str2 = new StringBuilder().append(obj).toString();

str2 will be “null” if obj is null.

8. Use DecimalFormat

Finally, we can use the format() method of a DecimalFormat object:

String str1 = new DecimalFormat("#").format(l);
String str2 = new DecimalFormat("#").format(obj);

Be careful because if obj is null, we'll get an IllegalArgumentException.

9. Conclusion

In summary, we've learned different ways to convert Long to String in Java. It's up to us to choose which method to use, but it's generally better to use one that's concise and doesn't throw exceptions.

       