
Get the Current Date Prior to Java 8


1. Introduction

In legacy systems, we might need to work with dates when neither the new date and time API nor the highly recommended Joda-Time library is available.

In this short tutorial, we're going to take a look at several approaches to see how to get the current date in pre-Java 8 systems.

2. System Time

When all we need is a single numeric value representing the current date and time, we can use the system time. To get the number of milliseconds elapsed since January 1, 1970 00:00:00 GMT, we can use the currentTimeMillis method, which returns a long:

long elapsedMilliseconds = System.currentTimeMillis();

When we want to measure elapsed time with greater precision, we can use the nanoTime method. This returns the number of nanoseconds that have passed since a fixed but arbitrary moment.

This arbitrary time is the same for all calls inside the JVM, so the value returned is useful only for computing the difference in elapsed nanoseconds between multiple calls of nanoTime:

long elapsedNanosecondsStart = System.nanoTime();
long elapsedNanoseconds = System.nanoTime() - elapsedNanosecondsStart;

3. The java.util Package

Using classes from the java.util package, we can represent a moment in time, usually by the milliseconds elapsed since January 1, 1970 00:00:00 GMT.

3.1. java.util.Date

We can represent a specific date and time by using a Date object. It has millisecond precision, and its string representation is rendered in the default time zone.

While there are many constructors available, the simplest way to create a Date object representing the current date in the local time zone is to use the basic constructor:

Date currentUtilDate = new Date();

Let's now create a Date object for a specific date and time. One option is to use one of the other available constructors and simply pass a milliseconds value.
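
For example, assuming we already know the epoch milliseconds of the moment we want, we can pass them straight to the constructor (the value below corresponds to 30-01-2020 10:11:12 UTC):

Date customUtilDateFromMillis = new Date(1580379072000L);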

Alternatively, we can use the SimpleDateFormat class to convert a String value to an actual Date object:

SimpleDateFormat dateFormatter = new SimpleDateFormat("dd-MM-yyyy HH:mm:ss");
Date customUtilDate = dateFormatter.parse("30-01-2020 10:11:12");

We can use a wide range of date patterns to suit our needs.

3.2. java.util.Calendar

A Calendar object can do what a Date does, and it's better for date arithmetic computations since it can also take a Locale. We can specify the Locale as a geographic, political, or cultural region.

To get the current date, with no TimeZone or Locale specified, we can use the getInstance method:

Calendar currentUtilCalendar = Calendar.getInstance();
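
If we do want to specify them, we can use the getInstance overload that accepts a TimeZone and a Locale; the values below are just an illustration:

Calendar currentLocalizedCalendar = Calendar.getInstance(TimeZone.getTimeZone("Europe/Paris"), Locale.FRANCE);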

And for Calendar to Date conversion, we can simply use the getTime method:

Date currentDate = Calendar.getInstance().getTime();

As a fun fact, the GregorianCalendar class is the implementation of the most widely used calendar in the world.

4. The java.sql Package

Next, we'll explore three extensions of the java.util.Date class that represent the equivalent SQL objects.

4.1. java.sql.Date

With a java.sql.Date object, we don’t have access to time zone information, and the precision is truncated at the day level. To represent today, we can use the constructor that takes a long representation of milliseconds:

Date currentSqlDate = new Date(System.currentTimeMillis());

As before, for a specific date, we can use the SimpleDateFormat class to convert to a java.util.Date first and then get the milliseconds using the getTime method. Then, we can pass this value to the java.sql.Date constructor.
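
Putting those steps together, a sketch of that conversion might look like this (reusing the dateFormatter from section 3.1):

java.util.Date parsedUtilDate = dateFormatter.parse("30-01-2020 10:11:12");
Date customSqlDateFromUtil = new Date(parsedUtilDate.getTime());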

We can simply use the valueOf method when the String representation of a Date matches the yyyy-[m]m-[d]d pattern:

Date customSqlDate = Date.valueOf("2020-01-30");

4.2. java.sql.Time

The java.sql.Time object offers access to the hour, minute, and second information — once again, with no access to a time zone. Let’s get the current Time using milliseconds representation:

Time currentSqlTime = new Time(System.currentTimeMillis());

To specify a time using the valueOf method, we can pass in a value matching the hh:mm:ss pattern:

Time customSqlTime = Time.valueOf("10:11:12");

4.3. java.sql.Timestamp

In this last section, we'll combine both the SQL Date and Time information using the Timestamp class. This allows us to have precision down to nanoseconds.

Let’s create a Timestamp object by once again passing a long value for the current number of milliseconds to the constructor:

Timestamp currentSqlTimestamp = new Timestamp(System.currentTimeMillis());

Finally, let's create a new custom Timestamp using the valueOf method with the required yyyy-[m]m-[d]d hh:mm:ss[.f…] pattern:

Timestamp customSqlTimestamp = Timestamp.valueOf("2020-1-30 10:11:12.123456789");

5. Conclusion

In this short tutorial, we’ve seen how to get the current date and the date for a given instant without the use of Java 8 or any external libraries.

As always, the code for the article is available over on GitHub.


How to Pass Command Line Arguments to Bash Script


1. Overview

Linux has a rich and powerful set of ways to supply parameters to bash scripts.

In this tutorial, we'll look at a few ways of doing this.

2. Argument List

Arguments can be passed to a bash script at execution time, as a space-separated list following the script filename. This comes in handy when a script has to perform different functions depending on the values of the input.

For instance, let's pass a couple of parameters to our script start.sh:

sh start.sh development 100
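
Inside the script, these values are available as the positional parameters $1, $2, and so on. A minimal sketch of what start.sh might look like (the variable names are just an illustration):

#!/bin/bash
environment=$1
threshold=$2
echo "Starting in $environment with value $threshold"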

2.1. Using Single Quotes

If the input list has arguments that comprise multiple words separated by spaces, they need to be enclosed in single quotes.

For instance, in the above-mentioned example, if the first argument to be passed is development mode instead of development, it should be enclosed in single quotes and passed as 'development mode':

sh start.sh 'development mode' 100

2.2. Using Double Quotes

Arguments that require evaluation must be enclosed in double-quotes before passing them as input.

Consider a bash script copyFile.sh that takes in two arguments: A file name and the directory to copy it to:

sh copyFile.sh abc.txt "$HOME"

Here, the $HOME variable gets evaluated to the user's home directory, and the evaluated result is passed to the script.

2.3. Escaping Special Characters

If the arguments that need to be passed have special characters, they need to be escaped with backslashes:

sh printStrings.sh abc a@1 cd\$ 1\*2

The characters $ and * do not belong to the safe set and hence are escaped with backslashes.

These rules for using single quotes, double quotes, and escape characters remain the same for the subsequent sections as well.

3. Flags

Arguments can also be passed to the bash script with the help of flags. These flags are usually single-character letters preceded by a hyphen, and each flag's value follows it, separated by a space.

Let's consider the following example of a user registration script, userReg.sh, which takes three arguments: username, full name, and age:

sh userReg.sh -u abc -f Abc -a 25

Here, the input to the script is specified using the flags (u, f, and a), and the script processes it by fetching the corresponding value for each flag.
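
One common way for the script to read these flags is the getopts shell built-in; a minimal sketch of what userReg.sh might look like:

#!/bin/bash
while getopts u:f:a: flag
do
    case "${flag}" in
        u) username=${OPTARG};;
        f) fullname=${OPTARG};;
        a) age=${OPTARG};;
    esac
done
echo "Registering $username ($fullname), age $age"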

4. Environment Variables

Arguments can also be passed to a bash script in the form of environment variables. This can be done in either of the following ways:

  • Specifying the variable value before the script execution command
  • Exporting the variable and then executing the script

Let's look at the following example of a script processor.sh, which takes two variables var1 and var2 as input.

As mentioned above, these variables can be fed as input to the script:

var1=abc var2=c\#1 sh processor.sh

Here, we're first specifying the values of the var1 and var2 variables before invoking the script execution, in the same command.

The same can also be achieved by exporting var1 and var2 as an environment variable and then invoking the script execution:

export var1=abc
export var2=c\#1
sh processor.sh
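
In both cases, the script reads these variables like any other shell variable; a minimal sketch of what processor.sh might look like:

#!/bin/bash
echo "var1 is $var1"
echo "var2 is $var2"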

5. Last Argument Operator (!$)

The last argument of the previous command, referred to by !$, can also be passed as an argument to a bash script.

Let's suppose we're again copying files, and the destination is the user home directory. Using the last argument operator, we can pass the input to the script:

cd $HOME
sh copyFile.sh abc.txt !$

In the first command, we navigate to the user's home directory. When the second command is invoked, !$ evaluates to the last argument of the previous command, which is $HOME, so the resulting command becomes:

sh copyFile.sh abc.txt $HOME

6. Pipe Operator (|)

The pipe operator (|), in combination with the xargs command, can be used to feed input to bash scripts.

Let's revisit the earlier example of the printStrings.sh script, which takes a list of strings as input and prints them.

Instead of passing these strings as arguments in the command line, they can be added inside a file. Then, this file, with the help of the pipe operator and xargs command, can be used as the input:

cat abc.txt | xargs printStrings.sh

Here, the cat command outputs the strings that have been added in the file. These are then passed to the xargs command through the pipe operator, which collects this list and passes it to the script.

7. Conclusion

In conclusion, we've seen different ways of passing command-line arguments to a bash script at run time, and how to handle different types of input, such as multi-word arguments and arguments with special characters.

How to Mount and Unmount Filesystems in Linux


1. Overview

We know that files are the central operating units in the Linux system. Files and directories are stored in filesystems, which can be located on various devices, such as a hard disk or a USB drive. Filesystems can also be shared over the network.

In this tutorial, we'll discuss how to use the mount command to attach various filesystems and detach them with the command umount.

2. List Mounted Filesystems

In Linux, we can mount a filesystem into any directory. As a result, the files stored in that filesystem are then accessible when we enter the directory.

We call those directories the “mount points” of a filesystem.

We can get the information of all currently mounted filesystems by using the mount command without any arguments:

$ mount
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
sys on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
dev on /dev type devtmpfs (rw,nosuid,relatime,size=8133056k,nr_inodes=2033264,mode=755)
run on /run type tmpfs (rw,nosuid,nodev,relatime,mode=755)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
/media/Data/archLinux.iso on /mnt/archIso type udf (ro,relatime,utf8)
/dev/sdb1 on / type ext4 (rw,noatime,commit=120)
...

Let's pick the last output line as an example to understand the information that the mount command gives us:

/dev/sdb1 on / type ext4 (rw,noatime,commit=120)
---------   ---     ----  ---------------------
   (1)      (2)     (3)          (4)
  1. The device or filesystem name we want to mount. In this case, it's the first partition on the second hard disk.
  2. The mount point. This is the directory the device is mounted to — the root directory in this case.
  3. The type of the filesystem. In this example, it's ext4.
  4. Mount options. We'll cover some commonly used mount options in the following sections.

The command mount will, by default, report all mounted filesystems, including the virtual ones such as cgroup on /sys/fs/cgroup.

It can be a very long list. However, if we want to check the mounting information of a particular filesystem type, we can make use of the -t type option.

Let's see how to get the mounting information only for udf (Universal Disk Format) filesystems:

$ mount -t udf
/media/Data/archLinux.iso on /mnt/archIso type udf (ro,relatime,utf8)

3. Mounting Filesystems

Mounting filesystems isn't complicated. Usually, it takes only two steps:

  • Create the mount point (create a directory using the mkdir command)
  • Mount the filesystem with the command:
mount -t Type Device MountPoint

Usually, the mount command can detect the type of filesystem automatically. That is, we don't have to pass the -t option explicitly.

There are some cases in which the mount command cannot detect the filesystem type:

  • The partition is corrupt or not formatted
  • The required filesystem tools are not available — for example, an attempt to mount an NTFS partition with “read & write” access without installing the ntfs-3g package

On the Linux system, mounting is typically restricted to the root user for security reasons.

The root user can set the permission of mounting point directories. As a result, all users allowed to enter the directories can access the mounted filesystems.

Next, let's see how to mount various devices and filesystems.

3.1. USB Drive/Stick

To mount a USB drive in Linux, first of all, we have to find out the name of the USB device we want to mount.

After we plug in a USB device, the Linux system adds a new block device file into the /dev directory.

Most modern Linux distributions will populate a /dev/disk/by-label directory by udev rules. To identify the partition on the USB drive, we can go to /dev/disk/by-label to find the block device by checking the label of the partition.

Let's see an example of how to find the device we're about to mount.

Say we plug in a 16GB USB stick that has a single partition, formatted as ext4 with the label “SanDisk_16G”:

$ pwd
/dev/disk/by-label
$ ls -l
...
lrwxrwxrwx 1 root root 10 Oct 28 23:47  Backup -> ../../sda3
lrwxrwxrwx 1 root root 10 Nov  1 18:07  SanDisk_16G -> ../../sdd1
...

The ls -l output shows the block device file of our USB stick is /dev/sdd1.

However, not all Linux distributions will populate the /dev/disk/by-label directory. CirrOS, for instance, doesn't populate the by-label directory by default.

In addition to searching in /dev/disk/by-label, we can also identify the block device file of our USB device by using the fdisk command with the -l option:

root# fdisk -l
...
Disk /dev/sdd: 14.94 GiB, 16013942784 bytes, 31277232 sectors
Disk model: Extreme         
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xfdc01076

Device     Boot Start      End  Sectors  Size Id Type
/dev/sdd1  *       63 31277231 31277169 14.9G 83 Linux
...

Once we've found the correct device file, mounting the device is pretty straightforward.

First, we create the mount point:

root# mkdir /mnt/usb16G

Now let's mount the USB stick:

root# mount /dev/sdd1 /mnt/usb16G

root# mount | grep usb16G
/dev/sdd1 on /mnt/usb16G type ext4 (rw,defaults,data=ordered)

And now, we can access the USB stick under the directory /mnt/usb16G.

3.2. ISO Files

Mounting isn't restricted to physical devices. If we have a filesystem “image”, such as an optical disc ISO image, we can mount the image as a filesystem.

We mount the image through the use of a pseudo-device called the “loop device”. This makes files contained in the image accessible.

Let's see how to mount an ISO file archLinux.iso on /mnt/archIso:

First, we create a directory as the mount point:

root# mkdir /mnt/archIso

Next, we mount the ISO image on the directory we just created:

root# mount /media/Data/archLinux.iso /mnt/archIso -o loop

root# mount | grep archIso
/media/Data/archLinux.iso on /mnt/archIso type udf (ro,relatime,utf8)

In the above mount command, we use the “loop” option to tell the mount command to treat the ISO image as a loop device.

3.3. Samba Share

The SMB protocol allows a Unix-like system to access shared resources in a Microsoft Windows system.

Samba is an open-source implementation of the SMB protocol.

The cifs-utils package is required to mount a Samba share.

For example, let's say we have a Windows share “sharedOnWin” on a Windows server (192.168.0.7), and a Windows user “kent” with password “kent_PWD” has been authorized to access the shared resources.

Before we mount the Samba share, we create a directory:

root# mkdir /mnt/winShare

Next, let's mount the Samba share:

root# mount -t cifs //192.168.0.7/sharedOnWin /mnt/winShare -o username=kent,password=kent_PWD

root# mount | grep winShare
//192.168.0.7/sharedOnWin on /mnt/winShare type cifs (rw,relatime...addr=192.168.0.7,username=kent...)

Now, the files in the Windows share are available in the directory /mnt/winShare.

3.4. NFS

NFS (Network File System) is a distributed filesystem protocol that allows us to share remote directories over a network.

To mount an NFS share, we must install the NFS client package first.

Let's say we have a well-configured NFS shared directory “/export/nfs/shared” on a server 192.168.0.8.

Similar to the Samba share mount, we first create the mount point and then mount the NFS share:

root# mkdir /mnt/nfsShare
root# mount -t nfs 192.168.0.8:/export/nfs/shared /mnt/nfsShare

root# mount | grep nfsShare
192.168.0.8:/export/nfs/shared/ on /mnt/nfsShare type nfs (rw,addr=192.168.0.8)

3.5. Commonly Used mount -o Options

The mount command supports many options.

Some commonly used options are:

  • loop – mount as a loop device
  • rw – mount the filesystem read-write (default)
  • ro – mount the filesystem read-only
  • iocharset=value – character set to use for accessing the filesystem (default iso8859-1)
  • noauto – the filesystem will not be mounted automatically during system boot

3.6. The /etc/fstab File

So far, we've seen several examples of the mount command to attach to various filesystems. However, the mounts won't survive after a reboot.

For some filesystems, we may want to have them automatically mounted after system boot or reboot. The /etc/fstab file can help us to achieve this.

The /etc/fstab file contains lines describing which filesystems or devices are to be mounted on which mount points, and with which mount options.

All filesystems listed in the fstab file will be mounted automatically during system boot, except for the lines containing the “noauto” mount option. 

Let's see an /etc/fstab example:

$ cat /etc/fstab
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
/dev/sdb1	/	ext4	rw,defaults,noatime,commit=120,data=ordered	0	1
/dev/sdb2	/home	ext4	rw,defaults,noatime,data=ordered	0	2
/dev/sda3	/media/Backup	ntfs-3g	defaults,locale=en_US.UTF-8	0	0
/dev/sda2	/media/Data	ntfs-3g	defaults,locale=en_US.UTF-8	0	0
...

Thus, if we add the following line in this file, the archLinux.iso image will be automatically mounted on /mnt/archIso after system boot:

/media/Data/archLinux.iso /mnt/archIso udf ro,relatime,utf8 0 0

Once a filesystem is mentioned in /etc/fstab, we can mount it by just giving the mount point or the device.

For instance, with the above fstab configuration, we can mount the /dev/sda2 partition with either of the two short commands:

root# mount /media/Data

or

root# mount /dev/sda2

4. Unmounting a Filesystem

The umount command notifies the system to detach the given mounted filesystems. We just provide the filesystem name or the mount point following the umount command.

For example, if we want to unmount the previously mounted USB stick and ISO image:

root# umount /dev/sdd1
root# umount /mnt/archIso

We can also umount multiple mounted filesystems in one shot:

root# umount /dev/sdd1 /mnt/archIso

4.1. Lazy Unmount

When we want to umount a filesystem, we don't always know if there are operations still running on it. For instance, a copy job could be running on the filesystem.

Of course, we don't want to break the copy and get inconsistent data. The option -l will let us do a “lazy” umount.

The -l option informs the system to complete pending read or write operations on that filesystem and then safely unmount it:

root# umount -l mount_point

4.2. Force Unmount

If we pass the -f option to the umount command, it'll forcefully unmount a filesystem even if it's still busy:

root# umount -f mount_point

We should be careful while executing umount with -f as it could lead to corrupt or inconsistent data in the unmounted filesystem. 

One real-world use case for force unmounting could be unmounting a network share because of a connection problem.

5. Conclusion

In this article, we've seen examples of how to use the mount and umount command to attach and detach various filesystems or devices.

Later on, we talked about some commonly used mounting options and the /etc/fstab file.

Armed with these two commands, accessing network shares or other filesystems in Linux CLI won't be a challenge for us.

Guide to Eureka Self Preservation and Renewal


1. Overview

In this tutorial, we're going to learn about Eureka Self Preservation and Renewal.

We'll start by creating a Eureka server alongside multiple Eureka client instances.

Then, we'll register these clients with our Eureka server to show how self-preservation works.

2. Eureka Self-Preservation

Before talking about self-preservation, let's understand how the Eureka server maintains the client instance registry.

During start-up, the clients trigger a REST call to the Eureka server to self-register in the server's instance registry. When a graceful shutdown occurs, the clients trigger another REST call so that the server can wipe out all the data related to the caller.

To handle ungraceful client shutdowns, the server expects heartbeats from the clients at specific intervals. This is called renewal. If the server stops receiving heartbeats for a specified duration, then it will start evicting the stale instances.

The mechanism that stops the eviction of instances when heartbeats fall below the expected threshold is called self-preservation. This might happen in the case of a network partition, where the instances are still up but just can't be reached for a moment, or in the case of an abrupt client shutdown.

When the server activates self-preservation mode, it holds off instance eviction until the renewal rate is back above the expected threshold.

Let's see this in action.

3. Creating the Server

Let's first create the Eureka server by annotating our Spring Boot main class with @EnableEurekaServer:

@SpringBootApplication
@EnableEurekaServer
public class EurekaServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(EurekaServerApplication.class, args);
    }
}

Now, let's add the basic configuration needed to start up the server:

eureka.client.registerWithEureka=false
eureka.client.fetchRegistry=false
eureka.instance.hostname=localhost

Since we don't want our Eureka server to register with itself, we've set the property eureka.client.registerWithEureka to false. Here, the property eureka.instance.hostname=localhost is particularly important since we're running it on a local machine. Otherwise, we may end up creating an unavailable replica within the Eureka server, messing up the client's heartbeat counts.

In the next section, let's take a look at all these configurations and their relevance in the context of self-preservation.

3.1. Self-Preservation Configurations

By default, Eureka servers run with self-preservation enabled.

However, for the sake of our understanding, let's go through each of these configurations on the server-side.

  • eureka.server.enable-self-preservation: Configuration for disabling self-preservation – the default value is true
  • eureka.server.expected-client-renewal-interval-seconds: The server expects client heartbeats at an interval configured with this property – the default value is 30
  • eureka.instance.lease-expiration-duration-in-seconds: Indicates the time in seconds that the Eureka server waits since it received the last heartbeat from a client before it can remove that client from its registry – the default value is 90
  • eureka.server.eviction-interval-timer-in-ms: This property tells the Eureka server to run a job at this frequency to evict the expired clients – the default value is 60 seconds
  • eureka.server.renewal-percent-threshold: Based on this property, the server calculates the expected heartbeats per minute from all the registered clients – the default value is 0.85
  • eureka.server.renewal-threshold-update-interval-ms: This property tells the Eureka server to run a job at this frequency to calculate the expected heartbeats from all the registered clients at this minute – the default value is 15 minutes

In most cases, the default configuration is sufficient. But for specific requirements, we might want to change these configurations. Utmost care needs to be taken in those cases to avoid unexpected consequences like an incorrect renewal threshold calculation or delayed activation of self-preservation mode.

4. Registering Clients

Now, let's create a Eureka Client and spin up six instances:

@SpringBootApplication
@EnableEurekaClient
public class EurekaClientApplication {

    public static void main(String[] args) {
        SpringApplication.run(EurekaClientApplication.class, args);
    }
}

Here are the client's configurations:

spring.application.name=Eurekaclient
server.port=${PORT:0}
eureka.client.serviceUrl.defaultZone=http://localhost:8761/eureka
eureka.instance.preferIpAddress=true
eureka.instance.lease-renewal-interval-in-seconds=30

This configuration allows us to spin up multiple instances of the same client with the PORT program argument. The configuration eureka.instance.lease-renewal-interval-in-seconds indicates the interval of heartbeats that the client sends to the server. The default value is 30 seconds, which means that the client will send one heartbeat every 30 seconds.

Let's now start these six client instances with the port numbers starting from 8081 to 8086 and navigate to http://localhost:8761 to inspect whether these instances are registered with the Eureka server.

From the screenshot, we can see that our Eureka server has six registered client instances and the total renewal threshold is 11. The threshold calculation is based on three factors:

  • Total number of registered client instances – 6
  • Configured client renewal interval – 30 seconds
  • The configured renewal percentage threshold – 0.85

Considering all these factors, in our case, the threshold is 11.

5. Testing Self-Preservation

In order to simulate a temporary network problem, let's set the property eureka.client.should-unregister-on-shutdown to false on the client side and stop one of our client instances. Because we set the should-unregister-on-shutdown flag to false, the client won't invoke the unregister call, and the server assumes that this is an ungraceful shutdown.
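
For reference, the corresponding entry in the client's properties file looks like this:

eureka.client.should-unregister-on-shutdown=false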

Let's now wait for 90 seconds, set by our eureka.instance.lease-expiration-duration-in-seconds property, and navigate again to http://localhost:8761. The red bold text indicates that the Eureka Server is now in self-preservation mode and stopped evicting instances.

Let's now inspect the registered instances section to see if the stopped instance is still available or not. As we can see, it is available but with the status as DOWN:

The only way the server can get out of self-preservation mode is either by starting the stopped instance or by disabling self-preservation itself. If we repeat the same steps by setting the flag eureka.server.enable-self-preservation as false, then the Eureka server will evict the stopped instance from the registry after the configured lease expiration duration property.

6. Conclusion

In this tutorial, we've learned how Eureka self-preservation works and how we can configure different options related to self-preservation.

All the examples we've demonstrated here can be found over on GitHub.

Benchmark JDK Collections vs Eclipse Collections


1. Introduction

In this tutorial, we're going to compare the performance of traditional JDK collections with Eclipse Collections. We'll create different scenarios and explore the results.

2. Configuration

First, note that for this article, we'll use the default configuration to run the tests. No flags or other parameters will be set on our benchmark.

We'll use the following hardware and libraries:

The easiest way to create our project is via the command-line:

mvn archetype:generate \
  -DinteractiveMode=false \
  -DarchetypeGroupId=org.openjdk.jmh \
  -DarchetypeArtifactId=jmh-java-benchmark-archetype \
  -DgroupId=com.baeldung \
  -DartifactId=benchmark \
  -Dversion=1.0

After that, we can open the project using our favorite IDE and edit the pom.xml to add the Eclipse Collections dependencies:

<dependency>
    <groupId>org.eclipse.collections</groupId>
    <artifactId>eclipse-collections</artifactId>
    <version>10.0.0</version>
</dependency>
<dependency>
    <groupId>org.eclipse.collections</groupId>
    <artifactId>eclipse-collections-api</artifactId>
    <version>10.0.0</version>
</dependency>

3. First Benchmark

Our first benchmark is simple. We want to calculate the sum of a previously created List of Integers.

We'll test six different combinations while running them in serial and parallel:

private List<Integer> jdkIntList;
private MutableList<Integer> ecMutableList;
private ExecutorService executor;
private IntList ecIntList;

@Setup
public void setup() {
    PrimitiveIterator.OfInt iterator = new Random(1L).ints(-10000, 10000).iterator();
    ecMutableList = FastList.newWithNValues(1_000_000, iterator::nextInt);
    jdkIntList = new ArrayList<>(1_000_000);
    jdkIntList.addAll(ecMutableList);
    ecIntList = ecMutableList.collectInt(i -> i, new IntArrayList(1_000_000));
    executor = Executors.newWorkStealingPool();
}

@Benchmark
public long jdkList() {
    return jdkIntList.stream().mapToLong(i -> i).sum();
}

@Benchmark
public long ecMutableList() {
    return ecMutableList.sumOfInt(i -> i);
}

@Benchmark
public long jdkListParallel() {
    return jdkIntList.parallelStream().mapToLong(i -> i).sum();
}

@Benchmark
public long ecMutableListParallel() {
    return ecMutableList.asParallel(executor, 100_000).sumOfInt(i -> i);
}

@Benchmark
public long ecPrimitive() { 
    return this.ecIntList.sum(); 
}

@Benchmark
public long ecPrimitiveParallel() {
    return this.ecIntList.primitiveParallelStream().sum(); 
}

To run our first benchmark we need to execute:

mvn clean install
java -jar target/benchmarks.jar IntegerListSum -rf json

This will trigger the benchmark at our IntegerListSum class and save the result to a JSON file.

We'll measure the throughput or number of operations per second in our tests, so the higher the better:

Benchmark                              Mode  Cnt     Score       Error  Units
IntegerListSum.ecMutableList          thrpt   10   573.016 ±    35.865  ops/s
IntegerListSum.ecMutableListParallel  thrpt   10  1251.353 ±   705.196  ops/s
IntegerListSum.ecPrimitive            thrpt   10  4067.901 ±   258.574  ops/s
IntegerListSum.ecPrimitiveParallel    thrpt   10  8827.092 ± 11143.823  ops/s
IntegerListSum.jdkList                thrpt   10   568.696 ±     7.951  ops/s
IntegerListSum.jdkListParallel        thrpt   10   918.512 ±    27.487  ops/s

According to our tests, Eclipse Collections' parallel primitive list had the highest throughput of all. It was also the most efficient, performing almost 10x faster than the JDK list, also running in parallel.

Of course, a portion of that can be explained by the fact that when working with primitive lists, we don't have the cost associated with boxing and unboxing.

We can use JMH Visualizer to analyze our results. The chart below shows a better visualization:

4. Filtering

Next, we'll modify our list to get all elements that are multiple of 5. We'll reuse a big portion of our previous benchmark and a filter function:

private List<Integer> jdkIntList;
private MutableList<Integer> ecMutableList;
private IntList ecIntList;
private ExecutorService executor;

@Setup
public void setup() {
    PrimitiveIterator.OfInt iterator = new Random(1L).ints(-10000, 10000).iterator();
    ecMutableList = FastList.newWithNValues(1_000_000, iterator::nextInt);
    jdkIntList = new ArrayList<>(1_000_000);
    jdkIntList.addAll(ecMutableList);
    ecIntList = ecMutableList.collectInt(i -> i, new IntArrayList(1_000_000));
    executor = Executors.newWorkStealingPool();
}

@Benchmark
public List<Integer> jdkList() {
    return jdkIntList.stream().filter(i -> i % 5 == 0).collect(Collectors.toList());
}

@Benchmark
public MutableList<Integer> ecMutableList() {
    return ecMutableList.select(i -> i % 5 == 0);
}


@Benchmark
public List<Integer> jdkListParallel() {
    return jdkIntList.parallelStream().filter(i -> i % 5 == 0).collect(Collectors.toList());
}

@Benchmark
public MutableList<Integer> ecMutableListParallel() {
    return ecMutableList.asParallel(executor, 100_000).select(i -> i % 5 == 0).toList();
}

@Benchmark
public IntList ecPrimitive() {
    return this.ecIntList.select(i -> i % 5 == 0);
}

@Benchmark
public IntList ecPrimitiveParallel() {
    return this.ecIntList.primitiveParallelStream()
      .filter(i -> i % 5 == 0)
      .collect(IntLists.mutable::empty, MutableIntList::add, MutableIntList::addAll);
}

We'll execute the test just like before:

mvn clean install
java -jar target/benchmarks.jar IntegerListFilter -rf json

And the results:

Benchmark                                 Mode  Cnt     Score    Error  Units
IntegerListFilter.ecMutableList          thrpt   10   145.733 ±  7.000  ops/s
IntegerListFilter.ecMutableListParallel  thrpt   10   603.191 ± 24.799  ops/s
IntegerListFilter.ecPrimitive            thrpt   10   232.873 ±  8.032  ops/s
IntegerListFilter.ecPrimitiveParallel    thrpt   10  1029.481 ± 50.570  ops/s
IntegerListFilter.jdkList                thrpt   10   155.284 ±  4.562  ops/s
IntegerListFilter.jdkListParallel        thrpt   10   445.737 ± 23.685  ops/s

As we can see, the Eclipse Collections primitive list was the winner again, with a throughput more than 2x that of the JDK parallel list.

Note that for filtering, the effect of parallel processing is more visible. Summing is a cheap operation for the CPU and we won't see the same differences between serial and parallel.

Also, the performance boost that Eclipse Collections primitive lists got earlier begins to evaporate as the work done on each element begins to outweigh the cost of boxing and unboxing.

To wrap up, we can see that operations on primitives are faster than operations on objects.

5. Conclusion

In this article, we created a couple of benchmarks to compare Java Collections with Eclipse Collections. We've leveraged JMH to try to minimize the environment bias.

As always, the source code is available over on GitHub.

Spring Optional Path Variables


1. Overview

In this tutorial, we'll learn how to make a path variable optional in Spring. First, we'll describe how Spring binds @PathVariable parameters in a handler method. Then, we'll show different ways of making a path variable optional in different Spring versions.

For a quick overview of path variables, please read our Spring MVC article.

2. How Spring Binds @PathVariable Parameters

By default, Spring will try to bind all parameters annotated with @PathVariable in a handler method with the corresponding variables in the URI template. If the binding fails, Spring won't deliver our request to that handler method.

For instance, consider the following getArticle method that attempts (unsuccessfully) to make the id path variable optional:

@RequestMapping(value = {"/article", "/article/{id}"})
public Article getArticle(@PathVariable(name = "id") Integer articleId) {
    if (articleId != null) {
        //...
    } else {
        //...
    }
}

Here, the getArticle method is supposed to serve requests to both /article and /article/{id}. Spring will try to bind the articleId parameter to the id path variable if it's present.

For instance, sending a request to /article/123 sets the value of articleId to 123.

On the other hand, if we send a request to /article, Spring returns status code 500 due to the following exception:

org.springframework.web.bind.MissingPathVariableException:
  Missing URI template variable 'id' for method parameter of type Integer

This is because Spring couldn't set a value for the articleId parameter, as id was missing.

So, we need some way to tell Spring to ignore binding a specific @PathVariable parameter if it has no corresponding path variable, as we'll see in the following sections.

3. Making Path Variables Optional

3.1. Using the required Attribute of @PathVariable

Since Spring 4.3.3, the @PathVariable annotation defines the boolean attribute required for us to indicate if a path variable is mandatory to a handler method.

For example, the following version of getArticle uses the required attribute:

@RequestMapping(value = {"/article", "/article/{id}"})
public Article getArticle(@PathVariable(required = false) Integer articleId) {
   if (articleId != null) {
       //...
   } else {
       //...
   }
}

Since the required attribute is false, Spring will not complain if the id path variable is not sent in the request. That is, Spring will set articleId to the value of id if it's sent, or to null otherwise.

On the other hand, if required was true, Spring would throw an exception in case id was missing.

3.2. Using an Optional Parameter Type

The following implementation shows how Spring 4.1, along with JDK 8's Optional class, offers another way to make articleId optional:

@RequestMapping(value = {"/article", "/article/{id}"})
public Article getArticle(@PathVariable Optional<Integer> optionalArticleId) {
    if (optionalArticleId.isPresent()) {
        Integer articleId = optionalArticleId.get();
        //...
    } else {
        //...
    }
}

Here, Spring creates the Optional<Integer> instance, optionalArticleId, to hold the value of id. If id is present, optionalArticleId will wrap its value; otherwise, it will be empty. Then, we can use Optional's isPresent(), get(), or orElse() methods to work with the value.

3.3. Using a Map Parameter Type

Another way to define an optional path variable, available since Spring 3.2, is with a Map for @PathVariable parameters:

@RequestMapping(value = {"/article", "/article/{id}"})
public Article getArticle(@PathVariable Map<String, String> pathVarsMap) {
    String articleId = pathVarsMap.get("id");
    if (articleId != null) {
        Integer articleIdAsInt = Integer.valueOf(articleId);
        //...
    } else {
        //...
    }
}

In this example, the Map<String, String> pathVarsMap parameter collects all path variables that are in the URI as key/value pairs. Then, we can get a specific path variable using the get() method.

Note that because Spring extracts the value of a path variable as a String, we used the Integer.valueOf() method to convert it to Integer.

3.4. Using Two Handler Methods

If we're using a legacy Spring version, we can split the getArticle handler method into two methods.

The first method will handle requests to /article/{id}:

@RequestMapping(value = "/article/{id}")
public Article getArticle(@PathVariable(name = "id") Integer articleId) {
    //...        
}

While the second method will handle requests to /article:

@RequestMapping(value = "/article")
public Article getDefaultArticle() {
    //...
}

4. Conclusion

To sum up, we've discussed how to make a path variable optional in different Spring versions.

As usual, the complete code for this article is available over on GitHub.

Implementing A* Pathfinding in Java


1. Introduction

Pathfinding algorithms are techniques for navigating maps, allowing us to find a route between two different points. Different algorithms have different pros and cons, often in terms of the efficiency of the algorithm and the efficiency of the route that it generates.

2. What is a Pathfinding Algorithm?

A Pathfinding Algorithm is a technique for converting a graph – consisting of nodes and edges – into a route through the graph. This graph can be anything at all that needs traversing. For this article, we're going to attempt to traverse a portion of the London Underground system:

(“London Underground Overground DLR Crossrail map” by sameboat is licensed under CC BY-SA 4.0)

This has a lot of interesting components to it:

  • We may or may not have a direct route between our starting and ending points. For example, we can go directly from “Earl's Court” to “Monument”, but not to “Angel”.
  • Every single step has a particular cost. In our case, this is the distance between stations.
  • Each stop is only connected to a small subset of the other stops. For example, “Regent's Park” is directly connected to only “Baker Street” and “Oxford Circus”.

All pathfinding algorithms take as input a collection of all the nodes – stations in our case – and connections between them, and also the desired starting and ending points. The output is typically the set of nodes that will get us from start to end, in the order that we need to go.

3. What is A*?

A* is one specific pathfinding algorithm, first published in 1968 by Peter Hart, Nils Nilsson, and Bertram Raphael. It is generally considered to be the best algorithm to use when there is no opportunity to pre-compute the routes and there are no constraints on memory usage.

Both memory and performance complexity can be O(b^d) in the worst case, so while it will always work out the most efficient route, it's not always the most efficient way to do so.

A* is actually a variation on Dijkstra's Algorithm, where there is additional information provided to help select the next node to use. This additional information does not need to be perfect – if we already have perfect information, then pathfinding is pointless. But the better it is, the better the end result will be.

4. How does A* Work?

The A* algorithm works by iteratively selecting the best route so far and examining what the best next step is.

When working with this algorithm, we have several pieces of data that we need to keep track of. The “open set” is all of the nodes that we are currently considering. This is not every node in the system, but instead, it's every node that we might make the next step from.

We'll also keep track of the current best score, the estimated total score and the current best previous node for each node in the system.

As part of this, we need to be able to calculate two different scores. One is the score to get from one node to the next. The second is a heuristic to give an estimate of the cost from any node to the destination. This estimate does not need to be accurate, but greater accuracy is going to yield better results. The only requirement is that both scores are consistent with each other – that is, they're in the same units.

At the very start, our open set consists of our start node, and we have no information about any other nodes at all.

At each iteration, we will:

  • Select the node from our open set that has the lowest estimated total score
  • Remove this node from the open set
  • Add to the open set all of the nodes that we can reach from it

When we do this, we also work out the new score from this node to each new one to see if it's an improvement on what we've got so far, and if it is, then we update what we know about that node.

This then repeats until the node in our open set that has the lowest estimated total score is our destination, at which point we've got our route.

4.1. Worked Example

For example, let's start from “Marylebone” and attempt to find our way to “Bond Street”.

At the very start, our open set consists only of “Marylebone”. That means that this is implicitly the node that we've got the best “estimated total score” for.

Our next stops can be either “Edgware Road”, with a cost of 0.4403 km, or “Baker Street”, with a cost of 0.4153 km. However, “Edgware Road” is in the wrong direction, so our heuristic from here to the destination gives a score of 1.4284 km, whereas “Baker Street” has a heuristic score of 1.0753 km.

This means that after this iteration our open set consists of two entries – “Edgware Road”, with an estimated total score of 1.8687 km, and “Baker Street”, with an estimated total score of 1.4906 km.

Our second iteration will then start from “Baker Street”, since this has the lowest estimated total score. From here, our next stops can be either “Marylebone”, “St. John's Wood”, “Great Portland Street”, “Regent's Park”, or “Bond Street”.

We won't work through all of these, but let's take “Marylebone” as an interesting example. The cost to get there is again 0.4153 km, but this means that the total cost is now 0.8306 km. Additionally, the heuristic from here to the destination gives a score of 1.323 km.

This means that the estimated total score would be 2.1536 km, which is worse than the previous score for this node. This makes sense because we've had to do extra work to get nowhere in this case. This means that we will not consider this a viable route. As such, the details for “Marylebone” are not updated, and it is not added back onto the open set.

5. Java Implementation

Now that we've discussed how this works, let's actually implement it. We're going to build a generic solution, and then we'll implement the code necessary for it to work for the London Underground. We can then use it for other scenarios by implementing only those specific parts.

5.1. Representing the Graph

Firstly, we need to be able to represent our graph that we wish to traverse. This consists of two classes – the individual nodes and then the graph as a whole.

We'll represent our individual nodes with an interface called GraphNode:

public interface GraphNode {
    String getId();
}

Each of our nodes must have an ID. Anything else is specific to this particular graph and is not needed for the general solution. These classes are simple Java Beans with no special logic.

Our overall graph is then represented by a class simply called Graph:

public class Graph<T extends GraphNode> {
    private final Set<T> nodes;
    private final Map<String, Set<String>> connections;
    
    public T getNode(String id) {
        return nodes.stream()
            .filter(node -> node.getId().equals(id))
            .findFirst()
            .orElseThrow(() -> new IllegalArgumentException("No node found with ID"));
    }

    public Set<T> getConnections(T node) {
        return connections.get(node.getId()).stream()
            .map(this::getNode)
            .collect(Collectors.toSet());
    }
}

This stores all of the nodes in our graph and has knowledge of which nodes connect to which. We can then get any node by ID, or all of the nodes connected to a given node.

At this point, we're capable of representing any form of graph we wish, with any number of edges between any number of nodes.

5.2. Steps on Our Route

The next thing we need is our mechanism for finding routes through the graph.

The first part of this is some way to generate a score between any two nodes. We'll use the Scorer interface for both the score to the next node and the estimate to the destination:

public interface Scorer<T extends GraphNode> {
    double computeCost(T from, T to);
}

Given a start and an end node, we then get a score for traveling between them.

We also need a wrapper around our nodes that carries some extra information. Instead of being a GraphNode, this is a RouteNode – because it's a node in our computed route instead of one in the entire graph:

class RouteNode<T extends GraphNode> implements Comparable<RouteNode> {
    private final T current;
    private T previous;
    private double routeScore;
    private double estimatedScore;

    RouteNode(T current) {
        this(current, null, Double.POSITIVE_INFINITY, Double.POSITIVE_INFINITY);
    }

    RouteNode(T current, T previous, double routeScore, double estimatedScore) {
        this.current = current;
        this.previous = previous;
        this.routeScore = routeScore;
        this.estimatedScore = estimatedScore;
    }
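
    // getters, plus setters for previous, routeScore, and estimatedScore,
    // are omitted here for brevity but are used by the algorithm below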
}

As with GraphNode, these are simple Java Beans used to store the current state of each node for the current route computation. We've given this a simple constructor for the common case, when we're first visiting a node and have no additional information about it yet.

These also need to be Comparable though, so that we can order them by the estimated score as part of the algorithm. This means the addition of a compareTo() method to fulfill the requirements of the Comparable interface:

@Override
public int compareTo(RouteNode other) {
    if (this.estimatedScore > other.estimatedScore) {
        return 1;
    } else if (this.estimatedScore < other.estimatedScore) {
        return -1;
    } else {
        return 0;
    }
}

5.3. Finding Our Route

Now we're in a position to actually generate our routes across our graph. This will be a class called RouteFinder:

public class RouteFinder<T extends GraphNode> {
    private final Graph<T> graph;
    private final Scorer<T> nextNodeScorer;
    private final Scorer<T> targetScorer;

    public List<T> findRoute(T from, T to) {
        throw new IllegalStateException("No route found");
    }
}

We have the graph that we are finding the routes across, and our two scorers – one for the exact score for the next node, and one for the estimated score to our destination. We've also got a method that will take a start and end node and compute the best route between the two.

This method is to be our A* algorithm. All the rest of our code goes inside this method.

We start with some basic setup – our “open set” of nodes that we can consider as the next step, and a map of every node that we've visited so far and what we know about it:

Queue<RouteNode> openSet = new PriorityQueue<>();
Map<T, RouteNode<T>> allNodes = new HashMap<>();

RouteNode<T> start = new RouteNode<>(from, null, 0d, targetScorer.computeCost(from, to));
openSet.add(start);
allNodes.put(from, start);

Our open set initially has a single node – our start point. There is no previous node for this, there's a score of 0 to get there, and we've got an estimate of how far it is from our destination.

The use of a PriorityQueue for the open set means that we automatically get the best entry off of it, based on our compareTo() method from earlier.

Now we iterate until either we run out of nodes to look at, or the best available node is our destination:

while (!openSet.isEmpty()) {
    RouteNode<T> next = openSet.poll();
    if (next.getCurrent().equals(to)) {
        List<T> route = new ArrayList<>();
        RouteNode<T> current = next;
        do {
            route.add(0, current.getCurrent());
            current = allNodes.get(current.getPrevious());
        } while (current != null);
        return route;
    }

    // ...

When we've found our destination, we can build our route by repeatedly looking at the previous node until we reach our starting point.

Next, if we haven't reached our destination, we can work out what to do next:

    graph.getConnections(next.getCurrent()).forEach(connection -> { 
        RouteNode<T> nextNode = allNodes.getOrDefault(connection, new RouteNode<>(connection));
        allNodes.put(connection, nextNode);

        double newScore = next.getRouteScore() + nextNodeScorer.computeCost(next.getCurrent(), connection);
        if (newScore < nextNode.getRouteScore()) {
            nextNode.setPrevious(next.getCurrent());
            nextNode.setRouteScore(newScore);
            nextNode.setEstimatedScore(newScore + targetScorer.computeCost(connection, to));
            openSet.add(nextNode);
        }
    });
}

throw new IllegalStateException("No route found");

Here, we're iterating over the connected nodes from our graph. For each of these, we get the RouteNode that we have for it – creating a new one if needed.

We then compute the new score for this node and see if it's cheaper than what we had so far. If it is, then we update it to match this new route and add it to the open set for consideration next time around.

This is the entire algorithm. We keep repeating this until we either reach our goal or fail to get there.

5.4. Specific Details for the London Underground

What we have so far is a generic A* pathfinder, but it's lacking the specifics we need for our exact use case. This means we need a concrete implementation of both GraphNode and Scorer.

Our nodes are stations on the underground, and we'll model them with the Station class:

public class Station implements GraphNode {
    private final String id;
    private final String name;
    private final double latitude;
    private final double longitude;
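
    // constructor and getters omitted for brevity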
}

The name is useful for seeing the output, and the latitude and longitude are for our scoring.

In this scenario, we only need a single implementation of Scorer. We're going to use the Haversine formula for this, to compute the straight-line distance between two pairs of latitude/longitude:

public class HaversineScorer implements Scorer<Station> {
    @Override
    public double computeCost(Station from, Station to) {
        double R = 6372.8; // Earth's Radius, in kilometers

        double dLat = Math.toRadians(to.getLatitude() - from.getLatitude());
        double dLon = Math.toRadians(to.getLongitude() - from.getLongitude());
        double lat1 = Math.toRadians(from.getLatitude());
        double lat2 = Math.toRadians(to.getLatitude());

        double a = Math.pow(Math.sin(dLat / 2),2)
          + Math.pow(Math.sin(dLon / 2),2) * Math.cos(lat1) * Math.cos(lat2);
        double c = 2 * Math.asin(Math.sqrt(a));
        return R * c;
    }
}

We now have almost everything necessary to calculate paths between any two pairs of stations. The only thing missing is the graph of connections between them. This is available on GitHub.

Let's use it for mapping out a route. We'll generate one from Earl's Court up to Angel. This has a number of different options for travel, on a minimum of two tube lines:

public void findRoute() {
    List<Station> route = routeFinder.findRoute(underground.getNode("74"), underground.getNode("7"));

    System.out.println(route.stream().map(Station::getName).collect(Collectors.toList()));
}

This generates a route of Earl's Court -> South Kensington -> Green Park -> Euston -> Angel.

The obvious route that many people would have taken would likely be Earl's Court -> Monument -> Angel, because that has fewer changes. Instead, A* has taken a significantly more direct route, even though it meant more changes.

6. Conclusion

In this article, we've seen what the A* algorithm is, how it works, and how to implement it in our own projects. Why not take this and extend it for your own uses?

Maybe try to extend it to take interchanges between tube lines into account, and see how that affects the selected routes?

And again, the complete code for the article is available over on GitHub.

Viewing Files in Linux Using cat, more, and less


1. Introduction

Linux provides a number of commands for viewing files. In this tutorial, we'll look at the most commonly used cat, more and less commands.

2. The cat command

The cat command is the simplest way to view the contents of a file. It displays the contents of the specified file(s) on the output terminal.

Let's look at an example:

cat a.txt

This will print the contents of the file a.txt:

A sample
file
to be used
for
cat command
examples

Sometimes, we might want to number the lines in the output.

We can do this by using the -n option:

cat -n a.txt

This will number each line in the output:

     1  A sample
     2  file
     3  to be used
     4  for
     5  cat command
     6  examples

Note that the line numbering starts from 1.

Let's look at a few other significant options:

  • -e     displays control and non-printing characters followed by a $ symbol at the end of each line
  • -t      each tab will display as ^I and each form feed will display as ^L
  • -v     displays control and non-printing characters

3. The more command

The cat command is all well and good for small files. But, if the file is large, the contents will zoom past and we'll only see the last screen worth of content.

One way to overcome this is by using the more command.

The more command displays the contents of the file one screen at a time for large files. If the contents of the file fit a single screen, the output will be the same as the cat command.

Let's use this command on a pom.xml file:
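
more pom.xml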

Note the text “--More--(46%)” at the end of the output. Here, “(46%)” tells us that the file is big and we're currently seeing only 46% of its content. This percentage will increase as we traverse through the file.

The cursor will stay at the end of this text. Then, we can scroll through the contents of the file using the Enter key, one line at a time.

We can also scroll through the file page by page by using the Space bar. And to scroll back to the previous page, we can use the b key. We'll use the q key to go back to the command prompt.

The more command can also be used to view multiple files. We just have to list each of them one after another:

more pom.xml a.txt b.txt

Later, we'll see how to move between these files.

Along with files, we can also pipe the more command with the output of other commands:

ls -l | more

It's important to note that more allows backward movement only with files, not with pipes.

3.1. Commands

As well as the keys used above, we can use a few other commands while viewing the file.

Let's look at a few important ones:

  • [k]/<text>   searches for the kth match of the regular expression <text>
  • !<cmd>       executes <cmd> in a subshell
  • [k]:n        goes to the kth next file
  • [k]:p        goes to the kth previous file
  • [k]z         displays the next k lines of text; if we don't specify k, it defaults to the current page size
  • :f           displays the current file name and line number
  • =            displays the current line number

Also, we can use h or ? at any point to list all the commands that can be used with more.

The more command also allows us to specify various options on the command line to customize the output. Let's look at a few of these.

3.2. Alter Page Size

Let's suppose we want to view only a certain number of lines at a time. We can do this by specifying the number of lines as an option:

more -5 pom.xml

This will display the first 5 lines of the file instead of a screen worth of content.

Subsequently, when we use the Space bar, the next 5 lines will be shown. Similarly, the b key will show the previous 5 lines.

3.3. Specify Start of Content

We can also specify the line number in the file from where we want to start viewing the content:

more +10 pom.xml

This will cause the output to start from the 10th line.

3.4. Start From the First Occurrence of a Text

It's possible to search for a particular text in the file and start viewing the file from that point:

more +/slf4j pom.xml

The above command will output the file contents from the first occurrence of the text “slf4j” in the file.

Apart from the above, the more command provides a few other options. Let's briefly look at these:

  • -d  this option is used to help the user navigate; it'll prompt the user with the message “[Press space to continue, ‘q' to quit.]”; it'll also display “[Press ‘h' for instructions.]” when an illegal key is pressed
  • -l   the more command usually treats ^L (form feed) as a special character and will pause after any line that contains a form feed; the -l option will prevent this behavior
  • -f   this option stops the wrapping of long lines
  • -p  clears the screen and then displays the text
  • -c  displays the pages on the same area by overlapping the previously displayed text
  • -u  suppresses underlining
  • -s  squeezes multiple blank lines into one

4. The less command 

Now, let's move to the less command. The less command is similar to the more command but provides extensive features. One important one is that it allows backward as well as forward movement in the file, even with pipes.

Also, since it does not read the entire file before starting, it starts up faster compared to text editors — especially when we're viewing large files.

To view our pom.xml file, we'll simply replace more with less:

less pom.xml

This should show us the first page of the file with a prompt at the end:

Note how the file name is displayed at the prompt.

Unlike more, if the file content fits the screen, less will still display the prompt. To override this, we have to specify the -F option.

Like cat, it's possible to number the lines in the file. For this, we have to specify the -N or --LINE-NUMBERS option.

Just like more, we can execute a number of commands while viewing the contents. There are too many to list here, so let's look at the most common ones.

4.1. Moving Around the File

The keys Space, Enter, b, and q work the same way as with more. Apart from this, we can use the arrow keys to move horizontally and vertically. As an alternative to the down arrow, we can use j to move one line forward. Similarly, k can be used to move one line backward.

We can prefix the above keys with a number to override the default movement. For example, 5 followed by j will take us 5 lines forward.

While scrolling through large files, it's useful to be able to go back to the start of the file, or to the end of the file, quickly. The g key will take us to the start of the file, and the G key will take us to the end of the file.

4.2. Searching for Text

The less command's especially useful for viewing large log files. And most times, the reason we're viewing a log file is to search for an error or look for a log statement.

So, to search for a particular text, we'll use /<pattern>. Here, <pattern> is the text we are searching for and can be a regular expression.

Subsequently, we can use the n key to move to the next occurrence of the pattern, and the N key to move to the previous occurrence of the pattern.

This command will search for the pattern forward in the file. But, sometimes we might want to go to the end of the file and search backward for the latest occurrence of the text. The G key will take us to the end of the file. Then, we can use ?<pattern> to search for the pattern backward in the file.

Now, it's also possible to view only the lines in the file that match the pattern. This way, we'll have less text to go through. We can do this by using &<pattern>.

4.3. Monitoring the File

One interesting feature that less provides is the ability to monitor files. So, every time the content of the file changes, we'll be able to view the changes.

We can achieve this by using the F key. It'll take us forward and keep trying to read when the end of file is reached. This behavior is similar to the “tail -f” command.

5. Conclusion

In this article, we saw how to view files using the cat, more, and less commands. As for less, we only covered the most commonly used features. But, less provides a lot more.

We can use “less -h” or the h key, while in the viewer, to list all the features provided.


Differences Between more, less, and most in Linux

1. Overview

When viewing file contents in Linux, we may benefit from some interactive features to help us. We might need to see statistics of a file that we're viewing, mark a certain line then return to it, or view the contents of a file while it's being updated.

In this tutorial, we'll briefly discuss the usage and differences of three terminal pagers that are used in Linux.

Terminal pagers are used to view files page by page, and/or line by line. We're going to explore three of them: more, less, and most. All of them have similar features such as viewing multiple files simultaneously, but each one has a prominent feature or advantage that might make us consider using it.

2. Usage

We can use all of the tools by passing one or more file names:

more file1.txt file2 file3
less file1.txt file2 file3
most file1.txt file2 file3

To exit any of the tools, we can press q or ctrl+c.

We can also pipe the output of another command as an input:

history | less

3. Availability

The more tool is available on most Linux and Unix-like operating systems. less is also widely available, but some minimal distributions, such as Alpine Linux, don't include it by default. On the other hand, most is usually not installed by default.

We can install most using a package manager. For example, on Ubuntu, we can install it using:

apt install most

4. more

4.1. Using more

more is one of the oldest terminal pagers in the UNIX ecosystem. Originally, more could only scroll down, but now we can use it to scroll up one screen-full at a time, and scroll down either one line or one screen-full:

more filename.txt

An example output would be:

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Curabitur sit amet 

--More--(1%)

On its status bar, more shows the percentage of the file read. It automatically closes when it reaches the end of the file without having to press a button.

4.2. Interactive Commands

more has many interactive commands we can invoke by keystrokes:

  • space – go to the next page in accordance with the terminal's size
  • b – go back one page
  • enter – scroll down one line
  • = – display the current line number
  • :v – start up the vi text editor at the current line

5. less

One of the reasons why less was introduced was to allow backward movement line by line. It has a lot of commands that are similar to the vi text editor's commands, and it supports horizontal scrolling, live monitoring, and more.

5.1. Support of File Formats

less has support for different file formats. For example, if we try to read a png, jpeg, or jpg file with more, it would just print its binary data, whereas less would print its metadata:

less picture.jpg

picture.jpg JPG 743x533 743x533+0+0 8-bit sRGB 45.6KB

less supports other file formats such as jar, war, zip, pdf, tar.gz, gif, png, tiff, tif, and rar.

5.2. Marking

While reading a large file, we may want to set bookmarks in certain locations to be able to return to them.

With less, we can mark a certain line by pressing m followed by another character, for example, A. We can mark another line by pressing m followed by another character, such as B. Then, after we have scrolled elsewhere in the file, we can return to the marked lines by pressing the apostrophe key (') followed by the character that we used for marking.

In this example, we can switch between our bookmarks by typing 'A and 'B.

5.3. Monitoring

Let's say we want to watch the content of a log file while it's being updated, but we don't want to have to re-run less on it over and over. We can switch to seeing the updating content of the file by pressing Shift+F keys, or by executing the command with the addition of the +F flag:

less +F /var/log/syslog

6. most

most allows us to view multiple files simultaneously and switch between them. It's very useful for viewing large data sets because most does not wrap lines that have more characters than the terminal page. Instead, it truncates them and offers column-by-column horizontal scrolling.

6.1. Multiple Windows

We can display multiple files and switch between them by passing files as arguments to most:

most text.txt file.txt

-- MOST: text.txt                                   (18,4) 5%

By default, while reading a file, the status bar displays the file name, the percentage that we've viewed so far, the current line number, and current position horizontally since we can scroll left and right by pressing left and right keys.

We can switch between files by pressing :n. Then, we can use up/down arrow keys to change file names, and press enter to switch to the selected file:

-- MOST: text.txt                                   (18,4) 5%
Next File (1): file.txt

We may also want to read one file in binary mode and another file in non-binary mode. most allows us to view different files in different modes. For example, we can toggle options while viewing a file by pressing : followed by o, then we can toggle the binary mode by pressing b.

7. Comparing the Tools

If we want a simple terminal pager that is widely available, then we would choose more.

But, if we want to use vi text editor's commands, and prefer a tool that has more sophisticated scrolling both horizontally and vertically, then less is a good option.

more and less have the option to view multiple files at once. more allows us to view them as a single file separated by lines, and less allows us to switch between them. However, both more and less display all the opened files with the same options.

Finally, if we want to open multiple files with different options, or see more information on the status bar while viewing a file, then most is a good choice, but may require installing in our Linux environment.

8. Conclusion

In this article, we learned about the features of the most common Linux terminal pagers — more, less, and most.

Then we looked at how the tools differ and how to choose between them.

Overflow and Underflow in Java

1. Introduction

In this tutorial, we'll look at the overflow and underflow of numerical data types in Java.

We won't dive deeper into the more theoretical aspects — we'll just focus on when it happens in Java.

First, we'll look at integer data types, then at floating-point data types. For both, we'll also see how we can detect when over- or underflow occurs.

2. Overflow and Underflow

Simply put, overflow and underflow happen when we assign a value that is out of range of the declared data type of the variable.

If the (absolute) value is too big, we call it overflow; if the value is too small, we call it underflow.

Let's look at an example where we attempt to assign the value 10^1000 (a 1 with 1000 zeros) to a variable of type int or double. The value is too big for an int or double variable in Java, and there will be an overflow.

As a second example, let's say we attempt to assign the value 10^-1000 (which is very close to 0) to a variable of type double. This value is too small for a double variable in Java, and there will be an underflow.

Let's see what happens in Java in these cases in more detail.

3. Integer Data Types

The integer data types in Java are byte (8 bits), short (16 bits), int (32 bits), and long (64 bits).

Here, we'll focus on the int data type. The same behavior applies to the other data types, except that the minimum and maximum values differ.

An integer of type int in Java can be negative or positive, which means with its 32 bits, we can assign values between -2^31 (-2147483648) and 2^31-1 (2147483647).

The wrapper class Integer defines two constants that hold these values: Integer.MIN_VALUE and Integer.MAX_VALUE.
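
We can print these constants to confirm the bounds:

System.out.println(Integer.MIN_VALUE); // -2147483648
System.out.println(Integer.MAX_VALUE); // 2147483647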

3.1. Example

What will happen if we define a variable m of type int and attempt to assign a value that's too big (e.g., 2147483648 = MAX_VALUE + 1)?

A possible outcome of this assignment is that the value of m will be undefined or that there will be an error.

Both are valid outcomes; however, in Java, the value of m will be -2147483648 (the minimum value). On the other hand, if we attempt to assign a value of -2147483649 (= MIN_VALUE – 1), m will be 2147483647 (the maximum value). This behavior is called integer-wraparound.

Let's consider the following code snippet to illustrate this behavior better:

int value = Integer.MAX_VALUE-1;
for(int i = 0; i < 4; i++, value++) {
    System.out.println(value);
}

We'll get the following output, which demonstrates the overflow:

2147483646
2147483647
-2147483648
-2147483647
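
The wraparound also happens in the other direction. Decrementing past the minimum brings us back to the maximum value:

int value = Integer.MIN_VALUE;
value--;
System.out.println(value); // prints 2147483647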

4. Handling Underflow and Overflow of Integer Data Types

Java does not throw an exception when an overflow occurs; that is why it can be hard to find errors resulting from an overflow. Nor can we directly access the overflow flag, which is available in most CPUs.

However, there are various ways to handle a possible overflow. Let's look at several of these possibilities.

4.1. Use a Different Data Type

If we want to allow values larger than 2147483647 (or smaller than -2147483648), we can simply use the long data type or a BigInteger instead.

Though variables of type long can also overflow, the minimum and maximum values are much larger and are probably sufficient in most situations.

The value range of BigInteger is not restricted, except by the amount of memory available to the JVM.

Let's see how to rewrite our above example with BigInteger:

BigInteger largeValue = new BigInteger(Integer.MAX_VALUE + "");
for(int i = 0; i < 4; i++) {
    System.out.println(largeValue);
    largeValue = largeValue.add(BigInteger.ONE);
}

We'll see the following output:

2147483647
2147483648
2147483649
2147483650

As we can see in the output, there's no overflow here. Our article BigDecimal and BigInteger in Java covers BigInteger in more detail.

4.2. Throw an Exception

There are situations where we don't want to allow larger values, nor do we want an overflow to occur, and we want to throw an exception instead.

As of Java 8, we can use the methods for exact arithmetic operations. Let's look at an example first:

int value = Integer.MAX_VALUE-1;
for(int i = 0; i < 4; i++) {
    System.out.println(value);
    value = Math.addExact(value, 1);
}

The static method addExact() performs a normal addition, but throws an exception if the operation results in an overflow or underflow:

2147483646
2147483647
Exception in thread "main" java.lang.ArithmeticException: integer overflow
	at java.lang.Math.addExact(Math.java:790)
	at baeldung.underoverflow.OverUnderflow.main(OverUnderflow.java:115)

In addition to addExact(), the Math package in Java 8 provides corresponding exact methods for all arithmetic operations. See the Java documentation for a list of all these methods.

Furthermore, there are exact conversion methods, which throw an exception if there is an overflow during the conversion to another data type.

For the conversion from a long to an int:

public static int toIntExact(long a)

And for the conversion from BigInteger to an int or long:

BigInteger largeValue = BigInteger.TEN;
long longValue = largeValue.longValueExact();
int intValue = largeValue.intValueExact();
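
As a quick illustration, converting a value just above the int range with toIntExact() throws an exception:

long tooBig = Integer.MAX_VALUE + 1L; // 2147483648
int intResult = Math.toIntExact(tooBig); // throws java.lang.ArithmeticException: integer overflow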

4.3. Before Java 8

The exact arithmetic methods were added to Java 8. If we use an earlier version, we can simply create these methods ourselves. One option to do so is to implement the same method as in Java 8:

public static int addExact(int x, int y) {
    int r = x + y;
    if (((x ^ r) & (y ^ r)) < 0) {
        throw new ArithmeticException("int overflow");
    }
    return r;
}

5. Non-Integer Data Types

The non-integer types float and double do not behave in the same way as the integer data types when it comes to arithmetic operations.

One difference is that arithmetic operations on floating-point numbers can result in a NaN. We have a dedicated article on NaN in Java, so we won't look further into that in this article. Furthermore, there are no exact arithmetic methods such as addExact or multiplyExact for non-integer types in the Math package.

Java follows the IEEE Standard for Floating-Point Arithmetic (IEEE 754) for its float and double data types. This standard is the basis for the way that Java handles over- and underflow of floating-point numbers.

In the below sections, we'll focus on the over- and underflow of the double data type and what we can do to handle the situations in which they occur.

5.1. Overflow

As for the integer data types, we might expect that:

assertTrue(Double.MAX_VALUE + 1 == Double.MIN_VALUE);

However, that is not the case for floating-point variables. The following is true:

assertTrue(Double.MAX_VALUE + 1 == Double.MAX_VALUE);

This is because a double value has only a limited number of significant bits. If we increase the value of a large double value by only one, we do not change any of the significant bits. Therefore, the value stays the same.

If we increase the value of our variable such that we increase one of the significant bits of the variable, the variable will have the value INFINITY:

assertTrue(Double.MAX_VALUE * 2 == Double.POSITIVE_INFINITY);

and NEGATIVE_INFINITY for negative values:

assertTrue(Double.MAX_VALUE * -2 == Double.NEGATIVE_INFINITY);

We can see that, unlike for integers, there's no wraparound, but two different possible outcomes of the overflow: the value stays the same, or we get one of the special values, POSITIVE_INFINITY or NEGATIVE_INFINITY.

5.2. Underflow

There are two constants defined for the minimum values of a double value: MIN_VALUE (4.9e-324) and MIN_NORMAL (2.2250738585072014E-308).

IEEE Standard for Floating-Point Arithmetic (IEEE 754) explains the details for the difference between those in more detail.

Let's focus on why we need a minimum value for floating-point numbers at all.

A double value cannot be arbitrarily small as we only have a limited number of bits to represent the value.

The chapter about Types, Values, and Variables in the Java SE language specification describes how floating-point types are represented. The minimum exponent for the binary representation of a double is given as -1074. That means the smallest positive value a double can have is Math.pow(2, -1074), which is equal to 4.9e-324.

As a consequence, the precision of a double in Java does not support values between 0 and 4.9e-324, or between -4.9e-324 and 0 for negative values.

So what happens if we attempt to assign a too-small value to a variable of type double? Let's look at an example:

for(int i = 1073; i <= 1076; i++) {
    System.out.println("2^-" + i + " = " + Math.pow(2, -i));
}

With output:

2^-1073 = 1.0E-323
2^-1074 = 4.9E-324
2^-1075 = 0.0
2^-1076 = 0.0

We see that if we assign a value that's too small, we get an underflow, and the resulting value is 0.0 (positive zero).
Similarly, for negative values, an underflow will result in a value of -0.0 (negative zero).

6. Detecting Underflow and Overflow of Floating-Point Data Types

As overflow will result in either positive or negative infinity, and underflow in a positive or negative zero, we do not need exact arithmetic methods like for the integer data types. Instead, we can check for these special constants to detect over- and underflow.

If we want to throw an exception in this situation, we can implement a helper method. Let's look at how that can look for the exponentiation:

public static double powExact(double base, double exponent) {
    if(base == 0.0) {
        return 0.0;
    }
    
    double result = Math.pow(base, exponent);
    
    if(result == Double.POSITIVE_INFINITY ) {
        throw new ArithmeticException("Double overflow resulting in POSITIVE_INFINITY");
    } else if(result == Double.NEGATIVE_INFINITY) {
        throw new ArithmeticException("Double overflow resulting in NEGATIVE_INFINITY");
    } else if(Double.compare(-0.0f, result) == 0) {
        throw new ArithmeticException("Double overflow resulting in negative zero");
    } else if(Double.compare(+0.0f, result) == 0) {
        throw new ArithmeticException("Double overflow resulting in positive zero");
    }

    return result;
}

In this method, we need to use the method Double.compare(). The normal comparison operators (< and >) do not distinguish between positive and negative zero.

7. Positive and Negative Zero

Finally, let's look at an example that shows why we need to be careful when working with positive and negative zero and infinity.

Let's define a couple of variables to demonstrate:

double a = +0f;
double b = -0f;

Because positive and negative 0 are considered equal:

assertTrue(a == b);

Whereas positive and negative infinity are considered different:

assertTrue(1/a == Double.POSITIVE_INFINITY);
assertTrue(1/b == Double.NEGATIVE_INFINITY);

However, the following assertion is correct:

assertTrue(1/a != 1/b);

Which seems to be a contradiction to our first assertion.

8. Conclusion

In this article, we saw what is over- and underflow, how it can occur in Java, and what is the difference between the integer and floating-point data types.

We also saw how we could detect over- and underflow during program execution.

As usual, the complete source code is available over on GitHub.

Hibernate @NotNull vs @Column(nullable = false)

1. Introduction

At first glance, it may seem like both the @NotNull and @Column(nullable = false) annotations serve the same purpose and can be used interchangeably. However, as we'll soon see, this isn't entirely true.

Even though, when used on the JPA entity, both of them essentially prevent storing null values in the underlying database, there are significant differences between these two approaches.

In this quick tutorial, we'll compare the @NotNull and @Column(nullable = false) constraints.

2. Dependencies

For all the presented examples, we'll use a simple Spring Boot application.

Here's a relevant section of the pom.xml file that shows needed dependencies:

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-validation</artifactId>
    </dependency>
    <dependency>
        <groupId>com.h2database</groupId>
        <artifactId>h2</artifactId>
    </dependency>
</dependencies>

2.1. Sample Entity

Let's also define a very simple entity that we'll be using throughout this tutorial:

@Entity
public class Item {

    @Id
    @GeneratedValue
    private Long id;

    private BigDecimal price;
}

3. The @NotNull Annotation

The @NotNull annotation is defined in the Bean Validation specification. This means its usage isn't limited only to the entities. On the contrary, we can use @NotNull on any other bean as well.

Let's stick with our use case though and add the @NotNull annotation to the Item‘s price field:

@Entity
public class Item {

    @Id
    @GeneratedValue
    private Long id;

    @NotNull
    private BigDecimal price;
}
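
The test below autowires an ItemRepository, which the article doesn't show. A minimal sketch, assuming a standard Spring Data JPA interface, could look like this:

public interface ItemRepository extends JpaRepository<Item, Long> {
}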

Now, let's try to persist an item with a null price:

@SpringBootTest
public class ItemIntegrationTest {

    @Autowired
    private ItemRepository itemRepository;

    @Test
    public void shouldNotAllowToPersistNullItemsPrice() {
        itemRepository.save(new Item());
    }
}

And let's see Hibernate's output:

2019-11-14 12:31:15.070 ERROR 10980 --- [ main] o.h.i.ExceptionMapperStandardImpl : 
HHH000346: Error during managed flush [Validation failed for classes 
[com.baeldung.h2db.springboot.models.Item] during persist time for groups 
[javax.validation.groups.Default,] List of constraint violations:[
ConstraintViolationImpl{interpolatedMessage='must not be null', propertyPath=price, rootBeanClass=class 
com.baeldung.h2db.springboot.models.Item, 
messageTemplate='{javax.validation.constraints.NotNull.message}'}]]
 
(...)
 
Caused by: javax.validation.ConstraintViolationException: Validation failed for classes 
[com.baeldung.h2db.springboot.models.Item] during persist time for groups 
[javax.validation.groups.Default,] List of constraint violations:[
ConstraintViolationImpl{interpolatedMessage='must not be null', propertyPath=price, rootBeanClass=class 
com.baeldung.h2db.springboot.models.Item, 
messageTemplate='{javax.validation.constraints.NotNull.message}'}]

As we can see, in this case, our system threw javax.validation.ConstraintViolationException.

It's important to notice that Hibernate didn't trigger the SQL insert statement. Consequently, invalid data wasn't saved to the database.

This is because the pre-persist entity lifecycle event triggered the bean validation just before sending the query to the database.

3.1. Schema Generation

In the previous section, we've presented how the @NotNull validation works.

Let's now find out what happens if we let Hibernate generate the database schema for us.

For that reason, we'll set a couple of properties in our application.properties file:

spring.jpa.hibernate.ddl-auto=create-drop
spring.jpa.show-sql=true

If we now start our application, we'll see the DDL statement:

create table item (
   id bigint not null,
    price decimal(19,2) not null,
    primary key (id)
)

Surprisingly, Hibernate automatically adds the not null constraint to the price column definition.

How's that possible?

As it turns out, out of the box, Hibernate translates the bean validation annotations applied to the entities into the DDL schema metadata.

This is pretty convenient and makes a lot of sense. If we apply @NotNull to the entity, we most probably want to make the corresponding database column not null as well.

However, if, for any reason, we want to disable this Hibernate feature, all we need to do is to set hibernate.validator.apply_to_ddl property to false.

In order to test this let's update our application.properties:

spring.jpa.hibernate.ddl-auto=create-drop
spring.jpa.show-sql=true
spring.jpa.properties.hibernate.validator.apply_to_ddl=false

Let's run the application and see the DDL statement:

create table item (
   id bigint not null,
    price decimal(19,2),
    primary key (id)
)

As expected, this time Hibernate didn't add the not null constraint to the price column.

4. The @Column(nullable = false) Annotation

The @Column annotation is defined as a part of the Java Persistence API specification.

It's used mainly in the DDL schema metadata generation. This means that if we let Hibernate generate the database schema automatically, it applies the not null constraint to the particular database column.

Let's update our Item entity with the @Column(nullable = false) and see how this works in action:

@Entity
public class Item {

    @Id
    @GeneratedValue
    private Long id;

    @Column(nullable = false)
    private BigDecimal price;
}

We can now try to persist a null price value:

@SpringBootTest
public class ItemIntegrationTest {

    @Autowired
    private ItemRepository itemRepository;

    @Test
    public void shouldNotAllowToPersistNullItemsPrice() {
        itemRepository.save(new Item());
    }
}

Here's the snippet of Hibernate's output:

Hibernate: 
    
    create table item (
       id bigint not null,
        price decimal(19,2) not null,
        primary key (id)
    )

(...)

Hibernate: 
    insert 
    into
        item
        (price, id) 
    values
        (?, ?)
2019-11-14 13:23:03.000  WARN 14580 --- [main] o.h.engine.jdbc.spi.SqlExceptionHelper   : 
SQL Error: 23502, SQLState: 23502
2019-11-14 13:23:03.000 ERROR 14580 --- [main] o.h.engine.jdbc.spi.SqlExceptionHelper   : 
NULL not allowed for column "PRICE"

First of all, we can notice that Hibernate generated the price column with the not null constraint as we anticipated.

Additionally, it was able to create the SQL insert query and pass it through. As a result, it's the underlying database that triggered the error.

4.1. Validation

Almost all the sources emphasize that @Column(nullable = false) is used only for schema DDL generation.

Hibernate, however, is able to perform the validation of the entity against the possible null values, even if the corresponding field is annotated only with @Column(nullable = false).

In order to activate this Hibernate feature, we need to explicitly set the hibernate.check_nullability property to true:

spring.jpa.show-sql=true
spring.jpa.properties.hibernate.check_nullability=true

Let's now execute our test case again and examine the output:

org.springframework.dao.DataIntegrityViolationException: 
not-null property references a null or transient value : com.baeldung.h2db.springboot.models.Item.price; 
nested exception is org.hibernate.PropertyValueException: 
not-null property references a null or transient value : com.baeldung.h2db.springboot.models.Item.price

This time, our test case threw the org.hibernate.PropertyValueException.

It's crucial to notice that, in this case, Hibernate didn't send the insert SQL query to the database.

5. Summary

In this article, we've described how the @NotNull and @Column(nullable = false) annotations work.

Even though both of them prevent us from storing null values in the database, they take different approaches.

As a rule of thumb, we should prefer the @NotNull annotation over the @Column(nullable = false) annotation. This way, we make sure the validation takes place before Hibernate sends any insert or update SQL queries to the database.

Also, it's usually better to rely on the standard rules defined in the Bean Validation, rather than letting the database handle the validation logic.

But, even if we let Hibernate generate the database schema, it'll translate the @NotNull annotation into a database constraint. We just have to make sure that the hibernate.validator.apply_to_ddl property hasn't been set to false.

As usual, all the code examples are available over on GitHub.

Pattern Matching in Scala

1. Overview

Pattern matching is a powerful feature of the Scala language. It allows for more concise and readable code while at the same time providing the ability to match elements against complex patterns.

In this tutorial, we'll discover how to use pattern matching in general and how we can benefit from it.

2. Pattern Matching

In contrast with “exact matching” as we can have in Java's switch statements, pattern matching allows matching a pattern instead of an exact value.

In Java for example, if input.equals(caseClause) returns false, we directly evaluate the next case clause. However, this is not how it always works in Scala.

Let's discover more in detail how pattern matching works.

2.1. Syntax

The match expressions consist of multiple parts:

  1. The value we'll use to match the patterns, called a candidate
  2. The keyword match
  3. Multiple case clauses consisting of the case keyword, the pattern, an arrow symbol, and the code to execute when the pattern matches
  4. A default clause for when no other pattern matches. The default clause is recognizable because it consists of the underscore character (_) and is the last of the case clauses

Let's take a look at a simple example to illustrate those parts:

def patternMatching(candidate: String): Int = {
  candidate match { 
    case "One" => 1 
    case "Two" => 2 
    case _ => -1 
  }
}
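
Calling this method, the candidate is evaluated against each clause in order:

patternMatching("Two")   // 2
patternMatching("Three") // -1, handled by the default clause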

3. Patterns in Match Expression

3.1. Case Classes

Case classes help us use the power of inheritance to perform pattern matching. The case classes extend a common abstract class. The match expression then evaluates a reference of the abstract class against each pattern expressed by each case class.

Let's begin by writing our classes:

abstract class Animal

case class Mammal(name: String, fromSea: Boolean) extends Animal

case class Bird(name: String) extends Animal

case class Fish(name: String) extends Animal

Let's now see how we can apply pattern matching to case classes:

def caseClassesPatternMatching(animal: Animal): String = {
  animal match {
    case Mammal(name, fromSea) => s"I'm a $name, a kind of mammal. Am I from the sea? $fromSea"
    case Bird(name) => s"I'm a $name, a kind of bird"
    case _ => "I'm an unknown animal"
  }
}

Another name for this kind of pattern matching is Constructor matching, meaning that the constructor is used in this case to make the match possible.

For instance, we can notice how the Mammal pattern matches exactly the constructor from the case class defined above.
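
A quick usage example:

caseClassesPatternMatching(Bird("Parrot")) // "I'm a Parrot, a kind of bird"
caseClassesPatternMatching(Fish("Tuna"))   // "I'm an unknown animal"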

3.2. Constants

Like most languages, Scala uses constants to define numbers or boolean values. Patterns can consist of constants.

Let's see how we can use constants in a match expression:

def constantsPatternMatching(constant: Any): String = {
  constant match {
    case 0 => "I'm equal to zero"
    case 4.5d => "I'm a double"
    case false => "I'm the contrary of true"
    case _ => s"I'm unknown and equal to $constant"
  }
}

3.3. Sequences

Arrays, Lists, and Vectors consist of elements. These sequences and their elements are also used to form patterns.

Moreover, we usually want to use wildcards to express the dynamic parts of the pattern:

  • To match a single element, we'll use the underscore wildcard _. This is not to be confused with the default clause, which also uses the underscore character. An alias can also be used to represent the element
  • On the other hand, to match an unknown number of elements (zero, one, or more), we'll use the star wildcard *

Let's see what a sequence looks like when used as a pattern:

def sequencesPatternMatching(sequence: Any): String = {
  sequence match {
    case List(singleElement) => s"I'm a list with one element: $singleElement"
    case List(_, _*) => s"I'm a list with one or multiple elements: $sequence"
    case Vector(1, 2, _*) => s"I'm a vector: $sequence"
    case _ => s"I'm an unrecognized sequence. My value: $sequence"
  }
}

In the first case clause, we've used an alias, singleElement, to refer to the single element of the List.

In the other case clauses, we're simply ignoring the values by using the underscore character. Aside from being used in default case clauses, the underscore character can also be used when a particular value is ignored in the match expression.

3.4. Tuples

Tuples are objects containing a limited number of sub-objects. We can imagine those as collections of mixed elements with a limited size.

Next, let's look at an example of how to use tuples in pattern matching:

def tuplesPatternMatching(tuple: Any): String = {
  tuple match {
    case (first, second) => s"I'm a tuple with two elements: $first & $second"
    case (first, second, third) => s"I'm a tuple with three elements: $first & $second & $third"
    case _ => s"Unrecognized pattern. My value: $tuple"
  }
}

In this example, we've extracted the elements using the names defined in the tuple patterns. If we want to use the first element of the tuple, we'll define a local variable in the tuple pattern. In the example above we can recognize those variables as first, second, and third.

3.5. Typed Patterns

Scala is a typed language, meaning that each object has a static type that cannot be changed. For instance, a Boolean object can only contain a boolean expression.

Scala makes it easy to match objects against type patterns, as shown below:

def typedPatternMatching(any: Any): String = {
  any match {
    case string: String => s"I'm a string. My value: $string"
    case integer: Int => s"I'm an integer. My value: $integer"
    case _ => s"I'm from an unknown type. My value: $any"
  }
}

3.6. Regex Patterns

We already know how useful regular expressions are when working with strings of characters. In Scala, there's good news — we can also use regular expressions when matching objects in our match expressions:

def regexPatterns(toMatch: String): String = {
  val numeric = """([0-9]+)""".r
  val alphabetic = """([a-zA-Z]+)""".r
  val alphanumeric = """([a-zA-Z0-9]+)""".r

  toMatch match {
    case numeric(value) => s"I'm a numeric with value $value"
    case alphabetic(value) => s"I'm an alphabetic with value $value"
    case alphanumeric(value) => s"I'm an alphanumeric with value $value"
    case _ => s"I contain other characters than alphanumerics. My value $toMatch"
  }
}

3.7. Options: Some<T> and None

In functional languages like Scala, options are structures that either contain a value or not. An Option in Scala can easily be compared to Java's Optional class.

Pattern matching is possible using Option objects. In this case, we'll have two possible case clauses:

  • Some<T> — containing a value of type T
  • None — not containing anything

Let's see how we'll use those in our match expressions:

def optionsPatternMatching(option: Option[String]): String = {
  option match {
    case Some(value) => s"I'm not an empty option. Value $value"
    case None => "I'm an empty option"
  }
}

4. Pattern Guards

We've already seen how powerful pattern matching can be. We can build our patterns in so many different ways. But sometimes, we want to make sure a specific condition is fulfilled in addition to our pattern matching to execute the code inside of the case clause.

We can use pattern guards to achieve this behavior. Pattern guards are boolean expressions used together on the same level as the case clause.

Let's see how convenient it can be to use a pattern guard:

def patternGuards(toMatch: Any, maxLength: Int): String = {
  toMatch match {
    case list: List[Any] if (list.size <= maxLength) => "List is of acceptable size"
    case list: List[Any] => "List has not an acceptable size"
    case string: String if (string.length <= maxLength) => "String is of acceptable size"
    case string: String => "String has not an acceptable size"
    case _ => "Input is neither a List nor a String"
  }
}

In the example snippet of code, we notice two patterns each for both String and List objects. The difference between each lies in the fact that one of the patterns also checks on the length of the object before entering the case clause.

For instance, if a List contains 5 objects but the maximal length is 6, then it will enter the first case clause and return. On the other hand, if the maximal length were to be 4, then it would execute the code from the second clause.
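
In code, these two scenarios look like this:

patternGuards(List(1, 2, 3, 4, 5), 6) // "List is of acceptable size"
patternGuards(List(1, 2, 3, 4, 5), 4) // "List has not an acceptable size"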

5. Sealed Classes

We sometimes want to use sealed classes together with pattern matching. A sealed class is a superclass that is aware of every single class extending it. This behavior is possible using the same single file to express the sealed class and all of its subclasses.

This feature is particularly useful when we want to avoid having a default behavior in our match expression.

Let's begin by writing our sealed class and its child classes:

sealed abstract class CardSuit

case class Spike() extends CardSuit

case class Diamond() extends CardSuit

case class Heart() extends CardSuit

case class Club() extends CardSuit

After that, let's use pattern matching with these classes:

def sealedClass(cardSuit: CardSuit): String = {
  cardSuit match {
    case Spike() => "Card is spike"
    case Club() => "Card is club"
    case Heart() => "Card is heart"
    case Diamond() => "Card is diamond"
  }
}

Here, the usage of a default case clause is not mandatory as we have a case for each subclass.

6. Extractors

Extractor objects are objects containing a method called unapply. This method is executed when matching against a pattern is successful.

Let's use this in an example. Suppose we have a Person containing a full name, and when the Person is matched against a pattern, instead of using the full name, we just want their initials.

Let's see how extractors can be useful when implementing this requirement.

Firstly, we'll create the Person object:

object Person {
  def apply(fullName: String) = fullName

  def unapply(fullName: String): Option[String] = {
    if (!fullName.isEmpty)
      Some(fullName.replaceAll("(?<=\\w)(\\w+)", "."))
    else
      None
  }
}

Now that the Person object exists, we'll be able to use it in our match expression and make use of the result of the unapply method:

def extractors(person: Any): String = {
  person match {
    case Person(initials) => s"My initials are $initials"
    case _ => "Could not extract initials"
  }
}

If the person is named John Smith, in this case, the returned String would be ‘My initials are J. S.‘.

7. Other Usages

7.1. Closures

A closure can also use pattern matching.

Let's see how closure pattern matching looks in practice:

def closuresPatternMatching(list: List[Any]): List[Any] = {
  list.collect { case i: Int if (i < 10) => i }
}

When invoked, this piece of code will:

  1. Filter out the elements that are not an Int
  2. Keep only the Ints that are less than 10
  3. Collect the remaining elements from the List in a new List, as the example below shows
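
For example:

closuresPatternMatching(List(1, "a", 42, 7)) // List(1, 7)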

7.2. Catch Blocks

We're also able to use pattern matching to handle the exceptions thrown in try-catch blocks.

Let's see this in action:

def catchBlocksPatternMatching(exception: Exception): String = {
  try {
    throw exception
  } catch {
    case ex: IllegalArgumentException => "It's an IllegalArgumentException"
    case ex: RuntimeException => "It's a RuntimeException"
    case _ => "It's an unknown kind of exception"
  }
}

8. Conclusion

In this tutorial, we've discovered how to use Scala's powerful pattern matching in many different ways.

As usual, all the source code used in this tutorial can be found over on GitHub.

Checking if Two Java Dates are On the Same Day

1. Overview

In this quick tutorial, we'll learn about several different ways to check if two java.util.Date objects have the same day.

We'll start by considering solutions using core Java – namely, Java 8 features – before looking at a couple of pre-Java 8 alternatives.

To finish, we'll also look at some external libraries — Apache Commons Lang, Joda-Time, and Date4J.

2. Core Java

The class Date represents a specific instant in time, with millisecond precision. To find out if two Date objects contain the same day, we need to check if the Year-Month-Day is the same for both objects and discard the time aspect.

2.1. Using LocalDate

With the new Date-Time API of Java 8, we can use the LocalDate object. This is an immutable object representing a date without a time.

Let's see how we can check if two Date objects have the same day using this class:

public static boolean isSameDay(Date date1, Date date2) {
    LocalDate localDate1 = date1.toInstant()
      .atZone(ZoneId.systemDefault())
      .toLocalDate();
    LocalDate localDate2 = date2.toInstant()
      .atZone(ZoneId.systemDefault())
      .toLocalDate();
    return localDate1.isEqual(localDate2);
}

In this example, we've converted both the Date objects to LocalDate using the default timezone. Once converted, we just need to check if the LocalDate objects are equal using the isEqual method.

Consequently, using this approach, we'll be able to determine if the two Date objects contain the same day.

2.2. Using SimpleDateFormat

Since early versions of Java, we've been able to use the SimpleDateFormat class to convert between Date and String object representations. This class comes with support for conversion using many patterns. In our case, we will use the pattern “yyyyMMdd”.

Using this, we'll format the Date, convert it to a String object, and then compare them using the standard equals method:

public static boolean isSameDay(Date date1, Date date2) {
    SimpleDateFormat fmt = new SimpleDateFormat("yyyyMMdd");
    return fmt.format(date1).equals(fmt.format(date2));
}

2.3. Using Calendar

The Calendar class provides methods to get the values of different date-time units for a particular instant of time.

Firstly, we need to create a Calendar instance and set the Calendar objects' time using each of the provided dates. Then we can query and compare the Year-Month-Day attributes individually to figure out if the Date objects have the same day:

public static boolean isSameDay(Date date1, Date date2) {
    Calendar calendar1 = Calendar.getInstance();
    calendar1.setTime(date1);
    Calendar calendar2 = Calendar.getInstance();
    calendar2.setTime(date2);
    return calendar1.get(Calendar.YEAR) == calendar2.get(Calendar.YEAR)
      && calendar1.get(Calendar.MONTH) == calendar2.get(Calendar.MONTH)
      && calendar1.get(Calendar.DAY_OF_MONTH) == calendar2.get(Calendar.DAY_OF_MONTH);
}

3. External Libraries

Now that we have a good understanding of how to compare Date objects using the new and old APIs offered by core Java, let's take a look at some external libraries.

3.1. Apache Commons Lang DateUtils

The DateUtils class provides many useful utilities that make it easier to work with the legacy Calendar and Date objects.

The Apache Commons Lang artifact is available from Maven Central:

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
    <version>3.9</version>
</dependency>

Then we can simply use the method isSameDay from DateUtils:

DateUtils.isSameDay(date1, date2);

3.2. Joda-Time Library

An alternative to the core Java Date and Time library is Joda-Time. This widely used library serves as an excellent substitute when working with Date and Time.

The artifact can be found on Maven Central:

<dependency>
    <groupId>joda-time</groupId>
    <artifactId>joda-time</artifactId>
    <version>2.10</version>
</dependency>

In this library, org.joda.time.LocalDate represents a date without time. Hence, we can construct the LocalDate objects from the java.util.Date objects and then compare them:

public static boolean isSameDay(Date date1, Date date2) {
    org.joda.time.LocalDate localDate1 = new org.joda.time.LocalDate(date1);
    org.joda.time.LocalDate localDate2 = new org.joda.time.LocalDate(date2);
    return localDate1.equals(localDate2);
}

3.3. Date4J Library

Date4j also provides a straightforward and simple implementation that we can use.

Likewise, it is also available from Maven Central:

<dependency>
    <groupId>com.darwinsys</groupId>
    <artifactId>hirondelle-date4j</artifactId>
    <version>1.5.1</version>
</dependency>

Using this library, we need to construct the DateTime object from a java.util.Date object. Then we can simply use the isSameDayAs method:

public static boolean isSameDay(Date date1, Date date2) {
    DateTime dateObject1 = DateTime.forInstant(date1.getTime(), TimeZone.getDefault());
    DateTime dateObject2 = DateTime.forInstant(date2.getTime(), TimeZone.getDefault());
    return dateObject1.isSameDayAs(dateObject2);
}

4. Conclusion

In this quick tutorial, we've explored several ways of checking if two java.util.Date objects contain the same day.

As always, the full source code of the article is available over on GitHub.

Intro to Apache Tapestry

1. Overview

Nowadays, from social networking to banking, and from healthcare to government services, all kinds of activities take place online, and these services rely heavily on web applications.

A web application enables users to consume/enjoy the online services provided by a company. At the same time, it acts as an interface to the backend software.

In this introductory tutorial, we'll explore the Apache Tapestry web framework and create a simple web application using the basic features that it provides.

2. Apache Tapestry

Apache Tapestry is a component-based framework for building scalable web applications.

It follows the convention-over-configuration paradigm and uses annotations and naming conventions for configurations.

All the components are simple POJOs. At the same time, they are developed from scratch and have no dependencies on other libraries.

Along with Ajax support, Tapestry also has great exception reporting capabilities. It provides an extensive library of built-in common components as well.

Among other great features, a prominent one is hot reloading of the code. Therefore, using this feature, we can see the changes instantly in the development environment.

3. Setup

Apache Tapestry requires a simple set of tools to create a web application:

  • Java 1.6 or later
  • Build Tool (Maven or Gradle)
  • IDE (Eclipse or IntelliJ)
  • Application Server (Tomcat or Jetty)

In this tutorial, we'll use the combination of Java 8, Maven, Eclipse, and Jetty Server.

To set up the latest Apache Tapestry project, we'll use Maven archetype and follow the instructions provided by the official documentation:

$ mvn archetype:generate -DarchetypeCatalog=http://tapestry.apache.org

Or, if we have an existing project, we can simply add the tapestry-core Maven dependency to the pom.xml:

<dependency>
    <groupId>org.apache.tapestry</groupId>
    <artifactId>tapestry-core</artifactId>
    <version>5.4.5</version>
</dependency>

Once we're ready with the setup, we can start the apache-tapestry application with the following Maven command:

$ mvn jetty:run

By default, the app will be accessible at localhost:8080/apache-tapestry:

4. Project Structure

Let's explore the project layout created by Apache Tapestry:

We can see a Maven-like project structure, along with a few packages based on conventions.

The Java classes are placed in src/main/java and categorized as components, pages, and services.

Likewise, src/main/resources holds our templates (similar to HTML files); these have the .tml extension.

For every Java class placed under components and pages directories, a template file with the same name should be created.

The src/main/webapp directory contains resources like images, stylesheets, and JavaScript files. Similarly, testing files are placed in src/test.

Last, src/site will contain the documentation files.

For a better idea, let's take a look at the project structure opened in Eclipse IDE:

5. Annotations

Let's discuss a few handy annotations provided by Apache Tapestry for day-to-day use. Going forward, we'll use these annotations in our implementations.

5.1. @Inject

The @Inject annotation is available in the org.apache.tapestry5.ioc.annotations package and provides an easy way to inject dependencies in Java classes.

This annotation is quite handy to inject an asset, block, resource, and service.

5.2. @InjectPage

Available in the org.apache.tapestry5.annotations package, the @InjectPage annotation allows us to inject a page into another component. Also, the injected page is always a read-only property.

5.3. @InjectComponent

Similarly, the @InjectComponent annotation allows us to inject a component defined in the template.

5.4. @Log

The @Log annotation is available in the org.apache.tapestry5.annotations package and is handy to enable the DEBUG level logging on any method. It logs method entry and exit, along with parameter values.

5.5. @Property

Available in the org.apache.tapestry5.annotations package, the @Property annotation marks a field as a property. At the same time, it automatically creates getters and setters for the property.

5.6. @Parameter

Similarly, the @Parameter annotation denotes that a field is a component parameter.

6. Page

So, we're all set to explore the basic features of the framework. Let's create a new Home page in our app.

First, we'll define a Java class Home in the pages directory in src/main/java:

public class Home {
}

6.1. Template

Then, we'll create a corresponding Home.tml template in the pages directory under src/main/resources.

A file with the extension .tml (Tapestry Markup Language) is similar to an HTML/XHTML file with XML markup provided by Apache Tapestry.

For instance, let's have a look at the Home.tml template:

<html xmlns:t="http://tapestry.apache.org/schema/tapestry_5_4.xsd">
    <head>
        <title>apache-tapestry Home</title>
    </head>
    <body>
        <h1>Home</h1>
    </body>   
</html>

Voila! Simply by restarting the Jetty server, we can access the Home page at localhost:8080/apache-tapestry/home:

6.2. Property

Let's explore how to render a property on the Home page.

For this, we'll add a property and a getter method in the Home class:

@Property
private String appName = "apache-tapestry";

public Date getCurrentTime() {
    return new Date();
}

To render the appName property on the Home page, we can simply use ${appName}.

Similarly, we can write ${currentTime} to access the getCurrentTime method from the page.
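
For example, here's a minimal snippet we could add to Home.tml to render both expansions inline in the markup:

<p>Application name: ${appName}</p>
<p>Current time: ${currentTime}</p>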

6.3. Localization

Apache Tapestry provides integrated localization support. As per convention, a properties file named after the page keeps the list of all the localized messages to render on that page.

For instance, we'll create a home.properties file in the pages directory for the Home page with a local message:

introMsg=Welcome to the Apache Tapestry Tutorial

The message properties are different from the Java properties.

For the same reason, the key name with the message prefix is used to render a message property — for instance, ${message:introMsg}.

6.4. Layout Component

Let's define a basic layout component by creating the Layout.java class. We'll keep the file in the components directory in src/main/java:

public class Layout {
    @Property
    @Parameter(required = true, defaultPrefix = BindingConstants.LITERAL)
    private String title;
}

Here, the title property is marked required, and the default prefix for binding is set as literal String.

Then, we'll write a corresponding template file Layout.tml in the components directory in src/main/resources:

<html xmlns:t="http://tapestry.apache.org/schema/tapestry_5_4.xsd">
    <head>
        <title>${title}</title>
    </head>
    <body>
        <div class="container">
            <t:body />
            <hr/>
            <footer>
                <p>© Your Company</p>
            </footer>
        </div>
    </body>
</html>

Now, let's use the layout on the home page:

<html t:type="layout" title="apache-tapestry Home" 
    xmlns:t="http://tapestry.apache.org/schema/tapestry_5_4.xsd">
    <h1>Home! ${appName}</h1>
    <h2>${message:introMsg}</h2>
    <h3>${currentTime}</h3>
</html>

Note that the t: namespace is used to identify the elements (t:type and t:body) provided by Apache Tapestry. At the same time, the namespace also provides components and attributes.

Here, the t:type will set the layout on the home page. And, the t:body element will insert the content of the page.

Let's take a look at the Home page with the layout:

 

7. Form

Let's create a Login page with a form to allow users to sign in.

As already explored, we'll first create a Java class Login:

public class Login {
    // ...
    @InjectComponent
    private Form login;

    @Property
    private String email;

    @Property
    private String password;
}

Here, we've defined two properties — email and password. Also, we've injected a Form component for the login.

Then, let's create a corresponding template login.tml:

<html t:type="layout" title="apache-tapestry com.example"
      xmlns:t="http://tapestry.apache.org/schema/tapestry_5_3.xsd"
      xmlns:p="tapestry:parameter">
    <t:form t:id="login">
        <h2>Please sign in</h2>
        <t:textfield t:id="email" placeholder="Email address"/>
        <t:passwordfield t:id="password" placeholder="Password"/>
        <t:submit class="btn btn-large btn-primary" value="Sign in"/>
    </t:form>
</html>

Now, we can access the login page at localhost:8080/apache-tapestry/login:

8. Validation

Apache Tapestry provides a few built-in methods for form validation. It also provides ways to handle the success or failure of the form submission.

These built-in methods follow a convention based on the event and the component name. For instance, the method onValidateFromLogin will validate the login form component.

Likewise, methods like onSuccessFromLogin and onFailureFromLogin are for success and failure events respectively.

So, let's add these built-in methods to the Login class:

public class Login {
    // ...
    
    void onValidateFromLogin() {
        if (email == null)
            System.out.println("Email is null");

        if (password == null)
            System.out.println("Password is null");
    }

    Object onSuccessFromLogin() {
        System.out.println("Welcome! Login Successful");
        return Home.class;
    }

    void onFailureFromLogin() {
        System.out.println("Please try again with correct credentials");
    }
}

9. Alerts

Form validation is incomplete without proper alerts. Not to mention, the framework also has built-in support for alert messages.

For this, we'll first inject the instance of the AlertManager in the Login class to manage the alerts. Then, replace the println statements in existing methods with the alert messages:

public class Login {
    // ...
    @Inject
    private AlertManager alertManager;

    void onValidateFromLogin() {
        if(email == null || password == null) {
            alertManager.error("Email/Password is null");
            login.recordError("Validation failed"); //submission failure on the form
        }
    }
 
    Object onSuccessFromLogin() {
        alertManager.success("Welcome! Login Successful");
        return Home.class;
    }

    void onFailureFromLogin() {
        alertManager.error("Please try again with correct credentials");
    }
}

Let's see the alerts in action when the login fails:

10. Ajax

So far, we've explored the creation of a simple home page with a form. At the same time, we've seen the validations and support for alert messages.

Next, let's explore Apache Tapestry's built-in support for Ajax.

First, we'll inject the instance of the AjaxResponseRenderer and Block component in the Home class. Then, we'll create a method onCallAjax for processing the Ajax call:

public class Home {
    // ....

    @Inject
    private AjaxResponseRenderer ajaxResponseRenderer;
    
    @Inject
    private Block ajaxBlock;

    @Log
    void onCallAjax() {
        ajaxResponseRenderer.addRender("ajaxZone", ajaxBlock);
    }
}

Also, we need to make a few changes in our Home.tml.

First, we'll add the eventLink to invoke the onCallAjax method. Then, we'll add a zone element with id ajaxZone to render the Ajax response.

Last, we need to have a block component that will be injected in the Home class and rendered as Ajax response:

<p><t:eventlink event="callAjax" zone="ajaxZone" class="btn btn-default">Call Ajax</t:eventlink></p>
<t:zone t:id="ajaxZone"></t:zone>
<t:block t:id="ajaxBlock">
    <hr/>
    <h2>Rendered through Ajax</h2>
    <p>The current time is: <strong>${currentTime}</strong></p>
</t:block>

Let's take a look at the updated home page:

Then, we can click the Call Ajax button and see the ajaxResponseRenderer in action:

11. Logging

To enable the built-in logging feature, we need to inject an instance of the Logger. Then, we can use it to log at any level, like TRACE, DEBUG, or INFO.

So, let's make the required changes in the Home class:

public class Home {
    // ...

    @Inject
    private Logger logger;

    void onCallAjax() {
        logger.info("Ajax call");
        ajaxResponseRenderer.addRender("ajaxZone", ajaxBlock);
    }
}

Now, when we click the Call Ajax button, the logger will log at the INFO level:

[INFO] pages.Home Ajax call

12. Conclusion

In this article, we've explored the Apache Tapestry web framework.

To begin with, we've created a quickstart web application and added a Home page using basic features of Apache Tapestry, like components, pages, and templates.

Then, we've examined a few handy annotations provided by Apache Tapestry to configure a property and component/page injection.

Last, we've explored the built-in Ajax and logging support provided by the framework.

As usual, all the code implementations are available over on GitHub.

Spring MVC Themes


1. Overview

When designing a web application, its look-and-feel, or theme, is a key component. It impacts our application's usability and accessibility and can further establish our company's brand.

In this tutorial, we'll go through the steps required to configure themes in a Spring MVC application.

2. Use Cases

Simply put, themes are a set of static resources, typically stylesheets and images, that impact the visual style of our web application.

We can use themes to:

  • Establish a common look-and-feel with a fixed theme
  • Customize for a brand with a branding theme – this is common in a SaaS application where each client wants a different look-and-feel
  • Address accessibility concerns with a usability theme – for example, we might want a dark or a high-contrast theme

3. Maven Dependencies

So, first things first, let's add the Maven dependencies we'll be using for the first part of this tutorial.

We'll need the Spring WebMVC and Spring Context dependencies:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-webmvc</artifactId>
    <version>5.2.1.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context</artifactId>
    <version>5.2.1.RELEASE</version>
</dependency>

And since we're going to use JSP in our example, we'll need Java Servlets, JSP, and JSTL:

<dependency>
    <groupId>javax.servlet</groupId>
    <artifactId>javax.servlet-api</artifactId>
    <version>4.0.1</version>
</dependency>
<dependency>
     <groupId>javax.servlet.jsp</groupId>
     <artifactId>javax.servlet.jsp-api</artifactId>
     <version>2.3.3</version>
</dependency>
<dependency>
    <groupId>javax.servlet</groupId>
    <artifactId>jstl</artifactId>
    <version>1.2</version>
</dependency>

4. Configuring Spring Theme

4.1. Theme Properties

Now, let's configure light and dark themes for our application.

For the dark theme, let's create dark.properties:

styleSheet=themes/black.css
background=black

And for the light theme, light.properties:

styleSheet=themes/white.css
background=white

From the properties above, we notice that one refers to a CSS file and the other refers to a CSS style. We'll see in a moment how these manifest in our view.

4.2. ResourceHandler

Reading the properties above, the files black.css and white.css must be placed in the directory named /themes.

And, we must configure a ResourceHandler to enable Spring MVC to correctly locate the files when requested:

@Override 
public void addResourceHandlers(ResourceHandlerRegistry registry) {
    registry.addResourceHandler("/themes/**").addResourceLocations("classpath:/themes/");
}

4.3. ThemeSource

We can manage these theme-specific .properties files as ResourceBundles via ResourceBundleThemeSource:

@Bean
public ResourceBundleThemeSource resourceBundleThemeSource() {
    return new ResourceBundleThemeSource();
}

4.4. ThemeResolvers

Next, we need a ThemeResolver to resolve the correct theme for the application. Depending on our design needs, we can choose between existing implementations or create our own.

For our example, let's configure the CookieThemeResolver. As the name suggests, this resolves the theme information from a browser cookie or falls back to the default if that information isn't available:

@Bean
public ThemeResolver themeResolver() {
    CookieThemeResolver themeResolver = new CookieThemeResolver();
    themeResolver.setDefaultThemeName("light");
    return themeResolver;
}

The other variants of ThemeResolver shipped with the framework are:

  • FixedThemeResolver: Used when there is a fixed theme for an application
  • SessionThemeResolver: Used to allow the user to switch themes for the active session – see the sketch below
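
For instance, if we wanted the user's choice to last only for the current session, a minimal sketch of switching to the SessionThemeResolver could look like this (assuming the same default theme name as before):

@Bean
public ThemeResolver themeResolver() {
    SessionThemeResolver themeResolver = new SessionThemeResolver();
    themeResolver.setDefaultThemeName("light");
    return themeResolver;
}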

4.5. View

In order to apply the theme to our view, we must configure a mechanism to query the resource bundles.

We'll keep the scope to JSP only, though a similar lookup mechanism could be configured for alternate view rendering engines as well.

For JSPs, we can import a tag library that does the job for us:

<%@ taglib prefix="spring" uri="http://www.springframework.org/tags"%>

And then we can refer to any property specifying the appropriate property name:

<link rel="stylesheet" href="<spring:theme code='styleSheet'/>"/>

Or:

<body bgcolor="<spring:theme code='background'/>">

So, let's now add a single view called index.jsp into our application and place it in the WEB-INF/ directory:

<%@ taglib prefix="spring" uri="http://www.springframework.org/tags"%>
<%@ taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c" %>
<html>
    <head>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
        <link rel="stylesheet" href="<spring:theme code='styleSheet'/>"/>
        <title>Themed Application</title>
    </head>
    <body>
        <header>
            <h1>Themed Application</h1>
            <hr />
        </header>
        <section>
            <h2>Spring MVC Theme Demo</h2>
            <form action="<c:url value='/'/>" method="POST" name="themeChangeForm" id="themeChangeForm">
                <div>
                    <h4>
                        Change Theme
                    </h4>
                </div>
                <select id="theme" name="theme" onChange="submitForm()">
                    <option value="">Reset</option>
                    <option value="light">Light</option>
                    <option value="dark">Dark</option>
                </select>
            </form>
        </section>

        <script type="text/javascript">
            function submitForm() {
                document.themeChangeForm.submit();
            }
        </script>
    </body>
</html>

Actually, our application would work at this point, always choosing our light theme.

Let's see how we can allow the user to change their theme.

4.6. ThemeChangeInterceptor

The job of the ThemeChangeInterceptor is to intercept and handle the theme change request.

Let's now add a ThemeChangeInterceptor and configure it to look for a theme request parameter:

@Override
public void addInterceptors(InterceptorRegistry registry) {
    registry.addInterceptor(themeChangeInterceptor());
}

@Bean
public ThemeChangeInterceptor themeChangeInterceptor() {
    ThemeChangeInterceptor interceptor = new ThemeChangeInterceptor();
    interceptor.setParamName("theme");
    return interceptor;
}

5. Further Dependencies

Next, let's implement our own ThemeResolver that stores the user's preference to a database.

To achieve this, we'll need Spring Security for identifying the user:

<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-web</artifactId>
    <version>5.2.1.RELEASE</version>
</dependency>

<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-config</artifactId>
    <version>5.2.1.RELEASE</version>
</dependency>

<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-taglibs</artifactId>
    <version>5.2.1.RELEASE</version>
</dependency>

And Spring Data, Hibernate, and HSQLDB for storing the user's preference:

<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-jpa</artifactId>
    <version>2.2.2.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-core</artifactId>
    <version>5.4.9.Final</version>
</dependency>

<dependency>
    <groupId>org.hsqldb</groupId>
    <artifactId>hsqldb</artifactId>
    <version>2.5.0</version>
</dependency>

6. Custom ThemeResolver

Let's now dive more into ThemeResolver and implement one of our own. This custom ThemeResolver will save the user's theme preference to a database.

To achieve this, let's first add a UserPreference entity:

@Entity
@Table(name = "preferences")
public class UserPreference {
    @Id
    private String username;

    private String theme;
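
    // getters and setters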
}

Next, we'll create UserPreferenceThemeResolver, which must implement the ThemeResolver interface. Its key responsibilities are to resolve and save theme information.

Let's first address resolving the name by implementing UserPreferenceThemeResolver#resolveThemeName:

@Override
public String resolveThemeName(HttpServletRequest request) {
    String themeName = findThemeFromRequest(request)
      .orElse(findUserPreferredTheme().orElse(getDefaultThemeName()));
    request.setAttribute(THEME_REQUEST_ATTRIBUTE_NAME, themeName);
    return themeName;
}

private Optional<String> findUserPreferredTheme() {
    Authentication authentication = SecurityContextHolder.getContext()
            .getAuthentication();
    UserPreference userPreference = getUserPreference(authentication).orElse(new UserPreference());
    return Optional.ofNullable(userPreference.getTheme());
}

private Optional<String> findThemeFromRequest(HttpServletRequest request) {
    return Optional.ofNullable((String) request.getAttribute(THEME_REQUEST_ATTRIBUTE_NAME));
}
    
private Optional<UserPreference> getUserPreference(Authentication authentication) {
    return isAuthenticated(authentication) ? 
      userPreferenceRepository.findById(((User) authentication.getPrincipal()).getUsername()) : 
      Optional.empty();
}
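
The methods above rely on a userPreferenceRepository field and an isAuthenticated helper that aren't shown here. A minimal sketch of what they might look like, assuming a Spring Data JPA repository keyed by the username:

@Autowired
private UserPreferenceRepository userPreferenceRepository;

// assumed helper: the user must be authenticated with a UserDetails principal
private boolean isAuthenticated(Authentication authentication) {
    return authentication != null
      && authentication.isAuthenticated()
      && authentication.getPrincipal() instanceof User;
}

And the repository itself can be a plain Spring Data interface:

public interface UserPreferenceRepository extends JpaRepository<UserPreference, String> {
}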

And now we can write our implementation for saving the theme in UserPreferenceThemeResolver#setThemeName:

@Override
public void setThemeName(HttpServletRequest request, HttpServletResponse response, String theme) {
    Authentication authentication = SecurityContextHolder.getContext()
        .getAuthentication();
    if (isAuthenticated(authentication)) {
        request.setAttribute(THEME_REQUEST_ATTRIBUTE_NAME, theme);
        UserPreference userPreference = getUserPreference(authentication).orElse(new UserPreference());
        userPreference.setUsername(((User) authentication.getPrincipal()).getUsername());
        userPreference.setTheme(StringUtils.hasText(theme) ? theme : null);
        userPreferenceRepository.save(userPreference);
    }
}

And finally, let's swap out the ThemeResolver in our app:

@Bean 
public ThemeResolver themeResolver() { 
    return new UserPreferenceThemeResolver();
}

Now, the user's theme preference is saved in the database instead of as a cookie.

An alternative way of saving the user's preference could've been through a Spring MVC Controller and a separate API.

7. Conclusion

In this article, we learned the steps to configure Spring MVC themes.

We can also find the complete code over on GitHub.


Introduction to Spark Graph Processing with GraphFrames


1. Introduction

Graph processing is useful for many applications from social networks to advertisements. Inside a big data scenario, we need a tool to distribute that processing load.

In this tutorial, we'll load and explore graph possibilities using Apache Spark in Java. To avoid complex structures, we'll be using an easy and high-level Apache Spark graph API: the GraphFrames API.

2. Graphs

First of all, let's define a graph and its components. A graph is a data structure having edges and vertices. The edges carry information that represents relationships between the vertices.

The vertices are points in an n-dimensional space, and edges connect the vertices according to their relationships:

In the image above, we have a social network example. We can see the vertices represented by letters and the edges carrying which kind of relationship is between the vertices.

3. Maven Setup

Now, let's start the project by setting up the Maven configuration.

Let's add spark-graphx 2.11, graphframes, and spark-sql 2.11:

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-graphx_2.11</artifactId>
    <version>2.4.4</version>
</dependency>
<dependency>
   <groupId>graphframes</groupId>
   <artifactId>graphframes</artifactId>
   <version>0.7.0-spark2.4-s_2.11</version>
</dependency>
<dependency>
   <groupId>org.apache.spark</groupId>
   <artifactId>spark-sql_2.11</artifactId>
   <version>2.4.4</version>
</dependency>

These artifact versions support Scala 2.11.

Also, it so happens that GraphFrames is not in Maven Central. So, let's add the needed Maven repository, too:

<repositories>
     <repository>
          <id>SparkPackagesRepo</id>
          <url>http://dl.bintray.com/spark-packages/maven</url>
     </repository>
</repositories>

4. Spark Configuration

In order to work with GraphFrames, we'll need to download Hadoop and define the HADOOP_HOME environment variable.

In the case of Windows as the operating system, we'll also download the appropriate winutils.exe to the HADOOP_HOME/bin folder.

Next, let's begin our code by creating the basic configuration:

SparkConf sparkConf = new SparkConf()
  .setAppName("SparkGraphFrames")
  .setMaster("local[*]");
JavaSparkContext javaSparkContext = new JavaSparkContext(sparkConf);

We'll also need to create a SparkSession:

SparkSession session = SparkSession.builder()
  .appName("SparkGraphFrameSample")
  .config("spark.sql.warehouse.dir", "/file:C:/temp")
  .sparkContext(javaSparkContext.sc())
  .master("local[*]")
  .getOrCreate();

5. Graph Construction

Now, we're all set to start with our main code. So, let's define the entities for our vertices and edges, and create the GraphFrame instance.

We'll work on the relationships between users from a hypothetical social network.

5.1. Data

First, for this example, let's define both entities as User and Relationship:

public class User {
    private Long id;
    private String name;
    // constructor, getters and setters
}
 
public class Relationship implements Serializable {
    private String type;
    private String src;
    private String dst;
    private UUID id;

    public Relationship(String type, String src, String dst) {
        this.type = type;
        this.src = src;
        this.dst = dst;
        this.id = UUID.randomUUID();
    }
    // getters and setters
}

Next, let's define some User and Relationship instances:

List<User> users = new ArrayList<>();
users.add(new User(1L, "John"));
users.add(new User(2L, "Martin"));
users.add(new User(3L, "Peter"));
users.add(new User(4L, "Alicia"));

List<Relationship> relationships = new ArrayList<>();
relationships.add(new Relationship("Friend", "1", "2"));
relationships.add(new Relationship("Following", "1", "4"));
relationships.add(new Relationship("Friend", "2", "4"));
relationships.add(new Relationship("Relative", "3", "1"));
relationships.add(new Relationship("Relative", "3", "4"));

5.2. GraphFrame Instance

Now, in order to create and manipulate our graph of relationships, we'll create an instance of GraphFrame. The GraphFrame constructor expects two Dataset<Row> instances, the first representing the vertices and the second, the edges:

Dataset<Row> userDataset = session.createDataFrame(users, User.class);
Dataset<Row> relationshipDataset = session.createDataFrame(relationships, Relationship.class);

GraphFrame graph = new GraphFrame(userDataset, relationshipDataset);

At last, we'll log our vertices and edges in the console to see how it looks:

graph.vertices().show();
graph.edges().show();

+---+------+
| id|  name|
+---+------+
|  1|  John|
|  2|Martin|
|  3| Peter|
|  4|Alicia|
+---+------+

+---+--------------------+---+---------+
|dst|                  id|src|     type|
+---+--------------------+---+---------+
|  2|622da83f-fb18-484...|  1|   Friend|
|  4|c6dde409-c89d-490...|  1|Following|
|  4|360d06e1-4e9b-4ec...|  2|   Friend|
|  1|de5e738e-c958-4e0...|  3| Relative|
|  4|d96b045a-6320-4a6...|  3| Relative|
+---+--------------------+---+---------+

6. Graph Operators

Now that we have a GraphFrame instance, let's see what we can do with it.

6.1. Filter

GraphFrames allows us to filter edges and vertices by a query.

Next, let's filter the vertices by the name property of User:

graph.vertices().filter("name = 'Martin'").show();

At the console, we can see the result:

+---+------+
| id|  name|
+---+------+
|  2|Martin|
+---+------+

Also, we can directly filter on the graph by calling filterEdges or filterVertices:

graph.filterEdges("type = 'Friend'")
  .dropIsolatedVertices().vertices().show();

Now, since we filtered the edges, we might still have some isolated vertices. So, we'll call dropIsolatedVertices(). 

As a result, we have a subgraph, still a GraphFrame instance, with just the relationships that have “Friend” status:

+---+------+
| id|  name|
+---+------+
|  1|  John|
|  2|Martin|
|  4|Alicia|
+---+------+

6.2. Degrees

Another interesting feature set is the degrees set of operations. These operations return the number of edges incident on each vertex.

The degrees operation just returns the count of all edges of each vertex. On the other hand, inDegrees counts only incoming edges, and outDegrees counts only outgoing edges.

Let's count the incoming degrees of all vertices in our graph:

graph.inDegrees().show();

As a result, we have a GraphFrame that shows the number of incoming edges to each vertex, excluding those with none:

+---+--------+
| id|inDegree|
+---+--------+
|  1|       1|
|  4|       3|
|  2|       1|
+---+--------+
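
In the same way, we can list the total and outgoing edge counts; as a small sketch, these operations return a degree and an outDegree column, respectively:

graph.degrees().show();
graph.outDegrees().show();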

7. Graph Algorithms

GraphFrames also provides popular algorithms ready to use — let's take a look at some of them.

7.1. Page Rank

The Page Rank algorithm weighs the incoming edges to a vertex and transforms them into a score.

The idea is that each incoming edge represents an endorsement and makes the vertex more relevant in the given graph.

For example, in a social network, if a person is followed by various people, he or she will be ranked highly.

Running the page rank algorithm is quite straightforward:

graph.pageRank()
  .maxIter(20)
  .resetProbability(0.15)
  .run()
  .vertices()
  .show();

To configure this algorithm, we just need to provide:

  • maxIter – the number of iterations of page rank to run – 20 is recommended, too few will decrease the quality, and too many will degrade the performance
  • resetProbability – the random reset probability (alpha) – the lower it is, the bigger the score spread between the winners and losers will be – valid ranges are from 0 to 1. Usually, 0.15 is a good score

The response is a similar GraphFrame, though this time we see an additional column giving the page rank of each vertex:

+---+------+------------------+
| id|  name|          pagerank|
+---+------+------------------+
|  4|Alicia|1.9393230468864597|
|  3| Peter|0.4848822786454427|
|  1|  John|0.7272991738542318|
|  2|Martin| 0.848495500613866|
+---+------+------------------+

In our graph, Alicia is the most relevant vertex, followed by Martin and John.

7.2. Connected Components

The connected components algorithm finds isolated clusters or isolated sub-graphs. These clusters are sets of connected vertices in a graph where each vertex is reachable from any other vertex in the same set.

We can call the algorithm without any parameters via the connectedComponents() method:

graph.connectedComponents().run().show();

The algorithm returns a GraphFrame containing each vertex and the component to which each is connected:

+---+------+------------+
| id|  name|   component|
+---+------+------------+
|  1|  John|154618822656|
|  2|Martin|154618822656|
|  3| Peter|154618822656|
|  4|Alicia|154618822656|
+---+------+------------+

Our graph has only one component — this means that we do not have isolated sub-graphs. The component has an auto-generated id, which is 154618822656 in our case.

Although we have one more column here – the component id – our graph is still the same.

7.3. Triangle Counting

Triangle counting is commonly used for community detection in a social network graph. A triangle is a set of three vertices, where each vertex has a relationship to the other two vertices in the triangle.

In a social network community, it's easy to find a considerable number of triangles connected to each other.

We can easily perform a triangle counting directly from our GraphFrame instance:

graph.triangleCount().run().show();

The algorithm also returns a GraphFrame with the number of triangles passing through each vertex.

+-----+---+------+
|count| id|  name|
+-----+---+------+
|    1|  3| Peter|
|    2|  1|  John|
|    2|  4|Alicia|
|    1|  2|Martin|
+-----+---+------+

8. Conclusion

Apache Spark is a great tool for computing a relevant amount of data in an optimized and distributed way. And, the GraphFrames library allows us to easily distribute graph operations over Spark.

As always, the complete source code for the example is available over on GitHub.

A Quick Guide to Timeouts in OkHttp


1. Overview

In this quick tutorial, we'll focus on different types of timeouts we can set for the OkHttp client.

For the more general overview of the OkHttp library, check our introductory OkHttp guide.

2. Connect Timeout

A connect timeout defines a time period in which our client should establish a connection with a target host.

By default, for the OkHttpClient, this timeout is set to 10 seconds.

However, we can easily change its value using the OkHttpClient.Builder#connectTimeout method. A value of zero means no timeout at all.

Let's now see how to build and use an OkHttpClient with a custom connection timeout:

@Test
public void whenConnectTimeoutExceeded_thenSocketTimeoutException() {
    OkHttpClient client = new OkHttpClient.Builder()
      .connectTimeout(10, TimeUnit.MILLISECONDS)
      .build();

    Request request = new Request.Builder()
      .url("http://203.0.113.1") // non routable address
      .build();

    Throwable thrown = catchThrowable(() -> client.newCall(request).execute());

    assertThat(thrown).isInstanceOf(SocketTimeoutException.class);
}

The above example shows that the client throws a SocketTimeoutException when the connection attempt exceeds the configured timeout.

3. Read Timeout

A read timeout is applied from the moment the connection between a client and a target host has been successfully established.

It defines a maximum time of inactivity between two data packets when waiting for the server's response.

The default timeout of 10 seconds can be changed using OkHttpClient.Builder#readTimeout. Analogously as for the connect timeout, a zero value indicates no timeout.

Let's now see how to configure a custom read timeout in practice:

@Test
public void whenReadTimeoutExceeded_thenSocketTimeoutException() {
    OkHttpClient client = new OkHttpClient.Builder()
      .readTimeout(10, TimeUnit.MILLISECONDS)
      .build();

    Request request = new Request.Builder()
      .url("https://httpbin.org/delay/2") // 2-second response time
      .build();

    Throwable thrown = catchThrowable(() -> client.newCall(request).execute());

    assertThat(thrown).isInstanceOf(SocketTimeoutException.class);
}

As we can see, the server doesn't return the response within the defined timeout of 10 ms. As a result, the OkHttpClient throws a SocketTimeoutException.

4. Write Timeout

A write timeout defines a maximum time of inactivity between two data packets when sending the request to the server.

Similarly, as for the connect and read timeouts, we can override the default value of 10 seconds using OkHttpClient.Builder#writeTimeout. As a convention, a zero value means no timeout at all.

In the following example, we set a very short write timeout of 10 ms and post a 1 MB content to the server:

@Test
public void whenWriteTimeoutExceeded_thenSocketTimeoutException() {
    OkHttpClient client = new OkHttpClient.Builder()
      .writeTimeout(10, TimeUnit.MILLISECONDS)
      .build();

    Request request = new Request.Builder()
      .url("https://httpbin.org/delay/2")
      .post(RequestBody.create(MediaType.parse("text/plain"), create1MBString()))
      .build();

    Throwable thrown = catchThrowable(() -> client.newCall(request).execute());

    assertThat(thrown).isInstanceOf(SocketTimeoutException.class);
}

As we see, due to the large payload, our client isn't able to send a request body to the server within the defined timeout. Consequently, the OkHttpClient throws a SocketTimeoutException.
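
The test above references a create1MBString() helper that isn't shown; here's a minimal sketch, assuming it simply builds a text payload of roughly 1 MB:

private String create1MBString() {
    // 1 MB of the same character is enough to exceed the very short write timeout
    char[] chars = new char[1024 * 1024];
    Arrays.fill(chars, 'x');
    return new String(chars);
}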

5. Call Timeout

A call timeout is a bit different than the connect, read and write timeouts we already discussed.

It defines a time limit for a complete HTTP call. This includes resolving DNS, connecting, writing the request body, server processing, as well as reading the response body.

Unlike other timeouts, its default value is set to zero, which implies no timeout. But of course, we can configure a custom value using the OkHttpClient.Builder#callTimeout method.

Let's see a practical usage example:

@Test
public void whenCallTimeoutExceeded_thenInterruptedIOException() {
    OkHttpClient client = new OkHttpClient.Builder()
      .callTimeout(1, TimeUnit.SECONDS)
      .build();

    Request request = new Request.Builder()
      .url("https://httpbin.org/delay/2")
      .build();

    Throwable thrown = catchThrowable(() -> client.newCall(request).execute());

    assertThat(thrown).isInstanceOf(InterruptedIOException.class);
}

As we can see, the call timeout is exceeded and the OkHttpClient throws an InterruptedIOException.

6. Per-Request Timeout

It's recommended to create a single OkHttpClient instance and reuse it for all the HTTP calls across our application.

Sometimes, however, we know that a certain request takes more time than all the others. In this situation, we need to extend a given timeout only for that particular call.

In such cases, we can use an OkHttpClient#newBuilder method. This builds a new client that shares the same settings. We can then use the builder methods to adjust timeout settings as needed.

Let's now see how to do this in practice:

@Test
public void whenPerRequestTimeoutExtended_thenResponseSuccess() throws IOException {
    OkHttpClient defaultClient = new OkHttpClient.Builder()
      .readTimeout(1, TimeUnit.SECONDS)
      .build();

    Request request = new Request.Builder()
      .url("https://httpbin.org/delay/2")
      .build();

    Throwable thrown = catchThrowable(() -> defaultClient.newCall(request).execute());

    assertThat(thrown).isInstanceOf(InterruptedIOException.class);

    OkHttpClient extendedTimeoutClient = defaultClient.newBuilder()
      .readTimeout(5, TimeUnit.SECONDS)
      .build();

    Response response = extendedTimeoutClient.newCall(request).execute();
    assertThat(response.code()).isEqualTo(200);
}

As we can see, the defaultClient failed to complete the HTTP call because it exceeded the read timeout.

That's why we created the extendedTimeoutClient, adjusted the timeout value, and successfully executed the request.

7. Summary

In this article, we explored different timeouts we can configure for the OkHttpClient.

We also shortly described when the connect, read and write timeouts are applied during an HTTP call.

Additionally, we showed how easy it is to change a certain timeout value only for a single request.

As usual, all the code examples are available over on GitHub.

Circular Linked List Java Implementation


1. Introduction

In this tutorial, we'll look at the implementation of a circular linked list in Java.

2. Circular Linked List

A circular linked list is a variation of a linked list in which the last node points to the first node, completing a full circle of nodes. In other words, this variation of the linked list doesn't have a null element at the end.

With this simple change, we gain some benefits:

  • Any node in the circular linked list can be a starting point
  • Consequently, the whole list can be traversed starting from any node
  • Since the last node of the circular linked list has the pointer to the first node, it's easy to perform enqueue and dequeue operations

All in all, this is very useful in the implementation of the queue data structure.

Performance-wise, it is the same as other linked list implementations except for one thing: Traversing from the last node to the head node can be done in constant time. With conventional linked lists, this is a linear operation.

3. Implementation in Java

Let's start by creating an auxiliary Node class that will store int values and a pointer to the next node:

class Node {

    int value;
    Node nextNode;

    public Node(int value) {
        this.value = value;
    }
}

Now let's create the first and last nodes in the circular linked list, usually called the head and tail:

public class CircularLinkedList {
    private Node head = null;
    private Node tail = null;

    // ....
}

In the next subsections we'll take a look at the most common operations we can perform on a circular linked list.

3.1. Inserting Elements

The first operation we're going to cover is the insertion of new nodes. While inserting a new element, we'll need to handle two cases:

  • The head node is null, that is, no elements have been added yet. In this case, we'll make the new node both the head and tail of the list since there is only one node
  • The head node isn't null, that is, one or more elements have already been added to the list. In this case, the existing tail should point to the new node, and the newly added node becomes the tail

In both of the above cases, the nextNode of the tail will point to the head.

Let's create an addNode method that takes the value to be inserted as a parameter:

public void addNode(int value) {
    Node newNode = new Node(value);

    if (head == null) {
        head = newNode;
    } else {
        tail.nextNode = newNode;
    }

    tail = newNode;
    tail.nextNode = head;
}

Now we can add a few numbers to our circular linked list:

private CircularLinkedList createCircularLinkedList() {
    CircularLinkedList cll = new CircularLinkedList();

    cll.addNode(13);
    cll.addNode(7);
    cll.addNode(24);
    cll.addNode(1);
    cll.addNode(8);
    cll.addNode(37);
    cll.addNode(46);

    return cll;
}

3.2. Finding an Element

The next operation we'll look at is searching to determine if an element is present in the list.

For this, we'll fix a node in the list (usually the head) as the currentNode and traverse through the entire list using the nextNode of this node, until we find the required element.

Let's add a new method containsNode that takes the searchValue as a parameter:

public boolean containsNode(int searchValue) {
    Node currentNode = head;

    if (head == null) {
        return false;
    } else {
        do {
            if (currentNode.value == searchValue) {
                return true;
            }
            currentNode = currentNode.nextNode;
        } while (currentNode != head);
        return false;
    }
}

Now, let's add a couple of tests to verify that the above-created list contains the elements we added and no new ones:

@Test
public void givenACircularLinkedList_WhenAddingElements_ThenListContainsThoseElements() {
    CircularLinkedList cll = createCircularLinkedList();

    assertTrue(cll.containsNode(8));
    assertTrue(cll.containsNode(37));
}

@Test
public void givenACircularLinkedList_WhenLookingForNonExistingElement_ThenReturnsFalse() {
    CircularLinkedList cll = createCircularLinkedList();

    assertFalse(cll.containsNode(11));
}

3.3. Deleting an Element

Next, we'll look at the delete operation. Similar to insertion, we have a couple of cases (excluding the case where the list itself is empty) that we need to consider:

  • Element to delete is the head itself. In this case, we need to update the head to be the next node of the current head, and the next node of the tail to be the new head
  • Element to delete is any element other than the head. In this case, we just need to update the next node of the previous node to point to the next node of the node that needs to be deleted; additionally, if the deleted node is the tail, the previous node becomes the new tail

We'll now add a new method deleteNode that takes the valueToDelete as a parameter:

public void deleteNode(int valueToDelete) {
    Node currentNode = head;

    if (head != null) {
        if (currentNode.value == valueToDelete) {
            head = head.nextNode;
            tail.nextNode = head;
        } else {
            do {
                Node nextNode = currentNode.nextNode;
                if (nextNode.value == valueToDelete) {
                    currentNode.nextNode = nextNode.nextNode;
                    // if we removed the tail, the previous node becomes the new tail
                    if (nextNode == tail) {
                        tail = currentNode;
                    }
                    break;
                }
                currentNode = currentNode.nextNode;
            } while (currentNode != head);
        }
    }
}

Let's now add a simple test to verify that deletion works as expected for all the cases:

@Test
public void givenACircularLinkedList_WhenDeletingElements_ThenListDoesNotContainThoseElements() {
    CircularLinkedList cll = createCircularLinkedList();

    assertTrue(cll.containsNode(13));
    cll.deleteNode(13);
    assertFalse(cll.containsNode(13));

    assertTrue(cll.containsNode(1));
    cll.deleteNode(1);
    assertFalse(cll.containsNode(1));

    assertTrue(cll.containsNode(46));
    cll.deleteNode(46);
    assertFalse(cll.containsNode(46));
}

3.4. Traversing the List

We're going to take a look at the traversal of our circular linked list in this final section. Similar to the search and delete operations, for traversal we fix the currentNode as head and traverse through the entire list using the nextNode of this node.

Let's add a new method traverseList that prints the elements that are added to the list:

public void traverseList() {
    Node currentNode = head;

    if (head != null) {
        do {
            LOGGER.info(currentNode.value + " ");
            currentNode = currentNode.nextNode;
        } while (currentNode != head);
    }
}

As we can see, in the above example, during the traversal, we simply print the value of each of the nodes, until we get back to the head node.
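
For instance, traversing the list we created earlier should log each value exactly once, starting from the head:

CircularLinkedList cll = createCircularLinkedList();
cll.traverseList(); // logs 13, 7, 24, 1, 8, 37, 46 in order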

4. Conclusion

In this tutorial, we've seen how to implement a circular linked list in Java and explored some of the most common operations.

First, we learned what exactly a circular linked list is including some of the most common features and differences with a conventional linked list. Then, we saw how to insert, search, delete and traverse items in our circular linked list implementation.

As usual, all the examples used in this article are available over on GitHub.

Handling URL Encoded Form Data in Spring REST


1. Overview

For an end-user, the process of form submission is convenient, and to some extent, equivalent to just entering data and clicking on a submit button. However, from an engineering perspective, it takes an encoding mechanism to reliably send and receive this data between the client side and the server side for back-end processing.

For the scope of this tutorial, we'll focus on creating a form that sends its data as application/x-www-form-urlencoded content type in a Spring web application.

2. Form Data Encoding

The most commonly used HTTP method for form submissions is POST. However, for idempotent form submissions, we can also use the HTTP GET method. And, the way to specify the method is through the form's method attribute.

For forms that use the GET method, the entire form data is sent as part of the query string. But, if we're using the POST method, then its data is sent as part of the body of the HTTP request.

Moreover, in the latter case, we can also specify the encoding of data with the form's enctype attribute, which can take two values, namely application/x-www-form-urlencoded and multipart/form-data.

2.1. Media Type application/x-www-form-urlencoded

HTML forms have a default value of application/x-www-form-urlencoded for the enctype attribute as this takes care of the basic use cases where data is entirely text. Nevertheless, if our use case involves supporting file data, then we'll have to override it with a value of multipart/form-data.

Essentially, it sends the form data as key-value pairs separated by an ampersand (&) character. Also, the respective key and value are separated with the equals sign (=). Further, all reserved and non-alphanumeric characters are encoded using percent-encoding.

3. Form Submission in Browser

Now that we have our basics covered, let's go ahead and see how we can handle URL encoded form data for a simple use case of feedback submission in a Spring web app.

3.1. Domain Model

For our feedback form, we need to capture the email identifier of the submitter along with the comment. So, let's create our domain model in a Feedback class:

public class Feedback {
    private String emailId;
    private String comment;
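    // getters and setters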
}

3.2. Create Form

To use a simple HTML template to create our dynamic web form, we'll need to configure Thymeleaf in our project. After this, we're ready to add a GET endpoint /feedback that will serve the feedback view for the form:

@GetMapping(path = "/feedback")
public String getFeedbackForm(Model model) {
    Feedback feedback = new Feedback();
    model.addAttribute("feedback", feedback);
    return "feedback";
}

Note that we're using feedback as a model attribute to capture the user input. Next, let's create the feedback view in the feedback.html template:

<form action="#" method="post" th:action="@{/web/feedback}" th:object="${feedback}">
    <!-- form fields for feedback's submitter and comment info -->
</form>

Of course, we don't need to explicitly specify the enctype attribute as it'll pick the default value of application/x-www-form-urlencoded.

3.3. PRG Flow

As we're accepting user input through the browser feedback form, we must implement the POST/REDIRECT/GET (PRG) submission workflow to avoid duplicate submissions.

First, let's implement the POST endpoint /web/feedback that'll act as the action handler for the feedback form:

@PostMapping(
  path = "/web/feedback",
  consumes = {MediaType.APPLICATION_FORM_URLENCODED_VALUE})
public String handleBrowserSubmissions(Feedback feedback) throws Exception {
    // Save feedback data
    return "redirect:/feedback/success";
}

Next, we can implement the redirect endpoint /feedback/success that serves a GET request:

@GetMapping("/feedback/success")
public ResponseEntity<String> getSuccess() {
    return new ResponseEntity<String>("Thank you for submitting feedback.", HttpStatus.OK);
}

To validate the functionality of form submission workflow in a browser, let's visit localhost:8080/feedback:

Finally, we can also inspect that form data is being sent in the URL encoded form:

emailId=abc%40example.com&comment=Sample+Feedback

4. Non-Browser Requests

At times, we might not have a browser-based HTTP client. Instead, our client could be a utility such as cURL or Postman. In such a case, we don't need the HTML web form. Instead, we can implement a /feedback endpoint that serves the POST request:

@PostMapping(
  path = "/feedback",
  consumes = {MediaType.APPLICATION_FORM_URLENCODED_VALUE})
public ResponseEntity<String> handleNonBrowserSubmissions(@RequestBody Feedback feedback) throws Exception {
    // Save feedback data
    return new ResponseEntity<String>("Thank you for submitting feedback", HttpStatus.OK);
}

In the absence of the HTML form in our data flow, we don't necessarily need to implement the PRG pattern. However, we must specify that the resource accepts APPLICATION_FORM_URLENCODED_VALUE media type.

Finally, we can test it with a cURL request:

curl -X POST \
  http://localhost:8080/feedback \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'emailId=abc%40example.com&comment=Sample%20Feedback'

4.1. FormHttpMessageConverter Basics

An HTTP request that sends application/x-www-form-urlencoded data must specify this in the Content-Type header. Internally, Spring uses the FormHttpMessageConverter class to read this data and bind it with the method parameter.

In cases where our method parameter is of a type MultiValueMap, we can use either the @RequestParam or @RequestBody annotation to bind it appropriately with the body of the HTTP request. That's because the Servlet API combines the query parameters and form data into a single map called parameters, and that includes automatic parsing of the request body:

@PostMapping(
  path = "/feedback",
  consumes = {MediaType.APPLICATION_FORM_URLENCODED_VALUE})
public ResponseEntity<String> handleNonBrowserSubmissions(
  @RequestParam MultiValueMap<String,String> paramMap) throws Exception {
    // Save feedback data
    return new ResponseEntity<String>("Thank you for submitting feedback", HttpStatus.OK);
}

However, for a method parameter of type other than MultiValueMap, such as our Feedback domain object, we must use only the @RequestBody annotation.

5. Conclusion

In this tutorial, we briefly learned about the encoding of form data in web forms. We also explored how to handle URL encoded data for browser and non-browser HTTP requests by implementing a feedback form in a Spring Boot web app.

As always, the complete source code for the tutorial is available over on GitHub.

How to Implement a Quarkus Extension


1. Overview

Quarkus is a framework composed of a core and a set of extensions. The core is based on Context and Dependency Injection (CDI) and extensions are usually meant to integrate a third-party framework by exposing their primary components as CDI beans.

In this tutorial, we'll focus on how to write a Quarkus extension assuming a basic understanding of Quarkus.

2. What's a Quarkus Extension

A Quarkus extension is simply a module that can run on top of a Quarkus application. The Quarkus application itself is a core module with a set of other extensions.

The most common use case for such an extension is to get a third-party framework running on top of a Quarkus application.

3. Running Liquibase in a Plain Java Application

Let's try and implement an extension for integrating Liquibase, a tool for database change management.

But before we dive in, we first need to show how to run a Liquibase migration from a Java main method. This will hugely facilitate implementing the extension.

The entry point for the Liquibase framework is the Liquibase API. To use this, we need a changelog file, a ClassLoader for accessing this file, and a Connection to the underlying database:

Connection c = DriverManager.getConnection("jdbc:h2:mem:testdb", "user", "password");
ResourceAccessor resourceAccessor = new ClassLoaderResourceAccessor();
String changeLogFile = "db/liquibase-changelog-master.xml";
Liquibase liquibase = new Liquibase(changeLogFile, resourceAccessor, new JdbcConnection(c));

Having this instance, we simply call the update() method which updates the database to match the changelog file.

liquibase.update(new Contexts());

The goal is to expose Liquibase as a Quarkus extension. That is, providing a database configuration and changelog file through Quarkus Configuration and then producing the Liquibase API as a CDI bean. This provides a means for recording migration invocation for later execution.

4. How to Write a Quarkus Extension

Technically speaking, a Quarkus extension is a Maven multi-module project composed of two modules. The first is a runtime module where we implement requirements. The second is a deployment module for processing configuration and generating the runtime code.

So, let's start by creating a Maven multi-module project called quarkus-liquibase-parent that contains two submodules, quarkus-liquibase, and quarkus-liquibase-deployment:

<modules>
    <module>quarkus-liquibase</module>
    <module>quarkus-liquibase-deployment</module>
</modules>

While the deployment module has the deployment suffix, the runtime module has no suffix. The runtime one is what a Quarkus application depends on.

5. Implementing the Runtime Module

In the runtime module, we'll implement:

  • a configuration class for capturing the Liquibase changelog file
  • a CDI producer for exposing the Liquibase API
  • and a recorder that acts as a proxy for recording invocation calls

5.1. Maven Dependencies and Plugins

The runtime module will depend on the quarkus-core module and eventually the runtime modules of the needed extensions. Here, we need the quarkus-agroal dependency as our extension needs a Datasource. We'll include the Liquibase library here, too:

<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-core</artifactId>
    <version>${quarkus.version}</version>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-agroal</artifactId>
    <version>${quarkus.version}</version>
</dependency>
<dependency>
    <groupId>org.liquibase</groupId>
    <artifactId>liquibase-core</artifactId>
    <version>3.8.1</version>
</dependency>

Also, we may need to add the quarkus-bootstrap-maven-plugin. This plugin automatically generates the Quarkus extension descriptor by calling the extension-descriptor goal.

Or, we can omit this plugin and generate the descriptor manually.

Either way, we can find the extension descriptor under META-INF/quarkus-extension.properties:

deployment-artifact=com.baeldung.quarkus.liquibase\:quarkus-liquibase-deployment\:1.0-SNAPSHOT

5.2. Exposing the Configuration

To provide the changelog file, we need to implement a configuration class:

@ConfigRoot(name = "liquibase", phase = ConfigPhase.BUILD_AND_RUN_TIME_FIXED)
public final class LiquibaseConfig {
    @ConfigItem
    public String changeLog;
}

We annotate the class by @ConfigRoot and the properties by @ConfigItem. So, the changeLog field, which is the camel case form of the change-log, will be provided through the quarkus.liquibase.change-log key in the application.properties file, located in a Quarkus application classpath:

quarkus.liquibase.change-log=db/liquibase-changelog-master.xml

We can also note the ConfigRoot.phase value, which instructs when to resolve the change-log key. In this case, BUILD_AND_RUN_TIME_FIXED means the key is read at deployment time and is available to the application at runtime.

5.3. Exposing the Liquibase API as a CDI Bean

We've seen above how to run a Liquibase migration from the main method.

Now, we'll reproduce the same code but as a CDI bean, and we'll use a CDI producer for that purpose:

@Produces
public Liquibase produceLiquibase() throws Exception {
    ClassLoader classLoader = Thread.currentThread().getContextClassLoader();
    ResourceAccessor resourceAccessor = new ClassLoaderResourceAccessor(classLoader);
    DatabaseConnection jdbcConnection = new JdbcConnection(dataSource.getConnection());
    Liquibase liquibase = new Liquibase(liquibaseConfig.changeLog, resourceAccessor, jdbcConnection);
    return liquibase;
}
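
The producer method above reads a dataSource and a liquibaseConfig field that aren't shown here. A minimal sketch of the enclosing producer bean, assuming an injected Agroal datasource and a setter that the recorder will call later:

@ApplicationScoped
public class LiquibaseProducer {

    @Inject
    AgroalDataSource dataSource;

    private LiquibaseConfig liquibaseConfig;

    @Produces
    @ApplicationScoped
    public Liquibase produceLiquibase() throws Exception {
        // ... as shown above
    }

    public void setLiquibaseConfig(LiquibaseConfig liquibaseConfig) {
        this.liquibaseConfig = liquibaseConfig;
    }
}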

5.4. Recording Bytecode

In this step, we'll write a recorder class that acts as a proxy for recording bytecode and setting up the runtime logic:

@Recorder
public class LiquibaseRecorder {

    public BeanContainerListener setLiquibaseConfig(LiquibaseConfig liquibaseConfig) {
        return beanContainer -> {
            LiquibaseProducer producer = beanContainer.instance(LiquibaseProducer.class);
            producer.setLiquibaseConfig(liquibaseConfig);
        };
    }

    public void migrate(BeanContainer container) throws LiquibaseException {
        Liquibase liquibase = container.instance(Liquibase.class);
        liquibase.update(new Contexts());
    }

}

Here, we have to record two invocations. setLiquibaseConfig for setting configuration and migrate for executing the migration. Next, we'll look at how these recorder methods are called by the deployment build step processors which we'll implement in the deployment module.

Note that when we invoke these recorder methods at build time, instructions are not executed but recorded for later execution at startup time.

6. Implementing the Deployment Module

The central components in a Quarkus extension are the Build Step Processors. They are methods annotated with @BuildStep that generate bytecode through recorders, and they are executed at build time through the build goal of the quarkus-maven-plugin configured in a Quarkus application.

@BuildSteps are ordered thanks to BuildItems. They consume build items generated by earlier build steps and produce build items for later build steps.

The code generated by all the ordered build steps found in the application's deployment modules is actually the runtime code.

6.1. Maven Dependencies

The deployment module should depend on the corresponding runtime module and eventually on the deployment modules of the needed extensions:

<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-core-deployment</artifactId>
    <version>${quarkus.version}</version>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-arc-deployment</artifactId>
    <version>${quarkus.version}</version>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-agroal-deployment</artifactId>
    <version>${quarkus.version}</version>
</dependency>

<dependency>
    <groupId>com.baeldung.quarkus.liquibase</groupId>
    <artifactId>quarkus-liquibase</artifactId>
    <version>${project.version}</version>
</dependency>

The versions of these deployment dependencies are the same as for the runtime module.

6.2. Implementing Build Step Processors

Now, let's implement two build step processors for recording bytecode. The first build step processor is the build() method which will record bytecode for execution in the static init method. We configure this through the STATIC_INIT value:

@Record(ExecutionTime.STATIC_INIT)
@BuildStep
void build(BuildProducer<AdditionalBeanBuildItem> additionalBeanProducer,
  BuildProducer<FeatureBuildItem> featureProducer,
  LiquibaseRecorder recorder,
  BuildProducer<BeanContainerListenerBuildItem> containerListenerProducer,
  DataSourceInitializedBuildItem dataSourceInitializedBuildItem) {

    featureProducer.produce(new FeatureBuildItem("liquibase"));

    AdditionalBeanBuildItem beanBuildItem = AdditionalBeanBuildItem.unremovableOf(LiquibaseProducer.class);
    additionalBeanProducer.produce(beanBuildItem);

    containerListenerProducer.produce(
      new BeanContainerListenerBuildItem(recorder.setLiquibaseConfig(liquibaseConfig)));
}

First, we create a FeatureBuildItem to mark the type or the name of the extension. Then, we create an AdditionalBeanBuildItem so that the LiquibaseProducer bean will be available for the Quarkus container.

Finally, we create a BeanContainerListenerBuildItem in order to fire the BeanContainerListener after the Quarkus BeanContainer startup. Here, in the listener, we pass the configuration to the Liquibase bean.

The processMigration() method, in turn, will record the invocation for execution in the main method, as it's configured with the RUNTIME_INIT value for recording:

@Record(ExecutionTime.RUNTIME_INIT)
@BuildStep
void processMigration(LiquibaseRecorder recorder, 
  BeanContainerBuildItem beanContainer) throws LiquibaseException {
    recorder.migrate(beanContainer.getValue());
}

Here, in this processor, we just called the migrate() recorder method, which in turn records the update() Liquibase method for later execution.

7. Testing the Liquibase Extension

To test our extension, we'll first start by creating a Quarkus application using the quarkus-maven-plugin:

mvn io.quarkus:quarkus-maven-plugin:1.0.0.CR1:create \
  -DprojectGroupId=org.baeldung.quarkus.app \
  -DprojectArtifactId=quarkus-app

Next, we'll add our extension as a dependency in addition to the Quarkus JDBC extension corresponding to our underlying database:

<dependency>
    <groupId>com.baeldung.quarkus.liquibase</groupId>
    <artifactId>quarkus-liquibase</artifactId>
    <version>1.0-SNAPSHOT</version>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-jdbc-h2</artifactId>
    <version>1.0.0.CR1</version>
</dependency>

Next, we'll need to have the quarkus-maven-plugin in our pom file:

<plugin>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-maven-plugin</artifactId>
    <version>${quarkus.version}</version>
    <executions>
        <execution>
            <goals>
                <goal>build</goal>
            </goals>
        </execution>
    </executions>
</plugin>

This is especially useful for running the application using the dev goal or building an executable using the build goal.

And next, we'll provide data source configuration through the application.properties file located in src/main/resources:

quarkus.datasource.url=jdbc:h2:mem:testdb
quarkus.datasource.driver=org.h2.Driver
quarkus.datasource.username=user
quarkus.datasource.password=password

Next, we'll provide the changelog configuration for our changelog file:

quarkus.liquibase.change-log=db/liquibase-changelog-master.xml

Finally, we can start the application either in dev mode:

mvn compile quarkus:dev

Or in production mode:

mvn clean package
java -jar target/quarkus-app-1.0-SNAPSHOT-runner.jar

8. Conclusion

In this article, we implemented a Quarkus extension. As an example, we have showcased how to get Liquibase running on top of a Quarkus application.

The full code source is available over on GitHub.
