Difference Between JVM, JRE, and JDK

1. Overview

In this article, we’ll discuss the differences between the JVM, JRE, and JDK by considering their components and uses.

2. JVM

Java Virtual Machine (JVM) is an implementation of a virtual machine which executes a Java program.

The JVM first loads and verifies the bytecode produced by the Java compiler, then stores the class information in its memory areas, and finally executes the bytecode.

It is an abstract computing machine with its own instruction set and manipulates various memory areas at runtime.

Components of the JVM are:

  • Class Loaders
  • Run-Time Data Areas
  • Execution Engine

2.1. Class Loaders

The initial tasks of the JVM include loading, verifying, and linking the bytecode. Class loaders handle these tasks.

We have a detailed article specifically on class loaders.

2.2. Run-Time Data Areas

The JVM defines various memory areas to execute a Java program. These are used during runtime and are known as run-time data areas. Some of these areas are created on the JVM start-up and destroyed when the JVM exits while some are created when a thread is created and destroyed when a thread exits.

Let’s consider these areas one by one:

Method Area

Basically, the method area is analogous to the storage area for compiled code. It stores structures such as the run-time constant pool, field and method data, the code for methods and constructors, as well as fully qualified class names. The JVM stores these structures for each and every class.

In the HotSpot JVM before Java 8, the method area was implemented as the permanent generation space (PermGen); since Java 8, it’s implemented as Metaspace. The JVM creates the method area when it starts up. The memory for this area does not need to be contiguous, and all JVM threads share it.

Heap Area

The JVM allocates the memory for all the class instances and arrays from this area.

The Garbage Collector (GC) reclaims heap memory from unreachable objects. Broadly, GC work falls into minor collections, which clean the Young Generation, and major (full) collections, which also cover the Old Generation.

The heap memory has three portions:

  • Eden Space – it’s a part of Young Generation space. When we create an object, the JVM allocates memory from this space
  • Survivor Space – it’s also a part of Young Generation space. Survivor space contains existing objects which have survived the minor GC phases of GC
  • Tenured Space – this is also known as the Old Generation space. It holds long-surviving objects. Basically, an age threshold is set for Young Generation objects, and when they survive enough GC cycles to reach it, they are promoted to the tenured space.

The JVM creates the heap area as soon as it starts up. All JVM threads share this area, and its memory does not need to be contiguous.

Stack area

The stack area stores data as frames; each frame holds local variables, partial results, and the state of nested method calls. The JVM creates a stack whenever it creates a new thread, so this area is private to each thread.

Each entry in the stack is called a stack frame or activation record. Each frame contains three parts:

  • Local Variable Array – contains all the local variables and parameters of the method
  • Operand Stack – used as a workspace for storing intermediate calculation results
  • Frame Data – used to store partial results, return values for methods, and reference to the Exception table which provides corresponding catch block information in case of exceptions

The memory for the JVM stack does not need to be contiguous.
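
As a small illustration of frames piling up on a thread’s stack, consider this self-contained sketch (StackDepth is a hypothetical class, not part of the original material):

public class StackDepth {

    private static int depth = 0;

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            // thrown once the thread's stack area has no room for another frame
            System.out.println("Stack exhausted at depth " + depth);
        }
    }

    private static void recurse() {
        depth++; // each call pushes a new frame holding its own locals
        recurse();
    }
}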

PC Registers

Each JVM thread has a separate PC register that stores the address of the currently executing instruction. If the currently executing instruction belongs to a native method, this value is undefined.

Native method stacks

Native methods are those which are written in languages other than Java.

The JVM provides capabilities to call these native methods. Native method stacks are also known as “C stacks”; they store native method information. Whenever native methods are compiled into machine code, they usually use a native method stack to keep track of their state.

The JVM creates these stacks whenever it creates a new thread. And thus JVM threads don’t share this area.

2.3. Execution Engine

The execution engine executes the instructions using information present in the memory areas. It has three parts:

Interpreter

Once class loaders load and verify the bytecode, the interpreter executes it one bytecode instruction at a time. This execution is quite slow, and the interpreter’s disadvantage is that when one method is called multiple times, it has to be interpreted again each time.

However, the JVM uses JIT Compiler to mitigate this disadvantage.

Just-In-Time (JIT) Compiler

JIT compiler compiles the bytecode of the often-called methods into native code at run-time. Hence it is responsible for the optimization of the Java programs.

The JVM automatically monitors which methods are being executed. Once a method is called often enough to be considered hot, it becomes eligible for JIT compilation and is scheduled for compilation into machine code. This compilation happens on a separate JVM thread.

As a result, it doesn’t interrupt the execution of the current program. After compilation into machine code, the method runs faster.

Garbage Collector

Java takes care of memory management using Garbage Collection. It’s a process of looking at heap memory, identifying which objects are in use and which are not, and finally deleting unused objects.

GC runs on daemon threads. We can request a collection explicitly by calling the System.gc() method; however, it won’t necessarily execute immediately, as the JVM decides when to invoke GC.
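
For example, the following sketch merely requests a collection; whether and when it actually runs is up to the JVM:

Runtime runtime = Runtime.getRuntime();
long usedBefore = runtime.totalMemory() - runtime.freeMemory();

System.gc(); // a hint, not a command; the JVM may ignore or defer it

long usedAfter = runtime.totalMemory() - runtime.freeMemory();
System.out.println("Approximate bytes reclaimed: " + (usedBefore - usedAfter));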

2.4. Java Native Interface

It acts as an interface between the Java code and the native (C/C++) libraries.

There are situations in which Java alone doesn’t meet the needs for your application, for example, implementing a platform-dependent feature.

In those cases, we can use JNI to enable the code running in the JVM to call native libraries. Conversely, it enables native code to call methods on objects running in the JVM.
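
To give an idea of the Java side of JNI, here’s a minimal sketch; the library name adder is hypothetical, and the matching C implementation is omitted:

public class NativeAdder {

    static {
        // loads libadder.so / adder.dll from java.library.path
        System.loadLibrary("adder");
    }

    // implemented in native code; a native method has no Java body
    public native int add(int a, int b);
}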

2.5. Native Libraries

These are platform-specific libraries that contain the implementations of native methods.

3. JRE

Java Runtime Environment (JRE) is a bundle of software components used to run Java applications.

Core components of the JRE include:

  • An implementation of a Java Virtual Machine (JVM)
  • Classes required to run the Java programs
  • Property Files

We discussed the JVM in the above section. Here we will focus on the core classes and support files.

3.1. Bootstrap Classes

We’ll find bootstrap classes under jre/lib/. This path is also known as the bootstrap classpath. It includes:

  • Runtime classes in rt.jar
  • Internationalization classes in i18n.jar
  • Character conversion classes in charsets.jar
  • Others

Bootstrap ClassLoader loads these classes when the JVM starts up.

3.2. Extension Classes

We can find extension classes in jre/lib/ext/, which acts as a directory for extensions to the Java platform. This path is also known as the extension classpath.

It contains JavaFX runtime libraries in jfxrt.jar and locale data for java.text and java.util packages in localedata.jar. Users can also add custom jars into this directory.

3.3. Property Settings

The Java platform uses these property settings to maintain its configuration. Depending on their usage, they are located in different folders inside /jre/lib/. These include:

  • Calendar configurations in the calendar.properties
  • Logging configurations in logging.properties
  • Networking configurations in net.properties
  • Deployment properties in /jre/lib/deploy/
  • Management properties in /jre/lib/management/

3.4. Other files

Apart from the above-mentioned files and classes, the JRE also contains files for other concerns:

  • Security management at jre/lib/security
  • The directory for placing support classes for applets at jre/lib/applet
  • Font related files at jre/lib/fonts and others

4. JDK

Java Development Kit (JDK) provides the environment and tools for developing, compiling, debugging, and executing a Java program.

Core components of JDK include:

  • JRE
  • Development Tools

We discussed the JRE in the above section.

Now, we’ll focus on various development tools. Let’s categorize these tools based on their usage:

4.1. Basic Tools

These tools lay the foundation of the JDK and are used to create and build Java applications. Among these tools, we can find utilities for compiling, debugging, archiving, generating Javadocs, etc.

They include:

  • javac – reads class and interface definitions and compiles them into class files
  • java – launches the Java application
  • javadoc – generates HTML pages of API documentation from Java source files
  • apt – finds and executes annotation processors based on the annotations present in the set of specified source files
  • appletviewer – enables us to run Java applets without a web browser
  • jar – packages Java applets or applications into a single archive
  • jdb – a command-line debugging tool used to find and fix bugs in Java applications
  • javah – produces C header and source files from a Java class
  • javap – disassembles the class files and displays information about fields, constructors, and methods present in a class file
  • extcheck – detects version conflicts between a target Java Archive (JAR) file and currently installed extension JAR files

4.2. Security Tools

These include key and certificate management tools that are used to manipulate Java Keystores.

A Java Keystore is a container for authorization certificates or public key certificates. Consequently, it is often used by Java-based applications for encryption, authentication, and serving over HTTPS.

Also, they help to set the security policies on our system and create applications which can work within the scope of these policies in the production environment. These include:

  • keytool – helps in managing keystore entries, namely, cryptographic keys and certificates
  • jarsigner – generates digitally signed JAR files by using keystore information
  • policytool – enables us to manage the external policy configuration files that define the installation’s security policy

Some security tools also help in managing Kerberos tickets.

Kerberos is a network authentication protocol.

It works on the basis of tickets to allow nodes communicating over a non-secure network to prove their identity to one another in a secure manner:

  • kinit – used to obtain and cache Kerberos ticket-granting tickets
  • ktab – manages principal names and key pairs in the key table
  • klist – displays entries in the local credentials cache and key table

4.3. Internationalization Tool

Internationalization is the process of designing an application so that it can be adapted to various languages and regions without engineering changes.

For this purpose, the JDK brings native2ascii. This tool converts files containing characters in the JRE’s supported encodings into files encoded in ASCII, using Unicode escapes for non-ASCII characters.

4.4. Remote Method Invocation (RMI) Tools

RMI tools enable remote communication between Java applications, thus enabling the development of distributed applications.

RMI enables an object running in one JVM to invoke methods on an object running in another JVM. These tools include:

  • rmic – generates stub, skeleton, and tie classes for remote objects using the Java Remote Method Protocol (JRMP) or Internet Inter-Orb Protocol (IIOP)
  • rmiregistry – creates and starts remote object registry
  • rmid – starts the activation system daemon. This allows objects to be registered and activated in a Java Virtual Machine
  • serialver – returns serial version UID for specified classes

4.5. Java IDL and RMI-IIOP Tools

Java Interface Definition Language (IDL) adds Common Object Request Broker Architecture (CORBA) capability to the Java platform.

These tools enable distributed Java web applications to invoke operations on remote network services using industry standard Object Management Group (OMG) – IDL.

Likewise, we could use Internet InterORB Protocol (IIOP).

RMI-IIOP, i.e., RMI over IIOP, enables programming of CORBA servers and applications via the RMI API, thus enabling a connection between two applications written in any CORBA-compliant language via the Internet InterORB Protocol (IIOP).

These tools include:

  • tnameserv – transient Naming Service which provides a tree-structured directory for object references
  • idlj – the IDL-to-Java Compiler for generating the Java bindings for a specified IDL file
  • orbd – enables clients to transparently locate and invoke persistent objects on a server in a CORBA environment
  • servertool – provides a command-line interface to register or unregister a persistent server with the ORB Daemon (orbd), start and shut down a persistent server registered with the ORB Daemon, and so on

4.6. Java Deployment Tools

These tools help in deploying Java applications and applets on the web. They include:

  • pack200 – transforms a JAR file into a pack200 file using the Java gzip compressor
  • unpack200 – transforms pack200 file into a JAR file

4.7. Java Plug-in Tool

The JDK provides us with htmlconverter, which is used in conjunction with the Java Plug-in.

On the one hand, Java Plug-in establishes a connection between popular browsers and the Java platform. As a result of this connection, applets on the website can run within a browser.

On the other hand, htmlconverter is a utility for converting an HTML page containing applets to a format for Java Plug-in.

4.8. Java Web Start Tool

JDK brings javaws. We can use it in conjunction with the Java Web Start.

This tool allows us to download and launch Java applications with a single click from the browser. Hence, there is no need to run any installation process.

4.9. Monitoring and Management Tools

These are great tools that we can use to monitor JVM performance and resource consumption. Here are a few of them:

  • jconsole – provides a graphical console that lets you monitor and manage Java applications
  • jps – lists the instrumented JVMs on the target system
  • jstat – monitors JVM statistics
  • jstatd – monitors creation and termination of instrumented JVMs

4.10. Troubleshooting Tools

These are experimental tools that we can leverage for troubleshooting tasks:

  • jinfo – generates configuration information for a specified Java process
  • jmap – prints shared object memory maps or heap memory details of a specified process
  • jsadebugd – attaches to a Java process and acts as a debug server
  • jstack – prints Java stack traces of Java threads for a given Java process

5. Conclusion

In this article, we identified that the basic difference between JVM, JRE, and JDK lies in their usage.

First, we described how the JVM is an abstract computing machine that actually executes the Java bytecode.

Then, we explained that to simply run Java applications, we use the JRE.

And finally, we saw that to develop Java applications, we use the JDK.

We also took some time to dig into the tools and fundamental concepts of these components.


Mockito ArgumentMatchers

1. Overview

This tutorial shows how to use the ArgumentMatcher and how it differs from the ArgumentCaptor.

For an introduction to the Mockito framework, please refer to this article.

2. Maven Dependencies

We need to add a single artifact:

<dependency>
    <groupId>org.mockito</groupId> 
    <artifactId>mockito-core</artifactId>
    <version>2.18.3</version> 
    <scope>test</scope>
</dependency>

The latest version of Mockito can be found on Maven Central.

3. ArgumentMatchers

Configuring a mocked method in various ways is possible. One of them is to return fixed values:

doReturn("Flower").when(flowerService).analyze("poppy");

In the above example, the String “Flower” is returned only when the analyze method receives the String “poppy”.

But maybe we need to respond to a wider range of values, or to values that aren’t known beforehand.

In all these scenarios, we can configure our mocked methods with argument matchers:

when(flowerService.analyze(anyString())).thenReturn("Flower");

Now, because of the anyString argument matcher, the result will be the same no matter what value we pass to analyze. Argument matchers allow flexible verification and stubbing.

If a method has more than one argument, we can’t use argument matchers for only some of the arguments: Mockito requires us to provide all arguments either by matchers or by exact values.

The next example shows an incorrect approach:

abstract class FlowerService {
    public abstract boolean isABigFlower(String name, int petals);
}

FlowerService mock = mock(FlowerService.class);

when(mock.isABigFlower("poppy", anyInt())).thenReturn(true);

To fix it and keep the String name “poppy” as desired, we’ll use the eq matcher:

when(mock.isABigFlower(eq("poppy"), anyInt())).thenReturn(true);

There are two more points to take care of when matchers are used:

  • We can’t use them as a return value; an exact value is required when stubbing calls
  • We can’t use argument matchers outside of verification or stubbing

In the last case, Mockito will detect the misplaced argument and throw an InvalidUseOfMatchersException.

A bad example could be:

String orMatcher = or(eq("poppy"), endsWith("y"));
verify(mock).analyze(orMatcher);

The way to implement the above code is:

verify(mock).analyze(or(eq("poppy"), endsWith("y")));

Mockito also provides AdditionalMatchers to implement common logical operations (‘not’, ‘and’, ‘or’) on argument matchers that match both primitive and non-primitive types, as the or(…) call in the example above demonstrates.

4. Custom Argument Matcher

Creating a custom matcher lets us select the best possible approach for a given scenario and produce high-quality tests that are clean and maintainable.

For instance, we could have a MessageController that delivers messages. It receives a MessageDTO and, from that, creates a Message to be delivered by the MessageService.

Our verification will be simple: verify that we called the MessageService exactly once with any Message:

verify(messageService, times(1)).deliverMessage(any(Message.class));

Because the Message is constructed inside the method under test, we’re forced to use any as the matcher.

This approach doesn’t let us validate the data inside the Message, which can be different compared to the data inside MessageDTO.

For that reason, we’re going to implement a custom argument matcher:

public class MessageMatcher implements ArgumentMatcher<Message> {

    private Message left;

    // constructors

    @Override
    public boolean matches(Message right) {
        return left.getFrom().equals(right.getFrom()) &&
          left.getTo().equals(right.getTo()) &&
          left.getText().equals(right.getText());
    }
}

To use our matcher, we need to modify our test and replace any with argThat:

verify(messageService, times(1)).deliverMessage(argThat(new MessageMatcher(message)));

Now we know our Message instance will have the same data as our MessageDTO.
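
Since ArgumentMatcher is a functional interface in Mockito 2, we can also express a one-off matcher inline with a lambda instead of a dedicated class:

verify(messageService, times(1)).deliverMessage(argThat(m ->
  m.getFrom().equals(message.getFrom())
    && m.getTo().equals(message.getTo())
    && m.getText().equals(message.getText())));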

5. Custom Argument Matcher vs. ArgumentCaptor

Both techniques, custom argument matchers and ArgumentCaptor, can be used to make sure certain arguments were passed to mocks.

However, ArgumentCaptor may be a better fit if we need to assert on argument values to complete the verification, or if our custom argument matcher isn’t likely to be reused.
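
For comparison, here’s a minimal sketch of the ArgumentCaptor approach for the same verification:

ArgumentCaptor<Message> captor = ArgumentCaptor.forClass(Message.class);
verify(messageService, times(1)).deliverMessage(captor.capture());

Message delivered = captor.getValue();
assertEquals(message.getFrom(), delivered.getFrom());
assertEquals(message.getText(), delivered.getText());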

Custom argument matchers via ArgumentMatcher are usually better for stubbing.

6. Conclusion

In this article, we’ve explored a feature of Mockito, the ArgumentMatcher, and how it differs from ArgumentCaptor.

As always, the full source code of the examples is available over on GitHub.

Spring Core Annotations

1. Overview

We can leverage the capabilities of the Spring DI engine using the annotations in the org.springframework.beans.factory.annotation and org.springframework.context.annotation packages.

We often call these “Spring core annotations” and we’ll review them in this tutorial.

2. DI-Related Annotations

2.1. @Autowired

We can use @Autowired to mark a dependency that Spring is going to resolve and inject. We can use this annotation with constructor, setter, or field injection.

Constructor injection:

class Car {
    Engine engine;

    @Autowired
    Car(Engine engine) {
        this.engine = engine;
    }
}

Setter injection:

class Car {
    Engine engine;

    @Autowired
    void setEngine(Engine engine) {
        this.engine = engine;
    }
}

Field injection:

class Car {
    @Autowired
    Engine engine;
}

@Autowired has a boolean argument called required with a default value of true. It tunes Spring’s behavior when it doesn’t find a suitable bean to wire. When true, an exception is thrown, otherwise, nothing is wired.
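
For example, with required set to false, a missing bean simply leaves the field unset (Radio here is a hypothetical optional dependency):

class Car {

    @Autowired(required = false)
    Radio radio; // stays null if no Radio bean is defined
}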

Note that if we use constructor injection, all constructor arguments are mandatory.

Starting with version 4.3, we don’t need to annotate constructors with @Autowired explicitly unless we declare at least two constructors.

For more details visit our articles about @Autowired and constructor injection.

2.2. @Bean

@Bean marks a factory method which instantiates a Spring bean:

@Bean
Engine engine() {
    return new Engine();
}

Spring calls these methods when a new instance of the return type is required.

The resulting bean has the same name as the factory method. If we want to name it differently, we can do so with the name or the value arguments of this annotation (the argument value is an alias for the argument name):

@Bean("engine")
Engine getEngine() {
    return new Engine();
}

Note that all methods annotated with @Bean must be in @Configuration classes.

2.3. @Qualifier

We use @Qualifier along with @Autowired to provide the bean id or bean name we want to use in ambiguous situations.

For example, the following two beans implement the same interface:

class Bike implements Vehicle {}

class Car implements Vehicle {}

If Spring needs to inject a Vehicle bean, it ends up with multiple matching definitions. In such cases, we can provide a bean’s name explicitly using the @Qualifier annotation.

Using constructor injection:

@Autowired
Biker(@Qualifier("bike") Vehicle vehicle) {
    this.vehicle = vehicle;
}

Using setter injection:

@Autowired
void setVehicle(@Qualifier("bike") Vehicle vehicle) {
    this.vehicle = vehicle;
}

Alternatively:

@Autowired
@Qualifier("bike")
void setVehicle(Vehicle vehicle) {
    this.vehicle = vehicle;
}

Using field injection:

@Autowired
@Qualifier("bike")
Vehicle vehicle;

For a more detailed description, please read this article.

2.4. @Required

We use @Required on setter methods to mark dependencies that we want to populate through XML:

@Required
void setColor(String color) {
    this.color = color;
}

And the corresponding XML configuration:

<bean class="com.baeldung.annotations.Bike">
    <property name="color" value="green" />
</bean>

Otherwise, a BeanInitializationException will be thrown.

2.5. @Value

We can use @Value for injecting property values into beans. It’s compatible with constructor, setter, and field injection.

Constructor injection:

Engine(@Value("8") int cylinderCount) {
    this.cylinderCount = cylinderCount;
}

Setter injection:

@Autowired
void setCylinderCount(@Value("8") int cylinderCount) {
    this.cylinderCount = cylinderCount;
}

Alternatively:

@Value("8")
void setCylinderCount(int cylinderCount) {
    this.cylinderCount = cylinderCount;
}

Field injection:

@Value("8")
int cylinderCount;

Of course, injecting static values isn’t useful. Therefore, we can use placeholder strings in @Value to wire values defined in external sources, for example, in .properties or .yaml files.

Let’s assume the following .properties file:

engine.fuelType=petrol

We can inject the value of engine.fuelType with the following:

@Value("${engine.fuelType}")
String fuelType;

We can use @Value even with SpEL. More advanced examples can be found in our article about @Value.
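
For instance, a SpEL expression can pull a value from the system properties:

@Value("#{systemProperties['user.name']}")
String systemUser;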

2.6. @DependsOn

We can use this annotation to make Spring initialize other beans before the annotated one. Usually, this behavior is automatic, based on the explicit dependencies between beans.

We only need this annotation when the dependencies are implicit, for example, JDBC driver loading or static variable initialization.

We can use @DependsOn on the dependent class specifying the names of the dependency beans. The annotation’s value argument needs an array containing the dependency bean names:

@DependsOn("engine")
class Car implements Vehicle {}

Alternatively, if we define a bean with the @Bean annotation, the factory method should be annotated with @DependsOn:

@Bean
@DependsOn("fuel")
Engine engine() {
    return new Engine();
}

2.7. @Lazy

We use @Lazy when we want to initialize our bean lazily. By default, Spring creates all singleton beans eagerly at the startup/bootstrapping of the application context.

However, there are cases when we need to create a bean when we request it, not at application startup.

This annotation behaves differently depending on where we exactly place it. We can put it on:

  • a @Bean annotated bean factory method, to delay the method call (hence the bean creation)
  • a @Configuration class and all contained @Bean methods will be affected
  • a @Component class, which is not a @Configuration class, this bean will be initialized lazily
  • an @Autowired constructor, setter, or field, to load the dependency itself lazily (via proxy)

This annotation has an argument named value with a default value of true. We can use it to override the default behavior.

For example, we can mark beans for eager loading when the global setting is lazy, or configure specific @Bean methods for eager loading in a @Configuration class marked with @Lazy:

@Configuration
@Lazy
class VehicleFactoryConfig {

    @Bean
    @Lazy(false)
    Engine engine() {
        return new Engine();
    }
}

For further reading, please visit this article.

2.8. @Lookup

A method annotated with @Lookup tells Spring to return an instance of the method’s return type when we invoke it.
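
For example, the following sketch returns a fresh prototype-scoped Engine on every call; Spring overrides the method at runtime, so the body below is never executed:

@Component
class VehicleFactory {

    @Lookup
    Engine engine() {
        return null; // replaced by Spring with a lookup of the Engine bean
    }
}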

Detailed information about the annotation can be found in this article.

2.9. @Primary

Sometimes we need to define multiple beans of the same type. In these cases, the injection will be unsuccessful because Spring has no clue which bean we need.

We already saw an option to deal with this scenario: marking all the wiring points with @Qualifier and specifying the name of the required bean.

However, most of the time we need a specific bean and rarely the others. We can use @Primary to simplify this case: if we mark the most frequently used bean with @Primary, it will be chosen at unqualified injection points:

@Component
@Primary
class Car implements Vehicle {}

@Component
class Bike implements Vehicle {}

@Component
class Driver {
    @Autowired
    Vehicle vehicle;
}

@Component
class Biker {
    @Autowired
    @Qualifier("bike")
    Vehicle vehicle;
}

In the previous example Car is the primary vehicle. Therefore, in the Driver class, Spring injects a Car bean. Of course, in the Biker bean, the value of the field vehicle will be a Bike object because it’s qualified.

2.10. @Scope

We use @Scope to define the scope of a @Component class or a @Bean definition. It can be either singleton, prototype, request, session, globalSession or some custom scope.

For example:

@Component
@Scope("prototype")
class Engine {}

3. Context Configuration Annotations

We can configure the application context with the annotations described in this section.

3.1. @Profile

If we want Spring to use a @Component class or a @Bean method only when a specific profile is active, we can mark it with @Profile. We can configure the name of the profile with the value argument of the annotation:

@Component
@Profile("sportDay")
class Bike implements Vehicle {}

You can read more about profiles in this article.

3.2. @Import

We can use specific @Configuration classes without component scanning with this annotation. We can provide those classes with @Import‘s value argument:

@Import(VehiclePartSupplier.class)
class VehicleFactoryConfig {}

3.3. @ImportResource

We can import XML configurations with this annotation. We can specify the XML file locations with the locations argument, or with its alias, the value argument:

@Configuration
@ImportResource("classpath:/annotations.xml")
class VehicleFactoryConfig {}

3.4. @PropertySource

With this annotation, we can define property files for application settings:

@Configuration
@PropertySource("classpath:/annotations.properties")
class VehicleFactoryConfig {}

@PropertySource leverages the Java 8 repeating annotations feature, which means we can mark a class with it multiple times:

@Configuration
@PropertySource("classpath:/annotations.properties")
@PropertySource("classpath:/vehicle-factory.properties")
class VehicleFactoryConfig {}

3.5. @PropertySources

We can use this annotation to specify multiple @PropertySource configurations:

@Configuration
@PropertySources({ 
    @PropertySource("classpath:/annotations.properties"),
    @PropertySource("classpath:/vehicle-factory.properties")
})
class VehicleFactoryConfig {}

Note that since Java 8 we can achieve the same with the repeating annotations feature as described above.

4. Conclusion

In this article, we saw an overview of the most common Spring core annotations. We saw how to configure bean wiring and application context, and how to mark classes for component scanning.

As usual, the examples are available over on GitHub.

Java 9 java.lang.Module API

1. Introduction

Following A Guide to Java 9 Modularity, in this article, we’re going to explore the java.lang.Module API that was introduced alongside the Java Platform Module System.

This API provides a way to access a module programmatically, to retrieve specific information from a module, and generally to work with it and its ModuleDescriptor.

2. Reading Module Information

The Module class represents both named and unnamed modules. Named modules have a name and are constructed by the Java Virtual Machine when it creates a module layer, using a graph of modules as a definition.

An unnamed module doesn’t have a name, and there is one for each ClassLoader. All types that aren’t in a named module are members of the unnamed module related to their class loader.

The interesting part of the Module class is that it exposes methods that allow us to retrieve information from the module, like the module name, the module classloader and the packages within the module.

Let’s see how it’s possible to find out if a module is named or unnamed.

2.1. Named or Unnamed

Using the isNamed() method we can identify whether a module is named or not.

Let’s check whether a given class, like HashMap, is part of a named module, and retrieve the module’s name:

Class<HashMap> hashMapClass = HashMap.class;
Module javaBaseModule = hashMapClass.getModule();

assertThat(javaBaseModule.isNamed(), is(true));
assertThat(javaBaseModule.getName(), is("java.base"));

Let’s now define a Person class:

public class Person {
    private String name;

    // constructor, getters and setters
}

In the same way as we did for the HashMap class, we can check if the Person class is part of a named module:

Class<Person> personClass = Person.class;
Module module = personClass.getModule();

assertThat(module.isNamed(), is(false));
assertThat(module.getName(), is(nullValue()));

2.2. Packages

When working with a module, it might be important to know which packages are available within the module.

Let’s see how we can check if a given package, for example, java.lang.annotation, is contained in a given module:

assertTrue(javaBaseModule.getPackages().contains("java.lang.annotation"));
assertFalse(javaBaseModule.getPackages().contains("java.sql"));

2.3. Annotations

In the same way as for packages, it’s possible to retrieve the annotations that are present in the module using the getAnnotations() method.

If there are no annotations present in a named module, the method will return an empty array.

Let’s see how many annotations are present in the java.base module:

assertThat(javaBaseModule.getAnnotations().length, is(0));

When invoked on an unnamed module, the getAnnotations() method will return an empty array.

2.4. ClassLoader

Thanks to the getClassLoader() method available within the Module class, we can retrieve the ClassLoader for a given module:

assertThat(
  module.getClassLoader().getClass().getName(), 
  is("jdk.internal.loader.ClassLoaders$AppClassLoader")
);

2.5. Layer

Another valuable piece of information we can extract from a module is the ModuleLayer, which represents a layer of modules in the Java Virtual Machine.

A module layer informs the JVM about the classes that may be loaded from the modules. In this way, the JVM knows exactly which module each class is a member of.

A ModuleLayer contains information related to its configuration, the parent layer and the set of modules available within the layer.

Let’s see how to retrieve the ModuleLayer of a given module:

ModuleLayer javaBaseModuleLayer = javaBaseModule.getLayer();

Once we have retrieved the ModuleLayer, we can access its information:

assertTrue(javaBaseModuleLayer.configuration().findModule("java.base").isPresent());
assertThat(javaBaseModuleLayer.configuration().modules().size(), is(78));

A special case is the boot layer, created when the Java Virtual Machine is started. The boot layer is the only layer that contains the java.base module.

3. Dealing with ModuleDescriptor

A ModuleDescriptor describes a named module and defines methods to obtain each of its components.

ModuleDescriptor objects are immutable and safe for use by multiple concurrent threads.

Let’s start by looking at how we can retrieve a ModuleDescriptor.

3.1. Retrieving a ModuleDescriptor

Since the ModuleDescriptor is tightly connected to a Module, it’s possible to retrieve it directly from a Module:

ModuleDescriptor moduleDescriptor = javaBaseModule.getDescriptor();

3.2. Creating a ModuleDescriptor

It’s also possible to create a module descriptor using the ModuleDescriptor.Builder class or by reading the binary form of a module declaration, the module-info.class.

Let’s see how we create a module descriptor using the ModuleDescriptor.Builder API:

ModuleDescriptor.Builder moduleBuilder = ModuleDescriptor
  .newModule("baeldung.base");

ModuleDescriptor moduleDescriptor = moduleBuilder.build();

assertThat(moduleDescriptor.name(), is("baeldung.base"));

With this, we created a normal module, but if we want to create an open module or an automatic one, we can use the newOpenModule() or the newAutomaticModule() method, respectively.
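
For instance, building an open module descriptor looks very similar:

ModuleDescriptor openModuleDescriptor = ModuleDescriptor
  .newOpenModule("baeldung.open")
  .build();

assertThat(openModuleDescriptor.isOpen(), is(true));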

3.3. Classifying a Module

A module descriptor describes a normal, open, or automatic module.

Thanks to the methods available within the ModuleDescriptor, it’s possible to identify the type of the module:

ModuleDescriptor moduleDescriptor = javaBaseModule.getDescriptor();

assertFalse(moduleDescriptor.isAutomatic());
assertFalse(moduleDescriptor.isOpen());

3.4. Retrieving Requires

With a module descriptor, it’s possible to retrieve the set of Requires, representing the module dependencies.

This is possible using the requires() method:

Set<Requires> javaBaseRequires = javaBaseModule.getDescriptor().requires();
Set<Requires> javaSqlRequires = javaSqlModule.getDescriptor().requires();

Set<String> javaSqlRequiresNames = javaSqlRequires.stream()
  .map(Requires::name)
  .collect(Collectors.toSet());

assertThat(javaBaseRequires, empty());
assertThat(javaSqlRequires.size(), is(3));
assertThat(
  javaSqlRequiresNames, 
  containsInAnyOrder("java.base", "java.xml", "java.logging")
);

All modules, except java.base, have the java.base module as a dependency.

However, if the module is an automatic module, the set of dependencies will be empty except for the java.base one.

3.5. Retrieving Provides

With the provides() method it’s possible to retrieve the list of services that the module provides:

Set<Provides> javaBaseProvides = javaBaseModule.getDescriptor().provides();
Set<Provides> javaSqlProvides = javaSqlModule.getDescriptor().provides();

Set<String> javaBaseProvidesService = javaBaseProvides.stream()
  .map(Provides::service)
  .collect(Collectors.toSet());

assertThat(
  javaBaseProvidesService, 
  contains("java.nio.file.spi.FileSystemProvider")
);
assertThat(javaSqlProvides, empty());

3.6. Retrieving Exports

Using the exports() method, we can find out whether the module exports packages and, if so, which ones:

Set<Exports> javaBaseExports = javaBaseModule.getDescriptor().exports();
Set<Exports> javaSqlExports = javaSqlModule.getDescriptor().exports();

Set<String> javaSqlExportsSource = javaSqlExports.stream()
  .map(Exports::source)
  .collect(Collectors.toSet());

assertThat(javaBaseExports.size(), is(108));
assertThat(javaSqlExports.size(), is(3));
assertThat(
  javaSqlExportsSource, 
  containsInAnyOrder("java.sql","javax.transaction.xa", "javax.sql")
);

As a special case, if the module is an automatic one, the set of exported packages will be empty.

3.7. Retrieving Uses

With the uses() method, it’s possible to retrieve the set of service dependencies of the module:

Set<String> javaBaseUses = javaBaseModule.getDescriptor().uses();
Set<String> javaSqlUses = javaSqlModule.getDescriptor().uses();

assertThat(javaBaseUses.size(), is(34));
assertThat(javaSqlUses, contains("java.sql.Driver"));

In case the module is an automatic one, the set of dependencies will be empty.

3.8. Retrieving Opens

Whenever we want to retrieve the list of the open packages of a module, we can use the opens() method:

Set<Opens> javaBaseOpens = javaBaseModule.getDescriptor().opens();
Set<Opens> javaSqlOpens = javaSqlModule.getDescriptor().opens();

assertThat(javaBaseOpens, empty());
assertThat(javaSqlOpens, empty());

The set will be empty if the module is an open or an automatic one.

4. Dealing with Modules

Beyond reading information from a module, the Module API also lets us update a module definition.

4.1. Adding Exports

Let’s see how we can update a module, exporting the given package from a given module:

Module updatedModule = module.addExports(
  "com.baeldung.java9.modules", javaSqlModule);

assertTrue(updatedModule.isExported("com.baeldung.java9.modules"));

This can be done only if the caller’s module is the module the code is a member of.

As a side note, there are no effects if the package is already exported by the module or if the module is an open one.

4.2. Adding Reads

When we want to update a module to read a given module, we can use the addReads() method:

Module updatedModule = module.addReads(javaSqlModule);

assertTrue(updatedModule.canRead(javaSqlModule));

This method does nothing if we add the module itself since all modules read themselves.

In the same way, this method does nothing if the module is an unnamed module or this module already reads the other.

4.3. Adding Opens

When we want to update a module that has opened a package to at least the caller module, we can use addOpens() to open the package to another module:

Module updatedModule = module.addOpens(
  "com.baeldung.java9.modules", javaSqlModule);

assertTrue(updatedModule.isOpen("com.baeldung.java9.modules", javaSqlModule));

This method has no effect if the package is already open to the given module.

4.4. Adding Uses

Whenever we want to update a module adding a service dependency, the method addUses() is our choice:

Module updatedModule = module.addUses(Driver.class);

assertTrue(updatedModule.canUse(Driver.class));

This method does nothing when invoked on an unnamed module or an automatic module.

5. Conclusion

In this article, we explored the use of the java.lang.Module API, where we learned how to retrieve information about a module, how to use a ModuleDescriptor to access additional information regarding a module, and how to manipulate it.

As always, all code examples in this article can be found over on GitHub.

Configure a RestTemplate with RestTemplateBuilder

1. Introduction

In this quick tutorial, we’re going to look at how to configure a Spring RestTemplate bean.

Let’s start by discussing the three main configuration types:

  • using the default RestTemplateBuilder
  • using a RestTemplateCustomizer
  • creating our own RestTemplateBuilder

To be able to test this easily, please follow the guide on how to set up a simple Spring Boot application.

2. Configuration Using the Default RestTemplateBuilder

To configure a RestTemplate this way, we need to inject the default RestTemplateBuilder bean provided by Spring Boot into our classes:

private RestTemplate restTemplate;

@Autowired
public HelloController(RestTemplateBuilder builder) {
    this.restTemplate = builder.build();
}

The RestTemplate bean created with this method has its scope limited to the class in which we build it.

3. Configuration Using a RestTemplateCustomizer

With this approach, we can create an application-wide, additive customization.

This is a slightly more complicated approach. For this we need to create a class that implements RestTemplateCustomizer, and define it as a bean:

public class CustomRestTemplateCustomizer implements RestTemplateCustomizer {
    @Override
    public void customize(RestTemplate restTemplate) {
        restTemplate.getInterceptors().add(new CustomClientHttpRequestInterceptor());
    }
}

The CustomClientHttpRequestInterceptor interceptor performs basic logging of the request:

public class CustomClientHttpRequestInterceptor implements ClientHttpRequestInterceptor {
    private static Logger LOGGER = LoggerFactory
      .getLogger(CustomClientHttpRequestInterceptor.class);

    @Override
    public ClientHttpResponse intercept(
      HttpRequest request, byte[] body, 
      ClientHttpRequestExecution execution) throws IOException {
 
        logRequestDetails(request);
        return execution.execute(request, body);
    }
    private void logRequestDetails(HttpRequest request) {
        LOGGER.info("Headers: {}", request.getHeaders());
        LOGGER.info("Request Method: {}", request.getMethod());
        LOGGER.info("Request URI: {}", request.getURI());
    }
}

Now, we define CustomRestTemplateCustomizer as a bean in a configuration class or in our Spring Boot application class:

@Bean
public CustomRestTemplateCustomizer customRestTemplateCustomizer() {
    return new CustomRestTemplateCustomizer();
}

With this configuration, every RestTemplate that we’ll use in our application will have the custom interceptor set on it.

4. Configuration by Creating Our Own RestTemplateBuilder

This is the most extreme approach to customizing a RestTemplate. It disables the default auto-configuration of RestTemplateBuilder, so we need to define it ourselves:

@Bean
@DependsOn(value = {"customRestTemplateCustomizer"})
public RestTemplateBuilder restTemplateBuilder() {
    return new RestTemplateBuilder(customRestTemplateCustomizer());
}

After this, we can inject the custom builder into our classes like we’d do with a default RestTemplateBuilder and create a RestTemplate as usual:

private RestTemplate restTemplate;

@Autowired
public HelloController(RestTemplateBuilder builder) {
    this.restTemplate = builder.build();
}

5. Conclusion

We’ve seen how to configure a RestTemplate with the default RestTemplateBuilder, building our own RestTemplateBuilder, or using a RestTemplateCustomizer bean.

As always, the full codebase for this example can be found in our GitHub repository.

JUnit5 Programmatic Extension Registration with @RegisterExtension

1. Overview

JUnit 5 provides multiple methods for registering extensions. For an overview of some of these methods, refer to our Guide to JUnit 5 Extensions.

In this quick tutorial, we’ll focus on programmatic registration of JUnit 5 extensions, using the @RegisterExtension annotation.

2. @RegisterExtension

We can apply this annotation to fields in test classes. One advantage of this method is that we can access the extension as an object in the test code directly.

JUnit will call the extension methods at appropriate stages.

For example, if an extension implements BeforeEachCallback, JUnit will call its corresponding interface methods before executing a test method.

3. Using @RegisterExtension with Static Fields

When used with static fields, JUnit will apply the methods of this extension after the class-level @ExtendWith based extensions have been applied.

Also, JUnit will invoke both class-level and method-level callbacks of the extension.

For example, the following extension features both a beforeAll and a beforeEach implementation:

public class LoggingExtension implements 
  BeforeAllCallback, BeforeEachCallback {

    // logger, constructor etc

    @Override
    public void beforeAll(ExtensionContext extensionContext) 
      throws Exception {
        logger.info("Type {} In beforeAll : {}", 
          type, extensionContext.getDisplayName());
    }

    @Override
    public void beforeEach(ExtensionContext extensionContext) throws Exception {
        logger.info("Type {} In beforeEach : {}",
          type, extensionContext.getDisplayName());
    }

    public String getType() {
        return type;
    }
}

Let’s apply this extension to a static field of a test:

public class RegisterExtensionTest {

    @RegisterExtension
    static LoggingExtension staticExtension = new LoggingExtension("static version");

    @Test
    public void demoTest() {
        // assertions
    }
}

The output shows messages from both the beforeAll and beforeEach methods:

Type static version In beforeAll : RegisterExtensionTest
Type static version In beforeEach : demoTest()

4. Using @RegisterExtension with Instance Fields

If we use RegisterExtension with non-static fields, JUnit will only apply the extension after processing all TestInstancePostProcessor callbacks.

In this case, JUnit will ignore class-level callbacks like beforeAll.

In the above example, let’s remove the static modifier from LoggingExtension:

@RegisterExtension
LoggingExtension instanceLevelExtension = new LoggingExtension("instance version");

Now JUnit will only invoke the beforeEach method, as seen in the output:

Type instance version In beforeEach : demoTest()

5. Conclusion

In this article, we did an overview of programmatically registering JUnit 5 extensions with @RegisterExtension.

We also covered the difference between applying the extension to static fields vs. instance fields.

As usual, code examples can be found over on GitHub.

Java Weekly, Issue 232

Here we go…

1. Spring and Java

>> Truth First, or Why You Should Mostly Implement Database First Designs [blog.jooq.org]

Some solid points to consider when thinking about where the source of truth in your system is, and how to make sure your architecture reflects that.

>> Java Collections Are Evolving [dzone.com]

The highly useful new functionality the last couple of JDK releases have brought to the Java Collection Framework. Really good stuff.

>> Zip Slip Directory Traversal Vulnerability Impacts Multiple Java Projects [infoq.com]

A quick but interesting write-up, all about the new “Zip Slip” vulnerability – along with a few practical examples, if you’re curious.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

>> Storing Encrypted Credentials in GIT [techblog.bozho.net]

Storing credentials correctly isn’t necessarily easy, but it’s highly important that you understand how to do that well.

>> “Should that be a Microservice?” Part 4: Independent Scalability [content.pivotal.io]

Microservices can be a useful architectural choice… but not always. Better to think twice.

Also worth reading:

3. Comics

And my favorite Dilberts of the week:

>> Motivational Speaker [dilbert.com]

>> Decentralization Changes Everything [dilbert.com]

>> Boiling an Ocean [dilbert.com]

4. Pick of the Week

>> Why the best things in life are all backwards [markmanson.net]

REST Query Language Over Multiple Tables with Querydsl Web Support

1. Overview

In this tutorial, we’ll continue with the second part of Spring Data Querydsl Web Support. Here, we’ll focus on associated entities and how to create queries over HTTP.

Following the same configuration used in part one, we’ll create a Maven-based project. Please refer to the original article to check how to set up the basics.

2. Entities

First, let’s add a new entity (Address) creating a relationship between the user and her address. We’ve used the OneToOne relationship to keep it simple.

Consequently, we’ll have the following classes:

@Entity 
public class User {

    @Id 
    @GeneratedValue
    private Long id;

    private String name;

    @OneToOne(fetch = FetchType.LAZY, mappedBy = "user") 
    private Address address;

    // getters & setters 
}
@Entity 
public class Address {

    @Id 
    @GeneratedValue
    private Long id;

    private String address;

    private String country;

    @OneToOne(fetch = FetchType.LAZY) 
    @JoinColumn(name = "user_id") 
    private User user;

    // getters & setters
}

3. Spring Data Repositories

At this point, we have to create the Spring Data repositories, as usual, one for each entity. Note that these repositories will have the Querydsl configuration.

Let’s see the AddressRepository repository and explain how the framework configuration works:

public interface AddressRepository extends JpaRepository<Address, Long>, 
  QueryDslPredicateExecutor<Address>, QuerydslBinderCustomizer<QAddress> {
 
    @Override 
    default void customize(QuerydslBindings bindings, QAddress root) {
        bindings.bind(String.class)
          .first((SingleValueBinding<StringPath, String>) StringExpression::eq);
    }
}

We’re overriding the customize() method to configure the default binding. In this case, we’ll customize the default method binding to be equals for all String properties.

Once the repository is all set, we just have to add a @RestController to manage the HTTP queries.

4. Query Rest Controller

In part one, we explained the query @RestController over the user repository; here, we’ll just reuse it.
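
For reference, here’s a minimal sketch of that method, assuming a userRepository injected into the controller:

@GetMapping(value = "/users", produces = MediaType.APPLICATION_JSON_VALUE)
public Iterable<User> queryOverUser(
  @QuerydslPredicate(root = User.class) Predicate predicate) {
    BooleanBuilder builder = new BooleanBuilder();
    return userRepository.findAll(builder.and(predicate));
}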

Also, we may want to query the address table; so for this, we’ll just add a similar method:

@GetMapping(value = "/addresses", produces = MediaType.APPLICATION_JSON_VALUE)
public Iterable<Address> queryOverAddress(
  @QuerydslPredicate(root = Address.class) Predicate predicate) {
    BooleanBuilder builder = new BooleanBuilder();
    return addressRepository.findAll(builder.and(predicate));
}

Let’s create some tests to see how this works.

5. Integration Testing

We’ve included a test to prove how Querydsl works. For this, we’re using the MockMvc framework to simulate HTTP queries over User, joining this entity with the new Address entity. Therefore, we’re now able to make queries filtering on address attributes.

Let’s retrieve all users living in Spain:

/users?address.country=Spain

@Test
public void givenRequest_whenQueryUserFilteringByCountrySpain_thenGetJohn() throws Exception {
    mockMvc.perform(get("/users?address.country=Spain")).andExpect(status().isOk()).andExpect(content()
      .contentType(contentType))
      .andExpect(jsonPath("$", hasSize(1)))
      .andExpect(jsonPath("$[0].name", is("John")))
      .andExpect(jsonPath("$[0].address.address", is("Fake Street 1")))
      .andExpect(jsonPath("$[0].address.country", is("Spain")));
}

As a result, Querydsl will map the predicate sent over HTTP and generate the following SQL:

select user0_.id as id1_1_, 
       user0_.name as name2_1_ 
from user user0_ 
      cross join address address1_ 
where user0_.id=address1_.user_id 
      and address1_.country='Spain'

6. Conclusion

To sum up, we’ve seen that Querydsl offers web clients a very simple way to create dynamic queries; another powerful use of this framework.

In part one, we saw how to retrieve data from one table; now, we can add queries joining several tables, offering web clients a better experience when filtering directly via the HTTP requests they make.

The implementation of this example can be checked in the GitHub project – this is a Maven-based project, so it should be easy to import and run as it is.

Jagged Arrays In Java

1. Overview

A jagged array in Java is a multi-dimensional array comprising arrays of varying sizes as its elements. It’s also referred to as “an array of arrays” or “ragged array”.

In this quick tutorial, we’ll look more in-depth into defining and working with jagged arrays.

2. Creating a Jagged Array

Let’s start by looking at ways in which we can create a jagged array:

2.1. Using the Shorthand-Form

An easy way to define a jagged array would be:

int[][] jaggedArr = {{1, 2}, {3, 4, 5}, {6, 7, 8, 9}};

Here, we’ve declared and initialized jaggedArr in a single step.

2.2. Declaration and then Initialization

We start by declaring a jagged array of size three:

int[][] jaggedArr = new int[3][];

Here, we’ve omitted the second dimension since it will vary.

Next, let’s go further by both declaring and initializing the respective elements within jaggedArr:

jaggedArr[0] = new int[] {1, 2};
jaggedArr[1] = new int[] {3, 4, 5};
jaggedArr[2] = new int[] {6, 7, 8, 9};

We can also simply declare its elements without initializing them:

jaggedArr[0] = new int[2];
jaggedArr[1] = new int[3];
jaggedArr[2] = new int[4];

These can then later be initialized, for example by using user inputs.

3. Memory Representation

What will the memory representation of our jaggedArr look like?

As we know, an array in Java is nothing but an object, the elements of which could be either primitives or references. So, a two-dimensional array in Java can be thought of as an array of one-dimensional arrays.

In memory, our jaggedArr is an array of three references, each pointing to a separate one-dimensional array.

Clearly, jaggedArr[0] is holding a reference to a single-dimensional array of size 2, jaggedArr[1] holds a reference to another one-dimensional array of size 3 and so on.

This way Java makes it possible for us to define and use jagged arrays.

4. Iterating Elements

We can iterate a jagged array much like any other multi-dimensional array in Java.

Let’s try iterating and initializing the jaggedArr elements using user inputs:

void initializeElements(int[][] jaggedArr) {
    Scanner sc = new Scanner(System.in);
    for (int outer = 0; outer < jaggedArr.length; outer++) {
        for (int inner = 0; inner < jaggedArr[outer].length; inner++) {
            jaggedArr[outer][inner] = sc.nextInt();
        }
    }
}

Here, jaggedArr[outer].length is the length of an array at an index outer in jaggedArr.

It helps us to ensure that we are looking for elements only within a valid range of each sub-array, thereby avoiding an ArrayIndexOutOfBoundsException.

5. Printing Elements

What if we want to print the elements of our jagged array?

One obvious way would be to use the iteration logic we’ve already covered. This involves iterating through each item within our jagged array, which itself is an array, and then iterating over that child array – one element at a time.

Another option we have is to use the java.util.Arrays.toString() helper method:

void printElements(int[][] jaggedArr) {
    for (int index = 0; index < jaggedArr.length; index++) {
        System.out.println(Arrays.toString(jaggedArr[index]));
    }
}

And we end up with clean and simple code. The generated console output would look like:

[1, 2]
[3, 4, 5]
[6, 7, 8, 9]
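
Alternatively, the java.util.Arrays.deepToString() helper renders the whole nested structure in a single call:

void printAll(int[][] jaggedArr) {
    // recursively converts nested arrays to their string form
    System.out.println(Arrays.deepToString(jaggedArr));
}

This prints [[1, 2], [3, 4, 5], [6, 7, 8, 9]] for our example array.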

6. Conclusion

In this article, we looked at what jagged arrays are, how they look in-memory and the ways in which we can define and use them.

As always, the source code of the examples presented can be found over on GitHub.

Learn Spring Boot

Spring Boot – Basics

Spring Boot – Advanced Topics

Spring Boot – Testing

Spring Boot – Under the Hood

Spring Boot – DevOps Tools

Spring Boot – Integration with other Libraries

Binding a List in Thymeleaf

1. Overview

In this quick tutorial, we’re going to show how to bind a List object in Thymeleaf.

To learn how to integrate Thymeleaf with Spring, you can check out our main Spring article here – where you can also learn how to display fields, accept input, display validation errors, or convert data for display.

2. Lists in Thymeleaf Example

We’ll start by showing how to display elements of a List in a Thymeleaf page and how to bind a list of objects as user input in a Thymeleaf form.

For this purpose, we’ll use a simple model shown in the following code:

public class Book {
    private long id;

    private String title;

    private String author;
	
    // getters and setters
}

Besides displaying existing books in our example, we’re going to make it possible for the user to add multiple books to the collection and also to edit all existing books at once.

3. Displaying List Elements

Let’s take a look at the following Controller method that returns the allBooks page:

@GetMapping("/all")
public String showAll(Model model) {
    model.addAttribute("books", bookService.findAll());
    return "books/allBooks";
}

Here, we’ve added List of Book objects as a model attribute sent to the view, where we’ll display it using an HTML table:

<table>
    <thead>
        <tr>
            <th> Title </th>
            <th> Author </th>
        </tr>
    </thead>
    <tbody>
        <tr th:if="${books.empty}">
            <td colspan="2"> No Books Available </td>
        </tr>
        <tr th:each="book : ${books}">
            <td><span th:text="${book.title}"> Title </span></td>
            <td><span th:text="${book.author}"> Author </span></td>
        </tr>
    </tbody>
</table>

Here, we’re using the th:each property to iterate through the list and display properties of each object in it.

4. Binding a List Using Selection Expression

To send the list of objects from the view to the controller via form submit, we cannot use the List object itself.

Instead, we have to add a wrapper object that will hold the submitted list:

public class BooksCreationDto {
    private List<Book> books;

    // default and parameterized constructor

    public void addBook(Book book) {
        this.books.add(book);
    }
	
    // getter and setter
}
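
For completeness, the elided constructors could look like the following minimal sketch; the no-argument variant initializes an empty list so that addBook() won’t fail with a NullPointerException:

public BooksCreationDto() {
    this.books = new ArrayList<>();
}

public BooksCreationDto(List<Book> books) {
    this.books = books;
}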

Let’s now enable the user to add three books in one form submission.

First, we’ll prepare the form page, passing our command object as a Model attribute:

@GetMapping("/create")
public String showCreateForm(Model model) {
    BooksCreationDto booksForm = new BooksCreationDto();

    for (int i = 1; i <= 3; i++) {
        booksForm.addBook(new Book());
    }

    model.addAttribute("form", booksForm);
    return "books/createBooksForm";
}

As we can see, we passed a list of 3 empty Book objects to the view via the wrapper class.

Next, we need to add the form to a Thymeleaf page:

<form action="#" th:action="@{/books/save}" th:object="${form}"
  method="post">
    <fieldset>
        <input type="submit" id="submitButton" th:value="Save">
        <input type="reset" id="resetButton" name="reset" th:value="Reset"/>
        <table>
            <thead>
                <tr>
                    <th> Title</th>
                    <th> Author</th>
                </tr>
            </thead>
            <tbody>
                <tr th:each="book, itemStat : *{books}">
                    <td><input th:field="*{books[__${itemStat.index}__].title}" /></td>
                    <td><input th:field="*{books[__${itemStat.index}__].author}" /></td>
                </tr>
            </tbody>
        </table>
    </fieldset>
</form>

Let’s have a closer look at what we did here. First, we used th:object="${form}" to specify the command object (the one we passed as a Model attribute).

The next thing worth noting is that we accessed the list with a selection expression using:

<tr th:each="book, itemStat : *{books}">

And finally, we’re mapping our inputs as properties of the list elements using th:field.

However, we also need to use the itemStat variable to define which list element we’re referring to, as demonstrated in:

th:field="*{books[__${itemStat.index}__].title}"

The last step is actually to manipulate the submitted data on the back-end. We’ll use the command object as the @ModelAttribute in our @PostMapping method in the controller, save the retrieved list of books and return all existing books to the user:

@PostMapping("/save")
public String saveBooks(@ModelAttribute BooksCreationDto form, Model model) {
    bookService.saveAll(form.getBooks());

    model.addAttribute("books", bookService.findAll());
    return "redirect:/books/all";
}

After submitting the form to the /save endpoint, we’ll get a page listing all the newly added books.

5. Binding a List Using Variable Expression

For this example, we’ll first load all existing books into the command object:

@GetMapping("/edit")
public String showEditForm(Model model) {
    List<Book> books = new ArrayList<>();
    bookService.findAll().iterator().forEachRemaining(books::add);

    model.addAttribute("form", new BooksCreationDto(books));
    return "books/editBooksForm";
}

The HTML page is similar, with the most noteworthy differences in the th:each block:

<tr th:each="book, itemStat : ${form.books}">
    <td>
        <input hidden th:name="|books[${itemStat.index}].id|" th:value="${book.getId()}"/>
    </td>
    <td>
        <input th:name="|books[${itemStat.index}].title|" th:value="${book.getTitle()}"/>
    </td>
    <td>
        <input th:name="|books[${itemStat.index}].author|" th:value="${book.getAuthor()}"/>
    </td>
</tr>

As shown in <tr th:each="book, itemStat : ${form.books}">, we accessed the list in a slightly different manner, this time using a variable expression. Notably, we provided the name and value attributes for each input element so that the data is submitted properly.

We also had to add a hidden input that binds the current book’s id, since we want to edit existing books rather than create new ones.

6. Conclusion

In this article, we illustrated how to use a List object in Thymeleaf and Spring MVC. We’ve shown how to display a list of objects sent to the view, but we put the primary focus on two ways of binding user inputs as a list in a Thymeleaf form.

All of the code snippets, mentioned in the article, can be found in our GitHub repository.

Working with Enums in Kotlin

1. Overview

In this tutorial, we’ll deep dive into Kotlin enums.

With the evolution of programming languages, the usage and application of enums have also advanced.

Enum constants today aren’t just mere collections of constants – they can have properties, implement interfaces, and much more.

For Kotlin beginners, check out this article on Kotlin basics – Introduction to the Kotlin Language.

2. Basic Kotlin Enums

Let’s look at the basics of enums in Kotlin.

2.1. Defining Enums

Let’s define an enum with three constants describing credit card types:

enum class CardType {
    SILVER, GOLD, PLATINUM
}

2.2. Initializing Enum Constants

Enums in Kotlin, just like in Java, can have a constructor. Since enum constants are instances of an Enum class, the constants can be initialized by passing specific values to the constructor.

Let’s specify color values to various card types:

enum class CardType(val color: String) {
    SILVER("gray"),
    GOLD("yellow"),
    PLATINUM("black")
}

We can access the color value of a specific card type with:

val color = CardType.SILVER.color

3. Enum Constants as Anonymous Classes

We can define specific enum constant behavior by creating them as anonymous classes. Constants then need to override the abstract functions defined within the Enum definition.

For example, each card type may have a different cash-back calculation.

Let’s see how we can implement it:

enum class CardType {
    SILVER {
        override fun calculateCashbackPercent() = 0.25f
    },
    GOLD {
        override fun calculateCashbackPercent() = 0.5f
    },
    PLATINUM {
        override fun calculateCashbackPercent() = 0.75f
    };

    abstract fun calculateCashbackPercent(): Float
}

We can invoke the overridden methods of the anonymous constant classes with:

val cashbackPercent = CardType.SILVER.calculateCashbackPercent()

4. Enums Implementing Interfaces

Let’s say there’s an ICardLimit interface which defines the card limits of various card types:

interface ICardLimit {
    fun getCreditLimit(): Int
}

Now, let’s see how our enum can implement this interface:

enum class CardType : ICardLimit {
    SILVER {
        override fun getCreditLimit() = 100000
    },
    GOLD {
        override fun getCreditLimit() = 200000
    },
    PLATINUM {
        override fun getCreditLimit() = 300000
    }
}

To access the credit limit of a card type, we can use the same approach as in the previous example:

val creditLimit = CardType.PLATINUM.getCreditLimit()

5. Common Enum Constructs

5.1. Getting Enum Constants by Name

To get an enum constant by its String name, we use the valueOf() static function:

val cardType = CardType.valueOf(name.toUpperCase())

5.2. Iterating Through Enum Constants

To iterate through all enum constants, we use the values() static function:

for (cardType in CardType.values()) {
    println(cardType.color)
}

5.3. Static Methods

To add a “static” function to an enum, we can use a companion object:

companion object {
    fun getCardTypeByName(name: String) = valueOf(name.toUpperCase())
}

We can now invoke this function with:

val cardType = CardType.getCardTypeByName("SILVER")

Note that Kotlin doesn’t have a concept of static methods. What we’ve shown here is a way to get the same functionality as in Java, but using Kotlin’s features.

6. Conclusion

This article introduced enums in the Kotlin language and their key features.

We’ve introduced some simple concepts like defining enums and initializing the constants. We’ve also shown some advanced features like defining enum constants as anonymous classes, and enums implementing interfaces.

The implementation of all these examples and code snippets can be found in the GitHub project. This is a Maven project, so it should be easy to import and run as it is.

Java EE 8 Security API

1. Overview

The Java EE 8 Security API is the new standard and a portable way of handling security concerns in Java containers.

In this article, we’ll look at the three core features of the API:

  1. HTTP Authentication Mechanism
  2. Identity Store
  3. Security Context

We’ll first understand how to configure the provided implementations and then how to implement a custom one.

2. Maven Dependencies

To set up the Java EE 8 Security API, we need either a server-provided implementation or an explicit one.

2.1. Using the Server Implementation

Java EE 8 compliant servers already provide an implementation for the Java EE 8 Security API, and therefore we need only the Java EE Web Profile API Maven artifact:

<dependencies>
    <dependency>
        <groupId>javax</groupId>
        <artifactId>javaee-web-api</artifactId>
        <version>8.0</version>
        <scope>provided</scope>
    </dependency>
</dependencies>

2.2. Using an Explicit Implementation

First, we specify the Maven artifact for the Java EE 8 Security API:

<dependencies>
    <dependency>
        <groupId>javax.security.enterprise</groupId>
        <artifactId>javax.security.enterprise-api</artifactId>
        <version>1.0</version>
    </dependency>
</dependencies>

And then, we’ll add an implementation, for example, Soteria – the reference implementation:

<dependencies>
    <dependency>
        <groupId>org.glassfish.soteria</groupId>
        <artifactId>javax.security.enterprise</artifactId>
        <version>1.0</version>
    </dependency>
</dependencies>

3. HTTP Authentication Mechanism

Prior to Java EE 8, we configured authentication mechanisms declaratively through the web.xml file.

In this version, the Java EE 8 Security API introduces the new HttpAuthenticationMechanism interface as a replacement. Web applications can now configure authentication mechanisms by providing implementations of this interface.

Fortunately, the container already provides an implementation for each of the three authentication methods defined by the Servlet specification: Basic HTTP authentication, form-based authentication, and custom form-based authentication.

It also provides an annotation to trigger each implementation:

  1. @BasicAuthenticationMechanismDefinition
  2. @FormAuthenticationMechanismDefinition
  3. @CustomFormAuthenticationMechanismDefinition

3.1. Basic HTTP Authentication

As mentioned above, a web application can configure the Basic HTTP Authentication just by using the @BasicAuthenticationMechanismDefinition annotation on a CDI bean:

@BasicAuthenticationMechanismDefinition(
  realmName = "userRealm")
@ApplicationScoped
public class AppConfig{}

At this point, the Servlet container searches for and instantiates the provided implementation of the HttpAuthenticationMechanism interface.

Upon receipt of an unauthorized request, the container challenges the client to provide suitable authentication information via the WWW-Authenticate response header:

WWW-Authenticate: Basic realm="userRealm"

The client then sends the username and password, separated by a colon “:” and encoded in Base64, via the Authorization request header:

//user=baeldung, password=baeldung
Authorization: Basic YmFlbGR1bmc6YmFlbGR1bmc=

Note that the dialog presented for providing credentials is coming from the browser and not from the server.

3.2. Form-based HTTP Authentication

The @FormAuthenticationMechanismDefinition annotation triggers a form-based authentication as defined by the Servlet specification.

We then have the option to specify the login and error pages, or use the reasonable defaults /login and /login-error:

@FormAuthenticationMechanismDefinition(
  loginToContinue = @LoginToContinue(
    loginPage = "/login.html",
    errorPage = "/login-error.html"))
@ApplicationScoped
public class AppConfig{}

As a result of invoking loginPage, the server should send the form to the client:

<form action="j_security_check" method="post">
    <input name="j_username" type="text"/>
    <input name="j_password" type="password"/>
    <input type="submit">
</form>

The client then should send the form to a pre-defined backing authentication process provided by the container.

3.3. Custom Form-based HTTP Authentication

A web application can trigger the custom form-based authentication implementation by using the annotation @CustomFormAuthenticationMechanismDefinition:

@CustomFormAuthenticationMechanismDefinition(
  loginToContinue = @LoginToContinue(loginPage = "/login.xhtml"))
@ApplicationScoped
public class AppConfig {
}

But unlike the default form-based authentication, we’re configuring a custom login page and invoking the SecurityContext.authenticate() method as a backing authentication process.

Let’s have a look at the backing LoginBean as well, which contains the login logic:

@Named
@RequestScoped
public class LoginBean {

    @Inject
    private SecurityContext securityContext;

    @NotNull private String username;

    @NotNull private String password;

    public void login() {
        Credential credential = new UsernamePasswordCredential(
          username, new Password(password));
        AuthenticationStatus status = securityContext
          .authenticate(
            getHttpRequestFromFacesContext(),
            getHttpResponseFromFacesContext(),
            withParams().credential(credential));
        // ...
    }
     
    // ...
}
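
The elided helper methods extract the underlying servlet request and response from the JSF context; a plausible minimal sketch, assuming a standard FacesContext environment, would be:

private HttpServletRequest getHttpRequestFromFacesContext() {
    return (HttpServletRequest) FacesContext.getCurrentInstance()
      .getExternalContext()
      .getRequest();
}

private HttpServletResponse getHttpResponseFromFacesContext() {
    return (HttpServletResponse) FacesContext.getCurrentInstance()
      .getExternalContext()
      .getResponse();
}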

As a result of invoking the custom login.xhtml page, the client submits the received form to the LoginBean’s login() method:

//...
<input type="submit" value="Login" jsf:action="#{loginBean.login}"/>

3.4. Custom Authentication Mechanism

The HttpAuthenticationMechanism interface defines three methods. The most important is validateRequest(), for which we must provide an implementation.

The default behavior for the other two methods, secureResponse() and cleanSubject(), is sufficient in most cases.

Let’s have a look at an example implementation:

@ApplicationScoped
public class CustomAuthentication 
  implements HttpAuthenticationMechanism {

    @Override
    public AuthenticationStatus validateRequest(
      HttpServletRequest request,
      HttpServletResponse response, 
      HttpMessageContext httpMsgContext) 
      throws AuthenticationException {
 
        String username = request.getParameter("username");
        String password = request.getParameter("password");
        // mocking UserDetail, but in real life, we can obtain it from a database
        UserDetail userDetail = findByUserNameAndPassword(username, password);
        if (userDetail != null) {
            return httpMsgContext.notifyContainerAboutLogin(
              new CustomPrincipal(userDetail),
              new HashSet<>(userDetail.getRoles()));
        }
        return httpMsgContext.responseUnauthorized();
    }
    //...
}

Here, the implementation provides the business logic of the validation process, but in practice, it’s recommended to delegate to the IdentityStore through the IdentityStoreHandler by invoking its validate() method.

We’ve also annotated the implementation with the @ApplicationScoped annotation, as it needs to be a CDI bean.

After successfully validating the credential, and possibly retrieving the user’s roles, the implementation should then notify the container:

HttpMessageContext.notifyContainerAboutLogin(Principal principal, Set groups)

3.5. Enforcing Servlet Security

A web application can enforce security constraints by using the @ServletSecurity annotation on a Servlet implementation:

@WebServlet("/secured")
@ServletSecurity(
  value = @HttpConstraint(rolesAllowed = {"admin_role"}),
  httpMethodConstraints = {
    @HttpMethodConstraint(
      value = "GET", 
      rolesAllowed = {"user_role"}),
    @HttpMethodConstraint(     
      value = "POST", 
      rolesAllowed = {"admin_role"})
  })
public class SecuredServlet extends HttpServlet {
}

This annotation has two attributes – httpMethodConstraints and value. httpMethodConstraints is used to specify one or more constraints, each one representing access control for an HTTP method through a list of allowed roles.

The container will then check, for every url-pattern and HTTP method, whether the connected user has a suitable role for accessing the resource.

4. Identity Store

This feature is abstracted by the IdentityStore interface, and it’s used to validate credentials and eventually retrieve group membership. In other words, it can provide capabilities for authentication, authorization or both.

The IdentityStore is intended to be used by the HttpAuthenticationMechanism through the IdentityStoreHandler interface. The Servlet container provides a default implementation of the IdentityStoreHandler.

An application can provide its own implementation of the IdentityStore, or use one of the two built-in implementations provided by the container for database and LDAP stores.

4.1. Built-in Identity Stores

The Java EE compliant server should provide implementations for the two Identity Stores: Database and LDAP.

The database IdentityStore implementation is initialized by passing configuration data to the @DatabaseIdentityStoreDefinition annotation:

@DatabaseIdentityStoreDefinition(
  dataSourceLookup = "java:comp/env/jdbc/securityDS",
  callerQuery = "select password from users where username = ?",
  groupsQuery = "select GROUPNAME from groups where username = ?",
  priority=30)
@ApplicationScoped
public class AppConfig {
}

As configuration data, we need a JNDI data source pointing to an external database, two JDBC queries for checking the caller and his groups, and finally a priority parameter that is used when multiple stores are configured.

An IdentityStore with a higher priority value is processed later by the IdentityStoreHandler.

Like the database one, the LDAP IdentityStore implementation is initialized through the @LdapIdentityStoreDefinition annotation by passing configuration data:

@LdapIdentityStoreDefinition(
  url = "ldap://localhost:10389",
  callerBaseDn = "ou=caller,dc=baeldung,dc=com",
  groupSearchBase = "ou=group,dc=baeldung,dc=com",
  groupSearchFilter = "(&(member=%s)(objectClass=groupOfNames))")
@ApplicationScoped
public class AppConfig {
}

Here we need the URL of an external LDAP server, how to search for the caller in the LDAP directory, and how to retrieve his groups.

4.2. Implementing a Custom IdentityStore

The IdentityStore interface defines four default methods:

default CredentialValidationResult validate(
  Credential credential)
default Set<String> getCallerGroups(
  CredentialValidationResult validationResult)
default int priority()
default Set<ValidationType> validationTypes()

The priority() method returns a value that determines the order in which this implementation is processed by the IdentityStoreHandler. An IdentityStore with a lower priority is treated first.

By default, an IdentityStore handles both credentials validation (ValidationType.VALIDATE) and group retrieval (ValidationType.PROVIDE_GROUPS). We can override this behavior so that it provides only one capability.

Thus, we can configure the IdentityStore to be used only for credentials validation:

@Override
public Set<ValidationType> validationTypes() {
    return EnumSet.of(ValidationType.VALIDATE);
}

In this case, we should provide an implementation for the validate() method:

@ApplicationScoped
public class InMemoryIdentityStore implements IdentityStore {
    // initialized from a file or hardcoded
    private Map<String, UserDetails> users = new HashMap<>();

    @Override
    public int priority() {
        return 70;
    }

    @Override
    public Set<ValidationType> validationTypes() {
        return EnumSet.of(ValidationType.VALIDATE);
    }

    public CredentialValidationResult validate( 
      UsernamePasswordCredential credential) {
 
        UserDetails user = users.get(credential.getCaller());
        if (user != null && credential.compareTo(user.getLogin(), user.getPassword())) {
            return new CredentialValidationResult(user.getLogin());
        }
        return INVALID_RESULT;
    }
}

Or we can choose to configure the IdentityStore so that it can be used only for group retrieval:

@Override
public Set<ValidationType> validationTypes() {
    return EnumSet.of(ValidationType.PROVIDE_GROUPS);
}

We should then provide an implementation for the getCallerGroups() method:

@ApplicationScoped
public class InMemoryIdentityStore implements IdentityStore {
    // initialized from a file or hardcoded
    private Map<String, UserDetails> users = new HashMap<>();

    @Override
    public int priority() {
        return 90;
    }

    @Override
    public Set<ValidationType> validationTypes() {
        return EnumSet.of(ValidationType.PROVIDE_GROUPS);
    }

    @Override
    public Set<String> getCallerGroups(CredentialValidationResult validationResult) {
        UserDetails user = users.get(
          validationResult.getCallerPrincipal().getName());
        return new HashSet<>(user.getRoles());
    }
}

Because the IdentityStoreHandler expects the implementation to be a CDI bean, we decorate it with the @ApplicationScoped annotation.

5. Security Context API

The Java EE 8 Security API provides an access point to programmatic security through the SecurityContext interface. It’s an alternative when the declarative security model enforced by the container isn’t sufficient.

A default implementation of the SecurityContext interface should be provided at runtime as a CDI bean, and therefore we need to inject it:

@Inject
SecurityContext securityContext;

At this point, we can authenticate the user, retrieve the authenticated caller, check his role membership, and grant or deny access to web resources through the five available methods.

5.1. Retrieving Caller Data

In previous versions of Java EE, we’d retrieve the Principal or check the role membership differently in each container.

While we use the getUserPrincipal() and isUserInRole() methods of the HttpServletRequest in a servlet container, the similar getCallerPrincipal() and isCallerInRole() methods of the EJBContext are used in an EJB container.

The new Java EE 8 Security API has standardized this by providing a similar method through the SecurityContext interface:

Principal getCallerPrincipal();
boolean isCallerInRole(String role);
<T extends Principal> Set<T> getPrincipalsByType(Class<T> type);

The getCallerPrincipal() method returns a container-specific representation of the authenticated caller, while the getPrincipalsByType() method retrieves all principals of a given type.

This can be useful when the application-specific caller differs from the container’s.

5.2. Testing for Web Resource Access

First, we need to configure a protected resource:

@WebServlet("/protectedServlet")
@ServletSecurity(@HttpConstraint(rolesAllowed = "USER_ROLE"))
public class ProtectedServlet extends HttpServlet {
    //...
}

And then, to check access to this protected resource we should invoke the hasAccessToWebResource() method:

securityContext.hasAccessToWebResource("/protectedServlet", "GET");

In this case, the method returns true if the user is in the USER_ROLE role.

5.3. Authenticating the Caller Programmatically

An application can programmatically trigger the authentication process by invoking authenticate():

AuthenticationStatus authenticate(
  HttpServletRequest request, 
  HttpServletResponse response,
  AuthenticationParameters parameters);

The container is then notified and will, in turn, invoke the authentication mechanism configured for the application. The AuthenticationParameters parameter provides a credential to the HttpAuthenticationMechanism:

withParams().credential(credential)

The SUCCESS and SEND_FAILURE values of AuthenticationStatus designate a successful and a failed authentication respectively, while SEND_CONTINUE signals that the authentication process is still in progress.

6. Running the Examples

To highlight these examples, we’ve used the latest development build of the Open Liberty server, which supports Java EE 8. It’s downloaded and installed by the liberty-maven-plugin, which can also deploy the application and start the server.

To run the examples, just go to the corresponding module and invoke this command:

mvn clean package liberty:run

As a result, Maven will download the server, build, deploy, and run the application.

7. Conclusion

In this article, we covered the configuration and implementation of the main features of the new Java EE 8 Security API.

First, we started by showing how to configure the default built-in authentication mechanisms and how to implement a custom one. Later, we saw how to configure the built-in Identity Store and how to implement a custom one. And finally, we saw how to call methods of the SecurityContext.

As always, the code examples for this article are available over on GitHub.

Histograms with Apache Commons Frequency

1. Overview

In this tutorial, we’re going to look at how we can present data on a histogram with the help of Apache Commons Frequency class.

The Frequency class is part of the Apache Commons Math library explored in this article.

A histogram is a diagram of connected bars that shows the occurrence of a range of data in a dataset. It differs from a bar chart in that it’s used to display the distribution of continuous, quantitative variables while a bar chart is used to display categorical data.

2. Project Dependencies

In this article, we’ll be using a Maven project with the following dependencies:

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-math3</artifactId>
    <version>3.6.1</version>
</dependency>
<dependency>
    <groupId>org.knowm.xchart</groupId>
    <artifactId>xchart</artifactId>
    <version>3.5.2</version>
</dependency>

The commons-math3 library contains the Frequency class that we’ll be using to determine the occurrence of variables in our dataset. The xchart library is what we’ll use to display the histogram in a GUI.

The latest version of commons-math3 and xchart can be found on Maven Central.

3. Calculating the Frequency of Variables

For this tutorial, we’ll be using a dataset comprising the ages of students in a particular school. We’d like to see the frequency of different age groups and observe their distribution on a histogram chart.

Let’s represent the dataset with a List collection and use it to populate an instance of the Frequency class:

List<Integer> datasetList = Arrays.asList(
  36, 25, 38, 46, 55, 68, 
  72, 55, 36, 38, 67, 45, 22, 
  48, 91, 46, 52, 61, 58, 55);
Frequency frequency = new Frequency();
datasetList.forEach(d -> frequency.addValue(Double.parseDouble(d.toString())));

Now that we’ve populated our instance of the Frequency class, we’re going to get the count of each age in a bin and sum it up so we can get the total frequency of ages in a particular age group:

datasetList.stream()
  .map(d -> Double.parseDouble(d.toString()))
  .distinct()
  .forEach(observation -> {
      long observationFrequency = frequency.getCount(observation);
      int upperBoundary = (observation > classWidth)
        ? Math.multiplyExact( (int) Math.ceil(observation / classWidth), classWidth)
        : classWidth;
      int lowerBoundary = (upperBoundary > classWidth)
        ? Math.subtractExact(upperBoundary, classWidth)
        : 0;
      String bin = lowerBoundary + "-" + upperBoundary;

      updateDistributionMap(lowerBoundary, bin, observationFrequency);
  });

From the snippet above, we first determine the frequency of the observation using the getCount() method of the Frequency class. The method returns the total number of occurrences of the observation.

Using the current observation, we dynamically determine the group it belongs to by figuring out its upper and lower boundaries relative to the class width – which is 10.

The upper and lower boundaries are concatenated to form a bin, which is stored alongside the observationFrequency in a distributionMap using the updateDistributionMap() method.

If the bin already exists, we update its frequency; otherwise, we add it as a key and set the frequency of the current observation as its value. Note that the distinct() call above keeps us from processing the same observation twice.
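
Neither classWidth nor updateDistributionMap() is shown in the snippet above; a minimal sketch, assuming the bins are accumulated in a map ordered by their numeric lower boundary, could look like:

private static final int classWidth = 10;

// bin label -> total frequency, ordered by each bin's numeric lower boundary
private Map<String, Long> distributionMap = new TreeMap<>(
  Comparator.comparingInt((String bin) -> Integer.parseInt(bin.split("-")[0])));

private void updateDistributionMap(int lowerBoundary, String bin, long observationFrequency) {
    // the lowerBoundary parameter could equally drive the ordering
    distributionMap.merge(bin, observationFrequency, Long::sum);
}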

The Frequency class also has methods for determining the percentage and cumulative percentage of a variable in a dataset.

4. Plotting the Histogram Chart

Now that we’ve processed our raw dataset into a map of age groups and their respective frequencies, we can use the xchart library to display the data in a histogram chart:

CategoryChart chart = new CategoryChartBuilder().width(800).height(600)
  .title("Age Distribution")
  .xAxisTitle("Age Group")
  .yAxisTitle("Frequency")
  .build();

chart.getStyler().setLegendPosition(Styler.LegendPosition.InsideNW);
chart.getStyler().setAvailableSpaceFill(0.99);
chart.getStyler().setOverlapped(true);

List<Long> yData = new ArrayList<>(distributionMap.values());
List<String> xData = new ArrayList<>(distributionMap.keySet());
chart.addSeries("age group", xData, yData);

new SwingWrapper<>(chart).displayChart();

We created an instance of a CategoryChart using the chart builder, then we configured it and populated it with the data for the x and y-axes.

We finally display the chart in a GUI using the SwingWrapper.

From the histogram above, we can see that there are no students aged 80 – 90, while students aged 50 – 60 are predominant. These are most likely doctoral or post-doctoral students.

We can also say that the ages follow an approximately normal distribution.

5. Conclusion

In this article, we’ve looked at how to harness the power of the Frequency class of the Apache commons-math3 library.

There are other interesting classes for statistics, geometry, genetic algorithms and others in the library. Its documentation can be found here.

The complete source code is available over on GitHub.


Guide to the java.util.Arrays Class

1. Introduction

In this tutorial, we’ll take a look at java.util.Arrays, a utility class that has been part of Java since Java 1.2.

Using Arrays, we can create, compare, sort, search, stream, and transform arrays.

2. Creating

Let’s take a look at some of the ways we can create arrays: copyOf, copyOfRange, and fill.

2.1. copyOf and copyOfRange

To use copyOfRange, we need our original array and the beginning index (inclusive) and end index (exclusive) that we want to copy:

String[] intro = new String[] { "once", "upon", "a", "time" };
String[] abridgement = Arrays.copyOfRange(intro, 0, 3);

assertArrayEquals(new String[] { "once", "upon", "a" }, abridgement); 
assertFalse(Arrays.equals(intro, abridgement));

And to use copyOf, we’d take intro and a target array size and we’d get back a new array of that length:

String[] revised = Arrays.copyOf(intro, 3);
String[] expanded = Arrays.copyOf(intro, 5);

assertArrayEquals(Arrays.copyOfRange(intro, 0, 3), revised);
assertNull(expanded[4]);

Note that copyOf pads the array with nulls if our target size is bigger than the original size.

2.2. fill

Another way we can create a fixed-length array is with fill, which is useful when we want an array where all the elements are the same:

String[] stutter = new String[3];
Arrays.fill(stutter, "once");

assertTrue(Stream.of(stutter)
  .allMatch(el -> "once".equals(el)));

Check out setAll to create an array where the elements are different.

Note that we need to instantiate the array ourselves beforehand – as opposed to something like String[] filled = Arrays.fill("once", 3); – since this feature was introduced before generics were available in the language.

3. Comparing

Now let’s switch to methods for comparing arrays.

3.1. equals and deepEquals

We can use equals for simple array comparison by size and contents.  If we add a null as one of the elements, the content check fails:

assertTrue(
  Arrays.equals(new String[] { "once", "upon", "a", "time" }, intro));
assertFalse(
  Arrays.equals(new String[] { "once", "upon", "a", null }, intro));

When we have nested or multi-dimensional arrays, we can use deepEquals to not only check the top-level elements but also perform the check recursively:

Object[] story = new Object[] 
  { intro, new String[] { "chapter one", "chapter two" }, end };
Object[] copy = new Object[] 
  { intro, new String[] { "chapter one", "chapter two" }, end };

assertTrue(Arrays.deepEquals(story, copy));
assertFalse(Arrays.equals(story, copy));

Note how deepEquals passes but equals fails.

This is because deepEquals ultimately calls itself each time it encounters an array, while equals will simply compare sub-arrays’ references.

Also, this makes it dangerous to call on an array with a self-reference!

3.2. hashCode and deepHashCode

The implementation of hashCode will give us the other part of the equals/hashCode contract that is recommended for Java objects.  We use hashCode to compute an integer based on the contents of the array:

Object[] looping = new Object[]{ intro, intro }; 
int hashBefore = Arrays.hashCode(looping);
int deepHashBefore = Arrays.deepHashCode(looping);

Now, we set an element of the original array to null and recompute the hash values:

intro[3] = null;
int hashAfter = Arrays.hashCode(looping);

Alternatively, deepHashCode checks the nested arrays for matching numbers of elements and contents.  If we recalculate with deepHashCode:

int deepHashAfter = Arrays.deepHashCode(looping);

Now, we can see the difference in the two methods:

assertEquals(hashAfter, hashBefore);
assertNotEquals(deepHashAfter, deepHashBefore);

deepHashCode is the underlying calculation used when we are working with data structures like HashMap and HashSet on arrays.

4. Sorting and Searching

Next, let’s take a look at sorting and searching arrays.

4.1. sort

If our elements are either primitives or they implement Comparable, we can use sort to perform an in-place sort:

String[] sorted = Arrays.copyOf(intro, 4);
Arrays.sort(sorted);

assertArrayEquals(
  new String[]{ "a", "once", "time", "upon" }, 
  sorted);

Take care that sort mutates the original array, which is why we perform a copy here.

sort will use a different algorithm for different array element types. Primitive types use a dual-pivot quicksort and Object types use Timsort. Both have the average case of O(n log(n)) for a randomly-sorted array.

As of Java 8, parallelSort is available for a parallel sort-merge. It offers a concurrent sorting method that splits the work across several Arrays.sort tasks.
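
Its usage mirrors sort; for example:

int[] numbers = { 4, 1, 3, 2 };
Arrays.parallelSort(numbers);

assertArrayEquals(new int[] { 1, 2, 3, 4 }, numbers);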

4.2. binarySearch

Searching in an unsorted array is linear, but if we have a sorted array, then we can do it in O(log n), which is what we can do with binarySearch:

int exact = Arrays.binarySearch(sorted, "time");
int caseInsensitive = Arrays.binarySearch(sorted, "TiMe", String::compareToIgnoreCase);

assertEquals("time", sorted[exact]);
assertEquals(2, exact);
assertEquals(exact, caseInsensitive);

If we don’t provide a Comparator as a third parameter, then binarySearch counts on our element type being of type Comparable.

And again, note that if our array isn’t first sorted, then binarySearch won’t work as we expect!

5. Streaming

As we saw earlier, Arrays was updated in Java 8 to include methods using the Stream API such as parallelSort (mentioned above), stream and setAll.

5.1. stream

stream gives us full access to the Stream API for our array:

Assert.assertEquals(Arrays.stream(intro).count(), 4);

exception.expect(ArrayIndexOutOfBoundsException.class);
Arrays.stream(intro, 2, 1).count();

We can provide inclusive and exclusive indices for the stream; however, we should expect an ArrayIndexOutOfBoundsException if the indices are out of order, negative, or out of range.

6. Transforming

Finally, toString, asList, and setAll give us a couple different ways to transform arrays.

6.1. toString and deepToString

A great way we can get a readable version of our original array is with toString:

assertEquals("[once, upon, a, time]", Arrays.toString(storyIntro));

Again we must use the deep version to print the contents of nested arrays:

assertEquals(
  "[[once, upon, a, time], [chapter one, chapter two], [the, end]]",
  Arrays.deepToString(story));

6.2. asList

The most convenient of all the Arrays methods for us to use is asList. It gives us an easy way to turn an array into a list:

List<String> rets = Arrays.asList(intro);

assertTrue(rets.contains("upon"));
assertTrue(rets.contains("time"));
assertEquals(rets.size(), 4);

However, the returned List is fixed-length, so we won’t be able to add or remove elements.

Note also that, curiously, java.util.Arrays has its own private ArrayList nested class, which asList returns; it isn’t the java.util.ArrayList we usually work with, and this can be very deceptive when debugging!

6.3. setAll

With setAll, we can set all of the elements of an array with a functional interface. The generator implementation takes the positional index as a parameter:

String[] longAgo = new String[4];
Arrays.setAll(longAgo, i -> this.getWord(i)); 
assertArrayEquals(longAgo, new String[]{"a","long","time","ago"});

And, of course, exception handling is one of the more dicey parts of using lambdas. So remember that here, if the lambda throws an exception, then Java doesn’t define the final state of the array.

7. Conclusion

In this article, we learned how to use some methods for creating, searching, sorting, and transforming arrays with the java.util.Arrays class.

This class has been expanded in more recent Java releases with the inclusion of stream producing and consuming methods in Java 8 and mismatch methods in Java 9.

The source for this article is, as always, over on GitHub.

Generalized Target-Type Inference in Java

1. Introduction

Type Inference was introduced in Java 5 to complement the introduction of generics, and it was substantially expanded in the following Java releases; this expanded capability is also referred to as Generalized Target-Type Inference.

In this tutorial, we’ll explore this concept with code samples.

2. Generics

Generics provide many benefits, such as increased type safety, avoidance of type casting errors, and support for generic algorithms. You can read more about generics in this article.

However, the introduction of generics resulted in the necessity of writing boilerplate code due to the need to pass type parameters. Some examples are:

Map<String, Map<String, String>> mapOfMaps = new HashMap<String, Map<String, String>>();
List<String> strList = Collections.<String>emptyList();
List<Integer> intList = Collections.<Integer>emptyList();

3. Type Inference Before Java 8

To reduce this unnecessary verbosity, Type Inference was introduced to Java; it is the process of automatically deducing the unspecified data types of an expression based on contextual information.

Now, we can invoke the same generic types and methods without specifying the parameter types. The compiler automatically infers the parameter types when needed.

We can see the same code using the new concept:

List<String> strListInferred = Collections.emptyList();
List<Integer> intListInferred = Collections.emptyList();

In the above example, based on the expected return types List<String> and List<Integer>, the compiler is able to infer the type parameter to the following generic method:

public static final <T> List<T> emptyList()

As we can see, the resulting code is concise. Now, we can call generic methods like ordinary methods if the type parameters can be inferred.

In Java 5, we could do Type-Inference in specific contexts as shown above.

Java 7 expanded the contexts in which it could be performed. It introduced the diamond operator <>. You may read more about the diamond operator in this article.

Now, we can perform this operation for generic class constructors in an assignment context. One such example is:

Map<String, Map<String, String>> mapOfMapsInferred = new HashMap<>();

Here, the Java Compiler uses the expected assignment type to infer the type parameters to HashMap constructor.

4. Generalized Target-Type Inference – Java 8

Java 8 further expanded the scope of Type Inference. We refer to this expanded inference capability as Generalized Target-Type Inference. You may read the technical details here.

Java 8 also introduced Lambda Expressions. Lambda Expressions do not have an explicit type.  Their type is inferred by looking at the target type of the context or situation. The Target-Type of an expression is the data type that the Java Compiler expects depending on where the expression appears.

Java 8 supports inference using Target-Type in a method context. When we invoke a generic method without explicit type arguments, the compiler can look at the method invocation and corresponding method declarations to determine the type argument (or arguments) that make the invocation applicable.

Let us look into an example code:

static <T> List<T> add(List<T> list, T a, T b) {
    list.add(a);
    list.add(b);
    return list;
}

List<String> strListGeneralized = add(new ArrayList<>(), "abc", "def");
List<Integer> intListGeneralized = add(new ArrayList<>(), 1, 2);
List<Number> numListGeneralized = add(new ArrayList<>(), 1, 2.0);

In the code, ArrayList<> does not provide the type argument explicitly. So, the compiler needs to infer it. First, the compiler looks into the arguments of the add method. Then, it looks into the parameters passed at different invocations.

It performs invocation applicability inference analysis to determine whether the method applies to these invocations. If multiple methods are applicable due to overloading, the compiler would choose the most specific method.

Then, the compiler performs invocation type inference analysis to determine the type arguments. The expected target types are also used in this analysis. It deduces the arguments in the three instances as ArrayList<String>, ArrayList<Integer> and ArrayList<Number>.

Target-Type inference allows us to not specify types for lambda expression parameters:

List<Integer> intList = Arrays.asList(5, 2, 4, 2, 1);
Collections.sort(intList, (a, b) -> a.compareTo(b));

List<String> strList = Arrays.asList("Red", "Blue", "Green");
Collections.sort(strList, (a, b) -> a.compareTo(b));

Here, the parameters a and b do not have explicitly defined types. Their types are inferred as Integer in the first Lambda Expression and as String in the second.
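
For comparison, the inferred versions above are equivalent to spelling the parameter types out explicitly:

Collections.sort(intList, (Integer a, Integer b) -> a.compareTo(b));
Collections.sort(strList, (String a, String b) -> a.compareTo(b));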

5. Conclusion

In this quick article, we reviewed Type Inference, which, along with generics and Lambda Expressions, enables us to write concise Java code.

As usual, the full source code can be found over on GitHub.

Thin JARs with Spring Boot

1. Introduction

In this tutorial, we’re going to look at how to build a Spring Boot project into a thin JAR file, using the spring-boot-thin-launcher project.

Spring Boot is known for its “fat” JAR deployments, where a single executable artifact contains both the application code and all of its dependencies.

Boot is also widely used to develop microservices. This can sometimes be at odds with the “fat JAR” approach, because including the same dependencies over and over in many artifacts can become a significant waste of resources.

2. Prerequisites

First of all, we need a Spring Boot project, of course. In this article, we’ll look at Maven builds, and Gradle builds in their most common configurations.

It’s impossible to cover all the build systems and build configurations out there, but, hopefully, we’ll cover enough of the general principles that you should be able to apply them to your specific setup.

2.1. Maven Projects

In a Boot project built with Maven, we ought to have the Spring Boot Maven plugin configured in our project’s pom.xml file, its parent, or one of its ancestors:

<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>    
</plugin>

Here, we’re referring to version 2.0.2.RELEASE of the plugin, the latest at the time of writing. The version of Spring Boot dependencies is usually decided by using a BOM or inheriting from a parent POM as in our reference project:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.0.1.RELEASE</version>
    <relativePath/>
</parent>

2.2. Gradle Projects

In a Boot project built with Gradle, we’ll have the Boot Gradle plugin:

buildscript {
    ext {
        springBootPlugin = 'org.springframework.boot:spring-boot-gradle-plugin'
        springBootVersion = '2.0.1.RELEASE'
    }
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath("${springBootPlugin}:${springBootVersion}")
    }
}

// elided

apply plugin: 'org.springframework.boot'
apply plugin: 'io.spring.dependency-management'

springBoot {
    mainClassName = 'org.baeldung.DemoApplication'
}

Note that, in this article, we’ll be considering only Boot 2.x and later projects. The Thin Launcher also supports earlier versions, but it requires a slightly different Gradle configuration that we’re omitting for simplicity. Please look at the project’s homepage for more details.

3. How to Create a Thin JAR?

The Spring Boot Thin Launcher is a small library that reads an artifact’s dependencies from a file bundled in the archive itself, downloads them from a Maven repository and finally launches the main class of the application.

So, when we build a project with the library, we get a JAR file with our code, a file enumerating its dependencies, and the main class from the library that performs the above tasks.

Of course, things are a bit more nuanced than our simplified explanation; we’ll discuss some topics in depth later in the article.

4. Basic Usage

Let’s now see how to build a “thin” JAR from our regular Spring Boot application.

We’ll launch the application with the usual java -jar <my-app-1.0.jar>, with optional additional command line arguments that control the Thin Launcher. We’ll see a couple of them in the following sections; the project’s homepage contains the full list.

4.1. Maven Projects

In a Maven project, we have to modify the declaration of the Boot plugin (see section 2.1) to include a dependency on the custom “thin” layout:

<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <dependencies>
        <!-- The following enables the "thin jar" deployment option. -->
        <dependency>
            <groupId>org.springframework.boot.experimental</groupId>
            <artifactId>spring-boot-thin-layout</artifactId>
            <version>1.0.11.RELEASE</version>
        </dependency>
    </dependencies>
</plugin>

The launcher will read dependencies from the pom.xml file that Maven stores in the generated JAR in the META-INF/maven directory.

We’ll perform the build as usual, e.g., with mvn install.

If we want to be able to produce both thin and fat builds (for example, in a project with multiple modules), we can declare the custom layout in a dedicated Maven profile, as in the sketch below.
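
A minimal sketch of such a profile (the thin profile id is our choice) might look like:

<profiles>
    <profile>
        <id>thin</id>
        <build>
            <plugins>
                <plugin>
                    <groupId>org.springframework.boot</groupId>
                    <artifactId>spring-boot-maven-plugin</artifactId>
                    <dependencies>
                        <dependency>
                            <groupId>org.springframework.boot.experimental</groupId>
                            <artifactId>spring-boot-thin-layout</artifactId>
                            <version>1.0.11.RELEASE</version>
                        </dependency>
                    </dependencies>
                </plugin>
            </plugins>
        </build>
    </profile>
</profiles>

We’d then build the thin variant with mvn install -Pthin, while a plain mvn install would keep producing the regular fat JAR.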

4.2. Maven and Dependencies: thin.properties

We can also have Maven generate a thin.properties file in addition to pom.xml. In that case, the file will contain the complete list of dependencies, including transitive ones, and the launcher will prefer it over the pom.xml.

The mojo (plugin) for doing so is spring-boot-thin-maven-plugin:properties, and by default, it outputs the thin.properties file in src/main/resources/META-INF, but we can specify its location with the thin.output property:

$ mvn org.springframework.boot.experimental:spring-boot-thin-maven-plugin:properties -Dthin.output=.

Please note that the output directory must exist for the goal to succeed, even if we’ve kept the default one.

4.3. Gradle Projects

In a Gradle project, instead, we add a dedicated plugin:

buildscript {
    ext {
        //...
        thinPlugin = 'org.springframework.boot.experimental:spring-boot-thin-gradle-plugin'
        thinVersion = '1.0.11.RELEASE'
    }
    //...
    dependencies {
        //...
        classpath("${thinPlugin}:${thinVersion}")
    }
}

//elided

apply plugin: 'maven'
apply plugin: 'org.springframework.boot.experimental.thin-launcher'

To obtain a thin build, we’ll tell Gradle to execute the thinJar task:

~/projects/baeldung/spring-boot-gradle $ ./gradlew thinJar

4.4. Gradle and Dependencies: pom.xml

In the code example in the previous section, we’ve declared the Maven plugin in addition to the Thin Launcher (as well as the Boot and Dependency Management plugins that we’d already seen in the Prerequisites section).

That’s because, just like in the Maven case that we’ve seen earlier, the artifact will contain and make use of a pom.xml file enumerating the application’s dependencies. The pom.xml file is generated by a task called thinPom, which is an implicit dependency of any jar task.

We can customize the generated pom.xml file with a dedicated task. Here, we’ll just replicate what the thin plugin already does automatically:

task createPom {
    def basePath = 'build/resources/main/META-INF/maven'
    doLast {
        pom {
            withXml(dependencyManagement.pomConfigurer)
        }.writeTo("${basePath}/${project.group}/${project.name}/pom.xml")
    }
}

To use our custom pom.xml file, we add the above task to the jar task’s dependencies:

bootJar.dependsOn = [createPom]

4.5. Gradle and Dependencies: thin.properties

We can also have Gradle generate a thin.properties file rather than pom.xml, as we did earlier with Maven.

The task that generates the thin.properties file is called thinProperties, and it’s not used by default. We can add it as a dependency of the jar task:

bootJar.dependsOn = [thinProperties]

5. Storing Dependencies

The whole point of thin JARs is to avoid bundling the dependencies with the application. However, dependencies don’t magically disappear; they’re simply stored elsewhere.

In particular, the Thin Launcher uses the Maven infrastructure to resolve dependencies, so:

  1. it checks the local Maven repository, which by default lies in ~/.m2/repository but can be moved elsewhere;
  2. then, it downloads missing dependencies from Maven Central (or any other configured repository);
  3. finally, it caches them in the local repository, so that it won’t have to download them again the next time we run the application.

Of course, the download phase is the slow and error-prone part of the process, because it requires access to Maven Central through the Internet, or access to a local proxy, and we all know how those things are generally unreliable.

Fortunately, there are various ways of deploying the dependencies together with the application(s), for example in a prepackaged container for cloud deployment.

5.1. Running the Application for Warm-up

The simplest way to cache the dependencies is to do a warm-up run of the application in the target environment. As we’ve seen earlier, this will cause the dependencies to be downloaded and cached in the local Maven repository. If we run more than one app, the repository will end up containing all the dependencies without duplicates.

Since running an application can have unwanted side effects, we can also perform a “dry run” that only resolves and downloads the dependencies without running any user code:

$ java -Dthin.dryrun=true -jar my-app-1.0.jar

Note that, as per Spring Boot conventions, we can also set the -Dthin.dryrun property with a --thin.dryrun command line argument to the application, or with a THIN_DRYRUN system property. Any value except false will instruct the Thin Launcher to perform a dry run.
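
For example, the command line argument form would be:

$ java -jar my-app-1.0.jar --thin.dryrun=true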

5.2. Packaging the Dependencies During the Build

Another option is to collect the dependencies during the build, without bundling them in the JAR. Then, we can copy them to the target environment as part of the deployment procedure.

This is generally simpler because it’s not necessary to run the application in the target environment. However, if we’re deploying multiple applications, we’ll have to merge their dependencies, either manually or with a script.

The format in which the Thin Plugin for Maven and Gradle packages the dependencies during a build is the same as a Maven local repository:

root/
    repository/
        com/
        net/
        org/
        ...

In fact, we can point an application using the Thin Launcher to any such directory (including a local Maven repository) at runtime with the thin.root property:

$ java -jar my-app-1.0.jar --thin.root=my-app/deps

We can also safely merge multiple such directories by copying them one over another, thus obtaining a Maven repository with all the necessary dependencies.
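
For instance, assuming two hypothetical dependency directories app1-deps and app2-deps produced as above, a plain recursive copy into a common directory performs the merge:

$ cp -r app1-deps/repository all-deps/
$ cp -r app2-deps/repository all-deps/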

5.3. Packaging the Dependencies With Maven

To have Maven package the dependencies for us, we use the resolve goal of the spring-boot-thin-maven-plugin. We can invoke it manually or automatically in our pom.xml:

<plugin>
    <groupId>org.springframework.boot.experimental</groupId>
    <artifactId>spring-boot-thin-maven-plugin</artifactId>
    <version>${thin.version}</version>
    <executions>
        <execution>
        <!-- Download the dependencies at build time -->
        <id>resolve</id>
        <goals>
            <goal>resolve</goal>
        </goals>
        <inherited>false</inherited>
        </execution>
    </executions>
</plugin>

After building the project, we’ll find a directory target/thin/root/ with the structure that we’ve discussed in the previous section.

5.4. Packaging the Dependencies With Gradle

If we’re using Gradle with the thin-launcher plugin, instead, we have a thinResolve task available. The task will save the application and its dependencies in the build/thin/root/ directory, similarly to the Maven plugin of the previous section:

$ gradlew thinResolve

Please note that, at the time of writing, the thin-launcher plugin has a bug that prevents the dependencies from being saved if thin.properties is used: https://github.com/dsyer/spring-boot-thin-launcher/issues/53.

6. Conclusions and Further Reading

In this article, we’ve looked at how to build a thin JAR. We’ve also seen how to use the Maven infrastructure to download and store its dependencies.

The homepage of the thin launcher has a few more HOW-TO guides for scenarios such as cloud deployments to Heroku, as well as the full list of supported command line arguments.

The implementation of all the Maven examples and code snippets can be found in the GitHub project – as a Maven project, so it should be easy to import and run as is.

Similarly, all Gradle examples refer to this GitHub project.

Java Weekly, Issue 233

Here we go…

1. Spring and Java

>> Unlocking Intersection Types With ‘var’ In Java 10 [blog.codefx.org]

As a side effect of introducing “var” to Java, we also got the support for intersection types 🙂

>> FYI: removal of long-deprecated Thread.destroy() and Thread.stop(Throwable) methods [mail.openjdk.java.net]

The title says it all – as promised, things are finally starting to get removed from Java.

>> Proposed Jakarta EE Design Principles [blog.sebastian-daschner.com]

A good, quick read about the foundation of the new Jakarta EE project and the direction it’s heading.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

>> Open Sourcing Zuul 2 [medium.com]

After a long wait, the new version of Zuul is finally getting open-sourced. Nice.

>> Working with the system clipboard in Vim [advancedweb.hu]

The second thing to learn after exiting Vim 🙂

>> Agile cargo cult [blog.frankel.ch]

Turns out it’s about actually being agile, and not pretending to look like we’re doing agile.

Also worth reading:

3. Comics

And my favorite Dilberts of the week:

>> Blaming Others [dilbert.com]

>> Lying to Customers [dilbert.com]

>> Wally Teaches Success [dilbert.com]

4. Pick of the Week

>> Becoming a dramatically better programmer [henrystanley.com]
