Quantcast
Channel: Baeldung
Viewing all 4848 articles
Browse latest View live

Singleton Session Bean in Java EE

$
0
0

1. Overview

Whenever a single instance of a Session Bean is required for a given use-case, we can use a Singleton Session Bean.

In this tutorial, we’re going to explore the this through an example, with a Java EE application.

2. Maven

First of all, we need to define required Maven dependencies in the pom.xml.

Let’s define the dependencies for the EJB APIs and Embedded EJB container for deployment of the EJB:

<dependency>
    <groupId>javax</groupId>
    <artifactId>javaee-api</artifactId>
    <version>8.0</version>
    <scope>provided</scope>
</dependency>

<dependency>
    <groupId>org.apache.openejb</groupId>
    <artifactId>tomee-embedded</artifactId>
    <version>1.7.5</version>
</dependency>

Latest versions can be found on Maven Central at JavaEE API and tomEE.

3. Types of Session Beans

There are three types of Session Beans. Before we explore Singleton Session Beans, let’s see what is the difference between the lifecycles of the three types.

3.1. Stateful Session Beans

A Stateful Session Bean maintains the conversational state with the client it is communicating.

Each client creates a new instance of Stateful Bean and is not shared with other clients.

When the communication between the client and bean ends, the Session Bean also terminates.

3.2. Stateless Session Beans

A Stateless Session Bean doesn’t maintain any conversational state with the client. The bean contains the state specific to the client only till the duration of method invocation.

Consecutive method invocations are independent unlike with the Stateful Session Bean.

The container maintains a pool of Stateless Beans and these instances can be shared between multiple clients.

3.3. Singleton Session Beans

A Singleton Session Bean maintains the state of the bean for the complete lifecycle of the application.

Singleton Session Beans are similar to Stateless Session Beans but only one instance of the Singleton Session Bean is created in the whole application and does not terminates until the application is shut down.

The single instance of the bean is shared between multiple clients and can be concurrently accessed.

4. Creating a Singleton Session Bean

Let’s start by creating an interface for it.

For this example, let’s use the javax.ejb.Local annotation to define the interface:

@Local
public interface CountryState {
   List<String> getStates(String country);
   void setStates(String country, List<String> states);
}

Using @Local means the bean is accessed within the same application. We also have the option to use javax.ejb.Remote annotation which allows us to call the EJB remotely.

Now, we’ll define the implementation EJB bean class. We mark the class as a Singleton Session Bean by using the annotation javax.ejb.Singleton.

In addition, let’s also mark the bean with the javax.ejb.Startup annotation to inform the EJB container to initialize the bean at the startup:

@Singleton
@Startup
public class CountryStateContainerManagedBean implements CountryState {
    ...
}

This is called eager initialization. If we don’t use @Startup, the EJB container determines when to initialize the bean.

We can also define multiple Session Beans to initialize the data and load the beans in the specific order. Therefore, we’ll use, javax.ejb.DependsOn annotation to define our bean’s dependency on other Session Beans.

The value for the @DependsOn annotation is an array of the names of Bean class names that our Bean depends on:

@Singleton 
@Startup 
@DependsOn({"DependentBean1", "DependentBean2"}) 
public class CountryStateCacheBean implements CountryState { 
    ...
}

We’ll define an initialize() method which initializes the bean, and makes it a lifecycle callback method using javax.annotation.PostConstruct annotation.

With this annotation, it’ll be called by the container upon instantiation of the bean:

@PostConstruct
public void initialize() {

    List<String> states = new ArrayList<String>();
    states.add("Texas");
    states.add("Alabama");
    states.add("Alaska");
    states.add("Arizona");
    states.add("Arkansas");

    countryStatesMap.put("UnitedStates", states);
}

5. Concurrency

Next, we’ll design the concurrency management of Singleton Session Bean. EJB provides two methods for implementing concurrent access to the Singleton Session Bean: Container-managed concurrency, and Bean-managed concurrency.

The annotation javax.ejb.ConcurrencyManagement defines the concurrency policy for a method. By default, the EJB container uses container-managed concurrency.

The @ConcurrencyManagement annotation takes a javax.ejb.ConcurrencyManagementType value. The options are:

  • ConcurrencyManagementType.CONTAINER for container-managed concurrency.
  • ConcurrencyManagementType.BEAN for bean-managed concurrency.

5.1. Container-Managed Concurrency

Simply put, in container-managed concurrency, the container controls how clients’ access to methods.

Let’s use the @ConcurrencyManagement annotation with value javax.ejb.ConcurrencyManagementType.CONTAINER:

@Singleton
@Startup
@ConcurrencyManagement(ConcurrencyManagementType.CONTAINER)
public class CountryStateContainerManagedBean implements CountryState {
    ...
}

To specify the access level to each of the singleton’s business methods, we’ll use javax.ejb.Lock annotation. javax.ejb.LockType contains the values for the @Lock annotation. javax.ejb.LockType defines two values:

  • LockType.WRITE – This value provides an exclusive lock to the calling client and prevents all other clients from accessing all methods of the bean. Use this for methods that change the state of the singleton bean.
  • LockType.READThis value provides concurrent locks to multiple clients to access a method.
    Use this for methods which only read data from the bean.

With this in mind, we’ll define the setStates() method with @Lock(LockType.WRITE) annotation, to prevent simultaneous update of the state by clients.

To allow clients to read the data concurrently, we’ll annotate getStates() with @Lock(LockType.READ):

@Singleton 
@Startup 
@ConcurrencyManagement(ConcurrencyManagementType.CONTAINER) 
public class CountryStateContainerManagedBean implements CountryState { 

    private final Map<String, List<String> countryStatesMap = new HashMap<>();

    @Lock(LockType.READ) 
    public List<String> getStates(String country) { 
        return countryStatesMap.get(country);
    }

    @Lock(LockType.WRITE)
    public void setStates(String country, List<String> states) {
        countryStatesMap.put(country, states);
    }
}

To stop the methods execute for a long time and blocking the other clients indefinitely, we’ll use the javax.ejb.AccessTimeout annotation to timeout long-waiting calls.

Use the @AccessTimeout annotation to define the number of milliseconds method times-out. After the timeout, the container throws a javax.ejb.ConcurrentAccessTimeoutException and the method execution terminates.

5.2. Bean-Managed Concurrency

In Bean managed concurrency, the container doesn’t control simultaneous access of Singleton Session Bean by clients. The developer is required to implement concurrency by themselves.

Unless concurrency is implemented by the developer, all methods are accessible to all clients simultaneously. Java provides the synchronization and volatile primitives for implementing concurrency.

To find out more about concurrency read about java.util.concurrent here and Atomic Variables here.

For bean-managed concurrency, let’s define the @ConcurrencyManagement annotation with the javax.ejb.ConcurrencyManagementType.BEAN value for the Singleton Session Bean class:

@Singleton 
@Startup 
@ConcurrencyManagement(ConcurrencyManagementType.BEAN) 
public class CountryStateBeanManagedBean implements CountryState { 
   ... 
}

Next, we’ll write the setStates() method which changes the state of the bean using synchronized keyword:

public synchronized void setStates(String country, List<String> states) {
    countryStatesMap.put(country, states);
}

The synchronized keyword makes the method accessible by only one thread at a time.

The getStates() method doesn’t change the state of the Bean and so it doesn’t need to use the synchronized keyword.

6. Client

Now we can write the client to access our Singleton Session Bean.

We can deploy the Session Bean on application container servers like JBoss, Glassfish etc. To keep things simple, we will use the javax.ejb.embedded.EJBContainer class. EJBContainer runs in the same JVM as the client and provides most of the services of an enterprise bean container.

First, we’ll create an instance of EJBContainer. This container instance will search and initialize all the EJB modules present in the classpath:

public class CountryStateCacheBeanTest {

    private EJBContainer ejbContainer = null;

    private Context context = null;

    @Before
    public void init() {
        ejbContainer = EJBContainer.createEJBContainer();
        context = ejbContainer.getContext();
    }
}

Next, we’ll get the javax.naming.Context object from the initialized container object. Using the Context instance, we can get the reference to CountryStateContainerManagedBean and call the methods:

@Test
public void whenCallGetStatesFromContainerManagedBean_ReturnsStatesForCountry() throws Exception {

    String[] expectedStates = {"Texas", "Alabama", "Alaska", "Arizona", "Arkansas"};

    CountryState countryStateBean = (CountryState) context
      .lookup("java:global/singleton-ejb-bean/CountryStateContainerManagedBean");
    List<String> actualStates = countryStateBean.getStates("UnitedStates");

    assertNotNull(actualStates);
    assertArrayEquals(expectedStates, actualStates.toArray());
}

@Test
public void whenCallSetStatesFromContainerManagedBean_SetsStatesForCountry() throws Exception {

    String[] expectedStates = { "California", "Florida", "Hawaii", "Pennsylvania", "Michigan" };
 
    CountryState countryStateBean = (CountryState) context
      .lookup("java:global/singleton-ejb-bean/CountryStateContainerManagedBean");
    countryStateBean.setStates(
      "UnitedStates", Arrays.asList(expectedStates));
 
    List<String> actualStates = countryStateBean.getStates("UnitedStates");
    assertNotNull(actualStates);
    assertArrayEquals(expectedStates, actualStates.toArray());
}

Similarly, we can use the Context instance to get the reference for Bean-Managed Singleton Bean and call the respective methods:

@Test
public void whenCallGetStatesFromBeanManagedBean_ReturnsStatesForCountry() throws Exception {

    String[] expectedStates = { "Texas", "Alabama", "Alaska", "Arizona", "Arkansas" };

    CountryState countryStateBean = (CountryState) context
      .lookup("java:global/singleton-ejb-bean/CountryStateBeanManagedBean");
    List<String> actualStates = countryStateBean.getStates("UnitedStates");

    assertNotNull(actualStates);
    assertArrayEquals(expectedStates, actualStates.toArray());
}

@Test
public void whenCallSetStatesFromBeanManagedBean_SetsStatesForCountry() throws Exception {

    String[] expectedStates = { "California", "Florida", "Hawaii", "Pennsylvania", "Michigan" };

    CountryState countryStateBean = (CountryState) context
      .lookup("java:global/singleton-ejb-bean/CountryStateBeanManagedBean");
    countryStateBean.setStates("UnitedStates", Arrays.asList(expectedStates));

    List<String> actualStates = countryStateBean.getStates("UnitedStates");
    assertNotNull(actualStates);
    assertArrayEquals(expectedStates, actualStates.toArray());
}

End our tests by closing the EJBContainer in the close() method:

@After
public void close() {
    if (ejbContainer != null) {
        ejbContainer.close();
    }
}

7. Conclusion

Singleton Session Beans are just as flexible and powerful as any standard Session Bean but allow us to apply a Singleton pattern to share state across our application’s clients.

Concurrency management of the Singleton Bean could be easily implemented using Container-Managed Concurrency where the container takes care of concurrent access by multiple clients, or you could also implement your own custom concurrency management using Bean-Managed Concurrency.

The source code of this tutorial can be found over on GitHub.


Programmatic Configuration with Log4j 2

$
0
0

1. Introduction

In this tutorial, we’ll take a look at different ways to programmatically configure Apache Log4j 2.

2. Initial Setup

To start using Log4j 2, we merely need to include the log4j-core and log4j-slf4j-impl dependencies in our pom.xml:

<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <version>2.11.0</version>
</dependency>
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-slf4j-impl</artifactId>
    <version>2.11.0</version>
</dependency>

3. ConfigurationBuilder

Once we have Maven configured, then we need to create a ConfigurationBuilder, which is the class that lets us configure appenders, filters, layouts, and loggers.

Log4j 2 provides several ways to get a ConfigurationBuilder.

Let’s start with the most direct way:

ConfigurationBuilder<BuiltConfiguration> builder
 = ConfigurationBuilderFactory.newConfigurationBuilder();

And to begin configuring components, ConfigurationBuilder is equipped with a corresponding new method, like newAppender or newLayout, for each component.

Some components have different subtypes, like FileAppender or ConsoleAppender, and these are referred to in the API as plugins.

3.1. Configuring Appenders

Let’s tell the builder where to send each log line by configuring an appender:

AppenderComponentBuilder console 
  = builder.newAppender("stdout", "Console"); 

builder.add(console);

AppenderComponentBuilder file 
  = builder.newAppender("log", "File"); 
file.addAttribute("fileName", "target/logging.log");

builder.add(file);

While most new methods don’t support this, newAppender(name, plugin) allows us to give the appender a name, which will turn out to be important later on. These appenders, we’ve called stdout and log, though we could’ve named them anything.

We’ve also told the builder which appender plugin (or, more simply, which kind of appender) to use. Console and File refer to Log4j 2’s appenders for writing to standard out and the file system, respectively.

Though Log4j 2 supports several appenders, configuring them using Java can be a bit tricky since AppenderComponentBuilder is a generic class for all appender types.

This makes it have methods like addAttribute and addComponent instead of setFileName and addTriggeringPolicy:

AppenderComponentBuilder rollingFile 
  = builder.newAppender("rolling", "RollingFile");
rollingFile.addAttribute("fileName", "rolling.log");
rollingFile.addAttribute("filePattern", "rolling-%d{MM-dd-yy}.log.gz");
rollingFile.addComponent(triggeringPolicies);

builder.add(rollingFile);

And, finally, don’t forget to call builder.add to append it to the main configuration!

3.2. Configuring Filters

We can add filters to each of our appenders, which decide on each log line whether it should be appended or not.

Let’s use the MarkerFilter plugin on our console appender:

FilterComponentBuilder flow = builder.newFilter(
  "MarkerFilter", 
  Filter.Result.ACCEPT,
  Filter.Result.DENY);  
flow.addAttribute("marker", "FLOW");

console.add(flow);

Note that this new method doesn’t allow us to name the filter, but it does ask us to indicate what to do if the filter passes or fails.

In this case, we’ve kept it simple, stating that if the MarkerFilter passes, then ACCEPT the logline. Otherwise, DENY it.

Note in this case that we don’t append this to the builder but instead to the appenders that we want to use this filter.

3.3. Configuring Layouts

Next, let’s define the layout for each log line. In this case, we’ll use the PatternLayout plugin:

LayoutComponentBuilder standard 
  = builder.newLayout("PatternLayout");
standard.addAttribute("pattern", "%d [%t] %-5level: %msg%n%throwable");

console.add(standard);
file.add(standard);
rolling.add(standard);

Again, we’ve added these directly to the appropriate appenders instead of to the builder directly.

3.4. Configuring the Root Logger

Now that we know where logs will be shipped to, we want to configure which logs will go to each destination.

The root logger is the highest logger, kind of like Object in Java. This logger is what will be used by default unless overridden.

So, let’s use a root logger to set the default logging level to ERROR and the default appender to our stdout appender from above:

RootLoggerComponentBuilder rootLogger 
  = builder.newRootLogger(Level.ERROR);
rootLogger.add(builder.newAppenderRef("stdout"));

builder.add(rootLogger);

To point our logger at a specific appender, we don’t give it an instance of the builder. Instead, we refer to it by the name that we gave it earlier.

3.5. Configuring Additional Loggers

Child loggers can be used to target specific packages or logger names.

Let’s add a logger for the com package in our application, setting the logging level to DEBUG and having those go to our log appender:

LoggerComponentBuilder logger = builder.newLogger("com", Level.DEBUG);
logger.add(builder.newAppenderRef("log"));
logger.addAttribute("additivity", false);

builder.add(logger);

Note that we can set additivity with our loggers, which indicates whether this logger should inherit properties like logging level and appender types from its ancestors.

3.6. Configuring Other Components

Not all components have a dedicated new method on ConfigurationBuilder.

So, in that case, we call newComponent.

For example, because there isn’t a TriggeringPolicyComponentBuilder, we need to use newComponent for something like specifying our triggering policy for rolling file appenders:

ComponentBuilder triggeringPolicies = builder.newComponent("Policies")
  .addComponent(builder.newComponent("CronTriggeringPolicy")
    .addAttribute("schedule", "0 0 0 * * ?"))
  .addComponent(builder.newComponent("SizeBasedTriggeringPolicy")
    .addAttribute("size", "100M"));
 
rolling.addComponent(triggeringPolicies);

3.7. The XML Equivalent

ConfigurationBuilder comes equipped with a handy method to print out the equivalent XML:

builder.writeXMLConfiguration(System.out);

Running the above line prints out:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
   <Appenders>
      <Console name="stdout">
         <PatternLayout pattern="%d [%t] %-5level: %msg%n%throwable" />
         <MarkerFilter onMatch="ACCEPT" onMisMatch="DENY" marker="FLOW" />
      </Console>
      <RollingFile name="rolling" 
        fileName="target/rolling.log" 
        filePattern="target/archive/rolling-%d{MM-dd-yy}.log.gz">
         <PatternLayout pattern="%d [%t] %-5level: %msg%n%throwable" />
         <Policies>
            <CronTriggeringPolicy schedule="0 0 0 * * ?" />
            <SizeBasedTriggeringPolicy size="100M" />
         </Policies>
      </RollingFile>
      <File name="FileSystem" fileName="target/logging.log">
         <PatternLayout pattern="%d [%t] %-5level: %msg%n%throwable" />
      </File>
   </Appenders>
   <Loggers>
      <Logger name="com" level="DEBUG" additivity="false">
         <AppenderRef ref="log" />
      </Logger>
      <Root level="ERROR" additivity="true">
         <AppenderRef ref="stdout" />
      </Root>
   </Loggers>
</Configuration>

This comes in handy when we want to double-check our configuration or if we want to persist our configuration, say, to the file system.

3.8. Putting it all Together

Now that we are fully configured, let’s tell Log4j 2 to use our configuration:

Configurator.initialize(builder.build());

After this is invoked, future calls to Log4j 2 will use our configuration.

Note that this means that we need to invoke Configurator.initialize before we make any calls to LogManager.getLogger.

4. ConfigurationFactory

Now that we’ve seen one way to get and apply a ConfigurationBuilder, let’s take a look at one more:

public class CustomConfigFactory
  extends ConfigurationFactory {
 
    public Configuration createConfiguration(
      LoggerContext context, 
      ConfigurationSource src) {
 
        ConfigurationBuilder<BuiltConfiguration> builder = super
          .newConfigurationBuilder();

        // ... configure appenders, filters, etc.

        return builder.build();
    }

    public String[] getSupportedTypes() { 
        return new String[] { "*" };
    }
}

In this case, instead of using ConfigurationBuilderFactory, we subclassed ConfigurationFactory, an abstract class targetted for creating instances of Configuration.

Then, instead of calling Configurator.initialize like we did the first time, we simply need to let Log4j 2 know about our new configuration factory.

There are three ways to do this:

  • Static initialization
  • A runtime property, or
  • The @Plugin annotation

4.1. Use Static Initialization

Log4j 2 supports calling setConfigurationFactory during static initialization:

static {
    ConfigurationFactory custom = new CustomConfigFactory();
    ConfigurationFactory.setConfigurationFactory(custom);
}

This approach has the same limitation as for the last approach we saw, which is that we’ll need to invoke it before any calls to LogManager.getLogger.

4.2. Use a Runtime Property

If we have access to the Java startup command, then Log4j 2 also supports specifying the ConfigurationFactory to use via a -D parameter:

-Dlog4j2.configurationFactory=com.baeldung.log4j2.CustomConfigFactory

The main benefit of this approach is that we don’t have to worry about initialization order as we do with the first two approaches.

4.3. Use the @Plugin Annotation

And finally, in circumstances where we don’t want to fiddle with the Java startup command by adding a -D, we can simply annotate our CustomConfigurationFactory with the Log4j 2 @Plugin annotation:

@Plugin(
  name = "CustomConfigurationFactory", 
  category = ConfigurationFactory.CATEGORY)
@Order(50)
public class CustomConfigFactory
  extends ConfigurationFactory {

  // ... rest of implementation
}

Log4j 2 will scan the classpath for classes having the @Plugin annotation, and, finding this class in the ConfigurationFactory category, will use it.

4.4. Combining with Static Configuration

Another benefit to using a ConfigurationFactory extension is that we can easily combine our custom configuration with other configuration sources like XML:

public Configuration createConfiguration(
  LoggerContext context, 
  ConfigurationSource src) {
    return new WithXmlConfiguration(context, src);
}

The source parameter represents the static XML or JSON configuration file that Log4j 2 finds if any.

We can take that configuration file and send it to our custom implementation of XmlConfiguration where we can place whatever overriding configuration we need:

public class WithXmlConfiguration extends XmlConfiguration {
 
    @Override
    protected void doConfigure() {
        super.doConfigure(); // parse xml document

        // ... add our custom configuration
    }
}

5. Conclusion

In this article, we looked at how to use the new ConfigurationBuilder API available in Log4j 2.

We also took a look at customizing ConfigurationFactory in combination with ConfigurationBuilder for more advanced use cases.

Don’t forget to check out my complete examples over on GitHub.

Using Lombok’s @Builder Annotation

$
0
0

1. Overview

Project Lombok’s @Builder is a useful mechanism for using the Builder pattern without writing boilerplate code. We can apply this annotation to a Class or a method.

In this brief tutorial, we’ll look at the different use cases for @Builder.

2. Maven Dependencies

First, we need to add Project Lombok to our pom.xml:

<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <version>1.16.20.0</version>
</dependency>

Maven Central has the latest version of Project Lombok here.

3. Using @Builder on a Class

In the first use case, we’re simply implementing a Class, and we want to use a builder to create instances of our class.

The first and only step is to add the annotation to the class declaration:

@Getter
@Builder
public class Widget {
    private final String name;
    private final int id;
}

Lombok does all of the work for us. We can now build a Widget and test it:

Widget testWidget = Widget.builder()
  .name("foo")
  .id(1)
  .build();

assertThat(testWidget.getName())
  .isEqualTo("foo");
assertThat(testWidget.getId())
  .isEqualTo(1);

If we want to create copies or near-copies of objects, we can add the property toBuilder = true to the @Builder annotation:

@Builder(toBuilder = true)
public class Widget {
//...
}

This tells Lombok to add a toBuilder() method to our Class. When we invoke the toBuilder() method, it returns a builder initialized with the properties of the instance it is called on:

Widget testWidget = Widget.builder()
  .name("foo")
  .id(1)
  .build();

Widget.WidgetBuilder widgetBuilder = testWidget.toBuilder();

Widget newWidget = widgetBuilder.id(2).build();
assertThat(newWidget.getName())
  .isEqualTo("foo");
assertThat(newWidget.getId())
  .isEqualTo(2);

We can see in the test code that the builder class generated by Lombok is named like our class, with “Builder” appended to it — WidgetBuilder in this case. We can then modify the properties we wish and build() a new instance.

4. Using @Builder on a Method

Suppose we’re using an object that we want to construct with a builder, but we can’t modify the source or extend the Class.

First, let’s create a quick example using Lombok’s @Value annotation:

@Value
final class ImmutableClient {
    private int id;
    private String name;
}

Now we have a final Class with two immutable members, getters for them, and an all-arguments constructor.

We covered how to use @Builder on a Class, but we can use it on methods, too. We’ll use this ability to work around not being able to modify or extend ImmutableClient.

Next, we’ll create a new class with a method for creating ImmutableClients:

class ClientBuilder {

    @Builder(builderMethodName = "builder")
    public static ImmutableClient newClient(int id, String name) {
        return new ImmutableClient(id, name);
    }
}

This annotation creates a method named builder() that returns a Builder for creating ImmutableClients.

Now we can build an ImmutableClient:

ImmutableClient testImmutableClient = ClientBuilder.builder()
  .name("foo")
  .id(1)
  .build();
assertThat(testImmutableClient.getName())
  .isEqualTo("foo");
assertThat(testImmutableClient.getId())
  .isEqualTo(1);

5. Conclusion

In this article, we used Lombok’s @Builder annotation on a method to create a builder for a final Class.

Code samples, as always, can be found over on GitHub.

Introduction to Java Microservices with MSF4J

$
0
0

1. Overview

In this tutorial, we’ll showcase microservices development using the MSF4J framework.

This is a lightweight tool which provides an easy way to build a wide variety of services focused on high performance.

2. Maven Dependencies

We’ll need a bit more Maven configuration than usual to build an MSF4J-based microservice. The simplicity and the power of this framework do come at a price: basically, we need to define a parent artifact, as well as the main class:

<parent>
    <groupId>org.wso2.msf4j</groupId>
    <artifactId>msf4j-service</artifactId>
    <version>2.6.0</version>
</parent>

<properties>
    <microservice.mainClass>
        com.baeldung.msf4j.Application
    </microservice.mainClass>
</properties>

The latest version of msf4j-service can be found on Maven Central.

Next, we’ll show three different microservices scenarios. First a minimalistic example, then a RESTful API, and finally a Spring integration sample.

3. Basic Project

3.1. Simple API

We’re going to publish a simple web resource.

This service is provided with a class using some annotations where each method handles a request. Through these annotations, we set the method, the path, and the parameters required for each request.

The returned content type is just plain text:

@Path("/")
public class SimpleService {

    @GET
    public String index() {
        return "Default content";
    }

    @GET
    @Path("/say/{name}")
    public String say(@PathParam("name") String name) {
        return "Hello " + name;
    }
}

And remember that all classes and annotations used are just standard JAX-RS elements, which we already covered in this article.

3.2. Application

We can launch the microservice with this main class where we set, deploy and run the service defined earlier:

public class Application {
    public static void main(String[] args) {
        new MicroservicesRunner()
          .deploy(new SimpleService())
          .start();
    }
}

If we want, we can chain deploy calls here to run several services at once:

new MicroservicesRunner()
  .deploy(new SimpleService())
  .deploy(new ComplexService())
  .start()

3.3. Running the Microservice

To run the MSF4J microservice, we have a couple of options:

  1. On an IDE, running as a Java application
  2. Running the generated jar package

Once started, you can see the result at http://localhost:9090.

3.4. Startup Configurations

We can tweak the configuration in a lot of ways just by adding some clauses to the startup code.

For example, we can add any kind of interceptor for the requests:

new MicroservicesRunner()
  .addInterceptor(new MetricsInterceptor())
  .deploy(new SimpleService())
  .start();

Or, we can add a global interceptor, like one for authentication:

new MicroservicesRunner()
  .addGlobalRequestInterceptor(newUsernamePasswordSecurityInterceptor())
  .deploy(new SimpleService())
  .start();

Or, if we need session management, we can set a session manager:

new MicroservicesRunner()
  .deploy(new SimpleService())
  .setSessionManager(new PersistentSessionManager()) 
  .start();

For more details about each of this scenarios and to see some working samples, check out MSF4J’s official GitHub repo.

4. Building an API Microservice

We’ve shown the simplest example possible. Now we’ll move to a more realistic project.

This time, we show how to build an API with all the typical CRUD operations to manage a repository of meals.

4.1. The Model

The model is just a simple POJO representing a meal:

public class Meal {
    private String name;
    private Float price;

    // getters and setters
}

4.2. The API

We build the API as a web controller. Using standard annotations, we set each function with the following:

  • URL path
  • HTTP method: GET, POST, etc.
  • input (@Consumes) content type
  • output (@Produces) content type

So, let’s create a method for each standard CRUD operation:

@Path("/menu")
public class MenuService {

    private List<Meal> meals = new ArrayList<Meal>();

    @GET
    @Path("/")
    @Produces({ "application/json" })
    public Response index() {
        return Response.ok()
          .entity(meals)
          .build();
    }

    @GET
    @Path("/{id}")
    @Produces({ "application/json" })
    public Response meal(@PathParam("id") int id) {
        return Response.ok()
          .entity(meals.get(id))
          .build();
    }

    @POST
    @Path("/")
    @Consumes("application/json")
    @Produces({ "application/json" })
    public Response create(Meal meal) {
        meals.add(meal);
        return Response.ok()
          .entity(meal)
          .build();
    }

    // ... other CRUD operations
}

4.3. Data Conversion Features

MSF4J offers support for different data conversion libraries such as GSON (which comes by default) and Jackson (through the msf4j-feature dependency). For example, we can use GSON explicitly:

@GET
@Path("/{id}")
@Produces({ "application/json" })
public String meal(@PathParam("id") int id) {
    Gson gson = new Gson();
    return gson.toJson(meals.get(id));
}

In passing, note that we’ve used curly braces in both @Consumes and @Produces annotation so we can set more than one mime type.

4.4. Running the API Microservice

We run the microservice just as we did in the previous example, through an Application class that publishes the MenuService.

Once started, you can see the result at http://localhost:9090/menu.

5. MSF4J and Spring

We can also apply Spring in our MSF4J based microservices, from which we’ll get its dependency injection features.

5.1. Maven Dependencies

We’ll have to add the appropriate dependencies to the previous Maven configuration to add Spring and Mustache support:

<dependencies>
    <dependency>
        <groupId>org.wso2.msf4j</groupId>
        <artifactId>msf4j-spring</artifactId>
        <version>2.6.1</version>
    </dependency>
    <dependency>
        <groupId>org.wso2.msf4j</groupId>
        <artifactId>msf4j-mustache-template</artifactId>
        <version>2.6.1</version>
    </dependency>
</dependencies>

The latest version of msf4j-spring and msf4j-mustache-template can be found on Maven Central.

5.2. Meal API

This API is just a simple service, using a mock meal repository. Notice how we use Spring annotations for auto-wiring and to set this class as a Spring service component.

@Service
public class MealService {
 
    @Autowired
    private MealRepository mealRepository;

    public Meal find(int id) {
        return mealRepository.find(id);
    }

    public List<Meal> findAll() {
        return mealRepository.findAll();
    }

    public void create(Meal meal) {
        mealRepository.create(meal);
    }
}

5.3. Controller

We declare the controller as a component and Spring provides the service through auto-wiring. The first method shows how to serve a Mustache template and the second a JSON resource:

@Component
@Path("/meal")
public class MealResource {

    @Autowired
    private MealService mealService;

    @GET
    @Path("/")
    public Response all() {
        Map map = Collections.singletonMap("meals", mealService.findAll());
        String html = MustacheTemplateEngine.instance()
          .render("meals.mustache", map);
        return Response.ok()
          .type(MediaType.TEXT_HTML)
          .entity(html)
          .build();
    }

    @GET
    @Path("/{id}")
    @Produces({ "application/json" })
    public Response meal(@PathParam("id") int id) {
        return Response.ok()
          .entity(mealService.find(id))
          .build();
    }

}

5.4. Main Program

In the Spring scenario, this is how we get the microservice started:

public class Application {

    public static void main(String[] args) {
        MSF4JSpringApplication.run(Application.class, args);
    }
}

Once started, we can see the result at http://localhost:8080/meals. The default port differs in Spring projects, but we can set it to whatever port we want.

5.5. Configuration Beans

To enable specific settings, including interceptors and session management, we can add configuration beans.

For example, this one changes the default port for the microservice:

@Configuration
public class PortConfiguration {

    @Bean
    public HTTPTransportConfig http() {
        return new HTTPTransportConfig(9090);
    }

}

6. Conclusion

In this article, we’ve introduced the MSF4J framework, applying different scenarios to build Java-based microservices.

There is a lot of buzz around this concept, but some theoretical background has been already set, and MSF4J provides a convenient and standardized way to apply this pattern.

Also, for some further reading, take a look at building Microservices with Eclipse Microprofile, and of course our guide on Spring Microservices with Spring Boot and Spring Cloud.

And finally, all the examples here can be found in the GitHub repo.

NaN in Java

$
0
0

1. Overview

Simply put, NaN is a numeric data type value which stands for “not a number”.

In this quick tutorial, we’ll explain the NaN value in Java and the various operations that can produce or involve this value.

2. What is NaN?

NaN usually indicates the result of invalid operations. For example, attempting to divide zero by zero is one such operation.

We also use NaN for unrepresentable values. The square root of -1 is one such case, as we can describe the value (i) only in complex numbers.

The IEEE Standard for Floating-Point Arithmetic (IEEE 754) defines the NaN value. In Java, the floating-point types float and double implement this standard.

Java defines NaN constants of both float and double types as Float.NaN and Double.NaN:

A constant holding a Not-a-Number (NaN) value of type double. It is equivalent to the value returned by Double.longBitsToDouble(0x7ff8000000000000L).”

and:

“A constant holding a Not-a-Number (NaN) value of type float. It is equivalent to the value returned by Float.intBitsToFloat(0x7fc00000).”

We don’t have this type of constants for other numeric data types in Java.

3. Comparisons with NaN

While writing methods in Java, we should check that the input is valid and within the expected range. NaN value is not a valid input in most cases. Therefore, we should verify that the input value is not a NaN value and handle these input values appropriately.

NaN cannot be compared with any floating type value. This means that we’ll get false for all comparison operations involving NaN (except “!=” for which we get true).

We get true for “x != x” if and only if x is NaN:

System.out.println("NaN == 1 = " + (NAN == 1));
System.out.println("NaN > 1 = " + (NAN > 1));
System.out.println("NaN < 1 = " + (NAN < 1));
System.out.println("NaN != 1 = " + (NAN != 1));
System.out.println("NaN == NaN = " + (NAN == NAN));
System.out.println("NaN > NaN = " + (NAN > NAN));
System.out.println("NaN < NaN = " + (NAN < NAN));
System.out.println("NaN != NaN = " + (NAN != NAN));

Let’s have a look at the result of running the code above:

NaN == 1 = false
NaN > 1 = false
NaN < 1 = false
NaN != 1 = true
NaN == NaN = false
NaN > NaN = false
NaN < NaN = false
NaN != NaN = true

Hence, we cannot check for NaN by comparing with NaN using “==” or “!= “. In fact, we should rarely use “==” or “!= ” operators with float or double types.

Instead, we can use the expression “x != x”. This expression returns true only for NAN.

We can also use the methods Float.isNaN and Double.isNaN to check for these values. This is the preferred approach as it’s more readable and understandable:

double x = 1;
System.out.println(x + " is NaN = " + (x != x));
System.out.println(x + " is NaN = " + (Double.isNaN(x)));
        
x = Double.NaN;
System.out.println(x + " is NaN = " + (x != x));
System.out.println(x + " is NaN = " + (Double.isNaN(x)));

We’ll get the following result when running this code:

1.0 is NaN = false
1.0 is NaN = false
NaN is NaN = true
NaN is NaN = true

4. Operations Producing NaN

While doing operations involving float and double types, we need to be aware of the NaN values.

Some floating-point methods and operations produce NaN values instead of throwing an Exception. We may need to handle such results explicitly.

A common case resulting in not-a-number values are mathematically undefined numerical operations:

double ZERO = 0;
System.out.println("ZERO / ZERO = " + (ZERO / ZERO));
System.out.println("INFINITY - INFINITY = " + 
  (Double.POSITIVE_INFINITY - Double.POSITIVE_INFINITY));
System.out.println("INFINITY * ZERO = " + (Double.POSITIVE_INFINITY * ZERO));

These examples result in the following output:

ZERO / ZERO = NaN
INFINITY - INFINITY = NaN
INFINITY * ZERO = NaN

Numerical operations which don’t have results in real numbers also produce NaN:

System.out.println("SQUARE ROOT OF -1 = " + Math.sqrt(-1));
System.out.println("LOG OF -1 = " +  Math.log(-1));

These statements will result in:

SQUARE ROOT OF -1 = NaN
LOG OF -1 = NaN

All numeric operations with NaN as an operand produce NaN as a result:

System.out.println("2 + NaN = " +  (2 + Double.NaN));
System.out.println("2 - NaN = " +  (2 - Double.NaN));
System.out.println("2 * NaN = " +  (2 * Double.NaN));
System.out.println("2 / NaN = " +  (2 / Double.NaN));

And the result of the above is:

2 + NaN = NaN
2 - NaN = NaN
2 * NaN = NaN
2 / NaN = NaN

Finally, we cannot assign null to double or float type variables. Instead, we may explicitly assign NaN to such variables to indicate missing or unknown values:

double maxValue = Double.NaN;

5. Conclusion

In this article, we discussed NaN and the various operations involving it. We also discussed the need to handle NaN while doing floating-point computations in Java explicitly.

The full source code can be found over on GitHub.

Java Weekly, Issue 230

$
0
0

Here we go…

1. Spring and Java

>> Java & Docker: Java 10 improvements strengthen the friendship! [aboullaite.me]

Java 10 is finally fully suitable for working with Docker 🙂

>> Metrics with Spring Boot 2.0 – Counters and gauges [blog.frankel.ch]

These two concepts are key to understanding the new metrics functionality in Spring Boot 2 (and in general).

>> TestContainers and Spring Boot [java-allandsundry.com]

TestContainers greatly simplify the process of starting Docker containers for the sake of testing – highly recommended.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

>> Docker Hub vs Creating a Local Docker Registry [code-maze.com]

The title says all.

>> The Benefits of Side Projects [techblog.bozho.net]

It’s always a good idea to not really on only a single source of income… or experience.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Second Opinion [dilbert.com]

>> Question and Answer [dilbert.com]

>> Idea Stealing [dilbert.com]

5. Pick of the Week

Just a quick reminder:

>> Concurrency != Parallelism [monades.roperzh.com]

Guide to Java 10

$
0
0

1. Introduction

JDK 10, which is an implementation of Java SE 10, was released on March 20, 2018.

In this article, we’ll cover and explore the new features and changes introduced in JDK 10.

2. Local Variable Type Inference

Follow the link for an in-depth article on this feature:

Java 10 Local Variable Type Inference

3. Unmodifiable Collections

There are a couple of changes related to unmodifiable collections in Java 10.

3.1. copyOf()

java.util.List, java.util.Map and java.util.Set each got a new static method copyOf(Collection).

It returns the unmodifiable copy of the given Collection:

@Test(expected = UnsupportedOperationException.class)
public void whenModifyCopyOfList_thenThrowsException() {
    List<Integer> copyList = List.copyOf(someIntList);
    copyList.add(4);
}

Any attempt to modify such a collection would result in java.lang.UnsupportedOperationExceptionruntime exception.

3.2. toUnmodifiable*()

java.util.stream.Collectors get additional methods to collect a Stream into unmodifiable List, Map or Set:

@Test(expected = UnsupportedOperationException.class)
public void whenModifyToUnmodifiableList_thenThrowsException() {
    List<Integer> evenList = someIntList.stream()
      .filter(i -> i % 2 == 0)
      .collect(Collectors.toUnmodifiableList());
    evenList.add(4);
}

Any attempt to modify such a collection would result in java.lang.UnsupportedOperationExceptionruntime exception.

4. Optional*.orElseThrow()

java.util.Optional, java.util.OptionalDouble, java.util.OptionalIntand java.util.OptionalLongeach got a new method orElseThrow()which doesn’t take any argument and throws NoSuchElementExceptionif no value is present:

@Test
public void whenListContainsInteger_OrElseThrowReturnsInteger() {
    Integer firstEven = someIntList.stream()
      .filter(i -> i % 2 == 0)
      .findFirst()
      .orElseThrow();
    is(firstEven).equals(Integer.valueOf(2));
}

It’s synonymous with and is now the preferred alternative to the existing get()method.

5. Performance Improvements

Follow the link for an in-depth article on this feature:

Java 10 Performance Improvements

6. Container Awareness

JVMs are now aware of being run in a Docker container and will extract container-specific configuration instead of querying the operating system itself – it applies to data like the number of CPUs and total memory that have been allocated to the container.

However, this support is only available for Linux-based platforms. This new support is enabled by default and can be disabled in the command line with the JVM option:

-XX:-UseContainerSupport

Also, this change adds a JVM option that provides the ability to specify the number of CPUs that the JVM will use:

-XX:ActiveProcessorCount=count

Also, three new JVM options have been added to allow Docker container users to gain more fine-grained control over the amount of system memory that will be used for the Java Heap:

-XX:InitialRAMPercentage
-XX:MaxRAMPercentage
-XX:MinRAMPercentage

7. Root Certificates

The cacerts keystore, which was initially empty so far, is intended to contain a set of root certificates that can be used to establish trust in the certificate chains used by various security protocols.

As a result, critical security components such as TLS didn’t work by default under OpenJDK builds.

With Java 10, Oracle has open-sourced the root certificates in Oracle’s Java SE Root CA program in order to make OpenJDK builds more attractive to developers and to reduce the differences between those builds and Oracle JDK builds.

8. Deprecations and Removals

8.1. Command Line Options and Tools

Tool javah has been removed from Java 10 which generated C headers and source files which were required to implement native methods – now, javac -h can be used instead.

policytool was the UI based tool for policy file creation and management. This has now been removed. The user can use simple text editor for performing this operation.

Removed java -Xprofoption. The option was used to profile the running program and send profiling data to standard output. The user should now use jmap tool instead.

8.2. APIs

Deprecated java.security.acl package has been marked forRemoval=true and is subject to removal in a future version of Java SE. It’s been replaced by java.security.Policy and related classes.

Similarly, java.security.{Certificate,Identity,IdentityScope,Signer} APIs are marked forRemoval=true.

9. Time-Based Release Versioning

Starting with Java 10, Oracle has moved to the time-based release of Java. This has following implications:

  1. A new Java release every six months. The March 2018 release is JDK 10, the September 2018 release is JDK 11, and so forth. These are called feature releases and are expected to contain at least one or two significant features
  2. Support for the feature release will last only for six months, i.e., until next feature release
  3. Long-term support release will be marked as LTS. Support for such release will be for three years
  4. Java 11 will be an LTS release

java -version will now contain the GA date, making it easier to identify how old the release is:

$ java -version
openjdk version "10" 2018-03-20
OpenJDK Runtime Environment 18.3 (build 10+46)
OpenJDK 64-Bit Server VM 18.3 (build 10+46, mixed mode)

10. Conclusion

In this article, we saw the new features and changes brought in by Java 10.

As usual, code snippets can be found over on GitHub.

On Kotlin

$
0
0

Guide to JNI (Java Native Interface)

$
0
0

1. Introduction

As we know, one of the main strengths of Java is its portability – meaning that once we write and compile code, the result of this process is platform-independent bytecode.

Simply put, this can run on any machine or device capable of running a Java Virtual Machine, and it will work as seamlessly as we could expect.

However, sometimes we do actually need to use code that’s natively-compiled for a specific architecture.

There could be some reasons for needing to use native code:

  • The need to handle some hardware
  • Performance improvement for a very demanding process
  • An existing library that we want to reuse instead of rewriting it in Java.

To achieve this, the JDK introduces a bridge between the bytecode running in our JVM and the native code (usually written in C or C++).

The tool is called Java Native Interface. In this article, we’ll see how it is to write some code with it.

2. How It Works

2.1. Native Methods: The JVM Meets Compiled Code

Java provides the native keyword that’s used to indicate that the method implementation will be provided by a native code.

Normally, when making a native executable program, we can choose to use static or shared libs:

  • Static libs – all library binaries will be included as part of our executable during the linking process. Thus, we won’t need the libs anymore, but it’ll increase the size of our executable file.
  • Shared libs – the final executable only has references to the libs, not the code itself. It requires that the environment in which we run our executable has access to all the files of the libs used by our program.

The latter is what makes sense for JNI as we can’t mix bytecode and natively compiled code into the same binary file.

Therefore, our shared lib will keep the native code separately within its .so/.dll/.dylib file (depending on which Operating System we’re using) instead of being part of our classes.

The native keyword transforms our method into a sort of abstract method:

private native void aNativeMethod();

With the main difference that instead of being implemented by another Java class, it will be implemented in a separated native shared library.

A table with pointers in memory to the implementation of all of our native methods will be constructed so they can be called from our Java code.

2.2. Components Needed

Here’s a brief description of the key components that we need to take into account. We’ll explain them further later in this article

  • Java Code – our classes. They will include at least one native method.
  • Native Code – the actual logic of our native methods, usually coded in C or C++.
  • JNI header file – this header file for C/C++ (include/jni.h into the JDK directory) includes all definitions of JNI elements that we may use into our native programs.
  • C/C++ Compiler – we can choose between GCC, Clang, Visual Studio, or any other we like as far as it’s able to generate a native shared library for our platform.

2.3. JNI Elements In Code (Java And C/C++)

Java elements:

  • “native” keyword – as we’ve already covered, any method marked as native must be implemented in a native, shared lib.
  • System.loadLibrary(String libname) – a static method that loads a shared library from the file system into memory and makes its exported functions available for our Java code.

C/C++ elements (many of them defined within jni.h)

  • JNIEXPORT- marks the function into the shared lib as exportable so it will be included in the function table, and thus JNI can find it
  • JNICALL – combined with JNIEXPORT, it ensures that our methods are available for the JNI framework
  • JNIEnv – a structure containing methods that we can use our native code to access Java elements
  • JavaVM – a structure that lets us manipulate a running JVM (or even start a new one) adding threads to it, destroying it, etc…

3. Hello World JNI

Next, let’s look at how JNI works in practice.

In this tutorial, we’ll use C++ as the native language and G++ as compiler and linker.

We can use any other compiler of our preference, but here’s how to install G++ on Ubuntu, Windows, and MacOS:

  • Ubuntu Linux – run command “sudo apt-get install build-essential” in a terminal
  • Windows – Install MinGW
  • MacOS – run command “g++” in a terminal and if it’s not yet present, it will install it.

3.1. Creating the Java Class

Let’s start creating our first JNI program by implementing a classic “Hello World”.

To begin, we create the following Java class that includes the native method that will perform the work:

package com.baeldung.jni;

public class HelloWorldJNI {

    static {
        System.loadLibrary("native");
    }
    
    public static void main(String[] args) {
        new HelloWorldJNI().sayHello();
    }

    // Declare a native method sayHello() that receives no arguments and returns void
    private native void sayHello();
}

As we can see, we load the shared library in a static block. This ensures that it will be ready when we need it and from wherever we need it.

Alternatively, in this trivial program, we could instead load the library just before calling our native method because we’re not using the native library anywhere else.

3.2. Implementing a Method in C++

Now, we need to create the implementation of our native method in C++.

Within C++ the definition and the implementation are usually stored in .h and .cpp files respectively.

First, to create the definition of the method, we have to use the -h flag of the Java compiler:

javac -h . HelloWorldJNI.java

This will generate a com_baeldung_jni_HelloWorldJNI.h file with all the native methods included in the class passed as a parameter, in this case, only one:

JNIEXPORT void JNICALL Java_com_baeldung_jni_HelloWorldJNI_sayHello
  (JNIEnv *, jobject);

As we can see, the function name is automatically generated using the fully qualified package, class and method name.

Also, something interesting that we can notice is that we’re getting two parameters passed to our function; a pointer to the current JNIEnv; and also the Java object that the method is attached to, the instance of our HelloWorldJNI class.

Now, we have to create a new .cpp file for the implementation of the sayHello function. This is where we’ll perform actions that print “Hello World” to console.

We’ll name our .cpp file with the same name as the .h one containing the header and add this code to implement the native function:

JNIEXPORT void JNICALL Java_com_baeldung_jni_HelloWorldJNI_sayHello
  (JNIEnv* env, jobject thisObject) {
    std::cout << "Hello from C++ !!" << std::endl;
}

3.3. Compiling And Linking

At this point, we have all parts we need in place and have a connection between them.

We need to build our shared library from the C++ code and run it!

To do so, we have to use G++ compiler, not forgetting to include the JNI headers from our Java JDK installation.

Ubuntu version:

g++ -c -fPIC -I${JAVA_HOME}/include -I${JAVA_HOME}/include/linux com_baeldung_jni_HelloWorldJNI.cpp -o com_baeldung_jni_HelloWorldJNI.o

Windows version:

g++ -c -I%JAVA_HOME%\include -I%JAVA_HOME%\include\win32 com_baeldung_jni_HelloWorldJNI.cpp -o com_baeldung_jni_HelloWorldJNI.o

MacOS version;

g++ -c -fPIC -I${JAVA_HOME}/include -I${JAVA_HOME}/include/darwin com_baeldung_jni_HelloWorldJNI.cpp -o com_baeldung_jni_HelloWorldJNI.o

Once we have the code compiled for our platform into the file com_baeldung_jni_HelloWorldJNI.o, we have to include it in a new shared library. Whatever we decide to name it is the argument passed into the method System.loadLibrary.

We named ours “native”, and we’ll load it when running our Java code.

The G++ linker then links the C++ object files into our bridged library.

Ubuntu version:

g++ -shared -fPIC -o libnative.so com_baeldung_jni_HelloWorldJNI.o -lc

Windows version:

g++ -shared -o native.dll com_baeldung_jni_HelloWorldJNI.o -Wl,--add-stdcall-alias

MacOS version:

g++ -dynamiclib -o libnative.dylib com_baeldung_jni_HelloWorldJNI.o -lc

And that’s it!

We can now run our program from the command line.

However, we need to add the full path to the directory containing the library we’ve just generated. This way Java will know where to look for our native libs:

java -cp . -Djava.library.path=/NATIVE_SHARED_LIB_FOLDER com.baeldung.jni.HelloWorldJNI

Console output:

Hello from C++ !!

4. Using Advanced JNI Features

Saying hello is nice but not very useful. Usually, we would like to exchange data between Java and C++ code and manage this data in our program.

4.1. Adding Parameters To Our Native Methods

We’ll add some parameters to our native methods. Let’s create a new class called ExampleParametersJNI with two native methods using parameters and returns of different types:

private native long sumIntegers(int first, int second);
    
private native String sayHelloToMe(String name, boolean isFemale);

And then, repeat the procedure to create a new .h file with “javac -h” as we did before.

Now create the corresponding .cpp file with the implementation of the new C++ method:

...
JNIEXPORT jlong JNICALL Java_com_baeldung_jni_ExampleParametersJNI_sumIntegers 
  (JNIEnv* env, jobject thisObject, jint first, jint second) {
    std::cout << "C++: The numbers received are : " << first << " and " << second << std::endl;
    return (long)first + (long)second;
}
JNIEXPORT jstring JNICALL Java_com_baeldung_jni_ExampleParametersJNI_sayHelloToMe 
  (JNIEnv* env, jobject thisObject, jstring name, jboolean isFemale) {
    const char* nameCharPointer = env->GetStringUTFChars(name, NULL);
    std::string title;
    if(isFemale) {
        title = "Ms. ";
    }
    else {
        title = "Mr. ";
    }

    std::string fullName = title + nameCharPointer;
    // release the characters obtained from the Java string to avoid a memory leak
    env->ReleaseStringUTFChars(name, nameCharPointer);
    return env->NewStringUTF(fullName.c_str());
}
...

We’ve used the env pointer of type JNIEnv* to access the methods provided by the JNI environment instance.

JNIEnv allows us, in this case, to pass Java Strings into our C++ code and back out without worrying about the implementation.

We can check the equivalence of Java types and C JNI types in the official Oracle documentation.

To test our code, we have to repeat all the compilation steps of the previous HelloWorld example.
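
Once everything is rebuilt, we can exercise the new methods from a main method inside the class itself – a sketch on our part, since the native methods above are private:

public static void main(String[] args) {
    System.loadLibrary("native");
    ExampleParametersJNI obj = new ExampleParametersJNI();
    System.out.println(obj.sumIntegers(200, 400));      // 600
    System.out.println(obj.sayHelloToMe("Mary", true)); // Ms. Mary
}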

4.2. Using Objects And Calling Java Methods From Native Code

In this last example, we’re going to see how we can manipulate Java objects from our native C++ code.

We’ll start by creating a new class UserData that we’ll use to store some user info:

package com.baeldung.jni;

public class UserData {
    
    public String name;
    public double balance;
    
    public String getUserInfo() {
        return "[name]=" + name + ", [balance]=" + balance;
    }
}

Then, we’ll create another Java class called ExampleObjectsJNI with some native methods with which we’ll manage objects of type UserData:

...
public native UserData createUser(String name, double balance);
    
public native String printUserData(UserData user);

One more time, let’s create the .h header, and then the C++ implementation of our native methods in a new .cpp file:

JNIEXPORT jobject JNICALL Java_com_baeldung_jni_ExampleObjectsJNI_createUser
  (JNIEnv *env, jobject thisObject, jstring name, jdouble balance) {
  
    // Create the object of the class UserData
    jclass userDataClass = env->FindClass("com/baeldung/jni/UserData");
    jobject newUserData = env->AllocObject(userDataClass);
	
    // Get the UserData fields to be set
    jfieldID nameField = env->GetFieldID(userDataClass , "name", "Ljava/lang/String;");
    jfieldID balanceField = env->GetFieldID(userDataClass , "balance", "D");
	
    env->SetObjectField(newUserData, nameField, name);
    env->SetDoubleField(newUserData, balanceField, balance);
    
    return newUserData;
}

JNIEXPORT jstring JNICALL Java_com_baeldung_jni_ExampleObjectsJNI_printUserData
  (JNIEnv *env, jobject thisObject, jobject userData) {
  	
    // Find the id of the Java method to be called
    jclass userDataClass=env->GetObjectClass(userData);
    jmethodID methodId=env->GetMethodID(userDataClass, "getUserInfo", "()Ljava/lang/String;");

    jstring result = (jstring)env->CallObjectMethod(userData, methodId);
    return result;
}

Again, we’re using the JNIEnv *env pointer to access the needed classes, objects, fields and methods from the running JVM.

Normally, we just need to provide the full class name to access a Java class, or the correct method name and signature to access an object method.

We’re even creating an instance of the class com.baeldung.jni.UserData in our native code. Once we have the instance, we can manipulate all its properties and methods in a way similar to Java reflection.

We can check all the other methods of JNIEnv in the official Oracle documentation.

5. Disadvantages Of Using JNI

JNI bridging does have its pitfalls.

The main downside is the dependency on the underlying platform; we essentially lose the “write once, run anywhere” feature of Java. This means that we’ll have to build a new lib for each new combination of platform and architecture we want to support. Imagine the impact that this could have on the build process if we supported Windows, Linux, Android, MacOS…

JNI not only adds a layer of complexity to our program, it also adds a costly layer of communication between the code running in the JVM and our native code: we need to convert the data exchanged in both directions between Java and C++ in a marshaling/unmarshaling process.

Sometimes there isn’t even a direct conversion between types, so we’ll have to write our own equivalent.

6. Conclusion

Compiling the code for a specific platform (usually) makes it faster than running bytecode.

This makes it useful when we need to speed up a demanding process, or when we don’t have other alternatives, such as when we need to use a library that manages a device.

However, this comes at a price as we’ll have to maintain additional code for each different platform we support.

That’s why it’s usually a good idea to only use JNI in the cases where there’s no Java alternative.

As always the code for this article is available over on GitHub.

Spring Boot Configuration with Jasypt

1. Introduction

Jasypt (Java Simplified Encryption) Spring Boot provides utilities for encrypting property sources in Boot applications.

In this article, we’ll discuss how we can add jasypt-spring-boot‘s support and use it.

For more information on using Jasypt as a framework for encryption, take a look at our Introduction to Jasypt here.

2. Why Jasypt?

Whenever we need to store sensitive information in the configuration file, we’re essentially making that information vulnerable; this includes any kind of sensitive information, such as credentials, but certainly a lot more than that.

By using Jasypt, we can provide encryption for the property file attributes and our application will do the job of decrypting it and retrieving the original value.

3. Ways to Use JASYPT with Spring Boot

Let’s discuss the different ways to use Jasypt with Spring Boot.

3.1. Using jasypt-spring-boot-starter

We need to add a single dependency to our project:

<dependency>
    <groupId>com.github.ulisesbocchio</groupId>
    <artifactId>jasypt-spring-boot-starter</artifactId>
    <version>2.0.0</version>
</dependency>

Maven Central has the latest version of the jasypt-spring-boot-starter.

Let’s now encrypt the text “Password@1” with secret key “password” and add it to the encrypted.properties:

encrypted.property=ENC(uTSqb9grs1+vUv3iN8lItC0kl65lMG+8)
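
As a side note, one way to produce such an encrypted value is Jasypt’s command-line tool – a quick sketch, where the jar version and location are assumptions:

java -cp jasypt-1.9.2.jar org.jasypt.intf.cli.JasyptPBEStringEncryptionCLI \
  input="Password@1" password=password algorithm=PBEWithMD5AndDES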

And let’s define a configuration class AppConfigForJasyptStarter – to specify the encrypted.properties file as a PropertySource :

@Configuration
@PropertySource("encrypted.properties")
public class AppConfigForJasyptStarter {
}

Now, we’ll write a service bean PropertyServiceForJasyptStarter to retrieve the values from the encrypted.properties. The decrypted value can be retrieved using the @Value annotation or the getProperty() method of Environment class:

@Service
public class PropertyServiceForJasyptStarter {

    @Value("${encrypted.property}")
    private String property;

    public String getProperty() {
        return property;
    }

    public String getPasswordUsingEnvironment(Environment environment) {
        return environment.getProperty("encrypted.property");
    }
}

Finally, using the above service class and setting the secret key which we used for encryption, we can easily retrieve the decrypted password and use it in our application:

@Test
public void whenDecryptedPasswordNeeded_GetFromService() {
    System.setProperty("jasypt.encryptor.password", "password");
    PropertyServiceForJasyptStarter service = appCtx
      .getBean(PropertyServiceForJasyptStarter.class);
 
    assertEquals("Password@1", service.getProperty());
 
    Environment environment = appCtx.getBean(Environment.class);
 
    assertEquals(
      "Password@1", 
      service.getPasswordUsingEnvironment(environment));
}

3.2. Using jasypt-spring-boot

For projects not using @SpringBootApplication or @EnableAutoConfiguration, we can use the jasypt-spring-boot dependency directly:

<dependency>
    <groupId>com.github.ulisesbocchio</groupId>
    <artifactId>jasypt-spring-boot</artifactId>
    <version>2.0.0</version>
</dependency>

Similarly, let’s encrypt the text “Password@2” with secret key “password” and add it to the encryptedv2.properties:

encryptedv2.property=ENC(dQWokHUXXFe+OqXRZYWu22BpXoRZ0Drt)

And let’s have a new configuration class for jasypt-spring-boot dependency.

Here, we need to add the annotation @EncryptablePropertySource :

@Configuration
@EncryptablePropertySource("encryptedv2.properties")
public class AppConfigForJasyptSimple {
}

Also, a new PropertyServiceForJasyptSimple bean to return encryptedv2.properties is defined:

@Service
public class PropertyServiceForJasyptSimple {
 
    @Value("${encryptedv2.property}")
    private String property;

    public String getProperty() {
        return property;
    }
}

Finally, using the above service class and setting the secret key which we used for encryption, we can easily retrieve the encryptedv2.property:

@Test
public void whenDecryptedPasswordNeeded_GetFromService() {
    System.setProperty("jasypt.encryptor.password", "password");
    PropertyServiceForJasyptSimple service = appCtx
      .getBean(PropertyServiceForJasyptSimple.class);
 
    assertEquals("Password@2", service.getProperty());
}

3.3. Using Custom JASYPT Encryptor

The encryptors defined in sections 3.1 and 3.2 are constructed with the default configuration values.

However, let’s go ahead and define our own Jasypt encryptor and try to use it in our application.

So, the custom encryptor bean will look like:

@Bean(name = "encryptorBean")
public StringEncryptor stringEncryptor() {
    PooledPBEStringEncryptor encryptor = new PooledPBEStringEncryptor();
    SimpleStringPBEConfig config = new SimpleStringPBEConfig();
    config.setPassword("password");
    config.setAlgorithm("PBEWithMD5AndDES");
    config.setKeyObtentionIterations("1000");
    config.setPoolSize("1");
    config.setProviderName("SunJCE");
    config.setSaltGeneratorClassName("org.jasypt.salt.RandomSaltGenerator");
    config.setStringOutputType("base64");
    encryptor.setConfig(config);
    return encryptor;
}

Furthermore, we can modify all the properties for the SimpleStringPBEConfig.

Also, we need to add a property “jasypt.encryptor.bean” to our application.properties, so that Spring Boot knows which Custom Encryptor it should use.

For example, we add the custom text “Password@3” encrypted with secret key “password” in the application.properties:

jasypt.encryptor.bean=encryptorBean
encryptedv3.property=ENC(askygdq8PHapYFnlX6WsTwZZOxWInq+i)

Once we set it, we can easily get the encryptedv3.property from Spring’s Environment:

@Test
public void whenConfiguredEncryptorUsed_ReturnCustomEncryptor() {
    Environment environment = appCtx.getBean(Environment.class);
 
    assertEquals(
      "Password@3", 
      environment.getProperty("encryptedv3.property"));
}

4. Conclusion

By using Jasypt, we can provide additional security for the data that our application handles.

It enables us to focus more on the core of our application and can also be used to provide custom encryption if required.

As always, the complete code for this example is available over on GitHub.

Getting the Size of an Iterable in Java

1. Overview

In this quick tutorial, we’ll learn about the various ways in which we can get the size of an Iterable in Java.

2. Iterable and Iterator

Iterable is one of the main interfaces of the collection classes in Java.

The Collection interface extends Iterable and hence all child classes of Collection also implement Iterable.

Iterable has only one method that produces an Iterator:

public interface Iterable<T> {
    public Iterator<T> iterator();    
}

This Iterator can then be used to iterate over the elements in the Iterable.

3. Iterable Size using Core Java

3.1. for-each loop

All classes that implement Iterable are eligible for the for-each loop in Java.

This allows us to loop over the elements in the Iterable while incrementing a counter to get its size:

int counter = 0;
for (Object i : data) {
    counter++;
}
return counter;

3.2. Collection.size()

In most cases, the Iterable will be an instance of Collection, such as a List or a Set.

In such cases, we can check the type of the Iterable and call the size() method on it to get the number of elements:

if (data instanceof Collection) {
    return ((Collection<?>) data).size();
}

The call to size() is usually much faster than iterating through the entire collection.

Here’s an example showing the combination of the above two solutions: 

public static int size(Iterable<?> data) {

    if (data instanceof Collection) {
        return ((Collection<?>) data).size();
    }
    int counter = 0;
    for (Object i : data) {
        counter++;
    }
    return counter;
}
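
A quick usage sketch for this helper (the list here is our own example):

Iterable<String> data = Arrays.asList("a", "b", "c");
assertEquals(3, size(data));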

3.3. Stream.count()

If we’re using Java 8, we can create a Stream from the Iterable.

The stream object can then be used to get the count of elements in the Iterable:

return StreamSupport.stream(data.spliterator(), false).count();

4. Iterable Size using Third-Party Libraries

4.1. IterableUtils#size()

The Apache Commons Collections library has a nice IterableUtils class that provides static utility methods for Iterable instances.

Before we start, we need to import the latest dependencies from Maven Central:

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-collections4</artifactId>
    <version>4.1</version>
</dependency>

We can invoke the size() method of IterableUtils on an Iterable object to get its size:

return IterableUtils.size(data);

4.2. Iterables#size()

Similarly, the Google Guava library also provides a collection of static utility methods in its Iterables class to operate on Iterable instances.

Before we start, we need to import the latest dependencies from Maven Central:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>25.0</version>
</dependency>

Invoking the static size() method on the Iterables class gives us the number of elements:

return Iterables.size(data);

Under the hood, both IterableUtils and Iterables use the combination of approaches described in sections 3.1 and 3.2 to determine the size.

5. Conclusion

In this article, we looked at different ways of getting the size of an Iterable in Java.

The source code for this article and the relevant test cases are available over on GitHub.

Optional orElse Optional

1. Introduction

In some cases, we might want to fall back to another Optional instance if the first one is empty.

In this tutorial, we’ll briefly mention how we can do that – which is harder than it looks.

For an introduction to the Java Optional class, have a look at our previous article.

2. Using Plain Java

In Java 8, there’s no direct way to return a different Optional if the first one is empty.

Therefore, we can implement our own custom method:

public static <T> Optional<T> or(Optional<T> optional, Optional<T> fallback) {
    return optional.isPresent() ? optional : fallback;
}

And, in practice:

@Test
public void givenOptional_whenValue_thenOptionalGeneralMethod() {
    String name = "Filan Fisteku";
    String missingOptional = "Name not provided";
    Optional<String> optionalString = Optional.ofNullable(name);
    Optional<String> fallbackOptionalString = Optional.ofNullable(missingOptional);
 
    assertEquals(
      optionalString, 
      Optionals.or(optionalString, fallbackOptionalString));
}
    
@Test
public void givenEmptyOptional_whenValue_thenOptionalGeneralMethod() {
    Optional<String> optionalString = Optional.empty();
    Optional<String> fallbackOptionalString = Optional.ofNullable("Name not provided");
 
    assertEquals(
      fallbackOptionalString, 
      Optionals.or(optionalString, fallbackOptionalString));
}

On the other hand, Java 9 does add an or() method that returns the Optional itself if a value is present, or otherwise the Optional produced by the given Supplier.

Let’s see this in practice with a quick example:

public static Optional<String> getName(Optional<String> name) {
    return name.or(() -> getCustomMessage());
}

We’ve used an auxiliary method to help us with our example:

private static Optional<String> getCustomMessage() {
    return Optional.of("Name not provided");
}

We can test it and further understand how it’s working. The following test case is a demonstration of the case when Optional has a value:

@Test
public void givenOptional_whenValue_thenOptional() {
    String name = "Filan Fisteku";
    Optional<String> optionalString = Optional.ofNullable(name);
    assertEquals(optionalString, Optionals.getName(optionalString));
}
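
And, to sketch the complementary case (assuming the same Optionals helper class), the fallback kicks in when the Optional is empty:

@Test
public void givenEmptyOptional_whenValue_thenFallbackOptional() {
    Optional<String> empty = Optional.empty();
 
    assertEquals(
      Optional.of("Name not provided"), 
      Optionals.getName(empty));
}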

3. Using Guava

Another way to do this is by using the or() method of Guava’s Optional class. First, we need to add guava to our project (the latest version can be found here):

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>24.1.1-jre</version>
</dependency>

Now, we can continue with the same example that we had earlier:

public static com.google.common.base.Optional<String> getOptionalGuavaName(com.google.common.base.Optional<String> name) {
    return name.or(getCustomMessageGuava());
}
private static com.google.common.base.Optional<String> getCustomMessageGuava() {
    return com.google.common.base.Optional.of("Name not provided");
}

As we can see, it’s very similar to the one displayed above, and the method is even named the same as the or() method of the Optional class from JDK 9. However, Guava’s variant takes the fallback Optional directly rather than a Supplier.

We can now test it, similarly as the example above:

@Test
public void givenGuavaOptional_whenInvoke_thenOptional() {
    String name = "Filan Fisteku";
    Optional<String> stringOptional = Optional.of(name);
 
    assertEquals(name, Optionals.getOptionalGuavaName(stringOptional));
}
@Test
public void givenGuavaOptional_whenNull_thenDefaultText() {
    assertEquals(
      com.google.common.base.Optional.of("Name not provided"), 
      Optionals.getOptionalGuavaName(com.google.common.base.Optional.fromNullable(null)));
}

4. Conclusion

This was a quick article illustrating how to achieve Optional orElse Optional functionality.

The code for all the examples explained here, and much more can be found over on GitHub.

Using the Spring RestTemplate Interceptor

1. Overview

In this tutorial, we’re going to learn how to implement a Spring RestTemplate Interceptor.

We’ll go through an example in which we’ll create an interceptor that adds a custom header to the response.

2. Interceptor Usage Scenarios

Besides header modification, some of the other use-cases where a RestTemplate interceptor is useful are:

  • Request and response logging
  • Retrying the requests with a configurable back-off strategy
  • Request denial based on certain request parameters
  • Altering the request URL address

3. Creating the Interceptor

In most programming paradigms, interceptors are an essential construct that enables programmers to control the execution by intercepting it. The Spring framework also supports a variety of interceptors for different purposes.

Spring RestTemplate allows us to add interceptors that implement the ClientHttpRequestInterceptor interface. The intercept(HttpRequest, byte[], ClientHttpRequestExecution) method of this interface will intercept the given request and return the response, giving us access to the request, body and execution objects.

We’ll be using the ClientHttpRequestExecution argument to do the actual execution and pass the request on to the subsequent processing chain.

As a first step, let’s create an interceptor class that implements the ClientHttpRequestInterceptor interface:

public class RestTemplateHeaderModifierInterceptor
  implements ClientHttpRequestInterceptor {

    @Override
    public ClientHttpResponse intercept(
      HttpRequest request, 
      byte[] body, 
      ClientHttpRequestExecution execution) throws IOException {
 
        ClientHttpResponse response = execution.execute(request, body);
        response.getHeaders().add("Foo", "bar");
        return response;
    }
}

Our interceptor will be invoked for every outgoing request, and it will add a custom header Foo to every response, once the execution completes and returns.

Since the intercept() method receives the request and body as arguments, it’s also possible to do any modification on the request, or even deny the request execution based on certain conditions.

4. Setting up the RestTemplate

Now that we have created our interceptor, let’s create the RestTemplate bean and add our interceptor to it:

@Configuration
public class RestClientConfig {

    @Bean
    public RestTemplate restTemplate() {
        RestTemplate restTemplate = new RestTemplate();

        List<ClientHttpRequestInterceptor> interceptors
          = restTemplate.getInterceptors();
        if (CollectionUtils.isEmpty(interceptors)) {
            interceptors = new ArrayList<>();
        }
        interceptors.add(new RestTemplateHeaderModifierInterceptor());
        restTemplate.setInterceptors(interceptors);
        return restTemplate;
    }
}

In some cases, there might be interceptors already added to the RestTemplate object. So to make sure everything works as expected, our code will initialize the interceptor list only if it’s empty.

As our code shows, we are using the default constructor to create the RestTemplate object, but there are some scenarios where we need to read the request/response stream twice.

For instance, if we want our interceptor to function as a request/response logger, then we need to read it twice – the first time by the interceptor and the second time by the client.

The default implementation allows us to read the response stream only once. To cater for such specific scenarios, Spring provides a special class called BufferingClientHttpRequestFactory. As the name suggests, this class buffers the request/response in JVM memory for multiple usage.

Here’s how the RestTemplate object is initialized using BufferingClientHttpRequestFactory to enable the request/response stream caching:

RestTemplate restTemplate 
  = new RestTemplate(
    new BufferingClientHttpRequestFactory(
      new SimpleClientHttpRequestFactory()
    )
  );
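
To illustrate the logging use-case from section 2, here’s a minimal sketch of a logging interceptor that relies on such a buffering factory; the class name and the log format are our own:

public class LoggingInterceptor implements ClientHttpRequestInterceptor {

    @Override
    public ClientHttpResponse intercept(
      HttpRequest request, 
      byte[] body, 
      ClientHttpRequestExecution execution) throws IOException {

        // log the outgoing request line
        System.out.println(request.getMethod() + " " + request.getURI());
        ClientHttpResponse response = execution.execute(request, body);

        // with the buffering factory in place, reading the status here
        // doesn't prevent the client from consuming the response later
        System.out.println("Response status: " + response.getStatusCode());
        return response;
    }
}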

5. Testing Our Example

Here’s the JUnit test case for testing our RestTemplate interceptor:

@RunWith(SpringRunner.class)
@SpringBootTest
public class RestTemplateIntegrationTest {
    
    @Autowired
    RestTemplate restTemplate;

    @Test
    public void givenRestTemplate_whenRequested_thenLogAndModifyResponse() {
        LoginForm loginForm = new LoginForm("username", "password");
        HttpEntity<LoginForm> requestEntity
          = new HttpEntity<LoginForm>(loginForm);
        HttpHeaders headers = new HttpHeaders();
        headers.setContentType(MediaType.APPLICATION_JSON);
        
        ResponseEntity<String> responseEntity
          = restTemplate.postForEntity(
            "http://httpbin.org/post", requestEntity, String.class
          );
        
        assertThat(
          responseEntity.getStatusCode(),
          is(equalTo(HttpStatus.OK))
        );
        assertThat(
          responseEntity.getHeaders().get("Foo").get(0),
          is(equalTo("bar"))
        );
    }
}

Here, we’ve used the freely hosted HTTP request and response service http://httpbin.org to post our data. This testing service will return our request body along with some metadata.

6. Conclusion

This tutorial is all about how to set up an interceptor and add it to the RestTemplate object. This kind of interceptor can also be used for filtering, monitoring and controlling outgoing requests.

A common use-case for a RestTemplate interceptor is header modification – which we’ve illustrated in detail in this article.

And, as always, you can find the example code over on the GitHub project. This is a Maven-based project, so it should be easy to import and run as it is.

Spring RestTemplate Error Handling

1. Overview

In this short tutorial, we’ll discuss how to implement and inject the ResponseErrorHandler interface in a RestTemplate instance – to gracefully handle HTTP errors returned by remote APIs. 

2. Default Error Handling

By default, the RestTemplate will throw one of these exceptions in case of an HTTP error:

  1. HttpClientErrorException – in case of HTTP status 4xx
  2. HttpServerErrorException – in case of HTTP status 5xx
  3. UnknownHttpStatusCodeException – in case of an unknown HTTP status

All these exceptions are extensions of RestClientResponseException.

Obviously, the simplest strategy for adding custom error handling is to wrap the call in a try/catch block. Then, we process the caught exception as we see fit.

However, this simple strategy doesn’t scale well as the number of remote APIs or calls increases. It’d be more efficient if we could implement a reusable error handler for all of our remote calls.

3. Implementing a ResponseErrorHandler

And so, a class that implements ResponseErrorHandler will read the HTTP status from the response and either:

  1. Throw an exception that is meaningful to our application
  2. Simply ignore the HTTP status and let the response flow continue without interruption

We need to inject the ResponseErrorHandler implementation into the RestTemplate instance.

Hence, we use the RestTemplateBuilder to build the template and replace the DefaultResponseErrorHandler in the response flow.

So let’s first implement our RestTemplateResponseErrorHandler:

@Component
public class RestTemplateResponseErrorHandler 
  implements ResponseErrorHandler {

    @Override
    public boolean hasError(ClientHttpResponse httpResponse) 
      throws IOException {

        return (
          httpResponse.getStatusCode().series() == CLIENT_ERROR 
          || httpResponse.getStatusCode().series() == SERVER_ERROR);
    }

    @Override
    public void handleError(ClientHttpResponse httpResponse) 
      throws IOException {

        if (httpResponse.getStatusCode()
          .series() == HttpStatus.Series.SERVER_ERROR) {
            // handle SERVER_ERROR
        } else if (httpResponse.getStatusCode()
          .series() == HttpStatus.Series.CLIENT_ERROR) {
            // handle CLIENT_ERROR
            if (httpResponse.getStatusCode() == HttpStatus.NOT_FOUND) {
                throw new NotFoundException();
            }
        }
    }
}
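
The NotFoundException thrown above can be any exception that’s meaningful to our application – a minimal sketch:

public class NotFoundException extends RuntimeException {
}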

Next, we build the RestTemplate instance using the RestTemplateBuilder to introduce our RestTemplateResponseErrorHandler:

@Service
public class BarConsumerService {

    private RestTemplate restTemplate;

    @Autowired
    public BarConsumerService(RestTemplateBuilder restTemplateBuilder) {
        this.restTemplate = restTemplateBuilder
          .errorHandler(new RestTemplateResponseErrorHandler())
          .build();
    }

    public Bar fetchBarById(String barId) {
        return restTemplate.getForObject("/bars/" + barId, Bar.class);
    }

}

4. Testing our Implementation

Finally, let’s test this handler by mocking a server and returning a NOT_FOUND status:

@RunWith(SpringRunner.class)
@ContextConfiguration(classes = { NotFoundException.class, Bar.class })
@RestClientTest
public class RestTemplateResponseErrorHandlerIntegrationTest {

    @Autowired 
    private MockRestServiceServer server;
 
    @Autowired 
    private RestTemplateBuilder builder;

    @Test(expected = NotFoundException.class)
    public void givenRemoteApiCall_when404Error_thenThrowNotFound() {
        Assert.assertNotNull(this.builder);
        Assert.assertNotNull(this.server);

        RestTemplate restTemplate = this.builder
          .errorHandler(new RestTemplateResponseErrorHandler())
          .build();

        this.server
          .expect(ExpectedCount.once(), requestTo("/bars/4242"))
          .andExpect(method(HttpMethod.GET))
          .andRespond(withStatus(HttpStatus.NOT_FOUND));

        Bar response = restTemplate 
          .getForObject("/bars/4242", Bar.class);
        this.server.verify();
    }
}

5. Conclusion

This article presented a solution to implement and test a custom error handler for a RestTemplate that converts HTTP errors into meaningful exceptions.

As always, the code presented in this article is available over on GitHub.

Pessimistic Locking in JPA

1. Overview

There are plenty of situations when we want to retrieve data from a database. Sometimes we want to lock it for ourselves for further processing, so nobody else can interrupt our actions.

We can think of two concurrency control mechanisms which allow us to do that: setting the proper transaction isolation level or setting a lock on data that we need at the moment.

The transaction isolation level is defined for database connections. We can configure it to retain different degrees of locking on the data.

However, the isolation level is set once the connection is created and it affects every statement within that connection. Luckily, we can use pessimistic locking which uses database mechanisms for reserving more granular exclusive access to the data.

We can use a pessimistic lock to ensure that no other transactions can modify or delete reserved data.

There are two types of locks we can retain: an exclusive lock and a shared lock. We can read but not write to the data when someone else holds a shared lock. To modify or delete the reserved data, we need an exclusive lock.

We can acquire exclusive locks using ‘SELECT … FOR UPDATE‘ statements.
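
For instance, this is roughly what such a statement looks like in plain SQL (the table and column names here are just an example):

SELECT * FROM student WHERE id = 1 FOR UPDATE;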

2. Lock Modes

JPA specification defines three pessimistic lock modes which we’re going to discuss:

  • PESSIMISTIC_READ – allows us to obtain a shared lock and prevent the data from being updated or deleted
  • PESSIMISTIC_WRITE – allows us to obtain an exclusive lock and prevent the data from being read, updated or deleted
  • PESSIMISTIC_FORCE_INCREMENT – works like PESSIMISTIC_READ and it additionally increments a version attribute of a versioned entity

All of them are static members of the LockModeType enum and allow transactions to obtain a database lock. They’re all retained until the transaction commits or rolls back.

It’s worth noticing that we can obtain only one lock at a time. If that’s impossible, a PersistenceException is thrown.

2.1. PESSIMISTIC_READ

Whenever we want to just read data and don’t want to encounter dirty reads, we could use PESSIMISTIC_READ (a shared lock). We won’t be able to make any updates or deletes, though.

It sometimes happens that the database we use doesn’t support the PESSIMISTIC_READ lock, so it’s possible that we obtain the PESSIMISTIC_WRITE lock instead.

2.2. PESSIMISTIC_WRITE

Any transaction that needs to acquire a lock on data and make changes to it should obtain the PESSIMISTIC_WRITE lock. According to the JPA specification, holding PESSIMISTIC_WRITE lock will prevent other transactions from reading, updating or deleting the data.

Please note that some database systems implement multi-version concurrency control which allows readers to fetch data that has been already blocked.

2.3. PESSIMISTIC_FORCE_INCREMENT

This lock works similarly to PESSIMISTIC_WRITE, but it was introduced to cooperate with versioned entities – entities which have an attribute annotated with @Version.

Any update of a versioned entity may be preceded by obtaining the PESSIMISTIC_FORCE_INCREMENT lock. Acquiring that lock results in updating the version column.

It’s up to the persistence provider to determine whether it supports PESSIMISTIC_FORCE_INCREMENT for unversioned entities or not. If it doesn’t, it throws a PersistenceException.

2.4. Exceptions

It’s good to know which exceptions may occur while working with pessimistic locking. The JPA specification provides different types of exceptions:

  • PessimisticLockException – indicates that obtaining a lock or converting a shared to an exclusive lock fails, and results in a transaction-level rollback
  • LockTimeoutException – indicates that obtaining a lock or converting a shared lock to an exclusive one times out, and results in a statement-level rollback
  • PersistenceException – indicates that a persistence problem occurred. PersistenceException and its subtypes, except NoResultException, NonUniqueResultException, LockTimeoutException, and QueryTimeoutException, mark the active transaction to be rolled back.

3. Using Pessimistic Locks

There are a few possible ways to configure a pessimistic lock on a single record or group of records. Let’s see how to do it in JPA.

3.1. Find

It’s probably the most straightforward way. It’s enough to pass a LockModeType object as a parameter to the find method:

entityManager.find(Student.class, studentId, LockModeType.PESSIMISTIC_READ);

3.2. Query

Additionally, we can use a Query object as well and call the setLockMode setter with a lock mode as the parameter:

Query query = entityManager.createQuery("from Student where studentId = :studentId");
query.setParameter("studentId", studentId);
query.setLockMode(LockModeType.PESSIMISTIC_WRITE);
query.getResultList();

3.3. Explicit Locking

It’s also possible to manually lock the results retrieved by the find method:

Student resultStudent = entityManager.find(Student.class, studentId);
entityManager.lock(resultStudent, LockModeType.PESSIMISTIC_WRITE);

3.4. Refresh

If we want to overwrite the state of the entity with the refresh method, we can also set a lock:

Student resultStudent = entityManager.find(Student.class, studentId);
entityManager.refresh(resultStudent, LockModeType.PESSIMISTIC_FORCE_INCREMENT);

3.5. NamedQuery

The @NamedQuery annotation allows us to set a lock mode as well:

@NamedQuery(name="lockStudent",
  query="SELECT s FROM Student s WHERE s.id LIKE :studentId",
  lockMode = PESSIMISTIC_READ)
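
We can then execute it like any other named query – a quick sketch:

Query query = entityManager.createNamedQuery("lockStudent");
query.setParameter("studentId", studentId);
query.getResultList();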

4. Lock Scope

The lock scope parameter defines how to deal with locking relationships of the locked entity. It’s possible to obtain a lock on just the single entity defined in a query, or to additionally lock its relationships.

To configure the scope, we can use the PessimisticLockScope enum. It contains two values: NORMAL and EXTENDED.

We can set the scope by passing the ‘javax.persistence.lock.scope‘ property with a PessimisticLockScope value as an argument to the proper method of EntityManager, Query, TypedQuery or NamedQuery:

Map<String, Object> properties = new HashMap<>();
properties.put("javax.persistence.lock.scope", PessimisticLockScope.EXTENDED);
    
entityManager.find(
  Student.class, 1L, LockModeType.PESSIMISTIC_WRITE, properties);

4.1. PessimisticLockScope.NORMAL

We should know that PessimisticLockScope.NORMAL is the default scope. With this locking scope, we lock the entity itself. When used with joined inheritance, it also locks the ancestors.

Let’s look at the sample code with two entities:

@Entity
@Inheritance(strategy = InheritanceType.JOINED)
public class Person {

    @Id
    private Long id;
    private String name;
    private String lastName;

    // getters and setters
}

@Entity
public class Employee extends Person {

    private BigDecimal salary;

    // getters and setters
}

When we want to obtain a lock on the Employee, we can observe the SQL query which spans over those two entities:

SELECT t0.ID, t0.DTYPE, t0.LASTNAME, t0.NAME, t1.ID, t1.SALARY 
FROM PERSON t0, EMPLOYEE t1 
WHERE ((t0.ID = ?) AND ((t1.ID = t0.ID) AND (t0.DTYPE = ?))) FOR UPDATE

4.2. PessimisticLockScope.EXTENDED

The EXTENDED scope covers the same functionality as NORMAL. In addition, it’s able to lock related entities in a join table.

Simply put, it works with entities annotated with @ElementCollection or @OneToOne, @OneToMany etc. with @JoinTable.

Let’s look at the sample code with the @ElementCollection annotation:

@Entity
public class Customer {

    @Id
    private Long customerId;
    private String name;
    private String lastName;
    @ElementCollection
    @CollectionTable(name = "customer_address")
    private List<Address> addressList;

    // getters and setters
}

@Embeddable
public class Address {

    private String country;
    private String city;

    // getters and setters
}

Let’s analyze some queries when searching for the Customer entity:

SELECT CUSTOMERID, LASTNAME, NAME 
FROM CUSTOMER WHERE (CUSTOMERID = ?) FOR UPDATE

SELECT CITY, COUNTRY, Customer_CUSTOMERID 
FROM customer_address 
WHERE (Customer_CUSTOMERID = ?) FOR UPDATE

We can see that there are two ‘FOR UPDATE‘ queries which lock a row in the customer table as well as a row in the join table.

Another interesting fact we should be aware of is that not all persistence providers support lock scopes.

5. Setting Lock Timeout

Besides setting lock scopes, we can adjust another lock parameter – the timeout. The timeout value is the number of milliseconds that we want to wait to obtain a lock before a LockTimeoutException occurs.

We can change the timeout value similarly to lock scopes, by using the property ‘javax.persistence.lock.timeout’ with the proper number of milliseconds.

It’s also possible to specify ‘no wait’ locking by changing timeout value to zero. However, we should keep in mind that there are database drivers which don’t support setting a timeout value this way.

Map<String, Object> properties = new HashMap<>(); 
properties.put("javax.persistence.lock.timeout", 1000L); 

entityManager.find(
  Student.class, 1L, LockModeType.PESSIMISTIC_READ, properties);

6. Conclusion

When setting the proper isolation level is not enough to cope with concurrent transactions, JPA gives us pessimistic locking. It enables us to isolate and orchestrate different transactions so they don’t access the same resource at the same time.

To achieve that we can choose between discussed types of locks and consequently modify such parameters as their scopes or timeouts.

On the other hand, we should remember that understanding database locks is as important as understanding the mechanisms of the underlying database systems. It’s also important to keep in mind that the behavior of pessimistic locks depends on the persistence provider we work with.

Lastly, the source code of this tutorial is available over on GitHub for Hibernate and for EclipseLink.

Working With Arrays in Thymeleaf

1. Overview

In this quick tutorial, we’re going to see how we can use arrays in Thymeleaf. For easy setup, we’re going to use the Spring Boot initializer to bootstrap our application.

The basics of Spring MVC and Thymeleaf can be found here.

2. Thymeleaf Dependency

In our pom.xml file, the only dependencies we need to add are Spring MVC and Thymeleaf:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-thymeleaf</artifactId>
</dependency>

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>

3. The Controller

For simplicity, let’s use a controller with only one method which handles GET requests.

This responds by passing an array to the model object which will make it accessible to the view:

@Controller
public class ThymeleafArrayController {
 
    @GetMapping("/arrays")
    public String arrayController(Model model) {
        String[] continents = {
          "Africa", "Antarctica", "Asia", "Australia", 
          "Europe", "North America", "Sourth America"
        };
        
        model.addAttribute("continents", continents);

        return "continents";
    }
}

4. The View

In the view page, we’re going to access the array continents by the name we passed from our controller above (continents).

4.1. Properties and Indexes

One of the first properties we’re going to inspect is the length of the array. This is how we can check it:

<p>...<span th:text="${continents.length}"></span>...</p>

And looking at the snippet of code above, which is from the view page, we should notice the use of the keyword th:text. We used it to print the value of the variable inside the curly braces, in this case, the length of the array.

Consequently, we can access the value of each element of the array continents by its index, just as we would in our normal Java code:

<ol>
    <li th:text="${continents[2]}"></li>
    <li th:text="${continents[0]}"></li>
    <li th:text="${continents[4]}"></li>
    <li th:text="${continents[5]}"></li>
    <li th:text="${continents[6]}"></li>
    <li th:text="${continents[3]}"></li>
    <li th:text="${continents[1]}"></li>
</ol>

As we’ve seen in the above code fragment, each element is accessible through its index. We can go here to learn more about expressions in Thymeleaf.

4.2. Iteration

Similarly, we can iterate over the elements of the array sequentially.

In Thymeleaf, here’s how we can achieve that:

<ul th:each="continet : ${continents}">
    <li th:text="${continent}"></li>
</ul>

When using the th:each keyword to iterate over the elements of an array, we’re not restricted to using list tags only. We can use any HTML tag capable of displaying text on the page. For example:

<h4 th:each="continent : ${continents}" th:text="${continent}"></h4>

In the above code snippet, each element is going to be displayed on its own separate <h4></h4> tag.
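
Thymeleaf also exposes an iteration status variable alongside th:each, which we could use, for instance, to number the continents – a quick sketch:

<p th:each="continent, iterStat : ${continents}"
   th:text="${iterStat.count} + '. ' + ${continent}"></p>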

4.3. Utility Functions

Finally, we’re going to employ the use of utility class functions to examine some other properties of the array.

Let’s take a look at this:

<p>The greatest <span th:text="${#arrays.length(continents)}"></span> continents.</p>

<p>Europe is a continent: <span th:text="${#arrays.contains(continents, 'Europe')}"></span>.</p>

<p>Array of continents is empty <span th:text="${#arrays.isEmpty(continents)}"></span>.</p>

We query the length of the array first, and then check whether Europe is an element of the array continents.

Lastly, we check whether the array continents is empty or not.

5. Conclusion

In this article, we’ve learned how to work with an array in Thymeleaf by checking its length and accessing its elements using an index. We have also learned how to iterate over its elements within Thymeleaf.

Lastly, we have seen the use of utility functions to inspect other properties of an array.

And, as always, the complete source code of this article can be found over on GitHub.

Context Hierarchy with the Spring Boot Fluent Builder API

1. Overview

It’s possible to create separate contexts and organize them in a hierarchy in Spring Boot.

A context hierarchy can be defined in different ways in a Spring Boot application. In this article, we’ll look at how we can create multiple contexts using the fluent builder API.

As we won’t go into details on how to set up a Spring Boot application, you might want to check out this article.

2. Application Context Hierarchy

We can have multiple application contexts which share a parent-child relationship.

A context hierarchy allows multiple child contexts to share beans which reside in the parent context. Each child context can override configuration inherited from the parent context.

Furthermore, we can use contexts to prevent beans registered in one context from being accessible in another. This facilitates the creation of loosely coupled modules.

Some points worth noting here are that a context can have only one parent, while a parent context can have multiple child contexts. Also, a child context can access beans in the parent context, but not vice-versa.

3. Using SpringApplicationBuilder API

The SpringApplicationBuilder class provides a fluent API to create a parent-child relationship between contexts using the parent(), child() and sibling() methods.

To exemplify the context hierarchy, we’ll set up a non-web parent application context with 2 child web contexts.

To demonstrate this, we’ll start two instances of embedded Tomcat each with its own web application context and both running in a single JVM.

3.1. Parent Context

To begin, let’s create a service bean along with a bean definition class which reside in the parent package. We want this bean to return a greeting which is displayed to the client of our web application:

@Service
public class HomeService implements IHomeService {

    public String getGreeting() {
        return "Welcome User";
    }
}
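
The IHomeService interface that it implements is straightforward – a minimal sketch:

public interface IHomeService {
    String getGreeting();
}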

And the bean definition class:

@Configuration
@ComponentScan("com.baeldung.parent")
public class ServiceConfig {}

Next, we’ll create the configuration for the two child contexts.

3.2. Child Context

Since all contexts are configured using the default configuration file, we need to provide separate configurations for properties which cannot be shared among contexts such as server ports.

To prevent conflicting configurations being picked up by the auto-configuration, we’ll also keep the classes in separate packages.

Let’s start by defining a properties file for the first child context:

server.port=8081
server.servlet.context-path=/ctx1

spring.application.admin.enabled=false
spring.application.admin.jmx-name=org.springframework.boot:type=Ctx1Rest,name=Ctx1Application

Note that we’ve configured the port and context path, as well as a JMX name so the application names don’t conflict.

Let’s now add the main configuration class for this context:

@Configuration
@ComponentScan("com.baeldung.ctx1")
@EnableAutoConfiguration
public class Ctx1Config {
    
    @Bean
    public IHomeService homeService() {
        return new GreetingService();
    }
}

This class provides a new definition for the homeService bean that will override the one from the parent.

Let’s see the definition of the GreetingService class:

@Service
public class GreetingService implements IHomeService {

    public String getGreeting() {
        return "Greetings for the day";
    }
}

Finally, we’ll add a controller for this web context that uses the homeService bean to display a message to the user:

@Controller
public class Ctx1Controller {

    @Autowired
    IHomeService homeService;

    @GetMapping("/home")
    @ResponseBody
    public String greeting() {

        return homeService.getGreeting();
    }
}

3.3. Sibling Context

For our second context, we’ll create a controller and configuration class which are very similar to the ones in the previous section.

This time, we won’t create a homeService bean – as we’ll access it from the parent context.

First, let’s add a properties file for this context:

server.port=8082
server.servlet.context-path=/ctx2

spring.application.admin.enabled=false
spring.application.admin.jmx-name=org.springframework.boot:type=WebAdmin,name=SpringWebApplication

And the configuration class for the sibling application:

@Configuration
@ComponentScan("com.baeldung.ctx2")
@EnableAutoConfiguration
@PropertySource("classpath:ctx2.properties")
public class Ctx2Config {}

Let’s also add a controller, which has HomeService as a dependency:

@RestController
public class Ctx2Controller {

    @Autowired
    IHomeService homeService;

    @GetMapping(value = "/greeting", produces = "application/json")
    public String getGreeting() {
        return homeService.getGreeting();
    }
}

In this case, our controller should get the homeService bean from the parent context.

3.4. Context Hierarchy

Now we can put everything together and define the context hierarchy using SpringApplicationBuilder:

public class App {
    public static void main(String[] args) {
        new SpringApplicationBuilder()
          .parent(ServiceConfig.class).web(WebApplicationType.NONE)
          .child(Ctx1Config.class).web(WebApplicationType.SERVLET)
          .sibling(Ctx2Config.class).web(WebApplicationType.SERVLET)
          .run(args);
    }
}

Finally, on running the Spring Boot App we can access both applications at their respective ports using localhost:8081/ctx1/home and localhost:8082/ctx2/greeting.

4. Conclusion

Using the SpringApplicationBuilder API, we first created a parent-child relationship between two contexts of an application. Next, we covered how to override the parent configuration in the child context. Lastly, we added a sibling context to demonstrate how the configuration in the parent context can be shared with other child contexts.

The source code of the example is available over on GitHub.

Download a File From a URL in Java

1. Introduction

In this tutorial, we’ll see several methods that we can use to download a file.

We’ll cover examples ranging from the basic usage of Java IO to the NIO package, and some common libraries like Async Http Client and Apache Commons IO.

Finally, we’ll talk about how we can resume a download if our connection fails before the whole file is read.

2. Using Java IO

The most basic API we can use to download a file is Java IO. We can use the URL class to open a connection to the file we want to download. To effectively read the file, we’ll use the openStream() method to obtain an InputStream:

BufferedInputStream in = new BufferedInputStream(new URL(FILE_URL).openStream())

When reading from an InputStream, it’s recommended to wrap it in a BufferedInputStream to increase the performance.

The performance increase comes from buffering. When reading one byte at a time using the read() method, each method call implies a system call to the underlying file system. When the JVM invokes the read() system call, the program execution context switches from user mode to kernel mode and back.

This context switch is expensive from a performance perspective. When we read a large number of bytes, the application performance will be poor, due to a large number of context switches involved.

For writing the bytes read from the URL to our local file, we’ll use the write() method from the FileOutputStream class:

try (BufferedInputStream in = new BufferedInputStream(new URL(FILE_URL).openStream());
  FileOutputStream fileOutputStream = new FileOutputStream(FILE_NAME)) {
    byte dataBuffer[] = new byte[1024];
    int bytesRead;
    while ((bytesRead = in.read(dataBuffer, 0, 1024)) != -1) {
        fileOutputStream.write(dataBuffer, 0, bytesRead);
    }
} catch (IOException e) {
    // handle exception
}

When using a BufferedInputStream, the read() method will read as many bytes as we set for the buffer size. In our example, we’re already doing this by reading blocks of 1024 bytes at a time, so BufferedInputStream isn’t necessary.

The example above is very verbose, but luckily, as of Java 7, we have the Files class which contains helper methods for handling IO operations. We can use the Files.copy() method to read all the bytes from an InputStream and copy them to a local file:

try (InputStream in = new URL(FILE_URL).openStream()) {
    Files.copy(in, Paths.get(FILE_NAME), StandardCopyOption.REPLACE_EXISTING);
}

Our code works well, but it can be improved. Its main drawback is the fact that the bytes are buffered into memory.

Fortunately, Java offers us the NIO package that has methods to transfer bytes directly between 2 Channels without buffering.

We’ll go into details in the next section.

3. Using NIO

The Java NIO package offers the possibility to transfer bytes between 2 Channels without buffering them into the application memory.

To read the file from our URL, we’ll create a new ReadableByteChannel from the URL stream:

ReadableByteChannel readableByteChannel = Channels.newChannel(url.openStream());

The bytes read from the ReadableByteChannel will be transferred to a FileChannel corresponding to the file that will be downloaded:

FileOutputStream fileOutputStream = new FileOutputStream(FILE_NAME);
FileChannel fileChannel = fileOutputStream.getChannel();

We’ll use the transferFrom() method from the FileChannel class to download the bytes from the given URL to our FileChannel:

fileOutputStream.getChannel()
  .transferFrom(readableByteChannel, 0, Long.MAX_VALUE);
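
Putting the pieces together with try-with-resources – a quick sketch, assuming the same FILE_URL and FILE_NAME constants:

try (ReadableByteChannel readableByteChannel = Channels.newChannel(new URL(FILE_URL).openStream());
  FileOutputStream fileOutputStream = new FileOutputStream(FILE_NAME)) {
    fileOutputStream.getChannel()
      .transferFrom(readableByteChannel, 0, Long.MAX_VALUE);
}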

The transferTo() and transferFrom() methods are more efficient than simply reading from a stream using a buffer. Depending on the underlying operating system, the data can be transferred directly from the filesystem cache to our file without copying any bytes into the application memory.

On Linux and UNIX systems, these methods use the zero-copy technique that reduces the number of context switches between the kernel mode and user mode.

4. Using Libraries

We’ve seen in the examples above how we can download content from a URL just by using the Java core functionality. We can also leverage the functionality of existing libraries to ease our work, when performance tweaks aren’t needed.

For example, in a real-world scenario, we’d need our download code to be asynchronous.

We could wrap all the logic into a Callable, or we could use an existing library for this.

4.1. Async HTTP Client

AsyncHttpClient is a popular library for executing asynchronous HTTP requests using the Netty framework. We can use it to execute a GET request to the file URL and get the file content.

First, we need to create an HTTP client:

AsyncHttpClient client = Dsl.asyncHttpClient();

The downloaded content will be placed into a FileOutputStream:

FileOutputStream stream = new FileOutputStream(FILE_NAME);

Next, we create an HTTP GET request and register an AsyncCompletionHandler handler to process the downloaded content:

client.prepareGet(FILE_URL).execute(new AsyncCompletionHandler<FileOutputStream>() {

    @Override
    public State onBodyPartReceived(HttpResponseBodyPart bodyPart) 
      throws Exception {
        stream.getChannel().write(bodyPart.getBodyByteBuffer());
        return State.CONTINUE;
    }

    @Override
    public FileOutputStream onCompleted(Response response) 
      throws Exception {
        return stream;
    }
});

Notice that we’ve overridden the onBodyPartReceived() method. The default implementation accumulates the HTTP chunks received into an ArrayList. This could lead to high memory consumption, or an OutOfMemoryError when trying to download a large file.

Instead of accumulating each HttpResponseBodyPart into memory, we use a FileChannel to write the bytes to our local file directly. We’ll use the getBodyByteBuffer() method to access the body part content through a ByteBuffer.

ByteBuffers have the advantage that the memory is allocated outside of the JVM heap, so it doesn’t affect our application’s memory.

4.2. Apache Commons IO

Another highly used library for IO operations is Apache Commons IO. We can see from the Javadoc that there’s a utility class named FileUtils that is used for general file manipulation tasks.

To download a file from a URL, we can use this one-liner:

FileUtils.copyURLToFile(
  new URL(FILE_URL), 
  new File(FILE_NAME), 
  CONNECT_TIMEOUT, 
  READ_TIMEOUT);

From a performance standpoint, this code is the same as the one we’ve exemplified in section 2.

The underlying code uses the same concepts of reading in a loop some bytes from an InputStream and writing them to an OutputStream.

One difference is the fact that here the URLConnection class is used to control the connection timeouts so that the download doesn’t block for a large amount of time:

URLConnection connection = source.openConnection();
connection.setConnectTimeout(connectionTimeout);
connection.setReadTimeout(readTimeout);

5. Resumable Download

Considering that internet connections fail from time to time, it’s useful to be able to resume a download instead of downloading the file again from byte zero.

Let’s rewrite the first example from earlier, to add this functionality.

The first thing we should know is that we can read the size of a file from a given URL without actually downloading it by using the HTTP HEAD method:

URL url = new URL(FILE_URL);
HttpURLConnection httpConnection = (HttpURLConnection) url.openConnection();
httpConnection.setRequestMethod("HEAD");
long fileLength = httpConnection.getContentLengthLong();

Now that we have the total content size of the file, we can check whether our file is partially downloaded. If so, we’ll resume the download from the last byte recorded on disk:

long existingFileSize = outputFile.length();
if (existingFileSize < fileLength) {
    httpConnection.setRequestProperty(
      "Range", 
      "bytes=" + existingFileSize + "-" + fileLength
    );
}

What happens here is that we’ve configured the URLConnection to request the file bytes in a specific range. The range will start from the last downloaded byte and will end at the byte corresponding to the size of the remote file.

Another common way to use the Range header is for downloading a file in chunks by setting different byte ranges. For example, to download a 2 KB file, we can use the ranges 0 – 1023 and 1024 – 2047.

Another subtle difference from the code in section 2 is that the FileOutputStream is opened with the append parameter set to true:

OutputStream os = new FileOutputStream(FILE_NAME, true);

After we’ve made this change, the rest of the code is identical to the one we saw in section 2.

6. Conclusion

We’ve seen in this article several ways in which we can download a file from a URL in Java.

The most common implementation is the one in which we buffer the bytes when performing the read/write operations. This implementation is safe to use even for large files because we don’t load the whole file into memory.

We’ve also seen how we can implement a zero-copy download using Java NIO Channels. This is useful because it minimizes the number of context switches done when reading and writing bytes, and because by using direct buffers, the bytes are not loaded into the application memory.

Also, because usually downloading a file is done over HTTP, we’ve shown how we can achieve this using the AsyncHttpClient library.

The source code for the article is available over on GitHub.

Introduction to Dagger 2


1. Introduction

In this tutorial, we’ll take a look at Dagger 2 – a fast and lightweight dependency injection framework.

The framework is available for both Java and Android, but the high performance derived from compile-time injection makes it a leading solution for the latter.

2. Dependency Injection

As a bit of a reminder, Dependency Injection is a concrete application of the more generic Inversion of Control principle, in which the flow of a program is no longer controlled by the program itself but by an external framework or container.

It’s implemented through an external component which provides instances of objects (or dependencies) needed by other objects.

And different frameworks implement dependency injection in different ways. In particular, one of the most notable of these differences is whether the injection happens at run-time or at compile-time.

Run-time DI is usually based on reflection which is simpler to use but slower at run-time. An example of a run-time DI framework is Spring.

Compile-time DI, on the other hand, is based on code generation. This means that all the heavy-weight operations are performed during compilation. Compile-time DI adds complexity but generally performs faster.

Dagger 2 falls into this category.

3. Maven/Gradle Configuration

In order to use Dagger in a project, we’ll need to add the dagger dependency to our pom.xml:

<dependency>
    <groupId>com.google.dagger</groupId>
    <artifactId>dagger</artifactId>
    <version>2.16</version>
</dependency>

Furthermore, we’ll also need to include the Dagger compiler used to convert our annotated classes into the code used for the injections:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>3.6.1</version>
    <configuration>
         <annotationProcessorPaths>
              <path>
                  <groupId>com.google.dagger</groupId>
                  <artifactId>dagger-compiler</artifactId>
                  <version>2.16</version>
              </path>
         </annotationProcessorPaths>
    </configuration>
</plugin>

With this configuration, Maven will output the generated code into target/generated-sources/annotations.

For this reason, we likely need to further configure our IDE if we want to use any of its code completion features. Some IDEs have direct support for annotation processors while others may need us to add this directory to the build path.

Alternatively, if we’re using Android with Gradle, we can include both dependencies:

compile 'com.google.dagger:dagger:2.16'
annotationProcessor 'com.google.dagger:dagger-compiler:2.16'

Now that we have Dagger in our project, let’s create a sample application to see how it works.

4. Implementation

For our example, we’ll try to build a car by injecting its components.

Now, Dagger uses the standard JSR-330 annotations in many places, one being @Inject.

We can add the annotations to fields or the constructor. But, since Dagger doesn’t support injection on private fields, we’ll go for constructor injection to preserve encapsulation:

public class Car {

    private Engine engine;
    private Brand brand;

    @Inject
    public Car(Engine engine, Brand brand) {
        this.engine = engine;
        this.brand = brand;
    }

    // getters and setters

}
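We haven't shown the Engine and Brand dependencies; as a working assumption, minimal versions could look like this:

public class Engine {
}

public class Brand {

    private final String name;

    public Brand(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }
}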

Next, we’ll implement the code to perform the injection. More specifically, we’ll create:

  • a module, which is a class that provides or builds the objects’ dependencies, and
  • a component, which is an interface used to generate the injector

Complex projects may contain multiple modules and components but since we’re dealing with a very basic program, one of each is enough.

Let’s see how to implement them.

4.1. Module

To create a module, we need to annotate the class with the @Module annotation. This annotation indicates that the class can make dependencies available to the container:

@Module
public class VehiclesModule {
}

Then, we need to add the @Provides annotation on methods that construct our dependencies:

@Module
public class VehiclesModule {
    @Provides
    public Engine provideEngine() {
        return new Engine();
    }

    @Provides
    @Singleton
    public Brand provideBrand() { 
        return new Brand("Baeldung"); 
    }
}

Also, note that we can configure the scope of a given dependency. In this case, we give the singleton scope to our Brand instance so all the car instances share the same brand object.

4.2. Component

Moving on, we’re going to create our component interface. This is the interface that will generate Car instances, injecting dependencies provided by VehiclesModule.

Simply put, we need a method signature that returns a Car, and we need to mark the interface with the @Component annotation:

@Singleton
@Component(modules = VehiclesModule.class)
public interface VehiclesComponent {
    Car buildCar();
}

Notice how we passed our module class as an argument to the @Component annotation. If we didn’t do that, Dagger wouldn’t know how to build the car’s dependencies.

Also, since our module provides a singleton object, we must give the same scope to our component because Dagger doesn’t allow for unscoped components to refer to scoped bindings.

4.3. Client Code

Finally, we can run mvn compile in order to trigger the annotation processors and generate the injector code.

After that, we’ll find our component implementation with the same name as the interface, just prefixed with “Dagger“:

@Test
public void givenGeneratedComponent_whenBuildingCar_thenDependenciesInjected() {
    VehiclesComponent component = DaggerVehiclesComponent.create();

    Car carOne = component.buildCar();
    Car carTwo = component.buildCar();

    Assert.assertNotNull(carOne);
    Assert.assertNotNull(carTwo);
    Assert.assertNotNull(carOne.getEngine());
    Assert.assertNotNull(carTwo.getEngine());
    Assert.assertNotNull(carOne.getBrand());
    Assert.assertNotNull(carTwo.getBrand());
    Assert.assertNotEquals(carOne.getEngine(), carTwo.getEngine());
    Assert.assertEquals(carOne.getBrand(), carTwo.getBrand());
}

5. Spring Analogies

Those familiar with Spring may have noticed some parallels between the two frameworks.

Dagger’s @Module annotation makes the container aware of a class in a very similar fashion as any of Spring’s stereotype annotations (for example, @Service, @Controller…). Likewise, @Provides and @Component are almost equivalent to Spring’s @Bean and @Lookup respectively.

Spring also has its @Scope annotation, correlating to @Singleton. Note, however, another difference: Spring assumes a singleton scope by default, while Dagger defaults to what Spring developers might call the prototype scope, invoking the provider method each time a dependency is required.

6. Conclusion

In this article, we went through how to set up and use Dagger 2 with a basic example. We also considered the differences between run-time and compile-time injection.

As always, all the code in the article is available over on GitHub.

Deploy a Spring Boot App to Azure


1. Introduction

Microsoft Azure now features quite solid Java support.

In this tutorial, we’ll demonstrate how to make our Spring Boot application work on the Azure platform, step by step.

2. Maven Dependency and Configuration

First, we need an Azure subscription to make use of the cloud services there; currently, we can sign up for a free account here.

Next, we log in to the platform and create a service principal using the Azure CLI:

> az login
To sign in, use a web browser to open the page \
https://microsoft.com/devicelogin and enter the code XXXXXXXX to authenticate.
> az ad sp create-for-rbac --name "app-name" --password "password"
{
    "appId": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa",
    "displayName": "app-name",
    "name": "http://app-name",
    "password": "password",
    "tenant": "tttttttt-tttt-tttt-tttt-tttttttttttt"
}

Now we configure Azure service principal authentication settings in our Maven settings.xml, with the help of the following section, under <servers>:

<server>
    <id>azure-auth</id>
    <configuration>
        <client>aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa</client>
        <tenant>tttttttt-tttt-tttt-tttt-tttttttttttt</tenant>
        <key>password</key>
        <environment>AZURE</environment>
    </configuration>
</server>

We’ll rely on the authentication configuration above when uploading our Spring Boot application to the Microsoft platform, using azure-webapp-maven-plugin.

Let’s add the following Maven plugin to the pom.xml:

<plugin>
    <groupId>com.microsoft.azure</groupId>
    <artifactId>azure-webapp-maven-plugin</artifactId>
    <version>1.1.0</version>
    <configuration>
        <!-- ... -->
    </configuration>
</plugin>

We can check the latest release version here.

There are a number of configurable properties for this plugin that we’ll cover in the sections below.

3. Deploy a Spring Boot App to Azure

Now that we’ve set up the environment, let’s try to deploy our Spring Boot application to Azure.

Our application replies with “hello azure!” when we access “/hello“:

@GetMapping("/hello")
public String hello() {
    return "hello azure!";
}

The platform now allows Java Web App deployment for both Tomcat and Jetty. With azure-webapp-maven-plugin, we can deploy our application directly to a supported web container as the default (ROOT) application, or deploy via FTP.

Note that as we’re going to deploy the application to web containers, we should package it as a WAR archive. As a quick reminder, we’ve got an article introducing how to deploy a Spring Boot WAR into Tomcat.
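As a brief sketch (the class name here is illustrative), packaging a Spring Boot application for a servlet container typically means extending SpringBootServletInitializer:

@SpringBootApplication
public class Application extends SpringBootServletInitializer {

    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder builder) {
        // tell the servlet container which configuration class to bootstrap
        return builder.sources(Application.class);
    }

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}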

3.1. Web Container Deployment

We’ll use the following configuration for azure-webapp-maven-plugin if we intend to deploy to Tomcat on a Windows instance:

<configuration>
    <javaVersion>1.8</javaVersion>
    <javaWebContainer>tomcat 8.5</javaWebContainer>
    <!-- ... -->
</configuration>

For a Linux instance, try the following configuration:

<configuration>
    <linuxRuntime>tomcat 8.5-jre8</linuxRuntime>
    <!-- ... -->
</configuration>

Let’s not forget the Azure authentication:

<configuration>
    <authentication>
        <serverId>azure-auth</serverId>
    </authentication>
    <appName>spring-azure</appName>
    <resourceGroup>baeldung</resourceGroup>
    <!-- ... -->
</configuration>

When we deploy our application to Azure, we’ll see it appear as an App Service. So here we specified the property <appName> to name the App Service. Also, the App Service, as a resource, needs to be held by a resource group container, so <resourceGroup> is also required.

Now we’re ready to pull the trigger using the azure-webapp:deploy Maven goal, and we’ll see the output:

> mvn clean package azure-webapp:deploy
...
[INFO] Start deploying to Web App spring-baeldung...
[INFO] Authenticate with ServerId: azure-auth
[INFO] [Correlation ID: cccccccc-cccc-cccc-cccc-cccccccccccc] \
Instance discovery was successful
[INFO] Target Web App doesn't exist. Creating a new one...
[INFO] Creating App Service Plan 'ServicePlanssssssss-bbbb-0000'...
[INFO] Successfully created App Service Plan.
[INFO] Successfully created Web App.
[INFO] Starting to deploy the war file...
[INFO] Successfully deployed Web App at \
https://spring-baeldung.azurewebsites.net
...

Now we can access https://spring-baeldung.azurewebsites.net/hello and see the response: ‘hello azure!’.

During the deployment process, Azure automatically created an App Service Plan for us. Check out the official document for details about Azure App Service plans. If we already have an App Service plan, we can set property <appServicePlanName> to avoid creating a new one:

<configuration>
    <!-- ... -->
    <appServicePlanName>ServicePlanssssssss-bbbb-0000</appServicePlanName>
</configuration>

3.2. FTP Deployment

To deploy via FTP, we can use the configuration:

<configuration>
    <authentication>
        <serverId>azure-auth</serverId>
    </authentication>
    <appName>spring-baeldung</appName>
    <resourceGroup>baeldung</resourceGroup>
    <javaVersion>1.8</javaVersion>

    <deploymentType>ftp</deploymentType>
    <resources>
        <resource>
            <directory>${project.basedir}/target</directory>
            <targetPath>webapps</targetPath>
            <includes>
                <include>*.war</include>
            </includes>
        </resource>
    </resources>
</configuration>

In the configuration above, we make the plugin locate the WAR file in directory ${project.basedir}/target, and deploy it to Tomcat container’s webapps directory.

Assuming our final artifact is named azure-0.1.war, we’ll see output like the following once we commence the deployment:

> mvn clean package azure-webapp:deploy
...
[INFO] Start deploying to Web App spring-baeldung...
[INFO] Authenticate with ServerId: azure-auth
[INFO] [Correlation ID: cccccccc-cccc-cccc-cccc-cccccccccccc] \
Instance discovery was successful
[INFO] Target Web App doesn't exist. Creating a new one...
[INFO] Creating App Service Plan 'ServicePlanxxxxxxxx-xxxx-xxxx'...
[INFO] Successfully created App Service Plan.
[INFO] Successfully created Web App.
...
[INFO] Finished uploading directory: \
/xxx/.../target/azure-webapps/spring-baeldung --> /site/wwwroot
[INFO] Successfully uploaded files to FTP server: \
xxxx-xxxx-xxx-xxx.ftp.azurewebsites.windows.net
[INFO] Successfully deployed Web App at \
https://spring-baeldung.azurewebsites.net

Note that here we didn’t deploy our application as the default Web App for Tomcat, so we can only access it through ‘https://spring-baeldung.azurewebsites.net/azure-0.1/hello’. The server will respond with ‘hello azure!’ as expected.

4. Deploy with Custom Application Settings

Most of the time, our Spring Boot application requires data access to provide services. Azure now supports databases such as SQL Server, MySQL, and PostgreSQL.

For the sake of simplicity, we’ll use its In-App MySQL as our data source, as its configuration is quite similar to other Azure database services.

4.1. Enable In-App MySQL on Azure

Since there isn’t a one-liner to create a web app with In-App MySQL enabled, we have to first create the web app using the CLI:

az group create --location japanwest --name baeldung-group
az appservice plan create --name baeldung-plan --resource-group baeldung-group --sku B1
az webapp create --name baeldung-webapp --resource-group baeldung-group \
  --plan baeldung-plan --runtime "java|1.8|Tomcat|8.5"

Then, we enable In-App MySQL for the web app in the Azure portal.

After the In-App MySQL is enabled, we can find the default database, data source URL, and default account information in a file named MYSQLCONNSTR_xxx.txt under the /home/data/mysql directory of the filesystem.

4.2. Spring Boot Application Using Azure In-App MySQL

Here, for demonstration needs, we create a User entity and two endpoints used to register and list Users:

@PostMapping("/user")
public String register(@RequestParam String name) {
    userRepository.save(userNamed(name));
    return "registered";
}

@GetMapping("/user")
public Iterable<User> userlist() {
    return userRepository.findAll();
}
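The User entity and its repository aren't shown above; a minimal Spring Data JPA sketch (names assumed) might look like this:

@Entity
public class User {

    @Id
    @GeneratedValue
    private Long id;

    private String name;

    // getters and setters
}

public interface UserRepository extends CrudRepository<User, Long> {
}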

We’re going to use an H2 database in our local environment, and switch it to MySQL on Azure. Generally, we configure data source properties in the application.properties file:

spring.datasource.url=jdbc:h2:file:~/test
spring.datasource.username=sa
spring.datasource.password=

While for Azure deployment, we need to configure azure-webapp-maven-plugin in <appSettings>:

<configuration>
    <authentication>
        <serverId>azure-auth</serverId>
    </authentication>
    <javaVersion>1.8</javaVersion>
    <resourceGroup>baeldung-group</resourceGroup>
    <appName>baeldung-webapp</appName>
    <appServicePlanName>baeldung-plan</appServicePlanName>
    <appSettings>
        <property>
            <name>spring.datasource.url</name>
            <value>jdbc:mysql://127.0.0.1:55738/localdb</value>
        </property>
        <property>
            <name>spring.datasource.username</name>
            <value>uuuuuu</value>
        </property>
        <property>
            <name>spring.datasource.password</name>
            <value>pppppp</value>
        </property>
    </appSettings>
</configuration>

Now we can start to deploy:

> mvn clean package azure-webapp:deploy
...
[INFO] Start deploying to Web App custom-webapp...
[INFO] Authenticate with ServerId: azure-auth
[INFO] [Correlation ID: cccccccc-cccc-cccc-cccc-cccccccccccc] \
Instance discovery was successful
[INFO] Updating target Web App...
[INFO] Successfully updated Web App.
[INFO] Starting to deploy the war file...
[INFO] Successfully deployed Web App at \
https://baeldung-webapp.azurewebsites.net

We can see from the log that the deployment is finished.

Let’s test our new endpoints:

> curl -d "" -X POST https://baeldung-webapp.azurewebsites.net/user\?name\=baeldung
registered

> curl https://baeldung-webapp.azurewebsites.net/user
[{"id":1,"name":"baeldung"}]

The server’s response says it all. It works!

5. Deploy a Containerized Spring Boot App to Azure

In the previous sections, we’ve shown how to deploy applications to servlet containers (Tomcat in this case). How about deploying as a standalone runnable jar?

In that case, we can containerize our Spring Boot application. Specifically, we can dockerize it and upload the image to Azure.

We already have an article about how to dockerize a Spring Boot App, but here we’re going to use another Maven plugin, docker-maven-plugin, to automate the dockerization for us:

<plugin>
    <groupId>com.spotify</groupId>
    <artifactId>docker-maven-plugin</artifactId>
    <version>1.1.0</version>
    <configuration>
        <!-- ... -->
    </configuration>
</plugin>

The latest version can be found here.

5.1. Azure Container Registry

First, we need a Container Registry on Azure to upload our docker image.

So let’s create one:

az acr create --admin-enabled --resource-group baeldung-group \
  --location japanwest --name baeldungadr --sku Basic

We’ll also need the Container Registry’s authentication information, and this can be queried using:

> az acr credential show --name baeldungadr --query passwords[0]
{
  "additionalProperties": {},
  "name": "password",
  "value": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}

Then add the following server authentication configuration in Maven’s settings.xml:

<server>
    <id>baeldungadr</id>
    <username>baeldungadr</username>
    <password>xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx</password>
</server>

5.2. Maven Plugin Configuration

Let’s add the following Maven plugin configuration to the pom.xml:

<properties>
    <!-- ... -->
    <azure.containerRegistry>baeldungadr</azure.containerRegistry>
    <docker.image.prefix>${azure.containerRegistry}.azurecr.io</docker.image.prefix>
</properties>

<build>
    <plugins>
        <plugin>
            <groupId>com.spotify</groupId>
            <artifactId>docker-maven-plugin</artifactId>
            <version>1.1.0</version>
            <configuration>
                <imageName>${docker.image.prefix}/${project.artifactId}</imageName>
                <registryUrl>https://${docker.image.prefix}</registryUrl>
                <serverId>${azure.containerRegistry}</serverId>
                <dockerDirectory>docker</dockerDirectory>
                <resources>
                    <resource>
                        <targetPath>/</targetPath>
                        <directory>${project.build.directory}</directory>
                        <include>${project.build.finalName}.jar</include>
                    </resource>
                </resources>
            </configuration>
        </plugin>
        <!-- ... -->
    </plugins>
</build>

In the configuration above, we specified the Docker image name, the registry URL, and some properties similar to those of the FTP deployment.

Note that the plugin will use values in <dockerDirectory> to locate the Dockerfile. We put the Dockerfile in the docker directory, and its content is:

FROM frolvlad/alpine-oraclejdk8:slim
VOLUME /tmp
ADD azure-0.1.jar app.jar
RUN sh -c 'touch /app.jar'
EXPOSE 8080
ENTRYPOINT [ "sh", "-c", "java -Djava.security.egd=file:/dev/./urandom -jar /app.jar" ]

5.3. Run Spring Boot App in a Docker Instance

Now we can build a Docker image and push it to the Azure registry:

> mvn docker:build -DpushImage
...
[INFO] Building image baeldungadr.azurecr.io/azure-0.1
...
Successfully built aaaaaaaaaaaa
Successfully tagged baeldungadr.azurecr.io/azure-0.1:latest
[INFO] Built baeldungadr.azurecr.io/azure-0.1
[INFO] Pushing baeldungadr.azurecr.io/azure-0.1
The push refers to repository [baeldungadr.azurecr.io/azure-0.1]
...
latest: digest: sha256:0f0f... size: 1375

After the upload is finished, let’s check the baeldungadr registry; we should see the image in the repository list.

Now we’re ready to run an instance of the image.

Once the instance is booted, we can access services provided by our application via its public IP address:

> curl http://a.x.y.z:8080/hello
hello azure!

5.4. Docker Container Deployment

Suppose we have a container registry, whether it’s from Azure, Docker Hub, or our own private registry.

With the help of the following azure-webapp-maven-plugin configuration, we can also deploy our Spring Boot web app from a container image:

<configuration>
    <containerSettings>
        <imageName>${docker.image.prefix}/${project.artifactId}</imageName>
        <registryUrl>https://${docker.image.prefix}</registryUrl>
        <serverId>${azure.containerRegistry}</serverId>
    </containerSettings>
    <!-- ... -->
</configuration>

Once we run mvn azure-webapp:deploy, the plugin will deploy our web app to an instance of the specified image.

Then we can access web services via the instance’s IP address or Azure App Service’s URL.

6. Conclusion

In this article, we introduced how to deploy a Spring Boot application to Azure, either as a deployable WAR or as a runnable JAR in a container. Though we’ve covered most of the features of azure-webapp-maven-plugin, there are still more features to explore. Please check out here for more details.

As always, the full implementation of the code samples can be found over on GitHub.
