
Exceptions in Netty

1. Overview

In this quick article, we’ll be looking at exception handling in Netty.

Simply put, Netty is a framework for building high-performance asynchronous and event-driven network applications. I/O operations are handled inside its life-cycle using callback methods.

More details about the framework and how to get started with it can be found in our previous article here.

2. Handling Exceptions in Netty

As mentioned earlier, Netty is an event-driven system and has callback methods for specific events. Exceptions are such events too.

Exceptions can occur while processing data received from the client or during I/O operations. When this happens, a dedicated exception-caught event is fired.

2.1. Handling Exceptions in the Channel of Origin

The exception-caught event, when fired, is handled by the exceptionCaught() method of the ChannelInboundHandler or its adapters and subclasses.

Note that the callback has been deprecated in the ChannelHandler interface. It’s now limited to the ChannelInboundHandler interface.

The method accepts a Throwable object and a ChannelHandlerContext object as parameters. The Throwable object could be used to print the stack trace or get the localized error message.

So let’s create a channel handler, ChannelHandlerA and override its exceptionCaught() with our implementation:

public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) 
  throws Exception {
 
    logger.info(cause.getLocalizedMessage());
    //do more exception handling
    ctx.close();
}

In the code snippet above, we log the exception message and also call close() on the ChannelHandlerContext.

This will close the channel between the server and the client, essentially causing the client to disconnect and terminate.

2.2. Propagating Exceptions

In the previous section, we handled the exception in its channel of origin. However, we can actually propagate the exception on to another channel handler in the pipeline.

Instead of logging the error message and calling ctx.close(), we’ll use the ChannelHandlerContext object to fire another exception-caught event manually.

This will cause the exceptionCaught() of the next channel handler in the pipeline to be invoked.

Let’s modify the code snippet in ChannelHandlerA to propagate the event by calling the ctx.fireExceptionCaught():

public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) 
  throws Exception {
 
    logger.info("Exception Occurred in ChannelHandler A");
    ctx.fireExceptionCaught(cause);
}

Furthermore, let’s create another channel handler, ChannelHandlerB and override its exceptionCaught() with this implementation:

@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) 
  throws Exception {
 
    logger.info("Exception Handled in ChannelHandler B");
    logger.info(cause.getLocalizedMessage());
    //do more exception handling
    ctx.close();
}

In the Server class, the channels are added to the pipeline in the following order:

ch.pipeline().addLast(new ChannelHandlerA(), new ChannelHandlerB());

Propagating exception-caught events manually is useful in cases where all exceptions are being handled by one designated channel handler.

3. Conclusion

In this tutorial, we’ve looked at how to handle exceptions in Netty using the callback method and how to propagate the exceptions if needed.

The complete source code is available over on Github.


Spring MVC Tutorial

1. Overview and Maven

This is a simple Spring MVC tutorial showing how to set up a Spring MVC project, both with Java-based Configuration as well as with XML Configuration.

The Maven artifacts for a Spring MVC project are described in detail in the Spring MVC dependencies article.

2. The web.xml

This is a simple configuration of the web.xml for a Spring MVC project:

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://java.sun.com/xml/ns/javaee" 
   xmlns:web="http://java.sun.com/xml/ns/javaee/web-app_3_1.xsd"
   xsi:schemaLocation="
      http://java.sun.com/xml/ns/javaee 
      http://java.sun.com/xml/ns/javaee/web-app_3_1.xsd" 
   id="WebApp_ID" version="3.1">

   <display-name>Spring MVC Java Config App</display-name>

   <servlet>
      <servlet-name>mvc</servlet-name>
      <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
      <load-on-startup>1</load-on-startup>
   </servlet>
   <servlet-mapping>
      <servlet-name>mvc</servlet-name>
      <url-pattern>/</url-pattern>
   </servlet-mapping>

   <context-param>
      <param-name>contextClass</param-name>
      <param-value>
         org.springframework.web.context.support.AnnotationConfigWebApplicationContext
      </param-value>
   </context-param>
   <context-param>
      <param-name>contextConfigLocation</param-name>
      <param-value>org.baeldung.spring.web.config</param-value>
   </context-param>
   <listener>
      <listener-class>
         org.springframework.web.context.ContextLoaderListener
      </listener-class>
   </listener>

</web-app>

Since we are using Java-based configuration, we use AnnotationConfigWebApplicationContext as the main context class – it accepts @Configuration annotated classes as input. As such, we only need to specify the package where these configuration classes are located, via contextConfigLocation.

To keep this mechanism flexible, multiple packages are also configurable here, merely space delimited:

   <context-param>
      <param-name>contextConfigLocation</param-name>
      <param-value>org.baeldung.spring.web.config org.baeldung.spring.persistence.config</param-value>
   </context-param>

This allows more complex projects with multiple modules to manage their own Spring Configuration classes and contribute them to the overall Spring context at runtime.

Finally, the Servlet is mapped to / – meaning it becomes the default Servlet of the application and it will pick up every pattern that doesn’t have another exact match defined by another Servlet.

Note: There are multiple approaches to configure a Spring MVC project. Instead of the web.xml described above, we can have a 100% Java-configured project using an initializer, as sketched below. For more details, refer to our existing article.
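
For illustration, here’s a minimal sketch of such an initializer (the class name is ours, and it assumes the ClientWebConfig class shown in the next section):

public class WebAppInitializer 
  extends AbstractAnnotationConfigDispatcherServletInitializer {

    @Override
    protected Class<?>[] getRootConfigClasses() {
        // no separate root context in this simple setup
        return null;
    }

    @Override
    protected Class<?>[] getServletConfigClasses() {
        // the @Configuration class that replaces the XML configuration above
        return new Class<?>[] { ClientWebConfig.class };
    }

    @Override
    protected String[] getServletMappings() {
        // same "/" mapping we gave the DispatcherServlet in web.xml
        return new String[] { "/" };
    }
}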

3. The Spring MVC Configuration – Java

The Spring MVC Java configuration is simple – it uses the MVC configuration support introduced in Spring 3.1:

@EnableWebMvc
@Configuration
public class ClientWebConfig extends WebMvcConfigurerAdapter {

   @Override
   public void addViewControllers(ViewControllerRegistry registry) {
      super.addViewControllers(registry);

      registry.addViewController("/sample.html");
   }

   @Bean
   public ViewResolver viewResolver() {
      InternalResourceViewResolver bean = new InternalResourceViewResolver();

      bean.setViewClass(JstlView.class);
      bean.setPrefix("/WEB-INF/view/");
      bean.setSuffix(".jsp");

      return bean;
   }
}

Very important here is that we can register view controllers that create a direct mapping between the URL and the view name – no need for any Controller between the two now that we’re using Java configuration.

4. The Spring MVC Configuration – XML

Alternatively to the Java configuration above, we can also use a purely XML config:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans" 
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
    xmlns:mvc="http://www.springframework.org/schema/mvc"
    xsi:schemaLocation="
        http://www.springframework.org/schema/beans 
        http://www.springframework.org/schema/beans/spring-beans-3.2.xsd
        http://www.springframework.org/schema/mvc
        http://www.springframework.org/schema/mvc/spring-mvc-3.2.xsd">

    <bean id="viewResolver" 
      class="org.springframework.web.servlet.view.InternalResourceViewResolver">
        <property name="prefix" value="/WEB-INF/view/" />
        <property name="suffix" value=".jsp" />
    </bean>

    <mvc:view-controller path="/sample.html" view-name="sample" />

</beans>

5. The JSP Views

We defined above a basic view controller – sample.html – the corresponding jsp resource is:

<html>
   <head></head>

   <body>
      <h1>This is the body of the sample view</h1>	
   </body>
</html>

The JSP based view files are located under the /WEB-INF folder of the project, so they’re only accessible to the Spring infrastructure and not by direct URL access.

6. Spring MVC with Boot

Spring Boot is an addition to Spring Platform which makes it very easy to get started and create stand-alone, production-grade applications. Boot is not intended to replace Spring, but to make working with it faster and easier.

6.1. Spring Boot Starters

The new framework provides convenient starter dependencies – which are dependency descriptors that can bring in all the necessary technology for a certain functionality.

These have the advantage that we no longer need to specify a version for each dependency, but instead let the starter manage the dependencies for us.

The quickest way to get started is by adding the spring-boot-starter-parent to our pom.xml:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.5.6.RELEASE</version>
</parent>

This will take care of dependency management.

6.2. Spring Boot Entry Point

Each application built using Spring Boot needs merely to define the main entry point. This is usually a Java class with the main method, annotated with @SpringBootApplication:

@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

This annotation adds the following other annotations:

  • @Configuration – which marks the class as a source of bean definitions
  • @EnableAutoConfiguration – which tells the framework to add beans based on the dependencies on the classpath automatically
  • @ComponentScan – which scans for other configurations and beans in the same package as the Application class or below

With Spring Boot, we can set up the frontend using Thymeleaf or JSPs without the ViewResolver defined in Section 4. By adding the spring-boot-starter-thymeleaf dependency to our pom.xml, Thymeleaf gets enabled and no extra configuration is necessary, as the controller sketch below illustrates.
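
As a quick illustration (the class name and URL here are ours), a controller like this is enough to render a Thymeleaf template located at src/main/resources/templates/sample.html:

@Controller
public class SampleController {

    @GetMapping("/sample")
    public String sample() {
        // resolved by the auto-configured Thymeleaf view resolver
        return "sample";
    }
}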

The source code for the Boot app is, as always, available over on GitHub.

Finally, if you’re looking to get started with Spring Boot, have a look at our reference intro here.

7. Conclusion

In this example we configured a simple and functional Spring MVC project, using Java configuration.

The implementation of this simple Spring MVC tutorial can be found in the GitHub project – this is an Eclipse based project, so it should be easy to import and run as it is.

When the project runs locally, the sample.html can be accessed at:

http://localhost:8080/spring-mvc-xml/sample.html

Getting Started with Java RMI

1. Overview

When two JVMs need to communicate, Java RMI is one option we have to make that happen. In this article, we’ll bootstrap a simple example showcasing Java RMI technology.

2. Creating the Server

There are two steps needed to create an RMI server:

  1. Create an interface defining the client/server contract.
  2. Create an implementation of that interface.

2.1. Defining the Contract

First of all, let’s create the interface for the remote object. This interface extends the java.rmi.Remote marker interface.

In addition, each method declared in the interface throws the java.rmi.RemoteException:

public interface MessengerService extends Remote {
    String sendMessage(String clientMessage) throws RemoteException;
}

Note, though, that RMI supports the full Java specification for method signatures, as long as the Java types implement java.io.Serializable.
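For example, a hypothetical payload type like the one below could be used as a parameter or return type, since it implements Serializable:

public class Message implements Serializable {

    private static final long serialVersionUID = 1L;

    private final String text;

    public Message(String text) {
        this.text = text;
    }

    public String getText() {
        return text;
    }
}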

We’ll see in future sections, how both the client and the server will use this interface.

For the server, we’ll create the implementation, often referred to as the Remote Object.

For the client, the RMI library will dynamically create an implementation called a Stub.

2.2. Implementation

Furthermore, let’s implement the remote interface, again called the Remote Object:

public class MessengerServiceImpl implements MessengerService { 
 
    @Override 
    public String sendMessage(String clientMessage) { 
        return "Client Message".equals(clientMessage) ? "Server Message" : null;
    }

    public String unexposedMethod() {
        // not declared in MessengerService, so it's not reachable by RMI clients
        return "unexposed";
    }
}

Notice, that we’ve left off the throws RemoteException clause from the method definition.

It’d be unusual for our remote object to throw a RemoteException since this exception is typically reserved for the RMI library to raise communication errors to the client.

Leaving it out also has the benefit of keeping our implementation RMI-agnostic.

Also, any additional methods defined in the remote object, but not in the interface, remain invisible for the client.

3. Registering the Service

Once we create the remote implementation, we need to bind the remote object to an RMI registry.

3.1. Creating a Stub

First, we need to create a stub of our remote object:

MessengerService server = new MessengerServiceImpl();
MessengerService stub = (MessengerService) UnicastRemoteObject
  .exportObject((MessengerService) server, 0);

We use the static UnicastRemoteObject.exportObject method to create our stub implementation. The stub is what does the magic of communicating with the server over the underlying RMI protocol.

The first argument to exportObject is the remote server object.

The second argument is the port that exportObject uses for exporting the remote object to the registry.

Passing a value of zero indicates that we don’t care which port exportObject uses, which is typical, so the port is chosen dynamically.

Unfortunately, the exportObject() method without a port number is deprecated.

3.2. Creating a Registry

We can stand up a registry local to our server or as a separate stand-alone service.

For simplicity, we’ll create one that is local to our server:

Registry registry = LocateRegistry.createRegistry(1099);

This creates a registry to which stubs can be bound by servers and discovered by clients.

Also, we’ve used the createRegistry method, since we are creating the registry local to the server.

By default, an RMI registry runs on port 1099, but a different port can also be specified in the createRegistry factory method.

But in the stand-alone case, we’d call getRegistry, passing the hostname and port number as parameters.
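
For instance, assuming a registry already running on a hypothetical host registry.example.com and the default port, that call would look like:

// connect to an existing, remote registry instead of creating a local one
Registry registry = LocateRegistry.getRegistry("registry.example.com", 1099);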

3.3. Binding the Stub

Consequently, let’s bind our stub to the registry. An RMI registry is a naming facility like JNDI etc. We can follow a similar pattern here, binding our stub to a unique key:

registry.rebind("MessengerService", stub);

As a result, the remote object is now available to any client that can locate the registry.

4. Creating the Client

Finally, let’s write the client to invoke the remote methods.

To do this, we’ll first locate the RMI registry. In addition, we’ll look up the remote object stub using the bound unique key.

And finally, we’ll invoke the sendMessage method:

Registry registry = LocateRegistry.getRegistry();
MessengerService server = (MessengerService) registry
  .lookup("MessengerService");
String responseMessage = server.sendMessage("Client Message");
String expectedMessage = "Server Message";
 
assertEquals(expectedMessage, responseMessage);

Because we’re running the RMI registry on the local machine and default port 1099, we don’t pass any parameters to getRegistry.

However, if the registry is on a different host or port, we can supply those parameters.

Once we look up the stub using the registry, we can invoke methods on the remote server.

5. Conclusion

In this tutorial, we got a brief introduction to Java RMI and how it can be the foundation for client-server applications. Stay tuned for additional posts about some of RMI’s unique features!

The source code of this tutorial can be found over on GitHub.

Spring Boot Actuator

1. Overview

In this article, we’re going to introduce Spring Boot Actuator. We’ll cover the basics first, then discuss in detail what’s available in Spring Boot 1.x vs 2.x.

We’ll learn how to use, configure and extend this monitoring tool in Spring Boot 1.x. Then, we’ll discuss how to do the same using Boot 2.x and WebFlux taking advantage of the reactive programming model.

Spring Boot Actuator has been available since April 2014, together with the first Spring Boot release.
With the upcoming release of Spring Boot 2, Actuator has been redesigned, and exciting new endpoints have been added.

This guide is split into three main sections.

2. What is an Actuator?

In essence, Actuator brings production-ready features to our application.

Monitoring our app, gathering metrics, understanding traffic or the state of our database becomes trivial with this dependency.

The main benefit of this library is that we can get production grade tools without having to actually implement these features ourselves.

Actuator is mainly used to expose operational information about the running application – health, metrics, info, dump, env, etc. It uses HTTP endpoints or JMX beans to enable us to interact with it.

Once this dependency is on the classpath several endpoints are available for us out of the box. As with most Spring modules, we can easily configure or extend it in many ways.

2.1. Getting started

To enable Spring Boot Actuator, we just need to add the spring-boot-starter-actuator dependency to our build configuration. In Maven:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

Note that this remains valid regardless of the Boot version, as versions are specified in the Spring Boot Bill of Materials (BOM).

3. Spring Boot 1.x Actuator

In 1.x, Actuator follows a read/write model: we can either read from it or write to it. For example, we can retrieve metrics or the health of our application. Alternatively, we could gracefully terminate our app or change our logging configuration.

In order to get it working, Actuator requires Spring MVC to expose its endpoints through HTTP. No other technology is supported.

3.1. Endpoints

In 1.x, Actuator brings its own security model. It takes advantage of Spring Security constructs, but needs to be configured independently from the rest of the application.

Also, most endpoints are sensitive – meaning they’re not fully public, or in other words, most information will be omitted – while a handful are not, e.g., /info.

Here are some of the most common endpoints Boot provides out of the box:

  • /health – Shows application health information (a simple ‘status’ when accessed over an unauthenticated connection or full message details when authenticated); it’s not sensitive by default
  • /info – Displays arbitrary application info; not sensitive by default
  • /metrics – Shows ‘metrics’ information for the current application; it’s sensitive by default
  • /trace – Displays trace information (by default the last few HTTP requests)

We can find the full list of existing endpoints over on the official docs.

3.2. Configuring Existing Endpoints

Each endpoint can be customized with properties using the following format: endpoints.[endpoint name].[property to customize]

Three properties are available:

  • id – by which this endpoint will be accessed over HTTP
  • enabled – if true, then the endpoint can be accessed over HTTP; otherwise not
  • sensitive – if true, then authorization is needed to show crucial information over HTTP

For example, adding the following properties will customize the /beans endpoint:

endpoints.beans.id=springbeans
endpoints.beans.sensitive=false
endpoints.beans.enabled=true

3.3. /health Endpoint

The /health endpoint is used to check the health or state of the running application. It’s usually exercised by monitoring software to alert us if the running instance goes down or gets unhealthy for other reasons, e.g., connectivity issues with our DB or lack of disk space.

By default, only basic health information is shown to unauthenticated access over HTTP:

{
    "status" : "UP"
}

This health information is collected from all the beans implementing the HealthIndicator interface configured in our application context.

Some information returned by HealthIndicator is sensitive in nature – but we can configure endpoints.health.sensitive=false to expose more detailed information like disk space, messaging broker connectivity, custom checks etc.

We could also implement our own custom health indicator – which can collect any type of custom health data specific to the application and automatically expose it through the /health endpoint:

@Component
public class HealthCheck implements HealthIndicator {
 
    @Override
    public Health health() {
        int errorCode = check(); // perform some specific health check
        if (errorCode != 0) {
            return Health.down()
              .withDetail("Error Code", errorCode).build();
        }
        return Health.up().build();
    }
    
    public int check() {
    	// Our logic to check health
    	return 0;
    }
}

Here’s what the output would look like:

{
    "status" : "DOWN",
    "myHealthCheck" : {
        "status" : "DOWN",
        "Error Code" : 1
     },
     "diskSpace" : {
         "status" : "UP",
         "free" : 209047318528,
         "threshold" : 10485760
     }
}

3.4. /info Endpoint

We can also customize the data shown by the /info endpoint – for example:

info.app.name=Spring Sample Application
info.app.description=This is my first spring boot application
info.app.version=1.0.0

And the sample output:

{
    "app" : {
        "version" : "1.0.0",
        "description" : "This is my first spring boot application",
        "name" : "Spring Sample Application"
    }
}

3.5. /metrics Endpoint

The /metrics endpoint publishes information about the OS and JVM, as well as application-level metrics. Once enabled, we get information such as memory, heap, processors, threads, classes loaded, classes unloaded, and thread pools, along with some HTTP metrics as well.

Here’s what the output of this endpoint looks like out of the box:

{
    "mem" : 193024,
    "mem.free" : 87693,
    "processors" : 4,
    "instance.uptime" : 305027,
    "uptime" : 307077,
    "systemload.average" : 0.11,
    "heap.committed" : 193024,
    "heap.init" : 124928,
    "heap.used" : 105330,
    "heap" : 1764352,
    "threads.peak" : 22,
    "threads.daemon" : 19,
    "threads" : 22,
    "classes" : 5819,
    "classes.loaded" : 5819,
    "classes.unloaded" : 0,
    "gc.ps_scavenge.count" : 7,
    "gc.ps_scavenge.time" : 54,
    "gc.ps_marksweep.count" : 1,
    "gc.ps_marksweep.time" : 44,
    "httpsessions.max" : -1,
    "httpsessions.active" : 0,
    "counter.status.200.root" : 1,
    "gauge.response.root" : 37.0
}

In order to gather custom metrics, we have support for ‘gauges’, that is, single-value snapshots of data, and ‘counters’, i.e., incrementing/decrementing metrics.

Let’s implement our own custom metrics into the /metrics endpoint. For example, we’ll customize the login flow to record a successful and failed login attempt:

@Service
public class LoginServiceImpl {

    private final CounterService counterService;
    
    public LoginServiceImpl(CounterService counterService) {
        this.counterService = counterService;
    }
	
    public boolean login(String userName, char[] password) {
        boolean success;
        // Arrays.equals compares the password char arrays by content, not by reference
        if (userName.equals("admin") && Arrays.equals("secret".toCharArray(), password)) {
            counterService.increment("counter.login.success");
            success = true;
        }
        else {
            counterService.increment("counter.login.failure");
            success = false;
        }
        return success;
    }
}

Here’s what the output might look like:

{
    ...
    "counter.login.success" : 105,
    "counter.login.failure" : 12,
    ...
}

Note that login attempts and other security related events are available out of the box in Actuator as audit events.
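
Gauges work in a similar way. As a hedged sketch (the service and metric names here are hypothetical), we could record a single-value snapshot through the autoconfigured GaugeService:

@Service
public class CheckoutServiceImpl {

    private final GaugeService gaugeService;

    public CheckoutServiceImpl(GaugeService gaugeService) {
        this.gaugeService = gaugeService;
    }

    public void checkout(double cartTotal) {
        // shows up in /metrics, typically prefixed as gauge.checkout.cart.total
        gaugeService.submit("checkout.cart.total", cartTotal);
        // ... rest of the checkout logic
    }
}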

3.6. Creating A New Endpoint

Besides using the existing endpoints provided by Spring Boot, we could also create an entirely new one.

Firstly, we’d need to have the new endpoint implement the Endpoint<T> interface:

@Component
public class CustomEndpoint implements Endpoint<List<String>> {
    
    @Override
    public String getId() {
        return "customEndpoint";
    }

    @Override
    public boolean isEnabled() {
        return true;
    }

    @Override
    public boolean isSensitive() {
        return true;
    }

    @Override
    public List<String> invoke() {
        // Custom logic to build the output
        List<String> messages = new ArrayList<String>();
        messages.add("This is message 1");
        messages.add("This is message 2");
        return messages;
    }
}

In order to access this new endpoint, its id is used to map it, i.e., we can exercise it by hitting /customEndpoint.

Output:

[ "This is message 1", "This is message 2" ]

3.7. Further Customization

For security purposes, we might choose to expose the actuator endpoints over a non-standard port – the management.port property can easily be used to configure that.

Also, as we already mentioned, in 1.x Actuator configures its own security model, based on Spring Security but independent from the rest of the application.
Hence, we can set the management.address property to restrict where the endpoints can be accessed from over the network:

#port used to expose actuator
management.port=8081 

#CIDR allowed to hit actuator
management.address=127.0.0.1 

#Whether security should be enabled or disabled altogether
management.security.enabled=false

Besides, all the built-in endpoints except /info are sensitive by default. If the application is using Spring Security – we can secure these endpoints by defining the default security properties – username, password, and role – in the application.properties file:

security.user.name=admin
security.user.password=secret
management.security.role=SUPERUSER

4. Spring Boot 2.x Actuator

In 2.x, Actuator keeps its fundamental intent but simplifies its model, extends its capabilities, and incorporates better defaults.

Firstly, this version becomes technology agnostic. Also, it simplifies its security model by merging it with the application one.

Lastly, among the various changes, it’s important to keep in mind that some of them are breaking. This includes HTTP request/responses as well as Java APIs.

Furthermore, the latest version now supports a CRUD model, as opposed to the old read/write model.

4.1. Technology Support

With its second major version, Actuator is now technology-agnostic, whereas in 1.x it was tied to MVC and therefore to the Servlet API.

In 2.x, Actuator defines its own model, pluggable and extensible, without relying on MVC.

Hence, with this new model, we’re able to take advantage of MVC as well as WebFlux as an underlying web technology.

Moreover, forthcoming technologies could be added by implementing the right adapters.

Lastly, JMX remains supported to expose endpoints without any additional code.

4.2. Important Changes

Unlike in previous versions, Actuator comes with most endpoints disabled.

Thus, the only two available by default are /health and /info.

Should we want to enable all of them, we could set management.endpoints.web.expose=*. Alternatively, we could list the endpoints which should be exposed.

Actuator now shares the security config with the regular App security rules. Hence, the security model is dramatically simplified. 

Therefore, to tweak Actuator security rules, we could just add an entry for /actuator/**:

@Bean
public SecurityWebFilterChain securityWebFilterChain(
  ServerHttpSecurity http) {
    return http.authorizeExchange()
      .pathMatchers("/actuator/**").permitAll()
      .anyExchange().authenticated()
      .and().build();
}

We can find further details on the brand new Actuator official docs.

Also, by default, all Actuator endpoints are now placed under the /actuator path.

As in the previous version, we can tweak this path using the new management.endpoints.web.base-path property.

4.3. Predefined Endpoints

Let’s have a look at some of the available endpoints; most of them were already available in 1.x.

Nonetheless, some endpoints have been added, some removed, and some restructured:

  • /auditevents – lists security audit-related events such as user login/logout. Also, we can filter by principal or type among other fields
  • /beans – returns all available beans in our BeanFactory. Unlike /auditevents, it doesn’t support filtering
  • /conditions – formerly known as /autoconfig, builds a report of conditions around auto-configuration
  • /configprops – allows us to fetch all @ConfigurationProperties beans
  • /env – returns the current environment properties. Additionally, we can retrieve single properties
  • /flyway – provides details about our Flyway database migrations
  • /health – summarises the health status of our application
  • /heapdump – builds and returns a heap dump from the JVM used by our application
  • /info – returns general information. It might be custom data, build information or details about the latest commit
  • /liquibase – behaves like /flyway but for Liquibase
  • /logfile – returns ordinary application logs
  • /loggers – enables us to query and modify the logging level of our application
  • /metrics – details metrics of our application. This might include generic metrics as well as custom ones
  • /prometheus – returns metrics like the previous one, but formatted to work with a Prometheus server
  • /scheduledtasks – provides details about every scheduled task within our application
  • /sessions – lists HTTP sessions given we are using Spring Session
  • /shutdown – performs a graceful shutdown of the application
  • /threaddump – dumps the thread information of the underlying JVM

4.4. Health Indicators

Just like in the previous version, we can add custom indicators easily. Unlike other APIs, the abstractions for creating custom health endpoints remain unchanged. However, a new interface, ReactiveHealthIndicator, has been added to implement reactive health checks.

Let’s have a look at a simple custom reactive health check:

@Component
public class DownstreamServiceHealthIndicator implements ReactiveHealthIndicator {

    @Override
    public Mono<Health> health() {
        return checkDownstreamServiceHealth().onErrorResume(
          ex -> Mono.just(new Health.Builder().down(ex).build())
        );
    }

    private Mono<Health> checkDownstreamServiceHealth() {
        // we could use WebClient to check health reactively
        return Mono.just(new Health.Builder().up().build());
    }
}

A handy feature of health indicators is that we can aggregate them as part of a hierarchy. Hence, following the previous example, we could group all downstream services under a downstream-services category. This category would be healthy as long as every nested service was reachable.

Composite health checks are present in 1.x through CompositeHealthIndicator. Also, in 2.x, we can use CompositeReactiveHealthIndicator as its reactive counterpart.
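
For illustration, here’s a minimal sketch of the 1.x-style composite, declared inside any @Configuration class and assuming two hypothetical downstream HealthIndicator beans already exist:

@Bean
public HealthIndicator downstreamServices(
  HealthIndicator serviceAHealthIndicator, HealthIndicator serviceBHealthIndicator) {
    // aggregates the nested statuses into a single downstream-services status
    CompositeHealthIndicator composite 
      = new CompositeHealthIndicator(new OrderedHealthAggregator());
    composite.addHealthIndicator("serviceA", serviceAHealthIndicator);
    composite.addHealthIndicator("serviceB", serviceBHealthIndicator);
    return composite;
}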

Also, the endpoints.<id>.sensitive flag from Spring Boot 1.x has been removed. To control whether the complete health report is shown, we can take advantage of the new management.endpoint.health.show-details property, which is disabled by default.

4.5. Metrics in Spring Boot 2

In Spring Boot 2.0, the in-house metrics were replaced with Micrometer support, so we can expect breaking changes. If our application was using metric services such as GaugeService or CounterService, they will no longer be available.

Instead, we’re expected to interact with Micrometer directly. In Spring Boot 2.0, we’ll get a bean of type MeterRegistry autoconfigured for us.

Furthermore, Micrometer is now part of Actuator’s dependencies. Hence, we should be good to go as long as the Actuator dependency is in the classpath.
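
As a brief sketch of what that direct interaction could look like (mirroring the 1.x login counter example above, with hypothetical names), we’d inject the MeterRegistry and increment a counter on it:

@Service
public class LoginMetrics {

    private final MeterRegistry registry;

    public LoginMetrics(MeterRegistry registry) {
        this.registry = registry;
    }

    public void onLoginSuccess() {
        // Micrometer creates the counter on first use and reuses it afterwards
        registry.counter("login.success").increment();
    }
}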

Moreover, we’ll get a completely new response from the /metrics endpoint:

{
  "names": [
    "jvm.gc.pause",
    "jvm.buffer.memory.used",
    "jvm.memory.used",
    "jvm.buffer.count",
    // ...
  ]
}

As we can observe in the previous example, there are no actual metric values as there were in 1.x.

To get the actual value of a specific metric, we can now navigate to the desired metric, e.g., /actuator/metrics/jvm.gc.pause, and get a detailed response:

{
  "name": "jvm.gc.pause",
  "measurements": [
    {
      "statistic": "Count",
      "value": 3.0
    },
    {
      "statistic": "TotalTime",
      "value": 7.9E7
    },
    {
      "statistic": "Max",
      "value": 7.9E7
    }
  ],
  "availableTags": [
    {
      "tag": "cause",
      "values": [
        "Metadata GC Threshold",
        "Allocation Failure"
      ]
    },
    {
      "tag": "action",
      "values": [
        "end of minor GC",
        "end of major GC"
      ]
    }
  ]
}

As we can see, metrics are now much more thorough, including not only different values but also some associated metadata.

4.6. Customizing the /info Endpoint

The /info endpoint remains unchanged. As before, we can add Git details using the respective Maven or Gradle plugin:

<plugin>
    <groupId>pl.project13.maven</groupId>
    <artifactId>git-commit-id-plugin</artifactId>
</plugin>

Likewise, we could also include build information including name, group, and version using the Maven or Gradle plugin:

<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <executions>
        <execution>
            <goals>
                <goal>build-info</goal>
            </goals>
        </execution>
    </executions>
</plugin>

4.7. Creating a Custom Endpoint

As we pointed out previously, we can create custom endpoints. However, Spring Boot 2 has redesigned the way to achieve this to support the new technology-agnostic paradigm.

Let’s create an Actuator endpoint to query, enable and disable feature flags in our application:

@Component
@Endpoint(id = "features")
public class FeaturesEndpoint {

    private Map<String, Feature> features = new ConcurrentHashMap<>();

    @ReadOperation
    public Map<String, Feature> features() {
        return features;
    }

    @ReadOperation
    public Feature feature(@Selector String name) {
        return features.get(name);
    }

    @WriteOperation
    public void configureFeature(@Selector String name, Feature feature) {
        features.put(name, feature);
    }

    @DeleteOperation
    public void deleteFeature(@Selector String name) {
        features.remove(name);
    }

    public static class Feature {
        private Boolean enabled;

        // [...] getters and setters 
    }

}

To get the endpoint, we need a bean. In our example, we’re using @Component for this. Also, we need to decorate this bean with @Endpoint.

The path of our endpoint is determined by the id parameter of @Endpoint, in our case, it’ll route requests to /actuator/features.

Once ready, we can start defining operations using:

  • @ReadOperation – it’ll map to HTTP GET
  • @WriteOperation – it’ll map to HTTP POST
  • @DeleteOperation – it’ll map to HTTP DELETE

When we run our application with the previous endpoint, Spring Boot will register it.

A quick way to verify this would be checking the logs:

[...].WebFluxEndpointHandlerMapping: Mapped "{[/actuator/features/{name}],
  methods=[GET],
  produces=[application/vnd.spring-boot.actuator.v2+json || application/json]}"
[...].WebFluxEndpointHandlerMapping : Mapped "{[/actuator/features],
  methods=[GET],
  produces=[application/vnd.spring-boot.actuator.v2+json || application/json]}"
[...].WebFluxEndpointHandlerMapping : Mapped "{[/actuator/features/{name}],
  methods=[POST],
  consumes=[application/vnd.spring-boot.actuator.v2+json || application/json]}"
[...].WebFluxEndpointHandlerMapping : Mapped "{[/actuator/features/{name}],
  methods=[DELETE]}"[...]

In the previous logs, we can see how WebFlux is exposing our new endpoint. Should we switch to MVC, it’ll simply delegate to that technology without us having to change any code.

Also, we have a few important considerations to keep in mind with this new approach:

  • There are no dependencies with MVC
  • All the metadata present as methods before (sensitive, enabled…) no longer exists. We can, however, enable or disable the endpoint using @Endpoint(id = "features", enableByDefault = false)
  • Unlike in 1.x, there is no need to extend a given interface anymore
  • In contrast with the old Read/Write model, now we can define DELETE operations using @DeleteOperation

4.8. Extending Existing Endpoints

Let’s imagine we want to make sure the production instance of our application is never a SNAPSHOT version. We decided to do this by changing the HTTP status code of the Actuator endpoint that returns this information, i.e., /info: if our app happened to be a SNAPSHOT, we would get a different HTTP status code.

We can easily extend the behavior of a predefined endpoint using the @EndpointExtension annotation, or its more concrete specializations @EndpointWebExtension and @EndpointJmxExtension:

@Component
@EndpointWebExtension(endpoint = InfoEndpoint.class)
public class InfoWebEndpointExtension {

    private InfoEndpoint delegate;

    // standard constructor

    @ReadOperation
    public WebEndpointResponse<Map> info() {
        Map<String, Object> info = this.delegate.info();
        Integer status = getStatus(info);
        return new WebEndpointResponse<>(info, status);
    }

    private Integer getStatus(Map<String, Object> info) {
        // return 5xx if this is a snapshot
        return 200;
    }
}

5. Summary

In this article, we talked about Spring Boot Actuator. We started defining what Actuator means and what it does for us.

Next, we focused on Actuator in Spring Boot 1.x, discussing how to use it, tweak it, and extend it.

Then, we discussed Actuator in Spring Boot 2. We focused on what’s new, and we took advantage of WebFlux to expose our endpoint.

Also, we talked about the important security changes that we can find in this new iteration. We discussed some popular endpoints and how they have changed as well.

Lastly, we demonstrated how to customize and extend Actuator.

As always we can find the code used in this article over on GitHub for both Spring Boot 1.x and Spring Boot 2.x.

The Trie Data Structure in Java

1. Overview

Data structures represent a crucial asset in computer programming, and knowing when and why to use them is very important.

This article is a brief introduction to the trie (pronounced “try”) data structure, its implementation, and complexity analysis.

2. Trie

A trie is a discrete data structure that’s not quite well-known or widely-mentioned in typical algorithm courses, but nevertheless an important one.

A trie (also known as a digital tree) and sometimes even radix tree or prefix tree (as they can be searched by prefixes), is an ordered tree structure, which takes advantage of the keys that it stores – usually strings.

A node’s position in the tree defines the key with which that node is associated, which makes tries different in comparison to binary search trees, in which a node stores a key that corresponds only to that node.

All descendants of a node have a common prefix of a String associated with that node, whereas the root is associated with an empty String.

Here we have a preview of TrieNode that we will be using in our implementation of the Trie:

public class TrieNode {
    private HashMap<Character, TrieNode> children = new HashMap<>();
    private boolean endOfWord;

    // getters and setters for children and endOfWord
}

There may be cases when a trie is a binary search tree, but in general, these are different. Both binary search trees and tries are trees, but each node in a binary search tree always has at most two children, whereas trie nodes can have more.

In a trie, every node (except the root node) stores one character or a digit. By traversing the trie down from the root node to a particular node n, a common prefix of characters or digits can be formed which is shared by other branches of the trie as well.

By traversing up the trie from a leaf node to the root node, a String or a sequence of digits can be formed.

Here is the Trie class, which represents an implementation of the trie data structure:

public class Trie {
    private TrieNode root = new TrieNode();
    //...
}

3. Common Operations

Now, let’s see how to implement basic operations.

3.1. Inserting Elements

The first operation that we’ll describe is the insertion of new nodes.

Before we start the implementation, it’s important to understand the algorithm:

  1. Set a current node as a root node
  2. Set the current letter as the first letter of the word
  3. If the current node has already an existing reference to the current letter (through one of the elements in the “children” field), then set current node to that referenced node. Otherwise, create a new node, set the letter equal to the current letter, and also initialize current node to this new node
  4. Repeat step 3 until the key is traversed

The complexity of this operation is O(n), where n represents the key size.

Here is the implementation of this algorithm:

public void insert(String word) {
    TrieNode current = root;

    for (int i = 0; i < word.length(); i++) {
        current = current.getChildren()
          .computeIfAbsent(word.charAt(i), c -> new TrieNode());
    }
    current.setEndOfWord(true);
}

Now let’s see how we can use this method to insert new elements in a trie:

private Trie createExampleTrie() {
    Trie trie = new Trie();

    trie.insert("Programming");
    trie.insert("is");
    trie.insert("a");
    trie.insert("way");
    trie.insert("of");
    trie.insert("life");

    return trie;
}

We can test that trie has already been populated with new nodes from the following test:

@Test
public void givenATrie_WhenAddingElements_ThenTrieNotEmpty() {
    Trie trie = createExampleTrie();

    assertFalse(trie.isEmpty());
}
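
The test above also relies on an isEmpty() helper that isn’t shown; a minimal sketch of it, assuming the root node always exists, could be:

public boolean isEmpty() {
    // the trie is empty when the root has no children
    return root.getChildren().isEmpty();
}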

3.2. Finding Elements

Let’s now add a method to check whether a particular element is already present in a trie:

  1. Get children of the root
  2. Iterate through each character of the String
  3. Check whether that character is already a part of a sub-trie. If it isn’t present anywhere in the trie, then stop the search and return false
  4. Repeat the second and the third step until there isn’t any character left in the String. If the end of the String is reached, return true

The complexity of this algorithm is O(n), where n represents the length of the key. 

The Java implementation can look like this:

public boolean containsNode(String word) {
    TrieNode current = root;
    for (int i = 0; i < word.length(); i++) {
        char ch = word.charAt(i);
        TrieNode node = current.getChildren().get(ch);
        if (node == null) {
            return false;
        }
        current = node;
    }
    return current.isEndOfWord();
}

And in action:

@Test
public void givenATrie_WhenAddingElements_ThenTrieContainsThoseElements() {
    Trie trie = createExampleTrie();

    assertFalse(trie.containsNode("3"));
    assertFalse(trie.containsNode("vida"));
    assertTrue(trie.containsNode("life"));
}

3.3. Deleting an Element

Aside from inserting and finding an element, it’s obvious that we also need to be able to delete elements.

For the deletion process, we need to follow the steps:

  1. Check whether this element is already part of the trie
  2. If the element is found, then remove it from the trie

The complexity of this algorithm is O(n), where n represents the length of the key.

Let’s have a quick look at the implementation:

public void delete(String word) {
    delete(root, word, 0);
}

private boolean delete(TrieNode current, String word, int index) {
    if (index == word.length()) {
        if (!current.isEndOfWord()) {
            return false;
        }
        current.setEndOfWord(false);
        return current.getChildren().isEmpty();
    }
    char ch = word.charAt(index);
    TrieNode node = current.getChildren().get(ch);
    if (node == null) {
        return false;
    }
    boolean shouldDeleteCurrentNode = delete(node, word, index + 1);

    if (shouldDeleteCurrentNode) {
        current.getChildren().remove(ch);
        return current.getChildren().isEmpty();
    }
    return false;
}

And in action:

@Test
public void whenDeletingElements_ThenTreeDoesNotContainThoseElements() {
    Trie trie = createExampleTrie();

    assertTrue(trie.containsNode("Programming"));
 
    trie.delete("Programming");
    assertFalse(trie.containsNode("Programming"));
}

4. Conclusion

In this article, we’ve seen a brief introduction to the trie data structure, its most common operations, and their implementation.

The full source code for the examples shown in this article can be found over on GitHub.

Creating and Configuring Jetty 9 Server in Java

1. Overview

In this article, we’ll talk about creating and configuring a Jetty instance programmatically.

Jetty is an HTTP server and servlet container designed to be lightweight and easily embeddable. We’ll take a look at how to set up and configure one or more instances of the server.

2. Maven Dependencies

To start off, we want to add Jetty 9 with the following Maven dependencies into our pom.xml:

<dependency>
    <groupId>org.eclipse.jetty</groupId>
    <artifactId>jetty-server</artifactId>
    <version>9.4.8.v20171121</version>
</dependency>
<dependency>
    <groupId>org.eclipse.jetty</groupId>
    <artifactId>jetty-webapp</artifactId>
    <version>9.4.8.v20171121</version>
</dependency>

3. Creating a Basic Server

Spinning up an embedded server with Jetty is as easy as writing:

Server server = new Server();
server.start();

Shutting it down is equally simple:

server.stop();

4. Handlers

Now that our server is up and running, we need to instruct it on what to do with the incoming requests. This can be done using the Handler interface.

We could create one ourselves but Jetty already provides a set of implementations for the most common use cases. Let’s take a look at two of them.

4.1. WebAppContext

The WebAppContext class allows us to delegate the request handling to an existing web application. The application can be provided either as a WAR file path or as a webapp folder path.

If we want to expose an application under the “myApp” context we would write:

Handler webAppHandler = new WebAppContext(webAppPath, "/myApp");
server.setHandler(webAppHandler);

4.2. HandlerCollection

For complex applications, we can even specify more than one handler using the HandlerCollection class.

Suppose we have implemented two custom handlers. The first one performs only logging operations, while the second one creates and sends back an actual response to the user. We want to process each incoming request with both of them, in this order.

Here’s how to do it:

Handler handlers = new HandlerCollection();
handlers.addHandler(loggingRequestHandler);
handlers.addHandler(customRequestHandler);
server.setHandler(handlers);
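
For reference, a hypothetical loggingRequestHandler could be a small subclass of Jetty’s org.eclipse.jetty.server.handler.AbstractHandler that only logs and leaves the response to the next handler in the collection:

public class LoggingRequestHandler extends AbstractHandler {

    @Override
    public void handle(String target, Request baseRequest, HttpServletRequest request,
      HttpServletResponse response) throws IOException, ServletException {
        // log only; the response is produced by the next handler in the collection
        System.out.println("Incoming request for " + target);
    }
}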

5. Connectors

The next thing we want to do is configure the addresses and ports the server will listen on, and add an idle timeout.

The Server class declares two convenience constructors that may be used to bind to a specific port or address.

Although this may be ok when dealing with small applications, it won’t be enough if we want to open multiple connections on different sockets.

In this situation, Jetty provides the Connector interface and more specifically the ServerConnector class which allows defining various connection configuration parameters:

ServerConnector connector = new ServerConnector(server);
connector.setPort(80);
connector.setHost("169.20.45.12");
connector.setIdleTimeout(30000);
server.addConnector(connector);

With this configuration, the server will be listening on 169.20.45.12:80. Each connection established on this address will have a timeout of 30 seconds.

If we need to configure other sockets we can add other connectors.

6. Conclusion

In this quick tutorial, we focused on how to set up an embedded server with Jetty. We also saw how to perform further configurations using Handlers and Connectors.

As always, all the code used here can be found over on GitHub.

Spring 5 and Servlet 4 – The PushBuilder

1. Introduction

The Server Push technology — part of HTTP/2 (RFC 7540) — allows us to send resources to the client proactively from the server-side. This is a major change from HTTP/1.X pull-based approach.

One of the new features that Spring 5 brings is support for server push, which comes with the Java EE 8 Servlet 4.0 API. In this article, we’ll explore how to use server push and integrate it with Spring MVC controllers.

2. Maven Dependency

Let’s start by defining dependencies we’re going to use:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-webmvc</artifactId>
    <version>5.0.2.RELEASE</version>
</dependency>
<dependency>
    <groupId>javax.servlet</groupId>
    <artifactId>javax.servlet-api</artifactId>
    <version>4.0.0</version>
    <scope>provided</scope>
</dependency>

The most recent versions of spring-mvc and servlet-api can be found on Maven Central.

3. HTTP/2 Requirements

To use server push, we’ll need to run our application in a container that supports HTTP/2 and the Servlet 4.0 API. Configuration requirements of various containers can be found here, in the Spring wiki.

Additionally, we’ll need HTTP/2 support on the client-side; of course, most current browsers do have this support.

4. PushBuilder Features

The PushBuilder interface is responsible for implementing server push. In Spring MVC, we can inject a PushBuilder as an argument of the methods annotated with @RequestMapping.

At this point, it’s important to consider that, if the client doesn’t have HTTP/2 support, the injected reference will be null.

Here is the core API offered by the PushBuilder interface:

  • path(String path) – indicates the resource that we’re going to send
  • push() – sends the resource to the client
  • addHeader(String name, String value) – indicates the header that we’ll use for the pushed resource

5. Quick Example

To demonstrate the integration, we’ll create the demo.jsp page with one resource — logo.png:

<%@ page language="java" contentType="text/html; charset=UTF-8"
  pageEncoding="UTF-8"%>
<%@ taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c"%>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>PushBuilder demo</title>
</head>
<body>
    <span>PushBuilder demo</span>
    <br>
    <img src="<c:url value="/resources/logo.png"/>" alt="Logo" 
      height="126" width="411">
    <br>
    <!--Content-->
</body>
</html>

We’ll also expose two endpoints with the PushController controller — one that uses server push and another that doesn’t:

@Controller
public class PushController {

    @GetMapping(path = "/demoWithPush")
    public String demoWithPush(PushBuilder pushBuilder) {
        if (null != pushBuilder) {
            pushBuilder.path("resources/logo.png")
              .addHeader("Content-Type", "image/png")
              .push();
        }
        return "demo";
    }

    @GetMapping(path = "/demoWithoutPush")
    public String demoWithoutPush() {
        return "demo";
    }
}

Using the Chrome Developer Tools, we can see the differences by calling both endpoints.

When we call the demoWithoutPush method, the view and the resource are requested and consumed by the client using the standard pull approach.

When we call the demoWithPush method, we can see the use of server push and how the resource is delivered in advance by the server, resulting in a lower load time.


The server push technology can improve the load time of the pages of our applications in many scenarios. That being said, we do need to consider that, although we decrease the latency, we can increase bandwidth usage, depending on the number of resources we serve.

It’s also a good idea to combine this technology with other strategies such as Caching, Resource Minification, and CDN, and to run performance tests on our application to determine the ideal endpoints for using server push.

6. Conclusion

In this quick tutorial, we saw an example of how to use the server push technology with Spring MVC using the PushBuilder interface, and we compared the load times when using it versus the standard pull technology.

As always, the source code is available over on GitHub.

An Example of Load Balancing with Zuul and Eureka

1. Overview

In this article, we’ll look at how load balancing works with Zuul and Eureka.

We’ll route requests to a REST Service discovered by Spring Cloud Eureka through Zuul Proxy.

2. Initial Setup

We need to set up the Eureka server/client as shown in the Spring Cloud Netflix-Eureka article.

3. Configuring Zuul

Zuul, among many other things, fetches service locations from Eureka and does server-side load balancing.

3.1. Maven Configuration

First, we’ll add the Zuul server and Eureka dependencies to our pom.xml:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-zuul</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-eureka</artifactId>
</dependency>

3.2. Communication with Eureka

Secondly, we’ll add the necessary properties to Zuul’s application.properties file:

server.port=8762
spring.application.name=zuul-server
eureka.instance.preferIpAddress=true
eureka.client.registerWithEureka=true
eureka.client.fetchRegistry=true
eureka.client.serviceUrl.defaultZone=http://localhost:8761/eureka/

Here we’re telling Zuul to register itself as a service in Eureka and to run on port 8762.

Next, we’ll implement the main class with @EnableZuulProxy and @EnableDiscoveryClient. @EnableZuulProxy marks this as a Zuul server and @EnableDiscoveryClient marks it as a Eureka client:

@SpringBootApplication
@EnableZuulProxy
@EnableDiscoveryClient
public class ZuulConfig {
    public static void main(String[] args) {
        SpringApplication.run(ZuulConfig.class, args);
    }
}

We’ll point our browser to http://localhost:8762/routes. This should show all the routes available to Zuul that are discovered via Eureka:

{"/spring-cloud-eureka-client/**":"spring-cloud-eureka-client"}

Now, we’ll communicate with the Eureka client using the Zuul proxy route we obtained. Pointing our browser to http://localhost:8762/spring-cloud-eureka-client/greeting should generate a response something like:

Hello from 'SPRING-CLOUD-EUREKA-CLIENT with Port Number 8081'!

4. Load Balancing with Zuul

When Zuul receives a request, it picks one of the available physical locations and forwards the request to the actual service instance. The whole process of caching the location of the service instances and forwarding the request to the actual location is provided out of the box, with no additional configuration needed.

In this way, Zuul encapsulates multiple instances of the same service behind a single route.

Internally, Zuul uses Netflix Ribbon to look up all instances of the service from service discovery (the Eureka server).

Let’s observe this behavior when multiple instances are brought up.

4.1. Registering Multiple Instances

We’ll start by running two instances (8081 and 8082 ports).

Once all the instances are up, we can observe in the logs that the physical locations of the instances are registered in the DynamicServerListLoadBalancer and the route is mapped to the ZuulController, which takes care of forwarding requests to the actual instance:

Mapped URL path [/spring-cloud-eureka-client/**] onto handler of type [class org.springframework.cloud.netflix.zuul.web.ZuulController]
Client:spring-cloud-eureka-client instantiated a LoadBalancer:
  DynamicServerListLoadBalancer:{NFLoadBalancer:name=spring-cloud-eureka-client,
  current list of Servers=[],Load balancer stats=Zone stats: {},Server stats: []}ServerList:null
Using serverListUpdater PollingServerListUpdater
DynamicServerListLoadBalancer for client spring-cloud-eureka-client initialized: 
  DynamicServerListLoadBalancer:{NFLoadBalancer:name=spring-cloud-eureka-client,
  current list of Servers=[0.0.0.0:8081, 0.0.0.0:8082],
  Load balancer stats=Zone stats: {defaultzone=[Zone:defaultzone;	Instance count:2;	
  Active connections count: 0;	Circuit breaker tripped count: 0;	
  Active connections per server: 0.0;]},
  Server stats: 
    [[Server:0.0.0.0:8081;	Zone:defaultZone;......],
    [Server:0.0.0.0:8082;	Zone:defaultZone; ......],

Note: logs were formatted for better readability.

4.2. Load-Balancing Example

Let’s navigate our browser to http://localhost:8762/spring-cloud-eureka-client/greeting a few times.

Each time, we should see a slightly different result:

Hello from 'SPRING-CLOUD-EUREKA-CLIENT with Port Number 8081'!
Hello from 'SPRING-CLOUD-EUREKA-CLIENT with Port Number 8082'!
Hello from 'SPRING-CLOUD-EUREKA-CLIENT with Port Number 8081'!

Each request received by Zuul is forwarded to a different instance in a round-robin fashion.

If we start another instance and register it in Eureka, Zuul will register it automatically and start forwarding requests to it:

Hello from 'SPRING-CLOUD-EUREKA-CLIENT with Port Number 8083'!

We can also change Zuul’s load-balancing strategy to any other Netflix Ribbon strategy – more about this can be found in our Ribbon article.
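For instance, a sketch of switching the spring-cloud-eureka-client route to Ribbon’s RandomRule via Zuul’s application.properties might look like this (assuming we only want to override the rule for that one client):

spring-cloud-eureka-client.ribbon.NFLoadBalancerRuleClassName=com.netflix.loadbalancer.RandomRule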

5. Conclusion

As we’ve seen, Zuul provides a single URL for all the instances of the REST service and performs load balancing to forward requests to one of the instances in a round-robin fashion.

As always, the complete code for this article can be found over on GitHub.


Spring Security – Auto Login User After Registration

1. Overview

In this quick tutorial, we’ll discuss how to auto-authenticate users immediately after the registration process – in a Spring Security implementation.

Simply put, once the user finishes registering, they’re typically redirected to the login page and have to now re-type their username and password.

Let’s see how we can avoid that by auto-authenticating the user instead.

Before we get started, note that we’re working within the scope of the registration series here on the site.

2. Using the HttpServletRequest

A very simple way to programmatically force an authentication is to leverage the HttpServletRequest login() method:

public void authWithHttpServletRequest(HttpServletRequest request, String username, String password) {
    try {
        request.login(username, password);
    } catch (ServletException e) {
        LOGGER.error("Error while login ", e);
    }
}

Note that, under the hood, the HttpServletRequest.login() API uses the AuthenticationManager to perform the authentication.

It’s also important to understand and deal with the ServletException that might occur at this level.

3. Using the AuthenticationManager

Next, we can also directly create a UsernamePasswordAuthenticationToken – and then go through the standard AuthenticationManager manually:

public void authWithAuthManager(HttpServletRequest request, String username, String password) {
    UsernamePasswordAuthenticationToken authToken = new UsernamePasswordAuthenticationToken(username, password);
    authToken.setDetails(new WebAuthenticationDetails(request));
    
    Authentication authentication = authenticationManager.authenticate(authToken);
    
    SecurityContextHolder.getContext().setAuthentication(authentication);
}

Notice how we’re creating the token request, passing it through the standard authentication flow, and then explicitly setting the result in the current security context.
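Note that the authenticationManager we inject here needs to be exposed as a bean; assuming a WebSecurityConfigurerAdapter-based configuration, a minimal sketch of doing that is:

@Bean
@Override
public AuthenticationManager authenticationManagerBean() throws Exception {
    // expose the AuthenticationManager built by the configuration as a bean
    return super.authenticationManagerBean();
}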

4. Complex Registration

In some more complex scenarios, the registration process has multiple stages, such as a confirmation step before the user can log into the system.

In cases like this, it’s, of course, important to understand exactly where we can auto-authenticate the user. We cannot do that right after they register because, at that point, the newly created account is still disabled.

Simply put – we have to perform the automatic authentication after they confirm their account. 

Also, keep in mind that, at that point – we no longer have access to their actual, raw credentials. We only have access to the encoded password of the user – and that’s what we’ll use here:

public void authWithoutPassword(User user){
    List<Privilege> privileges = user.getRoles().stream()
      .map(role -> role.getPrivileges())
      .flatMap(list -> list.stream())
      .distinct().collect(Collectors.toList());
    List<GrantedAuthority> authorities = privileges.stream()
        .map(p -> new SimpleGrantedAuthority(p.getName()))
        .collect(Collectors.toList());

    Authentication authentication = new UsernamePasswordAuthenticationToken(user, null, authorities);
    SecurityContextHolder.getContext().setAuthentication(authentication);
}

Note how we’re setting the authentication authorities properly here, as would typically be done in the AuthenticationProvider.
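For illustration, a hypothetical confirmation endpoint could enable the account and then call authWithoutPassword – the token lookup and service methods below are assumptions, not part of the registration series code:

@GetMapping("/registrationConfirm")
public String confirmRegistration(@RequestParam("token") String token) {
    // hypothetical lookup of the user by their verification token
    User user = userService.findByVerificationToken(token);
    user.setEnabled(true);
    userService.save(user);

    authWithoutPassword(user);
    return "redirect:/home";
}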

5. Conclusion

We discussed different ways to auto-authenticate users after the registration process.

As always, the full source code is available over on GitHub.

An Introduction to Kong

1. Introduction

Kong is an open-source API gateway and microservice management layer.

Based on Nginx and the lua-nginx-module (specifically OpenResty), Kong’s pluggable architecture makes it flexible and powerful.

2. Key Concepts

Before we dive into code samples, let’s take a look at the key concepts in Kong:

  • API Object – wraps properties of any HTTP(s) endpoint that accomplishes a specific task or delivers some service. Configurations include HTTP methods, endpoint URIs, the upstream URL which points to our API servers and will be used for proxying requests, maximum retries, rate limits, timeouts, etc.
  • Consumer Object – wraps properties of anyone using our API endpoints. It will be used for tracking, access control and more
  • Upstream Object – describes how incoming requests will be proxied or load balanced, represented by a virtual hostname
  • Target Object – represents an instance where the service is implemented and served, identified by a hostname (or an IP address) and a port. Note that targets of an upstream can only be added or disabled; a history of target changes is maintained by the upstream
  • Plugin Object – pluggable features to enrich functionalities of our application during the request and response lifecycle. For example, API authentication and rate limiting features can be added by enabling relevant plugins. Kong provides very powerful plugins in its plugins gallery
  • Admin API – RESTful API endpoints used to manage Kong configurations, endpoints, consumers, plugins, and so on

The picture below depicts how Kong differs from a legacy architecture, which could help us understand why it introduced these concepts:


(source: https://getkong.org/)

3. Setup

The official documentation provides detailed instructions for various environments.

4. API Management

After setting up Kong locally, let’s take a bite of Kong’s powerful features by proxying our simple stock query endpoint:

@RestController
@RequestMapping("/stock")
public class QueryController {

    @GetMapping("/{code}")
    public String getStockPrice(@PathVariable String code){
        return "BTC".equalsIgnoreCase(code) ? "10000" : "0";
    }
}

4.1. Adding an API

Next, let’s add our query API into Kong.

The Admin API is accessible via http://localhost:8001, so all our API management operations will be done against this base URI:

APIObject stockAPI = new APIObject(
  "stock-api", "stock.api", "http://localhost:8080", "/");
HttpEntity<APIObject> apiEntity = new HttpEntity<>(stockAPI);
ResponseEntity<String> addAPIResp = restTemplate.postForEntity(
  "http://localhost:8001/apis", apiEntity, String.class);

assertEquals(HttpStatus.CREATED, addAPIResp.getStatusCode());

Here, we added an API with the following configuration:

{
    "name": "stock-api",
    "hosts": "stock.api",
    "upstream_url": "http://localhost:8080",
    "uris": "/"
}
  • “name” is an identifier for the API, used when manipulating its behaviour
  • “hosts” will be used to route incoming requests to given “upstream_url” by matching the “Host” header
  • Relative paths will be matched to the configured “uris”
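The APIObject class used in the snippet isn’t part of Kong itself – it’s a simple DTO mirroring the JSON payload above. A minimal sketch (the field layout is an assumption) might look like:

public class APIObject {

    private String name;
    private String hosts;
    private String upstream_url;
    private String uris;

    public APIObject(String name, String hosts, String upstreamUrl, String uris) {
        this.name = name;
        this.hosts = hosts;
        this.upstream_url = upstreamUrl;
        this.uris = uris;
    }

    // standard getters and setters
}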

In case we want to deprecate an API or the configuration is wrong, we can simply remove it:

restTemplate.delete("http://localhost:8001/apis/stock-api");

After APIs are added, they will be available for consumption through http://localhost:8000:

String apiListResp = restTemplate.getForObject(
  "http://localhost:8001/apis/", String.class);
 
assertTrue(apiListResp.contains("stock-api"));

HttpHeaders headers = new HttpHeaders();
headers.set("Host", "stock.api");
RequestEntity<String> requestEntity = new RequestEntity<>(
  headers, HttpMethod.GET, new URI("http://localhost:8000/stock/btc"));
ResponseEntity<String> stockPriceResp 
  = restTemplate.exchange(requestEntity, String.class);

assertEquals("10000", stockPriceResp.getBody());

In the code sample above, we try to query stock price via the API we just added to Kong.

By requesting http://localhost:8000/stock/btc, we get the same service as querying directly from http://localhost:8080/stock/btc.

4.2. Adding an API Consumer

Let’s now talk about security – more specifically authentication for the users accessing our API.

Let’s add a consumer to our stock query API so that we can enable the authentication feature later.

Adding a consumer for an API is just as simple as adding the API itself. The consumer’s name (or id) is the only required field among the consumer’s properties:

ConsumerObject consumer = new ConsumerObject("eugenp");
HttpEntity<ConsumerObject> addConsumerEntity = new HttpEntity<>(consumer);
ResponseEntity<String> addConsumerResp = restTemplate.postForEntity(
  "http://localhost:8001/consumers/", addConsumerEntity, String.class);
 
assertEquals(HttpStatus.CREATED, addConsumerResp.getStatusCode());

Here we added “eugenp” as a new consumer:

{
    "username": "eugenp"
}

4.3. Enabling Authentication

Here comes the most powerful feature of Kong: plugins.

Now we’re going to apply an auth plugin to our proxied stock query API:

PluginObject authPlugin = new PluginObject("key-auth");
ResponseEntity<String> enableAuthResp = restTemplate.postForEntity(
  "http://localhost:8001/apis/stock-api/plugins", 
  new HttpEntity<>(authPlugin), 
  String.class);
assertEquals(HttpStatus.CREATED, enableAuthResp.getStatusCode());

If we try to query a stock’s price through the proxy URI, the request will be rejected:

HttpHeaders headers = new HttpHeaders();
headers.set("Host", "stock.api");
RequestEntity<String> requestEntity = new RequestEntity<>(
  headers, HttpMethod.GET, new URI("http://localhost:8000/stock/btc"));
ResponseEntity<String> stockPriceResp = restTemplate
  .exchange(requestEntity, String.class);
 
assertEquals(HttpStatus.UNAUTHORIZED, stockPriceResp.getStatusCode());

Remember that Eugen is one of our API consumers, so we should allow him to use this API by adding an authentication key:

String consumerKey = "eugenp.pass";
KeyAuthObject keyAuth = new KeyAuthObject(consumerKey);
ResponseEntity<String> keyAuthResp = restTemplate.postForEntity(
  "http://localhost:8001/consumers/eugenp/key-auth", 
  new HttpEntity<>(keyAuth), 
  String.class); 
assertTrue(HttpStatus.CREATED == keyAuthResp.getStatusCode());

Then Eugen can use this API as before:

HttpHeaders headers = new HttpHeaders();
headers.set("Host", "stock.api");
headers.set("apikey", consumerKey);
RequestEntity<String> requestEntity = new RequestEntity<>(
  headers, 
  HttpMethod.GET, 
  new URI("http://localhost:8000/stock/btc"));
ResponseEntity<String> stockPriceResp = restTemplate
  .exchange(requestEntity, String.class);
 
assertEquals("10000", stockPriceResp.getBody());

5. Advanced Features

Aside from basic API proxying and management, Kong also supports API load balancing, clustering, health checks, monitoring, and more.

In this section, we’re going to take a look at how to load balance requests with Kong, and how to secure admin APIs.

5.1. Load Balancing

Kong provides two strategies for load balancing requests to backend services: a dynamic ring-balancer and a straightforward DNS-based method. For the sake of simplicity, we’ll be using the ring-balancer.

As we mentioned earlier, upstreams are used for load-balancing, and each upstream can have multiple targets.

Kong supports both weighted-round-robin and hash-based balancing algorithms. By default, the weighted-round-robin scheme is used – where requests are delivered to each target according to their weight.

First, let’s prepare the upstream:

UpstreamObject upstream = new UpstreamObject("stock.api.service");
ResponseEntity<String> addUpstreamResp = restTemplate.postForEntity(
  "http://localhost:8001/upstreams", 
  new HttpEntity<>(upstream), 
  String.class);
 
assertEquals(HttpStatus.CREATED, addUpstreamResp.getStatusCode());

Then, add two targets for the upstream, a test version with weight=10, and a release version with weight=40:

TargetObject testTarget = new TargetObject("localhost:8080", 10);
ResponseEntity<String> addTargetResp = restTemplate.postForEntity(
  "http://localhost:8001/upstreams/stock.api.service/targets",
  new HttpEntity<>(testTarget), 
  String.class);
 
assertEquals(HttpStatus.CREATED, addTargetResp.getStatusCode());

TargetObject releaseTarget = new TargetObject("localhost:9090",40);
addTargetResp = restTemplate.postForEntity(
  "http://localhost:8001/upstreams/stock.api.service/targets",
  new HttpEntity<>(releaseTarget), 
  String.class);
 
assertEquals(HttpStatus.CREATED, addTargetResp.getStatusCode());

With the configuration above, we can assume that 1/5 of the requests will go to the test version and 4/5 will go to the release version:

APIObject stockAPI = new APIObject(
  "balanced-stock-api", 
  "balanced.stock.api", 
  "http://stock.api.service", 
  "/");
HttpEntity<APIObject> apiEntity = new HttpEntity<>(stockAPI);
ResponseEntity<String> addAPIResp = restTemplate.postForEntity(
  "http://localhost:8001/apis", apiEntity, String.class);
 
assertEquals(HttpStatus.CREATED, addAPIResp.getStatusCode());

HttpHeaders headers = new HttpHeaders();
headers.set("Host", "balanced.stock.api");
for(int i = 0; i < 1000; i++) {
    RequestEntity<String> requestEntity = new RequestEntity<>(
      headers, HttpMethod.GET, new URI("http://localhost:8000/stock/btc"));
    ResponseEntity<String> stockPriceResp
     = restTemplate.exchange(requestEntity, String.class);
 
    assertEquals("10000", stockPriceResp.getBody());
}
 
int releaseCount = restTemplate.getForObject(
  "http://localhost:9090/stock/reqcount", Integer.class);
int testCount = restTemplate.getForObject(
  "http://localhost:8080/stock/reqcount", Integer.class);

assertTrue(Math.round(releaseCount * 1.0 / testCount) == 4);

Note that the weighted-round-robin scheme balances requests to backend services approximately according to the weight ratio, so only an approximation of the ratio can be verified, as reflected in the last line of the code above.

5.2. Securing the Admin API

By default, Kong only accepts admin requests from the local interface, which is a good enough restriction in most cases. But if we want to manage it via other network interfaces, we can change the admin_listen value in kong.conf, and configure firewall rules.

Or, we can make Kong serve as a proxy for its own Admin API. Say we want to manage APIs under the path “/admin-api”; we can add an API like this:

APIObject stockAPI = new APIObject(
  "admin-api", 
  "admin.api", 
  "http://localhost:8001", 
  "/admin-api");
HttpEntity<APIObject> apiEntity = new HttpEntity<>(stockAPI);
ResponseEntity<String> addAPIResp = restTemplate.postForEntity(
  "http://localhost:8001/apis", 
  apiEntity, 
  String.class);
 
assertEquals(HttpStatus.CREATED, addAPIResp.getStatusCode());

Now we can use the proxied admin API to manage APIs:

HttpHeaders headers = new HttpHeaders();
headers.set("Host", "admin.api");
APIObject baeldungAPI = new APIObject(
  "baeldung-api", 
  "baeldung.com", 
  "http://ww.baeldung.com", 
  "/");
RequestEntity<APIObject> requestEntity = new RequestEntity<>(
  baeldungAPI, 
  headers, 
  HttpMethod.POST, 
  new URI("http://localhost:8000/admin-api/apis"));
ResponseEntity<String> addAPIResp = restTemplate
  .exchange(requestEntity, String.class);

assertEquals(HttpStatus.CREATED, addAPIResp.getStatusCode());

Naturally, we want the proxied Admin API secured. This can easily be achieved by enabling an authentication plugin for it.
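For instance, a sketch of enabling the key-auth plugin for the proxied admin-api, reusing the pattern from section 4.3, could be:

PluginObject adminAuthPlugin = new PluginObject("key-auth");
ResponseEntity<String> enableAdminAuthResp = restTemplate.postForEntity(
  "http://localhost:8001/apis/admin-api/plugins",
  new HttpEntity<>(adminAuthPlugin),
  String.class);

assertEquals(HttpStatus.CREATED, enableAdminAuthResp.getStatusCode());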

6. Summary

In this article, we introduced Kong – an API gateway and microservice management platform – and focused on its core functionality of managing APIs and routing requests to upstream servers, as well as on some more advanced features such as load balancing.

Yet, there are many more solid features for us to explore, and we can develop our own plugins if we need to – we can continue exploring in the official documentation here.

As always, the full implementation can be found over on Github.

Extra Login Fields with Spring Security

1. Introduction

In this article, we’ll implement a custom authentication scenario with Spring Security by adding an extra field to the standard login form.

We’re going to focus on 2 different approaches, to show the versatility of the framework and the flexible ways in which we can use it.

Our first approach will be a simple solution which focuses on reuse of existing core Spring Security implementations.

Our second approach will be a more custom solution that may be more suitable for advanced use cases.

We’ll build on top of concepts that are discussed in our previous articles on Spring Security login.

2. Maven Setup

We’ll use Spring Boot starters to bootstrap our project and bring in all necessary dependencies.

The setup we’ll use requires a parent declaration, the web starter, and the security starter; we’ll also include Thymeleaf:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.0.0.M7</version>
    <relativePath/>
</parent>
 
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-security</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-thymeleaf</artifactId>
     </dependency>
     <dependency>
        <groupId>org.thymeleaf.extras</groupId>
        <artifactId>thymeleaf-extras-springsecurity4</artifactId>
    </dependency>
</dependencies>

The most current version of Spring Boot security starter can be found over at Maven Central.

3. Simple Project Setup

In our first approach, we’ll focus on reusing implementations that are provided by Spring Security. In particular, we’ll reuse DaoAuthenticationProvider and UsernamePasswordToken as they exist “out-of-the-box”.

The key components will include:

  • SimpleAuthenticationFilter – an extension of UsernamePasswordAuthenticationFilter
  • SimpleUserDetailsService – an implementation of UserDetailsService
  • User – an extension of the User class provided by Spring Security that declares our extra domain field (see the sketch after this list)
  • SecurityConfig – our Spring Security configuration that inserts our SimpleAuthenticationFilter into the filter chain, declares security rules and wires up dependencies
  • login.html – a login page that collects the username, password, and domain
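The User extension with the extra domain field isn’t shown in this article; a minimal sketch, assuming we simply carry the domain alongside the standard fields, might be:

public class User extends org.springframework.security.core.userdetails.User {

    private final String domain;

    public User(String username, String password,
      Collection<? extends GrantedAuthority> authorities, String domain) {
        super(username, password, authorities);
        this.domain = domain;
    }

    public String getDomain() {
        return domain;
    }
}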

3.1. Simple Authentication Filter

In our SimpleAuthenticationFilter, the domain and username fields are extracted from the request. We concatenate these values and use them to create an instance of UsernamePasswordAuthenticationToken.

The token is then passed along to the AuthenticationProvider for authentication:

public class SimpleAuthenticationFilter
  extends UsernamePasswordAuthenticationFilter {

    @Override
    public Authentication attemptAuthentication(
      HttpServletRequest request, 
      HttpServletResponse response) 
        throws AuthenticationException {

        // ...

        UsernamePasswordAuthenticationToken authRequest
          = getAuthRequest(request);
        setDetails(request, authRequest);
        
        return this.getAuthenticationManager()
          .authenticate(authRequest);
    }

    private UsernamePasswordAuthenticationToken getAuthRequest(
      HttpServletRequest request) {
 
        String username = obtainUsername(request);
        String password = obtainPassword(request);
        String domain = obtainDomain(request);

        // ...

        String usernameDomain = String.format("%s%s%s", username.trim(), 
          String.valueOf(Character.LINE_SEPARATOR), domain);
        return new UsernamePasswordAuthenticationToken(
          usernameDomain, password);
    }

    // other methods
}
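The obtainDomain method isn’t shown above; a minimal implementation, assuming the login form posts a domain parameter (as ours does), could simply read it from the request:

private String obtainDomain(HttpServletRequest request) {
    // read the extra form field; fall back to an empty string if it's absent
    String domain = request.getParameter("domain");
    return domain != null ? domain : "";
}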

3.2. Simple UserDetails Service

The UserDetailsService contract defines a single method called loadUserByUsername. Our implementation extracts the username and domain. The values are then passed to our UserRepository to get the User:

public class SimpleUserDetailsService implements UserDetailsService {

    // ...

    @Override
    public UserDetails loadUserByUsername(String username) throws UsernameNotFoundException {
        String[] usernameAndDomain = StringUtils.split(
          username, String.valueOf(Character.LINE_SEPARATOR));
        if (usernameAndDomain == null || usernameAndDomain.length != 2) {
            throw new UsernameNotFoundException("Username and domain must be provided");
        }
        User user = userRepository.findUser(usernameAndDomain[0], usernameAndDomain[1]);
        if (user == null) {
            throw new UsernameNotFoundException(
              String.format("Username not found for domain, username=%s, domain=%s", 
                usernameAndDomain[0], usernameAndDomain[1]));
        }
        return user;
    }
}

3.3. Spring Security Configuration

Our setup is different from a standard Spring Security configuration because we insert our SimpleAuthenticationFilter into the filter chain before the default UsernamePasswordAuthenticationFilter, with a call to addFilterBefore:

@Override
protected void configure(HttpSecurity http) throws Exception {

    http
      .addFilterBefore(authenticationFilter(), 
        UsernamePasswordAuthenticationFilter.class)
      .authorizeRequests()
        .antMatchers("/css/**", "/index").permitAll()
        .antMatchers("/user/**").authenticated()
      .and()
      .formLogin().loginPage("/login")
      .and()
      .logout()
      .logoutUrl("/logout");
}

We’re able to use the provided DaoAuthenticationProvider because we configure it with our SimpleUserDetailsService. Recall that our SimpleUserDetailsService knows how to parse out our username and domain fields and return the appropriate User to use when authenticating:

public AuthenticationProvider authProvider() {
    DaoAuthenticationProvider provider = new DaoAuthenticationProvider();
    provider.setUserDetailsService(userDetailsService);
    provider.setPasswordEncoder(passwordEncoder());
    return provider;
}

Since we’re using a SimpleAuthenticationFilter, we configure our own AuthenticationFailureHandler to ensure failed login attempts are appropriately handled:

public SimpleAuthenticationFilter authenticationFilter() throws Exception {
    SimpleAuthenticationFilter filter = new SimpleAuthenticationFilter();
    filter.setAuthenticationManager(authenticationManagerBean());
    filter.setAuthenticationFailureHandler(failureHandler());
    return filter;
}
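One possible failureHandler() bean, assuming we simply redirect back to the login page with an error parameter (matching the param.error check in the login page below), is:

public SimpleUrlAuthenticationFailureHandler failureHandler() {
    // send failed logins back to the login page with ?error=true
    return new SimpleUrlAuthenticationFailureHandler("/login?error=true");
}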

3.4. Login Page

The login page we use collects our additional domain field that gets extracted by our SimpleAuthenticationFilter:

<form class="form-signin" th:action="@{/login}" method="post">
 <h2 class="form-signin-heading">Please sign in</h2>
 <p>Example: user / domain / password</p>
 <p th:if="${param.error}" class="error">Invalid user, password, or domain</p>
 <p>
   <label for="username" class="sr-only">Username</label>
   <input type="text" id="username" name="username" class="form-control" 
     placeholder="Username" required autofocus/>
 </p>
 <p>
   <label for="domain" class="sr-only">Domain</label>
   <input type="text" id="domain" name="domain" class="form-control" 
     placeholder="Domain" required autofocus/>
 </p>
 <p>
   <label for="password" class="sr-only">Password</label>
   <input type="password" id="password" name="password" class="form-control" 
     placeholder="Password" required autofocus/>
 </p>
 <button class="btn btn-lg btn-primary btn-block" type="submit">Sign in</button><br/>
 <p><a href="/index" th:href="@{/index}">Back to home page</a></p>
</form>

When we run the application and access the context at http://localhost:8081, we see a link to access a secured page. Clicking the link will cause the login page to display. As expected, we see the additional domain field:

Spring Security Form Login with Extra Fields

3.5. Summary

In our first example, we were able to reuse DaoAuthenticationProvider and UsernamePasswordAuthenticationToken by “faking out” the username field.

As a result, we were able to add support for an extra login field with a minimal amount of configuration and additional code.

4. Custom Project Setup

Our second approach will be very similar to the first but may be more appropriate for non-trivial use cases.

The key components of our second approach will include:

  • CustomAuthenticationFilter – an extension of UsernamePasswordAuthenticationFilter
  • CustomUserDetailsService – a custom interface declaring a loadUserByUsernameAndDomain method
  • CustomUserDetailsServiceImpl – an implementation of our CustomUserDetailsService
  • CustomUserDetailsAuthenticationProvider – an extension of AbstractUserDetailsAuthenticationProvider
  • CustomAuthenticationToken – an extension of UsernamePasswordAuthenticationToken
  • User – an extension of the User class provided by Spring Security that declares our extra domain field
  • SecurityConfig – our Spring Security configuration that inserts our CustomAuthenticationFilter into the filter chain, declares security rules and wires up dependencies
  • login.html – the login page that collects the username, password, and domain

4.1. Custom Authentication Filter

In our CustomAuthenticationFilter, we extract the username, password, and domain fields from the request. These values are used to create an instance of our CustomAuthenticationToken which is passed to the AuthenticationProvider for authentication:

public class CustomAuthenticationFilter 
  extends UsernamePasswordAuthenticationFilter {

    public static final String SPRING_SECURITY_FORM_DOMAIN_KEY = "domain";

    @Override
    public Authentication attemptAuthentication(
        HttpServletRequest request,
        HttpServletResponse response) 
          throws AuthenticationException {

        // ...

        CustomAuthenticationToken authRequest = getAuthRequest(request);
        setDetails(request, authRequest);
        return this.getAuthenticationManager().authenticate(authRequest);
    }

    private CustomAuthenticationToken getAuthRequest(HttpServletRequest request) {
        String username = obtainUsername(request);
        String password = obtainPassword(request);
        String domain = obtainDomain(request);

        // ...

        return new CustomAuthenticationToken(username, password, domain);
    }
}
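Since CustomAuthenticationToken only needs to carry the extra field, here’s a minimal sketch of it, consistent with the constructor used above:

public class CustomAuthenticationToken extends UsernamePasswordAuthenticationToken {

    private final String domain;

    public CustomAuthenticationToken(Object principal, Object credentials, String domain) {
        super(principal, credentials);
        this.domain = domain;
    }

    public String getDomain() {
        return domain;
    }
}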

4.2. Custom UserDetails Service

Our CustomUserDetailsService contract defines a single method called loadUserByUsernameAndDomain. 

The CustomUserDetailsServiceImpl class we create simply implements the contract and delegates to our CustomUserRepository to get the User:

 public UserDetails loadUserByUsernameAndDomain(String username, String domain) 
     throws UsernameNotFoundException {
     if (StringUtils.isAnyBlank(username, domain)) {
         throw new UsernameNotFoundException("Username and domain must be provided");
     }
     User user = userRepository.findUser(username, domain);
     if (user == null) {
         throw new UsernameNotFoundException(
           String.format("Username not found for domain, username=%s, domain=%s", 
             username, domain));
     }
     return user;
 }

4.3. Custom UserDetailsAuthenticationProvider

Our CustomUserDetailsAuthenticationProvider extends AbstractUserDetailsAuthenticationProvider and delegates to our CustomUserDetailsService to retrieve the User. The most important feature of this class is the implementation of the retrieveUser method.

Note that we must cast the authentication token to our CustomAuthenticationToken for access to our custom field:

@Override
protected UserDetails retrieveUser(String username, 
  UsernamePasswordAuthenticationToken authentication) 
    throws AuthenticationException {
 
    CustomAuthenticationToken auth = (CustomAuthenticationToken) authentication;
    UserDetails loadedUser;

    try {
        loadedUser = this.userDetailsService
          .loadUserByUsernameAndDomain(auth.getPrincipal()
            .toString(), auth.getDomain());
    } catch (UsernameNotFoundException notFound) {
 
        if (authentication.getCredentials() != null) {
            String presentedPassword = authentication.getCredentials()
              .toString();
            passwordEncoder.matches(presentedPassword, userNotFoundEncodedPassword);
        }
        throw notFound;
    } catch (Exception repositoryProblem) {
 
        throw new InternalAuthenticationServiceException(
          repositoryProblem.getMessage(), repositoryProblem);
    }

    // ...

    return loadedUser;
}

4.4. Summary

Our second approach is nearly identical to the simple approach we presented first. By implementing our own AuthenticationProvider and CustomAuthenticationToken, we avoided needing to adapt our username field with custom parsing logic.

5. Conclusion

In this article, we’ve implemented a form login in Spring Security that made use of an extra login field. We did this in 2 different ways:

  • In our simple approach, we minimized the amount of code we needed to write. We were able to reuse DaoAuthenticationProvider and UsernamePasswordAuthenticationToken by adapting the username with custom parsing logic
  • In our more customized approach, we provided custom field support by extending AbstractUserDetailsAuthenticationProvider and providing our own CustomUserDetailsService with a CustomAuthenticationToken

As always, all source code can be found over on GitHub.

A Guide to the finalize Method in Java

1. Overview

In this tutorial, we’ll focus on a core aspect of the Java language – the finalize method provided by the root Object class.

Simply put, this is called before the garbage collection for a particular object.

2. Using Finalizers

The finalize() method is called the finalizer.

Finalizers get invoked when the JVM figures out that a particular instance should be garbage collected. Such a finalizer may perform any operations, including bringing the object back to life.

The main purpose of a finalizer is, however, to release resources used by objects before they’re removed from the memory. A finalizer can work as the primary mechanism for clean-up operations, or as a safety net when other methods fail.

To understand how a finalizer works, let’s take a look at a class declaration:

public class Finalizable {
    private BufferedReader reader;

    public Finalizable() {
        InputStream input = this.getClass()
          .getClassLoader()
          .getResourceAsStream("file.txt");
        this.reader = new BufferedReader(new InputStreamReader(input));
    }

    public String readFirstLine() throws IOException {
        String firstLine = reader.readLine();
        return firstLine;
    }

    // other class members
}

The class Finalizable has a field reader, which references a closeable resource. When an object is created from this class, it constructs a new BufferedReader instance reading from a file in the classpath.

Such an instance is used in the readFirstLine method to extract the first line in the given file. Notice that the reader isn’t closed in the given code.

We can do that using a finalizer:

@Override
public void finalize() {
    try {
        reader.close();
        System.out.println("Closed BufferedReader in the finalizer");
    } catch (IOException e) {
        // ...
    }
}

It’s easy to see that a finalizer is declared just like any normal instance method.

In reality, the time at which the garbage collector calls finalizers is dependent on the JVM’s implementation and the system’s conditions, which are out of our control.

To make garbage collection happen on the spot, we’ll take advantage of the System.gc method. In real-world systems, we should never invoke that explicitly, for a number of reasons:

  1. It’s costly
  2. It doesn’t trigger the garbage collection immediately – it’s just a hint for the JVM to start GC
  3. JVM knows better when GC needs to be called

If we need to force GC, we can use jconsole for that.

The following is a test case demonstrating the operation of a finalizer:

@Test
public void whenGC_thenFinalizerExecuted() throws IOException {
    String firstLine = new Finalizable().readFirstLine();
    assertEquals("baeldung.com", firstLine);
    System.gc();
}

In the first statement, a Finalizable object is created, then its readFirstLine method is called. This object isn’t assigned to any variable, hence it’s eligible for garbage collection when the System.gc method is invoked.

The assertion in the test verifies the content of the input file and is used just to prove that our custom class works as expected.

When we run the provided test, a message will be printed on the console about the buffered reader being closed in the finalizer. This implies the finalize method was called and it has cleaned up the resource.

Up to this point, finalizers look like a great way for pre-destroy operations. However, that’s not quite true.

In the next section, we’ll see why using them should be avoided.

3. Avoiding Finalizers

Let’s have a look at several problems we’ll be facing when using finalizers to perform critical actions.

The first noticeable issue associated with finalizers is the lack of promptness. We cannot know when a finalizer is executed since garbage collection may occur anytime.

By itself, this isn’t a problem because the most important thing is that the finalizer is still invoked, sooner or later. However, system resources are limited. Thus, we may run out of those resources before they get a chance to be cleaned up, potentially resulting in system crashes.

Finalizers also have an impact on the program’s portability. Since the garbage collection algorithm is JVM implementation dependent, a program may run very well on one system while behaving differently at runtime on another.

Another significant issue coming with finalizers is the performance cost. Specifically, JVM must perform much more operations when constructing and destroying objects containing a non-empty finalizer.

The details are implementation-specific, but the general ideas are the same across all JVMs: additional steps must be taken to ensure finalizers are executed before the objects are discarded. Those steps can make the duration of object creation and destruction increase by hundreds or even thousands of times.

The last problem we’ll be talking about is the lack of exception handling during finalization. If a finalizer throws an exception, the finalization process is canceled, and the exception is ignored, leaving the object in a corrupted state without any notification.

4. No-Finalizer Example

Let’s explore a solution providing the same functionality but without the use of finalize() method. Notice that the example below isn’t the only way to replace finalizers.

Instead, it’s used to demonstrate an important point: there are always options that help us to avoid finalizers.

Here’s the declaration of our new class:

public class CloseableResource implements AutoCloseable {
    private BufferedReader reader;

    public CloseableResource() {
        InputStream input = this.getClass()
          .getClassLoader()
          .getResourceAsStream("file.txt");
        reader = new BufferedReader(new InputStreamReader(input));
    }

    public String readFirstLine() throws IOException {
        String firstLine = reader.readLine();
        return firstLine;
    }

    @Override
    public void close() {
        try {
            reader.close();
            System.out.println("Closed BufferedReader in the close method");
        } catch (IOException e) {
            // handle exception
        }
    }
}

It’s not hard to see that the only difference between the new CloseableResource class and our previous Finalizable class is the implementation of the AutoCloseable interface instead of a finalizer definition.

Notice that the body of the close method of CloseableResource is almost the same as the body of the finalizer in class Finalizable.

The following is a test method, which reads an input file and releases the resource after finishing its job:

@Test
public void whenTryWResourcesExits_thenResourceClosed() throws IOException {
    try (CloseableResource resource = new CloseableResource()) {
        String firstLine = resource.readFirstLine();
        assertEquals("baeldung.com", firstLine);
    }
}

In the above test, a CloseableResource instance is created in the try block of a try-with-resources statement, hence that resource is automatically closed when the try-with-resources block completes execution.

Running the given test method, we’ll see a message printed out from the close method of the CloseableResource class.

5. Conclusion

In this tutorial, we focused on a core concept in Java – the finalize method. This looks useful on paper but can have ugly side effects at runtime. And, more importantly, there’s always an alternative solution to using a finalizer.

One critical point to notice is that finalize has been deprecated starting with Java 9 – and will eventually be removed.

As always, the source code for this tutorial can be found over on GitHub.

Java Weekly, Issue 213

Here we go…

1. Spring and Java

>> Fumigating the IDEA Ultimate code using dataflow analysis [blog.jetbrains.com]

A deep dive into how code inspection works in IntelliJ.

>> Immutable Versus Unmodifiable in JDK 10 [marxsoftware.blogspot.com]

An important concept to understand – an unmodifiable collection isn’t necessarily immutable.

Simply put, if the contained elements are mutable, the entire collection is clearly mutable, even though the collection itself might be unmodifiable.

>> Refining redirect semantics in the Servlet API [blog.frankel.ch]

Unfortunately, HttpServletResponse.sendRedirect() returns HTTP 302 instead of 303, so we may need to handle that manually if we need to implement the Post/Redirect/Get pattern.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

>> A Hypothetical Consulting Gig [daedtech.com]

This is what an average consulting gig might look like 🙂

Also worth reading:

3. Comics

And my favorite Dilberts of the week:

>> Absurd Absolute [dilbert.com]

>> Data Encapsulation [dilbert.com]

>> Unforeseen Problems [dilbert.com]

5. Pick of the Week

>> The Brutal Lifecycle of JavaScript Frameworks [stackoverflow.blog]

How to Manually Authenticate User with Spring Security

1. Overview

In this quick article, we’ll focus on how to programmatically set an authenticated user in Spring Security and Spring MVC.

2. Spring Security

Simply put, Spring Security holds the principal information of each authenticated user in a ThreadLocal – represented as an Authentication object.

In order to construct and set this Authentication object – we need to use the same approach Spring Security typically uses to build the object on a standard authentication.

So, let’s manually trigger an authentication and then set the resulting Authentication object into the current SecurityContext used by the framework to hold the currently logged-in user:

UsernamePasswordAuthenticationToken authReq
 = new UsernamePasswordAuthenticationToken(user, pass);
Authentication auth = authManager.authenticate(authReq);
SecurityContext sc = SecurityContextHolder.getContext();
sc.setAuthentication(auth);

After setting the Authentication in the context, we’ll now be able to check if the current user is authenticated – using sc.getAuthentication().isAuthenticated().

3. Spring MVC

By default, Spring Security adds an additional filter in the Spring Security filter chain – which is capable of persisting the Security Context (SecurityContextPersistenceFilter class).

In turn, it delegates the persistence of the Security Context to an instance of SecurityContextRepository, defaulting to the HttpSessionSecurityContextRepository class.

So, in order to set the authentication on the request and hence, make it available for all subsequent requests from the client, we need to manually set the SecurityContext containing the Authentication in the HTTP session:

public void login(HttpServletRequest req, String user, String pass) { 
    UsernamePasswordAuthenticationToken authReq
      = new UsernamePasswordAuthenticationToken(user, pass);
    Authentication auth = authManager.authenticate(authReq);
    
    SecurityContext sc = SecurityContextHolder.getContext();
    sc.setAuthentication(auth);
    HttpSession session = req.getSession(true);
    session.setAttribute(SPRING_SECURITY_CONTEXT_KEY, sc);
}

SPRING_SECURITY_CONTEXT_KEY is a statically imported HttpSessionSecurityContextRepository.SPRING_SECURITY_CONTEXT_KEY.

It should be noted that we can’t directly use the HttpSessionSecurityContextRepository – because it works in conjunction with the SecurityContextPersistenceFilter.

That is because the filter uses the repository to load and store the security context before and after the execution of the rest of the filters defined in the chain, but it uses a custom wrapper over the response which is passed to the chain.

So, in this case, we should know the class type of the wrapper used and pass it to the appropriate save method in the repository.

4. Conclusion

In this quick tutorial, we went over how to manually set the user Authentication in the Spring Security context and how it can be made available for Spring MVC purposes, focusing on the code samples that illustrate the simplest way to achieve it.

As always, code samples can be found over on GitHub.

Introduction to Spliterator in Java

1. Overview

The Spliterator interface, introduced in Java 8, can be used for traversing and partitioning sequences. It’s a base utility for Streams, especially parallel ones.

In this article, we’ll cover its usage, characteristics, methods and how to create our own custom implementations.

2. Spliterator API

2.1. tryAdvance

This is the main method used for stepping through a sequence. The method takes a Consumer that’s used to consume elements of the Spliterator one by one sequentially and returns false if there are no elements left to be traversed.

Here, we’ll take a look at how to use it to traverse and partition elements.

First, let’s assume that we’ve got an ArrayList with 35,000 articles and that the Article class is defined as:

public class Article {
    private List<Author> listOfAuthors;
    private int id;
    private String name;
    
    // standard constructors/getters/setters
}

Now, let’s implement a task that processes the list of articles and adds a suffix of “– published by Baeldung” to each article name:

public String call() {
    int current = 0;
    while (spliterator.tryAdvance(a -> a.setName(a.getName()
      .concat("- published by Baeldung")))) {
        current++;
    }
    
    return Thread.currentThread().getName() + ":" + current;
}

Notice that this task outputs the number of articles processed when it finishes the execution.

Another key point is that we used the tryAdvance() method to process the next element.
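For context, here’s a minimal sketch of the Task class wrapping the call() method above – only call() is shown in the article, so the field and constructor here are assumptions:

public class Task implements Callable<String> {

    private final Spliterator<Article> spliterator;

    public Task(Spliterator<Article> spliterator) {
        this.spliterator = spliterator;
    }

    // call() as shown above
}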

2.2. trySplit

Next, let’s split Spliterators (hence the name) and process partitions independently.

The trySplit method tries to split the Spliterator into two parts. Then the caller processes one part, and the returned instance processes the other, allowing the two to be processed in parallel.

Let’s generate our list first:

public static List<Article> generateElements() {
    return Stream.generate(() -> new Article("Java"))
      .limit(35000)
      .collect(Collectors.toList());
}

Next, we obtain our Spliterator instance using the spliterator() method. Then we apply our trySplit() method:

@Test
public void givenSpliterator_whenAppliedToAListOfArticle_thenSplittedInHalf() {
    assertThat(new Task(split1).call())
      .containsSequence(Executor.generateElements().size() / 2 + "");
    assertThat(new Task(split2).call())
      .containsSequence(Executor.generateElements().size() / 2 + "");
}

The splitting process worked as intended and divided the records equally.
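For reference, split1 and split2 used in the test could have been obtained along these lines (a sketch, assuming the list produced by generateElements()):

Spliterator<Article> split1 = Executor.generateElements().spliterator();
Spliterator<Article> split2 = split1.trySplit();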

2.3. estimateSize

The estimateSize method gives us an estimated number of elements:

LOG.info("Size: " + split1.estimateSize());

This will output:

Size: 17500

2.4. hasCharacteristics 

This API checks if the given characteristics match the properties of the Spliterator. If we invoke the related characteristics() method, the output will be an int representation of them:

LOG.info("Characteristics: " + split1.characteristics());
Characteristics: 16464

3. Spliterator Characteristics

A Spliterator has eight different characteristics that describe its behavior. Those can be used as hints by external tools:

  • SIZED – if it’s capable of returning an exact number of elements with the estimateSize() method
  • SORTED – if it’s iterating through a sorted source
  • SUBSIZED – if we split the instance using a trySplit() method and obtain Spliterators that are SIZED as well
  • CONCURRENT – if source can be safely modified concurrently
  • DISTINCT – if for each pair of encountered elements x, y, !x.equals(y)
  • IMMUTABLE – if elements held by source can’t be structurally modified
  • NONNULL – if the source guarantees that encountered elements won’t be null
  • ORDERED – if iterating over an ordered sequence

4. A Custom Spliterator

4.1. When to Customize

First, let’s assume the following scenario:

We’ve got an Article class with a list of authors, and an article can have more than one author. Furthermore, we consider an author related to the article if the author’s relatedArticleId matches the article id.

Our Author class will look like this:

public class Author {
    private String name;
    private int relatedArticleId;

    // standard getters, setters & constructors
}

Next, we’ll implement a class to count authors while traversing a stream of authors. The class will then be used to perform a reduction on the stream.

Let’s have a look at the class implementation:

public class RelatedAuthorCounter {
    private int counter;
    private boolean isRelated;
 
    // standard constructors/getters
 
    public RelatedAuthorCounter accumulate(Author author) {
        if (author.getRelatedArticleId() == 0) {
            return isRelated ? this : new RelatedAuthorCounter( counter, true);
        } else {
            return isRelated ? new RelatedAuthorCounter(counter + 1, false) : this;
        }
    }

    public RelatedAuthorCounter combine(RelatedAuthorCounter RelatedAuthorCounter) {
        return new RelatedAuthorCounter(
          counter + RelatedAuthorCounter.counter, 
          RelatedAuthorCounter.isRelated);
    }
}

Each method in the above class performs a specific operation to count while traversing.

First, the accumulate() method traverses the authors one by one in an iterative way, then combine() sums two counters using their values. Finally, getCounter() returns the counter.

Now, let’s test what we’ve done so far. First, we’ll convert our article’s list of authors to a stream of authors:

Stream<Author> stream = article.getListOfAuthors().stream();

And implement a countAuthors() method to perform the reduction on the stream using RelatedAuthorCounter:

private int countAuthors(Stream<Author> stream) {
    RelatedAuthorCounter wordCounter = stream.reduce(
      new RelatedAuthorCounter(0, true), 
      RelatedAuthorCounter::accumulate, 
      RelatedAuthorCounter::combine);
    return wordCounter.getCounter();
}

If we use a sequential stream, the output will be “count = 9” as expected; however, the problem arises when we try to parallelize the operation.

Let’s take a look at the following test case:

@Test
void 
  givenAStreamOfAuthors_whenProcessedInParallel_countProducesWrongOutput() {
    assertThat(Executor.countAuthors(stream.parallel())).isGreaterThan(9);
}

Apparently, something has gone wrong – splitting the stream at a random position caused an author to be counted twice.

4.2. How to Customize

To solve this, we need to implement a Spliterator that splits the list of authors only at positions where the related id and the article id match. Here’s the implementation of our custom Spliterator:

public class RelatedAuthorSpliterator implements Spliterator<Author> {
    private final List<Author> list;
    AtomicInteger current = new AtomicInteger();
    // standard constructor/getters

    @Override
    public boolean tryAdvance(Consumer<? super Author> action) {
        action.accept(list.get(current.getAndIncrement()));
        return current.get() < list.size();
    }

    @Override
    public Spliterator<Author> trySplit() {
        int currentSize = list.size() - current.get();
        if (currentSize < 10) {
            return null;
        }
        for (int splitPos = currentSize / 2 + current.intValue();
          splitPos < list.size(); splitPos++) {
            if (list.get(splitPos).getRelatedArticleId() == 0) {
                Spliterator<Author> spliterator
                  = new RelatedAuthorSpliterator(
                  list.subList(current.get(), splitPos));
                current.set(splitPos);
                return spliterator;
            }
        }
        return null;
   }

   @Override
   public long estimateSize() {
       return list.size() - current.get();
   }
 
   @Override
   public int characteristics() {
       return CONCURRENT;
   }
}

Now, applying the countAuthors() method will give the correct output. The following code demonstrates that:

@Test
public void
  givenAStreamOfAuthors_whenProcessedInParallel_countProducesRightOutput() {
    Stream<Author> stream2 = StreamSupport.stream(spliterator, true);
 
    assertThat(Executor.countAuthors(stream2.parallel())).isEqualTo(9);
}

The custom Spliterator is created from a list of authors and traverses it by keeping track of the current position.

Let’s discuss the implementation of each method in more detail:

  • tryAdvance passes authors to the Consumer at the current index position and increments its position
  • trySplit defines the splitting mechanism; in our case, a new RelatedAuthorSpliterator is created when the ids match, and the split divides the list into two parts
  • estimateSize – is the difference between the list size and the position of the currently iterated author
  • characteristics – returns the Spliterator’s characteristics; in our case we return CONCURRENT, which indicates that the source of this Spliterator may be safely modified by other threads (we could also include SIZED, since the value returned by estimateSize() is exact)

5. Support for Primitive Values

The Spliterator API supports primitive values including double, int and long.

The only difference between using a generic and a primitive dedicated Spliterator is the given Consumer and the type of the Spliterator.

For example, when we need it for an int value, we need to pass an IntConsumer (see the short sketch after the list below). Furthermore, here’s a list of the primitive dedicated Spliterators:

  • OfPrimitive<T, T_CONS, T_SPLITR extends Spliterator.OfPrimitive<T, T_CONS, T_SPLITR>>: parent interface for other primitives
  • OfInt: A Spliterator specialized for int
  • OfDouble: A Spliterator dedicated for double
  • OfLong: A Spliterator dedicated for long
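As a short sketch, here’s how we might step through a Spliterator.OfInt with an IntConsumer (the range is just an example):

Spliterator.OfInt ofInt = IntStream.rangeClosed(1, 5).spliterator();
IntConsumer printer = value -> System.out.println(value);
while (ofInt.tryAdvance(printer)) {
    // the consumer prints 1 through 5, one element per call
}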

6. Conclusion

In this article, we covered Java 8 Spliterator usage, methods, characteristics, splitting process, primitive support and how to customize it.

As always, the full implementation of this article can be found over on Github.


Geospatial Support in ElasticSearch

1. Introduction

Elasticsearch is best known for its full-text search capabilities, but it also features full geospatial support.

We can find more about setting up Elasticsearch and getting started in this previous article.

Let’s take a look at how we can save geo data in Elasticsearch and how we can search that data using geo queries.

2. Geo Data Type

To enable geo-queries, we need to create the mapping of the index manually and explicitly set the field mapping.

Dynamic mapping won’t work while setting mapping for geo types.

Elasticsearch offers two ways to represent geodata:

  1. Latitude-longitude pairs using geo-point field type
  2. Complex shape defined in GeoJSON using geo-shape field type

Let’s take a more in-depth look at each of the above categories:

2.1. Geo Point Data Type

The geo-point field type accepts latitude-longitude pairs that can be used to:

  • Find points within a certain distance of a central point
  • Find points within a box or a polygon
  • Aggregate documents geographically or by distance from the central point
  • Sort documents by distance

Below is a sample mapping for a field to save geo-point data:

PUT /index_name
{
    "mappings": {
        "TYPE_NAME": {
            "properties": {
                "location": { 
                    "type": "geo_point" 
                } 
            } 
        } 
    } 
}

As we can see from the above example, the type for the location field is geo_point. Thus, we can now provide a latitude-longitude pair in the location field.

2.2. Geo Shape Data Type

Unlike geo-point, geo-shape provides the functionality to save and search complex shapes like polygons and rectangles. The geo-shape data type must be used when we want to search documents which contain shapes other than geo points.

Let’s take a look at the mapping for the geo-shape data type:

PUT /index_name
{
    "mappings": {
        "TYPE_NAME": {
            "properties": {
                "location": {
                    "type": "geo_shape",
                    "tree": "quadtree",
                    "precision": "1m"
                }
            }
        }
    }
}

The above mapping will index the location field with a quadtree implementation and a precision of one meter.

Elasticsearch breaks down the provided geo shape into a series of geohashes consisting of small grid-like squares called rasters.

Depending on our requirements, we can control the indexing of geo-shape fields. For example, when we’re searching documents for navigation, a precision of up to one meter becomes critical, since lower precision may lead to an incorrect path.

Whereas if we’re looking for some sightseeing places, a precision of up to 10-50 meters can be acceptable. 

One thing that we need to keep in mind while indexing geo-shape data is that we’re always trading performance for accuracy. With higher precision, Elasticsearch generates more terms – which leads to increased memory usage. Hence, we need to be very cautious when selecting the mapping for a geo shape.

We can find more mapping options for the geo-shape data type on the official ES site.

3. Different Ways to Save Geo Point Data

3.1. Latitude Longitude Object

PUT index_name/index_type/1
{
    "location": { 
        "lat": 23.02,
        "lon": 72.57
    }
}

Here, geo-point location is saved as an object with latitude and longitude as keys.

3.2. Latitude Longitude Pair

{
    "location": "23.02,72.57"
}

Here, the location is expressed as a latitude-longitude pair in plain string format. Please note the sequence: in the string format, latitude comes before longitude.

3.3. Geo Hash

{
    "location": "tsj4bys"
}

We can also provide geo-point data in the form of a geohash, as shown in the example above. We can use an online tool to convert latitude-longitude to a geohash.

3.4. Longitude Latitude Array

{
    "location": [72.57, 23.02]
}

The sequence of latitude and longitude is reversed when they are supplied as an array. Initially, the latitude-longitude order was used in both the string and array formats, but the array order was later reversed to match the format used by GeoJSON.

4. Different Ways to Save Geo Shape Data

4.1. Point

POST /index/type
{
    "location" : {
        "type" : "point",
        "coordinates" : [72.57, 23.02]
    }
}

Here, the geo-shape type that we’re trying to insert is a point. Taking a look at the location field, we have a nested object consisting of the fields type and coordinates. These meta-fields help Elasticsearch identify the geo shape and its actual data.

4.2. LineString

POST /index/type
{
    "location" : {
        "type" : "linestring",
        "coordinates" : [[77.57, 23.02], [77.59, 23.05]]
    }
}

Here, we’re inserting a linestring geo shape. The coordinates for a linestring consist of two points, i.e. the start and end points. The linestring geo shape is very helpful for navigation use cases.

4.3. Polygon

POST /index/type
{
    "location" : {
        "type" : "polygon",
        "coordinates" : [
            [ [10.0, 0.0], [11.0, 0.0], [11.0, 1.0], [10.0, 1.0], [10.0, 0.0] ]
        ]
    }
}

Here, we're inserting a polygon geo shape. Please take a look at the coordinates in the above example: the first and last coordinates in a polygon should always match, i.e., a closed polygon.

Elasticsearch supports other GeoJSON structures as well. Here's a complete list of the other supported formats:

  • MultiPoint
  • MultiLineString
  • MultiPolygon
  • GeometryCollection
  • Envelope
  • Circle

We can find examples of the above-supported formats on the official ES site.

For all structures, the inner type and coordinates are mandatory fields. Also, sorting and retrieving geo shape fields are currently not possible in Elasticsearch due to their complex structure. Thus, the only way to retrieve geo fields is from the source field.

5. ElasticSearch Geo Query

Now that we know how to insert documents containing geo shapes, let's dive into fetching those records using geo shape queries. But before we start using geo queries, we'll need the following Maven dependencies to support the Java API for geo queries:

<dependency>
    <groupId>org.locationtech.spatial4j</groupId>
    <artifactId>spatial4j</artifactId>
    <version>0.7</version> 
</dependency>
<dependency>
    <groupId>com.vividsolutions</groupId>
    <artifactId>jts</artifactId>
    <version>1.13</version>
    <exclusions>
        <exclusion>
            <groupId>xerces</groupId>
            <artifactId>xercesImpl</artifactId>
        </exclusion>
    </exclusions>
</dependency>

We can search for the above dependencies in the Maven Central repository as well.

Elasticsearch supports different types of geo queries, as follows:

5.1. Geo Shape Query

The geo_shape query requires the geo_shape mapping.

Similar to the geo_shape data type, the geo_shape query uses a GeoJSON structure to query documents.

Below is a sample query to fetch all documents that fall within the given top-left and bottom-right coordinates:

{
    "query":{
        "bool": {
            "must": {
                "match_all": {}
            },
            "filter": {
                "geo_shape": {
                    "region": {
                        "shape": {
                            "type": "envelope",
                            "coordinates" : [[75.00, 25.0], [80.1, 30.2]]
                        },
                        "relation": "within"
                    }
                }
            }
        }
    }
}

Here, relation determines the spatial relation operator used at search time.

Below is the list of supported operators:

  • INTERSECTS – (default) returns all documents whose geo_shape field intersects the query geometry
  • DISJOINT – retrieves all documents whose geo_shape field has nothing in common with the query geometry
  • WITHIN – gets all documents whose geo_shape field is within the query geometry
  • CONTAINS – returns all documents whose geo_shape field contains the query geometry

Similarly, we can query using different GeoJSON shapes.

The Java code for the above query is as below:

QueryBuilders
  .geoShapeQuery(
    "region",
    ShapeBuilder.newEnvelope().topLeft(75.00, 25.0).bottomRight(80.1, 30.2))
  .relation(ShapeRelation.WITHIN);

5.2. Geo Bounding Box Query

The geo bounding box query is used to fetch all documents whose point location falls within the specified bounding box. Below is a sample bounding box query:

{
    "query": {
        "bool" : {
            "must" : {
                "match_all" : {}
            },
            "filter" : {
                "geo_bounding_box" : {
                    "location" : {
                        "bottom_left" : [28.3, 30.5],
                        "top_right" : [31.8, 32.12]
                    }
                }
            }
        }
    }
}

The Java code for the above bounding box query is as below:

QueryBuilders
  .geoBoundingBoxQuery("location").bottomLeft(28.3, 30.5).topRight(31.8, 32.12);

The geo bounding box query supports formats similar to those of the geo_point data type. Sample queries for the supported formats can be found on the official site.

5.3. Geo Distance Query

The geo distance query is used to filter all documents that fall within the specified range of the point.

Here’s a sample geo_distance query:

{
    "query": {
        "bool" : {
            "must" : {
                "match_all" : {}
            },
            "filter" : {
                "geo_distance" : {
                    "distance" : "10miles",
                    "location" : [31.131,29.976]
                }
            }
        }
    }
}

And here's the Java code for the above query:

QueryBuilders
  .geoDistanceQuery("location")
  .point(29.976, 31.131)
  .distance(10, DistanceUnit.MILES);

Similar to geo_point, geo distance query also supports multiple formats for passing location coordinates. More details on supported formats can be found at the official site.

5.4. Geo Polygon Query

The geo polygon query filters all records that have points falling within the given polygon.

Let’s have a quick look at a sample query:

{
    "query": {
        "bool" : {
            "must" : {
                "match_all" : {}
            },
            "filter" : {
                "geo_polygon" : {
                    "location" : {
                        "points" : [
                        {"lat" : 22.733, "lon" : 68.859},
                        {"lat" : 24.733, "lon" : 68.859},
                        {"lat" : 23, "lon" : 70.859}
                        ]
                    }
                }
            }
        }
    }
}

And here's the Java code for this query:

QueryBuilders
  .geoPolygonQuery("location")
  .addPoint(22.733, 68.859)
  .addPoint(24.733, 68.859)
  .addPoint(23, 70.859);

Geo Polygon Query also supports formats mentioned below:

  • lat-long as an array: [lon, lat]
  • lat-long as a string: “lat, lon”
  • geo hash

The geo_point data type is mandatory in order to use this query.

6. Conclusion

In this article, we discussed the different mapping options for indexing geo data, i.e., geo_point and geo_shape.

We also went through different ways to store geo-data and finally, we observed geo-queries and Java API to filter results using geo queries.

As always, the code is available in this GitHub project.

Introduction to Lettuce – the Java Redis Client


1. Overview

This article is an introduction to Lettuce, a Redis Java client.

Redis is an in-memory key-value store that can be used as a database, cache or message broker. Data is added, queried, modified, and deleted with commands that operate on keys in Redis’ in-memory data structure.

Lettuce supports both synchronous and asynchronous communication and use of the complete Redis API, including its data structures, pub/sub messaging, and high-availability server connections.

2. Why Lettuce?

We’ve covered Jedis in one of the previous posts. What makes Lettuce different?

The most significant difference is its asynchronous support via Java 8's CompletionStage interface and its support for Reactive Streams. As we'll see below, Lettuce offers a natural interface for making asynchronous requests from the Redis database server and for creating streams.

It also uses Netty for communicating with the server. This makes for a “heavier” API, but also makes it better suited for sharing a connection with more than one thread.

3. Setup

3.1. Dependency

Let’s start by declaring the only dependency we’ll need in the pom.xml:

<dependency>
    <groupId>io.lettuce</groupId>
    <artifactId>lettuce-core</artifactId>
    <version>5.0.1.RELEASE</version>
</dependency>

The latest version of the library can be checked on the Github repository or on Maven Central.

3.2. Redis Installation

We'll need to install and run at least one instance of Redis, two if we wish to test clustering or sentinel mode (although sentinel mode requires three servers to function correctly). For this article, we're using 4.0.x – the latest stable version at this moment.

More information about getting started with Redis can be found here, including downloads for Linux and MacOS.

Redis doesn’t officially support Windows, but there’s a port of the server here. We can also run Redis in Docker which is a better alternative for Windows 10 and a fast way to get up and running.

4. Connections

4.1. Connecting to a Server

Connecting to Redis consists of four steps:

  1. Creating a Redis URI
  2. Using the URI to create a RedisClient
  3. Opening a Redis Connection
  4. Generating a set of RedisCommands

Let’s see the implementation:

RedisClient redisClient = RedisClient
  .create("redis://password@localhost:6379/");
StatefulRedisConnection<String, String> connection
 = redisClient.connect();

A StatefulRedisConnection is what it sounds like: a thread-safe connection to a Redis server that will maintain its connection to the server and reconnect if needed. Once we have a connection, we can use it to execute Redis commands either synchronously or asynchronously.

RedisClient uses substantial system resources, as it holds Netty resources for communicating with the Redis server. Applications that require multiple connections should use a single RedisClient.
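
For completeness, here's a minimal sketch of the full connection life-cycle, assuming a local Redis instance without a password; closing the connection and shutting down the client releases the underlying Netty resources:

RedisClient redisClient = RedisClient.create("redis://localhost:6379/");
StatefulRedisConnection<String, String> connection = redisClient.connect();

try {
    RedisCommands<String, String> commands = connection.sync();
    commands.set("greeting", "Hello, Redis!");
} finally {
    // close this logical connection, then release client-wide resources
    connection.close();
    redisClient.shutdown();
}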

4.2. Redis URIs

We create a RedisClient by passing a URI to the static factory method.

Lettuce leverages a custom syntax for Redis URIs. This is the schema:

redis :// [password@] host [: port] [/ database]
  [? [timeout=timeout[d|h|m|s|ms|us|ns]]
  [&database=database]]

There are four URI schemes:

  • redis – a standalone Redis server
  • rediss – a standalone Redis server via an SSL connection
  • redis-socket – a standalone Redis server via a Unix domain socket
  • redis-sentinel – a Redis Sentinel server

The Redis database instance can be specified as part of the URL path or as an additional parameter. If both are supplied, the parameter has higher precedence.
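
For instance, a hypothetical URI following this schema, connecting to database 2 on a password-protected local server with a ten-second timeout, could look like this:

redis://mypassword@localhost:6379/2?timeout=10s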

In the example above, we’re using a String representation. Lettuce also has a RedisURI class for building connections. It offers the Builder pattern:

RedisURI.Builder
  .redis("localhost", 6379).auth("password")
  .database(1).build();

And a constructor:

new RedisURI("localhost", 6379, 60, TimeUnit.SECONDS);

4.3. Synchronous Commands

Similar to Jedis, Lettuce provides a complete Redis command set in the form of methods.

However, Lettuce implements both synchronous and asynchronous versions. We’ll look at the synchronous version briefly, and then use the asynchronous implementation for the rest of the tutorial.

After we create a connection, we use it to create a command set:

RedisCommands<String, String> syncCommands = connection.sync();

Now we have an intuitive interface for communicating with Redis.

We can set and get String values:

syncCommands.set("key", "Hello, Redis!");

String value = syncCommands.get("key");

We can work with hashes:

syncCommands.hset("recordName", "FirstName", "John");
syncCommands.hset("recordName", "LastName", "Smith");
Map<String, String> record = syncCommands.hgetall("recordName");

We'll cover more Redis data structures later in the article.

The Lettuce synchronous API uses the asynchronous API. Blocking is done for us at the command level. This means that more than one client can share a synchronous connection.

4.4. Asynchronous Commands

Let’s take a look at the asynchronous commands:

RedisAsyncCommands<String, String> asyncCommands = connection.async();

We retrieve a set of RedisAsyncCommands from the connection, similar to how we retrieved the synchronous set. These commands return a RedisFuture (which is a CompletableFuture internally):

RedisFuture<String> result = asyncCommands.get("key");
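
Since a RedisFuture implements CompletionStage, we can also attach a non-blocking callback instead of waiting for the value; here's a minimal sketch, assuming the key was set earlier:

result.thenAccept(value -> System.out.println("Got value: " + value));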

A guide to working with a CompletableFuture can be found here.

4.5. Reactive API

Finally, let’s see how to work with non-blocking reactive API:

RedisStringReactiveCommands<String, String> reactiveCommands = connection.reactive();

These commands return results wrapped in a Mono or a Flux from Project Reactor.
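
As a quick illustration, here's a minimal sketch of retrieving a value reactively, again assuming the key was set earlier; nothing is executed until we subscribe to the returned Mono:

reactiveCommands.get("key")
  .subscribe(value -> System.out.println("Got value: " + value));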

A guide to working with Project Reactor can be found here.

5. Redis Data Structures

We briefly looked at strings and hashes above, let’s look at how Lettuce implements the rest of Redis’ data structures. As we’d expect, each Redis command has a similarly-named method.

5.1. Lists

Lists are lists of Strings with the order of insertion preserved. Values are inserted or retrieved from either end:

asyncCommands.lpush("tasks", "firstTask");
asyncCommands.lpush("tasks", "secondTask");
RedisFuture<String> redisFuture = asyncCommands.rpop("tasks");

String nextTask = redisFuture.get();

In this example, nextTask equals “firstTask“. Lpush pushes values to the head of the list, and then rpop pops values from the end of the list.

We can also pop elements from the other end:

asyncCommands.del("tasks");
asyncCommands.lpush("tasks", "firstTask");
asyncCommands.lpush("tasks", "secondTask");
redisFuture = asyncCommands.lpop("tasks");

String nextTask = redisFuture.get();

We start the second example by removing the list with del. Then we insert the same values again, but we use lpop to pop values from the head of the list, so nextTask holds the “secondTask” text.

5.2. Sets

Redis Sets are unordered collections of Strings similar to Java Sets; there are no duplicate elements:

asyncCommands.sadd("pets", "dog");
asyncCommands.sadd("pets", "cat");
asyncCommands.sadd("pets", "cat");
 
RedisFuture<Set<String>> pets = asyncCommands.smembers("pets");
RedisFuture<Boolean> exists = asyncCommands.sismember("pets", "dog");

When we retrieve the Redis set as a Set, the size is two, since the duplicate “cat” was ignored. When we query Redis for the existence of “dog” with sismember, the response is true.

5.3. Hashes

We briefly looked at an example of hashes earlier. They are worth a quick explanation.

Redis Hashes are records with String fields and values. Each record also has a key in the primary index:

asyncCommands.hset("recordName", "FirstName", "John");
asyncCommands.hset("recordName", "LastName", "Smith");

RedisFuture<String> lastName 
  = asyncCommands.hget("recordName", "LastName");
RedisFuture<Map<String, String>> record 
  = asyncCommands.hgetall("recordName");

We use hset to add fields to the hash, passing in the name of the hash, the name of the field, and a value.

Then, we retrieve an individual value with hget, the name of the record and the field. Finally, we fetch the entire record as a hash with hgetall.

5.4. Sorted Sets

Sorted Sets contain values and a rank, by which they are sorted. The rank is a 64-bit floating point value.

Items are added with a rank, and retrieved in a range:

asyncCommands.zadd("sortedset", 1, "one");
asyncCommands.zadd("sortedset", 4, "zero");
asyncCommands.zadd("sortedset", 2, "two");

RedisFuture<List<String>> valuesForward = asyncCommands.zrange("sortedset", 0, 3);
RedisFuture<List<String>> valuesReverse = asyncCommands.zrevrange("sortedset", 0, 3);

The second argument to zadd is a rank. We retrieve a range by rank with zrange for ascending order and zrevrange for descending.

We added “zero” with a rank of 4, so it will appear at the end of valuesForward and at the beginning of valuesReverse.

6. Transactions

Transactions allow the execution of a set of commands in a single atomic step. These commands are guaranteed to be executed in order and exclusively. Commands from another user won’t be executed until the transaction finishes.

Either all commands are executed, or none of them are. Redis will not perform a rollback if one of them fails. Once exec() is called, all commands are executed in the order specified.

Let’s look at an example:

asyncCommands.multi();
    
RedisFuture<String> result1 = asyncCommands.set("key1", "value1");
RedisFuture<String> result2 = asyncCommands.set("key2", "value2");
RedisFuture<String> result3 = asyncCommands.set("key3", "value3");

RedisFuture<TransactionResult> execResult = asyncCommands.exec();

TransactionResult transactionResult = execResult.get();

String firstResult = transactionResult.get(0);
String secondResult = transactionResult.get(1);
String thirdResult = transactionResult.get(2);

The call to multi starts the transaction. When a transaction is started, the subsequent commands are not executed until exec() is called.

In synchronous mode, the commands return null. In asynchronous mode, the commands return a RedisFuture. Exec returns a TransactionResult which contains a list of responses.

Since the RedisFutures also receive their results, asynchronous API clients receive the transaction result in two places.
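
If we decide not to run the queued commands, we can abort the transaction with discard() instead of exec(); here's a small sketch of that, using hypothetical keys:

asyncCommands.multi();
asyncCommands.set("key1", "value1");

// nothing queued so far is executed; the transaction is simply dropped
asyncCommands.discard();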

7. Batching

Under normal conditions, Lettuce executes commands as soon as they are called by an API client.

This is what most normal applications want, especially if they rely on receiving command results serially.

However, this behavior isn’t efficient if applications don’t need results immediately or if large amounts of data are being uploaded in bulk.

Asynchronous applications can override this behavior:

commands.setAutoFlushCommands(false);

List<RedisFuture<?>> futures = new ArrayList<>();
for (int i = 0; i < iterations; i++) {
    futures.add(commands.set("key-" + i, "value-" + i));
}
commands.flushCommands();

boolean result = LettuceFutures.awaitAll(5, TimeUnit.SECONDS,
  futures.toArray(new RedisFuture[0]));

With setAutoFlushCommands set to false, the application must call flushCommands manually. In this example, we queued multiple set commands and then flushed the channel. The awaitAll method waits for all of the RedisFutures to complete.

This state is set on a per-connection basis and affects all threads that use the connection. This feature isn't applicable to synchronous commands.
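
Once the bulk upload is done, we would typically switch auto-flushing back on for the rest of the application; a one-line sketch:

commands.setAutoFlushCommands(true);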

8. Publish/Subscribe

Redis offers a simple publish/subscribe messaging system. Subscribers consume messages from channels with the subscribe command. Messages aren't persisted; they're only delivered to clients that are subscribed to the channel at the time.

Redis uses the pub/sub system for notifications about the Redis dataset, giving clients the ability to receive events about keys being set, deleted, expired, etc.

See the documentation here for more details.

8.1. Subscriber

RedisPubSubListener receives pub/sub messages. This interface defines several methods, but we’ll just show the method for receiving messages here:

public class Listener implements RedisPubSubListener<String, String> {

    @Override
    public void message(String channel, String message) {
        log.debug("Got {} on channel {}", message, channel);
    }
}

We use the RedisClient to connect a pub/sub channel and install the listener:

StatefulRedisPubSubConnection<String, String> connection
 = client.connectPubSub();
connection.addListener(new Listener());

RedisPubSubAsyncCommands<String, String> async
 = connection.async();
async.subscribe("channel");

With a listener installed, we retrieve a set of RedisPubSubAsyncCommands and subscribe to a channel.

8.2. Publisher

Publishing is just a matter of connecting a Pub/Sub channel and retrieving the commands:

StatefulRedisPubSubConnection<String, String> connection 
  = client.connectPubSub();

RedisPubSubAsyncCommands<String, String> async 
  = connection.async();
async.publish("channel", "Hello, Redis!");

Publishing requires a channel and a message.

8.3. Reactive Subscriptions

Lettuce also offers a reactive interface for subscribing to pub/sub messages:

StatefulRedisPubSubConnection<String, String> connection = client
  .connectPubSub();

RedisPubSubReactiveCommands<String, String> reactive = connection
  .reactive();

reactive.observeChannels().subscribe(message -> {
    log.debug("Got {} on channel {}", message.getMessage(), message.getChannel());
});
reactive.subscribe("channel").subscribe();

The Flux returned by observeChannels receives messages for all channels, but since this is a stream, filtering is easy to do.
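
For instance, a minimal sketch of narrowing the stream down to a single channel (the channel name is just an example) might look like this:

reactive.observeChannels()
  .filter(message -> "channel".equals(message.getChannel()))
  .subscribe(message -> log.debug("Got {}", message.getMessage()));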

9. High Availability

Redis offers several options for high availability and scalability. Complete understanding requires knowledge of Redis server configurations, but we’ll go over a brief overview of how Lettuce supports them.

9.1. Master/Slave

Redis servers replicate themselves in a master/slave configuration. The master server sends the slave a stream of commands that replicate the master cache to the slave. Redis doesn’t support bi-directional replication, so slaves are read-only.

Lettuce can connect to Master/Slave systems, query them for the topology, and then select slaves for reading operations, which can improve throughput:

RedisClient redisClient = RedisClient.create();

StatefulRedisMasterSlaveConnection<String, String> connection
 = MasterSlave.connect(redisClient, 
   new Utf8StringCodec(), RedisURI.create("redis://localhost"));
 
connection.setReadFrom(ReadFrom.SLAVE);

9.2. Sentinel

Redis Sentinel monitors master and slave instances and orchestrates failovers to slaves in the event of a master failure.

Lettuce can connect to the Sentinel, use it to discover the address of the current master, and then return a connection to it.

To do this, we build a different RedisURI and connect our RedisClient with it:

RedisURI redisUri = RedisURI.Builder
  .sentinel("sentinelhost1", "clustername")
  .withSentinel("sentinelhost2").build();
RedisClient client = RedisClient.create(redisUri);

StatefulRedisConnection<String, String> connection = client.connect();

We built the URI with the hostname (or address) of the first Sentinel and a cluster name, followed by a second sentinel address. When we connect to the Sentinel, Lettuce queries it about the topology and returns a connection to the current master server for us.

The complete documentation is available here.

9.3. Clusters

Redis Cluster uses a distributed configuration to provide high-availability and high-throughput.

Clusters shard keys across up to 1000 nodes; therefore, transactions aren't available in a cluster:

RedisURI redisUri = RedisURI.Builder.redis("localhost")
  .withPassword("authentication").build();
RedisClusterClient clusterClient = RedisClusterClient
  .create(redisUri);
StatefulRedisClusterConnection<String, String> connection
 = clusterClient.connect();
RedisAdvancedClusterCommands<String, String> syncCommands = connection
  .sync();

RedisAdvancedClusterCommands holds the set of Redis commands supported by the cluster, routing them to the instance that holds the key.
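
From the application's point of view, the commands look just like the single-node ones; as a small sketch with hypothetical keys, the routing to the right shard happens behind the scenes:

syncCommands.set("key", "Hello, Redis Cluster!");
String value = syncCommands.get("key");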

A complete specification is available here.

10. Conclusion

In this tutorial, we looked at how to use Lettuce to connect and query a Redis server from within our application.

Lettuce supports the complete set of Redis features, with the bonus of a completely thread-safe asynchronous interface. It also makes extensive use of Java 8’s CompletionStage interface to give applications fine-grained control over how they receive data.

Code samples, as always, can be found over on GitHub.

Introduction to Javadoc


1. Overview

Good API documentation is one of the many factors contributing to the overall success of a software project.

Fortunately, all modern versions of the JDK provide the Javadoc tool – for generating API documentation from comments present in the source code.

Prerequisites:

  1. JDK 1.4 (JDK 7+ is recommended for the latest version of the Maven Javadoc plugin)
  2. The JDK /bin folder added to the PATH environment variable
  3. (Optional) an IDE with built-in tools

2. Javadoc Comments

Let’s start with comments.

The structure of a Javadoc comment looks very similar to a regular multi-line comment, but the key difference is the extra asterisk at the beginning:

// This is a single line comment

/*
 * This is a regular multi-line comment
 */

/**
 * This is a Javadoc
 */

Javadoc style comments may contain HTML tags as well.

2.1. Javadoc Format

Javadoc comments may be placed above any class, method, or field which we want to document.

These comments are commonly made up of two sections:

  1. The description of what we’re commenting on
  2. The standalone block tags (marked with the “@” symbol) which describe specific meta-data

We’ll be using some of the more common block tags in our example. For a complete list of block tags, visit the reference guide.

2.2. Javadoc at Class Level 

Let’s take a look at what a class-level Javadoc comment would look like:

/**
* Hero is the main entity we'll be using to . . .
* 
* Please see the {@link com.baeldung.javadoc.Person} class for true identity
* @author Captain America
* 
*/
public class SuperHero extends Person {
    // fields and methods
}

We have a short description and two different kinds of tags – standalone and inline:

  • Standalone tags appear after the description with the tag as the first word in a line, e.g., the @author tag
  • Inline tags may appear anywhere and are surrounded with curly brackets, e.g., the @link tag in the description

In our example, we can also see the two kinds of tags being used:

  • {@link} provides an inline link to a referenced part of our source code
  • @author the name of the author who added the class, method, or field that is commented

2.3. Javadoc at Field Level

We can also use a description without any block tags like this inside our SuperHero class:

/**
 * The public name of a hero that is common knowledge
 */
private String heroName;

Private fields won’t have Javadoc generated for them unless we explicitly pass the -private option to the Javadoc command.
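
For example, assuming the same source layout we'll use later in this article, a command along these lines would include private members in the generated documentation:

user@baeldung:~$ javadoc -private -d doc src\*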

2.4. Javadoc at Method Level

Methods can contain a variety of Javadoc block tags.

Let’s take a look at a method we’re using:

/**
 * <p>This is a simple description of the method. . .
 * <a href="http://www.supermanisthegreatest.com">Superman!</a>
 * </p>
 * @param incomingDamage the amount of incoming damage
 * @return the amount of health hero has after attack
 * @see <a href="http://www.link_to_jira/HERO-402">HERO-402</a>
 * @since 1.0
 */
public int successfullyAttacked(int incomingDamage) {
    // do things
    return 0;
}

The successfullyAttacked method contains both a description and numerous standalone block tags.

There are many block tags to help generate proper documentation, and we can include all sorts of information. We can even utilize basic HTML tags in the comments.

Let’s go over the tags we encounter in the example above:

  • @param provides any useful description about a method’s parameter or input it should expect
  • @return provides a description of what a method will or can return
  • @see will generate a link similar to the {@link} tag, but more in the context of a reference and not inline
  • @since specifies which version the class, field, or method was added to the project
  • @version specifies the version of the software, commonly used with %I% and %G% macros
  • @throws is used to explain the cases in which the method would throw an exception
  • @deprecated gives an explanation of why code was deprecated, when it may have been deprecated, and what the alternatives are

Although both sections are technically optional, we’ll need at least one for the Javadoc tool to generate anything meaningful.
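
To illustrate a few of the remaining tags, here's a hypothetical method we might add to the SuperHero class, documenting a thrown exception and marking the method as deprecated:

/**
 * Returns the hero's health after an attack.
 *
 * @param incomingDamage the amount of incoming damage
 * @return the amount of health the hero has after the attack
 * @throws IllegalArgumentException if incomingDamage is negative
 * @deprecated use {@link #successfullyAttacked(int)} instead
 */
@Deprecated
public int attacked(int incomingDamage) {
    if (incomingDamage < 0) {
        throw new IllegalArgumentException("incomingDamage must not be negative");
    }
    return successfullyAttacked(incomingDamage);
}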

3. Javadoc Generation

In order to generate our Javadoc pages, we’ll want to take a look at the command line tool that ships with the JDK, and the Maven plugin.

3.1. Javadoc Command Line Tool

The Javadoc command line tool is very powerful but has some complexity attached to it.

Running the javadoc command without any options or parameters will result in an error and output the parameters it expects.

We’ll need to at least specify what package or class we want documentation to be generated for.

Let’s open a command line and navigate to the project directory.

Assuming the classes are all in the src folder in the project directory:

user@baeldung:~$ javadoc -d doc src\*

This will generate documentation in a directory called doc, as specified with the -d flag. If multiple packages or files exist, we'd need to provide all of them.

Utilizing an IDE with the built-in functionality is, of course, easier and generally recommended.


3.2. Javadoc With Maven Plugin

We can also make use of the Maven Javadoc plugin:

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-javadoc-plugin</artifactId>
            <version>3.0.0</version>
            <configuration>
                <source>1.8</source>
                <target>1.8</target>
            </configuration>
            <tags>
            ...
            </tags>
        </plugin>
    </plugins>
</build>

In the base directory of the project, we run the command to generate our Javadocs to a directory in target\site:

user@baeldung:~$ mvn javadoc:javadoc

The Maven plugin is very powerful and facilitates complex document generation seamlessly.

Let’s now see what a generated Javadoc page looks like:

We can see a tree view of the classes our SuperHero class extends. We can see our description, fields, and method, and we can click on links for more information.

A detailed view of our method looks like this:

3.3. Custom Javadoc Tags

In addition to using predefined block tags to format our documentation, we can also create custom block tags.

In order to do so, we just need to include a -tag option to our Javadoc command line in the format of <tag-name>:<locations-allowed>:<header>.

In order to create a custom tag called @location allowed anywhere, which is displayed in the “Notable Locations” header in our generated document, we need to run:

user@baeldung:~$ javadoc -tag location:a:"Notable Locations:" -d doc src\*

In order to use this tag, we can add it to the block section of a Javadoc comment:

/**
 * This is an example...
 * @location New York
 * @return blah blah
 */

The Maven Javadoc plugin is flexible enough to also allow definitions of our custom tags in our pom.xml.

In order to set up the same tag above for our project, we can add the following to the <tags> section of our plugin:

...
<tags>
    <tag>
        <name>location</name>
        <placement>a</placement>
        <head>Notable Places:</head>
    </tag> 
</tags>
...

This approach allows us to specify the custom tag once, instead of specifying it every time.

4. Conclusion

This quick introduction tutorial covered how to write basic Javadocs and generate them with the Javadoc command line.

An easier way to generate the documentation would be to use any built-in IDE options, or to include the Maven plugin in our pom.xml file and run the appropriate commands.

The code samples, as always, can be found over on GitHub.

HTTP Requests with Kotlin and khttp


1. Introduction

The HTTP protocol and APIs built on it are of central importance in programming these days.

On the JVM we have several available options, from lower-level to very high-level libraries, from established projects to new kids on the block. However, most of them are targeted primarily at Java programs.

In this article, we’re going to look at khttp, an idiomatic Kotlin library for consuming HTTP-based resources and APIs.

2. Dependencies

In order to use the library in our project, first we have to add it to our dependencies:

<dependency>
    <groupId>khttp</groupId>
    <artifactId>khttp</artifactId>
    <version>0.1.0</version>
</dependency>

Since this is not yet on Maven Central, we also have to enable the JCenter repository:

<repository>
    <id>central</id>
    <url>http://jcenter.bintray.com</url>
</repository>

Version 0.1.0 is the current one at the time of writing. We can, of course, check JCenter for a newer one.

3. Basic Usage

The basics of the HTTP protocol are simple, even though the fine details can be quite complicated.  Therefore, khttp has a simple interface as well.

For every HTTP method, we can find a package-level function in the khttp package, such as get, post and so on.

The functions all take the same set of arguments and return a Response object; we’ll see the details of these in the following sections.

In the course of this article, we’ll use the fully qualified form, for example, khttp.put. In our projects we can, of course, import and possibly rename those methods:

import khttp.delete as httpDelete

Note: we’ve added type declarations for clarity throughout code examples because without an IDE they could be hard to follow.

4. A Simple Request

Every HTTP request has at least two required components: a method and a URL. In khttp, the method is determined by the function we invoke, as we’ve seen in the previous section.

The URL is the only required argument for the method; so, we can easily perform a simple request:

khttp.get("http://httpbin.org/get")

In the following sections, we’ll consider all requests to complete successfully.

4.1. Adding Parameters

We often have to provide query parameters in addition to the base URL, especially for GET requests.

khttp’s methods accept a params argument which is a Map of key-value pairs to include in the query String:

khttp.get(
  url = "http://httpbin.org/get",
  params = mapOf("key1" to "value1", "keyn" to "valuen"))

Notice that we’ve used the mapOf function to construct a Map on the fly; the resulting request URL will be:

http://httpbin.org/get?key1=value1&keyn=valuen

5. A Request Body

Another common operation we often need to perform is sending data, typically as the payload of a POST or PUT request.

For this, the library offers several options that we’re going to examine in the following sections.

5.1. Sending a JSON Payload

We can use the json argument to send a JSON object or array. It can be of several different types:

  • A JSONObject or JSONArray as provided by the org.json library
  • A Map, which is transformed into a JSON object
  • A Collection, Iterable or array, which is transformed to a JSON array

We can easily turn our earlier GET example into a POST one which will send a simple JSON object:

khttp.post(
  url = "http://httpbin.org/post",
  json = mapOf("key1" to "value1", "keyn" to "valuen"))

Note that the transformation from collections to JSON objects is shallow. For example, a List of Maps won't be converted to a JSON array of JSON objects, but rather to an array of strings.

For deep conversion, we’d need a more complex JSON mapping library such as Jackson. The conversion facility of the library is only meant for simple cases.

5.2. Sending Form Data (URL Encoded)

To send form data (URL encoded, as in HTML forms) we use the data argument with a Map:

khttp.post(
  url = "http://httpbin.org/post",
  data = mapOf("key1" to "value1", "keyn" to "valuen"))

5.3. Uploading Files (Multipart Form)

We can send one or more files encoded as a multipart form data request.

In that case, we use the files argument:

khttp.post(
  url = "http://httpbin.org/post",
  files = listOf(
    FileLike("file1", "content1"),
    FileLike("file2", File("kitty.jpg"))))

We can see that khttp uses a FileLike abstraction, which is an object with a name and a content. The content can be a string, a byte array, a File, or a Path.

5.4. Sending Raw Content

If none of the options above are suitable, we can use an InputStream to send raw data as the body of an HTTP request:

khttp.post(url = "http://httpbin.org/post", data = someInputStream)

In this case, we’ll most likely need to manually set some headers too, which we’ll cover in a later section.

6. Handling the Response

So far we’ve seen various ways of sending data to a server. But many HTTP operations are useful because of the data they return as well.

khttp is based on blocking I/O, therefore all functions corresponding to HTTP methods return a Response object containing the response received from the server.

This object has various properties that we can access, depending on the type of content.

6.1. JSON Responses

If we know the response to be a JSON object or array, we can use the jsonObject and jsonArray properties:

val response : Response = khttp.get("http://httpbin.org/get")
val obj : JSONObject = response.jsonObject
print(obj["someProperty"])

6.2. Text or Binary Responses

If we want to read the response as a String instead, we can use the text property:

val message : String = response.text

Or, if we want to read it as binary data (e.g. a file download) we use the content property:

val imageData : ByteArray = response.content

Finally, we can also access the underlying InputStream:

val inputStream : InputStream = response.raw

7. Advanced Usage

Let’s also take a look at a couple of more advanced usage patterns which are generally useful, and that we haven’t yet treated in the previous sections.

7.1. Handling Headers and Cookies

All khttp functions take a headers argument which is a Map of header names and values.

val response = khttp.get(
  url = "http://httpbin.org/get",
  headers = mapOf("header1" to "1", "header2" to "2"))

Similarly for cookies:

val response = khttp.get(
  url = "http://httpbin.org/get",
  cookies = mapOf("cookie1" to "1", "cookie2" to "2"))

We can also access headers and cookies sent by the server in the response:

val contentType : String = response.headers["Content-Type"]
val sessionID : String = response.cookies["JSESSIONID"]

7.2. Handling Errors

There are two types of errors that can arise in HTTP: error responses, such as 404 – Not Found, which are part of the protocol; and low-level errors, such as “connection refused”.

The first kind doesn’t result in khttp throwing exceptions; instead, we should check the Response statusCode property:

val response = khttp.get(url = "http://httpbin.org/nothing/to/see/here")
if(response.statusCode == 200) {
    process(response)
} else {
    handleError(response)
}

Lower-level errors, instead, result in exceptions being thrown from the underlying Java I/O subsystem, such as ConnectException.

7.3. Streaming Responses

Sometimes the server can respond with a big piece of content, and/or take a long time to respond. In those cases, we may want to process the response in chunks, rather than waiting for it to complete and take up memory.

If we want to instruct the library to give us a streaming response, then we have to pass true as the stream argument:

val response = khttp.get(url = "http://httpbin.org", stream = true)

Then, we can process it in chunks:

response.contentIterator(chunkSize = 1024).forEach { arr : ByteArray -> handleChunk(arr) }

7.4. Non-Standard Methods

In the unlikely case that we need to use an HTTP method (or verb) that khttp doesn’t provide natively – say, for some extension of the HTTP protocol, like WebDAV – we’re still covered.

In fact, all functions in the khttp package, which correspond to HTTP methods, are implemented using a generic request function that we can use too:

khttp.request(
  method = "COPY",
  url = "http://httpbin.org/get",
  headers = mapOf("Destination" to "/copy-of-get"))

7.5. Other Features

We haven’t touched all the features of khttp. For example, we haven’t discussed timeouts, redirects and history, or asynchronous operations.

The official documentation is the ultimate source of information about the library and all of its features.

8. Conclusion

In this tutorial, we’ve seen how to make HTTP requests in Kotlin with the idiomatic library khttp.

The implementation of all these examples can be found in the GitHub project – this is a Maven project, so it should be easy to import and run as it is.

Get Log Output in JSON


1. Introduction

Most Java logging libraries today offer different layout options for formatting logs – to accurately fit the needs of each project.

In this quick article, we want to format and output our log entries as JSON. We’ll see how to do this for the two most widely used logging libraries: Log4j2 and Logback.

Both use Jackson internally for representing logs in the JSON format.

For an introduction to these libraries take a look at our introduction to Java Logging article.

2. Log4j2

Log4j2 is the direct successor of the most popular logging library for Java, Log4J.

As it’s the new standard for Java projects, we’ll show how to configure it to output JSON.

2.1. Dependencies

First, we have to include the following dependencies in our pom.xml file:

<dependencies>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-core</artifactId>
        <version>2.10.0</version>
    </dependency>

    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-databind</artifactId>
        <version>2.9.3</version>
    </dependency>
    
</dependencies>

The latest versions of the previous dependencies can be found on Maven Central: log4j-api, log4j-core, jackson-databind.

2.2. Configuration

Then, in our log4j2.xml file, we can create a new Appender that uses JsonLayout and a new Logger that uses this Appender:

<Appenders>
    <Console name="ConsoleJSONAppender" target="SYSTEM_OUT">
        <JsonLayout complete="false" compact="false">
            <KeyValuePair key="myCustomField" value="myCustomValue" />
        </JsonLayout>
    </Console>
</Appenders>

<Logger name="CONSOLE_JSON_APPENDER" level="TRACE" additivity="false">
    <AppenderRef ref="ConsoleJSONAppender" />
</Logger>

As we can see in the example config, it's possible to add our own values to the log using KeyValuePair, which even supports lookups into the log context.

Setting the compact parameter to false will increase the size of the output, but will also make it more human-readable.

2.3. Using Log4j2

In our code, we can now instantiate our new JSON logger and log a message at the debug level:

Logger logger = LogManager.getLogger("CONSOLE_JSON_APPENDER");
logger.debug("Debug message");

The debug output message for the previous code would be:

{
  "timeMillis" : 1513290111664,
  "thread" : "main",
  "level" : "DEBUG",
  "loggerName" : "CONSOLE_JSON_APPENDER",
  "message" : "My debug message",
  "endOfBatch" : false,
  "loggerFqcn" : "org.apache.logging.log4j.spi.AbstractLogger",
  "threadId" : 1,
  "threadPriority" : 5,
  "myCustomField" : "myCustomValue"
}

3. Logback

Logback can be considered another successor of Log4J. It's written by the same developers and claims to be more efficient and faster than its predecessor.

So, let’s see how to configure it to get the output of the logs in JSON format.

3.1. Dependencies

Let's include the following dependencies in our pom.xml:

<dependencies>
    <dependency>
        <groupId>ch.qos.logback</groupId>
        <artifactId>logback-classic</artifactId>
        <version>1.1.7</version>
    </dependency>

    <dependency>
        <groupId>ch.qos.logback.contrib</groupId>
        <artifactId>logback-json-classic</artifactId>
        <version>0.1.5</version>
    </dependency>

    <dependency>
        <groupId>ch.qos.logback.contrib</groupId>
        <artifactId>logback-jackson</artifactId>
        <version>0.1.5</version>
    </dependency>

    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-databind</artifactId>
        <version>2.9.3</version>
    </dependency>
</dependencies>

We can check here for the latest versions of these dependencies: logback-classic, logback-json-classic, logback-jackson, jackson-databind.

3.2. Configuration

First, we create a new appender in our logback.xml that uses JsonLayout and JacksonJsonFormatter.

After that, we can create a new logger that uses this appender:

<appender name="json" class="ch.qos.logback.core.ConsoleAppender">
    <layout class="ch.qos.logback.contrib.json.classic.JsonLayout">
        <jsonFormatter
            class="ch.qos.logback.contrib.jackson.JacksonJsonFormatter">
            <prettyPrint>true</prettyPrint>
        </jsonFormatter>
        <timestampFormat>yyyy-MM-dd' 'HH:mm:ss.SSS</timestampFormat>
    </layout>
</appender>

<logger name="jsonLogger" level="TRACE">
    <appender-ref ref="json" />
</logger>

As we see, the parameter prettyPrint is enabled to obtain a human-readable JSON.

3.3. Using Logback

Let’s instantiate the logger in our code and log a debug message:

Logger logger = LoggerFactory.getLogger("jsonLogger");
logger.debug("Debug message");

With this – we’ll obtain the following output:

{
  "timestamp" : "2017-12-14 23:36:22.305",
  "level" : "DEBUG",
  "thread" : "main",
  "logger" : "jsonLogger",
  "message" : "Debug log message",
  "context" : "default"
}

4. Conclusion

We've seen here how we can easily configure Log4j2 and Logback to use a JSON output format. We've delegated all the complexity of the formatting to the logging library, so we don't need to change any existing logger calls.

As always the code for this article is available on GitHub here and here.
