
Spring MVC Streaming and SSE Request Processing


1. Introduction

This simple tutorial demonstrates the use of several asynchronous and streaming objects in Spring MVC 5.x.x.

Specifically, we’ll review three key classes:

  • ResponseBodyEmitter
  • SseEmitter
  • StreamingResponseBody

Also, we’ll discuss how to interact with them using a JavaScript client.

2. ResponseBodyEmitter

ResponseBodyEmitter handles async responses.

Also, it represents a parent for a number of subclasses – one of which we’ll take a closer look at below.

2.1. Server Side

It’s best to use a ResponseBodyEmitter with its own dedicated asynchronous thread, wrapped in a ResponseEntity (into which we can inject the emitter directly):

@Controller
public class ResponseBodyEmitterController {
 
    private ExecutorService executor 
      = Executors.newCachedThreadPool();

    @GetMapping("/rbe")
    public ResponseEntity<ResponseBodyEmitter> handleRbe() {
        ResponseBodyEmitter emitter = new ResponseBodyEmitter();
        executor.execute(() -> {
            try {
                emitter.send(
                  "/rbe" + " @ " + new Date(), MediaType.TEXT_PLAIN);
                emitter.complete();
            } catch (Exception ex) {
                emitter.completeWithError(ex);
            }
        });
        return new ResponseEntity<>(emitter, HttpStatus.OK);
    }
}

So, in the example above, we can sidestep CompletableFutures, more complicated asynchronous promises, and the @Async annotation.

Instead, we simply declare our asynchronous entity and wrap it in a new Thread provided by the ExecutorService.

2.2. Client Side

For client-side use, we can use a simple XHR method and call our API endpoints just like in a usual AJAX operation:

var xhr = function(url) {
    return new Promise(function(resolve, reject) {
        var xmhr = new XMLHttpRequest();
        //...
        xmhr.open("GET", url, true);
        xmhr.send();
       //...
    });
};

xhr('http://localhost:8080/javamvcasync/rbe')
  .then(function(success){ //... });

3. SseEmitter

SseEmitter is actually a subclass of ResponseBodyEmitter and provides additional Server-Sent Event (SSE) support out-of-the-box.

3.1. Server Side

So, let’s take a quick look at an example controller leveraging this powerful entity:

@Controller
public class SseEmitterController {
    private ExecutorService nonBlockingService = Executors
      .newCachedThreadPool();
    
    @GetMapping("/sse")
    public SseEmitter handleSse() {
         SseEmitter emitter = new SseEmitter();
         nonBlockingService.execute(() -> {
             try {
                 emitter.send("/sse" + " @ " + new Date());
                 // we could send more events
                 emitter.complete();
             } catch (Exception ex) {
                 emitter.completeWithError(ex);
             }
         });
         return emitter;
    }   
}

Pretty standard fare, but we’ll notice a few differences between this and our usual REST controller:

  • First, we return a SseEmitter
  • Also, we wrap the core response information in its own Thread
  • Finally, we send response information using emitter.send()
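
Beyond this single send, the same emitter can keep the connection open and push several events before completing. As a rough sketch (the event name, payloads, and delay here are our own illustration, not part of the original example):

nonBlockingService.execute(() -> {
    try {
        for (int i = 0; i < 3; i++) {
            // named events let the client register dedicated listeners
            emitter.send(SseEmitter.event()
              .name("tick")
              .data("event " + i + " @ " + new Date()));
            Thread.sleep(1000);
        }
        emitter.complete();
    } catch (Exception ex) {
        emitter.completeWithError(ex);
    }
});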

3.2. Client Side

Our client works a little bit differently this time since we can leverage the browser’s EventSource API, which maintains a continuous Server-Sent Events connection:

var sse = new EventSource('http://localhost:8080/javamvcasync/sse');
sse.onmessage = function (evt) {
    var el = document.getElementById('sse');
    el.appendChild(document.createTextNode(evt.data));
    el.appendChild(document.createElement('br'));
};

4. StreamingResponseBody

Lastly, we can use StreamingResponseBody to write directly to an OutputStream before passing that written information back to the client using a ResponseEntity.

4.1. Server Side

@Controller
public class StreamingResponseBodyController {
 
    @GetMapping("/srb")
    public ResponseEntity<StreamingResponseBody> handleSrb() {
        StreamingResponseBody stream = out -> {
            String msg = "/srb" + " @ " + new Date();
            out.write(msg.getBytes());
        };
        return new ResponseEntity<>(stream, HttpStatus.OK);
    }
}
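
Since StreamingResponseBody hands us the raw OutputStream, we can also stream a larger payload in chunks and flush as we go. A minimal sketch, with the loop and flush calls being our own illustration:

StreamingResponseBody stream = out -> {
    for (int i = 0; i < 1000; i++) {
        out.write(("line " + i + "\n").getBytes());
        // flush pushes each chunk to the client as soon as it's produced
        out.flush();
    }
};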

4.2. Client Side

Just like before, we’ll use a regular XHR method to access the controller above:

var xhr = function(url) {
    return new Promise(function(resolve, reject) {
        var xmhr = new XMLHttpRequest();
        //...
        xmhr.open("GET", url, true);
        xmhr.send();
        //...
    });
};

xhr('http://localhost:8080/javamvcasync/srb')
  .then(function(success){ //... });

Next, let’s take a look at some successful uses of these examples.

5. Bringing it all Together

After we’ve successfully compiled our server and run our client above (accessing the supplied index.jsp), we should see the following in our browser:

[browser output screenshot]

And the following in our terminal:

[terminal log screenshot]

We can also call the endpoints directly and see their streaming responses appear in our browser.

6. Conclusion

While Future and CompletableFuture have proven robust additions to Java and Spring, we now have several resources at our disposal to more adequately handle asynchronous and streaming data for highly concurrent web applications.

Finally, check out the complete code examples over on GitHub.


Java Weekly, Issue 240


Here we go…

1. Spring and Java

>> WireMock Tutorial: Request Matching, Part Four [petrikainulainen.net]

A nice write-up that shows how to specify expectations for XML documents received by a web service.

>> Spring Boot 1.x EOL Aug 1st 2019 [spring.io]

That’s one good incentive to finally migrate!

>> 5 Reasons and 101 Bugfixes – Why You Should Use Hibernate 5.3 [thoughts-on-java.org]

If you’ve been wondering whether you should upgrade to Hibernate 5.3, look no further — there is much to be gained, as you’ll see in this article.

>> Improving Testability of Java Microservices with Container Orchestration and a Service Mesh [infoq.com]

A quick overview of the benefits that container orchestration brings to the microservices testing table. Very cool.

>> Configuring Graal Native AOT for reflection [blog.frankel.ch]

A brief look at the challenges faced when using the Ahead-of-Time bytecode compiler to create native images from source code that uses a lot of reflection.

>> How Contract Tests Improve the Quality of Your Distributed Systems [infoq.com]

A detailed piece on how consumer-driven contracts can help you to catch bugs early between the integration points in your systems. This could save you hours of end-to-end testing.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Deep feature consistent variational auto-encoder [krasserm.github.io]

A quick look at a machine-learning algorithm for image analysis and comparison, fueled by neural networks. Fascinating.

>> Use Logging Levels Consistently [reflectoring.io]

A pragmatic approach to deciding what kinds of information to log at which levels. A good read.

>> A beginner’s guide to database multitenancy [vladmihalcea.com]

The title says it all.

>> How to Read an RFC [mnot.net]

As it turns out, the languages used to specify RFCs can leave them open to misinterpretation, even if you know the context(s) around which they were created.

Also worth reading:

3. Musings

>> Strong Opinions [blog.code-cop.org]

It’s not easy to abandon your strongly held opinions. But, as Confucius said, “Real knowledge is to know the extent of one’s ignorance.”

>> Being Good at Your Job is Overrated [daedtech.com]

Why merely being good at what you do is not necessarily going to advance your career.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> What’s in a Name? [dilbert.com]

>> A-B Testing [dilbert.com]

>> When in Doubt, Hire a Consultant [dilbert.com]

5. Pick of the Week

>> Web Architecture 101 [engineering.videoblocks.com]

Logging Exceptions Using SLF4J


1. Overview

In this quick tutorial, we’ll show how to log exceptions in Java using the SLF4J API. We’ll use the slf4j-simple API as the logging implementation.

You can explore different logging techniques in one of our previous articles.

2. Maven Dependencies

First, we need to add the following dependencies to our pom.xml:

<dependency>                             
    <groupId>org.slf4j</groupId>         
    <artifactId>slf4j-api</artifactId>   
    <version>1.7.25</version>  
</dependency> 
                       
<dependency>                             
    <groupId>org.slf4j</groupId>         
    <artifactId>slf4j-simple</artifactId>
    <version>1.7.25</version>  
</dependency>

The latest versions of these libraries can be found on Maven Central.

3. Examples

Usually, all exceptions are logged using the error() method available on the Logger interface. There are quite a few variations of this method. We’re going to explore:

void error(String msg);
void error(String format, Object... arguments);
void error(String msg, Throwable t);

Let’s first initialize the Logger that we’re going to use:

Logger logger = LoggerFactory.getLogger(NameOfTheClass.class);

If we just have to show the error message, then we can simply add:

logger.error("An exception occurred!");

The output of the above code will be:

ERROR packageName.NameOfTheClass - An exception occurred!

This is simple enough. But to add more relevant information about the exception (including the stack trace) we can write:

logger.error("An exception occurred!", new Exception("Custom exception"));

The output will be:

ERROR packageName.NameOfTheClass - An exception occurred!
java.lang.Exception: Custom exception
  at packageName.NameOfTheClass.methodName(NameOfTheClass.java:lineNo)

In the presence of multiple parameters, if the last argument in a logging statement is an exception, SLF4J will treat it as a Throwable rather than as a simple message parameter:

logger.error("{}, {}! An exception occurred!", 
  "Hello", 
  "World", 
  new Exception("Custom exception"));

In the above snippet, the String message will be formatted based on the passed object details. We’ve used curly braces as placeholders for String parameters passed to the method.

In this case, the output will be:

ERROR packageName.NameOfTheClass - Hello, World! An exception occurred!
java.lang.Exception: Custom exception 
  at packageName.NameOfTheClass.methodName(NameOfTheClass.java:lineNo)
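
In practice, we usually log an exception we’ve actually caught rather than one constructed inline. A minimal sketch (the ArithmeticException is just an illustration):

try {
    int result = 10 / 0;
} catch (ArithmeticException e) {
    logger.error("Calculation failed!", e);
}

This prints the message followed by the full stack trace of the caught exception, just like the output above.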

4. Conclusion

In this quick tutorial, we found out how to log exceptions using the SLF4J API.

The code snippets are available over in the GitHub repository.

Creating a Custom Log4j2 Appender


1. Introduction

In this tutorial, we’ll learn about creating a custom Log4j2 appender. If you’re looking for the introduction to Log4j2, please take a look at this article.

Log4j2 ships with a lot of built-in appenders which can be used for various purposes such as logging to a file, to a database, to a socket or to a NoSQL database.

However, there could be a need for a custom appender depending on the application demands.

Log4j2 is an upgraded version of Log4j and has significant improvements over Log4j. Hence, we’ll be using the Log4j2 framework to demonstrate the creation of a custom appender.

2. Maven Setup

We will need the log4j-core dependency in our pom.xml to start with:

<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <version>2.11.0</version>
</dependency>

The latest version of log4j-core can be found here.

3. Custom Appender

There are two ways to implement a custom appender: implementing the Appender interface, or extending the AbstractAppender class. Extending AbstractAppender is the simpler route, so that’s what we’ll use.

For this example, we’re going to create a MapAppender. We’ll capture the log events and store them in a ConcurrentHashMap with the timestamp for the key.

Here’s how we create the MapAppender:

@Plugin(
  name = "MapAppender", 
  category = Core.CATEGORY_NAME, 
  elementType = Appender.ELEMENT_TYPE)
public class MapAppender extends AbstractAppender {

    private ConcurrentMap<String, LogEvent> eventMap = new ConcurrentHashMap<>();

    protected MapAppender(String name, Filter filter) {
        super(name, filter, null);
    }

    @PluginFactory
    public static MapAppender createAppender(
      @PluginAttribute("name") String name, 
      @PluginElement("Filter") Filter filter) {
        return new MapAppender(name, filter);
    }

    @Override
    public void append(LogEvent event) {
        eventMap.put(Instant.now().toString(), event);
    }
}

We’ve annotated the class with the @Plugin annotation which indicates that our appender is a plugin.

The name of the plugin signifies the name we would provide in the configuration to use this appender. The category specifies the category under which we place the plugin, and the elementType is appender.

We also need a factory method that will create the appender. Our createAppender method serves this purpose and is annotated with the @PluginFactory annotation.

Here, we initialize our appender by calling the protected constructor and we pass the layout as null as we are not going to provide any layout in the config file and we expect the framework to resolve default layout.

Next, we’ve overridden the append method which has the actual logic of handling the LogEvent. In our case, the append method puts the LogEvent into our eventMap. 

4. Configuration

Now that we have our MapAppender in place, we need a log4j2.xml configuration file to use this appender for our logging.

Here’s how we define the configuration section in our log4j2.xml file:

<Configuration xmlns:xi="http://www.w3.org/2001/XInclude" packages="com.baeldung" status="WARN">

Note that the packages attribute should reference the package that contains your custom appender.

Next, in the Appenders section, we define our appender. Here is how we add our custom appender to the list of appenders in the configuration:

<MapAppender name="MapAppender" />

The last part is to actually use the appender in our Loggers section. For our implementation, we use MapAppender as a root logger and define it in the root section.

Here’s how it’s done:

<Root level="DEBUG">
    <AppenderRef ref="MapAppender" />
</Root>
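
Putting these fragments together, a minimal log4j2.xml for our example might look like this (a sketch assembled from the snippets above; the packages value assumes the appender lives under com.baeldung):

<Configuration xmlns:xi="http://www.w3.org/2001/XInclude" packages="com.baeldung" status="WARN">
    <Appenders>
        <MapAppender name="MapAppender" />
    </Appenders>
    <Loggers>
        <Root level="DEBUG">
            <AppenderRef ref="MapAppender" />
        </Root>
    </Loggers>
</Configuration>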

5. Error Handling

To handle errors while logging an event, we can use the error method inherited from AbstractAppender.

For example, if we don’t want to log events whose level is less specific than WARN, we can report an error through AbstractAppender and skip the event. Here’s how it’s done in our class:

public void append(LogEvent event) {
    if (event.getLevel().isLessSpecificThan(Level.WARN)) {
        error("Unable to log less than WARN level.");
        return;
    }
    eventMap.put(Instant.now().toString(), event);
}

Observe how our append method has changed: we now check whether the event’s level is less specific than WARN and, if so, log an error and return early.

6. Conclusion

In this article, we’ve seen how to implement a custom appender for Log4j2.

While there are many in-built ways of logging our data by using Log4j2’s provided appenders, we also have tools in this framework that enable us to create our own appender as per our application needs.

As usual, the example can be found over on GitHub.

MQTT Client in Java


1. Overview

In this tutorial, we’ll see how we can add MQTT messaging in a Java project using the libraries provided by the Eclipse Paho project.

2. MQTT Primer

MQTT (MQ Telemetry Transport) is a messaging protocol that was created to address the need for a simple and lightweight method to transfer data to/from low-powered devices, such as those used in industrial applications.

With the increased popularity of IoT (Internet of Things) devices, MQTT has seen an increased use, leading to its standardization by OASIS and ISO.

The protocol supports a single messaging pattern, namely the Publish-Subscribe pattern: each message sent by a client contains an associated “topic” which is used by the broker to route it to subscribed clients. Topic names can be simple strings like “oiltemp” or path-like strings such as “motor/1/rpm”.

In order to receive messages, a client subscribes to one or more topics using their exact names or a string containing one of the supported wildcards (“#” for multi-level topics and “+” for single-level ones).
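
For example, assuming an already connected IMqttClient named client (we’ll see how to set one up in section 4), a wildcard subscription might look like this:

// "motor/+/rpm" matches "motor/1/rpm" and "motor/2/rpm",
// but not "motor/1/rpm/raw" - that would require "motor/#"
client.subscribe("motor/+/rpm", (topic, msg) -> {
    System.out.println(topic + ": " + new String(msg.getPayload()));
});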

3. Project Setup

In order to include the Paho library in a Maven project, we have to add the following dependency:

<dependency>
  <groupId>org.eclipse.paho</groupId>
  <artifactId>org.eclipse.paho.client.mqttv3</artifactId>
  <version>1.2.0</version>
</dependency>

The latest version of the Eclipse Paho Java library module can be downloaded from Maven Central.

4. Client Setup

When using the Paho library, the first thing we need to do in order to send and/or receive messages from an MQTT broker is to obtain an implementation of the IMqttClient interface. This interface contains all the methods an application requires in order to establish a connection to the server and to send and receive messages.

Paho comes out of the box with two implementations of this interface, an asynchronous one (MqttAsyncClient) and a synchronous one (MqttClient). In our case, we’ll focus on the synchronous version, which has simpler semantics.

The setup itself is a two-step process: we first create an instance of the MqttClient class and then we connect it to our server. The following subsections detail those steps.

4.1. Creating a New IMqttClient Instance

The following code snippet shows how to create a new IMqttClient synchronous instance:

String publisherId = UUID.randomUUID().toString();
IMqttClient publisher = new MqttClient("tcp://iot.eclipse.org:1883",publisherId);

In this case, we’re using the simplest constructor available, which takes the endpoint address of our MQTT broker and a client identifier, which uniquely identifies our client.

In our case, we used a random UUID, so a new client identifier will be generated on every run.

Paho also provides additional constructors that we can use in order to customize the persistence mechanism used to store unacknowledged messages and/or the ScheduledExecutorService used to run background tasks required by the protocol engine implementation.

The server endpoint we’re using is a public MQTT broker hosted by the Paho project, which allows anyone with an internet connection to test clients without the need of any authentication.

4.2. Connecting to the Server

Our newly created MqttClient instance is not yet connected to the server. We connect it by calling its connect() method, optionally passing a MqttConnectOptions instance that allows us to customize some aspects of the protocol.

In particular, we can use those options to pass additional information such as security credentials, session recovery mode, reconnection mode and so on.

The MqttConnectOptions class exposes those options as simple properties that we can set using normal setter methods. We only need to set the properties required for our scenario – the remaining ones will assume default values.

The code used to establish a connection to the server typically looks like this:

MqttConnectOptions options = new MqttConnectOptions();
options.setAutomaticReconnect(true);
options.setCleanSession(true);
options.setConnectionTimeout(10);
publisher.connect(options);

Here, we define our connection options so that:

  • The library will automatically try to reconnect to the server in the event of a network failure
  • It will discard unsent messages from a previous run
  • Connection timeout is set to 10 seconds

5. Sending Messages

Sending messages using an already connected MqttClient is very straightforward. We use one of the publish() method variants to send the payload, which is always a byte array, to a given topic, using one of the following quality-of-service options:

  • 0 – “at most once” semantics, also known as “fire-and-forget”. Use this option when message loss is acceptable, as it does not require any kind of acknowledgment or persistence
  • 1 – “at least once” semantics. Use this option when message loss is not acceptable and your subscribers can handle duplicates
  • 2 – “exactly once” semantics. Use this option when message loss is not acceptable and your subscribers cannot handle duplicates
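
For example, assuming the connected publisher client from the previous section, a minimal publish call looks roughly like this (the topic and payload are our own illustration):

MqttMessage message = new MqttMessage("85.2".getBytes());
message.setQos(1); // at-least-once delivery
publisher.publish("oiltemp", message);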

In our sample project, the EngineTemperatureSensor class plays the role of a mock sensor that produces a new temperature reading every time we invoke its call() method.

This class implements the Callable interface so we can easily use it with one of the ExecutorService implementations available in the java.util.concurrent package:

public class EngineTemperatureSensor implements Callable<Void> {

    // ... private members omitted
    
    public EngineTemperatureSensor(IMqttClient client) {
        this.client = client;
    }

    @Override
    public Void call() throws Exception {        
        if ( !client.isConnected()) {
            return null;
        }           
        MqttMessage msg = readEngineTemp();
        msg.setQos(0);
        msg.setRetained(true);
        client.publish(TOPIC,msg);        
        return null;        
    }

    private MqttMessage readEngineTemp() {             
        double temp =  80 + rnd.nextDouble() * 20.0;        
        byte[] payload = String.format("T:%04.2f",temp)
          .getBytes();        
        return new MqttMessage(payload);
    }
}

The MqttMessage encapsulates the payload itself, the requested Quality-of-Service and also the retained flag for the message. This flag indicates to the broker that it should retain this message until consumed by a subscriber.

We can use this feature to implement a “last known good” behavior, so when a new subscriber connects to the server, it will receive the retained message right away.
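
To actually produce a stream of readings, we can run the sensor on a fixed schedule. A sketch, assuming the publisher client from section 4 is already connected:

EngineTemperatureSensor sensor = new EngineTemperatureSensor(publisher);
ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
scheduler.scheduleAtFixedRate(() -> {
    try {
        sensor.call(); // publishes one temperature reading
    } catch (Exception ex) {
        // log the failure and keep the schedule running
    }
}, 0, 1, TimeUnit.SECONDS);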

6. Receiving Messages

In order to receive messages from the MQTT broker, we need to use one of the subscribe() method variants, which allow us to specify:

  • One or more topic filters for messages we want to receive
  • The associated QoS
  • The callback handler to process received messages

In the following example, we show how to add a message listener to an existing IMqttClient instance to receive messages from a given topic. We use a CountDownLatch as a synchronization mechanism between our callback and the main execution thread, decrementing it every time a new message arrives.

In the sample code, we’ve used a different IMqttClient instance to receive messages. We did it just to make it clearer which client does what, but this is not a Paho limitation – if you want, you can use the same client for publishing and receiving messages:

CountDownLatch receivedSignal = new CountDownLatch(10);
subscriber.subscribe(EngineTemperatureSensor.TOPIC, (topic, msg) -> {
    byte[] payload = msg.getPayload();
    // ... payload handling omitted
    receivedSignal.countDown();
});    
receivedSignal.await(1, TimeUnit.MINUTES);

The subscribe() variant used above takes an IMqttMessageListener instance as its second argument.

In our case, we use a simple lambda function that processes the payload and decrements a counter. If not enough messages arrive in the specified time window (1 minute), the await() method will throw an exception.

When using Paho, we don’t need to explicitly acknowledge message receipt. If the callback returns normally, Paho assumes a successful consumption and sends an acknowledgment to the server.

If the callback throws an Exception, the client will be shut down. Please note that this will result in the loss of any messages sent with QoS level 0.

Messages sent with QoS level 1 or 2 will be resent by the server once the client is reconnected and subscribes to the topic again.

7. Conclusion

In this article, we demonstrated how we can add support for the MQTT protocol in our Java applications using the library provided by the Eclipse Paho project.

This library handles all the low-level protocol details, allowing us to focus on other aspects of our solution, while leaving ample room to customize important aspects of its internal features, such as message persistence.

The code shown in this article is available over on GitHub.

Sample Application with Spring Boot and Vaadin


1. Overview

Vaadin is a server-side Java framework for creating web user interfaces.

In this tutorial, we’ll explore how to use a Vaadin based UI on a Spring Boot based backend. For an introduction to Vaadin refer to this tutorial.

2. Setup

Let’s start by adding Maven dependencies to a standard Spring Boot application:

<dependency>
    <groupId>com.vaadin</groupId>
    <artifactId>vaadin-spring-boot-starter</artifactId>
</dependency>

Vaadin is also a dependency recognized by Spring Initializr.

This tutorial uses a newer version of Vaadin than the default one brought in by the starter module. To use the newer version, simply define the Vaadin Bill of Materials (BOM) like this:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>com.vaadin</groupId>
            <artifactId>vaadin-bom</artifactId>
            <version>10.0.1</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

3. Backend Service

We’ll use an Employee entity with firstName and lastName properties to perform CRUD operations on it:

@Entity
public class Employee {

    @Id
    @GeneratedValue
    private Long id;

    private String firstName;
    private String lastName;
}

Here’s the simple, corresponding Spring Data repository – to manage the CRUD operations:

public interface EmployeeRepository extends JpaRepository<Employee, Long> {
    List<Employee> findByLastNameStartsWithIgnoreCase(String lastName);
}

We declare the query method findByLastNameStartsWithIgnoreCase on the EmployeeRepository interface. It returns the list of Employees whose last name starts with the given lastName.

Let’s also pre-populate the DB with a few sample Employees:

@Bean
public CommandLineRunner loadData(EmployeeRepository repository) {
    return (args) -> {
        repository.save(new Employee("Bill", "Gates"));
        repository.save(new Employee("Mark", "Zuckerberg"));
        repository.save(new Employee("Sundar", "Pichai"));
        repository.save(new Employee("Jeff", "Bezos"));
    };
}

4. Vaadin UI

4.1. MainView Class

The MainView class is the entry point for Vaadin’s UI logic. Annotation @Route tells Spring Boot to automatically pick it up and show at the root of the web app:

@Route
public class MainView extends VerticalLayout {
    private EmployeeRepository employeeRepository;
    private EmployeeEditor editor;
    Grid<Employee> grid;
    TextField filter;
    private Button addNewBtn;
}

We can customize the URL where the view is shown by giving a parameter to the @Route annotation:

@Route(value="myhome")

The class uses the following UI components displayed on the page:

  • EmployeeEditor editor – the Employee form used to provide employee information to create and edit
  • Grid<Employee> grid – a grid to display the list of Employees
  • TextField filter – a text field to enter the last name by which the grid will be filtered
  • Button addNewBtn – a button to add a new Employee; it displays the EmployeeEditor editor

It internally uses the employeeRepository to perform the CRUD operations.

4.2. Wiring the Components Together

MainView extends VerticalLayout. VerticalLayout is a component container, which shows the subcomponents in the order of their addition (vertically).

Next, we initialize and add the components.

We provide a label to the button with a + icon.

this.grid = new Grid<>(Employee.class);
this.filter = new TextField();
this.addNewBtn = new Button("New employee", VaadinIcon.PLUS.create());

We use a HorizontalLayout to arrange the filter text field and the button horizontally. Then we add this layout, the grid, and the editor to the parent vertical layout:

HorizontalLayout actions = new HorizontalLayout(filter, addNewBtn);
add(actions, grid, editor);

We provide the grid height and column names. We also add help text to the text field:

grid.setHeight("200px");
grid.setColumns("id", "firstName", "lastName");
grid.getColumnByKey("id").setWidth("50px").setFlexGrow(0);

filter.setPlaceholder("Filter by last name");

On application startup, the UI will look like this:

[application screenshot]

4.3. Adding Logic to Components

We’ll set ValueChangeMode.EAGER to the filter text field. This syncs the value to the server each time it’s changed on the client.

We also set a listener for the value change event, which returns the filtered list of employees based on the text provided in the filter:

filter.setValueChangeMode(ValueChangeMode.EAGER);
filter.addValueChangeListener(e -> listEmployees(e.getValue()));

On selecting a row within the grid, we show the Employee form, allowing the user to edit the first name and last name:

grid.asSingleSelect().addValueChangeListener(e -> {
    editor.editEmployee(e.getValue());
});

On clicking the add new employee button, we show a blank Employee form:

addNewBtn.addClickListener(e -> editor.editEmployee(new Employee("", "")));

Finally, we listen to the changes made by the editor and refresh the grid with data from the backend:

editor.setChangeHandler(() -> {
    editor.setVisible(false);
    listEmployees(filter.getValue());
});

The listEmployees function gets the filtered list of Employees and updates the grid:

void listEmployees(String filterText) {
    if (StringUtils.isEmpty(filterText)) {
        grid.setItems(employeeRepository.findAll());
    } else {
        grid.setItems(employeeRepository.findByLastNameStartsWithIgnoreCase(filterText));
    }
}

4.4. Building the Form

We’ll use a simple form for the user to add/edit an employee:

@SpringComponent
@UIScope
public class EmployeeEditor extends VerticalLayout implements KeyNotifier {

    private EmployeeRepository repository;
    private Employee employee;

    TextField firstName = new TextField("First name");
    TextField lastName = new TextField("Last name");

    Button save = new Button("Save", VaadinIcon.CHECK.create());
    Button cancel = new Button("Cancel");
    Button delete = new Button("Delete", VaadinIcon.TRASH.create());

    HorizontalLayout actions = new HorizontalLayout(save, cancel, delete);
    Binder<Employee> binder = new Binder<>(Employee.class);
    private ChangeHandler changeHandler;
}

@SpringComponent is just an alias for Spring’s @Component annotation, used to avoid conflicts with Vaadin’s Component class.

The @UIScope binds the bean to the current Vaadin UI.

The currently edited Employee is stored in the employee member variable. We capture the Employee properties through the firstName and lastName text fields.

The form has three buttons – save, cancel, and delete.

Once all the components are wired together, the form looks as below when a row is selected:

[employee editor screenshot]

We use a Binder which binds the form fields with the Employee properties using the naming convention:

binder.bindInstanceFields(this);
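
The editEmployee() method referenced from MainView could then look roughly like this (a sketch; the full implementation is in the repository linked at the end of the article):

public final void editEmployee(Employee e) {
    if (e == null) {
        setVisible(false);
        return;
    }
    // re-read a fresh instance for existing employees, keep the new one otherwise
    employee = (e.getId() != null)
      ? repository.findById(e.getId()).orElse(e)
      : e;

    binder.setBean(employee); // bind the form fields to the selected employee
    setVisible(true);
    firstName.focus();
}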

We call the appropriate EmployeeRepository method based on the user operations:

void delete() {
    repository.delete(employee);
    changeHandler.onChange();
}

void save() {
    repository.save(employee);
    changeHandler.onChange();
}
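
The ChangeHandler used above is just a small callback interface that lets MainView react to the editor’s changes. A minimal sketch of what it might look like:

@FunctionalInterface
public interface ChangeHandler {
    void onChange();
}

public void setChangeHandler(ChangeHandler h) {
    // MainView hooks the grid refresh into save/delete through this callback
    this.changeHandler = h;
}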

5. Conclusion

In this article, we wrote a full-featured CRUD UI application using Spring Boot and Spring Data JPA for persistence.

As usual, the code is available over on GitHub.

Java Null-Safe Streams from Collections


1. Overview

In this tutorial, we’ll see how to create null-safe streams from Java collections.

To start with, some familiarity with Java 8’s Method References, Lambda Expressions, Optional and Stream API is required to fully understand this material.

If you are unfamiliar with any of these topics, kindly take a look at our previous articles first: New Features in Java 8, Guide To Java 8 Optional and Introduction to Java 8 Streams.

2. Maven Dependency

Before we begin, there’s one Maven dependency that we’re going to need for certain scenarios:

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-collections4</artifactId>
    <version>4.2</version>
</dependency>

The commons-collections4 library can be downloaded from Maven Central.

3. Creating Streams from Collections

The basic approach to creating a Stream from any type of Collection is to call the stream() or parallelStream() methods on the collection depending on the type of stream that is required:

Collection<String> collection = Arrays.asList("a", "b", "c");
Stream<String> streamOfCollection = collection.stream();

Our collection will most likely come from an external source at some point; when creating streams from such collections, we’ll probably end up with a method similar to this one:

public Stream<String> collectionAsStream(Collection<String> collection) {
    return collection.stream();
}

This can cause some problems. When the provided collection points to a null reference, the code will throw a NullPointerException at runtime.

The following section addresses how we can protect against this.

4. Making Created Collection Streams Null-Safe

4.1. Add Checks to Prevent Null Dereferences

To prevent unintended null pointer exceptions, we can opt to add checks to prevent null references when creating streams from collections:

Stream<String> collectionAsStream(Collection<String> collection) {
    return collection == null 
      ? Stream.empty() 
      : collection.stream();
}

This method, however, has a couple of issues.

First, the null check gets in the way of the business logic, decreasing the overall readability of the program.

Second, the use of null to represent the absence of a value is considered a wrong approach post-Java SE 8: there is a better way to model the absence and presence of a value.

It’s important to keep in mind that an empty Collection isn’t the same as a null Collection: the first indicates that our query has no results or elements to show, while the second suggests that some kind of error happened during the process.

4.2. Use emptyIfNull Method from CollectionUtils Library

We can opt to use Apache Commons’ CollectionUtils library to make sure our stream is null safe. This library provides an emptyIfNull method which returns an immutable empty collection given a null collection as an argument, or the collection itself otherwise:

public Stream<String> collectionAsStream(Collection<String> collection) {
    return emptyIfNull(collection).stream();
}

This is a very simple strategy to adopt. However, it depends on an external library. If a software development policy restricts the use of such a library, then this solution is rendered null and void.

4.3. Use Java 8’s Optional

Java SE 8’s Optional is a single-value container that either contains a value or doesn’t. Where a value is missing, the Optional container is said to be empty.

Using Optional can arguably be considered the best overall strategy for creating a null-safe stream from a collection.

Let’s see how we can use it followed by a quick discussion below:

public Stream<String> collectionToStream(Collection<String> collection) {
    return Optional.ofNullable(collection)
      .map(Collection::stream)
      .orElseGet(Stream::empty);
}
  • Optional.ofNullable(collection) creates an Optional object from the passed-in collection. An empty Optional object is created if the collection is null.
  • map(Collection::stream) extracts the value contained in the Optional object as an argument to the map method (Collection.stream())
  • orElseGet(Stream::empty) returns the fallback value in the event that the Optional object is empty, i.e., the passed-in collection is null.

As a result, we proactively protect our code against unintended null pointer exceptions.

4.4. Use Java 9’s Stream OfNullable

Looking back at our ternary example from section 4.1, and considering that individual elements, rather than the whole Collection, could be null, we have the ofNullable method in the Stream class (added in Java 9) at our disposal.

We can transform the above sample to:

Stream<String> collectionAsStream(Collection<String> collection) {  
  return collection.stream().flatMap(s -> Stream.ofNullable(s));
}
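
To illustrate, a quick usage sketch showing how null elements are silently dropped:

List<String> letters = Arrays.asList("a", null, "b");
List<String> result = letters.stream()
  .flatMap(Stream::ofNullable)
  .collect(Collectors.toList()); // ["a", "b"] - the null element is skipped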

5. Conclusion

In this article, we briefly revisited how to create a stream from a given collection. We then proceeded to explore the three key strategies for making sure the created stream is null-safe when created from a collection.

Finally, we pointed out the weakness of using each strategy where relevant.

As usual, the full source code that accompanies the article is available over on GitHub.

java.util.Date vs java.sql.Date


1. Overview

In this tutorial, we’re going to compare two date classes: java.util.Date and java.sql.Date.

Once we complete the comparison, it should be clear which one to use and why.

2. java.util.Date

The java.util.Date class represents a particular moment in time, with millisecond precision since the 1st of January 1970 00:00:00 GMT (the epoch time). The class is used to keep coordinated universal time (UTC).

We can initialize it in two ways.

By calling the constructor:

Date date = new Date();

which will create a new date object with time set to the current time, measured to the nearest millisecond.

Or by passing a number of milliseconds since the epoch:

long timestamp = 1532516399000L; // 25 July 2018 10:59:59 UTC
Date date = new Date(timestamp);

Let’s note that other constructors, present prior to Java 8, are deprecated now.

However, Date has a number of issues and overall its usage isn’t recommended anymore.

It’s mutable. Once we initialize it, we can change its internal value. For example, we can call the setTime method:

date.setTime(0); // 01 January 1970 00:00:00

To learn more about the advantages of immutable objects, check out this article: Immutable Objects in Java.

It also doesn’t handle all dates very well. Technically, it should reflect coordinated universal time (UTC). However, that depends on the operating system of the host environment.

Most modern operating systems use 1 day = 24h x 60m x 60s = 86400 seconds, which, as we can see, doesn’t take the “leap second” into account.

With the introduction of Java 8, the java.time package should be used instead. Prior to Java 8, an alternative solution was available – Joda-Time.

3. java.sql.Date

The java.sql.Date extends java.util.Date class.

Its main purpose is to represent SQL DATE, which keeps years, months and days. No time data is kept.

In fact, the date is stored as milliseconds since the 1st of January 1970 00:00:00 GMT and the time part is normalized, i.e. set to zero.

Basically, it’s a wrapper around java.util.Date that handles SQL-specific requirements. java.sql.Date should be used only when dealing with databases.
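
To illustrate, here’s a quick sketch of the typical conversions, assuming Java 8+ where java.sql.Date interoperates with java.time:

// java.time <-> java.sql.Date
java.sql.Date sqlDate = java.sql.Date.valueOf(LocalDate.of(2018, 7, 25));
LocalDate localDate = sqlDate.toLocalDate();

// legacy java.util.Date <-> java.sql.Date
java.util.Date utilDate = new java.util.Date(sqlDate.getTime());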

However, as java.sql.Date doesn’t hold timezone information, the timezone conversion between our local environment and database server depends on an implementation of JDBC driver. This adds another level of complexity.

Finally, let’s note, in order to support other SQL data types: SQL TIME and SQL TIMESTAMP, two other java.sql classes are available: Time and Timestamp.

The latter, even though it extends java.util.Date, supports nanoseconds.

4. Conclusion

The java.util.Date class stores a date-time value as milliseconds since the epoch, while java.sql.Date stores a date-only value and is commonly used in JDBC.

Handling dates is tricky. We need to keep special cases in mind: leap seconds, different timezones, and so on. When dealing with JDBC, we can use java.sql.Date, with caution.

If we’re going to use java.util.Date, we need to remember its shortcomings. And if we’re on Java 8, it’s better not to use java.util.Date at all.


A Simple E-Commerce Implementation with Spring


1. Overview of our E-commerce Application

In this tutorial, we’ll implement a simple e-commerce application. We’ll develop an API using Spring Boot and a client application that will consume the API using Angular.

Basically, the user will be able to add/remove products from a product list to/from a shopping cart and to place an order.

2. Backend Part

To develop the API, we’ll use the latest version of Spring Boot. We’ll also use JPA and an H2 database for the persistence side of things.

To learn more about Spring Boot, you could check out our Spring Boot series of articles and if you’d like to get familiar with building a REST API, please check out another series.

2.1. Maven Dependencies

Let’s prepare our project and import the required dependencies into our pom.xml.

We’ll need some core Spring Boot dependencies:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
    <version>2.0.4.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <version>2.0.4.RELEASE</version>
</dependency>

Then, the H2 database:

<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <version>1.4.197</version>
    <scope>runtime</scope>
</dependency>

And finally – the Jackson library:

<dependency>
    <groupId>com.fasterxml.jackson.datatype</groupId>
    <artifactId>jackson-datatype-jsr310</artifactId>
    <version>2.9.6</version>
</dependency>

We’ve used Spring Initializr to quickly set up the project with needed dependencies.

2.2. Setting Up the Database

Although we could use the in-memory H2 database out of the box with Spring Boot, we’ll still make some adjustments before we start developing our API.

We’ll enable the H2 console in our application.properties file so we can actually check the state of our database and see if everything is going as we’d expect.

Also, it could be useful to log SQL queries to the console while developing:

spring.datasource.name=ecommercedb
spring.jpa.show-sql=true

#H2 settings
spring.h2.console.enabled=true
spring.h2.console.path=/h2-console

After adding these settings, we’ll be able to access the database at http://localhost:8080/h2-console using jdbc:h2:mem:ecommercedb as JDBC URL and user sa with no password.

2.3. The Project Structure

The project will be organized into several standard packages, with the Angular application placed in the frontend folder:

├───pom.xml            
├───src
    ├───main
    │   ├───frontend
    │   ├───java
    │   │   └───com
    │   │       └───baeldung
    │   │           └───ecommerce
    │   │               │   EcommerceApplication.java
    │   │               ├───controller 
    │   │               ├───dto  
    │   │               ├───exception
    │   │               ├───model
    │   │               ├───repository
    │   │               └───service
    │   │                       
    │   └───resources
    │       │   application.properties
    │       ├───static
    │       └───templates
    └───test
        └───java
            └───com
                └───baeldung
                    └───ecommerce
                            EcommerceApplicationIntegrationTest.java

We should note that all the interfaces in the repository package are simple and extend Spring Data’s CrudRepository, so we’ll omit them here.

2.4. Exception Handling

We’ll need an exception handler for our API in order to deal properly with any exceptions.

You can find more details about the topic in our Error Handling for REST with Spring and Custom Error Message Handling for REST API articles.

Here, we keep a focus on ConstraintViolationException and our custom ResourceNotFoundException:

@RestControllerAdvice
public class ApiExceptionHandler {

    @SuppressWarnings("rawtypes")
    @ExceptionHandler(ConstraintViolationException.class)
    public ResponseEntity<ErrorResponse> handle(ConstraintViolationException e) {
        ErrorResponse errors = new ErrorResponse();
        for (ConstraintViolation violation : e.getConstraintViolations()) {
            ErrorItem error = new ErrorItem();
            error.setCode(violation.getMessageTemplate());
            error.setMessage(violation.getMessage());
            errors.addError(error);
        }
        return new ResponseEntity<>(errors, HttpStatus.BAD_REQUEST);
    }

    @SuppressWarnings("rawtypes")
    @ExceptionHandler(ResourceNotFoundException.class)
    public ResponseEntity<ErrorItem> handle(ResourceNotFoundException e) {
        ErrorItem error = new ErrorItem();
        error.setMessage(e.getMessage());

        return new ResponseEntity<>(error, HttpStatus.NOT_FOUND);
    }
}

2.5. Products

If you need more background on persistence in Spring, there are a lot of useful articles in our Spring Persistence series.

Our application will support only reading products from the database, so we need to add some first.

Let’s create a simple Product class:

@Entity
public class Product {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @NotNull(message = "Product name is required.")
    @Basic(optional = false)
    private String name;

    private Double price;

    private String pictureUrl;

    // all arguments constructor
    // standard getters and setters
}

Although the user won’t have the opportunity to add products through the application, we’ll support saving a product in the database in order to prepopulate the product list.

A simple service will be sufficient for our needs:

@Service
@Transactional
public class ProductServiceImpl implements ProductService {

    // productRepository constructor injection

    @Override
    public Iterable<Product> getAllProducts() {
        return productRepository.findAll();
    }

    @Override
    public Product getProduct(long id) {
        return productRepository
          .findById(id)
          .orElseThrow(() -> new ResourceNotFoundException("Product not found"));
    }

    @Override
    public Product save(Product product) {
        return productRepository.save(product);
    }
}

A simple controller will handle requests for retrieving the list of products:

@RestController
@RequestMapping("/api/products")
public class ProductController {

    // productService constructor injection

    @GetMapping(value = { "", "/" })
    public @NotNull Iterable<Product> getProducts() {
        return productService.getAllProducts();
    }
}

All we need now in order to expose the product list to the user is to actually put some products in the database. Therefore, we’ll make use of the CommandLineRunner class to create a bean in our main application class.

This way, we’ll insert products into the database during the application startup:

@Bean
CommandLineRunner runner(ProductService productService) {
    return args -> {
        productService.save(...);
        // more products
    };
}

If we now start our application, we can retrieve the product list via http://localhost:8080/api/products. Also, if we go to http://localhost:8080/h2-console and log in, we’ll see that there is a table named PRODUCT with the products we’ve just added.

2.6. Orders

On the API side, we need to enable POST requests to save the orders that the end-user will make.

Let’s first create the model:

@Entity
@Table(name = "orders")
public class Order {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @JsonFormat(pattern = "dd/MM/yyyy")
    private LocalDate dateCreated;

    private String status;

    @JsonManagedReference
    @OneToMany(mappedBy = "pk.order")
    @Valid
    private List<OrderProduct> orderProducts = new ArrayList<>();

    @Transient
    public Double getTotalOrderPrice() {
        double sum = 0D;
        List<OrderProduct> orderProducts = getOrderProducts();
        for (OrderProduct op : orderProducts) {
            sum += op.getTotalPrice();
        }
        return sum;
    }

    @Transient
    public int getNumberOfProducts() {
        return this.orderProducts.size();
    }

    // standard getters and setters
}

We should note a few things here. Certainly one of the most noteworthy things is to remember to change the default name of our table. Since we named the class Order, by default the table named ORDER should be created. But because that is a reserved SQL word, we added @Table(name = “orders”) to avoid conflicts.

Furthermore, we have two @Transient methods that will return a total amount for that order and the number of products in it. Both represent calculated data, so there is no need to store it in the database.

Finally, we have a @OneToMany relation representing the order’s details. For that we need another entity class:

@Entity
public class OrderProduct {

    @EmbeddedId
    @JsonIgnore
    private OrderProductPK pk;

    @Column(nullable = false)
    private Integer quantity;

    // default constructor

    public OrderProduct(Order order, Product product, Integer quantity) {
        pk = new OrderProductPK();
        pk.setOrder(order);
        pk.setProduct(product);
        this.quantity = quantity;
    }

    @Transient
    public Product getProduct() {
        return this.pk.getProduct();
    }

    @Transient
    public Double getTotalPrice() {
        return getProduct().getPrice() * getQuantity();
    }

    // standard getters and setters

    // hashcode() and equals() methods
}

We have a composite primary key here:

@Embeddable
public class OrderProductPK implements Serializable {

    @JsonBackReference
    @ManyToOne(optional = false, fetch = FetchType.LAZY)
    @JoinColumn(name = "order_id")
    private Order order;

    @ManyToOne(optional = false, fetch = FetchType.LAZY)
    @JoinColumn(name = "product_id")
    private Product product;

    // standard getters and setters

    // hashcode() and equals() methods
}

Those classes are nothing too complicated, but we should note that in the OrderProduct class we put @JsonIgnore on the primary key. That’s because we don’t want to serialize the Order part of the primary key, since it’d be redundant.

We only need the Product to be displayed to the user, which is why we have the transient getProduct() method.

Next what we need is a simple service implementation:

@Service
@Transactional
public class OrderServiceImpl implements OrderService {

    // orderRepository constructor injection

    @Override
    public Iterable<Order> getAllOrders() {
        return this.orderRepository.findAll();
    }
	
    @Override
    public Order create(Order order) {
        order.setDateCreated(LocalDate.now());
        return this.orderRepository.save(order);
    }

    @Override
    public void update(Order order) {
        this.orderRepository.save(order);
    }
}

And a controller mapped to /api/orders to handle Order requests.

Most important is the create() method:

@PostMapping
public ResponseEntity<Order> create(@RequestBody OrderForm form) {
    List<OrderProductDto> formDtos = form.getProductOrders();
    validateProductsExistence(formDtos);
    // create order logic
    // populate order with products

    order.setOrderProducts(orderProducts);
    this.orderService.update(order);

    String uri = ServletUriComponentsBuilder
      .fromCurrentServletMapping()
      .path("/orders/{id}")
      .buildAndExpand(order.getId())
      .toString();
    HttpHeaders headers = new HttpHeaders();
    headers.add("Location", uri);

    return new ResponseEntity<>(order, headers, HttpStatus.CREATED);
}

First of all, we accept a list of products with their corresponding quantities. After that, we check if all products exist in the database and then create and save a new order. We’re keeping a reference to the newly created object so we can add order details to it.

Finally, we create a “Location” header.

The detailed implementation is in the repository – the link to it is mentioned at the end of this article.

3. Frontend

Now that we have our Spring Boot application built up, it’s time to move on to the Angular part of the project. To do so, we’ll first have to install Node.js with NPM and, after that, the Angular CLI, a command-line interface for Angular.

Both are really easy to install, as we can see in the official documentation.

3.1. Setting Up the Angular Project

As we mentioned, we’ll use Angular CLI to create our application. To keep things simple and have all in one place, we’ll keep our Angular application inside the /src/main/frontend folder.

To create it, we need to open a terminal (or command prompt) in the /src/main folder and run:

ng new frontend

This will create all the files and folders we need for our Angular application. In the package.json file, we can check which versions of our dependencies are installed. This tutorial is based on Angular v6.0.3, but older versions should do the job, at least version 4.3 and newer (the HttpClient we use here was introduced in Angular 4.3).

We should note that we’ll run all our commands from the /frontend folder unless stated differently.

This setup is enough to start the Angular application by running the ng serve command. By default, it runs on http://localhost:4200, and if we go there now we’ll see the base Angular application loaded.

3.2. Adding Bootstrap

Before we proceed with creating our own components, let’s first add Bootstrap to our project so we can make our pages look nice.

We need just a few things to achieve this. First, we need to run a command to install it:

npm install --save bootstrap

and then tell Angular to actually use it. For this, we need to open the file src/main/frontend/angular.json and add node_modules/bootstrap/dist/css/bootstrap.min.css under the “styles” property. And that’s it.

3.3. Components and Models

Before we start creating the components for our application, let’s first check out what the app will actually look like:

[application screenshot]

Now, we’ll create a base component, named ecommerce:

ng g c ecommerce

This will create our component inside the /frontend/src/app folder. To load it at application startup, we’ll include it into the app.component.html:

<div class="container">
    <app-ecommerce></app-ecommerce>
</div>

Next, we’ll create other components inside this base component:

ng g c /ecommerce/products
ng g c /ecommerce/orders
ng g c /ecommerce/shopping-cart

Certainly, we could’ve created all those folders and files manually if preferred, but in that case, we’d need to remember to register those components in our AppModule.

We’ll also need some models to easily manipulate our data:

export class Product {
    id: number;
    name: string;
    price: number;
    pictureUrl: string;

    // all arguments constructor
}
export class ProductOrder {
    product: Product;
    quantity: number;

    // all arguments constructor
}
export class ProductOrders {
    productOrders: ProductOrder[] = [];
}

The last model mentioned matches our OrderForm on the backend.

3.4. Base Component

At the top of our ecommerce component, we’ll put a navbar with the Home link on the right:

<nav class="navbar navbar-expand-lg navbar-dark bg-dark fixed-top">
    <div class="container">
        <a class="navbar-brand" href="#">Baeldung Ecommerce</a>
        <button class="navbar-toggler" type="button" data-toggle="collapse" 
          data-target="#navbarResponsive" aria-controls="navbarResponsive" 
          aria-expanded="false" aria-label="Toggle navigation" 
          (click)="toggleCollapsed()">
            <span class="navbar-toggler-icon"></span>
        </button>
        <div id="navbarResponsive" 
            [ngClass]="{'collapse': collapsed, 'navbar-collapse': true}">
            <ul class="navbar-nav ml-auto">
                <li class="nav-item active">
                    <a class="nav-link" href="#" (click)="reset()">Home
                        <span class="sr-only">(current)</span>
                    </a>
                </li>
            </ul>
        </div>
    </div>
</nav>

We’ll also load other components from here:

<div class="row">
    <div class="col-md-9">
        <app-products #productsC [hidden]="orderFinished"></app-products>
    </div>
    <div class="col-md-3">
        <app-shopping-cart (onOrderFinished)=finishOrder($event) #shoppingCartC 
          [hidden]="orderFinished"></app-shopping-cart>
    </div>
    <div class="col-md-6 offset-3">
        <app-orders #ordersC [hidden]="!orderFinished"></app-orders>
    </div>
</div>

Since we’re using the fixed navbar, we should keep in mind that, in order to see the content of our components, we need to add some CSS to app.component.css:

.container {
    padding-top: 65px;
}

Let’s check out the .ts file before we comment on the most important parts:

@Component({
    selector: 'app-ecommerce',
    templateUrl: './ecommerce.component.html',
    styleUrls: ['./ecommerce.component.css']
})
export class EcommerceComponent implements OnInit {
    private collapsed = true;
    orderFinished = false;

    @ViewChild('productsC')
    productsC: ProductsComponent;

    @ViewChild('shoppingCartC')
    shoppingCartC: ShoppingCartComponent;

    @ViewChild('ordersC')
    ordersC: OrdersComponent;

    toggleCollapsed(): void {
        this.collapsed = !this.collapsed;
    }

    finishOrder(orderFinished: boolean) {
        this.orderFinished = orderFinished;
    }

    reset() {
        this.orderFinished = false;
        this.productsC.reset();
        this.shoppingCartC.reset();
        this.ordersC.paid = false;
    }
}

As we can see, clicking the Home link resets the child components. Since the parent needs to access methods and fields inside the child components, we keep references to the children and use them inside the reset() method.

3.5. The Service

In order for sibling components to communicate with each other, and to retrieve/send data from/to our API, we'll need to create a service:

@Injectable()
export class EcommerceService {
    private productsUrl = "/api/products";
    private ordersUrl = "/api/orders";

    private productOrder: ProductOrder;
    private orders: ProductOrders = new ProductOrders();

    private productOrderSubject = new Subject();
    private ordersSubject = new Subject();
    private totalSubject = new Subject();

    private total: number;

    ProductOrderChanged = this.productOrderSubject.asObservable();
    OrdersChanged = this.ordersSubject.asObservable();
    TotalChanged = this.totalSubject.asObservable();

    constructor(private http: HttpClient) {
    }

    getAllProducts() {
        return this.http.get(this.productsUrl);
    }

    saveOrder(order: ProductOrders) {
        return this.http.post(this.ordersUrl, order);
    }

    // getters and setters for shared fields
}

As we can see, this is all relatively simple: we make GET and POST requests to communicate with the API, and we expose the data we need to share between components as observables so we can subscribe to it later on.

Nevertheless, we need to point out one thing regarding communication with the API. If we run the application now, we'd receive a 404 and retrieve no data. The reason is that, since we're using relative URLs, Angular will by default call http://localhost:4200/api/products, while our backend application is running on localhost:8080.

We could hardcode the URLs to localhost:8080, of course, but that's not something we want to do. Instead, when working with a different origin, we should create a file named proxy-conf.json in our /frontend folder:

{
    "/api": {
        "target": "http://localhost:8080",
        "secure": false
    }
}

And then we need to open package.json and change scripts.start property to match:

"scripts": {
    ...
    "start": "ng serve --proxy-config proxy-conf.json",
    ...
  }

Now we just need to remember to start the application with npm start instead of ng serve.

3.6. Products

In our ProductsComponent, we’ll inject the service we made earlier and load the product list from the API and transform it into the list of ProductOrders since we want to append a quantity field to every product:

export class ProductsComponent implements OnInit {
    productOrders: ProductOrder[] = [];
    products: Product[] = [];
    selectedProductOrder: ProductOrder;
    private shoppingCartOrders: ProductOrders;
    sub: Subscription;
    productSelected: boolean = false;

    constructor(private ecommerceService: EcommerceService) {}

    ngOnInit() {
        this.productOrders = [];
        this.loadProducts();
        this.loadOrders();
    }

    loadProducts() {
        this.ecommerceService.getAllProducts()
            .subscribe(
                (products: any[]) => {
                    this.products = products;
                    this.products.forEach(product => {
                        this.productOrders.push(new ProductOrder(product, 0));
                    })
                },
                (error) => console.log(error)
            );
    }

    loadOrders() {
        this.sub = this.ecommerceService.OrdersChanged.subscribe(() => {
            this.shoppingCartOrders = this.ecommerceService.ProductOrders;
        });
    }
}

We also need an option to add the product to the shopping cart or to remove one from it:

addToCart(order: ProductOrder) {
    this.ecommerceService.SelectedProductOrder = order;
    this.selectedProductOrder = this.ecommerceService.SelectedProductOrder;
    this.productSelected = true;
}

removeFromCart(productOrder: ProductOrder) {
    let index = this.getProductIndex(productOrder.product);
    if (index > -1) {
        this.shoppingCartOrders.productOrders.splice(
            this.getProductIndex(productOrder.product), 1);
    }
    this.ecommerceService.ProductOrders = this.shoppingCartOrders;
    this.shoppingCartOrders = this.ecommerceService.ProductOrders;
    this.productSelected = false;
}

Finally, we’ll create a reset() method we mentioned in Section 3.4:

reset() {
    this.productOrders = [];
    this.loadProducts();
    this.ecommerceService.ProductOrders.productOrders = [];
    this.loadOrders();
    this.productSelected = false;
}

We’ll iterate through the product list in our HTML file and display it to the user:

<div class="row card-deck">
    <div class="col-lg-4 col-md-6 mb-4" *ngFor="let order of productOrders">
        <div class="card text-center">
            <div class="card-header">
                <h4>{{order.product.name}}</h4>
            </div>
            <div class="card-body">
                <a href="#"><img class="card-img-top" src={{order.product.pictureUrl}} 
                    alt=""></a>
                <h5 class="card-title">${{order.product.price}}</h5>
                <div class="row">
                    <div class="col-4 padding-0" *ngIf="!isProductSelected(order.product)">
                        <input type="number" min="0" class="form-control" 
                            [(ngModel)]=order.quantity>
                    </div>
                    <div class="col-4 padding-0" *ngIf="!isProductSelected(order.product)">
                        <button class="btn btn-primary" (click)="addToCart(order)"
                                [disabled]="order.quantity <= 0">Add To Cart
                        </button>
                    </div>
                    <div class="col-12" *ngIf="isProductSelected(order.product)">
                        <button class="btn btn-primary btn-block"
                                (click)="removeFromCart(order)">Remove From Cart
                        </button>
                    </div>
                </div>
            </div>
        </div>
    </div>
</div>

We’ll also add a simple class to corresponding CSS file so everything could fit nicely:

.padding-0 {
    padding-right: 0;
    padding-left: 1px;
}

3.7. Shopping Cart

In the ShoppingCartComponent, we'll also inject the service. We'll use it to subscribe to changes in the ProductsComponent (to notice when a product is selected for the shopping cart), and then update the cart's content and recalculate the total cost accordingly:

export class ShoppingCartComponent implements OnInit, OnDestroy {
    orderFinished: boolean;
    orders: ProductOrders;
    total: number;
    sub: Subscription;

    @Output() onOrderFinished: EventEmitter<boolean>;

    constructor(private ecommerceService: EcommerceService) {
        this.total = 0;
        this.orderFinished = false;
        this.onOrderFinished = new EventEmitter<boolean>();
    }

    ngOnInit() {
        this.orders = new ProductOrders();
        this.loadCart();
        this.loadTotal();
    }

    loadTotal() {
        this.sub = this.ecommerceService.OrdersChanged.subscribe(() => {
            this.total = this.calculateTotal(this.orders.productOrders);
        });
    }

    loadCart() {
        this.sub = this.ecommerceService.ProductOrderChanged.subscribe(() => {
            let productOrder = this.ecommerceService.SelectedProductOrder;
            if (productOrder) {
                this.orders.productOrders.push(new ProductOrder(
                    productOrder.product, productOrder.quantity));
            }
            this.ecommerceService.ProductOrders = this.orders;
            this.orders = this.ecommerceService.ProductOrders;
            this.total = this.calculateTotal(this.orders.productOrders);
        });
    }

    ngOnDestroy() {
        this.sub.unsubscribe();
    }
}

From here, we send an event to the parent component when the order is finished and we need to go to the checkout. The reset() method is here as well:

finishOrder() {
    this.orderFinished = true;
    this.ecommerceService.Total = this.total;
    this.onOrderFinished.emit(this.orderFinished);
}

reset() {
    this.orderFinished = false;
    this.orders = new ProductOrders();
    this.orders.productOrders = []
    this.loadTotal();
    this.total = 0;
}

The HTML file is simple:

<div class="card text-white bg-danger mb-3" style="max-width: 18rem;">
    <div class="card-header text-center">Shopping Cart</div>
    <div class="card-body">
        <h5 class="card-title">Total: ${{total}}</h5>
        <hr>
        <h6 class="card-title">Items bought:</h6>

        <ul>
            <li *ngFor="let order of orders.productOrders">
                {{ order.product.name }} - {{ order.quantity}} pcs.
            </li>
        </ul>

        <button class="btn btn-light btn-block" (click)="finishOrder()"
             [disabled]="orders.productOrders.length == 0">Checkout
        </button>
    </div>
</div>

3.8. Orders

We’ll keep things as simple as we can and in the OrdersComponent simulate paying by setting the property to true and saving the order in the database. We can check that the orders are saved either via h2-console or by hitting http://localhost:8080/api/orders.

We need the EcommerceService here as well in order to retrieve the product list from the shopping cart and the total amount for our order:

export class OrdersComponent implements OnInit {
    orders: ProductOrders;
    total: number;
    paid: boolean;
    sub: Subscription;

    constructor(private ecommerceService: EcommerceService) {
        this.orders = this.ecommerceService.ProductOrders;
    }

    ngOnInit() {
        this.paid = false;
        this.sub = this.ecommerceService.OrdersChanged.subscribe(() => {
            this.orders = this.ecommerceService.ProductOrders;
        });
        this.loadTotal();
    }

    pay() {
        this.paid = true;
        this.ecommerceService.saveOrder(this.orders).subscribe();
    }
}

And finally, we need to display the order info to the user:

<h2 class="text-center">ORDER</h2>
<ul>
    <li *ngFor="let order of orders.productOrders">
        {{ order.product.name }} - ${{ order.product.price }} x {{ order.quantity}} pcs.
    </li>
</ul>
<h3 class="text-right">Total amount: ${{ total }}</h3>

<button class="btn btn-primary btn-block" (click)="pay()" *ngIf="!paid">Pay</button>
<div class="alert alert-success" role="alert" *ngIf="paid">
    <strong>Congratulation!</strong> You successfully made the order.
</div>

4. Merging Spring Boot and Angular Applications

We've finished developing both of our applications, and it's probably easier to develop them separately, as we did. But in production, it's much more convenient to have a single application, so let's now merge the two.

What we want to do here is build the Angular app, which invokes Webpack to bundle up all the assets, and push them into the /resources/static directory of the Spring Boot app. That way, we can just run the Spring Boot application to test everything, and then pack it all up and deploy it as one app.

To make this possible, we need to open package.json again and add some new scripts after scripts.build:

"postbuild": "npm run deploy",
"predeploy": "rimraf ../resources/static/ && mkdirp ../resources/static",
"deploy": "copyfiles -f dist/** ../resources/static",

We’re using some packages that we don’t have installed, so let’s install them:

npm install --save-dev rimraf
npm install --save-dev mkdirp
npm install --save-dev copyfiles

The rimraf command removes the old directory and mkdirp creates a fresh one (effectively cleaning it up), while copyfiles copies the files from the distribution folder (where Angular places everything) into our static folder.

Now we just need to run the npm run build command; this will trigger all those scripts, and the ultimate output will be our packaged application in the static folder.

Then we run our Spring Boot application on port 8080, access it there, and use the Angular application.

5. Conclusion

In this article, we created a simple e-commerce application. We created an API on the backend using Spring Boot, and then we consumed it in our frontend application made in Angular. We demonstrated how to create the components we need, make them communicate with each other, and retrieve/send data from/to the API.

Finally, we showed how to merge both applications into one, packaged web app inside the static folder.

As always, the complete project described in this article can be found over on GitHub.

Remove All Occurrences of a Specific Value from a List


1. Introduction

In Java, it’s straightforward to remove a specific value from a List using List.remove(). However, efficiently removing all occurrences of a value is much harder.

In this tutorial, we’ll see multiple solutions to this problem, describing the pros and cons.

For the sake of readability, we use a custom list(int…) method in the tests, which returns an ArrayList containing the elements we passed.
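
Since the helper itself isn't shown in the article, here's a minimal sketch of what it might look like (our assumption):

// builds a mutable ArrayList from the given values
static List<Integer> list(int... elements) {
    List<Integer> result = new ArrayList<>();
    for (int element : elements) {
        result.add(element);
    }
    return result;
}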

2. Using a while Loop

Since we know how to remove a single element, doing it repeatedly in a loop looks simple enough:

void removeAll(List<Integer> list, int element) {
    while (list.contains(element)) {
        list.remove(element);
    }
}

However, it doesn’t work as expected:

// given
List<Integer> list = list(1, 2, 3);
int valueToRemove = 1;

// when
assertThatThrownBy(() -> removeAll(list, valueToRemove))
  .isInstanceOf(IndexOutOfBoundsException.class);

The problem is in the 3rd line: we call List.remove(int), which treats its argument as the index, not the value we want to remove.

In the test above we always call list.remove(1), but the element’s index we want to remove is 0. Calling List.remove() shifts all elements after the removed one to smaller indices.

In this scenario, it means that we delete all elements, except the first.

When only the first element remains, index 1 becomes illegal; hence, we get the exception.

Note that we face this problem only if we call List.remove() with a primitive byte, short, char or int argument: when resolving the overloaded method, the compiler tries widening before autoboxing, so the primitive argument matches remove(int index) instead of remove(Object).
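
To make the two overloads concrete, here's a quick illustration of our own:

List<Integer> list = new ArrayList<>(Arrays.asList(5, 6, 7));
list.remove(1);                  // remove(int index): removes the element at index 1, leaving [5, 7]
list.remove(Integer.valueOf(5)); // remove(Object): removes the value 5, leaving [7]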

We can correct it by passing the value as Integer:

void removeAll(List<Integer> list, Integer element) {
    while (list.contains(element)) {
        list.remove(element);
    }
}

Now the code works as expected:

// given
List<Integer> list = list(1, 2, 3);
int valueToRemove = 1;

// when
removeAll(list, valueToRemove);

// then
assertThat(list).isEqualTo(list(2, 3));

Since List.contains() and List.remove() both have to find the first occurrence of the element, this code causes unnecessary element traversal.

We can do better if we store the index of the first occurrence:

void removeAll(List<Integer> list, Integer element) {
    int index;
    while ((index = list.indexOf(element)) >= 0) {
        list.remove(index);
    }
}

We can verify that it works:

// given
List<Integer> list = list(1, 2, 3);
int valueToRemove = 1;

// when
removeAll(list, valueToRemove);

// then
assertThat(list).isEqualTo(list(2, 3));

While these solutions produce short and clean code, they still have poor performance: because we don’t keep track of the progress, List.remove() has to find the first occurrence of the provided value to delete it.

Also, when we use an ArrayList, element shifting causes a lot of reference copying inside the backing array.

3. Removing Until the List Changes

List.remove(E element) has a feature we didn't mention yet: it returns a boolean value, which is true if the List changed because of the operation, that is, if it contained the element.

Note that List.remove(int index) returns the removed element rather than a boolean, because if the provided index is valid, the List always removes it. Otherwise, it throws IndexOutOfBoundsException.

With this, we can perform removals until the List changes:

void removeAll(List<Integer> list, Integer element) {
    while (list.remove(element));
}

It works as expected:

// given
List<Integer> list = list(1, 1, 2, 3);
int valueToRemove = 1;

// when
removeAll(list, valueToRemove);

// then
assertThat(list).isEqualTo(list(2, 3));

Despite being short, this implementation suffers from the same problems we described in the previous section.

4. Using a for Loop

We can keep track of our progress by traversing through the elements with a for loop and remove the current one if it matches:

void removeAll(List<Integer> list, int element) {
    for (int i = 0; i < list.size(); i++) {
        if (Objects.equals(element, list.get(i))) {
            list.remove(i);
        }
    }
}

It works as expected:

// given
List<Integer> list = list(1, 2, 3);
int valueToRemove = 1;

// when
removeAll(list, valueToRemove);

// then
assertThat(list).isEqualTo(list(2, 3));

However, if we try it with a different input, it provides an incorrect output:

// given
List<Integer> list = list(1, 1, 2, 3);
int valueToRemove = 1;

// when
removeAll(list, valueToRemove);

// then
assertThat(list).isEqualTo(list(1, 2, 3));

Let’s analyze how the code works, step-by-step:

  • i = 0
    • element and list.get(i) are both equal to 1 at line 3, so Java enters the body of the if statement,
    • we remove the element at index 0,
    • so list now contains 1, 2 and 3
  • i = 1
    • list.get(i) returns 2, because when we remove an element from a List, it shifts all subsequent elements to smaller indices

So we face this problem when there are two adjacent occurrences of the value we want to remove. To solve it, we should manage the loop variable ourselves.

Decreasing it when we remove the element:

void removeAll(List<Integer> list, int element) {
    for (int i = 0; i < list.size(); i++) {
        if (Objects.equals(element, list.get(i))) {
            list.remove(i);
            i--;
        }
    }
}

Increasing it only when we don’t remove the element:

void removeAll(List<Integer> list, int element) {
    for (int i = 0; i < list.size();) {
        if (Objects.equals(element, list.get(i))) {
            list.remove(i);
        } else {
            i++;
        }
    }
}

Note that in the latter, we removed the i++ statement from the for header and moved it into the else branch.

Both solutions work as expected:

// given
List<Integer> list = list(1, 1, 2, 3);
int valueToRemove = 1;

// when
removeAll(list, valueToRemove);

// then
assertThat(list).isEqualTo(list(2, 3));

This implementation seems right at first sight. However, it still has serious performance problems:

  • removing an element from an ArrayList shifts all items after it
  • accessing elements by index in a LinkedList means traversing the elements one by one until we find the index

5. Using a for-each Loop

Since Java 5, we can use the for-each loop to iterate over a List. Let's use it to remove elements:

void removeAll(List<Integer> list, int element) {
    for (Integer number : list) {
        if (Objects.equals(number, element)) {
            list.remove(number);
        }
    }
}

Note that we use Integer as the loop variable's type, so if the list contains null elements, we won't get a NullPointerException from unboxing.

Also, this way we invoke List.remove(E element), which expects the value we want to remove, not the index.

As clean as it looks, unfortunately, it doesn’t work:

// given
List<Integer> list = list(1, 1, 2, 3);
int valueToRemove = 1;

// when
assertThatThrownBy(() -> removeAll(list, valueToRemove))
  .isInstanceOf(ConcurrentModificationException.class);

The for-each loop uses Iterator to traverse through the elements. However, when we modify the List, the Iterator gets into an inconsistent state. Hence it throws ConcurrentModificationException.

The lesson is: we shouldn’t modify a List, while we’re accessing its elements in a for-each loop.

6. Using an Iterator

We can use the Iterator directly to traverse the List and modify it along the way:

void removeAll(List<Integer> list, int element) {
    for (Iterator<Integer> i = list.iterator(); i.hasNext();) {
        Integer number = i.next();
        if (Objects.equals(number, element)) {
            i.remove();
        }
    }
}

This way, the Iterator can track the state of the List (because it makes the modification). As a result, the code above works as expected:

// given
List<Integer> list = list(1, 1, 2, 3);
int valueToRemove = 1;

// when
removeAll(list, valueToRemove);

// then
assertThat(list).isEqualTo(list(2, 3));

Since every List class can provide its own Iterator implementation, we can safely assume that it implements element traversal and removal in the most efficient way possible.

However, using an ArrayList still means lots of element shifting. Also, the code above is slightly harder to read, because it differs from the standard for loop that most developers are familiar with.

7. Collecting

Until now, we've modified the original List object by removing the items we didn't need. Instead, we can create a new List and collect the items we want to keep:

List<Integer> removeAll(List<Integer> list, int element) {
    List<Integer> remainingElements = new ArrayList<>();
    for (Integer number : list) {
        if (!Objects.equals(number, element)) {
            remainingElements.add(number);
        }
    }
    return remainingElements;
}

Since we provide the result in a new List object, we have to return it from the method. Therefore, we need to use the method differently:

// given
List<Integer> list = list(1, 1, 2, 3);
int valueToRemove = 1;

// when
List<Integer> result = removeAll(list, valueToRemove);

// then
assertThat(result).isEqualTo(list(2, 3));

Note that now we can use the for-each loop, since we don't modify the List we're currently iterating through.

Because there aren’t any removals, there’s no need to shift the elements. Therefore this implementation performs well when we use an ArrayList.

This implementation behaves differently in some ways than the earlier ones:

  • it doesn’t modify the original List but returns a new one
  • the method decides what the returned List's implementation is; it may be different from the original

Also, we can modify our implementation to get the old behavior; we clear the original List and add the collected elements to it:

void removeAll(List<Integer> list, int element) {
    List<Integer> remainingElements = new ArrayList<>();
    for (Integer number : list) {
        if (!Objects.equals(number, element)) {
            remainingElements.add(number);
        }
    }

    list.clear();
    list.addAll(remainingElements);
}

It works the same way as the ones before:

// given
List<Integer> list = list(1, 1, 2, 3);
int valueToRemove = 1;

// when
removeAll(list, valueToRemove);

// then
assertThat(list).isEqualTo(list(2, 3));

Since we don’t modify the List continually, we don’t have to access elements by position or shift them. Also, there’re only two possible array reallocations: when we call List.clear() and List.addAll().

8. Using the Stream API

Java 8 introduced lambda expressions and the Stream API. With these powerful features, we can solve our problem with very clean code:

List<Integer> removeAll(List<Integer> list, int element) {
    return list.stream()
      .filter(e -> !Objects.equals(e, element))
      .collect(Collectors.toList());
}

This solution works the same way as when we were collecting the remaining elements.

As a result, it has the same characteristics, and we should use it to return the result:

// given
List<Integer> list = list(1, 1, 2, 3);
int valueToRemove = 1;

// when
List<Integer> result = removeAll(list, valueToRemove);

// then
assertThat(result).isEqualTo(list(2, 3));

Note that we can convert it to work like the other solutions using the same approach we took with the original 'collecting' implementation.

9. Using removeIf

With lambdas and functional interfaces, Java 8 introduced some API extensions, too; for example, the List.removeIf() method, which implements what we saw in the last section.

It expects a Predicate, which should return true when we want to remove the element, in contrast to the previous example, where we had to return true when we wanted to keep the element:

void removeAll(List<Integer> list, int element) {
    list.removeIf(n -> Objects.equals(n, element));
}

It works like the other solutions above:

// given
List<Integer> list = list(1, 1, 2, 3);
int valueToRemove = 1;

// when
removeAll(list, valueToRemove);

// then
assertThat(list).isEqualTo(list(2, 3));
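
As a small variation of our own, we could also build the Predicate with the JDK's Predicate.isEqual() factory method:

void removeAll(List<Integer> list, int element) {
    // requires java.util.function.Predicate; matches via Objects.equals()
    list.removeIf(Predicate.isEqual(element));
}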

Since the List itself implements this method, we can safely assume that it offers the best performance available. On top of that, this solution provides the cleanest code of all.

10. Conclusion

In this article, we saw many ways to solve a simple problem, including incorrect ones. We analyzed them to find the best solution for every scenario.

As usual, the examples are available over on GitHub.

Thread Safe LIFO Data Structure Implementations


1. Introduction

In this tutorial, we’ll discuss various options for Thread-safe LIFO Data structure implementations.

In the LIFO data structure, elements are inserted and retrieved according to the Last-In-First-Out principle. This means the last inserted element is retrieved first.

In computer science, stack is the term used to refer to such a data structure.

A stack is handy for dealing with some interesting problems, like expression evaluation and implementing undo operations. Since it can be used in concurrent execution environments, we might need to make it thread-safe.

2. Understanding Stacks

Basically, a Stack must implement the following methods:

  1. push() – add an element at the top
  2. pop() – fetch and remove the top element
  3. peek() – fetch the top element without removing it from the underlying container

As discussed before, let’s assume we want a command processing engine.

In this system, undoing executed commands is an important feature.

In general, all the commands are pushed onto the stack, and the undo operation can then be implemented simply (see the sketch after this list):

  • call pop() to get the last executed command
  • call undo() on the popped command object
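
Here's that sketch; the Command interface with execute() and undo() methods is hypothetical, and we use a plain (not yet thread-safe) ArrayDeque for the stack:

interface Command {
    void execute();
    void undo();
}

class CommandProcessor {

    // executed commands, with the most recent one on top
    private final Deque<Command> executed = new ArrayDeque<>();

    void process(Command command) {
        command.execute();
        executed.push(command);
    }

    void undoLast() {
        if (!executed.isEmpty()) {
            // pop the last executed command and revert it
            executed.pop().undo();
        }
    }
}

Making this stack safe for concurrent access is exactly what the following sections address.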

3. Understanding Thread Safety in Stacks

If a data structure is not thread-safe, when accessed concurrently, it might end up having race conditions.

Race conditions, in a nutshell, occur when the correct execution of code depends on the timing and interleaving of threads. This happens mainly when more than one thread shares the data structure and the structure isn't designed for concurrent access.

Let’s examine a method below from a Java Collection class, ArrayDeque:

public E pollFirst() {
    int h = head;
    E result = (E) elements[h];
    // ... other book-keeping operations removed, for simplicity
    head = (h + 1) & (elements.length - 1);
    return result;
}

To explain the potential race condition in the above code, let's assume two threads execute it in the following sequence:

  • The first thread executes the third line: it sets the result object to the element at the index 'head'
  • The second thread executes the third line: it sets its result object to the element at the same index 'head'
  • The first thread executes the fifth line: it resets the index 'head' to the next element in the backing array
  • The second thread executes the fifth line: it resets the index 'head' to the next element in the backing array

Oops! Now both executions would return the same result object.

To avoid such race conditions, in this case, a thread shouldn't execute the third line until the other thread has finished resetting the 'head' index on the fifth line. In other words, accessing the element at index 'head' and resetting the index 'head' should happen atomically for a thread.

Clearly, in this case, correct execution of code depends on the timing of threads and hence it’s not thread-safe.

4. Thread-safe Stacks Using Locks

In this section, we’ll discuss two possible options for concrete implementations of a thread-safe stack. 

In particular, we’ll cover the Java Stack and a thread-safe decorated ArrayDeque. 

Both use Locks for mutually exclusive access.

4.1. Using the Java Stack

The Java Collections framework has a legacy thread-safe Stack implementation based on Vector, which is basically a synchronized variant of ArrayList.

However, the official doc itself suggests considering ArrayDeque instead, so we won't go into too much detail.
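
Still, for reference, basic usage looks like this (our own minimal example):

Stack<Integer> stack = new Stack<>();
stack.push(5);
Integer top = stack.peek();   // 5, without removing it
Integer popped = stack.pop(); // 5, the stack is now empty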

Although the Java Stack is thread-safe and straightforward to use, there are major disadvantages to this class:

  • It doesn’t have support for setting the initial capacity
  • It uses locks for all the operations. This might hurt performance for single-threaded executions.

4.2. Using ArrayDeque

Using the Deque interface is the most convenient approach for LIFO data structures as it provides all the needed stack operations. ArrayDeque is one such concrete implementation.  

Since it’s not using locks for the operations, single-threaded executions would work just fine. But for multi-threaded executions, this is problematic.

However, we can implement a synchronization decorator for ArrayDeque. Though this performs similarly to the Java Collections Framework's Stack class, it solves the Stack class's important issue of lacking an initial capacity setting.

Let’s have a look at this class:

public class DequeBasedSynchronizedStack<T> {

    // Internal Deque which gets decorated for synchronization.
    private ArrayDeque<T> dequeStore;

    public DequeBasedSynchronizedStack(int initialCapacity) {
        this.dequeStore = new ArrayDeque<>(initialCapacity);
    }

    public DequeBasedSynchronizedStack() {
        dequeStore = new ArrayDeque<>();
    }

    public synchronized T pop() {
        return this.dequeStore.pop();
    }

    public synchronized void push(T element) {
        this.dequeStore.push(element);
    }

    public synchronized T peek() {
        return this.dequeStore.peek();
    }

    public synchronized int size() {
        return this.dequeStore.size();
    }
}

Note that, for simplicity, our solution doesn't implement Deque itself, as that would require implementing many more methods.

Also, Guava contains SynchronizedDeque, which is a production-ready implementation of a decorated ArrayDeque.

5. Lock-free Thread-safe Stacks

ConcurrentLinkedDeque is a lock-free implementation of the Deque interface. This implementation is completely thread-safe, as it uses an efficient lock-free algorithm.

Unlike lock-based ones, lock-free implementations are immune to the following issues:

  • Priority inversion – this occurs when a low-priority thread holds the lock needed by a high-priority thread, which might cause the high-priority thread to block
  • Deadlocks – these occur when different threads lock the same set of resources in a different order.

On top of that, lock-free implementations have some characteristics that make them a good fit for both single- and multi-threaded environments:

  • For unshared data structures and single-threaded access, performance is on par with ArrayDeque
  • For shared data structures, performance varies according to the number of threads accessing them simultaneously.

And in terms of usability, it's no different from ArrayDeque, as both implement the Deque interface.
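
For instance, using it as a stack is a drop-in change (our own illustration):

Deque<Integer> stack = new ConcurrentLinkedDeque<>();
stack.push(5);                // LIFO insert
Integer top = stack.peek();   // 5, without removing it
Integer popped = stack.pop(); // 5, now removed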

6. Conclusion

In this article, we’ve discussed the stack data structure and its benefits in designing systems like Command processing engine and Expression evaluators.

Also, we have analyzed various stack implementations in the Java collections framework and discussed their performance and thread-safety nuances.

As usual, code examples can be found over on GitHub.

Custom Validation MessageSource in Spring Boot


1. Overview

MessageSource is a powerful feature available in Spring applications. It helps application developers handle various complex scenarios, such as environment-specific configuration, internationalization, or configurable values, without writing much extra code.

One more such scenario is modifying the default validation messages into more user-friendly, custom ones.

In this tutorial, we’ll see how to configure and manage custom validation MessageSource in the application using Spring Boot.

2. Maven Dependencies

Let’s start with adding the necessary Maven dependencies:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-validation</artifactId>
</dependency>

You can find the latest versions of these libraries over on Maven Central.

3. Custom Validation Message Example

Let’s consider a scenario where we have to develop an application that supports multiple languages. If the user doesn’t provide the correct details as input, we’d like to show error messages according to the user’s locale.

Let’s take an example of a Login form bean:

public class LoginForm {

    @NotEmpty(message = "{email.notempty}")
    @Email
    private String email;

    @NotNull
    private String password;

    // standard getter and setters
}

Here we’ve added validation constraints that verify if an email is not provided at all, or provided, but not following the standard email address style.

To show a custom, locale-specific message, we can provide a placeholder, as shown for the @NotEmpty annotation.

The email.notempty property will be resolved from a properties file by the MessageSource configuration.

4. Defining the MessageSource Bean

An application context delegates the message resolution to a bean with the exact name messageSource.

ReloadableResourceBundleMessageSource is the most common MessageSource implementation that resolves messages from resource bundles for different locales:

@Bean
public MessageSource messageSource() {
    ReloadableResourceBundleMessageSource messageSource
      = new ReloadableResourceBundleMessageSource();
    
    messageSource.setBasename("classpath:messages");
    messageSource.setDefaultEncoding("UTF-8");
    return messageSource;
}

Here, it’s important to provide the basename as locale-specific file names will be resolved based on the name provided.

5. Defining LocalValidatorFactoryBean 

To use custom messages in a properties file like the one above, we need to define a LocalValidatorFactoryBean and register the messageSource:

@Bean
public LocalValidatorFactoryBean getValidator() {
    LocalValidatorFactoryBean bean = new LocalValidatorFactoryBean();
    bean.setValidationMessageSource(messageSource());
    return bean;
}

However, note that if we had already extended WebMvcConfigurerAdapter, we'd have to set the validator by overriding the getValidator() method from the parent class; otherwise, our custom validator would be ignored.
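
For illustration, a minimal sketch of how that override might look with the newer WebMvcConfigurer interface (the class and bean names here are ours):

@Configuration
public class ValidationConfig implements WebMvcConfigurer {

    @Autowired
    private MessageSource messageSource;

    @Bean
    public LocalValidatorFactoryBean validator() {
        LocalValidatorFactoryBean bean = new LocalValidatorFactoryBean();
        bean.setValidationMessageSource(messageSource);
        return bean;
    }

    @Override
    public Validator getValidator() {
        // hand Spring MVC the MessageSource-aware validator defined above
        return validator();
    }
}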

Now we can define a property message like:

email.notempty=<Custom_Message>

instead of:

javax.validation.constraints.NotEmpty.message=<Custom_message>

6. Defining Property Files

The final step is to create a properties file in the src/main/resources directory with the name provided in the basename from Section 4:

# messages.properties
email.notempty=Please provide valid email id.

Here, we can also take advantage of internationalization. Let's say we want to show messages to a French user in their language.

In this case, we have to add one more property file named messages_fr.properties in the same location (no code changes required at all):

# messages_fr.properties
email.notempty=Veuillez fournir un identifiant de messagerie valide.

7. Conclusion

In this article, we covered how to change the default validation messages without modifying the code, as long as the configuration is done properly beforehand.

We can also leverage internationalization along with this to make the application more user-friendly.

As always, the full source code is available over on GitHub.

Concurrent Strategies using MDBs


1. Introduction

Message Driven Beans, also known as “MDB”, handle message processing in an asynchronous context. We can learn the basics of MDB in this article.

This tutorial will discuss some strategies and best practices to implement concurrency using Message Driven Beans.

If you want to understand more about the basics of concurrency using Java, you can get started here.

To make better use of MDBs with concurrency, there are some considerations to keep in mind. It's important to remember that these considerations should be driven by the business rules and the needs of our application.

2. Tuning the Thread Pool

Tuning the thread pool is probably the main point of attention. To make good use of concurrency, we must tune the number of MDB instances available to consume messages. When one instance is busy handling a message, other instances can pick up the next ones.
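
To ground the discussion, here's a minimal MDB sketch of our own (the queue name is hypothetical); the container invokes onMessage on one of the pooled instances for each incoming message:

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(
        propertyName = "destinationLookup",
        propertyValue = "jms/ExampleQueue")
})
public class ExampleMdb implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                // process the payload; this runs on a pooled MessageListener thread
                String payload = ((TextMessage) message).getText();
            }
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}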

A MessageListener thread is responsible for executing the onMessage method of an MDB. This thread is part of the MessageListener thread pool, which means it's pooled and reused over and over again. This pool also has a configuration that allows us to set the number of threads, which may impact performance:

  • setting a small pool size will cause messages to be consumed slowly (“MDB Throttling”)
  • setting a very large pool size might decrease performance – or not even work at all.

On Wildfly, we can set this value by accessing the management console. JMS capability isn’t enabled on the default standalone profile; we need to start the server using the full profile.

Usually, on a local installation, we access it through http://127.0.0.1:9990/console/index.html. After that, we need to go to Configuration / Subsystems / Messaging / Server, select our server, and click "View".

Choose the “Attributes” tab, click on “Edit” and change the value of “Thread Pool Max Size”. The default value is 30.

3. Tuning Max Sessions

Another configurable property to be aware of is Maximum Sessions (maxSession). It defines the concurrency for a particular listener port. Usually, it defaults to 1, but increasing it can give more scalability and availability to the MDB application.

We can configure it either by annotations or .xml descriptors. Through annotations, we use the @ActivationConfigProperty:

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(
        propertyName = "maxSession", propertyValue = "50"
    )
})

If the chosen method of configuration is .xml descriptors, we can configure maxSession like this:

<activation-config>
    <activation-config-property>
        <activation-config-property-name>maxSession</activation-config-property-name>
        <activation-config-property-value>50</activation-config-property-value>
    </activation-config-property>
</activation-config>

4. Deployment Environment

When we have a requirement for high availability, we should consider deploying the MDB on an application server cluster. It can then execute on any of the servers in the cluster, and many application servers can invoke it concurrently, which also improves scalability.

For this particular case, we have an important choice to make:

  • make all servers in the cluster eligible to receive messages, which allows the use of all of its processing power, or
  • ensure message processing in a sequential manner by allowing just one server to receive them at a time

If we use an enterprise service bus, a good practice is to deploy the MDB to the same server or cluster as the bus member to optimize messaging performance.

5. Message Model and Message Types

Although this isn’t as clear as just setting another value to a pool, the message model and the message type might affect one of the best advantages of using concurrency: performance.

When choosing XML for a message type, for instance, the size of the message can affect the time spent to process it. This is an important consideration especially if the application handles a large number of messages.

Regarding the message model, if the application needs to send the same message to a lot of consumers, a publish-subscribe model might be the right choice. This would reduce the overhead of processing the messages, providing better performance.

To consume from a Topic on a publish-subscribe model, we can use annotations:

@ActivationConfigProperty(
  propertyName = "destinationType", 
  propertyValue = "javax.jms.Topic")

Again, we can also configure those values in a .xml deployment descriptor:

<activation-config>
    <activation-config-property>
        <activation-config-property-name>destinationType</activation-config-property-name>
        <activation-config-property-value>javax.jms.Topic</activation-config-property-value>
    </activation-config-property>
</activation-config>

If sending the very same message to many consumers isn't a requirement, the regular PTP (point-to-point) model will suffice.

To consume from a Queue, we set the annotation as:

@ActivationConfigProperty(
  propertyName = "destinationType", 
  propertyValue = "javax.jms.Queue")

If we’re using .xml deployment descriptor, we can set it:

<activation-config>
    <activation-config-property>
        <activation-config-property-name>destinationType</activation-config-property-name>
        <activation-config-property-value>javax.jms.Queue</activation-config-property-value>
    </activation-config-property>
</activation-config>

6. Conclusion

As many computer scientists and IT writers have already stated, processor speeds are no longer increasing at a fast pace. To make our programs work faster, we need to take advantage of the larger number of processors and cores available today.

This article discussed some best practices for getting the most out of concurrency using MDBs.

Set the Time Zone of a Date in Java


1. Overview

In this quick tutorial, we’ll see how to set the time zone of a date using Java 7, Java 8 and the Joda-Time library.

2. Using Java 8

Java 8 introduced a new Date-Time API for working with dates and times, which was largely based on the Joda-Time library.

The Instant class from the Java Date-Time API models a single instantaneous point on the timeline in UTC: it represents the count of nanoseconds since the epoch of the first moment of 1970 in UTC.

First, we’ll obtain the current Instant from the system clock and ZoneId for a time zone name:

Instant nowUtc = Instant.now();
ZoneId asiaSingapore = ZoneId.of("Asia/Singapore");

Finally, the ZoneId and Instant can be utilized to create a date-time object with time-zone details. The ZonedDateTime class represents a date-time with a time-zone in the ISO-8601 calendar system:

ZonedDateTime nowAsiaSingapore = ZonedDateTime.ofInstant(nowUtc, asiaSingapore);

We’ve used Java 8’s ZonedDateTime to represent a date-time with a time zone.

3. Using Java 7

In Java 7, setting the time-zone is a bit tricky. The Date class (which represents a specific instant in time) doesn’t contain any time zone information.

First, let’s get the current UTC date and a TimeZone object:

Date nowUtc = new Date();
TimeZone asiaSingapore = TimeZone.getTimeZone("Asia/Singapore");

In Java 7, we need to use the Calendar class to represent a date with a time zone.

Finally, we can create a nowUtc Calendar with the asiaSingapore TimeZone and set the time:

Calendar nowAsiaSingapore = Calendar.getInstance(asiaSingapore);
nowAsiaSingapore.setTime(nowUtc);
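
Note that a Date itself carries no zone information, so to display the Calendar's time in the target zone, we still need a formatter configured with that zone (a small addition of our own):

SimpleDateFormat formatter = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss z");
formatter.setTimeZone(asiaSingapore);
// prints the same instant rendered in the Asia/Singapore zone
System.out.println(formatter.format(nowAsiaSingapore.getTime()));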

It’s recommended to avoid the Java 7 date time API in favor of Java 8 date time API or the Joda-Time library.

4. Using Joda-Time

If Java 8 isn’t an option, we can still get the same kind of result from Joda-Time, a de-facto standard for date-time operations in the pre-Java 8 world.

First, we need to add the Joda-Time dependency to pom.xml:

<dependency>
  <groupId>joda-time</groupId>
  <artifactId>joda-time</artifactId>
  <version>2.10</version>
</dependency>

To represent an exact point on the timeline, we can use Instant from the org.joda.time package. Internally, the class holds one piece of data: the instant as milliseconds from the Java epoch of 1970-01-01T00:00:00Z:

Instant nowUtc = Instant.now();

We’ll use DateTimeZone to represent a time-zone (for the specified time zone id):

DateTimeZone asiaSingapore = DateTimeZone.forID("Asia/Singapore");

Now the nowUtc time will be converted to a DateTime object using the time zone information:

DateTime nowAsiaSingapore = nowUtc.toDateTime(asiaSingapore);

This is how the Joda-Time API can be used to combine date and time zone information.

5. Conclusion

In this article, we learned how to set the time zone in Java using Java 7, Java 8, and the Joda-Time API. To learn more about Java 8's date-time support, check out our Java 8 date-time intro.

As always the code snippet is available in the GitHub repository.

Why String is Immutable in Java?


1. Introduction

In Java, Strings are immutable. An obvious question that is quite prevalent in interviews is "Why are Strings designed to be immutable in Java?"

James Gosling, the creator of Java, was once asked in an interview when one should use immutables, to which he answered:

I would use an immutable whenever I can.

He further supported his argument by citing features that immutability provides, such as caching, security, and easy reuse without replication.

In this tutorial, we’ll further explore why the Java language designers decided to keep String immutable.

2. What is an Immutable Object?

An immutable object is an object whose internal state remains constant after it has been entirely created. This means that once the object has been assigned to a variable, we can neither update the reference nor mutate the internal state by any means.

We have a separate article that discusses immutable objects in detail. For more information, read the Immutable Objects in Java article.

3. Why is String Immutable in Java?

The key benefits of keeping this class as immutable are caching, security, synchronization, and performance.

Let’s discuss how these things work.

3.1. Introduction to the String Pool

The String is the most widely used data structure. Caching String literals and reusing them saves a lot of heap space, because different String variables refer to the same object in the String pool. The String intern pool serves exactly this purpose.

Java String Pool is the special memory region where Strings are stored by the JVM. Since Strings are immutable in Java, the JVM optimizes the amount of memory allocated for them by storing only one copy of each literal String in the pool. This process is called interning:

String s1 = "Hello World";
String s2 = "Hello World";
         
assertThat(s1 == s2).isTrue();

Because of the presence of the String pool, in the preceding example two different variables point to the same String object from the pool, thus saving a crucial memory resource.
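
Conversely, explicitly constructing a String bypasses the pool; this illustration is our own:

String s3 = new String("Hello World");

assertThat(s1 == s3).isFalse();     // a distinct heap object, not the pooled literal
assertThat(s1.equals(s3)).isTrue(); // the contents are still equal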

We have a separate article dedicated to Java String Pool. For more information, head on over to that article.

3.2. Security

The String is widely used in Java applications to store sensitive pieces of information like usernames, passwords, connection URLs, network connections, etc. It’s also used extensively by JVM class loaders while loading classes.

Hence, securing the String class is crucial for the security of the whole application in general. For example, consider this simple code snippet:

void criticalMethod(String userName) {
    // perform security checks
    if (!isAlphaNumeric(userName)) {
        throw new SecurityException(); 
    }
	
    // do some secondary tasks
    initializeDatabase();
	
    // critical task
    connection.executeUpdate("UPDATE Customers SET Status = 'Active' " +
      " WHERE UserName = '" + userName + "'");
}

In the above code snippet, let’s say that we received a String object from an untrustworthy source. We’re doing all necessary security checks initially to check if the String is only alphanumeric, followed by some more operations.

Remember that the caller method, our unreliable source, still has a reference to this userName object.

If Strings were mutable, then by the time we execute the update, we couldn't be sure that the String we received, even after performing security checks, would be safe. The untrustworthy caller method still has the reference and can change the String between the integrity checks, making our query prone to SQL injection in this case. So mutable Strings could lead to a degradation of security over time.

It could also happen that the String userName is visible to another thread, which could then change its value after the integrity check.

In general, immutability comes to our rescue in this case because it’s easier to operate with sensitive code when values don’t change because there are fewer interleavings of operations that might affect the result.

3.3. Synchronization

Being immutable automatically makes Strings thread-safe, since they won't be changed when accessed from multiple threads.

Hence, immutable objects in general can be shared across multiple threads running simultaneously. They're also thread-safe, because if a thread changes the value, then instead of modifying the same object, a new String is created in the String pool. Hence, Strings are safe for multi-threading.

3.4. Hashcode Caching

Since String objects are abundantly used as a data structure, they're also widely used in hash implementations like HashMap, Hashtable, and HashSet. When operating on these hash implementations, the hashCode() method is called quite frequently for bucketing.

Immutability guarantees that a String's value won't change. So the hashCode() method is overridden in the String class to facilitate caching: the hash is calculated and cached during the first hashCode() call, and the same value is returned ever since.

This, in turn, improves the performance of collections that use hash implementations when operating with String objects.

On the other hand, if String were mutable and a key's contents were modified after insertion, it would produce two different hashcodes at insertion and retrieval time, potentially losing the value object in the Map.
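
String itself can't demonstrate this, but here's a sketch of our own with a hypothetical MutableKey type, showing how a map entry gets lost when a key mutates after insertion:

class MutableKey {
    String value;

    MutableKey(String value) {
        this.value = value;
    }

    @Override
    public int hashCode() {
        return value.hashCode();
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof MutableKey && ((MutableKey) o).value.equals(value);
    }
}

Map<MutableKey, Integer> map = new HashMap<>();
MutableKey key = new MutableKey("a");
map.put(key, 1);

key.value = "b"; // mutating the key changes its hashcode

// the entry was stored under the old hash, so it can no longer be found
assertThat(map.get(key)).isNull();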

3.5. Performance

As we saw previously, the String pool exists because Strings are immutable. In turn, this enhances performance by saving heap memory and allowing faster access when Strings are used with hash implementations.

Since String is the most widely used data structure, improving its performance has a considerable effect on improving the performance of the whole application in general.

4. Conclusion

Through this article, we can conclude that Strings are immutable precisely so that their references can be treated as normal variables and passed around, between methods and across threads, without worrying about whether the actual String object they point to will change.

We also learned what other reasons might have prompted the Java language designers to make this class immutable.


Running JUnit Tests Programmatically, from a Java Application


1. Overview

In this tutorial, we’ll show how to run JUnit tests directly from Java code – there are scenarios where this option comes in handy.

If you're new to JUnit, or if you want to upgrade to JUnit 5, you can check out some of the many tutorials we have on the topic.

2. Maven Dependencies

We’ll need a couple of basic dependencies to run both JUnit 4 and JUnit 5 tests:

<dependencies>
    <dependency>
        <groupId>org.junit.jupiter</groupId>
        <artifactId>junit-jupiter-engine</artifactId>
        <version>5.2.0</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.junit.platform</groupId>
        <artifactId>junit-platform-launcher</artifactId>
        <version>1.2.0</version>
    </dependency>
</dependencies>

<!-- for JUnit 4 -->
<dependency> 
    <groupId>junit</groupId> 
    <artifactId>junit</artifactId> 
    <version>4.12</version> 
    <scope>test</scope> 
</dependency>

The latest versions of JUnit 4, JUnit 5, and JUnit Platform Launcher can be found on Maven Central.

3. Running JUnit 4 Tests

3.1. Test Scenario

For both JUnit 4 and JUnit 5, we’ll set up a few “placeholder” test classes which will be enough to demonstrate our examples:

public class FirstUnitTest {

    @Test
    public void whenThis_thenThat() {
        assertTrue(true);
    }

    @Test
    public void whenSomething_thenSomething() {
        assertTrue(true);
    }

    @Test
    public void whenSomethingElse_thenSomethingElse() {
        assertTrue(true);
    }
}
public class SecondUnitTest {

    @Test
    public void whenSomething_thenSomething() {
        assertTrue(true);
    }

    @Test
    public void whensomethingElse_thenSomethingElse() {
        assertTrue(true);
    }
}

When using JUnit 4, we create test classes by adding the @Test annotation to every test method.

We can also add other useful annotations, such as @Before or @After, but that’s not in the scope of this tutorial.

3.2. Running a Single Test Class

To run JUnit tests from Java code, we can use the JUnitCore class (with the addition of the TextListener class, used to display the output in System.out):

JUnitCore junit = new JUnitCore();
junit.addListener(new TextListener(System.out));
junit.run(FirstUnitTest.class);

On the console, we’ll see a very simple message indicating successful tests:

Running one test class:
..
Time: 0.019
OK (2 tests)

3.3. Running Multiple Test Classes

If we want to specify multiple test classes with JUnit 4, we can use the same code as for a single class, and simply add the additional classes:

JUnitCore junit = new JUnitCore();
junit.addListener(new TextListener(System.out));

Result result = junit.run(
  FirstUnitTest.class, 
  SecondUnitTest.class);

resultReport(result);

Note that the result is stored in an instance of JUnit’s Result class, which we’re printing out using a simple utility method:

public static void resultReport(Result result) {
    System.out.println("Finished. Result: Failures: " +
      result.getFailureCount() + ". Ignored: " +
      result.getIgnoreCount() + ". Tests run: " +
      result.getRunCount() + ". Time: " +
      result.getRunTime() + "ms.");
}

3.4. Running a Test Suite

If we need to group some test classes in order to run them, we can create a TestSuite. This is just an empty class where we specify all classes using JUnit annotations:

@RunWith(Suite.class)
@Suite.SuiteClasses({
  FirstUnitTest.class,
  SecondUnitTest.class
})
public class MyTestSuite {
}

To run these tests, we’ll again use the same code as before:

JUnitCore junit = new JUnitCore();
junit.addListener(new TextListener(System.out));
Result result = junit.run(MyTestSuite.class);
resultReport(result);

3.5. Running Repeated Tests

One of the interesting features of JUnit is that we can repeat tests by creating instances of RepeatedTest. This can be really helpful when we're testing random values or doing performance checks.

In the next example, we’ll run the tests from MergeListsTest five times:

Test test = new JUnit4TestAdapter(FirstUnitTest.class);
RepeatedTest repeatedTest = new RepeatedTest(test, 5);

JUnitCore junit = new JUnitCore();
junit.addListener(new TextListener(System.out));

junit.run(repeatedTest);

Here, we’re using JUnit4TestAdapter as a wrapper for our test class.

We can even create suites programmatically, applying repeated testing:

TestSuite mySuite = new ActiveTestSuite();

JUnitCore junit = new JUnitCore();
junit.addListener(new TextListener(System.out));

mySuite.addTest(new RepeatedTest(new JUnit4TestAdapter(FirstUnitTest.class), 5));
mySuite.addTest(new RepeatedTest(new JUnit4TestAdapter(SecondUnitTest.class), 3));

junit.run(mySuite);

4. Running JUnit 5 Tests

4.1. Test Scenario

With JUnit 5, we’ll use the same sample test classes as for the previous demo – FirstUnitTest and SecondUnitTest, with some minor differences due to a different version of JUnit framework, like the package for @Test and assertion methods.

4.2. Running a Single Test Class

To run JUnit 5 tests from Java code, we'll set up an instance of LauncherDiscoveryRequest. It uses a builder class where we set selectors, such as the test class we want to run.

This request is then associated with a launcher and, before executing the tests, we’ll also set up a test plan and an execution listener.

Both of these will offer information about the tests to be executed and of the results:

public class RunJUnit5TestsFromJava {
    SummaryGeneratingListener listener = new SummaryGeneratingListener();

    public void runOne() {
        LauncherDiscoveryRequest request = LauncherDiscoveryRequestBuilder.request()
          .selectors(selectClass(FirstUnitTest.class))
          .build();
        Launcher launcher = LauncherFactory.create();
        TestPlan testPlan = launcher.discover(request);
        launcher.registerTestExecutionListeners(listener);
        launcher.execute(request);
    }
    // main method...
}

4.3. Running Multiple Test Classes

We can add selectors and filters to the request to run multiple test classes.

Let’s see how we can set package selectors and testing class name filters, to get all test classes that we want to run:

public void runAll() {
    LauncherDiscoveryRequest request = LauncherDiscoveryRequestBuilder.request()
      .selectors(selectPackage("com.baeldung.junit5.runfromjava"))
      .filters(includeClassNamePatterns(".*Test"))
      .build();
    Launcher launcher = LauncherFactory.create();
    TestPlan testPlan = launcher.discover(request);
    launcher.registerTestExecutionListeners(listener);
    launcher.execute(request);
}

4.4. Test Output

In the main() method, we call our class, and we also use the listener to get the result details. This time the result is stored as a TestExecutionSummary.

The simplest way to extract its info is to print it to a console output stream:

public static void main(String[] args) {
    RunJUnit5TestsFromJava runner = new RunJUnit5TestsFromJava();
    runner.runAll();

    TestExecutionSummary summary = runner.listener.getSummary();
    summary.printTo(new PrintWriter(System.out));
}

This will give us the details of our test run:

Test run finished after 177 ms
[         7 containers found      ]
[         0 containers skipped    ]
[         7 containers started    ]
[         0 containers aborted    ]
[         7 containers successful ]
[         0 containers failed     ]
[        10 tests found           ]
[         0 tests skipped         ]
[        10 tests started         ]
[         0 tests aborted         ]
[        10 tests successful      ]
[         0 tests failed          ]

5. Conclusion

In this article, we’ve shown how to run JUnit tests programmatically from Java code, covering JUnit 4 as well as the recent JUnit 5 version of this testing framework.

As always, the implementation of the examples shown here is available over on GitHub for both the JUnit 5 examples, as well as JUnit 4.

Default Password Encoder in Spring Security 5

1. Overview

In Spring Security 4, it was possible to store passwords in plain text using in-memory authentication.

A major overhaul of the password management process in version 5 has introduced a more secure default mechanism for encoding and decoding passwords. This means that if your Spring application stores passwords in plain text, upgrading to Spring Security 5 may cause problems.

In this short tutorial, we’ll describe one of those potential problems and demonstrate a solution to the issue.

2. Spring Security 4

We’ll start by showing a standard security configuration that provides simple in-memory authentication (valid for Spring Security 4):

@Configuration
public class InMemoryAuthWebSecurityConfigurer 
  extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(AuthenticationManagerBuilder auth) 
      throws Exception {
        auth.inMemoryAuthentication()
          .withUser("spring")
          .password("secret")
          .roles("USER");
    }

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests()
          .antMatchers("/private/**")
          .authenticated()
          .antMatchers("/public/**")
          .permitAll()
          .and()
          .httpBasic();
    }
}

This configuration requires authentication for all /private/ mapped URLs and allows public access to everything under /public/.

If we use the same configuration under Spring Security 5, we’d get the following error:

java.lang.IllegalArgumentException: There is no PasswordEncoder mapped for the id "null"

The error tells us that the given password couldn’t be decoded since no password encoder was configured for our in-memory authentication.
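
As a quick – but insecure – workaround, we could keep the plain text password and prefix it with the {noop} encoder id, which makes the delegating encoder fall back to NoOpPasswordEncoder:

auth.inMemoryAuthentication()
  .withUser("spring")
  .password("{noop}secret")
  .roles("USER");

The proper fix, however, is to encode the password, as we’ll see next.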

3. Spring Security 5

We can fix this error by defining a DelegatingPasswordEncoder with the PasswordEncoderFactories class.

We use this encoder to configure our user with the AuthenticationManagerBuilder:

@Configuration
public class InMemoryAuthWebSecurityConfigurer 
  extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(AuthenticationManagerBuilder auth) 
      throws Exception {
        PasswordEncoder encoder = PasswordEncoderFactories.createDelegatingPasswordEncoder();
        auth.inMemoryAuthentication()
          .withUser("spring")
          .password(encoder.encode("secret"))
          .roles("USER");
    }
}

Now, with this configuration we’re storing our in-memory password using BCrypt in the following format:

{bcrypt}$2a$10$MF7hYnWLeLT66gNccBgxaONZHbrSMjlUofkp50sSpBw2PJjUqU.zS

Although we can define our own set of password encoders, it’s recommended to stick with the default encoders provided in PasswordEncoderFactories.
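
For example, if we did decide to build our own, a minimal sketch of a custom DelegatingPasswordEncoder limited to BCrypt might look like this:

Map<String, PasswordEncoder> encoders = new HashMap<>();
encoders.put("bcrypt", new BCryptPasswordEncoder());

// "bcrypt" is the id used to encode new passwords
PasswordEncoder encoder = new DelegatingPasswordEncoder("bcrypt", encoders);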

3.1. Migrating Existing Passwords

We can update existing passwords to the recommended Spring Security 5 standards by:

  • Updating plain text stored passwords with their value encoded:
String encoded = new BCryptPasswordEncoder().encode(plainTextPassword);
  • Prefixing hashed stored passwords with their known encoder identifier:
{bcrypt}$2a$10$MF7hYnWLeLT66gNccBgxaONZHbrSMjlUofkp50sSpBw2PJjUqU.zS
{sha256}97cde38028ad898ebc02e690819fa220e88c62e0699403e94fff291cfffaf8410849f27605abcbc0
  • Requesting users to update their passwords when the encoding mechanism for stored passwords is unknown
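
To illustrate the first two points, here’s a small sketch showing that, once encoded or prefixed, both values are matched by the same delegating encoder (the hash is the sample one shown above):

PasswordEncoder encoder = PasswordEncoderFactories.createDelegatingPasswordEncoder();

// re-encoding a plain text password; the {bcrypt} prefix is added for us
String migrated = encoder.encode("secret");
encoder.matches("secret", migrated); // true

// an existing BCrypt hash only needs the {bcrypt} prefix
String prefixed = "{bcrypt}$2a$10$MF7hYnWLeLT66gNccBgxaONZHbrSMjlUofkp50sSpBw2PJjUqU.zS";
encoder.matches("secret", prefixed); // true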

4. Conclusion

In this quick example, we updated a working Spring Security 4 in-memory authentication configuration to Spring Security 5 using the new password storage mechanism.

As always, you can find the source code over on the GitHub project.

Java Weekly, Issue 241

Here we go…

1. Spring and Java

>> Spring Boot – Best Practices [e4developer.com]

This primer can help jumpstart your journey down the road of Spring Boot.

>> It’s time! Migrating to Java 11 [medium.com]

With JDK 8 nearing end-of-life and JDK 11 on the horizon, this step-by-step formula for migrating applications to Java 11 couldn’t come soon enough.

>> WireMock Tutorial: Introduction to Stubbing [petrikainulainen.net]

A nice overview of request stubbing and crafting HTTP response bodies, headers, and status codes in WireMock. Good stuff.

>> How to query by entity type using JPA Criteria API [vladmihalcea.com]

A quick example using JPA inheritance that shows how to find entities of a superclass or a specific subclass. Very cool.

>> How to Configure a Human-Readable Logging Format with Logback and Descriptive Logger [reflectoring.io]

A clever SLF4J wrapper library for injecting a custom ID to the Mapped Diagnostic Context of each Logback message, and some handy formatting tips to boot.

>> Spring Boot integration in IntelliJ IDEA [blog.frankel.ch]

A brief rundown of the many ways this popular IDE can help you create, configure, run, debug, and monitor Spring Boot projects. This can really speed up your development time.

>> Multi-module project builds with Maven and Gradle [andresalmiray.com]

A reminder that, while Maven and Gradle aren’t perfect, there’s usually a workaround that lets you achieve your objective.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

>> Top Docker Monitoring Tools [code-maze.com]

If Docker is part of your infrastructure, you’ll need a way to monitor your containers. Here are some of the best tools for the job. Choose wisely.

>> Tip: Provide Contextual Information in Log Messages [reflectoring.io]

Some practical advice on how adding context to your log messages can make them more useful.

>> Should that be a Microservice? Part 5: Failure Isolation [content.pivotal.io]

A compelling argument in favor of isolating failure-prone services into microservices and using a circuit breaker to mitigate failures.

>> Pseudo Localization @ Netflix [medium.com]

A novel approach that helps developers identify and avoid some of the pitfalls of writing multi-language UIs, without incurring the added cost of translation.

>> Code Review Guidelines [philipphauer.de]

A great set of rules for both authors and reviewers that can make a code review much more personal and well-received.

Also worth reading:

3. Comics

And my favorite Dilberts of the week:

>> 99 Problems but a Giraffe Ain’t One [dilbert.com]

>> AI Guilt Trip Lost on Wally [dilbert.com]

>> Delegate Like a Boss [dilbert.com]

4. Pick of the Week

>> Imaginary Problems Are the Root of Bad Software [medium.com]

Extracting Principal and Authorities using Spring Security OAuth

1. Overview

In this tutorial, we’ll illustrate how to create an application that delegates user authentication to a third party, as well as to a custom authorization server, using Spring Boot and Spring Security OAuth.

Also, we’ll demonstrate how to extract both Principal and Authorities using Spring’s PrincipalExtractor and AuthoritiesExtractor interfaces.

For an introduction to Spring Security OAuth2 please refer to these articles.

2. Maven Dependencies

To get started, we need to add the spring-security-oauth2-autoconfigure dependency to our pom.xml:

<dependency>
    <groupId>org.springframework.security.oauth.boot</groupId>
    <artifactId>spring-security-oauth2-autoconfigure</artifactId>
    <version>2.0.1.RELEASE</version>
</dependency>

3. OAuth Authentication using Github

Next, let’s create the security configuration of our application:

@Configuration
@EnableOAuth2Sso
public class SecurityConfig 
  extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) 
      throws Exception {
 
        http.antMatcher("/**")
          .authorizeRequests()
          .antMatchers("/login**")
          .permitAll()
          .anyRequest()
          .authenticated()
          .and()
          .formLogin().disable();
    }
}

In short, we’re saying that anyone can access the /login endpoint and that all other endpoints will require user authentication.

We’ve also annotated our configuration class with @EnableOAuth2Sso, which converts our application into an OAuth client and creates the necessary components for it to behave as such.

While Spring creates most of the components for us by default, we still need to configure some properties:

security.oauth2.client.client-id=89a7c4facbb3434d599d
security.oauth2.client.client-secret=9b3b08e4a340bd20e866787e4645b54f73d74b6a
security.oauth2.client.access-token-uri=https://github.com/login/oauth/access_token
security.oauth2.client.user-authorization-uri=https://github.com/login/oauth/authorize
security.oauth2.client.scope=read:user,user:email
security.oauth2.resource.user-info-uri=https://api.github.com/user

Instead of dealing with user account management, we’re delegating it to a third party – in this case, Github – thus enabling us to focus on the logic of our application.

4. Extracting Principal and Authorities

When acting as an OAuth client and authenticating users through a third party, there are three steps we need to consider:

  1. User authentication – the user authenticates with the third party
  2. User authorization – follows authentication, it’s when the user allows our application to perform certain operations on their behalf; this is where scopes come in
  3. Fetch user data – use the OAuth token we’ve obtained to retrieve the user’s data

Once we retrieve the user’s data, Spring is able to automatically create the user’s Principal and Authorities.

While that may be acceptable, more often than not we find ourselves in a scenario where we want to have complete control over them.

To do so, Spring gives us two interfaces we can use to override its default behavior:

  • PrincipalExtractor – Interface we can use to provide our custom logic to extract the Principal
  • AuthoritiesExtractor – Similar to PrincipalExtractor, but it’s used to customize Authorities extraction instead

By default, Spring provides two components – FixedPrincipalExtractor and FixedAuthoritiesExtractor – that implement these interfaces and have a pre-defined strategy to create them for us.

4.1. Customizing Github’s Authentication

In our case, we’re aware of what Github’s user data looks like and which parts we can use to tailor it to our needs.

As such, to override Spring’s default components, we just need to create two beans that implement these interfaces.

For our application’s Principal we’re simply going to use the user’s Github username:

@Component
public class GithubPrincipalExtractor 
  implements PrincipalExtractor {

    @Override
    public Object extractPrincipal(Map<String, Object> map) {
        return map.get("login");
    }
}

Depending on our user’s Github subscription – free, or otherwise – we’ll give them a GITHUB_USER_SUBSCRIBED, or a GITHUB_USER_FREE authority:

@Component
public class GithubAuthoritiesExtractor 
  implements AuthoritiesExtractor {
    List<GrantedAuthority> GITHUB_FREE_AUTHORITIES
     = AuthorityUtils.commaSeparatedStringToAuthorityList(
     "GITHUB_USER,GITHUB_USER_FREE");
    List<GrantedAuthority> GITHUB_SUBSCRIBED_AUTHORITIES 
     = AuthorityUtils.commaSeparatedStringToAuthorityList(
     "GITHUB_USER,GITHUB_USER_SUBSCRIBED");

    @Override
    public List<GrantedAuthority> extractAuthorities
      (Map<String, Object> map) {
 
        if (Objects.nonNull(map.get("plan"))) {
            if (!((LinkedHashMap) map.get("plan"))
              .get("name")
              .equals("free")) {
                return GITHUB_SUBSCRIBED_AUTHORITIES;
            }
        }
        return GITHUB_FREE_AUTHORITIES;
    }
}

4.2. Using a Custom Authorization Server

We can also use our own Authorization Server for our users – instead of relying on a third party.

Regardless of the authorization server we decide to use, the components we need to customize both Principal and Authorities remain the same: a PrincipalExtractor and an AuthoritiesExtractor.

We just need to be aware of the data returned by the user-info-uri endpoint and use it as we see fit.
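
For instance, when the endpoint simply exposes Spring Security’s Principal, the returned map might look something like this (an illustrative payload, trimmed to the keys we use below):

{
    "name": "john",
    "authorities": [
        { "authority": "ROLE_USER" },
        { "authority": "ROLE_ADMIN" }
    ]
}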

Let’s change our application to authenticate our users using the authorization server described in this article:

security.oauth2.client.client-id=SampleClientId
security.oauth2.client.client-secret=secret
security.oauth2.client.access-token-uri=http://localhost:8081/auth/oauth/token
security.oauth2.client.user-authorization-uri=http://localhost:8081/auth/oauth/authorize
security.oauth2.resource.user-info-uri=http://localhost:8081/auth/user/me

Now that we’re pointing to our authorization server, we need to create both extractors; in this case, our PrincipalExtractor is going to extract the Principal from the Map using the name key:

@Component
public class BaeldungPrincipalExtractor 
  implements PrincipalExtractor {

    @Override
    public Object extractPrincipal(Map<String, Object> map) {
        return map.get("name");
    }
}

As for authorities, our Authorization Server is already placing them in the data returned by its user-info-uri endpoint.

As such, we’re going to extract and enrich them:

@Component
public class BaeldungAuthoritiesExtractor 
  implements AuthoritiesExtractor {

    @Override
    public List<GrantedAuthority> extractAuthorities
      (Map<String, Object> map) {
        return AuthorityUtils
          .commaSeparatedStringToAuthorityList(asAuthorities(map));
    }

    private String asAuthorities(Map<String, Object> map) {
        List<String> authorities = new ArrayList<>();
        authorities.add("BAELDUNG_USER");
        List<LinkedHashMap<String, String>> authz = 
          (List<LinkedHashMap<String, String>>) map.get("authorities");
        for (LinkedHashMap<String, String> entry : authz) {
            authorities.add(entry.get("authority"));
        }
        return String.join(",", authorities);
    }
}

5. Conclusion

In this article, we’ve implemented an application that delegates user authentication to a third party, as well as to a custom authorization server, and demonstrated how to customize both Principal and Authorities.

As usual, the implementation of this example can be found over on Github.

When running it locally, we can test the application at localhost:8082.

Server-Sent Events (SSE) In JAX-RS

1. Overview

Server-Sent Events (SSE) is an HTTP-based specification that provides a way to establish a long-running, unidirectional connection from the server to the client.

The client initiates the SSE connection by using the media type text/event-stream in the Accept header.

From then on, the client receives updates automatically, without having to poll the server.

We can find more details in the official spec.

In this tutorial, we’ll introduce the new JAX-RS 2.1 implementation of SSE.

Hence, we’ll look at how we can publish events with the JAX-RS Server API. Also, we’ll explore how we can consume them, either with the JAX-RS Client API or with a plain HTTP client like the curl tool.

2. Understanding SSE Events

An SSE Event is a block of text composed of the following fields:

  • Event: the event’s type. The server can send many messages of different types and the client may only listen for a particular type or can process differently each event type
  • Data: the message sent by the server. We can have many data lines for the same event
  • Id: the id of the event, used to send the Last-Event-ID header, after a connection retry. It is useful as it can prevent the server from sending already sent events
  • Retry: the time, in milliseconds, for the client to establish a new connection when the current is lost. The last received Id will be automatically sent through the Last-Event-ID header
  • ‘:’: a line starting with a colon is a comment and is ignored by the client

Also, two consecutive events are separated by a double newline ‘\n\n‘.

Additionally, the data in the same event can be written across multiple lines, as can be seen in the following example:

event: stock
id: 1
: price change
retry: 4000
data: {"dateTime":"2018-07-14T18:06:00.285","id":1,
data: "name":"GOOG","price":75.7119}

event: stock
id: 2
: price change
retry: 4000
data: {"dateTime":"2018-07-14T18:06:00.285","id":2,"name":"IBM","price":83.4611}

In JAX RS, an SSE event is abstracted by the SseEvent interface, or more precisely, by the two subinterfaces OutboundSseEvent and InboundSseEvent.

While the OutboundSseEvent is used on the Server API and represents a sent event, the InboundSseEvent is used by the Client API and represents a received event.

3. Publishing SSE Events

Now that we’ve discussed what an SSE event is, let’s see how we can build it and send it to an HTTP client.

3.1. Project Setup

We already have a tutorial about setting up a JAX RS-based Maven project. Feel free to have a look there to see how to set dependencies and get started with JAX RS.

3.2. SSE Resource Method

An SSE Resource method is a JAX RS method that:

  • Can produce a text/event-stream media type
  • Has an injected SseEventSink parameter, where events are sent
  • May also have an injected Sse parameter which is used as an entry point to create an event builder
@GET
@Path("prices")
@Produces("text/event-stream")
public void getStockPrices(@Context SseEventSink sseEventSink, @Context Sse sse) {
    //...
}

Consequently, the client should make the initial HTTP request with the following HTTP header:

Accept: text/event-stream

3.3. The SSE Instance

An SSE instance is a context bean that the JAX RS Runtime will make available for injection.

We could use it as a factory to create:

  • OutboundSseEvent.Builder – allows us to create events
  • SseBroadcaster – allows us to broadcast events to multiple subscribers

Let’s see how that works:

@Context
public void setSse(Sse sse) {
    this.sse = sse;
    this.eventBuilder = sse.newEventBuilder();
    this.sseBroadcaster = sse.newBroadcaster();
}

Now, let’s focus on the event builder. OutboundSseEvent.Builder is responsible for creating the OutboundSseEvent:

OutboundSseEvent sseEvent = this.eventBuilder
  .name("stock")
  .id(String.valueOf(lastEventId))
  .mediaType(MediaType.APPLICATION_JSON_TYPE)
  .data(Stock.class, stock)
  .reconnectDelay(4000)
  .comment("price change")
  .build();

As we can see, the builder has methods to set values for all the event fields shown above. Additionally, the mediaType() method specifies the media type used to serialize the data field’s Java object to a suitable text format.

By default, the media type of the data field is text/plain. Hence, it doesn’t need to be explicitly specified when dealing with the String data type.

Otherwise, if we want to handle a custom object, we need to specify the media type or provide a custom MessageBodyWriter. The JAX RS Runtime provides MessageBodyWriters for the most common media types.
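
As an illustration, a minimal custom MessageBodyWriter for our Stock class might look like the following sketch (it assumes Stock exposes simple getters and hand-rolls the JSON, just to show the mechanics):

import java.io.IOException;
import java.io.OutputStream;
import java.lang.annotation.Annotation;
import java.lang.reflect.Type;
import java.nio.charset.StandardCharsets;

import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.MultivaluedMap;
import javax.ws.rs.ext.MessageBodyWriter;
import javax.ws.rs.ext.Provider;

@Provider
@Produces(MediaType.APPLICATION_JSON)
public class StockMessageBodyWriter implements MessageBodyWriter<Stock> {

    @Override
    public boolean isWriteable(Class<?> type, Type genericType,
      Annotation[] annotations, MediaType mediaType) {
        return Stock.class.isAssignableFrom(type);
    }

    @Override
    public long getSize(Stock stock, Class<?> type, Type genericType,
      Annotation[] annotations, MediaType mediaType) {
        return -1; // the size isn't known in advance
    }

    @Override
    public void writeTo(Stock stock, Class<?> type, Type genericType,
      Annotation[] annotations, MediaType mediaType,
      MultivaluedMap<String, Object> httpHeaders, OutputStream entityStream)
      throws IOException {
        // serialize the data field's object to JSON by hand
        String json = "{\"id\":" + stock.getId()
          + ",\"name\":\"" + stock.getName() + "\""
          + ",\"price\":" + stock.getPrice() + "}";
        entityStream.write(json.getBytes(StandardCharsets.UTF_8));
    }
}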

The Sse instance also has two builder shortcuts for creating an event with only the data field, or with the event name and data fields:

OutboundSseEvent sseEvent = sse.newEvent("cool Event");
OutboundSseEvent sseEvent = sse.newEvent("typed event", "data Event");

3.4. Sending Simple Event

Now that we know how to build events and understand how an SSE resource works, let’s send a simple event.

The SseEventSink interface abstracts a single HTTP connection. The JAX-RS Runtime can make it available only through injection in the SSE resource method.

Sending an event is then as simple as invoking SseEventSink.send(). 

In the next example, we’ll send a bunch of stock updates and eventually close the event stream:

@GET
@Path("prices")
@Produces("text/event-stream")
public void getStockPrices(@Context SseEventSink sseEventSink /*..*/) {
    int lastEventId = //..;
    while (running) {
        Stock stock = stockService.getNextTransaction(lastEventId);
        if (stock != null) {
            OutboundSseEvent sseEvent = this.eventBuilder
              .name("stock")
              .id(String.valueOf(lastEventId))
              .mediaType(MediaType.APPLICATION_JSON_TYPE)
              .data(Stock.class, stock)
              .reconnectDelay(3000)
              .comment("price change")
              .build();
            sseEventSink.send(sseEvent);
            lastEventId++;
        }
     //..
    }
    sseEventSink.close();
}

After sending all events, the server closes the connection, either by explicitly invoking the close() method or, preferably, by using try-with-resources, as SseEventSink extends the AutoCloseable interface:

try (SseEventSink sink = sseEventSink) {
    OutboundSseEvent sseEvent = //..
    sink.send(sseEvent);
}

In our sample app we can see this running if we visit:

http://localhost:9080/sse-jaxrs-server/sse.html

3.5. Broadcasting Events

Broadcasting is the process by which events are sent to multiple clients simultaneously. This is accomplished by the SseBroadcaster API, and it is done in three simple steps:

First, we create the SseBroadcaster object from an injected Sse context as shown previously:

SseBroadcaster sseBroadcaster = sse.newBroadcaster();

Then, clients should subscribe to be able to receive Sse Events. This is generally done in an SSE resource method where a SseEventSink context instance is injected:

@GET
@Path("subscribe")
@Produces(MediaType.SERVER_SENT_EVENTS)
public void listen(@Context SseEventSink sseEventSink) {
    this.sseBroadcaster.register(sseEventSink);
}

And finally, we can trigger the event publishing by invoking the broadcast() method:

@GET
@Path("publish")
public void broadcast() {
    OutboundSseEvent sseEvent = //...;
    this.sseBroadcaster.broadcast(sseEvent);
}

This will send the same event to each registered SseEventSink.

To showcase the broadcasting, we can access this URL:

http://localhost:9080/sse-jaxrs-server/sse-broadcast.html

And then we can trigger the broadcasting by invoking the broadcast() resource method:

curl -X GET http://localhost:9080/sse-jaxrs-server/sse/stock/publish

4. Consuming SSE Events

To consume an SSE event sent by the server, we can use any HTTP client, but for this tutorial, we’ll use the JAX RS client API.
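
Before that, as a quick check, we could also consume the stream with curl alone; assuming the prices resource from section 3 is mounted under /sse/stock, something like this would print the raw events (the -N flag disables output buffering):

curl -N -H "Accept: text/event-stream" http://localhost:9080/sse-jaxrs-server/sse/stock/prices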

4.1. JAX RS Client API for SSE

To get started with the client API for SSE, we need to provide the dependencies for a JAX RS client implementation.

Here, we’ll use Apache CXF client implementation:

<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-rs-client</artifactId>
    <version>${cxf-version}</version>
</dependency>
<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-rs-sse</artifactId>
    <version>${cxf-version}</version>
</dependency>

The SseEventSource is the heart of this API, and it is constructed from a WebTarget.

We start by listening for incoming events, which are abstracted by the InboundSseEvent interface:

Client client = ClientBuilder.newClient();
WebTarget target = client.target(url);
try (SseEventSource source = SseEventSource.target(target).build()) {
    source.register((inboundSseEvent) -> System.out.println(inboundSseEvent));
    source.open();
}

Once the connection is established, the registered event consumer will be invoked for each received InboundSseEvent.

We can then use the readData() method to read the original data String:

String data = inboundSseEvent.readData();

Or, we can use the overloaded version to get a deserialized Java object, given a suitable media type:

Stock stock = inboundSseEvent.readData(Stock.class, MediaType.APPLICATION_JSON_TYPE);

Here, we just provided a simple event consumer that prints the incoming events to the console.

5. Conclusion

In this tutorial, we focused on how to use Server-Sent Events in JAX-RS 2.1. We provided an example that showcases how to send events to a single client, as well as how to broadcast events to multiple clients.

Finally, we consumed these events using the JAX-RS client API.

As usual, the code of this tutorial can be found over on Github.
