
Self-Hosted Monitoring For Spring Boot Applications

1. Introduction

One of the many great features of Spring Boot is the set of built-in actuators. These actuators provide an easy way to monitor and control just about every aspect of a Spring Boot application.

In this tutorial, we'll look at using the metrics actuator to create a self-hosted monitoring solution for Spring Boot applications.

2. Metrics Database

The first part of monitoring Spring Boot applications is choosing a metrics database. By default, Spring Boot will configure a Micrometer metrics registry in every application.

This default implementation collects a pre-defined set of application metrics such as memory and CPU usage, HTTP requests, and a few others. But these metrics are stored in memory only, meaning they will be lost any time the application is restarted.

To create a self-hosted monitoring solution, we should first choose a metrics database that lives outside the Spring Boot application. The following sections will discuss just a few of the available self-hosted options.

Note that any time Spring Boot detects another metrics database on the classpath, it automatically disables the in-memory registry.
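
Whichever backend we pick, our application code records metrics through the same Micrometer API. As a quick sketch (the bean and metric names here are just examples, not part of the original tutorial), a custom counter can be registered against the auto-configured io.micrometer.core.instrument.MeterRegistry like this:

@Component
public class OrderMetrics {

    private final Counter ordersCreated;

    public OrderMetrics(MeterRegistry registry) {
        // "orders.created" is an arbitrary example metric name
        this.ordersCreated = Counter.builder("orders.created")
          .description("number of orders created")
          .register(registry);
    }

    public void onOrderCreated() {
        ordersCreated.increment();
    }
}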

2.1. InfluxDB

InfluxDB is an open-source time-series database. The quickest way to get started with InfluxDB is to run it locally as a Docker container:

docker run -p 8086:8086 -v /tmp:/var/lib/influxdb influxdb

Note that this will store metrics in the local /tmp partition. This is fine for development and testing, but would not be a good choice for production environments.

Once InfluxDB is running, we can configure our Spring Boot application to publish metrics to it by pointing the export URI at our instance in application.properties. For the local Docker container above, that's http://localhost:8086; in another environment, it might look like:

management.metrics.export.influx.uri=https://influx.example.com:8086
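
Note that the export only takes effect when an InfluxDB-capable Micrometer registry is on the classpath; assuming the standard artifact, the dependency would look like this:

<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-influx</artifactId>
</dependency>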

InfluxDB does not provide a native visualization tool. However, it offers a separate tool called Chronograf that works well for visualizing InfluxDB data.

2.2. Prometheus

Prometheus is an open-source monitoring and alerting toolkit originally built at SoundCloud. It works slightly differently from InfluxDB.

Instead of configuring our application to publish metrics to Prometheus, we configure Prometheus to poll our application periodically.

First, we configure our Spring Boot application to expose a new Prometheus actuator endpoint. We do this by including the micrometer-registry-prometheus dependency:

<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>

This will create a new actuator endpoint that produces metrics data in a special format that Prometheus understands.
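
Depending on the Spring Boot version, we may also need to expose the new endpoint over HTTP explicitly; with Spring Boot 2.x, for example, we'd add the following to application.properties:

management.endpoints.web.exposure.include=prometheus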

Next, we have to configure Prometheus to poll our application by adding our desired configuration into a prometheus.yml file.

The following configuration instructs Prometheus to poll our application every 5 seconds, using the new actuator endpoint:

scrape_configs:
  - job_name: 'spring-actuator'
    metrics_path: '/actuator/prometheus'
    scrape_interval: 5s
    static_configs:
    - targets: ['127.0.0.1:8080']

Finally, we can start a local Prometheus server using Docker. This assumes our custom configuration file is located at /etc/prometheus/prometheus.yml on the local filesystem:

docker run -d \
--name=prometheus \
-p 9090:9090 \
-v /etc/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml \
prom/prometheus \
--config.file=/etc/prometheus/prometheus.yml

Prometheus provides its own visualization tool for viewing the metrics that it has collected. It can be accessed at the URL http://localhost:9090/.

2.3. Graphite

Graphite is another open-source time-series database. Its architecture is slightly more complicated than the other databases we've looked at, but with Docker, it's straightforward to run an instance locally:

docker run -d \
 --name graphite \
 --restart=always \
 -p 80:80 \
 -p 2003-2004:2003-2004 \
 -p 2023-2024:2023-2024 \
 -p 8125:8125/udp \
 -p 8126:8126 \
 graphiteapp/graphite-statsd

Then we can configure Spring Boot to publish metrics to our instance by adding the micrometer-registry-graphite dependency:

<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-graphite</artifactId>
</dependency>

We also need to add the configuration properties to application.properties:

management.metrics.export.graphite.host=127.0.0.1
management.metrics.export.graphite.port=2004

Like Prometheus, Graphite includes its own visualization dashboard. It is available at the URL http://localhost/.

3. Visualization Tools

Once we have a solution for storing metrics outside of our Spring Boot application, the next decision is how we want to visualize the data.

Some of the metrics databases mentioned previously include their own visualization tools. However, there is also a stand-alone visualization tool that is worth looking at for our self-hosted monitoring solution.

3.1. Grafana

Grafana is an open-source analytics and monitoring tool. It can connect to all of the previously mentioned databases, as well as many others.

Grafana generally provides better configuration and superior alerting than most built-in visualization tools. It can easily be extended using plugins, and there are lots of pre-built dashboards that can be imported to quickly create our own visualizations.

To run Grafana locally, we can start it using Docker:

docker run -d -p 3000:3000 grafana/grafana

We can now access the Grafana home page at the URL http://localhost:3000/.

At this point, we would need to configure one or more data sources. This can be any of the metrics databases discussed previously or a variety of other supported tools.

Once a data source is configured, we can build a new dashboard or import one that does what we want.

4. Conclusion

In this article, we have looked at creating a self-hosted monitoring solution for Spring Boot applications.

We looked at three metrics databases that Spring Boot readily supports and saw how to run them locally.

We also looked briefly at Grafana, a powerful visualization tool that can display metrics data from a variety of sources.


Graceful Shutdown of a Spring Boot Application

1. Overview

On shutdown, by default, Spring's TaskExecutor simply interrupts all running tasks, but it may be nice to instead have it wait for all running tasks to be complete. This gives a chance for each task to take measures to ensure the shutdown is safe.

In this quick tutorial, we'll learn how to do this more graceful shutdown of a Spring Boot application when it involves tasks executing using thread pools.

2. Simple Example

Let's consider a simple Spring Boot application. We'll autowire the default TaskExecutor bean:

@Autowired
private TaskExecutor taskExecutor;

On application startup, let's execute a 1-minute-long process using a thread from the thread pool:

taskExecutor.execute(() -> {
    try {
        Thread.sleep(60_000);
    } catch (InterruptedException ignored) { }
});

When a shutdown is initiated, for example, 20 seconds after startup, the thread in the example is interrupted and the application shuts down immediately.

3. Wait for Tasks to Complete

Let's change the default behavior of task executor by creating a custom ThreadPoolTaskExecutor bean.

This class provides a flag setWaitForTasksToCompleteOnShutdown to prevent interrupting running tasks. Let's set it to true:

@Bean
public TaskExecutor taskExecutor() {
    ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
    taskExecutor.setCorePoolSize(2);
    taskExecutor.setMaxPoolSize(2);
    taskExecutor.setWaitForTasksToCompleteOnShutdown(true);
    taskExecutor.initialize();
    return taskExecutor;
}

And we'll rewrite the earlier startup logic to submit three 1-minute-long tasks:

@PostConstruct
public void runTaskOnStartup() {
    for (int i = 0; i < 3; i++) {
        taskExecutor.execute(() -> {
            try {
                Thread.sleep(60_000);
            } catch (InterruptedException ignored) {
            }
        });
    }
}

Let's now initiate a shutdown within the first 60 seconds after the startup.

We see that the application shuts down only 120 seconds after startup. The pool size of 2 allows only two simultaneous tasks to execute so the third one is queued up.

Setting the flag ensures that both the currently executing tasks and queued up tasks are completed.

Note that when a shutdown request is received, the task executor closes the queue so that new tasks can't be added.

4. Max Wait Time Before Termination

Though we've configured the executor to wait for ongoing and queued tasks to complete, Spring continues shutting down the rest of the container. This could release resources needed by our task executor and cause the tasks to fail.

In order to block the shutdown of the rest of the container, we can specify a max wait time on the ThreadPoolTaskExecutor:

taskExecutor.setAwaitTerminationSeconds(30);

This ensures that for the specified time period, the shutdown process at the container level will be blocked.

When we set the setWaitForTasksToCompleteOnShutdown flag to true, we need to specify a significantly higher timeout so that all remaining tasks in the queue are also executed.
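
Putting both settings together, our task executor bean might look like this sketch (the pool sizes and 60-second timeout are just example values):

@Bean
public TaskExecutor taskExecutor() {
    ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
    taskExecutor.setCorePoolSize(2);
    taskExecutor.setMaxPoolSize(2);
    // wait for running and queued tasks on shutdown...
    taskExecutor.setWaitForTasksToCompleteOnShutdown(true);
    // ...but give up after 60 seconds
    taskExecutor.setAwaitTerminationSeconds(60);
    taskExecutor.initialize();
    return taskExecutor;
}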

5. Conclusion

In this quick tutorial, we saw how to shut down a Spring Boot application safely by configuring the task executor bean to let running and queued tasks finish. This guarantees that all tasks have the specified amount of time to complete their work.

One obvious side effect is that it can also lead to a longer shutdown phase. Therefore, we need to decide whether or not to use it depending on the nature of the application.

As always, the examples from this article are available over on GitHub.

Linux Commands – Remove All Text After X

1. Overview

There are various occasions when we might want to remove the text after a specific character or set of characters. For example, one typical scenario is when we want to remove the extension of a particular filename.

In this quick tutorial, we’re going to explore several approaches to see how we can manipulate strings to remove text after a given pattern. We'll be using the Bash shell in our examples, but these commands may also work in other POSIX shells.

2. Native String Manipulation

Let's start by taking a look at how we can remove text using some of the built-in string manipulation operations offered by Bash. For this, we're going to be using a feature of the shell called parameter expansion.

To quickly recap, parameter expansion is the process where Bash expands a variable with a given value. To achieve this we simply use a dollar sign followed by our variable name, enclosed in braces:

my_var="Hola Mundo"
echo ${my_var}

As expected the above example results in the output:

Hola Mundo

But as we're going to see, during the expansion process we can also modify the variable's value or replace it with other values.

Now that we understand the basics of parameter expansion, in the next subsections we'll explain several different ways to delete parts of our variable.

In all our examples, we'll focus on a pretty simple use case: removing the file extension from a filename.

2.1. Extracting Characters Using a Given Position and Length

We'll start by seeing how to extract a substring of a particular length using a given starting position:

my_filename="interesting-text-file.txt"
echo ${my_filename:0:21}

This gives us the output:

interesting-text-file

In this example, we're extracting a substring from the my_filename variable, starting at position 0 and with a length of 21 characters. In effect, we're saying: remove all the text after position 21, which in this case is the .txt extension.

Although this solution works, there are some obvious downsides:

  • Not all the filenames will have the same length
  • We'd need to calculate where the file extension starts to make this a more dynamic solution
  • To the naked eye, it isn't very intuitive what the code is actually doing

In the next example, we'll see a more elegant solution.

2.2. Deleting the Shortest Match

Now we're going to see how we can delete the shortest substring match from the back of our variable:

echo ${my_filename%.*}

Let's explain in more detail what we're doing in the above example:

  • We use the ‘%' character which has a special meaning and strips from the back of the string
  • Then we use ‘.*' to match the substring that starts with a dot
  • We then execute the echo command to output the result of this substring manipulation

Again we delete the substring ‘.txt' resulting in the output:

interesting-text-file

2.3. Deleting the Longest Match

Likewise, we can also delete the longest substring match from our filename. Let's now imagine a slightly more complicated scenario where our filename has more than one extension:

complicated_filename="hello-world.tar.gz"
echo ${complicated_filename%%.*}

In this variation ‘%%.*' strips the longest match for ‘.*' from the back of our complicated_filename variable. This simply matches “.tar.gz” resulting in:

hello-world

2.4. Using Find and Replace

In this final string manipulation example, we'll see how to use the built-in find and replace capabilities of Bash:

echo ${my_filename/.*/}

In order to understand this example, let's first understand the syntax of substring replacement:

${string/substring/replacement}

Now, to put this into context, we're replacing the first match of ‘.*' in the my_filename variable with an empty string. In this case, we again remove the extension.

3. Using the sed Command

In this penultimate section, we'll see how we can use the sed command. The sed command is a powerful stream editor which we can use to perform basic and complex text transformations.

Using this command, we can find a pattern and replace it with another pattern. When the replace placeholder is left empty, the pattern gets deleted.

As per our other example, we'll simply echo the name of our file after we have removed the extension:

echo 'interesting-text-file.txt' | sed 's/.txt*//'

In this example, instead of assigning our filename to a variable, we start by piping it into the sed command.

The pattern we search for is ‘.txt*', and since the replace part of the command is left empty, it gets removed from the filename. Note that in sed's regular expressions the dot matches any character and the ‘*' applies only to the final ‘t', so a stricter pattern would be ‘s/\.txt$//'. Either way, the result is to simply echo the value of the filename without the extension.

4. Using the cut Command

In this final example, we'll explore the cut command. As the name suggests we can use the cut command for cutting out sections from text:

echo 'interesting-text-file.txt' | cut -f1 -d"."

Let's take a look at the command in more detail to understand it properly:

  • We first use the -f option to specify the field number which indicates the field to extract
  • The -d option is used to specify the field separator or delimiter, in this example a ‘.'

Output fields are separated by a single occurrence of the field delimiter character. This means that in our example we end up with two fields split by the dot. Consequently, we select the first one, discarding the ‘.txt' extension in the process.

5. Conclusion

In this quick tutorial, we’ve described a few ways that help us remove text from a string.

First, we explored native string manipulation using parameter expansion. Later, we saw an example with the powerful stream-editing command sed. Then, we showed how we could achieve similar results using the cut command.

As always, the full source code of the article is available over on GitHub.

Parsing an XML File Using SAX Parser

1. Overview

SAX, also known as the Simple API for XML, is used for parsing XML documents.

In this tutorial, we'll learn what SAX is and why, when and how it should be used.

2. SAX: The Simple API for XML

SAX is an API used to parse XML documents. It is based on events generated while reading through the document. Callback methods receive those events. A custom handler contains those callback methods.

The API is efficient because it drops events right after the callbacks receive them. Therefore, SAX has efficient memory management, unlike DOM, for example.

3. SAX vs DOM

DOM stands for Document Object Model. The DOM parser does not rely on events. Moreover, it loads the whole XML document into memory to parse it. SAX is more memory-efficient than DOM.

DOM has its benefits, too. For example, DOM supports XPath. It makes it also easy to operate on the whole document tree at once since the document is loaded into memory.

4. SAX vs StAX

StAX is more recent than SAX and DOM. It stands for Streaming API for XML.

The main difference with SAX is that StAX uses a pull mechanism instead of SAX's push mechanism (using callbacks).
This means the control is given to the client to decide when the events need to be pulled. Therefore, there is no obligation to pull the whole document if only a part of it is needed.

It provides an easy API for working with XML, with a memory-efficient way of parsing.

Unlike SAX, it doesn't provide schema validation as one of its features.
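
To make the contrast concrete, here's a minimal StAX sketch (using the javax.xml.stream API) that pulls events from the same file we'll parse below and prints the element names; it's only an illustration, not part of the SAX example that follows:

XMLInputFactory factory = XMLInputFactory.newInstance();
try (FileInputStream input = new FileInputStream("src/test/resources/sax/baeldung.xml")) {
    XMLStreamReader reader = factory.createXMLStreamReader(input);
    while (reader.hasNext()) {
        // the client decides when to pull the next event
        if (reader.next() == XMLStreamConstants.START_ELEMENT) {
            System.out.println(reader.getLocalName());
        }
    }
}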

5. Parsing the XML File Using a Custom Handler

Let's now use the following XML representing the Baeldung website and its articles:

<baeldung>
    <articles>
        <article>
            <title>Parsing an XML File Using SAX Parser</title>
            <content>SAX Parser's Lorem ipsum...</content>
        </article>
        <article>
            <title>Parsing an XML File Using DOM Parser</title>
            <content>DOM Parser's Lorem ipsum...</content>
        </article>
        <article>
            <title>Parsing an XML File Using StAX Parser</title>
            <content>StAX's Lorem ipsum...</content>
        </article>
    </articles>
</baeldung>

We'll begin by creating POJOs for our Baeldung root element and its children:

public class Baeldung {
    private List<BaeldungArticle> articleList;
    // usual getters and setters
}
public class BaeldungArticle {
    private String title;
    private String content;
    // usual getters and setters
}

We'll continue by creating the BaeldungHandler. This class will implement the callback methods necessary to capture the events.

We'll override four methods from the superclass DefaultHandler, each characterizing an event:

    • characters(char[], int, int) receives characters with boundaries. We'll convert them to a String and store it in a variable of BaeldungHandler
    • startDocument() is invoked when the parsing begins – we'll use it to construct our Baeldung instance
    • startElement() is invoked when the parsing begins for an element – we'll use it to construct either List<BaeldungArticle> or BaeldungArticle instances – qName helps us make the distinction between both types
    • endElement() is invoked when the parsing ends for an element – this is when we'll assign the content of the tags to their respective variables

With all the callbacks defined, we can now write the BaeldungHandler class:

public class BaeldungHandler extends DefaultHandler {
    private static final String ARTICLES = "articles";
    private static final String ARTICLE = "article";
    private static final String TITLE = "title";
    private static final String CONTENT = "content";

    private Baeldung website;
    private String elementValue;

    @Override
    public void characters(char[] ch, int start, int length) throws SAXException {
        elementValue = new String(ch, start, length);
    }

    @Override
    public void startDocument() throws SAXException {
        website = new Baeldung();
    }

    @Override
    public void startElement(String uri, String lName, String qName, Attributes attr) throws SAXException {
        switch (qName) {
            case ARTICLES:
                website.articleList = new ArrayList<>();
                break;
            case ARTICLE:
                website.articleList.add(new BaeldungArticle());
        }
    }

    @Override
    public void endElement(String uri, String localName, String qName) throws SAXException {
        switch (qName) {
            case TITLE:
                latestArticle().title = elementValue;
                break;
            case CONTENT:
                latestArticle().content = elementValue;
                break;
        }
    }

    private BaeldungArticle latestArticle() {
        List<BaeldungArticle> articleList = website.articleList;
        int latestArticleIndex = articleList.size() - 1;
        return articleList.get(latestArticleIndex);
    }

    public Baeldung getWebsite() {
        return website;
    }
}

String constants have also been added to increase readability. A method to retrieve the latest encountered article is also convenient. Finally, we need a getter for the Baeldung object.

Note that the above isn't thread-safe since we're holding onto state in between the method calls.

6. Testing the Parser

In order to test the parser, we'll instantiate the SAXParserFactory, the SAXParser, and also the BaeldungHandler:

SAXParserFactory factory = SAXParserFactory.newInstance();
SAXParser saxParser = factory.newSAXParser();
SaxParserMain.BaeldungHandler baeldungHandler = new SaxParserMain.BaeldungHandler();

After that, we'll parse the XML file and assert that the object contains all expected elements parsed:

saxParser.parse("src/test/resources/sax/baeldung.xml", baeldungHandler);

SaxParserMain.Baeldung result = baeldungHandler.getWebsite();

assertNotNull(result);
List<SaxParserMain.BaeldungArticle> articles = result.getArticleList();

assertNotNull(articles);
assertEquals(3, articles.size());

SaxParserMain.BaeldungArticle articleOne = articles.get(0);
assertEquals("Parsing an XML File Using SAX Parser", articleOne.getTitle());
assertEquals("SAX Parser's Lorem ipsum...", articleOne.getContent());

SaxParserMain.BaeldungArticle articleTwo = articles.get(1);
assertEquals("Parsing an XML File Using DOM Parser", articleTwo.getTitle());
assertEquals("DOM Parser's Lorem ipsum...", articleTwo.getContent());

SaxParserMain.BaeldungArticle articleThree = articles.get(2);
assertEquals("Parsing an XML File Using StAX Parser", articleThree.getTitle());
assertEquals("StAX Parser's Lorem ipsum...", articleThree.getContent());

As expected, the baeldung.xml file has been parsed correctly, and the resulting object contains the expected sub-objects.

7. Conclusion

We just discovered how to use SAX to parse XML files. It's a powerful API with a light memory footprint in our applications.

As usual, the code for this article is available over on GitHub.

Using JaVers for Data Model Auditing in Spring Data

1. Overview

In this tutorial, we’ll see how to set up and use JaVers in a simple Spring Boot application to track changes of entities.

2. JaVers

When dealing with mutable data we usually have only the last state of an entity stored in a database. As developers, we spend a lot of time debugging an application, searching through log files for an event that changed a state. This gets even trickier in the production environment when lots of different users are using the system.

Fortunately, we have great tools like JaVers. JaVers is an audit log framework that helps to track changes of entities in the application.

The usage of this tool is not limited to debugging and auditing. It can also be successfully applied to perform analysis, enforce security policies, and maintain an event log.

3. Project Set-up

First of all, to start using JaVers we need to configure the audit repository for persisting snapshots of entities. Secondly, we need to adjust some configurable properties of JaVers. Finally, we'll also cover how to configure our domain models properly.

But it's worth mentioning that JaVers provides default configuration options, so we can start using it with almost no configuration.

3.1. Dependencies

First, we need to add the JaVers Spring Boot starter dependency to our project. Depending on the type of persistence storage, we have two options: org.javers:javers-spring-boot-starter-sql and org.javers:javers-spring-boot-starter-mongo. In this tutorial, we'll use the Spring Boot SQL starter.

<dependency>
    <groupId>org.javers</groupId>
    <artifactId>javers-spring-boot-starter-sql</artifactId>
    <version>5.6.3</version>
</dependency>

As we are going to use the H2 database, let’s also include this dependency:

<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
</dependency>

3.2. JaVers Repository Setup

JaVers uses a repository abstraction for storing commits and serialized entities. All data is stored in the JSON format. Therefore, it might be a good fit to use a NoSQL storage. However, for the sake of simplicity, we'll use an H2 in-memory instance.

By default, JaVers leverages an in-memory repository implementation, and if we're using Spring Boot, there is no need for extra configuration. Furthermore, while using Spring Data starters, JaVers reuses the database configuration for the application.

JaVers provides two starters for SQL and Mongo persistence stacks. They are compatible with Spring Data and don't require extra configuration by default. However, we can always override the default configuration beans: JaversSqlAutoConfiguration.java and JaversMongoAutoConfiguration.java, respectively.

3.3. JaVers Properties

JaVers allows configuring several options, though the Spring Boot defaults are sufficient in most use cases.

Let's override just one, newObjectSnapshot, so that we can get snapshots of newly created objects:

javers.newObjectSnapshot=true

3.4. JaVers Domain Configuration

JaVers internally defines the following types: Entities, Value Objects, Values, Containers, and Primitives. Some of these terms come from DDD (Domain Driven Design) terminology.

The main purpose of having several types is to provide different diff algorithms depending on the type. Each type has a corresponding diff strategy. As a consequence, if application classes are configured incorrectly we'll get unpredictable results.

To tell JaVers what type to use for a class, we have several options:

  • Explicitly – we can either call the register* methods of the JaversBuilder class or use annotations
  • Implicitly – JaVers provides algorithms for detecting types automatically based on class relations
  • Defaults – by default, JaVers will treat all classes as Value Objects

In this tutorial, we'll configure JaVers explicitly, using the annotation method.

The great thing is that JaVers is compatible with javax.persistence annotations. As a result, we won't need to use JaVers-specific annotations on our entities.

4. Sample Project

Now we're going to create a simple application that will include several domain entities that we'll be auditing.

4.1. Domain Models

Our domain will include stores with products.

Let's define the Store entity:

@Entity
public class Store {

    @Id
    @GeneratedValue
    private int id;
    private String name;

    @Embedded
    private Address address;

    @OneToMany(
      mappedBy = "store",
      cascade = CascadeType.ALL,
      orphanRemoval = true
    )
    private List<Product> products = new ArrayList<>();
    
    // constructors, getters, setters
}

Please note that we are using default JPA annotations. JaVers maps them in the following way:

  • @javax.persistence.Entity is mapped to @org.javers.core.metamodel.annotation.Entity
  • @javax.persistence.Embeddable is mapped to @org.javers.core.metamodel.annotation.ValueObject.

Embeddable classes are defined in the usual manner:

@Embeddable
public class Address {
    private String address;
    private Integer zipCode;
}
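
For comparison, if we preferred the explicit, programmatic approach instead of relying on the JPA annotations, a minimal sketch with JaversBuilder might look like this (we don't need it here, since the Spring Boot starter builds the Javers instance for us):

Javers javers = JaversBuilder.javers()
  .registerEntity(Store.class)
  .registerValueObject(Address.class)
  .build();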

4.2. Data Repositories

In order to audit JPA repositories, JaVers provides the @JaversSpringDataAuditable annotation.

Let’s define the StoreRepository with that annotation:

@JaversSpringDataAuditable
public interface StoreRepository extends CrudRepository<Store, Integer> {
}

Furthermore, we'll have the ProductRepository, but not annotated:

public interface ProductRepository extends CrudRepository<Product, Integer> {
}

Now consider a case when we are not using Spring Data repositories. JaVers has another method level annotation for that purpose: @JaversAuditable.

For example, we may define a method for persisting a product as follows:

@JaversAuditable
public void saveProduct(Product product) {
    // save object
}

Alternatively, we can even add this annotation directly above a method in the repository interface:

public interface ProductRepository extends CrudRepository<Product, Integer> {
    @Override
    @JaversAuditable
    <S extends Product> S save(S s);
}

4.3. Author Provider

Each committed change in JaVers should have its author. Moreover, JaVers supports Spring Security out of the box.

As a result, each commit is made by a specific authenticated user. However, for this tutorial we'll create a really simple custom implementation of the AuthorProvider Interface:

private static class SimpleAuthorProvider implements AuthorProvider {
    @Override
    public String provide() {
        return "Baeldung Author";
    }
}

And as the last step, to make JaVers use our custom implementation, we need to override the default configuration bean:

@Bean
public AuthorProvider provideJaversAuthor() {
    return new SimpleAuthorProvider();
}

5. JaVers Audit

Finally, we are ready to audit our application. We'll use a simple controller for dispatching changes into our application and retrieving the JaVers commit log. Alternatively, we can also access the H2 console to see the internal structure of our database.

To have some initial sample data, let’s use an EventListener to populate our database with some products:

@EventListener
public void appReady(ApplicationReadyEvent event) {
    Store store = new Store("Baeldung store", new Address("Some street", 22222));
    for (int i = 1; i < 3; i++) {
        Product product = new Product("Product #" + i, 100 * i);
        store.addProduct(product);
    }
    storeRepository.save(store);
}

5.1. Initial Commit

When an object is created, JaVers first makes a commit of the INITIAL type.

Let’s check the snapshots after the application startup:

@GetMapping("/stores/snapshots")
public String getStoresSnapshots() {
    QueryBuilder jqlQuery = QueryBuilder.byClass(Store.class);
    List<CdoSnapshot> snapshots = javers.findSnapshots(jqlQuery.build());
    return javers.getJsonConverter().toJson(snapshots);
}

In the code above, we're querying JaVers for snapshots for the Store class. If we make a request to this endpoint we’ll get a result like the one below:

[
  {
    "commitMetadata": {
      "author": "Baeldung Author",
      "properties": [],
      "commitDate": "2019-08-26T07:04:06.776",
      "commitDateInstant": "2019-08-26T04:04:06.776Z",
      "id": 1.00
    },
    "globalId": {
      "entity": "com.baeldung.springjavers.domain.Store",
      "cdoId": 1
    },
    "state": {
      "address": {
        "valueObject": "com.baeldung.springjavers.domain.Address",
        "ownerId": {
          "entity": "com.baeldung.springjavers.domain.Store",
          "cdoId": 1
        },
        "fragment": "address"
      },
      "name": "Baeldung store",
      "id": 1,
      "products": [
        {
          "entity": "com.baeldung.springjavers.domain.Product",
          "cdoId": 2
        },
        {
          "entity": "com.baeldung.springjavers.domain.Product",
          "cdoId": 3
        }
      ]
    },
    "changedProperties": [
      "address",
      "name",
      "id",
      "products"
    ],
    "type": "INITIAL",
    "version": 1
  }
]

Note that the snapshot above includes all products added to the store despite the missing annotation for the ProductRepository interface.

By default, JaVers will audit all related models of an aggregate root if they are persisted along with the parent.

We can tell JaVers to ignore specific classes by using the DiffIgnore annotation.

For instance, we may annotate the products field with the annotation in the Store entity:

@DiffIgnore
private List<Product> products = new ArrayList<>();

Consequently, JaVers won’t track changes of products originated from the Store entity.

5.2. Update Commit

The next type of commit is the UPDATE commit. This is the most valuable commit type as it represents changes of an object's state.

Let’s define a method that will update the store entity and all products in the store:

public void rebrandStore(int storeId, String updatedName) {
    Optional<Store> storeOpt = storeRepository.findById(storeId);
    storeOpt.ifPresent(store -> {
        store.setName(updatedName);
        store.getProducts().forEach(product -> {
            product.setNamePrefix(updatedName);
        });
        storeRepository.save(store);
    });
}

If we run this method, we'll get the following line in the debug output (assuming the same numbers of products and stores):

11:29:35.439 [http-nio-8080-exec-2] INFO  org.javers.core.Javers - Commit(id:2.0, snapshots:3, author:Baeldung Author, changes - ValueChange:3), done in 48 millis (diff:43, persist:5)

Since JaVers has persisted changes successfully, let’s query the snapshots for products:

@GetMapping("/products/snapshots")
public String getProductSnapshots() {
    QueryBuilder jqlQuery = QueryBuilder.byClass(Product.class);
    List<CdoSnapshot> snapshots = javers.findSnapshots(jqlQuery.build());
    return javers.getJsonConverter().toJson(snapshots);
}

We'll get previous INITIAL commits and new UPDATE commits:

 {
    "commitMetadata": {
      "author": "Baeldung Author",
      "properties": [],
      "commitDate": "2019-08-26T12:55:20.197",
      "commitDateInstant": "2019-08-26T09:55:20.197Z",
      "id": 2.00
    },
    "globalId": {
      "entity": "com.baeldung.springjavers.domain.Product",
      "cdoId": 3
    },
    "state": {
      "price": 200.0,
      "name": "NewProduct #2",
      "id": 3,
      "store": {
        "entity": "com.baeldung.springjavers.domain.Store",
        "cdoId": 1
      }
    }
}

Here, we can see all the information about the change we made.

It is worth noting that JaVers doesn’t create new connections to the database. Instead, it reuses existing connections. JaVers data is committed or rolled back along with application data in the same transaction.

5.3. Changes

JaVers records changes as atomic differences between versions of an object. As we can see from the JaVers schema, there is no separate table for storing changes, so JaVers calculates changes dynamically as the difference between snapshots.

Let’s update a product price:

public void updateProductPrice(Integer productId, Double price) {
    Optional<Product> productOpt = productRepository.findById(productId);
    productOpt.ifPresent(product -> {
        product.setPrice(price);
        productRepository.save(product);
    });
}

Then, let's query JaVers for changes:

@GetMapping("/products/{productId}/changes")
public String getProductChanges(@PathVariable int productId) {
    Product product = storeService.findProductById(productId);
    QueryBuilder jqlQuery = QueryBuilder.byInstance(product);
    Changes changes = javers.findChanges(jqlQuery.build());
    return javers.getJsonConverter().toJson(changes);
}

The output contains the changed property and its values before and after:

[
  {
    "changeType": "ValueChange",
    "globalId": {
      "entity": "com.baeldung.springjavers.domain.Product",
      "cdoId": 2
    },
    "commitMetadata": {
      "author": "Baeldung Author",
      "properties": [],
      "commitDate": "2019-08-26T16:22:33.339",
      "commitDateInstant": "2019-08-26T13:22:33.339Z",
      "id": 2.00
    },
    "property": "price",
    "propertyChangeType": "PROPERTY_VALUE_CHANGED",
    "left": 100.0,
    "right": 3333.0
  }
]

To detect the type of a change, JaVers compares subsequent snapshots of an object. In the case above, since we've changed a property of the entity, we get the PROPERTY_VALUE_CHANGED change type.

5.4. Shadows

Moreover, JaVers provides another view of audited entities called Shadow. A Shadow represents an object state restored from snapshots. This concept is closely related to Event Sourcing.

There are four different scopes for Shadows:

  • Shallow — shadows are created from a snapshot selected within a JQL query
  • Child-value-object — shadows contain all child value objects owned by selected entities
  • Commit-deep — shadows are created from all snapshots related to selected entities
  • Deep+ — JaVers tries to restore full object graphs with (possibly) all objects loaded.

Let’s use the Child-value-object scope and get a shadow for a single store:

@GetMapping("/stores/{storeId}/shadows")
public String getStoreShadows(@PathVariable int storeId) {
    Store store = storeService.findStoreById(storeId);
    JqlQuery jqlQuery = QueryBuilder.byInstance(store)
      .withChildValueObjects().build();
    List<Shadow<Store>> shadows = javers.findShadows(jqlQuery);
    return javers.getJsonConverter().toJson(shadows.get(0));
}

As a result, we'll get the store entity with the Address value object:

{
  "commitMetadata": {
    "author": "Baeldung Author",
    "properties": [],
    "commitDate": "2019-08-26T16:09:20.674",
    "commitDateInstant": "2019-08-26T13:09:20.674Z",
    "id": 1.00
  },
  "it": {
    "id": 1,
    "name": "Baeldung store",
    "address": {
      "address": "Some street",
      "zipCode": 22222
    },
    "products": []
  }
}

To get products in the result we may apply the Commit-deep scope.
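
For instance, the same endpoint could build its query like this sketch, only changing the scope:

JqlQuery jqlQuery = QueryBuilder.byInstance(store)
  .withScopeCommitDeep().build();
List<Shadow<Store>> shadows = javers.findShadows(jqlQuery);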

6. Conclusion

In this tutorial, we've seen how easily JaVers integrates with Spring Boot and Spring Data in particular. All in all, JaVers requires almost zero configuration to set up.

To conclude, JaVers can have different applications, from debugging to complex analysis.

The full project for this article is available over on GitHub.

Grouping Javax Validation Constraints

1. Introduction

In our Java Bean Validation Basics tutorial, we saw the usage of various built-in javax.validation constraints. In this tutorial, we'll see how to group javax.validation constraints.

2. Use Case

There are many scenarios where we need to apply constraints on a certain set of fields of the bean, and then later we want to apply constraints on another set of fields of the same bean.

For example, let us imagine that we have a two-step signup form. In the first step, we ask the user to provide basic information like the first name, last name, email id, phone number, and captcha. When the user submits this data, we want to validate this information only.

In the next step, we ask the user to provide some other information like an address, and we want to validate this information as well — note that captcha is present in both steps.

3. Grouping Validation Constraints

All javax validation constraints have an attribute named groups. When we add a constraint to an element, we can declare the name of the group to which the constraint belongs. This is done by specifying the class name of the group interface in the groups attribute of the constraint.

The best way to understand something is to get our hands dirty. Let's see in action how we combine javax constraints into groups.

3.1. Declaring Constraint Groups

The first step is to create some interfaces. These interfaces will be the constraint group names. In our use-case, we're dividing validation constraints into two groups.

Let's see the first constraint group, BasicInfo:

public interface BasicInfo {
}

The next constraint group is AdvanceInfo:

public interface AdvanceInfo {
}

3.2. Using Constraint Groups

Now that we've declared our constraint groups, it's time to use them in our RegistrationForm Java bean:

public class RegistrationForm {
    @NotBlank(groups = BasicInfo.class)
    private String firstName;
    @NotBlank(groups = BasicInfo.class)
    private String lastName;
    @Email(groups = BasicInfo.class)
    private String email;
    @NotBlank(groups = BasicInfo.class)
    private String phone;

    @NotBlank(groups = {BasicInfo.class, AdvanceInfo.class})
    private String captcha;

    @NotBlank(groups = AdvanceInfo.class)
    private String street;
    
    @NotBlank(groups = AdvanceInfo.class)
    private String houseNumber;
    
    @NotBlank(groups = AdvanceInfo.class)
    private String zipCode;
    
    @NotBlank(groups = AdvanceInfo.class)
    private String city;
    
    @NotBlank(groups = AdvanceInfo.class)
    private String country;
}

With the constraint groups attribute, we have divided the fields of our bean into two groups according to our use case. By default, all constraints are included in the Default constraint group.
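
Note that once a constraint declares an explicit group, it no longer belongs to the Default group. So validating this bean without naming a group reports nothing, even for an empty form. Here's a quick sketch, assuming a Validator instance like the one we'll build in the tests below:

Set<ConstraintViolation<RegistrationForm>> violations = validator.validate(new RegistrationForm());

// empty: every constraint above was assigned to BasicInfo or AdvanceInfo
assertThat(violations).isEmpty();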

3.3. Testing Constraints Having One Group

Now that we've declared constraint groups and used them in our bean class, it's time to see these constraint groups in action.

First, we'll see when basic information is not complete, using our BasicInfo constraint group for validation. We should get a constraint violation for any field left blank where we used BasicInfo.class in the groups attribute of the field's @NotBlank constraint:

public class RegistrationFormUnitTest {
    private static Validator validator;

    @BeforeClass
    public static void setupValidatorInstance() {
        validator = Validation.buildDefaultValidatorFactory().getValidator();
    }

    @Test
    public void whenBasicInfoIsNotComplete_thenShouldGiveConstraintViolationsOnlyForBasicInfo() {
        RegistrationForm form = buildRegistrationFormWithBasicInfo();
        form.setFirstName("");
 
        Set<ConstraintViolation<RegistrationForm>> violations = validator.validate(form, BasicInfo.class);
 
        assertThat(violations.size()).isEqualTo(1);
        violations.forEach(action -> {
            assertThat(action.getMessage()).isEqualTo("must not be blank");
            assertThat(action.getPropertyPath().toString()).isEqualTo("firstName");
        });
    }

    private RegistrationForm buildRegistrationFormWithBasicInfo() {
        RegistrationForm form = new RegistrationForm();
        form.setFirstName("devender");
        form.setLastName("kumar");
        form.setEmail("anyemail@yopmail.com");
        form.setPhone("12345");
        form.setCaptcha("Y2HAhU5T");
        return form;
    }
 
    //... additional tests
}

In the next scenario, we'll check when the advanced information is incomplete, using our AdvanceInfo constraint group for validation:

@Test
public void whenAdvanceInfoIsNotComplete_thenShouldGiveConstraintViolationsOnlyForAdvanceInfo() {
    RegistrationForm form = buildRegistrationFormWithAdvanceInfo();
    form.setZipCode("");
 
    Set<ConstraintViolation<RegistrationForm>> violations = validator.validate(form, AdvanceInfo.class);
 
    assertThat(violations.size()).isEqualTo(1);
    violations.forEach(action -> {
        assertThat(action.getMessage()).isEqualTo("must not be blank");
        assertThat(action.getPropertyPath().toString()).isEqualTo("zipCode");
    });
}

private RegistrationForm buildRegistrationFormWithAdvanceInfo() {
    RegistrationForm form = new RegistrationForm();
    return populateAdvanceInfo(form);
}

private RegistrationForm populateAdvanceInfo(RegistrationForm form) {
    form.setCity("Berlin");
    form.setCountry("DE");
    form.setStreet("alexa str.");
    form.setZipCode("19923");
    form.setHouseNumber("2a");
    form.setCaptcha("Y2HAhU5T");
    return form;
}

3.4. Testing Constraints Having Multiple Groups

We can specify multiple groups for a constraint. In our use case, we're using captcha in both basic and advanced info. Let's first test the captcha with BasicInfo:

@Test
public void whenCaptchaIsBlank_thenShouldGiveConstraintViolationsForBasicInfo() {
    RegistrationForm form = buildRegistrationFormWithBasicInfo();
    form.setCaptcha("");
 
    Set<ConstraintViolation<RegistrationForm>> violations = validator.validate(form, BasicInfo.class);
 
    assertThat(violations.size()).isEqualTo(1);
    violations.forEach(action -> {
        assertThat(action.getMessage()).isEqualTo("must not be blank");
        assertThat(action.getPropertyPath().toString()).isEqualTo("captcha");
    });
}

Now let's test the captcha with AdvanceInfo:

@Test
public void whenCaptchaIsBlank_thenShouldGiveConstraintViolationsForAdvanceInfo() {
    RegistrationForm form = buildRegistrationFormWithAdvanceInfo();
    form.setCaptcha("");
 
    Set<ConstraintViolation<RegistrationForm>> violations = validator.validate(form, AdvanceInfo.class);
 
    assertThat(violations.size()).isEqualTo(1);
    violations.forEach(action -> {
        assertThat(action.getMessage()).isEqualTo("must not be blank");
        assertThat(action.getPropertyPath().toString()).isEqualTo("captcha");
    });
}

4. Specifying Constraint Group Validation Order with GroupSequence

By default, the constraint groups are not evaluated in any particular order. But we may have use cases where some groups should be validated before others. To achieve this, we can specify the order of group validation using GroupSequence.

There are two ways of using the GroupSequence annotation:

  • on the entity being validated
  • on an interface

4.1. Using GroupSequence on the Entity Being Validated

This is a simple way of ordering the constraints. Let's annotate the entity with GroupSequence and specify the order of constraints:

@GroupSequence({BasicInfo.class, AdvanceInfo.class})
public class RegistrationForm {
    @NotBlank(groups = BasicInfo.class)
    private String firstName;
    @NotBlank(groups = AdvanceInfo.class)
    private String street;
}

4.2. Using GroupSequence on an Interface

We can also specify the order of constraint validation using an interface. The advantage of this approach is that the same sequence can be used for other entities. Let's see how we can use GroupSequence with the interfaces we defined above:

@GroupSequence({BasicInfo.class, AdvanceInfo.class})
public interface CompleteInfo {
}

4.3. Testing GroupSequence

Now let's test GroupSequence. First, we will test that if BasicInfo is incomplete, then the AdvanceInfo group constraint will not be evaluated:

@Test
public void whenBasicInfoIsNotComplete_thenShouldGiveConstraintViolationsForBasicInfoOnly() {
    RegistrationForm form = buildRegistrationFormWithBasicInfo();
    form.setFirstName("");
 
    Set<ConstraintViolation<RegistrationForm>> violations = validator.validate(form, CompleteInfo.class);
 
    assertThat(violations.size()).isEqualTo(1);
    violations.forEach(action -> {
        assertThat(action.getMessage()).isEqualTo("must not be blank");
        assertThat(action.getPropertyPath().toString()).isEqualTo("firstName");
    });
}

Next, we'll test that when BasicInfo is complete, the AdvanceInfo constraints are evaluated as well; since both groups are complete here, there should be no violations:

@Test
public void whenBasicAndAdvanceInfoIsComplete_thenShouldNotGiveConstraintViolationsWithCompleteInfoValidationGroup() {
    RegistrationForm form = buildRegistrationFormWithBasicAndAdvanceInfo();
 
    Set<ConstraintViolation<RegistrationForm>> violations = validator.validate(form, CompleteInfo.class);
 
    assertThat(violations.size()).isEqualTo(0);
}

5. Conclusion

In this quick tutorial, we saw how to group javax.validation constraints.

As usual, all code snippets are available over on GitHub.

Authentication with HttpUrlConnection

1. Overview

In this tutorial, we're going to explore how to authenticate HTTP requests using the HttpUrlConnection class.

2. HTTP Authentication

In web applications, servers may require clients to authenticate themselves. Failing to comply usually results in the server returning an HTTP 401 (Unauthorized) status code.

There are multiple authentication schemes which differ in the security strength they provide. However, the implementation effort varies as well.

Let's see three of them:

  • basic is a scheme which we'll say more about in the next section
  • digest applies hash algorithms on user credentials and a server-specified nonce
  • bearer utilizes access tokens as part of OAuth 2.0

3. Basic Authentication

Basic authentication allows clients to authenticate themselves using an encoded user name and password via the Authorization header:

GET / HTTP/1.1
Authorization: Basic dXNlcjpwYXNzd29yZA==

To create the encoded user name and password string, we simply Base64-encode the username, followed by a colon, followed by the password:

basic(user, pass) = base64-encode(user + ":" + pass)

Remember some caution from RFC 7617, though:

This scheme is not considered to be a secure method of user authentication unless used in conjunction with some external secure system such as TLS

This is, of course, because the user name and password travel as plain text over the network with each request.

4. Authenticate a Connection

Okay, with that as background, let's jump into configuring HttpUrlConnection to use HTTP Basic.

The HttpUrlConnection class can send requests, but first, we have to obtain an instance of it from a URL object:

HttpURLConnection connection = (HttpURLConnection) url.openConnection();

A connection offers many methods to configure it, like setRequestMethod and setRequestProperty.

As odd as setRequestProperty sounds, this is the one we want.

Once we've joined the user name and password using “:”, we can use the java.util.Base64 class to encode the credentials:

String auth = user + ":" + password;
byte[] encodedAuth = Base64.getEncoder().encode(auth.getBytes(StandardCharsets.UTF_8));

Then, we create the header value from the literal “Basic ” followed by the encoded credentials:

String authHeaderValue = "Basic " + new String(encodedAuth);

Next, we call the method setRequestProperty(key, value) to authenticate the request. As mentioned previously, we have to use “Authorization” as our header and “Basic ” + encoded credentials as our value:

connection.setRequestProperty("Authorization", authHeaderValue);

Finally, we need to actually send the HTTP request, like for example by calling getResponseCode(). As a result, we get an HTTP response code from the server:

int responseCode = connection.getResponseCode();

Anything in the 2xx family means that our request including the authentication part was okay!
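
Putting the steps together, a minimal sketch might look like this (the URL and credentials are placeholders):

URL url = new URL("http://localhost:8080/secured");
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setRequestMethod("GET");

String auth = "user" + ":" + "password";
byte[] encodedAuth = Base64.getEncoder().encode(auth.getBytes(StandardCharsets.UTF_8));
connection.setRequestProperty("Authorization", "Basic " + new String(encodedAuth));

// 2xx means the request, including authentication, succeeded
int responseCode = connection.getResponseCode();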

5. Java Authenticator

The above-mentioned basic auth implementation requires setting the authorization header for every request. In contrast, the abstract class java.net.Authenticator allows setting the authentication globally for all connections.

We need to extend the class first. Then, we call the static method Authenticator.setDefault() in order to register an instance of our authenticator:

Authenticator.setDefault(new BasicAuthenticator());

Our basic auth class just overrides the getPasswordAuthentication() non-abstract method of the base class:

private final class BasicAuthenticator extends Authenticator {
    @Override
    protected PasswordAuthentication getPasswordAuthentication() {
        return new PasswordAuthentication(user, password.toCharArray());
    }
}

The Authenticator class utilizes the credentials of our authenticator to fulfill the authentication scheme required by the server automatically.

6. Conclusion

In this short tutorial, we've seen how to apply basic authentication to requests sent via HttpUrlConnection.

As always, the code example can be found on GitHub.

How to Change the Java Version in an IntelliJ Project

1. Overview

In this tutorial, we'll look at how to change the JDK version in IntelliJ projects. This will work on both Community and Ultimate Editions of IntelliJ.

2. Project Structure Settings

IntelliJ stores the JDK version used by the project within its Project Structure. There are two ways to locate this:

  • Via menu navigation:
    • Navigating to File -> Project Structure
  • Via keyboard shortcut:
    • For OSX, we press ⌘ + ;
    • For Windows, we press Ctrl + Shift + Alt + S

We'll then see the Project Structure dialog appear, showing the current settings for the project.

Under the Project SDK section, we can select a new JDK for the project via the combo box. After updating to a new version of Java, the project will begin reindexing its source files and libraries to ensure that autocompletion and other IDE features stay in sync.

3. Common Gotchas

When changing the JDK, one should remember that this only affects the JDK used by IntelliJ. Therefore, when running the Java project via the command line, it will still use the JDK specified in the JAVA_HOME environment variable.

Additionally, changing the Project SDK does not change the JVM used by the build tools either. So when using Maven or Gradle within IntelliJ, changing the Project SDK will not change the JVM used to run these build tools.
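
For a Maven build, for example, the Java version the project compiles against is driven by settings in the pom.xml rather than by the IDE; a typical snippet looks like this (the version number is just an example):

<properties>
    <maven.compiler.source>11</maven.compiler.source>
    <maven.compiler.target>11</maven.compiler.target>
</properties>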

4. Conclusion

In this tutorial, we illustrated two ways in which one could change the Java version used within IntelliJ projects. In addition, we also highlighted that there are some caveats to be aware of when changing the Java version.

To learn more about IntelliJ's Project Structure, visit their documentation link.


Threading Models in Java

1. Introduction

Often in our applications, we need to be able to do multiple things at the same time. We can achieve this in several ways, but key amongst them is to implement multitasking in some form.

Multitasking means running multiple tasks at the same time, where each task is making progress on its own work. These tasks typically run concurrently, reading and writing the same memory and interacting with the same resources, but doing different things.

2. Native Threads

The standard way of implementing multi-tasking in Java is to use threads. Threading is usually supported down to the operating system. We call threads that work at this level “native threads”.

The operating system has some abilities with threading that are often unavailable to our applications, simply because of how much closer it is to the underlying hardware. This means that executing native threads is typically more efficient. These threads map directly to threads of execution on the computer's CPU, and the operating system manages the mapping of threads onto CPU cores.

The standard threading model in Java, covering all JVM languages, uses native threads. This has been the case since Java 1.2 and is the case regardless of the underlying system that the JVM is running on.

This means that any time we use any of the standard threading mechanisms in Java, then we're using native threads. This includes java.lang.Thread, java.util.concurrent.Executor, java.util.concurrent.ExecutorService, and so on.
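
For example, both of the following end up running on native operating system threads:

// a thread created directly
new Thread(() -> System.out.println("running on a native thread")).start();

// a pool of native threads managed by an ExecutorService
ExecutorService executor = Executors.newFixedThreadPool(2);
executor.submit(() -> System.out.println("also running on a native thread"));
executor.shutdown();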

3. Green Threads

In software engineering, one alternative to native threads is green threads. This is where we are using threads, but they do not directly map to operating system threads. Instead, the underlying architecture manages the threads itself and decides how they map onto operating system threads.

Typically this works by running several native threads and then allocating the green threads onto these native threads for execution. The system can then choose which green threads are active at any given time, and which native threads they are active on.

This sounds very complicated, and it is. But it's a complication that we generally don't need to care about. The underlying architecture takes care of all of this, and we get to use it as if it was a native threading model.

So why would we do this? Native threads are very efficient to run, but they have a high cost around starting and stopping them. Green threads help to avoid this cost and give the architecture a lot more flexibility. If we are using relatively long-running threads, then native threads are very efficient. For very short-lived jobs, the cost of starting them can outweigh the benefit of using them. In these cases, green threads can become more efficient.

Unfortunately, Java does not have built-in support for green threads.

Very early versions used green threads instead of native threads as the standard threading model. This changed in Java 1.2, and there has not been any support for it at the JVM level since.

It's also challenging to implement green threads in libraries because they would need very low-level support to perform well. As such, a common alternative used is fibers.

4. Fibers

Fibers are an alternative form of multi-threading and are similar to green threads. In both cases, we aren't using native threads and instead are using the underlying system controls which are running at any time. The big difference between green threads and fibers is in the level of control, and specifically who is in control.

Green threads are a form of preemptive multitasking. This means that the underlying architecture is entirely responsible for deciding which threads are executing at any given time.

This means that all of the usual issues of threading apply, where we don't know anything about the order of our threads executing, or which ones will be executing at the same time. It also means that the underlying system needs to be able to pause and restart our code at any time, potentially in the middle of a method or even a statement.

Fibers are instead a form of cooperative multitasking, meaning that a running thread will continue to run until it signals that it can yield to another. In other words, it's our responsibility to make the fibers co-operate with each other. This puts us in direct control of when the fibers can pause execution, instead of the system deciding this for us.

This also means we need to write our code in a way that allows for this. Otherwise, it won't work. If our code doesn't have any interruption points, then we might as well not be using fibers at all.

Java does not currently have built-in support for fibers. Some libraries exist that can introduce this to our applications, including but not limited to:

4.1. Quasar

Quasar is a Java library that works well with pure Java and Kotlin and has an alternative version that works with Clojure.

It works by having a Java agent that needs to run alongside the application, and this agent is responsible for managing the fibers and ensuring that they work together correctly. The use of a Java agent means that there are no special build steps needed.

Quasar also requires Java 11 to work correctly so that might limit the applications that can use it. Older versions can be used on Java 8, but these are not actively supported.

4.2. Kilim

Kilim is a Java library that offers very similar functionality to Quasar but does so by using bytecode weaving instead of a Java agent. This means that it can work in more places, but it makes the build process more complicated.

Kilim works with Java 7 and newer and will work correctly even in scenarios where a Java agent is not an option. For example, if a different one is already used for instrumentation or monitoring.

4.3. Project Loom

Project Loom is an experiment by the OpenJDK project to add fibers to the JVM itself, rather than as an add-on library. This will give us the advantages of fibers over threads. By implementing it on the JVM directly, it can help to avoid complications that Java agents and bytecode weaving introduce.

There is no current release schedule for Project Loom, but we can download early access binaries right now to see how things are going. However, because it is still very early, we need to be careful relying on this for any production code.

5. Co-Routines

Co-routines are an alternative to threading and fibers. We can think of co-routines as fibers without any form of scheduling. Instead of the underlying system deciding which tasks are performing at any time, our code does this directly.

Generally, we write co-routines so that they yield at specific points of their flow. These can be seen as pause points in our function, where it will stop working and potentially output some intermediate result. When we do yield, we are then stopped until the calling code decides to re-start us for whatever reason. This means that our calling code controls the scheduling of when this will run.

Kotlin has native support for co-routines built into its standard library. There are several other Java libraries that we can use to implement them as well if desired.

6. Conclusion

We've seen several different alternatives for multi-tasking in our code, ranging from the traditional native threads to some very light-weight alternatives. Why not try them out next time an application needs concurrency?

Ignoring Unmapped Properties with MapStruct


1. Overview

In Java applications, we may wish to copy values from one type of Java bean to another. To avoid long, error-prone code, we can use a bean mapper such as MapStruct.

While mapping identical fields with identical field names is very straightforward, we often encounter mismatched beans. In this tutorial, we'll look at how MapStruct handles partial mapping.

2. Mapping

MapStruct is a Java annotation processor. Therefore, all we need to do is to define the mapper interface and to declare mapping methods. MapStruct will generate an implementation of this interface during compilation.

For simplicity, let's start with two classes with the same field names:

public class CarDTO {
    private int id;
    private String name;
}
public class Car {
    private int id;
    private String name;
}

Next, let's create a mapper interface:

@Mapper
public interface CarMapper {
    CarMapper INSTANCE = Mappers.getMapper(CarMapper.class);
    CarDTO carToCarDTO(Car car);
}

Finally, let's test our mapper:

@Test
public void givenCarEntitytoCar_whenMaps_thenCorrect() {
    Car entity = new Car();
    entity.setId(1);
    entity.setName("Toyota");

    CarDTO carDto = CarMapper.INSTANCE.carToCarDTO(entity);

    assertThat(carDto.getId()).isEqualTo(entity.getId());
    assertThat(carDto.getName()).isEqualTo(entity.getName());
}

3. Unmapped Properties

As MapStruct operates at compile time, it can be faster than a dynamic mapping framework. It can also generate error reports if mappings are incomplete — that is, if not all target properties are mapped:

Warning:(X,X) java: Unmapped target property: "propertyName".

While this is a helpful warning in the case of an accidental omission, we may prefer to handle things differently if the fields are missing on purpose.

Let's explore this with an example of mapping two simple objects:

public class DocumentDTO {
    private int id;
    private String title;
    private String text;
    private List<String> comments;
    private String author;
}
public class Document {
    private int id;
    private String title;
    private String text;
    private Date modificationTime;
}

We have unique fields in both classes that are not supposed to be filled during mapping. They are:

  • comments in DocumentDTO
  • author in DocumentDTO
  • modificationTime in Document

If we define a mapper interface, it will result in warning messages during the build:

@Mapper
public interface DocumentMapper {
    DocumentMapper INSTANCE = Mappers.getMapper(DocumentMapper.class);

    DocumentDTO documentToDocumentDTO(Document entity);
    Document documentDTOToDocument(DocumentDTO dto);
}

As we do not want to map these fields, we can exclude them from mapping in a few ways.

4. Ignoring Specific Fields

To skip several properties in a particular mapping method, we can use the ignore property in the @Mapping annotation:

@Mapper
public interface DocumentMapperMappingIgnore {

    DocumentMapperMappingIgnore INSTANCE =
            Mappers.getMapper(DocumentMapperMappingIgnore.class);

    @Mapping(target = "comments", ignore = true)
    @Mapping(target = "author", ignore = true)
    DocumentDTO documentToDocumentDTO(Document entity);

    @Mapping(target = "modificationTime", ignore = true)
    Document documentDTOToDocument(DocumentDTO dto);
}

Here, we've provided the field name as the target and set ignore to true to show that it's not required for mapping.

However, this technique is not convenient for some cases. We may find it difficult to use, for example, when using big models with a large number of fields.

5. Unmapped Target Policy

To make things clearer and the code more readable, we can specify the unmapped target policy.

To do this, we use the MapStruct unmappedTargetPolicy to provide our desired behavior when there is no source field for the mapping:

  • ERROR: any unmapped target property will fail the build – this can help us avoid accidentally unmapped fields
  • WARN: (default) warning messages during the build
  • IGNORE: no output or errors

In order to ignore unmapped properties and get no output warnings, we should assign the IGNORE value to the unmappedTargetPolicy. There are several ways to do it depending on the purpose.

5.1. Set a Policy on Each Mapper

We can set the unmappedTargetPolicy to the @Mapper annotation. As a result, all its methods will ignore unmapped properties:

@Mapper(unmappedTargetPolicy = ReportingPolicy.IGNORE)
public interface DocumentMapperUnmappedPolicy {
    // mapper methods
}

5.2. Use a Shared MapperConfig

We can also ignore unmapped properties in several mappers at once by setting the unmappedTargetPolicy via @MapperConfig, which shares the setting across those mappers.

First we create an annotated interface:

@MapperConfig(unmappedTargetPolicy = ReportingPolicy.IGNORE)
public interface IgnoreUnmappedMapperConfig {
}

Then we apply that shared configuration to a mapper:

@Mapper(config = IgnoreUnmappedMapperConfig.class)
public interface DocumentMapperWithConfig { 
    // mapper methods 
}

We should note that this is a simple example showing the minimal usage of @MapperConfig, which might not seem much better than setting the policy on each mapper. The shared config becomes very useful when there are multiple settings to standardize across several mappers.

5.3. Configuration Options

Finally, we can configure the MapStruct code generator's annotation processor options. When using Maven, we can pass processor options using the compilerArgs parameter of the processor plug-in:

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>${maven-compiler-plugin.version}</version>
            <configuration>
                <source>${maven.compiler.source}</source>
                <target>${maven.compiler.target}</target>
                <annotationProcessorPaths>
                    <path>
                        <groupId>org.mapstruct</groupId>
                        <artifactId>mapstruct-processor</artifactId>
                        <version>${org.mapstruct.version}</version>
                    </path>
                </annotationProcessorPaths>
                <compilerArgs>
                    <compilerArg>
                        -Amapstruct.unmappedTargetPolicy=IGNORE
                    </compilerArg>
                </compilerArgs>
            </configuration>
        </plugin>
    </plugins>
</build>

In this example, we're ignoring the unmapped properties in the whole project.

6. The Order of Precedence

We've looked at several ways that can help us to handle partial mappings and completely ignore unmapped properties. We've also seen how to apply them independently on a mapper, but we can also combine them.

Let's suppose we have a large codebase of beans and mappers with the default MapStruct configuration. We don't want to allow partial mappings except in a few cases. We might easily add more fields to a bean or its mapped counterpart and get a partial mapping without even noticing it.

So, it's probably a good idea to add a global setting through Maven configuration to make the build fail in case of partial mappings.

Now, in order to allow unmapped properties in some of our mappers and override the global behavior, we can combine the techniques, keeping in mind the order of precedence (from highest to lowest):

  • Ignoring specific fields at the mapper method level
  • The policy on the mapper
  • The shared MapperConfig
  • The global configuration

7. Conclusion

In this tutorial, we looked at how to configure MapStruct to ignore unmapped properties.

First, we looked at what unmapped properties mean for mapping. Then we saw how partial mappings could be allowed without errors, in a few different ways.

Finally, we learned how to combine these techniques, keeping in mind their order of precedence.

As always, the code from this tutorial is available over on GitHub.

Mocking a WebClient in Spring


1. Overview

These days, we expect to call REST APIs in most of our services. Spring provides a few options for building a REST client, and WebClient is recommended.

In this quick tutorial, we will look at how to unit test services that use WebClient to call APIs.

2. Mocking

We have two main options for mocking in our tests:

  • using Mockito to mimic the behavior of the WebClient and its fluent API
  • using MockWebServer, a small web server, and pointing the WebClient at it

3. Using Mockito

Mockito is the most common mocking library for Java. It's good at providing pre-defined responses to method calls, but things get challenging when mocking fluent APIs. This is because in a fluent API, a lot of objects pass between the calling code and the mock.

For example, let's have an EmployeeService class with a getEmployeeById method to fetch data via HTTP using WebClient:

public class EmployeeService {

    private final WebClient webClient;

    public EmployeeService(String baseUrl) {
        this.webClient = WebClient.create(baseUrl);
    }

    public Mono<Employee> getEmployeeById(Integer employeeId) {
        return webClient
                .get()
                .uri("/employee/{id}", employeeId)
                .retrieve()
                .bodyToMono(Employee.class);
    }
}

We can use Mockito to mock this:

@ExtendWith(MockitoExtension.class)
public class EmployeeServiceTest {

    @Test
    void givenEmployeeId_whenGetEmployeeById_thenReturnEmployee() {

        Integer employeeId = 100;
        Employee mockEmployee = new Employee(100, "Adam", "Sandler", 
          32, Role.LEAD_ENGINEER);
        when(webClientMock.get())
          .thenReturn(requestHeadersUriSpecMock);
        when(requestHeadersUriSpecMock.uri("/employee/{id}", employeeId))
          .thenReturn(requestHeadersSpecMock);
        when(requestHeadersSpecMock.retrieve())
          .thenReturn(responseSpecMock);
        when(responseSpecMock.bodyToMono(Employee.class))
          .thenReturn(Mono.just(mockEmployee));

        Mono<Employee> employeeMono = employeeService.getEmployeeById(employeeId);

        StepVerifier.create(employeeMono)
          .expectNextMatches(employee -> employee.getRole()
            .equals(Role.LEAD_ENGINEER))
          .verifyComplete();
    }

}

As we can see, we need to provide a different mock object for each call in the chain, with four different when/thenReturn calls required. This is verbose and cumbersome. It also requires us to know the implementation details of how exactly our service uses WebClient, making this a brittle way of testing.

How can we write better tests for WebClient?

4. Using MockWebServer

MockWebServer, built by the Square team, is a small web server that can receive and respond to HTTP requests.

Interacting with MockWebServer from our test cases allows our code to use real HTTP calls to a local endpoint. We get the benefit of testing the intended HTTP interactions and none of the challenges of mocking a complex fluent client.

Using MockWebServer is recommended by the Spring Team for writing integration tests.

4.1. MockWebServer Dependencies

To use MockWebServer, we need to add Maven dependencies for both okhttp and mockwebserver to our pom.xml:

<dependency>
    <groupId>com.squareup.okhttp3</groupId>
    <artifactId>okhttp</artifactId>
    <version>4.0.1</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>com.squareup.okhttp3</groupId>
    <artifactId>mockwebserver</artifactId>
    <version>4.0.1</version>
    <scope>test</scope>
</dependency>

4.2. Adding MockWebServer to our Test

Let's test our EmployeeService with MockWebServer:

public class EmployeeServiceMockWebServerTest {

    public static MockWebServer mockBackEnd;

    @BeforeAll
    static void setUp() throws IOException {
        mockBackEnd = new MockWebServer();
        mockBackEnd.start();
    }

    @AfterAll
    static void tearDown() throws IOException {
        mockBackEnd.shutdown();
    }
}

In the above JUnit test class, the setUp and tearDown methods take care of creating and shutting down the MockWebServer.

The next step is to point the base URL of the actual REST service calls at the MockWebServer's port.

@BeforeEach
void initialize() {
    String baseUrl = String.format("http://localhost:%s", 
      mockBackEnd.getPort());
    employeeService = new EmployeeService(baseUrl);
}

Now it's time to create a stub so that the MockWebServer can respond to an HttpRequest.

4.3. Stubbing a Response

Let's use MockWebServer's handy enqueue method to queue a test response on the web server:

@Test
void getEmployeeById() throws Exception {
    Employee mockEmployee = new Employee(100, "Adam", "Sandler", 
      32, Role.LEAD_ENGINEER);
    mockBackEnd.enqueue(new MockResponse()
      .setBody(objectMapper.writeValueAsString(mockEmployee))
      .addHeader("Content-Type", "application/json"));

    Mono<Employee> employeeMono = employeeService.getEmployeeById(100);

    StepVerifier.create(employeeMono)
      .expectNextMatches(employee -> employee.getRole()
        .equals(Role.LEAD_ENGINEER))
      .verifyComplete();
}

When the actual API call is made from the getEmployeeById(Integer employeeId) method in our EmployeeService class, MockWebServer will respond with the queued stub.

4.4. Checking a Request

We may also want to make sure that the MockWebServer was sent the correct HttpRequest.

MockWebServer has a handy method named takeRequest that returns an instance of RecordedRequest:

RecordedRequest recordedRequest = mockBackEnd.takeRequest();
 
assertEquals("GET", recordedRequest.getMethod());
assertEquals("/employee/100", recordedRequest.getPath());

With RecordedRequest, we can verify the HttpRequest that was received to make sure our WebClient sent it correctly.

5. Conclusion

In this tutorial, we tried the two main options available to mock WebClient based REST client code.

While Mockito worked and may be a good option for simple examples, the recommended approach is to use MockWebServer.

As always, the source code for this article is available over on GitHub.

Flogger Fluent Logging


1. Overview

In this tutorial, we're going to talk about the Flogger framework, a fluent logging API for Java designed by Google.

2. Why use Flogger?

With all the logging frameworks that are currently in the market, like Log4j and Logback, why do we need yet another logging framework?

It turns out Flogger has several advantages over other frameworks – let's take a look.

2.1. Readability

The fluent nature of Flogger's API goes a long way to making it more readable.

Let's look at an example where we want to log a message every ten iterations.

With a traditional logging framework, we'd see something like:

int i = 0;

// ...

if (i % 10 == 0) {
    logger.info("This log shows every 10 iterations");
}
i++;

But now, with Flogger, the above can be simplified to:

logger.atInfo().every(10).log("This log shows every 10 iterations");

While one could argue that the fluent Flogger statement looks a bit more verbose than a plain logger.info call, it permits greater functionality and ultimately leads to more readable and expressive log statements.

2.2. Performance

Logging of objects is optimized, as long as we avoid calling toString on the logged objects:

User user = new User();
logger.atInfo().log("The user is: %s", user);

If we log as shown above, the backend has the opportunity to optimize the logging. On the other hand, if we call toString directly, or concatenate the strings, then this opportunity is lost:

logger.atInfo().log("Ths user is: %s", user.toString());
logger.atInfo().log("Ths user is: %s" + user);

2.3. Extensibility

The Flogger framework already covers most of the basic functionality that we'd expect from a logging framework.

However, there are cases where we would need to add to the functionality. In these cases, it's possible to extend the API.

Currently, this requires a separate supporting class. We could, for example, extend the Flogger API by writing a UserLogger class:

logger.at(INFO).forUserId(id).withUsername(username).log("Message: %s", param);

This could be useful in cases where we want to format the message consistently. The UserLogger would then provide the implementation for the custom methods forUserId(String id) and withUsername(String username).

To do this, the UserLogger class will have to extend the AbstractLogger class and provide an implementation for the API. If we look at FluentLogger, it's just a logger with no additional methods, so we can start by copying this class as-is and then build up from this foundation by adding methods to it.

2.4. Efficiency

Traditional frameworks extensively use varargs. These methods require a new Object[] to be allocated and filled before the method can be invoked. Additionally, any fundamental types passed in must be auto-boxed.

This all costs additional bytecode and latency at the call site. It's particularly unfortunate if the log statement isn’t actually enabled. The cost becomes more apparent in debug level logs that appear often in loops. Flogger ditches these costs by avoiding varargs totally.

Flogger works around this problem by using a fluent call chain from which logging statements can be built. This allows the framework to only have a small number of overrides to the log method, and thus be able to avoid things like varargs and auto-boxing. This means that the API can accommodate a variety of new features without a combinatorial explosion.

A typical logging framework would have these methods:

level(String, Object)
level(String, Object...)

where level can be one of about seven log level names (severe for example), as well as having a canonical log method which accepts an additional log level:

log(Level, Object...)

In addition to this, there are usually variants of the methods that take a cause (a Throwable instance) that is associated with the log statement:

level(Throwable, String, Object)
level(Throwable, String, Object...)

It's clear that the API is coupling three concerns into one method call:

  1. It's trying to specify the log level (method choice)
  2. Trying to attach metadata to the log statement (Throwable cause)
  3. And also, specifying the log message and arguments.

This approach quickly multiplies the number of different logging methods needed to satisfy these independent concerns.

We can now see why it’s important to have two methods in the chain:

logger.atInfo().withCause(e).log("Message: %s", arg);

Let's now take a look at how we can use it in our codebase.

3. Dependencies

It's pretty simple to set up Flogger. We just need to add flogger and flogger-system-backend to our pom:

<dependencies>
    <dependency>
        <groupId>com.google.flogger</groupId>
        <artifactId>flogger</artifactId>
        <version>0.4</version>
    </dependency>
    <dependency>
        <groupId>com.google.flogger</groupId>
        <artifactId>flogger-system-backend</artifactId>
        <version>0.4</version>
        <scope>runtime</scope>
    </dependency>
</dependencies>

With these dependencies set up, we can now go on to explore the API that is at our disposal.

4. Exploring the Fluent API

First off, let's declare a static instance for our logger:

private static final FluentLogger logger = FluentLogger.forEnclosingClass();

And now we can start logging. We'll start with something simple:

int result = 45 / 3;
logger.atInfo().log("The result is %d", result);

The log messages can use any of Java’s printf format specifiers, such as %s, %d or %016x.

4.1. Avoiding Work At Log Sites

Flogger creators recommend that we avoid doing work at the log site.

Let's say we have the following long-running method for summarising the current state of a component:

public static String collectSummaries() {
    longRunningProcess();
    int items = 110;
    int s = 30;
    return String.format("%d seconds elapsed so far. %d items pending processing", s, items);
}

It's tempting to call collectSummaries directly in our log statement:

logger.atFine().log("stats=%s", collectSummaries());

Regardless of the configured log levels or rate-limiting, though, the collectSummaries method will now be called every time.

Making the cost of disabled logging statements virtually free is at the core of the logging framework. This, in turn, means that more of them can be left intact in the code without harm. Writing the log statement like we just did takes away this advantage.

Instead, we should use the LazyArgs.lazy method:

logger.atFine().log("stats=%s", LazyArgs.lazy(() -> collectSummaries()));

Now, almost no work is done at the log site — just instance creation for the lambda expression. Flogger will only evaluate this lambda if it intends to actually log the message.

We're also allowed to guard log statements using isEnabled:

if (logger.atFine().isEnabled()) {
    logger.atFine().log("summaries=%s", collectSummaries());
}

However, this is not necessary, and we should avoid it because Flogger does these checks for us. This approach also only guards log statements by level and doesn't help with rate-limited log statements.

4.2. Dealing With Exceptions

How about exceptions, how do we handle them?

Well, Flogger comes with a withStackTrace method that we can use to log a Throwable instance:

try {
    int result = 45 / 0;
} catch (RuntimeException re) {
    logger.atInfo().withStackTrace(StackSize.FULL).withCause(re).log("Message");
}

Where withStackTrace takes as an argument the StackSize enum with constant values SMALL, MEDIUM, LARGE or FULL. A stack trace generated by withStackTrace() will show up as a LogSiteStackTrace exception in the default java.util.logging backend. Other backends may choose to handle this differently though.

4.3. Logging Configuration and Levels

So far we've been using logger.atInfo in most of our examples, but Flogger does support many other levels. We'll look at these, but first, let's introduce how to configure the logging options.

To configure logging, we use the LoggerConfig class.

For example, when we want to set the logging level to FINE:

LoggerConfig.of(logger).setLevel(Level.FINE);

And Flogger supports various logging levels:

logger.atInfo().log("Info Message");
logger.atWarning().log("Warning Message");
logger.atSevere().log("Severe Message");
logger.atFine().log("Finest Message");
logger.atFine().log("Fine Message");
logger.atFiner().log("Finer Message");
logger.atConfig().log("Config Message");

4.4. Rate Limiting

How about the issue of rate-limiting? How do we handle the case where we don't want to log every iteration?

Flogger comes to our rescue with the every(int n) method:

IntStream.range(0, 100).forEach(value -> {
    logger.atInfo().every(40).log("This log shows every 40 iterations => %d", value);
});

We get the following output when we run the code above:

Sep 18, 2019 5:04:02 PM com.baeldung.flogger.FloggerUnitTest lambda$givenAnInterval_shouldLogAfterEveryTInterval$0
INFO: This log shows every 40 iterations => 0 [CONTEXT ratelimit_count=40 ]
Sep 18, 2019 5:04:02 PM com.baeldung.flogger.FloggerUnitTest lambda$givenAnInterval_shouldLogAfterEveryTInterval$0
INFO: This log shows every 40 iterations => 40 [CONTEXT ratelimit_count=40 ]
Sep 18, 2019 5:04:02 PM com.baeldung.flogger.FloggerUnitTest lambda$givenAnInterval_shouldLogAfterEveryTInterval$0
INFO: This log shows every 40 iterations => 80 [CONTEXT ratelimit_count=40 ]

What if we want to log say every 10 seconds? Then, we can use atMostEvery(int n, TimeUnit unit):

IntStream.range(0, 1_000_0000).forEach(value -> {
    logger.atInfo().atMostEvery(10, TimeUnit.SECONDS).log("This log shows [every 10 seconds] => %d", value);
});

With this, the outcome now becomes:

Sep 18, 2019 5:08:06 PM com.baeldung.flogger.FloggerUnitTest lambda$givenATimeInterval_shouldLogAfterEveryTimeInterval$1
INFO: This log shows [every 10 seconds] => 0 [CONTEXT ratelimit_period="10 SECONDS" ]
Sep 18, 2019 5:08:16 PM com.baeldung.flogger.FloggerUnitTest lambda$givenATimeInterval_shouldLogAfterEveryTimeInterval$1
INFO: This log shows [every 10 seconds] => 3545373 [CONTEXT ratelimit_period="10 SECONDS [skipped: 3545372]" ]
Sep 18, 2019 5:08:26 PM com.baeldung.flogger.FloggerUnitTest lambda$givenATimeInterval_shouldLogAfterEveryTimeInterval$1
INFO: This log shows [every 10 seconds] => 7236301 [CONTEXT ratelimit_period="10 SECONDS [skipped: 3690927]" ]

5. Using Flogger With Other Backends

So, what if we'd like to add Flogger to an existing application that is already using, say, Slf4j or Log4j? This could be useful in cases where we would want to take advantage of our existing configurations. Flogger supports multiple backends, as we'll see.

5.1. Flogger With Slf4j

It's simple to configure an Slf4j back-end. First, we need to add the flogger-slf4j-backend dependency to our pom:

<dependency>
    <groupId>com.google.flogger</groupId>
    <artifactId>flogger-slf4j-backend</artifactId>
    <version>0.4</version>
</dependency>

Next, we need to tell Flogger that we would like to use a different back-end from the default one. We do this by registering a Flogger factory through system properties:

System.setProperty(
  "flogger.backend_factory", "com.google.common.flogger.backend.slf4j.Slf4jBackendFactory#getInstance");

And now our application will use the existing configuration.

5.2. Flogger With Log4j

We follow similar steps for configuring Log4j back-end. Let's add the flogger-log4j-backend dependency to our pom:

<dependency>
    <groupId>com.google.flogger</groupId>
    <artifactId>flogger-log4j-backend</artifactId>
    <version>0.4</version>
    <exclusions>
        <exclusion>
            <groupId>com.sun.jmx</groupId>
            <artifactId>jmxri</artifactId>
        </exclusion>
        <exclusion>
            <groupId>com.sun.jdmk</groupId>
            <artifactId>jmxtools</artifactId>
        </exclusion>
        <exclusion>
            <groupId>javax.jms</groupId>
            <artifactId>jms</artifactId>
        </exclusion>
    </exclusions>
</dependency>

<dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.17</version>
</dependency>
<dependency>
    <groupId>log4j</groupId>
    <artifactId>apache-log4j-extras</artifactId>
    <version>1.2.17</version>
</dependency>

We also need to register a Flogger back-end factory for Log4j:

System.setProperty(
  "flogger.backend_factory", "com.google.common.flogger.backend.log4j.Log4jBackendFactory#getInstance");

And that's it, our application is now set up to use existing Log4j configurations!

6. Conclusion

In this tutorial, we've seen how to use the Flogger framework as an alternative for the traditional logging frameworks. We've seen some powerful features that we can benefit from when using the framework.

We've also seen how we can leverage our existing configurations by registering different back-ends like Slf4j and Log4j.

As usual, the source code for this tutorial is available over on GitHub.

System.out.println vs Loggers


1. Why Loggers?

While writing a program or developing an enterprise production application, using System.out.println seems to be the simplest and easiest option. There are no extra libraries to be added to the classpath and no additional configurations to be made.

But using System.out.println comes with several disadvantages that affect its usability in many situations. In this tutorial, we'll discuss why and when we'd want to use a Logger over plain old System.out and System.err. We'll also show some quick examples using the Log4J2 logging framework.

2. Setup

Before we begin, let's look into the Maven dependencies and configurations required.

2.1. Maven Dependencies

Let's start by adding the Log4J2 dependency to our pom.xml:

<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-api</artifactId>
    <version>2.12.1</version>
</dependency>
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <version>2.12.1</version>
</dependency>

We can find the latest versions of log4j-api and log4j-core on Maven Central.

2.2. Log4J2 Configuration

The use of System.out doesn't require any additional configuration. However, to use Log4J2, we need a log4j2.xml configuration file:

<Configuration status="debug" name="baeldung" packages="">
    <Appenders>
        <Console name="stdout" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss} %p %m%n"/>
        </Console>
    </Appenders>
    <Loggers>
        <Root level="error">
            <AppenderRef ref="stdout"/>
        </Root>
    </Loggers>
</Configuration>

Almost all logger frameworks will require some level of configuration, either programmatically or through an external configuration file, such as the XML file shown here.

3. Separating Log Output

3.1. System.out and System.err

When we deploy our application to a server like Tomcat, the server uses its own logger. If we use System.out, the logs end up in catalina.out. It's much easier to debug our application if logs are put in a separate file. With Log4j2, we need to include a file appender in the configuration to save application logs in a separate file.

Also, with System.out.println, there's no control or filtering of which logs are to be printed. The only possible way to separate the logs is to use System.out.println for information logs and System.err.println for error logs:

System.out.println("This is an informational message");
System.err.println("This is an error message");

3.2. Log4J2 Logging Levels

In debug or development environments, we want to see all the information the application is printing. But in a live enterprise application, more logging means increased latency. Logger frameworks like Log4J2 provide multiple log level controls:

  • FATAL
  • ERROR
  • WARN
  • INFO
  • DEBUG
  • TRACE
  • ALL

Using these levels, we can easily filter when and where to print what information:

logger.trace("Trace log message");
logger.debug("Debug log message");
logger.info("Info log message");
logger.error("Error log message");
logger.warn("Warn log message");
logger.fatal("Fatal log message");
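
For reference, a logger like the one used above is typically obtained from the LogManager; here's a minimal sketch (the class name is just an example):

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class LoggingExample {

    // one logger per class is the usual convention
    private static final Logger logger = LogManager.getLogger(LoggingExample.class);

    public static void main(String[] args) {
        logger.info("Info log message");
    }
}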

We may also configure the levels for each source code package individually. For more details on log level configuration, refer to our Java Logging article.

4. Writing Logs to Files

4.1. Rerouting System.out and System.err

It is possible to route System.out.println to a file using the System.setOut() method:

PrintStream outStream = new PrintStream(new File("outFile.txt"));
System.setOut(outStream);
System.out.println("This is a baeldung article");

And in case of System.err:

PrintStream errStream = new PrintStream(new File("errFile.txt"));
System.setErr(errStream);
System.err.println("This is a baeldung article error");

When redirecting the output to a file using System.out or System.err, we can't control the file size; the file keeps growing for as long as the application runs.

As the file size grows, it might be difficult to open or analyze these bigger logs.

4.2. Logging to Files with Log4J2

Log4J2 provides a mechanism to systematically write logs to files and also to roll the files over based on certain policies. For example, we can configure a File appender that writes the application logs to a separate file:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="INFO">
    <Appenders>
        <Console name="stdout" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss} %p %m%n"/>
        </Console>
        <File name="fout" fileName="log4j/target/baeldung-log4j2.log"
          immediateFlush="false" append="false">
            <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss} %p %m%n"/>
        </File>
    </Appenders>
    <Loggers>
        <AsyncRoot level="DEBUG">
            <AppenderRef ref="stdout"/>
            <AppenderRef ref="fout"/>
        </AsyncRoot>
    </Loggers>
</Configuration>

Or we can roll the files based on size once they reach a given threshold:

...
<RollingFile name="roll-by-size"
  fileName="target/log4j2/roll-by-size/app.log" filePattern="target/log4j2/roll-by-size/app.%i.log.gz"
  ignoreExceptions="false">
    <PatternLayout>
        <Pattern>%d{yyyy-MM-dd HH:mm:ss} %p %m%n</Pattern>
    </PatternLayout>
    <Policies>
        <OnStartupTriggeringPolicy/>
        <SizeBasedTriggeringPolicy size="5 KB"/>
    </Policies>
</RollingFile>

5. Logging to External Systems

As we've seen in the previous section, logger frameworks allow writing the logs to a file. Similarly, they also provide appenders to send logs to other systems and applications. This makes it possible to send logs to a Kafka Stream or an Elasticsearch database using Log4J appenders rather than using System.out.println.

Please refer to our Log4j appender article for more details on how to use such appenders.

6. Customizing Log Output

With the use of Loggers, we can customize what information is to be printed along with the actual message. The information that we can print includes the package name, log level, line number, timestamp, method name, etc.

While this would be possible with System.out.println, it would require a lot of manual work, whereas logging frameworks provide this functionality out of the box. With loggers, we can simply define a pattern in the logger configuration:

<Console name="ConsoleAppender" target="SYSTEM_OUT">
    <PatternLayout pattern="%style{%date{DEFAULT}}{yellow}
      %highlight{%-5level}{FATAL=bg_red, ERROR=red, WARN=yellow, INFO=green} %message"/>
</Console>

If we consider Log4J2 for our logger framework, there are several patterns that we can choose from or customize. Refer to the official Log4J2 documentation to learn more about them.

7. Conclusion

This article explains various reasons why to use a logger framework and why not to rely only on System.out.println for our application logs. While it is justifiable to use System.out.println for small test programs, we'd prefer not to use it as our main source of logging for an enterprise production application.

As always, the code examples in the article are available over on GitHub.

Methods in Java


1. Introduction

In Java, methods are where we define the business logic of an application. They define the interactions among the data enclosed in an object.

In this tutorial, we'll go through the syntax of Java methods, the definition of the method signature, and how to call and overload methods.

2. Method Syntax

First, a method consists of six parts:

  • Access modifier: optionally, we can specify from where in the code the method can be accessed
  • Return type: the type of the value returned by the method, if any
  • Method identifier: the name we give to the method
  • Parameter list: an optional comma-separated list of inputs for the method
  • Exception list: an optional list of exceptions the method can throw
  • Body: definition of the logic (can be empty)

Let's see an example:
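
As a sketch, the following method declaration shows all six parts; the names and the declared exception are purely illustrative:

public double divide(double dividend, double divisor) throws ArithmeticException {
    if (divisor == 0) {
        throw new ArithmeticException("Cannot divide by zero");
    }
    return dividend / divisor;
}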

 

Let's take a closer look at each of these six parts of a Java method.

2.1. Access Modifier

The access modifier allows us to specify which objects can have access to the method. There are four possible access modifiers: public, protected, private, and default (also called package-private).

A method can also include the static keyword before or after the access modifier. This means that the method belongs to the class and not to the instances, and therefore, we can call the method without creating an instance of the class. Methods without the static keyword are known as instance methods and may only be invoked on an instance of the class.

Regarding performance, a static method will be loaded into memory just once – during class loading – and is thus more memory-efficient.

2.2. Return Type

Methods can return data to the code where they have been called from. A method can return a primitive value or an object reference, or it can return nothing if we use the void keyword as the return type.

Let's see an example of a void method:

public void printFullName(String firstName, String lastName) {
    System.out.println(firstName + " " + lastName);
}

If we declare a return type, then we have to specify a return statement in the method body. Once the return statement has been executed, the execution of the method body will be finished and if there are more statements, these won't be processed.

On the other hand, a void method doesn't return any value and, thus, does not have a return statement.

2.3. Method Identifier

The method identifier is the name we assign to a method specification. It is a good practice to use an informative and descriptive name. It's worth mentioning that a method identifier can have at most 65535 characters (a long name though).

2.4. Parameter List

We can specify input values for a method in its parameter list, which is enclosed in parentheses. A method can have anywhere from 0 to 255 parameters that are delimited by commas. A parameter can be an object, a primitive or an enumeration. We can use Java annotations at the method parameter level (for example the Spring annotation @RequestParam).
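
As a small illustration (the enum and the method are made up for this example), a method can mix primitive, object, and enumeration parameters:

public enum Color { RED, GREEN, BLUE }

public String describeShape(int sides, String name, Color color) {
    return name + " has " + sides + " sides and is " + color;
}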

2.5. Exception List

We can specify which exceptions are thrown by a method by using the throws clause. In the case of a checked exception, either we must enclose the code in try-catch clause or we must provide a throws clause in the method signature.

So, let's take a look at a more complex variant of our previous method, which throws a checked exception:

public void writeName(String name) throws IOException {
    PrintWriter out = new PrintWriter(new FileWriter("OutFile.txt"));
    out.println("Name: " + name);
    out.close();
}

2.6. Method Body

The last part of a Java method is the method body, which contains the logic we want to execute. In the method body, we can write as many lines of code as we want, or even leave the body empty. However, if our method declares a non-void return type, then the method body must contain a return statement.

3. Method Signature

As per its definition, a method signature is comprised of only two components — the method's name and parameter list.

So, let's write a simple method:

public String getName(String firstName, String lastName) {
  return firstName + " " + lastName;
}

The signature of this method is getName(String firstName, String lastName).

The method identifier can be any identifier. However, if we follow common Java coding conventions, the method identifier should be a verb in lowercase that can be followed by adjectives and/or nouns.

4. Calling a Method

Now, let's explore how to call a method in Java. Following the previous example, let's suppose that those methods are enclosed in a Java class called PersonName:

public class PersonName {
  public String getName(String firstName, String lastName) {
    return firstName + " " + lastName;
  }
}

Since our getName method is an instance method and not a static method, in order to call the method getName, we need to create an instance of the class PersonName:

PersonName personName = new PersonName();
String fullName = personName.getName("Alan", "Turing");

As we can see, we use the created object to call the getName method.

Finally, let's take a look at how to call a static method. In the case of a static method, we don't need a class instance to make the call. Instead, we invoke the method with its name prefixed by the class name.

Let's demonstrate using a variant of the previous example:

public class PersonName {
  public static String getName(String firstName, String lastName) {
    return firstName + " " + lastName;
  }
}

In this case, the method call is:

String fullName = PersonName.getName("Alan", "Turing");

5. Method Overloading

Java allows us to have two or more methods with the same identifier but different parameter list — different method signatures. In this case, we say that the method is overloaded. Let's go with an example:

public String getName(String firstName, String lastName) {
  return getName(firstName, "", lastName);
}

public String getName(String firstName, String middleName, String lastName) {
  if (middleName.isEmpty()) {
    return firstName + " " + lastName;
  }
  return firstName + " " + middleName + " " + lastName;
}

Method overloading is useful for cases like the one in the example, where we can have a method implementing a simplified version of the same functionality.

Finally, a good design habit is to ensure that overloaded methods behave in a similar manner. Otherwise, the code will be confusing if a method with the same identifier behaves in a different way.

6. Conclusion

In this tutorial, we've explored the parts of Java syntax involved when specifying a method in Java.

In particular, we went through the access modifier, the return type, the method identifier, the parameter list, exception list, and method body. Then we saw the definition of method signature, how to call a method, and how to overload a method.

As usual, the code seen here is available over on GitHub.

Spring Security – Attacking OAuth


1. Introduction

OAuth is the industry standard framework for delegated authorization. A lot of thought and care has gone into creating the various flows that make up the standard. Even then, it's not without vulnerability.

In this series of articles, we'll discuss attacks against OAuth from a theoretical standpoint and describe various options that exist to protect our applications.

2. The Authorization Code Grant

The Authorization Code Grant flow is the default flow that is used by most applications implementing delegated authorization.

Before that flow begins, the Client must have pre-registered with the Authorization Server, and during this process, it must have also provided a redirection URL — that is, a URL on which the Authorization Server can call back into the Client with an Authorization Code.

Let's take a closer look at how it works and what some of these terms mean.

During an Authorization Code Grant Flow, a Client (the application that is requesting delegated authorization) redirects the Resource Owner (user) to an Authorization Server (for example, Login with Google). After login, the Authorization Server redirects back to the client with an Authorization Code.

Next, the client calls into an endpoint at the Authorization Server, requesting an Access Token by providing the Authorization Code. At this point, the flow ends, and the client can use the token to access resources protected by the Authorization Server.

Now, the OAuth 2.0 Framework allows for these Clients to be public, say in scenarios where the Client can't safely hold onto a Client Secret. Let's take a look at some redirection attacks that are possible against Public Clients.

3. Redirection Attacks

3.1. Attack Preconditions

Redirection attacks rely on the fact that the OAuth standard doesn't fully describe the extent to which this redirect URL must be specified. This is by design.

This allows some implementations of the OAuth protocol to allow for a partial redirect URL.

For example, if we register a Client ID and a Client Redirect URL with the following wildcard-based match against an Authorization Server:

*.cloudapp.net

This would be valid for:

app.cloudapp.net

but also for:

evil.cloudapp.net

We've selected the cloudapp.net domain on purpose, as this is a real location where we can host OAuth-powered applications. The domain is a part of Microsoft's Windows Azure platform and allows any developer to host a subdomain under it to test an application. This in itself is not a problem, but it's a vital part of the greater exploit.

The second part of this exploit is an Authorization Server that allows wildcard matching on callback URLs.

Finally, to realize this exploit, the application developer needs to register with the Authorization server to accept any URL under the main domain, in the form *.cloudapp.net.

3.2. The Attack

When these conditions are met, the attacker then needs to trick the user into launching a page from the subdomain under his control, for example, by sending the user an authentic-looking email asking him to take some action on the account protected by OAuth. Typically, this would look something like https://evil.cloudapp.net/login. When the user opens this link and selects login, he will be redirected to the Authorization Server with an authorization request:

GET /authorize?response_type=code&client_id={apps-client-id}&state={state}&redirect_uri=https%3A%2F%2Fevil.cloudapp.net%2Fcb HTTP/1.1

While this may look typical, this URL is malicious. See, in this case, the Authorization Server receives a doctored URL with the app's Client ID and a redirection URL pointing back to evil's app.

The Authorization Server will then validate the URL, which is a subdomain under the specified main domain. Since the Authorization Server believes that the request originated from a valid source, it will authenticate the user and then ask for consent as it would do normally.

After this is done, it will now redirect back into the evil.cloudapp.net subdomain, handing the Authorization Code to the attacker.

Since the attacker now has the Authorization Code, all he needs to do is to call the token endpoint of the Authorization Server with the Authorization Code to receive a token, which allows him access to the Resource Owner's protected resources.

4. Spring OAuth Authorization Server Vulnerability Assessment

Let's take a look at a simple Spring OAuth Authorization Server configuration:

@Configuration
public class AuthorizationServerConfig extends AuthorizationServerConfigurerAdapter {    
    @Override
    public void configure(ClientDetailsServiceConfigurer clients) throws Exception {
        clients.inMemory()
          .withClient("apricot-client-id")
          .authorizedGrantTypes("authorization_code")
          .scopes("scope1", "scope2")
          .redirectUris("https://app.cloudapp.net/oauth");
    }
    // ...
}

We can see here that the Authorization Server is configuring a new client with the id “apricot-client-id”. There is no client secret, so this is a Public Client.

Our security ears should perk up at this, as we now have two out of the three conditions – evil people can register subdomains and we are using a Public Client.

But, note that we are configuring the redirect URL here too and that it's absolute. We can mitigate the vulnerability by doing so.

4.1. Strict

By default, Spring OAuth allows a certain degree of flexibility in redirect URL matching.

For example, the DefaultRedirectResolver supports subdomain matching.

Let's only use what we need. If we can match the redirect URL exactly, we should do so:

@Configuration
public class AuthorizationServerConfig extends AuthorizationServerConfigurerAdapter {    
    //...

    @Override
    public void configure(AuthorizationServerEndpointsConfigurer endpoints) {
        endpoints.redirectResolver(new ExactMatchRedirectResolver());
    }
}

In this case, we've switched to using the ExactMatchRedirectResolver for redirect URLs. This resolver does an exact string match, without parsing the redirect URL in any way. This makes its behavior far more secure and certain.

4.2. Lenient

We can find the default code that deals with redirect URL matching in the Spring Security OAuth source:

/**
 * Whether the requested redirect URI "matches" the specified redirect URI. For a URL, this implementation tests if
 * the user requested redirect starts with the registered redirect, so it would have the same host and root path if
 * it is an HTTP URL. The port, userinfo, query params also matched. Request redirect uri path can include
 * additional parameters which are ignored for the match
 * <p>
 * For other (non-URL) cases, such as for some implicit clients, the redirect_uri must be an exact match.
 * @param requestedRedirect The requested redirect URI.
 * @param redirectUri The registered redirect URI.
 * @return Whether the requested redirect URI "matches" the specified redirect URI.
 */
protected boolean redirectMatches(String requestedRedirect, String redirectUri) {
   UriComponents requestedRedirectUri = UriComponentsBuilder.fromUriString(requestedRedirect).build();
   UriComponents registeredRedirectUri = UriComponentsBuilder.fromUriString(redirectUri).build();
   boolean schemeMatch = isEqual(registeredRedirectUri.getScheme(), requestedRedirectUri.getScheme());
   boolean userInfoMatch = isEqual(registeredRedirectUri.getUserInfo(), requestedRedirectUri.getUserInfo());
   boolean hostMatch = hostMatches(registeredRedirectUri.getHost(), requestedRedirectUri.getHost());
   boolean portMatch = matchPorts ? registeredRedirectUri.getPort() == requestedRedirectUri.getPort() : true;
   boolean pathMatch = isEqual(registeredRedirectUri.getPath(),
     StringUtils.cleanPath(requestedRedirectUri.getPath()));
   boolean queryParamMatch = matchQueryParams(registeredRedirectUri.getQueryParams(),
     requestedRedirectUri.getQueryParams());

   return schemeMatch && userInfoMatch && hostMatch && portMatch && pathMatch && queryParamMatch;
}

We can see that the URL matching is done by parsing the incoming redirect URL into its component parts. This is quite complex, with several configurable features, such as whether the port, subdomain, and query parameters should match. And choosing to allow subdomain matches is something to think twice about.

Of course, this flexibility is there, if we need it – let's just use it with caution.
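
For example, if we do need the lenient resolver but want to rule out subdomain matching, the DefaultRedirectResolver exposes a flag for that. This is only a sketch, assuming the same Spring Security OAuth endpoints configuration as above:

@Override
public void configure(AuthorizationServerEndpointsConfigurer endpoints) {
    DefaultRedirectResolver redirectResolver = new DefaultRedirectResolver();
    // keep the default host and path matching, but refuse subdomain variations
    redirectResolver.setMatchSubdomains(false);
    endpoints.redirectResolver(redirectResolver);
}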

5. Implicit Flow Redirect Attacks

To be clear, the Implicit Flow isn't recommended. It's much better to use the Authorization Code Grant flow with additional security provided by PKCE. That said, let's take a look at how a redirect attack manifests with the implicit flow.

A redirect attack against an implicit flow would follow the same basic outline as we've seen above. The main difference is that the attacker gets the token immediately, as there is no authorization code exchange step.

As before, an absolute matching of the redirect URL will mitigate this class of attack as well.

Furthermore, we can find that the implicit flow contains another related vulnerability. An attacker can use a client as an open redirector and get it to reattach fragments.

The attack begins as before, with an attacker getting the user to visit a page under the attacker's control, for example, https://evil.cloudapp.net/info. The page is crafted to initiate an authorization request as before. However, it now includes a redirect URL:

GET /authorize?response_type=token&client_id=ABCD&state=xyz&redirect_uri=https%3A%2F%2Fapp.cloudapp.net%2Fcb%26redirect_to
%253Dhttps%253A%252F%252Fevil.cloudapp.net%252Fcb HTTP/1.1

The redirect_to parameter, https://evil.cloudapp.net, sets up the Authorization Endpoint to redirect the token to a domain under the attacker's control. The authorization server will now first redirect to the actual app site:

Location: https://app.cloudapp.net/cb?redirect_to%3Dhttps%3A%2F%2Fevil.cloudapp.net%2Fcb#access_token=LdKgJIfEWR34aslkf&...

When this request arrives at the open redirector, it will extract the redirect URL evil.cloudapp.net and then redirect to the attacker's site:

https://evil.cloudapp.net/cb#access_token=LdKgJIfEWR34aslkf&...

Absolute URL matching will mitigate this attack, too.

6. Summary

In this article, we've discussed a class of attacks against the OAuth protocol that are based on redirection URLs.

While this has potentially serious consequences, using absolute URL matching at the Authorization Server mitigates this class of attack.


Java Weekly, Issue 301


1. Spring and Java

>> How to deploy war files to Spring Boot Embedded Tomcat [vojtechruzicka.com]

A couple of solutions — one for Spring Boot 2.x and one for 1.x.

>> GraphQL server in Java: Part I: Basics [nurkiewicz.com]

An interesting new series begins by looking at the basics of GraphQL and a naïve solution in Java.

>> Truly Public Methods [javaspecialists.eu]

And as surprising as it might seem, not all public methods are accessible using reflection.

 

Also worth reading:

 

Webinars and presentations:

 

Time to upgrade (all Spring):

2. Technical and Musing

>> Efficient enterprise testing — integration tests (3/6) and >> workflows & code quality (4/6) and >> test frameworks (5/6) [blog.sebastian-daschner.com]

As the series begins to wind down, a few thoughts on code-level and system-level integration tests and more.

>> When TDD Is Not a Good Fit [henrikwarne.com]

And although TDD purists may disagree, the author makes a case for certain situations where TDD can actually slow progress.

 

Also worth reading:

3. Comics

>> Boss Recommends Blockchain [dilbert.com]

>> Parody Inversion Point [dilbert.com]

>> Topper [dilbert.com]

4. Pick of the Week

I'll pick DataDog this week, as they've been firing on all cylinders lately:

>> Use DataDog to monitor and troubleshoot your Java web applications  

Simply put – a really solid and mature end-to-end way to monitor your application, with full support for pretty much anything Java.

You can use their trial here.

Java FileWriter


1. Overview

In this tutorial, we'll learn and understand the FileWriter class present in the java.io package.

2. FileWriter

FileWriter is a specialized OutputStreamWriter for writing character files. It doesn't expose any new operations but works with the operations inherited from the OutputStreamWriter and Writer classes.

Until Java 11, the FileWriter worked with the default character encoding and default byte buffer size. However, Java 11 introduced four new constructors that accept a Charset, thereby allowing user-specified Charset. Unfortunately, we still cannot modify the byte buffer size, and it's set to 8192.

2.1. Instantiating the FileWriter

There are five constructors in the FileWriter class if we're using a Java version before Java 11.

Let’s have a glance at various constructors:

public FileWriter(String fileName) throws IOException {
    super(new FileOutputStream(fileName));
}

public FileWriter(String fileName, boolean append) throws IOException {
    super(new FileOutputStream(fileName, append));
}

public FileWriter(File file) throws IOException {
    super(new FileOutputStream(file));
}

public FileWriter(File file, boolean append) throws IOException {
    super(new FileOutputStream(file, append));
}

public FileWriter(FileDescriptor fd) {
    super(new FileOutputStream(fd));
}

Java 11 introduced four additional constructors:

public FileWriter(String fileName, Charset charset) throws IOException {
    super(new FileOutputStream(fileName), charset);
}

public FileWriter(String fileName, Charset charset, boolean append) throws IOException {
    super(new FileOutputStream(fileName, append), charset);
}

public FileWriter(File file, Charset charset) throws IOException {
    super(new FileOutputStream(file), charset);
}

public FileWriter(File file, Charset charset, boolean append) throws IOException {
    super(new FileOutputStream(file, append), charset);
}

2.2. Writing a String to a File

Let's now use one of the FileWriter constructors to create an instance of FileWriter and then write to a file:

try (FileWriter fileWriter = new FileWriter("src/test/resources/FileWriterTest.txt")) {
    fileWriter.write("Hello Folks!");
}

We've used the single-argument constructor of FileWriter that accepts a file name. We then call the write(String str) operation inherited from the Writer class. Since FileWriter is AutoCloseable, we've used try-with-resources so that we don't have to close the FileWriter explicitly.

On executing the above code, the String will be written to the specified file:

Hello Folks!

Note that FileWriter doesn't guarantee that the FileWriterTest.txt file will be available or created; that depends on the underlying platform.

We should also note that some platforms allow only one FileWriter instance to have the file open at a time. In that case, the FileWriter constructors will fail if the file is already open.

2.3. Appending a String to a File

We often need to append data to the existing contents of a file. Let's now see an example of a FileWriter that supports appending:

try (FileWriter fileWriter = new FileWriter("src/test/resources/FileWriterTest.txt", true)) {
    fileWriter.write("Hello Folks Again!");
}

As we can see, we've used the two-argument constructor that accepts a file name and a boolean flag append. Passing the flag append as true creates a FileWriter that allows us to append text to existing contents of a file.

On executing the code, we'll have the String appended to the existing contents of the specified file:

Hello Folks!Hello Folks Again!
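
As a quick aside, on Java 11 or later we can combine appending with an explicit character set using one of the new constructors. This is just a sketch reusing the same test file:

// Java 11+: append with an explicit Charset (requires java.nio.charset.StandardCharsets)
try (FileWriter fileWriter = new FileWriter(
      "src/test/resources/FileWriterTest.txt", StandardCharsets.UTF_8, true)) {
    fileWriter.write("Hello Folks in UTF-8!");
}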

3. Conclusion

In this article, we learned about the convenience class FileWriter and a couple of ways in which the FileWriter can be created. We then used it to write data to a file.

As always, the complete source code for the tutorial is available over on GitHub.

Excluding URLs for a Filter in a Spring Web Application


1. Overview

Most web applications need to perform operations like request logging, validation, or authentication. What's more, such tasks are usually shared across a set of HTTP endpoints.

The good news is that the Spring web framework provides a filtering mechanism for precisely this purpose.

In this tutorial, we'll learn how a filter-style task can be included or excluded from execution for a given set of URLs.

2. Filter for Specific URLs

Let's say our web application needs to log some information about its requests, such as their paths and content types. One way to do this is by creating a logging filter.

2.1. Logging Filter

First, let's create our logging filter in a LogFilter class that extends the OncePerRequestFilter class and implements the doFilterInternal method:

@Override
protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
  FilterChain filterChain) throws ServletException, IOException {
    String path = request.getRequestURI();
    String contentType = request.getContentType();
    logger.info("Request URL path : {}, Request content type: {}", path, contentType);
    filterChain.doFilter(request, response);
}
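
For reference, here's a minimal sketch of the full class around this method. The SLF4J logger declaration is an assumption on our part, since the snippet above uses logger without showing where it comes from:

public class LogFilter extends OncePerRequestFilter {

    // Assumed: an SLF4J logger, as the method above references 'logger' without declaring it
    private static final Logger logger = LoggerFactory.getLogger(LogFilter.class);

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
      FilterChain filterChain) throws ServletException, IOException {
        String path = request.getRequestURI();
        String contentType = request.getContentType();
        logger.info("Request URL path : {}, Request content type: {}", path, contentType);
        filterChain.doFilter(request, response);
    }
}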

2.2. Rule-in Filter

Let's assume that we need the logging task to be executed only for select URL patterns, namely /health, /faq/*. For this, we'll register our logging filter using a FilterRegistrationBean such that it matches only the required URL patterns:

@Bean
public FilterRegistrationBean<LogFilter> logFilter() {
    FilterRegistrationBean<LogFilter> registrationBean = new FilterRegistrationBean<>();
    registrationBean.setFilter(new LogFilter());
    registrationBean.addUrlPatterns("/health","/faq/*");
    return registrationBean;
}

2.3. Rule-out Filter

If we want to exclude URLs from executing the logging task, we can achieve this easily in two ways:

  • For a new URL, ensure that it doesn't match the URL patterns used by the filter
  • For an old URL for which logging was earlier enabled, we can modify the URL pattern to exclude this URL

3. Filter for All Possible URLs

We easily met our previous use case of including URLs in the LogFilter with minimal effort. However, it gets trickier if the Filter uses a wildcard (*) to match all possible URL patterns.

In this circumstance, we'll need to write the inclusion and exclusion logic ourselves.

3.1. Custom Filter

Clients can send useful information to the server by using the request headers. Let's say our web application is currently operational only in the United States, which means we don't want to process the requests coming from other countries.

Let's further imagine that our clients indicate their locale via an X-Country-Code request header. Consequently, each request comes with this information, and we have a clear case for using a filter.

Let's implement a Filter that checks for the header, rejecting requests that don't meet our conditions:

@Override
protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
      FilterChain filterChain) throws ServletException, IOException {

    String countryCode = request.getHeader("X-Country-Code");
    if (!"US".equals(countryCode)) {
        response.sendError(HttpStatus.BAD_REQUEST.value(), "Invalid Locale");
        return;
    }

    filterChain.doFilter(request, response);
}

3.2. Filter Registration

To start with, let's use the asterisk (*) wildcard to register our filter to match all possible URL patterns:

@Bean
public FilterRegistrationBean<HeaderValidatorFilter> headerValidatorFilter() {
    FilterRegistrationBean<HeaderValidatorFilter> registrationBean = new FilterRegistrationBean<>();
    registrationBean.setFilter(new HeaderValidatorFilter());
    registrationBean.addUrlPatterns("*");
    return registrationBean;
}

At a later point in time, we can exclude the URL patterns which are not required to execute the task of validating the locale request header information.

3.3. URL Exclusion

Let's again imagine that we have a web route at /health that can be used to do a ping-pong health check of the application.

So far, all requests will trigger our filter. As we can guess, it's an overhead when it comes to our health check.

So, let's spare our /health requests by excluding them from the main body of our filter:

@Override
protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain filterChain)
      throws ServletException, IOException {
    String path = request.getRequestURI();
    if ("/health".equals(path)) {
        filterChain.doFilter(request, response);
        return;
    }

    String countryCode = request.getHeader("X-Country-Code");
    // ... same as before
}

We must note that adding this custom logic within the doFilter method introduces coupling between the /health endpoint and our filter. As such, it's not optimal since we could break the filtering logic if we change the health check endpoint without making a corresponding change inside the doFilter method.
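
One way to soften this coupling is to externalize the excluded path instead of hard-coding it. The sketch below assumes a hypothetical filter.excluded-path property in application.properties and, importantly, that the filter is created as a Spring bean (rather than with new) so that @Value injection actually happens:

// Hypothetical entry in application.properties: filter.excluded-path=/health
@Value("${filter.excluded-path}")
private String excludedPath;

@Override
protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain filterChain)
      throws ServletException, IOException {
    if (excludedPath.equals(request.getRequestURI())) {
        filterChain.doFilter(request, response);
        return;
    }

    String countryCode = request.getHeader("X-Country-Code");
    // ... same validation and filterChain.doFilter(request, response) as before
}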

4. Conclusion

In this tutorial, we've explored how to exclude URL pattern(s) from a servlet filter in a Spring Boot web application for two use cases, namely logging and request header validation.

Moreover, we learned that it gets tricky to rule-out a specific set of URLs for a filter that uses a * wildcard for matching all possible URL patterns.

As always, the complete source code for the tutorial is available over on GitHub.

Integrating Spring with AWS Kinesis


1. Introduction

Kinesis is a tool for collecting, processing, and analyzing data streams in real-time, developed at Amazon. One of its main advantages is that it helps with the development of event-driven applications.

In this tutorial, we'll explore a few libraries that enable our Spring application to produce and consume records from a Kinesis Stream. The code examples show the basic functionality but don't represent production-ready code.

2. Prerequisite

Before we go any further, we need to do two things.

The first is to create a Spring project, as the goal here is to interact with Kinesis from a Spring project.

The second one is to create a Kinesis Data Stream. We can do this from a web browser in our AWS account. One alternative, for the AWS CLI fans among us, is to use the command line. Because we'll interact with the stream from code, we must also have our AWS IAM credentials at hand: the access key, the secret key, and the region.

All our producers will create dummy IP address records, while the consumers will read those values and list them in the application console.

3. AWS SDK for Java

The very first library we'll use is the AWS SDK for Java. Its advantage is that it allows us to manage many parts of working with Kinesis Data Streams. We can read data, produce data, create data streams, and reshard data streams. The drawback is that in order to have production-ready code, we'll have to code aspects like resharding, error handling, or a daemon to keep the consumer alive.

3.1. Maven Dependency

The amazon-kinesis-client Maven dependency will bring everything we need to have working examples. We'll now add it to our pom.xml file:

<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>amazon-kinesis-client</artifactId>
    <version>1.11.2</version>
</dependency>

3.2. Spring Setup

Let's create the AmazonKinesis object that we'll reuse whenever we interact with our Kinesis Stream. We'll create it as a @Bean inside our @SpringBootApplication class:

@Bean
public AmazonKinesis buildAmazonKinesis() {
    BasicAWSCredentials awsCredentials = new BasicAWSCredentials(accessKey, secretKey);
    return AmazonKinesisClientBuilder.standard()
      .withCredentials(new AWSStaticCredentialsProvider(awsCredentials))
      .withRegion(Regions.EU_CENTRAL_1)
      .build();
}
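
Since the SDK can also create data streams, we could optionally create the stream from code rather than through the console or CLI. Here's a minimal sketch; the stream name and shard count are just example values:

// Optional: create the data stream programmatically (example values)
kinesis.createStream("live-ips", 1);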

Next, let's define the aws.access.key and aws.secret.key, needed for the local machine, in application.properties:

aws.access.key=my-aws-access-key-goes-here
aws.secret.key=my-aws-secret-key-goes-here

And we'll read them using the @Value annotation:

@Value("${aws.access.key}")
private String accessKey;

@Value("${aws.secret.key}")
private String secretKey;

For the sake of simplicity, we're going to rely on @Scheduled methods to create and consume records.
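
As a quick sketch of that scaffolding (the fixed delay is an arbitrary example value, and @Scheduled also requires @EnableScheduling on a configuration class):

@Scheduled(fixedDelay = 3000L)
public void produceIpRecords() {
    // producer code from the sections below goes here
}

@Scheduled(fixedDelay = 3000L)
public void consumeIpRecords() {
    // consumer code from the sections below goes here
}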

3.3. Consumer

The AWS SDK Kinesis Consumer uses a pull model, meaning our code will draw records from the shards of the Kinesis data stream:

GetRecordsRequest recordsRequest = new GetRecordsRequest();
recordsRequest.setShardIterator(shardIterator.getShardIterator());
recordsRequest.setLimit(25);

GetRecordsResult recordsResult = kinesis.getRecords(recordsRequest);
while (!recordsResult.getRecords().isEmpty()) {
    recordsResult.getRecords().stream()
      .map(record -> new String(record.getData().array()))
      .forEach(System.out::println);

    recordsRequest.setShardIterator(recordsResult.getNextShardIterator());
    recordsResult = kinesis.getRecords(recordsRequest);
}

The GetRecordsRequest object builds the request for stream data. In our example, we’ve defined a limit of 25 records per request, and we keep reading until there’s nothing more to read.

We can also notice that, for our iteration, we’ve used a GetShardIteratorResult object. We created this object inside a @PostConstruct method so that we’ll begin tracking records straight away:

private GetShardIteratorResult shardIterator;

@PostConstruct
private void buildShardIterator() {
    GetShardIteratorRequest readShardsRequest = new GetShardIteratorRequest();
    readShardsRequest.setStreamName(IPS_STREAM);
    readShardsRequest.setShardIteratorType(ShardIteratorType.LATEST);
    readShardsRequest.setShardId(IPS_SHARD_ID);

    this.shardIterator = kinesis.getShardIterator(readShardsRequest);
}

3.4. Producer

Let’s now see how to handle the creation of records for our Kinesis data stream.

We insert data using a PutRecordsRequest object. For this new object, we add a list that comprises multiple PutRecordsRequestEntry objects:

List<PutRecordsRequestEntry> entries = IntStream.range(1, 200).mapToObj(ipSuffix -> {
    PutRecordsRequestEntry entry = new PutRecordsRequestEntry();
    entry.setData(ByteBuffer.wrap(("192.168.0." + ipSuffix).getBytes()));
    entry.setPartitionKey(IPS_PARTITION_KEY);
    return entry;
}).collect(Collectors.toList());

PutRecordsRequest createRecordsRequest = new PutRecordsRequest();
createRecordsRequest.setStreamName(IPS_STREAM);
createRecordsRequest.setRecords(entries);

kinesis.putRecords(createRecordsRequest);

We've created a basic consumer and a producer of simulated IP records. All that's left to do now is to run our Spring project and see IPs listed in our application console.

4. KCL and KPL

Kinesis Client Library (KCL) is a library that simplifies the consuming of records. It’s also a layer of abstraction over the AWS SDK Java APIs for Kinesis Data Streams. Behind the scenes, the library handles load balancing across many instances, responding to instance failures, checkpointing processed records, and reacting to resharding.

Kinesis Producer Library (KPL) is a library useful for writing to a Kinesis data stream. It also provides a layer of abstraction that sits over the AWS SDK Java APIs for Kinesis Data Streams. For better performance, the library automatically handles batching, multi-threading, and retry logic.

KCL and KPL both have the main advantage that they're easy to use, so that we can focus on producing and consuming records.

4.1. Maven Dependencies

The two libraries can be brought into our project separately if needed. To include KPL and KCL in our Maven project, we need to update our pom.xml file:

<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>amazon-kinesis-producer</artifactId>
    <version>0.13.1</version>
</dependency>
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>amazon-kinesis-client</artifactId>
    <version>1.11.2</version>
</dependency>

4.2. Spring Setup

The only Spring preparation we need is to make sure we have the IAM credentials at hand. The values for aws.access.key and aws.secret.key are defined in our application.properties file so we can read them with @Value when needed.

4.3. Consumer

First, we'll create a class that implements the IRecordProcessor interface and defines our logic for how to handle Kinesis data stream records, which is to print them in the console:

public class IpProcessor implements IRecordProcessor {
    @Override
    public void initialize(InitializationInput initializationInput) { }

    @Override
    public void processRecords(ProcessRecordsInput processRecordsInput) {
        processRecordsInput.getRecords()
          .forEach(record -> System.out.println(new String(record.getData().array())));
    }

    @Override
    public void shutdown(ShutdownInput shutdownInput) { }
}

The next step is to define a factory class that implements the IRecordProcessorFactory interface and returns a previously created IpProcessor object:

public class IpProcessorFactory implements IRecordProcessorFactory {
    @Override
    public IRecordProcessor createProcessor() {
        return new IpProcessor();
    }
}

And now for the final step, we’ll use a Worker object to define our consumer pipeline. We need a KinesisClientLibConfiguration object that will define, if needed, the IAM Credentials and AWS Region.

We’ll pass the KinesisClientLibConfiguration, and our IpProcessorFactory object, to our Worker and then start it in a separate thread. We keep this logic of consuming records always alive with the use of the Worker class, so we’re continuously reading new records now:

BasicAWSCredentials awsCredentials = new BasicAWSCredentials(accessKey, secretKey);
KinesisClientLibConfiguration consumerConfig = new KinesisClientLibConfiguration(
  APP_NAME, 
  IPS_STREAM,
  new AWSStaticCredentialsProvider(awsCredentials), 
  IPS_WORKER)
    .withRegionName(Regions.EU_CENTRAL_1.getName());

final Worker worker = new Worker.Builder()
  .recordProcessorFactory(new IpProcessorFactory())
  .config(consumerConfig)
  .build();
CompletableFuture.runAsync(worker::run);

4.4. Producer

Let's now define the KinesisProducerConfiguration object, adding the IAM Credentials and the AWS Region:

BasicAWSCredentials awsCredentials = new BasicAWSCredentials(accessKey, secretKey);
KinesisProducerConfiguration producerConfig = new KinesisProducerConfiguration()
  .setCredentialsProvider(new AWSStaticCredentialsProvider(awsCredentials))
  .setVerifyCertificate(false)
  .setRegion(Regions.EU_CENTRAL_1.getName());

this.kinesisProducer = new KinesisProducer(producerConfig);

We'll include the kinesisProducer object previously created in a @Scheduled job and produce records for our Kinesis data stream continuously:

IntStream.range(1, 200).mapToObj(ipSuffix -> ByteBuffer.wrap(("192.168.0." + ipSuffix).getBytes()))
  .forEach(entry -> kinesisProducer.addUserRecord(IPS_STREAM, IPS_PARTITION_KEY, entry));
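
Since the KPL batches and sends records in the background, it can be worth flushing any buffered records before the application shuts down. A minimal sketch, assuming we do this in a @PreDestroy method:

@PreDestroy
private void flushProducer() {
    // blocks until all records buffered by the KPL have been sent
    kinesisProducer.flushSync();
}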

5. Spring Cloud Stream Binder Kinesis

We’ve already seen two libraries, both created outside of the Spring ecosystem. We’ll now see how the Spring Cloud Stream Binder Kinesis can simplify our life further while building on top of Spring Cloud Stream.

5.1. Maven Dependency

The Maven dependency we need to define in our application for the Spring Cloud Stream Binder Kinesis is:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream-binder-kinesis</artifactId>
    <version>1.2.1.RELEASE</version>
</dependency>

5.2. Spring Setup

When running on EC2, the required AWS properties are automatically discovered, so there is no need to define them. Since we're running our examples on a local machine, we need to define our IAM access key, secret key, and region for our AWS account. We've also disabled the automatic CloudFormation stack name detection for the application:

cloud.aws.credentials.access-key=my-aws-access-key
cloud.aws.credentials.secret-key=my-aws-secret-key
cloud.aws.region.static=eu-central-1
cloud.aws.stack.auto=false

Spring Cloud Stream is bundled with three interfaces that we can use in our stream binding:

  • The Sink is for data ingestion
  • The Source is used for publishing records
  • The Processor is a combination of both

We can also define our own interfaces if we need to.
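
For instance, here's a minimal sketch of a custom binding interface with hypothetical channel names; for this tutorial, the built-in Sink and Source are all we need:

public interface IpStreams {

    String INBOUND = "ipsIn";
    String OUTBOUND = "ipsOut";

    @Input(INBOUND)
    SubscribableChannel inboundIps();

    @Output(OUTBOUND)
    MessageChannel outboundIps();
}

We'd then bind it with @EnableBinding(IpStreams.class) and configure the ipsIn and ipsOut bindings in application.properties just like the input and output bindings below.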

5.3. Consumer

Defining a consumer is a two-part job. First, we'll define, in the application.properties, the data stream from which we'll consume:

spring.cloud.stream.bindings.input.destination=live-ips
spring.cloud.stream.bindings.input.group=live-ips-group
spring.cloud.stream.bindings.input.content-type=text/plain

And next, let's define a Spring @Component class. The annotation @EnableBinding(Sink.class) will allow us to read from the Kinesis stream using the method annotated with @StreamListener(Sink.INPUT):

@EnableBinding(Sink.class)
public class IpConsumer {

    @StreamListener(Sink.INPUT)
    public void consume(String ip) {
        System.out.println(ip);
    }
}

5.4. Producer

The producer can also be split in two. First, we have to define our stream properties inside application.properties:

spring.cloud.stream.bindings.output.destination=live-ips
spring.cloud.stream.bindings.output.content-type=text/plain

And then we add @EnableBinding(Source.class) on a Spring @Component and create new test messages every few seconds:

@Component
@EnableBinding(Source.class)
public class IpProducer {

    @Autowired
    private Source source;

    @Scheduled(fixedDelay = 3000L)
    private void produce() {
        IntStream.range(1, 200).mapToObj(ipSuffix -> "192.168.0." + ipSuffix)
          .forEach(entry -> source.output().send(MessageBuilder.withPayload(entry).build()));
    }
}

That's all we need for Spring Cloud Stream Binder Kinesis to work. We can simply start the application now.

6. Conclusion

In this article, we've seen how to integrate our Spring project with two AWS libraries for interacting with a Kinesis Data Stream. We've also seen how to use Spring Cloud Stream Binder Kinesis library to make the implementation even easier.

The source code for this article can be found over on GitHub.

Creating Symbolic and Hard Links


1. Overview

On a Linux machine, we can create links to an existing file. A link in Unix can be thought of as a pointer or a reference to a file. In other words, it's more of a shortcut for accessing a file. We can create as many links as we want.

In this tutorial, we'll quickly explore the two types of links: hard and symbolic links. We'll further talk about the differences between them.

2. Hard Links

A file in any Unix-based operating system comprises data block(s) and an inode. The data blocks store the actual file contents. An inode, on the other hand, stores the file attributes (except the file name) and the disk block locations.

A hard link is just another file that points to the same underlying inode as the original file. And so, it references the same physical file location.

We can use the ln command to create a hard link:

ls -l
-rw-rw-r-- 1 runner3 ubuntu 0 Sep 29 11:22 originalFile

ln originalFile sampleHardLink

ls -l
-rw-rw-r-- 2 runner3 ubuntu 0 Sep 29 11:22 originalFile
-rw-rw-r-- 2 runner3 ubuntu 0 Sep 29 11:22 sampleHardLink

Let's quickly see their mapped inode numbers:

ls -i
2835126 originalFile
2835126 sampleHardLink

Both of these files point to the same inode. With this, even if we later delete the original file, we'll still be able to access its contents using the created hard link.

However, please note that we can't create hard links for directories. Also, hard links cannot cross filesystem boundaries, like between network-mapped disks.

3. Soft Links

A symbolic or soft link is a new file that just stores the path of the original file and not its contents. If the original file gets moved or removed, then a soft link won't work.

Let's now create a soft or symbolic link:

ln -s originalFile sampleSoftLink

ls -l
-rw-rw-r-- 1 runner1 ubuntu  0 Sep 29 12:16 originalFile
lrwxrwxrwx 1 runner1 ubuntu 12 Sep 29 12:16 sampleSoftLink -> originalFile

Unlike hard links, a soft or symbolic link is a file with a different inode number than the original file:

ls -i
2835126 originalFile
2835217 sampleSoftLink

We're allowed to create a soft link for a directory. Moreover, with soft links, we can link files across various filesystems.

4. Differences

Now that we understand what a soft or a hard link is, let's quickly sum up the key differences:

  • A hard link has the same inode number as that of the original file and so can be thought of as its copy. On the other hand, a soft link is a new file which only stores the file location of the original one
  • If the original file gets moved or removed, we can still access its contents using any of its hard links. However, all soft links for that file will become invalid
  • Unlike that of hard links, we can create soft links for directories. Soft links can also span across filesystems

5. Conclusion

In this quick tutorial, we learned about the hard and symbolic/soft links used in all Unix-based operating systems.
