
How to Find all Getters Returning Null


1. Overview

In this quick article, we’ll use the Java 8 Stream API and the Introspector class to invoke all the getters found in a POJO.

We’ll create a stream of getters, inspect the return values, and see if a field value is null.

2. Setup

The only setup we need is to create a simple POJO class:

public class Customer {

    private Integer id;
    private String name;
    private String emailId;
    private Long phoneNumber;

    // standard constructor, getters and setters
}

3. Invoking Getter Methods

We’ll analyze the Customer class using Introspector, which provides an easy way to discover the properties, events, and methods supported by a target class.

We’ll first collect all the PropertyDescriptor instances of our Customer class. PropertyDescriptor captures all the info of a Java Bean property:

PropertyDescriptor[] propDescArr = Introspector
  .getBeanInfo(Customer.class, Object.class)
  .getPropertyDescriptors();

Let’s now go over all PropertyDescriptor instances, and invoke the read method for every property:

return Arrays.stream(propDescArr)
  .filter(nulls(customer))
  .map(PropertyDescriptor::getName)
  .collect(Collectors.toList());

The nulls predicate we use above checks if the property can be read, invokes the getter, and filters for null values:

private static Predicate<PropertyDescriptor> nulls(Customer customer) { 
    return pd -> { 
        Method getterMethod = pd.getReadMethod(); 
        try {
            return getterMethod != null && getterMethod.invoke(customer) == null; 
        } catch (ReflectiveOperationException e) {
            return false;
        }
    }; 
}
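
Putting the pieces together, the Utils.getNullPropertiesList method used in the test below might look like this minimal sketch (the checked IntrospectionException is simply propagated here):

public static List<String> getNullPropertiesList(Customer customer) 
  throws IntrospectionException {
    PropertyDescriptor[] propDescArr = Introspector
      .getBeanInfo(Customer.class, Object.class)
      .getPropertyDescriptors();

    return Arrays.stream(propDescArr)
      .filter(nulls(customer))
      .map(PropertyDescriptor::getName)
      .collect(Collectors.toList());
}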

Finally, let’s create an instance of Customer, set a few properties to null, and test our implementation:

@Test
public void givenCustomer_whenAFieldIsNull_thenFieldNameInResult() {
    Customer customer = new Customer(1, "John", null, null);
	    
    List<String> result = Utils.getNullPropertiesList(customer);
    List<String> expectedFieldNames = Arrays
      .asList("emailId","phoneNumber");
	    
    assertTrue(result.size() == expectedFieldNames.size());
    assertTrue(result.containsAll(expectedFieldNames));      
}

4. Conclusion

In this short tutorial, we made good use of the Java 8 Stream API and the Introspector class to invoke all the getters and retrieve a list of null properties.

As usual, the code is available over on GitHub.


How to Get All Spring-Managed Beans?


1. Overview

In this article, we’ll explore different techniques for displaying all Spring-managed beans within the container.

2. The IoC Container

A bean is the foundation of a Spring-managed application; all beans reside within the IoC container, which is responsible for managing their life cycle.

We can get a list of all beans within this container in two ways:

  1. Using a ListableBeanFactory interface
  2. Using a Spring Boot Actuator

3. Using ListableBeanFactory Interface

The ListableBeanFactory interface provides the getBeanDefinitionNames() method, which returns the names of all the beans defined in the factory. This interface is implemented by all bean factories that pre-load their bean definitions so that they can enumerate all their bean instances.

You can find the list of all known subinterfaces and their implementing classes in the official documentation.

For this example, we’ll be using a Spring Boot Application.

First, we’ll create some Spring beans. Let’s create a simple Spring Controller FooController:

@Controller
public class FooController {

    @Autowired
    private FooService fooService;
    
    @RequestMapping(value = "/displayallbeans") 
    public String getHeaderAndBody(Map<String, Object> model) {
        model.put("header", fooService.getHeader());
        model.put("message", fooService.getBody());
        return "displayallbeans";
    }
}

This Controller is dependent on another Spring bean FooService:

@Service
public class FooService {
    
    public String getHeader() {
        return "Display All Beans";
    }
    
    public String getBody() {
        return "This is a sample application that displays all beans "
          + "in Spring IoC container using ListableBeanFactory interface "
          + "and Spring Boot Actuators.";
    }
}

Note that we’ve created two different beans here:

  1. fooController
  2. fooService

While executing this application, we’ll use the applicationContext object and call its getBeanDefinitionNames() method, which will return all the beans in our applicationContext container:

@SpringBootApplication
public class Application {
    private static ApplicationContext applicationContext;

    public static void main(String[] args) {
        applicationContext = SpringApplication.run(Application.class, args);
        displayAllBeans();
    }
    
    public static void displayAllBeans() {
        String[] allBeanNames = applicationContext.getBeanDefinitionNames();
        for(String beanName : allBeanNames) {
            System.out.println(beanName);
        }
    }
}

This will print all the beans from applicationContext container:

fooController
fooService
//other beans

Note that along with beans defined by us, it will also log all other beans that are in this container. For the sake of clarity, we’ve omitted them here because there are quite a lot of them.

4. Using Spring Boot Actuator

The Spring Boot Actuator functionality provides endpoints which are used for monitoring our application’s statistics.

It provides many built-in endpoints, including /beans, which displays a complete list of all the Spring-managed beans in our application. You can find the full list of existing endpoints in the official docs.

Now we’ll configure our beans endpoint in our application.properties:

endpoints.beans.id=springbeans
endpoints.beans.sensitive=false

Here, we’re setting the id of the beans endpoint. This springbeans id will now be mapped to a URL used to access the endpoint over HTTP. We’ve set the sensitive property to false so that we can access it without authentication. We can leave it at the default value of true if we want only authenticated users to see the data.

Now, we’ll just hit the URL http://<address>:<management-port>/springbeans. We can use our default server port if we haven’t specified a separate management port.
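
For example, assuming the default server port of 8080 and no separate management port, we can query the endpoint from the command line:

curl http://localhost:8080/springbeans

This will return a JSON response displaying all the beans within the Spring IoC container: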

[
    {
        "context": "application:8080",
        "parent": null,
        "beans": [
            {
                "bean": "fooController",
                "aliases": [],
                "scope": "singleton",
                "type": "com.baeldung.displayallbeans.controller.FooController",
                "resource": "file [E:/Workspace/tutorials-master/spring-boot/target
                  /classes/com/baeldung/displayallbeans/controller/FooController.class]",
                "dependencies": [
                    "fooService"
                ]
            },
            {
                "bean": "fooService",
                "aliases": [],
                "scope": "singleton",
                "type": "com.baeldung.displayallbeans.service.FooService",
                "resource": "file [E:/Workspace/tutorials-master/spring-boot/target/
                  classes/com/baeldung/displayallbeans/service/FooService.class]",
                "dependencies": []
            },
            // ...other beans
        ]
    }
]

Of course, the response also contains many other beans that reside in the same Spring container, but for the sake of clarity, we’ve omitted them here.

If you want to explore more about Spring Boot Actuators, you can head on over to the main Spring Boot Actuator guide.

5. Conclusion

In this article, we learned how to display all the beans in a Spring IoC container using the ListableBeanFactory interface and Spring Boot Actuator.

The full implementation of this tutorial can be found over on Github.

Java Web Weekly, Issue 181


Lots of interesting writeups on Java 9 this week.

Here we go…

1. Spring and Java

>> Cleaner Parameterized Tests With JUnit 5 [blog.codeleak.pl]

JUnit 5 brings many new exciting features – one of which will definitely be the native support for parameterized tests.

>> The best way to map the @DiscriminatorColumn with JPA and Hibernate [vladmihalcea.com]

A quick but comprehensive guide to mapping the @DiscriminatorColumn.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> How “Effective Java” may have influenced the design of Kotlin — Part 1 [hackernoon.com]

A very interesting analysis of how some of the central points in “Effective Java” have shaped the design of Kotlin.

>> Electronic Signature Using The WebCrypto API [techblog.bozho.net]

An interesting idea of “placing” an electronic signature using the WebCrypto API.

Also worth reading:

3. Musings

>> The Hidden Costs of Slow Websites [daedtech.com]

Having a slow site can be more costly than you might think, so investing in performance is always a good idea.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Two hours late [dilbert.com]

>> It’s hard to be a misunderstood genius [dilbert.com]

>> My European vacation starts tomorrow [dilbert.com]

5. Pick of the Week

>> 9 Logging Sins in Your Java Applications [stackify.com]

Introduction to Liquibase Rollback


1. Overview

In our previous article, we showed Liquibase as a tool for managing database schemas and data.

In this article, we’re going to look more into the rollback feature – and how we can undo a Liquibase operation.

Naturally, this is a critical feature of any production-grade system.

2. Categories of Liquibase Migrations

There are two categories of Liquibase operations, which differ in how the rollback statement is generated:

  • automatic, where Liquibase can deterministically generate the steps required for rolling back
  • manual, where we need to provide a rollback command because the migration instructions cannot be used to derive the rollback statement deterministically

For example, the rollback of a “create table” statement would be to “drop” the created table. This can be determined without a doubt, and therefore the rollback statement can be autogenerated.

On the other hand, the rollback statement for a “drop table” command cannot be determined: there is no way to reconstruct the last state of the table, so the rollback statement can’t be autogenerated. These types of migration statements require manual rollback instructions.

3. Writing a Simple Rollback Statement 

Let’s write a simple changeset which will create a table when executed and add a rollback statement to the changeset:

<changeSet id="testRollback" author="baeldung">
    <createTable tableName="baeldung_tutorial">
        <column name="id" type="int"/>
        <column name="heading" type="varchar(36)"/>
        <column name="author" type="varchar(36)"/>
    </createTable>
    <rollback>
        <dropTable tableName="baeldung_tutorial"/>
    </rollback>
</changeSet>

The above example falls under the first category: Liquibase would generate a rollback statement automatically if we did not add one, but we can override the default behavior by providing our own rollback statement.

We can run the migration using the command:

mvn liquibase:update

After executing it, we can roll back the action using:

mvn liquibase:rollback

This executes the rollback segment of the changeset and should revert the work completed during the update stage. But if we issue this command alone, the build will fail.

The reason is that we did not specify a limit for the rollback; without one, the database would be completely wiped out by rolling back to its initial state. Therefore, it’s mandatory to define one of the three constraints below, which stop the rollback operation once the condition is satisfied:

  • rollbackTag
  • rollbackCount
  • rollbackDate

3.1. Rolling Back to a Tag

We can define a particular state of our database to be a tag and then reference back to that state. Rolling back to the tag named “1.0” looks like:

mvn liquibase:rollback -Dliquibase.rollbackTag=1.0

This executes rollback statements of all the changesets executed after tag “1.0”.
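
Note that the tag has to have been created beforehand; assuming the Maven plugin’s tag goal, a hedged example of tagging the current database state:

mvn liquibase:tag -Dliquibase.tag=1.0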

3.2. Rolling Back by Count

Here, we define how many changesets need to be rolled back. If we define it to be one, the last changeset executed will be rolled back:

mvn liquibase:rollback -Dliquibase.rollbackCount=1

3.3. Rolling Back to Date

We can set a rollback target as a date; any changeset executed after that date will be rolled back:

mvn liquibase:rollback "-Dliquibase.rollbackDate=Jun 03, 2017"

The date format has to be an ISO date format, or it should match the value of DateFormat.getDateInstance() of the executing platform.

4. Rollback Changeset Options

Let’s explore the possible usages of the rollback statement in changesets.

4.1. Multistatement Rollback

A single rollback tag may enclose more than one instruction to be executed:

<changeSet id="multiStatementRollback" author="baeldung">
    <createTable tableName="baeldung_tutorial2">
        <column name="id" type="int"/>
        <column name="heading" type="varchar(36)"/>
    </createTable>
    <createTable tableName="baeldung_tutorial3">
        <column name="id" type="int"/>
        <column name="heading" type="varchar(36)"/>
    </createTable>
    <rollback>
        <dropTable tableName="baeldung_tutorial2"/>
        <dropTable tableName="baeldung_tutorial3"/>
    </rollback>
</changeSet>

Here we drop two tables in the same rollback tag. We can split the task over multiple statements as well.

4.2. Multiple Rollback Tags

In a changeset, we can have more than one rollback tag. They are executed in the order they appear in the changeset:

<changeSet id="multipleRollbackTags" author="baeldung">
    <createTable tableName="baeldung_tutorial4">
        <column name="id" type="int"/>
        <column name="heading" type="varchar(36)"/>
    </createTable>
    <createTable tableName="baeldung_tutorial5">
        <column name="id" type="int"/>
        <column name="heading" type="varchar(36)"/>
    </createTable>
    <rollback>
        <dropTable tableName="baeldung_tutorial4"/>
    </rollback>
    <rollback>
        <dropTable tableName="baeldung_tutorial5"/>
    </rollback>
</changeSet>

4.3. Referring to Another Changeset for Rollback

We can refer to another changeset, possibly the original one, when we are going to change some details of the database. This reduces code duplication and correctly reverts the changes that were made:

<changeSet id="referChangeSetForRollback" author="baeldung">
    <dropTable tableName="baeldung_tutorial2"/>
    <dropTable tableName="baeldung_tutorial3"/>
    <rollback changeSetId="multiStatementRollback" changeSetAuthor="baeldung"/>
</changeSet>

4.4. Empty Rollback Tag

By default, Liquibase tries to generate a rollback script if we have not provided one. If we need to disable this behavior, we can use an empty rollback tag so that the change is not reverted when a rollback is attempted:

<changeSet id="emptyRollback" author="baeldung">
    <createTable tableName="baeldung_tutorial">
        <column name="id" type="int"/>
        <column name="heading" type="varchar(36)"/>
        <column name="author" type="varchar(36)"/>
    </createTable>
    <rollback/>
</changeSet>

5. Rollback Command Options

Apart from rolling back a database to a previous state, Liquibase can be used in several other ways: we can generate the rollback SQL, create a future rollback script, and test the migration and the rollback together in one step.

5.1. Generate Rollback Script

As with rolling back, we have three options for generating rollback SQL:

  • rollbackSQL <tag> – a script for rolling the database back to the given tag
  • rollbackToDateSQL <date/time> – an SQL script to roll the database back to its state as of the given date/time
  • rollbackCountSQL <value> – an SQL script to roll the database back by the given number of changesets

Let’s see one of the examples in action; note that with the Maven plugin, the count is passed as a property to the rollbackSQL goal (a hedged equivalent of rollbackCountSQL):

mvn liquibase:rollbackSQL -Dliquibase.rollbackCount=2

5.2. Generate Future Rollback Script

This command generates the rollback SQL needed to bring the database back to its current state from a future point, i.e. after all changesets that are currently eligible to run have completed. This is very useful when we need to provide a rollback script for a change we are about to perform:

mvn liquibase:futureRollbackSQL

5.3. Run Update Testing Rollback

This command executes the database update and then rolls back the changesets, returning the database to its current state. Neither future rollback nor update testing rollback alters the database once execution finishes; the difference is that update testing rollback performs the actual migration before rolling it back.

This can be used to test the execution of update changes without permanently changing the database:

mvn liquibase:updateTestingRollback

6. Conclusion

In this quick tutorial, we explored some command line and changeset features of the Liquibase rollback functionality.

As always, the source code can be found over on GitHub.

Converting String to Stream of chars


1. Overview

Java 8 introduced the Stream API, with functional-like operations for processing sequences. If you want to read more about it, have a look at this article.

In this quick article, we’ll see how to convert a String to a Stream of single characters.

2. Conversion Using chars()

The String API has a new method – chars() – with which we can obtain a stream of the characters in a String object. This simple API returns an instance of IntStream from the input String.

Simply put, IntStream contains an integer representation of the characters from the String object:

String testString = "String";
IntStream intStream = testString.chars();

It’s possible to work with the integer representation of the characters without converting them to their Character equivalent. This can lead to some minor performance gains, as there will be no need to box each integer into a Character object.

However, if we’re to display the characters for reading, we need to convert the integers to the human-friendly Character form:

Stream<Character> characterStream = testString.chars()
  .mapToObj(c -> (char) c);

3. Conversion Using codePoints()

Alternatively, we can use the codePoints() method to get an instance of IntStream from a String. The advantage of using this API is that Unicode supplementary characters can be handled effectively.

Supplementary characters are represented by Unicode surrogate pairs and will be merged into a single codepoint. This way we can correctly process (and display) any Unicode symbol:

IntStream intStream1 = testString.codePoints();

We need to map the returned IntStream to Stream<Character> in order to display it to users:

Stream<Character> characterStream2 
  = testString.codePoints().mapToObj(c -> (char) c);
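
One caveat: casting an int code point to a char truncates supplementary characters, which defeats the purpose of using codePoints() for such symbols. A hedged alternative that preserves them maps each code point to a String via Character.toChars():

// each supplementary character becomes a two-char (surrogate pair) String
Stream<String> symbolStream = testString.codePoints()
  .mapToObj(cp -> new String(Character.toChars(cp)));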

4. Conversion to a Stream of Single Character Strings

So far, we’ve been able to get a Stream of characters; what if we want a Stream of single character Strings instead?

As mentioned earlier in the article, we’ll use either the codePoints() or chars() method to obtain an instance of IntStream, which we can then map to Stream<String>.

The mapping process involves converting the integer values to their respective character equivalents first.

Then we can use String.valueOf() or Character.toString() to convert the characters to a String object:

Stream<String> stringStream = testString.codePoints()
  .mapToObj(c -> String.valueOf((char) c));

5. Conclusion

In this quick tutorial, we learned how to obtain a stream of characters from a String object by calling either the codePoints() or the chars() method.

This allows us to take full advantage of the Stream API – to conveniently and effectively manipulate characters.

As always, code snippets can be found over on GitHub.

Changing the Order in a Sum Operation Can Produce Different Results?


1. Overview

In this quick article, we’re going to have a look at why changing the sum order returns a different result.

2. Problem

When we look at the following code, we can easily predict the correct answer (13.22 + 4.88 + 21.45 = 39.55). What is easy for us may work out differently in Java:

double a = 13.22;
double b = 4.88;
double c = 21.45;

double abc = a + b + c;
System.out.println("a + b + c = " + abc); // Outputs: a + b + c = 39.55

double acb = a + c + b;
System.out.println("a + c + b = " + acb); // Outputs: a + c + b = 39.550000000000004

From a Mathematical point of view, changing the order of a sum should always give the same result:

(A + B) + C = (A + C) + B

This is true, and it works well in Java (and other programming languages) for integers. However, almost all CPUs represent non-integer numbers using the IEEE 754 binary floating-point standard, which introduces inaccuracy when a decimal number is stored as a binary value. Computers can’t represent all real numbers precisely.

When we change the order, we also change the intermediate value that is stored in memory, and thus the result may differ. In the next example, we simply start with the sum of either A+B or A+C:

double ab = 18.1; // = 13.22 + 4.88
double ac = 34.67; // = 13.22 + 21.45
double sum_ab_c = ab + c;
double sum_ac_b = ac + b;
System.out.println("ab + c = " + sum_ab_c); // Outputs: 39.55
System.out.println("ac + b = " + sum_ac_b); // Outputs: 39.550000000000004

3. Solution

Because of the notorious inaccuracy of floating-point numbers, double should never be used for precise values. This includes currency. For accurate values, we can use the BigDecimal class:

BigDecimal d = new BigDecimal(String.valueOf(a));
BigDecimal e = new BigDecimal(String.valueOf(b));
BigDecimal f = new BigDecimal(String.valueOf(c));

BigDecimal def = d.add(e).add(f);
BigDecimal dfe = d.add(f).add(e);

System.out.println("d + e + f = " + def); // Outputs: 39.55
System.out.println("d + f + e = " + dfe); // Outputs: 39.55

Now we can see that in both cases the results are the same.

4. Conclusion

When working with decimal values, we always need to remember that floating-point numbers cannot represent all decimal values exactly, and this can cause unexpected and unwanted results. When precision is required, we must use the BigDecimal class.

As always, the code used throughout the article can be found over on GitHub.

Introduction to Quartz


1. Overview

Quartz is an open source job-scheduling framework written entirely in Java and designed for use in both J2SE and J2EE applications. It offers great flexibility without sacrificing simplicity.

You can create complex schedules for executing any job. Examples include tasks that run daily, every other Friday at 7:30 p.m., or only on the last day of every month.

In this article, we’ll take a look at elements to build a job with the Quartz API. For an introduction in combination with Spring, we recommend Scheduling in Spring with Quartz.

2. Maven Dependencies

We need to add the following dependency to the pom.xml:

<dependency>
    <groupId>org.quartz-scheduler</groupId>
    <artifactId>quartz</artifactId>
    <version>2.3.0</version>
</dependency>

The latest version can be found in the Maven Central repository.

3. The Quartz API

The heart of the framework is the Scheduler. It is responsible for managing the runtime environment for our application.

To ensure scalability, Quartz is based on a multi-threaded architecture. When started, the framework initializes a set of worker threads that are used by the Scheduler to execute Jobs.

This is how the framework can run many Jobs concurrently. It also relies on a loosely coupled set of ThreadPool management components for managing the thread environment.

The key interfaces of the API are:

  • Scheduler – the primary API for interacting with the scheduler of the framework
  • Job – an interface to be implemented by components that we wish to have executed
  • JobDetail – used to define instances of Jobs
  • Trigger – a component that determines the schedule upon which a given Job will be performed
  • JobBuilder – used to build JobDetail instances, which define instances of Jobs
  • TriggerBuilder – used to build Trigger instances

Let’s take a look at each one of those components.

4. Scheduler

Before we can use the Scheduler, it needs to be instantiated. To do this, we can use the factory SchedulerFactory:

SchedulerFactory schedulerFactory = new StdSchedulerFactory();
Scheduler scheduler = schedulerFactory.getScheduler();

A Scheduler’s life-cycle is bounded by its creation, via a SchedulerFactory, and a call to its shutdown() method. Once created, the Scheduler interface can be used to add, remove, and list Jobs and Triggers, and perform other scheduling-related operations (such as pausing a trigger).

However, the Scheduler will not act on any triggers until it has been started with the start() method:

scheduler.start();

5. Jobs

A Job is a class that implements the Job interface. It has only one simple method:

public class SimpleJob implements Job {
    public void execute(JobExecutionContext arg0) throws JobExecutionException {
        System.out.println("This is a quartz job!");
    }
}

When the Job’s trigger fires, the execute() method gets invoked by one of the scheduler’s worker threads.

The JobExecutionContext object that is passed to this method provides the job instance, with information about its runtime environment, a handle to the Scheduler that executed it, a handle to the Trigger that triggered the execution, the job’s JobDetail object, and a few other items.

The JobDetail object is created by the Quartz client at the time the Job is added to the Scheduler. It is essentially the definition of the job instance:

JobDetail job = JobBuilder.newJob(SimpleJob.class)
  .withIdentity("myJob", "group1")
  .build();

This object may also contain various property settings for the Job, as well as a JobDataMap, which can be used to store state information for a given instance of our job class.

5.1. JobDataMap

The JobDataMap is used to hold any amount of data objects that we wish to make available to the job instance when it executes. JobDataMap is an implementation of the Java Map interface and has some added convenience methods for storing and retrieving data of primitive types.

Here’s an example of putting data into the JobDataMap while building the JobDetail, before adding the job to the scheduler:

JobDetail job = newJob(SimpleJob.class)
  .withIdentity("myJob", "group1")
  .usingJobData("jobSays", "Hello World!")
  .usingJobData("myFloatValue", 3.141f)
  .build();

And here is an example of how to access these data during the job’s execution:

public class SimpleJob implements Job { 
    public void execute(JobExecutionContext context) throws JobExecutionException {
        JobDataMap dataMap = context.getJobDetail().getJobDataMap();

        String jobSays = dataMap.getString("jobSays");
        float myFloatValue = dataMap.getFloat("myFloatValue");

        System.out.println("Job says: " + jobSays + ", and val is: " + myFloatValue);
    } 
}

The above example will print “Job says: Hello World!, and val is: 3.141”.

We can also add setter methods to our job class that correspond to the names of keys in the JobDataMap.

If we do this, Quartz’s default JobFactory implementation automatically calls those setters when the job is instantiated, thus preventing the need to explicitly get the values out of the map within our execute method.
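
A hedged sketch of this approach – the field names mirror the JobDataMap keys, and the values are assumed to be injected by the default JobFactory as described above:

public class SimpleJob implements Job {
    private String jobSays;      // populated from the "jobSays" entry
    private float myFloatValue;  // populated from the "myFloatValue" entry

    public void setJobSays(String jobSays) {
        this.jobSays = jobSays;
    }

    public void setMyFloatValue(float myFloatValue) {
        this.myFloatValue = myFloatValue;
    }

    public void execute(JobExecutionContext context) throws JobExecutionException {
        System.out.println("Job says: " + jobSays + ", and val is: " + myFloatValue);
    }
}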

6. Triggers

Trigger objects are used to trigger the execution of Jobs.

When we wish to schedule a Job, we need to instantiate a trigger and adjust its properties to configure our scheduling requirements:

Trigger trigger = TriggerBuilder.newTrigger()
  .withIdentity("myTrigger", "group1")
  .startNow()
  .withSchedule(SimpleScheduleBuilder.simpleSchedule()
    .withIntervalInSeconds(40)
    .repeatForever())
  .build();
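
With both a JobDetail and a Trigger in hand, registering them with the Scheduler ties everything together; a minimal sketch using the job and trigger built above:

// schedule the job to fire according to the trigger
scheduler.scheduleJob(job, trigger);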

Trigger may also have a JobDataMap associated with it. This is useful for passing parameters to a Job that are specific to the executions of the trigger.

There are different types of triggers for different scheduling needs. Each one has different TriggerKey properties for tracking their identities. However, some other properties are common to all trigger types:

  • The jobKey property indicates the identity of the job that should be executed when the trigger fires.
  • The startTime property indicates when the trigger’s schedule first comes into effect. The value is a java.util.Date object that defines a moment in time for a given calendar date. For some trigger types, the trigger fires at the given start time. For others, it simply marks the time that the schedule should start.
  • The endTime property indicates when the trigger’s schedule should be canceled.

Quartz ships with a handful of different trigger types, but the most commonly used ones are SimpleTrigger and CronTrigger.

6.1. Priority

Sometimes, when we have many triggers, Quartz may not have enough resources to immediately fire all of the jobs that are scheduled to fire at the same time. In this case, we may want to control which of our triggers fires first. This is exactly what the priority property on a trigger is for.

For example, when ten triggers are set to fire at the same time and merely four worker threads are available, the first four triggers with the highest priority will be executed first. When we do not set a priority on a trigger, it uses a default priority of five. Any integer value is allowed as a priority, positive or negative.

In the example below, we have two triggers with a different priority. If there aren’t enough resources to fire all the triggers at the same time, triggerA will be the first one to be fired:

Trigger triggerA = TriggerBuilder.newTrigger()
  .withIdentity("triggerA", "group1")
  .startNow()
  .withPriority(15)
  .withSchedule(SimpleScheduleBuilder.simpleSchedule()
    .withIntervalInSeconds(40)
    .repeatForever())
  .build();
            
Trigger triggerB = TriggerBuilder.newTrigger()
  .withIdentity("triggerB", "group1")
  .startNow()
  .withPriority(10)
  .withSchedule(SimpleScheduleBuilder.simpleSchedule()
    .withIntervalInSeconds(20)
    .repeatForever())
  .build();

6.2. Misfire Instructions

A misfire occurs if a persistent trigger misses its firing time because the Scheduler was shut down, or because there were no available threads in Quartz’s thread pool.

The different trigger types have different misfire instructions available. By default, they use a smart policy instruction. When the scheduler starts, it searches for any persistent triggers that have misfired. After that, it updates each of them based on their individually configured misfire instructions.

Let’s take a look at the examples below:

Trigger misFiredTriggerA = TriggerBuilder.newTrigger()
  .startAt(DateUtils.addSeconds(new Date(), -10))
  .build();
            
Trigger misFiredTriggerB = TriggerBuilder.newTrigger()
  .startAt(DateUtils.addSeconds(new Date(), -10))
  .withSchedule(SimpleScheduleBuilder.simpleSchedule()
    .withMisfireHandlingInstructionFireNow())
  .build();

We have scheduled the trigger to run 10 seconds ago (so it is 10 seconds late by the time it is created) to simulate a misfire, e.g. because the scheduler was down or didn’t have a sufficient amount of worker threads available. Of course, in a real-world scenario, we would never schedule triggers like this.

For the first trigger (misFiredTriggerA), no misfire handling instruction is set. Hence the smart policy is used, which in this case resolves to withMisfireHandlingInstructionFireNow(). This means that the job is executed immediately after the scheduler discovers the misfire.

The second trigger explicitly defines what kind of behavior we expect when misfiring occurs. In this example, it just happens to be the same smart policy.

6.3. SimpleTrigger

SimpleTrigger is used for scenarios in which we need to execute a job at a specific moment in time. This can either be exactly once or repeatedly at specific intervals.

An example could be to fire a job execution at exactly 12:20:00 AM on January 13, 2018. Similarly, we can start at that time and then repeat five more times, every ten seconds.

In the code below, the date myStartTime has previously been defined and is used to build a trigger for one particular timestamp:

SimpleTrigger trigger = (SimpleTrigger) TriggerBuilder.newTrigger()
  .withIdentity("trigger1", "group1")
  .startAt(myStartTime)
  .forJob("job1", "group1")
  .build();

Next, let’s build a trigger that fires at a specific moment in time and then repeats ten times, every ten seconds:

SimpleTrigger trigger = (SimpleTrigger) TriggerBuilder.newTrigger()
  .withIdentity("trigger2", "group1")
  .startAt(myStartTime)
  .withSchedule(simpleSchedule()
    .withIntervalInSeconds(10)
    .withRepeatCount(10))
  .forJob("job1") 
  .build();

6.4. CronTrigger

The CronTrigger is used when we need schedules based on calendar-like statements. For example, we can specify firing-schedules such as every Friday at noon or every weekday at 9:30 am.

Cron-Expressions are used to configure instances of CronTrigger. These expressions consist of Strings that are made up of seven sub-expressions. We can read more about Cron-Expressions here.

In the example below, we build a trigger that fires every other minute between 8 am and 5 pm, every day:

CronTrigger trigger = TriggerBuilder.newTrigger()
  .withIdentity("trigger3", "group1")
  .withSchedule(CronScheduleBuilder.cronSchedule("0 0/2 8-17 * * ?"))
  .forJob("myJob", "group1")
  .build();

7. Conclusion

In this article, we have shown how to build a Scheduler to trigger a Job. We also saw some of the most common trigger options used: SimpleTrigger and CronTrigger.

Quartz can be used to create simple or complex schedules for executing dozens, hundreds, or even more jobs. More information on the framework can be found on the main website.

The source code of the examples can be found over on GitHub.

Testing with Selenium/WebDriver and the Page Object Pattern


1. Introduction

In this article, we’re going to build on the previous writeup and continue to improve our Selenium/WebDriver testing by introducing the Page Object pattern.

2. Adding Selenium

Let’s add a new dependency to our project to write simpler, more readable assertions:

<dependency>
    <groupId>org.hamcrest</groupId>
    <artifactId>hamcrest-all</artifactId>
    <version>1.3</version>
</dependency>

The latest version can be found in the Maven Central Repository.

2.1. Additional Methods

In the first part of the series, we used a few additional utility methods which we’re going to be using here as well.

We’ll start with the navigateTo(String url) method – which will help us navigate through different pages of the application:

public void navigateTo(String url) {
    driver.navigate().to(url);
}

Then, the clickElement(WebElement element) – as the name implies – will take care of performing the click action on a specified element:

public void clickElement(WebElement element) {
    element.click();
}

3. Page Object Pattern

Selenium gives us a lot of powerful, low-level APIs we can use to interact with the HTML page.

However, as the complexity of our tests grows, interacting with the low-level, raw elements of the DOM is not ideal. Our code will be harder to change, may break after small UI changes, and will be, simply put, less flexible.

Instead, we can utilize simple encapsulation and move all of these low-level details into a page object.

Before we start writing our first page object, it’s good to have a clear understanding of the pattern – as it should allow us to emulate a user’s interaction with our application.

The page object behaves as a sort of interface that encapsulates the details of our pages or elements and exposes a high-level API to interact with that element or page.

As such, an important detail is to provide descriptive names for our methods (e.g. clickButton(), navigateTo()), as this makes it easier to replicate an action taken by the user and generally leads to a better API when we’re chaining steps together.

Ok, so now, let’s go ahead and create our page object – in this case, our home page:

public class BaeldungHomePage {

    private SeleniumConfig config;
 
    @FindBy(css=".header--menu > a")
    private WebElement title;
 
    @FindBy(css = ".menu-start-here > a")
    private WebElement startHere;

    // ...

    public StartHerePage clickOnStartHere() {
        config.clickElement(startHere);

        StartHerePage startHerePage = new StartHerePage(config);
        PageFactory.initElements(config.getDriver(), startHerePage);

        return startHerePage;
    }
}

Notice how our implementation is dealing with the low-level details of the DOM and exposing a nice, high-level API.

For example, the @FindBy annotation allows us to pre-populate our WebElements; the same lookup can also be expressed using the By API:

private By title = By.cssSelector(".header--menu > a");

Of course, both are valid, however using annotations is a bit cleaner.

Also notice the chaining – our clickOnStartHere() method returns a StartHerePage object – where we can continue the interaction:

public class StartHerePage {

    // Includes a SeleniumConfig attribute

    @FindBy(css = ".page-title")
    private WebElement title;

    // constructor

    public String getPageTitle() {
        return title.getText();
    }
}
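
The test below also calls a navigate() method on the home page; a minimal sketch, assuming it delegates to the navigateTo() helper from section 2.1 with the page’s URL:

public void navigate() {
    // hypothetical: the home page URL is assumed here
    config.navigateTo("https://www.baeldung.com/");
}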

Let’s write a quick test, where we simply navigate to the page and check one of the elements:

@Test
public void givenHomePage_whenNavigate_thenShouldBeInStartHere() {
    homePage.navigate();
    StartHerePage startHerePage = homePage.clickOnStartHere();
 
    assertThat(startHerePage.getPageTitle(), is("Start Here"));
}

It’s important to take into account that our homepage has the responsibility of:

  1. Based on the given browser configuration, navigate to the page.
  2. Once there, validate the content of the page (in this case, the title).

Our test is very straightforward; we navigate to the home page, execute the click on the “Start Here” element, which will take us to the page with the same name, and finally, we just validate the title is present.

After our tests run, the close() method will be executed, and our browser should be closed automatically.

3.1. Separating Concerns

Another possibility we can consider is separating concerns even further, by having two separate classes: one takes care of holding all the attributes (WebElement or By) of our page:

public class BaeldungAboutPage {

    @FindBy(css = ".page-header > h1")
    public static WebElement title;
}

The other takes care of implementing all the functionality we want to test:

public class BaeldungAbout {

    private SeleniumConfig config;

    public BaeldungAbout(SeleniumConfig config) {
        this.config = config;
        PageFactory.initElements(config.getDriver(), BaeldungAboutPage.class);
    }

    // navigate and getTitle methods
}

If we use attributes as By and don’t use the annotation feature, it’s recommended to add a private constructor to our page class to prevent it from being instantiated.

It’s important to mention that we need to pass the class that contains the annotations, in this case the BaeldungAboutPage class, in contrast to what we did in our previous example by passing the this keyword:

@Test
public void givenAboutPage_whenNavigate_thenTitleMatch() {
    about.navigateTo();
 
    assertThat(about.getPageTitle(), is("About Baeldung"));
}

Notice how we can now keep all the internal details of interacting with our page in the implementation, and here, we can actually use this client at a high, readable level.

4. Conclusion

In this quick tutorial, we focused on improving our usage of Selenium/WebDriver with the help of the Page-Object Pattern. We went through different examples and implementations, to see the practical ways of utilizing the pattern to interact with our site.

As always, the implementation of all of these examples and snippets can be found over on GitHub. This is a Maven-based project so it should be easy to import and run.


Locality-Sensitive Hashing in Java Using Java-LSH


1. Overview

The Locality-Sensitive Hashing (LSH) algorithm hashes input items so that similar items have a high probability of being mapped to the same buckets.

In this quick article, we will use the java-lsh library to demonstrate a simple use case of this algorithm.

2. Maven Dependency

To get started, we’ll need to add the Maven dependency for the java-lsh library:

<dependency>
    <groupId>info.debatty</groupId>
    <artifactId>java-lsh</artifactId>
    <version>0.10</version>
</dependency>

3. Locality-Sensitive Hashing Use Case

LSH has many possible applications, but we will consider one particular example.

Suppose we have a database of documents and want to implement a search engine that will be able to identify similar documents.

We can use LSH as part of this solution:

  • Every document can be transformed into a vector of numbers or booleans – for example, we could use the word2vec algorithm to transform words and documents into vectors of numbers
  • Once we have a vector representing each document, we can use the LSH algorithm to calculate a hash for each vector; due to the characteristics of LSH, documents represented by similar vectors will have a similar or identical hash
  • As a result, given a particular document’s vector, we can find the N vectors with a similar hash and return the corresponding documents to the end user

4. Example

We will be using the java-lsh library to calculate hashes for our input vectors. We won’t be covering the transformation itself, as this is a huge topic beyond the scope of this article.

However, suppose we have three input vectors that are transformed from a set of three documents, presented in a form that can be used as the input for the LSH algorithm:

boolean[] vector1 = new boolean[] {true, true, true, true, true};
boolean[] vector2 = new boolean[] {false, false, false, true, false};
boolean[] vector3 = new boolean[] {false, false, true, true, false};

Note that in a production application, the number of input vectors should be a lot higher to leverage the LSH algorithm, but for the sake of this demonstration, we will stick to three vectors only.

It is important to note that the first vector is vastly different from the second and third, whereas the second and third vectors are quite similar to each other.

Let’s create an instance of the LSHMinHash class. We need to pass it the size of the input vectors – all input vectors should have the same size. We also need to specify how many hash buckets we want and how many stages of computation (iterations) LSH should perform:

int sizeOfVectors = 5;
int numberOfBuckets = 10;
int stages = 4;

LSHMinHash lsh = new LSHMinHash(stages, numberOfBuckets, sizeOfVectors);

We specify that all vectors hashed by the algorithm should be distributed among ten buckets. We also want four iterations of LSH for calculating the hashes.

To calculate the hash for each vector, we pass the vector to the hash() method:

int[] firstHash = lsh.hash(vector1);
int[] secondHash = lsh.hash(vector2);
int[] thirdHash = lsh.hash(vector3);

System.out.println(Arrays.toString(firstHash));
System.out.println(Arrays.toString(secondHash));
System.out.println(Arrays.toString(thirdHash));

Running that code will result in output similar to:

[0, 0, 1, 0]
[9, 3, 9, 8]
[1, 7, 8, 8]

Looking at each output array, we can see the hash values calculated at each of the four iterations for the corresponding input vector. The first line shows the hash results for the first vector, the second line for the second vector, and the third line for the third vector.

After four iterations, LSH yielded the results we expected: it calculated the same hash value (8) for the second and third vectors, which were similar to each other, and a different hash value (0) for the first vector, which is different from the other two.
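
In code, two vectors can be treated as candidates for similarity when they land in the same bucket in at least one stage; a hedged sketch of that check:

// the second and third vectors are candidates if any stage produced the same bucket
boolean candidates = false;
for (int stage = 0; stage < stages; stage++) {
    if (secondHash[stage] == thirdHash[stage]) {
        candidates = true;
        break;
    }
}
System.out.println("Similarity candidates: " + candidates);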

LSH is a probability-based algorithm, so we cannot be sure that two similar vectors will land in the same hash bucket. Nevertheless, when we have a large enough number of input vectors, the algorithm yields results with a high probability of assigning similar vectors to the same buckets.

When we are dealing with massive data sets, LSH can be a handy algorithm.

5. Conclusion

In this quick article, we looked at an application of the Locality-Sensitive Hashing algorithm and showed how to use it with the help of the java-lsh library.

The implementation of all these examples and code snippets can be found in the GitHub project – this is a Maven project, so it should be easy to import and run as it is.

Drools Using Rules from Excel Files


1. Overview

Drools has support for managing business rules in a spreadsheet format.

In this article, we’ll see a quick example of using Drools to manage business rules using an Excel file.

2. Maven Dependencies

Let’s add the required Drools dependencies into our application:

<dependency>
    <groupId>org.kie</groupId>
    <artifactId>kie-ci</artifactId>
    <version>7.1.0.Beta2</version>
</dependency>
<dependency>
    <groupId>org.drools</groupId>
    <artifactId>drools-decisiontables</artifactId>
    <version>7.1.0.Beta2</version>
</dependency>

The latest version of these dependencies can be found at kie-ci and drools-decisiontables.

3. Defining Rules in Excel

For our example, let’s define rules to determine discount based on customer type and the number of years as a customer:

  • Individual customers with more than 3 years of history get a 15% discount
  • Individual customers with less than 3 years of history get a 5% discount
  • All business customers get a 20% discount

3.1. The Excel File

Let’s begin by creating our Excel file, following the specific structure and keywords required by Drools:

For our simple example, we have used the most relevant set of keywords:

  • RuleSet – indicates the beginning of the decision table
  • Import – Java classes used in the rules
  • RuleTable – indicates the beginning of the set of rules
  • Name – Name of the rule
  • CONDITION – the code snippet of the condition to be checked against the input data. A rule should contain at least one condition
  • ACTION – the code snippet of the action to be taken if the conditions of the rule are met. A rule should contain at least one action. In the example, we are calling setDiscount on the Customer object

In addition, we have used the Customer class in the Excel file. So, let’s create that now.

3.2. The Customer Class

As can be seen from the CONDITION and ACTION entries in the Excel sheet, we use an object of the Customer class for the input data (type and years) and to store the result (discount).

The Customer class:

public class Customer {
    private CustomerType type;

    private int years;

    private int discount;

    // Standard constructor, getters and setters

    public enum CustomerType {
        INDIVIDUAL,
        BUSINESS;
    }
}

4. Creating Drools Rule Engine Instance

Before we can execute the rules that we have defined, we have to work with an instance of the Drools rule engine. For that, we have to use the Kie core components.

4.1. KieServices

The KieServices class provides access to all the Kie build and runtime facilities. It provides several factories, services, and utility methods. So, let’s first get hold of a KieServices instance:

KieServices kieServices = KieServices.Factory.get();

Using the KieServices, we are going to create new instances of KieFileSystem, KieBuilder, and KieContainer.

4.2. KieFileSystem

KieFileSystem is a virtual file system. Let’s add our Excel spreadsheet to it:

Resource dt 
  = ResourceFactory
    .newClassPathResource("com/baeldung/drools/rules/Discount.xls",
      getClass());

KieFileSystem kieFileSystem = kieServices.newKieFileSystem().write(dt);

4.3. KieBuilder

Now, build the content of the KieFileSystem by passing it to KieBuilder:

KieBuilder kieBuilder = kieServices.newKieBuilder(kieFileSystem);
kieBuilder.buildAll();

If the build is successful, it creates a KieModule (any Maven-produced jar with a kmodule.xml in it is a KieModule).

4.4. KieRepository

The framework automatically adds the KieModule (resulting from the build) to KieRepository:

KieRepository kieRepository = kieServices.getRepository();

4.5. KieContainer

It is now possible to create a new KieContainer with this KieModule using its ReleaseId. In this case, Kie assigns a default ReleaseId:

ReleaseId krDefaultReleaseId = kieRepository.getDefaultReleaseId();
KieContainer kieContainer 
  = kieServices.newKieContainer(krDefaultReleaseId);

4.6. KieSession

We can now obtain a KieSession from the KieContainer. Our application interacts with the KieSession, which stores and executes rules on the runtime data:

KieSession kieSession = kieContainer.newKieSession();
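
The test cases below use a DroolsBeanFactory helper to obtain a KieSession from a Resource; a hedged sketch of such a helper, simply assembling the steps above:

public class DroolsBeanFactory {

    public KieSession getKieSession(Resource dt) {
        KieServices kieServices = KieServices.Factory.get();

        // add the decision table to a virtual file system and build it
        KieFileSystem kieFileSystem = kieServices.newKieFileSystem().write(dt);
        kieServices.newKieBuilder(kieFileSystem).buildAll();

        // create a container for the default release and open a session
        KieContainer kieContainer = kieServices.newKieContainer(
          kieServices.getRepository().getDefaultReleaseId());
        return kieContainer.newKieSession();
    }
}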

5. Executing the Rules

Finally, it is time to provide input data and fire the rules:

Customer customer = new Customer(CustomerType.BUSINESS, 2);
kieSession.insert(customer);

kieSession.fireAllRules();

6. Test Cases

Let’s now add some test cases:

public class DiscountExcelIntegrationTest {

    private KieSession kSession;

    @Before
    public void setup() {
        Resource dt 
          = ResourceFactory
            .newClassPathResource("com/baeldung/drools/rules/Discount.xls",
              getClass());
        kSession = new DroolsBeanFactory().getKieSession(dt);
    }

    @Test
    public void 
      givenIndividualLongStanding_whenFireRule_thenCorrectDiscount() 
        throws Exception {
        Customer customer = new Customer(CustomerType.INDIVIDUAL, 5);
        kSession.insert(customer);

        kSession.fireAllRules();

        assertEquals(15, customer.getDiscount());
    }

    @Test
    public void 
      givenIndividualRecent_whenFireRule_thenCorrectDiscount() 
      throws Exception {
        Customer customer = new Customer(CustomerType.INDIVIDUAL, 1);
        kSession.insert(customer);

        kSession.fireAllRules();

        assertEquals(5, customer.getDiscount());
    }

    @Test
    public void 
      givenBusinessAny_whenFireRule_thenCorrectDiscount() 
        throws Exception {
        Customer customer = new Customer(CustomerType.BUSINESS, 0);
        kSession.insert(customer);

        kSession.fireAllRules();

        assertEquals(20, customer.getDiscount());
    }
}

7. Troubleshooting

Drools converts the decision table to DRL. Because of that, dealing with errors and typos in the Excel file can be hard, as the errors often refer to the content of the DRL rather than the spreadsheet. To troubleshoot, it helps to print and analyze the DRL:

Resource dt 
  = ResourceFactory
    .newClassPathResource("com/baeldung/drools/rules/Discount.xls",
      getClass());

DecisionTableProviderImpl decisionTableProvider 
  = new DecisionTableProviderImpl();
 
String drl = decisionTableProvider.loadFromResource(dt, null);

8. Conclusion

In this article, we have seen a quick example of using Drools to manage business rules in an Excel spreadsheet. We have seen the structure and the minimal set of keywords to be used in defining rules in an Excel file. Next, we have used Kie components to read and fire the rules. Finally, we wrote test cases to verify the results.

As always, the example used in this article can be found in the GitHub project.

Exploring the Spring 5 MVC URL Matching Improvements


1. Overview

Spring 5 will bring a new PathPatternParser for parsing URI template patterns. This is an alternative to the previously used AntPathMatcher.

The AntPathMatcher was an implementation of Ant-style path pattern matching. PathPatternParser breaks the path into a linked list of PathElements. This chain of PathElements is taken by the PathPattern class for quick matching of patterns.

With the PathPatternParser, support for a new URI variable syntax was also introduced.

In this article, we will go through the new/updated URL pattern matchers introduced in Spring 5.0 and also the ones that have been there since older versions of Spring.

2. New URL Pattern Matchers in Spring 5.0

The Spring 5.0 M5 release added a very easy-to-use URI variable syntax: {*foo}, which captures any number of path segments at the end of the pattern.

Keep in mind that this pattern will be available to use with handler methods, starting with Spring 5.0 RC2.

We need the Spring 5.0.0.RC2 version for this new URL pattern to work. It can be used along with Spring Boot 2.0.0.M2 by simply overriding the spring.version property in pom.xml:

<spring.version>5.0.0.RC2</spring.version>

2.1. URI Variable Syntax {*foo} Using a Handler Method

Let’s see an example of the URI variable pattern {*foo} using @GetMapping and a handler method. Whatever we give in the path after “/spring5” will be stored in the path variable “id”:

@GetMapping("/spring5/{*id}")
public String URIVariableHandler(@PathVariable String id) {
    return id;
}

@Test
public void whenMultipleURIVariablePattern_thenGotPathVariable() {
        
    client.get()
      .uri("/spring5/baeldung/tutorial")
      .exchange()
      .expectStatus()
      .is2xxSuccessful()
      .expectBody(String.class)
      .isEqualTo("/baeldung/tutorial");

    client.get()
      .uri("/spring5/baeldung")
      .exchange()
      .expectStatus()
      .is2xxSuccessful()
      .expectBody(String.class)
      .isEqualTo("/baeldung");
}

2.2. URI Variable Syntax {*foo} Using RouterFunction

Let’s see an example of the new URI variable path pattern using RouterFunction:

private RouterFunction<ServerResponse> routingFunction() {
    return route(GET("/test/{*id}"), 
      serverRequest -> ok().body(fromObject(serverRequest.pathVariable("id"))));
}

In this case, whatever path we write after “/test” will be captured in the path variable “id”. So the test case for it could be:

@Test
public void whenMultipleURIVariablePattern_thenGotPathVariable() 
  throws Exception {
 
    client.get()
      .uri("/test/ab/cd")
      .exchange()
      .expectStatus()
      .isOk()
      .expectBody(String.class)
      .isEqualTo("/ab/cd");
}

2.3. Use of URI Variable Syntax {*foo} to Access Resources

If we want to access resources, we’ll need to write a path pattern similar to the one in the previous example.

So let’s say our pattern is: “/files/{*filepaths}”. In this case, if the path is /files/hello.txt, the value of path variable “filepaths” will be “/hello.txt”, whereas, if the path is /files/test/test.txt, the value of “filepaths” = “/test/test.txt”.

Our routing function for accessing file resources under the /files/ directory:

private RouterFunction<ServerResponse> routingFunction() { 
    return RouterFunctions.resources(
      "/files/{*filepaths}", 
      new ClassPathResource("files/")); 
}

Let’s assume that our text files hello.txt and test.txt contain “hello” and “test” respectively. This can be demonstrated with a JUnit test case:

@Test 
public void whenMultipleURIVariablePattern_thenGotPathVariable() 
  throws Exception { 
      client.get() 
        .uri("/files/test/test.txt") 
        .exchange() 
        .expectStatus() 
        .isOk() 
        .expectBody(String.class) 
        .isEqualTo("test");
 
      client.get() 
        .uri("/files/hello.txt") 
        .exchange() 
        .expectStatus() 
        .isOk() 
        .expectBody(String.class) 
        .isEqualTo("hello"); 
}

3. Existing URL Patterns from Previous Versions

Let’s now take a peek into all the other URL pattern matchers that have been supported by older versions of Spring. All these patterns work with both RouterFunction and Handler methods with @GetMapping.

3.1. ‘?’ Matches Exactly One Character

If we specify the path pattern as “/t?st”, this will match paths like “/test” and “/tast”, but not “/tst” or “/teest”.

The example code using RouterFunction and its JUnit test case:

private RouterFunction<ServerResponse> routingFunction() { 
    return route(GET("/t?st"), 
      serverRequest -> ok().body(fromObject("Path /t?st is accessed"))); 
}

@Test
public void whenGetPathWithSingleCharWildcard_thenGotPathPattern()   
  throws Exception {
 
      client.get()
        .uri("/test")
        .exchange()
        .expectStatus()
        .isOk()
        .expectBody(String.class)
        .isEqualTo("Path /t?st is accessed");
}

3.2. ‘*’ Matches 0 or More Characters Within a Path Segment

If we specify the path pattern as “/baeldung/*Id”, this will match paths like “/baeldung/Id”, “/baeldung/tutorialId”, “/baeldung/articleId”, etc.:

private RouterFunction<ServerResponse> routingFunction() { 
    return route(
      GET("/baeldung/*Id"), 
      serverRequest -> ok().body(fromObject("/baeldung/*Id path was accessed"))); 
}

@Test
public void whenGetMultipleCharWildcard_thenGotPathPattern() 
  throws Exception {
      client.get()
        .uri("/baeldung/tutorialId")
        .exchange()
        .expectStatus()
        .isOk()
        .expectBody(String.class)
        .isEqualTo("/baeldung/*Id path was accessed");
}

3.3. ‘**’ Matches 0 or More Path Segments Until the End of the Path

In this case, the pattern matching is not limited to a single path segment. If we specify the pattern as “/resources/**”, it will match any number of path segments after “/resources/”:

private RouterFunction<ServerResponse> routingFunction() { 
    return RouterFunctions.resources(
      "/resources/**", 
      new ClassPathResource("resources/")); 
}

@Test
public void whenAccess_thenGot() throws Exception {
    client.get()
      .uri("/resources/test/test.txt")
      .exchange()
      .expectStatus()
      .isOk()
      .expectBody(String.class)
      .isEqualTo("content of file test.txt");
}

3.4. ‘{baeldung:[a-z]+}’ Regex in Path Variable

We can also specify a regex for the value of a path variable. So if our pattern is “/{baeldung:[a-z]+}”, the value of the path variable “baeldung” will be any path segment that matches the given regex:

private RouterFunction<ServerResponse> routingFunction() { 
    return route(GET("/{baeldung:[a-z]+}"), 
      serverRequest ->  ok()
        .body(fromObject("/{baeldung:[a-z]+} was accessed and "
        + "baeldung=" + serverRequest.pathVariable("baeldung")))); 
}

@Test
public void whenGetRegexInPathVarible_thenGotPathVariable() 
  throws Exception {
 
      client.get()
        .uri("/abcd")
        .exchange()
        .expectStatus()
        .isOk()
        .expectBody(String.class)
        .isEqualTo("/{baeldung:[a-z]+} was accessed and "
          + "baeldung=abcd");
}

3.5. ‘/{var1}_{var2}’ Multiple Path Variables in Same Path Segment

Spring 5 made sure that multiple path variables will be allowed in a single path segment only when separated by a delimiter. Only then can Spring distinguish between the two different path variables:

private RouterFunction<ServerResponse> routingFunction() { 
 
    return route(
      GET("/{var1}_{var2}"),
      serverRequest -> ok()
        .body(fromObject( serverRequest.pathVariable("var1") + " , " 
        + serverRequest.pathVariable("var2"))));
 }

@Test
public void whenGetMultiplePathVaribleInSameSegment_thenGotPathVariables() 
  throws Exception {
      client.get()
        .uri("/baeldung_tutorial")
        .exchange()
        .expectStatus()
        .isOk()
        .expectBody(String.class)
        .isEqualTo("baeldung , tutorial");
}

4. Conclusion

In this article, we went through the new URL pattern matchers in Spring 5, as well as the ones available from older versions of Spring. Here is the PathPattern documentation for Spring 5, which lists these patterns in brief.

As always, the implementation for all the examples we discussed can be found over on GitHub.

Vert.x Spring Integration


1. Overview

In this quick article, we’ll discuss the integration of Spring with Vert.x and leverage the best of both worlds: the powerful and well-known Spring features, and the reactive single-event loop from Vert.x.

To understand more about Vert.x, please refer to our introductory article here.

2. Setup

First, let’s get our dependencies in place:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-tomcat</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>io.vertx</groupId>
    <artifactId>vertx-web</artifactId>
    <version>3.4.1</version>
</dependency>

Notice that we’ve excluded the embedded Tomcat dependency from spring-boot-starter-web since we are going to deploy our services using verticles.

You may find the latest dependencies here.

3. Spring Vert.x Application

Now, we’ll build a sample application with two verticles deployed.

The first Verticle routes requests to the handler that sends them as messages to the given address. The other Verticle listens at a given address.

Let’s look at these in action.

3.1. Sender Verticle

ServerVerticle accepts HTTP requests and sends them as messages to a designated address. Let’s create a ServerVerticle class extending AbstractVerticle, and override the start() method to create our HTTP server:

@Override
public void start() throws Exception {
    super.start();

    Router router = Router.router(vertx);
    router.get("/api/baeldung/articles")
      .handler(this::getAllArticlesHandler);

    vertx.createHttpServer()
      .requestHandler(router::accept)
      .listen(config().getInteger("http.port", 8080));
}

In the server request handler, we passed a router object, which redirects any incoming request to the getAllArticlesHandler handler:

private void getAllArticlesHandler(RoutingContext routingContext) {
    vertx.eventBus().<String>send(ArticleRecipientVerticle.GET_ALL_ARTICLES, "", 
      result -> {
        if (result.succeeded()) {
            routingContext.response()
              .putHeader("content-type", "application/json")
              .setStatusCode(200)
              .end(result.result()
              .body());
        } else {
            routingContext.response()
              .setStatusCode(500)
              .end();
        }
      });
}

In the handler method, we’re sending a message over the Vert.x event bus to the address GET_ALL_ARTICLES. Then we process the callback accordingly for the success and error scenarios.

The message from the event bus will be consumed by the ArticleRecipientVerticle, discussed in the following section.
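The address itself is just a String constant on the recipient verticle; a minimal sketch (the actual address value here is a hypothetical choice):

public class ArticleRecipientVerticle extends AbstractVerticle {
    public static final String GET_ALL_ARTICLES = "get.articles.all";
    ...
}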

3.2. Recipient Verticle

ArticleRecipientVerticle listens for incoming messages and injects a Spring bean. It acts as the rendezvous point for Spring and Vert.x.

We’ll inject a Spring service bean into the Verticle and invoke the respective methods:

@Override
public void start() throws Exception {
    super.start();
    vertx.eventBus().<String>consumer(GET_ALL_ARTICLES)
      .handler(getAllArticleService(articleService));
}

Here, articleService is an injected Spring bean:

@Autowired
private ArticleService articleService;

This Verticle will keep listening to the event bus on an address GET_ALL_ARTICLES. Once it receives a message, it delegates it to the getAllArticleService handler method:

private Handler<Message<String>> getAllArticleService(ArticleService service) {
    return msg -> vertx.<String> executeBlocking(future -> {
        try {
            future.complete(
            mapper.writeValueAsString(service.getAllArticle()));
        } catch (JsonProcessingException e) {
            future.fail(e);
        }
    }, result -> {
        if (result.succeeded()) {
            msg.reply(result.result());
        } else {
            msg.reply(result.cause().toString());
        }
    });
}

This performs the required service operation and replies to the message with the status. This reply is what the ServerVerticle’s callback receives, as we saw in the earlier section.

4. Service Class

The service class is a simple implementation, providing methods to interact with the repository layer:

@Service
public class ArticleService {

    @Autowired
    private ArticleRepository articleRepository;

    public List<Article> getAllArticle() {
        return articleRepository.findAll();
    }
}

The ArticleRepository extends org.springframework.data.repository.CrudRepository and provides basic CRUD functionality.
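For reference, a minimal sketch of that repository interface (assuming Article uses a Long id):

public interface ArticleRepository extends CrudRepository<Article, Long> {
}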

5. Deploying Verticles

We will be deploying the application just as we would for a regular Spring Boot application. We have to create a Vert.x instance and deploy the verticles in it, after the Spring context initialization is completed:

public class VertxSpringApplication {

    @Autowired
    private ServerVerticle serverVerticle;

    @Autowired
    private ArticleRecipientVerticle articleRecipientVerticle;

    public static void main(String[] args) {
        SpringApplication.run(VertxSpringApplication.class, args);
    }

    @PostConstruct
    public void deployVerticle() {
        Vertx vertx = Vertx.vertx();
        vertx.deployVerticle(serverVerticle);
        vertx.deployVerticle(articleRecipientVerticle);
    }
}

Notice that we’re injecting the verticle instances into the Spring application class, so we’ll have to annotate the Verticle classes, ServerVerticle and ArticleRecipientVerticle, with @Component.
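For example, the sender verticle declaration could look like this (a minimal sketch):

@Component
public class ServerVerticle extends AbstractVerticle {
    // start() and getAllArticlesHandler() as shown earlier
}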

Let’s test the application:

@Test
public void givenUrl_whenReceivedArticles_thenSuccess() {
    ResponseEntity<String> responseEntity = restTemplate
      .getForEntity("http://localhost:8080/api/baeldung/articles", String.class);
 
    assertEquals(200, responseEntity.getStatusCodeValue());
}

6. Conclusion

In this article, we learned about how to build a RESTful WebService using Spring and Vert.x.

As usual, the example is available over on GitHub.

How to Get a Name of a Method Being Executed?


1. Overview

Sometimes we need to know the name of the current Java method being executed.

This quick article presents a couple of simple ways of getting hold of the method name in the current execution stack.

2. Using getEnclosingMethod

We can find the name of the method being executed by using the getEnclosingMethod() API:

public void givenObject_whenGetEnclosingMethod_thenFindMethod() {
    String methodName = new Object() {}
      .getClass()
      .getEnclosingMethod()
      .getName();
       
    assertEquals("givenObject_whenGetEnclosingMethod_thenFindMethod",
      methodName);
}

3. Using Throwable Stack Trace

Using a Throwable stack trace gives us a stack trace that includes the currently executing method:

public void givenThrowable_whenGetStacktrace_thenFindMethod() {
    StackTraceElement[] stackTrace = new Throwable().getStackTrace();
 
    assertEquals(
      "givenThrowable_whenGetStacktrace_thenFindMethod",
      stackTrace[0].getMethodName());
}

4. Using Thread Stack Trace

Also, the stack trace of a current thread (since JDK 1.5) usually includes the name of the method that is being executed:

public void givenCurrentThread_whenGetStackTrace_thenFindMethod() {
    StackTraceElement[] stackTrace = Thread.currentThread()
      .getStackTrace();
 
    assertEquals(
      "givenCurrentThread_whenGetStackTrace_thenFindMethod",
      stackTrace[1].getMethodName()); 
}

However, we need to remember that this solution has one significant drawback. Some virtual machines may skip one or more stack frames. Although this is not common, we should be aware that this can happen.

5. Conclusion

In this tutorial, we presented a few examples of how to get the name of the currently executed method. The examples are based on stack traces and getEnclosingMethod().

As always, you can check out the examples provided in this article over on GitHub.


Spring YAML Configuration


1. Overview

One of the ways of configuring Spring applications is using YAML configuration files.

In this quick article, we’ll configure different profiles for a simple Spring Boot application using YAML.

2. Spring YAML File

Spring profiles enable Spring applications to define different properties for different environments.

Following is a simple YAML file that contains two profiles. The three dashes separating the two profiles indicate the start of a new document, so all the profiles can be described in the same YAML file.

The relative path of application.yml file is /myApplication/src/main/resources/application.yml.

The Spring application takes the first profile as the default profile unless declared otherwise in the Spring application.

spring:
    profiles: test
name: test-YAML
environment: test
servers: 
    - www.abc.test.com
    - www.xyz.test.com

---
spring:
    profiles: prod
name: prod-YAML
environment: production
servers: 
    - www.abc.com
    - www.xyz.com

3. Binding YAML to a Config Class

To load a set of related properties from a properties file, we will create a bean class:

@Configuration
@EnableConfigurationProperties
@ConfigurationProperties
public class YAMLConfig {
 
    private String name;
    private String environment;
    private List<String> servers = new ArrayList<>();

    // standard getters and setters

}

The annotations used here are:

  • @Configuration marks the class as a source of bean definitions
  • @ConfigurationProperties binds and validates the external configuration to a configuration class
  • @EnableConfigurationProperties enables support for @ConfigurationProperties-annotated beans in the Spring application

4. Accessing the YAML Properties

To access the YAML properties, we create an object of the YAMLConfig class and access the properties using that object.

In the properties file, let’s set the spring.profiles.active property to prod. If we don’t define spring.profiles.active, it will default to the first profiles property defined in the YAML file.

The relative path for properties file is /myApplication/src/main/resources/application.properties.

spring.profiles.active=prod

In this example, we display the properties using the CommandLineRunner.

@SpringBootApplication
public class MyApplication implements CommandLineRunner {

    @Autowired
    private YAMLConfig myConfig;

    public static void main(String[] args) {
        SpringApplication app = new SpringApplication(MyApplication.class);
        app.run();
    }

    public void run(String... args) throws Exception {
        System.out.println("using environment: " + myConfig.getEnvironment());
        System.out.println("name: " + myConfig.getName());
        System.out.println("servers: " + myConfig.getServers());
    }
}

The output on the command line:

using environment: production
name: prod-YAML
servers: [www.abc.com, www.xyz.com]

5. YAML Property Overriding

In Spring Boot, YAML files can be overridden by other YAML properties files depending on their location. YAML properties can be overridden by properties files in the following locations, in order of highest precedence first:

  • Profiles’ properties placed outside the packaged jar
  • Profiles’ properties packaged inside the packaged jar
  • Application properties placed outside the packaged jar
  • Application properties packaged inside the packaged jar

6. Conclusion

In this quick article, we’ve seen how to configure properties in Spring Boot applications using YAML. We’ve also seen the property overriding rules followed by Spring Boot for YAML files.

The code for this article is available over on GitHub.

Java Weekly, Issue 182


Lots of interesting writeups on Java 9 this week.

Here we go…

1. Spring and Java

>> Kotlin and Spring: Working with JPA and data classes [codecentric.de]

Kotlin makes it possible to create entities using data classes without Java-like boilerplate. However, there are a few things to remember about when doing this.

>> What’s new in JPA 2.2 [thoughts-on-java.org]

This JPA release has a host of new features worth looking at.

>> JSR 369: JavaTM Servlet 4.0 Specification [jcp.org]

This has been a long time coming. A really long time – but, the final draft of the Servlet 4 spec is finally here.

>> A preview on Spring Data Kay [spring.io]

The release of Spring Data Kay is getting closer and closer. Here’s a cool list of features added during the last milestone, including improved reactive and Kotlin support.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> A SonarQube plugin for Kotlin – Analyzing with ANTLR [blog.frankel.ch]

A very interesting insight into creating the SonarQube plugin for Kotlin analysis.

>> Gatling Load Testing Part 1 – Using Gatling [blog.codecentric.de]

A solid way to start learning how to perf-test with Gatling.

Also worth reading:

3. Musings

>> The One Thing Every Company Can Do to Reduce Technical Debt [daedtech.com]

The lack of client’s involvement in the project is a strong leading indicator of things going south sooner rather than later.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> This didn’t go as I hoped [dilbert.com]

>> Your social media score is nearly zero [dilbert.com]

>> I speak truth to the powerless [dilbert.com]

5. Pick of the Week

>> How Log4J2 Works: 10 Ways to Get the Most Out Of It [stackify.com]


Allow Authentication from Accepted Locations Only with Spring Security


1. Overview

In this tutorial, we’ll focus on a very interesting security feature – securing the account of a user based on their location.

Simply put, we’ll block any login from unusual or non-standard locations and allow the user to enable new locations in a secure way.

This is part of the registration series and, naturally, builds on top of the existing codebase.

2. User Location Model

First, let’s take a look at our UserLocation model – which holds information about the user login locations; each user has at least one location associated with their account:

@Entity
public class UserLocation {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String country;

    private boolean enabled;

    @ManyToOne(targetEntity = User.class, fetch = FetchType.EAGER)
    @JoinColumn(nullable = false, name = "user_id")
    private User user;

    public UserLocation() {
        super();
        enabled = false;
    }

    public UserLocation(String country, User user) {
        super();
        this.country = country;
        this.user = user;
        enabled = false;
    }
    ...
}

And we’re going to add a simple retrieval operation to our repository:

public interface UserLocationRepository extends JpaRepository<UserLocation, Long> {
    UserLocation findByCountryAndUser(String country, User user);
}

Note that

  • The new UserLocation is disabled by default
  • Each user has at least one location associated with their account – the first location from which they accessed the application during registration

3. Registration

Now, let’s discuss how to modify the registration process to add the default user location:

@RequestMapping(value = "/user/registration", method = RequestMethod.POST)
@ResponseBody
public GenericResponse registerUserAccount(@Valid UserDto accountDto, 
  HttpServletRequest request) {
    
    User registered = userService.registerNewUserAccount(accountDto);
    userService.addUserLocation(registered, getClientIP(request));
    ...
}
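The getClientIP(request) helper isn’t shown here; a plausible sketch, mirroring the getClientIP() method of the DifferentLocationChecker shown later:

private String getClientIP(HttpServletRequest request) {
    // when behind a proxy, the first X-Forwarded-For entry is the client
    String xfHeader = request.getHeader("X-Forwarded-For");
    if (xfHeader == null) {
        return request.getRemoteAddr();
    }
    return xfHeader.split(",")[0];
}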

In the service implementation, we’ll obtain the country by the IP address of the user:

public void addUserLocation(User user, String ip) {
    InetAddress ipAddress = InetAddress.getByName(ip);
    String country 
      = databaseReader.country(ipAddress).getCountry().getName();
    UserLocation loc = new UserLocation(country, user);
    loc.setEnabled(true);
    loc = userLocationRepo.save(loc);
}

Note that we’re using the GeoLite2 database to get the country from the IP address. To use GeoLite2, we need the following Maven dependency:

<dependency>
    <groupId>com.maxmind.geoip2</groupId>
    <artifactId>geoip2</artifactId>
    <version>2.9.0</version>
</dependency>

And we also need to define a simple bean:

@Bean
public DatabaseReader databaseReader() throws IOException, GeoIp2Exception {
    File resource = new File("src/main/resources/GeoLite2-Country.mmdb");
    return new DatabaseReader.Builder(resource).build();
}

We’ve loaded up the GeoLite2 Country database from MaxMind here.

4. Secure Login 

Now that we have the default country of the user, we’ll add a simple location checker after authentication:

@Autowired
private DifferentLocationChecker differentLocationChecker;

@Bean
public DaoAuthenticationProvider authProvider() {
    CustomAuthenticationProvider authProvider = new CustomAuthenticationProvider();
    authProvider.setUserDetailsService(userDetailsService);
    authProvider.setPasswordEncoder(encoder());
    authProvider.setPostAuthenticationChecks(differentLocationChecker);
    return authProvider;
}

And here is our DifferentLocationChecker:

@Component
public class DifferentLocationChecker implements UserDetailsChecker {

    @Autowired
    private IUserService userService;

    @Autowired
    private HttpServletRequest request;

    @Autowired
    private ApplicationEventPublisher eventPublisher;

    @Override
    public void check(UserDetails userDetails) {
        String ip = getClientIP();
        NewLocationToken token = userService.isNewLoginLocation(userDetails.getUsername(), ip);
        if (token != null) {
            String appUrl = 
              "http://" 
              + request.getServerName() 
              + ":" + request.getServerPort() 
              + request.getContextPath();
            
            eventPublisher.publishEvent(
              new OnDifferentLocationLoginEvent(
                request.getLocale(), userDetails.getUsername(), ip, token, appUrl));
            throw new UnusualLocationException("unusual location");
        }
    }

    private String getClientIP() {
        String xfHeader = request.getHeader("X-Forwarded-For");
        if (xfHeader == null) {
            return request.getRemoteAddr();
        }
        return xfHeader.split(",")[0];
    }
}

Note that we used setPostAuthenticationChecks() so that the check runs only after successful authentication – when the user provides the right credentials.

Also, our custom UnusualLocationException is a simple AuthenticationException.
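A minimal sketch of that exception class:

public class UnusualLocationException extends AuthenticationException {

    public UnusualLocationException(String message) {
        super(message);
    }
}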

We’ll also need to modify our AuthenticationFailureHandler to customize the error message:

@Override
public void onAuthenticationFailure(...) {
    ...
    else if (exception.getMessage().equalsIgnoreCase("unusual location")) {
        errorMessage = messages.getMessage("auth.message.unusual.location", null, locale);
    }
}

Now, let’s take a deep look at the isNewLoginLocation() implementation:

@Override
public NewLocationToken isNewLoginLocation(String username, String ip) {
    try {
        InetAddress ipAddress = InetAddress.getByName(ip);
        String country 
          = databaseReader.country(ipAddress).getCountry().getName();
        
        User user = repository.findByEmail(username);
        UserLocation loc = userLocationRepo.findByCountryAndUser(country, user);
        if ((loc == null) || !loc.isEnabled()) {
            return createNewLocationToken(country, user);
        }
    } catch (Exception e) {
        return null;
    }
    return null;
}

Notice how, when the user provides the correct credentials, we then check their location. If the location is already associated with that user account, then the user is able to authenticate successfully.

If not, we create a NewLocationToken and a disabled UserLocation – to allow the user to enable this new location. More on that, in the following sections.

private NewLocationToken createNewLocationToken(String country, User user) {
    UserLocation loc = new UserLocation(country, user);
    loc = userLocationRepo.save(loc);
    NewLocationToken token = new NewLocationToken(UUID.randomUUID().toString(), loc);
    return newLocationTokenRepository.save(token);
}

Finally, here’s the simple NewLocationToken implementation – to allow users to associate new locations to their account:

@Entity
public class NewLocationToken {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String token;

    @OneToOne(targetEntity = UserLocation.class, fetch = FetchType.EAGER)
    @JoinColumn(nullable = false, name = "user_location_id")
    private UserLocation userLocation;
    
    ...
}
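The matching repository – used in createNewLocationToken() above and in the token validation later – could be as simple as this sketch:

public interface NewLocationTokenRepository extends JpaRepository<NewLocationToken, Long> {
    NewLocationToken findByToken(String token);
}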

5. Different Location Login Event

When the user logs in from a different location, we create a NewLocationToken and use it to trigger an OnDifferentLocationLoginEvent:

public class OnDifferentLocationLoginEvent extends ApplicationEvent {
    private Locale locale;
    private String username;
    private String ip;
    private NewLocationToken token;
    private String appUrl;
}

The DifferentLocationLoginListener handles our event as follows:

@Component
public class DifferentLocationLoginListener 
  implements ApplicationListener<OnDifferentLocationLoginEvent> {

    @Autowired
    private MessageSource messages;

    @Autowired
    private JavaMailSender mailSender;

    @Autowired
    private Environment env;

    @Override
    public void onApplicationEvent(OnDifferentLocationLoginEvent event) {
        String enableLocUri = event.getAppUrl() + "/user/enableNewLoc?token=" 
          + event.getToken().getToken();
        String changePassUri = event.getAppUrl() + "/changePassword.html";
        String recipientAddress = event.getUsername();
        String subject = "Login attempt from different location";
        String message = messages.getMessage("message.differentLocation", new Object[] { 
          new Date().toString(), 
          event.getToken().getUserLocation().getCountry(), 
          event.getIp(), enableLocUri, changePassUri 
          }, event.getLocale());

        SimpleMailMessage email = new SimpleMailMessage();
        email.setTo(recipientAddress);
        email.setSubject(subject);
        email.setText(message);
        email.setFrom(env.getProperty("support.email"));
        mailSender.send(email);
    }
}

Note how, when the user logs in from a different location, we’ll send an email to notify them.

If it was someone else who attempted to log into their account, they’ll, of course, change their password. If they recognize the authentication attempt, they’ll be able to associate the new login location with their account.

6. Enable a New Login Location

Finally, now that the user has been notified of the suspicious activity, let’s have a look at how the application will handle enabling the new location:

@RequestMapping(value = "/user/enableNewLoc", method = RequestMethod.GET)
public String enableNewLoc(Locale locale, Model model, @RequestParam("token") String token) {
    String loc = userService.isValidNewLocationToken(token);
    if (loc != null) {
        model.addAttribute(
          "message", 
          messages.getMessage("message.newLoc.enabled", new Object[] { loc }, locale)
        );
    } else {
        model.addAttribute(
          "message", 
          messages.getMessage("message.error", null, locale)
        );
    }
    return "redirect:/login?lang=" + locale.getLanguage();
}

And our isValidNewLocationToken() method:

@Override
public String isValidNewLocationToken(String token) {
    NewLocationToken locToken = newLocationTokenRepository.findByToken(token);
    if (locToken == null) {
        return null;
    }
    UserLocation userLoc = locToken.getUserLocation();
    userLoc.setEnabled(true);
    userLoc = userLocationRepo.save(userLoc);
    newLocationTokenRepository.delete(locToken);
    return userLoc.getCountry();
}

Simply put, we’ll enable the UserLocation associated with the token and then delete the token.

7. Conclusion

In this tutorial, we focused on a powerful new mechanism to add security into our applications – restricting unexpected user activity based on their location. 

As always, the full implementation can be found over on GitHub.

How to Warm Up the JVM


1. Overview

The JVM is one of the oldest yet most powerful virtual machines ever built.

In this article, we’ll have a quick look at what it means to warm up a JVM and how to do it.

2. JVM Architecture Basics

Whenever a new JVM process starts, all required classes are loaded into memory by an instance of the ClassLoader. This process takes place in three steps:

  1. Bootstrap Class Loading:  The “Bootstrap Class Loader” loads Java code and essential Java classes such as java.lang.Object into memory. These loaded classes reside in JRE\lib\rt.jar.
  2. Extension Class Loading: The ExtClassLoader is responsible for loading all JAR files located at the java.ext.dirs path. In non-Maven or non-Gradle based applications, where a developer adds JARs manually, all those classes are loaded during this phase.
  3. Application Class Loading: The AppClassLoader loads all classes located in the application class path.

This initialization process is based on a lazy loading scheme.
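We can observe this hierarchy ourselves; a quick sketch that prints an application class loader, its parent, and the bootstrap loader (which is exposed as null):

public class ClassLoaderDemo {
    public static void main(String[] args) {
        // our own classes are loaded by the application class loader
        System.out.println(ClassLoaderDemo.class.getClassLoader());
        // its parent is the extension class loader
        System.out.println(ClassLoaderDemo.class.getClassLoader().getParent());
        // core classes come from the bootstrap loader, represented as null
        System.out.println(String.class.getClassLoader());
    }
}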

3. What Is Warming up the JVM

Once class-loading is complete, all important classes (used at the time of process start) are pushed into the JVM cache (native code) – which makes them accessible faster during runtime. Other classes are loaded on a per-request basis.

The first request made to a Java web application is often substantially slower than the average response time during the lifetime of the process. This warm-up period can usually be attributed to lazy class loading and just-in-time compilation.

Keeping this in mind, for low-latency applications, we need to cache all classes beforehand – so that they’re available instantly when accessed at runtime.

This process of tuning the JVM is known as warming up.

4. Tiered Compilation

Thanks to the sound architecture of the JVM, frequently used methods are loaded into the native cache during the application life-cycle.

We can make use of this property to force-load critical methods into the cache when an application starts. To that end, we need to enable tiered compilation via VM flags:

-XX:+TieredCompilation -XX:CompileThreshold=<threshold>

Normally, the VM uses the interpreter to collect profiling information on methods that are fed into the compiler. In the tiered scheme, in addition to the interpreter, the client compiler is used to generate compiled versions of methods that collect profiling information about themselves.

Since compiled code is substantially faster than interpreted code, the program executes with better performance during the profiling phase.
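As a concrete invocation – assuming a hypothetical app.jar and an arbitrary threshold value:

java -XX:+TieredCompilation -XX:CompileThreshold=1000 -jar app.jar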

Applications running on JBoss and JDK version 7 with this VM argument enabled tend to crash after some time due to a documented bug. The issue has been fixed in JDK version 8.

Another point to note here is that in order to force-load, we have to make sure that all (or most) classes that are going to be executed are actually accessed. It’s similar to determining code coverage during unit testing: the more code is covered, the better the performance will be.

The next section demonstrates how this can be implemented.

5. Manual Implementation

We may implement an alternate technique to warm up the JVM. In this case, a simple manual warm-up could include repeating the creation of different classes thousands of times as soon as the application starts.

Firstly, we need to create a dummy class with a normal method:

public class Dummy {
    public void m() {
    }
}

Next, we need to create a class with a static method that will be executed at least 100,000 times as soon as the application starts; each execution creates a new instance of the Dummy class we defined earlier:

public class ManualClassLoader {
    protected static void load() {
        for (int i = 0; i < 100000; i++) {
            Dummy dummy = new Dummy();
            dummy.m();
        }
    }
}

Now, in order to measure the performance gain, we need to create a main class. This class contains a static block with a direct call to ManualClassLoader’s load() method.

Inside the main function, we call ManualClassLoader’s load() method once more and capture the system time in nanoseconds just before and after our function call. Finally, we subtract these times to get the actual execution time.

We have to run the application twice; once with the load() method call inside the static block and once without it:

public class MainApplication {
    static {
        long start = System.nanoTime();
        ManualClassLoader.load();
        long end = System.nanoTime();
        System.out.println("Warm Up time : " + (end - start));
    }
    public static void main(String[] args) {
        long start = System.nanoTime();
        ManualClassLoader.load();
        long end = System.nanoTime();
        System.out.println("Total time taken : " + (end - start));
    }
}

Below the results are reproduced in nanoseconds:

With Warm Up    No Warm Up    Difference (%)
1220056         8903640       730
1083797         13609530      1256
1026025         9283837       905
1024047         7234871       706
868782          9146180       1053

As expected, the warm-up approach shows much better performance than the normal one.

Of course, this is a very simplistic benchmark and only provides some surface-level insight into the impact of this technique. Also, it’s important to understand that, with a real-world application, we need to warm up with the typical code paths in the system.

6. Tools

We can also use several tools to warm up the JVM. One of the most well-known tools is the Java Microbenchmark Harness, JMH. It’s generally used for micro-benchmarking. Once it is loaded, it repeatedly hits a code snippet and monitors the warm-up iteration cycle.

To use it we need to add another dependency to the pom.xml:

<dependency>
    <groupId>org.openjdk.jmh</groupId>
    <artifactId>jmh-core</artifactId>
    <version>1.19</version>
</dependency>
<dependency>
    <groupId>org.openjdk.jmh</groupId>
    <artifactId>jmh-generator-annprocess</artifactId>
    <version>1.19</version>
</dependency>

We can check the latest version of JMH in Central Maven Repository.

Alternatively, we can use JMH’s Maven archetype to generate a sample project:

mvn archetype:generate \
    -DinteractiveMode=false \
    -DarchetypeGroupId=org.openjdk.jmh \
    -DarchetypeArtifactId=jmh-java-benchmark-archetype \
    -DgroupId=com.baeldung \
    -DartifactId=test \
    -Dversion=1.0

Next, let’s create a main method:

public static void main(String[] args) 
  throws RunnerException, IOException {
    Main.main(args);
}

Now, we need to create a method and annotate it with JMH’s @Benchmark annotation:

@Benchmark
public void init() {
    //code snippet	
}

Inside this init method, we need to write code that needs to be executed repeatedly in order to warm up.
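For instance, reusing the Dummy class from the manual approach, a hypothetical benchmark body could be:

@Benchmark
public void init() {
    Dummy dummy = new Dummy();
    dummy.m();
}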

7. Performance Benchmark

In the last 20 years, most contributions to Java were related to the GC (Garbage Collector) and JIT (Just In Time Compiler). Almost all of the performance benchmarks found online are done on a JVM that has already been running for some time.

However, Beihang University has published a benchmark report taking into account JVM warm-up time. They used Hadoop and Spark based systems to process massive data:

Here HotTub designates the environment in which the JVM was warmed up.

As you can see, the speed-up can be significant, especially for relatively small read operations – which is why this data is interesting to consider.

8. Conclusion

In this quick article, we showed how the JVM loads classes when an application starts and how we can warm up the JVM in order to gain a performance boost.

This book goes over more information and guidelines on the topic, if you want to continue.

And, like always, the full source code is available over on GitHub.

Iterate over a Map in Java


1. Overview

In this quick article, we’ll have a look at the different ways of iterating through the entries of a Map in Java.

Simply put, we can extract the contents of a Map using keySet(), values() or entrySet(). Since those are all collection views, similar iteration principles apply to all of them.

The Map.entrySet API returns a collection-view of the map, whose elements are Map.Entry instances. The only way to obtain a reference to a single map entry is from the iterator of this collection view.

The entry.getKey() returns the key and entry.getValue() returns the corresponding value.
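For completeness, iterating directly over the keys or the values is equally simple; a minimal sketch:

public void iterateUsingKeySetAndValues(Map<String, Integer> map) {
    for (String key : map.keySet()) {
        System.out.println(key);
    }
    for (Integer value : map.values()) {
        System.out.println(value);
    }
}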

Let’s have a look at a few of these.

2. EntrySet and For Loop

First, let’s see how to iterate through a Map using the EntrySet:

public void iterateUsingEntrySet(Map<String, Integer> map) {
    for (Map.Entry<String, Integer> entry : map.entrySet()) {
          System.out.println(entry.getKey() + ":" + entry.getValue());
    }
}

Here, we’re converting our map to a set of entries and then iterating through them using the classical for-each approach.

We can access a key of each entry by calling getKey() and we can access a value of each entry by calling getValue().


3. Iterator and EntrySet

Another approach would be to obtain a set of entries and perform the iteration using an Iterator:

public void iterateUsingIteratorAndEntry(Map<String, Integer> map) {
    Iterator<Map.Entry<String, Integer>> iterator = map.entrySet().iterator();
    while (iterator.hasNext()) {
        Map.Entry<String, Integer> entry = iterator.next();
        System.out.println(entry.getKey() + ":" + entry.getValue());
    }
}

Notice how we can get the Iterator instance using the iterator() API of entrySet(). Then, as usual, we loop through the iterator with iterator.next().

4. With Lambdas

Let’s now see how to iterate a Map using lambda expressions.

Like most other things in Java 8, this turns out to be much simpler than the alternatives; we’ll make use of the forEach() method:

public void iterateUsingLambda(Map<String, Integer> map) {
    map.forEach((k, v) -> System.out.println((k + ":" + v)));
}

In this case, we do not need to convert a map to a set of entries. To learn more about lambda expressions, you can start here.

5. Stream API

Stream API is one of the main features of Java 8. We can use this feature to loop through a Map as well but as in previous examples, we need to obtain a set of entries first:

public void iterateUsingStreamAPI(Map<String, Integer> map) {
    map.entrySet().stream()
      // ...
      .forEach(e -> System.out.println(e.getKey() + ":" + e.getValue()));
}

This should be used when we are planning on doing some additional Stream processing. Otherwise, it’s just a simple forEach() as described previously.

To learn more about Stream API, check out this article.

6. Conclusion

In this tutorial, we’ve focused on a simple but critical operation – iterating through the entries of a map.

We’ve seen a couple of methods which can be used with Java 8 only, namely Lambda expressions and the Stream API.

As always, the code examples in the article can be found over on GitHub.

Monte Carlo Tree Search for Tic-Tac-Toe Game


1. Overview

In this article, we’re going to explore the Monte Carlo Tree Search (MCTS) algorithm and its applications.

We’ll look at its phases in detail by implementing the game of Tic-Tac-Toe in Java. We’ll design a general solution which could be used in many other practical applications, with minimal changes.

2. Introduction

Simply put, Monte Carlo tree search is a probabilistic search algorithm. It’s a unique decision-making algorithm because of its efficiency in open-ended environments with an enormous amount of possibilities.

If you’re already familiar with game theory algorithms like Minimax, you’ll know that Minimax requires a function to evaluate the current state, and that it has to compute many levels in the game tree to find the optimal move.

Unfortunately, that’s not feasible in a game like Go, which has a high branching factor (resulting in millions of possibilities as the height of the tree increases), and it’s difficult to write a good evaluation function to compute how good the current state is.

Monte Carlo tree search applies the Monte Carlo method to the game tree search. As it is based on random sampling of game states, it does not need to brute-force its way out of each possibility. Also, it does not necessarily require us to write an evaluation function or good heuristics.

And, a quick side-note – it revolutionized the world of computer Go. Since March 2016, it has become a prevalent research topic, as Google’s AlphaGo (built with MCTS and a neural network) beat Lee Sedol (the world champion in Go).

3. Monte Carlo Tree Search Algorithm

Now, let’s explore how the algorithm works. Initially, we’ll build a lookahead tree (game tree) with a root node, and then we’ll keep expanding it with random rollouts. In the process, we’ll maintain visit count and win count for each node.

In the end, we are going to select the node with most promising statistics.

The algorithm consists of four phases; let’s explore all of them in detail.

3.1. Selection

In this initial phase, the algorithm starts with a root node and selects the child node with the maximum win rate. We also want to make sure that each node is given a fair chance.

The idea is to keep selecting optimal child nodes until we reach the leaf node of the tree. A good way to select such a child node is to use the UCT (Upper Confidence Bound applied to trees) formula:

    (wi / ni) + c * sqrt(ln(t) / ni)

In which:

  • wi = number of wins after i-th move
  • ni = number of simulations after i-th move
  • c = exploration parameter (theoretically equal to √2)
  • t = total number of simulations for the parent node

The formula ensures that no state will be a victim of starvation and it also plays promising branches more often than their counterparts.

3.2. Expansion

When it can no longer apply UCT to find the successor node, it expands the game tree by appending all possible states from the leaf node.

3.3. Simulation

After expansion, the algorithm picks a child node arbitrarily and simulates a randomized game from the selected node until it reaches the resulting state of the game. If nodes are picked randomly or semi-randomly during the playout, it is called a light playout. You can also opt for a heavy playout by writing quality heuristics or evaluation functions.

3.4. Backpropagation

This is also known as the update phase. Once the algorithm reaches the end of the game, it evaluates the state to figure out which player has won. It traverses upwards to the root and increments the visit count for all visited nodes. It also updates the win score for each node if the player for that position has won the playout.

MCTS keeps repeating these four phases until some fixed number of iterations or some fixed amount of time.

In this approach, we estimate the winning score for each node based on random moves. So the higher the number of iterations, the more reliable the estimate becomes. The algorithm’s estimates will be less accurate at the start of a search and keep improving after a sufficient amount of time. Again, it solely depends on the type of the problem.

4. Dry Run

Here, nodes contain statistics as total visits/win score.

5. Implementation

Now, let’s implement a game of Tic-Tac-Toe – using Monte Carlo tree search algorithm.

We’ll design a generalized solution for MCTS which can be utilized for many other board games as well. We’ll have a look at most of the code in the article itself.

Although, to keep the explanation crisp, we may skip some minor details (not particularly related to MCTS), you can always find the complete implementation here.

First of all, we need a basic implementation for Tree and Node classes to have a tree search functionality:

public class Node {
    State state;
    Node parent;
    List<Node> childArray;
    // setters and getters
}
public class Tree {
    Node root;
}

As each node will have a particular state of the problem, let’s implement a State class as well:

public class State {
    Board board;
    int playerNo;
    int visitCount;
    double winScore;

    // copy constructor, getters and setters

    public List<State> getAllPossibleStates() {
        // constructs a list of all possible states from current state
    }
    public void randomPlay() {
        /* get a list of all possible positions on the board and 
           play a random move */
    }
}

Now, let’s implement MonteCarloTreeSearch class, which will be responsible for finding the next best move from the given game position:

public class MonteCarloTreeSearch {
    static final int WIN_SCORE = 10;
    int level;
    int opponent;

    public Board findNextMove(Board board, int playerNo) {
        // define an end time which will act as a terminating condition

        opponent = 3 - playerNo;
        Tree tree = new Tree();
        Node rootNode = tree.getRoot();
        rootNode.getState().setBoard(board);
        rootNode.getState().setPlayerNo(opponent);

        while (System.currentTimeMillis() < end) {
            Node promisingNode = selectPromisingNode(rootNode);
            if (promisingNode.getState().getBoard().checkStatus() 
              == Board.IN_PROGRESS) {
                expandNode(promisingNode);
            }
            Node nodeToExplore = promisingNode;
            if (promisingNode.getChildArray().size() > 0) {
                nodeToExplore = promisingNode.getRandomChildNode();
            }
            int playoutResult = simulateRandomPlayout(nodeToExplore);
            backPropogation(nodeToExplore, playoutResult);
        }

        Node winnerNode = rootNode.getChildWithMaxScore();
        tree.setRoot(winnerNode);
        return winnerNode.getState().getBoard();
    }
}

Here, we keep iterating over all of the four phases until the predefined time, and at the end, we get a tree with reliable statistics to make a smart decision.

Now, let’s implement methods for all the phases.

We will start with the selection phase which requires UCT implementation as well:

private Node selectPromisingNode(Node rootNode) {
    Node node = rootNode;
    while (node.getChildArray().size() != 0) {
        node = UCT.findBestNodeWithUCT(node);
    }
    return node;
}
public class UCT {
    public static double uctValue(
      int totalVisit, double nodeWinScore, int nodeVisit) {
        if (nodeVisit == 0) {
            return Integer.MAX_VALUE;
        }
        return ((double) nodeWinScore / (double) nodeVisit) 
          + 1.41 * Math.sqrt(Math.log(totalVisit) / (double) nodeVisit);
    }

    public static Node findBestNodeWithUCT(Node node) {
        int parentVisit = node.getState().getVisitCount();
        return Collections.max(
          node.getChildArray(),
          Comparator.comparing(c -> uctValue(parentVisit, 
            c.getState().getWinScore(), c.getState().getVisitCount())));
    }
}

This phase recommends a leaf node which should be expanded further in the expansion phase:

private void expandNode(Node node) {
    List<State> possibleStates = node.getState().getAllPossibleStates();
    possibleStates.forEach(state -> {
        Node newNode = new Node(state);
        newNode.setParent(node);
        newNode.getState().setPlayerNo(node.getState().getOpponent());
        node.getChildArray().add(newNode);
    });
}

Next, we write code to pick a random node and simulate a random play out from it. Also, we will have an update function to propagate score and visit count starting from leaf to root:

private void backPropogation(Node nodeToExplore, int playerNo) {
    Node tempNode = nodeToExplore;
    while (tempNode != null) {
        tempNode.getState().incrementVisit();
        if (tempNode.getState().getPlayerNo() == playerNo) {
            tempNode.getState().addScore(WIN_SCORE);
        }
        tempNode = tempNode.getParent();
    }
}
private int simulateRandomPlayout(Node node) {
    Node tempNode = new Node(node);
    State tempState = tempNode.getState();
    int boardStatus = tempState.getBoard().checkStatus();
    if (boardStatus == opponent) {
        tempNode.getParent().getState().setWinScore(Integer.MIN_VALUE);
        return boardStatus;
    }
    while (boardStatus == Board.IN_PROGRESS) {
        tempState.togglePlayer();
        tempState.randomPlay();
        boardStatus = tempState.getBoard().checkStatus();
    }
    return boardStatus;
}

Now we are done with the implementation of MCTS. All we need is a Tic-Tac-Toe-specific Board class implementation. Notice that to play other games with our implementation, we just need to change the Board class:

public class Board {
    int[][] boardValues;
    public static final int DEFAULT_BOARD_SIZE = 3;
    public static final int IN_PROGRESS = -1;
    public static final int DRAW = 0;
    public static final int P1 = 1;
    public static final int P2 = 2;
    
    // getters and setters
    public void performMove(int player, Position p) {
        this.totalMoves++;
        boardValues[p.getX()][p.getY()] = player;
    }

    public int checkStatus() {
        /* Evaluate whether game is won and return winner.
           If it is draw return 0 else return -1 */         
    }

    public List<Position> getEmptyPositions() {
        int size = this.boardValues.length;
        List<Position> emptyPositions = new ArrayList<>();
        for (int i = 0; i < size; i++) {
            for (int j = 0; j < size; j++) {
                if (boardValues[i][j] == 0)
                    emptyPositions.add(new Position(i, j));
            }
        }
        return emptyPositions;
    }
}

We just implemented an AI which cannot be beaten in Tic-Tac-Toe. Let’s write a unit test which demonstrates that AI vs. AI will always result in a draw:

@Test
public void givenEmptyBoard_whenSimulateInterAIPlay_thenGameDraw() {
    Board board = new Board();
    int player = Board.P1;
    int totalMoves = Board.DEFAULT_BOARD_SIZE * Board.DEFAULT_BOARD_SIZE;
    for (int i = 0; i < totalMoves; i++) {
        board = mcts.findNextMove(board, player);
        if (board.checkStatus() != -1) {
            break;
        }
        player = 3 - player;
    }
    int winStatus = board.checkStatus();
 
    assertEquals(winStatus, Board.DRAW);
}

6. Advantages

  • It does not necessarily require any tactical knowledge about the game
  • A general MCTS implementation can be reused for any number of games with little modification
  • Focuses on nodes with higher chances of winning the game
  • Suitable for problems with high branching factor as it does not waste computations on all possible branches
  • Algorithm is very straightforward to implement
  • Execution can be stopped at any given time, and it will still suggest the next best state computed so far

7. Drawback

If MCTS is used in its basic form without any improvements, it may fail to suggest reasonable moves. This may happen if nodes are not visited adequately, which results in inaccurate estimates.

However, MCTS can be improved using some techniques. It involves domain specific as well as domain-independent techniques.

In domain-specific techniques, the simulation stage produces more realistic playouts rather than stochastic simulations, though this requires knowledge of game-specific techniques and rules.

8. Summary

At first glance, it’s difficult to trust that an algorithm relying on random choices can lead to smart AI. However, thoughtful implementation of MCTS can indeed provide us a solution which can be used in many games as well as in decision-making problems.

As always, the complete code for the algorithm can be found over on GitHub.

Spring with Maven BOM


1. Overview

In this quick tutorial, we’re going to look at how Maven, a tool based on the concept of Project Object Model (POM), can make use of a BOM or “Bill Of Materials”.

For more details about Maven, you can check our article Apache Maven Tutorial.

2. Dependency Management Concepts

To understand what a BOM is and what we can use it for, we first need to learn basic concepts.

2.1. What is Maven POM?

Maven POM is an XML file that contains information and configurations (about the project) that are used by Maven to import dependencies and to build the project.

2.2. What is Maven BOM?

BOM stands for Bill Of Materials. A BOM is a special kind of POM that is used to control the versions of a project’s dependencies and provide a central place to define and update those versions.

BOM provides the flexibility to add a dependency to our module without worrying about the version that we should depend on.

2.3. Transitive Dependencies

Maven can discover the libraries needed by our own dependencies in our pom.xml and include them automatically. There’s no limit to the number of dependency levels from which the libraries are gathered.

A conflict arises when two dependencies refer to different versions of the same artifact. Which one will Maven include?

The answer here is the “nearest definition”. This means that the version used will be the closest one to our project in the tree of dependencies. This is called dependency mediation.

Let’s see the following example to clarify the dependency mediation:

A -> B -> C -> D 1.4  and  A -> E -> D 1.0

This example shows that project A depends on B and E. B and E have their own dependencies which encounter different versions of the D artifact. Artifact D 1.0 will be used in the build of A project because the path through E is shorter.

There are different techniques to determine which version of the artifacts should be included:

  • We can always guarantee a version by declaring it explicitly in our project’s POM. For instance, to guarantee that D 1.4 is used, we should add it explicitly as a dependency in the pom.xml file.
  • We can use the Dependency Management section to control artifact versions as we will explain later in this article.

2.4. Dependency Management

Simply put, Dependency Management is a mechanism to centralize the dependency information.

When we have a set of projects that inherit a common parent, we can put all dependency information in a shared POM file called BOM.

Following is an example of how to write a BOM file:

<project ...>
	
    <modelVersion>4.0.0</modelVersion>
    <groupId>baeldung</groupId>
    <artifactId>Baeldung-BOM</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>pom</packaging>
    <name>BaelDung-BOM</name>
    <description>parent pom</description>
    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>test</groupId>
                <artifactId>a</artifactId>
                <version>1.2</version>
            </dependency>
            <dependency>
                <groupId>test</groupId>
                <artifactId>b</artifactId>
                <version>1.0</version>
                <scope>compile</scope>
            </dependency>
            <dependency>
                <groupId>test</groupId>
                <artifactId>c</artifactId>
                <version>1.0</version>
                <scope>compile</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>
</project>


As we can see, the BOM is a normal POM file with a dependencyManagement section where we can include all the artifacts’ information and versions.

2.5. Using the BOM File

There are 2 ways to use the previous BOM file in our project and then we will be ready to declare our dependencies without having to worry about version numbers.

We can inherit from the parent:

<project ...>
    <modelVersion>4.0.0</modelVersion>
    <groupId>baeldung</groupId>
    <artifactId>Test</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>pom</packaging>
    <name>Test</name>
    <parent>
        <groupId>baeldung</groupId>
        <artifactId>Baeldung-BOM</artifactId>
        <version>0.0.1-SNAPSHOT</version>
    </parent>
</project>

As we can see our project Test inherits the Baeldung-BOM.

We can also import the BOM.

In larger projects, the approach of inheritance is not efficient because the project can inherit only a single parent. Importing is the alternative as we can import as many BOMs as we need.

Let’s see how we can import a BOM file into our project POM:

<project ...>
    <modelVersion>4.0.0</modelVersion>
    <groupId>baeldung</groupId>
    <artifactId>Test</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>pom</packaging>
    <name>Test</name>
    
    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>baeldung</groupId>
                <artifactId>Baeldung-BOM</artifactId>
                <version>0.0.1-SNAPSHOT</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>
</project>

2.6. Overwriting BOM Dependency

The order of precedence of the artifact’s version is:

  1. The version of the artifact’s direct declaration in our project pom
  2. The version of the artifact in the parent project
  3. The version in the imported pom, taking into consideration the order of importing files
  4. dependency mediation
  • We can overwrite the artifact’s version by explicitly defining the artifact in our project’s pom with the desired version
  • If the same artifact is defined with different versions in 2 imported BOMs, then the version in the BOM file that was declared first will win

3. Spring BOM

We may find that a third-party library, or another Spring project, pulls in a transitive dependency to an older release. If we forget to explicitly declare a direct dependency, unexpected issues can arise.

To overcome such problems, Maven supports the concept of BOM dependency.

We can import the spring-framework-bom in our dependencyManagement section to ensure that all Spring dependencies are at the same version:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-framework-bom</artifactId>
            <version>4.3.8.RELEASE</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

We don’t need to specify the version attribute when we use the Spring artifacts as in the following example:

<dependencies>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-context</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-web</artifactId>
    </dependency>
</dependencies>

4. Conclusion

In this quick article, we showed the Maven Bill-Of-Material Concept and how to centralize the artifact’s information and versions in a common POM.

Simply put, we can then either inherit or import it to make use of the BOM benefits.

The code examples in the article can be found over on GitHub.
