Channel: Baeldung

Method Overloading and Overriding in Java


1. Overview

Method overloading and overriding are key concepts of the Java programming language, and as such, they deserve an in-depth look.

In this article, we’ll learn the basics of these concepts and see in what situations they can be useful.

2. Method Overloading

Method overloading is a powerful mechanism that allows us to define cohesive class APIs. To better understand why method overloading is such a valuable feature, let’s see a simple example.

Suppose that we’ve written a naive utility class that implements different methods for multiplying two numbers, three numbers, and so on.

If we’ve given the methods misleading or ambiguous names, such as multiply2(), multiply3(), multiply4(), then that would be a badly designed class API. Here’s where method overloading comes into play.

Simply put, we can implement method overloading in two different ways:

  • implementing two or more methods that have the same name but take different numbers of arguments
  • implementing two or more methods that have the same name but take arguments of different types

2.1. Different Numbers of Arguments

The Multiplier class shows, in a nutshell, how to overload the multiply() method by simply defining two implementations that take different numbers of arguments:

public class Multiplier {
    
    public int multiply(int a, int b) {
        return a * b;
    }
    
    public int multiply(int a, int b, int c) {
        return a * b * c;
    }
}

2.2. Arguments of Different Types

Similarly, we can overload the multiply() method by making it accept arguments of different types:

public class Multiplier {
    
    public int multiply(int a, int b) {
        return a * b;
    }
    
    public double multiply(double a, double b) {
        return a * b;
    }
}

Furthermore, it’s legitimate to define the Multiplier class with both types of method overloading:

public class Multiplier {
    
    public int multiply(int a, int b) {
        return a * b;
    }
    
    public int multiply(int a, int b, int c) {
        return a * b * c;
    }
    
    public double multiply(double a, double b) {
        return a * b;
    }
}

It’s worth noting, however, that it’s not possible to have two method implementations that differ only in their return types.

To understand why – let’s consider the following example:

public int multiply(int a, int b) { 
    return a * b; 
}
 
public double multiply(int a, int b) { 
    return a * b; 
}

In this case, the code simply wouldn’t compile because of the method call ambiguity – the compiler wouldn’t know which implementation of multiply() to call.

2.3. Type Promotion

One neat feature provided by method overloading is the so-called type promotion, a.k.a. widening primitive conversion.

In simple terms, an argument of one type is implicitly promoted to another when there's no exact match between the types of the arguments passed to the overloaded method and a specific method implementation.

To understand more clearly how type promotion works, consider the following implementations of the multiply() method:

public double multiply(int a, long b) {
    return a * b;
}

public int multiply(int a, int b, int c) {
    return a * b * c;
}

Now, calling the method with two int arguments will result in the second argument being promoted to long, as in this case there’s not a matching implementation of the method with two int arguments.

Let’s see a quick unit test to demonstrate type promotion:

@Test
public void whenCalledMultiplyAndNoMatching_thenTypePromotion() {
    assertThat(multiplier.multiply(10, 10)).isEqualTo(100.0);
}

Conversely, if we call the method with a matching implementation, type promotion just doesn’t take place:

@Test
public void whenCalledMultiplyAndMatching_thenNoTypePromotion() {
    assertThat(multiplier.multiply(10, 10, 10)).isEqualTo(1000);
}

Here’s a summary of the type promotion rules that apply for method overloading:

  • byte can be promoted to short, int, long, float, or double
  • short can be promoted to int, long, float, or double
  • char can be promoted to int, long, float, or double
  • int can be promoted to long, float, or double
  • long can be promoted to float or double
  • float can be promoted to double
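The rules above can be sketched quickly. The PromotionDemo class below is a hypothetical illustration (not part of the article's Multiplier); it shows how the compiler widens arguments and prefers the most specific applicable overload:

```java
public class PromotionDemo {

    public static String multiply(int a, long b) {
        return "multiply(int, long) -> " + (a * b);
    }

    public static String multiply(double a, double b) {
        return "multiply(double, double) -> " + (a * b);
    }

    public static void main(String[] args) {
        // Two int arguments: no exact match, so the second int widens to
        // long and the (int, long) overload is chosen, as it's more
        // specific than (double, double)
        assert multiply(10, 10).equals("multiply(int, long) -> 100");

        // float can widen to double but not to long, so only the
        // (double, double) overload is applicable here
        assert multiply(1.5f, 2.0f).equals("multiply(double, double) -> 3.0");
    }
}
```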

2.4. Static Binding

The ability to associate a specific method call with the method's body is known as binding.

In the case of method overloading, the binding is performed statically at compile time, hence it’s called static binding.

The compiler can effectively set the binding at compile time by simply checking the methods’ signatures.
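A minimal sketch of static binding (the StaticBindingDemo class is ours, for illustration): the overload is chosen from the declared type of the argument, fixed at compile time, regardless of the runtime type:

```java
public class StaticBindingDemo {

    public static String describe(Object o) {
        return "Object overload";
    }

    public static String describe(String s) {
        return "String overload";
    }

    public static void main(String[] args) {
        Object o = "hello"; // runtime type String, declared type Object
        // Overload selection uses the declared type, resolved at
        // compile time, so the Object overload is called here
        assert describe(o).equals("Object overload");
        assert describe("hello").equals("String overload");
    }
}
```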

3. Method Overriding

Method overriding allows us to provide fine-grained implementations in subclasses for methods defined in a base class.

While method overriding is a powerful feature – considering that it's a logical consequence of using inheritance, one of the biggest pillars of OOP – when and where to use it should be analyzed carefully, on a per-use-case basis.

Let’s see now how to use method overriding by creating a simple, inheritance-based (“is-a”) relationship.

Here’s the base class:

public class Vehicle {
    
    public String accelerate(long mph) {
        return "The vehicle accelerates at : " + mph + " MPH.";
    }
    
    public String stop() {
        return "The vehicle has stopped.";
    }
    
    public String run() {
        return "The vehicle is running.";
    }
}

And here’s a contrived subclass:

public class Car extends Vehicle {

    @Override
    public String accelerate(long mph) {
        return "The car accelerates at : " + mph + " MPH.";
    }
}

In the hierarchy above, we’ve simply overridden the accelerate() method in order to provide a more refined implementation for the subtype Car.

Here, it’s clear to see that if an application uses instances of the Vehicle class, then it can work with instances of Car as well, as both implementations of the accelerate() method have the same signature and the same return type.

Let’s write a few unit tests to check the Vehicle and Car classes:

@Test
public void whenCalledAccelerate_thenOneAssertion() {
    assertThat(vehicle.accelerate(100))
      .isEqualTo("The vehicle accelerates at : 100 MPH.");
}
    
@Test
public void whenCalledRun_thenOneAssertion() {
    assertThat(vehicle.run())
      .isEqualTo("The vehicle is running.");
}
    
@Test
public void whenCalledStop_thenOneAssertion() {
    assertThat(vehicle.stop())
      .isEqualTo("The vehicle has stopped.");
}

@Test
public void givenCarInstance_whenCalledAccelerate_thenOneAssertion() {
    assertThat(car.accelerate(80))
      .isEqualTo("The car accelerates at : 80 MPH.");
}

@Test
public void givenCarInstance_whenCalledRun_thenOneAssertion() {
    assertThat(car.run())
      .isEqualTo("The vehicle is running.");
}

@Test
public void givenCarInstance_whenCalledStop_thenOneAssertion() {
    assertThat(car.stop())
      .isEqualTo("The vehicle has stopped.");
}

Now, let’s see some unit tests that show how the run() and stop() methods, which aren’t overridden, return equal values for both Car and Vehicle:

@Test
public void givenVehicleCarInstances_whenCalledRun_thenEqual() {
    assertThat(vehicle.run()).isEqualTo(car.run());
}
 
@Test
public void givenVehicleCarInstances_whenCalledStop_thenEqual() {
   assertThat(vehicle.stop()).isEqualTo(car.stop());
}

In our case, we have access to the source code for both classes, so we can clearly see that calling the accelerate() method on a base Vehicle instance and calling accelerate() on a Car instance will return different values for the same argument.

Therefore, the following test demonstrates that the overridden method is invoked for an instance of Car:

@Test
public void whenCalledAccelerateWithSameArgument_thenNotEqual() {
    assertThat(vehicle.accelerate(100))
      .isNotEqualTo(car.accelerate(100));
}

3.1. Type Substitutability

A core principle in OOP is that of type substitutability, which is closely associated with the Liskov Substitution Principle (LSP).

Simply put, the LSP states that if an application works with a given base type, then it should also work with any of its subtypes. That way, type substitutability is properly preserved.

The biggest problem with method overriding is that some specific method implementations in the derived classes might not fully adhere to the LSP and therefore fail to preserve type substitutability.

Of course, an overriding method may differ from the base method in a few ways, but only within these rules:

  • The overriding method must take exactly the same parameter types as the base class method (declaring a method with the same name but different parameter types creates an overload, not an override)
  • If a method in the base class returns void, the overriding method must return void
  • If a method in the base class returns a primitive, the overriding method must return the same primitive
  • If a method in the base class returns a reference type, the overriding method may return the same type or a subtype (a.k.a. covariant return type)
  • If a method in the base class throws a checked exception, the overriding method may throw only the same exception, a subtype of it, or no exception at all
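The covariant return type rule can be sketched as follows (the Shape/Circle factory classes here are stand-ins for illustration, not from the article):

```java
public class CovariantReturnDemo {

    static class Shape { }
    static class Circle extends Shape { }

    static class ShapeFactory {
        public Shape create() {
            return new Shape();
        }
    }

    static class CircleFactory extends ShapeFactory {
        // Circle is a subtype of Shape: a legal covariant return type
        @Override
        public Circle create() {
            return new Circle();
        }
    }

    public static void main(String[] args) {
        ShapeFactory factory = new CircleFactory();
        // Callers written against ShapeFactory still work, and the
        // overriding method actually produces a Circle
        assert factory.create() instanceof Circle;
    }
}
```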

3.2. Dynamic Binding

Considering that method overriding can only be implemented with inheritance, where there is a hierarchy of a base type and subtype(s), the compiler can't determine at compile time which method to call, as both the base class and the subclasses define the same methods.

As a consequence, the JVM needs to check the actual type of the object at runtime to know which method should be invoked.

As this checking happens at runtime, method overriding is a typical example of dynamic binding.
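Dynamic binding can be sketched with the article's Vehicle/Car hierarchy (reproduced here as nested classes so the example is self-contained):

```java
public class DynamicBindingDemo {

    static class Vehicle {
        public String accelerate(long mph) {
            return "The vehicle accelerates at : " + mph + " MPH.";
        }
    }

    static class Car extends Vehicle {
        @Override
        public String accelerate(long mph) {
            return "The car accelerates at : " + mph + " MPH.";
        }
    }

    public static void main(String[] args) {
        Vehicle vehicle = new Car(); // declared type Vehicle, runtime type Car
        // The JVM dispatches on the runtime type, so Car's override runs
        assert vehicle.accelerate(100).equals("The car accelerates at : 100 MPH.");
    }
}
```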

4. Conclusion

In this tutorial, we learned how to implement method overloading and method overriding, and we explored some typical situations where they’re useful.

As usual, all the code samples shown in this article are available over on GitHub.


Object Type Casting in Java

$
0
0

1. Overview

The Java type system is made up of two kinds of types: primitives and references.

We covered primitive conversions in this article, and we’ll focus on references casting here, to get a good understanding of how Java handles types.

2. Primitive vs. Reference

Although primitive conversions and reference variable casting may look similar, they’re quite different concepts.

In both cases, we’re “turning” one type into another. But, in simplified terms, a primitive variable contains its value directly, and converting a primitive variable means an irreversible change to that value:

double myDouble = 1.1;
int myInt = (int) myDouble;
        
assertNotEquals(myDouble, myInt);

After the conversion in the above example, the myInt variable is 1, and we can’t restore the previous value 1.1 from it.

Reference variables are different; the reference variable only refers to an object but doesn’t contain the object itself.

And casting a reference variable doesn’t touch the object it refers to, but only labels this object in another way, expanding or narrowing opportunities to work with it. Upcasting narrows the list of methods and properties available to this object, and downcasting can extend it.

A reference is like a remote control to an object. The remote control has more or fewer buttons depending on its type, and the object itself is stored in a heap. When we do casting, we change the type of the remote control but don’t change the object itself.

3. Upcasting

Casting from a subclass to a superclass is called upcasting. Typically, upcasting is performed implicitly by the compiler.

Upcasting is closely related to inheritance – another core concept in Java. It’s common to use reference variables to refer to a more specific type. And every time we do this, implicit upcasting takes place.

To demonstrate upcasting let’s define an Animal class:

public class Animal {

    public void eat() {
        // ... 
    }
}

Now let’s extend Animal:

public class Cat extends Animal {

    public void eat() {
         // ... 
    }

    public void meow() {
         // ... 
    }
}

Now we can create an object of Cat class and assign it to the reference variable of type Cat:

Cat cat = new Cat();

And we can also assign it to the reference variable of type Animal:

Animal animal = cat;

In the above assignment, implicit upcasting takes place. We could do it explicitly:

animal = (Animal) cat;

But there’s no need for an explicit cast up the inheritance tree. The compiler knows that cat is an Animal and doesn’t display any errors.

Note that a reference can refer to any subtype of its declared type.

Using upcasting, we’ve restricted the number of methods available to the Cat instance, but we haven’t changed the instance itself. Now we can’t do anything that is specific to Cat – we can’t invoke meow() on the animal variable.

Although the Cat object remains a Cat object, calling meow() would cause a compiler error:

// animal.meow(); The method meow() is undefined for the type Animal

To invoke meow() we need to downcast animal, and we’ll do this later.

For now, though, let’s see what upcasting gives us: thanks to upcasting, we can take advantage of polymorphism.

3.1. Polymorphism

Let’s define another subclass of Animal, a Dog class:

public class Dog extends Animal {

    public void eat() {
         // ... 
    }
}

Now we can define the feed() method which treats all cats and dogs like animals:

public class AnimalFeeder {

    public void feed(List<Animal> animals) {
        animals.forEach(animal -> {
            animal.eat();
        });
    }
}

We don’t want AnimalFeeder to care about which animal is on the list – a Cat or a Dog. In the feed() method they are all animals.

Implicit upcasting occurs when we add objects of a specific type to the animals list:

List<Animal> animals = new ArrayList<>();
animals.add(new Cat());
animals.add(new Dog());
new AnimalFeeder().feed(animals);

We add cats and dogs and they are upcast to Animal type implicitly. Each Cat is an Animal and each Dog is an Animal. They’re polymorphic.

By the way, all Java objects are polymorphic because each object is an Object at least. We can assign an instance of Animal to the reference variable of Object type and the compiler won’t complain:

Object object = new Animal();

That’s why all Java objects we create already have Object specific methods, for example, toString().

Upcasting to an interface is also common.

We can create Mew interface and make Cat implement it:

public interface Mew {
    public void meow();
}

public class Cat extends Animal implements Mew {
    
    public void eat() {
         // ... 
    }

    public void meow() {
         // ... 
    }
}

Now any Cat object can also be upcast to Mew:

Mew mew = new Cat();

Cat is a Mew, so the upcast is legal and happens implicitly.

Thus, Cat is a Mew, Animal, Object, and Cat. It can be assigned to reference variables of all four types in our example.
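This can be sketched in one self-contained class (stand-in nested types mirroring the article's hierarchy): the same instance is assigned to all four reference types:

```java
public class UpcastTargetsDemo {

    interface Mew {
        String meow();
    }

    static class Animal { }

    static class Cat extends Animal implements Mew {
        public String meow() {
            return "meow";
        }
    }

    public static void main(String[] args) {
        Cat cat = new Cat();
        // The same object can be referenced through all four types;
        // each assignment is an implicit upcast
        Animal animal = cat;
        Mew mew = cat;
        Object object = cat;
        assert animal == cat && mew == cat && object == cat;
    }
}
```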

3.2. Overriding

In the example above, the eat() method is overridden. This means that although eat() is called on the variable of the Animal type, the work is done by methods invoked on real objects – cats and dogs:

public void feed(List<Animal> animals) {
    animals.forEach(animal -> {
        animal.eat();
    });
}

If we add some logging to our classes, we’ll see that Cat’s and Dog’s methods are called:

web - 2018-02-15 22:48:49,354 [main] INFO com.baeldung.casting.Cat - cat is eating
web - 2018-02-15 22:48:49,363 [main] INFO com.baeldung.casting.Dog - dog is eating

To sum up:

  • A reference variable can refer to an object if the object is of the same type as a variable or if it is a subtype
  • Upcasting happens implicitly
  • All Java objects are polymorphic and can be treated as objects of supertype due to upcasting

4. Downcasting

What if we want to use the variable of type Animal to invoke a method available only to Cat class? Here comes the downcasting. It’s the casting from a superclass to a subclass.

Let’s take an example:

Animal animal = new Cat();

We know that animal variable refers to the instance of Cat. And we want to invoke Cat’s meow() method on the animal. But the compiler complains that meow() method doesn’t exist for the type Animal.

To call meow() we should downcast animal to Cat:

((Cat) animal).meow();

The inner parentheses and the type they contain are sometimes called the cast operator. Note that external parentheses are also needed to compile the code.

Let’s rewrite the previous AnimalFeeder example with meow() method:

public class AnimalFeeder {

    public void feed(List<Animal> animals) {
        animals.forEach(animal -> {
            animal.eat();
            if (animal instanceof Cat) {
                ((Cat) animal).meow();
            }
        });
    }
}

Now we gain access to all methods available to Cat class. Look at the log to make sure that meow() is actually called:

web - 2018-02-16 18:13:45,445 [main] INFO com.baeldung.casting.Cat - cat is eating
web - 2018-02-16 18:13:45,454 [main] INFO com.baeldung.casting.Cat - meow
web - 2018-02-16 18:13:45,455 [main] INFO com.baeldung.casting.Dog - dog is eating

Note that in the above example we’re trying to downcast only those objects which are really instances of Cat. To do this, we use the operator instanceof.

4.1. instanceof Operator

We often use instanceof operator before downcasting to check if the object belongs to the specific type:

if (animal instanceof Cat) {
    ((Cat) animal).meow();
}
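As a side note, since Java 16 (well after this article's examples were written), pattern matching for instanceof combines the check and the cast in one step. A sketch with minimal stand-ins for the article's types:

```java
public class InstanceofPatternDemo {

    static class Animal { }

    static class Cat extends Animal {
        public String meow() {
            return "meow";
        }
    }

    public static void main(String[] args) {
        Animal animal = new Cat();
        // The pattern binds the cast result to 'cat' in one step,
        // so no separate cast operator is needed (Java 16+)
        if (animal instanceof Cat cat) {
            assert cat.meow().equals("meow");
        }
    }
}
```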

4.2. ClassCastException

If we hadn’t checked the type with the instanceof operator, the compiler wouldn’t have complained. But at runtime, there would be an exception.

To demonstrate this let’s remove the instanceof operator from the above code:

public void uncheckedFeed(List<Animal> animals) {
    animals.forEach(animal -> {
        animal.eat();
        ((Cat) animal).meow();
    });
}

This code compiles without issues. But if we try to run it we’ll see an exception:

java.lang.ClassCastException: com.baeldung.casting.Dog cannot be cast to com.baeldung.casting.Cat

This means that we are trying to convert an object which is an instance of Dog into a Cat instance.

A ClassCastException is always thrown at runtime if the type we downcast to doesn’t match the type of the real object.

Note that if we try to downcast to an unrelated type, the compiler won’t allow it:

Animal animal = new Animal();
String s = (String) animal;

The compiler says “Cannot cast from Animal to String”.

For the code to compile, both types should be in the same inheritance tree.

Let’s sum up:

  • Downcasting is necessary to gain access to members specific to the subclass
  • Downcasting is done using the cast operator
  • To downcast an object safely, we need the instanceof operator
  • If the real object doesn’t match the type we downcast to, then a ClassCastException is thrown at runtime

5. Conclusion

In this foundational tutorial, we’ve explored what upcasting and downcasting are, how to use them, and how these concepts can help us take advantage of polymorphism.

As always, the code for this article is available over on GitHub.

Code Analysis with SonarQube


1. Overview

In this article, we’re going to be looking at static source code analysis with SonarQube – which is an open-source platform for ensuring code quality.

Let’s start with a core question – why analyze source code in the first place? Very simply put, to ensure quality, reliability, and maintainability over the life-span of the project; a poorly written codebase is always more expensive to maintain.

Alright, now let’s get started by downloading the latest LTS version of SonarQube from the download page and setting up our local server as outlined in this quick start guide.

2. Analyzing Source Code

Now that we’re logged in, we’re required to create a token by specifying a name – which can be our username or any other name of choice – and clicking the Generate button.

We’ll use the token later at the point of analyzing our project(s). We also need to select the primary language (Java) and the build technology of the project (Maven).

Let’s define the plugin in the pom.xml:

<build>
    <pluginManagement>
        <plugins>
            <plugin>
                <groupId>org.sonarsource.scanner.maven</groupId>
                <artifactId>sonar-maven-plugin</artifactId>
                <version>3.4.0.905</version>
            </plugin>
        </plugins>
    </pluginManagement>
</build>

The latest version of the plugin is available here. Now, we need to execute this command from the root of our project directory to scan it:

mvn sonar:sonar -Dsonar.host.url=http://localhost:9000 
  -Dsonar.login=the-generated-token

We need to replace the-generated-token with the token from above.

The project that we used in this article is available here.

We specified the host URL of the SonarQube server and the login (generated token) as parameters for the Maven plugin.

After executing the command, the results will be available on the Projects dashboard – at http://localhost:9000.

There are other parameters that we can pass to the Maven plugin or even set from the web interface; sonar.host.url, sonar.projectKey, and sonar.sources are mandatory while others are optional.

Other analysis-parameters and their default values are here. Also, note that each language-plugin has rules for analyzing compatible source code.

3. Analysis Result

Now that we’ve analyzed our first project, we can go to the web interface at http://localhost:9000 and refresh the page.

There we’ll see the report summary:

Discovered issues can be a Bug, Vulnerability, or Code Smell, reported alongside Coverage and Duplication metrics. Each category has a corresponding number of issues or a percentage value.

Moreover, issues can have one of five severity levels: blocker, critical, major, minor, and info. Just in front of the project name is an icon that displays the Quality Gate status – passed (green) or failed (red).

Clicking on the project name will take us to a dedicated dashboard where we can explore issues particular to the project in greater detail.

We can see the project code, activity and perform administration tasks from the project dashboard – each available on a separate tab.

Though there is a global Issues tab, the Issues tab on the project dashboard displays only the issues specific to the project concerned:

The Issues tab always displays the category, severity level, tag(s), and the calculated effort (in time) it will take to rectify an issue.

From the issues tab, it’s possible to assign an issue to another user, comment on it, and change its severity level. Clicking on the issue itself will show more detail about the issue.

The issue tab comes with sophisticated filters to the left. These are good for pinpointing issues. So how can one know if the codebase is healthy enough for deployment into production? That’s what Quality Gate is for.

4. SonarQube Quality Gate

In this section, we’re going to look at a key feature of SonarQube – Quality Gate. Then we’ll see an example of how to set up a custom one.

4.1. What is a Quality Gate?

A Quality Gate is a set of conditions the project must meet before it can qualify for production release. It answers one question: can I push my code to production in its current state or not?

Ensuring the quality of “new” code while fixing existing issues is one good way to maintain a good codebase over time. The Quality Gate facilitates setting up rules for validating every new piece of code added to the codebase on subsequent analysis.

The conditions set in the Quality Gate still affect unmodified code segments. If we can prevent new issues arising, over time, we’ll eliminate all issues.

This approach is comparable to fixing a water leak at the source. It brings us to a particular term – the Leak Period, which is the period between two analyses/versions of the project.

If we rerun the analysis, on the same project, the overview tab of the project dashboard will show results for the leak period:

From the web interface, the Quality Gates tab is where we can access all the defined quality gates. By default, the SonarQube way gate comes preinstalled with the server.

The default configuration of SonarQube way flags the code as failed if:

  • the coverage on new code is less than 80%
  • the percentage of duplicated lines on new code is greater than 3%
  • the maintainability, reliability, or security rating is worse than A

With this understanding, we can create a custom Quality Gate.

4.2. Adding Custom Quality Gate

First, we need to click on the Quality Gates tab and then click on the Create button which is on the left of the page. We’ll need to give it a name – baeldung.

Now we can set the conditions we want:

From the Add Condition drop-down, let’s choose Blocker Issues. It’ll immediately show up on the list of conditions.

We’ll specify is greater than as the Operator, set zero (0) for the Error column, and check the Over Leak Period column:

Then we’ll click on the Add button to effect the changes. Let’s add another condition following the same procedure as above.

We’ll select Issues from the Add Condition drop-down and check the Over Leak Period column.

The value of the Operator column will be set to is less than, and we’ll add one (1) as the value for the Error column. This means that if the number of issues in the new code added is less than 1, the Quality Gate is marked as failed.

This doesn’t make technical sense, but let’s use it for learning’s sake. Don’t forget to click the Add button to save the rule.

One final step: we need to attach a project to our custom Quality Gate. We can do so by scrolling down the page to the Projects section.

There we need to click on All and then mark our project of choice. We can also set it as the default Quality Gate from the top-right corner of the page.

We’ll scan the project source code again, as we did before, with the Maven command. When that’s done, we’ll go to the Projects tab and refresh.

This time, the project won’t meet the Quality Gate criteria and will fail. Why? Because in one of our rules we specified that it should fail if there are no new issues.

Let’s go back to the Quality Gates tab and change the condition for Issues to is greater than. We need to click the Update button to apply this change.

A new scan of the source code will pass this time around.

5. Integrating SonarQube into a CI

Making SonarQube part of a Continuous Integration process is possible. This will automatically fail the build if the code analysis doesn’t satisfy the Quality Gate condition.

For us to achieve this, we’re going to be using SonarCloud, which is the cloud-hosted version of the SonarQube server. We can create an account here.

From My Account > Organizations, we can see the organization key, and it will usually be in the form xxxx-github or xxxx-bitbucket.

Also from My Account > Security, we can generate a token as we did in the local instance of the server. Take note of both the token and the organization key for later use.

In this article, we’ll be using Travis CI, and we’ll create an account here with an existing GitHub profile. It will load all our projects, and we can flip the switch on any of them to activate Travis CI.

We need to add the token we generated on SonarCloud to Travis environment variables. We can do this by clicking on the project we’ve activated for CI.

Then, we’ll click “More Options” > “Settings” and then scroll down to “Environment Variables”:

We’ll add a new entry with the name SONAR_TOKEN and use the token generated, on SonarCloud, as the value. Travis CI will encrypt and hide it from public view:

Finally, we need to add a .travis.yml file to the root of our project with the following content:

language: java
sudo: false
install: true
addons:
  sonarcloud:
    organization: "your_organization_key"
    token:
      secure: "$SONAR_TOKEN"
jdk:
  - oraclejdk8
script:
  - mvn clean org.jacoco:jacoco-maven-plugin:prepare-agent package sonar:sonar
cache:
  directories:
    - '$HOME/.m2/repository'
    - '$HOME/.sonar/cache'

Remember to substitute your_organization_key with the organization key described above. Committing the new code and pushing it to the GitHub repo will trigger the Travis CI build, which in turn activates the Sonar scan as well.

6. Conclusion

In this tutorial, we’ve looked at how to set up a SonarQube server locally and how to use Quality Gate to define the criteria for the fitness of a project for production release.

The SonarQube documentation has more information about other aspects of the platform.

A Practical Guide to DecimalFormat


1. Overview

In this article, we’re going to explore the DecimalFormat class along with its practical usages.

This is a subclass of NumberFormat, which allows formatting a decimal number’s String representation using predefined patterns.

It can also be used in reverse, to parse Strings into numbers.
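A sketch of that inverse use, with the locale pinned to English so the separators match (the ParseDemo class and its parseAmount helper are ours, for illustration):

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.text.ParseException;
import java.util.Locale;

public class ParseDemo {

    // Helper that wraps the checked ParseException for brevity
    static double parseAmount(String input) {
        DecimalFormat df = new DecimalFormat("#,###.##",
          DecimalFormatSymbols.getInstance(Locale.ENGLISH));
        try {
            // parse() returns a Number; with a fractional input it's a Double
            return df.parse(input).doubleValue();
        } catch (ParseException e) {
            throw new IllegalArgumentException(e);
        }
    }

    public static void main(String[] args) {
        assert parseAmount("1,234,567.89") == 1234567.89;
    }
}
```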

2. How Does it Work?

In order to format a number, we have to define a pattern, which is a sequence of special characters potentially mixed with text.

There are 11 Special Pattern Characters, but the most important are:

  • 0 – prints a digit if provided, 0 otherwise
  • # – prints a digit if provided, nothing otherwise
  • . – indicates where to put the decimal separator
  • , – indicates where to put the grouping separator

When the pattern is applied to a number, its formatting rules are executed, and the result is printed according to the DecimalFormatSymbols of our JVM’s Locale, unless a specific Locale is specified.

The following examples’ outputs are from a JVM running on an English Locale.
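The difference between 0 and # can be sketched as follows (the PatternCharsDemo class and its format helper are ours; the symbols are pinned to English so the output is locale-independent):

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

public class PatternCharsDemo {

    private static final DecimalFormatSymbols ENGLISH =
      DecimalFormatSymbols.getInstance(Locale.ENGLISH);

    static String format(String pattern, double value) {
        return new DecimalFormat(pattern, ENGLISH).format(value);
    }

    public static void main(String[] args) {
        // '0' pads missing positions with zeros...
        assert format("00.00", 1.5).equals("01.50");
        // ...while '#' simply omits them
        assert format("##.##", 1.5).equals("1.5");
    }
}
```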

3. Basic Formatting

Let’s now see which outputs are produced when formatting the same number with the following patterns.

3.1. Simple Decimals

double d = 1234567.89;    
assertThat(
  new DecimalFormat("#.##").format(d)).isEqualTo("1234567.89");
assertThat(
  new DecimalFormat("0.00").format(d)).isEqualTo("1234567.89");

As we can see, the integer part is never discarded, even when the pattern is smaller than the number.

assertThat(new DecimalFormat("#########.###").format(d))
  .isEqualTo("1234567.89");
assertThat(new DecimalFormat("000000000.000").format(d))
  .isEqualTo("001234567.890");

If the pattern is instead bigger than the number, zeros get added while hashes get dropped, in both the integer and the decimal parts.

3.2. Rounding

If the decimal part of the pattern can’t contain the whole precision of the input number, it gets rounded.

Here, the .89 part has been rounded to .90, then the 0 has been dropped:

assertThat(new DecimalFormat("#.#").format(d))
  .isEqualTo("1234567.9");

Here, the .89 part has been rounded up to the next integer, so the decimal part is dropped and the trailing 7 becomes an 8:

assertThat(new DecimalFormat("#").format(d))
  .isEqualTo("1234568");

The default rounding mode is HALF_EVEN, but it can be customized through the setRoundingMode method.
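The difference between the two modes shows up on exact ties (the RoundingModeDemo class and its format helper are ours, for illustration; 2.25 is exactly representable as a double, so it is a true tie):

```java
import java.math.RoundingMode;
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

public class RoundingModeDemo {

    static String format(double value, RoundingMode mode) {
        DecimalFormat df = new DecimalFormat("#.#",
          DecimalFormatSymbols.getInstance(Locale.ENGLISH));
        df.setRoundingMode(mode);
        return df.format(value);
    }

    public static void main(String[] args) {
        // HALF_EVEN (the default): a tie rounds toward the even neighbor
        assert format(2.25, RoundingMode.HALF_EVEN).equals("2.2");
        // HALF_UP: a tie rounds away from zero
        assert format(2.25, RoundingMode.HALF_UP).equals("2.3");
    }
}
```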

3.3. Grouping

The grouping separator is used to specify a sub-pattern which gets repeated automatically:

assertThat(new DecimalFormat("#,###.#").format(d))
  .isEqualTo("1,234,567.9");
assertThat(new DecimalFormat("#,###").format(d))
  .isEqualTo("1,234,568");

3.4. Multiple Grouping Patterns

Some countries have a variable number of grouping patterns in their numbering systems.

The Indian numbering system uses the format #,##,###.##, in which only the group closest to the decimal separator holds three digits, while all the others hold two digits.

This isn’t possible to achieve with the DecimalFormat class, which keeps only the last grouping pattern encountered from left to right and applies it to the whole number, ignoring previous grouping sizes.

An attempt to use the pattern #,##,##,##,### would be regrouped to #######,### and end up redistributed as #,###,###,###.

To achieve multiple grouping pattern matching, it’s necessary to write our own String manipulation code, or alternatively to try ICU4J’s DecimalFormat, which supports it.

3.5. Mixing String Literals

It’s possible to mix String literals within the pattern:

assertThat(new DecimalFormat("The # number")
  .format(d))
  .isEqualTo("The 1234568 number");

It’s also possible to use special characters as String literals, through escaping:

assertThat(new DecimalFormat("The '#' # number")
  .format(d))
  .isEqualTo("The # 1234568 number");

4. Localized Formatting

Many countries don’t use English symbols and use the comma as decimal separator and the dot as grouping separator.

Running the #,###.## pattern on a JVM with an Italian Locale, for example, would output 1.234.567,89.

While this could be a useful i18n feature in some cases, in others we might want to enforce a specific, JVM-independent format.

Here’s how we can do that:

assertThat(new DecimalFormat("#,###.##", 
  new DecimalFormatSymbols(Locale.ENGLISH)).format(d))
  .isEqualTo("1,234,567.89");
assertThat(new DecimalFormat("#,###.##", 
  new DecimalFormatSymbols(Locale.ITALIAN)).format(d))
  .isEqualTo("1.234.567,89");

If the Locale we’re interested in is not among the ones covered by the DecimalFormatSymbols constructor, we can specify it with the getInstance method:

Locale customLocale = new Locale("it", "IT");
assertThat(new DecimalFormat(
  "#,###.##", 
   DecimalFormatSymbols.getInstance(customLocale)).format(d))
  .isEqualTo("1.234.567,89");

5. Scientific Notations

Scientific notation represents a number as the product of a mantissa and an exponent of ten. The number 1234567.89 can also be represented as 12.3456789 * 10^5 (the dot is shifted by 5 positions).

5.1. E-Notation

It’s possible to express a number in Scientific Notation using the E pattern character representing the exponent of ten:

assertThat(new DecimalFormat("00.#######E0").format(d))
  .isEqualTo("12.3456789E5");
assertThat(new DecimalFormat("000.000000E0").format(d))
  .isEqualTo("123.456789E4");

We should keep in mind that the number of characters after the exponent is relevant, so if we need to express 10^12, we need E00 and not E0.

5.2. Engineering Notation

It’s common to use a particular form of scientific notation called engineering notation, which adjusts results so that they’re expressed with an exponent that’s a multiple of three, for example when using measuring units like Kilo (10^3), Mega (10^6), Giga (10^9), and so on.

We can enforce this kind of notation by adjusting the maximum number of integer digits (the characters expressed with the # and on the left of the decimal separator) so that it’s higher than the minimum number (the one expressed with the 0) and higher than 1.

This forces the exponent to be a multiple of the maximum number, so for this use-case we want the maximum number to be three:

assertThat(new DecimalFormat("##0.######E0")
  .format(d)).isEqualTo("1.23456789E6");		
assertThat(new DecimalFormat("###.000000E0")
  .format(d)).isEqualTo("1.23456789E6");

6. Parsing

Let’s see how it’s possible to parse a String into a Number with the parse method:

assertThat(new DecimalFormat("", new DecimalFormatSymbols(Locale.ENGLISH))
  .parse("1234567.89"))
  .isEqualTo(1234567.89);
assertThat(new DecimalFormat("", new DecimalFormatSymbols(Locale.ITALIAN))
  .parse("1.234.567,89"))
  .isEqualTo(1234567.89);

The concrete type of the returned Number depends on the parsed String (for example, a Long when there’s no decimal part and a Double when there is one), so we can use methods like .doubleValue() or .longValue() of the returned Number object to enforce a specific primitive in output.
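Here's a small sketch (the ParseTypeDemo class and its helper are our own) showing how the concrete type follows from the input String:

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.text.ParseException;
import java.util.Locale;

public class ParseTypeDemo {

    // parse with English symbols; wrap the checked ParseException for brevity
    public static Number parse(String s) {
        try {
            return new DecimalFormat("",
              new DecimalFormatSymbols(Locale.ENGLISH)).parse(s);
        } catch (ParseException e) {
            throw new IllegalArgumentException(e);
        }
    }

    public static void main(String[] args) {
        // no decimal separator: parse returns a Long
        System.out.println(parse("1234567").getClass().getSimpleName());    // Long
        // with a decimal separator: parse returns a Double
        System.out.println(parse("1234567.89").getClass().getSimpleName()); // Double
        // either way, longValue()/doubleValue() enforce the primitive we want
        System.out.println(parse("1234567.89").longValue());                // 1234567
    }
}
```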

We can also obtain a BigDecimal as follows:

NumberFormat nf = new DecimalFormat(
  "", 
  new DecimalFormatSymbols(Locale.ENGLISH));
((DecimalFormat) nf).setParseBigDecimal(true);
 
assertThat(nf.parse("1234567.89"))
  .isEqualTo(BigDecimal.valueOf(1234567.89));

7. Thread-Safety

DecimalFormat isn’t thread-safe, thus we should pay special attention when sharing the same instance between threads.
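One common workaround, sketched here with names of our own, is to give each thread its own instance via ThreadLocal:

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

public class ThreadSafeFormat {

    // one DecimalFormat per thread, since the class isn't thread-safe
    private static final ThreadLocal<DecimalFormat> FORMAT =
      ThreadLocal.withInitial(() -> new DecimalFormat("#,###.##",
        new DecimalFormatSymbols(Locale.ENGLISH)));

    public static String format(double d) {
        return FORMAT.get().format(d);
    }

    public static void main(String[] args) {
        System.out.println(format(1234567.89)); // 1,234,567.89
    }
}
```

Alternatives include creating a fresh instance per call or synchronizing access to a shared one.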

8. Conclusion

We’ve seen the major usages of the DecimalFormat class, along with its strengths and weaknesses.

As always, the full source code is available over on GitHub.

Intro to Google Cloud Storage with Java


1. Overview

Google Cloud Storage offers online storage tailored to an individual application’s needs based on location, the frequency of access, and cost. Unlike Amazon Web Services, Google Cloud Storage uses a single API for high, medium, and low-frequency access.

Like most cloud platforms, Google offers a free tier of access; the pricing details are here.

In this tutorial, we’ll connect to storage, create a bucket, write, read, and update data. While using the API to read and write data, we’ll also use the gsutil cloud storage utility.

2. Google Cloud Storage Setup

2.1. Maven Dependency

We need to add a single dependency to our pom.xml:

<dependency>
    <groupId>com.google.cloud</groupId>
    <artifactId>google-cloud-storage</artifactId>
    <version>1.17.0</version>
</dependency>

Maven Central has the latest version of the library.

2.2. Create Authentication Key

Before we can connect to Google Cloud, we need to configure authentication. Google Cloud Platform (GCP) applications load a private key and configuration information from a JSON configuration file. We generate this file via the GCP console. Access to the console requires a valid Google Cloud Platform Account.

We create our configuration by:

  1. Going to the Google Cloud Platform Console
  2. If we haven’t yet defined a GCP project, we click the create button and enter a project name, such as “baeldung-cloud-tutorial”
  3. Select “new service account” from the drop-down list
  4. Add a name such as “baeldung-cloud-storage” into the account name field.
  5. Under “role” select Project, and then Owner in the submenu.
  6. Select create, and the console downloads a private key file.

The role in step #5 authorizes the account to access project resources. For the sake of simplicity, we gave this account complete access to all project resources.

For a production environment, we would define a role that corresponds to the access the application needs.

2.3. Install the Authentication Key

Next, we copy the file downloaded from GCP console to a convenient location and point the GOOGLE_APPLICATION_CREDENTIALS environment variable at it. This is the easiest way to load the credentials, although we’ll look at another possibility below.

For Linux or Mac:

export GOOGLE_APPLICATION_CREDENTIALS="/path/to/file"

For Windows:

set GOOGLE_APPLICATION_CREDENTIALS="C:\path\to\file"

2.4. Install Cloud Tools

Google provides several tools for managing their cloud platform. We’re going to use gsutil during this tutorial to read and write data alongside the API.

We can do this in two easy steps:

  1. Install the Cloud SDK from the instructions here for our platform.
  2. Follow the Quickstart for our platform here. In step 4 of Initialize the SDK, we select the project name from step 2 of section 2.2 above (“baeldung-cloud-tutorial” or whichever name we used).

gsutil is now installed and configured to read data from our cloud project.

3. Connecting to Storage and Creating a Bucket

3.1. Connect to Storage

Before we can use Google Cloud storage, we have to create a service object. If we’ve already set up the GOOGLE_APPLICATION_CREDENTIALS environment variable, we can use the default instance:

Storage storage = StorageOptions.getDefaultInstance().getService();

If we don’t want to use the environment variable, we have to create a Credentials instance and pass it to Storage with the project name:

Credentials credentials = GoogleCredentials
  .fromStream(new FileInputStream("path/to/file"));
Storage storage = StorageOptions.newBuilder().setCredentials(credentials)
  .setProjectId("baeldung-cloud-tutorial").build().getService();

3.2. Creating a Bucket

Now that we’re connected and authenticated, we can create a bucket. Buckets are containers that hold objects. They can be used to organize and control data access.

There is no limit to the number of objects in a bucket. GCP limits the number of operations on buckets and encourages application designers to emphasize operations on objects rather than on buckets.

Creating a bucket requires a BucketInfo:

Bucket bucket = storage.create(BucketInfo.of("baeldung-bucket"));

For this simple example, we supply a bucket name and accept the default properties. Bucket names must be globally unique. If we choose a name that is already used, create() will fail.

3.3. Examining a Bucket with gsutil

Since we have a bucket now, we can examine it with gsutil.

Let’s open a command prompt and take a look:

$ gsutil ls -L -b gs://baeldung-1-bucket/
gs://baeldung-1-bucket/ :
	Storage class:			STANDARD
	Location constraint:		US
	Versioning enabled:		None
	Logging configuration:		None
	Website configuration:		None
	CORS configuration: 		None
	Lifecycle configuration:	None
	Requester Pays enabled:		None
	Labels:				None
	Time created:			Sun, 11 Feb 2018 21:09:15 GMT
	Time updated:			Sun, 11 Feb 2018 21:09:15 GMT
	Metageneration:			1
	ACL:
	  [
	    {
	      "entity": "project-owners-385323156907",
	      "projectTeam": {
	        "projectNumber": "385323156907",
	        "team": "owners"
	      },
	      "role": "OWNER"
	    },
	    ...
	  ]
	Default ACL:
	  [
	    {
	      "entity": "project-owners-385323156907",
	      "projectTeam": {
	        "projectNumber": "385323156907",
	        "team": "owners"
	      },
	      "role": "OWNER"
	    },
            ...
	  ]

gsutil looks a lot like shell commands, and anyone familiar with the Unix command line should feel very comfortable here. Notice we passed in the path to our bucket as a URL: gs://baeldung-1-bucket/, along with a few other options.

The ls option produces a listing of objects or buckets, and the -L option indicates that we want a detailed listing – so we received details about the bucket, including the creation times and access controls.

Let’s add some data to our bucket!

4. Reading, Writing and Updating Data

In Google Cloud Storage, objects are stored in Blobs. Blob names can contain any Unicode character, limited to 1024 characters.

4.1. Writing Data

Let’s save a String to our bucket:

String value = "Hello, World!";
byte[] bytes = value.getBytes(UTF_8);
Blob blob = bucket.create("my-first-blob", bytes);

As you can see, objects are simply arrays of bytes in the bucket, so we store a String by simply operating with its raw bytes.
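The byte round trip itself is plain Java; here is a small sketch (the class and method names are our own, independent of the storage API):

```java
import java.nio.charset.StandardCharsets;

public class BlobBytesDemo {

    // a blob payload is just bytes, so a String becomes its UTF-8 encoding...
    public static byte[] toBytes(String value) {
        return value.getBytes(StandardCharsets.UTF_8);
    }

    // ...and can be reconstructed from those bytes on the way back out
    public static String fromBytes(byte[] bytes) {
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] bytes = toBytes("Hello, World!");
        System.out.println(fromBytes(bytes)); // Hello, World!
    }
}
```

Specifying the charset explicitly on both sides avoids surprises from platform-default encodings.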

4.2. Reading Data with gsutil

Now that we have a bucket with an object in it, let’s take a look at gsutil.

Let’s start by listing the contents of our bucket:

$ gsutil ls gs://baeldung-1-bucket/
gs://baeldung-1-bucket/my-first-blob

We passed gsutil the ls option again but omitted -b and -L, so we asked for a brief listing of objects. We receive a list of URIs, one for each object – just one in our case.

Let’s examine the object:

$ gsutil cat gs://baeldung-1-bucket/my-first-blob
Hello, World!

Cat concatenates the contents of the object to standard output. We see the String we wrote to the Blob.

4.3. Reading Data

Blobs are assigned a BlobId upon creation.

The easiest way to retrieve a Blob is with the BlobId:

Blob blob = storage.get(blobId);
String value = new String(blob.getContent());

We pass the id to Storage and get the Blob in return, and getContent() returns the bytes.

If we don’t have the BlobId, we can search the Bucket by name:

Page<Blob> blobs = bucket.list();
for (Blob blob: blobs.getValues()) {
    if (name.equals(blob.getName())) {
        return new String(blob.getContent());
    }
}

4.4. Updating Data

We can update a Blob by retrieving it and then accessing its WritableByteChannel:

String newString = "Bye now!";
Blob blob = storage.get(blobId);
WritableByteChannel channel = blob.writer();
channel.write(ByteBuffer.wrap(newString.getBytes(UTF_8)));
channel.close();

Let’s examine the updated object:

$ gsutil cat gs://baeldung-1-bucket/my-first-blob
Bye now!

4.5. Save an Object to File, Then Delete

Let’s save the updated object to a file:

$ gsutil copy gs://baeldung-1-bucket/my-first-blob my-first-blob
Copying gs://baeldung-1-bucket/my-first-blob...
/ [1 files][    9.0 B/    9.0 B]
Operation completed over 1 objects/9.0 B.
Grovers-Mill:~ egoebelbecker$ cat my-first-blob
Bye now!

As expected, the copy option copies the object to the filename specified on the command line.

gsutil can copy any object from Google Cloud Storage to the local file system, assuming there is enough space to store it. 

We’ll finish by cleaning up:

$ gsutil rm gs://baeldung-1-bucket/my-first-blob
Removing gs://baeldung-1-bucket/my-first-blob...
/ [1 objects]
Operation completed over 1 objects.
$ gsutil ls gs://baeldung-1-bucket/
$

rm (del works too) deletes the specified object.

5. Conclusion

In this brief tutorial, we created credentials for Google Cloud Storage and connected to the infrastructure. We created a bucket, wrote data, and then read and modified it. As we’re working with the API, we also used gsutil to examine cloud storage as we created and read data.

We also discussed how to use buckets and write and modify data efficiently.

Code samples, as always, can be found over on GitHub.

An Intro to Spring Cloud Task


1. Overview

The goal of Spring Cloud Task is to provide the functionality of creating short-lived microservices for Spring Boot applications.

In Spring Cloud Task, we’ve got the flexibility of running any task dynamically, allocating resources on demand and retrieving the results after the Task completion.

Tasks are a new primitive within Spring Cloud Data Flow, allowing users to execute virtually any Spring Boot application as a short-lived task.

2. Developing a Simple Task Application

2.1. Adding Relevant Dependencies

To start, we can add dependency management section with spring-cloud-task-dependencies:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-task-dependencies</artifactId>
            <version>1.2.2.RELEASE</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

This dependency management manages versions of dependencies through the import scope.

We need to add the following dependencies:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-task</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-task-core</artifactId>
</dependency>

This is the link to the Maven Central of spring-cloud-task-core.

Now, to start our Spring Boot application, we need spring-boot-starter with the relevant parent.

We’re going to use Spring Data JPA as an ORM tool, so we need to add the dependency for that as well:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
    <version>1.5.10</version>
</dependency>

The details of bootstrapping a simple Spring Boot application with Spring Data JPA are available here.

We can check the newest version of the spring-boot-starter-parent on Maven Central.

2.2. The @EnableTask Annotation

To bootstrap the functionality of Spring Cloud Task, we need to add @EnableTask annotation:

@SpringBootApplication
@EnableTask
public class TaskDemo {
    // ...
}

The annotation brings the SimpleTaskConfiguration class into the picture, which in turn registers the TaskRepository and its infrastructure. By default, an in-memory map is used to store the status of the TaskRepository.

The primary information of the TaskRepository is modeled in the TaskExecution class. The notable fields of this class are taskName, startTime, endTime, and exitMessage. The exitMessage stores the available information at exit time.

If an exit is caused by a failure in any event of the application, the complete exception stack trace will be stored here.

Spring Boot provides an interface ExitCodeExceptionMapper which maps uncaught exceptions to exit codes allowing scrutinized debug. The Cloud Task stores the information in the data source for future analysis.

2.3. Configuring a DataSource For TaskRepository

The in-memory map storing the TaskRepository will vanish once the task ends, and we’ll lose the data related to Task events. To store it in permanent storage, we’re going to use MySQL as a data source with Spring Data JPA.

The data source is configured in the application.yml file. To configure Spring Cloud Task to use the provided data source as the storage for the TaskRepository, we need to create a class that extends DefaultTaskConfigurer.

Now, we can send the configured DataSource as a constructor argument to the superclass’ constructor:

public class HelloWorldTaskConfigurer extends DefaultTaskConfigurer {
    public HelloWorldTaskConfigurer(DataSource dataSource) {
        super(dataSource);
    }
}

To put the above configuration into action, we annotate a DataSource instance with the @Autowired annotation and inject it as the constructor argument of the HelloWorldTaskConfigurer bean defined above:

@Autowired
private DataSource dataSource;

@Bean
public HelloWorldTaskConfigurer getTaskConfigurer() {
    return new HelloWorldTaskConfigurer(dataSource);
}

This completes the configuration to store the TaskRepository in a MySQL database.

2.4. Implementation

In Spring Boot, we can execute any Task just before the application finishes its startup. We can use the ApplicationRunner or CommandLineRunner interfaces to create a simple Task.

We need to implement the run method of these interfaces and declare the implementing class as a bean:

@Component
public static class HelloWorldApplicationRunner 
  implements ApplicationRunner {
 
    @Override
    public void run(ApplicationArguments arg0) throws Exception {
        System.out.println("Hello World from Spring Cloud Task!");
    }
}

Now, if we run our application, we should see our task produce the necessary output, with the required tables created in our MySQL database logging the event data of the Task.

3. Life-cycle of a Spring Cloud Task

In the beginning, we create an entry in the TaskRepository. This is an indication that all beans are ready to be used in the application and that the run method of the Runner interface is ready to be executed.

Upon completion of the execution of the run method or in any failure of ApplicationContext event, TaskRepository will be updated with another entry.

During the task life-cycle, we can register listeners available from the TaskExecutionListener interface. We need a class implementing the interface with three methods – onTaskStartup, onTaskEnd and onTaskFailed – triggered in the respective events of the Task.

We need to declare the bean of the implementing class in our TaskDemo class:

@Bean
public TaskListener taskListener() {
    return new TaskListener();
}

4. Integration with Spring Batch

We can execute Spring Batch Job as a Task and log events of the Job execution using Spring Cloud Task. To enable this feature we need to add Batch dependencies pertaining to Boot and Cloud:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-batch</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-task-batch</artifactId>
</dependency>

Here is the link to the Maven Central of spring-cloud-task-batch.

To configure a job as a Task we need to have the Job bean registered in the JobConfiguration class:

@Bean
public Job job2() {
    return jobBuilderFactory.get("job2")
      .start(stepBuilderFactory.get("job2step1")
      .tasklet(new Tasklet(){
          @Override
          public RepeatStatus execute(
            StepContribution contribution,
            ChunkContext chunkContext) throws Exception {
            System.out.println("This job is from Baeldung");
                return RepeatStatus.FINISHED;
          }
    }).build()).build();
}

We need to decorate the TaskDemo class with @EnableBatchProcessing annotation:

//..Other Annotation..
@EnableBatchProcessing
public class TaskDemo {
    // ...
}

The @EnableBatchProcessing annotation enables Spring Batch features with a base configuration required to set up batch jobs.

Now, if we run the application, the @EnableBatchProcessing annotation will trigger the Spring Batch Job execution, and Spring Cloud Task will log the events of all the batch job executions, along with the other executed Tasks, in the springcloud database.

5. Launching a Task from Stream

We can trigger Tasks from Spring Cloud Stream. To serve this purpose, we have the @EnableTaskLauncher annotation. Once we add the annotation to a Spring Boot app, a TaskSink will be available:

@SpringBootApplication
@EnableTaskLauncher
public class StreamTaskSinkApplication {
    public static void main(String[] args) {
        SpringApplication.run(StreamTaskSinkApplication.class, args);
    }
}

The TaskSink receives a message from a stream that contains a GenericMessage with a TaskLaunchRequest as a payload. Then it triggers a Task based on the coordinates provided in the Task launch request.

To have the TaskSink functional, we require a bean configured that implements the TaskLauncher interface. For testing purposes, we’re mocking the implementation here:

@Bean
public TaskLauncher taskLauncher() {
    return mock(TaskLauncher.class);
}

We need to note here that the TaskLauncher interface is only available after adding the spring-cloud-deployer-local dependency:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-deployer-local</artifactId>
    <version>1.3.1.RELEASE</version>
</dependency>

We can test whether the Task is launched by invoking the input of the Sink interface:

public class StreamTaskSinkApplicationTests {
   
    @Autowired
    private Sink sink; 
    
    //
}

Now, we create an instance of TaskLaunchRequest and send it as the payload of a GenericMessage<TaskLaunchRequest> object. Then we can invoke the input channel of the Sink, keeping the GenericMessage object in the channel.

6. Conclusion

In this tutorial, we’ve explored how Spring Cloud Task performs and how to configure it to log its events in a database. We also observed how Spring Batch job is defined and stored in the TaskRepository. Lastly, we explained how we can trigger Task from within Spring Cloud Stream.

As always, the code is available over on GitHub.

How to Detect the OS Using Java


1. Introduction

There are a couple of ways to figure out the OS on which our code is running.

In this brief article, we’re going to focus on how to do OS detection in Java.

2. Implementation

One way is to make use of System.getProperty("os.name") to obtain the name of the operating system.

The second way is to make use of SystemUtils from the Apache Commons Lang API.

Let’s see both of them in action.

2.1. Using System Properties

We can make use of the System class to detect the OS.

Let’s check it out:

public String getOperatingSystem() {
    String os = System.getProperty("os.name");
    // System.out.println("Using System Property: " + os);
    return os;
}
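Building on this, a minimal sketch (the OsCheck class and its category names are our own) can normalize the property value into coarse categories:

```java
public class OsCheck {

    // normalize an os.name value into a coarse category; the labels are ours
    public static String categorize(String osName) {
        String os = osName.toLowerCase();
        if (os.contains("win")) {
            return "Windows";
        }
        if (os.contains("mac")) {
            return "macOS";
        }
        if (os.contains("nux") || os.contains("nix") || os.contains("aix")) {
            return "Linux/Unix";
        }
        return "Other";
    }

    public static void main(String[] args) {
        System.out.println(categorize(System.getProperty("os.name")));
    }
}
```

Substring matching like this is a common convention, since os.name values vary across JVM vendors and OS versions.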

2.2. SystemUtils – Apache Commons Lang

SystemUtils from Apache Commons Lang is another popular option to try for. It’s a nice API that gracefully takes care of such details.

Let’s find out the OS using SystemUtils:

public String getOperatingSystemSystemUtils() {
    String os = SystemUtils.OS_NAME;
    // System.out.println("Using SystemUtils: " + os);
    return os;
}

3. Result

Executing the code in our environment gives us the same result:

Using SystemUtils: Windows 10
Using System Property: Windows 10

4. Conclusion

In this quick article, we saw how we can detect the OS programmatically from Java.

As always, the code examples for this article are available over on GitHub.

Combining Publishers in Project Reactor


1. Overview

In this article, we’ll take a look at various ways of combining Publishers in Project Reactor.

2. Maven Dependencies

Let’s set up our example with the Project Reactor dependencies:

<dependency>
    <groupId>io.projectreactor</groupId>
    <artifactId>reactor-core</artifactId>
    <version>3.1.4.RELEASE</version>
</dependency>
<dependency>
    <groupId>io.projectreactor</groupId>
    <artifactId>reactor-test</artifactId>
    <version>3.1.4.RELEASE</version>
    <scope>test</scope>
</dependency>

3. Combining Publishers

Given a scenario when one has to work with Flux<T> or Mono<T>, there are different ways to combine streams.

Let’s create a few examples to illustrate the usage of methods of the Flux<T> class such as concat, concatWith, merge, zip and combineLatest.

Our examples will make use of two publishers of type Flux<Integer>, namely evenNumbers, which is a Flux of Integer holding a sequence of even numbers, starting with 1 (the min variable) and limited by 5 (the max variable).

We’ll create oddNumbers, also a Flux of type Integer of odd numbers:

Flux<Integer> evenNumbers = Flux
  .range(min, max)
  .filter(x -> x % 2 == 0); // i.e. 2, 4

Flux<Integer> oddNumbers = Flux
  .range(min, max)
  .filter(x -> x % 2 > 0);  // i.e. 1, 3, 5

3.1. concat()

The concat method executes a concatenation of the inputs, forwarding elements emitted by the sources downstream.

The concatenation is achieved by sequentially subscribing to the first source then waiting for it to complete before subscribing to the next, and so on until the last source completes. Any error interrupts the sequence immediately and is forwarded downstream.

Here is a quick example:

@Test
public void givenFluxes_whenConcatIsInvoked_thenConcat() {
    Flux<Integer> fluxOfIntegers = Flux.concat(
      evenNumbers, 
      oddNumbers);
        
    StepVerifier.create(fluxOfIntegers)
      .expectNext(2)
      .expectNext(4)
      .expectNext(1)
      .expectNext(3)
      .expectNext(5)
      .expectComplete()
      .verify();
}

3.2. concatWith()

Using the concatWith method (an instance method, rather than a static one), we’ll produce a concatenation of two sources of type Flux<T> as a result:

@Test
public void givenFluxes_whenConcatWithIsInvoked_thenConcatWith() {
    Flux<Integer> fluxOfIntegers = evenNumbers.concatWith(oddNumbers);
        
    // same stepVerifier as in the concat example above
}

3.3. combineLatest()

The Flux static method combineLatest will generate data provided by the combination of the most recently published value from each of the Publisher sources.

Here’s an example of the usage of this method with two Publisher sources and a BiFunction as parameters:

@Test
public void givenFluxes_whenCombineLatestIsInvoked_thenCombineLatest() {
    Flux<Integer> fluxOfIntegers = Flux.combineLatest(
      evenNumbers, 
      oddNumbers, 
      (a, b) -> a + b);

    StepVerifier.create(fluxOfIntegers)
      .expectNext(5) // 4 + 1
      .expectNext(7) // 4 + 3
      .expectNext(9) // 4 + 5
      .expectComplete()
      .verify();
}

We can see here that combineLatest applied the function “a + b” using the latest element of evenNumbers (4) with each element of oddNumbers (1, 3, 5), thus generating the sequence 5, 7, 9.

3.4. merge()

The merge function executes a merging of the data from Publisher sequences contained in an array into an interleaved merged sequence:

@Test
public void givenFluxes_whenMergeIsInvoked_thenMerge() {
    Flux<Integer> fluxOfIntegers = Flux.merge(
      evenNumbers, 
      oddNumbers);
        
    StepVerifier.create(fluxOfIntegers)
      .expectNext(2)
      .expectNext(4)
      .expectNext(1)
      .expectNext(3)
      .expectNext(5)
      .expectComplete()
      .verify();
}

An interesting thing to note is that, opposed to concat (lazy subscription), the sources are subscribed eagerly.

Here, we can see a different outcome of the merge function if we insert a delay between the elements of the Publishers:

@Test
public void givenFluxes_whenMergeWithDelayedElementsIsInvoked_thenMergeWithDelayedElements() {
    Flux<Integer> fluxOfIntegers = Flux.merge(
      evenNumbers.delayElements(Duration.ofMillis(500L)), 
      oddNumbers.delayElements(Duration.ofMillis(300L)));
        
    StepVerifier.create(fluxOfIntegers)
      .expectNext(1)
      .expectNext(2)
      .expectNext(3)
      .expectNext(5)
      .expectNext(4)
      .expectComplete()
      .verify();
}

3.5. mergeSequential()

The mergeSequential method merges data from Publisher sequences provided in an array into an ordered merged sequence.

Unlike concat, sources are subscribed to eagerly.

Also, unlike merge, their emitted values are merged into the final sequence in subscription order:

@Test
public void testMergeSequential() {
    Flux<Integer> fluxOfIntegers = Flux.mergeSequential(
      evenNumbers, 
      oddNumbers);
        
    StepVerifier.create(fluxOfIntegers)
      .expectNext(2)
      .expectNext(4)
      .expectNext(1)
      .expectNext(3)
      .expectNext(5)
      .expectComplete()
      .verify();
}

3.6. mergeDelayError()

The mergeDelayError merges data from Publisher sequences contained in an array into an interleaved merged sequence.

Unlike concat, sources are subscribed to eagerly.

This variant of the static merge method will delay any error until after the rest of the merge backlog has been processed.

Here is an example of mergeDelayError:

@Test
public void givenFluxes_whenMergeWithDelayedElementsIsInvoked_thenMergeWithDelayedElements() {
    Flux<Integer> fluxOfIntegers = Flux.mergeDelayError(1, 
      evenNumbers.delayElements(Duration.ofMillis(500L)), 
      oddNumbers.delayElements(Duration.ofMillis(300L)));
        
    StepVerifier.create(fluxOfIntegers)
      .expectNext(1)
      .expectNext(2)
      .expectNext(3)
      .expectNext(5)
      .expectNext(4)
      .expectComplete()
      .verify();
}

3.7. mergeWith()

The mergeWith method (again an instance method, rather than a static one) merges data from this Flux and a Publisher into an interleaved merged sequence.

Again, unlike concat, inner sources are subscribed to eagerly:

@Test
public void givenFluxes_whenMergeWithIsInvoked_thenMergeWith() {
    Flux<Integer> fluxOfIntegers = evenNumbers.mergeWith(oddNumbers);
        
    // same StepVerifier as in "3.4. Merge"
    StepVerifier.create(fluxOfIntegers)
      .expectNext(2)
      .expectNext(4)
      .expectNext(1)
      .expectNext(3)
      .expectNext(5)
      .expectComplete()
      .verify();
}

3.8. zip()

The static method zip agglutinates multiple sources together, i.e., waits for all the sources to emit one element and combines these elements into an output value (constructed by the provided combinator function).

The operator will continue doing so until any of the sources completes:

@Test
public void givenFluxes_whenZipIsInvoked_thenZip() {
    Flux<Integer> fluxOfIntegers = Flux.zip(
      evenNumbers, 
      oddNumbers, 
      (a, b) -> a + b);
        
    StepVerifier.create(fluxOfIntegers)
      .expectNext(3) // 2 + 1
      .expectNext(7) // 4 + 3
      .expectComplete()
      .verify();
}

As there is no element left from evenNumbers to pair up, the element 5 from the oddNumbers Publisher is ignored.

3.9. zipWith()

zipWith does the same thing as zip, but as an instance method working with only two Publishers:

@Test
public void givenFluxes_whenZipWithIsInvoked_thenZipWith() {
    Flux<Integer> fluxOfIntegers = evenNumbers
     .zipWith(oddNumbers, (a, b) -> a * b);
        
    StepVerifier.create(fluxOfIntegers)
      .expectNext(2)  // 2 * 1
      .expectNext(12) // 4 * 3
      .expectComplete()
      .verify();
}

4. Conclusion

In this quick tutorial, we’ve discovered multiple ways of combining Publishers.

As always, the examples are available over on GitHub.


Java Weekly, Issue 218


Here we go…

1. Spring and Java

>> Monitor and troubleshoot Java applications and services with Datadog 

Optimize performance with end-to-end tracing and out-of-the-box support for popular Java frameworks, application servers, and databases. Try it free.

>> Brian Goetz Speaks to InfoQ on Data Classes for Java [infoq.com]

A super interesting dive into data classes – showing what challenges creators of Java need to face when designing the language.

>> How Java 10 will CHANGE the Way You Code [blog.takipi.com]

Local Variable Type Inference is another exciting upcoming feature of Java – let’s hope it won’t be abused 🙂

>> Putting Bean Validation Constraints to Guava’s Multimap [in.relation.to]

We can now apply constraints to the contents of collections. Nice.

>> How to Order Versioned File Names Semantically in Java [blog.jooq.org]

Finally, a proper Comparator implementation for comparing semantically versioned filenames.

>> How To Use Multi-release JARs To Target Multiple Java Versions [blog.codefx.org]

DevOps life made easier – multi-release JARs can contain bytecode for different Java versions and JVMs.

>> Spring Cloud Stream 2.0 – Polled Consumers [spring.io]

Spring Cloud Stream 2.0 applications can control the rate at which messages are consumed.

>> JDK 10 [openjdk.java.net]

Here’s how you can keep track of the JDKs in Java 10.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Tacking Restbucks with Clean Architecture, episode 1 [blog.sourced-bvba.be]

The beginning of an interesting series showcasing Clean Architecture principles by example.

Also worth reading:

3. Musings

>> Breaking and Mending Compatibility [michaelfeathers.silvrback.com]

Sometimes it makes more sense to mess up observable behaviors of your system so that users don’t make false assumptions about the contract.

>> Tech Stack, Framework, Library or API: How Not to Specialize [daedtech.com]

Business clients often don’t care about the tools you use to solve their problems 🙂

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Ted Dies By Software [dilbert.com]

>> Meeting Moth [dilbert.com]

>> Dogbert Consults [dilbert.com]

5. Pick of the Week

>> Rock stars have a boss? [sivers.org]

The Checker Framework – Pluggable Type Systems for Java


1. Overview

From the Java 8 release onwards, it’s possible to compile programs using the so-called Pluggable Type Systems – which can apply stricter checks than the ones applied by the compiler.

We only need to use the annotations provided by the several Pluggable Type Systems available.

In this quick article, we’ll explore the Checker Framework, courtesy of the University of Washington.

2. Maven

To start working with the Checker Framework, we need to first add it into our pom.xml:

<dependency>
    <groupId>org.checkerframework</groupId>
    <artifactId>checker-qual</artifactId>
    <version>2.3.2</version>
</dependency>
<dependency>
    <groupId>org.checkerframework</groupId>
    <artifactId>checker</artifactId>
    <version>2.3.2</version>
</dependency>
<dependency>
    <groupId>org.checkerframework</groupId>
    <artifactId>jdk8</artifactId>
    <version>2.3.2</version>
</dependency>

The latest version of the libraries can be checked on Maven Central.

The first two dependencies contain the code of the Checker Framework, while the last one is a custom version of the Java 8 classes, in which all types have been properly annotated by the developers of the Checker Framework.

We then have to properly tweak the maven-compiler-plugin to use The Checker Framework as a pluggable Type System:

<plugin>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>3.6.1</version>
    <configuration>
        <source>1.8</source>
        <target>1.8</target>
        <compilerArguments>
            <Xmaxerrs>10000</Xmaxerrs>
            <Xmaxwarns>10000</Xmaxwarns>
        </compilerArguments>
        <annotationProcessors>
            <annotationProcessor>
                org.checkerframework.checker.nullness.NullnessChecker
            </annotationProcessor>
            <annotationProcessor>
                org.checkerframework.checker.interning.InterningChecker
            </annotationProcessor>
            <annotationProcessor>
                org.checkerframework.checker.fenum.FenumChecker
            </annotationProcessor>
            <annotationProcessor>
                org.checkerframework.checker.formatter.FormatterChecker
            </annotationProcessor>
        </annotationProcessors>
        <compilerArgs>
            <arg>-AprintErrorStack</arg>
            <arg>-Awarns</arg>
        </compilerArgs>
    </configuration>
</plugin>

The main point here is the content of the <annotationProcessors> tag. Here, we list all the checkers that we want to run against our sources.

3. Avoiding NullPointerExceptions

The first scenario in which the Checker Framework can help us is identifying the pieces of code where a NullPointerException could originate:

private static int countArgs(@NonNull String[] args) {
    return args.length;
}

public static void main(@Nullable String[] args) {
    System.out.println(countArgs(args));
}

In the above example, the @NonNull annotation declares that the args argument of countArgs() must not be null.

Despite this constraint, in main(), we invoke the method passing an argument that can indeed be null, since it's annotated with @Nullable.

When we compile the code, The Checker Framework duly warns us that something in our code could be wrong:

[WARNING] /checker-plugin/.../NonNullExample.java:[12,38] [argument.type.incompatible]
 incompatible types in argument.
  found   : null
  required: @Initialized @NonNull String @Initialized @NonNull []
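The usual way to satisfy the Nullness Checker is to guard the call with an explicit null check, after which the checker's flow analysis treats the reference as non-null on that branch. Here's a minimal plain-Java sketch of the pattern (annotations omitted so it compiles without the framework on the classpath):

```java
public class NonNullGuardExample {

    // mirrors countArgs(@NonNull String[] args) from the example above
    static int countArgs(String[] args) {
        return args.length;
    }

    public static void main(String[] args) {
        // an explicit check like this convinces the Nullness Checker
        // that args is non-null where countArgs() is invoked
        if (args != null) {
            System.out.println(countArgs(args));
        } else {
            System.out.println(0);
        }
    }
}
```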

4. Proper Use of Constants as Enumerations

Sometimes we use a series of constants as if they were items of an enumeration.

Let’s suppose we need a series of countries and planets. We can then annotate these items with the @Fenum annotation to group all the constants that are part of the same “fake” enumeration:

static final @Fenum("country") String ITALY = "IT";
static final @Fenum("country") String US = "US";
static final @Fenum("country") String UNITED_KINGDOM = "UK";

static final @Fenum("planet") String MARS = "Mars";
static final @Fenum("planet") String EARTH = "Earth";
static final @Fenum("planet") String VENUS = "Venus";

After that, when we write a method that should accept a String that is a “planet”, we can properly annotate the argument:

void greetPlanet(@Fenum("planet") String planet){
    System.out.println("Hello " + planet);
}

By mistake, we could invoke greetPlanet() with a String that hasn't been defined as a possible value for a planet, such as:

public static void main(String[] args) {
    obj.greetPlanet(US);
}

The Checker Framework can spot the error:

[WARNING] /checker-plugin/.../FakeNumExample.java:[29,26] [argument.type.incompatible]
 incompatible types in argument.
  found   : @Fenum("country") String
  required: @Fenum("planet") String

5. Regular Expressions

Let’s suppose we know a String variable has to store a regular expression with at least one matching group.

We can leverage the Checker Framework and declare such a variable like this:

@Regex(1) private static String FIND_NUMBERS = "\\d*";

This is obviously a potential error because the regular expression we assigned to FIND_NUMBERS does not have any matching group.

Indeed, the Checker Framework will diligently inform us about our error at compile time:

[WARNING] /checker-plugin/.../RegexExample.java:[7,51] [assignment.type.incompatible]
incompatible types in assignment.
  found   : @Regex String
  required: @Regex(1) String
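To fix the error, we'd simply give the pattern the capturing group that the annotation promises, e.g. "(\\d+)". The checker's group counting mirrors what java.util.regex reports at runtime; here's a plain-Java sketch (without the framework's annotations):

```java
import java.util.regex.Pattern;

public class RegexGroupExample {

    // "(\\d+)" declares one capturing group, which would satisfy @Regex(1);
    // "\\d*" declares none, which is exactly what the checker complains about
    static int groupCount(String regex) {
        return Pattern.compile(regex).matcher("").groupCount();
    }

    public static void main(String[] args) {
        System.out.println(groupCount("\\d*"));   // 0
        System.out.println(groupCount("(\\d+)")); // 1
    }
}
```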

6. Conclusion

The Checker Framework is a useful tool for developers who want to go beyond the standard compiler and improve the correctness of their code.

It’s able to detect, at compile time, several typical errors that can usually only be detected at runtime or even halt compilation by raising a compilation error.

There’re many more standard checks than what we covered in this article; check out the checks available in The Checker Framework official manual here, or even write your own.

As always, the source code for this tutorial, with some more examples, can be found over on GitHub.

Method Handles in Java


1. Introduction

In this article, we’re going to explore an important API that was introduced in Java 7 and enhanced in the following versions, the java.lang.invoke.MethodHandles.

In particular, we’ll learn what method handles are, how to create them and how to use them.

2. What are Method Handles?

Coming to its definition, as stated in the API documentation:

A method handle is a typed, directly executable reference to an underlying method, constructor, field, or similar low-level operation, with optional transformations of arguments or return values.

Put simply, method handles are a low-level mechanism for finding, adapting, and invoking methods.

Method handles are immutable and have no visible state.

Creating and using a MethodHandle requires four steps:

  • Creating the lookup
  • Creating the method type
  • Finding the method handle
  • Invoking the method handle
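Before diving into each step, here's a sketch that puts all four together, using the public concat() method of String (we'll revisit each step in detail below):

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class MethodHandleSteps {

    static String concat(String receiver, String argument) throws Throwable {
        // 1. create the lookup
        MethodHandles.Lookup publicLookup = MethodHandles.publicLookup();

        // 2. create the method type: String concat(String)
        MethodType mt = MethodType.methodType(String.class, String.class);

        // 3. find the method handle
        MethodHandle concatMH = publicLookup.findVirtual(String.class, "concat", mt);

        // 4. invoke the method handle
        return (String) concatMH.invoke(receiver, argument);
    }

    public static void main(String[] args) throws Throwable {
        System.out.println(concat("Bael", "dung")); // Baeldung
    }
}
```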

2.1. Method Handles vs Reflection

Method handles were introduced in order to work alongside the existing java.lang.reflect API, as they serve different purposes and have different characteristics.

From a performance standpoint, the MethodHandles API can be much faster than the Reflection API since the access checks are made at creation time rather than at execution time. This difference gets amplified if a security manager is present, since member and class lookups are subject to additional checks.

However, performance isn't the only measure of suitability for a task: the MethodHandles API is also harder to use, due to the lack of mechanisms such as member class enumeration, accessibility-flag inspection, and more.

Even so, the MethodHandles API offers the possibility to curry methods, change the types of parameters and change their order.

Having a clear definition and goals of the MethodHandles API, we can now begin to work with them, starting from the lookup.

3. Creating the Lookup

The first thing to do when we want to create a method handle is to retrieve the lookup, the factory object that is responsible for creating method handles for methods, constructors, and fields, that are visible to the lookup class.

Through the MethodHandles API, it’s possible to create the lookup object, with different access modes.

Let’s create the lookup that provides access to public methods:

MethodHandles.Lookup publicLookup = MethodHandles.publicLookup();

However, in case we want to have access also to private and protected methods, we can use, instead, the lookup() method:

MethodHandles.Lookup lookup = MethodHandles.lookup();

4. Creating a MethodType

In order to be able to create the MethodHandle, the lookup object requires a definition of its type and this is achieved through the MethodType class.

In particular, MethodType represents the arguments and return type accepted and returned by a method handle or passed and expected by a method handle caller.

The structure of a MethodType is simple and it’s formed by a return type together with an appropriate number of parameter types that must be properly matched between a method handle and all its callers.

In the same way as MethodHandle, even the instances of a MethodType are immutable.

Let’s see how it’s possible to define a MethodType that specifies a java.util.List class as return type and an Object array as input type:

MethodType mt = MethodType.methodType(List.class, Object[].class);

In case the method returns a primitive type or void as its return type, we will use the class representing those types (void.class, int.class …).

Let’s define a MethodType that returns an int value and accepts an Object:

MethodType mt = MethodType.methodType(int.class, Object.class);

We can now proceed to create MethodHandle.

5. Finding a MethodHandle

Once we’ve defined our method type, in order to create a MethodHandle, we have to find it through the lookup or publicLookup object, providing also the origin class and the method name.

In particular, the lookup factory provides a set of methods that allow us to find the method handle in an appropriate way considering the scope of our method. Starting with the simplest scenario, let’s explore the principal ones.

5.1. Method Handle for Methods

Using the findVirtual() method allows us to create a MethodHandle for an object method. Let's create one, based on the concat() method of the String class:

MethodType mt = MethodType.methodType(String.class, String.class);
MethodHandle concatMH = publicLookup.findVirtual(String.class, "concat", mt);

5.2. Method Handle for Static Methods

When we want to gain access to a static method, we can instead use the findStatic() method:

MethodType mt = MethodType.methodType(List.class, Object[].class);

MethodHandle asListMH = publicLookup.findStatic(Arrays.class, "asList", mt);

In this case, we created a method handle that converts an array of Objects to a List of them.

5.3. Method Handle for Constructors

Gaining access to a constructor can be done using the findConstructor() method.

Let’s create a method handles that behaves as the constructor of the Integer class, accepting a String attribute:

MethodType mt = MethodType.methodType(void.class, String.class);

MethodHandle newIntegerMH = publicLookup.findConstructor(Integer.class, mt);
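Invoking this handle then behaves like calling new Integer("15"). Note that although we described the constructor with void.class, the resulting handle actually returns the constructed instance; a runnable sketch:

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class ConstructorHandleExample {

    static Integer newInteger(String value) throws Throwable {
        MethodHandles.Lookup publicLookup = MethodHandles.publicLookup();
        // constructors are described with a void return type...
        MethodType mt = MethodType.methodType(void.class, String.class);
        MethodHandle newIntegerMH = publicLookup.findConstructor(Integer.class, mt);
        // ...but the resulting handle returns the constructed instance
        return (Integer) newIntegerMH.invoke(value);
    }

    public static void main(String[] args) throws Throwable {
        System.out.println(newInteger("15")); // 15
    }
}
```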

5.4. Method Handle for Fields

Using a method handle it’s possible to gain access also to fields.

Let’s start defining the Book class:

public class Book {
    
    String id;
    String title;

    // constructor

}

Provided there is direct access visibility between the method handle's lookup class and the declared property, we can create a method handle that behaves as a getter:

MethodHandle getTitleMH = lookup.findGetter(Book.class, "title", String.class);
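Invoking the getter handle with a Book instance then returns the field's value. A self-contained sketch (re-declaring a minimal Book, since the lookup class needs visibility of the field):

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;

public class FieldHandleExample {

    // minimal stand-in for the article's Book class
    public static class Book {
        String id;
        String title;

        public Book(String id, String title) {
            this.id = id;
            this.title = title;
        }
    }

    public static String readTitle(Book book) throws Throwable {
        MethodHandles.Lookup lookup = MethodHandles.lookup();
        // the getter handle takes the receiver and returns the field value
        MethodHandle getTitleMH = lookup.findGetter(Book.class, "title", String.class);
        return (String) getTitleMH.invoke(book);
    }

    public static void main(String[] args) throws Throwable {
        System.out.println(readTitle(new Book("1", "Effective Java"))); // Effective Java
    }
}
```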

For further information on handling variables/fields, take a look at Java 9 Variable Handles Demystified, where we discuss the java.lang.invoke.VarHandle API, added in Java 9.

5.5. Method Handle for Private Methods

Creating a method handle for a private method can be done with the help of the java.lang.reflect API.

Let’s start adding a private method to the Book class:

private String formatBook() {
    return id + " > " + title;
}

Now we can create a method handle that behaves exactly as the formatBook() method:

Method formatBookMethod = Book.class.getDeclaredMethod("formatBook");
formatBookMethod.setAccessible(true);

MethodHandle formatBookMH = lookup.unreflect(formatBookMethod);
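Invoking this handle (invocation is covered in detail in the next section) then executes the private method just like a direct call would. A self-contained sketch:

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.reflect.Method;

public class PrivateHandleExample {

    // minimal stand-in for the article's Book class
    public static class Book {
        String id;
        String title;

        public Book(String id, String title) {
            this.id = id;
            this.title = title;
        }

        private String formatBook() {
            return id + " > " + title;
        }
    }

    public static String format(Book book) throws Throwable {
        // bridge through the Reflection API, then unreflect into a handle
        Method formatBookMethod = Book.class.getDeclaredMethod("formatBook");
        formatBookMethod.setAccessible(true);

        MethodHandle formatBookMH = MethodHandles.lookup().unreflect(formatBookMethod);
        return (String) formatBookMH.invoke(book);
    }

    public static void main(String[] args) throws Throwable {
        System.out.println(format(new Book("1", "Java"))); // 1 > Java
    }
}
```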

6. Invoking a Method Handle

Once we’ve created our method handles, use them is the next step. In particular, the MethodHandle class provides 3 different way to execute a method handle: invoke(), invokeWithArugments() and invokeExact().

Let’s start with the invoke option.

6.1. Invoking a Method Handle

When using the invoke() method, we enforce the number of the arguments (arity) to be fixed but we allow the performing of casting and boxing/unboxing of the arguments and return types.

Let’s see how it’s possible to use the invoke() with a boxed argument:

MethodType mt = MethodType.methodType(String.class, char.class, char.class);
MethodHandle replaceMH = publicLookup.findVirtual(String.class, "replace", mt);

String output = (String) replaceMH.invoke("jovo", Character.valueOf('o'), 'a');

assertEquals("java", output);

In this case, the replaceMH requires char arguments, but the invoke() performs an unboxing on the Character argument before its execution.

6.2. Invoking with Arguments

Invoking a method handle using the invokeWithArguments() method is the least restrictive of the three options.

In fact, it allows a variable arity invocation, in addition to the casting and boxing/unboxing of the arguments and of the return types.

In practice, this allows us to create a List of Integers starting from individual int values:

MethodType mt = MethodType.methodType(List.class, Object[].class);
MethodHandle asList = publicLookup.findStatic(Arrays.class, "asList", mt);

List<Integer> list = (List<Integer>) asList.invokeWithArguments(1,2);

assertThat(Arrays.asList(1,2), is(list));

6.3. Invoking Exact

In case we want to be more restrictive in the way we execute a method handle (number of arguments and their type), we have to use the invokeExact() method.

In fact, it doesn’t provide any casting to the class provided and requires a fixed number of arguments.

Let’s see how we can sum two int values using a method handle:

MethodType mt = MethodType.methodType(int.class, int.class, int.class);
MethodHandle sumMH = lookup.findStatic(Integer.class, "sum", mt);

int sum = (int) sumMH.invokeExact(1, 11);

assertEquals(12, sum);

If, in this case, we pass the invokeExact() method an argument that isn't an int, the invocation will lead to a WrongMethodTypeException.

7. Working with Arrays

MethodHandles aren’t intended to work only with fields or objects, but also with arrays. As a matter of fact, with the asSpreader() API, it’s possible to make an array-spreading method handle.

In this case, the method handle accepts an array argument, spreading its elements as positional arguments, and optionally the length of the array.

Let’s see how we can spread a method handle to check if the elements within an array are equals:

MethodType mt = MethodType.methodType(boolean.class, Object.class);
MethodHandle equals = publicLookup.findVirtual(String.class, "equals", mt);

MethodHandle methodHandle = equals.asSpreader(Object[].class, 2);

assertTrue((boolean) methodHandle.invoke(new Object[] { "java", "java" }));

8. Enhancing a Method Handle

Once we’ve defined a method handle, it’s possible to enhance it by binding the method handle to an argument without actually invoking it.

For example, in Java 9, this kind of behaviour is used to optimize String concatenation.

Let’s see how we can perform a concatenation, binding a suffix to our concatMH:

MethodType mt = MethodType.methodType(String.class, String.class);
MethodHandle concatMH = publicLookup.findVirtual(String.class, "concat", mt);

MethodHandle bindedConcatMH = concatMH.bindTo("Hello ");

assertEquals("Hello World!", bindedConcatMH.invoke("World!"));

9. Java 9 Enhancements

With Java 9, a few enhancements were made to the MethodHandles API with the aim of making it easier to use.

The enhancements affected 3 main topics:

  • Lookup functions – allowing class lookups from different contexts and supporting non-abstract methods in interfaces
  • Argument handling – improving the argument folding, argument collecting and argument spreading functionalities
  • Additional combinations – adding loops (loop, whileLoop, doWhileLoop…) and better exception handling support with tryFinally

These changes resulted in a few additional benefits:

  • Increased JVM compiler optimizations
  • Instantiation reduction
  • Enabled precision in the usage of the MethodHandles API

Details of the enhancements made are available at the MethodHandles API Javadoc.

10. Conclusion

In this article, we covered the MethodHandles API, what method handles are, and how we can use them.

We also discussed how they relate to the Reflection API; since method handles allow low-level operations, it's better to avoid using them unless they fit the scope of the job perfectly.

As always, the complete source code for this article is available over on GitHub.

JDBC with Groovy


1. Introduction

In this article, we’re going to look at how to query relational databases with JDBC, using idiomatic Groovy.

JDBC, while relatively low-level, is the foundation of most ORMs and other high-level data access libraries on the JVM. And we can use JDBC directly in Groovy, of course; however, it has a rather cumbersome API.

Fortunately for us, the Groovy standard library builds upon JDBC to present an interface that is clean, simple, yet powerful. So, we’ll be exploring the Groovy SQL module.

We’re going to look at JDBC in plain Groovy, not considering any framework such as Spring, for which we have other guides.

2. JDBC and Groovy Setup

We have to include the groovy-sql module among our dependencies:

<dependency>
    <groupId>org.codehaus.groovy</groupId>
    <artifactId>groovy</artifactId>
    <version>2.4.13</version>
</dependency>
<dependency>
    <groupId>org.codehaus.groovy</groupId>
    <artifactId>groovy-sql</artifactId>
    <version>2.4.13</version>
</dependency>

It’s not necessary to list it explicitly if we’re using groovy-all:

<dependency>
    <groupId>org.codehaus.groovy</groupId>
    <artifactId>groovy-all</artifactId>
    <version>2.4.13</version>
</dependency>

We can find the latest version of groovy, groovy-sql and groovy-all on Maven Central.

3. Connecting to the Database

The first thing we have to do in order to work with the database is connecting to it.

Let’s introduce the groovy.sql.Sql class, which we’ll use for all operations on the database with the Groovy SQL module.

An instance of Sql represents a database on which we want to operate.

However, an instance of Sql isn’t a single database connection. We’ll talk about connections later, let’s not worry about them now; let’s just assume everything magically works.

3.1. Specifying Connection Parameters

Throughout this article, we’re going to use an HSQL Database, which is a lightweight relational DB that is mostly used in tests.

A database connection needs a URL, a driver, and access credentials:

Map dbConnParams = [
  url: 'jdbc:hsqldb:mem:testDB',
  user: 'sa',
  password: '',
  driver: 'org.hsqldb.jdbc.JDBCDriver']

Here, we’ve chosen to specify those using a Map, although it’s not the only possible choice.

We can then obtain a connection from the Sql class:

def sql = Sql.newInstance(dbConnParams)

We’ll see how to use it in the following sections.

When we’re finished, we should always release any associated resources:

sql.close()

3.2. Using a DataSource

It is common, especially in programs running inside an application server, to use a datasource to connect to the database.

Also, when we want to pool connections or to use JNDI, a datasource is the most natural option.

Groovy’s Sql class accepts datasources just fine:

def sql = Sql.newInstance(datasource)

3.3. Automatic Resource Management

Remembering to call close() when we’re done with an Sql instance is tedious; machines remember stuff much better than we do, after all.

With Sql we can wrap our code in a closure and have Groovy call close() automatically when control leaves it, even in case of exceptions:

Sql.withInstance(dbConnParams) {
    Sql sql -> haveFunWith(sql)
}

4. Issuing Statements Against the Database

Now, we can go on to the interesting stuff.

The simplest, most general way to issue a statement against the database is the execute method:

sql.execute "create table PROJECT (id integer not null, name varchar(50), url varchar(100))"

In theory it works both for DDL/DML statements and for queries; however, the simple form above does not offer a way to get back query results. We’ll leave queries for later.

The execute method has several overloaded versions, but, again, we’ll look at the more advanced use cases of this and other methods in later sections.

4.1. Inserting Data

For inserting data in small amounts and in simple scenarios, the execute method discussed earlier is perfectly fine.

However, for cases when we have generated columns (e.g., with sequences or auto-increment) and we want to know the generated values, a dedicated method exists: executeInsert.

As with execute, we'll now look at the simplest method overload available, leaving more complex variants for a later section.

So, suppose we have a table with an auto-increment primary key (identity in HSQLDB parlance):

sql.execute "create table PROJECT (ID IDENTITY, NAME VARCHAR (50), URL VARCHAR (100))"

Let’s insert a row in the table and save the result in a variable:

def ids = sql.executeInsert """
  INSERT INTO PROJECT (NAME, URL) VALUES ('tutorials', 'github.com/eugenp/tutorials')
"""

executeInsert behaves exactly like execute, but what does it return?

It turns out that the return value is a matrix: its rows are the inserted rows (remember that a single statement can cause multiple rows to be inserted) and its columns are the generated values.

It sounds complicated, but in our case, which is by far the most common one, there is a single row and a single generated value:

assertEquals(0, ids[0][0])

A subsequent insertion would return a generated value of 1:

ids = sql.executeInsert """
  INSERT INTO PROJECT (NAME, URL)
  VALUES ('REST with Spring', 'github.com/eugenp/REST-With-Spring')
"""

assertEquals(1, ids[0][0])

4.2. Updating and Deleting Data

Similarly, a dedicated method for data modification and deletion exists: executeUpdate.

Again, this differs from execute only in its return value, and we’ll only look at its simplest form.

The return value, in this case, is an integer, the number of affected rows:

def count = sql.executeUpdate("UPDATE PROJECT SET URL = 'https://' + URL")

assertEquals(2, count)

5. Querying the Database

Things start getting Groovy when we query the database.

Dealing with the JDBC ResultSet class is not exactly fun. Luckily for us, Groovy offers a nice abstraction over all of that.

5.1. Iterating Over Query Results

While loops are so old style… we’re all into closures nowadays.

And Groovy is here to suit our tastes:

sql.eachRow("SELECT * FROM PROJECT") { GroovyResultSet rs ->
    haveFunWith(rs)
}

The eachRow method issues our query against the database and calls a closure over each row.

As we can see, a row is represented by an instance of GroovyResultSet, which is an extension of plain old ResultSet with a few added goodies. Read on to find more about it.

5.2. Accessing Result Sets

In addition to all of the ResultSet methods, GroovyResultSet offers a few convenient utilities.

Mainly, it exposes named properties matching column names:

sql.eachRow("SELECT * FROM PROJECT") { rs ->
    assertNotNull(rs.name)
    assertNotNull(rs.URL)
}

Note how property names are case-insensitive.

GroovyResultSet also offers access to columns using a zero-based index:

sql.eachRow("SELECT * FROM PROJECT") { rs ->
    assertNotNull(rs[0])
    assertNotNull(rs[1])
    assertNotNull(rs[2])
}

5.3. Pagination

We can easily page the results, i.e., load only a subset starting from some offset up to some maximum number of rows. This is a common concern in web applications, for example.

eachRow and related methods have overloads accepting an offset and a maximum number of returned rows:

def offset = 1
def maxResults = 1
def rows = sql.rows('SELECT * FROM PROJECT ORDER BY NAME', offset, maxResults)

assertEquals(1, rows.size())
assertEquals('REST with Spring', rows[0].name)

Here, the rows method returns a list of rows rather than iterating over them like eachRow.

6. Parameterized Queries and Statements

More often than not, queries and statements are not fully fixed at compile time; they usually have a static part and a dynamic part, in the form of parameters.

If you’re thinking about string concatenation, stop now and go read about SQL injection!

We mentioned earlier that the methods that we’ve seen in previous sections have many overloads for various scenarios.

Let’s introduce those overloads that deal with parameters in SQL queries and statements.

6.1. Strings with Placeholders

In a style similar to plain JDBC, we can use positional parameters:

sql.execute(
    'INSERT INTO PROJECT (NAME, URL) VALUES (?, ?)',
    'tutorials', 'github.com/eugenp/tutorials')

or we can use named parameters with a map:

sql.execute(
    'INSERT INTO PROJECT (NAME, URL) VALUES (:name, :url)',
    [name: 'REST with Spring', url: 'github.com/eugenp/REST-With-Spring'])

This works for execute, executeUpdate, rows and eachRow. executeInsert supports parameters, too, but its signature is a little bit different and trickier.

6.2. Groovy Strings

We can also opt for a Groovier style using GStrings with placeholders.

All the methods we’ve seen don’t substitute placeholders in GStrings the usual way; rather, they insert them as JDBC parameters, ensuring the SQL syntax is correctly preserved, with no need to quote or escape anything and thus no risk of injection.

This is perfectly fine, safe and Groovy:

def name = 'REST with Spring'
def url = 'github.com/eugenp/REST-With-Spring'
sql.execute "INSERT INTO PROJECT (NAME, URL) VALUES (${name}, ${url})"

7. Transactions and Connections

So far we’ve skipped over a very important concern: transactions.

In fact, we haven’t talked at all about how Groovy’s Sql manages connections, either.

7.1. Short-Lived Connections

In the examples presented so far, each and every query or statement was sent to the database using a new, dedicated connection. Sql closes the connection as soon as the operation terminates.

Of course, if we’re using a connection pool, the impact on performance might be small.

Still, if we want to issue multiple DML statements and queries as a single, atomic operation, we need a transaction.

Also, for a transaction to be possible in the first place, we need a connection that spans multiple statements and queries.

7.2. Transactions with a Cached Connection

Groovy SQL does not allow us to create or access transactions explicitly.

Instead, we use the withTransaction method with a closure:

sql.withTransaction {
    sql.execute """
        INSERT INTO PROJECT (NAME, URL)
        VALUES ('tutorials', 'github.com/eugenp/tutorials')
    """
    sql.execute """
        INSERT INTO PROJECT (NAME, URL)
        VALUES ('REST with Spring', 'github.com/eugenp/REST-With-Spring')
    """
}

Inside the closure, a single database connection is used for all queries and statements.

Furthermore, the transaction is automatically committed when the closure terminates, unless it exits early due to an exception.

However, we can also manually commit or rollback the current transaction with methods in the Sql class:

sql.withTransaction {
    sql.execute """
        INSERT INTO PROJECT (NAME, URL)
        VALUES ('tutorials', 'github.com/eugenp/tutorials')
    """
    sql.commit()
    sql.execute """
        INSERT INTO PROJECT (NAME, URL)
        VALUES ('REST with Spring', 'github.com/eugenp/REST-With-Spring')
    """
    sql.rollback()
}

7.3. Cached Connections Without a Transaction

Finally, to reuse a database connection without the transaction semantics described above, we use cacheConnection:

sql.cacheConnection {
    sql.execute """
        INSERT INTO PROJECT (NAME, URL)
        VALUES ('tutorials', 'github.com/eugenp/tutorials')
    """
    throw new Exception('This does not roll back')
}

8. Conclusions and Further Reading

In this article, we’ve looked at the Groovy SQL module and how it enhances and simplifies JDBC with closures and Groovy strings.

We can then safely conclude that plain old JDBC looks a bit more modern with a sprinkle of Groovy!

We haven’t talked about every single feature of Groovy SQL; for example, we’ve left out batch processing, stored procedures, metadata, and other things.

For further information, see the Groovy documentation.

The implementation of all these examples and code snippets can be found in the GitHub project – this is a Maven project, so it should be easy to import and run as is.

An MVC Example with Servlets and JSP


1. Overview

In this quick article, we’ll create a small web application that implements the Model View Controller (MVC) design pattern, using basic Servlets and JSPs.

We’ll explore a little bit about how MVC works, and its key features before we move on to the implementation.

2. Introduction to MVC

Model-View-Controller (MVC) is a pattern used in software engineering to separate the application logic from the user interface. As the name implies, the MVC pattern has three layers.

The Model defines the business layer of the application, the Controller manages the flow of the application, and the View defines the presentation layer of the application.

Although the MVC pattern isn’t specific to web applications, it fits very well in this type of application. In a Java context, the Model consists of simple Java classes, the Controller consists of servlets, and the View consists of JSP pages.

Here are some key features of the pattern:

  • It separates the presentation layer from the business layer
  • The Controller performs the action of invoking the Model and sending data to the View
  • The Model isn’t even aware of whether it’s used by a web application or a desktop application

Let’s have a look at each layer.

2.1. The Model Layer

This is the data layer, which contains the business logic of the system and represents the state of the application.

It’s independent of the presentation layer; the Controller fetches the data from the Model layer and sends it to the View layer.

2.2. The Controller Layer

The Controller layer acts as an interface between the View and the Model. It receives requests from the View layer and processes them, including the necessary validations.

The requests are then sent to the Model layer for data processing; once they’re processed, the data is sent back to the Controller and then displayed in the View.

2.3. The View Layer

This layer represents the output of the application, usually some form of UI. The presentation layer is used to display the Model data fetched by the Controller.

3. MVC with Servlets and JSP

To implement a web application based on the MVC design pattern, we’ll create the Student and StudentService classes – which will act as our Model layer.

The StudentServlet class will act as the Controller, and for the presentation layer, we’ll create the student-record.jsp page.

Now, let’s write these layers one by one, starting with the Student class:

public class Student {
    private int id;
    private String firstName;
    private String lastName;
	
    // constructors, getters and setters go here
}

Let’s now write our StudentService which will process our business logic:

public class StudentService {

    public Optional<Student> getStudent(int id) {
        switch (id) {
            case 1:
                return Optional.of(new Student(1, "John", "Doe"));
            case 2:
                return Optional.of(new Student(2, "Jane", "Goodall"));
            case 3:
                return Optional.of(new Student(3, "Max", "Born"));
            default:
                return Optional.empty();
        }
    }
}
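Returning an Optional lets the Controller (and ultimately the JSP) handle the “no record” case without null checks. Here’s a minimal, self-contained sketch of that behavior (the demo class and the condensed inner classes are our own additions, not part of the web application):

```java
import java.util.Optional;

public class StudentServiceDemo {

    // condensed copies of the Student and StudentService classes above,
    // kept in one file so the sketch compiles on its own
    static class Student {
        private final int id;
        private final String firstName;
        private final String lastName;

        Student(int id, String firstName, String lastName) {
            this.id = id;
            this.firstName = firstName;
            this.lastName = lastName;
        }

        public String getFirstName() { return firstName; }
    }

    static class StudentService {
        public Optional<Student> getStudent(int id) {
            switch (id) {
                case 1: return Optional.of(new Student(1, "John", "Doe"));
                case 2: return Optional.of(new Student(2, "Jane", "Goodall"));
                case 3: return Optional.of(new Student(3, "Max", "Born"));
                default: return Optional.empty();
            }
        }
    }

    public static void main(String[] args) {
        StudentService service = new StudentService();

        // a known id yields a populated Optional...
        System.out.println(service.getStudent(2).map(Student::getFirstName).orElse("none"));
        // ...an unknown id yields Optional.empty(), so the view can render a "not found" branch
        System.out.println(service.getStudent(42).isPresent());
    }
}
```

This mirrors what the servlet does: ifPresent() only sets the request attribute when a record exists, and the JSP branches on that attribute.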

Now let’s create our Controller class StudentServlet:

@WebServlet(
  name = "StudentServlet", 
  urlPatterns = "/student-record")
public class StudentServlet extends HttpServlet {

    private StudentService studentService = new StudentService();

    private void processRequest(
      HttpServletRequest request, HttpServletResponse response) 
      throws ServletException, IOException {

        String studentID = request.getParameter("id");
        if (studentID != null) {
            int id = Integer.parseInt(studentID);
            studentService.getStudent(id)
              .ifPresent(s -> request.setAttribute("studentRecord", s));
        }

        RequestDispatcher dispatcher = request.getRequestDispatcher(
          "/WEB-INF/jsp/student-record.jsp");
        dispatcher.forward(request, response);
    }

    @Override
    protected void doGet(
      HttpServletRequest request, HttpServletResponse response) 
      throws ServletException, IOException {

        processRequest(request, response);
    }

    @Override
    protected void doPost(
      HttpServletRequest request, HttpServletResponse response) 
      throws ServletException, IOException {

        processRequest(request, response);
    }
}

This servlet is the controller of our web application.

First, it reads the id parameter from the request. If an id is submitted, a Student object is fetched from the business layer.

Once it retrieves the necessary data from the Model, it puts this data in the request using the setAttribute() method.

Finally, the Controller forwards the request and response objects to a JSP, the view of the application.

Next, let’s write our presentation layer student-record.jsp:

<html>
    <head>
        <title>Student Record</title>
    </head>
    <body>
    <% 
        if (request.getAttribute("studentRecord") != null) {
            Student student = (Student) request.getAttribute("studentRecord");
    %>
 
    <h1>Student Record</h1>
    <div>ID: <%= student.getId()%></div>
    <div>First Name: <%= student.getFirstName()%></div>
    <div>Last Name: <%= student.getLastName()%></div>
        
    <% 
        } else { 
    %>

    <h1>No student record found.</h1>
         
    <% } %>	
    </body>
</html>

And, of course, the JSP is the view of the application; it receives all the information it needs from the Controller and doesn’t need to interact with the business layer directly.

4. Conclusion

In this tutorial, we’ve learned about the Model-View-Controller (MVC) architecture and focused on implementing a simple example.

As usual, the code presented here can be found over on GitHub.

Guide to Inheritance in Java


1. Overview

One of the core principles of Object Oriented Programming – inheritance – enables us to reuse existing code or extend an existing type.

Simply put, in Java, a class can inherit another class and multiple interfaces, while an interface can inherit other interfaces.

In this article, we’ll start with the need for inheritance and move on to how inheritance works with classes and interfaces.

Then, we’ll cover how variable/method names and access modifiers affect the members that are inherited.

And at the end, we’ll see what it means to inherit a type.

2. The Need for Inheritance

Imagine, as a car manufacturer, you offer multiple car models to your customers. Even though different car models might offer different features, like a sunroof or bulletproof windows, they would all include common components and features, like an engine and wheels.

It makes sense to create a basic design and extend it to create specialized versions, rather than designing each car model separately, from scratch.

In a similar manner, with inheritance, we can create a class with basic features and behavior, and create its specialized versions by creating classes that inherit this base class. In the same way, interfaces can extend existing interfaces.

We’ll notice the use of multiple terms to refer to a type which is inherited by another type, specifically:

  • a base type is also called a super or a parent type
  • a derived type is referred to as an extended, sub or a child type

3. Inheritance with Classes

3.1. Extending a Class

A class can inherit another class and define additional members.

Let’s start by defining a base class Car:

public class Car {
    int wheels;
    String model;
    void start() {
        // Check essential parts
    }
}

The class ArmoredCar can inherit the members of Car class by using the keyword extends in its declaration:

public class ArmoredCar extends Car {
    int bulletProofWindows;
    void remoteStartCar() {
        // this vehicle can be started by using a remote control
    }
}

Classes in Java support single inheritance; the ArmoredCar class can’t extend multiple classes. In the absence of an extends keyword, a class implicitly inherits class java.lang.Object.

3.2. What Is Inherited

Basically, a derived class inherits the non-static protected and public members of the base class. In addition, members with default (package) access are inherited if the two classes are in the same package.

A base class doesn’t allow all of its code to be accessed by the derived classes.

The private and static members of a class aren’t inherited by a derived class. Also, if the base class and the derived class are defined in separate packages, members with default (package) access in the base class aren’t inherited by the derived class.

3.3. Access Parent Class Members from a Derived Class

It’s simple. Just use them (we don’t need a reference to the base class to access its members). Here’s a quick example:

public class ArmoredCar extends Car {
    public String registerModel() {
        return model;
    }
}

3.4. Hidden Base Class Instance Members

What happens if both our base class and derived class define a variable or method with the same name? Don’t worry; we can still access both of them. However, we must make our intent clear to Java, by prefixing the variable or method with the keywords this or super.

The this keyword refers to the instance in which it’s used. The super keyword (as it seems obvious) refers to the parent class instance:

public class ArmoredCar extends Car {
    private String model;
    public String getAValue() {
        return super.model;   // returns value of model defined in base class Car
        // return this.model;   // will return value of model defined in ArmoredCar
        // return model;   // will return value of model defined in ArmoredCar
    }
}

A lot of developers use this and super keywords to explicitly state which variable or method they’re referring to. However, using them with all members can make our code look cluttered.

3.5. Hidden Base Class Static Members

What happens when our base class and derived classes define static variables and methods with the same name? Can we access a static member from the base class, in the derived class, the way we do for the instance variables?

Let’s find out using an example:

public class Car {
    public static String msg() {
        return "Car";
    }
}
public class ArmoredCar extends Car {
    public static String msg() {
        return super.msg(); // this won't compile.
    }
}

No, we can’t. Static members belong to a class and not to instances, so we can’t use the super keyword, which needs an instance context, inside the static msg() method.

Since static members belong to a class, we can modify the preceding call as follows:

return Car.msg();

Consider the following example, in which both the base class and derived class define a static method msg() with the same signature:

public class Car {
    public static String msg() {
        return "Car";
    }
}
public class ArmoredCar extends Car {
    public static String msg() {
        return "ArmoredCar";
    }
}

Here’s how we can call them:

Car first = new ArmoredCar();
ArmoredCar second = new ArmoredCar();

For the preceding code, first.msg() will output “Car” and second.msg() will output “ArmoredCar”. The static method that is called depends on the type of the variable used to refer to the ArmoredCar instance.
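This compile-time resolution is easy to verify with a small, self-contained sketch (the demo class wrapping the two nested classes is our own addition):

```java
public class StaticHidingDemo {

    static class Car {
        static String msg() { return "Car"; }
    }

    static class ArmoredCar extends Car {
        // hides, not overrides, Car.msg()
        static String msg() { return "ArmoredCar"; }
    }

    public static void main(String[] args) {
        Car first = new ArmoredCar();
        ArmoredCar second = new ArmoredCar();

        // static calls are resolved at compile time from the declared type,
        // not from the actual object (the compiler warns about this style)
        System.out.println(first.msg());   // Car
        System.out.println(second.msg());  // ArmoredCar
    }
}
```

This is why calling static methods through instance references is considered bad style: the object’s runtime type plays no role in which method runs.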

4. Inheritance with Interfaces

4.1. Implementing Multiple Interfaces

Although classes can inherit only one class, they can implement multiple interfaces.

Imagine the ArmoredCar that we defined in the preceding section is required for a super spy. So the Car manufacturing company thought of adding flying and floating functionality:

public interface Floatable {
    void floatOnWater();
}
public interface Flyable {
    void fly();
}
public class ArmoredCar extends Car implements Floatable, Flyable {
    public void floatOnWater() {
        System.out.println("I can float!");
    }
 
    public void fly() {
        System.out.println("I can fly!");
    }
}

In the example above, we notice the use of the keyword implements to inherit from an interface.

4.2. Issues with Multiple Inheritance

Multiple inheritance with interfaces is allowed in Java.

Until Java 7, this wasn’t an issue. Interfaces could only define abstract methods, that is, methods without any implementation. So if a class implemented multiple interfaces with the same method signature, it was not a problem. The implementing class eventually had just one method to implement.

Let’s see how this simple equation changed with the introduction of default methods in interfaces, with Java 8.

Starting with Java 8, interfaces can choose to define default implementations for their methods (an interface can still define abstract methods). This means that if a class implements multiple interfaces that define methods with the same signature, it would inherit separate implementations of the same method. That would be ambiguous, and it isn’t allowed.

Java disallows inheritance of multiple implementations of the same methods, defined in separate interfaces.

Here’s an example:

public interface Floatable {
    default void repair() {
        System.out.println("Repairing Floatable object");
    }
}
public interface Flyable {
    default void repair() {
        System.out.println("Repairing Flyable object");
    }
}
public class ArmoredCar extends Car implements Floatable, Flyable {
    // this won't compile
}

If we do want to implement both interfaces, we’ll have to override the repair() method.
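In the override, we can also delegate explicitly to one of the inherited defaults with the InterfaceName.super syntax. Here’s a minimal, self-contained sketch (the demo class is our own, and the methods return a String instead of printing so the result is easy to check):

```java
public class DefaultMethodConflictDemo {

    interface Floatable {
        default String repair() {
            return "Repairing Floatable object";
        }
    }

    interface Flyable {
        default String repair() {
            return "Repairing Flyable object";
        }
    }

    static class ArmoredCar implements Floatable, Flyable {
        @Override
        public String repair() {
            // InterfaceName.super picks one specific inherited default
            return Floatable.super.repair();
        }
    }

    public static void main(String[] args) {
        System.out.println(new ArmoredCar().repair());
    }
}
```

The override resolves the ambiguity; whether it delegates to one interface, combines both, or does something entirely new is up to the class.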

If the interfaces in the preceding examples define variables with the same name, say duration, we can’t access them without prefixing the variable name with the interface name:

public interface Floatable {
    int duration = 10;
}
public interface Flyable {
    int duration = 20;
}
public class ArmoredCar extends Car implements Floatable, Flyable {

    public void aMethod() {
        System.out.println(duration); // won't compile
        System.out.println(Floatable.duration); // outputs 10
        System.out.println(Flyable.duration); // outputs 20
    }
}

4.3. Interfaces Extending Other Interfaces

An interface can extend multiple interfaces. Here’s an example:

public interface Floatable {
    void floatOnWater();
}
public interface Flyable {
    void fly();
}
public interface SpaceTraveller extends Floatable, Flyable {
    void remoteControl();
}

An interface inherits other interfaces by using the keyword extends. Classes use the keyword implements to inherit an interface.

5. Inheriting Type

When a class inherits another class or interfaces, apart from inheriting their members, it also inherits their type. This also applies to an interface that inherits other interfaces.

This is a very powerful concept, which allows developers to program to an interface (base class or interface), rather than programming to their implementations (concrete or derived classes).

For example, imagine a scenario where an organization maintains a list of the cars owned by its employees. Of course, all employees might own different car models. So how can we refer to different car instances? Here’s the solution:

public class Employee {
    private String name;
    private Car car;
    
    // standard constructor
}

Because all derived classes of Car inherit the type Car, the derived class instances can be referred to by using a variable of class Car:

Employee e1 = new Employee("Shreya", new ArmoredCar());
Employee e2 = new Employee("Paul", new SpaceCar());
Employee e3 = new Employee("Pavni", new BMW());

6. Conclusion

In this article, we covered a core aspect of the Java language – how inheritance works.

We saw how Java supports single inheritance with classes and multiple inheritance with interfaces, and discussed the intricacies of how the mechanism works in the language.

As always, the full source code for the examples is available over on GitHub.

An Intro to Spring Cloud Contract


1. Introduction

Spring Cloud Contract is a project that, simply put, helps us write Consumer-Driven Contracts (CDC).

This helps ensure the contract between a Producer and a Consumer in a distributed system – for both HTTP-based and message-based interactions.

In this quick article, we’ll explore writing producer and consumer side test cases for Spring Cloud Contract through an HTTP interaction.

2. Producer – Server Side

We’re going to write a producer side CDC, in the form of an EvenOddController – which just tells whether the number parameter is even or odd:

@RestController
public class EvenOddController {

    @GetMapping("/validate/prime-number")
    public String isNumberPrime(@RequestParam("number") Integer number) {
        return number % 2 == 0 ? "Even" : "Odd";
    }
}

2.1. Maven Dependencies

For our producer side, we’ll need the spring-cloud-starter-contract-verifier dependency:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-contract-verifier</artifactId>
    <version>Edgware.SR1</version>
    <scope>test</scope>
</dependency>

And we’ll need to configure spring-cloud-contract-maven-plugin with the name of our base test class, which we’ll describe in the next section:

<plugin>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-contract-maven-plugin</artifactId>
    <version>1.2.2.RELEASE</version>
    <extensions>true</extensions>
    <configuration>
        <baseClassForTests>
            com.baeldung.spring.cloud.springcloudcontractproducer.BaseTestClass
        </baseClassForTests>
    </configuration>
</plugin>

2.2. Producer Side Setup

We need to add a base class in the test package that loads our Spring context:

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.MOCK)
@DirtiesContext
@AutoConfigureMessageVerifier
public class BaseTestClass {

    @Autowired
    private EvenOddController evenOddController;

    @Before
    public void setup() {
        StandaloneMockMvcBuilder standaloneMockMvcBuilder 
          = MockMvcBuilders.standaloneSetup(evenOddController);
        RestAssuredMockMvc.standaloneSetup(standaloneMockMvcBuilder);
    }
}

In the /src/test/resources/contracts/ package, we’ll add the test stubs, such as this one in the file shouldReturnEvenWhenRequestParamIsEven.groovy:

import org.springframework.cloud.contract.spec.Contract
Contract.make {
    description "should return even when number input is even"
    request {
        method GET()
        url("/validate/prime-number") {
            queryParameters {
                parameter("number", "2")
            }
        }
    }
    response {
        body("Even")
        status 200
    }
}

When we run the build, the plugin automatically generates a test class named ContractVerifierTest that extends our BaseTestClass and puts it in /target/generated-test-sources/contracts/.

The names of the test methods are derived from the prefix “validate_” concatenated with the names of our Groovy test stubs. For the above Groovy file, the generated method name will be “validate_shouldReturnEvenWhenRequestParamIsEven”.

Let’s have a look at this auto-generated test class:

public class ContractVerifierTest extends BaseTestClass {

    @Test
    public void validate_shouldReturnEvenWhenRequestParamIsEven() throws Exception {
        // given:
        MockMvcRequestSpecification request = given();

        // when:
        ResponseOptions response = given().spec(request)
          .queryParam("number", "2")
          .get("/validate/prime-number");

        // then:
        assertThat(response.statusCode()).isEqualTo(200);

        // and:
        String responseBody = response.getBody().asString();
        assertThat(responseBody).isEqualTo("Even");
    }
}

The build will also add the stub jar in our local Maven repository so that it can be used by our consumer.

Stubs will be present in the output folder under stubs/mapping/.

3. Consumer – Client Side

The consumer side of our CDC will consume stubs generated by the producer side through HTTP interaction to maintain the contract, so any changes on the producer side would break the contract.

We’ll add BasicMathController, which will make an HTTP request to get the response from the generated stubs:

@RestController
public class BasicMathController {

    @Autowired
    private RestTemplate restTemplate;

    @GetMapping("/calculate")
    public String checkOddAndEven(@RequestParam("number") Integer number) {
        HttpHeaders httpHeaders = new HttpHeaders();
        httpHeaders.add("Content-Type", "application/json");

        ResponseEntity<String> responseEntity = restTemplate.exchange(
          "http://localhost:8090/validate/prime-number?number=" + number,
          HttpMethod.GET,
          new HttpEntity<>(httpHeaders),
          String.class);

        return responseEntity.getBody();
    }
}

3.1. The Maven Dependencies

For our consumer, we’ll need to add the spring-cloud-contract-wiremock and spring-cloud-contract-stub-runner dependencies:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-contract-wiremock</artifactId>
    <version>1.2.2.RELEASE</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-contract-stub-runner</artifactId>
    <version>1.2.2.RELEASE</version>
    <scope>test</scope>
</dependency>

3.2. Consumer Side Setup

Now it’s time to configure our stub runner, which will inform our consumer of the available stubs in our local Maven repository:

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.MOCK)
@AutoConfigureMockMvc
@AutoConfigureJsonTesters
@AutoConfigureStubRunner(
  workOffline = true,
  ids = "com.baeldung.spring.cloud:spring-cloud-contract-producer:+:stubs:8090")
public class BasicMathControllerIntegrationTest {

    @Autowired
    private MockMvc mockMvc;

    @Test
    public void given_WhenPassEvenNumberInQueryParam_ThenReturnEven()
      throws Exception {
 
        mockMvc.perform(MockMvcRequestBuilders.get("/calculate?number=2")
          .contentType(MediaType.APPLICATION_JSON))
          .andExpect(status().isOk())
          .andExpect(content().string("Even"));
    }
}

Note that the ids property of the @AutoConfigureStubRunner annotation specifies:

  • com.baeldung.spring.cloud — the groupId of our artifact
  • spring-cloud-contract-producer — the artifactId of the producer stub jar
  • 8090 — the port on which the generated stubs will run

4. When the Contract is Broken

If we make any changes on the producer side that directly impact the contract without updating the consumer side, this can result in contract failure.

For example, suppose we change the EvenOddController request URI to /validate/change/prime-number on our producer side.

If we fail to inform our consumer of this change, the consumer will still send its request to the /validate/prime-number URI, and the consumer side test cases will throw org.springframework.web.client.HttpClientErrorException: 404 Not Found.

5. Summary

We’ve seen how Spring Cloud Contract can help us maintain contracts between a service consumer and producer so that we can push out new code without any worry of breaking the contracts.

And, as always, the full implementation of this tutorial can be found over on GitHub.


How to TDD a List Implementation in Java


1. Overview

In this tutorial, we’ll walk through a custom List implementation using the Test-Driven Development (TDD) process.

This is not an intro to TDD, so we’re assuming you already have some basic idea of what it means and the sustained interest to get better at it.

Simply put, TDD is a design tool, enabling us to drive our implementation with the help of tests.

A quick disclaimer – we’re not focusing on creating efficient implementation here – just using it as an excuse to display TDD practices.

2. Getting Started

First, let’s define the skeleton for our class:

public class CustomList<E> implements List<E> {
    private Object[] internal = {};
    // empty implementation methods
}

The CustomList class implements the List interface, hence it must contain implementations for all the methods declared in that interface.

To get started, we can just provide empty bodies for those methods. If a method has a return type, we can return an arbitrary value of that type, such as null for Object or false for boolean.

For the sake of brevity, we’ll omit optional methods, together with some obligatory methods that aren’t often used.

3. TDD Cycles

Developing our implementation with TDD means that we need to create test cases first, thereby defining requirements for our implementation. Only then will we create or fix the implementation code to make those tests pass.

In a very simplified manner, the three main steps in each cycle are:

  1. Writing tests – define requirements in the form of tests
  2. Implementing features – make the tests pass without focusing too much on the elegance of the code
  3. Refactoring – improve the code to make it easier to read and maintain while still passing the tests

We’ll go through these TDD cycles for some methods of the List interface, starting with the simplest ones.

4. The isEmpty Method

The isEmpty method is probably the most straightforward method defined in the List interface. Here’s our starting implementation:

@Override
public boolean isEmpty() {
    return false;
}

This initial method definition is enough to compile. The body of this method will be “forced” to improve when more and more tests are added.

4.1. The First Cycle

Let’s write the first test case which makes sure that the isEmpty method returns true when the list doesn’t contain any element:

@Test
public void givenEmptyList_whenIsEmpty_thenTrueIsReturned() {
    List<Object> list = new CustomList<>();

    assertTrue(list.isEmpty());
}

The given test fails since the isEmpty method always returns false. We can make it pass just by flipping the return value:

@Override
public boolean isEmpty() {
    return true;
}

4.2. The Second Cycle

To confirm that the isEmpty method returns false when the list isn’t empty, we need to add at least one element:

@Test
public void givenNonEmptyList_whenIsEmpty_thenFalseIsReturned() {
    List<Object> list = new CustomList<>();
    list.add(null);

    assertFalse(list.isEmpty());
}

An implementation of the add method is now required. Here’s the add method we start with:

@Override
public boolean add(E element) {
    return false;
}

This method implementation doesn’t work as no changes to the internal data structure of the list are made. Let’s update it to store the added element:

@Override
public boolean add(E element) {
    internal = new Object[] { element };
    return false;
}

Our test still fails since the isEmpty method hasn’t been enhanced. Let’s do that:

@Override
public boolean isEmpty() {
    if (internal.length != 0) {
        return false;
    } else {
        return true;
    }
}

The non-empty test passes at this point.

4.3. Refactoring

Both test cases we’ve seen so far pass, but the code of the isEmpty method could be more elegant.

Let’s refactor it:

@Override
public boolean isEmpty() {
    return internal.length == 0;
}

We can see that tests pass, so the implementation of the isEmpty method is complete now.

5. The size Method

This is our starting implementation of the size method enabling the CustomList class to compile:

@Override
public int size() {
    return 0;
}

5.1. The First Cycle

Using the existing add method, we can create the first test for the size method, verifying that the size of a list with a single element is 1:

@Test
public void givenListWithAnElement_whenSize_thenOneIsReturned() {
    List<Object> list = new CustomList<>();
    list.add(null);

    assertEquals(1, list.size());
}

The test fails as the size method is returning 0. Let’s make it pass with a new implementation:

@Override
public int size() {
    if (isEmpty()) {
        return 0;
    } else {
        return internal.length;
    }
}

5.2. Refactoring

We can refactor the size method to make it more elegant:

@Override
public int size() {
    return internal.length;
}

The implementation of this method is now complete.

6. The get Method

Here’s the starting implementation of get:

@Override
public E get(int index) {
    return null;
}

6.1. The First Cycle

Let’s take a look at the first test for this method, which verifies the value of the single element in the list:

@Test
public void givenListWithAnElement_whenGet_thenThatElementIsReturned() {
    List<Object> list = new CustomList<>();
    list.add("baeldung");
    Object element = list.get(0);

    assertEquals("baeldung", element);
}

The test will pass with this implementation of the get method:

@Override
public E get(int index) {
    return (E) internal[0];
}

6.2. Improvement

Usually, we’d add more tests before making additional improvements to the get method. Those tests would need other methods of the List interface to implement proper assertions.

However, these other methods aren’t mature enough yet, so we break the TDD cycle and create a complete implementation of the get method, which is, in fact, not very hard.

It’s easy to imagine that get must extract an element from the internal array at the specified location using the index parameter:

@Override
public E get(int index) {
    return (E) internal[index];
}

7. The add Method

This is the add method we created in section 4:

@Override
public boolean add(E element) {
    internal = new Object[] { element };
    return false;
}

7.1. The First Cycle

The following is a simple test that verifies the return value of add:

@Test
public void givenEmptyList_whenElementIsAdded_thenAddReturnsTrue() {
    List<Object> list = new CustomList<>();
    boolean succeeded = list.add(null);

    assertTrue(succeeded);
}

We must modify the add method to return true for the test to pass:

@Override
public boolean add(E element) {
    internal = new Object[] { element };
    return true;
}

Although the test passes, the add method doesn’t cover all cases yet. If we add a second element to the list, the existing element will be lost.

7.2. The Second Cycle

Here’s another test adding the requirement that the list can contain more than one element:

@Test
public void givenListWithAnElement_whenAnotherIsAdded_thenGetReturnsBoth() {
    List<Object> list = new CustomList<>();
    list.add("baeldung");
    list.add(".com");
    Object element1 = list.get(0);
    Object element2 = list.get(1);

    assertEquals("baeldung", element1);
    assertEquals(".com", element2);
}

The test will fail since the add method in its current form doesn’t allow more than one element to be added.

Let’s change the implementation code:

@Override
public boolean add(E element) {
    Object[] temp = Arrays.copyOf(internal, internal.length + 1);
    temp[internal.length] = element;
    internal = temp;
    return true;
}

The implementation is elegant enough, hence we don’t need to refactor it.
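To see the methods working together, here’s a condensed, standalone version of what we’ve built so far (it intentionally doesn’t implement java.util.List, so the sketch compiles without the remaining interface methods):

```java
import java.util.Arrays;

public class MiniList<E> {

    private Object[] internal = {};

    public boolean isEmpty() {
        return internal.length == 0;
    }

    public int size() {
        return internal.length;
    }

    @SuppressWarnings("unchecked")
    public E get(int index) {
        return (E) internal[index];
    }

    public boolean add(E element) {
        // grow the backing array by one and append the element
        Object[] temp = Arrays.copyOf(internal, internal.length + 1);
        temp[internal.length] = element;
        internal = temp;
        return true;
    }

    public static void main(String[] args) {
        MiniList<String> list = new MiniList<>();
        System.out.println(list.isEmpty()); // true
        list.add("baeldung");
        list.add(".com");
        System.out.println(list.size());    // 2
        System.out.println(list.get(0) + list.get(1)); // baeldung.com
    }
}
```

Copying the whole array on every add is, of course, what a production list avoids by over-allocating capacity; here it keeps each TDD cycle as small as possible.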

8. Conclusion

This tutorial went through a test-driven development process to create part of a custom List implementation. Using TDD, we can implement requirements step by step, while keeping the test coverage at a very high level. Also, the implementation is guaranteed to be testable, since it was created to make the tests pass.

Note that the custom class created in this article is just used for demonstration purposes and should not be adopted in a real-world project.

The complete source code for this tutorial, including the test and implementation methods left out for the sake of brevity, can be found over on GitHub.

Multi-Swarm Optimization Algorithm in Java


1. Introduction

In this article, we’ll take a look at a Multi-swarm optimization algorithm. Like other algorithms of the same class, its purpose is to find the best solution to a problem by maximizing or minimizing a specific function, called a fitness function.

Let’s start with some theory.

2. How Multi-Swarm Optimization Works

The Multi-swarm is a variation of the Swarm algorithm. As the name suggests, the Swarm algorithm solves a problem by simulating the movement of a group of objects in the space of possible solutions. In the multi-swarm version, there are multiple swarms instead of just one.

The basic component of a swarm is called a particle. The particle is defined by its actual position, which is also a possible solution to our problem, and its speed, which is used to calculate the next position.

The speed of the particle constantly changes, leaning towards the best position found among all the particles in all the swarms with a certain degree of randomness to increase the amount of space covered.

This ultimately leads most particles to a finite set of points which are local minima or maxima in the fitness function, depending on whether we’re trying to minimize or maximize it.

Although the point found is always a local minimum or maximum of the function, it’s not necessarily a global one since there’s no guarantee that the algorithm has completely explored the space of solutions.

For this reason, the multi-swarm is said to be a metaheuristic – the solutions it finds are among the best, but they may not be the absolute best.

3. Implementation

Now that we know what a multi-swarm is and how it works, let’s take a look at how to implement it.

For our example, we’ll try to address this real-life optimization problem posted on StackExchange:

In League of Legends, a player’s Effective Health when defending against physical damage is given by E=H(100+A)/100, where H is health and A is armor.

Health costs 2.5 gold per unit, and Armor costs 18 gold per unit. You have 3600 gold, and you need to optimize the effectiveness E of your health and armor to survive as long as possible against the enemy team’s attacks. How much of each should you buy?
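This problem is small enough to sanity-check by brute force over the integer grid, which gives us a known optimum (1080 health, 50 armor, E = 1620) to compare the swarm’s answer against. The exhaustive search below is only a verification aid, not part of the algorithm:

```java
// Exhaustive check over the feasible integer grid: for each armor value,
// buy as much health as the remaining gold allows, and track the best
// effective health. Expected optimum: 1080 health, 50 armor, E = 1620.
public class BruteForceCheck {

    public static void main(String[] args) {
        double bestEffectiveHealth = 0;
        long bestHealth = 0;
        long bestArmor = 0;

        for (long armor = 0; armor * 18 <= 3600; armor++) {
            long health = (long) ((3600 - 18.0 * armor) / 2.5);
            double effectiveHealth = health * (100.0 + armor) / 100.0;
            if (effectiveHealth > bestEffectiveHealth) {
                bestEffectiveHealth = effectiveHealth;
                bestHealth = health;
                bestArmor = armor;
            }
        }

        System.out.println(bestHealth + " health, " + bestArmor
          + " armor, E = " + bestEffectiveHealth);
        // prints: 1080 health, 50 armor, E = 1620.0
    }
}
```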

3.1. Particle

We start off by modeling our base construct, a particle. The state of a particle includes its current position, which is a pair of health and armor values that solve the problem, the speed of the particle on both axes and the particle fitness score.

We’ll also store the best position and fitness score we find since we’ll need them to update the particle speed:

public class Particle {
    private long[] position;
    private long[] speed;
    private double fitness;
    private long[] bestPosition;	
    private double bestFitness = Double.NEGATIVE_INFINITY;

    // constructors and other methods
}

We choose to use long arrays to represent both speed and position because we can deduce from the problem statement that we can’t buy fractions of armor or health, hence the solution must be in the integer domain.

We don’t want to use int because that can cause overflow problems during calculations.
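As a quick illustration of the overflow risk (the numbers here are arbitrary, chosen only to be large enough to overflow a 32-bit int):

```java
public class OverflowDemo {

    public static void main(String[] args) {
        int intProduct = 1_000_000 * 3_000;    // silently wraps around
        long longProduct = 1_000_000L * 3_000; // widened to 64 bits first

        System.out.println(intProduct);   // -1294967296
        System.out.println(longProduct);  // 3000000000
    }
}
```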

3.2. Swarm

Next up, let’s define a swarm as a collection of particles. Once again, we’ll store the historical best position and score for later computation.

The swarm will also need to take care of its particles’ initialization by assigning a random initial position and speed to each one.

We can roughly estimate a boundary for the solution, so we add this limit to the random number generator.

This will reduce the computational power and time needed to run the algorithm:

public class Swarm {
    private Particle[] particles;
    private long[] bestPosition;
    private double bestFitness = Double.NEGATIVE_INFINITY;
    
    public Swarm(int numParticles) {
        particles = new Particle[numParticles];
        for (int i = 0; i < numParticles; i++) {
            long[] initialParticlePosition = { 
              random.nextInt(Constants.PARTICLE_UPPER_BOUND),
              random.nextInt(Constants.PARTICLE_UPPER_BOUND) 
            };
            long[] initialParticleSpeed = { 
              random.nextInt(Constants.PARTICLE_UPPER_BOUND),
              random.nextInt(Constants.PARTICLE_UPPER_BOUND) 
            };
            particles[i] = new Particle(
              initialParticlePosition, initialParticleSpeed);
        }
    }

    // methods omitted
}

3.3. Multiswarm

Finally, let’s conclude our model by creating a Multiswarm class.

Similarly to the swarm, we’ll keep track of a collection of swarms and the best particle position and fitness found among all the swarms.

We’ll also store a reference to the fitness function for later use:

public class Multiswarm {
    private Swarm[] swarms;
    private long[] bestPosition;
    private double bestFitness = Double.NEGATIVE_INFINITY;
    private FitnessFunction fitnessFunction;

    public Multiswarm(
      int numSwarms, int particlesPerSwarm, FitnessFunction fitnessFunction) {
        this.fitnessFunction = fitnessFunction;
        this.swarms = new Swarm[numSwarms];
        for (int i = 0; i < numSwarms; i++) {
            swarms[i] = new Swarm(particlesPerSwarm);
        }
    }

    // methods omitted
}

3.4. Fitness Function

Let’s now implement the fitness function.

To decouple the algorithm logic from this specific problem, we’ll introduce an interface with a single method.

This method takes a particle position as an argument and returns a value indicating how good it is:

public interface FitnessFunction {
    public double getFitness(long[] particlePosition);
}

Provided that the found result is valid according to the problem constraints, measuring the fitness is just a matter of returning the computed effective health, which we want to maximize.

For our problem, we have the following specific validation constraints:

  • solutions must only be positive integers
  • solutions must be feasible with the provided amount of gold

When one of these constraints is violated, we return a negative number that tells how far we are from the validity boundary.

This is either the negative value itself in the former case or the amount of gold we’re short of in the latter:

public class LolFitnessFunction implements FitnessFunction {

    @Override
    public double getFitness(long[] particlePosition) {
        long health = particlePosition[0];
        long armor = particlePosition[1];

        if (health < 0 && armor < 0) {
            return -(health * armor);
        } else if (health < 0) {
            return health;
        } else if (armor < 0) {
            return armor;
        }

        double cost = (health * 2.5) + (armor * 18);
        if (cost > 3600) {
            return 3600 - cost;
        } else {
            long fitness = (health * (100 + armor)) / 100;
            return fitness;
        }
    }
}

3.5. Main Loop

The main program will iterate over all particles in all swarms and do the following:

  • compute the particle fitness
  • if a new best position has been found, update the particle, swarm and multiswarm history
  • compute the new particle position by adding the current speed to each dimension
  • compute the new particle speed

For the moment, we’ll leave the speed updating to the next section by creating a dedicated method:

public void mainLoop() {
    for (Swarm swarm : swarms) {
        for (Particle particle : swarm.getParticles()) {
            long[] particleOldPosition = particle.getPosition().clone();
            particle.setFitness(fitnessFunction.getFitness(particleOldPosition));
       
            if (particle.getFitness() > particle.getBestFitness()) {
                particle.setBestFitness(particle.getFitness());				
                particle.setBestPosition(particleOldPosition);
                if (particle.getFitness() > swarm.getBestFitness()) {						
                    swarm.setBestFitness(particle.getFitness());
                    swarm.setBestPosition(particleOldPosition);
                    if (swarm.getBestFitness() > bestFitness) {
                        bestFitness = swarm.getBestFitness();
                        bestPosition = swarm.getBestPosition().clone();
                    }
                }
            }

            long[] position = particle.getPosition();
            long[] speed = particle.getSpeed();
            position[0] += speed[0];
            position[1] += speed[1];
            speed[0] = getNewParticleSpeedForIndex(particle, swarm, 0);
            speed[1] = getNewParticleSpeedForIndex(particle, swarm, 1);
        }
    }
}

3.6. Speed Update

It’s essential for the particle to change its speed since that’s how it manages to explore different possible solutions.

The speed of the particle will need to make the particle move towards the best position found by itself, by its swarm, and by all the swarms, assigning a certain weight to each of these. We’ll call these weights the cognitive weight, social weight, and global weight, respectively.

To add some variation, we’ll multiply each of these weights with a random number between 0 and 1. We’ll also add an inertia factor to the formula which incentivizes the particle not to slow down too much:

private int getNewParticleSpeedForIndex(
  Particle particle, Swarm swarm, int index) {
 
    return (int) ((Constants.INERTIA_FACTOR * particle.getSpeed()[index])
      + (randomizePercentage(Constants.COGNITIVE_WEIGHT)
      * (particle.getBestPosition()[index] - particle.getPosition()[index]))
      + (randomizePercentage(Constants.SOCIAL_WEIGHT) 
      * (swarm.getBestPosition()[index] - particle.getPosition()[index]))
      + (randomizePercentage(Constants.GLOBAL_WEIGHT) 
      * (bestPosition[index] - particle.getPosition()[index])));
}
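The randomizePercentage() helper used above isn’t shown in the article; a plausible implementation (an assumption on our part) simply scales its argument by a uniform random factor in [0, 1):

```java
private final Random random = new Random(); // needs java.util.Random

// Scale the given weight by a random factor between 0 (inclusive)
// and 1 (exclusive), adding stochastic variation to each term
private double randomizePercentage(double value) {
    return random.nextDouble() * value;
}
```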

Accepted values for inertia, cognitive, social and global weights are 0.729, 1.49445, 1.49445 and 0.3645, respectively.
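Putting those together with the search-space bound used during swarm initialization, a Constants holder might look like the sketch below. The PARTICLE_UPPER_BOUND value is our assumption: 3600 gold buys at most 3600 / 2.5 = 1440 units of health, so any bound comfortably above that works:

```java
public final class Constants {

    // Standard PSO coefficients quoted above
    public static final double INERTIA_FACTOR = 0.729;
    public static final double COGNITIVE_WEIGHT = 1.49445;
    public static final double SOCIAL_WEIGHT = 1.49445;
    public static final double GLOBAL_WEIGHT = 0.3645;

    // Rough bound on initial positions and speeds; 3600 gold buys at
    // most 1440 health, so 10,000 leaves generous headroom (assumed value)
    public static final int PARTICLE_UPPER_BOUND = 10_000;

    private Constants() {
    }
}
```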

4. Conclusion

In this tutorial, we went through the theory and the implementation of a swarm algorithm. We also saw how to design a fitness function according to a specific problem.

If you want to read more about this topic, have a look at this book and this article which were also used as information sources for this article.

As always, all the code of the example is available over on the GitHub project.

Jersey Filters and Interceptors


1. Introduction

In this article, we’re going to explain how filters and interceptors work in the Jersey framework, as well as the main differences between these.

We’ll use Jersey 2 here, and we’ll test our application using a Tomcat 9 server.

2. Application Setup

Let’s first create a simple resource on our server:

@Path("/greetings")
public class Greetings {

    @GET
    public String getHelloGreeting() {
        return "hello";
    }
}

Also, let’s create the corresponding server configuration for our application:

@ApplicationPath("/*")
public class ServerConfig extends ResourceConfig {

    public ServerConfig() {
        packages("com.baeldung.jersey.server");
    }
}

If you want to dig deeper into how to create an API with Jersey, you can check out this article.

You can also have a look at our client-focused article and learn how to create a Java client with Jersey.

3. Filters

Now, let’s get started with filters.

Simply put, filters let us modify the properties of requests and responses – for example, HTTP headers. Filters can be applied on both the server and the client side.

Keep in mind that filters are always executed, regardless of whether the resource was found or not.

3.1. Implementing a Request Server Filter

Let’s start with the filters on the server side and create a request filter.

We’ll do that by implementing the ContainerRequestFilter interface and registering it as a Provider in our server:

@Provider
public class RestrictedOperationsRequestFilter implements ContainerRequestFilter {
    
    @Override
    public void filter(ContainerRequestContext ctx) throws IOException {
        if (ctx.getLanguage() != null && "EN".equals(ctx.getLanguage()
          .getLanguage())) {
 
            ctx.abortWith(Response.status(Response.Status.FORBIDDEN)
              .entity("Cannot access")
              .build());
        }
    }
}

This simple filter just rejects requests with the language “EN” by calling the abortWith() method.

As the example shows, we had to implement only one method that receives the context of the request, which we can modify as we need.

Let’s keep in mind that this filter is executed after the resource has been matched.

In case we want to execute a filter before the resource matching, we can use a pre-matching filter by annotating our filter with the @PreMatching annotation:

@Provider
@PreMatching
public class PrematchingRequestFilter implements ContainerRequestFilter {

    @Override
    public void filter(ContainerRequestContext ctx) throws IOException {
        if (ctx.getMethod().equals("DELETE")) {
            LOG.info("Deleting request");
        }
    }
}

If we try to access our resource now, we can check that our pre-matching filter is executed first:

2018-02-25 16:07:27,800 [http-nio-8080-exec-3] INFO  c.b.j.s.f.PrematchingRequestFilter - prematching filter
2018-02-25 16:07:27,816 [http-nio-8080-exec-3] INFO  c.b.j.s.f.RestrictedOperationsRequestFilter - Restricted operations filter

3.2. Implementing a Response Server Filter

We’ll now implement a response filter on the server side that will merely add a new header to the response.

To do that, our filter has to implement the ContainerResponseFilter interface and implement its only method:

@Provider
public class ResponseServerFilter implements ContainerResponseFilter {

    @Override
    public void filter(ContainerRequestContext requestContext, 
      ContainerResponseContext responseContext) throws IOException {
        responseContext.getHeaders().add("X-Test", "Filter test");
    }
}

Notice that the ContainerRequestContext parameter is read-only here – since we’re already processing the response.

3.3. Implementing a Client Filter

We’ll work now with filters on the client side. These filters work in the same way as server filters, and the interfaces we have to implement are very similar to the ones for the server side.

Let’s see it in action with a filter that adds a property to the request:

@Provider
public class RequestClientFilter implements ClientRequestFilter {

    @Override
    public void filter(ClientRequestContext requestContext) throws IOException {
        requestContext.setProperty("test", "test client request filter");
    }
}

Let’s also create a Jersey client to test this filter:

public class JerseyClient {

    private static String URI_GREETINGS = "http://localhost:8080/jersey/greetings";

    public static String getHelloGreeting() {
        return createClient().target(URI_GREETINGS)
          .request()
          .get(String.class);
    }

    private static Client createClient() {
        ClientConfig config = new ClientConfig();
        config.register(RequestClientFilter.class);

        return ClientBuilder.newClient(config);
    }
}

Notice that we have to add the filter to the client configuration to register it.

Finally, we’ll also create a filter for the response in the client.

This works in a very similar way as the one in the server, but implementing the ClientResponseFilter interface:

@Provider
public class ResponseClientFilter implements ClientResponseFilter {

    @Override
    public void filter(ClientRequestContext requestContext, 
      ClientResponseContext responseContext) throws IOException {
        responseContext.getHeaders()
          .add("X-Test-Client", "Test response client filter");
    }

}

Again, the ClientRequestContext is for read-only purposes.

4. Interceptors

Interceptors are more connected with the marshalling and unmarshalling of the HTTP message bodies contained in requests and responses. They can be used on both the server and the client side.

Keep in mind that they’re executed after the filters and only if a message body is present.

There are two types of interceptors: ReaderInterceptor and WriterInterceptor, and they are the same for both the server and the client side.

Next, we’re going to create another resource on our server – which is accessed via a POST and receives a parameter in the body, so interceptors will be executed when accessing it:

@POST
@Path("/custom")
public Response getCustomGreeting(String name) {
    return Response.status(Status.OK.getStatusCode())
      .build();
}

We’ll also add a new method to our Jersey client – to test this new resource:

public static Response getCustomGreeting() {
    return createClient().target(URI_GREETINGS + "/custom")
      .request()
      .post(Entity.text("custom"));
}

4.1. Implementing a ReaderInterceptor

Reader interceptors allow us to manipulate inbound streams, so we can use them to modify the request on the server side or the response on the client side.

Let’s create an interceptor on the server side to write a custom message in the body of the request intercepted:

@Provider
public class RequestServerReaderInterceptor implements ReaderInterceptor {

    @Override
    public Object aroundReadFrom(ReaderInterceptorContext context) 
      throws IOException, WebApplicationException {
        InputStream is = context.getInputStream();
        String body = new BufferedReader(new InputStreamReader(is)).lines()
          .collect(Collectors.joining("\n"));

        context.setInputStream(new ByteArrayInputStream(
          (body + " message added in server reader interceptor").getBytes()));

        return context.proceed();
    }
}

Notice that we have to call the proceed() method to call the next interceptor in the chain. Once all the interceptors are executed, the appropriate message body reader will be called.

4.2. Implementing a WriterInterceptor

Writer interceptors work in a very similar way to reader interceptors, but they manipulate the outbound streams – so we can use them with the request on the client side or with the response on the server side.

Let’s create a writer interceptor to add a message to the request, on the client side:

@Provider
public class RequestClientWriterInterceptor implements WriterInterceptor {

    @Override
    public void aroundWriteTo(WriterInterceptorContext context) 
      throws IOException, WebApplicationException {
        context.getOutputStream()
          .write(("Message added in the writer interceptor in the client side").getBytes());

        context.proceed();
    }
}

Again, we have to call the method proceed() to call the next interceptor.

When all the interceptors are executed, the appropriate message body writer will be called.

Don’t forget that you have to register this interceptor in the client configuration, as we did before with the client filter:

private static Client createClient() {
    ClientConfig config = new ClientConfig();
    config.register(RequestClientFilter.class);
    config.register(RequestClientWriterInterceptor.class);

    return ClientBuilder.newClient(config);
}

5. Execution Order

Let’s summarize all that we’ve seen so far in a diagram that shows when the filters and interceptors are executed during a request from a client to a server:

As we can see, the filters are always executed first, and the interceptors are executed right before calling the appropriate message body reader or writer.

If we take a look at the filters and interceptors that we’ve created, they will be executed in the following order:

  1. RequestClientFilter
  2. RequestClientWriterInterceptor
  3. PrematchingRequestFilter
  4. RestrictedOperationsRequestFilter
  5. RequestServerReaderInterceptor
  6. ResponseServerFilter
  7. ResponseClientFilter

Furthermore, when we have several filters or interceptors, we can specify the exact executing order by annotating them with the @Priority annotation.

The priority is specified with an Integer and sorts the filters and interceptors in ascending order for the requests and in descending order for the responses.

Let’s add a priority to our RestrictedOperationsRequestFilter:

@Provider
@Priority(Priorities.AUTHORIZATION)
public class RestrictedOperationsRequestFilter implements ContainerRequestFilter {
    // ...
}

Notice that we’ve used a predefined priority for authorization purposes.

6. Name Binding

The filters and interceptors that we’ve seen so far are called global because they’re executed for every request and response.

However, they can also be defined to be executed only for specific resource methods, which is called name binding.

6.1. Static Binding

One way to do the name binding is statically by creating a particular annotation that will be used in the desired resource. This annotation has to include the @NameBinding meta-annotation.

Let’s create one in our application:

@NameBinding
@Retention(RetentionPolicy.RUNTIME)
public @interface HelloBinding {
}

After that, we can annotate some resources with this @HelloBinding annotation:

@GET
@HelloBinding
public String getHelloGreeting() {
    return "hello";
}

Finally, we’re going to annotate one of our filters with this annotation too, so this filter will be executed only for requests and responses that are accessing the getHelloGreeting() method:

@Provider
@Priority(Priorities.AUTHORIZATION)
@HelloBinding
public class RestrictedOperationsRequestFilter implements ContainerRequestFilter {
    // ...
}

Keep in mind that our RestrictedOperationsRequestFilter won’t be triggered for the rest of the resources anymore.

6.2. Dynamic Binding

Another way to do this is by using a dynamic binding, which is loaded in the configuration during startup.

Let’s first add another resource to our server for this section:

@GET
@Path("/hi")
public String getHiGreeting() {
    return "hi";
}

Now, let’s create a binding for this resource by implementing the DynamicFeature interface:

@Provider
public class HelloDynamicBinding implements DynamicFeature {

    @Override
    public void configure(ResourceInfo resourceInfo, FeatureContext context) {
        if (Greetings.class.equals(resourceInfo.getResourceClass()) 
          && resourceInfo.getResourceMethod().getName().contains("HiGreeting")) {
            context.register(ResponseServerFilter.class);
        }
    }
}

In this case, we’re associating the getHiGreeting() method to the ResponseServerFilter that we had created before.

It’s important to remember that we had to delete the @Provider annotation from this filter since we’re now configuring it via DynamicFeature.

If we don’t do this, the filter will be executed twice: one time as a global filter and another time as a filter bound to the getHiGreeting() method.

7. Conclusion

In this tutorial, we focused on understanding how filters and interceptors work in Jersey 2 and how we can use them in a web application.

As always, the full source code for the examples is available over on GitHub.

 

Security In Spring Integration


1. Introduction

In this article, we’ll focus on how we can use Spring Integration and Spring Security together in an integration flow.

Therefore, we’ll set up a simple secured message flow to demonstrate the use of Spring Security in Spring Integration. We’ll also provide an example of SecurityContext propagation in multithreaded message channels.

For more details of using the framework, you can refer to our introduction to Spring Integration.

2. Spring Integration Configuration

2.1. Dependencies

Firstly, we need to add the Spring Integration dependencies to our project.

Since we’ll set up simple message flows with DirectChannel, PublishSubscribeChannel, and ServiceActivator, we need the spring-integration-core dependency.

We also need the spring-integration-security dependency to be able to use Spring Security in Spring Integration:

<dependency>
    <groupId>org.springframework.integration</groupId>
    <artifactId>spring-integration-security</artifactId>
    <version>5.0.3.RELEASE</version>
</dependency>

We’re also using Spring Security, so we’ll add spring-security-config to our project:

<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-config</artifactId>
    <version>5.0.3.RELEASE</version>
</dependency>

We can check out the latest version of the above dependencies on Maven Central: spring-integration-security, spring-security-config.

2.2. Java-Based Configuration

Our example will use basic Spring Integration components. Thus, we only need to enable Spring Integration in our project by using the @EnableIntegration annotation:

@Configuration
@EnableIntegration
public class SecuredDirectChannel {
    //...
}

3. Secured Message Channel

First of all, we need an instance of ChannelSecurityInterceptor which will intercept all send and receive calls on a channel and decide if that call can be executed or denied:

@Autowired
@Bean
public ChannelSecurityInterceptor channelSecurityInterceptor(
  AuthenticationManager authenticationManager, 
  AccessDecisionManager customAccessDecisionManager) {

    ChannelSecurityInterceptor 
      channelSecurityInterceptor = new ChannelSecurityInterceptor();

    channelSecurityInterceptor
      .setAuthenticationManager(authenticationManager);

    channelSecurityInterceptor
      .setAccessDecisionManager(customAccessDecisionManager);

    return channelSecurityInterceptor;
}

The AuthenticationManager and AccessDecisionManager beans are defined as:

@Configuration
@EnableGlobalMethodSecurity(prePostEnabled = true)
public class SecurityConfig extends GlobalMethodSecurityConfiguration {

    @Override
    @Bean
    public AuthenticationManager 
      authenticationManager() throws Exception {
        return super.authenticationManager();
    }

    @Bean
    public AccessDecisionManager customAccessDecisionManager() {
        List<AccessDecisionVoter<? extends Object>> 
          decisionVoters = new ArrayList<>();
        decisionVoters.add(new RoleVoter());
        decisionVoters.add(new UsernameAccessDecisionVoter());
        AccessDecisionManager accessDecisionManager
          = new AffirmativeBased(decisionVoters);
        return accessDecisionManager;
    }
}

Here, we use two AccessDecisionVoter implementations: RoleVoter and a custom UsernameAccessDecisionVoter.

Now, we can use that ChannelSecurityInterceptor to secure our channels. All we need to do is decorate the channels with the @SecuredChannel annotation:

@Bean(name = "startDirectChannel")
@SecuredChannel(
  interceptor = "channelSecurityInterceptor", 
  sendAccess = { "ROLE_VIEWER","jane" })
public DirectChannel startDirectChannel() {
    return new DirectChannel();
}

@Bean(name = "endDirectChannel")
@SecuredChannel(
  interceptor = "channelSecurityInterceptor", 
  sendAccess = {"ROLE_EDITOR"})
public DirectChannel endDirectChannel() {
    return new DirectChannel();
}

The @SecuredChannel annotation accepts three properties:

  • The interceptor property: refers to a ChannelSecurityInterceptor bean.
  • The sendAccess and receiveAccess properties: contains the policy for invoking send or receive action on a channel.

In the example above, we expect that only users who have ROLE_VIEWER or have the username jane can send a message from the startDirectChannel.

Also, only users who have ROLE_EDITOR can send a message to the endDirectChannel.

We achieve this with the support of our custom AccessDecisionManager: if either RoleVoter or UsernameAccessDecisionVoter returns an affirmative response, access is granted.
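The custom UsernameAccessDecisionVoter isn’t shown here. Stripped of the Spring Security AccessDecisionVoter interface, its core decision presumably reduces to matching the authenticated name against the configured attributes – the sketch below is our assumption, not the actual implementation:

```java
import java.util.List;

// Hypothetical core of a username-based voter: grant access when the
// authenticated user's name matches a configured attribute (such as the
// "jane" entry in sendAccess), abstain otherwise so RoleVoter can decide
public class UsernameVote {

    public static final int ACCESS_GRANTED = 1;
    public static final int ACCESS_ABSTAIN = 0;

    public static int vote(String authenticatedName, List<String> attributes) {
        return attributes.contains(authenticatedName)
          ? ACCESS_GRANTED : ACCESS_ABSTAIN;
    }
}
```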

4. Secured ServiceActivator

It’s worth mentioning that we can also secure our ServiceActivator with Spring Method Security. Therefore, we need to enable method security annotations:

@Configuration
@EnableGlobalMethodSecurity(prePostEnabled = true)
public class SecurityConfig extends GlobalMethodSecurityConfiguration {
    //....
}

For simplicity, in this article, we’ll only use Spring pre and post annotations, so we’ll add the @EnableGlobalMethodSecurity annotation to our configuration class and set prePostEnabled to true.

Now we can secure our ServiceActivator with the @PreAuthorize annotation:

@ServiceActivator(
  inputChannel = "startDirectChannel", 
  outputChannel = "endDirectChannel")
@PreAuthorize("hasRole('ROLE_LOGGER')")
public Message<?> logMessage(Message<?> message) {
    Logger.getAnonymousLogger().info(message.toString());
    return message;
}

The ServiceActivator here receives the message from startDirectChannel and outputs the message to endDirectChannel.

Besides, the method is accessible only if the current Authentication principal has role ROLE_LOGGER.

5. Security Context Propagation

The Spring SecurityContext is thread-bound by default. This means the SecurityContext won’t be propagated to a child thread.

For all the above examples, we used DirectChannel and ServiceActivator, which both run in a single thread; thus, the SecurityContext is available throughout the flow.

However, when using QueueChannel, ExecutorChannel, and PublishSubscribeChannel with an Executor, messages will be transferred from one thread to other threads. In this case, we need to propagate the SecurityContext to all threads receiving the messages.
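The thread-bound behavior comes from SecurityContextHolder storing the context in a ThreadLocal (its default MODE_THREADLOCAL strategy). A plain-Java demonstration of the same effect:

```java
import java.util.concurrent.atomic.AtomicReference;

public class ThreadLocalDemo {

    private static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        CONTEXT.set("jane");

        AtomicReference<String> seenByChild = new AtomicReference<>();
        Thread child = new Thread(() -> seenByChild.set(CONTEXT.get()));
        child.start();
        child.join();

        System.out.println(CONTEXT.get());     // jane - visible in this thread
        System.out.println(seenByChild.get()); // null - not propagated
    }
}
```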

Let’s create another message flow which starts with a PublishSubscribeChannel, with two ServiceActivators subscribed to that channel:

@Bean(name = "startPSChannel")
@SecuredChannel(
  interceptor = "channelSecurityInterceptor", 
  sendAccess = "ROLE_VIEWER")
public PublishSubscribeChannel startChannel() {
    return new PublishSubscribeChannel(executor());
}

@ServiceActivator(
  inputChannel = "startPSChannel", 
  outputChannel = "finalPSResult")
@PreAuthorize("hasRole('ROLE_LOGGER')")
public Message<?> changeMessageToRole(Message<?> message) {
    return buildNewMessage(getRoles(), message);
}

@ServiceActivator(
  inputChannel = "startPSChannel", 
  outputChannel = "finalPSResult")
@PreAuthorize("hasRole('ROLE_VIEWER')")
public Message<?> changeMessageToUserName(Message<?> message) {
    return buildNewMessage(getUsername(), message);
}

In the example above, we have two ServiceActivators subscribed to the startPSChannel. The channel requires an Authentication principal with the role ROLE_VIEWER to be able to send a message to it.

Likewise, we can invoke the changeMessageToRole service only if the Authentication principal has the ROLE_LOGGER role.

Also, the changeMessageToUserName service can only be invoked if the Authentication principal has the role ROLE_VIEWER.

Meanwhile, the startPSChannel will run with the support of a ThreadPoolTaskExecutor:

@Bean
public ThreadPoolTaskExecutor executor() {
    ThreadPoolTaskExecutor pool = new ThreadPoolTaskExecutor();
    pool.setCorePoolSize(10);
    pool.setMaxPoolSize(10);
    pool.setWaitForTasksToCompleteOnShutdown(true);
    return pool;
}

Consequently, the two ServiceActivators will run in two different threads. To propagate the SecurityContext to those threads, we need to add a SecurityContextPropagationChannelInterceptor to our message channel:

@Bean
@GlobalChannelInterceptor(patterns = { "startPSChannel" })
public ChannelInterceptor securityContextPropagationInterceptor() {
    return new SecurityContextPropagationChannelInterceptor();
}

Notice how we decorated the SecurityContextPropagationChannelInterceptor with the @GlobalChannelInterceptor annotation. We also added our startPSChannel to its patterns property.

Therefore, the above configuration states that the SecurityContext from the current thread will be propagated to any thread derived from startPSChannel.

6. Testing

Let’s start verifying our message flows using some JUnit tests.

6.1. Dependency

We, of course, need the spring-security-test dependency at this point:

<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-test</artifactId>
    <version>5.0.3.RELEASE</version>
    <scope>test</scope>
</dependency>

Likewise, the latest version can be checked out from Maven Central: spring-security-test.

6.2. Test Secured Channel

Firstly, we try to send a message to our startDirectChannel:

@Test(expected = AuthenticationCredentialsNotFoundException.class)
public void 
  givenNoUser_whenSendToDirectChannel_thenCredentialNotFound() {

    startDirectChannel
      .send(new GenericMessage<String>(DIRECT_CHANNEL_MESSAGE));
}

Since the channel is secured, we expect an AuthenticationCredentialsNotFoundException exception when sending the message without providing an authentication object.

Next, we provide a user who has role ROLE_VIEWER, and sends a message to our startDirectChannel:

@Test
@WithMockUser(roles = { "VIEWER" })
public void 
  givenRoleViewer_whenSendToDirectChannel_thenAccessDenied() {
    expectedException.expectCause
      (IsInstanceOf.<Throwable> instanceOf(AccessDeniedException.class));

    startDirectChannel
      .send(new GenericMessage<String>(DIRECT_CHANNEL_MESSAGE));
 }

Now, even though our user can send the message to startDirectChannel because he has the role ROLE_VIEWER, he cannot invoke the logMessage service, which requires a user with the role ROLE_LOGGER.

In this case, a MessageHandlingException whose cause is an AccessDeniedException will be thrown. Hence, we use an instance of the ExpectedException rule to verify the cause of the exception.

Next, we provide a user with username jane and two roles: ROLE_LOGGER and ROLE_EDITOR.

Then try to send a message to startDirectChannel again:

@Test
@WithMockUser(username = "jane", roles = { "LOGGER", "EDITOR" })
public void 
  givenJaneLoggerEditor_whenSendToDirectChannel_thenFlowCompleted() {
    startDirectChannel
      .send(new GenericMessage<String>(DIRECT_CHANNEL_MESSAGE));
    assertEquals
      (DIRECT_CHANNEL_MESSAGE, messageConsumer.getMessageContent());
}

The message travels successfully through our flow, starting at startDirectChannel, through the logMessage activator, and on to endDirectChannel. That’s because the provided authentication object has all the authorities required to access those components.

6.3. Test SecurityContext Propagation

Before declaring the test case, we can review the whole flow of our example with the PublishSubscribeChannel:

  • The flow starts with a startPSChannel which has the policy sendAccess = “ROLE_VIEWER”
  • Two ServiceActivators subscribe to that channel: one with the security annotation @PreAuthorize(“hasRole(‘ROLE_LOGGER’)”) and one with the security annotation @PreAuthorize(“hasRole(‘ROLE_VIEWER’)”)

So, first, we provide a user with the ROLE_VIEWER role and try to send a message to our channel:

@Test
@WithMockUser(username = "user", roles = { "VIEWER" })
public void 
  givenRoleUser_whenSendMessageToPSChannel_thenNoMessageArrived() 
  throws IllegalStateException, InterruptedException {
 
    startPSChannel
      .send(new GenericMessage<String>(DIRECT_CHANNEL_MESSAGE));

    executor
      .getThreadPoolExecutor()
      .awaitTermination(2, TimeUnit.SECONDS);

    assertEquals(1, messageConsumer.getMessagePSContent().size());
    assertTrue(
      messageConsumer
      .getMessagePSContent().values().contains("user"));
}

Since our user only has the ROLE_VIEWER role, the message can only pass through startPSChannel and one ServiceActivator.

Hence, at the end of the flow, we only receive one message.

Let’s provide a user with both roles ROLE_VIEWER and ROLE_LOGGER:

@Test
@WithMockUser(username = "user", roles = { "LOGGER", "VIEWER" })
public void 
  givenRoleUserAndLogger_whenSendMessageToPSChannel_then2GetMessages() 
  throws IllegalStateException, InterruptedException {
    startPSChannel
      .send(new GenericMessage<String>(DIRECT_CHANNEL_MESSAGE));

    executor
      .getThreadPoolExecutor()
      .awaitTermination(2, TimeUnit.SECONDS);

    assertEquals(2, messageConsumer.getMessagePSContent().size());
    assertTrue
      (messageConsumer
      .getMessagePSContent()
      .values().contains("user"));
    assertTrue
      (messageConsumer
      .getMessagePSContent()
      .values().contains("ROLE_LOGGER,ROLE_VIEWER"));
}

Now, we receive both messages at the end of our flow because the user has all the required authorities.

7. Conclusion

In this tutorial, we’ve explored the possibility of using Spring Security with Spring Integration to secure message channels and ServiceActivators.

As always, we can find all the examples over on GitHub.

ASCII Art in Java


1. Overview

In this article, we’ll discuss creating a graphical print of ASCII characters or Strings in Java, using concepts from the 2D graphics support of the language.

2. Drawing Strings with 2D Graphics

With the help of the Graphics2D class, it’s possible to draw a String as an image, which we achieve by invoking the drawString() method.

Because Graphics2D is abstract, we can create an instance by extending it and implementing the various methods associated with the Graphics class.

While that would be a tedious task, in practice it’s often done by creating a BufferedImage instance in Java and retrieving its underlying Graphics instance from it:

BufferedImage bufferedImage = new BufferedImage(
  width, height, 
  BufferedImage.TYPE_INT_RGB);
Graphics graphics = bufferedImage.getGraphics();

2.1. Replacing Image Matrix Indices With ASCII Character

When drawing Strings, the Graphics2D class uses a simple matrix-like technique: regions that carve out the designated String are assigned a particular value, while the others are given a zero value.

To be able to replace the carved area with the desired ASCII character, we need to detect the values of the carved region as a single data point (e.g. an integer), not as separate RGB color values.

To have the image’s RGB color represented as an integer, we set the image type to integer mode:

BufferedImage bufferedImage = new BufferedImage(
  width, height, 
  BufferedImage.TYPE_INT_RGB);

The fundamental idea is to replace the values assigned to the non-zero indices of the image matrix with the desired artistic character, while the indices holding the zero value get a single space character. The zero equivalent in integer mode is -16777216 (the integer form of opaque black, 0xFF000000).
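To see where that number comes from, we can check a freshly created integer-mode image: every pixel’s data word starts at zero, and getRGB() ORs in the opaque alpha byte, reporting -16777216. A quick standalone check (the class name here is our own):

```java
import java.awt.image.BufferedImage;

public class RgbZeroCheck {
    public static void main(String[] args) {
        // A fresh TYPE_INT_RGB image has every pixel's data word set to 0;
        // getRGB() adds the opaque alpha channel (0xFF000000), i.e. -16777216.
        BufferedImage image = new BufferedImage(4, 4, BufferedImage.TYPE_INT_RGB);
        int rgb = image.getRGB(0, 0);
        System.out.println(rgb); // -16777216
    }
}
```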

3. ASCII Art Generator

Let’s consider a case where we need to make an ASCII art of the “BAELDUNG” string.

We begin by creating an empty image with the desired width/height and the image type set to integer mode, as mentioned in Section 2.1.

To be able to use advanced rendering options of 2D graphics in Java, we cast our Graphics object to a Graphics2D instance. We then set the desired rendering parameters before invoking the drawString() method with the “BAELDUNG” String:

Graphics2D graphics2D = (Graphics2D) graphics;
graphics2D.setRenderingHint(RenderingHints.KEY_TEXT_ANTIALIASING, 
  RenderingHints.VALUE_TEXT_ANTIALIAS_ON);
graphics2D.drawString("BAELDUNG", 12, 24);

In the above, 12 and 24 represent, respectively, the x and y coordinates of the point on the image where the text printing starts.
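Note that the y coordinate passed to drawString() is the text baseline, not the top edge. If we’d rather not hard-code it, we can derive it from the FontMetrics of the Graphics2D instance, so the text’s ascent fits inside the image. A small sketch (the image dimensions here are arbitrary):

```java
import java.awt.FontMetrics;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class BaselineDemo {
    public static void main(String[] args) {
        BufferedImage image = new BufferedImage(144, 32, BufferedImage.TYPE_INT_RGB);
        Graphics2D graphics2D = (Graphics2D) image.getGraphics();

        // drawString()'s y coordinate is the baseline; the font's ascent
        // tells us how far below the top edge the baseline sits.
        FontMetrics metrics = graphics2D.getFontMetrics();
        int baselineY = metrics.getAscent();
        graphics2D.drawString("BAELDUNG", 12, baselineY);
        graphics2D.dispose();

        System.out.println("baseline y = " + baselineY);
    }
}
```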

Now we have a 2D graphic whose underlying matrix contains two types of discriminated values: non-zero and zero indices.

But to get a feel for the concept, let’s first go through the 2-dimensional array (or matrix) and replace all the values with the ASCII character “*”:

for (int y = 0; y < height; y++) {
    StringBuilder stringBuilder = new StringBuilder();

    for (int x = 0; x < width; x++) {
        stringBuilder.append("*");
    }

    if (stringBuilder.toString().trim().isEmpty()) {
        continue;
    }

    System.out.println(stringBuilder);
}

The output of the above is just a block of asterisks (*), since every index is replaced with “*” regardless of its value.

If, instead, we discriminate the replacement, substituting “*” only for the integer values equal to -16777216 and ” ” for the rest:

for (int y = 0; y < height; y++) {
    StringBuilder stringBuilder = new StringBuilder();

    for (int x = 0; x < width; x++) {
        stringBuilder.append(bufferedImage.getRGB(x, y) == -16777216 ? "*" : " ");
    }

    if (stringBuilder.toString().trim().isEmpty()) {
        continue;
    }

    System.out.println(stringBuilder);
}

We obtain a different ASCII art which corresponds to our String “BAELDUNG”, but as an inverted carving: the letters appear as blank space inside a block of asterisks.

Finally, we invert the discrimination by replacing the integer values equal to -16777216 with ” ” and the rest with “*”:

for (int y = 0; y < height; y++) {
    StringBuilder stringBuilder = new StringBuilder();

    for (int x = 0; x < width; x++) {
        stringBuilder.append(bufferedImage.getRGB(x, y) == -16777216 ? " " : "*");
    }

    if (stringBuilder.toString().trim().isEmpty()) {
        continue;
    }

    System.out.println(stringBuilder);
}

This gives us the ASCII art of the desired String, with the letters rendered as asterisks.

 
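Putting all the steps together, here is a self-contained sketch of the whole pipeline; the class name, font choice, and dimensions below are our own assumptions, not the original utility’s:

```java
import java.awt.Font;
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

public class AsciiArt {

    // Renders the text into an integer-mode image, then maps untouched
    // (black, -16777216) pixels to spaces and drawn pixels to '*'.
    public static String toAscii(String text, int width, int height) {
        BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        Graphics2D graphics2D = (Graphics2D) image.getGraphics();
        graphics2D.setFont(new Font(Font.SANS_SERIF, Font.BOLD, 18));
        graphics2D.setRenderingHint(RenderingHints.KEY_TEXT_ANTIALIASING,
          RenderingHints.VALUE_TEXT_ANTIALIAS_ON);
        graphics2D.drawString(text, 6, height - 8);
        graphics2D.dispose();

        StringBuilder art = new StringBuilder();
        for (int y = 0; y < height; y++) {
            StringBuilder row = new StringBuilder();
            for (int x = 0; x < width; x++) {
                row.append(image.getRGB(x, y) == -16777216 ? " " : "*");
            }
            // Skip rows that contain no drawn pixels at all
            if (!row.toString().trim().isEmpty()) {
                art.append(row).append(System.lineSeparator());
            }
        }
        return art.toString();
    }

    public static void main(String[] args) {
        System.out.print(toAscii("HI", 60, 30));
    }
}
```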

4. Conclusion

In this quick tutorial, we had a look at how to create ASCII art in Java using the built-in 2D graphics library.

While we’ve shown the output specifically for the text “BAELDUNG”, the source code on GitHub provides a utility function that accepts any String.

Source code, as always, can be found over on GitHub.
