
A Quick Look at R2DBC


1. Introduction

R2DBC (Reactive Relational Database Connectivity) is an effort presented by Pivotal during Spring One Platform 2018. It intends to create a reactive API to SQL databases.

In other words, this effort creates a fully reactive, backpressure-aware alternative to the blocking JDBC API.

2. Our First R2DBC Project

To begin with, the R2DBC project is very recent. At this moment, only Postgres, MSSQL, and H2 have R2DBC drivers. Further, we can't use all Spring Boot functionality with it, so there are some steps we'll need to perform manually. But, we can leverage projects like Spring Data to help us.

We’ll create a Maven project first. At this point, there are a few dependency issues with R2DBC, so our pom.xml will be bigger than it’d normally be.

For the scope of this article, we’ll use H2 as our database and we’ll create reactive CRUD functions for our application.

Let’s open the pom.xml of the generated project and add the appropriate dependencies as well as some early-release Spring repositories:

<dependencies>
     <dependency>
        <groupId>org.springframework.data</groupId>
        <artifactId>spring-data-r2dbc</artifactId>
        <version>1.0.0.M2</version>
    </dependency>
    <dependency>
        <groupId>io.r2dbc</groupId>
        <artifactId>r2dbc-h2</artifactId>
        <version>0.8.0.M8</version>
    </dependency>
    <dependency>
        <groupId>com.h2database</groupId>
        <artifactId>h2</artifactId>
        <version>1.4.199</version>
    </dependency>
</dependencies>

<repositories>
    <repository>
        <id>spring-snapshots</id>
        <name>Spring Snapshots</name>
        <url>https://repo.spring.io/snapshot</url>
        <snapshots>
            <enabled>true</enabled>
        </snapshots>
    </repository>
    <repository>
        <id>spring-milestones</id>
        <name>Spring Milestones</name>
        <url>https://repo.spring.io/milestone</url>
    </repository>
</repositories>

We need these repositories in order to use the milestone dependencies.

Other required artifacts include Lombok, Spring WebFlux and a few others that complete our project dependencies.
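
For reference, the Lombok and WebFlux coordinates look like this; the versions here are illustrative and may differ in the project:

<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <version>1.18.8</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-webflux</artifactId>
    <version>2.1.6.RELEASE</version>
</dependency>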

3. Connection Factory

When working with a database, we need a connection factory. So, of course, we’ll need the same thing with R2DBC.

So we’ll now add the details to connect to our instance:

@Configuration
@EnableR2dbcRepositories
class R2DBCConfiguration extends AbstractR2dbcConfiguration {
    @Bean
    public H2ConnectionFactory connectionFactory() {
        return new H2ConnectionFactory(
            H2ConnectionConfiguration.builder()
              .url("mem:testdb;DB_CLOSE_DELAY=-1;")
              .username("sa")
              .build()
        );
    }
}

The first thing we notice in the code above is the @EnableR2dbcRepositories annotation. We need it to use Spring Data functionality. In addition, we're extending AbstractR2dbcConfiguration since it provides a lot of beans that we'll need later on.

4. Our First R2DBC Application

Our next step is to create the repository:

interface PlayerRepository extends ReactiveCrudRepository<Player, Integer> {}

The ReactiveCrudRepository interface is very useful, giving us basic CRUD functionality with fully reactive signatures.
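
As a quick illustration, the inherited operations return Reactor types rather than plain values:

Mono<Player> player = playerRepository.findById(1);
Flux<Player> players = playerRepository.findAll();
Mono<Void> deleted = playerRepository.deleteAll();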

Finally, we’ll define our model class. We’ll use Lombok to avoid boilerplate code:

@Data
@NoArgsConstructor
@AllArgsConstructor
class Player {
    @Id
    Integer id;
    String name;
    Integer age;
}

5. Testing

It’s time to test our code. So, let’s start by creating a few test cases:

@Test
public void whenDeleteAll_then0IsExpected() {
    playerRepository.deleteAll()
      .as(StepVerifier::create)
      .expectNextCount(0)
      .verifyComplete();
}

@Test
public void whenInsert6_then6AreExpected() {
    insertPlayers();
    playerRepository.findAll()
      .as(StepVerifier::create)
      .expectNextCount(6)
      .verifyComplete();
}

6. Custom Queries

We can also write custom queries. To add them, we'll need to change our PlayerRepository:

@Query("select id, name, age from player where name = $1")
Flux<Player> findAllByName(String name);

@Query("select * from player where age = $1")
Flux<Player> findByAge(int age);

In addition to the existing tests, we’ll add tests to our recently updated repository:

@Test
public void whenSearchForCR7_then1IsExpected() {
    insertPlayers();
    playerRepository.findAllByName("CR7")
      .as(StepVerifier::create)
      .expectNextCount(1)
      .verifyComplete();
}

@Test
public void whenSearchFor32YearsOld_then2AreExpected() {
    insertPlayers();
    playerRepository.findByAge(32)
      .as(StepVerifier::create)
      .expectNextCount(2)
      .verifyComplete();
}

private void insertPlayers() {
    List<Player> players = Arrays.asList(
        new Player(1, "Kaka", 37),
        new Player(2, "Messi", 32),
        new Player(3, "Mbappé", 20),
        new Player(4, "CR7", 34),
        new Player(5, "Lewandowski", 30),
        new Player(6, "Cavani", 32)
    );
    playerRepository.saveAll(players).subscribe();
}

7. Batches

Another feature of R2DBC is batches. A batch is useful for executing multiple SQL statements, as they'll perform better than individual operations.

To create a Batch we need a Connection object:

Batch batch = connection.createBatch();

After our application creates the Batch instance, we can add as many SQL statements as we want. To execute it, we’ll invoke the execute() method. The result of a batch is a Publisher that’ll return a result object for each statement.

So let’s jump into the code and see how we can create a Batch:

@Test
public void whenBatchHas2Operations_then2AreExpected() {
    Mono.from(factory.create())
      .flatMapMany(connection -> Flux.from(connection
        .createBatch()
        .add("select * from player")
        .add("select * from player")
        .execute()))
      .as(StepVerifier::create)
      .expectNextCount(2)
      .verifyComplete();
}

8. Conclusion

To summarize, R2DBC is still in an early stage. It’s an attempt to create an SPI that will define a reactive API to SQL databases. When used with Spring WebFlux, R2DBC allows us to write an application that handles data asynchronously from the top and all the way down to the database.

As always, the code is available over on GitHub.


The Proxy Pattern in Java


1. Overview

The Proxy pattern allows us to create an intermediary that acts as an interface to another resource, while also hiding the underlying complexity of the component.

2. Proxy Pattern Example

Consider a heavy Java object (like a JDBC connection or a SessionFactory) that requires some initial configuration.

We only want such objects to be initialized on demand, and once they are, we'd want to reuse them for all calls.

Let’s now create a simple interface and the configuration for this object:

public interface ExpensiveObject {
    void process();
}

And the implementation of this interface with a large initial configuration:

public class ExpensiveObjectImpl implements ExpensiveObject {

    private static final Logger LOG = LoggerFactory.getLogger(ExpensiveObjectImpl.class);

    public ExpensiveObjectImpl() {
        heavyInitialConfiguration();
    }

    @Override
    public void process() {
        LOG.info("processing complete.");
    }

    private void heavyInitialConfiguration() {
        LOG.info("Loading initial configuration...");
    }
}

We’ll now utilize the Proxy pattern and initialize our object on demand:

public class ExpensiveObjectProxy implements ExpensiveObject {
    private static ExpensiveObject object;

    @Override
    public void process() {
        // create the expensive object lazily, on first use only
        if (object == null) {
            object = new ExpensiveObjectImpl();
        }
        object.process();
    }
}

Whenever our client calls the process() method, they’ll just get to see the processing and the initial configuration will always remain hidden:

public static void main(String[] args) {
    ExpensiveObject object = new ExpensiveObjectProxy();
    object.process();
    object.process();
}

Note that we’re calling the process() method twice. Behind the scenes, the configuration step will occur only once – when the object is first initialized.

For every other subsequent call, this pattern will skip the initial configuration, and only processing will occur:

Loading initial configuration...
processing complete.
processing complete.

3. When to Use Proxy

Understanding how to use a pattern is important.

Understanding when to use it is critical.

Let’s talk about when to use the Proxy pattern:

  • When we want a simplified version of a complex or heavy object. In this case, we may represent it with a skeleton object that loads the original object on demand, also called lazy initialization. This is known as the Virtual Proxy
  • When the original object is present in a different address space, and we want to represent it locally. We can create a proxy that does all the necessary boilerplate like creating and maintaining the connection, encoding, decoding, etc., while the client accesses it as if it were present in its local address space. This is called the Remote Proxy
  • When we want to add a layer of security around the original underlying object to provide controlled access based on the access rights of the client. This is called the Protection Proxy (sketched below)
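
To make the last case concrete, here's a minimal sketch of a Protection Proxy reusing our ExpensiveObject interface; the role check is hypothetical:

public class SecuredExpensiveObject implements ExpensiveObject {
    private final ExpensiveObject object = new ExpensiveObjectImpl();
    private final String role;

    public SecuredExpensiveObject(String role) {
        this.role = role;
    }

    @Override
    public void process() {
        // only clients with the right access level reach the real object
        if (!"ADMIN".equals(role)) {
            throw new SecurityException("Access denied for role: " + role);
        }
        object.process();
    }
}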

4. Conclusion

In this article, we had a look at the proxy design pattern. This is a good choice in the following cases:

  • When we want to have a simplified version of an object or access the object more securely
  • When we want a local version of a remote object

The full source code for this example is available over on GitHub.

Java Weekly, Issue 290


Here we go…

1. Spring and Java

>> Java InfoQ Trends Report – July 2019 [infoq.com]

An overview of current trends in the adoptions of technologies within the Java ecosystem, according to InfoQ.

>> Evolving Java With --enable-preview aka Preview Features [blog.codefx.org]

A beginner’s guide to enabling experimental features in early-access builds.

>> Exercises in Programming Style and the Event Bus [blog.frankel.ch]

And a quick comparison of the point-to-point Observer pattern and the publish-subscribe model of the Event Bus.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

>> Re-Architecting the Video Gatekeeper [medium.com]

A good write-up on Netflix’s use of its Hollow technology — a total high-density near cache — to eliminate I/O bottlenecks in the publishing workflow.

>> Consistency is key…But should it be? [blog.scottlogic.com]

And while consistency in a codebase is generally a good thing, it’s not a justification for repeating bad coding practices.

Also worth reading:

3. Comics

>> Unforeseen Risks [dilbert.com]

>> Finding a Scapegoat [dilbert.com]

>> Wally Has an Idea [dilbert.com]

4. Pick of the Week

I haven’t picked a podcast in over a year, I think. And, moving past the silly name, it’s hard to find a better one to recommend:

>> The Tropical MBA Podcast [tropicalmba.com]

If you’re into podcasts, this is a good one to have in your app. If you aren’t, maybe start with this one.

JUnit 5 TestWatcher API


1. Overview

When unit testing, we may occasionally wish to process the results of our test method executions. In this quick tutorial, we’ll take a look at how we can accomplish this using the TestWatcher API provided by JUnit.

For an in-depth guide to testing with JUnit, check out our excellent Guide to JUnit 5.

2. The TestWatcher API

In short, the TestWatcher interface defines the API for extensions that wish to process test results. One way to think of this API is as a set of hooks for getting the status of an individual test case.

But, before we dive into some real examples, let’s take a step back and briefly summarize the methods in the TestWatcher interface:

  • testAborted(ExtensionContext context, Throwable cause)

    To process the results of an aborted test, we can override the testAborted method. As the name suggests, this method is invoked after a test has been aborted.

  • testDisabled(ExtensionContext context, Optional<String> reason)

    We can override the testDisabled method when we want to handle the results of a disabled test method. This method may also include the reason the test is disabled.

  • testFailed(ExtensionContext context, Throwable cause)

    If we want to do some additional processing after a test failure, we can simply implement the functionality in the testFailed method. This method may include the cause of the test failure.

  • testSuccessful(ExtensionContext context)

    Last but not least, when we wish to process the results of a successful test, we simply override the testSuccessful method.

We should note that all of these methods receive an ExtensionContext, which encapsulates the context in which the current test executed.

3. Maven Dependencies

First of all, let’s add the project dependencies we will need for our examples.
Apart from the main JUnit 5 library junit-jupiter-engine, we’ll also need the junit-jupiter-api library:

<dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter-api</artifactId>
    <version>5.4.2</version>
    <scope>test</scope>
</dependency>

As always, we can get the latest version from Maven Central.
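
For completeness, the junit-jupiter-engine dependency mentioned above uses the same version:

<dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter-engine</artifactId>
    <version>5.4.2</version>
    <scope>test</scope>
</dependency>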

4. A TestResultLoggerExtension Example

Now that we have a basic understanding of the TestWatcher API, we’ll walk through a practical example.

Let’s begin by creating a simple extension for logging the results and providing a summary of our tests. In this case, to create the extension, we need to define a class that implements the TestWatcher interface:

public class TestResultLoggerExtension implements TestWatcher, AfterAllCallback {
    private List<TestResultStatus> testResultsStatus = new ArrayList<>();

    private enum TestResultStatus {
        SUCCESSFUL, ABORTED, FAILED, DISABLED;
    }

    //...
}

As with all extension interfaces, the TestWatcher interface also extends the main Extension interface, which is only a marker interface. In this example, we also implement the AfterAllCallback interface.

In our extension, we have a list of TestResultStatus, which is a simple enumeration we’re going to use to represent the status of a test result.

4.1. Processing the Test Results

Now, let’s see how to process the results of the individual unit test method:

@Override
public void testDisabled(ExtensionContext context, Optional<String> reason) {
    LOG.info("Test Disabled for test {}: with reason :- {}", 
      context.getDisplayName(),
      reason.orElse("No reason"));

    testResultsStatus.add(TestResultStatus.DISABLED);
}

@Override
public void testSuccessful(ExtensionContext context) {
    LOG.info("Test Successful for test {}: ", context.getDisplayName());

    testResultsStatus.add(TestResultStatus.SUCCESSFUL);
}  

We begin by filling the body of our extension and overriding the testDisabled() and testSuccessful() methods.

In our trivial example, we output the name of the test and add the status of the test to the testResultsStatus list.

We’ll continue in this fashion for the other two methods — testAborted() and testFailed():

@Override
public void testAborted(ExtensionContext context, Throwable cause) {
    LOG.info("Test Aborted for test {}: ", context.getDisplayName());

    testResultsStatus.add(TestResultStatus.ABORTED);
}

@Override
public void testFailed(ExtensionContext context, Throwable cause) {
    LOG.info("Test Aborted for test {}: ", context.getDisplayName());

    testResultsStatus.add(TestResultStatus.FAILED);
}

4.2. Summarizing the Test Results

In the last part of our example, we’ll override the afterAll() method:

@Override
public void afterAll(ExtensionContext context) throws Exception {
    Map<TestResultStatus, Long> summary = testResultsStatus.stream()
      .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));

    LOG.info("Test result summary for {} {}", context.getDisplayName(), summary.toString());
}

To quickly recap, the afterAll method is executed after all test methods have been run. We use this method to group the different TestResultStatus we have in the list of test results before outputting a very basic summary.

For an in-depth guide to Lifecycle Callbacks, check out our excellent Guide to JUnit 5 extensions.

5. Running the Tests

In this penultimate section, we’ll see what the output from our tests looks like using our simple logging extension.

Now that we’ve defined our extension, we’ll first register it using the standard @ExtendWith annotation:

@ExtendWith(TestResultLoggerExtension.class)
class TestWatcherAPIUnitTest {

    @Test
    void givenFalseIsTrue_whenTestAbortedThenCaptureResult() {
        Assumptions.assumeTrue(false);
    }

    @Disabled
    @Test
    void givenTrueIsTrue_whenTestDisabledThenCaptureResult() {
        Assertions.assertTrue(true);
    }

    //...

Next, we fill our test class with unit tests, adding a mixture of disabled, aborted, and successful tests.
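
For example, a passing test to complete the mixture might be as simple as this (the method name is ours):

@Test
void givenTrueIsTrue_whenTestPassesThenCaptureResult() {
    Assertions.assertTrue(true);
}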

5.1. Reviewing the Output

When we run the unit test, we should see the output for each test:

INFO  c.b.e.t.TestResultLoggerExtension - 
    Test Successful for test givenTrueIsTrue_whenTestAbortedThenCaptureResult()
...
Test result summary for TestWatcherAPIUnitTest {ABORTED=1, SUCCESSFUL=1, DISABLED=2}

Naturally, we’ll also see the summary printed when all the test methods have completed.

6. Gotchas

In this last section, let’s review a couple of the subtleties we should be aware of when working with the TestWatcher interface:

  • TestWatcher extensions are not permitted to influence the execution of tests; this means if an exception is thrown from a TestWatcher, it will not be propagated up to the running test
  • Currently, this API is only used to report the results of @Test methods and @TestTemplate methods
  • By default, if no reason is provided to the testDisabled method, then it will contain the fully qualified name of the test method followed by ‘is @Disabled’

7. Conclusion

To summarize, in this tutorial, we’ve shown how we can make use of the JUnit 5 TestWatcher API to process the results of our test method executions.

The full source code of the examples can be found over on GitHub.

Advanced Quasar Usage for Kotlin


1. Introduction

We recently looked at Quasar, which gives us tools to make asynchronous programming more accessible and more efficient. We’ve seen the basics of what we can do with it, allowing for lightweight threads and message passing.

In this tutorial, we’re going to see some more advanced things that we can do with Quasar to take our asynchronous programming even further.

2. Actors

Actors are a well-known programming practice for concurrent programming, made especially popular in Erlang. Quasar allows us to define Actors, which are the fundamental building blocks of this form of programming.

Actors can:

  • Start other actors
  • Send messages to other actors
  • Receive messages from other actors that they react to

These three pieces of functionality give us everything we need to build our application.

In Quasar, an actor is represented as a strand — normally a fiber, but threads are also an option if needed — with a channel to get messages in, and some special support for lifecycle management and error handling.

2.1. Adding Actors to the Build

Actors aren’t a core concept in Quasar. Instead, we need to add the dependency that gives us access to them:

<dependency>
    <groupId>co.paralleluniverse</groupId>
    <artifactId>quasar-actors</artifactId>
    <version>0.8.0</version>
</dependency>

It is important that we use the same version of this dependency as any other Quasar dependencies in use.

2.2. Creating Actors

We create an actor by subclassing the Actor class, providing a name and a MailboxConfig and implementing the doRun() method:

val actor = object : Actor<Int, String>("noopActor", MailboxConfig(5, Channels.OverflowPolicy.THROW)) {
    @Suspendable
    override fun doRun(): String {
        return "Hello"
    }
}

Both the name and the mailbox configuration are optional — if we don’t specify a mailbox configuration, then the default is an unbounded mailbox.

Note that we need to mark the methods in the actor as @Suspendable manually. Kotlin doesn’t require that we declare exceptions at all, which means that we don’t declare the SuspendException that is on the base class we are extending. This means that Quasar doesn’t see our methods as suspendable without a little more help.

Once we’ve created an actor, we then need to start it — using either the spawn() method to start a new fiber, or spawnThread() to start a new thread. Other than the difference between a fiber and thread, these two work the same way.

Once we spawn the actor, we can, to a large extent, treat it the same as any other strand. This includes being able to call join() to wait for it to finish executing, and get() to retrieve the value from it:

actor.spawn()

println("Noop Actor: ${actor.get()}")

2.3. Sending Messages to Actors

When we spawn a new actor, the spawn() and spawnThread() methods return an ActorRef instance. We can use this to interact with the actor itself, by sending messages for it to receive.

The ActorRef implements the SendPort interface, and as such we can use it the same as we would use the producing half of a Channel. This gives us access to the send and trySend methods that we can use to pass messages into the actor:

val actorRef = actor.spawn()

actorRef.send(1)

2.4. Receiving Messages with Actors

Now that we can pass messages into the actor, we need to be able to do things with them. We do this inside our doRun() method on the actor itself, where we can call the receive() method to get the next message to process:

val actor = object : Actor<Int, Void?>("simpleActor", null) {
    @Suspendable
    override fun doRun(): Void? {
        val msg = receive()
        println("SimpleActor Received Message: $msg")
        return null
    }
}

The receive() method will block inside the actor until a message is available, and then it will allow the actor to process this message as required.

Often, actors will be designed to receive many messages and process them all. As such, actors will typically have an infinite loop inside the doRun() method that will process all messages that come in:

val actor = object : Actor<Int, Void?>("loopingActor", null) {
    @Suspendable
    override fun doRun(): Void? {
        while (true) {
            val msg = receive()

            if (msg > 0) {
                println("LoopingActor Received Message: $msg")
            } else {
                break
            }
        }

        return null
    }
}

This will then keep processing incoming messages until we receive a value of 0.

2.5. Sending Messages Too Fast

In some cases, the actor will process messages slower than they are sent to it. This will cause the mailbox to fill up, and potentially, to overflow.

The default mailbox policy has an unlimited capacity. We can configure this when we create the actor, though, by providing a MailboxConfig. Quasar also offers a configuration for how to react when the mailbox overflows, but at present, this isn’t implemented.

Instead, Quasar will use the policy of THROW, regardless of what we specify:

Actor<Int, String>("backlogActor", 
    MailboxConfig(1, Channels.OverflowPolicy.THROW)) {
}

If we specify a mailbox size, and the mailbox overflows, then the receive() method inside of the actor will cause the actor to abort by throwing an exception.

This isn’t something we can handle in any way:

try {
    receive()
} catch (e: Throwable) {
    // This is never reached
}

When this happens, the get() method from outside the actor will also throw an exception, but this can be handled. In this case, we’ll get an ExecutionException that wraps a QueueCapacityExceededException with a stack trace pointing to the send() method that added the overflowing message.

If we know we’re working with an actor that has a limited mailbox size, we can use the trySend() method to send messages to it instead. This won’t cause the actor to fail, but will instead report whether the message was successfully sent or not:

val actor = object : Actor<Int, String>("backlogTrySendActor", 
  MailboxConfig(1, Channels.OverflowPolicy.THROW)) {
    @Suspendable
    override fun doRun(): String {
        TimeUnit.MILLISECONDS.sleep(500)
        println("Backlog TrySend Actor Received: ${receive()}")

        return "No Exception"
    }
}

val actorRef = actor.spawn()

actorRef.trySend(1) // Returns True
actorRef.trySend(2) // Returns False

2.6. Reading Messages Too Fast

In the opposite case, we might have an actor that is trying to read messages faster than they are being provided. Normally this is fine — the actor will block until a message is available and then process it.

In some situations, though, we want to be able to handle this in other ways.

When it comes to receiving messages, we have three options available to us:

  • Block indefinitely until a message is available
  • Block until a message is available or until a timeout occurs
  • Don’t block at all

So far, we’ve used the receive() method, which blocks forever.

If necessary, we can provide timeout details to the receive() method. This will cause it to block only for that period before returning — either the received message or null if we timed out:

while(true) {
    val msg = receive(1, TimeUnit.SECONDS)
    if (msg != null) {
        // Process Message
    } else {
        println("Still alive")
    }
}

On rare occasions, we might want not to block at all, and instead, return immediately with a message or null. We can do this with the tryReceive() method instead — as a mirror to the trySend() method we saw above:

while(true) {
    val msg = tryReceive()
    if (msg != null) {
        // Process Message
    } else {
        print(".")
    }
}

2.7. Filtering Messages

So far, our actors have received every single message that was sent to them. We can adjust this if desired, though.

Our doRun() method is designed to represent the bulk of the actor functionality, and the receive() method called from this will give us the next message to work with.

We can also override a method called filterMessage() that will determine if we should process any given message or not. The receive() method calls this for us, and if it returns null, then the message isn’t passed on to the actor. For example, the following will filter out all messages that are odd numbers:

override fun filterMessage(m: Any?): Int? {
    return when (m) {
        is Int -> {
            if (m % 2 == 0) {
                m
            } else {
                null
            }
        }
        else -> super.filterMessage(m)
    }
}

The filterMessage() method is also able to transform the messages as they come through. The value that we return is the value provided to the actor, so it acts as both filter and map. The only restriction is that the return type must match the expected message type of the actor.

For example, the following will filter out all odd numbers, but then multiply all the even numbers by 10:

override fun filterMessage(m: Any?): Int? {
    return when (m) {
        is Int -> {
            if (m % 2 == 0) {
                m * 10
            } else {
                null
            }
        }
        else -> super.filterMessage(m)
    }
}

2.8. Linking Actors and Error Handling

So far, our actors have all worked strictly in isolation. We do have the ability to have actors that watch each other so that one can react to events in the other. We can do this in a symmetric or asymmetric manner as desired.

At present, the only event that we can handle is when an actor exits — either deliberately or because it failed for some reason.

When we link actors using the watch() method, then we are allowing one actor — the watcher — to be informed of lifecycle events in the other — the watched. This is strictly a one-way affair, and the watched actor doesn’t get notified of anything about the watcher:

val watcherRef = watcher.spawn()
val watchedRef = watched.spawn()
watcher.watch(watchedRef)

Alternatively, we can use the link() method, which is the symmetric version. In this case, both actors are informed of lifecycle events in the other, instead of having a watcher and a watched actor:

val firstRef = first.spawn()
val secondRef = second.spawn()
first.link(secondRef)

In both cases, the effect is the same. Any lifecycle events that occur in the watched actor will cause a special message — of type LifecycleMessage — to be added to the input channel of the watcher actor. This then gets processed by the filterMessage() method as described earlier.

The default implementation will then pass this on to the handleLifecycleMessage() method in our actor instead, which can then process these messages as needed:

override fun handleLifecycleMessage(m: LifecycleMessage?): Int? {
    println("WatcherActor Received Lifecycle Message: ${m}")
    return super.handleLifecycleMessage(m)
}

Here, there’s a subtle difference between link() and watch(). With watch(), the standard handleLifecycleMessage() method does nothing more than remove the listener references, whereas with link(), it also throws an exception that will be received in the doRun() method in response to the receive() call.

This means that using link() automatically causes our doRun() method of the actors to see an exception when any linked actors exit, whereas watch() forces us to implement the handleLifecycleMessage() method to be able to react to the message.

2.9. Registering and Retrieving Actors

So far, we’ve only ever interacted with actors immediately after we’ve created them, so we’ve been able to use the variables in scope to interact with them. Sometimes, though, we need to be able to interact with actors a long way from where we spawned them.

One way that we can do this is by using standard programming practices — pass the ActorRef variable around so that we have access to it from where we need it.

Quasar gives us another way to achieve this. We can register actors with a central ActorRegistry and then access them by name later:

val actorRef = actor.spawn()
actor.register()

val retrievedRef = ActorRegistry.getActor<ActorRef<Int>>("theActorName")

assertEquals(actorRef, retrievedRef)

This assumes that we gave the actor a name when we created it and registers it with that name. If the actor wasn’t named — for example, if the first constructor argument was null — then we can pass in a name to the register() method instead:

actor.register("renamedActor")

ActorRegistry.getActor() is static so that we can access this from anywhere in our application.

If we try to retrieve an actor using a name that isn’t known, Quasar will block until such an actor does exist. This can potentially be forever, so we can also give a timeout when we’re retrieving the actor to avoid this. This will then return null on timeout should the requested actor not be found:

val retrievedRef = ActorRegistry.getActor<ActorRef<Int>>("unknownActor", 1, TimeUnit.SECONDS)

Assert.assertNull(retrievedRef)

3. Actor Templates

So far, we have written our actors from first principles. However, there are several common patterns that get used over and over. As such, Quasar has packaged these up in a way that we can easily re-use them.

These templates are often referred to as Behaviors, borrowing the terminology for the same concept used in Erlang.

Many of these templates are implemented as subclasses of Actor and of ActorRef, which add additional features for us to use. This will give additional methods inside the Actor class to override or to call from inside our implemented functionality, and additional methods on the ActorRef class for the calling code to interact with the actor.

3.1. Request/Reply

A common use case for actors is that some calling code will send them a message, and then the actor will do some work and send some result back. The calling code then receives the response and carries on working with it. Quasar gives us the RequestReplyHelper to let us achieve both sides of this easily.

To use this, our messages must all be subclasses of the RequestMessage class. This allows Quasar to store additional information to get the reply back to the correct calling code:

data class TestMessage(val input: Int) : RequestMessage<Int>()

As the calling code, we can use RequestReplyHelper.call() to submit a message to the actor, and then get either the response or an exception back as appropriate:

val result = RequestReplyHelper.call(actorRef, TestMessage(50))

Inside the actor itself, we then receive the message, process it, and use RequestReplyHelper.reply() to send back the result:

val actor = object : Actor<TestMessage, Void?>() {
    @Suspendable
    override fun doRun(): Void? {
        while (true) {
            val msg = receive()

            RequestReplyHelper.reply(msg, msg.input * 100)
        }
    }
}

3.2. Server

The ServerActor is an extension to the above where the request/reply capabilities are part of the actor itself. This gives us the ability to make a synchronous call to the actor and to get a response from it — using the call() method — or to make an asynchronous call to the actor where we don’t need a response — using the cast() method.

We implement this form of an actor by using the ServerActor class and passing an instance of ServerHandler to the constructor. This is generic over the types of message to handle for a synchronous call, to return from a synchronous call, and to handle for an asynchronous call.

When we implement a ServerHandler, then we have several methods that we need to implement:

  • init — Handle the actor starting up
  • terminate — Handle the actor shutting down
  • handleCall — Handle a synchronous call and return the response
  • handleCast — Handle an asynchronous call
  • handleInfo — Handle a message that is neither a Call nor a Cast
  • handleTimeout — Handle when we haven’t received any messages for a configured duration

The easiest way to achieve this is to subclass AbstractServerHandler, which has default implementations of all the methods. This gives us the ability to implement only the bits we need for our use case:

val actor = ServerActor(object : AbstractServerHandler<Int, String, Float>() {
    @Suspendable
    override fun handleCall(from: ActorRef<*>?, id: Any?, m: Int?): String {
        println("Called with message: " + m + " from " + from)
        return m?.toString() ?: "None"
    }

    @Suspendable
    override fun handleCast(from: ActorRef<*>?, id: Any?, m: Float?) {
        println("Cast message: " + m + " from " + from)
    }
})

Our handleCall() and handleCast() methods get called with the message to handle but are also given a reference to where the message came from and a unique ID to identify the call, in case they are important. Both the source ActorRef and the ID are optional and may not be present.

Spawning a ServerActor will return us a Server instance. This is a subclass of ActorRef that gives us additional functionality for call() and cast(), to send messages in as appropriate, and a method to shut the server down:

val server = actor.spawn()

val result = server.call(5)
server.cast(2.5f)

server.shutdown()

3.3. Proxy Server

The Server pattern gives us a specific way to handle messages and give responses. An alternative to this is the ProxyServer, which has the same effect but in a more usable form. This uses Java dynamic proxies to allow us to implement standard Java interfaces using actors.

To implement this pattern, we need to define an interface that describes our functionality:

@Suspendable
interface Summer {
    fun sum(a: Int, b: Int) : Int
}

This can be any standard Java Interface, with whatever functions we need.

We then pass an instance of this to the ProxyServerActor constructor to create the actor:

val actor = ProxyServerActor(false, object : Summer {
    override fun sum(a: Int, b: Int): Int {
        return a + b
    }
})

val summerActor = actor.spawn()

The boolean passed to ProxyServerActor is a flag indicating whether to use the actor’s strand for void methods. If set to true, then the calling strand will block until the method completes, even though there is no return value.

Quasar will then ensure that we run the method calls inside the actor as needed, rather than on the calling strand. The instance returned from spawn() or spawnThread() implements both Server — as seen above — and our interface, thanks to the power of Java dynamic proxies:

// Calling the interface method
val result = (summerActor as Summer).sum(1, 2)

// Calling methods on Server
summerActor.shutdown()

Internally, Quasar implements a ProxyServerActor using the Server behavior that we saw earlier, and we can use it in the same way. The use of dynamic proxies simply makes calling methods on it easier to achieve.

3.4. Event Sources

The event source pattern allows us to create an actor where messages sent to it get handled by several event handlers. These handlers get added and removed as necessary. This follows the pattern that we have seen several times for handling asynchronous events. The only real difference here is that our event handlers are run on the actor strand and not the calling strand.

We create an EventSourceActor without any special code and start it running in the standard way:

val actor = EventSourceActor<String>()
val eventSource = actor.spawn()

Once the actor has been spawned, we can then register event handlers against it. The body of each handler is executed on the strand of the actor, but the handlers are registered from outside of it:

eventSource.addHandler { msg ->
    println(msg)
}

Kotlin allows us to write our event handlers as lambda functions, and to, therefore, use all the functionality we have here. This includes accessing values from outside the lambda function, but these will be accessed across the different strands — so we need to be careful when we do this as in any multi-threaded scenario:

val name = "Baeldung"
eventSource.addHandler { msg ->
    println(name + " " + msg)
}

We also get the major benefit of event-handling code, in that we can register as many handlers as we need whenever we need to, each of which is focused on its one task. All handlers run on the same strand — the one that the actor runs on — so handlers need to take this into account with the processing they do.

As such, it would be common to have these handlers do any heavy processing by passing on to another actor.

3.5. Finite-State Machine

A finite-state machine is a standard construct where we have a fixed number of possible states, and where the processing of one state can switch to a different one. We can represent many algorithms in this way.

Quasar gives us the ability to model a finite-state machine as an actor, so the actor itself maintains the current state, and each state is essentially a message handler.

To implement this, we have to write our actor as a subclass of FiniteStateMachineActor. We then have as many methods as we need, each of which will handle a message and return the new state to transition into:

@Suspendable
fun lockedState() : SuspendableCallable<SuspendableCallable<*>> {
    return receive {msg ->
        when (msg) {
            "PUSH" -> {
                println("Still locked")
                lockedState()
            }
            "COIN" -> {
                println("Unlocking...")
                unlockedState()
            }
            else -> TERMINATE
        }
    }
}

We then also need to implement the initialState() method to tell the actor where to start:

@Suspendable
override fun initialState(): SuspendableCallable<SuspendableCallable<*>> {
    return SuspendableCallable { lockedState() }
}

Each of our state methods will do whatever it needs to do, then return one of three possible values as needed:

  • The new state to use
  • The special token TERMINATE, which indicates that the actor should shut down
  • null, which indicates to not consume this specific message — in this case, the message is available to the next state we transition into

4. Reactive Streams

Reactive Streams are a relatively new standard that is becoming popular across many languages and platforms. This API allows for interoperation between various libraries and frameworks that support asynchronous I/O — including RxJava, Akka, and Quasar, amongst others.

The Quasar implementation allows us to convert between Reactive streams and Quasar channels, which then makes it possible to have events from these streams feed into strands or messages from strands feeding into streams.

Reactive streams have the concept of a Publisher and a Subscriber. A publisher is something that can publish messages to subscribers. Conversely, Quasar uses the concepts of SendPort and ReceivePort, where we use the SendPort to send messages and the ReceivePort to receive those same messages. Quasar also has the concept of a Topic, which is simply a mechanism to allow us to send messages to multiple channels.

These are similar concepts, and Quasar lets us convert one to the other.

4.1. Adding Reactive Streams to the Build

Reactive streams aren’t a core concept in Quasar. Instead, we need to add a dependency that gives us access to them:

<dependency>
    <groupId>co.paralleluniverse</groupId>
    <artifactId>quasar-reactive-streams</artifactId>
    <version>0.8.0</version>
</dependency>

It’s important that we use the same version of this dependency as any other Quasar dependencies in use. It’s also important that the dependency is consistent with the Reactive Streams API that we’re using in the application. For example, quasar-reactive-streams:0.8.0 depends on reactive-streams:1.0.2.

This is only a concern if we already depend on Reactive Streams directly, since our local dependency will override the one that Quasar depends on.

4.2. Publishing to a Reactive Stream

Quasar gives us the ability to convert a Channel to a Publisher, such that we can generate messages using a standard Quasar channel, but the receiving code can treat it as a reactive Publisher:

val inputChannel = Channels.newChannel<String>(1)
val publisher = ReactiveStreams.toPublisher(inputChannel)

Once we’ve done this, we can treat our Publisher as if it were any other Publisher instance, meaning that the client code doesn’t need to be aware of Quasar at all, or even that the code is asynchronous.

Any messages that get sent to inputChannel get added to this stream, such that they can be pulled by the subscriber.

At this point, we can only have a single subscriber to our stream. Attempting to add a second subscriber will throw an exception instead.

If we want to support multiple subscribers, then we can use a Topic instead. This looks the same from the Reactive Streams end, but we end up with a Publisher that supports multiple subscribers:

val inputTopic = Topic<String>()
val publisher = ReactiveStreams.toPublisher(inputTopic)

4.3. Subscribing to a Reactive Stream

The opposite side of this is converting a Publisher to a Channel. This allows us to consume messages from a Reactive stream using standard Quasar channels as if it were any other channel:

val channel = ReactiveStreams.subscribe(10, Channels.OverflowPolicy.THROW, publisher)

This gives us a ReceivePort portion of a channel. Once done, we can treat it the same as any other channel, using standard Quasar constructs to consume messages from it. Those messages originate from the Reactive stream, wherever that has come from.

5. Conclusion

We have seen some more advanced techniques that we can achieve using Quasar. These allow us to write better, more maintainable asynchronous code, and to more easily interact with streams that come out of different asynchronous libraries.

Examples of some of the concepts we’ve covered here can be found over on GitHub.

Introduction to Morphia – Java ODM for MongoDB


1. Overview

In this tutorial, we’ll understand how to use Morphia, an Object Document Mapper (ODM) for MongoDB in Java.

In the process, we’ll also understand what an ODM is and how it facilitates working with MongoDB.

2. What is an ODM?

For those uninitiated in this area, MongoDB is a document-oriented database built to be distributed by nature. Document-oriented databases, in simple terms, manage documents, which are nothing but a schema-less way of organizing semi-structured data. They fall under a broader and loosely defined umbrella of NoSQL databases, named after their apparent departure from traditional organization of SQL databases.

MongoDB provides drivers for almost all popular programming languages like Java. These drivers offer a layer of abstraction for working with MongoDB so that we aren’t working with Wire Protocol directly. Think of this as Oracle providing an implementation of JDBC driver for their relational database.

However, if we recall our days working with JDBC directly, we can appreciate how messy it can get — especially in an object-oriented paradigm. Fortunately, we have Object Relational Mapping (ORM) frameworks like Hibernate to our rescue. It isn’t very different for MongoDB.

While we can certainly work with the low-level driver, it requires a lot more boilerplate to accomplish the task. Here, we’ve got a similar concept to ORM called Object Document Mapper (ODM). Morphia exactly fills that space for the Java programming language and works on top of the Java driver for MongoDB.

3. Setting up Dependencies

We’ve seen enough theory to get us into some code. For our examples, we’ll model a library of books and see how we can manage it in MongoDB using Morphia.

But before we begin, we’ll need to set up some dependencies.

3.1. MongoDB

We need a running instance of MongoDB to work with. There are several ways to get this, and the simplest is to download and install the community edition on our local machine.

We should leave all default configurations as-is, including the port on which MongoDB runs.

3.2. Morphia

We can download the pre-built JARs for Morphia from Maven Central and use them in our Java project.

However, the simplest way is to use a dependency management tool like Maven:

<dependency>
    <groupId>dev.morphia.morphia</groupId>
    <artifactId>core</artifactId>
    <version>1.5.3</version>
</dependency>

4. How to Connect using Morphia?

Now that we have MongoDB installed and running and have set up Morphia in our Java project, we’re ready to connect to MongoDB using Morphia.

Let’s see how we can accomplish that:

Morphia morphia = new Morphia();
morphia.mapPackage("com.baeldung.morphia");
Datastore datastore = morphia.createDatastore(new MongoClient(), "library");
datastore.ensureIndexes();

That’s pretty much it! Let’s understand this better. We need two things for our mapping operations to work:

  1. A Mapper: This is responsible for mapping our Java POJOs to MongoDB Collections. In our code snippet above, Morphia is the class responsible for that. Note how we’re configuring the package where it should look for our POJOs.
  2. A Connection: This is the connection to a MongoDB database on which the mapper can execute different operations. The class Datastore takes as a parameter an instance of MongoClient (from the Java MongoDB driver) and the name of the MongoDB database, returning an active connection to work with.

So, we’re all set to use this Datastore and work with our entities.

5. How to Work with Entities?

Before we can use our freshly minted Datastore, we need to define some domain entities to work with.

5.1. Simple Entity

Let’s begin by defining a simple Book entity with some attributes:

@Entity("Books")
public class Book {
    @Id
    private String isbn;
    private String title;
    private String author;
    @Property("price")
    private double cost;
    // constructors, getters, setters and hashCode, equals, toString implementations
}

There are a couple of interesting things to note here:

  • Notice the annotation @Entity that qualifies this POJO for ODM mapping by Morphia
  • Morphia, by default, maps an entity to a collection in MongoDB by the name of its class, but we can explicitly override this (like we’ve done for the entity Book here)
  • Morphia, by default, maps the variables in an entity to the keys in a MongoDB collection by the name of the variable, but again we can override this (like we’ve done for the variable cost here)
  • Lastly, we need to mark a variable in the entity to act as the primary key by the annotation @Id (like we’re using ISBN for our book here)

5.2. Entities with Relationships

In the real world, though, entities are hardly as simple as they look and have complex relationships with each other. For instance, our simple entity Book can have a Publisher and can reference other companion books. How do we model them?

MongoDB offers two mechanisms to build relationships — Referencing and Embedding. As the name suggests, with referencing, MongoDB stores related data as a separate document in the same or a different collection and just references it using its id.

On the contrary, with embedding, MongoDB stores or rather embeds the relation within the parent document itself.

Let’s see how we can use them. Let’s begin by embedding Publisher in our Book:

@Embedded
private Publisher publisher;

Simple enough. Now let’s go ahead and add references to other books:

@Reference
private List<Book> companionBooks;

That’s it — Morphia provides convenient annotations to model relationships as supported by MongoDB. The choice of referencing vs embedding, however, should draw from data model complexity, redundancy, and consistency amongst other considerations.

The exercise is similar to normalization in relational databases.

Now, we’re ready to perform some operations on Book using Datastore.

6. Some Basic Operations

Let’s see how to work with some of the basic operations using Morphia.

6.1. Save

Let’s begin with the simplest of the operations, creating an instance of Book in our MongoDB database library:

Publisher publisher = new Publisher(new ObjectId(), "Awesome Publisher");

Book book = new Book("9781565927186", "Learning Java", "Tom Kirkman", 3.95, publisher);
Book companionBook = new Book("9789332575103", "Java Performance Companion", 
  "Tom Kirkman", 1.95, publisher);

book.addCompanionBooks(companionBook);

datastore.save(companionBook);
datastore.save(book);

This is enough to let Morphia create a collection in our MongoDB database, if it does not exist, and perform an upsert operation.

6.2. Query

Let’s see if we’re able to query the book we just created in MongoDB:

List<Book> books = datastore.createQuery(Book.class)
  .field("title")
  .contains("Learning Java")
  .find()
  .toList();

assertEquals(1, books.size());

assertEquals(book, books.get(0));

Querying a document in Morphia begins with creating a query using Datastore and then declaratively adding filters, to the delight of those in love with functional programming!

Morphia supports much more complex query construction with filters and operators. Moreover, Morphia allows for limiting, skipping, and ordering of results in the query.

What’s more, Morphia allows us to use raw queries written with the Java driver for MongoDB for more control, should that be needed.
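
As a sketch of what that looks like, assuming the 1.5-style Sort and FindOptions helpers:

// the five cheapest books, ordered by price ascending
List<Book> cheapest = datastore.createQuery(Book.class)
  .order(Sort.ascending("price"))
  .find(new FindOptions().limit(5))
  .toList();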

6.3. Update

Although a save operation can handle updates if the primary key matches, Morphia provides ways to selectively update documents:

Query<Book> query = datastore.createQuery(Book.class)
  .field("title")
  .contains("Learning Java");

UpdateOperations<Book> updates = datastore.createUpdateOperations(Book.class)
  .inc("price", 1);

datastore.update(query, updates);

List<Book> books = datastore.createQuery(Book.class)
  .field("title")
  .contains("Learning Java")
  .find()
  .toList();

assertEquals(4.95, books.get(0).getCost());

Here, we’re building a query and an update operation to increase by one the price of all books returned by the query.

6.4. Delete

Finally, that which has been created must be deleted! Again, with Morphia, it’s quite intuitive:

Query<Book> query = datastore.createQuery(Book.class)
  .field("title")
  .contains("Learning Java");

datastore.delete(query);

List<Book> books = datastore.createQuery(Book.class)
  .field("title")
  .contains("Learning Java")
  .find()
  .toList();

assertEquals(0, books.size());

We create the query quite similarly as before and run the delete operation on the Datastore.

7. Advanced Usage

MongoDB has some advanced operations like Aggregation, Indexing, and many others. While it isn’t possible to perform all of that using Morphia, it’s certainly possible to achieve some of that. For others, sadly, we’ll have to fall back to the Java driver for MongoDB.

Let’s focus on some of these advanced operations that we can perform through Morphia.

7.1. Aggregation

Aggregation in MongoDB allows us to define a series of operations in a pipeline that can operate on a set of documents and produce aggregated output.

Morphia has an API to support such an aggregation pipeline.

Let’s assume we wish to aggregate our library data in such a manner that we have all the books grouped by their author:

Iterator<Author> iterator = datastore.createAggregation(Book.class)
  .group("author", grouping("books", push("title")))
  .out(Author.class);

So, how does this work? We begin by creating an aggregation pipeline using the same old Datastore. We have to provide the entity on which we wish to perform aggregation operations, for instance, Book here.

Next, we want to group documents by “author” and aggregate their “title” under a key called “books”. Finally, we’re working with an ODM here. So, we have to define an entity to collect our aggregated data — in our case, it’s Author.

Of course, we have to define an entity called Author with a variable called books:

@Entity
public class Author {
    @Id
    private String name;
    private List<String> books;
    // other necessary getters and setters
}

This, of course, just scratches the surface of a very powerful construct provided by MongoDB and can be explored further for details.

7.2. Projection

Projection in MongoDB allows us to select only the fields we want to fetch from documents in our queries. In case document structure is complex and heavy, this can be really useful when we need only a few fields.

Let’s suppose we only need to fetch books with their title in our query:

List<Book> books = datastore.createQuery(Book.class)
  .field("title")
  .contains("Learning Java")
  .project("title", true)
  .find()
  .toList();
 
assertEquals("Learning Java", books.get(0).getTitle());
assertNull(books.get(0).getAuthor());

Here, as we can see, we only get back the title in our result and not the author and other fields. We should, however, be careful in using the projected output in saving back to MongoDB. This may result in data loss!

7.3. Indexing

Indexes play a very important role in query optimization with databases — relational as well as many non-relational ones.

MongoDB defines indexes at the level of the collection with a unique index created on the primary key by default. Moreover, MongoDB allows indexes to be created on any field or sub-field within a document. We should choose to create an index on a key depending on the query we wish to create.

For instance, in our example, we may wish to create an index on the field “title” of Book as we often end up querying on it:

@Indexes({
  @Index(
    fields = @Field("title"),
    options = @IndexOptions(name = "book_title")
  )
})
public class Book {
    // ...
    @Property
    private String title;
    // ...
}

Of course, we can pass additional indexing options to tailor the nuances of the index that gets created. Note that the field should be annotated by @Property to be used in an index.

Moreover, apart from the class-level index, Morphia has an annotation to define a field-level index as well.
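
A field-level version might look like this (the index name is ours):

@Entity("Books")
public class Book {
    // ...
    @Indexed(options = @IndexOptions(name = "book_author"))
    private String author;
    // ...
}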

7.4. Schema Validation

We’ve got an option to provide data validation rules for a collection that MongoDB can use while performing an update or insert operation. Morphia supports this through their APIs.

Let’s say that we don’t want to insert a book without a valid price. We can leverage schema validation to achieve this:

@Validation("{ price : { $gt : 0 } }")
public class Book {
    // ...
    @Property("price")
    private double cost;
    // ...
}

There is a rich set of validations provided by MongoDB that can be employed here.

8. Alternative MongoDB ODMs

Morphia is not the only available MongoDB ODM for Java. There are several others that we can consider using in our applications. A detailed comparison with Morphia is beyond the scope of this article, but it is always useful to know our options:

  • Spring Data: Provides a Spring-based programming model for working with MongoDB
  • MongoJack: Provides direct mapping from JSON to MongoDB objects

This is not a complete list of MongoDB ODMs for Java, but there are some interesting alternatives available!

9. Conclusion

In this article, we understood the basic details of MongoDB and the use of an ODM to connect to and operate on MongoDB from a programming language like Java. We further explored Morphia as a MongoDB ODM for Java and the various capabilities it has.

As always, the code can be found over on GitHub.

Mocking a Void Method with EasyMock


1. Overview

Mocking frameworks are used to mock interaction with dependencies so as to test our classes in isolation. Typically, we mock the dependencies to return the various possible values. This way, we can ensure our class can handle each of those values.

But, sometimes we might have to mock dependency methods that do not return anything.

In this tutorial, we will see when and how to mock void methods using EasyMock.

2. Maven Dependency

First, let’s add the EasyMock dependency to our pom.xml:

<dependency>
    <groupId>org.easymock</groupId>
    <artifactId>easymock</artifactId>
    <version>4.0.2</version>
    <scope>test</scope>
</dependency>

3. When to Mock a void Method

When we test classes with dependencies, we would normally want to cover all values returned by the dependency. But sometimes, the dependency methods do not return a value. So, if nothing is returned, why would we want to mock a void method?

Even though void methods do not return a value, they might have side effects. An example of this is the Session.save() method. When we save a new entity, the save() method generates an id and sets it on the entity passed.

For this reason, we have to mock the void method to simulate the various processing outcomes.

Another time the mocking might come in handy is when testing exceptions thrown by the void method.

4. How to Mock a void Method

Now, let’s see how we can mock a void method using EasyMock.

Let’s suppose we have to mock the void method of a WeatherService class that takes a location and sets the minimum and maximum temperature:

public interface WeatherService {
    void populateTemperature(Location location);
}

4.1. Creating the Mock Object

Let’s start by creating a mock for the WeatherService:

@Mock
private WeatherService mockWeatherService;

Here, we’ve done this using the EasyMock annotation @Mock. But, we can do this using the EasyMock.mock() method as well.

Next, we’ll record the expected interactions with the mock by calling populateTemperature():

mockWeatherService.populateTemperature(EasyMock.anyObject(Location.class));

Now, if we don’t want to simulate the processing of this method, this call itself is sufficient to mock the method.

4.2. Throwing an Exception

First, let’s take the case where we want to test whether our class can handle exceptions thrown by the void method. For this, we’ll have to mock the method in such a way that it throws these exceptions.

In our example, the method throws ServiceUnavailableException:

EasyMock.expectLastCall().andThrow(new ServiceUnavailableException());

As seen above, this involves simply calling the andThrow(Throwable) method.

4.3. Simulating Method Behavior

As mentioned earlier, we might sometimes need to simulate the behavior of the void method.

In our case, this would involve populating the minimum and maximum temperatures of the locations passed:

EasyMock.expectLastCall()
  .andAnswer(() -> {
      Location passedLocation = (Location) EasyMock.getCurrentArguments()[0];
      passedLocation.setMaximumTemperature(new BigDecimal(MAX_TEMP));
      passedLocation.setMinimumTemperature(new BigDecimal(MAX_TEMP - 10));
      return null;
  });

Here, we’ve used the andAnswer(IAnswer) method to define the behavior of the populateTemperature() method when called. Then, we’ve used the EasyMock.getCurrentArguments() method – that returns the arguments passed to the mock method – to modify the locations passed.

Note that we have returned null at the end. This is because we are mocking a void method.

It’s also worth noting that this approach is not restricted to mocking void methods only. We can also use it for methods that return a value. There, it comes in handy when we want to mock the method to return values based on the arguments passed.

4.4. Replaying the Mocked Method

Lastly, we’ll use the EasyMock.replay() method to change the mock to “replay” mode, so that the recorded actions can be replayed when called:

EasyMock.replay(mockWeatherService);

Consequently, when we call the test method, the custom behavior defined should be executed.
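
To tie these steps together, here's a minimal sketch of the complete record-replay-verify flow (the Location constructor, getters, and MAX_TEMP constant are our assumptions):

@Test
public void givenMockedVoidMethod_whenCalled_thenTemperaturesArePopulated() {
    // record: expect the call and define its side effect
    mockWeatherService.populateTemperature(EasyMock.anyObject(Location.class));
    EasyMock.expectLastCall().andAnswer(() -> {
        Location passed = (Location) EasyMock.getCurrentArguments()[0];
        passed.setMaximumTemperature(new BigDecimal(MAX_TEMP));
        passed.setMinimumTemperature(new BigDecimal(MAX_TEMP - 10));
        return null;
    });
    EasyMock.replay(mockWeatherService);

    // act: call the mocked method
    Location london = new Location("London");
    mockWeatherService.populateTemperature(london);

    // assert the side effect and verify the recorded expectations were met
    assertEquals(new BigDecimal(MAX_TEMP), london.getMaximumTemperature());
    EasyMock.verify(mockWeatherService);
}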

5. Conclusion

In this tutorial, we saw how to mock void methods using EasyMock.

And of course, the code used in this article can be found over on GitHub.

Get the Current Working Directory in Java


1. Overview

It’s an easy task to get the current working directory in Java, but unfortunately, there’s no direct API available in the JDK to do this.

In this tutorial, we’ll learn how to get the current working directory in Java with java.lang.System, java.io.File, java.nio.file.FileSystems, and java.nio.file.Paths.

2. System

Let’s begin with the standard solution using System#getProperty, assuming our current working directory name is Baeldung throughout the code:

static final String CURRENT_DIR = "Baeldung";
@Test
void whenUsingSystemProperties_thenReturnCurrentDirectory() {
    String userDirectory = System.getProperty("user.dir");
    assertTrue(userDirectory.endsWith(CURRENT_DIR));
}

We used the built-in Java property key user.dir to fetch the current working directory from the System‘s property map. This solution works across all JDK versions.

3. File

Let’s see another solution using java.io.File:

@Test
void whenUsingJavaIoFile_thenReturnCurrentDirectory() {
    String userDirectory = new File("").getAbsolutePath();
    assertTrue(userDirectory.endsWith(CURRENT_DIR));
}

Here, the File#getAbsolutePath internally uses System#getProperty to get the directory name, similar to our first solution. It’s a nonstandard solution to get the current working directory, and it works across all JDK versions.

4. FileSystems

Another valid alternative would be to use the new java.nio.file.FileSystems API:

@Test
void whenUsingJavaNioFileSystems_thenReturnCurrentDirectory() {
    String userDirectory = FileSystems.getDefault()
        .getPath("")
        .toAbsolutePath()
        .toString();
    assertTrue(userDirectory.endsWith(CURRENT_DIR));
}

This solution uses the new Java NIO API, and it works only with JDK 7 or higher.

5. Paths

And finally, let’s see a simpler solution to get the current directory using java.nio.file.Paths API:

@Test
void whenUsingJavaNioPaths_thenReturnCurrentDirectory() {
    String userDirectory = Paths.get("")
        .toAbsolutePath()
        .toString();
    assertTrue(userDirectory.endsWith(CURRENT_DIR));
}

Here, Paths#get internally uses FileSystem#getPath to fetch the path. It uses the new Java NIO API, so this solution works only with JDK 7 or higher.

6. Conclusion

In this tutorial, we explored four different ways to get the current working directory in Java. The first two solutions work across all versions of the JDK whereas the last two work only with JDK 7 or higher.

We recommend using the System solution since it’s efficient and straightforward. We can simplify it further by wrapping this API call in a static utility method and accessing it directly.
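
For example, a minimal utility wrapper could look like this (the class and method names are our own):

public final class DirectoryUtils {

    private DirectoryUtils() {
    }

    // wraps the user.dir lookup so callers don't repeat the property key
    public static String getCurrentWorkingDirectory() {
        return System.getProperty("user.dir");
    }
}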

The source code for this tutorial is available on GitHub – it is a Maven-based project, so it should be easy to import and run as-is.


Transferring a File Through SFTP in Java


1. Overview

In this tutorial, we’ll discuss how to upload and download files from a remote server using SFTP in Java.

We’ll use three different libraries: JSch, SSHJ, and Apache Commons VFS.

2. Using JSch

First, let’s see how to upload and download files from a remote server using the JSch library.

2.1. Maven Configuration

We’ll need to add the jsch dependency to our pom.xml:

<dependency>
    <groupId>com.jcraft</groupId>
    <artifactId>jsch</artifactId>
    <version>0.1.55</version>
</dependency>

The latest version of jsch can be found on Maven Central.

2.2. Setting Up JSch

Now, we’ll set up JSch.

JSch enables us to use either Password Authentication or Public Key Authentication to access a remote server. In this example, we’ll use password authentication:

private ChannelSftp setupJsch() throws JSchException {
    JSch jsch = new JSch();
    jsch.setKnownHosts("/Users/john/.ssh/known_hosts");
    Session jschSession = jsch.getSession(username, remoteHost);
    jschSession.setPassword(password);
    jschSession.connect();
    return (ChannelSftp) jschSession.openChannel("sftp");
}

In the example above, the remoteHost represents the name or IP address of the remote server (i.e. example.com). We can define the variables used in the test as:

private String remoteHost = "HOST_NAME_HERE";
private String username = "USERNAME_HERE";
private String password = "PASSWORD_HERE";

Also, we can generate the known_hosts file using the following command:

ssh-keyscan -H -t rsa REMOTE_HOSTNAME >> known_hosts

2.3. Uploading a File with JSch

Now, to upload a file to the remote server, we’ll use the method ChannelSftp.put():

@Test
public void whenUploadFileUsingJsch_thenSuccess() throws JSchException, SftpException {
    ChannelSftp channelSftp = setupJsch();
    channelSftp.connect();
 
    String localFile = "src/main/resources/sample.txt";
    String remoteDir = "remote_sftp_test/";
 
    channelSftp.put(localFile, remoteDir + "jschFile.txt");
 
    channelSftp.exit();
}

In this example, the first parameter of the method represents the local file to be transferred (for example, src/main/resources/sample.txt), while remoteDir is the path of the target directory on the remote server.

2.4. Downloading a File with JSch

We can also download a file from the remote server using ChannelSftp.get():

@Test
public void whenDownloadFileUsingJsch_thenSuccess() throws JSchException, SftpException {
    ChannelSftp channelSftp = setupJsch();
    channelSftp.connect();
 
    String remoteFile = "welcome.txt";
    String localDir = "src/main/resources/";
 
    channelSftp.get(remoteFile, localDir + "jschFile.txt");
 
    channelSftp.exit();
}

The remoteFile is the path of the file to be downloaded, and localDir represents the path of the target local directory.

3. Using SSHJ

Next, we’ll use the SSHJ library to upload and download files from a remote server.

3.1. Maven Configuration

First, we’ll add the dependency to our pom.xml:

<dependency>
    <groupId>com.hierynomus</groupId>
    <artifactId>sshj</artifactId>
    <version>0.27.0</version>
</dependency>

The latest version of sshj can be found on Maven Central.

3.2. Setting Up SSHJ

Next, we’ll set up the SSHClient.

SSHJ also allows us to use Password or Public Key Authentication to access the remote server.

We’ll use the Password Authentication in our example:

private SSHClient setupSshj() throws IOException {
    SSHClient client = new SSHClient();
    client.addHostKeyVerifier(new PromiscuousVerifier());
    client.connect(remoteHost);
    client.authPassword(username, password);
    return client;
}
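
Note that the PromiscuousVerifier used above skips host key verification entirely, which is convenient for tests but unsafe in production. As a sketch, SSHJ can instead verify the server against the local known_hosts file:

private SSHClient setupSshjWithKnownHosts() throws IOException {
    SSHClient client = new SSHClient();
    // verifies the server's host key against ~/.ssh/known_hosts
    client.loadKnownHosts();
    client.connect(remoteHost);
    client.authPassword(username, password);
    return client;
}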

3.3. Uploading a File with SSHJ

Similar to JSch, we’ll use the SFTPClient.put() method to upload a file to the remote server:

@Test
public void whenUploadFileUsingSshj_thenSuccess() throws IOException {
    SSHClient sshClient = setupSshj();
    SFTPClient sftpClient = sshClient.newSFTPClient();
 
    sftpClient.put(localFile, remoteDir + "sshjFile.txt");
 
    sftpClient.close();
    sshClient.disconnect();
}

We have two new variables here to define:

private String localFile = "src/main/resources/input.txt";
private String remoteDir = "remote_sftp_test/";

3.4. Downloading a File with SSHJ

The same goes for downloading a file from the remote server — we’ll use SFTPClient.get():

@Test
public void whenDownloadFileUsingSshj_thenSuccess() throws IOException {
    SSHClient sshClient = setupSshj();
    SFTPClient sftpClient = sshClient.newSFTPClient();
 
    sftpClient.get(remoteFile, localDir + "sshjFile.txt");
 
    sftpClient.close();
    sshClient.disconnect();
}

And let’s add the two variables used above:

private String remoteFile = "welcome.txt";
private String localDir = "src/main/resources/";

4. Using Apache Commons VFS

Finally, we’ll use Apache Commons VFS to transfer files to a remote server.

Actually, Apache Commons VFS uses the JSch library internally.

4.1. Maven Configuration

We need to add the commons-vfs2 dependency to our pom.xml:

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-vfs2</artifactId>
    <version>2.4</version>
</dependency>

The latest version of commons-vfs2 can be found on Maven Central.

4.2. Uploading a File with Apache Commons VFS

Apache Commons VFS is a little different.

We’ll use a FileSystemManager to create FileObjects from our target files, then use the FileObjects to transfer our files.

In this example, we’ll upload a file by using method FileObject.copyFrom():

@Test
public void whenUploadFileUsingVfs_thenSuccess() throws IOException {
    FileSystemManager manager = VFS.getManager();
 
    FileObject local = manager.resolveFile(
      System.getProperty("user.dir") + "/" + localFile);
    FileObject remote = manager.resolveFile(
      "sftp://" + username + ":" + password + "@" + remoteHost + "/" + remoteDir + "vfsFile.txt");
 
    remote.copyFrom(local, Selectors.SELECT_SELF);
 
    local.close();
    remote.close();
}

Note that the local file path should be absolute, and the remote file path should start with sftp://username:password@remoteHost.

4.3. Downloading a File with Apache Commons VFS

Downloading a file from a remote server is very similar — we’ll also use FileObject.copyFrom() to copy localFile from remoteFile:

@Test
public void whenDownloadFileUsingVfs_thenSuccess() throws IOException {
    FileSystemManager manager = VFS.getManager();
 
    FileObject local = manager.resolveFile(
      System.getProperty("user.dir") + "/" + localDir + "vfsFile.txt");
    FileObject remote = manager.resolveFile(
      "sftp://" + username + ":" + password + "@" + remoteHost + "/" + remoteFile);
 
    local.copyFrom(remote, Selectors.SELECT_SELF);
 
    local.close();
    remote.close();
}

5. Conclusion

In this article, we learned how to upload and download files from a remote SFTP server in Java. For this, we used multiple libraries: JSch, SSHJ, and Apache Commons VFS.

The full source code can be found on GitHub.

Concatenate Strings with Groovy


1. Overview

In this tutorial, we’ll look at several ways to concatenate Strings using Groovy. Note that a Groovy online interpreter comes in handy here.

We’ll start by defining a numOfWonder variable, which we’ll use throughout our examples:

def numOfWonder = 'seven'

2. Concatenation Operators

Quite simply, we can use the + operator to join Strings:

'The ' + numOfWonder + ' wonders of the world'

Similarly, Groovy also supports the left shift << operator:

'The ' << numOfWonder << ' wonders of ' << 'the world'

3. String Interpolation

As a next step, we’ll try to improve the readability of the code using a Groovy expression within a string literal:

"The $numOfWonder wonders of the world\n"

This can also be achieved using curly braces:

"The ${numOfWonder} wonders of the world\n"

4. Multi-line Strings

Let’s say we want to print all the wonders of the world. We can use triple double quotes to define a multi-line String, still including our numOfWonder variable:

"""
There are $numOfWonder wonders of the world.
Can you name them all? 
1. The Great Pyramid of Giza
2. Hanging Gardens of Babylon
3. Colossus of Rhodes
4. Lighthouse of Alexandria
5. Temple of Artemis
6. Statue of Zeus at Olympia
7. Mausoleum at Halicarnassus
"""

5. Concatenation Methods

As a final option, we’ll look at String‘s concat method:

'The '.concat(numOfWonder).concat(' wonders of the world')

For really long texts, we recommend using a StringBuilder or a StringBuffer instead:

new StringBuilder().append('The ').append(numOfWonder).append(' wonders of the world')
new StringBuffer().append('The ').append(numOfWonder).append(' wonders of the world')

6. Conclusion

In this article, we had a quick look at how to concatenate Strings using Groovy.

As usual, the full source code for this tutorial is available over on GitHub.

@DynamicUpdate with Spring Data JPA


1. Overview

When we use Spring Data JPA with Hibernate, we can use the additional features of Hibernate as well. @DynamicUpdate is one such feature.

@DynamicUpdate is a class-level annotation that can be applied to a JPA entity. It ensures that Hibernate uses only the modified columns in the SQL statement that it generates for the update of an entity.

In this article, we’ll take a look at the @DynamicUpdate annotation, with the help of a Spring Data JPA example.

2. JPA @Entity

When an application starts, Hibernate generates the SQL statements for the CRUD operations of all the entities. These SQL statements are generated once and cached in memory to improve performance.

The generated SQL update statement includes all the columns of an entity. In case we update an entity, the values of the modified columns are passed to the SQL update statement. For the columns that are not updated, Hibernate uses their existing values for the update.

Let’s try to understand this with an example. First, let’s consider a JPA entity named Account:

@Entity
public class Account {

    @Id
    private int id;

    @Column
    private String name;

    @Column
    private String type;

    @Column
    private boolean active;

    // Getters and Setters
}

Next, let’s write a JPA repository for the Account entity:

@Repository
public interface AccountRepository extends JpaRepository<Account, Integer> {
}

Now, we’ll use the AccountRepository to update the name field of an Account object:

Account account = accountRepository.findOne(ACCOUNT_ID);
account.setName("Test Account");
accountRepository.save(account);

After we execute this update, we can verify the generated SQL statement. The generated SQL statement will include all the columns of Account:

update Account set active=?, name=?, type=? where id=?

3. JPA @Entity with @DynamicUpdate

We’ve seen that even though we’ve modified the name field only, Hibernate has included all the columns in the SQL statement.

Now, let’s add the @DynamicUpdate annotation to the Account entity:

@Entity
@DynamicUpdate
public class Account {
    // Existing data and methods
}

Next, let’s run the same update code we used in the previous section. We can see that the SQL generated by Hibernate in this case includes only the name column:

update Account set name=? where id=?

So, what happens when we use @DynamicUpdate on an entity?

Actually, when we use @DynamicUpdate on an entity, Hibernate does not use the cached SQL statement for the update. Instead, it will generate a SQL statement each time we update the entity. This generated SQL includes only the changed columns.

In order to find out the changed columns, Hibernate needs to track the state of the current entity. So, when we change any field of an entity, it compares the current and the modified states of the entity.

This means that @DynamicUpdate has a performance overhead associated with it. Therefore, we should only use it when it’s actually required.

Certainly, there are a few scenarios where we should use this annotation — for example, if an entity represents a table that has a large number of columns and only a few of these columns are required to be updated frequently. Also, when we use versionless (dirty-attribute) optimistic locking, we need to use @DynamicUpdate.
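
For instance, here's a minimal sketch of an entity combining @DynamicUpdate with Hibernate's versionless optimistic locking:

@Entity
@DynamicUpdate
// OptimisticLockType.DIRTY builds the WHERE clause from the modified
// attributes, which only works when updates are generated dynamically
@OptimisticLocking(type = OptimisticLockType.DIRTY)
public class Account {
    // existing fields and methods
}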

4. Conclusion

In this tutorial, we’ve looked into the @DynamicUpdate annotation of Hibernate. We’ve used an example of Spring Data JPA to see @DynamicUpdate in action. Also, we’ve discussed when we should use this feature and when we should not.

As always, the complete code examples used in this tutorial are available over on GitHub.

Spring WebClient vs. RestTemplate


1. Introduction

In this tutorial, we’re going to compare two of Spring’s web client implementations: RestTemplate and Spring 5’s new reactive alternative, WebClient.

2. Blocking vs. Non-Blocking Client

It’s a common requirement in web applications to make HTTP calls to other services. Therefore, we need a web client tool.

2.1. RestTemplate Blocking Client

For a long time, Spring has been offering RestTemplate as a web client abstraction. Under the hood, RestTemplate uses the Java Servlet API, which is based on the thread-per-request model.

This means that the thread will block until the web client receives the response. The problem with blocking code is that each thread consumes some amount of memory and CPU cycles.

Let’s consider having a lot of incoming requests, all waiting for a slow service they need to produce the result.

Sooner or later, the requests waiting for the results will pile up. Consequently, the application will create many threads, which will exhaust the thread pool or occupy all the available memory. We can also experience performance degradation because of the frequent CPU context (thread) switching.

2.2. WebClient Non-Blocking Client

On the other side, WebClient uses an asynchronous, non-blocking solution provided by the Spring Reactive framework.

While RestTemplate occupies the calling thread for each event (HTTP call), WebClient will create something like a “task” for each event. Behind the scenes, the Reactive framework will queue those “tasks” and execute them only when the appropriate response is available.

The Reactive framework uses an event-driven architecture. It provides means to compose asynchronous logic through the Reactive Streams API. As a result, the reactive approach can process more logic while using fewer threads and system resources, compared to the synchronous/blocking method.

WebClient is part of the Spring WebFlux library. Therefore, we can additionally write client code using a functional, fluent API with reactive types (Mono and Flux) as a declarative composition.

3. Comparison Example

To demonstrate the differences between these two approaches, we’d need to run performance tests with many concurrent client requests. We’d see a significant performance degradation with the blocking method after a certain number of parallel client requests.

On the other side, the reactive/non-blocking method should give constant performance, regardless of the number of requests.

For the purpose of this article, let’s implement two REST endpoints, one using RestTemplate and the other using WebClient. Their task is to call another slow REST web service, which returns a list of tweets.

For a start, we’ll need the Spring Boot WebFlux starter dependency:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-webflux</artifactId>
</dependency>

Furthermore, here’s our slow service REST endpoint:

@GetMapping("/slow-service-tweets")
private List<Tweet> getAllTweets() {
    Thread.sleep(2000L); // delay
    return Arrays.asList(
      new Tweet("RestTemplate rules", "@user1"),
      new Tweet("WebClient is better", "@user2"),
      new Tweet("OK, both are useful", "@user1"));
}

3.1. Using RestTemplate to Call a Slow Service

Let’s now implement another REST endpoint which will call our slow service via the web client.

Firstly, we’ll use RestTemplate:

@GetMapping("/tweets-blocking")
public List<Tweet> getTweetsBlocking() {
    log.info("Starting BLOCKING Controller!");
    final String uri = getSlowServiceUri();

    RestTemplate restTemplate = new RestTemplate();
    ResponseEntity<List<Tweet>> response = restTemplate.exchange(
      uri, HttpMethod.GET, null,
      new ParameterizedTypeReference<List<Tweet>>(){});

    List<Tweet> result = response.getBody();
    result.forEach(tweet -> log.info(tweet.toString()));
    log.info("Exiting BLOCKING Controller!");
    return result;
}

When we call this endpoint, due to the synchronous nature of RestTemplate, the code will block waiting for the response from our slow service. Only when the response has been received will the rest of the code in this method be executed. In the logs, we’ll see:

Starting BLOCKING Controller!
Tweet(text=RestTemplate rules, username=@user1)
Tweet(text=WebClient is better, username=@user2)
Tweet(text=OK, both are useful, username=@user1)
Exiting BLOCKING Controller!

3.2. Using WebClient to Call a Slow Service

Secondly, let’s use WebClient to call the slow service:

@GetMapping(value = "/tweets-non-blocking", 
            produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<Tweet> getTweetsNonBlocking() {
    log.info("Starting NON-BLOCKING Controller!");
    Flux<Tweet> tweetFlux = WebClient.create()
      .get()
      .uri(getSlowServiceUri())
      .retrieve()
      .bodyToFlux(Tweet.class);

    tweetFlux.subscribe(tweet -> log.info(tweet.toString()));
    log.info("Exiting NON-BLOCKING Controller!");
    return tweetFlux;
}

In this case, WebClient returns a Flux publisher and the method execution gets completed. Once the result is available, the publisher will start emitting tweets to its subscribers. Note that a client (in this case, a web browser) calling this /tweets-non-blocking endpoint will also be subscribed to the returned Flux object.

Let’s observe the log this time:

Starting NON-BLOCKING Controller!
Exiting NON-BLOCKING Controller!
Tweet(text=RestTemplate rules, username=@user1)
Tweet(text=WebClient is better, username=@user2)
Tweet(text=OK, both are useful, username=@user1)

Note that this endpoint method completed before the response was received.

4. Conclusion

In this article, we explored two different ways of using web clients in Spring.

RestTemplate uses the Java Servlet API and is therefore synchronous and blocking. In contrast, WebClient is asynchronous and will not block the executing thread while waiting for the response to come back. Only when the response is ready will the notification be produced.

RestTemplate will still have its uses. However, in some cases, the non-blocking approach uses far fewer system resources than the blocking one. Hence, in those cases, WebClient is the preferable choice.

All of the code snippets, mentioned in the article, can be found over on GitHub.

Looping Diagonally Through a 2d Java Array


1. Overview

In this tutorial, we will see how to loop diagonally through a two-dimensional array. The solution that we provide can be used for a square two-dimensional array of any size.

2. Two-Dimensional Array

The key in working with elements of an array is knowing how to get a specific element from that array. For a two-dimensional array, we use row and column indices to get elements of an array. For this problem, we’ll use the following diagram to show how to get these elements.

Next, we need to understand how many diagonal lines we have in our array, as seen in the diagram. We do this by first getting the length of one dimension of the array and then using that to get the number of diagonal lines (diagonalLines). We then use the number of diagonal lines to get the mid-point which will help in the search for row and column indices.

In this example, the mid-point is three:

int length = twoDArray.length;
int diagonalLines = (length + length) - 1;
int midPoint = (diagonalLines / 2) + 1;

3. Getting Row and Column Indices

To loop through the whole array, we start looping from 1 until the loop variable is less than or equal to the diagonalLines variable.

for (int i = 1; i <= diagonalLines; i++) {
    // some operations
}

Let’s also introduce the idea of the number of items in a diagonal line, calling it itemsInDiagonal. For example, line 3 in the diagram above has 3 items (g, e, c) and line 4 has 2 (h, f). This variable is incremented by 1 in the loop while the loop variable i is less than or equal to midPoint, and decremented by 1 otherwise.

After incrementing or decrementing itemsInDiagonal, we then have a new loop with loop variable j, which is incremented from 0 while it is less than itemsInDiagonal. We then use the loop variables i and j to compute the row and column indices, appending each element to a StringBuilder called items. The logic of this calculation depends on whether the loop variable i is greater than midPoint or not. When i is greater than midPoint, we also use the length variable to determine the row and column indices:

int rowIndex;
int columnIndex;

if (i <= midPoint) {
    itemsInDiagonal++;
    for (int j = 0; j < itemsInDiagonal; j++) {
        rowIndex = (i - j) - 1;
        columnIndex = j;
        items.append(twoDArray[rowIndex][columnIndex]);
    }
} else {
    itemsInDiagonal--;
    for (int j = 0; j < itemsInDiagonal; j++) {
        rowIndex = (length - 1) - j;
        columnIndex = (i - length) + j;
        items.append(twoDArray[rowIndex][columnIndex]);
    }
}
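
Putting the pieces together, a complete version of the method might look like this (a sketch; the method name and the StringBuilder-based result are our own choices):

static String loopDiagonally(int[][] twoDArray) {
    int length = twoDArray.length;
    int diagonalLines = (length + length) - 1;
    int midPoint = (diagonalLines / 2) + 1;
    int itemsInDiagonal = 0;
    StringBuilder items = new StringBuilder();

    for (int i = 1; i <= diagonalLines; i++) {
        if (i <= midPoint) {
            itemsInDiagonal++;
            for (int j = 0; j < itemsInDiagonal; j++) {
                // upper-left half: walk up from row (i - j - 1), column j
                items.append(twoDArray[(i - j) - 1][j]);
            }
        } else {
            itemsInDiagonal--;
            for (int j = 0; j < itemsInDiagonal; j++) {
                // lower-right half: start from the last row
                items.append(twoDArray[(length - 1) - j][(i - length) + j]);
            }
        }
    }
    return items.toString();
}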

4. Conclusion

In this tutorial, we have shown how to loop diagonally through a square two-dimensional array using a method that helps in getting row and column indices.

As always, the full source code of the example is available over on GitHub.

Checking If an Array Is Sorted in Java


1. Overview

In this tutorial, we’ll see different ways to check if an array is sorted.

Before starting, though, it’d be interesting to check how to sort arrays in Java.

2. With a Loop

One way to check is with a for loop. We can iterate all the values of the array one by one.

Let’s see how to do it.

2.1. Primitive Array

Simply put, we’ll iterate over all positions but the last one. This is because we’re going to compare one position with the next one.

If a pair of neighboring elements is out of order, the method will return false. If none of the comparisons return false, it means that the array is sorted:

boolean isSorted(int[] array) {
    for (int i = 0; i < array.length - 1; i++) {
        if (array[i] > array[i + 1])
            return false;
    }
    return true;
}

2.2. Objects That Implement Comparable

We can do something similar with objects that implement Comparable. Instead of using a greater-than sign, we’ll use compareTo:

boolean isSorted(Comparable[] array) {
    for (int i = 0; i < array.length - 1; ++i) {
        if (array[i].compareTo(array[i + 1]) > 0)
            return false;
    }
    return true;
}

2.3. Objects That Don’t Implement Comparable

But, what if our objects don’t implement Comparable? In this case, we can instead create a Comparator.

In this example, we’re going to use the Employee object. It’s a simple POJO with three fields:

public class Employee implements Serializable {
    private int id;
    private String name;
    private int age;

    // getters and setters
}

Then, we need to choose which field we want to order by. Here, let’s order by the age field:

Comparator<Employee> byAge = Comparator.comparingInt(Employee::getAge);

And then, we can change our method to also take a Comparator:

boolean isSorted(Object[] array, Comparator comparator) {
    for (int i = 0; i < array.length - 1; ++i) {
        if (comparator.compare(array[i], (array[i + 1])) > 0)
            return false;
    }

    return true;
}
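
For example, we could call it like this (a usage sketch; the Employee instances and setter calls are hypothetical):

Employee employee1 = new Employee();
employee1.setAge(25);
Employee employee2 = new Employee();
employee2.setAge(30);

Employee[] employees = { employee1, employee2 };

assertTrue(isSorted(employees, byAge));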

3. Recursively

We can, of course, use recursion instead. The idea here is we’ll check two positions in the array and then recurse until we’ve checked every position.

3.1. Primitive Array

In this method, we check the last two positions. If they’re sorted, we’ll call the method again but with a previous position. If one of these positions is not sorted, the method will return false:

boolean isSorted(int[] array, int length) {
    if (array == null || length < 2) 
        return true; 
    if (array[length - 2] > array[length - 1])
        return false;
    return isSorted(array, length - 1);
}

3.2. Objects That Implement Comparable

Now, let’s look again at objects that implement Comparable. We’ll see that the same approach with compareTo will work:

boolean isSorted(Comparable[] array, int length) {
    if (array == null || length < 2) 
        return true; 
    if (array[length - 2].compareTo(array[length - 1]) > 0)
        return false;
    return isSorted(array, length - 1);
}

3.3. Objects That Don’t Implement Comparable

Lastly, let’s try our Employee object again, adding the Comparator parameter:

boolean isSorted(Object[] array, Comparator comparator, int length) {
    if (array == null || length < 2)
        return true;
    if (comparator.compare(array[length - 2], array[length - 1]) > 0)
        return false;
    return isSorted(array, comparator, length - 1);
}

4. Conclusion

In this tutorial, we have seen how to check if an array is sorted or not. We saw both iterative and recursive solutions.

Our recommendation is to use the loop solution. It’s cleaner and easier to read.

As usual, the source code from this tutorial can be found over on GitHub.

Java Weekly, Issue 291


Here we go…

1. Spring and Java

>> Deploying Spring Boot app as WAR [vojtechruzicka.com]

Even if you must deploy to a traditional app server, you can still build your app as a WAR without losing direct executability.

>> Reflecting over Exercises in Programming Style [blog.frankel.ch]

A nice write-up on reflection and the Kotlin Poet API for code generation.

>> DON’T make an ASS out of U and ME when dealing with Hibernate caching! [blog.codecentric.de]

And a great way to test your ORM behavior using ttddyy’s DataSourceProxy, a wrapper API around DataSource. Very cool.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musing

>> Differences between PUT and POST S3 signed URLs [advancedweb.hu]

Although PUT is much simpler to use, POST provides many more features.

>> Why won’t it… [blog.cleancoder.com]

And as we expect modern software to “get out of our way”, we should be mindful to make our systems more flexible.

Also worth reading:

3. Comics

>> More People Working At Home [dilbert.com]

>> Housing Costs [dilbert.com]

>> The New Consultant [dilbert.com]

4. Pick of the Week

>> No, You Can’t Make a Person Change [markmanson.net]


Embedded Redis Server with Spring Boot Test


1. Overview

Spring Data Redis provides an easy way to integrate with Redis instances.

However, in some cases, it’s more convenient to use an embedded server than to create an environment with a real server.

Therefore, we’ll learn how to set up and use the Embedded Redis Server.

2. Dependencies

Let’s begin by adding the necessary dependencies:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>

<dependency>
  <groupId>it.ozimov</groupId>
  <artifactId>embedded-redis</artifactId>
  <version>0.7.2</version>
  <scope>test</scope>
</dependency>

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-test</artifactId>
  <scope>test</scope>
</dependency>

The spring-boot-starter-test dependency contains everything we need to run integration tests.

Additionally, the embedded-redis contains the embedded server that we’ll use.

3. Setup

After adding the dependencies, we should define the connection settings between the Redis server and our application.

Let’s begin by creating a class that will hold our properties:

@Configuration
public class RedisProperties {
    private int redisPort;
    private String redisHost;

    public RedisProperties(
      @Value("${spring.redis.port}") int redisPort, 
      @Value("${spring.redis.host}") String redisHost) {
        this.redisPort = redisPort;
        this.redisHost = redisHost;
    }

    // getters
}

Next, we should create a configuration class that defines the connection and uses our properties:

@Configuration
@EnableRedisRepositories
public class RedisConfiguration {

    @Bean
    public LettuceConnectionFactory redisConnectionFactory(
      RedisProperties redisProperties) {
        return new LettuceConnectionFactory(
          redisProperties.getRedisHost(), 
          redisProperties.getRedisPort());
    }

    @Bean
    public RedisTemplate<?, ?> redisTemplate(LettuceConnectionFactory connectionFactory) {
        RedisTemplate<byte[], byte[]> template = new RedisTemplate<>();
        template.setConnectionFactory(connectionFactory);
        return template;
    }
}

The configuration is quite simple. Additionally, it allows us to run the embedded server on a different port.

Check out our Introduction to Spring Data Redis article to learn more about the Redis with Spring Boot.

4. Embedded Redis Server

Now, we’ll configure the embedded server and use it in one of our tests.

Firstly, let’s create an application.properties file in the test resource directory (src/test/resources):

spring.redis.host=localhost
spring.redis.port=6370

After that, we’ll create a @TestConfiguration-annotated class:

@TestConfiguration
public class TestRedisConfiguration {

    private RedisServer redisServer;

    public TestRedisConfiguration(RedisProperties redisProperties) {
        this.redisServer = new RedisServer(redisProperties.getRedisPort());
    }

    @PostConstruct
    public void postConstruct() {
        redisServer.start();
    }

    @PreDestroy
    public void preDestroy() {
        redisServer.stop();
    }
}

The server will start once the context is up, on the port that we’ve defined in our properties. Since we’ve chosen port 6370 rather than Redis’ default 6379, we can run the tests without stopping an actual Redis server.

Ideally, we’d like to start it on a random available port, but embedded Redis doesn’t have this feature yet. What we can do instead is obtain a free port via the ServerSocket API.
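
A minimal sketch of obtaining such a free port (the helper name is ours):

private static int findFreePort() throws IOException {
    // binding to port 0 makes the OS pick a free ephemeral port
    try (ServerSocket socket = new ServerSocket(0)) {
        return socket.getLocalPort();
    }
}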

Additionally, the server will stop once the context is destroyed.

The server can also be provided with our own executable:

this.redisServer = new RedisServer("/path/redis", redisProperties.getRedisPort());

Furthermore, the executable can be defined per operating system:

RedisExecProvider customProvider = RedisExecProvider.defaultProvider()
  .override(OS.UNIX, "/path/unix/redis")
  .override(OS.WINDOWS, Architecture.x86_64, "/path/windows/redis")
  .override(OS.MAC_OS_X, Architecture.x86_64, "/path/macosx/redis");

this.redisServer = new RedisServer(customProvider, redisProperties.getRedisPort());

Finally, let’s create a test that’ll use our TestRedisConfiguration class:

@RunWith(SpringRunner.class)
@SpringBootTest(classes = TestRedisConfiguration.class)
public class UserRepositoryIntegrationTest {

    @Autowired
    private UserRepository userRepository;

    @Test
    public void shouldSaveUser_toRedis() {
        UUID id = UUID.randomUUID();
        User user = new User(id, "name");

        User saved = userRepository.save(user);

        assertNotNull(saved);
    }
}

The user has been saved to our embedded Redis server.

Additionally, we had to manually add TestRedisConfiguration to @SpringBootTest. As we said earlier, the server starts before the test and stops after it.

5. Conclusion

The Embedded Redis Server is the perfect tool to replace the actual server in the test environment. We’ve seen how to configure it and how to use it in our test.

As always, the code for examples is available over on GitHub.

Checking if a URL Exists in Java


1. Overview

In this tutorial, we’ll be looking at how to check if a URL exists with an example in Java using the GET and HEAD HTTP methods.

2. URL Existence

There can be situations in programming when we have to know if a resource exists in the given URL before accessing it, or we may even need to check a URL to know the resource’s health.

We determine whether a resource exists at a URL by looking at the response code. Typically we look for a 200, which means “OK” and that the request has succeeded.

3. Using a GET Request

First of all, to make a GET request, we can create an instance of java.net.URL and pass the URL that we would like to access as a constructor argument. After that, we simply open the connection and get the response code:

URL url = new URL("http://www.example.com");
HttpURLConnection huc = (HttpURLConnection) url.openConnection();
 
int responseCode = huc.getResponseCode();
 
Assert.assertEquals(HttpURLConnection.HTTP_OK, responseCode);

When the resource is not found at the URL, we get a 404 response code:

URL url = new URL("http://www.example.com/xyz"); 
HttpURLConnection huc = (HttpURLConnection) url.openConnection();
 
int responseCode = huc.getResponseCode();
 
Assert.assertEquals(HttpURLConnection.HTTP_NOT_FOUND, responseCode);

As the default HTTP method in HttpURLConnection is GET, we’re not setting the request method in the examples in this section. We’ll see how to override the default method in the next section.

4. Using a HEAD Request 

HEAD is an HTTP request method that is identical to GET, except that it does not return the response body.

It acquires the response code along with the response headers that we’ll receive if the same resource is requested with a GET method.

To create a HEAD request, we can simply set the Request Method to HEAD before getting the response code:

URL url = new URL("http://www.example.com");
HttpURLConnection huc = (HttpURLConnection) url.openConnection();
huc.setRequestMethod("HEAD");
 
int responseCode = huc.getResponseCode();
 
Assert.assertEquals(HttpURLConnection.HTTP_OK, responseCode);

Similarly, when the resource is not found at the URL:

URL url = new URL("http://www.example.com/xyz");
HttpURLConnection huc = (HttpURLConnection) url.openConnection();
huc.setRequestMethod("HEAD");
 
int responseCode = huc.getResponseCode();
 
Assert.assertEquals(HttpURLConnection.HTTP_NOT_FOUND, responseCode);

By using the HEAD method and thereby not downloading the response body, we reduce the response time and bandwidth, and we improve performance.

Although most modern servers support the HEAD method, some home-grown or legacy servers might reject the HEAD method with an invalid method type error. So, we should use the HEAD method with caution.

5. Following Redirects

Finally, when looking for URL existence, it might be a good idea not to follow redirects. But this can also depend on the reason we’re looking for the URL.

When a URL is moved, the server can redirect the request to a new URL with 3xx response codes. The default is to follow a redirect. We can choose to follow or ignore the redirect based on our need.

To do this, we can either override the default value of followRedirects for all the HttpURLConnections:

URL url = new URL("http://www.example.com");
HttpURLConnection.setFollowRedirects(false);
HttpURLConnection huc = (HttpURLConnection) url.openConnection();
 
int responseCode = huc.getResponseCode();
 
Assert.assertEquals(HttpURLConnection.HTTP_OK, responseCode);

Or, we can disable following redirects for a single connection by using the setInstanceFollowRedirects() method:

URL url = new URL("http://www.example.com");
HttpURLConnection huc = (HttpURLConnection) url.openConnection();
huc.setInstanceFollowRedirects(false);
 
int responseCode = huc.getResponseCode();
 
Assert.assertEquals(HttpURLConnection.HTTP_OK, responseCode);
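
Putting these pieces together, a small helper that checks existence with a HEAD request and without following redirects might look like this (the method name is ours):

static boolean urlExists(String urlString) throws IOException {
    URL url = new URL(urlString);
    HttpURLConnection huc = (HttpURLConnection) url.openConnection();
    huc.setRequestMethod("HEAD");
    huc.setInstanceFollowRedirects(false);
    return huc.getResponseCode() == HttpURLConnection.HTTP_OK;
}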

6. Conclusion

In this article, we looked at checking the response code to find the availability of a URL. Also, we looked at how it might be a good idea to use the HEAD method to save bandwidth and get a quicker response.

The code example used in this tutorial is available in our GitHub project.

Using Spring Boot Application as a Dependency


1. Overview

In this tutorial, we’ll see how to use a Spring Boot application as a dependency of another project.

2. Spring Boot Packaging

The Spring Boot Maven and Gradle plugins both package our application as executable JARs – such a file can’t be used in another project since class files are put into BOOT-INF/classes. This is not a bug, but a feature.

In order to share classes with another project, the best approach to take is to create a separate jar containing shared classes, then make it a dependency of all modules that rely on them.

But if that isn’t possible, we can configure the plugin to generate a separate jar that can be used as a dependency.

2.1. Maven Configuration

Let’s configure the plugin with a classifier:

...
<build>
    ...
    <plugins>
        ...
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
            <configuration>
                <classifier>exec</classifier>
            </configuration>
        </plugin>
    </plugins>
</build>

Though, the configuration for Spring Boot 1.x would be a little different:

...
<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <executions>
        <execution>
            <goals>
                <goal>repackage</goal>
            </goals>
            <configuration>
                <classifier>exec</classifier>
            </configuration>
        </execution>
    </executions>
</plugin>

This will create two jars, one with the suffix exec as an executable jar, and another as a more typical jar that we can include in other projects.
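
The plain jar can then be declared as a regular dependency in the consuming project’s pom.xml (the coordinates below are hypothetical):

<dependency>
    <groupId>com.example</groupId>
    <artifactId>spring-boot-app</artifactId>
    <version>0.0.1-SNAPSHOT</version>
</dependency>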

3. Packaging with Maven Assembly Plugin

We may also use the maven-assembly-plugin to create the dependent jar:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-assembly-plugin</artifactId>
    <configuration>
        <descriptorRefs>
            <descriptorRef>jar-with-dependencies</descriptorRef>
        </descriptorRefs>
    </configuration>
    <executions>
        <execution>
            <id>make-assembly</id>
            <phase>package</phase>
            <goals>
                <goal>single</goal>
            </goals>
        </execution>
    </executions>
</plugin>

If we use this plugin along with the exec classifier in spring-boot-maven-plugin, it will generate three jars. The first two will be the same we saw previously.

The third will have whatever suffix we specified in the <descriptorRef> tag and will contain all the project’s transitive dependencies. If we include it in another project, we won’t need to separately include Spring dependencies.

4. Conclusion

In this article, we showed a couple of approaches to packaging a Spring Boot application for use as a dependency in other Maven projects.

As always, the code backing the article is available over on GitHub.

String toLowerCase and toUpperCase Methods in Java


1. Overview

In this tutorial, we’ll cover the toUpperCase and toLowerCase methods included in the Java String class.

We’ll start by creating a String called name:

String name = "John Doe";

2. Convert to Uppercase

To create a new uppercase String based on name, we call the toUpperCase method:

String uppercaseName = name.toUpperCase();

This results in uppercaseName having the value “JOHN DOE”:

assertEquals("JOHN DOE", uppercaseName);

Note that Strings are immutable in Java and that calling toUpperCase creates a new String. In other words, name is unchanged when calling toUpperCase.

3. Convert to Lowercase

Similarly, we create a new lowercase String based on name by calling toLowerCase:

String lowercaseName = name.toLowerCase();

This results in lowercaseName having the value “john doe”:

assertEquals("john doe", lowercaseName);

Just as with toUpperCase, toLowerCase does not change the value of name.

4. Change Case Using Locales

Additionally, by supplying a Locale to the toUpperCase and toLowerCase methods, we can change the case of a String using locale-specific rules.

For example, we can supply a Locale to uppercase a Turkish i (Unicode 0069):

Locale TURKISH = new Locale("tr");
System.out.println("\u0069".toUpperCase());
System.out.println("\u0069".toUpperCase(TURKISH));

Accordingly, this results in an uppercase I and a dotted uppercase I:

I
İ

We can verify this using the following assertions:

assertEquals("\u0049", "\u0069".toUpperCase());
assertEquals("\u0130", "\u0069".toUpperCase(TURKISH));

Likewise, we can do the same for toLowerCase using the Turkish I (Unicode 0049):

System.out.println("\u0049".toLowerCase());
System.out.println("\u0049".toLowerCase(TURKISH));

Consequently, this results in a lowercase i and a lowercase dotless i:

i
ı

We can verify this using the following assertions:

assertEquals("\u0069", "\u0049".toLowerCase());
assertEquals("\u0131", "\u0049".toLowerCase(TURKISH));

5. Conclusion

In conclusion, the Java String class includes the toUpperCase and toLowerCase methods for changing the case of a String. If needed, a Locale can be supplied to provide locale-specific rules when changing the case of a String.

The source code for this article, including examples, can be found over on GitHub.

Spring Request Parameters with Thymeleaf


1. Introduction

In our article Introduction to Using Thymeleaf in Spring, we saw how to bind user input to objects.

We used th:object and th:field in the Thymeleaf template and @ModelAttribute in the controller to bind data to a Java object. In this article, we’ll look at how to use the Spring annotation @RequestParam in combination with Thymeleaf.

2. Parameters in Forms

Let’s first create a simple controller that accepts four optional request parameters:

@Controller
public class MainController {
    @RequestMapping("/")
    public String index(
        @RequestParam(value = "participant", required = false) String participant,
        @RequestParam(value = "country", required = false) String country,
        @RequestParam(value = "action", required = false) String action,
        @RequestParam(value = "id", required = false) Integer id,
        Model model
    ) {
        model.addAttribute("id", id);
        List<Integer> userIds = asList(1,2,3,4);
        model.addAttribute("userIds", userIds);
        return "index";
    }
}

The name of our Thymeleaf template is index.html. In the following three sections, we’ll use different HTML form elements for the user to pass data to the controller.

2.1. Input Element

First, let’s create a simple form with a text input field and a button to submit the form:

<form th:action="@{/}">
<input type="text" th:name="participant"/> 
<input type="submit"/> 
</form>

The attribute th:name=”participant” binds the value of the input field to the parameter participant of the controller. For this to work, we need to annotate the parameter with @RequestParam(value = “participant”).

2.2. Select Element

Likewise for the HTML select element:

<form th:action="@{/}">
    <input type="text" th:name="participant"/>
    <select th:name="country">
        <option value="de">Germany</option>
        <option value="nl">Netherlands</option>
        <option value="pl">Poland</option>
        <option value="lv">Latvia</option>
    </select>
</form>

The value of the selected option is bound to the parameter country, annotated with @RequestParam(value = “country”).

2.3. Button Element

Another element where we can use th:name is the button element:

<form th:action="@{/}">
    <button type="submit" th:name="action" th:value="in">check-in</button>
    <button type="submit" th:name="action" th:value="out">check-out</button>
</form>

Depending on whether the first or second button is pressed to submit the form, the value of the parameter action will be either check-in or check-out.

3. Parameters in Hyperlinks

Another way to pass request parameters to a controller is via a hyperlink:

<a th:href="@{/index}">

And we can add parameters in parentheses:

<a th:href="@{/index(param1='value1',param2='value2')}">

Thymeleaf evaluates the above to:

<a href="/index?param1=value1&param2=value2">

Using Thymeleaf expressions to generate hyperlinks is particularly useful if we want to assign parameter values based on variables. For example, let’s generate a hyperlink for each user ID:

<th:block th:each="userId: ${userIds}">
    <a th:href="@{/(id=${userId})}"> User [[${userId}]]</a> <br/>
</th:block>

We can pass a list of user IDs as a property to the template:

List<Integer> userIds = asList(1,2,3);
model.addAttribute("userIds", userIds);

And the resulting HTML will be:

<a th:href="/?id=1"> User 1</a> <br/>
<a th:href="/?id=2"> User 2</a> <br/>
<a th:href="/?id=3"> User 3</a> <br/>

The parameter id in the hyperlink is bound to the parameter id, annotated with @RequestParam(value = “id”).

4. Summary

In this short article, we saw how to use Spring request parameters in combination with Thymeleaf.

First, we created a simple controller that accepts request parameters. Second, we looked at how to use Thymeleaf to generate an HTML page that can call our controller.

The full source code for all examples in this article can be found on GitHub.
