
Guide to Try in Javaslang


1. Overview

In this article, we will look at a functional way of error handling, as an alternative to the standard try-catch block.

We will be using the Try class from the Javaslang library, which allows us to create a more fluent and concise API by embedding error handling into the normal program flow.

If you want to get more information about Javaslang, check this article.

2. Standard Way Of Handling Exceptions

Let’s say that we have a simple interface with a method call() that returns a Response or, in case of a failure, throws a ClientException, which is a checked exception:

public interface HttpClient {
    Response call() throws ClientException;
}

The Response is a simple class with only one id field:

public class Response {
    public final String id;

    public Response(String id) {
        this.id = id;
    }
}

Let’s say that we have a service that calls that HttpClient; we then need to handle the checked exception in a standard try-catch block:

public Response getResponse() {
    try {
        return httpClient.call();
    } catch (ClientException e) {
        return null;
    }
}

When we want to create an API that is fluent and written in a functional way, each method that throws checked exceptions disrupts the program flow, and our code ends up consisting of many try-catch blocks, making it very hard to read.

Ideally, we want a special class that encapsulates the result state (a success or a failure), so that we can chain operations according to that result.

3. Handling Exceptions with Try

The Javaslang library gives us a special container that represents a computation that may either result in an exception or complete successfully.

Enclosing an operation within a Try object gives us a result that is either a Success or a Failure. Then we can execute further operations according to that type.

Let’s see how the same getResponse() method from the previous example looks when using Try:

public class JavaslangTry {
    private HttpClient httpClient;

    public Try<Response> getResponse() {
        return Try.of(httpClient::call);
    }

    // standard constructors
}

The important thing to notice is the return type Try<Response>. When a method returns such a result type, we need to handle it properly, keeping in mind that the result can be a Success or a Failure, so we need to deal with both cases explicitly at compile time.

3.1. Handling Success

Let’s write a test case that uses our JavaslangTry class for the case when httpClient returns a successful result. The method getResponse() returns a Try<Response> object, therefore we can call the map() method on it, which will execute an action on the Response only when the Try is of the Success type:

@Test
public void givenHttpClient_whenMakeACall_shouldReturnSuccess() {
    // given
    Integer defaultChainedResult = 1;
    String id = "a";
    HttpClient httpClient = () -> new Response(id);

    // when
    Try<Response> response = new JavaslangTry(httpClient).getResponse();
    Integer chainedResult = response
        .map(this::actionThatTakesResponse)
        .getOrElse(defaultChainedResult);
    Stream<String> stream = response.toStream().map(it -> it.id);

    // then
    assertTrue(!stream.isEmpty());
    assertTrue(response.isSuccess());
    response.onSuccess(r -> assertEquals(id, r.id));
    response.andThen(r -> assertEquals(id, r.id));
    assertNotEquals(defaultChainedResult, chainedResult);
}

The actionThatTakesResponse() function simply takes a Response as an argument and returns the hashCode of its id field:

public int actionThatTakesResponse(Response response) {
    return response.id.hashCode();
}

Once we map our value using the actionThatTakesResponse() function, we execute the getOrElse() method.

If the Try has a Success inside it, getOrElse() returns the value of the Try; otherwise, it returns defaultChainedResult. Our httpClient execution was successful, thus the isSuccess() method returns true. Then we can execute the onSuccess() method, which performs an action on the Response object. Try also has a method andThen that takes a Consumer which consumes the value of a Try when that value is a Success.

We can treat our Try response as a stream. To do so, we need to convert it to a Stream using the toStream() method; then all operations available in the Stream class can be used to operate on that result.

If we want to execute an action on the Try type itself, we can use the transform() method, which takes the Try as an argument and performs an action on it without unwrapping the enclosed value:

public int actionThatTakesTryResponse(Try<Response> response, int defaultTransformation) {
    return response.transform(tryResponse -> tryResponse.map(it -> it.id.hashCode())
      .getOrElse(defaultTransformation));
}

3.2. Handling Failure

Let’s write an example in which our HttpClient throws a ClientException when executed.

Compared to the previous example, our getOrElse method will return defaultChainedResult because the Try will be of the Failure type:

@Test
public void givenHttpClientFailure_whenMakeACall_shouldReturnFailure() {
    // given
    Integer defaultChainedResult = 1;
    HttpClient httpClient = () -> {
        throw new ClientException("problem");
    };

    // when
    Try<Response> response = new JavaslangTry(httpClient).getResponse();
    Integer chainedResult = response
        .map(this::actionThatTakesResponse)
        .getOrElse(defaultChainedResult);
    Option<Response> optionalResponse = response.toOption();

    // then
    assertTrue(optionalResponse.isEmpty());
    assertTrue(response.isFailure());
    response.onFailure(ex -> assertTrue(ex instanceof ClientException));
    assertEquals(defaultChainedResult, chainedResult);
}

The method getResponse() returns a Failure, thus the method isFailure() returns true.

We can execute the onFailure() callback on the returned response and see that the exception is of the ClientException type. An object of the Try type can also be mapped to the Option type using the toOption() method.

This is useful when we do not want to carry our Try result throughout the whole codebase, but we have methods that handle an explicit absence of a value using the Option type. When we map our Failure to an Option, the method isEmpty() returns true. When the Try object is a Success, calling toOption() on it will produce a defined Option, thus the method isDefined() will return true.
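
For completeness, here is a minimal sketch of that Success case, reusing an HttpClient stub like the one from the earlier test (identifiers are the same as above):

@Test
public void givenHttpClientSuccess_whenToOption_thenDefined() {
    // given
    HttpClient httpClient = () -> new Response("a");

    // when
    Option<Response> optionalResponse = new JavaslangTry(httpClient)
      .getResponse()
      .toOption();

    // then
    assertTrue(optionalResponse.isDefined());
    assertEquals("a", optionalResponse.get().id);
}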

3.3. Utilizing Pattern Matching

When our httpClient returns an Exception, we can perform pattern matching on the type of that Exception. Then, according to the type of that Exception, in the recover() method we can decide whether we want to recover from that exception and turn our Failure into a Success, or whether we want to leave our computation result as a Failure:

@Test
public void givenHttpClientThatFailure_whenMakeACall_shouldReturnFailureAndNotRecover() {
    // given
    Response defaultResponse = new Response("b");
    HttpClient httpClient = () -> {
        throw new RuntimeException("critical problem");
    };

    // when
    Try<Response> recovered = new JavaslangTry(httpClient).getResponse()
      .recover(r -> Match(r).of(
          Case(instanceOf(ClientException.class), defaultResponse)
      ));

    // then
    assertTrue(recovered.isFailure());
}

Pattern matching inside the recover() method will turn the Failure into a Success only if the type of the Exception is ClientException; otherwise, it will leave it as a Failure. We see that our httpClient throws a RuntimeException, so our recovery method will not handle that case, and therefore isFailure() returns true.

If we want to get the result from the recovered object, but rethrow the exception in case of a critical failure, we can do it using the getOrElseThrow() method:

recovered.getOrElseThrow(throwable -> {
    throw new RuntimeException(throwable);
});

Some errors are critical, and when they occur we want to signal that explicitly by throwing the exception higher in the call stack, letting the caller decide about further exception handling. In such cases, rethrowing the exception as in the above example is very useful.

When our client throws a non-critical exception, our pattern matching in the recover() method will turn our Failure into a Success. Here we are recovering from two types of exceptions: ClientException and IllegalArgumentException:

@Test
public void givenHttpClientThatFailure_whenMakeACall_shouldReturnFailureAndRecover() {
    // given
    Response defaultResponse = new Response("b");
    HttpClient httpClient = () -> {
        throw new ClientException("non critical problem");
    };

    // when
    Try<Response> recovered = new JavaslangTry(httpClient).getResponse()
        .recover(r -> Match(r).of(
                 Case(instanceOf(ClientException.class), defaultResponse),
                 Case(instanceOf(IllegalArgumentException.class), defaultResponse)
                ));
    
    // then
    assertTrue(recovered.isSuccess());
}

We see that isSuccess() returns true, so our recovery handling code worked successfully.

4. Conclusion

This article shows a practical use of the Try container from the Javaslang library. We looked at practical examples of using that construct to handle failure in a more functional way. Using Try allows us to create a more functional and readable API.

The implementation of all these examples and code snippets can be found in the GitHub project – this is a Maven-based project, so it should be easy to import and run as it is.


Guide to Guava’s Ordering


1. Overview

In this article, we will look at the Ordering class from the Guava library.

The Ordering class implements the Comparator interface and gives us a useful, fluent API for creating and chaining comparators.

Java 8’s Comparator.comparing() method provides similar functionality, so Ordering is particularly useful in projects that have not yet been migrated to Java 8 – which is why we will be using anonymous classes instead of lambda expressions. You can read more about this here.
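
For reference, here is how the age-based ordering built later in this article could be expressed on Java 8 in a single line (assuming a standard getAge() getter on the Person class shown below):

Comparator<Person> byAge = Comparator.comparing(Person::getAge);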

2. Creating Ordering

The Ordering class has useful factory methods that return an instance which can be used with a collection’s sort() method or anywhere else an instance of Comparator is needed.

We can create a natural-order instance by calling the natural() method:

List<Integer> integers = Arrays.asList(3, 2, 1);

integers.sort(Ordering.natural());

assertEquals(Arrays.asList(1, 2, 3), integers);

Let’s say that we have a collection of Person objects:

class Person {
    private String name;
    private Integer age;
    
    // standard constructors, getters
}

And we want to sort a list of such objects by the age field. We can create a custom Ordering that does exactly that by extending Ordering:

List<Person> persons = Arrays.asList(new Person("Michael", 10), new Person("Alice", 3));
Ordering<Person> orderingByAge = new Ordering<Person>() {
    @Override
    public int compare(Person p1, Person p2) {
        return Ints.compare(p1.age, p2.age);
    }
};

persons.sort(orderingByAge);

assertEquals(Arrays.asList(new Person("Alice", 3), new Person("Michael", 10)), persons);

We can then use orderingByAge anywhere a Comparator is needed, as we did by passing it to the sort() method above.

3. Chaining Orderings

One useful feature of this class is that we can chain different Orderings. Let’s say we have a collection of persons, and we want to sort it by the age field, with null age values at the beginning of the list:

List<Person> persons = Arrays.asList(
  new Person("Michael", 10),
  new Person("Alice", 3), 
  new Person("Thomas", null));
 
Ordering<Person> ordering = Ordering
  .natural()
  .nullsFirst()
  .onResultOf(new Function<Person, Comparable>() {
      @Override
      public Comparable apply(Person person) {
          return person.age;
      }
});

persons.sort(ordering);
        
assertEquals(Arrays.asList(
  new Person("Thomas", null), 
  new Person("Alice", 3), 
  new Person("Michael", 10)), persons);

The important thing to notice here is the order in which the particular Orderings are executed – from right to left. So first, onResultOf() is executed, and that method extracts the field that we want to compare.

Then the nullsFirst() comparator is executed. Because of that, the resulting sorted collection will have the Person object with a null age field at the beginning of the list.

At the end of the sorting process, the age fields are compared using natural ordering, as specified by the natural() method.
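
An Ordering can also be chained with a secondary comparator using compound(), which breaks ties left by the first comparison. A minimal sketch (assuming the Person class above) sorting by age and then by name:

Ordering<Person> byAgeThenName = orderingByAge.compound(new Ordering<Person>() {
    @Override
    public int compare(Person p1, Person p2) {
        return p1.name.compareTo(p2.name);
    }
});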

4. Conclusion

In this article, we looked at the Ordering class from the Guava library, which allows us to create more fluent and elegant Comparators. We created our own custom Ordering, used predefined ones from the API, and chained them to achieve a more customized order.

The implementation of all these examples and code snippets can be found in the GitHub project – this is a Maven-based project, so it should be easy to import and run as it is.

Java Web Weekly, Issue 162


Lots of weekend reading for this week.

Let’s jump right in…

1. Spring and Java

>> Java 9 Enters First Bug Fixing Round [infoq.com]

Java 9 vs Bugs – the first round 🙂

>> Compilation of Java code on the fly [frankel.ch]

A short example showing how to compile Java code at runtime (yes, you read that right).

>> Surprising += Cast [javaspecialists.eu]

Exploring edge cases of casting in Java.

>> Hibernate Tips: How to map an Enum to a database column [thoughts-on-java.org]

A short write-up about a not-so-trivial problem of mapping enums to database columns using Hibernate.

>> Chronicle Queue storing 1 TB in virtual memory on a 128 GB machine [vanilla-java.github.io]

Chronicle Queue utilizes heap space economically 🙂

>> Why Elvis Should Not Visit Java [codefx.org]

As long as Java’s type system doesn’t distinguish between nullable and non-nullable types, the Elvis operator isn’t a good fit for Java.

>> How to automatically validate entities with Hibernate Validator [thoughts-on-java.org]

A short guide to the highly important Hibernate Validator.

>> Tool Time: Preventing leaky APIs with jQAssistant [in.relation.to]

You can now perform some interesting static analysis of your APIs.

>> Java Community Oscars – The Top 10 Posts of 2016 [takipi.com]

It turns out Java developers host their own Oscars too 🙂

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Building Event-driven Microservices Using CQRS and Serverless [kennybastani.com]

A rich introduction to building event-driven microservices and CQRS.

>> Revealing Interfaces [michaelfeathers.silvrback.com]

A short trick that might help you clean up your codebase.

Also worth reading:

3. Musings

>> Stop calling yourself a DevOps engineer [insaneprogramming.be]

DevOps is not a role, it’s a mentality.

>> Deep learning: the silver bullet? [lemire.me]

Thoughts about the future of deep learning.

>> Measure Your Code to Get Back on Track [daedtech.com]

What isn’t measured, doesn’t improve. Definitely measure the quality of your code/work as the first step towards improving it.

>> Trust automation [ontestautomation.com]

How to establish trust with your test automation 🙂

>> Processing billions of events/day [plumbr.eu]

An in-depth case study of going from a monolith to scalable Kafka-backed microservices.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Tupac is dead [dilbert.com]

>> How do you get social media followers? [dilbert.com]

>> So wise [dilbert.com]

5. Pick of the Week

>> The Cave Essentials [randsinrepose.com]

A Guide to ConcurrentMap


1. Overview

Maps are naturally one of the most widely used styles of Java collections.

And, importantly, HashMap is not a thread-safe implementation, while Hashtable does provide thread-safety by synchronizing operations.

Even though Hashtable is thread safe, it is not very efficient. Another fully synchronized Map, Collections.synchronizedMap, does not exhibit great efficiency either. If we want thread-safety with high throughput under high concurrency, these implementations aren’t the way to go.

To solve the problem, the Java Collections Framework introduced ConcurrentMap in Java 1.5.

The following discussions are based on Java 1.8.

2. ConcurrentMap

ConcurrentMap is an extension of the Map interface. It aims to provide a structure and guidance for solving the problem of reconciling throughput with thread-safety.

By overriding several interface default methods, ConcurrentMap gives guidelines for valid implementations to provide thread-safety and memory-consistent atomic operations.

Several default implementations are overridden, disabling the null key/value support:

  • getOrDefault
  • forEach
  • replaceAll
  • computeIfAbsent
  • computeIfPresent
  • compute
  • merge

The following APIs are also overridden to support atomicity, without a default interface implementation:

  • putIfAbsent
  • remove
  • replace(key, oldValue, newValue)
  • replace(key, value)

The rest of the actions are inherited directly from Map, with behavior largely consistent with Map.
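
As a quick illustration of these atomic operations, here is a minimal sketch (the map and key names are arbitrary); each call executes as a single atomic action without any external synchronization:

ConcurrentMap<String, Integer> counters = new ConcurrentHashMap<>();

// inserts the mapping only if the key is absent; returns the previous value or null
counters.putIfAbsent("requests", 0);

// atomic compare-and-set: succeeds only if "requests" is currently mapped to 0
counters.replace("requests", 0, 1);

// removes the entry only if it is currently mapped to 1
counters.remove("requests", 1);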

3. ConcurrentHashMap

ConcurrentHashMap is the out-of-the-box ConcurrentMap implementation. For better performance, it consists of a set of tables (segments), each of which can be independently locked, so read and update operations mostly do not block.

The number of segments required is relative to the number of threads accessing the table, so that the update in progress per segment would be no more than one most of the time.

That’s why, compared to HashMap, the constructors provide an extra concurrencyLevel argument to control the estimated number of threads to use:

public ConcurrentHashMap(
  int initialCapacity, float loadFactor, int concurrencyLevel)

The other two arguments, initialCapacity and loadFactor, work quite the same as in HashMap.

3.1. Thread-Safety

ConcurrentMap guarantees memory consistency on key/value operations in a multi-threading environment.

Actions in a thread prior to placing an object into a ConcurrentMap as a key or value happen-before actions subsequent to the access or removal of that object in another thread.

To confirm, let’s have a look at a memory inconsistent case:

@Test
public void givenHashMap_whenSumParallel_thenError() throws Exception {
    Map<String, Integer> map = new HashMap<>();
    List<Integer> sumList = parallelSum100(map, 100);

    assertNotEquals(1, sumList
      .stream()
      .distinct()
      .count());
    long wrongResultCount = sumList
      .stream()
      .filter(num -> num != 100)
      .count();
    
    assertTrue(wrongResultCount > 0);
}

private List<Integer> parallelSum100(Map<String, Integer> map, 
  int executionTimes) throws InterruptedException {
    List<Integer> sumList = new ArrayList<>(1000);
    for (int i = 0; i < executionTimes; i++) {
        map.put("test", 0);
        ExecutorService executorService = 
          Executors.newFixedThreadPool(4);
        for (int j = 0; j < 10; j++) {
            executorService.execute(() -> {
                for (int k = 0; k < 10; k++)
                    map.computeIfPresent(
                      "test", 
                      (key, value) -> value + 1
                    );
            });
        }
        executorService.shutdown();
        executorService.awaitTermination(5, TimeUnit.SECONDS);
        sumList.add(map.get("test"));
    }
    return sumList;
}

For each map.computeIfPresent action in parallel, HashMap does not provide a consistent view of what should be the present integer value, leading to inconsistent and undesirable results.

As for ConcurrentHashMap, we can get a consistent and correct result:

@Test
public void givenConcurrentMap_whenSumParallel_thenCorrect() 
  throws Exception {
    Map<String, Integer> map = new ConcurrentHashMap<>();
    List<Integer> sumList = parallelSum100(map, 1000);

    assertEquals(1, sumList
      .stream()
      .distinct()
      .count());
    long wrongResultCount = sumList
      .stream()
      .filter(num -> num != 100)
      .count();
    
    assertEquals(0, wrongResultCount);
}

3.2. Null Key/Value

Most APIs provided by ConcurrentMap do not allow a null key or value, for example:

@Test(expected = NullPointerException.class)
public void givenConcurrentHashMap_whenPutWithNullKey_thenThrowsNPE() {
    concurrentMap.put(null, new Object());
}

@Test(expected = NullPointerException.class)
public void givenConcurrentHashMap_whenPutNullValue_thenThrowsNPE() {
    concurrentMap.put("test", null);
}

However, for compute* and merge actions, the computed value can be null, which indicates that the key-value mapping is removed if present, or remains absent if previously absent:

@Test
public void givenKeyPresent_whenComputeRemappingNull_thenMappingRemoved() {
    Object oldValue = new Object();
    concurrentMap.put("test", oldValue);
    concurrentMap.compute("test", (s, o) -> null);

    assertNull(concurrentMap.get("test"));
}

3.3. Performance

Under the hood, ConcurrentHashMap is somewhat similar to HashMap, with data access and updates based on a hash table (though more complex).

And of course, ConcurrentHashMap should yield much better performance in most concurrent cases for data retrieval and updates.

Let’s write a quick micro-benchmark for get and put performance and compare that to Hashtable and Collections.synchronizedMap, running both operations 500,000 times in 4 threads:

@Test
public void givenMaps_whenGetPut500KTimes_thenConcurrentMapFaster() 
  throws Exception {
    Map<String, Object> hashtable = new Hashtable<>();
    Map<String, Object> synchronizedHashMap = 
      Collections.synchronizedMap(new HashMap<>());
    Map<String, Object> concurrentHashMap = new ConcurrentHashMap<>();

    long hashtableAvgRuntime = timeElapseForGetPut(hashtable);
    long syncHashMapAvgRuntime = 
      timeElapseForGetPut(synchronizedHashMap);
    long concurrentHashMapAvgRuntime = 
      timeElapseForGetPut(concurrentHashMap);

    assertTrue(hashtableAvgRuntime > concurrentHashMapAvgRuntime);
    assertTrue(syncHashMapAvgRuntime > concurrentHashMapAvgRuntime);
}

private long timeElapseForGetPut(Map<String, Object> map) 
  throws InterruptedException {
    ExecutorService executorService = 
      Executors.newFixedThreadPool(4);
    long startTime = System.nanoTime();
    for (int i = 0; i < 4; i++) {
        executorService.execute(() -> {
            for (int j = 0; j < 500_000; j++) {
                int value = ThreadLocalRandom
                  .current()
                  .nextInt(10000);
                String key = String.valueOf(value);
                map.put(key, value);
                map.get(key);
            }
        });
    }
    executorService.shutdown();
    executorService.awaitTermination(1, TimeUnit.MINUTES);
    return (System.nanoTime() - startTime) / 500_000;
}

Keep in mind micro-benchmarks are only looking at a single scenario and aren’t always a good reflection of real world performance.

That being said, on an average dev machine running OS X, we see the following average sample results for 100 consecutive runs (in nanoseconds):

Hashtable: 1142.45
SynchronizedHashMap: 1273.89
ConcurrentHashMap: 230.2

In a multi-threading environment, where multiple threads are expected to access a common Map, the ConcurrentHashMap is clearly preferable.

However, when the Map is only accessible to a single thread, HashMap can be a better choice for its simplicity and solid performance.

3.4. Pitfalls

Retrieval operations generally do not block in ConcurrentHashMap and can overlap with update operations. For better performance, they reflect only the results of the most recently completed update operations, as stated in the official Javadoc.

There are several other facts to bear in mind:

  • results of aggregate status methods including size, isEmpty, and containsValue are typically useful only when a map is not undergoing concurrent updates in other threads:
@Test
public void givenConcurrentMap_whenUpdatingAndGetSize_thenError() 
  throws InterruptedException {
    Runnable collectMapSizes = () -> {
        for (int i = 0; i < MAX_SIZE; i++) {
            mapSizes.add(concurrentMap.size());
        }
    };
    Runnable updateMapData = () -> {
        for (int i = 0; i < MAX_SIZE; i++) {
            concurrentMap.put(String.valueOf(i), i);
        }
    };
    executorService.execute(updateMapData);
    executorService.execute(collectMapSizes);
    executorService.shutdown();
    executorService.awaitTermination(1, TimeUnit.MINUTES);

    assertNotEquals(MAX_SIZE, mapSizes.get(MAX_SIZE - 1).intValue());
    assertEquals(MAX_SIZE, concurrentMap.size());
}

If concurrent updates are under strict control, aggregate status would still be reliable.

Although these aggregate status methods do not guarantee real-time accuracy, they may be adequate for monitoring or estimation purposes.

  • hashCode matters: note that using many keys with exactly the same hashCode() is a sure way to slow down the performance of any hash table.

To ameliorate impact when keys are Comparable, ConcurrentHashMap may use comparison order among keys to help break ties. Still, we should avoid using the same hashCode() as much as we can.

  • iterators are only designed to be used by a single thread, as they provide weak consistency rather than fail-fast traversal, and they will never throw a ConcurrentModificationException (see the sketch after this list).
  • the default initial table capacity is 16, and it’s adjusted by the specified concurrency level:
public ConcurrentHashMap(
  int initialCapacity, float loadFactor, int concurrencyLevel) {
 
    //...
    if (initialCapacity < concurrencyLevel) {
        initialCapacity = concurrencyLevel;
    }
    //...
}
  • keys in ConcurrentHashMap are not in sorted order, so for cases when ordering is required, ConcurrentSkipListMap is a suitable choice.
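
As a quick sketch of that weak consistency (the exact output depends on timing), an iterator created before an update may or may not reflect it, but it never throws:

ConcurrentMap<String, Integer> map = new ConcurrentHashMap<>();
map.put("a", 1);

Iterator<String> iterator = map.keySet().iterator();
map.put("b", 2); // modified after iterator creation: no ConcurrentModificationException

while (iterator.hasNext()) {
    System.out.println(iterator.next()); // may or may not include "b"
}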

4. ConcurrentNavigableMap

For cases when ordering of keys is required, we can use ConcurrentSkipListMap, a concurrent version of TreeMap.

As a supplement to ConcurrentMap, ConcurrentNavigableMap supports a total ordering of its keys (in ascending order by default) and is concurrently navigable. The methods that return views of the map are overridden for concurrency compatibility:

  • subMap (both overloads)
  • headMap (both overloads)
  • tailMap (both overloads)
  • descendingMap

The iterators and spliterators of the following keySet() views are enhanced with weak memory consistency:

  • navigableKeySet
  • keySet
  • descendingKeySet

5. ConcurrentSkipListMap

Previously, we covered the NavigableMap interface and its implementation TreeMap. ConcurrentSkipListMap can be seen as a scalable, concurrent version of TreeMap.

In practice, there’s no concurrent implementation of a red-black tree in Java. Instead, ConcurrentSkipListMap implements a concurrent variant of skip lists, providing an expected average log(n) time cost for the containsKey, get, put and remove operations and their variants.

In addition to TreeMap’s features, key insertion, removal, update and access operations are guaranteed to be thread-safe. Here’s a comparison to TreeMap when navigating concurrently:

@Test
public void givenSkipListMap_whenNavConcurrently_thenCountCorrect() 
  throws InterruptedException {
    NavigableMap<Integer, Integer> skipListMap
      = new ConcurrentSkipListMap<>();
    int count = countMapElementByPollingFirstEntry(skipListMap, 10000, 4);
 
    assertEquals(10000 * 4, count);
}

@Test
public void givenTreeMap_whenNavConcurrently_thenCountError() 
  throws InterruptedException {
    NavigableMap<Integer, Integer> treeMap = new TreeMap<>();
    int count = countMapElementByPollingFirstEntry(treeMap, 10000, 4);
 
    assertNotEquals(10000 * 4, count);
}

private int countMapElementByPollingFirstEntry(
  NavigableMap<Integer, Integer> navigableMap, 
  int elementCount, 
  int concurrencyLevel) throws InterruptedException {
 
    for (int i = 0; i < elementCount * concurrencyLevel; i++) {
        navigableMap.put(i, i);
    }
    
    AtomicInteger counter = new AtomicInteger(0);
    ExecutorService executorService
      = Executors.newFixedThreadPool(concurrencyLevel);
    for (int j = 0; j < concurrencyLevel; j++) {
        executorService.execute(() -> {
            for (int i = 0; i < elementCount; i++) {
                if (navigableMap.pollFirstEntry() != null) {
                    counter.incrementAndGet();
                }
            }
        });
    }
    executorService.shutdown();
    executorService.awaitTermination(1, TimeUnit.MINUTES);
    return counter.get();
}

A full explanation of the performance concerns behind the scenes is beyond the scope of this article. The details can be found in ConcurrentSkipListMap’s Javadoc, which is located under java/util/concurrent in the src.zip file.

6. Conclusion

In this article, we mainly introduced the ConcurrentMap interface and the features of ConcurrentHashMap, and covered ConcurrentNavigableMap for cases where key ordering is required.

The full source code for all the examples used in this article can be found in the GitHub project.

Constructor Injection in Spring with Lombok


1. Introduction

Lombok is an extremely useful library for overcoming boilerplate code. If you are not familiar with it yet, I highly recommend taking a look at the previous tutorial – Introduction to Project Lombok.

In this article, we’ll demonstrate its usability when combined with Spring’s Constructor-Based Dependency Injection.

2. Constructor-Based Dependency Injection

A good way to wire dependencies in Spring is to use constructor-based Dependency Injection. This approach forces us to pass a component’s dependencies explicitly to its constructor.

As opposed to Field-Based Dependency Injection, it also provides a number of advantages:

  • no need to create a test-specific configuration component – dependencies are injected explicitly through a constructor
  • consistent design – all required dependencies are emphasized and guarded by the constructor’s definition
  • simple unit tests – reduced Spring Framework overhead
  • reclaimed freedom to use the final keyword

However, due to the need for writing a constructor, it tends to lead to a significantly larger code base. Consider the two examples of GreetingService and FarewellService:

@Component
public class GreetingService {

    @Autowired
    private Translator translator;

    public String produce() {
        return translator.translate("hello");
    }
}

@Component
public class FarewellService {

    private final Translator translator;

    public FarewellService(Translator translator) {
        this.translator = translator;
    }

    public String produce() {
        return translator.translate("bye");
    }
}

Basically, both of the components do the same thing – they call a configurable Translator with a task-specific word.

The second variant, though, is much more verbose because of the constructor boilerplate, which doesn’t really bring any value to the code.

In the newest Spring releases, the constructor does not need to be annotated with the @Autowired annotation if it is the only one in the class.

3. Constructor Injection with Lombok

With Lombok, it’s possible to generate a constructor for either all of a class’s fields (with @AllArgsConstructor) or all of its final fields (with @RequiredArgsConstructor). Moreover, if you still need an empty constructor, you can append an additional @NoArgsConstructor annotation.

Let’s create a third component, analogous to the previous two:

@Component
@AllArgsConstructor
public class ThankingService {

    private final Translator translator;

    public String produce() {
        return translator.translate("thank you");
    }
}

The above annotation will cause Lombok to generate a constructor for us:

@Component
public class ThankingService {

    private final Translator translator;

    public String produce() {
        return translator.translate("thank you");
    }

    /* Generated by Lombok */
    public ThankingService(Translator translator) {
        this.translator = translator;
    }
}
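
Since translator is the only field here and it is final, @RequiredArgsConstructor would generate exactly the same constructor; a sketch of that variant:

@Component
@RequiredArgsConstructor
public class ThankingService {

    private final Translator translator;

    public String produce() {
        return translator.translate("thank you");
    }
}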

4. Multiple Constructors

A constructor doesn’t have to be annotated as long as there is only one in a component, and Spring can unambiguously choose it as the right one to instantiate a new object. Once there are more, you also need to annotate the one that is to be used by the IoC container.

Consider the ApologizeService example:

@Component
@AllArgsConstructor
public class ApologizeService {

    private final Translator translator;
    private final String message;

    @Autowired
    public ApologizeService(Translator translator) {
        this(translator, "sorry");
    }

    public String produce() {
        return translator.translate(message);
    }
}

The above component is optionally configurable with the message field, which cannot change after the component is created (hence the lack of a setter). This required us to provide two constructors – one with the full configuration and the other with an implicit, default value for the message.

Unless one of the constructors is annotated with @Autowired, @Inject, or @Resource, Spring will throw an error:

Failed to instantiate [...]: No default constructor found;

If we wanted to annotate the Lombok-generated constructor, we would have to pass the annotation using the onConstructor parameter of @AllArgsConstructor:

@Component
@AllArgsConstructor(onConstructor = @__(@Autowired))
public class ApologizeService {
    // ...
}

The onConstructor parameter accepts an array of annotations (or a single annotation, as in this specific example) that are to be put on the generated constructor. The double-underscore idiom was introduced because of backward compatibility issues. According to the documentation:

The reason of the weird syntax is to make this feature work in javac 7 compilers; the @__ type is an annotation reference to the annotation type __ (double underscore) which doesn’t actually exist; this makes javac 7 delay aborting the compilation process due to an error because it is possible an annotation processor will later create the __ type.

5. Summary

In this tutorial, we showed that increased boilerplate is no reason to favor field-based DI over constructor-based DI.

Thanks to Lombok, it’s possible to automate common code generation without any runtime performance impact, reducing long, obscuring code to a single-line annotation.

The code used during the tutorial is available over on GitHub.

Working with Apache Thrift


1. Overview

In this article, we will discover how to develop cross-platform client-server applications with the help of an RPC framework called Apache Thrift.

We will cover:

  • Defining data types and service interfaces with IDL
  • Installing the library and generating the sources for different languages
  • Implementing the defined interfaces in a particular language
  • Implementing client/server software

If you want to go straight to examples, skip ahead to section 5.

2. Apache Thrift

Apache Thrift was originally developed by the Facebook development team and is currently maintained by Apache.

In comparison to Protocol Buffers, which manage cross-platform object serialization/deserialization processes, Thrift mainly focuses on the communication layer between components of your system.

Thrift uses a special Interface Description Language (IDL) to define data types and service interfaces, which are stored as .thrift files and later used as input by the compiler to generate the source code of client and server software that communicate across different programming languages.

To use Apache Thrift in your project, add this Maven dependency:

<dependency>
    <groupId>org.apache.thrift</groupId>
    <artifactId>libthrift</artifactId>
    <version>0.10.0</version>
</dependency>

You can find the latest version in the Maven repository.

3. Interface Description Language

As already described, the IDL allows us to define communication interfaces in a language-neutral way. Below you will find the currently supported types.

3.1. Base Types

  • bool – a boolean value (true or false)
  • byte – an 8-bit signed integer
  • i16 – a 16-bit signed integer
  • i32 – a 32-bit signed integer
  • i64 – a 64-bit signed integer
  • double – a 64-bit floating point number
  • string – a text string encoded using UTF-8 encoding

3.2. Special Types

  • binary – a sequence of unencoded bytes
  • optional – a Java 8 Optional type

3.3. Structs

Thrift structs are the equivalent of classes in OOP languages but without inheritance. A struct has a set of strongly typed fields, each with a unique name as an identifier. Fields may have various annotations (numeric field IDs, optional default values, etc.).

3.4. Containers

Thrift containers are strongly typed containers:

  • list – an ordered list of elements
  • set – an unordered set of unique elements
  • map<type1,type2> – a map of strictly unique keys to values

Container elements may be of any valid Thrift type.

3.5. Exceptions

Exceptions are functionally equivalent to structs, except that they inherit from the native exceptions.

3.6. Services

Services are actually communication interfaces defined using Thrift types. They consist of a set of named functions, each with a list of parameters and a return type.

4. Source Code Generation

4.1. Language Support

There’s a long list of currently supported languages:

  • C++
  • C#
  • Go
  • Haskell
  • Java
  • Javascript
  • Node.js
  • Perl
  • PHP
  • Python
  • Ruby

You can check the full list here.

4.2. Using Library’s Executable File

Just download the latest version, build and install it if necessary, and use the following syntax:

cd path/to/thrift
thrift -r --gen [LANGUAGE] [FILENAME]

In the command above, [LANGUAGE] is one of the supported languages and [FILENAME] is a file with the IDL definition.

Note the -r flag. It tells Thrift to generate code recursively once it notices includes in a given .thrift file.

4.3. Using Maven Plugin

Add the plugin in your pom.xml file:

<plugin>
   <groupId>org.apache.thrift.tools</groupId>
   <artifactId>maven-thrift-plugin</artifactId>
   <version>0.1.11</version>
   <configuration>
      <thriftExecutable>path/to/thrift</thriftExecutable>
   </configuration>
   <executions>
      <execution>
         <id>thrift-sources</id>
         <phase>generate-sources</phase>
         <goals>
            <goal>compile</goal>
         </goals>
      </execution>
   </executions>
</plugin>

After that just execute the following command:

mvn clean install

Note that this plugin is no longer maintained. Please visit this page for more information.

5. Example of a Client-Server Application

5.1. Defining Thrift File

Let’s define a simple service with exceptions and structures:

namespace cpp com.baeldung.thrift.impl
namespace java com.baeldung.thrift.impl

exception InvalidOperationException {
    1: i32 code,
    2: string description
}

struct CrossPlatformResource {
    1: i32 id,
    2: string name,
    3: optional string salutation
}

service CrossPlatformService {

    CrossPlatformResource get(1:i32 id) throws (1:InvalidOperationException e),

    void save(1:CrossPlatformResource resource) throws (1:InvalidOperationException e),

    list <CrossPlatformResource> getList() throws (1:InvalidOperationException e),

    bool ping() throws (1:InvalidOperationException e)
}

As you can see, the syntax is pretty simple and self-explanatory. We define a set of namespaces (per implementation language), an exception type, a struct, and finally a service interface which will be shared across different components.

Then just store it as a service.thrift file.

5.2. Compiling and Generating a Code

Now it’s time to run the compiler that will generate the code for us:

thrift -r -out generated --gen java /path/to/service.thrift

As you can see, we added the special flag -out to specify the output directory for generated files. If you did not get any errors, the generated directory will contain 3 files:

  • CrossPlatformResource.java
  • CrossPlatformService.java
  • InvalidOperationException.java

Let’s generate a C++ version of the service by running:

thrift -r -out generated --gen cpp /path/to/service.thrift

Now we get 2 different valid implementations (Java and C++) of the same service interface.

5.3. Adding a Service Implementation

Although Thrift has done most of the work for us, we still need to write our own implementation of the CrossPlatformService. In order to do that, we just need to implement the CrossPlatformService.Iface interface:

public class CrossPlatformServiceImpl implements CrossPlatformService.Iface {

    @Override
    public CrossPlatformResource get(int id) 
      throws InvalidOperationException, TException {
        return new CrossPlatformResource();
    }

    @Override
    public void save(CrossPlatformResource resource) 
      throws InvalidOperationException, TException {
        saveResource();
    }

    @Override
    public List<CrossPlatformResource> getList() 
      throws InvalidOperationException, TException {
        return Collections.emptyList();
    }

    @Override
    public boolean ping() throws InvalidOperationException, TException {
        return true;
    }
}

5.4. Writing a Server

As we said, we want to build a cross-platform client-server application, so we need a server for it. The great thing about Apache Thrift is that it has its own client-server communication framework which makes communication a piece of cake:

public class CrossPlatformServiceServer {

    // keep a reference so that stop() can shut the server down later
    private TServer server;

    public void start() throws TTransportException {
        TServerTransport serverTransport = new TServerSocket(9090);
        server = new TSimpleServer(new TServer.Args(serverTransport)
          .processor(new CrossPlatformService.Processor<>(new CrossPlatformServiceImpl())));

        System.out.print("Starting the server... ");

        server.serve();

        System.out.println("done.");
    }

    public void stop() {
        if (server != null && server.isServing()) {
            System.out.print("Stopping the server... ");

            server.stop();

            System.out.println("done.");
        }
    }
}

The first thing to do is define a transport layer with an implementation of the TServerTransport interface (or abstract class, to be more precise). Since we are talking about a server, we need to provide a port to listen on. Then we need to define a TServer instance and choose one of the available implementations:

  • TSimpleServer – for a simple server
  • TThreadPoolServer – for a multi-threaded server
  • TNonblockingServer – for a non-blocking multi-threaded server

And finally, we provide a processor implementation for the chosen server, which was already generated for us by Thrift, i.e. the CrossPlatformService.Processor class.
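
For instance, to serve requests on multiple threads, a minimal sketch swapping in TThreadPoolServer (with the same port and processor as above) could look like this:

TServerTransport serverTransport = new TServerSocket(9090);

// each client connection is handled by a worker thread from the pool
TServer server = new TThreadPoolServer(new TThreadPoolServer.Args(serverTransport)
  .processor(new CrossPlatformService.Processor<>(new CrossPlatformServiceImpl())));

server.serve();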

5.5. Writing a Client 

And here is the client’s implementation:

TTransport transport = new TSocket("localhost", 9090);
transport.open();

TProtocol protocol = new TBinaryProtocol(transport);
CrossPlatformService.Client client = new CrossPlatformService.Client(protocol);

boolean result = client.ping();

transport.close();

From the client’s perspective, the actions are pretty similar.

First of all, we define the transport and point it to our server instance, then choose a suitable protocol. The only difference is that here we initialize the client instance, which was, once again, already generated by Thrift, i.e. the CrossPlatformService.Client class.

Since it is based on the .thrift file definitions, we can directly call the methods described there. In this particular example, client.ping() makes a remote call to the server, which responds with true.

6. Conclusion

In this article, we’ve shown you the basic concepts and steps of working with Apache Thrift, and we’ve shown how to create a working example that utilizes the Thrift library.

As usual, all the examples can be found over in the GitHub repository.

Guide to Guava’s Preconditions


1. Overview

In this tutorial, we’ll show how to use Google Guava’s Preconditions class.

The Preconditions class provides a list of static methods for checking that a method or a constructor is invoked with valid parameter values. If a precondition fails, a tailored exception is thrown.

2. Google Guava’s Preconditions

Each static method in the Preconditions class has three variants:

  • No arguments. Exceptions are thrown without an error message
  • An extra Object argument acting as an error message. Exceptions are thrown with an error message
  • An extra String argument, with an arbitrary number of additional Object arguments acting as an error message with a placeholder. This behaves a bit like printf, but for GWT compatibility and efficiency it only allows %s indicators

Let’s have a look at how to use the Preconditions class.

2.1. Maven Dependency

Let’s start by adding Google’s Guava library dependency in the pom.xml:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>21.0</version>
</dependency>

The latest version of the dependency can be checked here.

3. checkArgument

The method checkArgument of the Preconditions class ensures the truthfulness of the parameters passed to the calling method. This method accepts a boolean condition and throws an IllegalArgumentException when the condition is false.

Let’s see how we can use this method with some examples.

3.1. Without an Error Message

We can use checkArgument without passing any extra parameters:

@Test
public void whenCheckArgumentEvaluatesFalse_throwsException() {
    int age = -18;
 
    assertThatThrownBy(() -> Preconditions.checkArgument(age > 0))
      .isInstanceOf(IllegalArgumentException.class)
      .hasMessage(null).hasNoCause();
}

3.2. With an Error Message

We can get a meaningful error message from the checkArgument method by passing an error message:

@Test
public void givenErrorMsg_whenCheckArgEvalsFalse_throwsException() {
    int age = -18;
    String message = "Age can't be zero or less than zero.";
 
    assertThatThrownBy(() -> Preconditions.checkArgument(age > 0, message))
      .isInstanceOf(IllegalArgumentException.class)
      .hasMessage(message).hasNoCause();
}

3.3. With a Template Error Message

We can get a meaningful error message with dynamic data from the checkArgument method by passing a template error message along with its arguments:

@Test
public void givenTemplateMsg_whenCheckArgEvalsFalse_throwsException() {
    int age = -18;
    String message = "Age should be positive number, you supplied %s.";
 
    assertThatThrownBy(
      () -> Preconditions.checkArgument(age > 0, message, age))
      .isInstanceOf(IllegalArgumentException.class)
      .hasMessage(message, age).hasNoCause();
}

4. checkElementIndex

The method checkElementIndex checks that an index is a valid index in a list, string, or array of a specified size. An element index may range from 0 inclusive to size exclusive. You don’t pass the list, string, or array directly; you just pass its size. This method throws an IndexOutOfBoundsException if the index is not a valid element index; otherwise, it returns the index that was passed to the method.

Let’s see how we can use this method, passing an error message so that a meaningful message is shown when it throws an exception:

@Test
public void givenArrayAndMsg_whenCheckElementEvalsFalse_throwsException() {
    int[] numbers = { 1, 2, 3, 4, 5 };
    String message = "Please check the bound of an array and retry";
 
    assertThatThrownBy(() -> 
      Preconditions.checkElementIndex(6, numbers.length - 1, message))
      .isInstanceOf(IndexOutOfBoundsException.class)
      .hasMessageStartingWith(message).hasNoCause();
}

5. checkNotNull

The method checkNotNull checks whether the value supplied as a parameter is null. If it is not, the method returns the checked value; if it is null, a NullPointerException is thrown.

Next, we’ll show how to use this method, passing an error message to get a meaningful exception message:

@Test
public void givenNullString_whenCheckNotNullWithMessage_throwsException () {
    String nullObject = null;
    String message = "Please check the Object supplied, its null!";
 
    assertThatThrownBy(() -> Preconditions.checkNotNull(nullObject, message))
      .isInstanceOf(NullPointerException.class)
      .hasMessage(message).hasNoCause();
}

We can also get a meaningful error message based on dynamic data from the checkNotNull method by passing a parameter to the error message:

@Test
public void whenCheckNotNullWithTemplateMessage_throwsException() {
    String nullObject = null;
    String message = "Please check the Object supplied, its %s!";
 
    assertThatThrownBy(
      () -> Preconditions.checkNotNull(nullObject, message, 
        new Object[] { null }))
      .isInstanceOf(NullPointerException.class)
      .hasMessage(message, nullObject).hasNoCause();
}

6. checkPositionIndex

The method checkPositionIndex checks that an index passed as an argument to this method is a valid position index in a list, string, or array of a specified size. A position index may range from 0 inclusive to size inclusive. You don’t pass the list, string, or array directly; you just pass its size.

This method throws an IndexOutOfBoundsException if the index passed is not between 0 and the size given, else it returns the index value.

Let’s see how we can get a meaningful error message from the checkPositionIndex method:

@Test
public void givenArrayAndMsg_whenCheckPositionEvalsFalse_throwsException() {
    int[] numbers = { 1, 2, 3, 4, 5 };
    String message = "Please check the bound of an array and retry";
 
    assertThatThrownBy(
      () -> Preconditions.checkPositionIndex(6, numbers.length - 1, message))
      .isInstanceOf(IndexOutOfBoundsException.class)
      .hasMessageStartingWith(message).hasNoCause();
}

7. checkState

The method checkState checks the validity of the state of an object and does not depend on the method arguments. For example, an Iterator might use it to check that next has been called before any call to remove. This method throws an IllegalStateException if the boolean expression passed to it (describing the object’s state) evaluates to false.

Let’s see how we can use this method, passing an error message so that a meaningful message is shown when it throws an exception:

@Test
public void givenStatesAndMsg_whenCheckStateEvalsFalse_throwsException() {
    int[] validStates = { -1, 0, 1 };
    int givenState = 10;
    String message = "You have entered an invalid state";
 
    assertThatThrownBy(
      () -> Preconditions.checkState(
        Arrays.binarySearch(validStates, givenState) > 0, message))
      .isInstanceOf(IllegalStateException.class)
      .hasMessageStartingWith(message).hasNoCause();
}

8. Conclusion

In this tutorial, we illustrated the methods of the Preconditions class from the Guava library. The Preconditions class provides a collection of static methods used to validate that a method or a constructor is invoked with valid parameter values.

The code belonging to the above examples can be found in the GitHub project – this is a Maven-based project, so it should be easy to import and run as is.

Dealing with Backpressure with RxJava


1. Overview

In this article, we will look at the way the RxJava library helps us to handle backpressure.

Simply put – RxJava utilizes a concept of reactive streams by introducing Observables, to which one or many Observers can subscribe. Dealing with possibly infinite streams is very challenging, as we need to face the problem of backpressure.

It’s not difficult to get into a situation in which an Observable emits items more rapidly than a subscriber can consume them. We will look at different solutions to the problem of a growing buffer of unconsumed items.

2. Hot Observables Versus Cold Observables

First, let’s create a simple consumer function that will be used as a consumer of elements from Observables that we will define later:

public class ComputeFunction {
    public static void compute(Integer v) {
        try {
            System.out.println("compute integer v: " + v);
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}

Our compute() function simply prints its argument. The important thing to notice here is the invocation of the Thread.sleep(1000) method – we do this to emulate a long-running task that will cause the Observable to fill up with items quicker than the Observer can consume them.

We have two types of Observables – hot and cold – which behave completely differently when it comes to backpressure handling.

2.1. Cold Observables

A cold Observable emits a particular sequence of items but can begin emitting this sequence when its Observer finds it convenient, and at whatever rate the Observer desires, without disrupting the integrity of the sequence. A cold Observable provides items in a lazy way.

The Observer takes elements only when it is ready to process them, and items do not need to be buffered in the Observable because they are requested in a pull fashion.

For example, if you create an Observable based on a static range of elements from one to one million, that Observable would emit the same sequence of items no matter how frequently those items are observed:

Observable.range(1, 1_000_000)
  .observeOn(Schedulers.computation())
  .subscribe(ComputeFunction::compute);

When we start our program, items will be computed by the Observer lazily and will be requested in a pull fashion. The Schedulers.computation() method means that we want to run our Observer within the computation thread pool in RxJava.

The output of the program will consist of the results of the compute() method, invoked one by one for the items from the Observable:

compute integer v: 1
compute integer v: 2
compute integer v: 3
compute integer v: 4
...

Cold Observables do not need to have any form of a backpressure because they work in a pull fashion. Examples of items emitted by a cold Observable might include the results of a database query, file retrieval, or web request.

2.2. Hot Observables

A hot Observable begins generating items and emits them immediately when they are created. This is contrary to the cold Observable’s pull model of processing. A hot Observable emits items at its own pace, and it is up to its Observers to keep up.

When the Observer is not able to consume items as quickly as they are produced by the Observable, they need to be buffered or handled in some other way, as they will fill up the memory, eventually causing an OutOfMemoryException.

Let’s consider an example of a hot Observable producing 1 million items to an end consumer that is processing those items. When the compute() method in the Observer takes some time to process every item, the Observable starts to fill up memory with items, causing the program to fail:

PublishSubject<Integer> source = PublishSubject.<Integer>create();

source.observeOn(Schedulers.computation())
  .subscribe(ComputeFunction::compute, Throwable::printStackTrace);

IntStream.range(1, 1_000_000).forEach(source::onNext);

Running that program will fail with a MissingBackpressureException because we didn’t define a way of handling an overproducing Observable.

Examples of items emitted by a hot Observable might include mouse & keyboard events, system events, or stock prices.

3. Buffering Overproducing Observable

The first way to handle an overproducing Observable is to define some kind of buffer for elements that cannot yet be processed by the Observer.

We can do it by calling a buffer() method:

PublishSubject<Integer> source = PublishSubject.<Integer>create();
        
source.buffer(1024)
  .observeOn(Schedulers.computation())
  .subscribe(ComputeFunction::compute, Throwable::printStackTrace);

Defining a buffer with a size of 1024 will give the Observer some time to catch up with an overproducing source. The buffer will store items that have not yet been processed. Note that buffer() emits the stored items as List batches, so the consumer needs to accept a list of elements rather than single ones.

We can increase the buffer size to have enough room for produced values.

Note, however, that generally this may only be a temporary fix, as the overflow can still happen if the source overproduces beyond the predicted buffer size.

4. Batching Emitted Items

We can batch overproduced items in windows of N elements.

When an Observable produces elements quicker than the Observer can process them, we can alleviate this by grouping the produced elements together and sending a batch of elements to an Observer that is able to process a collection of elements instead of processing them one by one:

PublishSubject<Integer> source = PublishSubject.<Integer>create();

source.window(500)
  .observeOn(Schedulers.computation())
  .subscribe(ComputeFunction::compute, Throwable::printStackTrace);

Using the window() method with the argument 500 tells the Observable to group elements into batches of 500. This technique can reduce the problem of an overproducing Observable when the Observer is able to process a batch of elements more quickly than it could process the same elements one by one.

5. Skipping Elements

If some of the values produced by Observable can be safely ignored, we can use the sampling within a specific time and throttling operators.

The methods sample() and throttleFirst() are taking duration as a parameter:

  • The sample() method periodically looks into the sequence of elements and emits the last item that was produced within the duration specified as a parameter
  • The throttleFirst() method emits the first item that was produced after the duration specified as a parameter

The duration is a time after which one specific element is picked from the sequence of produced elements. We can specify a strategy for handling backpressure by skipping elements:

PublishSubject<Integer> source = PublishSubject.<Integer>create();

source.sample(100, TimeUnit.MILLISECONDS)
  .observeOn(Schedulers.computation())
  .subscribe(ComputeFunction::compute, Throwable::printStackTrace);

We specified that our strategy for skipping elements will be the sample() method: we want a sample of the sequence taken over a duration of 100 milliseconds. That element will be emitted to the Observer.

Remember, however, that these operators only reduce the rate of value reception by the downstream Observer and thus they may still lead to MissingBackpressureException.

6. Handling a Filling Observable Buffer

In case our strategies of sampling or batching elements do not prevent the buffer from filling up, we need to implement a strategy for handling that situation.

We need to use an onBackpressureBuffer() method to prevent BufferOverflowException.

The onBackpressureBuffer() method takes three arguments: a capacity of an Observable buffer, a method that is invoked when a buffer is filling up, and a strategy for handling elements that need to be discarded from a buffer. Strategies for overflow are in a BackpressureOverflow class.

There are 4 types of actions that can be executed when the buffer fills up:

  • ON_OVERFLOW_ERROR –  this is the default behavior signaling a BufferOverflowException when the buffer is full
  • ON_OVERFLOW_DEFAULT –  currently it is the same as ON_OVERFLOW_ERROR
  • ON_OVERFLOW_DROP_LATEST – if an overflow happens, the current value is simply ignored and only the old values will be delivered once the downstream Observer requests more
  • ON_OVERFLOW_DROP_OLDEST – drops the oldest element in the buffer and adds the current value to it

Let’s see how to specify that strategy:

Observable.range(1, 1_000_000)
  .onBackpressureBuffer(16, () -> {}, BackpressureOverflow.ON_OVERFLOW_DROP_OLDEST)
  .observeOn(Schedulers.computation())
  .subscribe(e -> {}, Throwable::printStackTrace);

Here, our strategy for handling the overflowing buffer is to drop the oldest element in the buffer and add the newest item produced by the Observable.

Note that the last two strategies cause a discontinuity in the stream as they drop elements. In addition, they won’t signal a BufferOverflowException.

7. Dropping All Overproduced Elements

Whenever the downstream Observer is not ready to receive an element, we can use an onBackpressureDrop() method to drop that element from the sequence.

We can think of that method as an onBackpressureBuffer() with the buffer capacity set to zero and the strategy ON_OVERFLOW_DROP_LATEST.

This operator is useful when we can safely ignore values from a source Observable (such as mouse moves or current GPS location signals) as there will be more up-to-date values later on:

Observable.range(1, 1_000_000)
  .onBackpressureDrop()
  .observeOn(Schedulers.computation())
  .doOnNext(ComputeFunction::compute)
  .subscribe(v -> {}, Throwable::printStackTrace);

The onBackpressureDrop() method eliminates the problem of an overproducing Observable, but it needs to be used with caution.

8. Conclusion

In this article, we looked at the problem of an overproducing Observable and ways of dealing with backpressure. We looked at strategies for buffering, batching and skipping elements when the Observer is not able to consume elements as quickly as the Observable produces them.

The implementation of all these examples and code snippets can be found in the GitHub project – this is a Maven project, so it should be easy to import and run as it is.


Guide to PriorityBlockingQueue in Java


1. Introduction

In this article, we’ll focus on the PriorityBlockingQueue class and go over some practical examples.

Starting with the assumption that we already know what a Queue is, we will first demonstrate how elements in the PriorityBlockingQueue are ordered by priority.

Following this, we will demonstrate how this type of queue can be used to block a thread.

Finally, we will show how using these two features together can be useful when processing data across multiple threads.

2. Priority of Elements

Unlike a standard queue, you can’t just add any type of element to a PriorityBlockingQueue. There are two options:

  1. Adding elements which implement Comparable
  2. Adding elements which do not implement Comparable, on the condition that you provide a Comparator as well

By using either the Comparator or the Comparable implementations to compare elements, the PriorityBlockingQueue will always be sorted.

The aim is to implement comparison logic in a way in which the highest priority element is always ordered first. Then, when we remove an element from our queue, it will always be the one with the highest priority.

To begin with, let’s make use of our queue in a single thread, as opposed to using it across multiple ones. Doing this makes it easy to prove in a unit test how elements are ordered:

PriorityBlockingQueue<Integer> queue = new PriorityBlockingQueue<>();
ArrayList<Integer> polledElements = new ArrayList<>();
 
queue.add(1);
queue.add(5);
queue.add(2);
queue.add(3);
queue.add(4);

queue.drainTo(polledElements);

assertThat(polledElements).containsExactly(1, 2, 3, 4, 5);

As we can see, despite adding the elements to the queue in a random order, they will be ordered when we start polling them. This is because the Integer class implements Comparable, which will, in turn, be used to make sure we take them out from the queue in ascending order.

It’s also worth noting that when two elements are compared and are the same, there’s no guarantee of how they will be ordered.

3. Using the Queue to Block

If we were dealing with a standard queue, we would call poll() to retrieve elements. However, if the queue was empty, a call to poll() would return null.

The PriorityBlockingQueue implements the BlockingQueue interface, which gives us some extra methods that allow us to block when removing from an empty queue. Let’s try using the take() method, which should do exactly that:

PriorityBlockingQueue<Integer> queue = new PriorityBlockingQueue<>();

new Thread(() -> {
  System.out.println("Polling...");

  try {
      Integer poll = queue.take();
      System.out.println("Polled: " + poll);
  } catch (InterruptedException e) {
      e.printStackTrace();
  }
}).start();

Thread.sleep(TimeUnit.SECONDS.toMillis(5));
System.out.println("Adding to queue");
queue.add(1);

Although using sleep() is a slightly brittle way of demonstrating things, when we run this code we will see:

Polling...
Adding to queue
Polled: 1

This proves that take() blocked until an item was added:

  1. The thread will print “Polling” to prove that it’s started
  2. The test will then pause for around five seconds, to prove the thread must have called take() by this point
  3. We add to the queue, and should more or less instantly see “Polled: 1” to prove that take() returned the element as soon as it became available

It’s also worth mentioning that the BlockingQueue interface also provides us with ways of blocking when adding to full queues.

However, a PriorityBlockingQueue is unbounded. This means that it will never be full, and thus it will always be possible to add new elements.

4. Using Blocking and Prioritization Together

Now that we’ve explained the two key concepts of a PriorityBlockingQueue, let’s use them both together. We can simply expand on our previous example, but this time add more elements to the queue:

Thread thread = new Thread(() -> {
    System.out.println("Polling...");
    while (true) {
        try {
            Integer poll = queue.take();
            System.out.println("Polled: " + poll);
        } 
        catch (InterruptedException e) { 
            e.printStackTrace();
        }
    }
});

thread.start();

Thread.sleep(TimeUnit.SECONDS.toMillis(5));
System.out.println("Adding to queue");

queue.addAll(newArrayList(1, 5, 6, 1, 2, 6, 7));
Thread.sleep(TimeUnit.SECONDS.toMillis(1));

Again, while this is a little brittle because of the use of sleep(), it still shows us a valid use case. We now have a queue which blocks, waiting for elements to be added. We’re then adding lots of elements at once, and then showing that they will be handled in priority order. The output will look like this:

Polling...
Adding to queue
Polled: 1
Polled: 1
Polled: 2
Polled: 5
Polled: 6
Polled: 6
Polled: 7

5. Conclusion

In this guide, we’ve demonstrated how we can use a PriorityBlockingQueue in order to block a thread until some items have been added to it, and also that we are able to process those items based on their priority.

The implementation of these examples can be found over on GitHub. This is a Maven-based project, so should be easy to run as is.

Spring @RequestMapping New Shortcut Annotations


1. Overview

Spring 4.3 introduced some very cool method-level composed annotations to smooth out the handling of @RequestMapping in typical Spring MVC projects.

In this article, we will learn how to use them in an efficient way.

2. New Annotations

Typically, if we want to implement the URL handler using traditional @RequestMapping annotation, it would have been something like this:

@RequestMapping(value = "/get/{id}", method = RequestMethod.GET)

The new approach makes it possible to shorten this simply to:

@GetMapping("/get/{id}")

Spring currently supports five inbuilt annotations for handling the different types of incoming HTTP request methods: GET, POST, PUT, DELETE and PATCH. These annotations are:

  • @GetMapping
  • @PostMapping
  • @PutMapping
  • @DeleteMapping
  • @PatchMapping

From the naming convention, we can see that each annotation is meant to handle the respective incoming request method type, i.e. @GetMapping is used to handle the GET request method, @PostMapping is used to handle the POST request method, etc.

3. How It Works

All of the above annotations are already internally annotated with @RequestMapping and the respective value in the method element.

For example, if we look at the source code of the @GetMapping annotation, we can see that it’s already annotated with RequestMethod.GET in the following way:

@Target({ java.lang.annotation.ElementType.METHOD })
@Retention(RetentionPolicy.RUNTIME)
@Documented
@RequestMapping(method = { RequestMethod.GET })
public @interface GetMapping {
    // annotation attributes
}

All the other annotations are created in the same way, i.e. @PostMapping is annotated with RequestMethod.POST, @PutMapping is annotated with RequestMethod.PUT, etc.

The full source code of the annotations is available here.

4. Implementation

Let’s try to use these annotations to build a quick REST application.

Please note that since we use Maven to build the project and Spring MVC to create our application, we need to add the necessary dependencies to the pom.xml:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-webmvc</artifactId>
    <version>4.3.6.RELEASE</version>
</dependency>

The latest version of spring-webmvc is available in the Central Maven Repository.

Now, we need to create the controller to map the incoming request URLs. Inside this controller, we will use each of these annotations one by one.

4.1. @GetMapping

@GetMapping("/get")
public @ResponseBody ResponseEntity<String> get() {
    return new ResponseEntity<String>("GET Response", HttpStatus.OK);
}
@GetMapping("/get/{id}")
public @ResponseBody ResponseEntity<String>
  getById(@PathVariable String id) {
    return new ResponseEntity<String>("GET Response : " 
      + id, HttpStatus.OK);
}

4.2. @PostMapping

@PostMapping("/post")
public @ResponseBody ResponseEntity<String> post() {
    return new ResponseEntity<String>("POST Response", HttpStatus.OK);
}

4.3. @PutMapping

@PutMapping("/put")
public @ResponseBody ResponseEntity<String> put() {
    return new ResponseEntity<String>("PUT Response", HttpStatus.OK);
}

4.4. @DeleteMapping

@DeleteMapping("/delete")
public @ResponseBody ResponseEntity<String> delete() {
    return new ResponseEntity<String>("DELETE Response", HttpStatus.OK);
}

4.5. @PatchMapping

@PatchMapping("/patch")
public @ResponseBody ResponseEntity<String> patch() {
    return new ResponseEntity<String>("PATCH Response", HttpStatus.OK);
}

Points to note:

  • We have used the necessary annotations to handle the proper incoming HTTP methods with their URIs. For example, @GetMapping to handle the “/get” URI, @PostMapping to handle the “/post” URI and so on
  • Since we are making a REST-based application, we are returning a constant string (unique to each request type) with a 200 response code to simplify the application. We have used Spring’s @ResponseBody annotation in this case.
  • If we had to handle any URL path variable, we can simply do it in a much more concise way than we used to with @RequestMapping.

5. Testing the Application

To test the application, we need to create a few test cases using JUnit. We will use SpringJUnit4ClassRunner to initiate the test class and create five different test cases, one for each annotation and handler we declared in the controller.

Let’s look at the example test case for @GetMapping:

@Test 
public void givenUrl_whenGetRequest_thenFindGetResponse() 
  throws Exception {

    MockHttpServletRequestBuilder builder = MockMvcRequestBuilders
      .get("/get");

    ResultMatcher contentMatcher = MockMvcResultMatchers.content()
      .string("GET Response");

    this.mockMvc.perform(builder).andExpect(contentMatcher)
      .andExpect(MockMvcResultMatchers.status().isOk());

}

As we can see, we are expecting a constant string “GET Response“, once we hit the GET URL “/get”.

Now, let’s create the test case to test @PostMapping:

@Test 
public void givenUrl_whenPostRequest_thenFindPostResponse() 
  throws Exception {
    
    MockHttpServletRequestBuilder builder = MockMvcRequestBuilders
      .post("/post");
	
    ResultMatcher contentMatcher = MockMvcResultMatchers.content()
      .string("POST Response");
	
    this.mockMvc.perform(builder).andExpect(contentMatcher)
      .andExpect(MockMvcResultMatchers.status().isOk());
	
}

In the same way, we created the rest of the test cases to test all of the HTTP methods.

Alternatively, we can always use any common REST client, for example, Postman or RESTClient, to test our application. In that case, we need to be a little careful to choose the correct HTTP method type; otherwise, the application will respond with a 405 error status.

6. Conclusion

In this article, we had a quick introduction to the different types of @RequestMapping shortcuts for quick web development using the traditional Spring MVC framework. We can utilize these shortcuts to create a cleaner code base.

As always, you can find the source code for this tutorial in the GitHub project.

Spring Security – Cache Control Headers


1. Introduction

In this article, we’ll explore how we can control HTTP caching with Spring Security.

We’ll demonstrate its default behavior, and also explain the reasoning behind it. We’ll then look at ways to change this behavior, either partially or completely.

2. Default Caching Behaviour

By using cache control headers effectively, we can instruct our browser to cache resources and avoid network hops. This decreases latency, and also the load on our server.

By default, Spring Security sets specific cache control header values for us, without us having to configure anything.

First, let’s set up Spring Security for our application:

@Configuration
@EnableWebSecurity
@EnableGlobalMethodSecurity
public class SpringSecurityConfig extends WebSecurityConfigurerAdapter {
 
    @Override
    protected void configure(HttpSecurity http) throws Exception {}
}

We’re overriding configure() to do nothing; this means that we won’t need to be authenticated to hit an endpoint, enabling us to focus purely on testing caching.

Next, let’s implement a simple REST endpoint:

@GetMapping("/default/users/{name}")
public ResponseEntity<UserDto> getUserWithDefaultCaching(@PathVariable String name) {
    return ResponseEntity.ok(new UserDto(name));
}

The resulting cache-control header will look like this:

[cache-control: no-cache, no-store, max-age=0, must-revalidate]

Finally, let’s implement a test which hits the endpoint, and assert what headers are sent in the response:

given()
  .when()
  .get(getBaseUrl() + "/default/users/Michael")
  .then()
  .header("Cache-Control", "no-cache, no-store, max-age=0, must-revalidate")
  .header("Pragma", "no-cache");

Essentially, what this means is that a browser will never cache this response.

Whilst this may seem inefficient, there is actually a good reason for this default behavior: if one user logs out and another one logs in, we don’t want them to be able to see the previous user’s resources. It’s much safer to not cache anything by default, and to leave us responsible for enabling caching explicitly.

3. Overriding the Default Caching Behaviour

Sometimes we might be dealing with resources that we do want to be cached. If we are going to enable caching, it would be safest to do so on a per-resource basis. This means any other resources will still not be cached by default.

To do this, let’s try overriding the cache control headers in a single handler method, by using the CacheControl class. The CacheControl class is a fluent builder, which makes it easy for us to create different types of caching:

@GetMapping("/users/{name}")
public ResponseEntity<UserDto> getUser(@PathVariable String name) { 
    return ResponseEntity.ok()
      .cacheControl(CacheControl.maxAge(60, TimeUnit.SECONDS))
      .body(new UserDto(name));
}

Let’s hit this endpoint in our test, and assert that we have changed the headers:

given()
  .when()
  .get(getBaseUrl() + "/users/Michael")
  .then()
  .header("Cache-Control", "max-age=60");

As we can see, we’ve overridden the defaults, and now our response will be cached by a browser for 60 seconds.

4. Turning off the Default Caching Behavior

We can also turn off the default cache control headers of Spring Security altogether. This is quite a risky thing to do, and not really recommended. But if we really want to, then we can try it by overriding the configure() method of the WebSecurityConfigurerAdapter:

@Override
protected void configure(HttpSecurity http) throws Exception {
    http.headers().disable();
}

Now, let’s make a request to our endpoint again and see what response we get:

given()
  .when()
  .get(getBaseUrl() + "/default/users/Michael")
  .then()
  .headers(new HashMap<String, Object>());

As we can see, no cache headers have been set at all. Again, this is not secure, but it does prove how we can turn off the default headers if we want to.

5. Conclusion

This article demonstrates how Spring Security disables HTTP caching by default and explains that this is because we do not want to cache secure resources. We’ve also seen how we can disable or modify this behavior as we see fit.

The implementation of all these examples and code snippets can be found in the GitHub project – this is a Maven project, so it should be easy to import and run as it is.

Working with Microsoft Excel in Java


1. Introduction

In this tutorial, we will demonstrate the use of the Apache POI and JExcel APIs for working with Excel spreadsheets.

Both libraries can be used to dynamically read, write and modify the content of an Excel spreadsheet and provide an effective way of integrating Microsoft Excel into a Java Application.

2. Maven Dependencies

To begin, we will need to add the following dependencies to our pom.xml file:

<dependency>
    <groupId>org.apache.poi</groupId>
    <artifactId>poi</artifactId>
    <version>3.15</version>
</dependency>
<dependency>
    <groupId>org.apache.poi</groupId>
    <artifactId>poi-ooxml</artifactId>
    <version>3.15</version>
</dependency>

The latest versions of poi-ooxml and jxls-jexcel can be downloaded from Maven Central.

3. Apache POI

The Apache POI library supports both .xls and .xlsx files and is a more complex library than other Java libraries for working with Excel files.

It provides the Workbook interface for modeling an Excel file, and the Sheet, Row, and Cell interfaces that model the elements of an Excel file, as well as implementations of each interface for both file formats.

When working with the newer .xlsx file format, you would use the XSSFWorkbook, XSSFSheet, XSSFRow, and XSSFCell classes.

To work with the older .xls format, use the HSSFWorkbook, HSSFSheet, HSSFRow, and HSSFCell classes.

3.1. Reading from Excel

Let’s create a method that opens a .xlsx file, then reads content from the first sheet of the file.

The method for reading cell content varies depending on the type of the data in the cell. The type of the cell content can be determined using the getCellTypeEnum() method of the Cell interface.

First, let’s open the file from a given location:

FileInputStream file = new FileInputStream(new File(fileLocation));
Workbook workbook = new XSSFWorkbook(file);

Next, let’s retrieve the first sheet of the file and iterate through each row:

Sheet sheet = workbook.getSheetAt(0);

Map<Integer, List<String>> data = new HashMap<>();
int i = 0;
for (Row row : sheet) {
    data.put(i, new ArrayList<String>());
    for (Cell cell : row) {
        switch (cell.getCellTypeEnum()) {
            case STRING: ... break;
            case NUMERIC: ... break;
            case BOOLEAN: ... break;
            case FORMULA: ... break;
            default: data.get(new Integer(i)).add(" ");
        }
    }
    i++;
}

Apache POI has different methods for reading each type of data. Let’s expand on the content of each switch case above.

When the cell type enum value is STRING, the content will be read using the getRichStringCellValue() method of Cell interface:

data.get(new Integer(i)).add(cell.getRichStringCellValue().getString());

Cells having the NUMERIC content type can contain either a date or a number and are read in the following manner:

if (DateUtil.isCellDateFormatted(cell)) {
    data.get(i).add(cell.getDateCellValue() + "");
} else {
    data.get(i).add(cell.getNumericCellValue() + "");
}

For BOOLEAN values, we have the getBooleanCellValue() method:

data.get(i).add(cell.getBooleanCellValue() + "");

And when the cell type is FORMULA, we can use the getCellFormula() method:

data.get(i).add(cell.getCellFormula() + "");

3.2. Writing to Excel

Apache POI uses the same interfaces presented in the previous section for writing to an Excel file and has better support for styling than JExcel.

Let’s create a method that writes a list of persons to a sheet titled “Persons”. First, we will create and style a header row that contains “Name” and “Age” cells:

Workbook workbook = new XSSFWorkbook();

Sheet sheet = workbook.createSheet("Persons");
sheet.setColumnWidth(0, 6000);
sheet.setColumnWidth(1, 4000);

Row header = sheet.createRow(0);

CellStyle headerStyle = workbook.createCellStyle();
headerStyle.setFillForegroundColor(IndexedColors.LIGHT_BLUE.getIndex());
headerStyle.setFillPattern(FillPatternType.SOLID_FOREGROUND);

XSSFFont font = ((XSSFWorkbook) workbook).createFont();
font.setFontName("Arial");
font.setFontHeightInPoints((short) 16);
font.setBold(true);
headerStyle.setFont(font);

Cell headerCell = header.createCell(0);
headerCell.setCellValue("Name");
headerCell.setCellStyle(headerStyle);

headerCell = header.createCell(1);
headerCell.setCellValue("Age");
headerCell.setCellStyle(headerStyle);

Next, let’s write the content of the table with a different style:

CellStyle style = workbook.createCellStyle();
style.setWrapText(true);

Row row = sheet.createRow(2);
Cell cell = row.createCell(0);
cell.setCellValue("John Smith");
cell.setCellStyle(style);

cell = row.createCell(1);
cell.setCellValue(20);
cell.setCellStyle(style);

Finally, let’s write the content to a ‘temp.xlsx’ file in the current directory and close the workbook:

File currDir = new File(".");
String path = currDir.getAbsolutePath();
String fileLocation = path.substring(0, path.length() - 1) + "temp.xlsx";

FileOutputStream outputStream = new FileOutputStream(fileLocation);
workbook.write(outputStream);
workbook.close();

Let’s test the above methods in a JUnit test that writes content to the temp.xlsx file, then reads the same file to verify that it contains the text we have written:

public class ExcelTest {

    private ExcelPOIHelper excelPOIHelper;
    private static final String FILE_NAME = "temp.xlsx";
    private String fileLocation;

    @Before
    public void generateExcelFile() throws IOException {
        File currDir = new File(".");
        String path = currDir.getAbsolutePath();
        fileLocation = path.substring(0, path.length() - 1) + FILE_NAME;

        excelPOIHelper = new ExcelPOIHelper();
        excelPOIHelper.writeExcel();
    }

    @Test
    public void whenParsingPOIExcelFile_thenCorrect() throws IOException {
        Map<Integer, List<String>> data
          = excelPOIHelper.readExcel(fileLocation);

        assertEquals("Name", data.get(0).get(0));
        assertEquals("Age", data.get(0).get(1));

        assertEquals("John Smith", data.get(1).get(0));
        assertEquals("20", data.get(1).get(1));
    }
}

4. JExcel

The JExcel library is a lightweight library with the advantage that it’s easier to use than Apache POI, and the disadvantage that it only provides support for processing Excel files in the .xls (1997-2003) format.

At the moment, .xlsx files are not supported.

4.1. Reading from Excel

In order to work with Excel files, this library provides a series of classes that represent the different parts of an Excel file. The Workbook class represents the entire collection of sheets. The Sheet class represents a single sheet, and the Cell class represents a single cell of a spreadsheet.

Let’s write a method that creates a workbook from a specified Excel file, gets the first sheet of the file, then traverses its content and adds each row to a HashMap:

public class JExcelHelper {

    public Map<Integer, List<String>> readJExcel(String fileLocation) 
      throws IOException, BiffException {
 
        Map<Integer, List<String>> data = new HashMap<>();

        Workbook workbook = Workbook.getWorkbook(new File(fileLocation));
        Sheet sheet = workbook.getSheet(0);
        int rows = sheet.getRows();
        int columns = sheet.getColumns();

        for (int i = 0; i < rows; i++) {
            data.put(i, new ArrayList<String>());
            for (int j = 0; j < columns; j++) {
                data.get(i)
                  .add(sheet.getCell(j, i)
                  .getContents());
            }
        }
        return data;
    }
}

4.2. Writing to Excel

For writing to an Excel file, the JExcel library offers classes similar to the ones used above, that model a spreadsheet file: WritableWorkbook, WritableSheet, and WritableCell.

The WritableCell class has subclasses corresponding to the different types of content that can be written: Label, DateTime, Number, Boolean, Blank, and Formula.

This library also provides support for basic formatting, such as controlling fonts, colors and cell widths.

Let’s write a method that creates a workbook called ‘temp.xls’ in the current directory, then writes the same content we wrote in the Apache POI section.

First, let’s create the workbook:

File currDir = new File(".");
String path = currDir.getAbsolutePath();
String fileLocation = path.substring(0, path.length() - 1) + "temp.xls";

WritableWorkbook workbook = Workbook.createWorkbook(new File(fileLocation));

Next, let’s create the first sheet and write the header of the Excel file, containing “Name” and “Age” cells:

WritableSheet sheet = workbook.createSheet("Sheet 1", 0);

WritableCellFormat headerFormat = new WritableCellFormat();
WritableFont font
  = new WritableFont(WritableFont.ARIAL, 16, WritableFont.BOLD);
headerFormat.setFont(font);
headerFormat.setBackground(Colour.LIGHT_BLUE);
headerFormat.setWrap(true);

Label headerLabel = new Label(0, 0, "Name", headerFormat);
sheet.setColumnView(0, 60);
sheet.addCell(headerLabel);

headerLabel = new Label(1, 0, "Age", headerFormat);
sheet.setColumnView(1, 40);
sheet.addCell(headerLabel);

With a new style, let’s write the content of the table we’ve created:

WritableCellFormat cellFormat = new WritableCellFormat();
cellFormat.setWrap(true);

Label cellLabel = new Label(0, 2, "John Smith", cellFormat);
sheet.addCell(cellLabel);
Number cellNumber = new Number(1, 2, 20, cellFormat);
sheet.addCell(cellNumber);

It’s very important to remember to write to the file and close it at the end, so it can be used by other processes, using the write() and close() methods of the Workbook class:

workbook.write();
workbook.close();

5. Conclusion

This tutorial has illustrated how to use the Apache POI API and JExcel API to read and write an Excel file from a Java program.

The complete source code for this article can be found in the GitHub project.

Guide to Java 8 groupingBy Collector


1. Introduction

In this article, we’ll see how the groupingBy collector works using various examples.

To understand the material covered in this article, a basic knowledge of Java 8 features is needed. You can have a look at intro to Java 8 Streams and the guide to Java 8’s Collectors.

2. GroupingBy Collectors

The Java 8 Stream API lets us process collections of data in a declarative way.

The static factory methods Collectors.groupingBy() and Collectors.groupingByConcurrent() provide us with functionality similar to the ‘GROUP BY’ clause in the SQL language. They are used for grouping objects by some property and storing results in a Map instance.

The overloaded methods of groupingBy:

  • With a classification function as the method parameter:

static <T,K> Collector<T,?,Map<K,List<T>>> 
  groupingBy(Function<? super T,? extends K> classifier)

  • With a classification function and a second collector as method parameters:

static <T,K,A,D> Collector<T,?,Map<K,D>>
  groupingBy(Function<? super T,? extends K> classifier, 
    Collector<? super T,A,D> downstream)

  • With a classification function, a supplier method (that provides the Map implementation that will contain the end result), and a second collector as method parameters:

static <T,K,D,A,M extends Map<K,D>> Collector<T,?,M>
  groupingBy(Function<? super T,? extends K> classifier, 
    Supplier<M> mapFactory, Collector<? super T,A,D> downstream)

2.1. Example Code Setup

To demonstrate the usage of groupingBy(), let’s define a BlogPost class (we will use a stream of BlogPost objects):

class BlogPost {
    String title;
    String author;
    BlogPostType type;
    int likes;

    // constructor, getters and setters
}

The BlogPostType:

enum BlogPostType {
    NEWS,
    REVIEW,
    GUIDE
}

The List of BlogPost objects:

List<BlogPost> posts = Arrays.asList( ... );

Let’s also define a Tuple class that will be used to group posts by the combination of their type and author attributes:

class Tuple {
    BlogPostType type;
    String author;
}

2.2. Simple Grouping by a Single Column

Let’s start with the simplest groupingBy method, which only takes a classification function as its parameter. A classification function is applied to each element of the stream. The value that is returned by the function is used as a key to the map that we get from the groupingBy collector.

To group the blog posts in the blog post list by their type:

Map<BlogPostType, List<BlogPost>> postsPerType = posts.stream()
  .collect(groupingBy(BlogPost::getType));

2.3. Grouping by with a Complex Map Key Type

The classification function is not limited to returning only a scalar or String value. The key of the resulting map could be any object, as long as we make sure that we implement the necessary equals and hashCode methods.

To group the blog posts in the list by the type and author combined in a Tuple instance:

Map<Tuple, List<BlogPost>> postsPerTypeAndAuthor = posts.stream()
  .collect(groupingBy(post -> new Tuple(post.getType(), post.getAuthor())));

2.4. Modifying the Returned Map Value Type

The second overload of groupingBy takes an additional second collector (downstream collector), that is applied to the results of the first collector.

When we specify only a classification function and not a downstream collector, the toList() collector is used behind the scenes.

Let’s use the toSet() collector as the downstream collector and get a Set of blog posts (instead of a List):

Map<BlogPostType, Set<BlogPost>> postsPerType = posts.stream()
  .collect(groupingBy(BlogPost::getType, toSet()));

2.5. Providing a Secondary Group By Collector

A different application of the downstream collector is to apply a secondary grouping to the results of the first group by.

To group the List of BlogPosts first by author and then by type:

Map<String, Map<BlogPostType, List<BlogPost>>> map = posts.stream()
  .collect(groupingBy(BlogPost::getAuthor, groupingBy(BlogPost::getType)));

2.6. Getting the Average from Grouped Results

By using the downstream collector, we can apply aggregation functions to the results of the classification function.

To find the average number of likes for each blog post type:

Map<BlogPostType, Double> averageLikesPerType = posts.stream()
  .collect(groupingBy(BlogPost::getType, averagingInt(BlogPost::getLikes)));

2.7. Getting the Sum from Grouped Results

To calculate the total sum of likes for each type:

Map<BlogPostType, Integer> likesPerType = posts.stream()
  .collect(groupingBy(BlogPost::getType, summingInt(BlogPost::getLikes)));

2.8. Getting the Maximum or Minimum from Grouped Results

Another aggregation that we can perform is to get the blog post with the maximum number of likes:

Map<BlogPostType, Optional<BlogPost>> maxLikesPerPostType = posts.stream()
  .collect(groupingBy(BlogPost::getType,
  maxBy(comparingInt(BlogPost::getLikes))));

Similarly, we can apply the minBy downstream collector to get the blog post with the minimum number of likes.

Note that the maxBy and minBy collectors take into account the possibility that the collection to which they are applied could be empty. This is why the value type in the map is Optional<BlogPost>.

2.9. Getting a Summary for an Attribute of Grouped Results

The Collectors API offers a summarizing collector that can be used in cases when we need to calculate the count, sum, minimum, maximum and average of a numerical attribute at the same time.

Let’s calculate a summary for the likes attribute of the blog posts for each different type:

Map<BlogPostType, IntSummaryStatistics> likeStatisticsPerType = posts.stream()
  .collect(groupingBy(BlogPost::getType, 
  summarizingInt(BlogPost::getLikes)));

The IntSummaryStatistics object for each type contains the count, sum, average, min and max values for the likes attribute. Additional summary objects exist for double and long values.

2.10. Mapping Grouped Results to a Different Type

More complex aggregations can be achieved by applying a mapping downstream collector to the results of the classification function.

Let’s get a concatenation of the titles of the posts for each blog post type:

Map<BlogPostType, String> postsPerType = posts.stream()
  .collect(groupingBy(BlogPost::getType, 
  mapping(BlogPost::getTitle, joining(", ", "Post titles: [", "]"))));

What we have done here is to map each BlogPost instance to its title and then reduce the stream of post titles to a concatenated String. In this example, the type of the Map value is also different from the default List type.

2.11. Modifying the Return Map Type

When using the groupingBy collector, we cannot make assumptions about the type of the returned Map. If we want to be specific about which type of Map we get from the group by, then we can use the third variation of the groupingBy method, which allows us to change the type of the Map by passing a Map supplier function.

Let’s retrieve an EnumMap by passing an EnumMap supplier function to the groupingBy method:

EnumMap<BlogPostType, List<BlogPost>> postsPerType = posts.stream()
  .collect(groupingBy(BlogPost::getType, 
  () -> new EnumMap<>(BlogPostType.class), toList()));

3. Concurrent Grouping By Collector

Similar to the groupingBy, there is the groupingByConcurrent collector, which leverages multi-core architectures. This collector has three overloaded methods that take exactly the same arguments as the respective overloaded methods of the groupingBy collector. The return type of the groupingByConcurrent collector, however, must be an instance of the ConcurrentHashMap class or a subclass of it.

To do a grouping operation concurrently, the stream needs to be parallel:

ConcurrentMap<BlogPostType, List<BlogPost>> postsPerType = posts.parallelStream()
  .collect(groupingByConcurrent(BlogPost::getType));

If we choose to pass a Map supplier function to the groupingByConcurrent collector, then we need to make sure that the function returns either a ConcurrentHashMap or a subclass of it.

4. Conclusion

In this article, we have seen several examples of the usage of the groupingBy collector that is offered by the Java 8 Collectors API.

We saw how groupingBy can be used to classify a stream of elements based on one of their attributes and how the results of the classification can be further collected, mutated and reduced to final containers.

The complete implementation of the examples for this article can be found in the GitHub project.

JAX-RS Client with Jersey


1. Overview

Jersey is an open source framework for developing RESTful Web Services. It also has great inbuilt client capabilities.

In this quick tutorial, we will explore the creation of a JAX-RS client using Jersey 2.

For a discussion on the creation of RESTful Web Services using Jersey, please refer to this article.

2. Maven Dependencies

Let’s begin by adding the required dependencies (for Jersey JAX-RS client) in the pom.xml:

<dependency>
    <groupId>org.glassfish.jersey.core</groupId>
    <artifactId>jersey-client</artifactId>
    <version>2.25.1</version>
</dependency>

To use Jackson 2.x as JSON provider:

<dependency>
    <groupId>org.glassfish.jersey.media</groupId>
    <artifactId>jersey-media-json-jackson</artifactId>
    <version>2.25.1</version>
</dependency>

The latest version of these dependencies can be found at jersey-client and jersey-media-json-jackson.

3. RESTful Client in Jersey

We will develop a JAX-RS client to consume the JSON and XML REST APIs that we developed here (we need to make sure that the service is deployed and the URL is accessible).

3.1. Resource Representation Class

Let’s have a look at the resource representation class:

@XmlRootElement
public class Employee {
    private int id;
    private String firstName;

    // standard getters and setters
}

JAXB annotations like @XmlRootElement are required only if XML support is needed.

3.2. Creating an Instance of a Client

The first thing we need is an instance of a Client:

Client client = ClientBuilder.newClient();

3.3. Creating a WebTarget

Once we have the Client instance, we can create a WebTarget using the URI of the targeted web resource:

WebTarget webTarget 
  = client.target("http://localhost:8082/spring-jersey");

Using WebTarget, we can define a path to a specific resource:

WebTarget employeeWebTarget 
  = webTarget.path("resources/employees");

3.4. Building an HTTP Request Invocation

An invocation builder instance is created using one of the WebTarget.request() methods:

Invocation.Builder invocationBuilder 
  = employeeWebTarget.request(MediaType.APPLICATION_JSON);

For XML format, MediaType.APPLICATION_XML can be used.

3.5. Invoking HTTP Requests

Invoking HTTP GET:

Employee employee 
  = invocationBuilder.get(Employee.class);

Invoking HTTP POST:

Response response 
  = invocationBuilder
  .post(Entity.entity(employee, MediaType.APPLICATION_JSON));

3.6. Sample REST Client

Let’s begin writing a simple REST client. The getJsonEmployee() method retrieves an Employee object based on the employee id. The JSON returned by the REST Web Service is deserialized to the Employee object before returning.

Let’s use the JAX-RS API fluently to create the web target and invocation builder, and to invoke a GET HTTP request:

public class RestClient {
 
    private static final String REST_URI 
      = "http://localhost:8082/spring-jersey/resources/employees";
 
    private Client client = ClientBuilder.newClient();

    public Employee getJsonEmployee(int id) {
        return client
          .target(REST_URI)
          .path(String.valueOf(id))
          .request(MediaType.APPLICATION_JSON)
          .get(Employee.class);
    }
    //...
}

Let’s now add a method for POST HTTP request. The createJsonEmployee() method creates an Employee by invoking the REST Web Service for Employee creation. The client API internally serializes the Employee object to JSON before invoking the HTTP POST method:

public Response createJsonEmployee(Employee emp) {
    return client
      .target(REST_URI)
      .request(MediaType.APPLICATION_JSON)
      .post(Entity.entity(emp, MediaType.APPLICATION_JSON));
}

4. Testing the Client

Let’s test our client with JUnit:

public class JerseyClientLiveTest {
 
    public static final int HTTP_CREATED = 201;
    private RestClient client = new RestClient();

    @Test
    public void givenCorrectObject_whenCorrectJsonRequest_thenResponseCodeCreated() {
        Employee emp = new Employee(6, "Johny");

        Response response = client.createJsonEmployee(emp);

        assertEquals(HTTP_CREATED, response.getStatus());
    }
}

5. Conclusion

In this article, we have introduced the JAX-RS client using Jersey 2 and developed a simple RESTful Java client.

As always, the full source code is available in this GitHub project.

Hibernate One to Many Annotation Tutorial


1. Introduction

This quick Hibernate tutorial will take you through an example of one-to-many mapping using JPA annotations – an alternative to the XML descriptor approach.

In simple terms, one-to-many mapping means that one row in a table is mapped to multiple rows in another table.

2. Description

Let’s look at the following entity relationship diagram to see one-to-many association:

For this example, we will implement a Cart system, where we have a table for Cart and another table for Items. A Cart can have multiple items, so here we have a one-to-many mapping: cart_id is the primary key in the Cart table, and the association is enforced by the cart_id foreign key in the Items table.

The code snippet below shows the implementation of a @OneToMany mapping from the Cart class to ItemEntry objects. Simply put, it means that a single Cart object can be associated with multiple ItemEntry objects:

public class Cart {

    //...     
 
    @OneToMany(mappedBy="cart")
    private Set<ItemEntry> items;
	
    //...
}

This bidirectional relationship between objects means that we are able to access Object A from Object B, and Object B from Object A. The mappedBy attribute is what we use to tell Hibernate which variable represents the parent class in our child class.

The following technologies and libraries are used to develop a sample Hibernate application that implements the one-to-many association:

  • JDK 1.8 or later
  • Hibernate 4 or later
  • Maven 3 or later
  • MySQL Server 5.6 or later

3. Setup

3.1. Database Setup

Below is our database script for Cart and Items tables. We use the foreign key constraint for one-to-many mapping:

CREATE TABLE `Cart` (
  `cart_id` int(11) unsigned NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`cart_id`)
) ENGINE=InnoDB AUTO_INCREMENT=5 DEFAULT CHARSET=utf8;

CREATE TABLE `Items` (
  `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `cart_id` int(11) unsigned NOT NULL,
  PRIMARY KEY (`id`),
  KEY `cart_id` (`cart_id`),
  CONSTRAINT `items_ibfk_1` FOREIGN KEY (`cart_id`) REFERENCES `Cart` (`cart_id`)
) ENGINE=InnoDB AUTO_INCREMENT=7 DEFAULT CHARSET=utf8;

Our database setup is ready, so let’s move on to creating the Hibernate example project.

3.2. Maven Dependencies

We will then add the Hibernate and MySQL driver dependencies to our pom.xml file. The Hibernate dependency uses JBoss Logging, which automatically gets added as a transitive dependency:

  • Hibernate version 5.2.7.Final
  • MySQL driver version 6.0.5

Please visit the Maven central repository for the latest versions of Hibernate and the MySQL dependencies.

3.3. Hibernate Configuration

Here is the configuration of Hibernate:

<hibernate-configuration>
    <session-factory>
        <property name="hibernate.connection.driver_class">com.mysql.jdbc.Driver</property>
        <property name="hibernate.connection.password">root</property>
        <property name="hibernate.connection.url">jdbc:mysql://localhost:3306/setup</property>
        <property name="hibernate.connection.username">root</property>
        <property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property>
        <property name="hibernate.current_session_context_class">thread</property>
        <property name="hibernate.show_sql">true</property>
    </session-factory>
</hibernate-configuration>

3.4. HibernateAnnotationUtil Class

With the HibernateAnnotationUtil class, we just need to reference the new Hibernate configuration file:

private static SessionFactory sessionFactory;

private SessionFactory buildSessionFactory() {
     
    Configuration configuration = new Configuration();
    configuration.configure("hibernate-annotation.cfg.xml");
    System.out.println("Hibernate Annotation Configuration loaded");
        	
    ServiceRegistry serviceRegistry = new StandardServiceRegistryBuilder()
      .applySettings(configuration.getProperties()).build();
    System.out.println("Hibernate Annotation serviceRegistry created");
        	
    SessionFactory sessionFactory = configuration.buildSessionFactory(serviceRegistry);
        	
    return sessionFactory;
}	

public SessionFactory getSessionFactory() {
    if(sessionFactory == null) sessionFactory = buildSessionFactory();
    return sessionFactory;
}

4. The Models

The mapping related configurations will be done using JPA annotations in the model classes:

@Entity
@Table(name="CART")
public class Cart {

    //...

    @OneToMany(mappedBy="cart")
    private Set<ItemEntry> items;
	
    // getters and setters
}

Please note that the mappedBy attribute of the @OneToMany annotation references the property in the child class that maps the association back to its parent. That’s why we have a property named “cart” in the ItemEntry class:

@Entity
@Table(name="ITEMS")
public class ItemEntry {
    
    //...
    @ManyToOne
    @JoinColumn(name="cart_id", nullable=false)
    private Cart cart;

    public ItemEntry() {}
    
    // getters and setters
}

It is important to note that the @ManyToOne annotation is associated with the Cart class variable, and the @JoinColumn annotation references the mapped column.

5. In Action

In the test program, we create a class with a main() method that gets the Hibernate Session and saves the model objects into the database, exercising the one-to-many association:

sessionFactory = HibernateAnnotationUtil.getSessionFactory();
session = sessionFactory.getCurrentSession();
System.out.println("Session created");
	    
tx = session.beginTransaction();

session.save(cart);
session.save(item1);
session.save(item2);
	    
tx.commit();
System.out.println("Cart ID=" + cart.getId());
System.out.println("item1 ID=" + item1.getId()
  + ", Foreign Key Cart ID=" + item.getCart().getId());
System.out.println("item2 ID=" + item2.getId()
+ ", Foreign Key Cart ID=" + item.getCart().getId());

This is the output of our test program:

Session created
Hibernate: insert into CART values ()
Hibernate: insert into ITEMS (cart_id)
  values (?)
Hibernate: insert into ITEMS (cart_id)
  values (?)
Cart ID=7
item1 ID=11, Foreign Key Cart ID=7
item2 ID=12, Foreign Key Cart ID=7
Closing SessionFactory

6. Conclusion

We have seen how easy it is to implement a one-to-many relationship with the Hibernate ORM and a MySQL database using JPA annotations.

The source code of this tutorial can be found over on GitHub.


Avoiding the ConcurrentModificationException in Java


1. Introduction

In this article, we’ll take a look at the ConcurrentModificationException class.

First, we’ll give an explanation of how it works, and then prove it by using a test that triggers it.

Finally, we’ll try out some workarounds by using practical examples.

2. Triggering a ConcurrentModificationException

Essentially, the ConcurrentModificationException is used to fail fast when something we are iterating over is modified. Let’s prove this with a simple test:

@Test(expected = ConcurrentModificationException.class)
public void whilstRemovingDuringIteration_shouldThrowException() throws InterruptedException {

    List<Integer> integers = newArrayList(1, 2, 3);

    for (Integer integer : integers) {
        integers.remove(1);
    }
}

As we can see, before finishing our iteration we are removing an element. That’s what triggers the exception.

3. Solutions

Sometimes, we might actually want to remove elements from a collection whilst iterating. If this is the case, then there are some solutions.

3.1. Using an Iterator Directly

A for-each loop uses an Iterator behind the scenes but is less verbose. However, if we refactor our previous test to use an Iterator directly, we gain access to additional methods, such as remove(). Let’s try using this method to modify our list instead:

for (Iterator<Integer> iterator = integers.iterator(); iterator.hasNext();) {
    Integer integer = iterator.next();
    if(integer == 2) {
        iterator.remove();
    }
}

Now we will notice that there is no exception. The reason for this is that the Iterator’s remove() method does not cause a ConcurrentModificationException; it is safe to call while iterating.

3.2. Not Removing During Iteration

If we want to keep our for-each loop, then we can. It’s just that we need to wait until after iterating before we remove the elements. Let’s try this out by adding what we want to remove to a toRemove list as we iterate:

List<Integer> integers = newArrayList(1, 2, 3);
List<Integer> toRemove = newArrayList();

for (Integer integer : integers) {
    if(integer == 2) {
        toRemove.add(integer);
    }
}
integers.removeAll(toRemove);

assertThat(integers).containsExactly(1, 3);

This is another effective way of getting around the problem.

3.3. Using removeIf()

Java 8 introduced the removeIf() method to the Collection interface. This means that if we are working with it, we can use ideas of functional programming to achieve the same results again:

List<Integer> integers = newArrayList(1, 2, 3);

integers.removeIf(i -> i == 2);

assertThat(integers).containsExactly(1, 3);

This declarative style offers us the least amount of verbosity. However, depending on the use case, we may find other methods more convenient.

3.4. Filtering Using Streams

When diving into the world of functional/declarative programming, we can forget about mutating collections; instead, we can focus on the elements that should actually be processed:

Collection<Integer> integers = newArrayList(1, 2, 3);

List<String> collected = integers
  .stream()
  .filter(i -> i != 2)
  .map(Object::toString)
  .collect(toList());

assertThat(collected).containsExactly("1", "3");

We’ve done the inverse of our previous example by providing a predicate that determines which elements to include, not exclude. The advantage is that we can chain other functions alongside the removal. In the example, we use a functional map(), but we could use even more operations if we wanted to.

4. Conclusion

In this article, we’ve shown the problems you may encounter when removing items from a collection whilst iterating, and we’ve also provided some solutions to work around the issue.

The implementation of these examples can be found over on GitHub. This is a Maven project, so should be easy to run as is.

Guide to WeakHashMap in Java


1. Overview

In this article, we will be looking at a WeakHashMap from the java.util package. We will see how that construct is useful when implementing a cache. The WeakHashMap is a hashtable-based implementation of the Map interface, with keys that are of a WeakReference type.

An entry in a WeakHashMap will automatically be removed when its key is no longer in ordinary use, meaning that there is no single Reference that points to that key. When the garbage collection (GC) process discards a key, its entry is effectively removed from the map, so this class behaves somewhat differently from other Map implementations.

2. Strong, Soft, and Weak References

To understand how the WeakHashMap works, we need to look at a WeakReference class – which is the basic construct for keys in the WeakHashMap implementation. In Java, we have three main types of references, which we’ll explain in the following sections.

2.1. Strong References

The strong reference is the most common type of Reference that we use in our day-to-day programming:

Integer prime = 1;

The variable prime has a strong reference to an Integer object with value 1. Any object which has a strong reference pointing to it is not eligible for GC.

2.2. Soft References

Simply put, an object that has a SoftReference pointing to it will not be garbage collected until the JVM absolutely needs memory.

Let’s see how we can create a SoftReference in Java:

Integer prime = 1;  
SoftReference<Integer> soft = new SoftReference<Integer>(prime); 
prime = null;

The prime object has a strong reference pointing to it.

Next, we are wrapping the prime strong reference into a soft reference. After making the strong reference null, the prime object is eligible for GC, but it will be collected only when the JVM absolutely needs memory.

2.3. Weak References

The objects that are referenced only by weak references are garbage collected eagerly; the GC won’t wait until it needs memory in that case.

We can create a WeakReference in Java in the following way:

Integer prime = 1;  
WeakReference<Integer> weak = new WeakReference<Integer>(prime); 
prime = null;

When we make the prime reference null, the prime object will be garbage collected in the next GC cycle, as there is no other strong reference pointing to it.

References of a WeakReference type are used as keys in WeakHashMap.

3. WeakHashMap as an Efficient Memory Cache

Let’s say that we want to build a cache that keeps big image objects as values, and image names as keys. We want to pick a proper map implementation for solving that problem.

Using a simple HashMap will not be a good choice because the value objects may occupy a lot of memory. What’s more, they’ll never be reclaimed from the cache by a GC process, even when they are not in use in our application anymore.

Ideally, we want a Map implementation that allows the GC to automatically delete unused entries. When the key of a big image object is not in use anywhere in our application, that entry will be deleted from memory.

Fortunately, the WeakHashMap has exactly these characteristics. Let’s test our WeakHashMap and see how it behaves:

WeakHashMap<UniqueImageName, BigImage> map = new WeakHashMap<>();
BigImage bigImage = new BigImage("image_id");
UniqueImageName imageName = new UniqueImageName("name_of_big_image");

map.put(imageName, bigImage);
assertTrue(map.containsKey(imageName));

imageName = null;
System.gc();

await().atMost(10, TimeUnit.SECONDS).until(map::isEmpty);

We’re creating a WeakHashMap instance that will store our BigImage objects. We are putting a BigImage object as a value and an imageName object reference as a key. The imageName key will be stored in the map wrapped in a WeakReference.

Next, we set the imageName reference to be null, therefore there are no more references pointing to the bigImage object. The default behavior of a WeakHashMap is to reclaim an entry that has no reference to it on next GC, so this entry will be deleted from memory by the next GC process.

We are calling a System.gc() to force the JVM to trigger a GC process. After the GC cycle, our WeakHashMap will be empty:

WeakHashMap<UniqueImageName, BigImage> map = new WeakHashMap<>();
BigImage bigImageFirst = new BigImage("foo");
UniqueImageName imageNameFirst = new UniqueImageName("name_of_big_image");

BigImage bigImageSecond = new BigImage("foo_2");
UniqueImageName imageNameSecond = new UniqueImageName("name_of_big_image_2");

map.put(imageNameFirst, bigImageFirst);
map.put(imageNameSecond, bigImageSecond);
 
assertTrue(map.containsKey(imageNameFirst));
assertTrue(map.containsKey(imageNameSecond));

imageNameFirst = null;
System.gc();

await().atMost(10, TimeUnit.SECONDS)
  .until(() -> map.size() == 1);
await().atMost(10, TimeUnit.SECONDS)
  .until(() -> map.containsKey(imageNameSecond));

Note that only the imageNameFirst reference is set to null. The imageNameSecond reference remains unchanged. After GC is triggered, the map will contain only one entry – imageNameSecond.
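
The UniqueImageName and BigImage types used in these tests aren't shown in the snippets; a minimal sketch, assuming they're simple value holders, might look like this:

public class UniqueImageName {
    private final String name;

    public UniqueImageName(String name) {
        this.name = name;
    }
}

public class BigImage {
    private final String imageId;

    public BigImage(String imageId) {
        this.imageId = imageId;
    }
}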

4. Conclusion

In this article, we looked at the types of references in Java to fully understand how java.util.WeakHashMap works. Then we created a simple cache that leverages the behavior of a WeakHashMap and tested that it works as we expected.

The implementation of all these examples and code snippets can be found in the GitHub project – which is a Maven project, so it should be easy to import and run as it is.

Strategy Design Pattern in Java 8


1. Introduction

In this article, we’ll look at how we can implement the strategy design pattern in Java 8.

First, we’ll give an overview of the pattern, and explain how it’s been traditionally implemented in older versions of Java.

Next, we’ll try out the pattern again, only this time with Java 8 lambdas, reducing the verbosity of our code.

2. Strategy Pattern

Essentially, the strategy pattern allows us to change the behavior of an algorithm at runtime.

Typically, we would start with an interface that's used to apply an algorithm, and then implement it multiple times, once for each possible algorithm.

Let's say we have a requirement to apply different types of discounts to a purchase, based on whether it's Christmas, Easter, or the New Year. First, let's create a Discounter interface which will be implemented by each of our strategies:

public interface Discounter {
    BigDecimal applyDiscount(BigDecimal amount);
}

Then let’s say we want to apply a 50% discount at Easter and a 10% discount at Christmas. Let’s implement our interface for each of these strategies:

public static class EasterDiscounter implements Discounter {
    @Override
    public BigDecimal applyDiscount(final BigDecimal amount) {
        return amount.multiply(BigDecimal.valueOf(0.5));
    }
}

public static class ChristmasDiscounter implements Discounter {
    @Override
    public BigDecimal applyDiscount(final BigDecimal amount) {
        return amount.multiply(BigDecimal.valueOf(0.9));
    }
}

Finally, let’s try a strategy in a test:

Discounter easterDiscounter = new EasterDiscounter();

BigDecimal discountedValue = easterDiscounter
  .applyDiscount(BigDecimal.valueOf(100));

assertThat(discountedValue)
  .isEqualByComparingTo(BigDecimal.valueOf(50));

This works quite well, but the problem is that it can be a bit of a pain to have to create a concrete class for each strategy. The alternative would be to use anonymous inner types, but that's still quite verbose and not much handier than the previous solution:

Discounter easterDiscounter = new Discounter() {
    @Override
    public BigDecimal applyDiscount(final BigDecimal amount) {
        return amount.multiply(BigDecimal.valueOf(0.5));
    }
};

3. Leveraging Java 8

Since the release of Java 8, lambdas have made anonymous inner types more or less redundant. That means creating strategies inline is now a lot cleaner and easier.

Furthermore, the declarative style of functional programming lets us implement patterns that were not possible before.

3.1. Reducing Code Verbosity

Let’s try creating an inline EasterDiscounter, only this time using a lambda expression:

Discounter easterDiscounter = amount -> amount.multiply(BigDecimal.valueOf(0.5));

As we can see, our code is now a lot cleaner and more maintainable, achieving the same as before but in a single line. Essentially, a lambda can be seen as a replacement for an anonymous inner type.

This advantage becomes more apparent when we want to declare even more Discounters in line:

List<Discounter> discounters = newArrayList(
  amount -> amount.multiply(BigDecimal.valueOf(0.9)),
  amount -> amount.multiply(BigDecimal.valueOf(0.8)),
  amount -> amount.multiply(BigDecimal.valueOf(0.5))
);

When we want to define lots of Discounters, we can declare them statically all in one place. Java 8 even lets us define static methods in interfaces, if we want to.

So instead of choosing between concrete classes or anonymous inner types, let’s try creating lambdas all in a single class:

public interface Discounter {
    BigDecimal applyDiscount(BigDecimal amount);

    static Discounter christmasDiscounter() {
        return amount -> amount.multiply(BigDecimal.valueOf(0.9));
    }

    static Discounter newYearDiscounter() {
        return amount -> amount.multiply(BigDecimal.valueOf(0.8));
    }

    static Discounter easterDiscounter() {
        return amount -> amount.multiply(BigDecimal.valueOf(0.5));
    }
}

As we can see, we're achieving a lot with very little code.
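
Client code can then obtain a strategy without naming a concrete class – for instance, reusing the values from our earlier test:

Discounter easterDiscounter = Discounter.easterDiscounter();

BigDecimal discountedValue = easterDiscounter
  .applyDiscount(BigDecimal.valueOf(100));

assertThat(discountedValue)
  .isEqualByComparingTo(BigDecimal.valueOf(50));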

3.2. Leveraging Function Composition

Let’s modify our Discounter interface so it extends the UnaryOperator interface, and then add a combine() method:

public interface Discounter extends UnaryOperator<BigDecimal> {
    default Discounter combine(Discounter after) {
        return value -> after.apply(this.apply(value));
    }
}

Essentially, we're refactoring our Discounter to leverage the fact that applying a discount is a function that converts one BigDecimal instance into another, allowing us to access predefined methods. As UnaryOperator comes with an apply() method, we can simply replace applyDiscount with it.

The combine() method is just an abstraction around applying one Discounter to the result of this. It uses the built-in apply() to achieve this; for instance, assuming we keep the static factories from earlier, christmasDiscounter().combine(newYearDiscounter()) applies the 10% discount first and then the 20% one.

Now, let's try applying multiple Discounters cumulatively to an amount. We'll do this by using the functional reduce() and our combine():

Discounter combinedDiscounter = discounters
  .stream()
  .reduce(v -> v, Discounter::combine);

combinedDiscounter.apply(...);

Pay special attention to the first reduce() argument. When no discounts are provided, we need to return the value unchanged, so we supply an identity function as the default discounter.
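
As a quick sketch – assuming we've kept the static factories from earlier on the refactored interface – combining the Christmas and New Year discounters should yield 100 * 0.9 * 0.8 = 72:

List<Discounter> discounters = Arrays.asList(
  Discounter.christmasDiscounter(), Discounter.newYearDiscounter());

Discounter combinedDiscounter = discounters
  .stream()
  .reduce(v -> v, Discounter::combine);

assertThat(combinedDiscounter.apply(BigDecimal.valueOf(100)))
  .isEqualByComparingTo(BigDecimal.valueOf(72));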

This is a useful and less verbose alternative to performing a standard iteration. If we consider the methods we are getting out of the box for functional composition, it also gives us a lot more functionality for free.

4. Conclusion

In this article, we’ve explained the strategy pattern, and also demonstrated how we can use lambda expressions to implement it in a way which is less verbose.

The implementation of these examples can be found over on GitHub. This is a Maven based project, so should be easy to run as is.

Java Web Weekly, Issue 163


A full week in the Java ecosystem. Here we go…

1. Spring and Java

>> Java Module System Hands-On Guide [sitepoint.com]

As Java 9 is getting closer and closer, it might be worth looking at a practical introduction to Project Jigsaw.

>> Proposal for a Java policy files crafting process [frankel.ch]

A few lessons learned during the process of developing policy files.

>> Java Time (JSR-310) enhancements in Java SE 9 [joda.org]

It turns out that java.time is not perfect and can be improved 🙂

>> Oracle Reminds Java Developers that Soon They Won’t Have a Browser to Run Applets [infoq.com]

Just a reminder that applets will soon not be runnable in any browser.

>> GitHub Research: Over 50% of Java Logging Statements Are Written Wrong [takipi.com]

The latest GitHub research shows that meaningful logging is not that common (especially in production environments).

>> Add full-text search to your application with Hibernate Search [thoughts-on-java.org]

Integrating Lucene/Elasticsearch with Hibernate-managed databases becomes much easier by using Hibernate Search.

>> MicroProfile Becomes Eclipse MicroProfile [infoq.com]

As the title suggests 🙂

>> Java 9’s Immutable Collections Are Easier To Create But Use With Caution [carlmartensen.com]

The new immutable collections in the Java 9 Collections API might be confusing – having no type-level distinction between mutable and immutable structures might not work out in the long term.

>> Configure Jenkins for Continuous Delivery of a Spring Boot application [pragmaticintegrator.com]

CD tutorial with Jenkins and Spring Boot.

>> In Praise of Laziness [sitepoint.com]

Laziness at the language level in Java.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Jepsen: MongoDB 3.4.0-rc3 [jepsen.io]

Whenever one of these in-depth analyses comes out, I set aside time to read it.

Not because I’m necessarily working with that particular technology (I’ve luckily stayed away from MongoDB for a long time) – but because there’s so much to learn from these in-depth dives into how the store works.

>> How We Interview at Pivotal [pivotal.io]

There are definitely some nuggets to pick up from this one if you're conducting interviews.

Also worth reading:

3. Musings

>> What do you mean by “Event-Driven”? [martinfowler.com]

An exploration of the “event-driven” concepts.

>> Elasticsearch Ransomware Attacks Highlight Need for Better Security [loggly.com]

Open-source is cool, but we need to double-check that adopting such technologies doesn't introduce unnecessary risks.

>> Reputation Suicide, and Why I’m Quitting Disqus [daedtech.com]

Disqus is back to its old distasteful tricks again (yes, they’ve done it to this site as well).

>> On elegance [ontestautomation.com]

According to Dijkstra, elegance is a quality that decides between success and failure.

>> Hazelcast release Jet, open-source stream processing engine [infoq.com]

Hazelcast released an interesting new product – Jet, a stream processing engine.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Team interview [dilbert.com]

>> I see something in you [dilbert.com]

>> The problem is people [dilbert.com]

5. Pick of the Week

>> Wait, other people can take your time? [m.signalvnoise.com]

Quick Intro to Full-Text Search with ElasticSearch


1. Overview

Full-text search performs linguistic searches against documents. A query can include one or more words or phrases, and it returns the documents that match the search condition.

ElasticSearch is a search engine based on Apache Lucene, a free and open-source information retrieval software library. It provides a distributed, full-text search engine with an HTTP web interface and schema-free JSON documents.

This article examines the ElasticSearch REST API and demonstrates basic operations using HTTP requests only.

2. Setup

In order to install ElasticSearch on your machine, please refer to the official setup guide.

The RESTful API runs on port 9200. Let's test that it's running properly using the following curl command:

curl -XGET 'http://localhost:9200/'

If you observe the following response, the instance is properly running:

{
  "name": "NaIlQWU",
  "cluster_name": "elasticsearch",
  "cluster_uuid": "enkBkWqqQrS0vp_NXmjQMQ",
  "version": {
    "number": "5.1.2",
    "build_hash": "c8c4c16",
    "build_date": "2017-01-11T20:18:39.146Z",
    "build_snapshot": false,
    "lucene_version": "6.3.0"
  },
  "tagline": "You Know, for Search"
}

3. Indexing Documents

ElasticSearch is document-oriented: it stores and indexes documents. Indexing creates or updates documents. After indexing, we can search, sort, and filter complete documents – not rows of columnar data. This is a fundamentally different way of thinking about data and is one of the reasons ElasticSearch can perform complex full-text searches.

Documents are represented as JSON objects. JSON serialization is supported by most programming languages and has become the standard format used by the NoSQL movement. It is simple, concise, and easy to read.

We are going to use the following random entries to perform our full-text search:

{
  "title": "He went",
  "random_text": "He went such dare good fact. The small own seven saved man age."
}

{
  "title": "He oppose",
  "random_text": 
    "He oppose at thrown desire of no. \
      Announcing impression unaffected day his are unreserved indulgence."
}

{
  "title": "Repulsive questions",
  "random_text": "Repulsive questions contented him few extensive supported."
}

{
  "title": "Old education",
  "random_text": "Old education him departure any arranging one prevailed."
}

Before we can index a document, we need to decide where to store it. It’s possible to have multiple indexes, which in turn contain multiple types. These types hold multiple documents, and each document has multiple fields.

We are going to store our documents using the following scheme:

  • text: the index name
  • article: the type name
  • id: the ID of this particular example text entry

To add a document we are going to run the following command:

curl -XPUT 'localhost:9200/text/article/1?pretty' \
  -H 'Content-Type: application/json' -d '
{
  "title": "He went",
  "random_text": 
    "He went such dare good fact. The small own seven saved man age."
}'

Here we're using id=1; we can add the other entries with the same command, incrementing the id each time.
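
For example, the second document might be added like this:

curl -XPUT 'localhost:9200/text/article/2?pretty' \
  -H 'Content-Type: application/json' -d '
{
  "title": "He oppose",
  "random_text": "He oppose at thrown desire of no. Announcing impression unaffected day his are unreserved indulgence."
}'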

4. Retrieving Documents

After we've added all our documents, we can check how many documents we have in the cluster using the following command:

curl -XGET 'http://localhost:9200/_count?pretty' -d '
{
  "query": {
    "match_all": {}
  }
}'

Also, we can get a document using its id with the following command:

curl -XGET 'localhost:9200/text/article/1?pretty'

And we should get the following response from Elasticsearch:

{
  "_index": "text",
  "_type": "article",
  "_id": "1",
  "_version": 1,
  "found": true,
  "_source": {
    "title": "He went",
    "random_text": 
      "He went such dare good fact. The small own seven saved man age."
  }
}

As we can see, this response corresponds to the entry added with id 1.

5. Querying Documents

OK, let's perform a full-text search with the following command:

curl -XGET 'localhost:9200/text/article/_search?pretty' \
  -H 'Content-Type: application/json' -d '
{
  "query": {
    "match": {
      "random_text": "him departure"
    }
  }
}'

And we get the following result:

{
  "took": 32,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "failed": 0
  },
  "hits": {
    "total": 2,
    "max_score": 1.4513469,
    "hits": [
      {
        "_index": "text",
        "_type": "article",
        "_id": "4",
        "_score": 1.4513469,
        "_source": {
          "title": "Old education",
          "random_text": "Old education him departure any arranging one prevailed."
        }
      },
      {
        "_index": "text",
        "_type": "article",
        "_id": "3",
        "_score": 0.28582606,
        "_source": {
          "title": "Repulsive questions",
          "random_text": "Repulsive questions contented him few extensive supported."
        }
      }
    ]
  }
}

As we can see, we searched for “him departure” and got two results with different scores. The first result is expected, since its text contains the searched phrase; accordingly, it has the higher score of 1.4513469.

The second result is retrieved because the target document contains the word “him”.

By default, ElasticSearch sorts matching results by their relevance score, that is, by how well each document matches the query. Note that the score of the second result is small relative to the first hit, indicating lower relevance.

6. Fuzzy Search

Fuzzy matching treats two words that are “fuzzily” similar as if they were the same word. First, we need to define what we mean by fuzziness.

Elasticsearch supports a maximum edit distance, specified with the fuzziness parameter, of 2. The fuzziness parameter can be set to AUTO, which results in the following maximum edit distances:

  • 0 for strings of one or two characters
  • 1 for strings of three, four, or five characters
  • 2 for strings of more than five characters

However, you may find that an edit distance of 2 returns results that don't appear to be related.

You may get better results, and better performance, with a maximum fuzziness of 1. The distance here refers to the Levenshtein distance, a string metric for measuring the difference between two sequences. Informally, the Levenshtein distance between two words is the minimum number of single-character edits required to change one word into the other; for example, the distance between “him” and “ham” is 1.

OK, let's perform our search with fuzziness enabled by adding the fuzziness parameter to our match query:

curl -XGET 'localhost:9200/text/article/_search?pretty' \
  -H 'Content-Type: application/json' -d '
{
  "query": {
    "match": {
      "random_text": {
        "query": "him departure",
        "fuzziness": "AUTO"
      }
    }
  }
}'

And we get the following result:

{
  "took": 88,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "failed": 0
  },
  "hits": {
    "total": 4,
    "max_score": 1.4513469,
    "hits": [
      {
        "_index": "text",
        "_type": "article",
        "_id": "4",
        "_score": 1.4513469,
        "_source": {
          "title": "Old education",
          "random_text": "Old education him departure any arranging one prevailed."
        }
      },
      {
        "_index": "text",
        "_type": "article",
        "_id": "2",
        "_score": 0.39833328,
        "_source": {
          "title": "He oppose",
          "random_text":
            "He oppose at thrown desire of no. 
              \ Announcing impression unaffected day his are unreserved indulgence."
        }
      },
      {
        "_index": "text",
        "_type": "article",
        "_id": "3",
        "_score": 0.28582606,
        "_source": {
          "title": "Repulsive questions",
          "random_text": "Repulsive questions contented him few extensive supported."
        }
      },
      {
        "_index": "text",
        "_type": "article",
        "_id": "1",
        "_score": 0.0,
        "_source": {
          "title": "He went",
          "random_text": "He went such dare good fact. The small own seven saved man age."
        }
      }
    ]
  }
}

As we can see, the fuzziness gives us more results.

We need to use fuzziness carefully because it tends to retrieve results that look unrelated.

7. Conclusion

In this quick tutorial, we focused on indexing documents and querying Elasticsearch for full-text search, directly via its REST API.

We of course have client APIs available for multiple programming languages when we need them – but the REST API is quite convenient and language-agnostic.
