
Java Web Weekly 110



At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> Q&A with Aleksey Shipilev on Compact Strings Optimization in OpenJDK 9 [infoq.com]

If you’re interested in the inner workings of JDK 9, this interview is well worth a read.

>> O Java EE 7 Application Servers, Where Art Thou? [antoniogoncalves.org]

Very interesting numbers on the current state of Java EE 7 application servers.

>> Introduction to Spring Rest Docs [yetanotherdevblog.com]

A solid and super-practical intro to a very cool new(ish) project out of the Spring ecosystem – Spring REST Docs.

>> Hystrix To Prevent Hysterix [keyholesoftware.com]

A good intro to Hystrix for a resilient system architecture.

The writeup is a bit verbose at the beginning, but it gets quite interesting and useful later on.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Thank you Waitrose, now fix your insecure site [troyhunt.com]

The right way to do HTTPS when it comes to sending user credentials over the wire. Not super complicated, but it seems that not everybody’s doing it right.

A useful and fun one to read.

>> Basics of Web Application Security: Encode HTML output [martinfowler.com]

The next installment in the security series I covered last week.

And a quick side-note – this article is what’s called an “evolving publication” – kind of a unique concept and maybe something that shows us that we don’t have to publish work in the same old way we’re used to.

Also worth reading:

3. Musings

>> We’re Not Beasts, So Let’s Not Act Like It [daedtech.com]

Being an employee and being a consultant coming in for a limited amount of time are two very different things. Not only different financially and organizationally, but at a fundamental level that has a lot more to do with mindset.

This writeup explores that difference in a practical and funny way – definitely have a read if you're skirting the border between employee and consultant (or thinking about it).

>> The Tyranny of the P1 [dandreamsofcoding.com]

Dan is taking a page out of Amy Hoy's playbook and getting back to product basics.

Of course the problem is that these aren’t the fun features to build into the product and it takes oh so much discipline to stay away from those.

>> Is Unlimited PTO a Good Deal for Me? [daedtech.com]

The first time I read about the concept of unlimited vacation time, I was excited about the idea for about five minutes, and then started to understand the nuances and implications of what that actually meant.

This piece explores those nuances in a clear and insightful way.

>> The software engineer’s guide to asserting dominance in the workplace [medium.com]

Funniest thing I read all week.

Also worth reading:

4. Comics

And my favorite comics of the week:

>> I didn’t know you could gift-wrap creepiness [dilbert.com]

>> Can we watch? [dilbert.com]

>> Backslashes [xkcd.com]

 

5. Pick of the Week

This week – a fantastic presentation about testing (an oldie but a goodie):

>> Test – Know Your Units (Oredev 2008)

 




Modular RAML Using Includes, Libraries, Overlays and Extensions


1. Introduction

In our first two articles on RAML – the RESTful API Modeling Language – we introduced some basic syntax, including the use of data types and JSON schema, and we showed how to simplify a RAML definition by extracting common patterns into resource types and traits.

In this article, we show how you can break your RAML API definition into modules by making use of includes, libraries, overlays, and extensions.

2. Our API

For the purpose of this article, we shall focus on the portion of our API involving the entity type called Foo.

Here are the resources making up our API:

  • GET /api/v1/foos
  • POST /api/v1/foos
  • GET /api/v1/foos/{fooId}
  • PUT /api/v1/foos/{fooId}
  • DELETE /api/v1/foos/{fooId}

3. Includes

The purpose of an include is to modularize a complex property value in a RAML definition by placing the property value in an external file.

Our first article touched briefly on the use of includes when we were specifying data types and examples whose properties were being repeated inline throughout the API.

3.1. General Usage and Syntax

The !include tag takes a single argument: the location of the external file containing the property value. This location may be an absolute URL, a path relative to the root RAML file, or a path relative to the including file.

A location starting with a forward slash (/) indicates a path relative to the location of the root RAML file, and a location beginning without a slash is interpreted to be relative to the location of the including file.

The logical corollary to the latter is that an included file may itself contain other !include directives.

Here is an example showing all three uses of the !include tag:

#%RAML 1.0
title: Baeldung Foo REST Services API
...
types: !include /types/allDataTypes.raml
resourceTypes: !include allResourceTypes.raml
traits: !include http://foo.com/docs/allTraits.raml

3.2. Typed Fragments

Rather than placing all the types, resource types, or traits in their own respective include files, you can also use special types of includes known as typed fragments to break each of these constructs into multiple include files, specifying a different file for each type, resource type, or trait.

You can also use typed fragments to define user documentation items, named examples, annotations, libraries, overlays, and extensions. We will cover the use of overlays and extensions later in the article.

Although it is not required, the first line of an include file that is a typed fragment may be a RAML fragment identifier of the following format:

#%RAML 1.0 <fragment-type>

For example, the first line of a typed fragment file for a trait would be:

#%RAML 1.0 Trait

If a fragment identifier is used, then the contents of the file MUST contain only valid RAML for the type of fragment being specified.

Let’s look first at a portion of the traits section of our API:

traits:
  - hasRequestItem:
      body:
        application/json:
          type: <<typeName>>
  - hasResponseItem:
      responses:
          200:
            body:
              application/json:
                type: <<typeName>>
                example: !include examples/<<typeName>>.json

In order to modularize this section using typed fragments, we first rewrite the traits section as follows:

traits:
  - hasRequestItem: !include traits/hasRequestItem.raml
  - hasResponseItem: !include traits/hasResponseItem.raml

We would then write the typed fragment file hasRequestItem.raml:

#%RAML 1.0 Trait
body:
  application/json:
    type: <<typeName>>

The typed fragment file hasResponseItem.raml would look like this:

#%RAML 1.0 Trait
responses:
    200:
      body:
        application/json:
          type: <<typeName>>
          example: !include /examples/<<typeName>>.json

4. Libraries

RAML libraries may be used to modularize any number and combination of data types, security schemes, resource types, traits, and annotations.

4.1. Defining a Library

Although usually defined in an external file, which is then referenced as an include, a library may also be defined inline. A library contained in an external file can reference other libraries as well.

Unlike a regular include or typed fragment, a library contained in an external file must declare the top-level element names that are being defined.

Let’s rewrite our traits section as a library file:

#%RAML 1.0 Library
# This is the file /libraries/traits.raml
usage: This library defines some basic traits
traits:
  hasRequestItem:
    usage: Use this trait for resources whose request body is a single item
    body:
      application/json:
        type: <<typeName>>
  hasResponseItem:
    usage: Use this trait for resources whose response body is a single item
    responses:
        200:
          body:
            application/json:
              type: <<typeName>>
              example: !include /examples/<<typeName>>.json

4.2. Applying a Library

Libraries are applied via the top-level uses property, the value of which is a map whose keys are the library names and whose values make up the contents of the libraries (typically brought in via !include).

Once we have created the libraries for our security schemes, data types, resource types, and traits, we can apply the libraries to the root RAML file:

#%RAML 1.0
title: Baeldung Foo REST Services API
uses:
  mySecuritySchemes: !include libraries/security.raml
  myDataTypes: !include libraries/dataTypes.raml
  myResourceTypes: !include libraries/resourceTypes.raml
  myTraits: !include libraries/traits.raml

4.3. Referencing a Library

A library is referenced by concatenating the library name, a dot (.), and the name of the element (e.g. data type, resource type, trait, etc) being referenced.

You may recall from our previous article how we refactored our resource types using the traits that we had defined. The following example shows how to rewrite our “item” resource type as a library, how to include the traits library file (shown above) within the new library, and how to reference the traits by prefixing the trait names with their library name qualifier (“myTraits“):

#%RAML 1.0 Library
# This is the file /libraries/resourceTypes.raml
usage: This library defines the resource types for the API
uses:
  myTraits: !include traits.raml
resourceTypes:
  item:
    usage: Use this resourceType to represent any single item
    description: A single <<typeName>>
    get:
      description: Get a <<typeName>> by <<resourcePathName>>
      is: [ myTraits.hasResponseItem, myTraits.hasNotFound ]
    put:
      description: Update a <<typeName>> by <<resourcePathName>>
      is: [ myTraits.hasRequestItem, myTraits.hasResponseItem, myTraits.hasNotFound ]
    delete:
      description: Delete a <<typeName>> by <<resourcePathName>>
      is: [ myTraits.hasNotFound ]
      responses:
        204:

5. Overlays and Extensions

Overlays and extensions are modules defined in external files that are used to extend an API. An overlay is used to extend non-behavioral aspects of an API, such as descriptions, usage directions, and user documentation items, whereas an extension is used to extend or override behavioral aspects of the API.

Unlike includes, which are referenced by other RAML files and applied as if they were coded inline, every overlay and extension file must contain a reference (via the top-level masterRef property) to its master file – either a valid RAML API definition or another overlay or extension file – to which it is to be applied.

5.1. Definition

The first line of an overlay file must be formatted as follows:

#%RAML 1.0 Overlay

And the first line of an extension file must be formatted similarly:

#%RAML 1.0 Extension

5.2. Usage Constraints

When using a set of overlays and/or extensions, all of them must refer to the same master RAML file. In addition, RAML processing tools usually expect the root RAML file and all overlay and extension files to have a common file extension (e.g. “.raml”).

5.3. Use Cases for Overlays

The motivation behind overlays is to provide a mechanism for separating interface from implementation, thus allowing the more human-oriented parts of a RAML definition to change or grow more frequently, while the core behavioral aspects of the API remain stable.

A common use case for overlays is to provide user documentation and other descriptive elements in multiple languages. Let’s rewrite the title of our API and add some user documentation items:

#%RAML 1.0
title: API for REST Services used in the RAML tutorials on Baeldung.com
documentation:
  - title: Overview
    content: |
      This document defines the interface for the REST services
      used in the popular RAML Tutorial series at Baeldung.com.
  - title: Copyright
    content: Copyright 2016 by Baeldung.com. All rights reserved.

Here is how we would define a Spanish language overlay for this section:

#%RAML 1.0 Overlay
# File located at (archivo situado en):
# /overlays/es_ES/documentationItems.raml
masterRef: /api.raml
usage: |
  To provide user documentation and other descriptive text in Spanish
  (Para proporcionar la documentación del usuario y otro texto descriptivo
  en español)
title: |
  API para servicios REST utilizados en los tutoriales RAML
  en Baeldung.com
documentation:
  - title: Descripción general
    content: |
      Este documento define la interfaz para los servicios REST
      utilizados en la popular serie de RAML Tutorial en Baeldung.com.
  - title: Derechos de autor
    content: |
      Derechos de autor 2016 por Baeldung.com.
      Todos los derechos reservados.

Another common use case for overlays is to externalize annotation metadata, which is essentially a way of adding non-standard constructs to an API in order to provide hooks for RAML processors such as testing and monitoring tools.

5.4. Use Cases for Extensions

As you may infer from the name, extensions are used to extend an API by adding new behaviors and/or modifying existing behaviors of an API. An analogy from the object-oriented programming world would be a subclass extending a superclass, where the subclass can add new methods and/or override existing methods. An extension may also extend an API’s non-functional aspects.

An extension might be used, for example, to define additional resources that are exposed only to a select set of users, such as administrators or users assigned a particular role. An extension could also be used to add features for a newer version of an API.

Below is an extension that overrides the version of our API and adds resources that were unavailable in the previous version:

#%RAML 1.0 Extension
# File located at:
# /extensions/en_US/additionalResources.raml
masterRef: /api.raml
usage: This extension defines additional resources for version 2 of the API.
version: v2
/foos:
  /bar/{barId}:
    get:
      description: |
        Get the foo that is related to the bar having barId = {barId}
      queryParameters:
        barId?: integer
      is: [ hasResponseItem: { typeName: Foo } ]

And here is a Spanish-language overlay for that extension:

#%RAML 1.0 Overlay
# Archivo situado en:
# /overlays/es_ES/additionalResources.raml
masterRef: /api.raml
usage: |
  Se trata de una superposición en español que describe los recursos
  adicionales para la versión 2 del API.
version: v2
/foos:
  /bar/{barId}:
    get:
      description: |
        Obtener el foo que se relaciona con el bar tomando barId = {barId}

It is worth noting here that although we used an overlay for the Spanish-language overrides in this example because it does not modify any behaviors of the API, we could just as easily have defined this module to be an extension. And it may be more appropriately defined as an extension, given that its purpose is to override properties found in the English-language extension above it.

6. Conclusion

In this tutorial, we have introduced several techniques to make a RAML API definition more modular by separating common constructs into external files.

First, we showed how the include feature in RAML can be used to refactor individual, complex property values into reusable external file modules known as typed fragments. Next, we demonstrated a way of using the include feature to externalize certain sets of elements into reusable libraries. Finally, we extended some behavioral and non-behavioral aspects of an API through the use of overlays and extensions.

To learn even more about RAML modularization techniques, please visit the RAML 1.0 spec.

You can view the full implementation of the API definition used for this tutorial in the GitHub project.


Introduction to Spring Data Elasticsearch


1. Overview

In this article we’ll explore the basics of Spring Data Elasticsearch in a code-focused, practical manner.

We’ll show how to index, search, and query Elasticsearch in a Spring application using Spring Data – a Spring module for interaction with a popular open-source, Lucene-based search engine.

While Elasticsearch is schemaless, it can use mappings in order to define the type of a field. When a document is indexed, its fields are processed according to their types. For example, a text field will be tokenized and filtered according to mapping rules. You can also create filters and tokenizers of your own.
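
For instance, here is a minimal sketch of creating an index and applying the entity-based mapping, assuming the elasticsearchTemplate bean that we configure in section 2.3 and the Article entity from section 3:

// create the index and apply the mapping derived from the @Field annotations
elasticsearchTemplate.createIndex(Article.class);
elasticsearchTemplate.putMapping(Article.class);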

2. Spring Data

Spring Data helps avoid boilerplate code. For example, if we define a repository interface that extends the ElasticsearchRepository interface provided by Spring Data Elasticsearch, CRUD operations for the corresponding document class will be made available by default.

Additionally, simply by declaring methods with names in a prescribed format, method implementations are generated for you – there is no need to write an implementation of the repository interface.

You can read more about Spring Data here.

2.1. Maven Dependency

Spring Data Elasticsearch provides a Java API for the search engine. In order to use it we need to add a new dependency to the pom.xml:

<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-elasticsearch</artifactId>
    <version>1.3.2.RELEASE</version>
</dependency>

2.2. Defining Repository Interfaces

Next, we need to extend one of the provided repository interfaces, replacing the generic types with our actual document and primary key types.

Notice that ElasticsearchRepository extends PagingAndSortingRepository, which provides built-in support for pagination and sorting.

In our example, we will use the paging feature in our custom search method:

public interface ArticleRepository extends ElasticsearchRepository<Article, String> {

    Page<Article> findByAuthorsName(String name, Pageable pageable);

    @Query("{\"bool\": {\"must\": [{\"match\": {\"authors.name\": \"?0\"}}]}}")
    Page<Article> findByAuthorsNameUsingCustomQuery(String name, Pageable pageable);
}

Notice that we added two custom methods. With the findByAuthorsName method, the repository proxy will create an implementation based on the method name. The resolution algorithm will determine that it needs to access the authors property and then search the name property of each item.

The second method, findByAuthorsNameUsingCustomQuery, uses an Elasticsearch boolean query, defined using the @Query annotation, which requires strict matching between the author’s name and the provided name argument.

2.3. Java Configuration

Let’s now explore the Spring configuration of our persistence layer here:

@Configuration
@EnableElasticsearchRepositories(basePackages = "com.baeldung.spring.data.es.repository")
@ComponentScan(basePackages = {"com.baeldung.spring.data.es.service"})
public class Config {

    // assumptions for this sketch: an SLF4J logger and a java.nio.file.Path
    // pointing at a temporary directory for the embedded node's data files
    private static final Logger logger = LoggerFactory.getLogger(Config.class);
    private final Path tmpDir = Paths.get(System.getProperty("java.io.tmpdir"), "es-data");

    @Bean
    public NodeBuilder nodeBuilder() {
        return new NodeBuilder();
    }

    @Bean
    public ElasticsearchOperations elasticsearchTemplate() {
        ImmutableSettings.Builder elasticsearchSettings = 
          ImmutableSettings.settingsBuilder()
          .put("http.enabled", "false") // 1
          .put("path.data", tmpDir.toAbsolutePath().toString()); // 2

        logger.debug(tmpDir.toAbsolutePath().toString());

        return new ElasticsearchTemplate(nodeBuilder()
          .local(true)
          .settings(elasticsearchSettings.build())
          .node()
          .client());
    }
}

Notice that we’re using a standard Spring enable-style annotation – @EnableElasticsearchRepositories  – to scan the provided package for Spring Data repositories.

We are also:

  1. Starting the Elasticsearch node without HTTP transport support
  2. Setting the location of the data files of each index allocated on the node

Finally – we’re also setting up an ElasticsearchOperations bean – elasticsearchTemplate – as our client to work against the Elasticsearch server.

3. Mappings

Let’s now define our first entity – a document called Article with a String id:

@Document(indexName = "blog", type = "article")
public class Article {

    @Id
    private String id;
    
    private String title;
    
    @Field(type = FieldType.Nested)
    private List<Author> authors;
    
    // standard getters and setters
}

Note that in the @Document annotation, we indicate that instances of this class should be stored in Elasticsearch in an index called “blog“, and with a document type of “article“. Documents with many different types can be stored in the same index.

Also notice that the authors field is marked as FieldType.Nested. This allows us to define the Author class separately, but have the individual instances of author embedded in an Article document when it is indexed in Elasticsearch.
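
For reference, a minimal Author class might look like the following (a sketch; the actual class in the sample project may differ):

public class Author {

    private String name;

    public Author(String name) {
        this.name = name;
    }

    // standard getters and setters
}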

4. Indexing Documents

Spring Data Elasticsearch generally auto-creates indexes based on the entities in the project.

However, you can also create an index programmatically, via the client template:

elasticsearchTemplate.createIndex(Article.class);

After the index is available, we can add a document to the index.

Let’s have a quick look at an example – indexing an article with two authors:

Article article = new Article("Spring Data Elasticsearch");
article.setAuthors(asList(new Author("John Smith"), new Author("John Doe")));
articleService.save(article);

5. Querying

5.1. Method Name-Based Query

The repository class we defined earlier had a findByAuthorsName method – which we can use for finding articles by author name:

String nameToFind = "John Smith";
Page<Article> articleByAuthorName
  = articleService.findByAuthorName(nameToFind, new PageRequest(0, 10));

By calling findByAuthorName with a PageRequest object, we obtain the first page of results (page numbering is zero-based), with that page containing at most 10 articles.
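
Subsequent pages can then be requested by incrementing the page index, for example:

// second page of at most 10 results (page numbering is zero-based)
Page<Article> secondPage
  = articleService.findByAuthorName(nameToFind, new PageRequest(1, 10));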

5.2. A Custom Query

There are a couple of ways to define custom queries for Spring Data Elasticsearch repositories. One way is to use the @Query annotation, as demonstrated in section 2.2.

Another option is to use a builder for custom query creation.

For example, we could search for articles that have the word “data” in the title by building a query with the NativeSearchQueryBuilder:

// assumes the static import:
// import static org.elasticsearch.index.query.FilterBuilders.regexpFilter;
SearchQuery searchQuery = new NativeSearchQueryBuilder()
  .withFilter(regexpFilter("title", ".*data.*"))
  .build();
List<Article> articles = elasticsearchTemplate.queryForList(searchQuery, Article.class);

6. Updating and Deleting

In order to update or delete a document, we first need to retrieve that document.

String articleTitle = "Spring Data Elasticsearch";
// assumes the static import:
// import static org.elasticsearch.index.query.QueryBuilders.matchQuery;
SearchQuery searchQuery = new NativeSearchQueryBuilder()
  .withQuery(matchQuery("title", articleTitle).minimumShouldMatch("75%"))
  .build();

List<Article> articles = elasticsearchTemplate.queryForList(searchQuery, Article.class);

Now, to update the title of the article – we can modify the document, and use the save API:

Article article = articles.get(0);
article.setTitle("Getting started with Search Engines");
articleService.save(article);

As you may have guessed, in order to delete a document you can use the delete method:

articleService.delete(articles.get(0));

7. Conclusion

This was a quick and practical discussion of the basic use of Spring Data Elasticsearch.

To read more about the impressive features of Elasticsearch, you can find its documentation on the official website.

The example used in this article is available as a sample project on GitHub.



Guava 19: What’s New?


1. Overview

Google Guava provides libraries with utilities that ease Java development. In this tutorial, we will take a look at the new functionality introduced in the Guava 19 release.

2. common.base Package Changes

2.1. Added CharMatcher Static Methods

CharMatcher, as its name implies, is used to check whether a string matches a set of requirements.

String inputString = "someString789";
boolean result = CharMatcher.javaLetterOrDigit().matchesAllOf(inputString);

In the example above, result will be true.

CharMatcher can also be used when you need to transform strings.

String number = "8 123 456 123";
String result = CharMatcher.whitespace().collapseFrom(number, '-');

In the example above, result will be “8-123-456-123”.

With the help of CharMatcher, you can count the number of occurrences of a character in a given string:

String number = "8 123 456 123";
int result = CharMatcher.digit().countIn(number);

In the example above, result will be 10.

Previous versions of Guava had matcher constants such as CharMatcher.WHITESPACE and CharMatcher.JAVA_LETTER_OR_DIGIT.

In Guava 19, these have been superseded by equivalent methods (CharMatcher.whitespace() and CharMatcher.javaLetterOrDigit(), respectively). This was changed to reduce the number of classes created when CharMatcher is used.

Using static factory methods allows classes to be created only as needed. In future releases, matcher constants will be deprecated and removed.
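
To illustrate the change, here is the constant-based call next to its factory-method equivalent:

// Guava 18 and earlier (constant, to be deprecated):
boolean oldStyle = CharMatcher.JAVA_LETTER_OR_DIGIT.matchesAllOf("someString789");

// Guava 19 (static factory method):
boolean newStyle = CharMatcher.javaLetterOrDigit().matchesAllOf("someString789");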

2.2. lazyStackTrace Method in Throwables

This method returns a List of the stack trace elements (lines) of a provided Throwable. It can be faster than iterating through the full stack trace (Throwable.getStackTrace()) if only a portion is needed, but can be slower if you will iterate over the full stack trace.

IllegalArgumentException e = new IllegalArgumentException("Some argument is incorrect");
List<StackTraceElement> stackTraceElements = Throwables.lazyStackTrace(e);
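
If only the top of the trace is of interest, the lazy list avoids materializing the rest – a quick sketch:

// only the accessed element needs to be computed
StackTraceElement topFrame = stackTraceElements.get(0);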

3. common.collect Package Changes

3.1.  Added FluentIterable.toMultiset()

In a previous Baeldung article, What's New in Guava 18, we looked at FluentIterable. The toMultiset() method is used when you need to convert a FluentIterable to an ImmutableMultiset.

User[] usersArray = {new User(1L, "John", 45), new User(2L, "Max", 15)};
ImmutableMultiset<User> users = FluentIterable.of(usersArray).toMultiset();

A Multiset is a collection, like Set, that supports order-independent equality. The main difference between a Set and a Multiset is that a Multiset may contain duplicate elements. A Multiset stores equal elements as occurrences of the same single element, so you can call Multiset.count(java.lang.Object) to get the total count of occurrences of a given object.

Let's take a look at a few examples:

List<String> userNames = Arrays.asList("David", "Eugen", "Alex", "Alex", "David", "David", "David");

Multiset<String> userNamesMultiset = HashMultiset.create(userNames);

assertEquals(7, userNamesMultiset.size());
assertEquals(4, userNamesMultiset.count("David"));
assertEquals(2, userNamesMultiset.count("Alex"));
assertEquals(1, userNamesMultiset.count("Eugen"));
assertThat(userNamesMultiset.elementSet(), anyOf(containsInAnyOrder("Alex", "David", "Eugen")));

You can easily determine the count of duplicate elements, which is far cleaner than with standard Java collections.

3.2. Added RangeSet.asDescendingSetOfRanges() and RangeMap.asDescendingMapOfRanges()

RangeSet is used to operate with nonempty ranges (intervals). We can describe a RangeSet as a set of disconnected, nonempty ranges. When you add a new nonempty range to a RangeSet, any connected ranges will be merged and empty ranges will be ignored.

Let’s take a look at some methods we can use to build new ranges: Range.closed(), Range.openClosed(), Range.closedOpen(), Range.open().

The difference between them is that open ranges don't include their endpoints. They have different notations in mathematics: open endpoints are denoted with “(” or “)”, while closed endpoints are denoted with “[” or “]”.

For example (0,5) means “any value greater than 0 and less than 5”, while (0,5] means “any value greater than 0 and less than or equal to 5”:

RangeSet<Integer> rangeSet = TreeRangeSet.create();
rangeSet.add(Range.closed(1, 10));

Here we added the range [1, 10] to our RangeSet. And now we want to extend it by adding a new range:

rangeSet.add(Range.closed(5, 15));

You can see that these two ranges are connected at 5, so RangeSet will merge them into a new single range, [1, 15]:

rangeSet.add(Range.closedOpen(10, 17));

These ranges are connected at 10, so they will be merged, resulting in a closed-open range, [1, 17). You can check whether a value is included in a range using the contains method:

rangeSet.contains(15);

This will return true, because the range [1,17) contains 15. Let’s try another value:

rangeSet.contains(17);

This will return false, because the range [1,17) doesn't contain its upper endpoint, 17. You can also check whether a range encloses any other range using the encloses method:

rangeSet.encloses(Range.closed(2, 3));

This will return true because the range [2,3] falls completely within our range, [1,17).

There are a few more methods that can help you operate with intervals, such as Range.greaterThan(), Range.lessThan(), Range.atLeast(), Range.atMost(). The first two will add open intervals, the last two will add closed intervals. For example:

rangeSet.add(Range.greaterThan(22));

This will add a new interval (22, +∞) to your RangeSet, because it has no connections with other intervals.

With the help of the new methods asDescendingSetOfRanges (for RangeSet) and asDescendingMapOfRanges (for RangeMap), you can iterate over the ranges of a RangeSet, or the entries of a RangeMap, in descending order.
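
For example, a short sketch of iterating over our ranges from highest to lowest:

// iterate from the highest range down to the lowest
for (Range<Integer> range : rangeSet.asDescendingSetOfRanges()) {
    System.out.println(range);
}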

3.3. Added Lists.cartesianProduct(List...) and Lists.cartesianProduct(List<List<B>>)

A Cartesian product returns every possible combination of two or more collections:

List<String> first = Lists.newArrayList("value1", "value2");
List<String> second = Lists.newArrayList("value3", "value4");

List<List<String>> cartesianProduct = Lists.cartesianProduct(first, second);

List<String> pair1 = Lists.newArrayList("value2", "value3");
List<String> pair2 = Lists.newArrayList("value2", "value4");
List<String> pair3 = Lists.newArrayList("value1", "value3");
List<String> pair4 = Lists.newArrayList("value1", "value4");

assertThat(cartesianProduct, anyOf(containsInAnyOrder(pair1, pair2, pair3, pair4)));

As you can see from this example, the resulting list will contain all possible combinations of provided lists.

3.4. Added Maps.newLinkedHashMapWithExpectedSize(int)

The initial capacity of a standard LinkedHashMap is 16 (you can verify this in the LinkedHashMap source). When it reaches its load factor (by default, 0.75), the map will rehash and double its size. But if you know that your map will handle many key-value pairs, you can specify an initial capacity greater than 16, allowing you to avoid repeated rehashing:

LinkedHashMap<Object, Object> someLinkedMap = Maps.newLinkedHashMapWithExpectedSize(512);

3.5. Re-added Multisets.removeOccurrences(Multiset, Multiset)

This method is used to remove the occurrences specified by one Multiset from another:

Multiset<String> multisetToModify = HashMultiset.create();
Multiset<String> occurrencesToRemove = HashMultiset.create();

multisetToModify.add("John");
multisetToModify.add("Max");
multisetToModify.add("Alex");

occurrencesToRemove.add("Alex");
occurrencesToRemove.add("John");

Multisets.removeOccurrences(multisetToModify, occurrencesToRemove);

After this operation only “Max” will be left in multisetToModify.

Note that if multisetToModify contains multiple instances of a given element while occurrencesToRemove contains only one instance of that element, removeOccurrences will remove only one instance.
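
A short, self-contained sketch of that behavior:

Multiset<String> toModify = HashMultiset.create(Arrays.asList("Alex", "Alex"));
Multiset<String> toRemove = HashMultiset.create(Arrays.asList("Alex"));

Multisets.removeOccurrences(toModify, toRemove);

// only one occurrence was removed, so one "Alex" remains
assertEquals(1, toModify.count("Alex"));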

4. common.hash Package Changes

4.1. Added Hashing.sha384()

The Hashing.sha384() method returns a hash function that implements the SHA-384 algorithm:

int inputData = 15;
        
HashFunction hashFunction = Hashing.sha384();
HashCode hashCode = hashFunction.hashInt(inputData);

The SHA-384 hash for 15 is “0904b6277381dcfbddd…2240a621b2b5e3cda8”.

4.2. Added Hashing.concatenating(HashFunction, HashFunction, HashFunction…) and Hashing.concatenating(Iterable<HashFunction>)

With the help of the Hashing.concatenating methods, you can concatenate the results of a series of hash functions:

int inputData = 15;

HashFunction crc32Function = Hashing.crc32();
HashCode crc32HashCode = crc32Function.hashInt(inputData);

HashFunction hashFunction = Hashing.concatenating(Hashing.crc32(), Hashing.crc32());
HashCode concatenatedHashCode = hashFunction.hashInt(inputData);

The resulting concatenatedHashCode will be “4acf27794acf2779”, which is the same as the crc32HashCode (“4acf2779”) concatenated with itself.

In our example, a single hashing algorithm was used twice for clarity; that is not particularly useful in practice, however. Combining two hash functions is useful when you need to make your hash stronger, as it can then only be broken if both of your hashes are broken. In most cases, use two different hash functions.
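
For instance, a sketch combining two different algorithms:

// a 64-bit hash built from two different 32-bit functions
HashFunction stronger = Hashing.concatenating(Hashing.murmur3_32(), Hashing.crc32());
HashCode strongerCode = stronger.hashInt(15);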

5. common.reflect Package Changes

5.1. Added TypeToken.isSubtypeOf

TypeToken is used to manipulate and query generic types, even at runtime, avoiding problems due to type erasure.

Java doesn’t retain generic type information for objects at runtime, so it is impossible to know if a given object has a generic type or not. But with the assistance of reflection, you can detect generic types of methods or classes. TypeToken uses this workaround to allow you to work with and query generic types without extra code.

In the example below, you can see how, without TypeToken, isAssignableFrom returns true even though ArrayList<String> is not assignable from ArrayList<Integer>:

ArrayList<String> stringList = new ArrayList<>();
ArrayList<Integer> intList = new ArrayList<>();
boolean isAssignableFrom = stringList.getClass().isAssignableFrom(intList.getClass());

To solve this problem, we can perform the check with the help of TypeToken:

TypeToken<ArrayList<String>> listString = new TypeToken<ArrayList<String>>() { };
TypeToken<ArrayList<Integer>> integerString = new TypeToken<ArrayList<Integer>>() { };

boolean isSupertypeOf = listString.isSupertypeOf(integerString);

In this example, isSupertypeOf will return false.

In previous versions of Guava, there was a method, isAssignableFrom, for this purpose, but as of Guava 19 it is deprecated in favor of isSupertypeOf. Additionally, the method isSubtypeOf(TypeToken) can be used to determine whether a class is a subtype of another class:

TypeToken<ArrayList<String>> stringList = new TypeToken<ArrayList<String>>() { };
TypeToken<List> list = new TypeToken<List>() { };

boolean isSubtypeOf = stringList.isSubtypeOf(list);

ArrayList is a subtype of List, so the result will be true, as expected.

6. common.io Package Changes

6.1. Added ByteSource.sizeIfKnown()

This method returns the size of the source in bytes, if it can be determined, without opening the data stream:

ByteSource charSource = Files.asByteSource(file);
Optional<Long> size = charSource.sizeIfKnown();

6.2. Added CharSource.length()

In previous versions of Guava, there was no method to determine the length of a CharSource. Now you can use CharSource.length() for this purpose.
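
A quick sketch of its usage:

CharSource charSource = Files.asCharSource(file, Charsets.UTF_8);
long length = charSource.length(); // may open and read the stream to count characters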

6.3. Added CharSource.lengthIfKnown()

The same as for ByteSource, but with CharSource.lengthIfKnown() you can determine the length of your file in characters without opening the data stream:

CharSource charSource = Files.asCharSource(file, Charsets.UTF_8);
Optional<Long> length = charSource.lengthIfKnown();

7. Conclusion

Guava 19 introduced many useful additions and improvements to its growing library. It is well worth considering for use in your next project.

The code samples in this article are available in the GitHub repository.

Java Web Weekly, Issue 111


At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> Reactive Spring [spring.io]

A quick announcement of the plans for reactive programming support in Spring 5.

>> How to enable bytecode enhancement dirty checking in Hibernate [vladmihalcea.com]

An interesting Hibernate 5 feature – using bytecode enhancement to do dirty checking. Quick and to the point.

>> Dear API Designer. Are You Sure, You Want to Return a Primitive? [jooq.org]

Good API design is hard – that much should be clear by now.

But we’re all working towards getting better at it, and this writeup definitely makes some good points towards that.

>> Designing your own Spring Boot starter – part 1 [frankel.ch]

The first steps in putting together a Spring Boot style auto configuration – leveraging the wide array of flexible annotations in Boot.

This is no longer a new concept, but it's still super powerful, especially if you choose to go beyond what the framework provides out of the box.

>> Preventing Session Hijacking With Spring [broadleafcommerce.com]

Solid read on protecting your system against session fixation attacks with Spring Security.

>> Java for small teams [ncrcoe.gitbooks.io]

This looks like a very useful collection of tactics and general practical advice for your first few years of doing Java work.

I haven’t read the whole thing, but the bits that I did read, I fully agreed with.

>> IntelliJ IDEA Pro Tips [medium.com]

A good array of more advanced tips to using IntelliJ well.

Getting the most out of your IDE can really make a day to day difference in your coding flow. I personally learned the most out of pairing sessions and watching my pair do stuff better than I did.

So this is definitely recommended reading if you’re an IntelliJ user (I’m not).

>> Announcing Extras for Eclipse [codeaffine.com]

And on that note – here’s some Eclipse goodness as well.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Data breaches, vBulletin and weak password hashing [troyhunt.com]

Read up on this if you’re doing any kind of security online. Good stuff.

>> Elasticsearch Cluster in a Jiffy [codecentric.de]

To the point options to bootstrap an Elasticsearch cluster. I’ll definitely give this a try soon, as I’m doing a lot of Elasticsearch work lately.

>> Jepsen: RethinkDB 2.2.3 reconfiguration [aphyr.com]

As always, if you're interested in the inner workings of persistence, have a read.

This one is about RethinkDB – which I've personally never used – but that didn't make the piece any less interesting.

Also worth reading:

3. Musings

>> Costs And Benefits Of Comments [codefx.org]

Another interesting installment in the “comments” series.

This one is on my weekend reading list, but I wanted to include it here because I really enjoyed the past writeups.

>> Working with feature-toggled systems [martinfowler.com]

>> Final part of Feature Toggles [martinfowler.com]

The final two parts in what is now a complete reference article on using feature toggles in a system.

>> Mistakes Dev Managers Make [daedtech.com]

I fully agree that doing a good job as a manager comes down to trust. The trust the manager has in the team, and of course the way the team trusts (or doesn't trust) the manager.

>> Taobao’s Security Breach from a Log Perspective [loggly.com]

Yet another security breach story, and of course something that could have been avoided with just a few straightforward safeguards in place.

Looks like I timed the announcement of my next course – Learn Spring Security – perfectly :)

>> The 5 Golden Rules of Giving Awesome Customer Support [jooq.org]

Good advice all around.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> You read those same policies last week [dilbert.com]

>> Did you know it was hideous before I told you? [dilbert.com]

>> I don’t do that [dilbert.com]

 

5. Pick of the Week

After a couple of months of winding down after the intensity of writing and recording the Master Class of my last course, I’m finally well rested and ready to announce my next big project:

>> Learn Spring Security

 



Hiring a New Technical Editor for Baeldung


Last August, I published this article here on Baeldung – it was the first job description I ever published on the site.

I got quite a few solid applications and made a hire relatively quickly; he's been working with me ever since as the Technical Editor of Baeldung.

Now, it’s time to do it again – and again, I’m reaching out to the community.

I’m looking for a part-time Technical Editor to help and work with new authors.

Who is the right candidate?

First – you need to be a developer yourself, working or actively involved in the Java and Spring ecosystem. All of these articles are code-centric, so being in the trenches and able to code is instrumental.

Second – you need to have your own technical site / blog in the Java ecosystem (or have some similar experience). This site cannot be super-small – a 3-post blog is really not enough to do a good evaluation.

Finally – and it almost goes without saying – you should have a good command of the English language.

What Will You Be Doing?

You’re going to work with authors, review their new article drafts and provide helpful feedback.

The goal is to generally make sure that the article hits a high level of quality before it gets published. More specifically – articles should match the Baeldung formatting, code and style guidelines.

Beyond formatting and style, articles should be code-focused, clean and easy to understand. Sometimes an article is almost there, but not quite – and the author needs to be guided towards a better solution, or a better way of explaining some specific concept.

Typical Time Commitment and Budget

In a typical week, there are either 3 new 1250-word articles or 4 new 1000-word articles. And the typical article takes about 2 rounds of review until it's ready to go.

All of this usually takes about 30 to 45 minutes of work for a small to medium article and can take 60 to 90 minutes for larger pieces.

Overall, you’ll spend somewhere around 12-14 hours / month. 

The budget for the position is $600 / month.

Apply

If you think you’re well suited for this work, I’d love to work with you to help grow Baeldung.

Email me at eugen@baeldung.com with your details.

Cheers,

Eugen. 

A Guide to RESTEasy


1. Introduction

JAX-RS (Java API for RESTful Web Services) is a set of Java APIs that provide support for creating REST APIs. And the framework makes good use of annotations to simplify the development and deployment of these APIs.

In this tutorial, we'll use RESTEasy, the JBoss-provided portable implementation of the JAX-RS specification, in order to create a simple RESTful web service.

2. Project Setup

We are going to consider two possible scenarios:

  • Standalone Setup – intended to work on any application server
  • JBoss AS Setup – to be considered only for deployment in JBoss AS

2.1. Standalone Setup

Let's start by using JBoss WildFly 10 with the standalone setup.

JBoss WildFly 10 comes with RESTEasy version 3.0.11, but as you’ll see, we’ll configure the pom.xml with the new 3.0.14 version.

And thanks to the resteasy-servlet-initializer, RESTEasy provides integration with standalone Servlet 3.0 containers via the ServletContainerInitializer integration interface.

Let’s have a look at the pom.xml:

<properties>
    <resteasy.version>3.0.14.Final</resteasy.version>
</properties>
<dependencies>
    <dependency>
        <groupId>org.jboss.resteasy</groupId>
        <artifactId>resteasy-servlet-initializer</artifactId>
        <version>${resteasy.version}</version>
    </dependency>
    <dependency>
        <groupId>org.jboss.resteasy</groupId>
        <artifactId>resteasy-client</artifactId>
        <version>${resteasy.version}</version>
    </dependency>
</dependencies>

jboss-deployment-structure.xml

Within JBoss, everything that is deployed as a WAR, JAR or EAR is a module. These modules are referred to as dynamic modules.

Besides these, there are also some static modules in $JBOSS_HOME/modules. Since JBoss ships RESTEasy as static modules, for a standalone deployment the jboss-deployment-structure.xml is mandatory in order to exclude some of them.

In this way, all classes and JAR files contained in our WAR will be loaded:

<jboss-deployment-structure>
    <deployment>
        <exclude-subsystems>
            <subsystem name="resteasy" />
        </exclude-subsystems>
        <exclusions>
            <module name="javaee.api" />
            <module name="javax.ws.rs.api"/>
            <module name="org.jboss.resteasy.resteasy-jaxrs" />
        </exclusions>
        <local-last value="true" />
    </deployment>
</jboss-deployment-structure>

2.2. JBoss AS Setup

If you are going to run RESTEasy with JBoss version 6 or higher, you can choose to adopt the libraries already bundled in the application server, thus simplifying the pom:

<dependencies>
    <dependency>
        <groupId>org.jboss.resteasy</groupId>
        <artifactId>resteasy-jaxrs</artifactId>
        <version>${resteasy.version}</version>
    </dependency>
</dependencies>

Notice that jboss-deployment-structure.xml is no longer needed.

3. Server Side Code

3.1. Servlet Version 3 web.xml

Let’s now have a quick look at the web.xml of our simple project here:

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="3.0" xmlns="http://java.sun.com/xml/ns/javaee"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://java.sun.com/xml/ns/javaee 
     http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd">

   <display-name>RestEasy Example</display-name>

   <context-param>
      <param-name>resteasy.servlet.mapping.prefix</param-name>
      <param-value>/rest</param-value>
   </context-param>

</web-app>

resteasy.servlet.mapping.prefix is needed only if you want to prepend a relative path to the API application.

At this point, it's very important to notice that we haven't declared any Servlet in the web.xml, because the resteasy-servlet-initializer has been added as a dependency in the pom.xml. The reason for that is that RESTEasy provides the org.jboss.resteasy.plugins.servlet.ResteasyServletInitializer class, which implements javax.servlet.ServletContainerInitializer.

ServletContainerInitializer is an initializer that is executed before any servlet context is ready – you can use it to define servlets, filters or listeners for your app.

3.2. The Application Class

The javax.ws.rs.core.Application class is a standard JAX-RS class that you may implement to provide information on your deployment:

@ApplicationPath("/rest")
public class RestEasyServices extends Application {

    private Set<Object> singletons = new HashSet<Object>();

    public RestEasyServices() {
        singletons.add(new MovieCrudService());
    }

    @Override
    public Set<Object> getSingletons() {
        return singletons;
    }
}

As you can see – this is simply a class that lists all JAX-RS root resources and providers, and it is annotated with the @ApplicationPath annotation.

If you return empty sets for both classes and singletons, the WAR will be scanned for JAX-RS annotated resource and provider classes.
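
In other words, a minimal scanning-based deployment could be sketched as an empty Application subclass (the class name here is hypothetical):

@ApplicationPath("/rest")
public class ScannedApplication extends Application {
    // no overrides: getClasses() and getSingletons() return empty sets,
    // so the WAR is scanned for @Path and @Provider annotated classes
}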

3.3. A Services Implementation Class

Finally, let’s see an actual API definition here:

@Path("/movies")
public class MovieCrudService {

    private Map<String, Movie> inventory = new HashMap<String, Movie>();

    @GET
    @Path("/getinfo")
    @Produces({ MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
    public Movie movieByImdbId(@QueryParam("imdbId") String imdbId) {
        if (inventory.containsKey(imdbId)) {
            return inventory.get(imdbId);
        } else {
            return null;
        }
    }

    @POST
    @Path("/addmovie")
    @Consumes({ MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
    public Response addMovie(Movie movie) {
        if (null != inventory.get(movie.getImdbId())) {
            return Response
              .status(Response.Status.NOT_MODIFIED)
              .entity("Movie is Already in the database.").build();
        }

        inventory.put(movie.getImdbId(), movie);
        return Response.status(Response.Status.CREATED).build();
    }
}

4. Conclusions

In this quick tutorial, we introduced RESTEasy and built a super simple API with it.

The example used in this article is available as a sample project on GitHub.


Define Custom RAML Properties Using Annotations


1. Introduction

In this, the fourth article in our series on RAML – the RESTful API Modeling Language – we demonstrate how to use annotations to define custom properties for a RAML API specification. This process is also referred to as extending the metadata of the specification.

Annotations may be used to provide hooks for RAML processing tools requiring additional specifications that lie outside the scope of the official language.

2. Declaring Annotation Types

One or more annotation types may be declared using the top-level annotationTypes property.

In the simplest of cases, the annotation type name is all that is needed to specify it, in which case the annotation type value is implicitly defined to be a string:

annotationTypes:
  simpleImplicitStringValueType:

This is equivalent to the more explicit annotation type definition shown here:

annotationTypes:
  simpleExplicitStringValueType:
    type: string

In other cases, an annotation type specification will contain a value object that is considered to be the annotation type declaration.

In these cases, the annotation type is defined using the same syntax as a data type, with the addition of two optional attributes: allowedTargets, whose value is either a string or an array of strings limiting the types of target locations to which the annotation may be applied; and allowMultiple, whose boolean value states whether or not the annotation may be applied more than once within a single target (the default is false).

Here is a brief example declaring an annotation type containing additional properties and attributes:

annotationTypes:
  complexValueType:
    allowMultiple: true
    properties:
      prop1: integer
      prop2: string
      prop3: boolean

2.1. Target Locations Supporting the Use of Annotations

Annotations may be applied to (used in) several root-level target locations, including the root level of the API itself, resource types, traits, data types, documentation items, security schemes, libraries, overlays, extensions, and other annotation types.

Annotations may also be applied to security scheme settings, resources, methods, response declarations, request bodies, response bodies, and named examples.

2.2. Restricting an Annotation Type’s Targets

To restrict an annotation type to one or more specific target location types, you would define its allowedTargets attribute.

When restricting an annotation type to a single target location type, you would assign the allowedTargets attribute a string value representing that target location type:

annotationTypes:
  supportsOnlyOneTargetLocationType:
    allowedTargets: TypeDeclaration

To allow multiple target location types for an annotation type, you would assign the allowedTargets attribute an array of string values representing those target location types:

annotationTypes:
  supportsMultipleTargetLocationTypes:
    allowedTargets: [ Library, Overlay, Extension ]

If the allowedTargets attribute is not defined on an annotation type, then by default, that annotation type may be applied to any of the supporting target location types.

3. Applying Annotation Types

Once you have defined the annotation types at the root level of your RAML API spec, you would apply them to their intended target locations, providing their property values at each instance. The application of an annotation type within a target location is referred to simply as an annotation on that target location.

3.1. Syntax

In order to apply an annotation type, add the annotation type name enclosed in parentheses () as an attribute of the target location and provide the annotation type value properties that the annotation type is to use for that specific target. If the annotation type is in a RAML library, then you would concatenate the library reference followed by a dot (.) followed by the annotation type name.

3.2. Example

Here is an example showing how we might apply some of the annotation types listed in the above code snippets to various resources and methods of our API:

/foos:
  type: myResourceTypes.collection
  (simpleImplicitStringValueType): alpha
  ...
  get:
    (simpleExplicitStringValueType): beta
  ...
  /{fooId}:
    type: myResourceTypes.item
    (complexValueType):
      prop1: 4
      prop2: testing
      prop3: true

4. Use Case

One potential use case for annotations would be defining and configuring test cases for an API.

Suppose we wanted to develop a RAML processing tool that can generate a series of tests against our API based on annotations. We could define the following annotation type:

annotationTypes:
  testCase:
    allowedTargets: [ Method ]
    allowMultiple: true
    usage: |
      Use this annotation to declare a test case.
      You may apply this annotation multiple times per location.
    properties:
      scenario: string
      setupScript?: string[]
      testScript: string[]
      expectedOutput?: string
      cleanupScript?: string[]

We could then configure a series of test cases for our /foos resource by applying annotations as follows:

/foos:
  type: myResourceTypes.collection
  get:
    (testCase):
      scenario: No Foos
      setupScript: deleteAllFoosIfAny
      testScript: getAllFoos
      expectedOutput: ""
    (testCase):
      scenario: One Foo
      setupScript: [ deleteAllFoosIfAny, addInputFoos ]
      testScript: getAllFoos
      expectedOutput: '[ { "id": 999, "name": "Joe" } ]'
      cleanupScript: deleteInputFoos
    (testCase):
      scenario: Multiple Foos
      setupScript: [ deleteAllFoosIfAny, addInputFoos ]
      testScript: getAllFoos
      expectedOutput: '[ { "id": 998, "name": "Bob" }, { "id": 999, "name": "Joe" } ]'
      cleanupScript: deleteInputFoos

5. Conclusion

In this tutorial, we have shown how to extend the metadata for a RAML API specification through the use of custom properties called annotations.

First, we showed how to declare annotation types using the top-level annotationTypes property and enumerated the types of target locations to which they are allowed to be applied.

Next, we demonstrated how to apply annotations in our API and noted how to restrict the types of target locations to which a given annotation can be applied.

Finally, we introduced a potential use case by defining annotation types that could potentially be supported by a test generation tool and showing how one might apply those annotations to an API.

For more information about the use of annotations in RAML, please visit the RAML 1.0 spec.

You can view the full implementation of the API definition used for this tutorial in the GitHub project.


Java Web Weekly 112


At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> JUnit 5 – Setup [codefx.org]

A quick intro to what’s shaping up to become a very good step forward for JUnit – which bodes well for the entire ecosystem.

>> Reactor 2.5 : A Second Generation Reactive Foundation for the JVM [spring.io]

An update on what's going on with the reactive systems story – seems like a lot of progress is being made.

>> An Ingenious Workaround to Emulate Sum Types in Java [jooq.org]

Some fun pushing the boundaries of java generics.

>> The New Hibernate ORM User Guide [in.relation.to]

A big update to the Hibernate docs, which are now going to 5.1 by default.

>> Memory Leaks: Fallacies and Misconceptions [plumbr.eu]

Some of the basics of what's healthy and what's not when looking at the memory consumption of a JVM – simple and to the point.

>> Setting Up Distributed Infinispan Cache with Hibernate and Spring [techblog.bozho.net]

A conversationally written guide on setting up a caching layer for Hibernate with Spring. This will definitely come in handy for at least a few developers out there.

>> The Mute Design Pattern [jooq.org]

Hehe – now let’s have some fun.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Is Your Computer Stable? [codinghorror.com]

A solid set of tests you can (and should) run against your rig to make sure it’s in working order.

>> Stack Overflow: The Architecture – 2016 Edition [nickcraver.com]

Some cool numbers and behind-the-scenes details of running StackOverflow. Very interesting to see what it takes to run SO the old-school way.

Also worth reading:

3. Musings

>> Everything you need to know about the Apple versus FBI case [troyhunt.com]

This is a long read, but an important one given the recent news in the privacy/security world.

>> The Paradox of Autonomy and Recognition [queue.acm.org]

An interesting (yet long) read about office politics and evaluating the work of other developers.

>> High Stakes Programming by Coincidence [daedtech.com]

Committing a fix you don’t quite understand is almost never a good idea, and imagining the stakes are high is an interesting way to think about it and quickly reach a decision.

Also worth reading:

 

4. Comics

And my favorite Dilberts of the week:

>> Why are you picking this vendor? [dilbert.com]

>> Let’s just say I’m “comfortable” [dilbert.com]

>> This is tech support. How may I abuse you? [dilbert.com]

 

5. Pick of the Week

>> Shields Down [randsinrepose.com]

 



RESTEasy Client API


1. Introduction

In the previous article we focused on the RESTEasy server side implementation of JAX-RS 2.0.

JAX-RS 2.0 introduces a new client API so that you can make HTTP requests to your remote RESTful web services. Jersey, Apache CXF, Restlet and RESTEasy are only a subset of the most popular implementations.

In this article we’ll explore how to consume a REST API by sending requests with the RESTEasy client API.

2. Project Setup

Add the following dependency to your pom.xml:

<properties>
    <resteasy.version>3.0.14.Final</resteasy.version>
</properties>
<dependencies>
    <dependency>
        <groupId>org.jboss.resteasy</groupId>
        <artifactId>resteasy-client</artifactId>
        <version>${resteasy.version}</version>
    </dependency>
    ...
</dependencies>

3. Client Side Code

The client implementation is quite simple, being made up of 3 main classes:

    • Client
    • WebTarget
    • Response

The Client interface is a builder of WebTarget instances.

WebTarget represents a distinct URL or URL template, from which you can build more sub-resource WebTargets or invoke requests.

There are really two ways to create a Client:

  • The standard way, by using the org.jboss.resteasy.client.ClientRequest
  • The RESTEasy Proxy Framework: by using the ResteasyClientBuilder class

We will focus on the RESTEasy Proxy Framework here.

Instead of using JAX-RS annotations to map an incoming request to your RESTful Web Service method, the client framework uses those same annotations to build an HTTP request that it invokes against a remote RESTful Web Service.

So let’s start writing a Java interface and using JAX-RS annotations on the methods and on the interface.

3.1. The ServicesClient Interface

@Path("/movies")
public interface ServicesInterface {

    @GET
    @Path("/getinfo")
    @Produces({ MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
    Movie movieByImdbId(@QueryParam("imdbId") String imdbId);

    @POST
    @Path("/addmovie")
    @Consumes({ MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
    Response addMovie(Movie movie);

    @PUT
    @Path("/updatemovie")
    @Consumes({ MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
    Response updateMovie(Movie movie);

    @DELETE
    @Path("/deletemovie")
    Response deleteMovie(@QueryParam("imdbId") String imdbId);
}

3.2. The Movie Class

@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "movie", propOrder = { "imdbId", "title" })
public class Movie {

    protected String imdbId;
    protected String title;

    // constructors, getters and setters
}

3.3. The Request Creation

We’ll now generate a proxy client that we can use to consume the API:

String transformerImdbId = "tt0418279";
Movie transformerMovie = new Movie("tt0418279", "Transformer 2");
Movie batmanMovie = new Movie("tt0096895", "Batman"); // used by the DELETE example below
final String path = "http://127.0.0.1:8080/RestEasyTutorial/rest";
 
ResteasyClient client = new ResteasyClientBuilder().build();
ResteasyWebTarget target = client.target(UriBuilder.fromPath(path));
ServicesInterface proxy = target.proxy(ServicesInterface.class);

// POST
Response moviesResponse = proxy.addMovie(transformerMovie);
System.out.println("HTTP code: " + moviesResponse.getStatus());
moviesResponse.close();

// GET
Movie movie = proxy.movieByImdbId(transformerImdbId);

// PUT
transformerMovie.setTitle("Transformer 4");
moviesResponse = proxy.updateMovie(transformerMovie);
moviesResponse.close();

// DELETE
moviesResponse = proxy.deleteMovie(batmanMovie.getImdbId());
moviesResponse.close();

Note that the RESTEasy client API is based on the Apache HttpClient.

Also note that, after each operation, we’ll need to close the response before we can perform a new operation. This is necessary because, by default, the client only has a single HTTP connection available.

Finally, note how we’re working with the DTOs directly – we’re not dealing with the marshal/unmarshal logic to and from JSON or XML; that happens behind the scenes using JAXB or Jackson since the Movie class was properly annotated.

3.4. The Request Creation with Connection Pool

One note from the previous example was that we only had a single connection available. If – for example, we try to do:

Response batmanResponse = proxy.addMovie(batmanMovie);
Response transformerResponse = proxy.addMovie(transformerMovie);

without invoking close() on batmanResponse – an exception will be thrown when the second line is executed:

java.lang.IllegalStateException:
Invalid use of BasicClientConnManager: connection still allocated.
Make sure to release the connection before allocating another one.

Again – this simply happens because the default HttpClient used by RESTEasy is org.apache.http.impl.conn.SingleClientConnManager – which of course only makes a single connection available.

Now – to work around that limitation – the ResteasyClient instance must be created differently (with a connection pool):

PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
cm.setMaxTotal(200); // Increase max total connection to 200
cm.setDefaultMaxPerRoute(20); // Increase default max connection per route to 20
CloseableHttpClient httpClient = HttpClients.custom().setConnectionManager(cm).build();
ApacheHttpClient4Engine engine = new ApacheHttpClient4Engine(httpClient);

ResteasyClient client = new ResteasyClientBuilder().httpEngine(engine).build();
ResteasyWebTarget target = client.target(UriBuilder.fromPath(path));
ServicesInterface proxy = target.proxy(ServicesInterface.class);

Now we can benefit from a proper connection pool and can have multiple requests running through our client without necessarily having to release the connection each time.
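To see the difference, here’s a minimal sketch repeating the two back-to-back calls from earlier – with the pooled engine in place, both can be open at once (closing each Response is still good practice, as it returns the connection to the pool):

Response batmanResponse = proxy.addMovie(batmanMovie);
Response transformerResponse = proxy.addMovie(transformerMovie);

// no IllegalStateException this time - each call leases its own connection from the pool
batmanResponse.close();
transformerResponse.close();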

4. Conclusion

In this quick tutorial we introduced the RESTEasy Proxy Framework and we built a super simple client API with it.

The framework gives us a few more helper methods to configure a client and can be defined as the mirror opposite of the JAX-RS server-side specifications.

The example used in this article is available as a sample project in GitHub.


Hiring a New Technical Editor for Baeldung


Last August, I published this article here on Baeldung – it was the first job description I ever published on the site.

I got quite a few solid applications and made a hire relatively quickly; he’s been working with me ever since as the Technical Editor of Baeldung.

Now, it’s time to do it again – and again, I’m reaching out to the community.

I’m looking for a part-time Technical Editor to help and work with new authors.

Budget and Time Commitment

In a typical week there are either 3 new articles of roughly 1250 words or 4 of roughly 1000 words. And the typical article will take about 2 rounds of review until it’s ready to go.

All of this usually takes about 30 to 45 minutes of work for a small to medium article and can take 60 to 90 minutes for larger pieces.

Overall, you’ll spend somewhere around 12-14 hours / month. 

The budget for the position is $600 / month.

Who is the right candidate?

First – you need to be a developer yourself, working or actively involved in the Java and Spring ecosystem. All of these articles are code-centric, so being in the trenches and able to code is instrumental.

Second – you need to have your own technical site / blog in the Java ecosystem (or have some similar experience). This site cannot be super-small – a 3-post blog is really not enough to do a good evaluation.

Finally – and it almost goes without saying – you should have a good command of the English language.

What Will You Be Doing?

You’re going to work with authors, review their new article drafts and provide helpful feedback.

The goal is to generally make sure that the article hits a high level of quality before it gets published. More specifically – articles should match the Baeldung formatting, code and style guidelines.

Beyond formatting and style, articles should be code-focused, clean and easy to understand. Sometimes an article is almost there, but not quite – and the author needs to be guided towards a better solution, or a better way of explaining some specific concept.

Apply

If you think you’re well suited for this work, I’d love to work with you to help grow Baeldung.

Email me at eugen@baeldung.com with your details.

Cheers,

Eugen. 

Using Apache Camel with Spring


1. Overview

This article will demonstrate how to configure and use Apache Camel with Spring.

Apache Camel provides quite a lot of useful components that support libraries such as JPA, Hibernate, FTP, Apache-CXF, AWS-S3 and of course many others – all to help integrate data between two different systems.

For example, using the Hibernate and Apache CXF components, you could pull data from a database and send it to another system over REST API calls.

In this tutorial, we’ll go over a simple Camel example – reading a file and converting its contents to uppercase and then back to lowercase. We’re going to use Camel’s File component and Spring 4.2.

Here are the full details of the example:

  1. Read file from source directory
  2. Convert file content to uppercase using a custom Processor
  3. Write converted output to a destination directory
  4. Convert file content to lowercase using Camel Translator
  5. Write converted output to a destination directory

2. Add Dependencies

To use Apache Camel with Spring, you will need the following dependencies in your POM file:

<properties>
    <env.camel.version>2.16.1</env.camel.version>
    <env.spring.version>4.2.4.RELEASE</env.spring.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-core</artifactId>
        <version>${env.camel.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-spring</artifactId>
        <version>${env.camel.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-stream</artifactId>
        <version>${env.camel.version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-context</artifactId>
        <version>${env.spring.version}</version>
    </dependency>
</dependencies>

So, we have:

  • camel-core – the main dependency for Apache Camel
  • camel-spring – enables us to use Camel with Spring
  • camel-stream – an optional dependency, which you can use (for example) to display some messages on the console while routes are running
  • spring-context – the standard Spring dependency, required in our case as we are going to run Camel routes in a Spring context

3. Spring Camel Context

First, we’ll create the Spring Config file where we will later define our Camel routes.

Notice how the file contains all required Apache Camel and Spring namespaces and schema locations:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
        xmlns:camel="http://camel.apache.org/schema/spring"
	xmlns:util="http://www.springframework.org/schema/util"
	xsi:schemaLocation="http://www.springframework.org/schema/beans 
          http://www.springframework.org/schema/beans/spring-beans-4.2.xsd	
          http://camel.apache.org/schema/spring 
          http://camel.apache.org/schema/spring/camel-spring.xsd
          http://www.springframework.org/schema/util 
          http://www.springframework.org/schema/util/spring-util-4.2.xsd">

	<camelContext xmlns="http://camel.apache.org/schema/spring">
            <!-- Add routes here -->
	</camelContext>

</beans>

The <camelContext> element represents (unsurprisingly) the Camel context, which can be compared to a Spring application context. Now your context file is ready to start defining Camel routes.

3.1. Camel Route with Custom Processor

Next we’ll write our first route to convert file content to uppercase.

We need to define a source from which the route will read data. This can be a database, file, console, or any number of other sources. In our case, it will be a file.

Then we need to define the processor of the data that will be read from the source. For this example, we are going to write a custom processor class. This class will be a Spring bean which will implement the standard Camel Processor Interface.

Once the data is processed, we need to tell the route where to direct the processed data. Once again, this could be one of a wide variety of outputs, such as a database, file, or the console. In our case, we are going to store it in a file.

To set up these steps, including the input, processor, and output, add the following route to the Camel context file:

<route>
    <from uri="file://data/input" /> <!-- INPUT -->
    <process ref="myFileProcessor" /> <!-- PROCESS -->
    <to uri="file://data/outputUpperCase" /> <!-- OUTPUT -->
</route>

Additionally, we must define the myFileProcessor bean:

<bean id="myFileProcessor" class="com.baeldung.camel.FileProcessor" /> <!-- our custom processor; the package name here is illustrative -->

3.2. Custom Uppercase Processor

Now we need to create the custom file processor we defined in our bean. It must implement the Camel Processor interface, defining a single process method, which takes an Exchange object as its input. This object provides the details of the data from the input source.

Our method must read the message from the Exchange, uppercase the content, and then set that new content back into the Exchange object:

public class FileProcessor implements Processor {

    public void process(Exchange exchange) throws Exception {
        String originalFileContent = exchange.getIn().getBody(String.class);
        String upperCaseFileContent = originalFileContent.toUpperCase();
        exchange.getIn().setBody(upperCaseFileContent);
    }
}

This process method will be executed for every input received from the source.

3.3. Lowercase Processor

Now we will add another output to our Camel route. This time, we will convert the same input file’s data into lowercase; however, we will not use a custom processor. Instead, we will use Apache Camel’s Message Translator feature. This is the updated Camel route:

<route>
    <from uri="file://data/input" />
    <process ref="myFileProcessor" />
    <to uri="file://data/outputUpperCase" />
    <transform>
        <simple>${body.toLowerCase()}</simple>
    </transform>
    <to uri="file://data/outputLowerCase" />
</route>

4. Running the Application

In order to have our routes be processed, we simply need to load the Camel context file into a Spring application context:

ClassPathXmlApplicationContext applicationContext = 
  new ClassPathXmlApplicationContext("camel-context.xml");

Once the route has been run successfully, two files will have been created: one with uppercase content, and one with lowercase content.
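Keep in mind that the file route polls on background threads, so a minimal sketch of a main method would keep the Spring context alive for a moment before closing it (the five-second pause is an arbitrary choice for this example):

// give the route time to poll and process the input files
Thread.sleep(5000);

// stops the Camel routes together with the Spring context
applicationContext.close();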

5. Conclusion

If you’re doing integration work, Apache Camel can definitely make things easier. The library provides plug-and-play components that will help you reduce boilerplate code and focus on the main logic of processing data.

And if you want to explore the Enterprise Integration Patterns concepts in detail, you should have a look at this book written by Gregor Hohpe and Bobby Woolf, who conceptualize the EIPs very cleanly.

The example described in this article is available in a project on GitHub.


A Guide to the Java ExecutorService


1. Overview

ExecutorService is a framework provided by the JDK which simplifies the execution of tasks in asynchronous mode. Generally speaking, ExecutorService automatically provides a pool of threads and an API for assigning tasks to it.

2. Instantiating ExecutorService 

2.1. Factory Methods of the Executors Class

The easiest way to create an ExecutorService is to use one of the factory methods of the Executors class.

For example, the following line of code will create a thread-pool with 10 threads:

ExecutorService executor = Executors.newFixedThreadPool(10);

There are several other factory methods that create a predefined ExecutorService to meet specific use cases. To find the best method for your needs, consult Oracle’s official documentation.
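For instance, here are two of the other commonly used factory methods – a cached pool that grows and shrinks on demand, and a single-threaded executor that processes tasks sequentially:

// creates new threads as needed and reuses idle ones
ExecutorService cachedPool = Executors.newCachedThreadPool();

// executes tasks one at a time, in submission order
ExecutorService singleThreadExecutor = Executors.newSingleThreadExecutor();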

2.2. Directly Create an ExecutorService

Because ExecutorService is an interface, an instance of any of its implementations can be used. There are several implementations to choose from in the java.util.concurrent package, or you can create your own.

For example, the ThreadPoolExecutor class has a few constructors which can be used to configure an executor service and its internal pool.

ExecutorService executorService = 
  new ThreadPoolExecutor(1, 1, 0L, TimeUnit.MILLISECONDS,   
  new LinkedBlockingQueue<Runnable>());

You may notice that the code above is very similar to the source code of the factory method newSingleThreadExecutor(). For most cases, detailed manual configuration isn’t necessary.

3. Assigning Tasks to the ExecutorService

ExecutorService can execute Runnable and Callable tasks. To keep things simple in this article, two primitive tasks will be used. Notice that lambda expressions are used here instead of anonymous inner classes:

Runnable runnableTask = () -> {
    try {
        TimeUnit.MILLISECONDS.sleep(300);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
};

Callable<String> callableTask = () -> {
    TimeUnit.MILLISECONDS.sleep(300);
    return "Task's execution";
};

List<Callable<String>> callableTasks = new ArrayList<>();
callableTasks.add(callableTask);
callableTasks.add(callableTask);
callableTasks.add(callableTask);

Tasks can be assigned to the ExecutorService using several methods, including execute(), which is inherited from the Executor interface, and also submit(), invokeAny(), invokeAll(). 

The execute() method is void, and it doesn’t give us any way to get the result of the task’s execution or to check the task’s status (is it running or executed).

executorService.execute(runnableTask);

submit() submits a Callable or a Runnable task to an ExecutorService and returns a result of type Future.

Future<String> future = 
  executorService.submit(callableTask);

invokeAny() assigns a collection of tasks to an ExecutorService, causing each to be executed, and returns the result of a successful execution of one task (if there was a successful execution).

String result = executorService.invokeAny(callableTasks);

invokeAll() assigns a collection of tasks to an ExecutorService, causing each to be executed, and returns the result of all task executions in the form of a list of objects of type Future.

List<Future<String>> futures = executorService.invokeAll(callableTasks);
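We can then walk the returned list and block on each result in turn – a quick sketch (the checked InterruptedException and ExecutionException declared by get() are omitted here, as in the other snippets):

for (Future<String> futureResult : futures) {
    // blocks until this particular task has completed
    String taskResult = futureResult.get();
}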

Now, before going any further, two more things must be discussed: shutting down an ExecutorService and dealing with Future return types.

4. Shutting Down an ExecutorService

In general, the ExecutorService will not be automatically destroyed when there are no tasks to process. It will stay alive and wait for new work to do.

In some cases this is very helpful; for example, if an app needs to process tasks which appear on an irregular basis, or the quantity of these tasks is not known at compile time.

On the other hand, an app could reach its end, but it will not be stopped because a waiting ExecutorService will cause the JVM to keep running.

To properly shut down an ExecutorService, we have the shutdown() and shutdownNow() APIs.

The shutdown() method doesn’t cause an immediate destruction of the ExecutorService. It will make the ExecutorService stop accepting new tasks and shut down after all running threads finish their current work.

executorService.shutdown();

The shutdownNow() method tries to destroy the ExecutorService immediately, but it doesn’t guarantee that all the running threads will be stopped at the same time. This method returns a list of tasks which are waiting to be processed. It is up to the developer to decide what to do with these tasks.

List<Runnable> notExecutedTasks = executorService.shutdownNow();

One good way to shut down the ExecutorService (which is also recommended by Oracle) is to use both of these methods combined with the awaitTermination() method. With this approach, the ExecutorService will first stop taking new tasks, then wait up to a specified period of time for all tasks to be completed. If that time expires, the execution is stopped immediately:

executorService.shutdown();
try {
    if (!executorService.awaitTermination(800, TimeUnit.MILLISECONDS)) {
        executorService.shutdownNow();
    } 
} catch (InterruptedException e) {
    executorService.shutdownNow();
}

5. The Future Interface

The submit() and invokeAll() methods return an object or a collection of objects of type Future, which allows us to get the result of a task’s execution or to check the task’s status (is it running or executed).

The Future interface provides a special blocking method get(), which returns an actual result of the Callable task’s execution, or null in the case of a Runnable task. Calling the get() method while the task is still running will cause execution to block until the task is properly executed and the result is available.

Future<String> future = executorService.submit(callableTask);
String result = null;
try {
    result = future.get();
} catch (InterruptedException | ExecutionException e) {
    e.printStackTrace();
}

With very long blocking caused by the get() method, an application’s performance can degrade. If the resulting data is not crucial, it is possible to avoid such a problem by using timeouts:

String result = future.get(200, TimeUnit.MILLISECONDS);

If the execution period is longer than specified (in this case 200 milliseconds), a TimeoutException will be thrown.

The isDone() method can be used to check if the assigned task is already processed or not.
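For example, a non-blocking status check is as simple as:

Future<String> future = executorService.submit(callableTask);

// returns immediately; true once the task has completed, normally or otherwise
boolean isDone = future.isDone();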

The Future interface also provides for the cancellation of task execution with the cancel() method, and to check the cancellation with isCancelled() method:

boolean canceled = future.cancel(true);
boolean isCancelled = future.isCancelled();

6. The ScheduledExecutorService Interface

The ScheduledExecutorService runs tasks after some predefined delay and/or periodically. Once again, the best way to instantiate a ScheduledExecutorService is to use the factory methods of the Executors class.

For this section, a ScheduledExecutorService with one thread will be used:

ScheduledExecutorService executorService = Executors
  .newSingleThreadScheduledExecutor();

To schedule a single task’s execution after a fixed delay, use the schedule() method of the ScheduledExecutorService. There are two schedule() methods that allow you to execute Runnable or Callable tasks:

Future<String> resultFuture = 
  executorService.schedule(callableTask, 1, TimeUnit.SECONDS);

The code above delays for one second before executing callableTask. The scheduleAtFixedRate() method, by contrast, lets us execute a task periodically after a fixed initial delay.

The following block of code will execute a task after an initial delay of 100 milliseconds, and after that it will execute the same task every 450 milliseconds. If the processor needs more time to execute an assigned task than the period parameter of the scheduleAtFixedRate() method, the ScheduledExecutorService will wait until the current task is completed before starting the next. Note that scheduleAtFixedRate() accepts only Runnable tasks:

Future<?> resultFuture = 
  executorService.scheduleAtFixedRate(runnableTask, 100, 450, TimeUnit.MILLISECONDS);

If it is necessary to have a fixed-length delay between iterations of the task, scheduleWithFixedDelay() should be used. For example, the following code will guarantee a 150-millisecond pause between the end of the current execution and the start of another one.

executorService.scheduleWithFixedDelay(runnableTask, 100, 150, TimeUnit.MILLISECONDS);

According to the scheduleAtFixedRate() and scheduleWithFixedDelay() method contracts, periodic execution of the task will end at the termination of the ExecutorService or if an exception is thrown during task execution.

7. ExecutorService vs. Fork/Join

After the release of Java 7, many developers decided that the ExecutorService framework should be replaced by the fork/join framework. This is not always the right decision, however. Despite the simplicity of usage and the frequent performance gains associated with fork/join, there is also a reduction in the amount of developer control over concurrent execution.

ExecutorService gives the developer the ability to control the number of generated threads and the granularity of tasks which should be executed by separate threads. The best use case for ExecutorService is the processing of independent tasks, such as transactions or requests according to the scheme “one thread for one task.”

In contrast, according to Oracle’s documentation, fork/join was designed to speed up work which can be broken into smaller pieces recursively.

8. Conclusion

Even despite the relative simplicity of ExecutorService, there are a few common pitfalls. Let’s summarize them:

Keeping an unused ExecutorService alive: There is a detailed explanation in section 4 of this article about how to shut down an ExecutorService; 

Wrong thread-pool capacity while using fixed length thread-pool: It is very important to determine how many threads the application will need to execute tasks efficiently. A thread-pool that is too large will cause unnecessary overhead just to create threads which mostly will be in waiting mode. Too few can make an application seem unresponsive because of long waiting periods for tasks in the queue;

Calling a Future‘s get() method after task cancellation: An attempt to get the result of an already cancelled task will trigger a CancellationException (see the sketch below).

Unexpectedly-long blocking with Future‘s get() method: Timeouts should be used to avoid unexpected waits.
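As a quick sketch of the cancellation pitfall, reusing the executorService and callableTask defined earlier:

Future<String> future = executorService.submit(callableTask);
future.cancel(true);

try {
    future.get();
} catch (CancellationException e) {
    // expected: get() always throws for a task that was cancelled
} catch (InterruptedException | ExecutionException e) {
    e.printStackTrace();
}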

The code for this article is available in a GitHub repository.



Java Web Weekly, Issue 113


At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> Spring Boot with Scala [java-allandsundry.com]

This is very cool if you’re into Scala.

I’m personally moving towards Clojure instead of Scala, but this still looks quite interesting to me.

>> Log management in Spring Boot [frankel.ch]

How to configure logging in Spring Boot (without having to go the native XML route) – very quick and to the point (much like Boot).

>> JUnit 5 – Basics [codefx.org]

Thought last week’s JUnit 5 post was all there was? Think again :)

I’m super excited about finally seeing some actual progress in the JUnit space, so this series should be fun to read and follow along with.

>> (Ab)using Java 8 FunctionalInterfaces as Local Methods [jooq.org]

As always, a very nice exploration of lambdas in Java 8.

And I can no longer call lambdas “new”, they’re now just an arrow in the quiver.

>> Java EE 8 MVC: Working with form parameters [mscharhag.com]

A quick writeup that continues to explore the mapping of parameters to Controllers in Java EE 8. Very quick and to the point.

>> JetBrains Releases Kotlin 1.0 [infoq.com]

Kotlin isn’t something I’ve done any actual work with myself. But it is the official 1.0 release of what looks to be a language with some interesting syntax choices – which doesn’t happen all that often.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Automate Deployment & Management of Docker Cloud/Virtual Java Microservices with DCHQ [infoq.com]

Docker containers, continuous delivery of microservices and Event Sourcing.

Yep, every word of that sentence is an overused buzzword, yet the article is solid and detailed – so it’s worth a careful read if you’re doing work in this area.

>> Bind Parameters for Database Queries [martinfowler.com]

This week’s installment of the “Web Security Basics” series – getting into some foundational aspects of input sanitization.

And of course, SQL injection is not the only game in town.

>> Default HotSpot Maximum Direct Memory Size [marxsoftware.com]

Getting into some of the low level JVM details of direct memory access and sizing.

Also worth reading:

3. Musings

>> Why do you write accessor methods? [codecentric.de]

A “back to basics” style exploration of a core object-oriented concept – accessor methods.

>> Splunk vs ELK: The Log Management Tools Decision Making Guide [takipi.com]

A solid guide to picking a log management tool and getting the most out of it.

>> Escaping the Legacy Skill Quicksand [daedtech.com]

Common sense advice about stepping up your tech skill game.

>> Controlling vehicle features of Nissan LEAFs across the globe via vulnerable APIs [troyhunt.com]

Hacking a connected car – this is scary stuff.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> You probably shouldn’t put your suggestions in form of questions [dilbert.com]

>> He’ll be under-communicating all day [dilbert.com]

>> It sounds better when you don’t do the math [dilbert.com]

 

5. Pick of the Week

I recently discovered a writer that I’m thoroughly enjoying:

>> The Gervais Principle, Or The Office According to “The Office” [ribbonfarm.com]

 



Introduction to JsonPath


1. Overview

One of the advantages of XML is the availability of processing tools – including XPath – defined as W3C standards. For JSON, a similar tool called JSONPath has emerged.

This article will give an introduction to Jayway JsonPath, a Java implementation of the JSONPath specification. It describes setup, syntax, common APIs, and a demonstration of use cases.

2. Setup

In order to use JsonPath, we simply need to include a dependency in the Maven pom:

<dependency>
    <groupId>com.jayway.jsonpath</groupId>
    <artifactId>json-path</artifactId>
    <version>2.1.0</version>
</dependency>

3. Syntax

The following JSON structure will be used in this section to demonstrate the syntax and APIs of JsonPath:

{
    "tool": 
    {
        "jsonpath": 
        {
            "creator": 
            {
                "name": "Jayway Inc.",
                "location": 
                [
                    "Malmo",
                    "San Francisco",
                    "Helsingborg"
                ]
            }
        }
    },

    "book": 
    [
        {
            "title": "Beginning JSON",
            "price": 49.99
        },

        {
            "title": "JSON at Work",
            "price": 29.99
        }
    ]
}

3.1. Notation

JsonPath uses special notation to represent nodes and their connections to adjacent nodes in a JsonPath path. There are two styles of notation, namely dot and bracket.

Both of the following paths refer to the same node in the above JSON document: the third element within the location field of the creator node, which is a child of the jsonpath object belonging to tool under the root node.

With dot notation:

$.tool.jsonpath.creator.location[2]

With bracket notation:

$['tool']['jsonpath']['creator']['location'][2]

The dollar sign ($) represents the root member object.
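For instance, assuming the sample document above is stored in a String named jsonDataSourceString (the variable name used later in this article), that node can be read with a one-liner:

String location = JsonPath.read(jsonDataSourceString, "$.tool.jsonpath.creator.location[2]");
// location is "Helsingborg"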

3.2. Operators

We have several helpful operators in JsonPath:

Root node ($): This symbol denotes the root member of a JSON structure, no matter whether it is an object or an array. Its usage examples were included in the previous sub-section.

Current node (@): Represents the node that is being processed, mostly used as part of input expressions for predicates. Suppose we are dealing with the book array in the above JSON document; the expression book[?(@.price == 49.99)] refers to the first book in that array.

Wildcard (*): Expresses all elements within the specified scope. For instance, book[*] indicates all nodes inside a book array.

3.3. Functions and Filters

JsonPath also has functions that can be appended to the end of a path to synthesize that path’s output: min(), max(), avg(), stddev() and length().

Finally – we have filters; these are boolean expressions to restrict returned lists of nodes to only those that calling methods need.

A few examples are equality (==), regular expression matching (=~), inclusion (in), check for emptiness (empty). Filters are mainly used for predicates.
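As a quick illustration of a function and a filter side by side – again assuming the sample document is held in a String named jsonDataSourceString:

// length() counts the elements of the book array
int bookCount = JsonPath.read(jsonDataSourceString, "$.book.length()");

// the filter keeps only the books cheaper than 30.00
List<Map<String, Object>> cheaperBooks = JsonPath.read(jsonDataSourceString, "$.book[?(@.price < 30.00)]");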

For a full list and detailed explanations of the different operators, functions and filters, please refer to the JsonPath GitHub project.

4. Operations

Before we get into operations, a quick side-note – this section makes use of the JSON example structure we defined earlier.

4.1. Access to Documents

JsonPath has a convenient way to access JSON documents, which is through static read APIs:

<T> T JsonPath.read(String jsonString, String jsonPath, Predicate... filters);

The read APIs can work with static fluent APIs to provide more flexibility:

<T> T JsonPath.parse(String jsonString).read(String jsonPath, Predicate... filters);

Other overloaded variants of read can be used for different types of JSON sources, including Object, InputStream, URL and File.
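For example, a document can be parsed straight from a file – a small sketch assuming a books.json file holding the sample document exists on disk (note that parse(File) declares IOException, whose handling is omitted here):

File jsonFile = new File("books.json");
List<String> titles = JsonPath.parse(jsonFile).read("$.book[*].title");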

To make things simple, the test for this part does not include predicates in the parameter list (empty varargs); predicates will be discussed in later sub-sections.

Let’s start by defining two sample paths to work on:

String jsonpathCreatorNamePath = "$['tool']['jsonpath']['creator']['name']";
String jsonpathCreatorLocationPath = "$['tool']['jsonpath']['creator']['location'][*]";

Next, we will create a DocumentContext object by parsing the given JSON source jsonDataSourceString. The newly created object will then be used to read content using the paths defined above:

DocumentContext jsonContext = JsonPath.parse(jsonDataSourceString);
String jsonpathCreatorName = jsonContext.read(jsonpathCreatorNamePath);
List<String> jsonpathCreatorLocation = jsonContext.read(jsonpathCreatorLocationPath);

The first read API returns a String containing the name of JsonPath creator, while the second returns a list of its addresses. And we’ll use the JUnit Assert API to confirm the methods work as expected:

assertEquals("Jayway Inc.", jsonpathCreatorName);
assertThat(jsonpathCreatorLocation.toString(), containsString("Malmo"));
assertThat(jsonpathCreatorLocation.toString(), containsString("San Francisco"));
assertThat(jsonpathCreatorLocation.toString(), containsString("Helsingborg"));

4.2. Predicates

Now that we’re done with the basics, let’s define a new JSON example to work on and illustrate the creation and usage of predicates:

{
    "book": 
    [
        {
            "title": "Beginning JSON",
            "author": "Ben Smith",
            "price": 49.99
        },

        {
            "title": "JSON at Work",
            "author": "Tom Marrs",
            "price": 29.99
        },

        {
            "title": "Learn JSON in a DAY",
            "author": "Acodemy",
            "price": 8.99
        },

        {
            "title": "JSON: Questions and Answers",
            "author": "George Duckett",
            "price": 6.00
        }
    ],

    "price range": 
    {
        "cheap": 10.00,
        "medium": 20.00
    }
}

Predicates determine true or false input values for filters to narrow down returned lists to only matched objects or arrays. A Predicate may easily be integrated into a Filter by using it as an argument for its static factory method. The requested content can then be read out of a JSON string using that Filter:

Filter expensiveFilter = Filter.filter(Criteria.where("price").gt(20.00));
List<Map<String, Object>> expensive = JsonPath.parse(jsonDataSourceString)
  .read("$['book'][?]", expensiveFilter);
predicateUsageAssertionHelper(expensive);

We may also define our own customized Predicate and use it as an argument for the read API:

Predicate expensivePredicate = new Predicate() {
    public boolean apply(PredicateContext context) {
        String value = context.item(Map.class).get("price").toString();
        return Float.valueOf(value) > 20.00;
    }
};
List<Map<String, Object>> expensive = JsonPath.parse(jsonDataSourceString)
  .read("$['book'][?]", expensivePredicate);
predicateUsageAssertionHelper(expensive);

Finally, a predicate may be applied directly to the read API without creating any objects – an approach known as an inline predicate:

List<Map<String, Object>> expensive = JsonPath.parse(jsonDataSourceString)
  .read("$['book'][?(@['price'] > $['price range']['medium'])]");
predicateUsageAssertionHelper(expensive);

All three of the Predicate examples above are verified with the help of the following assertion helper method:

private void predicateUsageAssertionHelper(List<?> predicate) {
    assertThat(predicate.toString(), containsString("Beginning JSON"));
    assertThat(predicate.toString(), containsString("JSON at Work"));
    assertThat(predicate.toString(), not(containsString("Learn JSON in a DAY")));
    assertThat(predicate.toString(), not(containsString("JSON: Questions and Answers")));
}

5. Configuration

5.1. Options

Jayway JsonPath provides several options to tweak the default configuration:

  • Option.AS_PATH_LIST: Returns paths of the evaluation hits instead of their values.
  • Option.DEFAULT_PATH_LEAF_TO_NULL: Returns null for missing leaves.
  • Option.ALWAYS_RETURN_LIST: Returns a list even when the path is definite.
  • Option.SUPPRESS_EXCEPTIONS: Makes sure no exceptions are propagated from path evaluation.
  • Option.REQUIRE_PROPERTIES: Requires properties defined in path when an indefinite path is evaluated.

Here is how Option is applied from scratch:

Configuration configuration = Configuration.builder().options(Option.<OPTION>).build();

and how to add it to an existing configuration:

Configuration newConfiguration = configuration.addOptions(Option.<OPTION>);
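As a brief sketch of one of these options in action (reusing the jsonDataSourceString from earlier), SUPPRESS_EXCEPTIONS turns the PathNotFoundException raised for a missing leaf into a null result:

Configuration lenientConfiguration = Configuration.builder()
  .options(Option.SUPPRESS_EXCEPTIONS).build();

// without the option, this read would throw PathNotFoundException
String missing = JsonPath.using(lenientConfiguration)
  .parse(jsonDataSourceString).read("$['tool']['nonexistent']");
// missing is null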

5.2. SPIs

JsonPath’s default configuration with the help of Option should be enough for the majority of tasks. However, users with more complex use cases are able to modify the behavior of JsonPath according to their specific requirements – using three different SPIs:

  • JsonProvider SPI: Lets us change the ways JsonPath parses and handles JSON documents
  • MappingProvider SPI: Allows for customization of bindings between node values and returned object types
  • CacheProvider SPI: Adjusts the manners that paths are cached, which can help to increase performance

6. Example Use Cases

Now that we have a good understanding of the functionality that JsonPath can be used for – let’s look at an example.

This section illustrates dealing with JSON data returned from a web service – assume we have a movie information service, which returns the following structure:

[
    {
        "id": 1,
        "title": "Casino Royale",
        "director": "Martin Campbell",
        "starring": 
        [
            "Daniel Craig",
            "Eva Green"
        ],
        "desc": "Twenty-first James Bond movie",
        "release date": 1163466000000,
        "box office": 594275385
    },

    {
        "id": 2,
        "title": "Quantum of Solace",
        "director": "Marc Forster",
        "starring": 
        [
            "Daniel Craig",
            "Olga Kurylenko"
        ],
        "desc": "Twenty-second James Bond movie",
        "release date": 1225242000000,
        "box office": 591692078
    },

    {
        "id": 3,
        "title": "Skyfall",
        "director": "Sam Mendes",
        "starring": 
        [
            "Daniel Craig",
            "Naomie Harris"
        ],
        "desc": "Twenty-third James Bond movie",
        "release date": 1350954000000,
        "box office": 1110526981
    },

    {
        "id": 4,
        "title": "Spectre",
        "director": "Sam Mendes",
        "starring": 
        [
            "Daniel Craig",
            "Lea Seydoux"
        ],
        "desc": "Twenty-fourth James Bond movie",
        "release date": 1445821200000,
        "box office": 879376275
    }
]

Here, the value of the release date field is the duration since the Epoch in milliseconds, and box office is the movie’s cinema revenue in US dollars.

We are going to handle five different working scenarios related to GET requests, supposing that the above JSON hierarchy has been extracted and stored in a String variable named jsonString.

6.1. Getting Object Data Given IDs

In this use case, a client requests detailed information on a specific movie by providing the server with the exact id of that one. This example demonstrates how the server looks for requested data before returning to the client.

Say we need to find a record with id equal to 2. Below is how the process is implemented and tested.

The first step is to pick up the correct data object:

Object dataObject = JsonPath.parse(jsonString).read("$[?(@.id == 2)]");
String dataString = dataObject.toString();

The JUnit Assert API confirms the existence of several fields:

assertThat(dataString, containsString("2"));
assertThat(dataString, containsString("Quantum of Solace"));
assertThat(dataString, containsString("Twenty-second James Bond movie"));

6.2. Getting the Movie Title Given Starring

Let’s say we want to look for a movie starring an actress called Eva Green. The server needs to return the title of the movie whose starring array includes Eva Green.

The succeeding test will illustrate how to do that and validate the returned result:

@Test
public void givenStarring_whenRequestingMovieTitle_thenSucceed() {
    List<Map<String, Object>> dataList = JsonPath.parse(jsonString)
      .read("$[?('Eva Green' in @['starring'])]");
    String title = (String) dataList.get(0).get("title");

    assertEquals("Casino Royale", title);
}

6.3. Calculation of the Total Revenue

This scenario makes use of a JsonPath function called length() to figure out the number of movie records, in order to calculate the total revenue of all the movies. The implementation and testing are demonstrated as follows:

@Test
public void givenCompleteStructure_whenCalculatingTotalRevenue_thenSucceed() {
    DocumentContext context = JsonPath.parse(jsonString);
    int length = context.read("$.length()");
    long revenue = 0;
    for (int i = 0; i < length; i++) {
        revenue += context.read("$[" + i + "]['box office']", Long.class);
    }

    assertEquals(594275385L + 591692078L + 1110526981L + 879376275L, revenue);
}

6.4. Highest Revenue Movie

This use case exemplifies the usage of a non-default JsonPath configuration option – namely Option.AS_PATH_LIST – to find the movie with the highest revenue. The particular steps are described below.

At first, we need to extract a list of all the movies’ box office revenue, then convert it to an array for sorting:

DocumentContext context = JsonPath.parse(jsonString);
List<Object> revenueList = context.read("$[*]['box office']");
Integer[] revenueArray = revenueList.toArray(new Integer[0]);
Arrays.sort(revenueArray);

The highestRevenue variable may easily be picked up from the sorted revenueArray, then used for working out the path to the movie record with the highest revenue:

int highestRevenue = revenueArray[revenueArray.length - 1];
Configuration pathConfiguration = Configuration.builder().options(Option.AS_PATH_LIST).build();
List<String> pathList = JsonPath.using(pathConfiguration).parse(jsonString)
  .read("$[?(@['box office'] == " + highestRevenue + ")]");

Based on that calculated path, the title of the corresponding movie can be determined and returned:

Map<String, String> dataRecord = context.read(pathList.get(0));
String title = dataRecord.get("title");

The whole process is verified by the Assert API:

assertEquals("Skyfall", title);

6.5. Latest Movie of a Director

This example will illustrate how to figure out the latest movie directed by Sam Mendes.

To begin with, a list of all the movies directed by Sam Mendes is created:

DocumentContext context = JsonPath.parse(jsonString);
List<Map<String, Object>> dataList = context.read("$[?(@.director == 'Sam Mendes')]");

That list is used for extraction of release dates. Those dates will be stored in an array and then sorted:

List<Object> dateList = new ArrayList<>();
for (Map<String, Object> item : dataList) {
    Object date = item.get("release date");
    dateList.add(date);
}
Long[] dateArray = dateList.toArray(new Long[0]);
Arrays.sort(dateArray);

The latestTime variable, which is the last element of the sorted array, is used in combination with the director field’s value to determine the title of the requested movie:

long latestTime = dateArray[dateArray.length - 1];
List<Map<String, Object>> finalDataList = context.read("$[?(@['director'] 
  == 'Sam Mendes' && @['release date'] == " + latestTime + ")]");
String title = (String) finalDataList.get(0).get("title");

The following assertion proves that everything works as expected:

assertEquals("Spectre", title);

7. Conclusion

This tutorial has covered fundamental features of Jayway JsonPath – a powerful tool to traverse and parse JSON documents.

Although JsonPath has some drawbacks, such as a lack of operators for reaching parent or sibling nodes, it can be highly useful in a lot of scenarios.

The implementation of all these examples and code snippets can be found in a GitHub project.




Java Web Weekly, Issue 114


At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> Writing Unit Tests With Spock Framework: Introduction to Specifications, Part Three [petrikainulainen.net]

This article continues to explore testing with Spock, this time with a close look at specifications.

>> Parallel execution of blocking tasks with RxJava and Completable [solidsoft.wordpress.com]

RxJava is definitely a powerful tool and quite a nice API. Here’s a practical writeup showing some real-world scenarios of how to use it.

>> Oracle’s OpenJDK Cleanup of “Unsafe” Implementation [infoq.com]

A short update on what’s happening with Unsafe in Java 9.

>> How to Support Java 6, 8, 9 in a Single API [jooq.org]

Very interesting approach to supporting multiple Java versions in a public API. If you’re building or maintaining a public API – definitely worth checking out.

And, as a side-note – if you’re into marketing – this is a nice example of being smart about the way you produce content that supports your product.

>> How to combine the Hibernate assigned generator with a sequence or an identity column [vladmihalcea.com]

The identity of an entity is a lot more complex than just slapping an @Id on and calling it a day.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Sensible mutation testing: don’t go on a killing spree [codecentric.de]

Mutation testing makes the bogus metric that is code coverage slightly less bogus. It looks easy enough to set up, so I’ll definitely give this one a try.

>> How Not To Write Golden Master Tests [thecodewhisperer.com]

Like always, a solid deep-dive into the intricacies of getting to a well tested, easy to change system.

>> How to Detect and Analyze DDoS Attacks Using Log Analysis [loggly.com]

An interesting and certainly helpful look at how DDoS attacks work, how targets are usually picked and what you can do about it.

Hint – good logging can help see the pattern early. Reacting to it – well, that’s not as easy as just knowing it’s happening.

>> Should we use a coding standard? [devblog.avdi.org]

I’ve been in my fair share of coding standard discussions (let’s call them “discussions”) where I was trying to convince someone of something. It’s never fun and almost always unproductive – so I tend to approach this problem differently now (hint – I’m a lot more flexible than in my early days).

This writeup goes over some of that process and makes some really good points you can pick up and use when your team is pulling the trigger on a coding standard.

Also worth reading:

3. Musings

>> The Majestic Monolith [m.signalvnoise.com]

Monoliths have a bad rap. It’s really important to understand, though, where a monolith makes more sense and what kind of system really does need a microservice architecture.

That early decision has the clear potential of saving you many months of extra development work to get to where you need to be.

>> Prerequisites for Effective Code Review [daedtech.com]

Attempts at reviewing code are legion. Positive, useful code review cultures geared towards learning are few and far between.

And that’s definitely because the practice does require a few things to be in place in order to work well – not the least of which is some level of emotional maturity.

>> My next bet: VR is going to take off in the next 3 years… [lemire.me] and

>> Lost my bet: the PC isn’t dead… yet [lemire.me]

A couple of fun reads about how fast the general tech industry is moving forward.

>> How to Deploy Software [zachholman.com]

This isn’t a post, it’s a small book :)

It’s also an intelligent, clearly written writeup on what it takes to put your work out there and do it well.

Well worth reading if only to get rid of “deployment stress” (real medical condition) and 10x your chill factor when going to production.

>> InfrastructureAsCode [martinfowler.com]

A well known practice in the DevOps world, and hopefully outside of it as well.

I’m expecting this article to keep growing like the previous series here, following the super interesting Evolving Publication concept.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Stop everything you’re doing and build robots [dilbert.com]

>> We need to act more like a start-up [dilbert.com]

>> Studies show married people are happier [dilbert.com]

 

5. Pick of the Week

>> A Big Little Idea Called Legibility [ribbonfarm.com]

 



OAuth2 for a Spring REST API – Handle the Refresh Token in AngularJS


1. Overview

In this tutorial we’ll continue exploring the OAuth password flow that we started putting together in our previous article, and we’ll focus on how to handle the Refresh Token in an AngularJS app.

2. Access Token Expiration

First, remember that the client was obtaining an Access Token when the user was logging into the application:

function obtainAccessToken(params) {
    var req = {
        method: 'POST',
        url: "oauth/token",
        headers: {"Content-type": "application/x-www-form-urlencoded; charset=utf-8"},
        data: $httpParamSerializer(params)
    }
    $http(req).then(
        function(data) {
            $http.defaults.headers.common.Authorization= 'Bearer ' + data.data.access_token;
            var expireDate = new Date (new Date().getTime() + (1000 * data.data.expires_in));
            $cookies.put("access_token", data.data.access_token, {'expires': expireDate});
            window.location.href="index";
        },function() {
            console.log("error");
            window.location.href = "login";
        });   
}

Note how our Access Token is stored in a cookie which will expire based on when the token itself expires.

What’s important to understand is that the cookie itself is only used for storage and it doesn’t drive anything else in the OAuth flow. For example, the browser will never automatically send out the cookie to the server with requests.

Also note how we actually call this obtainAccessToken() function:

$scope.loginData = {
    grant_type:"password", 
    username: "", 
    password: "", 
    client_id: "fooClientIdPassword"
};

$scope.login = function() {   
    obtainAccessToken($scope.loginData);
}

3. The Proxy

We’re now going to have a Zuul proxy running in the front-end application and basically sitting between the front-end client and the Authorization Server.

Let’s configure the routes of the proxy:

zuul:
  routes:
    oauth:
      path: /oauth/**
      url: http://localhost:8081/spring-security-oauth-server/oauth

What’s interesting here is that we’re only proxying traffic to the Authorization Server and not anything else. We only really need the proxy to come in when the client is obtaining new tokens.

If you want to go over the basics of Zuul, have a quick read of the main Zuul article.

4. A Zuul Filter that does Basic Authentication

The first use of the proxy is simple – instead of revealing our app’s “client secret” in JavaScript, we will use a Zuul pre-filter to add an Authorization header to access token requests:

@Component
public class CustomPreZuulFilter extends ZuulFilter {
    @Override
    public Object run() {
        RequestContext ctx = RequestContext.getCurrentContext();
        if (ctx.getRequest().getRequestURI().contains("oauth/token")) {
            byte[] encoded;
            try {
                encoded = Base64.encode("fooClientIdPassword:secret".getBytes("UTF-8"));
                ctx.addZuulRequestHeader("Authorization", "Basic " + new String(encoded));
            } catch (UnsupportedEncodingException e) {
                logger.error("Error occured in pre filter", e);
            }
        }
        return null;
    }

    @Override
    public boolean shouldFilter() {
        return true;
    }

    @Override
    public int filterOrder() {
        return -2;
    }

    @Override
    public String filterType() {
        return "pre";
    }
}

Now keep in mind that this doesn’t add any extra security and the only reason we’re doing it is because the token endpoint is secured with Basic Authentication using client credentials.

From the point of view of the implementation the type of the filter is especially worth noticing. We’re using a filter type of “pre” to process the request before passing it on.

5. Put the Refresh Token in a Cookie

On to the fun stuff.

What we’re planning to do here is to have the client get the Refresh Token as a cookie. Not just a normal cookie, but a secured, HTTP-only cookie with a very limited path (/oauth/token).

We’ll set up a Zuul post-filter to extract Refresh Token from the JSON body of the response and set it in the cookie:

@Component
public class CustomPostZuulFilter extends ZuulFilter {
    private ObjectMapper mapper = new ObjectMapper();

    @Override
    public Object run() {
        RequestContext ctx = RequestContext.getCurrentContext();
        try {
            InputStream is = ctx.getResponseDataStream();
            String responseBody = IOUtils.toString(is, "UTF-8");
            if (responseBody.contains("refresh_token")) {
                Map<String, Object> responseMap = mapper.readValue(
                  responseBody, new TypeReference<Map<String, Object>>() {});
                String refreshToken = responseMap.get("refresh_token").toString();
                responseMap.remove("refresh_token");
                responseBody = mapper.writeValueAsString(responseMap);

                Cookie cookie = new Cookie("refreshToken", refreshToken);
                cookie.setHttpOnly(true);
                cookie.setSecure(true);
                cookie.setPath(ctx.getRequest().getContextPath() + "/oauth/token");
                cookie.setMaxAge(2592000); // 30 days
                ctx.getResponse().addCookie(cookie);
            }
            ctx.setResponseBody(responseBody);
        } catch (IOException e) {
            logger.error("Error occured in zuul post filter", e);
        }
        return null;
    }

    @Override
    public boolean shouldFilter() {
        return true;
    }

    @Override
    public int filterOrder() {
        return 10;
    }

    @Override
    public String filterType() {
        return "post";
    }
}

A few interesting things to understand here:

  • We used a Zuul post-filter to read the response and extract the refresh token
  • We removed the value of the refresh_token from the JSON response to make sure it’s never accessible to the front end outside of the cookie
  • We set the max age of the cookie to 30 days – as this matches the expiration time of the token

6. Get and Use the Refresh Token from the Cookie

Now that we have the Refresh Token in the cookie, when the front-end AngularJS application tries to trigger a token refresh, it’s going to send the request to /oauth/token – and so the browser will, of course, send that cookie along.

So we’ll now have another filter in the proxy that will extract the Refresh Token from the cookie and send it forward as an HTTP parameter – so that the request is valid:

public Object run() {
    RequestContext ctx = RequestContext.getCurrentContext();
    ...
    HttpServletRequest req = ctx.getRequest();
    String refreshToken = extractRefreshToken(req);
    if (refreshToken != null) {
        Map<String, String[]> param = new HashMap<String, String[]>();
        param.put("refresh_token", new String[] { refreshToken });
        param.put("grant_type", new String[] { "refresh_token" });
        ctx.setRequest(new CustomHttpServletRequest(req, param));
    }
    ...
}

private String extractRefreshToken(HttpServletRequest req) {
    Cookie[] cookies = req.getCookies();
    if (cookies != null) {
        for (Cookie cookie : cookies) {
            if (cookie.getName().equalsIgnoreCase("refreshToken")) {
                return cookie.getValue();
            }
        }
    }
    return null;
}

And here is our CustomHttpServletRequest – used to inject our refresh token parameters:

public class CustomHttpServletRequest extends HttpServletRequestWrapper {
    private Map<String, String[]> additionalParams;
    private HttpServletRequest request;

    public CustomHttpServletRequest(
      HttpServletRequest request, Map<String, String[]> additionalParams) {
        super(request);
        this.request = request;
        this.additionalParams = additionalParams;
    }

    @Override
    public Map<String, String[]> getParameterMap() {
        Map<String, String[]> map = request.getParameterMap();
        Map<String, String[]> param = new HashMap<String, String[]>();
        param.putAll(map);
        param.putAll(additionalParams);
        return param;
    }
}

Again, a lot of important implementation notes here:

  • The Proxy is extracting the Refresh Token from the Cookie
  • It’s then setting it into the refresh_token parameter
  • It’s also setting the grant_type to refresh_token
  • If there is no refreshToken cookie (either expired or first login) – then the Access Token request is passed through unchanged

7. Refreshing the Access Token from AngularJS

Finally, let’s modify our simple front-end application to actually make use of refreshing the token.

Here is our function refreshAccessToken():

$scope.refreshAccessToken = function() {
    obtainAccessToken($scope.refreshData);
}

And here is our $scope.refreshData:

$scope.refreshData = {grant_type:"refresh_token"};

Note how we’re simply using the existing obtainAccessToken function – and just passing different inputs to it.

Also notice that we’re not adding the refresh_token ourselves – as that’s going to be taken care of by the Zuul filter.

8. Conclusion

In this OAuth tutorial we learned how to store the Refresh Token in an AngularJS client application, how to refresh an expired Access Token and how to leverage the Zuul proxy for all of that.

The full implementation of this tutorial can be found in the GitHub project – this is an Eclipse-based project, so it should be easy to import and run as it is.

Just announced my upcoming "Learn Spring Security" Course:

>> CHECK OUT THE COURSE

The early-bird pricing will be available until the launch of the Starter Class.


Exploring SpringMVC’s Form Tag Library


1. Overview

In the first article of this series we introduced the use of the form tag library and how to bind data to a controller.

In this article, we’ll cover the various tags that Spring MVC provides to help us create and validate forms.

2. The input Tag

We’ll get started with the input tag. This tag renders an HTML input tag using the bound value and type=’text’ by default:

<form:input path="name" />

Starting with Spring 3.1, we can use other HTML5-specific types, such as email, date, and others. For example, if we want to create an email field, we can use type=’email’:

<form:input type="email" path="email" />

Similarly, to create a date field, we can use type=’date’, which will render a date picker in many HTML5-compatible browsers:

<form:input type="date" path="dateOfBirth" />

3. The password Tag

This tag renders an HTML input tag with type=’password’ using the bound value. This HTML input masks the value typed into the field:

<form:password path="password" />

4. The textarea Tag

This tag renders an HTML textarea:

<form:textarea path="notes" rows="3" cols="20"/>

We can specify the number of rows and columns in the same way we would for an HTML textarea.

5. The checkbox and checkboxes Tag

The checkbox tag renders an HTML input tag with type=’checkbox’. The form tag library provides a few variations of this tag, which should cover all of our checkbox needs:

<form:checkbox path="receiveNewsletter" />

The above example generates a classic single checkbox with a boolean value. If we set the bound value to true, this checkbox will be checked by default.

The following example generates multiple checkboxes. In this case, the checkbox values are hard-coded inside the JSP page:

Bird watching: <form:checkbox path="hobbies" value="Bird watching"/>
Astronomy: <form:checkbox path="hobbies" value="Astronomy"/>
Snowboarding: <form:checkbox path="hobbies" value="Snowboarding"/>

Here, the bound value is an array or a java.util.Collection:

String[] hobbies;

The checkboxes tag, on the other hand, is used to render multiple checkboxes, where the checkbox values are generated at runtime:

<form:checkboxes items="${favouriteLanguageItem}" path="favouriteLanguage" />

To generate the values we pass in an Array, a List or a Map containing the available options in the items property. We can initialize our values inside the controller:

List<String> favouriteLanguageItem = new ArrayList<String>();
favouriteLanguageItem.add("Java");
favouriteLanguageItem.add("C++");
favouriteLanguageItem.add("Perl");

Typically the bound property is a collection so it can hold multiple values selected by the user:

List<String> favouriteLanguage;

6. The radiobutton and radiobuttons Tag

This tag renders an HTML input tag with type=’radio’:

Male: <form:radiobutton path="sex" value="M"/>
Female: <form:radiobutton path="sex" value="F"/>

A typical usage pattern will involve multiple tag instances with different values bound to the same property:

private String sex;

Just like the checkboxes tag, the radiobuttons tag renders multiple HTML input tags with type=’radio’:

<form:radiobuttons items="${jobItem}" path="job" />

In this case, we pass in the available options as an Array, a List or a Map in the items property:

List<String> jobItem = new ArrayList<String>();
jobItem.add("Full time");
jobItem.add("Part time");

7. The select Tag

This tag renders an HTML select element:

<form:select path="country" items="${countryItems}" />

To generate the values we pass in an Array, a List or a Map containing the available options in the items property. Once again, we can initialize our values inside the controller:

Map<String, String> countryItems = new LinkedHashMap<String, String>();
countryItems.put("US", "United States");
countryItems.put("IT", "Italy");
countryItems.put("UK", "United Kingdom");
countryItems.put("FR", "France");

The select tag also supports the use of nested option and options tags.

While the option tag renders a single HTML option, the options tag renders a list of HTML option tags.

The options tag takes an Array, a List or a Map containing the available options in the items property, just like the select tag:

<form:select path="book">
    <form:option value="-" label="--Please Select--"/>
    <form:options items="${books}" />
</form:select>

When we need to select several items at once, we can create a multiple list box by adding the multiple=”true” attribute to the select tag:

<form:select path="fruit" items="${fruit}" multiple="true"/>

Here the bound property is an array or a java.util.Collection:

List<String> fruit;

8. The hidden Tag

This tag renders an HTML input tag with type=’hidden’ using the bound value:

<form:hidden path="id" value="12345" />

9. The errors Tag

Field error messages are generated by validators associated with the controller. We can use the errors tag to render those field error messages:

<form:errors path="name" cssClass="error" />

This will display errors for the field specified in the path property. The error messages are rendered within a span tag by default, with .errors appended to the path value as the id, and optionally a CSS class from the cssClass property, which can be used to style the output:

<span id="name.errors" class="error">Name is required!</span>

To enclose the error messages with a different element instead of the default span tag, we can specify the preferred element inside the element attribute:

<form:errors path="name" cssClass="error" element="div" />

This renders the error messages within a div element:

<div id="name.errors" class="error">Name is required!</div>

In addition to having the capability to show errors for a specific input element, we can display the entire list of errors (regardless of field) for a given page. This is achieved by the use of the wildcard *:

<form:errors path="*" />

9.1. The Validator

To display errors for a given field we need to define a validator:

public class PersonValidator implements Validator {

    @Override
    public boolean supports(Class<?> clazz) {
        return Person.class.isAssignableFrom(clazz);
    }

    @Override
    public void validate(Object obj, Errors errors) {
        ValidationUtils.rejectIfEmptyOrWhitespace(errors, "name", "required.name");
    }
}

In this case, if the name field is empty, the validator rejects it with the error code required.name, which is resolved to the corresponding message in the resource bundle.

The resource bundle is defined in the Spring XML configuration file as follows:

<bean class="org.springframework.context.support.ResourceBundleMessageSource" id="messageSource">
     <property name="basename" value="messages" />
</bean>

Or in a pure Java configuration style:

@Bean
public MessageSource messageSource() {
    ResourceBundleMessageSource messageSource = new ResourceBundleMessageSource();
    messageSource.setBasenames("messages");
    return messageSource;
}

The error message is defined inside the messages.properties file:

required.name = Name is required!

To apply this validation, we need to include a reference to the validator in our controller and call its validate method in the controller method that is invoked when the user submits the form:

@RequestMapping(value = "/addPerson", method = RequestMethod.POST)
public String submit(
  @ModelAttribute("person") Person person, 
  BindingResult result, 
  ModelMap modelMap) {

    validator.validate(person, result);

    if (result.hasErrors()) {
        return "personForm";
    }
    
    modelMap.addAttribute("person", person);
    return "personView";
}
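
Note that the validator field used above isn’t shown in the snippet; one simple way to wire it is to instantiate it directly in the controller – a sketch (injecting it as a Spring bean would work just as well):

@Controller
public class PersonController {

    // the custom validator invoked in submit() above
    private final PersonValidator validator = new PersonValidator();

    // request mappings ...
}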

9.2. JSR 303 Bean Validation

Starting from Spring 3, we can use JSR 303 (via the @Valid annotation) for bean validation. To do this, we need a JSR 303 validator framework on the classpath. We’ll use the Hibernate Validator (the reference implementation). Following is the dependency that we need to include in the POM:

<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-validator</artifactId>
    <version>5.1.1.Final</version>
</dependency>

To make Spring MVC support JSR 303 validation via the @Valid annotation, we need to enable the following in our Spring configuration file:

<mvc:annotation-driven/>

Or use the corresponding annotation @EnableWebMvc in a Java configuration:

@EnableWebMvc
@Configuration
public class ClientWebConfigJava extends WebMvcConfigurerAdapter {
    // All web configuration will go here
}

Next, we need to annotate the model object that we want to validate with the @Valid annotation in the controller method:

@RequestMapping(value = "/addPerson", method = RequestMethod.POST)
public String submit(
  @Valid @ModelAttribute("person") Person person, 
  BindingResult result, 
  ModelMap modelMap) {
 
    if(result.hasErrors()) {
        return "personForm";
    }
     
    modelMap.addAttribute("person", person);
    return "personView";
}

Now we can annotate the entity’s property to validate it with a Hibernate Validator annotation:

@NotEmpty
private String password;

By default, this annotation will display “may not be empty” if we leave the password input field empty.

We can override the default error message by creating a property in the resource bundle defined in the validator example. The key of the message follows the rule AnnotationName.entity.fieldname:

NotEmpty.person.password = Password is required!

10. Conclusion

In this tutorial we explored the various tags that Spring provides for working with forms.

We also had a look at the tag for displaying validation errors, as well as the configuration needed to show custom error messages.

All the examples above can be found in a GitHub project. This is an Eclipse-based project, so it should be easy to import and run as it is.

When the project runs locally, the form example can be accessed at:

http://localhost:8080/spring-mvc-xml/person

Java Web Weekly, Issue 115


I usually post about Dev stuff on Twitter - you can follow me there:

At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> Core container refinements in Spring Framework 4.3 [spring.io]

It’s always nice when a framework matures and gets easier to work with – and Spring is doing just that with the upcoming 4.3.

>> Enjoying Java and Being More Productive with IntelliJ IDEA [jetbrains.com]

A nice guide to what makes IntelliJ a good choice as your default IDE. Obviously promotional in nature, but a solid writeup nevertheless.

>> JUnit 5 Alpha Simplifies Unit Testing [infoq.com]

Short and to-the-point look at what’s happening with the new JUnit alpha.

>> Java 10’s new Local-Variable Type Inference [jooq.org]

If this gets implemented in Java 10, it will be a beautiful thing.

It also shows a degree of openness to community feedback that is rare for a language as mature and as well established as Java is 20 years in.

>> Using the TestNG ITestContext to create smarter REST Assured tests [ontestautomation.com]

Quick and highly practical writeup on using rest-assured to test an API and how to orchestrate the interaction with the Authorization Server in OAuth2.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Alternative Services [mnot.net]

The web never stops moving forward, and the standardization of these building blocks is really critical for this forward movement.

If you’re keeping an eye on this part of the ecosystem, definitely have a quick read.

Also worth reading:

3. Musings

>> We Hire the Best, Just Like Everyone Else [codinghorror.com]

Some against the grain advice on actually keeping an open mind when hiring someone and doing a good job at this notoriously difficult thing.

>> Chasing Developer Productivity Metrics [daedtech.com]

Putting a hard number on developer productivity is the white whale of our industry, so a piece that manages to stay away from rehashing the well-known and the obvious is a good read.

But the reason this one was perhaps more interesting to me is that more and more, I’m starting to build out a team around Baeldung, so the question of “productivity” isn’t just theory any more.

>> It’s Not Just Standing Up: Patterns for Daily Standup Meetings [martinfowler.com]

I don’t usually include Agile writeups here because they’re usually fluff. This one though may be worth the read (although I didn’t get through all of it).

>> A brief overview of hack.summit() 2016 (part 1) [advancedweb.hu]

Some really interesting talks here.

I’m still going through some and they’re a little bit meta, but there are some cool takeaways locked in these talks.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> There’s no kill switch on awesome [dilbert.com]

>> I played that on XBox [dilbert.com]

>> A pantless weasel [dilbert.com]


5. Pick of the Week

The Spring IO 2016 details are finally out:

>> Spring I/O 2016

If you’re picking your conferences for 2016, this one is a fantastic choice (I had a lot of fun last year).

And if you’re going to be there, hit me up and we’ll grab a beer.

I usually post about Dev stuff on Twitter - you can follow me there:

