Channel: Baeldung

OpenJDK Project Loom


1. Overview

In this article, we’ll take a quick look at Project Loom. In essence, the primary goal of Project Loom is to support a high-throughput, lightweight concurrency model in Java.

2. Project Loom

Project Loom is an attempt by the OpenJDK community to introduce a lightweight concurrency construct to Java. The prototypes for Loom so far have introduced a change in the JVM as well as the Java library.

Although there is no scheduled release for Loom yet, we can access the recent prototypes on Project Loom’s wiki.

Before we discuss the various concepts of Loom, let’s discuss the current concurrency model in Java.

3. Java’s Concurrency Model

Presently, Thread represents the core abstraction of concurrency in Java. This abstraction, along with other concurrency APIs, makes it easy to write concurrent applications.

However, since Java maps its threads onto OS kernel threads, this model falls short of today's concurrency requirements. There are two major problems in particular:

  1. Threads cannot match the scale of the domain’s unit of concurrency. For example, applications usually serve up to millions of transactions, users, or sessions, yet the number of threads the kernel can support is far lower. Thus, a Thread per user, transaction, or session is often not feasible.
  2. Most concurrent applications need some synchronization between threads for every request. Due to this, an expensive context switch happens between OS threads.

A possible solution to such problems is the use of asynchronous concurrent APIs. Common examples are CompletableFuture and RxJava. Provided that such APIs don’t block the kernel thread, they give an application a finer-grained concurrency construct on top of Java threads.

On the other hand, such APIs are harder to debug and integrate with legacy APIs. And thus, there is a need for a lightweight concurrency construct which is independent of kernel threads.
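To make the asynchronous style concrete, here's a minimal CompletableFuture sketch. The findUser method below is a hypothetical non-blocking lookup invented for illustration, not part of any particular library:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncExample {

    // Hypothetical non-blocking lookup: returns a future immediately
    static CompletableFuture<String> findUser(int id) {
        return CompletableFuture.supplyAsync(() -> "user-" + id);
    }

    public static void main(String[] args) {
        // The calling thread isn't blocked while the lookup runs;
        // thenApply registers a callback that fires when the result is ready
        String greeting = findUser(42)
          .thenApply(name -> "Hello, " + name)
          .join();
        System.out.println(greeting); // Hello, user-42
    }
}
```

The chain composes work without holding a kernel thread for its whole duration, but, as noted above, stack traces from such pipelines are harder to follow than plain blocking code.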

4. Tasks and Schedulers

Any implementation of a thread, either lightweight or heavyweight, depends on two constructs:

  1. Task (also known as a continuation) – A sequence of instructions that can suspend itself for some blocking operation
  2. Scheduler – For assigning the continuation to the CPU and reassigning the CPU from a paused continuation

Presently, Java relies on OS implementations for both the continuation and the scheduler.

Now, suspending a continuation requires storing its entire call stack, and resuming it requires retrieving that call stack. Since the OS implementation of continuations includes the native call stack along with Java’s call stack, it results in a heavy footprint.

A bigger problem, though, is the use of the OS scheduler. Since the scheduler runs in kernel mode, it can’t differentiate between threads and treats every CPU request in the same manner.

This type of scheduling is not optimal for Java applications in particular.

For example, consider an application thread which performs some action on the requests and then passes on the data to another thread for further processing. Here, it would be better to schedule both these threads on the same CPU. But since the scheduler is agnostic to the thread requesting the CPU, this is impossible to guarantee.

Project Loom proposes to solve this through user-mode threads which rely on Java runtime implementation of continuations and schedulers instead of the OS implementation.

5. Fibers

In the recent prototypes in OpenJDK, a new class named Fiber is introduced to the library alongside the Thread class.

Since the planned library for Fibers is similar to Thread, the user implementation should also remain similar. However, there are two main differences:

  1. Fiber would wrap any task in an internal user-mode continuation. This would allow the task to suspend and resume in Java runtime instead of the kernel
  2. A pluggable user-mode scheduler (ForkJoinPool, for example) would be used

Let’s go through these two items in detail.

6. Continuations

A continuation (or co-routine) is a sequence of instructions that can yield and be resumed by the caller at a later stage.

Every continuation has an entry point and a yield point. The yield point is where it was suspended. Whenever the caller resumes the continuation, the control returns to the last yield point.

It’s important to realize that this suspend/resume now occurs in the language runtime instead of the OS. Therefore, it prevents the expensive context switch between kernel threads.

Similar to threads, Project Loom aims to support nested fibers. Since fibers rely on continuations internally, they must also support nested continuations. To understand this better, consider a class Continuation that allows nesting:

Continuation cont1 = new Continuation(() -> {
    Continuation cont2 = new Continuation(() -> {
        // do something
        suspend(SCOPE_CONT_2); // suspends only the inner continuation
        suspend(SCOPE_CONT_1); // suspends the enclosing continuation as well
    });
    cont2.run();
});

As shown above, the nested continuation can suspend itself or any of the enclosing continuations by passing a scope variable. For this reason, they are known as scoped continuations.

Since suspending a continuation requires storing the call stack, it’s also a goal of Project Loom to add lightweight stack retrieval when resuming a continuation.

7. Scheduler

Earlier, we discussed the shortcomings of the OS scheduler in scheduling relatable threads on the same CPU.

Although it’s a goal for Project Loom to allow pluggable schedulers with fibers, ForkJoinPool in asynchronous mode will be used as the default scheduler. 

ForkJoinPool works on the work-stealing algorithm. Every thread maintains a task deque and executes tasks from its head. An idle thread doesn’t block waiting for a task; instead, it steals tasks from the tail of another thread’s deque.

The only difference in asynchronous mode is that worker threads process their local queue in FIFO order, which suits event-style tasks that are submitted but never joined.

ForkJoinPool adds a task scheduled by another running task to the scheduling thread’s local queue, so it’s likely to execute on the same CPU.
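As a quick sketch, asynchronous mode is enabled through the last argument of the four-argument ForkJoinPool constructor; the parallelism value here is just an example:

```java
import java.util.concurrent.ForkJoinPool;

public class AsyncPoolExample {

    static int computeAnswer() throws Exception {
        // asyncMode = true: workers process their local queues in FIFO order,
        // which suits event-style tasks that are submitted but never joined
        ForkJoinPool asyncPool = new ForkJoinPool(
          Runtime.getRuntime().availableProcessors(),
          ForkJoinPool.defaultForkJoinWorkerThreadFactory,
          null,   // no custom handler for internal worker errors
          true);  // asyncMode
        try {
            return asyncPool.submit(() -> 6 * 7).get();
        } finally {
            asyncPool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(computeAnswer()); // 42
    }
}
```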

8. Conclusion

In this article, we discussed the problems in Java’s current concurrency model and the changes proposed by Project Loom.

In doing so, we also defined tasks and schedulers and looked at how Fibers and ForkJoinPool could provide an alternative to Java using kernel threads.


Check If a String Contains Multiple Keywords


1. Introduction

In this quick tutorial, we’ll find out how to detect multiple words inside of a string.

2. Our Example

Let’s suppose we have the string:

String inputString = "hello there, Baeldung";

Our task is to find whether the inputString contains the words “hello” and “Baeldung”.

So, let’s put our keywords into an array:

String[] words = {"hello", "Baeldung"};

Moreover, the order of the words isn’t important, and the matches should be case-sensitive.

3. Using String.contains()

As a start, we’ll show how to use the String.contains() method to achieve our goal.

Let’s loop over the keywords array and check the occurrence of each item inside of the inputString:

public static boolean containsWords(String inputString, String[] items) {
    boolean found = true;
    for (String item : items) {
        if (!inputString.contains(item)) {
            found = false;
            break;
        }
    }
    return found;
}

The contains() method will return true if the inputString contains the given item. If any keyword is missing from the string, we stop iterating and return false immediately.

Despite the fact that we need to write more code, this solution is fast for simple use cases.
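Putting this together with our example data (using an equivalent early-return form of the same loop):

```java
public class ContainsWordsExample {

    public static boolean containsWords(String inputString, String[] items) {
        for (String item : items) {
            if (!inputString.contains(item)) {
                return false; // stop at the first missing keyword
            }
        }
        return true;
    }

    public static void main(String[] args) {
        String inputString = "hello there, Baeldung";
        System.out.println(containsWords(inputString, new String[] {"hello", "Baeldung"})); // true
        System.out.println(containsWords(inputString, new String[] {"hello", "world"}));    // false
    }
}
```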

4. Using String.indexOf()

Similar to the solution that uses the String.contains() method, we can check the indices of the keywords by using the String.indexOf() method. For that, we need a method accepting the inputString and the list of the keywords:

public static boolean containsWordsIndexOf(String inputString, String[] words) {
    boolean found = true;
    for (String word : words) {
        if (inputString.indexOf(word) == -1) {
            found = false;
            break;
        }
    }
    return found;
}

The indexOf() method returns the index of the first occurrence of the word within the inputString, or -1 if the word isn’t present.
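With our sample string, we can see both cases:

```java
public class IndexOfExample {

    public static void main(String[] args) {
        String inputString = "hello there, Baeldung";

        // "Baeldung" starts right after "hello there, " (13 characters)
        System.out.println(inputString.indexOf("Baeldung")); // 13

        // a keyword that isn't present yields -1
        System.out.println(inputString.indexOf("world"));    // -1
    }
}
```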

5. Using Regular Expressions

Now, let’s use a regular expression to match our words. For that, we’ll use the Pattern class.

First, let’s define the string expression. As we need to match two keywords, we’ll build our regex rule with two lookaheads:

Pattern pattern = Pattern.compile("(?=.*hello)(?=.*Baeldung)");

And for the general case:

StringBuilder regexp = new StringBuilder();
for (String word : words) {
    regexp.append("(?=.*").append(word).append(")");
}

After that, we’ll use the matcher() method to find() the occurrences:

public static boolean containsWordsPatternMatch(String inputString, String[] words) {

    StringBuilder regexp = new StringBuilder();
    for (String word : words) {
        regexp.append("(?=.*").append(word).append(")");
    }

    Pattern pattern = Pattern.compile(regexp.toString());

    return pattern.matcher(inputString).find();
}

But, regular expressions have a performance cost. If we have multiple words to look up, the performance of this solution might not be optimal.

6. Using Java 8 and List

And finally, we can use Java 8’s Stream API. But first, let’s do some minor transformations with our initial data:

List<String> inputStringList = Arrays.asList(inputString.split(" "));
List<String> wordsList = Arrays.asList(words);

Now, it’s time to use the Stream API:

public static boolean containsWordsJava8(String inputString, String[] words) {
    List<String> inputStringList = Arrays.asList(inputString.split(" "));
    List<String> wordsList = Arrays.asList(words);

    return wordsList.stream().allMatch(inputStringList::contains);
}

The operation pipeline above will return true if the input string contains all of our keywords.

Alternatively, we can simply use the containsAll() method of the Collections framework to achieve the desired result:

public static boolean containsWordsArray(String inputString, String[] words) {
    List<String> inputStringList = Arrays.asList(inputString.split(" "));
    List<String> wordsList = Arrays.asList(words);

    return inputStringList.containsAll(wordsList);
}

However, this method works for whole words only. So, it would find our keywords only if they’re separated by whitespace within the text.
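We can see this caveat directly: splitting our sample string on spaces leaves the comma attached to "there,", so only keywords that appear as standalone whitespace-separated tokens will match:

```java
import java.util.Arrays;
import java.util.List;

public class ContainsAllCaveat {

    public static void main(String[] args) {
        List<String> tokens = Arrays.asList("hello there, Baeldung".split(" "));
        System.out.println(tokens); // [hello, there,, Baeldung]

        // "hello" and "Baeldung" are standalone tokens, so this matches
        System.out.println(tokens.containsAll(Arrays.asList("hello", "Baeldung"))); // true

        // "there" is not a token on its own -- the split kept "there,"
        System.out.println(tokens.contains("there")); // false
    }
}
```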

7. Using the Aho-Corasick Algorithm

Simply put, the Aho-Corasick algorithm is for text searching with multiple keywords. It runs in O(n) time in the length of the text, regardless of how many keywords we’re searching for.

Let’s include the Aho-Corasick algorithm dependency in our pom.xml:

<dependency>
    <groupId>org.ahocorasick</groupId>
    <artifactId>ahocorasick</artifactId>
    <version>0.4.0</version>
</dependency>

First, let’s build the trie pipeline with the words array of keywords. For that, we’ll use the Trie data structure:

Trie trie = Trie.builder().onlyWholeWords().addKeywords(words).build();

After that, let’s call the parser method with the inputString text in which we would like to find the keywords and save the results in the emits collection:

Collection<Emit> emits = trie.parseText(inputString);

And finally, if we print our results:

emits.forEach(System.out::println);

For each keyword, we’ll see the start position of the keyword in the text, the ending position, and the keyword itself:

0:4=hello
13:20=Baeldung

Finally, let’s see the complete implementation:

public static boolean containsWordsAhoCorasick(String inputString, String[] words) {
    Trie trie = Trie.builder().onlyWholeWords().addKeywords(words).build();

    Collection<Emit> emits = trie.parseText(inputString);
    emits.forEach(System.out::println);

    boolean found = true;
    for (String word : words) {
        boolean contains = emits.stream()
          .anyMatch(emit -> emit.getKeyword().equals(word));
        if (!contains) {
            found = false;
            break;
        }
    }

    return found;
}

In this example, we’re looking for whole words only. So, if we also want to match keywords embedded in other text (“helloBaeldung”, for example), we should simply remove the onlyWholeWords() attribute from the Trie builder pipeline.

In addition, keep in mind that there might be multiple matches for the same keyword, so the emits collection may contain duplicates.

8. Conclusion

In this article, we learned how to find multiple keywords inside a string. Moreover, we showed examples by using the core JDK, as well as with the Aho-Corasick library.

As usual, the complete code for this article is available over on GitHub.

Conditionally Enable Scheduled Jobs in Spring


1. Introduction

The Spring Scheduling library allows applications to execute code at specific intervals. Because the intervals are specified using the @Scheduled annotation, the intervals are typically static and cannot change over the life of an application.

In this tutorial, we’ll look at various ways to conditionally enable Spring scheduled jobs.

2. Using a Boolean Flag

The simplest way to conditionally enable a Spring scheduled job is to use a boolean variable that we check inside the scheduled job. The variable can be annotated with @Value to make it configurable using normal Spring configuration mechanisms:

@Configuration
@EnableScheduling
public class ScheduledJobs {
  @Value("${jobs.enabled:true}")
  private boolean isEnabled;

  @Scheduled(fixedDelay = 60000)
  public void cleanTempDirectory() {
    if(isEnabled) {
      // do work here
    }
  }
}

The downside is that the scheduled job will always be executed by Spring, which may not be ideal in some cases.

3. Using @ConditionalOnProperty

Another option is to use the @ConditionalOnProperty annotation. It takes a Spring property name and creates the bean only if the property evaluates to true.

First, we create a new class that encapsulates the scheduled job code, including the schedule interval:

public class ScheduledJob {
    @Scheduled(fixedDelay = 60000)
    public void cleanTempDir() {
        // do work here
    }
}

Then we conditionally create a bean of that type:

@Configuration
@EnableScheduling
public class ScheduledJobs {
    @Bean
    @ConditionalOnProperty(value = "jobs.enabled", matchIfMissing = true, havingValue = "true")
    public ScheduledJob scheduledJob() {
        return new ScheduledJob();
    }
}

In this case, the job will run if the property jobs.enabled is set to true, or if it’s not present at all. The downside is that this annotation is available only in Spring Boot.

4. Using Spring Profiles

We can also conditionally enable a Spring scheduled job based on the profile that the application is running with. As an example, this approach is useful when a job should only be scheduled in the production environment.

This approach works well when the schedule is the same across all environments and it only needs to be disabled or enabled in specific profiles.

This works similarly to using @ConditionalOnProperty, except we use the @Profile annotation on our bean method:

@Profile("prod")
@Bean
public ScheduledJob scheduledJob() {
    return new ScheduledJob();
}

This would create the job only if the prod profile is active. Furthermore, it gives us the full set of options that come with the @Profile annotation: matching multiple profiles, complex Spring expressions, and more.

One thing to be careful of with this approach is that if no profile is specified at all, the bean won’t be created and the job will never run.

5. Value Placeholder in Cron Expression

Using Spring value placeholders, not only can we conditionally enable a job, but we can also change its schedule:

@Scheduled(cron = "${jobs.cronSchedule:-}")
public void cleanTempDirectory() {
    // do work here
}

In this example, the job is disabled by default (using the special Spring cron disable expression).

If we want to enable the job, all we have to do is provide a valid cron expression for jobs.cronSchedule. We can do this just like any other Spring configuration: command line argument, environment variable, property file, and so on.
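For instance, assuming a standard application.properties file, a value like this would run the job at the top of every hour (Spring cron expressions have six fields, starting with seconds):

```properties
jobs.cronSchedule=0 0 * * * *
```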

Fixed-delay and fixed-rate values, on the other hand, have no special value that disables a job. Therefore, this approach works only with cron-scheduled jobs.

6. Conclusion

In this tutorial, we’ve seen there are several different ways to conditionally enable a Spring scheduled job. Some approaches are simpler than others but may have limitations.

The full source code for the examples is available over on GitHub.

Spring Data JPA Query by Example


1. Introduction

In this tutorial, we’re going to learn how to query data with the Spring Data Query by Example API.

First, we’ll define the schema of the data we want to query. Next, we’ll examine a few of the relevant classes from Spring Data. And then, we’ll run through a few examples.

Let’s get started!

2. The Test Data

Our test data is a list of passenger names as well as the seat they occupied.

First Name | Last Name | Seat Number
-----------|-----------|------------
Jill       | Smith     | 50
Eve        | Jackson   | 94
Fred       | Bloggs    | 22
Ricki      | Bobbie    | 36
Siya       | Kolisi    | 85

3. Domain

Let’s create the Spring Data Repository we need and provide our domain class and id type.

To begin with, we’ve modeled our Passenger as a JPA entity:

@Entity
class Passenger {

    @Id
    @GeneratedValue
    @Column(nullable = false)
    private Long id;

    @Basic(optional = false)
    @Column(nullable = false)
    private String firstName;

    @Basic(optional = false)
    @Column(nullable = false)
    private String lastName;

    @Basic(optional = false)
    @Column(nullable = false)
    private int seatNumber;

    // constructor, getters etc.
}

Instead of using JPA, we could’ve modeled it as another abstraction.

4. Query by Example API

Firstly, let’s take a look at the JpaRepository interface. As we can see, it extends the QueryByExampleExecutor interface to support Query by Example:

public interface JpaRepository<T, ID>
  extends PagingAndSortingRepository<T, ID>, QueryByExampleExecutor<T> {}

This interface introduces more variants of the find() method that we’re familiar with from Spring Data. However, each method also accepts an instance of Example:

public interface QueryByExampleExecutor<T> {
    <S extends T> Optional<S> findOne(Example<S> var1);
    <S extends T> Iterable<S> findAll(Example<S> var1);
    <S extends T> Iterable<S> findAll(Example<S> var1, Sort var2);
    <S extends T> Page<S> findAll(Example<S> var1, Pageable var2);
    <S extends T> long count(Example<S> var1);
    <S extends T> boolean exists(Example<S> var1);
}

Secondly, the Example interface exposes methods to access the probe and the ExampleMatcher.

It’s important to realize that the probe is an instance of our entity, populated with the property values we want to match:

public interface Example<T> {

    static <T> org.springframework.data.domain.Example<T> of(T probe) {
        return new TypedExample(probe, ExampleMatcher.matching());
    }

    static <T> org.springframework.data.domain.Example<T> of(T probe, ExampleMatcher matcher) {
        return new TypedExample(probe, matcher);
    }

    T getProbe();

    ExampleMatcher getMatcher();

    default Class<T> getProbeType() {
        return ProxyUtils.getUserClass(this.getProbe().getClass());
    }
}

In summary, our probe and our ExampleMatcher together specify our query.

5. Limitations

Like all things, the Query by Example API has some limitations. For instance:

  • Nesting and grouping statements are not supported, for example:  (firstName = ?0 and lastName = ?1) or seatNumber = ?2
  • String matching only includes exact, case-insensitive, starts, ends, contains, and regex
  • All types other than String are exact-match only

Now that we’re a little more familiar with the API and its limitations, let’s dive into some examples.

6. Examples

6.1. Case-Sensitive Matching

Let’s start with a simple example and talk about the default behavior:

@Test
public void givenPassengers_whenFindByExample_thenExpectedReturned() {
    Example<Passenger> example = Example.of(Passenger.from("Fred", "Bloggs", null));

    Optional<Passenger> actual = repository.findOne(example);

    assertTrue(actual.isPresent());
    assertEquals(Passenger.from("Fred", "Bloggs", 22), actual.get());
}

In particular, the static Example.of() method builds an Example using ExampleMatcher.matching().

In other words, an exact match will be performed on all non-null properties of Passenger. Thus, the matching is case-sensitive on String properties.

However, it wouldn’t be too useful if all we could do was an exact match on all non-null properties.

This is where the ExampleMatcher comes in. By building our own ExampleMatcher, we can customize the behavior to suit our needs.

6.2. Case-Insensitive Matching

With that in mind, let’s have a look at another example, this time using withIgnoreCase() to achieve case-insensitive matching:

@Test
public void givenPassengers_whenFindByExampleCaseInsensitiveMatcher_thenExpectedReturned() {
    ExampleMatcher caseInsensitiveExampleMatcher = ExampleMatcher.matchingAll().withIgnoreCase();
    Example<Passenger> example = Example.of(Passenger.from("fred", "bloggs", null),
      caseInsensitiveExampleMatcher);

    Optional<Passenger> actual = repository.findOne(example);

    assertTrue(actual.isPresent());
    assertEquals(Passenger.from("Fred", "Bloggs", 22), actual.get());
}

In this example, notice that we first called ExampleMatcher.matchingAll() – it has the same behavior as ExampleMatcher.matching(), which we used in the previous example.

6.3. Custom Matching

We can also tune the behavior of our matcher on a per-property basis and match any property using ExampleMatcher.matchingAny():

@Test
public void givenPassengers_whenFindByExampleCustomMatcher_thenExpectedReturned() {
    Passenger jill = Passenger.from("Jill", "Smith", 50);
    Passenger eve = Passenger.from("Eve", "Jackson", 94);
    Passenger fred = Passenger.from("Fred", "Bloggs", 22);
    Passenger siya = Passenger.from("Siya", "Kolisi", 85);
    Passenger ricki = Passenger.from("Ricki", "Bobbie", 36);

    ExampleMatcher customExampleMatcher = ExampleMatcher.matchingAny()
      .withMatcher("firstName", ExampleMatcher.GenericPropertyMatchers.contains().ignoreCase())
      .withMatcher("lastName", ExampleMatcher.GenericPropertyMatchers.contains().ignoreCase());

    Example<Passenger> example = Example.of(Passenger.from("e", "s", null), customExampleMatcher);

    List<Passenger> passengers = repository.findAll(example);

    assertThat(passengers, contains(jill, eve, fred, siya));
    assertThat(passengers, not(contains(ricki)));
}

6.4. Ignoring Properties

On the other hand, we may also only want to query on a subset of our properties.

We achieve this by ignoring some properties using ExampleMatcher.withIgnorePaths(String… paths):

@Test
public void givenPassengers_whenFindByIgnoringMatcher_thenExpectedReturned() {
    Passenger jill = Passenger.from("Jill", "Smith", 50);
    Passenger eve = Passenger.from("Eve", "Jackson", 94);
    Passenger fred = Passenger.from("Fred", "Bloggs", 22);
    Passenger siya = Passenger.from("Siya", "Kolisi", 85);
    Passenger ricki = Passenger.from("Ricki", "Bobbie", 36);

    ExampleMatcher ignoringExampleMatcher = ExampleMatcher.matchingAny()
      .withMatcher("lastName", ExampleMatcher.GenericPropertyMatchers.startsWith().ignoreCase())
      .withIgnorePaths("firstName", "seatNumber");

    Example<Passenger> example = Example.of(Passenger.from(null, "b", null), ignoringExampleMatcher);

    List<Passenger> passengers = repository.findAll(example);

    assertThat(passengers, contains(fred, ricki));
    assertThat(passengers, not(contains(jill)));
    assertThat(passengers, not(contains(eve)));
    assertThat(passengers, not(contains(siya)));
}

7. Conclusion

In this article, we’ve demonstrated how to use the Query by Example API.

We’ve demonstrated how to use Example and ExampleMatcher along with the QueryByExampleExecutor interface to query a table using an example data instance.

As usual, you can find the complete code over on GitHub.


Notify User of Login From New Device or Location


1. Introduction

In this tutorial, we’re going to demonstrate how we can verify if our users are logging in from a new device/location.

We’re going to send them a login notification to let them know we’ve detected unfamiliar activity on their account.

2. Users’ Location and Device Details

There are two things we require: the locations of our users, and the information about the devices they use to log in.

Considering that we’re using HTTP to exchange messages with our users, we’ll have to rely solely on the incoming HTTP request and its metadata to retrieve this information.

Luckily for us, there are HTTP headers whose sole purpose is to carry this kind of information.

2.1. Device Location

Before we can estimate our users’ location, we need to obtain their originating IP Address.

We can do that by using:

  • X-Forwarded-For – the de facto standard header for identifying the originating IP address of a client connecting to a web server through an HTTP proxy or load balancer
  • ServletRequest.getRemoteAddr() – a utility method that returns the originating IP of the client or the last proxy that sent the request

Extracting a user’s IP address from the HTTP request isn’t entirely reliable, since the headers may be tampered with. However, let’s simplify this in our tutorial and assume that won’t be the case.

Once we’ve retrieved the IP address, we can convert it to a real-world location through geolocation.

2.2. Device Details

Similarly to the originating IP address, there’s also an HTTP header, User-Agent, that carries information about the device used to send the request.

In short, it carries information that allows us to identify the application type, operating system, and software vendor/version of the requesting user agent.

Here’s an example of what it may look like:

User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_0) AppleWebKit/537.36 
  (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36

In our example above, the device is running on Mac OS X 10.14 and used Chrome 71.0 to send the request.

Rather than implement a User-Agent parser from scratch, we’re going to resort to existing solutions that have already been tested and are more reliable.

3. Detecting a New Device or Location

Now that we’ve introduced the information we need, let’s modify our AuthenticationSuccessHandler to perform validation after a user has logged in:

public class MySimpleUrlAuthenticationSuccessHandler 
  implements AuthenticationSuccessHandler {
    //...
    @Override
    public void onAuthenticationSuccess(
      final HttpServletRequest request,
      final HttpServletResponse response,
      final Authentication authentication)
      throws IOException {
        handle(request, response, authentication);
        //...
        loginNotification(authentication, request);
    }

    private void loginNotification(Authentication authentication, 
      HttpServletRequest request) {
        try {
            if (authentication.getPrincipal() instanceof User) { 
                deviceService.verifyDevice(((User)authentication.getPrincipal()), request); 
            }
        } catch (Exception e) {
            logger.error("An error occurred while verifying device or location", e);
            throw new RuntimeException(e);
        }
    }
    //...
}

We simply added a call to our new component: DeviceService. This component will encapsulate everything we need to identify new devices/locations and notify our users.

However, before we move onto our DeviceService, let’s create our DeviceMetadata entity to persist our users’ data over time:

@Entity
public class DeviceMetadata {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;
    private Long userId;
    private String deviceDetails;
    private String location;
    private Date lastLoggedIn;
    //...
}

And its Repository:

public interface DeviceMetadataRepository extends JpaRepository<DeviceMetadata, Long> {
    List<DeviceMetadata> findByUserId(Long userId);
}

With our Entity and Repository in place, we can start gathering the information we need to keep a record of our users’ devices and their locations.

4. Extracting our User’s Location

Before we can estimate our user’s geographical location, we need to extract their IP address:

private String extractIp(HttpServletRequest request) {
    String clientIp;
    String clientXForwardedForIp = request
      .getHeader("x-forwarded-for");
    if (nonNull(clientXForwardedForIp)) {
        clientIp = parseXForwardedHeader(clientXForwardedForIp);
    } else {
        clientIp = request.getRemoteAddr();
    }
    return clientIp;
}

If there’s an X-Forwarded-For header in the request, we’ll use it to extract their IP address; otherwise, we’ll use the getRemoteAddr() method.
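The parseXForwardedHeader() method isn’t shown here; a minimal version (our assumption, not necessarily the original implementation) would take the first entry of the comma-separated list, which by convention is the originating client:

```java
public class ForwardedHeaderParser {

    // X-Forwarded-For: client, proxy1, proxy2 -- the first entry is the client
    static String parseXForwardedHeader(String header) {
        return header.split(",")[0].trim();
    }

    public static void main(String[] args) {
        String header = "203.0.113.7, 70.41.3.18, 150.172.238.178";
        System.out.println(parseXForwardedHeader(header)); // 203.0.113.7
    }
}
```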

Once we have their IP address, we can estimate their location with the help of Maxmind:

private String getIpLocation(String ip) throws IOException, GeoIp2Exception {
    String location = UNKNOWN;
    InetAddress ipAddress = InetAddress.getByName(ip);
    CityResponse cityResponse = databaseReader
      .city(ipAddress);
        
    if (Objects.nonNull(cityResponse) &&
      Objects.nonNull(cityResponse.getCity()) &&
      !Strings.isNullOrEmpty(cityResponse.getCity().getName())) {
        location = cityResponse.getCity().getName();
    }    
    return location;
}

5. Users’ Device Details

Since the User-Agent header contains all the information we need, it’s only a matter of extracting it. As we mentioned earlier, with the help of a User-Agent parser (uap-java in this case), getting this information becomes quite simple:

private String getDeviceDetails(String userAgent) {
    String deviceDetails = UNKNOWN;
    
    Client client = parser.parse(userAgent);
    if (Objects.nonNull(client)) {
        deviceDetails = client.userAgent.family
          + " " + client.userAgent.major + "." 
          + client.userAgent.minor + " - "
          + client.os.family + " " + client.os.major
          + "." + client.os.minor; 
    }
    return deviceDetails;
}

6. Sending a Login Notification

To send a login notification to our user, we need to compare the information we extracted against past data to check if we’ve already seen the device, in that location, in the past.

Let’s take a look at our DeviceService.verifyDevice() method:

public void verifyDevice(User user, HttpServletRequest request) {
    
    String ip = extractIp(request);
    String location = getIpLocation(ip);

    String deviceDetails = getDeviceDetails(request.getHeader("user-agent"));
        
    DeviceMetadata existingDevice
      = findExistingDevice(user.getId(), deviceDetails, location);
        
    if (Objects.isNull(existingDevice)) {
        unknownDeviceNotification(deviceDetails, location,
          ip, user.getEmail(), request.getLocale());

        DeviceMetadata deviceMetadata = new DeviceMetadata();
        deviceMetadata.setUserId(user.getId());
        deviceMetadata.setLocation(location);
        deviceMetadata.setDeviceDetails(deviceDetails);
        deviceMetadata.setLastLoggedIn(new Date());
        deviceMetadataRepository.save(deviceMetadata);
    } else {
        existingDevice.setLastLoggedIn(new Date());
        deviceMetadataRepository.save(existingDevice);
    }
}

After extracting the information, we compare it against existing DeviceMetadata entries to check if there’s an entry containing the same information:

private DeviceMetadata findExistingDevice(
  Long userId, String deviceDetails, String location) {
    List<DeviceMetadata> knownDevices
      = deviceMetadataRepository.findByUserId(userId);
    
    for (DeviceMetadata existingDevice : knownDevices) {
        if (existingDevice.getDeviceDetails().equals(deviceDetails) 
          && existingDevice.getLocation().equals(location)) {
            return existingDevice;
        }
    }
    return null;
}

If there isn’t, we need to send a notification to our user to let them know that we’ve detected unfamiliar activity in their account. Then, we persist the information.

Otherwise, we simply update the lastLoggedIn attribute of the familiar device.

7. Conclusion

In this article, we demonstrated how we can send a login notification in case we detect unfamiliar activity in users’ accounts.

The full implementation of this tutorial can be found over on GitHub.

List Files in a Directory in Java

1. Overview

In this quick tutorial, we’ll look into different ways to list files within a directory.

2. Listing

If we want to list all the files in the directory and skip further digging into sub-directories, we can simply use java.io.File#listFiles:

public Set<String> listFilesUsingJavaIO(String dir) {
    // note: listFiles() returns null if dir doesn't denote a directory
    return Stream.of(new File(dir).listFiles())
      .filter(file -> !file.isDirectory())
      .map(File::getName)
      .collect(Collectors.toSet());
}

3. DirectoryStream

However, Java 7 introduced a faster alternative to File#listFiles called DirectoryStream.

Let’s see what the equivalent looks like:

public Set<String> listFilesUsingDirectoryStream(String dir) throws IOException {
    Set<String> fileList = new HashSet<>();
    try (DirectoryStream<Path> stream = Files.newDirectoryStream(Paths.get(dir))) {
        for (Path path : stream) {
            if (!Files.isDirectory(path)) {
                fileList.add(path.getFileName()
                    .toString());
            }
        }
    }
    return fileList;
}

We can readily see that while DirectoryStream may be faster, it isn't part of the Stream API, so it isn't quite as convenient to work with.

Also, DirectoryStream requires that we close the resource, meaning wrapping it with a try-with-resources, too.

4. Walking

Or, we can list all the files within a directory by walking it to a configured depth.

Let’s use java.nio.file.Files#walk to list all the files within a directory to a given depth:

public Set<String> listFilesUsingFileWalk(String dir, int depth) throws IOException {
    try (Stream<Path> stream = Files.walk(Paths.get(dir), depth)) {
        return stream
          .filter(file -> !Files.isDirectory(file))
          .map(Path::getFileName)
          .map(Path::toString)
          .collect(Collectors.toSet());
    }
}

Of course, remember to use try-with-resources so the file handle for dir gets closed properly.

Or, if we want to have more control over what happens with each file visited, we can also supply a visitor implementation:

public Set<String> listFilesUsingFileWalkAndVisitor(String dir) throws IOException {
    Set<String> fileList = new HashSet<>();
    Files.walkFileTree(Paths.get(dir), new SimpleFileVisitor<Path>() {
        @Override
        public FileVisitResult visitFile(Path file, BasicFileAttributes attrs)
          throws IOException {
            if (!Files.isDirectory(file)) {
                fileList.add(file.getFileName().toString());
            }
            return FileVisitResult.CONTINUE;
        }
    });
    return fileList;
}

This is handy when we want to do additional reading, moving, or deleting of files as we go.

5. Conclusion

In this quick tutorial, we explored different ways to list files within a directory.

As always, the full source code of the examples is available over on GitHub.

Java Weekly, Issue 266

Here we go…

1. Spring and Java

>> Testing CDI Beans and the Persistence Layer Under Java SE [in.relation.to]

A good write-up on using Weld to test interactions between the JPA layer and the business logic layer without incurring the overhead of container deployments.

>> Adding Input and Output Parameters to TestProject Actions [petrikainulainen.net]

A brief example of using the @Parameter annotation to create parameterized actions in a TestProject addon.

>> How to escape SQL reserved keywords with JPA and Hibernate [vladmihalcea.com]

A couple of approaches to the problem – using escape characters explicitly within @Table and @Column annotations, and using a global Hibernate property.

>> Securing Spring Boot Admin & actuator endpoints with Keycloak [blog.codecentric.de]

A comprehensive guide to help you secure both the admin app itself and the actuator endpoints of apps that it monitors.

2. Technical and Musings

>> Developing Microservices with Behavior Driven Development and Interface Oriented Design [infoq.com]

An integration of these two sets of design principles can help you create well-defined services that work well together.

>> The 4 Quality Gates Every SRE Team Must Check Before Promoting Code [blog.overops.com]

A set of comparative quality measurements, wherein machine learning and AI converge, that are easy to implement and can help you block bad releases from being deployed to production.

3. Comics

And my favorite Dilberts of the week:

>> Best Product [dilbert.com]

>> New Forms [dilbert.com]

>> Self-Driving Car [dilbert.com]

4. Pick of the Week

>> Regex tutorial — A quick cheatsheet by examples [medium.com]


Java Bitwise Operators

1. Overview

Operators are used in the Java language to operate on data and variables.

In this tutorial, we’ll explore Bitwise Operators and how they work in Java.

2. Bitwise Operators

Bitwise operators work on binary digits or bits of input values. We can apply these to the integer types – long, int, short, char, and byte.

Before exploring the different bitwise operators let’s first understand how they work.

Bitwise operators work on a binary equivalent of decimal numbers and perform operations on them bit by bit as per the given operator:

  • First, the operands are converted to their binary representation
  • Next, the operator is applied to each binary number and the result is calculated
  • Finally, the result is converted back to its decimal representation

Let’s understand with an example; let’s take two integers:

int value1 = 6;
int value2 = 5;

Next, let’s apply a bitwise OR operator on these numbers:

int result = 6 | 5;

To perform this operation, first, the binary representation of these numbers will be calculated:

Binary number of value1 = 0110
Binary number of value2 = 0101

Then the operation will be applied to each bit. The result returns a new binary number:

0110
0101
-----
0111

Finally, the result 0111 will be converted back to decimal which is equal to 7:

result : 7

Bitwise operators are further classified as bitwise logical and bitwise shift operators. Let’s now go through each type.

3. Bitwise Logical Operators

The bitwise logical operators are AND(&), OR(|), XOR(^), and NOT(~).

3.1. Bitwise OR (|)

The OR operator compares each binary digit of two integers and gives back 1 if either of them is 1.

This is similar to the || logical operator used with booleans: when two booleans are combined with ||, the result is true if either of them is true. Similarly, here the output bit is 1 when either input bit is 1.

We saw an example of this operator in the previous section:

@Test
public void givenTwoIntegers_whenOrOperator_thenNewDecimalNumber() {
    int value1 = 6;
    int value2 = 5;
    int result = value1 | value2;
    assertEquals(7, result);
}

Let’s see the binary representation of this operation:

0110
0101
-----
0111

Here, we can see that using OR, 0 and 0 will result in 0, while any combination with at least a 1 will result in 1.

3.2. Bitwise AND (&)

The AND operator compares each binary digit of two integers and gives back 1 if both are 1, otherwise it returns 0.

This is similar to the && operator with boolean values. When the values of two booleans are true the result of a && operation is true.

Let’s use the same example as above, except now using the & operator instead of the | operator:

@Test
public void givenTwoIntegers_whenAndOperator_thenNewDecimalNumber() {
    int value1 = 6;
    int value2 = 5;
    int result = value1 & value2;
    assertEquals(4, result);
}

Let’s also see the binary representation of this operation:

0110
0101
-----
0100

0100 is 4 in decimal, therefore, the result is:

result : 4

3.3. Bitwise XOR (^)

The XOR operator compares each binary digit of two integers and gives back 1 if both the compared bits are different. This means that if bits of both the integers are 1 or 0 the result will be 0; otherwise, the result will be 1:

@Test
public void givenTwoIntegers_whenXorOperator_thenNewDecimalNumber() {
    int value1 = 6;
    int value2 = 5;
    int result = value1 ^ value2;
    assertEquals(3, result);
}

And the binary representation:

0110
0101
-----
0011

0011 is 3 in decimal, therefore, the result is:

result : 3

3.4. Bitwise COMPLEMENT (~)

The bitwise NOT, or complement, operator simply negates each bit of the input value. It takes only one integer operand, making it the bitwise analog of the logical ! operator.

This operator changes each binary digit of the integer, which means all 0 become 1 and all 1 become 0. The ! operator works similarly for boolean values: it reverses boolean values from true to false and vice versa.

Now let’s understand with an example how to find the complement of a decimal number.

Let’s do the complement of value1 = 6:

@Test
public void givenOneInteger_whenNotOperator_thenNewDecimalNumber() {
    int value1 = 6;
    int result = ~value1;
    assertEquals(-7, result);
}

The value in binary is:

value1 = 0000 0110

By applying the complement operator, the result will be:

0000 0110 -> 1111 1001

This is the one’s complement of the decimal number 6. And since the first (leftmost) bit is 1 in binary, it means that the sign is negative for the number that is stored.

Now, since the numbers are stored as 2’s complement, first we need to find its 2’s complement and then convert the resultant binary number into a decimal number:

1111 1001 -> 0000 0110 + 1 -> 0000 0111

Finally, 0000 0111 is 7 in decimal. Since the sign bit was 1, the resulting answer is:

result : -7

3.5. Bitwise Operator Table

Let’s summarize the result of the operators we’ve seen so far in a comparison table:

A	B	A|B	A&B	A^B	~A
0	0	0	0	0	1
1	0	1	0	1	0
0	1	1	0	1	1
1	1	1	1	0	0

4. Bitwise Shift Operators

Binary shift operators shift all the bits of the input value either to the left or right based on the shift operator.

Let’s see the syntax for these operators:

value <operator> <number_of_times>

The left side of the expression is the integer that is shifted, and the right side of the expression denotes the number of times that it has to be shifted.

Bitwise shift operators are further classified as bitwise left and bitwise right shift operators.

4.1. Signed Left Shift [<<]

The left shift operator shifts the bits to the left by the number of positions specified by the right-hand operand. After the left shift, the empty positions on the right are filled with 0.

Another important point to note is that shifting a number by one is equivalent to multiplying it by 2, or, in general, left shifting a number by n positions is equivalent to multiplication by 2^n.
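
We can check this equivalence directly in code; a quick sketch (the class and method names here are our own):

```java
public class LeftShiftEquivalence {

    // left shifting by n positions multiplies by 2^n,
    // as long as the result doesn't overflow an int
    static boolean shiftEqualsMultiply(int value, int n) {
        return (value << n) == value * (int) Math.pow(2, n);
    }
}
```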

Let’s take the value 12 as the input value.

Now, we will move it by 2 places to the left (12 << 2) and see what the final result will be.

The binary equivalent of 12 is 00001100. After shifting to the left by 2 places, the result is 00110000, which is equivalent to 48 in decimal:

@Test
public void givenOnePositiveInteger_whenLeftShiftOperator_thenNewDecimalNumber() {
    int value = 12;
    int leftShift = value << 2;
    assertEquals(48, leftShift);
}

This works similarly for a negative value:

@Test
public void givenOneNegativeInteger_whenLeftShiftOperator_thenNewDecimalNumber() {
    int value = -12;
    int leftShift = value << 2;
    assertEquals(-48, leftShift);
}

4.2. Signed Right Shift [>>]

The right shift operator shifts all the bits to the right. The empty positions on the left side are filled depending on the input number:

  • When an input number is negative, where the leftmost bit is 1, then the empty spaces will be filled with 1
  • When an input number is positive, where the leftmost bit is 0, then the empty spaces will be filled with 0

Let’s continue the example using 12 as input.

Now, we will move it by 2 places to the right (12 >> 2) and see what the final result will be.

The input number is positive, so after shifting to the right by 2 places, the result is 0011, which is 3 in decimal:

@Test
public void givenOnePositiveInteger_whenSignedRightShiftOperator_thenNewDecimalNumber() {
    int value = 12;
    int rightShift = value >> 2;
    assertEquals(3, rightShift);
}

Also, for a negative value:

@Test
public void givenOneNegativeInteger_whenSignedRightShiftOperator_thenNewDecimalNumber() {
    int value = -12;
    int rightShift = value >> 2;
    assertEquals(-3, rightShift);
}

4.3. Unsigned Right Shift [>>>]

This operator is very similar to the signed right shift operator. The only difference is that the empty positions on the left are filled with 0 irrespective of whether the number is positive or negative. Therefore, the result will always be a non-negative integer.

Let’s right shift the same value of 12:

@Test
public void givenOnePositiveInteger_whenUnsignedRightShiftOperator_thenNewDecimalNumber() {
    int value = 12;
    int unsignedRightShift = value >>> 2;
    assertEquals(3, unsignedRightShift);
}

And now, the negative value:

@Test
public void givenOneNegativeInteger_whenUnsignedRightShiftOperator_thenNewDecimalNumber() {
    int value = -12;
    int unsignedRightShift = value >>> 2;
    assertEquals(1073741821, unsignedRightShift);
}

5. Difference Between Bitwise and Logical Operators

There are a few differences between the bitwise operators we’ve discussed here and the more commonly known logical operators.

First, logical operators work on boolean expressions and return boolean values (either true or false), whereas bitwise operators work on binary digits of integer values (long, int, short, char, and byte) and return an integer.

Also, logical operators always evaluate the first boolean expression and, depending on its result and the operator used, may or may not evaluate the second. On the other hand, bitwise operators always evaluate both operands.
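
We can observe this short-circuiting with a small sketch (class and method names are our own) that counts how often the right-hand operand is evaluated:

```java
public class ShortCircuitDemo {

    static int calls = 0;

    static boolean touch() {
        calls++; // counts evaluations of the right-hand operand
        return true;
    }

    static void run() {
        calls = 0;
        boolean flag = Integer.parseInt("0") > 0; // false, but not a compile-time constant
        boolean a = flag && touch(); // right side skipped: && short-circuits
        boolean b = flag & touch();  // right side still evaluated: & does not
    }
}
```

After run() completes, touch() has executed exactly once – only the bitwise & evaluated it.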

Finally, logical operators are used in making decisions based on multiple conditions, while bitwise operators work on bits and perform bit by bit operations.

6. Use Cases

Some potential use cases of bitwise operators are:

  • Communication stacks where the individual bits in the header attached to the data signify important information
  • In embedded systems to set/clear/toggle just one single bit of a specific register without modifying the remaining bits
  • To encrypt data for safety issues using the XOR operator
  • In data compression by converting data from one representation to another, to reduce the amount of space used
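
For instance, the set/clear/toggle idiom from the embedded-systems bullet can be sketched like this (the helper names are our own):

```java
public class BitFlags {

    static int setBit(int value, int pos) {
        return value | (1 << pos);  // force the bit at pos to 1
    }

    static int clearBit(int value, int pos) {
        return value & ~(1 << pos); // force the bit at pos to 0
    }

    static int toggleBit(int value, int pos) {
        return value ^ (1 << pos);  // flip the bit at pos
    }
}
```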

7. Conclusion

In this tutorial, we learned about the types of bitwise operators and how they’re different from logical operators. We also saw some potential use cases for them.

All the code examples in this article are available over on GitHub.

List of Primitive Integer Values in Java

1. Overview

In this tutorial, we’ll learn how to construct a list containing primitive integer values.

We’ll explore solutions using core Java and external libraries.

2. Autoboxing

In Java, generic type arguments must be reference types. This means we can’t do something like List<int>.

Instead, we can use List<Integer> and take advantage of autoboxing. Autoboxing helps us use the List<Integer> interface as if it contained primitive int values. Under the hood, it is still a collection of Objects and not primitives.

The core Java solution is just an adjustment to be able to use primitives with generic collections. Moreover, it comes with the cost of boxing and unboxing conversions.
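
As a quick illustration of what those conversions look like in practice (a minimal sketch with names of our own choosing):

```java
import java.util.ArrayList;
import java.util.List;

public class AutoboxingDemo {

    static List<Integer> boxedList() {
        List<Integer> list = new ArrayList<>();
        list.add(5);             // the int literal is autoboxed to an Integer
        int first = list.get(0); // the Integer is unboxed back to an int
        return list;
    }
}
```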

However, there are other options in Java and additional third-party libraries that we can use. Let’s see how to use them below.

3. Using the Stream API

Oftentimes, we don’t actually need to create a list as much as we just need to operate on it.

In these cases, it might work to use Java 8’s Stream API instead of creating a list altogether. The IntStream class provides a sequence of primitive int elements that supports sequential aggregate operations.

Let’s have a quick look at an example:

IntStream stream = IntStream.of(5, 10, 0, 2, -8);

The IntStream.of() static method returns a sequential IntStream.

Similarly, we can create an IntStream from an existing array of ints:

int[] primitives = {5, 10, 0, 2, -8};
IntStream stream = IntStream.of(primitives);

Moreover, we can apply the standard Stream API operations to iterate, filter and aggregate the ints. For example, we can calculate the average of the positive int values:

OptionalDouble average = stream.filter(i -> i > 0).average();

Most importantly, no autoboxing is used while working with the streams.

Though, if we definitely need a concrete list, we’ll want to take a look at one of the following third-party libraries.

4. Using Trove 

Trove is a high-performance library which provides primitive collections for Java.

To set up Trove with Maven, we need to include the trove4j dependency in our pom.xml:

<dependency>
    <groupId>net.sf.trove4j</groupId>
    <artifactId>trove4j</artifactId>
    <version>3.0.2</version>
</dependency>

With Trove, we can create lists, maps, and sets.

For instance, there is an interface TIntList with its TIntArrayList implementation to work with a list of int values:

TIntList tList = new TIntArrayList();

Even though TIntList can’t directly implement List, its methods are very comparable. Other solutions that we discuss follow a similar pattern.

The greatest benefit of using TIntArrayList is performance and memory consumption gains. No additional boxing/unboxing is needed as it stores the data inside of an int[] array.

5. Using Fastutil

Another high-performance library to work with the primitives is Fastutil. Let’s add the fastutil dependency:

<dependency>
    <groupId>it.unimi.dsi</groupId>
    <artifactId>fastutil</artifactId>
    <version>8.1.0</version>
</dependency>

Now, we’re ready to use it:

IntArrayList list = new IntArrayList();

The default constructor IntArrayList() internally creates an array of primitives with the default capacity of 16. In the same vein, we can initialize it from an existing array:

int[] primitives = new int[] {5, 10, 0, 2, -8};
IntArrayList list = new IntArrayList(primitives);

6. Using Colt

Colt is an open-source, high-performance library for scientific and technical computing. The cern.colt package contains resizable lists holding primitive data types such as int.

First, let’s add the colt dependency:

<dependency>
    <groupId>colt</groupId>
    <artifactId>colt</artifactId>
    <version>1.2.0</version>
</dependency>

The primitive list that this library offers is cern.colt.list.IntArrayList:

cern.colt.list.IntArrayList coltList = new cern.colt.list.IntArrayList();

The default initial capacity is ten.

7. Using Guava

Guava provides a number of ways of interfacing between primitive arrays and collection APIs. The com.google.common.primitives package has all the classes to accommodate primitive types.

For example, the ImmutableIntArray class lets us create an immutable list of int elements.

Let’s suppose we have the following array of int values:

int[] primitives = new int[] {5, 10, 0, 2};

We can simply create a list with the array:

ImmutableIntArray list = ImmutableIntArray.builder().addAll(primitives).build();

Furthermore, it provides a list API with all the standard methods we would expect.

8. Conclusion

In this quick article, we showed multiple ways of creating lists with the primitive integers. In our examples, we used the Trove, Fastutil, Colt, and Guava libraries.

As usual, the complete code for this article is available over on GitHub.

Common String Operations in Java

1. Introduction

String-based values and operations are quite common in everyday development, and any Java developer must be able to handle them.

In this tutorial, we’ll provide a quick cheat sheet of common String operations.

Additionally, we’ll shed some light on the differences between equals and “==” and between StringUtils#isBlank and #isEmpty.

2. Transforming a Char into a String

A char represents one character in Java. But in most cases, we need a String.

So let’s start off with transforming chars into Strings:

String toStringWithConcatenation(final char c) {
    return String.valueOf(c);
}

3. Appending Strings

Another frequently needed operation is appending strings with other values, like a char:

String appendWithConcatenation(final String prefix, final char c) {
    return prefix + c;
}

We can append other basic types with a StringBuilder as well:

String appendWithStringBuilder(final String prefix, final char c) {
    return new StringBuilder(prefix).append(c).toString();
}

4. Getting a Character by Index

If we need to extract one character out of a string, the API provides everything we want:

char getCharacterByIndex(final String text, final int index) {
    return text.charAt(index);
}

Since a String uses a char[] as a backing data structure, the index starts at zero.

5. Handling ASCII Values

We can easily switch between a char and its numerical representation (ASCII) by casting:

int asciiValue(final char character) {
    return (int) character;
}

char fromAsciiValue(final int value) {
    Assert.isTrue(value >= 0 && value < 65536, "value is not a valid character");
    return (char) value;
}

Of course, since an int is 4 bytes and a char is an unsigned 2-byte value, we need to check to make sure that we are working with legal character values.

6. Removing All Whitespace

Sometimes we need to get rid of some characters, most commonly whitespace. A good way is to use the replaceAll method with a regular expression:

String removeWhiteSpace(final String text) {
    return text.replaceAll("\\s+", "");
}

7. Joining Collections to a String

Another common use case is when we have some kind of Collection and want to create a string out of it:

<T> String fromCollection(final Collection<T> collection) { 
   return collection.stream().map(Objects::toString).collect(Collectors.joining(", "));
}

Notice that Collectors.joining also allows specifying a prefix and a suffix.
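
For example, the three-argument overload of Collectors.joining takes a delimiter, a prefix, and a suffix (the helper method below is our own):

```java
import java.util.List;
import java.util.Objects;
import java.util.stream.Collectors;

public class JoiningDemo {

    static String bracketed(List<?> items) {
        return items.stream()
          .map(Objects::toString)
          .collect(Collectors.joining(", ", "[", "]"));
    }
}
```

Calling bracketed(Arrays.asList(1, 2, 3)) produces "[1, 2, 3]".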

8. Splitting a String

Or on the other hand, we can split a string by a delimiter using the split method:

String[] splitByRegExPipe(final String text) {
   return text.split("\\|");
}

Again, we’re using a regular expression here, this time to split by a pipe. Since we want to use a special character, we have to escape it.

Another possibility is to use the Pattern class:

String[] splitByPatternPipe(final String text) {
    return text.split(Pattern.quote("|"));
}

9. Processing All Characters as a Stream

In the case of detailed processing, we can transform a string to an IntStream:

IntStream getStream(final String text) {
    return text.chars();
}
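
The resulting IntStream yields each character as an int, so we can combine it with the usual stream operations; for example, counting the uppercase letters (the helper name is our own):

```java
public class CharStreamDemo {

    static long countUppercase(String text) {
        return text.chars()               // each character as an int
          .filter(Character::isUpperCase) // keep only uppercase characters
          .count();
    }
}
```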

10. Reference Equality and Value Equality

Although strings look like a primitive type, they are not.

Therefore, we have to distinguish between reference equality and value equality. Reference equality always implies value equality, but in general not the other way around. The former we check with the == operator, and the latter with the equals method:

@Test
public void whenUsingEquals_thenWeCheckForTheSameValue() {
    assertTrue("Values are equal", new String("Test").equals("Test"));
}

@Test
public void whenUsingEqualsSign_thenWeCheckForReferenceEquality() {
    assertFalse("References are not equal", new String("Test") == "Test");
}

Notice that literals are interned in the string pool. Therefore the compiler can at times optimize them to the same reference:

@Test
public void whenTheCompileCanBuildUpAString_thenWeGetTheSameReference() {
    assertTrue("Literals are concatenated by the compiler", "Test" == "Te"+"st");
}

11. Blank String vs. Empty String

There is a subtle difference between isBlank and isEmpty.

A string is empty if it’s null or has length zero. Whereas a string is blank if it’s null or contains only whitespace characters:

@Test
public void whenUsingIsEmpty_thenWeCheckForNullorLengthZero() {
    assertTrue("null is empty", isEmpty(null));
    assertTrue("nothing is empty", isEmpty(""));
    assertFalse("whitespace is not empty", isEmpty(" "));
    assertFalse("whitespace is not empty", isEmpty("\n"));
    assertFalse("whitespace is not empty", isEmpty("\t"));
    assertFalse("text is not empty", isEmpty("Anything!"));
}

@Test
public void whenUsingIsBlank_thenWeCheckForNullorOnlyContainingWhitespace() {
    assertTrue("null is blank", isBlank(null));
    assertTrue("nothing is blank", isBlank(""));
    assertTrue("whitespace is blank", isBlank("\t\t \t\n\r"));
    assertFalse("test is not blank", isBlank("Anything!"));
}

12. Conclusion

Strings are a core type in all kinds of applications. In this tutorial, we learned some key operations in common scenarios.

Furthermore, we gave directions to more detailed references.

Finally, the full code with all examples is available in our GitHub repository.

Java Classes and Objects

1. Overview

In this quick tutorial, we’ll look at two basic building blocks of the Java programming language – classes and objects. They’re basic concepts of Object Oriented Programming (OOP), which we use to model real-life entities.

In OOP, classes are blueprints or templates for objects. We use them to describe types of entities.

On the other hand, objects are living entities, created from classes. They contain certain states within their fields and present certain behaviors with their methods.

2. Classes

Simply put, a class represents a definition or a type of object. In Java, classes can contain fields, constructors, and methods.

Let’s see an example using a simple Java class representing a Car:

class Car {

    // fields
    String type;
    String model;
    String color;
    int speed;

    // constructor
    Car(String type, String model, String color) {
        this.type = type;
        this.model = model;
        this.color = color;
    }
    
    // methods
    int increaseSpeed(int increment) {
        this.speed = this.speed + increment;
        return this.speed;
    }
    
    // ...
}

This Java class represents a car in general. We can create any type of car from this class. We use fields to hold the state and a constructor to create objects from this class.

If we don’t declare any constructor, the compiler generates an empty, no-argument constructor for us by default. Here’s how that default constructor would look for our Car class:

Car(){}

This constructor simply initializes all fields of the object with their default values. Strings are initialized to null and integers to zero.
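
We can see those defaults with a minimal, hypothetical class that declares no constructor at all:

```java
// no constructor declared, so the compiler generates the default one
class Bicycle {
    String model; // reference fields default to null
    int speed;    // numeric fields default to zero
}
```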

Now, our class has a specific constructor because we want our objects to have their fields defined when we create them:

Car(String type, String model, String color) {
    // ...
}

To sum up, we wrote a class that defines a car. Its properties are described by fields, which contain the state of objects of the class, and its behavior is described using methods.

3. Objects

While classes are translated during compile time, objects are created from classes at runtime.

Objects of a class are called instances, and we create and initialize them with constructors:

Car focus = new Car("Ford", "Focus", "red");
Car auris = new Car("Toyota", "Auris", "blue");
Car golf = new Car("Volkswagen", "Golf", "green");

Now, we’ve created different Car objects, all from a single class. This is the point of it all, to define the blueprint in one place, and then, to reuse it many times in many places.

So far, we have three Car objects, and they’re all parked since their speed is zero. We can change this by invoking our increaseSpeed method:

focus.increaseSpeed(10);
auris.increaseSpeed(20);
golf.increaseSpeed(30);

Now, we’ve changed the state of our cars – they’re all moving at different speeds.

Furthermore, we can and should define access control to our class, its constructors, fields, and methods. We can do so by using access modifiers, as we’ll see in the next section.

4. Access Modifiers

In the previous examples, we omitted access modifiers to simplify the code. By doing so, we actually used a default package-private modifier. That modifier allows access to the class from any other class in the same package.

Usually, we’d use a public modifier for constructors to allow access from all other objects:

public Car(String type, String model, String color) {
    // ...
}

Every field and method in our class should also have its access control defined by a specific modifier. Classes usually have public modifiers, but we tend to keep our fields private.

Fields hold the state of our object, therefore we want to control access to that state. We can keep some of them private, and others public. We achieve this with specific methods called getters and setters.

Let’s have a look at our class with fully-specified access control:

public class Car {
    private String type;
    // ...

    public Car(String type, String model, String color) {
       // ...
    }

    public String getColor() {
        return color;
    }

    public void setColor(String color) {
        this.color = color;
    }

    public int getSpeed() {
        return speed;
    }

    // ...
}

Our class is marked public, which means we can use it in any package. Also, the constructor is public, which means we can create an object from this class inside any other object.

Our fields are marked private, which means they’re not accessible from our object directly, but we provide access to them through getters and setters.

The type and model fields do not have getters and setters, because they hold internal data of our objects. We can define them only through the constructor during initialization.

Furthermore, the color can be accessed and changed, whereas speed can only be accessed, but not changed. We enforced speed adjustments through specialized public methods increaseSpeed() and decreaseSpeed().
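
The decreaseSpeed() method isn't shown above; one possible implementation, as a sketch (clamping the speed at zero is our own assumption), could look like this:

```java
public class SpeedControl {

    private int speed;

    public int getSpeed() {
        return speed;
    }

    public int increaseSpeed(int increment) {
        this.speed = this.speed + increment;
        return this.speed;
    }

    // sketch: never let the speed drop below zero
    public int decreaseSpeed(int decrement) {
        this.speed = Math.max(0, this.speed - decrement);
        return this.speed;
    }
}
```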

In other words, we use access control to encapsulate the state of the object.

5. Conclusion

In this article, we went through two basic elements of the Java language, classes, and objects, and showed how and why they are used. We also introduced the basics of access control and demonstrated its usage.

To learn other concepts of Java language, we suggest reading about inheritance, the super keyword, and abstract classes as a next step.

The complete source code for the example is available over on GitHub.

A Solid Guide to SOLID Principles

1. Introduction

In this tutorial, we’ll be discussing the SOLID principles of Object-Oriented Design.

First, we’ll start by exploring the reasons they came about and why we should consider them when designing software. Then, we’ll outline each principle alongside some example code to emphasize the point.

2. The Reason for SOLID Principles

The SOLID principles were first conceptualized by Robert C. Martin in his 2000 paper, Design Principles and Design Patterns. These concepts were later built upon by Michael Feathers, who introduced us to the SOLID acronym. And in the last 20 years, these 5 principles have revolutionized the world of object-oriented programming, changing the way that we write software.

So, what is SOLID and how does it help us write better code? Simply put, Martin’s and Feathers’ design principles encourage us to create more maintainable, understandable, and flexible software. Consequently, as our applications grow in size, we can reduce their complexity and save ourselves a lot of headaches further down the road!

The following 5 concepts make up our SOLID principles:

  1. Single Responsibility
  2. Open/Closed
  3. Liskov Substitution
  4. Interface Segregation
  5. Dependency Inversion

While some of these words may sound daunting, they can be easily understood with some simple code examples. In the following sections, we’ll take a deep dive into what each of these principles means, along with a quick Java example to illustrate each one.

3. Single Responsibility

Let’s kick things off with the single responsibility principle. As we might expect, this principle states that a class should only have one responsibility. Furthermore, it should only have one reason to change.

How does this principle help us to build better software? Let’s see a few of its benefits:

  1. Testing – A class with one responsibility will have far fewer test cases
  2. Lower coupling – Less functionality in a single class will have fewer dependencies
  3. Organization – Smaller, well-organized classes are easier to search than monolithic ones

Take, for example, a class to represent a simple book:

public class Book {

    private String name;
    private String author;
    private String text;

    //constructor, getters and setters
}

In this code, we store the name, author, and text associated with an instance of a Book.

Let’s now add a couple of methods to query the text:

public class Book {

    private String name;
    private String author;
    private String text;

    //constructor, getters and setters

    // methods that directly relate to the book properties
    public String replaceWordInText(String word, String replacementWord){
        return text.replaceAll(word, replacementWord);
    }

    public boolean isWordInText(String word){
        return text.contains(word);
    }
}

Now, our Book class works well, and we can store as many books as we like in our application. But, what good is storing the information if we can’t output the text to our console and read it?

Let’s throw caution to the wind and add a print method:

public class Book {
    //...

    void printTextToConsole(){
        // our code for formatting and printing the text
    }
}

This code does, however, violate the single responsibility principle we outlined earlier. To fix our mess, we should implement a separate class that is concerned only with printing our texts:

public class BookPrinter {

    // methods for outputting text
    void printTextToConsole(String text){
        //our code for formatting and printing the text
    }

    void printTextToAnotherMedium(String text){
        // code for writing to any other location..
    }
}

Awesome. Not only have we developed a class that relieves the Book of its printing duties, but we can also leverage our BookPrinter class to send our text to other media.

Whether it’s email, logging, or anything else, we have a separate class dedicated to this one concern.

4. Open for Extension, Closed for Modification

Now, time for the ‘O’ – more formally known as the open-closed principle. Simply put, classes should be open for extension, but closed for modification. In doing so, we stop ourselves from modifying existing code and causing potential new bugs in an otherwise happy application.

Of course, the one exception to the rule is when fixing bugs in existing code.

Let’s explore the concept further with a quick code example. As part of a new project, imagine we’ve implemented a Guitar class.

It’s fully fledged and even has a volume knob:

public class Guitar {

    private String make;
    private String model;
    private int volume;

    //Constructors, getters & setters
}

We launch the application, and everyone loves it. However, after a few months, we decide the Guitar is a little bit boring and could do with an awesome flame pattern to make it look a bit more ‘rock and roll’.

At this point, it might be tempting to just open up the Guitar class and add a flame pattern – but who knows what errors that might introduce into our application.

Instead, let’s stick to the open-closed principle and simply extend our Guitar class:

public class SuperCoolGuitarWithFlames extends Guitar {

    private String flameColor;

    //constructor, getters + setters
}

By extending the Guitar class, we can be sure that our existing application won’t be affected.

5. Liskov Substitution

Next up on our list is Liskov substitution, which is arguably the most complex of the 5 principles. Simply put, if class A is a subtype of class B, then we should be able to replace B with A without disrupting the behavior of our program.

Let’s just jump straight to the code to help wrap our heads around this concept:

public interface Car {

    void turnOnEngine();
    void accelerate();
}

Above, we define a simple Car interface with a couple of methods that all cars should be able to fulfill – turning on the engine, and accelerating forward.

Let’s implement our interface and provide some code for the methods:

public class MotorCar implements Car {

    private Engine engine;

    //Constructors, getters + setters

    public void turnOnEngine() {
        //turn on the engine!
        engine.on();
    }

    public void accelerate() {
        //move forward!
        engine.powerOn(1000);
    }
}

As our code describes, we have an engine that we can turn on, and we can increase the power. But wait, it’s 2019, and Elon Musk has been a busy man.

We are now living in the era of electric cars:

public class ElectricCar implements Car {

    public void turnOnEngine() {
        throw new AssertionError("I don't have an engine!");
    }

    public void accelerate() {
        //this acceleration is crazy!
    }
}

By throwing a car without an engine into the mix, we are inherently changing the behavior of our program. This is a blatant violation of Liskov substitution and is a bit harder to fix than our previous 2 principles.

One possible solution would be to rework our model into interfaces that take into account the engine-less state of our Car.
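One sketch of such a rework (the interface names here are illustrative, not from the original article) moves the engine-specific method into its own interface:

```java
// Illustrative sketch: only cars that actually have engines expose turnOnEngine()
interface Car {
    void accelerate();
}

interface EnginePoweredCar extends Car {
    void turnOnEngine();
}

class MotorCar implements EnginePoweredCar {
    public void turnOnEngine() {
        // start the combustion engine
    }

    public void accelerate() {
        // move forward
    }
}

class ElectricCar implements Car {
    public void accelerate() {
        // this acceleration is crazy!
    }
}
```

Code written against Car can now safely accept an ElectricCar, while only callers that work with EnginePoweredCar may start an engine.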

6. Interface Segregation

The ‘I’ in SOLID stands for interface segregation, and it simply means that larger interfaces should be split into smaller ones. By doing so, we can ensure that implementing classes only need to be concerned about the methods that are of interest to them.

For this example, we’re going to try our hands as zookeepers. And more specifically, we’ll be working in the bear enclosure.

Let’s start with an interface that outlines our roles as a bear keeper:

public interface BearKeeper {
    void washTheBear();
    void feedTheBear();
    void petTheBear();
}

As avid zookeepers, we’re more than happy to wash and feed our beloved bears. However, we’re all too aware of the dangers of petting them. Unfortunately, our interface is rather large, and we have no choice but to implement the code to pet the bear.

Let’s fix this by splitting our large interface into 3 separate ones:

public interface BearCleaner {
    void washTheBear();
}

public interface BearFeeder {
    void feedTheBear();
}

public interface BearPetter {
    void petTheBear();
}

Now, thanks to interface segregation, we’re free to implement only the methods that matter to us:

public class BearCarer implements BearCleaner, BearFeeder {

    public void washTheBear() {
        //I think we missed a spot...
    }

    public void feedTheBear() {
        //Tuna Tuesdays...
    }
}

And finally, we can leave the dangerous stuff to the crazy people:

public class CrazyPerson implements BearPetter {

    public void petTheBear() {
        //Good luck with that!
    }
}

Going further, we could even split our BookPrinter class from our example earlier to use interface segregation in the same way. By implementing a Printer interface with a single print method, we could instantiate separate ConsoleBookPrinter and OtherMediaBookPrinter classes.
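That split might be sketched as follows (the print method signature is an assumption):

```java
// Sketch of the BookPrinter split using interface segregation
interface Printer {
    void print(String text);
}

class ConsoleBookPrinter implements Printer {
    public void print(String text) {
        System.out.println(text);
    }
}

class OtherMediaBookPrinter implements Printer {
    public void print(String text) {
        // send to email, logging, or any other medium
    }
}
```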

7. Dependency Inversion

The principle of dependency inversion states that high-level modules should not depend on low-level modules; both should depend on abstractions. Dependency injection is the most common technique for honoring it, and when someone mentions those words, a few frameworks probably come to mind – Google’s Guice, or perhaps Spring. However, the truth is that we don’t need a complex framework to understand the principle.

Dependency injection is simply the technique by which the dependencies of a class are injected upon creation, rather than hard-wired with the hazardous new keyword.

To demonstrate this, let’s go old-school and bring to life a Windows 98 computer with code:

public class Windows98Machine {}

But what good is a computer without a monitor and keyboard? Let’s add one of each to our constructor so that every Windows98Machine we instantiate comes pre-packed with a Monitor and a Keyboard:

public class Windows98Machine {

    private final Keyboard keyboard;
    private final Monitor monitor;

    public Windows98Machine() {
        monitor = new Monitor();
        keyboard = new Keyboard();
    }

}

This code will work, and we’ll be able to use the Keyboard and Monitor freely within our Windows98Machine class. Problem solved? Not quite. By declaring the Keyboard and Monitor with the new keyword, we’ve tightly coupled these 3 classes together.

Not only does this make our Windows98Machine hard to test, but we’ve also lost the ability to switch out our Keyboard class with a subclass should the need arise. And we’re stuck with our Monitor class, too.

Let’s see how the same example looks when we apply some simple dependency injection:

public class Windows98Machine {

    private final Keyboard keyboard;
    private final Monitor monitor;

    public Windows98Machine(Keyboard keyboard, Monitor monitor) {
        this.keyboard = keyboard;
        this.monitor = monitor;
    }
}

Excellent! We’ve decoupled the dependencies and are free to test our Windows98Machine with whichever testing framework we choose.
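For instance, with no framework at all, a test can hand the machine stub dependencies. A minimal sketch, where Keyboard, Monitor, and StubKeyboard are simplified placeholders:

```java
// Minimal sketch: Keyboard, Monitor, and the stub are simplified placeholders
class Keyboard { }
class Monitor { }
class StubKeyboard extends Keyboard { }

class Windows98Machine {

    private final Keyboard keyboard;
    private final Monitor monitor;

    Windows98Machine(Keyboard keyboard, Monitor monitor) {
        this.keyboard = keyboard;
        this.monitor = monitor;
    }

    Keyboard getKeyboard() {
        return keyboard;
    }
}
```

A test can now construct new Windows98Machine(new StubKeyboard(), new Monitor()) and verify behavior in isolation, something the new-keyword version made impossible.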

8. Conclusion

In this tutorial, we’ve taken a deep dive into the SOLID principles of object-oriented design.

We started with a quick bit of SOLID history and the reasons these principles exist.

Letter by letter, we’ve broken down the meaning of each principle with a quick code example that violates it. We then saw how to fix our code and make it adhere to the SOLID principles.

As always, the code is available over on GitHub.

Method References in Java


1. Overview

One of the most welcome changes in Java 8 was the introduction of lambda expressions, as these allow us to forego anonymous classes, greatly reducing boilerplate code and improving readability.

Method references are a special type of lambda expressions. They’re often used to create simple lambda expressions by referencing existing methods.

There are four kinds of method references:

  • Static methods
  • Instance methods of particular objects
  • Instance methods of an arbitrary object of a particular type
  • Constructor

In this tutorial, we’ll explore method references in Java.

2. Reference to a Static Method

We’ll begin with a very simple example, printing a list of Strings:

List<String> messages = Arrays.asList("Hello", "Baeldung", "readers!");

We can achieve this by leveraging a simple lambda expression calling System.out.println directly:

messages.forEach(word -> System.out.println(word));

Or, we can use a method reference to simply refer to the println method:

messages.forEach(System.out::println);

Notice that method references always utilize the :: operator.
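Strictly speaking, System.out::println binds the particular object System.out, so it technically belongs to the next section’s category. A reference to a genuinely static method goes through the class name instead, for example Integer::parseInt:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StaticMethodReference {

    public static void main(String[] args) {
        List<String> numbers = Arrays.asList("1", "2", "3");

        // Integer.parseInt is a static method, referenced via its class name
        List<Integer> parsed = numbers.stream()
          .map(Integer::parseInt)
          .collect(Collectors.toList());

        System.out.println(parsed); // [1, 2, 3]
    }
}
```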

3. Reference to an Instance Method of a Particular Object

To demonstrate this type of method reference, let’s consider two classes:

public class Bicycle {

    private String brand;
    private Integer frameSize;
    // standard constructor, getters and setters
}

public class BicycleComparator implements Comparator<Bicycle> {

    @Override
    public int compare(Bicycle a, Bicycle b) {
        return a.getFrameSize().compareTo(b.getFrameSize());
    }

}

And, let’s create a BicycleComparator object to compare bicycle frame sizes:

BicycleComparator bikeFrameSizeComparator = new BicycleComparator();

We could use a lambda expression to sort bicycles by frame size, but we’d need to specify two bikes for comparison:

createBicyclesList().stream()
  .sorted((a, b) -> bikeFrameSizeComparator.compare(a, b));

Instead, we can use a method reference to have the compiler handle parameter passing for us:

createBicyclesList().stream()
  .sorted(bikeFrameSizeComparator::compare);

The method reference is much cleaner and more readable, as our intention is clearly shown by the code.

4. Reference to an Instance Method of an Arbitrary Object of a Particular Type

This type of method reference is similar to the previous example, but without having to create a custom object to perform the comparison.

Let’s create an Integer list that we want to sort:

List<Integer> numbers = Arrays.asList(5, 3, 50, 24, 40, 2, 9, 18);

If we use a classic lambda expression, both parameters need to be explicitly passed, while using a method reference is much more straightforward:

numbers.stream()
  .sorted((a, b) -> a.compareTo(b));
numbers.stream()
  .sorted(Integer::compareTo);

Even though it’s still a one-liner, the method reference is much easier to read and understand.

5. Reference to a Constructor

We can reference a constructor in the same way that we referenced a static method in our first example. The only difference is that we’ll use the new keyword.

Let’s create a Bicycle array out of a String list with different brands:

List<String> bikeBrands = Arrays.asList("Giant", "Scott", "Trek", "GT");

First, we’ll add a new constructor to our Bicycle class:

public Bicycle(String brand) {
    this.brand = brand;
    this.frameSize = 0;
}

Next, we’ll use our new constructor from a method reference and make a Bicycle array from the original String list:

bikeBrands.stream()
  .map(Bicycle::new)
  .toArray(Bicycle[]::new);

Notice how we called both Bicycle and Array constructors using a method reference, giving our code a much more concise and clear appearance.

6. Additional Examples and Limitations

As we’ve seen so far, method references are a great way to make our code and intentions very clear and readable. However, we can’t use them to replace all kinds of lambda expressions since they have some limitations.

Their main limitation is a result of what’s also their biggest strength: the output from the previous expression needs to match the input parameters of the referenced method signature.

Let’s see an example of this limitation:

createBicyclesList().forEach(b -> System.out.printf(
  "Bike brand is '%s' and frame size is '%d'%n",
  b.getBrand(),
  b.getFrameSize()));

This simple case can’t be expressed with a method reference, because the printf method requires 3 parameters in our case, and using createBicyclesList().forEach() would only allow the method reference to infer one parameter (the Bicycle object).

Finally, let’s explore how to create a no-operation function that can be referenced from a lambda expression.

In this case, we’ll want to use a lambda expression without using its parameters.

First, let’s create the doNothingAtAll method:

private static <T> void doNothingAtAll(Object... o) {
}

As it is a varargs method, it will work in any lambda expression, no matter the referenced object or number of parameters inferred.

Now, let’s see it in action:

createBicyclesList()
  .forEach((o) -> MethodReferenceExamples.doNothingAtAll(o));

7. Conclusion

In this quick tutorial, we learned what method references are in Java and how to use them to replace lambda expressions, thereby improving readability and clarifying the programmer’s intent.

All code presented in this article is available over on GitHub.

Accessing Spring MVC Model Objects in JavaScript


1. Overview

In this tutorial, we’re going to show how to access Spring MVC objects in JSP views that contain JavaScript code. We’ll use Spring Boot and the JSP template engine in our examples, but the idea works for other template engines as well.

We’re going to describe two cases: when JavaScript code is embedded or internal to the web page generated by the engine, and when it is external to the page – for example, in a separate JavaScript file.

2. Setup

Let’s assume that we’ve already configured a Spring Boot web application that uses the JSP template engine. Otherwise, you might find these tutorials useful to start.

Furthermore, let’s suppose that our application has a controller corresponding to an endpoint /index that renders a view from the JSP template named index.jsp. This template might include an embedded or an external JavaScript code, say script.js.

Our goal is to be able to access Spring MVC parameters from either embedded or external JavaScript (JS) code.

3. Access the Parameters

First of all, we need to create the model variables that we want to use from the JS code.

In Spring MVC, there are various ways of doing this. Let’s use the ModelAndView approach:

@RequestMapping("/index")
public ModelAndView index(Map<String, Object> model) {
    model.put("number", 1234);
    model.put("message", "Hello from Spring MVC!");
    return new ModelAndView("/index");
}

We can find other possibilities in our tutorial on Model, ModelMap, and ModelView in Spring MVC.

4. Embedded JS Code

Embedded JS code is nothing but a part of the index.jsp file that is located inside the <script> element. We can pass Spring MVC variables there quite straightforwardly:

<script>
    var number = <c:out value="${number}"></c:out>;
    var message = "<c:out value="${message}"></c:out>";
</script>

As we explained in our Guide to JavaServer Pages, the JSP template engine replaces every JSTL string by a value that is available in the scope of the current execution. This means that the template engine transforms the code mentioned above into HTML:

<script>
    var number = 1234;
    var message = "Hello from Spring MVC!";
</script>

5. External JS Code

Let’s say that our external JS code is included in the index.jsp file using the same <script> tag, in which we specify the src attribute:

<script src="/js/script.js"></script>

Now, if we want to use the Spring MVC parameters from script.js, we should:

  1. define JS variables in embedded JS code as we did in the previous section
  2. access these variables from the script.js file

Note that the external JS code should be invoked after the initialization of the variables of the embedded JS code.

This can be achieved in two ways: by specifying the order of a JS code execution or by using an asynchronous JS code execution.

5.1. Specify the Order of Execution

We can do this by declaring the external JS code after the embedded one in index.jsp:

<script>
    var number =  <c:out value="${number}"></c:out>;
    var message = "<c:out value="${message}"></c:out>";
</script>
<script src="/js/script.js"></script>

5.2. Asynchronous Code Execution

In this case, the order in which we declare the external and embedded JS code in the index.jsp is of no importance, but we should place the code from script.js into a typical on-page-loaded wrapper:

window.onload = function() {
    // JS code
};

Despite the simplicity of this code, the most common practice is to use jQuery instead. We include this library as the first <script> element in the index.jsp file:

<!DOCTYPE html>
<html>
    <head>
        <script src="/js/jquery.js"></script>
        ...
    </head>
 ...
</html>

Now, we may place the JS code inside the following jQuery wrapper:

$(function () {
    // JS code
});

With this wrapper, we can guarantee that the JS code is executed only when the whole page content, and hence all other embedded JS code, is completely loaded.

6. A Bit of Explanation

In Spring MVC, the JSP template engine parses only the template file (index.jsp in our case) and converts it into an HTML file. This file, in turn, might include references to other resources that are out of the scope of the template engine. It’s the user’s browser that parses those resources and renders the HTML view.

Therefore, those resources are not parsed by the template engine, and we may inject variables defined in the controller only into the embedded JS code that then becomes available for the external JS code.

7. Conclusion

In this tutorial, we’ve learned how to access Spring MVC parameters inside JavaScript code.

As always, the complete code examples are in our GitHub repository.


Announcing “Learn Spring”


The New Course

“How do I get started with Spring?” is, by far, the most common question I get.

Probably next to “What does Baeldung actually mean?” – there’s a thread on Quora if you’re curious 🙂

The site is a good place to start, but it’s also the slow way to learn.

A guided project, carefully explained through video, with full support in case you get stuck – is just so much quicker.

 

So – this course has been a long time coming.

This is the official announcement – the first course I’m releasing in about 3 years now:

“Learn Spring”!

The Course Material

Let’s start with the high-level outline of the new material:

  • Module 1 – Getting Started With Spring 5
  • Module 2 – Dependency Injection and the Spring Context
  • Module 3 – Project Configuration
  • Module 4 – Deep Dive into Spring Boot 2
  • Module 5 – Persistence and Data Access
  • Module 6 – Web Basics and Spring MVC
  • Module 7 – Templating Engines and Spring MVC
  • Module 8 – Building a REST API
  • Module 9 – Advanced Features in Spring

You can find the full lesson plan here, on the new course page.

The Course Pricing

Now, in terms of pricing, as I always do with my courses – the $147 announcement price is the lowest price the course is ever going to be at.

As I’m sure you’re aware, I never do discounts and I always increase the price of any course over time, as I add new material.

Like all of my courses, Learn Spring is structured into three packages:

  • the Master Class
  • the Certification Class and
  • the Coaching Class

The Course Timeline

The plan for this course is simple – I’ll release it fully on the 15th of May.

Of course, until that point, I’ll release new lessons as they’re ready to go live, as I always do with new course material.

Guice vs Spring – Dependency Injection


1. Introduction

Google Guice and Spring are two robust frameworks used for dependency injection. Both frameworks cover all the notions of dependency injection, but each one has its own way of implementing them.

In this tutorial, we’ll discuss how the Guice and Spring frameworks differ in configuration and implementation.

2. Maven Dependencies

Let’s start by adding the Guice and Spring Maven dependencies into our pom.xml file:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context</artifactId>
    <version>5.1.4.RELEASE</version>
</dependency>

<dependency>
    <groupId>com.google.inject</groupId>
    <artifactId>guice</artifactId>
    <version>4.2.2</version>
</dependency>

We can always access the latest spring-context or guice dependencies from Maven Central.

3. Dependency Injection Configuration

Dependency injection is a programming technique that we use to make our classes independent of their dependencies.

In this section, we’ll refer to several core features that differ between Spring and Guice in their ways of configuring dependency injection.

3.1. Spring Wiring

Spring declares the dependency injection configurations in a special configuration class. This class must be annotated by the @Configuration annotation. The Spring container uses this class as a source of bean definitions.

Classes managed by Spring are called Spring beans.

Spring uses the @Autowired annotation to wire the dependencies automatically. @Autowired is part of Spring’s built-in core annotations. We can use @Autowired on member variables, setter methods, and constructors.

Spring also supports @Inject. @Inject is part of JSR-330 (javax.inject), which defines a standard set of annotations for dependency injection.

Let’s say that we want to automatically wire a dependency to a member variable. We can simply annotate it with @Autowired:

@Component
public class UserService {
    @Autowired
    private AccountService accountService;
}
@Component
public class AccountServiceImpl implements AccountService {
}

Secondly, let’s create a configuration class to use as the source of beans while loading our application context:

@Configuration
@ComponentScan("com.baeldung.di.spring")
public class SpringMainConfig {
}

Note that we’ve also annotated UserService and AccountServiceImpl with @Component to register them as beans. It’s the @ComponentScan annotation that will tell Spring where to search for annotated components.

Even though we’ve annotated AccountServiceImpl, Spring can map it to the AccountService since it implements AccountService.

Then, we need to define an application context to access the beans. Let’s just note that we’ll refer to this context in all of our Spring unit tests:

ApplicationContext context = new AnnotationConfigApplicationContext(SpringMainConfig.class);

Now at runtime, we can retrieve the AccountService instance from our UserService bean:

UserService userService = context.getBean(UserService.class);
assertNotNull(userService.getAccountService());

3.2. Guice Binding

Guice manages its dependencies in a special class called a module. A Guice module has to extend the AbstractModule class and override its configure() method.

Guice uses binding as the equivalent to wiring in Spring. Simply put, bindings allow us to define how dependencies are going to be injected into a class. Guice bindings are declared in our module’s configure() method.

Instead of @Autowired, Guice uses the @Inject annotation to inject the dependencies. 

Let’s create an equivalent Guice example:

public class GuiceUserService {
    @Inject
    private AccountService accountService;
}

Secondly, we’ll create the module class which is a source of our binding definitions:

public class GuiceModule extends AbstractModule {
    @Override
    protected void configure() {
        bind(AccountService.class).to(AccountServiceImpl.class);
    }
}

Normally, we expect Guice to instantiate each dependency object from their default constructors if there isn’t any binding defined explicitly in the configure() method. But since interfaces can’t be instantiated directly, we need to define bindings to tell Guice which interface will be paired with which implementation.

Then, we need to define an Injector using GuiceModule to get instances of our classes. Let’s just note that all of our Guice tests will use this Injector:

Injector injector = Guice.createInjector(new GuiceModule());

Finally, at runtime we retrieve a GuiceUserService instance with a non-null accountService dependency:

GuiceUserService guiceUserService = injector.getInstance(GuiceUserService.class);
assertNotNull(guiceUserService.getAccountService());

3.3. Spring’s @Bean Annotation

Spring also provides a method level annotation @Bean to register beans as an alternative to its class level annotations like @Component. The return value of a @Bean annotated method is registered as a bean in the container.

Let’s say that we have an instance of BookServiceImpl that we want to make available for injection. We could use @Bean to register our instance:

@Bean 
public BookService bookServiceGenerator() {
    return new BookServiceImpl();
}

And now we can get a BookService bean:

BookService bookService = context.getBean(BookService.class);
assertNotNull(bookService);

3.4. Guice’s @Provides Annotation

As an equivalent of Spring’s @Bean annotation, Guice has a built-in annotation @Provides to do the same job. Like @Bean, @Provides is only applied to the methods.

Now let’s implement the previous Spring bean example with Guice. All we need to do is to add the following code into our module class:

@Provides
public BookService bookServiceGenerator() {
    return new BookServiceImpl();
}

And now, we can retrieve an instance of BookService:

BookService bookService = injector.getInstance(BookService.class);
assertNotNull(bookService);

3.5. Classpath Component Scanning in Spring

Spring provides the @ComponentScan annotation, which detects and instantiates annotated components automatically by scanning pre-defined packages.

The @ComponentScan annotation tells Spring which packages will be scanned for annotated components, and it is used together with the @Configuration annotation.

3.6. Classpath Component Scanning in Guice

Unlike Spring, Guice doesn’t have such a component scanning feature. But it’s not difficult to simulate it. There are some plugins like Governator that can bring this feature into Guice.

3.7. Object Recognition in Spring

Spring recognizes objects by their names. Spring holds the objects in a structure which is roughly like a Map<String, Object>. This means that we cannot have two objects with the same name.

Bean collision due to having multiple beans of the same name is one common problem Spring developers hit. For example, let’s consider the following bean declarations:

@Configuration
@Import({SpringBeansConfig.class})
@ComponentScan("com.baeldung.di.spring")
public class SpringMainConfig {
    @Bean
    public BookService bookServiceGenerator() {
        return new BookServiceImpl();
    }
}
@Configuration
public class SpringBeansConfig {
    @Bean
    public AudioBookService bookServiceGenerator() {
        return new AudioBookServiceImpl();
    }
}

As we remember, we already had a bean definition for BookService in SpringMainConfig class.

To create a bean collision here, we need to declare the bean methods with the same name. But we are not allowed to have two different methods with the same name in one class. For that reason, we declared the AudioBookService bean in another configuration class.

Now, let’s refer these beans in a unit test:

BookService bookService = context.getBean(BookService.class);
assertNotNull(bookService); 
AudioBookService audioBookService = context.getBean(AudioBookService.class);
assertNotNull(audioBookService);

The unit test will fail with:

org.springframework.beans.factory.NoSuchBeanDefinitionException:
No qualifying bean of type 'AudioBookService' available

First, Spring registered the AudioBookService bean under the name “bookServiceGenerator” in its bean map. Then, it had to override that entry with the bean definition for BookService, since a map cannot hold two values under the same key.

Lastly, we can overcome this issue by making bean method names unique or setting the name attribute to a unique name for each @Bean.

3.8. Object Recognition in Guice

Unlike Spring, Guice basically has a Map<Class<?>, Object> structure. This means that we cannot have multiple bindings to the same type without using additional metadata.

Guice provides binding annotations to enable defining multiple bindings for the same type. Let’s see what happens if we have two different bindings for the same type in Guice.

public class Person {
}

Now, let’s declare two different bindings for the Person class:

bind(Person.class).toConstructor(Person.class.getConstructor());
bind(Person.class).toProvider(new Provider<Person>() {
    public Person get() {
        Person p = new Person();
        return p;
    }
});

And here is how we can get an instance of Person class:

Person person = injector.getInstance(Person.class);
assertNotNull(person);

This will fail with:

com.google.inject.CreationException: A binding to Person was already configured at GuiceModule.configure()

We can overcome this issue by just simply discarding one of the bindings for the Person class.
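Alternatively, if we really need both bindings, a binding annotation such as @Named lets them coexist. Here is a self-contained sketch (the class and binding names are illustrative, not from the original example):

```java
import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Injector;
import com.google.inject.Key;
import com.google.inject.Provider;
import com.google.inject.name.Names;

public class NamedBindingExample {

    static class Person {
        final String source;

        Person(String source) {
            this.source = source;
        }
    }

    static class PersonModule extends AbstractModule {
        @Override
        protected void configure() {
            // the binding annotations distinguish the two bindings for Person
            bind(Person.class)
              .annotatedWith(Names.named("plain"))
              .toInstance(new Person("plain"));
            bind(Person.class)
              .annotatedWith(Names.named("provided"))
              .toProvider(new Provider<Person>() {
                  public Person get() {
                      return new Person("provided");
                  }
              });
        }
    }

    public static void main(String[] args) {
        Injector injector = Guice.createInjector(new PersonModule());
        Person person = injector.getInstance(Key.get(Person.class, Names.named("provided")));
        System.out.println(person.source); // provided
    }
}
```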

3.9. Optional Dependencies in Spring

Optional dependencies are dependencies which are not required when autowiring or injecting beans.

For a field that has been annotated with @Autowired, if a bean with matching data type is not found in the context, Spring will throw NoSuchBeanDefinitionException.

However, sometimes we may want to skip autowiring for some dependencies and leave them as null without throwing an exception.

Let’s take a look at the following example:

@Component
public class BookServiceImpl implements BookService {
    @Autowired
    private AuthorService authorService;
}
public class AuthorServiceImpl implements AuthorService {
}

As we can see from the code above, the AuthorServiceImpl class isn’t annotated as a component. And we’ll assume that there isn’t a bean declaration method for it in our configuration files.

Now, let’s run the following test to see what happens:

BookService bookService = context.getBean(BookService.class);
assertNotNull(bookService);

Not surprisingly, it will fail with:

org.springframework.beans.factory.NoSuchBeanDefinitionException: 
No qualifying bean of type 'AuthorService' available

We can make the authorService dependency optional by using Java 8’s Optional type to avoid this exception:

@Component
public class BookServiceImpl implements BookService {
    @Autowired
    private Optional<AuthorService> authorService;
}

Now, our authorService dependency is more like a container that may or may not contain a bean of the AuthorService type. Even though there isn’t a bean for AuthorService in our application context, our authorService field will still be a non-null, empty container. Hence, Spring won’t have any reason to throw NoSuchBeanDefinitionException.

As an alternative to Optional, we can use @Autowired‘s required attribute, which is true by default. Setting required to false makes the dependency optional for autowiring.

Hence, Spring will skip injecting the dependency if a bean for its data type is not available in the context. The dependency will remain set to null:

@Component
public class BookServiceImpl implements BookService {
    @Autowired(required = false)
    private AuthorService authorService;
}

Sometimes marking dependencies optional can be useful since not all the dependencies are always required.

With this in mind, we should remember to use extra caution and null checks during development to avoid a NullPointerException caused by null dependencies.

3.10. Optional Dependencies in Guice

Just like Spring, Guice can also use Java 8’s Optional type to make a dependency optional.

Let’s say that we want to create a class with a Foo dependency:

public class FooProcessor {
    @Inject
    private Foo foo;
}

Now, let’s define a binding for the Foo class:

bind(Foo.class).toProvider(new Provider<Foo>() {
    public Foo get() {
        return null;
    }
});

Now let’s try to get an instance of FooProcessor in a unit test:

FooProcessor fooProcessor = injector.getInstance(FooProcessor.class);
assertNotNull(fooProcessor);

Our unit test will fail with:

com.google.inject.ProvisionException:
null returned by binding at GuiceModule.configure(..)
but the 1st parameter of FooProcessor.[...] is not @Nullable

To avoid this exception, we can make the foo dependency optional with a simple update:

public class FooProcessor {
    @Inject
    private Optional<Foo> foo;
}

@Inject doesn’t have a required attribute to mark a dependency as optional. An alternative approach for making a dependency optional in Guice is the @Nullable annotation.

Guice tolerates injecting null values when @Nullable is used, as expressed in the exception message above. Let’s apply the @Nullable annotation:

public class FooProcessor {
    @Inject
    @Nullable
    private Foo foo;
}

4. Implementations of Dependency Injection Types

In this section, we’ll take a look at the dependency injection types and compare the implementations provided by Spring and Guice by going through several examples.

4.1. Constructor Injection in Spring

In constructor-based dependency injection, we pass the required dependencies into a class at the time of instantiation.

Let’s say that we want to have a Spring component and we want to add dependencies through its constructor. We can annotate that constructor with @Autowired:

@Component
public class SpringPersonService {

    private PersonDao personDao;

    @Autowired
    public SpringPersonService(PersonDao personDao) {
        this.personDao = personDao;
    }
}

Let’s retrieve a SpringPersonService bean in a test:

SpringPersonService personService = context.getBean(SpringPersonService.class);
assertNotNull(personService);

4.2. Constructor Injection in Guice

We can rearrange the previous example to implement constructor injection in Guice. Note that Guice uses @Inject instead of @Autowired:

public class GuicePersonService {

    private PersonDao personDao;

    @Inject
    public GuicePersonService(PersonDao personDao) {
        this.personDao = personDao;
    }
}

Here is how we can get an instance of GuicePersonService class from the injector in a test:

GuicePersonService personService = injector.getInstance(GuicePersonService.class);
assertNotNull(personService);

4.3. Setter or Method Injection in Spring

In setter-based dependency injection, the container calls the setter methods of the class after invoking the constructor to instantiate the component.

Let’s say that we want Spring to autowire a dependency using a setter method. We can annotate that setter method with @Autowired:

@Component
public class SpringPersonService {

    private PersonDao personDao;

    @Autowired
    public void setPersonDao(PersonDao personDao) {
        this.personDao = personDao;
    }
}

Whenever we need an instance of SpringPersonService class, Spring will autowire the personDao field by invoking the setPersonDao() method.

We can get a SpringPersonService bean and access its personDao field in a test as below:

SpringPersonService personService = context.getBean(SpringPersonService.class);
assertNotNull(personService);
assertNotNull(personService.getPersonDao());

4.4. Setter or Method Injection in Guice

We’ll simply change our example a bit to achieve setter injection in Guice:

public class GuicePersonService {

    private PersonDao personDao;

    @Inject
    public void setPersonDao(PersonDao personDao) {
        this.personDao = personDao;
    }
}

Every time we get an instance of GuicePersonService class from the injector, we’ll have the personDao field passed to the setter method above.

Here is how we can create an instance of GuicePersonService class and access its personDao field in a test:

GuicePersonService personService = injector.getInstance(GuicePersonService.class);
assertNotNull(personService);
assertNotNull(personService.getPersonDao());

4.5. Field Injection in Spring

We already saw how to apply field injection both for Spring and Guice in all of our examples. So, it’s not a new concept for us. But let’s just list it again for completeness.

In the case of field-based dependency injection, we inject the dependencies by marking them with @Autowired or @Inject.

4.6. Field Injection in Guice

As we mentioned in the section above, we already covered the field injection for Guice using @Inject.
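For completeness, here are the field-injected counterparts of our earlier person services, mirroring the examples above:

```java
// Spring: field injection with @Autowired
@Component
public class SpringPersonService {

    @Autowired
    private PersonDao personDao;
}

// Guice: field injection with @Inject
public class GuicePersonService {

    @Inject
    private PersonDao personDao;
}
```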

5. Conclusion

In this tutorial, we explored several core differences between the Guice and Spring frameworks in their ways of implementing dependency injection. As always, the Guice and Spring code samples are available over on GitHub.

Differences Between Oracle JDK and OpenJDK


1. Introduction

In this article, we’ll explore the differences between Oracle Java Development Kit and OpenJDK. We’ll first take a quick look at each of them and then make a comparison. After that, we’ll see a list of other JDK implementations.

2. Oracle JDK and Java SE History

JDK (Java Development Kit) is a software development environment used in Java platform programming. It contains a complete Java Runtime Environment, a so-called private runtime. The name came from the fact that it contains more tools than the standalone JRE as well as the other components needed for developing Java applications.

Oracle strongly recommends using the term JDK to refer to the Java SE (Standard Edition) Development Kit (there are also Enterprise Edition and Micro Edition platforms).

Let’s take a look at the Java SE history:

  • JDK Beta – 1995
  • JDK 1.0 – January 1996
  • JDK 1.1 – February 1997
  • J2SE 1.2 – December 1998
  • J2SE 1.3 – May 2000
  • J2SE 1.4 – February 2002
  • J2SE 5.0 – September 2004
  • Java SE 6 – December 2006
  • Java SE 7 – July 2011
  • Java SE 8 (LTS) – March 2014
  • Java SE 9 – September 2017
  • Java SE 10 (18.3) – March 2018
  • Java SE 11 (18.9 LTS) – September 2018
  • Java SE 12 (19.3) – March 2019

Note: the versions in italics are no longer supported.

We can see that the major releases of Java SE came approximately every two years until Java SE 7. It took five years to move from Java SE 6, and another three to reach Java SE 8 afterward.

Since Java SE 10, we can expect new releases every six months. However, not all releases will be the Long-Term-Support (LTS) releases. As a result of Oracle’s release plan, the LTS product releases will happen only every three years.

Java SE 11 is the latest LTS version, and Java SE 8 will be receiving free public updates until December 2020 for non-commercial usage.

This development kit got its current name after Oracle bought Sun Microsystems in 2010. Before that, the name was SUN JDK, and it was the official implementation of the Java programming language.

3. OpenJDK

OpenJDK is a free and open-source implementation of the Java SE Platform Edition. It was initially released in 2007 as the result of the development that Sun Microsystems started in 2006.

Certainly, we should emphasize that OpenJDK has been an official reference implementation of Java Standard Edition since version SE 7.

Initially, it was based only on JDK 7. But, since Java 10, the open-source reference implementation of the Java SE platform has been the responsibility of the JDK Project. And, just like Oracle, the JDK Project will also deliver new feature releases every six months.

We should note that before this long-running project, there were JDK Release Projects that released one feature and then got discontinued.

Let’s now check out the OpenJDK versions:

  • OpenJDK 6 project – based on JDK 7, but modified to provide an open-source version of Java 6
  • OpenJDK 7 project – 28 July 2011
  • OpenJDK 7u project – this project develops updates to Java Development Kit 7
  • OpenJDK 8 project – 18 March 2014
  • OpenJDK 8u project – this project develops updates to Java Development Kit 8
  • OpenJDK 9 project – 21 September 2017
  • JDK project release 10 – 20 March 2018
  • JDK project release 11 – 25 September 2018
  • JDK project release 12 – Stabilization phase

4. Oracle JDK vs. OpenJDK

In this section, we’ll focus on the key differences between Oracle JDK and OpenJDK.

4.1. Release Schedule

As we mentioned, Oracle will deliver LTS releases every three years, while OpenJDK releases come every six months.

Oracle provides long term support for its releases. On the other hand, OpenJDK supports the changes to a release only until the next version is released.

4.2. Licenses

Oracle JDK was licensed under Oracle Binary Code License Agreement, whereas OpenJDK has the GNU General Public License (GNU GPL) version 2 with a linking exception.

There are some licensing implications when using Oracle’s platform. As Oracle announced, public updates for Oracle Java SE 8 released after January 2019 will not be available for business, commercial, or production use without a commercial license. However, OpenJDK is completely open source and can be used freely.

4.3. Performance

There is no real technical difference between the two since the build process for the Oracle JDK is based on that of OpenJDK.

When it comes to performance, though, Oracle’s build has a reputation for better responsiveness and JVM performance. It puts more focus on stability because of the importance it gives to its enterprise customers.

OpenJDK, in contrast, will deliver releases more often. As a result, we can encounter problems with instability. Based on community feedback, we know some OpenJDK users have encountered performance issues.

4.4. Features

If we compare features and options, we’ll see that the Oracle product has Flight Recorder, Java Mission Control, and Application Class-Data Sharing features, while OpenJDK has the Font Renderer feature.

Also, Oracle has more Garbage Collection options and better renderers, as we can see in another comparison.

4.5. Development and Popularity

Oracle JDK is fully developed by Oracle Corporation, whereas OpenJDK is developed by Oracle, the OpenJDK community, and the Java Community. However, top-notch companies like Red Hat, Azul Systems, IBM, Apple Inc., and SAP AG also take an active part in its development.

As we can see from the link in the previous subsection, when it comes to popularity with the top companies that use Java Development Kits in their tools, such as Android Studio or IntelliJ IDEA, the Oracle JDK is preferred.

On the other hand, major Linux distributions (Fedora, Ubuntu, Red Hat Enterprise Linux) provide OpenJDK as the default Java SE implementation.

5. Changes Since Java 11

As we can see in Oracle’s blog post, there are some important changes starting with Java 11.

First of all, Oracle will replace its historical “BCL” license with a combination of an open-source GNU General Public License v2 with the Classpath Exception (GPLv2+CPE) and a commercial license for using the Oracle JDK as part of an Oracle product or service, or where open-source software is not welcome.

Each license will have different builds, but those will be functionally identical with only some cosmetic and packaging differences.

Also, traditionally “commercial features” such as Flight Recorder, Java Mission Control, and Application Class-Data Sharing, as well as the Z Garbage Collector, are now available in OpenJDK. Therefore, Oracle JDK and OpenJDK builds are essentially identical from Java 11 onward.

Let’s check out the main differences:

  • Oracle’s kit for Java 11 emits a warning when using the -XX:+UnlockCommercialFeatures option, whereas in OpenJDK builds, this option results in an error
  • Oracle JDK offers a configuration to provide usage log data to the “Advanced Management Console” tool
  • Oracle has always required third-party cryptographic providers to be signed by a known certificate, while the cryptography framework in OpenJDK has an open cryptographic interface, which means there is no restriction on which providers can be used
  • Oracle JDK 11 will continue to include installers, branding, and JRE packaging, whereas OpenJDK builds are currently available as zip and tar.gz files
  • The javac --release command behaves differently for the Java 9 and Java 10 targets due to the presence of some additional modules in Oracle’s release
  • The output of the java -version and java -fullversion commands will distinguish Oracle’s builds from OpenJDK builds

6. Other JDK Implementations

Let’s now take a quick look at other active Java Development Kit implementations.

6.1. Free and Open Source

The following implementations, listed in alphabetical order, are open source and free to use:

  • Amazon Corretto
  • Azul Zulu
  • Bck2Brwsr
  • CACAO
  • Codename One
  • DoppioJVM
  • Eclipse OpenJ9
  • GraalVM CE
  • HaikuVM
  • HotSpot
  • Jamiga
  • JamVM
  • Jelatine JVM
  • Jikes RVM (Jikes Research Virtual Machine)
  • JVM.go
  • leJOS
  • Maxine
  • Multi-OS Engine
  • RopeVM
  • uJVM

6.2. Proprietary Implementations

There are also proprietary implementations:

  • Azul Zing JVM
  • CEE-J
  • Excelsior JET
  • GraalVM EE
  • Imsys AB
  • JamaicaVM (aicas)
  • JBlend (Aplix)
  • MicroJvm (IS2T – Industrial Smart Software Technology)
  • OJVM
  • PTC Perc
  • SAP JVM
  • Waratek CloudVM for Java

Along with the active implementations listed above, we can see the list of inactive implementations and a short description of every implementation.

7. Conclusion

In this article, we focused on the two most popular Java Development Kits today.

We first described each of them and then emphasized the most notable differences between those. Then, we paid special attention to the changes and differences since Java 11. Finally, we listed other active implementations that are available today.

Validation in Spring Boot


1. Overview

When it comes to validating user input, Spring Boot provides strong support for this common, yet critical, task out of the box.

Although Spring Boot supports seamless integration with custom validators, the de facto standard for performing validation is Hibernate Validator, the Bean Validation framework’s reference implementation.

In this tutorial, we’ll look at how to validate domain objects in Spring Boot.

2. The Maven Dependencies

In this case, we’ll learn how to validate domain objects in Spring Boot by building a basic REST controller.

The controller will first take a domain object, then it will validate it with Hibernate Validator, and finally, it will persist it into an in-memory H2 database.

The project’s dependencies are fairly standard:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency> 
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency> 
<dependency> 
    <groupId>com.h2database</groupId> 
    <artifactId>h2</artifactId>
    <version>1.4.197</version> 
    <scope>runtime</scope>
</dependency>

As shown above, we included spring-boot-starter-web in our pom.xml file because we’ll need it for creating the REST controller. Additionally, let’s make sure to check the latest versions of spring-boot-starter-data-jpa and the H2 database on Maven Central.

3. A Simple Domain Class

With our project’s dependencies already in place, next, we need to define an example JPA entity class, whose role will just be modeling users.

Let’s have a look at this class:

@Entity
public class User {
    
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private long id;
    @NotBlank(message = "Name is mandatory")
    private String name;
    @NotBlank(message = "Email is mandatory")
    private String email;
    
    // standard constructors / setters / getters / toString
        
}

The implementation of our User entity class is pretty anemic indeed. But it shows in a nutshell how to use Bean Validation’s constraints to constrain the name and email fields.

For simplicity’s sake, we constrained the target fields using only the @NotBlank constraint. Also, we specified the error messages with the message attribute.

Therefore, when Spring Boot validates the class instance, the constrained fields must not be null, and their trimmed length must be greater than zero.
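The @NotBlank rule boils down to a simple predicate. The sketch below only illustrates that rule; it is not the validator’s actual implementation:

```java
public class NotBlankCheck {

    // roughly what @NotBlank enforces: non-null and non-empty after trimming
    static boolean isNotBlank(String value) {
        return value != null && value.trim().length() > 0;
    }

    public static void main(String[] args) {
        System.out.println(isNotBlank("Bob")); // true
        System.out.println(isNotBlank("   ")); // false
        System.out.println(isNotBlank(null)); // false
    }
}
```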

Additionally, Bean Validation provides many other handy constraints besides @NotBlank. This allows us to apply and combine different validation rules to the constrained classes. For further information, please read the official bean validation docs.

4. The UserRepository Interface

Since we’ll use Spring Data JPA for fetching and saving users to the in-memory H2 database, we need to define a simple repository interface for having basic CRUD functionality on User objects:

@Repository
public interface UserRepository extends CrudRepository<User, Long> {}

5. Implementing a REST Controller

Of course, we need to implement a layer that allows us to get the values assigned to our User object’s constrained fields.

Therefore, we can validate them and perform a few further tasks, depending on the validation results.

Spring Boot makes all this seemingly-complex process really simple through the implementation of a REST controller.

Let’s look at the REST controller methods that fetch from and persist users in the database:

@RestController
public class UserController {

    private final UserRepository userRepository;

    @Autowired
    public UserController(UserRepository userRepository) {
        this.userRepository = userRepository;
    }

    @GetMapping("/users")
    public List<User> getUsers() {
        return (List<User>) userRepository.findAll();
    }

    @PostMapping("/users")
    ResponseEntity<String> addUser(@Valid @RequestBody User user) {
        return ResponseEntity.ok("User is valid");
    }
    
    // other methods
    
}

The getUsers() method simply returns a List of User objects stored in the database. Spring Boot will automatically marshal the List and send it back as a JSON representation, as part of the response body.

In a Spring REST context, the implementation of the addUser() method is fairly standard.

Of course, the most relevant part is the use of the @Valid annotation.

When Spring Boot finds an argument annotated with @Valid, it automatically bootstraps the default JSR 380 implementation — Hibernate Validator — and validates the argument.

When the target argument fails to pass the validation, Spring Boot throws a MethodArgumentNotValidException exception.

6. The @ExceptionHandler Annotation

While it’s really handy to have Spring Boot automatically validate the User object passed to the addUser() method, the missing facet of this process is how we process the validation results.

The @ExceptionHandler annotation allows us to handle specified types of exceptions through one single method.

Therefore, we can use it for processing the validation errors:

@ResponseStatus(HttpStatus.BAD_REQUEST)
@ExceptionHandler(MethodArgumentNotValidException.class)
public Map<String, String> handleValidationExceptions(
  MethodArgumentNotValidException ex) {
    Map<String, String> errors = new HashMap<>();
    ex.getBindingResult().getAllErrors().forEach((error) -> {
        String fieldName = ((FieldError) error).getField();
        String errorMessage = error.getDefaultMessage();
        errors.put(fieldName, errorMessage);
    });
    return errors;
}

We specified the MethodArgumentNotValidException exception as the exception to be handled. In consequence, Spring Boot will call this method when the specified User object is invalid.

The method stores the name and post-validation error message of each invalid field in a Map. Next, it sends the Map back to the client as a JSON representation for further processing.

Simply put, the REST controller allows us to easily process requests to different endpoints, validate User objects, and send the responses in JSON format.

The design is flexible enough to handle controller responses through several web tiers, ranging from template engines such as Thymeleaf, to a full-featured JavaScript framework, such as Angular.

7. Testing the REST Controller

We can easily test the functionality of our REST controller with an integration test.

Let’s start by mocking/autowiring the UserRepository interface implementation, along with the UserController instance and a MockMvc object:

@RunWith(SpringRunner.class) 
@WebMvcTest
@AutoConfigureMockMvc
public class UserControllerIntegrationTest {

    @MockBean
    private UserRepository userRepository;
    
    @Autowired
    UserController userController;

    @Autowired
    private MockMvc mockMvc;

    //...
    
}

Since we’re only testing the web layer, we use the @WebMvcTest annotation. It allows us to easily test requests and responses using the set of static methods implemented by the MockMvcRequestBuilders and MockMvcResultMatchers classes.

Now, let’s test the addUser() method, with a valid and an invalid User object passed in the request body:

@Test
public void whenPostRequestToUsersAndValidUser_thenCorrectResponse() throws Exception {
    MediaType textPlainUtf8 = new MediaType(MediaType.TEXT_PLAIN, Charset.forName("UTF-8"));
    String user = "{\"name\": \"bob\", \"email\" : \"bob@domain.com\"}";
    mockMvc.perform(MockMvcRequestBuilders.post("/users")
      .content(user)
      .contentType(MediaType.APPLICATION_JSON_UTF8))
      .andExpect(MockMvcResultMatchers.status().isOk())
      .andExpect(MockMvcResultMatchers.content()
        .contentType(textPlainUtf8));
}

@Test
public void whenPostRequestToUsersAndInvalidUser_thenCorrectResponse() throws Exception {
    String user = "{\"name\": \"\", \"email\" : \"bob@domain.com\"}";
    mockMvc.perform(MockMvcRequestBuilders.post("/users")
      .content(user)
      .contentType(MediaType.APPLICATION_JSON_UTF8))
      .andExpect(MockMvcResultMatchers.status().isBadRequest())
      .andExpect(MockMvcResultMatchers.jsonPath("$.name", Is.is("Name is mandatory")))
      .andExpect(MockMvcResultMatchers.content()
        .contentType(MediaType.APPLICATION_JSON_UTF8));
}

In addition, we can test the REST controller API using a free API lifecycle testing application, such as Postman or Katalon Studio.

8. Running the Sample Application

Finally, we can run our example project with a standard main() method:

@SpringBootApplication
public class Application {
    
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
    
    @Bean
    public CommandLineRunner run(UserRepository userRepository) throws Exception {
        return (String[] args) -> {
            User user1 = new User("Bob", "bob@domain.com");
            User user2 = new User("Jenny", "jenny@domain.com");
            userRepository.save(user1);
            userRepository.save(user2);
            userRepository.findAll().forEach(System.out::println);
        };
    }
}

As expected, we should see a couple of User objects printed out in the console.

Additionally, a GET request to the http://localhost:8080/users endpoint will return a JSON array with the users persisted in the database:

[
  {
    "name":"Bob",
    "email":"bob@domain.com"
  },
  {
    "name":"Jenny",
    "email":"jenny@domain.com"
  }
]

A POST request to the http://localhost:8080/users endpoint with a valid User object will return the String “User is valid”.

Likewise, a POST request with a User object without name and email values will return the following response:

{
  "name":"Name is mandatory",
  "email":"Email is mandatory"
}

9. Conclusion

In this tutorial, we learned the basics of performing validation in Spring Boot.

As usual, all the examples shown in this tutorial are available over on GitHub.

Marker Interfaces in Java


1. Introduction

In this quick tutorial, we’ll learn about marker interfaces in Java.

2. Marker Interfaces

A marker interface is an interface that has no methods or constants inside it. It provides run-time type information about objects, so the compiler and JVM have additional information about the object.

A marker interface is also called a tagging interface.

3. JDK Marker Interfaces

Java has many built-in marker interfaces, such as Serializable, Cloneable, and Remote.

Let’s take the example of the Cloneable interface. If we try to clone an object that doesn’t implement this interface, the JVM throws a CloneNotSupportedException. Hence, the Cloneable marker interface is an indicator to the JVM that we can call the Object.clone() method.

In the same way, when calling the ObjectOutputStream.writeObject() method, the JVM checks if the object implements the Serializable marker interface. When it’s not the case, a NotSerializableException is thrown. Therefore, the object isn’t serialized to the output stream.
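We can watch that Serializable check happen in a small, self-contained sketch; the Plain and Tagged classes here are purely illustrative:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.NotSerializableException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class MarkerDemo {

    static class Plain { }
    static class Tagged implements Serializable { }

    // true if the JVM accepts the object for serialization
    static boolean canSerialize(Object object) throws IOException {
        ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream());
        try {
            out.writeObject(object);
            return true;
        } catch (NotSerializableException e) {
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(canSerialize(new Tagged())); // true
        System.out.println(canSerialize(new Plain())); // false
    }
}
```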

4. Custom Marker Interface

Let’s create our own marker interface.

For example, we could create a marker that indicates whether an object can be removed from the database:

public interface Deletable {
}

In order to delete an entity from the database, the object representing this entity has to implement our Deletable marker interface:

public class Entity implements Deletable {
    // implementation details
}

Let’s say that we have a DAO object with a method for removing entities from the database. We can write our delete() method so that only objects implementing our marker interface can be deleted:

public class ShapeDao {

    // other dao methods

    public boolean delete(Object object) {
        if (!(object instanceof Deletable)) {
            return false;
        }

        // delete implementation details
        
        return true;
    }
}

As we can see, this gives an indication about the runtime behavior of our objects: if an object implements our marker interface, it can be deleted from the database.

5. Marker Interfaces vs. Annotations

By introducing annotations, Java has provided us with an alternative to achieve the same results as the marker interfaces. Moreover, like marker interfaces, we can apply annotations to any class, and we can use them as indicators to perform certain actions.

So what is the key difference?

Unlike annotations, interfaces allow us to take advantage of polymorphism. As a result, we can add additional restrictions to the marker interface.

For instance, let’s add a restriction that only a Shape type can be removed from the database:

public interface Shape {
    double getArea();
    double getCircumference();
}

In this case, our marker interface, let’s call it DeletableShape, will look like the following:

public interface DeletableShape extends Shape {
}

Then our class will implement the marker interface:

public class Rectangle implements DeletableShape {
    // implementation details
}

Therefore, all DeletableShape implementations are also Shape implementations. Obviously, we can’t do that using annotations.

However, every design decision has trade-offs and polymorphism can be used as a counter-argument against marker interfaces. In our example, every class extending Rectangle will automatically implement DeletableShape.
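We can see that inheritance effect in a runnable sketch; the Shape interface is trimmed to a single method here to keep the example short:

```java
public class PolymorphismDemo {

    interface Shape {
        double getArea();
    }

    // marker interface: no methods of its own, only a type restriction
    interface DeletableShape extends Shape { }

    static class Rectangle implements DeletableShape {
        public double getArea() {
            return 0;
        }
    }

    // Square never mentions DeletableShape, yet inherits the marker from Rectangle
    static class Square extends Rectangle { }

    public static void main(String[] args) {
        System.out.println(new Square() instanceof DeletableShape); // true
    }
}
```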

6. Marker Interfaces vs. Typical Interfaces

In the previous example, we could get the same results by modifying our DAO’s delete() method to test whether our object is a Shape or not, instead of testing whether it’s a Deletable:

public class ShapeDao { 

    // other dao methods 
    
    public boolean delete(Object object) {
        if (!(object instanceof Shape)) {
            return false;
        }
    
        // delete implementation details
        
        return true;
    }
}

So why create a marker interface when we can achieve the same results using a typical interface?

Let’s imagine that, in addition to the Shape type, we want to remove the Person type from the database as well. In this case, there are two options to achieve that:

The first option is to add an additional check to our previous delete() method to verify whether the object to delete is an instance of Person:

public boolean delete(Object object) {
    if (!(object instanceof Shape || object instanceof Person)) {
        return false;
    }
    
    // delete implementation details
        
    return true;
}

But what if we have more types that we want to remove from the database as well? Obviously, this won’t be a good option because we have to change our method for every new type.

The second option is to make the Person type implement the Shape interface, which acts as a marker interface. But is a Person object really a Shape? The answer is clearly no, and that makes the second option worse than the first one.

Hence, although we can achieve the same results by using a typical interface as a marker, we’ll end up with a poor design.

7. Conclusion

In this article, we discussed what marker interfaces are and how they can be used. Then we looked at some built-in Java examples of this type of interface and how the JDK uses them.

Next, we created our own marker interface and weighed it against using an annotation. Finally, we saw why it’s good practice to use a marker interface in some scenarios instead of a traditional interface.

As always, the code can be found on GitHub.
