
Health Indicators in Spring Boot


1. Overview

Spring Boot provides a few different ways to inspect the status and health of a running application and its components. Among those approaches, the HealthContributor and HealthIndicator APIs are two of the notable ones.

In this tutorial, we're going to get familiar with these APIs, learn how they work, and see how we can contribute custom information to them.

2. Dependencies

Health information contributors are part of the Spring Boot actuator module, so we need the appropriate Maven dependency:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

3. Built-in HealthIndicators

Out of the box, Spring Boot registers many HealthIndicators to report the healthiness of a particular application aspect.

Some of those indicators are almost always registered, such as DiskSpaceHealthIndicator or PingHealthIndicator. The former reports the current state of the disk and the latter serves as a ping endpoint for the application.

On the other hand, Spring Boot registers some indicators conditionally. That is, if some dependencies are on the classpath or some other conditions are met, Spring Boot might register a few other HealthIndicators, too. For instance, if we're using a relational database, then Spring Boot registers DataSourceHealthIndicator. Similarly, it'll register CassandraHealthIndicator if we happen to use Cassandra as our data store.

In order to inspect the health status of a Spring Boot application, we can call the /actuator/health endpoint. This endpoint will report an aggregated result of all registered HealthIndicators.

Also, to see the health report from one specific indicator, we can call the /actuator/health/{name} endpoint. For instance, calling the /actuator/health/diskSpace endpoint will return a status report from the DiskSpaceHealthIndicator:

{
  "status": "UP",
  "details": {
    "total": 499963170816,
    "free": 134414831616,
    "threshold": 10485760,
    "exists": true
  }
}

4. Custom HealthIndicators

In addition to the built-in ones, we can register custom HealthIndicators to report the health of a component or subsystem. In order to do that, all we have to do is register an implementation of the HealthIndicator interface as a Spring bean.

For instance, the following implementation reports a failure randomly:

@Component
public class RandomHealthIndicator implements HealthIndicator {

    @Override
    public Health health() {
        double chance = ThreadLocalRandom.current().nextDouble();
        Health.Builder status = Health.up();
        if (chance > 0.9) {
            status = Health.down();
        }
        return status.build();
    }
}

According to the health report from this indicator, the application should be up only 90% of the time. Here we're using Health builders to report the health information.

In reactive applications, however, we should register a bean of type ReactiveHealthIndicator. Its reactive health() method returns a Mono<Health> instead of a simple Health. Other than that, the details are the same for both web application types.

4.1. Indicator Name

To see the report for this particular indicator, we can call the /actuator/health/random endpoint. For instance, here's what the API response might look like:

{"status": "UP"}

The random in the /actuator/health/random URL is the identifier for this indicator. The identifier for a particular HealthIndicator implementation is equal to the bean name without the HealthIndicator suffix. Since the bean name is randomHealthIndicator, random will be the identifier.

With this algorithm, if we change the bean name to, say, rand:

@Component("rand")
public class RandomHealthIndicator implements HealthIndicator {
    // omitted
}

Then the indicator identifier will be rand instead of random.

4.2. Disabling the Indicator

To disable a particular indicator, we can set the management.health.<indicator_identifier>.enabled configuration property to false. For instance, if we add the following to our application.properties:

management.health.random.enabled=false

Then Spring Boot will disable the RandomHealthIndicator. To activate this configuration property, we should also add the @ConditionalOnEnabledHealthIndicator annotation on the indicator:

@Component
@ConditionalOnEnabledHealthIndicator("random")
public class RandomHealthIndicator implements HealthIndicator { 
    // omitted
}

Now if we call the /actuator/health/random, Spring Boot will return a 404 Not Found HTTP response:

@SpringBootTest
@AutoConfigureMockMvc
@TestPropertySource(properties = "management.health.random.enabled=false")
class DisabledRandomHealthIndicatorIntegrationTest {

    @Autowired
    private MockMvc mockMvc;

    @Test
    void givenADisabledIndicator_whenSendingRequest_thenReturns404() throws Exception {
        mockMvc.perform(get("/actuator/health/random"))
          .andExpect(status().isNotFound());
    }
}

Please note that disabling built-in and custom indicators works the same way. Therefore, we can apply the same configuration to the built-in indicators, too.

4.3. Additional Details

In addition to reporting the status, we can attach additional key-value details using the withDetail(key, value) method:

public Health health() {
    double chance = ThreadLocalRandom.current().nextDouble();
    Health.Builder status = Health.up();
    if (chance > 0.9) {
        status = Health.down();
    }

    return status
      .withDetail("chance", chance)
      .withDetail("strategy", "thread-local")
      .build();
}

Here we're adding two pieces of information to the status report. Also, we can achieve the same thing by passing a Map<String, Object> to the withDetails(map) method:

Map<String, Object> details = new HashMap<>();
details.put("chance", chance);
details.put("strategy", "thread-local");
        
return status.withDetails(details).build();

Now if we call the /actuator/health/random, we might see something like:

{
  "status": "DOWN",
  "details": {
    "chance": 0.9883560157173152,
    "strategy": "thread-local"
  }
}

We can verify this behavior with an automated test, too:

mockMvc.perform(get("/actuator/health/random"))
  .andExpect(jsonPath("$.status").exists())
  .andExpect(jsonPath("$.details.strategy").value("thread-local"))
  .andExpect(jsonPath("$.details.chance").exists());

Sometimes an exception occurs while communicating with a system component such as a database or disk. We can report such exceptions using the withException(ex) method:

if (chance > 0.9) {
    status.withException(new RuntimeException("Bad luck"));
}

We can also pass the exception directly to the down(ex) factory method:

if (chance > 0.9) {
    status = Health.down(new RuntimeException("Bad Luck"));
}

Now the health report will contain the stack trace:

{
  "status": "DOWN",
  "details": {
    "error": "java.lang.RuntimeException: Bad Luck",
    "chance": 0.9603739107139401,
    "strategy": "thread-local"
  }
}

4.4. Details Exposure

The management.endpoint.health.show-details configuration property controls the level of details each health endpoint can expose. 

For instance, if we set this property to always, then Spring Boot will always return the details field in the health report, just like the above example.

On the other hand, if we set this property to never, then Spring Boot will always omit the details from the output. There is also the when_authorized value which exposes the additional details only for authorized users. A user is authorized if and only if:

  • She's authenticated
  • And she possesses the roles specified in the management.endpoint.health.roles configuration property

4.5. Health Status

By default, Spring Boot defines four different values as the health Status:

  • UP — The component or subsystem is working as expected
  • DOWN — The component is not working
  • OUT_OF_SERVICE — The component is out of service temporarily
  • UNKNOWN — The component state is unknown

These states are declared as public static final instances instead of Java enums. So it's possible to define our own custom health states. To do that, we can use the status(name) method:

Health.Builder warning = Health.status("WARNING");

The health status affects the HTTP status code of the health endpoint. By default, Spring Boot maps the DOWN and OUT_OF_SERVICE statuses to a 503 status code. On the other hand, UP and any other unmapped statuses will be translated to a 200 OK status code.

To customize this mapping, we can set the management.endpoint.health.status.http-mapping.<status> configuration property to the desired HTTP status code number:

management.endpoint.health.status.http-mapping.down=500
management.endpoint.health.status.http-mapping.out_of_service=503
management.endpoint.health.status.http-mapping.warning=500

Now Spring Boot will map the DOWN status to 500, OUT_OF_SERVICE to 503, and WARNING to 500 HTTP status codes:

mockMvc.perform(get("/actuator/health/warning"))
  .andExpect(jsonPath("$.status").value("WARNING"))
  .andExpect(status().isInternalServerError());

Similarly, we can register a bean of type HttpCodeStatusMapper to customize the HTTP status code mapping:

@Component
public class CustomStatusCodeMapper implements HttpCodeStatusMapper {

    @Override
    public int getStatusCode(Status status) {
        if (status == Status.DOWN) {
            return 500;
        }
        
        if (status == Status.OUT_OF_SERVICE) {
            return 503;
        }
        
        if (status == Status.UNKNOWN) {
            return 500;
        }

        return 200;
    }
}

The getStatusCode(status) method takes the health status as the input and returns the HTTP status code as the output. Also, it's possible to map custom Status instances:

if (status.getCode().equals("WARNING")) {
    return 500;
}

By default, Spring Boot registers a simple implementation of this interface with default mappings. The SimpleHttpCodeStatusMapper is also capable of reading the mappings from the configuration files, as we saw earlier.

5. Health Information vs Metrics

Non-trivial applications usually contain a few different components. For instance, consider a Spring Boot application using Cassandra as its database, Apache Kafka as its pub-sub platform, and Hazelcast as its in-memory data grid.

We should use HealthIndicators to see whether the application can communicate with these components or not. If the communication link fails or the component itself is down or slow, then we have an unhealthy component that we should be aware of. In other words, these indicators should be used to report the healthiness of different components or subsystems.

On the contrary, we should avoid using HealthIndicators to measure values, count events, or measure durations. That's why we have metrics. Put simply, metrics are a better tool to report CPU usage, load average, heap size, HTTP response distributions, and so on.

6. Conclusion

In this tutorial, we saw how to contribute more health information to actuator health endpoints. Moreover, we covered different components of the health APIs in depth, such as health statuses and the mapping of statuses to HTTP status codes.

To wrap things up, we had a quick discussion on the difference between health information and metrics and also, learned when to use each of them.

As usual, all the examples are available over on GitHub.


Using application.yml vs application.properties in Spring Boot


1. Overview

A common practice in Spring Boot is using an external configuration to define our properties. This allows us to use the same application code in different environments.

We can use properties files, YAML files, environment variables, and command-line arguments.

In this short tutorial, we'll explore the main differences between properties and YAML files.

2. Properties Configuration

By default, Spring Boot can access configurations set in an application.properties file, which uses a key-value format:

spring.datasource.url=jdbc:h2:dev
spring.datasource.username=SA
spring.datasource.password=password

Here, each line is a single configuration. Therefore, we must express hierarchical data by using the same prefixes for our keys. And, in this example, every key belongs to spring.datasource.

2.1. Placeholders in Properties

Within our values, we can use placeholders with the ${} syntax to refer to the contents of other keys, system properties, or environment variables:

app.name=MyApp
app.description=${app.name} is a Spring Boot application

2.2. List Structure

If we have the same kind of properties with different values, we can represent the list structure with array indices:

application.servers[0].ip=127.0.0.1
application.servers[0].path=/path1
application.servers[1].ip=127.0.0.2
application.servers[1].path=/path2
application.servers[2].ip=127.0.0.3
application.servers[2].path=/path3

3. YAML Configuration

3.1. YAML Format

As well as Java properties files, we can also use YAML based configuration files in our Spring Boot application. YAML is a convenient format for specifying hierarchical configuration data.

Now, let's take the same example from our properties file and convert it to YAML:

spring:
    datasource:
        password: password
        url: jdbc:h2:dev
        username: SA

This can be more readable than its property file alternative as it does not contain repeated prefixes.

3.2. List Structure

YAML has a more concise format for expressing lists:

application:
    servers:
    -   ip: '127.0.0.1'
        path: '/path1'
    -   ip: '127.0.0.2'
        path: '/path2'
    -   ip: '127.0.0.3'
        path: '/path3'

3.3. Multiple Profiles

A bonus of using YAML is that it can store multiple profiles in the same file. And, in YAML, three dashes indicate the start of a new document. So, all the profiles can be described in the same file:

logging:
    file:
        name: myapplication.log
spring:
    profiles: staging
    datasource:
        password: ''
        url: jdbc:h2:staging
        username: SA
---
spring:
    profiles: integration
    datasource:
        password: 'password'
        url: jdbc:mysql://localhost:3306/db_integration
        username: user

In this example, we have two spring sections with different profiles tagged. Also, we can have a common set of properties at the root level — in this case, the logging.file.name property will be the same in all profiles.

3.4. Profiles Across Multiple Files

As an alternative to having different profiles in the same file, we can store multiple profiles across different files. And, this is the only method available when using properties files.

We achieve this by putting the name of the profile in the file name — for example, application-dev.yml or application-dev.properties.

4. Spring Boot Usage

Now that we've defined our configurations, let's see how to access them.

4.1. Value Annotation

We can inject the values of our properties using the @Value annotation:

@Value("${key.something}")
private String injectedProperty;

Here, the property key.something is injected via field injection into one of our objects.

4.2. Environment Abstraction

We can also obtain the value of a property using the Environment API:

@Autowired
private Environment env;

public String getSomeKey(){
    return env.getProperty("key.something");
}

4.3. ConfigurationProperties Annotation

Finally, we can also use the @ConfigurationProperties annotation to bind our properties to type-safe structured objects:

@ConfigurationProperties(prefix = "mail")
public class ConfigProperties {

    private String name;
    private String description;

    // standard getters and setters
}

5. Conclusion

In this article, we've seen some differences between properties and YAML Spring Boot configuration files. We also saw how their values can refer to other properties. Finally, we looked at how to inject the values into our runtime.

As always, all the code examples are available over on GitHub.

Web Server Graceful Shutdown in Spring Boot


1. Overview

In this quick tutorial, we're going to see how we can configure Spring Boot applications to handle shutdowns more gracefully.

2. Graceful Shutdown

As of version 2.3, Spring Boot supports the graceful shutdown feature for all four embedded web servers (Tomcat, Jetty, Undertow, and Netty) on both servlet and reactive platforms.

To enable the graceful shutdown, all we have to do is to set the server.shutdown property to graceful in our application.properties file:

server.shutdown=graceful

Then, Tomcat, Netty, and Jetty will stop accepting new requests at the network layer. Undertow, on the other hand, will continue to accept new requests but send an immediate 503 Service Unavailable response to the clients.

By default, the value of this property is equal to immediate, which means the server gets shut down immediately.

Some requests might get accepted just before the graceful shutdown phase begins. In that case, the server will wait for those active requests to finish their work up to a specified amount of time. We can configure this grace period using the spring.lifecycle.timeout-per-shutdown-phase configuration property:

spring.lifecycle.timeout-per-shutdown-phase=1m

If we add this, the server will wait up to one minute for active requests to complete. The default value for this property is 30 seconds.

3. Conclusion

In this short tutorial, we saw how we could take advantage of the new graceful shutdown feature in Spring Boot 2.3.

Reading a Line at a Given Line Number From a File in Java


1. Overview

In this quick article, we're going to look at different ways of reading a line at a given line number inside a file.

2. Input File

Let's start by creating a simple file named inputLines.txt that we'll use in all of our examples:

Line 1
Line 2
Line 3
Line 4
Line 5

3. Using BufferedReader

Let's look at the well-known BufferedReader class and its advantage of not storing the entire file in memory.

We can read a file line by line and stop when we desire:

@Test
public void givenFile_whenUsingBufferedReader_thenExtractedLineIsCorrect() throws Exception {
    try (BufferedReader br = Files.newBufferedReader(Paths.get(FILE_PATH))) {
        for (int i = 0; i < 3; i++) {
            br.readLine();
        }

        String extractedLine = br.readLine();
        assertEquals("Line 4", extractedLine);
    }
}

4. Using Scanner

Another similar approach we can take is using a Scanner:

@Test
public void givenFile_whenUsingScanner_thenExtractedLineIsCorrect() throws Exception {
    try (Scanner scanner = new Scanner(new File(FILE_PATH))) {
        for (int i = 0; i < 3; i++) {
            scanner.nextLine();
        }

        String extractedLine = scanner.nextLine();
        assertEquals("Line 4", extractedLine);
    }
}

Although on small files, the difference between BufferedReader and Scanner might not be noticeable, on larger files, the Scanner will be slower since it also does parsing and has a smaller buffer size.

5. Using the Files API

5.1. Small Files

We can use Files.readAllLines() from the Files API to easily read the contents of a file into memory and extract the line we desire:

@Test
public void givenSmallFile_whenUsingFilesAPI_thenExtractedLineIsCorrect() throws Exception {
    String extractedLine = Files.readAllLines(Paths.get(FILE_PATH)).get(4);

    assertEquals("Line 5", extractedLine);
}

5.2. Large Files

If we need to work with large files, we should use the Files.lines() method, which returns a Stream so that we can read the file line by line:

@Test
public void givenLargeFile_whenUsingFilesAPI_thenExtractedLineIsCorrect() throws Exception {
    try (Stream<String> lines = Files.lines(Paths.get(FILE_PATH))) {
        String extractedLine = lines.skip(4).findFirst().get();

        assertEquals("Line 5", extractedLine);
    }
}

6. Using Apache Commons IO

Another option is using the FileUtils class of the commons-io package, which reads the whole file and returns the lines as a list of Strings:

@Test
public void givenFile_whenUsingFileUtils_thenExtractedLineIsCorrect() throws Exception {
    ClassLoader classLoader = getClass().getClassLoader();
    File file = new File(classLoader.getResource("inputLines.txt").getFile());

    List<String> lines = FileUtils.readLines(file, "UTF-8");

    String extractedLine = lines.get(0);
    assertEquals("Line 1", extractedLine);
}

We can also use the IOUtils class to achieve the same result, except this time, the whole content is returned as a String, and we have to do the splitting ourselves:

@Test
public void givenFile_whenUsingIOUtils_thenExtractedLineIsCorrect() throws Exception {
    String fileContent = IOUtils.toString(new FileInputStream(FILE_PATH), StandardCharsets.UTF_8);

    String extractedLine = fileContent.split(System.lineSeparator())[0];
    assertEquals("Line 1", extractedLine);
}

7. Conclusion

In this quick article, we've gone over the most common ways of reading a line at a given line number from a file.

As usual, the examples are available over on GitHub.

Spring MVC Async vs Spring WebFlux


1. Introduction

In this tutorial, we'll explore the @Async annotation in Spring MVC, and then we'll get familiar with Spring WebFlux. Our goal is to have a better understanding of the difference between these two.

2. Implementation Scenario

Here, we want to choose a scenario to show how we can implement a simple web application with each of these APIs. Moreover, we're especially interested to see more about thread management and blocking or non-blocking I/O in each case.

Let's choose a web application with one endpoint that returns a string result. The point here is that the request will pass through a Filter with a small 200ms delay, and then the Controller needs 500ms to calculate and return the result.

Next, we're going to simulate a load with Apache ab on both endpoints and monitor our app behavior with JConsole.

It may be worth mentioning that in this article, our goal is not to benchmark these two APIs, but just to run a small load test so we can trace the thread management.

3. Spring MVC Async

Spring 3.0 introduced the @Async annotation. @Async's goal is to allow the application to run heavy-load jobs on a separate thread. Also, the caller can wait for the result if interested. Hence, the return type must not be void, and it can be any of Future, CompletableFuture, or ListenableFuture.

Moreover, Spring 3.2 introduced the org.springframework.web.context.request.async package that, together with Servlet 3.0, brings the joy of the asynchronous process to the web layer. Thus, since Spring 3.2, @Async can be used in classes annotated as @Controller or @RestController.

When the client initiates a request, it goes through all matched filters in the filter chain until it arrives at the DispatcherServlet instance.

Then, the servlet takes care of the async dispatching of the request. It marks the request as started by calling AsyncWebRequest#startAsync, transfers the request handling to an instance of WebAsyncManager, and finishes its job without committing the response. The filter chain is also traversed back in the reverse direction to the root.

WebAsyncManager submits the request processing job to its associated ExecutorService. Whenever the result is ready, it notifies the DispatcherServlet to return the response to the client.

4. Spring Async Implementation

Let's start the implementation by writing our application class, AsyncVsWebFluxApp. Here, @EnableAsync does the magic of enabling async for our Spring Boot application:

@SpringBootApplication
@EnableAsync
public class AsyncVsWebFluxApp {
    public static void main(String[] args) {
        SpringApplication.run(AsyncVsWebFluxApp.class, args);
    }
}

Then we have AsyncFilter, which implements javax.servlet.Filter. Don't forget to simulate the delay in the doFilter method:

@Component
public class AsyncFilter implements Filter {
    ...
    @Override
    public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse, FilterChain filterChain)
      throws IOException, ServletException {
        // sleep for 200ms 
        filterChain.doFilter(servletRequest, servletResponse);
    }
}

Finally, we develop our AsyncController with the “/async_result” endpoint:

@RestController
public class AsyncController {
    @GetMapping("/async_result")
    @Async
    public CompletableFuture<String> getResultAsync(HttpServletRequest request) {
        // sleep for 500 ms
        return CompletableFuture.completedFuture("Result is ready!");
    }
}

Because of the @Async above getResultAsync, this method is executed in a separate thread on the application's default ExecutorService. However, it is possible to set up a specific ExecutorService for our method.

Test time! Let's run the application and install Apache ab, or any other tool, to simulate the load. Then we can send a bunch of concurrent requests to the “/async_result” endpoint. We can execute JConsole and attach it to our Java application to monitor the process:

ab -n 1600 -c 40 localhost:8080/async_result

5. Spring WebFlux

Spring 5.0 introduced WebFlux to support the reactive web in a non-blocking manner. WebFlux is based on the Reactor API, another implementation of the Reactive Streams specification.

Spring WebFlux supports reactive backpressure and Servlet 3.1+ with its non-blocking I/O. Hence, it can be run on Netty, Undertow, Jetty, Tomcat, or any Servlet 3.1+ compatible server.

Although not all servers use the same thread management and concurrency control model, Spring WebFlux will work fine as long as they support non-blocking I/O and reactive backpressure.

Spring WebFlux allows us to decompose the logic in a declarative way with Mono, Flux, and their rich operator sets. Moreover, we can have functional endpoints besides its @Controller annotated ones.

6. Spring WebFlux Implementation

For the WebFlux implementation, we'll follow the same path as with async. So at first, let's create the AsyncVsWebFluxApp:

@SpringBootApplication
public class AsyncVsWebFluxApp {
    public static void main(String[] args) {
        SpringApplication.run(AsyncVsWebFluxApp.class, args);
    }
}

Then let's write our WebFluxFilter, which implements WebFilter. We'll generate an intentional delay and then pass the request to the filter chain:

@Component
public class WebFluxFilter implements org.springframework.web.server.WebFilter {

    @Override
    public Mono<Void> filter(ServerWebExchange serverWebExchange, WebFilterChain webFilterChain) {
        return Mono
          .delay(Duration.ofMillis(200))
          .then(
            webFilterChain.filter(serverWebExchange)
          );
    }
}

At last, we have our WebFluxController. It exposes an endpoint called “/flux_result” and returns a Mono<String> as the response:

@RestController
public class WebFluxController {

    @GetMapping("/flux_result")
    public Mono<String> getResult(ServerHttpRequest request) {
       return Mono.defer(() -> Mono.just("Result is ready!"))
         .delaySubscription(Duration.ofMillis(500));
    }
}

For the test, we're taking the same approach as with our async sample application. Here's the command for the sample load:

ab -n 1600 -c 40 localhost:8080/flux_result

7. What's the Difference?

Spring Async supports Servlet 3.0 specifications, but Spring WebFlux supports Servlet 3.1+. It brings a number of differences:

  • Spring Async I/O model during its communication with the client is blocking. It may cause a performance problem with slow clients. On the other hand, Spring WebFlux provides a non-blocking I/O model.
  • Reading the request body or request parts is blocking in Spring Async, while it is non-blocking in Spring WebFlux.
  • In Spring Async, Filters and Servlets are working synchronously, but Spring WebFlux supports full asynchronous communication.
  • Spring WebFlux is compatible with a wider range of web/application servers than Spring Async, such as Netty and Undertow.

Moreover, Spring WebFlux supports reactive backpressure, so we have more control over how we react to fast producers than with either Spring MVC Async or Spring MVC.

Spring WebFlux also has a tangible shift towards a functional coding style and declarative API decomposition, thanks to the Reactor API behind it.

Do all of these items lead us to use Spring WebFlux? Well, Spring Async or even plain Spring MVC might be the right answer for a lot of projects out there, depending on the desired load scalability or availability of the system.

Regarding scalability, Spring Async gives us better results than a synchronous Spring MVC implementation. Spring WebFlux, because of its reactive nature, provides us with elasticity and higher availability.

8. Conclusion

In this article, we learned more about Spring Async and Spring WebFlux, then we had a comparison of them theoretically and practically with a basic load test.

As always, the complete code for the Async sample and the WebFlux sample is available over on GitHub.

How to Turn Off Swagger-ui in Production


1. Overview

The Swagger user interface allows us to view information about our REST services. This can be very convenient for development. However, owing to security concerns, we might not want to allow this behavior in our public environments.

In this short tutorial, we'll look at how to turn Swagger off in production.

2. Swagger Configuration

To set up Swagger with Spring, we define it in a configuration bean.

Let's create a SwaggerConfig class:

@Configuration
@EnableSwagger2
public class SwaggerConfig implements WebMvcConfigurer {
    @Bean
    public Docket api() {
        return new Docket(DocumentationType.SWAGGER_2).select()
                .apis(RequestHandlerSelectors.basePackage("com.baeldung"))
                .paths(PathSelectors.regex("/.*"))
                .build();
    }
    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        registry.addResourceHandler("swagger-ui.html")
                .addResourceLocations("classpath:/META-INF/resources/");
        registry.addResourceHandler("/webjars/**")
                .addResourceLocations("classpath:/META-INF/resources/webjars/");
    }
}

By default, this configuration bean is always injected into our Spring context. Thus, Swagger becomes available for all environments.

To disable Swagger in production, let's toggle whether this configuration bean is injected.

3. Using Spring Profiles

In Spring, we can use the @Profile annotation to enable or disable the injection of beans.

Let's try using a SpEL expression to match the “swagger” profile, but not the “prod” profile:

@Profile({"!prod && swagger"})

This forces us to be explicit about environments where we want to activate Swagger. It also helps to prevent accidentally turning it on in production.

We can add the annotation to our configuration:

@Configuration 
@Profile({"!prod && swagger"})
@EnableSwagger2 
public class SwaggerConfig implements WebMvcConfigurer {
    ...
}

Now, let's test that it works, by launching our application with different settings for the spring.profiles.active property:

  -Dspring.profiles.active=prod // Swagger is disabled
  -Dspring.profiles.active=prod,anyOther // Swagger is disabled
  -Dspring.profiles.active=swagger // Swagger is enabled
  -Dspring.profiles.active=swagger,anyOtherNotProd // Swagger is enabled
  none // Swagger is disabled

4. Using Conditionals

Spring Profiles can be too coarse-grained a solution for feature toggles. This approach can lead to configuration errors and lengthy, unmanageable lists of profiles.

As an alternative, we can use @ConditionalOnExpression, which allows specifying custom properties for enabling a bean:

@Configuration
@ConditionalOnExpression(value = "${useSwagger:false}")
@EnableSwagger2
public class SwaggerConfig implements WebMvcConfigurer {
    ...
}

If the “useSwagger” property is missing, the default here is false.

To test this, we can either set the property in the application.properties (or application.yaml) file, or set it as a VM option:

-DuseSwagger=true

We should note that this example does not include any way of guaranteeing that our production instance cannot accidentally have useSwagger set to true.

5. Avoiding Pitfalls

If enabling Swagger is a security concern, then we need to choose a strategy that's mistake-proof, but easy to use.

Some SpEL expressions can work against these aims when we use @Profile:

@Profile({"!prod"}) // Leaves Swagger enabled by default with no way to disable it in other profiles
@Profile({"swagger"}) // Allows activating Swagger in prod as well
@Profile({"!prod", "swagger"}) // Equivalent to {"!prod || swagger"} so it's worse than {"!prod"} as it provides a way to activate Swagger in prod too

This is why our @Profile example used:

@Profile({"!prod && swagger"})

This solution is probably the most rigorous, as it makes Swagger disabled by default and guarantees it cannot be enabled in “prod”.

6. Conclusion

In this article, we looked at solutions for disabling Swagger in production.

We looked at how to toggle the bean that turns Swagger on, via the @Profile and @ConditionalOnExpression annotations. We also considered how to protect against misconfiguration and undesirable defaults.

As always, the example code from this article can be found over on GitHub.

Introduction to ArchUnit


1. Overview

In this article, we'll show how to check the architecture of a system using ArchUnit.

2. What is ArchUnit?

The link between architecture traits and maintainability is a well-studied topic in the software industry. Defining a sound architecture for our systems is not enough, though. We need to verify that the code implemented adheres to it.

Simply put, ArchUnit is a test library that allows us to verify that an application adheres to a given set of architectural rules. But, what is an architectural rule? Even more, what do we mean by architecture in this context?

Let's start with the latter. Here, we use the term architecture to refer to the way we organize the different classes in our application into packages.

The architecture of a system also defines how packages or groups of packages – also known as layers –  interact. In more practical terms, it defines whether code in a given package can call a method in a class belonging to another one. For instance, let's suppose that our application's architecture contains three layers: presentation, service, and persistence.

One way to visualize how those layers interact is by using a UML package diagram with a package representing each layer:

Just by looking at this diagram, we can figure out some rules:

  • Presentation classes should only depend on service classes
  • Service classes should only depend on persistence classes
  • Persistence classes should not depend on anyone else

Looking at those rules, we can now go back and answer our original question. In this context, an architectural rule is an assertion about the way our application classes interact with each other.

So now, how do we check that our implementation observes those rules? Here is where ArchUnit comes in. It allows us to express our architectural constraints using a fluent API and validate them alongside other tests during a regular build.

3. ArchUnit Project Setup

ArchUnit integrates nicely with the JUnit test framework, and so, they are typically used together.  All we have to do is add the archunit-junit4 dependency to match our JUnit version:

<dependency>
    <groupId>com.tngtech.archunit</groupId>
    <artifactId>archunit-junit4</artifactId>
    <version>0.14.1</version>
    <scope>test</scope>
</dependency>

As its artifactId implies, this dependency is specific for the JUnit 4 framework.

There's also an archunit-junit5 dependency if we are using JUnit 5:

<dependency>
    <groupId>com.tngtech.archunit</groupId>
    <artifactId>archunit-junit5</artifactId>
    <version>0.14.1</version>
    <scope>test</scope>
</dependency>

4. Writing ArchUnit Tests

Once we've added the appropriate dependency to our project, let's start writing our architecture tests. Our test application will be a simple Spring Boot REST application that queries Smurfs. For simplicity, this test application only contains the Controller, Service, and Repository classes.

We want to verify that this application complies with the rules we've mentioned before. So, let's start with a simple test for the “presentation classes should only depend on service classes” rule.

4.1. Our First Test

The first step is to create a set of Java classes that will be checked for rule violations. We do this by instantiating the ClassFileImporter class and then using one of its importXXX() methods:

JavaClasses jc = new ClassFileImporter()
  .importPackages("com.baeldung.archunit.smurfs");

In this case, the JavaClasses instance contains all classes from our main application package and its sub-packages. We can think of this object as being analogous to a typical test subject used in regular unit tests, as it will be the target for rule evaluations.

Architectural rules use one of the static methods from the ArchRuleDefinition class as the starting point for its fluent API calls. Let's try to implement the first rule defined above using this API. We'll use the classes() method as our anchor and add additional constraints from there:

ArchRule r1 = classes()
  .that().resideInAPackage("..presentation..")
  .should().onlyDependOnClassesThat()
  .resideInAPackage("..service..");
r1.check(jc);

Notice that we need to call the check() method of the rule we've created to run the check. This method takes a JavaClasses object and will throw an exception if there's a violation.

This all looks good, but we'll get a list of errors if we try to run it against our code:

java.lang.AssertionError: Architecture Violation [Priority: MEDIUM] - 
  Rule 'classes that reside in a package '..presentation..' should only 
  depend on classes that reside in a package '..service..'' was violated (6 times):
... error list omitted

Why? The main problem with this rule is the onlyDependOnClassesThat() constraint. Despite what we've put in the package diagram, our actual implementation has dependencies on JVM and Spring framework classes, hence the error.

4.2. Rewriting Our First Test

One way to solve this error is to add a clause that takes into account those additional dependencies:

ArchRule r1 = classes()
  .that().resideInAPackage("..presentation..")
  .should().onlyDependOnClassesThat()
  .resideInAPackage("..service..", "java..", "javax..", "org.springframework..");

With this change, our check will stop failing. This approach, however, suffers from maintainability issues and feels a bit hacky. We can avoid those issues by rewriting our rule using the noClasses() static method as our starting point:

ArchRule r1 = noClasses()
  .that().resideInAPackage("..presentation..")
  .should().dependOnClassesThat()
  .resideInAPackage("..persistence..");

Of course, we can also point out that this approach is deny-based instead of the allow-based one we had before. The critical point is that whatever approach we choose, ArchUnit will usually be flexible enough to express our rules.

5. Using the Library API

ArchUnit makes the creation of complex architectural rules an easy task thanks to its built-in rules. Those, in turn, can also be combined, allowing us to create rules using a higher level of abstraction. Out of the box, ArchUnit offers the Library API, a collection of prepackaged rules that address common architecture concerns:

  • Architectures: Support for layered and onion (a.k.a. hexagonal or “ports and adapters”) architecture rule checks
  • Slices: Used to detect circular dependencies, or “cycles”
  • General: Collection of rules related to best coding practices such as logging, use of exceptions, etc.
  • PlantUML: Checks whether our code base adheres to a given UML model
  • Freeze Arch Rules: Saves violations for later use, allowing us to report only new ones. Particularly useful for managing technical debt

Covering all those rules is out of scope for this introduction, but let's take a look at the Architecture rule package. In particular, let's rewrite the rules in the previous section using the layered architecture rules. Using these rules requires two steps: first, we define the layers of our application. Then, we define which layer accesses are allowed:

LayeredArchitecture arch = layeredArchitecture()
   // Define layers
  .layer("Presentation").definedBy("..presentation..")
  .layer("Service").definedBy("..service..")
  .layer("Persistence").definedBy("..persistence..")
  // Add constraints
  .whereLayer("Presentation").mayNotBeAccessedByAnyLayer()
  .whereLayer("Service").mayOnlyBeAccessedByLayers("Presentation")
  .whereLayer("Persistence").mayOnlyBeAccessedByLayers("Service");
arch.check(jc);

Here, layeredArchitecture() is a static method from the Architectures class. When invoked, it returns a new LayeredArchitecture object, which we then use to define named layers and assertions regarding their dependencies. This object implements the ArchRule interface so that we can use it just like any other rule.

The cool thing about this particular API is that it allows us to create, in just a few lines of code, rules that would otherwise require us to combine multiple individual rules.

6. Conclusion

In this article, we've explored the basics of using ArchUnit in our projects. Adopting this tool is a relatively simple task that can have a positive impact on overall quality and reduce maintenance costs in the long run.

As usual, all code is available over on GitHub.

Customizing the Login Page for Keycloak


1. Overview

Keycloak is a third-party authorization server used to manage our web or mobile applications' authentication and authorization requirements. It uses a default login page to sign in users on our app's behalf.

In this tutorial, we'll focus on how we can customize the login page for our Keycloak server so that we can have a different look and feel for it. We'll see this for both standalone as well as embedded servers.

We'll build on top of our tutorial on customizing themes for Keycloak to do that.

2. Customizing a Standalone Keycloak Server

Continuing with our example of the custom theme, let's see the standalone server first.

2.1. Admin Console Settings

To start the server, let's navigate to the directory where our Keycloak distribution is kept, and run this command from its bin folder:

./standalone.sh -Djboss.socket.binding.port-offset=100

Once the server is started, we only need to refresh the page to see our changes reflected, thanks to the modifications we previously made to the standalone.xml.

Now let's create a new folder, named login, inside the themes/custom directory. To keep things simple, we'll first copy all the contents of the themes/keycloak/login directory here. This is the default login page theme.

Then, we'll go to the admin console, key-in the initial1/zaq1!QAZ credentials and go to the Themes tab for our realm:

We'll select custom for the Login Theme and save our changes.

With that set, we can now try some customizations. But before that, let's have a look at the default login page.

2.2. Adding Customizations

Now let's say we need to change the background. For that, we'll open login/resources/css/login.css and change the class definition:

.login-pf body {
    background: #39a5dc;
    background-size: cover;
    height: 100%;
}

To see the effect, let's refresh the page:

Next, let's try to change the labels for the username and password.

To achieve that, we need to create a new file, messages_en.properties in the theme/login/messages folder. This file overrides the default message bundle being used for the given properties:

usernameOrEmail=Enter Username:
password=Enter Password:

To test, again refresh the page:

Suppose we want to change the entire HTML or a part of it; in that case, we'll need to override the FreeMarker templates that Keycloak uses by default. The default templates for the login page are kept in the base/login directory.

Let's say we want WELCOME TO BAELDUNG to be displayed in place of SPRINGBOOTKEYCLOAK.

For that, we'll need to copy base/login/template.ftl to our custom/login folder.

In the copied file, change the line:

<div id="kc-header-wrapper" class="${properties.kcHeaderWrapperClass!}">
    ${kcSanitize(msg("loginTitleHtml",(realm.displayNameHtml!'')))?no_esc}
</div>

To:

<div id="kc-header-wrapper" class="${properties.kcHeaderWrapperClass!}">
    WELCOME TO BAELDUNG
</div>

The login page would now display our custom message instead of the realm name.

3. Customizing an Embedded Keycloak Server

The first step here is to add all the artifacts we changed for the standalone server to the source code of our embedded authorization server.

So, let's create a new folder named login inside src/main/resources/themes/custom and copy the same contents we used for the standalone server into it.

Now all we need to do is add an instruction in our realm definition file, baeldung-realm.json, so that custom is used for our login theme type:

"loginTheme": "custom",

We've already pointed the server to the custom theme directory, so it knows where to pick up the theme files for the login page.

For testing, let's hit the login page.

As we can see, all the customizations done earlier for the standalone server, such as the background, label names, and page title, are appearing here.

4. Bypassing Keycloak Login Page

Technically, we can completely bypass the Keycloak login page by using the password or direct access grant flow. However, it's strongly recommended not to use this grant type at all.

In this case, there is no intermediary step of getting an authorization code, and then receiving the access token in exchange. Instead, we can directly send the user credentials via a REST API call and get the access token in response.

This effectively means that we can use our own login page to collect the user's id and password and, along with the client id and secret, send them to Keycloak in a POST to its token endpoint.

But again, since Keycloak provides a rich set of login options – remember me, password reset, and MFA, to name a few – there is little reason to bypass it.

5. Conclusion

In this tutorial, we learned how to change the default login page for Keycloak and add our customizations.

We saw this for both a standalone and an embedded instance.

Lastly, we briefly went over how to bypass Keycloak's login page entirely and why not to do that.

As always, the source code is available over on GitHub.

Java Weekly, Issue 348


1. Spring and Java

>> Seeing Register Allocation Working in Java [chrisseaton.com]

An insightful and in-depth take on how GraalVM allocates variables in machine registers.

>> Logging In Spring Boot [reflectoring.io]

Exploring different aspects of logging in Spring Boot: motivation, best practices, configuration, and aggregation architecture.

>> Mapping Arrays with Hibernate [thorben-janssen.com]

And a comparison of different options to map arrays with Hibernate: element collections, binary types, and of course, vendor-specific arrays with Hibernate custom types.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Kotlin 1.4 Brings New Language Features, Improved Compilers, and Tools [infoq.com]

An overview of new features in Kotlin 1.4: SAM interfaces, collection improvements, ArrayDeque, trailing comma, and many more!

Also worth reading:

3. Musings

>> Scrum Essentials Cards [queue.acm.org]

Successful examples and case studies of using scrum essential cards to improve the teamwork.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Ted Takes Selfie With Bear [dilbert.com]

>> Ratio Is Too High [dilbert.com]

5. Pick of the Week

>> Avoiding the trap of low-knowledge, high-confidence theories [blog.asmartbear.com]

Introduction to keytool


1. Overview

In this short tutorial, we're going to introduce the keytool command. We'll learn how to use keytool to create a new certificate and check the information for that certificate.

2. What Is keytool?

Java includes the keytool utility in its releases. We use it to manage keys and certificates and store them in a keystore. The keytool command allows us to create self-signed certificates and show information about the keystore.

In the following sections, we're going to go through different functionalities of this utility.

3. Creating a Self-Signed Certificate

First of all, let's create a self-signed certificate that could be used to establish secure communication between projects in our development environment, for example.

In order to generate the certificate, we're going to open a command-line prompt and use keytool command with the -genkeypair option:

keytool -genkeypair -alias <alias> -keypass <keypass> -validity <validity> -storepass <storepass>

Let's learn more about each of these parameters:

  • alias – the name for our certificate
  • keypass – the password of the certificate. We'll need this password to have access to the private key of our certificate
  • validity – the time (in days) of the validity of our certificate
  • storepass – the password for the keystore. If the keystore doesn't exist yet, it will be created with this password

For example, let's generate a certificate named “cert1” that has the key password “pass123” and is valid for one year. We'll also specify “stpass123” as the keystore password:

keytool -genkeypair -alias cert1 -keypass pass123 -validity 365 -storepass stpass123

After executing the command, it'll ask for some information that we'll need to provide:

What is your first and last name?
  [Unknown]:  Name
What is the name of your organizational unit?
  [Unknown]:  Unit
What is the name of your organization?
  [Unknown]:  Company
What is the name of your City or Locality?
  [Unknown]:  City
What is the name of your State or Province?
  [Unknown]:  State
What is the two-letter country code for this unit?
  [Unknown]:  US
Is CN=Name, OU=Unit, O=Company, L=City, ST=State, C=US correct?
  [no]:  yes

As mentioned, if we haven't created the keystore before, creating this certificate will create it automatically.

We could also execute the -genkeypair option without parameters. If we don't provide them in the command line and they're mandatory, we'll be prompted for them.

Note that it's generally advised not to provide the passwords (-keypass or -storepass) on the command line in production environments.

4. Listing Certificates in the Keystore

Next, we're going to learn how to view the certificates that are stored in our keystore. For this purpose, we'll use the -list option:

keytool -list -storepass <storepass> 

The output for the executed command will show the certificate that we've created:

Keystore type: JKS
Keystore provider: SUN
Your keystore contains 1 entry
cert1, 02-ago-2020, PrivateKeyEntry, 
Certificate fingerprint (SHA1): 0B:3F:98:2E:A4:F7:33:6E:C4:2E:29:72:A7:17:E0:F5:22:45:08:2F

If we want to get the information for a specific certificate, we just need to include the -alias option in our command. To get more information than is provided by default, we'll also add the -v (verbose) option:

keytool -list -v -alias <alias> -storepass <storepass> 

This will provide us all the information related to the requested certificate:

Alias name: cert1
Creation date: 02-ago-2020
Entry type: PrivateKeyEntry
Certificate chain length: 1
Certificate[1]:
Owner: CN=Name, OU=Unit, O=Company, L=City, ST=State, C=US
Issuer: CN=Name, OU=Unit, O=Company, L=City, ST=State, C=US
Serial number: 11d34890
Valid from: Sun Aug 02 20:25:14 CEST 2020 until: Mon Aug 02 20:25:14 CEST 2021
Certificate fingerprints:
	 MD5:  16:F8:9B:DF:2C:2F:31:F0:85:9C:70:C3:56:66:59:46
	 SHA1: 0B:3F:98:2E:A4:F7:33:6E:C4:2E:29:72:A7:17:E0:F5:22:45:08:2F
	 SHA256: 8C:B0:39:9F:A4:43:E2:D1:57:4A:6A:97:E9:B1:51:38:82:0F:07:F6:9E:CE:A9:AB:2E:92:52:7A:7E:98:2D:CA
Signature algorithm name: SHA256withDSA
Subject Public Key Algorithm: 2048-bit DSA key
Version: 3
Extensions: 
#1: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
0000: A1 3E DD 9A FB C0 9F 5D   B5 BE 2E EC E2 87 CD 45  .>.....].......E
0010: FE 0B D7 55                                        ...U
]
]

5. Other Features

Apart from the functionalities that we've already seen, there are many additional features available in this tool.

For example, we can delete the certificate we created from the keystore:

keytool -delete -alias <alias> -storepass <storepass>

We can also change the alias of a certificate:

keytool -changealias -alias <alias> -destalias <new_alias> -keypass <keypass> -storepass <storepass>

Finally, to get more information about the tool, we can ask for help through the command line:

keytool -help

6. Conclusion

In this quick tutorial, we've learned a bit about the keytool utility. We've also learned to use some basic features included in this tool.

Listing Kafka Topics


1. Overview

In this quick tutorial, we're going to see how we can list all topics in an Apache Kafka cluster.

First, we'll set up a single-node Apache Kafka and Zookeeper cluster. Then, we'll ask that cluster about its topics.

2. Setting Up Kafka

Before listing all the topics in a Kafka cluster, let's set up a test single-node Kafka cluster in three steps:

  • Downloading Kafka and Zookeeper
  • Starting Zookeeper Service
  • Starting Kafka Service

First, we should make sure to download the right Kafka version from the Apache site. Once the download finishes, we should extract the downloaded archive:

$ tar xvf kafka_2.13-2.6.0.tgz

Kafka is using Apache Zookeeper to manage its cluster metadata, so we need a running Zookeeper cluster.

For test purposes, we can run a single-node Zookeeper instance using the zookeeper-server-start.sh script in the bin directory:

$ cd kafka_2.13-2.6.0 # extracted directory
$ ./bin/zookeeper-server-start.sh config/zookeeper.properties

This will start a Zookeeper service listening on port 2181. After this, we can use another script to run the Kafka server:

$ ./bin/kafka-server-start.sh config/server.properties

After a while, a Kafka broker will start. Let's add a few topics to this simple cluster:

$ bin/kafka-topics.sh --create --topic users.registrations --replication-factor 1 \
  --partitions 2  --zookeeper localhost:2181
$ bin/kafka-topics.sh --create --topic users.verfications --replication-factor 1 \
  --partitions 2  --zookeeper localhost:2181

Now that everything is ready, let's see how we can list Kafka topics.

3. Listing Topics

To list all Kafka topics in a cluster, we can use the bin/kafka-topics.sh shell script bundled in the downloaded Kafka distribution. All we have to do is to pass the --list option along with the information about the cluster. For instance, we can pass the Zookeeper service address:

$ bin/kafka-topics.sh --list --zookeeper localhost:2181
users.registrations
users.verfications

As shown above, the --list option tells the kafka-topics.sh shell script to list all the topics. In this case, we have two topics to store user-related events. If there is no topic in the cluster, then the command will return silently without any result.

Also, in order to talk to the Kafka cluster, we need to pass the Zookeeper service URL using the --zookeeper option.

It's even possible to pass the Kafka cluster address directly using the --bootstrap-server option:

$ ./bin/kafka-topics.sh --bootstrap-server=localhost:9092 --list
users.registrations
users.verfications

Our single-instance Kafka cluster listens on port 9092, so we specified “localhost:9092” as the bootstrap server. Put simply, bootstrap servers are Kafka brokers.

If we don't pass the information necessary to talk to a Kafka cluster, the kafka-topics.sh shell script will complain with an error:

$ ./bin/kafka-topics.sh --list
Exception in thread "main" java.lang.IllegalArgumentException: Only one of --bootstrap-server or --zookeeper must be specified
        at kafka.admin.TopicCommand$TopicCommandOptions.checkArgs(TopicCommand.scala:721)
        at kafka.admin.TopicCommand$.main(TopicCommand.scala:52)
        at kafka.admin.TopicCommand.main(TopicCommand.scala)

As shown above, the shell script requires us to pass either the --bootstrap-server or the --zookeeper option.

4. Topic Details

Once we've found a list of topics, we can take a peek at the details of one specific topic. To do that, we can use the --describe --topic <topic name> combination of options:

$ ./bin/kafka-topics.sh --bootstrap-server=localhost:9092 --describe --topic users.registrations
Topic: users.registrations      PartitionCount: 2       ReplicationFactor: 1    Configs: segment.bytes=1073741824
        Topic: users.registrations      Partition: 0    Leader: 0       Replicas: 0     Isr: 0
        Topic: users.registrations      Partition: 1    Leader: 0       Replicas: 0     Isr: 0

These details include information about the specified topic such as the number of partitions and replicas, among others. Similar to other commands, we must pass the cluster information or Zookeeper address. Otherwise, we won't be able to talk to the cluster.

5. Conclusion

In this short tutorial, we learned how to list all topics in a Kafka cluster. Along the way, we saw how to set up a simple, single-node Kafka cluster.

Currently, Apache Kafka is using Zookeeper to manage its cluster metadata. This, however, will change shortly as part of KIP-500, as Kafka is going to have its own metadata quorum.

        

Largest Power of 2 That Is Less Than the Given Number


1. Overview

In this article, we'll be seeing how to find the largest power of 2 that is less than the given number.

For our examples, we'll take the sample input 9. Since 2^0 is 1, the smallest input for which we can find a power of 2 less than it is 2. Hence, we'll only consider inputs greater than 1 as valid.

2. Naive Approach

Let's start with 2^0, which is 1, and keep multiplying the number by 2 until the next power of 2 is greater than or equal to the input:

public long findLargestPowerOf2LessThanTheGivenNumber(long input) {
    Assert.isTrue(input > 1, "Invalid input");
    long firstPowerOf2 = 1;
    long nextPowerOf2 = 2;
    while (nextPowerOf2 < input) {
        firstPowerOf2 = nextPowerOf2;
        nextPowerOf2 = nextPowerOf2 * 2;
    }
    return firstPowerOf2;
}

Let's understand the code for sample input = 9.

The initial value for firstPowerOf2 is 1 and nextPowerOf2 is 2. As we can see, 2 < 9 is true, and we get inside the while loop.

For the first iteration, firstPowerOf2 is 2 and nextPowerOf2 is 2 * 2 = 4. Again, 4 < 9, so let's continue the while loop.

For the second iteration, firstPowerOf2 is 4 and nextPowerOf2 is 4 * 2 = 8. Now 8 < 9, let's keep going.

For the third iteration, firstPowerOf2 is 8 and nextPowerOf2 is 8 * 2 = 16. The while condition 16 < 9 is false, so it breaks out of the while loop. 8 is the largest power of 2 which is less than 9.

Let's run some tests to validate our code:

assertEquals(8, findLargestPowerOf2LessThanTheGivenNumber(9));
assertEquals(16, findLargestPowerOf2LessThanTheGivenNumber(32));

The time complexity of our solution is O(log2(N)). In our case, we iterated 3 times, which is roughly log2(9).

3. Using Math.log

Log base 2 tells us how many times we can divide a number by 2; in other words, log2 of a number gives us its power of 2. Let's look at some examples to understand this.

log2(8) = 3 and log2(16) = 4. In general, y = log2(x) where x = 2^y.

Hence, if the input is divisible by 2, we subtract 1 from it first, so that we avoid returning the input itself when it happens to be an exact power of 2.

In Java, Math.log computes the natural logarithm, so to get log2(x) we can use the change-of-base formula log2(x) = ln(x) / ln(2).

Let's put that in code:

public long findLargestPowerOf2LessThanTheGivenNumberUsingLogBase2(long input) {
    Assert.isTrue(input > 1, "Invalid input");
    long temp = input;
    if (input % 2 == 0) {
        temp = input - 1;
    }
    long power = (long) (Math.log(temp) / Math.log(2));
    long result = (long) Math.pow(2, power);
    return result;
}

Assuming our sample input is 9, the initial value of temp is 9.

9 % 2 is 1, so our temp variable stays 9. Here we're using the modulo operator, which gives the remainder of 9 / 2.

To find log2(9), we compute Math.log(9) / Math.log(2) = 2.19722 / 0.69315, which is about 3.17; the cast to long truncates this to 3.

Now, the result is 2^3, which is 8.

Let's verify our code:

assertEquals(8, findLargestPowerOf2LessThanTheGivenNumberUsingLogBase2(9));
assertEquals(16, findLargestPowerOf2LessThanTheGivenNumberUsingLogBase2(32));

In reality, Math.pow will be doing an iteration similar to the one in our first approach. Hence, for this solution too, the time complexity is O(log2(N)).

4. Bitwise Technique

For this approach, we'll use the bitwise shift technique. First, let's look at the binary representations of the powers of 2, assuming we have 4 bits to represent the number:

2^0 = 1 -> 0001
2^1 = 2 -> 0010
2^2 = 4 -> 0100
2^3 = 8 -> 1000

Looking closely, we can observe that we can compute a power of 2 by left-shifting the bits of 1, i.e. 2^2 is 1 left-shifted by 2 places, and so on.

Let's code this using the bit shift technique:

public long findLargestPowerOf2LessThanTheGivenNumberUsingBitShiftApproach(long input) {
    Assert.isTrue(input > 1, "Invalid input");
    long result = 1;
    long powerOf2;
    for (long i = 0; i < Long.BYTES * 8; i++) {
        powerOf2 = 1L << i;
        if (powerOf2 >= input) {
            break;
        }
        result = powerOf2;
    }
    return result;
}

In the above code, we're using long as our data type, which uses 8 bytes or 64 bits. So we'll be computing powers of 2 up to 2^62, the largest one that fits in a positive long. We're using the bit shift operator << to find each power of 2. For our sample input 9, after the 4th iteration, the value of powerOf2 is 16 and result is 8, and we break out of the loop since 16 >= 9, the input.

Let's check if our code is working as expected:

assertEquals(8, findLargestPowerOf2LessThanTheGivenNumberUsingBitShiftApproach(9));
assertEquals(16, findLargestPowerOf2LessThanTheGivenNumberUsingBitShiftApproach(32));

The worst-case time complexity for this approach is again O(log2(N)), similar to what we saw for the first approach. However, this approach is better as a bit shift operation is more efficient compared to multiplication.
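
As a side note, the JDK can hand us the answer directly: Long.highestOneBit returns the highest set bit of its argument, which is the largest power of 2 less than or equal to it. Here's a minimal sketch of that idea (not one of the approaches above, just an alternative for comparison):

public long findLargestPowerOf2UsingHighestOneBit(long input) {
    Assert.isTrue(input > 1, "Invalid input");
    // highestOneBit(x) is the largest power of 2 <= x,
    // so passing input - 1 keeps the result strictly less than input
    return Long.highestOneBit(input - 1);
}

For the sample input 9, Long.highestOneBit(8) returns 8, and for 32, Long.highestOneBit(31) returns 16.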

5. Bitwise AND

For our next approach, we'll be using the formula 2^n AND (2^n - 1) = 0.

Let's look at some examples to understand how it works.

The binary representation of 4 is 0100, and 3 is 0011.

Let's do a bitwise AND operation on these two numbers: 0100 AND 0011 is 0000. We can say the same for any power of 2 and the number one less than it. Let's take 16 (2^4) and 15, which are represented as 10000 and 01111, respectively. Again, we see that the bitwise AND of these two results in 0. We can also say that for any number that isn't a power of 2, the AND of it and the number one less than it won't result in 0.

Let's see the code for solving this problem using bitwise AND:

public long findLargestPowerOf2LessThanTheGivenNumberUsingBitwiseAnd(long input) { 
    Assert.isTrue(input > 1, "Invalid input");
    long result = 1;
    for (long i = input - 1; i > 1; i--) {
        if ((i & (i - 1)) == 0) {
            result = i;
            break;
        }
    }
    return result;
}

In the above code, we loop over numbers less than our input. Whenever we find that the bitwise AND of a number and number - 1 is zero, we break out of the loop, as we know that number will be a power of 2. In this case, for our sample input 9, we break out of the loop when i = 8 and i - 1 = 7.

Now, let's verify a couple of scenarios:

assertEquals(8, findLargestPowerOf2LessThanTheGivenNumberUsingBitwiseAnd(9));
assertEquals(16, findLargestPowerOf2LessThanTheGivenNumberUsingBitwiseAnd(32));

The worst-case time complexity for this approach is O(N/2), when the input is an exact power of 2. As we can see, this is not the most efficient solution, but it's good to know this technique, as it could come in handy when tackling similar problems.

6. Conclusion

We have seen different approaches for finding the largest power of 2 that is less than the given number. We also noticed how bitwise operations can simplify computations in some cases.

The complete source code with unit tests for this article can be found over on GitHub.

        

IllegalArgumentException or NullPointerException for a Null Parameter?


1. Introduction

Among the decisions we make as we're writing our applications, many are about when to throw exceptions and which type to throw.

In this quick tutorial, we're going to tackle the issue of which exception to throw when someone passes a null parameter to one of our methods: IllegalArgumentException or NullPointerException.

We'll explore the topic by examining the arguments for both sides.

2. IllegalArgumentException

First, let's look at the arguments for throwing an IllegalArgumentException.

Let's create a simple method that throws an IllegalArgumentException when passed a null:

public void processSomethingNotNull(Object myParameter) {
    if (myParameter == null) {
        throw new IllegalArgumentException("Parameter 'myParameter' cannot be null");
    }
}

Now, let's move onto the arguments in favor of IllegalArgumentException.

2.1. It's How the Javadoc Says to Use It

When we read the Javadoc for IllegalArgumentException, it says it's for use when an illegal or inappropriate value is passed to a method. We can consider a null object to be illegal or inappropriate if our method isn't expecting it, and this would be an appropriate exception for us to throw.

2.2. It Matches Developer Expectations

Next, let's think about how we, as developers, think when we see stack traces in our applications. A very common scenario in which we receive a NullPointerException is when we've accidentally tried to access a null object. In this case, we're going to go as deep into the stack as we can to see what we're referencing that's null.

When we get an IllegalArgumentException, we are likely to assume that we're passing something wrong to a method. In this case, we'll look in the stack for the bottom-most method we're calling and start our debugging from there. If we consider this way of thinking, the IllegalArgumentException is going to get us into our stack closer to where the mistake is being made.

2.3. Other Arguments

Before we move onto the arguments for NullPointerException, let's look at a couple of smaller points in favor of IllegalArgumentException. Some developers feel that only the JDK should be throwing NullPointerException. As we'll see in the next section, the Javadoc doesn't support this theory. Another argument is that it's more consistent to use IllegalArgumentException since that's what we'd use for other illegal parameter values.

3. NullPointerException

Next, let's consider the arguments for NullPointerException.

Let's create an example that throws a NullPointerException:

public void processSomethingElseNotNull(Object myParameter) {
    if (myParameter == null) {
        throw new NullPointerException("Parameter 'myParameter' cannot be null");
    }
}

3.1. It's How the Javadoc Says to Use It

According to the Javadoc for NullPointerException, NullPointerException is meant to be used for attempting to use null where an object is required. If our method parameter isn't intended to be null, then we could reasonably consider this as an object being required and throw the NullPointerException.

3.2. It's Consistent with JDK APIs

Let's take a moment to think about many of the common JDK methods we call during development. Many of them throw a NullPointerException if we provide a null. Additionally, Objects.requireNonNull() throws a NullPointerException if we pass in null. According to the Objects documentation, it exists mostly for validating parameters.
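
For illustration, here's a minimal sketch of how such validation might look in our own code (the method name is just a placeholder):

public void processSomethingRequired(Object myParameter) {
    // throws a NullPointerException with our message if myParameter is null
    Objects.requireNonNull(myParameter, "Parameter 'myParameter' cannot be null");
    // continue working with myParameter
}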

In addition to JDK methods that throw NullPointerException, we can find other examples of specific exception types being thrown from methods in the Collections API. ArrayList.addAll(index, Collection) throws an IndexOutOfBoundsException if the index is outside of the list size and throws a NullPointerException if the collection is null. These are two very specific exception types rather than the more generic IllegalArgumentException.

We could consider IllegalArgumentException to be meant for cases when we don't have a more specific exception type available to us.

4. Conclusion

As we've seen in this tutorial, this is a question that doesn't have a clear answer. The documentation for the two exceptions seems to overlap in that when taken alone, they both sound appropriate. There are also additional compelling arguments for both sides based on how developers do their debugging and on patterns seen in the JDK methods themselves.

Whichever exception we chose, we should be consistent throughout our application. Additionally, we can make our exceptions more useful by providing meaningful information to the exception constructor. For example, our applications will be easier to debug if we provide the parameter name in the exception message.

As always, the example code is available over on GitHub.

        

Testing Quarkus Applications


1. Overview

Quarkus makes it very easy these days to develop robust and clean applications. But how about testing?

In this tutorial, we'll take a close look at how a Quarkus application can be tested. We'll explore the testing possibilities offered by Quarkus and present concepts like dependency management and injection, mocking, profile configuration, and more specific things like Quarkus annotations and testing a native executable.

2. Setup

Let's start from the basic Quarkus project configured in our previous Guide to QuarkusIO.

First, we'll add the quarkus-resteasy-jackson, quarkus-hibernate-orm-panache, quarkus-jdbc-h2, quarkus-junit5-mockito, and quarkus-test-h2 Maven dependencies:

<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-resteasy-jackson</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-hibernate-orm-panache</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-jdbc-h2</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-junit5-mockito</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-test-h2</artifactId>
</dependency>

Next, let's create our domain entity:

@Entity
public class Book extends PanacheEntity {
    private String title;
    private String author;
    // constructors, getters, and setters omitted
}

We continue by adding a simple Panache repository, with a method to search for books:

@ApplicationScoped
public class BookRepository implements PanacheRepository<Book> {
    public Stream<Book> findBy(String query) {
        return find("author like :query or title like :query", with("query", "%"+query+"%")).stream();
    }
}

Now, let's write a LibraryService to hold any business logic:

@ApplicationScoped
public class LibraryService {

    @Inject
    BookRepository bookRepository;

    public Set<Book> find(String query) {
        if (query == null) {
            return bookRepository.findAll().stream().collect(toSet());
        }
        return bookRepository.findBy(query).collect(toSet());
    }
}

And finally, let's expose our service functionality through HTTP by creating a LibraryResource:

@Path("/library")
public class LibraryResource {
    @GET
    @Path("/book")
    public Set findBooks(@QueryParam("query") String query) {
        return libraryService.find(query);
    }
}

3. @Alternative Implementations

Before writing any tests, let's make sure we have some books in our repository. With Quarkus, we can use the CDI @Alternative mechanism to provide a custom bean implementation for our tests. Let's create a TestBookRepository that extends BookRepository:

@Priority(1)
@Alternative
@ApplicationScoped
public class TestBookRepository extends BookRepository {
    @PostConstruct
    public void init() {
        persist(new Book("Dune", "Frank Herbert"),
          new Book("Foundation", "Isaac Asimov"));
    }
}

We place this alternative bean in our test package, and because of the @Priority(1) and @Alternative annotations, we're sure any test will pick it up over the actual BookRepository implementation. This is one way we can provide a global mock that all our Quarkus tests can use. We'll explore more narrow-focused mocks shortly, but now, let's move on to creating our first test.

4. HTTP Integration Test

Let's begin by creating a simple REST-assured integration test:

@QuarkusTest
class LibraryResourceIntegrationTest {
    @Test
    void whenGetBooksByTitle_thenBookShouldBeFound() {
        given().contentType(ContentType.JSON).param("query", "Dune")
          .when().get("/library/book")
          .then().statusCode(200)
          .body("size()", is(1))
          .body("title", hasItem("Dune"))
          .body("author", hasItem("Frank Herbert"));
    }
}

This test, annotated with @QuarkusTest, first starts the Quarkus application and then performs a series of HTTP requests against our resource's endpoint.

Now, let's make use of some Quarkus mechanisms to try and further improve our test.

4.1. URL Injection With @TestHTTPResource

Instead of hard-coding the path of our HTTP endpoint, let's inject the resource URL:

@TestHTTPResource("/library/book")
URL libraryEndpoint;

And then, let's use it in our requests:

given().param("query", "Dune")
  .when().get(libraryEndpoint)
  .then().statusCode(200);

Or, without using Rest-assured, let's simply open a connection to the injected URL and test the response:

@Test
void whenGetBooks_thenBooksShouldBeFound() throws IOException {
    assertTrue(IOUtils.toString(libraryEndpoint.openStream(), defaultCharset()).contains("Asimov"));
}

As we can see, @TestHTTPResource URL injection gives us an easy and flexible way of accessing our endpoint.

4.2. @TestHTTPEndpoint

Let's take this further and configure our endpoint using the Quarkus provided @TestHTTPEndpoint annotation:

@TestHTTPEndpoint(LibraryResource.class)
@TestHTTPResource("book")
URL libraryEndpoint;

This way, if we ever decide to change the path of the LibraryResource, the test will pick up the correct path without us having to touch it.

@TestHTTPEndpoint can also be applied at the class level, in which case REST-assured will automatically prefix all requests with the Path of the LibraryResource:

@QuarkusTest
@TestHTTPEndpoint(LibraryResource.class)
class LibraryHttpEndpointIntegrationTest {
    @Test
    void whenGetBooks_thenShouldReturnSuccessfully() {
        given().contentType(ContentType.JSON)
          .when().get("book")
          .then().statusCode(200);
    }
}

5. Context and Dependency Injection

When it comes to dependency injection, in Quarkus tests, we can use @Inject for any required dependency. Let's see this in action by creating a test for our LibraryService:

@QuarkusTest
class LibraryServiceIntegrationTest {
    @Inject
    LibraryService libraryService;
    @Test
    void whenFindByAuthor_thenBookShouldBeFound() {
        assertFalse(libraryService.find("Frank Herbert").isEmpty());
    }
}

Now, let's try to test our Panache BookRepository:

class BookRepositoryIntegrationTest {
    @Inject
    BookRepository bookRepository;
    @Test
    void givenBookInRepository_whenFindByAuthor_thenShouldReturnBookFromRepository() {
        assertTrue(bookRepository.findBy("Herbert").findAny().isPresent());
    }
}

But when we run our test, it fails. That's because it requires running within the context of a transaction and there is none active. This can be fixed simply by adding @Transactional to the test class. Or, if we prefer, we can define our own stereotype to bundle both @QuarkusTest and @Transactional. Let's do this by creating the @QuarkusTransactionalTest annotation:

@QuarkusTest
@Stereotype
@Transactional
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
public @interface QuarkusTransactionalTest {
}

Now, let's apply it to our test:

@QuarkusTransactionalTest
class BookRepositoryIntegrationTest

As we can see, because Quarkus tests are full CDI beans, we can take advantage of all the CDI benefits like dependency injection, transactional contexts, and CDI interceptors.

6. Mocking

Mocking is a critical aspect of any testing effort. As we've already seen above, Quarkus tests can make use of the CDI @Alternative mechanism. Let's now dive deeper into the mocking capabilities Quarkus has to offer.

6.1. @Mock

As a slight simplification of the @Alternative approach, we can use the @Mock stereotype annotation. This bundles together the @Alternative and @Priority(1) annotations.
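
For example, a minimal sketch of a globally applied mock using @Mock might look like this (the class name and the returned data are purely illustrative):

@Mock
@ApplicationScoped
public class MockBookRepository extends BookRepository {

    @Override
    public Stream<Book> findBy(String query) {
        // always return a single, fixed book, regardless of the query
        return Stream.of(new Book("Dune", "Frank Herbert"));
    }
}

Because @Mock is picked up like any other @Alternative, every Quarkus test in the module will see this implementation instead of the real BookRepository.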

6.2. @QuarkusMock

If we don't want to have a globally defined mock, but would rather have our mock only within the scope of one test, we can use @QuarkusMock:

@QuarkusTest
class LibraryServiceQuarkusMockUnitTest {
    @Inject
    LibraryService libraryService;
    @BeforeEach
    void setUp() {
        BookRepository mock = Mockito.mock(TestBookRepository.class);
        Mockito.when(mock.findBy("Asimov"))
          .thenReturn(Arrays.stream(new Book[] {
            new Book("Foundation", "Isaac Asimov"),
            new Book("I Robot", "Isaac Asimov")}));
        QuarkusMock.installMockForType(mock, BookRepository.class);
    }
    @Test
    void whenFindByAuthor_thenBooksShouldBeFound() {
        assertEquals(2, libraryService.find("Asimov").size());
    }
}

6.3. @InjectMock

Let's simplify things a bit and use the Quarkus @InjectMock annotation instead of @QuarkusMock:

@QuarkusTest
class LibraryServiceInjectMockUnitTest {
    @Inject
    LibraryService libraryService;
    @InjectMock
    BookRepository bookRepository;
    @BeforeEach
    void setUp() {
        when(bookRepository.findBy("Frank Herbert"))
          .thenReturn(Arrays.stream(new Book[] {
            new Book("Dune", "Frank Herbert"),
            new Book("Children of Dune", "Frank Herbert")}));
    }
    @Test
    void whenFindByAuthor_thenBooksShouldBeFound() {
        assertEquals(2, libraryService.find("Frank Herbert").size());
    }
}

6.4. @InjectSpy

If we're only interested in spying and not replacing bean behavior, we can use the provided @InjectSpy annotation:

@QuarkusTest
class LibraryResourceInjectSpyIntegrationTest {
    @InjectSpy
    LibraryService libraryService;
    @Test
    void whenGetBooksByAuthor_thenBookShouldBeFound() {
        given().contentType(ContentType.JSON).param("query", "Asimov")
          .when().get("/library/book")
          .then().statusCode(200);
        verify(libraryService).find("Asimov");
    }
}

7. Test Profiles

We might want to run our tests in different configurations. For this, Quarkus offers the concept of a test profile. Let's create a test that runs against a different database engine using a customized version of our BookRepository, and that will also expose our HTTP resources at a different path from the one already configured.

For this, we start by implementing a QuarkusTestProfile:

public class CustomTestProfile implements QuarkusTestProfile {
    @Override
    public Map<String, String> getConfigOverrides() {
        return Collections.singletonMap("quarkus.resteasy.path", "/custom");
    }
    @Override
    public Set<Class<?>> getEnabledAlternatives() {
        return Collections.singleton(TestBookRepository.class);
    }
    @Override
    public String getConfigProfile() {
        return "custom-profile";
    }
}

Let's now configure our application.properties by adding a custom-profile config property that will change our H2 storage from memory to file:

%custom-profile.quarkus.datasource.jdbc.url = jdbc:h2:file:./testdb

Finally, with all the resources and configuration in place, let's write our test:

@QuarkusTest
@TestProfile(CustomTestProfile.class)
class CustomLibraryResourceManualTest {
    public static final String BOOKSTORE_ENDPOINT = "/custom/library/book";
    @Test
    void whenGetBooksGivenNoQuery_thenAllBooksShouldBeReturned() {
        given().contentType(ContentType.JSON)
          .when().get(BOOKSTORE_ENDPOINT)
          .then().statusCode(200)
          .body("size()", is(2))
          .body("title", hasItems("Foundation", "Dune"));
    }
}

As we can see from the @TestProfile annotation, this test will use the CustomTestProfile. It will make HTTP requests to the custom endpoint overridden in the profile's getConfigOverrides method. Moreover, it will use the alternative book repository implementation configured in the getEnabledAlternatives method. And finally, by using the custom-profile defined in getConfigProfile, it will persist data in a file rather than memory.

One thing to note is that Quarkus will shut down and then restart with the new profile before this test is executed. This adds some time as the shutdown/restart happens, but it's the price to be paid for the extra flexibility.

8. Testing Native Executables

Quarkus offers the possibility to test native executables. Let's create a native image test:

@NativeImageTest
@QuarkusTestResource(H2DatabaseTestResource.class)
class NativeLibraryResourceIT extends LibraryHttpEndpointIntegrationTest {
}

And now, by running:

mvn verify -Pnative

We'll see the native image being built and the tests running against it.

The @NativeImageTest annotation instructs Quarkus to run this test against the native image, while the @QuarkusTestResource will start an H2 instance in a separate process before the test begins. The latter is needed for running tests against native executables, as the database engine is not embedded into the native image.

The @QuarkusTestResource annotation can also be used to start custom services, like Testcontainers, for example. All we need to do is implement the QuarkusTestResourceLifecycleManager interface and annotate our test with:

@QuarkusTestResource(OurCustomResourceImpl.class)
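
A minimal sketch of such a resource might look like this (the class name matches the placeholder above; the started service and the returned property are purely illustrative):

public class OurCustomResourceImpl implements QuarkusTestResourceLifecycleManager {

    @Override
    public Map<String, String> start() {
        // start the external service (e.g. a Testcontainers container) here
        // and return any configuration the application should pick up
        return Collections.singletonMap("our.service.url", "http://localhost:8085");
    }

    @Override
    public void stop() {
        // stop and clean up the external service here
    }
}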

Note that we'll need GraalVM installed in order to build the native image.

Also, take notice that, at the moment, injection does not work with native image testing. The only thing that runs natively is the Quarkus application, not the test itself.

9. Conclusion

In this article, we saw how Quarkus offers excellent support for testing our application. From simple things like dependency management, injection, and mocking, to more complex aspects like configuration profiles and native images, Quarkus provides us with many tools to create powerful and clean tests.

As always, the complete code is available over on GitHub.

The post Testing Quarkus Applications first appeared on Baeldung.

        

Gradle Source Sets


1. Overview

Source sets give us a powerful way to structure source code in our Gradle projects.

In this quick tutorial, we're going to see how to use them.

2. Default Source Sets

Before jumping into the defaults, let's first explain what source sets are. As the name implies, source sets represent a logical grouping of source files.

We'll cover the configuration of Java projects, but the concepts are also applicable to other Gradle project types.

2.1. Default Project Layout

Let's start with a simple project structure:

source-sets 
  ├── src 
  │    ├── main 
  │    │    └── java 
  │    │         ├── SourceSetsMain.java
  │    │         └── SourceSetsObject.java
  │    └── test 
  │         └── java 
  │              └── SourceSetsTest.java
  └── build.gradle 

Now let's take a look at the build.gradle:

apply plugin : "java"
description = "Source Sets example"
test {
    testLogging {
        events "passed", "skipped", "failed"
    }
}
dependencies {   
    implementation('org.apache.httpcomponents:httpclient:4.5.12')
    testImplementation('junit:junit:4.12')
}

The Java plugin assumes src/main/java and src/test/java as the default source directories.

Let's craft a simple utility task:

task printSourceSetInformation(){
    doLast{
        sourceSets.each { srcSet ->
            println "["+srcSet.name+"]"
            print "-->Source directories: "+srcSet.allJava.srcDirs+"\n"
            print "-->Output directories: "+srcSet.output.classesDirs.files+"\n"
            println ""
        }
    }
}

We're printing just a few source set properties here. We can always check the full JavaDoc for more information.

Let's run it and see what we get:

$ ./gradlew printSourceSetInformation
> Task :source-sets:printSourceSetInformation
[main]
-->Source directories: [.../source-sets/src/main/java]
-->Output directories: [.../source-sets/build/classes/java/main]
[test]
-->Source directories: [.../source-sets/src/test/java]
-->Output directories: [.../source-sets/build/classes/java/test]

Notice we have two default source sets: main and test.

2.2. Default Configurations

The Java plugin also automatically creates some default Gradle configurations for us.

They follow a special naming convention: <sourceSetName><configurationName>.

We use them to declare the dependencies in build.gradle:

dependencies { 
    implementation('org.apache.httpcomponents:httpclient:4.5.12') 
    testImplementation('junit:junit:4.12') 
}

Notice that we specify implementation instead of mainImplementation. This is an exception to the naming convention.

By default, the testImplementation configuration extends implementation and inherits all of its dependencies; the Java plugin also puts the outputs of main on the test classpath.

Let's improve our helper task and see what this is about:

task printSourceSetInformation(){
    doLast{
        sourceSets.each { srcSet ->
            println "["+srcSet.name+"]"
            print "-->Source directories: "+srcSet.allJava.srcDirs+"\n"
            print "-->Output directories: "+srcSet.output.classesDirs.files+"\n"
            print "-->Compile classpath:\n"
            srcSet.compileClasspath.files.each { 
                print "  "+it.path+"\n"
            }
            println ""
        }
    }
}

Let's take a look at the output:

[main]
// same output as before
-->Compile classpath:
  .../httpclient-4.5.12.jar
  .../httpcore-4.4.13.jar
  .../commons-logging-1.2.jar
  .../commons-codec-1.11.jar
[test]
// same output as before
-->Compile classpath:
  .../source-sets/build/classes/java/main
  .../source-sets/build/resources/main
  .../httpclient-4.5.12.jar
  .../junit-4.12.jar
  .../httpcore-4.4.13.jar
  .../commons-logging-1.2.jar
  .../commons-codec-1.11.jar
  .../hamcrest-core-1.3.jar

The test source set contains the outputs of main in its compile classpath and also includes its dependencies.

Next, let's create our unit test:

public class SourceSetsTest {
    @Test
    public void whenRun_ThenSuccess() {
        
        SourceSetsObject underTest = new SourceSetsObject("lorem","ipsum");
        
        assertThat(underTest.getUser(), is("lorem"));
        assertThat(underTest.getPassword(), is("ipsum"));
    }
}

Here we test a simple POJO that stores two values. We can use it directly because the main outputs are in our test classpath.

Next, let's run this from Gradle:

./gradlew clean test
> Task :source-sets:test
com.baeldung.test.SourceSetsTest > whenRun_ThenSuccess PASSED

3. Custom Source Sets

So far, we've seen some sensible defaults. However, in practice, we often need custom source sets, especially for integration tests.

That's because we might want to have specific test libraries only on the integration tests classpath. We also might want to execute them independently of unit tests.

3.1. Defining Custom Source Sets

Let's craft a separate source directory for our integration tests:

source-sets 
  ├── src 
  │    ├── main 
  │    │    └── java 
  │    │         ├── SourceSetsMain.java
  │    │         └── SourceSetsObject.java
  │    ├── test 
  │    │    └── java 
  │    │         └── SourceSetsTest.java
  │    └── itest 
  │         └── java 
  │              └── SourceSetsITest.java
  └── build.gradle 

Next, let's configure it in our build.gradle using the sourceSets construct:

sourceSets {
    itest {
        java {
        }
    }
}
dependencies {
    implementation('org.apache.httpcomponents:httpclient:4.5.12')
    testImplementation('junit:junit:4.12')
}
// other declarations omitted

Notice we did not specify any custom directory. That's because our folder matches the name of the new source set (itest).

We can customize what directories are included with the srcDirs property:

sourceSets{
    itest {
        java {
            srcDirs("src/itest")
        }
    }
}

Remember our helper task from the beginning? Let's rerun it and see what it prints:

$ ./gradlew printSourceSetInformation
> Task :source-sets:printSourceSetInformation
[itest]
-->Source directories: [.../source-sets/src/itest/java]
-->Output directories: [.../source-sets/build/classes/java/itest]
-->Compile classpath:
  .../source-sets/build/classes/java/main
  .../source-sets/build/resources/main
[main]
 // same output as before
[test]
 // same output as before

3.2. Assigning Source Set Specific Dependencies

Remember default configurations? We now get some configurations for the itest source set as well.

Let's use itestImplementation to assign a new dependency:

dependencies {
    implementation('org.apache.httpcomponents:httpclient:4.5.12')
    testImplementation('junit:junit:4.12')
    itestImplementation('com.google.guava:guava:29.0-jre')
}

This one only applies to integration tests.

Let's modify our previous test and add it as an integration test:

public class SourceSetsItest {
    @Test
    public void givenImmutableList_whenRun_ThenSuccess() {
        SourceSetsObject underTest = new SourceSetsObject("lorem", "ipsum");
        List someStrings = ImmutableList.of("Baeldung", "is", "cool");
        assertThat(underTest.getUser(), is("lorem"));
        assertThat(underTest.getPassword(), is("ipsum"));
        assertThat(someStrings.size(), is(3));
    }
}

To be able to run it, we need to define a custom test task that uses the compiled outputs:

// source sets declarations
// dependencies declarations 
task itest(type: Test) {
    description = "Run integration tests"
    group = "verification"
    testClassesDirs = sourceSets.itest.output.classesDirs
    classpath = sourceSets.itest.runtimeClasspath
}

These declarations are evaluated during the configuration phase. As a result, their order is important.

For example, we cannot reference the itest source set in the task body before this is declared.

Let's see what happens if we run the test:

$ ./gradlew clean itest
// some compilation issues
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':source-sets:compileItestJava'.
> Compilation failed; see the compiler error output for details.

Unlike the previous run, we get a compilation error this time. So what happened?

This new source set creates an independent configuration.

In other words, itestImplementation does not inherit the JUnit dependency, nor does it get the outputs of main.

Let's fix this in our Gradle configuration:

sourceSets{
    itest {
        compileClasspath += sourceSets.main.output
        runtimeClasspath += sourceSets.main.output
        java {
        }
    }
}
// dependencies declaration
configurations {
    itestImplementation.extendsFrom(testImplementation)
    itestRuntimeOnly.extendsFrom(testRuntimeOnly)
}

Now let's rerun our integration test:

$ ./gradlew clean itest
> Task :source-sets:itest
com.baeldung.itest.SourceSetsItest > givenImmutableList_whenRun_ThenSuccess PASSED

The test passes.

3.3. Eclipse IDE Handling

We've seen so far how to work with source sets directly with Gradle. However, most of the time, we'll be using an IDE (such as Eclipse).

When we import the project into Eclipse, the IDE reports some compilation issues for the integration test sources.

However, if we run the integration tests from Gradle, we get no errors:

$ ./gradlew clean itest
> Task :source-sets:itest
com.baeldung.itest.SourceSetsItest > givenImmutableList_whenRun_ThenSuccess PASSED

So what happened? In this case, the guava dependency belongs to itestImplementation.

Unfortunately, the Eclipse Buildship Gradle plugin does not handle these custom configurations very well.

Let's fix this in our build.gradle:

apply plugin: "eclipse"
// previous declarations
eclipse {
    classpath {
        plusConfigurations+=[configurations.itestCompileClasspath] 
    } 
}

Let's explain what we did here. We appended our configuration to the Eclipse classpath.

If we refresh the project, the compilation issues are gone.

However, there's a drawback to this approach: The IDE does not distinguish between configurations.

This means we can easily import guava in our test sources (which we specifically wanted to avoid).

4. Conclusion

In this tutorial, we covered the basics of Gradle source sets.

Then we explained how custom source sets work and how to use them in Eclipse.

As usual, we can find the complete source code over on GitHub.

The post Gradle Source Sets first appeared on Baeldung.

        

Guide to ArrayStoreException


1. Overview

ArrayStoreException is thrown at runtime in Java when an attempt is made to store the incorrect type of object in an array of objects. Since ArrayStoreException is an unchecked exception, it isn't typical to handle or declare it.

In this tutorial, we'll demonstrate the cause of ArrayStoreException, how to handle it, and best practices for avoiding it.

2. Causes of ArrayStoreException

Java throws an ArrayStoreException when we try to store a different type of object in an array instead of the declared type.

Suppose we instantiated an array with String type and later tried to store Integer in it. In this case, during runtime, ArrayStoreException is thrown:

Object array[] = new String[5];
array[0] = 2;

The exception will be thrown at the second line of code when we try to store an incorrect value type in the array:

Exception in thread "main" java.lang.ArrayStoreException: java.lang.Integer
    at com.baeldung.array.arraystoreexception.ArrayStoreExceptionExample.main(ArrayStoreExceptionExample.java:9)

Since we declared the array as an Object array, the code compiles without errors.

3. Handling the ArrayStoreException

Handling this exception is pretty straightforward. Like any other exception, we can surround the relevant code in a try-catch block to handle it:

try {
    Object array[] = new String[5];
    array[0] = 2;
} catch (ArrayStoreException e) {
    // handle the exception
}

4. Best Practices to Avoid this Exception

It's recommended to declare the array with a specific type, such as String or Integer, instead of Object. When we declare the array type as Object, the compiler won't report any error.

But declaring the array with the base class and then storing objects of a different class will lead to a compilation error. Let's see this with a quick example:

String array[] = new String[5];
array[0] = 2;

In the above example, we declare the array type as String and try to store an int in it. This will lead to a compilation error:

Exception in thread "main" java.lang.Error: Unresolved compilation problem: 
  Type mismatch: cannot convert from int to String
    at com.baeldung.arraystoreexception.ArrayStoreExampleCE.main(ArrayStoreExampleCE.java:8)

It's better if we catch errors at compile-time rather than runtime as we have more control over the former.
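
It's also worth noting, purely as an illustration beyond the example above, that generic collections give us the same compile-time safety, since, unlike arrays, they are not covariant:

List<String> list = new ArrayList<>();
list.add("2");   // fine
// list.add(2);  // doesn't compile: add(String) can't accept an int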

5. Conclusion

In this tutorial, we learned the causes, handling, and prevention of ArrayStoreException in Java.

The complete example is available over on GitHub.

The post Guide to ArrayStoreException first appeared on Baeldung.

        

Assert Two Lists for Equality Ignoring Order in Java


1. Overview

Sometimes when writing unit tests, we need to make order agnostic comparison of lists. In this short tutorial, we'll take a look at different examples of how we can write such unit tests.

2. Setup

As per the List#equals Java documentation, two lists are equal if they contain the same elements in the same order. Therefore we can't merely use the equals method as we want to do order agnostic comparison.

Throughout this tutorial, we'll use these three lists as example inputs for our tests:

List first = Arrays.asList(1, 3, 4, 6, 8);
List second = Arrays.asList(8, 1, 6, 3, 4);
List third = Arrays.asList(1, 3, 3, 6, 6);

There are different ways to do order agnostic comparison. Let's take a look at them one by one.

3. Using JUnit

JUnit is a well-known framework used for unit testing in the Java ecosystem.

We can use the logic below to compare the equality of two lists using the assertTrue and assertFalse methods.

Here, we check the size of both lists and check whether the first list contains all the elements of the second list and vice versa. Although this solution works, it's not very readable, so we'll look at more expressive alternatives afterward:

@Test
public void whenTestingForOrderAgnosticEquality_ShouldBeTrue() {
    assertTrue(first.size() == second.size() && first.containsAll(second) && second.containsAll(first));
}

In this first test, the size of both lists is compared before we check if the elements in both lists are the same. As both of these conditions return true, our test will pass.

Let's now take a look at a failing test:

@Test
public void whenTestingForOrderAgnosticEquality_ShouldBeFalse() {
    assertFalse(first.size() == third.size() && first.containsAll(third) && third.containsAll(first));
}

Contrastingly, in this version of the test, although the size of both lists is the same, all elements don't match.

4. Using AssertJ

AssertJ is an open-source, community-driven library used for writing fluent and rich assertions in Java tests.

To use it in our maven project, let's add the assertj-core dependency in the pom.xml file:

<dependency>
    <groupId>org.assertj</groupId>
    <artifactId>assertj-core</artifactId>
    <version>3.16.1</version>
</dependency>

Let's write a test comparing two list instances that have the same elements and the same size:

@Test
void whenTestingForOrderAgnosticEqualityBothList_ShouldBeEqual() {
    assertThat(first).hasSameElementsAs(second);
}

In this example, we verify that first contains all the elements of the given iterable and nothing else, in any order. The main limitation of this approach is that the hasSameElementsAs method ignores duplicates.

Let's look at this in practice to see what we mean:

@Test
void whenTestingForOrderAgnosticEqualityBothList_ShouldNotBeEqual() {
    List a = Arrays.asList("a", "a", "b", "c");
    List b = Arrays.asList("a", "b", "c");
    assertThat(a).hasSameElementsAs(b);
}

In this test, although we have the same elements, the sizes of the two lists are not equal; yet the assertion still passes, as duplicates are ignored. To make it work, we need to add a size check for both lists:

assertThat(a).hasSize(b.size()).hasSameElementsAs(b);

Adding a check for the size of both our lists followed by the method hasSameElementsAs will indeed fail as expected.

5. Using Hamcrest

If we are already using Hamcrest or want to use it for writing unit tests, here is how we can use the Matchers#containsInAnyOrder method for order agnostic comparison.

To use Hamcrest in our maven project, let's add the hamcrest-all dependency in pom.xml file:

<dependency>
    <groupId>org.hamcrest</groupId>
    <artifactId>hamcrest-all</artifactId>
    <version>1.3</version>
</dependency>

Let's look at the test:

@Test
public void whenTestingForOrderAgnosticEquality_ShouldBeEqual() {
    assertThat(first, Matchers.containsInAnyOrder(second.toArray()));
}

Here, the containsInAnyOrder method creates an order agnostic matcher for Iterables, which matches against the examined Iterable's elements. This test matches the elements of the two lists, ignoring the order of the elements.

Thankfully this solution doesn't suffer from the same problem as explained in the previous section, so we don't need to compare the sizes explicitly.

6. Using Apache Commons

Apart from JUnit, Hamcrest, and AssertJ, another library we can use is Apache Commons Collections' CollectionUtils. It provides utility methods for common operations that cover a wide range of use cases and help us avoid writing boilerplate code.

To use it in our maven project, let's add the commons-collections4 dependency in pom.xml file:

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-collections4</artifactId>
    <version>4.4</version>
</dependency>

Here is a test using CollectionUtils:

@Test
public void whenTestingForOrderAgnosticEquality_ShouldBeTrueIfEqualOtherwiseFalse() {
    assertTrue(CollectionUtils.isEqualCollection(first, second));
    assertFalse(CollectionUtils.isEqualCollection(first, third));
}

The isEqualCollection method returns true if the given collections contain precisely the same elements with the same cardinalities. Otherwise, it returns false.
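
To illustrate what “the same elements with the same cardinalities” means, here's a minimal plain-Java sketch of such a check (purely illustrative, not part of the Apache Commons API):

static <T> boolean equalIgnoringOrder(List<T> a, List<T> b) {
    if (a.size() != b.size()) {
        return false;
    }
    // count the occurrences of each element in both lists and compare the counts
    Map<T, Long> countsOfA = a.stream().collect(Collectors.groupingBy(e -> e, Collectors.counting()));
    Map<T, Long> countsOfB = b.stream().collect(Collectors.groupingBy(e -> e, Collectors.counting()));
    return countsOfA.equals(countsOfB);
}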

7. Conclusion

In this article, we have explored how to check the equality of two List instances, where the elements of both lists are ordered differently.

All these examples can be found over on GitHub.

The post Assert Two Lists for Equality Ignoring Order in Java first appeared on Baeldung.

        

The Spring @ConditionalOnProperty Annotation


1. Overview

In this short tutorial, we're going to shed light on the main purpose of the @ConditionalOnProperty annotation.

First, we'll start with a bit of background about what @ConditionalOnProperty is. Then, we'll look at some practical examples to help understand how it works and what features it brings.

2. The Purpose of @ConditionalOnProperty

Typically, when developing Spring-based applications, we may need to create some beans conditionally based on the presence and the value of a configuration property.

For example, we may want to register a DataSource bean to point to a production or a test database depending on if we set a property value to “prod” or “test”.

Fortunately, achieving that isn't as hard as it might look upon first glance. The Spring framework provides the @ConditionalOnProperty annotation precisely for this purpose.

In short, the @ConditionalOnProperty annotation enables bean registration only if an environment property is present and has a specific value. By default, the specified property must be defined and not equal to false.

Now that we're familiar with the purpose of the @ConditionalOnProperty annotation, let's dig deeper to see how it works.

3. The @ConditionalOnProperty Annotation in Practice

To exemplify the use of @ConditionalOnProperty, we'll develop a basic notification system. To keep things simple for now, let's assume that we want to send email notifications.

First, we'll need to create a simple service to send a notification message. For instance, consider the NotificationSender interface:

public interface NotificationSender {
    String send(String message);
}

Next, let's provide an implementation of the NotificationSender interface to send our emails:

public class EmailNotification implements NotificationSender {
    @Override
    public String send(String message) {
        return "Email Notification: " + message;
    }
}

Now, let's see how to make use of the @ConditionalOnProperty annotation. Let's configure the NotificationSender bean in such a way that it will only be loaded if the property notification.service is defined:

@Bean(name = "emailNotification")
@ConditionalOnProperty(prefix = "notification", name = "service")
public NotificationSender notificationSender() {
    return new EmailNotification();
}

As we can see, the prefix and name attributes are used to denote the configuration property that should be checked.

Finally, we need to add the last missing piece of the puzzle. Let's define our custom property in the application.properties file:

notification.service=email

4. Advanced Configuration

As we have already learned, the @ConditionalOnProperty annotation allows us to register beans conditionally depending on the presence of a configuration property.

However, we can do more than just that with this annotation. So, let's explore!

Let's suppose we want to add another notification service — for example, a service that will allow us to send SMS notifications.

To do that, we need to create another NotificationSender implementation:

public class SmsNotification implements NotificationSender {
    @Override
    public String send(String message) {
        return "SMS Notification: " + message;
    }
}

Since we have two implementations, let's see how we can use @ConditionalOnProperty to load the right NotificationSender bean conditionally.

For this purpose, the annotation provides the havingValue attribute. Quite interestingly, it defines the value that a property must have in order for a specific bean to be added to the Spring container.

Now, let's specify under what condition we want to register the SmsNotification implementation in the context:

@Bean(name = "smsNotification")
@ConditionalOnProperty(prefix = "notification", name = "service", havingValue = "sms")
public NotificationSender notificationSender2() {
    return new SmsNotification();
}

With the help of the havingValue attribute, we made it clear that we want to load SmsNotification only when notification.service is set to sms.

It's worth mentioning that @ConditionalOnProperty has another attribute called matchIfMissing. This attribute specifies whether the condition should match in case the property is not available.
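
For instance, a hypothetical fallback bean using matchIfMissing could look like this (the bean name is just an illustration):

@Bean(name = "defaultNotification")
@ConditionalOnProperty(prefix = "notification", name = "service", havingValue = "email", matchIfMissing = true)
public NotificationSender defaultNotificationSender() {
    // registered when notification.service=email, and also when the property isn't defined at all
    return new EmailNotification();
}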

Now, let's put all the pieces together and write a simple test case to confirm that everything works as expected:

@Test
public void whenValueSetToEmail_thenCreateEmailNotification() {
    this.contextRunner.withPropertyValues("notification.service=email")
        .withUserConfiguration(NotificationConfig.class)
        .run(context -> {
            assertThat(context).hasBean("emailNotification");
            NotificationSender notificationSender = context.getBean(EmailNotification.class);
            assertThat(notificationSender.send("Hello From Baeldung!")).isEqualTo("Email Notification: Hello From Baeldung!");
            assertThat(context).doesNotHaveBean("smsNotification");
        });
}

5. Conclusion

In this short tutorial, we highlighted the purpose of using the @ConditionalOnProperty annotation. Then, we showcased, through a practical example, how to use it to load Spring beans conditionally.

As always, the full source code of this tutorial is available over on GitHub.

The post The Spring @ConditionalOnProperty Annotation first appeared on Baeldung.

        

Listing Kafka Consumers


1. Overview

In this quick tutorial, we'll learn how to list Kafka consumer groups and also take a peek at their details.

2. Prerequisites

To run the examples in this tutorial, we'll need a Kafka cluster to send our requests to. This can be a full-blown Kafka cluster running on a production environment, or it can be a test-specific, single-instance Kafka cluster.

For the sake of simplicity, we're going to assume that we have a single-node cluster listening to port 9092 with a Zookeeper instance listening to the 2181 port on the localhost.

Furthermore, note that we're running all example commands from the Kafka installation directory.

3. Adding Topics and Consumers

Before listing the consumers on a particular Kafka cluster, let's add a few topics first using the kafka-topics.sh shell script:

$ ./bin/kafka-topics.sh --create --topic users.registrations --replication-factor 1 \ 
  --partitions 2 --zookeeper localhost:2181
$ ./bin/kafka-topics.sh --create --topic users.verifications --replication-factor 1 \
  --partitions 2 --zookeeper localhost:2181

Now, we need to add a few consumer groups, too. The simplest way is to use the console consumer bundled in Kafka distributions:

$ ./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic users.registrations --group new-user
$ ./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic users.registrations --group new-user

Here, we've used the kafka-console-consumer.sh shell script to add two consumers listening to the same topic. These consumers are in the same group, so the messages from topic partitions will be spread across the members of the group. This way we can implement the competing consumers pattern in Kafka.

Let's consume from another topic, too:

$ ./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic users.verifications

Since we didn't specify a group for the consumer, the console consumer created a new group, with itself as the lone member.

We'll see this new group in the next section, where we'll learn how to list consumers and consumer groups on the Kafka cluster.

4. Listing Consumers

To list the consumers in the Kafka cluster, we can use the kafka-consumer-groups.sh shell script. The --list option will list all the consumer groups:

$ ./bin/kafka-consumer-groups.sh --list --bootstrap-server localhost:9092
new-user
console-consumer-40123

In addition to the --list option, we're passing the --bootstrap-server option to specify the Kafka cluster address. We have three individual consumers in two groups, so the result contains only two groups.

To see the members of the first group, we can use the “--group <name> --describe --members” options:

$ ./bin/kafka-consumer-groups.sh --describe --group new-user --members --bootstrap-server localhost:9092
GROUP           CONSUMER-ID                    HOST            CLIENT-ID            #PARTITIONS
new-user        consumer-new-user-1-b90...     /127.0.0.1      consumer-new-user-1  1
new-user        consumer-new-user-1-af8...     /127.0.0.1      consumer-new-user-1  1

Here, we can see that there are two individual consumers in our new-user group, each consuming from one partition.

If we omit the --members option, it'll list the consumers in the group, the partition number each is listening to, and their offsets:

$ ./bin/kafka-consumer-groups.sh --describe --group new-user --bootstrap-server localhost:9092
GROUP           TOPIC                       PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG          
new-user        users.registrations         1          3               3               0              
new-user        users.registrations         0          5               5               0            

One more thing to note is that the cluster or bootstrap server address is required for this command. If we omit the cluster connection information, the shell script will throw an error:

$ ./bin/kafka-consumer-groups.sh --list
Missing required argument "[bootstrap-server]"
// truncated
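
As a side note, the same information is also available programmatically through the AdminClient from the kafka-clients library. Here's a minimal sketch, assuming the same localhost:9092 broker (exception handling omitted):

Properties props = new Properties();
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

try (AdminClient admin = AdminClient.create(props)) {
    // print the id of every consumer group known to the cluster
    admin.listConsumerGroups().all().get()
      .forEach(group -> System.out.println(group.groupId()));
}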

5. Conclusion

In this short tutorial, we added a few Kafka topics and consumer groups at first. Then, we learned how to list consumer groups and view the details for each group.

The post Listing Kafka Consumers first appeared on Baeldung.

       


Guide to @DynamicPropertySource in Spring


1. Overview

Today's applications don't live in isolation: we usually need to connect to various external components such as PostgreSQL, Apache Kafka, Cassandra, Redis, and other external APIs.

In this tutorial, we're going to see how Spring Framework 5.2.5 facilitates testing such applications with the introduction of dynamic properties.

First, we'll start by defining the problem and seeing how we used to solve the problem in a less than ideal way. Then, we'll introduce the @DynamicPropertySource annotation and see how it offers a better solution to the same problem. In the end, we'll also take a look at another solution from test frameworks that can be superior compared to pure Spring solutions.

2. The Problem: Dynamic Properties

Let's suppose we're developing a typical application that uses PostgreSQL as its database. We'll begin with a simple JPA entity:

@Entity
@Table(name = "articles")
public class Article {
    @Id
    @GeneratedValue(strategy = IDENTITY)
    private Long id;
    private String title;
    private String content;
    // getters and setters
}

To make sure this entity works as expected, we should write a test for it to verify its database interactions. Since this test needs to talk to a real database, we should set up a PostgreSQL instance beforehand.

There are different approaches to set up such infrastructural tools during test executions. As a matter of fact, there are three main categories of such solutions:

  • Set up a separate database server somewhere just for the tests
  • Use some lightweight, test-specific alternatives or fakes such as H2
  • Let the test itself manage the lifecycle of the database

As we shouldn't differentiate between our test and production environments, there are better alternatives compared to using test doubles such as H2. The third option, in addition to working with a real database, offers better isolation for tests. Moreover, with technologies like Docker and Testcontainers, it's easy to implement the third option.

Here's what our test workflow will look like if we use technologies like Testcontainers:

  1. Set up a component such as PostgreSQL before all tests. Usually, these components listen to random ports.
  2. Run the tests.
  3. Tear down the component.

If our PostgreSQL container is going to listen to a random port every time, then we should somehow set and change the spring.datasource.url configuration property dynamically. Basically, each test should have its own version of that configuration property.

When the configurations are static, we can easily manage them using Spring Boot's configuration management facility. However, when we're facing dynamic configurations, the same task can be challenging.

Now that we know the problem, let's see a traditional solution for it.

3. Traditional Solution

The first approach to implement dynamic properties is to use a custom ApplicationContextInitializer. Basically, we set up our infrastructure first and use the information from the first step to customize the ApplicationContext:

@SpringBootTest
@Testcontainers
@ContextConfiguration(initializers = ArticleTraditionalLiveTest.EnvInitializer.class)
class ArticleTraditionalLiveTest {
    @Container
    static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:11")
      .withDatabaseName("prop")
      .withUsername("postgres")
      .withPassword("pass")
      .withExposedPorts(5432);
    static class EnvInitializer implements ApplicationContextInitializer<ConfigurableApplicationContext> {
        @Override
        public void initialize(ConfigurableApplicationContext applicationContext) {
            TestPropertyValues.of(
              String.format("spring.datasource.url=jdbc:postgresql://localhost:%d/prop", postgres.getFirstMappedPort()),
              "spring.datasource.username=postgres",
              "spring.datasource.password=pass"
            ).applyTo(applicationContext);
        }
    }
    // omitted 
}

Let's walk through this somewhat complex setup. The Testcontainers JUnit extension will create and start the container before anything else. Once the container is ready, the Spring extension calls the initializer, which applies the dynamic configuration to the Spring Environment. Clearly, this approach is a bit verbose and complicated.

Only after these steps can we write our test:

@Autowired
private ArticleRepository articleRepository;
@Test
void givenAnArticle_whenPersisted_thenShouldBeAbleToReadIt() {
    Article article = new Article();
    article.setTitle("A Guide to @DynamicPropertySource in Spring");
    article.setContent("Today's applications...");
    articleRepository.save(article);
    Article persisted = articleRepository.findAll().get(0);
    assertThat(persisted.getId()).isNotNull();
    assertThat(persisted.getTitle()).isEqualTo("A Guide to @DynamicPropertySource in Spring");
    assertThat(persisted.getContent()).isEqualTo("Today's applications...");
}

4. The @DynamicPropertySource

Spring Framework 5.2.5 introduced the @DynamicPropertySource annotation to facilitate adding properties with dynamic values. All we have to do is create a static method annotated with @DynamicPropertySource that takes a single DynamicPropertyRegistry instance as its argument:

@SpringBootTest
@Testcontainers
public class ArticleLiveTest {
    @Container
    static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:11")
      .withDatabaseName("prop")
      .withUsername("postgres")
      .withPassword("pass")
      .withExposedPorts(5432);
    @DynamicPropertySource
    static void registerPgProperties(DynamicPropertyRegistry registry) {
        registry.add("spring.datasource.url", 
          () -> String.format("jdbc:postgresql://localhost:%d/prop", postgres.getFirstMappedPort()));
        registry.add("spring.datasource.username", () -> "postgres");
        registry.add("spring.datasource.password", () -> "pass");
    }
    
    // tests are same as before
}

As shown above, we're using the add(String, Supplier<Object>) method on the given DynamicPropertyRegistry to add some properties to the Spring Environment. This approach is much cleaner compared to the initializer we saw earlier. Please note that methods annotated with @DynamicPropertySource must be declared static and must accept exactly one argument of type DynamicPropertyRegistry.

Basically, the main motivation behind the @DynamicPropertySource annotation is to make something that was already possible much easier to do. Although it was initially designed to work with Testcontainers, we can use it wherever we need to work with dynamic configurations.
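For instance, here's a minimal sketch that uses it without Testcontainers at all; the article.client.port property and the findFreePort() helper are purely illustrative:

@SpringBootTest
class DynamicPortLiveTest {

    @DynamicPropertySource
    static void dynamicProperties(DynamicPropertyRegistry registry) {
        // the supplier is lazy, so the port is only resolved when the Environment asks for the value
        registry.add("article.client.port", DynamicPortLiveTest::findFreePort);
    }

    // picks a free TCP port by briefly binding a socket to port 0
    static int findFreePort() {
        try (ServerSocket socket = new ServerSocket(0)) {
            return socket.getLocalPort();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // tests that read article.client.port from the Environment go here
}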

5. An Alternative: Test Fixtures

So far, in both approaches, the fixture setup and the test code are tightly intertwined. Sometimes, this tight coupling of two concerns complicates the test code, especially when we have multiple things to set up. Imagine what the infrastructure setup would look like if we were using PostgreSQL and Apache Kafka in a single test.

In addition to that, the infrastructure setup and applying dynamic configurations will be duplicated in all tests that need them.

To avoid these drawbacks, we can use test fixtures facilities that most testing frameworks provide. For instance, in JUnit 5, we can define an extension that starts a PostgreSQL instance before all tests, configures Spring Boot, and stops the PostgreSQL instance after running tests:

public class PostgreSQLExtension implements BeforeAllCallback, AfterAllCallback {
    private PostgreSQLContainer<?> postgres;
    @Override
    public void beforeAll(ExtensionContext context) {
        postgres = new PostgreSQLContainer<>("postgres:11")
          .withDatabaseName("prop")
          .withUsername("postgres")
          .withPassword("pass")
          .withExposedPorts(5432);
        postgres.start();
        String jdbcUrl = String.format("jdbc:postgresql://localhost:%d/prop", postgres.getFirstMappedPort());
        // expose the container's coordinates as system properties, which Spring reads as configuration properties
        System.setProperty("spring.datasource.url", jdbcUrl);
        System.setProperty("spring.datasource.username", "postgres");
        System.setProperty("spring.datasource.password", "pass");
    }
    @Override
    public void afterAll(ExtensionContext context) {
        postgres.stop();
    }
}

Here, we're implementing the BeforeAllCallback and AfterAllCallback interfaces to create a JUnit 5 extension. This way, JUnit 5 executes the beforeAll() logic before running any of the tests, and the afterAll() logic after all of them have run. With this approach, our test code will be as clean as:

@SpringBootTest
@ExtendWith(PostgreSQLExtension.class)
public class ArticleTestFixtureLiveTest {
    // just the test code
}

In addition to being more readable, we can easily reuse the same functionality just by adding the @ExtendWith(PostgreSQLExtension.class) annotation. There's no need to copy-paste the whole PostgreSQL setup everywhere we need it, as we did in the other two approaches.
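And if, as we imagined earlier, the same test also needed Apache Kafka, we could write a similar extension and simply stack them (KafkaExtension here is hypothetical, built along the same lines as PostgreSQLExtension):

@SpringBootTest
@ExtendWith({PostgreSQLExtension.class, KafkaExtension.class})
public class ArticleEventsLiveTest {
    // just the test code
}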

6. Conclusion

In this tutorial, we first saw how hard it can be to test a Spring component that depends on an external system such as a database. Then, we introduced three solutions for this problem, each improving upon what the previous one had to offer.

As usual, all the examples are available over on GitHub.

The post Guide to @DynamicPropertySource in Spring first appeared on Baeldung.

        