
Java 8 Stream Operation on the Empty List


1. Introduction

Java 8 brought a paradigm shift in the way we handle collections and data manipulation with the introduction of Streams. Stream APIs offer a concise and expressive way to perform operations on data, enabling developers to write more readable, robust, and efficient code.

In this tutorial, we’ll delve into the interesting world of Stream operations, focusing on the empty List. Although working with an empty List might seem trivial, it unveils some powerful aspects of the Stream API.

2. Converting an Empty List to a Stream

We can easily obtain a Stream from an empty List using the stream() method:

List<String> emptyList = new ArrayList<>();
Stream<String> emptyStream = emptyList.stream();

This enables us to perform various Stream operations on an empty List just as on a non-empty one. However, we must note that the result of such operations could be empty since the source of the Stream is empty. Furthermore, it's worth exploring how to work with an empty Stream in Java in more depth.
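
For example, terminal operations on an empty Stream simply return "empty" results instead of failing; findFirst() is just one of many such operations:

List<String> emptyList = new ArrayList<>();
long count = emptyList.stream().count(); // 0
Optional<String> first = emptyList.stream().findFirst(); // Optional.empty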

3. Significance of an Empty Stream for Handling NullPointerException

One notable advantage of using Streams with empty Lists is the prevention of NullPointerExceptions. Let’s consider the following example, where the getList() method may return null:

List<String> nameList = getList(); // Assume getList() may return null
// Without Stream
if (nameList != null) {
    for (String name : nameList) {
        System.out.println("Length of " + name + ": " + name.length());
    }
}

Here, in the non-stream approach, we must check for null before iterating over the List to avoid a NullPointerException.

On the other hand, using a Stream, we can perform a long chain of operations without explicit null checks while still avoiding a NullPointerException:

// With Stream
Optional.ofNullable(nameList)
  .ifPresent(list -> list.stream()
    .map(name -> "Length of " + name + ": " + name.length())
    .forEach(System.out::println));

Here, we’ve used Optional.ofNullable() to wrap nameList, preventing a NullPointerException if nameList is null. We then use the ifPresent() method to execute the Stream operations only if the list isn’t null.

This ensures that the Stream operations are applied only when the List is non-null, preventing any potential NullPointerException. Moreover, the code is more concise, and operations on an empty Stream won’t result in any Exceptions or errors.

However, if the getList() method returns an empty List instead of null, the map() operation has nothing to work on. It simply produces a new empty Stream, leaving nothing for the forEach() call to print.
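
We can see this behavior in a minimal snippet:

List<String> emptyNames = new ArrayList<>();
emptyNames.stream()
  .map(name -> "Length of " + name + ": " + name.length()) // map() is never invoked
  .forEach(System.out::println); // prints nothing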

In summary, both the traditional and Stream approaches aim to print the length of names from a List. The Stream approach, however, leverages Optional and Stream operations, providing a more functional and concise way to handle potential null values and empty Lists. This results in code that is both safer and more expressive.

4. Collecting a Stream of an Empty List Into Another List

Stream offers a clean way to perform operations and collect results. Even when working with an empty List, we can utilize Stream operations and collectors effectively. Here’s a simple example of collecting elements from an empty List into another List through a Stream:

List<String> emptyList = new ArrayList<>();
List<String> collectedList = emptyList.stream().collect(Collectors.toList());
System.out.println(collectedList); // Output: []

Here, collect() is a terminal operation, and it performs mutable reduction on the elements of the Stream.

Similarly, performing an intermediate operation such as filter() and collecting the result would simply produce another empty List:

List<String> emptyList = new ArrayList<>();
List<String> collectedList = emptyList.stream()
  .filter(s -> s.startsWith("a"))
  .collect(Collectors.toList());

This demonstrates that Stream operations on an empty List can be seamlessly integrated into collecting results without any issues.

5. Conclusion

In conclusion, Java 8 Stream operations on an empty List showcase the elegance and robustness of the Stream API. The ability to effortlessly convert an empty List to a Stream, handle potential NullPointerExceptions more gracefully, and seamlessly perform operations such as collecting into another List makes Streams a powerful tool for developers.

By understanding and utilizing these features, developers can write more concise and expressive code, making the most out of the Stream API, even when dealing with empty Lists.

As always, the source code accompanying the article is available over on GitHub.


Add Authorities as Custom Claims in JWT Access Tokens in Spring Authorization Server


1. Overview

Adding custom claims to JSON Web Token (JWT) access tokens can be crucial in many scenarios. Custom claims allow us to include additional information in the token payload.

In this tutorial, we’ll learn how to add resource owner authorities to a JWT access token in the Spring Authorization Server.

2. Spring Authorization Server

The Spring Authorization Server is a new project in the Spring ecosystem designed to provide Authorization Server support to Spring applications. It aims to simplify the process of implementing OAuth 2.0 and OpenID Connect (OIDC) authorization servers using the familiar and flexible Spring programming model.

2.1. Maven Dependencies

Let’s start by importing the spring-boot-starter-web, spring-boot-starter-security, spring-boot-starter-test, and spring-security-oauth2-authorization-server dependencies to the pom.xml:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <version>2.5.4</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-security</artifactId>
    <version>2.5.4</version>
</dependency>
<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-oauth2-authorization-server</artifactId>
    <version>0.2.0</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <version>2.5.4</version>
</dependency>

Alternatively, we can add the spring-boot-starter-oauth2-authorization-server dependency to our pom.xml file:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-oauth2-authorization-server</artifactId>
    <version>3.2.0</version>
</dependency>

2.2. Project Setup

Let’s set up the Spring Authorization Server for issuing access tokens. To keep things simple, we’ll be using the Spring Security OAuth Authorization Server application.

Let’s assume that we’re using the authorization server project available on GitHub.

3. Add Basic Custom Claims to JWT Access Tokens

In a Spring Security OAuth2-based application, we can add custom claims to JWT access tokens by customizing the token creation process in the Authorization Server. This type of claim can be useful for injecting additional information into JWTs, which can then be used by resource servers or other components in the authentication and authorization flow.

3.1. Add Basic Custom Claims

We can add our custom claims to an access token using the OAuth2TokenCustomizer<JwtEncodingContext> bean. By using it, every access token issued by the authorization server will have the custom claims populated.

Let’s add the OAuth2TokenCustomizer bean in the DefaultSecurityConfig class:

@Bean
@Profile("basic-claim")
public OAuth2TokenCustomizer<JwtEncodingContext> jwtTokenCustomizer() {
    return (context) -> {
      if (OAuth2TokenType.ACCESS_TOKEN.equals(context.getTokenType())) {
        context.getClaims().claims((claims) -> {
          claims.put("claim-1", "value-1");
          claims.put("claim-2", "value-2");
        });
      }
    };
}

The OAuth2TokenCustomizer interface is part of the Spring Security OAuth2 library and is used to customize OAuth 2.0 tokens. In this case, it specifically customizes JWT tokens during the encoding process. 

The lambda expression passed to the jwtTokenCustomizer() bean defines the customization logic. The context parameter represents the JwtEncodingContext during the token encoding process. 

First, we use the context.getTokenType() method to check whether the token being processed is an access token. Then, we obtain the claims associated with the JWT being constructed by using the context.getClaims() method. Finally, we add custom claims to the JWT.

In this example, two claims (“claim-1” and “claim-2“) with corresponding values (“value-1” and “value-2“) are added.

3.2. Test the Custom Claims

For testing, we are going to use the client_credentials grant type. 

First, we’ll define the client_credentials grant type as an authorized grant type in the RegisteredClient object within the AuthorizationServerConfig class:

@Bean
public RegisteredClientRepository registeredClientRepository() {
    RegisteredClient registeredClient = RegisteredClient.withId(UUID.randomUUID().toString())
      .clientId("articles-client")
      .clientSecret("{noop}secret")
      .clientAuthenticationMethod(ClientAuthenticationMethod.CLIENT_SECRET_BASIC)
      .authorizationGrantType(AuthorizationGrantType.CLIENT_CREDENTIALS)
      .authorizationGrantType(AuthorizationGrantType.AUTHORIZATION_CODE)
      .authorizationGrantType(AuthorizationGrantType.REFRESH_TOKEN)
      .redirectUri("http://127.0.0.1:8080/login/oauth2/code/articles-client-oidc")
      .redirectUri("http://127.0.0.1:8080/authorized")
      .scope(OidcScopes.OPENID)
      .scope("articles.read")
      .build();
    return new InMemoryRegisteredClientRepository(registeredClient);
}

Then, let’s create a test case in the CustomClaimsConfigurationTest class:

@ActiveProfiles(value = "basic-claim")
public class CustomClaimsConfigurationTest {
    private static final String ISSUER_URL = "http://localhost:";
    private static final String USERNAME = "articles-client";
    private static final String PASSWORD = "secret";
    private static final String GRANT_TYPE = "client_credentials";
    @Autowired
    private TestRestTemplate restTemplate;
    @LocalServerPort
    private int serverPort;
    @Test
    public void givenAccessToken_whenGetCustomClaim_thenSuccess() throws ParseException {
        String url = ISSUER_URL + serverPort + "/oauth2/token";
        HttpHeaders headers = new HttpHeaders();
        headers.setBasicAuth(USERNAME, PASSWORD);
        MultiValueMap<String, String> params = new LinkedMultiValueMap<>();
        params.add("grant_type", GRANT_TYPE);
        HttpEntity<MultiValueMap<String, String>> requestEntity = new HttpEntity<>(params, headers);
        ResponseEntity<TokenDTO> response = restTemplate.exchange(url, HttpMethod.POST, requestEntity, TokenDTO.class);
        SignedJWT signedJWT = SignedJWT.parse(response.getBody().getAccessToken());
        JWTClaimsSet claimsSet = signedJWT.getJWTClaimsSet();
        Map<String, Object> claims = claimsSet.getClaims();
        assertEquals("value-1", claims.get("claim-1"));
        assertEquals("value-2", claims.get("claim-2"));
    } 
    
    static class TokenDTO {
        @JsonProperty("access_token")
        private String accessToken;
        @JsonProperty("token_type")
        private String tokenType;
        @JsonProperty("expires_in")
        private String expiresIn;
        @JsonProperty("scope")
        private String scope;
        public String getAccessToken() {
            return accessToken;
        }
    }
}

Let’s walk through the key parts of our test to understand what is going on:

  • Start by constructing a URL for the OAuth2 token endpoint.
  • Retrieve a response containing the TokenDTO class from a POST request to the token endpoint. Here, we create an HTTP request entity with headers (basic authentication) and parameters (grant type).
  • Parse the Access Token from the response using the SignedJWT class. Also, we extract claims from the JWT and store them in the Map<String, Object>.
  • Assert that specific claims in the JWT have expected values using JUnit assertions.

This test confirms that our token encoding process is working properly and our claims are being generated as expected. Awesome!

In addition, we can get the access token using the curl command: 

curl --request POST \
  --url http://localhost:9000/oauth2/token \
  --header 'Authorization: Basic YXJ0aWNsZXMtY2xpZW50OnNlY3JldA==' \
  --header 'Content-Type: application/x-www-form-urlencoded' \
  --data grant_type=client_credentials

Here, the credentials are encoded as a Base64 string of the client ID and client secret, delimited by a single colon “:”.
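
As a quick sketch, we can reproduce this header value in plain Java using the client ID and secret registered above:

String credentials = "articles-client:secret";
String encoded = java.util.Base64.getEncoder()
  .encodeToString(credentials.getBytes(java.nio.charset.StandardCharsets.UTF_8));
System.out.println(encoded); // YXJ0aWNsZXMtY2xpZW50OnNlY3JldA==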

Now, we can run our Spring Boot application with the profile basic-claim.

If we obtain an access token and decode it using jwt.io, we find the test claims in the token’s body:

{
  "sub": "articles-client",
  "aud": "articles-client",
  "nbf": 1704517985,
  "scope": [
    "articles.read",
    "openid"
  ],
  "iss": "http://auth-server:9000",
  "exp": 1704518285,
  "claim-1": "value-1",
  "iat": 1704517985,
  "claim-2": "value-2"
}

As we can see, the values of the test claims are as expected.

We’ll discuss adding authority as claims to access tokens in the following section.

4. Add Authorities as Custom Claims to JWT Access Tokens

Adding authorities as custom claims to JWT access tokens is often a crucial aspect of securing and managing access in a Spring Boot application. Authorities, typically represented by GrantedAuthority objects in Spring Security, indicate what actions or roles a user is allowed to perform. By including these authorities as custom claims in JWT access tokens, we provide a convenient and standardized way for resource servers to understand the user’s permissions.

4.1. Add Authorities as Custom Claims

First, we use a simple in-memory user configuration with a set of authorities in the DefaultSecurityConfig class:

@Bean
UserDetailsService users() {
    UserDetails user = User.withDefaultPasswordEncoder()
      .username("admin")
      .password("password")
      .roles("USER")
      .build();
    return new InMemoryUserDetailsManager(user);
}

A single user with the username “admin”, the password “password”, and the role “USER” is created.

Now, let’s populate a custom claim in the access token with those authorities:

@Bean
@Profile("authority-claim")
public OAuth2TokenCustomizer<JwtEncodingContext> tokenCustomizer(@Qualifier("users") UserDetailsService userDetailsService) {
    return (context) -> {
      UserDetails userDetails = userDetailsService.loadUserByUsername(context.getPrincipal().getName());
      Collection<? extends GrantedAuthority> authorities = userDetails.getAuthorities();
      context.getClaims().claims(claims ->
         claims.put("authorities", authorities.stream().map(authority -> authority.getAuthority()).collect(Collectors.toList())));
    };
}

First, we define a lambda function that implements the OAuth2TokenCustomizer<JwtEncodingContext> interface. This function customizes the JWT during the encoding process.

Then, we retrieve the UserDetails object associated with the current principal (user) from the injected UserDetailsService. The principal’s name is typically the username.

After that, we retrieve the collection of GrantedAuthority objects associated with the user. 

Finally, we retrieve the JWT claims from the JwtEncodingContext and apply customizations. It includes adding a custom claim named “authorities” to the JWT. Also, this claim contains a list of authority strings obtained from the GrantedAuthority objects associated with the user. 
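
With the in-memory user defined above, the decoded body of an access token issued for that user would then contain an entry along these lines (a sketch with other claims omitted; note that Spring Security prefixes roles with ROLE_):

{
  "sub": "admin",
  "authorities": [
    "ROLE_USER"
  ]
}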

4.2. Test the Authorities Claims

Now that we’ve configured the authorization server, let’s test it. For that, we’ll use the client-server project available on GitHub.

Let’s create a REST API client that will fetch the list of claims from the access token:

@GetMapping(value = "/claims")
public String getClaims(
  @RegisteredOAuth2AuthorizedClient("articles-client-authorization-code") OAuth2AuthorizedClient authorizedClient
) throws ParseException {
    SignedJWT signedJWT = SignedJWT.parse(authorizedClient.getAccessToken().getTokenValue());
    JWTClaimsSet claimsSet = signedJWT.getJWTClaimsSet();
    Map<String, Object> claims = claimsSet.getClaims();
    return claims.get("authorities").toString();
}

The @RegisteredOAuth2AuthorizedClient annotation is used in a Spring Boot controller method to indicate that the method expects an OAuth 2.0 authorized client to be registered with the specified client ID. In this case, the client ID is “articles-client-authorization-code“.

Let’s run our Spring Boot application with the profile authority-claim.

Now, when we open the browser and try to access the http://127.0.0.1:8080/claims page, we’ll be automatically redirected to the authorization server’s login page at http://auth-server:9000/login.

After we provide the proper username and password, the authorization server redirects us back to the requested URL, where we’ll see the list of claims.

5. Conclusion

Overall, the ability to add custom claims to JWT access tokens provides a powerful mechanism for tailoring tokens to the specific needs of our application and enhancing the overall security and functionality of our authentication and authorization system.

In this article, we learned how to add custom claims and user authorities to a JWT access token in the Spring Authorization Server.

As always, the full source code is available over on GitHub.


Java Weekly, Issue 524


1. Spring and Java

>> Architecting with Java Persistence: Patterns and Strategies [infoq.com]

Exploring different Java persistence patterns — plain driver-based approaches, active records and repositories. A solid overview.

>> 11 JPA and Hibernate query hints every developer should know [thorben-janssen.com]

JPA hints to influence the query execution plan — including read-only transactions, flush mode, timeout, and loading graphs.


2. Technical & Musings

>> TDD: For Those Who Don’t Know How to Design Software [blog.thecodewhisperer.com]

An interesting take on the evolutionary nature of many software problems — using TDD as a means of knowing more. Good stuff as always.


3. Pick of the Week

>> Best engineers are focusing on helping others [eng-leadership.com]


Replace Non-Printable Unicode Characters in Java


1. Introduction

Non-printable Unicode characters are control characters, style markers, and other invisible symbols that can appear in text but aren’t meant to be displayed. These characters can cause problems with text processing, rendering, and storage. So, it’s important to have ways of replacing or removing them as required.

In this tutorial, we’ll look at different ways to replace these characters.

2. Using Regular Expressions

Java’s String class provides powerful ways to handle text manipulation, and regular expressions offer a concise way to match and replace patterns in strings. We can use a simple pattern to find and remove non-printable Unicode characters as follows:

@Test
public void givenTextWithNonPrintableChars_whenUsingRegularExpression_thenGetSanitizedText() {
    String originalText = "\n\nWelcome \n\n\n\tto Baeldung!\n\t";
    String expected = "Welcome to Baeldung!";
    String regex = "[\\p{C}]";
    Pattern pattern = Pattern.compile(regex);
    Matcher matcher = pattern.matcher(originalText);
    String sanitizedText = matcher.replaceAll("");
    assertEquals(expected, sanitizedText);
}

In this test method, the regular expression \\p{C} represents any control characters (non-printable Unicode characters) in a given originalText. Besides, we compile the regular expression into a pattern using the Pattern.compile(regex) method, and then we create a Matcher object by calling this pattern with the originalText as a parameter.

Then, we call the Matcher.replaceAll() method to replace all instances of matched control characters with an empty string, thereby removing them from the source text. Lastly, we compare the sanitizedText with the expected string using the assertEquals() method.
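
When we don’t need to reuse the compiled Pattern, the same sanitization fits in a one-liner, since String.replaceAll() compiles the expression internally:

String sanitizedText = originalText.replaceAll("\\p{C}", "");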

3. Custom Implementation

We can utilize another approach: iterating over the characters of our text and removing non-printable Unicode characters based on their code points. Let’s take a simple example:

@Test
public void givenTextWithNonPrintableChars_whenCustomImplementation_thenGetSanitizedText() {
    String originalText = "\n\nWelcome \n\n\n\tto Baeldung!\n\t";
    String expected = "Welcome to Baeldung!";
    StringBuilder strBuilder = new StringBuilder();
    originalText.codePoints().forEach((i) -> {
        if (i >= 32 && i != 127) {
            strBuilder.append(Character.toChars(i));
        }
    });
    assertEquals(expected, strBuilder.toString());
}

Here, we employ originalText.codePoints() and a forEach loop to iterate through the Unicode code points of the original text. Then, we set the condition to eliminate characters with values below 32 or equal to 127, which represent control and other non-printable characters.

We then append the remaining characters to the StringBuilder object using the strBuilder.append(Character.toChars(i)) method.
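
As a sketch of a more stream-oriented variant of the same logic, we could also collect the accepted code points directly, without an explicit forEach:

String sanitizedText = originalText.codePoints()
  .filter(c -> c >= 32 && c != 127) // keep printable characters only
  .collect(StringBuilder::new, StringBuilder::appendCodePoint, StringBuilder::append)
  .toString();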

4. Conclusion

In conclusion, this tutorial delved into addressing the challenges posed by non-printable Unicode characters in written text. The exploration encompassed two distinct methods leveraging regular expressions in Java’s String class and implementing a custom solution.

As always, the complete code samples for this article can be found over on GitHub.


Converting Integer to BigDecimal in Java


1. Overview

BigDecimal is designed to work with large floating point numbers. It solves the problem with floating point arithmetic and provides a way to control the precision. Additionally, it has plenty of conventional methods for operations over the numbers.

We can leverage the features of BigDecimal by converting an Integer. In this tutorial, we’ll learn several ways to do this and discuss their pros and cons.

2. Constructor Conversion

One of the most straightforward ways is to use constructor conversion. BigDecimal exposes constructors that can convert from a wide range of inputs. Thus, we can pass a given Integer to the BigDecimal constructor:

@ParameterizedTest
@ArgumentsSource(BigDecimalConversionArgumentsProvider.class)
void giveIntegerWhenConvertWithConstructorToBigDecimalThenConversionCorrect(Integer given, BigDecimal expected) {
    BigDecimal actual = new BigDecimal(given);
    assertThat(actual).isEqualTo(expected);
}

However, with this approach, we’ll force BigDecimal to create a new object every time:

@ParameterizedTest
@ValueSource(ints = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10})
void giveIntegerWhenConvertWithConstructorToBigDecimalThenConversionWithoutCaching(Integer given) {
    BigDecimal firstBigDecimal = new BigDecimal(given);
    BigDecimal secondBigDecimal = new BigDecimal(given);
    assertThat(firstBigDecimal)
      .isEqualTo(secondBigDecimal)
      .isNotSameAs(secondBigDecimal);
}

3. Static Factory Conversion

Another technique involves a static factory, and it’s similar to the previous example:

@ParameterizedTest
@ArgumentsSource(BigDecimalConversionArgumentsProvider.class)
void giveIntegerWhenConvertWithValueOfToBigDecimalThenConversionCorrect(Integer given, BigDecimal expected) {
    BigDecimal actual = BigDecimal.valueOf(given);
    assertThat(actual).isEqualTo(expected);
}

It provides one benefit: it can cache values, unlike constructor conversion. Thus, we can reuse the same object in different contexts. Because BigDecimal is immutable, it doesn’t create any problems.

4. Caching

The BigDecimal.valueOf() factory caches the values from zero to ten. All the values are defined in the static ZERO_THROUGH_TEN array inside the BigDecimal class:

private static final BigDecimal[] ZERO_THROUGH_TEN = {
  new BigDecimal(BigInteger.ZERO,       0,  0, 1),
  new BigDecimal(BigInteger.ONE,        1,  0, 1),
  new BigDecimal(BigInteger.TWO,        2,  0, 1),
  new BigDecimal(BigInteger.valueOf(3), 3,  0, 1),
  new BigDecimal(BigInteger.valueOf(4), 4,  0, 1),
  new BigDecimal(BigInteger.valueOf(5), 5,  0, 1),
  new BigDecimal(BigInteger.valueOf(6), 6,  0, 1),
  new BigDecimal(BigInteger.valueOf(7), 7,  0, 1),
  new BigDecimal(BigInteger.valueOf(8), 8,  0, 1),
  new BigDecimal(BigInteger.valueOf(9), 9,  0, 1),
  new BigDecimal(BigInteger.TEN,        10, 0, 2),
};

The valueOf(long) factory uses this array inside:

public static BigDecimal valueOf(long val) {
    if (val >= 0 && val < ZERO_THROUGH_TEN.length)
        return ZERO_THROUGH_TEN[(int)val];
    else if (val != INFLATED)
        return new BigDecimal(null, val, 0, 0);
    return new BigDecimal(INFLATED_BIGINT, val, 0, 0);
}

We can see that for certain values, the BigDecimal objects are the same:

@ParameterizedTest
@ValueSource(ints = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10})
void giveIntegerWhenConvertWithValueOfToBigDecimalThenConversionCachesTheResults(Integer given) {
    BigDecimal firstBigDecimal = BigDecimal.valueOf(given);
    BigDecimal secondBigDecimal = BigDecimal.valueOf(given);
    assertThat(firstBigDecimal).isSameAs(secondBigDecimal);
}

This might benefit performance if we use many BigDecimal values from zero to ten. Also, as BigDecimal objects are immutable, we can implement our own cache for the numbers we use repeatedly in our application.
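
For instance, here’s a minimal sketch of such an application-level cache; the helper name is purely illustrative:

// Hypothetical helper: caches conversions for frequently used values
private static final Map<Integer, BigDecimal> CACHE = new ConcurrentHashMap<>();

static BigDecimal toBigDecimal(Integer value) {
    return CACHE.computeIfAbsent(value, BigDecimal::valueOf);
}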

At the same time, the numbers outside of this range won’t use caching:

@ParameterizedTest
@ValueSource(ints = {11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21})
void giveIntegerWhenConvertWithValueOfToBigDecimalThenConversionWontCacheTheResults(Integer given) {
    BigDecimal firstBigDecimal = BigDecimal.valueOf(given);
    BigDecimal secondBigDecimal = BigDecimal.valueOf(given);
    assertThat(firstBigDecimal)
      .isEqualTo(secondBigDecimal)
      .isNotSameAs(secondBigDecimal);
}

Thus, relying on identity equality in production code isn’t advisable; we should compare BigDecimal values with equals() or compareTo() instead.

5. Conclusion

BigDecimal is a good choice when we need to make operations on floating point numbers, avoiding rounding errors. Also, it allows us to use massive numbers that cannot be represented otherwise. BigDecimal provides various methods for conversion from other types, for example, from an Integer.

As usual, all the code from this tutorial is available over on GitHub.


Call a Method on Each Element of a List in Java


1. Overview

When we work with Java, whether we’re working with pre-Java 8 code or embracing the functional elegance of the Stream API in Java 8 and beyond, calling a method on each element of a list is a fundamental operation.

In this tutorial, we’ll explore the methods and techniques available for calling a method on each list element.

2. Introduction to the Problem

As usual, let’s understand the problem quickly through an example. Let’s say we have the Player class:

class Player {
    private int id;
    private String name;
    private int score;
    public Player(int id, String name, int score) {
        this.id = id;
        this.name = name;
        this.score = score;
    }
   // getter and setter methods are omitted
}

Then, let’s initialize a list of Players as our input:

List<Player> PLAYERS = List.of(
  new Player(1, "Kai", 42),
  new Player(2, "Eric", 43),
  new Player(3, "Saajan", 64),
  new Player(4, "Kevin", 30),
  new Player(5, "John", 5));

Let’s say we want to execute a method on each player in the PLAYERS list. However, this requirement can have two scenarios:

  • Perform an action on each element and don’t care about the returned value, such as printing each player’s name
  • The method returns a result, effectively transforming the input list to another list, for example, extracting player names into a new List<String> from PLAYERS

In this tutorial, we’ll discuss both scenarios. Furthermore, we’ll see how to achieve them in pre-Java 8 and Java 8+.

Next, let’s see them in action.

3. Traditional Approach (Prior to Java 8)

Before Java 8, loops were the basic technique for calling a method on each element in a list. However, some external libraries provide convenience methods that allow us to solve the problem with less code.

Next, let’s take a close look at them.

3.1. Performing an Action on Each Element

Looping through the elements and calling the method can be the most straightforward solution to perform an action on each element:

for (Player p : PLAYERS) {
    log.info(p.getName());
}

If we check the console after running the for loop above, we’ll see the log output. Each player’s name was printed:

21:14:47.219 [main] INFO ... - Kai
21:14:47.220 [main] INFO ... - Eric
21:14:47.220 [main] INFO ... - Saajan
21:14:47.220 [main] INFO ... - Kevin
21:14:47.220 [main] INFO ... - John

3.2. Transforming to Another List

Similarly, if we want to extract players’ names by calling player.getName(), we can first declare an empty string list and add each player’s name in a loop:

List<String> names = new ArrayList<>();
for (Player p : PLAYERS) {
    names.add(p.getName());
}
assertEquals(Arrays.asList("Kai", "Eric", "Saajan", "Kevin", "John"), names);

3.3. Using Guava’s transform() Method

Alternatively, we can use Guava‘s Lists.transform() method to apply the list transformation.

The first step to use the Guava library is to add the dependency to our pom.xml:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>32.1.3-jre</version>
</dependency>

The latest Guava version can be found on Maven Central.

Then, we can use the Lists.transform() method:

List<String> names = Lists.transform(PLAYERS, new Function<Player, String>() {
    @Override
    public String apply(Player input) {
        return input.getName();
    }
});
assertEquals(Arrays.asList("Kai", "Eric", "Saajan", "Kevin", "John"), names);

As the code above shows, we passed an anonymous Function<Player, String> object to the transform() method. Of course, we must implement the apply() method in the Function<F, T> interface to perform the transformation logic from F (Player) to T (String).

4. Java 8 and Beyond: The Stream API

The Stream API was introduced in Java 8, providing a convenient way to work with collections. Next, let’s see how our problem can be solved in Java 8+.

4.1. Performing an Action on Each Element

In Java 8, the forEach() method is a new method in the Iterable interface:

default void forEach(Consumer<? super T> action) {
    Objects.requireNonNull(action);
    for (T t : this) {
        action.accept(t);
    }
}

As we can see, forEach() wraps the loop implementation and makes the code on the caller’s side easier to read.

As Iterable is the supertype of the Collection interface, forEach() is available in all Collection types, such as List and Set.

The forEach() method expects a Consumer object as the argument, which makes it ideal for performing an action on each element of a list. For example, let’s run this line of code:

PLAYERS.forEach(player -> log.info(player.getName()));

We see the expected output get printed:

21:14:47.223 [main] INFO ... - Kai
21:14:47.223 [main] INFO ... - Eric
21:14:47.224 [main] INFO ... - Saajan
21:14:47.224 [main] INFO ... - Kevin
21:14:47.224 [main] INFO ... - John

In the code above, we passed a lambda expression as the Consumer object to the forEach() method.

4.2. Transforming to Another List

To transform elements within a list by applying a specific function and gathering these modified elements into a new list, the Stream.map() method can be employed. Of course, we must first call stream() to convert our list to a Stream, and collect() the transformed elements:

List<String> names = PLAYERS.stream()
  .map(Player::getName)
  .collect(Collectors.toList());
assertEquals(List.of("Kai", "Eric", "Saajan", "Kevin", "John"), names);

As we can see, compared to Guava’s Lists.transform(), the Stream.map() approach is more fluent and easier to understand.

It’s worth noting that the “Player::getName” we passed to the map() method is a method reference. It works just as well if we replace the method reference with the lambda expression “player -> player.getName()”.

5. Conclusion

In this article, we explored two scenarios for invoking a method on each element of a list. We delved into various solutions addressing this challenge, considering both pre-Java 8 and Java 8 and later versions.

As always, the complete source code for the examples is available over on GitHub.


Create Kubernetes Operators with the Java Operator SDK


1. Introduction

In this tutorial, we’ll present the concept of Kubernetes operators and how we can implement them using the Java Operator SDK. To illustrate this, we’ll implement an operator that simplifies the task of deploying an instance of the OWASP’s Dependency-Track application to a cluster.

2. What Is a Kubernetes Operator?

In Kubernetes parlance, an Operator is a software component, usually deployed in a cluster, that manages the lifecycle of a set of resources. It extends the native set of controllers, such as the replicaset and job controllers, to manage complex or interrelated components as a single managed unit.

Let’s look at a few common use cases where operators are used:

  • Enforce best practices when deploying applications to a cluster
  • Keep track and recover from accidentally removing/changing resources used by an application
  • Automate housekeeping tasks associated with an application, such as regular backups and cleanups
  • Automate off-cluster resource provisioning — for example, storage buckets and certificates
  • Improve application developers’ experience when interacting with Kubernetes in general
  • Improve overall security by allowing users to manage only application-level resources instead of low-level ones such as pods and deployments
  • Expose application-specific resources (a.k.a. Custom Resource Definitions) as Kubernetes resources

This last use case is quite interesting. It allows a solution provider to leverage the existing practices around regular Kubernetes resources to manage application-specific resources. The main benefit is that anyone adopting this application can use existing infrastructure-as-code tools.

To give us an idea of the different kinds of available operators, we can check the OperatorHub.io site. There, we’ll find operators for popular databases, API managers, development tools, and others.

3. Operators and CRDs

Custom Resource Definitions, or CRDs for short, are a Kubernetes extension mechanism that allows us to store structured data in a cluster. As with almost everything on this platform, the CRD definition itself is also a resource.

This meta-definition describes the scope of a given CRD instance (namespace-based or global) and the schema used to validate CRD instances. Once registered, users can create CRD instances as if they were native ones. Cluster administrators can also include CRDs as part of role definitions, thus granting access only to authorized users and applications.

Now, registering a CRD by itself doesn’t imply that Kubernetes will use it in any way. As far as Kubernetes is concerned, a CRD instance is just an entry in its internal database. Since none of the standard Kubernetes native controllers know what to do with it, nothing will happen.

This is where the controller part of an operator comes into play. Once deployed, it will watch for events related to the corresponding custom resources and act in response to them.

Here, the act part is the important one. The terminology is inspired by Control Theory, where a controller continuously compares the desired state of a system with its observed state and acts to reconcile any difference.

4. Implementing an Operator

Let’s review the main tasks we have to complete to create an Operator:

  • Define a model of the target resources we’ll manage through the operator
  • Create a CRD that captures the required parameters needed to deploy those resources
  • Create a controller that watches a cluster for events related to the registered CRD

For this tutorial, we’ll implement an operator for the OWASP flagship project, Dependency-Track. This application allows users to track vulnerabilities in libraries used across an organization, thus allowing software security professionals to evaluate and address any issues found.

Dependency-Track’s Docker distribution consists of two components: API and frontend services, each with its own image. When deploying them to a Kubernetes cluster, the common practice is to wrap each one in a Deployment to manage the Pods that run these images.

That’s not all, however. We also need a few extra resources for a complete solution:

  • Services to act as load balancers in front of each Deployment
  • An Ingress to expose the application to the external world
  • A Persistent Volume claim to store vulnerability definitions downloaded from public sources
  • ConfigMap and Secret resources to store generic and sensitive parameters, respectively

Moreover, we also need to properly set liveness/readiness probes, resource limits, and other minutiae that a regular user should not be concerned about.

Let’s see how we can simplify this task with an Operator.

5. Defining the Model

Our operator will focus on the minimal set of resources needed to run a Dependency-Track system. Fortunately, the provided images have sensible default values, so we only need one piece of information: the external URL used to access the application.

This leaves database and storage settings out for now, but once we get the basics right, adding those features is straightforward.

We will, however, leave some leeway for customization. In particular, it’s convenient to allow users to override the image and version used for the deployments, as they’re constantly evolving.


The required model parameters are:

  • Kubernetes namespace where the resources will be created
  • A name used for the installation and to derive each component name
  • The hostname to use with the Ingress resource
  • Optional extra annotations to add to the Ingress. We need those as some cloud providers (AWS, for example) require them to work properly.

6. Controller Project Setup

The next step would be to define the CRD schema by hand, but since we’re using the Java Operator SDK, this will be taken care of. Instead, let’s move to the controller project itself.

We’ll start with a standard Spring Boot 3 WebFlux application and add the required dependencies:

<dependency>
    <groupId>io.javaoperatorsdk</groupId>
    <artifactId>operator-framework-spring-boot-starter</artifactId>
    <version>5.4.0</version>
</dependency>
<dependency>
    <groupId>io.javaoperatorsdk</groupId>
    <artifactId>operator-framework-spring-boot-starter-test</artifactId>
    <version>5.4.0</version>
    <scope>test</scope>
    <exclusions>
        <exclusion>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-slf4j2-impl</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>io.fabric8</groupId>
    <artifactId>crd-generator-apt</artifactId>
    <version>6.9.2</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.bouncycastle</groupId>
    <artifactId>bcprov-jdk18on</artifactId>
    <version>1.77</version>
</dependency>
<dependency>
    <groupId>org.bouncycastle</groupId>
    <artifactId>bcpkix-jdk18on</artifactId>
    <version>1.77</version>
</dependency>

The latest versions of these dependencies are available on Maven Central.

The first two are required to implement and test the operator, respectively. crd-generator-apt is the annotation processor that generates the CRD definition from annotated classes. Finally, the bouncycastle libraries are required to support modern encryption standards.

Notice the exclusion added to the test starter. We’ve removed the log4j dependency because it conflicts with logback.

7. Implementing the Primary Resource

A Primary Resource class represents a CRD that users will deploy into a cluster. It is identified using the @Group and @Version annotations so the CRD annotation processor can generate the appropriate CRD definition at compile-time:

@Group("com.baeldung")
@Version("v1")
public class DeptrackResource extends CustomResource<DeptrackSpec, DeptrackStatus> implements Namespaced {
    @JsonIgnore
    public String getFrontendServiceName() {
        return this.getMetadata().getName() + "-" + DeptrackFrontendServiceResource.COMPONENT;
    }
    @JsonIgnore
    public String getApiServerServiceName() {
        return this.getMetadata().getName() + "-" + DeptrackApiServerServiceResource.COMPONENT;
    }
}

Here, we leverage the SDK’s class CustomResource to implement our DeptrackResource. Besides the base class, we’re also using Namespaced, a marker interface that informs the annotation processor that our CRD instances will be deployed to a Kubernetes namespace.

We’ve added just two helper methods to the class, which we’ll later use to derive names for the frontend and API services. We need the @JsonIgnore annotation, in this case, to avoid issues when serializing/deserializing CRD instances in API calls to Kubernetes.

8. Specification and Status Classes

The CustomResource class requires two template parameters:

  • A specification class with the parameters supported by our model
  • A status class with information about the dynamic state of our system

In our case, we have just a few parameters, so this specification is quite simple:

public class DeptrackSpec {
    private String apiServerImage = "dependencytrack/apiserver";
    private String apiServerVersion = "";
    private String frontendImage = "dependencytrack/frontend";
    private String frontendVersion = "";
    private String ingressHostname;
    private Map<String, String> ingressAnnotations;
    // ... getters/setters omitted
}

As for the status class, we’ll just extend ObservedGenerationAwareStatus:

public class DeptrackStatus extends ObservedGenerationAwareStatus {
}

Using this approach, the SDK will automatically increment the observedGeneration status field on each update. This is a common practice used by controllers to track changes in a resource.

9. Reconciler

Next, we need to create a Reconciler class that is responsible for managing the overall state of the Dependency-Track system. Our class must implement this interface, which takes the resource class as a parameter:

@ControllerConfiguration(dependents = {
    @Dependent(name = DeptrackApiServerDeploymentResource.COMPONENT, type = DeptrackApiServerDeploymentResource.class),
    @Dependent(name = DeptrackFrontendDeploymentResource.COMPONENT, type = DeptrackFrontendDeploymentResource.class),
    @Dependent(name = DeptrackApiServerServiceResource.COMPONENT, type = DeptrackApiServerServiceResource.class),
    @Dependent(name = DeptrackFrontendServiceResource.COMPONENT, type = DeptrackFrontendServiceResource.class),
    @Dependent(type = DeptrackIngressResource.class)
})
@Component
public class DeptrackOperatorReconciler implements Reconciler<DeptrackResource> {
    @Override
    public UpdateControl<DeptrackResource> reconcile(DeptrackResource resource, Context<DeptrackResource> context) throws Exception {
        return UpdateControl.noUpdate();
    }
}

The key point here is the @ControllerConfiguration annotation. Its dependents property lists individual resources whose lifecycle will be linked to the primary resource.

For deployments and services, we need to specify a name property in addition to the resource’s type to distinguish them. As for the Ingress, there’s no need for a name since there’s just one for each deployed Dependency-Track resource.

Notice that we’ve also added a @Component annotation. We need this so the operator’s autoconfiguration logic detects the reconciler and adds it to its internal registry.

10. Dependent Resource Classes

For each resource that we want to create in the cluster as a result of a CRD deployment, we need to implement a KubernetesDependentResource class. These classes must be annotated with @KubernetesDependent and are responsible for managing the lifecycle of those resources in response to changes in the primary resource.

The SDK provides the CRUDKubernetesDependentResource utility class that vastly simplifies this task. We just need to override the desired() method, which returns a description of the desired state for the dependent resource:

@KubernetesDependent(resourceDiscriminator = DeptrackApiServerDeploymentResource.Discriminator.class)
public class DeptrackApiServerDeploymentResource extends CRUDKubernetesDependentResource<Deployment, DeptrackResource> {
    public static final String COMPONENT = "api-server";
    private Deployment template;
    public DeptrackApiServerDeploymentResource() {
        super(Deployment.class);
        this.template = BuilderHelper.loadTemplate(Deployment.class, "templates/api-server-deployment.yaml");
    }
    @Override
    protected Deployment desired(DeptrackResource primary, Context<DeptrackResource> context) {
        ObjectMeta meta = fromPrimary(primary, COMPONENT)
          .build();
        return new DeploymentBuilder(template)
          .withMetadata(meta)
          .withSpec(buildSpec(primary, meta))
          .build();
    }
    private DeploymentSpec buildSpec(DeptrackResource primary, ObjectMeta primaryMeta) {
        return new DeploymentSpecBuilder()
          .withSelector(buildSelector(primaryMeta.getLabels()))
          .withReplicas(1)
          .withTemplate(buildPodTemplate(primary, primaryMeta))
          .build();
    }
    private LabelSelector buildSelector(Map<String, String> labels) {
        return new LabelSelectorBuilder()
          .addToMatchLabels(labels)
          .build();
    }
    private PodTemplateSpec buildPodTemplate(DeptrackResource primary, ObjectMeta primaryMeta) {
        return new PodTemplateSpecBuilder()
          .withMetadata(primaryMeta)
          .withSpec(buildPodSpec(primary))
          .build();
    }
    private PodSpec buildPodSpec(DeptrackResource primary) {
        String imageVersion = StringUtils.hasText(primary.getSpec().getApiServerVersion()) ?
          ":" + primary.getSpec().getApiServerVersion().trim() : "";
        String imageName = StringUtils.hasText(primary.getSpec().getApiServerImage()) ?
          primary.getSpec().getApiServerImage().trim() : Constants.DEFAULT_API_SERVER_IMAGE;
        return new PodSpecBuilder(template.getSpec().getTemplate().getSpec())
          .editContainer(0)
            .withImage(imageName + imageVersion)
            .and()
          .build();
    }
}

In this case, we create the Deployment using the available builder classes. The data comes partly from metadata extracted from the primary resource passed to the method and partly from a template read at initialization time. This approach allows us to use existing, battle-proven deployments as a template and modify only what’s really needed.

Finally, we need to specify a Discriminator class, which the operator engine uses to target the right resource class when processing events from multiple sources of the same kind. Here, we’ll use an implementation based on the ResourceIDMatcherDiscriminator utility class available in the framework:

class Discriminator extends ResourceIDMatcherDiscriminator<Deployment, DeptrackResource> {
     public Discriminator() {
         super(COMPONENT, (p) -> new ResourceID(
           p.getMetadata().getName() + "-" + COMPONENT,
           p.getMetadata().getNamespace()));
     }
}

The utility class requires an event source name and a mapping function. The latter takes a primary resource instance and returns the resource identifier (namespace + name) for the associated component.

Since all resource classes share the same basic structure, we won’t reproduce them here. Instead, we recommend checking the source code to see how each resource is built.

11. Local Testing

Since the controller is just a regular Spring application, we can use regular test frameworks to create unit and integration tests for our application.

The Java Operator SDK also offers a convenient mock Kubernetes implementation that helps with simple test cases. To use it in test classes, we add the @EnableMockOperator annotation together with the standard @SpringBootTest:

@SpringBootTest
@EnableMockOperator(crdPaths = "classpath:META-INF/fabric8/deptrackresources.com.baeldung-v1.yml")
class ApplicationUnitTest {
    @Autowired
    KubernetesClient client;
    @Test
    void whenContextLoaded_thenCrdRegistered() {
        assertThat(
          client
            .apiextensions()
            .v1()
            .customResourceDefinitions()
            .withName("deptrackresources.com.baeldung")
            .get())
          .isNotNull();
    }
}

The crdPaths property contains the location where the annotation processor creates the CRD definition YAML file. During test initialization, the mock Kubernetes service automatically registers it.

The SDK’s test infrastructure also configures a Kubernetes client that we can use to simulate deployments and check whether the expected resources are correctly created. Notice that there’s no need for a working Kubernetes cluster!

12. Packaging and Deployment

To package our controller project, we can use a Dockerfile or, even better, Spring Boot’s build-image goal. We recommend the latter, as it ensures that the image follows recommended best practices regarding security and layer organization.
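
For instance, with the Spring Boot Maven plugin, building the image boils down to a single command; the image name here is just a placeholder:

$ mvn spring-boot:build-image -Dspring-boot.build-image.imageName=registry.example.com/deptrack-operator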

Once we’ve published the image to a local or remote registry, we must create a YAML manifest to deploy the controller into an existing cluster.

This manifest contains the deployment itself that manages the controller and supporting resources:

  • The CRD definition
  • A namespace where the controller will “live”
  • A Cluster Role listing all APIs used by the controller
  • A Service Account
  • A Cluster Role Binding that links the role to the account

The resulting manifest is available in our GitHub repository.

13. CRD Deployment Test

To complete our tutorial, let’s create a simple Dependency-Track CRD manifest and deploy it to a dedicated namespace (“test”).

For our test, we’re using a local Kubernetes that listens on IP address 172.31.42.16, so we’ll use deptrack.172.31.42.16.nip.io as the hostname. NIP.IO is a DNS service that resolves any hostname in the form *.1.2.3.4.nip.io to the IP address 1.2.3.4, so we don’t need to set up any DNS entry.
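
We can verify the resolution with a quick DNS lookup:

$ dig +short deptrack.172.31.42.16.nip.io
172.31.42.16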

Let’s have a look at the deployment manifest:

apiVersion: com.baeldung/v1
kind: DeptrackResource
metadata:
  namespace: test
  name: deptrack1
  labels:
    project: tutorials
  annotations:
    author: Baeldung
spec:
  ingressHostname: deptrack.172.31.42.16.nip.io

Now, let’s deploy it with kubectl:

$ kubectl apply -f k8s/test-resource.yaml
deptrackresource.com.baeldung/deptrack1 created

We can check the controller logs to see that it reacted to the CRD creation, and then verify that it created the dependent resources:

$ kubectl get --namespace test deployments
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
deptrack1-api-server   0/1     1            0           62s
deptrack1-frontend     1/1     1            1           62s
$ kubectl get --namespace test services
NAME                         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
deptrack1-frontend-service   ClusterIP   10.43.122.76   <none>        8080/TCP   2m17s
$ kubectl get --namespace test ingresses
NAME                CLASS     HOSTS                          ADDRESS        PORTS   AGE
deptrack1-ingress   traefik   deptrack.172.31.42.16.nip.io   172.31.42.16   80      2m53s

As expected, the test namespace now has two deployments, two services, and an ingress. If we open a browser and point to https://deptrack.172.31.42.16.nip.io, we’ll see the application’s login page. This shows that the solution was correctly deployed.

To complete the test, let’s remove the CRD:

$ kubectl delete --namespace test deptrackresource/deptrack1
deptrackresource.com.baeldung "deptrack1" deleted

Since Kubernetes knows which resources are linked to the CRD, they’ll also be deleted:

$ kubectl get --namespace test deployments
No resources found in test namespace.

14. Conclusion

In this tutorial, we’ve shown how to implement a basic Kubernetes Operator using the Java Operator SDK. Despite the amount of required boilerplate code, the implementation is straightforward.

Also, the SDK handles most of the heavy lifting of state reconciliation, leaving developers the task of defining the best way to handle complex deployments.

As usual, all code is available over on GitHub.

       

Converting Integer to BigDecimal in Java

$
0
0

1. Overview

BigDecimal is designed to work with large floating point numbers. It solves the problem with floating point arithmetic and provides a way to control the precision. Additionally, it has plenty of conventional methods for operations over the numbers.

We can leverage the features of BigDecimal by converting an Integer. In this tutorial, we’ll learn several ways to do this and discuss their pros and cons.

2. Constructor Conversion

One of the most straightforward ways to use constructor conversion. BigDecimal exposes constructors that can convert from a wide range of inputs. Thus, we can pass a given Integer to the BigDecimal constructor:

@ParameterizedTest
@ArgumentsSource(BigDecimalConversionArgumentsProvider.class)
void giveIntegerWhenConvertWithConstructorToBigDecimalThenConversionCorrect(Integer given, BigDecimal expected) {
    BigDecimal actual = new BigDecimal(given);
    assertThat(actual).isEqualTo(expected);
}

However, with this approach, we’ll force BigDecimal to create a new object every time:

@ParameterizedTest
@ValueSource(ints = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10})
void giveIntegerWhenConvertWithConstructorToBigDecimalThenConversionWithoutCaching(Integer given) {
    BigDecimal firstBigDecimal = new BigDecimal(given);
    BigDecimal secondBigDecimal = new BigDecimal(given);
    assertThat(firstBigDecimal)
      .isEqualTo(secondBigDecimal)
      .isNotSameAs(secondBigDecimal);
}

3. Static Factory Conversion

Another technique involves a static factory, and it’s similar to the previous example:

@ParameterizedTest
@ArgumentsSource(BigDecimalConversionArgumentsProvider.class)
void giveIntegerWhenConvertWithValueOfToBigDecimalThenConversionCorrect(Integer given, BigDecimal expected) {
    BigDecimal actual = BigDecimal.valueOf(given);
    assertThat(actual).isEqualTo(expected);
}

It provides one benefit: it can cache values, unlike constructor conversion. Thus, we can reuse the same object in different contexts. Because BigDecimal is immutable, it doesn’t create any problems.

4. Caching

The BigIntegers.valueOf() factory caches the values from zero to ten. All the values are defined in the static ZERO_THROUGH_TEN array inside the BigDecimal class:

private static final BigDecimal[] ZERO_THROUGH_TEN = {
  new BigDecimal(BigInteger.ZERO,       0,  0, 1),
  new BigDecimal(BigInteger.ONE,        1,  0, 1),
  new BigDecimal(BigInteger.TWO,        2,  0, 1),
  new BigDecimal(BigInteger.valueOf(3), 3,  0, 1),
  new BigDecimal(BigInteger.valueOf(4), 4,  0, 1),
  new BigDecimal(BigInteger.valueOf(5), 5,  0, 1),
  new BigDecimal(BigInteger.valueOf(6), 6,  0, 1),
  new BigDecimal(BigInteger.valueOf(7), 7,  0, 1),
  new BigDecimal(BigInteger.valueOf(8), 8,  0, 1),
  new BigDecimal(BigInteger.valueOf(9), 9,  0, 1),
  new BigDecimal(BigInteger.TEN,        10, 0, 2),
};

The valueOf(long) factory uses this array inside:

public static BigDecimal valueOf(long val) {
    if (val >= 0 && val < ZERO_THROUGH_TEN.length)
        return ZERO_THROUGH_TEN[(int)val];
    else if (val != INFLATED)
        return new BigDecimal(null, val, 0, 0);
    return new BigDecimal(INFLATED_BIGINT, val, 0, 0);
}

We can see that for certain values, the BigDecimal objects are the same:

@ParameterizedTest
@ValueSource(ints = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10})
void giveIntegerWhenConvertWithValueOfToBigDecimalThenConversionCachesTheResults(Integer given) {
    BigDecimal firstBigDecimal = BigDecimal.valueOf(given);
    BigDecimal secondBigDecimal = BigDecimal.valueOf(given);
    assertThat(firstBigDecimal).isSameAs(secondBigDecimal);
}

This might benefit performance if we use many BigDecimal values from zero to ten. Also, as BigDecimal objects are immutable, we can implement our cache for the numbers we use repeatedly in our application.

At the same time, the numbers outside of this range won’t use caching:

@ParameterizedTest
@ValueSource(ints = {11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21})
void givenIntegerWhenConvertWithValueOfToBigDecimalThenConversionWontCacheTheResults(Integer given) {
    BigDecimal firstBigDecimal = BigDecimal.valueOf(given);
    BigDecimal secondBigDecimal = BigDecimal.valueOf(given);
    assertThat(firstBigDecimal)
      .isEqualTo(secondBigDecimal)
      .isNotSameAs(secondBigDecimal);
}

Thus, relying on identity equality in production code isn’t advisable.

5. Conclusion

BigDecimal is a good choice when we need to perform operations on decimal numbers while avoiding rounding errors. It also allows us to work with huge numbers that cannot be represented otherwise. BigDecimal provides various ways to convert from other types, for example, from an Integer.

As usual, all the code from this tutorial is available over on GitHub.

       

Convert Joda-Time DateTime to Date and Vice Versa


1. Introduction

Joda-Time is a very popular Java library for date and time manipulation. It offers a much more intuitive and flexible API than the standard Date and Calendar classes.

In this tutorial, we’ll look at how to transform Joda-Time DateTime objects into standard Java Date ones and vice versa.

2. Setting Up Joda-Time

First, we should ensure that our project includes the joda-time library:

<dependency>
    <groupId>joda-time</groupId>
    <artifactId>joda-time</artifactId>
    <version>2.12.6</version>
</dependency>

Alternatively, we can download the jar file and put it in our classpath.

3. Convert Joda-Time DateTime to Java Date

To convert a Joda-Time DateTime object to a standard Java Date, we use the toDate() method. Below is a simple example:

@Test
public void givenJodaDateTime_whenConvertingToJavaDate_thenConversionIsCorrect() {
    DateTime jodaDateTime = new DateTime();
    java.util.Date javaDate = jodaDateTime.toDate();
    assertEquals(jodaDateTime.getMillis(), javaDate.getTime());
}

In this test method, we create a new instance of DateTime from Joda-Time named jodaDateTime. Subsequently, we call the toDate() method on this Joda DateTime instance to obtain a corresponding java.util.Date object.

The test is performed using the assertEquals method, which asserts that the time in milliseconds retrieved from the original Joda DateTime object equals the time obtained from the converted java.util.Date object.

4. Convert Java Date to Joda-Time DateTime

It’s also straightforward to convert a plain Java Date object into a Joda-Time DateTime. We can use the DateTime constructor that accepts a java.util.Date parameter as follows:

@Test
public void givenJavaDate_whenConvertingToJodaDateTime_thenConversionIsCorrect() {
    java.util.Date javaDate = new java.util.Date();
    DateTime jodaDateTime = new DateTime(javaDate);
    assertEquals(javaDate.getTime(), jodaDateTime.getMillis());
}

Within the above test method, we instantiate a new java.util.Date object representing the current date and time. Subsequently, we create a corresponding Joda DateTime object using the provided Java Date. The actual validation occurs using the assertEquals method, where we verify that the time in milliseconds retrieved from the original java.util.Date object equals the time represented by the Joda DateTime object.

5. Conclusion

In conclusion, one of the usual operations when working with dates and times in Java is converting between Joda-Time DateTime objects and the standard Java Date.

Now that we have gone through the examples presented above, it should be easy for us to implement Joda-Time in our projects and easily convert these two types.

As usual, the accompanying source code can be found over on GitHub.

       

Solving the ParameterResolutionException in JUnit 5


1. Overview

JUnit 5 introduced some powerful features, including support for parameterized testing. Writing parameterized tests can save a lot of time, and in many cases, they can be enabled with a simple combination of annotations.

However, incorrect configuration can lead to exceptions that are difficult to debug since JUnit manages many aspects of test execution behind the scenes.

One such exception is the ParameterResolutionException:

org.junit.jupiter.api.extension.ParameterResolutionException: No ParameterResolver registered for parameter ...

In this tutorial, we’ll explore the causes of this exception and how to solve it.

2. JUnit 5’s ParameterResolver 

To understand the cause of this exception, we first need to understand what the message is telling us we’re missing: a ParameterResolver.

In JUnit 5, the ParameterResolver interface was introduced to allow developers to extend JUnit’s basic functionality and write tests that take parameters of any type. Let’s look at a simple ParameterResolver implementation:

public class FooParameterResolver implements ParameterResolver {
    @Override
    public boolean supportsParameter(ParameterContext parameterContext, ExtensionContext extensionContext)
      throws ParameterResolutionException {
        // Parameter support logic
    }
    @Override
    public Object resolveParameter(ParameterContext parameterContext, ExtensionContext extensionContext) throws ParameterResolutionException {
        // Parameter resolution logic
    }
}

We can see that the class has two main methods:

  • supportsParameter(): determines if a parameter type is supported
  • resolveParameter(): returns a parameter for test execution

Because the ParameterResolutionException is thrown in the absence of a ParameterResolver implementation, we won’t get too concerned with implementation details just yet. Let’s first discuss some potential causes of the exception.

3. The ParameterResolutionException

The ParameterResolutionException can be difficult to debug, especially for those less familiar with parameterized testing.

To start, let’s define a simple Book class that we’ll write unit tests for:

public class Book {
    private String title;
    private String author;
    // Standard constructor, getters, and setters
}

For our example, we’ll write some unit tests for Book that verify different title values. Let’s start with two very simple tests:

@Test
void givenWutheringHeights_whenCheckingTitleLength_thenTitleIsPopulated() {
    Book wuthering = new Book("Wuthering Heights", "Emily Bronte");
    assertThat(wuthering.getTitle().length()).isGreaterThan(0);
}
@Test
void givenJaneEyre_whenCheckingTitleLength_thenTitleIsPopulated() {
    Book jane = new Book("Jane Eyre", "Charlotte Bronte");
    assertThat(jane.getTitle().length()).isGreaterThan(0);
}

It’s easy to see these two tests are basically doing the same thing: setting the Book title and checking the length. We can simplify the tests by combining them into a single parameterized test. Let’s discuss some ways in which this refactoring could go wrong.

3.1. Passing Parameters to @Test Methods

Taking a very quick approach, we may believe passing parameters to the @Test annotated method is enough:

@Test
void givenTitleAndAuthor_whenCreatingBook_thenFieldsArePopulated(String title, String author) {
    Book book = new Book(title, author);
    assertThat(book.getTitle().length()).isGreaterThan(0);
    assertThat(book.getAuthor().length()).isGreaterThan(0);
}

The code compiles and runs, but thinking about this a little further, we should question where these parameters are coming from. Running this example, we see an exception:

org.junit.jupiter.api.extension.ParameterResolutionException: No ParameterResolver registered for parameter [java.lang.String arg0] in method ...

JUnit has no way to know what parameters to pass to the test method.

Let’s continue refactoring our unit test and look into another cause of the ParameterResolutionException.

3.2. Competing Annotations

We could supply the missing parameters with a ParameterResolver as we mentioned earlier, but let’s start more simply with a value source. Since there are two values — title and author — we can use a CsvSource to provide these values to our test.

Additionally, we’re missing a key annotation: @ParameterizedTest. This annotation informs JUnit that our test is parameterized and has test values injected into it.

Let’s make a quick attempt at refactoring:

@ParameterizedTest
@CsvSource({"Wuthering Heights, Emily Bronte", "Jane Eyre, Charlotte Bronte"})
@Test
void givenTitleAndAuthor_whenCreatingBook_thenFieldsArePopulated(String title, String author) {
    Book book = new Book(title, author);
    assertThat(book.getTitle().length()).isGreaterThan(0);
    assertThat(book.getAuthor().length()).isGreaterThan(0);
}

This seems reasonable. However, when we run the unit test, we see something interesting: two passing test runs and a third failing test run. Looking closer, we see a warning as well:

WARNING: Possible configuration error: method [...] resulted in multiple TestDescriptors [org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor, org.junit.jupiter.engine.descriptor.TestTemplateTestDescriptor].
This is typically the result of annotating a method with multiple competing annotations such as @Test, @RepeatedTest, @ParameterizedTest, @TestFactory, etc.

By adding competing test annotations, we’ve unintentionally created multiple TestDescriptors. What this means is that JUnit is still running the original @Test version of our test along with our new parameterized test.

Simply removing the @Test annotation fixes this issue.
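After removing it, only the parameterized annotations remain, and both CsvSource rows pass:

@ParameterizedTest
@CsvSource({"Wuthering Heights, Emily Bronte", "Jane Eyre, Charlotte Bronte"})
void givenTitleAndAuthor_whenCreatingBook_thenFieldsArePopulated(String title, String author) {
    Book book = new Book(title, author);
    assertThat(book.getTitle().length()).isGreaterThan(0);
    assertThat(book.getAuthor().length()).isGreaterThan(0);
}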

3.3. Working with a ParameterResolver

Earlier, we discussed a simple example of a ParameterResolver implementation. Now that we have a working test, let’s introduce a BookParameterResolver:

public class BookParameterResolver implements ParameterResolver {
    @Override
    public boolean supportsParameter(ParameterContext parameterContext, ExtensionContext extensionContext)
      throws ParameterResolutionException {
        return parameterContext.getParameter().getType() == Book.class;
    }
    @Override
    public Object resolveParameter(ParameterContext parameterContext, ExtensionContext extensionContext) throws ParameterResolutionException {
        return parameterContext.getParameter().getType() == Book.class
            ? new Book("Wuthering Heights", "Emily Bronte")
            : null;
    }
}

This is a simple example that just returns a single Book instance for testing. Now that we have a ParameterResolver to provide us with test values, we can write a test that takes a Book parameter directly. Again, we may try:

@Test
void givenBookParameter_whenCheckingFields_thenFieldsArePopulated(Book book) {
    assertThat(book.getTitle().length()).isGreaterThan(0);
    assertThat(book.getAuthor().length()).isGreaterThan(0);
}

But as we see when running this test, the same exception persists. The cause is slightly different though — now that we have a ParameterResolver, we still need to tell JUnit how to use it.

Fortunately, this is as simple as adding the @ExtendWith annotation to the outer class that contains our test methods:

@ExtendWith(BookParameterResolver.class)
public class BookUnitTest {
    @Test
    void givenBookParameter_whenCheckingFields_thenFieldsArePopulated(Book book) {
        // Test contents...
    }
    // Other unit tests
}

Running this again, we see a successful test execution.

4. Conclusion

In this article, we discussed JUnit 5’s ParameterResolutionException and how missing or competing configurations can cause this exception. As always, all of the code for the article can be found over on GitHub.

       

Using a Custom TrustStore in Java


1. Introduction

In this tutorial, we’re going to take a look at how to use a custom TrustStore in Java. First, we’ll override the default TrustStore, and then we’ll explore ways to combine certificates from multiple TrustStores. We’ll also see what the known problems and challenges are and how we can overcome them.

2. Overriding the Default TrustStore

So, first, let’s override the default TrustStore. Most likely, it’s the cacerts file located in lib/security/cacerts for JDK 9 and above. For JDKs below version 9, cacerts is located under jre/lib/security/cacerts. To override it, we need to pass the VM argument -Djavax.net.ssl.trustStore with the absolute path to the TrustStore as the value. For instance, we can launch the JVM like this:

java -Djavax.net.ssl.trustStore=/path/to/another/truststore.p12 -jar app.jar

Then, instead of cacerts, Java would use /path/to/another/truststore.p12 as a TrustStore.
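If the custom TrustStore is password-protected or uses a non-default format, we may also need the related system properties. Here’s a sketch, assuming the standard javax.net.ssl properties and a PKCS12 store with the password changeit:

java -Djavax.net.ssl.trustStore=/path/to/another/truststore.p12 \
  -Djavax.net.ssl.trustStoreType=PKCS12 \
  -Djavax.net.ssl.trustStorePassword=changeit \
  -jar app.jar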

However, this approach has a small problem. When we override the location of the TrustStore, the default cacerts TrustStore isn’t taken into account anymore. That means all of the trusted CA certificates that come preinstalled with the JDK will no longer be available.

3. Combining Multiple TrustStores

So, to solve the problem listed above, we can do either of two things:

  • include all of the default cacerts certificates into the new TrustStore that we want to use
  • try to programmatically ask Java to look into both TrustStores during the resolution of the trust of the entity

We’ll review both approaches below as they have their pros and cons.

4. Merging TrustStores

The first approach is a relatively simple solution to the problem. In this case, we can create a new TrustStore from the default one. By doing so, we make sure that the new TrustStore will include all of the initial CA certificates:

keytool -importkeystore -srckeystore cacerts -destkeystore new_trustStore.p12 -srcstoretype PKCS12 -deststoretype PKCS12

Then, we import the certificates we need into the newly created TrustStore:

keytool -import -alias SomeSelfSignedCertificate -keystore new_trustStore.p12 -file /path/to/certificate/to/add

Alternatively, we can modify the initial TrustStore (meaning the cacerts file itself), which might be a viable option. The only thing to consider is that other applications relying on the same JDK installation will also pick up the newly added certificates from the default cacerts. That could or could not be OK, depending on the requirements.
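For instance, on JDK 9 and above, keytool offers a -cacerts shortcut that targets the default TrustStore directly; assuming the stock cacerts password changeit, the import might look like this:

keytool -import -alias SomeSelfSignedCertificate -cacerts -storepass changeit -file /path/to/certificate/to/add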

5. Programmatically Consider Both TrustStores

This approach is a bit more complicated than the one we described. The challenge is that in JDK, there are no built-in ways to ask TrustManager (the one that decides to trust somebody) to consider multiple TrustStores. So we would have to implement it ourselves.

The first thing to do is to get the instance of the TrustManagerFactory. When we have it, we’ll be able to get the TrustManager that we need:

TrustManagerFactory trustManagerFactory = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
trustManagerFactory.init((KeyStore) null);

So here, we get the default TrustManagerFactory, and then we initialize it with null as an argument. The init() method initializes the TrustManagerFactory with the given TrustStore. As a side note, a Java KeyStore and a TrustStore are both represented by the KeyStore Java class. So when we pass null as an argument, the TrustManagerFactory initializes itself with the default TrustStore (cacerts).

Once we have that, we should get the actual TrustManager from TrustManagerFactory. More specifically, we need the X509TrustManager. This TrustManager is the one that is responsible for determining whether the given x509 Certificate should be trusted or not:

X509TrustManager defaultX509CertificateTrustManager = null;
for (TrustManager trustManager : trustManagerFactory.getTrustManagers()) {
    if (trustManager instanceof X509TrustManager x509TrustManager) {
        defaultX509CertificateTrustManager = x509TrustManager;
        break;
    }
}

So, we have the default JDK’s X509TrustManager, which knows only about default cacerts. Now, we need to load our own TrustStore and initialize the new TrustManagerFactory with this new TrustStore of ours:

// Declared outside the try block so the wrapper in the next section can reference it
X509TrustManager myTrustManager = null;
try (FileInputStream myKeys = new FileInputStream("new_trustStore.p12")) {
    KeyStore myTrustStore = KeyStore.getInstance(KeyStore.getDefaultType());
    myTrustStore.load(myKeys, "new_TrustStore_pwd".toCharArray());
    trustManagerFactory = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
    trustManagerFactory.init(myTrustStore);
    for (TrustManager tm : trustManagerFactory.getTrustManagers()) {
        if (tm instanceof X509TrustManager x509TrustManager) {
            myTrustManager = x509TrustManager;
            break;
        }
    }
}

As we can see, we have loaded our TrustStore into a new KeyStore object using the given password. Then we get yet another default TrustManagerFactory instance (getInstance() method always returns a new object) and initialize it with our TrustStore. Then, in the same way as above, we find the X509TrustManager, which considers our TrustStore now. Now, the only thing left is configuring SSLContext to use both X509TrustManager implementations – the default one and ours.

6. Reconfiguring SSLContext

Now, we need to teach the SSLContext to use our two X509TrustManagers. The problem is that we cannot pass them separately into the SSLContext. That’s because the SSLContext, surprisingly, will use only the first X509TrustManager it finds and will ignore the rest. To overcome this, we need to create a single wrapper X509TrustManager that delegates to our two X509TrustManagers:

X509TrustManager finalDefaultTm = defaultX509CertificateTrustManager;
X509TrustManager finalMyTm = myTrustManager;
X509TrustManager wrapper = new X509TrustManager() {
    private X509Certificate[] mergeCertificates() {
        ArrayList<X509Certificate> resultingCerts = new ArrayList<>();
        resultingCerts.addAll(Arrays.asList(finalDefaultTm.getAcceptedIssuers()));
        resultingCerts.addAll(Arrays.asList(finalMyTm.getAcceptedIssuers()));
        return resultingCerts.toArray(new X509Certificate[resultingCerts.size()]);
    }
    @Override
    public X509Certificate[] getAcceptedIssuers() {
        return mergeCertificates();
    }
    @Override
    public void checkServerTrusted(X509Certificate[] chain, String authType) throws CertificateException {
        try {
            finalMyTm.checkServerTrusted(chain, authType);
        } catch (CertificateException e) {
            finalDefaultTm.checkServerTrusted(chain, authType);
        }
    }
    @Override
    public void checkClientTrusted(X509Certificate[] chain, String authType) throws CertificateException {
        try {
            finalMyTm.checkClientTrusted(chain, authType);
        } catch (CertificateException e) {
            finalDefaultTm.checkClientTrusted(chain, authType);
        }
    }
};

And then initialize the TLS SSLContext with our wrapper:

SSLContext context = SSLContext.getInstance("TLS");
context.init(null, new TrustManager[] { wrapper }, null);
SSLContext.setDefault(context);

We’re also setting this SSLContext as the default one. This is a sensible backup, since most clients that want to establish a secure connection will pick up the default TLS SSLContext. With that, we’re done.
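To sanity-check the setup, here’s a minimal sketch of a client call; the URL is hypothetical, and HttpsURLConnection is just one example of a client that can use the context’s socket factory:

HttpsURLConnection connection = (HttpsURLConnection) new URL("https://self-signed.example.com").openConnection();
connection.setSSLSocketFactory(context.getSocketFactory());
try (InputStream in = connection.getInputStream()) {
    // The handshake succeeds if the server certificate is trusted by either TrustStore
}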

7. Conclusion

In this article, we explored ways to use certificates from different TrustStores in one Java application.

Unfortunately, in Java, if we specify the TrustStore location from the command line, this instructs Java to use only the specified TrustStore. So our options are either to modify the default cacerts TrustStore file or to create a brand-new TrustStore file that contains all of the required CA certificate entries. A more complex approach is to programmatically force the SSLContext to consider both TrustStores.

Still, all of these options would work, and we should use the one that fits our requirements.

As always, the entire source code for this article is available over on GitHub.

       

What Is the Error: “Non-static method cannot be referenced from a static context”?


1. Overview

When we work with Java, we often run into problems that demand a deeper understanding of the language’s intricacies. One common puzzle is the error message: “Non-static method … cannot be referenced from a static context.” This error may seem daunting to beginners and can even confuse experienced programmers.

In this tutorial, we’ll delve into the reasons behind this error and explore ways to resolve it.

2. Introduction to the Problem

As usual, let’s understand the problem quickly through an example. Let’s say we have the ToolBox class:

class ToolBox {
    private String concat(String str1, String str2) {
        return str1 + str2;
    }
    static String joinTwoStrings(String str1, String str2) {
        return concat(str1, str2); //<-- compilation error
    }
}

The ToolBox class has the concat() method. We didn’t want everybody to call it, so we declared it a private method. Also, we have a static method joinTwoStrings(), which invokes the concat() method internally.

However, if we compile it, a compilation error occurs:

java: non-static method concat(java.lang.String,java.lang.String) cannot be referenced from a static context

Next, let’s understand what this error message means and see how to solve it.

3. What Does the Error Mean?

Before we tackle the non-static method issue, let’s understand the concept of a static context in Java.

In Java, the keyword “static” is used to declare elements that belong to the class rather than instances. Static members are shared among all instances of a class and can be accessed without creating an object of the class.

Non-static methods, on the other hand, are associated with instances of a class and cannot be invoked without creating an object. They can rely on the specific state of an object, and their behavior may vary depending on the values of instance variables.

The compilation error “Non-static method … cannot be referenced from a static context” occurs when an attempt is made to call a non-static method from a static context. This static context could be a static method, a static block, or the main() method, which is always static.
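For instance, here’s a minimal sketch of the classic case, with a hypothetical printGreeting() method referenced from main():

public class Main {
    void printGreeting() {
        System.out.println("Hello");
    }

    public static void main(String[] args) {
        // printGreeting(); // won't compile: non-static method referenced from a static context
        new Main().printGreeting(); // OK: invoked on an instance
    }
}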

Now that we understand why the problem happens, let’s see how to fix it.

4. Resolving the Problem

We’ve learned that non-static members cannot be invoked without creating an instance. Then, depending on the requirement, we have a couple of ways to fix the problem.

Next, let’s take a closer look at them.

4.1. Calling Static Method From a Static Context

The first solution is turning the instance method into a static method. If we’ve done that transition, there won’t be any problem if we call it from a static context:

class ToolBox {
    private static String concatStatic(String str1, String str2) {
        return str1 + str2;
    }
    static String joinTwoStrings(String str1, String str2) {
        return concatStatic(str1, str2);
    }
}

As we can see in the code above, to make it easier to spot the changes we made, we used the new method name concatStatic. Further, we made it a static method by adding the static keyword.

Now, if we call the static joinTwoStrings() method, we get the expected result:

assertEquals("ab", ToolBox.joinTwoStrings("a", "b"));

4.2. Creating an Instance and Calling the Instance Method

Sometimes, the requirement doesn’t allow us to change the instance method to a static one. In this case, we can refactor the static method, first creating an instance and then calling the instance method:

class ToolBox {
    private String concat(String str1, String str2) {
        return str1 + str2;
    }
    static String creatingInstanceJoinTwoStrings(String str1, String str2) {
        ToolBox toolBox = new ToolBox();
        return toolBox.concat(str1, str2);
    }
}

Now, if we call the static creatingInstanceJoinTwoStrings() method, it works properly:

assertEquals("ab", ToolBox.creatingInstanceJoinTwoStrings("a", "b"));

Alternatively, we can consider whether the creatingInstanceJoinTwoStrings() method in this class must be static. If not, we can also convert the static method into a regular instance method:

class ToolBox {
    private String concat(String str1, String str2) {
        return str1 + str2;
    }
    String instanceJoinTwoStrings(String str1, String str2) {
        return concat(str1, str2);
    }
}

With this fix, the instanceJoinTwoStrings() method is no longer static. So, it can invoke the private concat() instance method directly.

Of course, when we utilize instanceJoinTwoStrings(), we must first create a ToolBox object:

ToolBox toolBox = new ToolBox();
assertEquals("ab", toolBox.instanceJoinTwoStrings("a", "b"));

5. Can a Static Method Be Called by an Instance?

We’ve learned we cannot reference non-static members from a static context. Some might ask, can we call a static method in an instance method?

Next, let’s give it a test:

class ToolBox {
    private static String concatStatic(String str1, String str2) {
        return str1 + str2;
    }
    String instanceCallStaticJoinTwoStrings(String str1, String str2) {
        return concatStatic(str1, str2);
    }
}

As we can see in the code above, the instance method instanceCallStaticJoinTwoStrings() calls the private static method concatStatic().

The code compiles. Further, if we test it, it works properly:

ToolBox toolBox = new ToolBox();
assertEquals("ab", toolBox.instanceCallStaticJoinTwoStrings("a", "b"));

So, the answer to the question is yes.

In Java, calling a static method from an instance method is allowed. This is because static members aren’t tied to a specific instance. Instead, they’re associated with the class itself and can be invoked using the class name. In our code, we called concatStatic(str1, str2) without the class name “ToolBox.concatStatic(str1, str2)” since we’re already in the ToolBox class.

6. Conclusion

In this article, we explored the compilation error “Non-static method cannot be referenced from a static context,” delving into its causes and examining various resolutions to address and fix this issue.

As always, the complete source code for the examples is available over on GitHub.

       

Difference Between flush() and close() in Java FileWriter


1. Overview

File handling is an essential aspect of Java development that we frequently encounter. When it comes to writing data to files, the FileWriter class is commonly used. Within this class, two important methods, flush() and close(), play distinct roles in managing file output streams.

In this tutorial, we’ll address FileWriter‘s common usage and delve into the differences between its flush() and close() methods.

2. FileWriter and try-with-resources

The try-with-resources statement in Java is a powerful mechanism for managing resources, especially when dealing with I/O operations such as file handling. It automatically closes the resources specified in its declaration once the code block is exited, whether normally or due to an exception.

However, there might be situations where using FileWriter with try-with-resources is not ideal or necessary.

FileWriter may behave differently with or without try-with-resources. Next, let’s take a closer look at it.

2.1. Using FileWriter With try-with-resources

If we use FileWriter with try-with-resources, the FileWriter object gets flushed and closed automatically when we exit the try-with-resources block.

Next, let’s show this in a unit test:

@Test
void whenUsingFileWriterWithTryWithResources_thenAutoClosed(@TempDir Path tmpDir) throws IOException {
    Path filePath = tmpDir.resolve("auto-close.txt");
    File file = filePath.toFile();
    try (FileWriter fw = new FileWriter(file)) {
        fw.write("Catch Me If You Can");
    }
    List<String> lines = Files.readAllLines(filePath);
    assertEquals(List.of("Catch Me If You Can"), lines);
}

Since our test will write and read files, we employed JUnit5’s temporary directory extension (@TempDir). With this extension, we can concentrate on testing the core logic without manually creating and managing temporary directories and files for testing purposes.

As the test method shows, we write a string in the try-with-resources block. Then, when we check the file content using Files.readAllLines(), we get the expected result.

2.2. Using FileWriter Without try-with-resources

However, when we use FileWriter without try-with-resources, the FileWriter object won’t automatically get flushed and closed:

@Test
void whenUsingFileWriterWithoutFlush_thenDataWontBeWritten(@TempDir Path tmpDir) throws IOException {
    Path filePath = tmpDir.resolve("noFlush.txt");
    File file = filePath.toFile();
    FileWriter fw = new FileWriter(file);
    fw.write("Catch Me If You Can");
    List<String> lines = Files.readAllLines(filePath);
    assertEquals(0, lines.size());
    fw.close(); //close the resource
}

As the test above shows, although we wrote some text to the file by calling FileWriter.write(), the file was still empty.

Next, let’s figure out how to solve this problem.

3. FileWriter.flush() and FileWriter.close()

In this section, let’s first solve the “file is still empty” problem. Then, we’ll discuss the difference between FileWriter.flush() and FileWriter.close().

3.1. Solving “The File Is Still Empty” Problem

First, let’s understand quickly why the file is still empty after we called FileWriter.write(). 

When we invoke FileWriter.write(), the data isn’t immediately written to the file on disk. Instead, it’s temporarily stored in a buffer. Consequently, to see the data in the file, we must flush the buffered data to it.

The straightforward way is to call the flush() method:

@Test
void whenUsingFileWriterWithFlush_thenGetExpectedResult(@TempDir Path tmpDir) throws IOException {
    Path filePath = tmpDir.resolve("flush1.txt");
    File file = filePath.toFile();
    
    FileWriter fw = new FileWriter(file);
    fw.write("Catch Me If You Can");
    fw.flush();
    List<String> lines = Files.readAllLines(filePath);
    assertEquals(List.of("Catch Me If You Can"), lines);
    fw.close(); //close the resource
}

As we can see, after calling flush(), we can obtain the expected data by reading the file.

Alternatively, we can call the close() method to transfer the buffered data to the file. This is because close() first performs flush, then closes the file stream writer.

Next, let’s create a test to verify this:

@Test
void whenUsingFileWriterWithClose_thenGetExpectedResult(@TempDir Path tmpDir) throws IOException {
    Path filePath = tmpDir.resolve("close1.txt");
    File file = filePath.toFile();
    FileWriter fw = new FileWriter(file);
    fw.write("Catch Me If You Can");
    fw.close();
    List<String> lines = Files.readAllLines(filePath);
    assertEquals(List.of("Catch Me If You Can"), lines);
}

So, it looks pretty similar to the flush() call. However, the two methods serve different purposes when handling file output streams.

Next, let’s have a close look at their differences.

3.2. The Difference Between the flush() and close() Methods

The flush() method is primarily used to force any buffered data to be written immediately without closing the FileWriter, while the close() method both performs flushing and releases associated resources.

In other words, invoking flush() ensures that the buffered data is promptly written to disk, allowing continued write or append operations to the file without closing the stream. Conversely, when close() is called, it writes the existing buffered data to the file and then closes it. Consequently, further data cannot be written to the file unless a new stream is opened, such as by initializing a new FileWriter object.

Next, let’s understand this through some examples:

@Test
void whenUsingFileWriterWithFlushMultiTimes_thenGetExpectedResult(@TempDir Path tmpDir) throws IOException {
    List<String> lines = List.of("Catch Me If You Can", "A Man Called Otto", "Saving Private Ryan");
    Path filePath = tmpDir.resolve("flush2.txt");
    File file = filePath.toFile();
    FileWriter fw = new FileWriter(file);
    for (String line : lines) {
        fw.write(line + System.lineSeparator());
        fw.flush();
    }
    List<String> linesInFile = Files.readAllLines(filePath);
    assertEquals(lines, linesInFile);
    fw.close(); //close the resource
}

In the above example, we called write() three times to write three lines to the file. After each write() call, we invoked flush(). In the end, we can read the three lines from the target file.

However, if we attempt to write data after calling FileWriter.close(), an IOException with the error message “Stream closed” is thrown:

@Test
void whenUsingFileWriterWithCloseMultiTimes_thenGetIOExpectedException(@TempDir Path tmpDir) throws IOException {
    List<String> lines = List.of("Catch Me If You Can", "A Man Called Otto", "Saving Private Ryan");
    Path filePath = tmpDir.resolve("close2.txt");
    File file = filePath.toFile();
    FileWriter fw = new FileWriter(file);
    //write and close
    fw.write(lines.get(0) + System.lineSeparator());
    fw.close();
    //writing again throws IOException
    Throwable throwable = assertThrows(IOException.class, () -> fw.write(lines.get(1)));
    assertEquals("Stream closed", throwable.getMessage());
}

4. Conclusion

In this article, we explored FileWriter‘s common usage. Also, we discussed the difference between FileWriter‘s flush() and close() methods.

As always, the complete source code for the examples is available over on GitHub.

       

Maven Dependencies Failing With a 501 Error “HTTPS Required”


1. Overview

In this tutorial, we’ll learn about the error “Return code is: 501, ReasonPhrase: HTTPS Required”. We’ll start by understanding what this error means, and then explore the steps to resolve it.

2. Maven Moving to HTTPS

Maven ensures the automatic download of external libraries from the Maven central repository. However, downloading over HTTP raises security concerns, such as the risk of man-in-the-middle (MITM) attacks. During this attack, malicious code may be injected during the build phase, which can infect the downstream components and their end-users.

To maintain data integrity and encryption, the Maven Central Repository stopped accepting communication over HTTP on January 15, 2020. This means any attempt to access the central repository over HTTP results in the error “Return code is: 501, ReasonPhrase: HTTPS Required”. To fix this, we need to ensure that dependencies are fetched using HTTPS instead of HTTP.

3. Update Maven Version

From Maven version 3.2.3, the central repository is, by default, accessed via HTTPS. If we’re using an older version of Maven, we can update the Maven version to 3.2.3 or later to fix the error.

To update the Maven version, we can download the latest stable build version from the official Apache Maven download page.

4. Use HTTPS in Maven Settings

Maven provides a settings file, settings.xml, which we can use to configure the Maven installation. This settings.xml file contains all the local and remote repository links. To fix this error, we need to make sure that we’re using HTTPS in the Maven settings. Here are the steps to verify and update the Maven settings:

4.1. Fix mirrors Section in settings.xml

If a <mirrors> section exists in the settings.xml file, we need to make sure the URL for the mirror is https://repo.maven.apache.org/maven2/. If the section doesn’t exist, we can add it like this:

<mirrors>
    <mirror>
        <id>central</id>
        <url>https://repo.maven.apache.org/maven2/</url>
        <mirrorOf>central</mirrorOf>
    </mirror>
</mirrors>

4.2. Fix pluginRepositories Section in settings.xml

Similar to the mirrors section, we may also have a pluginRepositories section where we need to use the URL with HTTPS:

<pluginRepositories>
    <pluginRepository>
        <id>central</id>
        <url>https://repo.maven.apache.org/maven2/</url>
    </pluginRepository>
</pluginRepositories>

4.3. Fix repositories Section in pom.xml

The pom.xml file also contains a repository section where we need to use the URL with HTTPS:

<repositories>
    <repository>
        <id>central</id>
        <name>Central Repository</name>
        <url>https://repo.maven.apache.org/maven2</url>
        <layout>default</layout>
        <snapshots>
            <enabled>false</enabled>
        </snapshots>
    </repository>
</repositories>

After making these changes, Maven should download the dependencies over HTTPS.

5. Fix When the Build Environment Doesn’t Support HTTPS

Sometimes, we may face technical constraints, such as using JDK 6 in the build environment or lacking HTTPS support. These limitations can impede our transition to HTTPS.

To support such scenarios, the Maven team has established a dedicated domain for insecure traffic. We can replace all the existing references with this URL to facilitate downloads over HTTP.
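To the best of our knowledge, this dedicated endpoint is insecure.repo1.maven.org; assuming it’s still available, a mirror entry in settings.xml could point to it:

<mirrors>
    <mirror>
        <id>insecure-central</id>
        <url>http://insecure.repo1.maven.org/maven2/</url>
        <mirrorOf>central</mirrorOf>
    </mirror>
</mirrors>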

6. Conclusion

In this tutorial, we explored the different ways to resolve the “Return code is: 501, ReasonPhrase: HTTPS Required” error. First, we explored the basic details of the error.

Afterward, we looked at fixing it by updating the Maven version or by using HTTPS URLs in the settings.xml and pom.xml files.

       

Read Input Character-by-Character in Java


1. Introduction

In many Java applications, we need to read input data character by character. This is a common task, especially when working with large amounts of data from a stream source.

In this tutorial, we’ll look at various ways to read one character at a time in Java.

2. Using BufferedReader for Console Input

We can utilize BufferedReader to perform reading character-by-character from the console. Note that this method is helpful if we seek to read characters interactively.

Let’s take an example:

@Test
public void givenInputFromConsole_whenUsingBufferedStream_thenReadCharByChar() throws IOException {
    ByteArrayInputStream inputStream = new ByteArrayInputStream("TestInput".getBytes());
    System.setIn(inputStream);
    try (BufferedReader buffer = new BufferedReader(new InputStreamReader(System.in))) {
        char[] result = new char[9];
        int index = 0;
        int c;
        while ((c = buffer.read()) != -1) {
            result[index++] = (char) c;
        }
        assertArrayEquals("TestInput".toCharArray(), result);
    }
}

Here, we simulate the console input by instantiating a ByteArrayInputStream with the “TestInput” content. Then, we read characters from System.in using BufferedReader. Afterward, we use the read() method to read one character at a time as an integer code and cast it to a char. Finally, we use the assertArrayEquals() method to verify that the read characters match the expected input.

3. Using FileReader for Reading from Files

When working on files, FileReader is an appropriate choice for reading character by character:

@Test
public void givenInputFromFile_whenUsingFileReader_thenReadCharByChar() throws IOException {
    File tempFile = File.createTempFile("tempTestFile", ".txt");
    FileWriter fileWriter = new FileWriter(tempFile);
    fileWriter.write("TestFileContent");
    fileWriter.close();
    try (FileReader fileReader = new FileReader(tempFile.getAbsolutePath())) {
        char[] result = new char[15];
        int index = 0;
        int charCode;
        while ((charCode = fileReader.read()) != -1) {
            result[index++] = (char) charCode;
        }
        assertArrayEquals("TestFileContent".toCharArray(), result);
    }
}

In the above code, we create a temporary test file and write the content “TestFileContent” to it for simulation. Then, we use a FileReader to establish a connection to the file specified by its path using the tempFile.getAbsolutePath() method. Within a try-with-resources block, we read the file character by character.

4. Using Scanner for Tokenized Input

For a more dynamic approach that allows tokenized input, we can use Scanner:

@Test
public void givenInputFromConsole_whenUsingScanner_thenReadCharByChar() {
    ByteArrayInputStream inputStream = new ByteArrayInputStream("TestInput".getBytes());
    System.setIn(inputStream);
    try (Scanner scanner = new Scanner(System.in)) {
        char[] result = scanner.next().toCharArray();
        assertArrayEquals("TestInput".toCharArray(), result);
    }
}

We simulate the console input in the above test method by instantiating a ByteArrayInputStream with the “TestInput” content. Then, we utilize the next() method to fetch the whole token as a String and convert it into a char array with toCharArray().
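If we need true character-by-character reading with Scanner, one option — a sketch that isn’t part of the test above — is to set an empty delimiter so that every token is a single character:

try (Scanner scanner = new Scanner(System.in).useDelimiter("")) {
    StringBuilder content = new StringBuilder();
    while (scanner.hasNext()) {
        // Each token is exactly one character because of the empty delimiter
        content.append(scanner.next().charAt(0));
    }
    System.out.println(content);
}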

5. Conclusion

In conclusion, we explored diverse methods for reading characters in Java: interactive console input using BufferedReader, file-based character reading with FileReader, and tokenized input handling via Scanner. These offer developers versatile approaches to process character data efficiently in various scenarios.

As always, the complete code samples for this article can be found over on GitHub.

       

Java Weekly, Issue 525


1. Spring and Java

>> The best way to test the data access layer [vladmihalcea.com]

Evaluating the pros and cons of different approaches of testing data access layers: unit testing vs integration/system testing

>> Hello eBPF: Developing eBPF Apps in Java (1) [foojay.io]

Java meets eBPF: attaching a program directly to the Linux kernel and getting all sorts of information out. Interesting.

>> Stepping in 2024 with Powerful Java Language Features [inside.java]

And, towards a more concise Java: string templates, unnamed patterns and variables, unnamed classes, and instance main methods

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical & Musings

>> Kicking the tires of Docker Scout [frankel.ch]

Meet Docker Scout: detecting security vulnerabilities in Docker images

Also worth reading:

3. Pick of the Week

>> Clever code is probably the worst code you could write [engineerscodex.com]

       

Check if a Float Value is Equivalent to an Integer Value in Java


1. Introduction

Floating-point numbers in Java are typically represented using the float or double data types. However, precision is limited because these types use binary representations of values. When they’re directly compared to integer values, the results might be unexpected.
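For example, here’s a quick sketch of the pitfall — large float values can’t represent every integer exactly:

float f = 16_777_217f;                // 2^24 + 1 isn't representable as a float
System.out.println(f == 16_777_216);  // true
System.out.println(f == 16_777_217);  // also true: both operands round to the same float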

In this tutorial, we’ll discuss various approaches to check if a float value is equivalent to an integer value in Java.

2. Using Type Casting

One simple way is to use type casting to convert the float value into an integer and then compare it.

Here’s an example:

float floatValue = 10.0f;
@Test
public void givenFloatAndIntValues_whenCastingToInt_thenCheckIfFloatValueIsEquivalentToIntegerValue() {
    int intValue = (int) floatValue;
    assertEquals(floatValue, intValue);
}

In this snippet, we initialize the floatValue with 10.0f. Then, we use type casting to convert it into an integer, and finally, we check if the floatValue is equivalent to the casted integer value intValue.

3. Comparing with a Tolerance

Due to floating-point precision limitations, using a tolerance when comparing float and integer values is often more suitable. This allows for small variations caused by the binary representation.

Let’s see the following code snippet:

@Test
public void givenFloatAndIntValues_whenUsingTolerance_thenCheckIfFloatValueIsEquivalentToIntegerValue() {
    int intValue = 10;
    float tolerance = 0.0001f;
    assertTrue(Math.abs(floatValue - intValue) <= tolerance);
}

Here, we initialize a float variable (tolerance) with 0.0001f. Then, we check if the absolute result of the difference between the floatValue and intValue variables is less than or equal to the tolerance value we set.

4. Using Float.compare()

Java offers the Float.compare() method for accurate float comparison. This method handles special values such as NaN and negative zero, which makes it a reliable comparison mechanism.

Here’s an example:

@Test
public void givenFloatAndIntValues_whenUsingFloatCompare_thenCheckIfFloatValueIsEquivalentToIntegerValue() {
    int intValue = 10;
    assertEquals(0, Float.compare(floatValue, intValue));
}

In this example, we utilize the Float.compare() method to check whether the values match. The Float.compare() method returns 0 if the two variables are equal, a negative number if the first variable is less than the second, and a positive number otherwise.

5. Using Math.round()

Another approach uses the Math.round() method. This built-in method returns the closest int to the float argument:

@Test
public void givenFloatAndIntValues_wheUsingRound_thenCheckIfFloatValueIsEquivalentToIntegerValue() {
    int intValue = 10;
    assertEquals(intValue, Math.round(floatValue));
}

Here, we directly round the float value using the Math.round() method and then check if the rounded value is equivalent to the integer value.

6. Using Scanner

We can use the Scanner class to dynamically detect the type of user input, whether an integer or a float number. This approach enables interactive contributions, thereby making the program more flexible:

@Test
public void givenFloatAndIntValues_whenUsingScanner_thenCheckIfFloatValueIsEquivalentToIntegerValue() {
    String input = "10.0";
    Scanner sc = new Scanner(new ByteArrayInputStream(input.getBytes()));
    float actualFloatValue;
    if (sc.hasNextInt()) {
        int intValue = sc.nextInt();
        actualFloatValue = intValue;
    } else if (sc.hasNextFloat()) {
        actualFloatValue = sc.nextFloat();
    } else {
        actualFloatValue = Float.NaN;
    }
    sc.close();
    assertEquals(floatValue, actualFloatValue);
}

Here, we simulate user input as a string. The Scanner is then used to dynamically detect whether the input is an integer or a float, and the result is compared with the original float value.

7. Conclusion

In conclusion, we took an overview of several ways to verify whether a float value is equivalent to an integer value in Java.

As usual, the accompanying source code can be found over on GitHub.

       

Using Static Methods Instead of Deprecated JsonParser


1. Introduction

Efficient JSON parsing is one of the most important tasks in Java programming when it comes to data manipulation and communication.

The Gson library offers a versatile JsonParser class to simplify the conversion process. Moreover, it’s important to note that this class’s constructor and instance parse() method have been deprecated, eliminating the need for instantiation. Instead, we can utilize the provided static methods for the conversion process.

In this tutorial, we’ll delve into how to utilize the static methods instead of the deprecated JsonParser for efficient JSON parsing in Java.

2. Deprecated JsonParser

Here is an example of using the deprecated JsonParser to parse a JSON string:

String jsonString = "{\"name\": \"John\", \"age\":30, \"city\":\"New York\"}";
JsonObject jsonObject = new JsonParser().parse(jsonString).getAsJsonObject();

The deprecated JsonParser instance may still function, but developers are encouraged to move to the new and improved practices.

3. Embracing Static Methods

The Gson library offers static methods as replacements for the deprecated ones, providing a more elegant and easier-to-understand way of parsing JSON.

Let’s explore the recommended static methods:

3.1. Parse from String

We can parse a JSON string directly into JsonObject without using a deprecated instance of JsonParser using the parseString() static method.

Firstly, let’s set up a JSON string describing person-related data and build the expected JsonObject with the keys name, age, and city in the constructor of our DeprecatedJsonParserUnitTest class:

String jsonString = "{\"name\": \"John\", \"age\":30, \"city\":\"New York\"}";
JsonObject expectedJsonObject = new JsonObject();
DeprecatedJsonParserUnitTest() {
    expectedJsonObject.addProperty("name", "John");
    expectedJsonObject.addProperty("age", 30);
    expectedJsonObject.addProperty("city", "New York");
}

Now, let’s parse the jsonString directly into JsonObject:

@Test
public void givenJsonString_whenUsingParseString_thenJsonObjectIsExpected() {
    JsonObject jsonObjectAlt = JsonParser.parseString(jsonString).getAsJsonObject();
    assertEquals(expectedJsonObject, jsonObjectAlt);
}

In this test method, we verify that the parsed jsonObjectAlt matches the expectedJsonObject created earlier.

3.2. Parse from StringReader

There are cases when the obtained JSON data comes from a StringReader. We can use the parseReader() static method to get the same result without using obsolete components:

@Test
public void givenJsonString_whenUsingParseReader_thenJsonObjectIsExpected() {
    StringReader reader = new StringReader(jsonString);
    JsonObject jsonObject = JsonParser.parseReader(reader).getAsJsonObject();
    assertEquals(expectedJsonObject, jsonObject);
}

Here, we initialize a StringReader called reader. Then, we use the JsonParser.parseReader() method to parse the JSON data into a JsonObject.

3.3. Parse from JsonReader

When dealing with a JsonReader, the parseReader() static method is still an effective and contemporary decision that avoids outdated constructions. Let’s take an example:

@Test
public void givenJsonReader_whenParseUsingJsonReader_thenJsonObjectIsExpected() {
    JsonReader jsonReader = new JsonReader(new StringReader(jsonString));
    JsonObject jsonObject = JsonParser.parseReader(jsonReader).getAsJsonObject();
    assertEquals(expectedJsonObject, jsonObject);
}

In the above test method, we begin by instantiating a JsonReader named jsonReader with the contents of the JSON string. Then, we utilize the JsonParser.parseReader() method to parse such JSON data into a JsonObject.

4. Conclusion

In conclusion, instantiating JsonParser is deprecated, and there are excellent alternative static methods provided by the JsonParser class, such as parseString() and parseReader().

As always, the complete code samples for this article can be found over on GitHub.

       

Global Exception Handling with Spring Cloud Gateway


1. Overview

In this tutorial, we’ll explore the nuances of implementing a global exception-handling strategy within Spring Cloud Gateway, delving into its technicalities and best practices.

In modern software development, especially in microservices, efficient management of APIs is crucial. This is where Spring Cloud Gateway plays an important role as a key component of the Spring ecosystem. It acts like a gatekeeper, directing traffic and requests to the appropriate microservices and providing cross-cutting concerns, such as security, monitoring/metrics, and resiliency.

However, in such a complex environment, exceptions caused by network failures, service downtime, or application bugs are a certainty, demanding a robust exception-handling mechanism. Global exception handling in Spring Cloud Gateway ensures a consistent approach to error handling across all services and enhances the entire system’s resilience and reliability.

2. The Need for Global Exception Handling

Spring Cloud Gateway is a project part of the Spring ecosystem, designed to serve as an API gateway in microservices architectures, and its main role is to route requests to the appropriate microservices based on pre-established rules. The Gateway provides functionalities like security (authentication and authorization), monitoring, and resilience (circuit breakers). By handling requests and directing them to appropriate backend services, it effectively manages cross-cutting concerns like security and traffic management.

In distributed systems like microservices, exceptions may arise from multiple sources; network issues, service unavailability, downstream service errors, and application-level bugs are common culprits. In such environments, handling exceptions in a localized manner (i.e., within each service) can lead to fragmented and inconsistent error handling. This inconsistency can make debugging cumbersome and degrade the user’s experience:

API Gateway Error Handling Diagram

Global exception handling addresses this challenge by providing a centralized exception management mechanism that ensures that all exceptions, regardless of their source, are processed consistently, providing standardized error responses.

This consistency is critical for system resilience, simplifying error tracking and analysis. It also enhances the user’s experience by providing a precise and consistent error format, helping users understand what went wrong.

3. Implementing Global Exception Handling in Spring Cloud Gateway

Implementing global exception handling in Spring Cloud Gateway involves several critical steps, each ensuring a robust and efficient error management system.

3.1. Creating a Custom Global Exception Handler

A global exception handler is essential for catching and handling exceptions that occur anywhere within the Gateway. To do that, we need to extend AbstractErrorWebExceptionHandler and add it to the Spring context. By doing so, we create a centralized handler that intercepts all exceptions.

@Component
public class CustomGlobalExceptionHandler extends AbstractErrorWebExceptionHandler {
    // Constructor and methods
}
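The constructor wiring isn’t shown above; here’s a minimal sketch, assuming Spring Boot 2.4+, where WebProperties.Resources replaced the older ResourceProperties:

@Component
@Order(-2) // takes precedence over the default error handler, which is registered at order -1
public class CustomGlobalExceptionHandler extends AbstractErrorWebExceptionHandler {
    public CustomGlobalExceptionHandler(ErrorAttributes errorAttributes, WebProperties.Resources resources,
      ApplicationContext applicationContext, ServerCodecConfigurer configurer) {
        super(errorAttributes, resources, applicationContext);
        // Message writers are required so the handler can serialize the error response body
        setMessageWriters(configurer.getWriters());
    }
    // getRoutingFunction() and helper methods shown below
}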

This class should be designed to handle various types of exceptions, from general exceptions like NullPointerException to more specific ones like HttpClientErrorException. The goal is to cover a broad spectrum of possible errors. The main method of such a class is shown below.

@Override
protected RouterFunction<ServerResponse> getRoutingFunction(ErrorAttributes errorAttributes) {
    return RouterFunctions.route(RequestPredicates.all(), this::renderErrorResponse);
}
// other methods

In this method, we can apply a handler function to the error based on a predicate evaluated against the current request and deal with it properly. It’s important to note that the global exception handler only deals with exceptions thrown within the gateway context. That means response codes like 5xx or 4xx returned by downstream services aren’t included in the context of the global exception handler; those should be handled using route or global filters.

The AbstractErrorWebExceptionHandler offers many methods that help us deal with the exceptions thrown during the request handling.

private Mono<ServerResponse> renderErrorResponse(ServerRequest request) {
    ErrorAttributeOptions options = ErrorAttributeOptions.of(ErrorAttributeOptions.Include.MESSAGE);
    Map<String, Object> errorPropertiesMap = getErrorAttributes(request, options);
    Throwable throwable = getError(request);
    HttpStatusCode httpStatus = determineHttpStatus(throwable);
    errorPropertiesMap.put("status", httpStatus.value());
    errorPropertiesMap.remove("error");
    return ServerResponse.status(httpStatus)
      .contentType(MediaType.APPLICATION_JSON_UTF8)
      .body(BodyInserters.fromObject(errorPropertiesMap));
}
private HttpStatusCode determineHttpStatus(Throwable throwable) {
    if (throwable instanceof ResponseStatusException) {
        return ((ResponseStatusException) throwable).getStatusCode();
    } else if (throwable instanceof CustomRequestAuthException) {
        return HttpStatus.UNAUTHORIZED;
    } else if (throwable instanceof RateLimitRequestException) {
        return HttpStatus.TOO_MANY_REQUESTS;
    } else {
        return HttpStatus.INTERNAL_SERVER_ERROR;
    }
}

Looking at the code above, two methods provided by the Spring team are relevant. These are getErrorAttributes() and getError(), and such methods provide context as well as error information, which are important to handle the exceptions properly.

Finally, these methods collect the data provided by Spring context, hide some details, and adjust the status code and response based on the exception type. The CustomRequestAuthException and RateLimitRequestException are custom exceptions that will be further explored soon.

3.2. Configuring the GatewayFilter

The gateway filters are components that intercept all incoming requests and outgoing responses:

Spring cloud gateway diagram

By implementing GatewayFilter or GlobalFilter and adding it to the Spring context, we ensure that requests are handled uniformly and properly:

public class MyCustomFilter implements GatewayFilter {
    // Implementation details
}

This filter can be used to log incoming requests, which is helpful for debugging. In the event of an exception, the filter should redirect the flow to the GlobalExceptionHandler. The difference between them is that GlobalFilter targets all incoming requests, while GatewayFilter targets particular routes defined in the RouteLocator.

Next, let’s have a look at two samples of filter implementations:

public class MyCustomFilter implements GatewayFilter {
    @Override
    public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
        if (isAuthRoute(exchange) && !isAuthorization(exchange)) {
            throw new CustomRequestAuthException("Not authorized");
        }
        return chain.filter(exchange);
    }
    private static boolean isAuthorization(ServerWebExchange exchange) {
        return exchange.getRequest().getHeaders().containsKey("Authorization");
    }
    private static boolean isAuthRoute(ServerWebExchange exchange) {
        return exchange.getRequest().getURI().getPath().equals("/test/custom_auth");
    }
}

The MyCustomFilter in our sample simulates gateway authentication: if no Authorization header is present on the protected route, the request is rejected by throwing a CustomRequestAuthException, handing the error off to the global exception handler.

@Component
class MyGlobalFilter implements GlobalFilter {
    @Override
    public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
        if (hasReachedRateLimit(exchange)) {
            throw new RateLimitRequestException("Too many requests");
        }
        return chain.filter(exchange);
    }
    private boolean hasReachedRateLimit(ServerWebExchange exchange) {
        // Simulates the rate limit being reached
        return exchange.getRequest().getURI().getPath().equals("/test/custom_rate_limit") && 
          (!exchange.getRequest().getHeaders().containsKey("X-RateLimit-Remaining") || 
            Integer.parseInt(exchange.getRequest().getHeaders().getFirst("X-RateLimit-Remaining")) <= 0);
    }
}

Lastly, MyGlobalFilter inspects every request but only fails for a particular route, simulating a rate-limit check based on request headers. Since it's a GlobalFilter, annotating it with @Component is enough to register it in the Spring context.

Once again, once the exception happens, the global exception handler takes care of response management.

3.3. Uniform Exception Handling

Consistency in exception handling is vital. This involves setting up a standard error response format, including the HTTP status code, an error message (response body), and any additional information that might be helpful for debugging or user comprehension.

private Mono<ServerResponse> renderErrorResponse(ServerRequest request) {
    // Define our error response structure here
}

Using this approach, we can adapt the response based on the exception type: a 500 Internal Server Error indicates a server-side problem, a 400 Bad Request indicates a client-side problem, and so on. As we saw in our sample, the Spring context already provides some data, but the response can be customized further.
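
For instance, with the renderErrorResponse() method shown earlier, a rejected request could produce a body along these lines (field values are illustrative, and the exact set of attributes depends on the ErrorAttributeOptions we include):

{
  "timestamp": "2024-01-01T10:00:00.000+00:00",
  "path": "/test/custom_auth",
  "status": 401,
  "message": "Not authorized",
  "requestId": "a1b2c3d4"
}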

4. Advanced Considerations

Advanced considerations include implementing enhanced logging for all exceptions. This can involve integrating external monitoring and logging tools like Splunk, ELK Stack, etc. Additionally, categorizing exceptions and customizing error messages based on these categories can significantly aid in troubleshooting and improving user communication.

Testing is crucial to ensure the effectiveness of the global exception handler. This involves writing unit and integration tests that simulate various exception scenarios. Tools like JUnit and Mockito are instrumental here, letting us mock services and verify how the handler responds to different exceptions; a minimal integration test is sketched below.
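
To make this concrete, here's a minimal integration test sketch for the authorization scenario above, assuming the gateway routes are configured as in the previous section (class and method names are illustrative):

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
class GatewayErrorHandlingIntegrationTest {

    @Autowired
    private WebTestClient webTestClient;

    @Test
    void whenAuthorizationHeaderIsMissing_thenGlobalHandlerReturns401() {
        webTestClient.get()
          .uri("/test/custom_auth")
          .exchange()
          .expectStatus().isUnauthorized()
          .expectBody()
          .jsonPath("$.status").isEqualTo(401)
          .jsonPath("$.message").isEqualTo("Not authorized");
    }
}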

5. Conclusion

Best practices in implementing global exception handling include keeping the error-handling logic simple and comprehensive. It’s important to log every exception for future analysis and periodically update the handling logic as new exceptions are identified. Regularly reviewing the exception-handling mechanisms also helps keep up with the evolving microservices architecture.

Implementing global exception handling in Spring Cloud Gateway is crucial to developing robust microservices architecture. It ensures a consistent error-handling strategy across all services and significantly improves the system’s resilience and reliability. Developers can build a more user-friendly and maintainable system by following this article’s implementation strategies and best practices.

As usual, all code samples used in this article are available over on GitHub.

       

PriorityQueue iterator() Method in Java


1. Introduction

One of the essential methods in PriorityQueue is the iterator() method. This method allows seamless traversal of the elements stored in the queue. In this tutorial, we’ll explore the iterator() method’s functionality and demonstrate its effective use in various scenarios.

2. Overview of PriorityQueue

The PriorityQueue class in Java is a Queue implementation that hands out its elements according to their priority rather than their insertion order.

PriorityQueue internally utilizes a binary heap, a tree-like structure in which every parent node has a priority at least as high as its children's. This heap property guarantees that the highest-priority element always sits at the root, i.e., at the head of the queue, but it does not keep the remaining elements in fully sorted order.

Additionally, the PriorityQueue class implements the Queue interface and offers a range of methods for manipulating the elements within the queue, including the iterator() method. The iterator() method is a part of the Iterable interface, and it is used to obtain an iterator over a collection of elements. The signature of the iterator() method is defined as:

public Iterator<E> iterator()

The iterator() method returns an Iterator over the elements in the queue. The type parameter E specifies the type of the elements. The method takes no arguments.

3. Iterator Characteristics

Let’s delve into the key characteristics of the iterator:

3.1. Fail-Fast Guarantee

The iterator returned by the iterator() method is fail-fast. This means that if we structurally modify the queue (add or remove elements) after creating the iterator, any further use of that iterator throws a ConcurrentModificationException. Rather than silently iterating over a stale view, the iterator fails as soon as it detects that the queue has changed.

In the code, we modify the PriorityQueue by adding one more element after we obtain the iterator:

PriorityQueue<Integer> numbers = new PriorityQueue<>();
numbers.add(3);
numbers.add(1);
numbers.add(2);
Iterator<Integer> iterator = numbers.iterator();
numbers.add(4);
try {
    while (iterator.hasNext()) {
        System.out.println(iterator.next());
    }
} catch (ConcurrentModificationException e) {
    System.out.println("ConcurrentModificationException occurred during iteration.");
}

The output of this program will be:

ConcurrentModificationException occurred during iteration.
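
If we do need to remove elements mid-iteration, the iterator's own remove() method is the safe route, since structural changes made through the iterator don't trip the fail-fast check. A quick sketch:

PriorityQueue<Integer> numbers = new PriorityQueue<>(Arrays.asList(3, 1, 2, 4));
Iterator<Integer> iterator = numbers.iterator();
while (iterator.hasNext()) {
    if (iterator.next() % 2 == 0) {
        iterator.remove(); // removing via the iterator is allowed
    }
}
System.out.println(numbers); // only the odd elements remain, e.g. [1, 3]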

3.2. Traversal Order

The iterator() method does not traverse in priority order. Instead, it visits the heap level by level, starting from the root and working its way down, which corresponds to the order of the queue's backing array. This approach is efficient for visiting every element, but it does not produce a strictly priority-based order.

Let’s look at an example of how to use the iterator() method to iterate over the elements in a PriorityQueue:

PriorityQueue<Integer> queue = new PriorityQueue<>();
queue.add(3);
queue.add(1);
queue.add(2);
Iterator<Integer> iterator = queue.iterator();
while (iterator.hasNext()) {
    Integer element = iterator.next();
    System.out.println(element);
}

In this example, we create a PriorityQueue of integers and add three elements to it. We then obtain an iterator over the elements in the queue and use a while loop to iterate over the elements, printing each one to the console. The output of this program will be:

1
3
2

Internally, the PriorityQueue looks like:

   1
  / \
 3   2

During iteration, the iterator traverses the elements in level order, producing the order 1, 3, and 2. While this order maintains the general structure of the heap, it does not strictly adhere to the priority-based ordering.

4. Comparator Interface

In certain scenarios, we might want to order elements in the PriorityQueue based on a custom criterion. This can be achieved by utilizing the Comparator interface. This interface allows us to define a comparison function that can be used to order the elements in the queue.

The Comparator interface has a single compare() method, which takes two arguments of the same type and returns an integer value. The value returned by the compare() method determines the ordering of the elements in the queue.

Let’s consider the following example, where we have a Person class, and the requirement is to prioritize individuals based on their age. To address this, we’ll create a custom comparator:

class PersonAgeComparator implements Comparator<Person> {
    @Override
    public int compare(Person p1, Person p2) {
        // Integer.compare() avoids the overflow risk of subtracting ages
        return Integer.compare(p1.age, p2.age);
    }
}
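
For reference, the snippets in this section assume a simple Person class along these lines (a minimal sketch; the public fields and accessors match how Person is used in this article):

class Person {
    String name;
    int age;

    Person(String name, int age) {
        this.name = name;
        this.age = age;
    }

    String getName() {
        return name;
    }

    void setName(String name) {
        this.name = name;
    }
}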

Subsequently, we’ll create a PriorityQueue with custom ordering. We need to pass an instance of the PersonAgeComparator interface to the constructor of the PriorityQueue. The elements in the queue will then be ordered according to the comparison function defined by the Comparator:

PriorityQueue<Person> priorityQueue = new PriorityQueue<>(new PersonAgeComparator());
priorityQueue.add(new Person("Alice", 25));
priorityQueue.add(new Person("Bob", 30));
priorityQueue.add(new Person("Charlie", 22));
Iterator<Person> iterator = priorityQueue.iterator();
while (iterator.hasNext()) {
    Person person = iterator.next();
    System.out.println("Name: " + person.name + ", Age: " + person.age);
}

The output of this program will be:

Name: Charlie, Age: 22
Name: Bob, Age: 30
Name: Alice, Age: 25

5. Ordered Retrieval

The previous example didn’t display elements in strict ascending age order, even though we used a custom Comparator. The internal structure of PriorityQueue might lead to unexpected outcomes during direct iteration. This is because the iterator follows a level-order traversal, which results in a different sequence during iteration, as it visits elements level by level. 

To ensure elements are retrieved in the exact order of their priority, we can use the poll() method. This method specifically removes the element with the highest priority (in this case, the lowest age) from the PriorityQueue and returns it.

Let’s see how to use the poll() method to retrieve the element in ordering:

while (!priorityQueue.isEmpty()) {
    Person person = priorityQueue.poll();
    System.out.println("Name: " + person.name + ", Age: " + person.age);
}

The output of this program will now be:

Name: Charlie, Age: 22
Name: Alice, Age: 25
Name: Bob, Age: 30
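
Note that poll() drains the queue as it goes. If we need the elements in priority order while keeping the queue intact, one alternative (a sketch) is to sort a copy instead:

List<Person> byAge = new ArrayList<>(priorityQueue);
byAge.sort(new PersonAgeComparator());
byAge.forEach(p -> System.out.println("Name: " + p.name + ", Age: " + p.age));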

6. Use Case

Although iterator() isn't suitable for strictly ordered retrieval, it excels in scenarios where the priority order doesn't matter, such as capitalizing every person's name in the PriorityQueue or computing statistics like the average age. Let's illustrate this with an example:

Iterator<Person> iterator = priorityQueue.iterator();
while (iterator.hasNext()) {
    Person person = iterator.next();
    person.setName(person.getName().toUpperCase());
}
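
This works safely because name plays no part in the ordering. Changing a field the comparator relies on, age in our case, while the element is still inside the queue would silently break the heap invariant, so such updates should be done by removing and re-adding the element.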

7. Conclusion

In this article, we’ve explored the PriorityQueue class in Java, emphasizing the role of the iterator() method. It’s important to note that while the PriorityQueue maintains sorted order internally, the iterator() method does not guarantee traversal in that order. Therefore, we use the iterator() method to perform operations that don’t rely on the priority order.

As always, the code is available over on GitHub.

       