
Implementing GraphQL Mutation Without Returning Data


1. Introduction

GraphQL is a powerful query language for APIs and provides a flexible and efficient way to interact with our data. When dealing with mutations, it’s typical to perform updates or additions to data on the server. However, in some scenarios, we might need to mutate without returning any data.

In GraphQL, fields are nullable by default: a field may return null unless it’s explicitly marked as non-nullable with an exclamation mark. While declaring fields non-null contributes to the clarity and predictability of the API and is generally considered a best practice, there are instances where returning null is exactly what we need.

In this article, we’ll explore techniques for implementing GraphQL mutations without retrieving or returning specific information.

2. Prerequisites

For our example, we’ll need the following dependencies:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-graphql</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>

The Spring Boot GraphQL Starter provides an excellent solution for quickly setting up a GraphQL server. By leveraging auto-configuration and adopting an annotation-based programming approach, we only need to focus on writing the essential code for our service.

We’ve included the web starter in our config because GraphQL is transport-agnostic: it needs an underlying transport, and this starter uses Spring MVC to expose the GraphQL API over HTTP. We can access it via the default /graphql endpoint. We can also use other starters, like Spring WebFlux, for a different underlying implementation.
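For instance, assuming Spring Boot’s default GraphQL auto-configuration, the HTTP endpoint path can be customized through a configuration property (the custom path below is only an illustrative value):

# src/main/resources/application.properties
spring.graphql.path=/api/graphql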

3. Using Nullable Type

Unlike some programming languages, GraphQL lets us express nullability directly in the schema: a field is nullable unless it’s marked with an exclamation mark. This approach enhances clarity, allowing us to convey when a field may lack a value.

3.1. Writing the Schema

The Spring Boot GraphQL starter automatically locates GraphQL Schema files under the src/main/resources/graphql/** location. It builds the correct structure based on them, and wires special beans to this structure.

We’ll start by creating the schema.graphqls file, and defining the schema for our example:

type Post {
    id: ID
    title: String
    text: String
    category: String
    author: String
}
type Mutation {
    createPostReturnNullableType(title: String!, text: String!, category: String!, authorId: String!) : Int
}

We’ll have a Post entity and a mutation to create a new post. Also, for our schema to pass validation, it must have a query. So, we’ll implement a dummy query that returns a list of posts:

type Query {
    recentPosts(count: Int, offset: Int): [Post]!
}

3.2. Using Beans to Represent Types

In the GraphQL server, every complex type is associated with a Java bean. These associations are established based on the object and property names. That being said, we’ll create a POJO class for our posts:

public class Post {
    private String id;
    private String title;
    private String text;
    private String category;
    private String author;
    // getters, setters, constructor
}

Fields or methods on the Java bean that don’t appear in the GraphQL schema are simply ignored, so they cause no issues.

3.3. Creating the Mutation Resolver

We must mark the handler methods with the @MutationMapping annotation. These methods should be placed within regular @Controller components in our application, which registers the classes as data-modifying components in our GraphQL application:

@Controller
public class PostController {
    List<Post> posts = new ArrayList<>();

    @MutationMapping
    public Integer createPostReturnNullableType(@Argument String title, @Argument String text,
      @Argument String category, @Argument String authorId) {
        Post post = new Post();
        post.setId(UUID.randomUUID().toString());
        post.setTitle(title);
        post.setText(text);
        post.setCategory(category);
        post.setAuthor(authorId);
        posts.add(post);
        return null;
    }
}

We must annotate the method parameters with @Argument, matching the argument names defined in the schema. When declaring the schema, we defined the mutation’s return type as Int without the exclamation mark, which allows the return value to be null.
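To illustrate, calling the mutation (for example, through the GraphiQL UI) with some placeholder values produces a null payload for the field:

mutation {
    createPostReturnNullableType(title: "First post", text: "Some text", category: "tech", authorId: "1")
}

The response then contains null for the mutation field:

{
  "data": {
    "createPostReturnNullableType": null
  }
}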

4. Creating Custom Scalar

In GraphQL, scalars are the atomic data types that represent the leaf nodes in a GraphQL query or schema.

4.1. Scalars and Extended Scalars

According to the GraphQL specification, all implementations must include the following scalar types: String, Boolean, Int, Float, and ID. In addition, graphql-java-extended-scalars adds more custom-made scalars, like Long, BigDecimal, or LocalDate. However, neither the original nor the extended set of scalars has a special one for the null value. So, we’ll build our own scalar in this section.

4.2. Creating the Custom Scalar

To create a custom scalar, we should initialize a GraphQLScalarType singleton instance. We’ll utilize the Builder design pattern to create our scalar:

public class GraphQLVoidScalar {
    public static final GraphQLScalarType Void = GraphQLScalarType.newScalar()
      .name("Void")
      .description("A custom scalar that represents the null value")
      .coercing(new Coercing() {
          @Override
          public Object serialize(Object dataFetcherResult) {
              return null;
          }
          @Override
          public Object parseValue(Object input) {
              return null;
          }
          @Override
          public Object parseLiteral(Object input) {
              return null;
          }
      })
      .build();
}

The key components of the scalar are name, description, and coercing. Although the name and description are self-explanatory, the hard part of creating a custom scalar is graphql.schema.Coercing implementation. This class is responsible for three functions:

  • parseValue(): accepts a variable input object and transforms it into the corresponding Java runtime representation
  • parseLiteral(): receives an AST literal graphql.language.Value as input and transforms it into the Java runtime representation
  • serialize(): accepts a Java object and converts it into the output shape for that scalar

Although the implementation of coercing can be quite complicated for a complex object, in our case, we’ll return null for each method.

4.3. Register the Custom Scalar

We’ll start by creating a configuration class where we register our scalar:

@Configuration
public class GraphQlConfig {
    @Bean
    public RuntimeWiringConfigurer runtimeWiringConfigurer() {
        return wiringBuilder -> wiringBuilder.scalar(GraphQLVoidScalar.Void);
    }	
}

We create a RuntimeWiringConfigurer bean where we configure the runtime wiring for our GraphQL schema. In this bean, we use the scalar() method provided by the RuntimeWiring class to register our custom type.

4.4. Integrate the Custom Scalar

The final step is to integrate the custom scalar into our GraphQL schema by referencing it using the defined name. In this case, we use the scalar in the schema by simply declaring scalar Void.

This step ensures that the GraphQL engine recognizes and utilizes our custom scalar throughout the schema. Now, we can integrate the scalar into our mutation:

scalar Void
type Mutation {
    createPostReturnCustomScalar(title: String!, text: String!, category: String!, authorId: String!) : Void
}

Also, we’ll update the mapped method signature to return our scalar:

public Void createPostReturnCustomScalar(@Argument String title, @Argument String text, @Argument String category, @Argument String authorId)
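For completeness, here’s a minimal sketch of what the full resolver could look like, mirroring the earlier createPostReturnNullableType() logic and reusing the same in-memory posts list; only the method name and return type differ:

@MutationMapping
public Void createPostReturnCustomScalar(@Argument String title, @Argument String text,
  @Argument String category, @Argument String authorId) {
    Post post = new Post();
    post.setId(UUID.randomUUID().toString());
    post.setTitle(title);
    post.setText(text);
    post.setCategory(category);
    post.setAuthor(authorId);
    posts.add(post);
    // java.lang.Void has no instances, so null is the only value we can return
    return null;
}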

5. Conclusion

In this article, we explored implementing GraphQL mutations without returning specific data. We demonstrated setting up a server quickly with the Spring Boot GraphQL Starter. Furthermore, we introduced a custom Void scalar to handle null values, showcasing how to extend GraphQL’s capabilities.

As always, the complete code snippets are available over on GitHub.

       

Modify and Print List Items With Java Streams


1. Overview

When we work with Java, manipulating lists is a fundamental skill.

In this quick tutorial, we’ll explore different ways to modify or transform a list and then print its elements in Java.

2. Modifying and Printing a List

Printing a list of elements isn’t much of a challenge. For example, we can pass a print action to the forEach() method:

List<String> theList = Lists.newArrayList("Kai", "Liam", "Eric", "Kevin");
theList.forEach(element -> log.info(element));

In the code above, we used an SLF4J logger to output elements in the given list. When we execute the code, we can see the four names are printed in the console:

Kai
Liam
Eric
Kevin

If we intend to modify the elements in the list before printing them, we can utilize the List.replaceAll() method.

Next, let’s convert each String element in theList to uppercase and print the modified values in a test method:

List<String> theList = Lists.newArrayList("Kai", "Liam", "Eric", "Kevin");
theList.replaceAll(element -> element.toUpperCase());
theList.forEach(element -> log.info(element));
assertEquals(List.of("KAI", "LIAM", "ERIC", "KEVIN"), theList);

As we can see, we use a lambda expression in the replaceAll() method to perform the case conversion. After running the test, we can see the uppercase values in the console:

KAI
LIAM
ERIC
KEVIN

It’s worth noting that the replaceAll() method requires the list object to be a mutable list, such as the ArrayList used in the above code. The method throws an UnsupportedOperationException if the list is immutable, such as the list objects returned by Collections.singletonList() and List.of().
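For example, a quick test (using JUnit’s assertThrows) confirms this behavior on an immutable list:

List<String> immutableList = List.of("Kai", "Liam", "Eric", "Kevin");
assertThrows(UnsupportedOperationException.class, () -> immutableList.replaceAll(String::toUpperCase));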

Therefore, in practical scenarios, it’s often preferable to transform the original list into a new one rather than modifying it in place. Next, let’s explore how to transform a list and print its elements in the same pipeline.

3. Transforming and Printing a List Using the Stream API

The Stream API, introduced in Java 8, significantly changed the way we handle collections of objects. Streams provide a declarative and functional approach to processing data, offering a concise and expressive way to perform operations on collections.

For example, we can take a list as the source, use the map() method to transform elements in the stream, and print the elements using forEachOrdered() in this way:

theList.stream()
  .map(... <the transformation logic> ...)
  .forEachOrdered( ... <print the element> ...)

The code is pretty straightforward. However, it’s important to note that Stream.forEachOrdered() is a terminal operation. This terminal operation essentially marks the end of the stream pipeline. Consequently, the stream object becomes inaccessible after this method is called. This limitation implies that subsequent stream operations, such as collecting the transformed elements, are no longer feasible.
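For instance, a concrete version of that pipeline, assuming the same theList and SLF4J logger as before, could look like this:

theList.stream()
  .map(String::toUpperCase)
  .forEachOrdered(log::info);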

Therefore, we’d like to achieve our goal through a different approach, one that allows us to continue performing operations on the stream.

A straightforward idea is to include the printing method call in map():

List<String> theList = List.of("Kai", "Liam", "Eric", "Kevin");
List<String> newList = theList.stream()
  .map(element -> {
      String newElement = element.toUpperCase();
      log.info(newElement);
      return newElement;
  })
  .collect(Collectors.toList());
assertEquals(List.of("KAI", "LIAM", "ERIC", "KEVIN"), newList);

In this way, printing the stream doesn’t terminate the stream pipeline, and we can still perform a Collector operation afterward. Of course, the transformed elements are printed in the console:

KAI
LIAM
ERIC
KEVIN

However, one drawback of this approach is that it unnecessarily adds irrelevant logic to the map() method. Next, let’s improve it by employing the peek() method:

List<String> theList = List.of("Kai", "Liam", "Eric", "Kevin");
List<String> newList = theList.stream()
  .map(element -> element.toUpperCase())
  .peek(element -> log.info(element))
  .collect(Collectors.toList());
assertEquals(List.of("KAI", "LIAM", "ERIC", "KEVIN"), newList);

Unlike forEachOrdered(), peek() is an intermediate operation. It performs the provided action on each element in the stream and returns the stream. Therefore, we can add further operations to the stream pipeline after invoking peek(), such as collect() in the above code.

The peek() method accepts a Consumer instance as the parameter. In our example, we passed a lambda expression as the Consumer to peek().

When we give this test a run, it passes, and the expected output is printed to the console:

KAI
LIAM
ERIC
KEVIN

4. Conclusion

In this article, we first demonstrated how to modify and print a list using the replaceAll() + forEach() approach. Then, we explored how to use the Stream API to transform and print elements in a stream.

As always, the complete source code for the examples is available over on GitHub.

       

Count the Number of Unique Digits in an Integer using Java


1. Overview

In this short tutorial, we’ll explore how to count the number of unique digits in an integer using Java.

2. Understanding the Problem

Given an integer, our goal is to count how many unique digits it contains. For example, the integer 567890 has six unique digits, while 115577 has only three unique digits (1, 5, and 7).

3. Using a Set

The most straightforward way to find the number of unique digits in an integer is by using a Set. Sets inherently eliminate duplicates, which makes them perfect for our use case:

public static int countWithSet(int number) {
    number = Math.abs(number);
    Set<Character> uniqueDigits = new HashSet<>();
    String numberStr = String.valueOf(number);
    for (char digit : numberStr.toCharArray()) {
        uniqueDigits.add(digit);
    }
    return uniqueDigits.size();
}

Let’s break down our algorithm’s steps:

  • Convert the integer into a string to easily iterate over each digit.
  • Iterate through each character of the string and add to a HashSet.
  • The size of the HashSet after iteration gives us the count of unique digits.

The time complexity of this solution is O(n), where n is the number of digits in the integer. Adding to a HashSet and checking its size are both O(1) operations, but we still have to iterate through each digit.

4. Using Stream API

Java’s Stream API provides a concise and modern solution to count the number of unique digits in an integer. This approach leverages the power of streams to process a sequence of elements and filter for distinct ones in a collection-like manner:

public static long countWithStreamApi(int number) {
    return String.valueOf(Math.abs(number)).chars().distinct().count();
}

Let’s examine the steps involved:

  • Convert the number to a string.
  • Obtain a stream of characters by using the chars() method from the string.
  • Use the distinct() method to filter out duplicate digits.
  • Use the count() method to get the number of unique digits.

The time complexity is the same as the first solution.

5. Using Bit Manipulation

Let’s explore one more solution. Bit manipulation also offers a way to track unique digits:

public static int countWithBitManipulation(int number) {
    if (number == 0) {
        return 1;
    }
    number = Math.abs(number);
    int mask = 0;
    while (number > 0) {
        int digit = number % 10;
        mask |= 1 << digit;
        number /= 10;
    }
    return Integer.bitCount(mask);
}

Here are the steps of our code this time:

  • Initialize an integer mask to 0. Each bit in mask will represent a digit from 0-9.
  • Iterate through each digit of the number.
  • For each digit, create a bit representation. If the digit is d, then the bit representation is 1 << d.
  • Use bitwise OR to update mask. This marks the digit as seen.
  • Count the number of bits set to 1 in mask. This count is the number of unique digits.

The time complexity is also the same as the above solutions.
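As a quick sanity check, all three methods should agree on the same inputs. Assuming they live in a hypothetical UniqueDigitCounter class, a test sketch could look like this:

assertEquals(6, UniqueDigitCounter.countWithSet(567890));
assertEquals(3, UniqueDigitCounter.countWithSet(115577));
assertEquals(3L, UniqueDigitCounter.countWithStreamApi(115577));
assertEquals(3, UniqueDigitCounter.countWithBitManipulation(115577));
assertEquals(1, UniqueDigitCounter.countWithBitManipulation(0));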

6. Conclusion

This article provided different ways to count the number of unique digits in an integer, along with their time complexities.

The example code from this article can be found over on GitHub.

       

Query Hints in Spring Data JPA


1. Introduction

In this tutorial, we’ll explore the fundamentals of query hints in Spring Data JPA. These hints help to optimize database queries and potentially improve application performance by influencing the decision-making process of the optimizer. We’ll also discuss their functionality and how to apply them effectively.

2. Understanding Query Hints

Query hints in Spring Data JPA are a powerful tool that can help optimize database queries and improve application performance. Unlike directly controlling execution, they influence the decision-making process of the optimizer.

In Spring Data JPA, we find these hints in the org.hibernate.annotations package, alongside various annotations and classes associated with Hibernate, a prevalent persistence provider. It’s crucial to note that the interpretation and execution of these hints often depend on the underlying persistence provider, such as Hibernate or EclipseLink, making them vendor-specific.

3. Using Query Hints

Spring Data JPA offers various ways to leverage query hints for optimizing the database queries. Let’s explore the common approaches.

3.1. Annotation-Based Configuration

Spring Data JPA provides an easy way to add query hints to the JPA queries using annotations. The @QueryHints annotation enables the specification of an array of JPA @QueryHint hints intended for application to the generated SQL query.

Let’s consider the following example, where we set the JDBC fetch size hint to limit the result return size:

@Repository
public interface EmployeeRepository extends JpaRepository<Employee, Long> {
    @QueryHints(value = { @QueryHint(name = "org.hibernate.fetchSize", value = "50") })
    List<Employee> findByGender(String gender);
}

In this example, we’ve added the @QueryHints annotation to the findByGender() method of the EmployeeRepository interface to control the number of entities fetched at once. Furthermore, we can apply the @QueryHints annotation at the repository level to impact all queries within the repository:

@Repository
@QueryHints(value = { @QueryHint(name = "org.hibernate.fetchSize", value = "50") })
public interface EmployeeRepository extends JpaRepository<Employee, Long> {
    // Repository methods...
}

This action ensures that the specified query hint applies to all queries within the EmployeeRepository interface, hence promoting consistency across the repository’s queries.

3.2. Configuring Query Hints Programmatically

In addition to annotation-based and dynamic approaches, we can configure query hints programmatically using the EntityManager object. This method offers granular control over query hint configuration. Below is an example of programmatically setting a custom SQL comment hint:

@PersistenceContext
private EntityManager entityManager;

@Override
public List<Employee> findRecentEmployees(int limit, boolean readOnly) {
    Query query = entityManager.createQuery("SELECT e FROM Employee e ORDER BY e.joinDate DESC", Employee.class)
      .setMaxResults(limit)
      .setHint("org.hibernate.readOnly", readOnly);
    return query.getResultList();
}

In this example, we pass a boolean flag as an argument to indicate whether the hint should be set to true or false. This flexibility allows us to adapt the query behavior based on runtime conditions.

3.3. Define the Named Query in the Entity

Query hints can be applied using the @NamedQuery annotation directly within the Entity class. This allows us to define a named query alongside specific hints. For example, let’s consider the following code snippet:

@Entity
@NamedQuery(name = "selectEmployee", query = "SELECT e FROM Employee e",
  hints = @QueryHint(name = "org.hibernate.fetchSize", value = "50"))
public class Employee {
    // Entity properties and methods
}

Once defined within the Entity class, the named query selectEmployee with associated hints can be invoked using the EntityManager‘s createNamedQuery() method:

List<Employee> employees = em.createNamedQuery("selectEmployee").getResultList();

4. Query Hint Usage Scenarios

Query hints can be used in a variety of scenarios to optimize the query performance. Here are some common use cases.

4.1. Timeout Management

In scenarios where queries may run for extended durations, it becomes crucial to implement effective timeout management strategies. By utilizing the javax.persistence.query.timeout hint, we can establish a maximum execution time for queries. This practice ensures that queries don’t exceed specified time thresholds.

The hint accepts a value in milliseconds, and if the query exceeds the limit, it throws a QueryTimeoutException. Here’s an example where we set a timeout of 5000 milliseconds for retrieving active employees:

@QueryHints(value = {@QueryHint(name = "javax.persistence.query.timeout", value = "5000")})
List<Employee> findActiveEmployees(long inactiveDaysThreshold);

4.2. Caching Query Results

Query hints can be used to enable caching of query results using the jakarta.persistence.cache.retrieveMode hint. When set to USE, JPA attempts to retrieve the entity from the cache first before going to the database. On the other hand, setting it to BYPASS instructs JPA to ignore the cache and fetch the entity directly from the database.

Furthermore, we can also use jakarta.persistence.cache.storeMode to specify how JPA should handle storing entities in the second-level cache. When set to USE, JPA adds entities to the cache and updates existing ones. BYPASS mode instructs Hibernate to only update existing entities in the cache but not add new ones. Lastly, REFRESH mode refreshes entities in the cache before retrieving them, ensuring that the cached data is up-to-date.

Below is an example demonstrating the usage of these hints:

@QueryHints(value = {
    @QueryHint(name = "jakarta.persistence.cache.retrieveMode", value = "USE"),
    @QueryHint(name = "jakarta.persistence.cache.storeMode", value = "USE")
})
List<Employee> findEmployeesByName(String name);

In this scenario, both retrieveMode and storeMode are configured to USE, indicating that Hibernate actively utilizes the second-level cache for both retrieving and storing entities.

4.3. Optimizing Query Execution Plans

Query hints can be used to influence the execution plan generated by the database optimizer. For instance, when the data remains unaltered, we can use the org.hibernate.readOnly hint to denote that the query is read-only:

@QueryHints(@QueryHint(name = "org.hibernate.readOnly", value = "true"))
User findByUsername(String username);

4.4. Custom SQL Comment

The org.hibernate.comment hint allows for the addition of a custom SQL comment to queries, aiding in query analysis and debugging. This feature is particularly useful when we want to provide context or notes within the generated SQL statements.

Here’s an example:

@QueryHints(value = { @QueryHint(name = "org.hibernate.comment", value = "Retrieve employees older than the specified age") })
List<Employee> findByAgeGreaterThan(int age);

5. Conclusion

In this article, we learned about the importance of query hints in Spring Data JPA and their significant impact on optimizing database queries to enhance application performance. We explored various techniques for applying them effectively, including annotation-based configuration, programmatic configuration through the EntityManager, and named queries.

As always, the source code for the examples is available over on GitHub.

       

Generate Juggler Sequence in Java


1. Overview

The juggler sequence stands out for its intriguing behavior and elegant simplicity.

In this tutorial, we’ll understand the juggler sequence and explore how to generate the sequence using a given initial number in Java.

2. Understanding the Juggler Sequence

Before we dive into the code to generate juggler sequences, let’s quickly understand what a juggler sequence is.

In number theory, a juggler sequence is an integer sequence defined recursively as follows:

  • Start with a positive integer n as the first term of the sequence.
  • If n is even, the next term is n^(1/2), rounded down to the nearest integer.
  • If n is odd, the next term is n^(3/2), rounded down to the nearest integer.

This process continues until it reaches 1, where the sequence terminates. 

It’s worth mentioning that both n^(1/2) and n^(3/2) can be expressed as square root calculations:

  • n^(1/2) is the square root of n, so n^(1/2) = sqrt(n)
  • n^(3/2) = n^1 * n^(1/2) = n * sqrt(n)

An example may help us to understand the sequence quickly:

Given number: 3
-----------------
 3 -> odd  ->  3 * sqrt(3) -> (int)5.19.. -> 5
 5 -> odd  ->  5 * sqrt(5) -> (int)11.18.. -> 11
11 -> odd  -> 11 * sqrt(11)-> (int)36.48.. -> 36
36 -> even -> sqrt(36) -> (int)6 -> 6
 6 -> even -> sqrt(6) -> (int)2.45.. -> 2
 2 -> even -> sqrt(2) -> (int)1.41.. -> 1
 1
sequence: 3, 5, 11, 36, 6, 2, 1

It’s worth noting that all juggler sequences are conjectured to eventually reach 1, but this conjecture hasn’t been proven. As a result, we can’t give a rigorous Big O time complexity for generating the sequence.

Now that we know how a juggler sequence is generated, let’s implement some sequence generation methods in Java.

3. The Loop-Based Solution

Let’s first implement a loop-based generation method:

class JugglerSequenceGenerator {
 
    public static List<Integer> byLoop(int n) {
        if (n <= 0) {
            throw new IllegalArgumentException("The initial integer must be greater than zero.");
        }
        List<Integer> seq = new ArrayList<>();
        int current = n;
        seq.add(current);
        while (current != 1) {
            int next = (int) (Math.sqrt(current) * (current % 2 == 0 ? 1 : current));
            seq.add(next);
            current = next;
        }
        return seq;
    }
   
}

The code looks pretty straightforward. Let’s quickly walk through it and understand how it works:

  • First, validate the input n, as the initial number must be a positive integer.
  • Then, create the seq list to store the result sequence, assign the initial integer to current, and add it to seq.
  • The while loop is responsible for generating each term and appending it to the sequence based on the calculation we discussed earlier.
  • Once the loop terminates (when current becomes 1), the generated sequence stored in the seq list is returned.

Next, let’s create a test method to verify whether our loop-based approach can generate the expected result:

assertThrows(IllegalArgumentException.class, () -> JugglerSequenceGenerator.byLoop(0));
assertEquals(List.of(3, 5, 11, 36, 6, 2, 1), JugglerSequenceGenerator.byLoop(3));
assertEquals(List.of(4, 2, 1), JugglerSequenceGenerator.byLoop(4));
assertEquals(List.of(9, 27, 140, 11, 36, 6, 2, 1), JugglerSequenceGenerator.byLoop(9));
assertEquals(List.of(21, 96, 9, 27, 140, 11, 36, 6, 2, 1), JugglerSequenceGenerator.byLoop(21));
assertEquals(List.of(42, 6, 2, 1), JugglerSequenceGenerator.byLoop(42));

4. The Recursion-Based Solution

Alternatively, we can generate a juggler sequence from a given number recursively. First, let’s add the byRecursion() method to the JugglerSequenceGenerator class:

public static List<Integer> byRecursion(int n) {
    if (n <= 0) {
        throw new IllegalArgumentException("The initial integer must be greater than zero.");
    }
    List<Integer> seq = new ArrayList<>();
    fillSeqRecursively(n, seq);
    return seq;
}

As we can see, the byRecursion() method is the entry point of another juggler sequence generator. It validates the input number and prepares the result sequence list. However, the main sequence generation logic is implemented in the fillSeqRecursively() method:

private static void fillSeqRecursively(int current, List<Integer> result) {
    result.add(current);
    if (current == 1) {
        return;
    }
    int next = (int) (Math.sqrt(current) * (current % 2 == 0 ? 1 : current));
    fillSeqRecursively(next, result);
}

As the code shows, the method calls itself recursively with the next value and the result list. This means that the method will repeat the process of adding the current number to the sequence, checking termination conditions, and calculating the next term until the termination condition (current == 1) is met.

The recursion approach passes the same test:

assertThrows(IllegalArgumentException.class, () -> JugglerSequenceGenerator.byRecursion(0));
assertEquals(List.of(3, 5, 11, 36, 6, 2, 1), JugglerSequenceGenerator.byRecursion(3));
assertEquals(List.of(4, 2, 1), JugglerSequenceGenerator.byRecursion(4));
assertEquals(List.of(9, 27, 140, 11, 36, 6, 2, 1), JugglerSequenceGenerator.byRecursion(9));
assertEquals(List.of(21, 96, 9, 27, 140, 11, 36, 6, 2, 1), JugglerSequenceGenerator.byRecursion(21));
assertEquals(List.of(42, 6, 2, 1), JugglerSequenceGenerator.byRecursion(42));

5. Conclusion

In this article, we first discussed what a juggler sequence is. It’s important to note that it’s not yet proven that all juggler sequences eventually reach 1.

Also, we explored two approaches to generate the juggler sequence starting from a given integer.

As always, the complete source code for the examples is available over on GitHub.

       

Java Weekly, Issue 532


1. Spring and Java

>> Bending pause times to your will with Generational ZGC [netflixtechblog.com]

The surprising and not-so-surprising benefits of generational ZGC: reduced tail latencies, efficiency, operational simplicity, memory, and more. Really good stuff.

>> JDK 22 and JDK 23: What We Know So Far [infoq.com]

Exploring the recent advancements in Java 22 and beyond: a comprehensive overview of the latest features and enhancements.

>> Issue 315 – Random Newsletter [javaspecialists.eu]

And the RandomGenerator in Java 17: a detailed exploration of the newly introduced random generator interface.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical & Musings

>> Patterns of Legacy Displacement: Event Interception [martinfowler.com]

How to intercept any updates to the system state and route some of them to a new component.

Also worth reading:

3. Pick of the Week

>> Microsoft JDConf 2024, on the 27th and 28th [jdconf.com]

       

Migrate Application from Spring Security 5 to Spring Security 6/Spring Boot 3


1. Overview

Spring Security 6 comes with several major changes, including the removal of deprecated classes and methods and the introduction of new ones.

Migrating from Spring Security 5 to Spring Security 6 can be done incrementally without breaking the existing code base. Also, we can use third-party plugins like OpenRewrite to facilitate migration to the latest version.

In this tutorial, we’ll learn how to migrate an existing application using Spring Security 5 to Spring Security 6. We’ll replace deprecated methods and utilize lambda DSL to simplify configuration. Also, we’ll utilize OpenRewrite to make migration faster.

2. Spring Security and Spring Boot Version

Spring Boot builds on the Spring framework, and each Spring Boot release pulls in a matching Spring Security version. Spring Boot 2 defaults to Spring Security 5, while Spring Boot 3 uses Spring Security 6.

To use Spring Security in a Spring Boot application, we always add the spring-boot-starter-security dependency to the pom.xml.

However, we can override the default Spring Security version by specifying a desired version in the properties section of pom.xml:

<properties>
    <spring-security.version>5.8.9</spring-security.version>
</properties>

Here, we specify that we are using Spring Security 5.8.9 in our project, overwriting the default version.

Notably, we can also use Spring Security 6 in Spring Boot 2 by overriding the default version in the properties section.

3. What’s New in Spring Security 6

Spring Security 6 introduces several feature updates to improve security and robustness. It now requires at least Java version 17 and uses the jakarta namespace.

One of the major changes is the removal of WebSecurityConfigurerAdapter in favor of component-based security configuration.

Also, the authorizeRequests() is removed and replaced with authorizeHttpRequests() to define authorization rules.

Furthermore, it introduces requestMatchers() and securityMatcher() to replace antMatchers(), mvcMatchers(), antMatcher(), and mvcMatcher() when configuring security for request resources. The requestMatchers() method is more secure because it chooses the most appropriate RequestMatcher implementation for the request configuration.

Other deprecated methods like cors() and csrf() now have functional style alternatives.
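For illustration, a minimal sketch of the lambda-style alternatives, assuming a typical HttpSecurity configuration, might look like this:

http
  .cors(Customizer.withDefaults())
  .csrf(csrf -> csrf.disable());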

4. Project Setup

To begin, let’s bootstrap a Spring Boot 2.7.5 project by adding spring-boot-starter-web and spring-boot-starter-security to the pom.xml:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <version>2.7.5</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-security</artifactId>
    <version>2.7.5</version>
</dependency>

The spring-boot-starter-security dependency uses the Spring Security 5.

Next, let’s create a class named WebSecurityConfig:

@EnableWebSecurity
@EnableGlobalMethodSecurity(prePostEnabled = true)
class WebSecurityConfig extends WebSecurityConfigurerAdapter {
}

Here, we annotate the class with @EnableWebSecurity to initiate the process of configuring security for web requests. Also, we enable method-level authorization. Next, the class extends the WebSecurityConfigurerAdapter class to provide various security configuration methods.

Furthermore, let’s define an in-memory user for authentication:

@Override
protected void configure(AuthenticationManagerBuilder auth) throws Exception {
    UserDetails user = User.withDefaultPasswordEncoder()
      .username("Admin")
      .password("password")
      .roles("ADMIN")
      .build();
    auth.inMemoryAuthentication().withUser(user);
}

In the method above, we create an in-memory user by overriding the default configuration.

Moving on, let’s exclude static resources from the security by overriding the configure(WebSecurity web) method:

@Override
public void configure(WebSecurity web) {
    web.ignoring().antMatchers("/js/**", "/css/**");
}

Finally, let’s create HttpSecurity by overriding the configure(HttpSecurity http) method:

@Override
protected void configure(HttpSecurity http) throws Exception {
    http
      .authorizeRequests()
      .antMatchers("/").permitAll()
      .anyRequest().authenticated()
      .and()
      .formLogin()
      .and()
      .httpBasic();
}

Notably, this setup showcases typical Spring Security 5 configuration. In the subsequent section, we’ll migrate this code to Spring Security 6.

5. Migrating Project to Spring Security 6

Spring recommends an incremental migration approach to prevent breaking existing code when updating to Spring Security 6. Before upgrading to Spring Security 6, we can first upgrade our Spring Boot application to Spring Security 5.8.5 and update the code to use new features. Migrating to 5.8.5 prepares us for expected changes in version 6.

While migrating incrementally, our IDE can warn us of deprecated features. This aids the incremental update process.

For simplicity, let’s migrate the sample project straight to Spring Security 6 by updating the application to use Spring Boot version 3.2.2. In a case where the application uses Spring Boot version 2, we can specify Spring Security 6 in the properties section.

To begin the migration process, let’s modify the pom.xml to use the latest Spring Boot version:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <version>3.2.2</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-security</artifactId>
    <version>3.2.2</version>
</dependency>

In the initial setup, we use Spring Boot 2.7.5, which uses Spring Security 5 under the hood.

Notably, the minimum Java version for Spring Boot 3 is Java 17.

In the subsequent sub-sections, we’ll refactor the existing code to use Spring Security 6.

5.1. @Configuration Annotation

Before Spring Security 6, the @Configuration annotation was part of @EnableWebSecurity, but with the latest update, we have to annotate our security configuration class with @Configuration explicitly:

@Configuration
@EnableWebSecurity
@EnableMethodSecurity
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {
}

Here, we introduce the @Configuration annotation to the existing code base because it’s no longer part of the @EnableWebSecurity annotation. Also, the annotation is no longer part of @EnableMethodSecurity, @EnableWebFluxSecurity, or @EnableGlobalMethodSecurity annotations.

Additionally, @EnableGlobalMethodSecurity is deprecated and replaced by @EnableMethodSecurity, which enables Spring’s pre-post annotations by default. Hence, we introduce @EnableMethodSecurity to provide authorization at the method level.

5.2. WebSecurityConfigurerAdapter

The latest update removes the WebSecurityConfigurerAdapter class and adopts component-based configuration:

@Configuration
@EnableWebSecurity
public class WebSecurityConfig {
}

Here, we remove the WebSecurityConfigurerAdapter, which eliminates the overridden security configuration methods. Instead, we register beans for security configuration: a WebSecurityCustomizer bean to configure web security, a SecurityFilterChain bean to configure HTTP security, an InMemoryUserDetailsManager bean to register custom users, and so on.

5.3. WebSecurityCustomizer Bean

Let’s modify the method that excludes static resources by publishing a WebSecurityCustomizer bean:

@Bean
WebSecurityCustomizer webSecurityCustomizer() {
   return (web) -> web.ignoring().requestMatchers("/js/**", "/css/**");
}

The WebSecurityCustomizer interface replaces configure(WebSecurity web) from the WebSecurityConfigurerAdapter class.

5.4. AuthenticationManager Bean

In the earlier section, we created an in-memory user by overriding configure(AuthenticationManagerBuilder auth) from the WebSecurityConfigurerAdapter.

Let’s refactor the authentication credentials logic by registering InMemoryUserDetailsManager bean instead:

@Bean
InMemoryUserDetailsManager userDetailsService() {
    UserDetails user = User.withDefaultPasswordEncoder()
      .username("Admin")
      .password("admin")
      .roles("USER")
      .build();
    return new InMemoryUserDetailsManager(user);
}

Here, we define an in-memory user with a USER role to provide role-based authorization.

5.5. HTTP Security Configuration

In the previous Spring Security version, we configured HttpSecurity by overriding the configure(HttpSecurity http) method from the WebSecurityConfigurerAdapter class. Since that class is removed in the latest version, let’s register a SecurityFilterChain bean for the HTTP security configuration:

@Bean
SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
    http
      .authorizeHttpRequests(
          request -> request
            .requestMatchers("/").permitAll()
            .anyRequest().authenticated()
      )
      .formLogin(Customizer.withDefaults())
      .httpBasic(Customizer.withDefaults());
   return http.build();
}

In the code above, we replace the authorizeRequests() method with authorizeHttpRequests(). The new method uses the AuthorizationManager API, which simplifies reuse and customization.

Additionally, it improves performance by delaying authentication lookup. Authentication lookup occurs only when a request requires authorization.

When we don’t have a customized rule, we use the Customizer.withDefaults() method to use the default configuration.

Additionally, we use requestMatchers() instead of antMatcher() or mvcMatcher() to secure resources.

5.6. RequestCache

The request cache saves the user’s request when authentication is required and redirects the user to it once they authenticate successfully. Before Spring Security 6, RequestCache checked every incoming request for a saved request to redirect to, which meant reading the HttpSession on every request.

However, in Spring Security 6, the request cache only checks for a saved request if the URL contains a special parameter named "continue". This improves performance and prevents unnecessary reads of the HttpSession.
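If an application relies on the old behavior, it can be restored. A minimal sketch, assuming the HttpSessionRequestCache API introduced in Spring Security 5.8, configures the cache inside the SecurityFilterChain bean:

HttpSessionRequestCache requestCache = new HttpSessionRequestCache();
// a null parameter name makes the cache check the HttpSession on every request, as before version 6
requestCache.setMatchingRequestParameterName(null);
http.requestCache(cache -> cache.requestCache(requestCache));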

6. Using OpenRewrite

Moreover, we can use third-party tools like OpenRewrite to migrate an existing Spring Boot application to Spring Boot 3. Since Spring Boot 3 uses Spring Security 6, it also migrates the security configuration to version 6.

To use OpenRewrite, we can add a plugin to the pom.xml:

<plugin>
    <groupId>org.openrewrite.maven</groupId>
    <artifactId>rewrite-maven-plugin</artifactId>
    <version>5.23.1</version>
    <configuration>
        <activeRecipes>
            <recipe>org.openrewrite.java.spring.boot3.UpgradeSpringBoot_3_0</recipe>
        </activeRecipes>
    </configuration>
    <dependencies>
        <dependency>
            <groupId>org.openrewrite.recipe</groupId>
            <artifactId>rewrite-spring</artifactId>
            <version>5.5.0</version>
        </dependency>
    </dependencies>
</plugin>

Here, we specify upgrading to Spring Boot version 3 through the recipe property. OpenRewrite has a lot of recipes to choose from for upgrading a Java project.

Finally, let’s run the migration command:

$ mvn rewrite:run

The command above migrates the project to Spring Boot 3, including the security configuration. However, OpenRewrite currently doesn’t use lambda DSL for the migrated security configuration. Of course, this is likely to change in future releases.

7. Conclusion

In this article, we saw a step-by-step guide for migrating an existing code base using Spring Security 5 to Spring Security 6 by replacing deprecated classes and methods. Additionally, we saw how to use a third-party plugin to automate the migration. As always, the full source code for the examples is available over on GitHub.

       

Document Query Parameters with Spring REST Docs


1. Overview

Documentation is crucial for any piece of code that we intend to share with the world, especially if this code is relatively complex. Good API documentation not only attracts developers to use it but also reflects the quality of the product. A company with sloppily written documentation might also have a sloppily written API.

However, developers like writing code for machines, not text for people.

In this tutorial, we’ll explore how to combine writing documentation and writing APIs with Spring REST Docs. We’ll take query parameter documentation as an example.

2. API

Let’s consider a straightforward API with a single endpoint:

@RestController
@RequestMapping("/books")
public class BookController {
    private final BookService service;
    public BookController(BookService service) {
        this.service = service;
    }
    @GetMapping
    public List<Book> getBooks(@RequestParam(name = "page") Integer page) {
        return service.getBooks(page);
    }
}

This endpoint returns a collection of books available on our website. However, due to the massive volume of the books available, we cannot return all of them. Clients provide a page number of our catalog, and we send them the information only for this page.

We decided to make this parameter required; since @RequestParam parameters are required by default, no extra setup is needed. This way, we improve the performance of our service and don’t allow clients to request too much data in one go.

However, we must provide information about our decision and explain the rules clients should follow. In this case, clients receive an error message if the parameter isn’t present.

3. Documentation

The usual approach is to write the documentation by hand, meaning that a developer has to express the same thing twice: first in the code and then in the text explaining how to interact with the system. However, this is wasteful, and we cannot assume that all developers will follow it.

Documentation is quite a formal document that aims for clarity rather than inspirational insights, clever wording, or innovative plot structure. Thus, why don’t we generate documentation from code? This way, we won’t write the same thing twice, and all the changes are reflected in the documentation.

Spring REST Docs does precisely this. However, it generates the documentation not from the code, which doesn’t provide much context, but from the tests. This way, we can express quite complex cases and examples. Another benefit is that the documentation won’t be generated if our tests fail.

4. Tests With Documentation

Spring REST Docs supports the major frameworks for REST testing. We’ll consider examples for MockMvc, WebTestClient, and REST-assured. However, the main idea and structure are similar for all of them.

Also, we’ll be using JUnit 5 as the base for our test cases, but it’s possible to set up Spring REST Docs with JUnit 4.

All of the testing methods below require an additional extension:

@ExtendWith({RestDocumentationExtension.class, SpringExtension.class})

These are special classes for documentation generation.

4.1. WebTestClient

Let’s start with WebTestClient, a more modern REST testing method. As was mentioned previously, we need to extend the test class with additional extensions. Also, we need to configure it:

@BeforeEach
public void setUp(ApplicationContext webApplicationContext, RestDocumentationContextProvider restDocumentation) {
    this.webTestClient = WebTestClient.bindToApplicationContext(webApplicationContext)
      .configureClient()
      .filter(documentationConfiguration(restDocumentation))
      .build();
}

After that, we can write a test that wouldn’t only check our API but also provide information about the request:

@Test
@WithMockUser
void givenEndpoint_whenSendGetRequest_thenSuccessfulResponse() {
    webTestClient.get().uri("/books?page=2")
      .exchange().expectStatus().isOk().expectBody()
      .consumeWith(document("books",
        requestParameters(parameterWithName("page").description("The page to retrieve"))));
}

4.2. WebMvcTest and MockMvc

In general, this method is very similar to the previous one. It also requires a correct setup:

@BeforeEach
public void setUp(WebApplicationContext webApplicationContext, RestDocumentationContextProvider restDocumentation) {
    this.mockMvc = webAppContextSetup(webApplicationContext)
      .apply(documentationConfiguration(restDocumentation))
      .alwaysDo(document("{method-name}", preprocessRequest(prettyPrint()), preprocessResponse(prettyPrint())))
      .build();
}

The test method looks the same, except for the fact that we’re using MockMvc and its API:

@Test
void givenEndpoint_whenSendGetRequest_thenSuccessfulResponse() throws Exception {
    mockMvc.perform(get("/books?page=2"))
      .andExpect(status().isOk())
      .andDo(document("books",
        requestParameters(parameterWithName("page").description("The page to retrieve"))));
}

4.3. REST-assured

Lastly, let’s check an example with REST-assured. Because we need a running server for this one, we shouldn’t use @WebMvcTest or @AutoConfigureMockMvc. Here, we use @AutoConfigureWebMvc and also provide the correct port:

@BeforeEach
void setUp(RestDocumentationContextProvider restDocumentation, @LocalServerPort int port) {
    this.spec = new RequestSpecBuilder()
      .addFilter(documentationConfiguration(restDocumentation))
      .setPort(port)
      .build();
}

However, the tests look generally the same:

@Test
@WithMockUser
void givenEndpoint_whenSendGetRequest_thenSuccessfulResponse() {
    RestAssured.given(this.spec).filter(document("users", requestParameters(
        parameterWithName("page").description("The page to retrieve"))))
      .when().get("/books?page=2")
      .then().assertThat().statusCode(is(200));
}

5. Generated Documentation

However, at this point, we still don’t have generated documentation. To get it, we need to go through a few additional steps.

5.1. Generated Snippets

We can find generated snippets in the target folder after running our tests. However, we can configure the output directory to define a different place to store the snippets. In general, they look like this:

[source,bash]
----
$ curl 'http://localhost:8080/books?page=2' -i -X GET
----

At the same time, we can see the information about our parameters, which is stored in a .adoc file.

|===
|Parameter|Description
|`+page+`
|The page to retrieve
|===

5.2. Documentation Generation

The next step is to provide a configuration for AsciiDoctor to create an HTML with more readable documentation. AsciiDoc is a simple yet powerful markup language. We can use it for various purposes, such as generating HTML and PDFs or writing a book.

Thus, as we want to generate HTML documentation, we need to outline the template for our HTML:

= Books With Spring REST Docs
How you should interact with our bookstore:
.request
include::{snippets}/books/http-request.adoc[]
.request-parameters
include::{snippets}/books/request-parameters.adoc[]
.response
include::{snippets}/books/http-response.adoc[]

In our case, we use a simple format, but it’s possible to create a more elaborate custom format that would be appealing and informative. The flexibility of AsciiDoc helps us with this.

5.3. Generated HTML

After the correct setup and configuration, we can attach the generation goal to a Maven phase:

<executions>
    <execution>
        <id>generate-docs</id>
        <phase>package</phase>
        <goals>
            <goal>process-asciidoc</goal>
        </goals>
        <configuration>
            <backend>html</backend>
            <doctype>book</doctype>
            <attributes>
                <snippets>${snippetsDirectory}</snippets>
            </attributes>
            <sourceDirectory>src/docs/asciidocs</sourceDirectory>
            <outputDirectory>target/generated-docs</outputDirectory>
        </configuration>
    </execution>
</executions>

We can run the required mvn command to trigger the generation. The template we defined in the previous section is then rendered as an HTML page containing the request, the request parameters, and the response.

We can attach this process to our pipeline and always have relevant and correct documentation. Another benefit is that this process reduces manual work, which is wasteful and error-prone.

6. Conclusion

Documentation is an essential part of software. Developers acknowledge this, but only a few consistently write or maintain it. Spring REST Docs allows us to generate good documentation with minimal effort, based on the code rather than on our understanding of what the API should do.

As usual, all the code from this tutorial is available over on GitHub.

       

Introduction to JFreeChart


1. Overview

In this tutorial, we’ll see how to use JFreeChart, a comprehensive Java library for creating a wide variety of charts. We can use it to integrate graphical data representations into a Swing application. It also includes a separate extension for JavaFX.

We’ll start with the basics, covering setup and chart creation, and try a few different types of charts.

2. Creating Our First Chart

JFreeChart allows us to create line charts, bar charts, pie charts, scatter plots, time series charts, histograms, and others. It can also combine different charts into a single view.

2.1. Setting Up Dependencies

To get started, we need to add the jfreechart artifact to our pom.xml file:

<dependency>
    <groupId>org.jfree</groupId>
    <artifactId>jfreechart</artifactId>
    <version>1.5.4</version>
</dependency>

We should always check the latest release and its JDK compatibility to make sure our project is up-to-date and working properly. In this case, version 1.5.4 requires JDK8 or later.

2.2. Creating a Basic Line Chart

Let’s start by using DefaultCategoryDataset to create a dataset for our graph:

DefaultCategoryDataset dataset = new DefaultCategoryDataset();
dataset.addValue(200, "Sales", "January");
dataset.addValue(150, "Sales", "February");
dataset.addValue(180, "Sales", "March");
dataset.addValue(260, "Sales", "April");
dataset.addValue(300, "Sales", "May");

Now, we can create a JFreeChart object that uses the previous dataset to plot a line chart.

The ChartFactory.createLineChart method takes the chart title, the x-axis and y-axis labels, and the dataset as parameters:

JFreeChart chart = ChartFactory.createLineChart(
    "Monthly Sales",
    "Month",
    "Sales",
    dataset);

Next, a ChartPanel object is essential for displaying our chart within a Swing component. This object is then used as a JFrame content pane to create the application window:

ChartPanel chartPanel = new ChartPanel(chart);
JFrame frame = new JFrame();
frame.setSize(800, 600);
frame.setContentPane(chartPanel);
frame.setLocationRelativeTo(null);
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.setVisible(true);

When we run this code, we’ll see our Monthly Sales graph:

JFreeChart - Line Chart Example

As we can see, we’ve created a chart using only a small amount of code.

3. Exploring Different Types of Charts

In the remaining examples, we’ll try some different types of charts. We won’t need to change the code much.

3.1. Bar Charts

We can modify the JFreeChart creation code to convert our line graph to a bar chart:

JFreeChart chart = ChartFactory.createBarChart(
    "Monthly Sales",
    "Month",
    "Sales",
    dataset);

This plots the dataset from the previous example:

JFreeChart - Bar Chart Example

As we can see, JFreeChart is quite flexible, making it easy to display the same data with different kinds of charts.

3.2. Pie Charts

Pie charts show the proportion of parts within a whole. To create a pie chart, we need to use the DefaultPieDataset class to create our dataset. We also use the createPieChart() method to build our JFreeChart object:

DefaultPieDataset<String> dataset = new DefaultPieDataset<>();
dataset.setValue("January", 200);
dataset.setValue("February", 150);
dataset.setValue("March", 180);
JFreeChart chart = ChartFactory.createPieChart(
    "Monthly Sales",
    dataset,
    true,    // include legend
    true,    // generate tooltips
    false);  // no URLs

Tooltips showing absolute sales in a month and relative percentage of the total are visible when we hover the mouse over a slice of the pie:

JFreeChart - Pie Chart Example

Finally, we should note that there are several variations of the ChartFactory.createPieChart() method for finer customization.

3.3. Time Series Charts

Time series charts show trends in data over time. To construct the dataset, we need a TimeSeriesCollection object, which is a collection of TimeSeries objects, each of which is a sequence of data items containing a value related to a specific time period:

TimeSeries series = new TimeSeries("Monthly Sales");
series.add(new Month(1, 2024), 200);
series.add(new Month(2, 2024), 150);
series.add(new Month(3, 2024), 180);
TimeSeriesCollection dataset = new TimeSeriesCollection();
dataset.addSeries(series);
JFreeChart chart = ChartFactory.createTimeSeriesChart(
    "Monthly Sales",
    "Date",
    "Sales",
    dataset,
    true,    // legend
    false,   // tooltips
    false);  // no URLs

Let’s see the result:

JFreeChart - Time Series Chart Example

This example shows the power of JFreeChart in plotting time series data, making it easy to track changes over time.

3.4. Combination Charts

Combination charts allow us to combine different types of charts into one. Compared to the previous examples, the code is a bit more complex.

We need to use DefaultCategoryDataset to store the data, but here, we create two instances, one for each chart type:

DefaultCategoryDataset lineDataset = new DefaultCategoryDataset();
lineDataset.addValue(200, "Sales", "January");
lineDataset.addValue(150, "Sales", "February");
lineDataset.addValue(180, "Sales", "March");
DefaultCategoryDataset barDataset = new DefaultCategoryDataset();
barDataset.addValue(400, "Profit", "January");
barDataset.addValue(300, "Profit", "February");
barDataset.addValue(250, "Profit", "March");

CategoryPlot creates the plot area that contains both types of charts. It allows us to assign datasets to renderers — LineAndShapeRenderer for lines, and BarRenderer for bars:

CategoryPlot plot = new CategoryPlot();
plot.setDataset(0, lineDataset);
plot.setRenderer(0, new LineAndShapeRenderer());
plot.setDataset(1, barDataset);
plot.setRenderer(1, new BarRenderer());
plot.setDomainAxis(new CategoryAxis("Month"));
plot.setRangeAxis(new NumberAxis("Value"));
plot.setOrientation(PlotOrientation.VERTICAL);
plot.setRangeGridlinesVisible(true);
plot.setDomainGridlinesVisible(true);

Finally, let’s use JFreeChart to create the final chart:

JFreeChart chart = new JFreeChart(
    "Monthly Sales and Profit",
    null,  // null means to use default font
    plot,  // combination chart as CategoryPlot
    true); // legend

This setup allows for a combined presentation of data that visually demonstrates the relationship between sales and profit:

This way, JFreeChart can present complex datasets that require multiple chart types for better understanding and analysis.

4. Conclusion

In this article, we’ve explored the creation of different types of charts with JFreeChart, including line charts, bar charts, pie charts, time series charts, and combination charts.

This introduction only scratches the surface of what JFreeChart can do.

As always, the code for this article is available over on GitHub.

       

Finding the Parent of a Node in a Binary Search Tree with Java


1. Introduction

A Binary Search Tree (BST) is a data structure that helps us efficiently solve real-world problems.

In this post, we’ll look at how to solve the problem of finding the parent of a node in a BST.

2. What Is a Binary Search Tree?

A BST is a tree where each node points to at most two nodes, often called left and right children. Additionally, each node’s value is greater than its left child’s value and smaller than its right child’s value.

For instance, let’s picture three nodes, A=2, B=1, and C=4. Hence, one possible BST has A as the root, B as its left child, and C as its right child.

In the next sections, we’ll use a BST structure implemented with a default insert() method to exercise the problem of finding the parent of a node.
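
The TreeNode class itself isn’t listed in this article. A minimal sketch of what we assume it looks like (an int value, left and right references, a standard BST insert(), and value-based equals()) might be:

class TreeNode {
    int value;
    TreeNode left;
    TreeNode right;

    TreeNode(int value) {
        this.value = value;
    }

    // standard BST insertion: smaller values go left, larger values go right
    void insert(int newValue) {
        if (newValue < value) {
            if (left == null) {
                left = new TreeNode(newValue);
            } else {
                left.insert(newValue);
            }
        } else if (newValue > value) {
            if (right == null) {
                right = new TreeNode(newValue);
            } else {
                right.insert(newValue);
            }
        }
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof TreeNode && ((TreeNode) o).value == value;
    }

    @Override
    public int hashCode() {
        return Integer.hashCode(value);
    }
}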

3. The Parent of a Node in a Binary Search Tree

In the following sections, we’ll describe the problem of finding a node’s parent in a BST and exercise a few approaches to solve it.

3.1. Description of the Problem

As we’ve seen throughout the article, a given node of a BST has pointers to its left and right children.

For instance, let’s picture a simple BST with three nodes:

The node 8 contains two children, 5 and 12. Hence, node 8 is the parent of nodes 5 and 12.

The problem consists of finding the parent of any given node value. In other words, we must find the node where any of its children equals the target value. For instance, in the BST of the image above, if we input 5 into our program, we expect 8 as output. If we input 12, we also expect 8.

The edge cases for this problem are finding the parent for either the topmost root node or a node that doesn’t exist in the BST. In both cases, there’s no parent node.

3.2. Test Structure

Before diving into the various solutions, let’s first define a basic structure for our tests:

class BinaryTreeParentNodeFinderUnitTest {
    TreeNode subject;
    @BeforeEach
    void setUp() {
        subject = new TreeNode(8);
        subject.insert(5);
        subject.insert(12);
        subject.insert(3);
        subject.insert(7);
        subject.insert(1);
        subject.insert(4);
        subject.insert(11);
        subject.insert(14);
        subject.insert(13);
        subject.insert(16);
    }
}

The BinaryTreeParentNodeFinderUnitTest defines a setUp() method that creates the following BST:

 

4. Implementing a Recursive Solution

The straightforward solution to the problem is using recursion to traverse the tree and early return the node where any of its children is equal to the target value.

Let’s first define a public method in the TreeNode class:

TreeNode parent(int target) throws NoSuchElementException {
    return parent(this, new TreeNode(target));
}

Now, let’s define the recursive version of the parent() method in the TreeNode class:

TreeNode parent(TreeNode current, TreeNode target) throws NoSuchElementException {
    if (current == null || target.equals(current)) {
        throw new NoSuchElementException(format("No parent node found for 'target.value=%s' " +
            "The target is not in the tree or the target is the topmost root node.",
            target.value));
    }
    if (target.equals(current.left) || target.equals(current.right)) {
        return current;
    }
    return parent(target.value < current.value ? current.left : current.right, target);
}

The algorithm first checks if the current node is the topmost root node or the node doesn’t exist in the tree. In both situations, the node doesn’t have a parent, so we throw a NoSuchElementException.

Then, the algorithm checks if any current node children equal the target. If so, the current node is the parent of the target node. Thus, we return current.

Finally, we traverse the BST using recursive calls to left or right, depending on the target value.

Let’s test our recursive solution:

@Test
void givenBinaryTree_whenFindParentNode_thenReturnCorrectParentNode() {
    assertThrows(NoSuchElementException.class, () -> subject.parent(1231));
    assertThrows(NoSuchElementException.class, () -> subject.parent(8));
    assertEquals(8, subject.parent(5).value);
    assertEquals(5, subject.parent(3).value);
    assertEquals(5, subject.parent(7).value);
    assertEquals(3, subject.parent(4).value);
    // assertions for other nodes
}

In the worst-case scenario, the algorithm executes at most n recursive operations with O(1) cost each to find the parent node, where n is the number of nodes in the BST. Thus, it is O(n) in time complexity. That time falls to O(log n) in well-balanced BSTs since its height is always at most log n.

Additionally, the algorithm uses call-stack space for the recursive calls. In the worst-case scenario, the recursion only stops when we reach a leaf node, so the algorithm stacks at most h recursive calls, which makes it O(h) in space complexity, where h is the BST’s height.

5. Implementing an Iterative Solution

Pretty much any recursive solution has an iterative counterpart. In particular, we can also find the parent of a node in a BST using a stack and while loops instead of recursion.

For that, let’s add the iterativeParent() method to the TreeNode class:

TreeNode iterativeParent(int target) {
    return iterativeParent(this, new TreeNode(target));
}

The method above is simply an interface to the helper method below:

TreeNode iterativeParent(TreeNode current, TreeNode target) {
    Deque<TreeNode> parentCandidates = new LinkedList<>();
    String notFoundMessage = format("No parent node found for 'target.value=%s' " +
        "The target is not in the tree or the target is the topmost root node.",
        target.value);
    if (target.equals(current)) {
        throw new NoSuchElementException(notFoundMessage);
    }
    while (current != null || !parentCandidates.isEmpty()) {
        while (current != null) {
            parentCandidates.addFirst(current);
            current = current.left;
        }
        current = parentCandidates.pollFirst();
        if (target.equals(current.left) || target.equals(current.right)) {
            return current;
        }
        current = current.right;
    }
    throw new NoSuchElementException(notFoundMessage);
}

The algorithm first initializes a stack to store parent candidates. Then, the traversal relies on four main parts:

  1. The outer while loop checks if we still have a node to visit or if the stack of parent candidates is not empty. In both cases, we should continue traversing the BST until we find the target’s parent.
  2. The inner while loop checks if the current node is non-null. Visiting a non-null node means we should traverse left first, since we use an in-order traversal. Thus, we add the node to the stack as a parent candidate and continue traversing left.
  3. After visiting the left nodes, we poll a node from the Deque, check if that node is the target’s parent, and return it if so. We keep traversing to the right if we don’t find a parent.
  4. Finally, if the main loop completes without returning any node, we can assume that the node doesn’t exist or it’s the topmost root node.

Now, let’s test the iterative approach:

@Test
void givenBinaryTree_whenFindParentNodeIteratively_thenReturnCorrectParentNode() {
    assertThrows(NoSuchElementException.class, () -> subject.iterativeParent(1231));
    assertThrows(NoSuchElementException.class, () -> subject.iterativeParent(8));
    assertEquals(8, subject.iterativeParent(5).value);
    assertEquals(5, subject.iterativeParent(3).value);
    assertEquals(5, subject.iterativeParent(7).value);
    assertEquals(3, subject.iterativeParent(4).value);
    
    // assertion for other nodes
}

In the worst case, we need to traverse the entire tree to find the parent, which makes the iterative solution O(n) in time complexity. Again, if the BST is well-balanced, we can do the same in O(log n).

When we reach a leaf node, we start polling elements from the parentCandidates stack. Hence, that additional stack to store the parent candidates contains, at most, h elements, where h is the height of the BST. Therefore, it also has O(h) space complexity.

6. Creating a BST With Parent Pointers

Another solution to the problem is to modify the existing BST data structure to store each node’s parent.

For that, let’s create another class named ParentKeeperTreeNode with a new field called parent:

class ParentKeeperTreeNode {
    int value;
    ParentKeeperTreeNode parent;
    ParentKeeperTreeNode left;
    ParentKeeperTreeNode right;
    // value field arg constructor
    // equals and hashcode
}

Now, we need to create a custom insert() method to also save the parent node:

void insert(final int value) {
    insert(this, value);
}
void insert(ParentKeeperTreeNode currentNode, final int value) {
    if (currentNode.left == null && value < currentNode.value) {
        currentNode.left = new ParentKeeperTreeNode(value);
        currentNode.left.parent = currentNode;
        return;
    }
    if (currentNode.right == null && value > currentNode.value) {
        currentNode.right = new ParentKeeperTreeNode(value);
        currentNode.right.parent = currentNode;
        return;
    }
    if (value > currentNode.value) {
        insert(currentNode.right, value);
    }
    if (value < currentNode.value) {
        insert(currentNode.left, value);
    }
}

The insert() method also saves the parent when creating a new left or right child for the current node. In that case, since we are creating a new child, the parent is always the current node we are visiting.

Finally, we can test the BST version that stores parent pointers:

@Test
void givenParentKeeperBinaryTree_whenGetParent_thenReturnCorrectParent() {
    ParentKeeperTreeNode subject = new ParentKeeperTreeNode(8);
    subject.insert(5);
    subject.insert(12);
    subject.insert(3);
    subject.insert(7);
    subject.insert(1);
    subject.insert(4);
    subject.insert(11);
    subject.insert(14);
    subject.insert(13);
    subject.insert(16);
    assertNull(subject.parent);
    assertEquals(8, subject.left.parent.value);
    assertEquals(8, subject.right.parent.value);
    assertEquals(5, subject.left.left.parent.value);
    assertEquals(5, subject.left.right.parent.value);
    // tests for other nodes
}

In that type of BST, we calculate parents during node insertion. Thus, to verify the results, we can simply check the parent reference in each node.

Therefore, instead of calculating the parent() of each given node in O(h), we can get it immediately by reference in O(1) time. Additionally, each node’s parent is just a reference to another existing object in memory. Thus, the space complexity is also O(1).

That version of BST is helpful when we often need to retrieve the parent of a node since the parent() operation is well-optimized.

7. Conclusion

In this article, we saw the problem of finding the parent of any given node of a BST.

We’ve exercised three solutions to the problem with code examples. One uses recursion to traverse the BST. The other uses a stack to store parent candidates and traverse the BST. The last one keeps a parent reference in each node to get it in constant time.

As always, the source code is available over on GitHub.

       

How to Mock Amazon S3 for Integration Test


1. Introduction

In this article, we’ll learn how to mock Amazon S3 (Simple Storage Service) to run integration tests for Java applications.

To demonstrate how it works, we’ll create a CRUD (create, read, update, delete) service that uses the AWS SDK to interact with S3. Then, we’ll write integration tests for each operation using a mocked S3 service.

2. S3 Overview

Amazon Simple Storage Service (S3) is a highly scalable and secure cloud storage service provided by Amazon Web Services (AWS). It uses an object storage model, allowing users to store and retrieve data from anywhere on the web.

The service is accessible by a REST-style API, and AWS provides an SDK for Java applications to perform actions like creating, listing, and deleting S3 buckets and objects.

Next, let’s start creating the Java CRUD service for S3 using the AWS SDK and implement the create, read, update, and delete operations.

3. Demo S3 CRUD Java Service

Before we can start using S3, we need to add a dependency to AWS SDK into our project:

<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>s3</artifactId>
    <version>2.20.52</version>
</dependency>

To view the latest version, we can check Maven Central.

Next, we create the S3CrudService class with software.amazon.awssdk.services.s3.S3Client as a dependency:

class S3CrudService {
    private final S3Client s3Client;
    public S3CrudService(S3Client s3Client) {
        this.s3Client = s3Client;
    }
    // ...
}

Now that we’ve created the service, let’s implement the createBucket(), createObject(), getObject(), and deleteObject() operations by using the S3Client API provided by AWS SDK:

void createBucket(String bucketName) {
    // build bucketRequest
    s3Client.createBucket(bucketRequest);
}
void createObject(String bucketName, File inMemoryObject) {
    // build putObjectRequest
    s3Client.putObject(putObjectRequest, RequestBody.fromByteBuffer(inMemoryObject.getContent()));
}
Optional<byte[]> getObject(String bucketName, String objectKey) {
    try {
        // build getObjectRequest
        ResponseBytes<GetObjectResponse> responseResponseBytes = s3Client.getObjectAsBytes(getObjectRequest);
        return Optional.of(responseResponseBytes.asByteArray());
    } catch (S3Exception e) {
        return Optional.empty();
    }
}
boolean deleteObject(String bucketName, String objectKey) {
    try {
        // build deleteObjectRequest
        s3Client.deleteObject(deleteObjectRequest);
        return true;
    } catch (S3Exception e) {
        return false;
    }
}
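
For brevity, the request-building code is only hinted at in the comments above. With AWS SDK v2, the elided builders inside each method would typically look something like this (a sketch, not the article’s exact code; the object key is assumed to be the file name):

// inside createBucket()
CreateBucketRequest bucketRequest = CreateBucketRequest.builder()
  .bucket(bucketName)
  .build();
// inside createObject()
PutObjectRequest putObjectRequest = PutObjectRequest.builder()
  .bucket(bucketName)
  .key(inMemoryObject.getName())
  .build();
// inside getObject()
GetObjectRequest getObjectRequest = GetObjectRequest.builder()
  .bucket(bucketName)
  .key(objectKey)
  .build();
// inside deleteObject()
DeleteObjectRequest deleteObjectRequest = DeleteObjectRequest.builder()
  .bucket(bucketName)
  .key(objectKey)
  .build();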

Now that we have the S3 operations created, let’s learn how to implement integration tests using a mocked S3 service.

4. Use S3Mock Library for Integration Testing

For this tutorial, we have chosen to use the S3Mock library provided by Adobe under an open-source Apache V2 license. S3Mock is a lightweight server that implements the most commonly used operations of the Amazon S3 API. For the supported S3 operations, we can check the dedicated section in the S3Mock repository readme file.

The library developers recommend running the S3Mock service in isolation, preferably using the provided Docker container.

Following the recommendation, let’s use Docker and Testcontainers to run the S3Mock service for the integration tests.

4.1. Dependencies

Next, let’s add the necessary dependencies to run S3Mock together with Testcontainers:

<dependency>
    <groupId>com.adobe.testing</groupId>
    <artifactId>s3mock</artifactId>
    <version>3.3.0</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>com.adobe.testing</groupId>
    <artifactId>s3mock-testcontainers</artifactId>
    <version>3.3.0</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>junit-jupiter</artifactId>
    <version>1.19.4</version>
    <scope>test</scope>
</dependency>

We can check the s3mock, s3mock-testcontainers, junit-jupiter links on Maven Central to view the latest version.

4.2. Setup

As a prerequisite, we must have a running Docker environment to ensure that Testcontainers can start the required containers.

When we use the @Testcontainers and @Container annotations on the integration test class, the latest Docker image for S3MockContainer is pulled from the registry and started within the local Docker environment:

@Testcontainers
class S3CrudServiceIntegrationTest {
    @Container
    private S3MockContainer s3Mock = new S3MockContainer("latest");

    private S3Client s3Client;
}

Before running the integration test, let’s create an S3Client instance within the @BeforeEach lifecycle method:

@BeforeEach
void setUp() {
    var endpoint = s3Mock.getHttpsEndpoint();
    var serviceConfig = S3Configuration.builder()
      .pathStyleAccessEnabled(true)
      .build();
    var httpClient = UrlConnectionHttpClient.builder()
      .buildWithDefaults(AttributeMap.builder()
        .put(TRUST_ALL_CERTIFICATES, Boolean.TRUE)
        .build());
    s3Client = S3Client.builder()
      .endpointOverride(URI.create(endpoint))
      .serviceConfiguration(serviceConfig)
      .httpClient(httpClient)
      .build();
}

In the setUp() method, we initialized an instance of S3Client using the builder offered by the S3Client interface. Within this initialization, we specified configurations for the following parameters:

  • endpointOverride: This parameter is configured to define the address of the mocked S3 service.
  • pathStyleAccessEnabled: We set this parameter to true in the service configuration.
  • TRUST_ALL_CERTIFICATES: Additionally, we configured an httpClient instance with all certificates trusted, indicated by setting TRUST_ALL_CERTIFICATES to true.

4.3. Writing Integration Test for the S3CrudService

As we finish with the infrastructure setup, let’s write some integration tests for the S3CrudService operations.

First, let’s create a bucket and verify its successful creation:

var s3CrudService = new S3CrudService(s3Client);
s3CrudService.createBucket(TEST_BUCKET_NAME);
var createdBucketName = s3Client.listBuckets().buckets().get(0).name();
assertThat(TEST_BUCKET_NAME).isEqualTo(createdBucketName);

After successfully creating the bucket, let’s upload a new object in S3.

To do so, first, we generate an array of bytes using FileGenerator, and then the createObject() method saves it as an object in the already created bucket:

var fileToSave = FileGenerator.generateFiles(1, 100).get(0);
s3CrudService.createObject(TEST_BUCKET_NAME, fileToSave);
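
FileGenerator and the in-memory File type it returns are small test helpers that aren’t shown in this article. A possible minimal version (the names and shapes are assumptions based on how they’re used here) could be:

// custom in-memory file type, not java.io.File
class File {
    private final String name;
    private final ByteBuffer content;

    File(String name, ByteBuffer content) {
        this.name = name;
        this.content = content;
    }

    String getName() {
        return name;
    }

    ByteBuffer getContent() {
        return content;
    }
}

class FileGenerator {
    // generates 'count' files, each filled with 'sizeInBytes' random bytes
    static List<File> generateFiles(int count, int sizeInBytes) {
        List<File> files = new ArrayList<>();
        Random random = new Random();
        for (int i = 0; i < count; i++) {
            byte[] data = new byte[sizeInBytes];
            random.nextBytes(data);
            files.add(new File("file-" + i + ".txt", ByteBuffer.wrap(data)));
        }
        return files;
    }
}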

Next, let’s call the getObject() method with the file name of the already saved file to confirm if the object was indeed saved in S3:

var savedFileContent = s3CrudService.getObject(TEST_BUCKET_NAME, fileToSave.getName());
assertThat(Arrays.equals(fileToSave.getContent().array(), savedFileContent.get())).isTrue();

Finally, let’s test that the deleteObject() also works as expected. To begin with, we call the deleteObject() method with the bucket name and the targeted filename. Subsequently, we call again the getObject() and check that the result is empty:

s3CrudService.deleteObject(TEST_BUCKET_NAME, fileToSave.getName());
var deletedFileContent = s3CrudService.getObject(TEST_BUCKET_NAME, fileToSave.getName());
assertThat(deletedFileContent).isEmpty();

5. Conclusion

In this tutorial, we learned how to write integration tests that depend on the AWS S3 service by using the S3Mock library to mock a real S3 service.

To demonstrate this, first, we implemented a basic CRUD service that creates, reads, and deletes objects from S3. Then, we implemented the integration tests using the S3Mock library.

As always, the full implementation of this article can be found over on GitHub.

      


Serialization with FlatBuffers in Java


1. Introduction

In this tutorial, we’ll explore FlatBuffers in Java and perform serialization and deserialization using it.

2. Serialization in Java

Serialization is the process of converting Java objects into a stream of bytes that can be transmitted over a network or persist in a file. Java provides an inbuilt object serialization mechanism through the java.io.Serializable interface and the java.io.ObjectOutputStream and java.io.ObjectInputStream classes.

However, owing to its several downsides, including a complicated approach to dealing with complex object graphs and dependent classes, several libraries are available for serialization and deserialization in Java.

Some of the widely used Java serialization libraries include Jackson and Gson. A newer standard for object serialization format is Protocol Buffers. Protocol Buffers is a language-agnostic binary serialization format developed by Google. They are used in high-performance environments and distributed systems where efficiency and interoperability are critical.

3. FlatBuffers

FlatBuffers is an efficient cross-platform serialization library developed by Google. It supports several languages, such as C, C++, Java, Kotlin, and Go. FlatBuffers was originally created for game development; therefore, performance and low memory overhead are central to its design.

FlatBuffers and Protocol Buffers are created by Google and are very similar binary-based data formats. Both of these formats support efficient high-speed serialization and deserialization. The primary difference is that FlatBuffers doesn’t need additional data unpacking to an intermediate data structure before access.

3.1. Introduction to the FlatBuffers Library

A complete FlatBuffers implementation consists of the following components:

  • A FlatBuffer schema file
  • A flatc compiler
  • Serialization and deserialization code

The FlatBuffer schema file serves as a template for the structure of our data model. Its syntax follows a similar pattern to C-style type declarations and other interface description language (IDL) formats. We first define the schema and then compile the schema file with the flatc compiler.

3.2. Tables and Schemas

A FlatBuffer is a binary buffer that contains nested objects (such as structs, tables, and vectors) organized using offsets.

This arrangement allows data to be traversed in place, similar to traditional pointer-based data structures. However, unlike many in-memory data structures, FlatBuffers strictly adhere to rules of alignment and endianness (always little), ensuring cross-platform compatibility. Moreover, for table objects, FlatBuffers offers both forward and backward compatibility.

Tables in FlatBuffers are the most basic data structures used to represent complex structures with named fields. Tables are similar to classes or structs in some languages and support fields of several types, such as int, short, string, struct, vectors, and even other tables.

3.3. The flatc Compiler

The flatc compiler is a crucial tool provided by FlatBuffers that generates code in various programming languages, such as C++ and Java, to help serialize and deserialize data according to the schema. The compiler takes the schema definition as input and generates code in the desired programming language.

In upcoming sections, we’ll compile our schema files to generate code. However, we need to build and set up our compiler first to be able to use it.

We start by cloning the flatbuffers library into our system:

$ git clone https://github.com/google/flatbuffers.git

Once the flatbuffers directory is created, we use cmake to build the library into an executable. CMake (Cross-platform Make) is an open-source, platform-independent build system designed to automate the process of building software projects:

$ cd flatbuffers
$ mkdir build
$ cd build
$ cmake ..
$ cmake --build .

This completes the flatc compiler build process. We can verify the success of the installation by printing the version:

$ ./flatc --version
flatc version 23.5.26

The compiled files are now stored under the /flatbuffers/build path, and the flatc executable is also available in the same directory. We’ll use this executable to compile all of our schema files, so we can create a shortcut or alias to this path.
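
For example, on Linux or macOS, a simple shell alias can point to the freshly built executable (the path below is just an example and depends on where we cloned the repository):

$ alias flatc="$HOME/flatbuffers/build/flatc"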

4. Working With FlatBuffers

In this section, we’ll explore the FlatBuffers library by implementing our use case. Let’s consider that we are developing a game across different terrains such as the sea, mountain, and plain land. Each terrain has its own set of unique properties.

The terrain information is necessary to load the game level and needs to be transmitted across the network to the players. Efficient serialization and deserialization are a must.

4.1. Schema Definition

The first thing we should start with is defining our terrain schema type. A terrain is a table in our flatbuffer. It can have many attributes, such as a name (Sea, Land, Mountain, Desert, etc.), color, and position (in the form of 3d vector coordinates). The terrain can have an effect applied as well. For example, there might be a sandstorm in a desert or a flood in the land. The effect can be a separate table within the original schema.

With this understanding, let’s write our schema as follows:

namespace MyGame.terrains;
enum Color:byte { Brown = 0, Red = 1, Green = 2, Blue = 3, White = 4 }
struct Vec3 {
  x:float;
  y:float;
  z:float;
}
table Terrain {
  pos:Vec3; // Struct.
  name:string;
  color:Color = Blue;
  navigation: string;
  effects: [Effect];
}
table Effect {
  name:string;
  damage:short;
}
root_type Terrain;

We have an enum for identifying the color of the terrain, a struct for the coordinates, and two tables, the Terrain and Effect, with Terrain being the root type.

4.2. Schema Compilation

The flatc compiler is ready, and we use it to compile our schema file terrain.fbs:

$ cd <path to schema>
$ flatc --java terrain.fbs

We should note that the flatc path might vary from system to system depending on the installation location described in the previous section.

4.3. Creating Objects and Performing Serialization

The schema has already been compiled and is ready to go. We can start creating some terrains for our game using the schema. As part of this example walkthrough, we’ll create a desert terrain and a few effects for our terrain.

To use FlatBuffers in Java, we need to add a Maven dependency:

<dependency>
    <groupId>com.google.flatbuffers</groupId>
    <artifactId>flatbuffers-java</artifactId>
    <version>23.5.26</version>
</dependency>

We can now import the flatbuffers library along with the generated files from our schema:

import MyGame.terrains.*;
import com.google.flatbuffers.FlatBufferBuilder;

The files generated as part of the compilation process go under the path defined in the schema’s namespace section (MyGame.terrains in our case).

An Effect class is available for us to use as a result of the compilation, which provides a createEffect() method. We’ll use that to create our desired effect. We’ll start by creating a builder object with an initial buffer size of 1024 bytes:

FlatBufferBuilder builder = new FlatBufferBuilder(INITIAL_BUFFER);
int sandstormOffset = builder.createString("Sandstorm");
short damage = 3;
int sandStorm = MyGame.terrains.Effect.createEffect(builder, sandstormOffset, damage);

We can add more effects in the same way.

Next, we create our desert terrain. Let’s assign a color to the terrain, and give it a name and its navigation location:

byte color = MyGame.terrains.Color.Brown;
int terrainName = builder.createString("Desert");
int navigationName = builder.createString("south");

We add more terrain metadata and the effects using the auto-generated static methods of the Terrain class:

int effectOffset = MyGame.terrains.Terrain.createEffectsVector(builder, effects);
startTerrain(builder);
addName(builder, terrainName);
addColor(builder, color);
addNavigation(builder, navigationName);
addEffects(builder, effectOffset);
int desert = endTerrain(builder);
builder.finish(desert);

Let’s now serialize our terrain and its effects in our flatbuffer. We can store the buffer or transmit it over the network to clients:

ByteBuffer buf = builder.dataBuffer();

4.4. Deserialization Using FlatBuffers

Let’s deserialize the flatbuffer object and access the terrain. We’ll start with a serialized array of bytes created from the buffer, and we’ll convert it into a ByteBuffer buffer:

ByteBuffer buf = java.nio.ByteBuffer.wrap(buffer);

This allows us to get an accessor to the root Terrain object from the buffer and access all its attributes:

Terrain terrain = Terrain.getRootAsTerrain(buf);
Assert.assertEquals(terrain.name(), "Desert");
Assert.assertEquals(terrain.navigation(), "south");

The compiler-generated code shows that each of the entity’s attributes comes with an associated accessor. We can access the associated effects as well:

Effect effect1 = terrain.effectsVector().get(0);
Effect effect2 = terrain.effectsVector().get(1);
Assert.assertEquals(effect1.name(), "Sandstorm");
Assert.assertEquals(effect2.name(), "Drought");

4.5. Mutating FlatBuffers

FlatBuffers are mostly read-only, owing to their static template structure. However, we might face a scenario where we need to change something in a flatbuffer before sending it to another piece of code. Let’s say we want to update the damage score of a sandstorm effect from the existing value of 3 to 10.

In such cases, in-place mutation of flatbuffers comes in handy.

Mutation of a flatbuffer is only possible if we build the schema with the --gen-mutable argument:

$ ./../flatc --gen-mutable --java terrain.fbs

This provides us with a mutate() method on all the accessors, which we can use to modify the value of a flatbuffer in place:

Assert.assertEquals(effect1.damage(), 3);
effect1.mutateDamage((short) 10);
Assert.assertEquals(effect1.damage(), 10);

5. JSON Conversion Using FlatBuffers

The flatc compiler provides techniques to convert binary files to JSON and vice-versa. Let’s say we have a JSON file for our terrain. We can use the compiler to create a binary file out of the JSON file using the following code:

flatc --binary <template file> <json file>
$ flatc --binary terrain.fbs sample_terrain.json

Conversely, we can convert a binary file to a full-fledged JSON file as well:

flatc --json --raw-binary <template file> -- <binary file>
$ flatc --json --raw-binary terrain.fbs -- sample_terrain.bin

6. Benefits of Using FlatBuffers

The usage of this cross-platform serialization library comes with a plethora of benefits:

  • FlatBuffers organizes hierarchical data in a flat binary buffer, which we can directly access without the overhead of parsing or unpacking
  • Incremental changes to our data structure are automatically and cleanly incorporated, making it easy to maintain backward compatibility with our evolving models
  • They are also efficient in terms of memory utilization, as we only need the memory space occupied by the buffer to access our data
  • They leave a tiny code footprint. The generated code is minimal, and we only need a single small header as a dependency, making integration a breeze
  • FlatBuffers are strongly typed; hence, we can catch errors in compile time

7. Conclusion

In this article, we explored the FlatBuffers library and its capabilities to serialize and deserialize complex data. We took a hands-on approach to implementing code using the library and looked at the benefits and use cases of flatbuffers.

As usual, the code is available over on GitHub.

       

Get the Initials of a Name in Java


1. Introduction

When handling names, it can be beneficial to shorten them into abbreviated strings using each word’s first character. In this tutorial, let’s look at different ways to implement this functionality in Java.

2. Assumptions

When creating abbreviations, we only consider words that begin with a letter of the alphabet; any other words are excluded from the process. Additionally, if there are no valid words, the result may be an empty string. Furthermore, we convert the resulting string to uppercase.

3. Using Loops

We can split the text by spaces and use a for loop to iterate over each word. Subsequently, we can take the first character of each valid word and build the initials:

String getInitialUsingLoop(String name) {
    if (name == null || name.isEmpty()) {
        return "";
    }
    String[] parts = name.split("\\s+");
    StringBuilder initials = new StringBuilder();
    for (String part : parts) {
        if (part.matches("[a-zA-Z].*")) {
            initials.append(part.charAt(0));
        }
    }
    return initials.toString().toUpperCase();
}

In the above code, we check whether a word starts with an alphabetic character using a regex and then extract the first character to form the abbreviation.

We can write a unit test to check different cases using JUnit:

@ParameterizedTest
@CsvSource({"John F Kennedy,JFK", ",''", "'',''", "Not Correct   88text,NC", "michael jackson,MJ", "123,''", "123 234A,''", "1test 2test, ''"})
void getInitialFromName_usingLoop(String input, String expected) {
    String initial = getInitialUsingLoop(input);
    assertEquals(expected, initial);
}

In the above test case, we utilized JUnit’s parameterized test feature to specify multiple input and expected output combinations. As a result, we can ensure comprehensive coverage and validation of the functionality under different conditions.

4. Using StringTokenizer

We can use the StringTokenizer to split the text into words. Let’s look at the implementation:

String getInitialUsingStringTokenizer(String name) {
    if (name == null || name.isEmpty()) {
        return "";
    }
    StringTokenizer tokenizer = new StringTokenizer(name);
    StringBuilder initials = new StringBuilder();
    while (tokenizer.hasMoreTokens()) {
        String part = tokenizer.nextToken();
        if (part.matches("[a-zA-Z].*")) {
            initials.append(part.charAt(0));
        }
    }
    return initials.toString().toUpperCase();
}

This code is similar to the previous implementation, except we use StringTokenizer instead of the split() method.

We can utilize the same parameterized test as before for this method.

5. Using Regular Expressions

Another way to implement this functionality is by using regular expressions. We can match the first letter of each valid word by using a word-boundary pattern:

String getInitialUsingRegex(String name) {
    if (name == null || name.isEmpty()) {
        return "";
    }
    Pattern pattern = Pattern.compile("\\b[a-zA-Z]");
    Matcher matcher = pattern.matcher(name);
    StringBuilder initials = new StringBuilder();
    while (matcher.find()) {
        initials.append(matcher.group());
    }
    return initials.toString().toUpperCase();
}

Similarly, we can create a test case to validate the implementation. 

6. Using the Stream API

We can also use the functional programming-based Stream API, which has been available since Java 8. Now, let’s delve into the implementation:

String getInitialUsingStreamsAPI(String name) {
    if (name == null || name.isEmpty()) {
        return "";
    }
    return Arrays.stream(name.split("\\s+"))
      .filter(part -> part.matches("[a-zA-Z].*"))
      .map(part -> part.substring(0, 1))
      .collect(Collectors.joining())
      .toUpperCase();
}

In this scenario, we combined the filter(), map(), and collect() methods to accomplish the goal. We can use a similar parameterized test to verify this implementation as well.

7. Conclusion

This article discussed various methods for extracting the initials from a name in Java. These methods can also generate acronyms for any text, not just names. Furthermore, we explored the traditional loop-based approaches, regular expression, and more functional programming approaches to achieve the same outcome. Depending on the specific scenario, developers can choose the approach that best suits their needs.

As always, the sample code used in this tutorial is available over on GitHub.

 

       

Simplified Array Operations on JsonNode Without Typecasting in Jackson


1. Overview

Working with JSON (JavaScript Object Notation) in Java often involves using libraries like Jackson, which provides various classes to represent this type of data, such as JsonNode, ObjectNode, and ArrayNode.

In this tutorial, we’ll explore different approaches to simplifying array operations on a JsonNode without explicitly casting it to ArrayNode in Java. This is necessary when manipulating the data directly in our code.

2. Understanding JsonNode and ArrayNode

JsonNode is an abstract class in the Jackson library that represents a node in the JSON tree. It’s the base class for all nodes and is capable of storing different types of data, including objects, arrays, strings, numbers, booleans, and null values. JsonNode instances are immutable, meaning we can’t set properties on them.

ArrayNode is a specific type of JsonNode that represents a JSON array. It extends the functionality of JsonNode to include methods for working with arrays, such as adding, removing, and accessing elements by index.

3. Using JsonNode‘s get() Method

Using JsonNode‘s own methods, we can work with a JSON array directly, without explicitly casting it to ArrayNode. This approach is useful when we need to perform specific actions or validations on each element within a JSON array:

@Test
void givenJsonNode_whenUsingJsonNodeMethods_thenConvertToArrayNode() throws JsonProcessingException {
    int count = 0;
    String json = "{\"objects\": [\"One\", \"Two\", \"Three\"]}";
    JsonNode arrayNode = new ObjectMapper().readTree(json).get("objects");
    assertNotNull(arrayNode, "The 'objects' array should not be null");
    assertTrue(arrayNode.isArray(), "The 'objects' should be an array");
    if (arrayNode.isArray()) {
        for (JsonNode objNode : arrayNode) {
            assertNotNull(objNode, "Array element should not be null");
            count++;
        }
    }
    assertEquals(3, count, "The 'objects' array should have 3 elements");
}

This approach also ensures that we’re working with an array structure before attempting to iterate over its elements, helping prevent potential runtime errors related to unexpected JSON structures.

4. Using createArrayNode()

In Jackson, we can create a JSON object using the createObjectNode() method. Similarly, we can use the createArrayNode() method of the ObjectMapper class to create a JSON Array. The method createArrayNode() will return a reference of ArrayNode class:

@Test
void givenJsonNode_whenUsingCreateArrayNode_thenConvertToArrayNode() throws Exception {
    ObjectMapper objectMapper = new ObjectMapper();
    JsonNode originalJsonNode = objectMapper.readTree("{\"objects\": [\"One\", \"Two\", \"Three\"]}");
    ArrayNode arrayNode = objectMapper.createArrayNode();
    originalJsonNode.get("objects").elements().forEachRemaining(arrayNode::add);
    assertEquals("[\"One\",\"Two\",\"Three\"]", arrayNode.toString());
}

This approach is useful when we need to transform a specific part of a JSON structure into an ArrayNode without explicitly casting. Creating an ArrayNode explicitly communicates that we’re working with an Array, making the code more readable and expressive.

5. Using the StreamSupport Class

StreamSupport is a utility class that provides static methods for creating Streams and Spliterators over various data structures, including collections, arrays, and specialized iterators. In the test below, the JSON string is deserialized into a JsonNode object using ObjectMapper. Then, we create a Stream from the Spliterator of the objects array and collect the elements into a List<JsonNode>:

@Test
void givenJsonNode_whenUsingStreamSupport_thenConvertToArrayNode() throws Exception {
    String json = "{\"objects\": [\"One\", \"Two\", \"Three\"]}";
    JsonNode obj = new ObjectMapper().readTree(json);
    List<JsonNode> objects = StreamSupport
      .stream(obj.get("objects").spliterator(), false)
      .collect(Collectors.toList());
    assertEquals(3, objects.size(), "The 'objects' list should contain 3 elements");
    JsonNode firstObject = objects.get(0);
    assertEquals("One", firstObject.asText(), "The first element should be One");
}

This approach is useful when we want to leverage Java Streams for a concise and expressive way to extract and process elements from a JSON array.

6. Using Iterator

An Iterator is one of many ways we can traverse a collection. In this approach, we utilized an iterator to traverse the elements of the objects array in the given JSON structure:

@Test
void givenJsonNode_whenUsingIterator_thenConvertToArrayNode() throws Exception {
    String json = "{\"objects\": [\"One\", \"Two\", \"Three\"]}";
    JsonNode datasets = new ObjectMapper().readTree(json);
    Iterator<JsonNode> iterator = datasets.withArray("objects").elements();
    int count = 0;
    while (iterator.hasNext()) {
        JsonNode dataset = iterator.next();
        System.out.print(dataset.toString() + " ");
        count++;
    }
    assertEquals(3, count, "The 'objects' list should contain 3 elements");
}

This approach reduces the overall complexity by directly iterating through the elements. It provides a straightforward mechanism for customizing the processing of JSON elements during iteration.

7. Conclusion

In this tutorial, we explored various approaches to simplifying array operations on JsonNode without explicitly typecasting it to ArrayNode in Jackson.

As always, the source code is available over on GitHub.

       

Blowfish Encryption Algorithm


1. Overview

Originally designed as an alternative to the DES encryption algorithm, the Blowfish encryption algorithm is one of the most popular encryption algorithms available today. Blowfish is a symmetric-key block cipher designed by Bruce Schneier in 1993. The algorithm has a block size of 64 bits and a variable key length of up to 448 bits, which is longer than the keys used by the DES and 3DES algorithms.

In this tutorial, we’ll learn how to implement encryption and decryption using Blowfish ciphers with the Java Cryptography Architecture (JCA) available in JDK.

2. Generating Secret Key

Since Blowfish is a symmetric-key block cipher, it uses the same key for both encryption and decryption. Accordingly, we’ll create a secret key to encrypt texts in the next steps. This secret key should be preserved securely and shouldn’t be shared in public. Let’s define the secret key:

// Generate a secret key
String secretKey = "MyKey123";
byte[] keyData = secretKey.getBytes();
// Build the SecretKeySpec using Blowfish algorithm
SecretKeySpec secretKeySpec = new SecretKeySpec(keyData, "Blowfish");
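
As a side note, instead of deriving the key bytes from a hard-coded passphrase, we could also let the JCA generate a random Blowfish key for us (a brief sketch):

// Generate a random 128-bit Blowfish key
KeyGenerator keyGenerator = KeyGenerator.getInstance("Blowfish");
keyGenerator.init(128);
SecretKey randomKey = keyGenerator.generateKey();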

Next, we can proceed to build the cipher with encryption mode:

// Build the cipher using Blowfish algorithm
Cipher cipher = Cipher.getInstance("Blowfish");

Then, we’ll initialize the cipher with encryption mode (Cipher.ENCRYPT_MODE) and use our secret key:

// Initialize cipher in encryption mode with secret key
cipher.init(Cipher.ENCRYPT_MODE, secretKeySpec);

3. Encrypting Strings

Let’s see how to use the instantiated Blowfish cipher with the secret key to encrypt Strings:

// the text to encrypt
String secretMessage = "Secret message to encrypt";
// encrypt message
byte[] encryptedBytes = cipher.doFinal(secretMessage.getBytes(StandardCharsets.UTF_8));

As we can see, the cipher gives us an encrypted message in the form of a byte array. However, if we’d like to store it in a database or send the encrypted message via REST API, it would be more suitable and safer to encode it with the Base64 alphabet:

// encode with Base64 encoder
String encryptedtext = Base64.getEncoder().encodeToString(encryptedBytes);

Now, we get the final encrypted text, which is readable and easy to handle.

4. Decrypting Strings

Decrypting Strings with the Blowfish encryption algorithm is equally simple. Let’s see it in action.

First, we need to initialize the cipher with decryption mode (Cipher.DECRYPT_MODE) along with the SecretKeySpec:

// Create the Blowfish Cipher
Cipher cipher = Cipher.getInstance("Blowfish");
// Initialize with decrypt mode & SecretKeySpec
cipher.init(Cipher.DECRYPT_MODE, secretKeySpec);

Next, we can use this cipher to decrypt the message:

// decode using Base64 and decrypt the message
byte[] decrypted = cipher.doFinal(Base64.getDecoder().decode(encryptedtext));
// convert the decrypted bytes to String
String decryptedString = new String(decrypted, StandardCharsets.UTF_8);

Finally, we can verify the results to ensure the decryption process performs correctly by comparing it to the original value:

Assertions.assertEquals(secretMessage, decryptedString);

In addition, we can notice that we use the StandardCharsets.UTF_8 charset during both encryption and decryption. This way, any malformed or unmappable character sequences in the input text are consistently replaced with the UTF-8 charset’s replacement bytes.

5. Working With Files

Sometimes, we may need to encrypt or decrypt a whole file instead of individual Strings. The Blowfish encryption algorithm allows us to encrypt and decrypt whole files as well. Let’s see an example by creating a temp file with some sample content:

String originalContent = "some secret text file";
Path tempFile = Files.createTempFile("temp", "txt");
writeFile(tempFile, originalContent);
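
The writeFile() and readFile() methods used here are small helpers that aren’t listed in the article; they could be implemented with java.nio, for example:

void writeFile(Path path, String content) throws IOException {
    Files.write(path, content.getBytes(StandardCharsets.UTF_8));
}

String readFile(Path path) throws IOException {
    return new String(Files.readAllBytes(path), StandardCharsets.UTF_8);
}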

Next, we need to transform the content into a byte array:

byte[] fileBytes = Files.readAllBytes(tempFile);

Now, we can proceed with the encryption of the whole file using encryption cipher:

Cipher encryptCipher = Cipher.getInstance("Blowfish");
encryptCipher.init(Cipher.ENCRYPT_MODE, secretKeySpec);
byte[] encryptedFileBytes = encryptCipher.doFinal(fileBytes);

Finally, we can overwrite the encrypted content in the temp file:

try (FileOutputStream stream = new FileOutputStream(tempFile.toFile())) {
    stream.write(encryptedFileBytes);
}

Decrypting the whole file is a similar process. The only difference is to change the cipher mode to do decryption:

encryptedFileBytes = Files.readAllBytes(tempFile);
Cipher decryptCipher = Cipher.getInstance("Blowfish");
decryptCipher.init(Cipher.DECRYPT_MODE, secretKeySpec);
byte[] decryptedFileBytes = decryptCipher.doFinal(encryptedFileBytes);
try (FileOutputStream stream = new FileOutputStream(tempFile.toFile())) {
    stream.write(decryptedFileBytes);
}

Finally, we can verify whether the file content matches the original value:

String fileContent = readFile(tempFile);
Assertions.assertEquals(originalContent, fileContent);

6. Weakness and Successors

Blowfish was one of the first secure encryption algorithms not subject to patents and freely available for public use. Although the Blowfish algorithm performs better than DES and 3DES algorithms in terms of encryption speed, it has some limitations due to its inherent design.

The Blowfish algorithm uses a 64-bit block size as opposed to AES’s 128-bit block size. Hence, this makes it vulnerable to birthday attacks, specifically in the HTTPS context. Attackers have already demonstrated that they can leverage 64-bit block ciphers to perform plaintext recovery (by decrypting ciphertext). Moreover, because of its small block size, open-source projects such as GnuPG recommend that Blowfish not be used to encrypt files larger than 4 GB.

Changing secret keys also slows down the process: each new key requires pre-processing equivalent to encrypting about 4 KB of text, which is slow compared to other block ciphers.

Bruce Schneier has recommended migrating to his Blowfish successor, the Twofish encryption algorithm, which has a block size of 128 bits. It also has a free license and is available for public use.

In 2005, Blowfish II was released, which was developed by people other than Bruce Schneier. Blowfish II has the same design but has twice as many S tables and uses 64-bit integers instead of 32-bit integers. Also, it works on 128-bit blocks like the AES algorithm.

Advanced Encryption Standard (AES) is a popular and widely used symmetric-key encryption algorithm. AES supports varying key lengths such as 128, 192, and 256 bits to encrypt and decrypt data. However, its block size is fixed at 128 bits.

7. Conclusion

In this article, we learned about the generation of secret keys and how to encrypt and decrypt Strings using the Blowfish encryption algorithm. Also, we saw encrypting and decrypting files are equally simple. Finally, we also discussed the weaknesses and various successors of Blowfish.

As always, the full source code of the article is available over on GitHub.

       

Spring Security AuthorizationManager


1. Introduction

Spring Security is an extension of the Spring Framework that makes it easy to build common security practices into our applications. This includes things like user authentication and authorization, API protection, and much more.

In this tutorial, we’ll look at one of the many pieces inside Spring Security: the AuthorizationManager. We’ll see how it fits into the larger Spring Security ecosystem, as well as various use cases for how it can help secure our applications.

2. What Is Spring Security AuthorizationManager

The Spring AuthorizationManager is an interface that allows us to check if an authenticated entity has access to a secured resource. AuthorizationManager instances are used by Spring Security to make final access control decisions for request-based, method-based, and message-based components.

As background, Spring Security has a few key concepts that are helpful to understand before looking at the specific role of the AuthorizationManager:

  • Entity: anything that can make a request into the system. This could be a human user or a remote web service, for example.
  • Authentication: the process of verifying that an entity is who they say they are. This can be via username/password, token, or any number of other methods.
  • Authorization: the process of verifying an entity has access to a resource
  • Resource: any information the system makes available for access — for example, a URL or document
  • Authority: often referred to as a Role, this is a logical name representing the permissions an entity has. A single entity may have zero or more authorities granted to it.

With these concepts in mind, we can dive deeper into the AuthorizationManager interface.

2.1. How to Use AuthorizationManager

AuthorizationManager is a simple interface that contains only two methods:

AuthorizationDecision check(Supplier<Authentication> authentication, T object);
void verify(Supplier<Authentication> authentication, T object);

Both methods look similar because they take the same arguments:

  • authentication: a Supplier that provides an Authentication object representing the entity making the request.
  • object: the secure object being requested (will vary depending on the nature of the request)

However, each method serves a different purpose. The first method returns an AuthorizationDecision, which is a simple wrapper around a boolean value that indicates whether or not the entity can access the secure object.

The second method doesn’t return anything. Instead, it simply performs the authorization check and throws an AccessDeniedException if the entity is not authorized to access the secure object.

2.2. Older Versions of Spring Security

It’s worth noting that the AuthorizationManager interface was introduced in Spring Security 5.0. Prior to this interface, the primary method for authorization was via the AccessDecisionManager interface. While the AccessDecisionManager interface still exists in recent versions of Spring Security, it is deprecated and should be avoided in favor of AuthorizationManager.

3. Implementations of AuthorizationManager

Spring provides several implementations of the AuthorizationManager interface. In the following sections, we’ll take a look at several of them.

3.1. AuthenticatedAuthorizationManager

The first implementation we’ll look at is the AuthenticatedAuthorizationManager. Put simply, this class returns a positive authorization decision based solely on whether or not the entity is authenticated. Additionally, it supports three levels of authentication:

  • anonymous: the entity is not authenticated
  • remember me: the entity is authenticated and is using remembered credentials
  • fully authenticated: the entity is authenticated and not using remembered credentials

Note that this is the default AuthorizationManager that Spring Boot creates for web-based applications. By default, all endpoints will allow access regardless of role or authority, as long as the request comes from an authenticated entity.
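
The class also exposes static factory methods for these levels. For example, assuming a recent Spring Security 6.x version, we could require full authentication for one group of endpoints and plain authentication for the rest:

http.authorizeHttpRequests((authorize) -> authorize
    .requestMatchers("/account/**").access(AuthenticatedAuthorizationManager.fullyAuthenticated())
    .anyRequest().access(AuthenticatedAuthorizationManager.authenticated()));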

3.2. AuthoritiesAuthorizationManager

This implementation works similarly to the previous one, except it can make decisions based on multiple authorities. This is more suitable for complex applications where resources may need to be accessible by more than one authority.

Consider a blogging system that uses different roles to manage the publishing process. The resource for creating and saving an article might be accessible to both the Author and Editor roles. However, the resource for publishing is available to only the Editor role.

3.3. AuthorityAuthorizationManager

This implementation is fairly straightforward. It makes all of its authorization decisions based on whether the entity has a specific role.

This implementation works well for simple applications where each resource requires a single role or authority. For example, it would work well for protecting a specific set of URLs to only entities with an Administrator role.

Note that this implementation delegates its decision-making to an instance of AuthoritiesAuthorizationManager. It’s also the implementation that Spring uses whenever we call hasRole() or hasAuthority() while customizing a SecurityFilterChain.

3.4. RequestMatcherDelegatingAuthorizationManager

This implementation doesn’t actually make authorization decisions. Instead, it delegates to another implementation based on URL patterns, usually one of the above manager classes.

For example, if we have some URLs that are public and available to anyone, we could delegate those URLs to a no-op implementation that always returns a positive authorization. We could then delegate secured requests to an AuthoritiesAuthorizationManager that handles checking for roles.

In fact, this is exactly what Spring does when we add a new request matcher to a SecurityFilterChain. Each time we configure a new request matcher and specify one or more required roles or authorities, Spring just creates a new instance of this class along with an appropriate delegate.

3.5. ObservationAuthorizationManager

The final implementation we’ll look at is the ObservationAuthorizationManager. This class is really just a wrapper around another implementation, with the added ability to log metrics related to authorization decisions. Spring will automatically use this implementation whenever a valid ObservationRegistry is available in the application.

3.6. Other Implementations

It’s worth mentioning that several other implementations exist in Spring Security. Most of them are related to the various Spring Security annotations used to secure methods:

  • SecuredAuthorizationManager -> @Secured
  • PreAuthorizeAuthorizationManager -> @PreAuthorize
  • PostAuthorizeAuthorizationManager -> @PostAuthorize

Essentially, any Spring Security annotation we can use to secure resources has a corresponding AuthorizationManager implementation.
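
For example, with method security enabled via @EnableMethodSecurity, a @PreAuthorize annotation like the one below is ultimately evaluated by the PreAuthorizeAuthorizationManager (the PostService class and its method are purely illustrative):

@Service
public class PostService {

    @PreAuthorize("hasRole('EDITOR')")
    public void publishPost(Long postId) {
        // only entities with the EDITOR role reach this point
    }
}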

3.7. Using Multiple AuthorizationManagers

In practice, we rarely ever use just a single instance of AuthorizationManager. Let’s take a look at an example SecurityFilterChain bean:

@Bean
SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception {
    http.authorizeHttpRequests((authorize) -> authorize
        .requestMatchers("/posts/publish/**").hasRole("EDITOR")
        .requestMatchers("/posts/create/**").hasAnyRole("EDITOR", "AUTHOR")
        .anyRequest().permitAll());
    return http.build();
}

This example uses five different AuthorizationManager instances:

  • The call to hasRole() creates an instance of AuthorityAuthorizationManager, which in turn delegates to a new instance of AuthoritiesAuthorizationManager.
  • The call to hasAnyRole() also creates an instance of AuthorityAuthorizationManager, which in turn delegates to a new instance of AuthoritiesAuthorizationManager.
  • The call to permitAll() uses a static no-op AuthorizationManager provided by Spring Security that always returns a positive authorization decision.

Additional request matchers with their own roles, along with any method-based annotations, would all create additional AuthorizationManager instances.

4. Using a Custom AuthorizationManager

The provided implementations above are sufficient for many applications. However, as with many interfaces in Spring, it is entirely possible to create a custom AuthorizationManager to suit whatever needs we have.

Let’s define a custom AuthorizationManager:

AuthorizationManager<RequestAuthorizationContext> customAuthManager() {
    return new AuthorizationManager<RequestAuthorizationContext>() {
        @Override
        public AuthorizationDecision check(Supplier<Authentication> authentication, RequestAuthorizationContext object) {
            // make the authorization decision here; this placeholder simply denies access
            return new AuthorizationDecision(false);
        }
    };
}

We would then pass this instance while customizing the SecurityFilterChain:

SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception {
    http.authorizeHttpRequests((authorize) ->
        authorize.requestMatchers("/custom/**").access(customAuthManager()));
    return http.build();
}

In this case, we’re making authorization decisions using a RequestAuthorizationContext. This class provides access to the underlying HTTP request, meaning we can make decisions based on things like cookies, headers, and more. We can also delegate to a third-party service, database, or cache, among other constructs, to make any type of authorization decision we want.
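
For example, here’s a minimal sketch of a manager that grants access only when the request carries a specific header; the header name and expected value are purely illustrative assumptions:

AuthorizationManager<RequestAuthorizationContext> headerAuthManager() {
    return (authentication, context) -> {
        // "X-Api-Key" and its expected value are hypothetical, used for illustration only
        String apiKey = context.getRequest().getHeader("X-Api-Key");
        return new AuthorizationDecision("expected-key".equals(apiKey));
    };
}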

5. Conclusion

In this article, we’ve taken a close look at how Spring Security handles authorization. We saw the generic AuthorizationManager interface and how its two methods make authorization decisions.

We also saw several implementations of this interface and how they are used in various places in the Spring Security framework.

Finally, we created a simple custom implementation that can be used to make any type of authorization decisions that we need in our applications.

As always, the code examples in this article are available over on GitHub.

       

OpenAPI Generator Custom Templates


1. Introduction

OpenAPI Generator is a tool that allows us to quickly generate client and server code from REST API definitions, supporting multiple languages and frameworks. Although most of the time the generated code is ready to be used with no modifications, there may be scenarios in which we need to customize it.

In this tutorial, we’ll learn how to use custom templates to address these scenarios.

2. OpenAPI Generator Project Setup

Before exploring customization, let’s run through a quick overview of a typical usage scenario for this tool: generating server-side code from a given API definition. We assume we already have a base Spring Boot MVC application built with Maven, so we’ll use the appropriate plugin for that:

<plugin>
    <groupId>org.openapitools</groupId>
    <artifactId>openapi-generator-maven-plugin</artifactId>
    <version>7.3.0</version>
    <executions>
        <execution>
            <goals>
                <goal>generate</goal>
            </goals>
            <configuration>
                <inputSpec>${project.basedir}/src/main/resources/api/quotes.yaml</inputSpec>
                <generatorName>spring</generatorName>
                <supportingFilesToGenerate>ApiUtil.java</supportingFilesToGenerate>
                <templateResourcePath>${project.basedir}/src/templates/JavaSpring</templateResourcePath>
                <configOptions>
                    <dateLibrary>java8</dateLibrary>
                    <openApiNullable>false</openApiNullable>
                    <delegatePattern>true</delegatePattern>
                    <apiPackage>com.baeldung.tutorials.openapi.quotes.api</apiPackage>
                    <modelPackage>com.baeldung.tutorials.openapi.quotes.api.model</modelPackage>
                    <documentationProvider>source</documentationProvider>
                </configOptions>
            </configuration>
        </execution>
    </executions>
</plugin>

With this configuration, the generated code will go into the target/generated-sources/openapi folder. Moreover, our project also needs to add a dependency to the OpenAPI V3 annotation library:

<dependency>
    <groupId>io.swagger.core.v3</groupId>
    <artifactId>swagger-annotations</artifactId>
    <version>2.2.3</version>
</dependency>

The latest versions of the plugin and this dependency are available on Maven Central.

The API for this tutorial consists of a single GET operation that returns a quote for a given financial instrument symbol:

openapi: 3.0.0
info:
  title: Quotes API
  version: 1.0.0
servers:
  - description: Test server
    url: http://localhost:8080
paths:
  /quotes/{symbol}:
    get:
      tags:
        - quotes
      summary: Get current quote for a security
      operationId: getQuote
      parameters:
        - name: symbol
          in: path
          required: true
          description: Security's symbol
          schema:
            type: string
            pattern: '[A-Z0-9]+'
      responses:
        '200':
          description: OK
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/QuoteResponse'
components:
  schemas:
    QuoteResponse:
      description: Quote response
      type: object
      properties:
        symbol:
          type: string
          description: security's symbol
        price:
          type: number
          description: Quote value
        timestamp:
          type: string
          format: date-time

Even without any written code, the resulting project can already serve API calls thanks to the default implementation of the QuotesApi – although it will always return a 501 error since the method is not implemented.

3. API Implementation

The next step is to code an implementation of the QuotesApiDelegate interface. Since we’re using a delegate pattern, we don’t need to worry about MVC or OpenAPI-specific annotations, as those will be kept apart in the generated controller.

This approach ensures that, if we later decide to add a library like SpringDoc or similar to the project, the annotations upon which those libraries depend will always be in sync with the API definition. Another benefit is that contract modifications also change the delegate interface, breaking the build until we update our implementation accordingly. This is good, as it minimizes the runtime errors that can happen with code-first approaches.

In our case, the implementation consists of a single method that uses a BrokerService to retrieve quotes:

@Component
public class QuotesApiImpl implements QuotesApiDelegate {
    // ... fields and constructor omitted
    @Override
    public ResponseEntity<QuoteResponse> getQuote(String symbol) {
        var price = broker.getSecurityPrice(symbol);
        var quote = new QuoteResponse();
        quote.setSymbol(symbol);
        quote.setPrice(price);
        quote.setTimestamp(OffsetDateTime.now(clock));
        return ResponseEntity.ok(quote);
    }
}

We also inject a Clock to provide the timestamp field required by the returned QuoteResponse. This is a small implementation detail that makes it easier to unit-test code that uses the current time. For instance, we can simulate the behavior of the code under test at a specific point in time using Clock.fixed(). The unit test for the implementation class uses this approach.
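
A hedged sketch of what such a test could look like (the constructor arguments and the brokerService variable are assumptions, since the fields and constructor are omitted above):

Clock fixedClock = Clock.fixed(Instant.parse("2024-01-01T10:00:00Z"), ZoneOffset.UTC);
QuotesApiImpl api = new QuotesApiImpl(brokerService, fixedClock);

ResponseEntity<QuoteResponse> response = api.getQuote("BAEL");

// with a fixed clock, the timestamp is fully deterministic
assertEquals(OffsetDateTime.now(fixedClock), response.getBody().getTimestamp());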

Finally, we’ll implement a BrokerService that simply returns a random quote, which is enough for our purposes.
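
A minimal sketch of such a service might look like this, assuming the generated price field maps to BigDecimal (the default mapping for OpenAPI number types):

@Service
public class BrokerService {

    private final Random random = new Random();

    // returns a pseudo-random price; a real implementation would query a market data provider
    public BigDecimal getSecurityPrice(String symbol) {
        return BigDecimal.valueOf(random.nextDouble() * 100.0).setScale(2, RoundingMode.HALF_UP);
    }
}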

We can verify that this code works as expected by running the integration test:

@Test
void whenGetQuote_thenSuccess() {
    var response = restTemplate.getForEntity("http://localhost:" + port + "/quotes/BAEL", QuoteResponse.class);
    assertThat(response.getStatusCode())
      .isEqualTo(HttpStatus.OK);
}

4. OpenAPI Generator Customization Scenario

So far, we’ve implemented a service with no customization. Let’s consider the following scenario: As an API definition author, I’d like to specify that a given operation may return a cached result. The OpenAPI specification allows this kind of non-standard behavior through a mechanism called vendor extensions, which can be applied to many (but not all) elements.

For our example, we’ll define an x-spring-cacheable extension to be applied on any operation we want to have this behavior. This is the modified version of our initial API with this extension applied:

# ... other definitions omitted
paths:
  /quotes/{symbol}:
    get:
      tags:
        - quotes
      summary: Get current quote for a security
      operationId: getQuote
      x-spring-cacheable: true
      parameters:
# ... more definitions omitted

Now, if we run the generator again with mvn generate-sources, nothing will happen. This is expected: although the definition is still valid, the generator doesn’t know what to do with this extension. More precisely, the templates used by the generator don’t make any use of the extension.

Upon closer examination of the generated code, we see that we can achieve our goal by adding a @Cacheable annotation on the delegate interface methods that match API operations having our extension. Let’s explore how to do this next.

4.1. Customization Options

The OpenAPI Generator tool supports two customization approaches:

  • Adding a new custom generator, created from scratch or by extending an existing one
  • Replacing templates used by an existing generator with a custom one

The first option is more “heavy-weight” but allows full control of the artifacts generated. It’s the only option when our goal is to support code generation for a new framework or language, but we’ll not cover it here.

For now, all we need is to change a single template, which is the second option. The first step, then, is to find this template. The official documentation recommends using the CLI version of the tool to extract all templates for a given generator.

However, when using the Maven plugin, it’s usually more convenient to look it up directly on the GitHub repository. Notice that, to ensure compatibility, we’ve picked the source tree for the tag that corresponds to the plugin version in use.

In the resources folder, each sub-folder has templates used for a specific generator target. For Spring-based projects, the folder name is JavaSpring. There, we’ll find the Mustache templates used to render the server code. Most templates are named sensibly, so it’s not hard to figure out which one we need: apiDelegate.mustache.

4.2. Template Customization

Once we’ve located the templates we want to customize, the next step is to place them in our project so the Maven plugin can use them. We’ll put the soon-to-be-customized template under the folder src/templates/JavaSpring so that it doesn’t get mixed with other sources or resources.

Next, we need to add a configuration option to the plugin to inform it about our templates directory:

<configuration>
    <inputSpec>${project.basedir}/src/main/resources/api/quotes.yaml</inputSpec>
    <generatorName>spring</generatorName>
    <supportingFilesToGenerate>ApiUtil.java</supportingFilesToGenerate>
    <templateResourcePath>${project.basedir}/src/templates/JavaSpring</templateResourcePath>
    ... other unchanged properties omitted
</configuration>

To verify that the generator is correctly configured, let’s add a comment on top of the template and re-generate the code:

/*
* Generated code: do not modify !
* Custom template with support for x-spring-cacheable extension
*/
package {{package}};
... more template code omitted

Next, running mvn clean generate-sources will yield a new version of the QuotesApiDelegate with the comment:

/*
* Generated code: do not modify!
* Custom template with support for x-spring-cacheable extension
*/
package com.baeldung.tutorials.openapi.quotes.api;
... more code omitted

This shows that the generator picked our custom template instead of the native one.

4.3. Exploring the Base Template

Now, let’s take a look at our template to find the proper place to add our customization. We can see that there’s a section defined by the {{#operation}} {{/operation}} tags that outputs the delegate’s methods in the rendered class:

    {{#operation}}
        // ... many mustache tags omitted
        {{#jdk8-default-interface}}default // ... more template logic omitted 
    {{/operation}}

Inside this section, the template uses several properties of the current context – an operation – to generate the corresponding method’s declaration.

In particular, we can find information about vendor extensions under {{vendorExtension}}. This is a map where the keys are extension names, and the value is a direct representation of whatever data we’ve put in the definition. This means we can use extensions where the value is an arbitrary object or just a simple string.

To get a JSON representation of the full data structure that the generator passes to the template engine, add the following globalProperties element to the plugin’s configuration:

<configuration>
    <inputSpec>${project.basedir}/src/main/resources/api/quotes.yaml</inputSpec>
    <generatorName>spring</generatorName>
    <supportingFilesToGenerate>ApiUtil.java</supportingFilesToGenerate>
    <templateResourcePath>${project.basedir}/src/templates/JavaSpring</templateResourcePath>
    <globalProperties>
        <debugOpenAPI>true</debugOpenAPI>
        <debugOperations>true</debugOperations>
    </globalProperties>
...more configuration options omitted

Now, when we run mvn generate-sources again, the output will have this JSON representation right after the "############ Operation info ############" message:

[INFO] ############ Operation info ############
[ {
  "appVersion" : "1.0.0",
... many, many lines of JSON omitted

4.4. Adding @Cacheable to Operations

We’re now ready to add the required logic to support caching operation results. One aspect that might be useful is to allow users to specify a cache name, but not require them to do so.

To support this requirement, we’ll support two variants of our vendor extension. If the value is simply true, a default cache name will be used:

paths:
  /some/path:
    get:
      operationId: getSomething
      x-spring-cacheable: true

Otherwise, it will expect an object with a name property that we’ll use as the cache name:

paths:
  /some/path:
    get:
      operationId: getSomething
      x-spring-cacheable:
        name: mycache

This is how the modified template looks with the required logic to support both variants:

{{#vendorExtensions.x-spring-cacheable}}
@org.springframework.cache.annotation.Cacheable({{#name}}"{{.}}"{{/name}}{{^name}}"default"{{/name}})
{{/vendorExtensions.x-spring-cacheable}}
{{#jdk8-default-interface}}default // ... template logic omitted 

We’ve added the logic to add the annotation right before the method’s signature definition. Notice the use of {{#vendorExtensions.x-spring-cacheable}} to access the extension value. According to Mustache rules, the inner code will be executed only if the value is “truthy”, meaning something that evaluates to true in a Boolean context. Despite this somewhat loose definition, it works fine here and is quite readable.

As for the annotation itself, we’ve opted to use “default” for the default cache name. This allows us to further customize the cache, although the details on how to do this are outside the scope of this tutorial.

5. Using the Modified Template

Finally, let’s modify our API definition to use our extension:

... more definitions omitted
paths:
  /quotes/{symbol}:
    get:
      tags:
        - quotes
      summary: Get current quote for a security
      operationId: getQuote
      x-spring-cacheable:
        name: get-quotes

Let’s run mvn generate-sources once again to create a new version of QuotesApiDelegate:

... other code omitted
@org.springframework.cache.annotation.Cacheable("get-quotes")
default ResponseEntity<QuoteResponse> getQuote(String symbol) {
... default method's body omitted

We see that the delegate interface now has the @Cacheable annotation. Moreover, we see that the cache name corresponds to the name attribute from the API definition.

Now, for this annotation to have any effect, we also need to add the @EnableCaching annotation to a @Configuration class or, as in our case, to the main class:

@SpringBootApplication
@EnableCaching
public class QuotesApplication {
    public static void main(String[] args) {
        SpringApplication.run(QuotesApplication.class, args);
    }
}
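
By default, Spring Boot falls back to a simple in-memory cache manager when no other provider is configured. If we prefer to declare the cache explicitly, a minimal sketch (an optional addition, not part of the generated code) could be:

@Configuration
public class CacheConfig {

    @Bean
    CacheManager cacheManager() {
        // pre-registers the cache name used by the generated @Cacheable annotation
        return new ConcurrentMapCacheManager("get-quotes");
    }
}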

To verify that the cache is working as expected, let’s write an integration test that calls the API multiple times:

@Test
void whenGetQuoteMultipleTimes_thenResponseCached() {
    var quotes = IntStream.range(1, 10).boxed()
      .map((i) -> restTemplate.getForEntity("http://localhost:" + port + "/quotes/BAEL", QuoteResponse.class))
      .map(HttpEntity::getBody)
      .collect(Collectors.groupingBy((q -> q.hashCode()), Collectors.counting()));
    assertThat(quotes.size()).isEqualTo(1);
}

We expect all responses to return identical values, so we’ll collect them and group them by their hash codes. If all responses produce the same hash code, then the resulting map will have a single entry. Note that this strategy works because the generated model class implements the hashCode() method using all fields.

6. Conclusion

In this article, we’ve shown how to configure the OpenAPI Generator tool to use a custom template that adds support for a simple vendor extension.

As usual, all code is available over on GitHub.

       

Finding the Majority Element of an Array in Java


1. Introduction

In this tutorial, we’ll explore different approaches to finding the majority element within an array. For each approach, we’ll provide their respective code implementations and analysis of time and space complexities.

2. Problem Statement

Let’s understand the problem of finding the majority element in an array. We’re given an array of integers and our objective is to determine if a majority element exists within it.

A majority element is one that occurs more than n/2 times in the array, where n represents the array’s length. In other words, it accounts for more than half of the array’s elements, dominating the array in terms of occurrence frequency.

Before delving into each approach, we’ll utilize the provided sample data as input:

int[] nums = {2, 3, 2, 4, 2, 5, 2};

3. Using a for Loop

One straightforward approach to finding the majority element involves iterating through the array with a for loop. This approach involves iterating through the array using a for loop and maintaining a count of occurrences for each element. We’ll then check if any element satisfies the majority condition, meaning it appears in more than half the slots of the array.

3.1. Implementation

In this implementation, we iterate through the array using a for loop. For each element in the array, we initialize a count variable to keep track of its occurrences. Subsequently, we iterate through the array again to count the occurrences of the current element.

As we iterate through the array, if we encounter a majority element with a count greater than n/2, we can immediately return the element:

int majorityThreshold = nums.length / 2;
Integer majorityElement = null;
for (int i = 0; i < nums.length; i++) {
    int count = 0;
    for (int j = 0; j < nums.length; j++) {
        if (nums[i] == nums[j]) {
            count++;
        }
    }
    if (count > majorityThreshold) {
        majorityElement = nums[i];
        break;
    }
}
assertEquals(2, majorityElement);

3.2. Complexity Analysis

The time complexity of the for-loop approach is O(n^2). This quadratic time complexity arises due to the nested loops utilized in the implementation, where each element in the array is compared against every other element. On the other hand, the space complexity is O(1).

While the approach is simple to implement and has minimal space overhead, it’s not the most efficient for large arrays due to its quadratic time complexity.

4. Using a Sorting Approach

In this approach, we leverage a sorting algorithm to efficiently identify the majority element in an array. The strategy involves sorting the array in ascending order, which enables us to identify consecutive occurrences of elements.

Given that a majority element appears more than half the size of the array, after sorting, it will either occupy the middle index (if the array length is odd) or be next adjacent to the middle elements (if the array length is even). Consequently, by examining the middle elements of the sorted array, we can ascertain whether one of them qualifies as the majority element.

4.1. Implementation

First, we use Arrays.sort() to sort the array in ascending order. This step is crucial as it enables us to identify consecutive occurrences of elements more easily. Next, we iterate through the sorted array and keep track of the middle element’s occurrence count. Inside the loop, we also check if the count is greater than half the size of the array.

If it is, it means the current element has appeared more than half the time, and it’s identified as the majority element. The code then returns this element. Let’s demonstrate this concept using a code snippet:

Arrays.sort(nums);
int majorityThreshold = nums.length / 2;
int count = 0;
Integer majorityElement = null;
for (int i = 0; i < nums.length; i++) {
    if (nums[i] == nums[majorityThreshold]) {
        count++;
    }
    if (count > majorityThreshold) {
        majorityElement = nums[majorityThreshold];
        break;
    }
}
assertEquals(2, majorityElement);

4.2. Complexity Analysis

The time complexity of this approach is typically O(n log n) due to sorting, and the space complexity is O(1) as it uses constant extra space. This approach is slightly more efficient compared to the for-loop approach, but it might not be the most optimal solution for very large arrays due to the sorting operation’s time.

5. Using HashMap

This approach uses a HashMap to store the frequency of each element in the array.

5.1. Implementation

In this approach, we iterate through the array, incrementing the count of each element we encounter in the HashMap. Finally, we iterate through the HashMap and check if any element’s count is greater than half the size of the array. If a majority element is found, we return it; otherwise, we return -1 to indicate that no majority element exists in the array.

Here’s an example implementation:

Map<Integer, Integer> frequencyMap = new HashMap<>();
Integer majorityElement = null;
for (int num : nums) {
    frequencyMap.put(num, frequencyMap.getOrDefault(num, 0) + 1);
}
int majorityThreshold = nums.length / 2;
for (Map.Entry<Integer, Integer> entry : frequencyMap.entrySet()) {
    if (entry.getValue() > majorityThreshold) {
        majorityElement = entry.getKey();
        break;
    }
}
assertEquals(2, majorityElement);

5.2. Complexity Analysis

Overall, using a HashMap is a more efficient approach, especially for larger datasets, due to its linear time complexity. It has a time complexity of O(n) due to iterating through the array once and iterating through the HashMap once.

However, this approach requires additional space for HashMap, which can be a concern for memory-constrained environments. In the worst-case scenario, the space complexity will be O(n) as the HashMap might store all unique elements in the array.

6. Using the Boyer-Moore Voting Algorithm

This algorithm is popularly used to find the majority element in a sequence of elements using linear time complexity and a fixed amount of memory.

6.1. Implementation

In the initialization step, we create two variables: a candidate element and a count. The candidate element is set to the first element in the array, and the count is set to 1.

Next, in the iteration step, we loop through the remaining elements in the array. For each subsequent element, we increment the count if the current element is the same as the candidate element. This signifies that this element also potentially contributes to being the majority. Otherwise, we decrease the count. This counteracts the previous votes for the candidate.

If the count reaches 0, the candidate element is reset to the current element, and the count is set back to 1. This is because if previous elements cancel each other out, the current element might be a new contender for the majority.

After iterating through the entire array, we verify by iterating through the array again and counting the occurrences of the candidate element. If the candidate appears more than n/2 times, we return it as the majority element. Otherwise, we return -1.

Let’s proceed with the implementation:

int majorityThreshold = nums.length / 2;
int candidate = nums[0];
int count = 1;
int majorityElement = -1;
for (int i = 1; i < nums.length; i++) {
    if (count == 0) {
        candidate = nums[i];
        count = 1;
    } else if (candidate == nums[i]) {
        count++;
    } else {
        count--;
    }
}
count = 0;
for (int num : nums) {
    if (num == candidate) {
        count++;
    }
}
majorityElement = count > majorityThreshold ? candidate : -1;
assertEquals(2, majorityElement);

Here is the breakdown of the iteration step:

Initial stage: [Candidate (2), Count (1)]
Iteration 1: [Candidate (2), Count (0), Element (3)] // "3" != candidate, count--
Iteration 2: [Candidate (2), Count (1), Element (2)] // count == 0, candidate reset to "2", count = 1
Iteration 3: [Candidate (2), Count (0), Element (4)] // "4" != candidate, count--
Iteration 4: [Candidate (2), Count (1), Element (2)] // count == 0, candidate reset to "2", count = 1
Iteration 5: [Candidate (2), Count (0), Element (5)] // "5" != candidate, count--
Iteration 6: [Candidate (2), Count (1), Element (2)] // count == 0, candidate reset to "2", count = 1

6.2. Complexity Analysis

This algorithm has a time complexity of O(n) and a space complexity of O(1), making it an efficient solution for finding the majority element in an array.

7. Summary

The following summary captures the time and space complexities of each approach, along with the main trade-offs and benefits of each:

  • For loop – Time: O(n^2), Space: O(1). Straightforward to implement and requires minimal additional space, but inefficient for large arrays due to the nested loops.
  • Sorting – Time: O(n log n), Space: O(1) or O(n). Simple implementation with no additional space overhead if in-place sorting is used, but introduces extra time complexity due to sorting.
  • HashMap – Time: O(n), Space: O(n). Linear time complexity and efficiently handles large arrays, but requires additional space for the HashMap storage.
  • Boyer-Moore Voting – Time: O(n), Space: O(1). Optimal time and space complexity; efficient for large arrays.

8. Conclusion

In this article, we explored various approaches to finding the majority element in an array.

The for-loop approach provides a simple implementation but is inefficient for large arrays due to its nested loops. The HashMap approach provides linear time complexity and efficiently handles large arrays, but it requires additional space for HashMap storage.

Finally, the Boyer-Moore Voting Algorithm offers optimal time and space complexity and is efficient for large arrays.

As always, the source code for the examples is available over on GitHub.

       

Finding the Peak Elements of a List


1. Introduction

Peak elements within an array are important for numerous algorithms, offering valuable insights into the dataset’s characteristics. In this tutorial, let’s explore the concept of peak elements, explaining their significance and exploring efficient methods to identify them, both in single and multiple peak scenarios.

2. What is a Peak Element?

A peak element in an array is defined as an element that is strictly greater than its adjacent elements. Edge elements are considered to be in a peak position if they are greater than their only neighboring element.

In scenarios where elements are equal, a strict peak does not exist. Instead, a peak is the first instance where an element exceeds its neighbors.

2.1. Examples

To better understand the idea of peak elements, take a look at the following examples:

Example 1:

List: [1, 2, 20, 3, 1, 0]
Peak Element: 20

Here, 20 is a peak since it is greater than its neighboring elements.

Example 2:

List: [5, 13, 15, 25, 40, 75, 100]
Peak Element: 100

100 is a peak because it’s greater than 75 and has no element to its right.

Example 3:

List: [9, 30, 13, 2, 23, 104, 67, 12]
Peak Element: 30 or 104, as both are valid peaks

Both 30 and 104 qualify as peaks.

3. Finding Single Peak Elements

When an array contains only one peak element, a straightforward approach is to utilize linear search. This algorithm scans through the array elements, comparing each with its neighbors until finding a peak. The time complexity of this method is O(n), where n is the size of the array.

public class SinglePeakFinder {
    public static OptionalInt findSinglePeak(int[] arr) {
        int n = arr.length;
        if (n < 2) {
            return n == 0 ? OptionalInt.empty() : OptionalInt.of(arr[0]);
        }
        if (arr[0] >= arr[1]) {
            return OptionalInt.of(arr[0]);
        }
        for (int i = 1; i < n - 1; i++) {
            if (arr[i] >= arr[i - 1] && arr[i] >= arr[i + 1]) {
                return OptionalInt.of(arr[i]);
            }
        }
        if (arr[n - 1] >= arr[n - 2]) {
            return OptionalInt.of(arr[n - 1]);
        }
        return OptionalInt.empty();
    }
}

The algorithm iterates through the array from index 1 to n-2, checking if the current element is greater than its neighbors. If a peak is found, an OptionalInt containing the peak is returned. Additionally, the algorithm handles edge cases where the peak is at the extremes of the array.

public class SinglePeakFinderUnitTest {
    @Test
    void findSinglePeak_givenArrayOfIntegers_whenValidInput_thenReturnsCorrectPeak() {
        int[] arr = {0, 10, 2, 4, 5, 1};
        OptionalInt peak = SinglePeakFinder.findSinglePeak(arr);
        assertTrue(peak.isPresent());
        assertEquals(10, peak.getAsInt());
    }
    @Test
    void findSinglePeak_givenEmptyArray_thenReturnsEmptyOptional() {
        int[] arr = {};
        OptionalInt peak = SinglePeakFinder.findSinglePeak(arr);
        assertTrue(peak.isEmpty());
    }
    @Test
    void findSinglePeak_givenEqualElementArray_thenReturnsCorrectPeak() {
        int[] arr = {-2, -2, -2, -2, -2};
        OptionalInt peak = SinglePeakFinder.findSinglePeak(arr);
        assertTrue(peak.isPresent());
        assertEquals(-2, peak.getAsInt());
    }
}

In the case of bitonic arrays—characterized by a monotonically increasing sequence followed by a monotonically decreasing sequence—the peak can be found more efficiently. By applying a modified binary search technique, we can locate the peak in O(log n) time, significantly reducing the complexity.

It’s important to note that determining whether an array is bitonic requires examination, which, in the worst case, can approach linear time. Therefore, the efficiency gain with the binary search approach is most impactful when the array’s bitonic nature is known.
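
As an illustration, here’s a sketch of that modified binary search. It assumes the input is strictly bitonic; for arbitrary arrays its result is undefined:

public class BitonicPeakFinder {

    public static OptionalInt findPeak(int[] arr) {
        if (arr == null || arr.length == 0) {
            return OptionalInt.empty();
        }
        int low = 0;
        int high = arr.length - 1;
        while (low < high) {
            int mid = low + (high - low) / 2;
            if (arr[mid] < arr[mid + 1]) {
                low = mid + 1;  // still on the ascending slope, the peak lies to the right
            } else {
                high = mid;     // on the descending slope (or at the peak), keep the left half
            }
        }
        return OptionalInt.of(arr[low]);
    }
}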

4. Finding Multiple Peak Elements

Identifying multiple peak elements in an array typically requires examining each element in relation to its neighbors, leading to a linear search algorithm with a time complexity of O(n). This approach ensures no potential peak is overlooked, making it suitable for general arrays.

In specific scenarios, when the array structure allows for segmenting into predictable patterns, modified binary search techniques can be applied to find peaks more efficiently. Let’s use a modified binary search algorithm to achieve a time complexity of O(log n).

Algorithm Explanation:

  • Initialize Pointers: Start with two pointers, low and high, representing the range of the array.
  • Binary Search: Calculate the middle index mid of the current range.
  • Compare Mid with Neighbors: Check if the element at index mid is greater than its neighbors.
    • If true, mid is a peak.
    • If false, move towards the side with the greater neighbor, ensuring we move towards a potential peak.
  • Repeat: Continue the process until the range is reduced to a single element.
public class MultiplePeakFinder {
    public static List<Integer> findPeaks(int[] arr) {
        List<Integer> peaks = new ArrayList<>();
        if (arr == null || arr.length == 0) {
            return peaks;
        }
        findPeakElements(arr, 0, arr.length - 1, peaks, arr.length);
        return peaks;
    }
    private static void findPeakElements(int[] arr, int low, int high, List<Integer> peaks, int length) {
        if (low > high) {
            return;
        }
        int mid = low + (high - low) / 2;
        boolean isPeak = (mid == 0 || arr[mid] > arr[mid - 1]) && (mid == length - 1 || arr[mid] > arr[mid + 1]);
        boolean isFirstInSequence = mid > 0 && arr[mid] == arr[mid - 1] && arr[mid] > arr[mid + 1];
        if (isPeak || isFirstInSequence) {
            
            if (!peaks.contains(arr[mid])) {
                peaks.add(arr[mid]);
            }
        }
        
        findPeakElements(arr, low, mid - 1, peaks, length);
        findPeakElements(arr, mid + 1, high, peaks, length);
    }
}

The MultiplePeakFinder class employs a modified binary search algorithm to identify multiple peak elements in an array efficiently. The findPeaks method initializes two pointers, low and high, representing the range of the array.

It calculates the middle index (mid) and checks whether the element at mid is greater than its neighbors. If so, it records arr[mid] as a peak and then continues the search recursively on both halves of the array.

public class MultiplePeakFinderUnitTest {
    @Test
    void findPeaks_givenArrayOfIntegers_whenValidInput_thenReturnsCorrectPeaks() {
        MultiplePeakFinder finder = new MultiplePeakFinder();
        int[] array = {1, 13, 7, 0, 4, 1, 4, 45, 50};
        List<Integer> peaks = finder.findPeaks(array);
        assertEquals(3, peaks.size());
        assertTrue(peaks.contains(4));
        assertTrue(peaks.contains(13));
        assertTrue(peaks.contains(50));
    }
}

The efficiency of binary search for finding peaks depends on the array’s structure, allowing for peak detection without checking every element. However, without knowing the array’s structure or if it lacks a suitable pattern for binary search, linear search is the most dependable method, guaranteeing no peak is overlooked.

5. Handling Edge Cases

Understanding and addressing edge cases is crucial for ensuring the robustness and reliability of the peak element algorithm.

5.1. Array With No Peaks

In scenarios where the array contains no peak elements, it is essential to indicate this absence. Let’s return an empty list when no peaks are found:

public class PeakElementFinder {
    public List<Integer> findPeakElements(int[] arr) {
        int n = arr.length;
        List<Integer> peaks = new ArrayList<>();
        if (n == 0) {
            return peaks;
        }
        for (int i = 0; i < n; i++) {
            if (isPeak(arr, i, n)) {
                peaks.add(arr[i]);
            }
        }
        return peaks;
    }
    private boolean isPeak(int[] arr, int index, int n) {
        return arr[index] >= arr[index - 1] && arr[index] >= arr[index + 1];
    }
}

The findPeakElements method iterates through the array, utilizing the isPeak helper function to identify peaks. If no peaks are found, it returns an empty list.

public class PeakElementFinderUnitTest {
    @Test
    void findPeakElement_givenArrayOfIntegers_whenValidInput_thenReturnsCorrectPeak() {
        PeakElementFinder finder = new PeakElementFinder();
        int[] array = {1, 2, 3, 2, 1};
        List<Integer> peaks = finder.findPeakElements(array);
        assertEquals(1, peaks.size());
        assertTrue(peaks.contains(3));
    }
    @Test
    void findPeakElement_givenArrayOfIntegers_whenNoPeaks_thenReturnsEmptyList() {
        PeakElementFinder finder = new PeakElementFinder();
        int[] array = {};
        List<Integer> peaks = finder.findPeakElements(array);
        assertEquals(0, peaks.size());
    }
}

5.2. Array With Peaks at Extremes

When peaks exist at the first or last element, special consideration is necessary to avoid undefined neighbor comparisons. Let’s add a conditional check in the isPeak method to handle these cases:

private boolean isPeak(int[] arr, int index, int n) {
    if (index == 0) {
        return n > 1 ? arr[index] >= arr[index + 1] : true;
    } else if (index == n - 1) {
        return arr[index] >= arr[index - 1];
    }
    return arr[index] >= arr[index - 1] && arr[index] >= arr[index + 1];
}

This modification ensures that peaks at the extremes are correctly identified without attempting comparisons with undefined neighbors.

public class PeakElementFinderUnitTest {
    @Test
    void findPeakElement_givenArrayOfIntegers_whenPeaksAtExtremes_thenReturnsCorrectPeak() {
        PeakElementFinder finder = new PeakElementFinder();
        int[] array = {5, 2, 1, 3, 4};
        List<Integer> peaks = finder.findPeakElements(array);
        assertEquals(2, peaks.size());
        assertTrue(peaks.contains(5));
        assertTrue(peaks.contains(4));
    }
}

5.3. Dealing With Plateaus (Consecutive Equal Elements)

In cases where the array contains consecutive equal elements, it is crucial to treat the first occurrence as the peak. The isPeak function handles this by skipping consecutive equal elements:

private boolean isPeak(int[] arr, int index, int n) {
    if (index == 0) {
        return n > 1 ? arr[index] >= arr[index + 1] : true;
    } else if (index == n - 1) {
        return arr[index] >= arr[index - 1];
    } else if (arr[index] == arr[index + 1] && arr[index] > arr[index - 1]) {
        int i = index;
        while (i < n - 1 && arr[i] == arr[i + 1]) {
            i++;
        }
        return i == n - 1 || arr[i] > arr[i + 1];
    } else {
        return arr[index] >= arr[index - 1] && arr[index] >= arr[index + 1];
    }
}

The isPeak helper skips consecutive equal elements, ensuring that the first occurrence is the one identified as the peak.

public class PeakElementFinderUnitTest {
    @Test
    void findPeakElement_givenArrayOfIntegers_whenPlateaus_thenReturnsCorrectPeak() {
        PeakElementFinder finder = new PeakElementFinder();
        int[] array = {1, 2, 2, 2, 3, 4, 5};
        List<Integer> peaks = finder.findPeakElements(array);
        assertEquals(1, peaks.size());
        assertTrue(peaks.contains(5));
    }
}

6. Conclusion

Understanding the techniques to find peak elements enables developers to make informed decisions when designing efficient and resilient algorithms. There are various approaches to discovering peak elements, with methods offering different time complexities, such as O(log n) or O(n).

The selection among these methods depends on specific requirements and application constraints. Choosing the right algorithm aligns with the efficiency and performance goals aimed to achieve in the application.

You can find all the code samples over on GitHub.

       

Obtaining the Last Path Segment of a URI in Java


1. Introduction

Working with Uniform Resource Identifiers (URIs) is a common task, especially in web development and file management.

Besides, one of the most common needs is to get the last path segment of a URL (the segment after the last ‘/’ character).

In this tutorial, we’ll investigate different ways to obtain the last segment of a URL.

2. Using URI Class

The java.net.URI class enables an object-oriented approach for URI parsing and manipulation. To make it easier, let’s take an example:

@Test
public void givenURL_whenUsingURIClass_thenGetLastPathSegment() throws URISyntaxException {
    URI uri = new URI("https://www.example.com/path/to/resource");
    String path = uri.getPath();
    String[] segments = path.split("/");
    String lastSegment = segments[segments.length - 1];
    assertEquals("resource", lastSegment);
}

The given method initializes a URI with a sample URL. Subsequently, the URI’s path is extracted using the getPath() method. The path is then split into segments based on the forward-slash (“/”) delimiter. The last path segment is then determined by accessing the last element of the segment array.

Finally, the test asserts that the last path segment matches the expected value, affirming that the functionality correctly extracts the intended resource from the URL.

3. Using Path Class

Introduced in Java 7, the java.nio.file.Path class provides a platform-independent representation of file and directory paths, which offers an effective way to extract the last segment of a URI. Here’s an example:

@Test
public void givenURL_whenUsingPathClass_thenGetLastPathSegment() {
    String exampleURI = "https://www.example.com/path/to/resource";
    try {
        URI uri = new URI(exampleURI);
        String pathString = uri.getPath();
        Path path = Paths.get(pathString);
        Path lastSegment = path.getName(path.getNameCount() - 1);
        assertEquals("resource", lastSegment.toString());
    } catch (Exception e) {
        fail("Exception occurred: " + e.getMessage());
    }
}

As in the previous section, we first initialize a URI and use the getPath() method. Subsequently, we create a Path object named path from the obtained pathString. The last segment is determined using the getName() method with an index calculation. The last path segment is then converted to a string for comparison.

4. Using FilenameUtils Class

Apache Commons IO library has a FilenameUtils class that is available as a utility class for common file and path tasks. Let’s take an example:

@Test
public void givenURL_whenUsingFilenameUtilsClass_thenGetLastPathSegment() throws URISyntaxException {
    String exampleURI = "https://www.example.com/path/to/resource";
    URI uri = new URI(exampleURI);
    String path = uri.getPath();
    String lastSegment = FilenameUtils.getName(path);
    assertEquals("resource", lastSegment);
}

After extracting the path using the getPath() method, we utilize the FilenameUtils class to obtain the last path segment using the getName() method, which takes the path as a parameter.

5. Using Regular Expressions

In extracting the last path segment from a URL, regex provides an elegant solution for flexible and precise pattern definitions. Here’s an example:

@Test
public void givenURL_whenUsingRegularExpression_thenGetLastPathSegment() throws URISyntaxException {
    URI uri = new URI("https://www.example.com/path/to/resource");
    String path = uri.getPath();
    Pattern pattern = Pattern.compile(".*/(.+)");
    Matcher matcher = pattern.matcher(path);
    if (!matcher.find()) {
        fail("Regex pattern didn't match.");
    }
    String lastSegment = matcher.group(1);
    assertEquals("resource", lastSegment);
}

Here, we define a regular expression pattern “.*/(.+)” to capture the last segment of the URL path precisely. Leveraging the Pattern and Matcher classes, we compile and apply the regex pattern to the path string using the compile() and matcher() methods.

Moreover, a conditional check further validates the success of the regex pattern application using the find() method. Upon successful matching, the last path segment is extracted using the group(1) method from the Matcher object.

6. Conclusion

In conclusion, this tutorial explored multiple Java methods, including the URI class, Path class, FilenameUtils, and regular expressions, providing diverse approaches to extract the last path segment from a URL effectively.

As usual, the accompanying source code can be found over on GitHub.

       