
Calculate Number of Weekdays Between Two Dates in Java


1. Overview

In this tutorial, we’ll look at two different methods in Java for calculating the number of weekdays between two dates. We’ll look at a readable version using Streams and a less readable but more efficient option that doesn’t loop at all.

2. Full Search Using Streams

First, let’s see how we can do this with Streams. The plan is to loop over every day between our two dates and count the weekdays:

long getWorkingDaysWithStream(LocalDate start, LocalDate end){
    return start.datesUntil(end)
      .map(LocalDate::getDayOfWeek)
      .filter(day -> !Arrays.asList(DayOfWeek.SATURDAY, DayOfWeek.SUNDAY).contains(day))
      .count();
}

To start we’ve utilized LocalDate‘s datesUntil() method. This method returns a Stream of all the dates from the start (inclusive) to the end date (exclusive).

Next, we've used map() and LocalDate's getDayOfWeek() to transform each date into its day of the week, so a date such as 10-01-2023 becomes the corresponding DayOfWeek value.

Following that, we filter out all the weekend days by checking them against the DayOfWeek enum. Finally, we can count up the days left, as we know these will all be weekdays.

This method isn't the quickest, as we have to look at every single day between the two dates. However, it's easily understandable and offers the opportunity to easily put in extra checks or processing if needed.
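
To see the method in action, here's a minimal test sketch; the dates and the expected count of five weekdays are illustrative values chosen for this example (Monday 2023-01-02 up to, but excluding, Monday 2023-01-09 covers exactly one working week):

@Test
void givenOneFullWeek_whenCountingWithStream_thenFiveWeekdays() {
    LocalDate start = LocalDate.of(2023, 1, 2);
    LocalDate end = LocalDate.of(2023, 1, 9);
    assertEquals(5, getWorkingDaysWithStream(start, end));
}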

3. Efficient Search Without Looping

The other option we have is to not loop over all the days, but instead, apply the rules we know about the days of the week. There are several steps we need here, and a few edge cases to take care of.

3.1. Setting up Initial Dates

To start, we’ll define our method signature which will be a lot like our previous one:

long getWorkingDaysWithoutStream(LocalDate start, LocalDate end)

The first step in processing these dates is to exclude any weekends at the start and end. So for the start date, if it’s a weekend we’ll take the following Monday. We’ll also track the fact that we did this with a boolean:

boolean startOnWeekend = false;
if(start.getDayOfWeek().getValue() > 5){
    start = start.with(TemporalAdjusters.next(DayOfWeek.MONDAY));
    startOnWeekend = true;
}

We’ve used the TemporalAdjusters class here, specifically its next() method which lets us jump to the next specified day.

We can then do the same for the end date – if it’s a weekend, we take the previous Friday. This time we’ll use TemporalAdjusters.previous() to take us to the first occurrence of the day we want before the given date:

boolean endOnWeekend = false;
if(end.getDayOfWeek().getValue() > 5){
    end = end.with(TemporalAdjusters.previous(DayOfWeek.FRIDAY));
    endOnWeekend = true;
}

3.2. Accounting for Edge Cases

This already presents us with a potential edge case, if we start on Saturday and end on Sunday. In that case, our start date will now be Monday, and the end date the Friday before. It doesn’t make sense for the start to be after the end, so we can cover this potential use case with a quick check:

if(start.isAfter(end)){
    return 0;
}

We also need to cover another edge case which is why we kept track of starting and ending on a weekend. This is optional and depends on how we want to count the days. For example, if we counted between a Tuesday and Friday in the same week we’d say there are three days between them.

We’d also say there are five weekdays between a Saturday and the following Saturday. However, if we move the start and end days to Monday and Friday as we’re doing here, that now counts as four days. So to counteract that we can simply add a day if required:

long addValue = startOnWeekend || endOnWeekend ? 1 : 0;

3.3. Final Calculations

We're now in a position to calculate the total number of weeks between the start and end. For this, we'll use ChronoUnit's between() method. This method calculates the time between two Temporal objects in the specified unit, which is WEEKS in our case:

long weeks = ChronoUnit.WEEKS.between(start, end);

Finally, we can use everything we’ve gathered so far to get our final value for the number of weekdays:

return ( weeks * 5 ) + ( end.getDayOfWeek().getValue() - start.getDayOfWeek().getValue() ) + addValue;

The steps here are: first, multiply the number of weeks by the number of weekdays per week. We haven't accounted for non-whole weeks yet, so we add the extra days between the start day of the week and the end day of the week. To finish, we add the adjustment for starting or finishing on a weekend.
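
Putting the snippets above together, the complete method might look like the following sketch, which simply assembles the fragments shown in the previous subsections:

long getWorkingDaysWithoutStream(LocalDate start, LocalDate end) {
    boolean startOnWeekend = false;
    if (start.getDayOfWeek().getValue() > 5) {
        start = start.with(TemporalAdjusters.next(DayOfWeek.MONDAY));
        startOnWeekend = true;
    }
    boolean endOnWeekend = false;
    if (end.getDayOfWeek().getValue() > 5) {
        end = end.with(TemporalAdjusters.previous(DayOfWeek.FRIDAY));
        endOnWeekend = true;
    }
    // start on Saturday, end on the following Sunday: no weekdays in between
    if (start.isAfter(end)) {
        return 0;
    }
    long addValue = startOnWeekend || endOnWeekend ? 1 : 0;
    long weeks = ChronoUnit.WEEKS.between(start, end);
    return (weeks * 5) + (end.getDayOfWeek().getValue() - start.getDayOfWeek().getValue()) + addValue;
}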

4. Conclusion

In this article, we’ve looked at two options for calculating the number of weekdays between two dates.

First, we saw how to use a Stream and check each day individually. This method offers simplicity and readability at the expense of efficiency.

The second option is to apply the rules we know about the days of the week to figure it out without a loop. This offers efficiency at the expense of readability and maintainability.

As always, the full code for the examples is available over on GitHub.


Rotate a Vertex Around a Certain Point in Java


1. Overview

When working with computer graphics and game development, the ability to rotate vertices around a certain point is a fundamental skill.

In this quick tutorial, we’ll explore different approaches to rotating a vertex around a certain point in Java.

2. Understanding the Problem Statement

Let's say we have two points on a 2D plane: point A with coordinates (x1, y1) and point B with coordinates (x2, y2). We want to rotate point A around point B by a certain degree. The direction of rotation is counter-clockwise if the rotation angle is positive and clockwise if the rotation angle is negative:

Rotating a vertex around a point

In the above graph, point A’ is the new point after performing the rotation. The rotation from A to A’ is counter-clockwise, showing that the rotation angle is positive 45 degrees.

3. Using Origin as the Rotation Point

In this approach, we’ll first translate the vertex and the rotation point to the origin. Once translated, we’ll apply the rotation around the origin by the required angle. After performing the rotation, we’ll translate them back to their original position.

3.1. Rotate a Point P Around the Origin

First, let’s understand how to rotate point P around the origin. For rotation, we’ll be using formulas that involve trigonometric functions. The formula for calculating the new coordinates for rotating a point P (x,y) around the origin (0,0) in a counter-clockwise direction is:

rotatedXPoint = x * cos(angle) - y * sin(angle)
rotatedYPoint = x * sin(angle) + y * cos(angle)

The rotatedXPoint and rotatedYPoint represent the new coordinates of point P after it has been rotated. If we need to rotate the point clockwise, we need to use the negative rotation angle.
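
As a quick sanity check, here's a small sketch applying these formulas directly: rotating the point (1, 0) by 90 degrees counter-clockwise around the origin should land on (0, 1), up to floating-point error:

double x = 1.0, y = 0.0;
double angle = Math.toRadians(90.0);
double rotatedXPoint = x * Math.cos(angle) - y * Math.sin(angle); // ~0.0
double rotatedYPoint = x * Math.sin(angle) + y * Math.cos(angle); // ~1.0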

3.2. Rotation Around a Given Point

We’ll shift the rotation point to the origin by subtracting the x-coordinate of the rotation point from the x-coordinate of the vertex and similarly subtracting the y-coordinate of the rotation point from the y-coordinate of the vertex.

These translated coordinates represent the vertex position relative to this new origin. Afterward, we’ll perform the rotation as previously described and apply a reverse translation by adding back the x and y coordinates.

Let’s use this approach to rotate a vertex around a point:

public Point2D.Double usingOriginAsRotationPoint(Point2D.Double vertex, Point2D.Double rotationPoint, double angle) {
    double translatedToOriginX = vertex.x - rotationPoint.x;
    double translatedToOriginY = vertex.y - rotationPoint.y;
    double rotatedX = translatedToOriginX * Math.cos(angle) - translatedToOriginY * Math.sin(angle);
    double rotatedY = translatedToOriginX * Math.sin(angle) + translatedToOriginY * Math.cos(angle);
    double reverseTranslatedX = rotatedX + rotationPoint.x;
    double reverseTranslatedY = rotatedY + rotationPoint.y;
    return new Point2D.Double(reverseTranslatedX, reverseTranslatedY);
}

Let’s test this approach to rotate a vertex:

@Test
void givenRotationPoint_whenUseOrigin_thenRotateVertex() {
    Point2D.Double vertex = new Point2D.Double(2.0, 2.0);
    Point2D.Double rotationPoint = new Point2D.Double(0.0, 1.0);
    double angle = Math.toRadians(45.0);
    Point2D.Double rotatedVertex = VertexRotation.usingOriginAsRotationPoint(vertex, rotationPoint, angle);
    assertEquals(0.707, rotatedVertex.getX(), 0.001);
    assertEquals(3.121, rotatedVertex.getY(), 0.001);
}

4. Using the AffineTransform Class

In this approach, we’ll leverage the AffineTransform class, which is used to perform geometric transformations like translations, scales, rotations, and flips.

We’ll first use the getRotateInstance() to create a rotation transformation matrix based on the specified angle and rotation point. Subsequently, we’ll use the transform() method to apply the transformation to the vertex and perform rotation. Let’s take a look at this approach:

public Point2D.Double usingAffineTransform(Point2D.Double vertex, Point2D.Double rotationPoint, double angle) {
    AffineTransform affineTransform = AffineTransform.getRotateInstance(angle, rotationPoint.x, rotationPoint.y);
    Point2D.Double rotatedVertex = new Point2D.Double();
    affineTransform.transform(vertex, rotatedVertex);
    return rotatedVertex;
}

Let’s test this approach to rotate a vertex:

@Test
void givenRotationPoint_whenUseAffineTransform_thenRotateVertex() {
    Point2D.Double vertex = new Point2D.Double(2.0, 2.0);
    Point2D.Double rotationPoint = new Point2D.Double(0.0, 1.0);
    double angle = Math.toRadians(45.0);
    Point2D.Double rotatedVertex = VertexRotation.usingAffineTransform(vertex, rotationPoint, angle);
    assertEquals(0.707, rotatedVertex.getX(), 0.001);
    assertEquals(3.121, rotatedVertex.getY(), 0.001);
}

5. Conclusion

In this tutorial, we’ve discussed ways to rotate a vertex around a certain point.

As always, the code used in the examples is available over on GitHub.

       

List vs. Set in @OneToMany JPA


1. Overview

Spring JPA and Hibernate provide a powerful tool for seamless database communication. However, as clients delegate more control to the frameworks, including query generation, the result might be far from what we expect.

There’s usually confusion about what to use, Lists or Sets with to-many relationships. Often, this confusion is amplified by the fact that Hibernate uses similar names for its bags, lists, and sets but with slightly different meanings behind them.

In most cases, Sets are more suitable for one-to-many or many-to-many relationships. However, they have particular performance implications that we should be aware of.

In this tutorial, we’ll learn the difference between Lists and Sets in the context of entity relationships and review several examples of different complexities. Also, we’ll identify the pros and cons of each approach.

2. Testing

We’ll be using a dedicated library to test the number of requests. Checking the logs isn’t a good solution as it’s not automated and might work only on simple examples. When requests generate tens and hundreds of queries, using logs isn’t efficient enough.

First of all, we need the io.hypersistence dependency. Note that the number in the artifact ID is the Hibernate version:

<dependency>
    <groupId>io.hypersistence</groupId>
    <artifactId>hypersistence-utils-hibernate-63</artifactId>
    <version>3.7.0</version>
</dependency>

Additionally, we’ll be using the util library for log analysis:

<dependency>
    <groupId>com.vladmihalcea</groupId>
    <artifactId>db-util</artifactId>
    <version>1.0.7</version>
</dependency>

We can use these libraries for exploratory tests and cover crucial parts of our application. This way, we ensure that changes in the entity classes don’t create some invisible side effects in the query generation.

We should wrap our data source with the provided utilities to make it work. We can use BeanPostProcessor to do this: 

@Component
public class DataSourceWrapper implements BeanPostProcessor {
    public Object postProcessBeforeInitialization(Object bean, String beanName) {
        return bean;
    }
    public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
        if (bean instanceof DataSource originalDataSource) {
            ChainListener listener = new ChainListener();
            SLF4JQueryLoggingListener loggingListener = new SLF4JQueryLoggingListener();
            loggingListener.setQueryLogEntryCreator(new InlineQueryLogEntryCreator());
            listener.addListener(loggingListener);
            listener.addListener(new DataSourceQueryCountListener());
            return ProxyDataSourceBuilder
              .create(originalDataSource)
              .name("datasource-proxy")
              .listener(listener)
              .build();
        }
        return bean;
    }
}

The rest is simple. In our tests, we’ll use SQLStatementCountValidator to validate the number and the type of the queries.
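
As a sketch of what such a test can look like, assuming a Spring Data repository named userRepository and static imports of reset() and assertSelectCount() from SQLStatementCountValidator:

@Test
void givenWrappedDataSource_whenLoadingUsers_thenExpectedNumberOfSelects() {
    reset();                   // clear the statistics recorded so far
    userRepository.findAll();  // trigger the queries we want to measure
    assertSelectCount(1);      // fail if more than one SELECT was issued
}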

3. Domain

To make the examples more relevant and easier to follow, we’ll be using a model for a social networking website. We’ll have different relationships between groups, users, posts, and comments.

However, we’ll build up the complexity step by step, adding entities to highlight the differences and the performance effect. This is important as simple models with only a few relationships won’t provide a complete picture. At the same time, overly complex ones might overwhelm the information, making it hard to follow.

For these examples, we’ll use only the eager fetch type for to-many relationships. In general, Lists and Sets behave similarly when we use lazy fetch.

In the visuals, we’ll be using Iterable as a to-many field type. This is done only for brevity, so we don’t need to repeat the same visuals for Lists and Sets. We’ll explicitly define a dedicated type in each section and show it in the code.

4. Users and Posts

First of all, let's consider only part of our domain. Here, we'll take into account only users and posts:

For the first example, we’ll have a simple bidirectional relationship between users and posts. Users can have many posts. At the same time, a post can have only one user as an author.

4.1. Lists and Sets Joins

Let’s check the behavior of the queries when we request only one user. We’ll consider the following two scenarios for Set and List:

@Data
@Entity
public class User {
    // Other fields
    @OneToMany(cascade = CascadeType.ALL, mappedBy = "author", fetch = FetchType.EAGER)
    protected List<Post> posts;
}

Set-based User looks quite similar:

@Data
@Entity
public class User {
    // Other fields
    @OneToMany(cascade = CascadeType.ALL, mappedBy = "author", fetch = FetchType.EAGER)
    protected Set<Post> posts;
}

While fetching a User, Hibernate generates a single query with LEFT JOIN to get all the information in one go. This is true for both cases:

SELECT u.id, u.email, u.username, p.id, p.author_id, p.content
FROM simple_user u
         LEFT JOIN post p ON u.id = p.author_id
WHERE u.id = ?

While we have only one query, the user's data will be repeated for each row. This means we'll see the ID, email, and username repeated once for every post a particular user has:

u.id u.email u.username p.id p.author_id p.content
101 user101@email.com user101 1 101 “User101 post 1”
101 user101@email.com user101 2 101 “User101 post 2”
102 user102@email.com user102 3 102 “User102 post 1”
102 user102@email.com user102 4 102 “User102 post 2”
103 user103@email.com user103 5 103 “User103 post 1”
103 user103@email.com user103 6 103 “User103 post 2”

If the user table has many columns, or a user has many posts, this may create a performance problem. We can address this issue by specifying the fetch mode explicitly.
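
One way to do that, sketched below, is Hibernate's @Fetch annotation with FetchMode.SELECT, which tells Hibernate to load the collection with a separate query instead of a join; whether this trade-off pays off depends on the access pattern:

@Data
@Entity
public class User {
    // Other fields
    @OneToMany(cascade = CascadeType.ALL, mappedBy = "author", fetch = FetchType.EAGER)
    @Fetch(FetchMode.SELECT) // org.hibernate.annotations.Fetch / FetchMode
    protected List<Post> posts;
}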

4.2. Lists and Sets N+1

At the same time, while fetching multiple users, we encounter the infamous N+1 problem. This is true for List-based Users:

@Test
void givenEagerListBasedUser_WhenFetchingAllUsers_ThenIssueNPlusOneRequests() {
    List<User> users = getService().findAll();
    assertSelectCount(users.size() + 1);
}

Also, this is true for Set-based Users:

@Test
void givenEagerSetBasedUser_WhenFetchingAllUsers_ThenIssueNPlusOneRequests() {
    List<User> users = getService().findAll();
    assertSelectCount(users.size() + 1);
}

There will be only two kinds of queries. The first one fetches all the users:

SELECT u.id, u.email, u.username
FROM simple_user u

And N number of subsequent requests to get the posts for each User:

SELECT p.id, p.author_id, p.content
FROM post p
WHERE p.author_id = ?

Thus, we don’t have any differences between Lists and Sets for these types of relationships.

5. Groups, Users and Posts

Let’s consider more complex relationships and add groups to our model. They create unidirectional many-to-many relationships with users:

Because the relationships between Users and Posts remain the same, old tests will be valid and produce the same results. Let’s create similar tests for groups.

5.1. Lists and N + 1

We’ll have the following Group class with @ManyToMany relationships:

@Data
@Entity
public class Group {
    @Id
    private Long id;
    private String name;
    @ManyToMany(fetch = FetchType.EAGER)
    private List<User> members;
}

Let’s try to fetch all the groups:

@Test
void givenEagerListBasedGroup_whenFetchingAllGroups_thenIssueNPlusMPlusOneRequests() {
    List<Group> groups = groupService.findAll();
    Set<User> users = groups.stream().map(Group::getMembers).flatMap(List::stream).collect(Collectors.toSet());
    assertSelectCount(groups.size() + users.size() + 1);
}

Hibernate will issue additional queries for each group to get the members and for each member to get their posts. Thus, we’ll have three types of queries:

SELECT g.id, g.name
FROM interest_group g
SELECT gm.interest_group_id, u.id, u.email, u.username
FROM interest_group_members gm
         JOIN simple_user u ON u.id = gm.members_id
WHERE gm.interest_group_id = ?
SELECT p.author_id, p.id, p.content
FROM post p
WHERE p.author_id = ?

Overall, we'll get 1 + N + M requests, where N is the number of groups and M is the number of unique users in these groups. Let's try to fetch a single group:

@ParameterizedTest
@ValueSource(longs = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10})
void givenEagerListBasedGroup_whenFetchingAllGroups_thenIssueNPlusOneRequests(Long groupId) {
    Optional<Group> group = groupService.findById(groupId);
    assertThat(group).isPresent();
    assertSelectCount(1 + group.get().getMembers().size());
}

We’ll have a similar situation, but we’ll get all the User data in a single query using LEFT JOIN. Thus, there will be only two types of queries:

SELECT g.id, gm.interest_group_id, u.id, u.email, u.username, g.name
FROM interest_group g
         LEFT JOIN (interest_group_members gm JOIN simple_user u ON u.id = gm.members_id)
                   ON g.id = gm.interest_group_id
WHERE g.id = ?
SELECT p.author_id, p.id, p.content
FROM post p
WHERE p.author_id = ?

Overall, we’ll have N + 1 requests, where N is the number of group members.

5.2. Sets and Cartesian Product

While working with Sets, we’ll see a different picture. Let’s check our Set-based Group class:

@Data
@Entity
public class Group {
    @Id
    private Long id;
    private String name;
    @ManyToMany(fetch = FetchType.EAGER)
    private Set<User> members;
}

Fetching all the groups will produce a slightly different result from the List-based groups:

@Test
void givenEagerSetBasedGroup_whenFetchingAllGroups_thenIssueNPlusOneRequests() {
    List<Group> groups = groupService.findAll();
    assertSelectCount(groups.size() + 1);
}

Instead of the N + M + 1 requests from the previous example, we'll have just N + 1, but with more complex queries. We'll still have a separate query to get all the groups, but Hibernate fetches users and their posts in a single query using two JOINs:

SELECT g.id, g.name
FROM interest_group g
SELECT u.id,
       u.username,
       u.email,
       p.id,
       p.author_id,
       p.content,
       gm.interest_group_id
FROM interest_group_members gm
         JOIN simple_user u ON u.id = gm.members_id
         LEFT JOIN post p ON u.id = p.author_id
WHERE gm.interest_group_id = ?

Although we reduced the number of queries, the result set might contain duplicated data due to JOINs and, subsequently, a Cartesian product. We’ll get repeated group information for all the users in the group, and all of that will be repeated for each user post:

u.id u.username u.email p.id p.author_id p.content gm.interest_group_id
301 user301 user301@email.com 201 301 “User301’s post 1” 101
302 user302 user302@email.com 202 302 “User302’s post 1” 101
303 user303 user303@email.com NULL NULL NULL 101
304 user304 user304@email.com 203 304 “User304’s post 1” 102
305 user305 user305@email.com 204 305 “User305’s post 1” 102
306 user306 user306@email.com NULL NULL NULL 102
307 user307 user307@email.com 205 307 “User307’s post 1” 103
308 user308 user308@email.com 206 308 “User308’s post 1” 103
309 user309 user309@email.com NULL NULL NULL 103

After reviewing the previous queries, it’s obvious why fetching a single group would issue a single request:

@ParameterizedTest
@ValueSource(longs = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10})
void givenEagerSetBasedGroup_whenFetchingAllGroups_thenCreateCartesianProductInOneQuery(Long groupId) {
    groupService.findById(groupId);
    assertSelectCount(1);
}

We’ll use only the second query with JOINs, reducing the number of requests:

SELECT u.id,
       u.username,
       u.email,
       p.id,
       p.author_id,
       p.content,
       gm.interest_group_id
FROM interest_group_members gm
         JOIN simple_user u ON u.id = gm.members_id
         LEFT JOIN post p ON u.id = p.author_id
WHERE gm.interest_group_id = ?

5.3. Removals Using Lists and Sets

Another interesting difference between Sets and Lists is how they remove objects. This only applies to the @ManyToMany relationships. Let’s consider a more straightforward case with Sets first:

@ParameterizedTest
@ValueSource(longs = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10})
void givenEagerSetBasedGroup_whenRemoveUser_thenIssueOnlyOneDelete(Long groupId) {
    groupService.findById(groupId).ifPresent(group -> {
        Set<User> members = group.getMembers();
        if (!members.isEmpty()) {
            reset();
            Set<User> newMembers = members.stream().skip(1).collect(Collectors.toSet());
            group.setMembers(newMembers);
            groupService.save(group);
            assertSelectCount(1);
            assertDeleteCount(1);
        }
    });
}

The behavior is quite reasonable, and we just remove the record from the join table. We’ll see in the logs only two queries:

SELECT g.id, g.name,
       u.id, u.username, u.email,
       p.id, p.author_id, p.content,
       m.interest_group_id
FROM interest_group g
         LEFT JOIN (interest_group_members m JOIN simple_user u ON u.id = m.members_id)
                   ON g.id = m.interest_group_id
         LEFT JOIN post p ON u.id = p.author_id
DELETE
FROM interest_group_members
WHERE interest_group_id = ? AND members_id = ?

We have an additional selection only because the test methods aren’t transactional, and the original group isn’t stored in our persistence context.

Overall, Sets behave the way we would assume. Now, let’s check the Lists behavior:

@ParameterizedTest
@ValueSource(longs = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10})
void givenEagerListBasedGroup_whenRemoveUser_thenIssueRecreateGroup(Long groupId) {
    groupService.findById(groupId).ifPresent(group -> {
        List<User> members = group.getMembers();
        int originalNumberOfMembers = members.size();
        assertSelectCount(ONE + originalNumberOfMembers);
        if (!members.isEmpty()) {
            reset();
            members.remove(0);
            groupService.save(group);
            assertSelectCount(ONE + originalNumberOfMembers);
            assertDeleteCount(ONE);
            assertInsertCount(originalNumberOfMembers - ONE);
        }
    });
}

Here, we have several queries: SELECT, DELETE, and INSERT. The problem is that Hibernate removes the entire group from the join table and recreates it anew. Again, we have the initial select statements due to the lack of persistence context in the test methods:

SELECT u.id, u.email, u.username, g.name,
       g.id, gm.interest_group_id
FROM interest_group g
         LEFT JOIN (interest_group_members gm JOIN simple_user u ON u.id = gm.members_id)
                   ON g.id = gm.interest_group_id
WHERE g.id = ?
SELECT p.author_id, p.id, p.content
FROM post p
WHERE p.author_id = ?
DELETE
FROM interest_group_members
WHERE interest_group_id = ? 
    
INSERT
INTO interest_group_members (interest_group_id, members_id)
VALUES (?, ?)

The code produces one query to get all the group members, N requests to get the posts (where N is the number of members), one request to delete the entire group, and N – 1 requests to add the members again. In general, we can think of it as 1 + 2N.

Lists avoid the Cartesian product, but not for performance reasons: since Lists allow repeated elements, Hibernate cannot distinguish duplicates caused by the Cartesian product from duplicates that legitimately exist in the collection.

This is why it's recommended to use only Sets with the @ManyToMany annotation. Otherwise, we should be prepared for a dramatic performance impact.

6. Complete Domain

Now, let’s consider a more realistic domain with many different relationships:

Now, we have quite an interconnected domain model. There are several one-to-many relationships, bidirectional many-to-many relationships, and transitive circular relationships.

6.1. Lists

First, let’s consider the relationships where we use List for all to-many relationships. Let’s try to fetch all the users from the database:

@ParameterizedTest
@MethodSource
void givenEagerListBasedUser_WhenFetchingAllUsers_ThenIssueNPlusOneRequests(ToIntFunction<List<User>> function) {
    int numberOfRequests = getService().countNumberOfRequestsWithFunction(function);
    assertSelectCount(numberOfRequests);
}
static Stream<Arguments> givenEagerListBasedUser_WhenFetchingAllUsers_ThenIssueNPlusOneRequests() {
    return Stream.of(
      Arguments.of((ToIntFunction<List<User>>) s -> {
          int result = 2 * s.size() + 1;
          List<Post> posts = s.stream().map(User::getPosts)
            .flatMap(List::stream)
            .toList();
          result += posts.size();
          return result;
      })
    );
}

This request would result in many different queries. First, we’ll get all the users’ IDs. Then, separate requests for all the groups and posts for each user. Finally, we’ll fetch the information about each post.

Overall, we'll issue lots of queries, but at the same time, we won't have any joins between several to-many relationships. This way, we avoid a Cartesian product and return less data, as we don't have duplicates, but we issue more requests.

While fetching a single user, we’ll have an interesting situation:

@ParameterizedTest
@ValueSource(longs = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10})
void givenEagerListBasedUser_WhenFetchingOneUser_ThenUseDFS(Long id) {
    int numberOfRequests = getService()
      .getUserByIdWithFunction(id, this::countNumberOfRequests);
    assertSelectCount(numberOfRequests);
}

The countNumberOfRequests method is a util method that uses DFS to count the number of entities and calculate the number of requests:

Get all the posts for user #2
The user wrote the following posts: 1,2,3
 Check all the commenters for post #1: 3,8,9,10
  Get all the posts for user #10: 22
   Check all the commenters for post #22: 3,6,7,10
    Get all the posts for user #3: 4,5,6
     Check all the commenters for post #4: 2,4,9
      Get all the posts for user #9: 19,20,21
       Check all the commenters for post #19: 3,4,8,9,10
        Get all the posts for user #8: 16,17,18
         Check all the commenters for post #16: 
         Check all the commenters for post #17: 2,4,9
          Get all the posts for user #4: 7,8,9,10
           Check all the commenters for post #7: 
           Check all the commenters for post #8: 
           Check all the commenters for post #9: 1,5,6
            Get all the posts for user #1: 
            Get all the posts for user #5: 11,12,13,14
             Check all the commenters for post #11: 2,3,8
             Check all the commenters for post #12: 10
             Check all the commenters for post #13: 4,9,10
             Check all the commenters for post #14: 
            Get all the posts for user #6: 
           Check all the commenters for post #10: 2,5,6,8
         Check all the commenters for post #18: 1,2,3,4,5
       Check all the commenters for post #20: 
       Check all the commenters for post #21: 7
        Get all the posts for user #7: 15
         Check all the commenters for post #15: 1
     Check all the commenters for post #5: 1,2,5,8
     Check all the commenters for post #6: 
 Check all the commenters for post #2: 
 Check all the commenters for post #3: 1,3,6

The result is a transitive closure. For a single user with ID #2, we have to do 42(!) requests to the database. Although the main issue is the eager fetch type, it shows the explosion in the request number if we’re using Lists.

Lazy fetch might produce a similar issue when we trigger the load for most of the internal fields. This might be intentional based on the domain logic. Also, it might be accidental, for example, through incorrect overrides of the toString(), equals(), and hashCode() methods.
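
For instance, since our entities use Lombok's @Data, a sketch like the following keeps a lazy collection out of the generated toString(), equals(), and hashCode() so that logging or comparing an entity doesn't accidentally trigger a load (assuming Lombok's exclusion annotations are available):

@Data
@Entity
public class User {
    // Other fields
    @ToString.Exclude          // keep the collection out of toString()
    @EqualsAndHashCode.Exclude // and out of equals()/hashCode()
    @OneToMany(mappedBy = "author", fetch = FetchType.LAZY)
    protected Set<Post> posts;
}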

6.2. Sets

Let’s change all the Lists in our domain model to Sets and make similar tests:

@Test
void givenEagerSetBasedUser_WhenFetchingAllUsers_ThenIssueNPlusOneRequestsWithCartesianProduct() {
    List<User> users = getService().findAll();
    assertSelectCount(users.size() + 1);
}

First, we’ll have fewer requests to get all the users, which should be better overall. However, if we look at the requests, we can see the following:

SELECT profile.id, profile.biography, profile.website, profile.profile_picture_url,
       user.id, user.email, user.username,
       user_group.members_id,
       interest_group.id, interest_group.name,
       post.id, post.author_id, post.content,
       comment.id, comment.text, comment.post_id,
       comment_author.id, comment_author.profile_id, comment_author.username, comment_author.email,
       comment_author_group_member.members_id,
       comment_author_group.id, comment_author_group.name
FROM profile profile
         LEFT JOIN simple_user user
ON profile.id = user.profile_id
    LEFT JOIN (interest_group_members user_group
    JOIN interest_group interest_group
    ON interest_group.id = user_group.groups_id)
    ON user.id = user_group.members_id
    LEFT JOIN post post ON user.id = post.author_id
    LEFT JOIN comment comment ON post.id = comment.post_id
    LEFT JOIN simple_user comment_author ON comment_author.id = comment.author_id
    LEFT JOIN (interest_group_members comment_author_group_member
    JOIN interest_group comment_author_group
    ON comment_author_group.id = comment_author_group_member.groups_id)
    ON comment_author.id = comment_author_group_member.members_id
WHERE profile.id = ?

This query pulls an immense amount of data from the database, and we have one such query for each user. Another thing is that the result set will contain duplicates due to the Cartesian product. Getting a single user would give us a similar result, fewer requests but with massive result sets.

7. Pros and Cons

We used eager fetch in this tutorial to highlight the difference in the default behavior of Lists and Sets. While loading data eagerly might improve the performance and simplify the interaction with the database, it should be used cautiously.

Although eager fetch is usually considered to solve the N+1 problem, it’s not always the case. The behavior depends on multiple factors and the overall structure of the relationships between domain entities.

Sets are preferable for to-many relationships for several reasons. First, in most cases, a collection that doesn't allow duplicates reflects the domain model perfectly. We cannot have two identical users in a group, and a user cannot have two identical posts.

Another thing is that Sets are more flexible. While the default fetch mode for Sets is to create a join, we can define it explicitly by using fetch mode.

The delete behavior for many-to-many relationships using Lists produces an overhead. It’s hard to notice the difference on small datasets, but we can experience high latency with lots of data.

To avoid these problems, it’s a good idea to cover the crucial parts of our interaction with the database with tests. It would ensure that some seemingly insignificant change in one part of our domain model won’t introduce huge overhead in generated queries.

8. Conclusion

In most situations, we should use Sets for to-many relationships. This provides us with more controllable relationships and avoids overhead on deletes.

However, all the changes and ideas about improving the domain model should be profiled and tested. The issues might not expose themselves on small datasets and with simplistic entity relationships.

As usual, all the code from this tutorial is available over on GitHub.

       

How to Determine if a String Contains Invalid Encoded Characters


1. Introduction

Invalidly encoded characters can lead to various issues, including data corruption and security vulnerabilities. Hence, ensuring that the data is properly encoded when working with strings is essential, especially when dealing with character encodings like UTF-8 or ISO-8859-1.

In this tutorial, we'll go through the process of determining if a Java string contains invalid encoded characters.

2. Character Encoding in Java

Java supports various character encodings. Furthermore, the Charset class provides a way to handle them; the most common encodings are UTF-8 and ISO-8859-1.

Let’s take an example:

String input = "Hello, World!";
byte[] utf8Bytes = input.getBytes(StandardCharsets.UTF_8);
String utf8String = new String(utf8Bytes, StandardCharsets.UTF_8);

The String class allows us to convert between different encodings using the getBytes() method and the String constructors.

3. Using String Encoding

The following code provides a method for detecting and managing invalid characters in a given string, ensuring robust handling of character encoding issues:

String input = "HÆllo, World!";
@Test
public void givenInputString_whenUsingStringEncoding_thenFindIfInvalidCharacters() {
    byte[] bytes = input.getBytes(StandardCharsets.UTF_8);
    boolean found = false;
    for (byte b : bytes) {
        found = (b & 0xFF) > 127 ? true : found;
    }
    assertTrue(found);
}

In this test method, we begin by converting the input string into an array of bytes using the UTF-8 character encoding standard. Subsequently, we use a loop to iterate through each byte, checking whether the value exceeds 127, which is indicative of an invalid character.

If any invalid characters are detected, a boolean found flag is set to true. Finally, we assert the presence of invalid characters using the assertTrue() method if the flag is true; otherwise, we assert the absence of invalid characters using the assertFalse() method.
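
A more compact variant of the same check, shown here only as a sketch, streams over the string's code points instead of the encoded bytes:

boolean hasNonAscii = input.codePoints().anyMatch(c -> c > 127);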

4. Using Regular Expressions

Regular expressions present an alternative way of detecting invalid characters in a given string.

Here’s an example:

@Test
public void givenInputString_whenUsingRegexPattern_thenFindIfInvalidCharacters() {
    String regexPattern = "[^\\x00-\\x7F]+";
    Pattern pattern = Pattern.compile(regexPattern);
    Matcher matcher = pattern.matcher(input);
    assertTrue(matcher.find());
}

Here, we employ a regex pattern to identify any characters outside the ASCII range (0 to 127). Then, we compile the regexPattern defined as "[^\x00-\x7F]+" using the Pattern.compile() method. This pattern targets characters that don't fall within this range.

Then, we create a Matcher object to apply the pattern to the input string. If the Matcher finds any matches using the matcher.find() method, this indicates the presence of invalid characters.

5. Conclusion

In conclusion, this tutorial has provided insights into character encoding in Java and demonstrated two effective methods, using string encoding and regular expressions, for detecting and managing invalid characters in strings, thereby helping to ensure data integrity and security.

As always, the complete code samples for this article can be found over on GitHub.

       

Difference between ZoneOffset.UTC and ZoneId.of(“UTC”)


1. Introduction

Date and time information must be processed accurately in Java, and this involves managing time zones. ZoneOffset.UTC and ZoneId.of("UTC") are two standard ways to represent Coordinated Universal Time (UTC). While both appear to denote UTC, they have some differences.

In this tutorial, we’ll take an overview of both methods, key differences, and use cases.

2. The ZoneOffset.UTC

The java.time package was introduced in Java 8 and offers classes such as ZoneId and ZoneOffset that we can use to represent time zones. ZoneOffset.UTC is a constant member of the ZoneOffset class. It represents the fixed offset of UTC, which is always +00:00; in other words, UTC is the same regardless of the season.

Here’s an example of using ZoneOffset.UTC:

@Test
public void givenOffsetDateTimeWithUTCZoneOffset_thenOffsetShouldBeUTC() {
    OffsetDateTime dateTimeWithOffset = OffsetDateTime.now(ZoneOffset.UTC);
    assertEquals(dateTimeWithOffset.getOffset(), ZoneOffset.UTC);
}

In the above code snippet, we first create an OffsetDateTime object representing the current date and time with the UTC offset. We specify the offset (zero hours from UTC) using the ZoneOffset.UTC constant. Afterward, the result is verified using the assertEquals() method.

3. The ZoneId.of(“UTC”)

On the other hand, ZoneId.of(“UTC”) creates a ZoneId instance representing the UTC zone. Unlike ZoneOffset.UTC, ZoneId.of(“UTC”) can be used to represent other time zones by changing the zone ID. Here’s an example:

@Test
public void givenZonedDateTimeWithUTCZoneId_thenZoneShouldBeUTC() {
    ZonedDateTime zonedDateTime = ZonedDateTime.now(ZoneId.of("UTC"));
    assertEquals(zonedDateTime.getZone(), ZoneId.of("UTC"));
}

In the above code block, we create a ZonedDateTime object to represent the current date and time in the UTC zone. Then, we use the ZoneId.of(“UTC”) to specify the UTC zone.

4. The Differences and Use Cases

The following table summarizes the key differences between ZoneOffset.UTC and ZoneId.of(“UTC”):

Characteristic | ZoneOffset.UTC         | ZoneId.of("UTC")
Immutability   | Constant and Immutable | Flexible and Immutable
Usage          | Fixed offset of UTC    | Can represent various time zones

The following table provides use cases for both methods:

Use Case                             | ZoneOffset.UTC                                          | ZoneId.of("UTC")
Fixed Offset                         | Suitable for applications dealing exclusively with UTC | N/A (use ZoneOffset.UTC)
Flexibility for Different Time Zones | Use ZoneOffset.UTC if the fixed offset is sufficient    | Suitable for scenarios involving multiple time zones
Handling Various Time Zones          | Use ZoneOffset.UTC for a fixed UTC offset               | Provides flexibility for handling different time zones
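
A small sketch illustrates the practical difference: the two objects aren't equal to each other, although ZoneId.of("UTC") can be normalized to the fixed offset (assuming the standard ZoneId.normalized() behavior):

ZoneId zoneId = ZoneId.of("UTC");
assertNotEquals(ZoneOffset.UTC, zoneId);           // a region-based ZoneId, not a ZoneOffset
assertEquals(ZoneOffset.UTC, zoneId.normalized()); // normalizing yields the fixed +00:00 offset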

5. Conclusion

In conclusion, we took an overview of ZoneOffset.UTC and ZoneId.of("UTC"). It is important to distinguish between the two when handling time zones in Java.

As usual, the accompanying source code can be found over on GitHub.

       

Calculating the Power of Any Number in Java Without Using Math pow() Method


1. Introduction

Calculating the power of a number is a fundamental operation in mathematics. While Java provides the convenient Math.pow() method, there may be instances where we prefer implementing our own power calculation.

In this tutorial, we'll explore several approaches to calculate the power of a number in Java without relying on the built-in method.

2. Iteration Approach

A straightforward way to calculate the power of a number is through iteration. Here, we multiply the base by itself the number of times specified by the exponent. Here's a simple example:

double result = 1;
double base = 2;
int exponent = 3;
@Test
void givenBaseAndExponentNumbers_whenUtilizingIterativeApproach_thenReturnThePower() {
    for (int i = 0; i < exponent; i++) {
        result *= base;
    }
    assertEquals(8, result);
}

The provided code initializes the base, exponent, and result variables. Subsequently, we calculate the base raised to the exponent by multiplying the result by the base in each iteration. The final result is then asserted to be equal to 8, serving as a validation of the iterative power calculation.

This method is simple and effective for integer exponents, but it becomes inefficient for larger exponents.

3. Recursion Approach

Another approach involves using recursion to calculate the power of a number. In this approach, we divide the problem into smaller sub-problems. Here's an example:

@Test
public void givenBaseAndExponentNumbers_whenUtilizingRecursionApproach_thenReturnThePower() {
    result = calculatePowerRecursively(base, exponent);
    assertEquals(8, result);
}
private double calculatePowerRecursively(double base, int exponent) {
    if (exponent == 0) {
        return 1;
    } else {
        return base * calculatePowerRecursively(base, exponent - 1);
    }
}

Here, the test method calls the helper method calculatePowerRecursively that uses recursion to calculate the power, with a base case for exponent 0 returning 1, and otherwise multiplying the base by the result of the recursive call with a decreased exponent.

While recursion provides a clean and concise solution, it may lead to a stack overflow for large exponents due to the recursive calls.

4. Binary Exponentiation (Fast Power Algorithm)

A more efficient approach is binary exponentiation, also known as the fast power algorithm. Here, we'll use recursion and a divide-and-conquer strategy as follows:

@Test
public void givenBaseAndExponentNumbers_whenUtilizingFastApproach_thenReturnThePower() {
    result = calculatePowerFast(base, exponent);
    assertEquals(8, result);
}
private double calculatePowerFast(double base, int exponent) {
    if (exponent == 0) {
        return 1;
    }
    double halfPower = calculatePowerFast(base, exponent / 2);
    if (exponent % 2 == 0) {
        return halfPower * halfPower;
    } else {
        return base * halfPower * halfPower;
    }
}

In this example, the helper method employs a divide-and-conquer strategy, recursively computing the power by calculating the power of the base to half of the exponent and then adjusting based on whether the exponent is even or odd. If even, it squares the half power; if odd, it multiplies the base by the square of the half power.

Moreover, binary exponentiation significantly reduces the number of recursive calls and performs well for large exponents.
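
As a side note, the same idea can also be expressed iteratively, which avoids recursion entirely. The following square-and-multiply loop is a sketch for non-negative exponents and isn't part of the original examples:

private double calculatePowerIteratively(double base, int exponent) {
    double result = 1;
    while (exponent > 0) {
        if ((exponent & 1) == 1) { // lowest bit set: multiply the result by the current base
            result *= base;
        }
        base *= base;              // square the base for the next bit
        exponent >>= 1;            // move on to the next bit of the exponent
    }
    return result;
}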

5. Conclusion

In summary, we explored various approaches to calculate the power of a number in Java without relying on the Math.pow() method. These alternatives provide flexibility based on the constraints of our application.

As usual, the accompanying source code can be found over on GitHub.

       

Improving Test Coverage and Readability With Spock’s Data Pipes and Tables


1. Introduction

Spock is a great framework for writing tests, especially when it comes to increasing test coverage.

In this tutorial, we’ll explore Spock’s data pipes and how to improve our line and branch code coverage by adding extra data to a data pipe. We’ll also look at what to do when our data gets too big.

2. The Subject of Our Test

Let’s start with a method that adds two numbers but with a twist. If the first or second number is 42, then return 42:

public class DataPipesSubject {
    int addWithATwist(final int first, final int second) {
        if (first == 42 || second == 42) {
            return 42;
        }
        return first + second;
    }
}

We want to test this method using various combinations of inputs.

Let’s see how to write and evolve a simple test to feed our inputs via a data pipe.

3. Preparing Our Data-Driven Test

Let’s create a test class with a test for a single scenario and then build on it to add data pipes:

First, let’s create our DataPipesTest class with the subject of our test:

@Title("Test various ways of using data pipes")
class DataPipesTest extends Specification {
    @Subject
    def dataPipesSubject = new DataPipesSubject()
    // ...
}

We’ve used Spock’s @Title annotation around the class to give ourselves some extra context for upcoming tests.

We’ve also annotated the subject of our test with Spock’s @Subject annotation. Note that we should be careful to import our Subject from spock.lang rather than from javax.security.auth.

Although not strictly necessary, this syntactic sugar helps us quickly identify what’s being tested.

Now let’s create a test with our first two inputs, 1 and 2, using Spock’s given/when/then syntax:

def "given two numbers when we add them then our result is the sum of the inputs"() {
    given: "some inputs"
    def first = 1
    def second = 2
    and: "an expected result"
    def expectedResult = 3
    when: "we add them together"
    def result = dataPipesSubject.addWithATwist(first, second)
    then: "we get our expected answer"
    result == expectedResult
}

To prepare our test for data pipes, let’s move our inputs from the given/and blocks into a where block:

def "given a where clause with our inputs when we add them then our result is the sum of the inputs"() {
    when: "we add our inputs together"
    def result = dataPipesSubject.addWithATwist(first, second)
    then: "we get our expected answer"
    result == expectedResult
    where: "we have various inputs"
    first = 1
    second = 2
    expectedResult = 3
}

Spock evaluates the where block and implicitly adds any variables as parameters to the test. So, Spock sees our method declaration like this:

def "given some declared method parameters when we add our inputs then those types are used"(int first, int second, int expectedResult)

Note that when we coerce our data into a specific type, we declare the type and variable as a method parameter.

Since our test is very simple, let’s condense the when and then blocks into a single expect block:

def "given an expect block to simplify our test when we add our inputs then our result is the sum of the two numbers"() {
    expect: "our addition to get the right result"
    dataPipesSubject.addWithATwist(first, second) == expectedResult
    where: "we have various inputs"
    first = 1
    second = 2
    expectedResult = 3
}

Now that we’ve simplified our test, we’re ready to add our first data pipe.

4. What are Data Pipes?

Data pipes in Spock are a way of feeding different combinations of data into our tests. This helps to keep our test code readable when we have more than one scenario to consider.

Pipes can be any Iterable – we can even create our own if it implements the Iterable interface!

4.1. Simple Data Pipes

Since arrays are Iterable, let’s start by converting our single inputs into arrays and using data pipes ‘<<‘ to feed them into our test:

where: "we have various inputs"
first << [1]
second << [2]
expectedResult << [3]

We can add additional test cases by adding entries to each array data pipe.

So let's add some data to our pipes for the scenarios 2 + 2 = 4 and 3 + 5 = 8:

first << [1, 2, 3]
second << [2, 2, 5]
expectedResult << [3, 4, 8]

To make our test a bit more readable, let’s combine our first and second inputs into a multi-variable array data pipe, leaving our expectedResult separate for now:

where: "we have various inputs"
[first, second] << [
    [1, 2],
    [2, 2],
    [3, 5]
]
and: "an expected result"
expectedResult << [3, 4, 8]

Since we can refer to feeds that we’ve already defined, we could replace our expected result data pipe with the following:

expectedResult = first + second

But let’s combine it with our input pipes since the method we’re testing has some subtleties that would break a simple addition:

[first, second, expectedResult] << [
    [1, 2, 3],
    [2, 2, 4],
    [3, 5, 8]
]

4.2. Maps and Methods

When we want more flexibility, and we’re using Spock 2.2 or later, we can feed our data using a Map as our data pipe:

where: "we have various inputs in the form of a map"
[first, second, expectedResult] << [
    [
        first : 1,
        second: 2,
        expectedResult: 3
    ],
    [
        first : 2,
        second: 2, 
        expectedResult: 4
    ]
]

We can also pipe in our data from a separate method.

[first, second, expectedResult] << dataFeed()

Let’s see what our map data pipe looks like when we move it into a dataFeed method:

def dataFeed() {
    [ 
        [
            first : 1,
            second: 2,
            expectedResult: 3
        ],
        [
            first : 2,
            second: 2,
            expectedResult: 4
        ]
    ]
}

Although this approach works, using multiple inputs still feels clunky. Let’s look at how Spock’s Data Tables can improve this.

5. Data Tables

Spock’s data table format takes one or more data pipes, making them more visually appealing.

Let’s rewrite the where block in our test method to use a data table instead of a collection of data pipes:

where: "we have various inputs"
first | second || expectedResult
1     | 2      || 3
2     | 2      || 4
3     | 5      || 8

So now, each row contains the inputs and expected results for a particular scenario, which makes our test scenarios much easier to read.

As a visual cue and for best practice, we’ve used double ‘||’ to separate our inputs from our expected result.

When we run our test with code coverage for these three iterations, we see that not all the lines of execution are covered. Our addWithATwist method has a special case when either input is 42:

if (first == 42 || second == 42) {
    return 42;
}

So, let’s add a scenario where our first input is 42, ensuring that our code executes the line inside our if statement. Let’s also add a scenario where our second input is 42 to ensure that our tests cover all the execution branches:

42    | 10     || 42
1     | 42     || 42

So here’s our final where block with iterations that give our code line and branch coverage:

where: "we have various inputs"
first | second || expectedResult
1     | 2      || 3
2     | 2      || 4
3     | 5      || 8
42    | 10     || 42
1     | 42     || 42

When we execute these tests, our test runner renders a row for each iteration:

DataPipesTest
 - use table to supply the inputs
    - use table to supply the inputs [first: 1, second: 2, expectedResult: 3, #0]
    - use table to supply the inputs [first: 2, second: 2, expectedResult: 4, #1]
...

6. Readability Improvements

We have a few techniques that we can use to make our tests even more readable.

6.1. Inserting Variables Into Our Method Name

When we want more expressive test executions, we can add variables to our method name.

So let’s enhance our test’s method name by inserting the column header variables from our table, prefixed with a ‘#’, and also add a scenario column:

def "given a #scenario case when we add our inputs, #first and #second, then we get our expected result: #expectedResult"() {
    expect: "our addition to get the right result"
    dataPipesSubject.addWithATwist(first, second) == expectedResult
    where: "we have various inputs"
    scenario       | first | second || expectedResult
    "simple"       | 1     | 2      || 3
    "double 2"     | 2     | 2      || 4
    "special case" | 42    | 10     || 42
}

Now, when we run our test, our test runner renders the output as the more expressive:

DataPipesTest
- given a #scenario case when we add our inputs, #first and #second, then we get our expected result: #expectedResult
  - given a simple case when we add our inputs, 1 and 2, then we get our expected result: 3
  - given a double 2 case when we add our inputs, 2 and 2, then we get our expected result: 4
...

When we use this approach but type the data pipe name incorrectly, Spock will fail the test with a message similar to this:

Error in @Unroll, could not find a matching variable for expression: myWrongVariableName

As before, we can reference our feeds in our table data using a feed we’ve already declared, even in the same row.

So, let’s add a row that references our column header variables: first and second:

scenario              | first | second || expectedResult
"double 2 referenced" | 2     | first  || first + second

6.2. When Table Columns Get Too Wide

Our IDEs may contain intrinsic support for Spock’s tables – we can use IntelliJ’s “format code” feature (Ctrl+Alt+L) to align the columns in the table for us! Knowing this, we can add our data quickly without worrying about the layout and format it afterward.

Sometimes, however, the length of data items in our tables causes a formatted table row to become too wide to fit on one line. Usually, that’s when we have Strings in our input.

To demonstrate this, let’s create a method that takes a String as an input and simply adds an exclamation mark:

String addExclamation(final String first) {
    return first + '!';
}

Let’s now create a test with a long string as an input:

def "given long strings when our tables our too big then we can use shared or static variables to shorten the table"() {
    expect: "our addition to get the right result"
    dataPipesSubject.addExclamation(longString) == expectedResult
    where: "we have various inputs"
    longString                                                                                                  || expectedResult
    'When we have a very long string we can use a static or @Shared variable to make our tables easier to read' || 'When we have a very long string we can use a static or @Shared variable to make our tables easier to read!'
}

Now, let’s make this table more compact by replacing the string with a static or @Shared variable. Note that our table can’t use variables declared in our test – our table can only use static, @Shared, or calculated values.

So, let’s declare a static and shared variable and use those in our table instead:

static def STATIC_VARIABLE = 'When we have a very long string we can use a static variable'
@Shared
def SHARED_VARIABLE = 'When we have a very long string we can annotate our variable with @Shared'
// ...
scenario         | longString      || expectedResult
'use of static'  | STATIC_VARIABLE || STATIC_VARIABLE + '!'
'use of @Shared' | SHARED_VARIABLE || "$SHARED_VARIABLE!"

Now our table is much more compact! We’ve also used Groovy’s String interpolation to build our expected result in our @Shared row to show how that can help readability.

Another way we can make a large table more readable is to split the table into multiple sections by using two or more underscores ‘__’:

where: "we have various inputs"
first | second
1     | 2
2     | 3
3     | 5
__
expectedResult | _
3              | _
5              | _
8              | _

Of course, we need to have the same number of rows across the split tables.

Spock tables must have at least two columns, but after we split our table, expectedResult would have been on its own, so we’ve added an empty ‘_’ column to meet this requirement.

6.3. Alternative Table Separators

Sometimes, we may not want to use ‘|’ as a separator. In such cases, we can use ‘;’ instead:

first ; second ;; expectedResult
1     ; 2      ;; 3
2     ; 3      ;; 5
3     ; 5      ;; 8

But we can’t mix and match both ‘|’ and ‘;’ column separators in the same table!

7. Conclusion

In this article, we learned how to use Spock’s data feeds in a where block. We’ve learned how data tables are a visually nicer representation of data feeds and how we can improve our test coverage by simply adding a row of data to a data table. We’ve also explored a few ways of making our data more readable, especially when dealing with large data values or when our tables get too big.

As usual, the source for this article can be found over on GitHub.

       

Event Externalization with Spring Modulith


1. Overview

In this article, we’ll discuss the need to publish messages within a @Transactional block and the associated performance challenges, such as prolonged database connection times. To tackle this, we’ll utilize Spring Modulith‘s features to listen to Spring application events and automatically publish them to a Kafka topic.

2. Transactional Operations and Message Brokers

For the code examples of this article, we’ll assume we’re writing the functionality responsible for saving an Article on Baeldung:

@Service
class Baeldung {
    private final ArticleRepository articleRepository;
    // constructor
    @Transactional
    public void createArticle(Article article) {
        validateArticle(article);
        article = addArticleTags(article);
        // ... other business logic
        
        articleRepository.save(article);
    }
}

Additionally, we’ll need to notify other parts of the system about this new Article. With this information, other modules or services will react accordingly, creating reports or sending newsletters to the website’s readers.

The easiest way to achieve this is to inject a dependency that knows how to publish this event. For our example, let’s use KafkaOperations to send a message to the “baeldung.articles.published” topic and use the Article‘s slug() as the key:

@Service
class Baeldung {
    private final ArticleRepository articleRepository;
    private final KafkaOperations<String, ArticlePublishedEvent> messageProducer;
    // constructor
    @Transactional
    public void createArticle(Article article) {
        // ... business logic
        validateArticle(article);
        article = addArticleTags(article);
        article = articleRepository.save(article);
        messageProducer.send(
          "baeldung.articles.published",
          article.slug(),
          new ArticlePublishedEvent(article.slug(), article.title())
        ).join();
    }
}

However, this approach is not ideal for a few different reasons. From a design point of view, we have coupled the domain service with the message producer. Moreover, the domain service directly depends on the lower-level component, breaking one of the fundamental Clean Architecture rules.

Furthermore, this approach will also have performance implications because everything is happening within a @Transactional method. As a result, the database connection acquired for saving the Article will be kept open until the message is successfully published.

Lastly, saving the entity and publishing the message will be done as an atomic operation. In other words, if the producer fails to publish the event, the database transaction will be rolled back.

3. Dependency Inversion Using Spring Events

We can leverage Spring Events to improve the design of our solution. Our goal is to avoid publishing the messages to Kafka directly from our domain service. Let’s remove the KafkaOperations dependency and publish an internal application event instead:

@Service
public class Baeldung {
    private final ApplicationEventPublisher applicationEvents;
    private final ArticleRepository articleRepository;
    // constructor
    @Transactional
    public void createArticle(Article article) {
        // ... business logic
        validateArticle(article);
        article = addArticleTags(article);
        article = articleRepository.save(article);
        applicationEvents.publishEvent(
          new ArticlePublishedEvent(article.slug(), article.title()));
    }
}

In addition to this, we’ll have a dedicated Kafka producer as part of our infrastructure layer. This component will listen to the ArticlePublishedEvents and delegate the publishing to the underlying KafkaOperations bean:

@Component
class ArticlePublishedKafkaProducer {
    private final KafkaOperations<String, ArticlePublishedEvent> messageProducer;
    // constructor 
    @EventListener
    public void publish(ArticlePublishedEvent article) {
        Assert.notNull(article.slug(), "Article Slug must not be null!");
        messageProducer.send("baeldung.articles.published", article.slug(), article);
    }
}

With this abstraction, the infrastructure component now depends on the event produced by the domain service. In other words, we’ve managed to reduce the coupling and invert the source code dependency. Furthermore, if other modules are interested in the Article creation, they can now seamlessly listen to these application events and react accordingly.

4. Atomic Vs. Non-atomic Operations

Now, let’s delve into the performance considerations. To begin, we must determine whether rolling back when the communication with the message broker fails is the desired behavior. This choice varies based on the specific context.

In case we do not need this atomicity, it’s imperative to free the database connection and publish the events asynchronously. To simulate this, we can try to create an article without a slug, causing ArticlePublishedKafkaProducer::publish to fail:

@Test
void whenPublishingMessageFails_thenArticleIsStillSavedToDB() {
    var article = new Article(null, "Introduction to Spring Boot", "John Doe", "<p> Spring Boot is [...] </p>");
    baeldung.createArticle(article);
    assertThat(repository.findAll())
      .hasSize(1).first()
      .extracting(Article::title, Article::author)
      .containsExactly("Introduction to Spring Boot", "John Doe");
}

If we run the test now, it will fail. This happens because ArticlePublishedKafkaProducer throws an exception that will cause the domain service to roll back the transaction. However, we can make the event listener asynchronous by replacing the @EventListener annotation with @TransactionalEventListener and @Async:

@Async
@TransactionalEventListener
public void publish(ArticlePublishedEvent event) {
    Assert.notNull(event.slug(), "Article Slug must not be null!");
    messageProducer.send("baeldung.articles.published", event);
}
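Note that @Async only takes effect if asynchronous processing is enabled in the application context. A minimal sketch of such a configuration class (the class name is our own choice) could look like this:

@Configuration
@EnableAsync
class AsyncConfiguration {
}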

If we re-run the test now, we’ll notice that the exception is logged, the event was not published, and the entity is saved to the database. Moreover, the database connection was released sooner,  allowing other threads to use it.

5. Event Externalization with Spring Modulith

We successfully tackled the design and performance concerns of the original code example through a two-step approach:

  • Dependency inversion using Spring application events
  • Asynchronous publishing utilizing @TransactionalEventListener and @Async

Spring Modulith allows us to further simplify our code, providing built-in support for this pattern. Let’s start by adding the Maven dependency for spring-modulith-events-api to our pom.xml:

<dependency>
    <groupId>org.springframework.modulith</groupId>
    <artifactId>spring-modulith-events-api</artifactId>
    <version>1.1.2</version>
</dependency>

This module can be configured to listen to application events and automatically externalize them to various message systems. We’ll stick to our original example and focus on Kafka. For this integration, we’ll need to add the spring-modulith-events-kafka dependency:

<dependency> 
    <groupId>org.springframework.modulith</groupId> 
    <artifactId>spring-modulith-events-kafka</artifactId> 
    <version>1.1.2</version> 
</dependency>

Now, we need to update the ArticlePublishedEvent and annotate it with @Externalized. This annotation requires the name and the key of the routing target. In other words, the Kafka topic and the message key. For the key, we’ll use a SpEL expression that invokes the event’s slug() method:

@Externalized("baeldung.article.published::#{slug()}")
public record ArticlePublishedEvent(String slug, String title) {
}

6. Event Externalization Configuration

Even though the @Externalized annotation’s value is useful for concise SpEL expressions, there are situations where we might want to avoid using it:

  • In cases where the expression becomes overly complex
  • When we aim to separate information about the topic from the application event
  • If we want distinct models for the application event and the externalized event

For these use cases, we can configure the necessary routing and event mapping using EventExternalizationConfiguration’s builder. After that, we simply need to expose this configuration as a Spring bean:

@Bean
EventExternalizationConfiguration eventExternalizationConfiguration() {
    return EventExternalizationConfiguration.externalizing()
      .select(EventExternalizationConfiguration.annotatedAsExternalized())
      .route(
        ArticlePublishedEvent.class,
        it -> RoutingTarget.forTarget("baeldung.articles.published").andKey(it.slug())
      )
      .mapping(
        ArticlePublishedEvent.class,
        it -> new ArticlePublishedKafkaEvent(it.slug(), it.title())
      )
      .build();
}
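The mapping step above targets a dedicated record for the externalized message. Its exact shape is up to us; a minimal sketch that matches the mapping could be:

public record ArticlePublishedKafkaEvent(String slug, String title) {
}

Keeping a separate record lets the internal application event evolve independently of the published message contract.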

In this case, we’ll remove the routing information from the ArticlePublishedEvent and keep the @Externalized annotation with no value:

@Externalized
public record ArticlePublishedEvent(String slug, String title) {
}

7. Conclusions

In this article, we discussed the scenarios that require us to publish a message from within a transactional block. We discovered that this pattern can have big performance implications because it can block the database connection for a longer time.

After that, we used Spring Modulith’s features to listen to Spring application events and automatically publish them to a Kafka topic. This approach allowed us to externalize the events asynchronously and free the database connection sooner.

The complete source code can be found over on GitHub.

       

How to Find the URL of a Service in Kubernetes


1. Overview

Networking is an integral part of Kubernetes, with Service being one of its primitive networking objects. Kubernetes Service allows us to expose a network application to the external world. However, to access it, we must know its URL.

In this hands-on tutorial, we’ll discuss finding and using the Kubernetes service’s URL as a reliable network endpoint.

2. Setting up an Example

We need to create a few Kubernetes objects to use as an example. To begin, let’s create Namespace objects.

2.1. Creating Kubernetes Namespaces

Kubernetes namespaces allow us to isolate the resources within the same cluster. So, let’s use the create command to create two namespaces – dev and stg:

$ kubectl create ns dev
namespace/dev created
$ kubectl create ns stg
namespace/stg created

2.2. Creating Kubernetes Deployments

In the previous step, we created two namespaces. Now, let’s deploy the Redis pod to those namespaces:

$ kubectl create deploy redis-dev --image=redis:alpine -n dev
deployment.apps/redis-dev created
$ kubectl create deploy redis-stg --image=redis:alpine -n stg
deployment.apps/redis-stg created

Next, let’s verify that the pods have been created and that they’re in a healthy state:

$ kubectl get pods -n dev
NAME                         READY   STATUS    RESTARTS   AGE
redis-dev-7b647c797c-c2mmg   1/1     Running   0          16s
$ kubectl get pods -n stg
NAME                        READY   STATUS    RESTARTS   AGE
redis-stg-d66978466-plfpv   1/1     Running   0          9s

Here, we can observe that the status of both pods is Running.

Now, the required setup is ready. In the upcoming sections, we’ll create a few Service objects to establish communication with these pods.

3. Finding the URL of the ClusterIP Service

In Kubernetes, the default service type is ClusterIP. For ClusterIP services, we can use the service name or its IP address as its URL. This allows us to limit communication only within the cluster. Let’s understand this with a simple example.

3.1. Creating the ClusterIP Services

First, let’s create ClusterIP Service objects in both namespaces:

$ kubectl expose deploy redis-dev --port 6379 --type ClusterIP -n dev
service/redis-dev exposed
$ kubectl expose deploy redis-stg --port 6379 --type ClusterIP -n stg
service/redis-stg exposed

In this example, we’ve used the expose command to create a Service object. The expose command uses the selectors of the Deployment object and creates a Service using the same selectors.

In the following sections, we’ll discuss how to find and use the names of these services as URLs.

3.2. Using the URL of the ClusterIP Service in the Same Namespace

First, let’s use the get command to find the service name from the dev namespace:

$ kubectl get svc -n dev
NAME        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
redis-dev   ClusterIP   10.100.18.154   <none>        6379/TCP   9s

In the output, the first column shows the service name. In our case, it’s redis-dev.

Now, let’s exec to the Redis pod that is deployed in the dev namespace, connect to the Redis server using redis-dev as a hostname, and execute the PING command:

$ kubectl exec -it redis-dev-7b647c797c-c2mmg -n dev -- sh
/data # redis-cli -h redis-dev PING
PONG
/data # exit

Here, we can see that the Redis server responds with the PONG message.

Finally, we execute the exit command to exit from the pod.

3.3. Using the URL of the ClusterIP Service From Another Namespace

Let’s examine the format of a ClusterIP service URL:

<service-name>.<namespace>.<cluster-name>:<service-port>

We didn’t use the namespace and cluster-name in the previous example because we executed the command from the same namespace and cluster. In addition to that, we also skipped service-port because the Service was exposed using the default Redis port 6379.

However, we need to specify a namespace name to use the ClusterIP service from another namespace. Let’s understand this with an example.

First, let’s find out the name of the service in the stg namespace:

$ kubectl get svc -n stg
NAME        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
redis-stg   ClusterIP   10.110.213.51   <none>        6379/TCP   9s

Now, let’s exec to the Redis pod that is deployed in the dev namespace, connect to the Redis server using redis-stg.stg as a hostname, and execute the PING command:

$ kubectl exec -it redis-dev-7b647c797c-c2mmg -n dev -- sh
/data # redis-cli -h redis-stg.stg PING
PONG
/data # exit

In this example, we can see that the Redis server sends a PONG reply.

An important thing to note is that we’ve used redis-stg.stg as a hostname, where redis-stg is the service name and stg is the namespace name in which the Service object was created.

3.4. Cleaning Up

In the previous examples, we saw how to use the Service name as the URL.

Now, let’s use the delete command to clean up the services from the dev and stg namespaces:

$ kubectl delete svc redis-dev -n dev
service "redis-dev" deleted
$ kubectl delete svc redis-stg -n stg
service "redis-stg" deleted

4. Finding the URL of the NodePort Service

The NodePort service allows external connectivity to the application using the IP address and port of the Kubernetes node. Let’s understand this by creating a NodePort service.

4.1. Creating the NodePort Service

First, let’s use the expose command to create a Service with the type NodePort:

$ kubectl expose deploy redis-dev --port 6379 --type NodePort -n dev
service/redis-dev exposed

Next, let’s verify that the Service has been created:

$ kubectl get svc -n dev
NAME        TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
redis-dev   NodePort   10.111.147.176   <none>        6379:30243/TCP   2s

Here, we can see that the Service type is NodePort. In the second-to-last column, it shows that the Kubernetes node’s port 30243 is mapped to the pod’s port 6379.

Now, we can use the IP address of the Kubernetes node and port 30243 from outside the cluster to access the Redis server. Let’s see this in action.

4.2. Using the URL of the NodePort Service

First, let’s find the IP address of the Kubernetes node:

$ kubectl get nodes -o wide
NAME       STATUS   ROLES           AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
baeldung   Ready    control-plane   24h   v1.28.3   192.168.49.2   <none>        Ubuntu 22.04.3 LTS   5.15.0-41-generic   docker://24.0.7

Here, we’ve used the -o wide option with the Node objects to show the additional fields.

In the above output, the column with the header INTERNAL-IP shows the Kubernetes node’s IP address.

Now, from the external machine, let’s connect to the Redis server using 192.168.49.2 as a hostname, 30243 as a port number, and execute the PING command:

$ redis-cli -h 192.168.49.2 -p 30243 PING
PONG

Here, we can see that the Redis server responds with a PONG message.

4.3. Cleaning Up

In the next section, we’re going to see the usage of the LoadBalancer service. But before that, let’s do the cleanup of the NodePort service from the dev namespace:

$ kubectl delete svc redis-dev -n dev
service "redis-dev" deleted

5. Finding the URL of the LoadBalancer Service

Just like the NodePort service, the LoadBalancer service also allows external connectivity to the application using the load balancer’s IP address. To understand this, let’s create a LoadBalancer service:

5.1. Creating the LoadBalancer Service

Let’s use the expose command to create a LoadBalancer service in the dev namespace:

$ kubectl expose deploy redis-dev --port 6379 --type LoadBalancer -n dev
service/redis-dev exposed

5.2. Using the URL of the LoadBalancer Service

Next, let’s use the get command to find the load balancer’s IP address:

$ kubectl get svc -n dev
NAME        TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)          AGE
redis-dev   LoadBalancer   10.111.167.249   192.168.49.10   6379:32637/TCP   7s

In this example, the column with the header EXTERNAL-IP represents the LoadBalancer service’s IP address.

In our case, the load balancer’s IP address is 192.168.49.10.

Now, from the external machine, let’s connect to the Redis server using 192.168.49.10 as a hostname and execute the PING command:

$ redis-cli -h 192.168.49.10 PING
PONG

In the output, we can see that the Redis server replies with a PONG message.

6. Cleaning Up

Deleting all unwanted objects to tidy up the cluster is a good practice. This helps us in lowering our costs by reducing hardware consumption.

So, let’s use the delete command to remove the dev and stg namespaces:

$ kubectl delete ns dev
namespace "dev" deleted
$ kubectl delete ns stg
namespace "stg" deleted

This command deletes the namespace itself and all objects that are present in this namespace.

Here, we deleted namespaces directly, as this is the test setup. However, we should be very careful while performing delete operations in the production environment.

7. Conclusion

In this article, we discussed how to find and use the URL of a service in Kubernetes.

First, we saw how to use the ClusterIP service name as a URL from the same and another namespace. Then, we discussed how to find and use the URL of the NodePort service.

Finally, we discussed using the load balancer’s IP address as a service URL.

       

Introduction to gRPC with Spring Boot


1. Overview

gRPC is a high-performance, open-source RPC framework initially developed by Google. It helps to eliminate boilerplate code and connect polyglot services in and across data centers. The API is based on Protocol Buffers, which provides a protoc compiler to generate code for different supported languages.

We can view gRPC as an alternative to REST, SOAP, or GraphQL, built on top of HTTP/2 to use features like multiplexing or streaming connections.

In this tutorial, we’ll learn how to implement gRPC service providers and consumers with Spring Boot.

2. Challenges

First, we can note that there is no direct support for gRPC in Spring Boot. Only Protocol Buffers are supported, which allows us to implement protobuf-based REST services. So, we need to include gRPC by using a third-party library, or by managing a few challenges by ourselves:

  • Platform-dependent compiler: The protoc compiler is platform-dependent. So, if the stubs should be generated during build-time, the build gets more complex and error-prone.
  • Dependencies: We need compatible dependencies within our Spring Boot application. Unfortunately, protoc for Java adds a javax.annotation.Generated annotation, which forces us to add a dependency to the old Java EE Annotations for Java library for compilation.
  • Server Runtime: gRPC service providers need to run within a server. The gRPC for Java project provides a shaded Netty, which we need to either include in our Spring Boot application or replace by a server already provided by Spring Boot.
  • Message Transport: Spring Boot provides different clients, like the RestClient (blocking) or the WebClient (non-blocking), that unfortunately cannot be configured and used for gRPC, because gRPC uses custom transport technologies for both blocking and non-blocking calls.
  • Configuration: Because gRPC brings its own technologies, we need configuration properties to configure them the Spring Boot way.

3. Sample Projects

Fortunately, there are third-party Spring Boot Starters that we can use to master the challenges for us, such as the one from LogNet or the grpc ecosystem project. Both starters are easy to integrate, but the latter one has both provider and consumer support as well as many other integration features, so that’s the one we chose for our examples.

In this sample, we design just a simple HelloWorld API with a single Proto file:

syntax = "proto3";
option java_package = "com.baeldung.helloworld.stubs";
option java_multiple_files = true;
message HelloWorldRequest {
    // a name to greet, default is "World"
    optional string name = 1;
}
message HelloWorldResponse {
    string greeting = 1;
}
service HelloWorldService {
    rpc SayHello(stream HelloWorldRequest) returns (stream HelloWorldResponse);
}

As we can see, we use the Bidirectional Streaming feature.

3.1. gRPC Stubs

Because the stubs are the same for both provider and consumer, we generate them within a separate, Spring-independent project. This has the advantage that the project’s lifecycle, including the protoc compiler configuration and the Java EE Annotations for Java dependency, can be isolated from the Spring Boot project’s lifecycle.

3.2. Service Provider

Implementing the service provider is pretty easy. First, we need to add the dependencies for the starter and our stubs project:

<dependency>
    <groupId>net.devh</groupId>
    <artifactId>grpc-server-spring-boot-starter</artifactId>
    <version>2.15.0.RELEASE</version>
</dependency>
<dependency>
    <groupId>com.baeldung.spring-boot-modules</groupId>
    <artifactId>helloworld-grpc-java</artifactId>
    <version>1.0.0-SNAPSHOT</version>
</dependency>

There’s no need to include Spring MVC or WebFlux because the starter dependency brings the shaded Netty server. We can configure it within the application.yml, for example, by configuring the server port:

grpc:
  server:
    port: 9090

Then, we need to implement the service and annotate it with @GrpcService:

@GrpcService
public class HelloWorldController extends HelloWorldServiceGrpc.HelloWorldServiceImplBase {
    @Override
    public StreamObserver<HelloWorldRequest> sayHello(
        StreamObserver<HelloWorldResponse> responseObserver
    ) {
        // ...
    }
}
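The starter registers the annotated service with the embedded server for us. The body of sayHello() is where our business logic goes; a minimal sketch that answers every incoming request on the response stream (assuming the request and response types generated from our Proto file) might look like this:

@Override
public StreamObserver<HelloWorldRequest> sayHello(
    StreamObserver<HelloWorldResponse> responseObserver
) {
    return new StreamObserver<HelloWorldRequest>() {
        @Override
        public void onNext(HelloWorldRequest request) {
            // proto3 optional fields generate a hasXxx() accessor
            String name = request.hasName() ? request.getName() : "World";
            responseObserver.onNext(HelloWorldResponse.newBuilder()
              .setGreeting("Hello, " + name + "!")
              .build());
        }
        @Override
        public void onError(Throwable throwable) {
            responseObserver.onError(throwable);
        }
        @Override
        public void onCompleted() {
            responseObserver.onCompleted();
        }
    };
}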

3.3. Service Consumer

For the service consumer, we need to add the dependencies to the starter and the stubs:

<dependency>
    <groupId>net.devh</groupId>
    <artifactId>grpc-client-spring-boot-starter</artifactId>
    <version>2.15.0.RELEASE</version>
</dependency>
<dependency>
    <groupId>com.baeldung.spring-boot-modules</groupId>
    <artifactId>helloworld-grpc-java</artifactId>
    <version>1.0.0-SNAPSHOT</version>
</dependency>

Then, we configure the connection to the service in the application.yml:

grpc:
  client:
    hello:
      address: localhost:9090
      negotiation-type: plaintext

The name “hello” is a custom one. This way, we can configure multiple connections and refer to this name when injecting the gRPC client into our Spring component:

@GrpcClient("hello")
HelloWorldServiceGrpc.HelloWorldServiceStub stub;
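Since our API uses bidirectional streaming, we invoke the asynchronous stub by passing an observer for the responses and get back an observer for our requests. A minimal, illustrative usage sketch (all names are our own) might look like this:

StreamObserver<HelloWorldRequest> requestObserver = stub.sayHello(new StreamObserver<HelloWorldResponse>() {
    @Override
    public void onNext(HelloWorldResponse response) {
        // react to each greeting streamed back by the provider
        System.out.println(response.getGreeting());
    }
    @Override
    public void onError(Throwable throwable) {
        throwable.printStackTrace();
    }
    @Override
    public void onCompleted() {
        System.out.println("Server completed the stream");
    }
});
requestObserver.onNext(HelloWorldRequest.newBuilder().setName("Baeldung").build());
requestObserver.onCompleted();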

4. Pitfalls

Implementing and consuming a gRPC service with Spring Boot is pretty easy. But there are some pitfalls that we should be aware of.

4.1. SSL-Handshake

Transferring data over HTTP means sending information unencrypted unless we use SSL. The integrated Netty server does not use SSL by default, so we need to explicitly configure it.

Otherwise, for local tests, we can leave the connection unprotected. In this case, we need to configure the consumer, as already shown:

grpc:
  client:
    hello:
      negotiation-type: plaintext

The default for the consumer is to use TLS, while the default for the provider is to skip SSL encryption. So, the defaults for consumer and provider don’t match each other.

4.2. Consumer Injection Without @Autowired

We implement the consumer by injecting a client object into our Spring component:

@GrpcClient("hello")
HelloWorldServiceGrpc.HelloWorldServiceStub stub;

This is implemented by a BeanPostProcessor and works as an addition to Spring’s built-in dependency injection mechanism. That means we can’t use the @GrpcClient annotation in conjunction with @Autowired or constructor injection. Instead, we’re restricted to using field injection.

If we want to decouple the injection from the consuming component, we can only do so with a configuration class:

@Configuration
public class HelloWorldGrpcClientConfiguration {
    @GrpcClient("hello")
    HelloWorldServiceGrpc.HelloWorldServiceStub helloWorldClient;
    @Bean
    MyHelloWorldClient helloWorldClient() {
      return new MyHelloWorldClient(helloWorldClient);
    }
}

4.3. Mapping Transfer Objects

The data types generated by protoc can fail when invoking setters with null values:

public HelloWorldResponse map(HelloWorldMessage message) {
    return HelloWorldResponse
      .newBuilder()
      .setGreeting( message.getGreeting() ) // might be null
      .build();
}

So, we need null checks before invoking the setters. When we use mapping frameworks, we need to configure the mapper generation to do such null checks. A MapStruct mapper, for example, would need some special configuration:

@Mapper(
  componentModel = "spring",
  nullValuePropertyMappingStrategy = NullValuePropertyMappingStrategy.IGNORE,
  nullValueCheckStrategy = NullValueCheckStrategy.ALWAYS
)
public interface HelloWorldMapper {
    HelloWorldResponse map(HelloWorldMessage message);
}

4.4. Testing

The starter doesn’t include any special support for implementing tests. Even the gRPC for Java project has only minimal support for JUnit 4, and no support for JUnit 5.

4.5. Native Images

When we want to build native images, there’s currently no support for gRPC. Because the client injection is done via reflection, this won’t work without extra configuration.

5. Conclusion

In this article, we’ve learned that we can easily implement gRPC providers and consumers within our Spring Boot application. We should note, however, that this comes with some restrictions, like missing support for testing and native images.

As usual, all the code implementations are available over on GitHub.

       

Java Weekly, Issue 526


1. Spring and Java

>> How to use the Java CountDownLatch [vladmihalcea.com]

How to use the Java CountDownLatch to write test cases that take concurrency into consideration. A good read as always.

>> JEP 455: Primitive Types in Patterns, instanceof, and switch (Preview) [openjdk.org]

Pattern matching for primitives: extended support for primitive types for switch and instanceof are coming to Java.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical & Musings

>> Another Attack Vector For SMS Interception [techblog.bozho.net]

Yet another reason that we shouldn’t rely on SMS for two-factor authentication: compromised intermediaries.

Also worth reading:

3. Pick of the Week

>> Clever code is probably the worst code you could write [engineerscodex.com]

       

Monkey Patching in Java


1. Introduction

In software development, we often need to adapt and enhance our systems’ existing functionalities. Sometimes, modifying the existing codebase may not be possible or may not be the most pragmatic solution. Therefore, a solution to this problem is monkey patching. This technique allows us to modify a class or module runtime without altering its original source code.

In this article, we’ll explore how monkey patching can be used in Java, when to use it, and its drawbacks.

2. Monkey Patching

The term monkey patching originates from an earlier term, guerilla patch, which refers to changing code sneakily at runtime without any rules. It gained popularity thanks to the flexibility of dynamic programming languages, such as Python or Ruby.

Monkey patching enables us to modify or extend classes or modules at runtime. This allows us to tweak or augment existing code without requiring direct alterations to the source. It is particularly useful when adjustments are imperative, but direct modification is either unfeasible or undesirable due to various constraints.

In Java, monkey patching can be achieved through various techniques. These methods include proxies, bytecode instrumentation, aspect-oriented programming, reflection, or decorator patterns. Each method offers its unique approach, suitable for specific scenarios.

Next, we’ll create a trivial money converter with a hardcoded exchange rate from EUR to USD to apply monkey patching using different approaches:

public interface MoneyConverter {
    double convertEURtoUSD(double amount);
}
public class MoneyConverterImpl implements MoneyConverter {
    private final double conversionRate;
    public MoneyConverterImpl() {
        this.conversionRate = 1.10;
    }
    @Override
    public double convertEURtoUSD(double amount) {
        return amount * conversionRate;
    }
}

3. Dynamic Proxies

In Java, the use of proxies is a powerful technique for implementing monkey patching. A proxy is a wrapper that passes method invocation through its own facilities. This provides us with an opportunity to modify or enhance the behavior of the original class.

Notably, dynamic proxies stand as a fundamental proxy mechanism in Java. Moreover, they are widely used by frameworks like Spring Framework.

A good example is the @Transactional annotation. When applied to a method, the associated class undergoes dynamic proxy wrapping at runtime. Upon invoking the method, Spring redirects the call to the proxy. After that, the proxy initiates a new transaction or joins the existing one. Subsequently, the actual method is called. Note that, to be able to benefit from this transactional behavior, we need to rely on Spring’s dependency injection mechanism because it’s based on dynamic proxies.

Let’s use dynamic proxies to wrap our conversion method with some logs for our money converter. First, we must create a subtype of java.lang.reflect.InvocationHandler:

public class LoggingInvocationHandler implements InvocationHandler {
    private final Object target;
    public LoggingInvocationHandler(Object target) {
        this.target = target;
    }
    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        System.out.println("Before method: " + method.getName());
        Object result = method.invoke(target, args);
        System.out.println("After method: " + method.getName());
        return result;
    }
}

Next, let’s create a test to verify if logs surrounded the conversion method:

@Test
public void whenMethodCalled_thenSurroundedByLogs() {
    ByteArrayOutputStream logOutputStream = new ByteArrayOutputStream();
    System.setOut(new PrintStream(logOutputStream));
    MoneyConverter moneyConverter = new MoneyConverterImpl();
    MoneyConverter proxy = (MoneyConverter) Proxy.newProxyInstance(
      MoneyConverter.class.getClassLoader(),
      new Class[]{MoneyConverter.class},
      new LoggingInvocationHandler(moneyConverter)
    );
    double result = proxy.convertEURtoUSD(10);
    Assertions.assertEquals(11, result);
    String logOutput = logOutputStream.toString();
    assertTrue(logOutput.contains("Before method: convertEURtoUSD"));
    assertTrue(logOutput.contains("After method: convertEURtoUSD"));
}

4. Aspect-Oriented Programming

Aspect-oriented programming (AOP) is a paradigm that addresses the cross-cutting concerns in software development, offering a modular and cohesive approach to separate concerns that would otherwise be scattered throughout the codebase. This is achieved by adding additional behavior to existing code, without modifying the code itself.

In Java, we can leverage AOP through frameworks like AspectJ or Spring AOP. While Spring AOP provides a lightweight and Spring-integrated approach, AspectJ offers a more powerful and standalone solution.

In monkey patching, AOP provides an elegant solution by allowing us to apply changes to multiple classes or methods in a centralized manner. Using aspects, we can address concerns like logging or security policies that need to be applied consistently across various components without altering the core logic.

Let’s try to surround the same method with the same logs. To do so, we will use the AspectJ framework, and we need to add the spring-boot-starter-aop dependency to our project:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-aop</artifactId>
    <version>3.2.2</version>
</dependency>

We can find the latest version of the library on Maven Central.

In Spring AOP, aspects are typically applied to Spring-managed beans. Therefore, we will define our money converter as a bean for simplicity:

@Bean
public MoneyConverter moneyConverter() {
    return new MoneyConverterImpl();
}

Now, we need to define our aspect to surround our conversion method with logs:

@Aspect
@Component
public class LoggingAspect {
    @Before("execution(* com.baeldung.monkey.patching.converter.MoneyConverter.convertEURtoUSD(..))")
    public void beforeConvertEURtoUSD(JoinPoint joinPoint) {
        System.out.println("Before method: " + joinPoint.getSignature().getName());
    }
    @After("execution(* com.baeldung.monkey.patching.converter.MoneyConverter.convertEURtoUSD(..))")
    public void afterConvertEURtoUSD(JoinPoint joinPoint) {
        System.out.println("After method: " + joinPoint.getSignature().getName());
    }
}

Next, we can create a test to verify if our aspect is applied correctly:

@Test
public void whenMethodCalled_thenSurroundedByLogs() {
    ByteArrayOutputStream logOutputStream = new ByteArrayOutputStream();
    System.setOut(new PrintStream(logOutputStream));
    double result = moneyConverter.convertEURtoUSD(10);
    Assertions.assertEquals(11, result);
    String logOutput = logOutputStream.toString();
    assertTrue(logOutput.contains("Before method: convertEURtoUSD"));
    assertTrue(logOutput.contains("After method: convertEURtoUSD"));
}

5. Decorator Pattern

Decorator is a design pattern that allows us to attach behavior to objects by placing them inside wrapper objects. Therefore, we can assume that a decorator provides an enhanced interface to the original object.

In the context of monkey patching, it provides a flexible solution for enhancing or modifying the behavior of classes without directly modifying their code. We can create decorator classes that implement the same interfaces as the original classes and introduce additional functionality by wrapping instances of the base classes.

This pattern is particularly useful when dealing with a set of related classes that share common interfaces. By employing the Decorator Pattern, modifications can be applied selectively, allowing for a modular and non-intrusive way to adapt or extend the functionality of individual objects.

The Decorator Pattern contrasts with other monkey patching techniques, offering a more structured and explicit approach to augmenting object behavior. Its versatility makes it well-suited for scenarios where a clear separation of concerns and a modular approach to code modification are desired.

To implement this pattern, we will create a new class that will implement the MoneyConverter interface. It will have a property of type MoneyConverter, which will process the request. Moreover, the purpose of our decorator is just to add some logs and forward the money conversion request:

public class MoneyConverterDecorator implements MoneyConverter {
    private final MoneyConverter moneyConverter;
    public MoneyConverterDecorator(MoneyConverter moneyConverter) {
        this.moneyConverter = moneyConverter;
    }
    @Override
    public double convertEURtoUSD(double amount) {
        System.out.println("Before method: convertEURtoUSD");
        double result = moneyConverter.convertEURtoUSD(amount);
        System.out.println("After method: convertEURtoUSD");
        return result;
    }
}

Now, let’s create a test to check if the logs were added:

@Test
public void whenMethodCalled_thenSurroundedByLogs() {
    ByteArrayOutputStream logOutputStream = new ByteArrayOutputStream();
    System.setOut(new PrintStream(logOutputStream));
    MoneyConverter moneyConverter = new MoneyConverterDecorator(new MoneyConverterImpl());
    double result = moneyConverter.convertEURtoUSD(10);
    Assertions.assertEquals(11, result);
    String logOutput = logOutputStream.toString();
    assertTrue(logOutput.contains("Before method: convertEURtoUSD"));
    assertTrue(logOutput.contains("After method: convertEURtoUSD"));
}

6. Reflection

Reflection is the ability of a program to examine and modify its behavior at runtime. In Java, we can use it with the help of the java.lang.reflect package or the Reflections library. While it provides significant flexibility, it should be used carefully due to its potential impact on code maintainability and performance.

One common application of reflection for monkey patching involves accessing class metadata, inspecting fields and methods, and even invoking methods at runtime. Therefore, this capability opens the door to making runtime modifications without directly altering the source code.

Let’s suppose that the conversion rate was updated to a new value. We can’t change it because we didn’t create setters for our converter class, and it is hardcoded. Therefore, we can use reflection to break encapsulation and update the conversion rate to the new value:

@Test
public void givenPrivateField_whenUsingReflection_thenBehaviorCanBeChanged() throws IllegalAccessException, NoSuchFieldException {
    MoneyConverter moneyConvertor = new MoneyConverterImpl();
    Field conversionRate = MoneyConverterImpl.class.getDeclaredField("conversionRate");
    conversionRate.setAccessible(true);
    conversionRate.set(moneyConvertor, 1.2);
    double result = moneyConvertor.convertEURtoUSD(10);
    assertEquals(12, result);
}

7. Bytecode Instrumentation

Through bytecode instrumentation, we can dynamically modify the bytecode of compiled classes. One popular framework for bytecode instrumentation is the Java Instrumentation API. This API was introduced with the purpose of collecting data for utilization by various tools. As these modifications are exclusively additive, such tools don’t alter the application’s state or behavior. For example, such tools include monitoring agents, profilers, coverage analyzers, and event loggers.

However, this approach introduces a more advanced level of complexity, and it’s crucial to handle it with care due to its potential impact on the runtime behavior of our application.
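To get a feel for the API, here’s a minimal, illustrative sketch of a Java agent that registers a ClassFileTransformer. The agent class is our own invention, and the actual bytecode rewriting would normally be delegated to a library such as ASM or Byte Buddy:

public class PatchingAgent {
    public static void premain(String agentArgs, Instrumentation instrumentation) {
        instrumentation.addTransformer(new ClassFileTransformer() {
            @Override
            public byte[] transform(ClassLoader loader, String className, Class<?> classBeingRedefined,
              ProtectionDomain protectionDomain, byte[] classfileBuffer) {
                if ("com/baeldung/monkey/patching/converter/MoneyConverterImpl".equals(className)) {
                    // produce and return a modified byte array here, e.g. generated with ASM or Byte Buddy
                }
                // returning null keeps the original bytecode untouched
                return null;
            }
        });
    }
}

The agent is then wired in via the Premain-Class manifest attribute and started with the -javaagent JVM option.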

8. Use Cases of Monkey Patching

Monkey patching finds utility in various scenarios where making runtime modifications to code becomes a pragmatic solution. One common use case is fixing urgent bugs in third-party libraries or frameworks without waiting for official updates. It enables us to swiftly address some issues by patching the code temporarily.

Another scenario is extending or modifying the behavior of existing classes or methods in situations where direct code alterations are challenging or impractical. Also, in testing environments, monkey patching proves beneficial for introducing mock behaviors or altering functionalities temporarily to simulate different scenarios.

Furthermore, monkey patching can be employed when rapid prototyping or experimentation is required. This allows us to iterate quickly and explore various implementations without committing to permanent changes.

9. Risks of Monkey Patching

Despite its utility, monkey patching introduces some risks that should be carefully considered. Potential side effects and conflicts represent one significant risk, as modifications made at runtime might interact unpredictably. Moreover, this lack of predictability can lead to challenging debugging scenarios and increased maintenance overhead.

Furthermore, monkey patching can compromise code readability and maintainability. Injecting changes dynamically may obscure the actual behavior of the code, making it challenging for us to understand and maintain, especially in large projects.

Security concerns may also arise with monkey patching, as it can introduce vulnerabilities or malicious behavior. Additionally, the reliance on monkey patching may discourage the adoption of standard coding practices and systematic solutions to problems, leading to a less robust and cohesive codebase.

10. Conclusion

In this article, we learned that monkey patching may prove helpful and powerful in some scenarios. It can also be achieved through various techniques, each with its benefits and drawbacks. However, this approach should be used carefully because it can lead to performance, readability, maintainability, and security issues.

As always, the source code is available over on GitHub.

       

Set an Environment Variable at Runtime in Java


1. Overview

Java provides a simple way of interacting with environment variables. We can access them but cannot change them easily. However, in some cases, we need more control over the environment variables, especially for test scenarios.

In this tutorial, we’ll learn how to address this problem and programmatically set or change environment variables. We’ll be talking only about using it in a testing context. Using dynamic environment variables for domain logic should be discouraged, as it is prone to problems.

2. Accessing Environment Variables

The process of accessing the environment variables is pretty straightforward. The System class provides us with such functionality:

@Test
void givenOS_whenGetPath_thenVariableIsPresent() {
    String classPath = System.getenv("PATH");
    assertThat(classPath).isNotNull();
}

Also, if we need to access all variables, we can do this:

@Test
void givenOS_whenGetEnv_thenVariablesArePresent() {
    Map<String, String> environment = System.getenv();
    assertThat(environment).isNotNull();
}

However, the System doesn’t expose any setters, and the Map we receive is unmodifiable.
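In fact, any attempt to modify the returned Map fails at runtime, as this small illustrative test shows:

@Test
void givenEnvironmentMap_whenModifyingIt_thenExceptionIsThrown() {
    Map<String, String> environment = System.getenv();
    assertThrows(UnsupportedOperationException.class, () -> environment.put("MY_VARIABLE", "value"));
}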

3. Changing Environment Variables

We can have different cases where we want to change or set an environment variable. Because our processes are organized in a hierarchy, we have three options:

  • a child process changes/sets the environment variable of a parent
  • a process changes/sets its environment variables
  • a parent process changes/sets the environment variables of a child

We’ll talk only about the last two cases. The first one is more complex and cannot be easily rationalized for test purposes. Also, it generally cannot be achieved in pure Java and often involves some advanced coding in C/C++.

We’ll concentrate only on Java solutions to this problem. Although JNI is part of Java, it’s more involved, and the solution should be implemented in C/C++. Also, the solution might have issues with portability. That’s why we won’t investigate these approaches in detail.

4. Current Process

Here, we have several options. Some of them might be viewed as hacks, as it’s not guaranteed that they will work on all the platforms.

4.1. Using Reflection API

Technically, we can change the System class to ensure that it will provide us with the values we need using Reflection API:

@SuppressWarnings("unchecked")
private static Map<String, String> getModifiableEnvironment()
  throws ClassNotFoundException, NoSuchFieldException, IllegalAccessException {
    Class<?> environmentClass = Class.forName(PROCESS_ENVIRONMENT);
    Field environmentField = environmentClass.getDeclaredField(ENVIRONMENT);
    assertThat(environmentField).isNotNull();
    environmentField.setAccessible(true);
    Object unmodifiableEnvironmentMap = environmentField.get(STATIC_METHOD);
    assertThat(unmodifiableEnvironmentMap).isNotNull();
    assertThat(unmodifiableEnvironmentMap).isInstanceOf(UMODIFIABLE_MAP_CLASS);
    Field underlyingMapField = unmodifiableEnvironmentMap.getClass().getDeclaredField(SOURCE_MAP);
    underlyingMapField.setAccessible(true);
    Object underlyingMap = underlyingMapField.get(unmodifiableEnvironmentMap);
    assertThat(underlyingMap).isNotNull();
    assertThat(underlyingMap).isInstanceOf(MAP_CLASS);
    return (Map<String, String>) underlyingMap;
}

However, this approach would break the boundaries of modules. Thus, on Java 9 and above, it might result in a warning, but the code will compile. While in Java 16 and above, it throws an error:

java.lang.reflect.InaccessibleObjectException: 
Unable to make field private static final java.util.Map java.lang.ProcessEnvironment.theUnmodifiableEnvironment accessible: 
module java.base does not "opens java.lang" to unnamed module @2c9f9fb0

To overcome the latter problem, we need to open the system modules for reflective access. We can use the following VM options:

--add-opens java.base/java.util=ALL-UNNAMED 
--add-opens java.base/java.lang=ALL-UNNAMED

While running this code from a module, we can use its name instead of ALL-UNNAMED.

However, the getenv(String) implementation might differ from platform to platform. Also, we don’t have any guarantees about the API of internal classes, so the solution might not work in all setups.

To save some typing, we can use an already implemented solution from the JUnit Pioneer library:

<dependency>
    <groupId>org.junit-pioneer</groupId>
    <artifactId>junit-pioneer</artifactId>
    <version>2.2.0</version>
    <scope>test</scope>
</dependency>

It uses a similar idea but offers a more declarative approach:

@Test
@SetEnvironmentVariable(key = ENV_VARIABLE_NAME, value = ENV_VARIABLE_VALUE)
void givenVariableSet_whenGetEnvironmentVariable_thenReturnsCorrectValue() {
    String actual = System.getenv(ENV_VARIABLE_NAME);
    assertThat(actual).isEqualTo(ENV_VARIABLE_VALUE);
}

@SetEnvironmentVariable helps us to define the environment variables. However, because it uses reflection, we have to provide access to the closed modules as we did previously.

4.2. JNI

Another approach is to use JNI and implement the code that sets the environment variables in C/C++. It’s a more invasive approach and requires at least some C/C++ skills. At the same time, it doesn’t have a problem with reflective access.

However, we cannot guarantee that it will update the variables in Java runtime. Our application can cache the variables on startup, and any further changes won’t have any effect. We don’t have this problem while changing the underlying Map using reflection, as it changes the value only on the Java side.

Also, this approach would require a custom solution for different platforms. Because all OSs handle environment variables differently, the solution won’t be as cross-platform as the pure Java implementation.

5. Child Process

ProcessBuilder can help us to create a child process directly from Java. It’s possible to run any process with it. However, we’ll use it to run our JUnit tests:

@Test
void givenChildProcessTestRunner_whenRunTheTest_thenAllSucceed()
  throws IOException, InterruptedException {
    ProcessBuilder processBuilder = new ProcessBuilder();
    processBuilder.inheritIO();
    Map<String, String> environment = processBuilder.environment();
    environment.put(CHILD_PROCESS_CONDITION, CHILD_PROCESS_VALUE);
    environment.put(ENVIRONMENT_VARIABLE_NAME, ENVIRONMENT_VARIABLE_VALUE);
    Process process = processBuilder.command(arguments).start();
    int errorCode = process.waitFor();
    assertThat(errorCode).isZero();
}

ProcessBuilder provides API to access environment variables and start a separate process. We can even run a Maven test goal and identify which tests we want to execute:

public static final String CHILD_PROCESS_TAG = "child_process";
public static final String TAG = String.format("-Dgroups=%s", CHILD_PROCESS_TAG);
private final String testClass = String.format("-Dtest=%s", getClass().getName());
private final String[] arguments = {"mvn", "test", TAG, testClass};

This process picks up the tests in the same class with a specific tag:

@Test
@EnabledIfEnvironmentVariable(named = CHILD_PROCESS_CONDITION, matches = CHILD_PROCESS_VALUE)
@Tag(CHILD_PROCESS_TAG)
void givenChildProcess_whenGetEnvironmentVariable_thenReturnsCorrectValue() {
    String actual = System.getenv(ENVIRONMENT_VARIABLE_NAME);
    assertThat(actual).isEqualTo(ENVIRONMENT_VARIABLE_VALUE);
}

It’s possible to customize this solution and tailor it to specific requirements.

6. Docker Environment

However, if we need more configuration or a more specific environment, it’s better to use Docker and Testcontainers. It would provide us with more control, especially with integration tests. Let’s outline the Dockerfile first:

FROM maven:3.9-amazoncorretto-17
WORKDIR /app
COPY /src/test/java/com/baeldung/setenvironment/SettingDockerEnvironmentVariableUnitTest.java \
 ./src/test/java/com/baeldung/setenvironment/
COPY /docker-pom.xml ./
ENV CUSTOM_DOCKER_ENV_VARIABLE=TRUE
ENTRYPOINT mvn -f docker-pom.xml test

We’ll copy the required test and run it inside a container. Also, we provide environment variables in the same file.

We can use a CI/CD setup to pick up the container or Testcontainers inside our tests to run the test. While it’s not the most elegant solution, it might help us run all the tests in a single click. Let’s consider a simplistic example:

class SettingTestcontainerVariableUnitTest {
    public static final String CONTAINER_REPORT_FILE = "/app/target/surefire-reports/TEST-com.baeldung.setenvironment.SettingDockerEnvironmentVariableUnitTest.xml";
    public static final String HOST_REPORT_FILE = "./container-test-report.xml";
    public static final String DOCKERFILE = "./Dockerfile";
    @Test
    void givenTestcontainerEnvironment_whenGetEnvironmentVariable_thenReturnsCorrectValue() {
        Path dockerfilePath = Paths.get(DOCKERFILE);
        GenericContainer container = new GenericContainer(
          new ImageFromDockerfile().withDockerfile(dockerfilePath));
        assertThat(container).isNotNull();
        container.start();
        while (container.isRunning()) {
            // Busy spin
        }
        container.copyFileFromContainer(CONTAINER_REPORT_FILE, HOST_REPORT_FILE);
    }
}

However, containers don’t provide a convenient API to copy a folder to get all reports. The simplest way to do this is the withFileSystemBind() method, but it’s deprecated. Another approach is to create a bind in the Dockerfile directly.

We can rewrite the example using ProcessBuilder. The main idea is to tie the Docker and usual tests into the same suite.

7. Conclusion

Java allows us to work with the environment variables directly. However, changing their values or setting new ones isn’t easy.

If we need this in our domain logic, it signals that we’ve violated several SOLID principles in most cases. However, during testing, more control over environment variables might simplify the process and allow us to check more specific cases.

Although we can use reflection, spinning a new process or building an entirely new environment using Docker is a more appropriate solution.

As usual, all the code from this tutorial is available over on GitHub.

       

Convert Gregorian to Hijri Date in Java


1. Overview

The Gregorian and Hijri calendars represent two distinct systems for measuring time.

In this tutorial, we’ll look at various approaches to converting a Gregorian Date to a Hijri Date.

2. Gregorian vs. Hijri Calendar

Let’s understand the difference between the Gregorian and Hijri calendars. The Gregorian calendar follows the solar year, consisting of 12 months with fixed lengths. The Hijri Calendar follows the lunar year and has 12 months alternating between 29 and 30 days.

In the Hijri calendar, the length of each month depends on the period of a complete moon revolution around the Earth. The Gregorian calendar consists of 365 or 366 days, whereas the Hijri calendar has 354 or 355 days. It means a Hijri year is approximately 11 days shorter than a Gregorian year.
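We can observe this difference with a small illustrative snippet based on java.time:

LocalDate gregorianToday = LocalDate.now();
HijrahDate hijriToday = HijrahDate.now();
System.out.println(gregorianToday.lengthOfYear()); // 365 or 366
System.out.println(hijriToday.lengthOfYear());     // 354 or 355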

3. Using the HijrahDate Class

In this approach, we’ll use the HijrahDate class from the java.time.chrono package. This class was introduced in Java 8 for modern date and time operations. It provides multiple methods to create and manipulate Hijri dates.

3.1. Using the from() Method

We’ll use the from() method of the HijrahDate class to convert a date from the Gregorian to the Hijri calendar. This method takes a LocalDate object representing a Gregorian date as an input and returns a HijrahDate object:

public HijrahDate usingFromMethod(LocalDate gregorianDate) {
    return HijrahDate.from(gregorianDate);
}

Now, let’s run our test:

@Test
void givenGregorianDate_whenUsingFromMethod_thenConvertHijriDate() {
    LocalDate gregorianDate = LocalDate.of(2013, 3, 31);
    HijrahDate hijriDate = GregorianToHijriDateConverter.usingFromMethod(gregorianDate);
    assertEquals(1434, hijriDate.get(ChronoField.YEAR));
    assertEquals(5, hijriDate.get(ChronoField.MONTH_OF_YEAR));
    assertEquals(19, hijriDate.get(ChronoField.DAY_OF_MONTH));
}

3.2. Using the HijrahChronology Class

In this approach, we’ll use the java.time.chrono.HijrahChronology class, which represents the Hijri (Islamic) calendar system.

The HijrahChronology.INSTANCE field gives us an instance of the Hijri calendar system. We’ll use it to create the ChronoLocalDate object to convert a Gregorian date to a Hijri date:

public HijrahDate usingHijrahChronology(LocalDate gregorianDate) {
    HijrahChronology hijrahChronology = HijrahChronology.INSTANCE;
    ChronoLocalDate hijriChronoLocalDate = hijrahChronology.date(gregorianDate);
    return HijrahDate.from(hijriChronoLocalDate);
}

Now, let’s test this approach:

@Test
void givenGregorianDate_whenUsingHijrahChronologyClass_thenConvertHijriDate() {
    LocalDate gregorianDate = LocalDate.of(2013, 3, 31);
    HijrahDate hijriDate = GregorianToHijriDateConverter.usingHijrahChronology(gregorianDate);
    assertEquals(1434, hijriDate.get(ChronoField.YEAR));
    assertEquals(5, hijriDate.get(ChronoField.MONTH_OF_YEAR));
    assertEquals(19, hijriDate.get(ChronoField.DAY_OF_MONTH));
}

4. Using Joda-Timе

Joda-Time is a popular date and time manipulation library for Java, offering an alternative to the standard Java Date and Time API with a more intuitive interface.

In Joda-Time, the IslamicChronology class represents the Hijri (Islamic) calendar. We’ll use DateTime’s withChronology() method with an IslamicChronology instance to convert the Gregorian date to a Hijri date:

public DateTime usingJodaDate(DateTime gregorianDate) {
    return gregorianDate.withChronology(IslamicChronology.getInstance());
}

Now, let’s test this approach:

@Test
void givenGregorianDate_whenUsingJodaDate_thenConvertHijriDate() {
    DateTime gregorianDate = new DateTime(2013, 3, 31, 0, 0, 0);
    DateTime hijriDate = GregorianToHijriDateConverter.usingJodaDate(gregorianDate);
    assertEquals(1434, hijriDate.getYear());
    assertEquals(5, hijriDate.getMonthOfYear());
    assertEquals(19, hijriDate.getDayOfMonth());
}

5. Using the UmmalquraCalendar Class

The ummalqura-calendar library provides the UmmalquraCalendar class, which extends Java’s GregorianCalendar class. To include the ummalqura-calendar library, we need to add the following dependency:

<dependency>
    <groupId>com.github.msarhan</groupId>
    <artifactId>ummalqura-calendar</artifactId>
    <version>2.0.2</version>
</dependency>

We’ll use its setTime() method to perform the Gregorian to a Hijri date conversion:

public UmmalquraCalendar usingUmmalquraCalendar(GregorianCalendar gregorianCalendar) throws ParseException {
    UmmalquraCalendar hijriCalendar = new UmmalquraCalendar();
    hijriCalendar.setTime(gregorianCalendar.getTime());
    return hijriCalendar;
}

Now, let’s test this approach:

@Test
void givenGregorianDate_whenUsingUmmalquraCalendar_thenConvertHijriDate() throws ParseException {
    GregorianCalendar gregorianCalenar = new GregorianCalendar(2013, Calendar.MARCH, 31);
    UmmalquraCalendar ummalquraCalendar = GregorianToHijriDateConverter.usingUmmalquraCalendar(gregorianCalenar);
    assertEquals(1434, ummalquraCalendar.get(Calendar.YEAR));
    assertEquals(5, ummalquraCalendar.get(Calendar.MONTH) + 1);
    assertEquals(19, ummalquraCalendar.get(Calendar.DAY_OF_MONTH));
}

6. Conclusion

In this tutorial, we’ve discussed various ways to convert a Gregorian date to a Hijri date.

As always, the code used in the examples is available over on GitHub.

       

Creating Unicode Character From Its Code Point Hex String


1. Overview

Java’s support for Unicode makes it straightforward to work with characters from diverse languages and scripts.

In this tutorial, we’ll explore and learn how to obtain Unicode characters from their code points in Java.

2. Introduction to the Problem

Java’s Unicode support allows us to build internationalized applications quickly. Let’s look at a couple of examples:

static final String U_CHECK = "✅"; // U+2705
static final String U_STRONG = "强"; // U+5F3A

In the example above, both the check mark “✅” and “强” (“Strong” in Chinese) are Unicode characters.

We know that Unicode characters can be represented correctly in Java if our string follows the pattern of an escaped ‘u’ and a hexadecimal number, for example:

String check = "\u2705";
assertEquals(U_CHECK, check);
String strong = "\u5F3A";
assertEquals(U_STRONG, strong);

In some scenarios, we’re given the hexadecimal number after “\u” and need to get the corresponding Unicode character. For instance, the check mark “✅” should be produced when we receive the number “2705” in the string format.

The first idea we might come up with is concatenating “\\u” and the number. However, this doesn’t do the job:

String check = "\\u" + "2705";
assertEquals("\\u2705", check);
String strong = "\\u" + "5F3A";
assertEquals("\\u5F3A", strong);

As the test shows, concatenating “\\u” and a number, such as “2705”, produces a literal string “\\u2705” instead of the check mark “✅”.

Next, let’s explore how to convert the given number to the Unicode string.

3. Understanding the Hexadecimal Number After “\u”

Unicode assigns a unique code point to every character, providing a universal way to represent text across different languages and scripts. A code point is a numerical value that uniquely identifies a character in the Unicode standard.

To create a Unicode character in Java, we need to understand the code point of the desired character. Once we have the code point, we can use Java’s char data type and the escape sequence ‘\u’ to represent the Unicode character.

In the “\uxxxx” notation, “xxxx” is the character’s code point in the hexadecimal representation. For example, the hexadecimal ASCII code of ‘A‘ is 41 (decimal: 65). Therefore, we can get the string “A” if we use the Unicode notation “\u0041”:

assertEquals("A", "\u0041");

So next, let’s see how to get the desired Unicode character from the hexadecimal number after “\u”.

4. Using the Character.toChars() Method

Now we understand what the hexadecimal number after “\u” indicates. When we received “2705,” it was the hexadecimal representation of a character’s code point.

If we get the code point integer, the Character.toChars(int codePoint) method can give us the char array that holds the code point’s Unicode representation. Finally, we can call String.valueOf() to get the target string:

Given "2705"
 |_ Hex(codePoint) = 0x2705
     |_ codePoint = 9989 (decimal)
         |_ char[] chars = Character.toChars(9989) 
            |_ String.valueOf(chars)
               |_"✅"

As we can see, to obtain our target character, we must find the code point first.

The code point integer can be obtained by parsing the provided string in the hexadecimal (base-16) radix using the Integer.parseInt() method. 

So next, let’s put everything together:

int codePoint = Integer.parseInt("2705", 16); // Decimal int: 9989
char[] checkChar = Character.toChars(codePoint);
String check = String.valueOf(checkChar);
assertEquals(U_CHECK, check);
codePoint = Integer.parseInt("5F3A", 16); // Decimal int: 24378
char[] strongChar = Character.toChars(codePoint);
String strong = String.valueOf(strongChar);
assertEquals(U_STRONG, strong);
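
Note that Character.toChars() also handles supplementary code points (above U+FFFF) by returning a surrogate pair. As a quick check (the emoji below is our own example, not part of the original tests):

int emojiCodePoint = Integer.parseInt("1F600", 16); // "😀" lies outside the Basic Multilingual Plane
char[] emojiChars = Character.toChars(emojiCodePoint);
assertEquals(2, emojiChars.length); // a surrogate pair
assertEquals("😀", String.valueOf(emojiChars));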

It’s worth noting that if we work with Java 11 or a later version, we can get the string directly from the code point integer using the Character.toString() method, for example:

// For Java 11 and later versions:
assertEquals(U_STRONG, Character.toString(codePoint));

Of course, we can wrap the implementation above in a method:

String stringFromCodePointHex(String codePointHex) {
    int codePoint = Integer.parseInt(codePointHex, 16);
    // For Java 11 and later versions: return Character.toString(codePoint)
    char[] chars = Character.toChars(codePoint);
    return String.valueOf(chars);
}

Finally, let’s test the method to make sure it produces the expected result:

assertEquals("A", stringFromCodePointHex("0041"));
assertEquals(U_CHECK, stringFromCodePointHex("2705"));
assertEquals(U_STRONG, stringFromCodePointHex("5F3A"));

5. Conclusion

In this article, we first learned the significance of “xxxx” in the “\uxxxx” notation, then explored how to obtain the target Unicode string from the hexadecimal representation of a given code point.

As always, the complete source code for the examples is available over on GitHub.

       

N+1 Problem in Hibernate and Spring Data JPA


1. Overview

Spring Data JPA and Hibernate provide a powerful toolset for seamless database communication. However, because clients delegate more control to the frameworks, the resulting generated queries might be far from optimal.

In this tutorial, we’ll review the common N+1 problem that arises when using Spring Data JPA and Hibernate. We’ll check different situations that might cause the problem.

2. Social Media Platform

To visualize the issue better, we need to outline the relationships between entities. Let’s take a simple social network platform as an example. There’ll be only Users and Posts.

We’re using Iterable in the diagrams, and we’ll provide concrete implementations for each example: List or Set.

To test the number of requests, we’ll use a dedicated library instead of checking the logs. However, we’ll refer to logs to better understand the structure of the requests.
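
The article doesn’t name the counting library, but one commonly used option (an assumption on our part) is Vlad Mihalcea’s db-util, whose SQLStatementCountValidator counts statements through a proxied DataSource and provides the assertSelectCount() assertion used in the tests below:

// Assumed static imports for the assertions used in the following tests
import static com.vladmihalcea.sql.SQLStatementCountValidator.assertSelectCount;
import static com.vladmihalcea.sql.SQLStatementCountValidator.reset;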

Unless mentioned explicitly, each example assumes the default fetch type for relationships: all to-one relationships are fetched eagerly, and to-many relationships lazily. Also, the code examples use Lombok to reduce the noise in the code.

3. N+1 Problem

The N+1 problem is the situation where a single request, for example fetching all Users, is followed by an additional request for each User to get its related information. Although this problem is often connected to lazy loading, that’s not always the case.

We can get this issue with any type of relationship. However, it usually arises from many-to-many or one-to-many relationships.

3.1. Lazy Fetch

First of all, let’s see how lazy loading might cause the N+1 problem. We’ll consider the following example:

@Entity
public class User {
    @Id
    private Long id;
    private String username;
    private String email;
    @OneToMany(cascade = CascadeType.ALL, mappedBy = "author")
    protected List<Post> posts;
    // constructors, getters, setters, etc.
}

Users have one-to-many relationships with the Posts. This means that each User has multiple Posts. We didn’t explicitly identify the fetch strategy for the fields. The strategy is inferred from the annotations. As was mentioned previously, @OneToMany has lazy fetch by default:

@Target({METHOD, FIELD}) 
@Retention(RUNTIME)
public @interface OneToMany {
    Class targetEntity() default void.class;
    CascadeType[] cascade() default {};
    FetchType fetch() default FetchType.LAZY;
    String mappedBy() default "";
    boolean orphanRemoval() default false;
}

If we’re trying to get all the Users, lazy fetch won’t pull more information than we accessed:

@Test
void givenLazyListBasedUser_WhenFetchingAllUsers_ThenIssueOneRequests() {
    getUserService().findAll();
    assertSelectCount(1);
}

Thus, to get all Users, we’ll issue a single request. Let’s try to access Posts. Hibernate will issue an additional request because the information wasn’t fetched beforehand. For a single User, it means two requests overall:

@ParameterizedTest
@ValueSource(longs = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10})
void givenLazyListBasedUser_WhenFetchingOneUser_ThenIssueTwoRequest(Long id) {
    getUserService().getUserByIdWithPredicate(id, user -> !user.getPosts().isEmpty());
    assertSelectCount(2);
}
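
The service method itself isn’t shown in the article; a minimal sketch of how it might look (assuming a Spring Data UserRepository — the names here are ours) is:

public Optional<User> getUserByIdWithPredicate(Long id, Predicate<User> predicate) {
    // findById() issues the first SELECT; evaluating the predicate touches the
    // lazy posts collection and triggers the second one
    return userRepository.findById(id).filter(predicate);
}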

The getUserByIdWithPredicate(Long, Predicate) method filters the Users, but its main goal in the tests is to trigger the loading. We’ll have 1+1 requests, but if we scale it, we’ll get the N+1 problem:

@Test
void givenLazyListBasedUser_WhenFetchingAllUsersCheckingPosts_ThenIssueNPlusOneRequests() {
    int numberOfRequests = getUserService().countNumberOfRequestsWithFunction(users -> {
        List<List<Post>> usersWithPosts = users.stream()
          .map(User::getPosts)
          .filter(List::isEmpty)
          .toList();
        return users.size();
    });
    assertSelectCount(numberOfRequests + 1);
}

We should be careful about lazy fetch. In some cases, lazy loading makes sense to reduce the data we get from a database. However, if we’re accessing lazily-fetched information in most cases, we might increase the volume of requests. To make the best judgment, we must investigate the access patterns.

3.2. Eager Fetch

In most cases, eager loading can help us with the N+1 problem. However, the result depends on the relationships between our entities. Let’s consider a similar User class but with an explicitly set eager fetch:

@Entity
public class User {
    @Id
    private Long id;
    private String username;
    private String email;
    @OneToMany(cascade = CascadeType.ALL, mappedBy = "author", fetch = FetchType.EAGER)
    private List<Post> posts;
    // constructors, getters, setters, etc.
}

If we fetch a single user, the fetch type will force Hibernate to load all the data in a single request:

@ParameterizedTest
@ValueSource(longs = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10})
void givenEagerListBasedUser_WhenFetchingOneUser_ThenIssueOneRequest(Long id) {
    getUserService().getUserById(id);
    assertSelectCount(1);
}

At the same time, the situation with fetching all users changes. We’ll get N+1 straight away whether we want to use the Posts or not:

@Test
void givenEagerListBasedUser_WhenFetchingAllUsers_ThenIssueNPlusOneRequests() {
    List<User> users = getUserService().findAll();
    assertSelectCount(users.size() + 1);
}

Although eager fetch changed how Hibernate pulls the data, it’s hard to call it a successful optimization.

4. Multiple Collections

Let’s introduce Groups in our initial domain.

The Group contains a List of Users:

@Entity
public class Group {
    @Id
    private Long id;
    private String name;
    @ManyToMany
    private List<User> members;
    // constructors, getters, setters, etc.
}

4.1. Lazy Fetch

This relationship would generally behave similarly to the previous examples with lazy fetch. We’ll get a new request for each access to lazily pulled information.

Thus, unless we access users directly, we’ll have a single request:

@Test
void givenLazyListBasedGroup_whenFetchingAllGroups_thenIssueOneRequest() {
    groupService.findAll();
    assertSelectCount(1);
}
@ParameterizedTest
@ValueSource(longs = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10})
void givenLazyListBasedGroup_whenFetchingOneGroup_thenIssueOneRequest(Long groupId) {
    Optional<Group> group = groupService.findById(groupId);
    assertThat(group).isPresent();
    assertSelectCount(1);
}

However, it would create the N+1 problem if we try to access each User in a group:

@Test
void givenLazyListBasedGroup_whenFilteringGroups_thenIssueNPlusOneRequests() {
    int numberOfRequests = groupService.countNumberOfRequestsWithFunction(groups -> {
        groups.stream()
          .map(Group::getMembers)
          .flatMap(Collection::stream)
          .collect(Collectors.toSet());
        return groups.size();
    });
    assertSelectCount(numberOfRequests + 1);
}

The countNumberOfRequestsWithFunction(ToIntFunction) method counts the requests and also triggers lazy loading.
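
This helper isn’t shown in the article either; a sketch of what it might do (our assumption) is to reset the statement counter, load all Groups, and apply the given function, whose body then triggers the lazy loading:

public int countNumberOfRequestsWithFunction(ToIntFunction<List<Group>> function) {
    reset();                                        // start counting statements from zero
    List<Group> groups = groupRepository.findAll(); // the "+1" query
    return function.applyAsInt(groups);             // touching members issues the remaining N queries
}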

4.2. Eager Fetch

Let’s check the behavior with eager fetch. While requesting a single group, we’ll get the following result:

@ParameterizedTest
@ValueSource(longs = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10})
void givenEagerListBasedGroup_whenFetchingOneGroup_thenIssueNPlusOneRequests(Long groupId) {
    Optional<Group> group = groupService.findById(groupId);
    assertThat(group).isPresent();
    assertSelectCount(1 + group.get().getMembers().size());
}

It’s reasonable, as we need to get the information for each User eagerly. At the same time, when we get all groups, the number of requests jumps significantly:

@Test
void givenEagerListBasedGroup_whenFetchingAllGroups_thenIssueNPlusMPlusOneRequests() {
    List<Group> groups = groupService.findAll();
    Set<User> users = groups.stream().map(Group::getMembers).flatMap(List::stream).collect(Collectors.toSet());
    assertSelectCount(groups.size() + users.size() + 1);
}

We need to get the information about Users, and then, for each User, we fetch their Posts. Technically, we have an N+M+1 situation. Thus, neither lazy nor eager fetch entirely resolved the problem.

4.3. Using Set

Let’s approach this situation differently. Let’s replace Lists with Sets. We’ll be using eager fetch, as lazy Sets and Lists behave similarly:

@Entity
public class Group {
    @Id
    private Long id;
    private String name;
    @ManyToMany(fetch = FetchType.EAGER)
    private Set<User> members;
    // constructors, getters, setters, etc.
}
@Entity
public class User {
    @Id
    private Long id;
    private String username;
    private String email;
    @OneToMany(cascade = CascadeType.ALL, mappedBy = "author", fetch = FetchType.EAGER)
    protected Set<Post> posts;
    // constructors, getters, setters, etc.
}
@Entity
public class Post {
    @Id
    private Long id;
    @Lob
    private String content;
    @ManyToOne
    private User author;
    // constructors, getters, setters, etc.
}

Let’s run similar tests to see if this makes any difference:

@ParameterizedTest
@ValueSource(longs = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10})
void givenEagerSetBasedGroup_whenFetchingOneGroup_thenCreateCartesianProductInOneQuery(Long groupId) {
    groupService.findById(groupId);
    assertSelectCount(1);
}

We resolved the N+1 problem while getting a single Group. Hibernate fetched Users and their Posts in one request. Also, getting all Groups has decreased the number of requests, but it’s still N+1:

@Test
void givenEagerSetBasedGroup_whenFetchingAllGroups_thenIssueNPlusOneRequests() {
    List<Group> groups = groupService.findAll();
    assertSelectCount(groups.size() + 1);
}

Although we partially solved the problem, we created another one. Hibernate now uses several JOINs, creating a Cartesian product:

SELECT g.id, g.name, gm.interest_group_id,
       u.id, u.username, u.email,
       p.id, p.author_id, p.content
FROM group g
         LEFT JOIN (group_members gm JOIN user u ON u.id = gm.members_id)
                   ON g.id = gm.interest_group_id
         LEFT JOIN post p ON u.id = p.author_id
WHERE g.id = ?

The query might become overly complex and, with many dependencies between objects, pull a huge chunk of a database.

Due to the nature of Sets, Hibernate can safely discard the duplicate rows in the result set, knowing they all come from the Cartesian product. This isn’t possible with Lists, so when using them, the data has to be fetched in separate requests to preserve its integrity.

Most relationships align with the Set invariants. It makes little sense to allow Users to have several identical Posts. At the same time, we can provide a fetch mode explicitly instead of relying on default behavior.
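
One such explicit hint (our own illustration, not code from the article) is Hibernate’s @Fetch annotation. With FetchMode.SUBSELECT, the members of all selected Groups are loaded with a single additional query instead of one query per Group:

@ManyToMany(fetch = FetchType.EAGER)
@Fetch(FetchMode.SUBSELECT)
private Set<User> members;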

5. Tradeoffs

Picking a fetch type might help reduce the number of requests in simple cases. However, using simple annotations, we have limited control over query generation. Also, it’s done transparently, and small changes in the domain model might create a dramatic impact.

The best way to address the issue is to observe the system’s behavior and identify the access patterns. Creating separate methods with dedicated SQL or JPQL queries can help tailor the fetching for each case. Also, we can use a fetch mode to hint Hibernate about how to load related entities.
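
For example, a dedicated repository method with a JOIN FETCH query (a sketch based on the entities above, not code from the article) loads Users together with their Posts in a single statement:

public interface UserRepository extends JpaRepository<User, Long> {
    // DISTINCT removes the duplicate User rows produced by the join
    @Query("SELECT DISTINCT u FROM User u LEFT JOIN FETCH u.posts")
    List<User> findAllWithPosts();
}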

Adding simple tests can help with unintended changes in the model. This way, we can ensure that new relationships won’t create the Cartesian product or N+1 problem.

6. Conclusion

While eager fetch type can mitigate some simple issues with additional queries, it might cause other issues. It’s necessary to test the application to ensure its performance.

Different combinations of fetch types and relationships can often produce an unexpected result. That’s why it’s better to cover crucial parts with tests.

As usual, all the code from this tutorial is available over on GitHub.

       

Regular Expression for Password Validation in Java


1. Introduction

In cybersecurity, password validation is essential for protecting users’ accounts. Moreover, using regular expressions (regex) in Java provides a powerful and dynamic way of imposing specific standards for password complexity.

In this tutorial, we’ll delve into utilizing the regex for Java-based password validation processes.

2. Criteria for a Robust Password

Before we get into the code, we’ll establish what makes a strong password. An ideal password should:

  • Contain between eight and 20 characters
  • Include at least one capital letter
  • Include at least one lowercase letter
  • Contain at least one digit
  • Include at least one special symbol (e.g., @, #, $, %)
  • Not contain spaces, tabs, or other whitespace

3. Implementation in Java

3.1. Regular Expression-based Password Validation

Regular expressions, or regex, are useful tools in Java that allow searching, matching, and transforming strings based on certain patterns. For password validation, this first approach is static: it checks the password against a single predefined regular expression.

The following Java regular expression encapsulates the specified requirements:

^(?=.*[0-9])(?=.*[a-z])(?=.*[A-Z])(?=.*[@#$%^&+=])(?=\\S+$).{8,20}$

Breaking down its components:

  • ^: indicates the string’s beginning
  • (?=.*[0-9]): requires at least one digit
  • (?=.*[a-z]): makes sure that there is at least one lowercase letter
  • (?=.*[A-Z]): needs at least one capital letter
  • (?=.*[@#$%^&+=]): guarantees at least one special symbol
  • (?=\\S+$): disallows whitespace characters
  • .{8,20}: imposes the minimum length of 8 characters and the maximum length of 20 characters
  • $: terminates the string

Let’s use regex for password validation:

@Test
public void givenStringPassword_whenUsingRegularExpressions_thenCheckIfPasswordValid() {
    String regExpn = "^(?=.*[0-9])(?=.*[a-z])(?=.*[A-Z])(?=.*[@#$%^&+=])(?=\\S+$).{8,20}$";
    Pattern pattern = Pattern.compile(regExpn);
    Matcher matcher = pattern.matcher(password);
    assertTrue(matcher.matches());
}

Here, we characterize the regExpn regular expression, which specifies certain rules for a password. Besides, we compile the regExpn regular expression into a pattern using Pattern.compile() method and then create a matcher for the given password through the pattern.matcher() method.

Lastly, we utilize the matcher.matches() method to determine if the password meets the regExpn regular expression.
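
For instance, with this pattern, a password that satisfies all the rules matches, while one missing a digit doesn’t (the sample passwords are our own):

String regExpn = "^(?=.*[0-9])(?=.*[a-z])(?=.*[A-Z])(?=.*[@#$%^&+=])(?=\\S+$).{8,20}$";
assertTrue("Str0ng@Pass".matches(regExpn));
assertFalse("Weak@Password".matches(regExpn)); // no digit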

3.2. Dynamic Password Validation

This approach presents a dynamic password verification method that enables the creation of a pattern based on different attributes. This technique involves an arbitrary pattern, including a minimum/maximum length, special symbols, and other elements.

Let’s implement this approach:

@Test
public void givenStringPassword_whenUsingDynamicPasswordValidationRules_thenCheckIfPasswordValid() {
    try {
        if (password != null) {
            String MIN_LENGTH = "8";
            String MAX_LENGTH = "20";
            boolean SPECIAL_CHAR_NEEDED = false;
            String ONE_DIGIT = "(?=.*[0-9])";
            String LOWER_CASE = "(?=.*[a-z])";
            String UPPER_CASE = "(?=.*[A-Z])";
            String SPECIAL_CHAR = SPECIAL_CHAR_NEEDED ? "(?=.*[@#$%^&+=])" : "";
            String NO_SPACE = "(?=\\S+$)";
            String MIN_MAX_CHAR = ".{" + MIN_LENGTH + "," + MAX_LENGTH + "}";
            String PATTERN = ONE_DIGIT + LOWER_CASE + UPPER_CASE + SPECIAL_CHAR + NO_SPACE + MIN_MAX_CHAR;
            assertTrue(password.matches(PATTERN));
        }
    } catch (Exception ex) {
        ex.printStackTrace();
        fail("Exception occurred: " + ex.getMessage());
    }
}

Here, we first ensure that the password isn’t null before carrying on with validation. Then, the method defines the validation criteria as individual strings, stipulating requirements such as the presence of at least one digit, one lowercase letter, and one uppercase letter, with special characters optional.

Moreover, we use the MIN_MAX_CHAR string to establish the password’s minimum and maximum length limits, based on the MIN_LENGTH and MAX_LENGTH constants. Afterward, the composite PATTERN string concatenates all the indicated prerequisites into a dynamic validation pattern.

Finally, we utilize the assertTrue(password.matches(PATTERN)) method to verify the password’s compliance with the dynamically created pattern. If exceptions occur during validation, the test is considered failed; details of the exception are printed for debugging purposes.

This approach provides the flexibility to set password validation rules by changing parameters, which makes it appropriate for different validators.

4. Conclusion

In summary, Java regular expressions are a reliable mechanism for performing text validation and manipulation, particularly when it involves the application of strong password security.

Hence, in this article, we gave a concise, step-by-step guide for constructing an appropriate regular expression to validate passwords, providing a foundation that can be adapted to increase safety during user account creation.

As always, the complete code samples for this article can be found over on GitHub.

       

Convert Long to Date in Java


1. Overview

When working with dates in Java, we often see date/time values expressed as long values that denote the number of days, seconds, or milliseconds since the epoch, January 1, 1970, 00:00:00 GMT.

In this short tutorial, we’ll explore different ways of converting a long value to a date in Java. First, we’ll explain how to do this using core JDK classes. Then, we’ll showcase how to achieve the same objective using the third-party Joda-Time library.

2. Using the Java 8+ Date-Time API

Java 8 is often praised for the new Date-Time API feature it brought to the Java landscape. This API was introduced mainly to cover the drawbacks of the old date API. So, let’s take a close look at what this API provides to answer our central question.

2.1. Using the Instant Class

The easiest solution would be using the Instant class introduced in the new Java 8 date-time API. This class describes a single instantaneous point on the timeline.

So, let’s see it in practice:

@Test
void givenLongValue_whenUsingInstantClass_thenConvert() {
    Instant expectedDate = Instant.parse("2020-09-08T12:16:40Z");
    long seconds = 1599567400L;
    Instant date = Instant.ofEpochSecond(seconds);
    assertEquals(expectedDate, date);
}

As shown above, we used the ofEpochSecond() method to create an object of the Instant class. Please bear in mind that we can use the ofEpochMilli() method as well to create an Instant instance using milliseconds.
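
For example (our own variation on the test above), the same instant can be created from the equivalent number of milliseconds:

Instant fromMillis = Instant.ofEpochMilli(1599567400000L);
assertEquals(Instant.parse("2020-09-08T12:16:40Z"), fromMillis);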

2.2. Using the LocalDate Class

LocalDate is another option to consider when converting a long value to a date. This class models a classic date, such as 2023-10-17, without the time detail.

Typically, we can use the LocalDate#ofEpochDay method to achieve our objective:

@Test
void givenLongValue_whenUsingLocalDateClass_thenConvert() {
    LocalDate expectedDate = LocalDate.of(2023, 10, 17);
    long epochDay = 19647L;
    LocalDate date = LocalDate.ofEpochDay(epochDay);
    assertEquals(expectedDate, date);
}

The ofEpochDay() method creates an instance of the LocalDate class from the given epoch day.
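
Should we need the opposite conversion (our own addition), LocalDate#toEpochDay returns the epoch-day count as a long:

assertEquals(19647L, LocalDate.of(2023, 10, 17).toEpochDay());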

3. Using the Legacy Date API

Before Java 8, we would usually use the Date or Calendar classes from the java.util package to achieve our objective. So, let’s see how to use these two classes to convert a long value to a date.

3.1. Using the Date Class

The Date class denotes a specific instant in time with millisecond precision. As the name indicates, it comes with a host of methods that we can use to manipulate dates. It offers the easiest way to convert a long value to a date as it provides an overloaded constructor that accepts a parameter of long type.

So, let’s see it in action:

@Test
void givenLongValue_whenUsingDateClass_thenConvert() throws ParseException {
    SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
    dateFormat.setTimeZone(TimeZone.getTimeZone("UTC"));
    Date expectedDate = dateFormat.parse("2023-10-15 22:00:00");
    long milliseconds = 1689458400000L;
    Date date = new Date(milliseconds);
    assertEquals(expectedDate, date);
}

Please note that the Date class is outdated and belongs to the old API. So, it’s not the best way to go when working with dates.

3.2. Using the Calendar Class

Another solution would be to use the Calendar class from the old date API. This class provides the setTimeInMillis(long value) method that we can use to set the time to the given long value.

Now, let’s exemplify the use of this method using another test case:

@Test
void givenLongValue_whenUsingCalendarClass_thenConvert() throws ParseException {
    SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
    dateFormat.setTimeZone(TimeZone.getTimeZone("UTC"));
    Date expectedDate = dateFormat.parse("2023-07-15 22:00:00");
    long milliseconds = 1689458400000L;
    Calendar calendar = Calendar.getInstance();
    calendar.setTimeZone(TimeZone.getTimeZone("UTC"));
    calendar.setTimeInMillis(milliseconds);
    assertEquals(expectedDate, calendar.getTime());
}

Similarly, the specified long value denotes the number of milliseconds passed since the epoch.

4. Using Joda-Time

Lastly, we can use the Joda-Time library to tackle our challenge. First, let’s add its dependency to the pom.xml file:

<dependency>
    <groupId>joda-time</groupId>
    <artifactId>joda-time</artifactId>
    <version>2.12.5</version>
</dependency>

Similarly, Joda-Time provides its version of the LocalDate class. So, let’s see how we can use it to convert a long value to a LocalDate object:

@Test
void givenLongValue_whenUsingJodaTimeLocalDateClass_thenConvert() {
    org.joda.time.LocalDate expectedDate = new org.joda.time.LocalDate(2023, 7, 15);
    long milliseconds = 1689458400000L;
    org.joda.time.LocalDate date = new org.joda.time.LocalDate(milliseconds, DateTimeZone.UTC);
    assertEquals(expectedDate, date);
}

As illustrated, LocalDate offers a direct way to construct a date from a long value.

5. Conclusion

In this short article, we explained in detail how to convert a long value to a date in Java.

First, we’ve seen how to do the conversion using built-in JDK classes. Then, we illustrated how to accomplish the same goal using the Joda-Time library.

As always, the code used in this article can be found over on GitHub.

       

Getting Started With TimescaleDB


1. Overview

In this article, we’ll explore TimescaleDB, an open-source time-series database built on top of PostgreSQL. We’ll delve into its features, examine its capabilities, and discuss how to effectively interact with this database.

2. What Is TimescaleDB

TimescaleDB is an open-source database extension for PostgreSQL, designed to handle time-series data effectively. It extends PostgreSQL’s capabilities to provide dedicated features for time-series data including automated time partitioning, optimized indexing, and compression.

Let’s have a brief look at some of its key features:

  • Hypertables are a key concept in TimescaleDB. They are PostgreSQL tables that automatically partition data by time.
  • It supports continuous aggregates, allowing us to pre-compute and store aggregates. It speeds up query performance by avoiding the need to compute aggregates dynamically while querying.
  • It employs advanced compression techniques to minimize storage requirements while maintaining query performance.
  • TimescaleDB uses multi-dimensional indexing and time-based indexing to speed up queries, making it well-suited for time-series data.

3. Installation

To get started with TimescaleDB, we first need to make sure to have PostgreSQL installed on our machine. We can install PostgreSQL using the package manager of our operating system, or by downloading it from PostgreSQL’s website.

After successfully installing PostgreSQL, we can proceed to install TimescaleDB. On macOS, we can install it using Homebrew (which needs to be installed first):

brew tap timescale/tap
brew install timescaledb

4. Using TimescaleDB

4.1. Initializing TimescaleDB

Upon successful installation, let’s connect to our PostgreSQL instance and run the query to initialize the TimescaleDB instance:

CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;

This query sets up the necessary extension and prepares our PostgreSQL environment for efficient time-series data handling.

4.2. Creating a Hypertable

TimescaleDB introduces the concept of Hypertable, which is a partitioned table optimized for time-series storage. A hypertable is always partitioned on time, since it’s intended for time-series data, and possesses the flexibility to partition on additional columns as well. We can use one of the supported data types for partitioning: date, smallint, int, bigint, timestamp, and timestamptz.

Creating a hypertable involves creating a table first and subsequently converting it to a hypertable. Let’s first create a regular table:

CREATE TABLE employee(
    id int8 NOT NULL,
    name varchar(255) NOT NULL,
    login_time timestamp,
    logout_time timestamp);

Now, let’s convert it into a hypertable:

SELECT create_hypertable('employee', by_range('login_time'));

The required parameters of create_hypertable() are the table name employee and the dimension builder by_range(‘login_time’). The dimension builder defines which column we want to use to partition the table.

In this case, we could skip the dimension builder because create_hypertable() automatically assumes that a single specified column is range-partitioned by time by default.

For existing data in the table, we can include the migrate_data option for migration:

SELECT create_hypertable('employee', by_range('login_time'), migrate_data => true);

4.3. Inserting and Querying Data

With the hypertable in place, we can start adding data. TimescaleDB provides effective mechanisms for handling large volumes of timestamped information. Let’s perform the insertion:

INSERT INTO employee values('1', 'John', '2023-01-01 09:00:00', '2023-01-01 18:30:00');
INSERT INTO employee values('2', 'Sarah', '2023-01-02 08:45:12', '2023-01-02 18:10:10');

One of the compelling features of TimescaleDB is its capability to perform efficient queries on time-series data using standard SQL. With our data successfully populated, let’s proceed to query it:

SELECT * FROM employee WHERE login_time BETWEEN '2023-01-01 00:00:00' AND '2023-01-01 12:00:00';

This query retrieves data for a specific time range, showcasing the ease of analyzing and visualizing time-series data with TimescaleDB.

5. Conclusion

In this article, we presented a quick overview of TimescaleDB, delving into the fundamental steps for setting it up and querying data. TimescaleDB efficiently handles large time-series datasets, making it invaluable for robust data management in various applications.

       

Print a Java 2D Array


1. Overview

In this tutorial, we’ll get familiar with some ways to print 2D arrays, along with their time and space complexity.

2. Common Ways to Print a 2D Array

Java, a versatile programming language, offers multiple methods for handling and manipulating arrays. Specifically, 2D arrays provide a convenient way to organize and store data in a grid-like structure. Printing a 2D array constitutes a common operation, and Java presents several approaches to accomplish this task.

2.1. Using Nested Loops

The most straightforward method involves using nested loops to iterate through the rows and columns of the 2D array. This method is simple and intuitive, making it an excellent choice for basic array printing. Let’s look into the implementation:

int[][] myArray = { { 1, 2, 3 }, { 4, 5, 6 }, { 7, 8, 9 } };
for (int i = 0; i < myArray.length; i++) {
    for (int j = 0; j < myArray[i].length; j++) {
        System.out.print(myArray[i][j] + " ");
    }
}

Advantages:

  • Simple and easy to understand
  • Doesn’t necessitate extra libraries or functionalities

Disadvantages:

  • If prioritizing code brevity, this may not be the optimal selection

Time Complexity: O(m * n), where ‘m’ is the number of rows and ‘n’ is the number of columns in the 2D array

Space Complexity: O(1), constant space as no additional data structures are used

2.2. Using Arrays.deepToString()

For simplicity and conciseness, Java provides the Arrays.deepToString() method, which enables printing 2D arrays directly. This method manages nested arrays and furnishes a compact representation of the array contents. Let’s delve into the implementation:

int[][] myArray = { {1, 2, 3}, {4, 5, 6}, {7, 8, 9} };
System.out.println(Arrays.deepToString(myArray));
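
For the array above, this prints the nested arrays in a single bracketed line: [[1, 2, 3], [4, 5, 6], [7, 8, 9]].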

Advantages:

  • Offers conciseness and demands minimal code
  • Appropriate for swift debugging or when accepting a compact output format

Disadvantages:

  • Generates a new string representation of the entire array, potentially less efficient in terms of space complexity for very large arrays
  • Lacks control over the array’s formatting and depends on the implementation of the elements’ toString() method

Time Complexity: O(m * n)

Space Complexity: O(m * n), due to the creation of a new string representation of the entire 2D array

2.3. Using Java 8 Streams

For a more modern approach, Java 8 introduced streams, allowing concise and expressive code. The Arrays.stream() method can be employed to flatten the 2D array, and then forEach() is used to print the elements. Let’s look into the implementation:

int[][] myArray = { {1, 2, 3}, {4, 5, 6}, {7, 8, 9} };
Arrays.stream(myArray)
  .flatMapToInt(Arrays::stream)
  .forEach(num -> System.out.print(num + " "));

Advantages:

  • Embraces modernity and expressiveness
  • Employs concise code that utilizes Java 8 features

Disadvantages:

  • May be deemed more advanced and could be less readable for individuals unfamiliar with Java 8 streams

Time Complexity: O(m * n)

Space Complexity: O(1), constant space as no additional data structures are used

2.4. Using Arrays.toString()

This method is used to convert each row of the 2D array into a string representation and then print each row. This approach provides a clean and concise output. Let’s look into the implementation:

int[][] myArray = { {1, 2, 3}, {4, 5, 6}, {7, 8, 9} };
for (int[] row : myArray) {
    System.out.println(Arrays.toString(row));
}

Advantages:

  • Does not create additional data structures like lists or streams, resulting in a more memory-efficient solution
  • Straightforward implementation, requiring minimal code to achieve the desired output

Disadvantages:

  • It generates a new string representation of each row, which might be less efficient in terms of space complexity for arrays with a large number of columns.
  • We lack control over how the array is formatted, and it depends on the implementation of the toString method of the elements.

Time Complexity: O(m * n)

Space Complexity: O(n), due to the creation of a new string representation of each row

It’s important to note that all these approaches have a time complexity of O(m * n) because to print the entire 2D array, we must visit each element at least once. The space complexity varies slightly based on whether we create additional data structures, such as strings for representation. In general, these complexities are quite reasonable for typical use cases, and the choice of method can depend on factors like code readability, simplicity, and specific project requirements.

3. Conclusion

In conclusion, the choice of the “best” approach depends on your specific requirements and coding preferences. For most general use cases, the nested loops approach strikes a good balance between simplicity and efficiency. However, for scenarios where conciseness or customization is a priority, other methods might be more suitable. Java offers flexibility to meet the diverse needs of developers. Choose the method that best fits your coding style and the requirements of your project.

As usual, the source code for all of these examples is available over on GitHub.

       