
The Difference Between mockito-core and mockito-all


1. Overview

Mockito is a popular mocking framework for Java. But before we start using it, we have a few different artifacts to choose from.

In this quick tutorial, we'll explore the difference between mockito-core and mockito-all. Afterward, we'll be able to choose the right one.

2. mockito-core

The mockito-core artifact is Mockito's main artifact. Specifically, it contains both the API and the implementation of the library.

We can obtain the artifact by adding the dependency to our pom.xml:

<dependency>
    <groupId>org.mockito</groupId>
    <artifactId>mockito-core</artifactId>
    <version>3.3.3</version>
</dependency>

At this point, we can already start using Mockito.
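
For instance, here's a minimal sketch of what using the library could look like at this point. It assumes JUnit 5 is also on the test classpath; the mocked List and the stubbed value are just placeholders:

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import java.util.List;
import org.junit.jupiter.api.Test;

class MockitoSmokeUnitTest {

    @Test
    void whenStubbingAMock_thenStubbedValueIsReturned() {
        // create a mock, stub a call, and verify the interaction
        List<String> mockedList = mock(List.class);
        when(mockedList.get(0)).thenReturn("hello");

        assertEquals("hello", mockedList.get(0));
        verify(mockedList).get(0);
    }
}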

3. mockito-all

Of course, mockito-core has some dependencies, like hamcrest and objenesis, that Maven downloads separately, but mockito-all is an outdated dependency that bundles Mockito as well as its required dependencies.

To verify this, let's look inside the mockito-all.jar to see the packages it contains:

mockito-all.jar
|-- org
|   |-- hamcrest
|   |-- mockito
|   |-- objenesis

The latest GA version of mockito-all is a 1.x version released in 2014. Newer versions of Mockito don't release mockito-all anymore.

The maintainers released this dependency as a simplification for developers who didn't have a build tool with dependency management.

4. Conclusion

As we explored above, mockito-core is the main artifact of Mockito. Newer versions don't release mockito-all anymore. Therefore, we should only use mockito-core.


Guide to AtomicStampedReference in Java


1. Overview

In this tutorial, we'll review details of the AtomicStampedReference class from the java.util.concurrent.atomic package.

After that, we'll explore the AtomicStampedReference class's atomic operations and how we can use them to perform A-B-A detection.

2. Why Do We Need AtomicStampedReference?

First, AtomicStampedReference provides us with both an object reference variable and a stamp that we can read and write atomically. We can think of the stamp a bit like a timestamp or a version number.

What does AtomicStampedReference do for us, though, that AtomicReference and AtomicMarkableReference don't already do?

Consider a bank account that has two pieces of data: A balance and a last modified date. The last modified date is updated any time the balance is altered. By observing this last modified date, we can know the account has been updated.

Simply put, adding a stamp allows us to detect when another thread has changed the shared reference from the original reference A to a new reference B, and back to the original reference A. This is referred to as the A-B-A problem.

This can be especially handy in lock-free data structures since they rely heavily on compare-and-swap (CAS) operations. However, it's helpful in any situation when we need to know if a reference has been altered since our last read, like optimistically locking a database record.

3. How Does AtomicStampedReference Help Detect A-B-A?

AtomicStampedReference provides both an object reference and a stamp to indicate if the object reference has been modified. A thread changes both the reference and the stamp atomically.

Now, if another thread modifies the reference from A to B and then back to A, we can detect this situation because the stamp will have changed.

So, let's see this in action.

3.1. Reading a Value and Its Stamp

First, let's imagine that our reference is holding onto an account balance:

AtomicStampedReference<Integer> account = new AtomicStampedReference<>(100, 0);

Note that we've supplied the balance, 100, and a stamp, 0.

To access the balance, we'll want to get both the balance and the stamp. We can achieve this with a non-zero-length array that will hold the stamp:

int[] holder = new int[1];
int balance = account.get(holder);
int stamp = holder[0];

We pass that array to AtomicStampedReference#get to atomically obtain both the current account balance and the stamp.

3.2. Changing a Value and Its Stamp

Now, let's review how to set the value of an AtomicStampedReference atomically.

If we want to change the account's balance, we need to change both the balance and the stamp:

int newStamp = stamp + 1;
if (!account.compareAndSet(balance, balance + 100, stamp, newStamp)) {
    // retry
}

The compareAndSet method returns a boolean indicating success or failure. A failure means that either the balance or the stamp has changed since we last read it. Thus, we've abstracted away the notion of “latest value” from the reference itself.

Finally, let's encapsulate our account logic into a class:

public class StampedAccount {

    AtomicInteger stamp = new AtomicInteger(0);
    AtomicStampedReference<Integer> account = new AtomicStampedReference<>(100, 0);

    public int getBalance() {
        return this.account.get(new int[1]);
    }

    public int getStamp() {
        int[] stamps = new int[1];
        this.account.get(stamps);
        return stamps[0];
    }

    public boolean deposit(int funds) {
        int[] holder = new int[1];
        int balance = this.account.get(holder);
        int newStamp = this.stamp.incrementAndGet();
        return this.account.compareAndSet(balance, balance + funds, holder[0], newStamp);
    }

    public boolean withdrawal(int funds) {
        int[] holder = new int[1];
        int current = this.account.get(holder);
        int newStamp = this.stamp.incrementAndGet();
        return this.account.compareAndSet(current, current - funds, holder[0], newStamp);
    }
}

The nice thing about what we've just written is that we can know, before withdrawing or depositing, that no other thread has altered the balance since our last read, even if it was changed and then set back to its original value.

For example, consider the following thread interleaving:

The balance is set to $100. Thread 1 runs deposit(100) up to the following point:

int[] holder = new int[1];
int balance = this.account.get(holder);
int newStamp = this.stamp.incrementAndGet(); 
// Thread 1 is paused here

meaning the deposit has not yet completed.

Then, thread 2 runs deposit(100) and withdrawal(100), bringing the balance to $200 and then back to $100.

Finally, thread 1 runs:

return this.account.compareAndSet(balance, balance + 100, holder[0], newStamp);

Thread 1 will successfully detect that some other thread has altered the account balance since its last read, even though the balance itself is the same as it was when thread 1 read it.

3.3. Testing

This scenario is tricky to test since it depends on a very specific thread interleaving. But let's at least write a simple unit test to verify that deposits and withdrawals work:

public class ThreadStampedAccountUnitTest {

    @Test
    public void givenMultiThread_whenStampedAccount_thenSetBalance() throws InterruptedException {
        StampedAccount account = new StampedAccount();
        Thread t = new Thread(() -> {
            while (!account.withdrawal(100))
                Thread.yield();
        });
        t.start();
        Assert.assertTrue(account.deposit(100));
        t.join(1_000);
        Assert.assertFalse(t.isAlive());
        Assert.assertEquals(0, account.getBalance());
    }
}

3.4. Choosing the Next Stamp

Semantically, the stamp is like a timestamp or a version number, so it's typically always increasing. It's also possible to use a random number generator, as long as the same stamp value isn't produced twice.

The reason for this is that, if the stamp can be changed back to a value it held previously, this could defeat the purpose of AtomicStampedReference. AtomicStampedReference itself doesn't enforce this constraint, so it's up to us to follow this practice.

4. Conclusion

In conclusion, AtomicStampedReference is a powerful concurrency utility that provides both a reference and a stamp that can be read and updated atomically. It was designed for A-B-A detection and should be preferred to other concurrency classes such as AtomicReference where the A-B-A problem is a concern.

As always, we can find the code available over on GitHub.

Mocking the ObjectMapper readValue() Method


1. Overview

When unit testing code that involves deserializing JSON with Jackson, we might find it easier to mock the ObjectMapper#readValue method. By doing so, we don't need to specify long JSON inputs in our tests.

In this tutorial, we're going to see how we can achieve this using Mockito.

2. Maven Dependencies

First of all, as Maven dependencies, we're going to use mockito-core and jackson-databind:

<dependency>
    <groupId>org.mockito</groupId>
    <artifactId>mockito-core</artifactId>
    <version>3.3.3</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.10.3</version>
    <type>bundle</type>
</dependency>

3. An ObjectMapper Example

Let's consider a simple Flower class:

public class Flower {

    private String name;
    private Integer petals;

    public Flower(String name, Integer petals) {
        this.name = name;
        this.petals = petals;
    }

    // default constructor, getters and setters
}

And suppose we have a class for validating a JSON string representation of a Flower object. It takes ObjectMapper as a constructor argument — this makes it easy for us to mock it later:

public class FlowerJsonStringValidator {
    private ObjectMapper objectMapper;

    public FlowerJsonStringValidator(ObjectMapper objectMapper) {
        this.objectMapper = objectMapper;
    }

    public boolean flowerHasPetals(String jsonFlowerAsString) throws JsonProcessingException {
        Flower flower = objectMapper.readValue(jsonFlowerAsString, Flower.class);
        return flower.getPetals() > 0;
    }
}

Next, we'll use Mockito to write unit tests for the validator logic.

4. Unit Testing With Mockito

Let's start by setting up our test class. We can easily mock an ObjectMapper and pass it as a constructor argument to our FlowerJsonStringValidator class:

@ExtendWith(MockitoExtension.class)
public class FlowerJsonStringValidatorUnitTest {

    @Mock
    private ObjectMapper objectMapper;

    private FlowerJsonStringValidator flowerJsonStringValidator;

    @BeforeEach
    public void setUp() {
        flowerJsonStringValidator = new FlowerJsonStringValidator(objectMapper);
    }
 
    ...
}

Note that we're using JUnit 5 in our tests, so we've annotated our test class with @ExtendWith(MockitoExtension.class).

Now that we have our mock ObjectMapper ready to go, let's write a simple test:

@Test
public void whenCallingHasPetalsWithPetals_thenReturnsTrue() throws JsonProcessingException {
    Flower rose = new Flower("testFlower", 100);

    when(objectMapper.readValue(anyString(), eq(Flower.class))).thenReturn(rose);

    assertTrue(flowerJsonStringValidator.flowerHasPetals("this can be a very long json flower"));

    verify(objectMapper, times(1)).readValue(anyString(), eq(Flower.class));
}

Since we're mocking ObjectMapper here, we can ignore its input and focus on its output, which is then passed to the actual validator logic. As we can see, we don't need to specify valid JSON input, which can be very long and difficult in a real-world scenario.

5. Conclusion

In this article, we saw how to mock ObjectMapper to provide efficient test cases around it. Finally, the code can be found over on GitHub.

Encode a String to UTF-8 in Java


1. Overview

When dealing with Strings in Java, sometimes we need to encode them into a specific charset.

This tutorial is a practical guide showing different ways to encode a String to the UTF-8 charset; for a more technical deep-dive see our Guide to Character Encoding.

2. Defining the Problem

To showcase the Java encoding, we'll work with the German String “Entwickeln Sie mit Vergnügen”.

String germanString = "Entwickeln Sie mit Vergnügen";
byte[] germanBytes = germanString.getBytes();

String asciiEncodedString = new String(germanBytes, StandardCharsets.US_ASCII);

assertNotEquals(asciiEncodedString, germanString);

This String, encoded using US_ASCII, gives us the value “Entwickeln Sie mit Vergn?gen” when printed, because US-ASCII doesn't understand the non-ASCII ü character. But when a String uses only English (ASCII) characters, the US_ASCII round trip gives us back the same string:

String englishString = "Develop with pleasure";
byte[] englishBytes = englishString.getBytes();

String asciiEncondedEnglishString = new String(englishBytes, StandardCharsets.US_ASCII);

assertEquals(asciiEncondedEnglishString, englishString);

Let's see what happens when we use the UTF-8 encoding.

3. Encoding With Core Java

Let's start with the core library.

Strings are immutable in Java, which means we cannot change a String's character encoding. To achieve what we want, we need to copy the String's bytes and then create a new String with the desired encoding.

First, we get the String's bytes, and then we create a new String using the retrieved bytes and the desired charset:

String rawString = "Entwickeln Sie mit Vergnügen";
byte[] bytes = rawString.getBytes(StandardCharsets.UTF_8);

String utf8EncodedString = new String(bytes, StandardCharsets.UTF_8);

assertEquals(rawString, utf8EncodedString);

4. Encoding With Java 7 StandardCharsets

Alternatively, we can use the StandardCharsets class introduced in Java 7 to encode the String.

First, we'll encode the String into a ByteBuffer and then decode it back into a String, both using UTF-8:

String rawString = "Entwickeln Sie mit Vergnügen";
ByteBuffer buffer = StandardCharsets.UTF_8.encode(rawString); 

String utf8EncodedString = StandardCharsets.UTF_8.decode(buffer).toString();

assertEquals(rawString, utf8EncodedString);

5. Encoding With Commons-Codec

Besides using core Java, we can alternatively use Apache Commons Codec to achieve the same results.

Apache Commons Codec is a handy package containing simple encoders and decoders for various formats.

First, let's start with the project configuration. When using Maven, we have to add the commons-codec dependency to our pom.xml:

<dependency>
    <groupId>commons-codec</groupId>
    <artifactId>commons-codec</artifactId>
    <version>1.14</version>
</dependency>

Then, in our case, the most interesting class is StringUtils, which provides methods to encode Strings. Using this class, getting a UTF-8 encoded String is pretty straightforward:

String rawString = "Entwickeln Sie mit Vergnügen"; 
byte[] bytes = StringUtils.getBytesUtf8(rawString);
 
String utf8EncodedString = StringUtils.newStringUtf8(bytes);

assertEquals(rawString, utf8EncodedString);

6. Conclusion

Encoding a String into UTF-8 isn't difficult, but it's not that intuitive. This tutorial presented three ways of doing it, using either core Java or Apache Commons Codec.

As always, the code samples can be found over on GitHub.

Introduction to Transactions


1. Introduction

In this tutorial, we'll understand the concept of transactions.

We'll go through the types of transactions and different guarantees they provide. We'll also explore different protocols and algorithms to deal with distributed transactions in heterogeneous environments.

2. What is a Transaction?

In programming, we refer to a transaction as a group of related actions that need to be performed as a single action. In other words, a transaction is a logical unit of work whose effect is visible outside the transaction either in entirety or not at all. We require this to ensure data integrity in our applications.

Let's have a look at an example to understand this better. A typical requirement in event-based architecture is to update the local database and produce an event for consumption by other services.

Here, we'd like these two operations to either happen together or not happen at all. We can achieve this by wrapping them into a single transaction.

We typically refer to components like the database and the message broker as participating resources in a transaction.

3. A Brief History of Transactions

We mostly associate the concept of transactions with relational databases. Hence, the history and evolution of transactions are closely related to those of relational databases as well.

We largely attribute the introduction of the relational model of data to Edgar F. Codd, who published his seminal paper on this subject back in 1970.

3.1. Earlier Transaction Models

The ease and flexibility of relational databases made them commonplace. This brought the complexities of large multi-user, concurrently accessible systems. Soon, it was realized that some consistency enforcement was necessary.

This gave birth to the ACID properties. Transactions adhering to ACID properties are guaranteed to be atomic and serializable. A transaction processing system is responsible for ensuring the ACID properties. This worked very well for flat transactions with short execution time, fewer concurrent users, and a single database system.

But soon, as the demand started to surge, the complexities started to grow as well. Applications started to require long-living and complex transactions. This resulted in complex transaction models like sub-transactions and transaction groups.

These gave more precise control over failure scenarios, especially in the case of long-living transactions.

3.2. Advanced Transaction Models

The next phase of evolution in transactions came through the support of distributed and nested transactions. The applications grew more complex and often required transactional access to multiple database systems. The distributed transaction takes a bottom-up approach while the nested transaction takes a top-down approach to decompose a complex transaction into subtransactions.

Distributed transactions provided global integrity constraints over multiple resources. These resources soon started to be heterogeneous as well. This gave birth to the X/Open DTP (Distributed Transaction Processing) model.

The other important evolution for transactions included chained transactions and sagas. While nested transactions worked well for federated database systems, they still were not suitable for long-lived transactions. Chained transactions presented the idea of decomposing such transactions into small, sequentially executing sub-transactions.

Sagas were based on the concept of chained transactions and proposed a compensation mechanism to roll back already completed sub-transactions. The saga model is an important transaction model because of the relaxed consistency it proposes. It finds a lot of relevance in the present-day applications developed with microservice architecture.

We'll discuss many of the terms and concepts presented here in more detail later in the tutorial.

4. Local vs. Distributed Transactions

Operations that are part of a transaction can all execute in a single participating resource or span across multiple participating resources. Hence, transactions can be local or distributed.

In local transactions, operations execute in the same resource, while in distributed transactions, operations are spread across multiple resources.

So far, we haven't spoken about the location of participating resources in a transaction. A transaction can involve multiple independent resources like databases, message queues, or web services. These resources can execute on the same virtual machine, on different virtual machines on the same physical machine, or on different physical machines altogether.

The number and location of participating resources is a crucial aspect of implementing transactions with certain guarantees, which we'll elaborate on in the next section.

5. Transaction Guarantees

One of the fundamental reasons to use transactions in handling data is to ensure data integrity. Data integrity has been well defined by a set of guarantees that every transaction is supposed to provide.

Further, a distributed data system presents new challenges that can force us to forfeit some of these guarantees in favor of better leverage from data partitioning. We'll explore these concepts in this section.

5.1. ACID Properties

We often associate transactions with a set of guarantees, famously captured in the acronym ACID. The concept was originally suggested by Jim Gray and later expanded by Andreas Reuter and Theo Härder. ACID stands for Atomicity, Consistency, Isolation, and Durability:

  • Atomicity: Atomicity ensures that all the changes we make to the data as part of a transaction are made as a single unit and operation. This effectively means that either we perform all the changes or none of them.
  • Consistency: Consistency ensures that we execute all the data changes while maintaining a consistent state at the start and the end of a transaction. A consistent state of data must conform to all the constraints that we define for the data.
  • Isolation: Isolation ensures that we keep the intermediate states of a transaction invisible to other transactions. This gives concurrently running transactions the effect of being serialized. The degree to which a transaction must be isolated from other transactions is defined by isolation levels.
  • Durability: Durability ensures that once a transaction completes, the changes we make to the data are persisted and aren't lost, for instance, due to a subsequent failure. Although not strictly necessary, this may require the data changes to be saved on disk.

These are the guarantees which we should expect from a transaction. But, a transaction doesn't need to provide all of them. We can find many arguments in the literature that suggest that a transaction that does not provide ACID guarantees is not a transaction at all.

However, with more adoption of distributed systems where the emphasis is on availability, we often see the term transaction being used more liberally.

5.2. CAP Theorem

Distributed data systems are generally constrained by the CAP theorem in what they can offer. Eric Brewer provided the original conjecture in 2000, while Seth Gilbert and Nancy Lynch provided a formal proof of it in 2002. CAP stands for Consistency, Availability, and Partition tolerance:

  • Consistency: Consistency is a guarantee that in a distributed data system, every node returns the most recent and successfully written value. In effect, every node has the same view of the data at all times. We must not confuse this with the consistency in ACID; they are different concepts.
  • Availability: Availability demands that every non-failing node returns a non-error response to read and write requests in a reasonable amount of time.
  • Partition-tolerance: Partition tolerance refers to the fact that a data system continues to function even when an arbitrary number of messages gets dropped or delayed between nodes.

CAP theorem states that a distributed data system can't provide all three of consistency, availability, and partition tolerance simultaneously. In a more pragmatic sense, a distributed data system can only provide a strong guarantee of either availability or consistency.

This is because a distributed data system, by default, should not compromise on partition tolerance anyway.

5.3. BASE Systems

Under the constraints of the CAP theorem, many distributed data systems chose to favor availability over consistency. This gave rise to a new set of guarantees for distributed systems with the acronym BASE, which stands for Basically-available, Soft-state, and Eventual consistency:

  • Basically-Available: This guarantee favors availability over consistency as per the CAP theorem. The data system will produce a response to a request, even though the response can be stale.
  • Soft-state: This refers to the fact that the state of the system can change over time even without any input being received. Hence, the system always remains in a soft state moving towards eventual consistency.
  • Eventual consistency: This is a guarantee that the system will eventually become consistent once it stops receiving any input. The data changes will eventually propagate to all nodes and all nodes will have the same view of data.

BASE is diametrically opposite to ACID in terms of the consistency model they propose. While ACID enforces consistency at the end of every transaction, BASE accepts that the consistency may be in a state of flux at the end of the transaction.

This relaxation in strong consistency requirements allows for a distributed data system to achieve high availability.

6. Distributed Commit Protocols

Almost all popular relational databases provide support for transactions by default. Since a local transaction involves just one database, the database can manage such transactions directly. Moreover, the application can control the transaction boundary through relevant APIs.
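
As a quick illustration of an application-controlled local transaction boundary, here's a minimal JDBC sketch; the connection URL, table, and SQL statements are hypothetical:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class LocalTransactionExample {

    public static void transfer(String jdbcUrl, long amount) throws Exception {
        try (Connection conn = DriverManager.getConnection(jdbcUrl)) {
            conn.setAutoCommit(false); // start a local transaction
            try (PreparedStatement debit = conn.prepareStatement(
                   "UPDATE accounts SET balance = balance - ? WHERE id = 1");
                 PreparedStatement credit = conn.prepareStatement(
                   "UPDATE accounts SET balance = balance + ? WHERE id = 2")) {
                debit.setLong(1, amount);
                debit.executeUpdate();
                credit.setLong(1, amount);
                credit.executeUpdate();
                conn.commit();   // both updates become visible together
            } catch (Exception e) {
                conn.rollback(); // neither update takes effect
                throw e;
            }
        }
    }
}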

However, it starts to get complicated when we talk about distributed transactions. Since there are multiple databases or resources involved here, a database can't manage such a transaction exclusively. What we need here is a transaction coordinator and individual resources like a database to cooperate in the transaction.

6.1. Two-phase Commit

For a distributed transaction to guarantee ACID properties, what we need is a coordination protocol. Two-phase commit is a widely-used distributed algorithm to facilitate the decision to commit or roll back a distributed transaction.

The protocol consists of two phases:

  • Prepare Phase: This phase consists of the transaction coordinator asking all participants to prepare for the commit; each resource manager can reply affirmatively or negatively.
  • Commit Phase: This phase involves the transaction coordinator asking all participants to either commit or roll back based on the individual responses from the previous phase.

A transaction coordinator facilitates the two-phase commit with all the participants. For a participant to participate in a two-phase commit, it must understand and support the protocol.
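
To make the message flow concrete, here's a deliberately simplified sketch of a coordinator driving the two phases. It's only an illustration; a real implementation would also need durable transaction logs, timeouts, and recovery:

import java.util.List;

interface Participant {
    boolean prepare();   // phase 1 vote: true means "ready to commit"
    void commit();
    void rollback();
}

class TwoPhaseCommitCoordinator {

    boolean execute(List<Participant> participants) {
        // Phase 1: ask every participant to prepare and collect the votes
        // (allMatch stops at the first negative vote, which simply aborts early)
        boolean allPrepared = participants.stream().allMatch(Participant::prepare);

        // Phase 2: commit only if everyone voted yes, otherwise roll back
        if (allPrepared) {
            participants.forEach(Participant::commit);
        } else {
            participants.forEach(Participant::rollback);
        }
        return allPrepared;
    }
}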

6.2. Three-phase Commit

The two-phase commit protocol, although quite useful, is not quite as robust as we may imagine. One of the key problems is that it cannot dependably recover from a failure of both the coordinator and one of the participants during the commit phase.

The three-phase commit protocol is a refinement of the two-phase commit protocol that addresses this issue. It introduces a third phase by splitting the commit phase into pre-commit and commit phases.

The pre-commit phase here helps to recover from the failure scenario where either a participant fails, or both the coordinator and a participant fail, during the commit phase. The recovery coordinator can use the pre-commit phase to safely decide whether it has to proceed with the commit or abort.

While these commit protocols give us ACID guarantees in a distributed transaction, they're not free from their own share of problems. The biggest challenge is that these are blocking protocols, which, as we'll see later, isn't always suitable.

7. Industry Specifications

Vendors can independently implement distributed transaction protocols like two-phase commit. However, this will make interoperability quite a challenge, especially when working with multiple vendors. The complexity further grows when we start to include heterogeneous resources like message queues in the transaction.

To address exactly this issue, there have been several industry collaborations to define standard specifications for distributed transactions.

7.1. X/Open DTP Model

XA refers to eXtended Architecture, which is a specification for distributed transaction processing. It was first released in 1991 by the X/Open consortium, which later merged with The Open Group. The goal of this specification is to provide atomicity in global transactions involving heterogeneous components.

The XA specification provides data integrity using the two-phase commit protocol and standardizes the components and interfaces involved.

XA describes several components to facilitate a two-phase commit based distributed transaction:

  • Application Program: The application program is responsible for defining the transaction and accessing resources within transaction boundaries. The application program uses a transaction manager to define the start and end of the global transaction.
  • Transaction Manager: The transaction manager is responsible for managing global transactions, which are units of work in a distributed transaction, coordinating the decision to commit them or roll them back, and coordinating failure recovery.
  • Resource Manager: A resource manager is responsible for managing a certain part of a shared resource, like a database. It coordinates with the transaction manager for a transaction branch, which is part of the global transaction.

XA also describes the interface between these components to facilitate how they work with each other. This explanation has just mentioned the important parts of the XA specification and is not a complete description.

7.2. OMG OTS Model

OTS stands for Object Transaction Service, which describes a communication infrastructure for distributed applications in an object-oriented manner. This is part of the Object Management Architecture (OMA) of the Object Management Group (OMG).

OTS enables the use of distributed two-phase commit transactions in CORBA applications by defining several components along with their interworking.

Let's understand these components in a little more detail:

  • Transactional Server: This holds one or more objects involved in the transaction
  • Transactional Client: This is a program which calls methods of transactional objects
  • Recoverable Server: This holds one or more recoverable objects, which are transactional objects affected by the commit or rollback of a transaction
  • Transactional Object: This is a CORBA object whose methods can be called in a transactional context

OTS is one of the several object services provided by OMG under OMA. The kernel of the OMA architecture is Object Request Broker (ORB) defined in the CORBA specification.

Moreover, the OTS model is based on the X/Open DTP model, where it replaces the XA and TX interfaces with CORBA IDL interfaces. A thorough analysis of OTS is beyond the scope of this tutorial.

8. Distributed Transactions in Java

While several industry specifications have standardized distributed transactions, it's still not convenient to use them in a program directly. This is why all major programming languages provide their own specifications defining APIs that bind to the industry specifications. Java has quite mature support for distributed transactions through JTA and JTS.

We'll explore these components of enterprise Java in more detail in this section.

8.1. Java Transaction API

Java Transaction API (JTA) is a Java Enterprise Edition API developed under the Java Community Process. It enables Java applications and application servers to perform distributed transactions across XA resources. JTA is modeled around XA architecture, leveraging two-phase commit.

JTA specifies standard Java interfaces between a transaction manager and the other parties in a distributed transaction.

Let's understand some of the key JTA interfaces:

  • TransactionManager: An interface which allows an application server to demarcate and control transactions
  • UserTransaction: This interface allows an application program to demarcate and control transactions explicitly
  • XAResource: The purpose of this interface is to allow a transaction manager to work with resource managers for XA-compliant resources
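
As a rough sketch of how an application program might demarcate a distributed transaction with UserTransaction, consider the snippet below. It assumes a JTA-capable container or standalone transaction manager, and the JNDI names and SQL are hypothetical:

import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.naming.InitialContext;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;

public class OrderService {

    public void placeOrder(long orderId) throws Exception {
        InitialContext ctx = new InitialContext();
        UserTransaction utx = (UserTransaction) ctx.lookup("java:comp/UserTransaction");
        DataSource ordersDs = (DataSource) ctx.lookup("jdbc/ordersXA");       // assumed XA data sources
        DataSource inventoryDs = (DataSource) ctx.lookup("jdbc/inventoryXA");

        utx.begin(); // start the global transaction
        try (Connection orders = ordersDs.getConnection();
             Connection inventory = inventoryDs.getConnection();
             PreparedStatement markPlaced = orders.prepareStatement(
               "UPDATE orders SET status = 'PLACED' WHERE id = ?");
             PreparedStatement reduceStock = inventory.prepareStatement(
               "UPDATE stock SET quantity = quantity - 1 WHERE order_id = ?")) {
            markPlaced.setLong(1, orderId);
            markPlaced.executeUpdate();
            reduceStock.setLong(1, orderId);
            reduceStock.executeUpdate();
            utx.commit(); // the transaction manager runs two-phase commit across both resources
        } catch (Exception e) {
            utx.rollback(); // best-effort rollback if any step fails
            throw e;
        }
    }
}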

8.2. Java Transaction Service

Java Transaction Service (JTS) is a specification for building the transaction manager that maps to the OMG OTS specification. JTS uses the standard CORBA ORB/TS interfaces and Internet Inter-ORB Protocol (IIOP) for transaction context propagation between JTS transaction managers.

At a high level, it supports the Java Transaction API (JTA). A JTS transaction manager provides transaction services to the parties involved in a distributed transaction.

Services that JTS provides to an application are largely transparent and hence we may not even notice them in the application architecture. The JTS is architected around an application server which abstracts all transaction semantics from the application programs.

9. Long-Running Transactions

While most of the distributed transaction protocols focus on providing ACID guarantees, they all suffer from the fact that they are blocking. While they work perfectly well for transactions with a short execution time, they are unsuitable for long-running business transactions.

It can make an application extremely difficult to scale. Traditional techniques using resource locking don't agree well with modern applications that require business transactions in a loosely-coupled, asynchronous environment; for instance, business transactions in an application built with microservices architecture.

There have been several attempts to define patterns and specifications to address long-running transactions, and we'll discuss some of them in this section.

9.1. Saga Interaction Pattern

The saga interaction pattern attempts to break a long-running business process into multiple small, related business actions and interactions. Further, it coordinates the whole process using messages and timeouts. The pattern was first defined back in 1987 by Hector Garcia-Molina and Kenneth Salem.

Let's see how Saga decomposes a business process.

Contrary to an ACID transaction, we cannot roll back a Saga when a failure occurs. What we do instead is called a counteraction, or compensating action. A counteraction is, however, just a best effort to undo the effect of the original action. It may not always be possible to completely revert the effect of every action.

Further, the Saga pattern requires individual actions and their corresponding counteractions to be idempotent for a successful recovery from failures.
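
As a minimal, purely illustrative sketch of this idea in plain Java, each completed step below registers its counteraction, and the registered counteractions run in reverse order when a later step fails. The step and compensation names are hypothetical:

import java.util.ArrayDeque;
import java.util.Deque;

public class OrderSaga {

    public void placeOrder() {
        // compensations are pushed as each step succeeds, so they run newest-first
        Deque<Runnable> compensations = new ArrayDeque<>();
        try {
            reserveStock();
            compensations.push(this::releaseStock);

            chargePayment();
            compensations.push(this::refundPayment);

            scheduleShipping(); // if this fails, both earlier steps are compensated
        } catch (RuntimeException e) {
            compensations.forEach(Runnable::run); // best-effort counteractions
            throw e;
        }
    }

    private void reserveStock() { /* call the inventory service */ }
    private void releaseStock() { /* counteraction for reserveStock */ }
    private void chargePayment() { /* call the payment service */ }
    private void refundPayment() { /* counteraction for chargePayment */ }
    private void scheduleShipping() { /* call the shipping service */ }
}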

9.2. OASIS WS-BA

The saga interaction pattern finds a great fit for SOA-based architecture with SOAP service-based interactions. Several protocol extensions have been defined for SOAP to address specific communication requirements. These collectively fall under WS* and include protocols for supporting distributed transactions.

Web Services – Business Activity (WS-BA) defines an orderly protocol and states for both the participating services and the coordinator in a Saga-based business process. WS-BA defines two protocols:

  • Business Agreement with Coordinator Completion: This is a more ordered protocol where the coordinator decides and notifies the participants when to complete
  • Business Agreement with Participant Completion: This is a more loosely-coupled protocol where the participants decide when they have to complete

Further, WS-BA defines two coordination types. First is the Atomic Outcome, where all participants have to close or compensate. Second is the Mixed Outcome, where the coordinator treats each participant differently.

9.3. OASIS BTP

The Business Transaction Protocol (BTP) provides a common understanding and a way to communicate guarantees and limits on guarantees between organizations. It also provides formal rules for the distribution of parts of a business process outside the boundaries of an organization.

While BTP provides coordination and forces a consistent termination of the business process, it relies on local compensating actions from participating organizations. BTP provides two different protocols:

  • BTP Atomic Transactions: Also known as atoms, these are similar to transactions in tightly-coupled systems. Here, one atom coordinator and zero or more sub-coordinators coordinate a transaction, each managing one or more participants. The outcome of an atom is atomic.
  • BTP Cohesive Transactions: Also known as cohesions; contrary to atoms, these may deliver different termination outcomes to their participants. Consistency here is determined by the agreement and interaction between the client and the coordinator.

Hence, BTP provides a compensation-based transaction semantic for distributed business processes operating in heterogeneous environments.

10. Advanced Consensus Protocols

The decision of whether to commit a transaction to a database is part of a broader set of problems in distributed computing known as the consensus problem. The problem is to achieve system reliability in the presence of random failures. Consensus refers to a process for distributed processes to agree on some state or decision. Other such decisions include leader election, state machine replication, and clock synchronization.

What we need to solve the consensus problem is a consensus protocol. A consensus protocol must provide eventual termination, data integrity, and agreement between distributed processes or nodes. Different consensus protocols can prescribe different levels of integrity here. Important evaluation criteria for consensus protocols include running time and message complexity.

The distributed commit protocols that we have discussed so far, like two-phase commit and three-phase commit, are all types of consensus protocols. The two-phase commit protocol has low message complexity and low overall latency, but it blocks on coordinator failure. The three-phase commit protocol improves upon this problem at the cost of higher overall latency. Even the three-phase commit protocol comes apart in the face of a network partition.

What we're going to discuss in this section are some advanced consensus protocols that address the problems associated with failure scenarios.

10.1. Paxos

Paxos is a family of protocols originally proposed in 1989 by Leslie Lamport. These protocols solve consensus problems in an asynchronous network of unreliable processes. Paxos provides durability even with the failure of a bounded number of replicas in the network. Paxos has been widely regarded as the first consensus protocol that is rigorously proved to be correct.

Since its proposal, there have been several versions of the Paxos protocol. We'll examine the most basic Paxos protocol here. The basic Paxos protocol runs in rounds, each with two phases, both further divided into two sub-phases:

  • Prepare (Phase 1A): A Proposer creates a message identified by a unique number (n) which should be the greatest used so far
  • Promise (Phase 1B): An Acceptor receives the message from a Proposer and checks its number (n); if the number is the greatest received so far, the Acceptor returns a Promise to the Proposer
  • Accept (Phase 2A): Once the Proposer receives Promises from a majority (a Quorum) of Acceptors, it sends an Accept message with a chosen value to the Quorum of Acceptors
  • Accepted (Phase 2B): The Acceptor receives the Accept message from the Proposer and accepts it if it has not already promised a proposal with a higher identifier

Note that Paxos allows multiple proposers to send conflicting messages and acceptors to accept multiple proposals. In the process, rounds can fail but Paxos ensures that the acceptors ultimately agree on a single value.

10.2. Raft

Raft is a consensus algorithm developed by Diego Ongaro and John Ousterhout in their seminal paper and later expanded in a doctoral dissertation. It stands for Reliable, Replicated, Redundant, And Fault-Tolerant.

Raft offers a generic way to distribute a state machine across a cluster of computing nodes. Further, it ensures that each node in the cluster agrees upon the same series of state transitions. Raft works by keeping a replicated log that only a single node, the leader, can manage.

Raft divides the consensus problem into three sub-problems:

  • Leader Election: Every node can be in any of three states, namely, Leader, Candidate, and Follower. There can't be more than one leader at any point in time. A node always starts as a follower and expects a heartbeat from the leader. When it does not receive one, it transitions into the candidate state and requests votes to become the leader.
  • Log Replication: When a leader receives a request, it first appends it to its log and then sends a request to every follower so that they can do the same thing. Once the leader gets confirmation from a majority of the nodes, it commits the message and then responds to the client. The followers commit the message on receiving the next heartbeat from the leader.
  • Safety: It's important to ensure that every log is correctly replicated, and commands are executed in the same order. For this, Raft uses several safety mechanisms. These include Log Matching Property and Election Restriction.

Please note that Raft is equivalent to Paxos in fault-tolerance and performance. Like Paxos, Raft is also formally proven to be safe. But importantly, Raft is easier to understand and develop than its much more complex predecessor Paxos.

11. Conclusion

In this tutorial, we had a look at what is meant by a transaction and the differences between local and distributed transactions.

We also went through some of the popular protocols for handling distributed transactions. Further, we touched upon the industry specifications that are available and their support in Java.

We also discussed long-running transactions and, finally, some of the advanced consensus algorithms.

Spring Security: Check If a User Has a Role in Java


1. Introduction

In Spring Security, sometimes it is necessary to check if an authenticated user has a specific role. This can be useful to enable or disable particular features in our applications.

In this tutorial, we'll see various ways to check user roles in Java for Spring Security.

2. Checking User Role in Java

Spring Security provides several ways to check user roles in Java code. We'll look at each of them below.

2.1. @PreAuthorize

The first way to check for user roles in Java is to use the @PreAuthorize annotation provided by Spring Security. This annotation can be applied to a class or method, and it accepts a single string value that represents a SpEL expression.

Before we can use this annotation, we must first enable global method security. This can be done in Java code by adding the @EnableGlobalMethodSecurity annotation to any configuration class.
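
For example, a minimal configuration class might look like the following; the class name is arbitrary, and prePostEnabled = true is what switches on the @PreAuthorize/@PostAuthorize annotations:

import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.method.configuration.EnableGlobalMethodSecurity;

@Configuration
@EnableGlobalMethodSecurity(prePostEnabled = true)
public class MethodSecurityConfig {
    // no extra beans are required just to enable @PreAuthorize checks
}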

Then, Spring Security provides two expressions we can use with the @PreAuthorize annotation to check user roles:

@PreAuthorize("hasRole('ROLE_ADMIN')")
@GetMapping("/user/{id}")
public String getUser(@PathVariable("id") String id) {
    ...
}

We can also check multiple roles in a single expression:

@PreAuthorize("hasAnyRole('ROLE_ADMIN','ROLE_MANAGER')")
@GetMapping("/users")
public String getUsers() {
    ...
}

In this case, the request will be allowed if the user has any of the specified roles.

If the method is called without having the proper role, Spring Security throws an exception and redirects to the error page.

2.2. SecurityContext

The next way we can check for user roles in Java code is with the SecurityContext class.

By default, Spring Security uses a thread-local copy of this class. This means each request in our application has its own security context that contains details of the user making the request.

To use it, we simply call the static methods in SecurityContextHolder:

Authentication auth = SecurityContextHolder.getContext().getAuthentication();
if (auth != null && auth.getAuthorities().stream().anyMatch(a -> a.getAuthority().equals("ADMIN"))) {
    ...
}

Note that we're using the plain authority name here instead of the full role name.

This works well when we need more fine-grained checks — for example, a specific part of a single method. However, this approach will not work if we use the global context holder mode in Spring Security.

2.3. UserDetailsService

The third way we can look up user roles in Java code is by using the UserDetailsService. This is a bean we can inject anywhere into our application and call as needed:

@GetMapping("/users")
public String getUsers() {
    UserDetails details = userDetailsService.loadUserByUsername("mike");
    if (details != null && details.getAuthorities().stream()
      .anyMatch(a -> a.getAuthority().equals("ADMIN"))) {
        // ...
    }
}

Again, we must use the authority name here, not the full role name with prefix.

The benefit of this approach is that we can check roles for any user, not just the one who made the request.

2.4. Servlet Request

If we're using Spring MVC, we can also check user roles in Java using the HttpServletRequest class:

@GetMapping("/users")
public String getUsers(HttpServletRequest request) {
    if (request.isUserInRole("ROLE_ADMIN")) {
        ...
    }
}

3. Conclusion

In this article, we have seen several different ways to check for roles using Java code with Spring Security. As always, the code examples from this article can be found over on GitHub.

Creating Spring Beans Through Factory Methods


1. Introduction

Factory methods can be a useful technique for hiding complex creation logic within a single method call.

While we commonly create beans in Spring using constructor or field injection, we can also create Spring beans using factory methods.

In this tutorial, we will delve into creating Spring beans using both instance and static factory methods.

2. Instance Factory Method

A standard implementation of the factory method pattern is to create an instance method that returns the desired bean.

Additionally, we can configure Spring to create our desired bean with or without arguments.

2.1. Without Arguments

We can create a Foo class that represents our bean being created:

public class Foo {}

Then, we create an InstanceFooFactory class that includes a factory method, createInstance, that creates our Foo bean:

public class InstanceFooFactory {

    public Foo createInstance() {
        return new Foo();
    }
}

After that, we configure Spring:

  1. Create a bean for our factory class (InstanceFooFactory)
  2. Use the factory-bean attribute to reference our factory bean
  3. Use the factory-method attribute to reference our factory method (createInstance)

Applying this to a Spring XML configuration, we end up with:

<beans ...>

    <bean id="instanceFooFactory"
      class="com.baeldung.factorymethod.InstanceFooFactory" />

    <bean id="foo"
      factory-bean="instanceFooFactory"
      factory-method="createInstance" />

</beans>

Lastly, we autowire our desired Foo bean. Spring will then create our bean using our createInstance factory method:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("/factorymethod/instance-config.xml")
public class InstanceFooFactoryIntegrationTest {

    @Autowired
    private Foo foo;
    
    @Test
    public void givenValidInstanceFactoryConfig_whenCreateFooInstance_thenInstanceIsNotNull() {
        assertNotNull(foo);
    }
}

2.2. With Arguments

We can also provide arguments to our instance factory method using the constructor-arg element in our Spring configuration.

First, we create a class, Bar, that utilizes an argument:

public class Bar {

    private String name;

    public Bar(String name) {
        this.name = name;
    }

    // ...getters & setters
}

Next, we create an instance factory class, InstanceBarFactory, with a factory method that accepts an argument and returns a Bar bean:

public class InstanceBarFactory {

    public Bar createInstance(String name) {
        return new Bar(name);
    }
}

Lastly, we add a constructor-arg element to our Bar bean definition:

<beans ...>

    <bean id="instanceBarFactory"
      class="com.baeldung.factorymethod.InstanceBarFactory" />

    <bean id="bar"
      factory-bean="instanceBarFactory"
      factory-method="createInstance">
        <constructor-arg value="someName" />
    </bean>

</beans>

We can then autowire our Bar bean in the same manner as we did for our Foo bean:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("/factorymethod/instance-bar-config.xml")
public class InstanceBarFactoryIntegrationTest {

    @Autowired
    private Bar instance;
    
    @Test
    public void givenValidInstanceFactoryConfig_whenCreateInstance_thenNameIsCorrect() {
        assertNotNull(instance);
        assertEquals("someName", instance.getName());
    }
}

3. Static Factory Method

We can also configure Spring to use a static method as a factory method.

While instance factory methods should be preferred, this technique can be useful if we have existing, legacy static methods that produce desired beans. For example, if a factory method returns a singleton, we can configure Spring to use this singleton factory method.

Similar to instance factory methods, we can configure static methods with and without arguments.

3.1. Without Arguments

Using our Foo class as our desired bean, we can create a class, SingletonFooFactory, that includes a createInstance factory method that returns a singleton instance of Foo:

public class SingletonFooFactory {

    private static final Foo INSTANCE = new Foo();
    
    public static Foo createInstance() {
        return INSTANCE;
    }
}

This time, we only need to create one bean. This bean requires only two attributes:

  1. class – declares our factory class (SingletonFooFactory)
  2. factory-method – declares the static factory method (createInstance)

Applying this to our Spring XML configuration, we get:

<beans ...>

    <bean id="foo"
      class="com.baeldung.factorymethod.SingletonFooFactory"
      factory-method="createInstance" />

</beans>

Lastly, we autowire our Foo bean using the same structure as before:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("/factorymethod/static-foo-config.xml")
public class SingletonFooFactoryIntegrationTest {

    @Autowired
    private Foo singleton;
    
    @Test
    public void givenValidStaticFactoryConfig_whenCreateInstance_thenInstanceIsNotNull() {
        assertNotNull(singleton);
    }
}

3.2. With Arguments

While we should avoid changing the state of static objects — like our singleton — when possible, we can still pass arguments to our static factory method.

To do this, we create a new factory method that accepts our desired arguments:

public class SingletonBarFactory {

    private static final Bar INSTANCE = new Bar("unnamed");
    
    public static Bar createInstance(String name) {
        INSTANCE.setName(name);
        return INSTANCE;
    }
}

After that, we configure Spring to pass in the desired argument using the constructor-arg element:

<beans ...>

    <bean id="bar"
      class="com.baeldung.factorymethod.SingletonBarFactory"
      factory-method="createInstance">
        <constructor-arg value="someName" />
    </bean>

</beans>

Lastly, we autowire our Bar bean using the same structure as before:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("/factorymethod/static-bar-config.xml")
public class SingletonBarFactoryIntegrationTest {

    @Autowired
    private Bar instance;
    
    @Test
    public void givenValidStaticFactoryConfig_whenCreateInstance_thenNameIsCorrect() {
        assertNotNull(instance);
        assertEquals("someName", instance.getName());
    }
}

4. Conclusion

In this article, we looked at how to configure Spring to use instance and static factory methods — both with and without arguments.

While creating beans through constructor and field injection is more common, factory methods can be handy for complex creation steps and legacy code.

The code used in this article can be found over on GitHub.

Building a Simple Web Application with Spring Boot and Groovy


1. Overview

Groovy has a number of capabilities we might want to use in our Spring web applications.

So, in this tutorial, we'll build a simple todo application with Spring Boot and Groovy. Also, we'll explore their integration points.

2. Todo Application

Our application will have the following features:

  • Create task
  • Edit task
  • Delete task
  • View specific task
  • View all tasks

It'll be a REST-based application and we'll use Maven as our build tool.

2.1. Maven Dependencies

Let's include all the dependencies required in our pom.xml file:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
    <version>2.2.6.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <version>2.2.6.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.codehaus.groovy</groupId>
    <artifactId>groovy</artifactId>
    <version>3.0.3</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <version>2.2.6.RELEASE</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <version>1.4.200</version>
    <scope>runtime</scope>
</dependency>

Here, we're including spring-boot-starter-web to build REST endpoints, and importing the groovy dependency to provide Groovy support to our project.

For the persistence layer, we're using spring-boot-starter-data-jpa, and h2 is the embedded database.

Also, we've got to include gmavenplus-plugin with all the goals in the pom.xml:

<build>
    <plugins>
        <!-- ... -->
        <plugin>
            <groupId>org.codehaus.gmavenplus</groupId>
            <artifactId>gmavenplus-plugin</artifactId>
            <version>1.9.0</version>
            <executions>
                <execution>
                    <goals>
                        <goal>addSources</goal>
                        <goal>addTestSources</goal>
                        <goal>generateStubs</goal>
                        <goal>compile</goal>
                        <goal>generateTestStubs</goal>
                        <goal>compileTests</goal>
                        <goal>removeStubs</goal>
                        <goal>removeTestStubs</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

2.2. JPA Entity Class

Let's write a simple Todo Groovy class with three fields – id, task, and isCompleted:

@Entity
@Table(name = 'todo')
class Todo {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    Integer id
    
    @Column
    String task
    
    @Column
    Boolean isCompleted
}

Here, the id field is the unique identifier of the task. task contains the details of the task and isCompleted shows whether the task is completed or not.

Notice that when we don't provide an access modifier for a field, the Groovy compiler makes that field private and also generates getter and setter methods for it.

2.3. The Persistence Layer

Let's create a Groovy interface, TodoRepository, which extends JpaRepository. It'll take care of all the CRUD operations in our application:

@Repository
interface TodoRepository extends JpaRepository<Todo, Integer> {}

2.4. The Service Layer

The TodoService interface contains all the abstract methods required for our CRUD operations:

interface TodoService {

    List<Todo> findAll()

    Todo findById(Integer todoId)

    Todo saveTodo(Todo todo)

    Todo updateTodo(Todo todo)

    Todo deleteTodo(Integer todoId)
}

TodoServiceImpl is the implementation class that implements all the methods of TodoService:

@Service
class TodoServiceImpl implements TodoService {

    //...
    
    @Override
    List<Todo> findAll() {
        todoRepository.findAll()
    }

    @Override
    Todo findById(Integer todoId) {
        todoRepository.findById todoId get()
    }
    
    @Override
    Todo saveTodo(Todo todo){
        todoRepository.save todo
    }
    
    @Override
    Todo updateTodo(Todo todo){
        todoRepository.save todo
    }
    
    @Override
    Todo deleteTodo(Integer todoId){
        todoRepository.deleteById todoId
    }
}

2.5. The Controller Layer

Now, let's define all the REST APIs in the TodoController which is our @RestController:

@RestController
@RequestMapping('todo')
public class TodoController {

    @Autowired
    TodoService todoService

    @GetMapping
    List<Todo> getAllTodoList(){
        todoService.findAll()
    }

    @PostMapping
    Todo saveTodo(@RequestBody Todo todo){
        todoService.saveTodo todo
    }

    @PutMapping
    Todo updateTodo(@RequestBody Todo todo){
        todoService.updateTodo todo
    }

    @DeleteMapping('/{todoId}')
    deleteTodo(@PathVariable Integer todoId){
        todoService.deleteTodo todoId
    }

    @GetMapping('/{todoId}')
    Todo getTodoById(@PathVariable Integer todoId){
        todoService.findById todoId
    }
}

Here, we've defined five endpoints that users can call to perform CRUD operations.

2.6. Bootstrapping the Spring Boot Application

Now, let's write a class with the main method that will be used to start our application:

@SpringBootApplication
class SpringBootGroovyApplication {
    static void main(String[] args) {
        SpringApplication.run SpringBootGroovyApplication, args
    }
}

Notice that, in Groovy, parentheses are optional when calling a method with arguments – and this is what we're doing in the example above.

Also, the .class suffix is not needed for any class in Groovy, which is why we can use SpringBootGroovyApplication directly.

Now, let's define this class in pom.xml as start-class:

<properties>
    <start-class>com.baeldung.app.SpringBootGroovyApplication</start-class>
</properties>

3. Running the Application

Finally, our application is ready to run. We can simply run the SpringBootGroovyApplication class as a Java application, or run the Maven goal:

mvn spring-boot:run

This should start the application on http://localhost:8080 and we should be able to access its endpoints.

4. Testing the Application

Our application is ready for testing. Let's create a Groovy class – TodoAppTest to test our application.

4.1. Initial Setup

Let's define three static variables – API_ROOT, readingTodoId, and writingTodoId in our class:

static API_ROOT = "http://localhost:8080/todo"
static readingTodoId
static writingTodoId

Here, the API_ROOT contains the root URL of our app. The readingTodoId and writingTodoId are the primary keys of our test data which we'll use later to perform testing.

Now, let's create a populateDummyData() method and annotate it with @BeforeClass to populate the test data:

@BeforeClass
static void populateDummyData() {
    Todo readingTodo = new Todo(task: 'Reading', isCompleted: false)
    Todo writingTodo = new Todo(task: 'Writing', isCompleted: false)

    final Response readingResponse = 
      RestAssured.given()
        .contentType(MediaType.APPLICATION_JSON_VALUE)
        .body(readingTodo).post(API_ROOT)
          
    Todo readingTodoResponse = readingResponse.as Todo.class
    readingTodoId = readingTodoResponse.getId()

    final Response writingResponse = 
      RestAssured.given()
        .contentType(MediaType.APPLICATION_JSON_VALUE)
        .body(writingTodo).post(API_ROOT)
          
    Todo writingTodoResponse = writingResponse.as Todo.class
    writingTodoId = writingTodoResponse.getId()
}

In the same method, we also populate the variables readingTodoId and writingTodoId to store the primary keys of the records we're saving.

Notice that, in Groovy, we can also initialize beans by using named parameters and the default constructor, as we're doing for the readingTodo and writingTodo beans in the above snippet.

4.2. Testing CRUD Operations

Next, let's find all the tasks from the todo list:

@Test
void whenGetAllTodoList_thenOk(){
    final Response response = RestAssured.get(API_ROOT)
    
    assertEquals HttpStatus.OK.value(),response.getStatusCode()
    assertTrue response.as(List.class).size() > 0
}

Then, let's find a specific task by passing readingTodoId which we've populated earlier:

@Test
void whenGetTodoById_thenOk(){
    final Response response = 
      RestAssured.get("$API_ROOT/$readingTodoId")
    
    assertEquals HttpStatus.OK.value(),response.getStatusCode()
    Todo todoResponse = response.as Todo.class
    assertEquals readingTodoId,todoResponse.getId()
}

Here, we've used Groovy string interpolation to build the request URL.

Furthermore, let's try to update the task in the todo list by using readingTodoId:

@Test
void whenUpdateTodoById_thenOk(){
    Todo todo = new Todo(id:readingTodoId, isCompleted: true)
    final Response response = 
      RestAssured.given()
        .contentType(MediaType.APPLICATION_JSON_VALUE)
        .body(todo).put(API_ROOT)
          
    assertEquals HttpStatus.OK.value(),response.getStatusCode()
    Todo todoResponse = response.as Todo.class
    assertTrue todoResponse.getIsCompleted()
}

And then delete the task in the todo list by using writingTodoId:

@Test
void whenDeleteTodoById_thenOk(){
    final Response response = 
      RestAssured.given()
        .delete("$API_ROOT/$writingTodoId")
    
    assertEquals HttpStatus.OK.value(),response.getStatusCode()
}

Finally, we can save a new task:

@Test
void whenSaveTodo_thenOk(){
    Todo todo = new Todo(task: 'Blogging', isCompleted: false)
    final Response response = 
      RestAssured.given()
        .contentType(MediaType.APPLICATION_JSON_VALUE)
        .body(todo).post(API_ROOT)
          
    assertEquals HttpStatus.OK.value(),response.getStatusCode()
}

5. Conclusion

In this article, we've used Groovy and Spring Boot to build a simple application. We've also seen how they can be integrated together and demonstrated some of the cool features of Groovy with examples.

As always, the full source code of the example is available over on GitHub.


An Introduction to Kaniko


1. Introduction

In this tutorial, we'll take a look at building container images using Kaniko.

2. Kaniko

Kaniko is a tool to build container images from a Dockerfile. Unlike Docker, Kaniko doesn't require the Docker daemon.

Since there's no dependency on the daemon process, Kaniko can run in any environment where the user doesn't have root access, such as a Kubernetes cluster.

Kaniko executes each command within the Dockerfile completely in userspace using an executor image, gcr.io/kaniko-project/executor, which runs inside a container – for instance, a Kubernetes pod. It executes the commands in order and takes a snapshot of the file system after each one.

If there are changes to the file system, the executor takes a snapshot of the filesystem change as a “diff” layer and updates the image metadata.

There are different ways to deploy and run Kaniko:

  • Kubernetes cluster
  • gVisor
  • Google Cloud Build

In this tutorial, we'll be deploying Kaniko using a Kubernetes cluster.

3. Installing Minikube

We'll be using Minikube to deploy Kubernetes locally. It can be downloaded as a stand-alone binary:

$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube

We can then add Minikube executable to the path:

$ sudo mkdir -p /usr/local/bin/
$ sudo install minikube /usr/local/bin/

Next, let's make sure the Docker daemon is running:

$ docker version

This command will also tell us the client and server versions installed.

And now, we can create our Kubernetes cluster:

$ minikube start --driver=docker

Once the start command executes successfully, we'll see a message:

Done! kubectl is now configured to use "minikube"
For best results, install kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/

Upon running the minikube status command, we should see the kubelet status as “Running“:

m01
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

Next, we need to set up the kubectl binary to be able to run the Kubernetes commands. Let's download the binary and make it executable:

$ curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl && chmod +x ./kubectl

Let's now move this to the path:

$ sudo mv ./kubectl /usr/local/bin/kubectl

We can verify the version using:

$ kubectl version

4. Building Images Using Kaniko

Now that we have a Kubernetes cluster ready, let's start building an image using Kaniko.

First, we need to create a local directory that will be mounted in the Kaniko container as the build context.

For this, we need to SSH to the Kubernetes cluster and create the directory:

$ minikube ssh
$ mkdir kaniko && cd kaniko

Next, let's create a Dockerfile which pulls the Ubuntu image and echoes a string “hello”:

$ echo 'FROM ubuntu' >> dockerfile
$ echo 'ENTRYPOINT ["/bin/bash", "-c", "echo hello"]' >> dockerfile

If we run cat dockerfile now, we should see:

FROM ubuntu
ENTRYPOINT ["/bin/bash", "-c", "echo hello"]

And lastly, we'll run the pwd command to get the path to the local directory, which we'll later need to specify in the persistent volume.

The output for this should be similar to:

/home/docker/kaniko

And finally, we can exit the SSH session:

$ exit

4.1. Kaniko Executor Image Arguments

Before proceeding further with creating the Kubernetes configuration files, let's take a look at some of the arguments that the Kaniko executor image requires:

  • Dockerfile (--dockerfile) – File containing all the commands required to build the image
  • Build context (--context) – This is similar to the build context of Docker, and it refers to the directory that Kaniko uses to build the image. So far, Kaniko supports Google Cloud Storage (GCS), Amazon S3, Azure Blob Storage, a Git repository, and a local directory. In this tutorial, we'll use the local directory we configured earlier.
  • Destination (--destination) – This refers to the Docker registry or any similar repository to which we push the image. This argument is mandatory. If we don't want to push the image, we can override the behavior by using the --no-push flag instead.

We can also see additional flags that can be passed.

4.2. Setting Up the Configuration Files

Let's now start creating the configuration files required for running Kaniko in the Kubernetes cluster.

First, let's create the persistent volume which provides the volume mount path that was created earlier in the cluster. Let's call the file volume.yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: dockerfile
  labels:
    type: local
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  hostPath:
    path: /home/docker/kaniko # replace this with the output of pwd command from before, if it is different

Next, let's create a persistent volume claim for this persistent volume. We'll create a file volume-claim.yaml with:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: dockerfile-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: local-storage

Finally, let's create the pod descriptor which comprises the executor image. This descriptor has the reference to the volume mounts specified above which in turn point to the Dockerfile we've created before.

We'll call the file pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args: ["--dockerfile=/workspace/dockerfile",
            "--context=dir://workspace",
            "--no-push"] 
    volumeMounts:
      - name: dockerfile-storage
        mountPath: /workspace
  restartPolicy: Never
  volumes:
    - name: dockerfile-storage
      persistentVolumeClaim:
        claimName: dockerfile-claim

As mentioned before, in this tutorial we're focusing only on the image creation using Kaniko and are not publishing it. Hence we specify the no-push flag in the arguments to the executor.

With all of the required configuration files in place, let's apply each of them:

$ kubectl create -f volume.yaml
$ kubectl create -f volume-claim.yaml
$ kubectl create -f pod.yaml

After applying the descriptors, we can verify that the Kaniko pod reaches the Completed status by using kubectl get po:

NAME     READY   STATUS      RESTARTS   AGE
kaniko    0/1   Completed       0        3m

Now we can check the logs of this pod using kubectl logs kaniko to follow the status of the image creation; it should show the following output:

INFO[0000] Resolved base name ubuntu to ubuntu          
INFO[0000] Resolved base name ubuntu to ubuntu          
INFO[0000] Retrieving image manifest ubuntu             
INFO[0003] Retrieving image manifest ubuntu             
INFO[0006] Built cross stage deps: map[]                
INFO[0006] Retrieving image manifest ubuntu             
INFO[0008] Retrieving image manifest ubuntu             
INFO[0010] Skipping unpacking as no commands require it. 
INFO[0010] Taking snapshot of full filesystem...        
INFO[0013] Resolving paths                              
INFO[0013] ENTRYPOINT ["/bin/bash", "-c", "echo hello"] 
INFO[0013] Skipping push to container registry due to --no-push flag

We can see in the output that the container has executed the steps we have put in the Dockerfile.

It began by pulling the base Ubuntu image, and it ended by adding the echo command to the entry point. Since the no-push flag was specified, it didn't push the image to any repository.

As mentioned before, we can also see that a snapshot of the file system is taken before the entry point is added.

5. Conclusion

In this tutorial, we've looked at a basic introduction to Kaniko. We've seen how it can be used to build an image, and also how to set up a Kubernetes cluster using Minikube with the configuration required for Kaniko to run.

As usual, the code snippets used in this article are available over on GitHub.

Spring Security Custom Logout Handler


1. Overview

The Spring Security framework provides very flexible and powerful support for authentication. Together with user identification, we'll typically want to handle user logout events and, in some cases, add some custom logout behavior. One such use case could be for invalidating a user cache or closing authenticated sessions.

For this very purpose, Spring provides the LogoutHandler interface, and in this tutorial, we'll take a look at how to implement our own custom logout handler.

2. Handling Logout Requests

Every web application that logs users in must log them out someday. Spring Security handlers usually control the logout process. Basically, we have two ways of handling logout. As we're going to see, one of them is implementing the LogoutHandler interface.

2.1. LogoutHandler Interface

The LogoutHandler interface has the following definition:

public interface LogoutHandler {
    void logout(HttpServletRequest request, HttpServletResponse response, Authentication authentication);
}

It is possible to add as many logout handlers as we need to our application. The one requirement for the implementation is that no exceptions are thrown. This is because handler actions must not break the application state on logout.

For example, one of the handlers may do some cache cleanup, and its method must complete successfully. In the tutorial example, we'll show exactly this use case.

2.2. LogoutSuccessHandler Interface

On the other hand, we can control the overall logout strategy with the LogoutSuccessHandler interface and its onLogoutSuccess method. Unlike a LogoutHandler, this method may throw an exception, and it's typically where we redirect the user to an appropriate destination.

Furthermore, it's not possible to add multiple handlers of the LogoutSuccessHandler type, so there is only one such implementation per application. Generally speaking, it's the last step of the logout strategy.
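
As a minimal sketch (this class is not part of our example application, and the target URL is only an assumption), a redirecting implementation could look like this:

public class RedirectingLogoutSuccessHandler implements LogoutSuccessHandler {

    @Override
    public void onLogoutSuccess(HttpServletRequest request, HttpServletResponse response,
      Authentication authentication) throws IOException, ServletException {
        // send the user to a hypothetical confirmation page once logout has completed
        response.sendRedirect("/logout-success");
    }
}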

3. LogoutHandler Interface in Practice

Now, let's create a simple web application to demonstrate the logout handling process. We'll implement some simple caching logic to retrieve user data to avoid unnecessary hits on the database.

Let's start with the application.properties file, which contains the database connection properties for our sample application:

spring.datasource.url=jdbc:postgresql://localhost:5432/test
spring.datasource.username=test
spring.datasource.password=test
spring.jpa.hibernate.ddl-auto=create

3.1. Web Application Setup

Next, we'll add a simple User entity that we'll use for login purposes and data retrieval. As we can see, the User class maps to the users table in our database:

@Entity
@Table(name = "users")
public class User {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Integer id;

    @Column(unique = true)
    private String login;

    private String password;

    private String role;

    private String language;

    // standard setters and getters
}

For the caching purposes of our application, we'll implement a cache service that uses a ConcurrentHashMap internally to store users:

@Service
public class UserCache {
    @PersistenceContext
    private EntityManager entityManager;

    private final ConcurrentMap<String, User> store = new ConcurrentHashMap<>(256);
}

Using this service, we can retrieve a user by user name (login) from the database and store it internally in our map:

public User getByUserName(String userName) {
    return store.computeIfAbsent(userName, k -> 
      entityManager.createQuery("from User where login=:login", User.class)
        .setParameter("login", k)
        .getSingleResult());
}

Furthermore, it is possible to evict the user from the store. As we'll see later, this will be the main action that we'll invoke from our logout handler:

public void evictUser(String userName) {
    store.remove(userName);
}
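
The integration tests later in this tutorial also call a size() method on the cache. That helper isn't shown above, but a minimal sketch would simply expose the size of the underlying map:

public int size() {
    // number of users currently held in the cache
    return store.size();
}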

To retrieve user data and language information, we'll use a standard Spring controller:

@Controller
@RequestMapping(path = "/user")
public class UserController {

    private final UserCache userCache;

    public UserController(UserCache userCache) {
        this.userCache = userCache;
    }

    @GetMapping(path = "/language")
    @ResponseBody
    public String getLanguage() {
        String userName = UserUtils.getAuthenticatedUserName();
        User user = userCache.getByUserName(userName);
        return user.getLanguage();
    }
}

3.2. Web Security Configuration

There are two simple actions we’ll focus on in the application — login and logout. First, we need to set up our MVC configuration class to allow users to authenticate using Basic HTTP Auth:

@Configuration
@EnableWebSecurity
public class MvcConfiguration extends WebSecurityConfigurerAdapter {

    @Autowired
    private CustomLogoutHandler logoutHandler;

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.httpBasic()
            .and()
                .authorizeRequests()
                    .antMatchers(HttpMethod.GET, "/user/**")
                    .hasRole("USER")
            .and()
                .logout()
                    .logoutUrl("/user/logout")
                    .addLogoutHandler(logoutHandler)
                    .logoutSuccessHandler(new HttpStatusReturningLogoutSuccessHandler(HttpStatus.OK))
                    .permitAll()
            .and()
                .csrf()
                    .disable()
                .formLogin()
                    .disable();
    }

    // further configuration
}

The important part to note in the above configuration is the addLogoutHandler method. Here we register our CustomLogoutHandler, which Spring triggers at the end of logout processing. The remaining settings fine-tune HTTP Basic Auth.

3.3. Custom Logout Handler

Finally, and most importantly, we'll write our custom logout handler that handles the necessary user cache cleanup:

@Service
public class CustomLogoutHandler implements LogoutHandler {

    private final UserCache userCache;

    public CustomLogoutHandler(UserCache userCache) {
        this.userCache = userCache;
    }

    @Override
    public void logout(HttpServletRequest request, HttpServletResponse response, 
      Authentication authentication) {
        String userName = UserUtils.getAuthenticatedUserName();
        userCache.evictUser(userName);
    }
}

As we can see, we override the logout method and simply evict the given user from the user cache.

4. Integration Testing

Let's now test the functionality. To begin with, we need to verify that the cache works as intended — that is, it loads authorized users into its internal store:

@Test
public void whenLogin_thenUseUserCache() {
    assertThat(userCache.size()).isEqualTo(0);

    ResponseEntity<String> response = restTemplate.withBasicAuth("user", "pass")
        .getForEntity(getLanguageUrl(), String.class);

    assertThat(response.getBody()).contains("english");

    assertThat(userCache.size()).isEqualTo(1);

    HttpHeaders requestHeaders = new HttpHeaders();
    requestHeaders.add("Cookie", response.getHeaders()
        .getFirst(HttpHeaders.SET_COOKIE));

    response = restTemplate.exchange(getLanguageUrl(), HttpMethod.GET, 
      new HttpEntity<String>(requestHeaders), String.class);
    assertThat(response.getBody()).contains("english");

    response = restTemplate.exchange(getLogoutUrl(), HttpMethod.GET, 
      new HttpEntity<String>(requestHeaders), String.class);
    assertThat(response.getStatusCode()
        .value()).isEqualTo(200);
}

Let's decompose the steps to understand what we've done:

  • First, we check that the cache is empty
  • Next, we authenticate a user via the withBasicAuth method
  • Now we can verify the user data and language value retrieved
  • Consequently, we can verify that the user must now be in the cache
  • Again, we check the user data by hitting the language endpoint and using a session cookie
  • Finally, we verify logging out the user

In our second test, we'll verify that the user cache is cleaned when we logout. This is the moment when our logout handler will be invoked:

@Test
public void whenLogout_thenCacheIsEmpty() {
    assertThat(userCache.size()).isEqualTo(0);

    ResponseEntity<String> response = restTemplate.withBasicAuth("user", "pass")
        .getForEntity(getLanguageUrl(), String.class);

    assertThat(response.getBody()).contains("english");

    assertThat(userCache.size()).isEqualTo(1);

    HttpHeaders requestHeaders = new HttpHeaders();
    requestHeaders.add("Cookie", response.getHeaders()
        .getFirst(HttpHeaders.SET_COOKIE));

    response = restTemplate.exchange(getLogoutUrl(), HttpMethod.GET, 
      new HttpEntity<String>(requestHeaders), String.class);
    assertThat(response.getStatusCode()
        .value()).isEqualTo(200);

    assertThat(userCache.size()).isEqualTo(0);

    response = restTemplate.exchange(getLanguageUrl(), HttpMethod.GET, 
      new HttpEntity<String>(requestHeaders), String.class);
    assertThat(response.getStatusCode()
        .value()).isEqualTo(401);
}

Again, step by step:

  • As before, we begin by checking that the cache is empty
  • Then we authenticate a user and check the user is in the cache
  • Next, we perform a logout and check that the user has been removed from the cache
  • Finally, an attempt to hit the language endpoint results in a 401 Unauthorized HTTP response code

5. Conclusion

In this tutorial, we learned how to implement a custom logout handler for evicting users from a user cache using Spring’s LogoutHandler interface.

As always, the full source code of the article is available over on GitHub.

Java-R Integration


1. Overview

R is a popular programming language used for statistics. Since it has a wide variety of functions and packages available, it's not an uncommon requirement to embed R code into other languages.

In this article, we'll take a look at some of the most common ways of integrating R code into Java.

2. R Script

For our project, we'll start by implementing a very simple R function that takes a vector as input and returns the mean of its values. We'll define this in a dedicated file:

customMean <- function(vector) {
    mean(vector)
}

Throughout this tutorial, we'll use a Java helper method to read this file and return its content as a String:

String getMeanScriptContent() throws IOException, URISyntaxException {
    URI rScriptUri = RUtils.class.getClassLoader().getResource("script.R").toURI();
    Path inputScript = Paths.get(rScriptUri);
    return Files.lines(inputScript).collect(Collectors.joining());
}

Now, let's take a look at the different options we have to invoke this function from Java.

3. RCaller

The first library we're going to consider is RCaller which can execute code by spawning a dedicated R process on the local machine.

Since RCaller is available from Maven Central, we can just include it in our pom.xml:

<dependency>
    <groupId>com.github.jbytecode</groupId>
    <artifactId>RCaller</artifactId>
    <version>3.0</version>
</dependency>

Next, let's write a custom method which returns the mean of our values by using our original R script:

public double mean(int[] values) throws IOException, URISyntaxException {
    String fileContent = RUtils.getMeanScriptContent();
    RCode code = RCode.create();
    code.addRCode(fileContent);
    code.addIntArray("input", values);
    code.addRCode("result <- customMean(input)");
    RCaller caller = RCaller.create(code, RCallerOptions.create());
    caller.runAndReturnResult("result");
    return caller.getParser().getAsDoubleArray("result")[0];
}

In this method we're mainly using two objects:

  • RCode, which represents our code context, including our function, its input, and an invocation statement
  • RCaller, which lets us run our code and get the result back

It's important to notice that RCaller is not suitable for small and frequent computations because of the time it takes to start the R process. This is a noticeable drawback.

Also, RCaller works only with R installed on the local machine.

4. Renjin

Renjin is another popular solution available on the R integration landscape. It's more widely adopted, and it also offers enterprise support.

Adding Renjin to our project is a bit less trivial since we have to add the bedatadriven repository along with the Maven dependency:

<repositories>
    <repository>
        <id>bedatadriven</id>
        <name>bedatadriven public repo</name>
        <url>https://nexus.bedatadriven.com/content/groups/public/</url>
    </repository>
</repositories>

<dependencies>
    <dependency>
        <groupId>org.renjin</groupId>
        <artifactId>renjin-script-engine</artifactId>
        <version>RELEASE</version>
    </dependency>
</dependencies>

Once again, let's build a Java wrapper to our R function:

public double mean(int[] values) throws IOException, URISyntaxException, ScriptException {
    RenjinScriptEngine engine = new RenjinScriptEngine();
    String meanScriptContent = RUtils.getMeanScriptContent();
    engine.put("input", values);
    engine.eval(meanScriptContent);
    DoubleArrayVector result = (DoubleArrayVector) engine.eval("customMean(input)");
    return result.asReal();
}

As we can see, the concept is very similar to RCaller, although less verbose, since we can invoke functions directly by name using the eval method.

The main advantage of Renjin is that it doesn't require an R installation as it uses a JVM-based interpreter. However, Renjin is currently not 100% compatible with GNU R.

5. Rserve

The libraries we have reviewed so far are good choices for running code locally. But what if we want to have multiple clients invoking our R script? That's where Rserve comes into play, letting us run R code on a remote machine through a TCP server.

Setting up Rserve involves installing the related package and starting the server with our script loaded, all through the R console:

> install.packages("Rserve")
...
> library("Rserve")
> Rserve(args = "--RS-source ~/script.R")
Starting Rserve...

Next, we can include Rserve in our project by adding the usual Maven dependency:

<dependency>
    <groupId>org.rosuda.REngine</groupId>
    <artifactId>Rserve</artifactId>
    <version>1.8.1</version>
</dependency>

Finally, let's wrap our R script into a Java method. Here we'll use an RConnection object with our server address, defaulting to 127.0.0.1:6311 if not provided:

public double mean(int[] values) throws REngineException, REXPMismatchException {
    RConnection c = new RConnection();
    c.assign("input", values);
    return c.eval("customMean(input)").asDouble();
}

6. FastR

The last library we're going to talk about is FastR, a high-performance R implementation built on GraalVM. At the time of this writing, FastR is only available on Linux and Darwin x64 systems.

In order to use it, we first need to install GraalVM from the official website. After that, we need to install FastR itself using the Graal Component Updater and then run the configuration script that comes with it:

$ bin/gu install R
...
$ languages/R/bin/configure_fastr

This time our code will depend on Polyglot, the GraalVM internal API for embedding different guest languages in Java. Since Polyglot is a general API, we specify the language of the code we want to run. Also, we'll use the c R function to convert our input to a vector:

public double mean(int[] values) {
    Context polyglot = Context.newBuilder().allowAllAccess(true).build();
    String meanScriptContent = RUtils.getMeanScriptContent(); 
    polyglot.eval("R", meanScriptContent);
    Value rBindings = polyglot.getBindings("R");
    Value rInput = rBindings.getMember("c").execute(values);
    return rBindings.getMember("customMean").execute(rInput).asDouble();
}

When following this approach, keep in mind that it makes our code tightly coupled with the JVM. To learn more about GraalVM check out our article on the Graal Java JIT Compiler.

7. Conclusion

In this article, we went through some of the most popular technologies for integrating R in Java. To sum up:

  • RCaller is easier to integrate since it's available on Maven Central
  • Renjin offers enterprise support and doesn't require R to be installed on the local machine but it's not 100% compatible with GNU R
  • Rserve can be used to execute R code on a remote server
  • FastR allows seamless integration with Java but makes our code dependent on the VM and is not available for every OS

As always, all the code used in this tutorial is available over on GitHub.

Constructing a JPA Query Between Unrelated Entities


1. Overview

In this tutorial, we'll see how we can construct a JPA query between unrelated entities.

2. Maven Dependencies

Let's start by adding the necessary dependencies to our pom.xml.

First of all, we need to add a dependency for the Java Persistence API:

<dependency>
   <groupId>javax.persistence</groupId>
   <artifactId>javax.persistence-api</artifactId>
   <version>2.2</version>
</dependency>

Then, we add a dependency for the Hibernate ORM which implements the Java Persistence API:

<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-core</artifactId>
    <version>5.4.14.Final</version>
</dependency>

And finally, we add some QueryDSL dependencies; namely, querydsl-apt and querydsl-jpa:

<dependency>
    <groupId>com.querydsl</groupId>
    <artifactId>querydsl-apt</artifactId>
    <version>4.3.1</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>com.querydsl</groupId>
    <artifactId>querydsl-jpa</artifactId>
    <version>4.3.1</version>
</dependency>

3. The Domain Model

The domain of our example is a cocktail bar. Here we have two tables in the database:

  • The menu table to store the cocktails that our bar sells and their prices, and
  • The recipes table to store the instructions for creating a cocktail

These two tables are not strictly related to each other. A cocktail can be in our menu without keeping instructions for its recipe. Additionally, we could have available recipes for cocktails that we don't sell yet.

In our example, we are going to find all the cocktails on our menu for which we have an available recipe.

4. The JPA Entities

We can easily create two JPA entities to represent our tables:

@Entity
@Table(name = "menu")
public class Cocktail {
    @Id
    @Column(name = "cocktail_name")
    private String name;

    @Column
    private double price;

    // getters & setters
}
@Entity
@Table(name="recipes")
public class Recipe {
    @Id
    @Column(name = "cocktail")
    private String cocktail;

    @Column
    private String instructions;
    
    // getters & setters
}

Between the menu and recipes tables, there is an underlying one-to-one relationship without an explicit foreign key constraint. For example, if we have a menu record where its cocktail_name column's value is “Mojito” and a recipes record where its cocktail column's value is “Mojito”, then the menu record is associated with this recipes record.

To represent this relationship in our Cocktail entity, we add the recipe field annotated with various annotations:

@Entity
@Table(name = "menu")
public class Cocktail {
    // ...
 
    @OneToOne
    @NotFound(action = NotFoundAction.IGNORE)
    @JoinColumn(name = "cocktail_name", 
       referencedColumnName = "cocktail", 
       insertable = false, updatable = false, 
       foreignKey = @javax.persistence
         .ForeignKey(value = ConstraintMode.NO_CONSTRAINT))
    private Recipe recipe;
   
    // ...
}

The first annotation is @OneToOne, which declares the underlying one-to-one relationship with the Recipe entity.

Next, we annotate the recipe field with the @NotFound(action = NotFoundAction.IGNORE) Hibernate annotation. This tells our ORM not to throw an exception when a cocktail on our menu has no matching record in the recipes table.

The annotation that associates the Cocktail with its associated Recipe is @JoinColumn. By using this annotation, we define a pseudo foreign key relationship between the two entities.

Finally, by setting the foreignKey property to @javax.persistence.ForeignKey(value = ConstraintMode.NO_CONSTRAINT), we instruct the JPA provider to not generate the foreign key constraint.

5. The JPA and QueryDSL Queries

Since we are interested in retrieving the Cocktail entities that are associated with a Recipe, we can query the Cocktail entity by joining it with its associated Recipe entity.

One way we can construct the query is by using JPQL:

entityManager.createQuery("select c from Cocktail c join c.recipe")

Or by using the QueryDSL framework:

new JPAQuery<Cocktail>(entityManager)
  .from(QCocktail.cocktail)
  .join(QCocktail.cocktail.recipe)

Another way to get the desired results is to join the Cocktail with the Recipe entity and use the on clause to define the underlying relationship directly in the query.

We can do this using JPQL:

entityManager.createQuery("select c from Cocktail c join Recipe r on c.name = r.cocktail")

or by using the QueryDSL framework:

new JPAQuery(entityManager)
  .from(QCocktail.cocktail)
  .join(QRecipe.recipe)
  .on(QCocktail.cocktail.name.eq(QRecipe.recipe.cocktail))

6. One-To-One Join Unit Test

Let's start creating a unit test for testing the above queries. Before our test cases run, we have to insert some data into our database tables.

public class UnrelatedEntitiesUnitTest {
    // ...

    @BeforeAll
    public static void setup() {
        // ...

        mojito = new Cocktail();
        mojito.setName("Mojito");
        mojito.setPrice(12.12);
        ginTonic = new Cocktail();
        ginTonic.setName("Gin tonic");
        ginTonic.setPrice(10.50);
        Recipe mojitoRecipe = new Recipe(); 
        mojitoRecipe.setCocktail(mojito.getName()); 
        mojitoRecipe.setInstructions("Some instructions for making a mojito cocktail!");
        entityManager.persist(mojito);
        entityManager.persist(ginTonic);
        entityManager.persist(mojitoRecipe);
      
        // ...
    }

    // ... 
}

In the setup method, we are saving two Cocktail entities, the mojito and the ginTonic. Then, we add a recipe for how we can make a “Mojito” cocktail.

Now, we can test the results of the queries of the previous section. We know that only the mojito cocktail has an associated Recipe entity, so we expect the various queries to return only the mojito cocktail:

public class UnrelatedEntitiesUnitTest {
    // ...

    @Test
    public void givenCocktailsWithRecipe_whenQuerying_thenTheExpectedCocktailsReturned() {
        // JPA
        Cocktail cocktail = entityManager.createQuery("select c " +
          "from Cocktail c join c.recipe", Cocktail.class)
          .getSingleResult();
        verifyResult(mojito, cocktail);

        cocktail = entityManager.createQuery("select c " +
          "from Cocktail c join Recipe r " +
          "on c.name = r.cocktail", Cocktail.class).getSingleResult();
        verifyResult(mojito, cocktail);

        // QueryDSL
        cocktail = new JPAQuery<Cocktail>(entityManager).from(QCocktail.cocktail)
          .join(QCocktail.cocktail.recipe)
          .fetchOne();
        verifyResult(mojito, cocktail);

        cocktail = new JPAQuery<Cocktail>(entityManager).from(QCocktail.cocktail)
          .join(QRecipe.recipe)
          .on(QCocktail.cocktail.name.eq(QRecipe.recipe.cocktail))
          .fetchOne();
        verifyResult(mojito, cocktail);
    }

    private void verifyResult(Cocktail expectedCocktail, Cocktail queryResult) {
        assertNotNull(queryResult);
        assertEquals(expectedCocktail, queryResult);
    }

    // ...
}

The verifyResult method helps us to verify that the result returned from the query is equal to the expected result.

7. One-To-Many Underlying Relationship

Let's change the domain of our example to show how we can join two entities with a one-to-many underlying relationship.


Instead of the recipes table, we have the multiple_recipes table, where we can store as many recipes as we want for the same cocktail:

@Entity
@Table(name = "multiple_recipes")
public class MultipleRecipe {
    @Id
    @Column(name = "id")
    private Long id;

    @Column(name = "cocktail")
    private String cocktail;

    @Column(name = "instructions")
    private String instructions;

    // getters & setters
}

Now, the Cocktail entity is associated with the MultipleRecipe entity by a one-to-many underlying relationship:

@Entity
@Table(name = "cocktails")
public class Cocktail {    
    // ...

    @OneToMany
    @NotFound(action = NotFoundAction.IGNORE)
    @JoinColumn(
       name = "cocktail", 
       referencedColumnName = "cocktail_name", 
       insertable = false, 
       updatable = false, 
       foreignKey = @javax.persistence
         .ForeignKey(value = ConstraintMode.NO_CONSTRAINT))
    private List<MultipleRecipe> recipeList;

    // getters & setters
}

To find and get the Cocktail entities for which we have at least one available MultipleRecipe, we can query the Cocktail entity by joining it with its associated MultipleRecipe entities.

We can do this using JPQL:

entityManager.createQuery("select c from Cocktail c join c.recipeList");

or by using the QueryDSL framework:

new JPAQuery(entityManager).from(QCocktail.cocktail)
  .join(QCocktail.cocktail.recipeList);

There is also the option to not use the recipeList field, which defines the one-to-many relationship between the Cocktail and MultipleRecipe entities. Instead, we can write a join query for the two entities and determine their underlying relationship by using the JPQL “on” clause:

entityManager.createQuery("select c "
  + "from Cocktail c join MultipleRecipe mr "
  + "on mr.cocktail = c.name");

Finally, we can construct the same query by using the QueryDSL framework:

new JPAQuery(entityManager).from(QCocktail.cocktail)
  .join(QMultipleRecipe.multipleRecipe)
  .on(QCocktail.cocktail.name.eq(QMultipleRecipe.multipleRecipe.cocktail));

8. One-To-Many Join Unit Test

Here, we'll add a new test case for testing the previous queries. Before doing so, we have to persist some MultipleRecipe instances during our setup method:

public class UnrelatedEntitiesUnitTest {    
    // ...

    @BeforeAll
    public static void setup() {
        // ...
        
        MultipleRecipe firstMojitoRecipe = new MultipleRecipe();
        firstMojitoRecipe.setId(1L);
        firstMojitoRecipe.setCocktail(mojito.getName());
        firstMojitoRecipe.setInstructions("The first recipe of making a mojito!");
        entityManager.persist(firstMojitoRecipe);
        MultipleRecipe secondMojitoRecipe = new MultipleRecipe();
        secondMojitoRecipe.setId(2L);
        secondMojitoRecipe.setCocktail(mojito.getName());
        secondMojitoRecipe.setInstructions("The second recipe of making a mojito!"); 
        entityManager.persist(secondMojitoRecipe);
       
        // ...
    }

    // ... 
}

We can then develop a test case, where we verify that when the queries we showed in the previous section are executed, they return the Cocktail entities that are associated with at least one MultipleRecipe instance:

public class UnrelatedEntitiesUnitTest {
    // ...
    
    @Test
    public void givenCocktailsWithMultipleRecipes_whenQuerying_thenTheExpectedCocktailsReturned() {
        // JPQL
        Cocktail cocktail = entityManager.createQuery("select c "
          + "from Cocktail c join c.recipeList", Cocktail.class)
          .getSingleResult();
        verifyResult(mojito, cocktail);

        cocktail = entityManager.createQuery("select c "
          + "from Cocktail c join MultipleRecipe mr "
          + "on mr.cocktail = c.name", Cocktail.class)
          .getSingleResult();
        verifyResult(mojito, cocktail);

        // QueryDSL
        cocktail = new JPAQuery<Cocktail>(entityManager).from(QCocktail.cocktail)
          .join(QCocktail.cocktail.recipeList)
          .fetchOne();
        verifyResult(mojito, cocktail);

        cocktail = new JPAQuery<Cocktail>(entityManager).from(QCocktail.cocktail)
          .join(QMultipleRecipe.multipleRecipe)
          .on(QCocktail.cocktail.name.eq(QMultipleRecipe.multipleRecipe.cocktail))
          .fetchOne();
        verifyResult(mojito, cocktail);
    }

    // ...

}

9. Many-To-Many Underlying Relationship

In this section, we choose to categorize our cocktails in our menu by their base ingredient. For example, the base ingredient of the mojito cocktail is the rum, so the rum is a cocktail category in our menu.

To depict the above in our domain, we add the category field into the Cocktail entity:

@Entity
@Table(name = "menu")
public class Cocktail {
    // ...

    @Column(name = "category")
    private String category;
    
     // ...
}

Also, we can add the base_ingredient column to the multiple_recipes table to be able to search for recipes based on a specific drink:

@Entity
@Table(name = "multiple_recipes")
public class MultipleRecipe {
    // ...
    
    @Column(name = "base_ingredient")
    private String baseIngredient;
    
    // ...
}

After the above changes, our database schema is updated accordingly.

Now, we have a many-to-many underlying relationship between the Cocktail and MultipleRecipe entities: many MultipleRecipe entities can be associated with many Cocktail entities whose category value is equal to the baseIngredient value of those MultipleRecipe entities.

To find and get the MultipleRecipe entities whose baseIngredient exists as a category in the Cocktail entities, we can join these two entities by using JPQL:

entityManager.createQuery("select distinct r " 
  + "from MultipleRecipe r " 
  + "join Cocktail c " 
  + "on r.baseIngredient = c.category", MultipleRecipe.class)

Or by using QueryDSL:

QCocktail cocktail = QCocktail.cocktail; 
QMultipleRecipe multipleRecipe = QMultipleRecipe.multipleRecipe; 
new JPAQuery(entityManager).from(multipleRecipe)
  .join(cocktail)
  .on(multipleRecipe.baseIngredient.eq(cocktail.category))
  .fetch();

10. Many-To-Many Join Unit Test

Before proceeding with our test case we have to set the category of our Cocktail entities and the baseIngredient of our MultipleRecipe entities:

public class UnrelatedEntitiesUnitTest {
    // ...

    @BeforeAll
    public static void setup() {
        // ...

        mojito.setCategory("Rum");
        ginTonic.setCategory("Gin");
        firstMojitoRecipe.setBaseIngredient(mojito.getCategory());
        secondMojitoRecipe.setBaseIngredient(mojito.getCategory());

        // ...
    }

    // ... 
}

Then, we can verify that when the queries we showed previously are executed, they return the expected results:

public class UnrelatedEntitiesUnitTest {
    // ...

    @Test
    public void givenMultipleRecipesWithCocktails_whenQuerying_thenTheExpectedMultipleRecipesReturned() {
        Consumer<List<MultipleRecipe>> verifyResult = recipes -> {
            assertEquals(2, recipes.size());
            recipes.forEach(r -> assertEquals(mojito.getName(), r.getCocktail()));
        };

        // JPQL
        List<MultipleRecipe> recipes = entityManager.createQuery("select distinct r "
          + "from MultipleRecipe r "
          + "join Cocktail c " 
          + "on r.baseIngredient = c.category",
          MultipleRecipe.class).getResultList();
        verifyResult.accept(recipes);

        // QueryDSL
        QCocktail cocktail = QCocktail.cocktail;
        QMultipleRecipe multipleRecipe = QMultipleRecipe.multipleRecipe;
        recipes = new JPAQuery<MultipleRecipe>(entityManager).from(multipleRecipe)
          .join(cocktail)
          .on(multipleRecipe.baseIngredient.eq(cocktail.category))
          .fetch();
        verifyResult.accept(recipes);
    }

    // ...
}

11. Conclusion

In this tutorial, we presented various ways of constructing JPA queries between unrelated entities, using either JPQL or the QueryDSL framework.

As always, the code is available over on GitHub.

Invoking a SOAP Web Service in Java


1. Overview

In this tutorial, we'll learn how to build a SOAP client in Java with JAX-WS RI. First, we'll generate the client code using the wsimport utility, and then we'll test it using JUnit.

For those starting out, our introduction to JAX-WS provides great background on the subject.

2. The Web Service

Before we start building a client, we need a server. In this case, a server exposing a JAX-WS web service.

For the purpose of this tutorial, we'll use a web service which will fetch us a country's data, given its name.

2.1. Summary of Implementation

Since we're focusing on building the client, we won't get into the implementation details of our service.

Suffice it to say that an interface, CountryService, is used to expose the web service to the external world. To keep things simple, we'll build and deploy the web service using the javax.xml.ws.Endpoint API in our class CountryServicePublisher.

We'll run CountryServicePublisher as a Java application to publish an endpoint that'll accept the incoming requests. In other words, this will be our server.
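
For illustration only (CountryServiceImpl is an assumed name for the implementation class, which isn't shown here), publishing such an endpoint with the Endpoint API boils down to a single call:

public class CountryServicePublisher {
    public static void main(String[] args) {
        // publish the service implementation at the address the client will later use;
        // the implementation class name is only an assumption
        Endpoint.publish("http://localhost:8888/ws/country", new CountryServiceImpl());
    }
}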

After starting the server, hitting the URL http://localhost:8888/ws/country?wsdl gives us the web service description file. The WSDL acts as a guide to understand the service's offerings and generate implementation code for the client.

2.2. The Web Services Description Language

Let's look at our web service's WSDL, country:

<?xml version="1.0" encoding="UTF-8"?>
<definitions <!-- namespace declarations -->
    targetNamespace="http://server.ws.soap.baeldung.com/" name="CountryServiceImplService">
    <types>
        <xsd:schema>
            <xsd:import namespace="http://server.ws.soap.baeldung.com/" 
              schemaLocation="http://localhost:8888/ws/country?xsd=1"></xsd:import>
        </xsd:schema>
    </types>
    <message name="findByName">
        <part name="arg0" type="xsd:string"></part>
    </message>
    <message name="findByNameResponse">
        <part name="return" type="tns:country"></part>
    </message>
    <portType name="CountryService">
        <operation name="findByName">
            <input wsam:Action="http://server.ws.soap.baeldung.com/CountryService/findByNameRequest" 
              message="tns:findByName"></input>
            <output wsam:Action="http://server.ws.soap.baeldung.com/CountryService/findByNameResponse" 
              message="tns:findByNameResponse"></output>
        </operation>
    </portType>
    <binding name="CountryServiceImplPortBinding" type="tns:CountryService">
        <soap:binding transport="http://schemas.xmlsoap.org/soap/http" style="rpc"></soap:binding>
        <operation name="findByName">
            <soap:operation soapAction=""></soap:operation>
            <input>
                <soap:body use="literal" namespace="http://server.ws.soap.baeldung.com/"></soap:body>
            </input>
            <output>
                <soap:body use="literal" namespace="http://server.ws.soap.baeldung.com/"></soap:body>
            </output>
        </operation>
    </binding>
    <service name="CountryServiceImplService">
        <port name="CountryServiceImplPort" binding="tns:CountryServiceImplPortBinding">
            <soap:address location="http://localhost:8888/ws/country"></soap:address>
        </port>
    </service>
</definitions>

In a nutshell, this is the useful information it provides:

  • we can invoke the method findByName with a string argument
  • in response, the service will return us a custom type of country
  • types are defined in an xsd schema generated at the location http://localhost:8888/ws/country?xsd=1:
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema <!-- namespace declarations -->
    targetNamespace="http://server.ws.soap.baeldung.com/">
    <xs:complexType name="country">
        <xs:sequence>
            <xs:element name="capital" type="xs:string" minOccurs="0"></xs:element>
            <xs:element name="currency" type="tns:currency" minOccurs="0"></xs:element>
            <xs:element name="name" type="xs:string" minOccurs="0"></xs:element>
            <xs:element name="population" type="xs:int"></xs:element>
        </xs:sequence>
    </xs:complexType>
    <xs:simpleType name="currency">
        <xs:restriction base="xs:string">
            <xs:enumeration value="EUR"></xs:enumeration>
            <xs:enumeration value="INR"></xs:enumeration>
            <xs:enumeration value="USD"></xs:enumeration>
        </xs:restriction>
    </xs:simpleType>
</xs:schema>

That's all we need to implement a client.

Let's see how in the next section.

3. Using wsimport to Generate Client Code

3.1. Maven Plugin

First, let's add a plugin to our pom.xml to use this tool via Maven:

<plugin> 
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>jaxws-maven-plugin</artifactId>
    <version>2.6</version>
    <executions> 
        <execution> 
            <id>wsimport-from-jdk</id>
            <goals>
                <goal>wsimport</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <wsdlUrls>
            <wsdlUrl>http://localhost:8888/ws/country?wsdl</wsdlUrl> 
        </wsdlUrls>
        <keep>true</keep> 
        <packageName>com.baeldung.soap.ws.client.generated</packageName> 
        <sourceDestDir>src/main/java</sourceDestDir>
    </configuration>
</plugin>

Second, let's execute this plugin:

mvn clean jaxws:wsimport

That's all! The above command will generate code in the specified package com.baeldung.soap.ws.client.generated inside the sourceDestDir we provided in the plugin configuration.

Another way to achieve the same would be to use the wsimport utility. It comes out of the box with the standard JDK 8 distribution and can be found under JAVA_HOME/bin directory.

To generate client code using wsimport, we can navigate to the project's root, and run this command:

JAVA_HOME/bin/wsimport -s src/main/java/ -keep -p com.baeldung.soap.ws.client.generated "http://localhost:8888/ws/country?wsdl"

It's important to bear in mind that the service endpoint should be available in order to successfully execute the plugin or command.

Next, let's look at the generated artifacts.

3.2. Generated POJOs

Based on the xsd we saw earlier, the tool will generate a file named Country.java:

@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "country", propOrder = { "capital", "currency", "name", "population" })
public class Country {
    protected String capital;
    @XmlSchemaType(name = "string")
    protected Currency currency;
    protected String name;
    protected int population;
    // standard getters and setters
}

As we can see, the generated class is decorated with JAXB annotations for marshalling and unmarshalling the object to and from XML.

Also, it generates a Currency enum:

@XmlType(name = "currency")
@XmlEnum
public enum Currency {
    EUR, INR, USD;
    public String value() {
        return name();
    }
    public static Currency fromValue(String v) {
        return valueOf(v);
    }
}

3.3. CountryService

The second generated artifact is an interface that acts as a proxy to the actual web service.

The interface CountryService declares the same method as our server, findByName:

@WebService(name = "CountryService", targetNamespace = "http://server.ws.soap.baeldung.com/")
@SOAPBinding(style = SOAPBinding.Style.RPC)
@XmlSeeAlso({ ObjectFactory.class })
public interface CountryService {
    @WebMethod
    @WebResult(partName = "return")
    @Action(input = "http://server.ws.soap.baeldung.com/CountryService/findByNameRequest", 
      output = "http://server.ws.soap.baeldung.com/CountryService/findByNameResponse")
    public Country findByName(@WebParam(name = "arg0", partName = "arg0") String arg0);
}

Notably, the interface is marked as a javax.jws.WebService, with a SOAPBinding.Style as RPC as defined by the service's WSDL.

The method findByName is annotated to declare that it's a javax.jws.WebMethod, with its expected input and output parameter types.

3.4. CountryServiceImplService

Our next generated class, CountryServiceImplService, extends javax.xml.ws.Service. Its annotation WebServiceClient denotes that it is the client view of a service:

@WebServiceClient(name = "CountryServiceImplService", 
  targetNamespace = "http://server.ws.soap.baeldung.com/", 
  wsdlLocation = "http://localhost:8888/ws/country?wsdl")
public class CountryServiceImplService extends Service {

    private final static URL COUNTRYSERVICEIMPLSERVICE_WSDL_LOCATION;
    private final static WebServiceException COUNTRYSERVICEIMPLSERVICE_EXCEPTION;
    private final static QName COUNTRYSERVICEIMPLSERVICE_QNAME = 
      new QName("http://server.ws.soap.baeldung.com/", "CountryServiceImplService");

    static {
        URL url = null;
        WebServiceException e = null;
        try {
            url = new URL("http://localhost:8888/ws/country?wsdl");
        } catch (MalformedURLException ex) {
            e = new WebServiceException(ex);
        }
        COUNTRYSERVICEIMPLSERVICE_WSDL_LOCATION = url;
        COUNTRYSERVICEIMPLSERVICE_EXCEPTION = e;
    }

    public CountryServiceImplService() {
        super(__getWsdlLocation(), COUNTRYSERVICEIMPLSERVICE_QNAME);
    }

    // other constructors 

    @WebEndpoint(name = "CountryServiceImplPort")
    public CountryService getCountryServiceImplPort() {
        return super.getPort(new QName("http://server.ws.soap.baeldung.com/", "CountryServiceImplPort"), 
          CountryService.class);
    }

    private static URL __getWsdlLocation() {
        if (COUNTRYSERVICEIMPLSERVICE_EXCEPTION != null) {
            throw COUNTRYSERVICEIMPLSERVICE_EXCEPTION;
        }
        return COUNTRYSERVICEIMPLSERVICE_WSDL_LOCATION;
    }

}

The important method to note here is getCountryServiceImplPort. Given the qualified name (QName) of the service endpoint and the service endpoint interface class, it returns a proxy instance for the port.

To invoke the web service, we need to use this proxy, as we'll see shortly.

Using a proxy makes it seem as if we are calling a service locally, abstracting away the intricacies of remote invocation.

4. Testing the Client

Next, we'll write a JUnit test to connect to the web service using the generated client code.

Before we can do that, we need to get the service's proxy instance at the client end:

private static CountryService countryService;

@BeforeClass
public static void setup() {
    CountryServiceImplService service = new CountryServiceImplService();
    // keep the proxy in a static field so the test methods can use it
    countryService = service.getCountryServiceImplPort();
}

For more advanced scenarios such as enabling or disabling a WebServiceFeature, we can use other generated constructors for CountryServiceImplService.
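
For example, assuming the generated class includes the usual WebServiceFeature-based varargs constructor (which depends on the wsimport version used), enabling WS-Addressing might look like this:

// a hedged sketch – the feature-accepting constructor is typically generated by wsimport
CountryServiceImplService service = new CountryServiceImplService(new AddressingFeature(true));
CountryService countryService = service.getCountryServiceImplPort();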

Now let's look at some tests:

@Test
public void givenCountryService_whenCountryIndia_thenCapitalIsNewDelhi() {
    assertEquals("New Delhi", countryService.findByName("India").getCapital());
}

@Test
public void givenCountryService_whenCountryFrance_thenPopulationCorrect() {
    assertEquals(66710000, countryService.findByName("France").getPopulation());
}

@Test
public void givenCountryService_whenCountryUSA_thenCurrencyUSD() {
    assertEquals(Currency.USD, countryService.findByName("USA").getCurrency());
}

As we can see, invoking the remote service's methods became as simple as calling methods locally. The proxy's findByName method returned a Country instance matching the name we provided. Then, we used various getters of the POJO to assert expected values.

5. Conclusion

In this tutorial, we saw how to invoke a SOAP web service in Java using JAX-WS RI and the wsimport utility.

Alternatively, we can use other JAX-WS implementations such as Apache CXF, Apache Axis2, and Spring to do the same.

As always, source code is available over on GitHub.

Using Multiple Cache Managers in Spring

1. Overview

In this tutorial, we'll learn how we can configure multiple cache managers in a Spring application.

2. Caching

Spring applies caching to methods so that our application doesn't execute the same method multiple times for the same input.

It's very easy to implement caching in a Spring application. This can be done by adding the @EnableCaching annotation in our configuration class:

@Configuration
@EnableCaching
public class MultipleCacheManagerConfig {}

Then we can start caching the output of a method by adding the @Cacheable annotation on the method:

@Cacheable(cacheNames = "customers")
public Customer getCustomerDetail(Integer customerId) {
    return customerDetailRepository.getCustomerDetail(customerId);
}

As soon as we add the above configuration, Spring Boot itself creates a cache manager for us.

By default, it uses ConcurrentHashMap as the underlying cache if we've not specified any other explicitly.
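
As a quick sanity check, we can inject the auto-configured manager and verify its type. The small test below is only an illustration and assumes no other caching library is on the classpath:

@Autowired
private CacheManager cacheManager;

@Test
public void givenNoOtherCacheProvider_whenBooted_thenConcurrentMapCacheManagerIsUsed() {
    // Spring Boot falls back to the simple ConcurrentHashMap-based manager
    assertTrue(cacheManager instanceof ConcurrentMapCacheManager);
}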

3. Configuring Multiple Cache Managers

In some cases, we might need to use more than one cache manager in our application. So, let's see how we can do this in our Spring Boot application using an example.

In our example, we'll use a CaffeineCacheManager and a simple ConcurrentMapCacheManager.

CaffeineCacheManager is provided by the spring-boot-starter-cache starter. Spring auto-configures it if Caffeine, a caching library written in Java 8, is present on the classpath.

ConcurrentMapCacheManager backs its caches with a ConcurrentHashMap.

We can do this in the following ways.

3.1. Using @Primary

We can create two beans of cache managers in our configuration class. Then, we can make one bean primary:

@Configuration
@EnableCaching
public class MultipleCacheManagerConfig {

    @Bean
    @Primary
    public CacheManager cacheManager() {
        CaffeineCacheManager cacheManager = new CaffeineCacheManager("customers", "orders");
        cacheManager.setCaffeine(Caffeine.newBuilder()
          .initialCapacity(200)
          .maximumSize(500)
          .weakKeys()
          .recordStats());
        return cacheManager;
    }

    @Bean
    public CacheManager alternateCacheManager() {
        return new ConcurrentMapCacheManager("customerOrders", "orderprice");
    }
}

Now, Spring Boot will use CaffeineCacheManager as the default for all methods unless we explicitly specify our alternateCacheManager for a particular method:

@Cacheable(cacheNames = "customers")
public Customer getCustomerDetail(Integer customerId) {
    return customerDetailRepository.getCustomerDetail(customerId);
}

@Cacheable(cacheNames = "customerOrders", cacheManager = "alternateCacheManager")
public List<Order> getCustomerOrders(Integer customerId) {
    return customerDetailRepository.getCustomerOrders(customerId);
}

In the above example, our application will use CaffeineCacheManager for the getCustomerDetail() method, and alternateCacheManager for the getCustomerOrders() method.

3.2. Extending CachingConfigurerSupport

Another way we can do this is by extending the CachingConfigurerSupport class and by overriding the cacheManager() method. This method returns a bean which will be the default cache manager for our application:

@Configuration
@EnableCaching
public class MultipleCacheManagerConfig extends CachingConfigurerSupport {

    @Bean
    public CacheManager cacheManager() {
        CaffeineCacheManager cacheManager = new CaffeineCacheManager("customers", "orders");
        cacheManager.setCaffeine(Caffeine.newBuilder()
          .initialCapacity(200)
          .maximumSize(500)
          .weakKeys()
          .recordStats());
        return cacheManager;
    }

    @Bean
    public CacheManager alternateCacheManager() {
        return new ConcurrentMapCacheManager("customerOrders", "orderprice");
    }
}

Note that we can still create another bean called alternateCacheManager. We can use this alternateCacheManager for a method by explicitly specifying it, as we did in the last example.

3.3. Using CacheResolver

We can implement the CacheResolver interface and create a custom CacheResolver:

public class MultipleCacheResolver implements CacheResolver {
    
    private final CacheManager simpleCacheManager;
    private final CacheManager caffeineCacheManager;    
    private static final String ORDER_CACHE = "orders";    
    private static final String ORDER_PRICE_CACHE = "orderprice";
    
    public MultipleCacheResolver(CacheManager simpleCacheManager,CacheManager caffeineCacheManager) {
        this.simpleCacheManager = simpleCacheManager;
        this.caffeineCacheManager=caffeineCacheManager;
        
    }

    @Override
    public Collection<? extends Cache> resolveCaches(CacheOperationInvocationContext<?> context) {
        Collection<Cache> caches = new ArrayList<Cache>();
        if ("getOrderDetail".equals(context.getMethod().getName())) {
            caches.add(caffeineCacheManager.getCache(ORDER_CACHE));
        } else {
            caches.add(simpleCacheManager.getCache(ORDER_PRICE_CACHE));
        }
        return caches;
    }
}

In this case, we've got to override the resolveCaches method of the CacheResolver interface.

In our example, we're selecting a cache manager based on the method name. After this, we need to create a bean of our custom CacheResolver:

@Configuration
@EnableCaching
public class MultipleCacheManagerConfig extends CachingConfigurerSupport {

    @Bean
    public CacheManager cacheManager() {
        CaffeineCacheManager cacheManager = new CaffeineCacheManager("customers", "orders");
        cacheManager.setCaffeine(Caffeine.newBuilder()
          .initialCapacity(200)
          .maximumSize(500)
          .weakKeys()
          .recordStats());
        return cacheManager;
    }

    @Bean
    public CacheManager alternateCacheManager() {
        return new ConcurrentMapCacheManager("customerOrders", "orderprice");
    }

    @Bean
    public CacheResolver cacheResolver() {
        return new MultipleCacheResolver(alternateCacheManager(), cacheManager());
    }
}

Now we can use our custom CacheResolver to resolve a cache manager for our methods:

@Component
public class OrderDetailBO {

    @Autowired
    private OrderDetailRepository orderDetailRepository;

    @Cacheable(cacheNames = "orders", cacheResolver = "cacheResolver")
    public Order getOrderDetail(Integer orderId) {
        return orderDetailRepository.getOrderDetail(orderId);
    }

    @Cacheable(cacheNames = "orderprice", cacheResolver = "cacheResolver")
    public double getOrderPrice(Integer orderId) {
        return orderDetailRepository.getOrderPrice(orderId);
    }
}

Here, we're passing the name of our CacheResolver bean in the cacheResolver element.

4. Conclusion

In this article, we learned how we can enable caching in our Spring Boot application. Then, we learned three ways by which we can use multiple cache managers in our application.

As always, the code for these examples is available over on GitHub.

Invoking a SOAP Web Service in Spring

1. Overview

Previously, we saw how to create a SOAP web service with Spring.

In this tutorial, we'll learn how to create a Spring-based client to consume this web service.

Previously, in Invoking a SOAP Web Service in Java, we did the same using JAX-WS RI.

2. The Spring SOAP Web Service – a Quick Recap

Earlier, we had created a web service in Spring to fetch a country’s data, given its name. Before delving into the client implementation, let's do a quick recap of how we'd done that.

Following the contract-first approach, we first wrote an XML schema file defining the domain. We then used this XSD to generate classes for the request, response, and data model using the jaxb2-maven-plugin.

After that, we coded the four server-side classes that make up the service.

Finally, we tested it via cURL by sending a SOAP request.

Now let's start the server by running the above Boot app and move on to the next step.

3. The Client

Here, we're going to build a Spring client to invoke and test the above web service.

Now, let's see step-by-step what all we need to do in order to create a client.

3.1. Generate Client Code

First, we'll generate a few classes using the WSDL available at http://localhost:8080/ws/countries.wsdl. We'll download and save this in our src/main/resources folder.

To generate code using Maven, we'll add the maven-jaxb2-plugin to our pom.xml:

<plugin> 
    <groupId>org.jvnet.jaxb2.maven2</groupId>
    <artifactId>maven-jaxb2-plugin</artifactId>
    <version>0.14.0</version>
    <executions>
         <execution>
              <goals>
                  <goal>generate</goal>
              </goals>
         </execution>
    </executions>
    <configuration>
          <schemaLanguage>WSDL</schemaLanguage>
          <generateDirectory>${project.basedir}/src/main/java</generateDirectory>
          <generatePackage>com.baeldung.springsoap.client.gen</generatePackage>
          <schemaDirectory>${project.basedir}/src/main/resources</schemaDirectory>
          <schemaIncludes>
             <include>countries.wsdl</include>
          </schemaIncludes>
    </configuration>
</plugin>

Notably, in the plugin configuration we defined:

  • generateDirectory – the folder where the generated artifacts will be saved
  • generatePackage – the package name that the artifacts will use
  • schemaDirectory and schemaIncludes – the directory and file name for the WSDL

To carry out the JAXB generation process, we'll execute this plugin by simply building the project:

mvn compile

Interestingly, the artifacts generated here are the same as those generated for the service.

Let's list down the ones we'll be using:

  • Country.java and Currency.java – POJOs representing the data model
  • GetCountryRequest.java – the request type
  • GetCountryResponse.java – the response type

The service might be deployed anywhere in the world, and with just its WSDL, we were able to generate the same classes at the client end as the server!

3.2. CountryClient

Next, we need to extend Spring's WebServiceGatewaySupport to interact with the web service.

We'll call this class CountryClient:

public class CountryClient extends WebServiceGatewaySupport {

    public GetCountryResponse getCountry(String country) {
        GetCountryRequest request = new GetCountryRequest();
        request.setName(country);

        GetCountryResponse response = (GetCountryResponse) getWebServiceTemplate()
          .marshalSendAndReceive(request);
        return response;
    }
}

Here, we defined a single method getCountry, corresponding to the operation that the web service had exposed. In the method, we created a GetCountryRequest instance and invoked the web service to get a GetCountryResponse. In other words, here's where we performed the SOAP exchange.

As we can see, Spring made the invocation pretty straightforward with its WebServiceTemplate. We used the template's method marshalSendAndReceive to perform the SOAP exchange.

The XML conversions are handled here via a plugged-in Marshaller.

Now let's look at the configuration where this Marshaller is coming from.

3.3. CountryClientConfig

All we need to configure our Spring WS client are two beans.

First, a Jaxb2Marshaller to convert messages to and from XML, and second, our CountryClient, which will wire in the marshaller bean:

@Configuration
public class CountryClientConfig {

    @Bean
    public Jaxb2Marshaller marshaller() {
        Jaxb2Marshaller marshaller = new Jaxb2Marshaller();
        marshaller.setContextPath("com.baeldung.springsoap.client.gen");
        return marshaller;
    }
    @Bean
    public CountryClient countryClient(Jaxb2Marshaller marshaller) {
        CountryClient client = new CountryClient();
        client.setDefaultUri("http://localhost:8080/ws");
        client.setMarshaller(marshaller);
        client.setUnmarshaller(marshaller);
        return client;
    }
}

Here, we need to take care that the marshaller's context path is the same as the generatePackage specified in the plugin configuration of our pom.xml.

Please also notice the default URI for the client here. It's set as the soap:address location specified in the WSDL.

4. Testing the Client

Next, we'll write a JUnit test to verify that our client is functioning as expected:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = CountryClientConfig.class, loader = AnnotationConfigContextLoader.class)
public class ClientLiveTest {

    @Autowired
    CountryClient client;

    @Test
    public void givenCountryService_whenCountryPoland_thenCapitalIsWarsaw() {
        GetCountryResponse response = client.getCountry("Poland");
        assertEquals("Warsaw", response.getCountry().getCapital());
    }

    @Test
    public void givenCountryService_whenCountrySpain_thenCurrencyEUR() {
        GetCountryResponse response = client.getCountry("Spain");
        assertEquals(Currency.EUR, response.getCountry().getCurrency());
    }
}

As we can see, we wired in the CountryClient bean defined in our CountryClientConfig. Then, we used its getCountry to invoke the remote service as described earlier.

Moreover, we were able to extract the information we needed for our assertions using the generated data model POJOs, Country, and Currency.

5. Conclusion

In this tutorial, we saw the basics of how to invoke a SOAP web service using Spring WS.

We merely scratched the surface of what Spring has to offer in the SOAP web services area; there's lots to explore.

As always, source code is available over on GitHub.


Java Weekly, Issue 331

1. Spring and Java

>> Spring Tips: Configuration [spring.io]

Handy tips from the experts on getting the most out of Spring's Environment abstraction through application.properties.

>> An introductory guide to annotations and annotation processors [blog.frankel.ch]

A good write-up that aims to lift the shroud of mystery surrounding Java annotations.

>> Thread Local Randoms in Java [alidg.me]

An exercise in implementing a highly-performant, thread-safe random number generator.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Mainline [martinfowler.com] and >> Feature Branching [martinfowler.com] and >> Mainline Integration [martinfowler.com] and >> Healthy Branch [martinfowler.com]

The series on common branching patterns for source code management continues.

Also worth reading:

3. Musings

>> Continuing our investment in Africa: Introducing the AWS Africa (Cape Town) Region [allthingsdistributed.com]

The new AWS region should be a big help to African businesses and government organizations, many of whom are fighting the effects of COVID-19.

>> Generic VC Advice Letter for COVID Times [diegobasch.com]

This just cracked me up, so I had to include it.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Stress Can Kill You [dilbert.com]

>> Working At Home [dilbert.com]

>> Social Distancing [dilbert.com]

5. Pick of the Week

>> STOP!! You don’t need Microservices. [medium.com]

Mapping Lists with ModelMapper

1. Overview

In this tutorial, we'll explain how to map lists of different element types using the ModelMapper framework. This involves using generic types in Java as a solution to convert different types of data from one list to another.

2. Model Mapper

The main role of ModelMapper is to map objects by determining how one object model maps to another, for example, to a Data Transfer Object (DTO).

In order to use ModelMapper, we start by adding the dependency to our pom.xml:

<dependency> 
    <groupId>org.modelmapper</groupId>
    <artifactId>modelmapper</artifactId>
    <version>2.3.7</version>
</dependency>

2.1. Configuration

ModelMapper provides a variety of configuration options to simplify the mapping process. We customize the mapping behavior by enabling or disabling the appropriate properties in the configuration. For example, it's a common practice to set the fieldMatchingEnabled property to true and allow private field matching:

modelMapper.getConfiguration()
  .setFieldMatchingEnabled(true)
  .setFieldAccessLevel(Configuration.AccessLevel.PRIVATE);

By doing so, ModelMapper can compare private fields in the mapped classes. In this configuration, it's not strictly necessary that fields with the same names exist in both classes. Several matching strategies are allowed. By default, the standard matching strategy requires all source and destination property name tokens to be matched, in any order, which is ideal for our scenario.
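
If we ever need a different behavior, we can switch the strategy explicitly. Here's a minimal sketch, where the LOOSE strategy is used purely as an illustration:

ModelMapper modelMapper = new ModelMapper();
modelMapper.getConfiguration()
  .setMatchingStrategy(MatchingStrategies.LOOSE); // STANDARD is the default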

2.2. Type Token

ModelMapper uses TypeToken to map generic types. To see why this is necessary, let's see what happens when we map an Integer list to a Character list:

List<Integer> integers = new ArrayList<Integer>();
integers.add(1);
integers.add(2);
integers.add(3);

List<Character> characters = new ArrayList<Character>();
modelMapper.map(integers, characters);

However, if we print out the elements of the characters list, we'll see that it's empty. This is due to type erasure: at runtime, ModelMapper has no way of determining the intended element type of the destination list.

If we change our map call to use TypeToken, though, we can create a type literal for List<Character>:

List<Character> characters 
    = modelMapper.map(integers, new TypeToken<List<Character>>() {}.getType());

Here, the TypeToken anonymous inner class preserves the List<Character> parameter type at runtime, and this time our conversion is successful.

3. Using Custom Type Mapping

Lists in Java can be mapped using custom element types.

For example, let's say we want to map a list of User entities to a UserDTO list. To achieve this, we'll call map for each element:

List<UserDTO> dtos = users
  .stream()
  .map(user -> modelMapper.map(user, UserDTO.class))
  .collect(Collectors.toList());

Of course, with some more work, we could make a general-purpose parameterized method:

<S, T> List<T> mapList(List<S> source, Class<T> targetClass) {
    return source
      .stream()
      .map(element -> modelMapper.map(element, targetClass))
      .collect(Collectors.toList());
}

So, then, we could instead do:

List<UserDTO> userDtoList = mapList(users, UserDTO.class);

4. Type Map and Property Mapping

Specific properties such as lists or sets can be added to the User-UserDTO model. TypeMap provides a method for explicitly defining the mapping of these properties. The TypeMap object stores mapping information of specific types (classes):

TypeMap<UserList, UserListDTO> typeMap = modelMapper.createTypeMap(UserList.class, UserListDTO.class);

The UserList class contains a collection of Users. Here, we want to map the usernames from this collection into the list property of the UserListDTO class. To achieve this, we'll first create a UsersListConverter class, with List<User> and List<String> as its source and destination parameter types:

public class UsersListConverter extends AbstractConverter<List<User>, List<String>> {

    @Override
    protected List<String> convert(List<User> users) {

        return users
          .stream()
          .map(User::getUsername)
          .collect(Collectors.toList());
    }
}

On the created TypeMap object, we explicitly add the property mapping by passing in an instance of the UsersListConverter class:

typeMap.addMappings(mapper -> mapper.using(new UsersListConverter())
  .map(UserList::getUsers, UserListDTO::setUsernames));

Inside the addMappings method, expression mapping allows us to define the source-to-destination property mapping with lambda expressions. Finally, the converter turns the list of users into the resulting list of usernames.
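
To see the converter in action, here's a hedged usage sketch; the setUsers and getUsernames accessors are assumptions about the UserList and UserListDTO classes:

UserList userList = new UserList();
userList.setUsers(users);

// the TypeMap registered above is picked up automatically for this pair of types
UserListDTO dto = modelMapper.map(userList, UserListDTO.class);
List<String> usernames = dto.getUsernames();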

5. Conclusion

In this tutorial, we explained how lists are mapped by manipulating generic types in ModelMapper. We can make use of TypeToken, generic type mapping, and property mapping to convert between list types and build more complex mappings.

The complete source code for this article is available over on GitHub.

Memcached vs Redis

1. Overview

In this article, we'll examine the similarities and differences of two popular in-memory databases, Memcached and Redis.

2. Memcached and Redis

Often, we think about caching to improve performance while processing a large amount of data.

Memcached is a distributed memory caching system designed for ease of use and simplicity and is well-suited as a cache or a session store.

Redis is an in-memory data structure store that offers a rich set of features. It is useful as a cache, database, message broker, and queue.

3. Installation

3.1. Installing Memcached

We can install the latest Memcached server by downloading the package and running make:

$ wget http://memcached.org/latest
$ tar -zxvf memcached-1.6.3.tar.gz
$ cd memcached-1.6.3
$ ./configure && make && make test && sudo make install

3.2. Installing Redis

Similarly, we can install the latest Redis server:

$ wget http://download.redis.io/releases/redis-5.0.8.tar.gz
$ tar xzf redis-5.0.8.tar.gz
$ cd redis-5.0.8
$ make

4. Similarities

4.1. Sub-Millisecond Latency

Both Memcached and Redis offer sub-millisecond response times by keeping data in memory.

4.2. Data Partitioning

Similarly, both in-memory databases allow distributing data across multiple nodes.

4.3. Programming Languages Support

Likewise, both support all major programming languages including Java, Python, JavaScript, C, and Ruby.

Additionally, there are a few Java clients available for both in-memory databases. For instance, Xmemcached and Memcached-java-client are available for Memcached, while Jedis, Lettuce, and Redisson are available for Redis.
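
As a quick illustration, here's a minimal Jedis sketch, assuming a Redis server is listening on localhost:6379:

// store and read back a simple key-value pair
try (Jedis jedis = new Jedis("localhost", 6379)) {
    jedis.set("website", "baeldung");
    String value = jedis.get("website"); // "baeldung"
}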

4.4. Cache Clearing

Memcached allows clearing the cache using the flush_all command. Similarly, Redis allows us to delete everything from a cache by using commands like FLUSHDB and FLUSHALL.

4.5. Scaling

Both caching solutions offer high scalability to handle large data when demand grows exponentially.

5. Differences

5.1. Command-Line

Memcached allows us to run commands by connecting to the server using telnet:

$ telnet 10.2.3.4 5678
Trying 10.2.3.4...
Connected to 10.2.3.4.
$ stats
STAT pid 14868
STAT uptime 175931
STAT time 1220540125
// ...

In contrast to Memcached, Redis comes with a dedicated command-line interface, redis-cli, allowing us to execute commands:

$ redis-cli COMMAND
1) 1) "save"
     2) (integer) 1
     3) 1) "admin"
        2) "noscript"
     // ...
2) 1) "multi"
   2) (integer) 1
   3) 1) "noscript"
      2) "fast"
   // ...
3) 1) "geodist"
   2) (integer) -4
   3) 1) "readonly"
   // ...

// ...

Here, we've executed COMMAND to list all the commands provided by Redis.

5.2. Disk I/O Dumping

Memcached handles disk dumping only with third-party tools like libmemcached-tools or forks like memcached-dd.

However, Redis provides highly configurable default mechanisms like RDB (Redis database file) or AOF (Append-only files) for disk dumping. This can be useful for archival and recovery.

Using redis-cli, we can execute the synchronous SAVE command to take a snapshot of the in-memory data:

$ redis-cli SAVE
OK

Here, the command stores the snapshot in a dump.rdb binary file and returns the status OK when complete.

However, the execution of the asynchronous BGSAVE starts the background process of taking a snapshot:

$ redis-cli BGSAVE
OK

Additionally, we can use the LASTSAVE command to check the Unix time of the last successful DB snapshot.

$ redis-cli LASTSAVE
(integer) 1410853592

5.3. Data Structures

Memcached stores key-value pairs as a String and has a 1MB size limit per value. However, Redis also supports other data structures like list, set, and hash, and can store values of up to 512MB in size.
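
With a Java client such as Jedis, using these richer structures is straightforward; the keys and values below are purely illustrative:

try (Jedis jedis = new Jedis("localhost", 6379)) {
    jedis.lpush("recentArticles", "memcached-vs-redis", "terraform-best-practices"); // list
    jedis.sadd("tags", "java", "spring", "redis");                                   // set
    jedis.hset("article:1", "title", "Memcached vs Redis");                          // hash
}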

5.4. Replication

Memcached supports replication with third-party forks like repcached.

Unlike Memcached, Redis provides built-in functionality to replicate the primary storage to replica nodes for better scalability and high availability.

First, we can use the REPLICAOF command to create a replica of the Redis master server. Next, we execute the PSYNC command on the replica to initiate the replication from the master.

5.5. Transactions

Memcached doesn't support transactions, although its operations are atomic.

Redis provides out-of-the-box support for transactions to execute commands.

We can start a transaction using the MULTI command. Then, we use the EXEC command to execute the queued commands atomically. Finally, Redis provides the WATCH command for conditional execution of the transaction.
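
As a rough Jedis-based illustration of the same flow (the key names are arbitrary):

try (Jedis jedis = new Jedis("localhost", 6379)) {
    Transaction tx = jedis.multi(); // MULTI
    tx.set("balance", "100");
    tx.incrBy("balance", 50);
    tx.exec();                      // EXEC: both commands are applied atomically
}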

5.6. Publish and Subscribe Messaging

Memcached doesn't support publish/subscribe messaging out-of-the-box.

Redis, on the other hand, provides functionality to publish and subscribe to messages using pub/sub message queues.

This can be useful when designing applications that require real-time communication like chat rooms, social media feeds, and server intercommunication.

Redis comes with dedicated commands like PUBLISH, SUBSCRIBE, and UNSUBSCRIBE to publish a message to a channel and to subscribe or unsubscribe the client to and from the specified channels.

5.7. Geospatial Support

Geospatial support is useful for implementing location-based features for our applications. Unlike Memcached, Redis comes with special commands to manage real-time geospatial data.

For instance, the GEODIST command calculates the distance between two geospatial entries. Likewise, the GEORADIUS command returns all the entries within the radius provided.

Additionally, we can use Spring Data Redis to enable Redis geospatial support in a Java application.
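
For a plain Jedis sketch of the same commands (the coordinates are illustrative only):

try (Jedis jedis = new Jedis("localhost", 6379)) {
    jedis.geoadd("cities", 13.361389, 38.115556, "Palermo"); // longitude, latitude, member
    jedis.geoadd("cities", 15.087269, 37.502669, "Catania");
    Double distanceKm = jedis.geodist("cities", "Palermo", "Catania", GeoUnit.KM);
}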

5.8. Architecture

Redis uses a single core and shows better performance than Memcached for storing small datasets when measured per core.

Memcached implements a multi-threaded architecture by utilizing multiple cores. Therefore, for storing larger datasets, Memcached can perform better than Redis.

Another benefit of Memcached's multi-threaded architecture is its high scalability, achieved by utilizing multiple computational resources.

Redis can scale horizontally via clustering, which is comparatively more complex to set up and operate. Also, we can use Jedis or Lettuce to work with a Redis cluster from a Java application.

5.9. Lua Scripting

In contrast to Memcached, we can execute Lua scripts against Redis. It provides commands like EVAL and SCRIPT LOAD, which are useful for executing Lua scripts.

For instance, we can execute the EVAL command to evaluate the script:

$ redis-cli eval "return redis.call('set',KEYS[1],'baeldung')" 1 website
OK

Here, we've set the key website to the value baeldung by evaluating a script.

5.10. Memory Usage

Memcached is more memory-efficient than Redis when storing simple String values.

However, when Redis uses the hash structure, it can be more memory-efficient than Memcached.

6. Conclusion

In this article, we explored Memcached and Redis. First, we looked at the similarities of both in-memory databases. Then, we looked at the differences in the features provided by both caching solutions.

There are many in-memory caching solutions available. Therefore, we should consider the features of a caching engine and match them against our use cases.

We can conclude that Memcached is a good choice for solving simple caching problems. However, Redis outperforms Memcached by offering richer functionality and various features that are promising for complex use-cases.

Best Practices When Using Terraform

1. Overview

Previously, we've covered Terraform's basic concepts and usage. Now, let's dig deeper and cover some of the best practices when using this popular DevOps tool.

2. Resource Files Organization

When we start using Terraform, it's not uncommon to put every resource definition, variable, and output in a single file. This approach, however, quickly leads to code that is hard to maintain and even harder to reuse.

A better approach is to take advantage of the fact that, within a module, Terraform will read any “.tf” file and process its contents. The order in which we declare resources in those files is not relevant (that's Terraform's job, after all), but we should keep them organized so that we can better understand what's going on.

Here, consistency is more important than how we choose to organize resources in our files. A common practice is to use a standard set of files per module:

  • variables.tf: All module's input variables go here, along with their default values when applicable
  • main.tf: This is where we'll put our resource definitions. Assuming that we're using the Single Responsibility principle, its size should stay under control
  • modules: If our module contains any sub-modules, this is where they'll go
  • outputs.tf: Exported data items should go here
  • providers.tf: Used only on the top-level directory, declares which providers we'll use in the project, including their versions

This organization allows any team member that wants to use our module to locate which are the required variables and output data quickly.

Also, as our module evolves, we must keep an eye on the main.tf file's size. A good sign that we should consider refactoring it into sub-modules is when it starts to grow in size. At this point, we should refactor it by moving tightly-coupled resources, such as an EC2 instance and an attached EBS volume, into nested modules. In the end, the chances are that our top-level main.tf file contains only module references stitched together.

3. Modules Usage

Modules are a powerful tool, but, as in any larger software project, it takes some time to get the right level of abstraction so we can maximize reuse across projects. Given that Terraform, like the whole infrastructure-as-code practice, is relatively new, this is an area where we see many different approaches.

That said, we can still reuse some lessons learned from application codebases that can help with proper module organization. Among those lessons, the Single Responsibility Principle from the S.O.L.I.D. set of principles is quite useful.

In our context, this means a module should focus on a single aspect of the infrastructure, such as setting up a VPC or creating a virtual machine – and just that.

Let's take a look at a sample Terraform project directory layout that uses this principle:

$ tree .
├── main.tf
├── modules
│   ├── ingress
│   │   └── www.petshop.com.br
│   │       ├── main.tf
│   │       ├── outputs.tf
│   │       └── variables.tf
... other services omitted
│   └── SvcFeedback
│       ├── main.tf
│       ├── outputs.tf
│       └── variables.tf
├── outputs.tf
├── terraform.tfvars
└── variables.tf

Here, we've used modules for each significant aspect of our infrastructure: database, ingress, messaging, external services, and backend services. In this layout, each folder containing .tf files is a module containing three files:

  • variables.tf – Input variables for the module
  • main.tf – Resource definitions
  • outputs.tf – Output attributes definitions

This convention has the benefit that module consumers can get straight to its “contract” – variables and outputs – if they want to, skipping implementation details.

4. Provider Configuration

Most providers in Terraform require us to provide valid configuration parameters so that they can manipulate resources. For instance, the AWS provider needs an access key/secret and a region so it can access our account and execute tasks.

Since those parameters contain sensitive information and deployment-target-specific information, we should avoid including them as part of our project's code. Instead, we should use variables or provider-specific methods to configure them.

4.1. Using Variables to Configure a Provider

In this approach, we define a project variable for every required provider parameter:

variable "aws_region" {
  type = string
}
variable "aws_access_key" {
  type = string
}
variable "aws_secret_key" {
  type = string
}

Now, we use them in our provider declaration:

provider "aws" {
  region = var.aws_region
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
}

Finally, we provide actual values using a .tfvar file:

aws_access_key="xxxxx"
aws_secret_key="yyyyy"
aws_region="us-east-1"

We can also combine .tfvar files and environment variables when running Terraform commands such as plan or apply:

$ export TF_VAR_aws_region="us-east-1"
$ terraform plan -var="aws_access_key=xxxx" -var-file=./aws.tfvars

Here, we've used a mix of environment variables and command-line arguments to pass variable values. In addition to those sources, Terraform will also look at variables defined in a terraform.tfvars file and any file with the “.auto.tfvars” extension in the project's folder.

4.2. Using Provider-Specific Configuration

In many cases, Terraform providers can pick credentials from the same place used by the native tool. A typical example is the Kubernetes provider. If our environment already has the native utility kubectl configured to point to our target cluster, then we don't need to provide any extra information.

5. State Management

Terraform state files usually contain sensitive information, so we must take proper measures to secure them. Let's take a look at a few of those measures:

  • Always use an exclusion rule for *.tfstate files in our VCS configuration file. For Git, this can go in a global exclusion rule or our project's .gitignore file.
  • Adopt as soon as possible a remote backend instead of the default local backend. Also, double-check access restrictions to the chosen backend.

Moving from the default state backend – local files – to a remote is a simple task. We just have to add a backend definition in one of our project's files:

terraform {
  backend "pg" {}
}

Here, we're informing Terraform that it will use the PostgreSQL backend to store state information. Remote backends usually require additional configuration. Much like with providers, the recommended approach is to pass the needed parameters through environment variables or “.auto.tfvars” files.

The main reason to adopt a remote backend is to enable multiple collaborators/tools to run Terraform on the same target environment. In those scenarios, we should avoid more than one Terraform run on the same target environment — that can cause all sorts of race conditions and conflicts and will likely create havoc.

By adopting a remote backend, we can avoid those issues, as remote backends support the concept of locking. This means that only a single collaborator can run commands such as terraform plan or terraform apply in turn.

Another way to enforce proper management of state files is to use a dedicated server to run Terraform. We can use any CI/CD tools for this, such as Jenkins, GitLab, and others. For small teams/organizations, we can also use the free-forever tier of Terraform's SaaS offering.

6. Workspaces

Workspaces allow us to store multiple state files for a single project. Building on the VCS branch analogy, we should start using them on a project as soon as we must deal with multiple target environments. This way, we can have a single codebase to recreate the same resources no matter where we point Terraform.

Of course, environments can and will vary in some way or another — for example, in machine sizing/count. Even so, we can address those aspects with input variables passed at apply time.

With those points in mind, a common practice is to name workspaces after environment names. For instance, we can use names such as DEV, QA, and PRD, so they match our existing environments.

If we had multiple teams working on the same project, we could also include their names. For instance, we could have a DEV-SQUAD1 workspace for a team working on new features and a DEV-SUPPORT for another team to reproduce and fix production issues.

7. Testing

As we start adopting standard coding practices to deal with our infrastructure, it is natural that we also adopt one of its hallmarks: automated testing. Those tests are particularly useful in the context of modules, as they enhance our confidence that they'll work as expected in different scenarios.

A typical test consists of deploying a test configuration into a temporary environment and running a series of tests against it. What should tests cover? Well, it largely depends on the specifics of what we're creating, but some are quite common:

  • Accessibility: Did we create our resources correctly? Are they reachable?
  • Security: Did we leave any open non-essential network ports? Did we disable default credentials?
  • Correctness: Did our module use its parameters correctly? Did it flag any missing parameters?

As of this writing, Terraform testing is still an evolving topic. We can write our tests using whatever framework we want, but frameworks that focus on integration tests are generally better suited for this task. Some examples include FitNesse, Spock, and Protractor, among others. We can also create our tests using regular shell scripts and add them to our CI/CD pipeline.

8. Conclusion

In this article, we've covered some best practices while using Terraform. Given this is still a relatively new field, we should take those just as a starting point. As more people adopt infrastructure-as-code tools, we're likely to see new practices and tools emerge.

As usual, all code is available over on GitHub.

Disable Security for a Profile in Spring Boot

1. Overview

In this tutorial, we're going to take a look at how we can disable Spring Security for a given profile.

2. Configuration

First of all, let's define a security configuration that simply allows all requests.

We can achieve this by extending WebSecurityConfigurerAdapter in a Spring @Configuration and ignoring requests for all paths.

@Configuration
public class ApplicationSecurity extends WebSecurityConfigurerAdapter {
    @Override
    public void configure(WebSecurity web) throws Exception {
        web.ignoring().antMatchers("/**");
    }
}

Remember that this shuts off not only authentication but also any security protections like XSS.

3. Specify Profile

Now we want to activate this configuration only for a given profile.

Let's assume we have a unit test suite where we don't want security. If this test suite runs with a profile named “test”, we can simply annotate our configuration with @Profile:

@Profile("test")
@Configuration
public class ApplicationSecurity extends WebSecurityConfigurerAdapter {
    @Override
    public void configure(WebSecurity web) throws Exception {
        web.ignoring().antMatchers("/**");
    }
}
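
Our test classes can then activate this profile; here's a minimal sketch, where the class name is a placeholder:

@RunWith(SpringRunner.class)
@SpringBootTest
@ActiveProfiles("test")
public class NoSecurityIntegrationTest {
    // with the "test" profile active, the permissive configuration above applies
}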

Consequently, our test environment will differ from production, which we may not want. Alternatively, we can leave security on and use Spring Security's test support.
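
For instance, with spring-security-test on the classpath, a secured MVC test could look like the following sketch; the /hello endpoint is just a placeholder:

@RunWith(SpringRunner.class)
@SpringBootTest
@AutoConfigureMockMvc
public class SecuredEndpointIntegrationTest {

    @Autowired
    private MockMvc mockMvc;

    @Test
    @WithMockUser(username = "user", roles = "USER")
    public void givenMockedUser_whenGetHello_thenOk() throws Exception {
        mockMvc.perform(MockMvcRequestBuilders.get("/hello"))
          .andExpect(MockMvcResultMatchers.status().isOk());
    }
}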

4. Conclusion

In this tutorial, we illustrated how to disable Spring Security for a specific profile.

As always, the complete source code is available over on GitHub.
