1. Overview
In this tutorial, we’ll show how we can use R2DBC to perform database operations in a reactive way.
In order to explore R2DBC, we’ll create a simple Spring WebFlux REST application that implements CRUD operations for a single entity, using only asynchronous operations to achieve that goal.
2. What is R2DBC?
Reactive development is on the rise, with new frameworks appearing every day and existing ones seeing increasing adoption. However, a major issue with reactive development is that database access in the Java/JVM world remains basically synchronous. This is a direct consequence of the way JDBC was designed, and it has led to some ugly hacks to adapt these two fundamentally different approaches.
To address the need for asynchronous database access in the Java world, two standards have emerged. The first one, ADBA (Asynchronous Database Access API), is backed by Oracle but, as of this writing, seems to be somewhat stalled, with no clear timeline.
The second one, which we’ll cover here, is R2DBC (Reactive Relational Database Connectivity), a community effort led by a team from Pivotal and other companies. This project, which is still in beta, has shown more vitality and already provides drivers for Postgres, H2, and MSSQL databases.
3. Project Setup
Using R2DBC in a project requires that we add dependencies to the core API and a suitable driver. In our example, we’ll be using H2, so this means just two dependencies:
<dependency>
    <groupId>io.r2dbc</groupId>
    <artifactId>r2dbc-spi</artifactId>
    <version>0.8.0.M7</version>
</dependency>
<dependency>
    <groupId>io.r2dbc</groupId>
    <artifactId>r2dbc-h2</artifactId>
    <version>0.8.0.M7</version>
</dependency>
For now, Maven Central has no R2DBC artifacts, so we also need to add a couple of Spring's repositories to our project:
<repositories>
    <repository>
        <id>spring-milestones</id>
        <name>Spring Milestones</name>
        <url>https://repo.spring.io/milestone</url>
        <snapshots>
            <enabled>false</enabled>
        </snapshots>
    </repository>
    <repository>
        <id>spring-snapshots</id>
        <name>Spring Snapshots</name>
        <url>https://repo.spring.io/snapshot</url>
        <snapshots>
            <enabled>true</enabled>
        </snapshots>
    </repository>
</repositories>
4. Connection Factory Setup
The first thing we need to do to access a database using R2DBC is to create a ConnectionFactory object, which plays a similar role to JDBC’s DataSource. The most straightforward way to create a ConnectionFactory is through the ConnectionFactories class.
This class has static methods that take a ConnectionFactoryOptions object and return a ConnectionFactory. Since we’ll only need a single instance of our ConnectionFactory, let’s create a @Bean that we can later use via injection wherever we need:
@Bean
public ConnectionFactory connectionFactory(R2DBCConfigurationProperties properties) {
    ConnectionFactoryOptions baseOptions = ConnectionFactoryOptions.parse(properties.getUrl());
    Builder ob = ConnectionFactoryOptions.builder().from(baseOptions);
    if (!StringUtil.isNullOrEmpty(properties.getUser())) {
        ob = ob.option(USER, properties.getUser());
    }
    if (!StringUtil.isNullOrEmpty(properties.getPassword())) {
        ob = ob.option(PASSWORD, properties.getPassword());
    }
    return ConnectionFactories.get(ob.build());
}
Here, we take the options received from a helper class decorated with the @ConfigurationProperties annotation and use them to populate our ConnectionFactoryOptions instance. For this, R2DBC implements a builder pattern with a single option() method that takes an Option and a value.
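For reference, here's a minimal sketch of what such a properties holder might look like. The getter names match the ones used in the bean above, but the r2dbc prefix and the exact class layout are assumptions, not part of the R2DBC API:

@ConfigurationProperties(prefix = "r2dbc")
public class R2DBCConfigurationProperties {
    private String url;      // e.g. r2dbc:h2:mem://./testdb
    private String user;
    private String password;

    // ... standard getters and setters omitted
}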
R2DBC defines a number of well-known options, such as USER and PASSWORD, which we've used above. Another way to set those options is to pass a connection string to the parse() method of the ConnectionFactoryOptions class.
Here’s an example of a typical R2DBC connection URL:
r2dbc:h2:mem://./testdb
Let’s break this string into its components:
- r2dbc: Fixed scheme identifier for R2DBC URLs — another valid scheme is r2dbcs, used for SSL-secured connections
- h2: Driver identifier used to locate the appropriate connection factory
- mem: Driver-specific protocol — in our case, this corresponds to an in-memory database
- //./testdb: Driver-specific string, usually containing host, database, and any additional options.
Once we have our option set ready, we pass it to the get() static factory method to create our ConnectionFactory bean.
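As a side note, when no programmatic overrides are needed, ConnectionFactories can also resolve a factory straight from the connection URL. A minimal sketch, using the in-memory H2 URL shown above:

// shortcut: let ConnectionFactories parse the URL and locate the H2 driver
ConnectionFactory factory = ConnectionFactories.get("r2dbc:h2:mem://./testdb");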
5. Executing Statements
Similarly to JDBC, using R2DBC is mostly about sending SQL statements to the database and processing result sets. However, since R2DBC is a reactive API, it depends heavily on reactive streams types, such as Publisher and Subscriber.
Using those types directly is a bit cumbersome, so we'll use Project Reactor's types, like Mono and Flux, which help us write cleaner and more concise code.
In the next sections, we’ll see how to implement database-related tasks by creating a reactive DAO class for a simple Account class. This class contains just three properties and has a corresponding table in our database:
public class Account {
    private Long id;
    private String iban;
    private BigDecimal balance;

    // ... getters and setters omitted
}
5.1. Getting a Connection
Before we can send any statements to the database, we need a Connection instance. We've already seen how to create a ConnectionFactory, so it's no surprise that we'll use it to get a Connection. We must keep in mind, though, that instead of a regular Connection, what we now get is a Publisher of a single Connection.
Our ReactiveAccountDao, which is a regular Spring @Component, gets its ConnectionFactory via constructor injection, so it’s readily available in handler methods.
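The skeleton of this DAO might look something like the following sketch; apart from the constructor injection described above, the layout is an assumption:

@Component
public class ReactiveAccountDao {

    private final ConnectionFactory connectionFactory;

    public ReactiveAccountDao(ConnectionFactory connectionFactory) {
        this.connectionFactory = connectionFactory;
    }

    // ... findById(), createAccount(), and other methods discussed below
}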
Let’s take a look at the first couple of lines of the findById() method to see how to retrieve and start using a Connection:
public Mono<Account> findById(Long id) {
    return Mono.from(connectionFactory.create())
      .flatMap(c ->
          // ... use the connection
      )
      // ... downstream processing omitted
}
Here, we’re adapting the Publisher returned from our ConnectionFactory into a Mono that is the initial source for our event stream.
5.2. Preparing and Submitting Statements
Now that we have a Connection, let’s use it to create a Statement and bind a parameter to it:
.flatMap(c ->
    Mono.from(c.createStatement("select id,iban,balance from Account where id = $1")
        .bind("$1", id)
        .execute())
      .doFinally((st) -> close(c))
)
The Connection's createStatement method takes a SQL query string, which can optionally have bind placeholders — referred to as "markers" in the spec.
A couple of noteworthy points here: first, createStatement is a synchronous operation, which allows us to use a fluent style to bind values to the returned Statement; second, and very important, placeholder/marker syntax is vendor-specific!
In this example, we’re using H2’s specific syntax, which uses $n to mark parameters. Other vendors may use different syntax, such as :param, @Pn, or some other convention. This is an important aspect that we must pay attention to when migrating legacy code to this new API.
The binding process itself is quite straightforward, due to the fluent API pattern and simplified typing: there’s just a single overloaded bind() method that takes care of all typing conversions — subject to database rules, of course.
The first parameter passed to bind() can be a zero-based ordinal that corresponds to the marker’s placement in the statement, or it can be a string with the actual marker.
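For instance, assuming a Statement created from the query above, both of the following calls bind the same value, keeping in mind that the $1 marker itself is H2-specific:

// bind by zero-based ordinal...
statement.bind(0, id);

// ...or by the marker itself
statement.bind("$1", id);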
Once we’ve set values to all parameters, we call execute(), which returns a Publisher of Result objects, which we again wrap into a Mono for further processing. We attach a doFinally() handler to this Mono so that we make sure that we’ll close our connection whether the stream processing completes normally or not.
5.3. Processing Results
The next step in our pipeline is responsible for processing Result objects and generating a stream of ResponseEntity<Account> instances.
Since we know that there can be only one instance with the given id, we’ll actually return a Mono stream. The actual conversion happens inside the function passed to the map() method of the received Result:
.map(result -> result.map((row, meta) ->
    new Account(row.get("id", Long.class),
      row.get("iban", String.class),
      row.get("balance", BigDecimal.class))))
.flatMap(p -> Mono.from(p));
The result’s map() method expects a function that takes two parameters. The first one is a Row object that we use to gather values for each column and populate an Account instance. The second, meta, is a RowMetadata object that contains information about the current row, such as column names and types.
The previous map() call in our pipeline resolves to a Mono<Publisher<Account>>, but we need to return a Mono<Account> from this method. To fix this, we add a final flatMap() step, which adapts the Publisher into a Mono.
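Although we don't need it here, the meta parameter can come in handy for more dynamic mappings. As a small sketch, assuming the 0.8 SPI and an available SLF4J log instance, we could list the columns reported by the driver for each row:

result.map((row, meta) -> {
    // log the column names reported by the driver for this result
    meta.getColumnMetadatas()
      .forEach(column -> log.info("column: {}", column.getName()));
    return row.get("id", Long.class);
});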
5.4. Batch Statements
R2DBC also supports the creation and execution of statement batches, which allow for the execution of multiple SQL statements in a single execute() call. In contrast with regular statements, batch statements do not support binding and are mainly used for performance reasons in scenarios such as ETL jobs.
Our sample project uses a batch of statements to create the Account table and insert some test data into it:
@Bean
public CommandLineRunner initDatabase(ConnectionFactory cf) {
    return (args) ->
        Flux.from(cf.create())
          .flatMap(c ->
              Flux.from(c.createBatch()
                  .add("drop table if exists Account")
                  .add("create table Account(" +
                      "id IDENTITY(1,1)," +
                      "iban varchar(80) not null," +
                      "balance DECIMAL(18,2) not null)")
                  .add("insert into Account(iban,balance)" +
                      "values('BR430120980198201982',100.00)")
                  .add("insert into Account(iban,balance)" +
                      "values('BR430120998729871000',250.00)")
                  .execute())
                .doFinally((st) -> c.close())
          )
          .log()
          .blockLast();
}
Here, we use the Batch returned from createBatch() and add a few SQL statements. We then send those statements for execution using the same execute() method available in the Statement interface.
In this particular case, we are not interested in any results — just that the statements all execute fine. Had we needed any produced results, all we'd have to do is add a downstream step to this stream to process the emitted Result objects.
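For example, a downstream step that just counts the rows affected by each statement might look like this sketch, where log is an assumed SLF4J logger:

.flatMap(result -> Flux.from(result.getRowsUpdated()))
.doOnNext(count -> log.info("statement affected {} row(s)", count))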
6. Transactions
The last topic we’ll cover in this tutorial is transactions. As we should expect by now, we manage transactions as in JDBC, that is, by using methods available in the Connection object.
As before, the main difference is that now all transaction-related methods are asynchronous, returning a Publisher that we must add to our stream at appropriate points.
Our sample project uses a transaction in its implementation of the createAccount() method:
public Mono<Account> createAccount(Account account) {
    return Mono.from(connectionFactory.create())
      .flatMap(c -> Mono.from(c.beginTransaction())
          .then(Mono.from(c.createStatement("insert into Account(iban,balance) values($1,$2)")
              .bind("$1", account.getIban())
              .bind("$2", account.getBalance())
              .returnGeneratedValues("id")
              .execute()))
          .map(result -> result.map((row, meta) ->
              new Account(row.get("id", Long.class),
                account.getIban(),
                account.getBalance())))
          .flatMap(pub -> Mono.from(pub))
          .delayUntil(r -> c.commitTransaction())
          .doFinally((st) -> c.close()));
}
Here, we've added transaction-related calls at two points. First, right after getting a new connection from the database, we call the beginTransaction() method. Once we know that the transaction was successfully started, we prepare and execute the insert statement.
This time we’ve also used the returnGeneratedValues() method to instruct the database to return the identity value generated for this new Account. R2DBC returns those values in a Result containing a single row with all generated values, which we use to create the Account instance.
Once again, we need to adapt the incoming Mono<Publisher<Account>> into a Mono<Account>, so we add a flatMap() to solve this. Next, we commit the transaction in a delayUntil() step. We need this because we want to make sure the returned Account has already been committed to the database.
Finally, we attach a doFinally() step to this pipeline that closes the Connection once all events from the returned Mono have been consumed.
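Our sample keeps the happy path simple and doesn't roll back on failure. If we wanted that behavior, one option, sketched here under the assumption that we add it before closing the connection, would be an onErrorResume() step that calls rollbackTransaction():

// roll back the transaction and re-emit the original error
.onErrorResume(ex -> Mono.from(c.rollbackTransaction())
  .then(Mono.error(ex)))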
7. Sample DAO Usage
Now that we have a reactive DAO, let's put it to work in a simple Spring WebFlux application, showcasing how to use it in a typical scenario. Since this framework already supports reactive constructs, this becomes a trivial task. For instance, let's take a look at the implementation of the GET method:
@RestController
public class AccountResource {

    private final ReactiveAccountDao accountDao;

    public AccountResource(ReactiveAccountDao accountDao) {
        this.accountDao = accountDao;
    }

    @GetMapping("/accounts/{id}")
    public Mono<ResponseEntity<Account>> getAccount(@PathVariable("id") Long id) {
        return accountDao.findById(id)
          .map(acc -> new ResponseEntity<>(acc, HttpStatus.OK))
          .switchIfEmpty(Mono.just(new ResponseEntity<>(null, HttpStatus.NOT_FOUND)));
    }

    // ... other methods omitted
}
Here, we’re using our DAO’s returned Mono to construct a ResponseEntity with the appropriate status code. We’re doing this just because we want a NOT_FOUND (404) status code when there is no Account with the given id.
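Wiring the remaining DAO methods follows the same pattern. As an illustration, here's a minimal sketch of a POST handler on top of our createAccount() method, assuming we simply want a 201 (CREATED) status:

@PostMapping("/accounts")
public Mono<ResponseEntity<Account>> createAccount(@RequestBody Account account) {
    return accountDao.createAccount(account)
      .map(acc -> new ResponseEntity<>(acc, HttpStatus.CREATED));
}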
8. Conclusion
In this article, we've covered the basics of reactive database access using R2DBC. Although still in its infancy, this project is quickly evolving, targeting a release date sometime in early 2020.
Compared to ADBA, which will definitely not be part of Java 12, R2DBC seems to be more promising and already provides drivers for a few popular databases — Oracle being a notable absence here.
As usual, the complete source code used in this tutorial is available over on GitHub.