
Building a Data Pipeline with Kafka, Spark Streaming and Cassandra


1. Overview

Apache Kafka is a scalable, high-performance, low-latency platform that allows reading and writing streams of data like a messaging system. We can start with Kafka in Java fairly easily.

Spark Streaming is part of the Apache Spark platform that enables scalable, high throughput, fault tolerant processing of data streams. Although written in Scala, Spark offers Java APIs to work with.

Apache Cassandra is a distributed, wide-column NoSQL data store. More details on Cassandra are available in our previous article.

In this tutorial, we’ll combine these to create a highly scalable and fault tolerant data pipeline for a real-time data stream.

2. Installations

To start, we’ll need Kafka, Spark and Cassandra installed locally on our machine to run the application. We’ll see how to develop a data pipeline using these platforms as we go along.

However, we’ll keep all the default configurations, including ports, for all installations, which will help the tutorial run smoothly.

2.1. Kafka

Installing Kafka on our local machine is fairly straightforward, and the instructions are part of the official documentation. We’ll be using the 2.1.0 release of Kafka.

In addition, Kafka requires Apache Zookeeper to run, but for the purpose of this tutorial, we’ll leverage the single-node Zookeeper instance packaged with Kafka.

Once we’ve managed to start Zookeeper and Kafka locally following the official guide, we can proceed to create our topic, named “messages”:

 $KAFKA_HOME$\bin\windows\kafka-topics.bat --create \
  --zookeeper localhost:2181 \
  --replication-factor 1 --partitions 1 \
  --topic messages

Note that the above script is for the Windows platform, but similar scripts are available for Unix-like platforms as well.
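For instance, a topic with the same settings can be created on a Unix-like system with the kafka-topics.sh script that ships in the same release:

$KAFKA_HOME/bin/kafka-topics.sh --create \
  --zookeeper localhost:2181 \
  --replication-factor 1 --partitions 1 \
  --topic messages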

2.2. Spark

Spark uses Hadoop’s client libraries for HDFS and YARN. Consequently, it can be very tricky to assemble the compatible versions of all of these. However, the official download of Spark comes pre-packaged with popular versions of Hadoop. For this tutorial, we’ll be using version 2.3.0 package “pre-built for Apache Hadoop 2.7 and later”.

Once the right package of Spark is unpacked, the available scripts can be used to submit applications. We’ll see this later when we develop our application in Spring Boot.

2.3. Cassandra

DataStax makes available a community edition of Cassandra for different platforms including Windows. We can download and install this on our local machine very easily following the official documentation. We’ll be using version 3.9.0.

Once we’ve managed to install and start Cassandra on our local machine, we can proceed to create our keyspace and table. This can be done using the CQL Shell which ships with our installation:

CREATE KEYSPACE vocabulary
    WITH REPLICATION = {
        'class' : 'SimpleStrategy',
        'replication_factor' : 1
    };
USE vocabulary;
CREATE TABLE words (word text PRIMARY KEY, count int);

Note that we’ve created a keyspace called vocabulary and a table therein called words with two columns, word and count.

3. Dependencies

We can integrate the Kafka and Spark dependencies into our application through Maven, pulling them from Maven Central and adding them to our pom accordingly:

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.11</artifactId>
    <version>2.3.0</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.11</artifactId>
    <version>2.3.0</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming_2.11</artifactId>
    <version>2.3.0</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
    <version>2.3.0</version>
</dependency>
<dependency>
    <groupId>com.datastax.spark</groupId>
    <artifactId>spark-cassandra-connector_2.11</artifactId>
    <version>2.3.0</version>
</dependency>
<dependency>
    <groupId>com.datastax.spark</groupId>
    <artifactId>spark-cassandra-connector-java_2.11</artifactId>
    <version>1.5.2</version>
</dependency>

Note that some of these dependencies are marked as provided in scope. This is because they’ll be made available by the Spark installation where we’ll submit the application for execution using spark-submit.

4. Spark Streaming – Kafka Integration Strategies

At this point, it is worthwhile to talk briefly about the integration strategies for Spark and Kafka.

Kafka introduced a new consumer API between versions 0.8 and 0.10. Hence, corresponding Spark Streaming packages are available for both broker versions. It’s important to choose the right package depending on the broker available and the features desired.

4.1. Spark Streaming Kafka 0.8

The 0.8 version is the stable integration API, with the option of using either the Receiver-based or the Direct Approach. We won’t go into the details of these approaches, which we can find in the official documentation. An important point to note here is that this package is compatible with Kafka Broker versions 0.8.2.1 or higher.

4.2. Spark Streaming Kafka 0.10

This integration is currently in an experimental state and is compatible with Kafka Broker versions 0.10.0 or higher only. The package offers the Direct Approach only, now making use of the new Kafka consumer API. We can find more details about this in the official documentation. Importantly, it is not backward compatible with older Kafka Broker versions.

Please note that for this tutorial, we’ll make use of the 0.10 package; the dependency mentioned in the previous section refers to it.

5. Developing a Data Pipeline

We’ll create a simple application in Java using Spark which will integrate with the Kafka topic we created earlier. The application will read the messages as posted and count the frequency of words in every message. This will then be updated in the Cassandra table we created earlier.

Let’s quickly visualize how the data will flow: messages posted to the Kafka topic will be consumed by Spark Streaming, processed into word counts, and finally persisted into Cassandra.

5.1. Getting JavaStreamingContext

First, we’ll initialize the JavaStreamingContext, which is the entry point for all Spark Streaming applications:

SparkConf sparkConf = new SparkConf();
sparkConf.setAppName("WordCountingApp");
sparkConf.set("spark.cassandra.connection.host", "127.0.0.1");

JavaStreamingContext streamingContext = new JavaStreamingContext(
  sparkConf, Durations.seconds(1));

5.2. Getting DStream from Kafka

Now, we can connect to the Kafka topic from the JavaStreamingContext:

Map<String, Object> kafkaParams = new HashMap<>();
kafkaParams.put("bootstrap.servers", "localhost:9092");
kafkaParams.put("key.deserializer", StringDeserializer.class);
kafkaParams.put("value.deserializer", StringDeserializer.class);
kafkaParams.put("group.id", "use_a_separate_group_id_for_each_stream");
kafkaParams.put("auto.offset.reset", "latest");
kafkaParams.put("enable.auto.commit", false);
Collection<String> topics = Arrays.asList("messages");

JavaInputDStream<ConsumerRecord<String, String>> messages = 
  KafkaUtils.createDirectStream(
    streamingContext, 
    LocationStrategies.PreferConsistent(), 
    ConsumerStrategies.<String, String> Subscribe(topics, kafkaParams));

Please note that we have to provide deserializers for the key and value here. For common data types like String, the deserializer is available by default. However, if we wish to retrieve custom data types, we’ll have to provide custom deserializers.

Here, we’ve obtained a JavaInputDStream, which is an implementation of Discretized Streams or DStreams, the basic abstraction provided by Spark Streaming. Internally, a DStream is nothing but a continuous series of RDDs.

5.3. Processing Obtained DStream

We’ll now perform a series of operations on the JavaInputDStream to obtain word frequencies in the messages:

JavaPairDStream<String, String> results = messages
  .mapToPair( 
      record -> new Tuple2<>(record.key(), record.value())
  );
JavaDStream<String> lines = results
  .map(
      tuple2 -> tuple2._2()
  );
JavaDStream<String> words = lines
  .flatMap(
      x -> Arrays.asList(x.split("\\s+")).iterator()
  );
JavaPairDStream<String, Integer> wordCounts = words
  .mapToPair(
      s -> new Tuple2<>(s, 1)
  ).reduceByKey(
      (i1, i2) -> i1 + i2
    );

5.4. Persisting Processed DStream into Cassandra

Finally, we can iterate over the processed JavaPairDStream and insert the word counts into our Cassandra table:

wordCounts.foreachRDD(
    javaRdd -> {
      // collect this micro-batch's word counts to the driver
      Map<String, Integer> wordCountMap = javaRdd.collectAsMap();
      for (String key : wordCountMap.keySet()) {
        // wrap each entry in a Word bean and write it to the vocabulary.words table
        List<Word> wordList = Arrays.asList(new Word(key, wordCountMap.get(key)));
        JavaRDD<Word> rdd = streamingContext.sparkContext().parallelize(wordList);
        javaFunctions(rdd).writerBuilder(
          "vocabulary", "words", mapToRow(Word.class)).saveToCassandra();
      }
    }
  );
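Here, javaFunctions() and mapToRow() are statically imported from the connector’s com.datastax.spark.connector.japi.CassandraJavaUtil class, and Word is a simple serializable bean mapped to our words table. As a minimal sketch, assuming field names that match the column names:

public class Word implements Serializable {

    private String word;
    private Integer count;

    public Word() {
    }

    public Word(String word, Integer count) {
        this.word = word;
        this.count = count;
    }

    // standard getters and setters
}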

5.5. Running the Application

As this is a stream processing application, we would want to keep this running:

streamingContext.start();
streamingContext.awaitTermination();

6. Leveraging Checkpoints

In a stream processing application, it’s often useful to retain state between batches of data being processed.

For example, in our previous attempt, we are only able to store the current frequency of the words. What if we want to store the cumulative frequency instead? Spark Streaming makes it possible through a concept called checkpoints.

We’ll now modify the pipeline we created earlier to leverage checkpoints.

Please note that we’ll be using checkpoints only to retain state across batches within a session of data processing, which by itself does not provide fault tolerance. However, checkpointing can be used for fault tolerance as well.

There are a few changes we’ll have to make in our application to leverage checkpoints. This includes providing the JavaStreamingContext with a checkpoint location:

streamingContext.checkpoint("./.checkpoint");

Here, we are using the local filesystem to store checkpoints. However, for robustness, this should be stored in a location like HDFS, S3 or Kafka. More on this is available in the official documentation.

Next, we’ll have to fetch the checkpoint and create a cumulative count of words while processing every partition using a mapping function:

JavaMapWithStateDStream<String, Integer, Integer, Tuple2<String, Integer>> cumulativeWordCounts = wordCounts
  .mapWithState(
    StateSpec.function( 
        (word, one, state) -> {
          int sum = one.orElse(0) + (state.exists() ? state.get() : 0);
          Tuple2<String, Integer> output = new Tuple2<>(word, sum);
          state.update(sum);
          return output;
        }
      )
    );

Once we get the cumulative word counts, we can proceed to iterate and save them in Cassandra as before.

Please note that while data checkpointing is useful for stateful processing, it comes with a latency cost. Hence, it’s necessary to use this wisely along with an optimal checkpointing interval.

7. Understanding Offsets

If we recall some of the Kafka parameters we set earlier:

kafkaParams.put("auto.offset.reset", "latest");
kafkaParams.put("enable.auto.commit", false);

These basically mean that we don’t want to auto-commit offsets and that we’d like to pick the latest offset every time a consumer group is initialized. Consequently, our application will only be able to consume messages posted during the period it is running.

If we want to consume all posted messages, irrespective of whether the application was running, and also keep track of the messages already processed, we’ll have to configure the offsets appropriately and save the offset state, though this is a bit out of scope for this tutorial.
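As a rough sketch of the idea, the 0.10 integration exposes the offset ranges of every batch through the HasOffsetRanges and CanCommitOffsets interfaces of the org.apache.spark.streaming.kafka010 package, which lets us commit offsets back to Kafka ourselves once a batch has been processed:

messages.foreachRDD(rdd -> {
    // the Kafka offsets consumed in this batch
    OffsetRange[] offsetRanges = ((HasOffsetRanges) rdd.rdd()).offsetRanges();

    // ... process the batch ...

    // commit the offsets to Kafka only after successful processing
    ((CanCommitOffsets) messages.inputDStream()).commitAsync(offsetRanges);
});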

Managing offsets manually is also how Spark Streaming can offer a delivery guarantee like “exactly once”, which basically means that each message posted on the Kafka topic is processed by Spark Streaming exactly once.

8. Deploying Application

We can deploy our application using the spark-submit script, which comes pre-packaged with the Spark installation:

$SPARK_HOME$\bin\spark-submit \
  --class com.baeldung.data.pipeline.WordCountingAppWithCheckpoint \
  --master local[2] \
  target\spark-streaming-app-0.0.1-SNAPSHOT-jar-with-dependencies.jar

Please note that the jar we create using Maven should contain the dependencies that are not marked as provided in scope.

Once we submit this application and post some messages in the Kafka topic we created earlier, we should see the cumulative word counts being posted in the Cassandra table we created earlier.

9. Conclusion

To sum up, in this tutorial, we learned how to create a simple data pipeline using Kafka, Spark Streaming and Cassandra. We also learned how to leverage checkpoints in Spark Streaming to maintain state between batches.

As always, the code for the examples is available over on GitHub.


Spring Boot Ehcache Example


1. Overview

Let’s look at an example of using Ehcache with Spring Boot. We’ll use Ehcache version 3 as this provides an implementation of a JSR-107 cache manager.

The example is a simple REST service that produces the square of a number.

2. Dependencies

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <version>2.1.1.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-cache</artifactId>
    <version>2.1.1.RELEASE</version>
</dependency>
<dependency>
    <groupId>javax.cache</groupId>
    <artifactId>cache-api</artifactId>
    <version>1.1.0</version>
</dependency>
<dependency>
    <groupId>org.ehcache</groupId>
    <artifactId>ehcache</artifactId>
    <version>3.6.2</version>
</dependency>     

3. Example

Let’s create a simple REST controller which calls a service to square a number and returns the result as a JSON string:

@RestController
@RequestMapping("/number", MediaType.APPLICATION_JSON_UTF8_VALUE)
public class NumberController {

    // ...

    @Autowired
    private NumberService numberService;

    @GetMapping(path = "/square/{number}")
    public String getSquare(@PathVariable Long number) {
        log.info("call numberService to square {}", number);
        return String.format("{\"square\": %s}", numberService.square(number));
    }
}

Now let’s create the service.

We annotate the method with @Cacheable so that Spring will handle the caching. As a result of this annotation, Spring will create a proxy of the NumberService to intercept calls to the square method and call Ehcache.

We need to provide the name of the cache to use and optionally the key. We can also add a condition to restrict what is cached:

@Service
public class NumberService {

    // ...
    @Cacheable(
      value = "squareCache", 
      key = "#number", 
      condition = "#number>10")
    public BigDecimal square(Long number) {
        BigDecimal square = BigDecimal.valueOf(number)
          .multiply(BigDecimal.valueOf(number));
        log.info("square of {} is {}", number, square);
        return square;
    }
}

Finally, let’s create our main Spring Boot application:

@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

4. Cache Configuration

We need to add Spring’s @EnableCaching annotation to a Spring bean so that Spring’s annotation-driven cache management is enabled.

Let’s create a CacheConfig class:

@Configuration
@EnableCaching
public class CacheConfig {
}

Spring’s auto-configuration finds Ehcache’s implementation of JSR-107. However, no caches are created by default, because neither Spring nor Ehcache looks for a default ehcache.xml file. So we add the following property to tell Spring where to find it:

spring.cache.jcache.config=classpath:ehcache.xml

Let’s create an ehcache.xml file with a cache called squareCache:

<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns="http://www.ehcache.org/v3"
    xmlns:jsr107="http://www.ehcache.org/v3/jsr107"
    xsi:schemaLocation="
            http://www.ehcache.org/v3 http://www.ehcache.org/schema/ehcache-core-3.0.xsd
            http://www.ehcache.org/v3/jsr107 http://www.ehcache.org/schema/ehcache-107-ext-3.0.xsd">

    <cache alias="squareCache">
        <key-type>java.lang.Long</key-type>
        <value-type>java.math.BigDecimal</value-type>
        <expiry>
            <ttl unit="seconds">30</ttl>
        </expiry>

        <listeners>
            <listener>
                <class>com.baeldung.cachetest.config.CacheEventLogger</class>
                <event-firing-mode>ASYNCHRONOUS</event-firing-mode>
                <event-ordering-mode>UNORDERED</event-ordering-mode>
                <events-to-fire-on>CREATED</events-to-fire-on>
                <events-to-fire-on>EXPIRED</events-to-fire-on>
            </listener>
        </listeners>

        <resources>
            <heap unit="entries">2</heap>
            <offheap unit="MB">10</offheap>
        </resources>
    </cache>

</config>

And let’s also add the cache event listener which logs both CREATED and EXPIRED cache events:

public class CacheEventLogger 
  implements CacheEventListener<Object, Object> {

    // ...

    @Override
    public void onEvent(
      CacheEvent<? extends Object, ? extends Object> cacheEvent) {
        log.info(/* message */,
          cacheEvent.getKey(), cacheEvent.getOldValue(), cacheEvent.getNewValue());
    }
}

5. In Action

We can use Maven to start this app by running mvn spring-boot:run.

Then open up a browser and access the REST service on port 8080.

If we go to http://localhost:8080/number/square/12, then we’ll get back {"square":144}, and in the log we’ll see:

INFO [nio-8080-exec-1] c.b.cachetest.rest.NumberController : call numberService to square 12
INFO [nio-8080-exec-1] c.b.cachetest.service.NumberService : square of 12 is 144
INFO [e [_default_]-0] c.b.cachetest.config.CacheEventLogger : Cache event CREATED for item with key 12. Old value = null, New value = 144

We can see the log message from the square method of NumberService, and the CREATED event from the EventLogger. If we then refresh the browser we will only see the following added to the log:

INFO [nio-8080-exec-2] c.b.cachetest.rest.NumberController : call numberService to square 12

The log message from the square method of NumberService doesn’t appear this time. This shows us that the cached value is being used.

If we wait 30 seconds for the cached item to expire and refresh the browser we’ll see an EXPIRED event, and the value added back into the cache:

INFO [nio-8080-exec-1] (...) NumberController : call numberService to square 12
INFO [e [_default_]-1] (...) CacheEventLogger : Cache event EXPIRED for item with key 12. Old value = 144, New value = null
INFO [nio-8080-exec-1] (...) NumberService : square of 12 is 144
INFO [e [_default_]-1] (...) CacheEventLogger : Cache event CREATED for item with key 12. Old value = null, New value = 144

If we enter http://localhost:8080/number/square/3 into the browser, we get the correct answer of 9, but the value isn’t cached.

This is because of the condition we used on the @Cacheable annotation to only cache values for numbers higher than 10.

6. Conclusion

In this quick tutorial, we showed how to set up Ehcache with Spring Boot.

As always, the code can be found on GitHub.

Summing Numbers with Java Streams


1. Introduction

In this quick tutorial, we’ll show various ways of calculating the sum of integers, using the Stream API.

For the sake of simplicity, we’ll use integers in our examples. However, we can apply the same methods to longs and doubles as well.

2. Using Stream.reduce()

Stream.reduce() is a terminal operation that performs a reduction on the elements of the stream.

It applies a binary operator (accumulator) to each element in the stream, where the first operand is the return value of the previous application, and the second one is the current stream element.

In the first method of using the reduce() method, the accumulator function is a lambda expression that adds two Integer values and returns an Integer value:

List<Integer> integers = Arrays.asList(1, 2, 3, 4, 5);
Integer sum = integers.stream()
  .reduce(0, (a, b) -> a + b);

In the same way, we can use an already existing Java method:

List<Integer> integers = Arrays.asList(1, 2, 3, 4, 5);
Integer sum = integers.stream()
  .reduce(0, Integer::sum);

Or we can define and use our custom method:

public class ArithmeticUtils {

    public static int add(int a, int b) {
        return a + b;
    }
}

Then, we can pass this function as a parameter to the reduce() method:

List<Integer> integers = Arrays.asList(1, 2, 3, 4, 5);
Integer sum = integers.stream()
  .reduce(0, ArithmeticUtils::add);

3. Using Stream.collect()

The second method for calculating the sum of a list of integers is by using the collect() terminal operation:

List<Integer> integers = Arrays.asList(1, 2, 3, 4, 5);
Integer sum = integers.stream()
  .collect(Collectors.summingInt(Integer::intValue));

Similarly, the Collectors class provides summingLong() and summingDouble() methods to calculate the sums of longs and doubles respectively.
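For instance, here’s a quick example of summingDouble() applied to a list of doubles:

List<Double> doubles = Arrays.asList(1.5, 2.5, 3.0);
Double sum = doubles.stream()
  .collect(Collectors.summingDouble(Double::doubleValue));
// sum is 7.0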

4. Using IntStream.sum()

The Stream API provides us with the mapToInt() intermediate operation, which converts our stream to an IntStream object.

This method takes a mapper as a parameter, which it uses to do the conversion, then, we can call the sum() method to calculate the sum of the stream’s elements.

Let’s see a quick example of how we can use it:

List<Integer> integers = Arrays.asList(1, 2, 3, 4, 5);
Integer sum = integers.stream()
  .mapToInt(Integer::intValue)
  .sum();

In the same fashion, we can use the mapToLong() and mapToDouble() methods to calculate the sums of longs and doubles respectively.

5. Using Stream#sum with Map

To calculate the sum of values of a Map<Object, Integer> data structure, firstly we create a stream from the values of that Map, then we apply one of the methods we used previously.

For instance, by using IntStream.sum():

Integer sum = map.values()
  .stream()
  .mapToInt(Integer::valueOf)
  .sum();

6. Using Stream#sum with Objects

Let’s imagine that we have a list of objects and that we want to calculate the sum of all the values of a given field of these objects.

For example:

public class Item {

    private int id;
    private Integer price;

    public Item(int id, Integer price) {
        super();
        this.id = id;
        this.price = price;
    }

    // Standard getters and setters
}

Next, let’s imagine that we want to calculate the total price of all the items of the following list:

Item item1 = new Item(1, 10);
Item item2 = new Item(2, 15);
Item item3 = new Item(3, 25);
Item item4 = new Item(4, 40);
        
List<Item> items = Arrays.asList(item1, item2, item3, item4);

In this case, in order to calculate the sum using the methods shown in previous examples, we need to call the map() method to convert our stream into a stream of integers.

As a result, we can use Stream.reduce(), Stream.collect(), and IntStream.sum() to calculate the sum:

Integer sum = items.stream()
  .map(x -> x.getPrice())
  .reduce(0, ArithmeticUtils::add);
Integer sum = items.stream()
  .map(x -> x.getPrice())
  .reduce(0, Integer::sum);
Integer sum = items.stream()
  .map(item -> item.getPrice())
  .reduce(0, (a, b) -> a + b);
Integer sum = items.stream()
  .map(x -> x.getPrice())
  .collect(Collectors.summingInt(Integer::intValue));
int sum = items.stream()
  .mapToInt(x -> x.getPrice())
  .sum();

7. Using Stream#sum with String

Let’s suppose that we have a String object containing some integers.

To calculate the sum of these integers, first of all, we need to split that String into an array, then we need to filter out the non-integer elements, and finally, convert the remaining elements of that array into numbers.

Let’s see all these steps in action:

String string = "Item1 10 Item2 25 Item3 30 Item4 45";

Integer sum = Arrays.stream(string.split(" "))
    .filter((s) -> s.matches("\\d+"))
    .mapToInt(Integer::valueOf)
    .sum();

8. Conclusion

In this tutorial, we saw several methods of how to calculate the sum of a list of integers by using the Stream API. Also, we used these methods to calculate the sum of values of a given field of a list of objects, the sum of the values of a map, as well as the numbers within a given String object.

As always, the complete code is available over on GitHub.

Retrieve Fields from a Java Class Using Reflection


1. Overview

Reflection is the ability for computer software to inspect its structure at runtime. In Java, we achieve this by using the Java Reflection API. It allows us to inspect the elements of a class such as fields, methods or even inner classes, all at runtime.

This tutorial will focus on how to retrieve the fields of a Java class, including private and inherited fields.

2. Retrieving Fields from a Class

Let’s first have a look at how to retrieve the fields of a class, regardless of their visibility. Later on, we’ll see how to get inherited fields as well.

Let’s start with an example of a Person class with two String fields: lastName and firstName. The former is protected (that’ll be useful later) while the latter is private:

public class Person {
    protected String lastName;
    private String firstName;
}

We want to get both lastName and firstName fields using reflection. We’ll achieve this by using the Class::getDeclaredFields method. As its name suggests, this returns all the declared fields of a class, in the form of a Field array:

public class PersonAndEmployeeReflectionUnitTest {

    /* ... constants ... */

    @Test
    public void givenPersonClass_whenGetDeclaredFields_thenTwoFields() {
        Field[] allFields = Person.class.getDeclaredFields();

        assertEquals(2, allFields.length);

        assertTrue(Arrays.stream(allFields).anyMatch(field ->
          field.getName().equals(LAST_NAME_FIELD)
            && field.getType().equals(String.class))
        );
        assertTrue(Arrays.stream(allFields).anyMatch(field ->
          field.getName().equals(FIRST_NAME_FIELD)
            && field.getType().equals(String.class))
        );
    }

}

As we can see, we get the two fields of the Person class. We check their names and types, which match the field definitions in the Person class.

3. Retrieving Inherited Fields

Let’s now see how to get the inherited fields of a Java class.

To illustrate this, let’s create a second class named Employee extending Person, with a field of its own:

public class Employee extends Person {
    public int employeeId;
}

3.1. Retrieving Inherited Fields on a Simple Class Hierarchy

Using Employee.class.getDeclaredFields() would only return the employeeId field, as this method doesn’t return the fields declared in superclasses. To get the inherited fields, we must also get the fields of the Person superclass.

Of course, we could use the getDeclaredFields() method on both Person and Employee classes and merge their results into a single array. But what if we don’t want to explicitly specify the superclass?

In this case, we can make use of another method of the Java Reflection API – Class::getSuperclass. This gives us the superclass of another class, without us needing to know what that superclass is.

Let’s gather the results of getDeclaredFields() on Employee.class and Employee.class.getSuperclass() and merge them into a single array:

@Test
public void givenEmployeeClass_whenGetDeclaredFieldsOnBothClasses_thenThreeFields() {
    Field[] personFields = Employee.class.getSuperclass().getDeclaredFields();
    Field[] employeeFields = Employee.class.getDeclaredFields();
    Field[] allFields = new Field[employeeFields.length + personFields.length];
    Arrays.setAll(allFields, i -> 
      (i < personFields.length ? personFields[i] : employeeFields[i - personFields.length]));

    assertEquals(3, allFields.length);

    Field lastNameField = allFields[0];
    assertEquals(LAST_NAME_FIELD, lastNameField.getName());
    assertEquals(String.class, lastNameField.getType());

    Field firstNameField = allFields[1];
    assertEquals(FIRST_NAME_FIELD, firstNameField.getName());
    assertEquals(String.class, firstNameField.getType());

    Field employeeIdField = allFields[2];
    assertEquals(EMPLOYEE_ID_FIELD, employeeIdField.getName());
    assertEquals(int.class, employeeIdField.getType());
}

We can see here that we’ve gathered the two fields of Person as well as the single field of Employee.

But is the private field of Person really an inherited field? Not quite, and the same would be true for a package-private field. Only public and protected fields are considered inherited.

3.2. Filtering public and protected Fields

Unfortunately, no method in the Java API allows us to gather public and protected fields from a class and its superclasses. The Class::getFields method approaches our goal as it returns all public fields of a class and its superclasses, but not the protected ones.

The only way to gather only inherited fields is to use the getDeclaredFields() method, as we just did, and filter its results using the Field::getModifiers method. This one returns an int representing the modifiers of the current field. Each possible modifier is assigned a power of two between 2^0 and 2^7.

For example, public is 2^0 and static is 2^3. Therefore calling the getModifiers() method on a public and static field would return 9.

Then, it’s possible to perform a bitwise AND between this value and the value of a specific modifier to see if the field has that modifier. If the operation returns something other than 0, the modifier is applied; otherwise, it isn’t.
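For example, here’s a minimal sketch of such a bitwise check, applied to the protected lastName field of our Person class (getDeclaredField throws a checked NoSuchFieldException, so this would live in a method that declares it):

Field lastNameField = Person.class.getDeclaredField("lastName");
int modifiers = lastNameField.getModifiers();

// Modifier.PUBLIC is 2^0, Modifier.PROTECTED is 2^2
boolean isPublic = (modifiers & Modifier.PUBLIC) != 0;       // false
boolean isProtected = (modifiers & Modifier.PROTECTED) != 0; // true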

Fortunately, Java provides us with the Modifier utility class to check whether modifiers are present in the value returned by getModifiers(). Let’s use the isPublic() and isProtected() methods to gather only inherited fields in our example:

List<Field> personFields = Arrays.stream(Employee.class.getSuperclass().getDeclaredFields())
  .filter(f -> Modifier.isPublic(f.getModifiers()) || Modifier.isProtected(f.getModifiers()))
  .collect(Collectors.toList());

assertEquals(1, personFields.size());

assertTrue(personFields.stream().anyMatch(field ->
  field.getName().equals(LAST_NAME_FIELD)
    && field.getType().equals(String.class))
);

As we can see, the result doesn’t carry the private field anymore.

3.3. Retrieving Inherited Fields on a Deep Class Hierarchy

In the above example, we worked on a single class hierarchy. What do we do now if we have a deeper class hierarchy and want to gather all the inherited fields?

Let’s assume we have a subclass of Employee or a superclass of Person – then obtaining the fields of the whole hierarchy will require checking all the superclasses.

We can achieve that by creating a utility method that runs through the hierarchy, building the complete result for us:

List<Field> getAllFields(Class<?> clazz) {
    if (clazz == null) {
        return Collections.emptyList();
    }

    List<Field> result = new ArrayList<>(getAllFields(clazz.getSuperclass()));
    List<Field> filteredFields = Arrays.stream(clazz.getDeclaredFields())
      .filter(f -> Modifier.isPublic(f.getModifiers()) || Modifier.isProtected(f.getModifiers()))
      .collect(Collectors.toList());
    result.addAll(filteredFields);
    return result;
}

This recursive method will search for public and protected fields through the class hierarchy and return all that have been found in a List.

Let’s illustrate it with a little test on a new MonthEmployee class, extending the Employee one:

public class MonthEmployee extends Employee {
    protected double reward;
}

This class defines a new field – reward. Given the whole class hierarchy, our method should give us the following field definitions: Person::lastName, Employee::employeeId and MonthEmployee::reward.

Let’s call the getAllFields() method on MonthEmployee:

@Test
public void givenMonthEmployeeClass_whenGetAllFields_thenThreeFields() {
    List<Field> allFields = getAllFields(MonthEmployee.class);

    assertEquals(3, allFields.size());

    assertTrue(allFields.stream().anyMatch(field ->
      field.getName().equals(LAST_NAME_FIELD)
        && field.getType().equals(String.class))
    );
    assertTrue(allFields.stream().anyMatch(field ->
      field.getName().equals(EMPLOYEE_ID_FIELD)
        && field.getType().equals(int.class))
    );
    assertTrue(allFields.stream().anyMatch(field ->
      field.getName().equals(MONTH_EMPLOYEE_REWARD_FIELD)
        && field.getType().equals(double.class))
    );
}

As expected, we gather all the public and protected fields.

4. Conclusion

In this article, we saw how to retrieve the fields of a Java class using the Java Reflection API.

We first learned how to retrieve the declared fields of a class. After that, we saw how to retrieve its superclass fields as well. Then, we learned to filter out non-public and non-protected fields.

Finally, we saw how to apply all of this to gather the inherited fields of a multiple class hierarchy.

As usual, the full code for this article is available over on our GitHub.

JPA 2.2 Support for Java 8 Date/Time Types


1. Overview

JPA 2.2 has officially introduced support for the Java 8 Date and Time API. Before that, we either had to rely on a proprietary solution or use the JPA Converter API.

In this tutorial, we’ll show how to map the various Java 8 Date and Time types. We’ll especially focus on the ones that take into account the offset information.

2. Maven Dependencies

Before we start, we need to include the JPA 2.2 API in the project classpath. In a Maven-based project, we can simply add its dependency to our pom.xml file:

<dependency>
    <groupId>javax.persistence</groupId>
    <artifactId>javax.persistence-api</artifactId>
    <version>2.2</version>
</dependency>

Additionally, to run the project, we need a JPA implementation and the JDBC driver of the database that we’ll be working with. In this tutorial, we’ll use EclipseLink and the PostgreSQL database:

<dependency>
    <groupId>org.eclipse.persistence</groupId>
    <artifactId>eclipselink</artifactId>
    <version>2.7.4-RC1</version>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>42.2.5</version>
    <scope>runtime</scope>
    <type>bundle</type>
</dependency>

Feel free to check the latest versions of JPA API, EclipseLink, and PostgreSQL JDBC driver on Maven Central.

Of course, we can use other databases or JPA implementations like Hibernate.

3. TimeZone Support

We can work with any database, but first we should check the database’s support for these standard SQL types, since JDBC 4.2 is based on them:

  • TIMESTAMP(n) WITH TIME ZONE
  • TIMESTAMP(n) WITHOUT TIME ZONE
  • TIME(n) WITH TIME ZONE
  • TIME(n) WITHOUT TIME ZONE

Here, n is the fractional seconds precision and is between 0 and 9 digits. WITHOUT TIME ZONE is optional and can be omitted. If WITH TIME ZONE is specified, the timezone name or the offset to UTC is required.

We can represent the timezone in one of these two formats:

  • Timezone name
  • Offset from UTC or the letter Z for UTC

For our example, we’ve chosen the PostgreSQL database thanks to its full support for the SQL Type TIME WITH TIME ZONE.

Note that other databases may not support these types.

4. Mapping Date Types Before Java 8

Before Java 8, we usually had to map the generic SQL types TIME, DATE, and TIMESTAMP to either the java.sql classes java.sql.Time, java.sql.Date, and java.sql.Timestamp, respectively, or to the java.util types java.util.Date and java.util.Calendar.

First, let’s see how to use the java.sql types. Here, we’re simply defining the attributes with java.sql types as part of an @Entity class:

@Entity
public class JPA22DateTimeEntity {

    private java.sql.Time sqlTime;
    private java.sql.Date sqlDate;
    private java.sql.Timestamp sqlTimestamp;
    
    // ...
}

While the java.sql types work like any other types without any additional mapping, the java.util types need to specify the corresponding temporal types.

This is done through the @Temporal annotation whose value attribute allows us to specify the corresponding JDBC type, using the TemporalType enumeration:

@Temporal(TemporalType.TIME)
private java.util.Date utilTime;

@Temporal(TemporalType.DATE)
private java.util.Date utilDate;

@Temporal(TemporalType.TIMESTAMP)
private java.util.Date utilTimestamp;

Note that if we’re using Hibernate as the implementation, it doesn’t support mapping Calendar to TIME.

Similarly, we can use the Calendar class:

@Temporal(TemporalType.TIME)
private Calendar calendarTime;

@Temporal(TemporalType.DATE)
private Calendar calendarDate;

@Temporal(TemporalType.TIMESTAMP)
private Calendar calendarTimestamp;

None of these types have support for the timezone or the offset. To deal with those pieces of information, we traditionally had to store the UTC time.

5. Mapping Java 8 Date Types

Java 8 has introduced the java.time packages, and the JDBC 4.2 API added support for the additional SQL types TIMESTAMP WITH TIME ZONE and TIME WITH TIME ZONE.

We can now map the JDBC Types TIME, DATE, and TIMESTAMP to the java.time types – LocalTime, LocalDate, and LocalDateTime:

@Column(name = "local_time", columnDefinition = "TIME")
private LocalTime localTime;

@Column(name = "local_date", columnDefinition = "DATE")
private LocalDate localDate;

@Column(name = "local_date_time", columnDefinition = "TIMESTAMP")
private LocalDateTime localDateTime;

Additionally, we have support for the local timezone’s offset to UTC through the OffsetTime and OffsetDateTime classes:

@Column(name = "offset_time", columnDefinition = "TIME WITH TIME ZONE")
private OffsetTime offsetTime;

@Column(name = "offset_date_time", columnDefinition = "TIMESTAMP WITH TIME ZONE")
private OffsetDateTime offsetDateTime;

The corresponding mapped column types should be TIME WITH TIME ZONE and TIMESTAMP WITH TIME ZONE. Unfortunately, not all databases support these two types.

As we can see, JPA supports these five classes as basic types, and there’s no additional information needed to distinguish between the date and/or the time information.

After saving a new instance of our entity class, we can check that the data has been inserted correctly. As a minimal sketch, assuming a persistence unit named “jpa-2.2” and standard setters on our entity, persisting an instance could look like this:
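// the persistence unit name "jpa-2.2" is an assumption for this sketch
EntityManagerFactory emf = Persistence.createEntityManagerFactory("jpa-2.2");
EntityManager em = emf.createEntityManager();

JPA22DateTimeEntity entity = new JPA22DateTimeEntity();
entity.setLocalTime(LocalTime.now());
entity.setLocalDate(LocalDate.now());
entity.setLocalDateTime(LocalDateTime.now());
entity.setOffsetTime(OffsetTime.now());
entity.setOffsetDateTime(OffsetDateTime.now());

em.getTransaction().begin();
em.persist(entity);
em.getTransaction().commit();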

6. Conclusion

Before Java 8 and JPA 2.2, developers usually had to convert date/time types to UTC before persisting them. JPA 2.2 now supports this feature out of the box by supporting the offset to UTC and by leveraging JDBC 4.2 support for the timezone.

The full source code for these samples can be found over on GitHub.

Create a Directory in Java


1. Overview

Creating a directory with Java is pretty straightforward. The language provides us with two methods that allow us to create either a single directory or multiple nested directories – mkdir() and mkdirs().

In this tutorial, we’ll see how they both behave.

2. Create a Single Directory

Let’s start with the creation of a single directory.

For our purposes, we’ll make use of the user temp directory. We can look it up with System.getProperty("java.io.tmpdir").

We’ll pass this path to a Java File object, which will represent our temp directory:

private static final File TEMP_DIRECTORY = new File(System.getProperty("java.io.tmpdir"));

Now let’s create a new directory inside of it. We’ll achieve this by calling the File::mkdir method on a new File object representing the directory to create:

File newDirectory = new File(TEMP_DIRECTORY, "new_directory");
assertFalse(newDirectory.exists());
assertTrue(newDirectory.mkdir());

To ensure our directory doesn’t exist yet, we first used the exists() method.

Then we called the mkdir() method that tells us if the directory creation succeeded or not. If the directory already existed, the method would have returned false.

If we make the same calls again:

assertTrue(newDirectory.exists());
assertFalse(newDirectory.mkdir());

Then, as we expected, the method returns false on the second call.

3. Create Multiple Nested Directories

What we’ve seen so far works well on a single directory, but what happens if we want to create multiple nested directories?

In the following example, we’ll see that File::mkdir doesn’t work for that:

File newDirectory = new File(TEMP_DIRECTORY, "new_directory");
File nestedDirectory = new File(newDirectory, "nested_directory");
assertFalse(newDirectory.exists());
assertFalse(nestedDirectory.exists());
assertFalse(nestedDirectory.mkdir());

As new_directory doesn’t exist, mkdir() doesn’t create the underlying nested_directory.

However, the File class provides us with another method to achieve that – mkdirs(). This method behaves like mkdir() but also creates all the nonexistent parent directories.

In our previous example, this would mean creating not only nested_directory, but also new_directory.

Note that until now we used the File(File, String) constructor, but we can also use the File(String) constructor and pass the complete path of our file using File.separator to separate the different parts of the path:

File newDirectory = new File(System.getProperty("java.io.tmpdir") + File.separator + "new_directory");
File nestedDirectory = new File(newDirectory, "nested_directory");
assertFalse(newDirectory.exists());
assertFalse(nestedDirectory.exists());
assertTrue(nestedDirectory.mkdirs());

As we can see, the directories are created as expected. Moreover, the method only returns false when no directory was created, for instance because all of them already exist. If at least one directory is created, the method returns true.

Therefore this means that the mkdirs() method used on a directory whose parents exist will work the same as the mkdir() method.

4. Conclusion

In this article, we’ve seen two methods that allow us to create directories in Java. The first one, mkdir(), targets the creation of a single directory, provided its parents already exist. The second one, mkdirs(), is able to create a directory as well as its nonexistent parents.

The code of this article can be found on our GitHub.

Common Hibernate Exceptions


1. Introduction

In this tutorial, we’ll discuss some common exceptions we can encounter while working with Hibernate.

We’ll review their purpose and some common causes. Additionally, we’ll look into their solutions.

2. Hibernate Exception Overview

Many conditions can cause exceptions to be thrown while using Hibernate. These can be mapping errors, infrastructure problems, SQL errors, data integrity violations, session problems, and transaction errors.

These exceptions mostly extend from HibernateException. However, if we’re using Hibernate as a JPA persistence provider, these exceptions may get wrapped into PersistenceException.

Both of these base classes extend from RuntimeException. Therefore, they’re all unchecked. Hence, we don’t need to catch or declare them at every place they’re used.

Furthermore, most of these are unrecoverable. As a result, retrying the operation would not help. This means we have to abandon the current session on encountering them.

Let’s now look into each of these, one at a time.

3. Mapping Errors

Object-Relational mapping is a major benefit of Hibernate. Specifically, it frees us from manually writing SQL statements.

At the same time, it requires us to specify the mapping between Java objects and database tables. Accordingly, we specify them using annotations or through mapping documents. These mappings can be coded manually. Alternatively, we can use tools to generate them.

While specifying these mappings, we may make mistakes. These could be in the mapping specification. Or, there can be a mismatch between a Java object and the corresponding database table.

Such mapping errors generate exceptions. We come across them frequently during initial development. Additionally, we may run into them while migrating changes across environments.

Let’s look into these errors with some examples.

3.1. MappingException

A problem with the object-relational mapping causes a MappingException to be thrown:

@Test
public void whenQueryExecutedWithUnmappedEntity_thenMappingException() {
    thrown.expectCause(isA(MappingException.class));
    thrown.expectMessage("Unknown entity: java.lang.String");

    Session session = sessionFactory.getCurrentSession();
    NativeQuery<String> query = session
      .createNativeQuery("select name from PRODUCT", String.class);
    query.getResultList();
}

In the above code, the createNativeQuery method tries to map the query result to the specified Java type String. It uses the implicit mapping of the String class from the Metamodel to perform the mapping.

However, the String class doesn’t have any mapping specified. Therefore, Hibernate doesn’t know how to map the name column to String and throws the exception.

For a detailed analysis of possible causes and solutions, check out Hibernate Mapping Exception – Unknown Entity.

Similarly, other errors can also cause this exception:

  • Mixing annotations on fields and methods
  • Failing to specify the @JoinTable for a @ManyToMany association
  • The default constructor of the mapped class throws an exception during mapping processing

Furthermore, MappingException has a few subclasses which can indicate specific mapping problems:

  • AnnotationException – a problem with an annotation
  • DuplicateMappingException – duplicate mapping for a class, table, or property name
  • InvalidMappingException – mapping is invalid
  • MappingNotFoundException – mapping resource could not be found
  • PropertyNotFoundException – an expected getter or setter method could not be found on a class

Therefore, if we come across this exception, we should first verify our mappings.

3.2. AnnotationException

To understand the AnnotationException, let’s create an entity without an identifier annotation on any field or property:

@Entity
public class EntityWithNoId {
    private int id;
    public int getId() {
        return id;
    }

    // standard setter
}

Since Hibernate expects every entity to have an identifier, we’ll get an AnnotationException when we use the entity:

@Test
public void givenEntityWithoutId_whenSessionFactoryCreated_thenAnnotationException() {
    thrown.expect(AnnotationException.class);
    thrown.expectMessage("No identifier specified for entity");

    Configuration cfg = getConfiguration();
    cfg.addAnnotatedClass(EntityWithNoId.class);
    cfg.buildSessionFactory();
}

Furthermore, some other probable causes are:

  • Unknown sequence generator used in the @GeneratedValue annotation
  • @Temporal annotation used with a Java 8 Date/Time class
  • Target entity missing or non-existent for @ManyToOne or @OneToMany
  • Raw collection classes used with relationship annotations @OneToMany or @ManyToMany
  • Concrete classes used with the collection annotations @OneToMany, @ManyToMany or @ElementCollection as Hibernate expects the collection interfaces

To resolve this exception, we should first check the specific annotation mentioned in the error message.

4. Schema Management Errors

Automatic database schema management is another benefit of Hibernate. For example, it can generate DDL statements to create or validate database objects.

To use this feature, we need to set the hibernate.hbm2ddl.auto property appropriately.

If there are problems while performing schema management, we get an exception. Let’s examine these errors.

4.1. SchemaManagementException

Any infrastructure-related problem in performing schema management causes a SchemaManagementException.

To demonstrate, let’s instruct Hibernate to validate the database schema:

@Test
public void givenMissingTable_whenSchemaValidated_thenSchemaManagementException() {
    thrown.expect(SchemaManagementException.class);
    thrown.expectMessage("Schema-validation: missing table");

    Configuration cfg = getConfiguration();
    cfg.setProperty(AvailableSettings.HBM2DDL_AUTO, "validate");
    cfg.addAnnotatedClass(Product.class);
    cfg.buildSessionFactory();
}

Since the table corresponding to Product is not present in the database, we get the schema-validation exception while building the SessionFactory.

Additionally, there are other possible scenarios for this exception:

  • unable to connect to the database to perform schema management tasks
  • the schema is not present in the database

4.2. CommandAcceptanceException

Any problem executing a DDL corresponding to a specific schema management command can cause a CommandAcceptanceException.

As an example, let’s specify the wrong dialect while setting up the SessionFactory:

@Test
public void whenWrongDialectSpecified_thenCommandAcceptanceException() {
    thrown.expect(SchemaManagementException.class);
        
    thrown.expectCause(isA(CommandAcceptanceException.class));
    thrown.expectMessage("Halting on error : Error executing DDL");

    Configuration cfg = getConfiguration();
    cfg.setProperty(AvailableSettings.DIALECT,
      "org.hibernate.dialect.MySQLDialect");
    cfg.setProperty(AvailableSettings.HBM2DDL_AUTO, "update");
    cfg.setProperty(AvailableSettings.HBM2DDL_HALT_ON_ERROR,"true");
    cfg.getProperties()
      .put(AvailableSettings.HBM2DDL_HALT_ON_ERROR, true);

    cfg.addAnnotatedClass(Product.class);
    cfg.buildSessionFactory();
}

Here, we’ve specified the wrong dialect: MySQLDialect. Also, we’re instructing Hibernate to update the schema objects. Consequently, the DDL statements executed by Hibernate to update the H2 database will fail and we’ll get an exception.

By default, Hibernate silently logs this exception and moves on. When we later use the SessionFactory, we get the exception.

To ensure that an exception is thrown on this error, we’ve set the property HBM2DDL_HALT_ON_ERROR to true.

Similarly, these are some other common causes for this error:

  • There is a mismatch in column names between mapping and the database
  • Two classes are mapped to the same table
  • The name used for a class or table is a reserved word in the database, like USER, for example
  • The user used to connect to the database does not have the required privilege

5. SQL Execution Errors

When we insert, update, delete or query data using Hibernate, it executes DML statements against the database using JDBC. This API raises an SQLException if the operation results in errors or warnings.

Hibernate converts this exception into JDBCException or one of its suitable subclasses:

  • ConstraintViolationException
  • DataException
  • JDBCConnectionException
  • LockAcquisitionException
  • PessimisticLockException
  • QueryTimeoutException
  • SQLGrammarException
  • GenericJDBCException

Let’s discuss common errors.

5.1. JDBCException

JDBCException is always caused by a particular SQL statement. We can call the getSQL method to get the offending SQL statement.

Furthermore, we can retrieve the underlying SQLException with the getSQLException method.
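For instance, here’s a minimal sketch of logging both pieces of information, assuming a log instance is available:

try {
    // ... execute some Hibernate operation
} catch (JDBCException e) {
    // the SQL statement that triggered the error
    log.error("Offending SQL: {}", e.getSQL());
    // the vendor-specific error code of the underlying SQLException
    log.error("Error code: {}", e.getSQLException().getErrorCode());
}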

5.2. SQLGrammarException

SQLGrammarException indicates that the SQL sent to the database was invalid. It could be due to a syntax error or an invalid object reference.

For example, a missing table can result in this error while querying data:

@Test
public void givenMissingTable_whenQueryExecuted_thenSQLGrammarException() {
    thrown.expect(isA(PersistenceException.class));
    thrown.expectCause(isA(SQLGrammarException.class));
    thrown.expectMessage("SQLGrammarException: could not prepare statement");

    Session session = sessionFactory.getCurrentSession();
    NativeQuery<Product> query = session.createNativeQuery(
      "select * from NON_EXISTING_TABLE", Product.class);
    query.getResultList();
}

Also, we can get this error while saving data if the table is missing:

@Test
public void givenMissingTable_whenEntitySaved_thenSQLGrammarException() {
    thrown.expect(isA(PersistenceException.class));
    thrown.expectCause(isA(SQLGrammarException.class));
    thrown
      .expectMessage("SQLGrammarException: could not prepare statement");

    Configuration cfg = getConfiguration();
    cfg.addAnnotatedClass(Product.class);

    SessionFactory sessionFactory = cfg.buildSessionFactory();
    Session session = null;
    Transaction transaction = null;
    try {
        session = sessionFactory.openSession();
        transaction = session.beginTransaction();
        Product product = new Product();
        product.setId(1);
        product.setName("Product 1");
        session.save(product);
        transaction.commit();
    } catch (Exception e) {
        rollbackTransactionQuietly(transaction);
        throw (e);
    } finally {
        closeSessionQuietly(session);
        closeSessionFactoryQuietly(sessionFactory);
    }
}

Some other possible causes are:

  • The naming strategy used doesn’t map the classes to the correct tables
  • The column specified in @JoinColumn doesn’t exist

5.3. ConstraintViolationException

A ConstraintViolationException indicates that the requested DML operation caused an integrity constraint to be violated. We can get the name of this constraint by calling the getConstraintName method.

A common cause of this exception is trying to save duplicate records:

@Test
public void whenDuplicateIdSaved_thenConstraintViolationException() {
    thrown.expect(isA(PersistenceException.class));
    thrown.expectCause(isA(ConstraintViolationException.class));
    thrown.expectMessage(
      "ConstraintViolationException: could not execute statement");

    Session session = null;
    Transaction transaction = null;

    for (int i = 1; i <= 2; i++) {
        try {
            session = sessionFactory.openSession();
            transaction = session.beginTransaction();
            Product product = new Product();
            product.setId(1);
            product.setName("Product " + i);
            session.save(product);
            transaction.commit();
        } catch (Exception e) {
            rollbackTransactionQuietly(transaction);
            throw (e);
        } finally {
            closeSessionQuietly(session);
        }
    }
}

Also, saving a null value to a NOT NULL column in the database can raise this error.

In order to resolve this error, we should perform all validations in the business layer. Furthermore, database constraints should not be used to do application validations.

5.4. DataException

DataException indicates that the evaluation of an SQL statement resulted in some illegal operation, type mismatch or incorrect cardinality.

For instance, using character data against a numeric column can cause this error:

@Test
public void givenQueryWithDataTypeMismatch_WhenQueryExecuted_thenDataException() {
    thrown.expectCause(isA(DataException.class));
    thrown.expectMessage(
      "org.hibernate.exception.DataException: could not prepare statement");

    Session session = sessionFactory.getCurrentSession();
    NativeQuery<Product> query = session.createNativeQuery(
      "select * from PRODUCT where id='wrongTypeId'", Product.class);
    query.getResultList();
}

To fix this error, we should ensure that the data types and length match between the application code and the database.

5.5. JDBCConnectionException

A JDBCConnectionException indicates problems communicating with the database.

For example, a database or network going down can cause this exception to be thrown.

Additionally, an incorrect database setup can cause this exception. One such case is the database connection being closed by the server because it was idle for a long time. This can happen if we’re using connection pooling and the idle timeout setting on the pool is more than the connection timeout value in the database.

To solve this problem, we should first ensure that the database host is present and that it’s up. Then, we should verify that the correct authentication is used for the database connection. Finally, we should check that the timeout value is correctly set on the connection pool.

5.6. QueryTimeoutException

When a database query times out, we get this exception. We can also see it due to other errors, such as the tablespace becoming full.

This is one of the few recoverable errors, which means that we can retry the statement in the same transaction.

To fix this issue, we can increase the query timeout for long-running queries in multiple ways:

  • Set the timeout element in a @NamedQuery or @NamedNativeQuery annotation
  • Invoke the setHint method of the Query interface
  • Call the setTimeout method of the Transaction interface
  • Invoke the setTimeout method of the Query interface
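
A short sketch of the last two options (Hibernate's setTimeout takes seconds, while the JPA hint takes milliseconds):

Query<Product> query = session.createQuery("from Product", Product.class);
query.setTimeout(5); // seconds

// or, equivalently, via a JPA query hint
query.setHint("javax.persistence.query.timeout", 5000); // milliseconds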

6. Session-State-Related Errors

Let's now look into errors caused by improper use of the Hibernate session.

6.1. NonUniqueObjectException

Hibernate doesn’t allow two objects with the same identifier in a single session.

If we try to associate two instances of the same Java class with the same identifier in a single session, we get a NonUniqueObjectException. We can get the name and identifier of the entity by calling the getEntityName() and getIdentifier() methods.

To reproduce this error, let’s try to save two instances of Product with the same id with a session:

public void 
givenSessionContainingAnId_whenIdAssociatedAgain_thenNonUniqueObjectException() {
    thrown.expect(isA(NonUniqueObjectException.class));
    thrown.expectMessage(
      "A different object with the same identifier value was already associated with the session");

    Session session = null;
    Transaction transaction = null;

    try {
        session = sessionFactory.openSession();
        transaction = session.beginTransaction();

        Product product = new Product();
        product.setId(1);
        product.setName("Product 1");
        session.save(product);

        product = new Product();
        product.setId(1);
        product.setName("Product 2");
        session.save(product);

        transaction.commit();
    } catch (Exception e) {
        rollbackTransactionQuietly(transaction);
        throw e;
    } finally {
        closeSessionQuietly(session);
    }
}

We’ll get a NonUniqueObjectException, as expected.

This exception occurs frequently while reattaching a detached object with a session by calling the update method. If the session has another instance with the same identifier loaded, then we get this error. In order to fix this, we can use the merge method to reattach the detached object.
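
A minimal sketch of the fix:

Product detached = new Product();
detached.setId(1);
detached.setName("Updated Product 1");

// merge copies the detached state onto the managed instance,
// so no NonUniqueObjectException is thrown
Product managed = (Product) session.merge(detached);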

6.2. StaleStateException

Hibernate throws StaleStateExceptions when the version number or timestamp check fails. It indicates that the session contained stale data.

Sometimes this gets wrapped into an OptimisticLockException.

This error usually happens while using long-running transactions with versioning.

In addition, it can also happen while trying to update or delete an entity if the corresponding database row doesn’t exist:

public void whenUpdatingNonExistingObject_thenStaleStateException() {
    thrown.expect(isA(OptimisticLockException.class));
    thrown.expectMessage(
      "Batch update returned unexpected row count from update");
    thrown.expectCause(isA(StaleStateException.class));

    Session session = null;
    Transaction transaction = null;

    try {
        session = sessionFactory.openSession();
        transaction = session.beginTransaction();

        Product product = new Product();
        product.setId(15);
        product.setName("Product1");
        session.update(product);
        transaction.commit();
    } catch (Exception e) {
        rollbackTransactionQuietly(transaction);
        throw e;
    } finally {
        closeSessionQuietly(session);
    }
}

Some other possible scenarios are:

  • we did not specify a proper unsaved-value strategy for the entity
  • two users tried to delete the same row at almost the same time
  • we manually set a value in the autogenerated ID or version field

7. Lazy Initialization Errors

We usually configure associations to be loaded lazily in order to improve application performance. The associations are fetched only when they’re first used.

However, Hibernate requires an active session to fetch data. If the session is already closed when we try to access an uninitialized association, we get an exception.

Let’s look into this exception and the various ways to fix it.

7.1. LazyInitializationException

LazyInitializationException indicates an attempt to load uninitialized data outside an active session. We can get this error in many scenarios.

First, we can get this exception while accessing a lazy relationship in the presentation layer. The reason is that the entity was partially loaded in the business layer and the session was closed.

Secondly, we can get this error with Spring Data if we use the getOne method. This method lazily fetches the instance.

There are many ways to solve this exception.

First of all, we can make all relationships eagerly loaded. But, this would impact the application performance because we’ll be loading data that won’t be used.

Secondly, we can keep the session open until the view is rendered. This is known as the “Open Session in View” pattern, and it’s considered an anti-pattern that we should avoid, as it has several disadvantages.

Thirdly, we can open another session and reattach the entity in order to fetch the relationships. We can do so by using the merge method on the session.

Finally, we can initialize the required associations in the business layers. We’ll discuss this in the next section.

7.2. Initializing Relevant Lazy Relationships in the Business Layer

There are many ways to initialize lazy relationships.

One option is to initialize them by invoking the corresponding methods on the entity. In this case, Hibernate will issue multiple database queries causing degraded performance. We refer to it as the “N+1 SELECT” problem.

Secondly, we can use Fetch Join to get the data in a single query. However, we need to write custom code to achieve this.
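
As a sketch, assuming a hypothetical lazy categories association on Product, a JPQL fetch join loads the association in the same query:

List<Product> products = session.createQuery(
  "select distinct p from Product p join fetch p.categories", Product.class)
  .getResultList();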

Finally, we can use entity graphs to define all the attributes to be fetched. We can use the annotations @NamedEntityGraph, @NamedAttributeNode, and @NamedSubgraph to declaratively define the entity graph. We can also define them programmatically with the JPA API. Then, we retrieve the entire graph in a single call by specifying it in the fetch operation.
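
A minimal sketch, again assuming a hypothetical categories association:

@Entity
@NamedEntityGraph(
  name = "product.withCategories",
  attributeNodes = @NamedAttributeNode("categories"))
public class Product { /* ... */ }

EntityGraph<?> graph = entityManager.getEntityGraph("product.withCategories");
Map<String, Object> hints = new HashMap<>();
hints.put("javax.persistence.fetchgraph", graph);
Product product = entityManager.find(Product.class, 1, hints);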

8. Transaction Issues

Transactions define units of work and isolation between concurrent activities. We can demarcate them in two different ways. First, we can define them declaratively using annotations. Second, we can manage them programmatically using the Hibernate Transaction API.

Furthermore, Hibernate delegates transaction management to a transaction manager. If a transaction cannot be started, committed, or rolled back for any reason, Hibernate throws an exception.

We usually get a TransactionException or an IllegalArgumentException depending on the transaction manager.

As an illustration, let’s try to commit a transaction which has been marked for rollback:

public void 
givenTxnMarkedRollbackOnly_whenCommitted_thenTransactionException() {
    thrown.expect(isA(TransactionException.class));
    thrown.expectMessage(
        "Transaction was marked for rollback only; cannot commit");

    Session session = null;
    Transaction transaction = null;
    try {
        session = sessionFactory.openSession();
        transaction = session.beginTransaction();

        Product product = new Product();
        product.setId(15);
        product.setName("Product1");
        session.save(product);
        transaction.setRollbackOnly();

        transaction.commit();
    } catch (Exception e) {
        rollbackTransactionQuietly(transaction);
        throw e;
    } finally {
        closeSessionQuietly(session);
    }
}

Similarly, other errors can also cause an exception:

  • Mixing declarative and programmatic transactions
  • Attempting to start a transaction when another one is already active in the session
  • Trying to commit or rollback without starting a transaction
  • Trying to commit or rollback a transaction multiple times

9. Concurrency Issues

Hibernate supports two locking strategies to prevent database inconsistency due to concurrent transactions – optimistic and pessimistic. Both of them raise an exception in case of a locking conflict.

To support high concurrency and high scalability, we typically use optimistic concurrency control with version checking. This uses version numbers or timestamps to detect conflicting updates.

An OptimisticLockException is thrown to indicate an optimistic locking conflict. For instance, we get this error if we perform two updates or deletes of the same entity without refreshing it after the first operation:

public void whenDeletingADeletedObject_thenOptimisticLockException() {
    thrown.expect(isA(OptimisticLockException.class));
    thrown.expectMessage(
        "Batch update returned unexpected row count from update");
    thrown.expectCause(isA(StaleStateException.class));

    Session session = null;
    Transaction transaction = null;

    try {
        session = sessionFactory.openSession();
        transaction = session.beginTransaction();

        Product product = new Product();
        product.setId(12);
        product.setName("Product 12");
        session.save(product);
        transaction.commit();
        session.close();

        session = sessionFactory.openSession();
        transaction = session.beginTransaction();
        product = session.get(Product.class, 12);
        session.createNativeQuery("delete from Product where id=12")
          .executeUpdate();
        // We need to refresh to fix the error.
        // session.refresh(product);
        session.delete(product);
        transaction.commit();
    } catch (Exception e) {
        rollbackTransactionQuietly(transaction);
        throw e;
    } finally {
        closeSessionQuietly(session);
    }
}

Likewise, we can also get this error if two users try to update the same entity at almost the same time. In this case, the first may succeed, while the second raises this error.

Therefore, we cannot completely avoid this error without introducing pessimistic locking. However, we can minimize the probability of its occurrence by doing the following:

  • Keep update operations as short as possible
  • Update entity representations in the client as often as possible
  • Do not cache the entity or any value object representing it
  • Always refresh the entity representation on the client after update

10. Conclusion

In this article, we looked into some common exceptions encountered while using Hibernate. Furthermore, we investigated their probable causes and resolutions.

As usual, the full source code can be found over on GitHub.

Implementing Simple State Machines with Java Enums


1. Overview

In this tutorial, we’ll have a look at State Machines and how they can be implemented in Java using Enums.

We’ll also explain the advantages of this implementation compared to using an interface and a concrete class for each state.

2. Java Enums

A Java Enum is a special type of class that defines a list of constants. This allows for a type-safe implementation and more readable code.

As an example, let’s suppose we have an HR software system that can approve leave requests submitted by employees. This request is reviewed by the Team Leader, who escalates it to the Department Manager. The Department Manager is the person responsible for approving the request.

The simplest enum that holds the states of a leave request is:

public enum LeaveRequestState {
    Submitted,
    Escalated,
    Approved
}

We can refer to the constants of this enum:

LeaveRequestState state = LeaveRequestState.Submitted;

Enums can also contain methods. We can write an abstract method in an enum, which will force every enum instance to implement this method. This is very important for the implementation of state machines, as we’ll see below.

Since Java enums implicitly extend the class java.lang.Enum, they can’t extend another class. However, they can implement an interface, just like any other class.
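
For instance, a short sketch (the HasResponsiblePerson interface and the enum name are hypothetical):

public interface HasResponsiblePerson {
    String responsiblePerson();
}

public enum SimpleLeaveRequestState implements HasResponsiblePerson {
    Submitted, Escalated, Approved;

    @Override
    public String responsiblePerson() {
        return "Employee"; // simplified; the full example follows below
    }
}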

Here’s an example of an enum containing an abstract method:

public enum LeaveRequestState {
    Submitted {
        @Override
        public String responsiblePerson() {
            return "Employee";
        }
    },
    Escalated {
        @Override
        public String responsiblePerson() {
            return "Team Leader";
        }
    },
    Approved {
        @Override
        public String responsiblePerson() {
            return "Department Manager";
        }
    };

    public abstract String responsiblePerson();
}

Note the usage of the semicolon at the end of the last enum constant. The semicolon is required when we have one or more methods following the constants.

In this case, we extended the first example with a responsiblePerson() method. This tells us the person responsible for performing each action. So, if we try to check the person responsible for the Escalated state, it will give us “Team Leader”:

LeaveRequestState state = LeaveRequestState.Escalated;
assertEquals("Team Leader", state.responsiblePerson());

In the same way, if we check who is responsible for approving the request, it will give us “Department Manager”:

LeaveRequestState state = LeaveRequestState.Approved;
assertEquals("Department Manager", state.responsiblePerson());

3. State Machines

A state machine — also called a finite state machine or finite automaton — is a computational model used to build an abstract machine. Such a machine can be in only one state at a given time. Each state represents a status of the system, and the system moves from one state to another; these state changes are called transitions.

It can get complicated in mathematics with diagrams and notations, but things are a lot easier for us programmers.

The State Pattern is one of the well-known twenty-three design patterns of the GoF. This pattern borrows the concept from the model in mathematics. It allows an object to encapsulate different behaviors based on its internal state. We can program the transitions between states and later define separate states.

To explain the concept better, we’ll expand our leave request example to implement a state machine.

4. Enums as State Machines

We’ll focus on the enum implementation of state machines in Java. Other implementations are possible, and we’ll compare them in the next section.

The main point of state machine implementation using an enum is that we don’t have to deal with explicitly setting the states. Instead, we can just provide the logic on how to transition from one state to the next one. Let’s dive right in:

public enum LeaveRequestState {

    Submitted {
        @Override
        public LeaveRequestState nextState() {
            return Escalated;
        }

        @Override
        public String responsiblePerson() {
            return "Employee";
        }
    },
    Escalated {
        @Override
        public LeaveRequestState nextState() {
            return Approved;
        }

        @Override
        public String responsiblePerson() {
            return "Team Leader";
        }
    },
    Approved {
        @Override
        public LeaveRequestState nextState() {
            return this;
        }

        @Override
        public String responsiblePerson() {
            return "Department Manager";
        }
    };

    public abstract LeaveRequestState nextState(); 
    public abstract String responsiblePerson();
}

In this example, the state machine transitions are implemented using the enum’s abstract methods. More precisely, using the nextState() method on each enum constant, we specify the transition to the next state. If needed, we can also implement a previousState() method.

Below is a test to check our implementation:

LeaveRequestState state = LeaveRequestState.Submitted;

state = state.nextState();
assertEquals(LeaveRequestState.Escalated, state);

state = state.nextState();
assertEquals(LeaveRequestState.Approved, state);

state = state.nextState();
assertEquals(LeaveRequestState.Approved, state);

We start the leave request in the Submitted initial state. We then verify the state transitions by using the nextState() method we implemented above.

Note that since Approved is the final state, no other transition can happen.

5. Advantages of Implementing State Machines with Java Enums

The implementation of state machines with interfaces and implementation classes can require a significant amount of code to develop and maintain.

Since a Java enum is, in its simplest form, a list of constants, we can use an enum to define our states. And since an enum can also contain behavior, we can use methods to provide the transition implementation between states.

Having all the logic in a simple enum allows for a clean and straightforward solution.

6. Conclusion

In this article, we looked at state machines and how they can be implemented in Java using Enums. We gave an example and tested it.

Eventually, we also discussed the advantages of using enums to implement state machines. As an alternative to the interface and implementation solution, enums provide a cleaner and easier-to-understand implementation of state machines.

As always, all of the code snippets mentioned in this article can be found in our GitHub repository.


Counting Matches on a Stream Filter


1. Overview

In this tutorial, we’ll explore the use of the Stream.count() method. Specifically, we’ll see how we can combine the count() method with the filter() method to count the matches of a Predicate we’ve applied.

2. Using Stream.count()

The count() method itself provides a small but very useful piece of functionality. It also combines well with other tools, such as Stream.filter().

Let’s use the same Customer class that we defined in our tutorial for Stream.filter():

public class Customer {
    private String name;
    private int points;
    //Constructor and standard getters
}

In addition, we also create the same collection of customers:

Customer john = new Customer("John P.", 15);
Customer sarah = new Customer("Sarah M.", 200);
Customer charles = new Customer("Charles B.", 150);
Customer mary = new Customer("Mary T.", 1);

List<Customer> customers = Arrays.asList(john, sarah, charles, mary);

Next, we’ll apply Stream methods on the list to filter it and determine how many matches our filters get.

2.1. Counting Elements

Let’s see the very basic usage of count():

long count = customers.stream().count();

assertThat(count).isEqualTo(4L);

Note that count() returns a long value.

2.2. Using count() with filter()

The example in the previous subsection wasn’t really impressive. We could have come to the same result with the List.size() method.

Stream.count() really shines when we combine it with other Stream methods – most often with filter():

long countBigCustomers = customers
  .stream()
  .filter(c -> c.getPoints() > 100)
  .count();

assertThat(countBigCustomers).isEqualTo(2L);

In this example, we’ve applied a filter on the list of customers, and we’ve also obtained the number of customers that fulfill the condition. In this case, we have two customers with more than 100 points.

Of course, it can also happen that no element matches our filter:

long count = customers
  .stream()
  .filter(c -> c.getPoints() > 500)
  .count();

assertThat(count).isEqualTo(0L);

2.3. Using count() with Advanced Filters

In our tutorial about filter(), we saw some more advanced use cases of the method. Of course, we can still count the result of such filter() operations.

We can filter collections with multiple criteria:

long count = customers
  .stream()
  .filter(c -> c.getPoints() > 10 && c.getName().startsWith("Charles"))
  .count();

assertThat(count).isEqualTo(1L);

Here, we filtered and counted the number of customers whose names start with “Charles” and who have more than 10 points.

We can also extract the criteria into its own method and use method reference:

long count = customers
  .stream()
  .filter(Customer::hasOverHundredPoints)
  .count();

assertThat(count).isEqualTo(2L);
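
The hasOverHundredPoints method isn't shown in the Customer class above; a minimal sketch of what it could look like:

public boolean hasOverHundredPoints() {
    return this.points > 100;
}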

3. Conclusion

In this article, we saw some examples of how to use the count() method in combination with the filter() method to process streams. For further use cases of count(), check out other methods that return a Stream, such as those shown in our tutorial about merging streams with concat().

As always, the complete code is available over on GitHub.

Variable Scope in Java


1. Overview

In Java, as in any programming language, each variable has a scope. This is the segment of the program where a variable can be used and is valid.

In this tutorial, we’ll introduce the available scopes in Java and discuss the differences between them.

2. Class Scope

Each variable declared inside a class’s brackets ({}) but outside of any method has class scope. As a result, these variables can be used everywhere in the class, but not outside of it:

public class ClassScopeExample {
    Integer amount = 0;
    public void exampleMethod() {
        amount++;
    }
    public void anotherExampleMethod() {
        Integer anotherAmount = amount + 4;
    }
}

We can see that ClassScopeExample has a class variable amount that can be accessed within the class’s methods.

3. Method Scope

When a variable is declared inside a method, it has method scope and is only valid inside that same method:

public class MethodScopeExample {
    public void methodA() {
        Integer area = 2;
    }
    public void methodB() {
        // compiler error, area cannot be resolved to a variable
        area = area + 2;
    }
}

In methodA, we created a method variable called area. For that reason, we can use area inside methodA, but we can’t use it anywhere else.

4. Loop Scope

If we declare a variable inside a loop, it will have loop scope and will only be available inside the loop:

public class LoopScopeExample {
    List<String> listOfNames = Arrays.asList("Joe", "Susan", "Pattrick");
    public void iterationOfNames() {
        String allNames = "";
        for (String name : listOfNames) {
            allNames = allNames + " " + name;
        }
        // compiler error, name cannot be resolved to a variable
        String lastNameUsed = name;
    }
}

We can see that method iterationOfNames has a method variable called name. This variable can be used only inside the loop and is not valid outside of it.

5. Bracket Scope

We can define additional scopes anywhere using brackets ({}):

public class BracketScopeExample {    
    public void mathOperationExample() {
        Integer sum = 0;
        {
            Integer number = 2;
            sum = sum + number;
        }
        // compiler error, number cannot be resolved to a variable
        number++;
    }
}

The variable number is only valid within the brackets.

6. Scopes and Variable Shadowing

Imagine that we have a class variable, and we want to declare a method variable with the same name:

public class NestedScopesExample {
    String title = "Baeldung";
    public void printTitle() {
        System.out.println(title);
        String title = "John Doe";
        System.out.println(title);
    }
}

The first time we print title, it will print “Baeldung”. After that, we declare a method variable with the same name and assign it the value “John Doe“.

The title method variable shadows the class variable title, so we can no longer access the class variable directly. That’s why the second time, it will print “John Doe”.

Confusing, right? This is called variable shadowing and isn’t a good practice. It’s better to use the prefix this in order to access the title class variable, like this.title.
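
A quick sketch of the corrected method (printTitleFixed is just an illustrative name):

public void printTitleFixed() {
    String title = "John Doe";
    System.out.println(this.title); // prints "Baeldung"
    System.out.println(title);      // prints "John Doe"
}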

7. Conclusion

We learned about the different scopes that exist in Java. A variable’s scope is limited to the brackets that contain it and starts on the line where it’s declared.

As always, the code is available over on GitHub.

Nested forEach in Kotlin


1. Introduction

In this short Kotlin tutorial, we’ll look at the parameter scope inside a forEach loop’s lambda.

First, we define the data which we’ll use in our examples. Second, we’ll see how to use forEach to iterate over a list. Third, we’ll look at how to use it in nested loops.

2. Test Data

The data we’ll use is a list of countries, each containing a list of cities, which in turn, contain a list of streets:

class Country(val name : String, val cities : List<City>)

class City(val name : String, val streets : List<String>)

class World {

    val streetsOfAmsterdam = listOf("Herengracht", "Prinsengracht")
    val streetsOfBerlin = listOf("Unter den Linden","Tiergarten")
    val streetsOfMaastricht = listOf("Grote Gracht", "Vrijthof")
    val countries = listOf(
      Country("Netherlands", listOf(City("Maastricht", streetsOfMaastricht),
        City("Amsterdam", streetsOfAmsterdam))),
      Country("Germany", listOf(City("Berlin", streetsOfBerlin))))
}

3. Simple forEach

To print the name of each country in the list, we can write the following code:

fun allCountriesExplicit() { 
    countries.forEach { c -> println(c.name) } 
}

The above syntax is similar to Java. However, in Kotlin, if the lambda accepts only one parameter, we can use it as the default parameter name and do not need to name it explicitly:

fun allCountriesIt() { 
    countries.forEach { println(it.name) } 
}

The above is also equivalent to:

fun allCountriesItExplicit() {
    countries.forEach { it -> println(it.name) }
}

It’s worthwhile to note that we can only use it as an implicit parameter name if there’s no explicit parameter.

For example, the following doesn’t work:

fun allCountriesExplicit() { 
    countries.forEach { c -> println(it.name) } 
}

And we’ll see an error at compile-time:

Error:(2, 38) Kotlin: Unresolved reference: it

4. Nested forEach

If we want to iterate over all countries, cities, and streets, we can write a nested loop:

fun allNested() {
    countries.forEach {
        println(it.name)
        it.cities.forEach {
            println(" ${it.name}")
            it.streets.forEach { println("  $it") }
        }
    }
}

Here, the first it refers to a country, the second it to a city and the third it to a street.

However, if we use IntelliJ, we see a warning:

Implicit parameter 'it' of enclosing lambda is shadowed

This might not be a problem, but in the innermost lambda we can no longer refer to the country or the city. If we want that, we need to name the parameters explicitly:

fun allTable() {
    countries.forEach { c ->
        c.cities.forEach { p ->
            p.streets.forEach { println("${c.name} ${p.name} $it") }
        }
    }
}

5. Conclusion

In this short article, we saw how to use the default parameter it in Kotlin and how to access the parameters of an outer forEach from within a nested forEach loop.

All the code snippets in this article can be found in our GitHub repository.

Guide to JUnit 5 Parameterized Tests


1. Overview

JUnit 5, the next generation of JUnit, facilitates writing developer tests with new and shiny features.

One such feature is parameterized tests. This feature enables us to execute a single test method multiple times with different parameters.

In this tutorial, we’re going to explore parameterized tests in depth, so let’s get started!

2. Dependencies

In order to use JUnit 5 parameterized tests, we need to import the junit-jupiter-params artifact from JUnit Jupiter. That means when using Maven, we’ll add the following to our pom.xml:

<dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter-params</artifactId>
    <version>5.3.2</version>
    <scope>test</scope>
</dependency>

Also, when using Gradle, we’ll specify it a little differently:

testCompile("org.junit.jupiter:junit-jupiter-params:5.3.2")

3. First Impression

Let’s say we have an existing utility function and we’d like to be confident about its behavior:

public class Numbers {
    public static boolean isOdd(int number) {
        return number % 2 != 0;
    }
}

Parameterized tests are like other tests except that we add the @ParameterizedTest annotation:

@ParameterizedTest
@ValueSource(ints = {1, 3, 5, -3, 15, Integer.MAX_VALUE}) // six numbers
void isOdd_ShouldReturnTrueForOddNumbers(int number) {
    assertTrue(Numbers.isOdd(number));
}

The JUnit 5 test runner executes the above test – and consequently, the isOdd method – six times. Each time, it assigns a different value from the @ValueSource array to the number method parameter.

So, this example shows us two things we need for a parameterized test:

  • a source of arguments, an int array, in this case
  • a way to access them, in this case, the number parameter

There is also one more thing not evident with this example, so stay tuned.

4. Argument Sources

As we should know by now, a parameterized test executes the same test multiple times with different arguments.

And, hopefully, we can do more than just numbers – so, let’s explore!

4.1. Simple Values

With the @ValueSource annotation, we can pass an array of literal values to the test method.

For example, suppose we’re going to test our simple isBlank method:

public class Strings {
    public static boolean isBlank(String input) {
        return input == null || input.trim().isEmpty();
    }
}

We expect this method to return true for null or blank strings. So, we can write a parameterized test like the following to assert this behavior:

@ParameterizedTest
@ValueSource(strings = {"", "  "})
void isBlank_ShouldReturnTrueForNullOrBlankStrings(String input) {
    assertTrue(Strings.isBlank(input));
}

As we can see, JUnit will run this test two times and each time assigns one argument from the array to the method parameter.

One of the limitations of value sources is that they only support the following types:

  • short (with the shorts attribute)
  • byte (with the bytes attribute)
  • int (with the ints attribute)
  • long (with the longs attribute)
  • float (with the floats attribute)
  • double (with the doubles attribute)
  • char (with the chars attribute)
  • java.lang.String (with the strings attribute)
  • java.lang.Class (with the classes attribute)

Also, we can only pass one argument to the test method each time.

And before going any further, did anyone notice we didn’t pass null as an argument? That’s another limitation: We can’t pass null through a @ValueSource, even for String and Class!

4.2. Enum

In order to run a test with different values from an enumeration, we can use the @EnumSource annotation.

For example, we can assert that all month numbers are between 1 and 12:

@ParameterizedTest
@EnumSource(Month.class) // passing all 12 months
void getValueForAMonth_IsAlwaysBetweenOneAndTwelve(Month month) {
    int monthNumber = month.getValue();
    assertTrue(monthNumber >= 1 && monthNumber <= 12);
}

Or, we can filter out a few months by using the names attribute.

How about asserting the fact that April, September, June, and November are 30 days long:

@ParameterizedTest
@EnumSource(value = Month.class, names = {"APRIL", "JUNE", "SEPTEMBER", "NOVEMBER"})
void someMonths_Are30DaysLong(Month month) {
    final boolean isALeapYear = false;
    assertEquals(30, month.length(isALeapYear));
}

By default, the names will only keep the matched enum values. We can turn this around by setting the mode attribute to EXCLUDE:

@ParameterizedTest
@EnumSource(
  value = Month.class,
  names = {"APRIL", "JUNE", "SEPTEMBER", "NOVEMBER", "FEBRUARY"},
  mode = EnumSource.Mode.EXCLUDE)
void exceptFourMonths_OthersAre31DaysLong(Month month) {
    final boolean isALeapYear = false;
    assertEquals(31, month.length(isALeapYear));
}

In addition to literal strings, we can pass a regular expression to the names attribute:

@ParameterizedTest
@EnumSource(value = Month.class, names = ".+BER", mode = EnumSource.Mode.MATCH_ANY)
void fourMonths_AreEndingWithBer(Month month) {
    EnumSet<Month> months =
      EnumSet.of(Month.SEPTEMBER, Month.OCTOBER, Month.NOVEMBER, Month.DECEMBER);
    assertTrue(months.contains(month));
}

Quite similar to @ValueSource, @EnumSource is only applicable when we’re going to pass just one argument per test execution.

4.3. CSV Literals

Suppose we’re going to make sure that the toUpperCase() method from String generates the expected uppercase value. @ValueSource won’t be enough.

In order to write a parameterized test for such scenarios, we have to:

  • Pass an input value and an expected value to the test method
  • Compute the actual result with those input values
  • Assert the actual value with the expected value

So, we need argument sources capable of passing multiple arguments. The @CsvSource is one of those sources:

@ParameterizedTest
@CsvSource({"test,TEST", "tEst,TEST", "Java,JAVA"})
void toUpperCase_ShouldGenerateTheExpectedUppercaseValue(String input, String expected) {
    String actualValue = input.toUpperCase();
    assertEquals(expected, actualValue);
}

The @CsvSource accepts an array of comma-separated values and each array entry corresponds to a line in a CSV file.

This source takes one array entry at a time, splits it by comma, and passes each value to the annotated test method as a separate parameter. By default, the comma is the column separator, but we can customize it using the delimiter attribute:

@ParameterizedTest
@CsvSource(value = {"test:test", "tEst:test", "Java:java"}, delimiter = ':')
void toLowerCase_ShouldGenerateTheExpectedLowercaseValue(String input, String expected) {
    String actualValue = input.toLowerCase();
    assertEquals(expected, actualValue);
}

Now it’s a colon-separated value, still a CSV!

4.4. CSV Files

Instead of passing the CSV values inside the code, we can refer to an actual CSV file.

For example, we could use a CSV file like:

input,expected
test,TEST
tEst,TEST
Java,JAVA

We can load the CSV file and ignore the header column with @CsvFileSource:

@ParameterizedTest
@CsvFileSource(resources = "/data.csv", numLinesToSkip = 1)
void toUpperCase_ShouldGenerateTheExpectedUppercaseValueCSVFile(String input, String expected) {
    String actualValue = input.toUpperCase();
    assertEquals(expected, actualValue);
}

The resources attribute represents the CSV file resources on the classpath to read. And, we can pass multiple files to it.

The numLinesToSkip attribute represents the number of lines to skip when reading the CSV files. By default, @CsvFileSource does not skip any lines, but this feature is usually useful for skipping the header lines, like we did here.

Just like the simple @CsvSource, the delimiter is customizable with the delimiter attribute.

In addition to the column separator:

  • The line separator can be customized using the lineSeparator attribute – a newline is the default value
  • The file encoding is customizable using the encoding attribute – UTF-8 is the default value

4.5. Method

The argument sources we’ve covered so far are somewhat simple and share one limitation: It’s hard or impossible to pass complex objects using them!

One approach to providing more complex arguments is to use a method as an argument source.

Let’s test the isBlank method with a @MethodSource:

@ParameterizedTest
@MethodSource("provideStringsForIsBlank")
void isBlank_ShouldReturnTrueForNullOrBlankStrings(String input, boolean expected) {
    assertEquals(expected, Strings.isBlank(input));
}

The name we supply to @MethodSource needs to match an existing method.

So let’s next write provideStringsForIsBlank, a static method that returns a Stream of Arguments:

private static Stream<Arguments> provideStringsForIsBlank() {
    return Stream.of(
      Arguments.of(null, true),
      Arguments.of("", true),
      Arguments.of("  ", true),
      Arguments.of("not blank", false)
    );
}

Here we’re literally returning a stream of arguments, but it’s not a strict requirement. For example, we can return any other collection-like type, such as List.

If we’re going to provide just one argument per test invocation, then it’s not necessary to use the Arguments abstraction:

@ParameterizedTest
@MethodSource // hmm, no method name ...
void isBlank_ShouldReturnTrueForNullOrBlankStringsOneArgument(String input) {
    assertTrue(Strings.isBlank(input));
}

private static Stream<String> isBlank_ShouldReturnTrueForNullOrBlankStringsOneArgument() {
    return Stream.of(null, "", "  ");
}

When we don’t provide a name for the @MethodSource, JUnit will search for a source method with the same name as the test method.

Sometimes it’s useful to share arguments between different test classes. In these cases, we can refer to a source method outside of the current class by its fully-qualified name:

class StringsUnitTest {

    @ParameterizedTest
    @MethodSource("com.baeldung.parameterized.StringParams#blankStrings")
    void isBlank_ShouldReturnTrueForNullOrBlankStringsExternalSource(String input) {
        assertTrue(Strings.isBlank(input));
    }
}

public class StringParams {

    static Stream<String> blankStrings() {
        return Stream.of(null, "", "  ");
    }
}

Using the FQN#methodName format we can refer to an external static method.

4.6. Custom Argument Provider

Another advanced approach to pass test arguments is to use a custom implementation of an interface called ArgumentsProvider:

class BlankStringsArgumentsProvider implements ArgumentsProvider {

    @Override
    public Stream<? extends Arguments> provideArguments(ExtensionContext context) {
        return Stream.of(
          Arguments.of((String) null), 
          Arguments.of(""), 
          Arguments.of("   ") 
        );
    }
}

Then we can annotate our test with the @ArgumentsSource annotation to use this custom provider:

@ParameterizedTest
@ArgumentsSource(BlankStringsArgumentsProvider.class)
void isBlank_ShouldReturnTrueForNullOrBlankStringsArgProvider(String input) {
    assertTrue(Strings.isBlank(input));
}

Let’s make the custom provider a more pleasant API to use with a custom annotation!

4.7. Custom Annotation

How about loading the test arguments from a static variable? Something like:

static Stream<Arguments> arguments = Stream.of(
  Arguments.of(null, true), // null strings should be considered blank
  Arguments.of("", true),
  Arguments.of("  ", true),
  Arguments.of("not blank", false)
);

@ParameterizedTest
@VariableSource("arguments")
void isBlank_ShouldReturnTrueForNullOrBlankStringsVariableSource(String input, boolean expected) {
    assertEquals(expected, Strings.isBlank(input));
}

Actually, JUnit 5 does not provide this! However, we can roll our own solution.

First off, we can create an annotation:

@Documented
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@ArgumentsSource(VariableArgumentsProvider.class)
public @interface VariableSource {

    /**
     * The name of the static variable
     */
    String value();
}

Then we need to somehow consume the annotation details and provide test arguments. JUnit 5 provides two abstractions to achieve those two things:

  • AnnotationConsumer to consume the annotation details
  • ArgumentsProvider to provide test arguments

So, we next need to make the VariableArgumentsProvider class read from the specified static variable and return its value as test arguments:

class VariableArgumentsProvider implements ArgumentsProvider, AnnotationConsumer<VariableSource> {

    private String variableName;

    @Override
    public Stream<? extends Arguments> provideArguments(ExtensionContext context) {
        return context.getTestClass()
                .map(this::getField)
                .map(this::getValue)
                .orElseThrow(() -> new IllegalArgumentException("Failed to load test arguments"));
    }

    @Override
    public void accept(VariableSource variableSource) {
        variableName = variableSource.value();
    }

    private Field getField(Class<?> clazz) {
        try {
            return clazz.getDeclaredField(variableName);
        } catch (Exception e) {
            return null;
        }
    }

    @SuppressWarnings("unchecked")
    private Stream<Arguments> getValue(Field field) {
        Object value = null;
        try {
            value = field.get(null);
        } catch (Exception ignored) {}

        return value == null ? null : (Stream<Arguments>) value;
    }
}

And it works like a charm!

5. Argument Conversion

5.1. Implicit Conversion

Let’s re-write one of those @EnumSource tests with a @CsvSource:

@ParameterizedTest
@CsvSource({"APRIL", "JUNE", "SEPTEMBER", "NOVEMBER"}) // Pssing strings
void someMonths_Are30DaysLongCsv(Month month) {
    final boolean isALeapYear = false;
    assertEquals(30, month.length(isALeapYear));
}

This shouldn’t work, right? But, somehow it does!

So, JUnit 5 converts the String arguments to the specified enum type. To support use cases like this, JUnit Jupiter provides a number of built-in implicit type converters.

The conversion process depends on the declared type of each method parameter. The implicit conversion can convert the String instances to types like:

  • UUID 
  • Locale
  • LocalDate, LocalTime, LocalDateTime, Year, Month, etc.
  • File and Path
  • URL and URI
  • Enum subclasses

5.2. Explicit Conversion

Sometimes we need to provide a custom and explicit converter for arguments.

Suppose we want to convert strings with the yyyy/mm/dd format to LocalDate instances. First off, we need to implement the ArgumentConverter interface:

class SlashyDateConverter implements ArgumentConverter {

    @Override
    public Object convert(Object source, ParameterContext context)
      throws ArgumentConversionException {
        if (!(source instanceof String)) {
            throw new IllegalArgumentException("The argument should be a string: " + source);
        }
        try {
            String[] parts = ((String) source).split("/");
            int year = Integer.parseInt(parts[0]);
            int month = Integer.parseInt(parts[1]);
            int day = Integer.parseInt(parts[2]);

            return LocalDate.of(year, month, day);
        } catch (Exception e) {
            throw new IllegalArgumentException("Failed to convert", e);
        }
    }
}

Then we should refer to the converter via the @ConvertWith annotation:

@ParameterizedTest
@CsvSource({"2018/12/25,2018", "2019/02/11,2019"})
void getYear_ShouldWorkAsExpected(
  @ConvertWith(SlashyDateConverter.class) LocalDate date, int expected) {
    assertEquals(expected, date.getYear());
}

6. Argument Accessor

By default, each argument provided to a parameterized test corresponds to a single method parameter. Consequently, when passing a handful of arguments via an argument source, the test method signature gets very large and messy.

One approach to address this issue is to encapsulate all passed arguments into an instance of ArgumentsAccessor and retrieve arguments by index and type.

For example, let’s consider our Person class:

class Person {

    String firstName;
    String middleName;
    String lastName;
    
    // constructor

    public String fullName() {
        if (middleName == null || middleName.trim().isEmpty()) {
            return String.format("%s %s", firstName, lastName);
        }

        return String.format("%s %s %s", firstName, middleName, lastName);
    }
}

Then, in order to test the fullName() method, we’ll pass four arguments: firstName, middleName, lastName, and the expected fullName. We can use the ArgumentsAccessor to retrieve the test arguments instead of declaring them as method parameters:

@ParameterizedTest
@CsvSource({"Isaac,,Newton,Isaac Newton", "Charles,Robert,Darwin,Charles Robert Darwin"})
void fullName_ShouldGenerateTheExpectedFullName(ArgumentsAccessor argumentsAccessor) {
    String firstName = argumentsAccessor.getString(0);
    String middleName = (String) argumentsAccessor.get(1);
    String lastName = argumentsAccessor.get(2, String.class);
    String expectedFullName = argumentsAccessor.getString(3);

    Person person = new Person(firstName, middleName, lastName);
    assertEquals(expectedFullName, person.fullName());
}

Here, we’re encapsulating all passed arguments into an ArgumentsAccessor instance and then, in the test method body, retrieving each passed argument with its index. In addition to just being an accessor, type conversion is supported through get* methods:

  • getString(index) retrieves an element at a specific index and converts it to String – the same is true for primitive types
  • get(index) simply retrieves an element at a specific index as an Object
  • get(index, type) retrieves an element at a specific index and converts it to the given type

7. Argument Aggregator

Using the ArgumentsAccessor abstraction directly may make the test code less readable or reusable. In order to address these issues, we can write a custom and reusable aggregator.

To do that, we implement the ArgumentsAggregator interface:

class PersonAggregator implements ArgumentsAggregator {

    @Override
    public Object aggregateArguments(ArgumentsAccessor accessor, ParameterContext context)
      throws ArgumentsAggregationException {
        return new Person(accessor.getString(1), accessor.getString(2), accessor.getString(3));
    }
}

And then we reference it via the @AggregateWith annotation:

@ParameterizedTest
@CsvSource({"Isaac Newton,Isaac,,Newton", "Charles Robert Darwin,Charles,Robert,Darwin"})
void fullName_ShouldGenerateTheExpectedFullName(
  String expectedFullName,
  @AggregateWith(PersonAggregator.class) Person person) {

    assertEquals(expectedFullName, person.fullName());
}

The PersonAggregator takes the last three arguments and instantiates a Person class out of them.

8. Customizing Display Names

By default, the display name for a parameterized test contains an invocation index along with a String representation of all passed arguments, something like:

├─ someMonths_Are30DaysLongCsv(Month)
│     │  ├─ [1] APRIL
│     │  ├─ [2] JUNE
│     │  ├─ [3] SEPTEMBER
│     │  └─ [4] NOVEMBER

However, we can customize this display via the name attribute of the @ParameterizedTest annotation:

@ParameterizedTest(name = "{index} {0} is 30 days long")
@EnumSource(value = Month.class, names = {"APRIL", "JUNE", "SEPTEMBER", "NOVEMBER"})
void someMonths_Are30DaysLong(Month month) {
    final boolean isALeapYear = false;
    assertEquals(30, month.length(isALeapYear));
}

“APRIL is 30 days long” surely is a more readable display name:

├─ someMonths_Are30DaysLong(Month)
│     │  ├─ 1 APRIL is 30 days long
│     │  ├─ 2 JUNE is 30 days long
│     │  ├─ 3 SEPTEMBER is 30 days long
│     │  └─ 4 NOVEMBER is 30 days long

The following placeholders are available when customizing the display name:

  • {index} will be replaced with the invocation index – simply put, the invocation index for the first execution is 1, for the second is 2, and so on
  • {arguments} is a placeholder for the complete, comma-separated list of arguments
  • {0}, {1}, ... are placeholders for individual arguments

9. Conclusion

In this article, we’ve explored the nuts and bolts of parameterized tests in JUnit 5.

We learned that parameterized tests are different from normal tests in two aspects: they’re annotated with the @ParameterizedTest, and they need a source for their declared arguments.

Also, by now, we should know that JUnit provides some facilities to convert the arguments to custom target types and to customize the test names.

As usual, the sample codes are available on our GitHub project, so make sure to check it out!

Finding Leap Years in Java


1. Overview

In this tutorial, we’ll demonstrate several ways to determine if a given year is a leap year in Java.

A leap year is a year that is divisible by 4, except for century years, which must also be divisible by 400. Thus, years that are divisible by 100 but not by 400 don’t qualify, even though they’re divisible by 4.

2. Using the Pre-Java-8 Calendar API

Since Java 1.1, the GregorianCalendar class allows us to check if a year is a leap year:

public boolean isLeapYear(int year);

As we might expect, this method returns true if the given year is a leap year and false for non-leap years.

Years in BC (Before Christ) need to be passed as negative values and are calculated as 1 – year. For example, the year 3 BC is represented as -2, since 1 – 3 = -2.
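
Here's a quick sketch of the method in action, following the assertion style used elsewhere in this article:

GregorianCalendar calendar = new GregorianCalendar();

assertTrue(calendar.isLeapYear(2020));  // divisible by 4, not a century year
assertFalse(calendar.isLeapYear(1900)); // divisible by 100 but not by 400
assertTrue(calendar.isLeapYear(2000));  // divisible by 400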

3. Using the Java 8+ Date/Time API

Java 8 introduced the java.time package with a much better Date and Time API.

The class Year in the java.time package has a static method to check if the given year is a leap year:

public static boolean isLeap(long year);

And it also has an instance method to do the same:

public boolean isLeap();
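
A quick sketch of both variants:

assertTrue(Year.isLeap(2020));      // static method
assertTrue(Year.of(2020).isLeap()); // instance method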

4. Using the Joda-Time API

The Joda-Time API is one of the most widely used third-party libraries among Java projects for date and time utilities. Since Java 8, this library has been in maintenance mode, as mentioned in the Joda-Time GitHub source repository.

There is no pre-defined API method to find a leap year in Joda-Time. However, we can use its LocalDate and Days classes to check for a leap year:

LocalDate localDate = new LocalDate(2020, 1, 31);
int numberOfDays = Days.daysBetween(localDate, localDate.plusYears(1)).getDays();

boolean isLeapYear = numberOfDays > 365;

5. Conclusion

In this tutorial, we’ve seen what a leap year is, the logic for finding it, and several Java APIs we can use to check for it.

As always, code snippets can be found over on GitHub.

Delete the Contents of a File in Java


1. Introduction

In this tutorial, we’ll see how to use Java to delete the contents of a file without deleting the file itself. Since there are many simple ways to do it, let’s explore them one by one.

2. Using PrintWriter

Java’s PrintWriter class extends the Writer class. It prints the formatted representation of objects to the text-output stream.

We’ll perform a simple test: let’s create a PrintWriter instance pointing to an existing file (merely opening the writer truncates the file’s existing content), close it right away, and then make sure the file length is zero:

new PrintWriter(FILE_PATH).close();
assertEquals(0, StreamUtils.getStringFromInputStream(new FileInputStream(FILE_PATH)).length());

Also, note that if we don’t need the PrintWriter object for further processing, this is the best option. However, if we need the PrintWriter object for further file operations, we can do this differently:

PrintWriter writer = new PrintWriter(FILE_PATH);
writer.print("");
// other operations
writer.close();

3. Using FileWriter

Java’s FileWriter is a standard Java IO API class that provides methods to write character-oriented data to a file.

Let’s now see how we can do the same operation using FileWriter:

new FileWriter(FILE_PATH, false).close();

Similarly, if we need the FileWriter object for further processing, we can assign it to a variable and update with an empty string.

4. Using FileOutputStream

Java’s FileOutputStream is an output stream used for writing byte data to a file.

Now, let’s delete the content of the file using FileOutputStream:

new FileOutputStream(FILE_PATH).close();

5. Using Apache Commons IO FileUtils

Apache Commons IO is a library that contains utility classes to help out with common IO problems. We can delete the content of the file using one of its utility classes – FileUtils.

To see how this works, let’s add the Apache Commons IO dependency to our pom.xml:

<dependency>
    <groupId>commons-io</groupId>
    <artifactId>commons-io</artifactId>
    <version>2.5</version>
</dependency>

After that, let’s take a quick example demonstrating deletion of file content:

FileUtils.write(new File(FILE_PATH), "", Charset.defaultCharset());

6. Using Java NIO Files

Java NIO File was introduced in JDK 7. It defines interfaces and classes to access files, file attributes, and file systems.

We can also delete the file contents using java.nio.file.Files:

BufferedWriter writer = Files.newBufferedWriter(Paths.get(FILE_PATH));
writer.write("");
writer.flush();

7. Using Java NIO FileChannel

Java NIO FileChannel is NIO’s implementation for connecting to a file. It also complements the standard Java IO package.

We can also delete the file contents using java.nio.channels.FileChannel:

FileChannel.open(Paths.get(FILE_PATH), StandardOpenOption.WRITE).truncate(0).close();

8. Using Guava

Guava is an open source Java-based library that provides utility methods to do I/O operations. Let’s see how to use the Guava API for deleting the file contents.

First, we need to add the Guava dependency in our pom.xml:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>22.0</version>
</dependency>

After that, let’s see a quick example to delete file contents using Guava:

File file = new File(FILE_PATH);
byte[] empty = new byte[0];
com.google.common.io.Files.write(empty, file);

9. Conclusion

To summarize, we’ve seen multiple ways to delete the content of a file without deleting the file itself.

The full implementation of this tutorial can be found over on GitHub.

Convert a Float to a Byte Array in Java


1. Overview

In this quick tutorial, we’ll explore a few examples of using Java to convert a float to a byte array and vice versa.

This is simple when we convert an int or a long to a byte array, as Java’s bitwise operators work only on integer types. However, for a float, we need to use another layer of conversion.

For instance, we can use APIs provided by the Float class or ByteBuffer class of java.nio package.

2. Float to Byte Array Conversion

As we know, the size of a float in Java is 32 bits, the same as an int. So we can use the floatToIntBits or floatToRawIntBits functions available in Java’s Float class, and then shift the bits to produce a byte array.

The difference between the two is that floatToRawIntBits also preserves Not-a-Number (NaN) values. Shifting the bits down to individual bytes relies on a technique called narrowing primitive conversion.

First, let’s have a look at the code using the Float class function:

public static byte[] floatToByteArray(float value) {
    int intBits =  Float.floatToIntBits(value);
    return new byte[] {
      (byte) (intBits >> 24), (byte) (intBits >> 16), (byte) (intBits >> 8), (byte) (intBits) };
}

Second, here’s a neat way to do the conversion using ByteBuffer:

ByteBuffer.allocate(4).putFloat(value).array();

3. Byte Array to Float Conversion

Let’s now convert a byte array into a float using the Float class function intBitsToFloat.

However, we need to first convert a byte array into int bits using the left shift:

public static float byteArrayToFloat(byte[] bytes) {
    int intBits = 
      bytes[0] << 24 | (bytes[1] & 0xFF) << 16 | (bytes[2] & 0xFF) << 8 | (bytes[3] & 0xFF);
    return Float.intBitsToFloat(intBits);  
}

Converting a byte array into a float using ByteBuffer is as simple as this:

ByteBuffer.wrap(bytes).getFloat();

4. Unit Testing

Let’s look at simple unit test cases for implementation:

@Test
public void givenAFloat_thenConvertToByteArray() {
    assertArrayEquals(new byte[] { 63, -116, -52, -51}, floatToByteArray(1.1f));
}

@Test
public void givenAByteArray_thenConvertToFloat() {
   assertEquals(1.1f, byteArrayToFloat(new byte[] { 63, -116, -52, -51}), 0);
}

5. Conclusion

We’ve seen different ways of converting a float to a byte array and vice versa.

Float class provides functions as a workaround for such conversion. However, ByteBuffer provides a neat way to do this. For this reason, I suggest using it wherever possible.

The complete source code of these implementations and unit test cases can be found in the GitHub project.


Java Weekly, Issue 265


Here we go…

1. Spring and Java

>> Writing Addons With TestProject [petrikainulainen.net]

A solid introduction on how to remove duplicate code from test suites by writing custom TestProject addons.

>> Bootiful Azure: To Production (6/6) [spring.io]

A nice wrap-up to the series on Spring Boot and Microsoft Azure, with several things to consider when deploying to production.

>> OpenJDK 11, tools of the trade [blog.frankel.ch]

A good round-up of everyday JDK commands and tools that any developer should learn. Very cool.

>> All You Need To Know About Testing Web Controllers with Spring Boot [reflectoring.io]

The title says it all.

>> How to map a PostgreSQL Range column type with JPA and Hibernate [vladmihalcea.com]

And a quick introduction to mapping the range column types supported out-of-the-box by the hibernate-types project.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

>> Using OAuth for a simple command line script to access Google’s data [martinfowler.com]

A clever adaptation of Google’s OAuth 2.0 for Mobile and Desktop Apps flow does the trick.

>> Increasing the Quality of Patient Care through Stream Processing [infoq.com]

An interesting proof-of-concept project using open-source tools to aggregate, sanitize, and enrich health data streams from multiple sources.

Also worth reading:

3. Comics

And my favorite Dilberts of the week:

>> Thankless Tasks [dilbert.com]

>> Making Excuses for Your Excuses [dilbert.com]

>> An Empty Vessel for Transporting Sarcasm [dilbert.com]

4. Pick of the Week

>> How I went from newbie to Software Engineer in 9 months while working full time [medium.freecodecamp.org]

Guide to XMPP Smack Client

1. Introduction

XMPP is a rich and complex instant messaging protocol.

Instead of writing our own client from scratch, in this tutorial, we’ll take a look at Smack, a modular and portable open source XMPP client written in Java that has done much of the heavy lifting for us.

2. Dependencies

Smack is organized as several modules to provide more flexibility, so we can easily include the features we need.

Some of these include:

  • XMPP over TCP module
  • A module to support many of the extensions defined by the XMPP Standards Foundation
  • Legacy extensions support
  • A module to debug

We can find all the supported modules in Smack’s documentation.

However, in this tutorial, we’ll just make use of the tcp, im, extensions, and java7 modules:

<dependency>
    <groupId>org.igniterealtime.smack</groupId>
    <artifactId>smack-tcp</artifactId>
</dependency>
<dependency>
    <groupId>org.igniterealtime.smack</groupId>
    <artifactId>smack-im</artifactId>
</dependency>
<dependency>
    <groupId>org.igniterealtime.smack</groupId>
    <artifactId>smack-extensions</artifactId>
</dependency>
<dependency>
    <groupId>org.igniterealtime.smack</groupId>
    <artifactId>smack-java7</artifactId>
</dependency>

The latest versions can be found at Maven Central.

3. Setup

In order to test the client, we’ll need an XMPP server. To do so, we’ll create an account on jabb3r.org, a free Jabber/XMPP service.

Afterward, we can configure Smack using the XMPPTCPConnectionConfiguration class that provides a builder to set up the connection’s parameters:

XMPPTCPConnectionConfiguration config = XMPPTCPConnectionConfiguration.builder()
  .setUsernameAndPassword("baeldung","baeldung")
  .setXmppDomain("jabb3r.org")
  .setHost("jabb3r.org")
  .build();

The builder allows us to set the basic information needed to perform a connection. If needed, we can also set other parameters such as port, SSL protocols, and timeouts.

4. Connection

Making a connection is simply achieved using the XMPPTCPConnection class:

AbstractXMPPConnection connection = new XMPPTCPConnection(config);
connection.connect(); //Establishes a connection to the server
connection.login(); //Logs in

The class contains a constructor that accepts the configuration previously built. It also provides methods to connect to the server and log in.

Once a connection has been established, we can use Smack’s features, like chat, that we’ll describe in the next section.

In the event that the connection was suddenly interrupted, by default, Smack will try to reconnect.

The ReconnectionManager will try to immediately reconnect to the server and increase the delay between attempts as successive reconnections keep failing.
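
As a minimal sketch, assuming Smack’s ReconnectionManager API, we can also enable this behavior explicitly for our connection:

ReconnectionManager reconnectionManager = ReconnectionManager.getInstanceFor(connection);
reconnectionManager.enableAutomaticReconnection();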

5. Chat

One of the major features of the library is chat support.

Using the Chat class makes it possible to create a new thread of messages between two users:

ChatManager chatManager = ChatManager.getInstanceFor(connection);
EntityBareJid jid = JidCreate.entityBareFrom("baeldung2@jabb3r.org");
Chat chat = chatManager.chatWith(jid);

Note that, to build a Chat, we used a ChatManager and, obviously, specified who to chat with. We achieved the latter by using the EntityBareJid object, which wraps an XMPP address — aka a JID — composed of a local part (baeldung2) and a domain part (jabb3r.org).

After that, we can send a message using the send() method:

chat.send("Hello!");

And receive messages by setting a listener:

chatManager.addIncomingListener(new IncomingChatMessageListener() {
  @Override
  public void newIncomingMessage(EntityBareJid from, Message message, Chat chat) {
      System.out.println("New message from " + from + ": " + message.getBody());
  }
});

5.1. Rooms

As well as end-to-end user chat, Smack provides support for group chats through the use of rooms.

There are two types of rooms: instant rooms and reserved rooms.

Instant rooms are available for immediate access and are automatically created based on some default configuration. On the other hand, reserved rooms are manually configured by the room owner before anyone is allowed to enter.

Let’s have a look at how to create an instant room using MultiUserChatManager:

MultiUserChatManager manager = MultiUserChatManager.getInstanceFor(connection);
MultiUserChat muc = manager.getMultiUserChat(jid);
Resourcepart room = Resourcepart.from("baeldung_room");
muc.create(room).makeInstant();

In a similar fashion we can create a reserved room:

Set<Jid> owners = JidUtil.jidSetFrom(
  new String[] { "baeldung@jabb3r.org", "baeldung2@jabb3r.org" });

muc.create(room)
  .getConfigFormManager()
  .setRoomOwners(owners)
  .submitConfigurationForm();

6. Roster

Another feature that Smack provides is the possibility to track the presence of other users.

With Roster.getInstanceFor(), we can obtain a Roster instance:

Roster roster = Roster.getInstanceFor(connection);

The Roster is a contact list that represents the users as RosterEntry objects and allows us to organize users into groups.

We can print all entries in the Roster using the getEntries() method:

Collection<RosterEntry> entries = roster.getEntries();
for (RosterEntry entry : entries) {
    System.out.println(entry);
}
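
And, as a small sketch of the grouping side (assuming Smack’s RosterGroup API), we could file a contact under a named group:

RosterGroup friends = roster.createGroup("Friends");
friends.addEntry(roster.getEntry(jid)); // assumes jid already has a roster entry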

Moreover, it allows us to listen for changes in its entries and presence data with a RosterListener:

roster.addRosterListener(new RosterListener() {
    public void entriesAdded(Collection<String> addresses) { /* handle new entries */ }
    public void entriesDeleted(Collection<String> addresses) { /* handle deleted entries */ }
    public void entriesUpdated(Collection<String> addresses) { /* handle updated entries */ }
    public void presenceChanged(Presence presence) { /* handle presence change */ }
});

It also provides a way to protect a user’s privacy by making sure that only approved users are able to subscribe to a roster. To do so, Smack implements a permissions-based model.

There are three ways to handle presence subscription requests with the Roster.setSubscriptionMode() method:

  • Roster.SubscriptionMode.accept_all – Accept all subscription requests
  • Roster.SubscriptionMode.reject_all – Reject all subscription requests
  • Roster.SubscriptionMode.manual – Process presence subscription requests manually

If we choose to handle subscription requests manually, we’ll need to register a StanzaListener (described in the next section) and handle packets with the Presence.Type.subscribe type.
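
For instance, here’s a minimal sketch, using the modes listed above, of switching a roster to manual handling:

Roster roster = Roster.getInstanceFor(connection);
roster.setSubscriptionMode(Roster.SubscriptionMode.manual);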

7. Stanza

In addition to the chat, Smack provides a flexible framework to send stanzas and listen for incoming ones.

To clarify, a stanza is a discrete semantic unit of meaning in XMPP. It is structured information that is sent from one entity to another over an XML stream.

We can transmit a Stanza through a Connection using the sendStanza() method:

Stanza presence = new Presence(Presence.Type.subscribe);
connection.sendStanza(presence);

In the example above, we sent a Presence stanza to subscribe to a roster.

On the other hand, to process the incoming stanzas, the library provides two constructs:

  • StanzaCollector 
  • StanzaListener

In particular, StanzaCollector lets us wait synchronously for new stanzas:

StanzaCollector collector
  = connection.createStanzaCollector(StanzaTypeFilter.MESSAGE);
Stanza stanza = collector.nextResult();

While StanzaListener is an interface for asynchronously notifying us of incoming stanzas:

connection.addAsyncStanzaListener(new StanzaListener() {
    public void processStanza(Stanza stanza) 
      throws SmackException.NotConnectedException,InterruptedException, 
        SmackException.NotLoggedInException {
            // handle stanza
        }
}, StanzaTypeFilter.MESSAGE);

7.1. Filters

Moreover, the library provides a built-in set of filters to process incoming stanzas.

We can filter stanzas by type using StanzaTypeFilter or by ID with StanzaIdFilter:

StanzaFilter messageFilter = StanzaTypeFilter.MESSAGE;
StanzaFilter idFilter = new StanzaIdFilter("123456");

Or we can discern by a particular address:

StanzaFilter fromFilter
  = FromMatchesFilter.create(JidCreate.from("baeldung@jabb3r.org"));
StanzaFilter toFilter
  = ToMatchesFilter.create(JidCreate.from("baeldung2@jabb3r.org"));

And we can use the logical filter operators (AndFilter, OrFilter, NotFilter) to create complex filters:

StanzaFilter filter
  = new AndFilter(StanzaTypeFilter.MESSAGE, FromMatchesFilter.create(JidCreate.from("baeldung@jabb3r.org")));

8. Conclusion

In this article, we covered the most useful classes that Smack provides off the shelf.

We learned how to configure the library in order to send and receive XMPP stanza.

Subsequently, we learned how to handle chats with ChatManager, group rooms with MultiUserChatManager, and presence with the Roster.

As usual, all code samples shown in this tutorial are available over on GitHub.

Configuring a DataSource Programmatically in Spring Boot

1. Overview

Spring Boot uses an opinionated algorithm to scan for and configure a DataSource. This allows us to easily get a fully-configured DataSource implementation by default.

In addition, Spring Boot automatically configures a lightning-fast connection pool — either HikariCP, Apache Tomcat, or Commons DBCP, in that order, depending on which are on the classpath.

While Spring Boot’s automatic DataSource configuration works very well in most cases, sometimes we’ll need a higher level of control, so we’ll have to set up our own DataSource implementation, hence skipping the automatic configuration process.

In this tutorial, we’ll learn how to configure a DataSource programmatically in Spring Boot.

2. The Maven Dependencies

Creating a DataSource implementation programmatically is straightforward, overall.

To learn how to accomplish this, we’ll implement a simple repository layer, which will perform CRUD operations on some JPA entities.

Let’s take a look at our demo project’s dependencies:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <version>2.4.1</version> 
    <scope>runtime</scope> 
</dependency>

As shown above, we’ll use an in-memory H2 database instance to exercise the repository layer. In doing so, we’ll be able to test our programmatically-configured DataSource, without the cost of performing expensive database operations.

In addition, let’s make sure to check the latest version of spring-boot-starter-data-jpa on Maven Central.

3. Configuring a DataSource Programmatically

Now, if we stick with Spring Boot’s automatic DataSource configuration and run our project in its current state, it will just work as expected.

Spring Boot will do all the heavy infrastructure plumbing for us. This includes creating an H2 DataSource implementation, which will be automatically handled by HikariCP, Apache Tomcat, or Commons DBCP, and setting up an in-memory database instance.

Additionally, we won’t even need to create an application.properties file, as Spring Boot will provide some default database settings as well.

As we mentioned before, at times we’ll need a higher level of customization, hence we’ll have to configure our own DataSource implementation programmatically.

The simplest way to accomplish this is by defining a DataSource factory method, and placing it within a class annotated with the @Configuration annotation:

@Configuration
public class DataSourceConfig {
    
    @Bean
    public DataSource getDataSource() {
        DataSourceBuilder dataSourceBuilder = DataSourceBuilder.create();
        dataSourceBuilder.driverClassName("org.h2.Driver");
        dataSourceBuilder.url("jdbc:h2:mem:test");
        dataSourceBuilder.username("SA");
        dataSourceBuilder.password("");
        return dataSourceBuilder.build();
    }
}

In this case, we used the convenience DataSourceBuilder class — a non-fluent version of Joshua Bloch’s builder pattern — to programmatically create our custom DataSource object.

This approach is really nice because the builder makes it easy to configure a DataSource using some common properties. Additionally, it uses the underlying connection pool as well.

4. Externalizing DataSource Configuration with the application.properties File

Of course, it’s also possible to partially externalize our DataSource configuration. For instance, we could define some basic DataSource properties in our factory method:

@Bean 
public DataSource getDataSource() { 
    DataSourceBuilder dataSourceBuilder = DataSourceBuilder.create(); 
    dataSourceBuilder.username("SA"); 
    dataSourceBuilder.password(""); 
    return dataSourceBuilder.build(); 
}

And specify a few additional ones in the application.properties file:

spring.datasource.url=jdbc:h2:mem:test
spring.datasource.driver-class-name=org.h2.Driver

The properties defined in an external source, such as the above application.properties file or via a class annotated with @ConfigurationProperties, will override the ones defined in the Java API.

It becomes evident that, with this approach, we’ll no longer keep our DataSource configuration settings stored in one single place.

On the other hand, it allows us to keep compile-time and run-time configuration settings nicely separated from each other.

This is really good, as it allows us to easily set a configuration binding point. That way we can include different DataSource settings from other sources, without having to refactor our bean factory methods.
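
For instance, as a minimal sketch using a hypothetical app.datasource prefix, the whole factory method can delegate its settings to the external configuration (the property names must match the pool’s setters, e.g. app.datasource.jdbc-url for Hikari):

@Bean
@ConfigurationProperties(prefix = "app.datasource")
public DataSource getDataSource() {
    // properties under app.datasource.* are bound onto the returned pool's setters
    return DataSourceBuilder.create().build();
}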

5. Testing the DataSource Configuration

Testing our custom DataSource configuration is very simple. The whole process just boils down to creating a JPA entity, defining a basic repository interface, and testing the repository layer.

5.1. Creating a JPA Entity

Let’s start defining our sample JPA entity class, which will model users:

@Entity
@Table(name = "users")
public class User {
    
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private long id;
    private String name;
    private String email;

    // standard constructors / setters / getters / toString
    
}

5.2. A Simple Repository Layer

Next, we need to implement a basic repository layer, which allows us to perform CRUD operations on instances of the User entity class defined above.

Since we’re using Spring Data JPA, we don’t have to create our own DAO implementation from scratch. We simply have to extend the CrudRepository interface to get a working repository implementation:

@Repository
public interface UserRepository extends CrudRepository<User, Long> {}

5.3. Testing the Repository Layer

Lastly, we need to check that our programmatically-configured DataSource is actually working. We can easily accomplish this with an integration test:

@RunWith(SpringRunner.class)
@DataJpaTest
public class UserRepositoryIntegrationTest {
    
    @Autowired
    private UserRepository userRepository;
   
    @Test
    public void whenCalledSave_thenCorrectNumberOfUsers() {
        userRepository.save(new User("Bob", "bob@domain.com"));
        List<User> users = (List<User>) userRepository.findAll();
        
        assertThat(users.size()).isEqualTo(1);
    }    
}

The UserRepositoryIntegrationTest class is pretty self-explanatory. It simply exercises two of the repository interface’s CRUD methods to persist and find entities.

Notice that regardless of whether we decide to programmatically configure our DataSource implementation, or split it into a Java config method and the application.properties file, we should always get a working database connection.

5.4. Running the Sample Application

Finally, we can run our demo application using a standard main() method:

@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    @Bean
    public CommandLineRunner run(UserRepository userRepository) throws Exception {
        return (String[] args) -> {
            User user1 = new User("John", "john@domain.com");
            User user2 = new User("Julie", "julie@domain.com");
            userRepository.save(user1);
            userRepository.save(user2);
            userRepository.findAll().forEach(user -> System.out.println(user));
        };
    }
}

We already tested the repository layer, so we’re sure that our DataSource has been configured successfully. Thus, if we run the sample application, we should see in our console output the list of User entities stored in the database.

6. Conclusion

In this tutorial, we learned how to configure a DataSource implementation programmatically in Spring Boot.

As usual, all the code samples shown in this tutorial are available over on GitHub.

Blade – A Complete Guidebook

1. Overview

Blade is a tiny Java 8+ MVC framework, built from scratch with some clear goals in mind: to be self-contained, productive, elegant, intuitive, and super fast.

Many different frameworks inspired its design: Node’s Express, Python’s Flask, and Golang’s Macaron / Martini.

Blade is also part of an ambitiously larger project, Let’s Blade. It includes a heterogeneous collection of other small libraries, from Captcha generation to JSON conversion, from templating to a simple database connection.

However, in this tutorial, we’ll focus on the MVC only.

2. Getting Started

First of all, let’s create an empty Maven project and add the latest Blade MVC dependency in the pom.xml:

<dependency>
    <groupId>com.bladejava</groupId>
    <artifactId>blade-mvc</artifactId>
    <version>2.0.14.RELEASE</version>
</dependency>

2.1. Bundling a Blade Application

Since our app will be created as a JAR, it won’t have a /lib folder, like in a WAR. As a result, we face the problem of how to provide the blade-mvc JAR, along with any other JARs we might need, to our app.

The different ways of doing this, each one with pros and cons, are explained in the How to Create an Executable JAR with Maven tutorial.

For simplicity, we’ll use the Maven Assembly Plugin technique, which explodes any JAR imported in the pom.xml and subsequently bundles all the classes together in a single uber-JAR.

2.2. Running a Blade Application

Blade is based upon Netty, an amazing asynchronous event-driven network application framework. Therefore, to run our Blade-based application we don’t need any external Application Server or Servlet Container; the JRE will be enough:

java -jar target/sample-blade-app.jar

After that, the application will be accessible at the http://localhost:9000 URL.

3. Understanding the Architecture

The architecture of Blade is very straightforward. It always follows the same life cycle:

  1. Netty receives a request
  2. Middlewares are executed (optional)
  3. WebHooks are executed (optional)
  4. Routing is performed
  5. The response is sent to the client
  6. Cleanup

We’ll explore the above functions in the next sections.

4. Routing

In short, routing in MVC is the mechanism used to create a binding between a URL and a Controller.

Blade provides two types of routes: a basic one and an annotated one.

4.1. Basic Routes

Basic routes are intended for very small software, like microservices or minimal web applications:

Blade.of()
  .get("/basic-routes-example", ctx -> ctx.text("GET called"))
  .post("/basic-routes-example", ctx -> ctx.text("POST called"))
  .put("/basic-routes-example", ctx -> ctx.text("PUT called"))
  .delete("/basic-routes-example", ctx -> ctx.text("DELETE called"))
  .start(App.class, args);

The name of the method used to register a route corresponds to the HTTP verb that will be used to forward the request. As simple as that.

In this case, we’re returning a text, but we can also render pages, as we’ll see later in this tutorial.

4.2. Annotated Routes

Certainly, for more realistic use cases we can define all the routes we need using annotations. We should use separate classes for that.

First of all, we need to create a Controller through the @Path annotation, which will be scanned by Blade during the startup.

We then need to use the route annotation related to the HTTP method we want to intercept:

@Path
public class RouteExampleController {    
    
    @GetRoute("/routes-example") 
    public String get(){ 
        return "get.html"; 
    }
    
    @PostRoute("/routes-example") 
    public String post(){ 
        return "post.html"; 
    }
    
    @PutRoute("/routes-example") 
    public String put(){ 
        return "put.html"; 
    }
    
    @DeleteRoute("/routes-example") 
    public String delete(){ 
        return "delete.html"; 
    }
}

We can also use the simple @Route annotation and specify the HTTP method as a parameter:

@Route(value="/another-route-example", method=HttpMethod.GET) 
public String anotherGet(){ 
    return "get.html" ; 
}

On the other hand, if we don’t put any method parameter, the route will intercept every HTTP call to that URL, no matter the verb.

4.3. Parameter Injection

There are several ways to pass parameters to our routes. Let’s explore them with some examples from the documentation.

  • Form parameter:
@GetRoute("/home")
public void formParam(@Param String name){
    System.out.println("name: " + name);
}
  • Restful parameter:
@GetRoute("/users/:uid")
public void restfulParam(@PathParam Integer uid){
    System.out.println("uid: " + uid);
}
  • File upload parameter:
@PostRoute("/upload")
public void fileParam(@MultipartParam FileItem fileItem){
    byte[] file = fileItem.getData();
}
  • Header parameter:
@GetRoute("/header")
public void headerParam(@HeaderParam String referer){
    System.out.println("Referer: " + referer);
}
  • Cookie parameter:
@GetRoute("/cookie")
public void cookieParam(@CookieParam String myCookie){
    System.out.println("myCookie: " + myCookie);
}
  • Body parameter:
@PostRoute("/bodyParam")
public void bodyParam(@BodyParam User user){
    System.out.println("user: " + user.toString());
}
  • Value Object parameter, called by sending its attributes to the route:
@PostRoute("/voParam")
public void voParam(@Param User user){
    System.out.println("user: " + user.toString());
}
<form method="post">
    <input type="text" name="age"/>
    <input type="text" name="name"/>
</form>

5. Static Resources

Blade can also serve static resources if needed, by simply putting them inside the /resources/static folder.

For example, the src/main/resources/static/app.css will be available at http://localhost:9000/static/app.css.

5.1. Customizing the Paths

We can tune this behavior by adding one or more static paths programmatically:

blade.addStatics("/custom-static");

The same result is obtainable through configuration, by editing the file src/main/resources/application.properties:

mvc.statics=/custom-static

5.2. Enabling the Resources Listing

We can allow the listing of a static folder’s content, a feature turned off by default for security reasons:

blade.showFileList(true);

Or in the configuration:

mvc.statics.show-list=true

We can now open http://localhost:9000/custom-static/ to list the content of the folder.

5.3. Using WebJars

As seen in the Introduction to WebJars tutorial, static resources packaged as JAR are also a viable option.

Blade exposes them automatically under the /webjars/ path.

For instance, let’s import Bootstrap in the pom.xml:

<dependency>
    <groupId>org.webjars</groupId>
    <artifactId>bootstrap</artifactId>
    <version>4.2.1</version>
</dependency>

As a result, it’ll be available under http://localhost:9000/webjars/bootstrap/4.2.1/css/bootstrap.css

6. HTTP Request

Since Blade is not based on the Servlet Specification, objects like its interface Request and its class HttpRequest are slightly different from the ones we’re used to.

6.1. Form Parameters

When reading form parameters, Blade makes great use of Java’s Optional in the results of the query methods (all methods below return an Optional object):

  • query(String name)
  • queryInt(String name)
  • queryLong(String name)
  • queryDouble(String name)

They’re also available with a fallback value:

  • String query(String name, String defaultValue)
  • int queryInt(String name, int defaultValue)
  • long queryLong(String name, long defaultValue)
  • double queryDouble(String name, double defaultValue)

We can read a form parameter through the automapped property:

@PostRoute("/save")
public void formParams(@Param String username){
    // ...
}

Or from the Request object:

@PostRoute("/save")
public void formParams(Request request){
    String username = request.query("username", "Baeldung");
}

6.2. JSON Data

Let’s now take a look at how a JSON object can be mapped to a POJO:

curl -X POST http://localhost:9000/users -H 'Content-Type: application/json' \ 
  -d '{"name":"Baeldung","site":"baeldung.com"}'

POJO (annotated with Lombok for readability):

public class User {
    @Getter @Setter private String name;
    @Getter @Setter private String site;
}

Again, the value is available as the injected property:

@PostRoute("/users")
public void bodyParams(@BodyParam User user){
    // ...
}

And from the Request:

@PostRoute("/users")
public void bodyParams(Request request) {
    String bodyString = request.bodyToString();
}

6.3. RESTful Parameters

RESTful parameters in pretty URLs like localhost:9000/user/42 are also first-class citizens:

@GetRoute("/user/:id")
public void user(@PathParam Integer id){
    // ...
}

As usual, we can rely on the Request object when needed:

@GetRoute("/user")
public void user(Request request){
    Integer id = request.pathInt("id");
}

Obviously, the same method is available for Long and String types too.

6.4. Data Binding

Blade supports both JSON and Form binding parameters and attaches them to the model object automatically:

@PostRoute("/users")
public void bodyParams(User user){}

6.5. Request and Session Attributes

The API for reading and writing objects in a Request and a Session is crystal clear.

The methods with two parameters, representing key and value, are the mutators we can use to store our values in the different contexts:

Session session = request.session();
request.attribute("request-val", "Some Request value");
session.attribute("session-val", 1337);

On the other hand, the same methods accepting only the key parameter are the accessors:

String requestVal = request.attribute("request-val");
Integer sessionVal = session.attribute("session-val"); // it's an Integer, returned without a cast

An interesting feature is their generic return type <T> T, which saves us from having to cast the result.

6.6. Headers

Request headers, on the contrary, can only be read from the request:

String header1 = request.header("a-header");
String header2 = request.header("a-safe-header", "with a default value");
Map<String, String> allHeaders = request.headers();

6.7. Utilities

The following utility methods are also available out of the box, and they’re so self-evident that they don’t need further explanation:

  • boolean isIE()
  • boolean isAjax()
  • String contentType()
  • String userAgent()

6.8. Reading Cookies

Let’s see how the Request object helps us deal with Cookies, specifically when reading the Optional<Cookie>:

Optional<Cookie> cookieRaw(String name);

We can also get it as a String by specifying a default value to apply if a Cookie doesn’t exist:

String cookie(String name, String defaultValue);

Finally, this is how we can read all the Cookies at once (keys are Cookies’ names, values are Cookies’ values):

Map<String, String> cookies = request.cookies();

7. HTTP Response

Analogous to what was done with the Request, we can obtain a reference to the Response object by simply declaring it as a parameter of the routing method:

@GetRoute("/")
public void home(Response response) {}

7.1. Simple Output

We can easily send a simple output to the caller through one of the handy output methods, along with a 200 HTTP code and the appropriate Content-Type.

Firstly, we can send a plain text:

response.text("Hello World!");

Secondly, we can produce an HTML:

response.html("<h1>Hello World!</h1>");

Thirdly, we can likewise generate an XML:

response.xml("<Msg>Hello World!</Msg>");

Finally, we can output JSON using a String:

response.json("{\"The Answer\":42}");

And even from a POJO, exploiting the automatic JSON conversion:

User user = new User("Baeldung", "baeldung.com"); 
response.json(user);

7.2. File Output

Downloading a file from the server couldn’t be leaner:

response.download("the-file.txt", "/path/to/the/file.txt");

The first parameter sets the name of the file that will be downloaded, while the second one (a File object, here constructed with a String) represents the path to the actual file on the server.

7.3. Template Rendering

Blade can also render pages through a template engine:

response.render("admin/users.html");

The templates default directory is src/main/resources/templates/, hence the previous one-liner will look for the file src/main/resources/templates/admin/users.html.

We’ll learn more about this later, in the Templating section.

7.4. Redirect

A redirection means sending a 302 HTTP code to the browser, along with a URL to follow with a second GET.

We can redirect to another route, or to an external URL as well:

response.redirect("/target-route");

7.5. Writing Cookies

We should be used to the simplicity of Blade at this point. Let’s thus see how we can write an unexpiring Cookie in a single line of code:

response.cookie("cookie-name", "Some value here");

Indeed, removing a Cookie is equally simple:

response.removeCookie("cookie-name");

7.6. Other Operations

Finally, the Response object provides us with several other methods to perform operations like writing Headers, setting the Content-Type, setting the Status code, and so on.

Let’s take a quick look at some of them:

  • Response status(int status)
  • Map headers()
  • Response notFound()
  • Map cookies()
  • Response contentType(String contentType)
  • void body(@NonNull byte[] data)
  • Response header(String name, String value)
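
As a quick sketch, relying only on the fluent Response returns listed above, several of these can be chained (the header name here is just an illustration):

response.status(201)
  .contentType("application/json")
  .header("X-Custom-Header", "some value");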

8. WebHooks

A WebHook is an interceptor through which we can run code before and after the execution of a routing method.

We can create a WebHook by simply implementing the WebHook functional interface and overriding the before() method:

@FunctionalInterface
public interface WebHook {

    boolean before(RouteContext ctx);

    default boolean after(RouteContext ctx) {
        return true;
    }
}

As we can see, after() is a default method, hence we’ll override it only when needed.

8.1. Intercepting Every Request

The @Bean annotation tells the framework to pick up the class and register it with the IoC Container.

A WebHook annotated with it will consequently work globally, intercepting requests to every URL:

@Bean
public class BaeldungHook implements WebHook {

    @Override
    public boolean before(RouteContext ctx) {
        System.out.println("[BaeldungHook] called before Route method");
        return true;
    }
}

8.2. Narrowing to a URL

We can also intercept specific URLs, to execute code around those route methods only:

Blade.of()
  .before("/user/*", ctx -> System.out.println("Before: " + ctx.uri()))
  .start(App.class, args);

8.3. Middlewares

Middlewares are prioritized WebHooks, which get executed before any standard WebHook:

public class BaeldungMiddleware implements WebHook {

    @Override
    public boolean before(RouteContext context) {
        System.out.println("[BaeldungMiddleware] called before Route method and other WebHooks");
        return true;
    }
}

They simply need to be defined without the @Bean annotation, and then registered declaratively through use():

Blade.of()
  .use(new BaeldungMiddleware())
  .start(App.class, args);

In addition, Blade comes with several security-related built-in Middlewares, whose names should be self-explanatory.

9. Configuration

In Blade, the configuration is totally optional, because everything works out-of-the-box by convention. However, we can customize the default settings, and introduce new attributes, inside the src/main/resources/application.properties file.

9.1. Reading the Configuration

We can read the configuration in different ways, with or without specifying a default value in case the setting is not available.

  • During startup:
Blade.of()
  .on(EventType.SERVER_STARTED, e -> {
      Optional<String> version = WebContext.blade().env("app.version");
  })
  .start(App.class, args);
  • Inside a route:
@GetRoute("/some-route")
public void someRoute(){
    String authors = WebContext.blade().env("app.authors","Unknown authors");
}
  • In a custom loader, by implementing the BladeLoader interface, overriding the load() method, and annotating the class with @Bean:
@Bean
public class LoadConfig implements BladeLoader {

    @Override
    public void load(Blade blade) {
        Optional<String> version = WebContext.blade().env("app.version");
        String authors = WebContext.blade().env("app.authors","Unknown authors");
    }
}

9.2. Configuration Attributes

The several settings already configured, but ready to be customized, are grouped by type and listed at this address in three-column tables (name, description, default value). We can also refer to the translated page, paying attention to the fact that the translation erroneously capitalizes the settings’ names. The real settings are fully lowercase.

Grouping configuration settings by prefix makes them readable all at once into a map, which is useful when there are many of them:

Environment environment = blade.environment();
Map<String, Object> map = environment.getPrefix("app");
String version = map.get("version").toString();
String authors = map.get("authors","Unknown authors").toString();

9.3. Handling Multiple Environments

When deploying our app to a different environment, we might need to specify different settings, for example the ones related to the database connection. Instead of manually replacing the application.properties file, Blade offers us a way to configure the app for different environments. We can simply keep application.properties with all the development settings, and then create other files in the same folder, like application-prod.properties, containing only the settings that differ.

During the startup, we can then specify the environment we want to use, and the framework will merge the files by using the most specific settings from application-prod.properties, and all the other settings from the default application.properties file:

java -jar target/sample-blade-app.jar --app.env=prod

10. Templating

Templating in Blade is a modular aspect. While it integrates a very basic template engine, for any professional use of the Views we should rely on an external template engine. We can then choose an engine from the ones available in the blade-template-engines repository on GitHub, which are FreeMarker, Jetbrick, Pebble, and Velocity, or even create a wrapper to plug in another template engine we like.

Blade’s author suggests Jetbrick, another smart Chinese project.

10.1. Using the Default Engine

The default template works by parsing variables from different contexts through the ${} notation:

<h1>Hello, ${name}!</h1>

10.2. Plugging in an External Engine

Switching to a different template engine is a breeze! We simply import the dependency of (the Blade wrapper of) the engine:

<dependency>
    <groupId>com.bladejava</groupId>
    <artifactId>blade-template-jetbrick</artifactId>
    <version>0.1.3</version>
</dependency>

At this point, it’s enough to write a simple configuration to instruct the framework to use that library:

@Bean
public class TemplateConfig implements BladeLoader {

    @Override
    public void load(Blade blade) {
        blade.templateEngine(new JetbrickTemplateEngine());
    }
}

As a result, now every file under src/main/resources/templates/ will be parsed with the new engine, whose syntax is beyond the scope of this tutorial.

10.3. Wrapping a New Engine

Wrapping a new template engine requires creating a single class, which must implement the TemplateEngine interface and override the render() method:

void render (ModelAndView modelAndView, Writer writer) throws TemplateException;

For this purpose, we can take a look at the code of the actual Jetbrick wrapper to get an idea of what that means.
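
A bare-bones skeleton (ours, with the engine-specific work left as comments, and MyTemplateEngine being a hypothetical name) could look like this:

public class MyTemplateEngine implements TemplateEngine {

    @Override
    public void render(ModelAndView modelAndView, Writer writer) throws TemplateException {
        // resolve the template file from the view name held by modelAndView,
        // evaluate it against the model, and write the produced markup to the writer
    }
}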

11. Logging

Blade uses slf4j-api as its logging interface.

It also includes an already configured logging implementation, called blade-log. Therefore, we don’t need to import anything; it works as is, by simply defining a Logger:

private static final org.slf4j.Logger log = org.slf4j.LoggerFactory.getLogger(LogExample.class);

11.1. Customizing the Integrated Logger

In case we want to modify the default configuration, we need to tune the following parameters as System Properties:

  • Logging levels (can be “trace”, “debug”, “info”, “warn”, or “error”):
# Root Logger
com.blade.logger.rootLevel=info

# Package Custom Logging Level
com.blade.logger.somepackage=debug

# Class Custom Logging Level
com.blade.logger.com.baeldung.sample.SomeClass=trace
  • Displayed information:
# Date and Time
com.blade.logger.showDate=false

# Date and Time Pattern
com.blade.logger.datePattern=yyyy-MM-dd HH:mm:ss:SSS Z

# Thread Name
com.blade.logger.showThread=true

# Logger Instance Name
com.blade.logger.showLogName=true

# Only the Last Part of FQCN
com.blade.logger.shortName=true
  • Logger:
# Path 
com.blade.logger.dir=./logs

# Name (it defaults to the current app.name)
com.blade.logger.name=sample

11.2. Excluding the Integrated Logger

Although having an integrated logger already configured is very handy to start our small project, we might easily end up in the case where other libraries import their own logging implementation. And, in that case, we’re able to remove the integrated one in order to avoid conflicts:

<dependency>
    <groupId>com.bladejava</groupId>
    <artifactId>blade-mvc</artifactId>
    <version>${blade.version}</version>
    <exclusions>
        <exclusion>
            <groupId>com.bladejava</groupId>
            <artifactId>blade-log</artifactId>
        </exclusion>
    </exclusions>
</dependency>

12. Customizations

12.1. Custom Exception Handling

An Exception Handler is built into the framework by default. It prints the exception to the console and, if app.devMode is true, the stack trace is also visible on the webpage.

However, we can handle an Exception in a specific way by defining a @Bean extending the DefaultExceptionHandler class:

@Bean
public class GlobalExceptionHandler extends DefaultExceptionHandler {

    @Override
    public void handle(Exception e) {
        if (e instanceof BaeldungException) {
            BaeldungException baeldungException = (BaeldungException) e;
            String msg = baeldungException.getMessage();
            WebContext.response().json(RestResponse.fail(msg));
        } else {
            super.handle(e);
        }
    }
}

12.2. Custom Error Pages

Similarly, the errors 404 – Not Found and 500 – Internal Server Error are handled through skinny default pages.

We can force the framework to use our own pages by declaring them in the application.properties file with the following settings:

mvc.view.404=my-404.html
mvc.view.500=my-500.html

Certainly, those HTML pages must be placed under the src/main/resources/templates folder.

Within the 500 one, we can moreover retrieve the exception message and the stackTrace through their special variables:

<!DOCTYPE html>
<html>
    <head>
        <meta charset="utf-8">
        <title>500 Internal Server Error</title>
    </head>
    <body>
        <h1> Custom Error 500 Page </h1>
        <p> The following error occurred: "<strong>${message}</strong>"</p>
        <pre> ${stackTrace} </pre>
    </body>
</html>

13. Scheduled Tasks

Another interesting feature of the framework is the possibility of scheduling the execution of a method.

That’s possible by annotating the method of a @Bean class with the @Schedule annotation:

@Bean
public class ScheduleExample {

    @Schedule(name = "baeldungTask", cron = "0 */1 * * * ?")
    public void runScheduledTask() {
        System.out.println("This is a scheduled Task running once per minute.");
    }
}

Indeed, it uses the classical cron expressions to specify the DateTime coordinates. We can read more about those in A Guide to Cron Expressions.

Later on, we might exploit the static methods of the TaskManager class to perform operations on the scheduled tasks.

  • Get all the scheduled tasks:
List<Task> allScheduledTasks = TaskManager.getTasks();
  • Get a task by name:
Task myTask = TaskManager.getTask("baeldungTask");
  • Stop a task by name:
boolean closed = TaskManager.stopTask("baeldungTask");

14. Events

As already seen in section 9.1, it’s possible to listen for a specified event before running some custom code.

Blade provides the following events out of the box:

public enum EventType {
    SERVER_STARTING,
    SERVER_STARTED,
    SERVER_STOPPING,
    SERVER_STOPPED,
    SESSION_CREATED,
    SESSION_DESTROY,
    SOURCE_CHANGED,
    ENVIRONMENT_CHANGED
}

While the first six are easy to guess, the last two need some hints: ENVIRONMENT_CHANGED allows us to perform an action if a configuration file changes when the server is up. SOURCE_CHANGED, instead, is not yet implemented and is there for future use only.

Let’s see how we can put a value in the session whenever it’s created:

Blade.of()
  .on(EventType.SESSION_CREATED, e -> {
      Session session = (Session) e.attribute("session");
      session.attribute("name", "Baeldung");
  })
  .start(App.class, args);

15. Session Implementation

Talking about the session, its default implementation stores session values in-memory.

We might, thus, want to switch to a different implementation to provide cache, persistence, or something else. Let’s take Redis, for example. We’d first need to create our RedisSession wrapper by implementing the Session interface, as shown in the docs for the HttpSession.

Then, it’d be only a matter of letting the framework know we want to use it. We can do this in the same way we did for the custom template engine, with the only difference being that we call the sessionType() method:

@Bean
public class SessionConfig implements BladeLoader {
 
    @Override
    public void load(Blade blade) {
        blade.sessionType(new RedisSession());
    }
}

16. Command Line Arguments

When running Blade from the command line, there are three settings we can specify to alter its behavior.

Firstly, we can change the IP address the server binds to, which by default is 0.0.0.0, meaning all the available interfaces:

java -jar target/sample-blade-app.jar --server.address=192.168.1.100

Secondly, we can also change the port, which by default is 9000:

java -jar target/sample-blade-app.jar --server.port=8080

Finally, as seen in section 9.3, we can change the environment to let a different application-XXX.properties file be read in place of the default one, which is application.properties:

java -jar target/sample-blade-app.jar --app.env=prod

17. Running in the IDE

Any modern Java IDE is able to run a Blade project without even needing the Maven plugins. Running Blade in an IDE is especially useful when running the Blade Demos, examples written expressly to showcase the framework’s functionalities. They all inherit a parent pom, so it’s easier to let the IDE do the work, instead of manually tweaking them to be run as standalone apps.

17.1. Eclipse

In Eclipse, it’s enough to right-click on the project and launch Run as Java Application, select our App class, and press OK.

Eclipse’s console, however, will not display ANSI colors correctly, pouring out their codes instead.

Luckily, installing the ANSI Escape in Console extension fixes the problem for good.

17.2. IntelliJ IDEA

IntelliJ IDEA works with ANSI colors out of the box. Therefore, it’s enough to create the project, right-click on the App file, and launch Run ‘App.main()’ (which is equivalent to pressing Ctrl+Shift+F10).

17.3. Visual Studio Code

It’s also possible to use VSCode, a popular non-Java-centric IDE, by previously installing the Java Extension Pack.

Pressing Ctrl+F5 will then run the project.

18. Conclusion

We’ve seen how to use Blade to create a small MVC application.

The entire documentation is available only in the Chinese language. While the framework is widespread mainly in China, thanks to its Chinese origins, the author has recently translated the API and documented the core functionalities of the project in English on GitHub.

As always, we can find the source code of the example over on GitHub.

Introduction to Clojure

1. Introduction

Clojure is a functional programming language that runs entirely on the Java Virtual Machine, in a similar way to Scala and Kotlin. Clojure is considered to be a Lisp derivative and will be familiar to anyone who has experience with other Lisp languages.

This tutorial gives an introduction to the Clojure language, introducing how to get started with it and some of the key concepts to how it works.

2. Installing Clojure

Clojure is available as installers and convenience scripts for use on Linux and macOS. Unfortunately, at this stage, Windows doesn’t have such an installer.

However, the Linux scripts might work in something such as Cygwin or Windows Bash. There is also an online service that can be used to test out the language, and older versions have a standalone version that can be used.

2.1. Standalone Download

The standalone JAR file can be downloaded from Maven Central. Unfortunately, versions newer than 1.8.0 no longer work this way easily due to the JAR file having been split into smaller modules.

Once this JAR file is downloaded, we can use it as an interactive REPL simply by treating it as an executable JAR:

$ java -jar clojure-1.8.0.jar
Clojure 1.8.0
user=>

2.2. Web Interface to REPL

A web interface to the Clojure REPL is available at https://repl.it/languages/clojure for us to try without needing to download anything. Currently, this only supports Clojure 1.8.0 and not the newer releases.

2.3. Installer on MacOS

If you use macOS and have Homebrew installed, then the latest release of Clojure can be installed easily:

$ brew install clojure

This will support the latest version of Clojure – 1.10.0 at the time of writing. Once installed, we can load the REPL simply by using the clojure or clj commands:

$ clj
Clojure 1.10.0
user=>

2.4. Installer on Linux

A self-installing shell script is available for us to install the tools on Linux:

$ curl -O https://download.clojure.org/install/linux-install-1.10.0.411.sh
$ chmod +x linux-install-1.10.0.411.sh
$ sudo ./linux-install-1.10.0.411.sh

As with the macOS installer, these will be available for the most recent releases of Clojure and can be executed using the clojure or clj commands.

3. Introduction to the Clojure REPL

All of the above options give us access to the Clojure REPL. This is the direct Clojure equivalent of the JShell tool for Java 9 and above, and allows us to enter Clojure code and see the result immediately. This is a fantastic way to experiment and discover how certain language features work.

Once the REPL is loaded, we’ll have a prompt at which any standard Clojure code can be entered and immediately executed. This includes simple Clojure constructs, as well as interaction with other Java libraries – though they need to be available on the classpath to be loaded.

The prompt of the REPL is an indication of the current namespace we are working in. For the majority of our work, this is the user namespace, and so the prompt will be:

user=>

Everything in the rest of this article will assume that we have access to the Clojure REPL, and will all work directly in any such tool.

4. Language Basics

The Clojure language looks very different from many other JVM-based languages, and will possibly seem very unusual to start with. It’s considered to be a dialect of Lisp and has very similar syntax and functionality to other Lisp languages.

A lot of the code that we write in Clojure – as with other Lisp dialects – is expressed in the form of Lists. Lists can then be evaluated to produce results – either in the form of more lists or simple values.

For example:

(+ 1 2) ; = 3

This is a list consisting of three elements. The “+” symbol indicates that we are performing this call – addition. The remaining elements are then used with this call. Thus, this evaluates to “1 + 2”.

By using a List syntax here, this can be trivially extended. For example, we can do:

(+ 1 2 3 4 5) ; = 15

And this evaluates to “1 + 2 + 3 + 4 + 5”.

Note as well the semi-colon character. This is used in Clojure to indicate a comment and isn’t the end of the expression as we’d see in Java.

4.1. Simple Types

Clojure is built on top of the JVM, and as such we have access to the same standard types as any other Java application. Types are typically inferred automatically and don’t need to be specified explicitly.

For example:

123 ; Long
1.23 ; Double
"Hello" ; String
true ; Boolean

We can specify some more complicated types as well, using special prefixes or suffixes:

42N ; clojure.lang.BigInt
3.14159M ; java.math.BigDecimal
1/3 ; clojure.lang.Ratio
#"[A-Za-z]+" ; java.util.regex.Pattern

Note that the clojure.lang.BigInt type is used instead of java.math.BigInteger. This is because the Clojure type has some minor optimizations and fixes.

4.2. Keywords and Symbols

Clojure gives us the concept of both keywords and symbols. Keywords refer only to themselves and are often used for things such as map keys. Symbols, on the other hand, are names used to refer to other things. For example, variable definitions and function names are symbols.

We can construct keywords by using a name prefixed with a colon:

user=> :kw
:kw
user=> :a
:a

Keywords have direct equality with themselves, and not with anything else:

user=> (= :a :a)
true
user=> (= :a :b)
false
user=> (= :a "a")
false

Most other things in Clojure that are not simple values are considered to be symbols. These evaluate to whatever they refer to, whereas a keyword always evaluates to itself:

user=> (def a 1)
#'user/a
user=> :a
:a
user=> a
1

4.3. Namespaces

The Clojure language has the concept of namespaces for organizing our code. Every piece of code we write lives in a namespace.

By default, the REPL runs in the user namespace – as seen by the prompt stating “user=>”.

We can create and change namespaces using the ns keyword:

user=> (ns new.ns)
nil
new.ns=>

Once we’ve changed namespaces, anything that is defined in the old one is no longer available to us, and anything defined in the new one is now available.

We can access definitions across namespaces by fully qualifying them. For example, the namespace clojure.string defines a function upper-case.

If we’re in the clojure.string namespace, we can access it directly. If we’re not, then we need to qualify it as clojure.string/upper-case:

user=> (clojure.string/upper-case "hello")
"HELLO"
user=> (upper-case "hello") ; This is not visible in the "user" namespace
Syntax error compiling at (REPL:1:1).
Unable to resolve symbol: upper-case in this context
user=> (ns clojure.string)
nil
clojure.string=> (upper-case "hello") ; This is visible because we're now in the "clojure.string" namespace
"HELLO"

We can also use the require keyword to access definitions from another namespace in an easier way. There are two main ways that we can use this – to define a namespace with a shorter name so that it’s easier to use, and to access definitions from another namespace without any prefix directly:

clojure.string=> (require '[clojure.string :as str])
nil
clojure.string=> (str/upper-case "Hello")
"HELLO"

user=> (require '[clojure.string :as str :refer [upper-case]])
nil
user=> (upper-case "Hello")
"HELLO"

Both of these only affect the current namespace, so changing to a different one will need to have new requires. This helps to keep our namespaces cleaner and give us access to only what we need.

4.4. Variables

Once we know how to define simple values, we can assign them to variables. We can do this using the keyword def:

user=> (def a 123)
#'user/a

Once we’ve done this, we can use the symbol a anywhere we want to represent this value:

user=> a
123

Variable definitions can be as simple or as complicated as we want.

For example, to define a variable as the sum of numbers, we can do:

user=> (def b (+ 1 2 3 4 5))
#'user/b
user=> b
15

Notice that we never have to declare the variable or indicate what type it is. Clojure automatically determines all of this for us.

If we try to use a variable that has not been defined, then we will instead get an error:

user=> unknown
Syntax error compiling at (REPL:0:0).
Unable to resolve symbol: unknown in this context
user=> (def c (+ 1 unknown))
Syntax error compiling at (REPL:1:8).
Unable to resolve symbol: unknown in this context

Notice that the output of the def call looks slightly different from the input. Defining a variable a returns #'user/a. This is because the result is a var referring to the symbol, and the symbol is qualified by the namespace it’s defined in.

4.5. Functions

We’ve already seen a couple of examples of how to call functions in Clojure. We create a list that starts with the function to be called, and then all of the parameters.

When this list evaluates, we get the return value from the function. For example:

user=> (java.time.Instant/now)
#object[java.time.Instant 0x4b6690c0 "2019-01-15T07:54:01.516Z"]
user=> (java.time.Instant/parse "2019-01-15T07:55:00Z")
#object[java.time.Instant 0x6b8d96d9 "2019-01-15T07:55:00Z"]
user=> (java.time.OffsetDateTime/of 2019 01 15 7 56 0 0 java.time.ZoneOffset/UTC)
#object[java.time.OffsetDateTime 0xf80945f "2019-01-15T07:56Z"]

We can also nest calls to functions, for when we want to pass the output of one function call in as a parameter to another:

user=> (java.time.OffsetDateTime/of 2018 01 15 7 57 0 0 (java.time.ZoneOffset/ofHours -5))
#object[java.time.OffsetDateTime 0x1cdc4c27 "2018-01-15T07:57-05:00"]

We can also define our own functions if we desire. Functions are created using the fn command:

user=> (fn [a b]
  (println "Adding numbers" a "and" b)
  (+ a b)
)
#object[user$eval165$fn__166 0x5644dc81 "user$eval165$fn__166@5644dc81"]

Unfortunately, this doesn’t give the function a name that can be used. Instead, we can define a symbol that represents this function using def, exactly as we’ve seen for variables:

user=> (def add
  (fn [a b]
    (println "Adding numbers" a "and" b)
    (+ a b)
  )
)
#'user/add

Now that we’ve defined this function, we can call it the same as any other function:

user=> (add 1 2)
Adding numbers 1 and 2
3

As a convenience, Clojure also allows us to use defn to define a function with a name in a single go.

For example:

user=> (defn sub [a b]
  (println "Subtracting" b "from" a)
  (- a b)
)
#'user/sub
user=> (sub 5 2)
Subtracting 2 from 5
3

4.6. Let and Local Variables

The def call defines a symbol that is global to the current namespace. This is typically not what is desired when executing code. Instead, Clojure offers the let call to define variables local to a block. This is especially useful when using them inside functions, where we don’t want the variables to leak outside of the function.

For example, we could define our sub function:

user=> (defn sub [a b]
  (def result (- a b))
  (println "Result: " result)
  result
)
#'user/sub

However, using this has the following unexpected side effect:

user=> (sub 1 2)
Result:  -1
-1
user=> result ; Still visible outside of the function
-1

Instead, let’s re-write it using let:

user=> (defn sub [a b]
  (let [result (- a b)]
    (println "Result: " result)
    result
  )
)
#'user/sub
user=> (sub 1 2)
Result:  -1
-1
user=> result
Syntax error compiling at (REPL:0:0).
Unable to resolve symbol: result in this context

This time the result symbol is not visible outside of the function. Or, indeed, outside of the let block in which it was used.

5. Collections

So far, we’ve been mostly interacting with simple values. We have seen lists as well, but nothing more. Clojure does have a full set of collections that can be used, though, consisting of lists, vectors, maps, and sets:

  • A vector is an ordered list of values – any arbitrary value can be put into a vector, including other collections.
  • A set is an unordered collection of values, and can never contain the same value more than once.
  • A map is a simple set of key/value pairs. It’s very common to use keywords as the keys in a map, but we can use any value we like, including other collections.
  • A list is very similar to a vector. The difference is similar to that between an ArrayList and a LinkedList in Java. Typically, a vector is preferred, but a list is better if we want to be adding elements to the start, or if we only ever want to access the elements in sequential order.

5.1. Constructing Collections

Creating each of these can be done using a shorthand notation or using a function call:

; Vector
user=> [1 2 3]
[1 2 3]
user=> (vector 1 2 3)
[1 2 3]

; List
user=> '(1 2 3)
(1 2 3)
user=> (list 1 2 3)
(1 2 3)

; Set
user=> #{1 2 3}
#{1 3 2}
user=> (hash-set 1 2 3)
#{1 3 2}

; Map
user=> {:a 1 :b 2}
{:a 1, :b 2}
user=> (hash-map :a 1 :b 2)
{:b 2, :a 1}

Notice that the set and map examples don’t return the values in the same order we supplied them. This is because these collections are inherently unordered, and what we see depends on how they are represented in memory.

We can also see that the syntax for creating a list is very similar to the standard Clojure syntax for expressions. A Clojure expression is, in fact, a list that gets evaluated, and the apostrophe character – the quote – indicates that we want the actual list of values instead of evaluating it.
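
For example, quoting is all that separates a list of data from a call that gets evaluated:

user=> (+ 1 2)
3
user=> '(+ 1 2)
(+ 1 2)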

We can, of course, assign a collection to a variable in the same way as any other value. We can also use one collection as a key or value inside another collection.
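
For instance – using purely illustrative literal values – a vector can act as a map key and a set as a map value:

user=> {[0 0] "origin", :tags #{:demo :clojure}}
{[0 0] "origin", :tags #{:demo :clojure}}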

Lists are considered to be a seq – that is, the underlying class implements the ISeq interface. All other collections can be converted to a seq using the seq function:

user=> (seq [1 2 3])
(1 2 3)
user=> (seq #{1 2 3})
(1 3 2)
user=> (seq {:a 1 2 3}) ; a map with the two keys :a and 2
([:a 1] [2 3])

5.2. Accessing Collections

Once we have a collection, we can interact with it to get values back out again. How we can do this depends slightly on the collection in question, since each of them has different semantics.
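
The examples below assume that we’ve first bound some collections to symbols – hypothetical bindings, chosen so that the results shown make sense:

user=> (def my-vector [1 2 3])
#'user/my-vector
user=> (def my-list '(1 2 3))
#'user/my-list
user=> (def my-map {:a 1 :b 2})
#'user/my-map
user=> (def my-set #{1 2 3})
#'user/my-set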

Vectors are the only collection that lets us get any arbitrary value by index. This is done by evaluating the vector and index as an expression:

user=> (my-vector 2) ; indexes are zero-based
3

We can do the same, using the same syntax, for maps as well:

user=> (my-map :b)
2

We also have functions for accessing vectors and lists to get the first value, last value, and the remainder of the list:

user=> (first my-vector)
1
user=> (last my-list)
3
user=> (next my-vector)
(2 3)

Maps have additional functions to get the entire list of keys and values:

user=> (keys my-map)
(:a :b)
user=> (vals my-map)
(1 2)

The only real access that we have to sets is to see if a particular element is a member.

This looks very similar to accessing any other collection:

user=> (my-set 1) ; returns the element when it's present
1
user=> (my-set 5) ; nil when it's absent
nil

5.3. Identifying Collections

We’ve seen that the way we access a collection varies depending on the type of collection we have. We have a set of functions we can use to determine this, both in a specific and more generic manner.

Each of our collections has a specific function to determine if a given value is of that type – list? for lists, set? for sets, and so on. Additionally, there is seq? for determining if a given value is a seq of any kind, and associative? to determine if a given value allows associative access of any kind – which means vectors and maps:

user=> (vector? [1 2 3]) ; A vector is a vector
true
user=> (vector? #{1 2 3}) ; A set is not a vector
false
user=> (list? '(1 2 3)) ; A list is a list
true
user=> (list? [1 2 3]) ; A vector is not a list
false
user=> (map? {:a 1 :b 2}) ; A map is a map
true
user=> (map? #{1 2 3}) ; A set is not a map
false
user=> (seq? '(1 2 3)) ; A list is a seq
true
user=> (seq? [1 2 3]) ; A vector is not a seq
false
user=> (seq? (seq [1 2 3])) ; A vector can be converted into a seq
true
user=> (associative? {:a 1 :b 2}) ; A map is associative
true
user=> (associative? [1 2 3]) ; A vector is associative
true
user=> (associative? '(1 2 3)) ; A list is not associative
false

5.4. Mutating Collections

In Clojure, as with most functional languages, all collections are immutable. Anything that we do to change a collection results in a brand new collection being created to represent the changes. Because the new collection shares structure with the old one, this is still efficient, and it means that there is no risk of accidental side effects.

However, we have to keep this in mind, or the changes we expect to see in our collections won’t happen.

Adding new elements to a vector, list, or set is done using conj. This works differently in each of these cases, but with the same basic intention:

user=> (conj [1 2 3] 4) ; Adds to the end
[1 2 3 4]
user=> (conj '(1 2 3) 4) ; Adds to the beginning
(4 1 2 3)
user=> (conj #{1 2 3} 4) ; Unordered
#{1 4 3 2}
user=> (conj #{1 2 3} 3) ; Adding an already present entry does nothing
#{1 3 2}

We can also remove entries from a set using disj. Note that this doesn’t work on a list or vector, because they are strictly ordered:

user=> (disj #{1 2 3} 2) ; Removes the entry
#{1 3}
user=> (disj #{1 2 3} 4) ; Does nothing because the entry wasn't present
#{1 3 2}

Adding new elements to a map is done using assoc. We can also remove entries from a map using dissoc:

user=> (assoc {:a 1 :b 2} :c 3) ; Adds a new key
{:a 1, :b 2, :c 3}
user=> (assoc {:a 1 :b 2} :b 3) ; Updates an existing key
{:a 1, :b 3}
user=> (dissoc {:a 1 :b 2} :b) ; Removes an existing key
{:a 1}
user=> (dissoc {:a 1 :b 2} :c) ; Does nothing because the key wasn't present
{:a 1, :b 2}
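
To see the immutability in action, here’s a minimal sketch – scores is a hypothetical map defined only for this example – showing that assoc leaves the original untouched:

user=> (def scores {:a 1})
#'user/scores
user=> (assoc scores :b 2) ; a brand new map with the extra key
{:a 1, :b 2}
user=> scores ; the original is unchanged
{:a 1}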

5.5. Functional Programming Constructs

Clojure is, at its heart, a functional programming language. This means that we have access to many traditional functional programming concepts – such as map, filter, and reduce. These generally work the same as in other languages, though the exact syntax may be slightly different.

Specifically, these functions generally take the function to apply as the first argument, and the collection to apply it to as the second argument:

user=> (map inc [1 2 3]) ; Increment every value in the vector
(2 3 4)
user=> (map inc #{1 2 3}) ; Increment every value in the set
(2 4 3)

user=> (filter odd? [1 2 3 4 5]) ; Only return odd values
(1 3 5)
user=> (remove odd? [1 2 3 4 5]) ; Only return non-odd values
(2 4)

user=> (reduce + [1 2 3 4 5]) ; Add all of the values together, returning the sum
15
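
Since each of these calls simply returns a value, we can also nest them. For example, we could sum the squares of only the odd values:

user=> (reduce + (map (fn [x] (* x x)) (filter odd? [1 2 3 4 5])))
35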

6. Control Structures

As with all general purpose languages, Clojure features calls for standard control structures, such as conditionals and loops.

6.1. Conditionals

Conditionals are handled by the if statement. This takes three parameters: a test, a block to execute if the test is true, and a block to execute if the test is false. Each of these can be a simple value or a standard list that will be evaluated on demand:

user=> (if true 1 2)
1
user=> (if false 1 2)
2

Our test can be anything at all – it doesn’t need to be a literal true/false value. It can also be an expression that gets evaluated on demand to give us the value that we need:

user=> (if (> 1 2) "True" "False")
"False"

All of the standard comparisons, including =, >, and <, can be used here. There’s also a whole set of predicates for other kinds of tests – we saw some already when looking at collections, for example:

user=> (if (odd? 1) "1 is odd" "1 is even")
"1 is odd"

The test can return any value at all – it doesn’t need only to be true or false. However, it is considered to be true if the value is anything except false or nil. This is simpler than the way JavaScript works, where values such as 0 and the empty string are also treated as “falsy”:

user=> (if 0 "True" "False")
"True"
user=> (if [] "True" "False")
"True"
user=> (if nil "True" "False")
"False"

6.2. Looping

Our functional support on collections handles much of the looping work – instead of writing a loop over the collection, we use the standard functions and let the language do the iteration for us.

Outside of this, looping is done entirely using recursion. We can write recursive functions, or we can use the loop and recur forms to write a recursive-style loop:

user=> (loop [accum [] i 0]
  (if (= i 10)
    accum
    (recur (conj accum i) (inc i))
  ))
[0 1 2 3 4 5 6 7 8 9]

The loop call opens an inner block that is executed on every iteration, binding some initial parameters on the first pass. The recur call then calls back into the loop, providing the next parameters to use for the iteration. If recur is not called, then the loop finishes.

In this case, we loop as long as i is not equal to 10, and as soon as it is, we instead return the accumulated vector of numbers.
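
Alternatively, we can write an ordinary recursive function – here a sum-to function, introduced just for illustration, that adds the numbers from 1 to n. Note that, unlike loop and recur, deep recursion like this consumes stack space:

user=> (defn sum-to [n]
  (if (zero? n)
    0
    (+ n (sum-to (dec n)))
  )
)
#'user/sum-to
user=> (sum-to 10)
55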

7. Summary

This article has given an introduction to the Clojure programming language, showing how the syntax works and some of the things we can do with it. It stays at an introductory level and doesn’t go into the depths of everything that can be done with the language.

Why not pick it up, give it a go, and see what you can do with it?
