
HandlerInterceptors vs. Filters in Spring MVC

1. Overview

In this article, we'll compare the Java servlet Filter and the Spring MVC HandlerInterceptor, and discuss when one might be preferable to the other.

2. Filters

Filters are part of the web server, not the Spring framework. For incoming requests, we can use filters to manipulate or even block requests from reaching any servlet. Conversely, we can also block responses from reaching the client.

Spring Security is a great example of using filters for authentication and authorization. To configure Spring Security, we simply need to add a single filter, the DelegatingFilterProxy. Spring Security can then intercept all incoming and outgoing traffic. This is why Spring Security can be used outside of Spring MVC.

2.1. Creating a Filter

To create a filter, first, we create a class that implements the javax.servlet.Filter interface:

@Component
public class LogFilter implements Filter {
    private Logger logger = LoggerFactory.getLogger(LogFilter.class);
    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) 
      throws IOException, ServletException {
        logger.info("Hello from: " + request.getLocalAddr());
        chain.doFilter(request, response);
    }
}

Next, we implement the doFilter method, where we can access or manipulate the ServletRequest, ServletResponse, and FilterChain objects. We can allow or block requests with the FilterChain object.

Finally, we add the Filter to the Spring context by annotating it with @Component. Spring will do the rest.
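
If we need more control, for example restricting the filter to specific URL patterns, one alternative is a FilterRegistrationBean. Here's a minimal sketch, assuming we drop @Component from LogFilter so the filter isn't registered twice:

@Configuration
public class FilterConfig {
    @Bean
    public FilterRegistrationBean<LogFilter> logFilterRegistration() {
        // register the filter manually and limit it to a URL pattern
        FilterRegistrationBean<LogFilter> registration = new FilterRegistrationBean<>(new LogFilter());
        registration.addUrlPatterns("/api/*");
        return registration;
    }
}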

3. HandlerInterceptors

HandlerInterceptors are part of the Spring MVC framework and sit between the DispatcherServlet and our Controllers. We can intercept requests before they reach our controllers, and before and after the view is rendered.

3.1. Creating a HandlerInterceptor

To create a HandlerInterceptor, we create a class that implements the org.springframework.web.servlet.HandlerInterceptor interface. This gives us the option to override three methods:

  • preHandle() – Executed before the target handler is called
  • postHandle() – Executed after the target handler but before the DispatcherServlet renders the view
  • afterCompletion() – Callback after completion of request processing and view rendering

Let's add logging to the three methods in our test interceptor:

public class LogInterceptor implements HandlerInterceptor {
    private Logger logger = LoggerFactory.getLogger(LogInterceptor.class);
    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) 
      throws Exception {
        logger.info("preHandle");
        return true;
    }
    @Override
    public void postHandle(HttpServletRequest request, HttpServletResponse response, Object handler, ModelAndView modelAndView) 
      throws Exception {
        logger.info("postHandle");
    }
    @Override
    public void afterCompletion(HttpServletRequest request, HttpServletResponse response, Object handler, Exception ex) 
      throws Exception {
        logger.info("afterCompletion");
    }
}
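
For the interceptor to take effect, we also need to register it with Spring MVC. A common way, sketched below, is through a WebMvcConfigurer:

@Configuration
public class WebConfig implements WebMvcConfigurer {
    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        // register our interceptor so it runs for incoming requests
        registry.addInterceptor(new LogInterceptor());
    }
}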

4. Key Differences and Use Cases

Let's consider where Filters and HandlerInterceptors fit in the request/response flow.

Filters intercept requests before they reach the DispatcherServlet, making them ideal for coarse-grained tasks such as:

  • Authentication
  • Logging and auditing
  • Image and data compression
  • Any functionality we want to be decoupled from Spring MVC

HandlerInterceptors, on the other hand, intercept requests between the DispatcherServlet and our Controllers. This is done within the Spring MVC framework, providing access to the Handler and ModelAndView objects. This reduces duplication and allows for more fine-grained functionality such as:

  • Handling cross-cutting concerns such as application logging
  • Detailed authorization checks
  • Manipulating the Spring context or model

5. Conclusion

In this article, we covered the differences between a Filter and HandlerInterceptor.

The key takeaway is that with Filters, we can manipulate requests before they reach our controllers and outside of Spring MVC. HandlerInterceptors, in contrast, are a great place for application-specific cross-cutting concerns. By providing access to the target Handler and ModelAndView objects, they give us more fine-grained control.

The implementation of all these examples and code snippets can be found over on GitHub.


Split a String in Java and Keep the Delimiters


1. Introduction

Programmers often come across algorithms involving splitting strings. In a special scenario, there might be a requirement to split a string based on single or multiple distinct delimiters and also return the delimiters as part of the split operation.

Let's discuss in detail the different available solutions to this String split problem.

2. Fundamentals

The Java universe offers quite a few libraries (java.lang.String, Guava, and Apache Commons, to name a few) to facilitate the splitting of strings in simple and fairly complex cases. Additionally, the feature-rich regular expressions provide extra flexibility in splitting problems that revolve around matching of a specific pattern.

3. Look-Around Assertions

In regular expressions, look-around assertions indicate that a match is possible either by looking ahead (lookahead) or looking behind (lookbehind) for another pattern, at the current location of the source string. Let's understand this better with an example.

A lookahead assertion Java(?=Baeldung) matches “Java” only if it is followed by “Baeldung”.

Likewise, a negative lookbehind assertion (?<!#)\d+ matches a number only if it is not preceded by '#'.

Let's use such look-around assertion regular expressions and devise a solution to our problem.

In all of the examples explained in this article, we're going to use two simple Strings:

String text = "Hello@World@This@Is@A@Java@Program";
String textMixed = "@HelloWorld@This:Is@A#Java#Program";

4. Using String.split()

Let's begin by using the split() method from the String class of the core Java library.

Moreover, we'll evaluate appropriate lookahead assertions, lookbehind assertions, and combinations of them to split the strings as desired.

4.1. Positive Lookahead

First of all, let's use the lookahead assertion “((?=@))” and split the string text around its matches:

String[] splits = text.split("((?=@))");

The lookahead regex splits the string by a forward match of the “@” symbol. The content of the resulting array is:

[Hello, @World, @This, @Is, @A, @Java, @Program]

Using this regex doesn't return the delimiters separately in the splits array. Let's try an alternate approach.

4.2. Positive Lookbehind

We can also use a positive lookbehind assertion “((?<=@))” to split the string text:

String[] splits = text.split("((?<=@))");

However, the resulting output still won't contain the delimiters as individual elements of the array:

[Hello@, World@, This@, Is@, A@, Java@, Program]

4.3. Positive Lookahead or Lookbehind

We can combine the two look-arounds explained above with a logical OR and see it in action.

The resulting regex “((?=@)|(?<=@))” will definitely give us the desired results. The below code snippet demonstrates this:

String[] splits = text.split("((?=@)|(?<=@))");

The above regular expression splits the string, and the resulting array contains the delimiters:

[Hello, @, World, @, This, @, Is, @, A, @, Java, @, Program]

Now that we understand the required look-around assertion regular expression, we can modify it based on the different types of delimiters present in the input string.

Let's attempt to split the textMixed as defined previously using a suitable regex:

String[] splitsMixed = textMixed.split("((?=:|#|@)|(?<=:|#|@))");

Executing the above line of code produces the following result:

[@, HelloWorld, @, This, :, Is, @, A, #, Java, #, Program]

5. Using Guava Splitter

Now that we have clarity on the regex assertions discussed in the above section, let's delve into a Java library offered by Google.

The Splitter class from Guava offers methods on() and onPattern() to split a string using a regular expression pattern as a separator.

To start with, let's see them in action on the string text containing a single delimiter “@”:

List<String> splits = Splitter.onPattern("((?=@)|(?<=@))").splitToList(text);
List<String> splits2 = Splitter.on(Pattern.compile("((?=@)|(?<=@))")).splitToList(text);

The results from executing the above lines of code are quite similar to the ones generated by the split method, except we now have Lists instead of arrays.

Likewise, we can also use these methods to split a string containing multiple distinct delimiters:

List<String> splitsMixed = Splitter.onPattern("((?=:|#|@)|(?<=:|#|@))").splitToList(textMixed);
List<String> splitsMixed2 = Splitter.on(Pattern.compile("((?=:|#|@)|(?<=:|#|@))")).splitToList(textMixed);

The difference between the two methods lies in how they accept the separator: the on() method accepts an argument of java.util.regex.Pattern, whereas the onPattern() method accepts the separator regex as a plain String.

6. Using Apache Commons StringUtils

We can also take advantage of the Apache Commons Lang project's StringUtils method splitByCharacterType().

It's really important to note that this method works by splitting the input string by the character type as returned by java.lang.Character.getType(char). Here, we don't get to pick or extract the delimiters of our choosing.

Furthermore, it delivers the best results when the source string has a constant case, either upper or lower, throughout:

String[] splits = StringUtils.splitByCharacterType("pg@no;10@hello;world@this;is@a#10words;Java#Program");

The different character types as seen in the above string are uppercase and lowercase letters, digits, and special characters (@ ; # ).

Hence, the resulting array splits, as expected, looks like:

[pg, @, no, ;, 10, @, hello, ;, world, @, this, ;, is, @, a, #, 10, words, ;, J, ava, #, P, rogram]

7. Conclusion

In this article, we've seen how to split a string in such a way that the delimiters are also available in the resulting array.

First, we discussed look-around assertions and used them to get the desired results. Later, we used the methods provided by the Guava library to achieve similar results.

Finally, we wrapped up with the Apache Commons Lang library, which provides a more user-friendly method to solve a related problem of splitting a string, also returning the delimiters.

As always, the code used in this article can be found over on GitHub.


Overriding Column Definition With @AttributeOverride


1. Overview

In this tutorial, we'll show how to use @AttributeOverride to change a column mapping. We'll explain how to use it when extending or embedding an entity, and we'll cover single and collection embedding.

2. @AttributeOverride's Attributes

The annotation contains two mandatory attributes:

  • name – field name of an included entity
  • column – column definition which overrides the one defined in the original object

3. Use With @MappedSuperclass

Let's define a Vehicle class:

@MappedSuperclass
public class Vehicle {
    @Id
    @GeneratedValue
    private Integer id;
    private String identifier;
    private Integer numberOfWheels;
    
    // standard getters and setters
}

The @MappedSuperclass annotation indicates that it's a base class for other entities.

Let's next define class Car, which extends Vehicle. It demonstrates how to extend an entity and store a car's information in a single table. Note that the annotation goes on the class:

@Entity
@AttributeOverride(name = "identifier", column = @Column(name = "VIN"))
public class Car extends Vehicle {
    private String model;
    private String name;
    // standard getters and setters
}

As a result, we have a single table holding both car details and vehicle details. However, for a car, we want to store the identifier in a column named VIN. We achieve this with @AttributeOverride: the annotation specifies that the identifier field is stored in the VIN column.

4. Use With an Embedded Class

Let's now add more details to our vehicle with two embeddable classes.

Let's first define basic address information:

@Embeddable
public class Address {
    private String name;
    private String city;
    // standard getters and setters
}

Let's also create a class with car manufacturer information:

@Embeddable
public class Brand {
    private String name;
    private LocalDate foundationDate;
    @Embedded
    private Address address;
    // standard getters and setters
}

The Brand class contains an embedded class with address details. We'll use it to demonstrate how to use @AttributeOverride with multiple levels of embedding.

Let's extend our Car with Brand details:

@Entity
@AttributeOverride(name = "identifier", column = @Column(name = "VIN"))
public class Car extends Vehicle {
    // existing fields
    @Embedded
    @AttributeOverrides({
      @AttributeOverride(name = "name", column = @Column(name = "BRAND_NAME", length = 5)),
      @AttributeOverride(name = "address.name", column = @Column(name = "ADDRESS_NAME"))
    })
    private Brand brand;
    // standard getters and setters
}

First of all, the @AttributeOverrides annotation allows us to modify more than one attribute. We've overridden the name column definition from the Brand class because the same column exists in the Car class. As a result, the brand name is stored in column BRAND_NAME.

Furthermore, we've defined a column length to show that more than just the column name can be overridden. Note that the column attribute replaces the entire column definition from the overridden class, so to keep any of the original values, we must set them all explicitly in the column attribute.

In addition to that, the name column from the Address class has been mapped to ADDRESS_NAME. To override mappings at multiple levels of embedding, we use a dot “.” to specify a path to the overridden field.

5. Embedded Collection

Let's play a little bit with this annotation and see how it works with a collection.

Let's add a car's owner details:

@Embeddable
public class Owner {
    private String name;
    private String surname;
    // standard getters and setters
}

We want to have an owner together with an address, so let's add a map of owners and their addresses:

@Entity
@AttributeOverride(name = "identifier", column = @Column(name = "VIN"))
public class Car extends Vehicle {
    // existing fields
    @ElementCollection
    @AttributeOverrides({
      @AttributeOverride(name = "key.name", column = @Column(name = "OWNER_NAME")),
      @AttributeOverride(name = "key.surname", column = @Column(name = "OWNER_SURNAME")),
      @AttributeOverride(name = "value.name", column = @Column(name = "ADDRESS_NAME")),
    })
    Map<Owner, Address> owners;
    // standard getters and setters
}

Thanks to the annotation, we can reuse the Address class. The key prefix indicates an override of a field from the Owner class. Furthermore, a value prefix points to the field from the Address class. For lists, no additional prefix is required, as shown below.
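
To illustrate, a hypothetical list of embedded addresses could be overridden directly by field name; a minimal sketch:

@ElementCollection
@AttributeOverride(name = "name", column = @Column(name = "PREVIOUS_ADDRESS_NAME"))
private List<Address> previousAddresses;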

6. Conclusion

That concludes this short article about the @AttributeOverride annotation. We've seen how to use this annotation when extending or embedding an entity. After that, we've learned how to use it with a collection.

As always, the source code of the example is available over on GitHub.


Build a Trading Bot with Cassandre Spring Boot Starter


1. Overview

A trading bot is a computer program that can automatically place orders to a market or exchange without the need for human intervention.

In this tutorial, we'll use Cassandre to create a simple crypto trading bot that will generate positions when we think it’s the best moment.

2. Bot Overview

Trading means “exchanging one item for another”.

In the financial markets, it’s buying shares, futures, options, swaps, bonds, or, as in our case, an amount of cryptocurrency. The idea here is to buy cryptocurrencies at a specific price and sell them at a higher price to make a profit (even if we can still profit when the price goes down, with a short position).

We'll use a sandbox exchange; a sandbox is a virtual system where we have “fake” assets, where we can place orders and receive tickers.

First, let's see what we'll do:

  • Add Cassandre spring boot starter to our project
  • Add the required configuration to connect to the exchange
  • Create a strategy:
    • Receive tickers from the exchange
    • Choose when to buy
    • When it’s time to buy, check if we have enough assets and create a position
    • Display logs to see when positions are open/closed and how much gain we made
  • Run tests against historical data to see if we can make profits

3. Maven Dependencies

Let's get started by adding the necessary dependencies to our pom.xml, first the Cassandre spring boot starter:

<dependency>
    <groupId>tech.cassandre.trading.bot</groupId>
    <artifactId>cassandre-trading-bot-spring-boot-starter</artifactId>
    <version>4.2.1</version>
</dependency>

Cassandre relies on XChange to connect to crypto exchanges. For this tutorial, we're going to use the Kucoin XChange library:

<dependency>
    <groupId>org.knowm.xchange</groupId>
    <artifactId>xchange-kucoin</artifactId>
    <version>5.0.8</version>
</dependency>

We're also using HSQLDB to store data:

<dependency>
    <groupId>org.hsqldb</groupId>
    <artifactId>hsqldb</artifactId>
    <version>2.5.2</version>
</dependency>

For testing our trading bot against historical data, we also add our Cassandre spring boot starter for tests:

<dependency>
    <groupId>tech.cassandre.trading.bot</groupId>
    <artifactId>cassandre-trading-bot-spring-boot-starter-test</artifactId>
    <version>4.2.1</version>
    <scope>test</scope>
</dependency>

4. Configuration

Let's create application.properties to set our configuration:

# Exchange configuration
cassandre.trading.bot.exchange.name=kucoin
cassandre.trading.bot.exchange.username=kucoin.cassandre.test@gmail.com
cassandre.trading.bot.exchange.passphrase=cassandre
cassandre.trading.bot.exchange.key=6054ad25365ac6000689a998
cassandre.trading.bot.exchange.secret=af080d55-afe3-47c9-8ec1-4b479fbcc5e7
# Modes
cassandre.trading.bot.exchange.modes.sandbox=true
cassandre.trading.bot.exchange.modes.dry=false
# Exchange API calls rates (ms or standard ISO 8601 duration like 'PT5S')
cassandre.trading.bot.exchange.rates.account=2000
cassandre.trading.bot.exchange.rates.ticker=2000
cassandre.trading.bot.exchange.rates.trade=2000
# Database configuration
cassandre.trading.bot.database.datasource.driver-class-name=org.hsqldb.jdbc.JDBCDriver
cassandre.trading.bot.database.datasource.url=jdbc:hsqldb:mem:cassandre
cassandre.trading.bot.database.datasource.username=sa
cassandre.trading.bot.database.datasource.password=

The configuration has four categories:

  • Exchange configuration: The credentials used to connect to an existing sandbox account on Kucoin
  • Modes: The modes we want to use. In our case, we're asking Cassandre to use the sandbox data
  • Exchange API calls rates: Indicates at which pace we want to retrieve data (accounts, orders, trades, and tickers) from the exchange. Be careful; all exchanges have maximum rates at which we can call them
  • Database configuration: Cassandre uses a database to store positions, orders & trades. For this tutorial, we'll use a simple HSQLDB in-memory database. Of course, in production, we should use a persistent database

Now let's create the same application.properties file in our test directory, but change cassandre.trading.bot.exchange.modes.dry to true because, during tests, we don't want to send real orders to the sandbox; we only want to simulate them.

5. The Strategy

A trading strategy is a fixed plan designed to achieve a profitable return; we can make ours by adding a Java class annotated with @CassandreStrategy and extending BasicCassandreStrategy.

Let’s create our strategy class in MyFirstStrategy.java:

@CassandreStrategy
public class MyFirstStrategy extends BasicCassandreStrategy {
    @Override
    public Set<CurrencyPairDTO> getRequestedCurrencyPairs() {
        return Set.of(new CurrencyPairDTO(BTC, USDT));
    }
    @Override
    public Optional<AccountDTO> getTradeAccount(Set<AccountDTO> accounts) {
        return accounts.stream()
          .filter(a -> "trade".equals(a.getName()))
          .findFirst();
    }
}

Implementing BasicCassandreStrategy forces us to implement two methods getRequestedCurrencyPairs() & getTradeAccount():

In getRequestedCurrencyPairs(), we have to return the list of currency pairs whose updates we want to receive from the exchange. A currency pair is the quotation of two different currencies, with the value of one currency being quoted against the other. In our example, we want to work with BTC/USDT.

To make it more clear, we can retrieve a ticker manually with the following curl command:

curl -s https://api.kucoin.com/api/v1/market/orderbook/level1?symbol=BTC-USDT

We'll get something like this:

{
  "time": 1620227845003,
  "sequence": "1615922903162",
  "price": "57263.3",
  "size": "0.00306338",
  "bestBid": "57259.4",
  "bestBidSize": "0.00250335",
  "bestAsk": "57260.4",
  "bestAskSize": "0.01"
}

The price value indicates that 1 BTC costs 57263.3 USDT.

The other method we have to implement is getTradeAccount(). On the exchange, we usually have several accounts, and Cassandre needs to know which one is the trading account. To do so, we implement the getTradeAccount() method, which gives us the list of accounts we own as a parameter; from that list, we have to return the one we want to use for trading.

In our example, our trade account on the exchange is named “trade”, so we simply return it.

6. Creating Positions

To be notified of new data, we can override the following methods of BasicCassandreStrategy:

  • onAccountUpdate() to receive updates about accounts
  • onTickerUpdate() to receive new tickers
  • onOrderUpdate() to receive updates about orders
  • onTradeUpdate() to receive updates about trades
  • onPositionUpdate() to receive updates about positions
  • onPositionStatusUpdate() to receive updates about position status changes

For this tutorial, we'll implement a dumb algorithm: we check every new ticker received. If the price of 1 BTC goes under 56 000 USDT, we think it’s time to buy.

To make things easier about gain calculation, orders, trades, and closure, Cassandre provides a class to manage positions automatically.

To use it, the first step is to create the rules for the position thanks to the PositionRulesDTO class, for example:

PositionRulesDTO rules = PositionRulesDTO.builder()
  .stopGainPercentage(4f)
  .stopLossPercentage(25f)
  .build();

Then, let's create the position with that rule:

createLongPosition(new CurrencyPairDTO(BTC, USDT), new BigDecimal("0.01"), rules);

At this moment, Cassandre will create a buy order of 0.01 BTC. The position status will be OPENING, and when all the corresponding trades have arrived, the status will move to OPENED. From now on, for every ticker received, Cassandre will automatically calculate, with the new price, if closing the position at that price would trigger one of our two rules (4% stop gain or 25% stop loss).

If one rule is triggered, Cassandre will automatically create a selling order of our 0.01 BTC. The position status will move to CLOSING, and when all the corresponding trades have arrived, the status will move to CLOSED.

This is the code we'll have:

@Override
public void onTickerUpdate(TickerDTO ticker) {
    if (ticker.getLast().compareTo(new BigDecimal("56000")) < 0) {
        if (canBuy(new CurrencyPairDTO(BTC, USDT), new BigDecimal("0.01"))) {
            PositionRulesDTO rules = PositionRulesDTO.builder()
              .stopGainPercentage(4f)
              .stopLossPercentage(25f)
              .build();
            createLongPosition(new CurrencyPairDTO(BTC, USDT), new BigDecimal("0.01"), rules);
        }
    }
}

To sum up:

  • For every new ticker, we check if the price is under 56000.
  • If we have enough USDT on our trade account, we open a position for 0.01 BTC.
  • From now on, for every ticker:
    • If the calculated gain at the new price crosses our 4% stop gain or 25% stop loss, Cassandre will close the position by selling the 0.01 BTC we bought.

7. Follow Positions Evolution in Logs

We'll finally implement the onPositionStatusUpdate() to see when positions are opened/closed:

@Override
public void onPositionStatusUpdate(PositionDTO position) {
    if (position.getStatus() == OPENED) {
        logger.info("> New position opened : {}", position.getPositionId());
    }
    if (position.getStatus() == CLOSED) {
        logger.info("> Position closed : {}", position.getDescription());
    }
}

8. Backtesting

In simple words, backtesting a strategy is the process of testing a trading strategy on prior periods. Cassandre trading bot allows us to simulate bots' reactions to historical data.

The first step is to put our historical data (CSV or TSV files) in our src/test/resources folder.

If we are under Linux, here is a simple script to generate them:

startDate=`date --date="3 months ago" +"%s"`
endDate=`date +"%s"`
curl -s "https://api.kucoin.com/api/v1/market/candles?type=1day&symbol=BTC-USDT&startAt=${startDate}&endAt=${endDate}" \
| jq -r -c ".data[] | @tsv" \
| tac > tickers-btc-usdt.tsv

It'll create a file named tickers-btc-usdt.tsv that contains the historical rate of BTC-USDT from startDate (3 months ago) to endDate (now).

The second step is to create our virtual account balances to simulate the exact amount of assets we want to invest.

In those files, for each account, we set the balances of each cryptocurrency. For example, this is the content of user-trade.csv, which simulates our trade account assets:

BTC 1
USDT 10000
ETH 10

This file must also be in the src/test/resources folder.

Now, we can add a test:

@SpringBootTest
@Import(TickerFluxMock.class)
@DisplayName("Simple strategy test")
public class MyFirstStrategyUnitTest {
    @Autowired
    private MyFirstStrategy strategy;
    private final Logger logger = LoggerFactory.getLogger(MyFirstStrategyUnitTest.class);
    @Autowired
    private TickerFluxMock tickerFluxMock;
    @Test
    @DisplayName("Check gains")
    public void whenTickersArrives_thenCheckGains() {
        await().forever().until(() -> tickerFluxMock.isFluxDone());
        HashMap<CurrencyDTO, GainDTO> gains = strategy.getGains();
        logger.info("Cumulated gains:");
        gains.forEach((currency, gain) -> logger.info(currency + " : " + gain.getAmount()));
        logger.info("Position still opened :");
        strategy.getPositions()
          .values()
          .stream()
          .filter(p -> p.getStatus().equals(OPENED))
          .forEach(p -> logger.info(" - {}", p.getDescription()));
        assertTrue(gains.get(USDT).getPercentage() > 0);
    }
    
}

The @Import of TickerFluxMock will load the historical data from our src/test/resources folder and send it to our strategy. Then we use the await() method to make sure all tickers loaded from the files have been sent to our strategy. We finish by displaying the cumulated gains and the positions still open, and assert that the global gain in USDT is positive.

9. Conclusion

This tutorial illustrated how to create a strategy interacting with a crypto exchange and test it against historical data.

Of course, our algorithm was straightforward; in real life, the goal is to find a promising technology, a good algorithm, and good data to know when we can create a position. We can, for example, use technical analysis as Cassandre integrates ta4j.

All the code of this article is available over on GitHub.


Converting String to BigDecimal in Java


1. Overview

In this tutorial, we'll cover many ways of converting String to BigDecimal in Java.

2. BigDecimal

BigDecimal represents an immutable arbitrary-precision signed decimal number. It consists of two parts:

  • Unscaled value – an arbitrary precision integer
  • Scale – a 32-bit integer representing the number of digits to the right of the decimal point

For example, the BigDecimal 3.14 has an unscaled value of 314 and a scale of 2.

If zero or positive, the scale is the number of digits to the right of the decimal point.

If negative, the unscaled value of the number is multiplied by ten to the power of the negation of the scale. Therefore, the value of the number represented by the BigDecimal is (unscaledValue × 10^-scale).
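
We can check the two parts of our 3.14 example directly:

BigDecimal pi = new BigDecimal("3.14");
assertEquals(BigInteger.valueOf(314), pi.unscaledValue());
assertEquals(2, pi.scale());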

The BigDecimal class in Java provides operations for basic arithmetic, scale manipulation, comparison, format conversion, and hashing.

Moreover, we use BigDecimal for high-precision arithmetic, calculations requiring control over the scale, and rounding off behavior. One such example is calculations involving financial transactions.

We can convert a String into BigDecimal in Java using one of the below methods:

  • BigDecimal(String) constructor
  • BigDecimal.valueOf() method
  • DecimalFormat.parse() method

Let's discuss all of them below.

3. BigDecimal(String)

The easiest way to convert String to BigDecimal in Java is to use BigDecimal(String) constructor:

BigDecimal bigDecimal = new BigDecimal("123");
assertEquals(new BigDecimal(123), bigDecimal);

4. BigDecimal.valueOf()

We can also convert String to BigDecimal by using the BigDecimal.valueOf(double) method.

It is a two-step process. The first step is to convert the String to Double. The second step is to convert Double to BigDecimal:

BigDecimal bigDecimal = BigDecimal.valueOf(Double.valueOf("123.42"));
assertEquals(new BigDecimal(123.42).setScale(2, BigDecimal.ROUND_HALF_UP), bigDecimal);
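
It's worth remembering that routing a conversion through a double can subtly change the value, which is one reason to prefer the String-based constructor. A quick illustration:

// 0.1 has no exact binary representation, so the double-based
// constructor captures the nearest representable value instead
BigDecimal fromDouble = new BigDecimal(0.1);   // 0.1000000000000000055511151231257827...
BigDecimal fromString = new BigDecimal("0.1"); // exactly 0.1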

5. DecimalFormat.parse()

When a String representing a value has a more complex format, we can use a DecimalFormat.

For example, we can convert a decimal-based long value without removing non-numeric symbols:

BigDecimal bigDecimal = new BigDecimal(10692467440017.111).setScale(3, BigDecimal.ROUND_HALF_UP);
DecimalFormatSymbols symbols = new DecimalFormatSymbols();
symbols.setGroupingSeparator(',');
symbols.setDecimalSeparator('.');
String pattern = "#,##0.0#";
DecimalFormat decimalFormat = new DecimalFormat(pattern, symbols);
decimalFormat.setParseBigDecimal(true);
// parse the string value
BigDecimal parsedStringValue = (BigDecimal) decimalFormat.parse("10,692,467,440,017.111");
assertEquals(bigDecimal, parsedStringValue);

The DecimalFormat.parse method returns a Number; because we called setParseBigDecimal(true), the returned instance is actually a BigDecimal, which is why the cast is safe.

Usually, the DecimalFormat is more advanced than we require. Thus, we should favor the new BigDecimal(String) or the BigDecimal.valueOf() instead.

6. Invalid Conversions

Java provides generic exceptions for handling invalid numeric Strings.

Notably, new BigDecimal(String), BigDecimal.valueOf(), and DecimalFormat.parse throw a NullPointerException when we pass null:

@Test(expected = NullPointerException.class)
public void givenNullString_WhenBigDecimalObjectWithStringParameter_ThenNullPointerExceptionIsThrown() {
    String bigDecimal = null;
    new BigDecimal(bigDecimal);
}
@Test(expected = NullPointerException.class)
public void givenNullString_WhenValueOfDoubleFromString_ThenNullPointerExceptionIsThrown() {
    BigDecimal.valueOf(Double.valueOf(null));
}
@Test(expected = NullPointerException.class)
public void givenNullString_WhenDecimalFormatOfString_ThenNullPointerExceptionIsThrown()
  throws ParseException {
    new DecimalFormat("#").parse(null);
}

Likewise, new BigDecimal(String) and BigDecimal.valueOf() throw a NumberFormatException when we pass an invalid String that cannot be parsed to a BigDecimal (such as &):

@Test(expected = NumberFormatException.class)
public void givenInvalidString_WhenBigDecimalObjectWithStringParameter_ThenNumberFormatExceptionIsThrown() {
    new BigDecimal("&");
}
@Test(expected = NumberFormatException.class)
public void givenInvalidString_WhenValueOfDoubleFromString_ThenNumberFormatExceptionIsThrown() {
    BigDecimal.valueOf(Double.valueOf("&"));
}

Lastly, DecimalFormat.parse throws a ParseException when we pass an invalid String:

@Test(expected = ParseException.class)
public void givenInvalidString_WhenDecimalFormatOfString_ThenParseExceptionIsThrown()
  throws ParseException {
    new DecimalFormat("#").parse("&");
}

7. Conclusion

In conclusion, Java provides us with multiple methods to convert a String to a BigDecimal.

In general, we recommend using the new BigDecimal(String) constructor for converting a String to a BigDecimal in Java.

As always, the code used in this article can be found over on GitHub.


How to Implement a Soft Delete with Spring JPA


1. Introduction

Physically deleting data from a table is a common requirement when interacting with databases. But sometimes business requirements dictate that data should not be permanently deleted from the database. Such requirements arise, for example, from the need for data history tracking or auditing, and from referential integrity concerns.

Instead of physically deleting the data, we can just hide that data so that it can't be accessed from the application front-end.

In this tutorial, we'll learn about soft delete and how to implement this technique with Spring JPA.

2. What Is Soft Delete?

Soft delete performs an update process to mark some data as deleted instead of physically deleting it from a table in the database. A common way to implement soft delete is to add a field that will indicate whether a data has been deleted or not.

For example, let's suppose we have a product table with id, name, and price columns.

Let's now look at the SQL command we'll run when physically deleting a record from the table:

delete from table_product where id=1

This SQL command will permanently remove the product with id=1 from the table in the database.

Let's now implement the soft delete mechanism described above by adding a new field called deleted to the table. This field will contain the values 0 or 1.

The value 1 will indicate the data has been deleted and 0 will indicate the data has not been deleted. We should set 0 as the default value, and for every data deletion process, we don't run the SQL delete command, but the following SQL update command instead:

update table_product set deleted=1 where id=1

Using this SQL command, we didn't actually delete the row, but only marked it as deleted. So, when we perform a read query and only want the rows that have not been deleted, we simply add a filter to our SQL query:

select * from table_product where deleted=0

3. How to Implement Soft Delete in Spring JPA

With Spring JPA the implementation of soft delete has become much easier. We'll only need a few JPA annotations for this purpose.

As we know, we generally use only a few SQL commands with JPA. It will create and execute the majority of the SQL queries behind the scenes.

Let's now implement the soft delete in Spring JPA with the same table example as above.

3.1. Entity Class

The most important part is creating the entity class.

Let's create a Product entity class:

@Entity
@Table(name = "table_product")
public class Product {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    private String name;
    private double price;
    private boolean deleted = Boolean.FALSE;
    // setter getter methods
}

As we can see, we've added a deleted property with the default value set as FALSE.

The next step will be to override the delete command in the JPA repository.

By default, the delete command in the JPA repository will run a SQL delete query, so let's first add some annotations to our entity class:

@Entity
@Table(name = "table_product")
@SQLDelete(sql = "UPDATE table_product SET deleted = true WHERE id=?")
@Where(clause = "deleted=false")
public class Product {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    private String name;
    private double price;
    private boolean deleted = Boolean.FALSE;
   
    // setter getter method
}

We're using the @SQLDelete annotation to override the delete command. Every time we execute the delete command, it is actually turned into a SQL update that sets the deleted field value to true instead of deleting the data permanently.

The @Where annotation, on the other hand, will add a filter when we read the product data. So, according to the code example above, product data with the value deleted = true won't be included within the results.

3.2. Repository

There are no special changes in the repository class, we can write it like a normal repository class in the Spring Boot application:

public interface ProductRepository extends CrudRepository<Product, Long>{
    
}

3.3. Service

Also for the service class, there is nothing special yet. We can call the functions from the repository that we want.

In this example, let's call three repository functions to create a record, and then perform a soft delete:

@Service
public class ProductService {
    
    @Autowired
    private ProductRepository productRepository;
    public Product create(Product product) {
        return productRepository.save(product);
    }
    public void remove(Long id){
        productRepository.deleteById(id);
    }
    public Iterable<Product> findAll(){
        return productRepository.findAll();
    }
}
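
To see the mechanism end to end, here's a hedged test sketch; the setter calls and the assertion are illustrative and assume an otherwise empty table:

@SpringBootTest
public class ProductSoftDeleteUnitTest {
    @Autowired
    private ProductService productService;
    @Test
    public void givenDeletedProduct_whenFindAll_thenItIsHidden() {
        Product product = new Product();
        product.setName("Laptop");
        product.setPrice(1200.0);
        product = productService.create(product);
        productService.remove(product.getId());
        // the @Where clause filters out rows with deleted = true
        assertFalse(productService.findAll().iterator().hasNext());
    }
}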

4. How to Get the Deleted Data?

By using the @Where annotation, we can't retrieve deleted product data, which is a problem if we still want that data to be accessible. An example is an administrator-level user with full access who can view the data that has been “deleted”.

To implement this, we shouldn't use the @Where annotation, but two different annotations instead: @FilterDef and @Filter. With these annotations, we can dynamically add conditions as needed:

@Entity
@Table(name = "tbl_products")
@SQLDelete(sql = "UPDATE tbl_products SET deleted = true WHERE id=?")
@FilterDef(name = "deletedProductFilter", parameters = @ParamDef(name = "isDeleted", type = "boolean"))
@Filter(name = "deletedProductFilter", condition = "deleted = :isDeleted")
public class Product {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    private String name;
    private double price;
    private boolean deleted = Boolean.FALSE;
}

Here, the @FilterDef annotation defines the basic requirements that the @Filter annotation will use. Furthermore, we also need to change the findAll() function in the ProductService class to handle dynamic parameters or filters:

@Service
public class ProductService {
    
    @Autowired
    private ProductRepository productRepository;
    @Autowired
    private EntityManager entityManager;
    public Product create(Product product) {
        return productRepository.save(product);
    }
    public void remove(Long id){
        productRepository.deleteById(id);
    }
    public Iterable<Product> findAll(boolean isDeleted){
        Session session = entityManager.unwrap(Session.class);
        Filter filter = session.enableFilter("deletedProductFilter");
        filter.setParameter("isDeleted", isDeleted);
        Iterable<Product> products =  productRepository.findAll();
        session.disableFilter("deletedProductFilter");
        return products;
    }
}

Here we add the isDeleted parameter, which we pass to the Filter object, affecting how the Product entity is read.

5. Conclusion

It's easy to implement soft delete techniques using Spring JPA. All we need to do is define a field that stores whether a row has been deleted. Then we override the delete command with the @SQLDelete annotation on that particular entity class.

If we want more control we can use the @FilterDef and @Filter annotations so we can determine if query results should include deleted data or not.

All the code in this article is available over on GitHub.


How to Display a Message in Maven


1. Overview

Sometimes, we might want to print some extra information during Maven's execution. However, there's no built-in way to output values to the console in the Maven build lifecycles.

In this tutorial, we'll explore plugins that enable printing messages during Maven execution. We'll discuss three different plugins, each of which can be bound to a specific Maven phase of our choosing.

2. AntRun Plugin

First, we'll discuss the AntRun plugin. It provides the ability to run Ant tasks from within Maven. To make use of the plugin in our project, we need to add the maven-antrun-plugin to our pom.xml:

<plugins>
    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-antrun-plugin</artifactId>
        <version>3.0.0</version>
    </plugin>
</plugins>

Let's define the goal and phase in the execution tag. Moreover, we'll add the configuration tag that holds the target with echo messages:

<executions>
    <execution>
        <id>antrun-plugin</id>
        <phase>validate</phase>
        <goals>
            <goal>run</goal>
        </goals>
        <configuration>
            <target>
                <echo message="Hello, world"/>
                <echo message="Embed a line break: ${line.separator}"/>
                <echo message="Build dir: ${project.build.directory}" level="info"/>
                <echo file="${basedir}/logs/log-ant-run.txt" append="true" message="Save to file!"/>
            </target>
        </configuration>
    </execution>
</executions>

We can print regular strings as well as property values. The echo tags send messages to the current loggers and listeners, which correspond to System.out unless overridden. We can also specify a level, which tells the plugin at what logging level it should filter the message.

The task can also echo to a file. We can either append to the file or overwrite it by setting the append attribute to true or false, respectively. If we choose to log to a file, we should omit the logging level; only messages with the file attribute set will be written to the file.
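
Since we bound the run goal to the validate phase, any build that reaches that phase will print the messages. For example, we can trigger it directly with:

mvn validate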

3. Echo Maven Plugin

If we don't want to use a plugin based on Ant, we can instead add the echo-maven-plugin dependency to our pom.xml:

<plugin>
    <groupId>com.github.ekryd.echo-maven-plugin</groupId>
    <artifactId>echo-maven-plugin</artifactId>
    <version>1.3.2</version>
</plugin>

Like we saw in the previous plugin example, we'll declare the goal and phase in the execution tag. Next, we'll fill the configuration tag:

<executions>
    <execution>
        <id>echo-maven-plugin-1</id>
        <phase>package</phase>
        <goals>
            <goal>echo</goal>
        </goals>
        <configuration>
            <message>
                Hello, world
                Embed a line break: ${line.separator}
                ArtifactId is ${project.artifactId}
            </message>
            <level>INFO</level>
            <toFile>/logs/log-echo.txt</toFile>
            <append>true</append>
        </configuration>
    </execution>
</executions>

Similarly, we can print simple strings and properties. We can also set the log level using the level tag. Using the toFile tag, we can indicate the path to the file to which the logs will be saved. Finally, if we want to print multiple messages, we should add a separate execution tag for each one.

4. Groovy Maven Plugin

To use groovy-maven-plugin, we have to put the dependency in our pom.xml:

<plugin>
    <groupId>org.codehaus.gmaven</groupId>
    <artifactId>groovy-maven-plugin</artifactId>
    <version>2.1.1</version>
</plugin>

Further, let's add phase and goal in the execution tag. Next, we'll put the source tag in the configuration section. It contains Groovy code:

<executions>
    <execution>
        <phase>validate</phase>
        <goals>
            <goal>execute</goal>
        </goals>
        <configuration>
            <source>
                log.info('Test message: {}', 'Hello, World!')
                log.info('Embed a line break {}', System.lineSeparator())
                log.info('ArtifactId is: ${project.artifactId}')
                log.warn('Message only in debug mode')
            </source>
        </configuration>
    </execution>
</executions>

Similar to the previous solutions, the Groovy logger allows us to set the logging level. From the code level, we can also easily access Maven properties. Further, we can write messages to a file using the Groovy script.

Thanks to Groovy scripts, we can add more complex logic to the messages. Groovy scripts can also be loaded from a file, so we don't have to clutter our pom.xml with long, inline scripts.
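
For instance, assuming a script file exists under src/main/script (the path here is illustrative), the source tag can point at the file instead of holding inline code:

<configuration>
    <source>${project.basedir}/src/main/script/print-info.groovy</source>
</configuration>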

5. Conclusion

In this quick tutorial, we saw how to print using various plugins. We described how to print using maven-antrun-plugin, echo-maven-plugin, and groovy-maven-plugin. In addition, we covered several use cases.

Finally, all the source code for the article is available over on GitHub.


Multipart Request Handling in Spring


1. Introduction

In this tutorial, we'll focus on various mechanisms for sending multipart requests in Spring Boot. Multipart requests consist of sending data of various types, separated by a boundary, as part of a single HTTP method call.

Generally, we can send complicated JSON, XML, or CSV data as well as transfer multipart file(s) in this request. Examples of multipart files can be audio or an image file. Equally, we can also send simple key/value pair data with the multipart file(s) as a multipart request.

Let's look into various ways we can send this data.

2. Using @ModelAttribute

Let's consider a simple use case of sending an employee's data consisting of a name and a file using a form.

First, let's create an Employee abstraction to store the form data:

public class Employee {
    private String name;
    private MultipartFile document;
}

Next, let's generate the form using Thymeleaf:

<form action="#" th:action="@{/employee}" th:object="${employee}" method="post" enctype="multipart/form-data">
    <p>name: <input type="text" th:field="*{name}" /></p>
    <p>document:<input type="file" th:field="*{document}" multiple="multiple"/>
    <input type="submit" value="upload" />
    <input type="reset" value="Reset" /></p>
</form>

The important thing to note is that we declare the enctype as multipart/form-data in the view.

Finally, we'll create a method that accepts the form data, including the multipart file:

@RequestMapping(path = "/employee", method = POST, consumes = { MediaType.MULTIPART_FORM_DATA_VALUE })
public String saveEmployee(@ModelAttribute Employee employee) {
    employeeService.save(employee);
    return "employee/success";
}

Here, the two particularly important details are:

  • consumes attribute value is set to multipart/form-data
  • @ModelAttribute has captured all the form data into the Employee POJO, including the uploaded file

3. Using @RequestPart

This annotation associates a part of a multipart request with the method argument, which is useful for sending complex multi-attribute data as payload, e.g., JSON or XML.

Let's create a method with two arguments, first of type Employee and second as MultipartFile. Furthermore, we'll annotate both of these arguments with @RequestPart:

@RequestMapping(path = "/requestpart/employee", method = POST, consumes = { MediaType.MULTIPART_FORM_DATA_VALUE })
public ResponseEntity<Object> saveEmployee(@RequestPart Employee employee, @RequestPart MultipartFile document) {
    employee.setDocument(document);
    employeeService.save(employee);
    return ResponseEntity.ok().build();
}

Now, to see this annotation in action, let's create the test using MockMultipartFile:

@Test
public void givenEmployeeJsonAndMultipartFile_whenPostWithRequestPart_thenReturnsOK() throws Exception {
    MockMultipartFile employeeJson = new MockMultipartFile("employee", null,
      "application/json", "{\"name\": \"Emp Name\"}".getBytes());
    mockMvc.perform(multipart("/requestpart/employee")
      .file(A_FILE)
      .file(employeeJson))
      .andExpect(status().isOk());
}

Above, the important thing to note is that we've set the content type of the Employee part to application/json. Also, we're sending this data as a JSON part in addition to the multipart file.

Further details on how to test multipart requests can be found here.
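
Outside of tests, we can exercise the same endpoint with curl; a hedged example, assuming the application runs locally on port 8080 and a document.txt file exists in the current directory:

curl -X POST http://localhost:8080/requestpart/employee \
  -F 'employee={"name": "Emp Name"};type=application/json' \
  -F 'document=@document.txt'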

4. Using @RequestParam

Another way of sending multipart data is to use @RequestParam. This is especially useful for simple data, which is sent as key/value pairs along with the file:

@RequestMapping(path = "/requestparam/employee", method = POST, consumes = { MediaType.MULTIPART_FORM_DATA_VALUE })
public ResponseEntity<Object> saveEmployee(@RequestParam String name, @RequestPart MultipartFile document) {
    Employee employee = new Employee(name, document);
    employeeService.save(employee);
    return ResponseEntity.ok().build();
}

Let's write the test for this method to demonstrate:

@Test
public void givenRequestPartAndRequestParam_whenPost_thenReturns200OK() throws Exception {
    mockMvc.perform(multipart("/requestparam/employee")
      .file(A_FILE)
      .param("name", "testname"))
      .andExpect(status().isOk());
}

5. Conclusion

In this article, we looked at how to effectively handle multipart requests in Spring Boot.

Initially, we sent multipart form data using a model attribute. Then we looked at how to separately receive multipart data using @RequestPart and @RequestParam annotations.

As always, the full source code is available over on GitHub.


Java Weekly, Issue 386


1. Spring and Java

>> Exploring ZooKeeper-less Kafka [morling.dev]

Meet KIP-500 in action – simplified configuration, better scalability, and less operational overhead by removing the ZooKeeper dependency. Interesting.

>> JEP 406: Pattern Matching for switch (Preview) [openjdk.java.net]

Enhancing switch expressions in Java 17 by adding type patterns, null refinements, and pattern guards. Very nice.

>> Remote Recording Stream [egahlin.github.io]

JFR event streaming in practice: a few practical examples to stream and subscribe to different JFR events.


2. Technical and Musings

>> A new era of DevOps, powered by machine learning [allthingsdistributed.com]

How to treat DevOps, and the toolchains around it, as a data science problem – very interesting, and definitely promising.


3. Comics

And my favorite Dilberts of the week:

>> CEO Wants To Get Involved In Politics [dilbert.com]

>> Wally Works At Home Unsafely [dilbert.com]

>> Universe Preparing Problems [dilbert.com]

4. Pick of the Week

>> Solutions Architect Tips — The 5 Types of Architecture Diagrams [betterprogramming.pub]


JVM Parameters InitialRAMPercentage, MinRAMPercentage, and MaxRAMPercentage


1. Overview

In this tutorial, we'll discuss a few JVM parameters we can use to set the RAM percentage of the JVM.

Introduced in Java 8, the parameters InitialRAMPercentage, MinRAMPercentage, and MaxRAMPercentage help to configure the heap size of a Java application.

2. -XX:InitialRAMPercentage

The InitialRAMPercentage JVM parameter allows us to configure the initial heap size of the Java application. It's a percentage of the total memory of a physical server or container, passed as a double value.

For instance, if we set -XX:InitialRAMPercentage=50.0 for a physical server with 1 GB of total memory, then the initial heap size will be around 500 MB (50% of 1 GB).

To start with, let's check the default value of InitialRAMPercentage in the JVM:

$ docker run openjdk:8 java -XX:+PrintFlagsFinal -version | grep -E "InitialRAMPercentage"
   double InitialRAMPercentage                      = 1.562500                            {product}
openjdk version "1.8.0_292"
OpenJDK Runtime Environment (build 1.8.0_292-b10)

Then, let's set the initial heap size of 50% for a JVM:

$ docker run -m 1GB openjdk:8 java -XX:InitialRAMPercentage=50.0 -XX:+PrintFlagsFinal -version | grep -E "InitialRAMPercentage"
   double InitialRAMPercentage                     := 50.000000                           {product}
openjdk version "1.8.0_292"
OpenJDK Runtime Environment (build 1.8.0_292-b10)

It's important to note that the JVM ignores InitialRAMPercentage when we configure the -Xms option.
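
We can verify this by passing both flags; the reported InitialHeapSize should then follow -Xms rather than the percentage (output omitted here):

$ docker run -m 1GB openjdk:8 java -Xms512m -XX:InitialRAMPercentage=50.0 -XX:+PrintFlagsFinal -version | grep -E "InitialHeapSize"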

3. -XX:MinRAMPercentage

The MinRAMPercentage parameter, despite its name, allows setting the maximum heap size for a JVM running with a small amount of memory (less than 200 MB).

First, we'll explore the default value of the MinRAMPercentage:

$ docker run openjdk:8 java -XX:+PrintFlagsFinal -version | grep -E "MinRAMPercentage"
   double MinRAMPercentage                      = 50.000000                            {product}
openjdk version "1.8.0_292"
OpenJDK Runtime Environment (build 1.8.0_292-b10)

Then, let's use the parameter to set the maximum heap size for a JVM with a total memory of 100MB:

$ docker run -m 100MB openjdk:8 java -XX:MinRAMPercentage=80.0 -XshowSettings:VM -version
VM settings:
    Max. Heap Size (Estimated): 77.38M
    Ergonomics Machine Class: server
    Using VM: OpenJDK 64-Bit Server VM
openjdk version "1.8.0_292"
OpenJDK Runtime Environment (build 1.8.0_292-b10)

Also, the JVM ignores the MaxRAMPercentage parameter while setting the maximum heap size for a small memory server/container:

$ docker run -m 100MB openjdk:8 java -XX:MinRAMPercentage=80.0 -XX:MaxRAMPercentage=50.0 -XshowSettings:vm -version
VM settings:
    Max. Heap Size (Estimated): 77.38M
    Ergonomics Machine Class: server
    Using VM: OpenJDK 64-Bit Server VM
openjdk version "1.8.0_292"
OpenJDK Runtime Environment (build 1.8.0_292-b10)

4. -XX:MaxRAMPercentage

The MaxRAMPercentage parameter allows setting the maximum heap size for a JVM running with a large amount of memory (greater than 200 MB).

First, let's explore the default value of the MaxRAMPercentage:

$ docker run openjdk:8 java -XX:+PrintFlagsFinal -version | grep -E "MaxRAMPercentage"
   double MaxRAMPercentage                      = 25.000000                            {product}
openjdk version "1.8.0_292"
OpenJDK Runtime Environment (build 1.8.0_292-b10)

Then, we can use the parameter to set the maximum heap size to 60% for a JVM with 500 MB total memory:

$ docker run -m 500MB openjdk:8 java -XX:MaxRAMPercentage=60.0 -XshowSettings:vm -version
VM settings:
    Max. Heap Size (Estimated): 290.00M
    Ergonomics Machine Class: server
    Using VM: OpenJDK 64-Bit Server VM
openjdk version "1.8.0_292"
OpenJDK Runtime Environment (build 1.8.0_292-b10)

Similarly, the JVM ignores the MinRAMPercentage parameter for a large memory server/container:

$ docker run -m 500MB openjdk:8 java -XX:MaxRAMPercentage=60.0 -XX:MinRAMPercentage=30.0 -XshowSettings:vm -version
VM settings:
    Max. Heap Size (Estimated): 290.00M
    Ergonomics Machine Class: server
    Using VM: OpenJDK 64-Bit Server VM
openjdk version "1.8.0_292"
OpenJDK Runtime Environment (build 1.8.0_292-b10)

5. Conclusion

In this short article, we discussed the use of the JVM parameters InitialRAMPercentage, MinRAMPercentage, and MaxRAMPercentage for setting the RAM percentages that the JVM will use for the heap.

First, we checked the default values of the flags set on the JVM. Then, we used the JVM parameters to set the initial and maximum heap sizes.


Deserialization Vulnerabilities in Java


1. Overview

In this tutorial, we'll explore how an attacker can use deserialization in Java code to exploit a system.

We'll start by looking at some different approaches an attacker might use to exploit a system. Then, we'll examine the implications of a successful attack. Finally, we'll cover some best practices that help avoid these types of attacks.

2. Deserialization Vulnerabilities

Java uses deserialization widely to create objects from input sources.

These input sources are byte-streams and come in a variety of formats (some standard forms include JSON and XML). Deserialization is used both by legitimate system functionality and for communication with trusted sources across networks. However, untrusted or malicious byte-streams can exploit vulnerable deserialization code.

Our previous article on Java Serialization covers how serialization and deserialization work in greater depth.

2.1. Attack Vector

Let's discuss how an attacker might use deserialization to exploit a system.

For a class to be serializable, it must implement the Serializable interface. Classes that implement Serializable can define the readObject and writeObject methods. These methods deserialize and serialize object instances of the class, respectively.

A typical implementation of this may look like this:

public class Thing implements Serializable {
    private static final long serialVersionUID = 0L;
    // Class fields
    private void readObject(ObjectInputStream ois) throws ClassNotFoundException, IOException {
        ois.defaultReadObject();
        // Custom attribute setting
    }
    private void writeObject(ObjectOutputStream oos) throws IOException {
        oos.defaultWriteObject(); 
        // Custom attribute getting
    }
}

Classes become vulnerable when they have generic or loosely defined fields and use reflection to set attributes on these fields:

public class BadThing implements Serializable {
    private static final long serialVersionUID = 0L;
    Object looselyDefinedThing;
    String methodName;
    private void readObject(ObjectInputStream ois) throws ClassNotFoundException, IOException {
        ois.defaultReadObject();
        try {
            Method method = looselyDefinedThing.getClass().getMethod(methodName);
            method.invoke(looselyDefinedThing);
        } catch (Exception e) {
            // handle error...
        }
    }
    // ...
}

Let's break down the above to see what is happening.

Firstly, our class BadThing has a field looselyDefinedThing which is of type Object. This is vague and allows an attacker to make this field any type that is available on the classpath.

Next, what makes this class vulnerable is that the readObject method contains custom code that invokes a method on looselyDefinedThing. The method to invoke is looked up via reflection using the field methodName (which the attacker can also control).

The above code is equivalent to the following in execution if the class MyCustomAttackObject is on the system's classpath:

BadThing badThing = new BadThing();
badThing.looselyDefinedThing = new MyCustomAttackObject();
badThing.methodName = "methodThatTriggersAttack";
Method method = badThing.looselyDefinedThing.getClass().getMethod(badThing.methodName);
method.invoke(badThing.looselyDefinedThing);
public class MyCustomAttackObject implements Serializable {
    public static void methodThatTriggersAttack() {
        try {
            Runtime.getRuntime().exec("echo \"Oh, no! I've been hacked\"");
        } catch (IOException e) {
            // handle error...
        }
    }
}

By using the MyCustomAttackObject class, the attacker has been able to execute a command on the host machine.

This particular command is harmless. However, if this method were able to take custom commands, the possibilities of what an attacker can achieve are limitless.

The question that still stands is, “why would anyone have such a class on their classpath in the first place?”.

Classes that allow an attacker to execute malicious code exist widely throughout open source and third-party libraries that are used by many frameworks and software. They are often not as simple as the above example; instead, they chain multiple classes and reflection calls to execute similar commands.

Using multiple classes in this way is often referred to as a gadget chain. The open-source tool ysoserial maintains an active list of gadget chains that can be used in an attack.

2.2. Implications

Now that we know how an attacker might gain access to remote command execution let's discuss some of the implications of what an attacker may be able to achieve on our system.

Depending on the level of access that the user running the JVM has, the attacker may already have heightened privileges on the machine, which would allow them to access most files across the system and steal information.

Some deserialization exploits allow an attacker to execute custom Java code that could lead to denial of service attacks, stealing of user session or unauthorized access to resources.

As each deserialization vulnerability is different and each system setup is different, what an attacker can achieve varies widely. For this reason, vulnerability databases consider deserialization vulnerabilities high risk.

3. Best Practices for Prevention

Now that we have covered how our system might be exploited, we'll touch on some best practices that can be followed to help prevent this type of attack and limit the scope of potential exploits.

Note, there is no silver bullet in exploit prevention, and this section is not an exhaustive list of all preventative measures (a concrete filtering sketch follows the list):

  • We should keep open source libraries up to date and prioritize updating to the latest version of a library when available
  • Actively check vulnerability databases such as the National Vulnerability Database or CVE Mitre (to name a few) for newly declared vulnerabilities and make sure that we are not exposed
  • Verify the source of the input byte-stream for deserialization (use secure connections, verify the user, etc.)
  • If the input has come from a user input field, make sure to validate the field and authorize the user before deserializing
  • Follow the OWASP cheat sheet for deserialization when creating custom deserialization code
  • Limit what the JVM can access on the host machine to reduce the scope of what an attacker can do if they are able to exploit our system
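In addition, on Java 9 and later (and some later Java 8 updates via the jdk.serialFilter property), we can install a serialization filter that restricts which classes may be deserialized. Here's a minimal sketch, assuming we only trust classes from our own package (the package name is just an example):

public Object readSafely(InputStream untrustedInput) throws IOException, ClassNotFoundException {
    // allow only classes from our own package and reject everything else
    ObjectInputFilter filter = ObjectInputFilter.Config.createFilter("com.baeldung.*;!*");
    try (ObjectInputStream ois = new ObjectInputStream(untrustedInput)) {
        ois.setObjectInputFilter(filter);
        return ois.readObject();
    }
}

Any rejected class then fails fast with an InvalidClassException before its readObject code ever runs.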

4. Conclusion

In this article, we've covered how an attacker may use deserialization to exploit a vulnerable system. In addition, we have covered some practices to maintain good security hygiene in a Java system.

As always, the source code is available over on GitHub.


AliasFor Annotation in Spring


1. Overview

In this tutorial, we'll learn about the @AliasFor annotation in Spring.

First, we'll see examples from within the framework where it's in use. Next, we'll look at a few customized examples.

2. The Annotation

@AliasFor has been part of the framework since version 4.2, and several core Spring annotations have since been updated to include it.

We can use it to decorate attributes either within a single annotation or in an annotation composed from a meta-annotation. Namely, a meta-annotation is an annotation that can be applied to another one.

In the same annotation, we use @AliasFor to declare aliases for attributes so that we can apply them interchangeably. Alternatively, we can use it in a composed annotation to override an attribute in its meta-annotation. In other words, when we decorate an attribute in a composed annotation with @AliasFor, it overrides the specified attribute in its meta-annotation.

Interestingly, many core Spring annotations such as @Bean, @ComponentScan, @Scope, @RequestMapping, and @RestController now use @AliasFor to configure their internal attribute aliases.

Here's the definition of the annotation:

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@Documented
public @interface AliasFor {
    @AliasFor("attribute")
    String value() default "";
    
    @AliasFor("value")
    String attribute() default "";
    Class<? extends Annotation> annotation() default Annotation.class;
}

Importantly, we can use this annotation implicitly as well as explicitly. Implicit usage is restricted to aliases within an annotation. In comparison, explicit usage can also be made for an attribute in a meta-annotation.

We'll see this in detail with examples in the following sections.

3. Explicit Aliases Within an Annotation

Let's consider a core Spring annotation, @ComponentScan, to understand explicit aliases within a single annotation:

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@Documented
@Repeatable(ComponentScans.class)
public @interface ComponentScan {
    @AliasFor("basePackages")
    String[] value() default {};
    @AliasFor("value")
    String[] basePackages() default {};
...
}

As we can see, value is defined here explicitly as an alias for basePackages, and vice-versa. This means we can use them interchangeably.

So effectively, these two usages are similar:

@ComponentScan(basePackages = "com.baeldung.aliasfor")
@ComponentScan(value = "com.baeldung.aliasfor")

Furthermore, since the two attributes are also marked as default, let's write this more concisely:

@ComponentScan("com.baeldung.aliasfor")

Also, there are a few implementation requirements that Spring mandates for this scenario. First, the aliased attributes must declare the same default value. Additionally, they must have the same return type. If we violate either of these constraints, the framework throws an AnnotationConfigurationException.
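To illustrate, here's a hypothetical annotation that breaks the first rule by declaring different defaults for an aliased pair; as soon as Spring processes it, we'd get an AnnotationConfigurationException:

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface BrokenScan {
    @AliasFor("packages")
    String[] value() default {};

    // invalid: the default value differs from its alias
    @AliasFor("value")
    String[] packages() default {"com.baeldung"};
}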

4. Explicit Aliases for Attribute in Meta-Annotation

Next, let's see an example of a meta-annotation and create a composed annotation from it. Then, we'll see the explicit usage of aliases in the custom one.

First, let's consider the framework annotation RequestMapping as our meta-annotation:

@Target({ElementType.TYPE, ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Mapping
public @interface RequestMapping {
    String name() default "";
    
    @AliasFor("path")
    String[] value() default {};
    @AliasFor("value")
    String[] path() default {};
    RequestMethod[] method() default {};
    ...
}

Next, we'll create a composed annotation MyMapping from it:

@Target({ElementType.METHOD, ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@RequestMapping
public @interface MyMapping {
    @AliasFor(annotation = RequestMapping.class, attribute = "method")
    RequestMethod[] action() default {};
}

As we can see, in @MyMapping, action is an explicit alias for the attribute method in @RequestMapping. That is, action in our composed annotation overrides the method in the meta-annotation.

Similar to aliases within an annotation, meta-annotation attribute aliases must also have the same return type. For example, RequestMethod[] in our case. Furthermore, the attribute annotation should reference the meta-annotation as in our usage of annotation = RequestMapping.class.

To demonstrate, let's add a controller class called MyMappingController. We'll decorate its method with our custom annotation.

Specifically, here we'll add only two attributes, route and action, to @MyMapping:

@Controller
public class MyMappingController {
    @MyMapping(action = RequestMethod.PATCH, route = "/test")
    public void mappingMethod() {}
    
}

Finally, to see how explicit aliases behave, let's add a simple test:

@Test
public void givenComposedAnnotation_whenExplicitAlias_thenMetaAnnotationAttributeOverridden() {
    for (Method method : controllerClass.getMethods()) {
        if (method.isAnnotationPresent(MyMapping.class)) {
            MyMapping annotation = AnnotationUtils.findAnnotation(method, MyMapping.class);
            RequestMapping metaAnnotation = 
              AnnotationUtils.findAnnotation(method, RequestMapping.class);
            assertEquals(RequestMethod.PATCH, annotation.action()[0]);
            assertEquals(0, metaAnnotation.method().length);
        }
    }
}

As we can see, our custom annotation's attribute action has overridden the meta-annotation @RequestMapping's attribute method.

5. Implicit Aliases Within an Annotation

To understand this, let's add a few more aliases within our @MyMapping:

@AliasFor(annotation = RequestMapping.class, attribute = "path")
String[] value() default {};
@AliasFor(annotation = RequestMapping.class, attribute = "path")
String[] mapping() default {};
    
@AliasFor(annotation = RequestMapping.class, attribute = "path")
String[] route() default {};

In this situation, value, mapping, and route are explicit meta-annotation overrides for path in @RequestMapping. Therefore, they are also implicit aliases of each other. In other words, for @MyMapping, we can use these three attributes interchangeably.

To demonstrate this, we'll use the same controller as in the previous section. And here's another test:

@Test
public void givenComposedAnnotation_whenImplicitAlias_thenAttributesEqual() {
    for (Method method : controllerClass.getMethods()) {
        if (method.isAnnotationPresent(MyMapping.class)) {
            MyMapping annotationOnBean = 
              AnnotationUtils.findAnnotation(method, MyMapping.class);
            assertEquals(annotationOnBean.mapping()[0], annotationOnBean.route()[0]);
            assertEquals(annotationOnBean.value()[0], annotationOnBean.route()[0]);
        }
    }
}

Notably, we did not define the attributes value and mapping in the annotation on our controller method. However, they still implicitly carry the same value as route.

6. Conclusion

In this tutorial, we learned about the @AliasFor annotation in the Spring Framework. In our examples, we looked at explicit as well as implicit usage scenarios.

As always, source code is available over on GitHub.


Difference Between Super, Simplest, and Effective POM


1. Overview

In this short tutorial, we're going to review the differences between the super, simplest, and effective POM in Maven.

2. What is a POM?

POM stands for Project Object Model, and it is the core of a project's configuration in Maven. It is a single configuration XML file called pom.xml that contains the majority of the information required to build a project.

The role of a POM file is to describe the project, manage dependencies, and declare configuration details that help Maven to build the project.

3. Super POM

To understand super POM more easily, we can make an analogy with the Object class from Java: Every class from Java extends, by default, the Object class. Similarly, in the case of POM, every POM extends the super POM.

The super POM file defines all the default configurations. Hence, even the simplest form of a POM file will inherit all the configurations defined in the super POM file.

Depending on the Maven version we use, the super POM can look slightly different. For instance, if we have Maven installed on our machine, we can find it in the maven-model-builder-<version>.jar file under ${M2_HOME}/lib. If we open this JAR file, we'll find the POM under the name org/apache/maven/model/pom-4.0.0.xml.

In the next sections, we'll go through the super POM configuration elements for version 3.6.3.

3.1. Repositories

Maven uses the repositories defined under the repositories section to download all the dependent artifacts during a Maven build.

Let's take a look at an example:

<repositories>
    <repository>
        <id>central</id>
        <name>Central Repository</name>
        <url>https://repo.maven.apache.org/maven2</url>
        <layout>default</layout>
        <snapshots>
            <enabled>false</enabled>
        </snapshots>
    </repository>
</repositories>

3.2. Plugin Repositories

The default plugin repository is the central Maven repository. Let's look at how it's defined in the pluginRepository section:

<pluginRepositories>
    <pluginRepository>
        <id>central</id>
        <name>Central Repository</name>
        <url>https://repo.maven.apache.org/maven2</url>
        <layout>default</layout>
        <snapshots>
            <enabled>false</enabled>
        </snapshots>
        <releases>
            <updatePolicy>never</updatePolicy>
        </releases>
    </pluginRepository>
</pluginRepositories>

As we can see above, snapshots are disabled, and the updatePolicy is set to “never”. Therefore, with this configuration, Maven will never automatically update a plugin if a new version is released.

3.3. Build

The build configuration section includes all the information required to build a project.

Let's see an example of the default build section:

<build>
    <directory>${project.basedir}/target</directory>
    <outputDirectory>${project.build.directory}/classes</outputDirectory>
    <finalName>${project.artifactId}-${project.version}</finalName>
    <testOutputDirectory>${project.build.directory}/test-classes</testOutputDirectory>
    <sourceDirectory>${project.basedir}/src/main/java</sourceDirectory>
    <scriptSourceDirectory>${project.basedir}/src/main/scripts</scriptSourceDirectory>
    <testSourceDirectory>${project.basedir}/src/test/java</testSourceDirectory>
    <resources>
        <resource>
            <directory>${project.basedir}/src/main/resources</directory>
        </resource>
    </resources>
    <testResources>
        <testResource>
            <directory>${project.basedir}/src/test/resources</directory>
        </testResource>
    </testResources>
    <pluginManagement>
        <!-- NOTE: These plugins will be removed from future versions of the super POM -->
        <!-- They are kept for the moment as they are very unlikely to conflict
             with lifecycle mappings (MNG-4453) -->
        <plugins>
            <plugin>
                <artifactId>maven-antrun-plugin</artifactId>
                <version>1.3</version>
            </plugin>
            <plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <version>2.2-beta-5</version>
            </plugin>
            <plugin>
                <artifactId>maven-dependency-plugin</artifactId>
                <version>2.8</version>
            </plugin>
            <plugin>
                <artifactId>maven-release-plugin</artifactId>
                <version>2.5.3</version>
            </plugin>
        </plugins>
    </pluginManagement>
</build>

3.4. Reporting

For reporting, the super POM only provides a default value for the output directory:

<reporting>
    <outputDirectory>${project.build.directory}/site</outputDirectory>
</reporting>

3.5. Profiles

If we haven't defined any profiles at the application level, the default build profile will be executed.

The default profiles section looks like this:

<profiles>
    <!-- NOTE: The release profile will be removed from future versions of the super POM -->
    <profile>
        <id>release-profile</id>
        <activation>
            <property>
                <name>performRelease</name>
                <value>true</value>
            </property>
        </activation>
        <build>
            <plugins>
                <plugin>
                    <inherited>true</inherited>
                    <artifactId>maven-source-plugin</artifactId>
                    <executions>
                        <execution>
                            <id>attach-sources</id>
                            <goals>
                                <goal>jar-no-fork</goal>
                            </goals>
                        </execution>
                    </executions>
                </plugin>
                <plugin>
                    <inherited>true</inherited>
                    <artifactId>maven-javadoc-plugin</artifactId>
                    <executions>
                        <execution>
                            <id>attach-javadocs</id>
                            <goals>
                                <goal>jar</goal>
                            </goals>
                        </execution>
                    </executions>
                </plugin>
                <plugin>
                    <inherited>true</inherited>
                    <artifactId>maven-deploy-plugin</artifactId>
                    <configuration>
                        <updateReleaseInfo>true</updateReleaseInfo>
                    </configuration>
                </plugin>
            </plugins>
        </build>
    </profile>
</profiles>

4. Simplest POM

The simplest POM is the POM we declare in our Maven project. In order to declare a POM, we need to specify at least these four elements: modelVersion, groupId, artifactId, and version. The simplest POM will inherit all the configurations from the super POM.

Let's have a look at the minimum required elements for a Maven project:

<project>
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.baeldung</groupId>
    <artifactId>maven-pom-types</artifactId>
    <version>1.0-SNAPSHOT</version>
</project>

One main advantage of the POM hierarchy in Maven is that we can extend and override the configuration inherited from the top. Therefore, to override the configuration of a given element or an artifact in the POM hierarchy, Maven should be able to uniquely identify the corresponding artifact.
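As a quick illustration, here's a hedged sketch of our simplest POM overriding a single inherited default, the build finalName (the value is arbitrary):

<project>
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.baeldung</groupId>
    <artifactId>maven-pom-types</artifactId>
    <version>1.0-SNAPSHOT</version>
    <build>
        <!-- overrides the super POM default of ${project.artifactId}-${project.version} -->
        <finalName>maven-pom-types</finalName>
    </build>
</project>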

5. Effective POM

The effective POM combines all the default settings from the super POM file and the configuration defined in our application POM. Maven uses the default values for configuration elements whenever they are not overridden in the application pom.xml. Hence, if we take the same sample POM file from the simplest POM section, we'll see that the effective POM file is the merge between the simplest POM and the super POM. We can view it from the command line:

mvn help:effective-pom
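If we'd rather inspect the result in a file than scroll through console output, the Maven Help Plugin can also write it out for us (the file name here is just an example):

mvn help:effective-pom -Doutput=effective-pom.xml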

This is also the best way to see the default values that Maven uses.

6. Conclusion

In this short tutorial, we discussed the differences between Project Object Models in Maven.

As always, the example shown in this tutorial is available over on GitHub.


Spring Conditional Annotations


1. Introduction

In this tutorial, we'll take a look at the @Conditional annotation. It's used to indicate whether a given component is eligible for registration based on a defined condition.

We'll learn how to use predefined conditional annotations, combine them with different conditions as well as create our custom, condition-based annotations.

2. Declaring Conditions

Before we move on to the implementation, let's first see in which situations we could make use of conditional annotations.

The most common usage would be to include or exclude the whole configuration class:

@Configuration
@Conditional(IsDevEnvCondition.class)
class DevEnvLoggingConfiguration {
    // ...
}

Or just a single bean:

@Configuration
class DevEnvLoggingConfiguration {
    
    @Bean
    @Conditional(IsDevEnvCondition.class)
    LoggingService loggingService() {
        return new LoggingService();
    }
}

By doing so, we can base the behavior of our application on given conditions, for instance, the type of environment or the specific needs of our clients. In the above example, we initialize the additional logging service only for the development environment.

Another way of making the component conditional would be to place the condition directly on the component class:

@Service
@Conditional(IsDevEnvCondition.class)
class LoggingService {
    // ...
}

We can apply the above example to any bean declared with the @Component, @Service, @Repository, or @Controller annotations.

3. Predefined Conditional Annotations

Spring comes with a set of predefined conditional annotations. Let's go through some of the most popular ones.

Firstly, let's see how we can base a component on a configuration property value:

@Service
@ConditionalOnProperty(
  value="logging.enabled", 
  havingValue = "true", 
  matchIfMissing = true)
class LoggingService {
    // ...
}

The first attribute, value, tells us what configuration property we'll be looking at. The second one, havingValue, defines a value that's required for this condition. And lastly, the matchIfMissing attribute tells Spring whether the condition should match if the property is missing.

Similarly, we can base the condition on an expression:

@Service
@ConditionalOnExpression(
  "${logging.enabled:true} and '${logging.level}'.equals('DEBUG')"
)
class LoggingService {
    // ...
}

Now, Spring will create the LoggingService only when both the logging.enabled configuration property is set to true, and the logging.level is set to DEBUG.

Another condition we can apply is to check whether a given bean was created:

@Service
@ConditionalOnBean(CustomLoggingConfiguration.class)
class LoggingService {
    // ...
}

Or a given class exists on the classpath:

@Service
@ConditionalOnClass(CustomLogger.class)
class LoggingService {
    // ...
}

We can achieve the opposite behavior by applying the @ConditionalOnMissingBean or @ConditionalOnMissingClass annotations.

In addition, we can depend our components on a given Java version:

@Service
@ConditionalOnJava(JavaVersion.EIGHT)
class LoggingService {
    // ...
}

In the above example, the LoggingService will be created only when the runtime environment is Java 8.

Finally, we can use the @ConditionalOnWarDeployment annotation to enable a bean only for traditional WAR packaging:

@Configuration
@ConditionalOnWarDeployment
class AdditionalWebConfiguration {
    // ...
}

Note that for applications with embedded servers, this condition will return false.

4. Defining Custom Conditions

Spring allows us to customize the behavior of the @Conditional annotation by creating our custom condition templates. To create one, we simply need to implement the Condition interface:

class Java8Condition implements Condition {
    @Override
    public boolean matches(ConditionContext context, AnnotatedTypeMetadata metadata) {
        return JavaVersion.getJavaVersion().equals(JavaVersion.EIGHT);
    }
}

The matches method tells Spring whether the condition has passed or not. Its two arguments give us, respectively, information about the context where the bean will initialize and the metadata of the @Conditional annotation in use.

As we can see, in our example, we just check if the Java version is 8.

After that, we should place our new condition as an attribute in the @Conditional annotation:

@Service
@Conditional(Java8Condition.class)
public class Java8DependedService {
    // ...
}

This way, the Java8DependedService will be created only when the condition from the Java8Condition class is matched.

5. Combining Conditions

For more complex solutions, we can group conditional annotations with OR or AND logical operators.

To apply the OR operator, we need to create a custom condition extending the AnyNestedCondition class. Inside it, we need to create an empty static class for each condition and annotate it with a proper @Conditional implementation.

For example, let's create a condition that requires either Java 8 or Java 9:

class Java8OrJava9 extends AnyNestedCondition {
    
    Java8OrJava9() {
        super(ConfigurationPhase.REGISTER_BEAN);
    }
    
    @Conditional(Java8Condition.class)
    static class Java8 { }
    
    @Conditional(Java9Condition.class)
    static class Java9 { }
    
}
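The Java9Condition used above isn't shown in the article; a minimal sketch, assuming the same Spring Boot JavaVersion helper as before, could look like this:

class Java9Condition implements Condition {
    @Override
    public boolean matches(ConditionContext context, AnnotatedTypeMetadata metadata) {
        // matches only when the runtime is Java 9
        return JavaVersion.getJavaVersion().equals(JavaVersion.NINE);
    }
}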

The AND operator, on the other hand, is much simpler. We can simply group the conditions:

@Service
@Conditional({IsWindowsCondition.class, Java8Condition.class})
@ConditionalOnJava(JavaVersion.EIGHT)
public class LoggingService {
    // ...
}

In the above example, the LoggingService will be created only when both the IsWindowsCondition and Java8Condition are matched.

6. Summary

In this article, we've learned how to use and create conditional annotations. As always, all the source code is available over on GitHub.


IllegalAccessError in Java


1. Overview

In this quick tutorial, we'll discuss the java.lang.IllegalAccessError.

We'll examine some examples of when it is thrown and how to avoid it.

2. Introduction to IllegalAccessError

An IllegalAccessError is thrown when an application attempts to access a field or invoke a method that is inaccessible.

The compiler catches such illegal invocations, but we may still bump into an IllegalAccessError at runtime.

First, let's observe the class hierarchy of IllegalAccessError:

java.lang.Object
  |_java.lang.Throwable
    |_java.lang.Error
      |_java.lang.LinkageError
        |_java.lang.IncompatibleClassChangeError
          |_java.lang.IllegalAccessError

Its parent class is IncompatibleClassChangeError. Hence, the cause of this error is an incompatible change in one or more class definitions in the application.

Simply put, the version of the class at runtime is different from the one it was compiled against.

3. How May This Error Occur?

Let's understand this with a simple program:

public class Class1 {
    public void bar() {
        System.out.println("SUCCESS");
    }
}
public class Class2 {
    public void foo() {
        Class1 c1 = new Class1();
        c1.bar();
    }
}

At runtime, the above code invokes the method bar() in Class1. So far, so good.

Now, let's update the access modifier of bar() to private and independently compile it.

Next, replace the previous definition of Class1 (the .class file) with the newly compiled version and re-run the program:

java.lang.IllegalAccessError: 
  class Class2 tried to access private method Class1.bar()

The above exception is self-explanatory. The method bar() is now private in Class1. Clearly, it's illegal to access.

4. IllegalAccessError in Action

4.1. Library Updates

Consider an application that compiles against a library that is also available on the classpath at runtime.

The library owner updates a publicly available method to private, rebuilds it, but forgets to inform other parties of this change.

Consequently, at runtime, when the application invokes this method (still assuming public access), it runs into an IllegalAccessError.

4.2. Interface Default Methods

Misuse of default methods in Interfaces is another cause of this error.

Consider the following interface and class definitions:

interface Baeldung {
    public default void foobar() {
        System.out.println("This is a default method.");
    }
}
class Super {
    private void foobar() {
        System.out.println("Super class method foobar");
    }
}

Also, let's extend Super and implement Baeldung:

class MySubClass extends Super implements Baeldung {}

Finally, let's invoke foobar() by instantiating MySubClass:

new MySubClass().foobar();

The method foobar() is private in Super and default in Baeldung. At compile time, the default method appears accessible in the hierarchy of MySubClass.

Therefore, the compiler doesn't complain, but at runtime, we get an error:

java.lang.IllegalAccessError:
  class IllegalAccessErrorExample tried to access private method 'void Super.foobar()'

During execution, a super-class method declaration always takes priority over an interface default method.

Technically, foobar from Super should have been called, but it's private. Undoubtedly, an IllegalAccessError will be thrown.

5. How to Avoid it?

Specifically, if we run into an IllegalAccessError, we should primarily look for changes in class definitions with respect to access modifiers.

Secondly, we should check for interface default methods that are overridden with a private access modifier.

Making the class-level method public will do the trick.
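For example, here's the corrected Super class from the earlier default-method scenario; with foobar() made public, the superclass method can legally take priority at runtime:

class Super {
    public void foobar() {
        System.out.println("Super class method foobar");
    }
}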

6. Conclusion

To conclude, the compiler will resolve most illegal method invocations. If we still come across an IllegalAccessError, we need to look into class definition changes.

The source code of the examples is available over on GitHub.


Build a Dashboard Using Cassandra, Astra, and Stargate


1. Introduction

In this article, we are going to build “Tony Stark's Avengers Status Dashboard”, used by The Avengers to monitor the status of the members of the team.

This will be built using DataStax Astra, a DBaaS powered by Apache Cassandra using Stargate to offer additional APIs for working with it. On top of this, we will be using a Spring Boot application to render the dashboard and show what's going on.

We will be building this with Java 16, so make sure this is installed and ready to use before continuing.

2. What is Astra?

DataStax Astra is a Database as a Service offering that is powered by Apache Cassandra. This gives us a fully hosted, fully managed Cassandra database that we can use to store our data, which includes all of the power that Cassandra offers for scalability, high availability and performance.

On top of this, Astra also incorporates the Stargate data platform that exposes the exact same underlying data via different APIs. This gives us access to traditional Cassandra tables using REST and GraphQL APIs – both of which are 100% compatible with each other and the more traditional CQL APIs. These can make access to our data incredibly flexible with only a standard HTTP client – such as the Spring RestTemplate.

It also offers a JSON Document API that allows for much more flexible data access. With this API there is no need for a schema, and every record can be a different shape if needed. Additionally, records can be as complex as needed, supporting the full power of JSON for representing the data.

This does come with a cost though – the Document API is not interchangeable with the other APIs, so it is important to decide ahead of time how data needs to be modelled and which APIs are best used to access it.

3. Our Application Data Model

We are building our system around the Astra system on top of Cassandra. This will have a direct reflection on the way that we model our data.

Cassandra is designed to allow massive amounts of data with very high throughput, and it stores records in tabular form. Astra adds to this some alternative APIs – REST and GraphQL – and the ability to represent documents as well as simple tabular data – using the Document API.

This is still backed by Cassandra, which does schema design differently. In modern systems, space is no longer a constraint. Duplicating data becomes a non-issue, removing the need for joins across collections or partitions of data. This means that we can denormalize our data within our collections to suit our needs.

As such, our data model is going to be built around two collections – events and statuses. The events collection is a record of every status event that has ever happened – this can potentially get very large, something for which Cassandra is ideal. This will be covered in more detail in the next article.

Records in this collection will look as follows:

avenger:   falcon
timestamp: 2021-04-02T14:23:12Z
latitude:  40.714558
longitude: -73.975029
status:    0.72

This gives us a single event update, giving the exact timestamp and location of the update and a percentage value for the status of the Avenger.

The statuses collection contains a single document that contains the dashboard data, which is a denormalized, summarized view of the data that goes into the events collection. This document will look similar to this:

{
    "falcon": {
	"realName": "Sam Wilson",
	"location": "New York",
	"status": "INJURED",
	"name": "Falcon"
    },
    "wanda": {
        "realName": "Wanda Maximoff",
        "location": "New York",
        "status": "HEALTHY"
    }
}

Here we have some general data that doesn't change – the name and realName fields – and we have some summary data that is generated from the most recent event for this Avenger – location is derived from the latitude and longitude values, and status is a general summary of the status field from the event.

This article is focused on the statuses collection, and accessing it using the Document API. Our next article will show how to work with the events collection which is row-based data instead.

4. How to Set Up DataStax Astra

Before we can start our application, we need a store for our data. We are going to use the Cassandra offering from DataStax Astra. To get started, we need to register a free account with Astra and create a new database. This needs to be given a reasonable name for both the database and the keyspace within:

(Note – screens are accurate at time of publication but might have changed since)

This will take a few minutes to set up. Once this is done, we will need to create an access token.

In order to do this, we need to visit the “Settings” tab for the newly created database and generate a token.

Once all of this is done, we will also need our database details. This includes:

  • Database ID
  • Region
  • Keyspace

These can be found on the “Connect” tab.

Finally, we need some data. For the purposes of this article, we are using some pre-populated data. This can be found in a shell script here.

5. How to Set Up Spring Boot

We are going to create our new application using Spring Initializr; we're also going to use Java 16 – allowing us to use Records. This in turn means we need Spring Boot 2.5 – currently this means 2.5.0-M3.

In addition, we need Spring Web and Thymeleaf as dependencies:

Once this is ready, we can download and unzip it somewhere and we are ready to build our application.

Before moving on, we also need to configure our Cassandra credentials. These all go into src/main/resources/application.properties as taken from the Astra dashboard:

ASTRA_DB_ID=e26d52c6-fb2d-4951-b606-4ea11f7309ba
ASTRA_DB_REGION=us-east-1
ASTRA_DB_KEYSPACE=avengers
ASTRA_DB_APPLICATION_TOKEN=AstraCS:xxx-token-here

These secrets are being managed like this purely for the purposes of this article. In a real application, they should be managed securely, for example using Vault.

6. Writing a Document Client

In order to interact with Astra, we need a client that can make the API calls necessary. This will work directly in terms of the Document API that Astra exposes, allowing our application to work in terms of rich documents. For our purposes here, we need to be able to fetch a single record by ID and to provide partial updates to the record.

In order to manage this, we will write a DocumentClient bean that encapsulates all of this:

@Repository
public class DocumentClient {
  @Value("https://${ASTRA_DB_ID}-${ASTRA_DB_REGION}.apps.astra.datastax.com/api/rest/v2/namespaces/${ASTRA_DB_KEYSPACE}")
  private String baseUrl;
  @Value("${ASTRA_DB_APPLICATION_TOKEN}")
  private String token;
  @Autowired
  private ObjectMapper objectMapper;
  private RestTemplate restTemplate;
  public DocumentClient() {
    this.restTemplate = new RestTemplate();
    this.restTemplate.setRequestFactory(new HttpComponentsClientHttpRequestFactory());
  }
  public <T> T getDocument(String collection, String id, Class<T> cls) {
    var uri = UriComponentsBuilder.fromHttpUrl(baseUrl)
      .pathSegment("collections", collection, id)
      .build()
      .toUri();
    var request = RequestEntity.get(uri)
      .header("X-Cassandra-Token", token)
      .build();
    var response = restTemplate.exchange(request, cls);
    return response.getBody();
  }
  public void patchSubDocument(String collection, String id, String key, Map<String, Object> updates) {
    var updateUri = UriComponentsBuilder.fromHttpUrl(baseUrl)
      .pathSegment("collections", collection, id, key)
      .build()
      .toUri();
    var updateRequest = RequestEntity.patch(updateUri)
      .header("X-Cassandra-Token", token)
      .body(updates);
    restTemplate.exchange(updateRequest, Map.class);
  } 
}

Here, our baseUrl and token fields are configured from the properties that we defined earlier. We then have a getDocument() method that can call Astra to get the specified record from the desired collection, and a patchSubDocument() method that can call Astra to patch part of any single document in the collection.

That's all that's needed to interact with the Document API from Astra since it works by simply exchanging JSON documents over HTTP.

Note that we need to change the request factory used by our RestTemplate. This is because the default one that is used by Spring doesn't support the PATCH method on HTTP calls.

7. Fetching Avengers Statuses via the Document API

Our first requirement is to be able to retrieve the statuses of the members of our team. This is the document from the statuses collection that we mentioned earlier. This will be built on top of the DocumentClient that we wrote earlier.

7.1. Retrieving Statuses from Astra

To represent these, we will need a Record as follows:

public record Status(String avenger, 
  String name, 
  String realName, 
  String status, 
  String location) {}

We also need a Record to represent the entire collection of statuses as retrieved from Cassandra:

public record Statuses(Map<String, Status> data) {}

This Statuses class represents the exact same JSON as will be returned by the Document API, and so can be used to receive the data via a RestTemplate and Jackson.

Then we need a service layer to retrieve the statuses from Cassandra and return them for use:

@Service
public class StatusesService {
  @Autowired
  private DocumentClient client;
  
  public List<Status> getStatuses() {
    var collection = client.getDocument("statuses", "latest", Statuses.class);
    var result = new ArrayList<Status>();
    for (var entry : collection.data().entrySet()) {
      var status = entry.getValue();
      result.add(new Status(entry.getKey(), status.name(), status.realName(), status.status(), status.location()));
    }
    return result;
  }  
}

Here, we are using our client to get the record from the “statuses” collection, represented by our Statuses record. Once retrieved, we extract only the documents to return to the caller. Note that we do have to rebuild the Status objects to also contain the IDs, since these are actually stored higher up in the document within Astra.

7.2. Displaying the Dashboard

Now that we have a service layer to retrieve the data, we need to do something with it. This means a controller to handle incoming HTTP requests from the browser, and then render a template showing the actual dashboard.

First then, the controller:

@Controller
public class StatusesController {
  @Autowired
  private StatusesService statusesService;
  @GetMapping("/")
  public ModelAndView getStatuses() {
    var result = new ModelAndView("dashboard");
    result.addObject("statuses", statusesService.getStatuses());
    return result;
  }
}

This retrieves the statuses from Astra and passes them on to a template to render.

Our main “dashboard.html” template is then as follows:

<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1" />
  <link href="https://feeds.feedblitz.com/~/t/0/_/baeldung/~https://cdn.jsdelivr.net/npm/bootstrap@5.0.0-beta3/dist/css/bootstrap.min.css" rel="stylesheet"
    integrity="sha384-eOJMYsd53ii+scO/bJGFsiCZc+5NDVN2yr8+0RDqr0Ql0h+rP48ckxlpbzKgwra6" crossorigin="anonymous" />
  <title>Avengers Status Dashboard</title>
</head>
<body>
  <nav class="navbar navbar-expand-lg navbar-dark bg-dark">
    <div class="container-fluid">
      <a class="navbar-brand" href="#">Avengers Status Dashboard</a>
    </div>
  </nav>
  <div class="container-fluid mt-4">
    <div class="row row-cols-4 g-4">
      <div class="col" th:each="data, iterstat: ${statuses}">
        <th:block th:switch="${data.status}">
          <div class="card text-white bg-danger" th:case="DECEASED" th:insert="~{common/status}"></div>
          <div class="card text-dark bg-warning" th:case="INJURED" th:insert="~{common/status}"></div>
          <div class="card text-dark bg-warning" th:case="UNKNOWN" th:insert="~{common/status}"></div>
          <div class="card text-white bg-secondary" th:case="RETIRED" th:insert="~{common/status}"></div>
          <div class="card text-dark bg-light" th:case="*" th:insert="~{common/status}"></div>
        </th:block>
      </div>
    </div>
  </div>
  <script src="https://cdn.jsdelivr.net/npm/bootstrap@5.0.0-beta3/dist/js/bootstrap.bundle.min.js"
    integrity="sha384-JEW9xMcG8R+pH31jmWH6WWP0WintQrMb4s7ZOdauHnUtxwoG2vI5DkLtS3qm9Ekf"
    crossorigin="anonymous"></script>
</body>
</html>

And this makes use of another nested template, under “common/status.html”, to display the status of a single Avenger:

<div class="card-body">
  <h5 class="card-title" th:text="${data.name}"></h5>
  <h6 class="card-subtitle"><span th:if="${data.realName}" th:text="${data.realName}"></span> </h6>
  <p class="card-text"><span th:if="${data.location}">Location: <span th:text="${data.location}"></span></span> </p>
</div>
<div class="card-footer">Status: <span th:text="${data.status}"></span></div>

This makes use of Bootstrap to format our page, and it displays one card for each Avenger, coloured based on the status and showing that Avenger's current details.

8. Status Updates via the Document API

We now have the ability to display the current status data of the various Avengers members. What we're missing is the ability to update them with feedback from the field. This will be a new HTTP controller that can update our document via the Document API to reflect the newest status details.

In the next article, this same controller will record the latest status not only into the statuses collection but also into the events collection. This will allow us to record the entire history of events for later analysis from the same input stream. As such, the inputs into this controller are going to be the individual events and not the rolled-up statuses.

8.1. Updating Statuses in Astra

Because we are representing the statuses data as a single document, we only need to update the appropriate portion of it. This uses the patchSubDocument() method of our client, pointing at the correct portion for the identified avenger.

We do this with a new method in the StatusesService class that will perform the updates:

public void updateStatus(String avenger, String location, String status) throws Exception {
  client.patchSubDocument("statuses", "latest", avenger, 
    Map.of("location", location, "status", status));
}

8.2. API to Update Statuses

We now need a controller that can be called in order to trigger these updates. This will be a new RestController endpoint that takes the Avenger's ID and the latest event details:

@RestController
public class UpdateController {
  @Autowired
  private StatusesService statusesService;
  @PostMapping("/update/{avenger}")
  public void update(@PathVariable String avenger, @RequestBody UpdateBody body) throws Exception {
    statusesService.updateStatus(avenger, lookupLocation(body.lat(), body.lng()), getStatus(body.status()));
  }
  private String lookupLocation(Double lat, Double lng) {
    return "New York";
  }
  private String getStatus(Double status) {
    if (status == 0) {
      return "DECEASED";
    } else if (status > 0.9) {
      return "HEALTHY";
    } else {
      return "INJURED";
    }
  }
  private static record UpdateBody(Double lat, Double lng, Double status) {}
}

This allows us to accept requests for a particular Avenger, containing the current latitude, longitude, and status of that Avenger. We then convert these values into status values and pass them on to the StatusesService to update the status record.
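For instance, a hypothetical field update for Falcon could be posted like this (the host and values are examples only):

$ curl -X POST http://localhost:8080/update/falcon \
    -H 'Content-Type: application/json' \
    -d '{"lat": 40.714558, "lng": -73.975029, "status": 0.72}'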

In a future article, this will be updated to also create a new events record with this data, so that we can track the entire history of events for every Avenger.

Note that we are not correctly looking up the name of the location to use for the latitude and longitude – it is just hard-coded. There are various options for implementing this but they are out of scope for this article.

9. Summary

Here we have seen how we can leverage the Astra Document API on top of Cassandra to build a dashboard of statuses. Since Astra is serverless, your demo database will scale to zero when unused, so you will not continue to incur usage charges. In our next article, we will instead work with the Row APIs that allow us to work with very large numbers of records in a very easy manner.

All of the code from this article can be found over on GitHub.


Maximum Size of Java Arrays


1. Overview

In this tutorial, we'll look at the maximum size of an array in Java.

2. Max Size

A Java program can only allocate an array up to a certain size, which generally depends on the JVM we're using and the platform. Since the index of an array is an int, the maximum index value is roughly 2^31 – 1. Based on this approximation, we can say that an array can theoretically hold at most 2,147,483,647 elements.

For our example, we're using the OpenJDK and Oracle implementations of Java 8 and Java 15 on Linux and Mac machines. The results were the same throughout our testing.

This can be verified using a simple example:

for (int i = 2; i >= 0; i--) {
    try {
        int[] arr = new int[Integer.MAX_VALUE - i];
        System.out.println("Max-Size : " + arr.length);
    } catch (Throwable t) {
        t.printStackTrace();
    }
}

During the execution of the above program, using Linux and Mac machines, similar behavior is observed. On execution with VM arguments -Xms2G -Xmx2G, we'll receive the following errors:

java.lang.OutOfMemoryError: Java heap space
	at com.example.demo.ArraySizeCheck.main(ArraySizeCheck.java:8)
java.lang.OutOfMemoryError: Requested array size exceeds VM limit
	at com.example.demo.ArraySizeCheck.main(ArraySizeCheck.java:8)
java.lang.OutOfMemoryError: Requested array size exceeds VM limit

Note that the first error is different from the last two. The last two errors mention the VM limitation, whereas the first one is about heap memory limitation.

Now let's try with the VM arguments -Xms9G -Xmx9G to receive the exact maximum size:

Max-Size: 2147483645
java.lang.OutOfMemoryError: Requested array size exceeds VM limit
	at com.example.demo.ArraySizeCheck.main(ArraySizeCheck.java:8)
java.lang.OutOfMemoryError: Requested array size exceeds VM limit
	at com.example.demo.ArraySizeCheck.main(ArraySizeCheck.java:8)

The results show the maximum size is 2,147,483,645.

The same behavior can be observed for byte, boolean, long, and other data types in the array, and the results are the same.

3. ArraysSupport

ArraysSupport is a utility class in OpenJDK that uses Integer.MAX_VALUE – 8 as the suggested maximum array size so that allocations work across all JDK versions and implementations.
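Since ArraysSupport lives in an internal JDK package, we can't normally use it directly. A minimal sketch of the same defensive pattern in our own code might look like this:

// mirrors the OpenJDK convention of reserving a few header words
private static final int SOFT_MAX_ARRAY_LENGTH = Integer.MAX_VALUE - 8;

static int[] allocateCapped(long requested) {
    if (requested < 0 || requested > SOFT_MAX_ARRAY_LENGTH) {
        throw new IllegalArgumentException("Requested array size exceeds the soft maximum: " + requested);
    }
    return new int[(int) requested];
}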

4. Conclusion

In this article, we looked at the maximum size of an array in Java.

As usual, all code samples used in this tutorial are available over on GitHub.


Downloading Email Attachments in Java


1. Overview

In this tutorial, we'll take a look at how we can download email attachments using Java. To do so, we need the JavaMail API. The JavaMail API is available as either a Maven dependency or as separate jars.

2. JavaMail API Overview

The JavaMail API is used to compose, send, and receive emails from an email server like Gmail. It provides a framework for an email system using abstract classes and interfaces. The API supports most RFC822 and MIME Internet messaging protocols like SMTP, POP, IMAP, MIME, and NNTP.

3. JavaMail API Setup

We need to add the javax.mail Maven dependency in our Java project to use the JavaMail API:

<dependency>
    <groupId>com.sun.mail</groupId>
    <artifactId>javax.mail</artifactId> 
    <version>1.6.2</version>
</dependency>

4. Download Email Attachments

For handling email in Java, we use the Message class from the javax.mail package. Message implements the javax.mail.Part interface.

The Part interface gives us access to a part's content and attributes, including its disposition. A message body that carries attachments has multipart content, a Multipart object composed of several BodyPart elements. If a body part is an attachment, its disposition equals Part.ATTACHMENT; if there are no attachments, the disposition is null. The getDisposition method from the Part interface gets us the disposition.

We look at a simple Maven-based project to understand how downloading email attachments work. We'll concentrate on getting the emails to download and saving attachments to the disk.

Our project has a utility that deals with downloading emails and saving them to our disk. We're also displaying the list of attachments.

To download the attachment(s), we first check if the content type has multipart content or not. If yes, we can process it further to check if the part has any attachments. To check the content type, we write:

if (contentType.contains("multipart")) {
    //send to the download utility...
}

If we have a multipart, we first check if it is of the type Part.ATTACHMENT and, if it is, we save the file to our destination folder using the saveFile method. So, in the download utility, we would check:

if (Part.ATTACHMENT.equalsIgnoreCase(part.getDisposition())) {
    String file = part.getFileName();
    part.saveFile(downloadDirectory + File.separator + part.getFileName());
    downloadedAttachments.add(file);
}

Since we're using a JavaMail API version greater than 1.4, we can use the saveFile method from the MimeBodyPart class. The saveFile method works with either a File object or a String; we've used a String in the example. This step saves the attachments to the folder we specify. We also maintain a list of attachments for display.

Before JavaMail API version 1.4, we had to write the entire file byte by byte using an InputStream and an OutputStream. In our example, we've used a POP3 server for a Gmail account. So, to call the method in the example, we need a valid Gmail username and password and a folder to download attachments into.
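Fetching the Message objects that we pass into our utility isn't shown here; a minimal sketch of connecting to Gmail over POP3 (host and port per Gmail's published POP3 settings, credentials assumed) might look like this:

Properties props = new Properties();
props.put("mail.pop3.host", "pop.gmail.com");
props.put("mail.pop3.port", "995");
props.put("mail.pop3.ssl.enable", "true");

Session session = Session.getInstance(props);
Store store = session.getStore("pop3");
store.connect("pop.gmail.com", username, password);

Folder inbox = store.getFolder("INBOX");
inbox.open(Folder.READ_ONLY);
Message[] messages = inbox.getMessages();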

Let's see the example code for downloading attachments and saving them to disk:

public List<String> downloadAttachments(Message message) throws IOException, MessagingException {
    List<String> downloadedAttachments = new ArrayList<String>();
    Multipart multiPart = (Multipart) message.getContent();
    int numberOfParts = multiPart.getCount();
    for (int partCount = 0; partCount < numberOfParts; partCount++) {
        MimeBodyPart part = (MimeBodyPart) multiPart.getBodyPart(partCount);
        if (Part.ATTACHMENT.equalsIgnoreCase(part.getDisposition())) {
            String file = part.getFileName();
            part.saveFile(downloadDirectory + File.separator + part.getFileName());
            downloadedAttachments.add(file);
        }
    }
    return downloadedAttachments;
}  

5. Conclusion

This article showed how to download email attachments in Java using the native JavaMail library. The entire code for this tutorial is available over on GitHub.


Java Weekly, Issue 387


1. Spring and Java

>> Large Pages and Java [kstefanj.github.io]

How to achieve more efficient memory address translation on different OSes in JVM via large pages – short but also quite in-depth.

>> Hacking third-party APIs on the JVM [blog.frankel.ch]

Hacking the behavior of third-party libraries: reflection, classpath shadowing, Aspect-Oriented Programming, and Java agents!

>> Java 16’s Stream#mapMulti() – a Better Stream#flatMap Replacement? [4comprehension.com]

A small new addition in Java 16 and Stream API: the mapMulti method, flatMap comparison, and its interesting use-cases.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical & Musings

>> Jolie – a Service-Oriented Programming Language for Distributed Applications [infoq.com]

Modeling software as composable services – a sweet introduction to a service-oriented language!

Also worth reading:

3. Comics

And my favorite Dilberts of the week:

>> Forty Minutes Late [dilbert.com]

>> Million Dollar Bonuses [dilbert.com]

>> Nominate A Coworker [dilbert.com]

4. Pick of the Week

>> How Discord Stores Billions of Messages [discord.com]


Using Cucumber with Gradle


1. Introduction

Cucumber is a test automation tool that supports Behavior-Driven Development (BDD). It runs specifications written in plain-text Gherkin syntax that describe the system behavior.

In this tutorial, we'll see a few ways to integrate Cucumber with Gradle in order to run BDD specifications as part of the project build.

2. Setup

First, let's set up a Gradle project, using Gradle Wrapper.

Next, we'll add the cucumber-java dependency to build.gradle:

testImplementation 'io.cucumber:cucumber-java:6.10.4'

This adds the official Cucumber Java implementation to our project.

3. Running Using Custom Task

In order to run our specifications using Gradle, we'll create a task that uses the Command-Line Interface Runner (CLI) from Cucumber.

3.1. Configuration

Let's start by adding the required configuration to the project's build.gradle file:

configurations {
    cucumberRuntime {
        extendsFrom testImplementation
    }
}

Next, we'll create the custom cucumberCli task:

task cucumberCli() {
    dependsOn assemble, testClasses
    doLast {
        javaexec {
            main = "io.cucumber.core.cli.Main"
            classpath = configurations.cucumberRuntime + sourceSets.main.output + sourceSets.test.output
            args = [
              '--plugin', 'pretty',
              '--plugin', 'html:target/cucumber-report.html', 
              '--glue', 'com.baeldung.cucumber', 
              'src/test/resources']
        }
    }
}

This task is configured to run all the test scenarios found in .feature files under the src/test/resources directory.

The --glue option to the Main class specifies the location of the step definition files required for running the scenarios.

The --plugin option specifies the format and location of the test reports. We can combine several values to generate the report(s) in the required format(s), such as pretty and HTML, as in our example.

Several other options are available as well; for example, we can filter tests by name or tag, as in the sketch below.
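
For instance, assuming some scenarios carried a hypothetical @smoke tag, we could narrow the args list like this:

args = [
  '--tags', '@smoke',          // run only scenarios tagged @smoke
  '--name', '^Credit amount$', // and/or filter by scenario name (a regex)
  '--plugin', 'pretty',
  '--glue', 'com.baeldung.cucumber',
  'src/test/resources']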

3.2. Scenario

Now, let's create a simple scenario for our application in the src/test/resources/features/account_credited.feature file:

Feature: Account is credited with amount
  Scenario: Credit amount
    Given account balance is 0.0
    When the account is credited with 10.0
    Then account should have a balance of 10.0

Next, we'll implement the corresponding step definitions — the glue — required for running the scenario:

public class StepDefinitions {

    private Account account;

    @Given("account balance is {double}")
    public void account_balance_is(Double initialBalance) {
        account = new Account(initialBalance);
    }

    // other step definitions
}
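
For completeness, the remaining steps and a minimal Account class might look like this (our sketch; the article's actual implementation may differ):

@When("the account is credited with {double}")
public void the_account_is_credited_with(Double amount) {
    account.credit(amount);
}

@Then("account should have a balance of {double}")
public void account_should_have_a_balance_of(Double expectedBalance) {
    assertEquals(expectedBalance, account.getBalance(), 0.001);
}

class Account {
    private double balance;

    Account(double balance) {
        this.balance = balance;
    }

    void credit(double amount) {
        balance += amount;
    }

    double getBalance() {
        return balance;
    }
}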

3.3. Run the Task

Finally, let's run our cucumberCli task from the command line:

>> ./gradlew cucumberCli
> Task :cucumberCli
Scenario: Credit amount                      # src/test/resources/features/account_credited.feature:3
  Given account balance is 0.0               # com.baeldung.cucumber.StepDefinitions.account_balance_is(java.lang.Double)
  When the account is credited with 10.0     # com.baeldung.cucumber.StepDefinitions.the_account_is_credited_with(java.lang.Double)
  Then account should have a balance of 10.0 # com.baeldung.cucumber.StepDefinitions.account_should_have_a_balance_of(java.lang.Double)
1 Scenarios (1 passed)
3 Steps (3 passed)
0m0.381s

As we can see, our specification has been integrated with Gradle, runs successfully, and the output is shown on the console. Also, the HTML test report is available in the specified location.

4. Running Using JUnit

Instead of creating a custom task in Gradle, we can use JUnit to run the Cucumber scenarios.

Let's start by including the cucumber-junit dependency:

testImplementation 'io.cucumber:cucumber-junit:6.10.4'

As we're using JUnit 5, we also need to add the junit-vintage-engine dependency, since the Cucumber JUnit runner is based on JUnit 4:

testImplementation 'org.junit.vintage:junit-vintage-engine:5.7.2'

Next, we'll create an empty runner class in the test sources location:

@RunWith(Cucumber.class)
@CucumberOptions(
  plugin = {"pretty", "html:target/cucumber-report.html"},
  features = {"src/test/resources"}
)
public class RunCucumberTest {
}

Here, we've used the JUnit Cucumber runner via the @RunWith annotation. Furthermore, all the CLI runner options, such as features and plugin, are available via the @CucumberOptions annotation. Since we haven't specified a glue option, Cucumber defaults to the package of the runner class.

Now, executing the standard Gradle test task will find and run all the feature tests, in addition to any other unit tests:

>> ./gradlew test
> Task :test
RunCucumberTest > Credit amount PASSED
BUILD SUCCESSFUL in 2s
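
If we want to run only the Cucumber runner and skip the other unit tests, Gradle's standard test filtering applies:

>> ./gradlew test --tests RunCucumberTest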

5. Running Using Plugin

The last approach is to use a third-party plugin that provides the ability to run specifications from the Gradle build.

In our example, we'll use the gradle-cucumber-runner plugin for running Cucumber JVM. Under the hood, this forwards all calls to the CLI runner that we used earlier. Let's include it in our project:

plugins {
  id "se.thinkcode.cucumber-runner" version "0.0.8"
}

This adds a cucumber task to our build, and now we can run it with default settings:

>> ./gradlew cucumber

It's worth noting that this is not an official Cucumber plugin; others that provide similar functionality are also available.

6. Conclusion

In this article, we demonstrated several ways to configure and run BDD specifications using Gradle.

Initially, we looked at how to create a custom task utilizing the CLI runner. Then, we looked at using the Cucumber JUnit runner to execute the specifications using the existing Gradle task. Finally, we used a third-party plugin to run Cucumber without creating our own custom tasks.

As always, the full source can be found over on GitHub.

The post Using Cucumber with Gradle first appeared on Baeldung.