Channel: Baeldung

Modifying an XML Attribute in Java


1. Introduction

A common task when working with XML is handling its attributes. In this tutorial, we’ll explore how to modify an XML attribute using Java.

2. Dependencies

In order to run our tests, we’ll need to add the JUnit and xmlunit-assertj dependencies to our Maven project:

<dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter</artifactId>
    <version>5.5.0</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.xmlunit</groupId>
    <artifactId>xmlunit-assertj</artifactId>
    <version>2.6.3</version>
    <scope>test</scope>
</dependency>

3. Using JAXP

Let’s start with an XML document:

<?xml version="1.0" encoding="UTF-8"?>
<notification id="5">
    <to customer="true">john@email.com</to>
    <from>mary@email.com</from>
</notification>

In order to process it, we’ll use the Java API for XML Processing (JAXP), which has been bundled with Java since version 1.4.

Let’s modify the customer attribute and change its value to false.

First, we need to build a Document object from the XML file, and to do that, we’ll use a DocumentBuilderFactory:

DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
factory.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, true);
factory.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
Document input = factory
  .newDocumentBuilder()
  .parse(resourcePath);

Note that in order to prevent XML External Entity (XXE) attacks, we configure the XMLConstants.FEATURE_SECURE_PROCESSING and http://apache.org/xml/features/disallow-doctype-decl features on the DocumentBuilderFactory. It’s good practice to configure them whenever we parse untrusted XML files.

After initializing our input object, we’ll need to locate the node with the attribute we’d like to change. Let’s use an XPath expression to select it:

XPath xpath = XPathFactory
  .newInstance()
  .newXPath();
String expr = String.format("//*[contains(@%s, '%s')]", attribute, oldValue);
NodeList nodes = (NodeList) xpath.evaluate(expr, input, XPathConstants.NODESET);

In this case, the XPath evaluate method returns a NodeList containing the matched nodes.

Let’s iterate over the list to change the value:

for (int i = 0; i < nodes.getLength(); i++) {
    Element value = (Element) nodes.item(i);
    value.setAttribute(attribute, newValue);
}

Or, instead of a for loop, we can use an IntStream:

IntStream
    .range(0, nodes.getLength())
    .mapToObj(i -> (Element) nodes.item(i))
    .forEach(value -> value.setAttribute(attribute, newValue));

Now, let’s use a Transformer object to apply the changes:

TransformerFactory factory = TransformerFactory.newInstance();
factory.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, true);
Transformer xformer = factory.newTransformer();
xformer.setOutputProperty(OutputKeys.INDENT, "yes");
Writer output = new StringWriter();
xformer.transform(new DOMSource(input), new StreamResult(output));

If we print the output object content, we’ll get the resulting XML with the customer attribute modified:

<?xml version="1.0" encoding="UTF-8"?>
<notification id="5">
    <to customer="false">john@email.com</to>
    <from>mary@email.com</from>
</notification>

Also, we can use the assertThat method of XMLUnit if we need to verify it in a unit test:

assertThat(output.toString()).hasXPath("//*[contains(@customer, 'false')]");
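Putting these steps together, here is a minimal, self-contained sketch (the class and method names are our own) that parses an XML String, updates every matching attribute, and serializes the result:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.XMLConstants;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class AttributeChanger {

    // Parses the XML, updates every matching attribute, and returns the result as a String
    public static String changeAttribute(String xml, String attribute,
                                         String oldValue, String newValue) throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, true);
        Document input = factory.newDocumentBuilder()
          .parse(new InputSource(new StringReader(xml)));

        // Select every element whose attribute contains the old value
        XPath xpath = XPathFactory.newInstance().newXPath();
        String expr = String.format("//*[contains(@%s, '%s')]", attribute, oldValue);
        NodeList nodes = (NodeList) xpath.evaluate(expr, input, XPathConstants.NODESET);
        for (int i = 0; i < nodes.getLength(); i++) {
            ((Element) nodes.item(i)).setAttribute(attribute, newValue);
        }

        // Serialize the modified DOM back to a String
        TransformerFactory transformerFactory = TransformerFactory.newInstance();
        transformerFactory.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, true);
        Transformer transformer = transformerFactory.newTransformer();
        StringWriter output = new StringWriter();
        transformer.transform(new DOMSource(input), new StreamResult(output));
        return output.toString();
    }

    public static void main(String[] args) throws Exception {
        String xml = "<notification id=\"5\"><to customer=\"true\">john@email.com</to></notification>";
        System.out.println(changeAttribute(xml, "customer", "true", "false"));
    }
}
```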

4. Using dom4j

dom4j is an open-source framework for processing XML that is integrated with XPath and fully supports DOM, SAX, JAXP, and Java Collections.

4.1. Maven Dependency

We need to add the dom4j and jaxen dependencies to our pom.xml to use dom4j in our project:

<dependency>
    <groupId>org.dom4j</groupId>
    <artifactId>dom4j</artifactId>
    <version>2.1.1</version>
</dependency>
<dependency>
    <groupId>jaxen</groupId>
    <artifactId>jaxen</artifactId>
    <version>1.2.0</version>
</dependency>

We can learn more about dom4j in our XML Libraries Support article.

4.2. Using org.dom4j.Element.addAttribute

dom4j offers the Element interface as an abstraction for an XML element. We’ll be using the addAttribute method to update our customer attribute.

Let’s see how this works.

First, we need to build a Document object from the XML file — this time, we’ll use a SAXReader:

SAXReader xmlReader = new SAXReader();
xmlReader.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
xmlReader.setFeature("http://xml.org/sax/features/external-general-entities", false);
xmlReader.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
Document input = xmlReader.read(resourcePath);

We set the additional features in order to prevent XXE.

Like JAXP, we can use an XPath expression to select the nodes:

String expr = String.format("//*[contains(@%s, '%s')]", attribute, oldValue);
XPath xpath = DocumentHelper.createXPath(expr);
List<Node> nodes = xpath.selectNodes(input);

Now, we can iterate and update the attribute:

for (int i = 0; i < nodes.size(); i++) {
    Element element = (Element) nodes.get(i);
    element.addAttribute(attribute, newValue);
}

Note that with this method, if an attribute already exists for the given name, it will be replaced. Otherwise, it’ll be added.

In order to print the results, we can call the document’s asXML method or serialize it with dom4j’s XMLWriter, since dom4j’s Document is not a org.w3c.dom.Document and can’t be fed to the JAXP Transformer directly.

5. Using jOOX

jOOX (jOOX Object-Oriented XML) is a wrapper for the org.w3c.dom package that allows for fluent XML document creation and manipulation where DOM is required but too verbose. jOOX only wraps the underlying document and can be used to enhance DOM, not as an alternative.

5.1. Maven Dependency

We need to add the dependency to our pom.xml to use jOOX in our project.

For use with Java 9+, we can use:

<dependency>
    <groupId>org.jooq</groupId>
    <artifactId>joox</artifactId>
    <version>1.6.2</version>
</dependency>

Or with Java 6+, we have:

<dependency>
    <groupId>org.jooq</groupId>
    <artifactId>joox-java-6</artifactId>
    <version>1.6.2</version>
</dependency>

We can find the latest versions of joox and joox-java-6 in the Maven Central repository.

5.2. Using org.w3c.dom.Element.setAttribute

The jOOX API itself is inspired by jQuery, as we can see in the examples below. Let’s see how to use it.

First, we need to load the Document:

DocumentBuilder builder = JOOX.builder();
Document input = builder.parse(resourcePath);

Now, we need to select it:

Match $ = $(input);

In order to select the to element that holds the customer attribute, we can use the find method or an XPath expression. In both cases, we’ll get a list of the matching elements.

Let’s see the find method in action:

$.find("to")
    .get()
    .stream()
    .forEach(e -> e.setAttribute(attribute, newValue));

To get the result as a String, we simply need to call the toString() method:

$.toString();

6. Benchmark

In order to compare the performance of these libraries, we used a JMH benchmark.

Let’s see the results:

Benchmark                          Mode  Cnt  Score   Error  Units
AttributeBenchMark.dom4jBenchmark  avgt    5  0.150 ± 0.003  ms/op
AttributeBenchMark.jaxpBenchmark   avgt    5  0.166 ± 0.003  ms/op
AttributeBenchMark.jooxBenchmark   avgt    5  0.230 ± 0.033  ms/op

As we can see, for this use case and our implementation, dom4j and JAXP have better scores than jOOX.

7. Conclusion

In this quick tutorial, we’ve introduced how to modify XML attributes using JAXP, dom4j, and jOOX. Also, we measured the performance of these libraries with a JMH benchmark.

As usual, all the code samples shown here are available over on GitHub.


Converting Java String to Double


1. Overview

In this tutorial, we’ll cover many ways of converting a String into a double in Java.

2. Double.parseDouble

We can convert a String to a double using the Double.parseDouble method:

assertEquals(1.23, Double.parseDouble("1.23"), 0.000001);

3. Double.valueOf

Similarly, we can convert a String into a boxed Double using the Double.valueOf method:

assertEquals(1.23, Double.valueOf("1.23"), 0.000001);

Note that the returned value of Double.valueOf is a boxed Double. Since Java 5, this boxed Double is converted by the compiler to a primitive double where needed.

In general, we should favor Double.parseDouble since it does not require the compiler to perform auto-unboxing.
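To make the difference concrete, here’s a small sketch (class name is our own) showing where the boxing happens:

```java
public class DoubleConversion {
    public static void main(String[] args) {
        Double boxed = Double.valueOf("1.23");         // returns a boxed Double
        double unboxed = boxed;                        // auto-unboxed by the compiler
        double primitive = Double.parseDouble("1.23"); // primitive directly, no boxing

        System.out.println(unboxed == primitive); // prints "true"
    }
}
```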

4. DecimalFormat.parse

When a String representing a double has a more complex format, we can use a DecimalFormat.

For example, we can convert a decimal-based currency value without removing non-numeric symbols:

DecimalFormat format = new DecimalFormat("\u00A4#,##0.00");
format.setParseBigDecimal(true);

BigDecimal decimal = (BigDecimal) format.parse("-$1,000.57");

assertEquals(-1000.57, decimal.doubleValue(), 0.000001);

Similar to Double.valueOf, the DecimalFormat.parse method returns a Number, which we can convert to a primitive double using the doubleValue method. Additionally, we use the setParseBigDecimal method to force DecimalFormat.parse to return a BigDecimal.

Usually, DecimalFormat is more advanced than we require; thus, we should favor Double.parseDouble or Double.valueOf instead.

To learn more about DecimalFormat, please check a practical guide to DecimalFormat.

5. Invalid Conversions

Java provides a uniform interface for handling invalid numeric Strings.

Notably, Double.parseDouble, Double.valueOf, and DecimalFormat.parse throw a NullPointerException when we pass null.

Likewise, Double.parseDouble and Double.valueOf throw a NumberFormatException when we pass an invalid String that cannot be parsed to a double (such as &).

Lastly, DecimalFormat.parse throws a ParseException when we pass an invalid String.
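A quick sketch (class name is our own) demonstrating these three behaviors:

```java
import java.text.DecimalFormat;
import java.text.ParseException;

public class InvalidConversions {
    public static void main(String[] args) {
        try {
            Double.parseDouble("&"); // not a parsable double
        } catch (NumberFormatException e) {
            System.out.println("NumberFormatException");
        }

        try {
            String s = null;
            Double.parseDouble(s);   // null input
        } catch (NullPointerException e) {
            System.out.println("NullPointerException");
        }

        try {
            new DecimalFormat("#").parse("&"); // unparsable for DecimalFormat
        } catch (ParseException e) {
            System.out.println("ParseException");
        }
    }
}
```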

6. Avoiding Deprecated Conversions

Before Java 9, we could create a boxed Double from a String by instantiating a Double:

new Double("1.23");

As of version 9, Java officially deprecated this constructor in favor of Double.parseDouble and Double.valueOf.

7. Conclusion

In conclusion, Java provides us with multiple methods to convert Strings into double values.

In general, we recommend using Double.parseDouble unless a boxed Double is needed.

The source code for this article, including examples, can be found over on GitHub.

The Basics of Java Security


1. Overview

In this tutorial, we’ll go through the basics of security on the Java platform. We’ll also focus on what’s available to us for writing secure applications.

Security is a vast topic that encompasses many areas. Some of these are part of the language itself, like access modifiers and class loaders. Furthermore, others are available as services, which include data encryption, secure communication, authentication, and authorization, to name a few.

Therefore, it’s not practical to gain meaningful insight into all of these in this tutorial. However, we’ll try to gain at least a meaningful vocabulary.

2. Language Features

Above all, security in Java begins right at the level of language features. This allows us to write secure code, as well as benefit from many implicit security features:

  • Static Data Typing: Java is a statically typed language, which allows many type-related errors to be caught at compile time instead of at run time
  • Access Modifiers: Java allows us to use different access modifiers like public and private to control access to fields, methods, and classes
  • Automatic Memory Management: Java has garbage-collection based memory management, which frees developers from managing this manually
  • Bytecode Verification: Java is a compiled language, which means it converts code into platform-agnostic bytecode, and runtime verifies every bytecode it loads for execution

This is not a complete list of security features that Java provides, but it’s good enough to give us some assurance!

3. Security Architecture in Java

Before we begin to explore specific areas, let’s spend some time understanding the core architecture of security in Java.

The core principles of security in Java are driven by interoperable and extensible Provider implementations. A particular implementation of Provider may implement some or all of the security services.

For example, some of the typical services a Provider may implement are:

  • Cryptographic Algorithms (such as DSA, RSA, or SHA-256)
  • Key generation, conversion, and management facilities (such as for algorithm-specific keys)

Java ships with many built-in providers. Also, it’s possible for an application to configure multiple providers with an order of preference.

 

Consequently, the provider framework in Java searches for a specific implementation of a service in all providers in the order of preference set on them.

Moreover, it’s always possible to implement custom providers with pluggable security functions in this architecture.
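We can inspect the providers installed in our runtime, listed in their order of preference (class name is our own):

```java
import java.security.Provider;
import java.security.Security;

public class ListProviders {
    public static void main(String[] args) {
        // Providers are returned in their order of preference
        for (Provider provider : Security.getProviders()) {
            System.out.println(provider.getName() + ": " + provider.getInfo());
        }
    }
}
```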

4. Cryptography

Cryptography is the cornerstone of security features in general and in Java. This refers to tools and techniques for secure communication in the presence of adversaries.

4.1. Java Cryptography

The Java Cryptography Architecture (JCA) provides a framework to access and implement cryptographic functionality in Java.

Most importantly, Java makes use of Provider-based implementations for cryptographic functions.

Moreover, Java includes built-in providers for commonly used cryptographic algorithms like RSA, DSA, and AES, to name a few. We can use these algorithms to add security to data at rest, in use, or in motion.
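For instance, here’s a minimal sketch of encrypting and decrypting data with the built-in AES support. We assume AES-GCM with a 128-bit key and a random IV; the class and method names are our own:

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class AesRoundTrip {

    // Encrypts and then decrypts a message with a fresh AES-128 key, returning the result
    static String roundTrip(String message) throws Exception {
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);
        SecretKey key = keyGen.generateKey();

        // AES-GCM needs a unique 12-byte IV for every encryption with the same key
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(message.getBytes(StandardCharsets.UTF_8));

        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return new String(cipher.doFinal(ciphertext), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("secret message")); // prints "secret message"
    }
}
```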

4.2. Cryptography in Practice

A very common use case in applications is to store user passwords. We use this for authentication at a later point in time. Now, it’s obvious that storing plain text passwords compromises security.

So, one solution is to scramble the passwords in a way that is repeatable, yet one-way. This is done with a cryptographic hash function, and SHA-1 is one such popular algorithm.

So, let’s see how we can do this in Java:

MessageDigest md = MessageDigest.getInstance("SHA-1");
byte[] hashedPassword = md.digest("password".getBytes());

Here, MessageDigest is a cryptographic service that we are interested in. We’re using the method getInstance() to request this service from any of the available security providers.
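The digest is a raw byte array; to store or compare it, we typically encode it, for example as hex. A small sketch (class and helper names are our own):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class PasswordHash {

    // Encodes a digest as a lowercase hex String
    static String toHex(byte[] bytes) {
        StringBuilder hex = new StringBuilder();
        for (byte b : bytes) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        byte[] hashedPassword = md.digest("password".getBytes());
        System.out.println(toHex(hashedPassword));
    }
}
```

Note that for real password storage, slow, salted algorithms such as PBKDF2 are preferred over a plain SHA-1 digest.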

5. Public Key Infrastructure

Public Key Infrastructure (PKI) refers to the setup that enables the secure exchange of information over the network using public-key encryption. This setup relies on trust that is built between the parties involved in the communication. This trust is based on digital certificates issued by a neutral and trusted authority known as a Certificate Authority (CA).

5.1. PKI Support in Java

Java platform has APIs to facilitate the creation, storage, and validation of digital certificates:

  • KeyStore: Java provides the KeyStore class for persistent storage of cryptographic keys and trusted certificates. Here, KeyStore can represent both key-store and trust-store files. These files have similar content but vary in their usage.
  • CertStore: Additionally, Java has the CertStore class, which represents a public repository of potentially untrusted certificates and revocation lists. We need to retrieve certificates and revocation lists for certificate path building amongst other usages.

Java has a built-in trust-store called “cacerts” that contains certificates for well known CAs.
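For example, we can load this built-in trust-store and inspect it. This sketch assumes a Java 9+ layout, where cacerts lives under $JAVA_HOME/lib/security (Java 8 uses jre/lib/security), and the default “changeit” password; the class and method names are our own:

```java
import java.io.FileInputStream;
import java.io.InputStream;
import java.nio.file.Paths;
import java.security.KeyStore;

public class CacertsInspector {

    // Loads the default trust-store and returns the number of entries in it
    static int countTrustedCerts() throws Exception {
        String path = Paths.get(System.getProperty("java.home"),
          "lib", "security", "cacerts").toString();

        KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
        try (InputStream in = new FileInputStream(path)) {
            trustStore.load(in, "changeit".toCharArray()); // default password
        }
        return trustStore.size();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("Trusted CA entries: " + countTrustedCerts());
    }
}
```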

5.2. Java Tools for PKI

Java has some really handy tools to facilitate trusted communication:

  • There is a built-in tool called “keytool” to create and manage key-store and trust-store
  • There is also another tool “jarsigner” that we can use to sign and verify JAR files

5.3. Working with Certificates in Java

Let’s see how we can work with certificates in Java to establish a secure connection using SSL. A mutually authenticated SSL connection requires us to do two things:

  • Present Certificate — We need to present a valid certificate to another party in the communication. For that, we need to load the key-store file, where we must have our public keys:
KeyStore keyStore = KeyStore.getInstance(KeyStore.getDefaultType());
char[] keyStorePassword = "changeit".toCharArray();
try(InputStream keyStoreData = new FileInputStream("keystore.jks")){
    keyStore.load(keyStoreData, keyStorePassword);
}
  • Verify Certificate — We also need to verify the certificate presented by another party in the communication. For this we need to load the trust-store, where we must have previously trusted certificates from other parties:
KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
// Load the trust-store from filesystem as before

We rarely have to do this programmatically and normally pass system parameters to Java at runtime:

-Djavax.net.ssl.trustStore=truststore.jks 
-Djavax.net.ssl.keyStore=keystore.jks

6. Authentication

Authentication is the process of verifying the presented identity of a user or machine based on additional data like a password, a token, or a variety of other credentials available today.

6.1. Authentication in Java

The Java API makes use of pluggable login modules to provide different, and often multiple, authentication mechanisms to applications. LoginContext provides this abstraction, which in turn refers to the configuration and loads an appropriate LoginModule.

While multiple providers make available their login modules, Java has some default ones available for use:

  • Krb5LoginModule, for Kerberos-based authentication
  • JndiLoginModule, for username and password-based authentication backed by an LDAP store
  • KeyStoreLoginModule, for cryptographic key-based authentication

6.2. Login by Example

One of the most common mechanisms of authentication is the username and password. Let’s see how we can achieve this through JndiLoginModule.

This module is responsible for getting the username and password from a user and verifying it against a directory service configured in JNDI:

LoginContext loginContext = new LoginContext("Sample", new SampleCallbackHandler());
loginContext.login();

Here, we are using an instance of LoginContext to perform the login. LoginContext takes the name of an entry in the login configuration — in this case, it’s “Sample”. Also, we have to provide an instance of CallbackHandler, which the LoginModule uses to interact with the user for details like the username and password.

Let’s take a look at our login configuration:

Sample {
  com.sun.security.auth.module.JndiLoginModule required;
};

Simple enough, it suggests that we’re using JndiLoginModule as a mandatory LoginModule.
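The SampleCallbackHandler passed to LoginContext above isn’t shown; a minimal sketch of one (with hard-coded credentials purely for illustration — a real handler would prompt the user) could look like:

```java
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.callback.UnsupportedCallbackException;

// Supplies a username and password to whichever LoginModule asks for them
public class SampleCallbackHandler implements CallbackHandler {

    @Override
    public void handle(Callback[] callbacks) throws UnsupportedCallbackException {
        for (Callback callback : callbacks) {
            if (callback instanceof NameCallback) {
                ((NameCallback) callback).setName("testuser");
            } else if (callback instanceof PasswordCallback) {
                ((PasswordCallback) callback).setPassword("testpassword".toCharArray());
            } else {
                throw new UnsupportedCallbackException(callback);
            }
        }
    }
}
```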

7. Secure Communication

Communication over the network is vulnerable to many attack vectors. For instance, someone may tap into the network and read our data packets as they’re being transferred. Over the years, the industry has established many protocols to secure this communication.

7.1. Java Support for Secure Communication

Java provides APIs to secure network communication with encryption, message integrity, and both client and server authentication:

  • SSL/TLS: SSL and its successor, TLS, provide security over untrusted network communication through data encryption and public-key infrastructure. Java provides support for SSL/TLS through SSLSocket, defined in the package “javax.net.ssl“.
  • SASL: Simple Authentication and Security Layer (SASL) is a standard for authentication between client and server. Java supports SASL as part of the package “javax.security.sasl“.
  • GSS-API/Kerberos: Generic Security Service API (GSS-API) offers uniform access to security services over a variety of security mechanisms like Kerberos v5. Java supports GSS-API as part of the package “org.ietf.jgss“.

7.2. SSL Communication in Action

Let’s now see how we can open a secure connection with other parties in Java using SSLSocket:

SocketFactory factory = SSLSocketFactory.getDefault();
try (Socket connection = factory.createSocket(host, port)) {
    BufferedReader input = new BufferedReader(
      new InputStreamReader(connection.getInputStream()));
    return input.readLine();
}

Here, we are using SSLSocketFactory to create SSLSocket. As part of this, we can set optional parameters like cipher suites and which protocol to use.

For this to work properly, we must have created and set our key-store and trust-store as we saw earlier.

8. Access Control

Access Control refers to protecting sensitive resources like a filesystem or codebase from unwarranted access. This is typically achieved by restricting access to such resources.

8.1. Access Control in Java

We can achieve access control in Java using classes Policy and Permission mediated through the SecurityManager class. SecurityManager is part of the “java.lang” package and is responsible for enforcing access control checks in Java.

When the class loader loads a class at runtime, it automatically grants some default permissions to the class, encapsulated in Permission objects. Beyond these default permissions, we can grant additional permissions to a class through security policies. These are represented by the Policy class.

During the sequence of code execution, if the runtime encounters a request for a protected resource, SecurityManager verifies the requested Permission against the installed Policy through the call stack. Consequently, it either grants permission or throws SecurityException.

8.2. Java Tools for Policy

Java has a default implementation of Policy that reads authorization data from policy files. However, the policy entries in these files have to be in a specific format.

Java ships with “policytool”, a graphical utility to compose policy files.

8.3. Access Control Through Example

Let’s see how we can restrict access to a resource like a file in Java:

SecurityManager securityManager = System.getSecurityManager();
if (securityManager != null) {
    securityManager.checkPermission(
      new FilePermission("/var/logs", "read"));
}

Here, we’re using SecurityManager to validate our read request for a file, wrapped in FilePermission.

But, SecurityManager delegates this request to AccessController. AccessController internally makes use of the installed Policy to arrive at a decision.

Let’s see an example of the policy file:

grant {
  permission java.io.FilePermission
    "<<ALL FILES>>", "read";
};

We are essentially granting read permission to all files for everyone. But, we can provide much more fine-grained control through security policies.
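For instance, a hypothetical entry that grants read access only to code loaded from a specific directory, and only for a specific path, might look like:

```
grant codeBase "file:/opt/myapp/-" {
  permission java.io.FilePermission "/var/logs/*", "read";
};
```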

It’s worth noting that a SecurityManager might not be installed by default. We can make sure one is installed by always starting Java with the parameters:

-Djava.security.manager -Djava.security.policy=/path/to/sample.policy

9. XML Signature

XML signatures are useful for securing data and providing data integrity. The W3C provides the recommendations governing XML Signature. We can use XML signatures to secure data of any type, like binary data.

9.1. XML Signature in Java

The Java API supports generating and validating XML signatures as per the recommended guidelines. The Java XML Digital Signature API is encapsulated in the package “javax.xml.crypto“.

The signature itself is just an XML document. XML signatures can be of three types:

  • Detached: This type of signature is over the data that is external to the Signature element
  • Enveloping: This type of signature is over the data that is internal to the Signature element
  • Enveloped: This type of signature is over the data that contains the Signature element itself

Certainly, Java supports creating and verifying all the above types of XML signatures.

9.2. Creating an XML Signature

Now, we’ll roll up our sleeves and generate an XML signature for our data. For instance, we may be about to send an XML document over the network. Hence, we would want our recipient to be able to verify its integrity.

So, let’s see how we can achieve this in Java:

XMLSignatureFactory xmlSignatureFactory = XMLSignatureFactory.getInstance("DOM");
DocumentBuilderFactory documentBuilderFactory = DocumentBuilderFactory.newInstance();
documentBuilderFactory.setNamespaceAware(true);
 
Document document = documentBuilderFactory
  .newDocumentBuilder().parse(new FileInputStream("data.xml"));
 
DOMSignContext domSignContext = new DOMSignContext(
  keyEntry.getPrivateKey(), document.getDocumentElement());
 
XMLSignature xmlSignature = xmlSignatureFactory.newXMLSignature(signedInfo, keyInfo);
xmlSignature.sign(domSignContext);

To clarify, we’re generating an XML signature for our data in the file “data.xml”. There are a few things to note about this piece of code:

  • Firstly, XMLSignatureFactory is the factory class for generating XML signatures
  • XMLSignature requires a SignedInfo object over which it calculates the signature
  • XMLSignature also needs KeyInfo, which encapsulates the signing key and certificate
  • Finally, XMLSignature signs the document using the private key encapsulated in DOMSignContext

As a result, the XML document will now contain the Signature element, which can be used to verify its integrity.
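The snippet above references signedInfo, keyInfo, and keyEntry without constructing them. A fuller, self-contained sketch of an enveloped signature (with a freshly generated RSA key standing in for a real key-store entry, and names of our own choosing) might look like:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Collections;
import javax.xml.crypto.dsig.CanonicalizationMethod;
import javax.xml.crypto.dsig.DigestMethod;
import javax.xml.crypto.dsig.Reference;
import javax.xml.crypto.dsig.SignedInfo;
import javax.xml.crypto.dsig.Transform;
import javax.xml.crypto.dsig.XMLSignature;
import javax.xml.crypto.dsig.XMLSignatureFactory;
import javax.xml.crypto.dsig.dom.DOMSignContext;
import javax.xml.crypto.dsig.keyinfo.KeyInfo;
import javax.xml.crypto.dsig.keyinfo.KeyInfoFactory;
import javax.xml.crypto.dsig.spec.C14NMethodParameterSpec;
import javax.xml.crypto.dsig.spec.TransformParameterSpec;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class EnvelopedSignatureExample {

    // Signs the given XML with a freshly generated RSA key and returns the signed Document
    static Document sign(String xml) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        Document document = dbf.newDocumentBuilder()
          .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));

        XMLSignatureFactory fac = XMLSignatureFactory.getInstance("DOM");

        // Reference the whole document (URI "") with the mandatory enveloped transform
        Reference ref = fac.newReference("",
          fac.newDigestMethod(DigestMethod.SHA256, null),
          Collections.singletonList(
            fac.newTransform(Transform.ENVELOPED, (TransformParameterSpec) null)),
          null, null);

        SignedInfo signedInfo = fac.newSignedInfo(
          fac.newCanonicalizationMethod(CanonicalizationMethod.INCLUSIVE,
            (C14NMethodParameterSpec) null),
          fac.newSignatureMethod("http://www.w3.org/2001/04/xmldsig-more#rsa-sha256", null),
          Collections.singletonList(ref));

        // A generated key pair stands in for a real key-store entry (keyEntry)
        KeyPair keyPair = KeyPairGenerator.getInstance("RSA").generateKeyPair();
        KeyInfoFactory kif = fac.getKeyInfoFactory();
        KeyInfo keyInfo = kif.newKeyInfo(
          Collections.singletonList(kif.newKeyValue(keyPair.getPublic())));

        DOMSignContext domSignContext =
          new DOMSignContext(keyPair.getPrivate(), document.getDocumentElement());
        fac.newXMLSignature(signedInfo, keyInfo).sign(domSignContext);
        return document;
    }

    public static void main(String[] args) throws Exception {
        Document signed = sign("<data><value>42</value></data>");
        System.out.println(signed
          .getElementsByTagNameNS(XMLSignature.XMLNS, "Signature").getLength());
    }
}
```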

10. Security Beyond Core Java

As we have seen by now, the Java platform provides a lot of the necessary functionality to write secure applications. However, sometimes, these are quite low-level and not directly applicable to, for example, the standard security mechanism on the web.

For example, when working on our system, we generally don’t want to have to read the full OAuth RFC and implement that ourselves. We often need quicker, higher-level ways to achieve security. This is where application frameworks come into the picture – these help us achieve our objective with much less boilerplate code.

And, on the Java platform, that generally means Spring Security. The framework is part of the Spring ecosystem, but it can actually be used outside of a pure Spring application.

In simple terms, it helps us achieve authentication, authorization, and other security features in a simple, declarative, high-level manner.

Of course, Spring Security is extensively covered in a series of tutorials, as well as in a guided way, in the Learn Spring Security course.

11. Conclusion

In short, in this tutorial, we went through the high-level architecture of security in Java. Also, we understood how Java provides us with implementations of some of the standard cryptographic services.

We also saw some of the common patterns that we can apply to achieve extensible and pluggable security in areas like authentication and access control.

To sum up, this just provides us with a sneak peek into the security features of Java. Consequently, each of the areas discussed in this tutorial merits further exploration. But hopefully, we should have enough insight to get started in this direction!

Sorting Strings by Contained Numbers in Java


1. Introduction

In this tutorial, we’ll look at how to sort alphanumeric Strings by the numbers they contain. We’ll focus on removing all non-numeric characters from the String before sorting multiple Strings by the numerical characters that remain.

We’ll look at common edge cases, including empty Strings and invalid numbers.

Finally, we’ll unit test our solution to ensure it works as expected.

2. Outlining the Problem

Before we begin, we need to describe what we want our code to achieve. For this particular problem, we’ll make the following assumptions:

  1. Our strings may contain only numbers, only letters, or a mix of the two.
  2. The numbers in our strings may be integers or doubles.
  3. When numbers in a string are separated by letters, we should remove the letters and condense the digits together. For example, 2d3 becomes 23.
  4. For simplicity, when an invalid or missing number appears, we should treat them as 0.

With this established, let’s get stuck into our solution.

3. A Regex Solution

Since our first step is to search for numeric patterns within our input String, we can put regular expressions, commonly known as regex, to use.

The first thing we need is our regex. We want to conserve all integers as well as decimal points from the input String. We can achieve our goal with the following:

String DIGIT_AND_DECIMAL_REGEX = "[^\\d.]";

String digitsOnly = input.replaceAll(DIGIT_AND_DECIMAL_REGEX, "");

Let’s briefly explain what’s happening:

  1. ‘[^ ]’ – denotes a negated set, therefore targeting any character not specified by the enclosed regex
  2. ‘\d’ – matches any digit character (0-9)
  3. ‘.’ – matches the “.” character

We then use the String.replaceAll method to remove any characters not specified by our regex. By doing this, we ensure the first three points of our goal are achieved.

Next, we need to add some conditions to ensure empty and invalid Strings return 0, while valid Strings return a valid Double:

if("".equals(digitsOnly)) return 0;

try {
    return Double.parseDouble(digitsOnly);
} catch (NumberFormatException nfe) {
    return 0;
}

That completes our logic. All that’s left to do is plug it into a comparator so that we can conveniently sort Lists of input Strings. 

Let’s create a method that returns our comparator so we can use it wherever we need it:

public static Comparator<String> createNaturalOrderRegexComparator() {
    return Comparator.comparingDouble(NaturalOrderComparators::parseStringToNumber);
}
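For reference, here is the parseStringToNumber helper referenced above, assembled with the earlier snippets into one self-contained class (the exact structure is our own reconstruction):

```java
import java.util.Comparator;

public class NaturalOrderComparators {

    private static final String DIGIT_AND_DECIMAL_REGEX = "[^\\d.]";

    // Strips non-numeric characters and parses the remainder, falling back to 0
    static double parseStringToNumber(String input) {
        String digitsOnly = input.replaceAll(DIGIT_AND_DECIMAL_REGEX, "");
        if ("".equals(digitsOnly)) {
            return 0;
        }
        try {
            return Double.parseDouble(digitsOnly);
        } catch (NumberFormatException nfe) {
            return 0;
        }
    }

    public static Comparator<String> createNaturalOrderRegexComparator() {
        return Comparator.comparingDouble(NaturalOrderComparators::parseStringToNumber);
    }
}
```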

4. Test, Test, Test

What good is code without tests to verify its functionality? Let’s set up a quick unit test to ensure it all works as we planned:

List<String> testStrings = 
  Arrays.asList("a1", "d2.2", "b3", "d2.3.3d", "c4", "d2.f4"); // 1, 2.2, 3, 0, 4, 2.4

testStrings.sort(NaturalOrderComparators.createNaturalOrderRegexComparator());

List<String> expected = Arrays.asList("d2.3.3d", "a1", "d2.2", "d2.f4", "b3", "c4");

assertEquals(expected, testStrings);

In this unit test, we’ve packed in all of the scenarios we’ve planned for. Invalid numbers, integers, decimals, and letter-separated numbers are all included in our testStrings variable.

5. Conclusion

In this short article, we’ve demonstrated how to sort alphanumeric strings based on the numbers within them – making use of regular expressions to do the hard work for us.

We’ve handled standard exceptions that may occur when parsing input strings and tested the different scenarios with unit testing.

As always, the code can be found over on GitHub.

A Guide to Increment and Decrement Unary Operators in Java


1. Overview

In this tutorial, we’ll briefly discuss the increment and decrement unary operators in Java.

We’ll start by looking at the syntax followed by the usage.

2. Increment and Decrement Operations in Java

In Java, the increment unary operator increases the value of the variable by one while the decrement unary operator decreases the value of the variable by one.

Both update the value of the operand to its new value.

The operand must be a variable, not a constant, as we wouldn’t be able to modify a constant’s value. Furthermore, the operand can’t be an expression, because we can’t update an expression.

The increment and decrement unary operators have two forms: prefix and postfix.

3. Pre-Increment and Pre-Decrement Unary Operators

In the prefix form, the increment and decrement unary operators appear before the operand.

While using the prefix form, we first update the value of the operand and then we use the new value in the expression.

First, let’s look at a code snippet using the pre-increment unary operator:

int operand = 1;
++operand; // operand = 2
int number = ++operand; // operand = 3, number = 3

Next, let’s have a look at the code snippet using the pre-decrement one:

int operand = 2;
--operand; // operand = 1
int number = --operand; // operand = 0, number = 0

As we see, the prefix operators change the value of the operand first, and then the rest of the expression gets evaluated. This can easily lead to confusion if embedded in a complex expression. It’s recommended we use them on their own line rather than in larger expressions.
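For instance, consider how quickly a mixed expression becomes hard to follow (class name is our own):

```java
public class MixedUnary {
    public static void main(String[] args) {
        int i = 5;
        // ++i increments i to 6 and yields 6; i-- yields 6, then decrements i back to 5
        int result = ++i * i--;
        System.out.println(result); // prints "36"
        System.out.println(i);      // prints "5"
    }
}
```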

4. Post-Increment and Post-Decrement Unary Operators

In the postfix form, the operator appears after the operand.

While using the postfix form, we first use the value of the operand in the expression and then update it.

Let’s look at a sample code snippet using the post-increment operator:

int operand = 1;
operand++; // operand = 2
int number = operand++; // operand = 3, number = 2

Also, let’s have a look at the post-decrement one:

int operand = 2;
operand--; // operand = 1
int number = operand--; // operand = 0, number = 1

Similarly, post-increment and post-decrement unary operators should be on their own line rather than including them in larger expressions.
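A classic pitfall illustrates why this advice matters: in the sketch below, the postfix expression yields the old value, which is then assigned back over the increment (the class and method names are ours, for illustration):

```java
public class PostIncrementDemo {

    static int selfAssign() {
        int i = 5;
        // i++ evaluates to 5 first; that old value is then assigned
        // back to i, silently discarding the increment
        i = i++;
        return i;
    }

    public static void main(String[] args) {
        System.out.println(selfAssign()); // prints 5, not 6
    }
}
```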

5. Conclusion

In this quick tutorial, we learned about the increment and decrement unary operators in Java. Moreover, we looked at their two forms: prefix and postfix. Finally, we looked at their syntax through sample code snippets.

The full source code of our examples here is, as always, over on GitHub.

Metaprogramming in Groovy


1. Overview

Groovy is a dynamic and powerful JVM language which has numerous features like closures and traits.

In this tutorial, we’ll explore the concept of Metaprogramming in Groovy.

2. What is Metaprogramming?

Metaprogramming is a programming technique of writing a program to modify itself or another program using metadata.

In Groovy, it’s possible to perform metaprogramming at both runtime and compile-time. Going forward, we’ll explore a few notable features of both techniques.

3. Runtime Metaprogramming

Runtime metaprogramming enables us to alter the existing properties and methods of a class. Also, we can attach new properties and methods; all at runtime.

Groovy provides a few methods and properties that help to alter the behavior of a class at runtime.

3.1. propertyMissing

When we try to access an undefined property of a Groovy class, it throws a MissingPropertyException. To avoid the exception, Groovy provides the propertyMissing method.

First, let’s write an Employee class with some properties:

class Employee {
    String firstName
    String lastName  
    int age
}

Second, we’ll create an Employee object and try to display an undefined property address. Consequently, it will throw the MissingPropertyException:

Employee emp = new Employee(firstName: "Norman", lastName: "Lewis")
println emp.address
groovy.lang.MissingPropertyException: No such property: 
address for class: com.baeldung.metaprogramming.Employee

Groovy provides the propertyMissing method to catch the missing property request. Therefore, we can avoid a MissingPropertyException at runtime.

To catch a missing property’s getter method call, we’ll define it with a single argument for the property name:

def propertyMissing(String propertyName) {
    "property '$propertyName' is not available"
}
assert emp.address == "property 'address' is not available"

Also, the same method can take a second argument for the property’s value, allowing us to catch a missing property’s setter method call:

def propertyMissing(String propertyName, propertyValue) { 
    println "cannot set $propertyValue - property '$propertyName' is not available" 
}

3.2. methodMissing

The methodMissing method is similar to propertyMissing. However, methodMissing intercepts a call for any missing method, thereby avoiding the MissingMethodException.

Let’s try to call the getFullName method on an Employee object. As getFullName is missing, execution will throw the MissingMethodException at runtime:

try {
    emp.getFullName()
} catch (MissingMethodException e) {
    println "method is not defined"
}

So, instead of wrapping a method call in a try-catch, we can define methodMissing:

def methodMissing(String methodName, def methodArgs) {
    "method '$methodName' is not defined"
}
assert emp.getFullName() == "method 'getFullName' is not defined"

3.3. ExpandoMetaClass

Groovy provides a metaClass property in all its classes. The metaClass property refers to an instance of the ExpandoMetaClass.

The ExpandoMetaClass class provides numerous ways to transform an existing class at runtime. For example, we can add properties, methods, or constructors.

First, let’s add the missing address property to the Employee class using metaClass property:

Employee.metaClass.address = ""
Employee emp = new Employee(firstName: "Norman", lastName: "Lewis", address: "US")
assert emp.address == "US"

Moving further, let’s add the missing getFullName method to the Employee class object at runtime:

emp.metaClass.getFullName = {
    "$lastName, $firstName"
}
assert emp.getFullName() == "Lewis, Norman"

Similarly, we can add a constructor to the Employee class at runtime:

Employee.metaClass.constructor = { String firstName -> 
    new Employee(firstName: firstName) 
}
Employee norman = new Employee("Norman")
assert norman.firstName == "Norman"
assert norman.lastName == null

Likewise, we can add static methods using metaClass.static.

The metaClass property is not only handy for modifying user-defined classes, but also for modifying existing Java classes at runtime.

For example, let’s add a capitalize method to the String class:

String.metaClass.capitalize = { String str ->
    str.substring(0, 1).toUpperCase() + str.substring(1);
}
assert "norman".capitalize() == "Norman"

3.4. Extensions

An extension can add a method to a class at runtime and make it accessible globally.

The methods defined in an extension should always be static, with the self class object as the first argument.

For example, let’s write a BasicExtension class to add a getYearOfBirth method to the Employee class:

class BasicExtensions {
    static int getYearOfBirth(Employee self) {
        return (new Date().getYear() + 1900) - self.age;
    }
}

To enable the BasicExtensions, we’ll need to add the configuration file in the META-INF/services directory of our project.

So, let’s add the org.codehaus.groovy.runtime.ExtensionModule file with the following configuration:

moduleName=core-groovy-2 
moduleVersion=1.0-SNAPSHOT 
extensionClasses=com.baeldung.metaprogramming.extension.BasicExtensions

Let’s verify the getYearOfBirth method added in the Employee class:

Employee emp = new Employee(age: 28)
assert emp.getYearOfBirth() == 1991

Similarly, to add static methods in a class, we’ll need to define a separate extension class.

For instance, let’s add a static method getDefaultObj to our Employee class by defining StaticEmployeeExtension class:

class StaticEmployeeExtension {
    static Employee getDefaultObj(Employee self) {
        return new Employee(firstName: "firstName", lastName: "lastName", age: 20)
    }
}

Then, we enable the StaticEmployeeExtension by adding the following configuration to the ExtensionModule file:

staticExtensionClasses=com.baeldung.metaprogramming.extension.StaticEmployeeExtension

Now, all we need is to test our static getDefaultObj method on the Employee class:

assert Employee.getDefaultObj().firstName == "firstName"
assert Employee.getDefaultObj().lastName == "lastName"
assert Employee.getDefaultObj().age == 20

Similarly, using extensions, we can add a method to pre-compiled Java classes like Integer and Long:

public static Integer printCounter(Integer self) {
    while (self > 0) {
        println self
        self--
    }
    return self
}
assert 5.printCounter() == 0

public static Long square(Long self) {
    return self * self
}
assert 40l.square() == 1600l

4. Compile-time Metaprogramming

Using specific annotations, we can effortlessly alter the class structure at compile-time. In other words, we can use annotations to modify the abstract syntax tree of a class during compilation.

Let’s discuss some of the annotations which are quite handy in Groovy to reduce boilerplate code. Many of them are available in the groovy.transform package.

If we carefully analyze, we’ll realize that a few of these annotations provide features similar to those of the Project Lombok library for Java.

4.1. @ToString

The @ToString annotation adds a default implementation of the toString method to a class at compile-time. All we need is to add the annotation to the class.

For instance, let’s add the @ToString annotation to our Employee class:

@ToString
class Employee {
    long id
    String firstName
    String lastName
    int age
}

Now, we’ll create an object of the Employee class and verify the string returned by the toString method:

Employee employee = new Employee()
employee.id = 1
employee.firstName = "norman"
employee.lastName = "lewis"
employee.age = 28

assert employee.toString() == "com.baeldung.metaprogramming.Employee(1, norman, lewis, 28)"

We can also declare parameters such as excludes, includes, includePackage and ignoreNulls with @ToString to modify the output string.

For example, let’s exclude id and package from the string of the Employee object:

@ToString(includePackage=false, excludes=['id'])
assert employee.toString() == "Employee(norman, lewis, 28)"

4.2. @TupleConstructor

The @TupleConstructor annotation adds a parameterized constructor to a Groovy class. This annotation creates a constructor with a parameter for each property.

For example, let’s add @TupleConstructor to the Employee class:

@TupleConstructor 
class Employee { 
    long id 
    String firstName 
    String lastName 
    int age 
}

Now, we can create an Employee object by passing parameters in the order of the properties defined in the class:

Employee norman = new Employee(1, "norman", "lewis", 28)
assert norman.toString() == "Employee(norman, lewis, 28)"

If we don’t provide values to the properties while creating objects, Groovy will consider default values:

Employee snape = new Employee(2, "snape")
assert snape.toString() == "Employee(snape, null, 0)"

Similar to @ToString, we can declare parameters such as excludes, includes and includeSuperProperties with @TupleConstructor to alter the behavior of its associated constructor as needed.

4.3. @EqualsAndHashCode

We can use @EqualsAndHashCode to generate the default implementation of equals and hashCode methods at compile time.

Let’s verify the behavior of @EqualsAndHashCode by adding it to the Employee class:

Employee normanCopy = new Employee(1, "norman", "lewis", 28)

assert norman == normanCopy
assert norman.hashCode() == normanCopy.hashCode()

4.4. @Canonical

@Canonical is a combination of @ToString, @TupleConstructor, and @EqualsAndHashCode annotations.

Just by adding it, we can easily include all three to a Groovy class. Also, we can declare @Canonical with any of the specific parameters of all three annotations.

4.5. @AutoClone

A quick and reliable way to implement the Cloneable interface is to add the @AutoClone annotation.

Let’s verify the clone method after adding @AutoClone to the Employee class:

try {
    Employee norman = new Employee(1, "norman", "lewis", 28)
    def normanCopy = norman.clone()
    assert norman == normanCopy
} catch (CloneNotSupportedException e) {
    e.printStackTrace()
}

4.6. Logging support with @Log, @Commons, @Log4j, @Log4j2, and @Slf4j

To add logging support to any Groovy class, all we need is to add one of the annotations available in the groovy.util.logging package.

Let’s enable the logging provided by JDK by adding the @Log annotation to the Employee class. Afterward, we’ll add the logEmp method:

def logEmp() {
    log.info "Employee: $lastName, $firstName is of $age years age"
}

Calling the logEmp method on an Employee object will show the logs on the console:

Employee employee = new Employee(1, "Norman", "Lewis", 28)
employee.logEmp()
INFO: Employee: Lewis, Norman is of 28 years age

Similarly, the @Commons annotation is available to add Apache Commons logging support. @Log4j is available for Apache Log4j 1.x logging support and @Log4j2 for Apache Log4j 2.x. Finally, use @Slf4j to add Simple Logging Facade for Java support.

5. Conclusion

In this tutorial, we’ve explored the concept of metaprogramming in Groovy.

Along the way, we’ve seen a few notable metaprogramming features both for runtime and compile-time.

At the same time, we’ve explored additional handy annotations available in Groovy for cleaner and dynamic code.

As usual, the code implementations for this article are available in the GitHub project.

Binary Numbers in Java


1. Introduction

The binary number system uses 0s and 1s to represent numbers. Computers use binary numbers to store and perform operations on any data.

In this tutorial, we’ll learn how to convert binary to decimal and vice versa. Also, we’ll perform addition and subtraction on them.

2. Binary Literal

Java 7 introduced the binary literal. It simplified binary number usage.

To use it, we need to prefix the number with 0B or 0b:

@Test
public void given_binaryLiteral_thenReturnDecimalValue() {

    byte five = 0b101;
    assertEquals((byte) 5, five);

    short three = 0b11;
    assertEquals((short) 3, three);

    int nine = 0B1001;
    assertEquals(9, nine);

    long twentyNine = 0B11101;
    assertEquals(29, twentyNine);

    int minusThirtySeven = -0B100101;
    assertEquals(-37, minusThirtySeven);

}
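Java 7 also introduced underscores in numeric literals, which make long binary literals considerably more readable; the compiler simply ignores them. A small sketch (the class name is ours, for illustration):

```java
public class BinaryLiteralDemo {

    static int readable() {
        // Underscores may separate digit groups in any numeric literal,
        // including binary ones: 0b1010_1010 is the same as 0b10101010
        return 0b1010_1010;
    }

    public static void main(String[] args) {
        System.out.println(readable()); // prints 170
    }
}
```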

3. Binary Number Conversion

In this section, we’ll learn how to convert a binary number into its decimal format and vice versa. Here, we’ll first use a built-in Java function for conversion, and then we’ll write our custom methods for the same.

3.1. Decimal to a Binary Number

Integer has a function named toBinaryString to convert a decimal number into its binary string:

@Test
public void given_decimalNumber_then_convertToBinaryNumber() {
    assertEquals("1000", Integer.toBinaryString(8));
    assertEquals("10100", Integer.toBinaryString(20));
}

Now, we can try to write our own logic for this conversion. Before writing the code, let’s first understand how to convert a decimal number into a binary one.

To convert a decimal number n into its binary format, we need to:

  1. Divide n by 2, noting the quotient q and the remainder r
  2. Divide q by 2, noting its quotient and remainder
  3. Repeat step 2 until we get 0 as the quotient
  4. Concatenate in reverse order all remainders

Let’s see an example of converting 6 into its binary format equivalent:

  1. First, divide 6 by 2: quotient 3, remainder 0
  2. Then, divide 3 by 2: quotient 1, remainder 1
  3. And finally, divide 1 by 2: quotient 0, remainder 1
  4. Concatenating the remainders in reverse order gives 110

Let’s now implement the above algorithm:

public Integer convertDecimalToBinary(Integer decimalNumber) {

    if (decimalNumber == 0) {
        return decimalNumber;
    }

    StringBuilder binaryNumber = new StringBuilder();
    Integer quotient = decimalNumber;

    while (quotient > 0) {
        int remainder = quotient % 2;
        binaryNumber.append(remainder);
        quotient /= 2;
    }

    binaryNumber = binaryNumber.reverse();
    return Integer.valueOf(binaryNumber.toString());
}

3.2. Binary to a Decimal Number

To parse a binary string, the Integer class provides a parseInt function:

@Test
public void given_binaryNumber_then_ConvertToDecimalNumber() {
    assertEquals(8, Integer.parseInt("1000", 2));
    assertEquals(20, Integer.parseInt("10100", 2));
}

Here, the parseInt function takes two parameters as input:

  1. Binary string to be converted
  2. Radix, or base, of the number system in which the input string is expressed
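It’s worth noting that parseInt validates every digit against the given radix. A quick sketch showing the round trip with toBinaryString, and the NumberFormatException raised by an invalid binary digit (the class and method names are ours, for illustration):

```java
public class ParseBinaryDemo {

    static boolean rejectsInvalidDigit() {
        try {
            Integer.parseInt("10201", 2); // '2' is not a valid binary digit
            return false;
        } catch (NumberFormatException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        // toBinaryString and parseInt(s, 2) are inverses for non-negative ints
        System.out.println(Integer.parseInt(Integer.toBinaryString(42), 2)); // 42
        System.out.println(rejectsInvalidDigit()); // true
    }
}
```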

Now, let’s try to write our own logic to convert a binary number into decimal:

  1. Start with the rightmost digit
  2. Multiply each digit by 2^position of that digit – here, the rightmost digit’s position is zero, and the position increases as we move to the left
  3. Add the result of all the multiplications to get the final decimal number

Again, let’s see our method in action:

  1. First, 101011 = (1*2^5) + (0*2^4)  + (1*2^3) + (0*2^2) + (1*2^1) + (1*2^0)
  2. Next, 101011 = (1*32) + (0*16) + (1*8) + (0*4)  + (1*2) + (1*1)
  3. Then, 101011 = 32 + 0 + 8 + 0 + 2 + 1
  4. And finally, 101011 = 43

Let’s finally code the above steps:

public Integer convertBinaryToDecimal(Integer binaryNumber) {

    Integer decimalNumber = 0;
    Integer base = 1;

    while (binaryNumber > 0) {
        int lastDigit = binaryNumber % 10;
        binaryNumber = binaryNumber / 10;
        decimalNumber += lastDigit * base;
        base = base * 2;
    }
    return decimalNumber;
}

4. Arithmetic Operations

In this section, we’ll concentrate on performing the arithmetic operations on binary numbers.

4.1. Addition

Just like the decimal number addition, we start adding the numbers from the rightmost digit.

While adding two binary digits, we need to remember the following rules:

  • 0 + 0 = 0
  • 0 + 1 = 1
  • 1 + 1 = 10 
  • 1 + 1 + 1 = 11 

These rules can be implemented as:

public Integer addBinaryNumber(Integer firstNum, Integer secondNum) {
    StringBuilder output = new StringBuilder();
    int carry = 0;
    int temp;
    while (firstNum != 0 || secondNum != 0) {
        temp = (firstNum % 10 + secondNum % 10 + carry) % 2;
        output.append(temp);

        carry = (firstNum % 10 + secondNum % 10 + carry) / 2;
        firstNum = firstNum / 10;
        secondNum = secondNum / 10;
    }
    if (carry != 0) {
        output.append(carry);
    }
    return Integer.valueOf(output.reverse().toString());
}

4.2. Subtraction

There are many ways to subtract binary numbers. In this section, we’ll use the one’s complement method to do the subtraction.

Let’s first understand what the one’s complement of a number is.

The one’s complement of a binary number is obtained by negating each of its digits, replacing every 1 with a 0 and every 0 with a 1:

public Integer getOnesComplement(Integer num) {
    StringBuilder onesComplement = new StringBuilder();
    while (num > 0) {
        int lastDigit = num % 10;
        if (lastDigit == 0) {
            onesComplement.append(1);
        } else {
            onesComplement.append(0);
        }
        num = num / 10;
    }
    return Integer.valueOf(onesComplement.reverse().toString());
}

To do subtraction of two binary numbers using one’s complement, we need to:

  1. Calculate the one’s complement of the subtrahend s
  2. Add s and the minuend
  3. If a carry gets generated in step 2, then add that carry to step 2’s result to get the final answer.
  4. If a carry is not generated in step 2, then the one’s complement of step 2’s result is the final answer. But in this case, the answer is negative

Let’s implement the above steps:

public Integer substractBinaryNumber(Integer firstNum, Integer secondNum) {
    int onesComplement = Integer.valueOf(getOnesComplement(secondNum));
    StringBuilder output = new StringBuilder();
    int carry = 0;
    int temp;
    while (firstNum != 0 || onesComplement != 0) {
        temp = (firstNum % 10 + onesComplement % 10 + carry) % 2;
        output.append(temp);
        carry = (firstNum % 10 + onesComplement % 10 + carry) / 2;

        firstNum = firstNum / 10;
        onesComplement = onesComplement / 10;
    }
    String additionOfFirstNumAndOnesComplement = output.reverse().toString();
    if (carry == 1) {
        return addBinaryNumber(Integer.valueOf(additionOfFirstNumAndOnesComplement), carry);
    } else {
        return getOnesComplement(Integer.valueOf(additionOfFirstNumAndOnesComplement));
    }
}
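As a sanity check, we can cross-verify the one’s complement subtraction against Java’s built-in conversions: parse both operands, subtract in decimal, and format the difference back to binary. A minimal sketch (the class and method names are ours, for illustration):

```java
public class BinarySubtractionCheck {

    // Subtracts two binary strings using the built-in conversions,
    // returning the difference in binary form
    static String subtract(String a, String b) {
        int diff = Integer.parseInt(a, 2) - Integer.parseInt(b, 2);
        return Integer.toBinaryString(diff);
    }

    public static void main(String[] args) {
        System.out.println(subtract("1101", "1011")); // 13 - 11 = 2, prints "10"
    }
}
```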

5. Conclusion

In this article, we learned how to convert binary numbers into decimal ones and vice versa. Then, we performed arithmetic operations such as addition and subtraction on binary numbers.

The complete code used in this article is available over on GitHub.

Running a Spring Boot App with Maven vs an Executable War/Jar


1. Introduction

In this tutorial, we’ll explore the differences between starting a Spring Boot web application via the mvn spring-boot:run command and running it after it is compiled into a jar/war package via the java -jar command.

Let’s assume here you’re already familiar with the configuration of the Spring Boot repackage goal. For more details on this topic, please read Create a Fat Jar App with Spring Boot.

2. The Spring Boot Maven Plugin

When writing a Spring Boot application, the Spring Boot Maven plugin is the recommended tool to build, test, and package our code.

This plugin ships with lots of convenient features, such as:

  • it resolves the correct dependency versions for us
  • it can package all our dependencies (including an embedded application server if needed) in a single, runnable fat jar/war and will also:
    • manage for us the classpath configuration, so we can skip that long -cp option in our java -jar command
    • implement a custom ClassLoader to locate and load all the external jar libraries, now nested inside the package
    • find automatically the main() method and configure it in the manifest, so we don’t have to specify the main class in our java -jar command

3. Running the Code with Maven in Exploded Form

When we’re working on a web application, we can leverage another very interesting feature of the Spring Boot Maven plugin: the ability to automatically deploy our web application in an embedded application server.

We only need one dependency to let the plugin know we want to use Tomcat to run our code:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId> 
</dependency>

Now, when executing the mvn spring-boot:run command in our project root folder, the plugin reads the pom configuration and understands that we require a web application container.

Executing the mvn spring-boot:run command triggers the download of Apache Tomcat and initializes the startup of Tomcat.

Let’s try it:

$ mvn spring-boot:run
...
...
[INFO] --------------------< com.baeldung:spring-boot-ops >--------------------
[INFO] Building spring-boot-ops 0.0.1-SNAPSHOT
[INFO] --------------------------------[ war ]---------------------------------
[INFO]
[INFO] >>> spring-boot-maven-plugin:2.1.3.RELEASE:run (default-cli) > test-compile @ spring-boot-ops >>>
Downloading from central: https://repo.maven.apache.org/maven2/org/apache/tomcat/embed/tomcat-embed-core/9.0.16/tomcat-embed-core-9.0.16.pom
Downloaded from central: https://repo.maven.apache.org/maven2/org/apache/tomcat/embed/tomcat-embed-core/9.0.16/tomcat-embed-core-9.0.16.pom (1.8 kB at 2.8 kB/s)
...
...
[INFO] --- spring-boot-maven-plugin:2.1.3.RELEASE:run (default-cli) @ spring-boot-ops ---
...
...
11:33:36.648 [main] INFO  o.a.catalina.core.StandardService - Starting service [Tomcat]
11:33:36.649 [main] INFO  o.a.catalina.core.StandardEngine - Starting Servlet engine: [Apache Tomcat/9.0.16]
...
...
11:33:36.952 [main] INFO  o.a.c.c.C.[Tomcat].[localhost].[/] - Initializing Spring embedded WebApplicationContext
...
...
11:33:48.223 [main] INFO  o.a.coyote.http11.Http11NioProtocol - Starting ProtocolHandler ["http-nio-8080"]
11:33:48.289 [main] INFO  o.s.b.w.e.tomcat.TomcatWebServer - Tomcat started on port(s): 8080 (http) with context path ''
11:33:48.292 [main] INFO  org.baeldung.boot.Application - Started Application in 22.454 seconds (JVM running for 37.692)

When the log shows the line containing ‘Started Application’, our web application is ready to be queried via the browser at http://localhost:8080/.

4. Running the Code as a Stand-Alone Packaged Application

Once we pass the development phase and we want to progress towards bringing our application to production, we need to package our application.

Unfortunately, if we are working with a jar package, the basic Maven package goal doesn’t include any of the external dependencies.

This means that we can use it only as a library in a bigger project.

To circumvent this limitation, we need to leverage the Maven Spring Boot plugin repackage goal to run our jar/war as a stand-alone application.

4.1. Configuration

Usually, we only need to configure the build plugin:

<build>
    <plugins>
        ...
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
        ...
    </plugins>
</build>

But our example project contains more than one main class, so we have to tell Java which class to run, by either configuring the plugin:

<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <executions>
        <execution>
            <configuration>
                <mainClass>com.baeldung.webjar.WebjarsdemoApplication</mainClass>
            </configuration>
        </execution>
    </executions>
</plugin>

or by setting the start-class property:

<properties>
    <start-class>com.baeldung.webjar.WebjarsdemoApplication</start-class>
</properties>

4.2. Running the Application

Now, we can run our example war with two simple commands:

$ mvn clean package spring-boot:repackage
$ java -jar target/spring-boot-ops.war

More details regarding how to run a jar file can be found in our article Run JAR Application With Command Line Arguments.

4.3. Inside the War File

To understand better how the command mentioned above can run a full server application, we can take a look into our spring-boot-ops.war.

If we uncompress it and peek inside, we find the usual suspects:

  • META-INF, with the auto-generated MANIFEST.MF
  • WEB-INF/classes, containing our compiled classes
  • WEB-INF/lib, which holds our war dependencies and the embedded Tomcat jar files

That’s not all though, as there are some folders specific to our fat package configuration:

  •  WEB-INF/lib-provided, containing external libraries required when running embedded but not required when deploying
  • org/springframework/boot/loader, which holds the Spring Boot custom class loader — this library is responsible for loading our external dependencies and making them accessible in runtime

4.4. Inside the War Manifest

As mentioned before, the Maven Spring Boot plugin finds the main class and generates the configuration needed for running the java command.

The resulting MANIFEST.MF has some additional lines:

Start-Class: com.baeldung.webjar.WebjarsdemoApplication
Main-Class: org.springframework.boot.loader.WarLauncher

In particular, we can observe that the last one specifies the Spring Boot class loader launcher to use.

4.5. Inside a Jar File

Due to the default packaging strategy, our war packaging scenario doesn’t differ much, whether we use the Spring Boot Maven Plugin or not.

To better appreciate the advantages of the plugin, we can try changing the pom packaging configuration to jar and run mvn clean package again.

We can now observe that our fat jar is organized a bit differently from our previous war file:

  • All our classes and resources folders are now located under BOOT-INF/classes
  • BOOT-INF/lib holds all the external libraries

Without the plugin, the lib folder would not exist, and all the content of BOOT-INF/classes would be located in the root of the package.

4.6. Inside the Jar Manifest

The MANIFEST.MF has also changed, featuring these additional lines:

Spring-Boot-Classes: BOOT-INF/classes/
Spring-Boot-Lib: BOOT-INF/lib/
Spring-Boot-Version: 2.1.3.RELEASE
Main-Class: org.springframework.boot.loader.JarLauncher

Spring-Boot-Classes and Spring-Boot-Lib are particularly interesting, as they tell us where the class loader is going to find classes and external libraries.
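We can also inspect these attributes programmatically with the java.util.jar API. The following is a self-contained sketch: it writes a throwaway jar whose manifest mimics the entries above (the Start-Class value com.example.DemoApplication is invented) and then reads the Main-Class attribute back:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.util.jar.Attributes;
import java.util.jar.JarInputStream;
import java.util.jar.JarOutputStream;
import java.util.jar.Manifest;

public class ManifestDemo {

    // Writes a minimal jar whose manifest mimics the Spring Boot layout,
    // then reads the Main-Class attribute back
    static String roundTrip() {
        try {
            Manifest mf = new Manifest();
            Attributes attrs = mf.getMainAttributes();
            attrs.put(Attributes.Name.MANIFEST_VERSION, "1.0");
            attrs.put(Attributes.Name.MAIN_CLASS, "org.springframework.boot.loader.JarLauncher");
            attrs.putValue("Start-Class", "com.example.DemoApplication");

            File jar = File.createTempFile("demo", ".jar");
            jar.deleteOnExit();
            try (JarOutputStream out = new JarOutputStream(new FileOutputStream(jar), mf)) {
                // an empty archive is enough: the manifest is written first
            }
            try (JarInputStream in = new JarInputStream(new FileInputStream(jar))) {
                return in.getManifest().getMainAttributes().getValue("Main-Class");
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip()); // org.springframework.boot.loader.JarLauncher
    }
}
```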

5. How to Choose

When analyzing tools, it’s imperative to take into account the purpose these tools were created for. Do we want to ease development, or ensure smooth deployment and portability? Let’s have a look at the phases most affected by this choice.

5.1. Development

As developers, we often spend most of our time coding without needing to spend a lot of time setting up our environment to run the code locally. In simple applications, that’s usually not a concern. But, for more complex projects, we may need to set environment variables, start servers, and populate databases.

Configuring the right environment every time we want to run the application would be very impractical, especially if more than one service has to run at the same time.

That’s where running the code with Maven helps us. We already have the entire codebase checked out locally, so we can leverage the pom configuration and resource files. We can set environment variables, spawn an in-memory database, and even download the correct server version and deploy our application with one command.

Even in a multi-module codebase, where each module needs different variables and server versions, we can easily run the right environment via Maven profiles.

5.2. Production

The more we move towards production, the more the conversation shifts towards stability and security. That is why we cannot apply the process used for our development machine to a server with live customers.

Running the code through Maven at this stage is bad practice for multiple reasons:

  • First of all, we would need to install Maven
  • Then, just because we need to compile the code, we need the full Java Development Kit (JDK)
  • Next, we have to copy the codebase to our server, leaving all our proprietary code in plain text
  • The mvn command has to execute all phases of the life cycle (find sources, compile, and run)
  • Thanks to the previous point, we would also waste CPU and, in the case of a cloud server, money
  • Maven spawns multiple Java processes, each using memory (by default, they each use the same memory amount as the parent process)
  • Finally, if we have multiple servers to deploy, all the above is repeated on each one

These are just a few reasons why shipping the application as a package is more practical for production.

6. Conclusion

In this tutorial, we explored the differences between running our code via Maven and via the java -jar command. We also took a quick look at some practical scenarios.

The source code used in this article is available over on GitHub.


An Introduction to Java SASL


1. Overview

In this tutorial, we’ll go through the basics of Simple Authentication and Security Layer (SASL). We’ll understand how Java supports adopting SASL for securing communication.

In the process, we’ll use simple client and server communication, securing it with SASL.

2. What is SASL?

SASL is a framework for authentication and data security in Internet protocols. It aims to decouple internet protocols from specific authentication mechanisms. We’ll better understand parts of this definition as we go along.

The need for security in communication is implicit. Let’s try to understand this in the context of client and server communication. Typically client and server exchange data over the network. It is imperative that both parties can trust each other and send data securely.

2.1. Where does SASL Fit in?

In an application, we may use SMTP to send emails and use LDAP to access directory services. But each of these protocols may support different authentication mechanisms, like Digest-MD5 or Kerberos.

What if there was a way for protocols to swap authentication mechanisms more declaratively? This is exactly where SASL comes into the picture. Protocols supporting SASL can invariably support any of the SASL mechanisms.

Hence, applications can negotiate a suitable mechanism and adopt that for authentication and secure communication.

2.2. How does SASL Work?

Now, that we’ve seen where SASL fits in the overall scheme of security, let’s understand how it works.

SASL is a challenge-response framework. Here, the server issues a challenge to the client, and the client sends a response based on the challenge. The challenge and response are byte arrays of arbitrary length and, hence, can carry any mechanism-specific data.

This exchange can continue for multiple iterations and finally ends when the server issues no further challenge.

Furthermore, the client and server can negotiate a security layer post-authentication. All subsequent communication can then leverage this security layer. However, note that some of the mechanisms may only support authentication.

It’s important to understand here that SASL only provides a framework for the exchange of challenge and response data. It does not mention anything about the data itself or how they are exchanged. Those details are left for the applications adopting to use SASL.

3. SASL Support in Java

There are APIs in Java that support developing both client-side and server-side applications with SASL. The API is not dependent on the actual mechanisms themselves, so applications using the Java SASL API can select a mechanism based on the security features required.

3.1. Java SASL API

The key interfaces to notice in the javax.security.sasl package are SaslServer and SaslClient.

SaslServer represents the server-side mechanism of SASL.

Let’s see how we can instantiate a SaslServer:

SaslServer ss = Sasl.createSaslServer(
  mechanism, 
  protocol, 
  serverName, 
  props, 
  callbackHandler);

We are using the factory class Sasl to instantiate SaslServer. The method createSaslServer accepts several parameters:

  • mechanism – the IANA registered name of a SASL supported mechanism
  • protocol – the name of the protocol for which authentication is being done
  • serverName – the fully qualified hostname of the server
  • props – a set of properties used to configure the authentication exchange
  • callbackHandler – a callback handler to be used by the selected mechanism to get further information

Out of the above, only the first two are mandatory; the rest are nullable.

SaslClient represents the client-side mechanism of SASL. Let's see how we can instantiate a SaslClient:

SaslClient sc = Sasl.createSaslClient(
  mechanisms, 
  authorizationId, 
  protocol, 
  serverName, 
  props,
  callbackHandler);

Here again, we’re using the factory class Sasl to instantiate our SaslClient. The list of parameters which createSaslClient accepts is pretty much the same as before.

However, there are some subtle differences:

  • mechanisms – here, this is a list of mechanisms to try from
  • authorizationId – this is a protocol-dependent identification to be used for authorization

The rest of the parameters are similar in meaning and optionality.

3.2. Java SASL Security Provider

Beneath the Java SASL API are the actual mechanisms that provide the security features. The implementation of these mechanisms is provided by security providers registered with the Java Cryptography Architecture (JCA).

There can be multiple security providers registered with the JCA. Each of these may support one or more of the SASL mechanisms.

Java ships with SunSASL as a security provider, which is registered as a JCA provider by default. However, it can be removed or reordered along with the other available providers.

Moreover, it is always possible to provide a custom security provider. This will require us to implement the interfaces SaslClient and SaslServer. In doing so, we may implement our custom security mechanism as well!
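
Which mechanisms are actually available depends on the providers registered in a given JVM. A quick way to check is to enumerate the registered SASL factories (a small diagnostic sketch; the exact output varies by JDK and provider configuration):

```java
import java.util.Enumeration;
import javax.security.sasl.Sasl;
import javax.security.sasl.SaslClientFactory;
import javax.security.sasl.SaslServerFactory;

public class ListSaslMechanisms {
    public static void main(String[] args) {
        // Server-side mechanisms offered by the registered JCA providers
        Enumeration<SaslServerFactory> serverFactories = Sasl.getSaslServerFactories();
        while (serverFactories.hasMoreElements()) {
            SaslServerFactory factory = serverFactories.nextElement();
            for (String mechanism : factory.getMechanismNames(null)) {
                System.out.println("server: " + mechanism);
            }
        }

        // Client-side mechanisms
        Enumeration<SaslClientFactory> clientFactories = Sasl.getSaslClientFactories();
        while (clientFactories.hasMoreElements()) {
            SaslClientFactory factory = clientFactories.nextElement();
            for (String mechanism : factory.getMechanismNames(null)) {
                System.out.println("client: " + mechanism);
            }
        }
    }
}
```

On a stock JDK with SunSASL registered, we'd typically see entries such as DIGEST-MD5 and CRAM-MD5 in the output.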

4. SASL Through an Example

Now that we’ve seen how to create a SaslServer and a SaslClient, it’s time to understand how to use them. We’ll be developing client and server components. These will exchange challenge and response iteratively to achieve authentication. We’ll make use of the DIGEST-MD5 mechanism in our simple example here.

4.1. Client and Server CallbackHandler

As we saw earlier, we need to provide implementations of CallbackHandler to SaslServer and SaslClient. Now, CallbackHandler is a simple interface that defines a single method — handle. This method accepts an array of Callback.

Here, Callback provides a way for the security mechanism to collect authentication data from the calling application. For instance, a security mechanism may require a username and password. There are quite a few Callback implementations, like NameCallback and PasswordCallback, available for use.

Let’s see how we can define a CallbackHandler for the server, to begin with:

public class ServerCallbackHandler implements CallbackHandler {
    @Override
    public void handle(Callback[] cbs) throws IOException, UnsupportedCallbackException {
        for (Callback cb : cbs) {
            if (cb instanceof AuthorizeCallback) {
                AuthorizeCallback ac = (AuthorizeCallback) cb;
                //Perform application-specific authorization action
                ac.setAuthorized(true);
            } else if (cb instanceof NameCallback) {
                NameCallback nc = (NameCallback) cb;
                //Collect username in application-specific manner
                nc.setName("username");
            } else if (cb instanceof PasswordCallback) {
                PasswordCallback pc = (PasswordCallback) cb;
                //Collect password in application-specific manner
                pc.setPassword("password".toCharArray());
            } else if (cb instanceof RealmCallback) { 
                RealmCallback rc = (RealmCallback) cb; 
                //Collect realm data in application-specific manner 
                rc.setText("myServer"); 
            }
        }
    }
}

Now, let’s see our client-side of the Callbackhandler:

public class ClientCallbackHandler implements CallbackHandler {
    @Override
    public void handle(Callback[] cbs) throws IOException, UnsupportedCallbackException {
        for (Callback cb : cbs) {
            if (cb instanceof NameCallback) {
                NameCallback nc = (NameCallback) cb;
                //Collect username in application-specific manner
                nc.setName("username");
            } else if (cb instanceof PasswordCallback) {
                PasswordCallback pc = (PasswordCallback) cb;
                //Collect password in application-specific manner
                pc.setPassword("password".toCharArray());
            } else if (cb instanceof RealmCallback) { 
                RealmCallback rc = (RealmCallback) cb; 
                //Collect realm data in application-specific manner 
                rc.setText("myServer"); 
            }
        }
    }
}

To clarify, we’re looping through the Callback array and handling only specific ones. The ones that we have to handle is specific to the mechanism in use, which is DIGEST-MD5 here.

4.2. SASL Authentication

So, we’ve written our client and server CallbackHandler. We’ve also instantiated SaslClient and SaslServer for DIGEST-MD5 mechanism.

Now is the time to see them in action:

@Test
public void givenHandlers_whenStarted_thenAuthenticationWorks() throws SaslException {
    byte[] challenge;
    byte[] response;
 
    challenge = saslServer.evaluateResponse(new byte[0]);
    response = saslClient.evaluateChallenge(challenge);
 
    challenge = saslServer.evaluateResponse(response);
    response = saslClient.evaluateChallenge(challenge);
 
    assertTrue(saslServer.isComplete());
    assertTrue(saslClient.isComplete());
}

Let’s try to understand what is happening here:

  • First, our client gets the default challenge from the server
  • The client then evaluates the challenge and prepares a response
  • This challenge-response exchange continues for one more cycle
  • In the process, the client and server make use of callback handlers to collect any additional data as needed by the mechanism
  • This concludes our authentication here, but in reality, it can iterate over multiple cycles

A typical exchange of challenge and response byte arrays happens over the network. But, here for simplicity, we’ve assumed local communication.

4.3. SASL Secure Communication

As we discussed earlier, SASL is a framework capable of supporting secure communication beyond just authentication. However, this is only possible if the underlying mechanism supports it.

Firstly, let’s first check if we have been able to negotiate a secure communication:

String qop = (String) saslClient.getNegotiatedProperty(Sasl.QOP);
 
assertEquals("auth-conf", qop);

Here, QOP stands for quality of protection. This is something that the client and server negotiate during authentication. A value of “auth-int” indicates authentication and integrity, while a value of “auth-conf” indicates authentication, integrity, and confidentiality.
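
The QOP to negotiate is requested up front through the props map passed to Sasl.createSaslClient() and Sasl.createSaslServer(). The value of the Sasl.QOP property is a comma-separated list in order of preference:

```java
import java.util.HashMap;
import java.util.Map;
import javax.security.sasl.Sasl;

public class QopProps {
    public static void main(String[] args) {
        Map<String, Object> props = new HashMap<>();

        // Preference order: confidentiality first, then integrity only
        props.put(Sasl.QOP, "auth-conf,auth-int");

        // This map is then passed as the props argument to
        // Sasl.createSaslClient(...) and Sasl.createSaslServer(...)
        System.out.println(props.get(Sasl.QOP));
    }
}
```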

Once we have a security layer, we can leverage that to secure our communication.

Let’s see how we can secure outgoing communication in the client:

byte[] outgoing = "Baeldung".getBytes();
byte[] secureOutgoing = saslClient.wrap(outgoing, 0, outgoing.length);
 
// Send secureOutgoing to the server over the network

And, similarly, the server can process incoming communication:

// Receive secureIncoming from the client over the network
byte[] incoming = saslServer.unwrap(secureIncoming, 0, secureIncoming.length);
 
assertEquals("Baeldung", new String(incoming, StandardCharsets.UTF_8));

5. SASL in the Real World

So, we now have a fair understanding of what SASL is and how to use it in Java. Typically, though, that's not what we'll end up using SASL for directly in our daily routine.

As we saw earlier, SASL is primarily meant for protocols like LDAP and SMTP. However, more and more applications are coming on board with SASL — for instance, Kafka. So, how do we use SASL to authenticate with such services?

Let’s suppose we’ve configured Kafka Broker for SASL with PLAIN as the mechanism of choice. PLAIN simply means that it authenticates using a combination of username and password in plain text.

Let’s now see how can we configure a Java client to use SASL/PLAIN to authenticate against the Kafka Broker.

We begin by providing a simple JAAS configuration, “kafka_jaas.conf”:

KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="username"
  password="password";
};

We make use of this JAAS configuration while starting the JVM:

-Djava.security.auth.login.config=kafka_jaas.conf

Finally, we have to add a few properties to pass to our producer and consumer instances:

security.protocol=SASL_SSL
sasl.mechanism=PLAIN

That’s all there is to it. This is just a small part of Kafka client configurations, though. Apart from PLAIN, Kafka also supports GSSAPI/Kerberos for authentication.

6. SASL in Comparison

SASL is quite effective in providing a mechanism-neutral way of authenticating and securing client-server communication. However, it's not the only solution available in this regard.

Java itself provides other mechanisms to achieve this objective. We’ll briefly discuss them and understand how they fare against SASL:

  • Java Secure Socket Extension (JSSE): JSSE is a set of packages in Java that implements Secure Sockets Layer (SSL) for Java. It provides data encryption, client and server authentication, and message integrity. Unlike SASL, JSSE relies on a Public Key Infrastructure (PKI) to work. Hence, SASL works out to be more flexible and lightweight than JSSE.
  • Java GSS API (JGSS): JGSS is the Java language binding for the Generic Security Service Application Programming Interface (GSS-API). GSS-API is an IETF standard for applications to access security services. In Java, Kerberos is the only mechanism supported under GSS-API. Kerberos, in turn, requires a Kerberized infrastructure to work. Compared to SASL, here too, the choices are limited and heavyweight.

Overall, SASL is a very lightweight framework and offers a wide variety of security features through pluggable mechanisms. Applications adopting SASL have a lot of choices in implementing the right set of security features, depending upon the need.

7. Conclusion

To sum up, in this tutorial, we understood the basics of the SASL framework, which provides authentication and secure communication. We also discussed the APIs available in Java for implementing the client- and server-side of SASL.

We saw how to use a security mechanism through a JCA provider. Finally, we also talked about the usage of SASL in working with different protocols and applications.

As always, the code can be found over on GitHub.

Find Files That Have Been Changed Recently in Linux


1. Introduction

There are various occasions when we want to search for files that have been changed recently.

For example, as system admins, we're responsible for maintaining and configuring computer systems. Sometimes, because we're dealing with a lot of configuration files, we may want to know which files were modified recently.

In this tutorial, we’re going to find the files that have been changed recently in Linux using bash commands.

2. The find Command

First, we’ll explore the find utility which is the most common way to achieve the intended purpose. This command is used to find files and directories recursively and to execute further operations on them. 

2.1. -mtime and -mmin

-mtime is handy, for example, if we want to find all the files from the current directory that have changed in the last 24 hours:

find . -mtime -1

Note that the . is used to refer to the current directory. -mtime n is an expression that finds the files and directories that have been modified exactly n days ago.

In addition, the expression can be used in two other ways:

  • -mtime +n = finds the files and directories modified more than n days ago
  • -mtime -n = finds the files and directories modified less than n days ago

In the same way, we can use the -mmin n expression to rely on minutes instead of days:

find /home/sports -mmin +120

So, this command recursively finds all the files and directories under the /home/sports directory modified more than 120 minutes ago.

Next, if we want to limit the searching only to files, excluding directories, we need to add the -type f expression:

find /home/sports -type f -mmin +120

Furthermore, we can even compose expressions. So, let’s find the files that have been changed less than 120 minutes ago and more than 60 minutes ago:

find . -type f -mmin -120 -mmin +60

2.2. -newermt

There are times when we want to find the files that were modified based on a particular date. In order to fulfill this requirement, we have to explore another parameter, which has the following syntax:

-newermt 'yyyy-mm-dd'

By using this expression, we can get the files that have been changed more recently than the specified date.

So, let’s build a command to better understand the new parameter:

find . -type f -newermt 2019-07-24

Moreover, we could get the files modified on a specific date by using a composed expression.

So, we’re going to get the files modified on ‘2019-07-24’:

find . -type f -newermt 2019-07-24 ! -newermt 2019-07-25

Finally, there’s another version of the -newermt parameter similar to -mmin and -mtime.

The first command finds the files modified in the last 24 hours. The rest of them are similar:

find . -type f -newermt "-24 hours" 
find . -type f -newermt "-10 minutes" 
find . -type f -newermt "1 day ago" 
find . -type f -newermt "yesterday"

3. The ls Command

We know that the ls command lists information about the files in a specific directory. One of its usages is to show the long format of the files and to sort the output by modification time:

ls -lt

Which would result in something like:

-rw-r--r-- 1 root root 4233 Jul 27 18:44 b.txt 
-rw-rw-r-- 1 root root 2946 Jul 27 18:12 linux-commands.txt 
-rw-r--r-- 1 root root 5233 Jul 20 17:02 a.txt

We may not be able to list recently modified files exactly as the find command does. But we can filter the above output by a specific date or time by applying the grep command to the result of the ls command:

ls -lt | grep 'Jul 27'
-rw-r--r-- 1 root root 4233 Jul 27 18:44 b.txt 
-rw-rw-r-- 1 root root 2946 Jul 27 18:12 linux-commands.txt
ls -lt | grep '17:'
-rw-r--r-- 1 root root 5233 Jul 20 17:02 a.txt

Note that the find command is recursive by default. To enable recursion with the ls command, we also need to add the -R (uppercase) parameter:

ls -ltR

4. Conclusion

In this quick tutorial, we’ve described a few ways that help us find the files that have been changed recently on a Linux operating system.

First, we’ve explored the find command and created several examples with different parameters like -mtime, -mmin and -newermt.

Then, we’ve shown how we can achieve similar results using a combination of two better known Linux utilities like the ls and grep commands.

Automatic Generation of the Builder Pattern with FreeBuilder


1. Overview

In this tutorial, we’ll use the FreeBuilder library to generate builder classes in Java.

2. Builder Design Pattern

Builder is one of the most widely used creational design patterns in object-oriented languages. It abstracts the instantiation of a complex domain object and provides a fluent API for creating an instance, thereby helping to maintain a concise domain layer.

Despite its usefulness, builder is generally complex to implement, particularly in Java. Even simpler value objects require a lot of boilerplate code.

3. Builder Implementation in Java

Before we proceed with FreeBuilder, let’s implement a boilerplate builder for our Employee class:

public class Employee {

    private final String name;
    private final int age;
    private final String department;

    private Employee(String name, int age, String department) {
        this.name = name;
        this.age = age;
        this.department = department;
    }
}

And an inner Builder class:

public static class Builder {

    private String name;
    private int age;
    private String department;

    public Builder setName(String name) {
        this.name = name;
        return this;
    }

    public Builder setAge(int age) {
        this.age = age;
        return this;
    }

    public Builder setDepartment(String department) {
        this.department = department;
        return this;
    }

    public Employee build() {
        return new Employee(name, age, department);
    }
}

Accordingly, we can now use the builder for instantiating the Employee object:

Employee.Builder emplBuilder = new Employee.Builder();

Employee employee = emplBuilder
  .setName("baeldung")
  .setAge(12)
  .setDepartment("Builder Pattern")
  .build();

As shown above, a lot of boilerplate code is necessary for implementing a builder class.

In the later sections, we’ll see how FreeBuilder can instantly simplify this implementation.

4. Maven Dependency

To use the library, let's add the FreeBuilder Maven dependency to our pom.xml:

<dependency>
    <groupId>org.inferred</groupId>
    <artifactId>freebuilder</artifactId>
    <version>2.4.1</version>
</dependency>

5. FreeBuilder Annotation

5.1. Generating a Builder

FreeBuilder is an open-source library that helps developers avoid the boilerplate code while implementing builder classes. It makes use of annotation processing in Java to generate a concrete implementation of the builder pattern.

We’ll annotate our Employee class from the earlier section with @FreeBuilder and see how it automatically generates the builder class:

@FreeBuilder
public interface Employee {
 
    String name();
    int age();
    String department();
    
    class Builder extends Employee_Builder {
    }
}

It’s important to point out that Employee is now an interface rather than a POJO class. Furthermore, it contains all the attributes of an Employee object as methods.

Before we continue to use this builder, we must configure our IDE to avoid compilation issues. Since FreeBuilder generates the Employee_Builder class only during compilation, the IDE usually complains that Employee_Builder cannot be resolved on the line where our Builder extends it.

To avoid such issues, we need to enable annotation processing in IntelliJ or Eclipse. And while doing so, we’ll use FreeBuilder’s annotation processor org.inferred.freebuilder.processor.Processor. Additionally, the directory used for generating these source files should be marked as Generated Sources Root.

Alternatively, we can also execute mvn install to build the project and generate the required builder classes.
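
For Maven builds, another option is to wire the processor explicitly through the compiler plugin, so that no IDE-specific setup is needed (a sketch; the version should match the FreeBuilder dependency above):

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <configuration>
        <annotationProcessorPaths>
            <path>
                <groupId>org.inferred</groupId>
                <artifactId>freebuilder</artifactId>
                <version>2.4.1</version>
            </path>
        </annotationProcessorPaths>
    </configuration>
</plugin>
```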

Finally, we have compiled our project and can now use the Employee.Builder class:

Employee.Builder builder = new Employee.Builder();
 
Employee employee = builder.name("baeldung")
  .age(10)
  .department("Builder Pattern")
  .build();

All in all, there are two main differences between this and the builder class we saw earlier. First, we must set the value for all attributes of the Employee class. Otherwise, it throws an IllegalStateException.

We’ll see how FreeBuilder handles optional attributes in a later section.

Second, the method names of Employee.Builder don’t follow the JavaBean naming conventions. We’ll see this in the next section.

5.2. JavaBean Naming Convention

To enforce FreeBuilder to follow the JavaBean naming convention, we must rename our methods in Employee and prefix the methods with get:

@FreeBuilder
public interface Employee {
 
    String getName();
    int getAge();
    String getDepartment();

    class Builder extends Employee_Builder {
    }
}

This will generate getters and setters that follow the JavaBean naming convention:

Employee employee = builder
  .setName("baeldung")
  .setAge(10)
  .setDepartment("Builder Pattern")
  .build();

5.3. Mapper Methods

Coupled with getters and setters, FreeBuilder also adds mapper methods in the builder class. These mapper methods accept a UnaryOperator as input, thereby allowing developers to compute complex field values.

Suppose our Employee class also has a salary field:

@FreeBuilder
public interface Employee {
    Optional<Double> getSalaryInUSD();
}

Now suppose we need to convert the currency of the salary that is provided as input:

long salaryInEuros = INPUT_SALARY_EUROS;
Employee.Builder builder = new Employee.Builder();

Employee employee = builder
  .setName("baeldung")
  .setAge(10)
  .mapSalaryInUSD(sal -> salaryInEuros * EUROS_TO_USD_RATIO)
  .build();

FreeBuilder provides such mapper methods for all fields.

6. Default Values and Constraint Checks

6.1. Setting Default Values

The Employee.Builder implementation we have discussed so far expects the client to pass values for all fields. As a matter of fact, it fails the initialization process with an IllegalStateException in case of missing fields.

In order to avoid such failures, we can either set default values for fields or make them optional.

We can set default values in the Employee.Builder constructor:

@FreeBuilder
public interface Employee {

    // getter methods

    class Builder extends Employee_Builder {

        public Builder() {
            setDepartment("Builder Pattern");
        }
    }
}

So we simply set the default department in the constructor. This value will apply to all Employee objects.

6.2. Constraint Checks

Usually, we have certain constraints on field values. For example, a valid email must contain an “@” or the age of an Employee must be within a range.

Such constraints require us to put validations on input values. And FreeBuilder allows us to add these validations by merely overriding the setter methods:

@FreeBuilder
public interface Employee {

    // getter methods

    class Builder extends Employee_Builder {

        @Override
        public Builder setEmail(String email) {
            if (checkValidEmail(email))
                return super.setEmail(email);
            else
                throw new IllegalArgumentException("Invalid email");

        }

        private boolean checkValidEmail(String email) {
            return email.contains("@");
        }
    }
}

7. Optional Values

7.1. Using Optional Fields

Some objects contain optional fields, the values for which can be empty or null. FreeBuilder allows us to define such fields using the Java Optional type:

@FreeBuilder
public interface Employee {

    String getName();
    int getAge();

    // other getters
    
    Optional<Boolean> getPermanent();

    Optional<String> getDateOfJoining();

    class Builder extends Employee_Builder {
    }
}

Now we may skip providing any value for Optional fields:

Employee employee = builder.setName("baeldung")
  .setAge(10)
  .setPermanent(true)
  .build();

Notably, we simply passed the value for the permanent field instead of an Optional. Since we didn't set a value for the dateOfJoining field, it will be Optional.empty(), which is the default for Optional fields.

7.2. Using @Nullable Fields

Although using Optional is recommended for handling nulls in Java, FreeBuilder allows us to use @Nullable for backward compatibility:

@FreeBuilder
public interface Employee {

    String getName();
    int getAge();
    
    // other getter methods

    Optional<Boolean> getPermanent();
    Optional<String> getDateOfJoining();

    @Nullable String getCurrentProject();

    class Builder extends Employee_Builder {
    }
}

The use of Optional is ill-advised in some cases, which is another reason why @Nullable may be preferred in builder classes.

8. Collections and Maps

FreeBuilder has special support for collections and maps:

@FreeBuilder
public interface Employee {

    String getName();
    int getAge();
    
    // other getter methods

    List<Long> getAccessTokens();
    Map<String, Long> getAssetsSerialIdMapping();


    class Builder extends Employee_Builder {
    }
}

FreeBuilder adds convenience methods to add input elements into the Collection in the builder class:

Employee employee = builder.setName("baeldung")
  .setAge(10)
  .addAccessTokens(1221819L)
  .addAccessTokens(1223441L, 134567L)
  .build();

There is also a getAccessTokens() method in the builder class which returns an unmodifiable list. Similarly, for Map:

Employee employee = builder.setName("baeldung")
  .setAge(10)
  .addAccessTokens(1221819L)
  .addAccessTokens(1223441L, 134567L)
  .putAssetsSerialIdMapping("Laptop", 12345L)
  .build();

The getter method for Map also returns an unmodifiable map to the client code.

9. Nested Builders

For real-world applications, we may have to nest a lot of value objects for our domain entities. And since the nested objects can themselves need builder implementations, FreeBuilder allows nested buildable types.

For example, suppose we have a nested complex type Address in the Employee class:

@FreeBuilder
public interface Address {
 
    String getCity();

    class Builder extends Address_Builder {
    }
}

Now, FreeBuilder generates setter methods that take Address.Builder as an input together with Address type:

Address.Builder addressBuilder = new Address.Builder();
addressBuilder.setCity(CITY_NAME);

Employee employee = builder.setName("baeldung")
  .setAddress(addressBuilder)
  .build();

Notably, FreeBuilder also adds a method to customize the existing Address object in the Employee:

Employee employee = builder.setName("baeldung")
  .setAddress(addressBuilder)
  .mutateAddress(a -> a.setPinCode(112200))
  .build();

Besides FreeBuilder types, FreeBuilder also allows nesting other builders, such as protos.

10. Building Partial Objects

As we’ve discussed before, FreeBuilder throws an IllegalStateException for any constraint violation — for instance, missing values for mandatory fields.

Although this is desirable for production environments, it complicates unit tests that don't depend on those constraints.

To relax such constraints, FreeBuilder allows us to build partial objects:

Employee employee = builder.setName("baeldung")
  .setAge(10)
  .setEmail("abc@xyz.com")
  .buildPartial();

assertNotNull(employee.getEmail());

So, even though we haven’t set all the mandatory fields for an Employee, we could still verify that the email field has a valid value.

11. Custom toString() Method

With value objects, we often need to add a custom toString() implementation. FreeBuilder allows this through abstract classes:

@FreeBuilder
public abstract class Employee {

    abstract String getName();

    abstract int getAge();

    @Override
    public String toString() {
        return getName() + " (" + getAge() + " years old)";
    }

    public static class Builder extends Employee_Builder {
    }
}

We declared Employee as an abstract class rather than an interface and provided a custom toString() implementation.

12. Comparison with Other Builder Libraries

The builder implementation we have discussed in this article is very similar to those of Lombok, Immutables, or any other annotation processor. However, there are a few distinguishing characteristics that we have discussed already:

    • Mapper methods
    • Nested Buildable Types
    • Partial Objects

13. Conclusion

In this article, we used the FreeBuilder library to generate a builder class in Java. We implemented various customizations of a builder class with the help of annotations, thus reducing the boilerplate code required for its implementation.

We also saw how FreeBuilder is different from some of the other libraries and briefly discussed some of those characteristics in this article.

All the code examples are available over on GitHub.

Java Weekly, Issue 293


Here we go…

1. Spring and Java

>> Brian Goetz Speaks to InfoQ about Proposed Hyphenated Keywords in Java [infoq.com]

As Java continues to evolve, we may soon see hyphenated keywords, like the recently proposed (but later dropped) break-with.

>> 5 minutes or less: Jakarta JSON Binding with Apache Johnzon [tomitribe.com]

And a quick overview of JSON-B, yet another JSON binding layer for Java.

>> How to write JPA Criteria API queries using Codota (video) [vladmihalcea.com]

It’s cool to see Codota popping up in other writings I follow. The IDE plugin for IntelliJ and Eclipse takes some of the pain out of writing JPA criteria queries.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

>> Hands-On Spark Intro: Cross Join Customers and Products with Business Logic [blog.codecentric.de]

Lessons learned implementing large cross joins in a Spark application.

>> Those Who Can’t, Sell Tutorials on How You Can [daedtech.com]

Before buying into someone else’s blueprint for success, it’s good to remember that context is still king. What worked for them may not be the complete and tidy solution you’re looking for.

Also worth reading:

3. Comics

>> Opinionated Old Guy [dilbert.com]

>> Employee Engagement Survey [dilbert.com]

>> Circular Debating [dilbert.com]

4. Pick of the Week

>> Good Engineering Practices while Working Solo [bitsrc.io]

Javax BigDecimal Validation


1. Introduction

In the tutorial Java Bean Validation Basics, we saw how to apply basic javax validation to various types. In this tutorial, we'll focus on using javax validation with BigDecimal.

2. Validating BigDecimal Instances

Unfortunately, with BigDecimal, we can’t use the classic @Min or @Max javax annotations.

Luckily, we have a dedicated set of annotations for working with them:

  • @DecimalMin
  • @DecimalMax
  • @Digits

BigDecimal is the first choice for financial calculation because of its high precision.
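
One subtlety worth keeping in mind before we look at the tests: the BigDecimal double constructor captures the binary approximation of the literal, so it can carry far more fractional digits than the literal suggests, while the String constructor keeps exactly the digits we write:

```java
import java.math.BigDecimal;

public class BigDecimalScale {
    public static void main(String[] args) {
        // String constructor: scale is exactly the written fraction digits
        BigDecimal fromString = new BigDecimal("10.21");

        // double constructor: scale reflects the binary approximation of 10.21
        BigDecimal fromDouble = new BigDecimal(10.21);

        System.out.println(fromString.scale()); // 2
        System.out.println(fromDouble.scale()); // far greater than 2
    }
}
```

This is why a value built with new BigDecimal(10.21) can trip a fraction-digit limit that new BigDecimal("10.21") would pass.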

Let’s see our Invoice class, which has a field of type BigDecimal:

public class Invoice {

    @DecimalMin(value = "0.0", inclusive = false)
    @Digits(integer=3, fraction=2)
    private BigDecimal price;
    private String description;

    public Invoice(BigDecimal price, String description) {
        this.price = price;
        this.description = description;
    }
}

2.1. @DecimalMin

The annotated element must be a number whose value is greater than or equal to the specified minimum. @DecimalMin has an inclusive attribute that indicates whether the specified minimum is inclusive or exclusive.

2.2. @DecimalMax

@DecimalMax is the counterpart of @DecimalMin. The annotated element must be a number whose value is less than or equal to the specified maximum. @DecimalMax also has an inclusive attribute that specifies whether the specified maximum is inclusive or exclusive.

In contrast, @Min and @Max accept only long values. With @DecimalMin and @DecimalMax, we can specify the value in string format, which can represent any numeric type.
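
The bound check effectively behaves like a BigDecimal comparison against the string-specified limit. Here's a stdlib-only illustration of the semantics (not the validator's actual code):

```java
import java.math.BigDecimal;

public class DecimalBoundCheck {
    public static void main(String[] args) {
        BigDecimal min = new BigDecimal("0.0");
        BigDecimal price = new BigDecimal("10.21");

        // @DecimalMin(value = "0.0", inclusive = false) passes when price > min
        boolean valid = price.compareTo(min) > 0;

        System.out.println(valid); // true
    }
}
```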

2.3. @Digits

In many cases, we need to validate the number of digits in the integral part and fraction part of a decimal number.

The @Digits annotation has two attributes, integer and fraction, for specifying the number of allowed digits in the integral part and the fraction part of the number.

As per the official documentation, integer allows us to specify the maximum number of integral digits accepted for this number. But this is true only for non-decimal numbers. For decimal numbers, it checks the exact number of digits in the integral part of the number. We'll see this in our test cases.

Similarly, the fraction attribute allows us to specify the maximum number of fractional digits accepted for this number.
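To see what integer and fraction actually count, here's a plain-Java sketch (no validator involved, and the sample value is arbitrary) that computes both parts of a BigDecimal using precision() and scale():

```java
import java.math.BigDecimal;

public class DigitsDemo {
    public static void main(String[] args) {
        BigDecimal price = new BigDecimal("1021.21"); // arbitrary sample value

        // scale() gives the number of fractional digits for a non-negative scale
        int fractionDigits = Math.max(price.scale(), 0);
        // precision() counts all significant digits, so the difference is the integral part
        int integerDigits = price.precision() - price.scale();

        // 4 integral digits and 2 fractional digits: would violate @Digits(integer = 3, fraction = 2)
        System.out.println(integerDigits + "." + fractionDigits);
    }
}
```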

2.4. Test Cases

Let’s see these annotations in action.

First, we’ll add a test that creates an Invoice with an invalid price according to our validation, and checks that the validation will fail:

public class InvoiceUnitTest {

    private static Validator validator;

    @BeforeClass
    public static void setupValidatorInstance() {
        validator = Validation.buildDefaultValidatorFactory().getValidator();
    }

    @Test
    public void whenPriceIntegerDigitLessThanThreeWithDecimalValue_thenShouldGiveConstraintViolations() {
        Invoice invoice = new Invoice(new BigDecimal(10.21), "Book purchased");
 
        Set<ConstraintViolation<Invoice>> violations = validator.validate(invoice);
 
        assertThat(violations.size()).isEqualTo(1);
        violations.forEach(action -> assertThat(action.getMessage())
                .isEqualTo("numeric value out of bounds (<3 digits>.<2 digits> expected)"));
    }
}

Now let’s check the validation with a correct price that’s an integer value:

@Test
public void whenPriceIntegerDigitLessThanThreeWithIntegerValue_thenShouldNotGiveConstraintViolations() {
    Invoice invoice = new Invoice(new BigDecimal(10), "Book purchased");
 
    Set<ConstraintViolation<Invoice>> violations = validator.validate(invoice);
 
    assertThat(violations.size()).isEqualTo(0);
}

If we set a price with more than 3 digits in the integral part, we should see a validation error:

@Test
public void whenPriceIntegerDigitGreaterThanThree_thenShouldGiveConstraintViolations() {
    Invoice invoice = new Invoice(new BigDecimal(1021.21), "Book purchased");
 
    Set<ConstraintViolation<Invoice>> violations = validator.validate(invoice);
 
    assertThat(violations.size()).isEqualTo(1);
    violations.forEach(action -> assertThat(action.getMessage())
      .isEqualTo("numeric value out of bounds (<3 digits>.<2 digits> expected)"));
}

A price equal to 000.00 should also cause a constraint violation:

@Test
public void whenPriceIsZero_thenShouldGiveConstraintViolations() {
    Invoice invoice = new Invoice(new BigDecimal(000.00), "Book purchased");
 
    Set<ConstraintViolation<Invoice>> violations = validator.validate(invoice);
 
    assertThat(violations.size()).isEqualTo(1);
    violations.forEach(action -> assertThat(action.getMessage())
      .isEqualTo("must be greater than 0.0"));
}

Finally, let’s test the case with a price that’s greater than 0:

@Test
public void whenPriceIsGreaterThanZero_thenShouldNotGiveConstraintViolations() {
    Invoice invoice = new Invoice(new BigDecimal(100.50), "Book purchased");
 
    Set<ConstraintViolation<Invoice>> violations = validator.validate(invoice);
 
    assertThat(violations.size()).isEqualTo(0);
}

3. Conclusion

In this article, we saw how to use javax validation for BigDecimal.

All code snippets can be found over on GitHub.

Java ‘Hello World’ Example


1. Overview

Java is a general-purpose programming language that focuses on the WORA (Write Once, Run Anywhere) principle.

It runs on a JVM (Java Virtual Machine) that is in charge of abstracting the underlying OS, allowing Java programs to run almost everywhere, from application servers to mobile phones.

When learning a new language, “Hello World” is often the first program we write.

In this tutorial, we’ll learn some basic Java syntax and write a simple “Hello World” program.

2. Writing the Hello World Program

Let’s open any IDE or text editor and create a simple file called HelloWorld.java:

public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello World!");
    }
}

In our example, we’ve created a Java class named HelloWorld containing a main method that writes some text to the console.

When we execute the program, Java will run the main method, printing out “Hello World!” on the console.

Now, let’s see how we can compile and execute our program.

3. Compiling and Executing the Program

In order to compile a Java program, we need to call the Java compiler from the command line:

$ javac HelloWorld.java

The compiler produces the HelloWorld.class file, which is the compiled bytecode version of our code.

Let’s run it by calling:

$ java HelloWorld

and see the result:

Hello World!

4. Conclusion

With this simple example, we created a Java class with the default main method printing out a string on the system console.

We saw how to create, compile, and execute a Java program and got familiar with a little bit of basic syntax. The Java code and commands we saw here remain the same on every OS that supports Java.

Thymeleaf lists Utility Object


1. Overview

Thymeleaf is a Java template engine for processing and creating HTML.

In this quick tutorial, we’ll look into Thymeleaf’s lists utility object to perform common list-based operations.

2. Computing Size

First, the size method returns the length of a list. We can include it, say, via the th:text attribute:

size: <span th:text="${#lists.size(myList)}"/>

myList is our own object. We’d have passed it via the controller:

@GetMapping("/size")
public String usingSize(Model model) {
    model.addAttribute("myList", getColors());
    return "lists/size";
}

3. Checking if the List is Empty

The isEmpty method returns true if the given list has no elements:

<span th:text="${#lists.isEmpty(myList)}"/>

Generally, this utility method is used with conditionals – th:if and th:unless:

<span th:unless="${#lists.isEmpty(myList)}">List is not empty</span>

4. Checking Membership

The contains method checks whether an element is a member of the given list:

myList contains red: <span th:text="${#lists.contains(myList, 'red')}"/>

Similarly, we can check the membership of multiple elements using the containsAll method:

myList contains red and green: <span th:text='${#lists.containsAll(myList, {"red", "green"})}'/>

5. Sorting

The sort method enables us to sort a list:

sort: <span th:text="${#lists.sort(myList)}"/>

sort with Comparator: <span th:text="${#lists.sort(myList, reverse)}"/>

Here we have two overloaded sort methods. Firstly, we’re sorting our list in the natural order – ${#lists.sort(myList)}. Secondly, we’re passing an additional parameter of type Comparator. In our example, we’re getting this comparator from the model.
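Note that #lists.sort returns a sorted copy rather than mutating the original list. The two calls above correspond roughly to the following plain-Java sketch (the color values are hypothetical sample data):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class SortDemo {
    public static void main(String[] args) {
        List<String> colors = List.of("red", "green", "blue"); // hypothetical sample data

        // natural order, like ${#lists.sort(myList)}
        List<String> sorted = new ArrayList<>(colors);
        sorted.sort(Comparator.naturalOrder());
        System.out.println(sorted);

        // with a Comparator, like ${#lists.sort(myList, reverse)}
        List<String> reversed = new ArrayList<>(colors);
        reversed.sort(Comparator.reverseOrder());
        System.out.println(reversed);
    }
}
```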

6. Converting to List

Lastly, we can convert Iterables and arrays to Lists using the toList method.

<span th:with="convertedList=${#lists.toList(myArray)}">
    converted list size: <span th:text="${#lists.size(convertedList)}"/>
</span>

Here we’re creating a new List, convertedList, and then printing its size with #lists.size.

7. Summary

In this tutorial, we’ve investigated the Thymeleaf built-in lists utility object and how to use it effectively.

As always, the source code for all examples is available on GitHub.


Adding a Path to the Linux PATH Variable


1. Overview

In this quick tutorial, we’ll focus on how to add a path to the Unix PATH variable.

2. PATH Variable

The PATH variable is an environment variable that contains an ordered list of paths that Unix will search for executables when running a command. Using these paths means that we do not have to specify an absolute path when running a command.

For example, if we want to print Hello, world!, the command echo can be used rather than /bin/echo so long as /bin is in PATH:

echo "Hello, world!"

Unix traverses the colon-separated paths in order until finding an executable. Thus, Unix uses the first path if two paths contain the desired executable.

We can print the current value of the PATH variable by echoing the PATH environment variable:

echo $PATH

We should see a list of colon-separated paths (exact paths may differ):

/usr/lib/lightdm/lightdm:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games

3. Adding a New Path

We add a new path to the PATH variable using the export command.

To prepend a new path, such as /some/new/path, we reassign the PATH variable with our new path at the beginning of the existing PATH variable (represented by $PATH):

export PATH=/some/new/path:$PATH

To append a new path, we reassign PATH with our new path at the end:

export PATH=$PATH:/some/new/path

4. Persisting Changes

When we use the export command and open a new shell, the added path is lost.

4.1. Locally

To persist our changes for the current user, we add our export command to the end of ~/.profile. If the ~/.profile file doesn’t exist, we should create it using the touch command:

touch ~/.profile

Then we can add our export command to ~/.profile.

Additionally, we need to open a new shell or source our ~/.profile file to reflect the change. We’d either execute:

. ~/.profile

or we could use the source command if we are using Bash:

source ~/.profile

We could also append our export command to ~/.bash_profile if we are using Bash, but our changes will not be reflected in other shells, such as Z shell (zsh). We shouldn’t add our export command to ~/.bashrc because only interactive Bash shells read this configuration file. If we open a non-interactive shell or a shell other than Bash, our PATH change will not be reflected.

4.2. Globally

We can add a new path for all users on a Unix system by creating a file ending in .sh in /etc/profile.d/ and adding our export command to this file.

For example, we can create a new script file, /etc/profile.d/example.sh, and add the following line to append /some/new/path to the global PATH:

export PATH=$PATH:/some/new/path

All of the scripts in /etc/profile.d/ will be executed when a new shell initializes. Therefore, we need to open a new shell for our global changes to take effect.

We can also add our new path directly to the existing PATH in the /etc/environment file:

PATH=<existing_PATH>:/some/new/path

The /etc/environment file is not a script file—it only contains simple variable assignments—and is less flexible than a script. Because of this, making PATH changes in /etc/environment is discouraged. We recommend adding a new script to /etc/profile.d instead.

5. Conclusion

In this tutorial, we saw how Unix uses the PATH variable to find executables when running a command.

We can prepend or append to PATH, but we must persist these changes in ~/.profile. We can use ~/.bash_profile as well, but ~/.profile is preferred.

We can also change the global PATH value by adding our export command to a new .sh file in /etc/profile.d.

Evaluation of Method References in Java


1. Overview

Java 8 introduced the concept of method references. We often see them as similar to lambda expressions.

However, method references and lambda expressions are not exactly the same thing. In this article, we’ll show why they are different and what are the risks of using them in the wrong way.

2. Lambdas and Method References Syntax

To start with, let’s see a few examples of lambda expressions:

Runnable r1 = () -> "some string".toUpperCase();
Consumer<String> c1 = x -> x.toUpperCase();

And a few examples of method references:

Function<String, String> f1 = String::toUpperCase;
Runnable r2 = "some string"::toUpperCase;
Runnable r3 = String::new;

Those examples could make us think about method references as a shortened notation for lambdas.

But let’s take a look at the official Oracle documentation. We can find an interesting example there:

(test ? list.replaceAll(String::trim) : list) :: iterator

As we can see, the Java Language Specification allows us to have different kinds of expressions before the double colon operator. The part before the :: is called the target reference.

Next, we’ll discuss the process of method reference evaluation.

3. Method Reference Evaluation

What is going to happen when we run the following code?

public static void main(String[] args) {
    Runnable runnable = (f("some") + f("string"))::toUpperCase;
}

private static String f(String string) {
    System.out.println(string);
    return string;
}

We’ve just created a Runnable object. Nothing more, nothing less. However, the output is:

some
string

This happens because the target reference is evaluated when the declaration is first encountered. Hence, we’ve lost the desired laziness. The target reference is also evaluated only once. So if we add this line to the above example:

runnable.run();

We will not see any output. What about the next case?

SomeWorker worker = null;
Runnable workLambda = () -> worker.work(); // ok
Runnable workMethodReference = worker::work; // boom! NullPointerException

The explanation provided by the documentation mentioned before:

“A method invocation expression (§15.12) that invokes an instance method throws a NullPointerException if the target reference is null.”

The best way to prevent unexpected situations might be to never use variable access and complex expressions as target references.

A good idea might be to use method references only as a neat, short notation for its lambda equivalent. Having just a class name before :: operator guarantees safety.
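We can make the difference in evaluation visible with a short, self-contained sketch (the helper f, the log field, and the sample strings are ours, not from any library):

```java
public class TargetEvalDemo {
    static final StringBuilder log = new StringBuilder();

    // Records every call so we can observe when evaluation happens
    static String f(String s) {
        log.append(s);
        return s;
    }

    public static void main(String[] args) {
        // Method reference: the target expression runs at declaration time
        Runnable methodRef = (f("a") + f("b"))::toUpperCase;
        // Lambda: the body runs only when run() is invoked
        Runnable lambda = () -> (f("c") + f("d")).toUpperCase();

        System.out.println(log); // f was already called for the method reference
        lambda.run();
        System.out.println(log); // f is called for the lambda only now
    }
}
```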

4. Conclusion

In this article, we’ve learned about the evaluation process of method references.

We know the risks and the rules we should follow to not be suddenly surprised by the behavior of our application.

Find the Number of Lines in a File Using Java


1. Overview

In this tutorial, we’ll learn how to find the number of lines in a file using Java with the help of standard Java IO APIs, Google Guava and the Apache Commons IO library.

2. NIO2 Files

Note that, across this tutorial, we’ll be using the following sample values as the input file name and the total number of lines:

static final String INPUT_FILE_NAME = "src/main/resources/input.txt";
static final int NO_OF_LINES = 45;

Java 7 introduced many improvements to the existing IO libraries and packaged them under NIO2.

Let’s start with Files and see how we can use its API to count the number of lines:

@Test
public void whenUsingNIOFiles_thenReturnTotalNumberOfLines() throws IOException {
    try (Stream<String> fileStream = Files.lines(Paths.get(INPUT_FILE_NAME))) {
        int noOfLines = (int) fileStream.count();
        assertEquals(NO_OF_LINES, noOfLines);
    }
}

Or by simply using Files#readAllLines method:

@Test
public void whenUsingNIOFilesReadAllLines_thenReturnTotalNumberOfLines() throws IOException {
    List<String> fileStream = Files.readAllLines(Paths.get(INPUT_FILE_NAME));
    int noOfLines = fileStream.size();
    assertEquals(NO_OF_LINES, noOfLines);
}

3. NIO FileChannel

Now let’s check FileChannel, a high-performance Java NIO alternative to read the number of lines:

@Test
public void whenUsingNIOFileChannel_thenReturnTotalNumberOfLines() throws IOException {
    int noOfLines = 1;
    try (FileChannel channel = FileChannel.open(Paths.get(INPUT_FILE_NAME), StandardOpenOption.READ)) {
        ByteBuffer byteBuffer = channel.map(MapMode.READ_ONLY, 0, channel.size());
        while (byteBuffer.hasRemaining()) {
            byte currentByte = byteBuffer.get();
            if (currentByte == '\n')
                noOfLines++;
        }
    }
    assertEquals(NO_OF_LINES, noOfLines);
}

Though the FileChannel was introduced in JDK 4, the above solution works only with JDK 7 or higher.
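The counter starts at 1 because this approach counts newline bytes, and the last line of a typical text file has no trailing '\n'. A small in-memory sketch (the sample content is hypothetical) shows the idea:

```java
public class NewlineCountDemo {
    public static void main(String[] args) {
        // Three lines but only two newline bytes: hypothetical sample content
        String content = "first\nsecond\nthird";

        int newlines = 0;
        for (byte b : content.getBytes()) {
            if (b == '\n') {
                newlines++;
            }
        }

        // Add 1 for the final line that has no trailing newline
        System.out.println(newlines + 1);
    }
}
```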

4. Google Guava Files

An alternative third-party library would be Google Guava Files class. This class can also be used to count the total number of lines in a similar way to what we saw with Files#readAllLines.

Let’s start by adding the guava dependency in our pom.xml:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>28.0-jre</version>
</dependency>

And then we can use readLines to get a List of file lines:

@Test
public void whenUsingGoogleGuava_thenReturnTotalNumberOfLines() throws IOException {
    List<String> lineItems = Files.readLines(Paths.get(INPUT_FILE_NAME)
      .toFile(), Charset.defaultCharset());
    int noOfLines = lineItems.size();
    assertEquals(NO_OF_LINES, noOfLines);
}

5. Apache Commons IO FileUtils

Now, let’s see Apache Commons IO FileUtils API, a parallel solution to Guava.

To use the library, we have to include the commons-io dependency in the pom.xml:

<dependency>
    <groupId>commons-io</groupId>
    <artifactId>commons-io</artifactId>
    <version>2.6</version>
</dependency>

At that point, we can use Apache Commons IO’s FileUtils#lineIterator, which cleans up some of the file handling for us. Note that we close the iterator in a finally block to release the underlying file:

@Test
public void whenUsingApacheCommonsIO_thenReturnTotalNumberOfLines() throws IOException {
    int noOfLines = 0;
    LineIterator lineIterator = FileUtils.lineIterator(new File(INPUT_FILE_NAME));
    try {
        while (lineIterator.hasNext()) {
            lineIterator.nextLine();
            noOfLines++;
        }
    } finally {
        lineIterator.close();
    }
    assertEquals(NO_OF_LINES, noOfLines);
}

As we can see, this is a bit more verbose than the Google Guava solution.

6. BufferedReader

So, what about old-school ways? If we aren’t on JDK 7 and we can’t use a third-party library, we have BufferedReader:

@Test
public void whenUsingBufferedReader_thenReturnTotalNumberOfLines() throws IOException {
    int noOfLines = 0;
    try (BufferedReader reader = new BufferedReader(new FileReader(INPUT_FILE_NAME))) {
        while (reader.readLine() != null) {
            noOfLines++;
        }
    }
    assertEquals(NO_OF_LINES, noOfLines);
}

7. LineNumberReader

Or, we can use LineNumberReader, a direct subclass of BufferedReader, which is just a bit less verbose:

@Test
public void whenUsingLineNumberReader_thenReturnTotalNumberOfLines() throws IOException {
    try (LineNumberReader reader = new LineNumberReader(new FileReader(INPUT_FILE_NAME))) {
        reader.skip(Integer.MAX_VALUE);
        int noOfLines = reader.getLineNumber() + 1;
        assertEquals(NO_OF_LINES, noOfLines);
    }
}

Here we are calling the skip method to go to the end of the file, and we’re adding 1 to the total number of lines counted since the line numbering begins at 0.

8. Scanner

And finally, if we’re already using Scanner as part of a larger solution, it can solve the problem for us, too:

@Test
public void whenUsingScanner_thenReturnTotalNumberOfLines() throws IOException {
    try (Scanner scanner = new Scanner(new FileReader(INPUT_FILE_NAME))) {
        int noOfLines = 0;
        while (scanner.hasNextLine()) {
            scanner.nextLine();
            noOfLines++;
        }
        assertEquals(NO_OF_LINES, noOfLines);
    }
}

9. Conclusion

In this tutorial, we have explored different ways to find the number of lines in a file using Java. Since the main purpose of these APIs is not counting the number of lines in a file, it’s recommended to choose the right solution for our needs.

The source code for this tutorial is available on GitHub.

Finding Greatest Common Divisor in Java


1. Overview

In mathematics, the GCD of two integers, which are non-zero, is the largest positive integer that divides each of the integers evenly.

In this tutorial, we’ll look at three approaches to find the Greatest Common Divisor (GCD) of two integers. Further, we’ll look at their implementation in Java.

2. Brute Force

For our first approach, we iterate from 1 to the smallest number given and check whether the given integers are divisible by the index. The largest index which divides the given numbers is the GCD of the given numbers:

int gcdByBruteForce(int n1, int n2) {
    int gcd = 1;
    for (int i = 1; i <= n1 && i <= n2; i++) {
        if (n1 % i == 0 && n2 % i == 0) {
            gcd = i;
        }
    }
    return gcd;
}

As we can see, the complexity of the above implementation is O(min(n1, n2)) because we need to iterate over the loop for n times (equivalent to the smaller number) to find the GCD.

3. Euclid’s Algorithm

Second, we can use Euclid’s algorithm to find the GCD. Euclid’s algorithm is not only efficient but also easy to understand and easy to implement using recursion in Java.

Euclid’s method depends on two important theorems:

  • First, if we subtract the smaller number from the larger number, the GCD doesn’t change – therefore, if we keep on subtracting the number we finally end up with their GCD
  • Second, when the smaller number exactly divides the larger number, the smaller number is the GCD of the two given numbers.

Note in our implementation that we’ll use modulo instead of subtraction since it’s basically many subtractions at a time:

int gcdByEuclidsAlgorithm(int n1, int n2) {
    if (n2 == 0) {
        return n1;
    }
    return gcdByEuclidsAlgorithm(n2, n1 % n2);
}

Also, note how we use n2 in n1‘s position and use the remainder in n2’s position in the recursive step of the algorithm.

Further, the complexity of Euclid’s algorithm is O(Log min(n1, n2)) which is better as compared to the Brute Force method we saw before.
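As a quick sanity check, here's a self-contained version of the method with arbitrary sample inputs; gcd(1071, 462) is 21:

```java
public class GcdDemo {
    // Recursive Euclid's algorithm from the section above
    static int gcdByEuclidsAlgorithm(int n1, int n2) {
        if (n2 == 0) {
            return n1;
        }
        return gcdByEuclidsAlgorithm(n2, n1 % n2);
    }

    public static void main(String[] args) {
        // 1071 and 462 are arbitrary sample values
        System.out.println(gcdByEuclidsAlgorithm(1071, 462)); // 21
    }
}
```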

4. Stein’s Algorithm or Binary GCD Algorithm

Finally, we can use Stein’s algorithm, also known as the Binary GCD algorithm, to find the GCD of two non-negative integers. This algorithm uses simple arithmetic operations like arithmetic shifts, comparison, and subtraction.

Stein’s algorithm repeatedly applies the following basic identities related to GCDs to find GCD of two non-negative integers:

  1. gcd(0, 0) = 0, gcd(n1, 0) = n1, gcd(0, n2) = n2
  2. When n1 and n2 are both even integers, then gcd(n1, n2) = 2 * gcd(n1/2, n2/2), since 2 is the common divisor
  3. If n1 is even integer and n2 is odd integer, then gcd(n1, n2) = gcd(n1/2, n2), since 2 is not the common divisor and vice versa
  4. If n1 and n2 are both odd integers, and n1 >= n2, then gcd(n1, n2) = gcd((n1-n2)/2, n2) and vice versa

We repeat steps 2-4 until n1 equals n2, or n1 = 0. The GCD is (2^n) * n2. Here, n is the number of times 2 is found common in n1 and n2 while performing step 2:

int gcdBySteinsAlgorithm(int n1, int n2) {
    if (n1 == 0) {
        return n2;
    }

    if (n2 == 0) {
        return n1;
    }

    int n;
    for (n = 0; ((n1 | n2) & 1) == 0; n++) {
        n1 >>= 1;
        n2 >>= 1;
    }

    while ((n1 & 1) == 0) {
        n1 >>= 1;
    }

    do {
        while ((n2 & 1) == 0) {
            n2 >>= 1;
        }

        if (n1 > n2) {
            int temp = n1;
            n1 = n2;
            n2 = temp;
        }
        n2 = (n2 - n1);
    } while (n2 != 0);
    return n1 << n;
}

We can see that we use arithmetic shift operations in order to divide or multiply by 2. Further, we use subtraction in order to reduce the given numbers.

The complexity of Stein’s algorithm when n1 > n2 is O((log2 n1)^2), whereas when n1 < n2, it is O((log2 n2)^2).

5. Conclusion

In this tutorial, we looked at various methods for calculating the GCD of two numbers. Besides, we implemented them in Java and looked at their complexity.

The full source code of our examples here is, as always, over on GitHub.

A Guide to Java GSS API


1. Overview

In this tutorial, we’ll understand the Generic Security Service API (GSS API) and how we can implement it in Java. We’ll see how we can secure network communication using the GSS API in Java.

In the process, we’ll create simple client and server components, securing them with GSS API.

2. What is GSS API

So, what really is the Generic Security Service API? GSS API provides a generic framework for applications to use different security mechanisms like Kerberos, NTLM, and SPNEGO in a pluggable manner. Consequently, it helps applications to decouple themselves from the security mechanisms directly.

To clarify, security here spans authentication, data integrity, and confidentiality.

2.1. Why do we Need GSS API?

Security mechanisms like Kerberos, NTLM, and Digest-MD5 are quite different in their capabilities and implementations. Typically, an application supporting one of these mechanisms finds it quite daunting to switch to another.

This is where a generic framework like GSS API provides applications with an abstraction. Therefore applications using GSS API can negotiate a suitable security mechanism and use that for communication. All that without actually having to implement any mechanism-specific details.

2.2. How does GSS API Work?

GSS API is a token-based mechanism. It works by the exchange of security tokens between peers. This exchange typically happens over a network but GSS API is agnostic to those details.

These tokens are generated and processed by the specific implementations of the GSS API. The syntax and semantics of these tokens are specific to the security mechanism negotiated between the peers.

The central theme of GSS API revolves around a security context. We can establish this context between peers through the exchange of tokens. We may need multiple exchanges of tokens between peers to establish the context.

Once successfully established at both the ends, we can use the security context to exchange data securely. This may include data integrity checks and data encryption, depending upon the underlying security mechanism.

3. GSS API Support in Java

Java supports GSS API as part of the package “org.ietf.jgss”. The package name may seem peculiar. That’s because the Java bindings for GSS API are defined in an IETF specification. The specification itself is independent of the security mechanism.

One of the popular security mechanism for Java GSS is Kerberos v5.

3.1. Java GSS API

Let’s try to understand some of the core APIs that build Java GSS:

  • GSSContext encapsulates the GSS API security context and provides services available under the context
  • GSSCredential encapsulates the GSS API credentials for an entity that is necessary to establish the security context
  • GSSName encapsulates the GSS API principal entity which provides an abstraction for different namespace used by underlying mechanisms

Apart from the above interfaces, there are a few other important classes to note:

  • GSSManager serves as the factory class for other important GSS API classes like GSSName, GSSCredential, and GSSContext
  • Oid represents the Universal Object Identifiers (OIDs) which are hierarchical identifiers used within GSS API to identify mechanisms and name formats
  • MessageProp wraps properties to indicate GSSContext on things like Quality of Protection (QoP) and confidentiality for data exchange
  • ChannelBinding encapsulates the optional channel binding information used to strengthen the quality with which peer entity authentication is provided

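For instance, the Kerberos v5 mechanism is identified by the well-known OID 1.2.840.113554.1.2.2, which we can construct with the Oid class from the JDK's own org.ietf.jgss package:

```java
import org.ietf.jgss.GSSException;
import org.ietf.jgss.Oid;

public class OidDemo {
    public static void main(String[] args) throws GSSException {
        // Well-known OID for the Kerberos v5 mechanism
        Oid krb5Mechanism = new Oid("1.2.840.113554.1.2.2");
        // toString() prints the components in dot-separated notation
        System.out.println(krb5Mechanism);
    }
}
```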
3.2. Java GSS Security Provider

While the Java GSS defines the core framework for implementing the GSS API in Java, it does not provide an implementation. Java adopts Provider-based pluggable implementations for security services including Java GSS.

There can be one or more such security providers registered with the Java Cryptography Architecture (JCA). Each security provider may implement one or more security services, like Java GSSAPI and security mechanisms underneath.

There is a default GSS provider that ships with the JDK. However, there are other vendor-specific GSS providers with different security mechanisms which we can use. One such provider is IBM Java GSS. We have to register such a security provider with JCA to be able to use them.

Moreover, if required, we can implement our own security provider with possibly custom security mechanisms. However, this is hardly needed in practice.

4. GSS API Through an Example

Now, we’ll see Java GSS in action through an example. We’ll create a simple client and server application. The client is more commonly referred to as initiator and server as an acceptor in GSS. We’ll use Java GSS and Kerberos v5 underneath for authentication.

4.1. GSS Context for Client and Server

To begin with, we’ll have to establish a GSSContext, both at the server and client-side of the application.

Let’s first see how we can do this at the client-side:

GSSManager manager = GSSManager.getInstance();
String serverPrinciple = "HTTP/localhost@EXAMPLE.COM";
GSSName serverName = manager.createName(serverPrinciple, null);
Oid krb5Oid = new Oid("1.2.840.113554.1.2.2");
GSSContext clientContext = manager.createContext(
  serverName, krb5Oid, (GSSCredential)null, GSSContext.DEFAULT_LIFETIME);
clientContext.requestMutualAuth(true);
clientContext.requestConf(true);
clientContext.requestInteg(true);

There is quite a lot of things happening here, let’s break them down:

  • We begin by creating an instance of the GSSManager
  • Then we use this instance to create GSSContext, passing along:
    • a GSSName representing the server principal, note the Kerberos specific principal name here
    • the Oid of mechanism to use, Kerberos v5 here
    • the initiator’s credentials, null here means that default credentials will be used
    • the lifetime for the established context
  • Finally, we prepare the context for mutual authentication, confidentiality, and data integrity

Similarly, we have to define the server-side context:

GSSManager manager = GSSManager.getInstance();
GSSContext serverContext = manager.createContext((GSSCredential) null);

As we can see, this is much simpler than the client-side context. The only difference here is that we need the acceptor’s credentials, which we have passed as null. As before, null means that the default credentials will be used.

4.2. GSS API Authentication

Although we have created the server and client-side GSSContext, please note that they are unestablished at this stage.

To establish these contexts, we need to exchange tokens specific to the security mechanism specified, that is Kerberos v5:

// On the client-side
clientToken = clientContext.initSecContext(new byte[0], 0, 0);
sendToServer(clientToken); // This is supposed to be sent over the network
		
// On the server-side
serverToken = serverContext.acceptSecContext(clientToken, 0, clientToken.length);
sendToClient(serverToken); // This is supposed to be sent over the network
		
// Back on the client side
clientContext.initSecContext(serverToken, 0, serverToken.length);

This finally establishes the context at both ends:

assertTrue(serverContext.isEstablished());
assertTrue(clientContext.isEstablished());

4.3. GSS API Secure Communication

Now that we have the context established at both ends, we can start sending data with integrity and confidentiality:

// On the client-side
byte[] messageBytes = "Baeldung".getBytes();
MessageProp clientProp = new MessageProp(0, true);
byte[] clientToken = clientContext.wrap(messageBytes, 0, messageBytes.length, clientProp);
sendToServer(clientToken); // This is supposed to be sent over the network
       
// On the server-side 
MessageProp serverProp = new MessageProp(0, false);
byte[] bytes = serverContext.unwrap(clientToken, 0, clientToken.length, serverProp);
String string = new String(bytes);
assertEquals("Baeldung", string);

There are a couple of things happening here, let’s analyze:

  • MessageProp is used by the client to configure the wrap method and generate the token
  • The method wrap also adds cryptographic MIC of the data, the MIC is bundled as part of the token
  • That token is sent to the server (possibly over a network call)
  • The server leverages MessageProp again to set the unwrap method and get data back
  • Also, the method unwrap verifies the MIC for the received data, ensuring the data integrity

Hence, the client and server are able to exchange data with integrity and confidentiality.

4.4. Kerberos Set-up for the Example

Now, a GSS mechanism like Kerberos is typically expected to fetch credentials from an existing Subject. The class Subject here is a JAAS abstraction representing an entity like a person or a service. This is usually populated during a JAAS-based authentication.

However, for our example, we’ll not directly use a JAAS-based authentication. We’ll let Kerberos obtain credentials directly, in our case using a keytab file. There is a JVM system parameter to achieve that:

-Djavax.security.auth.useSubjectCredsOnly=false

However, the default Kerberos implementation provided by Sun Microsystems relies on JAAS to provide authentication.

This may sound contrary to what we just discussed. Note that we can either use JAAS explicitly in our application, which will populate the Subject, or leave it to the underlying mechanism to authenticate directly, in which case it uses JAAS internally anyway. Hence, we need to provide a JAAS configuration file to the underlying mechanism:

com.sun.security.jgss.initiate  {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab=example.keytab
  principal="client/localhost"
  storeKey=true;
};
com.sun.security.jgss.accept  {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab=example.keytab
  storeKey=true
  principal="HTTP/localhost";
};

This configuration is straightforward: we've defined Kerberos as the required login module for both the initiator and the acceptor. Additionally, we've configured each entry to use its respective principal from a keytab file. We can pass this JAAS configuration to the JVM as a system parameter:

-Djava.security.auth.login.config=login.conf

Here, the assumption is that we have access to a Kerberos KDC. In the KDC, we've set up the required principals and obtained the keytab file to use, say "example.keytab".

Additionally, we need the Kerberos configuration file pointing to the right KDC:

[libdefaults]
default_realm = EXAMPLE.COM
udp_preference_limit = 1
[realms]
EXAMPLE.COM = {
    kdc = localhost:52135
}

This simple configuration defines a KDC running on port 52135 with EXAMPLE.COM as the default realm. We can pass this to the JVM as a system parameter:

-Djava.security.krb5.conf=krb5.conf

4.5. Running the Example

To run the example, we have to make use of the Kerberos artifacts discussed in the last section.

Also, we need to pass the required JVM parameters:

java -Djava.security.krb5.conf=krb5.conf \
  -Djavax.security.auth.useSubjectCredsOnly=false \
  -Djava.security.auth.login.config=login.conf \
  com.baeldung.jgss.JgssUnitTest

This is sufficient for Kerberos to perform the authentication with credentials from keytab and GSS to establish the contexts.
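
Alternatively, when we control the process start-up, the same three parameters can be set programmatically before any GSS or Kerberos calls are made. A sketch, assuming the same file names as above:

```java
public class KerberosBootstrap {
    public static void main(String[] args) {
        // Equivalent to the -D flags above; must run before the first GSS/Kerberos call,
        // since the Kerberos configuration is read lazily on first use
        System.setProperty("java.security.krb5.conf", "krb5.conf");
        System.setProperty("javax.security.auth.useSubjectCredsOnly", "false");
        System.setProperty("java.security.auth.login.config", "login.conf");

        System.out.println(System.getProperty("java.security.krb5.conf")); // krb5.conf
    }
}
```

Passing the flags on the command line is usually preferable, since it keeps deployment configuration out of the code.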

5. GSS API in Real World

While the GSS API promises to solve a host of security problems through pluggable mechanisms, a few use cases have been adopted more widely than others:

  • It's widely used in SASL as a security mechanism, especially where Kerberos is the underlying mechanism of choice. Kerberos is a widely used authentication mechanism, especially within enterprise networks, and it's really useful to leverage an existing Kerberized infrastructure to authenticate a new application. Hence, the GSS API bridges that gap nicely.
  • It's also used in conjunction with SPNEGO to negotiate a security mechanism when one is not known beforehand. In this regard, SPNEGO is, in a sense, a pseudo-mechanism of the GSS API. It is widely supported in all modern browsers, making them capable of leveraging Kerberos-based authentication.
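
As a quick illustration of this pluggability, we can ask the GSSManager which mechanism OIDs the installed providers support; on a standard JDK, this typically includes Kerberos V5 (1.2.840.113554.1.2.2) and SPNEGO (1.3.6.1.5.5.2). A minimal sketch:

```java
import org.ietf.jgss.GSSException;
import org.ietf.jgss.GSSManager;
import org.ietf.jgss.Oid;

public class AvailableMechs {
    public static void main(String[] args) throws GSSException {
        // Well-known OIDs, for labeling what the providers report
        Oid krb5 = new Oid("1.2.840.113554.1.2.2");
        Oid spnego = new Oid("1.3.6.1.5.5.2");

        for (Oid mech : GSSManager.getInstance().getMechs()) {
            String label = mech.equals(krb5) ? " (Kerberos V5)"
              : mech.equals(spnego) ? " (SPNEGO)" : "";
            System.out.println(mech + label);
        }
    }
}
```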

6. GSS API in Comparison

GSS API is quite effective in providing security services to applications in a pluggable manner. However, it’s not the only choice to achieve this in Java.

Let's see what else Java has to offer and how these options compare against the GSS API:

  • Java Secure Socket Extension (JSSE): JSSE is a set of packages in Java that implements Secure Sockets Layer (SSL) for Java. It provides data encryption, client and server authentication, and message integrity. Unlike GSS API, JSSE relies on a Public Key Infrastructure (PKI) to work. Hence, GSS API works out to be more flexible and lightweight than JSSE.
  • Java Simple Authentication and Security Layer (SASL): SASL is a framework for authentication and data security for internet protocols which decouples them from specific authentication mechanisms. This is similar in scope to GSS API. However, Java GSS has limited support for underlying security mechanisms through available security providers.
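
To make the SASL side of this comparison concrete, we can enumerate the SASL client mechanisms registered with the installed providers; on a standard JDK this list typically includes GSSAPI. A minimal sketch:

```java
import java.util.Enumeration;
import javax.security.sasl.Sasl;
import javax.security.sasl.SaslClientFactory;

public class SaslMechanisms {
    public static void main(String[] args) {
        // Each registered factory reports the mechanism names it supports;
        // passing null applies no mechanism selection policy
        Enumeration<SaslClientFactory> factories = Sasl.getSaslClientFactories();
        while (factories.hasMoreElements()) {
            for (String mechanism : factories.nextElement().getMechanismNames(null)) {
                System.out.println(mechanism);
            }
        }
    }
}
```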

Overall, the GSS API is quite powerful at providing security services in a mechanism-agnostic manner. However, broader support for underlying security mechanisms in Java would further drive its adoption.

7. Conclusion

To sum up, in this tutorial, we understood the basics of GSS API as a security framework. We went through the Java API for GSS and understood how we can leverage them. In the process, we created simple client and server components that performed mutual authentication and exchanged data securely.

Further, we saw the practical applications of the GSS API and the alternatives available in Java.

As always, the code can be found over on GitHub.
