1. Overview
Apache Lucene is a full-text search engine library that can be used from various programming languages.
In this article, we’ll try to understand the core concepts of the library and create a simple application.
2. Maven Setup
To get started, let’s add the necessary dependencies first:
<dependency>
    <groupId>org.apache.lucene</groupId>
    <artifactId>lucene-core</artifactId>
    <version>7.1.0</version>
</dependency>
The latest version can be found in the Maven Central repository.
Also, for parsing our search queries, we’ll need:
<dependency>
    <groupId>org.apache.lucene</groupId>
    <artifactId>lucene-queryparser</artifactId>
    <version>7.1.0</version>
</dependency>
Again, check Maven Central for the latest version.
3. Core Concepts
3.1. Indexing
Simply put, Lucene uses an “inverted index” of the data – instead of mapping pages to keywords, it maps keywords to pages, much like the index at the back of a book.
This allows for faster search responses, as it searches through an index, instead of searching through text directly.
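As a toy illustration only (this is not how Lucene actually stores its data), we can picture an inverted index as a map from each keyword to the set of documents that contain it:

// Toy picture of an inverted index: keyword -> ids of the documents containing it
Map<String, Set<Integer>> invertedIndex = new HashMap<>();
invertedIndex.computeIfAbsent("tea", k -> new HashSet<>()).add(1);
invertedIndex.computeIfAbsent("tea", k -> new HashSet<>()).add(3);
invertedIndex.computeIfAbsent("herbal", k -> new HashSet<>()).add(1);

// finding documents about "tea" is now a single lookup instead of a scan over all the text
Set<Integer> documentsAboutTea = invertedIndex.get("tea"); // {1, 3}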
3.2. Documents
Here, a document is a collection of fields, and each field has a value associated with it.
Indices are typically made up of one or more documents, and search results are sets of best-matching documents.
A document isn’t necessarily a plain text file; it could also be a database table or a collection.
3.3. Fields
Documents can have field data, where a field is typically a key holding a data value:
title: Goodness of Tea
body: Discussing goodness of drinking herbal tea...
Notice that here title and body are fields and could be searched for together or individually.
3.4. Analysis
Analysis is the process of converting the given text into smaller, more precise units for the sake of easier searching.
The text goes through various operations: extracting keywords, removing common words and punctuation, changing words to lowercase, etc.
For this purpose, there are multiple built-in analyzers:
- StandardAnalyzer – analyzes the text based on basic grammar, removes stop words like “a” and “an”, and also converts it to lowercase
- SimpleAnalyzer – breaks the text on non-letter characters and converts it to lowercase
- WhitespaceAnalyzer – breaks the text on whitespace
There are more analyzers available for us to use and customize as well.
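To get a feel for what an analyzer produces, we can run a piece of text through a StandardAnalyzer and print the resulting tokens. This is just an illustrative sketch (the method name, field name and sample text are made up for the example), using the TokenStream and CharTermAttribute classes from Lucene's analysis API:

// Illustrative sketch: printing the tokens produced by StandardAnalyzer
public void printTokens(String text) throws IOException {
    StandardAnalyzer analyzer = new StandardAnalyzer();
    try (TokenStream stream = analyzer.tokenStream("body", text)) {
        CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
        stream.reset();
        while (stream.incrementToken()) {
            System.out.println(term.toString());
        }
        stream.end();
    }
}

Calling it with “The Goodness of Herbal Tea” should print the lowercased tokens with the stop words filtered out: roughly “goodness”, “herbal”, “tea”.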
3.5. Searching
Once an index is built, we can search that index using a Query and an IndexSearcher. The search result is typically a result set, containing the retrieved data.
Note that an IndexWriter is responsible for creating the index and an IndexSearcher for searching the index.
3.6. Query Syntax
Lucene provides a very dynamic and easy-to-write query syntax.
To search free text, we’d just use a text String as the query.
To search text in a particular field, we’d use:
fieldName:text
For example: title:tea
Range searches:
timestamp:[1509909322 TO 1572981321]
We can also search using wildcards:
dri?k
matches any single character in place of the wildcard “?”, so it would find “drink”, for example.
d*k
searches for words starting with “d” and ending in “k”, with any number of characters in between.
uni*
will find words starting with “uni”.
We may also combine these into more complex queries, using logical operators like AND, NOT, and OR:
title: "Tea in breakfast" AND "coffee"
More details about the query syntax can be found in the official Lucene documentation.
4. A Simple Application
Let’s create a simple application, and index some documents.
First, we’ll create an in-memory index, and add some documents to it:
...
Directory memoryIndex = new RAMDirectory();
StandardAnalyzer analyzer = new StandardAnalyzer();
IndexWriterConfig indexWriterConfig = new IndexWriterConfig(analyzer);
IndexWriter writer = new IndexWriter(memoryIndex, indexWriterConfig);
Document document = new Document();

document.add(new TextField("title", title, Field.Store.YES));
document.add(new TextField("body", body, Field.Store.YES));

writer.addDocument(document);
writer.close();
Here, we create a document with TextFields and add it to the index using the IndexWriter. The third argument in the TextField constructor indicates whether the value of the field should also be stored.
Analyzers are used to split the text into tokens and then filter out stop words. Stop words are common words like ‘a’, ‘am’, and ‘is’, and they depend entirely on the given language.
Next, let’s create a search query and search the index for the added document:
public List<Document> searchIndex(String inField, String queryString) {
    try {
        Query query = new QueryParser(inField, analyzer)
          .parse(queryString);

        IndexReader indexReader = DirectoryReader.open(memoryIndex);
        IndexSearcher searcher = new IndexSearcher(indexReader);
        TopDocs topDocs = searcher.search(query, 10);

        List<Document> documents = new ArrayList<>();
        for (ScoreDoc scoreDoc : topDocs.scoreDocs) {
            documents.add(searcher.doc(scoreDoc.doc));
        }
        return documents;
    } catch (IOException | ParseException e) {
        // parsing or index access failed; handle as appropriate for the application
        throw new IllegalStateException(e);
    }
}
In the search() method, the second argument indicates how many top search results should be returned.
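If we also want to know how many documents matched in total, regardless of the 10 we asked for, the TopDocs object inside searchIndex() carries that count (a long in this Lucene version):

// inside searchIndex(), after the call to searcher.search(query, 10)
long totalMatches = topDocs.totalHits;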
Now let’s test it:
@Test
public void givenSearchQueryWhenFetchedDocumentThenCorrect() {
    InMemoryLuceneIndex inMemoryLuceneIndex
      = new InMemoryLuceneIndex(new RAMDirectory(), new StandardAnalyzer());
    inMemoryLuceneIndex.indexDocument("Hello world", "Some hello world");

    List<Document> documents
      = inMemoryLuceneIndex.searchIndex("body", "world");

    assertEquals(
      "Hello world",
      documents.get(0).get("title"));
}
Here, we add a simple document to the index, with two fields ‘title’ and ‘body’, and then search for it using a search query.
5. Lucene Queries
As we are now comfortable with the basics of indexing and searching, let us dig a little deeper.
In earlier sections, we’ve seen the basic query syntax, and how to convert that into a Query instance using the QueryParser.
Lucene also provides various concrete Query implementations:
5.1. TermQuery
A Term is a basic unit for search, containing the field name together with the text to be searched for.
TermQuery is the simplest of all queries consisting of a single term:
@Test
public void givenTermQueryWhenFetchedDocumentThenCorrect() {
    InMemoryLuceneIndex inMemoryLuceneIndex
      = new InMemoryLuceneIndex(new RAMDirectory(), new StandardAnalyzer());
    inMemoryLuceneIndex.indexDocument("activity", "running in track");
    inMemoryLuceneIndex.indexDocument("activity", "Cars are running on road");

    Term term = new Term("body", "running");
    Query query = new TermQuery(term);

    List<Document> documents = inMemoryLuceneIndex.searchIndex(query);
    assertEquals(2, documents.size());
}
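Notice that this test uses an overload of searchIndex that takes a ready-made Query instead of a query string. Its body isn't shown in the article, but a minimal sketch, mirroring the String-based version above, could look like this:

// Hypothetical overload of searchIndex accepting a Query directly (sketch)
public List<Document> searchIndex(Query query) {
    try {
        IndexReader indexReader = DirectoryReader.open(memoryIndex);
        IndexSearcher searcher = new IndexSearcher(indexReader);
        TopDocs topDocs = searcher.search(query, 10);

        List<Document> documents = new ArrayList<>();
        for (ScoreDoc scoreDoc : topDocs.scoreDocs) {
            documents.add(searcher.doc(scoreDoc.doc));
        }
        return documents;
    } catch (IOException e) {
        // index access failed; handle as appropriate for the application
        throw new IllegalStateException(e);
    }
}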
5.2. PrefixQuery
To search a document with a “starts with” word:
@Test
public void givenPrefixQueryWhenFetchedDocumentThenCorrect() {
    InMemoryLuceneIndex inMemoryLuceneIndex
      = new InMemoryLuceneIndex(new RAMDirectory(), new StandardAnalyzer());
    inMemoryLuceneIndex.indexDocument("article", "Lucene introduction");
    inMemoryLuceneIndex.indexDocument("article", "Introduction to Lucene");

    Term term = new Term("body", "intro");
    Query query = new PrefixQuery(term);

    List<Document> documents = inMemoryLuceneIndex.searchIndex(query);
    assertEquals(2, documents.size());
}
5.3. WildcardQuery
As the name suggests, we can use wildcards “*” or “?” for searching:
// ...
Term term = new Term("body", "intro*");
Query query = new WildcardQuery(term);
// ...
5.4. PhraseQuery
It’s used to search for a sequence of words in a document:
// ...
inMemoryLuceneIndex.indexDocument(
  "quotes",
  "A rose by any other name would smell as sweet.");

Query query = new PhraseQuery(
  1, "body", new BytesRef("smell"), new BytesRef("sweet"));
List<Document> documents = inMemoryLuceneIndex.searchIndex(query);
// ...
Notice that the first argument of the PhraseQuery constructor is called slop, which is the maximum allowed distance, in number of words, between the terms to be matched.
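For example, in the quote above there is one word (“as”) between “smell” and “sweet”, so an exact phrase with slop 0 would not match, while a slop of 1 does. A quick sketch using the String-based PhraseQuery constructors:

// exact phrase (slop 0): does not match "... smell as sweet."
Query exactPhrase = new PhraseQuery("body", "smell", "sweet");

// allowing one word in between (slop 1): matches
Query sloppyPhrase = new PhraseQuery(1, "body", "smell", "sweet");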
5.5. FuzzyQuery
We can use this when searching for something similar, but not necessarily identical:
// ... inMemoryLuceneIndex.indexDocument("article", "Halloween Festival"); inMemoryLuceneIndex.indexDocument("decoration", "Decorations for Halloween"); Term term = new Term("body", "hallowen"); Query query = new FuzzyQuery(term); List<Document> documents = inMemoryLuceneIndex.searchIndex(query); // ...
We tried searching for the text “Halloween”, but with the misspelled “hallowen”.
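By default, FuzzyQuery allows up to two edits between the query term and the indexed terms. If we want to be stricter, the maximum number of edits can be passed explicitly; a small sketch:

// allow at most one edit between "hallowen" and the indexed terms
Query strictFuzzy = new FuzzyQuery(new Term("body", "hallowen"), 1);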
5.6. BooleanQuery
Sometimes we might need to execute complex searches, combining two or more different types of queries:
// ... inMemoryLuceneIndex.indexDocument("Destination", "Las Vegas singapore car"); inMemoryLuceneIndex.indexDocument("Commutes in singapore", "Bus Car Bikes"); Term term1 = new Term("body", "singapore"); Term term2 = new Term("body", "car"); TermQuery query1 = new TermQuery(term1); TermQuery query2 = new TermQuery(term2); BooleanQuery booleanQuery = new BooleanQuery.Builder() .add(query1, BooleanClause.Occur.MUST) .add(query2, BooleanClause.Occur.MUST) .build(); // ...
6. Sorting Search Results
We may also sort the search result documents by certain fields:
@Test
public void givenSortFieldWhenSortedThenCorrect() {
    InMemoryLuceneIndex inMemoryLuceneIndex
      = new InMemoryLuceneIndex(new RAMDirectory(), new StandardAnalyzer());
    inMemoryLuceneIndex.indexDocument("Ganges", "River in India");
    inMemoryLuceneIndex.indexDocument("Mekong", "This river flows in south Asia");
    inMemoryLuceneIndex.indexDocument("Amazon", "Rain forest river");
    inMemoryLuceneIndex.indexDocument("Rhine", "Belongs to Europe");
    inMemoryLuceneIndex.indexDocument("Nile", "Longest River");

    Term term = new Term("body", "river");
    Query query = new WildcardQuery(term);

    SortField sortField
      = new SortField("title", SortField.Type.STRING_VAL, false);
    Sort sortByTitle = new Sort(sortField);

    List<Document> documents
      = inMemoryLuceneIndex.searchIndex(query, sortByTitle);
    assertEquals(4, documents.size());
    assertEquals("Amazon", documents.get(0).getField("title").stringValue());
}
We sorted the fetched documents by their title fields, which hold the names of the rivers. The boolean argument to the SortField constructor is for reversing the sort order.
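The searchIndex overload that accepts a Sort isn't shown in the article; a minimal sketch would simply pass the sort on to IndexSearcher.search. Note also that sorting on a field generally requires doc values, so the indexing code would additionally need to store the title as a SortedDocValuesField; both points are assumptions about how the helper class is extended:

// Hypothetical overload of searchIndex that applies a Sort (sketch)
public List<Document> searchIndex(Query query, Sort sort) {
    try {
        IndexReader indexReader = DirectoryReader.open(memoryIndex);
        IndexSearcher searcher = new IndexSearcher(indexReader);
        TopDocs topDocs = searcher.search(query, 10, sort);

        List<Document> documents = new ArrayList<>();
        for (ScoreDoc scoreDoc : topDocs.scoreDocs) {
            documents.add(searcher.doc(scoreDoc.doc));
        }
        return documents;
    } catch (IOException e) {
        throw new IllegalStateException(e);
    }
}

// during indexing, alongside the title TextField:
// document.add(new SortedDocValuesField("title", new BytesRef(title)));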
7. Remove Documents from Index
Let’s try to remove some documents from the index based on a given Term:
// ...
IndexWriterConfig indexWriterConfig = new IndexWriterConfig(analyzer);
IndexWriter writer = new IndexWriter(memoryIndex, indexWriterConfig);
writer.deleteDocuments(term);
// ...
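Keep in mind that the deletion only becomes visible to readers opened afterwards once the writer has committed or been closed, so a deleteDocument helper like the one used in the test below would presumably finish with something like:

// ...
writer.deleteDocuments(term);
writer.commit();
writer.close();
// ...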
We’ll test this:
@Test
public void whenDocumentDeletedThenCorrect() {
    InMemoryLuceneIndex inMemoryLuceneIndex
      = new InMemoryLuceneIndex(new RAMDirectory(), new StandardAnalyzer());
    inMemoryLuceneIndex.indexDocument("Ganges", "River in India");
    inMemoryLuceneIndex.indexDocument("Mekong", "This river flows in south Asia");

    Term term = new Term("title", "ganges");
    inMemoryLuceneIndex.deleteDocument(term);

    Query query = new TermQuery(term);

    List<Document> documents = inMemoryLuceneIndex.searchIndex(query);
    assertEquals(0, documents.size());
}
8. Conclusion
This article was a quick introduction to getting started with Apache Lucene. We also executed various queries and sorted the retrieved documents.
As always, the code for the examples can be found over on GitHub.