We have seen how Lucene can be incorporated to build a high-performance search application. We have also learned that Lucene by itself is a library; it is not intended to run as a stand-alone service. Because Lucene does not come with any user interface, we need to write some code around the library to provide our own. Adapting to Lucene may not be as straightforward as using a stand-alone service, but the many customizable options Lucene provides out of the box should outweigh the burden of the initial setup. To simplify setup and deploy Lucene quickly, we can leverage an open source product called Elasticsearch, which wraps a user interface around Lucene. It is a stand-alone server with its own facilities to manage data ingestion operations and searches, and it comes with tools to manage all aspects of the indexing and searching processes. We are going to cover...
You're reading from Lucene 4 Cookbook
There are currently two major open source search engine projects based on Lucene: Solr and Elasticsearch. Both are very capable search engines, and their search/indexing performance and features are comparable. Solr has a nicer admin user interface, while Elasticsearch provides a simpler RESTful interface for its entire API. Elasticsearch places more emphasis on sharding for a distributed architecture, although Solr also provides SolrCloud, its own answer to distributed deployment. At the time of writing, the latest Elasticsearch release ships as part of a stack alongside Logstash and Kibana. Elasticsearch is becoming a big player in data analytics, providing the capability to slice and dice time-series data (for example, log analysis with Logstash) and visualization with Kibana.
Elasticsearch accepts data in JSON format. JSON is a widely accepted data format; you can read more about it here: http://www.json.org/. Elasticsearch also has the ability to be schema-less when...
In this section, we will look into getting and setting up Elasticsearch. The installation process is straightforward, as all you need to do is extract the downloaded file into your desired location. You can then run it as is, with default settings, to begin using the search engine. We will also install a web frontend plugin called Elasticsearch-head to provide a user interface for browsing and interacting with Elasticsearch.
The prerequisite to install Elasticsearch is Java 7; only Oracle's Java and OpenJDK are supported. The installation package of Elasticsearch can be found on their official site: https://www.elastic.co/downloads. After you have downloaded the installation package, you can extract the package into an installation location.
Before we start adding any documents to Elasticsearch, we need to create an index first. An index in Elasticsearch is basically a named space into which you can ingest data. Elasticsearch supports multiple indexes right out of the box; we will take advantage of this by creating our own index for our exercise.
Run the following command to start a new index by using the HTTP PUT method:
curl -XPUT 'http://localhost:9200/news/'
This command should return the following:
{"acknowledged":true}
If the index already exists (for example, if you run the above command twice), you will see the following error message:
{"error":"IndexAlreadyExistsException[[news] already exists]","status":400}
To confirm the index is created, we can use the GET method:
curl -XGET 'http://localhost:9200/news/'
This should return the following:
{ "news" : { "aliases" : { }, "mappings" : { }, "settings" : { "index" : { "creation_date" : "1425700268502", "number_of_shards" : "5", ...
A mapping is analogous to a table in a database; it contains fields that are equivalent to the columns of a table. However, a mapping is only a logical separation; it does not physically separate the data between mappings the way tables do in a database. When data is added to an index, it is all stored as documents in Lucene. Although Elasticsearch supports schema-less data ingestion, we should always predefine fields so that we know exactly which data type each field maps to, instead of relying on Elasticsearch to detect data types, which can sometimes produce undesired results. In this section, we will demonstrate field mappings for a news article index.
We will be using the put mapping API to predefine fields. Here is an example:
curl -XPUT "localhost:9200/news/_mapping/article" -d ' { "article" : { "properties" : { "title" : {"type" : "string", "store" : true, "index" : "analyzed" }, "content": {"type" : "string", "store" : true, "index...
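The command above is truncated; for illustration, a complete put-mapping body for the two string fields could be sketched as follows, written to a file and then applied with curl. This uses Elasticsearch 1.x mapping syntax, and the exact field list is an assumption based on the news-article example:

```shell
# Sketch of a full put-mapping body (Elasticsearch 1.x "string" field syntax).
# The field list is illustrative, mirroring the news-article example.
cat > /tmp/article_mapping.json <<'EOF'
{
  "article" : {
    "properties" : {
      "title"   : { "type" : "string", "store" : true, "index" : "analyzed" },
      "content" : { "type" : "string", "store" : true, "index" : "analyzed" }
    }
  }
}
EOF
# Apply it (assumes Elasticsearch is listening on localhost:9200):
# curl -XPUT 'localhost:9200/news/_mapping/article' -d @/tmp/article_mapping.json
```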
After index and mapping are created, we can begin sending data to Elasticsearch for indexing. We can use either HTTP PUT or POST to submit data. The difference between these two methods is that with PUT, we need to specify a unique ID, whereas with POST, Elasticsearch will automatically generate an ID for us. Here is the general URL format to submit a document:
http://<host>:<port>/<index>/<mapping>/<id> (the trailing <id> is required for PUT; with POST it is omitted and generated automatically)
Both methods accept data in JSON format. In our scenario, the JSON format should be in a flat key value pair structure.
Let's look at an example. We will use both HTTP methods to submit news articles to our index.
Using HTTP PUT:
curl -XPUT 'http://localhost:9200/news/article/1' -d ' { "title" : "Europe stocks tumble on political fears , PMI data" , "publication_date" : "2012-03-30", "content" : "LONDON (MarketWatch)-European stock markets tumbled to a three-month low on Monday, driven by steep losses for banks and resource firms...
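The PUT example above supplies the id (1) in the URL. A sketch of the POST variant, where Elasticsearch generates the id itself, might look like this (the article content here is purely illustrative):

```shell
# With POST, the URL has no trailing id; Elasticsearch auto-generates one.
cat > /tmp/article_post.json <<'EOF'
{
  "title" : "Example headline for illustration",
  "publication_date" : "2012-04-02",
  "content" : "Placeholder article body."
}
EOF
# Submit the document (assumes a local Elasticsearch instance):
# curl -XPOST 'http://localhost:9200/news/article' -d @/tmp/article_post.json
```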
The delete API allows you to delete a document by id. When documents are added to the index, an id (the _id field), either supplied by the source data or automatically generated, is always assigned. Every document in the index has to have an _id value, as it is used to uniquely identify a document within an index and type. The delete API can be triggered by the HTTP DELETE method.
Here is a command to delete the document whose id is 1 in the news index, under the article type:
curl -XDELETE 'http://localhost:9200/news/article/1'
If the document exists, it should return a message like the following:
{"found":true,"_index":"news","_type":"article","_id":"1","_version":2}
Otherwise, it would say not found:
{"found":false,"_index":"news","_type":"article","_id":"1","_version":1}
The update API allows you to add/update fields in a document. The action can be triggered by using the HTTP PUT or POST method. A document is specified by its id (the _id field).
Let's assume that we have an existing document in our news article index in which id is 1 and title is "Europe stocks tumble on political fears, PMI data". We will submit an update action to change the title to "Europe stocks tumble on political fears, PMI data | STOCK NEWS".
Here is a PUT method example (note that a plain PUT reindexes the document, replacing its source entirely with the body supplied):
curl -XPUT 'http://localhost:9200/news/article/1' -d ' { "title" : "Europe stocks tumble on political fears, PMI data | STOCK NEWS" }'
If successful, it should return the following:
{"_index":"news","_type":"article","_id":"1","_version":2,"created":false}
Here is a POST method example:
curl -XPOST 'http://localhost:9200/news/article/1/_update' -d ' { "doc" : { "title" : "Europe stocks tumble on political fears, PMI data | STOCK NEWS" } }'
If successful, it should return the...
Elasticsearch supports bulk operations to load/update data in the index. The advantage of a bulk update is that it reduces the number of HTTP calls, which in turn increases throughput by cutting down the round trips between calls. When using the bulk API, we should store the bulk data in a file to prepare for the upload. In curl, we use the --data-binary flag to upload the file instead of the plain -d flag, because in bulk mode a newline character is treated as a record delimiter, which means no pretty-printed JSON.
The bulk API supports most update operations and can be broken down into four types of actions: index, create, delete, and update. Index and create serve a similar purpose; you can use either one to insert a document. Each such action is composed of two rows: a row for the action and metadata, and a row for the source (that is, the document we want to insert). Delete has the same semantics as the delete API and does not require a source row. Update's syntax is similar to...
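Putting those action types together, a bulk request file could be sketched like this (the news/article names follow the chapter's examples; the documents themselves are illustrative). Note that every row, including the last, must end with a newline:

```shell
# Newline-delimited bulk file: an action/metadata row, then a source row
# for actions that need one (index/create/update); delete has no source row.
cat > /tmp/bulk_request.json <<'EOF'
{ "index" : { "_index" : "news", "_type" : "article", "_id" : "2" } }
{ "title" : "Example headline", "publication_date" : "2012-04-02" }
{ "delete" : { "_index" : "news", "_type" : "article", "_id" : "1" } }
{ "update" : { "_index" : "news", "_type" : "article", "_id" : "2" } }
{ "doc" : { "title" : "Example headline | STOCK NEWS" } }
EOF
# Upload with --data-binary so the newline delimiters are preserved:
# curl -XPOST 'http://localhost:9200/_bulk' --data-binary @/tmp/bulk_request.json
```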
Elasticsearch has a flexible search interface: it allows you to search across multiple indexes and types, or limit the search to a specific index and/or type. The search interface supports URI search, with the query passed as a query-string parameter, as well as request-body search in JSON format, in which you can use Elasticsearch's Query DSL (domain-specific language) to specify search components. We will go over both of these approaches in this section.
Let's look at the following examples:
curl -XGET 'http://localhost:9200/news/article/_search?q=monday'
curl -XGET 'http://localhost:9200/news,news2/article/_search?q=monday'
curl -XGET 'http://localhost:9200/news/article,article2/_search?q=monday'
curl -XGET 'http://localhost:9200/news/_search?q=monday'
curl -XGET 'http://localhost:9200/_search?q=monday'
Each command demonstrates a different form of URI-based search; we can search against any combination of indexes and types. Assuming that we do have an existing document in the index...
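For comparison, the request-body approach sends the query as a JSON document using the Query DSL. A minimal sketch, assuming the news/article data from earlier (a match query on the content field):

```shell
# Minimal Query DSL body: match documents whose content contains "monday".
cat > /tmp/query.json <<'EOF'
{
  "query" : {
    "match" : { "content" : "monday" }
  }
}
EOF
# Run the search with the file as the request body (local instance assumed):
# curl -XGET 'http://localhost:9200/news/article/_search' -d @/tmp/query.json
```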
The main selling point of Elasticsearch is its simplicity to scale. Even running with default settings, it immediately showcases its scalability, with each index set to 5 shards and 1 replica by default. The name "Elastic" in Elasticsearch refers to its clustering flexibility: you can easily scale Elasticsearch up by adding more machines to the mix in order to instantly increase capacity. It also includes facilities for automatic failover, so planning for a scalable and highly available architecture is greatly simplified.
To better comprehend Elasticsearch's scaling strategy, you will need to understand three main concepts: sharding, replica, and clustering. These are the core concepts that help drive elasticity in Elasticsearch.
A shard is a single Lucene index instance; it represents a slice of a bigger dataset. Sharding is a data partitioning strategy to make a large dataset more manageable. When dealing with an ever-growing...
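Shard and replica counts are chosen when an index is created. As a sketch, overriding the 5-shard/1-replica default might look like the following (the numbers and the news2 index name are arbitrary; keep in mind that the shard count cannot be changed after creation, while the replica count can):

```shell
# Create an index with explicit shard/replica settings instead of the defaults.
cat > /tmp/index_settings.json <<'EOF'
{
  "settings" : {
    "number_of_shards" : 3,
    "number_of_replicas" : 2
  }
}
EOF
# curl -XPUT 'http://localhost:9200/news2/' -d @/tmp/index_settings.json
```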