
You're reading from Lucene 4 Cookbook

Product type: Book | Published in: Jun 2015 | Reading level: Expert | Edition: 1st | ISBN-13: 9781782162285

Authors (2):
Edwood Ng

Edwood Ng is a technologist with over a decade of experience in building scalable solutions, from proprietary implementations to client-facing web-based applications. Currently, he is the director of DevOps at Wellframe, leading infrastructure and DevOps operations. His background in search engines began at Endeca Technologies in 2004, where he was a technical consultant helping numerous clients architect and implement faceted search solutions. After Endeca, he drew on this knowledge and began designing and building Lucene-based solutions. His first Lucene implementation to go into production was the search engine behind http://UpDown.com. From there on, he continued to create search applications using Lucene extensively to deliver robust and scalable systems for his clients. Edwood is a supporter of open source software and has contributed the sfI18NGettextPluralPlugin plugin to the Symfony project.

Vineeth Mohan

Vineeth Mohan is an architect and developer. He currently works as the CTO at Factweavers Technologies and is also an Elasticsearch-certified trainer. He loves to spend time studying emerging technologies and applications related to data analytics, data visualization, machine learning, natural language processing, and developments in search analytics. He began coding during his high school days, which later ignited his interest in computer science, and he pursued engineering at Model Engineering College, Cochin. He was recruited by the search giant Yahoo! during his college days. After two years of work at Yahoo! on various big data projects, he joined a start-up that dealt with search and analytics, and finally started his own big data consulting company, Factweavers. Under his leadership and technical expertise, Factweavers has been one of the early adopters of Elasticsearch and has been engaged in projects related to end-to-end big data solutions and analytics for the last few years. There, he got the opportunity to learn various big data technologies, such as Hadoop, and high-performance data ingress and storage systems. Later, he moved to a start-up in his hometown, where he chose Elasticsearch as the primary search and analytics engine for the project assigned to him. In 2014, he founded Factweavers Technologies along with Jalaluddeen; it is a consultancy that aims to provide Elasticsearch-based solutions. He is also an Elasticsearch-certified corporate trainer who conducts training sessions in India. To date, he has worked on numerous projects that are mostly based on Elasticsearch and has trained numerous multinationals on it.

Chapter 8. Introducing Elasticsearch

We have seen how Lucene can be incorporated to build a high-performance search application. We have also learned that Lucene by itself is a library; it is not intended to run as a stand-alone service. Because Lucene does not come with any user interfaces, we need to write some code around the library to provide our own interfaces. The road to adopting Lucene may not be as straightforward as using a stand-alone service, but the many customizable options Lucene provides out of the box should outweigh the burden of the initial setup. To make setup simpler and allow users to deploy Lucene quickly, we can leverage an open source product called Elasticsearch, which wraps an interface around Lucene. It is a stand-alone server with its own facilities to manage data ingestion operations and searches. It also comes with all sorts of tools to manage all aspects of the indexing and searching processes. We are going to cover...

Introduction


There are currently two major open source search engine projects based on Lucene: Solr and Elasticsearch. Both are very capable search engines, and their search/indexing performance and features are comparable. Solr has a nicer admin user interface, while Elasticsearch provides a simpler RESTful interface for its entire API. Elasticsearch places more emphasis on sharding for a distributed architecture, although Solr also provides SolrCloud, which is Solr's answer to distributed architecture. At the time of writing, the latest Elasticsearch release ships as part of a stack of Elasticsearch, Logstash, and Kibana. Elasticsearch is becoming a big player in data analytics, providing the capability to slice and dice time-series data (for example, log analysis with Logstash) and visualization with Kibana.

Elasticsearch accepts data in JSON format. JSON is a widely accepted data format; you can read more about it at http://www.json.org/. Elasticsearch also has the ability to be schema-less when...
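
Regardless of the mapping, a document submitted to Elasticsearch is simply a JSON object. For example, a news article might look like the following (a purely hypothetical document used for illustration):

{
    "title" : "Sample news headline",
    "publication_date" : "2015-01-01",
    "content" : "Body text of the article"
}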

Getting Elasticsearch


In this section, we will look into getting and setting up Elasticsearch. The installation process is straightforward, as all you need to do is extract the downloaded file into your desired location. Then, you can run it as is, with default settings, to begin using the search engine. We will also install a web frontend plugin called Elasticsearch-head to provide a user interface to browse and interact with Elasticsearch.

Getting ready

The prerequisite for installing Elasticsearch is Java 7; only Oracle's Java and OpenJDK are supported. The Elasticsearch installation package can be found on the official site: https://www.elastic.co/downloads. After you have downloaded the installation package, you can extract it into an installation location.
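
For example, on Linux or Mac OS X, you might extract the archive from the command line as follows (the file name is a placeholder for whichever version you downloaded):

tar -xzf elasticsearch-<version>.tar.gz
cd elasticsearch-<version>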

How to do it...

You are now ready to run Elasticsearch, but before we start the search engine, let's install the Elasticsearch-head plugin. You can run the following commands on the command line to perform the installation...
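
At the time of writing (Elasticsearch 1.x), a typical installation of the plugin followed by a first start looked roughly like this (a sketch; the plugin is installed from the mobz/elasticsearch-head GitHub repository):

bin/plugin --install mobz/elasticsearch-head
bin/elasticsearch

Once Elasticsearch is running, the head interface should be reachable at http://localhost:9200/_plugin/head/.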

Creating a new index


Before we start adding any documents to Elasticsearch, we need to create an index first. An index in Elasticsearch is basically a named space into which you can ingest data. Elasticsearch supports handling multiple indexes right out of the box, and we will take advantage of this by creating our own index for our exercise.

How to do it…

Run the following command to create a new index using the HTTP PUT method:

curl -XPUT 'http://localhost:9200/news/'

This command should return the following:

{"acknowledged":true}

If the index already exists (for example, if you run the above command twice), you will see the following error message:

{"error":"IndexAlreadyExistsException[[news] already exists]","status":400}

To confirm the index is created, we can use the GET method:

curl -XGET 'http://localhost:9200/news/'

This should return the following:

{
  news: {
    aliases: { },
    mappings: { },
    settings: {
      index: {
        creation_date: "1425700268502",
        number_of_shards: "5",
 ...

Predefine field mappings


A mapping is analogous to a table in a database; it contains fields that are equivalent to the columns in a table. However, a mapping is only a logical separation; it does not physically separate the data between mappings the way tables do in a database. When data is added to an index, it is all stored as documents in Lucene. Although Elasticsearch supports schema-less data ingestion, we should always predefine fields so that we know exactly how data types are mapped, instead of relying on Elasticsearch to detect data types, which may sometimes produce undesired results. In this section, we will demonstrate field mappings for a news article index.

How to do it...

We will be using the put mapping API to predefine fields. Here is an example:

curl -XPUT "localhost:9200/news/_mapping/article" -d '
{
  "article" : {
    "properties" : {
      "title" : {"type" : "string", "store" : true, "index" : "analyzed" },
      "content": {"type" : "string", "store" : true, "index...

Adding a document


After the index and mapping are created, we can begin sending data to Elasticsearch for indexing. We can use either HTTP PUT or POST to submit data. The difference between these two methods is that with PUT, we need to specify a unique ID in the URL, whereas with POST, Elasticsearch will automatically generate an ID for us. Here is the general URL format for submitting a document:

http://<host>:<port>/<index>/<mapping>/<id> (the trailing <id> is required for PUT and omitted for POST)

Both methods accept data in JSON format. In our scenario, the JSON will be a flat key-value pair structure.

How to do it...

Let's look at an example. We will use both HTTP methods to submit news articles to our index.

Using HTTP PUT:

curl -XPUT 'http://localhost:9200/news/article/1' -d '
{
    "title" : "Europe stocks tumble on political fears , PMI data" ,
    "publication_date" : "2012-03-30",
    "content" : "LONDON (MarketWatch)-European stock markets tumbled to a three-month low on Monday, driven by steep losses for banks and resource firms...

Deleting a document


The delete API allows you to delete a document by id. When documents are added to the index, an id (the _id field), either supplied by the source data or automatically generated, is always assigned. Every document in the index has an _id value, as it is used to uniquely identify a document within an index and type. The delete API can be triggered by the HTTP DELETE method.

How to do it…

Here is a command to delete the document whose id is 1 in the news index, under the article type:

curl -XDELETE 'http://localhost:9200/news/article/1'

If the document exists, it should return a message like the following:

{"found":true,"_index":"news","_type":"article","_id":"1","_version":2}

Otherwise, it would say not found:

{"found":false,"_index":"news","_type":"article","_id":"1","_version":1}

How it works…

The DELETE HTTP method triggers the delete API. In our example, we specified the index as news and the type as article in the URL, and the document id (_id field) as 1. We can verify the...
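
One simple way to verify is to fetch the document by its id with a GET request; after deletion, the response reports found as false:

curl -XGET 'http://localhost:9200/news/article/1'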

Updating a document


The update API allows you to add or update fields in a document. The action can be triggered by using the HTTP PUT or POST method. A document is specified by its id (the _id field).

How to do it…

Let's assume that we have an existing document in our news article index, in which the id is 1 and the title is "Europe stocks tumble on political fears, PMI data". We will submit an update action to change the title to "Europe stocks tumble on political fears, PMI data | STOCK NEWS".

Here is a PUT method example (note that a PUT to the document URL reindexes the whole document with the body supplied):

curl -XPUT 'http://localhost:9200/news/article/1' -d '
{
    "title" : "Europe stocks tumble on political fears, PMI data | STOCK NEWS"
}'

If successful, it should return the following:

{"_index":"news","_type":"article","_id":"1","_version":2,"created":false}

Here is a POST method example using the _update endpoint, which merges the supplied fields into the existing document:

curl -XPOST 'http://localhost:9200/news/article/1/_update' -d '
{ "doc" : {
    "title" : "Europe stocks tumble on political fears, PMI data | STOCK NEWS"
  }
}'

If successful, it should return the...

Performing bulk indexing


Elasticsearch supports bulk operations to load or update data in the index. The advantage of a bulk update is that it reduces the number of HTTP calls, which in turn increases throughput by cutting down on the round trips between calls. When using the bulk API, we should store the bulk data in a file to prepare for an upload. In cURL, we can use the --data-binary flag to upload a file instead of the plain -d flag. This is because in bulk mode, a newline character is treated as a record delimiter, which means the JSON cannot be pretty-printed.

The bulk API supports most update operations and can be broken down into four types of actions: index, create, delete, and update. Index and create serve a similar purpose; you can use either one to insert a document. Each action is composed of two rows: a row for the action and its metadata, and a row for the source (that is, the document we want to insert). Delete has the same semantics as the delete API and does not require a source row. Update's syntax is similar to...
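
As a sketch of what a bulk request body might look like (the file name bulk.json, the document ids, and the field values are just examples), each index action line is followed by its source document on the next line, while delete needs only the action line:

{ "index" : { "_index" : "news", "_type" : "article", "_id" : "2" } }
{ "title" : "Second sample article", "publication_date" : "2012-04-03", "content" : "Example body" }
{ "delete" : { "_index" : "news", "_type" : "article", "_id" : "1" } }

The file can then be uploaded with the --data-binary flag; note that the body must end with a newline character:

curl -XPOST 'http://localhost:9200/_bulk' --data-binary @bulk.json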

Searching the index


Elasticsearch has a flexible search interface; it allows you to search across multiple indexes and types, or to limit the search to a specific index and/or type. The search interface supports URI searches, with the query passed as a query-string parameter, as well as request-body searches in JSON format, in which you can use Elasticsearch's Query DSL (domain-specific language) to specify search components. We will go over both approaches in this section.

How to do it...

Let's look at the following examples:

curl -XGET 'http://localhost:9200/news/article/_search?q=monday'

curl -XGET 'http://localhost:9200/news,news2/article/_search?q=monday'

curl -XGET 'http://localhost:9200/news/article,article2/_search?q=monday'

curl -XGET 'http://localhost:9200/news/_search?q=monday'

curl -XGET 'http://localhost:9200/_search?q=monday'

Each command demonstrates a different form of URI-based search; we can search across any combination of indexes and types. Assuming that we do have an existing document in the index...
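
For comparison, the request-body form of a similar search might look like the following (a sketch using the match query against the content field from the earlier mapping):

curl -XGET 'http://localhost:9200/news/article/_search' -d '
{
    "query" : {
        "match" : { "content" : "monday" }
    }
}'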

Scaling Elasticsearch


The main selling point of Elasticsearch is its simplicity to scale. Even running with default settings, it immediately begins to showcase its scalability, with each index defaulting to 5 shards and 1 replica. The "Elastic" in Elasticsearch refers to its clustering flexibility: you can easily scale up Elasticsearch by adding more machines to the cluster in order to instantly increase capacity. It also includes facilities to handle automatic failover, so planning for a scalable and highly available architecture is greatly simplified.
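
For illustration, these defaults can be overridden at index creation time (a sketch; news2 is just a hypothetical index name):

curl -XPUT 'http://localhost:9200/news2/' -d '
{
    "settings" : {
        "number_of_shards" : 3,
        "number_of_replicas" : 2
    }
}'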

To better comprehend Elasticsearch's scaling strategy, you will need to understand three main concepts: sharding, replicas, and clustering. These are the core concepts that drive elasticity in Elasticsearch.

A shard is a single Lucene index instance; it represents a slice of a bigger dataset. Sharding is a data partitioning strategy to make a large dataset more manageable. When dealing with an ever-growing...
