Elasticsearch Server: Second Edition

About this book

Publication date: April 2014
Publisher: Packt
Pages: 428
ISBN: 9781783980529

 

Chapter 1. Getting Started with the Elasticsearch Cluster

Welcome to the wonderful world of Elasticsearch—a great full-text search and analytics engine. It doesn't matter if you are new to Elasticsearch and full-text search in general or if you already have experience with them. We hope that by reading this book you'll be able to learn and extend your knowledge of Elasticsearch. As this book is also dedicated to beginners, we decided to start with a short introduction to full-text search in general, followed by a brief overview of Elasticsearch.

The first thing we need to do with Elasticsearch is install it. With many applications, you start with the installation and configuration and then tend to forget the importance of those steps. We will try to guide you through them so that they become easier to remember. In addition to this, we will show you the simplest way to index and retrieve data without getting into too many details. By the end of this chapter, you will have learned about the following topics:

  • Full-text searching

  • Understanding Apache Lucene

  • Performing text analysis

  • Learning the basic concepts of Elasticsearch

  • Installing and configuring Elasticsearch

  • Using the Elasticsearch REST API to manipulate data

  • Searching using basic URI requests

 

Full-text searching


Back in the days when full-text searching was a term known to only a small percentage of engineers, most of us used SQL databases to perform search operations. This is fine, at least to some extent. However, as you go deeper and deeper, you start to see the limits of such an approach—lack of scalability, not enough flexibility, and no language analysis (of course, there were additions that introduced full-text searching to SQL databases). These were the reasons why Apache Lucene (http://lucene.apache.org) was created—to provide a library with full-text search capabilities. It is very fast, scalable, and provides analysis capabilities for different languages.

The Lucene glossary and architecture

Before going into the details of the analysis process, we would like to introduce you to the Apache Lucene glossary and its overall architecture. The basic concepts of the library are as follows:

  • Document: This is the main data carrier used during indexing and searching, comprising one or more fields that contain the data we put in and get out of Lucene.

  • Field: This is a section of the document, which is built of two parts: the name and the value.

  • Term: This is a unit of search representing a word from the text.

  • Token: This is an occurrence of a term in the text of the field. It consists of the term text, start and end offsets, and a type.

Apache Lucene writes all the information to a structure called the inverted index. It is a data structure that maps the terms in the index to the documents, and not the other way around as a relational database does in its tables. You can think of an inverted index as a data structure where data is term-oriented rather than document-oriented. Let's see how a simple inverted index looks. For example, let's assume that we have documents with only a title field to be indexed, and they look as follows:

  • Elasticsearch Server 1.0 (document 1)

  • Mastering Elasticsearch (document 2)

  • Apache Solr 4 Cookbook (document 3)

So, the index (in a very simplified way) can be visualized as follows:
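
The following sketch assumes that the titles are lowercased during analysis:

count | term          | documents
------+---------------+-----------
  1   | 1.0           | <1>
  1   | 4             | <3>
  1   | apache        | <3>
  1   | cookbook      | <3>
  2   | elasticsearch | <1>, <2>
  1   | mastering     | <2>
  1   | server        | <1>
  1   | solr          | <3>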

Each term points to the number of documents it is present in. This allows for very efficient and fast searching, such as term-based queries. In addition to this, each term has a number connected to it, the count, telling Lucene how often the term occurs.

Of course, the actual index created by Lucene is much more complicated and advanced because of additional files that include information such as term vectors, doc values, and so on. However, all you need to know for now is how the data is organized and not what is exactly stored.

Each index is divided into multiple write-once and read-many-times segments. When indexing, after a single segment is written to the disk, it can't be updated. Therefore, the information on deleted documents is stored in a separate file, but the segment itself is not updated.

However, multiple segments can be merged together through a process called segment merging. After forcing the segments to merge, or after Lucene decides that it is time to perform merging, the segments are merged together by Lucene to create larger ones. This can be I/O demanding; however, it is also when information that is not needed anymore gets cleaned up (for example, the deleted documents). In addition to this, searching with one large segment is faster than searching with multiple smaller ones holding the same data. That's because, in general, to search just means to match the query terms to the ones that are indexed. You can imagine how searching through multiple small segments and merging those results will be slower than having a single segment preparing the results.

Input data analysis

Of course, the question that arises is how the data that is passed in the documents is transformed into the inverted index and how the query text is changed into terms to allow searching. The process of transforming this data is called analysis. You may want some of your fields to be processed by a language analyzer so that words such as car and cars are treated as the same in your index. On the other hand, you may want other fields to be only divided on the white space or only lowercased.

Analysis is done by the analyzer, which is built of a tokenizer and zero or more token filters, and it can also have zero or more character mappers.

A tokenizer in Lucene is used to split the text into tokens, which are basically terms with additional information, such as their position in the original text and their length. The result of the tokenizer's work is called a token stream, where the tokens are put one by one and are ready to be processed by the filters.

Apart from the tokenizer, the Lucene analyzer is built of zero or more token filters that are used to process tokens in the token stream. Some examples of filters are as follows:

  • Lowercase filter: This makes all the tokens lowercased

  • Synonyms filter: This is responsible for changing one token to another on the basis of synonym rules

  • Multiple language stemming filters: These are responsible for reducing tokens (actually, the text part that they provide) into their root or base forms, the stem

Filters are processed one after another, so by chaining multiple filters we have almost unlimited analysis possibilities.

Finally, the character mappers operate on non-analyzed text—they are used before the tokenizer. Therefore, we can easily remove HTML tags from whole parts of text without worrying about tokenization.
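
As an illustration, such an analysis chain, combining a character mapper, a tokenizer, and token filters, could be defined in the index settings as shown in the following sketch (the index and analyzer names are ours and only serve as an example):

curl -XPUT 'localhost:9200/posts/' -d '{
  "settings" : {
    "analysis" : {
      "analyzer" : {
        "html_content" : {
          "type" : "custom",
          "char_filter" : ["html_strip"],
          "tokenizer" : "standard",
          "filter" : ["lowercase", "stop"]
        }
      }
    }
  }
}'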

Indexing and querying

We may wonder how all the preceding functionalities affect indexing and querying when using Lucene and all the software that is built on top of it. During indexing, Lucene will use an analyzer of your choice to process the contents of your document; of course, different analyzers can be used for different fields, so the name field of your document can be analyzed differently compared to the summary field. If we want, fields may not be analyzed at all.

During a query, your query text will be analyzed as well. However, you can also choose not to analyze your queries. This is crucial to remember because some of the Elasticsearch queries are analyzed and some are not. For example, the prefix and term queries are not analyzed, while the match query is analyzed. Having the possibility to choose between queries that are analyzed and ones that are not is very useful; sometimes, you may want to query a field that is not analyzed, while at other times you may want a full-text search analysis. For example, if we search for the LightRed term and the query is analyzed by the standard analyzer, the term that will actually be searched for is lightred. If we use a query type that is not analyzed, we will explicitly search for the LightRed term.
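
As a quick illustration (a sketch that assumes an index with an analyzed title field), the following term query looks for the literal LightRed term, while the match query analyzes its input first and therefore searches for lightred:

curl -XGET 'localhost:9200/books/_search?pretty' -d '{
  "query" : { "term" : { "title" : "LightRed" } }
}'

curl -XGET 'localhost:9200/books/_search?pretty' -d '{
  "query" : { "match" : { "title" : "LightRed" } }
}'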

What you should remember about indexing and querying analysis is that the terms in the index must match the terms produced by query analysis. If they don't match, Lucene won't return the desired documents. For example, if you are using stemming and lowercasing during indexing, you need to ensure that the terms in the query are also lowercased and stemmed, or your queries won't return any results at all. It is also important to keep the token filters in the same order during indexing and query time analysis so that the terms resulting from such an analysis are the same.

Scoring and query relevance

There is one additional thing we haven't mentioned till now—scoring. What is the score of a document? The score is a result of a scoring formula that describes how well the document matches the query. By default, Apache Lucene uses the TF/IDF (term frequency / inverse document frequency) scoring mechanism—an algorithm that calculates how relevant the document is in the context of our query. Of course, it is not the only algorithm available, and we will mention other algorithms in the Mappings configuration section of Chapter 2, Indexing Your Data.

Note

If you want to read more about the Apache Lucene TF/IDF scoring formula, please visit Apache Lucene Javadocs for the TFIDFSimilarity class available at http://lucene.apache.org/core/4_6_0/core/org/apache/lucene/search/similarities/TFIDFSimilarity.html.

Remember though that the higher the score value calculated by Elasticsearch and Lucene, the more relevant the document is. The score calculation is affected by parameters such as boost, by different query types (we will discuss these query types in the Basic queries section of Chapter 3, Searching Your Data), or by using different scoring algorithms.

Note

If you want to read more detailed information about how Apache Lucene scoring works, what the default algorithm is, and how the score is calculated, please refer to our book, Mastering ElasticSearch, Packt Publishing.

 

The basics of Elasticsearch


Elasticsearch is an open source search server project started by Shay Banon and published in February 2010. Since then, the project has grown into a major player in the field of search and data analysis solutions and is widely used in many well-known and lesser-known search applications. In addition, due to its distributed nature and real-time capabilities, many people use it as a document store.

Key concepts of data architecture

Let's go through the basic concepts of Elasticsearch. You can skip this section if you are already familiar with the Elasticsearch architecture. However, if you are not, consider reading it; we will refer to the key terms introduced here throughout the rest of the book.

Index

An index is the logical place where Elasticsearch stores the data so that it can be divided into smaller pieces. If you come from the relational database world, you can think of an index like a table. However, the index structure is prepared for fast and efficient full-text searching and, in particular, does not store original values. If you know MongoDB, you can think of the Elasticsearch index as a collection in MongoDB. If you are familiar with CouchDB, you can think about an index as you would about a CouchDB database. Elasticsearch can hold many indices located on one machine or spread over many servers. Every index is built of one or more shards, and each shard can have many replicas.

Document

The main entity stored in Elasticsearch is a document. Using the analogy to relational databases, a document is a row of data in a database table. When you compare an Elasticsearch document to a MongoDB document, you will see that both can have different structures, but the document in Elasticsearch needs to have the same type for all the common fields. This means that all the documents with a field called title need to have the same data type for it, for example, string.

Documents consist of fields, and each field may occur several times in a single document (such a field is called multivalued). Each field has a type (text, number, date, and so on). The field types can also be complex: a field can contain other subdocuments or arrays. The field type is important for Elasticsearch because it gives information about how various operations such as analysis or sorting should be performed. Fortunately, this can be determined automatically (however, we still suggest using mappings). Unlike the relational databases, documents don't need to have a fixed structure—every document may have a different set of fields, and in addition to this, fields don't have to be known during application development. Of course, one can force a document structure with the use of schema. From the client's point of view, a document is a JSON object (see more about the JSON format at http://en.wikipedia.org/wiki/JSON). Each document is stored in one index and has its own unique identifier (which can be generated automatically by Elasticsearch) and document type. A document needs to have a unique identifier in relation to the document type. This means that in a single index, two documents can have the same unique identifier if they are not of the same type.

Document type

In Elasticsearch, one index can store many objects with different purposes. For example, a blog application can store articles and comments. The document type lets us easily differentiate between the objects in a single index. Every document can have a different structure, but in real-world deployments, dividing documents into types significantly helps in data manipulation. Of course, one needs to keep the limitations in mind; that is, different document types can't set different types for the same property. For example, a field called title must have the same type across all document types in the same index.

Mapping

In the section about the basics of full-text searching (the Full-text searching section), we wrote about the process of analysis—the preparation of input text for indexing and searching. Every field of the document must be properly analyzed depending on its type. For example, a different analysis chain is required for the numeric fields (numbers shouldn't be sorted alphabetically) and for the text fetched from web pages (for example, the first step would require you to omit the HTML tags as it is useless information—noise). Elasticsearch stores information about the fields in the mapping. Every document type has its own mapping, even if we don't explicitly define it.

Key concepts of Elasticsearch

Now, we already know that Elasticsearch stores data in one or more indices. Every index can contain documents of various types. We also know that each document has many fields and how Elasticsearch treats these fields is defined by mappings. But there is more. From the beginning, Elasticsearch was created as a distributed solution that can handle billions of documents and hundreds of search requests per second. This is due to several important concepts that we are going to describe in more detail now.

Node and cluster

Elasticsearch can work as a standalone, single-search server. Nevertheless, to be able to process large sets of data and to achieve fault tolerance and high availability, Elasticsearch can be run on many cooperating servers. Collectively, these servers are called a cluster, and each server forming it is called a node.

Shard

When we have a large number of documents, we may come to a point where a single node may not be enough—for example, because of RAM limitations, hard disk capacity, insufficient processing power, and inability to respond to client requests fast enough. In such a case, data can be divided into smaller parts called shards (where each shard is a separate Apache Lucene index). Each shard can be placed on a different server, and thus, your data can be spread among the cluster nodes. When you query an index that is built from multiple shards, Elasticsearch sends the query to each relevant shard and merges the result in such a way that your application doesn't know about the shards. In addition to this, having multiple shards can speed up the indexing.

Replica

In order to increase query throughput or achieve high availability, shard replicas can be used. A replica is just an exact copy of the shard, and each shard can have zero or more replicas. In other words, Elasticsearch can have many identical shards and one of them is automatically chosen as a place where the operations that change the index are directed. This special shard is called a primary shard, and the others are called replica shards. When the primary shard is lost (for example, a server holding the shard data is unavailable), the cluster will promote the replica to be the new primary shard.
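
For example, the number of shards and replicas can be defined when an index is created. The following sketch (the index name and the values are ours) creates an index built of two shards, each with a single replica:

curl -XPUT 'localhost:9200/articles/' -d '{
  "settings" : {
    "number_of_shards" : 2,
    "number_of_replicas" : 1
  }
}'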

Gateway

Elasticsearch handles many nodes. The cluster state is held by the gateway. By default, every node has this information stored locally, which is synchronized among nodes. We will discuss the gateway module in The gateway and recovery modules section of Chapter 7, Elasticsearch Cluster in Detail.

Indexing and searching

You may wonder how you can practically tie all the indices, shards, and replicas together in a single environment. Theoretically, it should be very difficult to fetch data from the cluster when you have to know where your document is: on which server and in which shard. Even more difficult is searching, when one query can return documents from different shards placed on different nodes in the whole cluster. In fact, this is a complicated problem; fortunately, we don't have to care about it at all—it is handled automatically by Elasticsearch itself. Let's look at how a document makes its way into the index:

When you send a new document to the cluster, you specify a target index and send it to any of the nodes. The node knows how many shards the target index has and is able to determine which shard should be used to store your document. Elasticsearch can alter this behavior; we will talk about this in the Routing section of Chapter 2, Indexing Your Data. The important information that you have to remember for now is that Elasticsearch calculates the shard in which the document should be placed using the unique identifier of the document. After the indexing request is sent to a node, that node forwards the document to the target node, which hosts the relevant shard.

Now, let's look at how a search request is executed:

When you try to fetch a document by its identifier, the node you send the query to uses the same routing algorithm to determine the shard and the node holding the document, forwards the request, fetches the result, and sends it back to you. The querying process, on the other hand, is more complicated. The node receiving the query forwards it to all the nodes holding the shards that belong to a given index and asks for minimal information about the documents that match the query (the identifier and score, by default), unless routing is used, in which case the query will go directly to a single shard only. This is called the scatter phase. After receiving this information, the aggregator node (the node that received the client request) sorts the results and sends a second request to get the documents that are needed to build the results list (all the other information apart from the document identifier and score).

This is called the gather phase. After this phase is executed, the results are returned to the client.

Now the question arises—what is the role of replicas in the process described previously? While indexing, replicas are only used as an additional place to store the data. When executing a query, by default, Elasticsearch will try to balance the load among the shard and its replicas so that they are evenly stressed. Also, remember that we can change this behavior; we will discuss this in the Understanding the querying process section of Chapter 3, Searching Your Data.

 

Installing and configuring your cluster


There are a few steps required to install Elasticsearch, which we will explore in the following sections.

Installing Java

In order to set up Elasticsearch, the first step is to make sure that a Java SE environment is installed properly. Elasticsearch requires Java Version 6 or later to run. You can download it from http://www.oracle.com/technetwork/java/javase/downloads/index.html. You can also use OpenJDK (http://openjdk.java.net/) if you wish. While you can, of course, use Java Version 6, it no longer receives public patches, so we suggest that you install Java 7.

Installing Elasticsearch

To install Elasticsearch, just download it from http://www.elasticsearch.org/download/ and unpack it. Choose the last stable version. That's it! The installation is complete.

Note

At the time of writing this book, we used Elasticsearch 1.0.0.GA. This means that we've skipped describing some properties that were marked as deprecated and are or will be removed in the future versions of Elasticsearch.

The main interface used to communicate with Elasticsearch is based on the HTTP protocol and REST. This means that you can even use a web browser for some basic queries and requests, but for anything more sophisticated, you'll need to use additional software such as the cURL command. If you use Linux or OS X, the curl package should already be available. If you use Windows, you can download it from http://curl.haxx.se/download.html.

Installing Elasticsearch from binary packages on Linux

The other way to install Elasticsearch is to use the provided binary packages—the RPM or DEB packages, depending on your Linux distribution. The mentioned packages can be found at http://www.elasticsearch.org/download/.

Installing Elasticsearch using the RPM package

After downloading the RPM package, you just need to run the following command:

sudo yum install elasticsearch-1.0.0.noarch.rpm

It is as simple as that. If everything goes well, Elasticsearch should be installed and its configuration file should be stored in /etc/sysconfig/elasticsearch. If your operating system is based on Red Hat, you will be able to use the init script found at /etc/init.d/elasticsearch. If your operating system is SUSE Linux, you can use systemctl to start and stop the Elasticsearch service.

Installing Elasticsearch using the DEB package

After downloading the DEB package, all you need to do is run the following command:

sudo dpkg -i elasticsearch-1.0.0.deb

It is as simple as that. If everything goes well, Elasticsearch should be installed and its configuration file should be stored in /etc/elasticsearch/elasticsearch.yml. The init script that allows you to start and stop Elasticsearch can be found at /etc/init.d/elasticsearch. Also, there will be a file containing environment settings at /etc/default/elasticsearch.

The directory layout

Now, let's go to the newly created directory. We should see the following directory structure:

  • bin: The scripts needed for running Elasticsearch instances and for plugin management

  • config: The directory where configuration files are located

  • lib: The libraries used by Elasticsearch

After Elasticsearch starts, it will create the following directories (if they don't exist):

  • data: The directory where all the data used by Elasticsearch is stored

  • logs: The files with information about events and errors

  • plugins: The location for storing the installed plugins

  • work: The directory for temporary files used by Elasticsearch

Configuring Elasticsearch

One of the reasons—of course, not the only one—why Elasticsearch is gaining more and more popularity is that getting started with Elasticsearch is quite easy. Because of the reasonable default values and automatic settings for simple environments, we can skip the configuration and go straight to the next chapter without changing a single line in our configuration files. However, in order to truly understand Elasticsearch, it is worth understanding some of the available settings.

We will now explore the default directories and layout of the files provided with the Elasticsearch tar.gz archive. The whole configuration is located in the config directory. We can see two files there: elasticsearch.yml (or elasticsearch.json, which will be used if present) and logging.yml. The first file is responsible for setting the default configuration values for the server. This is important because some of these values can be changed at runtime and can be kept as a part of the cluster state, so the values in this file may not be accurate. The two values that we cannot change at runtime are cluster.name and node.name.

The cluster.name property is responsible for holding the name of our cluster. The cluster name separates different clusters from each other. Nodes configured with the same cluster name will try to form a cluster.

The second value is the instance (the node) name. We can leave this parameter undefined. In this case, Elasticsearch automatically chooses a unique name for itself. Note that this name is chosen during every startup, so the name can be different on each restart. Defining the name can help when referring to concrete instances by the API or when using monitoring tools to see what is happening to a node during long periods of time and between restarts. Think about giving descriptive names to your nodes.
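
For example, the relevant lines of elasticsearch.yml could look as follows (the cluster and node names are only examples; choose your own):

cluster.name: bookstore-cluster
node.name: "First node"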

Other parameters are well commented in the file, so we advise you to look through it; don't worry if you do not understand the explanation. We hope that everything will become clear after reading the next few chapters.

Note

Remember that most of the parameters set in the elasticsearch.yml file can be overwritten with the use of the Elasticsearch REST API. We will talk about this API in The update settings API section of Chapter 8, Administrating Your Cluster.

The second file (logging.yml) defines how much information is written to the system logs, which logfiles are used, and how often new files are created. Changes in this file are usually required only when you need to adapt to monitoring or backup solutions or during system debugging; however, if you want more detailed logging, you need to adjust it accordingly.

Let's leave the configuration files for now. An important part of the configuration is tuning your operating system. During indexing, especially when there are many shards and replicas, Elasticsearch will create many files; so, the system cannot limit the number of open file descriptors to less than 32,000. For Linux servers, this can usually be changed in /etc/security/limits.conf, and the current value can be displayed using the ulimit command. If you end up reaching the limit, Elasticsearch will not be able to create new files; merging will fail, indexing may fail, and new indices will not be created.
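
For example, the current limit can be checked with the following command:

ulimit -n

Raising it could then be done by adding lines similar to the following to /etc/security/limits.conf (the elasticsearch user name and the exact value are assumptions that depend on your setup):

elasticsearch soft nofile 32000
elasticsearch hard nofile 32000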

The next set of settings is connected to the Java Virtual Machine (JVM) heap memory limit for a single Elasticsearch instance. For small deployments, the default memory limit (1,024 MB) will be sufficient, but for large ones, it will not be enough. If you spot entries that indicate OutOfMemoryError exceptions in a logfile, set the ES_HEAP_SIZE variable to a value greater than 1024. When choosing the right amount of memory to give to the JVM, remember that, in general, no more than 50 percent of your total system memory should be used. However, as with all rules, there are exceptions, and we will discuss this in greater detail later; you should always monitor your JVM heap usage and adjust it when needed.
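
For example, to give a single instance 2 GB of heap (the value is only an example; pick one that matches your data and hardware), the variable can be exported in the environment from which Elasticsearch is started:

export ES_HEAP_SIZE=2g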

Running Elasticsearch

Let's run our first instance that we just downloaded as the ZIP archive and unpacked. Go to the bin directory and run the following commands depending on the OS:

  • Linux or OS X: ./elasticsearch

  • Windows: elasticsearch.bat

Congratulations! Now, we have our Elasticsearch instance up and running. During its work, the server usually uses two port numbers: the first one for communication with the REST API using the HTTP protocol, and the second one for the transport module used for communication within a cluster and between the native Java client and the cluster. The default port used for the HTTP API is 9200, so we can check the search readiness by pointing the web browser to http://127.0.0.1:9200/. The browser should show a code snippet similar to the following:

{
  "status" : 200,
  "name" : "es_server",
  "version" : {
    "number" : "1.0.0",
    "build_hash" : "a46900e9c72c0a623d71b54016357d5f94c8ea32",
    "build_timestamp" : "2014-02-12T16:18:34Z",
    "build_snapshot" : false,
    "lucene_version" : "4.6"
  },
  "tagline" : "You Know, for Search"
}

The output is structured as a JSON (JavaScript Object Notation) object. If you are not familiar with JSON, please take a minute and read the article available at http://en.wikipedia.org/wiki/JSON.

Note

Elasticsearch is smart. If the default port is not available, the engine binds to the next free port. You can find information about this on the console during booting as follows:

[2013-11-16 11:56:12,101][INFO ][http] [Red Lotus] bound_address {inet[/0:0:0:0:0:0:0:0%0:9200]}, publish_address {inet[/192.168.1.101:9200]}

Note the fragment with [http]. Elasticsearch uses a few ports for various tasks. The interface that we are using is handled by the HTTP module.

Now, we will use the cURL program. For example, to check cluster health, we will use the following command:

curl -XGET http://127.0.0.1:9200/_cluster/health?pretty

The -X parameter is the request method. The default value is GET (so, in this example, we could omit this parameter). For now, do not worry about the GET value; we will describe it in more detail later in this chapter.

By default, the API returns information as a JSON object in which new line characters are omitted. The pretty parameter added to our requests forces Elasticsearch to add new line characters to the response, making it more human friendly. You can try running the preceding query with and without the ?pretty parameter to see the difference.
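
For instance, without the pretty parameter, the cluster health response comes back as a single line similar to the following (the exact values depend on your cluster, so treat this as an illustration only):

{"cluster_name":"elasticsearch","status":"yellow","timed_out":false,"number_of_nodes":1,"number_of_data_nodes":1,"active_primary_shards":5,"active_shards":5,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":5}

With pretty, the same information is returned with each property on its own line.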

Elasticsearch is useful in small- and medium-sized applications, but it has been built with large clusters in mind. So, now we will set up our big, two-node cluster. Unpack the Elasticsearch archive into a different directory and run the second instance. If we look at the log, we will see something like the following:

[2013-11-16 11:55:16,767][INFO ][cluster.service          ] [Stane, Obadiah] detected_master [Martha Johansson][vswsFRWTSjOa_fy7uPuOMA][inet[/192.168.1.19:9300]], added {[Martha Johansson][vswsFRWTSjOa_fy7uPuOMA][inet[/192.168.1.19:9300]],}, reason: zen-disco-receive(from master [[Martha Johansson][vswsFRWTSjOa_fy7uPuOMA][inet[/192.168.1.19:9300]]])

This means that our second instance (named Stane, Obadiah) discovered the previously running instance (named Martha Johansson). Here, Elasticsearch automatically formed a new, two-node cluster.

Note

Note that on some systems, the firewall software may be enabled by default, which may result in the nodes not being able to discover each other.

Shutting down Elasticsearch

Even though we expect our cluster (or node) to run flawlessly for a lifetime, we may need to restart it or shut it down properly (for example, for maintenance). The following are three ways in which we can shut down Elasticsearch:

  • If your node is attached to the console, just press Ctrl + C

  • The second option is to kill the server process by sending the TERM signal (see the kill command on the Linux boxes and Program Manager on Windows)

  • The third method is to use a REST API

We will focus on the last method now. It allows us to shut down the whole cluster by executing the following command:

curl -XPOST http://localhost:9200/_cluster/nodes/_shutdown

To shut down just a single node, for example, a node with the BlrmMvBdSKiCeYGsiHijdg identifier, we will execute the following command:

curl -XPOST http://localhost:9200/_cluster/nodes/BlrmMvBdSKiCeYGsiHijdg/_shutdown

The identifier of the node can be read either from the logs or using the _cluster/nodes API, with the following command:

curl -XGET http://localhost:9200/_cluster/nodes/
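
In the response, the node identifiers are the keys of the nodes object. A fragment could look similar to the following (the identifiers and node names will differ on your cluster):

{
  "cluster_name" : "elasticsearch",
  "nodes" : {
    "BlrmMvBdSKiCeYGsiHijdg" : {
      "name" : "Martha Johansson",
      ...
    }
  }
}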

Running Elasticsearch as a system service

Elasticsearch 1.0 can run as a service both on Linux-based systems as well as on Windows-based ones.

Elasticsearch as a system service on Linux

If you have installed Elasticsearch from the provided binary packages, you are already good to go and don't have to worry about anything. However, if you have just downloaded the archive and unpacked Elasticsearch into the directory of your choice, you'll need to put in some additional effort. To install Elasticsearch as a Linux system service, we will use the Elasticsearch service wrapper that can be downloaded from https://github.com/elasticsearch/elasticsearch-servicewrapper.

Let's look at the steps to use the Elasticsearch service wrapper in order to set up a Linux service for Elasticsearch. First, we will run the following command to download the wrapper:

curl -L http://github.com/elasticsearch/elasticsearch-servicewrapper/tarball/master | tar -xz

Assuming that Elasticsearch has been installed in /usr/local/share/elasticsearch, we will run the following command to move the needed service wrapper files:

sudo mv *servicewrapper*/service /usr/local/share/elasticsearch/bin/

We will remove the remaining wrapper files by running the following command:

rm -Rf *servicewrapper*

Finally, we will install the service by running the install command as follows:

sudo /usr/local/share/elasticsearch/bin/service/elasticsearch install

After this, we need to create a symbolic link to the /usr/local/share/elasticsearch/bin/service/elasticsearch script in /usr/local/bin/rcelasticsearch. We do this by running the following command:

sudo ln -s $(readlink -f /usr/local/share/elasticsearch/bin/service/elasticsearch) /usr/local/bin/rcelasticsearch

And that's all. If you want to start Elasticsearch, just run the following command:

/etc/init.d/elasticsearch start

Elasticsearch as a system service on Windows

Installing Elasticsearch as a system service on Windows is very easy. You just need to go to your Elasticsearch installation directory, then go to the bin subdirectory, and run the following command:

service.bat install

You'll be asked about the permission to do so. If you allow the script to run, Elasticsearch will be installed as a Windows service.

If you would like to see all the commands exposed by the service.bat script file, just run the following command in the same directory as earlier:

service.bat

For example, to start Elasticsearch, we will just run the following command:

service.bat start
 

Manipulating data with the REST API


The Elasticsearch REST API can be used for various tasks. Thanks to this, we can manage indices, change instance parameters, check nodes and cluster status, index data, search the data, or retrieve documents via the GET API. But for now, we will concentrate on using the CRUD (create-retrieve-update-delete) part of the API, which allows you to use Elasticsearch in a similar way to how you would use a NoSQL database.

Understanding the Elasticsearch RESTful API

In a REST-like architecture, every request is directed to a concrete object indicated by the path part of the address. For example, if /books/ is a reference to a list of books in our library, /books/1 is the reference to the book with the identifier 1. Note that these objects can be nested. The /books/1/chapter/6 reference denotes the sixth chapter of the first book in the library, and so on. So we have a subject for our API call. What about the operation that we would like to execute on it? This is indicated by the HTTP request method. The HTTP protocol gives us quite a long list of methods that can be used as verbs in the API calls. Logical choices are GET in order to obtain the current state of the requested object, POST to change the object state, PUT to create an object, and DELETE to destroy objects. There is also a HEAD request that is only used to fetch the base information of an object.

If we look at the following examples of the operations discussed in the Shutting down Elasticsearch section, everything should make more sense:

  • GET http://localhost:9200/: This command retrieves basic information about Elasticsearch

  • GET http://localhost:9200/_cluster/state/nodes/: This command retrieves the information about the nodes in the cluster

  • POST http://localhost:9200/_cluster/nodes/_shutdown: This command sends a shutdown request to all the nodes in the cluster

We now know what REST means, at least in general (you can read more about REST at http://en.wikipedia.org/wiki/Representational_state_transfer). Now, we can proceed and learn how to use the Elasticsearch API to store, fetch, alter, and delete data.

Storing data in Elasticsearch

As we have already discussed, in Elasticsearch, every piece of data—each document—has a defined index and type. Each document can contain one or more fields that will hold your data. We will start by showing you how to index a simple document using Elasticsearch.

Creating a new document

Now, we will try to index some of the documents. For example, let's imagine that we are building some kind of CMS system for our blog. One of the entities in this blog is articles (surprise!).

Using the JSON notation, a document can be presented as shown in the following example:

{
  "id": "1",
  "title": "New version of Elasticsearch released!",
  "content": "Version 1.0 released today!",
  "priority": 10,
  "tags": ["announce", "elasticsearch", "release"]
}

As we can see, the JSON document contains a set of fields, where each field can have a different form. In our example, we have a number (priority), text (title), and an array of strings (tags). In the following examples, we will show you the other types. As mentioned earlier in this chapter, Elasticsearch can guess these types (because JSON is semi-typed; for example, the numbers are not in quotation marks) and automatically customize how this data will be stored in its internal structures.

Of course, we would like to index our example document and make it available for searching. We will use an index named blog and a type named article. In order to index our example document to this index under the given type and with the identifier of 1, we will execute the following command:

curl -XPUT http://localhost:9200/blog/article/1 -d '{"title": "New version of Elasticsearch released!", "content": "Version 1.0 released today!", "tags": ["announce", "elasticsearch", "release"] }'

Note a new option to the cURL command: the -d parameter. The value of this option is the text that will be used as a request payload—a request body. This way, we can send additional information such as document definition. Also, note that the unique identifier is placed in the URL and not in the body. If you omit this identifier (while using the HTTP PUT request), the indexing request will return the following error:

No handler found for uri [/blog/article/] and method [PUT]

If everything is correct, Elasticsearch will respond with a JSON response similar to the following output:

{
  "_index":"blog",
  "_type":"article",
  "_id":"1",
  "_version":1
}

In the preceding response, Elasticsearch includes the information about the status of the operation and shows where a new document was placed. There is information about the document's unique identifier and current version, which will be incremented automatically by Elasticsearch every time it is updated.

Automatic identifier creation

In the last example, we specified the document identifier ourselves. However, Elasticsearch can generate it automatically. This seems very handy, but only when the index is the only source of data. If we use a database to store data and Elasticsearch for full-text searching, the synchronization of this data will be hindered unless the generated identifier is stored in the database as well. The generation of a unique identifier can be achieved by using the POST HTTP request type and by not specifying the identifier in the URL. For example, look at the following command:

curl -XPOST http://localhost:9200/blog/article/ -d '{"title": "New version of Elasticsearch released!", "content": "Version 1.0 released today!", "tags": ["announce", "elasticsearch", "release"] }'

Note the use of the POST HTTP request method instead of PUT in comparison to the previous example. Referring to the previous description of REST verbs, we wanted to change the list of documents in the index rather than create a new entity, and that's why we used POST instead of PUT. The server should respond with a response similar to the following output:

{
  "_index" : "blog",
  "_type" : "article",
  "_id" : "XQmdeSe_RVamFgRHMqcZQg",
  "_version" : 1
}

Note the _id field, which holds the unique identifier generated automatically by Elasticsearch.

Retrieving documents

We already have documents stored in our instance. Now let's try to retrieve them by using their identifiers. We will start by executing the following command:

curl -XGET http://localhost:9200/blog/article/1

Elasticsearch will return a response similar to the following output:

{
  "_index" : "blog",
  "_type" : "article",
  "_id" : "1",
  "_version" : 1,
  "exists" : true, 
  "_source" : {
    "title": "New version of Elasticsearch released!", 
    "content": "Version 1.0 released today!", 
    "tags": ["announce", "elasticsearch", "release"] 
  }
}

In the preceding response, besides the index, type, identifier, and version, we can also see the information that says that the document was found (the exists property) and the source of this document (in the _source field). If the document is not found, we get a reply as follows:

{
  "_index" : "blog",
  "_type" : "article",
  "_id" : "9999",
  "exists" : false
}

Of course, there is no information about the version and source because no document was found.

Updating documents

Updating documents in the index is a more complicated task. Internally, Elasticsearch must first fetch the document, take its data from the _source field, remove the old document, apply changes to the _source field, and then index it as a new document. It is so complicated because we can't update the information once it is stored in the Lucene inverted index. Elasticsearch implements this through a script given as an update request parameter. This allows us to perform more sophisticated document transformations than simple field changes. Let's see how it works in a simple case.

Please recall the example blog article that we've indexed previously. We will try to change its content field from the old one to new content. To do this, we will run the following command:

curl -XPOST http://localhost:9200/blog/article/1/_update -d '{
  "script": "ctx._source.content = \"new content\""
}'

Elasticsearch will reply with the following response:

{"_index":"blog","_type":"article","_id":"1","_version":2}

It seems that the update operation was executed successfully. To be sure, let's retrieve the document by using its identifier. To do this, we will run the following command:

curl -XGET http://localhost:9200/blog/article/1

The response from Elasticsearch should include the changed content field, and indeed, it includes the following information:

{
  "_index" : "blog",
  "_type" : "article",
  "_id" : "1",
  "_version" : 2,
  "exists" : true, 
  "_source" : {
    "title":"New version of Elasticsearch released!",
    "content":"new content",
    "tags":["announce","elasticsearch","release"]
  }
}

Elasticsearch changed the contents of our article and the version number for this document. Note that we didn't have to send the whole document, only the changed parts. However, remember that to use the update functionality, we need to use the _source field—we will describe how to use the _source field in the Extending your index structure with additional internal information section in Chapter 2, Indexing Your Data.

There is one more thing about document updates; if your script uses a field value from a document that is to be updated, you can set a value that will be used if the document doesn't have that value present. For example, if you want to increment the counter field of the document and it is not present, you can use the upsert section in your request to provide the default value that will be used. For example, look at the following lines of command:

curl -XPOST http://localhost:9200/blog/article/1/_update -d '{
  "script": "ctx._source.counter += 1",
  "upsert": {
    "counter" : 0
  }
}'

If you execute the preceding example, Elasticsearch will add the counter field with the value of 0 to our example document. This is because our document does not have the counter field present and we've specified the upsert section in the update request.

Deleting documents

We have already seen how to create (PUT) and retrieve (GET) documents. We also know how to update them. It is not difficult to guess that the process to remove a document is similar; we need to send a proper HTTP request using the DELETE request type. For example, to delete our example document, we will run the following command:

curl -XDELETE http://localhost:9200/blog/article/1

The response from Elasticsearch will be as follows:

{"found":true,"_index":"blog","_type":"article","_id":"1","_version":3}

This means that our document was found and it was deleted.

Now we can use the CRUD operations. This lets us create applications using Elasticsearch as a simple key-value store. But this is only the beginning!

Versioning

In the examples provided, you might have seen information about the version of the document, which looked like the following:

"_version" : 1

If you look carefully, you will notice that after updating the document with the same identifier, this version is incremented. By default, Elasticsearch increments the version when a document is added, changed, or deleted. In addition to informing us about the number of changes made to the document, it also allows us to implement optimistic locking (http://en.wikipedia.org/wiki/Optimistic_concurrency_control). This allows us to avoid issues when processing the same document in parallel. For example, suppose we read the same document in two different applications, modify it differently, and then try to update the one in Elasticsearch. Without versioning, whichever update is sent last will simply overwrite the other one. Using optimistic locking, Elasticsearch guards the data accuracy—every attempt to write a document that has already been changed will fail.

An example of versioning

Let's look at an example that uses versioning. Let's assume that we want to delete a document with the identifier 1 and the book type from the library index. We also want to be sure that the delete operation succeeds only if the document was not updated in the meantime. What we need to do is add the version parameter with the value of 1 as follows:

curl -XDELETE 'localhost:9200/library/book/1?version=1'

If the version of the document in the index is different from 1, the following error will be returned by Elasticsearch:

{
  "error": "VersionConflictEngineException[[library][4] [book][1]: version conflict, current [2], provided [1]]",
   "status": 409
}

In our example, Elasticsearch compared the version number declared by us with the version of the document stored in the index and saw that they are not the same. That's why the operation failed.

Using the version provided by an external system

Elasticsearch can also use a version number provided by us. This is necessary when the version is stored in an external system—in this case, when you index a new document, you should provide the version parameter as in the preceding example. In such cases, Elasticsearch will only check whether the version provided with the operation is greater (it is not important by how much) than the one saved in the index. If it is, the operation will succeed; if not, it will fail. To inform Elasticsearch that we want to use external version tracking, we need to add the version_type=external parameter in addition to the version parameter.

For example, if we want to add a document that has a version 123456 in our system, we will run a command as follows:

curl -XPUT 'localhost:9200/library/book/1?version=123456&version_type=external' -d '{...}'

Note

Elasticsearch can check the version number even after the document is removed. That's because Elasticsearch keeps information about the version of the deleted document. By default, this information is available for 60 seconds after the deletion of the document. This time value can be changed by using the index.gc_deletes configuration parameter.
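
For example, to keep this information for two minutes instead of the default 60 seconds, the setting could be changed as follows (a sketch; it assumes that the setting can be changed at runtime through the update settings API mentioned earlier, and it can also be placed in elasticsearch.yml):

curl -XPUT 'localhost:9200/library/_settings' -d '{
  "index.gc_deletes" : "2m"
}'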

 

Searching with the URI request query


Before going into the details of Elasticsearch querying, we will use its ability to search with a simple URI request. Of course, we will extend our search knowledge in Chapter 3, Searching Your Data, but for now, we will stick to the simplest approach.

Sample data

For the purpose of this section of the book, we will create a simple index with two document types. To do this, we will run the following commands:

curl -XPOST 'localhost:9200/books/es/1' -d '{"title":"Elasticsearch Server", "published": 2013}'
curl -XPOST 'localhost:9200/books/es/2' -d '{"title":"Mastering Elasticsearch", "published": 2013}'
curl -XPOST 'localhost:9200/books/solr/1' -d '{"title":"Apache Solr 4 Cookbook", "published": 2012}'

Running the preceding commands will create the books index with two types: es and solr. The title and published fields will be indexed. If you want to check this, you can do so by running the mappings API call using the following command (we will talk about the mappings in the Mappings configuration section of Chapter 2, Indexing Your Data):

curl -XGET 'localhost:9200/books/_mapping?pretty'

This will result in Elasticsearch returning the mappings for the whole index.

The URI request

All the queries in Elasticsearch are sent to the _search endpoint. You can search a single index or multiple indices, and you can also narrow down your search only to a given document type or multiple types. For example, in order to search our books index, we will run the following command:

curl -XGET 'localhost:9200/books/_search?pretty'

If we have another index called clients, we can also run a single query against these two indices as follows:

curl -XGET 'localhost:9200/books,clients/_search?pretty'

In the same manner, we can also choose the types we want to use during searching. For example, if we want to search only in the es type in the books index, we will run a command as follows:

curl -XGET 'localhost:9200/books/es/_search?pretty'

Note

Please remember that in order to search for a given type, we need to specify the index or indices. If we want to search all the indices, we just need to set * as the index name or omit the index name entirely. Elasticsearch allows quite rich semantics when it comes to choosing index names. If you are interested, please refer to http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/multi-index.html.
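
For example, a sketch of searching the es type across all the indices by using * as the index name could look as follows:

curl -XGET 'localhost:9200/*/es/_search?pretty'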

We can also search through all the indices by omitting both the index and the type names. For example, the following command will result in a search through all the data in our cluster:

curl -XGET 'localhost:9200/_search?pretty'

The Elasticsearch query response

Let's assume that we want to find all the documents in our books index that contain the elasticsearch term in the title field. We can do this by running the following query:

curl -XGET 'localhost:9200/books/_search?pretty&q=title:elasticsearch'

The response returned by Elasticsearch for the preceding request will be as follows:

{
  "took" : 4,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 2,
    "max_score" : 0.625,
    "hits" : [ {
      "_index" : "books",
      "_type" : "es",
      "_id" : "1",
      "_score" : 0.625, "_source" : {"title":"Elasticsearch Server", "published": 2013}
    }, {
      "_index" : "books",
      "_type" : "es",
      "_id" : "2",
      "_score" : 0.19178301, "_source" : {"title":"Mastering Elasticsearch", "published": 2013}
    } ]
  }
}

The first section of the response gives us information on how much time the request took (the took property, specified in milliseconds); whether it timed out (the timed_out property); and information on the shards that were queried during the request execution: the number of queried shards (the total property of the _shards object), the number of shards that returned the results successfully (the successful property of the _shards object), and the number of failed shards (the failed property of the _shards object). The query may time out if it is executed for longer than we want (we can specify the maximum query execution time using the timeout parameter). A failed shard means that something went wrong on that shard or that it was not available during the search execution.

Of course, the mentioned information can be useful, but usually, we are interested in the results that are returned in the hits object. We have the total number of documents matching the query (in the total property) and the maximum score calculated (in the max_score property). Finally, we have the hits array that contains the returned documents. In our case, each returned document contains its index name (the _index property), type (the _type property), identifier (the _id property), score (the _score property), and the _source field (usually, this is the JSON object sent for indexing; we will discuss this in the Extending your index structure with additional internal information section in Chapter 2, Indexing Your Data).

Query analysis

You may wonder why the query we ran in the previous section worked. We indexed the term Elasticsearch and ran a query for elasticsearch, and even though they differ in capitalization, relevant documents were found. The reason for this is analysis. During indexing, the underlying Lucene library analyzes the documents and indexes the data according to the Elasticsearch configuration. By default, Elasticsearch tells Lucene to index and analyze both string-based data and numbers. The same happens during querying because the URI request query maps to the query_string query (which will be discussed in Chapter 3, Searching Your Data), and this query is analyzed by Elasticsearch.

Let's use the indices analyze API (http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/indices-analyze.html). It allows us to see how the analysis process is done. With it, we can see what happened to one of the documents during indexing and what happened to our query phrase during querying.

In order to see what was indexed in the title field for the Elasticsearch Server phrase, we will run the following command:

curl -XGET 'localhost:9200/books/_analyze?pretty&field=title' -d 'Elasticsearch Server'

The response will be as follows:

{
  "tokens" : [ {
    "token" : "elasticsearch",
    "start_offset" : 0,
    "end_offset" : 13,
    "type" : "<ALPHANUM>",
    "position" : 1
  }, {
    "token" : "server",
    "start_offset" : 14,
    "end_offset" : 20,
    "type" : "<ALPHANUM>",
    "position" : 2
  } ]
}

We can see that Elasticsearch has divided the text into two terms—the first one has a token value of elasticsearch and the second one has a token value of server.

Now let's look at how the query text was analyzed. We can do that by running the following command:

curl -XGET 'localhost:9200/books/_analyze?pretty&field=title' -d 'elasticsearch'

The response of the request looks as follows:

{
  "tokens" : [ {
    "token" : "elasticsearch",
    "start_offset" : 0,
    "end_offset" : 13,
    "type" : "<ALPHANUM>",
    "position" : 1
  } ]
}

We can see that the word is the same as the original one that we passed to the query. We won't get into Lucene query details and how the query parser constructed the query, but in general, the indexed term after analysis was the same as the one in the query after analysis; so, the document matched the query and the result was returned.

URI query string parameters

There are a few parameters that we can use to control the URI query behavior, which we will discuss now. Each parameter in the query should be concatenated with the & character, as shown in the following example:

curl -XGET 'localhost:9200/books/_search?pretty&q=published:2013&df=title&explain=true&default_operator=AND'

Please also remember to wrap the URL in the ' characters; otherwise, on Linux-based systems, the & character would be interpreted by the shell.

The query

The q parameter allows us to specify the query that we want our documents to match. It allows us to specify the query using the Lucene query syntax described in The Lucene query syntax section in this chapter. For example, a simple query could look like q=title:elasticsearch.

The default search field

By using the df parameter, we can specify the default search field that should be used when no field indicator is used in the q parameter. By default, the _all field will be used (the field that Elasticsearch uses to copy the content of all the other fields. We will discuss this in greater depth in the Extending your index structure with additional internal information section in Chapter 2, Indexing Your Data). An example of the df parameter value can be df=title.
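
For example, a sketch of a query without any field indicator that falls back to the title field could look as follows:

curl -XGET 'localhost:9200/books/_search?pretty&q=elasticsearch&df=title'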

Analyzer

The analyzer property allows us to define the name of the analyzer that should be used to analyze our query. By default, our query will be analyzed by the same analyzer that was used to analyze the field contents during indexing.
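
For example, a sketch that forces the query to be analyzed with the standard analyzer (the analyzer name is only an example here) could look as follows:

curl -XGET 'localhost:9200/books/_search?pretty&q=title:Elasticsearch&analyzer=standard'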

The default operator

The default_operator property, which can be set to OR or AND, allows us to specify the default Boolean operator used for our query. By default, it is set to OR, which means that a single query term match will be enough for a document to be returned. Setting this parameter to AND will result in only those documents that match all the query terms being returned.
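
For example, the following sketch should return only the documents whose title field contains both the elasticsearch and server terms:

curl -XGET 'localhost:9200/books/_search?pretty&q=title:(elasticsearch+server)&default_operator=AND'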

Query explanation

If we set the explain parameter to true, Elasticsearch will include additional explain information with each document in the result, such as the shard from which the document was fetched and detailed information about the scoring calculation (we will talk more about it in the Understanding the explain information section in Chapter 5, Make Your Search Better). Also, remember not to fetch the explain information during normal search queries because it requires additional resources and degrades query performance. For example, a single result can look like the following code:

{
  "_shard" : 3,
  "_node" : "kyuzK62NQcGJyhc2gI1P2w",
  "_index" : "books",
  "_type" : "es",
  "_id" : "2",
  "_score" : 0.19178301, "_source" : {"title":"Mastering Elasticsearch", "published": 2013},
  "_explanation" : {
    "value" : 0.19178301,
    "description" : "weight(title:elasticsearch in 0) [PerFieldSimilarity], result of:",
    "details" : [ {
      "value" : 0.19178301,
      "description" : "fieldWeight in 0, product of:",
      "details" : [ {
        "value" : 1.0,
        "description" : "tf(freq=1.0), with freq of:",
        "details" : [ {
          "value" : 1.0,
          "description" : "termFreq=1.0"
        } ]
      }, {
        "value" : 0.30685282,
        "description" : "idf(docFreq=1, maxDocs=1)"
      }, {
        "value" : 0.625,
        "description" : "fieldNorm(doc=0)"
      } ]
    } ]
  }
}

The fields returned

By default, for each document returned, Elasticsearch will include the index name, type name, document identifier, score, and the _source field. We can modify this behavior by adding the fields parameter and specifying a comma-separated list of field names. The fields will be retrieved from the stored fields (if they exist) or from the internal _source field. By default, the value of the fields parameter is _source. An example can look like this: fields=title.
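
For example, a sketch that returns only the title field for each matching document could look as follows:

curl -XGET 'localhost:9200/books/_search?pretty&q=title:elasticsearch&fields=title'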

Note

We can also disable the fetching of the _source field by adding the _source parameter with its value set to false.

Sorting the results

By using the sort parameter, we can specify custom sorting. The default behavior of Elasticsearch is to sort the returned documents by their score in the descending order. If we would like to sort our documents differently, we need to specify the sort parameter. For example, adding sort=published:desc will sort the documents by the published field in the descending order. By adding the sort=published:asc parameter, we will tell Elasticsearch to sort the documents on the basis of the published field in the ascending order.

If we specify custom sorting, Elasticsearch will omit the _score field calculation for documents. This may not be the desired behavior in your case. If you still want to keep track of the scores for each document when using custom sorting, you should add the track_scores=true property to your query. Please note that tracking the scores when doing custom sorting will make the query a little bit slower (you may not even notice it) due to the processing power needed to calculate the score.
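
For example, a sketch of sorting the documents by the published field while still keeping track of the scores could look as follows:

curl -XGET 'localhost:9200/books/_search?pretty&q=title:elasticsearch&sort=published:desc&track_scores=true'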

The search timeout

By default, Elasticsearch doesn't have a timeout for queries, but you may want your queries to time out after a certain amount of time (for example, 5 seconds). Elasticsearch allows you to do this by exposing the timeout parameter. When the timeout parameter is specified, the query will be executed up to the given timeout value, and the results that were gathered up to that point will be returned. To specify a timeout of 5 seconds, you will have to add the timeout=5s parameter to your query.
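
For example, such a query could look as follows:

curl -XGET 'localhost:9200/books/_search?pretty&q=title:elasticsearch&timeout=5s'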

The results window

Elasticsearch allows you to specify the results window (the range of documents in the results list that should be returned). We have two parameters that allow us to specify the results window size: size and from. The size parameter defaults to 10 and defines the maximum number of results returned. The from parameter defaults to 0 and specifies from which document the results should be returned. In order to return five documents starting from the eleventh one, we will add the following parameters to the query: size=5&from=10.
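
For example, such a query could look as follows (with only our three sample documents in the index, it will simply return an empty hits list):

curl -XGET 'localhost:9200/books/_search?pretty&q=published:2013&size=5&from=10'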

The search type

The URI query allows us to specify the search type by using the search_type parameter, which defaults to query_then_fetch. There are six values that we can use: dfs_query_then_fetch, dfs_query_and_fetch, query_then_fetch, query_and_fetch, count, and scan. We'll learn more about search types in the Understanding the querying process section in Chapter 3, Searching Your Data.
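
For example, a sketch using the count search type, which is handy when we are only interested in the number of matching documents, could look as follows:

curl -XGET 'localhost:9200/books/_search?pretty&q=title:elasticsearch&search_type=count'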

Lowercasing the expanded terms

Some of the queries use query expansion, such as the prefix query. We will discuss this in the Query rewrite section of Chapter 3, Searching Your Data. We are allowed to define whether the expanded terms should be lowercased or not by using the lowercase_expanded_terms property. By default, the lowercase_expanded_terms property is set to true, which means that the expanded terms will be lowercased.
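
For example, a sketch that disables lowercasing of the expanded terms for a prefix query passed in the q parameter (the Elastic* value is only illustrative) could look as follows:

curl -XGET 'localhost:9200/books/_search?pretty&q=title:Elastic*&lowercase_expanded_terms=false'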

Analyzing the wildcard and prefixes

By default, the wildcard queries and the prefix queries are not analyzed. If we want to change this behavior, we can set the analyze_wildcard property to true.
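
For example, a sketch that enables analysis of a wildcard query passed in the q parameter could look as follows:

curl -XGET 'localhost:9200/books/_search?pretty&q=title:elast*&analyze_wildcard=true'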

The Lucene query syntax

We thought that it would be good to know a bit more about the syntax that can be used in the q parameter passed in the URI query. Some of the queries in Elasticsearch (such as the one currently discussed) support the Lucene query parser syntax, the language that allows us to construct queries. Let's take a look at it and discuss some basic features. To read about the full Lucene query syntax, please go to the following web page: http://lucene.apache.org/core/4_6_1/queryparser/org/apache/lucene/queryparser/classic/package-summary.html.

A query that we pass to Lucene is divided into terms and operators by the query parser. Let's start with the terms; they come in two types: single terms and phrases. For example, to query for the term book in the title field, we will pass the following query:

title:book

To query for a phrase elasticsearch book in the title field, we will pass the following query:

title:"elasticsearch book"

You may have noticed the name of the field at the beginning, followed by the term or phrase we are looking for.

As we already said, the Lucene query syntax supports operators. For example, the + operator tells Lucene that the given part must be matched in the document. The - operator is the opposite, which means that such a part of the query can't be present in the document. A part of the query without the + or - operator will be treated as optional: it can be matched, but it is not mandatory. So, if we would like to find a document with the term book in the title field and without the term cat in the description field, we will pass the following query:

+title:book -description:cat

We can also group multiple terms with parentheses, as shown in the following query:

title:(crime punishment)

We can also boost parts of the query with the ^ operator and the boost value after it, as shown in the following query:

title:book^4
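
Remember that when such queries are passed in the q parameter of the URI request, characters such as +, ^, or the quotation marks need to be URL-encoded. As a sketch, we can let curl do the encoding for us by using its -G and --data-urlencode switches:

curl -XGET -G 'localhost:9200/books/_search?pretty' --data-urlencode 'q=+title:elasticsearch -title:mastering'

This should return only the documents that contain the elasticsearch term in the title field and do not contain the mastering term there.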
 

Summary


In this chapter, we learned what full text search is and how Apache Lucene fits in there. In addition to this, we are now familiar with the basic concepts of Elasticsearch and its top-level architecture. We used the Elasticsearch REST API not only to index data but also to update it, retrieve it, and finally delete it. Finally, we searched our data using the simple URI query. In the next chapter, we'll focus on indexing our data. We will see how Elasticsearch indexing works and what the role of the primary shards and their replicas is. We'll see how Elasticsearch handles data that it doesn't know about and how to create our own mappings, the JSON structure that describes the structure of our index. We'll also learn how to use batch indexing to speed up the indexing process and what additional information can be stored along with our index to help us achieve our goal. In addition, we will discuss what an index segment is, what segment merging is, and how to tune segment merging. Finally, we'll see how routing works in Elasticsearch and what options we have when it comes to both indexing and querying routing.
