In this chapter, we will cover the following topics:
Understanding node and cluster
Understanding node services
Managing your data
Understanding cluster, replication, and sharding
Communicating with ElasticSearch
Using the HTTP protocol
Using the Native protocol
Using the Thrift protocol
In order to use ElasticSearch efficiently, it is very important to understand how it works. The goal of this chapter is to give the reader an overview of the basic concepts of ElasticSearch, such as nodes, indices, shards, types, records, and fields.
ElasticSearch can be used both as a search engine and as a data store. A brief description of the ElasticSearch logic helps the user to improve performance and search quality, and to decide when and how to invest in infrastructure to improve scalability and availability. Some details about data replication and basic node communication processes are also explained. At the end of this chapter, the protocols used to manage ElasticSearch are discussed.
Every instance of ElasticSearch is called a node. Several nodes are grouped in a cluster. This is the basis of the cloud nature of ElasticSearch.
To better understand the upcoming sections, some knowledge of the basic concepts of nodes and clusters is required.
One or more ElasticSearch nodes can be set up on a physical or a virtual server, depending on available resources such as RAM, CPUs, and disk space. A default node allows you to store data in it and to process requests and responses. (In Chapter 2, Downloading and Setting Up ElasticSearch, we'll see details about how to set up different nodes and cluster topologies.) When a node is started, several actions take place (a minimal startup sketch in code follows the list below):
The configuration is read from the environment variables and from the elasticsearch.yml configuration file
A node name is set by the config file or chosen from a list of built-in random names
Internally, the ElasticSearch engine initializes all the modules and plugins that are available in the current installation
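As an illustration of this flow, the following snippet starts an embedded node through the Java API of this era. This is only a minimal sketch, and the cluster name es-cookbook is a made-up value for the example:

import org.elasticsearch.node.Node;
import org.elasticsearch.node.NodeBuilder;

public class StartNode {
    public static void main(String[] args) {
        // Starting the node reads elasticsearch.yml, assigns a node name,
        // and initializes the available modules and plugins.
        Node node = NodeBuilder.nodeBuilder()
                .clusterName("es-cookbook") // hypothetical cluster name
                .node();                    // build and start the node
        // ... the node is now up and processing requests ...
        node.close();                       // shut it down cleanly
    }
}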
After startup, the node searches for other cluster members and checks the status of its indices and shards. In order to join two or more nodes in a cluster, the following rules must be met:
The version of ElasticSearch must be the same (0.20, 0.90, and so on); otherwise, the join is rejected
The cluster name must be the same
The network must be configured to support multicast (the default) so that the nodes can communicate with each other
Refer to the Networking setup recipe in the next chapter.
A common approach in cluster management is to have a master node, which is the main reference for all cluster-level actions, and other nodes, called secondary or slave nodes, that replicate the master's data and actions. All the update actions are first committed on the master node and then replicated to the secondary ones.
In a cluster with multiple nodes, if the master node dies, a secondary one is elected to be the new master; this approach allows automatic failover to be set up in an ElasticSearch cluster.
There are two important behaviors in an ElasticSearch node, namely the arbiter and the data container.
The arbiter nodes are able to process REST requests and all the other search operations. During every action execution, ElasticSearch generally follows a MapReduce approach: the arbiter is responsible for distributing the actions to the underlying shards (map) and for collecting/aggregating the shard results (reduce) into a final response. Arbiters may use a huge amount of RAM due to operations such as facets, collecting hits, and caching (for example, scan queries).
Data nodes are able to store data. They contain the index shards that store the indexed documents as Lucene indices. All the standard nodes are both arbiters and data containers.
In big cluster architectures, having some nodes work as simple arbiters, with a lot of RAM and no data, reduces the resources required by the data nodes and improves search performance by exploiting the local memory cache of the arbiters.
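To sketch this in code: the two behaviors map to the node.data and node.master settings. The following is a minimal, assumed example of starting an arbiter (no-data) node, not a complete production configuration:

import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.node.Node;
import org.elasticsearch.node.NodeBuilder;

public class ArbiterNode {
    public static void main(String[] args) {
        // An arbiter coordinates searches but holds no shard data.
        Settings settings = ImmutableSettings.settingsBuilder()
                .put("node.data", false)  // store no data on this node
                .put("node.master", true) // still eligible to be master
                .build();
        Node arbiter = NodeBuilder.nodeBuilder()
                .settings(settings)
                .node();
    }
}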
When a node is running, a lot of services are managed by its instance. Services provide additional functionalities to a node and they cover different behaviors such as networking, indexing, analyzing, and so on.
ElasticSearch natively provides a large set of functionalities that can be extended with additional plugins. During a node startup, a lot of required services are automatically started. The most important are as follows:
Cluster services manage cluster state and intra-node communication and synchronization
Indexing service manages all the index operations, initializing all active indices and shards
Mapping service, which manages the document types stored in the cluster (we'll discuss mapping in Chapter 3, Managing Mapping)
Network services, such as the HTTP REST service (by default on port 9200), the internal ES protocol (on port 9300), and the Thrift server (on port 9500 if the Thrift plugin is installed); a sketch showing how these ports can be changed follows this list
Plugin service (discussed in Chapter 2, Downloading and Setting Up ElasticSearch, for installation and Chapter 12, Plugin Development, for detailed usage)
River service (covered in Chapter 8, Rivers)
Language scripting services, which allow new language scripting support to be added to ElasticSearch
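As a quick sketch relating the network services to node settings, the default ports mentioned above can be changed with the same Settings API; the port values here are only examples:

import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;

public class NetworkPorts {
    public static void main(String[] args) {
        // Move the HTTP REST service and the internal transport away
        // from their 9200/9300 defaults (example values only).
        Settings settings = ImmutableSettings.settingsBuilder()
                .put("http.port", 9201)
                .put("transport.tcp.port", 9301)
                .build();
        System.out.println(settings.get("http.port")); // prints 9201
    }
}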
Whether you are using ElasticSearch as a search engine or as a distributed data store, it's important to understand how ElasticSearch stores and manages your data.
To work with ElasticSearch data, a user must know the basic concepts of data management and of JSON, which is the "lingua franca" for working with ElasticSearch data and services.
Our main data container is called an index (plural indices), and it can be considered the equivalent of a database in the traditional SQL world. In an index, the data is grouped into data types, called mappings in ElasticSearch. A mapping describes how the records are composed (their fields).
Every record that must be stored in ElasticSearch must be a JSON object.
Natively, ElasticSearch is a schema-less datastore: when you put records in it, it processes them at insert time, splits them into fields, and updates the schema to manage the inserted data.
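For example, the following sketch builds such a JSON record with the Java API's XContentBuilder; the field names are invented for the example:

import java.io.IOException;
import org.elasticsearch.common.xcontent.XContentBuilder;
import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;

public class BuildRecord {
    public static void main(String[] args) throws IOException {
        // ElasticSearch will split this record into its fields
        // ("name", "age") and update the mapping schema on insert.
        XContentBuilder doc = jsonBuilder()
                .startObject()
                    .field("name", "John") // example string field
                    .field("age", 35)      // example numeric field
                .endObject();
        System.out.println(doc.string()); // {"name":"John","age":35}
    }
}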
To manage huge volumes of records, ElasticSearch uses the common approach of splitting an index into many shards so that they can be spread over several nodes. Shard management is transparent in usage: all the common record operations are managed automatically in the ElasticSearch application layer.
Every record is stored in only one shard. The sharding algorithm is based on the record ID, so many operations that require loading and changing records can be achieved without hitting all the shards.
The following table compares the ElasticSearch structure with the SQL and MongoDB ones:

ElasticSearch | SQL | MongoDB
---|---|---
Index (Indices) | Database | Database
Shard | Shard | Shard
Mapping/Type | Table | Collection
Field | Field | Field
Record (JSON object) | Record (Tuples) | Record (BSON object)
Internally, ElasticSearch has rigid rules about how to execute operations, to ensure safe handling of indices, mappings, and records. In ElasticSearch, the operations are divided as follows:
Cluster operations: At the cluster level, all write operations are locked: first they are applied to the master node and then to the secondary ones. The read operations are typically broadcast.
Index management operations: These operations follow the cluster pattern.
Record operations: These operations are executed on single documents at shard level.
When a record is saved in ElasticSearch, the destination shard is chosen based on the following factors (a routing sketch follows this list):
The ID (unique identifier) of the record. If the ID is missing, it is autogenerated by ElasticSearch.
If the routing or parent (covered while learning the parent/child mapping) parameters are defined, the correct shard is chosen by the hash of these parameters.
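ElasticSearch's actual hash function is internal, but the principle is the hash of the ID (or routing value) modulo the number of primary shards. The following is only an illustrative sketch of that idea, not the real algorithm:

public class ShardChoice {
    // Illustrative only: ElasticSearch does NOT use String.hashCode();
    // the principle is shard = hash(routing) % numberOfPrimaryShards.
    static int shardFor(String routing, int numberOfPrimaryShards) {
        return (routing.hashCode() & 0x7fffffff) % numberOfPrimaryShards;
    }

    public static void main(String[] args) {
        // The same ID always maps to the same shard, so a get or an
        // update by ID can hit a single shard instead of all of them.
        System.out.println(shardFor("my-record-id", 5));
    }
}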
Splitting an index into shards allows you to store your data on different nodes, because ElasticSearch tries to balance the shards across all the available nodes.
Every shard can contain up to 2^32 records (about 4.2 billion records), so the real limit to shard size is its storage size.
Shards contain your data, and during the search process all the shards are used to calculate and retrieve results. ElasticSearch performance on big data therefore scales horizontally with the number of shards.
All native record operations (such as index, search, update, and delete) are managed in shards.
Shard management is completely transparent to the user. Only advanced users tend to change the default shard routing and management to cover custom scenarios. A common custom scenario is the requirement to put a customer's data in the same shard to speed up his/her operations (search/index/analytics).
It's best practice not to have too big a shard (over 10 GB), to avoid poor indexing performance due to the continuous merging and resizing of index segments.
It's also not good to oversize the number of shards, to avoid poor search performance due to the native distributed search (it works as MapReduce). Having a huge number of empty shards in an index only consumes memory.
Shard on Wikipedia http://en.wikipedia.org/wiki/Shard_(database_architecture)
Related to shard management, there is the key concept of replication and cluster status.
You need one or more nodes running to have a cluster. To test an effective cluster you need at least two nodes (they can be on the same machine).
An index can have one or more replicas; the shards are called primary if they are part of the master index, and secondary if they are part of the replicas.
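The number of shards and replicas is set at index creation time. The following is a minimal sketch with the Java API; the index name test-index is made up, and client is a connected client as built in the Using the Native protocol recipe later in this chapter:

import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.ImmutableSettings;

public class CreateIndexExample {
    static void createIndex(Client client) {
        // 5 primary shards, each with 1 replica: every shard exists
        // as a primary copy plus a secondary copy on another node.
        client.admin().indices().prepareCreate("test-index")
                .setSettings(ImmutableSettings.settingsBuilder()
                        .put("index.number_of_shards", 5)
                        .put("index.number_of_replicas", 1)
                        .build())
                .execute().actionGet();
    }
}

Unlike the number of shards, the number of replicas can be changed later via the index update settings API.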
To maintain consistency in write operations the following workflow is executed:
The write is first executed in the primary shard.
If the primary write is successfully done, it is propagated simultaneously to all the secondary shards.
If a primary shard dies, a secondary one is elected as primary (if available) and the flow is re-executed.
During search operations, a valid set of shards is chosen randomly between primary and secondary ones to improve performance.
The following figure shows an example of a possible shard configuration:

Related to the concept of replication is the cluster status, an indicator of the health of your cluster.
It can cover three different states:
Green: Everything is ok.
Yellow: Something is missing but you can work.
Red: "Houston, we have a problem". Some primary shards are missing.
A yellow status is mainly due to some shards that are not yet allocated. If your cluster is in recovery status, just wait: if there is enough space on the nodes for your shards, they will be allocated.
If your cluster, even after recovery, is still in the yellow state, it means that you don't have enough nodes to contain your replicas, so you can either reduce the number of replicas or add the required number of nodes.
When you have lost data (that is, one or more shards are missing), you need to try restoring the node(s) that are missing. If your nodes restart and the system goes back to the yellow or green status, you are safe. Otherwise, you have lost data and your cluster is not usable: in this case, delete the index/indices and restore them from backups (if you have them) or from other sources.
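You can also check this status programmatically. The following is a minimal sketch using the cluster health API of the Java client, with client as built in the Using the Native protocol recipe later in this chapter:

import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;
import org.elasticsearch.client.Client;

public class HealthCheck {
    static void printHealth(Client client) {
        // The returned status is GREEN, YELLOW, or RED, with the
        // same meaning as described above.
        ClusterHealthResponse health = client.admin().cluster()
                .prepareHealth()
                .execute().actionGet();
        System.out.println(health.getStatus());
    }
}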
You can communicate with your ElasticSearch server using several protocols. In this recipe we will look at the main ones.
ElasticSearch is designed to be used as a RESTful server, so the main protocol is HTTP, usually on port 9200 and above. It also allows the use of other protocols, such as the native and Thrift ones. Many more are available as extension plugins, but they are seldom used (the memcached one, for example).
Every protocol has strong and weak points, so it's important to choose the correct one depending on the kind of application you are developing. If you are in doubt, choose the HTTP protocol layer, which is the most standard and easy-to-use one.
Choosing the right protocol depends on several factors, mainly architectural and performance related. If you are using one of the official clients, switching from one protocol to another is generally a simple setting in the client initialization. Refer to the following table, which summarizes the protocols with their advantages, disadvantages, and types:

Protocol | Advantages | Disadvantages | Type
---|---|---|---
HTTP | Frequently used; API safe against version changes; usable from any language | HTTP overhead | Text
Native | Fast network layer; programmatic; best for massive data import | Works only on the JVM; tied to the server version; API can change between releases | Binary
Thrift | Similar to HTTP, but without its overhead | Requires the thrift plugin to be installed | Binary
This recipe shows a sample of using the HTTP protocol.
You need a working ElasticSearch cluster. Using the default configuration, port 9200 is open on your server to communicate with.
HTTP is the standard RESTful protocol, and it's easy to integrate.
Now I'll show how to easily fetch the ElasticSearch greeting API on a server running on port 9200, using several programming languages.
For every language sample, the answer will be the same:
{ "ok" : true, "status" : 200, "name" : "Payge, Reeva", "version" : { "number" : "0.90.5", "snapshot_build" : false }, "tagline" : "You Know, for Search" }
In BASH:
curl -XGET http://127.0.0.1:9200
In Python:
import urllib

# urlopen fetches the greeting API (Python 2 standard library)
result = urllib.urlopen("http://127.0.0.1:9200")
print result.read()
In Java:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.MalformedURLException;
import java.net.URL;
import java.net.URLConnection;
…
try {
    // get the URL content
    URL url = new URL("http://127.0.0.1:9200");
    URLConnection conn = url.openConnection();
    // open the stream and put it into BufferedReader
    BufferedReader br = new BufferedReader(
            new InputStreamReader(conn.getInputStream()));
    String inputLine;
    while ((inputLine = br.readLine()) != null) {
        System.out.println(inputLine);
    }
    br.close();
    System.out.println("Done");
} catch (MalformedURLException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
}
In Scala:
scala.io.Source.fromURL("http://127.0.0.1:9200","utf-8").getLines.mkString("\n")
Every client creates a connection to the server and fetches the answer, which is a valid JSON object. You can call the ElasticSearch server from any language that you like.
The main advantages of this protocol are as follows:
Portability: It uses web standards so it can be integrated in different languages (Erlang, JavaScript, Python, Ruby, and so on) or called from command-line applications such as curl.
Durability: The REST APIs don't change often. They don't break for minor release changes, as the Native protocol does.
Simple to use: It speaks JSON to JSON.
More supported than other protocols: Every plugin typically supports a REST endpoint on HTTP.
In this book, a lot of examples are given by calling the HTTP API via the command-line cURL program. This approach is very fast and allows you to test functionalities very quickly.
ElasticSearch provides a Native protocol, used mainly for low-level communication between nodes, but it is also very useful for fast importing of huge data blocks. This protocol is available only for JVM languages.
You need a working ElasticSearch cluster; the standard port for the Native protocol is 9300.
Creating a Java client is quite easy. Take a look at the following code snippet:
import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
…
// we define new settings;
// using sniff transport allows the client to autodetect other nodes
Settings settings = ImmutableSettings.settingsBuilder()
        .put("client.transport.sniff", true).build();
// a client is created with the settings
Client client = new TransportClient(settings)
        .addTransportAddress(new InetSocketTransportAddress("127.0.0.1", 9300));
To initialize a native client, some settings are required. The important ones are as follows:
cluster.name: This provides the name of the cluster
client.transport.sniff: This allows the client to sniff the rest of the cluster and add the discovered nodes into its list of machines to use
With these settings, it's possible to initialize a new client, giving an IP address and a port (by default, 9300).
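Once the client is created, a record can be indexed with a single call. This is only a quick sketch; the index, type, and document values are invented:

import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.client.Client;

public class IndexExample {
    static void indexOne(Client client) {
        // Index a JSON document in index "test-index", type "person",
        // with ID "1"; the response carries the assigned ID back.
        IndexResponse response = client.prepareIndex("test-index", "person", "1")
                .setSource("{\"name\": \"John\", \"age\": 35}")
                .execute().actionGet();
        System.out.println(response.getId()); // "1"
    }
}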
The Native protocol is the internal protocol used in ElasticSearch, and it's the fastest protocol available to talk with ElasticSearch.
The Native protocol is an optimized binary one and works only for JVM languages. To use this protocol, you need to include the elasticsearch.jar in your JVM project. Because it depends on the ElasticSearch implementation, it must be the same version as the ElasticSearch cluster. For this reason, every time you update your ElasticSearch server/cluster, you need to update the elasticsearch.jar your projects depend on, and if there are internal API changes, you need to modify your application code.
To use this protocol, you also need to study the internals of ElasticSearch, so it's not as easy to use as the HTTP and Thrift protocols.
The Native protocol is useful for massive data import. But as ElasticSearch is mainly thought of as a REST HTTP server to communicate with, it lacks support for everything that is not standard in the ElasticSearch core, such as plugin entry points. Using this protocol, you are unable to call entry points provided by external plugins.
The Native protocol is the most used protocol in the Java world and it will be discussed in detail in Chapter 10, Java Integration, and Chapter 12, Plugin Development.
Thrift is an interface definition language, initially developed by Facebook, used to define and create services. This protocol is now maintained by the Apache Software Foundation.
Its usage is similar to HTTP, but it bypasses the limits of the HTTP protocol (latency, handshake, and so on) and it's faster.
You need a working ElasticSearch cluster with the thrift plugin installed (https://github.com/elasticsearch/elasticsearch-transport-thrift/). The standard port for the Thrift protocol is 9500.
In Java, using the ElasticSearch generated classes, creating a client is quite easy, as shown in the following code snippet:
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.protocol.TProtocol;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;
import org.apache.thrift.transport.TTransportException;
import org.elasticsearch.thrift.*;

TTransport transport = new TSocket("127.0.0.1", 9500);
TProtocol protocol = new TBinaryProtocol(transport);
Rest.Client client = new Rest.Client(protocol);
transport.open();
To initialize a connection, first we need to open a socket transport. This is done with TSocket(host, port), using the ElasticSearch Thrift standard port 9500. Then the socket transport must be encapsulated in a binary protocol; this is done with TBinaryProtocol(transport).
Now, a client can be initialized by passing the protocol to it. The Rest.Client and the other utility classes are generated from elasticsearch.thrift, and live in the org.elasticsearch.thrift namespace.
To have a fully working client, we must open the socket (transport.open()). At the end of the program, we should clean up by closing the socket (transport.close()).
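As a usage sketch, the plugin wraps REST, so a call is expressed as a RestRequest with a method and a URI. The following assumes the classes generated from elasticsearch.thrift (Rest.Client, RestRequest, Method, RestResponse) and mirrors the greeting call from the HTTP recipe:

import org.apache.thrift.TException;
import org.elasticsearch.thrift.Method;
import org.elasticsearch.thrift.Rest;
import org.elasticsearch.thrift.RestRequest;
import org.elasticsearch.thrift.RestResponse;

public class ThriftGreeting {
    static void greet(Rest.Client client) throws TException {
        // GET /: the same greeting API used in the HTTP recipe
        RestRequest request = new RestRequest();
        request.setMethod(Method.GET);
        request.setUri("/");
        RestResponse response = client.execute(request);
        System.out.println(response.getStatus()); // for example, OK
    }
}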
Some drivers that connect to ElasticSearch provide a simple-to-use API to interact with Thrift without the boilerplate that this protocol requires.
For advanced usage, I suggest using the Thrift protocol to bypass some problems related to HTTP limits. They are as follows:
The number of simultaneous connections required by HTTP: the Thrift transport is less resource hungry
The network traffic is reduced, due to its binary nature
A big advantage of this protocol is that, on the server side, it wraps the REST entry points, so it can also be used with calls provided by external REST plugins.
For more details on Thrift, visit the Wikipedia page at http://en.wikipedia.org/wiki/Apache_Thrift
For a complete reference of Thrift ElasticSearch plugin, the official documentation is available at https://github.com/elasticsearch/elasticsearch-transport-thrift/