Chapter 13. Ingest

In this chapter, we will cover the following recipes:

  • Pipeline definition

  • Put an ingest pipeline

  • Get an ingest pipeline

  • Delete an ingest pipeline

  • Simulate a pipeline

  • Built-in processors

  • The grok processor

  • Using the ingest attachment plugin

  • Using the ingest GeoIP plugin

Introduction


Elasticsearch 5.x introduces a set of powerful functionalities that target the problems that arise during document ingestion, via the ingest node.

An Elasticsearch node can act as a master, data, or ingest node.

The idea behind splitting the ingest component from the others is to create a more stable cluster, because problems can arise while pre-processing documents.

To achieve this stability, the ingest nodes should be isolated from the master and data nodes, so that problems that may occur on them, such as a crash caused by an attachment plugin or high load due to complex type manipulation, do not affect the rest of the cluster.
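For example, a node can be turned into a dedicated ingest node by disabling the other roles in its configuration; the following is a minimal sketch of the relevant elasticsearch.yml settings for Elasticsearch 5.x:

    # elasticsearch.yml of a dedicated ingest node (Elasticsearch 5.x)
    node.master: false   # not eligible to be elected as master
    node.data: false     # does not hold index shards
    node.ingest: true    # pre-processes documents through ingest pipelines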

Note

The ingestion node can replace a Logstash installation in simple scenarios.

Pipeline definition


The job of ingest nodes is to pre-process documents before sending them to the data nodes. This process is described by a pipeline definition, and every single step of the pipeline is a processor definition.

Getting ready

You need an up-and-running Elasticsearch installation, as we described in the Downloading and installing Elasticsearch recipe in Chapter 2, Downloading and Setup.

How to do it...

To define an ingestion pipeline, you need to provide a description and some processors, as follows:

  1. We will define a pipeline that adds a user field with the value john:

            { 
              "description" : "Add user john field", 
              "processors" : [  
                 { 
                  "set" : { 
                    "field": "user", 
                    "value": "john" 
                  } 
                } 
              ] 
            } 
    

How it works...

The generic template representation is the following one:

{ 
  "description" : "...", 
  "processors" : [ ... ], 
  "version": 1, 
  "on_failure" : [ ...

Put an ingest pipeline


The power of the pipeline definition is that it can be created and updated without a node restart (in contrast to Logstash). The definition is stored in the cluster state via the put pipeline API.

After having defined a pipeline, we need to provide it to the Elasticsearch cluster.

Getting ready

You need an up-and-running Elasticsearch installation, as we described in the Downloading and installing Elasticsearch recipe in Chapter 2, Downloading and Setup.

To execute curl via the command line, you need to install curl for your operating system.

How to do it...

To store or update an ingestion pipeline in Elasticsearch, we will perform the following steps:

  1. We can store the ingest pipeline via a PUT call:

            curl -XPUT 'http://127.0.0.1:9200/_ingest/pipeline/add-user-john' -d '{ 
              "description" : "Add user john field", 
              "processors" : [ 
                { 
                  "set" : { 
                    "field": "user", 
                    "value": "john" 
                  } 
                } 
              ] 
            }' 

Get an ingest pipeline


After having stored your pipeline, it is common to retrieve its content to check its definition. This action can be done via the get pipeline API.

Getting ready

You need an up-and-running Elasticsearch installation, as we described in the Downloading and installing Elasticsearch recipe in Chapter 2, Downloading and Setup.

To execute curl via the command line, you need to install curl for your operating system.

How to do it...

To retrieve an ingestion pipeline in Elasticsearch, we will perform the following steps:

  1. We can retrieve the ingest pipeline via a GET call:

            curl -XGET 'http://127.0.0.1:9200/_ingest/pipeline/add-user-
            john'
    
  2. The result returned by Elasticsearch, if everything is okay, should be as follows:

            {
              "add-user-john" : {
                "description" : "Add user john field",
                "processors" : [
                  {
                    "set" : {
                      "field" : "user",
                      "value" : "john"
                    }
                  }
                ]
              }
            }
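
If only part of the definition is needed, the response can be trimmed with the standard filter_path parameter; the following is a minimal sketch that returns just the description of the pipeline:

            # filter_path trims the response to the listed fields
            curl -XGET 'http://127.0.0.1:9200/_ingest/pipeline/add-user-john?filter_path=*.description'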

Delete an ingest pipeline


To clean up obsolete or unwanted pipelines from our Elasticsearch cluster, we need to call the delete pipeline API with the ID of the pipeline.

Getting ready

You need an up-and-running Elasticsearch installation, as we described in the Downloading and installing Elasticsearch recipe in Chapter 2, Downloading and Setup.

To execute curl via the command line, you need to install curl for your operating system.

How to do it...

To delete an ingestion pipeline in Elasticsearch, we will perform the following steps:

  1. We can delete the ingest pipeline via a DELETE call:

            curl -XDELETE 'http://127.0.0.1:9200/_ingest/pipeline/add-user-
            john'
    
  2. The result returned by Elasticsearch, if everything is okay, should be:

            {"acknowledged":true}
    

How it works...

The delete pipeline API removes the named pipeline from Elasticsearch.

Because the pipelines are stored at cluster level and kept in memory on every ingest node, where they are always up and running, calling the delete pipeline API is the only way to remove them both from the cluster state and from memory.

Simulate an ingest pipeline


The ingest part of every architecture is very sensitive, so the Elasticsearch team provides a way to simulate your pipelines without the need to store them in Elasticsearch.

The simulate pipeline API allows a user to test, improve, and check the functionality of a pipeline without deploying it in the Elasticsearch cluster.

Getting ready

You need an up-and-running Elasticsearch installation, as we described in the Downloading and installing Elasticsearch recipe in Chapter 2, Downloading and Setup.

To execute curl via the command line, you need to install curl for your operating system.

How to do it...

To simulate an ingestion pipeline in Elasticsearch, we will perform the following steps:

  1. We need to execute a call, passing both the pipeline and a sample subset of documents to test the pipeline against (a complete sketch of the call is shown after this step):

            curl -XPOST 'http://127.0.0.1:9200/_ingest/pipeline/_simulate' 
            -d '{
              "pipeline": {
                "description": "Add user john field...

Built-in processors


Elasticsearch provides a large set of ingest processors by default. Their number and functionality can change between minor versions, as they are extended to cover new scenarios.

In this recipe, we will see the most commonly used ones.

Getting ready

You need an up-and-running Elasticsearch installation, as we described in the Downloading and installing Elasticsearch recipe in Chapter 2, Downloading and Setup.

To execute curl via the command line, you need to install curl for your operating system.

How to do it...

To use several processors in an ingestion pipeline in Elasticsearch, we will perform the following steps:

  1. We execute a simulate pipeline API call using several processors, with a sample subset of documents to test the pipeline against (a complete sketch is shown after this step):

            curl -XPOST 'http://127.0.0.1:9200/_ingest/pipeline/_simulate?
            pretty' -d '{
              "pipeline": {
                "description": "Testing some build-processors",
                "processors": [
                  {
                   ...
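
A complete call of this kind could look like the following hedged sketch, which chains the set, lowercase, convert, and remove built-in processors; the field names and the sample document are only illustrative:

            # field names and sample values are hypothetical, used only for illustration
            curl -XPOST 'http://127.0.0.1:9200/_ingest/pipeline/_simulate?pretty' -d '{
              "pipeline": {
                "description": "Testing some built-in processors",
                "processors": [
                  { "set": { "field": "user", "value": "john" } },
                  { "lowercase": { "field": "email" } },
                  { "convert": { "field": "age", "type": "integer" } },
                  { "remove": { "field": "internal_id" } }
                ]
              },
              "docs": [
                {
                  "_index": "index",
                  "_type": "type",
                  "_id": "1",
                  "_source": {
                    "email": "John@Example.COM",
                    "age": "42",
                    "internal_id": "tmp-123"
                  }
                }
              ]
            }'

In the simulated result, email is lowercased, age becomes the integer 42, internal_id is dropped, and the user field is added.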

Grok processor


Elasticsearch provides a large number of built-in processors, and this number increases with every release. In the preceding examples, we have seen the set and the replace ones. In this recipe, we will cover one of the processors most commonly used for log analysis: the grok processor, which is well known to Logstash users.

Getting ready

You need an up-and-running Elasticsearch installation, as we described in the Downloading and installing Elasticsearch recipe in Chapter 2, Downloading and Setup.

To execute curl via the command line, you need to install curl for your operating system.

How to do it...

To test a grok pattern against some log lines, we will perform the following steps:

  1. We will execute a call passing both the pipeline with our grok processor and a sample subset of a document to test the pipeline against:

            curl -XPOST 'http://127.0.0.1:9200/_ingest/pipeline/_simulate?
            pretty' -d '{
              "pipeline": {
                "description": "Testing grok pattern",
                "processors": [...

Using the ingest attachment plugin


In Elasticsearch versions prior to 5.x, it was easy to make a cluster unresponsive by using the attachment mapper. Extracting metadata from a document is a very CPU-intensive operation, and if you are ingesting a lot of documents, your cluster comes under heavy load.

To prevent this scenario, Elasticsearch introduced the ingest node. An ingest node can be put under very high pressure without causing problems to the rest of the Elasticsearch cluster.

The attachment processor allows us to use the document extraction capabilities of Tika in an ingest node.

Getting ready

You need an up-and-running Elasticsearch installation, as we described in the Downloading and installing Elasticsearch recipe in Chapter 2, Downloading and Setup.

To execute curl via the command line, you need to install curl for your operating system.

How to do it...

To be able to use the ingest attachment processor, perform the following steps:

  1. You need to install it as a plugin via:

            bin/elasticsearch-plugin install ingest-attachment
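
After installing the plugin, the node must be restarted before the processor becomes available. A pipeline that uses it can then be stored and referenced at index time; the following is a minimal sketch, in which the pipeline name, the data field, the index names, and the base64 payload (a small RTF document) are only illustrative:

            # the pipeline name, index, and base64 payload are illustrative
            curl -XPUT 'http://127.0.0.1:9200/_ingest/pipeline/attachment' -d '{
              "description": "Extract document content and metadata via Tika",
              "processors": [
                {
                  "attachment": {
                    "field": "data"
                  }
                }
              ]
            }'

            curl -XPUT 'http://127.0.0.1:9200/my_index/my_type/1?pipeline=attachment' -d '{
              "data": "e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0="
            }'

By default, the extracted text and metadata are written to the attachment field of the indexed document.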

Using the ingest GeoIP plugin


Another interesting processor is the GeoIP one, which allows us to map an IP address to a GeoPoint and other location data.

Getting ready

You need an up-and-running Elasticsearch installation, as we described in the Downloading and installing Elasticsearch recipe in Chapter 2, Downloading and Setup.

To execute curl via the command line, you need to install curl for your operating system.

How to do it...

To be able to use the ingest GeoIP processor, perform the following steps:

  1. You need to install it as a plugin via:

            bin/elasticsearch-plugin install ingest-geoip
    
  2. The output will be something like the following one:

              -> Downloading ingest-geoip from elastic
              [=================================================] 100%
              @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
              @     WARNING: plugin requires additional permissions     @     
              @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
              * java.lang.RuntimePermission...
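
After accepting the additional permissions and restarting the node, the processor can be used in a pipeline. The following is a minimal sketch that simulates it; the ip field name and the sample address are only illustrative:

            # the ip field and sample address are illustrative
            curl -XPOST 'http://127.0.0.1:9200/_ingest/pipeline/_simulate?pretty' -d '{
              "pipeline": {
                "description": "Resolve location data from an IP address",
                "processors": [
                  {
                    "geoip": {
                      "field": "ip"
                    }
                  }
                ]
              },
              "docs": [
                {
                  "_source": {
                    "ip": "8.8.8.8"
                  }
                }
              ]
            }'

By default, the resolved data, such as the country code, the city name, and the location GeoPoint, is written to the geoip field of the document.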