Elasticsearch 7.0 Cookbook - Fourth Edition
Published in Apr 2019 by Packt
ISBN-13: 9781789956504

Author: Alberto Paro

Alberto Paro is an engineer, manager, and software developer. He currently works as technology architecture delivery associate director of the Accenture Cloud First data and AI team in Italy. He loves to study emerging solutions and applications, mainly related to cloud and big data processing, NoSQL, natural language processing (NLP), software development, and machine learning. In 2000, he graduated in computer science engineering from Politecnico di Milano. Then, he worked with many companies, mainly using Scala/Java and Python, on knowledge management solutions and advanced data mining products, using state-of-the-art big data software. A lot of his time is spent teaching how to effectively use big data solutions, NoSQL data stores, and related technologies.
Using the Ingest Module

Elasticsearch 5.x introduced, via the ingest node, a set of powerful functionalities that target the problems that arise during document ingestion.

In Chapter 1, Getting Started, we discussed that an Elasticsearch node can be a master, data, or ingest node; the idea behind splitting the ingest component from the others is to create a more stable cluster, given the problems that can arise when preprocessing documents.

To create a more stable cluster, the ingest nodes should be isolated from the master nodes (and possibly also from the data nodes), so that problems that may occur during preprocessing, such as a crash due to an attachment plugin or high load due to complex type manipulation, cannot affect the rest of the cluster.

An ingest node can replace a Logstash installation in simple scenarios.

In this chapter, we will cover the following recipes:

  • Pipeline definition
  • Inserting an ingest pipeline
  • Getting an ingest pipeline
  • Deleting an ingest pipeline
  • Simulating an ingest pipeline
  • Built-in processors
  • Grok processor
  • Using the ingest attachment plugin
  • Using the ingest GeoIP plugin

Pipeline definition

The job of an ingest node is to pre-process documents before sending them to the data nodes. This process is described by a pipeline definition, and every single step of the pipeline is a processor definition.

Getting ready

You need an up-and-running Elasticsearch installation, as we described in the Downloading and installing Elasticsearch recipe in Chapter 1, Getting Started.

To execute these commands, any HTTP client can be used, such as curl (https://curl.haxx.se/), Postman (https://www.getpostman.com/), or similar. We will use the Kibana console, as it provides code completion and better character escaping for Elasticsearch.

...
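
As a minimal sketch (the description, field name, and value here are illustrative), a pipeline definition is a JSON object with an optional description and a list of processors that are executed in order:

    {
      "description": "Add a user field to incoming documents",
      "processors": [
        {
          "set": {
            "field": "user",
            "value": "john"
          }
        }
      ]
    }

Each entry in the processors array is a single processor definition, keyed by the processor type (set, in this case).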

Inserting an ingest pipeline

The power of the pipeline definition lies in its ability to be created and updated without a node restart (unlike Logstash). The definition is stored in the cluster state via the put pipeline API.

Now that we've defined a pipeline, we need to provide it to the Elasticsearch cluster.

Getting ready

You need an up-and-running Elasticsearch installation, as we described in the Downloading and installing Elasticsearch recipe in Chapter 1, Getting Started.

To execute the commands, any HTTP client can be used, such as curl (https://curl.haxx.se/), Postman (https://www.getpostman.com/), or similar. Use the Kibana console, as it provides code completion and better character escaping for Elasticsearch.
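
As a sketch, storing the pipeline defined in the previous recipe under the ID my-pipeline (a placeholder name of our choosing) via the put pipeline API looks like this in the Kibana console:

    PUT _ingest/pipeline/my-pipeline
    {
      "description": "Add a user field to incoming documents",
      "processors": [
        {
          "set": {
            "field": "user",
            "value": "john"
          }
        }
      ]
    }

If the call succeeds, Elasticsearch answers with {"acknowledged": true}, and the pipeline becomes available to every ingest node in the cluster.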

Getting an ingest pipeline

After having stored your pipeline, it is common to retrieve its content so that you can check its definition. This action can be done via the get pipeline API.

Getting ready

You need an up-and-running Elasticsearch installation, as we described in the Downloading and installing Elasticsearch recipe in Chapter 1, Getting Started.

To execute the commands, any HTTP client can be used, such as curl (https://curl.haxx.se/), Postman (https://www.getpostman.com/), or similar. Use the Kibana console, as it provides code completion and better character escaping for Elasticsearch.

How to do it...

...
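
As a sketch, assuming the my-pipeline ID used in the previous recipe, the call is:

    GET _ingest/pipeline/my-pipeline

Calling GET _ingest/pipeline without an ID returns all of the stored pipelines, and wildcard IDs such as my-* are also accepted.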

Deleting an ingest pipeline

To clean up our Elasticsearch cluster of obsolete or unwanted pipelines, we need to call the delete pipeline API with the ID of the pipeline.

Getting ready

You need an up-and-running Elasticsearch installation, as we described in the Downloading and installing Elasticsearch recipe in Chapter 1, Getting Started.

To execute the commands, any HTTP client can be used, such as curl (https://curl.haxx.se/), Postman (https://www.getpostman.com/), or similar. Use the Kibana console, as it provides code completion and better character escaping for Elasticsearch.

How to do it...

...
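
A minimal sketch, again assuming the my-pipeline ID from the previous recipes:

    DELETE _ingest/pipeline/my-pipeline

A successful call returns {"acknowledged": true}; trying to delete a pipeline that does not exist returns a 404 error.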

Simulating an ingest pipeline

The ingest part of every architecture is very sensitive, so the Elasticsearch team has made it possible to simulate your pipelines without the need to store them in Elasticsearch.

The simulate pipeline API allows a user to test, improve, and check the functionality of a pipeline without deploying it in the Elasticsearch cluster.

Getting ready

You need an up-and-running Elasticsearch installation, as we described in the Downloading and installing Elasticsearch recipe in Chapter 1, Getting Started.

To execute the commands, any HTTP client can be used, such as curl (https://curl.haxx.se/), Postman (https://www.getpostman.com/), or similar. Use the Kibana console, as it provides code completion and better character escaping for Elasticsearch.
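
As a sketch (the pipeline and documents here are illustrative), the pipeline definition is passed inline together with a list of sample documents, and nothing is stored in the cluster:

    POST _ingest/pipeline/_simulate
    {
      "pipeline": {
        "description": "Add a user field to incoming documents",
        "processors": [
          {
            "set": {
              "field": "user",
              "value": "john"
            }
          }
        ]
      },
      "docs": [
        { "_source": { "message": "a first sample document" } },
        { "_source": { "message": "a second sample document" } }
      ]
    }

Appending ?verbose=true to the URL returns the intermediate document after every single processor. An already stored pipeline can be simulated with POST _ingest/pipeline/my-pipeline/_simulate, passing only the docs body.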

Built-in processors

Elasticsearch provides a large set of ingest processors by default. Their number and functionality can change between minor versions, as they are extended to cover new scenarios.

In this recipe, we will look at the most commonly used ones.

Getting ready

You need an up-and-running Elasticsearch installation, as we described in the Downloading and installing Elasticsearch recipe in Chapter 1, Getting Started.

To execute the commands, any HTTP client can be used, such as curl (https://curl.haxx.se/), Postman (https://www.getpostman.com/), or similar. Use the Kibana console, as it provides code completion and better character escaping for Elasticsearch.

...
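
As a hedged sketch of a few of the most commonly used processors (set, rename, and remove; the field names are illustrative), several processors can be chained in a single pipeline and are executed in order:

    POST _ingest/pipeline/_simulate
    {
      "pipeline": {
        "processors": [
          { "set": { "field": "source", "value": "ingest" } },
          { "rename": { "field": "msg", "target_field": "message" } },
          { "remove": { "field": "temporary_field" } }
        ]
      },
      "docs": [
        {
          "_source": {
            "msg": "hello world",
            "temporary_field": "to be dropped"
          }
        }
      ]
    }

The simulated result contains the message and source fields, while temporary_field has been dropped.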

Grok processor

Elasticsearch provides a large number of built-in processors, and this number increases with every release. In the preceding examples, we have already seen some of them, such as the set one. In this recipe, we will cover the processor that is most commonly used for log analysis: the grok processor, which is well-known to Logstash users.

Getting ready

You need an up-and-running Elasticsearch installation, as we described in the Downloading and installing Elasticsearch recipe in Chapter 1, Getting Started.

To execute the commands, any HTTP client can be used, such as curl (https://curl.haxx.se/), Postman (https://www.getpostman.com/), or similar. Use the Kibana console, as it provides code completion and better character escaping for Elasticsearch.
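
A sketch of the grok processor, simulated against a single synthetic log line (the pattern and the resulting field names are illustrative):

    POST _ingest/pipeline/_simulate
    {
      "pipeline": {
        "description": "Parse a simple access log line",
        "processors": [
          {
            "grok": {
              "field": "message",
              "patterns": ["%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}"]
            }
          }
        ]
      },
      "docs": [
        {
          "_source": {
            "message": "55.3.244.1 GET /index.html 15824 0.043"
          }
        }
      ]
    }

The matched groups are added to the document as the client, method, request, bytes, and duration fields.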

Using the ingest attachment plugin

In Elasticsearch versions prior to 5.x, it was easy to make a cluster unresponsive by using the attachment mapper. Extracting metadata from a document is a very CPU-intensive operation, and if you are ingesting a lot of documents, your cluster will be under heavy load.

To prevent this scenario, Elasticsearch introduced the ingest node. An ingest node can be put under very high pressure without causing problems to the rest of the Elasticsearch cluster.

The attachment processor allows us to use the document extraction capabilities of Tika in an ingest node.

Getting ready

You need an up-and-running Elasticsearch installation, as we described in the Downloading and installing Elasticsearch recipe in Chapter 1, Getting Started.
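
The ingest-attachment plugin is not bundled with Elasticsearch by default. As a sketch, it can be installed on every node (followed by a node restart):

    bin/elasticsearch-plugin install ingest-attachment

A pipeline using the attachment processor then reads a base64-encoded binary field (the pipeline ID and field name are illustrative):

    PUT _ingest/pipeline/attachment
    {
      "description": "Extract document content with Tika",
      "processors": [
        {
          "attachment": {
            "field": "data"
          }
        }
      ]
    }

Indexing a document with a base64-encoded data field through this pipeline (for example, PUT my_index/_doc/1?pipeline=attachment) populates an attachment object with fields such as content, content_type, and content_length.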

Using the ingest GeoIP plugin

Another interesting processor is provided by the GeoIP plugin, which allows us to map an IP address to a GeoPoint and other location data.

Getting ready

You need an up-and-running Elasticsearch installation, as we described in the Downloading and installing Elasticsearch recipe in Chapter 1, Getting Started.

To execute the commands, any HTTP client can be used, such as curl (https://curl.haxx.se/), Postman (https://www.getpostman.com/), or similar. Use the Kibana console, as it provides code completion and better character escaping for Elasticsearch.

How to do it...

...
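
As a sketch, assuming the ingest-geoip plugin has been installed (bin/elasticsearch-plugin install ingest-geoip, followed by a node restart), the geoip processor reads an IP address from a field and adds a geoip object with location data:

    POST _ingest/pipeline/_simulate
    {
      "pipeline": {
        "processors": [
          {
            "geoip": {
              "field": "ip"
            }
          }
        ]
      },
      "docs": [
        { "_source": { "ip": "8.8.8.8" } }
      ]
    }

The processor adds fields such as geoip.country_iso_code, geoip.city_name, and geoip.location (a GeoPoint), based on the bundled MaxMind GeoLite2 databases.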