Chapter 17: Big Data Integration

Elasticsearch has become a common component in big data architectures because it provides several of the following features:

  • It allows you to search for massive amounts of data quickly.
  • For common aggregation operations, it provides real-time analytics on big data.
  • Its aggregations are often easier to use than the equivalent Spark code.
  • If you need to move to a fast data solution, starting from the subset of documents returned by a query is faster than performing a full rescan of all your data.

The most common big data software used for processing data is now Apache Spark (http://spark.apache.org/), widely considered the evolution of the now-obsolete Hadoop MapReduce because it moves processing from disk to memory.

In this chapter, we will see how to integrate Elasticsearch with Spark, both for writing and reading data. We will also see how to use Apache Pig to write data to Elasticsearch in a simple way, and how to stream data to and from Elasticsearch using Alpakka and MongoDB.

In this chapter, we will cover the following recipes:

  • Installing Apache Spark
  • Indexing data using Apache Spark
  • Indexing data with meta using Apache Spark
  • Reading data with Apache Spark
  • Reading data using Spark SQL
  • Indexing data with Apache Pig
  • Using Elasticsearch with Alpakka
  • Using Elasticsearch with MongoDB

Installing Apache Spark

To use Apache Spark, we need to install it. The process is very easy because, unlike a traditional Hadoop deployment, it does not require Apache ZooKeeper or the Hadoop Distributed File System (HDFS).

Apache Spark can run as a standalone, single-node installation, similar to Elasticsearch.

Getting ready

You need a Java Virtual Machine (JVM) installed; generally, version 8.x or above is used. At the time of writing, the maximum Java version supported by Apache Spark is 11.x.

How to do it...

To install Apache Spark, we will perform the following steps:

  1. Download a binary distribution from https://spark.apache.org/downloads.html. For generic usage, I would suggest that you download a standard version using the following command:
    wget https://dlcdn.apache.org/spark/spark-3.2.1/spark-3.2.1-bin-hadoop3.2.tgz 
  2. Now, we can extract the Spark distribution using tar, as follows:
    tar xfvz spark-3.2.1-bin-hadoop3.2.tgz 
  3. Now, we can test whether Apache Spark is working by executing...
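
For example, a quick sanity check (a minimal sketch) is to launch the Spark shell and run a trivial distributed computation:

    ./bin/spark-shell

Then, inside the shell, the following Scala snippet should print 5050.0:

    // Sum the numbers from 1 to 100 across the executors
    val total = spark.sparkContext.parallelize(1 to 100).sum()
    println(total) // 5050.0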

Indexing data using Apache Spark

Now that we have installed Apache Spark, we can configure it to work with Elasticsearch and write some data in it.

Getting ready

You need an up-and-running Elasticsearch installation, as we described in the Downloading and installing Elasticsearch recipe in Chapter 1, Getting Started.

You also need a working installation of Apache Spark.

To simplify the configuration, we disable Secure Sockets Layer (SSL), with its self-signed certificate, on the HTTP layer by setting enabled to false in the corresponding section of config/elasticsearch.yml, as shown in the following code:

# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: false
  keystore.path: certs/http.p12

After changing the configuration, the Elasticsearch node/cluster must be restarted.

How to do it...

To configure Apache Spark to communicate with Elasticsearch, we will perform the following steps:

  1. We need to download...
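
Once the elasticsearch-hadoop (elasticsearch-spark) connector is on the shell's classpath, a minimal indexing sketch looks like the following; the documents and the spark-test index name are illustrative:

    import org.elasticsearch.spark._

    // Documents expressed as simple Scala maps
    val doc1 = Map("title" -> "Elasticsearch", "price" -> 10)
    val doc2 = Map("title" -> "Apache Spark", "price" -> 20)

    // saveToEs is added to RDDs by the connector's implicits; it indexes
    // each element of the RDD as a document in the spark-test index
    sc.makeRDD(Seq(doc1, doc2)).saveToEs("spark-test")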

Indexing data with meta using Apache Spark

Using a simple map for ingesting data is good only for simple jobs. The best practice in Spark is to use a case class so that you have fast serialization and can manage complex type checking. During indexing, providing custom IDs can be very handy. In this recipe, we will see how to cover these issues.

Getting ready

You need an up-and-running Elasticsearch installation, as we described in the Downloading and installing Elasticsearch recipe in Chapter 1, Getting Started.

You also need a working installation of Apache Spark.

How to do it...

To store data in Elasticsearch using Apache Spark, we will perform the following steps:

  1. In the Spark root directory, start the Spark shell to apply the Elasticsearch configuration by running the following command:
    ./bin/spark-shell \
        --conf spark.es.index.auto.create=true \
        --conf spark.es.net.http.auth.user=$ES_USER \
       ...
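
After the shell starts, a minimal sketch of the idea behind this recipe, indexing case class instances with custom IDs, could look like this (the Person class and the spark-people index are illustrative):

    import org.elasticsearch.spark._

    // A case class gives fast serialization and strong type checking
    case class Person(id: String, name: String, age: Int)

    val people = sc.makeRDD(Seq(Person("1", "Alice", 30), Person("2", "Bob", 25)))

    // es.mapping.id tells the connector which field to use as the document _id
    people.saveToEs("spark-people", Map("es.mapping.id" -> "id"))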

Reading data with Apache Spark

In Spark, you can read data from many sources, but in general, NoSQL data stores such as HBase, Accumulo, and Cassandra expose only a limited query subset, so you often need to scan all the data to read only what is required. With Elasticsearch, you can retrieve just the subset of documents that matches your Elasticsearch query, speeding up data reading several-fold.

Getting ready

You need an up-and-running Elasticsearch installation, as we described in the Downloading and installing Elasticsearch recipe in Chapter 1, Getting Started.

You also need a working installation of Apache Spark and the data that we indexed in the previous example.

How to do it...

To read data in Elasticsearch via Apache Spark, we will perform the following steps:

  1. In the Spark root directory, start the Spark shell to apply the Elasticsearch configuration by running the following command:
    ./bin/spark-shell \
        --conf spark.es.index...
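
Once the shell is configured, a minimal reading sketch uses the connector's esRDD helper, so only the documents matching the query are fetched (the spark-people index and the query are illustrative):

    import org.elasticsearch.spark._

    // Each element is a (document id, field map) pair
    val rdd = sc.esRDD("spark-people", "?q=name:Alice")
    rdd.collect().foreach { case (id, doc) => println(s"$id -> $doc") }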

Reading data using Spark SQL

Spark SQL is a Spark module for structured data processing. It provides a programming abstraction called DataFrames and can also act as a distributed SQL query engine. Elasticsearch Spark integration allows us to read data using SQL queries.

Spark SQL works with structured data; in other words, all entries are expected to have the same structure (the same number of fields, of the same type and name). Using unstructured data (documents with different structures) is not supported and will cause problems.

Getting ready

You need an up-and-running Elasticsearch installation, as we described in the Downloading and installing Elasticsearch recipe in Chapter 1, Getting Started.

You also need a working installation of Apache Spark and the data that we indexed in the Indexing data using Apache Spark recipe of this chapter.

How to do it...

To read data in Elasticsearch using Apache Spark SQL and DataFrames, we will perform the following steps:

...
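
As a sketch of the general approach, the connector exposes an index as a DataFrame through its es data source, which can then be queried with plain SQL (the index and field names are illustrative):

    // Load the index as a DataFrame through the connector's "es" data source
    val df = spark.read.format("es").load("spark-people")

    // Register it as a temporary view and query it with SQL
    df.createOrReplaceTempView("people")
    spark.sql("SELECT name, age FROM people WHERE age > 26").show()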

Indexing data with Apache Pig

Apache Pig (https://pig.apache.org/) is a tool that's frequently used to store and manipulate data in data stores. It can be very handy if you need to import comma-separated values (CSV) files into Elasticsearch very quickly.

Getting ready

You need an up-and-running Elasticsearch installation, as we described in the Downloading and installing Elasticsearch recipe in Chapter 1, Getting Started.

You need a working Pig installation. Depending on your operating system, you should follow the instructions at http://pig.apache.org/docs/r0.17.0/start.html.

If you are using macOS with Homebrew, you can install it with brew install pig; on Linux/Windows, you can install it with the following commands:

wget -c https://downloads.apache.org/pig/pig-0.17.0/pig-0.17.0.tar.gz
tar xfvz pig-0.17.0.tar.gz

How to do it...

We want to read a CSV file and write the data in Elasticsearch. We will perform the following steps to do so:

  1. We will download...

Using Elasticsearch with Alpakka

The Alpakka project (https://doc.akka.io/docs/alpakka/current/index.html) is a reactive enterprise integration library for Java and Scala, based on Reactive Streams and Akka (https://akka.io/).

Reactive Streams is based on components; the most important ones are Source (used to read data from different sources) and Sink (used to write data to storage).

Alpakka supports Source and Sink for many data stores, Elasticsearch being one of them.

In this recipe, we will go through a common scenario – reading a CSV file and ingesting it in Elasticsearch.

Getting ready

You need an up-and-running Elasticsearch installation, as we described in the Downloading and installing Elasticsearch recipe in Chapter 1, Getting Started.

An IDE that supports Scala programming, such as IntelliJ IDEA with the Scala plugin, should be installed globally.

The code for this recipe can be found in the ch17/alpakka directory, and...
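
As a minimal sketch of the flow, assuming the Alpakka 3.x APIs for CSV and Elasticsearch (class names vary between Alpakka versions, and the file and index names are illustrative), the stream parses the CSV file and indexes one document per row:

    import akka.actor.ActorSystem
    import akka.stream.alpakka.csv.scaladsl.{CsvParsing, CsvToMap}
    import akka.stream.alpakka.elasticsearch._
    import akka.stream.alpakka.elasticsearch.scaladsl.ElasticsearchSink
    import akka.stream.scaladsl.FileIO
    import java.nio.file.Paths
    import spray.json._
    import DefaultJsonProtocol._

    object CsvToEs extends App {
      implicit val system: ActorSystem = ActorSystem("csv-to-es")

      // A local, non-SSL Elasticsearch node (an assumption of this sketch)
      val connection = ElasticsearchConnectionSettings("http://localhost:9200")

      FileIO.fromPath(Paths.get("data.csv"))   // read the raw CSV bytes
        .via(CsvParsing.lineScanner())         // split lines into fields
        .via(CsvToMap.toMapAsStrings())        // use the header line as keys
        .map(row => WriteMessage.createIndexMessage(row.toJson))
        .runWith(
          ElasticsearchSink.create[JsValue](
            ElasticsearchParams.V7("csv-index"),
            ElasticsearchWriteSettings(connection)
          )
        )
    }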

Using Elasticsearch with MongoDB

MongoDB (https://www.mongodb.com/) is one of the most popular document data stores, due to its simple installation and the large community that is using it.

In many architectures, it's very common to use Elasticsearch as a search or query layer and MongoDB as the more secure primary data storage. In this recipe, we'll see how simple it is to write to MongoDB while reading from an Elasticsearch query stream using Alpakka.

Getting ready

You need an up-and-running Elasticsearch installation, as we described in the Downloading and installing Elasticsearch recipe in Chapter 1, Getting Started.

An IDE that supports Scala programming, such as IntelliJ IDEA with the Scala plugin, should be installed globally.

A local installation of MongoDB is required to run the example. The fastest way to get one is with Docker via the following command:

docker run -d -p 27017:27017 --name example-mongo mongo:latest

The code for this recipe can be found in the...
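
As a sketch of the overall flow, assuming the Alpakka 3.x Elasticsearch and MongoDB connectors together with the MongoDB reactive streams driver (API names vary between versions; the index, database, and collection names are illustrative), the stream reads the matching documents from Elasticsearch and inserts them into a Mongo collection:

    import akka.actor.ActorSystem
    import akka.stream.alpakka.elasticsearch._
    import akka.stream.alpakka.elasticsearch.scaladsl.ElasticsearchSource
    import akka.stream.alpakka.mongodb.scaladsl.MongoSink
    import com.mongodb.reactivestreams.client.MongoClients
    import org.bson.Document

    object EsToMongo extends App {
      implicit val system: ActorSystem = ActorSystem("es-to-mongo")

      val connection = ElasticsearchConnectionSettings("http://localhost:9200")
      val collection = MongoClients.create("mongodb://localhost:27017")
        .getDatabase("example").getCollection("docs")

      // Stream every document matching the query out of Elasticsearch...
      ElasticsearchSource
        .create(
          ElasticsearchParams.V7("csv-index"),
          """{"match_all": {}}""",
          ElasticsearchSourceSettings(connection)
        )
        // ...and insert each hit's _source into MongoDB as a BSON document
        .map(hit => Document.parse(hit.source.compactPrint))
        .runWith(MongoSink.insertOne(collection))
    }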
