
You're reading from Elasticsearch 7.0 Cookbook, Fourth Edition

Product type: Book
Published in: Apr 2019
Reading level: Beginner
Publisher: Packt
ISBN-13: 9781789956504
Edition: 4th
Author: Alberto Paro

Alberto Paro is an engineer, manager, and software developer. He currently works as technology architecture delivery associate director of the Accenture Cloud First data and AI team in Italy. He loves to study emerging solutions and applications, mainly related to cloud and big data processing, NoSQL, Natural language processing (NLP), software development, and machine learning. In 2000, he graduated in computer science engineering from Politecnico di Milano. Then, he worked with many companies, mainly using Scala/Java and Python on knowledge management solutions and advanced data mining products, using state-of-the-art big data software. A lot of his time is spent teaching how to effectively use big data solutions, NoSQL data stores, and related technologies.

Big Data Integration

Elasticsearch has become a common component in big data architectures because it provides several key features:

  • It allows you to search massive amounts of data very quickly
  • For common aggregation operations, it provides real-time analytics on big data
  • It's easier to use an Elasticsearch aggregation than a Spark one
  • If you need to move to a fast data solution, starting from a subset of documents returned by a query is faster than doing a full rescan of all your data

The most common big data software used for processing data is now Apache Spark (http://spark.apache.org/), which is considered the evolution of the now-obsolete Hadoop MapReduce, as it moves processing from disk to memory.

In this chapter, we will see how to integrate Elasticsearch with Spark, both for writing and reading data. Finally, we will see how to use Apache...

Installing Apache Spark

To use Apache Spark, we need to install it. The process is very easy because, unlike a traditional Hadoop deployment, it does not require Apache ZooKeeper and Hadoop HDFS.

Apache Spark can run as a standalone single-node installation, similar to that of Elasticsearch.

Getting ready

You need a Java Virtual Machine installed. Generally, version 8.x or above is used.

How to do it...

Indexing data using Apache Spark

Now that we have installed Apache Spark, we can configure it to work with Elasticsearch and write some data in it.

Getting ready

You need an up and running Elasticsearch installation, as we described in the Downloading and installing Elasticsearch recipe in Chapter 1, Getting Started.

You also need a working installation of Apache Spark.

How to do it...

To configure Apache Spark to communicate with Elasticsearch, we will perform the following steps:

  1. We need to download the Elasticsearch Spark JAR, as follows:
wget -c https://artifacts...
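With the connector JAR on the classpath, writing data comes down to calling saveToEs on an RDD. The following is a minimal sketch, not the book's exact listing: it assumes a spark-shell session started with the elasticsearch-spark JAR, Elasticsearch on localhost:9200, and a hypothetical index name.

```scala
// Assumes spark-shell provides the SparkContext as `sc` and the
// elasticsearch-spark connector JAR is on the classpath.
import org.elasticsearch.spark._

// Two sample documents as simple Scala maps.
val doc1 = Map("title" -> "Spark and Elasticsearch", "pages" -> 120)
val doc2 = Map("title" -> "Big data integration", "pages" -> 95)

// saveToEs comes from the org.elasticsearch.spark._ implicits and indexes
// every element of the RDD into the given index ("spark-books" is a
// hypothetical name used only for this example).
sc.makeRDD(Seq(doc1, doc2)).saveToEs("spark-books")
```

By default, the connector targets localhost:9200; the es.nodes configuration property can point it at a remote cluster instead.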

Indexing data with meta using Apache Spark

Using a simple map for ingesting data is only good for simple jobs. The best practice in Spark is to use case classes so that you have fast serialization and can manage complex type checking. During indexing, providing custom IDs can be very handy. In this recipe, we will see how to cover these issues.

Getting ready

You need an up and running Elasticsearch installation, as we described in the Downloading and installing Elasticsearch recipe in Chapter 1, Getting Started.

You also need a working installation of Apache Spark.

How to do it...

...
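The idea above can be sketched as follows. This is a hedged example, not the book's listing: the index name, the Book case class, and its fields are assumptions; es.mapping.id is the real connector option for taking the document _id from a field.

```scala
// Assumes a spark-shell session with the elasticsearch-spark connector
// JAR on the classpath and Elasticsearch on localhost:9200.
import org.elasticsearch.spark._

// A case class gives Spark fast serialization and type checking.
case class Book(id: String, title: String, pages: Int)

val books = Seq(
  Book("1", "Spark and Elasticsearch", 120),
  Book("2", "Big data integration", 95)
)

// "es.mapping.id" tells elasticsearch-spark which field to use as the
// document _id, so re-running the job updates documents instead of
// creating duplicates.
sc.makeRDD(books).saveToEs("spark-books", Map("es.mapping.id" -> "id"))
```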

Reading data with Apache Spark

In Spark, you can read data from many sources but, in general, with NoSQL datastores such as HBase, Accumulo, and Cassandra, you have a limited query subset and often need to scan all the data just to read the required portion. Using Elasticsearch, you can retrieve only the subset of documents that matches your Elasticsearch query.

Getting ready

You need an up and running Elasticsearch installation, as we described in the Downloading and installing Elasticsearch recipe in Chapter 1, Getting Started.

You also need a working installation of Apache Spark and the data that we indexed in the previous example.

...
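Reading can be sketched with the connector's esRDD helper. A minimal sketch, assuming the hypothetical "spark-books" index from the previous examples and a spark-shell session with the connector on the classpath:

```scala
import org.elasticsearch.spark._

// esRDD returns an RDD of (documentId, fieldsMap) pairs; the optional
// second argument is an Elasticsearch query, so only matching documents
// are shipped to Spark instead of scanning the whole index.
val rdd = sc.esRDD("spark-books", "?q=title:spark")

rdd.collect().foreach { case (id, fields) =>
  println(s"$id -> $fields")
}
```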

Reading data using Spark SQL

Spark SQL is a Spark module for structured data processing. It provides a programming abstraction called DataFrames and can also act as a distributed SQL query engine. Elasticsearch Spark integration allows us to read data using SQL queries.

Spark SQL works with structured data; in other words, all entries are expected to have the same structure (the same number of fields, of the same type and name). Using unstructured data (documents with different structures) is not supported and will cause problems.

Getting ready

You need an up and running Elasticsearch installation, as we described in the Downloading and installing Elasticsearch recipe in Chapter 1, Getting Started.

You also need...
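The DataFrame approach can be sketched as follows; the index name is a hypothetical carry-over from the earlier examples, and spark is the SparkSession that spark-shell provides.

```scala
import org.elasticsearch.spark.sql._

// "es" is the short name of the elasticsearch-spark data source.
val df = spark.read.format("es").load("spark-books")

// Once registered as a view, the index can be queried with plain SQL;
// the connector pushes supported filters down to Elasticsearch.
df.createOrReplaceTempView("books")
spark.sql("SELECT title FROM books WHERE pages > 100").show()
```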

Indexing data with Apache Pig

Apache Pig (https://pig.apache.org/) is a tool that's frequently used to store and manipulate data in datastores. It can be very handy if you need to import comma-separated values (CSV) data into Elasticsearch very quickly.

Getting ready

You need an up and running Elasticsearch installation, as we described in the Downloading and installing Elasticsearch recipe in Chapter 1, Getting Started.

You need a working Pig installation. Depending on your operating system, you should follow the instructions at http://pig.apache.org/docs/r0.17.0/start.html.

If you are using macOS with Homebrew, you can install it with brew install pig.

...

Using Elasticsearch with Alpakka

The Alpakka project (https://doc.akka.io/docs/alpakka/current/index.html) is a reactive enterprise integration library for Java and Scala, based on Reactive Streams and Akka (https://akka.io/).

Reactive Streams are based on components; the most important ones are Source (used to read data from different sources) and Sink (used to write data to storage).

Alpakka supports Source and Sink for many data stores, and Elasticsearch is one of them.

In this recipe, we will go through a common scenario: reading a CSV file and ingesting it into Elasticsearch.

Getting ready

You need an up-and-running Elasticsearch installation as we described in the Downloading and installing Elasticsearch recipe...
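The scenario can be sketched roughly as follows. The class names follow recent Alpakka releases and may differ from the book's version; the CSV file name, the index name, and the Person fields are all assumptions for illustration.

```scala
import akka.actor.ActorSystem
import akka.stream.alpakka.csv.scaladsl.CsvParsing
import akka.stream.alpakka.elasticsearch._
import akka.stream.alpakka.elasticsearch.scaladsl.ElasticsearchSink
import akka.stream.scaladsl.FileIO
import java.nio.file.Paths
import spray.json.DefaultJsonProtocol._
import spray.json.RootJsonFormat

implicit val system: ActorSystem = ActorSystem("csv-to-es")

case class Person(name: String, surname: String)
implicit val personFormat: RootJsonFormat[Person] = jsonFormat2(Person)

val connection =
  ElasticsearchConnectionSettings("http://localhost:9200")

// Read the file, parse each CSV line, wrap every record in an index
// message, and stream it into the "people" index through the Sink.
FileIO.fromPath(Paths.get("people.csv"))
  .via(CsvParsing.lineScanner())
  .map(_.map(_.utf8String))
  .collect { case name :: surname :: _ =>
    WriteMessage.createIndexMessage(Person(name, surname))
  }
  .runWith(
    ElasticsearchSink.create[Person](
      ElasticsearchParams.V7("people"),
      ElasticsearchWriteSettings(connection)))
```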

Using Elasticsearch with MongoDB

MongoDB (https://www.mongodb.com/) is one of the most popular document data stores, due to its simplicity of installation and the large community that is using it.

In many architectures, it's very common to use Elasticsearch as a search or query layer and MongoDB as the primary data store. In this recipe, we'll see how simple it is to write to MongoDB while reading from an Elasticsearch query stream using Alpakka.

Getting ready

You need an up-and-running Elasticsearch installation, as we described in the Downloading and installing Elasticsearch recipe in Chapter 1, Getting Started.

An IDE that supports Scala programming, such as IntelliJ IDEA with the Scala plugin, should be installed.

A...
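A rough sketch of the idea: stream the results of an Elasticsearch query into MongoDB with Alpakka. Again, the class names follow recent Alpakka releases and may differ between versions; the index, query, database, and Person case class are assumptions for illustration.

```scala
import akka.actor.ActorSystem
import akka.stream.alpakka.elasticsearch._
import akka.stream.alpakka.elasticsearch.scaladsl.ElasticsearchSource
import akka.stream.alpakka.mongodb.scaladsl.MongoSink
import com.mongodb.reactivestreams.client.MongoClients
import org.bson.Document

implicit val system: ActorSystem = ActorSystem("es-to-mongo")

val connection = ElasticsearchConnectionSettings("http://localhost:9200")
val collection = MongoClients.create("mongodb://localhost:27017")
  .getDatabase("people_db")
  .getCollection("people")

// Read every document matching the query from the "people" index and
// insert each one into the MongoDB collection as a BSON document.
ElasticsearchSource
  .create(
    ElasticsearchParams.V7("people"),
    query = """{"match_all": {}}""",
    settings = ElasticsearchSourceSettings(connection))
  .map(readResult => Document.parse(readResult.source.toString))
  .runWith(MongoSink.insertOne(collection))
```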
