Fast Data Processing with Spark 2 - Third Edition

You're reading from Fast Data Processing with Spark 2 - Third Edition

Product type: Book
Published in: Oct 2016
Publisher: Packt
ISBN-13: 9781785889271
Pages: 274
Edition: 3rd Edition
Author: Holden Karau

Table of Contents (18 chapters)

Fast Data Processing with Spark 2 Third Edition
Credits
About the Author
About the Reviewers
www.PacktPub.com
Preface
1. Installing Spark and Setting Up Your Cluster
2. Using the Spark Shell
3. Building and Running a Spark Application
4. Creating a SparkSession Object
5. Loading and Saving Data in Spark
6. Manipulating Your RDD
7. Spark 2.0 Concepts
8. Spark SQL
9. Foundations of Datasets/DataFrames – The Proverbial Workhorse for Data Scientists
10. Spark with Big Data
11. Machine Learning with Spark ML Pipelines
12. GraphX

Chapter 10. Spark with Big Data

As we mentioned in Chapter 8, Spark SQL, the big data compute stack doesn't work in isolation; integration points across multiple stacks and technologies are essential. In this chapter, we will look at how Spark works with some of the big data technologies that are part of the Hadoop ecosystem. We will cover the following topics:

  • Parquet: This is an efficient storage format

  • HBase: This is the NoSQL database of the Hadoop ecosystem

Parquet - an efficient and interoperable big data format


We explored the Parquet format in Chapter 7, Spark 2.0 Concepts. To recap, Parquet is essentially an interoperable storage format whose main goals are space efficiency and query efficiency. Parquet is based on Google's Dremel paper, was developed by Twitter and Cloudera, and is now a top-level Apache project. It implements the nested storage format from Dremel: data is stored in a columnar layout with an evolvable schema. This enables query optimization (a query can be restricted to just the columns it needs, so you need not bring all the columns into memory and discard the unneeded ones) as well as storage optimization (encoding at the column level gives a much higher compression ratio). Another interesting feature is that Parquet can store nested Datasets, which can be leveraged in curated data lakes to store subject-based data. In addition to the ability to restrict column...
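The column-pruning behavior described above can be sketched with a short round trip through Parquet. This is a minimal example assuming Spark 2.x with spark-sql on the classpath and running in local mode; the file path, column names, and sample rows are illustrative, not from the book:

```scala
import org.apache.spark.sql.SparkSession

object ParquetRoundTrip {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ParquetRoundTrip")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Write a small DataFrame out in Parquet's columnar format.
    val people = Seq(("Alice", 29), ("Bob", 31), ("Carol", 25)).toDF("name", "age")
    people.write.mode("overwrite").parquet("/tmp/people.parquet")

    // Reading back only one column lets Parquet prune the other one on disk,
    // so the unused column is never brought into memory.
    val names = spark.read.parquet("/tmp/people.parquet").select("name")
    names.show()

    spark.stop()
  }
}
```

Because the schema travels with the data, any other Parquet-aware engine can read the same files without a separate schema definition.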

HBase


HBase is the NoSQL datastore in the Hadoop ecosystem, and integration with such a database is essential for Spark: it can read data from an HBase table or write to one. In fact, Spark supports HBase very well via the Hadoop dataset APIs, that is, the newAPIHadoopRDD and saveAsNewAPIHadoopDataset calls.

Tip

If you want to experiment with HBase, you can install a standalone local version of HBase, as described in http://hbase.apache.org/book.html#quickstart.

Before working through the examples, let's create a table with three records in HBase. For testing, you can install a local standalone version of HBase that runs against the local filesystem, so there is no need for Hadoop or HDFS; however, this setup is not suitable for production.

I created a test table with three records via the HBase shell.
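The same table and records could also be created programmatically. Here is a sketch using the HBase 1.x Java client API (contemporary with this book); the table name test, the column family cf, the qualifier a, and the row values are assumptions standing in for the shell session, and a local standalone HBase must be running:

```scala
import org.apache.hadoop.hbase.{HBaseConfiguration, HColumnDescriptor, HTableDescriptor, TableName}
import org.apache.hadoop.hbase.client.{ConnectionFactory, Put}
import org.apache.hadoop.hbase.util.Bytes

object CreateTestTable {
  def main(args: Array[String]): Unit = {
    // Picks up hbase-site.xml from the classpath; the defaults work
    // for a local standalone HBase.
    val conf = HBaseConfiguration.create()
    val connection = ConnectionFactory.createConnection(conf)
    try {
      // Create the table with a single column family, "cf".
      val admin = connection.getAdmin
      val tableName = TableName.valueOf("test")
      val descriptor = new HTableDescriptor(tableName)
      descriptor.addFamily(new HColumnDescriptor("cf"))
      admin.createTable(descriptor)

      // Insert three records, mirroring what the HBase shell session did.
      val table = connection.getTable(tableName)
      (1 to 3).foreach { i =>
        val put = new Put(Bytes.toBytes(s"row$i"))
        put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("a"), Bytes.toBytes(s"value$i"))
        table.put(put)
      }
      table.close()
    } finally {
      connection.close()
    }
  }
}
```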

Loading from HBase

The HBase test code in the Apache Spark examples is a good starting point for testing our HBase connectivity and loading data. The code is not that difficult, but we do need to keep track of the data types, that is, keys as bytes...
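The core of that Spark example boils down to a newAPIHadoopRDD call with HBase's TableInputFormat. The following sketch assumes HBase 1.x client jars on the classpath, a running local HBase, and a table named test (an assumption for illustration); note that both the key and the value come back byte-oriented, which is exactly the data-type bookkeeping mentioned above:

```scala
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.hadoop.hbase.util.Bytes
import org.apache.spark.{SparkConf, SparkContext}

object LoadFromHBase {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("LoadFromHBase").setMaster("local[*]"))

    // Point the Hadoop InputFormat at the HBase table we want to scan.
    val conf = HBaseConfiguration.create()
    conf.set(TableInputFormat.INPUT_TABLE, "test")

    // Each element is a (row key, Result) pair; both are raw bytes,
    // so we convert with Bytes.toString before collecting.
    val hbaseRDD = sc.newAPIHadoopRDD(conf, classOf[TableInputFormat],
      classOf[ImmutableBytesWritable], classOf[Result])

    hbaseRDD.map { case (key, _) => Bytes.toString(key.get()) }
      .collect()
      .foreach(println)

    sc.stop()
  }
}
```

Mapping to plain strings before collect also sidesteps the fact that ImmutableBytesWritable is not serializable.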

Summary


This chapter focused on the integration of Spark with other big data technologies. The Parquet format is an excellent way to expose data processed by Spark to external systems, and Impala makes this very easy. The advantage of the Parquet format is that it is very efficient in terms of storage and expressive enough to capture the schema. We also looked at the process of interfacing with HBase. Thus, we can have our cake and eat it too: we can leverage Spark for distributed, scalable data processing without losing the ability to integrate with other big data technologies. The next chapter, probably my favorite, is about machine learning; we will explore ML pipelines.
