As a data scientist, you are no doubt familiar with handling files and processing large amounts of data. However, as you will surely agree, doing anything more than a simple analysis over a single type of data requires a method of organizing and cataloguing data so that it can be managed effectively. Indeed, such organization is a cornerstone of great data science. As data volume and complexity increase, a consistent and robust approach can be the difference between generalized success and over-fitted failure!
This chapter is an introduction to an approach and ecosystem for achieving success with data at scale. It focuses on the data science tools and technologies. It introduces the environment, and how to configure it appropriately, but also explains some of the nonfunctional considerations relevant to the overall data architecture. While there is little actual data science at this stage, it provides the essential platform to pave the way for success in the rest of the book.
In this chapter, we will cover the following topics:
Data management responsibilities
Data architecture
Companion tools
Data management is of particular importance, especially when the data is in flux; either constantly changing or being routinely produced and updated. What is needed in these cases is a way of storing, structuring, and auditing data that allows for the continuous processing and refinement of models and results.
Here, we describe how best to store and organize your data to integrate with Apache Spark and related tools, within the context of a data architecture that is broad enough to fit everyday requirements.
Even if, in the medium term, you only intend to play around with a bit of data at home, without proper data management your efforts will more often than not escalate to the point where it is easy to lose track of where you are, and mistakes will happen. Taking the time to think about the organization of your data, and in particular its ingestion, is crucial. There's nothing worse than waiting for a long-running analytic to complete, collating the results, and producing a report, only to discover that you used the wrong version of the data, that the data is incomplete or has missing fields, or, even worse, that you deleted your results!
The bad news is that, despite its importance, data management is an area that is consistently overlooked in both commercial and non-commercial ventures, with precious few off-the-shelf solutions available. The good news is that it is much easier to do great data science using the fundamental building blocks that this chapter describes.
When we think about data, it is easy to overlook the true extent of the scope of the areas we need to consider. Indeed, most data "newbies" think about the scope in this way:
Obtain data
Place the data somewhere (anywhere)
Use the data
Throw the data away
In reality, there are a large number of other considerations, and it is our combined responsibility to determine which ones apply to a given piece of work. The following data management building blocks assist in answering or tracking some important questions about the data:
File integrity
Is the data file complete?
How do you know?
Was it part of a set?
Is the data file correct?
Was it tampered with in transit?
Data integrity
Is the data as expected?
Are all of the fields present?
Is there sufficient metadata?
Is the data quality sufficient?
Has there been any data drift?
Scheduling
Is the data routinely transmitted?
How often does the data arrive?
Was the data received on time?
Can you prove when the data was received?
Does it require acknowledgement?
Schema management
Is the data structured or unstructured?
How should the data be interpreted?
Can the schema be inferred?
Has the data changed over time?
Can the schema be evolved from the previous version?
Version Management
What is the version of the data?
Is the version correct?
How do you handle different versions of the data?
How do you know which version you're using?
Security
Is the data sensitive?
Does it contain personally identifiable information (PII)?
Does it contain personal health information (PHI)?
Does it contain payment card information (PCI)?
How should I protect the data?
Who is entitled to read/write the data?
Does it require anonymization/sanitization/obfuscation/encryption?
Disposal
How do we dispose of the data?
When do we dispose of the data?
If, after all that, you are still not convinced, then before you go ahead and write that bash script using the gawk and crontab commands, keep reading: you will soon see that there is a far quicker, more flexible, and safer method that allows you to start small and incrementally create commercial-grade ingestion pipelines!
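To make the file-integrity questions above concrete, here is a minimal sketch of verifying a received file against a checksum manifest using standard Linux tools; the file names and locations are illustrative only, and in practice the manifest would be shipped by the data producer alongside the data itself.

```shell
# Create an illustrative working area and simulate a received data file.
mkdir -p /tmp/ingest_demo
cd /tmp/ingest_demo
printf 'id,value\n1,42\n' > transactions.csv

# The producer would normally supply this manifest; we generate it here
# purely for the demonstration.
sha256sum transactions.csv > transactions.csv.sha256

# On receipt, verify the file arrived intact before processing it.
sha256sum -c transactions.csv.sha256
```

A mismatch at this stage is far cheaper to discover than after a long-running analytic has completed.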
Apache Spark is the emerging de facto standard for scalable data processing. At the time of writing this book, it is the most active Apache Software Foundation (ASF) project and has a rich variety of companion tools available. There are new projects appearing every day, many of which overlap in functionality. So it takes time to learn what they do and decide whether they are appropriate to use. Unfortunately, there's no quick way around this. Usually, specific trade-offs must be made on a case-by-case basis; there is rarely a one-size-fits-all solution. Therefore, the reader is encouraged to explore the available tools and choose wisely!
Various technologies are introduced throughout this book, and the hope is that they will provide the reader with a taste of some of the more useful and practical ones, to a level where they may start utilizing them in their own projects. Further, we hope to show that if the code is written carefully, technologies may be interchanged through clever use of Application Programming Interfaces (APIs) (or higher-order functions in Spark Scala), even when a decision later proves to be incorrect.
Let's start with a high-level introduction to data architectures: what they do, why they're useful, when they should be used, and how Apache Spark fits in.

At their most general, modern data architectures have four basic components:
Data Ingestion
Data Lake
Data Science
Data Access
Let's introduce each of these now, so that we can go into more detail in the later chapters.
Traditionally, data is ingested under strict rules and formatted according to a predetermined schema. This process is known as Extract, Transform, Load (ETL), and is still a very common practice supported by a large array of commercial tools as well as some open source products.

The ETL approach favors performing up-front checks, which ensure data quality and schema conformance, in order to simplify follow-on online analytical processing. It is particularly suited to handling data with a specific set of characteristics, namely, those that relate to a classical entity-relationship model. However, it is not suitable for all scenarios.
During the big data revolution, there was an explosion of demand for handling structured, semi-structured, and unstructured data, leading to the creation of systems that were required to handle data with a different set of characteristics. These came to be defined by the four Vs: Volume, Variety, Velocity, and Veracity (http://www.ibmbigdatahub.com/infographic/four-vs-big-data). While traditional ETL methods floundered under this new burden, because they simply required too much time to process the vast quantities of data, or were too rigid in the face of change, a different approach emerged: the schema-on-read paradigm. Here, data is ingested in its original form (or at least very close to it), and the details of normalization, validation, and so on are handled at the time of analytical processing.
This is typically referred to as Extract, Load, Transform (ELT), a reordering of the traditional acronym:

This approach values the delivery of data in a timely fashion, delaying the detailed processing until it is absolutely required. In this way, a data scientist can gain access to the data immediately, searching for insight using a range of techniques not available with a traditional approach.
Although we only provide a high-level overview here, this approach is so important that we will explore it further throughout the book by implementing various schema-on-read algorithms. We will assume the ELT method for data ingestion; that is to say, we encourage the loading of data at the user's convenience. This may be every n minutes, overnight, or during times of low usage. The data can then be checked for integrity, quality, and so forth by running batch processing jobs offline, again at the user's discretion.
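As a taste of what schema-on-read looks like in practice, the following sketch assumes a Spark 2.x SparkSession named spark and an illustrative HDFS path; the schema is inferred when the data is read, not enforced when it was ingested.

```scala
// Schema-on-read sketch: the raw JSON is loaded as-is, and Spark infers
// the schema at read time. The path below is illustrative.
val events = spark.read.json("hdfs:///data/raw/events/")

// Inspect what was inferred; validation and normalization can now be
// applied as offline batch steps, at the user's discretion.
events.printSchema()
```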
A data lake is a convenient, ubiquitous store of data. It is useful because it provides a number of key benefits, primarily:
Reliable storage
Scalable data processing capability
Let's take a brief look at each of these.
There is a good choice of underlying storage implementations for a data lake, including Hadoop Distributed File System (HDFS), MapR-FS, and Amazon AWS S3.
Throughout the book, HDFS will be the assumed storage implementation. Also, in this book the authors use a distributed Spark setup, deployed on Yet Another Resource Negotiator (YARN) running inside a Hortonworks HDP environment. Therefore, HDFS is the technology used, unless otherwise stated. If you are not familiar with any of these technologies, they are discussed further on in this chapter.
In any case, it's worth knowing that Spark references HDFS locations natively, accesses local file locations via the file:// prefix, and references S3 locations via the s3a:// prefix.
Clearly, Apache Spark will be our data processing platform of choice. In addition, as you may recall, Spark allows the user to execute code in their preferred environment, be that local, standalone, YARN, or Mesos, by configuring the appropriate cluster manager in the master URL. Incidentally, this can be done in any one of three places:
Using the --master option when issuing the spark-submit command
Adding the spark.master property in the conf/spark-defaults.conf file
Invoking the setMaster method on the SparkConf object
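For instance, the setMaster approach might look like the following sketch; the application name and master URL here are illustrative, and any of local, yarn, or a standalone spark:// URL could be supplied:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Set the cluster manager programmatically; this has the same effect as
// passing --master on the command line or setting spark.master in
// conf/spark-defaults.conf.
val conf = new SparkConf()
  .setAppName("MyApp")     // illustrative name
  .setMaster("local[*]")   // or "yarn", "spark://host:7077", and so on

val sc = new SparkContext(conf)
```

Note that a master URL set in code takes precedence over the command-line option and the properties file, so it is best reserved for local testing.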
If you're not familiar with HDFS, or if you do not have access to a cluster, then you can run a local Spark instance using the local filesystem, which is useful for testing. However, beware that there are often bad behaviors that only appear when executing on a cluster. So, if you're serious about Spark, it's worth investing in a distributed cluster manager: why not try Spark standalone cluster mode, or Amazon AWS EMR? Amazon offers a number of affordable paths to cloud computing; for example, you can explore the idea of spot instances at https://aws.amazon.com/ec2/spot/.
A data science platform provides services and APIs that enable effective data science to take place, including explorative data analysis, machine learning model creation and refinement, image and audio processing, natural language processing, and text sentiment analysis.
This is the area where Spark really excels and forms the primary focus of the remainder of this book, exploiting a robust set of native machine learning libraries, unsurpassed parallel graph processing capabilities and a strong community. Spark provides truly scalable opportunities for data science.
The remaining chapters will provide insight into each of these areas, including Chapter 6, Scraping Link-Based External Data, Chapter 7, Building Communities, and Chapter 8, Building a Recommendation System.
Data in a data lake is most frequently accessed by data engineers and scientists using the Hadoop ecosystem tools, such as Apache Spark, Pig, Hive, Impala, or Drill. However, there are times when other users, or even other systems, need access to the data and the normal tools are either too technical or do not meet the demanding expectations of the user in terms of real-world latency.
In these circumstances, the data often needs to be copied into data marts or index stores so that it may be exposed to more traditional methods, such as a report or dashboard. This process, which typically involves creating indexes and restructuring data for low-latency access, is known as data egress.
Fortunately, Apache Spark has a wide variety of adapters and connectors into traditional databases, BI tools, and visualization and reporting software. Many of these will be introduced throughout the book.
When Hadoop first started, the word Hadoop referred to the combination of HDFS and the MapReduce processing paradigm, as that was the outline of the original paper http://research.google.com/archive/mapreduce.html. Since that time, a plethora of technologies have emerged to complement Hadoop, and with the development of Apache YARN we now see other processing paradigms emerge such as Spark.
Hadoop is now often used as a colloquialism for the entire big data software stack and so it would be prudent at this point to define the scope of that stack for this book. The typical data architecture with a selection of technologies we will visit throughout the book is detailed as follows:

The relationship between these technologies is a dense topic, as there are complex interdependencies; for example, GeoMesa integrates with Spark and depends on Accumulo, which in turn depends on Zookeeper and HDFS! Therefore, in order to manage these relationships, there are platforms available, such as Cloudera or Hortonworks HDP (http://hortonworks.com/products/sandbox/). These provide consolidated user interfaces and centralized configuration. The choice of platform is left to the reader; however, it is not recommended to install a few of the technologies independently at first and then move to a managed platform, as the version conflicts encountered will be very complex to resolve. It is therefore usually easier to start with a clean machine and make a decision up front as to which direction to take.
All of the software we use in this book is platform-agnostic and therefore fits into the general architecture described earlier. It can be installed independently and is relatively straightforward to use in single-server or multi-server environments without a managed product.
In many ways, Apache Spark is the glue that holds these components together. It increasingly represents the hub of the software stack. It integrates with a wide variety of components but none of them are hard-wired. Indeed, even the underlying storage mechanism can be swapped out. Combining this feature with the ability to leverage different processing frameworks means the original Hadoop technologies effectively become components, rather than an imposing framework. The logical diagram of our architecture appears as follows:

As Spark has gained momentum and wide-scale industry acceptance, many of the original Hadoop implementations for various components have been refactored for Spark. Thus, to add further complexity to the picture, there are often several possible ways to programmatically leverage any particular component; not least the imperative and declarative versions depending upon whether an API has been ported from the original Hadoop Java implementation. We have attempted to remain as true as possible to the Spark ethos throughout the remaining chapters.
Now that we have established a technology stack, let's describe each of the components and explain why they are useful in a Spark environment. This part of the book is designed as a reference rather than a straight read; if you're already familiar with most of the technologies, then you can refresh your knowledge and move on to Chapter 2, Data Acquisition.
The Hadoop Distributed File System (HDFS) is a distributed filesystem with built-in redundancy. By default, it is optimized to work across three or more nodes (although one will work fine, and the replication factor is configurable), which provides the ability to store data in replicated blocks. So not only is a file split into a number of blocks, but three copies of those blocks exist at any one time. This cleverly provides data redundancy (if one copy is lost, two others still exist), but also enables data locality. When a distributed job is run against HDFS, the system will not only gather all of the blocks required as input to that job, it will also attempt to use only the blocks that are physically close to the server running that job; this reduces network bandwidth by reading only blocks on local storage, or on nearby nodes. This is achieved in practice by allocating HDFS physical disks to nodes, and nodes to racks; blocks are written in a node-local, rack-local, and cluster-local manner. All instructions to HDFS are passed through a central server called the NameNode, which introduces a possible single point of failure; there are various methods for providing NameNode redundancy.
Furthermore, in a multi-tenanted HDFS scenario, where many processes are accessing the same file at the same time, load balancing can also be achieved through the use of multiple blocks; for example, if a file takes up one block, this block is replicated three times and, therefore, potentially can be read from three different physical locations concurrently. Although this may not seem like a big win, on clusters of hundreds or thousands of nodes the network IO is often the single most limiting factor to a running job–the authors have certainly experienced times on multi-thousand node clusters where jobs have had to wait hours to complete purely because the network bandwidth has been maxed out due to the large number of other threads calling for data.
If you are running a laptop, require data to be stored locally, or wish to use the hardware you already have, then HDFS is a good option.
The following are the advantages of using HDFS:
Redundancy: Configurable replication of blocks provides tolerance for node and disk failure
Load balancing: Block replication means the same data can be accessed from different physical locations
Data locality: Analytics try to access the closest relevant physical block, reducing network IO
Data balance: An algorithm is available to rebalance the data blocks as they become too clustered or fragmented
Flexible storage: If more space is needed, further disks and nodes can be added; although this is not a hot process, the cluster will require outage to add these resources
Additional costs: No third-party costs are involved
Data encryption: Implicit encryption (when turned on)
The following are the disadvantages:
The NameNode provides for a central point of failure; to mitigate this, there are secondary and high availability options available
A cluster requires basic administration and potentially some hardware effort
To use HDFS, we should decide whether to run Hadoop in a local, pseudo-distributed or fully-distributed manner; for a single server, pseudo-distributed is useful as analytics should translate directly from this machine to any Hadoop cluster. In any case, we should install Hadoop with at least the following components:
NameNode
Secondary NameNode (or High Availability NameNode)
DataNode
Hadoop can be downloaded from http://hadoop.apache.org/releases.html.
Spark needs to know the location of the Hadoop configuration, specifically the hdfs-site.xml and core-site.xml files. This is done by setting the HADOOP_CONF_DIR parameter in your Spark configuration.
HDFS will then be available natively, so a file such as hdfs:///user/local/dir/text.txt can be addressed in Spark simply as /user/local/dir/text.txt.
S3 abstracts away all of the issues related to parallelism, storage restrictions, and security, allowing very large parallel read/write operations along with a great Service Level Agreement (SLA) for a very small cost. This is perfect if you need to get up and running quickly, can't store data locally, or don't know what your future storage requirements might be. It should be recognized that s3n and s3a utilize an object storage model, not file storage, and therefore there are some compromises:
Eventual consistency, whereby changes made by one application (creations, updates, and deletions) will not be visible until some undefined time; although most AWS regions now support read-after-write consistency.
s3n and s3a utilize nonatomic rename and delete operations; therefore, renaming or deleting large directories takes time proportional to the number of entries. Furthermore, target files can remain visible to other processes during this time, indeed until the eventual consistency has been resolved.
S3 can be accessed through command-line tools such as s3cmd, via a web page, and via APIs for most popular languages; it has native integration with Hadoop and Spark through basic configuration.
The following are the advantages:
Infinite storage capacity
No hardware considerations
Encryption available (user stored keys)
99.9% availability
Redundancy
The following are the disadvantages:
Cost to store and transfer data
No data locality
Eventual consistency
Relatively high latency
You can create an AWS account at https://aws.amazon.com/free/. Through this account, you will have access to S3; you will simply need to create some credentials.
The current S3 standard is s3a; to use it with Spark requires some changes to the Spark configuration:
spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem
spark.hadoop.fs.s3a.access.key=MyAccessKeyID
spark.hadoop.fs.s3a.secret.key=MySecretKey
If using HDP, you may also need:
spark.driver.extraClassPath=${HADOOP_HOME}/extlib/hadoop-aws-currentversion.jar:${HADOOP_HOME}/ext/aws-java-sdk-1.7.4.jar
All S3 files will then be accessible within Spark by applying the s3a:// prefix to the S3 object reference:
val rdd = spark.sparkContext.textFile("s3a://user/dir/text.txt")
We can also supply the AWS credentials inline, assuming that we have set spark.hadoop.fs.s3a.impl:
spark.sparkContext.textFile("s3a://AccessID:SecretKey@user/dir/file")
However, this method will not accept the forward-slash character (/) in either of the keys. This is usually solved by obtaining another key pair from AWS (keep generating new ones until there are no forward-slashes present).
We can also browse the objects through the web interface located under the S3 tab in your AWS account.
Apache Kafka is a distributed message broker written in Scala and available under the Apache Software Foundation license. The project aims to provide a unified, high-throughput, low-latency platform for handling real-time data feeds. The result is essentially a massively scalable publish-subscribe message queue, making it highly valuable in enterprise infrastructures for processing streaming data.
The following are the advantages:
Publish-subscribe messaging
Fault-tolerant
Guaranteed delivery
Replay messages on failure
Highly-scalable, shared-nothing architecture
Supports back pressure
Low latency
Good Spark-streaming integration
Simple for clients to implement
The following are the disadvantages:
At-least-once semantics: exactly-once messaging cannot be provided due to the lack of a transaction manager (as yet)
Requires Zookeeper for operation
As Kafka is a pub-sub tool, its purpose is to manage messages (from publishers) and direct them to the relevant endpoints (subscribers). This is done using a broker, which is installed when implementing Kafka. Kafka is available through the Hortonworks HDP platform, or can be installed independently from http://kafka.apache.org/downloads.html.
Kafka uses Zookeeper to manage leadership election (as Kafka can be distributed, thus allowing for redundancy). The quick start guide found at the preceding link can be used to set up a single-node Zookeeper instance; it also provides a simple producer and consumer for publishing and subscribing to topics, which are the mechanism for message handling.
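As a taster, the single-node quick start boils down to commands along the following lines, run from the Kafka installation directory; the topic name is illustrative, and script names and flags may differ slightly between Kafka versions:

```shell
# Start a local Zookeeper instance, then a single Kafka broker.
bin/zookeeper-server-start.sh config/zookeeper.properties &
bin/kafka-server-start.sh config/server.properties &

# Create a topic with a single partition and no replication.
bin/kafka-topics.sh --create --zookeeper localhost:2181 \
  --replication-factor 1 --partitions 1 --topic test

# Publish messages from stdin, and consume them from the beginning.
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic test --from-beginning
```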
Since the inception of Hadoop, the idea of columnar-based formats (as opposed to row-based) has been gaining increasing support. Parquet was developed to take advantage of compressed, efficient columnar data representation and is designed with complex nested data structures in mind, taking its lead from the algorithms discussed in Google's Dremel paper (http://research.google.com/pubs/pub36632.html). Parquet allows compression schemes to be specified at a per-column level, and is future-proofed to allow more encodings to be added as they are implemented. It has also been designed to provide compatibility throughout the Hadoop ecosystem and, like Avro, stores the data schema with the data itself.
The following are the advantages:
Columnar storage
Highly storage efficient
Per column compression
Supports predicate pushdown
Supports column pruning
Compatible with other formats, for example, Avro
Read efficient, designed for partial data retrieval
The following are the disadvantages:
Not good for random access
Potentially computationally intensive for writes
Apache Avro is a data serialization framework originally developed for Hadoop. It uses JSON for defining data types and protocols (although there is an alternative IDL), and serializes data in a compact binary format. Avro provides both a serialization format for persistent data and a wire format for communication between Hadoop nodes, and from client programs to the Hadoop services. Another useful feature is its ability to store the data schema along with the data itself, so any Avro file can always be read without referencing external sources. Further, Avro supports schema evolution, so files written with an older schema version can be read with a newer one, providing backwards compatibility.
The following are the advantages:
Schema evolution
Disk space savings
Supports schemas in JSON and IDL
Supports many languages
Supports compression
The following are the disadvantages:
Requires a schema in order to read and write data
Serialization computationally heavy
As we are using Scala, Spark, and Maven environments in this book, Avro can be imported as follows:
<dependency>
    <groupId>org.apache.avro</groupId>
    <artifactId>avro</artifactId>
    <version>1.7.7</version>
</dependency>
It is then a matter of creating a schema and producing the Scala code to write data to Avro using the schema. This is explained in detail in Chapter 3, Input Formats and Schema.
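To give a flavor, a minimal, hypothetical Avro schema might look like the following; the record name and fields are purely illustrative. The default on the currency field is what makes schema evolution possible: older files written before that field existed can still be read under this newer schema, with the default filled in.

```json
{
  "type": "record",
  "name": "Transaction",
  "namespace": "com.example.avro",
  "fields": [
    {"name": "id", "type": "long"},
    {"name": "amount", "type": "double"},
    {"name": "currency", "type": "string", "default": "GBP"}
  ]
}
```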
Apache NiFi originated from the United States National Security Agency (NSA), which released it to open source in 2014 as part of its Technology Transfer Program. NiFi enables the production of scalable directed graphs of data routing and transformation within a simple user interface. It also supports data provenance, a wide range of prebuilt processors, and the ability to build new processors quickly and efficiently. It has prioritization, tunable delivery tolerances, and back-pressure features included, which allow the user to tune processors and pipelines for specific requirements, even allowing flow modification at runtime. All of this adds up to an incredibly flexible tool for building everything from one-off file download data flows through to enterprise-grade ETL pipelines. It is generally quicker to build a pipeline and download files with NiFi than to write even a quick bash script; add in its feature-rich processors, and it makes for a compelling proposition.
The following are the advantages:
Wide range of processors
Hub and spoke architecture
Graphical User Interface (GUI)
Scalable
Simplifies parallel processing
Simplifies thread handling
Allows runtime modifications
Redundancy through clusters
The following are the disadvantages:
No cross-cutting error handler
Expression language is only partially implemented
Flowfile version management lacking
Apache NiFi can be installed with Hortonworks and is known as Hortonworks Dataflow. It is also available as a standalone install from Apache, https://nifi.apache.org/. There is an introduction to NiFi in Chapter 2, Data Acquisition.
YARN is the principal component of Hadoop 2.0, which essentially allows Hadoop to plug in processing paradigms rather than being limited to just the original MapReduce. YARN consists of three main components: the ResourceManager, the NodeManager, and the ApplicationMaster. It is out of the scope of this book to dive into YARN; the main thing to understand is that if we are running a Hadoop cluster, then our Spark jobs can be executed using YARN in client mode, as follows:
spark-submit --class package.Class \
    --master yarn \
    --deploy-mode client [options] <app jar> [app options]
The following are the advantages:
Supports Spark
Supports prioritized scheduling
Supports data locality
Job history archive
Works out of the box with HDP
YARN is installed as part of Hadoop; this could either be Hortonworks HDP, Apache Hadoop, or one of the other vendors. In any case, we should install Hadoop with at least the following components:
ResourceManager
NodeManager (1 or more)
To ensure that Spark can use YARN, it simply needs to know the location of yarn-site.xml, which is set using the YARN_CONF_DIR parameter in your Spark configuration.
Lucene is an indexing and search library, originally written in Java but since ported to several other languages, including Python. Lucene has spawned a number of subprojects in its time, including Mahout, Nutch, and Tika. These have now become top-level Apache projects in their own right, while Solr has more recently joined as a subproject. Lucene has comprehensive capabilities, but is particularly known for its use in question-answering search engines and information-retrieval systems.
The following are the advantages:
Highly efficient full-text searches
Scalable
Multilanguage support
Excellent out-of-the-box functionality
Lucene can be downloaded from https://lucene.apache.org/ if you wish to learn more and interact with the library directly.
When utilizing Lucene, we only really need to include lucene-core-<version>.jar in our project. For example, when using Maven:
<dependency>
    <groupId>org.apache.lucene</groupId>
    <artifactId>lucene-core</artifactId>
    <version>6.1.0</version>
</dependency>
Kibana is an analytics and visualization platform that also provides charting and streaming data summarization. It uses Elasticsearch as its data source (which in turn uses Lucene), and can therefore leverage very powerful search and indexing capabilities at scale. Kibana can be used to visualize data in many different ways, including bar charts, histograms, and maps. Kibana is mentioned only briefly in this chapter, but it will be used extensively throughout the book.
The following are the advantages:
Visualize data at scale
Intuitive interface to quickly develop dashboards
The following are the disadvantages:
Only integrates with Elasticsearch
Kibana releases are tied to specific Elasticsearch versions
Kibana can easily be installed as a standalone piece, since it has its own web server. It can be downloaded from https://www.elastic.co/downloads/kibana. As Kibana requires Elasticsearch, this will also need to be installed; see the preceding link for more information. The Kibana configuration is handled in config/kibana.yml; if you have installed a standalone version of Elasticsearch, then no changes are required and it will work out of the box!
Elasticsearch is a web-based search engine based on Lucene (see the previous section). It provides a distributed, multitenant-capable, full-text search engine with schema-free JSON documents. It is built in Java but can be utilized from any language thanks to its HTTP web interface. This makes it particularly useful for serving data-intensive results that are to be displayed via web pages.
The disadvantages are as follows:
Unable to perform distributed transactions
Lack of frontend tooling
Elasticsearch can be installed from https://www.elastic.co/downloads/elasticsearch. To provide access to its REST API from Spark, we can import the Maven dependency:
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch-spark_2.10</artifactId>
    <version>2.2.0-m1</version>
</dependency>
There is also a great tool to help with administering Elasticsearch content: search for the Chrome extension Sense at https://chrome.google.com/webstore/category/extensions, with a further explanation at https://www.elastic.co/blog/found-sense-a-cool-json-aware-interface-to-elasticsearch. Alternatively, it is available for Kibana at https://www.elastic.co/guide/en/sense/current/installing.html.
Accumulo is a NoSQL database based on Google's Bigtable design. It was originally developed by the United States National Security Agency and released to the Apache community in 2011. Accumulo offers the usual big data advantages, such as bulk loading and parallel reading, but also some additional capabilities: iterators for efficient server-side and client-side pre-computation, data aggregation and, most importantly, cell-level security. The security aspect of Accumulo makes it very useful for enterprise usage, as it enables flexible security in a multitenant environment. Accumulo is powered by Apache Zookeeper, in the same way as Kafka, and also leverages Apache Thrift (https://thrift.apache.org/), which enables a cross-language Remote Procedure Call (RPC) capability.
The advantages are as follows:
Pure implementation of Google Bigtable
Cell level security
Scalable
Redundancy
Provides iterators for server-side computation
The disadvantages are as follows:
Zookeeper not universally popular with DevOps
Not always the most efficient choice for bulk relational operations
Accumulo can be installed as part of the Hortonworks HDP release, or may be installed as a standalone instance from https://accumulo.apache.org/. The instance should then be configured using the installation documentation, at the time of writing https://accumulo.apache.org/1.7/accumulo_user_manual#_installation.
In Chapter 7, Building Communities, we demonstrate the use of Accumulo with Spark, along with some of its more advanced features, such as iterators and InputFormats. We also show how to transfer data between Elasticsearch and Accumulo.
In this chapter, we introduced the idea of data architecture and explained how to group responsibilities into capabilities that help manage data throughout its lifecycle. We explained that all data handling requires a level of due diligence, whether this is enforced by corporate rules or otherwise, and without this, analytics and their results can quickly become invalid.
Having scoped our data architecture, we have walked through the individual components and their respective advantages/disadvantages, explaining that our choices are based upon collective experience. Indeed, there are always options when it comes to choosing components and their individual features should always be carefully considered before any commitment.
In the next chapter, we will dive deeper into how to source and capture data. We will advise on how to bring data onto the platform and discuss aspects related to processing and handling data through a pipeline.