This chapter covers Impala, its core components, and its inner workings in detail. We will cover the Impala architecture, including the Impala daemon, statestore, and execution model, and how they interact with each other and with other components. Impala metadata and the metastore are also discussed, to understand how Impala maintains its information. Finally, we will study the various ways to interface with Impala.
The objective of this chapter is to provide enough information for you to kick-start Impala on a single-node experimental cluster or a multinode production cluster. This chapter covers the Impala essentials within the following broad categories:
Impala architecture and execution
Impala is for a new breed of data wranglers who want to process data at lightning-fast speed using traditional SQL knowledge. Impala gives data analysts and scientists a way to access data stored on Hadoop at lightning speed by directly using SQL or other Business Intelligence tools. Impala uses the Hadoop storage layer, HDFS, to process the data, so there is no need to migrate data from Hadoop to any other middleware, specialized system, or data warehouse. Impala provides data wranglers a Massively Parallel Processing (MPP) query engine that runs natively on Hadoop.
Native on Hadoop means the engine runs on Hadoop and uses the Hadoop core component, HDFS, along with other additional components, such as Hive and HBase. To process data, Impala has its own execution component, which runs on each DataNode where the data is stored in blocks. There is a list of third-party applications that can directly process data stored on Hadoop through Impala. The biggest advantage of Impala is that data transformation or data movement is not required for data stored on Hadoop. No data movement means all the processing is happening where the data resides in the cluster. In other distributed systems, data is transferred over the network before it is processed; however, with Impala the processing happens at the place where data is stored, which is one of the premier reasons why Impala is very fast in comparison to other large data processing systems.
Before we learn more about Impala, let's see what the key Impala features are:
First and foremost, Impala is 100% open source under the Apache license
Impala is a native MPP engine, running on the Cloudera Hadoop distribution
Impala supports in-memory processing for data through SQL-like queries
Impala uses Hadoop Distributed File System (HDFS) and HBase
Impala supports integration with leading Business Intelligence tools, such as Tableau, Pentaho, Microstrategy, Zoomdata, and so on
Impala supports a wide variety of input file formats, such as regular text files, files in CSV/TSV or other delimited formats, sequence files, Avro, RCFile, LZO, and Parquet
For third-party application connectivity, Impala supports the ODBC driver, SQL-like syntax, and the Beeswax GUI (in Apache Hue) from Apache Hive
Impala uses Kerberos authentication and role-based authorization with Sentry
The key benefits of using Impala are:
Impala uses Hive to read a table's metadata; however, it processes the data with its own distributed execution engine, which makes data processing very fast. So the very first benefit of using Impala is super-fast access to data from HDFS.
Impala uses a SQL-like syntax to interact with data, so you can leverage the existing BI tools to interact with data stored on Hadoop. The engineers with SQL expertise can benefit from Impala as they do not need to learn new languages and skills. Additionally, Impala offers higher performance and execution speed.
While running on Hadoop, Impala leverages the Hadoop file and data format, metadata, resource management, and security, all available on Hadoop.
As Impala interacts directly with the data stored in Hadoop, it preserves the full fidelity of the data while analyzing it; nothing is lost to aggregation or conformance to fixed schemas.
Impala performs interactive analysis directly on the data stored on Hadoop DataNodes without requiring data movement, which results in lightning-fast query results, because there are no network bottlenecks and no time is spent moving data.
Impala provides a single repository and metadata store from source to analysis, which enables more users to interact with a large amount of data. The presence of a single repository also reduces data movement, which helps in performing interactive analysis directly on full fidelity data.
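Because Impala speaks SQL, an analyst's existing queries carry over largely unchanged. The following is a minimal illustration with a hypothetical table name; it assumes a running Impala cluster reachable through impala-shell or a BI tool:

```sql
-- Hypothetical example: a typical analyst query runs unchanged on Impala.
-- Assumes a table named web_logs already exists in the metastore.
SELECT status_code, COUNT(*) AS hits
FROM web_logs
WHERE request_date >= '2013-01-01'
GROUP BY status_code
ORDER BY hits DESC
LIMIT 10;
```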
Red Hat Enterprise Linux 5.7/6.2/6.4
SLES 11 with SP 1 or newer
As Impala runs on Hadoop, it is also important to discuss the supported Hadoop version. At the time of writing this book, Impala was supported on the following Hadoop distributions:
Impala 1.1 and 1.1.1: Cloudera Hadoop CDH 4.1 or later
Impala 0.7 and older: Cloudera Hadoop CDH 4.1 only
Besides CDH, Impala can run on other Hadoop distributions by compiling the source code and then configuring it correctly as required.
Depending on the latest version of Impala, requirements might change, so please visit the Cloudera Impala website for updated information.
Even though the common perception is that Impala needs Hive to function, this is not completely true. The fact is that only the Hive metastore is required for Impala to function, and Hive can be installed on some other client machine. Hive doesn't need to be installed on the same DataNodes where Impala is installed; as long as Impala can access the Hive metastore, Impala will function as expected. In brief, the Hive metastore stores table- and partition-specific information, which is also called metadata.
For those who don't know, Impala is written in C++. However, Impala uses Java to communicate with various Hadoop components. In Impala, the impala-dependencies.jar file, located at /usr/lib/impala/lib, includes all the required Java dependencies. The Oracle JVM is the officially supported JVM for Impala; other JVMs might cause problems while running Impala.
The source datasets processed by Impala, along with join operations, could be very large, and because processing is done in memory, as an Impala user you must make sure that you have sufficient memory to process the join operations. The memory requirement is driven by the source datasets you are going to process through Impala. Note that Impala cannot run queries whose working set is greater than the maximum available RAM; if memory is not sufficient, Impala will not be able to process the query and the query will be canceled.
For best performance with Impala, it is suggested to have DataNodes with multiple storage disks, because disk I/O speed is often the bottleneck for Impala performance. The total physical storage requirement is based on the source data you want to process with Impala.
As Impala uses the SSE4.2 CPU instruction set, which is found mostly in recent processors, the latest processors are suggested for better performance with Impala.
Impala daemons running on DataNodes can process data stored on local nodes as well as on remote nodes. To achieve the highest performance, Impala attempts to complete data processing on local data rather than reading remote data over a network connection. To achieve local data processing, Impala matches the hostname provided to each Impala daemon with the IP address of each DataNode by resolving the hostname flag to an IP address. For Impala to work with the local data stored on a DataNode, you must use a single IP interface for the DataNode and run an Impala daemon on each machine. Since there is a single IP address, make sure that each Impala daemon's hostname flag resolves to the IP address of its DataNode.
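A quick way to sanity-check this resolution is with getent. The sketch below resolves the always-available name localhost so it is self-contained; on a real DataNode you would resolve the value passed to the impalad hostname flag and compare it against the node's own interface address:

```shell
# Resolve a hostname to an IPv4 address, much as the Impala daemon does
# with its hostname flag. "localhost" is used here only so the example
# works anywhere; substitute your DataNode's hostname on a real cluster.
resolved=$(getent ahostsv4 localhost | awk '{print $1; exit}')
echo "localhost resolves to $resolved"
```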
When Impala is installed, a user name impala and group name impala are created, and Impala uses this user and group for its lifetime after installation. You must ensure that no one changes the impala group and user settings, and that no other application or system activity obstructs the functionality of the impala user and group. To achieve the highest performance, Impala uses direct reads, and because a root user cannot do direct reads, Impala is not executed as root. To get full performance from Impala, you must make sure that Impala is not running as the root user.
As Impala is designed and developed to run on the Cloudera Hadoop distribution, there are two different ways Impala can be installed on supported Cloudera Hadoop distributions. Both installation methods are described in a nutshell, as follows.
Cloudera Manager is only available for the Cloudera Hadoop distribution. The biggest advantage of installing Impala using Cloudera Manager is that most of the complex configuration is taken care of by Cloudera Manager and applied to all dependent applications where applicable. Cloudera Manager has various versions available; however, to support a specific Impala version, you must have the proper Cloudera Manager version for a successful installation.
Once the previously described requirements are met, you can use Cloudera Manager to install Impala. Depending on the Cloudera Manager version, you can install specific Impala versions. For example, to install Impala version 1.1.1 you would need Cloudera Manager 4.7 or a higher version, which supports all its features, including the auditing feature introduced in Impala 1.1.1. Just use the Cloudera Manager UI to install Impala from the list and follow the instructions as they appear. As shown in the following Cloudera Manager UI screenshot, I have Impala 1.1.1 installed; however, I can upgrade to Impala 1.2.1 just by using Cloudera Manager.
If you decide to install Impala on your own in your Cloudera Hadoop cluster, you must make sure that basic Impala requirements are met and necessary components are already installed. First you must have the correct version of the Cloudera Hadoop cluster ready depending on your Impala version, and have the Hive metastore installed either using MySQL or PostgreSQL.
Once you have made sure that the Hive metastore is available in your Cloudera Hadoop cluster, you can start the Impala installation to all DataNodes as follows:
Make sure that you have Cloudera public repo set in your OS, so Impala specific packages can be downloaded and installed on your machine. If you do not have the Cloudera specific public repo set, please visit the Cloudera website to get your OS specific information.
After that, you will need to install the following three packages on your machine:
Finally, copy the Hadoop configuration files, such as hdfs-site.xml, to the /etc/impala/conf folder, which is the Impala configuration folder.
Impala is also compiled and tested to run on the MapR Hadoop distribution, so if you are interested in running Impala on MapR, please visit the following link:
After Impala is installed, you must perform a few mandatory and recommended configuration settings for smooth Impala operations. Cloudera Manager does some of the configurations automatically; however, a few of them need to be completed after any kind of installation. The following is a list of post-installation configurations:
On Cloudera Hadoop CDH 4.2 or newer distribution, the user must enable short-circuit reads on each DataNode, after each type of installation. To enable short-circuit reads, here are the steps to follow on your Cloudera Hadoop cluster:
Configure hdfs-site.xml on each DataNode as follows:
<property>
  <name>dfs.client.read.shortcircuit</name>
  <value>true</value>
</property>
<property>
  <name>dfs.domain.socket.path</name>
  <value>/var/run/hadoop-hdfs/dn._PORT</value>
</property>
<property>
  <name>dfs.client.file-block-storage-locations.timeout</name>
  <value>3000</value>
</property>
If /var/run/hadoop-hdfs/ is group writable, make sure its group is root.
Restart all DataNodes.
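Before restarting the DataNodes, it can be worth verifying that the properties actually made it into the file. The sketch below runs the check against a temporary sample file so it is self-contained; on a real node you would point CONF at the hdfs-site.xml you edited:

```shell
# Check that hdfs-site.xml enables short-circuit reads. A sample file is
# generated here for illustration; on a DataNode set CONF to the real path,
# for example /etc/hadoop/conf/hdfs-site.xml.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
<property>
  <name>dfs.client.read.shortcircuit</name>
  <value>true</value>
</property>
EOF
if grep -A1 'dfs.client.read.shortcircuit' "$CONF" | grep -q '<value>true</value>'; then
  status="enabled"
else
  status="disabled"
fi
echo "short-circuit reads: $status"
rm -f "$CONF"
```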
Cloudera Manager enables "block location tracking" and "native checksumming" for optimum performance; however, for independent installation both of these have to be enabled. Enabling block location metadata allows Impala to know on which disk data blocks are located, allowing better utilization of the underlying disks. Both "block location tracking" and "native checksumming" are described in later chapters for better understanding. Here is what you can do to enable block location tracking:
hdfs-site.xmlon each DataNode must have the following setting:
<property>
  <name>dfs.datanode.hdfs-blocks-metadata.enabled</name>
  <value>true</value>
</property>
Make sure the updated hdfs-site.xml file is placed in the Impala configuration folder at /etc/impala/conf.
Restart all DataNodes.
Enabling native checksumming causes Impala to use an optimized native library for computing checksums, if that library is available. If Impala is installed using Cloudera Manager, native checksumming is automatically configured and no action is needed. However, if you need to enable native checksumming on a self-installed Impala cluster, you must build and install the libhadoop.so Hadoop Native Library. If this library is not available, you might see the message Unable to load native-hadoop library for your platform... using built-in-java classes where applicable in the Impala logs, indicating that native checksumming is not enabled.
If you have used Cloudera Manager to install Impala, then you can use the Cloudera Manager UI to start and shut down Impala. However, those who installed Impala directly need to start at least one instance of Impala-state-store and start Impala on all DataNodes where it is installed. In this scenario, you can either use the init scripts or start the statestore and Impala directly. Impala uses Impala-state-store to run in distributed mode. Impala-state-store helps Impala achieve the best performance; however, if the statestore becomes unavailable, Impala continues to function.
To start the Impala-state-store, use the following command:
$ sudo service impala-state-store start
To start Impala on each DataNode, use the following command:
$ sudo service impala-server start
Impala-state-store and Impala server-specific init scripts are located at /etc/default/impala, which can be edited if necessary, for example when you want to automate these services or start them depending on certain conditions.
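The defaults file is a plain shell fragment sourced by the init scripts. The entries below are only illustrative of what such a file typically contains; the exact variable names and values depend on your Impala version and cluster layout, so treat them as assumptions to verify against your own installation:

```shell
# Illustrative /etc/default/impala entries (assumed values, not
# authoritative): where the statestore lives and where logs are written.
IMPALA_STATE_STORE_HOST=127.0.0.1
IMPALA_STATE_STORE_PORT=24000
IMPALA_LOG_DIR=/var/log/impala
```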
Upgrading Impala from an older to a newer version is similar to other application upgrades on Linux machines. Upgrading Impala requires stopping the currently running Impala services, upgrading the packages, adding extra configuration if needed, and finally restarting the Impala services. Here we will learn how to upgrade Impala services depending on our initial installation method.
First remove all the Impala-related packages.
Connect to the Cloudera Manager Admin Console.
Click on Download.
Click on Distribute.
Click on Activate.
Connect to the Cloudera Manager Admin Console.
Click on Actions.
Click on Stop.
Make sure to update hadoop-lzo-cdh4 if it is already installed.
Update Impala shell on each node on which it is installed.
Connect to the Cloudera Manager Admin console.
In the Services tab, click on the Impala service.
Click on Actions and then on Start.
Validate if any update-specific configuration is needed and, if so, please apply that configuration.
Update the Impala-server and Impala shell packages using the appropriate update commands for your Linux OS. Depending on your Linux OS and Impala package types, these are, for example, yum on RedHat/CentOS Linux and apt-get on Ubuntu/Debian Linux.
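As a sketch, the distribution-appropriate command can be chosen by probing for the package manager. The package names below (impala-server, impala-shell) are the ones used in the Cloudera repositories; the command is only printed, not executed, so the sketch stays side-effect free:

```shell
# Pick the upgrade command for this machine's package manager and print it.
if command -v yum >/dev/null 2>&1; then
  upgrade_cmd="sudo yum update impala-server impala-shell"
elif command -v apt-get >/dev/null 2>&1; then
  upgrade_cmd="sudo apt-get install impala-server impala-shell"
else
  upgrade_cmd="echo 'unsupported package manager'"
fi
echo "$upgrade_cmd"
```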
In this section we will first learn about various important components of Impala and then discuss the intricate details on Impala inner workings. Here, we will discuss the following important components:
Impala metadata and metastore
Putting together the above components with Hadoop and an application or command line interface, we can conceptualize them as seen in the following figure:
At the core of Impala, there exists the Impala daemon, which runs on each DataNode where Impala is installed. The Impala daemon is represented by an actual process named impalad. This Impala daemon process impalad is responsible for processing the queries, which are submitted through Impala shell, API, and other third-party applications connected through ODBC/JDBC connectors or Hue.
A query can be submitted to any impalad running on any node, and that particular node serves as a "coordinator node" for that query. Multiple queries are served by impalad running on other nodes as well. After accepting the query, impalad reads and writes to data files and parallelizes the queries by distributing the work to other Impala nodes in the Impala cluster. When queries are processing on various impalad instances, all impalad instances return the result to the central coordinator node. Depending on your requirement, queries can be submitted to a dedicated impalad or in a load balanced manner to another impalad in your cluster.
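For example, impala-shell's -i flag selects which impalad a session connects to, and that daemon then acts as the coordinator for the session's queries. The hostname below is hypothetical and 21000 is the default impalad client port; the sketch only assembles the command string rather than contacting a cluster:

```shell
# Build (but do not run) an impala-shell invocation targeting a specific
# impalad as coordinator. Hostname and table name are illustrative.
coordinator="datanode3.example.com:21000"
cmd="impala-shell -i $coordinator -q 'SELECT COUNT(*) FROM sales'"
echo "$cmd"
```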
Impala has another important component called the Impala statestore, which is responsible for checking the health of each impalad and then frequently relaying each Impala daemon's health to the other daemons. The Impala statestore is a single running process and can run on the same node as an Impala server or on any other node within the cluster. The name of the Impala statestore daemon process is statestored. Every Impala daemon process interacts with the statestore process, providing its latest health status, and this information is relayed to each and every Impala daemon in the cluster so they can make correct decisions before distributing queries to a specific impalad. In the event of a node failure, statestored updates all other nodes about the failure, and once such a notification reaches the other impalad instances, no Impala daemon assigns any further queries to the affected node.
One important thing to note here is that even though the Impala statestore provides critical updates about nodes in trouble, the process itself is not critical to Impala execution. If the Impala statestore becomes unavailable, the rest of the nodes continue working as usual; the cluster just becomes less robust. When the statestore comes back online, it restarts communicating with each node and resumes its normal process.
Another important component of Impala is its metadata and metastore. Impala uses a traditional MySQL or PostgreSQL database to store table definitions. While other databases can also be used for the Hive metastore, either MySQL or PostgreSQL is recommended. The important details, such as table and column information and table definitions, are stored in a centralized database known as a metastore. Apache Hive shares the same database for its metastore, which is why Impala can access tables created or loaded by Hive, as long as all the table columns use supported data types, data formats, and data compression types.
Besides that, Impala also maintains information about the data files stored on HDFS. Impala tracks file metadata, that is, the physical location of the blocks that make up the data files in HDFS. Each Impala node caches all of this metadata locally, which can expedite the process of gathering metadata for a large amount of data distributed across multiple DataNodes. When dealing with an extremely large amount of data and/or many partitions, gathering table-specific metadata could take a significant amount of time, so a locally stored metadata cache helps in providing such information instantly.
When a table definition or table data is updated, other Impala daemons must update their metadata cache by retrieving the latest metadata before issuing a new query against the table in question. Impala uses the REFRESH statement when new data files are added to an existing table. Another statement, INVALIDATE METADATA, is used when a new table is added or an existing table is dropped. The same INVALIDATE METADATA statement is also used when data files are removed from HDFS or an HDFS rebalance operation is initiated to balance data blocks.
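The distinction can be sketched with a hypothetical table name; these statements assume a running cluster and an existing table called logs:

```sql
-- After appending new data files to an existing table's HDFS directory:
REFRESH logs;

-- After creating or dropping a table outside this impalad's knowledge
-- (for example, through Hive), or after removing files from HDFS:
INVALIDATE METADATA;
```

REFRESH is the cheaper of the two, since it reloads file metadata for one table rather than discarding the entire cache.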
Command-line interface through Impala shell
Web interface through Apache Hue
Third-party application interface through ODBC/JDBC
The Impala daemon process is configured to listen for incoming requests from the previously described interfaces via several ports. The command-line interface and the web-based interface share the same port; however, JDBC and ODBC use different ports to listen for incoming requests. ODBC- and JDBC-based connectivity adds extensibility to Impala running in a Linux environment: using ODBC and JDBC, third-party applications running on Windows or other platforms can submit queries directly to Impala. Most third-party Business Intelligence applications use JDBC and ODBC to submit queries to the Impala cluster, and the impalad processes running on the various nodes listen for these requests and process them as requested.
Previously we discussed the Impala daemon, statestore, and metastore in detail to understand how they work together. Essentially, Impala daemons receive queries from a variety of sources and distribute the query load to Impala daemons running on other nodes. While doing so, it interacts with the statestore for node-specific updates and accesses the metastore, either stored in the centralized database or in the local cache. Now to complete the Impala execution, we will discuss how Impala interacts with other components, that is, Hive, HDFS, and HBase.
We have already discussed earlier the Impala metastore using the centralized database as a metastore, and Hive also uses the same MySQL or PostgreSQL database for the same kind of data. Impala provides the same SQL-like query interface used in Apache Hive. Since both Impala and Hive share the same database as a metastore, Impala can access Hive-specific table definitions if the Hive table definition uses the same file format, compression codecs, and Impala-supported data types for their column values.
Apache Hive provides various kinds of file-type processing support to Impala. When using formats other than a text file, that is, RCFile, Avro, and SequenceFile, the data must be loaded through Hive first, and then Impala can query the data from these file formats. Impala can read more types of data with the SELECT statement than it can write with the INSERT statement. The ANALYZE TABLE statement in Hive generates useful table and column statistics, and Impala uses these valuable statistics to optimize its queries.
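A sketch of that division of labor, with hypothetical table names — the first statements run in the Hive shell, the rest in impala-shell against a live cluster:

```sql
-- In Hive: create an RCFile table, load it, then compute statistics.
CREATE TABLE events_rc (id INT, payload STRING) STORED AS RCFILE;
-- (load data through Hive here, e.g. INSERT ... SELECT from a staging table)
ANALYZE TABLE events_rc COMPUTE STATISTICS;

-- In impala-shell: pick up the table created through Hive, then query it.
INVALIDATE METADATA;
SELECT COUNT(*) FROM events_rc;
```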
Impala table data is actually a set of regular data files stored in HDFS, and Impala uses HDFS as its primary data storage medium. As soon as a data file or a collection of files is available in the folder of a new table, Impala reads all of the files regardless of their names, and new data is added in files with names controlled by Impala. HDFS provides data redundancy through the replication factor and relies on that redundancy to access data on other DataNodes in case it is not available on a specific DataNode. We have already learned that Impala also maintains information on the physical location of the blocks that make up the data files in HDFS, which helps with data access in the case of node failure.
HBase is a distributed, scalable, big data storage system that provides random, real-time read and write access to data stored on HDFS. HBase, a database storage system, sits on top of HDFS; however, like other traditional database storage systems, HBase does not provide built-in SQL support. Third-party applications can provide such functionality.
To use HBase, the user first defines tables in Impala and then maps them to the equivalent HBase tables. Once a table relationship is established, users can submit queries against the HBase table through Impala. Join operations can also be performed across HBase and Impala tables.
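The mapping itself is defined through Hive's HBase storage handler. Below is a sketch with hypothetical table, column, and column-family names; the CREATE statement runs in the Hive shell against an existing HBase table, and orders stands in for a regular HDFS-backed Impala table:

```sql
-- In Hive: map a table onto an existing HBase table. ":key" binds the
-- first column to the HBase row key; "cf" is an assumed column family.
CREATE EXTERNAL TABLE hbase_users (id STRING, name STRING)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:name")
TBLPROPERTIES ("hbase.table.name" = "users");

-- In impala-shell: refresh metadata, then query or join as usual.
INVALIDATE METADATA;
SELECT u.name, o.total
FROM hbase_users u JOIN orders o ON u.id = o.user_id;
```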
Impala is designed and developed to run on top of Hadoop, so you must understand the Hadoop security model as well as the security provided by the OS where Hadoop is running. If Hadoop is running on Linux, a Linux administrator and a Hadoop administrator can tighten security, which complements the security provided by Impala. Impala 1.1 or higher uses the Sentry open source project to provide a detailed authorization framework for Hadoop. Impala 1.1.1 supports auditing capabilities in a cluster by creating audit data, which can be collected from all nodes and then processed for further analysis and insight.
Here, in this chapter, we will talk about the security features provided by Impala. To start with Impala security, we can consider the following types of security features.
Authorization determines "who can access the data resources" and "what kind of action is approved for which user." Impala uses the Linux OS user ID of the user who started the Impala shell process or another client application, and this user ID is associated with the privileges to be used with Impala. With Impala 1.1, the open source Sentry project is used for authorization, so users can learn more by accessing the relevant Sentry information.
Impala uses the same authorization privilege model used by other database systems, such as MySQL and Hive. In Impala, privileges are granted to various kinds of objects in a schema. Any privilege that can be granted is associated with a level in the object hierarchy; for example, if a privilege is granted on a container object, the child objects automatically inherit it.
Following this we will learn how a restricted set of privileges determines what you can do with each object.
The SELECT privilege allows a user to read data from a table. If a user issues the SHOW DATABASES or SHOW TABLES statements, only objects for which the user has this privilege are shown in the output, and the same goes for the INVALIDATE METADATA statement: it only accesses metadata for tables for which the user has this privilege.
Users with the ALL privilege can create or modify any object. This privilege is needed to execute DDL statements, that is, ALTER TABLE or DROP TABLE for a table, CREATE DATABASE or DROP DATABASE for a database, and ALTER VIEW or DROP VIEW for a view.
Here are a few examples of how you can set the described privileges:
GRANT SELECT ON TABLE table_name TO USER user_name;
GRANT ALL ON TABLE table_name TO GROUP group_name;
Authentication means verifying the credentials and confirming the identity of the user before processing the request. Impala uses Kerberos security subsystems to authenticate the user and his or her identity.
In the Cloudera Hadoop distribution, the Kerberos security can be enabled through Cloudera Manager. Running Impala in a managed environment, Cloudera Manager automatically completes the Kerberos configuration. At the time of writing this book, Impala does not support application data wire encryption. Once your Hadoop distribution has Kerberos security enabled, you can enable Kerberos security in Impala.
Auditing means keeping an account of each and every operation executed in the system and maintaining a record of whether it succeeded or failed. Using auditing features, users can look back to check which operations were executed and what part of the data was accessed by which user. The auditing feature helps track down such activities in the system so the responsible professionals can take appropriate measures. In Impala, the auditing feature produces audit data, which is collected and presented in user-friendly detail by Cloudera Manager.
Auditing features are introduced with Impala 1.1.1 and the key features are as follows:
Enable auditing directory with the impalad startup option using
By default, Impala starts a new audit logfile after every 5,000 queries. To change this count, use the -max_audit_event_log_file_size option at impalad startup.
Optionally, the Cloudera Navigator application is used to collect and consolidate audit logs from all nodes in the cluster.
Optionally, Cloudera Manager is used to filter, visualize, and produce the audit reports.
Blocked SQL queries that could not be authorized
SQL queries that are authorized to execute are logged after analysis is done and before the actual execution
Query information is logged into the audit log in JSON format, using a single line per SQL query. Each logged query records its SQL syntax along with identifying details such as the session ID, user name, and client network address.
Now let's take a look at the security guidelines for Impala, which can improve security against malicious intruders, unauthorized access, accidents, and common mistakes. Here is a list of measures that can definitely harden a cluster running Impala:
Impala specific guidelines
Make sure that the Hadoop ownership and permissions for Impala audit logs files are restricted
Make sure that the Impala web UI is password protected
Enable authorization by running the impalad daemons with the -authorization_policy_file option on all nodes
System specific guidelines
Make sure that the Kerberos authentication is enabled and working with Impala
Tighten the HDFS file ownership and permission mechanism
Keeping a long list of sudoers is definitely a big red flag. Keep the list of sudoers to a bare minimum to stop unauthorized and unwanted access
Secure the Hive metastore from unwanted and unauthorized access
In this chapter we covered basic information on Impala, its core components, and how the various components work together to process data at lightning speed. We have learned about Impala installation, configuration, upgrading, and security in detail. In the next chapter, we will learn about the Impala shell and commands, which can be used to manage Impala components in a cluster.