Learning Cloudera Impala

By Avkash Chauhan

About this book

If you have always wanted to crunch billions of rows of raw data on Hadoop in a couple of seconds, then Cloudera Impala is the number one choice for you. Cloudera Impala provides fast, interactive SQL queries directly on your Apache Hadoop data stored in HDFS or HBase. In addition to using the same unified storage platform, Impala also uses the same metadata, SQL syntax (Hive SQL), ODBC driver, and user interface (Hue Beeswax) as Apache Hive. This provides a familiar and unified platform for batch-oriented or real-time queries.

In this practical, example-oriented book, you will learn everything you need to know about Cloudera Impala so that you can get started on your very own project. The book covers everything about Cloudera Impala, from installation, administration, and query processing all the way to connectivity with other third-party applications. With this book in hand, you will find yourself empowered to play with your data in Hadoop.

As a reader of this book, you will learn about the origin of Impala and the technology behind it that allows it to run on thousands of machines. You will learn how to install, run, manage, and troubleshoot Impala in your own Hadoop cluster using the step-by-step guidance provided in the book. The book covers the fundamentals of data processing, such as loading data stored in Hadoop into Impala tables and querying data using Impala SQL statements, all with various code illustrations and a real-world example.

The book is written to get you started with Impala by providing rich information so you can understand what Impala is, what it can do for you, and finally how you can use it to achieve your objective.

Publication date: December 2013
Publisher: Packt
Pages: 150
ISBN: 9781783281275

 

Chapter 1. Getting Started with Impala

This chapter covers Impala, its core components, and its inner workings in detail. We will cover the Impala architecture, including the Impala daemon, statestore, and execution model, and how they interact with each other and with other components. Impala metadata and the metastore are also discussed here to explain how Impala maintains its information. Finally, we will study various ways to interface with Impala.

The objective of this chapter is to provide enough information for you to kick-start Impala on a single-node experimental cluster or a multinode production cluster. This chapter covers the Impala essentials within the following broad categories:

  • System requirements

  • Installation

  • Configuration

  • Upgrading

  • Security

  • Impala architecture and execution

Impala is for a new breed of data wranglers who want to process data at lightning-fast speed using traditional SQL knowledge. Impala gives data analysts and scientists a way to access data stored on Hadoop at lightning speed by directly using SQL or other Business Intelligence tools. Impala processes data where it already lives on the Hadoop storage layer, HDFS, so there is no need to migrate data from Hadoop to any other middleware, specialized system, or data warehouse. Impala provides data wranglers a Massively Parallel Processing (MPP) query engine, which runs natively on Hadoop.

Native on Hadoop means the engine runs on Hadoop and uses the Hadoop core component, HDFS, along with other additional components, such as Hive and HBase. To process data, Impala has its own execution component, which runs on each DataNode where the data is stored in blocks. There is a list of third-party applications that can process data stored on Hadoop directly through Impala. The biggest advantage of Impala is that data transformation or data movement is not required for data stored on Hadoop. No data movement means all the processing happens where the data resides in the cluster. In other distributed systems, data is transferred over the network before it is processed; with Impala, however, the processing happens where the data is stored, which is one of the primary reasons why Impala is very fast in comparison to other large-scale data processing systems.

Before we learn more about Impala, let's see what the key Impala features are:

  • First and foremost, Impala is 100% open source under the Apache license

  • Impala is a native MPP engine, running on the Cloudera Hadoop distribution

  • Impala supports in-memory processing for data through SQL-like queries

  • Impala uses Hadoop Distributed File System (HDFS) and HBase

  • Impala supports integration with leading Business Intelligence tools, such as Tableau, Pentaho, MicroStrategy, Zoomdata, and so on

  • Impala supports a wide variety of input file formats, that is, regular text files, files in CSV/TSV or other delimited formats, SequenceFiles, Avro, RCFile, LZO-compressed files, and Parquet

  • For third-party application connectivity, Impala supports the ODBC driver, SQL-like syntax, and the Beeswax GUI (in Apache Hue) from Apache Hive

  • Impala uses Kerberos authentication and role-based authorization with Sentry

The key benefits of using Impala are:

  • Impala uses Hive to read a table's metadata; however, its own distributed execution engine makes data processing very fast. So the very first benefit of using Impala is super-fast access to data in HDFS.

  • Impala uses a SQL-like syntax to interact with data, so you can leverage existing BI tools to interact with data stored on Hadoop. Engineers with SQL expertise can benefit from Impala, as they do not need to learn new languages and skills. Additionally, Impala offers higher performance and execution speed.

  • While running on Hadoop, Impala leverages the Hadoop file and data format, metadata, resource management, and security, all available on Hadoop.

  • As Impala interacts with the stored data in Hadoop directly, it preserves the full fidelity of the data during analysis, avoiding information loss from aggregations or conformance to fixed schemas.

  • Impala performs interactive analysis directly on the data stored on Hadoop DataNodes without requiring data movement, which results in lightning-fast query results, because there are no network bottlenecks and no time is spent moving data.

  • Impala provides a single repository and metadata store from source to analysis, which enables more users to interact with a large amount of data. The presence of a single repository also reduces data movement, which helps in performing interactive analysis directly on full fidelity data.

 

Impala requirements


Impala is supported on 64-bit Linux-based operating systems. At the time of writing this book, Impala was supported on the following operating systems:

  • Red Hat Enterprise Linux 5.7/6.2/6.4

  • CentOS 5.7/6.2/6.4

  • SLES 11 with SP 1 or newer

  • Ubuntu 10.04/12.04

  • Debian 6.0.3

As Impala runs on Hadoop, it is also important to discuss the supported Hadoop version. At the time of writing this book, Impala was supported on the following Hadoop distributions:

  • Impala 1.1 and 1.1.1

    • Cloudera Hadoop CDH 4.1 or later

  • Impala 1.0

    • Cloudera Hadoop CDH 4.1 or later

  • Impala 0.7 and older

    • Cloudera Hadoop CDH 4.1 only

Besides CDH, Impala can run on other Hadoop distributions by compiling the source code and then configuring it correctly as required.

Note

Depending on the latest version of Impala, requirements might change, so please visit the Cloudera Impala website for updated information.

Dependency on Hive for Impala

Even though the common perception is that Impala needs Hive to function, this is not completely true. The fact is that only the Hive metastore is required for Impala to function, and Hive can be installed on some other client machine. Hive does not need to be installed on the same DataNodes as Impala; as long as Impala can access the Hive metastore, it will function as expected. In brief, the Hive metastore stores table- and partition-specific information, which is also called metadata.

As Hive uses PostgreSQL or MySQL for its metastore, either PostgreSQL or MySQL is effectively also required for Impala.
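As a quick sanity check, you can verify from a prospective Impala node that the metastore database is reachable. The following sketch assumes a MySQL-backed metastore; the host name, user, and database name are placeholders used purely for illustration:

$ # Confirm that the MySQL-backed Hive metastore is reachable from this node
$ # (host, user, and database name are placeholders)
$ mysql -h metastore-host.example.com -u hiveuser -p -e "USE metastore; SHOW TABLES;"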

Dependency on Java for Impala

For those who don't know, Impala is written in C++; however, Impala uses Java to communicate with various Hadoop components. In Impala, the impala-dependencies.jar file located at /usr/lib/impala/lib includes all the required Java dependencies. The Oracle JVM is the officially supported JVM for Impala, and other JVMs might cause problems while running Impala.

Hardware dependency

The source datasets processed by Impala, along with join operations, can be very large, and because processing is done in memory, as an Impala user you must make sure that you have sufficient memory for the join operations. The memory requirement depends on the source dataset that you are going to process through Impala. Note also that Impala cannot run queries with a working set greater than the maximum available RAM; if memory is not sufficient, Impala will not be able to process the query and the query will be canceled.

For best performance with Impala, it is suggested to have DataNodes with multiple storage disks because disk I/O speed is often considered the bottleneck for Impala performance. The total amount of physical storage requirement is based on the source data, which you would want to process with Impala.

As Impala uses the SSE4.2 CPU instruction set, which is mostly found in newer processors, recent processors are suggested for better performance with Impala.

Networking requirements

Impala daemons running on DataNodes can process data stored on local nodes as well as on remote nodes. To achieve the highest performance, Impala should complete data processing on local data rather than reaching remote data over a network connection. For local data processing, Impala matches the hostname provided to each Impala daemon with the IP address of each DataNode by resolving the hostname flag to an IP address. For Impala to work with the local data stored on a DataNode, use a single IP interface for the DataNode and the Impala daemon on each machine. Since there is a single IP address, make sure that the Impala daemon's hostname flag resolves to the IP address of the DataNode.

User account requirements

When Impala is installed, a user named impala and a group named impala are created, and Impala uses this user and group from then on. You must ensure that no one changes the impala user and group settings, and that no other application or system activity obstructs the functionality of the impala user and group. To achieve full performance, Impala uses direct reads, and because the root user cannot perform direct reads, Impala should not be run as root.

 

Installing Impala


As Impala is designed and developed to run on the Cloudera Hadoop distribution, there are two different ways Impala can be installed on supported Cloudera Hadoop distributions. Both installation methods are described in a nutshell, as follows.

Installing Impala with Cloudera Manager

Cloudera Manager is only available for the Cloudera Hadoop distribution. The biggest advantage of installing Impala using Cloudera Manager is that most of the complex configuration is taken care of by Cloudera Manager and applied to all dependent applications, if applicable. Cloudera Manager has various versions available; however, to install a specific Impala version, the user must have a suitable Cloudera Manager version for a successful installation.

Once the previously described requirements are met, Cloudera Manager can install Impala for you. Depending on the Cloudera Manager version, you can install specific Impala versions. For example, to install Impala version 1.1.1 you would need Cloudera Manager 4.7 or a higher version, which supports all the features, including the auditing feature introduced in Impala 1.1.1. Just use the Cloudera Manager UI to install Impala from the list and follow the instructions as they appear. As shown in the following Cloudera Manager UI screenshot, I have Impala 1.1.1 installed; however, I can upgrade to Impala 1.2.1 just by using Cloudera Manager.

Tip

To learn more about the installation of Cloudera Manager, please visit the Cloudera documentation site at the following link, which will give you the updated information:

http://www.cloudera.com/content/cloudera-content/cloudera-docs/Impala/latest/Cloudera-Impala-Release-Notes/Cloudera-Impala-Release-Notes.html

Installing Impala without Cloudera Manager

If you decide to install Impala on your own in your Cloudera Hadoop cluster, you must make sure that the basic Impala requirements are met and the necessary components are already installed. First, you must have the correct version of the Cloudera Hadoop cluster ready, depending on your Impala version, and have the Hive metastore installed using either MySQL or PostgreSQL.

Once you have made sure that the Hive metastore is available in your Cloudera Hadoop cluster, you can start the Impala installation on all DataNodes as follows (a consolidated command sketch appears after the list):

  • Make sure that you have Cloudera public repo set in your OS, so Impala specific packages can be downloaded and installed on your machine. If you do not have the Cloudera specific public repo set, please visit the Cloudera website to get your OS specific information.

  • After that, you will need to install the following three packages on your machine:

    • Impala

    • Impala-server

    • Impala-state-store

  • Then, copy hive-site.xml, core-site.xml, and hdfs-site.xml Hadoop configuration files to the /etc/impala/conf folder, which is the Impala configuration folder.

  • As per Cloudera's advice, it is not a good idea to install Impala on the NameNode, so please do not do so, because any problem caused by Impala may bring your Hadoop cluster down.

  • Finally, install Impala shell to a single DataNode or a network-connected external machine on which you have decided to run queries.
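As a consolidated sketch, on a yum-based system (RedHat/CentOS) the preceding steps might look roughly as follows; the package names follow the Cloudera repository conventions and the paths are the defaults mentioned in this section, so adjust them to your environment:

$ # Install the Impala packages from the Cloudera repository
$ sudo yum install impala impala-server impala-state-store
$ # Copy the Hadoop and Hive configuration files into the Impala configuration folder
$ sudo cp /etc/hive/conf/hive-site.xml /etc/hadoop/conf/core-site.xml /etc/hadoop/conf/hdfs-site.xml /etc/impala/conf/
$ # Install the Impala shell on the node from which you plan to run queries
$ sudo yum install impala-shell

On Ubuntu or Debian systems, the equivalent commands use apt-get instead of yum.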

Tip

Impala is also compiled and tested to run on the MapR Hadoop distribution, so if you are interested in running Impala on MapR, please visit the following link:

http://doc.mapr.com/display/MapR/Impala

 

Configuring Impala after installation


After Impala is installed, you must perform a few mandatory and recommended configuration settings for smooth Impala operations. Cloudera Manager does some of the configurations automatically; however, a few of them need to be completed after any kind of installation. The following is a list of post-installation configurations:

  • On Cloudera Hadoop CDH 4.2 or newer distribution, the user must enable short-circuit reads on each DataNode, after each type of installation. To enable short-circuit reads, here are the steps to follow on your Cloudera Hadoop cluster:

    1. First configure hdfs-site.xml in each DataNode as follows:

      <property>
        <name>dfs.client.read.shortcircuit</name>
        <value>true</value>
      </property>
      <property>
        <name>dfs.domain.socket.path</name>
        <value>/var/run/hadoop-hdfs/dn._PORT</value>
      </property>
      <property>
        <name>dfs.client.file-block-storage-locations.timeout</name>
        <value>3000</value>
      </property>
    2. If /var/run/hadoop-hdfs/ is group writable, make sure its group is root.

    3. Copy core-site.xml and hdfs-site.xml from the Hadoop configuration folder to the Impala configuration folder at /etc/impala/conf.

    4. Restart all DataNodes.

  • Cloudera Manager enables "block location tracking" and "native checksumming" for optimum performance; however, for independent installation both of these have to be enabled. Enabling block location metadata allows Impala to know on which disk data blocks are located, allowing better utilization of the underlying disks. Both "block location tracking" and "native checksumming" are described in later chapters for better understanding. Here is what you can do to enable block location tracking:

    1. hdfs-site.xml on each DataNode must have the following setting:

      <property>
        <name>dfs.datanode.hdfs-blocks-metadata.enabled</name>
        <value>true</value>
      </property>
    2. Make sure the updated hdfs-site.xml file is placed in the Impala configuration folder at /etc/impala/conf.

    3. Restart all DataNodes.

  • Enabling native checksumming causes Impala to use an optimized native library for computing checksums, if that library is available. If Impala is installed using Cloudera Manager, native checksumming is automatically configured and no action is needed. However, if you need to enable native checksumming on a self-installed Impala cluster, you must build and install the libhadoop.so Hadoop native library. If this library is not available, you might see the Unable to load native-hadoop library for your platform... using builtin-java classes where applicable message in the Impala logs, indicating that native checksumming is not enabled (a quick check is shown below).
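A quick, assumption-laden way to check for the native library on a node is to look for libhadoop.so in the Hadoop native library directory; the path below is the usual location for packaged CDH installations and may differ on your system:

$ # Check whether the native Hadoop library is present on this node
$ ls /usr/lib/hadoop/lib/native/libhadoop.so*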

 

Starting Impala


If you have used Cloudera Manager to install Impala, then you can use the Cloudera Manager UI to start or shut down Impala. However, those who installed Impala directly need to start at least one instance of Impala-state-store and Impala on all DataNodes where it is installed. In this scenario, you can either use the init scripts or start the statestore and Impala directly. Impala uses Impala-state-store to run in distributed mode. Impala-state-store helps Impala achieve the best performance; however, if the statestore becomes unavailable, Impala continues to function.

To start the Impala-state-store, use the following command:

$ sudo service impala-state-store start

To start Impala on each DataNode, use the following command:

$ sudo service impala-server start

Startup settings for Impala-state-store and the Impala server are read from /etc/default/impala by their init scripts; this file can be edited if necessary when you want to automate these services or start them with particular options.
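For illustration, a few typical entries in /etc/default/impala are shown below; the variable names follow common Cloudera packaging conventions and the values are placeholders, so verify them against the file shipped with your Impala version:

# Illustrative entries in /etc/default/impala (values are placeholders)
IMPALA_STATE_STORE_HOST=statestore-host.example.com
IMPALA_STATE_STORE_PORT=24000
IMPALA_LOG_DIR=/var/log/impala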

 

Stopping Impala


To stop Impala services in all nodes where it is installed, use the following command:

$ sudo service impala-server stop

To stop any instances of Impala-state-store in the Hadoop Cluster, use the following command:

$ sudo service impala-state-store stop
 

Restarting Impala


To restart Impala services in all nodes where it is installed, use the following command:

$ sudo service impala-server restart

To restart any instances of Impala-state-store in the Hadoop Cluster, use the following command:

$ sudo service impala-state-store restart
 

Upgrading Impala


Upgrading Impala from an older to a newer version is similar to other application upgrades on Linux machines. Upgrading Impala requires stopping the currently running Impala services, upgrading the Impala packages, adding any extra configuration if needed, and finally restarting the Impala services. Here we will learn how to upgrade Impala depending on the initial installation method.

Upgrading Impala using parcels with Cloudera Manager

This method is used when Impala was installed using Cloudera packages but you now want to manage it through parcels in the Cloudera Manager UI. The steps to be followed are:

  1. First remove all the Impala-related packages.

  2. Connect to the Cloudera Manager Admin Console.

  3. Navigate to the Hosts | Parcels tab. You should see a parcel with a newer version of Impala that you can upgrade to.

  4. Click on Download.

  5. Click on Distribute.

  6. Click on Activate.

  7. Once activation is completed, a Restart button will appear.

  8. Click on the Restart button to restart the Impala service.

Upgrading Impala using packages with Cloudera Manager

The steps to be followed are as follows:

  1. Connect to the Cloudera Manager Admin Console.

  2. In the Services tab, click on the Impala service.

  3. Click on Actions.

  4. Click on Stop.

  5. Update the Impala server on each Impala node in your cluster.

  6. Update hadoop-lzo-cdh4 as well, if it is already installed.

  7. Update Impala shell on each node on which it is installed.

  8. Connect to the Cloudera Manager Admin console.

  9. In the Services tab, click on the Impala service.

  10. Click on Actions and then on Start.

Upgrading Impala without Cloudera Manager

The steps to be followed are as follows:

  1. Stop Impala services and Impala-state-store in all nodes where it is installed.

  2. Validate if any update-specific configuration is needed and, if so, please apply that configuration.

  3. Update the Impala server and Impala shell using the appropriate update commands for your Linux OS. Depending on your Linux OS and Impala package type, this might be yum on RedHat/CentOS or apt-get on Ubuntu/Debian (a sketch follows this list).

  4. Restart Impala services.
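A rough sketch of the update commands in step 3 is shown below; it assumes your package repository already points at the newer Impala release:

$ # RedHat/CentOS
$ sudo yum update impala impala-server impala-state-store impala-shell
$ # Ubuntu/Debian
$ sudo apt-get update
$ sudo apt-get install impala impala-server impala-state-store impala-shell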

 

Impala core components


In this section we will first learn about the various important components of Impala and then discuss the intricate details of Impala's inner workings. Here, we will discuss the following important components:

  • Impala daemon

  • Impala statestore

  • Impala metadata and metastore

Putting together the above components with Hadoop and an application or command line interface, we can conceptualize them as seen in the following figure:

Let's start discussing the core Impala components in detail now.

Impala daemon

At the core of Impala, there exists the Impala daemon, which runs on each DataNode where Impala is installed. The Impala daemon is represented by an actual process named impalad. This Impala daemon process impalad is responsible for processing the queries, which are submitted through Impala shell, API, and other third-party applications connected through ODBC/JDBC connectors or Hue.

A query can be submitted to any impalad running on any node, and that particular node serves as the coordinator node for that query. Multiple queries are served by impalad instances running on other nodes as well. After accepting the query, impalad reads and writes data files and parallelizes the query by distributing the work to other Impala nodes in the cluster. When query fragments are processed on the various impalad instances, they all return their results to the central coordinator node. Depending on your requirements, queries can be submitted to a dedicated impalad or load-balanced across the impalad instances in your cluster.
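For example, the impalad that impala-shell connects to becomes the coordinator for the queries submitted in that session. The host name and table name below are placeholders used purely for illustration:

$ # Connect to a specific impalad, which coordinates the queries from this session
$ impala-shell -i datanode03.example.com:21000 -q "SELECT count(*) FROM sample_table"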

Impala statestore

Impala has another important component called the Impala statestore, which is responsible for checking the health of each impalad and then frequently relaying each Impala daemon's health to the other daemons. The Impala statestore runs as a single process and can run on the same node as an Impala server or on any other node within the cluster. The name of the Impala statestore daemon process is statestored. Every Impala daemon process interacts with the statestore process, providing its latest health status, and this information is relayed within the cluster to each and every Impala daemon so they can make correct decisions before distributing queries to a specific impalad. In the event of a node failure for any reason, statestored updates all the other nodes about this failure, and once such a notification is received, no other Impala daemon assigns any further queries to the affected node.

One important thing to note here is that even though the Impala statestore provides critical updates about nodes in trouble, the process itself is not critical to Impala execution. In the event that the Impala statestore becomes unavailable, the remaining nodes continue working as usual. When the statestore is offline, the cluster simply becomes less robust, and when the statestore comes back online it resumes communicating with each node and its normal role.

Impala metadata and metastore

Another important component of Impala is its metadata and metastore. Impala uses a traditional MySQL or PostgreSQL database to store table definitions. While other databases can also be used to host the Hive metastore, either MySQL or PostgreSQL is recommended. Important details, such as table and column information and table definitions, are stored in a centralized database known as the metastore. Apache Hive shares the same database for its metastore, because of which Impala can access tables created or loaded by Hive, as long as all the table columns use supported data types, data formats, and data compression types.

Besides that, Impala also maintains information about the data files stored in HDFS. Impala tracks file metadata, that is, the physical location of the blocks that make up the data files in HDFS. Each Impala node caches all of this metadata locally, which can expedite the process of gathering metadata for a large amount of data distributed across multiple DataNodes. When dealing with an extremely large amount of data and/or many partitions, gathering table-specific metadata could take a significant amount of time, so a locally stored metadata cache helps in providing such information instantly.

When a table definition or table data is updated, other Impala daemons must update their metadata caches by retrieving the latest metadata before issuing a new query against the table in question. Impala uses the REFRESH statement when new data files are added to an existing table. Another statement, INVALIDATE METADATA, is used when a new table is created or an existing table is dropped. The same INVALIDATE METADATA statement is also used when data files are removed from HDFS or an HDFS rebalance operation is initiated to balance data blocks in HDFS.
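A minimal sketch of both statements is shown below, using a hypothetical table name; REFRESH is the lighter-weight option when only new files were added, while INVALIDATE METADATA forces a full metadata reload:

$ # New data files were added to an existing table
$ impala-shell -q "REFRESH sales_table"
$ # A table was created or dropped outside Impala (for example, through Hive)
$ impala-shell -q "INVALIDATE METADATA"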

The Impala programming interface

Impala provides the following ways to submit queries to the Impala daemon:

  • Command-line interface through Impala shell

  • Web interface through Apache Hue

  • Third-party application interface through ODBC/JDBC

The Impala daemon process is configured to listen for incoming requests from the previously described interfaces on several ports. The command-line interface and the web-based interface share the same port; however, JDBC and ODBC use different ports to listen for incoming requests. ODBC- and JDBC-based connectivity adds extensibility to Impala running in a Linux environment: using ODBC and JDBC, third-party applications running on Windows or other Linux platforms can submit queries directly to Impala. Most third-party Business Intelligence applications use JDBC and ODBC to submit queries to the Impala cluster, and the impalad processes running on the various nodes listen for these requests and process them as requested.
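As an illustration, in a default installation impala-shell and Hue Beeswax talk to impalad on port 21000, while JDBC and ODBC clients usually connect on port 21050; these are common defaults and may have been changed in your cluster:

$ # Command-line interface on the default impalad port
$ impala-shell -i impala-node.example.com:21000
$ # JDBC clients typically connect on port 21050, for example with a URL such as:
$ # jdbc:hive2://impala-node.example.com:21050/;auth=noSasl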

 

The Impala execution architecture


Previously we discussed the Impala daemon, statestore, and metastore in detail to understand how they work together. Essentially, Impala daemons receive queries from a variety of sources and distribute the query load to Impala daemons running on other nodes. While doing so, they interact with the statestore for node-specific updates and access the metastore, either stored in the centralized database or in the local cache. Now, to complete the picture of Impala execution, we will discuss how Impala interacts with other components, that is, Hive, HDFS, and HBase.

Working with Apache Hive

We discussed earlier that Impala uses a centralized database as its metastore, and Hive uses the same MySQL or PostgreSQL database for the same kind of data. Impala provides the same SQL-like query interface used in Apache Hive. Since both Impala and Hive share the same database as a metastore, Impala can access Hive-specific table definitions as long as the Hive table definition uses file formats, compression codecs, and data types supported by Impala for its column values.

Apache Hive provides support for processing various file types with Impala. When using formats other than a text file, that is, RCFile, Avro, and SequenceFile, the data must be loaded through Hive first, and then Impala can query the data in these file formats. Impala can read more types of data with the SELECT statement than it can write with the INSERT statement. The ANALYZE TABLE statement in Hive generates useful table and column statistics, and Impala uses these valuable statistics to optimize queries.
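The following sketch, with hypothetical table and path names, illustrates this division of labor: the Avro data is loaded and analyzed through Hive, and then queried through Impala:

$ # Load data into an existing Avro-backed Hive table (table and path are hypothetical)
$ hive -e "LOAD DATA INPATH '/user/demo/events.avro' INTO TABLE events_avro"
$ # Gather table statistics in Hive so Impala can use them to optimize queries
$ hive -e "ANALYZE TABLE events_avro COMPUTE STATISTICS"
$ # Refresh Impala's metadata cache and query the table from Impala
$ impala-shell -q "INVALIDATE METADATA events_avro"
$ impala-shell -q "SELECT count(*) FROM events_avro"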

Working with HDFS

Impala table data is actually a set of regular data files stored in HDFS, and Impala uses HDFS as its primary data storage medium. As soon as a data file or a collection of files is available in the directory of a table, Impala reads all of the files regardless of their names; when Impala itself writes new data, it controls the names of the files it creates. HDFS provides data redundancy through the replication factor, and Impala relies on such redundancy to access data on other DataNodes in case it is not available on a specific DataNode. We have already learned that Impala also maintains information on the physical location of the blocks of data files in HDFS, which helps data access in the event of node failure.
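For instance, dropping a new file into a table's HDFS directory and refreshing the metadata cache is all that is needed for Impala to see the new data; the path and table name below are illustrative only:

$ # Add a new data file to the table's directory in HDFS
$ hdfs dfs -put new_data.csv /user/hive/warehouse/sales_table/
$ # Tell Impala to pick up the newly added file
$ impala-shell -q "REFRESH sales_table"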

Working with HBase

HBase is a distributed, scalable, big data storage system that provides random, real-time read and write access to data stored on HDFS. HBase, a database storage system, sits on top of HDFS; however, unlike traditional database storage systems, HBase does not provide built-in SQL support. Third-party applications can provide such functionality.

To use HBase, the user first defines tables in Impala and maps them to the equivalent HBase tables. Once a table relationship is established, users can submit queries against the HBase table through Impala. Join operations can also be performed combining HBase and Impala tables.
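The table mapping itself is usually defined through Hive's HBase storage handler, after which Impala can query and join the table. The sketch below uses entirely hypothetical table names, columns, and mappings:

$ # Define an HBase-backed table through Hive (names and column mapping are hypothetical)
$ hive -e "CREATE EXTERNAL TABLE hbase_users (id STRING, name STRING, city STRING)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,info:name,info:city')
TBLPROPERTIES ('hbase.table.name' = 'users')"
$ # Make the new table visible to Impala and join it with an HDFS-backed table
$ impala-shell -q "INVALIDATE METADATA hbase_users"
$ impala-shell -q "SELECT u.name FROM hbase_users u JOIN sales_table s ON (u.id = s.user_id)"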

Note

To learn more about using HBase with Impala, please visit the Cloudera website at the following link, for extensive documentation:

http://www.cloudera.com/content/cloudera-content/cloudera-docs/Impala/latest/Installing-and-Using-Impala/ciiu_impala_hbase.html

 

Impala security


Impala is designed and developed to run on top of Hadoop, so you must understand the Hadoop security model as well as the security provided by the OS where Hadoop is running. If Hadoop is running on Linux, then the Linux administrator and the Hadoop administrator can tighten security, which complements the security provided by Impala. Impala 1.1 or higher uses the open source Sentry project to provide a fine-grained authorization framework for Hadoop. Impala 1.1.1 supports auditing capabilities in a cluster by creating audit data, which can be collected from all nodes and then processed for further analysis and insight.

Here, in this chapter, we will talk about the security features provided by Impala. To start with Impala security, we can consider the following types of security features.

Authorization

Authorization means deciding who can access the data resources and what kind of action is approved for which user. Impala uses the Linux OS user ID of the user who started the Impala shell process or other client application, and this user ID is associated with the privileges to be used within Impala. With Impala 1.1, the open source Sentry project is used for authorization; you can learn more by consulting the Sentry documentation.

Impala uses the same authorization privilege model that is used in other database systems, such as MySQL and Hive. In Impala, privileges are granted to various kinds of objects in the schema. Any privilege that can be granted is associated with a level in the object hierarchy; for example, if a container object is granted a privilege, its child objects automatically inherit it.

Currently only server names, URIs, databases, and tables can be used to restrict privileges; partition- or column-level restrictions are not supported.

Following this we will learn how a restricted set of privileges determines what you can do with each object.

The SELECT privilege

The SELECT privilege allows the user to read data from a table. If users issue SHOW DATABASES and SHOW TABLES statements, only the objects for which the user has this privilege are shown in the output, and the same goes for the REFRESH and INVALIDATE METADATA statements, which will only access metadata for tables on which the user has this privilege.

The INSERT privilege

The INSERT privilege applies only to the INSERT and LOAD DATA statements, and allows the user to write data into a table.

The ALL privilege

With the ALL privilege users can create or modify any object. This access privilege is needed to execute DDL statements, that is, CREATE TABLE, ALTER TABLE, or DROP TABLE for a table, CREATE DATABASE or DROP DATABASE for a database, or CREATE VIEW, ALTER VIEW, or DROP VIEW for a view.

Here are a few examples of how you can set the described privileges:

GRANT SELECT on TABLE table_name TO USER user_name
GRANT ALL on TABLE table_name TO GROUP group_name

Authentication through Kerberos

Authentication means verifying credentials and confirming the identity of the user before processing the request. Impala uses the Kerberos security subsystem to authenticate users and their identities.

In the Cloudera Hadoop distribution, Kerberos security can be enabled through Cloudera Manager. When Impala runs in a managed environment, Cloudera Manager completes the Kerberos configuration automatically. At the time of writing this book, Impala does not support application data wire encryption. Once your Hadoop distribution has Kerberos security enabled, you can enable Kerberos security in Impala.

Note

To learn more about enabling Kerberos security features with Impala, please visit the Cloudera Impala documentation website, where you can find the latest information.

Auditing

Auditing means keeping account of each and every operation executed in the system and maintaining a record of whether it succeeded or failed. Using auditing features, users can look back to check which operations were executed and what part of the data was accessed by which user. The auditing feature helps track down such activities in the system so that the responsible professionals can take proper measures. In Impala, the auditing feature produces audit data, which is collected and presented in user-friendly detail by Cloudera Manager.

Auditing features were introduced with Impala 1.1.1 and the key points are as follows:

  • Enable auditing by specifying the audit log directory through the audit_event_log_dir impalad startup option (a startup sketch follows this list).

  • By default, Impala starts a new audit logfile after every 5,000 queries. To change this count, use the -max_audit_event_log_file_size impalad startup option.

  • Optionally, the Cloudera Navigator application is used to collect and consolidate audit logs from all nodes in the cluster.

  • Optionally, Cloudera Manager is used to filter, visualize, and produce the audit reports.
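A sketch of enabling auditing on a self-managed node is shown below; the directory path is a placeholder, and the flag names should be verified against the documentation for your Impala version:

$ sudo service impala-server stop
$ # Append the auditing flags to IMPALA_SERVER_ARGS in /etc/default/impala, for example:
$ #   -audit_event_log_dir=/var/log/impala/audit -max_audit_event_log_file_size=5000
$ sudo service impala-server start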

Here are the types of SQL queries that are logged with audit logs:

  • Blocked SQL queries that could not be authorized

  • SQL queries that are authorized to execute are logged after analysis is done and before the actual execution

Query information is logged into the audit log in JSON format, using a single line per SQL query. Each logged query can be identified by its SQL text along with the session ID, user name, and client network address.

 

Impala security guidelines for a higher level of protection


Now let's take a look at security guidelines for Impala, which can improve protection against malicious intruders, unauthorized access, accidents, and common mistakes. Here is a list that can definitely help harden a cluster running Impala:

  • Impala specific guidelines

    • Make sure that the Hadoop ownership and permissions for Impala data files are restricted

    • Make sure that the Hadoop ownership and permissions for Impala audit logs files are restricted

    • Make sure that the Impala web UI is password protected

    • Enable authorization by running the impalad daemons with the -server_name and -authorization_policy_file options on all nodes

    • When creating databases, tables, and views, structure them so that authorization policy rules can be expressed as simple and consistent rules

  • System specific guidelines

    • Create a policy file that specifies which Impala privileges are available to users in particular Hadoop groups (a sketch of such a file follows this list)

    • Make sure that the Kerberos authentication is enabled and working with Impala

    • Tighten the HDFS file ownership and permission mechanism

    • Keeping a long list of sudoers is definitely a big red flag. Keep the list of sudoers to a bare minimum to stop unauthorized and unwanted access

    • Secure the Hive metastore from unwanted and unauthorized access
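For reference, the policy file passed to impalad through the -authorization_policy_file option uses an INI-like layout. The sketch below is purely illustrative; the group, role, server, and database names are placeholders, and the exact syntax should be checked against the Sentry documentation for your release:

$ # Create an illustrative Sentry policy file (all names are placeholders)
$ cat > /tmp/impala-policy.ini <<'EOF'
[groups]
analysts = analyst_role
[roles]
analyst_role = server=server1->db=sales_db->table=*->action=select
EOF
$ # Run impalad with -server_name=server1 -authorization_policy_file=/tmp/impala-policy.ini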

 

Summary


In this chapter we covered basic information on Impala, its core components, and how the various components work together to process data at lightning speed. We have learned about Impala installation, configuration, upgrading, and security in detail, and in the next chapter we will learn about the Impala shell and commands, which can be used to manage Impala components in a cluster.

About the Author

  • Avkash Chauhan

    Avkash Chauhan is a software technology veteran with more than 12 years of industry experience in various disciplines such as embedded engineering, cloud computing, big data analytics, data processing, and data visualization. He has extensive global work experience with Fortune 100 companies worldwide. He spent eight years at Microsoft before moving on to Silicon Valley to work with a big data and analytics start-up. He started his career as an embedded engineer, and during his eight-year gig at Microsoft he worked on Windows CE, Windows Phone, Windows Azure, and HDInsight. He spent several years working with the Windows Azure team to develop world-class cloud technology, and his last project was Apache Hadoop on Windows Azure, also known as HDInsight. He worked on the HDInsight project from its incubation at Microsoft, and helped with its early development and subsequent deployment to the cloud. For the past three years, he has been working on big data- and Hadoop-related technologies, developing applications to make Hadoop easy to use for large and mid-market companies. He is a prolific blogger and very active on social networking sites. You can contact him directly through the following:

    • LinkedIn: https://www.linkedin.com/in/avkashchauhan
    • Blog: http://cloudcelebrity.wordpress.com/
    • Twitter: @avkashchauhan
