Securing Hadoop

By Sudheesh Narayanan

About this book

Security of Big Data is one of the biggest concerns for enterprises today. How do we protect the sensitive information in a Hadoop ecosystem? How can we integrate Hadoop security with existing enterprise security systems? What are the challenges in securing Hadoop and its ecosystem? These questions need to be answered in order to ensure effective management of Big Data. Hadoop, along with Kerberos, provides security features that enable Big Data management while keeping data secure.

This book is a practitioner’s guide to securing a Hadoop-based Big Data platform. It provides a step-by-step approach to implementing end-to-end security, along with a solid foundation in the Hadoop and Kerberos security models.

This practical, hands-on guide examines the challenges involved in securing sensitive data in a Hadoop-based Big Data platform and covers the Security Reference Architecture for securing Big Data. It takes you through the internals of the Hadoop and Kerberos security models and provides detailed implementation steps for securing Hadoop. You will learn how the Hadoop security model is implemented internally, how to integrate enterprise security systems with Hadoop security, and how to manage and control user access to the Hadoop ecosystem seamlessly. You will also learn how to implement audit logging and security incident monitoring within a Big Data platform.

Publication date: November 2013
Publisher: Packt
Pages: 116
ISBN: 9781783285259

 

Chapter 1. Hadoop Security Overview

Like most development projects, Hadoop projects start with a proof of concept (POC). Especially because the technology is new and continuously evolving, the focus always begins with figuring out what it can offer and how to leverage it to solve different business problems, be it consumer analysis, breaking-news processing, and so on. Being an open source framework, Hadoop has its own nuances and requires a learning curve. As these POCs mature and move to the pilot and then the production phase, new infrastructure has to be set up. Questions then arise around maintaining the newly set up infrastructure, including questions on data security and the overall ecosystem's security. A few of the questions that infrastructure administrators and security-conscious stakeholders would ask are:

How secure is the Hadoop ecosystem? How secure is the data residing in Hadoop? How would different teams, including business analysts, data scientists, developers, and others in the enterprise, access the Hadoop ecosystem in a secure manner? How do we enforce existing enterprise security models in this new infrastructure? Are there any best practices for securing such an infrastructure?

This chapter begins the journey of answering these questions and provides an overview of the typical challenges faced in securing a Hadoop-based Big Data ecosystem. We will look at the key security considerations and then present a security reference architecture that can be used for securing Hadoop.

The following topics will be covered in this chapter:

  • Why do we need to secure a Hadoop-based ecosystem?

  • The challenges in securing such an infrastructure

  • Important security considerations for a Hadoop ecosystem

  • The reference architecture for securing a Hadoop ecosystem

 

Why do we need to secure Hadoop?


Enterprise data consists of crucial information related to sales, customer interactions, human resources, and so on, and is locked securely within systems such as ERP, CRM, and general ledger systems. In the last decade, enterprise data security has matured significantly as organizations learned lessons from data security incidents that cost them billions. As the services industry has grown and matured, most systems are outsourced to vendors who deal with crucial client information much of the time. As a result, security and privacy standards such as HIPAA, HITECH, PCI, SOX, ISO, and COBIT have evolved, requiring service providers to comply with these regulatory standards to fully safeguard their clients' data assets. The result is very strict data security enforcement within enterprises, covering service providers as well as clients. There is absolutely no tolerance for data security violations.

Over the last eight years of its development, Hadoop has reached a mature state where enterprises have started adopting it for their Big Data processing needs. The prime use case is to gain strategic and operational advantages from their humongous data sets. However, to do any analysis on top of these datasets, we need to bring them into the Hadoop ecosystem for processing. So the immediate question that arises with respect to data security is: how secure is the data stored inside the Hadoop ecosystem?

The question is not just about securing the source data that is moved from the enterprise systems to the Hadoop ecosystem. Once these datasets land in the Hadoop ecosystem, analysts and data scientists perform large-scale analytics and machine-learning-based processing to derive business insights. These insights are of great importance to the enterprise; in the hands of a competitor or any unauthorized personnel, they could be disastrous to the business. It is these business insights that are highly sensitive and must be fully secured.

Any data security incident will cause business users to lose their trust in the ecosystem. Unless the business teams have confidence in the Hadoop ecosystem, they won't take the risk of investing in Big Data. Hence, the success or failure of Big Data projects really depends on how secure the data ecosystem is going to be.

 

Challenges for securing the Hadoop ecosystem


Big Data brings challenges not only for storing, processing, and analyzing data but also for managing and securing these large data assets. Hadoop was not built with security in mind. As enterprises started adopting Hadoop, the Kerberos-based security model evolved within it. But given the distributed nature of the ecosystem and the wide range of applications built on top of Hadoop, securing Hadoop from an enterprise context is a big challenge.

A typical Big Data ecosystem has multiple stakeholders who interact with the system. For example, expert users (business analysts and data scientists) within the organization interact with the ecosystem using business intelligence (BI) and analytical tools, and need deep access to the data to perform various analyses; a business analyst in the finance department should not be able to see data from the HR department, and so on. BI tools, in turn, need a wide range of system-level access to the Hadoop ecosystem, depending on the protocol and data they use for communicating with it.
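
As a toy illustration only (this is not a Hadoop API; real deployments enforce such rules with HDFS permissions, ACLs, or authorization tools), the department-level restriction described above amounts to a role-to-dataset grant mapping. The role and department names below are hypothetical:

```python
# Toy sketch of department-scoped data access (illustrative only; in a real
# cluster this policy lives in HDFS ACLs or an authorization layer).

# Hypothetical grants: which departments' data each role may read.
GRANTS = {
    "finance_analyst": {"finance"},
    "hr_analyst": {"hr"},
    "data_scientist": {"finance", "hr"},  # cross-department access, if approved
}

def can_read(role: str, dataset_department: str) -> bool:
    """Return True if the role is granted read access to the department's data."""
    return dataset_department in GRANTS.get(role, set())

# A finance analyst must not see HR data:
assert can_read("finance_analyst", "finance")
assert not can_read("finance_analyst", "hr")
```

The point of the sketch is that every (user, dataset) interaction must pass through an explicit policy check; unknown roles get no access by default.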

One of the biggest challenges for Big Data projects within enterprises today is about securely integrating the external data sources (social blogs, websites, existing ERP and CRM systems, and so on). This external connectivity needs to be established so that the extracted data from these external sources is available in the Hadoop ecosystem.

Hadoop ecosystem tools such as Sqoop and Flume were not originally built with full enterprise-grade security. Cloudera, MapR, and a few others have made significant contributions toward making these ecosystem components enterprise grade, resulting in Sqoop 2, Flume NG, and Hive Server 2. Apart from these, there are multiple security-focused projects within the Hadoop ecosystem, such as Cloudera Sentry (http://www.cloudera.com/content/cloudera/en/products/cdh/sentry.html), Hortonworks Knox Gateway (http://hortonworks.com/hadoop/knox-gateway/), and Intel's Project Rhino (https://github.com/intel-hadoop/project-rhino/). These projects are making significant progress toward providing enterprise-grade security in Apache Hadoop. A detailed understanding of each of these ecosystem components is needed to deploy them in production.

Another area of concern within enterprises is the need to integrate the existing enterprise Identity and Access Management (IDAM) systems with the Hadoop ecosystem. With such integration, enterprises can extend their identity and access management to the Hadoop ecosystem. However, these integrations bring multiple challenges, as Hadoop was not inherently built with such enterprise integrations in mind.
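
To make the integration concrete, Hadoop can source group memberships from an enterprise LDAP or Active Directory server through its LdapGroupsMapping implementation. The following core-site.xml fragment is only a sketch; the server URL, bind user, and search base are placeholders that vary per deployment:

```xml
<!-- core-site.xml (sketch): resolve users' groups via enterprise LDAP.
     Host name, bind DN, and search base below are placeholders. -->
<property>
  <name>hadoop.security.group.mapping</name>
  <value>org.apache.hadoop.security.LdapGroupsMapping</value>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.url</name>
  <value>ldap://ldap.example.com:389</value>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.bind.user</name>
  <value>cn=hadoop-bind,ou=service,dc=example,dc=com</value>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.base</name>
  <value>dc=example,dc=com</value>
</property>
```

With this in place, authorization decisions inside Hadoop follow the group memberships maintained in the enterprise directory rather than local Unix groups.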

Apart from ecosystem integration, there is often a need to bring sensitive information into the Big Data ecosystem to derive patterns and inferences from these datasets. As we move these datasets into the Big Data ecosystem, we need to mask or encrypt the sensitive information. Traditional data masking and encryption tools don't scale well to large Big Data volumes, so we need to identify new means of encrypting large-scale datasets.
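
One scalable approach (a sketch of the general technique, not a recommendation of any particular tool) is deterministic tokenization: each sensitive value is replaced with a keyed hash, so joins and aggregations still work on the masked column without exposing the original value. Because the function is a pure per-record transformation, it parallelizes naturally across a cluster:

```python
import hmac
import hashlib

def mask_value(value: str, key: bytes) -> str:
    """Deterministically mask a sensitive value with a keyed hash (HMAC-SHA256).

    The same input always maps to the same token, so masked columns can still
    be joined or grouped; without the key, the original value cannot be
    recovered except by brute force over the input space.
    """
    return hmac.new(key, value.encode("utf-8"), hashlib.sha256).hexdigest()

key = b"demo-secret-key"  # in practice, obtained from an enterprise key store
token = mask_value("123-45-6789", key)
assert token == mask_value("123-45-6789", key)      # deterministic: joins still work
assert token != mask_value("987-65-4321", key)      # distinct values stay distinct
```

Since the transformation is stateless, it can run inside a map-side task and scales linearly with the number of records, which is exactly the property traditional tools lack at Big Data volumes.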

Usually, as the adoption of Big Data increases, enterprises quickly move to a multicluster, multiversion scenario, where multiple versions of the Hadoop ecosystem operate within the enterprise. Also, sensitive data that was earlier barred from the Big Data platform slowly makes its way in. This brings additional challenges in addressing security in such a complex environment, as a small lapse in security could result in huge financial loss for the organization.

 

Key security considerations


As discussed previously, meeting the enterprise data security needs of a Big Data ecosystem requires a holistic approach that secures the entire ecosystem. Some of the key security considerations while securing a Hadoop-based Big Data ecosystem are:

  • Authentication: There is a need to provide a single point of authentication that is aligned and integrated with the existing enterprise identity and access management system.

  • Authorization: We need to enforce a role-based authorization with fine-grained access control for providing access to sensitive data.

  • Access control: There is a need to control who can do what on a dataset, and who can use how much of the processing capacity available in the cluster.

  • Data masking and encryption: We need to deploy proper encryption and masking techniques on data to ensure secure access to sensitive data for authorized personnel.

  • Network perimeter security: We need to deploy perimeter security for the overall Hadoop ecosystem that controls how data moves in and out of the ecosystem to other infrastructures. Design and implement the network topology to properly isolate the Big Data ecosystem from the rest of the enterprise, and provide network-level security by configuring appropriate firewall rules to prevent unauthorized traffic.

  • System security: There is a need to provide system-level security by hardening the OS and the applications installed as part of the ecosystem, and to address all known vulnerabilities of the OS and applications.

  • Infrastructure security: We need to enforce strict infrastructure and physical access security in the data center.

  • Audits and event monitoring: A proper audit trail is required for any changes to the data ecosystem, and audit reports should be provided for the various activities (data access and data processing) that occur within the ecosystem.
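
As a preview of the implementation chapters, the first two considerations above are switched on in Hadoop through core-site.xml. The fragment below is a minimal sketch only; a working secure cluster additionally requires per-daemon Kerberos principal and keytab properties, which later chapters cover in detail:

```xml
<!-- core-site.xml (sketch): enable Kerberos authentication and
     service-level authorization. Per-daemon principal and keytab
     properties are also required for a working secure cluster. -->
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>  <!-- the default, "simple", performs no authentication -->
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>
```

The remaining considerations (masking, perimeter, system, infrastructure, and audit security) are addressed outside core-site.xml, through the tools and processes discussed in the following chapters.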

Reference architecture for Big Data security

Implementing all the preceding security considerations is vital to building a trusted Big Data ecosystem within the enterprise. The following figure shows a typical Big Data ecosystem and how its various components and stakeholders interact with each other. Implementing the security controls in each of these interactions requires elaborate planning and careful execution.

The reference architecture depicted in the following diagram summarizes the key security pillars that need to be considered for securing a Big Data ecosystem. In the next chapters, we will explore how to leverage the Hadoop security model and various existing enterprise tools to secure the Big Data ecosystem.

In Chapter 4, Securing the Hadoop Ecosystem, we will look at the implementation details for securing the OS and the applications deployed along with Hadoop in the ecosystem. In Chapter 5, Integrating Hadoop with Enterprise Security Systems, we look at the corporate network perimeter security requirements, how to secure the cluster, and how authorization defined within the enterprise identity management system can be integrated with the Hadoop ecosystem. In Chapter 6, Securing Sensitive Data in Hadoop, we look at the encryption implementation for securing sensitive data in Hadoop. In Chapter 7, Security Event and Audit Logging in Hadoop, we look at security incident and event monitoring, along with the security policies required to address audit and reporting requirements.

 

Summary


In this chapter, we reviewed the overall challenges of securing Hadoop-based Big Data ecosystem deployments. We looked at the two different types of data stored in the Hadoop ecosystem (source data and derived insights) and how important it is to secure these datasets to retain business confidence. We detailed the key security considerations for securing Hadoop and presented an overall security reference architecture that can be used as a guiding light for the security design of a Big Data ecosystem. In the rest of the book, we will use this reference architecture as a guide to implementing a secure Hadoop-based Big Data ecosystem.

In the next chapter, we will look in depth at the Kerberos security model and how this is deployed in a secured Hadoop cluster. We will look at the Hadoop security model in detail and understand the key design considerations based on the current Hadoop security implementation.

About the Author

  • Sudheesh Narayanan

    Sudheesh Narayanan is a Technology Strategist and Big Data practitioner with expertise in technology consulting and implementing Big Data solutions. With over 15 years of IT experience in Information Management, Business Intelligence, Big Data & Analytics, and Cloud & J2EE application development, he has architected, designed, and developed Big Data products, cloud management platforms, and highly scalable platform services. His Big Data expertise includes Hadoop and its ecosystem components, NoSQL databases (MongoDB, Cassandra, and HBase), text analytics (GATE and OpenNLP), machine learning (Mahout, Weka, and R), and complex event processing. Sudheesh is currently working with Genpact as Assistant Vice President and Chief Architect – Big Data, with a focus on driving innovation and building intellectual property assets, frameworks, and solutions. Prior to Genpact, he was the co-inventor and Chief Architect of the Infosys BigDataEdge product.
