Data Lake Development with Big Data

Learn
  • Identify the need for a Data Lake in your enterprise context and learn how to architect one
  • Learn to build various tiers of a Data Lake, such as data intake, management, consumption, and governance, with a focus on practical implementation scenarios
  • Find out the key considerations involved in building each tier of the Data Lake
  • Understand Hadoop-oriented data transfer mechanisms for ingesting data in batch, micro-batch, and real-time modes
  • Explore various data integration needs and learn how to perform data enrichment and data transformations using Big Data technologies
  • Enable data discovery on the Data Lake so that users can find the data they need
  • Discover how data is packaged and provisioned for consumption
  • Comprehend the importance of including data governance disciplines while building a Data Lake
About

A Data Lake is a highly scalable platform for storing huge volumes of multistructured data from disparate sources with centralized data management services. This book explores the potential of Data Lakes and describes architectural approaches to building them: platforms that ingest, index, manage, and analyze massive amounts of data using batch and real-time processing frameworks. It guides you on how to build a Data Lake that is managed by Hadoop and accessed as required by other Big Data applications.

This book will guide readers, using best practices, in developing a Data Lake's capabilities. It focuses on architecting data governance, security, data quality, data lineage tracking, metadata management, and semantic data tagging. By the end of this book, you will have a good understanding of how to build a Data Lake for Big Data.

Features
  • Comprehend the intricacies of architecting a Data Lake and build a data strategy around your current data architecture
  • Efficiently manage vast amounts of data and deliver it to multiple applications and systems with a high degree of performance and scalability
  • Packed with industry best practices and use-case scenarios to get you up and running
Page Count: 164
ISBN: 9781785888083
Date of Publication: 26 Nov 2015

Authors

Pradeep Pasupuleti

Pradeep Pasupuleti has over 17 years of experience in architecting and developing distributed and real-time data-driven systems. He currently focuses on developing robust data platforms and data products fuelled by scalable machine-learning algorithms, and on delivering value to customers by applying his deep technical insight to their business problems.

Pradeep founded Datatma expressly to humanize and simplify Big Data and to unlock new value at a previously unimaginable scale and scope. He has created Centers of Excellence (COEs) to deliver quick wins with data products that analyze high-dimensional multistructured data using scalable natural language processing and deep learning techniques. He has also served in technology consulting and advisory roles for Fortune 500 companies.

Beulah Salome Purra

Beulah Salome Purra has over 11 years of experience and specializes in building large-scale distributed systems. Her core expertise lies in Big Data analytics. In her current role at ATMECS, she focuses on building robust and scalable data products that extract value from the organization's huge data assets.