Investigating the data lake era

The genesis of the data lake dates to 2004, when Google researchers Jeffrey Dean and Sanjay Ghemawat published a paper titled MapReduce: Simplified Data Processing on Large Clusters. This paper laid the foundation for a new technology that evolved into Hadoop, originally created by Doug Cutting and Mike Cafarella.

Hadoop later became a project of the Apache Software Foundation, a decentralized open source community of developers, and has remained one of the top open source projects in the Apache ecosystem.

Hadoop was based on a simple concept: divide and conquer. The idea entailed three steps:

  1. Split the data into multiple files and distribute them across the nodes of a cluster.
  2. Process the data locally on each node, using the compute co-located with the data.
  3. Use an orchestrator that communicates with each node and aggregates the partial results into the final output (see the sketch after this list).
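
To make the three steps concrete, here is a minimal single-machine sketch of the idea, using the word-count example from the MapReduce paper. This is not Hadoop itself: process workers stand in for cluster nodes, and all names are illustrative.

```python
from collections import defaultdict
from concurrent.futures import ProcessPoolExecutor

# Step 2 (per node): count words in the chunk of data local to this worker.
def map_chunk(lines):
    counts = defaultdict(int)
    for line in lines:
        for word in line.split():
            counts[word] += 1
    return counts

# Steps 1 and 3 (orchestrator): split the input, hand chunks to parallel
# workers (standing in for cluster nodes), then aggregate partial results.
def word_count(lines, workers=4):
    chunk_size = max(1, len(lines) // workers)
    chunks = [lines[i:i + chunk_size] for i in range(0, len(lines), chunk_size)]
    totals = defaultdict(int)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for partial in pool.map(map_chunk, chunks):
            for word, n in partial.items():
                totals[word] += n
    return dict(totals)

if __name__ == "__main__":
    print(word_count(["big data", "data lake", "data swamp"]))
```

In a real cluster, the key difference is that the data is already distributed before the computation starts, so the work moves to the data rather than the other way around.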

Over the years, this concept gained traction, and a new paradigm for analytics emerged: the data lake. A typical data lake pattern is depicted in the following figure:

Figure 1.8 – A typical data lake pattern

This pattern addressed the challenges prevalent in the EDW pattern. The key advantages of the data lake architecture pattern are as follows:

  • The data lake caters to both structured and unstructured data. The Hadoop ecosystem was developed from the start to store and process formats such as JSON, text, and images, which the EDW pattern was never designed to store or analyze.
  • The data lake pattern can process large volumes of data at a relatively low cost. Data lakes can store and process volumes on the order of hundreds of terabytes (TB) or even petabytes (PB), which the EDW pattern struggled to handle efficiently.
  • Data lakes can better address fast-changing business requirements, and evolving AI technologies can leverage them more readily than a traditional EDW.

This pattern has been widely adopted because it meets a pressing need. However, it has its own challenges, a few of which are as follows:

  • It is easy for a data lake to become a data swamp. Data lakes ingest any form of data and store it in its raw form; the philosophy is to ingest data first and figure out what to do with it later (a schema-on-read style, sketched after this list). Governance slips easily under this philosophy, and without proper data governance, data mushrooms all over the lake until it becomes a data swamp.
  • Data lakes also struggle with the rapid evolution of technology. The data lake paradigm relies mainly on open source software, which evolves rapidly into behemoths that can become too difficult to manage. Because the software is predominantly community-driven, it often lacks proper enterprise support, which creates significant maintenance overhead and implementation complexity. Many features that enterprises demand, such as a robust security framework, are missing from open source software.
  • Data lakes focus far more on AI enablement than on BI. It was natural for the open source evolution to focus on enabling AI: AI was on its own journey, riding a wave that crested together with Hadoop, while BI, already mature in its life cycle, was seen as retro.
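
To illustrate the ingest-first, schema-on-read philosophy mentioned in the first point above, here is a minimal sketch assuming PySpark is available; the path, app name, and view name are illustrative, not from the book.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("schema-on-read").getOrCreate()

# Ingest first: files land in the lake as-is, with no schema enforced on write.
raw = spark.read.json("/data/lake/raw/events/")  # illustrative lake path

# Figure it out later: a schema is inferred (or imposed) only at read time.
raw.printSchema()
raw.createOrReplaceTempView("events")
spark.sql("SELECT COUNT(*) FROM events").show()
```

The flexibility is real, but nothing in this flow forces documentation, ownership, or quality checks on the ingested files, which is exactly how governance slips and a lake turns into a swamp.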

Soon, it became evident that the data lake pattern alone wouldn't be sustainable in the long run. There was a need for a new paradigm that fuses these two patterns.
