Data Engineering with Apache Spark, Delta Lake, and Lakehouse


Product type: Book
Published: Oct 2021
Publisher: Packt
ISBN-13: 9781801077743
Pages: 480
Edition: 1st
Author: Manoj Kukreja

Table of Contents (17 chapters)

Preface
Section 1: Modern Data Engineering and Tools
Chapter 1: The Story of Data Engineering and Analytics
Chapter 2: Discovering Storage and Compute Data Lakes
Chapter 3: Data Engineering on Microsoft Azure
Section 2: Data Pipelines and Stages of Data Engineering
Chapter 4: Understanding Data Pipelines
Chapter 5: Data Collection Stage – The Bronze Layer
Chapter 6: Understanding Delta Lake
Chapter 7: Data Curation Stage – The Silver Layer
Chapter 8: Data Aggregation Stage – The Gold Layer
Section 3: Data Engineering Challenges and Effective Deployment Strategies
Chapter 9: Deploying and Monitoring Pipelines in Production
Chapter 10: Solving Data Engineering Challenges
Chapter 11: Infrastructure Provisioning
Chapter 12: Continuous Integration and Deployment (CI/CD) of Data Pipelines
Other Books You May Enjoy

Designing CI/CD pipelines

Before we dive into the actual development and implementation of CI/CD pipelines, we should design their layout. In typical data analytics projects, development revolves around two key areas:

  • Infrastructure Deployment: As discussed in the previous chapter, it is now recommended to perform cloud deployments using the Infrastructure as Code (IaC) practice. Infrastructure code has traditionally been developed by DevOps engineers, but data engineers are increasingly being asked to share this responsibility.
  • Data Pipelines: The development of data pipelines is typically handled entirely by data engineers. The code that's developed includes functionality to perform data collection, ingestion, curation, aggregation, governance, and distribution.

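To make the data pipeline concern above concrete, the stages it lists (collection, curation, aggregation) can be sketched as plain functions named after the medallion layers covered in Chapters 5, 7, and 8. This is a minimal, hypothetical illustration only; the function names and record shapes are assumptions, not code from the book.

```python
# Hypothetical sketch of the medallion-layer stages a data pipeline's
# CI/CD process would build and test. Names and schemas are illustrative.

def collect_to_bronze(raw_records):
    """Collection stage: land raw records unchanged (bronze layer)."""
    return list(raw_records)

def curate_to_silver(bronze):
    """Curation stage: drop malformed records, standardize types (silver layer)."""
    return [
        {"id": r["id"], "amount": float(r["amount"])}
        for r in bronze
        if "id" in r and "amount" in r
    ]

def aggregate_to_gold(silver):
    """Aggregation stage: summarize curated data for consumers (gold layer)."""
    return {"count": len(silver), "total": sum(r["amount"] for r in silver)}

raw = [{"id": 1, "amount": "10.5"}, {"id": 2, "amount": "4.5"}, {"bad": True}]
gold = aggregate_to_gold(curate_to_silver(collect_to_bronze(raw)))
# gold -> {"count": 2, "total": 15.0}
```

Keeping each stage as a small, independently testable unit is what makes a CI pipeline practical: each layer's transformation can be validated on sample data before deployment.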
Following the principles of continuous development, integration, and deployment, the recommended approach is to create two CI/CD pipelines, which we will refer to as the Electroniz...
