Building out the Groundwork

In this chapter, we will set up our environment and build the template for all the hard work we will do on our project in the final chapter. Some of the tooling we will use might be new, so we will introduce it with explanations. The main introduction is GitHub Actions, a CI tool used to automate code-related tasks; we will use it to run code checks in all of our repos. Poetry will be used to manage our Python code and package it for a PyPI repo. Organizing our code like this helps in many ways and allows us to share it across systems. Lastly, we will work with the public PyPI service to deploy and manage our Python packages. This isn't the normal process; the public service was used here to avoid creating a private server. In production, you would typically use a hosted PyPI service or host your own server. These tools were chosen simply to introduce something new in the final project. As mentioned before...
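To make the CI idea concrete, here is a minimal sketch of a GitHub Actions workflow that runs checks on every push. This is an illustration of the general shape, not the exact workflow used in the project; the file path is the one GitHub expects, but the specific check shown (pre-commit) is an assumption and presumes the repo has a .pre-commit-config.yaml:

# written from the shell; the heredoc body is the workflow file itself
mkdir -p .github/workflows
cat <<'EOF' > .github/workflows/ci.yml
name: ci
on: [push, pull_request]
jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.10"
      # assumes a .pre-commit-config.yaml exists in the repo
      - run: pip install pre-commit
      - run: pre-commit run --all-files
EOF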

Technical requirements

The tooling used in this chapter is tied to the tech stack chosen for the book. All vendors should offer a free trial account.

I will be using the following:

  • Databricks
  • GitHub
  • Terraform
  • PyPI

Setting up your environment

Before we begin our chapter, let’s take the time to set up our working environment.

The Databricks CLI

The first step is to install the databricks-cli tool using the pip Python package manager:

pip install databricks-cli

Let’s validate that everything has been installed correctly. If this command prints the tool’s version, everything is working correctly:

databricks --version

Now let’s set up authentication. First, go into the Databricks UI and generate a personal access token (PAT). The following command will prompt you for your Databricks instance’s host and the token you just created:

databricks configure --token
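Behind the scenes, this writes your answers to a profile file in your home directory. With placeholder values (the host format varies by cloud provider), the result looks roughly like this:

# ~/.databrickscfg -- host and token values are placeholders
[DEFAULT]
host = https://<your-instance>.cloud.databricks.com
token = dapiXXXXXXXXXXXXXXXX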

We can quickly determine whether the CLI is set up correctly by running the following command, and if no error is returned, you have a working setup:

databricks fs ls

Git

Git will be used in this chapter and there are many ways to install it. I would recommend using https://git-scm.com/download...
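Once Git is installed, a quick sanity check plus the one-time identity setup (the name and email values are placeholders) looks like this:

git --version
git config --global user.name "Your Name"
git config --global user.email "you@example.com"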

Creating GitHub repos

So, we are going to set up our GitHub infrastructure and use GitHub Actions. First things first, let’s create our repositories. They will have empty README files. I am going to create five repositories, one for infrastructure as code, one for docs, one for an ML application, one for an ETL application, and one to manage DDL:

  • infra: gh repo create infra-project --public --add-readme
  • docs: gh repo create docs-project --public --add-readme
  • ML-Job: gh repo create ML-Jobs-project --public --add-readme
  • ETL-Jobs: gh repo create ETL-Jobs-project --public --add-readme
  • SCHEMA-Jobs: gh repo create SCHEMA-Jobs-project --public --add-readme
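If you want to confirm the repositories were created, the gh CLI can list them; the output should include the five -project repos alongside anything else on your account:

gh auth status
gh repo list --limit 10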

Now that every repo is created, let’s introduce a new tool we will use on all the Python repositories. We will use Poetry, a very easy-to-use package management system, to manage our projects. It will also let you deploy Python applications to PyPI very easily. To install Poetry...
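As a sketch of the Poetry flow we will rely on (the package name below is illustrative, and publishing assumes PyPI credentials are already configured):

# install Poetry via its official installer
curl -sSL https://install.python-poetry.org | python3 -

# scaffold a new package, then build and publish it
poetry new schema_jobs
cd schema_jobs
poetry build      # produces an sdist and a wheel under dist/
poetry publish    # uploads to PyPI; requires credentials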

Terraform setup

We will be using the infrastructure repo to store our infrastructure as code. I will go through the Terraform code in a little bit, but keep in mind we are using pre-commit on this repo also. It will reformat, lint, and syntax check your Terraform code. I can’t stress enough how useful this is, and I wish more teams followed this approach.
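For reference, here is what that looks like from the shell. The hooks themselves are declared in the repo’s .pre-commit-config.yaml, and the last three commands are the standalone equivalents of the checks pre-commit wraps:

pip install pre-commit
pre-commit install            # registers the git hook in this repo
pre-commit run --all-files    # runs every configured check once

terraform fmt -recursive      # reformat all .tf files
terraform init -backend=false # initialize without touching remote state
terraform validate            # syntax and consistency check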

Initial file setup

I’m going to shy away from repeating the exact same setup for each Python repo, given that in this chapter we will only create the base template. Instead, I will walk through one repo and the infrastructure repo and explain the key files.

I created a visible tree of the folder structure using Windows. The majority of the work is done in Linux, but here and there I switch to Windows:

tree /f

Schema repository

Here, we can see the basic folder structure. I have removed anything not committed or not useful for the explanation, such as pytest cache folders. I will show...
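For orientation, a Poetry-scaffolded repository typically ends up with a layout along these lines (the names here are illustrative, not the exact tree from the project):

SCHEMA-Jobs-project
│   .pre-commit-config.yaml
│   pyproject.toml
│   README.md
│
├───schema_jobs
│       __init__.py
│
└───tests
        __init__.py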

Summary

We wrapped up a very fun chapter deep-diving into several tools to set up our new project. We set up GitHub, Git, Terraform, pre-commit, and a PyPI project. We also started the code scaffolding for all our apps. It might not be clear at first, but these initial steps are some of the most important in any project. In the next chapter, we will look at the Python code for each app and how the users will interact with our data.
