You're reading from Data Engineering with Scala and Spark

Product type: Book
Published in: Jan 2024
Publisher: Packt
ISBN-13: 9781804612583
Edition: 1st Edition
Authors (3):
Eric Tome

Eric Tome has over 25 years of experience working with data. He has contributed to and led teams that ingested, cleansed, standardized, and prepared data used by business intelligence, data science, and operations teams. He has a background in mathematics and currently works as a senior solutions architect at Databricks, helping customers solve their data and AI challenges.

Rupam Bhattacharjee

Rupam Bhattacharjee works as a lead data engineer at IBM. He has architected and developed data pipelines, processing massive structured and unstructured data using Spark and Scala for on-premises Hadoop and K8s clusters on the public cloud. He has a degree in electrical engineering.

David Radford

David Radford has worked in big data for over 10 years, with a focus on cloud technologies. He led consulting teams for several years, completing a migration from legacy systems to modern data stacks. He holds a master's degree in computer science and works as a senior solutions architect at Databricks.


Test-Driven Development, Code Health, and Maintainability

In this chapter, we are going to look at some software development best practices and learn how to apply them to data engineering. The topics covered in this chapter will go a long way toward helping you identify defects early, write code in a consistent style, and address potential security vulnerabilities as part of development. For example, test-driven development (TDD) requires writing test cases before the actual application code. Because we start with the tests, the application code naturally ends up in a form that is easy to test. Another example is code formatting: each of us has our own way of writing programs, and consistency across code written by different developers reduces the time it would otherwise take to adapt to a particular coding style.

Since a single chapter cannot cover such a vast topic in detail, we are going to provide a high-level...

Technical requirements

You need to have Scala installed locally and should be able to update the build. You also need to have Docker installed on your machine. If you have not done so already, please refer to Chapter 2 for detailed steps.

Introducing TDD

TDD is a topic that is broad and deserves its own book. However, we will cover the basics so that you can apply TDD to your Scala data engineering projects.

One essential aspect of TDD in data engineering is testing the data transformations and manipulations within the pipelines you create. This involves writing unit tests that verify the correctness and accuracy of transformations, aggregations, filters, and other data manipulation operations. Unit tests also ensure that the code you create or change doesn't break existing processes built by you or anyone else on your team.

To accomplish this, it is important to write code that is easily testable. You can do this by creating functions that each perform one action and then composing those functions to build your applications. Small, single-purpose functions are easy to test in isolation and easy to refactor, which helps maintain code health and maintainability over time.
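As an illustration (a sketch, not taken from the book's code base), here is what this looks like with ScalaTest as the test framework: a small, single-purpose cleansing function, and the tests that, under TDD, would be written first and fail until the function is implemented. The names `Cleansing`, `standardize`, and the ScalaTest dependency are assumptions for the example.

```scala
// build.sbt (assumed): libraryDependencies += "org.scalatest" %% "scalatest" % "3.2.18" % Test

import org.scalatest.funsuite.AnyFunSuite

// A small, single-purpose function that is easy to test in isolation
// and easy to compose into a larger pipeline.
object Cleansing {
  // Trim whitespace and lowercase a raw column value; None for blank input.
  def standardize(raw: String): Option[String] = {
    val cleaned = raw.trim.toLowerCase
    if (cleaned.isEmpty) None else Some(cleaned)
  }
}

// In TDD, these tests exist before Cleansing.standardize is written.
class CleansingSpec extends AnyFunSuite {
  test("standardize trims and lowercases") {
    assert(Cleansing.standardize("  ALICE ") == Some("alice"))
  }
  test("standardize returns None for blank input") {
    assert(Cleansing.standardize("   ").isEmpty)
  }
}
```

Because `standardize` is pure (no I/O, no shared state), the tests need no setup or mocking, which is exactly what makes small composed functions pleasant to test and refactor.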

...

Running static code analysis

Static code analysis is a debugging method that is performed without running the code. With application security taking center stage, it is very important to catch potential vulnerabilities early in the development phase and address them as you build your application code. Static code analysis helps developers catch issues such as the following:

  • Coding standard violations
  • Security vulnerabilities
  • Programming errors

There are several tools available for static code analysis. For this book, we are going to look at SonarQube, which can analyze over 30 different programming languages and is one of the most widely adopted tools for static code analysis.

Installing SonarQube locally

The easiest way to install SonarQube is to launch it as a Docker container using the following command:

docker run -d --name sonarqube -e SONAR_ES_BOOTSTRAP_CHECKS_DISABLE=true -p 9000:9000 sonarqube:latest

Once installed, open http://localhost:9000...
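With the server running, one common way to submit a project for analysis (an illustrative sketch, not necessarily the setup used later in the book) is the standalone SonarScanner CLI. The project key, source path, and token below are hypothetical; you generate a real token in the SonarQube UI first:

```
# Run from the project root; replace the project key and token with your own.
sonar-scanner \
  -Dsonar.projectKey=data-engineering-scala \
  -Dsonar.sources=src/main/scala \
  -Dsonar.host.url=http://localhost:9000 \
  -Dsonar.token=<your-token>
```

Once the scan completes, the results (bugs, vulnerabilities, and code smells) appear under the project's dashboard at http://localhost:9000.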

Understanding linting and code style

Scala is a type-safe language. By type safety, we mean that Scala enforces type checks at compile time, enabling programmers to catch and fix type errors early. A Scala program that compiles is free of the broad class of errors the type checker can detect, although runtime failures, such as unchecked casts with asInstanceOf, remain possible.

Though the type safety enforced by Scala is of immense help, there are still cases where a program compiles but has inherent flaws that the type checker will not call out. This is where linting tools come into play: they highlight potential bugs by analyzing the source code. Note that there is no clear delineation between a linter and a static code analysis tool, and the two can be used to complement each other.
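For example (a sketch written for this discussion, not from the book), the following compiles cleanly yet can fail at runtime. Calling a partial method such as Option#get is exactly the kind of pattern a Scala linter flags:

```scala
// Compiles fine, but Option#get throws NoSuchElementException when the
// Option is None -- a flaw the type checker does not call out.
def firstHost(hosts: List[String]): String =
  hosts.headOption.get

// A total alternative that a linter accepts: supply an explicit default.
def firstHostSafe(hosts: List[String]): String =
  hosts.headOption.getOrElse("localhost")
```

The type checker sees both versions as well-typed; only the linter distinguishes the partial call from the total one.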

We will look at WartRemover, which is a Scala linting tool, next.

Linting code with WartRemover

There are several linting tools available to use. In this section, we are going to look at WartRemover and some of its predefined checks. We first need to add WartRemover to plugins.sbt. Here is...

Summary

In this chapter, we looked at various software engineering best practices, such as TDD, unit and integration testing, code coverage, static code analysis, and code style and formatting. We have seen how TDD helps with building code that is easy to test and maintain, how code coverage shows you how much of your code base is exercised by unit tests, and how static code analysis can help you address potential vulnerabilities. Though we have shown how to run these tests and checks locally, we usually want to run them in our Git repositories as part of CI/CD pipelines.

In the next chapter, we are going to look at CI/CD with GitHub.
