Exploring the seven dimensions of data quality

Now that you understand why data quality is important, let’s learn how to measure the quality of datasets.

A lot has already been written about data quality in the literature – for instance, we can cite the DAMA-DMBOK (Data Management Body of Knowledge) book. In this chapter, we have decided to focus on seven dimensions: accuracy, completeness, consistency, conformity, integrity, timeliness, and uniqueness:

Figure 1.4 – Dimensions of data quality

In the following sections, we will cover these dimensions in detail and explain the potential business impacts of poor-quality data in those dimensions.

Accuracy

The accuracy of data defines its ability to reflect the real-world facts it describes. In a CRM, the contract type of a customer should be correctly associated with the right customer ID; otherwise, marketing actions could be wrongly targeted. For instance, in Figure 1.5, where the first dataset was incorrectly copied into the second one, you can see that the email addresses were mixed up:

Figure 1.5 – Example of inaccurate data
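One way to detect this kind of inaccuracy is to compare the copy against its source of truth, joining on the key and flagging rows whose values no longer match. The following is a minimal sketch with pandas; the crm_source and crm_copy DataFrames, their column names, and their values are illustrative rather than taken from the figure:

import pandas as pd

# Hypothetical source of truth and a copy of it (values are illustrative)
crm_source = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "email": ["a@example.com", "b@example.com", "c@example.com"],
})
crm_copy = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "email": ["a@example.com", "c@example.com", "b@example.com"],  # mixed up
})

# Join on the key and flag rows whose email no longer matches the source
check = crm_source.merge(crm_copy, on="customer_id", suffixes=("_src", "_copy"))
inaccurate = check[check["email_src"] != check["email_copy"]]
print(inaccurate)  # customers 2 and 3 are flagged as inaccurate

A check such as this only works when a trusted reference exists; when it doesn't, accuracy usually has to be assessed by sampling records and verifying them against the real world.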

Completeness

Data is considered complete if it contains everything the consumers need – it is about getting the right data to the right process. A dataset can be incomplete if it does not contain an expected column, or if values are missing. However, if the missing column is not used by your process, you can still evaluate the dataset as complete for your needs. In Figure 1.6, the page_visited column is missing, while other columns have missing values. This is very annoying for the marketing team, who are in charge of sending emails, as they cannot contact all their customers:

Figure 1.6 – Example of incomplete data

The preceding case is a clear example of how data producers' incentives can differ from consumers'. Maybe the producer left the email field optional to increase sales conversions, as asking for an email address creates friction. However, for a consumer using the data for an email campaign, this field is crucial.
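To catch this kind of gap before the campaign runs, a consumer can assert that the columns they rely on exist and count the missing values in each one. Here is a minimal sketch with pandas, assuming the consumer needs customer_id, email, and page_visited; the sample data is made up:

import pandas as pd

# Illustrative dataset: page_visited is absent and one email is missing
customers = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "email": ["a@example.com", None, "c@example.com"],
})

required_columns = {"customer_id", "email", "page_visited"}

missing_columns = required_columns - set(customers.columns)
null_counts = customers.isna().sum()

print(f"Missing columns: {missing_columns}")  # {'page_visited'}
print(null_counts)                            # email has 1 missing value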

Consistency

The way data is represented should be consistent across the system. If you record the addresses of customers and need the ZIP code for a model, the records have to be consistent, meaning that you will not record the city for some customers and the ZIP code for others. At a technical level, the representation of data can be inconsistent too. Look at Figure 1.7 – the has_confirmed column records Booleans as numbers for the first few rows and then as strings.

In this example, we can suppose the data source is a file, where fields can easily change from row to row. In a relational database management system (RDBMS), this issue can be avoided because the column's data type is enforced:

Figure 1.7 – Example of inconsistent data
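A simple way to surface this kind of inconsistency is to inspect how many distinct Python types end up in the column once the file is loaded. Here is a minimal sketch with pandas, assuming the file was read without an enforced schema; the sample values are illustrative:

import pandas as pd

# Booleans recorded as numbers for some rows and as strings for others
events = pd.DataFrame({"has_confirmed": [1, 0, 1, "true", "false"]})

# Count the distinct Python types present in the column
types_used = events["has_confirmed"].map(type).value_counts()
print(types_used)  # more than one type means the representation is inconsistent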

Conformity

Data should be collected in the right format. For instance, page_visited should only contain integers, order_id should be a string of characters, and in another dataset, a ZIP code may be a combination of letters and numbers. In Figure 1.8, you would expect to see an @ in the email address, but you can only see a username:

Figure 1.8 – Example of improper data
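Conformity checks are usually expressed as format rules, such as a regular expression the value must match. The following sketch with pandas flags email values without an @ and a domain; the pattern and the sample data are illustrative and not an exhaustive email validator:

import pandas as pd

customers = pd.DataFrame({"email": ["a@example.com", "bob", "c@example.com"]})

# Very loose pattern: something, an @, something, a dot, something
pattern = r"^[^@\s]+@[^@\s]+\.[^@\s]+$"
non_conforming = customers[~customers["email"].str.match(pattern, na=False)]
print(non_conforming)  # "bob" is flagged as not conforming to the expected format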

Integrity

Data transformations should ensure that data items keep their relationships. Integrity means that data is connected correctly and you don't have standalone data – for instance, an address not connected to any customer name. Integrity ensures the data in the dataset can be traced and connected to other data.

In Figure 1.9, an engineer has extracted the duration column, which is useless without order_id:

Figure 1.9 – Example of an integrity issue
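In practice, integrity is often checked as a referential constraint: every key in a derived dataset must resolve to a row in the dataset it was extracted from. Here is a minimal sketch with pandas; the orders and durations DataFrames and their values are illustrative:

import pandas as pd

orders = pd.DataFrame({"order_id": ["A1", "A2", "A3"]})
durations = pd.DataFrame({"order_id": ["A1", "A4"], "duration": [12, 7]})

# Rows in durations whose order_id does not exist in orders break integrity
orphans = durations[~durations["order_id"].isin(orders["order_id"])]
print(orphans)  # order A4 is an orphan record with no matching order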

Timeliness

Time is also an important dimension of data quality. If you run a weekly report, you want to be sure that the data is up to date and that the process runs at the correct time. For instance, if a weekly sales report is created, you expect it to cover last week's sales. If you receive outdated data because the database was not updated with the new week's data, you may see that the total in this week's report is the same as last week's, which will lead to wrong conclusions.

Time-sensitive data, if not delivered on time, can lead to inaccurate insights, misinformed decisions, and, ultimately, monetary losses.
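A common way to monitor timeliness is a freshness check: compare the most recent timestamp in the dataset against the time the report runs. Here is a minimal sketch with pandas, assuming a sales table with an order_date column; the column name, the sample dates, and the seven-day threshold are illustrative:

import pandas as pd

sales = pd.DataFrame({
    "order_id": ["A1", "A2"],
    "order_date": pd.to_datetime(["2023-11-06", "2023-11-07"]),
})

# The data is considered stale if the newest record is older than the allowed delay
max_allowed_age = pd.Timedelta(days=7)
age = pd.Timestamp.now() - sales["order_date"].max()
if age > max_allowed_age:
    print(f"Data is stale: the last record is {age} old")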

Uniqueness

The uniqueness dimension ensures there is no ambiguous data: a data record should not be duplicated and records should not overlap. In Figure 1.10, you can see that the same order ID was used for two distinct orders, and that an order was recorded twice, which is possible because no primary key is defined in the dataset. This kind of discrepancy can lead to various issues, such as incorrect order tracking, inaccurate inventory management, and potentially negative customer experiences:

Figure 1.10 – Example of duplicated data
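Both problems can be detected with simple duplicate checks: exact duplicate rows, and key values that appear more than once. Here is a minimal sketch with pandas, using an illustrative orders table rather than the data in the figure:

import pandas as pd

orders = pd.DataFrame({
    "order_id": ["A1", "A1", "A2", "A2"],
    "amount":   [10,   25,   40,   40],
})

# Full-row duplicates: the same order recorded twice
duplicate_rows = orders[orders.duplicated(keep=False)]
# Reused keys: the same order_id attached to distinct orders
duplicate_keys = orders[orders.duplicated("order_id", keep=False)]
print(duplicate_rows)
print(duplicate_keys)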

Consequences of data quality issues

Data quality issues may have disastrous effects. When the quality standard is not met, the consumer can face various consequences.

Sometimes, a report won't be created at all because the data quality issue breaks the pipeline, delaying the decision-making process. Other times, the issue is more subtle. For instance, because of a completeness issue, the marketing team could send emails starting with "Hello {firstname}", undermining their professional image with customers. The result can damage the company's profit or reputation.

Nevertheless, it is important to note that not all data quality issues lead to a catastrophic outcome. An issue only matters if the affected data item is part of your pipeline; the consumer won't experience issues with data they don't need. However, this means that to ensure the quality of the pipeline, the producer needs to know what is done with the data and what the data consumer considers important. A data source that's important for your project may have no relevance for another project.

This is why, in the first stages of a project, a new approach must be implemented: the producer and the consumer have to come together to define the quality expectations of the pipeline. In the next section, we will see how this can be done with service-level agreements (SLAs) and service-level objectives (SLOs).
