Ever asked yourself whether your data is clean enough to be analyzed? It's likely that everyone who works with data has, which is why this chapter is dedicated to getting your data ready for analysis, otherwise known as data cleaning.
The first part of this chapter is theory-oriented and does not include exercises. A careful reading of this information is encouraged, since it provides a foundation for greater insight. The latter portion of the chapter provides various exercises specifically focused on data preparation.
Now let's dive into this fascinating topic with the goal of enriching our understanding and becoming ever-better data stewards.
In this chapter, we will discuss the following topics: Tableau's data-handling engine, Hyper, and the VizQL module; data mining and knowledge discovery process models, with a focus on CRISP-DM; and hands-on exercises for surveying, cleaning, and extracting data, including the use of regular expressions.
Since Tableau Desktop 10.5 has been on the market for some time, you may already have heard of Hyper. Regardless of whether you have or not, continue reading for a primer on this useful tool!
In this section, we will explore Tableau's data-handling engine, and how it enables structured yet organic data mining processes in enterprises. Since the release of Tableau 10.5, we can now make use of Hyper, a high-performing database, allowing us to query source data faster than ever before. Hyper is Tableau's data-handling engine, which is usually not well understood by even advanced developers, because it's not an overt part of day-to-day activities; however, if you want to truly grasp how to prepare data for Tableau, this understanding is crucial.
Hyper originally started as a research project at the Technical University of Munich in 2008. In 2016, the project was acquired by Tableau and became Tableau's dedicated data engine group, keeping its base and employees in Munich. Initially, in 10.5, Hyper replaced the earlier data-handling engine for extracts only. Live connections are still not touched by Hyper, but Tableau Prep Builder now runs on the Hyper engine too, with more use cases to follow.
What makes Hyper so fast? Let's have a look under the hood!
The vision shared by the founders of Hyper was to create a high-performing, next-generation database; one system, one state, no trade-offs, and no delays. And it worked—today, Hyper can serve general database purposes, data ingestion, and analytics at the same time.
Memory prices have decreased exponentially. Back in 1996, 1 GB of memory cost around $45,000 to produce. Today, far more than that can be found in every phone, or even in a smartwatch, at a production cost as low as $2. The same goes for CPUs: transistor counts increased according to Moore's Law, while other features stagnated. Memory is cheap, but processing still needs to be improved.
Moore's Law is the observation made by Intel co-founder Gordon Moore that the number of transistors on a chip doubles every two years while the costs are halved. In 1965, Gordon Moore noticed that the number of transistors per square inch on integrated circuits had doubled every two years since their invention. Information on Moore's Law can be found on Investopedia at https://www.investopedia.com/terms/m/mooreslaw.asp.
While experimenting with Hyper, the founders found that handwritten C code is faster than any existing database engine, so they came up with the idea of translating Tableau queries into LLVM code and optimizing that code simultaneously, all behind the scenes, so that the Tableau user doesn't notice it. This translation and optimization comes at a cost: traditional database engines can start executing code immediately, whereas Tableau first has to translate queries into code, optimize that code, and compile it into machine code, after which it can be executed. So the big question is, is it still faster? As shown by many tests on Tableau Public and other workbooks, the answer is yes!
Furthermore, if a query is estimated to run faster without being compiled to machine code, Tableau executes it immediately in its own virtual machine (VM). On top of this, Hyper can utilize 99% of available CPUs, whereas other parallelized processes can only utilize 29% of available CPUs. This is due to the unique and innovative technique of morsel-driven parallelization.
For those of you that want to know more about morsel-driven parallelization, a paper, which later on served as a baseline for the Hyper engine, can be found at https://15721.courses.cs.cmu.edu/spring2016/papers/p743-leis.pdf.
If you want to know more about the Hyper engine, I can highly recommend the following video at https://youtu.be/h2av4CX0k6s.
Hyper parallelizes three steps of traditional data warehousing operations:
Executing those steps simultaneously makes it more efficient and more performant, as opposed to traditional systems where those three steps are separated and executed one after the other.
To sum up, Hyper is a highly specialized database engine that allows us, as users, to get the best out of our queries. If you recall, in Chapter 1, Getting Up to Speed – A Review of the Basics, we saw that every change on a sheet or dashboard, including dragging and dropping pills, filters, and calculated fields, among others, is translated into queries. Those queries look very much like SQL; however, in Tableau the querying engine is called VizQL.
VizQL, another hidden gem in Tableau Desktop, is responsible for turning data into visualizations and is executed entirely in memory. The advantage is that no additional space is required on the database side. VizQL is generated when a user places a field on a shelf; it is then translated into SQL, MDX, or Tableau Query Language (TQL) and passed to the backend data source via a driver. Two aspects of the VizQL module are of primary importance: the ability to change a field's role from measure to dimension (or vice versa) on the fly, and table calculations.
We'll discuss these two aspects in more detail in the following sections.
In this section, we'll demonstrate how changing a worksheet's field attribution will allow you more flexibility in your dashboard creation.
Let's look at the World Happiness Report. Please navigate to the workbook associated with this chapter on https://public.tableau.com/profile/marleen.meier and open the Score per country
sheet. We create the following worksheet by placing AVG(Happiness Score) and Country on the Columns and Rows shelves respectively. AVG(Happiness Score) is, of course, treated as a measure in this case. Lastly, sort the countries by their happiness score, highest to lowest.
Let's take a look at this in the following screenshot:
Figure 2.1: Happiness score by country
Next, please create a second worksheet called Score/Rank
to analyze the scores relative to the ranks by using Happiness Score on Rows and Happiness Rank on Columns. Both pills should be continuous, hence green-colored.
In order to accomplish this, the user defines Happiness Rank as a Dimension, as shown in the following screenshot:
Figure 2.2: Ranking score per country
Please note that Columns and Rows have been moved to the left for better readability. This can be achieved by dragging and dropping the shelves.
In order to add steps to your visualization, click on Path in the Marks Card and select the second option, Step.
You can view the code generated by Tableau that is passed to the data source with the performance recorder, which is accessible through Help, then Settings and Performance, and then Start Performance Recording. See Chapter 13, Improving Performance, for additional details.
Studying the SQL generated by VizQL to create the preceding visualization is particularly insightful:
SELECT ['Happiness Report$'].[Happiness.Rank] AS [Happiness.Rank],
  AVG(['Happiness Report$'].[Happiness.Score]) AS [avg:Happiness.Score:ok]
FROM [dbo].['Happiness Report$'] ['Happiness Report$']
GROUP BY ['Happiness Report$'].[Happiness.Rank]
The GROUP BY
clause clearly communicates that Happiness Rank
is treated as a dimension, because grouping is only possible on dimensions. The takeaway is that VizQL enables the analyst to change the SQL that Tableau generates simply by switching a field from measure to dimension, rather than altering the source metadata. This on-the-fly ability enables creative exploration of the data that's not possible with other tools, and avoids lengthy exercises attempting to define all possible uses for each field.
The previous section taught us how we can manipulate data types in Tableau without touching the data source or its metadata. In the next section, we will take a closer look at table calculations.
In this section, we will explore how VizQL's table calculations can be used to add data to a dashboard without adding any data to the data source.
In the following example, which can be viewed by opening Sheet 4 in this chapter's workbook, note that Freedom on the vertical axis has the Moving Average quick table calculation applied. Calculating a moving average, running total, or other such comparison calculations can be quite challenging to accomplish in a data source. Not only must a data architect consider which comparison calculations to include in the data source, but they must also determine the dimensions for which these calculations are relevant.
VizQL greatly simplifies such challenges using table calculations, as seen in the following screenshot:
Figure 2.3: Moving average
Taking a look at the relevant portion of SQL generated by the preceding worksheet shows that the table calculation is not performed by the data source. Instead, it is performed in Tableau by the VizQL module.
The following is the SQL query:
SELECT SUM([Happiness Report$].[Freedom]) AS [sum:Freedom:ok],
  [Happiness Report$].[Happiness.Rank] AS [Happiness.Rank]
FROM [dbo].[Happiness Report$] [Happiness Report$]
GROUP BY [Happiness Report$].[Happiness.Rank]
To reiterate, nothing in the preceding call to the data source generates the moving average. Only an aggregated total is returned, and Tableau calculates the moving average with VizQL.
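If you would rather spell the logic out than rely on the quick table calculation menu, the same result can be written as an explicit calculated field. The following is a minimal sketch, assuming Tableau's default moving average window of the current mark plus the two preceding marks:
// Moving average of Freedom across the marks in the view
// (the current mark plus the two previous marks)
WINDOW_AVG(SUM([Freedom]), -2, 0)
Because WINDOW_AVG is a table calculation, it operates on the aggregated rows returned by the query above; the data source is still only asked for the plain SUM.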
This overview of the Tableau data-handling engine demonstrates a flexible approach to interfacing with data. Knowledge of the data-handling engine is helpful if you want to understand the parameters for Tableau data readiness. Two major takeaways from this section are as follows: first, switching a field between measure and dimension in the view changes the query Tableau generates without touching the underlying data or its metadata; second, table calculations let Tableau derive new values, such as moving averages, that do not exist in the data source.
Keeping these two takeaways in mind can reduce data preparation and data modeling efforts, and thus helps us streamline the overall data mining life cycle. Don't worry too much about data types, or about data that can be calculated from the fields you already have in your database; Tableau can do that work for you. In the next section, we will discuss what you should consider from a data source perspective.
Data modeling, data preparation, database design, data architecture—the question that arises is, how do these and other similar terms fit together? This is no easy question to answer! Terms may be used interchangeably in some contexts and be quite distinct in others. Also, understanding the interconnectivity of any technical jargon can be challenging.
In the data world, data mining and knowledge discovery process models attempt to consistently define terms and contextually position and define the various data sub-disciplines. Since the early 1990s, various models have been proposed.
In the following table, we can see a comparison of blueprints for conducting a data mining project with three data processing models, all of which are used to discover patterns and relationships in data in order to help make better business decisions.
The following comparison is adapted from A Survey of Knowledge Discovery and Data Mining Process Models by Lukasz A. Kurgan and Petr Musilek, published in The Knowledge Engineering Review, Volume 21, Issue 1, March 2006.
Later on, we will see how Tableau comes into play and makes this process easier and faster for us.
|  | KDD | CRISP-DM | SEMMA |
| --- | --- | --- | --- |
| Phase I | Selection | Business understanding | Sample |
| Phase II | Pre-processing | Data understanding | Explore |
| Phase III | Transformation | Data preparation | Modify |
| Phase IV | Data mining | Modeling | Model |
| Phase V | Interpretation/evaluation | Evaluation | Assess |
| Phase VI | Consolidate knowledge | Deployment | - |
Since CRISP-DM is used by four to five times as many people as the closest competing model (SEMMA), it is the model we will consider in this chapter. For more information, see http://www.kdnuggets.com/2014/10/crisp-dm-top-methodology-analytics-data-mining-data-science-projects.html.
The important takeaway is that each of these models grapples with the same problems, particularly concerning the understanding, preparing, modeling, and interpreting of data.
Cross Industry Standard Process for Data Mining (CRISP-DM) was created between 1996 and 2000 as a result of a consortium including SPSS, Teradata, Daimler AG, NCR Corporation, and OHRA. It divides the process of data mining into six major phases, as shown in the CRISP-DM reference model in the preceding comparison table.
This model provides a bird's-eye view of a data mining project life cycle. The sequence of the phases is not rigid; jumping back and forth between phases is allowed and expected. Data mining does not cease upon the completion of a particular project. Instead, it exists as long as the business exists, and should be constantly revisited to answer new questions as they arise.
In the next section, we will consider each of the six phases that comprise CRISP-DM and explore how Tableau can be used throughout the life cycle. We will particularly focus on the data preparation phase, as that is the phase encompassing data cleaning, the focus of this chapter. By considering the following steps, you will be able to understand in more detail what a full data mining cycle looks like under CRISP-DM. This framework can be used to make your workflow in Tableau more efficient by working according to an established model.
In the following sections, we will briefly define each of the six CRISP-DM phases and include high-level information on how Tableau might be used.
Phase I – business understanding: Determine the business objectives and translate them into a data mining problem definition.
Phase II – data understanding: Collect the initial data and become familiar with it, identifying data quality problems and gaining first insights.
Phase III – data preparation: Construct the final dataset from the raw data by selecting, cleaning, and transforming it for analysis.
As Tableau has evolved, functionality has been introduced to do more and more of the actual data preparation work as well as the visualization. For example, Tableau Prep Builder is a standalone product that ships with Tableau Desktop and is dedicated to data prep tasks. We will cover Tableau Prep Builder in Chapter 3, Tableau Prep Builder.
Phase IV – modeling: Select and apply appropriate modeling techniques, calibrating their parameters as required.
Phase V – evaluation: Review the results and the steps taken to produce them to confirm that they meet the business objectives.
Phase VI – deployment: Organize and present the knowledge gained so that the end user can act on it, whether as a simple report or as a repeatable process.
Now that we have learned what a full data mining cycle looks like (and what it looked like pre-Tableau), and understood that every step can be executed in Tableau, we can see why data people celebrate Tableau Software products.
The phrase "data people" is especially memorable after listening to the song written for the 2019 Las Vegas Tableau Conference, at https://youtu.be/UBrH7MXf-Q4.
Tableau makes data mining much easier and more efficient, and steps are also easier to reproduce than they were before. In the next section, we will explore the concepts we've just covered with some hands-on examples.
As discussed earlier, Tableau can be used effectively throughout the CRISP-DM phases. Unfortunately, a single chapter is not sufficient to thoroughly explore how Tableau can be used in each phase. Indeed, such a thorough exploration may be worthy of an entire book! Our focus, therefore, will be directed to data preparation, since that phase has historically accounted for up to 60% of the data mining effort. Our goal will be to learn how Tableau can be used to streamline that effort.
Tableau can be a very effective tool for simply surveying data. Sometimes in the survey process, you may discover ways to clean the data or populate incomplete data based on existing fields. Sometimes, regretfully, there are simply not enough pieces of the puzzle to put together an entire dataset. In such cases, Tableau can be useful to communicate exactly what the gaps are, and this, in turn, may incentivize the organization to more fully populate the underlying data.
In this exercise, we will explore how to use Tableau to quickly discover the percentage of null values for each field in a dataset. Next, we'll explore how the data might be extrapolated from existing fields to fill in the gaps.
The following are the steps for surveying the data:
Figure 2.4: Null regions
Figure 2.5: Editing a parameter
Create a calculated field named % Populated and write the following calculation:
SUM([Number of Records]) / TOTAL(SUM([Number of Records]))
Figure 2.6: Adjusting default properties
Create a calculated field named Null & Populated and add the following code. Note that the complete case statement is fairly lengthy but also repetitive.
In cases requiring a lengthy but repetitive calculation, consider using Excel to write the code more quickly and accurately. By using Excel's CONCATENATE function, you may be able to save time and avoid typos.
In the following code block, the lines shown represent only a portion of the complete statement, but they should be sufficient to enable you to produce the whole:
CASE [Select Field]
WHEN 1 THEN IF ISNULL ([Country]) THEN 'Null Values' ELSE
'Populated Values' END
WHEN 2 THEN IF ISNULL ([Region]) THEN 'Null Values' ELSE
'Populated Values' END
WHEN 3 THEN IF ISNULL ([Economy (GDP per Capita)]) THEN 'Null Values' ELSE
'Populated Values' END
WHEN 4 THEN IF ISNULL ([Family]) THEN 'Null Values' ELSE
'Populated Values' END
WHEN 5 THEN IF ISNULL ([Freedom]) THEN 'Null Values' ELSE
'Populated Values' END
WHEN 6 THEN IF ISNULL ([Happiness Rank]) THEN 'Null Values' ELSE
'Populated Values' END
WHEN 7 THEN IF ISNULL ([Happiness Score]) THEN 'Null Values' ELSE
'Populated Values' END
WHEN 8 THEN IF ISNULL ([Health (Life Expectancy)]) THEN 'Null Values' ELSE
'Populated Values' END
WHEN 9 THEN IF ISNULL ([Standard Error]) THEN 'Null Values' ELSE
'Populated Values' END
END
Figure 2.7: Populated values
Figure 2.8: Comparing null and populated values
Building on this exercise, let's explore how we might clean and extrapolate data from existing data using the same dataset.
This exercise will expand on the previous exercise by cleaning existing data and populating some of the missing data from known information. We will assume that we know which country belongs to which region. We'll use that knowledge to fix errors in the Region
field and also to fill in the gaps using Tableau:
Create a calculated field named Region Extrapolated with the following code block:
CASE [Country]
WHEN 'Afghanistan' THEN 'Southern Asia'
WHEN 'Albania' THEN 'Central and Eastern Europe'
WHEN 'Algeria' THEN 'Middle East and Northern Africa'
WHEN 'Angola' THEN 'Sub-Saharan Africa'
WHEN 'Argentina' THEN 'Latin America and Caribbean'
WHEN 'Armenia' THEN 'Central and Eastern Europe'
WHEN 'Australia' THEN 'Australia and New Zealand'
WHEN 'Austria' THEN 'Western Europe'
//complete the case statement with the remaining fields in the data set
END
To speed up the tedious creation of a long calculated field, you could download the data to an Excel file and create the calculated field by concatenating the separate parts, as shown here:
Figure 2.9: Compiling a calculation in Excel
You can then copy the result from Excel into Tableau. However, for this exercise, I have created a backup field called Backup, which can be found in the Tableau workbook associated with this chapter and contains the full calculation needed for the Region Extrapolated field. Use it at your convenience. The Solutions dashboard also contains all of the countries, so you can copy the Region Extrapolated field from that file too.
Figure 2.10: Adding Region Extrapolated to parameter
Add Region Extrapolated to the Select Field parameter, and extend the Null & Populated calculated field with an additional line for the new field:
WHEN 10 THEN IF ISNULL ([Region Extrapolated]) THEN 'Null Values' ELSE
'Populated Values' END
Figure 2.11: Fully populated Region Extrapolated field
Now let's consider some of the specifics from the previous exercises:
First, consider the Null & Populated calculated field. Note that the complete case statement is several lines long; the following is a representative portion:
CASE [Select Field]
WHEN 1 THEN IF ISNULL ([Country]) THEN 'Null Values' ELSE
'Populated Values' END
...
This case statement is a row-level calculation that considers each field in the dataset and determines which rows are populated and which are not. For example, in the representative line of the preceding code, every row of the Country
field is evaluated for nulls. The reason for this is that a calculated field will add a new column to the existing data—only in Tableau, not in the data source itself—and every row will get a value. These values can be N/A or null values.
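To see that row-level logic in isolation, here is a minimal single-field sketch of the same check, using the Region field as an example rather than the full parameter-driven case statement:
// Row-level check: flag each row of Region as null or populated
IF ISNULL([Region]) THEN 'Null Values'
ELSE 'Populated Values'
END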
Next, the % Populated calculated field is a table calculation equivalent to the quick table calculation Percent of Total:
SUM([Number of Records]) / TOTAL(SUM([Number of Records]))
In conjunction with the Null & Populated
calculated field, it allows us to see what percentage of our fields are actually populated with values.
It's a good idea to get into the habit of writing table calculations from scratch, even if an equivalent quick table calculation is available. This will help you more clearly understand the table calculations.
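For instance, instead of applying the Running Total quick table calculation, you could type its hand-written equivalent; a minimal sketch:
// Hand-written equivalent of the Running Total quick table calculation
RUNNING_SUM(SUM([Number of Records]))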
Finally, consider the Region Extrapolated calculated field:
CASE [Country]
WHEN 'Afghanistan' THEN 'Southern Asia'
... END
For example, the Region
field in the dataset had a large percentage of null values, and even the existing data had errors. Based on our knowledge of the business (that is, which country belongs to which region) we were able to use the Country
field to achieve 100% population of the dataset with accurate information.
Nulls are a part of almost every extensive real dataset. Understanding how many nulls are present in each field can be vital to ensuring that you provide accurate business intelligence. It may be acceptable to tolerate some null values when the final results will not be substantially impacted, but too many nulls may invalidate results. However, as demonstrated here, in some cases one or more fields can be used to extrapolate the values that should be entered into an underpopulated or erroneously populated field.
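When extrapolation is not feasible, Tableau's null-handling functions offer a lighter-weight fallback. The following lines are illustrative sketches rather than part of the exercise:
// Substitute a label wherever Region is null
IFNULL([Region], 'Unknown')
// Treat a null numeric value as zero
ZN([Happiness Score])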
As demonstrated in this section, Tableau gives you the ability to effectively communicate to your data team which values are missing, which are erroneous, and which workarounds might be possible, all of which can be invaluable to the overall data mining effort. Next, we will look into data that is a bit messier and not in a nice column format. Don't worry, Tableau has us covered.
The United States government provides helpful documentation for various bureaucratic processes. For example, the Department of Health and Human Services (HHS) provides lists of ICD-9 codes, otherwise known as International Statistical Classification of Diseases and Related Health Problems codes. Unfortunately, these codes are not always in easily accessible formats.
As an example, let's consider an actual HHS document known as R756OTN, which can be found at https://www.cms.gov/Regulations-and-Guidance/Guidance/Transmittals/downloads/R756OTN.pdf.
Navigate to the Cleaning the Data
worksheet in this workbook and execute the following steps:
Figure 2.12: Selecting the raw file
Figure 2.13: Adding Diagnosis to Rows
Create a calculated field named DX with the following code:
SPLIT([Diagnosis], " ", 1)
Create a calculated field named Null Hunting with the following code:
INT(MID([DX],2,1))
Drag Null Hunting from Measures to Dimensions.
Figure 2.14: Ordering fields on Rows
Create a calculated field named Exclude from ICD Codes containing the following code:
ISNULL([Null Hunting])
Figure 2.15: Excluding junk data
In order to remove the junk data (that is, the rows where Exclude from ICD Codes equates to TRUE), place Exclude from ICD Codes on the Filter shelf and deselect True.
Create a calculated field named Diagnosis Text containing the following code:
REPLACE([Diagnosis], [DX] + " ", "")
Figure 2.16: Observing the cleaned data
Now that we've completed the exercise, let's take a moment to consider the code we have used:
The SPLIT function was introduced in Tableau 9.0:
SPLIT([Diagnosis], " ", 1)
As described in Tableau's help documentation about the function, the function does the following:
Returns a substring from a string, as determined by the delimiter extracting the characters from the beginning or end of the string.
This function can also be called directly in the Data Source tab when clicking on a column header and selecting Split. To extract characters from the end of the string, the token number (that is, the number at the end of the function) must be negative.
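As a quick sketch of both directions, assuming space-delimited text such as the Diagnosis field:
// First space-delimited token (the ICD-9 code)
SPLIT([Diagnosis], " ", 1)
// Last space-delimited token, counted from the end of the string
SPLIT([Diagnosis], " ", -1)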
Next, consider the code used for the Null Hunting field:
INT(MID([DX],2,1))
The use of MID
is quite straightforward, and is much the same as the corresponding function in Excel. The use of INT
in this case, however, may be confusing. Casting an alpha character with an INT
function will result in Tableau returning Null
. This satisfactorily fulfills our purpose, since we simply need to discover those rows not starting with an integer by locating the nulls.
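As an aside, and as a preview of the regular expression functions covered later in this chapter, the same rows could also be identified with a direct pattern test; this is a sketch of an alternative approach, not the one used in the exercise:
// TRUE when the Diagnosis line starts with a digit,
// that is, when the row begins with an ICD-9 code
REGEXP_MATCH([Diagnosis], '^[0-9]')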
ISNULL is a Boolean function that simply returns TRUE in the case of Null:
ISNULL([Null Hunting])
The REPLACE function was used while creating the Diagnosis Text field:
REPLACE([Diagnosis], [DX] + " ", "")
This calculated field uses the ICD-9 codes isolated in DX
to remove those same codes from the Diagnosis
field and thus provides a fairly clean description. Note the phrase fairly clean. The rows that were removed were initially associated with longer descriptions that thus included a carriage return. The resulting additional rows are what we removed in this exercise. Therefore, the longer descriptions are truncated in this solution using the replace
calculation.
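As a preview of the regular expression functions covered in the next exercise, the same cleanup could also be sketched with REGEXP_REPLACE. The pattern below assumes a leading numeric code followed by a space and is illustrative only:
// Strip a leading numeric code and the space that follows it
REGEXP_REPLACE([Diagnosis], '^[0-9]+ ', '')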
The final output for this exercise could be to export the data from Tableau as an additional source of data. This data could then be used by Tableau and other tools for future reporting needs. For example, the DX
field could be useful in data blending.
Does Tableau offer a better approach that might solve the issue of truncated data associated with the preceding solution? Yes! Let's turn our attention to the next exercise, where we will consider regular expression functions.
Although the SPLIT function can be useful for cleaning data, as shown in the previous exercise, Cleaning the data, regular expression functions are far more powerful and represent a broadening of Tableau's scope from its traditional focus on visualization and analytics to include data cleaning capabilities. Let's look at an example that requires us to deal with some pretty messy data in Tableau. Our objective will be to extract phone numbers.
The following are the steps:
Figure 2.17: Extracting data from a messy data format
Figure 2.18: Accessing underlying data
Figure 2.19: Regexpal
\([0-9]{3}\)-[0-9]{3}-[0-9]{4}
Create a calculated field named Phone Number with the following code block. Note the regular expression nested in the calculated field:
REGEXP_EXTRACT([String of Data (String of Data)],'(\([0-9]{3}\)-[0-9]{3}-[0-9]{4})')
Figure 2.20: Extracting data final view
Now let's consider some of the specifics from the preceding exercise in more detail:
REGEXP_EXTRACT([String of Data],'()')
The expression pattern is purposely excluded here, as it will be covered in detail later. The '()' code acts as a placeholder for the expression pattern. The REGEXP_EXTRACT function used in this example is described in Tableau's help documentation as follows:
Returns a substring of the given string that matches the capturing group within the regular expression pattern.
Note the inclusion of '()' around the pattern input section to avoid a null output.
Now let's consider the expression pattern itself:
\([0-9]{3}\)-[0-9]{3}-[0-9]{4}
In this context, the \ indicates that the next character should not be treated as special, but as a literal. For our example, we are literally looking for an open parenthesis. [0-9] declares that we are looking for a single digit from 0 to 9; alternatively, consider \d to achieve the same result. The {3} designates that we are looking for three consecutive digits.
As with the opening parenthesis at the beginning of the pattern, the \
character designates the closing parentheses as a literal. The -
is a literal that specifically looks for a hyphen. The rest of the expression pattern should be decipherable based on the preceding information.
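To make the role of the capturing group more concrete, a narrower pattern could return only the area code; the field name below follows the exercise's data source:
// Capture just the three digits inside the parentheses
REGEXP_EXTRACT([String of Data (String of Data)], '\(([0-9]{3})\)')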
After reviewing this exercise, you may be curious about how to return just the email address. According to http://www.regular-expressions.info/email.html, the regular expression for email addresses adhering to the RFC 5322 standard is as follows:
(?:[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*|"(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21\x23-\x5b\x5d-\x7f]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])*")@(?:(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?|\[(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?|[a-z0-9-]*[a-z0-9]:(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21-\x5a\x53-\x7f]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])+)\])
Emails do not always adhere to RFC 5322 standards, so additional work may be required to truly clean email address data.
Although I won't attempt a detailed explanation of this code, you can read all about it at http://www.regular-expressions.info/email.html, which is a great resource for learning more about regular expressions. Also, YouTube has several helpful regular expression tutorials.
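If full RFC 5322 coverage is overkill for your data, a far simpler pattern often suffices. The following sketch is deliberately loose and will not handle every edge case:
// Pragmatic (non-RFC-compliant) email extraction
REGEXP_EXTRACT([String of Data (String of Data)],
  '([a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,})')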
The final output for this exercise should probably be used to enhance existing source data. Data dumps such as this example do not belong in data warehouses; however, even important and necessary data can be hidden in such dumps, and Tableau can be effectively used to extract it.
We began this chapter with a discussion of the Tableau data-handling engine. This illustrated the flexibility Tableau provides in working with data. The data-handling engine is important to understand in order to ensure that your data mining efforts are intelligently focused. Otherwise, your effort may be wasted on activities not relevant to Tableau.
Next, we discussed data mining and knowledge discovery process models, with an emphasis on CRISP-DM. The purpose of this discussion was to get an appropriate bird's-eye view of the scope of the entire data mining effort. Tableau authors (and certainly end users) can become so focused on the reporting produced in the deployment phase that they end up forgetting or short-changing the other phases, particularly data preparation.
Our last focus in this chapter was on the phase that can be the most time-consuming and labor-intensive, namely data preparation. We considered using Tableau for surveying and also cleaning data. The data cleaning capabilities represented by the regular expression functions are particularly intriguing, and are worth further investigation.
Having completed our first data-centric discussion, we'll continue with Chapter 3, Tableau Prep Builder, looking at one of the newer features Tableau has brought to the market. Tableau Prep Builder is a dedicated data pre-processing interface that is able to reduce the amount of time you need for pre-processing even more. We'll take a look at cleaning, merging, filtering, joins, and the other functionality Tableau Prep Builder has to offer.