
You're reading from Building Statistical Models in Python

Product type: Book
Published in: Aug 2023
Reading level: Intermediate
Publisher: Packt
ISBN-13: 9781804614280
Edition: 1st
Authors (3):

Huy Hoang Nguyen

Huy Hoang Nguyen is a Mathematician and a Data Scientist with far-ranging experience, championing advanced mathematics and strategic leadership, and applied machine learning research. He holds a Master's in Data Science and a PhD in Mathematics. His previous work was related to Partial Differential Equations, Functional Analysis and their applications in Fluid Mechanics. He transitioned from academia to the healthcare industry and has performed different Data Science projects from traditional Machine Learning to Deep Learning.
Paul N Adams

Paul Adams is a Data Scientist with a background primarily in the healthcare industry. Paul applies statistics and machine learning in multiple areas of industry, focusing on projects in process engineering, process improvement, metrics and business rules development, anomaly detection, forecasting, clustering and classification. Paul holds a Master of Science in Data Science from Southern Methodist University.
Stuart J Miller

Stuart Miller is a Machine Learning Engineer with degrees in Data Science, Electrical Engineering, and Engineering Physics. Stuart has worked at several Fortune 500 companies, including Texas Instruments and StateFarm, where he built software that utilized statistical and machine learning techniques. Stuart is currently an engineer at Toyota Connected helping to build a more modern cockpit experience for drivers using machine learning.

Distributions of Data

In this chapter, we will cover the essential aspects of data and distributions. We will start with the types of data, then the measurements used to describe distributions. With those measurements in hand, we will describe the normal distribution and its important properties, including the central limit theorem. Finally, we will cover resampling methods such as bootstrapping and permutations, and transformation methods such as log transformations. This chapter covers the foundational knowledge necessary to begin statistical modeling.

In this chapter, we’re going to cover the following main topics:

  • Understanding data types
  • Measuring and describing distributions
  • The normal distribution and the central limit theorem
  • Bootstrapping
  • Permutations
  • Transformations

Technical requirements

This chapter will make use of Python 3.8.

The code for this chapter can be found here – https://github.com/PacktPublishing/Building-Statistical-Models-in-Python – in the ch2 folder.

Please set up a virtual environment or Anaconda environment with the following packages installed:

  • numpy==1.23.0
  • scipy==1.8.1
  • matplotlib==3.5.2
  • pandas==1.4.2
  • statsmodels==0.13.2

Understanding data types

Before discussing data distributions, it is useful to understand the types of data. This is critical because the type of data determines what operations can be performed on it, and therefore what kinds of analysis can be applied (this will become clearer through the examples in this chapter). There are four distinct types of data:

  • Nominal data
  • Ordinal data
  • Interval data
  • Ratio data

These types of data can also be grouped into two sets. The first two types of data (nominal and ordinal) are qualitative data, generally non-numeric categories. The last two types of data (interval and ratio) are quantitative data, generally numeric values.

Let’s start with nominal data.

Nominal data

Nominal data is data labeled with distinct groupings. As an example, take machines in a sign factory. It is common for factories to source machines from different suppliers, which would also have different...
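The nominal/ordinal distinction can be sketched with pandas. This is a minimal illustration, not from the book's code: the supplier labels and size categories below are hypothetical, chosen to echo the factory example.

```python
import pandas as pd

# Hypothetical example: machines in a sign factory, labeled by supplier.
# Supplier is nominal data - the labels have no inherent order.
machines = pd.Series(
    ["SupplierA", "SupplierB", "SupplierA", "SupplierC", "SupplierB"],
    dtype="category",
)

# Counting category frequencies is a valid operation for nominal data
print(machines.value_counts())

# An ordinal variable, by contrast, supports ordering comparisons
# once the category order is declared
sizes = pd.Categorical(
    ["small", "large", "medium"],
    categories=["small", "medium", "large"],
    ordered=True,
)
print(sizes.min(), sizes.max())  # small large
```

Operations such as computing a mean are still meaningless for both of these types; that is reserved for the quantitative (interval and ratio) types.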

Measuring and describing distributions

The distributions of data found in the wild come in many shapes and sizes. This section will discuss how distributions are measured and which measurements apply to the four types of data. These measurements will provide methods to compare and contrast different distributions. The measurements discussed in this section can be broken into the following categories:

  • Central tendency
  • Variability
  • Shape

These measurements are called descriptive statistics. The descriptive statistics discussed in this section are commonly used in statistical summaries of data.

Measuring central tendency

There are three types of measurement of central tendency:

  • Mode
  • Median
  • Mean

Let’s discuss each one of them.

Mode

The first measurement of central tendency we will discuss is the mode. The mode of a dataset is simply the most commonly occurring instance. Using the machines in the factory as an example (see...
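The three measures of central tendency can be computed directly with NumPy. The data below is a hypothetical sample, used only to show the calculations; the mode is found here by counting unique values, which avoids version differences in scipy.stats.mode.

```python
import numpy as np

# Hypothetical sample: number of signs produced per shift
data = np.array([4, 7, 7, 8, 9, 10, 10, 10, 12, 30])

# Mode: the most frequently occurring value
values, counts = np.unique(data, return_counts=True)
mode = values[np.argmax(counts)]

median = np.median(data)  # middle value of the sorted data
mean = np.mean(data)      # arithmetic average

print(mode, median, mean)  # 10 9.5 10.7
```

Note how the single outlier (30) pulls the mean above both the median and the mode; this sensitivity to outliers is why the median is often preferred for skewed data.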

The normal distribution and central limit theorem

When discussing the normal distribution, we refer to the bell-shaped Gaussian distribution, named after Carl Friedrich Gauss, an 18th- and 19th-century mathematician and physicist. Among other contributions to the theory of approximation, Gauss invented the method of least squares in 1795 and developed the normal distribution, which is commonly used in statistical modeling techniques such as least squares regression [3]. The normal distribution is a parametric distribution, characterized by a symmetrical shape with data point dispersion consistent around the mean – that is, data appears near the mean more frequently than data farther away. Since the location of data dispersed within this distribution follows the laws of probability, we can call this a standard normal probability distribution...
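The central limit theorem can be seen directly in simulation. The sketch below (an illustration, not from the book's code) draws many samples from a heavily right-skewed exponential population and shows that the sample means nonetheless cluster symmetrically around the population mean.

```python
import numpy as np

rng = np.random.default_rng(42)

# Draw 10,000 samples of size 50 from a right-skewed exponential
# population with mean 1.0, and record the mean of each sample
sample_means = rng.exponential(scale=1.0, size=(10_000, 50)).mean(axis=1)

# Per the central limit theorem, the sample means are approximately
# normal, centered on the population mean, with standard error
# sigma / sqrt(n) = 1 / sqrt(50) ~= 0.141
print(sample_means.mean())  # close to 1.0
print(sample_means.std())   # close to 0.141
```

Plotting a histogram of `sample_means` would show the familiar bell shape, even though the underlying population is far from normal.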

Bootstrapping

Bootstrapping is a resampling method that uses random sampling – typically with replacement – from the observed sample to generate statistical estimates about a population, such as the following:

  • Confidence intervals
  • Standard error
  • Correlation coefficients (Pearson’s correlation)

The idea is that repeatedly drawing random subsamples from a sample distribution and taking the average of each one will, given enough repeats, begin to approximate the true population mean. This follows directly from the central limit theorem, which, restated, asserts that the distribution of sample means approaches a normal distribution, centered around the original distribution's mean, as the sample size and the number of samples increase. Bootstrapping is useful when a limited quantity of samples exists in a distribution relative to the amount needed for a specific test,...
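A percentile bootstrap confidence interval for the mean can be sketched in a few lines of NumPy. This is a minimal illustration under assumed data (a small exponential sample), not the book's own code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observed sample: small (n=30) and non-normal
sample = rng.exponential(scale=2.0, size=30)

# Resample with replacement many times, recording each resample's mean
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(5_000)
])

# A 95% percentile bootstrap confidence interval for the mean
lower, upper = np.percentile(boot_means, [2.5, 97.5])
print(f"95% CI for the mean: ({lower:.2f}, {upper:.2f})")
```

No normality assumption about the original sample is needed, which is what makes bootstrapping attractive for small or skewed samples.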

Permutations

Before jumping into this testing analysis, we will review some basic knowledge of permutations and combinations.

Permutations and combinations

Permutations and combinations are two mathematical techniques for forming subsets from a population of objects. The difference is that the order of objects matters in permutations but does not matter in combinations.

To understand these concepts easily, we will consider two examples. There are 10 people at an evening party. The organizer of the party wants to give 3 prizes of $1,000, $500, and $200 randomly to 3 people. The question is how many ways there are to distribute the prizes. In the second example, the organizer will give 3 equal prizes of $500 to 3 of the 10 people at the party, and really does not care which prize is given to whom among the 3 selected people. Huy, Paul, and Stuart are our winners in both examples but, in the first example, different situations may play...
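The two counts for the party examples above can be checked with Python's standard library (`math.perm` and `math.comb` are available from Python 3.8, the version this chapter uses). Distinct prizes mean order matters, so we count permutations, nPk = n!/(n-k)!; equal prizes mean order does not matter, so we count combinations, nCk = n!/(k!(n-k)!).

```python
from math import comb, perm

# Three distinct prizes ($1,000, $500, $200) among 10 guests:
# order matters, so count permutations of 10 taken 3 at a time
print(perm(10, 3))  # 720  (= 10 * 9 * 8)

# Three equal $500 prizes: order does not matter, so count combinations
print(comb(10, 3))  # 120  (= 720 / 3!)
```

Dividing the permutation count by 3! = 6, the number of ways to order the three winners, gives the combination count.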

Transformations

In this section, we will consider three transformations:

  • Log transformation
  • Square root transformation
  • Cube root transformation

First, we will import the numpy package to create a random sample drawn from a Beta distribution. The documentation on Beta distributions can be found here:

https://numpy.org/doc/stable/reference/random/generated/numpy.random.beta.html

The sample, df, has 10,000 values. We also use matplotlib.pyplot to create different histogram plots. Second, we transform the original data by using a log transformation, square root transformation, and cube root transformation, and we draw four histograms:

import numpy as np
import matplotlib.pyplot as plt

np.random.seed(42)  # for reproducibility
# create a random sample from a Beta(1, 10) distribution
df = np.random.beta(a=1, b=10, size=10000)
df_log = np.log(df)    # log transformation
df_sqrt = np.sqrt(df)  # square root transformation
df_cbrt = np.cbrt(df)  # cube root transformation

# The plotting call is truncated in this excerpt; one plausible
# completion draws the four histograms on a 2x2 grid:
fig, axes = plt.subplots(2, 2, figsize=(10, 8))
titles = ["Original", "Log", "Square root", "Cube root"]
for ax, data, title in zip(axes.ravel(), [df, df_log, df_sqrt, df_cbrt], titles):
    ax.hist(data, bins=50)
    ax.set_title(title)
plt.show()

Summary

In the first section of this chapter, we learned about types of data and how to visualize them. Then, we covered how to describe and measure the attributes of a data distribution. We learned about the standard normal distribution, why it is important, and how the central limit theorem is applied in practice by demonstrating bootstrapping. We also learned how bootstrapping can make use of non-normally distributed data to test hypotheses using confidence intervals. Next, we covered mathematical background on permutations and combinations and introduced permutation testing as another non-parametric test in addition to bootstrapping. We finished the chapter with different data transformation methods that are useful in many situations when performing statistical tests that require normally distributed data.

In the next chapter, we will take a detailed look at hypothesis testing and discuss how to draw statistical conclusions from the results of the tests. We will also...

References
