Building Statistical Models in Python

Product type: Book
Published: August 2023
Publisher: Packt
ISBN-13: 9781804614280
Pages: 420
Edition: 1st

Authors: Huy Hoang Nguyen, Paul N Adams, Stuart J Miller

Table of Contents (22 chapters)

Preface
Part 1: Introduction to Statistics
  • Chapter 1: Sampling and Generalization
  • Chapter 2: Distributions of Data
  • Chapter 3: Hypothesis Testing
  • Chapter 4: Parametric Tests
  • Chapter 5: Non-Parametric Tests
Part 2: Regression Models
  • Chapter 6: Simple Linear Regression
  • Chapter 7: Multiple Linear Regression
Part 3: Classification Models
  • Chapter 8: Discrete Models
  • Chapter 9: Discriminant Analysis
Part 4: Time Series Models
  • Chapter 10: Introduction to Time Series
  • Chapter 11: ARIMA Models
  • Chapter 12: Multivariate Time Series
Part 5: Survival Analysis
  • Chapter 13: Time-to-Event Variables – An Introduction
  • Chapter 14: Survival Models
Index
Other Books You May Enjoy

Distributions of Data

In this chapter, we will cover the essential aspects of data and distributions. We will start with the types of data, then cover the essential measurements used to describe distributions. Next, we will describe the normal distribution and its important properties, including the central limit theorem. Finally, we will cover resampling methods, such as bootstrapping and permutations, and transformation methods, such as log transformations. This chapter covers the foundational knowledge necessary to begin statistical modeling.

In this chapter, we’re going to cover the following main topics:

  • Understanding data types
  • Measuring and describing distributions
  • The normal distribution and the central limit theorem
  • Bootstrapping
  • Permutations
  • Transformations

Technical requirements

This chapter will make use of Python 3.8.

The code for this chapter can be found here – https://github.com/PacktPublishing/Building-Statistical-Models-in-Python – in the ch2 folder.

Please set up a virtual environment or Anaconda environment with the following packages installed:

  • numpy==1.23.0
  • scipy==1.8.1
  • matplotlib==3.5.2
  • pandas==1.4.2
  • statsmodels==0.13.2
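One way to create such an environment is sketched below (assuming a Unix-like shell with Python 3.8 available; Anaconda users can substitute the equivalent conda create and conda install commands):

```shell
# create and activate an isolated virtual environment
python3 -m venv stats-env
source stats-env/bin/activate

# install the pinned package versions used in this chapter
pip install numpy==1.23.0 scipy==1.8.1 matplotlib==3.5.2 \
    pandas==1.4.2 statsmodels==0.13.2
```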

Understanding data types

Before discussing data distributions, it is useful to understand the types of data. This is critical because the type of data determines which operations can be applied to it, and therefore which kinds of analysis can be used (this will become clearer through the examples in this chapter). There are four distinct types of data:

  • Nominal data
  • Ordinal data
  • Interval data
  • Ratio data

These types of data can also be grouped into two sets. The first two types of data (nominal and ordinal) are qualitative data, generally non-numeric categories. The last two types of data (interval and ratio) are quantitative data, generally numeric values.
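As a quick illustration, consider a small hypothetical dataset (the column names and values here are invented for illustration, not taken from the book) containing all four types:

```python
import pandas as pd

# hypothetical dataset illustrating the four data types
df = pd.DataFrame({
    "supplier": ["A", "B", "A", "C"],                   # nominal: labels with no order
    "quality": ["low", "high", "medium", "high"],       # ordinal: ordered categories
    "temperature_c": [21.5, 23.0, 19.8, 22.1],          # interval: differences meaningful, no true zero
    "weight_kg": [1.2, 0.8, 1.5, 1.1],                  # ratio: true zero, ratios meaningful
})

# qualitative data supports counting category frequencies, not averaging
print(df["supplier"].value_counts())

# quantitative data supports arithmetic operations such as the mean
print(df["temperature_c"].mean())
```

Note that arithmetic on the nominal column would be meaningless, which is exactly why knowing the data type must come before choosing an analysis.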

Let’s start with nominal data.

Nominal data

Nominal data is data labeled with distinct groupings. As an example, take machines in a sign factory. It is common for factories to source machines from different suppliers, which would also have different...

Measuring and describing distributions

The distributions of data found in the wild come in many shapes and sizes. This section will discuss how distributions are measured and which measurements apply to the four types of data. These measurements will provide methods to compare and contrast different distributions. The measurements discussed in this section can be broken into the following categories:

  • Central tendency
  • Variability
  • Shape

These measurements are called descriptive statistics. The descriptive statistics discussed in this section are commonly used in statistical summaries of data.

Measuring central tendency

There are three types of measurement of central tendency:

  • Mode
  • Median
  • Mean

Let’s discuss each one of them.

Mode

The first measurement of central tendency we will discuss is the mode. The mode of a dataset is simply the most commonly occurring instance. Using the machines in the factory as an example (see...
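The three measures of central tendency can be computed in a few lines of Python. The data below is hypothetical (chosen to include one large outlier, so the difference between the median and the mean is visible):

```python
from collections import Counter
import numpy as np

# hypothetical data: supplier labels (nominal) and measurements (ratio)
suppliers = ["A", "B", "A", "C", "A", "B"]
values = np.array([2.0, 3.5, 3.5, 4.0, 100.0])

# mode: the most common value; the only valid measure for nominal data
mode_label, mode_count = Counter(suppliers).most_common(1)[0]
print(mode_label)         # "A" occurs 3 times

# median: the middle value when sorted; robust to the outlier 100.0
print(np.median(values))  # 3.5

# mean: the arithmetic average; pulled upward by the outlier
print(np.mean(values))    # 22.6
```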

The normal distribution and central limit theorem

When discussing the normal distribution, we refer to the bell-shaped standard normal distribution, which is formally synonymous with the Gaussian distribution. It is named after Carl Friedrich Gauss, the 18th- and 19th-century mathematician and physicist who, among other contributions to approximation theory, invented the method of least squares in 1795 together with the normal distribution, which is commonly used in statistical modeling techniques such as least squares regression [3]. The standard normal distribution is a parametric distribution characterized by a symmetrical shape, with the probability of data point dispersion consistent around the mean – that is, data appears near the mean more frequently than data farther away. Since the location of data dispersed within this distribution follows the laws of probability, we can call it a standard normal probability distribution...
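A short simulation (an illustrative sketch, not code from the book) makes the central limit theorem concrete: even when the population is heavily skewed, the means of repeated samples cluster in an approximately normal shape around the population mean, with spread shrinking as roughly sigma divided by the square root of the sample size:

```python
import numpy as np

rng = np.random.default_rng(42)
# heavily skewed population: exponential with mean 1.0 and std 1.0
population = rng.exponential(scale=1.0, size=100_000)

# draw many samples of size 50 and record each sample's mean
sample_means = np.array(
    [rng.choice(population, size=50).mean() for _ in range(2_000)]
)

# the sampling distribution of the mean centers on the population mean (~1.0)...
print(np.mean(sample_means))
# ...with spread approximately sigma / sqrt(n) = 1.0 / sqrt(50) ~ 0.141
print(np.std(sample_means))
```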

Bootstrapping

Bootstrapping is a resampling method that uses random sampling – typically with replacement – on subsets of the sampled distribution to generate statistical estimates about a population, such as the following:

  • Confidence intervals
  • Standard error
  • Correlation coefficients (Pearson’s correlation)

The idea is that repeatedly drawing random subsamples from a sample distribution and taking the average each time will, given enough repeats, begin to approximate the true population value. This follows directly from the central limit theorem, which asserts that the distribution of sample means approaches a normal distribution, centered on the original distribution's mean, as sample sizes and counts increase. Bootstrapping is useful when a limited quantity of samples exists in a distribution relative to the amount needed for a specific test,...
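A minimal bootstrap sketch (with invented data, using numpy rather than any particular helper from the book's repository) estimates a 95% confidence interval for the mean of a small, skewed sample:

```python
import numpy as np

rng = np.random.default_rng(42)
# a small, skewed sample (hypothetical data)
sample = rng.exponential(scale=2.0, size=40)

# bootstrap: resample with replacement, recomputing the mean each time
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(5_000)
])

# 95% confidence interval for the mean from the bootstrap percentiles
lower, upper = np.percentile(boot_means, [2.5, 97.5])
print(f"mean={sample.mean():.2f}, 95% CI=({lower:.2f}, {upper:.2f})")
```

The percentile method used here is the simplest variant; scipy.stats also ships a `bootstrap` function that supports additional interval types.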

Permutations

Before jumping into this testing analysis, we will review some basic knowledge of permutations and combinations.

Permutations and combinations

Permutations and combinations are two mathematical techniques for forming subsets from a set of objects, but in two different ways: the order of objects matters in permutations and does not matter in combinations.

To understand these concepts easily, we will consider two examples. There are 10 people at an evening party. In the first example, the organizer of the party wants to give 3 prizes of $1,000, $500, and $200 randomly to 3 people; the question is how many ways there are to distribute the prizes. In the second example, the organizer gives 3 equal prizes of $500 to 3 of the 10 people and does not care which prize is given to whom among the 3 selected people. Suppose Huy, Paul, and Stuart are the winners in both examples; in the first example, different situations may play...
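The two party examples can be computed directly with Python's built-in math module (available since Python 3.8): ordered selections for the three distinct prizes, unordered selections for the three equal prizes.

```python
from math import comb, perm

# first example: 3 distinct prizes ($1,000, $500, $200) among 10 people,
# so order matters: P(10, 3) = 10 * 9 * 8
print(perm(10, 3))  # 720

# second example: 3 equal $500 prizes among 10 people,
# so order does not matter: C(10, 3) = 720 / 3!
print(comb(10, 3))  # 120
```

The factor of 3! = 6 between the two answers is exactly the number of ways to reorder the three winners, which permutations count separately and combinations do not.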

Transformations

In this section, we will consider three transformations:

  • Log transformation
  • Square root transformation
  • Cube root transformation

First, we will import the numpy package to create a random sample drawn from a Beta distribution. The documentation on Beta distributions can be found here:

https://numpy.org/doc/stable/reference/random/generated/numpy.random.beta.html

The sample, df, has 10,000 values. We also use matplotlib.pyplot to create different histogram plots. Second, we transform the original data by using a log transformation, square root transformation, and cube root transformation, and we draw four histograms:

import numpy as np
import matplotlib.pyplot as plt
np.random.seed(42)  # for reproducibility
# create a random sample from a Beta distribution
df = np.random.beta(a=1, b=10, size=10000)
df_log = np.log(df)    # log transformation
df_sqrt = np.sqrt(df)  # square root transformation
df_cbrt = np.cbrt(df)  # cube root transformation
# draw four histograms: the original and the three transformed samples
fig, axes = plt.subplots(2, 2, figsize=(10, 8))
titles = ["Original", "Log", "Square root", "Cube root"]
for ax, data, title in zip(axes.flat, [df, df_log, df_sqrt, df_cbrt], titles):
    ax.hist(data, bins=50)
    ax.set_title(title)
plt.show()

Summary

In the first section of this chapter, we learned about the types of data and how to visualize them. Then, we covered how to describe and measure the attributes of data distributions. We learned about the standard normal distribution, why it’s important, and how the central limit theorem is applied in practice by demonstrating bootstrapping. We also learned how bootstrapping can make use of non-normally distributed data to test hypotheses using confidence intervals. Next, we covered the mathematics of permutations and combinations and introduced permutation testing as another non-parametric test in addition to bootstrapping. We finished the chapter with data transformation methods that are useful when performing statistical tests that require normally distributed data.

In the next chapter, we will take a detailed look at hypothesis testing and discuss how to draw statistical conclusions from the results of the tests. We will also...

References
