In this chapter, we are going to deal with univariate data, which is a fancy way of saying samples of one variable--the kind of data that goes into a single R vector. Analysis of univariate data isn't concerned with the why questions—causes, relationships, or anything like that; the purpose of univariate analysis is simply to describe.
In univariate data, one variable—let's call it x—can represent categories such as soy ice cream flavors, heads or tails, names of cute classmates, the roll of a die, and so on. In cases like these, we call x a categorical variable.
categorical.data <- c("heads", "tails", "tails", "heads")
Categorical data is represented, in the preceding statement, as a vector of character type. In this particular example, we could further specify that this is a binary or dichotomous variable because it only takes on two values, namely, heads and tails.
Our variable x could also represent a number such as air temperature, the prices of financial instruments...
A common way of describing univariate data is with a frequency distribution. We've already seen an example of a frequency distribution when we looked at the preferences for soy ice cream at the end of the last chapter. For each flavor of ice cream (categorical variable), it depicted the count or frequency of the occurrences in the underlying dataset.
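For instance, with the coin-flip vector from earlier, R's built-in table function produces just such a frequency distribution, counting how often each category occurs:

```r
# a small sample of coin flips (categorical data)
categorical.data <- c("heads", "tails", "tails", "heads")

# table() tallies the occurrences of each category
table(categorical.data)
# counts: heads 2, tails 2
```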
To demonstrate examples of other frequency distributions, we need to find some data. Fortunately, for the convenience of useRs everywhere, R comes preloaded with almost one hundred datasets. You can view a full list if you execute help(package="datasets"). There are also hundreds more available from add-on packages.
The first dataset that we are going to use is mtcars--data on the design and performance of 32 automobiles, which was extracted from the 1974 Motor Trend US magazine. (To find out more information about this dataset, execute ?mtcars.)
Take a look at the first few lines of this dataset using the head function...
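For example, head shows the first six rows of a data frame by default, and takes an n argument to show more or fewer:

```r
head(mtcars)         # the first six rows, by default
head(mtcars, n = 3)  # n controls how many rows are shown
```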
One very popular question to ask about univariate data is, What is the typical value? or What's the value around which the data are centered? To answer these questions, we have to measure the central tendency of a set of data.
We've seen one measure of central tendency already: the mode. The mtcars$carb variable was bimodal, with two- and four-carburetor setups being the most popular. The mode is the only measure of central tendency that is applicable to categorical data.
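Base R has no built-in function for the mode of a sample, but it is easy to compute from a frequency table. A minimal sketch, using the carburetor counts stored in the carb column of mtcars:

```r
carb.freqs <- table(mtcars$carb)
# the mode is the value (or values) with the highest frequency
names(carb.freqs)[carb.freqs == max(carb.freqs)]
# "2" "4"   (the distribution is bimodal)
```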
The mode of a discretized continuous distribution is usually considered to be the interval that contains the highest frequency of data points. This makes it dependent on the method and parameters of the binning. Finding the mode of data from a non-discretized continuous distribution is a more complicated procedure, which we'll see later.
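For example, one way to find the modal interval of a continuous variable is to discretize it with cut and take the bin with the highest count. Note that the choice of five bins here is arbitrary, and a different number of bins can yield a different modal interval:

```r
# discretize miles-per-gallon into five equal-width bins
mpg.bins <- cut(mtcars$mpg, breaks = 5)
bin.freqs <- table(mpg.bins)
# the modal interval is the bin with the highest frequency
names(bin.freqs)[which.max(bin.freqs)]
```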
Perhaps the most famous and commonly used measure of central tendency is the mean. The mean is the sum of a set of numerics divided by the number of elements in that set...
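As a quick check, the from-scratch calculation and R's built-in mean function agree on the mtcars fuel-efficiency column:

```r
# the mean: the sum of the values divided by how many there are
sum(mtcars$mpg) / length(mtcars$mpg)
# R's built-in function gives the same answer (about 20.09 mpg)
mean(mtcars$mpg)
```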
Another very popular question regarding univariate data is, How variable are the data points? or How spread out or dispersed are the observations? To answer these questions, we have to measure the spread, or dispersion, of a data sample.
The simplest way to answer this question is to take the largest value in the dataset and subtract the smallest value from it. This will give you the range. However, this suffers from a problem similar to the issue of the mean. The range in salaries at the law firm will vary widely depending on whether the CEO is included in the set. Further, the range depends on just two values, the highest and the lowest, and therefore can't speak of the dispersion of the bulk of the dataset.
One tactic that solves the first of these problems is to use the interquartile range.
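In R, both measures are one-liners. The interquartile range is the difference between the third and first quartiles, so it describes the spread of the middle 50% of the data and ignores the extremes at either end. A sketch using the mtcars fuel-efficiency column:

```r
# the range: largest value minus smallest value
max(mtcars$mpg) - min(mtcars$mpg)   # equivalently, diff(range(mtcars$mpg))

# the interquartile range: spread of the middle 50% of the data
IQR(mtcars$mpg)
quantile(mtcars$mpg, 0.75) - quantile(mtcars$mpg, 0.25)  # same thing
```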
One of the core ideas of statistics is that we can use a subset of a group, study it, and then make inferences or conclusions about that much larger group.
For example, let's say we wanted to find the average (mean) weight of all the people in Germany. One way to do this is to visit all 81 million people in Germany, record their weights, and then find the average. However, it is a far more sane endeavor to take down the weights of only a few hundred Germans, and use these to deduce the average weight of all Germans. In this case, the few hundred people we measure are the sample, and the entirety of the people in Germany is called the population.
Now, there are Germans of all shapes and sizes: some heavier, some lighter. If we only pick a few Germans to weigh, we run the risk of, by chance, choosing a group of primarily underweight Germans or overweight ones. We might then come to an inaccurate conclusion about the weight of all Germans. However, as...
Up until this point, when we spoke of distributions, we were referring to frequency distributions. However, when we talk about distributions later in the book--or when other data analysts refer to them--we will be talking about probability distributions, which are much more general.
It's easy to turn a categorical, discrete, or discretized frequency distribution into a probability distribution. As an example, refer to the frequency distribution of carburetors in the first image in this chapter. Instead of asking What number of cars have n number of carburetors?, we can ask, What is the probability that, if I choose a car at random, I will get a car with n carburetors?
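Dividing each frequency by the total count turns the frequency distribution into a probability distribution; in R, prop.table does this in one step:

```r
carb.freqs <- table(mtcars$carb)
# divide each count by the total number of cars...
carb.probs <- carb.freqs / sum(carb.freqs)
# ...or, equivalently, use prop.table
prop.table(carb.freqs)
# the probabilities, of course, sum to 1
sum(carb.probs)
```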
We will talk more about probability (and different interpretations of probability) in Chapter 4, Probability, but for now, probability is a value between 0 and 1 (or 0 percent and 100 percent) that measures how likely an event is to occur. To answer the question, What's the probability that I will pick...
In an earlier image, we saw three very different distributions, all with the same mean and median. I said then that we need to quantify variance to tell them apart. In the following image, there are three very different distributions, all with the same mean, median, and variance:
Figure 2.10: Three PDFs with the same mean, median, and standard deviation
If you just rely on basic summary statistics to understand univariate data, you'll never get the full picture. It's only when we visualize it that we can clearly see, at a glance, whether there are any clusters or areas with a high density of data points, the number of clusters there are, whether there are outliers, whether there is a pattern to the outliers, and so on. When dealing with univariate data, the shape is the most important part. (That's why this chapter is called Shape of Data!)
We will be using ggplot2's qplot function to investigate these shapes and visualize this data. qplot (for quick plot) is the simpler...
Here are a few exercises for you to revise the concepts learned in this chapter:
One of the hardest things about data analysis is statistics, and one of the hardest things about statistics (not unlike computer programming) is that the beginning is the toughest hurdle because the concepts are so new and unfamiliar. As a result, some might find this to be one of the more challenging chapters in this text.
However, hard work during this phase pays enormous dividends; it provides a sturdy foundation on which to pile on and organize new knowledge.
To recap, in this chapter, you learned about univariate data. You also learned about:
Along the way, we also discussed a little bit about probability distributions and population/sample statistics.
I'm glad you made it through! Relax, make yourself a mocktail, and I'll see you in Chapter 3, Describing Relationships shortly!