Before we dive into the (other) fun stuff (sampling multi-dimensional probability distributions, using convex optimization to fit data models, and so on), it would be helpful if we review those aspects of R that all subsequent chapters will assume knowledge of.
If you fancy yourself as an R guru, you should still, at least, skim through this chapter, because you'll almost certainly find the idioms, packages, and style introduced here to be beneficial in following along with the rest of the material.
If you don't care much about R (yet), and are just in this for the statistics, you can heave a heavy sigh of relief that, for the most part, you can run the code given in this book in the interactive R interpreter with very little modification, and just follow along with the ideas. However, it is my belief (read: delusion) that by the end of this book, you'll cultivate a newfound appreciation of R alongside a robust understanding of methods in data analysis.
Fire up your R interpreter, and let's get started!
In the interactive R interpreter, any line starting with a > character denotes R asking for input. (If you see a + prompt, it means that you didn't finish typing a statement at the prompt, and R is asking you to provide the rest of the expression.) Striking the return key will send your input to R to be evaluated. R's response is then spit back at you in the line immediately following your input, after which R asks for more input. This is called a REPL (Read-Evaluate-Print-Loop). It is also possible for R to read a batch of commands saved in a file (unsurprisingly called batch mode), but we'll be using the interactive mode for most of the book.
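For the curious: batch mode is typically invoked from your operating system's shell rather than from within R. As a minimal sketch, assuming you had saved some commands in a hypothetical file called my_script.R, you could run them non-interactively like this (the $ represents your shell prompt, not R's):

$ Rscript my_script.R

We won't need this until much later, so feel free to forget it for now.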
As you might imagine, R supports the same familiar mathematical operators as most other languages.
Check out the following example:
> 2 + 2
[1] 4
> 9 / 3
[1] 3
> 5 %% 2    # modulus operator (remainder of 5 divided by 2)
[1] 1
Anything that occurs after the octothorpe or pound sign, # (or hash-tag for you young'uns), is ignored by the R interpreter. This is useful for documenting the code in natural language. These are called comments.
In a multi-operation arithmetic expression, R will follow the standard order of operations from math. In order to override this natural order, you have to use parentheses flanking the sub-expression that you'd like to be performed first.
> 3 + 2 - 10 ^ 2    # ^ is the exponent operator
[1] -95
> 3 + (2 - 10) ^ 2
[1] 67
In practice, almost all compound expressions are split up with intermediate values assigned to variables which, when used in future expressions, are just like substituting the variable with the value that was assigned to it. The (primary) assignment operator is <-.
> # assignments follow the form VARIABLE <- VALUE
> var <- 10
> var
[1] 10
> var ^ 2
[1] 100
> VAR / 2    # variable names are case-sensitive
Error: object 'VAR' not found
Notice that the first and second lines in the preceding code snippet didn't have an output to be displayed, so R just immediately asked for more input. This is because assignments don't have a return value. Their only job is to give a value to a variable, or to change the existing value of a variable. Generally, operations and functions on variables in R don't change the value of the variable. Instead, they return the result of the operation. If you want to change a variable to the result of an operation using that variable, you have to reassign that variable as follows:
> var            # var is 10
[1] 10
> var ^ 2
[1] 100
> var            # var is still 10
[1] 10
> var <- var ^ 2 # no return value
> var            # var is now 100
[1] 100
Be aware that variable names may contain letters, numbers, underscores, and periods; this is something that trips up a lot of people who are familiar with other programming languages that disallow using periods in variable names. The only further restrictions are that a variable name must start with a letter (or a period and then a letter), and that it must not be one of the reserved words in R, such as TRUE, Inf, and so on.
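For example, all of the following (admittedly made-up) names are perfectly legal:

> my.1st_variable <- 3   # periods, digits, and underscores are all fine
> .quiet.var <- 4        # so is a leading period followed by a letter
> my.1st_variable + .quiet.var
[1] 7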
Although the arithmetic operators that we've seen thus far are functions in their own right, most functions in R take the form function_name(value(s) supplied to the function). The values supplied to the function are called the arguments of that function.
> cos(3.14159)        # cosine function
[1] -1
> cos(pi)             # pi is a constant that R provides
[1] -1
> acos(-1)            # arccosine function
[1] 3.141593
> acos(cos(pi)) + 10
[1] 13.14159
> # functions can be used as arguments to other functions
(If you paid attention in math class, you'll know that the cosine of π is -1, and that arccosine is the inverse function of cosine.)
There are hundreds of such useful functions defined in base R, only a handful of which we will see in this book. Two sections from now, we will be building our very own functions.
Before we move on from arithmetic, it will serve us well to visit some of the odd values that may result from certain operations:
> 1 / 0
[1] Inf
> 0 / 0
[1] NaN
It is common during practical usage of R to accidentally divide by zero. As you can see, this undefined operation yields an infinite value in R. Dividing zero by zero yields the value NaN, which stands for Not a Number.
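R also gives you negative infinities and a few predicate functions for testing these special values; for example:

> -1 / 0
[1] -Inf
> is.nan(0 / 0)
[1] TRUE
> is.finite(1 / 0)
[1] FALSE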
So far, we've only been dealing with numerics, but there are other atomic data types in R. To wit:
> foo <- TRUE    # foo is of the logical data type
> class(foo)     # class() tells us the type
[1] "logical"
> bar <- "hi!"   # bar is of the character data type
> class(bar)
[1] "character"
The logical data type (also called Booleans) can hold the values TRUE or FALSE or, equivalently, T or F. The familiar operators from Boolean algebra are defined for these types:
> foo
[1] TRUE
> foo && TRUE    # boolean and
[1] TRUE
> foo && FALSE
[1] FALSE
> foo || FALSE   # boolean or
[1] TRUE
> !foo           # negation operator
[1] FALSE
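Incidentally, the T and F shorthands behave just like their longer counterparts:

> foo && T
[1] TRUE
> class(F)
[1] "logical"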
In a Boolean expression with a logical value and a number, any number that is not 0 is interpreted as TRUE.
> foo && 1
[1] TRUE
> foo && 2
[1] TRUE
> foo && 0
[1] FALSE
Additionally, there are functions and operators that return logical values such as:
> 4 < 2    # less than operator
[1] FALSE
> 4 >= 4   # greater than or equal to
[1] TRUE
> 3 == 3   # equality operator
[1] TRUE
> 3 != 2   # inequality operator
[1] TRUE
Just as there are functions in R that are only defined to work on the numeric and logical data types, there are other functions that are designed to work only with the character data type, also known as strings:
> lang.domain <- "statistics"
> lang.domain <- toupper(lang.domain)
> print(lang.domain)
[1] "STATISTICS"
> # retrieves substring from first character to fourth character
> substr(lang.domain, 1, 4)
[1] "STAT"
> gsub("I", "1", lang.domain)   # substitutes every "I" for "1"
[1] "STAT1ST1CS"
> # combines character strings
> paste("R does", lang.domain, "!!!")
[1] "R does STATISTICS !!!"
The last topic in this section will be flow of control constructs.
The most basic flow of control construct is the if statement. The argument to an if statement (what goes between the parentheses) is an expression that returns a logical value. The block of code following the if statement gets executed only if the expression yields TRUE. For example:
> if(2 + 2 == 4)
+   print("very good")
[1] "very good"
> if(2 + 2 == 5)
+   print("all hail to the thief")
>
It is possible to execute more than one statement if an if condition is triggered; you just have to use curly brackets ({}) to contain the statements.
> if((4/2==2) && (2*2==4)){
+   print("four divided by two is two...")
+   print("and two times two is four")
+ }
[1] "four divided by two is two..."
[1] "and two times two is four"
>
It is also possible to specify a block of code that will get executed if the if conditional is FALSE.
> closing.time <- TRUE
> if(closing.time){
+   print("you don't have to go home")
+   print("but you can't stay here")
+ } else{
+   print("you can stay here!")
+ }
[1] "you don't have to go home"
[1] "but you can't stay here"
> if(!closing.time){
+   print("you don't have to go home")
+   print("but you can't stay here")
+ } else{
+   print("you can stay here!")
+ }
[1] "you can stay here!"
>
There are other flow of control constructs (like while and for), but we won't directly be using them much in this text.
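Just so you'll recognize one if you see it in the wild, here is a minimal for loop that prints the numbers one through three:

> for(i in 1:3){
+   print(i)
+ }
[1] 1
[1] 2
[1] 3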
Before we go further, it would serve us well to have a brief section detailing how to get help in R. Most R tutorials leave this for one of the last sections—if it is even included at all! In my own personal experience, though, getting help is going to be one of the first things you will want to do as you add more bricks to your R knowledge castle. Learning R doesn't have to be difficult; just take it slowly, ask questions, and get help early. Go you!
It is easy to get help with R right at the console. Running the help.start() function at the prompt will start a manual browser. From here, you can do anything from going over the basics of R to reading the nitty-gritty details on how R works internally.
You can get help on a particular function in R, if you know its name, by supplying that name as an argument to the help() function. For example, let's say you want to know more about the gsub() function that I sprang on you before. Running the following code:
> help("gsub") > # or simply > ?gsub
will display a manual page documenting what the function is, how to use it, and examples of its usage.
This rapid accessibility to documentation means that I'm never hopelessly lost when I encounter a function which I haven't seen before. The downside to this extraordinarily convenient help mechanism is that I rarely bother to remember the order of arguments, since looking them up is just seconds away.
Occasionally, you won't quite remember the exact name of the function you're looking for, but you'll have an idea about what the name should be. For this, you can use the help.search() function.
> help.search("chisquare")
> # or simply
> ??chisquare
For tougher, more semantic queries, nothing beats a good old fashioned web search engine. If you don't get relevant results the first time, try adding the term programming or statistics in there for good measure.
Vectors are the most basic data structures in R, and they are ubiquitous indeed. In fact, even the single values that we've been working with thus far were actually vectors of length 1. That's why the interactive R console has been printing [1] along with all of our output.
Vectors are essentially an ordered collection of values of the same atomic data type. Vectors can be arbitrarily large (with some limitations), or they can be just one single value.
The canonical way of building vectors manually is by using the c() function (which stands for combine).
> our.vect <- c(8, 6, 7, 5, 3, 0, 9)
> our.vect
[1] 8 6 7 5 3 0 9
In the preceding example, we created a numeric vector of length 7 (namely, Jenny's telephone number).
Note that if we tried to put character data types into this vector as follows:
> another.vect <- c("8", 6, 7, "-", 3, "0", 9)
> another.vect
[1] "8" "6" "7" "-" "3" "0" "9"
R would convert all the items in the vector (called elements) into character data types to satisfy the condition that all elements of a vector must be of the same type. A similar thing happens when you try to use logical values in a vector with numbers; the logical values would be converted into 1 and 0 (for TRUE and FALSE, respectively). These logicals will turn into "TRUE" and "FALSE" (note the quotation marks) when used in a vector that contains characters.
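You can watch these coercion rules in action right at the prompt:

> c(TRUE, 2, 3)          # logicals mixed with numbers become 1s and 0s
[1] 1 2 3
> c(TRUE, FALSE, "two")  # everything becomes a character string
[1] "TRUE"  "FALSE" "two"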
It is very common to want to extract one or more elements from a vector. For this, we use a technique called indexing or subsetting. After the vector, we put an integer in square brackets ([]) called the subscript operator. This instructs R to return the element at that index. The indices (plural for index, in case you were wondering!) for vectors in R start at 1 and stop at the length of the vector.
> our.vect[1]                 # to get the first value
[1] 8
> # the function length() returns the length of a vector
> length(our.vect)
[1] 7
> our.vect[length(our.vect)]  # get the last element of a vector
[1] 9
Note that in the preceding code, we used a function in the subscript operator. In cases like these, R evaluates the expression in the subscript operator, and uses the number it returns as the index to extract.
If we get greedy and try to extract an element at an index that doesn't exist, R will respond with NA, meaning not available. We will see this special value cropping up from time to time throughout this text.
> our.vect[10]
[1] NA
One of the most powerful ideas in R is that you can use vectors to subset other vectors:
> # extract the first, third, fifth, and
> # seventh element from our vector
> our.vect[c(1, 3, 5, 7)]
[1] 8 7 3 9
The ability to use vectors to index other vectors may not seem like much now, but its usefulness will become clear soon.
Another way to create vectors is by using sequences.
> other.vector <- 1:10
> other.vector
[1] 1 2 3 4 5 6 7 8 9 10
> another.vector <- seq(50, 30, by=-2)
> another.vector
[1] 50 48 46 44 42 40 38 36 34 32 30
Above, the 1:10 statement creates a vector from 1 to 10. 10:1 would have created the same 10-element vector, but in reverse. The seq() function is more general in that it allows sequences to be made using steps (among many other things).
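For instance, seq() will happily take a fractional step, or work out the step itself if you tell it how many elements you want with the length.out argument:

> seq(0, 1, by=0.25)
[1] 0.00 0.25 0.50 0.75 1.00
> seq(2, 20, length.out=4)
[1]  2  8 14 20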
Combining our knowledge of sequences and of using vectors to subset other vectors, we can get the first five digits of Jenny's number thusly:
> our.vect[1:5]
[1] 8 6 7 5 3
Part of what makes R so powerful is that many of R's functions take vectors as arguments. These vectorized functions are usually extremely fast and efficient. We've already seen one such function, length(), but there are many, many others.
> # takes the mean of a vector
> mean(our.vect)
[1] 5.428571
> sd(our.vect)    # standard deviation
[1] 3.101459
> min(our.vect)
[1] 0
> max(1:10)
[1] 10
> sum(c(1, 2, 3))
[1] 6
In practical settings, such as when reading data from files, it is common to have NA values in vectors:
> messy.vector <- c(8, 6, NA, 7, 5, NA, 3, 0, 9)
> messy.vector
[1]  8  6 NA  7  5 NA  3  0  9
> length(messy.vector)
[1] 9
Some vectorized functions will not allow NA values by default. In these cases, an extra keyword argument must be supplied along with the first argument to the function.
> mean(messy.vector)
[1] NA
> mean(messy.vector, na.rm=TRUE)
[1] 5.428571
> sum(messy.vector, na.rm=FALSE)
[1] NA
> sum(messy.vector, na.rm=TRUE)
[1] 38
As mentioned previously, vectors can be constructed from logical values too.
> log.vector <- c(TRUE, TRUE, FALSE)
> log.vector
[1]  TRUE  TRUE FALSE
Since logical values can be coerced into behaving like numerics, as we saw earlier, if we try to sum a logical vector as follows:
> sum(log.vector)
[1] 2
we will, essentially, get a count of the number of TRUE values in that vector.
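By the same token, taking the mean of a logical vector gives us the proportion of TRUE values in it:

> mean(log.vector)
[1] 0.6666667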
There are many functions in R that operate on vectors and return logical vectors. is.na() is one such function. It returns a logical vector, of the same length as the vector supplied as an argument, with a TRUE in the position of every NA value. Remember our messy vector (from just a minute ago)?
> messy.vector
[1]  8  6 NA  7  5 NA  3  0  9
> is.na(messy.vector)
[1] FALSE FALSE  TRUE FALSE FALSE  TRUE FALSE FALSE FALSE
> #     8     6    NA     7     5    NA     3     0     9
Putting together these pieces of information, we can get a count of the number of NA values in a vector as follows:
> sum(is.na(messy.vector))
[1] 2
When you use Boolean operators on vectors, they also return logical vectors of the same length as the vector being operated on.
> our.vect > 5
[1]  TRUE  TRUE  TRUE FALSE FALSE FALSE  TRUE
If we wanted to—and we do—count the number of digits in Jenny's phone number that are greater than five, we would do so in the following manner:
> sum(our.vect > 5)
[1] 4
Did I mention that we can use vectors to subset other vectors? When we subset vectors using logical vectors of the same length, only the elements corresponding to the TRUE values are extracted. Hopefully, sparks are starting to go off in your head. If we wanted to extract only the legitimate non-NA digits from Jenny's number, we can do it as follows:
> messy.vector[!is.na(messy.vector)]
[1] 8 6 7 5 3 0 9
This is a very critical trait of R, so let's take our time understanding it; this idiom will come up again and again throughout this book.
The logical vector that yields TRUE when an NA value occurs in messy.vector (from is.na()) is then negated (the whole thing) by the negation operator !. The resultant vector is TRUE whenever the corresponding value in messy.vector is not NA. When this logical vector is used to subset the original messy vector, it only extracts the non-NA values from it.
Similarly, we can show all the digits in Jenny's phone number that are greater than five as follows:
> our.vect[our.vect > 5]
[1] 8 6 7 9
Thus far, we've only been displaying elements that have been extracted from a vector. However, just as we've been assigning and re-assigning variables, we can assign values to various indices of a vector, and change the vector as a result. For example, if Jenny tells us that we have the first digit of her phone number wrong (it's really 9), we can reassign just that element without modifying the others.
> our.vect
[1] 8 6 7 5 3 0 9
> our.vect[1] <- 9
> our.vect
[1] 9 6 7 5 3 0 9
Sometimes, it may be required to replace all the NA values in a vector with the value 0. To do that with our messy vector, we can execute the following command:
> messy.vector[is.na(messy.vector)] <- 0
> messy.vector
[1] 8 6 0 7 5 0 3 0 9
Elegant though the preceding solution is, modifying a vector in place is usually discouraged in favor of creating a copy of the original vector and modifying the copy. One such technique for performing this is by using the ifelse() function.
Not to be confused with the if/else control construct, ifelse() is a function that takes three arguments: a test that returns a logical/Boolean value, a value to use if the element passes the test, and one to return if the element fails the test.
The preceding in-place modification solution could be re-implemented with ifelse() as follows:
> ifelse(is.na(messy.vector), 0, messy.vector)
[1] 8 6 0 7 5 0 3 0 9
The last important property of vectors and vector operations in R is that they can be recycled. To understand what I mean, examine the following expression:
> our.vect + 3
[1] 12  9 10  8  6  3 12
This expression adds three to each digit in Jenny's phone number. Although it may look so, R is not performing this operation between a vector and a single value. Remember when I said that single values are actually vectors of the length 1? What is really happening here is that R is told to perform element-wise addition on a vector of length 7 and a vector of length 1. Since element-wise addition is not defined for vectors of differing lengths, R recycles the smaller vector until it reaches the same length as that of the bigger vector. Once both the vectors are the same size, then R, element-by-element, performs the addition and returns the result.
> our.vect + 3
[1] 12  9 10  8  6  3 12
is tantamount to…
> our.vect + c(3, 3, 3, 3, 3, 3, 3)
[1] 12  9 10  8  6  3 12
If we wanted to extract every other digit from Jenny's phone number, we can do so in the following manner:
> our.vect[c(TRUE, FALSE)]
[1] 9 7 3 9
This works because the vector c(TRUE, FALSE) is repeated until it is of length 7, making it equivalent to the following:
> our.vect[c(TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE)]
[1] 9 7 3 9
One common snag related to vector recycling that R users (useRs, if I may) encounter is that during some arithmetic operations involving vectors of discrepant lengths, R will warn you if the smaller vector cannot be repeated a whole number of times to reach the length of the bigger vector. This is not a problem when doing vector arithmetic with single values, since a vector of length 1 can always be repeated a whole number of times to reach the length of any other vector (whose length is, of course, an integer). It would pose a problem, though, if we were looking to add three to every other element in Jenny's phone number.
> our.vect + c(3, 0)
[1] 12  6 10  5  6  0 12
Warning message:
In our.vect + c(3, 0) :
  longer object length is not a multiple of shorter object length
You will likely learn to love these warnings, as they have stopped many useRs from making grave errors.
Before we move on to the next section, an important thing to note is that in a lot of other programming languages, many of the things that we did would have been implemented using for loops and other control structures. Although there is certainly a place for loops and such in R, oftentimes a more sophisticated solution exists in using just vector/matrix operations. In addition to elegance and brevity, the solution that exploits vectorization and recycling is often many, many times more efficient.
If we need to perform some computation, that isn't already a function in R, multiple times, we usually do so by defining our own functions. A custom function in R is defined using the following syntax:
function.name <- function(argument1, argument2, ...){
  # some functionality
}
For example, if we wanted to write a function that determined if a number supplied as an argument was even, we can do so in the following manner:
> is.even <- function(a.number){
+   remainder <- a.number %% 2
+   if(remainder==0)
+     return(TRUE)
+   return(FALSE)
+ }
>
> # testing it
> is.even(10)
[1] TRUE
> is.even(9)
[1] FALSE
As an example of a function that takes more than one argument, let's generalize the preceding function by creating a function that determines whether the first argument is divisible by its second argument.
> is.divisible.by <- function(large.number, smaller.number){
+   if(large.number %% smaller.number != 0)
+     return(FALSE)
+   return(TRUE)
+ }
>
> # testing it
> is.divisible.by(10, 2)
[1] TRUE
> is.divisible.by(10, 3)
[1] FALSE
> is.divisible.by(9, 3)
[1] TRUE
Our function, is.even(), could now be rewritten simply as:
> is.even <- function(num){
+   is.divisible.by(num, 2)
+ }
It is very common in R to want to apply a particular function to every element of a vector. Instead of using a loop to iterate over the elements of a vector, as we would do in many other languages, we use a function called sapply() to perform this. sapply() takes a vector and a function as its arguments. It then applies the function to every element and returns a vector of results. We can use sapply() in this manner to find out which digits in Jenny's phone number are even:
> sapply(our.vect, is.even)
[1] FALSE  TRUE FALSE FALSE FALSE  TRUE FALSE
This worked great because sapply() takes each element and uses it as the argument to is.even(), which takes only one argument. If you wanted to find the digits that are divisible by three, it would require a little bit more work.
One option is just to define a function, is.divisible.by.three(), that takes only one argument, and use that in sapply(). The more common solution, however, is to define an unnamed function that does just that in the body of the sapply() function call:
> sapply(our.vect, function(num){is.divisible.by(num, 3)})
[1]  TRUE  TRUE FALSE FALSE  TRUE  TRUE  TRUE
Here, we essentially created a function that checks whether its argument is divisible by three, except we don't assign it to a variable, and use it directly in the sapply() body instead. These one-time-use unnamed functions are called anonymous functions or lambda functions. (The name comes from Alonzo Church's invention of the lambda calculus, if you were wondering.)
This is somewhat of an advanced usage of R, but it is very useful as it comes up very often in practice.
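As an aside, if you happen to be running R version 4.1 or later, there is also a compact backslash shorthand for writing anonymous functions; the following is equivalent to the sapply() call we just made:

> sapply(our.vect, \(num) is.divisible.by(num, 3))
[1]  TRUE  TRUE FALSE FALSE  TRUE  TRUE  TRUE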
If we wanted to extract the digits in Jenny's phone number that are divisible by both two and three, we can write it as follows:
> where.even <- sapply(our.vect, is.even)
> where.div.3 <- sapply(our.vect, function(num){
+   is.divisible.by(num, 3)})
> # "&" is like the "&&" and operator but for vectors
> our.vect[where.even & where.div.3]
[1] 6 0
Neat-O!
Note that if we wanted to be sticklers, we would have a clause in the function bodies to preclude a modulus computation, where the first number was smaller than the second. If we had, our function would not have erroneously indicated that 0 was divisible by two and three. I'm not a stickler, though, so the functions will remain as is. Fixing this function is left as an exercise for the (stickler) reader.
In addition to the vector data structure, R has the matrix, data frame, list, and array data structures. Though we will be using all these types (except arrays) in this book, we only need to review the first two in this chapter.
A matrix in R, like in math, is a rectangular array of values (of one type) arranged in rows and columns, and can be manipulated as a whole. Operations on matrices are fundamental to data analysis.
One way of creating a matrix is to just supply a vector to the function matrix().
> a.matrix <- matrix(c(1, 2, 3, 4, 5, 6))
> a.matrix
     [,1]
[1,]    1
[2,]    2
[3,]    3
[4,]    4
[5,]    5
[6,]    6
This produces a matrix with all the supplied values in a single column. We can make a similar matrix with two columns by supplying matrix() with an optional argument, ncol, that specifies the number of columns.
> a.matrix <- matrix(c(1, 2, 3, 4, 5, 6), ncol=2)
> a.matrix
     [,1] [,2]
[1,]    1    4
[2,]    2    5
[3,]    3    6
We could have produced the same matrix by binding the two vectors, c(1, 2, 3) and c(4, 5, 6), by columns using the cbind() function as follows:
> a2.matrix <- cbind(c(1, 2, 3), c(4, 5, 6))
We could create the transposition of this matrix (where rows and columns are switched) by binding those vectors by row instead:
> a3.matrix <- rbind(c(1, 2, 3), c(4, 5, 6))
> a3.matrix
     [,1] [,2] [,3]
[1,]    1    2    3
[2,]    4    5    6
or by just using the matrix transposition function in R, t().
> t(a2.matrix)
Some other functions that operate on whole matrices are rowSums()/colSums() and rowMeans()/colMeans().
> a2.matrix
     [,1] [,2]
[1,]    1    4
[2,]    2    5
[3,]    3    6
> colSums(a2.matrix)
[1]  6 15
> rowMeans(a2.matrix)
[1] 2.5 3.5 4.5
If vectors have sapply(), then matrices have apply(). The preceding two functions could have been written, more verbosely, as:
> apply(a2.matrix, 2, sum)
[1]  6 15
> apply(a2.matrix, 1, mean)
[1] 2.5 3.5 4.5
where 1 instructs R to perform the supplied function over its rows, and 2, over its columns.
The matrix multiplication operator in R is %*%:
> a2.matrix %*% a2.matrix
Error in a2.matrix %*% a2.matrix : non-conformable arguments
Remember, matrix multiplication is only defined for matrices where the number of columns in the first matrix is equal to the number of rows in the second.
> a2.matrix
     [,1] [,2]
[1,]    1    4
[2,]    2    5
[3,]    3    6
> a3.matrix
     [,1] [,2] [,3]
[1,]    1    2    3
[2,]    4    5    6
> a2.matrix %*% a3.matrix
     [,1] [,2] [,3]
[1,]   17   22   27
[2,]   22   29   36
[3,]   27   36   45
>
> # dim() tells us how many rows and columns
> # (respectively) there are in the given matrix
> dim(a2.matrix)
[1] 3 2
To index the element of a matrix at the second row and first column, you need to supply both of these numbers into the subscripting operator.
> a2.matrix[2,1]
[1] 2
Many useRs get confused and forget the order in which the indices must appear; remember—it's row first, then columns!
If you leave one of the spaces empty, R will assume you want that whole dimension:
> # returns the whole second column
> a2.matrix[,2]
[1] 4 5 6
> # returns the first row
> a2.matrix[1,]
[1] 1 4
And, as always, we can use vectors in our subscript operator:
> # give me element in column 2 at the first and third row
> a2.matrix[c(1, 3), 2]
[1] 4 6
Thus far, we've only been entering data directly into the interactive R console. For any data set of non-trivial size this is, obviously, an intractable solution. Fortunately for us, R has a robust suite of functions for reading data directly from external files.
Go ahead and create a file on your hard disk called favorites.txt that looks like this:
flavor,number
pistachio,6
mint chocolate chip,7
vanilla,5
chocolate,10
strawberry,2
neopolitan,4
This data represents the number of students in a class that prefer a particular flavor of soy ice cream. We can read the file into a variable called favs as follows:
> favs <- read.table("favorites.txt", sep=",", header=TRUE)
If you get an error that there is no such file or directory, give R the full path name to your data set or, alternatively, run the following command:
> favs <- read.table(file.choose(), sep=",", header=TRUE)
The preceding command brings up an open file dialog for letting you navigate to the file you've just created.
The argument sep="," tells R that each data element in a row is separated by a comma. Other common data formats have values separated by tabs and pipes ("|"); the value of sep should then be "\t" and "|", respectively.
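For example, if we had a hypothetical tab-delimited version of this file called favorites.tsv, we might read it in like this:

> favs <- read.table("favorites.tsv", sep="\t", header=TRUE)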
The argument header=TRUE tells R that the first row of the file should be interpreted as the names of the columns. Remember, you can enter ?read.table at the console to learn more about these options.
Reading from files in this comma-separated-values format (usually with the .csv file extension) is so common that R has a more specific function just for it. The preceding data import expression can be best written simply as:
> favs <- read.csv("favorites.txt")
Now, we have all the data in the file held in a variable of class data.frame. A data frame can be thought of as a rectangular array of data that you might see in a spreadsheet application. In this way, a data frame can also be thought of as a matrix; indeed, we can use matrix-style indexing to extract elements from it. A data frame differs from a matrix, though, in that a data frame may have columns of differing types. For example, whereas a matrix would only allow one of these types, the data set we just loaded contains character data in its first column and numeric data in its second column.
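Incidentally, you don't need a file on disk to make a data frame; a small one can be built right at the console with the data.frame() function. Here's a quick illustration using a couple of made-up rows:

> data.frame(flavor=c("pistachio", "vanilla"), number=c(6, 5))
     flavor number
1 pistachio      6
2   vanilla      5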
Let's check out what we have by using the head() command, which will show us the first few lines of a data frame:
> head(favs)
               flavor number
1           pistachio      6
2 mint chocolate chip      7
3             vanilla      5
4           chocolate     10
5          strawberry      2
6          neopolitan      4
> class(favs)
[1] "data.frame"
> class(favs$flavor)
[1] "factor"
> class(favs$number)
[1] "numeric"
I lied, ok! So what?! Technically, flavor is a factor data type, not a character type.
We haven't seen factors yet, but the idea behind them is really simple. Essentially, factors are codings for categorical variables, which are variables that take on one of a finite number of categories; think {"high", "medium", "low"} or {"control", "experimental"}.
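You can see how R encodes such a categorical variable by building a small factor yourself:

> factor(c("high", "low", "high", "medium"))
[1] high   low    high   medium
Levels: high low medium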
Though factors are extremely useful in statistical modeling in R, the fact that R, by default, automatically interprets a column from the data read from disk as type factor if it contains characters is something that trips up novices and seasoned useRs alike. Because of this, we will primarily prevent this behavior manually by adding the stringsAsFactors optional keyword argument to the read.* commands (if you are running R 4.0.0 or later, the default has since changed to FALSE, but it doesn't hurt to be explicit):
> favs <- read.csv("favorites.txt", stringsAsFactors=FALSE)
> class(favs$flavor)
[1] "character"
Much better, for now! If you'd like to make this behavior the new default, read the ?options manual page. We can always convert to factors later on if we need to!
If you haven't noticed already, I've snuck a new operator on you: $, the extract operator. This is the most popular way to extract attributes (or columns) from a data frame. You can also use double square brackets ([[ and ]]) to do this.
These are both in addition to the canonical matrix indexing option. The following three statements are thus, in this context, functionally identical:
> favs$flavor
[1] "pistachio"           "mint chocolate chip" "vanilla"
[4] "chocolate"           "strawberry"          "neopolitan"
> favs[["flavor"]]
[1] "pistachio"           "mint chocolate chip" "vanilla"
[4] "chocolate"           "strawberry"          "neopolitan"
> favs[,1]
[1] "pistachio"           "mint chocolate chip" "vanilla"
[4] "chocolate"           "strawberry"          "neopolitan"
Note
Notice how R has now printed another number in square brackets, besides [1], along with our output. This is to show us that chocolate is the fourth element of the vector that was returned from the extraction.
You can use the names() function to get a list of the columns available in a data frame. You can even reassign names using the same:
> names(favs)
[1] "flavor" "number"
> names(favs)[1] <- "flav"
> names(favs)
[1] "flav"   "number"
Lastly, we can get a compact display of the structure of a data frame by using the str() function on it:
> str(favs)
'data.frame':   6 obs. of  2 variables:
 $ flav  : chr  "pistachio" "mint chocolate chip" "vanilla" "chocolate" ...
 $ number: num  6 7 5 10 2 4
Actually, you can use this function on any R structure—the property of functions that change their behavior based on the type of input is called polymorphism.
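For example, handing str() a plain vector gives a sensible one-line summary instead:

> str(our.vect)
 num [1:7] 9 6 7 5 3 0 9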
Robust, performant, and numerous though base R's functions are, we are by no means limited to them! Additional functionality is available in the form of packages. In fact, what makes R such a formidable statistics platform is the astonishing wealth of packages available (well over 7,000 at the time of writing). R's ecosystem is second to none!
Most of these myriad packages exist on the Comprehensive R Archive Network (CRAN). CRAN is the primary repository for user-created packages.
One package that we are going to start using right away is the ggplot2 package. ggplot2 is a plotting system for R. Base R has sophisticated and advanced mechanisms to plot data, but many find ggplot2 more consistent and easier to use. Further, the plots are often more aesthetically pleasing by default.
Let's install it!
> # downloads and installs from CRAN
> install.packages("ggplot2")
Now that we have the package downloaded, let's load it into the R session, and test it out by plotting our data from the last section:
> library(ggplot2)
> ggplot(favs, aes(x=flav, y=number)) +
+   geom_bar(stat="identity") +
+   ggtitle("Soy ice cream flavor preferences")

Figure 1.1: Soy ice cream flavor preferences
You're all wrong, Mint Chocolate Chip is way better!
Don't worry about the syntax of the ggplot() function yet. We'll get to it in good time.
You will be installing some more packages as you work through this text. In the meantime, if you want to play around with a few more packages, you can install the gdata and foreign packages, which allow you to import Excel spreadsheets and SPSS data files, respectively, directly into R.
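If you'd like to grab both in one go, install.packages() will also accept a vector of package names:

> install.packages(c("gdata", "foreign"))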
You can practice the following exercises to help you get a good grasp of the concepts learned in this chapter:
Write a function called simon.says that takes in a character string, and returns that string in all upper case after prepending the string "Simon says: " to the beginning of it.
Write a function that takes two matrices as arguments, and returns a logical value representing whether the matrices can be matrix multiplied.
Find a free data set on the web, download it, and load it into R. Explore the structure of the data set.
Reflect upon how Hester Prynne allowed her scarlet letter to be decorated with flowers by her daughter in Chapter 10. To what extent is this indicative of Hester's recasting of the scarlet letter as a positive part of her identity. Back up your thesis with excerpts from the book.
In this chapter, we learned about the world's greatest analytics platform, R. We started from the beginning and built a foundation, and will now explore R further, based on the knowledge gained in this chapter. By now, you have become well versed in the basics of R (which, paradoxically, is the hardest part). You now know how to:
Use R as a big calculator to do arithmetic
Make vectors, operate on them, and subset them expressively
Load data from disk
Install packages
You have by no means finished learning about R; indeed, we have gone over mostly just the basics. However, we have enough to continue ahead, and you'll pick up more along the way. Onward to statistics land!