
Chapter 2. Statistical Modeling

In this chapter, we are going to discuss statistical modeling, the first step in learning quantitative finance with R, since the concepts of statistical modeling are the driving force behind quantitative finance. Before starting this chapter, we assume that learners are familiar with basic programming in R and have a sound knowledge of statistical concepts. We will not be discussing statistical concepts themselves; rather, we will discuss how to do statistical modeling in R.

This chapter covers the following topics:

  • Probability distributions

  • Sampling

  • Statistics

  • Correlation

  • Hypothesis testing

  • Parameter estimation

  • Outlier detection

  • Standardization

  • Normalization

Probability distributions


Probability distributions determine how the values of a random variable are spread. For example, the number of heads in a sequence of coin tosses follows a binomial distribution, and the means of large samples drawn from a data population follow a normal distribution, which is the most common and useful distribution.
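
As a small added illustration of the coin-toss example (not from the book's own listing), the binomial distribution can be evaluated directly in R:

dbinom(6, size = 10, prob = 0.5)  # probability of exactly 6 heads in 10 fair tosses
rbinom(5, size = 10, prob = 0.5)  # five random draws of the number of heads in 10 tosses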

The features of these distributions are very well known and can be used to draw inferences about the population. In this chapter, we are going to discuss some of the most common probability distributions and how to compute them.

Normal distribution

Normal distribution is the most widely used probability distribution in the financial industry. Its density is a bell-shaped curve, and the mean, median, and mode are all the same for a normal distribution. It is denoted by N(μ, σ²), where μ is the mean and σ² is the variance of the sample. If the mean is 0 and the variance is 1, the distribution is known as the standard normal distribution, N(0, 1).
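
Before moving on, here is a minimal added sketch (not the book's own listing) of the four related R functions for the standard normal distribution N(0, 1): density, cumulative probability, quantile, and random generation:

dnorm(0, mean = 0, sd = 1)      # density at 0
pnorm(1.96, mean = 0, sd = 1)   # cumulative probability up to 1.96
qnorm(0.975, mean = 0, sd = 1)  # quantile for probability 0.975
rnorm(5, mean = 0, sd = 1)      # five random draws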

Now let us discuss the main...

Sampling


When building any model in finance, we may have very large datasets, and building a model on them can be very time-consuming. Once the model is built, if we need to tweak it again, the process is again time-consuming because of the volume of data. So it is better to take a random or proportionate sample of the population data, on which model building will be easier and less time-consuming. In this section, we are going to discuss how to select a random sample and a stratified sample from the data, which plays a critical role in building models on sample data drawn from the population data.

Random sampling

In random sampling, every observation in the population has an equal chance of being selected. It can be done in two ways: without replacement and with replacement.

A random sample without replacement can be done by executing the following code:

> RandomSample <- Sampledata[sample(1:nrow(Sampledata), 10,
+                                   replace = FALSE), ]

This generates the...
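
For completeness, here is a hedged sketch of the with-replacement variant mentioned above (the object name RandomSampleWR is illustrative only, not from the book):

RandomSampleWR <- Sampledata[sample(1:nrow(Sampledata), 10,
                                    replace = TRUE), ]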

Statistics


In a given dataset, we try to summarize the data by its central position, which is known as the measure of central tendency or summary statistics. There are several ways to measure central tendency, such as mean, median, and mode. Mean is the most widely used measure, but under different scenarios we use different measures of central tendency. Now we are going to give an example of how to compute the different measures of central tendency in R.

Mean

Mean is the equally weighted average of the sample. For example, we can compute the mean of Volume in the dataset Sampledata by executing the following code, which gives the arithmetic mean of the volume:

mean(Sampledata$Volume) 

Median

Median is the middle value of the data when it is arranged in sorted order, which can be computed by executing the following code:

median(Sampledata$Volume) 

Mode

Mode is the value in the attribute that occurs with the maximum frequency. For mode, there does not exist an inbuilt...
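
Since R has no built-in mode function, here is a hedged sketch of a simple helper (getMode is an illustrative name, not a function from the book):

getMode <- function(v) {
  freq <- table(v)                           # frequency of each distinct value
  as.numeric(names(freq)[freq == max(freq)]) # value(s) with the highest frequency
}
getMode(Sampledata$Volume)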

Correlation


Correlation plays a very important role in quant finance. It not only determines the relation between financial attributes but also plays a crucial role in predicting the future of financial instruments. Correlation is a measure of the linear relationship between two financial attributes. Now let us try to compute the different types of correlation in R using Sampledata; correlation is also used in identifying the orders of the components of predictive financial models.

Correlation can be computed by the following code. Let's first subset the data and then run the function for getting correlation:

library(Hmisc)  # provides rcorr()
x <- Sampledata[, 2:5]
rcorr(as.matrix(x), type = "pearson")  # rcorr() expects a numeric matrix

This generates the following correlation matrix, which shows the measure of linear relationship between the various daily level prices of a stock:

       Open      High      Low       Close
Open   1         0.962062  0.934174  0.878553
High   0.962062  1         0.952676  0.945434
Low    0.934174  0.952676  1         0.960428
Close  0.878553  0.945434  0.960428  1
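
The output above is the Pearson correlation. As a minimal added sketch of the other common correlation types, base R's cor() can be used on the same subset (assuming, as above, that columns 2 to 5 of Sampledata are the numeric daily price columns):

x <- Sampledata[, 2:5]
cor(x, method = "spearman")  # rank-based Spearman correlation
cor(x, method = "kendall")   # Kendall's tau correlation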

Hypothesis testing


Hypothesis testing is used to reject or retain a hypothesis based upon the measurement of an observed sample. We will not be going into theoretical aspects but will be discussing how to implement the various scenarios of hypothesis testing in R.

Lower tail test of population mean with known variance

The null hypothesis is given by μ ≥ μ0, where μ0 is the hypothesized lower bound of the population mean.

Let us assume a scenario where an investor believes that the mean daily return of a stock since inception is greater than $10. The average of a 30-day sample of daily returns is $9.9. Assume the population standard deviation is 1.1. Can we reject the null hypothesis at the .05 significance level?

Now let us calculate the test statistics z which can be computed by the following code in R:

> xbar = 9.9                     # sample mean
> mu0 = 10                       # hypothesized value (lower bound under the null)
> sig = 1.1                      # population standard deviation
> n = 30                         # sample size
> z = (xbar-mu0)/(sig/sqrt(n))   # test statistic
> z

Here:

  • xbar: Sample...
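
To complete the example, here is a hedged sketch of the decision rule at the .05 significance level (an added illustration following standard practice, not the book's own listing):

alpha <- 0.05
z.alpha <- qnorm(1 - alpha)  # critical value, about 1.645
z < -z.alpha                 # TRUE would mean rejecting the null hypothesis
# Here z is about -0.498, which is not less than -1.645,
# so the null hypothesis cannot be rejected at the .05 level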

Parameter estimates


In this section, we are going to discuss some of the algorithms used for parameter estimation.

Maximum likelihood estimation

Maximum likelihood estimation (MLE) is a method for estimating model parameters on a given dataset.

Now let us try to find the parameter estimates of a probability density function of normal distribution.

Let us first generate a series of random variables, which can be done by executing the following code:

> set.seed(100) 
> NO_values <- 100 
> Y <- rnorm(NO_values, mean = 5, sd = 1) 
> mean(Y) 

This gives 5.002913.

> sd(Y) 

This gives 1.02071.

Now let us make a function for log likelihood:

> LogL <- function(mu, sigma) {
+   A = dnorm(Y, mu, sigma)   # likelihood of each observation
+   -sum(log(A))              # negative log likelihood, which mle() minimizes
+ }

Now let us apply the function mle to estimate the parameters for estimating mean and standard deviation:

> library(stats4)
> mle(LogL, start = list(mu = 2, sigma = 2))

mu and sigma have been...
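
As a hedged follow-up (the object name fit is illustrative, not from the book), the fitted object returned by mle() can be stored and inspected:

fit <- mle(LogL, start = list(mu = 2, sigma = 2))
summary(fit)  # estimated mu and sigma with their standard errors
coef(fit)     # just the parameter estimates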

Outlier detection


Outliers are very important to take into consideration in any analysis because they can bias the analysis. There are various ways to detect outliers in R; the most common ones are discussed in this section.

Boxplot

Let us construct a boxplot for the variable volume of the Sampledata, which can be done by executing the following code:

> boxplot(Sampledata$Volume, main="Volume", boxwex=0.1) 

The graph is as follows:

Figure 2.16: Boxplot for outlier detection

An outlier is an observation which is distant from the rest of the data. When reviewing the preceding boxplot, we can clearly see the outliers which are located outside the fences (whiskers) of the boxplot.
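
As a small added illustration, the values flagged as outliers by the boxplot rule can also be listed directly with base R:

boxplot.stats(Sampledata$Volume)$out  # observations beyond the whiskers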

LOF algorithm

The local outlier factor (LOF) is used for identifying density-based local outliers. In LOF, the local density of a point is compared with that of its neighbors. If the point is in a sparser region than its neighbors then it is treated as an outlier. Let us consider some of the variables from...
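
As an added illustration only (the choice of package here is an assumption, not necessarily the one used in the book), LOF scores can be computed with the lofactor() function from the DMwR package:

library(DMwR)                      # provides lofactor()
x <- Sampledata[, 2:5]             # assuming these are the numeric columns
lof.scores <- lofactor(x, k = 5)   # LOF score for each observation
head(order(lof.scores, decreasing = TRUE), 5)  # indices of the five most outlying rows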

Standardization


In statistics, standardization plays a crucial role because we have various attributes for modeling, all of which are on different scales. So, for comparison purposes, we need to standardize the variables to bring them onto the same scale. Centering the values and creating z-scores is done in R by the scale() function, which takes the following arguments:

  • x: A numeric object

  • center: If TRUE, the object's column means are subtracted from the values in those columns (ignoring NAs); if FALSE, centering is not performed

  • scale: If TRUE, the centered column values are divided by the column's standard deviation (when center is also TRUE; otherwise, the root mean square is used); if FALSE, scaling is not performed

If we want to center the data of Volume in our dataset, we just need to execute the following code:

> scale(Sampledata$Volume, center = TRUE, scale = FALSE)

If we want to standardize the data of volume in our dataset, we just need to execute the following code:

>scale(Sampledata...
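
For comparison, here is a minimal added sketch of the same z-score standardization computed by hand:

z <- (Sampledata$Volume - mean(Sampledata$Volume)) / sd(Sampledata$Volume)
head(z)  # should match the first values returned by scale() above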

Normalization


Normalization uses the min-max concept to bring various attributes onto the same scale. It is calculated by the formula given here:

normalized = (x-min(x))/(max(x)-min(x))

So if we want to normalize the volume variable, we can do it by executing the following code:

> normalized = (Sampledata$Volume - min(Sampledata$Volume)) /
+   (max(Sampledata$Volume) - min(Sampledata$Volume))
> normalized 

Questions


  1. Construct examples of normal, Poisson, and uniform distribution in R.

  2. How do you do random and stratified sampling in R?

  3. What are the different measures of central tendency and how do you find them in R?

  4. How do you compute kurtosis and skewness in R?

  5. How do you do hypothesis testing in R with known/unknown variance of the population?

  6. How do you detect outliers in R?

  7. How do you do parameter estimates for a linear model and MLE in R?

  8. What is standardization and normalization in R and how do you perform it in R?

Summary


In this chapter, we have discussed the most commonly used distributions in the finance domain and the associated metric computations in R; sampling (random and stratified); measures of central tendency; correlation and the types of correlation used for model selection in time series; hypothesis testing (one-tailed and two-tailed) with known and unknown variance; detection of outliers; parameter estimation; and standardization/normalization of attributes in R to bring them onto comparable scales.

The next chapter covers analysis in R of simple linear regression, multivariate linear regression, ANOVA, feature selection, ranking of variables, wavelet analysis, fast Fourier transformation, and Hilbert transformation.


