Learning Predictive Analytics with R

Product type: Book
Published: Sep 2015
Publisher: Packt
ISBN-13: 9781782169352
Pages: 332
Edition: 1st
Author: Eric Mayor

Table of Contents (23 chapters)

Learning Predictive Analytics with R
Credits
About the Author
About the Reviewers
www.PacktPub.com
Preface
1. Setting GNU R for Predictive Analytics
2. Visualizing and Manipulating Data Using R
3. Data Visualization with Lattice
4. Cluster Analysis
5. Agglomerative Clustering Using hclust()
6. Dimensionality Reduction with Principal Component Analysis
7. Exploring Association Rules with Apriori
8. Probability Distributions, Covariance, and Correlation
9. Linear Regression
10. Classification with k-Nearest Neighbors and Naïve Bayes
11. Classification Trees
12. Multilevel Analyses
13. Text Analytics with R
14. Cross-validation and Bootstrapping Using Caret and Exporting Predictive Models Using PMML
Exercises and Solutions
Further Reading and References
Index

Working with k-NN in R


When explaining how k-NN works, we used the same data for both training and testing. The risk here is overfitting: there is almost always noise in the data (for instance, due to measurement errors), and testing on the same dataset does not let us examine the impact of this noise on our classification. In other words, we want to ensure that our classifier reflects real associations in the data rather than peculiarities of the training set.
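As a minimal sketch of the alternative (not code from the book), we can hold out part of the data for testing. This assumes the `class` package for its `knn()` function, the built-in `iris` dataset, a 70/30 split, and k = 3 — all illustrative choices:

```r
# Sketch: evaluate k-NN on held-out data rather than the training data.
# Assumes the "class" package; iris, the split ratio, and k are arbitrary.
library(class)

set.seed(42)                                   # reproducible split
n <- nrow(iris)
train_idx <- sample(n, size = round(0.7 * n))  # 70% of rows for training

train <- iris[train_idx, 1:4]                  # predictors only
test  <- iris[-train_idx, 1:4]
train_labels <- iris$Species[train_idx]
test_labels  <- iris$Species[-train_idx]

pred <- knn(train, test, cl = train_labels, k = 3)

# Accuracy on unseen observations gives a more honest estimate
# than accuracy measured on the training set itself
mean(pred == test_labels)
```

Because the test rows were never seen during prediction, this accuracy reflects generalization rather than memorization of noise.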

There are several ways to address this issue. The simplest is to use different sets of data for training and testing. We have already seen this when discussing Naïve Bayes. Another, better solution is cross-validation. In cross-validation, the data is split into a number of parts (at most the number of observations). One part is left out for testing and the rest is used for training. Training is then repeated, each time leaving out a different part for testing while the parts previously used for testing rejoin the training set. We will discuss cross-validation...
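The rotation described above can be sketched as a manual k-fold cross-validation. This is an illustrative example, not the book's code: it again assumes the `class` package, the `iris` data, 5 folds, and k = 3:

```r
# Sketch: 5-fold cross-validation of k-NN by hand.
# Assumes the "class" package; iris, the fold count, and k are arbitrary.
library(class)

set.seed(42)
k_folds <- 5
# Assign each row a fold label 1..5, shuffled
folds <- sample(rep(1:k_folds, length.out = nrow(iris)))

acc <- sapply(1:k_folds, function(f) {
  train <- iris[folds != f, 1:4]     # all folds except f: training
  test  <- iris[folds == f, 1:4]     # fold f: testing
  pred  <- knn(train, test, cl = iris$Species[folds != f], k = 3)
  mean(pred == iris$Species[folds == f])
})

mean(acc)   # average accuracy across the 5 folds
```

Every observation is used for testing exactly once and for training k_folds - 1 times, so the averaged accuracy uses all the data while never testing on rows the model trained on.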
