Learning Predictive Analytics with R

Product type: Book
Published: September 2015
Publisher: Packt
ISBN-13: 9781782169352
Pages: 332
Edition: 1st
Author: Eric Mayor

Table of Contents (23 Chapters)

Learning Predictive Analytics with R
Credits
About the Author
About the Reviewers
www.PacktPub.com
Preface
1. Setting GNU R for Predictive Analytics
2. Visualizing and Manipulating Data Using R
3. Data Visualization with Lattice
4. Cluster Analysis
5. Agglomerative Clustering Using hclust()
6. Dimensionality Reduction with Principal Component Analysis
7. Exploring Association Rules with Apriori
8. Probability Distributions, Covariance, and Correlation
9. Linear Regression
10. Classification with k-Nearest Neighbors and Naïve Bayes
11. Classification Trees
12. Multilevel Analyses
13. Text Analytics with R
14. Cross-validation and Bootstrapping Using Caret and Exporting Predictive Models Using PMML
Exercises and Solutions
Further Reading and References
Index

Classification and regression trees and random forest


We will now introduce classification and regression trees (CART) and random forest. CART uses a different statistical criterion to decide on tree splits. Random forest uses ensemble learning (a combination of many CART trees) to improve classification through majority voting.
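To make the distinction concrete, here is a minimal sketch of fitting both kinds of model in R, assuming the `rpart` and `randomForest` packages are installed (the built-in `iris` dataset is used purely for illustration and is not taken from this chapter):

```r
library(rpart)          # CART implementation
library(randomForest)   # ensemble of trees with majority voting

data(iris)

# A single CART classification tree
cart_model <- rpart(Species ~ ., data = iris, method = "class")

# A random forest: 500 trees, each vote is tallied to classify a case
rf_model <- randomForest(Species ~ ., data = iris, ntree = 500)

# Out-of-bag predictions from the ensemble
head(predict(rf_model))
```

A single tree is easy to inspect and interpret; the forest usually classifies more accurately because errors of individual trees are averaged out by the vote.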

CART

There are a number of differences between CART used for classification and the family of algorithms we just discussed. Here we discuss only the partitioning criterion and pruning, and only superficially.

In CART, the attribute to partition on is selected using the Gini index as the decision criterion. In classification trees, the Gini index is computed as 1 minus the sum of the squared probabilities of each class at the node. In formula notation:

Gini = 1 − Σᵢ pᵢ²

where pᵢ is the proportion of observations belonging to class i at the node.
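The formula is easy to check by hand. The small helper below (a hypothetical function, not from the book) computes the Gini index from a vector of class proportions:

```r
# Gini index of a node: 1 minus the sum of squared class proportions
gini <- function(p) 1 - sum(p^2)

gini(c(0.5, 0.5))  # 0.5  : a maximally impure two-class node
gini(c(0.9, 0.1))  # 0.18 : a purer node has a lower Gini index
gini(c(1, 0))      # 0    : a perfectly pure node
```

A split is preferred when it produces child nodes whose (weighted) Gini indices are lower than that of the parent.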

This is more efficient to compute than information gain and gain ratio. Note that CART does not necessarily partition on each modality of the attribute separately; it can also merge modalities together for the partition...
