Machine Learning Quick Reference

Curve fitting

So far, we have learned about the learning curve and its significance. However, it only comes into the picture once we have tried to fit a curve to the available data and features. But what does curve fitting actually mean? Let's try to understand this.

Curve fitting is the process of establishing a relationship between a number of features and a target. It helps us find out what kind of association the features have with the target.

Establishing this relationship means coming up with a mathematical function that explains the behavioral pattern of the data well enough to qualify as a best fit for the dataset.

There are multiple reasons why we perform curve fitting:

  • To carry out system simulation and optimization
  • To determine the values of intermediate points (interpolation; see the short sketch after this list)
  • To do trend analysis (extrapolation)
  • To carry out hypothesis testing
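
As a quick illustration of the interpolation use case, here is a minimal sketch with made-up sample points, using NumPy's np.interp (not a function from this book):

import numpy as np

# known sample points lying on the line y = 2x
x = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, 2.0, 4.0])

# estimating the value at an intermediate point, x = 1.5
print(np.interp(1.5, x, y)) # 3.0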

There are two types of curve fitting:

  1. Exact fit: In this scenario, the curve passes through all of the points, so there is no residual error (we'll discuss shortly what's classed as an error). For now, you can understand an error as the difference between the actual value and the predicted value. An exact fit can be used for interpolation and is mainly associated with distribution fitting.

The following diagram shows an exact fit using a polynomial:

The following diagram shows an exact fit using a straight line:

  2. Best fit: The curve doesn't pass through all of the points, so there will be residuals associated with it.

Let's look at some different scenarios and study them to understand these differences.

Here, we will fit a line through two points:

# importing libraries
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

# defining a line with intercept a and slope b
def func(x, a, b):
    return a + b * x

x_d = np.linspace(0, 5, 2) # generating 2 numbers between 0 & 5
y = func(x_d, 1.5, 0.7) # true line with a = 1.5 and b = 0.7
y_noise = 0.3 * np.random.normal(size=x_d.size) # adding Gaussian noise
y_d = y + y_noise
plt.plot(x_d, y_d, 'b-', label='data')

popt, pcov = curve_fit(func, x_d, y_d) # fitting the curve
plt.plot(x_d, func(x_d, *popt), 'r-', label='fit')
plt.legend()
plt.show()

From this, we will get the following output:

Here, we have used two points to fit the line, and we can clearly see that it becomes an exact fit. When we introduce three points, we get the following:

x_d = np.linspace(0, 5, 3) # generating 3 numbers between 0 & 5

Run the entire code and focus on the output:

Now you can see the drift and the effect of the noise: the data has started to take the shape of a curve. A line might not be a good fit here (although it's too early to say), and it's no longer an exact fit.

What if we introduce 100 points and study the effect of that? By now, we know how to change the number of points; a sketch of the full experiment is shown below.
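
Here is a minimal sketch of that experiment, reusing func, curve_fit, and the plotting calls from the earlier snippet (the true line and the 0.3 noise level are the same assumptions as before):

x_d = np.linspace(0, 5, 100) # generating 100 numbers between 0 & 5
y_d = func(x_d, 1.5, 0.7) + 0.3 * np.random.normal(size=x_d.size)

popt, pcov = curve_fit(func, x_d, y_d) # fitting a best-fit line
plt.plot(x_d, y_d, 'b.', label='data')
plt.plot(x_d, func(x_d, *popt), 'r-', label='fit')
plt.legend()
plt.show()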

By doing this, we get the following output:

This is not an exact fit, but rather a best fit that tries to generalize the whole dataset.

Residual

A residual is the difference between an observed, or true, value and a predicted (fitted) value. For example, in the following diagram, one of the residuals is (A - B), where A is the observed value and B is the fitted value:

The preceding scatter plot shows a line that is meant to represent the behavior of all the data points. However, one noticeable thing is that the line doesn't pass through all of the points; most of them are off the line.

For a least squares fit that includes an intercept term, the sum and the mean of the residuals will always be 0: ∑e = 0 and mean(e) = 0.
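
We can verify this numerically with a quick sketch, reusing x_d, y_d, func, and popt from the fits above (the sums are only zero up to floating-point and optimizer tolerance):

residuals = y_d - func(x_d, *popt) # observed minus fitted values
print(residuals.sum()) # approximately 0
print(residuals.mean()) # approximately 0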