
You're reading from Machine Learning for Developers

Product type: Book
Published in: Oct 2017
Reading level: Beginner
Publisher: Packt
ISBN-13: 9781786469878
Edition: 1st
Authors (2): Rodolfo Bonnin

Rodolfo Bonnin is a systems engineer and Ph.D. student at Universidad Tecnológica Nacional, Argentina. He has also pursued postgraduate courses in parallel programming and image understanding at Universität Stuttgart, Germany. He has been doing research on high-performance computing since 2005, and began studying and implementing convolutional neural networks in 2008, writing a CPU- and GPU-supporting neural network feedforward stage. More recently, he has been working in the field of fraud pattern detection with neural networks, and is currently working on signal classification using machine learning techniques. He is also the author of Building Machine Learning Projects with TensorFlow and Machine Learning for Developers, both published by Packt.


Neural Networks

As a developer, you have surely gained an interest in machine learning from looking at all the incredibly amazing applications that you see on your regular devices every day—automatic speech translation, picture style transfer, the ability to generate new pictures from sample ones, and so on. Brace yourself... we are heading directly into the technology that has made all these things possible.

Linear and logistic models, such as those we have seen so far, are limited in the complexity of the training datasets they can model, even though they form the basis of very elaborate and efficient solutions.
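A quick way to see this limitation is to ask a purely linear classifier to learn XOR, the textbook example of a function that is not linearly separable. This is an illustrative sketch of my own (the dataset and use of scikit-learn's `LogisticRegression` are not from the chapter's listings):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# XOR truth table: no single straight line can separate the 1s from the 0s
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

clf = LogisticRegression().fit(X, y)

# A linear decision boundary can classify at most 3 of the 4 points correctly
accuracy = clf.score(X, y)
print(accuracy)
```

No amount of training fixes this: the failure is structural, and it is exactly what motivates stacking non-linear units into networks.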

How complex does a model have to be to capture the style of writing of an author, the concept of an image of a cat versus an image of a dog, or the classification of a plant based on visual elements? These things require the summation...

History of neural models

Neural models, in the sense of disciplines that try to build representations of the internal workings of the brain, originated remarkably early on the computer science timescale: they date back to the mid-1940s, when modern computing itself was being invented.

At that time, the fields of neuroscience and computer science began to collaborate by researching ways of emulating the way the brain processes information, starting from its constituent unit—the neuron.

The first mathematical method for representing the learning function of the human brain can be assigned to McCulloch and Pitts, in their 1943 paper A Logical Calculus of Ideas Immanent in Nervous Activity:

McCulloch and Pitts model

This simple model was a basic but realistic representation of a learning unit. You will be surprised by what happens if we use a linear...
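In code, the McCulloch and Pitts unit can be sketched as a threshold on a weighted sum of binary inputs. This is a minimal illustration, not the book's listing; the weights and threshold here are chosen by hand rather than learned:

```python
import numpy as np

def mcp_neuron(inputs, weights, threshold):
    # The unit "fires" (outputs 1) only when the weighted sum
    # of its inputs reaches the threshold
    return int(np.dot(inputs, weights) >= threshold)

# With unit weights and a threshold of 2, the neuron computes logical AND
print(mcp_neuron([1, 1], [1, 1], threshold=2))  # 1
print(mcp_neuron([1, 0], [1, 1], threshold=2))  # 0
```

Lowering the threshold to 1 turns the same unit into a logical OR, which hints at how expressive even this fixed-weight model is.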

Implementing a simple function with a single-layer perceptron

Take a look at the following code snippet, which sets up the imports needed to implement a simple function with a single-layer perceptron:

    import numpy as np
    from pprint import pprint
    from sklearn import datasets
    import matplotlib.pyplot as plt
    plt.style.use('fivethirtyeight')
    %matplotlib inline
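The snippet above only loads the libraries; the rest of the listing lies beyond this excerpt. As a sketch of where it is heading, here is a minimal single-layer perceptron trained on the logical AND function, assuming a step activation and the classic perceptron update rule (my own illustration, not the book's exact code):

```python
import numpy as np

# Logical AND: a linearly separable problem a single perceptron can solve
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

rng = np.random.default_rng(42)
w = rng.normal(scale=0.1, size=2)  # small random initial weights
b = 0.0
lr = 0.1

def step(z):
    # Step (Heaviside) activation: fire when the pre-activation is non-negative
    return np.where(z >= 0, 1, 0)

# Perceptron learning rule: nudge the weights by the prediction error
for epoch in range(50):
    for xi, target in zip(X, y):
        error = target - step(xi @ w + b)
        w += lr * error * xi
        b += lr * error

print(step(X @ w + b))  # converges to [0 0 0 1]
```

Because the data is linearly separable, the perceptron convergence theorem guarantees this loop settles on a correct boundary after finitely many updates.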

Defining and graphing transfer function types

The learning properties of a neural network would not be very good if it relied only on a univariate linear classifier. Even mildly complex problems in machine learning involve multiple non-linear variables, so many variants were developed as replacements for the transfer functions of...
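To make this concrete, here is a short sketch defining and plotting three transfer (activation) functions commonly covered in this context; the selection and plotting layout are my own illustration, not the book's listing:

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # render off-screen so the script also runs headless
import matplotlib.pyplot as plt

def sigmoid(z):
    # Squashes any real input into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    # Like the sigmoid, but zero-centered, with range (-1, 1)
    return np.tanh(z)

def relu(z):
    # Passes positive inputs through, clips negatives to zero
    return np.maximum(0.0, z)

z = np.linspace(-5, 5, 200)
fig, axes = plt.subplots(1, 3, figsize=(12, 3))
for ax, fn in zip(axes, (sigmoid, tanh, relu)):
    ax.plot(z, fn(z))
    ax.set_title(fn.__name__)
fig.savefig('transfer_functions.png')
```

All three are non-linear, which is precisely what lets stacked layers represent functions no single linear classifier can.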

Summary

In this chapter, we took a very important step towards solving complex problems by implementing our first neural network. The architectures that follow will now contain familiar elements, and we will be able to extrapolate the knowledge acquired in this chapter to novel architectures.

In the next chapter, we will explore more complex models and problems, using more layers and special configurations, such as convolutional and dropout layers.

References

Refer to the following content:

  • McCulloch, Warren S., and Walter Pitts. A logical calculus of the ideas immanent in nervous activity. The Bulletin of Mathematical Biophysics 5.4 (1943): 115-133.
  • Kleene, Stephen Cole. Representation of Events in Nerve Nets and Finite Automata. No. RAND-RM-704. RAND Corporation, Santa Monica, CA, 1951.
  • Farley, B. W. A. C., and W. Clark. Simulation of self-organizing systems by digital computer. Transactions of the IRE Professional Group on Information Theory 4.4 (1954): 76-84.
  • Rosenblatt, Frank. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review 65.6 (1958): 386.
  • Rosenblatt, Frank. Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. Spartan Books, Washington, DC, 1961.
  • Werbos, P. J. (1975). Beyond Regression: New Tools for Prediction and Analysis...
