Python Machine Learning - Third Edition

Modeling complex functions with artificial neural networks

At the beginning of this book, we started our journey through machine learning algorithms with artificial neurons in Chapter 2, Training Simple Machine Learning Algorithms for Classification. Artificial neurons represent the building blocks of the multilayer artificial NNs that we will discuss in this chapter.
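As a quick refresher, a single artificial neuron of the kind introduced in Chapter 2 computes a weighted sum of its inputs and passes the result through a threshold (activation) function. The following is a minimal NumPy sketch of that idea; the function name, weight values, and unit-step activation are illustrative choices, not code taken from the book:

```python
import numpy as np

def artificial_neuron(x, w, b):
    """A single artificial neuron (illustrative sketch):
    weighted sum of the inputs plus a bias, followed by
    a unit-step (threshold) activation."""
    net_input = np.dot(x, w) + b              # z = w^T x + b
    return np.where(net_input >= 0.0, 1, 0)   # fire (1) if z >= 0, else 0

# Example with two input features (values chosen for illustration)
x = np.array([0.5, -1.2])   # inputs
w = np.array([0.8, 0.4])    # weights
b = 0.1                     # bias unit

print(artificial_neuron(x, w, b))   # -> 1, since 0.5*0.8 + (-1.2)*0.4 + 0.1 >= 0
```

Multilayer NNs connect many such units in stacked layers and, as we will see later in this chapter, replace the hard threshold with differentiable activation functions so that the whole network can be trained with gradient-based methods.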

The basic concept behind artificial NNs was built upon hypotheses and models of how the human brain works to solve complex problems. Although artificial NNs have gained a lot of popularity in recent years, early studies of NNs go back to the 1940s, when Warren McCulloch and Walter Pitts first described how neurons could work (A Logical Calculus of the Ideas Immanent in Nervous Activity, W. S. McCulloch and W. Pitts, The Bulletin of Mathematical Biophysics, 5(4): 115-133, 1943).

However, in the decades that followed the first implementation of the McCulloch-Pitts neuron model—Rosenblatt's...
