50 Algorithms Every Programmer Should Know - Second Edition

Product type: Book
Published: Sep 2023
Publisher: Packt
ISBN-13: 9781803247762
Pages: 538
Edition: 2nd Edition
Author: Imran Ahmad

Table of Contents (22 chapters)

Preface
1. Section 1: Fundamentals and Core Algorithms
2. Overview of Algorithms
3. Data Structures Used in Algorithms
4. Sorting and Searching Algorithms
5. Designing Algorithms
6. Graph Algorithms
7. Section 2: Machine Learning Algorithms
8. Unsupervised Machine Learning Algorithms
9. Traditional Supervised Learning Algorithms
10. Neural Network Algorithms
11. Algorithms for Natural Language Processing
12. Understanding Sequential Models
13. Advanced Sequential Modeling Algorithms
14. Section 3: Advanced Topics
15. Recommendation Engines
16. Algorithmic Strategies for Data Handling
17. Cryptography
18. Large-Scale Algorithms
19. Practical Considerations
20. Other Books You May Enjoy
21. Index

Defining Gradient Descent

The purpose of training a neural network model is to find the right values for its weights. We start training a neural network with random or default values for the weights. Then, we iteratively use an optimizer algorithm, such as gradient descent, to change the weights in such a way that our predictions improve.

The starting point of the gradient descent algorithm is the set of random weight values that needs to be optimized as we iterate through the algorithm. In each subsequent iteration, the algorithm proceeds by changing the values of the weights in such a way that the cost is minimized.

The following diagram explains the logic of the gradient descent algorithm:

Figure 8.6: Gradient descent algorithm

In the preceding diagram, the input is the feature vector X. The actual value of the target variable is Y and the predicted value of the target variable is Y’. We determine the deviation of the actual value from the...
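The deviation between the actual value Y and the predicted value Y' feeds the cost function, and each iteration then moves the weights against the gradient of that cost (w becomes w - eta * dJ/dw, where eta is the learning rate). As a rough, minimal sketch of this loop, and not code taken from the book, the following Python snippet applies gradient descent to a simple linear model with a mean-squared-error cost; the function name gradient_descent and parameters such as learning_rate and iterations are illustrative assumptions.

import numpy as np

def gradient_descent(X, y, learning_rate=0.1, iterations=500, seed=0):
    """Fit y ≈ X @ w + b by minimizing mean-squared error with gradient descent."""
    rng = np.random.default_rng(seed)
    n_samples, n_features = X.shape
    w = rng.normal(size=n_features)   # start from random weights
    b = 0.0                           # start the bias at a default value
    for _ in range(iterations):
        y_pred = X @ w + b                          # predicted value Y'
        error = y_pred - y                          # deviation of Y' from the actual Y
        grad_w = (2.0 / n_samples) * (X.T @ error)  # gradient of the cost w.r.t. the weights
        grad_b = (2.0 / n_samples) * error.sum()    # gradient of the cost w.r.t. the bias
        w -= learning_rate * grad_w                 # step against the gradient
        b -= learning_rate * grad_b
    return w, b

# Toy example: recover w ≈ 3 and b ≈ 1 from noisy data.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 1))
y = 3.0 * X[:, 0] + 1.0 + rng.normal(scale=0.1, size=200)
w, b = gradient_descent(X, y)
print(w, b)

The same loop carries over to neural networks: only the way the predictions and gradients are computed changes, while the idea of repeatedly stepping the weights in the direction that reduces the cost stays the same.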
