50 Algorithms Every Programmer Should Know - Second Edition

Product type: Book
Published: September 2023
Publisher: Packt
ISBN-13: 9781803247762
Pages: 538
Edition: 2nd
Author: Imran Ahmad

Table of Contents (22 chapters)

Preface
1. Section 1: Fundamentals and Core Algorithms
2. Overview of Algorithms
3. Data Structures Used in Algorithms
4. Sorting and Searching Algorithms
5. Designing Algorithms
6. Graph Algorithms
7. Section 2: Machine Learning Algorithms
8. Unsupervised Machine Learning Algorithms
9. Traditional Supervised Learning Algorithms
10. Neural Network Algorithms
11. Algorithms for Natural Language Processing
12. Understanding Sequential Models
13. Advanced Sequential Modeling Algorithms
14. Section 3: Advanced Topics
15. Recommendation Engines
16. Algorithmic Strategies for Data Handling
17. Cryptography
18. Large-Scale Algorithms
19. Practical Considerations
20. Other Books You May Enjoy
21. Index

Summary

In this chapter, we discussed advanced sequential models: techniques designed to process input sequences, especially when the length of the output sequence may differ from that of the input. Autoencoders, a type of neural network architecture, are particularly adept at compressing data. They work by encoding the input into a smaller representation and then decoding it back to resemble the original. This makes them useful for tasks like image denoising, where noise is filtered out of an image to produce a clearer version.
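To make the encode-compress-decode idea concrete, here is a minimal sketch of a denoising autoencoder using TensorFlow/Keras. The layer sizes, the 784-pixel input (a flattened 28x28 image), and the synthetic noisy data are illustrative assumptions, not the chapter's exact architecture:

```python
# A minimal denoising-autoencoder sketch (assumes TensorFlow/Keras).
# Dimensions and synthetic data are illustrative, not the book's setup.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Encoder: compress the 784-pixel input into a 32-dimensional code.
encoder = models.Sequential([
    tf.keras.Input(shape=(784,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(32, activation="relu"),   # the compressed representation
])

# Decoder: reconstruct the original 784 pixels from the code.
decoder = models.Sequential([
    tf.keras.Input(shape=(32,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(784, activation="sigmoid"),
])

autoencoder = models.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")

# For denoising, train on (noisy input -> clean target) pairs.
x_clean = np.random.rand(1000, 784).astype("float32")  # stand-in data
x_noisy = np.clip(x_clean + 0.2 * np.random.randn(1000, 784),
                  0.0, 1.0).astype("float32")
autoencoder.fit(x_noisy, x_clean, epochs=5, batch_size=64, verbose=0)
```

Because the 32-dimensional code is far smaller than the input, the network is forced to learn the structure of the data rather than memorize pixels, which is why the reconstruction tends to discard noise.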

Another influential model is the Seq2Seq (sequence-to-sequence) model. It is designed to handle tasks where the input and output sequences have different lengths, making it ideal for applications like machine translation. However, traditional Seq2Seq models face an information bottleneck: the entire context of the input sequence must be captured in a single, fixed-size representation. To address this, the attention mechanism was...
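To illustrate that bottleneck, the following is a minimal Keras sketch of a classic Seq2Seq encoder-decoder; the vocabulary sizes and dimensions are illustrative assumptions. Note how the decoder sees the input sequence only through the encoder's final fixed-size state vectors:

```python
# A minimal Seq2Seq encoder-decoder sketch (assumes TensorFlow/Keras).
# Vocabulary sizes and dimensions are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

src_vocab, tgt_vocab, dim = 5000, 5000, 256

# Encoder: reads the source sequence; only its final states are kept.
enc_inputs = tf.keras.Input(shape=(None,))
enc_emb = layers.Embedding(src_vocab, dim)(enc_inputs)
_, state_h, state_c = layers.LSTM(dim, return_state=True)(enc_emb)

# Decoder: generates the target sequence, conditioned solely on the
# fixed-size (state_h, state_c) pair -- the information bottleneck.
dec_inputs = tf.keras.Input(shape=(None,))
dec_emb = layers.Embedding(tgt_vocab, dim)(dec_inputs)
dec_out, _, _ = layers.LSTM(dim, return_sequences=True,
                            return_state=True)(
    dec_emb, initial_state=[state_h, state_c])
outputs = layers.Dense(tgt_vocab, activation="softmax")(dec_out)

model = models.Model([enc_inputs, dec_inputs], outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

However long the input sentence is, everything the decoder knows about it must squeeze through the two `dim`-sized state vectors; attention relieves this by letting the decoder look back at all of the encoder's per-step outputs instead.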
