Machine Learning Infrastructure and Best Practices for Software Engineers

Product type: Book
Published: Jan 2024
Publisher: Packt
ISBN-13: 9781837634064
Pages: 346
Edition: 1st
Author: Miroslaw Staron

Table of Contents (24 chapters)

Preface
1. Part 1: Machine Learning Landscape in Software Engineering
2. Machine Learning Compared to Traditional Software
3. Elements of a Machine Learning System
4. Data in Software Systems – Text, Images, Code, and Their Annotations
5. Data Acquisition, Data Quality, and Noise
6. Quantifying and Improving Data Properties
7. Part 2: Data Acquisition and Management
8. Processing Data in Machine Learning Systems
9. Feature Engineering for Numerical and Image Data
10. Feature Engineering for Natural Language Data
11. Part 3: Design and Development of ML Systems
12. Types of Machine Learning Systems – Feature-Based and Raw Data-Based (Deep Learning)
13. Training and Evaluating Classical Machine Learning Systems and Neural Networks
14. Training and Evaluation of Advanced ML Algorithms – GPT and Autoencoders
15. Designing Machine Learning Pipelines (MLOps) and Their Testing
16. Designing and Implementing Large-Scale, Robust ML Software
17. Part 4: Ethical Aspects of Data Management and ML System Development
18. Ethics in Data Acquisition and Management
19. Ethics in Machine Learning Systems
20. Integrating ML Systems in Ecosystems
21. Summary and Where to Go Next
22. Index
23. Other Books You May Enjoy

Current developments

At the time of writing this book, the Technology Innovation Institute (https://www.tii.ae/) has just released its largest model, Falcon 180B. It is the largest fully open-source model available and is comparable in capability to the GPT-3.5 model, which illustrates the current direction of research in large language models.
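As a rough sketch of what "fully open source" means in practice, the model can be downloaded and run locally. The following assumes the Hugging Face transformers and accelerate libraries, access to the tiiuae/falcon-180B checkpoint on the Hugging Face Hub, and enough GPU memory; the prompt and generation settings are illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-180B"  # public checkpoint on the Hugging Face Hub

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half-precision weights to reduce memory use
    device_map="auto",           # shard the weights across all available GPUs
)

# Generate a short completion to verify that the model is loaded
inputs = tokenizer("Machine learning infrastructure is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```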

Although GPT-4 exists and is reportedly far larger, we can develop very good software with moderately large models such as GPT-3.5. This brings us to some of the current topics that we, as a community, need to discuss. One of them is the energy sustainability of these models. According to Hugging Face, Falcon 180B requires around 400 GB of memory to run inference – the equivalent of eight Nvidia A100 GPUs. We do not know how much hardware the GPT-4 model needs. The electricity and hardware resources that such a model consumes must be on par with the value that we get from it.
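To see where a figure such as 400 GB comes from, a back-of-the-envelope estimate helps: inference memory is dominated by the model weights, so it is roughly the parameter count times the bytes per parameter. The 20% overhead factor for activations and buffers below is an illustrative assumption, not a published figure:

```python
# Rough estimate of the memory needed to serve a large language model.
# Assumes weights dominate inference memory; the 1.2 overhead factor
# (activations, KV cache, buffers) is an illustrative assumption.
def inference_memory_gb(n_params: float, bytes_per_param: int, overhead: float = 1.2) -> float:
    return n_params * bytes_per_param * overhead / 1e9

falcon_params = 180e9  # Falcon 180B

for precision, nbytes in [("float32", 4), ("bfloat16", 2), ("int8", 1)]:
    gb = inference_memory_gb(falcon_params, nbytes)
    a100s = gb / 80  # Nvidia A100 cards with 80 GB of memory each
    print(f"{precision}: ~{gb:.0f} GB, about {a100s:.1f} A100 80 GB GPUs")
```

In bfloat16, this lands in the same ballpark as the 400 GB that Hugging Face reports, which is why serving such a model takes a whole multi-GPU node rather than a single card.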

We are also approaching the limits of conventional computational power when it comes...
