Machine Learning Infrastructure and Best Practices for Software Engineers

You're reading from  Machine Learning Infrastructure and Best Practices for Software Engineers

Product type Book
Published in Jan 2024
Publisher Packt
ISBN-13 9781837634064
Pages 346 pages
Edition 1st Edition
Languages
Author (1):
Miroslaw Staron Miroslaw Staron
Profile icon Miroslaw Staron

Table of Contents (24 chapters)

Preface
1. Part 1: Machine Learning Landscape in Software Engineering
2. Machine Learning Compared to Traditional Software
3. Elements of a Machine Learning System
4. Data in Software Systems – Text, Images, Code, and Their Annotations
5. Data Acquisition, Data Quality, and Noise
6. Quantifying and Improving Data Properties
7. Part 2: Data Acquisition and Management
8. Processing Data in Machine Learning Systems
9. Feature Engineering for Numerical and Image Data
10. Feature Engineering for Natural Language Data
11. Part 3: Design and Development of ML Systems
12. Types of Machine Learning Systems – Feature-Based and Raw Data-Based (Deep Learning)
13. Training and Evaluating Classical Machine Learning Systems and Neural Networks
14. Training and Evaluation of Advanced ML Algorithms – GPT and Autoencoders
15. Designing Machine Learning Pipelines (MLOps) and Their Testing
16. Designing and Implementing Large-Scale, Robust ML Software
17. Part 4: Ethical Aspects of Data Management and ML System Development
18. Ethics in Data Acquisition and Management
19. Ethics in Machine Learning Systems
20. Integrating ML Systems in Ecosystems
21. Summary and Where to Go Next
22. Index
23. Other Books You May Enjoy

Quantifying and Improving Data Properties

Procuring data for machine learning systems is a long process. So far, we have focused on collecting data from source systems and cleaning noise from it. Noise, however, is not the only problem that we can encounter in data. Missing values or random attributes are examples of data properties that can cause problems for machine learning systems. Even the length of the input data can be problematic if it falls outside the expected range.

In this chapter, we will dive deeper into the properties of data and how to improve them. In contrast to the previous chapter, we will work on feature vectors rather than raw data. Because feature vectors are already a transformation of the data, we can alter properties such as noise, or even change how the data is represented.

We’ll focus on processing text, which is an important part of many machine learning systems today. We’ll start by understanding how to transform...

Feature engineering – the basics

Feature engineering is the process of transforming raw data into vectors of numbers that can be used in machine learning algorithms. This process is structured: it requires us to first select the feature extraction mechanism we need (which depends on the type of task) and then configure it. Once the chosen mechanism is configured, we can use it to transform the raw input data into a matrix of features; we call this process feature extraction. Sometimes, the data needs to be processed before (or after) the feature extraction, for example, by merging fields or removing noise. This process is called data wrangling. The short sketch after this paragraph illustrates the configure-then-extract flow.
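As an illustration (a minimal sketch using scikit-learn's CountVectorizer as an assumed bag-of-words extractor, not a listing from the book), here is how two raw lines of code can be turned into a matrix of token-count features:

# A minimal sketch of feature extraction: configuring a bag-of-words
# mechanism and transforming raw input into a matrix of features.
from sklearn.feature_extraction.text import CountVectorizer

raw_data = ['printf("Hello world!");', 'return 1']

# Configure the chosen feature extraction mechanism...
vectorizer = CountVectorizer(lowercase=False)

# ...and then use it to extract the feature matrix
features = vectorizer.fit_transform(raw_data)

print(vectorizer.get_feature_names_out())  # ['Hello' 'printf' 'return' 'world']
print(features.toarray())                  # one row of token counts per input line

Each row of the resulting matrix is the feature vector for one raw input line, which is exactly the kind of table we will work with in the rest of this chapter.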

The number of feature extraction mechanisms is large, and we cannot cover all of them here; nor do we need to. What we do need to understand is how the choice of feature extraction mechanism influences the properties of the data. We’ll dive...

Clean data

One of the most problematic aspects of datasets, when it comes to machine learning, is the presence of empty data points or empty feature values for data points. Let’s illustrate that with the features extracted in the previous section. In the following table, I introduced an empty data point: the NaN value in the middle column. This means that the value does not exist.

                            Hello    printf    return
printf(“Hello world!”);       1       NaN        0
return 1                      0        0         1

Figure 5.2 – Extracted features with a NaN value in the table

If we use this data as input to a machine learning algorithm, we’ll get an error...
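For instance, most scikit-learn estimators raise a ValueError when they encounter NaN input. A common remedy (sketched below under the assumption that the small table from Figure 5.2 is held in a pandas DataFrame; this is not the book's exact listing) is to impute the missing value before training:

# A minimal sketch of imputing the NaN value from Figure 5.2.
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

features = pd.DataFrame(
    {"Hello": [1, 0], "printf": [np.nan, 0], "return": [0, 1]},
    index=['printf("Hello world!");', "return 1"],
)

# Option 1: treat a missing count as "token not present"
filled = features.fillna(0)

# Option 2: replace NaN with the column mean
imputed = SimpleImputer(strategy="mean").fit_transform(features)

print(filled)
print(imputed)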

Noise in data management

Missing data and contradictory annotations are only some of the problems we can encounter in data. In many cases, large datasets generated by feature extraction algorithms contain too much information. Features can be superfluous and contribute nothing to the end result of the algorithm. Many machine learning models can deal with noise in the features, called attribute noise, but too many features are costly in terms of training time, storage, and even data collection itself.

Therefore, we should also pay attention to attribute noise: identify it and then remove it.

Attribute noise

There are a few methods to reduce attribute noise in large datasets. One of them is an algorithm named the Pairwise Attribute Noise Detection Algorithm (PANDA). PANDA compares features pairwise and identifies which of them add noise to the dataset. It is a very effective algorithm, but unfortunately also very computationally heavy. If our dataset had a few hundred features (which is when we would really need this algorithm), we would need a lot of computational power to identify the features that bring little to the analysis.

Fortunately, there are machine learning algorithms that provide similar functionality with little computational overhead. One of them is the random forest algorithm, which lets us retrieve a set of feature importance values. These values identify which features contribute little or nothing to the decision trees in the forest.

Let us then see how to use that algorithm to extract and visualize the...
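As a sketch of that idea (using a synthetic dataset here, since the book's own data is not reproduced in this excerpt), we can train a random forest and inspect its feature_importances_ attribute:

# A minimal sketch: extracting and visualizing random forest feature
# importances to spot features that bring little to the analysis.
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# 10 features, only 4 of them informative; the rest act as attribute noise
X, y = make_classification(n_samples=500, n_features=10,
                           n_informative=4, random_state=42)

forest = RandomForestClassifier(n_estimators=100, random_state=42).fit(X, y)

importances = pd.Series(forest.feature_importances_,
                        index=[f"feature_{i}" for i in range(X.shape[1])])
importances.sort_values().plot.barh(title="Random forest feature importances")
plt.tight_layout()
plt.show()

# Features with importance close to zero are candidates for removal
print(importances[importances < 0.05])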

Splitting data

When designing machine learning-based software, another important task is to understand the distribution of the data and, subsequently, to ensure that the data used for training and testing follows a similar distribution.

The distribution of the data used for training and validation matters because machine learning models identify patterns and re-create them. This means that if the training data is not distributed in the same way as the test data, our model will misclassify data points. These misclassifications (or mispredictions) occur because the model learns patterns in the training data that differ from those in the test data.
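In practice (a minimal sketch assuming scikit-learn's train_test_split, not any specific listing from the book), a stratified split is the usual way to keep both sets similarly distributed:

# A minimal sketch: stratified splitting keeps the class distribution
# similar in the train and test sets.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# An imbalanced dataset: roughly 80% class 0, 20% class 1
X, y = make_classification(n_samples=1000, weights=[0.8], random_state=42)

# stratify=y preserves the 80/20 ratio in both splits
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

print(np.bincount(y_train) / len(y_train))  # approx. [0.8, 0.2]
print(np.bincount(y_test) / len(y_test))    # approx. [0.8, 0.2]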

Let us understand how splitting algorithms work, both in theory and in practice. Figure 5.5 shows how the splitting works on a conceptual level:

Figure 5.5 – Splitting data into train and test sets

Icons represent review comments (and...

How ML models handle noise

Removing noise from datasets is a time-consuming task, and one that cannot be easily automated. We need to understand whether we have noise in the data, what kind of noise it is, and how to remove it. Luckily, most machine learning algorithms are pretty good at handling noise.

For example, the algorithm that we have used quite a lot so far, random forest, is quite robust to noise in datasets. Random forest is an ensemble model, which means that it is composed of several separate decision trees that internally “vote” for the best result. This voting process can therefore filter out noise and converge toward the pattern contained in the data.
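To make this concrete (a small synthetic experiment of my own, not taken from the book), we can flip a fraction of the training labels and observe that the forest's test accuracy typically degrades only mildly:

# A minimal sketch: random forest accuracy with and without injected
# class noise (randomly flipped training labels).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

# Flip 10% of the training labels to simulate noise
rng = np.random.default_rng(42)
noisy = y_train.copy()
flip = rng.choice(len(noisy), size=len(noisy) // 10, replace=False)
noisy[flip] = 1 - noisy[flip]

for labels, name in [(y_train, "clean labels"), (noisy, "10% flipped labels")]:
    forest = RandomForestClassifier(n_estimators=100, random_state=42)
    forest.fit(X_train, labels)
    print(f"{name}: test accuracy = {forest.score(X_test, y_test):.3f}")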

Deep learning algorithms have similar properties: by combining a large number of simple neurons, these networks are robust to noise in large datasets and can still recover the pattern in the data.

Best practice #33

In large-scale software systems, if possible...

References

  • Scott, S. and Matwin, S. Feature engineering for text classification. In ICML, 1999.
  • Kulkarni, A., et al. Converting text to features. In Natural Language Processing Recipes: Unlocking Text Data with Machine Learning and Deep Learning Using Python, 2021, pp. 63-106.
  • Van Hulse, J.D., Khoshgoftaar, T.M., and Huang, H. The pairwise attribute noise detection algorithm. Knowledge and Information Systems, 2007, 11: pp. 171-190.
  • Li, X., et al. Exploiting BERT for end-to-end aspect-based sentiment analysis. arXiv preprint arXiv:1910.00883, 2019.
  • Xu, Y. and Goodacre, R. On splitting training and validation set: a comparative study of cross-validation, bootstrap and systematic sampling for estimating the generalization performance of supervised learning. Journal of Analysis and Testing, 2018, 2(3): pp. 249-262.
  • Mosin, V., et al. Comparing input prioritization techniques for testing deep learning algorithms. In 2022 48th Euromicro Conference on Software Engineering...