Automated Machine Learning with AutoKeras

By Luis Sobrecueva
About this book

AutoKeras is an open source AutoML library that provides easy access to deep learning models. If you are looking to build deep learning model architectures and perform parameter tuning automatically using AutoKeras, then this book is for you.

This book teaches you how to develop and use state-of-the-art AI algorithms in your projects. It begins with a high-level introduction to automated machine learning, explaining all the concepts required to get started with this machine learning approach. You will then learn how to use AutoKeras for image and text classification and regression. As you make progress, you'll discover how to use AutoKeras to perform sentiment analysis on documents. This book will also show you how to implement a custom model for topic classification with AutoKeras. Toward the end, you will explore advanced concepts of AutoKeras such as working with multi-modal data and multi-task models, customizing the model with AutoModel, and visualizing experiment results using AutoKeras Extensions.

By the end of this machine learning book, you will be able to confidently use AutoKeras to design your own custom machine learning models in your company.

Publication date: May 2021


Chapter 1: Introduction to Automated Machine Learning

In this chapter, we cover the main concepts relating to Automated Machine Learning (AutoML), with an overview of the types of AutoML methods and the software systems that implement them.

If you are a developer working with AutoML, you will be able to put your knowledge to work with this practical guide to develop and use state-of-the-art AI algorithms in your projects. By the end of this chapter, you will have a clear understanding of the anatomy of the Machine Learning (ML) workflow, what AutoML is, and its different types.

Through clear explanations of essential concepts and practical examples, you will see the differences between the standard ML and the AutoML approaches and the pros and cons of each.

In this chapter, we're going to cover the following main topics:

  • The anatomy of a standard ML workflow
  • What is AutoML?
  • Types of AutoML

The anatomy of a standard ML workflow

In a traditional ML application, professionals have to train a model using a set of input data. If this data is not in the proper form, an expert may have to apply some data preprocessing techniques, such as feature extraction, feature engineering, or feature selection.

Once the data is ready and the model can be trained, the next step is to select the right algorithm and optimize the hyperparameters to maximize the accuracy of the model's predictions. Each step involves time-consuming challenges, and typically also requires a data scientist with the experience and knowledge to be successful. In the following figure, we can see the main steps represented in a typical ML pipeline:

Figure 1.1 – ML pipeline steps

Each of these pipeline processes involves a series of steps. In the following sections, we describe each process and related concepts in more detail.

Data ingestion

Piping incoming data into a data store is the first step in any ML workflow. The goal here is to store the raw data without applying any transformation, so that we keep an immutable record of the original dataset. The data can be obtained from various sources, such as databases, message buses, streams, and so on.

Data preprocessing

The second phase, data preprocessing, is one of the most time-consuming tasks in the pipeline and involves many sub-tasks, such as data cleaning, feature extraction, feature selection, feature engineering, and data segregation. Let's take a closer look at each one:

  • The data cleaning process is responsible for detecting and fixing (or deleting) corrupt or erroneous records in a dataset. Because raw data is unprocessed and unstructured, it is rarely in the correct form to be processed; cleaning it involves filling in missing fields, removing duplicate rows, and normalizing and fixing other errors in the data.
  • Feature extraction is a procedure for reducing the resources required to process a large dataset by creating new features as combinations of existing ones (and eliminating the originals). The main problem when analyzing large datasets is the number of variables to take into account. Processing a large number of variables generally requires a lot of hardware resources, such as memory and computing power, and can also cause overfitting, which means that the algorithm works very well on the training samples but generalizes poorly to new samples. Feature extraction is based on the construction of new variables, combining existing ones, to solve these problems without losing precision in the data.
  • Feature selection is the process of selecting a subset of variables to use in building the model. Performing feature selection simplifies the model (making it more interpretable for humans), reduces training times, and improves generalization by reducing overfitting. The main reason to apply feature selection methods is that the data contains some features that can be redundant or irrelevant, so removing them wouldn't incur much loss of information.
  • Feature engineering is the process by which, through data mining techniques, features are extracted from raw data using domain knowledge. This typically requires a knowledgeable expert and is used to improve the performance of ML algorithms.

  • Data segregation consists of dividing the dataset into two subsets: a train dataset for training the model and a test dataset for testing the predictive model.
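
To make these sub-tasks concrete, the following is a minimal sketch using pandas and scikit-learn; the file name and column names are hypothetical placeholders, not part of any real dataset:

# A minimal preprocessing sketch; file and column names are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("raw_data.csv")  # raw data from the ingestion step

# Data cleaning: remove duplicate rows and fill in missing fields.
df = df.drop_duplicates()
df["age"] = df["age"].fillna(df["age"].median())

# Feature selection: keep only the columns we believe are informative.
features = df[["age", "income", "visits_per_week"]]
target = df["label"]

# Data segregation: split into train and test subsets.
x_train, x_test, y_train, y_test = train_test_split(
    features, target, test_size=0.2, random_state=42
)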

Modeling

Modeling is divided into three parts:

  1. Choose candidate models to evaluate.
  2. Train the chosen model (improve it).
  3. Evaluate the model (compare it with others).

This process is iterative and involves testing various models until one is obtained that solves the problem in an efficient way. The following figure shows a detailed schema of the modeling phases of the ML pipeline:

Figure 1.2 – Modeling phases of the ML pipeline

After taking this overview of the modeling phase, let's dive deeper into each of its three parts to gain a detailed understanding of them.

Model selection

When choosing a candidate model, in addition to performance, it is important to consider several factors: readability (by humans), ease of debugging, the amount of data available, and hardware limitations for training and prediction.

The main points to take into account for selecting a model would be as follows:

  • Interpretability and ease of debugging: How do we know why a model made a specific decision, and how do we fix errors when they occur?
  • Dataset type: There are algorithms that are more suitable for specific types of data.
  • Dataset size: How much data is available and will this change in the future?
  • Resources: How much time and resources do you have for training and prediction?

Model training

This process uses the training dataset to feed each chosen candidate model, allowing the models to learn patterns from the training samples by applying a learning algorithm (backpropagation, in the case of neural networks).

The models are fed with the output data from the data preprocessing step. Once a model is trained, both its configuration and its learned parameters are used in the model evaluation.

Model evaluation

This step is responsible for evaluating model performance, using the test dataset to measure the accuracy of the predictions. This process involves tuning and improving the model, generating a new candidate model version to be trained again.
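
As an illustration of these training and evaluation steps, here is a minimal Keras sketch; the architecture is illustrative and reuses the hypothetical train/test split from the preprocessing sketch earlier in the chapter:

# A sketch of training and evaluation with Keras; the architecture is
# illustrative, and x_train/x_test come from the earlier hypothetical split.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(3,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Model training: backpropagation adjusts the learned parameters.
model.fit(x_train, y_train, epochs=10, validation_split=0.1)

# Model evaluation: measure prediction accuracy on the test dataset.
test_loss, test_acc = model.evaluate(x_test, y_test)
print(f"Test accuracy: {test_acc:.3f}")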

Model tuning

Model tuning involves modifying hyperparameters, such as the learning rate or the optimization algorithm, as well as model-specific architecture parameters, such as the number of layers and the types of operations for neural networks. In standard ML, these adjustments have to be performed manually by an expert.

Other times, the evaluated model is discarded, and another new model is chosen for training. Often, starting from a previously trained model through transfer learning shortens the training time and also improves the accuracy of the final model's predictions.

Since the main bottleneck is the training time, the adjustment of models should focus on efficiency and reproducibility, so that training is as fast as possible and the steps taken to improve performance can be reproduced by someone else.

Model deployment

Once the best model is chosen, it is usually put into production through an API service to be consumed by the end user or by other internal services.

Usually, the best model is selected to be deployed in one of two deployment modes:

  • Offline (asynchronous): In this case, the model predictions are calculated periodically in a batch process and stored in a data store, such as a key-value database.
  • Online (synchronous): In this mode, the predictions are calculated in real time.

Deployment consists of exposing your model to a real-world application. This application can be anything, from recommending videos to users of a streaming platform to predicting the weather on a mobile application.
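
To make the two modes concrete, here is a minimal sketch; the Flask endpoint and the dictionary standing in for a key-value store are illustrative assumptions, not a production setup:

# A sketch of both deployment modes; `model` is assumed to be a trained
# model with a scikit-learn-style predict() method.
from flask import Flask, request, jsonify

# Offline (asynchronous): score a batch periodically and store the
# predictions in a key-value store (a plain dict stands in for one here).
def run_batch_job(model, batch, store):
    for key, features in batch.items():
        store[key] = model.predict([features])[0]

# Online (synchronous): compute predictions in real time behind an API.
app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    features = request.json["features"]
    prediction = model.predict([features])[0]
    return jsonify({"prediction": float(prediction)})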

Releasing an ML model into production is a complex process that generally involves multiple technologies (version control, containerization, caching, hot swapping, A/B testing, and so on) and is outside the scope of this book.

Model monitoring

Once in production, the model is monitored to see how it performs in the real world and is calibrated accordingly. The following schema represents the continuous model cycle, from data ingestion to deployment:

Figure 1.3 – Model cycle phases

In the following sections, we will explain the main reasons why it is important to monitor your production model.

Why monitor your model?

Your model's predictions will degrade over time. This phenomenon is called drift. Drift is a consequence of changes in the input data: as the data the model sees in production diverges from the data it was trained on, the predictions naturally get worse.

Let's look at the users of a search engine as an example. A predictive model can use user features such as personal information, search types, and clicked results to predict which ads to show. But after a while, these historical searches may no longer represent current user behavior.

A possible solution is to retrain the model with the most recent data, but this is not always possible and may even be counterproductive. Imagine training the model only on searches from the start of the COVID-19 pandemic: it would show mostly ads for products related to the pandemic, causing a sharp decline in sales for the rest of the products.

A smarter alternative to combat drift is to monitor our model; by knowing what is happening, we can decide when and how to retrain it.

How can you monitor your model?

In cases where the actual values are available immediately (that is, you have the true labels right after making a prediction), you just need to monitor performance measures such as accuracy, the F1 score, and so on. But often there is a delay between the prediction and the ground truth; for example, when predicting spam in emails, users can report that an email is spam up to several months after it was received. In this case, you must use other measurement methods based on statistical approaches.

For other, more complex processes, where it is difficult to establish a direct relationship between classical ML evaluation metrics and real-world outcomes, it is sometimes easier to split traffic into cases and monitor pure business metrics instead.
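
As an illustration of such a statistical approach, the following sketch uses a two-sample Kolmogorov-Smirnov test from SciPy to compare the training distribution of a feature with its live distribution; the feature name, the live_batch data, and the 0.05 threshold are assumptions made for the example:

# A drift-detection sketch using a two-sample Kolmogorov-Smirnov test;
# `live_batch` and the alpha threshold are illustrative assumptions.
from scipy.stats import ks_2samp

def detect_drift(train_values, live_values, alpha=0.05):
    """Flag drift when the live distribution of a feature differs
    significantly from the one seen at training time."""
    statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha  # True suggests drift: consider retraining

if detect_drift(x_train["age"], live_batch["age"]):
    print("Input distribution has changed; schedule retraining.")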

What should you monitor in your model?

Any ML pipeline involves monitoring performance data. Some possible variables of the model to monitor are as follows:

  • Chosen model: What kind of model was chosen, and what are the architecture type, the optimizer algorithm, and the hyperparameter values?
  • Input data distribution: By comparing the distribution of the training data with the distribution of the input data, we can detect whether the data used for the training represents what is happening now in the real world.
  • Deployment date: Date of the release of the model.
  • Features used: Variables used as input for the model. Sometimes there are relevant features in production that we are not using in our model.
  • Expected versus observed: A scatter plot comparing expected and observed values is the most widely used approach.
  • Times published: The number of times a model has been published, usually represented using model version numbers.
  • Time running: How long has it been since the model was deployed?

Now that we have seen the different components of the pipeline, we are ready to introduce the main AutoML concepts in the next section.


What is AutoML?

The main task in the modeling phase is to select the different models to be evaluated and adjust the different hyperparameters of each one. This work, normally performed by data scientists, requires a lot of time as well as experienced professionals. From a computational point of view, hyperparameter tuning is a systematic search process, so it can be automated.

AutoML is a process that automates, using AI algorithms, every step of the ML pipeline described previously, from data preprocessing to the deployment of the ML model, allowing non-data scientists (such as software developers) to use ML techniques without needing experience in the field. In the following figure, we can see a simple representation of the inputs and outputs of an AutoML system:

Figure 1.4 – How AutoML works

AutoML is also capable of producing simpler solutions, more agile proof-of-concept creation, and unattended training of models that often outperform those created manually, dramatically improving the model's predictive performance. It also frees data scientists to focus on tasks that are harder to automate, such as data preprocessing and feature engineering, described earlier in the Data preprocessing section. Before introducing the AutoML types, let's take a quick look at the main differences between AutoML and traditional ML.

Differences from the standard approach

In the standard ML approach, data scientists have an input dataset to train on. Usually, this raw data is not ready for the training algorithms, so an expert must apply different methods, such as data preprocessing, feature engineering, and feature extraction, as well as model tuning through algorithm selection and hyperparameter optimization, to maximize the model's predictive performance.

All of these steps are time-consuming and resource-intensive, and they are the main obstacle to putting ML into practice.

With AutoML, we simplify these steps for non-experts, making it possible to apply ML to solve a problem in an easier and faster way.
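
As a preview of how little code this can involve, here is a minimal AutoKeras sketch; the MNIST dataset and the max_trials budget are illustrative choices, and AutoKeras itself is covered in depth from the next chapter onward:

# A minimal AutoKeras sketch; MNIST and max_trials=3 are illustrative.
import autokeras as ak
from tensorflow.keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()

# A single object replaces manual architecture selection and
# hyperparameter tuning: AutoKeras searches over candidate pipelines.
clf = ak.ImageClassifier(max_trials=3)
clf.fit(x_train, y_train, epochs=5)

print(clf.evaluate(x_test, y_test))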

Now that the main concepts of AutoML have been explained, we can put them into practice. But first, we will see what the main types of AutoML are and some of the widely used tools to perform AutoML.


Types of AutoML

This chapter will explore the frameworks available today for each of the AutoML types listed in the following sections, giving you an idea of what is currently possible in terms of AutoML. But first, let's briefly discuss the end-to-end ML pipeline and see where each process occurs in that pipeline.

As we saw in the previous workflow diagram, the ML pipeline involves more steps than just the modeling ones, such as the data and deployment steps. In this book, we will focus on automating the modeling phase because it is one of the phases that requires the greatest investment of time. As we will see later, AutoKeras, the AutoML framework we will work with, uses neural architecture search and hyperparameter optimization methods, both of which are applied in the modeling phase.

AutoML tries to automate every step of the pipeline, but the most time-consuming steps to automate are usually the following:

  • Automated feature engineering
  • Automated model selection and hyperparameter tuning
  • Automated neural network architecture selection

Automated feature engineering

The features used by the model have a direct impact on the performance of an ML algorithm. Feature engineering requires a large investment of time and human resources (data scientists) and involves a lot of trial and error, as well as deep domain knowledge.

Automated feature engineering is based on creating new sets of features iteratively until the ML model achieves good prediction performance.

In a standard feature engineering process, a dataset is collected; take, for example, a dataset from a job search website that records the behavior of candidates. Usually, a data scientist will create new features if they are not already in the data, such as the following:

  • Search keywords
  • Titles of the job offers read by the candidates
  • Candidate application frequency
  • Time since the last application
  • Type of job offers to which the candidate applies

Feature engineering automation tries to create an algorithm that automatically generates or obtains these types of features from the data.
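
The following sketch hints at what such automation could look like, deriving the listed features from a raw activity log with pandas; the file and column names are hypothetical:

# A sketch of deriving the listed features from a raw activity log;
# the file and column names are hypothetical.
import pandas as pd

log = pd.read_csv("candidate_activity_log.csv", parse_dates=["timestamp"])

# Aggregate raw events per candidate into behavioral features.
features = log.groupby("candidate_id").agg(
    application_frequency=("event", lambda e: (e == "apply").sum()),
    last_application=("timestamp", "max"),
    distinct_job_types=("job_type", "nunique"),
)
features["days_since_last_application"] = (
    pd.Timestamp.now() - features["last_application"]
).dt.days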

There is also a specialized form of ML called deep learning, in which features are extracted from images, text, and video automatically through the transformations performed by the model's layers.

Automated model selection and hyperparameter optimization

After the data preprocessing phase, we have to search for an ML algorithm to train on these features so that it can make predictions from new observations. In contrast to the previous step, model selection offers plenty of options to choose from: there are classification and regression models, neural network-based models, clustering models, and many more.

Each algorithm is suited to a certain class of problems, and with automated model selection, we can find the optimal model by running all the appropriate models for a particular task and selecting the most accurate one. No ML algorithm works well with all datasets, and some algorithms require more hyperparameter tuning than others. In fact, during model selection, we tend to experiment with different hyperparameters.
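
A minimal sketch of this idea, assuming a hypothetical x_train/y_train dataset and an illustrative set of scikit-learn candidates, might look as follows:

# A sketch of automated model selection: train every candidate and keep
# the most accurate; the candidates and data are illustrative.
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

candidates = [
    LogisticRegression(max_iter=1000),
    RandomForestClassifier(),
    SVC(),
]

# Pick the model with the best mean cross-validated accuracy.
best_model = max(
    candidates,
    key=lambda m: cross_val_score(m, x_train, y_train, cv=5).mean(),
)
print("Selected:", type(best_model).__name__)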

What are hyperparameters?

In the training phase of a model, there are many variables to be set. Basically, we can group them into two types: parameters and hyperparameters. Parameters are those that are learned during the model training process, such as the weights and biases of a neural network, while hyperparameters are those that are set just before the training process starts, such as the learning rate, the dropout factor, and so on.
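
The following small Keras sketch, with illustrative values, shows the distinction: the dropout factor and learning rate are fixed before training begins, while the Dense layers' weights are learned during it:

# Illustrative values: hyperparameters are fixed before training, while
# the Dense layers' weights (parameters) are learned during training.
import tensorflow as tf

dropout_rate = 0.5    # hyperparameter, set before training
learning_rate = 0.01  # hyperparameter, set before training

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),  # weights = parameters
    tf.keras.layers.Dropout(dropout_rate),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate), loss="mse")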

Types of search methods

There are many algorithms for finding the optimal hyperparameters of a model. The following figure highlights the best-known ones, which are also used by AutoKeras:

Figure 1.5 – Hyperparameter search method paths

Let's try to understand these methods in more detail:

  • Grid search: Given a set of variables (hyperparameters) and a set of values for each variable, grid search performs an exhaustive search, testing all possible combinations of these values in the variables to find the best possible model based on a defined evaluation metric, such as precision. In the case of a neural network with the learning rate and dropout as hyperparameters to tune, we can define a learning rate set of values as [0.1, 0.01] and a dropout set of values as [0.2, 0.5], so grid search will train the model with these four combinations (see the sketch after this list):

    (a) learning_rate=0.1, dropout=0.2 => Model version 1

    (b) learning_rate=0.01, dropout=0.2 => Model version 2

    (c) learning_rate=0.1, dropout=0.5 => Model version 3

    (d) learning_rate=0.01, dropout=0.5 => Model version 4

  • Random search: This is similar to grid search, but it runs the training of the model combinations in a random order, usually within a fixed budget of trials. That random exploration makes random search usually cheaper than grid search.
  • Bayesian search: This method performs hyperparameter tuning based on Bayes' theorem, exploring only the combinations that are most likely to improve the objective function.
  • Hyperband: This is a novel variation of random search that tries to resolve the exploration/exploitation dilemma using a bandit-based approach to hyperparameter optimization.
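
Here is the sketch promised above, comparing grid search and random search over the two example hyperparameters; train_and_evaluate is a hypothetical function that trains a model and returns a validation score:

# Grid search vs. random search over the two example hyperparameters;
# train_and_evaluate is a hypothetical function returning a validation score.
import itertools
import random

learning_rates = [0.1, 0.01]
dropouts = [0.2, 0.5]

# Grid search: exhaustively train all 4 combinations.
grid = list(itertools.product(learning_rates, dropouts))
best = max(grid, key=lambda c: train_and_evaluate(lr=c[0], dropout=c[1]))

# Random search: try the combinations in random order, stopping once a
# fixed budget of trials is spent.
random.shuffle(grid)
for lr, dropout in grid[:3]:  # budget of 3 trials
    train_and_evaluate(lr=lr, dropout=dropout)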

Automated neural network architecture selection

The design of neural network architectures is one of the most complex and tedious tasks in the world of ML. Typically, in traditional ML, data scientists spend a lot of time iterating through different neural network architectures with different hyperparameters to optimize a model objective function. This is time-consuming, requires deep knowledge, and is prone to errors at times.

In the middle of the 2010s, the idea of finding an optimal neural network architecture automatically, by employing evolutionary algorithms and reinforcement learning, was introduced. It was called Neural Architecture Search (NAS). Basically, it trains a model to create layers, stacking them to build a deep neural network architecture.

A NAS system involves these three main components:

  • Search space: This consists of a set of blocks of operations (fully connected, convolution, and so on) and constraints on how these operations can be connected to each other to form valid network architectures. Traditionally, the design of the search space is done by a data scientist.
  • Search algorithm: A NAS search algorithm tests a number of candidate network architecture models. From the metrics obtained, it selects the candidates with the highest performance.
  • Evaluation strategy: As a large number of models are required to be tested in order to obtain successful results, the process is computationally very expensive, so new methods appear every so often to save time or computing resources.

In the next figure, you can see the relationships between the three described components:

Figure 1.6 – NAS component relationships
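
The following toy sketch, with a deliberately tiny search space and a hypothetical train_and_evaluate scoring function, shows how the three components fit together:

# A toy NAS sketch; train_and_evaluate is a hypothetical function that
# trains a candidate model and returns a validation score.
import random
import tensorflow as tf

# Search space: how many Dense blocks to stack and how wide each one is.
search_space = {"num_layers": [1, 2, 3], "units": [32, 64, 128]}

def sample_architecture():
    """Search algorithm: here, plain random sampling from the space."""
    depth = random.choice(search_space["num_layers"])
    return [random.choice(search_space["units"]) for _ in range(depth)]

def build_model(units_per_layer):
    """Turn a sampled architecture into a valid network."""
    layers = [tf.keras.layers.Dense(u, activation="relu")
              for u in units_per_layer]
    return tf.keras.Sequential(layers + [tf.keras.layers.Dense(1)])

# Evaluation strategy: score each candidate and keep the best one.
candidates = [sample_architecture() for _ in range(5)]
best = max(candidates, key=lambda a: train_and_evaluate(build_model(a)))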

NAS is currently a new area of research that is attracting a lot of attention, and several research papers have been published. Some of the most cited papers are as follows:

  • NASNet (Learning Transferable Architectures for Scalable Image Recognition): High-accuracy models for image classification are based on very complex neural networks with many layers. NASNet is a method of learning model architectures directly from the dataset of interest. Because doing so is costly when the dataset is very large, it first looks for an architectural building block on a small dataset and then transfers the block to a larger dataset. This approach is a successful example of what you can achieve with AutoML, because NASNet-generated models often outperform state-of-the-art, human-designed models. In the following figure, we can see how NASNet works:
Figure 1.7 – Overview of NAS

  • AmoebaNet (Regularized Evolution for Image Classifier Architecture Search): This approach uses an evolutionary algorithm to efficiently discover high-quality architectures. Until this work, evolutionary algorithms applied to image classification had not surpassed architectures created by humans; AmoebaNet-A surpassed them for the first time. The key was modifying the selection algorithm by introducing an age property to favor the youngest genotypes. AmoebaNet-A has accuracy similar to the latest-generation ImageNet models discovered with more complex architecture search methods, showing that evolution can obtain results faster with the same hardware, especially in the early search stages, which is especially important when few computational resources are available. The following figure shows the correlation between top-1 accuracy and model size for some representative image classification models; the dotted circle marks 84.3% accuracy for an AmoebaNet model:
Figure 1.8 – Correlation between the top-1 accuracy and model size for state-of-the-art image classification models using the ImageNet dataset

  • Efficient Neural Architecture Search (ENAS): This variant of NASNet improves its efficiency by allowing all child models to share their weights, so it is not necessary to train each child model from scratch. This optimization dramatically reduces the computational cost of the search while maintaining classification performance.

There are many AutoML tools available, all with similar goals: to automate the different steps of the ML pipeline. The following are some of the most widely used tools:

  • AutoKeras: An AutoML system based on the deep learning framework Keras and using hyperparameter searching and NAS.
  • auto-sklearn: An AutoML toolkit that allows you to use a special type of scikit-learn estimator, which automates algorithm selection and hyperparameter tuning, using Bayesian optimization, meta-learning, and model ensembling.
  • DataRobot: An AI platform that automates the end-to-end process for building, deploying, and maintaining AI at scale.
  • Darwin: An AI tool that automates the slowest steps in the model life cycle, ensuring long-term quality and the scalability of models.
  • H2O-DriverlessAI: An AI platform for AutoML.
  • Google's AutoML: A suite of ML products that enable developers with no ML experience to train and use high-performance models in their projects. To do this, this tool uses Google's powerful next-generation transfer learning and neural architecture search technology.
  • Microsoft Azure AutoML: This cloud service creates many pipelines in parallel that try different algorithms and parameters for you.
  • Tree-based Pipeline Optimization Tool (TPOT): A Python Automated Machine Learning tool that optimizes machine learning pipelines using genetic programming.

We can find an exhaustive comparison of the main AutoML tools that currently exist in the paper Evaluation and Comparison of AutoML Approaches and Tools. From it, we can conclude that while the main commercial solutions, such as H2O-DriverlessAI, DataRobot, and Darwin, allow us to detect the data schema, execute feature engineering, and analyze detailed results for interpretation purposes, open source tools are more focused on automating the modeling tasks (training and model evaluation), leaving the data-oriented tasks to the data scientists.

The study also concludes that, in the various evaluations and benchmarks tested, AutoKeras is the most stable and efficient tool, which is very important in a production environment, where both performance and stability are key factors. These qualities, in addition to it being a widely used tool, are the main reasons why AutoKeras was chosen as the AutoML framework for this book.



Summary

In this chapter, we defined the purpose and benefits of AutoML, from describing the different phases of an ML pipeline to detailing the types of algorithms for hyperparameter optimization and neural architecture search.

Now that we have learned the main concepts of AutoML, we are ready to move on to the next chapter, where you will learn how to install AutoKeras and use it to train a simple network, progressing to more advanced models and techniques later in the book.

About the Author
  • Luis Sobrecueva

    Luis Sobrecueva is a senior software engineer and ML/DL practitioner currently working at Cabify. He has been a contributor to the OpenAI project as well as one of the contributors to the AutoKeras project.
