Machine Learning for Mobile

By Revathi Gopalakrishnan , Avinash Venkateswarlu

About this book

Machine learning presents an entirely unique opportunity in software development. It allows smartphones to produce an enormous amount of useful data that can be mined, analyzed, and used to make predictions. This book will help you master machine learning for mobile devices with easy-to-follow, practical examples.

You will begin with an introduction to machine learning on mobiles and grasp the fundamentals so you become well-acquainted with the subject. You will master supervised and unsupervised learning algorithms, and then learn how to build a machine learning model using mobile-based libraries such as Core ML, TensorFlow Lite, ML Kit, and Fritz on Android and iOS platforms. In doing so, you will also tackle some common and not-so-common machine learning problems with regard to Computer Vision and other real-world domains.

By the end of this book, you will have explored machine learning in depth and implemented on-device machine learning with ease, thereby gaining a thorough understanding of how to run, create, and build real-time machine-learning applications on your mobile devices.

Publication date: December 2018
Publisher: Packt
Pages: 274
ISBN: 9781788629355

 

Chapter 1. Introduction to Machine Learning on Mobile

We're living in a world of mobile applications. They've become such a part and parcel of our everyday lives that we rarely look at the numbers behind them: the revenue they make, the actual market size of the business, and the quantitative figures that fuel the growth of mobile applications. Let's take a peek at the numbers:

  • Forbes predicts that mobile application revenue is slated to hit $189 billion by the year 2020
  • We are also seeing that the global smartphone installation base is increasing exponentially. Therefore, the revenue from applications getting installed on them is also increasing at an unimaginable rate

Mobile devices and services are now the hubs for people's entertainment and business lives, as well as for communication. The smartphone has replaced the PC as the most important smart connected device. Mobile innovations, new business models, and mobile technologies are transforming every walk of human life.

Now, we come to machine learning. Why has machine learning been booming recently? Machine learning is not a new subject. It existed 10 to 20 years ago, so why is it in focus now, and why is everyone talking about it? The reason is simple: data explosion. Social networking and mobile devices have enabled the generation of user data like never before. Ten years ago, you didn't have images uploaded to the cloud like you do today, because mobile phone penetration then was nothing compared to what it is today. 4G connections now make it possible even to live stream video on demand, which means more data is flowing around the world than ever before. The next era is predicted to be the era of the Internet of Things (IoT), where there is going to be even more data: sensor-based data.

All this data is valuable only when we can put it to proper use, derive insights that bring value to us, and surface unseen data patterns that provide new business opportunities. For this to happen, machine learning is the right tool to unlock the value stored in the piles and piles of data that are being accumulated each day.

So, it has become obvious that it is a great time to be a mobile application developer and a great time to be a machine learning data scientist. But how cool would it be if we were able to bring the power of machine learning to mobile devices and develop really cool mobile applications that leverage the power of machine learning? That's what we are trying to do through this book: give insights to mobile application developers on the basics of machine learning, expose them to various machine learning algorithms and mobile machine learning SDKs/tools, and go over developing mobile machine learning applications using these SDKs/tools.

Machine learning in the mobile space is a key innovation area that must be properly understood by mobile developers, as it is transforming the way users can visualize and utilize mobile applications. So, how can machine learning transform mobile applications into applications that are any user's dream? Let me give you some examples to provide a bird's-eye view of what machine learning can do for mobile applications:

  • Facebook and YouTube mobile applications use machine learning—Recommendations or People you might know are nothing but machine learning in action.
  • Apple and Google study each user's typing behavior and suggest the next word that suits that user's style of typing. This is already implemented on both iOS and Android devices.
  • Oval Money analyzes a user's previous transactions and offers them different ways to avoid extra spending.
  • Google Maps is using machine learning to make your life easier.
  • Dango uses machine learning to solve the problem of finding the perfect emoji. It is a floating assistant that can be integrated into different messengers.

Machine learning can be applied to mobile applications belonging to any domain—healthcare, finance, games, communication, or anything under the sun. So, let's understand what machine learning is all about. 

In this chapter, we will cover the following topics:

  • What is machine learning?
  • When is it appropriate to go for solutions that get implemented using machine learning?
  • Categories of machine learning
  • Key algorithms in machine learning
  • The process that needs to be followed for implementing machine learning
  • Some of the key concepts of machine learning that are good to know
  • Challenges in implementing machine learning
  • Why use machine learning in mobile applications?
  • Ways to implement machine learning in mobile applications
 

Definition of machine learning


Machine learning is focused on writing software that can learn from past experience. One of the standard definitions of machine learning, given by Tom Mitchell, a professor at Carnegie Mellon University (CMU), is the following:

A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.

For example, a computer program that learns to play chess might improve its performance as measured by its ability to win at the class of tasks involving playing chess, through experience obtained by playing chess against itself. In general, to have a well-defined learning problem, we must identify the class of tasks, the measure of performance to be improved, and the source of experience. Consider that a chess-learning problem consists of the following: task, performance measure, and training experience, where:

  • Task T is playing chess
  • Performance measure P is the percentage of games won against opponents
  • Training experience E is the program playing practice chess games against itself

To put it in simple terms, if a computer program improves the way it performs a task with the help of previous experience, then we can say the computer has learned. This scenario is very different from one where a program can perform a particular task because its programmers have already defined all the parameters and provided the data required to do so. A normal program can play chess because the programmers have written the code to play chess with a built-in winning strategy. A machine learning program, however, does not possess a built-in strategy; it only has a set of rules covering the legal moves in the game and what a winning scenario looks like. In such a case, the program needs to learn by repeatedly playing the game until it can win.

When is it appropriate to go for machine learning systems?

Is machine learning applicable to all scenarios? When exactly should we have the machine learn, rather than directly programming it with instructions to carry out the task?

Machine learning systems are not knowledge-based systems. In knowledge-based systems, we can directly use the knowledge to codify all possible rules to infer a solution. We go for machine learning when such codification of instructions is not straightforward. Machine learning programs will be more applicable in the following scenarios:

  • Very complex tasks that are difficult to program: There are everyday tasks humans perform, such as speaking, driving, recognizing objects, tasting, and classifying things by sight, that seem simple to us. But we do not know how our brains are wired to do them, or what rules we would need to define to replicate these actions in a program. Machine learning makes it possible to perform some of them, not yet to the extent that humans do, but it has great potential here.
  • Very complex tasks that deal with a huge volume of data: Some tasks involve analyzing huge volumes of data and finding hidden patterns or new correlations, which is not humanly possible. Machine learning is helpful for tasks where we do not know the steps to arrive at a solution, or where the space of possible solutions is too large for a human to explore.
  • Adapting to changes in environment and data: A program hardcoded with a set of instructions cannot adapt itself to the changing environment and is not capable of scaling up to new environments. Both of these can be achieved using machine learning programs.

Note

Machine learning is an art, and a data scientist who specializes in machine learning needs a mixture of skills: mathematics, statistics, data analysis, engineering, creative arts, bookkeeping, neuroscience, cognitive science, economics, and so on. They need to be a jack of all trades and a master of machine learning.

 

The machine learning process


The machine learning process is an iterative process. It cannot be completed in one go. The most important activities to be performed for a machine learning solution are as follows:

  1. Define the machine learning problem (it must be well-defined).
  2. Gather, prepare, and enhance the data that is required.
  3. Use that data to build a model. This step goes in a loop and covers the following substeps. At times, it may also lead to revisiting Step 2 on data or even require the redefinition of the problem statement:
    • Select the appropriate model/machine learning algorithm
    • Train the machine learning algorithm on the training data and build the model
    • Test the model
    • Evaluate the results
    • Continue this phase until the evaluation result is satisfactory and finalize the model
  4. Use the finalized model to make future predictions for the problem statement.
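The loop above can be sketched in a few lines of Python. This is a minimal, hedged illustration (not from the book): the "model" is a toy nearest-centroid classifier, and the data is a made-up, well-separated two-class set standing in for prepared real-world data.

```python
import random

# Steps 1 and 2: a well-defined toy problem with prepared data.
# Classify 2-D points as belonging to group "A" (near (0, 0))
# or group "B" (near (5, 5)). All values are synthetic.
random.seed(42)
data = [((random.gauss(0, 1), random.gauss(0, 1)), "A") for _ in range(50)] + \
       [((random.gauss(5, 1), random.gauss(5, 1)), "B") for _ in range(50)]
random.shuffle(data)
train, test = data[:66], data[66:]          # a plain 66/34 split

def train_centroid_model(rows):
    """Step 3a/3b: 'learn' one centroid per label from the training data."""
    sums, counts = {}, {}
    for (x, y), label in rows:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {lbl: (sx / counts[lbl], sy / counts[lbl])
            for lbl, (sx, sy) in sums.items()}

def predict(model, point):
    """Assign the label of the nearest learned centroid."""
    return min(model, key=lambda lbl: (point[0] - model[lbl][0]) ** 2
                                      + (point[1] - model[lbl][1]) ** 2)

# Steps 3c/3d: test and evaluate. In a real project, a poor score here
# would send us back to the data, or to a different algorithm.
model = train_centroid_model(train)
accuracy = sum(predict(model, p) == lbl for p, lbl in test) / len(test)
print(f"accuracy on held-out data: {accuracy:.2f}")
```

In practice, the evaluate-and-retrain loop would compare this accuracy against the success threshold defined in step 1 before finalizing the model.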

There are four major steps involved in the whole process, which is iterated until the objective is met. Let's get into the details of each step in the following sections. The following diagram gives a quick overview of the entire process, making it easier to go into the details:

Defining the machine learning problem

As defined by Tom Mitchell, the problem must be a well-defined machine learning problem. The three important questions to be answered at this stage are the following:

  • Do we have the right problem?
  • Do we have the right data?
  • Do we have the right success criteria?

The problem should be such that the outcome obtained as its solution is valuable for the business. There should be sufficient historical data available for learning/training purposes. The objective should be measurable, and we should know how much of it has been achieved at any point in time.

For example, if we are going to identify fraudulent transactions from a set of online transactions, then determining such fraudulent transactions is definitely valuable for the business. We need to have a sufficient set of online transactions. We should have a sufficient set of transactions that belong to various fraudulent categories. We should also have a mechanism to determine whether the outcome predicted as a fraudulent or nonfraudulent transaction can be verified and validated for the accuracy of prediction.

Note

To give users an idea of what data would be sufficient to implement machine learning, we could say that a dataset of at least 100 items should be fine for starters and 1,000 would be nice. The more data we have that may cover all realistic scenarios for the problem domain, the better it is for the learning algorithm.

Preparing the data

The data preparation activity is key to the success of the learning solution. The data is the key entity required for machine learning and it must be prepared properly to ensure the proper end results and objectives are obtained.

Note

Data engineers usually spend around 80-90 percent of their overall time in the data preparation phase to get the right data, as this is fundamental and the most critical task for the success of the implementation of the machine learning program.

The following actions need to be performed in order to prepare the data:

  1. Identify all sources of data: We need to identify all data sources that can solve the problem at hand and collect the data from multiple sources—files, databases, emails, mobile devices, the internet, and so on.
  2. Explore the data: This step involves understanding the nature of the data, as follows:
    • Integrate data from different systems and explore it.
    • Understand the characteristics and nature of the data.
    • Go through the correlations between data entities.
    • Identify the outliers. Outliers will help with identifying any problems with the data.
    • Apply various statistical principles such as calculating the median, mean, mode, range, and standard deviation to arrive at data skewness. This will help with understanding the nature and spread of data.
    • If data is skewed or we see the value of the range is outside the expected boundary, we know that the data has a problem and we need to revisit the source of the data.
    • Visualization of data through graphs will also help with understanding the spread and quality of the data.
  3. Preprocess the data: The goal of this step is to create data in a format that can be used for the next step:
    • Data cleansing:
      • Addressing the missing values. A common strategy used to impute missing values is to replace missing values with the mean or median value. It is important to define a strategy for replacing missing values.
      • Addressing duplicate values, invalid data, inconsistent data, outliers, and so on.
    • Feature selection: Choosing the data features that are the most appropriate for the problem at hand. Removing redundant or irrelevant features that will simplify the process.
    • Feature transformation: This phase maps the data from one format to another to support the next steps of machine learning. It involves normalizing the data and dimensionality reduction, and may combine various features into one or create new features. For example, say we have the date and time as attributes; it would be more meaningful to transform them into the day of the week, the day of the month, and the year, which would provide more insight:
      • Creating Cartesian products of one variable with another. For example, if we have two categorical variables, such as subject (maths, physics, and commerce) and gender (girls and boys), the features formed by a Cartesian product of these two variables might contain useful information, resulting in features such as maths_girls, physics_girls, commerce_girls, maths_boys, physics_boys, and commerce_boys.
      • Binning numeric variables to categories. For example, the size value of hips/shoulders can be binned to categories such as small, medium, large, and extra large.
      • Domain-specific features, for example, combining the subjects maths, physics, and chemistry to a maths group and combining physics, chemistry, and biology to a biology group.
  4. Divide the data into training and test sets: Once the data is transformed, we then need to select the required test set and a training set. An algorithm is evaluated against the test dataset after training it on the training dataset. This split of the data into training and test datasets may be as direct as performing a random split of data (66 percent for training, 34 percent for testing) or it may involve more complicated sampling methods.
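The feature-transformation ideas above (Cartesian products of categorical variables, and binning a numeric value into categories) can be sketched as follows. The category values and bin thresholds are made up for illustration:

```python
from itertools import product

# Cartesian product of two categorical variables -> combined features.
subjects = ["maths", "physics", "commerce"]
genders = ["girls", "boys"]
combined = [f"{s}_{g}" for s, g in product(subjects, genders)]
print(combined)   # ['maths_girls', 'maths_boys', 'physics_girls', ...]

def bin_size(value_cm):
    """Bin a numeric measurement into categories (thresholds are made up)."""
    if value_cm < 90:
        return "small"
    elif value_cm < 100:
        return "medium"
    elif value_cm < 110:
        return "large"
    return "extra large"

print([bin_size(v) for v in (85, 95, 105, 120)])
# ['small', 'medium', 'large', 'extra large']
```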

Note

The 66 percent/34 percent split is just a guide. If you have 1 million pieces of data, a 90 percent/10 percent split should be enough. With 100 million pieces of data, you can even go down to 99 percent/1 percent.

A trained model is not exposed to the test dataset during training, so any predictions made on that dataset are indicative of the model's performance in general. As such, we need to make sure the selected datasets are representative of the problem we are solving.
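The cleansing and splitting steps described above can be sketched in plain Python. This is a minimal illustration on made-up records: missing income values are imputed with the column mean, and the rows are then randomly split roughly 66/34:

```python
import random

# Made-up records with some missing (None) values in the "income" field.
rows = [{"age": 25, "income": 40000}, {"age": 31, "income": None},
        {"age": 45, "income": 85000}, {"age": 38, "income": None},
        {"age": 52, "income": 95000}]

# Data cleansing: impute missing income values with the column mean.
known = [r["income"] for r in rows if r["income"] is not None]
mean_income = sum(known) / len(known)
for r in rows:
    if r["income"] is None:
        r["income"] = mean_income

# Train/test split: a plain random ~66/34 split, as described above.
random.seed(0)
random.shuffle(rows)
cut = int(len(rows) * 0.66)
train_set, test_set = rows[:cut], rows[cut:]
print(len(train_set), len(test_set))   # 3 2
```

Real pipelines would use library helpers for this, but the logic is the same: fix the data first, then hold out a representative portion for testing.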

Building the model

The model-building phase consists of many substeps, as indicated earlier: selecting an appropriate machine learning algorithm, training the model, testing it, and evaluating it to determine whether the objectives have been achieved. If they have not, we enter a retraining phase, either using the same algorithm with different datasets or selecting an entirely new algorithm, until the objectives are reached.

Selecting the right machine learning algorithm

The first step toward building the model is to select the right machine learning algorithm that might solve the problem.

This step involves selecting the right machine learning algorithm and building a model, then training it using the training set. The algorithm will learn from the training data patterns that map the variables to the target, and it will output a model that captures these relationships. The machine learning model can then be used to get predictions on new data for which you do not know the target answer.

Training the machine learning model

The goal is to select the most appropriate algorithm for building the machine learning model, train it, and then analyze the results received. We begin by selecting appropriate machine learning techniques to analyze the data. Chapter 2, Supervised and Unsupervised Learning Algorithms, will talk about the different machine learning algorithms and present details of the types of problems for which they are apt.

The training process and the analysis of results also vary based on the algorithm selected for training.

The training phase usually uses all the attributes present in the transformed data, including both the predictor attributes and the objective attributes.

Testing the model

Once the machine learning algorithm is trained on the training data, the next step is to run the model on the test data.

The entire set of attributes or features of the data is divided into predictor attributes and objective attributes. The predictor attributes/features of the dataset are fed as input to the machine learning model and the model uses these attributes to predict the objective attributes. The test set uses only the predictor attributes. Now, the algorithm uses the predictor attributes and outputs predictions on objective attributes. Once the output is provided, it is compared against the actual data to understand the quality of output from the algorithm.

The results should be properly presented for further analysis. What to present in the results and how to present them are critical. They may also bring to the fore new business problems.

Evaluation of the model

There should be a process to test machine learning algorithms and discover whether or not we have chosen the right algorithms, and to validate the output the algorithm provides against the problem statement.

This is the last step in the machine learning process, where we check the accuracy with the defined threshold for success criteria and, if the accuracy is greater than or equal to the threshold, then we are done. If not, we need to start all over again with a different machine learning algorithm, different parameter settings, more data, and changed data transformation. All steps in the entire machine learning process can be repeated, or a subset of it can be repeated. These are repeated till we come to the definition of "done" and are satisfied with the results.

Note

The machine learning process is a very iterative one. Findings from one step may require a previous step to be repeated with new information. For example, during the data transformation step, we may find some data quality issues that may require us to go back to acquire more data from a different source.

Each step may also require several iterations. This is of particular interest, as the data preparation step may undergo several iterations, and the model selection may undergo several iterations. In the entire sequence of activities stated for performing machine learning, any activity can be repeated any number of times. For example, it is common to try different machine learning algorithms to train the model before moving on to testing the model. So, it is important to recognize that this is a highly iterative process and not a linear one.

Test set creation: We have to define the test dataset clearly. The goal of the test dataset is as follows:

  • Quickly and consistently test the algorithm that has been selected to solve the problem
  • Test a variety of algorithms to determine whether they are able to solve the problem
  • Determine which algorithm would be worth using to solve the problem
  • Determine whether there is a problem with the data considered for evaluation purposes as, if all algorithms consistently fail to produce proper results, there is a possibility that the data itself might require a revisit

Performance measure: The performance measure is a way to evaluate the model created. Different performance metrics will need to be used to evaluate different machine learning models. These are standard performance measures from which we can choose to test our model. There may not be a need to customize the performance measures for our model.

The following are some of the important terms that need to be known to understand the performance measure of algorithms:

  • Overfitting: The machine learning model is overfitting the training data when we see that the model performs well on the training data but does not perform well on the evaluation data. This is because the model is memorizing the data it has seen and is unable to generalize to unseen examples.
  • Underfitting: The machine learning model is underfitting the training data when the model performs poorly on the training data. This is because the model is unable to capture the relationship between the input examples (often called X) and the target values (often called Y).
  • Cross-validation: Cross-validation is a technique to evaluate predictive models by partitioning the original sample into a training set to train the model, and a test set to evaluate it. In k-fold cross-validation, the original sample is randomly partitioned into k equally sized subsamples.
  • Confusion matrix: In the field of machine learning, and specifically the problem of statistical classification, a confusion matrix, also known as an error matrix, is a specific table layout that allows visualization of the performance of an algorithm.
  • Bias: Bias is the tendency of a model to make systematically wrong predictions in a consistent direction, regardless of the data it is trained on; it usually stems from model assumptions that are too simple to capture the true relationship.
  • Variance: Variance is the tendency of a model to make predictions that change significantly depending on the particular sample of data it was trained on, rather than reflecting the true relationship between the features and the labels.
  • Accuracy: The number of correct results divided by the total number of results.
  • Error: The number of incorrect results divided by the total number of results.
  • Precision: The number of correct results returned by a machine learning algorithm divided by the total number of results it returned.
  • Recall: The number of correct results returned by a machine learning algorithm divided by the number of results that should have been returned.
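These last four measures can be computed directly from the cells of a confusion matrix. A small sketch on made-up binary predictions, treating "spam" as the positive class:

```python
# Made-up ground truth and model predictions for a binary task.
actual    = ["spam", "spam", "spam", "ham", "ham", "ham", "ham", "spam"]
predicted = ["spam", "spam", "ham",  "ham", "spam", "ham", "ham", "ham"]

# Confusion-matrix cells for the positive class "spam".
tp = sum(a == "spam" and p == "spam" for a, p in zip(actual, predicted))
fp = sum(a == "ham"  and p == "spam" for a, p in zip(actual, predicted))
fn = sum(a == "spam" and p == "ham"  for a, p in zip(actual, predicted))
tn = sum(a == "ham"  and p == "ham"  for a, p in zip(actual, predicted))

accuracy  = (tp + tn) / len(actual)   # correct / total
error     = (fp + fn) / len(actual)   # incorrect / total
precision = tp / (tp + fp)            # correct positives / returned positives
recall    = tp / (tp + fn)            # correct positives / expected positives

print(tp, fp, fn, tn)                 # 2 1 2 3
print(accuracy, precision, recall)
```

Note how precision and recall tell different stories here: the model is right about most of the spam it flags, but it misses half of the actual spam.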

Making predictions/Deploying in the field

Once the model is ready, it can be deployed to the field for usage. Predictions can be done on the upcoming dataset using the model that has been built and deployed in the field.

 

Types of learning


There are some variations in how the types of machine learning algorithms are defined. The most common categorization is based on the learner type of the algorithm, as follows:

  • Supervised learning
  • Unsupervised learning
  • Semi-supervised learning
  • Reinforcement learning

Supervised learning

Supervised learning is a type of learning where the model is fed with enough labeled information and closely supervised while it learns, so that, based on that learning, it can predict the outcome for a new dataset.

Here, the model is trained in supervision mode, similar to supervision by a teacher: we feed the model with enough training data containing the input/predictors and show it the correct answers or output. Based on this, it learns and becomes capable of predicting the output for unseen data that may come in the future.

A classic example of this is the standard Iris dataset. The Iris dataset contains measurements of flowers from three species of iris; for each flower, the sepal length, sepal width, petal length, and petal width are given, along with a label stating which species that set of measurements belongs to. With this learning in place, the model will be able to predict the label (in this case, the iris species) from the feature set (in this case, the four measurements).
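A hedged sketch of this idea: a tiny 1-nearest-neighbour classifier over a handful of measurements in the Iris format. The rows below are illustrative values chosen to resemble the three species, not quoted from the real dataset:

```python
import math

# A few labeled measurements in the Iris format:
# (sepal length, sepal width, petal length, petal width) -> species.
training = [
    ((5.1, 3.5, 1.4, 0.2), "setosa"),
    ((4.9, 3.0, 1.4, 0.2), "setosa"),
    ((7.0, 3.2, 4.7, 1.4), "versicolor"),
    ((6.4, 3.2, 4.5, 1.5), "versicolor"),
    ((6.3, 3.3, 6.0, 2.5), "virginica"),
    ((5.8, 2.7, 5.1, 1.9), "virginica"),
]

def predict_species(features):
    """1-nearest-neighbour: return the label of the closest training row."""
    _, label = min(training, key=lambda row: math.dist(row[0], features))
    return label

print(predict_species((5.0, 3.4, 1.5, 0.2)))   # setosa-like measurements
```

The labeled rows play the role of the supervision; prediction is just a lookup of the most similar experience.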

Supervised learning algorithms try to model relationships and dependencies between the target prediction output and the input features such that we can predict the output values for new data based on those relationships which it learned from the previous datasets.

The following diagram will give you an idea of what supervised learning is. The data with labels is given as input to build the model through supervised learning algorithms. This is the training phase. Then the model is used to predict the class label for any input data without the label. This is the testing phase:

Again, in supervised learning algorithms, the predicted output could be a discrete/categorical value or a continuous value, depending on the scenario and the dataset under consideration. If the predicted output is a discrete/categorical value, such algorithms fall under classification; if the predicted output is a continuous value, they fall under regression.

If there is a set of emails and you want to learn from them to tell which emails belong to the spam category and which to the non-spam category, then the algorithm to use is a supervised learning algorithm of the classification type. Here, you feed the model a set of emails along with enough knowledge about their attributes, based on which it segregates each email into either the spam or the non-spam category. The predicted output is a categorical value: spam or non-spam.

Let's take the use case where, based on a given set of parameters, we need to predict the price of a house in a given area. This cannot be a categorical value; it is a continuous value within a range, and one that is subject to change on a regular basis. Here too, the model needs to be provided with sufficient knowledge from which to predict the price. This type of algorithm belongs to the supervised learning regression category.
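To make the contrast concrete, here is a minimal regression sketch: closed-form simple linear regression (least squares) fitted on made-up house sizes and prices, producing a continuous predicted price rather than a category:

```python
# Made-up training data: house size in square feet -> price.
sizes  = [1000, 1500, 2000, 2500, 3000]
prices = [200000, 280000, 360000, 440000, 520000]

# Closed-form simple linear regression (least squares) for y = a + b*x.
n = len(sizes)
mean_x = sum(sizes) / n
mean_y = sum(prices) / n
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(sizes, prices)) \
    / sum((x - mean_x) ** 2 for x in sizes)
a = mean_y - b * mean_x

predicted_price = a + b * 1800   # a size the model has never seen
print(round(predicted_price))    # 328000
```

Because the toy data is exactly linear, the fit recovers the underlying relationship perfectly; real price data would of course be noisy and need a richer model.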

There are various algorithms belonging to the supervised category of the machine learning family:

  • K-nearest neighbors
  • Naive Bayes
  • Decision trees
  • Linear regression
  • Logistic regression
  • Support vector machines
  • Random forest

Unsupervised learning

In this learning pattern, the model is not supervised while it learns. It learns by itself from the data fed to it and presents us with the patterns it has found. It doesn't predict a discrete categorical value or a continuous value; rather, it surfaces the patterns it has understood by looking at the data. The training data fed in is unlabeled and doesn't provide the answer key that a supervised model learns from.

Here, there's no supervision at all; actually, the model might be able to teach us new things after it learns the data. These algorithms are very useful where a feature set is too large and the human user doesn't know what to look for in the data.

This class of algorithms is mainly used for pattern detection and descriptive modeling. Descriptive modeling summarizes the relevant information from the data and presents a summary of what has already occurred, whereas predictive modeling summarizes the data and presents a summary of what can occur.

Unsupervised learning algorithms can be used for both categories of prediction. They use the input data to come up with different patterns, a summary of the data points, and insights that are not visible to human eyes. They come up with meaningful derived data or patterns of data that are helpful for end users.

The following diagram will give you an idea of what unsupervised learning is. The data without labels is given as input to build the model through unsupervised learning algorithms. This is the Training Phase. Then the model is used to predict the proper patterns for any input data without the label. This is the Testing Phase:

Within this family, based on the input data fed to the model and the method the model adopts to infer patterns in the dataset, two common categories of algorithms emerge: clustering and association rule learning algorithms.

Clustering analyzes the input dataset and groups similar data items into the same cluster. It produces clusters in which the data items are more similar to each other than to items belonging to other clusters. There are various mechanisms that can be used to create these clusters.

Customer segmentation is one example of clustering. Say we have a huge dataset of customers and capture all their features; the model could come up with interesting clusters of customers that may not be at all obvious to the human eye. Such clusters can be very helpful for targeted campaigns and marketing.
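A minimal sketch of this segmentation idea: a tiny one-dimensional 2-means on made-up "annual spend" figures. It assumes exactly two groups and that both stay non-empty during the iterations:

```python
# Made-up "annual spend" figures for a handful of customers.
spend = [120, 150, 135, 900, 870, 950, 140, 910]

def kmeans_1d(values, c1, c2, iterations=10):
    """A tiny 1-D 2-means: alternate assignment and centroid update.
    Assumes both clusters remain non-empty with the given starting points."""
    for _ in range(iterations):
        low  = [v for v in values if abs(v - c1) <= abs(v - c2)]
        high = [v for v in values if abs(v - c1) >  abs(v - c2)]
        c1 = sum(low) / len(low)
        c2 = sum(high) / len(high)
    return sorted(low), sorted(high), (c1, c2)

low_spenders, high_spenders, centroids = kmeans_1d(spend, c1=0, c2=1000)
print(low_spenders)    # [120, 135, 140, 150]
print(high_spenders)   # [870, 900, 910, 950]
```

No labels were given, yet the algorithm separates the low and high spenders on its own; that is the essence of unsupervised clustering.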

On the other hand, association rule learning is a model to discover relations between variables in large datasets. A classic example would be market basket analysis. Here, the model tries to find strong relationships between different items in the market basket. It predicts relationships between items and determines how likely or unlikely it is for a user to purchase a particular item when they also purchase another item. For example, it might predict that a user who purchases bread will also purchase milk, or a user who purchases wine will also purchase diapers, and so on.
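The core quantities behind association rules, support and confidence, can be computed directly from toy baskets. A hedged sketch (the baskets and items are made up):

```python
from itertools import combinations
from collections import Counter

# Made-up market baskets.
baskets = [
    {"bread", "milk"},
    {"bread", "milk", "eggs"},
    {"bread", "butter"},
    {"milk", "eggs"},
    {"bread", "milk", "butter"},
]

# Count single items and item pairs across all baskets.
item_counts = Counter(item for b in baskets for item in b)
pair_counts = Counter(pair for b in baskets
                      for pair in combinations(sorted(b), 2))

def support(pair):
    """Fraction of baskets containing both items of the pair."""
    return pair_counts[tuple(sorted(pair))] / len(baskets)

def confidence(antecedent, consequent):
    """Of the baskets containing the antecedent, the fraction
    that also contain the consequent."""
    return pair_counts[tuple(sorted((antecedent, consequent)))] \
           / item_counts[antecedent]

print(support(("bread", "milk")))    # 0.6
print(confidence("bread", "milk"))   # 0.75
```

A rule such as "bread implies milk" with 75 percent confidence is exactly the kind of relationship market basket analysis surfaces.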

The algorithms belonging to this category include the following:

  • Clustering algorithms:
    • Centroid-based algorithms
    • Connectivity-based algorithms
    • Density-based algorithms
    • Probabilistic algorithms
    • Dimensionality reduction
    • Neural networks/deep learning
  • Association rule learning algorithms

Semi-supervised learning

In the previous two types, labels are either present for all the observations in the dataset or absent for all of them. Semi-supervised learning falls in between. In many practical situations, the cost of labeling is quite high, since it requires skilled human experts. So, when labels are absent for the majority of observations but present for a few, semi-supervised algorithms are the best candidates for model building.

Speech analysis is one example of a semi-supervised learning model. Labeling audio files is very costly and requires a very high level of human effort. Applying semi-supervised learning models can really help to improve traditional speech analytic models.

As with supervised learning, depending on whether the predicted output is categorical or continuous, the algorithms in this class fall into the classification or regression family.
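One simple semi-supervised technique is self-training: train on the few labeled points, pseudo-label the unlabeled points the model is confident about, and retrain. The sketch below illustrates this with a made-up 1-D dataset and a nearest-centroid classifier; the data and the confidence threshold are arbitrary choices for illustration:

```python
# Self-training on made-up 1-D data: a handful of expert-labelled points,
# many cheap unlabelled ones. A nearest-centroid classifier pseudo-labels
# the unlabelled points it is confident about; the threshold is arbitrary.

labelled = [(1.0, "low"), (9.0, "high")]      # costly expert labels
unlabelled = [1.5, 2.0, 8.0, 8.5, 5.2]        # cheap unlabelled observations

def centroids(data):
    """Mean position of the points carrying each label."""
    groups = {}
    for x, y in data:
        groups.setdefault(y, []).append(x)
    return {y: sum(xs) / len(xs) for y, xs in groups.items()}

def self_train(labelled, unlabelled, threshold=2.0):
    cents = centroids(labelled)
    pseudo = []
    for x in unlabelled:
        label = min(cents, key=lambda y: abs(x - cents[y]))
        if abs(x - cents[label]) < threshold:  # confident -> pseudo-label it
            pseudo.append((x, label))
    return centroids(labelled + pseudo), pseudo

cents, pseudo = self_train(labelled, unlabelled)
print(pseudo)   # the ambiguous point 5.2 stays unlabelled
```

Only two points were ever labeled by an "expert", yet the retrained centroids reflect five more observations; the ambiguous point in the middle is left out rather than guessed, which is what keeps self-training from polluting itself.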

Reinforcement learning

Reinforcement learning is goal-oriented learning based on interactions with the environment. A reinforcement learning algorithm (called the agent) continuously learns from the environment in an iterative fashion. In the process, the agent learns from its experiences of the environment until it explores the full range of possible states and is able to reach the target state.

Let's take the example of a child learning to ride a bicycle. The child learns by riding: it may fall, it figures out how to balance, how to keep moving without falling, and how to sit in the proper position so that its weight does not shift to one side; it studies the surface and plans its actions according to the surface, slope, or hill. In this way, it works through all the scenarios and states required to ride the bicycle. A fall may be considered negative feedback, while riding smoothly along is a positive reward for the child. This is classic reinforcement learning, and it is exactly what the model does to determine the ideal behavior within a specific context, in order to maximize its performance. Simple reward feedback is all the agent requires to learn its behavior; this is known as the reinforcement signal:
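A minimal sketch of this loop is tabular Q-learning. In the made-up example below, the agent starts at one end of a five-cell corridor and receives a reward of 1 only on reaching the other end; through repeated interaction and the reinforcement signal alone, it learns that moving right is the ideal behavior in every state (all parameters here are illustrative):

```python
import random

# Tabular Q-learning on a made-up five-cell corridor. The agent starts in
# cell 0 and the only reward (the reinforcement signal) is 1 for reaching
# the goal in cell 4. All parameters are illustrative.

random.seed(0)
N_STATES = 5
ACTIONS = ("left", "right")
alpha, gamma, epsilon = 0.5, 0.9, 0.3     # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment: move one cell; reward 1 only on reaching the goal."""
    nxt = max(0, state - 1) if action == "left" else state + 1
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for _ in range(300):                       # episodes of trial and error
    s = 0
    while s != N_STATES - 1:
        # Mostly exploit what is already known, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, reward = step(s, a)
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2

policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)   # the learned behavior: move right in every state
```

Early episodes wander, like the child's first falls; the final greedy policy is recovered purely from the accumulated reward signal, with no labeled examples at any point.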

The following diagram summarizes the types of learning algorithm we have seen, as a handy reference point when choosing an algorithm for a given problem statement:

Challenges in machine learning

Some of the challenges we face in machine learning are as follows:

  • Lack of a well-defined machine learning problem. If the problem is not defined clearly, with the required criteria, the machine learning effort is likely to fail.
  • Feature engineering. Every activity related to the data and its features is essential to the success of a machine learning project.
  • Poor separation between the training set and the test set. Often the model performs well in the training phase but fails miserably in the field because the training set did not cover all possible data. This should be taken care of for the model to succeed in the field.
  • The right choice of algorithm. There is a wide range of algorithms available, but which one suits our problem best? The algorithm, along with its parameters, must be chosen carefully and iteratively.
 

Why use machine learning on mobile devices?


Machine learning is needed to extract meaningful and actionable information from huge amounts of data. A significant amount of computation is required to analyze huge amounts of data and arrive at an inference. This processing is ideal for a cloud environment. However, if we could carry out machine learning on a mobile, the following would be the advantages:

  • Machine learning could be performed offline, as there would be no need to send the device's data over the network and wait for results from the server.
  • The network bandwidth cost incurred, if any, due to the transmission of mobile data to the server is avoided. 
  • Latency is avoided by processing data locally. Mobile machine learning is highly responsive, as we don't have to wait for a connection and a response from the server: a server round trip might take 1-2 seconds, whereas on-device inference is nearly instant.
  • Privacy—this is another advantage of mobile machine learning. There is no need to send the user data outside the mobile device, enabling better privacy.

Machine learning started on computers, but the emerging trend shows that mobile app development with machine learning implemented on the device is the next big thing. Modern mobile devices have enough computing power to perform many tasks to the same degree as traditional computers. There are also signals from global corporations that confirm this assumption:

  • Google launched TensorFlow for Mobile, and there is very significant interest from the developer community.
  • Apple has launched SiriKit and Core ML, and all developers can now incorporate these features into their apps.
  • Lenovo is working on a new smartphone that performs indoor geolocation and augmented reality without an internet connection.
  • Significant research is being undertaken by most of the mobile chip makers, whether Apple, Qualcomm, Samsung, or even Google itself, on hardware dedicated to speeding up machine learning on mobile devices.
  • Many innovations are happening in the hardware layer to enable hardware acceleration, which will make machine learning on mobile easy.
  • Many mobile-optimized models, such as MobileNets and SqueezeNet, have been open sourced.
  • The availability of IoT devices and smart hardware appliances is increasing, which will aid in innovation.
  • There are more use cases that people are interested in for offline scenarios.
  • There is more and more focus on user data privacy and users' desire for their personal data not to leave their mobile devices at all.

Some classic examples of machine learning on mobile devices are as follows:

  • Speech recognition
  • Computer vision and image classification
  • Gesture recognition
  • Translation from one language into another
  • Interactive on-device detection of text
  • Autonomous vehicles, drone navigation, and robotics
  • Patient-monitoring systems and mobile applications interacting with medical devices

Ways to implement machine learning in mobile applications

Now that we clearly understand what machine learning is and what the key tasks in a learning problem are, let's review the four main activities of any machine learning problem:

  1. Define the machine learning problem
  2. Gather the data required
  3. Use that data to build/train a model
  4. Use the model to make predictions
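These four steps can be sketched end to end on a deliberately tiny, made-up problem: predicting an app's daily active users from its install count with one-variable least-squares regression (all numbers are fabricated for illustration):

```python
# The four machine learning activities on a tiny, made-up problem.

# 1. Define the problem: predict daily active users (DAU) from install count.
# 2. Gather the data required (fabricated for illustration).
installs = [100, 200, 300, 400]
dau      = [ 12,  22,  32,  42]

# 3. Use the data to train a model: closed-form least-squares line fit.
n = len(installs)
mean_x = sum(installs) / n
mean_y = sum(dau) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(installs, dau))
         / sum((x - mean_x) ** 2 for x in installs))
intercept = mean_y - slope * mean_x

# 4. Use the model to make predictions on new data.
def predict(x):
    return slope * x + intercept

print(round(predict(500), 6))   # -> 52.0
```

Steps 1-3 happen once, offline, and are the expensive part; step 4 is a cheap function call, which is exactly why inference is the part that fits comfortably on a mobile device.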

Training the model is the most difficult part of the whole process. Once we have trained the model and have the model ready, using it to infer or predict for a new dataset is very easy.

For each of the four steps listed above, we need to decide where it will be performed: on the device or in the cloud.

The main things we need to decide are as follows:

  • First of all, are we going to train and create a custom model or use a prebuilt model?
  • If we want to train our own model, do we do this training on our desktop machine or in the cloud? Is there a possibility to train the model on a mobile device? 
  • Once the model is available, are we going to put it in a local device and do the inference on the device or are we going to deploy the model in the cloud and do the inference from there?

The following are the broad possibilities to implement machine learning in mobile applications. We will get into the details of it in the upcoming sections:

Utilizing machine learning service providers for a machine learning model

There are many service providers offering machine learning as a service. We can just utilize them.

Examples of such providers who provide machine learning as a service are listed in the following points. This list is increasing every day:

  • Clarifai
  • Google Cloud Vision
  • Microsoft Azure Cognitive Services
  • IBM Watson
  • Amazon Web Services

If we go with this approach, the training is already done, the model is built, and the model's features are exposed as web services. So, all the mobile application has to do is invoke the model service with the required dataset, get the results from the cloud provider, and display them as per our requirements:
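As a sketch of what such an invocation involves from the app's side, the following builds a JSON request body for a hypothetical hosted image-classification service using only the standard library. The endpoint URL and field names are invented for illustration; every real provider defines its own request schema and usually ships an SDK:

```python
import base64
import json

# What invoking a hosted model service involves, seen from the app's side.
# The endpoint URL and field names are invented for illustration; real
# providers each define their own request schema and usually ship an SDK.

ENDPOINT = "https://ml.example.com/v1/classify"   # hypothetical endpoint

def build_request(image_bytes, max_results=3):
    """Package an image as the JSON body such a service might expect."""
    return json.dumps({
        "image": base64.b64encode(image_bytes).decode("ascii"),
        "maxResults": max_results,
    })

body = build_request(b"\x89PNG...fake image bytes")
# In a real app, this body would be POSTed to ENDPOINT over HTTPS and the
# returned labels displayed in the UI; here we only construct the payload.
print(json.loads(body)["maxResults"])   # -> 3
```

Note that the image itself travels over the network on every request; this is the bandwidth, latency, and privacy cost discussed later in this section.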

Some of the providers provide an SDK that makes the integration work very simple.

The cloud service provider may charge a fee for utilizing their machine learning web services. The fee may be based on various models, for example, the number of invocations, the type of model, and so on.

So, this is a very simple way to use machine learning services, without actually having to do anything about the model. On top of this, the machine learning service provider keeps the model updated by constant retraining, including new datasets whenever required, and so on. So, the maintenance and improvement of the model is automatically taken care of on a routine basis. 

This approach suits people who are experts in mobile development but know nothing about ML, yet want to build an ML-enabled app.

So the obvious benefits of such a cloud-based machine learning service are as follows:

  • It is easy to use.
  • No knowledge of machine learning is required and the tough part of the training is done by the service provider.
  • Retraining, model updates, support, and maintenance of the model are done by the provider.
  • Charges are paid only as per usage. There is no overhead to maintain the model, the data for training, and so on.

Some of the flip sides of this approach are as follows:

  • The prediction will be done in the cloud, so the dataset for which the prediction or inference is to be done has to be sent to the cloud, and it must be kept to an optimal size.
  • Since data moves over the network, there may be some performance issues experienced in the app, since the whole thing now becomes network-dependent.
  • The mobile application won't work in offline mode; it becomes a completely online application.
  • Mostly, charges are to be paid per request. So, if the number of users of the application increases exponentially, the cost for the machine learning service also increases. 
  • The training and retraining is in the control of the cloud service provider. So, they might have done training for common datasets. If our mobile application is going to use something really unique, chances are that the predictions may not work.

To get started with ML-enabled mobile applications, this approach is the right fit with respect to both cost and technical feasibility, and it is absolutely fine for a machine learning newbie.

Ways to train the machine learning model

There are various ways to go about training our own machine learning model. Before getting into ways to train our model, why would we go for training our own model?

Mostly, we decide to train our own model when our data is special or unique in some way, very specific to our requirements, and existing solutions cannot be used to solve our problem.

Training our own model requires a good dataset: one that is both high in quality and large in quantity.

Training our model can be done in multiple ways/places based on our requirements and the amount of data:

  • On a desktop (training in the cloud):
    • General cloud computing
    • Hosted machine learning
    • Private cloud/simple server machine
  • On a device: This is not very feasible. We can only deploy the trained model on a mobile device and invoke it from a mobile device. So far, the training process itself is not feasible from a mobile device.
On a desktop (training in the cloud)

If we have decided to carry out the training process on a desktop, we have to do it in the cloud or on our humble local server, based on our needs.

If we decide to use the cloud, again we have the following two options:

  • Generic cloud computing
  • Hosted machine learning

Generic cloud computing means utilizing a cloud provider's resources to carry out our own work. We want to carry out machine learning training, so whatever that requires, say hardware, storage, and so on, is obtained from the provider, and we can do whatever we need with these resources: place our training dataset there, run the training logic/algorithms, build the model, and so on.

Once the training is done and the model is created, the model can be taken anywhere for usage. To the cloud provider, we pay the charges for utilizing the hardware and storage only.

Amazon Web Services (AWS) and Azure are some of the cloud-computing vendors.

The benefits of using this approach are as follows:

  • The hardware/storage can be procured and used on the go. There is no need to worry about increasing storage and so on, when the amount of training data increases. It can be incremented when needed by paying the charges.
  • Once the training is done and the model is created, we can release the computing resources. Costs incurred on computing resources are only for the training period and hence if we are able to finish the training quickly, we save a lot.
  • We are free to download the trained model and use it anywhere.

What we need to be careful about when we go for this approach is the following:

  • We need to take care of the entire training work and the model creation. We are only going to use the compute resources required to carry out this work.
  • So, we need to know how to train and build the model.

Several companies, such as Amazon, Microsoft, and Google, now offer machine learning as a service on top of their existing cloud services. In the hosted machine learning model, we neither need to worry about the compute resources nor the machine learning models. We need to upload the data for our problem set, choose the model that we want to train for our data from the available list of models, and that's all. The machine learning services take care of training the model and providing the trained model to us for usage.

This approach works really well when we are not well-versed enough to write and train our own custom model, but also do not want to rely completely on a machine learning provider's prebuilt services; we want something in between. We can choose between the models, upload our unique dataset, and then train for our requirements.

In this type of approach, the provider usually ties us to their platform. We may not be able to download the model and deploy it anywhere else for usage; we may need to stay with them and utilize their platform from our app to use the trained model.

One more thing to note is that if at a later point in time, we decide to move to another provider, the trained model cannot be exported and imported to the other provider. We may need to carry out the training process again on the new provider platform.

In this approach, we might need to pay for the compute resources (hardware/storage), plus, after training, an ongoing per-usage fee to use the trained model, that is, on an on-demand basis: whenever we use it, we pay for what we use.

The benefits of using this approach are as follows:

  • There is no need to worry about the compute resources/storage required for training the data.
  • There is no need to worry about understanding the details of machine learning models to build and train custom models.
  • Just upload the data, choose the model to use for training, and that's it; get the trained model for usage.
  • There is no need to worry about deploying the model anywhere for consumption from the mobile application.

What we need to be careful about when we go for this approach is as follows:

  • Mostly, we may get tied to their platform after the training process in order to use the model obtained after training. However, there are a few exceptions, such as Google's Cloud platform.
  • We may be able to choose only from the models provided by the provider. We can only choose from the available list.
  • A trained model from one platform cannot be moved to another platform. So, if we decide to change platforms later, we may need to retrain on the new platform.
  • We may need to pay for compute resources and also pay on an ongoing basis for usage of the model.

Using our private cloud/simple server is similar to training on the generic cloud, except that we manage the compute resources/storage ourselves. In this approach, we miss out on the flexibility given by generic cloud solution providers, such as scaling compute and storage resources up and down on demand, and we take on the overhead of maintaining and managing these compute resources.

The major advantage of this approach is the security of our data. If we think our data is really unique and needs to be kept completely secure, this is a good approach to use: everything is done in-house using our own resources.

The benefits of using this approach are as follows:

  • Absolutely everything is in our control, including the compute resources, training data, model, and so on
  • It is more secure

What we need to be careful about when we go for this approach is the following:

  • Everything needs to be managed by us
  • We should be clear with the machine learning concepts, data, model, and training process
  • Continuous availability of compute resources/hardware is to be managed by us
  • If our dataset is going to be huge, this might not be very effective, as we may need to scale the compute resources and storage as per the increasing dataset size
On a device

The training process on a device has still not picked up; it may be feasible only for very small datasets. Since considerable compute resources are required to train on the data, as well as considerable storage to hold it, mobile is generally not the preferred platform for the training process.

The retraining phase also becomes complicated if we use mobile as a platform for the training process. 

Ways to carry out the inference – making predictions

Once the model is created, we need to use the model for a new dataset in order to infer or make the predictions. Similar to how we had various ways in which we could carry out the training process, we can have multiple approaches to carry out the inference process also:

  • On a server:
    • General cloud computing
    • Hosted machine learning
    • Private cloud/simple server machine
  • On a device

Inference on a server requires a network request, so the application needs to be online to use this approach, whereas inference on the device means the application can be completely offline. An offline application therefore avoids all the overheads of an online app in terms of speed, performance, and so on.

However, if the inference requires more compute resources (processing power or memory) than the device can provide, it cannot be done on the device.

Inference on a server

In this approach, once the model is trained, we host the model on a server to utilize it from the application.

The model can be hosted on a cloud machine, on a local server, or with a hosted machine learning provider. The server publishes an endpoint URL, which is accessed to make the required predictions; the required dataset is passed as input to the service.

Doing the inference on a server makes the mobile application simple. The model can be improved periodically, without having to redeploy the mobile client application. New features can be added into the model easily. There is no requirement to upgrade the mobile application for any model changes.

The benefits of using this approach are as follows:

  • The mobile application becomes relatively simple.
  • The model can be updated at any time without the redeployment of the client application.
  • It is easy to support multiple OS platforms without writing the complex inference logic in an OS-specific platform. Everything is done in the backend.

What we need to be careful about when we go for this approach is the following:

  • The application can work only in online mode. The application has to connect to backend components in order to carry out the inference logic.
  • The server hardware and software must be maintained and kept up and running, and they must scale with the number of users. For scalability, additional cost is required to manage multiple servers and ensure they are always available.
  • Users need to transmit the data to the backend for inference. If the data is huge, they might experience performance issues, and they pay for transmitting the data.
Inference on a device

In this approach, the machine learning model is loaded into the client mobile application. To make a prediction, the mobile application runs all the inference computations locally on the device, on its own CPU or GPU. It need not communicate to the server for anything related to machine learning.

Speed is the major reason for doing inference directly on a device. We need not send a request over the server and wait for the reply. Things happen almost instantaneously.
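Conceptually, the flow looks like the following sketch, in which a tiny made-up linear scorer stands in for a real Core ML or TensorFlow Lite model; the point is that the model ships inside the app bundle, is loaded once, and every prediction runs locally with no network call:

```python
import json

# Conceptual sketch of on-device inference: the trained model ships inside
# the app bundle as a file, is loaded once at startup, and every prediction
# runs locally with no network call. A tiny made-up linear scorer stands in
# for a real Core ML / TensorFlow Lite model; weights are fabricated.

BUNDLED_MODEL = json.dumps({       # would be a file in the app's resources
    "weights": {"word_free": 1.8, "word_winner": 1.5, "num_links": 0.6},
    "bias": -2.0,
})

def load_model(blob):
    return json.loads(blob)

def predict(model, features):
    """Score the features locally; a positive score means 'spam'."""
    score = model["bias"] + sum(
        model["weights"].get(name, 0.0) * value
        for name, value in features.items()
    )
    return "spam" if score > 0 else "ham"

model = load_model(BUNDLED_MODEL)                         # once, at app launch
print(predict(model, {"word_free": 1, "num_links": 2}))   # -> spam
```

Because the weights are read from the bundle, updating them means shipping a new app build, which is exactly the upgrade overhead described next.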

Since the model is bundled along with the mobile application, it is not very easy to upgrade the model in one place and reuse it. The mobile application upgrade has to be done. The upgrade push has to be provided to all active users. All this is a big overhead and will consume a lot of effort and time.

Even for small changes, retraining the model with very few additional parameters will involve a complex process of an application upgrade, pushing the upgrade to live users, and maintaining the required infrastructure for the same.

The benefits of using this approach are as follows:

  • Users can use the mobile application in offline mode. Availability of the network is not essential to operate the mobile application.
  • The prediction and inference can happen very quickly since the model is right there along with the application source code.
  • The data required to predict need not be sent over the network and hence no bandwidth cost is involved for users.
  • There is no overhead of running and maintaining server infrastructure, or of managing multiple servers for user scalability.

What we need to be careful about when we go for this approach is the following:

  • Since the model is included along with the application, it is difficult to make changes to the model. The changes can be done, but to make the changes reach all client applications is a costly process that consumes effort and time.
  • The model file, if huge, can increase the size of the application significantly.
  • The prediction logic should be written for each OS platform the application supports, say iOS or Android.
  • The model has to be properly encrypted or obfuscated to ensure it cannot be extracted by other developers. 

In this book, we are going to look into the details of utilizing the SDKs and tools available to perform tasks related to machine learning locally on a mobile device itself.

Popular mobile machine learning tools and SDKs

The following are the key machine learning SDKs we are going to explore in this book:

  • TensorFlow Lite from Google
  • Core ML from Apple
  • Caffe2Go from Facebook
  • ML Kit from Google
  • Fritz.ai

We will go over the details of the SDKs and also sample mobile machine learning applications built using these SDKs, leveraging different types of machine learning algorithms.

Skills needed to implement on-device machine learning

In order to implement machine learning on a mobile device, deep knowledge of machine learning algorithms, the overall process, and how to build a machine learning model is not required. A mobile application developer who knows how to create applications using the iOS or Android SDK only needs to know the mechanism for invoking a machine learning model from the mobile application to make predictions, just as they utilize backend APIs to invoke backend business logic. They need to know how to import the machine learning model into the mobile application's resources folder and then invoke the various features of the model to make predictions.

To summarize, the following diagram shows the steps for a mobile developer to implement machine learning on a device:

Note

Machine learning implementation on mobiles can be considered similar to backend API integration. You build the API separately and then integrate where required. Similarly, you build the model separately outside the device and import it into the mobile application and integrate where required.

 

Summary


In this chapter, we were introduced to machine learning, including the types of machine learning, where they are used, and practical scenarios in which they can be applied. We also saw what a well-defined machine learning problem is and understood when to go for a machine learning solution. Then we saw the machine learning process and the steps involved in building a machine learning model, from defining the problem to deploying the model in the field. We also covered certain important machine learning terms that are good to know.

We saw the challenges in implementing machine learning and, specifically, the need for implementing machine learning on mobiles and the challenges surrounding this. We saw different design approaches for implementing machine learning in mobile applications, the benefits of each, and the important considerations to analyze and keep in mind when deciding on each solution approach for implementing machine learning on mobile devices. Lastly, we glanced through the important mobile machine learning SDKs that we will go through in detail in subsequent chapters: TensorFlow Lite, Core ML, Fritz, ML Kit, and, lastly, the cloud-based Google Cloud Vision.

In the next chapter, we will learn more about Supervised and Unsupervised machine learning and how to implement it for mobiles.

About the Authors

  • Revathi Gopalakrishnan

    Revathi Gopalakrishnan is a software professional with more than 17 years of experience in the IT industry. She has worked extensively in mobile application development and has played various roles, including developer and architect, and has led various enterprise mobile enablement initiatives for large organizations. She has also worked on a host of consumer applications for various customers around the globe. She has an interest in emerging areas, and machine learning is one of them. Through this book, she has tried to bring out how machine learning can make mobile application development more interesting and super cool. Revathi resides in Chennai and enjoys her weekends with her husband and her two lovely daughters.

  • Avinash Venkateswarlu

    Avinash Venkateswarlu has more than 3 years' experience in IT and is currently exploring mobile machine learning. He has worked in enterprise mobile enablement projects and is interested in emerging technologies such as mobile machine learning and cryptocurrency. Venkateswarlu works in Chennai, but enjoys spending his weekends in his home town, Nellore. He likes to do farming or yoga when he is not in front of his laptop exploring emerging technologies.

