We're living in a world of mobile applications. They've become such a part and parcel of our everyday lives that we rarely look into the numbers behind them: the revenue they make, the actual market size of the business, and the quantitative figures that fuel the growth of mobile applications. Let's take a peek at the numbers:
Mobile devices and services are now the hubs for people's entertainment and business lives, as well as for communication. The smartphone has replaced the PC as the most important smart connected device. Mobile innovations, new business models, and mobile technologies are transforming every walk of human life.
Now, we come to machine learning. Why has machine learning been booming recently? Machine learning is not a new subject; it existed 10-20 years ago, so why is it in focus now and why is everyone talking about it? The reason is simple: data explosion. Social networking and mobile devices have enabled the generation of user data like never before. Ten years ago, images weren't uploaded to the cloud as they are today, because mobile phone penetration then could not be compared to what it is today. 4G connections now even make it possible to live stream video on demand (VOD), which means more data is flowing around the world than ever before. The next era is predicted to be the era of the Internet of Things (IoT), where there is going to be even more data, generated by sensors.
All this data is valuable only when we can put it to proper use, derive insights that bring value to us, and bring about unseen data patterns that provide new business opportunities. So, for this to happen, machine learning is the right tool to unlock the stored value in these piles and piles of data that are being accumulated each day.
So, it has become obvious that it is a great time to be a mobile application developer and a great time to be a machine learning data scientist. But how cool would it be if we were able to bring the power of machine learning to mobile devices and develop really cool mobile applications that leverage the power of machine learning? That's what we are trying to do through this book: give insights to mobile application developers on the basics of machine learning, expose them to various machine learning algorithms and mobile machine learning SDKs/tools, and go over developing mobile machine learning applications using these SDKs/tools.
Machine learning in the mobile space is a key innovation area that must be properly understood by mobile developers as it is transforming the way users can visualize and utilize mobile applications. So, how can machine learning transform mobile applications and convert them into applications that are any user's dream? Let me give you some examples to give a bird's eye view of what machine learning can do for mobile applications:
Machine learning can be applied to mobile applications belonging to any domain—healthcare, finance, games, communication, or anything under the sun. So, let's understand what machine learning is all about.
In this chapter, we will cover the following topics:
Machine learning is focused on writing software that can learn from past experience. One of the standard definitions of machine learning, as given by Tom Mitchell, a professor at the Carnegie Mellon University (CMU), is the following:
A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.
For example, a computer program that learns to play chess might improve its performance as measured by its ability to win at the class of tasks involving playing chess, through experience obtained by playing chess against itself. In general, to have a well-defined learning problem, we must identify the class of tasks, the measure of performance to be improved, and the source of experience. Consider that a chess-learning problem consists of the following: task, performance measure, and training experience, where:
To put it in simple terms, if a computer program is able to improve the way it performs a task with the help of previous experience, then we can say that the computer has learned. This scenario is very different from one where a program can perform a particular task because its programmers have already defined all the parameters and provided the data required to do so. A normal program can perform the task of playing chess because its programmers have written the code with a built-in winning strategy. A machine learning program, however, possesses no built-in strategy; it only has the rules for the legal moves in the game and a definition of what a winning scenario is. In such a case, the program needs to learn by repeatedly playing the game until it can win.
Is machine learning applicable to all scenarios? When exactly should we have the machine learn, rather than directly programming the machine with instructions to carry out the task?
Machine learning systems are not knowledge-based systems. In knowledge-based systems, we can directly use the knowledge to codify all possible rules to infer a solution. We go for machine learning when such codification of instructions is not straightforward. Machine learning programs will be more applicable in the following scenarios:
Machine learning is an art, and a data scientist who specializes in machine learning needs a mixture of skills: mathematics, statistics, data analysis, engineering, creative arts, bookkeeping, neuroscience, cognitive science, economics, and so on. They need to be a jack of all trades and a master of machine learning.
The machine learning process is an iterative process. It cannot be completed in one go. The most important activities to be performed for a machine learning solution are as follows:
There are four major steps involved in the whole process, which is iterative and repetitive, till the objective is met. Let's get into the details of each step in the following sections. The following diagram will give a quick overview of the entire process, so it is easy to go into the details:
As defined by Tom Mitchell, the problem must be a well-defined machine learning problem. The three important questions to be solved at this stage include the following:
The problem should be such that the outcome obtained as a solution is valuable for the business. Sufficient historical data should be available for learning/training purposes. The objective should be measurable, and we should be able to tell how much of the objective has been achieved at any point in time.
For example, if we are going to identify fraudulent transactions from a set of online transactions, then determining such fraudulent transactions is definitely valuable for the business. We need to have a sufficient set of online transactions. We should have a sufficient set of transactions that belong to various fraudulent categories. We should also have a mechanism to determine whether the outcome predicted as a fraudulent or nonfraudulent transaction can be verified and validated for the accuracy of prediction.
To give users an idea of what data would be sufficient to implement machine learning, we could say that a dataset of at least 100 items should be fine for starters and 1,000 would be nice. The more data we have that may cover all realistic scenarios for the problem domain, the better it is for the learning algorithm.
The data preparation activity is key to the success of the learning solution. The data is the key entity required for machine learning and it must be prepared properly to ensure the proper end results and objectives are obtained.
Data engineers usually spend around 80-90 percent of their overall time in the data preparation phase to get the right data, as this is fundamental and the most critical task for the success of the implementation of the machine learning program.
The following actions need to be performed in order to prepare the data:
maths_girls, physics_girls, commerce_girls, maths_boys, physics_boys, and commerce_boys).

The 66 percent/34 percent split is just a guide. If you have 1 million pieces of data, a 90 percent/10 percent split should be enough. With 100 million pieces of data, you can even go down to 99 percent/1 percent.
A trained model is not exposed to the test dataset during training and any predictions made on that dataset are designed to be indicative of the performance of the model in general. As such, we need to make sure the selection of datasets is representative of the problem that we are solving.
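To make the train/test split concrete, here is a minimal, framework-free sketch in Python. The 66 percent/34 percent ratio and the dummy records are illustrative assumptions; in practice a library helper such as scikit-learn's train_test_split is commonly used:

```python
import random

def split_train_test(data, test_fraction=0.34, seed=42):
    """Shuffle the records and split them into a training set and a test set."""
    rng = random.Random(seed)   # fixed seed so the split is reproducible
    shuffled = list(data)       # copy, so the caller's list is left untouched
    rng.shuffle(shuffled)
    cut = round(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

# 100 dummy records: (feature, label) pairs
records = [(i, i % 2) for i in range(100)]
train, test = split_train_test(records)
print(len(train), len(test))  # 66 34
```

Shuffling before cutting is what keeps both sets representative of the whole dataset, which is exactly the concern raised above.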
The model-building phase consists of many substeps, as indicated earlier, such as the selection of an appropriate machine learning algorithm, training the model, testing it, evaluating the model to determine whether the objectives have been achieved, and, if not, entering into the retraining phase by either selecting the same algorithm with different datasets or selecting an entirely new algorithm till the objectives are reached.
The first step toward building the model is to select the right machine learning algorithm that might solve the problem.
This step involves selecting the right machine learning algorithm and building a model, then training it using the training set. The algorithm will learn from the training data patterns that map the variables to the target, and it will output a model that captures these relationships. The machine learning model can then be used to get predictions on new data for which you do not know the target answer.
The goal is to select the most appropriate algorithm for building the machine learning model, train it, and then analyze the results received. We begin by selecting appropriate machine learning techniques to analyze the data. The next chapter, that is, Chapter 2, Random Forest on iOS, will talk about the different machine learning algorithms and present details of the types of problems for which they would be apt.
The training process and the analysis of results also vary based on the algorithm selected for training.
The training phase usually uses all the attributes of data present in the transformed data, which will include the predictor attributes as well as the objective attributes. All the data features are used in the training phase.
Once the machine learning algorithm is trained in the training data, the next step is to run the model in the test data.
The entire set of attributes or features of the data is divided into predictor attributes and objective attributes. The predictor attributes/features of the dataset are fed as input to the machine learning model and the model uses these attributes to predict the objective attributes. The test set uses only the predictor attributes. Now, the algorithm uses the predictor attributes and outputs predictions on objective attributes. Once the output is provided, it is compared against the actual data to understand the quality of output from the algorithm.
The results should be properly presented for further analysis. What to present in the results and how to present them are critical. They may also bring to the fore new business problems.
There should be a process to test machine learning algorithms and discover whether or not we have chosen the right algorithms, and to validate the output the algorithm provides against the problem statement.
This is the last step in the machine learning process, where we check the accuracy with the defined threshold for success criteria and, if the accuracy is greater than or equal to the threshold, then we are done. If not, we need to start all over again with a different machine learning algorithm, different parameter settings, more data, and changed data transformation. All steps in the entire machine learning process can be repeated, or a subset of it can be repeated. These are repeated till we come to the definition of "done" and are satisfied with the results.
The machine learning process is a very iterative one. Findings from one step may require a previous step to be repeated with new information. For example, during the data transformation step, we may find some data quality issues that may require us to go back to acquire more data from a different source.
Each step may also require several iterations; in particular, the data preparation and model selection steps often undergo several rounds. In the entire sequence of activities stated for performing machine learning, any activity can be repeated any number of times. For example, it is common to try different machine learning algorithms to train the model before moving on to testing it. So, it is important to recognize that this is a highly iterative process and not a linear one.
Test set creation: We have to define the test dataset clearly. The goal of the test dataset is as follows:
Performance measure: The performance measure is a way to evaluate the model created. Different performance metrics will need to be used to evaluate different machine learning models. These are standard performance measures from which we can choose to test our model. There may not be a need to customize the performance measures for our model.
The following are some of the important terms that need to be known to understand the performance measure of algorithms:
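To make terms such as true positives, false positives, precision, and recall concrete, here is a small Python sketch; the fraud/ok labels are illustrative assumptions, and real projects typically rely on library implementations such as scikit-learn's metrics module:

```python
def precision_recall(actual, predicted, positive="fraud"):
    """Compute precision and recall for one positive class."""
    # true positives: predicted positive and actually positive
    tp = sum(1 for a, p in zip(actual, predicted) if a == positive and p == positive)
    # false positives: predicted positive but actually negative
    fp = sum(1 for a, p in zip(actual, predicted) if a != positive and p == positive)
    # false negatives: predicted negative but actually positive
    fn = sum(1 for a, p in zip(actual, predicted) if a == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

actual    = ["fraud", "ok", "ok", "fraud", "ok", "fraud"]
predicted = ["fraud", "ok", "fraud", "ok", "ok", "fraud"]
p, r = precision_recall(actual, predicted)
print(round(p, 2), round(r, 2))  # 0.67 0.67
```

Precision answers "of the transactions we flagged as fraud, how many really were?", while recall answers "of the real frauds, how many did we catch?".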
There are some variations in how the types of machine learning algorithms are defined. The most common categorization is based on the learner type of the algorithm, as follows:
Supervised learning is a type of learning where the model is fed with enough information and knowledge and closely supervised to learn, so that, based on the learning it has done, it can predict the outcome for a new dataset.
Here, the model is trained under supervision, similar to supervision by teachers: we feed the model with enough training data containing the inputs/predictors, and we show it the correct answers or output. Based on this, it learns and becomes capable of predicting the output for unseen data that may come in the future.
A classic example of this is the standard Iris dataset. The Iris dataset covers three species of iris, and for each flower, the sepal length, sepal width, petal length, and petal width are given, along with a label stating which species that set of measurements belongs to. With this learning in place, the model will be able to predict the label (in this case, the iris species) based on the feature set (in this case, the four parameters).
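As a toy illustration of this idea, here is a tiny nearest-neighbour classifier in Python over a handful of Iris-style measurements. The six hardcoded samples are a stand-in for the full 150-sample dataset; this is a sketch of the concept, not a production classifier:

```python
import math

# A few labelled samples: (sepal length, sepal width, petal length, petal width)
training_data = [
    ((5.1, 3.5, 1.4, 0.2), "setosa"),
    ((4.9, 3.0, 1.4, 0.2), "setosa"),
    ((7.0, 3.2, 4.7, 1.4), "versicolor"),
    ((6.4, 3.2, 4.5, 1.5), "versicolor"),
    ((6.3, 3.3, 6.0, 2.5), "virginica"),
    ((5.8, 2.7, 5.1, 1.9), "virginica"),
]

def predict(features):
    """1-nearest-neighbour: the label of the closest training sample wins."""
    _, label = min(training_data,
                   key=lambda item: math.dist(item[0], features))
    return label

print(predict((5.0, 3.4, 1.5, 0.2)))  # setosa
```

Given a new flower's four measurements, the model predicts its species purely from the labelled examples it has seen, which is the essence of supervised learning.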
Supervised learning algorithms try to model the relationships and dependencies between the target prediction output and the input features, such that we can predict the output values for new data based on the relationships learned from previous datasets.
The following diagram will give you an idea of what supervised learning is. The data with labels is given as input to build the model through supervised learning algorithms. This is the training phase. Then the model is used to predict the class label for any input data without the label. This is the testing phase:
Again, in supervised learning algorithms, the predicted output could be a discrete/categorical value or it could be a continuous value based on the type of scenario considered and the dataset taken into consideration. If the output predicted is a discrete/categorical value, such algorithms fall under the classification algorithms, and if the output predicted is a continuous value, such algorithms fall under the regression algorithms.
If there is a set of emails and you want to learn from them and be able to tell which emails belong to the spam category and which emails belong to the non-spam category, then the algorithm to be used for this purpose will be a supervised learning algorithm belonging to the classification type. Here, you need to feed the model with a set of emails and feed enough knowledge to the model about the attributes, based on which it would segregate the email to either the spam category or the non-spam category. So the predicted output would be a categorical value, that is, spam or non-spam.
Let's take the use case where, based on a given set of parameters, we need to predict what the price of a house in a given area would be. This cannot be a categorical value; it is going to be a continuous value within a range, and also subject to change on a regular basis. In this problem, the model also needs to be provided with sufficient knowledge, based on which it is going to predict the price. This type of algorithm belongs to the supervised learning regression category of algorithms.
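A minimal sketch of such a regression in Python, fitting a straight line with simple least squares on one feature; the area/price figures are made up purely for illustration:

```python
# (floor area in square metres, sale price in thousands) -- made-up figures
data = [(50, 150), (70, 200), (90, 260), (110, 310), (130, 370)]

n = len(data)
mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n

# least-squares slope and intercept for a single-feature linear model
slope = (sum((x - mean_x) * (y - mean_y) for x, y in data)
         / sum((x - mean_x) ** 2 for x, _ in data))
intercept = mean_y - slope * mean_x

def predict_price(area):
    """Predict a continuous price from the fitted line."""
    return slope * area + intercept

print(predict_price(100))  # 285.5
```

Unlike the spam example, the output here is a continuous number rather than a category, which is what places this problem in the regression family.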
There are various algorithms belonging to the supervised category of the machine learning family:
In this learning pattern, no supervision is given to the model to make it learn. The model learns by itself based on the data fed to it and provides us with the patterns it has learned. It doesn't predict a discrete categorical value or a continuous value, but rather surfaces the patterns it has understood by looking at the data fed into it. The training data fed in is unlabeled, so it doesn't carry the explicit answers a supervised model would learn from.
Here, there's no supervision at all; in fact, the model might be able to teach us new things after it learns from the data. These algorithms are very useful when the feature set is too large and the human user doesn't know what to look for in the data.
This class of algorithms is mainly used for pattern detection and descriptive modeling. Descriptive modeling summarizes the relevant information from the data and presents a summary of what has already occurred, whereas predictive modeling summarizes the data and presents a summary of what can occur.
Unsupervised learning algorithms can be used for both categories of prediction. They use the input data to come up with different patterns, a summary of the data points, and insights that are not visible to human eyes. They come up with meaningful derived data or patterns of data that are helpful for end users.
The following diagram will give you an idea of what unsupervised learning is. The data without labels is given as input to build the model through unsupervised learning algorithms. This is the Training Phase. Then the model is used to predict the proper patterns for any input data without the label. This is the Testing Phase:
Based on the input data fed to the model and the method the model adopts to infer patterns in the dataset, two common categories of algorithms emerge in this family: clustering and association rule mapping algorithms.
In clustering, the model analyzes the input dataset and groups similar data items into the same cluster. It produces different clusters, and each cluster holds data items that are more similar to each other than to items belonging to other clusters. There are various mechanisms that can be used to create these clusters.
Customer segmentation is one example of clustering. Say we have a huge dataset of customers and capture all their features. The model could come up with interesting cluster patterns of customers that may not be obvious to the human eye. Such clusters could be very helpful for targeted campaigns and marketing.
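As a sketch of how clustering groups similar items, here is a plain two-cluster k-means in Python. The points and the naive first/last-point initialisation are illustrative assumptions; real implementations choose initial centres more carefully and handle empty clusters:

```python
import math

def two_means(points, iterations=10):
    """Plain k-means with k=2 and a naive initialisation (first and last point)."""
    centres = [points[0], points[-1]]
    clusters = [[], []]
    for _ in range(iterations):
        # assignment step: each point joins the cluster of its nearest centre
        clusters = [[], []]
        for p in points:
            nearest = 0 if math.dist(p, centres[0]) <= math.dist(p, centres[1]) else 1
            clusters[nearest].append(p)
        # update step: each centre moves to the mean of its cluster
        # (assumes neither cluster empties, which holds for this data)
        centres = [tuple(sum(v) / len(c) for v in zip(*c)) for c in clusters]
    return centres, clusters

# Two obvious customer groups: low spenders and high spenders
points = [(1, 2), (2, 1), (1, 1), (9, 10), (10, 9), (10, 10)]
centres, clusters = two_means(points)
print([len(c) for c in clusters])  # [3, 3]
```

No labels are given anywhere; the algorithm discovers the two groups from the data alone, which is what makes this unsupervised.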
On the other hand, association rule learning is a model to discover relations between variables in large datasets. A classic example would be market basket analysis. Here, the model tries to find strong relationships between different items in the market basket. It predicts relationships between items and determines how likely or unlikely it is for a user to purchase a particular item when they also purchase another item. For example, it might predict that a user who purchases bread will also purchase milk, or a user who purchases wine will also purchase diapers, and so on.
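The support and confidence measures that drive market basket analysis can be sketched in a few lines of Python; the baskets below are made-up examples:

```python
baskets = [
    {"bread", "milk"},
    {"bread", "milk", "butter"},
    {"bread", "jam"},
    {"milk", "butter"},
    {"bread", "milk", "jam"},
]

def support(itemset):
    """Fraction of baskets that contain every item in the itemset."""
    return sum(1 for b in baskets if itemset <= b) / len(baskets)

def confidence(antecedent, consequent):
    """How often the consequent appears in baskets that contain the antecedent."""
    return support(antecedent | consequent) / support(antecedent)

# "A user who purchases bread will also purchase milk" as a measured rule:
print(support({"bread", "milk"}), round(confidence({"bread"}, {"milk"}), 2))  # 0.6 0.75
```

Rules with high support and high confidence are the "strong relationships" the model reports; algorithms such as Apriori search for them efficiently over large datasets.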
The algorithms belonging to this category include the following:
In the previous two types, either there are no labels for all the observations in the dataset or labels are present for all the observations. Semi-supervised learning falls in between these two. In many practical situations, the cost of labeling is quite high, since it requires skilled human experts to do that. So, if labels are absent in the majority of the observations, but present in a few, then semi-supervised algorithms are the best candidates for the model building.
Speech analysis is one example of a semi-supervised learning model. Labeling audio files is very costly and requires a very high level of human effort. Applying semi-supervised learning models can really help to improve traditional speech analytic models.
As in supervised learning, depending on whether the predicted output is categorical or continuous, the algorithms in this class fall into the classification or regression family.
Reinforcement learning is goal-oriented learning based on interactions with the environment. A reinforcement learning algorithm (called the agent) continuously learns from the environment in an iterative fashion. In the process, the agent learns from its experiences of the environment until it explores the full range of possible states and is able to reach the target state.
Let's take the example of a child learning to ride a bicycle. The child tries to learn by riding it: they may fall, and they gradually understand how to balance, how to keep moving without falling, and how to sit in the proper position so that their weight doesn't shift to one side. They study the surface, and plan their actions according to the surface, slope, hills, and so on. In this way, they work through all the possible scenarios and states required to learn to ride the bicycle. A fall may be considered negative feedback, while riding along smoothly may be a positive reward. This is classic reinforcement learning, and it is exactly what the model does to determine the ideal behavior within a specific context, in order to maximize its performance. Simple reward feedback is required for the agent to learn its behavior; this is known as the reinforcement signal:
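A minimal tabular Q-learning sketch in Python captures this reward-driven learning: the agent learns, purely from reward feedback, to walk right along a tiny corridor to reach a goal. The corridor, reward, and hyperparameters are illustrative assumptions:

```python
import random

# A tiny corridor: states 0..4, with a reward of 1 for reaching state 4 (the goal).
# Actions: 0 = step left, 1 = step right.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q-value table: q[state][action]
rng = random.Random(0)                       # fixed seed for reproducibility

for episode in range(200):
    state = 0
    while state != GOAL:
        # epsilon-greedy: mostly exploit the best known action, sometimes explore
        if rng.random() < EPSILON or q[state][0] == q[state][1]:
            action = rng.randrange(2)
        else:
            action = 0 if q[state][0] > q[state][1] else 1
        next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # standard Q-learning update toward reward plus discounted future value
        q[state][action] += ALPHA * (reward + GAMMA * max(q[next_state]) - q[state][action])
        state = next_state

greedy_policy = [0 if qa[0] > qa[1] else 1 for qa in q[:GOAL]]
print(greedy_policy)  # [1, 1, 1, 1] -- step right in every state
```

Like the child on the bicycle, the agent is never told the correct action; it discovers the winning behavior by acting, observing the reward, and updating its estimates.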
Now, let's summarize the types of learning algorithms we have seen in a diagram, so that it serves as a handy reference point for choosing the algorithm for a given problem statement:
Some of the challenges we face in machine learning are as follows:
Machine learning is needed to extract meaningful and actionable information from huge amounts of data. A significant amount of computation is required to analyze huge amounts of data and arrive at an inference. This processing is ideal for a cloud environment. However, if we could carry out machine learning on a mobile, the following would be the advantages:
Machine learning started on computers, but the emerging trend shows that mobile app development with machine learning implemented on the device is the next big thing. Modern mobile devices have enough computing power to perform many tasks to the same degree as traditional computers do. There are also some signals from global corporations that confirm this assumption:
Some classic examples of machine learning on mobile devices are as follows:
Now, we clearly understand what machine learning is and what the key tasks to be performed in a learning problem are. The four main activities to be performed for any machine learning problem are as follows:
Training the model is the most difficult part of the whole process. Once we have trained the model and have the model ready, using it to infer or predict for a new dataset is very easy.
For all four steps provided in the preceding points, we clearly need to decide where we intend to perform them: on a device or in the cloud.
The main things we need to decide are as follows:
The following are the broad possibilities to implement machine learning in mobile applications. We will get into the details of it in the upcoming sections:
There are many service providers offering machine learning as a service. We can just utilize them.
Examples of such providers who provide machine learning as a service are listed in the following points. This list is increasing every day:
If we were to go with this model, the training is already done, the model is built, and the model's features are exposed as web services. So, all we have to do from the mobile application is invoke the model service with the required dataset, get the results from the cloud provider, and then display them in the mobile application as per our requirement:
Some of the providers provide an SDK that makes the integration work very simple.
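As a hedged sketch of what such an invocation involves, the following Python snippet shows the shape of the request a client would send. The endpoint URL and JSON payload format here are purely hypothetical, since every provider defines its own API, authentication, and request schema (the HTTP POST itself is omitted):

```python
import json

# Hypothetical endpoint and payload shape: every provider defines its own URL,
# request fields, and authentication scheme, so check the provider's API reference.
ENDPOINT = "https://ml.example.com/v1/models/spam-filter:predict"

def build_request(text):
    """Assemble the JSON body a mobile client would POST to the prediction service."""
    return json.dumps({"instances": [{"text": text}]})

body = build_request("Congratulations, you won a free cruise!")
print(body)  # {"instances": [{"text": "Congratulations, you won a free cruise!"}]}
```

The mobile application's job reduces to building such a request, sending it to the provider's endpoint, and rendering the prediction that comes back.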
There may be a fee we need to pay the cloud service provider for utilizing their machine learning web services. This fee may be charged based on various pricing models, for example, the number of times the service is invoked, the type of model, and so on.
So, this is a very simple way to use machine learning services, without actually having to do anything about the model. On top of this, the machine learning service provider keeps the model updated by constant retraining, including new datasets whenever required, and so on. So, the maintenance and improvement of the model is automatically taken care of on a routine basis.
So, this type of model suits people who are experts in mobile development but don't know anything about ML, yet want to build an ML-enabled app.
So the obvious benefits of such a cloud-based machine learning service are as follows:
Some of the flip sides of this approach are as follows:
To get started with ML-enabled mobile applications, this model is the right fit, both with respect to cost and technical feasibility, and it is absolutely fine for a machine learning newbie.
There are various ways to go about training our own machine learning model. Before getting into ways to train our model, why would we go for training our own model?
Mostly, we may decide to train our own model when our data is special or unique in some way, very specific to our requirements, and the existing solutions cannot be used to solve our problem.
For training our own model, a good dataset is required: one that is both qualitatively good and quantitatively large.
Training our model can be done in multiple ways/places based on our requirements and the amount of data:
If we have decided not to carry out the training process on the device, we can do it either in the cloud or on our humble local server, based on our needs.
If we decide to use the cloud, again we have the following two options:
Generic cloud computing means utilizing a cloud service provider's resources to carry out our work. Since we want to carry out machine learning training, whatever is required, say hardware, storage, and so on, must be obtained from them. We can then do whatever we need with these resources: place our training dataset there, run the training logic/algorithms, build the model, and so on.
Once the training is done and the model is created, the model can be taken anywhere for usage. To the cloud provider, we pay the charges for utilizing the hardware and storage only.
Amazon Web Services (AWS) and Azure are some of the cloud-computing vendors.
The benefits of using this approach are as follows:
What we need to be careful about when we go for this approach is the following:
Several companies, such as Amazon, Microsoft, and Google, now offer machine learning as a service on top of their existing cloud services. In the hosted machine learning model, we neither need to worry about the compute resources nor the machine learning models. We need to upload the data for our problem set, choose the model that we want to train for our data from the available list of models, and that's all. The machine learning services take care of training the model and providing the trained model to us for usage.
This approach works really well when we are not well-versed enough to write our own custom model and train it, but also do not want to rely completely on a machine learning provider's service, and want to do something in between. We can choose between the models, upload our unique dataset, and then train it for our requirements.
In this type of approach, the provider usually ties us to their platform. We may not be able to download the model and deploy it anywhere else for usage; we may need to stay with them and utilize their platform from our app to use the trained model.
One more thing to note is that if at a later point in time, we decide to move to another provider, the trained model cannot be exported and imported to the other provider. We may need to carry out the training process again on the new provider platform.
In this approach, we might need to pay for the compute resources (hardware and storage), plus, after the training, we may need to pay for using the trained model on an ongoing, per-usage basis, that is, on demand: whenever we use it, we pay for what we use.
The benefits of using this approach are as follows:
What we need to be careful about when we go for this approach is as follows:
Using our private cloud/simple server is similar to training on the generic cloud, except that we need to manage the compute resources and storage ourselves. In this approach, we miss out on the flexibility given by generic cloud solution providers to increase or decrease compute and storage resources on demand, and we take on the overhead of maintaining and managing these compute resources.
The major advantage we get with this approach is the security of our data. If we think our data is really unique and needs to be kept completely secure, this is a good approach to use. Here, everything is done in-house using our own resources.
The benefits of using this approach are as follows:
What we need to be careful about when we go for this approach is the following:
The training process on a device has still not picked up. It may be feasible for a very small dataset, but since the compute resources required to train on the data, as well as the storage required to hold it, are considerable, mobile is generally not the preferred platform for carrying out the training process.
The retraining phase also becomes complicated if we use mobile as a platform for the training process.
Once the model is created, we need to use the model for a new dataset in order to infer or make the predictions. Similar to how we had various ways in which we could carry out the training process, we can have multiple approaches to carry out the inference process also:
Inference on a server requires a network request, and the application needs to be online to use this approach. Inference on the device, on the other hand, means the application can be completely offline. So, an offline application obviously avoids all the overheads of an online app in terms of speed, performance, and so on.
However, if the inference requires significant compute resources, that is, processing power or memory, it cannot be done on the device.
In this approach, once the model is trained, we host the model on a server to utilize it from the application.
The model can be hosted either in a cloud machine or on a local server, or it can be that of a hosted machine learning provider. The server is going to publish the endpoint URL, which needs to be accessed to utilize it to make the required predictions. The required dataset is to be passed as input to the service.
Doing the inference on a server makes the mobile application simple. The model can be improved periodically, without having to redeploy the mobile client application. New features can be added into the model easily. There is no requirement to upgrade the mobile application for any model changes.
In this approach, the machine learning model is loaded into the client mobile application. To make a prediction, the mobile application runs all the inference computations locally on the device, on its own CPU or GPU; it need not communicate with the server for anything related to machine learning.
Speed is the major reason for doing inference directly on a device. We don't need to send a request to the server and wait for the reply; things happen almost instantaneously.
Since the model is bundled with the mobile application, it is not easy to upgrade the model in one place and have all users pick it up. The mobile application itself has to be upgraded, and the upgrade pushed to all active users; all this is a big overhead that consumes a lot of effort and time. Even for small changes, retraining the model with a few additional parameters involves the same complex process: upgrading the application, pushing the upgrade to live users, and maintaining the required infrastructure.
In this book, we are going to look into the details of utilizing the SDKs and tools available to perform tasks related to machine learning locally on a mobile device itself.
The following are the key machine learning SDKs we are going to explore in this book:
We will go over the details of the SDKs and also sample mobile machine learning applications built using these SDKs, leveraging different types of machine learning algorithms.
Implementing machine learning on a mobile device does not require deep knowledge of machine learning algorithms, the entire process, or how to build the machine learning model. A mobile application developer who knows how to create mobile applications using the iOS or Android SDK only needs to know the mechanism for invoking machine learning models from the mobile application to make predictions, just as they use backend APIs to invoke backend business logic. They need to know how to import the machine learning model into the mobile application's resources folder and then invoke the various features of the model to make the predictions.
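This mechanism can be sketched in plain Python (a real application would use Swift or Kotlin with one of the SDKs covered later; the JSON "model file" and the toy linear classifier here are stand-ins): the app ships a model file among its resources, loads it at runtime, and invokes it to make a prediction.

```python
import json
import tempfile
from pathlib import Path

# Pretend this file was bundled into the app's resources folder.
model_path = Path(tempfile.mkdtemp()) / "bundled_model.json"
model_path.write_text(json.dumps({"weights": [0.25, 0.75], "bias": 0.1}))

def load_model(path):
    # Step 1: load the model shipped with the application.
    return json.loads(Path(path).read_text())

def predict(model, features):
    # Step 2: invoke the model locally, with no server and no network.
    z = sum(w * x for w, x in zip(model["weights"], features)) + model["bias"]
    return 1 if z >= 0.5 else 0  # toy binary decision

model = load_model(model_path)
print(predict(model, [0.9, 0.2]))  # low score: class 0
print(predict(model, [0.9, 0.8]))  # high score: class 1
```

The developer never touches the training process; they only load a finished model artifact and call its prediction interface, which is exactly the shape of the workflow with the real SDKs.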
To summarize, the following diagram shows the steps for a mobile developer to implement machine learning on a device:
In this chapter, we were introduced to machine learning, including the types of machine learning and practical scenarios where they can be used. We saw what a well-defined machine learning problem looks like and when a machine learning solution is appropriate. We then walked through the machine learning process and the steps involved in building a machine learning model, from defining the problem to deploying the model in the field, and covered important terms used in the machine learning space that are good to know.
We looked at the challenges of implementing machine learning, specifically the need for machine learning on mobiles and the challenges surrounding it. We covered the different design approaches for implementing machine learning in mobile applications, the benefits of each design approach, and the important considerations to analyze and keep in mind when deciding between these approaches. Lastly, we glanced through the important mobile machine learning SDKs that we are going to cover in detail in subsequent chapters: TensorFlow Lite, Core ML, Fritz, ML Kit, and, lastly, the cloud-based Google Vision.
In the next chapter, we will learn more about supervised and unsupervised machine learning and how to implement them on mobiles.