
You're reading from The Applied TensorFlow and Keras Workshop

Product type: Book
Published in: Jul 2020
Reading level: Intermediate
Publisher: Packt
ISBN-13: 9781800201217
Edition: 1st
Authors (2):

Harveen Singh Chadha

Harveen Singh Chadha is an experienced researcher in deep learning and is currently working as a self-driving car engineer. He is focused on creating an advanced driver assistance systems (ADAS) platform. His passion is to help people who want to enter the data science universe. He is the author of the video course Hands-On Neural Network Programming with TensorFlow.

Luis Capelo

Luis Capelo is a Harvard-trained analyst and programmer who specializes in designing and developing data science products. He is based in New York City, USA. Luis is the head of the Data Products team at Forbes, where they investigate new techniques for optimizing article performance and create clever bots that help them distribute their content. He worked for the United Nations as part of the Humanitarian Data Exchange team (founders of the Center for Humanitarian Data). Later on, he led a team of scientists at the Flowminder Foundation, developing models for assisting the humanitarian community. Luis is a native of Havana, Cuba, and the founder and owner of a small consultancy firm dedicated to supporting the nascent Cuban private sector.


4. Productization

Overview

In this chapter, you will handle new data and create a model that is able to learn continuously from the patterns it is shown, helping it make better predictions. We will use a web application as our deployment example, not only because of the simplicity and prevalence of web apps, but also because of the different possibilities they provide, such as serving predictions to mobile devices through a web browser and exposing a REST API to other users and applications.

Introduction

This chapter focuses on how to productize a deep learning model. We use the word productize to define the creation of a software product from a deep learning model that can be used by other people and applications.

We are interested in models that use new data as and when it becomes available, continuously learn patterns from it, and, consequently, make better predictions. In this chapter, we will study two strategies for dealing with new data: one that retrains an existing model, and another that creates a completely new model. We will then implement the latter strategy in our Bitcoin price prediction model so that it can continuously predict new Bitcoin prices.
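The two strategies can be sketched in a few lines of code. The example below uses a tiny linear model as a stand-in (the chapter's actual model is a Keras LSTM, which is not reproduced here); the class and variable names are invented for illustration:

```python
import numpy as np

class TinyModel:
    """Stand-in for a trained model: ordinary least squares on one feature."""
    def __init__(self):
        self.w, self.b = 0.0, 0.0

    def fit(self, x, y):
        # Fit a line y = w*x + b to the training data.
        self.w, self.b = np.polyfit(x, y, deg=1)
        return self

    def predict(self, x):
        return self.w * np.asarray(x) + self.b

# Original training data and model (here, data following y = 2x + 1).
old_x, old_y = np.arange(10.0), 2 * np.arange(10.0) + 1
model = TinyModel().fit(old_x, old_y)

# New observations arrive later.
new_x, new_y = np.arange(10.0, 15.0), 2 * np.arange(10.0, 15.0) + 1

# Strategy 1: retrain the existing model on the combined old + new data.
model.fit(np.concatenate([old_x, new_x]), np.concatenate([old_y, new_y]))

# Strategy 2: discard the old model and train a completely new one,
# using only the most recent data.
fresh_model = TinyModel().fit(new_x, new_y)

print(round(float(model.predict([16.0])[0]), 2))        # → 33.0
print(round(float(fresh_model.predict([16.0])[0]), 2))  # → 33.0
```

On this toy data both strategies agree; in practice they trade off differently — retraining preserves what was learned from history, while a fresh model adapts faster when the underlying patterns shift.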

By the end of this chapter, we will be able to deploy a working web application (with a functioning HTTP API) and modify it to our heart's content.

Handling New Data

Models can be trained once using a set of data and can then be used to make predictions. Such static models can be very useful, but it is often the case that we want our model to continuously learn from new data—and to continuously get better as it does so.

In this section, we will discuss two strategies for handling new data and see how to implement them in Python.

Separating Data and Model

When building a deep learning application, the two most important areas are data and model. From an architectural point of view, it is recommended that these two areas be kept separate. We believe that is a good suggestion because each area involves tasks that are inherently distinct: data typically needs to be collected, cleaned, organized, and normalized, whereas models need to be trained, evaluated, and able to make predictions.
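As an illustration of that separation, the sketch below keeps data concerns and model concerns in two independent classes. The class names, the point-relative normalization, and the naive mean-step predictor are all invented for this example, not taken from the book's code base:

```python
import numpy as np

class PriceData:
    """Data side: collecting, cleaning, and normalizing observations."""
    def __init__(self, raw_prices):
        # "Cleaning": drop missing observations.
        self.prices = np.array([p for p in raw_prices if p is not None],
                               dtype=float)

    def normalized(self):
        # Point-relative normalization: each value divided by the first.
        return self.prices / self.prices[0]

class PriceModel:
    """Model side: training, evaluating, and predicting."""
    def __init__(self):
        self.mean_step = None

    def train(self, series):
        # Learn the average step between consecutive normalized values.
        self.mean_step = float(np.diff(series).mean())

    def predict_next(self, series):
        return float(series[-1] + self.mean_step)

    def evaluate(self, series):
        # Mean absolute error of one-step-ahead predictions.
        preds = series[:-1] + self.mean_step
        return float(np.abs(preds - series[1:]).mean())

data = PriceData([100.0, None, 110.0, 121.0])
series = data.normalized()          # [1.0, 1.1, 1.21]
model = PriceModel()
model.train(series)
print(round(model.predict_next(series), 3))  # → 1.315
```

Because the two classes share only a plain array, either side can be replaced — a different data source, or a real Keras model — without touching the other.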

Following that suggestion, we will be using two different code bases to help us build our web application...
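To make the end goal concrete, here is a minimal sketch of the kind of HTTP prediction endpoint such a web application might expose. It uses only Python's standard library; the endpoint path, the handler names, and the placeholder prediction value are all assumptions for illustration, not the chapter's actual application:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict_next_price():
    # Hypothetical stand-in for the trained model's prediction call.
    return {"prediction": 7142.0}  # placeholder value, not a real forecast

class PredictHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/predict":
            body = json.dumps(predict_next_price()).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

# Port 0 asks the OS for any free port; serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), PredictHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/predict"
with urllib.request.urlopen(url) as resp:
    body_text = resp.read().decode()
print(body_text)  # → {"prediction": 7142.0}

server.shutdown()
```

A real deployment would swap the placeholder for a call into the model code base, which is exactly what keeping data, model, and application code separate makes straightforward.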

Summary

This lesson concludes our journey into creating a deep learning model and deploying it as a web application. Our very last steps included deploying a model that predicts Bitcoin prices, built using Keras and the TensorFlow engine. We finished our work by packaging the application as a Docker container and deploying it so that other people and applications can consume our model's predictions via its API.

Aside from that work, you have also learned that there is much that can be improved. Our Bitcoin model is only an example of what a model can do (particularly LSTMs). The challenge now is twofold: how can you make that model perform better as time passes? And what features can you add to your web application to make your model more accessible? With the concepts you've learned in this book, you will be able to develop models and keep enhancing them to make accurate predictions.

