Model Development and Maintenance for AI Products

In this chapter, we will explore the nuances of model development, from linear regression to deep learning neural network models. We'll cover the variety of models available to use, as well as what's entailed in maintaining those models, from how they're developed and trained to how they're deployed and ultimately tested. This is a basic overview of the end-to-end process of model maintenance that product managers can expect from the engineering and DevOps teams that support their products.

There's a lot involved in bringing any new product to market, and if you've been a product manager for a while, you're likely familiar with the new product development (NPD) process. As a precursor to the rest of the chapter, particularly for those who are unfamiliar with the NPD process, we're going to summarize each...

Understanding the stages of NPD

In this section, we will cover the various stages of the NPD cycle as they relate to the emergence of an AI/ML product. Through each stage, we'll cover the major foundational areas, from the ideation to the launch of an acceptable first version of a product. The steps are laid out incrementally, starting with the discovery stage, in which you brainstorm about the need you're looking to address in the market and why that need is best served by AI. In the define stage, you establish the requirements for your product. In the design stage, you bring in the visual and experiential elements of your end product. In the implementation stage, you build it out. In the marketing stage, you craft a message for your broader audience. In the training stage, you put your product to the test and make sure it's being used as intended. Finally, in the launch stage, you release your product to a broader audience for feedback. Let's...

Model types – from linear regression to neural networks

In the previous chapter, we looked at a few model types that you'll likely encounter, use, and implement in products for different purposes. To jog your memory, here's a list of the ML models/algorithms you're likely to use in production:

  • Naive Bayes classifier: This algorithm “naively” treats every feature in your dataset as independent of the others, so it estimates associations probabilistically without modeling any relationships between features. It's one of the simpler algorithms out there, and that simplicity is precisely what makes it so effective for classification. It's commonly used for binary classification tasks, such as deciphering whether an email is spam or not (see the short sketch after this list).
  • Support Vector Machine (SVM): This algorithm is also largely used for classification problems and will essentially try to split your...
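
Here is that short sketch: a minimal, hedged example of a Naive Bayes spam classifier built with scikit-learn. The tiny inline dataset and the specific library choices are illustrative assumptions, not anything prescribed by this chapter:

```python
# A minimal sketch of the Naive Bayes spam example described above,
# using scikit-learn. The inline dataset is purely illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training data: 1 = spam, 0 = not spam (hypothetical examples).
texts = [
    "win a free prize now",
    "limited offer, claim your reward",
    "meeting notes attached for review",
    "can we reschedule tomorrow's call",
]
labels = [1, 1, 0, 0]

# Bag-of-words features feed a Multinomial Naive Bayes classifier, which
# treats each word count as conditionally independent given the class.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["claim your free reward now"]))      # likely [1] (spam)
print(model.predict(["see the attached meeting notes"]))  # likely [0] (not spam)
```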

Training – when is a model ready for market?

In this section, we will explore the standard process of gathering data to train a model and tuning hyperparameters to reach an acceptable level of performance. In the implementation phase (step 4 of the NPD process), we're looking for the level of performance that was deemed acceptable in the define phase (step 2 of the NPD process) before we move on to marketing and crafting our message for what success looks like when using our product. A lot has to happen in the implementation phase before we can do that.
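
As a hedged illustration of what tuning hyperparameters toward a performance target can look like in practice, here is a small sketch using scikit-learn's GridSearchCV. The dataset, model choice, and accuracy threshold are assumptions made for the example, not requirements from the chapter:

```python
# An illustrative sketch of hyperparameter tuning against a performance
# target using scikit-learn's GridSearchCV. The dataset, model, and
# acceptance threshold are assumptions for the example.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Search a small grid of hyperparameters with 5-fold cross-validation.
param_grid = {"n_estimators": [50, 100], "max_depth": [4, 8, None]}
search = GridSearchCV(RandomForestClassifier(random_state=42), param_grid, cv=5)
search.fit(X_train, y_train)

test_accuracy = search.score(X_test, y_test)
print("Best parameters:", search.best_params_)
print("Held-out accuracy:", round(test_accuracy, 3))

# A hypothetical acceptance gate from the define phase: only promote the
# model toward launch if it clears the agreed performance bar.
TARGET_ACCURACY = 0.95
print("Ready for the next phase?", test_accuracy >= TARGET_ACCURACY)
```

The important product-level point is the last check: the performance bar comes out of the define phase, and the implementation phase is not done until the tuned model clears it.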

Data accessibility is the most important factor when it comes to AI/ML products. At first, you might have to start with third-party data, which you’ll have to purchase, or public data that’s freely available or easily scraped. This is why you’ll likely want or need to partner with a few potential customers. Partnering with customers you can trust...

Deployment – what happens after the workstation?

In Chapter 1, we discussed deployment strategies that can be used as you manage your AI/ML products in production. In this section, we'd like you to understand the avenues available from a DevOps perspective: where you will ultimately deploy and serve the models in production, outside of the training workstation or training environment itself. Perhaps you're using something such as GitLab to manage the branches of your code repository for the various AI/ML applications in your product and experimenting there. Once you're ready to make changes or update your models after retraining, you'll push the new models into production regularly, which means you need a pipeline that can support this kind of experimentation, retraining, and deployment on an ongoing basis. This section will primarily focus on the considerations after we place a finished ML model into production (a live environment) where it will be accessed...
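
To ground this, here is a minimal sketch of what the hand-off from a training environment to a live serving endpoint can look like. It assumes a model serialized with joblib and a simple Flask service; the file name, route, and payload shape are hypothetical stand-ins for whatever your own pipeline produces:

```python
# A minimal sketch of the hand-off from training environment to a live
# serving endpoint. The file name, route, and feature payload are
# hypothetical examples.
import joblib
from flask import Flask, jsonify, request

# In the training environment you would persist the fitted model, e.g.:
#   joblib.dump(trained_model, "model-v2.joblib")
# The serving process then loads that artifact at startup.
model = joblib.load("model-v2.joblib")

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON body like {"features": [[5.1, 3.5, 1.4, 0.2]]}.
    features = request.get_json()["features"]
    prediction = model.predict(features).tolist()
    return jsonify({"prediction": prediction})

if __name__ == "__main__":
    # A retraining job would replace model-v2.joblib and redeploy this
    # service through your CI/CD pipeline (for example, from GitLab).
    app.run(host="0.0.0.0", port=8000)
```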

Testing and troubleshooting

In Chapter 1, we discussed the idea of continuous maintenance, which includes continuous integration, continuous delivery, continuous training, and continuous monitoring. This section builds on that and expands on how to test and troubleshoot issues related to ML products on an ongoing basis so that your product is set up for success. Once you've made your first deployment, you jump right into the continuous training and continuous monitoring portions of the continuous maintenance process we discussed in Chapter 1.

Remember, managing the performance of your models post-deployment is crucial, and it will be a highly iterative, never-ending process of model maintenance. As with traditional software development, you will continue to test, troubleshoot, and fix bugs for your AI/ML products. The difference is that you will also screen for degradation in model performance and for bugs related to the model itself.
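
As one hedged example of what screening for performance degradation might look like, here is a small sketch that tracks live accuracy over a rolling window and flags a drop against a launch baseline. The threshold, window size, and alerting behavior are assumptions for illustration:

```python
# A sketch of one kind of ongoing check: comparing live accuracy over a
# recent window against the baseline measured at launch. The threshold,
# window, and alerting hook are illustrative assumptions.
from collections import deque

BASELINE_ACCURACY = 0.95      # measured during the implementation phase
ALERT_THRESHOLD = 0.05        # tolerated drop before we investigate
WINDOW_SIZE = 500             # number of recent predictions to evaluate

recent_outcomes = deque(maxlen=WINDOW_SIZE)  # 1 = correct, 0 = incorrect

def record_outcome(was_correct: bool) -> None:
    """Log whether the model's latest prediction matched the ground truth."""
    recent_outcomes.append(1 if was_correct else 0)

def check_for_degradation() -> bool:
    """Return True if live accuracy has drifted below the tolerated band."""
    if len(recent_outcomes) < WINDOW_SIZE:
        return False  # not enough labeled outcomes yet to judge
    live_accuracy = sum(recent_outcomes) / len(recent_outcomes)
    return (BASELINE_ACCURACY - live_accuracy) > ALERT_THRESHOLD

# In production, a check like this might run on a schedule and page the
# team, or trigger a retraining job in the pipeline, when it returns True.
```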

Continuously monitoring your model...

Refreshing – the ethics of how often we update our models

When we think about the amazing power we have as humans, the complex brain operations we employ for things such as weighing up different choices or deciding whether or not we can trust someone, we may find it hard or impossible to believe that we could ever use machines to do even a fraction of what our minds can do. Most of us make choices, selections, and judgments without fully understanding the mechanism that powers those experiences. However, when it comes to ML, with the exception of neural networks, we can understand the underlying mechanisms that power certain determinations and classifications. We love the idea that ML can mirror our own ability to come to conclusions and that we can employ our critical thinking skills to make sure that process is as free from bias as possible.

The power of AI/ML allows us to automate repetitive, boring, uninspiring actions. We’d rather have content moderators, for...

Summary

In this chapter, we covered the NPD cycle and reviewed the common AI/ML model types. We also gave an overview of how to train, deploy, and troubleshoot the models that are chosen, giving us a reasonable foundation for what to expect when working with models in production. Finally, we touched on some of the most important ethical practices, drawn from some of the most rigorous standards that exist, for building products with AI/ML components.

If you're interested in going further with building ethical AI, we've provided some handy links in the following section for additional study. Keep in mind that we're at a critical juncture with regard to AI/ML ethics. We're building this ship as we're sailing it, and as AI/ML products continue to enter the zeitgeist, we will see additional measures put in place to rein in the potential harm caused by improper AI deployments through the diligent work of lawmakers and activists. We're not...

Additional resources

Reading about and familiarizing ourselves with AI ethics is important for everyone because AI is becoming increasingly difficult to avoid in our day-to-day lives. Additionally, if you actively work in the field of AI/ML as a data scientist, developer, engineer, product manager, or leader, it's doubly important that you're aware of the potential risks AI poses and how to build AI responsibly.

For further reading on ethical AI principles, we recommend the following reputable publications:

  • Blueprint for an AI Bill of Rights: https://www.whitehouse.gov/ostp/ai-bill-of-rights/
  • DoD Joint Artificial Intelligence Center Ethical Principles for AI: https://www.defense.gov/News/Releases/Release/Article/2091996/dod-adopts-ethical-principles-for-artificial-intelligence/
  • National AI Initiative Office on Advancing Trustworthy AI: https://www.ai.gov/strategic-pillars/advancing-trustworthy-ai/
  • Algorithmic Justice League: https://www.ajl.org...
