
Practical Machine Learning on Databricks

By Debu Sinha
About this book
Unleash the potential of Databricks for end-to-end machine learning with this comprehensive guide, tailored for experienced data scientists and developers transitioning from DIY or other cloud platforms. Building on a strong foundation in Python, Practical Machine Learning on Databricks serves as your roadmap from development to production, covering all intermediary steps using the Databricks platform. You’ll start with an overview of machine learning applications, Databricks platform features, and MLflow. Next, you’ll dive into data preparation, model selection, and training essentials and discover the power of Databricks Feature Store for precomputing feature tables. You’ll also learn to kickstart your projects using Databricks AutoML and automate retraining and deployment through Databricks Workflows. By the end of this book, you’ll have mastered MLflow for experiment tracking, collaboration, and advanced use cases like model interpretability and governance. The book is enriched with hands-on example code at every step. While primarily focused on generally available features, the book equips you to easily adapt to future innovations in machine learning, Databricks, and MLflow.
Publication date: November 2023
Publisher: Packt
Pages: 244
ISBN: 9781801812030

 

The ML Process and Its Challenges

Welcome to the world of simplifying your machine learning (ML) life cycle with the Databricks platform.

As a senior specialist solutions architect at Databricks specializing in ML, I have had the opportunity over the years to collaborate with enterprises to architect ML-capable platforms that solve their unique business use cases on the Databricks platform. Now that experience is at your service. The knowledge you gain from this book will open new career opportunities and change how you approach architecting ML pipelines for your organization’s ML use cases.

This book does assume that you have a reasonable understanding of Python, as the accompanying code samples are in Python. Some pandas know-how is also required, familiarity with Apache Spark is a plus, and a solid grasp of ML and data science is necessary. This book is not about teaching you ML techniques from scratch; it assumes you are an experienced data science practitioner who wants to learn how to take your ML use cases from development to production, and all the steps in between, using the Databricks platform.

Note

This book focuses on features that are currently generally available (GA). The code examples provided utilize Databricks notebooks. While Databricks is actively developing features to support workflows using external integrated development environments (IDEs), those features are not covered in this book. That said, working through this book will give you a solid foundation for quickly picking up new features as they become GA.

In this chapter, we will cover the following:

  • Understanding the typical ML process
  • Discovering the personas involved with the machine learning process in organizations
  • Challenges with productionizing machine learning use cases in organizations
  • Understanding the requirements of an enterprise machine learning platform
  • Exploring Databricks and the Lakehouse architecture

By the end of this chapter, you should have a fundamental understanding of what a typical ML development life cycle looks like in an enterprise and the different personas involved in it. You will also know why most ML projects fail to deliver business value and how the Databricks Lakehouse Platform provides a solution.

 

Understanding the typical machine learning process

The following diagram summarizes the ML process in an organization:

Figure 1.1 – The data science development life cycle consists of three main stages – data preparation, modeling, and deployment


Note

Source: https://azure.microsoft.com/mediahandler/files/resourcefiles/standardizing-the-machine-learning-lifecycle/Standardizing%20ML%20eBook.pdf.

It is an iterative process. The raw structured and unstructured data first lands into a data lake from different sources. A data lake utilizes the scalable and cheap storage provided by cloud storage such as Amazon Simple Storage Service (S3) or Azure Data Lake Storage (ADLS), depending on which cloud provider an organization uses. Due to regulations, many organizations have a multi-cloud strategy, making it essential to choose cloud-agnostic technologies and frameworks to simplify infrastructure management and reduce operational overhead.

Databricks defined a design pattern called the medallion architecture to organize data in a data lake. Before moving forward, let’s briefly understand what the medallion architecture is:

Figure 1.2 – Databricks medallion architecture


The medallion architecture is a data design pattern that’s used in a Lakehouse to organize data logically. It involves structuring data into layers (Bronze, Silver, and Gold) to progressively improve its quality and structure. The medallion architecture is also referred to as a “multi-hop” architecture.

The Lakehouse architecture, which combines the best features of data lakes and data warehouses, offers several benefits, including a simple data model, ease of implementation, incremental extract, transform, and load (ETL), and the ability to recreate tables from raw data at any time. It also provides features such as ACID transactions and time travel for data versioning and historical analysis. We will expand more on the lakehouse in the Exploring Databricks and the Lakehouse architecture section.

In the medallion architecture, the Bronze layer holds raw data sourced from external systems, preserving its original structure along with additional metadata. The focus here is on quick change data capture (CDC) and maintaining a historical archive. The Silver layer, on the other hand, houses cleansed, conformed, and “just enough” transformed data. It provides an enterprise-wide view of key business entities and serves as a source for self-service analytics, ad hoc reporting, and advanced analytics.

The Gold layer is where curated business-level tables reside that have been organized for consumption and reporting purposes. This layer utilizes denormalized, read-optimized data models with fewer joins. Complex transformations and data quality rules are applied here, facilitating the final presentation layer for various projects, such as customer analytics, product quality analytics, inventory analytics, and more. Traditional data marts and enterprise data warehouses (EDWs) can also be integrated into the lakehouse to enable comprehensive “pan-EDW” advanced analytics and ML.
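As a purely illustrative sketch of the Bronze-to-Silver-to-Gold progression, here is the layering idea in plain Python (in practice each layer would be a Delta table processed with Spark; the records and column names are made up for illustration):

```python
# Bronze: raw records exactly as ingested, plus ingestion metadata.
bronze = [
    {"order_id": "1", "amount": " 120.50", "country": "us", "_ingested": "2023-11-01"},
    {"order_id": "2", "amount": "80.00", "country": "US", "_ingested": "2023-11-01"},
    {"order_id": "2", "amount": "80.00", "country": "US", "_ingested": "2023-11-01"},  # duplicate
]

# Silver: cleansed and conformed -- types cast, values normalized, duplicates dropped.
seen, silver = set(), []
for r in bronze:
    key = r["order_id"]
    if key in seen:
        continue
    seen.add(key)
    silver.append(
        {"order_id": int(key), "amount": float(r["amount"]), "country": r["country"].upper()}
    )

# Gold: aggregated, read-optimized view for reporting.
gold = {}
for r in silver:
    gold[r["country"]] = gold.get(r["country"], 0.0) + r["amount"]

print(gold)  # {'US': 200.5}
```

Each step mirrors the layer's purpose: Bronze preserves the source faithfully, Silver enforces quality and conformance, and Gold serves a consumption-ready aggregate.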

The medallion architecture aligns well with the concept of a data mesh, where Bronze and Silver tables can be joined in a “one-to-many” fashion to generate multiple downstream tables, enhancing data scalability and autonomy.

Apache Spark has overtaken Hadoop MapReduce as the de facto standard for processing data at scale over the last six years, thanks to advances in performance and broad developer-community adoption and support. There are many excellent books on Apache Spark, written by its creators themselves; these are listed in the Further reading section and offer deeper insight into Spark’s other benefits.

Once the clean data lands in the Gold standard tables, features are generated by combining gold datasets, which act as input for ML model training.

During the model development and training phase, various sets of hyperparameters and ML algorithms are tested to identify the optimal combination of the model and corresponding hyperparameters. This process relies on predetermined evaluation metrics such as accuracy, R2 score, and F1 score.
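As a toy illustration of these evaluation metrics, here is how accuracy and the F1 score might be computed by hand for a small binary-classification example (the labels are made up for illustration):

```python
# True labels and a classifier's predictions for eight examples.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Confusion-matrix counts for the positive class.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

print(accuracy, round(f1, 3))  # 0.75 0.75
```

In practice, libraries such as scikit-learn provide these metrics directly; the point here is only what each number measures.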

In the context of ML, hyperparameters are parameters that govern the learning process of a model. They are not learned from the data itself but are set before training. Examples of hyperparameters include the learning rate, regularization strength, number of hidden layers in a neural network, or the choice of a kernel function in a support vector machine. Adjusting these hyperparameters can significantly impact the performance and behavior of the model.

On the other hand, training an ML model involves deriving values for other model parameters, such as node weights or model coefficients. These parameters are learned during the training process using the training data to minimize a chosen loss or error function. They are specific to the model being trained and are determined iteratively through optimization techniques such as gradient descent or closed-form solutions.

Expanding beyond node weights, model parameters can also include coefficients in regression models, intercept terms, feature importance scores in decision trees, or filter weights in convolutional neural networks. These parameters are directly learned from the data during the training process and contribute to the model’s ability to make predictions.
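To make the distinction concrete, here is a minimal pure-Python sketch (with made-up data) in which the learning rate is a hyperparameter fixed before training, while the weight and bias are model parameters learned by gradient descent:

```python
# Toy dataset, roughly y = 2x.
data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9), (4.0, 8.2)]

learning_rate = 0.01  # hyperparameter: chosen before training, not learned
w, b = 0.0, 0.0       # parameters: learned from the data during training

for _ in range(2000):
    # Gradients of the mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(round(w, 2))  # close to 2
```

Changing `learning_rate` changes how training behaves (too large and it diverges, too small and it converges slowly), while `w` and `b` are whatever values the optimization settles on.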

Parameters

You can learn more about parameters at https://en.wikipedia.org/wiki/Parameter.

The finalized model is deployed for batch or streaming inference, or for real-time inference as a Representational State Transfer (REST) endpoint using containers. In this phase, we set up monitoring for drift and establish governance around the deployed models to manage the model life cycle and enforce access control around usage. Let’s take a look at the different personas involved in taking an ML use case from development to production.

 

Discovering the roles associated with machine learning projects in organizations

Typically, three different types of persona are involved in developing an ML solution in an organization:

  • Data engineers: The data engineers create data pipelines that take in structured, semi-structured, and unstructured data from source systems and ingest them in a data lake. Once the raw data lands in the data lake, the data engineers are also responsible for securely storing the data, ensuring that the data is reliable, clean, and easy to discover and utilize by the users in the organization.
  • Data scientists: Data scientists collaborate with subject matter experts (SMEs) to understand and address business problems, ensuring a solid business justification for projects. They utilize clean data from data lakes and perform feature engineering, selecting and transforming relevant features. By developing and training multiple ML models with different sets of hyperparameters, data scientists can evaluate them on test sets to identify the best-performing model. Throughout this process, collaboration with SMEs validates the models against business requirements, ensuring their alignment with objectives and key performance indicators (KPIs). This iterative approach helps data scientists select a model that effectively solves the problem and meets the specified KPIs.
  • Machine learning engineers: The ML engineering teams deploy the ML models created by data scientists into production environments. It is crucial to establish procedures, governance, and access control early on, including defining data scientist access to specific environments and data. ML engineers also implement monitoring systems to track model performance and data drift. They enforce governance practices, track model lineage, and ensure access control for data security and compliance throughout the ML life cycle.

A typical ML project life cycle consists of data engineering, then data science, and lastly, production deployment by the ML engineering team. This is an iterative process.

Now, let’s take a look at the various challenges involved in productionizing ML models.

 

Challenges with productionizing machine learning use cases in organizations

At this point, we understand what a typical ML project life cycle looks like in an organization and the different personas involved in the ML process. It looks very intuitive, though we still see many enterprises struggling to deliver business value from their data science projects.

In 2017, Gartner analyst Nick Heudecker stated that 85% of data science projects fail. A report published by Dimensional Research (https://dimensionalresearch.com/) also found that only 4% of companies have been successful in deploying ML use cases to production. A 2021 study by Rackspace Technology of 1,870 organizations across various industries found that only 20% have mature AI and ML practices.

Sources

See the Further reading section for more details on these statistics.

Most enterprises face some common technical challenges in successfully delivering business value from data science projects:

  • Unintended data silos and messy data: Data silos can be thought of as groups of data in an organization that are governed and accessible only by specific users or groups within the organization. There are valid reasons to have data silos, such as compliance with privacy regulations like the General Data Protection Regulation (GDPR) in Europe or the California Consumer Privacy Act (CCPA), but these are the exception rather than the norm. Gartner has stated that almost 87% of organizations have low analytics and business intelligence maturity, meaning that data is not being fully utilized.

    Data silos generally arise because different departments within an organization use different technology stacks to manage and process their data.

    The following figure highlights this challenge:

Figure 1.3 – The tools used by the different teams in an organization and the different silos


Each persona works in a different environment with a different set of tools. Data analysts rely on SQL, spreadsheets, and visualization tools for insights and reporting. Data engineers work with programming languages and platforms such as Apache Spark to build and manage data infrastructure. Data scientists use statistical programming languages, ML frameworks, and data visualization libraries to develop predictive models. ML engineers combine ML expertise with software engineering skills to deploy models into production systems. These divergent toolsets can pose challenges in terms of data consistency, tool compatibility, and collaboration; standardized processes and knowledge sharing can help mitigate these challenges and foster effective teamwork. Traditionally, there has been little to no collaboration between these teams. As a result, a data science use case with validated business value may not be developed at the required pace, negatively impacting the growth and effective management of the business.

When the concept of data lakes came up in the past decade, they promised a scalable and cheap solution to support structured and unstructured data. The goal was to enable organization-wide effective usage and collaboration of data. In reality, most data lakes ended up becoming data swamps, with little to no governance regarding the quality of data.

This inherently made ML very difficult since an ML model is only as good as the data it’s trained on.

  • Building and managing an effective ML production environment is challenging: The ML teams at Google have done a lot of research on the technical challenges of setting up an ML development environment. A Google research paper on hidden technical debt in ML systems, published at NeurIPS (https://proceedings.neurips.cc/paper/2015/file/86df7dcfd896fcaf2674f757a2463eba-Paper.pdf), documented that writing ML code is just a tiny piece of the whole ML development life cycle. To develop an effective ML practice in an organization, many tools, configurations, and monitoring aspects need to be integrated into the overall architecture. One of the critical components is monitoring drift in model performance and providing feedback and retraining:
Figure 1.4 – Hidden Technical Debt in Machine Learning Systems, NeurIPS 2015


Let’s understand the requirements of an enterprise-grade ML platform a bit more.

 

Understanding the requirements of an enterprise-grade machine learning platform

In the fast-paced world of artificial intelligence (AI) and ML, an enterprise-grade ML platform takes center stage as a critical component. It is a comprehensive software platform that offers the infrastructure, tools, and processes required to construct, deploy, and manage ML models at scale. A truly robust ML platform, however, goes beyond these capabilities, extending to every stage of the ML life cycle, from data preparation, model training, and deployment to continuous monitoring and improvement.

When we speak of an enterprise-grade ML platform, several key attributes determine its effectiveness, each of which is considered a cornerstone of such platforms. Let’s delve deeper into each of these critical requirements and understand their significance in an enterprise setting.

Scalability – the growth catalyst

Scalability is an essential attribute, enabling the platform to adapt to the expanding needs of a burgeoning organization. In the context of ML, this encompasses the capacity to handle voluminous datasets, manage multiple models simultaneously, and accommodate a growing number of concurrent users. As the organization’s data grows exponentially, the platform must have the capability to expand and efficiently process the increasing data without compromising performance.

Performance – ensuring efficiency and speed

In a real-world enterprise setting, the ML platform’s performance directly influences business operations. It should possess the capability to deliver high performance both in the training and inference stages. These stages are critical to ensure that models can be efficiently trained with minimum resources, and then deployed into production environments, ready to make timely and accurate predictions. A high-performance platform translates to faster decisions, and in today’s fast-paced business world, every second counts.

Security – safeguarding data and models

In an era where data breaches are common, an ML platform’s security becomes a paramount concern. A robust ML platform should prioritize security and comply with industry regulations. This involves an assortment of features such as stringent data encryption techniques, access control mechanisms to prevent unauthorized access, and auditing capabilities to track activities in the system, all of which contribute to securely handling sensitive data and ML models.

Governance – steering the machine learning life cycle

Governance is an often overlooked yet vital attribute of an enterprise-grade ML platform. Effective governance tools can facilitate the management of the entire life cycle of ML models. They can control versioning, maintain lineage tracking to understand the evolution of models, and audit for regulatory compliance and transparency. As the complexity of ML projects increases, governance tools ensure smooth sailing by managing the models and maintaining a clean and understandable system.

Reproducibility – ensuring trust and consistency

Reproducibility serves as a foundation for trust in any ML model. The ML platform should ensure the reproducibility of the results from ML experiments, thereby establishing credibility and confidence in the models. This means that given the same data and the same conditions, the model should produce the same outputs consistently. Reproducibility directly impacts the decision-making process, ensuring the decisions are consistent and reliable, and the models can be trusted.
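One small but essential ingredient of reproducibility is controlling randomness, such as seeding the random number generators used for data splits and sampling. A minimal sketch (the function name, seed value, and split sizes are made up for illustration):

```python
import random

def run_experiment(seed):
    """Fixing the seed makes the 'experiment' reproducible: the same
    shuffled order and the same train/test split on every run."""
    rng = random.Random(seed)
    data = list(range(10))
    rng.shuffle(data)
    return data[:8], data[8:]  # train split, test split

# Same seed, same conditions -> identical results.
assert run_experiment(42) == run_experiment(42)
```

Real platforms go further, also versioning the data, code, environment, and hyperparameters that produced each result, but seed control is the first step.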

Ease of use – balancing complexity and usability

Last, but by no means least, is the ease of use of the ML platform. Despite the inherent complexity of ML processes, the platform should be intuitive and user-friendly for a wide range of users, from data scientists to ML engineers. This extends to features such as a streamlined user interface, a well-documented API, and a user-centric design, making it easier for users to develop, deploy, and manage models. An easy-to-use platform reduces the barriers to entry, increases adoption, and empowers users to focus more on the ML tasks at hand rather than struggling with the platform.

In essence, an enterprise MLOps platform needs capabilities for model development, deployment, scalability, collaboration, monitoring, and automation. Databricks fits in by offering a unified environment for ML practitioners to develop and train models, deploy them at scale, and monitor their performance. It supports collaboration, integrates with popular deployment technologies, and provides automation and CI/CD capabilities.

Now, let’s delve deeper into the capabilities of the Databricks Lakehouse architecture and its unified AI/analytics platform, which establish it as an exceptional ML platform for enterprise readiness.

 

Exploring Databricks and the Lakehouse architecture

Databricks is a renowned cloud-native and enterprise-ready data analytics platform that integrates data engineering, data science, and ML to enable organizations to develop and deploy ML models at scale.

Cloud-native refers to an approach where software applications are designed, developed, and deployed specifically for cloud environments. It involves utilizing technologies such as containers, microservices, and orchestration platforms to achieve scalability, resilience, and agility. By leveraging the cloud’s capabilities, Databricks can scale dynamically, recover from failures, and adapt quickly to changing demands, enabling organizations to maximize the benefits of cloud computing.

Databricks achieves the six cornerstones of an enterprise-grade ML platform. Let’s take a closer look.

Scalability – the growth catalyst

Databricks provides fully managed Apache Spark (an open source distributed computing system known for its ability to handle large volumes of data and perform computations in a distributed manner) clusters.

Apache Spark consists of several components, including nodes and a driver program. Nodes refer to the individual machines or servers within the Spark cluster that contribute computational resources. The driver program is responsible for running the user’s application code and coordinating the overall execution of the Spark job. It communicates with the cluster manager to allocate resources and manages the SparkContext, which serves as the entry point to the Spark cluster. Resilient distributed datasets (RDDs) are the core data structure, enabling parallel processing, and Spark uses a directed acyclic graph (DAG) to optimize computations. Transformations and actions are performed on RDDs, while cluster managers handle resource allocation. Additionally, caching and shuffling enhance performance.

The DataFrames API in Spark is a distributed collection of data that’s organized into named columns. It provides a higher-level abstraction compared to working directly with RDDs in Spark, making it easier to manipulate and analyze structured data. It supports a SQL-like syntax and provides a wide range of functions for data manipulation and transformation.

Spark provides APIs in various languages, including Scala, Java, Python, and R, allowing users to leverage their existing skills and choose the language they are most comfortable with.

Apache Spark processes large datasets across multiple nodes, making it highly scalable. It supports both streaming and batch processing. This means that you can use Spark to process real-time data streams as well as large-scale batch jobs. Spark Structured Streaming, a component of Spark, allows you to process live data streams in a scalable and fault-tolerant manner. It provides high-level abstractions that make it easy to write streaming applications using familiar batch processing concepts.

Furthermore, Databricks allows for dynamic scaling and autoscaling of clusters, which adjusts resources based on the workload, ensuring the efficient use of resources while accommodating growing organizational needs.

While this book doesn’t delve into Apache Spark in detail, we have curated a Further reading section with excellent recommendations that will help you explore Apache Spark more comprehensively.

Performance – ensuring efficiency and speed

Databricks Runtime is optimized for the cloud and includes enhancements over open source Apache Spark that significantly increase performance. The Databricks Delta engine provides fast query execution for big data and AI workflows while reducing the time and resources needed for data preparation and iterative model training. Its optimized runtime improves both model training and inference speeds, resulting in more efficient operations.

Security – safeguarding data and models

Databricks ensures a high level of security through various means. It offers data encryption at rest and in transit, uses role-based access control (RBAC) to provide fine-grained user permissions, and integrates with identity providers for single sign-on (SSO).

Databricks also offers Unity Catalog, a centralized metadata store for Databricks workspaces that provides data governance capabilities such as access control, auditing, lineage, and data discovery. Its key features include centralized governance, a universal security model, automated lineage tracking, and easy data discovery, which together improve governance, reduce operational overhead, and increase data agility. Unity Catalog is a complex topic that will not be covered extensively in this book; a link with more information is provided in the Further reading section.

The Databricks platform is compliant with several industry regulations, including GDPR, CCPA, HIPAA, SOC 2 Type II, and ISO/IEC 27017. For a complete list of certifications, check out https://www.databricks.com/trust/compliance.

Governance – steering the machine learning life cycle

Databricks provides MLflow, an open source platform for managing the ML life cycle, including experimentation, reproducibility, and deployment. It supports model versioning through the Model Registry, which tracks model versions and their stages in the life cycle (staging, production, and so on). Additionally, the platform provides audit logs for tracking user activity, helping meet regulatory requirements and promoting transparency. Databricks also has its own hosted feature store, which we will cover in more detail in later chapters.

Reproducibility – ensuring trust and consistency

With MLflow, Databricks ensures the reproducibility of ML models. MLflow allows users to log parameters, metrics, and artifacts for each run of an experiment, providing a record of what was done and allowing for exact replication of the results. It also supports packaging code into reproducible runs and sharing it with others, further ensuring the repeatability of experiments.

Ease of use – balancing complexity and usability

Databricks provides a collaborative workspace that enables data scientists and engineers to work together seamlessly. It offers interactive notebooks with support for multiple languages (Python, R, SQL, and Scala) in a single notebook, allowing users to use their preferred language. The platform’s intuitive interface, coupled with extensive documentation and a robust API, makes it user-friendly, enabling users to focus more on ML tasks rather than the complexities of platform management. In addition to its collaborative and analytical capabilities, Databricks integrates with various data sources, storage systems, and cloud platforms, making it flexible and adaptable to different data ecosystems. It supports seamless integration with popular data lakes, databases, and cloud storage services, enabling users to easily access and process data from multiple sources. Although this book specifically focuses on the ML and MLOps capabilities of Databricks, it makes sense to understand what the Databricks Lakehouse architecture is and how it simplifies scaling and managing ML project life cycles for organizations.

Lakehouse, as a term, combines two terms: data lake and data warehouse. Data warehouses are great at handling structured data and SQL queries. They are extensively used for powering business intelligence (BI) applications but have limited support for ML. They store data in proprietary formats, and the data can only be accessed using SQL queries.

Data lakes, on the other hand, do a great job of supporting ML use cases. A data lake allows organizations to store large amounts of structured and unstructured data in a central, scalable store. Data lakes are easy to scale and support open formats. However, they have a significant drawback when it comes to running BI workloads: their performance is not comparable to that of data warehouses. Moreover, the lack of schema enforcement and governance has turned many organizations' data lakes into data swamps.

Typically, modern enterprise architectures need both. This is where Databricks defined the Lakehouse architecture, delivered as a unified analytics platform called the Databricks Lakehouse Platform. It is a single, persona-based platform that caters to everyone involved in processing data and gaining insights from it, including data engineers, BI analysts, data scientists, and MLOps engineers. This can tremendously simplify the data processing and analytics architecture of any organization.

At the time of writing this book, the Lakehouse Platform is available on all three major clouds: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).

A lakehouse can be thought of as a technology that combines the performance and data governance of data warehouses with the scale of data lakes. Under the hood, Lakehouse uses an open protocol called Delta (https://delta.io/).

The Delta format adds reliability, performance, and governance to the data in data lakes. Delta provides Atomicity, Consistency, Isolation, and Durability (ACID) transactions, ensuring that data operations either fully succeed or fail as a whole. Under the hood, Delta stores data in the Parquet format, but unlike plain Parquet, it maintains a transaction log, which enables these enhanced capabilities. It also supports granular access controls to your data, along with versioning and the ability to roll back to previous versions. Delta tables scale effortlessly with data, are underpinned by Apache Spark, and use advanced indexing and caching to improve performance at scale. The official website describes many more benefits of the Delta format.

When we say Delta Lake, we mean a data lake that uses the Delta format to provide the benefits described previously.

The Databricks Lakehouse architecture is built on the foundation of Delta Lake:

Figure 1.5 – Databricks Lakehouse Platform


Note

Source: Courtesy of Databricks

Next, let’s discuss how the Databricks Lakehouse architecture can simplify ML.

Simplifying machine learning development with the Lakehouse architecture

As we saw in the previous section, the Databricks Lakehouse Platform provides a cloud-native enterprise-ready solution that simplifies the data processing needs of an organization. It provides a single platform that enables different teams across enterprises to collaborate and reduces time to market for new projects.

The Lakehouse Platform has many components specific to data scientists and ML practitioners; we will cover these in more detail later in this book. For instance, at the time of writing this book, the Lakehouse Platform offers a drop-down menu that lets users switch between persona-based views. In the ML practitioner view, there are tabs for quickly accessing the fully integrated and managed feature store, model registry, and MLflow tracking server:

Figure 1.6 – Databricks Lakehouse Platform persona selection dropdown


With that, let’s summarize this chapter.

 

Summary

In this chapter, we learned about ML, including the ML process, the personas involved, and the challenges organizations face in productionizing ML models. Then, we learned about the Lakehouse architecture and how the Databricks Lakehouse Platform can potentially simplify MLOps for organizations. These topics give us a solid foundation to develop a more profound understanding of how different Databricks ML-specific tools fit in the ML life cycle.

For in-depth learning about the various features and staying up to date with announcements, the Databricks documentation is the ideal resource. You can access the documentation via the link provided in the Further reading section. Moreover, on the documentation page, you can easily switch to different cloud-specific documentation to explore platform-specific details and functionalities.

In the next chapter, we will dive deeper into the ML-specific features of the Databricks Lakehouse Platform.

 

Further reading

To learn more about the topics that were covered in this chapter, take a look at the following resources:

About the Author
  • Debu Sinha

    Debu is an experienced Data Science and Engineering leader with deep expertise in Software Engineering and Solutions Architecture. With over 10 years in the industry, Debu has a proven track record in designing scalable Software Applications, Big Data, and Machine Learning systems. As Lead ML Specialist on the Specialist Solutions Architect team at Databricks, Debu focuses on AI/ML use cases in the cloud and serves as an expert on LLMs, Machine Learning, and MLOps. With prior experience as a startup co-founder, Debu has demonstrated skills in team-building, scaling, and delivering impactful software solutions. An established thought leader, Debu has received multiple awards and regularly speaks at industry events.
