SQL Query Design Patterns and Best Practices

SQL Query Design Patterns and Best Practices: A practical guide to writing readable and maintainable SQL queries using its design patterns

By Steve Hughes, Dennis Neer, Dr. Ram Babu Singh, Shabbir H. Mala, Leslie Andrews, Chi Zhang
Rating: 5 (1 rating)
Book | Mar 2023 | 270 pages | 1st Edition
eBook: $27.99 $9.99
Print: $34.99
Subscription: $15.99 Monthly


Product Details

Publication date: Mar 31, 2023
Length: 270 pages
Edition: 1st
Language: English
ISBN-13: 9781837633289

SQL Query Design Patterns and Best Practices

Reducing Rows and Columns in Your Result Sets

Today, the sources of data available to a data analyst have grown to the point that the amount of data on hand is effectively unlimited. The challenge an analyst faces is generating a result set that is manageable and contains the information needed for their reports and analysis. If there is too much data, the result set becomes unmanageable and unusable due to information overload; if there is too little, the data will have gaps and the end user will lose trust in it.

In this chapter, we will review how to determine how much data, and which data, you should keep in your result set, and how to filter that data efficiently. We will also review how to determine which columns to keep and how to select them efficiently. The chapter will then wrap up with a short discussion of how these activities impact future data aggregations.

By the end of this chapter, you will understand how to identify the data and columns that you need and the most efficient method for getting the data into a usable result set that can easily be recreated.

In this chapter, we will cover the following main topics:

  • Identifying data to be removed from the dataset
  • Understanding the value of creating views versus removing data
  • Exploring the impact of row and column reductions on aggregations

Technical requirements

To work with the examples and illustrations in this chapter, you will need to have SQL Server Management Studio installed. We will be working with the WideWorldImportersDW database on SQL Server. Please refer to the Appendix for tool installation and database restoration guidance.
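
If you need to restore the database yourself, the following is a minimal sketch rather than the book's procedure (the Appendix covers that in full). It assumes you have downloaded Microsoft's sample backup, typically named WideWorldImportersDW-Full.bak, and the file paths shown are placeholders to adjust for your environment:

RESTORE FILELISTONLY
FROM DISK = N'C:\Backups\WideWorldImportersDW-Full.bak';  -- placeholder path; lists the logical file names inside the backup

RESTORE DATABASE WideWorldImportersDW
FROM DISK = N'C:\Backups\WideWorldImportersDW-Full.bak'   -- same placeholder path
WITH MOVE 'LogicalDataFileName' TO N'C:\Data\WideWorldImportersDW.mdf',  -- substitute the logical names returned above;
     MOVE 'LogicalLogFileName'  TO N'C:\Data\WideWorldImportersDW.ldf',  -- add a MOVE clause for every file listed
     RECOVERY;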

You will find the code from this chapter on GitHub: https://github.com/PacktPublishing/SQL-Query-Design-Best-Practices/tree/main/Chapter01

Identifying data to remove from the dataset

With the amount of data that is freely available today, databases are getting larger and larger, which makes it a challenge for data analysts to analyze the data properly. A key part of that challenge is determining what data is required to produce a dataset that provides the information relevant to the analysis. In this chapter, you will learn how to reduce the number of rows and columns contained in a dataset without losing the information the analysis needs.

To do this, you first need to understand what data is required by interviewing the people requesting the result set. The interview process will help you understand what the requester wants to accomplish and give you an idea of where to find the data and which database and tables contain the required information. With this knowledge, you will need to perform some initial analysis of the data in those tables to determine how many rows and which columns are needed. This is often done with simple queries that perform row counts and describe tables. The following are examples of the types of queries that may be used.

The following is a query to get an idea of the data in a table:

SELECT TOP (1000) *
  FROM database.schema.table;

This type of query will give you an idea of what data is available in a particular table by showing you up to the first 1,000 rows in the table, and if the table has fewer than 1,000 rows, it will show you all of the rows in the table.

The following query will show you all of the tables and their columns in a particular schema:

SELECT Table_name AS [Table],
       Column_name AS [Column],
       Table_catalog AS [Database],
       Table_schema AS [Schema]
FROM   information_schema.columns
WHERE  Table_schema = 'Fact'
ORDER BY Table_name, Column_name;

This type of query reads the system catalog views in the database to return the names of all of the columns that each table in the schema contains. The view that we are interested in is information_schema.columns. With this information, you can determine which columns are available for you to use.
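
If you also want each column's data type, a quick complementary check (not shown in the book) is the sp_help system procedure, which reports a single table's columns, data types, and other metadata. The sketch below assumes the Fact.Order table that we will use later in this chapter:

EXEC sp_help N'Fact.[Order]';  -- lists columns, data types, identity, and index information for the table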

With this information, let’s look at an example of solving the following sample request, which was determined by interviewing a user. For the examples in this chapter, we will assume that the interview resulted in the following requirement:

We want to be able to analyze the number of orders that result in a back-order item being created, by year and month, and how many customers were impacted.

How do we go about this? Let’s find out in the following sections.

Reducing the amount of data

We start by determining which tables seem to contain the data that is required as just described:

SELECT Table_name AS [Table],
       Column_name AS [Column],
       Table_catalog AS [Database],
       Table_schema AS [Schema]
FROM information_schema.columns
WHERE Table_schema = 'Fact'
  AND Table_name = 'Order'
ORDER BY Table_name, Column_name;

Figure 1.1 shows the results of the query:

Figure 1.1 – Results of the query to show all columns in a table

Based on the results, the Fact.Order table is a good candidate to start with, so let’s run the following query:

SELECT TOP (1000) *
  FROM [WideWorldImportersDW].[Fact].[Order];

Figure 1.2 shows the results of this query:

Figure 1.2 – Sample data and columns from the Fact.Order query

This query shows us that there are 19 columns, and of those columns, we are only interested in Order Date Key, Order Key, Stock Item Key, Customer Key, WWI Order ID, and WWI Backorder ID. So this is, in fact, the table that we should be using. Now that we have identified the table that we want to use, let’s run the following query to see how much data is in the table:

SELECT COUNT(*)
FROM [WideWorldImportersDW].[Fact].[Order];
The results show that there are 231,412 rows of data in the table, so how do we reduce the amount of data required in the result set? The answer is that we do some more analysis: instead of keeping all the columns in the table, we will include only the columns that are needed, as identified earlier. We also know that we are only interested in orders with a back-ordered item. So, let’s run the following query and see how many records remain:

SELECT COUNT(*)
FROM [WideWorldImportersDW].[Fact].[Order]
WHERE [WWI Backorder ID] IS NOT NULL;

The result from this query shows that we have reduced the size to 20,593 records; this is a much more manageable size to work with. The query that will get us the initial result set is as follows:

SELECT [Order Date Key] AS "Order Date",
       [Order Key] AS "Order",
       [Stock Item Key] AS "Stock Item",
       [Customer Key] AS "Customer",
       [WWI Order ID] AS "WWI Order",
       [WWI Backorder ID] AS "WWI Backorder"
FROM [WideWorldImportersDW].[Fact].[Order]
WHERE [WWI Backorder ID] IS NOT NULL;

To explain what you have just done with this query, let’s break down each step.

The names after SELECT are the columns that you want the query to return in the result set. In this case, you are returning Order Date Key, Order Key, Stock Item Key, Customer Key, WWI Order ID, and WWI Backorder ID.

With these columns, you have a date field with which to analyze the data by month, Order Key allows you to see how many distinct orders are impacted, Stock Item Key tells you which items in an order have been back-ordered, Customer Key tells you which customers have been impacted by the backorders, and WWI Order ID lets you determine how many orders have been impacted. WWI Backorder ID is included for when you want to see how many backorders are in the system.

FROM tells the query where to get the data from; in this case, from the WideWorldImportersDW database using the Fact.Order table.

The most important part is the WHERE clause; this is the part of the code that reduces the result set to a manageable size. After all, you are not interested in all the orders in the table, only the orders that have an item on backorder. Figure 1.3 shows what the result set will look like:

Figure 1.3 – Sample result set

Since you are interested in the data by month, you will want to replace the Order Date Key column with a column for the year and a column for the month, as follows:

SELECT YEAR([Order Date Key]) AS "Order Year",
       MONTH([Order Date Key]) AS "Order Month",
       [Order Key] AS "Order",
       [Stock Item Key] AS "Stock Item",
       [Customer Key] AS "Customer",
       [WWI Order ID] AS "WWI Order",
       [WWI Backorder ID] AS "WWI Backorder"
FROM [WideWorldImportersDW].[Fact].[Order]
WHERE [WWI Backorder ID] IS NOT NULL;

You now have the result set shown in Figure 1.4, and you are ready to answer the question that came from the interview with the user:

Figure 1.4 – Result set totaling by month

Now that you have learned how to get the data you require for analysis, we will look at how to make that query reusable and then discuss the impact this filtering has on the aggregations you may want to perform in your analysis.

Understanding the value of creating views versus removing data

You have now learned how to create a query that produces a result set you can use for analysis and to answer questions for a user. The next challenge is making this reusable so that you do not have to recreate the query every time you need the same data for another analysis. The more complex the query gets, the more likely it is to be mistyped when it is recreated. The solution to this challenge is to create a view. A view saves the query as a logical table so that anybody with access to the database can run it, and if you move on to another opportunity, the next person can recreate the result set with very little effort.

So, how do you create a view? It is as simple as adding the following line to the beginning of the SELECT query:

CREATE VIEW <name of the view> AS

Here is how the query that we created earlier looks when turned into a view:

CREATE VIEW v_Backorders AS
SELECT YEAR([Order Date Key]) AS "Order Year",
       MONTH([Order Date Key]) AS "Order Month",
       [Order Key] AS "Order",
       [Stock Item Key] AS "Stock Item",
       [Customer Key] AS "Customer",
       [WWI Order ID] AS "WWI Order",
       [WWI Backorder ID] AS "WWI Backorder"
FROM [WideWorldImportersDW].[Fact].[Order]
WHERE [WWI Backorder ID] IS NOT NULL;

Now you can run the analysis query as follows:

SELECT [Order Year],
       [Order Month],
       [Order],
       [Stock Item],
       [Customer],
       [WWI Order],
       [WWI Backorder]
FROM [dbo].[v_Backorders];

In Figure 1.5, you will notice that the results are the same as those you saw previously, and you do not have to include the filter because it is already part of the view:

Figure 1.5 – Result set using a view

Once the initial query has been created, this saves you the time of having to recreate it in the future, and you can be assured that the data is correct. Most things that you can do in a query can also be done in a view, and you can use the view as though it were a table, selecting columns from it just as you would from the underlying table.
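
For example, here is a minimal sketch (not from the book) that treats the view as a table, selecting only a few of its columns and applying a further filter; the year used is just an arbitrary illustration:

SELECT [Order Year],
       [Order Month],
       [Customer]
FROM [dbo].[v_Backorders]
WHERE [Order Year] = 2015;  -- additional filter applied on top of the view's own filter; 2015 is an example value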

Now let’s look at how this filtering impacts any aggregations that you may plan to do with the result set.

Exploring the impact of row and column reductions on aggregations

Now that you know how to reduce the number of rows and specify the columns you need in your result set, let’s talk about the impact this has on any aggregations you may be interested in.

First of all, based on this result set, you can view the number of backorders by any combination of columns. For example, to see the number of backorders based on year and month, you could use the following query:

SELECT YEAR([Order Date Key]) AS "Order Year",
       MONTH([Order Date Key]) AS "Order Month",
       COUNT([WWI Backorder ID]) AS "Number of backorders",
       COUNT(DISTINCT [Customer Key]) AS "Impacted Customers",
       COUNT([Order Key]) AS "Number of orders"
FROM [WideWorldImportersDW].[Fact].[Order]
WHERE [WWI Backorder ID] IS NOT NULL
GROUP BY YEAR([Order Date Key]),
         MONTH([Order Date Key])
ORDER BY YEAR([Order Date Key]),
         MONTH([Order Date Key]);

You could also run the following query using the view that you created, and you will get the same results:

SELECT [Order Year],
       [Order Month],
       COUNT([WWI Backorder]) AS "Number of backorders",
       COUNT(DISTINCT [Customer]) AS "Number of impacted customers",
       COUNT([Order]) AS "Number of orders"
FROM v_Backorders
GROUP BY [Order Year],
         [Order Month];

Figure 1.6 shows a subset of the results from the query:

Figure 1.6 – Subset of results from the earlier queries

On closer investigation, you will notice that the values in the Number of backorders and Number of orders columns are the same. Why is this? The answer is in the filter: it only returns records that have an associated backorder, so you do not get the actual number of orders that have been placed, and any calculations based on that number will not be accurate. To get an accurate number of orders, you will need to count all the orders in the table, not just the orders associated with a backorder. You will also see that the rows come back in a different order for the query that uses the view than for the query that does not. This is because the view-based query has no ORDER BY clause, so SQL Server is free to return the rows in whatever order the execution plan produces them, whereas the first query explicitly orders the results by year and month.
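
To see the gap concretely, here is a minimal sketch (not from the book) that uses conditional aggregation to compare the unfiltered and filtered order counts side by side; the exact numbers will depend on your copy of the database:

SELECT COUNT(DISTINCT [Order Key]) AS [All Orders],
       COUNT(DISTINCT CASE WHEN [WWI Backorder ID] IS NOT NULL
                           THEN [Order Key] END) AS [Backordered Orders]  -- only orders with a backorder
FROM [WideWorldImportersDW].[Fact].[Order];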

This is the impact of using a filter; to get around it, you can add a subquery that does not apply the filter. Here is how you can update your query to get that additional information:

SELECT YEAR(o.[Order Date Key]) AS [Order Year],
       MONTH(o.[Order Date Key]) AS [Order Month],
       COUNT(DISTINCT o.[Customer Key]) AS [Impacted Customers],
       COUNT(DISTINCT o.[Stock Item Key]) AS [Backorder Items],
       COUNT(o.[WWI Backorder ID]) AS [Number of backorders],
       fo.orders,
       fo.Customers
FROM [WideWorldImportersDW].[Fact].[Order] o,
     (SELECT YEAR([Order Date Key]) AS [Order Year],
             MONTH([Order Date Key]) AS [Order Month],
             COUNT(DISTINCT [Order Key]) AS [orders],
             COUNT(DISTINCT [Customer Key]) AS [Customers]
      FROM [WideWorldImportersDW].[Fact].[Order]
      GROUP BY YEAR([Order Date Key]),
               MONTH([Order Date Key])) AS fo
WHERE o.[WWI Backorder ID] IS NOT NULL
  AND YEAR(o.[Order Date Key]) = fo.[Order Year]
  AND MONTH(o.[Order Date Key]) = fo.[Order Month]
GROUP BY YEAR(o.[Order Date Key]),
         MONTH(o.[Order Date Key]),
         fo.orders,
         fo.Customers
ORDER BY YEAR(o.[Order Date Key]),
         MONTH(o.[Order Date Key]);

This will give you the following result set, as seen in Figure 1.7. Notice you now see values in the orders, Impacted Customers, Number of backorders, and Customers columns for each Order Month:

Figure 1.7 – Results of using filters in a query

This can be done for any number of columns as long as the subquery is grouped by the same fields as the main query. To simplify this query, you could very easily create a view of all the orders and then use the views to get the same results with less query development.

Here is a sample of the query to create the order view that you can use in future queries:

CREATE VIEW [dbo].[v_orders] AS
SELECT YEAR([Order Date Key]) AS [Order Year],
       MONTH([Order Date Key]) AS [Order Month],
       COUNT(DISTINCT [Order Key]) AS [orders],
       COUNT(DISTINCT [Customer Key]) AS [Customers]
FROM [WideWorldImportersDW].[Fact].[Order]
GROUP BY YEAR([Order Date Key]),
         MONTH([Order Date Key]);

Here is a sample of the query that uses the two views (v_Backorders and v_orders) that have been created in this chapter:

SELECT o.[Order Year],
       o.[Order Month],
       o.Customers,
       o.orders,
       COUNT(b.[WWI Backorder]) AS [total backorders],
       COUNT(DISTINCT b.[Customer]) AS [impacted customers]
FROM [WideWorldImportersDW].[dbo].[v_Backorders] b,
     [WideWorldImportersDW].[dbo].[v_orders] o
WHERE b.[Order Year] = o.[Order Year]
  AND b.[Order Month] = o.[Order Month]
GROUP BY o.[Order Year],
         o.[Order Month],
         o.Customers,
         o.orders
ORDER BY o.[Order Year],
         o.[Order Month];

The following is the result of the query:

Figure 1.8 – Sample results of using multiple views in a query

So, as you can see, it is easy to reduce the amount of data that you bring into your result set, but sometimes you will need to include other data that has been filtered out. This is where views come in handy, allowing you to bring back data that was filtered out in earlier steps. You will need to use caution when deciding between a view and a subquery, as there can be performance implications; in this example, the data size is small enough that they are negligible.
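
If you want to check the performance trade-off on your own system, one simple approach (not covered in the book) is to turn on SQL Server’s I/O and timing statistics, run each version of the query, and compare the output in the Messages tab; the numbers you see will be specific to your environment:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Run the subquery version and the two-view version here,
-- then compare the logical reads and CPU/elapsed times reported.

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;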

As you can see from this section, there are multiple ways in which you will be able to create result sets and get the same results.

Summary

In this chapter, we discussed how to determine what data you need in your result set to meet your analysis needs.

We started with some simple queries to identify the table that contained the data we needed, and then we queried that table to get a sample of the data it contained. Next, we created a query that produced a result set meeting the needs of the request and showed how it could be turned into a view that makes the query easily reusable. We then wrapped up the chapter by showing how you may need to bring back data that is not available because it was filtered out.

In the next chapter, we will expand upon this knowledge to look at how to efficiently aggregate the data so that your queries will run more efficiently.


Key benefits

  • Examine query design and performance using query plans and indexes
  • Solve business problems using advanced techniques such as common table expressions and window functions
  • Use SQL in modern data platform solutions with JSON and Jupyter notebooks

Description

SQL has been the de facto standard when interacting with databases for decades and shows no signs of going away. Through the years, report developers and data wranglers have had to learn SQL on the fly to meet business needs, so if you are someone who needs to write queries, SQL Query Design Patterns and Best Practices is for you. This book will guide you through making efficient SQL queries by reducing set sizes for effective results. You’ll learn how to format your results to make them easier to consume at their destination. From there, the book will take you through solving complex business problems using more advanced techniques, such as common table expressions and window functions, and advance to uncovering issues resulting from security in the underlying dataset. Armed with this knowledge, you’ll have a foundation for building queries and be ready to shift focus to using tools, such as query plans and indexes, to optimize those queries. The book will go over the modern data estate, which includes data lakes and JSON data, and wrap up with a brief on how to use Jupyter notebooks in your SQL journey. By the end of this SQL book, you’ll be able to make efficient SQL queries that will improve your report writing and the overall SQL experience.

What you will learn

  • Build efficient queries by reducing the data being returned
  • Manipulate your data and format it for easier consumption
  • Form common table expressions and window functions to solve complex business issues
  • Understand the impact of SQL security on your results
  • Understand and use query plans to optimize your queries
  • Understand the impact of indexes on your query performance and design
  • Work with data lake data and JSON in SQL queries
  • Organize your queries using Jupyter notebooks



Table of Contents

21 Chapters
Preface
1. Part 1: Refining Your Queries to Get the Results You Need
2. Chapter 1: Reducing Rows and Columns in Your Result Sets
3. Chapter 2: Efficiently Aggregating Data
4. Chapter 3: Formatting Your Results for Easier Consumption
5. Chapter 4: Manipulating Data Results Using Conditional SQL
6. Part 2: Solving Complex Business and Data Problems in Your Queries
7. Chapter 5: Using Common Table Expressions
8. Chapter 6: Analyze Your Data Using Window Functions
9. Chapter 7: Reshaping Data with Advanced Techniques
10. Chapter 8: Impact of SQL Server Security on Query Results
11. Part 3: Optimizing Your Queries to Improve Performance
12. Chapter 9: Understanding Query Plans
13. Chapter 10: Understanding the Impact of Indexes on Query Design
14. Part 4: Working with Your Data on the Modern Data Platform
15. Chapter 11: Handling JSON Data in SQL Server
16. Chapter 12: Integrating File Data and Data Lake Content with SQL
17. Chapter 13: Organizing and Sharing Your Queries with Jupyter Notebooks
18. Index
19. Other Books You May Enjoy
Appendix: Preparing Your Environment

Customer reviews

Rating distribution: 5 (1 rating)
5 star: 100%
4 star: 0%
3 star: 0%
2 star: 0%
1 star: 0%

Enrico Barillari, Nov 7, 2023 (5 stars)
An interesting and useful book; I recommend it even to those with experience, as a refresher and a supplement.
Feefo verified review

FAQs

What is included in a Packt subscription?

A subscription provides you with full access to view all Packt and licensed content online; this includes exclusive access to Early Access titles. Depending on the tier chosen, you can also earn credits and discounts to use for owning content.

How can I cancel my subscription?

To cancel your subscription, simply go to the account page, found in the top right of the page or at https://subscription.packtpub.com/my-account/subscription. From there, you will see the ‘cancel subscription’ button in the grey box containing your subscription information.

What are credits?

Credits can be earned by reading 40 sections of any title within the payment cycle (a month starting from the day of subscription payment). You also earn a credit every month if you subscribe to our annual or 18-month plans. Credits can be used to buy books DRM-free, the same way that you would pay for a book. Your credits can be found on the subscription homepage, subscription.packtpub.com, by clicking on the ‘My Library’ dropdown and selecting ‘Credits’.

What happens if an Early Access Course is cancelled?

Projects are rarely cancelled, but sometimes it's unavoidable. If an Early Access course is cancelled or excessively delayed, you can exchange your purchase for another course. For further details, please contact us here.

Where can I send feedback about an Early Access title?

If you have any feedback about the product you're reading, or Early Access in general, then please fill out a contact form here and we'll make sure the feedback gets to the right team. 

Can I download the code files for Early Access titles?

We try to ensure that all books in Early Access have code available to use, download, and fork on GitHub. This helps us be more agile in the development of the book, and helps keep the often changing code base of new versions and new technologies as up to date as possible. Unfortunately, however, there will be rare cases when it is not possible for us to have downloadable code samples available until publication.

When we publish the book, the code files will also be available to download from the Packt website.

How accurate is the publication date?

The publication date is as accurate as we can be at any point in the project. Unfortunately, delays can happen. Often those delays are out of our control, such as changes to the technology code base or delays in the tech release. We do our best to give you an accurate estimate of the publication date at any given time, and as more chapters are delivered, the more accurate the delivery date will become.

How will I know when new chapters are ready?

We'll let you know every time there has been an update to a course that you've bought in Early Access. You'll get an email to let you know there has been a new chapter, or a change to a previous chapter. The new chapters are automatically added to your account, so you can also check back there any time you're ready and download or read them online.

I am a Packt subscriber, do I get Early Access?

Yes, all Early Access content is fully available through your subscription. You will need to have a paid for or active trial subscription in order to access all titles.

How is Early Access delivered?

Early Access is currently only available as a PDF or through our online reader. As we make changes or add new chapters, the files in your Packt account will be updated so you can download them again or view them online immediately.

How do I buy Early Access content?

Early Access is a way of us getting our content to you quicker, but the method of buying the Early Access course is still the same. Just find the course you want to buy, go through the check-out steps, and you’ll get a confirmation email from us with information and a link to the relevant Early Access courses.

What is Early Access?

Keeping up to date with the latest technology is difficult; new versions, new frameworks, new techniques. This feature gives you a head start on our content as it is being created. With Early Access you'll receive each chapter as it's written, and get regular updates throughout the product's development, as well as the final course as soon as it's ready.

We created Early Access as a means of giving you the information you need, as soon as it's available. As we go through the process of developing a course, 99% of it can be ready but we can't publish until that last 1% falls into place. Early Access helps to unlock the potential of our content early, to help you start your learning when you need it most. You not only get access to every chapter as it's delivered, edited, and updated, but you'll also get the finalized, DRM-free product to download in any format you want when it's published. As a member of Packt, you'll also be eligible for our exclusive offers, including a free course every day, and discounts on new and popular titles.