Mastering Predictive Analytics with scikit-learn and TensorFlow: Implement machine learning techniques to build advanced predictive models using Python

By Alvaro Fuentes


Ensemble Methods for Regression and Classification

Business enterprises widely use advanced analytical tools to solve problems with data. The goal of these tools is to analyze data and extract the relevant information that can be used to solve problems or to improve the performance of some aspect of the business. This also involves various machine learning algorithms, with which we can build predictive models for better results.

In this chapter, we are going to explore a simple idea that can drastically improve the performance of basic predictive models.

We are going to cover the following topics in this chapter:

  • Ensemble methods and how they work
  • Ensemble methods for regression
  • Ensemble methods for classification

Ensemble methods and how they work

Ensemble methods are based on a very simple idea: instead of using a single model to make a prediction, we use many models and then aggregate their predictions in some way. Having different models is like having different points of view, and it has been demonstrated that, by aggregating models that offer different points of view, predictions can be made more accurate. These methods also improve generalization over a single model because they reduce the risk of selecting a single poorly performing classifier:

In the preceding diagram, each object belongs to one of three classes: triangles, circles, or squares. In this simplified example, we have two features with which to separate or classify the objects into the different classes. The three classifiers shown represent different approaches and produce different kinds of decision boundaries.

Ensemble learning combines all those individual predictions into a single one. The predictions made from combining the three boundaries usually have better properties than the ones produced by the individual models. This is the simple idea behind ensemble methods, also called ensemble learning.

The most commonly used ensemble methods, together with the sampling technique they are built on, are as follows:

  • Bootstrap sampling
  • Bagging
  • Random forests
  • Boosting

Before giving a high-level explanation of these methods, we need to discuss a very important statistical technique known as bootstrap sampling.

Bootstrap sampling

Many ensemble learning methods use a statistical technique called bootstrap sampling. A bootstrap sample of a dataset is another dataset that's obtained by randomly sampling the observations from the original dataset with replacement.

This technique is heavily used in statistics; for example, it is used for estimating the standard errors of sample statistics such as the mean or the standard deviation.

Let's understand this technique better by taking a look at the following diagram:

Let's assume that we have a population of the numbers 1 to 10, which we can consider the original population data. To get a bootstrap sample, we need to draw 10 samples from the original data with replacement. Imagine you have the 10 numbers written on 10 cards in a hat; for the first element of your sample, you take one card at random from the hat, write it down, and put the card back in the hat, and this process goes on until you get 10 elements. This is your bootstrap sample. As you can see in the preceding example, some values repeat; here, 9 appears three times in the bootstrap sample.
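
As a minimal sketch, the same experiment can be run with numpy (the seed is an arbitrary choice):

import numpy as np

rng = np.random.default_rng(42)
population = np.arange(1, 11)  # the ten "cards in the hat"
bootstrap_sample = rng.choice(population, size=10, replace=True)
print(bootstrap_sample)  # some values repeat, others are missing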

Resampling with replacement in this way lets us approximate the sampling distribution of a statistic using only the data we have: by computing the statistic on many bootstrap samples, we can estimate its variability, such as its standard error, without collecting new data from the population.

Bagging

Bagging, also known as bootstrap aggregation, is a general-purpose procedure for reducing the variance of a machine learning model. It is based on the bootstrap sampling technique and is generally used with regression or classification trees, but in principle the bagging technique can be used with any model.

The following steps are involved in the bagging process:

  1. We choose the number of estimators or individual models to use; let's call this parameter B.
  2. We draw B bootstrap samples from the training set, sampling with replacement.
  3. We fit the machine learning model to each of the B bootstrap samples. This way, we get B individual predictors.
  4. We get the ensemble prediction by aggregating all of the individual predictions.

In the regression problem, the most common way to get the ensemble prediction would be to find the average of all of the individual predictions.

In the classification problem, the most common way to get the aggregated prediction is by majority vote. For example, say that we have 100 individual predictors and 80 of them vote for one particular category; then, we choose that category as our aggregated prediction. This is what a majority vote means. A from-scratch sketch of the whole bagging procedure follows.
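
To make the four steps concrete, here is a minimal from-scratch sketch of bagging for regression; the decision-tree base model, B = 10, and aggregation by averaging are illustrative choices, not the book's exact code:

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def bagging_predict(X_train, y_train, X_test, B=10, seed=0):
    # X_train, y_train, and X_test are assumed to be numpy arrays
    rng = np.random.default_rng(seed)
    n = len(X_train)
    all_predictions = np.zeros((B, len(X_test)))
    for b in range(B):
        # Step 2: draw a bootstrap sample of the training set
        idx = rng.integers(0, n, size=n)
        # Step 3: fit one individual predictor on each bootstrap sample
        tree = DecisionTreeRegressor().fit(X_train[idx], y_train[idx])
        all_predictions[b] = tree.predict(X_test)
    # Step 4: aggregate; for regression, average the individual predictions
    return all_predictions.mean(axis=0)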

Random forests

This ensemble method was created specifically for regression and classification trees. It is very similar to bagging: each individual tree is trained on a bootstrap sample of the training dataset. The difference from bagging is that, when splitting a node of a tree, the split that is picked is the best among a random subset of the features only. So, every individual predictor considers a random subset of the features. This makes each individual predictor slightly worse and more biased, but it also decorrelates the individual predictors, and as a result the overall ensemble is generally better than any individual predictor.

Boosting

Boosting is another approach to ensemble learning. There are many boosting methods, but one of the most successful and popular ones is the AdaBoost algorithm, short for adaptive boosting. The core idea behind this algorithm is that, instead of fitting many individual predictors independently, we fit a sequence of weak learners, where each learner depends on the result of the previous one. In the AdaBoost algorithm, every iteration reweights the training samples based on the results of the previous individual learners.

For example, in classification, the basic idea is that misclassified examples gain weight and correctly classified examples lose weight, so the next learner in the sequence focuses more on the misclassified examples. A minimal sketch of this reweighting scheme follows.
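
This is discrete AdaBoost for binary labels in {-1, +1}; the decision stumps, M = 50 rounds, and the missing guard against zero training error are simplifying assumptions:

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, M=50):
    # y is assumed to contain the labels -1 and +1
    n = len(X)
    w = np.full(n, 1.0 / n)  # start with uniform sample weights
    learners, alphas = [], []
    for _ in range(M):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        miss = stump.predict(X) != y
        err = w[miss].sum() / w.sum()          # weighted error rate
        alpha = 0.5 * np.log((1 - err) / err)  # weight of this learner
        # misclassified examples gain weight, correct ones lose weight
        w *= np.exp(alpha * np.where(miss, 1.0, -1.0))
        w /= w.sum()
        learners.append(stump)
        alphas.append(alpha)
    return learners, alphas

def adaboost_predict(X, learners, alphas):
    # weighted vote of the sequence of weak learners
    return np.sign(sum(a * h.predict(X) for a, h in zip(alphas, learners)))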

Ensemble methods for regression

For regression, we will train these different models and later compare their results. In order to test all of these models, we will need a sample dataset. We are going to use it to implement these methods and see how they help the performance of our models.

The diamond dataset

Let's make actual predictions about diamond prices by using different ensemble learning models. We will use the diamonds dataset (which can be found here: https://www.kaggle.com/shivam2503/diamonds). This dataset has the prices, among other features, of almost 54,000 diamonds. The following are the features that we have in this dataset:

  • Feature information: A dataframe with 53,940 rows and 10 variables
  • Price: Price in US dollars

The following are the nine predictive features:

  • carat: The weight of the diamond (0.2-5.01)
  • cut: The quality of the cut (Fair, Good, Very Good, Premium, and Ideal)
  • color: The diamond color, from J (worst) to D (best)
  • clarity: A measurement of how clear the diamond is (I1 (worst), SI2, SI1, VS2, VS1, VVS2, VVS1, IF (best))
  • x: The length of the diamond in mm (0-10.74)
  • y: The width of the diamond in mm (0-58.9)
  • z: The depth of the diamond in mm (0-31.8)
  • depth: The total depth percentage, z / mean(x, y) = 2 * z / (x + y) (43-79)
  • table: The width of the top of the diamond relative to the widest point (43-95)

The x, y, and z variables denote the size of the diamonds.

The libraries that we will use are numpy, matplotlib, and pandas. For importing these libraries, the following lines of code can be used:

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline

The following lines of code load the raw dataset:
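
The book shows this step as a screenshot; a minimal reconstruction follows, assuming the Kaggle file has been downloaded as diamonds.csv (the file includes a row-index column):

diamonds = pd.read_csv('diamonds.csv', index_col=0)
print(diamonds.shape)  # (53940, 10)
diamonds.head()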

The dataset has some numerical features and some categorical features; 53,940 is the exact number of samples that we have. Now, to encode the information in the categorical features, we use the one-hot encoding technique to transform them into dummy features, because scikit-learn only works with numbers.

The following code transforms the categorical features into numbers:
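
A minimal reconstruction of this step, using the three categorical columns documented above:

diamonds = pd.get_dummies(diamonds, columns=['cut', 'color', 'clarity'])
diamonds.head()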

Here, we can see how to do this with the get_dummies function from pandas.

In the resulting dataset, we have one dummy feature for each category of each categorical variable. The value is 1 when the category is present and 0 when it is not present for the particular diamond.

Now, for rescaling the data, we will use the RobustScaler method to transform all the features to a similar scale.

The following code imports the train_test_split function and the RobustScaler method and applies them:
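
A minimal reconstruction of this step; the test size and the random seed are assumptions:

from sklearn.preprocessing import RobustScaler
from sklearn.model_selection import train_test_split

X = diamonds.drop('price', axis=1)
y = diamonds['price']

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=123)

scaler = RobustScaler()
X_train = scaler.fit_transform(X_train)  # fit the scaler on the training set only
X_test = scaler.transform(X_test)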

Here, we extract the features into the X matrix, define the target, use the train_test_split function from scikit-learn to partition the data into two sets, and then rescale both sets with RobustScaler.

Training different regression models

The following code sets up the dataframe that we will use to record the performance of these models. Since this is a regression task, the performance metric we will use is the mean squared error (MSE). In the columns, we have the four models that we will use: KNN, Bagging, RandomForest, and Boosting:
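
A minimal sketch of such a dataframe:

models = ['KNN', 'Bagging', 'RandomForest', 'Boosting']
mse = pd.DataFrame(index=['train_mse', 'test_mse'], columns=models)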

KNN model

The K-Nearest Neighbours (KNN) model is not an ensemble learning model, but it is the best performer among the simple models, so it makes a useful baseline:
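
A reconstruction of this step, using the hyperparameters given in the text:

from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error

knn = KNeighborsRegressor(n_neighbors=20, metric='euclidean')
knn.fit(X_train, y_train)

mse.loc['train_mse', 'KNN'] = mean_squared_error(y_train, knn.predict(X_train))
mse.loc['test_mse', 'KNN'] = mean_squared_error(y_test, knn.predict(X_test))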

Here, we use 20 neighbours and the euclidean metric to measure the distances between the points, and then we train the model. Since the mean squared error is our single performance metric, we save the training and testing values in the mse dataframe.

Bagging model

Bagging is an ensemble learning model, and any estimator can be used with the bagging method. So, let's take a case where we use KNN, as shown in the following code:
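
A reconstruction of this step; note that scikit-learn versions before 1.2 call the estimator parameter base_estimator, and the random seed is an assumption:

from sklearn.ensemble import BaggingRegressor

bagging = BaggingRegressor(
    estimator=KNeighborsRegressor(n_neighbors=20, metric='euclidean'),
    n_estimators=15, random_state=55)
bagging.fit(X_train, y_train)

mse.loc['train_mse', 'Bagging'] = mean_squared_error(y_train, bagging.predict(X_train))
mse.loc['test_mse', 'Bagging'] = mean_squared_error(y_test, bagging.predict(X_test))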

Using the n_estimators parameter, we produce an ensemble of 15 individual estimators. This produces 15 bootstrap samples of the training dataset, and in each of these samples, the algorithm fits a KNN regressor with 20 neighbours. Since this is a regression task, the ensemble prediction is obtained by averaging the 15 individual predictions.

Random forests model

Random forests is another ensemble learning model. We get all of the ensemble learning objects from the ensemble submodule in scikit-learn; here, we use the RandomForestRegressor class. The following code shows this model:
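
A reconstruction of this step, with the 50 trees and max_depth of 16 from the text (the seed is an assumption):

from sklearn.ensemble import RandomForestRegressor

rf = RandomForestRegressor(n_estimators=50, max_depth=16,
                           random_state=55, n_jobs=-1)
rf.fit(X_train, y_train)

mse.loc['train_mse', 'RandomForest'] = mean_squared_error(y_train, rf.predict(X_train))
mse.loc['test_mse', 'RandomForest'] = mean_squared_error(y_test, rf.predict(X_test))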

Here, we produce a forest of 50 individual trees, each with a max_depth of 16; the individual predictions are then aggregated by averaging, as this is a regression task.

Boosting model

Boosting is also an ensemble learning model. Here, we use the AdaBoostRegressor model, and we again produce 50 estimators. The following code shows this model:
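
A reconstruction of this step (the seed is an assumption):

from sklearn.ensemble import AdaBoostRegressor

boosting = AdaBoostRegressor(n_estimators=50, random_state=55)
boosting.fit(X_train, y_train)

mse.loc['train_mse', 'Boosting'] = mean_squared_error(y_train, boosting.predict(X_train))
mse.loc['test_mse', 'Boosting'] = mean_squared_error(y_test, boosting.predict(X_test))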

After training all of these models, the mse dataframe holds the train_mse and test_mse results for each of them.

The following code compares all of these models on the basis of their test mean squared error, with the result shown as a horizontal bar graph:
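
A sketch of the comparison plot:

fig, ax = plt.subplots()
mse.loc['test_mse'].astype(float).sort_values().plot(kind='barh', ax=ax)
ax.set_xlabel('Test MSE')
ax.set_title('Test MSE comparison of the four models')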

When we compare the results of all of these models, we can see that the random forest model is the most successful, with the bagging and KNN models coming second and third, respectively. Note that bagging the KNN model improved on plain KNN, which is exactly what ensembling is meant to do.

The following code produces a graphical comparison of the predicted prices against the observed prices in the testing dataset, showing the performance of the random forest model:
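
A sketch of the predicted-versus-observed plot for the random forest model:

y_pred_rf = rf.predict(X_test)

fig, ax = plt.subplots()
ax.scatter(y_test, y_pred_rf, s=2, alpha=0.3)
ax.plot([0, y_test.max()], [0, y_test.max()], color='red')  # perfect-prediction line
ax.set_xlabel('Observed prices')
ax.set_ylabel('Predicted prices')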

By calling this model's predict method again, we can get individual predictions.

For example, let's predict the values for the first ten diamonds in the testing dataset. The following code shows the predictions made by the random forest model next to the real prices of these diamonds:
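
A sketch of this comparison:

comparison = pd.DataFrame({
    'Real price': y_test.values[:10],
    'Predicted price': rf.predict(X_test[:10]).round(1)})
comparison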

From this comparison, we can see that the Real price and Predicted price values are very close, both for the expensive and the inexpensive diamonds.

Using ensemble methods for classification

We are now familiar with the basic concept of ensemble learning and ensemble methods, so let's put these methods to use in building classification models and compare the results they produce. To test all of these methods, we will need another sample dataset.

Predicting a credit card dataset

Let's take an example of a credit card dataset. This dataset comes from a financial institution in Taiwan and can be found here: https://www.kaggle.com/uciml/default-of-credit-card-clients-dataset.

Here, we have the following detailed information about each customer:

  • The limit balance, that is, the credit limit provided to the customer
  • A few features with personal information about each customer, such as gender, education, marital status, and age
  • The history of past payments
  • The bill statement amounts
  • The payment amounts made by the customer, from the previous month up to six months prior

With this information, we are going to predict the customer's payment status for the next month. We will first apply a few transformations to these features to make them easier to interpret.

In this case, the positive class will be the default, so the number 1 represents the customers that fall under the default status category and the number 0 represents the customers who have paid their credit card dues.

Now, before we start, we need to import the required libraries by running a few commands, as shown in the following code snippet:

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline

The following code prepares the credit card dataset:
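
The book shows the preparation as a screenshot; the following reconstruction assumes the Kaggle file UCI_Credit_Card.csv and its documented column names:

default = pd.read_csv('UCI_Credit_Card.csv', index_col='ID')
default.rename(columns=lambda x: x.lower(), inplace=True)
default.rename(columns={'default.payment.next.month': 'default',
                        'pay_0': 'pay_1'}, inplace=True)

# Dummy features for education, sex, and marriage
default['grad_school'] = (default['education'] == 1).astype(int)
default['university'] = (default['education'] == 2).astype(int)
default['high_school'] = (default['education'] == 3).astype(int)
default['male'] = (default['sex'] == 1).astype(int)
default['married'] = (default['marriage'] == 1).astype(int)
default.drop(['education', 'sex', 'marriage'], axis=1, inplace=True)

# Simplify the pay_i features: values of 0 or less mean "not delayed"
pay_features = ['pay_' + str(i) for i in range(1, 7)]
for p in pay_features:
    default.loc[default[p] <= 0, p] = 0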

We produce dummy features for education: grad_school, university, and high_school. Instead of the sex column, we use a male dummy feature, and instead of marriage, we use a married feature, which is given the value 1 when the person is married and 0 otherwise. For the pay_i features, we apply a small simplification: a positive value means that the customer was late on that month's payment by that many months, while a value of 0 or less means the payment was not delayed, so we set all non-positive values to 0. For example, the customer with an ID of 1 delayed his/her payments for the first two months but was not delayed on the payment from 3 months ago.

The last thing we will do before fitting our models is rescale all of the features, because they are on very different scales; for example, limit_bal is on a very different scale than age.

This is why we will be using the RobustScaler method from scikit-learn to transform all the features to a similar scale:
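
A reconstruction of this step; the test size, seed, and stratification are assumptions:

from sklearn.preprocessing import RobustScaler
from sklearn.model_selection import train_test_split

target_name = 'default'
robust_scaler = RobustScaler()
X = robust_scaler.fit_transform(default.drop(target_name, axis=1))
y = default[target_name]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.15, random_state=123, stratify=y)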

As we can see in the last line of the preceding code, we partition our dataset into a training set and a testing set. Below that, the CMatrix function is used to print the confusion matrix for each model. This function is defined in the following code snippet:

def CMatrix(CM, labels=['pay', 'default']):
    # Wrap a confusion matrix in a labelled dataframe with row/column totals
    df = pd.DataFrame(data=CM, index=labels, columns=labels)
    df.index.name = 'TRUE'
    df.columns.name = 'PREDICTION'
    df.loc['Total'] = df.sum()
    df['Total'] = df.sum(axis=1)
    return df

Training different classification models

The following code creates the dataframe in which we are going to save performance. We are going to run four models, namely logistic regression, bagging, random forest, and boosting:
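
A minimal sketch of the dataframe:

metrics = pd.DataFrame(
    index=['accuracy', 'precision', 'recall'],
    columns=['LogisticReg', 'Bagging', 'RandomForest', 'Boosting'])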

We are going to use the following evaluation metrics in this case:

  • accuracy: This metric measures how often the model predicts defaulters and non-defaulters correctly
  • precision: Of the customers the model predicts will default, this metric measures the proportion that actually default
  • recall: This metric measures the proportion of actual defaulters that the model correctly identifies

The most important of these is the recall metric. The reason behind this is that we want to maximize the proportion of actual defaulters that the model identifies, and so the model with the best recall is selected.

Logistic regression model

As usual in scikit-learn, we import the object, instantiate the estimator, and then pass the training set X and training set y to the fit() method. We then predict on the test dataset and produce the accuracy, precision, and recall scores. The following code trains the model and prints the confusion matrix as the output:
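
A reconstruction of this step (the raised max_iter is an assumption, added to help convergence):

from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, confusion_matrix)

logistic_regression = LogisticRegression(max_iter=1000)
logistic_regression.fit(X_train, y_train)
y_pred_test = logistic_regression.predict(X_test)

metrics.loc['accuracy', 'LogisticReg'] = accuracy_score(y_test, y_pred_test)
metrics.loc['precision', 'LogisticReg'] = precision_score(y_test, y_pred_test)
metrics.loc['recall', 'LogisticReg'] = recall_score(y_test, y_pred_test)

CMatrix(confusion_matrix(y_test, y_pred_test))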

As shown, we save these scores into the pandas dataframe that we just created.

Bagging model

To train the bagging model, we import the bagging classifier and use logistic regression as the base estimator. We fit 10 of these logistic regression models and then combine the 10 individual predictions into a single prediction using bagging. After that, we save the scores into our metrics dataframe.

The following code shows this, with the confusion matrix as the output:
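
A reconstruction of this step (as before, older scikit-learn versions call the estimator parameter base_estimator, and the seed is an assumption):

from sklearn.ensemble import BaggingClassifier

bagging_clf = BaggingClassifier(
    estimator=LogisticRegression(max_iter=1000),
    n_estimators=10, random_state=55)
bagging_clf.fit(X_train, y_train)
y_pred_test = bagging_clf.predict(X_test)

metrics.loc['accuracy', 'Bagging'] = accuracy_score(y_test, y_pred_test)
metrics.loc['precision', 'Bagging'] = precision_score(y_test, y_pred_test)
metrics.loc['recall', 'Bagging'] = recall_score(y_test, y_pred_test)

CMatrix(confusion_matrix(y_test, y_pred_test))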

Random forest model

To perform classification with the random forest model, we have to import the RandomForestClassifier class. For example, let's take 35 individual trees with a max_depth of 20 for each tree. The max_features parameter tells scikit-learn that, when deciding upon the best split, it should consider a random subset of features whose size is the square root of the total number of features. These are all hyperparameters that we can tune.

The following code shows this, with the confusion matrix as the output:
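
A reconstruction of this step (the seed is an assumption):

from sklearn.ensemble import RandomForestClassifier

rf_clf = RandomForestClassifier(n_estimators=35, max_depth=20,
                                max_features='sqrt',
                                random_state=55, n_jobs=-1)
rf_clf.fit(X_train, y_train)
y_pred_test = rf_clf.predict(X_test)

metrics.loc['accuracy', 'RandomForest'] = accuracy_score(y_test, y_pred_test)
metrics.loc['precision', 'RandomForest'] = precision_score(y_test, y_pred_test)
metrics.loc['recall', 'RandomForest'] = recall_score(y_test, y_pred_test)

CMatrix(confusion_matrix(y_test, y_pred_test))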

Boosting model

For classification with the boosting model, we'll use the AdaBoostClassifier object, and we'll again use 50 estimators to combine the individual predictions. The learning rate that we will use here is 0.1, which is another hyperparameter of this model.

The following code shows this, with the confusion matrix as the output:
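
A reconstruction of this step (the seed is an assumption):

from sklearn.ensemble import AdaBoostClassifier

boosting_clf = AdaBoostClassifier(n_estimators=50, learning_rate=0.1,
                                  random_state=55)
boosting_clf.fit(X_train, y_train)
y_pred_test = boosting_clf.predict(X_test)

metrics.loc['accuracy', 'Boosting'] = accuracy_score(y_test, y_pred_test)
metrics.loc['precision', 'Boosting'] = precision_score(y_test, y_pred_test)
metrics.loc['recall', 'Boosting'] = recall_score(y_test, y_pred_test)

CMatrix(confusion_matrix(y_test, y_pred_test))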

Now, we will compare the four models, as shown in the following code:
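
A sketch of the comparison plot:

fig, ax = plt.subplots()
metrics.astype(float).plot(kind='barh', ax=ax)
ax.set_title('Comparison of the four classification models')
ax.grid()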

The resulting plot shows similar accuracies for the four models, but the most important metric for this particular application is the recall metric.

The comparison shows that the model with the best recall and accuracy is the random forest model.

This suggests that the random forest model is the best of these models overall.

To see the relationship between precision, recall, and the classification threshold, we can use the precision_recall_curve function from scikit-learn. We pass the real observed values and the predicted probabilities, and we get back the objects that will allow us to plot the precision-recall curve.

The following code uses the precision_recall_curve function from scikit-learn:
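
A reconstruction of this step, using the predicted probabilities of the positive (default) class:

from sklearn.metrics import precision_recall_curve

precision_lr, recall_lr, thresholds_lr = precision_recall_curve(
    y_test, logistic_regression.predict_proba(X_test)[:, 1])
precision_rf, recall_rf, thresholds_rf = precision_recall_curve(
    y_test, rf_clf.predict_proba(X_test)[:, 1])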

The following code visualizes the relationship between precision and recall for the random forest model and the logistic regression model:
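
A sketch of the plot:

fig, ax = plt.subplots()
ax.plot(recall_lr, precision_lr, label='LogisticReg')
ax.plot(recall_rf, precision_rf, label='RandomForest')
ax.set_xlabel('Recall')
ax.set_ylabel('Precision')
ax.legend()
ax.grid()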

The resulting plot shows that the random forest model is better because its curve lies above the logistic regression curve; for a precision of 0.30, we get more recall with the random forest model than with the logistic regression model.

To tune the performance of the RandomForestClassifier, we can change the classification threshold. For example, if we set a classification threshold of 0.12, we get a precision of about 0.30 and a recall of about 0.84. The model will then correctly identify 84% of the actual defaulters, which can be very useful for a financial institution, and it again shows that the random forest model outperforms the logistic regression model for this application.

The following code shows this, along with the confusion matrix:
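
A sketch of this step:

y_pred_proba = rf_clf.predict_proba(X_test)[:, 1]
y_pred_test = (y_pred_proba >= 0.12).astype(int)  # classification threshold of 0.12

print('Recall:', recall_score(y_test, y_pred_test))
print('Precision:', precision_score(y_test, y_pred_test))
CMatrix(confusion_matrix(y_test, y_pred_test))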

Feature importance is a very useful by-product of the random forest model. scikit-learn calculates a feature importance metric for each of the features that we use in our model, which gives us a measure of how much each feature contributes to the predictions.

The following code visualizes these feature importances:
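
A sketch of the plot; the feature names are taken from the prepared dataframe, since the scaled X matrix is a plain numpy array:

importances = pd.Series(rf_clf.feature_importances_,
                        index=default.drop('default', axis=1).columns)

fig, ax = plt.subplots()
importances.sort_values().plot(kind='barh', ax=ax)
ax.set_title('Feature importances (random forest)')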

The most important feature for predicting whether a customer will default next month is pay_1, that is, whether the customer delayed last month's payment. The next most important features are the bill amounts of the two most recent months, followed by age.

The features that matter least for predicting the target are the gender, marital status, and education level of the customer.

Overall, the random forest model has proved to be better than the logistic regression model.

According to the no free lunch theorem, there is no single model that works best for every problem on every dataset. This means that ensemble learning cannot always outperform simpler methods, because sometimes simpler methods perform better than complex ones. So, for every machine learning problem, we should evaluate both simple and complex approaches and pick whichever performs best.

Summary

In this chapter, we introduced different ensemble methods, such as bagging, random forests, and boosting, along with the bootstrap sampling technique they rely on, and explained how they work with the help of some examples. We then used them for regression and classification. For regression, we took the example of a diamonds dataset and trained KNN and several ensemble regression models, then compared their performance. For classification, we took the example of a credit card dataset, where we again trained all of the classification models, compared their performance, and found that the random forest model was the best performer.

In the next chapter, we will study k-fold cross-validation and parameter tuning. We will compare different ensemble learning models with k-fold cross-validation and later, we'll use k-fold cross-validation for hyperparameter tuning.


Key benefits

  • Use ensemble methods to improve the performance of predictive analytics models
  • Implement feature selection, dimensionality reduction, and cross-validation techniques
  • Develop neural network models and master the basics of deep learning

Description

Python is a programming language that provides a wide range of features that can be used in the field of data science. Mastering Predictive Analytics with scikit-learn and TensorFlow covers various implementations of ensemble methods, how they are used with real-world datasets, and how they improve prediction accuracy in classification and regression problems. This book starts with ensemble methods and their features. You will see that scikit-learn provides tools for choosing hyperparameters for models. As you make your way through the book, you will cover the nitty-gritty of predictive analytics and explore its features and characteristics. You will also be introduced to artificial neural networks and TensorFlow, and how it is used to create neural networks. In the final chapter, you will explore factors such as computational power, along with improvement methods and software enhancements for efficient predictive analytics. By the end of this book, you will be well-versed in using deep neural networks to solve common problems in big data analysis.

What you will learn

  • Use ensemble algorithms to obtain accurate predictions
  • Apply dimensionality reduction techniques to combine features and build better models
  • Choose the optimal hyperparameters using cross-validation
  • Implement different techniques to solve current challenges in the predictive analytics domain
  • Understand various elements of deep neural network (DNN) models
  • Implement neural networks to solve both classification and regression problems


Product Details


Publication date : Sep 29, 2018
Length : 154 pages
Edition : 1st
Language : English
ISBN-13 : 9781789617740

Table of Contents

7 Chapters
Preface
Ensemble Methods for Regression and Classification
Cross-validation and Parameter Tuning
Working with Features
Introduction to Artificial Neural Networks and TensorFlow
Predictive Analytics with TensorFlow and Deep Neural Networks
Other Books You May Enjoy

