Rancher Deep Dive: Manage enterprise Kubernetes seamlessly with Rancher

By Matthew Mattox
Book | Jul 2022 | 408 pages | 1st Edition

Product Details

Publication date: Jul 15, 2022
Length: 408 pages
Edition: 1st Edition
Language: English
ISBN-13: 9781803246093

Rancher Deep Dive

Chapter 1: Introduction to Rancher and Kubernetes

This chapter focuses on the history of Rancher and Kubernetes. We will cover the products and solutions that came before them and how they have evolved into what they are today. By the end of this chapter, you should have a good understanding of the origins of Rancher and Kubernetes and of their core concepts, which is essential for understanding why both projects are designed the way they are.

In this chapter, we're going to cover the following main topics:

  • The history of Rancher Labs as a company
  • Rancher's earlier products
  • What is Rancher's core philosophy?
  • Where did Kubernetes come from?
  • What problem is Kubernetes trying to solve?
  • Comparing Kubernetes with Docker Swarm and OpenShift

The history of Rancher Labs as a company

Rancher Labs was founded in 2014 in Cupertino, California, by Sheng Liang, Shannon Williams, Darren Shepherd, and Will Chan. Rancher began as a container management platform before Kubernetes existed. From the beginning, Rancher was built on the idea that everything should be open source and community-driven. As an open source company, all of the products Rancher Labs has released (including Rancher, RancherOS, RKE, K3s, Longhorn, and more) have been 100% open source. Rancher Labs' flagship product is Rancher, a management and orchestration platform for containerized workloads both on-premises and in the cloud. Rancher can do this because it has always been vendor-neutral; it can deploy a workload on anything from physical hardware in your data center, to cloud VMs in AWS, to a Raspberry Pi in a remote location.

Rancher's earlier products

When Rancher v1.0 was released in March 2016, it only supported Docker Swarm and Rancher Cattle clusters. Docker Swarm was an early cluster orchestration tool that established a number of the core concepts we still use today; for instance, the idea that an application should be defined as a group of containers that can be created and destroyed at any time, and the idea that containers should live on a virtual network that is accessible from all nodes in a cluster. You can expose your containers via a load balancer, which, in the case of Docker Swarm, is just a basic TCP load balancer.

While the Rancher server was being created, Rancher Labs was also working on its own Docker cluster software, called Cattle, which shipped when Rancher went General Availability (GA) with the launch of Rancher v1.0. Cattle was designed to address the limitations of Docker Swarm, which spanned several different areas.

The first was networking. Originally, Docker Swarm's networking overlay was built on Internet Protocol Security (IPsec), with the idea that each node in the cluster would be assigned a subnet (a class C subnet by default). Each node would create an IPsec tunnel to every other node in the cluster and then use basic routing rules to direct traffic to the node hosting a given container. For example, let's say a container on node01 with an IP address of 192.168.11.22 wants to connect to a container hosted on node02 with an IP address of 192.168.12.33. The swarm networking layer uses basic Linux routing to send anything inside the 192.168.12.0/24 subnet to node02 over the IPsec tunnel. This core concept is still used today by the majority of Kubernetes's CNI providers. The main issue is managing the health of these tunnels over time and dealing with compatibility issues between nodes. Cattle addressed this by moving IPsec into a container and wrapping it in a management layer that handles the creation, deletion, and monitoring of the tunnels.
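
To make the routing model concrete, here is a minimal sketch of the per-node routes involved; the underlay addresses, interface name, and node subnets are illustrative, not taken from any real deployment:

    # Hypothetical sketch of per-node overlay routing (all addresses are examples).
    # node01 owns 192.168.11.0/24; node02 owns 192.168.12.0/24.

    # On node01: anything destined for node02's container subnet is routed
    # toward node02's underlay address, where the IPsec tunnel carries it.
    ip route add 192.168.12.0/24 via 10.0.0.2 dev eth0

    # On node02: the mirror-image route back to node01's container subnet.
    ip route add 192.168.11.0/24 via 10.0.0.1 dev eth0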

The second main issue was load balancing. With Docker Swarm, we were limited to very basic TCP/Layer 4 load balancing; there were no sessions, SSL, or connection management, because load balancing was all done by iptables rules. Cattle addressed this by deploying HAProxy on all nodes in the cluster and then using a custom container, called rancher-metadata, to dynamically rebuild HAProxy's configuration every time a container was created or deleted.
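
The rendered output was ordinary HAProxy configuration; a minimal sketch of what such a generated file might contain for a two-container service (names, addresses, and ports are illustrative) looks like this:

    # Illustrative HAProxy snippet of the kind Cattle rendered dynamically.
    frontend web_frontend
        bind *:80
        default_backend web_servers

    backend web_servers
        balance roundrobin
        # One server line per container; regenerated as containers come and go.
        server web1 10.42.1.10:8080 check
        server web2 10.42.2.11:8080 check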

The third issue was storage. With Docker Swarm, there weren't any storage options beyond bind mounting a host filesystem. This meant that you had to stand up a clustered or shared network filesystem yourself and then manually map it to all of your Docker hosts. Cattle addressed this by creating rancher-nfs, a tool that mounts NFS shares inside a container and exposes them as bind mounts. As Rancher evolved, other storage providers were added, such as AWS and VMware.

The next giant leap came when authentication providers were added: Rancher grants access to the clusters it manages by integrating external authentication providers such as Active Directory, LDAP, and GitHub. This remains a differentiator for Rancher, as upstream Kubernetes still doesn't integrate very well with external authentication providers.

What is Rancher's core philosophy?

Rancher is built around several core design principles:

  • Open source: All code, components, and services that make up Rancher or come with Rancher must be open source. Because of this, Rancher has a large community built around it, with users providing feedback, writing documentation, and contributing code.
  • No lock-ins: Rancher is designed with no vendor lock-in, including lock-in to Rancher itself. With containerization evolving so quickly, Rancher needs to let users change technologies with as little impact as possible. A core requirement of all products and solutions that Rancher provides is that they can be used with or without the Rancher server. An example of this is Longhorn: there are zero dependencies between Rancher and Longhorn, so at any time, a user can uninstall one without impacting the other. This includes the ability to uninstall Rancher without losing your clusters; Rancher provides a process for a user to take over the management of a cluster directly and take Rancher out of the picture.
  • Everything is a Kubernetes object: With the release of Rancher v2.0 in May 2018, after approximately a year and a half of work, Rancher switched from storing all of its resources and configuration in a MySQL database to storing everything as Kubernetes objects, using Custom Resource Definitions (CRDs). For example, the definition of a cluster in Rancher is stored as a custom resource of type clusters.management.cattle.io, and nodes are stored as objects under nodes.management.cattle.io, scoped to a namespace named after the cluster ID. Because of this, users and applications can query Rancher objects directly without needing to talk to Rancher's API (see the kubectl sketch after this list). The reason for this change was mainly scalability: with Cattle and MySQL, all cluster-related tasks had to go back to the Rancher server, so as you scaled up the size and number of your clusters, you had to scale up the Rancher server, too. This resulted in customers hitting issues such as "task storms," where a single node rebooting in a cluster causes a flood of requests to the Rancher server, which, in turn, causes other tasks to time out, which then causes more requests. In the end, the only thing you can do is shut everything down and slowly bring it back up.
  • Everything is stateless: Because everything is a Kubernetes object, Rancher needs no database. All Rancher pods are stateless, meaning they can be destroyed at any time for any reason, and Kubernetes controllers will simply spin up new pods without Rancher having to do anything.
  • Controller model: All Rancher services are designed around the Kubernetes controller model: a control loop is always running, watching the current state and comparing it to the desired state, and if any differences are found, it applies the application logic needed to make the current state match the desired state. Alongside this, Rancher uses the same leader election process as the Kubernetes core components, which ensures there is a single source of truth and that controllers fail over cleanly after a failure.
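
For instance, with kubectl pointed at the cluster where Rancher runs, you can list these custom resources directly; the cluster ID used here is hypothetical, and the exact output shape varies by Rancher version:

    # List Rancher's cluster objects directly, without going through Rancher's API.
    kubectl get clusters.management.cattle.io

    # Node objects are scoped to a namespace named after the cluster ID
    # (c-abc123 is a made-up example).
    kubectl get nodes.management.cattle.io -n c-abc123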

Where did Kubernetes come from?

The name Kubernetes originates from Greek, meaning helmsman or pilot. It is abbreviated to k8s because there are eight letters between the K and the s. Kubernetes was created by engineers at Google from an internal project called Borg. Google's Borg system is a cluster manager designed to run Google's internal applications, which are made up of tens of thousands of microservices hosted on clusters worldwide, with each cluster comprising tens of thousands of machines. Borg provided three main benefits. The first was the abstraction of resource and failure management, so application designers could focus on application development. The second was high reliability and availability by design: all parts of Borg were designed from the beginning to be highly available, primarily by making applications stateless, so that any component could be destroyed at any time for any reason without impacting availability, while still scaling horizontally to hundreds of instances across clusters. The third was efficiency: Borg was designed to impose minimal overhead on the compute resources being managed.

Kubernetes can be traced directly back to Borg, as many of the Google developers who worked on Kubernetes were formerly developers on the Borg project. Because of this, many of Borg's core concepts were incorporated into Kubernetes, with the main difference being that Borg was custom-made for Google, whereas Kubernetes needed to be more generalized and flexible. Four main features in particular derive from Borg:

  • Pods: A pod is the smallest unit of scheduling in Kubernetes. This object can include one or more containers, with each container in the pod sharing resources such as an IP address, volumes, and other local resources. One of the main design principles is that a pod should be disposable and shouldn't change after creation. Another primary principle is that all application configuration should be handled at the pod level. For example, a database connection string should be defined as part of the pod's definition rather than in the application code, so that configuration changes won't require the code to be recompiled and redeployed. Additionally, the pod takes the concept of paired processes from Borg, with the classic example being a log collector, because, typically, a container should only have one primary process running inside it.

An example of this is a web server: the server creates logs, but how do you ship those logs to a log server like Splunk? One option is to add a custom agent to your application container, which is easy. But now you are managing more than one process inside a container: you'll have duplicate code throughout your environment, and, most importantly, you now have to handle errors for both your main application and the logging agent. This is where sidecars come into play, allowing you to bolt containers together inside a pod in a repeatable and consistent manner.
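
A minimal sketch of the sidecar pattern follows; the sidecar image name is made up, and a real log shipper would need its own configuration:

    # Illustrative pod pairing a web server with a log-shipping sidecar.
    apiVersion: v1
    kind: Pod
    metadata:
      name: web-with-log-shipper
    spec:
      containers:
        - name: web
          image: nginx:1.25
          volumeMounts:
            - name: logs
              mountPath: /var/log/nginx
        - name: log-shipper                      # sidecar (hypothetical image)
          image: example.com/log-shipper:latest
          volumeMounts:
            - name: logs
              mountPath: /logs
              readOnly: true
      volumes:
        - name: logs                             # shared by both containers
          emptyDir: {}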

  • Services: One of Borg's primary roles was the life cycle management of applications and pods. Because of this, the pod name and IP address are ephemeral and can change at any time for any reason. So, the concept of a service was created as an abstraction layer: you define a service object that references a pod or pods by using labels, and Kubernetes handles the mapping of the service record to its pods. This lets Kubernetes load balance the traffic for a service among the pods that back it, and it allows Kubernetes to add and remove pods without disrupting applications, because the service-to-pod mapping can simply be updated without the requesting client being aware (see the manifest after this list).
  • Labels: Because Borg was designed to manage containers at scale, things such as hostnames were impractical for mapping a pod to its running application. The idea is that you define a set of labels for your application and add them to its pods, allowing Kubernetes to track instances at scale. Labels are arbitrary key-value pairs that can be assigned to any Kubernetes resource, including pods, services, nodes, and more. One example set is "application=web_frontend", "environment=production", "department=marketing". Each of these keys can then be used in a label selector rule, for example, when creating a service record. This has the side benefit of making the reporting and tracking of usage much easier.
  • Every pod has an IP: When Borg was created, all of the containers on a host shared the host's IP address, with each container using different ports. This allowed Borg to use a standard IP network, but it created a burden on infrastructure and application teams, as Borg needed to schedule ports for containers, and applications had to declare a set of predefined ports for their containers. Giving every pod its own IP address removes this port-scheduling burden entirely.
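
Putting the last three ideas together, here is a minimal sketch of a service selecting pods by label; it reuses the example labels above, and the port numbers are illustrative:

    # Illustrative service: routes traffic to any pod carrying these labels.
    apiVersion: v1
    kind: Service
    metadata:
      name: web-frontend
    spec:
      selector:
        application: web_frontend
        environment: production
      ports:
        - port: 80          # port exposed by the service
          targetPort: 8080  # port the pods actually listen on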

What problem is Kubernetes trying to solve?

Kubernetes was designed to solve several problems. The primary areas are as follows:

  • Availability: Everyone, from the application owner to the developers to the end users, has come to expect 24x7x365 uptime, with outages and downtime being four-letter words in IT. With containerization and microservices, this bar has only gotten higher. Kubernetes addresses this by scheduling containers across nodes and comparing the desired state with the actual state. Any failure is just a change in the actual state, which triggers the controllers to schedule pods until the actual state matches the desired state again.
  • CI/CD: Traditional development was built around monolithic applications, with a few significant releases per year. This required large teams of developers working for months to test each release and a ton of manual processes to deploy their applications. Kubernetes addresses this by being driven by desired state and configuration files, which makes it possible to implement a DevOps workflow in which developers automate steps and continuously integrate, test, and deploy code. All of this enables teams to fail fast and fix fast.
  • Efficiency: Traditional IT was a black hole that companies threw money into. One of the reasons behind this was high availability. For one application, you would need at least two servers for each component of your production application. Also, you would require additional servers for each of your lower environments (such as DEV, QAS, Test, and more). Today, companies want to be as efficient with their IT spending as possible. Kubernetes addresses this need by making spinning up environments very easy. With CI/CD, you can simply create a new namespace, deploy your application, run whatever tests you want, and then tear down the namespace to reclaim its resources.
  • Automated scaling: Traditionally, you would design and build your environment around your peak workload. For instance, say your application is mainly busy during business hours and idle during off-peak hours: you are wasting money because you pay the same amount for your compute resources at 100% utilization as at 1%. Traditionally, it would take days or even weeks to spin up a new server, install your application, configure it, and, finally, update the load balancer, which made it impossible to scale up and down rapidly; some companies just decided to scale up and stay there. Kubernetes addresses this by making scaling up or down a simple change to the desired state.

Let's say that an application currently has two web servers, and you want to add a pod to handle the load. Just change the number of replicas to three: the current state no longer matches the desired state, so the controllers kick in and start spinning up a new pod. This can be automated using Kubernetes' built-in Horizontal Pod Autoscaler (HPA), which can act on several metrics, ranging from simple ones such as CPU and memory to custom metrics such as overall application response times. Additionally, Kubernetes can use its Vertical Pod Autoscaler (VPA) to automatically tune your CPU and memory limits over time, and it can use node scaling to dynamically add and remove nodes in your clusters as resources are required. This means your application might have 10 pods on 10 worker nodes during the day but drop to only 1 pod on 1 worker node after hours, saving the cost of 9 nodes for 16 hours per day plus the weekends, all without your application having to do anything.
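
As a sketch, both the manual change and the autoscaler are one-liners; the deployment name and thresholds here are illustrative:

    # Manually change the desired state from two replicas to three.
    kubectl scale deployment web --replicas=3

    # Or let the HPA manage replicas between 1 and 10, targeting 80% CPU usage.
    kubectl autoscale deployment web --min=1 --max=10 --cpu-percent=80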

Comparing Kubernetes with Docker Swarm and OpenShift

We will compare these platforms in the following sections.

Kubernetes versus Docker Swarm

Kubernetes and Docker Swarm are both open source container orchestration platforms; they share several core functions but differ significantly in others.

Scalability

Kubernetes is a complex system with several components that all need to work together to make the cluster operate, making it more challenging to set up and administer. For example, Kubernetes requires you to manage a database (etcd), including taking backups, and to create SSL certificates for all of the different components.
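
Backing up etcd, for instance, is your responsibility; with etcdctl it looks roughly like this (the endpoint and certificate paths vary by installation; these are kubeadm-style defaults):

    # Illustrative etcd backup; adjust paths and endpoint for your cluster.
    ETCDCTL_API=3 etcdctl \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/etc/kubernetes/pki/etcd/ca.crt \
      --cert=/etc/kubernetes/pki/etcd/server.crt \
      --key=/etc/kubernetes/pki/etcd/server.key \
      snapshot save /var/backups/etcd-snapshot.db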

Docker Swarm is far simpler, with everything included in Docker itself: all you need to do is create a manager and join nodes to the swarm. However, because everything is baked in, you don't get higher-level features such as autoscaling, node provisioning, and more.

Networking

Kubernetes uses a flat network model, with all pods sharing one large network subnet and a separate network used for services. Additionally, Kubernetes allows you to customize and change network providers. For example, if your current provider, such as Canal, can't do network-level encryption, you can switch to another provider, such as Weave, which can.

Docker Swarm networking is very basic. By default, Docker Swarm creates IPsec tunnels between all nodes in the cluster, with IPsec providing the encryption. Performance can be acceptable because modern CPUs provide hardware acceleration for AES, but you can still take a performance hit depending on your hardware and workload. Additionally, with Docker Swarm, you can't switch network providers; you only get what is provided.

Application deployment

Kubernetes uses YAML and its API to let users define applications and their resources. Because of this, tools such as Helm exist that allow application owners to define their application in a templatized format, making it very easy for applications to be published in a user-friendly format called Helm charts.
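
As a sketch, consuming a published chart takes only a couple of commands; the Bitnami repository and its nginx chart are used here purely as a well-known example:

    # Add a public chart repository and refresh the local index.
    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm repo update

    # Install the chart as a release named "my-web", overriding one value.
    helm install my-web bitnami/nginx --set service.type=ClusterIP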

Docker Swarm is built on the Docker CLI with a minimal API for management. The only package management tool is Docker Compose, which hasn't been widely adopted due to its limited customization and the high degree of manual work required to deploy it.

High availability

Kubernetes has been built from the ground up to be highly available and to handle a range of failures, including detecting unhealthy pods using advanced features such as probes that run commands inside the pods to verify their health. This extends to all of the management components, such as kube-scheduler, kube-apiserver, and more: each of these components is designed to be stateless, with built-in leader election and failover management.
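
A minimal sketch of such a health check, using a command-based liveness probe (the image and probed file path are illustrative):

    # Illustrative exec liveness probe: the kubelet runs the command inside
    # the container; a non-zero exit marks it unhealthy and restarts it.
    apiVersion: v1
    kind: Pod
    metadata:
      name: probe-demo
    spec:
      containers:
        - name: app
          image: busybox:1.36
          args: ["sh", "-c", "touch /tmp/healthy && sleep 3600"]
          livenessProbe:
            exec:
              command: ["cat", "/tmp/healthy"]
            initialDelaySeconds: 5
            periodSeconds: 10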

Docker Swarm achieves high availability mainly through its ability to replicate services across nodes, with the Swarm manager nodes running in an active-standby configuration in case of a failure.

Load balancing

Kubernetes pods can be exposed using simple Layer 4 (TCP/UDP mode) load-balancing services. For external access, Kubernetes then has two options. The first is NodePort, which acts as a simple method of port forwarding from the node's IP address to an internal service record. The second, for more complex applications, is an ingress controller, which provides Layer 7 (HTTP/HTTPS mode) load balancing, routing, and SSL management.
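
A minimal sketch of the Layer 7 option: an ingress rule routing a hostname to a backing service (the hostname and service name are illustrative):

    # Illustrative ingress: host-based routing to a backing service.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web-ingress
    spec:
      rules:
        - host: shop.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: web-frontend
                    port:
                      number: 80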

Docker Swarm load balancing is DNS-based, meaning Swarm uses round-robin DNS to distribute incoming requests between containers. Because of this, Docker Swarm is limited to Layer 4 only, with no option to use higher-level features such as SSL termination or host-based routing.

Management

Kubernetes provides several tools with which to manage the cluster and its applications, including kubectl for command-line access and even a web UI via the Kubernetes Dashboard service, plus higher-level UIs such as Rancher and Lens. This is possible because Kubernetes is built around a highly flexible REST API, which means applications and users can easily integrate their own tools with Kubernetes.

Docker Swarm doesn't offer a built-in dashboard. There are some third-party dashboards, such as Swarmpit, but these tools have seen little adoption and even less standardization.

Security

Kubernetes provides a built-in RBAC model allowing fine-grained control over Kubernetes resources. For example, you can grant one pod permission to just a single secret while giving another pod access to all secrets in a namespace. Kubernetes authentication is built on SSL certificates and tokens, which can simply be mounted into a pod as files, making it straightforward for applications to gain access to the Kubernetes API.
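
A minimal sketch of that fine-grained control: a Role granting read access to exactly one named secret (the namespace and names are illustrative), which you would then attach to a pod's service account via a RoleBinding:

    # Illustrative Role: read-only access to one specific secret.
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: read-one-secret
      namespace: demo
    rules:
      - apiGroups: [""]
        resources: ["secrets"]
        resourceNames: ["app-db-credentials"]   # only this secret
        verbs: ["get"]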

The Docker Swarm security model is primarily network-based, using mutual TLS (mTLS), and is missing many fine-grained controls and integrations; Docker Swarm only has the built-in roles of none, view only, restricted control, scheduler, and full control. This is because the access model for Docker Swarm was built for cluster administration, not application integration. In addition, the Docker API originally only supported basic authentication.

Kubernetes versus OpenShift

Both Kubernetes and OpenShift share a lot of features and architectures. Both follow the same core design practices, but they differ in terms of how they are executed.

Networking

Kubernetes lacks a built-in networking solution and relies on third-party plugins such as Canal, Flannel, and Weave to provide networking for the cluster.

OpenShift provides a built-in network solution called Open vSwitch. This is a VXLAN-based software-defined network stack that integrates easily with Red Hat's other products. There is some support for third-party network plugins, but they are limited and much harder to support.

Application deployment

Kubernetes takes the approach of being as flexible as possible when deploying applications to the cluster, allowing users to deploy any Linux distribution they choose, including supporting Windows-based images and nodes. This is because Kubernetes is vendor-agnostic.

OpenShift takes the approach of standardizing the whole stack on Red Hat products, such as RHEL for the node's operating system. Technically, there is little to nothing stopping OpenShift from running on other Linux distributions such as Ubuntu. Additionally, OpenShift limits the types of container images that are allowed to run inside the cluster. Again, technically, there isn't much preventing a user from deploying an Ubuntu image on an OpenShift cluster, but they will most likely run into issues around supportability.

Security

Kubernetes had a built-in tool for pod-level security called Pod Security Policies (PSPs). PSPs were used to enforce limits on pods, such as blocking a pod from running as root or binding to a host's filesystem. PSPs were deprecated in v1.21 due to several limitations of the tool and are being replaced by a third-party tool called OPA Gatekeeper, which allows the same kinds of security rules but with a different enforcement model.
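
As a sketch, Gatekeeper enforces rules through constraint objects. The example below uses the K8sPSPPrivilegedContainer kind from the community gatekeeper-library; that ConstraintTemplate must be installed first, and the kind name comes from the library rather than core Kubernetes:

    # Illustrative Gatekeeper constraint: reject privileged containers.
    # Requires the matching ConstraintTemplate from the gatekeeper-library.
    apiVersion: constraints.gatekeeper.sh/v1beta1
    kind: K8sPSPPrivilegedContainer
    metadata:
      name: deny-privileged-containers
    spec:
      match:
        kinds:
          - apiGroups: [""]
            kinds: ["Pod"]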

OpenShift has a much stricter security mindset, with the option to be secure by default, and it doesn't require the same cluster hardening that Kubernetes does.

Summary

In this chapter, we learned about Rancher's history and how the company got its start. Following this, we went over Rancher's core philosophy and how it was designed around Kubernetes. Then, we covered where Kubernetes got its start and its core philosophy, and dived into the core problems Kubernetes is trying to solve. Finally, we examined the pros and cons of Kubernetes, Docker Swarm, and OpenShift.

In the next chapter, we will cover the high-level architecture and processes of Rancher and its products, including RKE, K3s, and RancherD.

Key benefits

  • Gain a complete understanding of how Rancher works
  • Discover how to design and deploy Kubernetes clusters using Rancher
  • Understand how to extend Kubernetes and Rancher's capabilities to take your apps to the next level

Description

Knowing how to use Rancher enables you to manage multiple clusters and applications without being locked into a vendor’s platform. This book will guide you through Rancher’s capabilities while deepening your understanding of Kubernetes and helping you to take your applications to a new level. The book begins by introducing you to Rancher and Kubernetes, helping you to learn and implement best practices. As you progress through the chapters, you’ll understand the strengths and limitations of Rancher and Kubernetes and discover all the different ways to deploy Rancher. You’ll also find out how to design and deploy Kubernetes clusters to match your requirements. The concluding chapters will show you how to set up a continuous integration and continuous deployment (CI/CD) pipeline for deploying applications into a Rancher cluster, along with covering supporting services such as image registries and Helm charts. By the end of this Kubernetes book, you’ll be able to confidently deploy your mission-critical production workloads on Rancher-managed Kubernetes clusters.

What you will learn

  • Deploy Rancher in a production-ready configuration
  • Architect an application cluster to support mission-critical workloads
  • Build the type of Kubernetes cluster that makes sense for your environment
  • Discover the tools and services needed to make a new, ready-to-deploy cluster
  • Prepare your applications to be deployed into Rancher for Kubernetes
  • Expand your Kubernetes cluster by providing additional services such as Longhorn, OPA, and monitoring

Table of Contents

Preface
Part 1 – Rancher Background and Architecture and Design
Chapter 1: Introduction to Rancher and Kubernetes
Chapter 2: Rancher and Kubernetes High-Level Architecture
Part 2 – Installing Rancher
Chapter 3: Creating a Single Node Rancher
Chapter 4: Creating an RKE and RKE2 Cluster
Chapter 5: Deploying Rancher on a Hosted Kubernetes Cluster
Part 3 – Deploying a Kubernetes Cluster
Chapter 6: Creating an RKE Cluster Using Rancher
Chapter 7: Deploying a Hosted Cluster with Rancher
Chapter 8: Importing an Externally Managed Cluster into Rancher
Part 4 – Getting Your Cluster Production-Ready
Chapter 9: Cluster Configuration Backup and Recovery
Chapter 10: Monitoring and Logging
Chapter 11: Bringing Storage to Kubernetes Using Longhorn
Chapter 12: Security and Compliance Using OPA Gatekeeper
Chapter 13: Scaling in Kubernetes
Chapter 14: Load Balancer Configuration and SSL Certificates
Chapter 15: Rancher and Kubernetes Troubleshooting
Part 5 – Deploying Your Applications
Chapter 16: Setting Up a CI/CD Pipeline and Image Registry
Chapter 17: Creating and Using Helm Charts
Chapter 18: Resource Management
Other Books You May Enjoy
