Containers in OpenStack: Leverage OpenStack services to make the most of Docker, Kubernetes and Mesos


Product Details

Publication date: Dec 21, 2017
Length: 176 pages
Edition: 1st Edition
Language: English
ISBN-13: 9781788394383
Vendor: OpenStack

Containers in OpenStack

Chapter 1. Working with Containers

This chapter introduces containers and the various topics related to them. We will cover the following topics:

  • The historical context of virtualization
  • Introduction to containers
  • Container components
  • Types of containers
  • Types of container runtime tools
  • Installation of Docker
  • Docker hands-on

The historical context of virtualization


Traditional virtualization appeared on the Linux kernel in the form of hypervisors such as Xen and KVM. These allowed users to isolate their runtime environments in the form of virtual machines (VMs), each running its own operating system kernel, and to pack as many workloads onto a host as possible. However, high densities were difficult to achieve with this form of virtualization, especially when a deployed application was small compared to a kernel: much of the host's memory was consumed by the multiple copies of kernels running on it. Hence, for such high-density workloads, machines were divided using technologies such as chroot jails, which provided imperfect workload isolation and carried security implications.

In 2001, operating system virtualization was introduced on Linux in the form of Linux vServer, delivered as a series of kernel patches.

This led to an early form of container virtualization. In such forms of virtualization, the kernel groups and isolates processes belonging to different tenants, each sharing the same kernel.

The following timeline summarizes the developments that took place to enable operating system virtualization:

1979: chroot

The concept of containers emerged way back in 1979 with UNIX chroot. Later, in 1982, this was incorporated into BSD. With chroot, users can change the root directory for any running process and its children, separating it from the main OS and directory.

2000: FreeBSD Jails

FreeBSD Jails were introduced by Derrick T. Woolworth at R&D Associates in 2000 for FreeBSD. A jail is an operating-system-level mechanism similar to chroot, with additional process sandboxing features for isolating the filesystem, users, networking, and so on.

2001: Linux vServer

Another jail mechanism that can securely partition resources on a computer system (filesystem, CPU time, network addresses, and memory).

2004: Solaris containers

Solaris containers were introduced for x86 and SPARC systems, and first released publicly in February 2004. They are a combination of system resource controls and the boundary separations provided by zones.

2005: OpenVZ

OpenVZ is similar to Solaris containers and makes use of a patched Linux kernel to provide virtualization, isolation, resource management, and checkpointing.

2006: Process containers

Process containers were implemented at Google in 2006 for limiting, accounting, and isolating the resource usage (CPU, memory, disk I/O, network, and so on) of a collection of processes.

2007: Control groups

Control groups, also known as CGroups, were implemented by Google and added to the Linux Kernel in 2007. CGroups help in the limiting, accounting, and isolation of resource usages (memory, CPU, disks, network, and so on) for a collection of processes.

2008: LXC

LXC stands for Linux containers and was implemented using CGroups and Linux namespaces. In comparison to other container technologies, LXC works on the vanilla Linux kernel.

2011: Warden

Warden was implemented by Cloud Foundry in 2011, initially using LXC; later, this was replaced with their own implementation.

2013: LMCTFY

LMCTFY stands for Let Me Contain That For You. It is the open source version of Google's container stack, which provides Linux application containers.

2013: Docker

Docker was started in 2013. Today, it is the most widely used container management tool.

2014: Rocket

Rocket is another container runtime tool, from CoreOS. It emerged to address security vulnerabilities in early versions of Docker, and is an alternative to Docker that aims to better meet security, composability, speed, and production requirements.

2016: Windows containers

Microsoft added container support (Windows containers) to the Windows Server 2016 operating system for Windows-based applications. With this implementation, Docker is able to run containers natively on Windows, without needing a virtual machine.

Introduction to containers


Linux containers are a form of operating-system-level virtualization that provides multiple isolated environments on a single host. Rather than using a dedicated guest OS as VMs do, containers share the host OS kernel and hardware.

Before containers came into the limelight, multitasking and traditional hypervisor-based virtualization were mainly used. Multitasking allows multiple applications to run on the same host machine; however, it provides weaker isolation between applications.

Traditional hypervisor-based virtualization allows multiple guest machines to run on top of a host machine. Each of these guest machines runs its own operating system. This approach provides the highest level of isolation, as well as the ability to run different operating systems simultaneously on the same hardware.

However, it comes with a number of disadvantages:

  • Each operating system takes a while to boot
  • Each kernel takes up its own memory and CPU, so the overhead of virtualization is large
  • The I/O is less efficient as it has to pass through different layers
  • Resource allocation is not done on a fine-grained basis, for example, memory is allocated to a virtual machine at the time of creation, and memory left idle by one virtual machine can't be used by others
  • The maintenance load of keeping each kernel up to date is large

The following figure explains the concept of virtualization:

Containers provide the best of both worlds. To provide an isolated and secure environment, containers use Linux kernel features such as chroot, namespaces, CGroups, AppArmor and SELinux profiles, and so on.

Secure access to the host machine's kernel from the container is ensured by Linux security modules. Booting is faster because there is no kernel or operating system to start up. Resource allocation is fine-grained and handled by the host kernel, allowing effective per-container quality of service (QoS). The next figure explains container virtualization.

However, there are some disadvantages of containers compared to traditional hypervisor-based virtualization: guest operating systems are limited to those which can use the same kernel.

Traditional hypervisors provide additional isolation that is not available in containers, meaning the noisy neighbor problem is more significant in containers than it is with a traditional hypervisor:

Container components


Linux containers are typically composed of the following major components:

  • Kernel namespaces: Namespaces are the major building blocks of Linux containers. They isolate various types of Linux resources, such as the network, processes, users, and the filesystem, into different groups. This allows different groups of processes to have completely independent views of their resources. Other resources that can be segregated include the process ID space, the IPC space, and the semaphore space (a brief command-line sketch follows this list).
  • Control groups: Control groups, also known as CGroups, limit and account for different types of resource usage, such as CPU, memory, disk I/O, and network I/O, across a group of processes. They help prevent one container from starving or contending with another for resources, thereby maintaining QoS.
  • Security: Security in containers is provided via the following components:
    • Root capabilities: These help in enforcing namespaces in so-called privileged containers by reducing the power of root, in some cases to no power at all.
    • Discretionary Access Control (DAC): DAC mediates access to resources based on user-applied policies, so that individual containers cannot interfere with each other and can be run securely by non-root users.
    • Mandatory Access Control (MAC): MAC mechanisms, such as AppArmor and SELinux, are not required for creating containers, but are often a key element of their security. MAC ensures that neither the container code itself nor the code running inside the containers has a greater degree of access than the process itself requires, minimizing the privileges granted to rogue or compromised processes.
  • Toolsets: Above the host kernel lie user-space toolsets such as LXD, Docker, and other libraries, which help in managing containers.
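As a rough illustration of how namespaces and CGroups underpin containers, the following command-line sketch (assuming a recent util-linux and a cgroup v1 hierarchy mounted under /sys/fs/cgroup; the group name demo is arbitrary) starts an isolated process tree and caps a shell's memory:

# Namespaces: start a shell in new PID, mount, and UTS namespaces;
# inside it, ps sees only this shell and its children
$ sudo unshare --pid --fork --mount-proc --uts /bin/bash

# CGroups (v1 memory controller): create a group, cap it at 256 MB,
# and move the current shell into it so the limit applies
$ sudo mkdir /sys/fs/cgroup/memory/demo
$ echo 268435456 | sudo tee /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
$ echo $$ | sudo tee /sys/fs/cgroup/memory/demo/cgroup.procs

Container runtimes such as Docker and LXC automate exactly these steps, combining them with security modules and image management.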

Types of containers


The types of containers are as follows:

Machine containers

Machine containers are virtual environments that share the kernel of the host operating system but provide user-space isolation. They look much more like virtual machines: they have their own init process and may run a limited number of daemons. Programs can be installed, configured, and run just as they would be on any guest operating system. As with a virtual machine, anything running inside a container can only see the resources that have been assigned to that container. Machine containers are useful when the use case is to run a fleet of identical or different flavors of distros.

A machine container having its own operating system does not mean that it runs a full-blown copy of its own kernel. Rather, it runs a few lightweight daemons and the files necessary to provide a separate OS within another OS.

Container technologies such as LXC, OpenVZ, Linux vServer, BSD Jails, and Solaris zones are all suitable for creating machine containers.
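To make the machine container idea concrete, here is a minimal LXC sketch (it assumes the LXC userspace tools and the download template are installed; the container name machine1 is arbitrary):

# Create a machine container from a downloaded Ubuntu root filesystem
$ sudo lxc-create -n machine1 -t download -- -d ubuntu -r xenial -a amd64

# Boot it (it runs its own init and daemons) and attach a shell
$ sudo lxc-start -n machine1 -d
$ sudo lxc-attach -n machine1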

The following figure shows the machine container concept:

Application containers

While machine containers are designed to run multiple processes and applications, application containers are designed to package and run a single application. They are designed to be very small. They need not contain a shell or init process. The disk space required for an application container is very small. Container technologies such as Docker and Rocket are examples of application containers.

The following figure elaborates on application containers:

Types of container runtime tools


Multiple solutions are available today for managing containers. This section discusses the most common container runtime tools.

Docker

Docker is the world's leading container platform software. It has been available since 2013. Docker is a container runtime tool designed to make it easier to create, deploy, and run applications by using containers. Docker has drastically reduced the complexity of managing applications by containerizing them. It allows applications to use the same Linux kernel as the host OS, unlike VMs, which run a whole new OS on virtualized hardware. Docker containers can run on both Linux and Windows workloads. Docker containers have enabled huge efficiencies in the development of software, but at scale they require orchestration tools such as Swarm or Kubernetes.

Rocket

Rocket is another container runtime tool, from CoreOS. It emerged to address security vulnerabilities in early versions of Docker, and is an alternative to Docker that aims to better meet security, composability, speed, and production requirements. Rocket does things differently from Docker in many respects. The main difference is that Docker runs a central daemon with root privileges and spins off each new container as a subprocess of that daemon, whereas Rocket never starts a container with root privileges. Docker, however, recommends always running containers under SELinux or AppArmor, and has since come up with many solutions to tackle these flaws.
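As an illustration of this daemonless model, a container can be started with Rocket's rkt command directly, as a child of the invoking process (a sketch; it assumes rkt is installed, and signature verification is relaxed only because Docker registries do not ship rkt signatures):

# Fetch and run a Docker Hub image without a central daemon
$ sudo rkt run --insecure-options=image docker://nginx

# List the pods started by rkt
$ sudo rkt list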

LXD

LXD is a container hypervisor from Ubuntu for managing LXC containers. LXD is a daemon that provides a REST API for running containers and managing related resources. LXD containers provide the same user experience as traditional VMs but, being built on LXC, they offer the runtime performance of containers and better utilization than VMs. LXD containers run a full Linux OS, so they are typically long-running, whereas Docker application containers are short-lived. This makes LXD a machine management tool, different from Docker and closer to traditional software distribution.
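A brief LXD sketch, assuming the LXD packages are installed (the container name c1 and the ubuntu:16.04 image alias are illustrative):

# One-time setup of storage and networking for the LXD daemon
$ sudo lxd init

# Launch a long-running machine container, open a shell in it, and list containers
$ lxc launch ubuntu:16.04 c1
$ lxc exec c1 -- /bin/bash
$ lxc list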

OpenVZ

OpenVZ is container-based virtualization for Linux that allows multiple secure, isolated Linux containers, also known as virtual environments (VEs) or virtual private servers (VPSs), to run on a single physical server. OpenVZ enables better server utilization and ensures that applications do not conflict. It is similar to LXC and can only run on a Linux-based OS. Since all OpenVZ containers share the same kernel version as the host, users are not allowed to make any kernel modifications. However, this also gives it the advantage of a low memory footprint due to the shared host kernel.

Windows Server containers

Windows Server 2016 introduced containers to Microsoft workloads. Microsoft partnered with Docker to bring the benefits of Docker containers to Microsoft Windows Server, and re-engineered the core Windows OS to enable container technology. There are two types of Windows containers: Windows Server containers and Hyper-V containers.

Windows Server containers are used for running application containers on Microsoft workloads. They use process and namespace isolation technology to ensure isolation between multiple containers. They share the same kernel as the host OS, and require the same kernel version and configuration as the host. These containers do not provide a strict security boundary and should not be used to isolate untrusted code.

Hyper-V containers

Hyper-V containers are a type of Windows container that provides higher security compared to Windows Server containers. Hyper-V hosts Windows Server containers in lightweight, highly optimized Hyper-V virtual machines. They thus bring a higher degree of resource isolation, but at the cost of efficiency and density on the host. They can be used when the trust boundary of the host OS requires additional security. In this configuration, the kernel of the container host is not shared with other containers on the same host. Since these containers do not share the kernel with the host or with other containers on the host, they can run kernels of different versions and configurations. Users can choose to run containers with or without Hyper-V isolation at runtime.
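On a Windows container host with Docker installed, the isolation mode can be chosen per container at runtime; the following is a sketch run from PowerShell or cmd on the host (the image name is illustrative, and for process isolation it must match the host's Windows version):

# Default Windows Server container (process and namespace isolation)
> docker run --rm microsoft/nanoserver cmd /c echo hello

# The same image wrapped in a lightweight Hyper-V utility VM
> docker run --rm --isolation=hyperv microsoft/nanoserver cmd /c echo hello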

Clear containers

Virtual machines are secure but very expensive and slow to start, whereas containers are fast and provide a more efficient alternative, but are less secure. Intel's Clear containers are a trade-off solution between hypervisor-based VMs and Linux containers that offer agility similar to that of conventional Linux containers, while also offering the hardware-enforced workload isolation of hypervisor-based VMs.

A Clear container is a container wrapped in its own individual ultra-fast, trimmed-down VM, which offers both security and efficiency. The Clear container model uses a fast and lightweight QEMU hypervisor that has been optimized to reduce memory footprint and improve startup performance. The kernel, systemd, and the core user space have also been optimized for minimal memory consumption. These features improve resource utilization efficiency significantly and offer enhanced security and speed compared to traditional VMs.

Intel Clear containers provide a lightweight mechanism to isolate the guest environment from the host and also provide hardware-based enforcement for workload isolation. Moreover, the OS layer is shared transparently and securely from the host into the address space of each Intel Clear container, providing an optimal combination of high security with low overhead.

With the security and agility enhancements offered by Clear containers, they have seen a high adoption rate. Today, they seamlessly integrate with the Docker project with the added protection of Intel VT. Intel and CoreOS have collaborated closely to incorporate Clear containers into CoreOS's Rocket (Rkt) container runtime.

Installation of Docker


Docker is available in two editions, Community Edition (CE) and Enterprise Edition (EE):

  • Docker Community Edition (CE): It is ideal for developers and small teams looking to get started with Docker and may be experimenting with container-based apps
  • Docker Enterprise Edition (EE): It is designed for enterprise development and IT teams who build, ship, and run business critical applications in production at scale

This section will demonstrate the instructions for installing Docker CE on Ubuntu 16.04. The Docker installation package, available in the official Ubuntu 16.04 repository, may not be the latest version. To get the latest and greatest version, install Docker from the official Docker repository. This section shows you how to do just that:

  1. First, add the GPG key for the official Docker repository to the system:
        $ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
  2. Add the Docker repository to APT sources:
        $ sudo add-apt-repository \
          "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
  3. Next, update the package database with the Docker packages from the newly added repository:
        $ sudo apt-get update 
  4. Make sure you are about to install Docker from the Docker repository rather than the default Ubuntu 16.04 repository:
        $ apt-cache policy docker-ce 
  5. You should see output similar to the following:
        docker-ce:
          Installed: (none)
          Candidate: 17.06.0~ce-0~ubuntu
          Version table:
             17.06.0~ce-0~ubuntu 500
                500 https://download.docker.com/linux/ubuntu xenial/stable 
                amd64 Packages
             17.03.2~ce-0~ubuntu-xenial 500
                500 https://download.docker.com/linux/ubuntu xenial/stable 
                amd64 Packages
             17.03.1~ce-0~ubuntu-xenial 500
               500 https://download.docker.com/linux/ubuntu xenial/stable 
               amd64 Packages
             17.03.0~ce-0~ubuntu-xenial 500
              500 https://download.docker.com/linux/ubuntu xenial/stable 
              amd64 Packages

Note

Notice that docker-ce is not installed, but the candidate for installation is from the Docker repository for Ubuntu 16.04. The docker-ce version number might be different.

  6. Finally, install Docker:
        $ sudo apt-get install -y docker-ce 
  7. Docker should now be installed, the daemon started, and the process enabled to start on boot. Check that it's running:
        $ sudo systemctl status docker
        docker.service - Docker Application Container Engine
           Loaded: loaded (/lib/systemd/system/docker.service; enabled; 
        vendor preset: enabled)
           Active: active (running) since Sun 2017-08-13 07:29:14 UTC; 45s
        ago
             Docs: https://docs.docker.com
         Main PID: 13080 (dockerd)
           CGroup: /system.slice/docker.service
                   ├─13080 /usr/bin/dockerd -H fd://
                   └─13085 docker-containerd -l 
           unix:///var/run/docker/libcontainerd/docker-containerd.sock --
           metrics-interval=0 --start
  8. Verify that Docker CE is installed correctly by running the hello-world image:
        $ sudo docker run hello-world 
        Unable to find image 'hello-world:latest' locally 
        latest: Pulling from library/hello-world 
        b04784fba78d: Pull complete 
        Digest:
        sha256:f3b3b28a45160805bb16542c9531888519430e9e6d6ffc09d72261b0d26
        ff74f 
        Status: Downloaded newer image for hello-world:latest 
 
        Hello from Docker!
        This message shows that your installation appears to be
        working correctly.
        To generate this message, Docker took the following steps:
        The Docker client contacted the Docker daemon
        The Docker daemon pulled the hello-world image from the Docker Hub
        The Docker daemon created a new container from that image, 
        which ran the executable that produced the output you are 
        currently reading 
        The Docker daemon streamed that output to the Docker client, 
        which sent it to your terminal
        To try something more ambitious, you can run an Ubuntu 
        container with the following:
         $ docker run -it ubuntu bash 
        Share images, automate workflows, and more with a free Docker ID: 
        https://cloud.docker.com/ 
        For more examples and ideas,
        visit: https://docs.docker.com/engine/userguide/.

Docker hands-on


This section explains how to use Docker to run applications inside containers. The Docker installation explained in the previous section also installs the docker command-line utility, or Docker client. Let's explore the docker command. Using it consists of passing it a chain of options and commands followed by arguments.

The syntax takes this form:

$ docker [option] [command] [arguments]
# To see help for individual command
$ docker help [command] 

To view system-wide information about Docker and the Docker version, use the following:

$ sudo docker info
$ sudo docker version 

Docker has many subcommands to manage the various resources handled by the Docker daemon. Here is a list of the management commands supported by Docker:

  • config: Manages Docker configs
  • container: Manages containers
  • image: Manages images
  • network: Manages networks
  • node: Manages Swarm nodes
  • plugin: Manages plugins
  • secret: Manages Docker secrets
  • service: Manages services
  • stack: Manages Docker stacks
  • swarm: Manages swarms
  • system: Manages Docker
  • volume: Manages volumes
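Each management command takes its own subcommands; for example, the following invocations are equivalent to the older docker ps and docker images shortcuts:

# List containers and images using the management-command syntax
$ sudo docker container ls -a
$ sudo docker image ls

# Get help for any management command
$ sudo docker container --help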

 

In the next section, we will explore container and image resources.

Working with Docker images

An image is a lightweight, standalone, executable package that includes everything needed to run a piece of software, including the code, a runtime, libraries, environment variables, and configuration files. Docker images are used to create Docker containers. Images are stored in and pulled from registries such as Docker Hub.
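For illustration, an image can also be built locally from a Dockerfile that declares everything the application needs; the following is a minimal sketch (the tag myapp:1.0 and the app.sh script are hypothetical):

# Dockerfile: base image, dependencies, application code, and default command
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y curl
COPY app.sh /usr/local/bin/app.sh
CMD ["/usr/local/bin/app.sh"]

# Build and tag the image from the directory containing the Dockerfile
$ sudo docker build -t myapp:1.0 .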

Listing images

You can list all of the images available on the Docker host by running the docker images subcommand. By default, docker images shows all top-level images, their repository and tags, and their size:

$ sudo docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
wordpress           latest              c4260b289fc7        10 days ago         406MB
mysql               latest              c73c7527c03a        2 weeks ago         412MB
hello-world         latest              1815c82652c0        2 months ago        1.84kB

Getting new images

Docker will automatically download any image that is not present in the Docker host system. The docker pull subcommand will always download the image that has the latest tag in that repository if a tag is not provided. If a tag is provided, it pulls the specific image with that tag.

To pull a base image, do the following:

$ sudo docker pull ubuntu
# To pull a specific version
$ sudo docker pull ubuntu:16.04

Searching Docker images

One of the most important features of Docker is that a lot of people have created Docker images for a variety of purposes. Many of these have been uploaded to Docker Hub. You can easily search for Docker images in the Docker Hub registry by using the docker search subcommand:

$ sudo docker search ubuntu
NAME                                           DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
rastasheep/ubuntu-sshd                         Dockerized SSH service, built on top of of...   97                   [OK]
ubuntu-upstart                                 Upstart is an event-based replacement for ...   76        [OK]
ubuntu-debootstrap                             debootstrap --variant=minbase --components...   30        [OK]
nuagebec/ubuntu                                Simple always updated Ubuntu docker images...   22                   [OK]
tutum/ubuntu                                   Simple Ubuntu docker images with SSH access     18

Deleting images

To delete an image, run the following:

$ sudo docker rmi hello-world
Untagged: hello-world:latest
Untagged: hello-world@sha256:b2ba691d8aac9e5ac3644c0788e3d3823f9e97f757f01d2ddc6eb5458df9d801
Deleted: sha256:05a3bd381fc2470695a35f230afefd7bf978b566253199c4ae5cc96fafa29b37
Deleted: sha256:3a36971a9f14df69f90891bf24dc2b9ed9c2d20959b624eab41bbf126272a023 

Please refer to the Docker documentation for the rest of the commands related to Docker images.

Working with Docker containers

A container is a runtime instance of an image. It runs completely isolated from the host environment by default, only accessing host files and ports if configured to do so.
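For example, network ports and host directories have to be granted explicitly when the container is launched (the docker run command itself is covered in the next section; the nginx image, port numbers, and path below are illustrative):

# Publish container port 80 on host port 8080 and mount a host directory read-only
$ sudo docker run -d -p 8080:80 -v /srv/www:/usr/share/nginx/html:ro nginx

# Without -p or -v, the same container is unreachable from the host network
# and sees only its own filesystem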

Creating containers

Launching a container is simple: you pass docker run the name of the image you would like to run and the command to run within the container. If the image doesn't exist on your local machine, Docker will attempt to fetch it from the public image registry:

$ sudo docker run --name hello_world ubuntu /bin/echo hello world 

In the preceding example, the container will start, print hello world, and then stop. Containers are designed to stop once the command executed within them has exited.

As an example, let's run a container using the latest Ubuntu image. The combination of the -i and -t switches gives you interactive shell access to the container:

$ sudo docker run -it ubuntu
root@a5b3bce6ed1b:/# ls
bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  
run  sbin  srv  sys  tmp  usr  var 
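Containers can also be kept running in the background with the -d (detached) flag, which is useful for long-running services; a brief sketch (the name sleeper is arbitrary):

# Run a container in the background and keep it alive for an hour
$ sudo docker run -d --name sleeper ubuntu /bin/sleep 3600

# It now shows up as an active container
$ sudo docker ps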

Listing containers

You can list the containers on the Docker host using the following:

# To list active containers
$ sudo docker ps
    
# To list all containers
$ sudo docker ps -a
CONTAINER ID        IMAGE               COMMAND                   CREATED             STATUS                      PORTS               NAMES
2db72a5a0b99        ubuntu              "/bin/echo hello w..."    58 seconds ago      Exited (0) 58 seconds ago                       hello_world

Checking a container's logs

You can also view the information logged by a running container using the following:

$ sudo docker logs hello_world
hello world 

Starting containers

You can start a stopped container using the following:

$ sudo docker start hello_world 

Similarly, you can use commands such as stop, pause, unpause, restart, and so on to operate on containers.

Deleting containers

You can also delete a stopped container using the following:

$ sudo docker rm hello_world
    
# To delete a running container, use the --force parameter
$ sudo docker rm --force [container]

Please refer to the Docker documentation for the rest of the commands related to Docker containers.

Summary


In this chapter, we learned about containers and their types, as well as the components that make up a container. We looked at the different container runtime tools and took a deep dive into Docker: we installed it and worked through a hands-on exercise. We also learned the commands for managing containers and images with Docker. In the next chapter, we will look at the different Container Orchestration Engine (COE) tools available today.


Key benefits

  • Get acquainted with containerization in the private cloud
  • Learn to effectively manage and secure your containers in OpenStack
  • Work through practical use cases on container deployment and management using OpenStack components

Description

Containers are one of the most talked-about technologies of recent times. They have become increasingly popular as they are changing the way we develop, deploy, and run software applications. OpenStack has tremendous traction, as it is used by many organizations across the globe, and as containers gain in popularity and complexity, it's necessary for OpenStack to provide various infrastructure resources for containers, such as compute, network, and storage. Containers in OpenStack answers the question: how can OpenStack keep pace with the increasing challenges of container technology? You will start by getting familiar with container and OpenStack basics, so that you understand how the container ecosystem and OpenStack work together. To cover networking, managing application services, and deployment tools, the book has dedicated chapters for different OpenStack projects: Magnum, Zun, Kuryr, Murano, and Kolla. Towards the end, you will be introduced to some best practices for securing your containers and COE on OpenStack, with an overview of using each OpenStack project for different use cases.

What you will learn

  • Understand the role of containers in the OpenStack ecosystem
  • Learn about containers and the different types of container runtime tools
  • Understand containerization in OpenStack with respect to the deployment framework, platform services, application deployment, and security
  • Get skilled in using OpenStack to run your applications inside containers
  • Explore the best practices of using containers in OpenStack


Table of Contents

Title Page
Credits
About the Authors
About the Reviewers
www.PacktPub.com
Customer Feedback
Preface
Working with Containers
Working with Container Orchestration Engines
OpenStack Architecture
Containerization in OpenStack
Magnum – COE Management in OpenStack
Zun – Container Management in OpenStack
Kuryr – Container Plugin for OpenStack Networking
Murano – Containerized Application Deployment on OpenStack
Kolla – Containerized Deployment of OpenStack
Best Practices for Containers and OpenStack

