
Mastering Docker Enterprise

By Mark Panthofer
About this book
While known mostly as the open source engine behind tens of millions of server nodes, Docker also offers commercially supported enterprise tooling known as Docker Enterprise. This platform leverages the deep roots of Docker Engine - Community (formerly Docker CE) and Kubernetes, but adds support and tooling to efficiently operate a secure container platform at scale. With hundreds of enterprises on board, best practices and adoption patterns are emerging rapidly. These learning points can be used to inform adopters and help manage the enterprise transformation associated with enterprise container adoption. This book starts by explaining the case for Docker Enterprise, as well as its structure and reference architecture. From there, we progress through the PoC, pilot, and production stages as a working model for adoption, evolving the platform's design and configuration for each stage and using detailed application examples along the way to clarify and demonstrate important concepts. The book concludes with Docker's impact on other emerging software technologies, such as Blockchain and Serverless computing. By the end of this book, you'll have a better understanding of what it takes to get your enterprise up and running with Docker Enterprise and beyond.
Publication date: March 2019
Publisher: Packt
Pages: 488
ISBN: 9781789612073

 

Chapter 1. Making the Case for Docker Enterprise

If you have been around the technology scene for a while, you have probably figured out that guiding principles are key to achieving long-term success, and that without them you end up running in circles, always bouncing to the next cool tech fad without actually getting anything done.

Furthermore, these same guiding principles inspire enterprise practices as a means to ensure the principles are achieved. Finally, principles and practices combine to inform our choice and style for the tools used to make it all happen. Therefore, before we jump into the details of using Docker's Enterprise tooling, it is important to understand how we got here, what running Docker means, and where Docker's enterprise tooling fits into the enterprise platform space.

The following are some sample principles and practices to help guide your enterprise container adoption journey:

Principles, Practices and Tools for Enterprise Container Adoption

 

Now let's take a look at the topics that will be covered in this chapter:

  • What are Docker, Inc., Docker Engine-Community, and Docker Enterprise?
  • Where did containers come from and why are they so popular?
  • How do Kubernetes and Docker fit together?
  • How do containers impact your business?
  • Why would I choose Docker Enterprise?
 

Zero to everywhere in five years


Technical operations teams are justifiably skeptical about new technology platforms such as containers. They are usually most concerned about hardening for security and reliability because they exist to keep enterprise applications up and running securely. At the same time, product owners within their organizations need to deliver better, often more complex, software faster. Yes, the business landscape has changed profoundly; in today's business world, software is not only used to achieve competitive advantage, it is the business and provides the frontline customer experience.

Consequently, significant pressure is mounting to accelerate the software pipeline in nearly every organization. This section briefly explains the roots of containers and why their benefits (a secure and fast software pipeline) have driven such rapid adoption of containers.

The Docker story

Docker was born out of a lightning talk presentation, entitled The future of Linux Containers, delivered at PyCon on Friday, March 15, 2013. The presenter was Solomon Hykes, the founder of Docker. On that day, the software world changed, even though Linux containers had been evolving in the Linux community for nearly 13 years. It was not the technology that Solomon shepherded that got the Docker movement off the ground; it was the vision behind it and the packaging of the container ecosystem. Solomon's vision was to create tools for mass innovation, and his packaging of Linux containers in the Docker experience delivered this powerful technology and put containers within the grasp of mere mortals. Today, Docker runs on tens of millions of servers around the world.

Here are some notes on Linux containers:

  • They have been evolving since 2000
  • Linux Containers (LXC) was released in 2008
  • Google's lmctfy (Let Me Contain That For You) shifted its efforts to Docker's libcontainer in 2015
  • Standards emerged, including the OCI and the CNCF, around 2015
  • Center for Internet Security (CIS) benchmark support

Over the last 5 years, thousands of developers joined Docker's open source community to deliver what is known as Docker Community Edition (Docker Engine-Community). Docker has remained committed to an open platform and a level playing field. Docker has donated significant assets to the open source and standards community, including the Docker container format and runtime, to provide the cornerstone of the Open Container Initiative (OCI) in 2015 and the container runtime to the Cloud Native Computing Foundation (CNCF) in 2017. 

At DockerCon 2017, Solomon Hykes released Project Moby, which effectively gives anyone the tooling they need to build their own Docker. This was very cool and ultimately in the best interests of the container community. However, this well-intentioned effort led to some comprehensive repackaging of Docker community assets without community buy-in. From a big-picture point of view, Docker has demonstrated its commitment to the community and Solomon's vision of tools for mass innovation.

Containers change application development and deployment

Containers allow application developers to package up their application, along with all of their dependencies, into a portable unit called an image. These images are then stored in a remote repository where they can be pulled and run on any compliant container engine. Furthermore, the applications running on each container engine are isolated from each other and the host operating system:

  • Illustrative scenario: Let's say I want to test out NGINX without installing anything (I already have Docker installed of course). I create a sample HTML page called index.html in my local directory and run the following:
docker run -p 8000:80 -v ${PWD}:/usr/share/nginx/html:ro -d nginx
  • What is happening here?
    • I'm telling Docker to run the official nginx image in the background on my local Docker Engine, forwarding my host adapter's port 8000 to the container's port 80 and mounting my local directory to share my HTML file with nginx as a read-only folder.
    • Then, I point my local browser at http://localhost:8000 and I see my HTML page rendered. When I'm done, I ask Docker to remove the container (the full command sequence is sketched after this list). So, in the span of about a minute, I created a test web page, used NGINX to render it without installing anything locally, and ran it in complete isolation. The only possible collision with a host resource was around the host adapter's port 8000, which was arbitrary.
  • This is cool, but don't VMs already do that for us?
    • Conceptually there are some similarities, but container implementation is much more lightweight and efficient. The key implementation differences are:
      • All containers share the host's kernel:
        • Docker uses Linux container security features to isolate containers from the host and other containers.
        • Since the kernel is already running, startup time for containers is usually a second or two, versus waiting a minute or two for the guest OS to boot on a VM.
      • Containers use a layered filesystem with caching:
        • Docker images are composed of read-only layers that can be cached and shared across multiple containers.
        • Major portions of Docker images can be shared across containers, meaning you don't have to pull the entire image every time. VMs on the other hand have a monolithic, opaque filesystem that's completely reloaded every time it's started. This leads to slow load times and inefficient image storage with VMs. 
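Here is a minimal sketch of the full NGINX test sequence referenced previously. The container name (nginx-test) and the sample page content are illustrative choices of mine; the port and directory mapping follow the earlier command:

echo '<h1>Hello from NGINX in a container</h1>' > index.html   # create the sample page in the current directory
docker run --name nginx-test -p 8000:80 -v ${PWD}:/usr/share/nginx/html:ro -d nginx   # run the official nginx image in the background
curl http://localhost:8000                                      # or open http://localhost:8000 in a browser
docker stop nginx-test && docker rm nginx-test                  # clean up when finished

Because the image layers remain in the local cache, repeating the test starts the container again in a second or two.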

In the following figure, you can see how the applications in the VMs (right side of the diagram) have a full copy of the OS and the supporting binaries in each virtual machine, whereas the containerized applications (left side of the diagram) all share the same Alpine binaries (no kernel necessary, ~3 MB) and runtime binaries. There have been various reports on the financial impact of containers versus VMs, but the numbers I have seen range from a 15% to a 70% reduction in operational costs. As they say, your mileage may vary based on your OS, binaries, and whether or not you move to bare metal to eliminate hypervisor licensing costs:

Containerized apps vs VM apps

Containers gain popularity

The following is a summary of what I hear from customers and students:

  • Faster developer onboarding: Container-based development
  • Easy to run and test on dev machines: Great for simulating production
  • Faster release cycles and shorter time to fix bugs: No more monolithic deployments 
  • Better quality software: Consistent images across all environments 
  • It is too hard to manage microservices without them: Stacks are great for isolation and deployment
  • Easier to support legacy web applications: Containerize old apps and manage them on a modern platform
  • Reduction of VMware tax: Better use of compute resources through increased density and consolidation of multiple non-prod environments (using Docker Enterprise RBAC)

Note

Even the free stuff will cost you something. I am closing this section on a practical note by suggesting that your initial operational savings will be offset by the investment required to transform your enterprise to a container platform. When done right, container adoption impacts a broad group within the enterprise, spanning the entire software development and delivery pipeline. Like any transformation worth doing, there is some investment required. More about the impact of container adoption later.

Docker Engine-Community – free Docker

The open source version of Docker is called Docker Engine-Community and it is distributed under the Apache 2.0 license. Sometimes referred to as free Docker, this version is self- and community-supported. Docker has two packaging schemes:

  • Docker Engine-Community for x86 64-bit desktop architectures on Mac and Windows 10 Pro+
  • Docker Engine-Community for servers, targeting CentOS, Debian, Fedora, and Ubuntu Linux distributions

In addition to the platform packaging, Docker Engine-Community comes with multiple release channels. It is important to note that, as of Docker Engine-Community version 18.09, the stable channel will release on a six-month cadence and the edge channel will be replaced with a nightly build:

  • Stable channel: General availability code is released through this channel after being thoroughly tested.
  • Edge channel for desktop platforms: Monthly release of prerelease code that is in various stages of testing. 
  • Nightly channel: Fresh code is released here! Consequently, cool new features show up here first, but this code base is not completely tested and should not be used for production. Also, if your developers use the edge channel (or run with the --experimental flag) on their workstations, you will need to be very careful to avoid the works on my machine scenario! Take care to ensure new code is not relying on unreleased GA or experimental features that will work on the developer's workstation, but will break later as images and/or configurations move through the pipeline.

You may consider having a development cluster where developers deploy their code on a Docker infrastructure that matches production versions. If a dev cluster is available, developers should always deploy to dev before their code is checked in, to ensure no builds are broken. 

Docker Engine-Community includes key capabilities

Docker Engine-Community is a feature-rich container platform that includes a full API, a CLI (the Docker client), and a rich plugin architecture for integration and extension. It allows you to run production applications on either a single node or in a secure cluster that includes overlay networking and layer-4 load balancing. That's all included when you install Docker Engine-Community!

Running Docker Engine-Community on AWS or Azure

Please note there are AWS and Azure quickstart packages for cloud users. These convenience bundles include a Docker-supported AMI/VM image, as well as cloud utilities to wire up and support a cluster of Docker Engine-Community nodes. The real assets here are cloud provider-native IaaS templates (AWS CloudFormation or Azure Resource Manager), Docker VM images, and Docker4x utility containers for interacting with the cloud provider's services. For instance, the AWS bundle allows you to include the Cloudstor volume plugin, where instead of using local EBS volumes, you can use EFS and S3-backed volumes across the entire cluster.

Note

While you might use NFS to achieve a cluster-wide storage solution on-premises, unpredictable latency on cloud providers' networks can cause NFS mounts to unexpectedly become read-only, so I strongly recommend using Cloudstor on AWS and Azure. More information can be found at https://docs.docker.com/docker-for-aws/persistent-data-volumes/.
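As a rough illustration of that recommendation, the following sketch shows how a shared, EFS-backed volume might be created and consumed on a Docker for AWS cluster. The volume and service names are made up, and the option names follow the Docker for AWS documentation at the time of writing, so verify them against your installed plugin version:

docker volume create -d "cloudstor:aws" --opt backing=shared my-shared-vol   # EFS-backed volume visible to every node
docker service create --name web \
  --mount type=volume,volume-driver=cloudstor:aws,source=my-shared-vol,target=/data \
  nginx                                                                        # any task, on any node, sees the same /data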

Finally, please note that Docker for AWS and Docker for Azure only apply to Docker Engine-Community installations. Docker Enterprise now uses the Docker Certified Infrastructure tooling, using Terraform and Ansible to target VMware, Azure, and AWS implementations of Docker Enterprise.

 

Docker Enterprise – enterprise support and features

Free Docker is great! But supporting yourself is not always so great. Therefore, Docker Engine-Community is usually a fine choice for learning and getting started, but as soon as you head toward production, you should consider stepping up to Docker Enterprise for the support and/or the enterprise-class tooling it provides.

Docker Enterprise builds on Docker Engine-Community's already rich feature set and adds commercial support for the Docker Engine (Docker Enterprise Basic), as well as tooling that's important for managing multiple teams and production applications, including Kubernetes applications (Kubernetes is included in Docker Enterprise Standard and Advanced).

Docker offers the following support models for Docker Engine-Community and Docker Enterprise:

  • Docker Engine-Community: Starting in CE 18.09, you will need to upgrade (and deal with possible breaking changes) every seven months if you want hotfixes and patch support. This is a recent improvement as, prior to CE 18.09, the support cycle was only four months. Docker Engine-Community relies on community-based support forums; you post an issue in a public forum and wait for someone to help you or to generate a fix. Docker has a great community, but with Docker Engine-Community there are no Service Level Agreements (SLAs).
  • Docker Enterprise: You will need to upgrade (and deal with possible breaking changes) every 24 months to maintain access to hotfixes and patch support. Docker Enterprise's cornerstone is its enterprise-grade private support channel with either a business-critical or business-day service level agreement.
  • Hint: Business critical has a faster response time SLA, but costs more. 

Docker Enterprise also includes seamless support for ISV-provided Docker certified plugins and Docker certified containers. That means if you have an issue with a certified plugin or container, you just call Docker for support.

Note

Docker Engine-Community support issues are posted publicly for anyone to see. This can be a problem if you are, for example, a financial institution publicly announcing a security vulnerability you discovered and thus tipping off hackers. If you have concerns about the public visibility of your issues or need SLAs, you may want to consider purchasing Docker Enterprise Basic with business day support. 

Docker Enterprise also comes in three tiers:

  • Docker Enterprise basic tier: Docker Engine-Community feature set with Docker Enterprise support as described previously.
  • Docker Enterprise standard tier: Built on top of Docker Engine-Community with Docker Enterprise support as described previously, but adds the Universal Control Plane (UCP; integrated security with LDAP connections and RBAC through a GUI or a CLI bundle for policy management, layer-7 routing, Kubernetes up and running out of the box, and a web interface) and the Docker Trusted Registry (DTR; a private image registry tied into the UCP security model with image signing, promotions, webhooks, and full API access).
  • Docker Enterprise advanced tier: Includes all of the features in the Docker Enterprise standard tier, but gives the Universal Control Plane (UCP) additional finer-grained RBAC to allow for node isolation. The advanced tier enhances the Docker Trusted Registry (DTR) with image vulnerability scanning and image mirroring to remote DTRs.

Note

The advanced tier enforces a high degree of resource isolation down to the node level. This allows an enterprise to consolidate all of its non-production environments into a single non-prod Docker cluster, which can considerably reduce the number of servers required for non-production activities. Developers, testers, and operators are issued appropriate RBAC grants to work in isolation.

Kubernetes and Docker Enterprise 

Unless you have been hiding under a rock, you have probably heard about Kubernetes. Too many times I have heard (uninformed) members of the technology community say we don't use Docker, we use Kubernetes. This is a little naive since the vast majority of clusters running Kubernetes orchestration are doing so with the Docker Engine. 

Note

Orchestrators allow developers to wire up individual container nodes into a cluster to improve scaling and availability, and to reap the benefits of self-healing and distributed/microservice application management. As soon as multi-service applications needed to coordinate more than one container to run, orchestration became a thing. Orchestrators allow containerized application developers to specify how their collection of containers works together to form an application. They then deploy the application using this specification to schedule the required containers across a cluster of (usually Docker) hosts.

Early on, born-in-the-cloud startups that were running at scale and usually deploying microservices became aware of a need for orchestration. Hence, the brilliant minds at Google created what has become the Kubernetes orchestration framework and later created an independent body to manage it, the Cloud Native Computing Foundation (CNCF), with Kubernetes as the CNCF's cornerstone project. Meanwhile, the Docker community started working on its own orchestration project, called SwarmKit.

Kubernetes and Swarm orchestration

While there are many variations and incarnations in the orchestration space, the market boils down to two very different players: Kubernetes and Swarm. And no, Swarm is not dead.

Kubernetes has rapidly evolved as a modular and highly configurable third-party orchestration platform supported and used by 12-factor, cloud native developers. From an engineering point of view, it is a very interesting platform with many degrees of freedom and points of extensibility. For hardcore 12-factor folks (also known as the cool kids) who are usually delivering highly complex systems at massive scale, using Kubernetes is a no-brainer. However, if you are not Google or eBay, Kubernetes might be a little much for you, especially as you get started.

Swarm, Docker's orchestration tool, started off as a separate add-on, but was built into the Docker Engine in version 1.12. As such, there is nothing to install; rather, you activate it using the docker swarm init and docker swarm join commands to create a TLS-encrypted cluster with overlay networking ready to go! So, it's sort of the easy button for orchestration because there's nothing extra to install, and it is both secure and ready to use out of the box. Swarm is included in Docker Engine-Community, and Docker Enterprise's UCP piggybacks directly off of Swarm.
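To make the easy button point concrete, here is a minimal sketch of turning a couple of Docker hosts into a Swarm cluster. The IP address is a placeholder, and the join token is truncated because Docker generates the real one for you when you initialize the swarm:

docker swarm init --advertise-addr 10.0.1.10                    # on the first manager node; prints a ready-made join command
docker swarm join --token SWMTKN-1-<token> 10.0.1.10:2377       # run the printed command on each worker node
docker node ls                                                   # from a manager, verify the cluster members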

Kubernetes and Swarm – different philosophies to solve different problems

Which is better (for you)? Well, it depends…

Getting started with Kubernetes is pretty challenging. This is partly because, in addition to a container runtime (such as Docker Engine-Community), you need to install and configure kubectl, kubeadm, and kubelet, but that's just the beginning. You also have to make some decisions up front, such as picking a networking model and configuring/installing the provider's CNI implementation. Kubernetes does not usually provide default options (some cloud services do this for you), which gives you some great flexibility, but naturally forces you to make upfront decisions and complicates installation. Again, this is great if you need this flexibility and you know exactly what you are doing.
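For comparison, a bare-bones kubeadm-based setup (one of several possible install paths) looks roughly like the following sketch. The pod CIDR and the CNI manifest are placeholders; the exact manifest URL depends on the networking provider you pick and its version:

kubeadm init --pod-network-cidr=10.244.0.0/16                   # on the first control-plane node
kubectl apply -f <your-chosen-CNI-provider-manifest>.yaml        # install the networking (CNI) plugin you selected
kubeadm join <control-plane-ip>:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>                   # run the printed join command on each worker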

On the other hand, if you have Docker 1.12 or newer (we strongly recommend something much newer), you only need to activate it with the docker swarm init command. It creates a certificate authority, an encrypted Raft store, and overlay networking automatically, as well as a tokenized join command for securely adding additional nodes to your cluster. However, in the spirit of simplicity and security, Docker made some default choices for you. That's great if you can live with those choices, at least while you get up to speed on enterprise container platforms.

Beyond installation, describing application deployment (using YAML files) in Kubernetes is inherently more complex. When it comes to styles of deployment, Kubernetes deploys discrete components and wires them up with a collection of YAML files that rely on labels and selectors to connect them. Again in Kubernetes style, when creating components, you have to define a wide range of behavior parameters to describe exactly what you want, rather than assuming some default behavior. This can be verbose, but it is very powerful, precise, and flexible!
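To illustrate the label-and-selector wiring described previously, here is a minimal, hypothetical Deployment and Service pair applied with kubectl. The names, image, and ports are illustrative only:

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-web            # the Deployment manages pods carrying this label
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: web
        image: nginx:alpine
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-web
spec:
  selector:
    app: hello-web              # the Service finds its backend pods by the same label
  ports:
  - port: 80
EOF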

Kubernetes also includes the pod as the atomic deployment unit, which can be handy for isolating a bundle of containers from the flat networking model. This leads to the use of sidecar containers that essentially interface pods to the rest of the world. This is very cool and a great way to handle networks of loosely coupled, shared, long-running services in a flat address space:

Swarm and Kubernetes

Swarm takes an application-centric, monolithic approach by defining a stack of related services in a .yaml file. Swarm stacks assume you are running a collection of related services, isolated by overlay networks, to support specific functionality in the context of an application. This makes it easy to deploy a typical application stack such as an Angular frontend with an application's RESTful API and a Postgres database.
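A minimal, hypothetical stack file along those lines might look like the following sketch; the service names, images, and network are illustrative, not a prescribed layout:

cat > app-stack.yml <<'EOF'
version: "3.7"
services:
  frontend:
    image: myorg/angular-frontend:1.0    # hypothetical UI image
    ports:
      - "8080:80"
    networks:
      - app-net
  api:
    image: myorg/rest-api:1.0            # hypothetical API image
    networks:
      - app-net
  db:
    image: postgres:11
    environment:
      POSTGRES_PASSWORD: example         # demo only; use Docker secrets in real deployments
    networks:
      - app-net
networks:
  app-net:
    driver: overlay                      # the related services are isolated on their own overlay network
EOF
docker stack deploy -c app-stack.yml myapp   # deploy the whole stack to the Swarm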

Moving Kubernetes to the mainstream

Many PaaS and IaaS providers are diving into Kubernetes and providing turnkey setups. They are betting on the Kubernetes API as a specification for deploying application workloads in their service. Examples include Google Kubernetes Engine, Azure Kubernetes Service, and last but certainly not least, Amazon Elastic Container Service for Kubernetes (Amazon EKS). These are great to get you started, but what happens if/when you move steady workloads back on-premises due to cost or security concerns?

Finally, beware of the limitations tied to PaaS solutions. If you use a PaaS Kubernetes management plane, you may be limited to using the PaaS provider's CNI plugin, and their implementation may limit your options. As an example, if you are running Kubernetes on AWS, the networking implementation may require one virtual IP per pod, but you only get a limited number of virtual IPs per instance type. Consequently, you might need to move up to a bigger, more expensive instance type to support more pods, even though you don't really need any more/better CPUs, network, or storage.

 

New era for app Dev, DevOps, and IT operations


Using containers and orchestrators changes the way we look at building software and defining a software delivery pipeline. Container-based development fundamentally supports what the DevOps folks call a shift left, where developers of distributed systems become more accountable for the quality of the overall solution, meaning the binaries and how they are connected. Hence, wiring up my services is no longer the networking, integration, or operations teams' problem; it belongs to the developers. In fact, the YAML specification for connecting and deploying their application is now an artifact that gets checked into source code control!

Faster deployment of fixes and enhancements is a prime motivation for containerizing monolithic web applications built with Java and .NET. Containerization allows each team to operate independently and deploy its application as soon as it is ready to go, no longer having to wait for all of the other application teams or the next quarterly release cycle.

Containerizing applications can be really helpful for breaking up the organizational logjams associated with pre-container monolithic deployments, as each application gets its own runtime container. This container includes all of its specific runtime dependencies, such as Java and Tomcat, for the application to run. Because we are using containers, we become less concerned about the overhead associated with starting and operating similar containers in production; remember that Docker isolates application execution while sharing common layers from the filesystem for fast start times and efficient resource utilization. So, rather than having to coordinate across all of the teams involved in a deployment, each team has its own isolated stack of dependencies, which allows them to deploy and test on their own schedule. Not surprisingly, after applications are containerized, it is much easier to independently refactor them.

Note

First containerize applications without changing any code if possible. After you have the application containerized, then take on any refactoring. Trying to accomplish both containerizing and refactoring simultaneously can be daunting and may stall the project.

DevOps

Leveraging containers in your continuous integration and continuous deployment pipeline has become a best practice for most DevOps teams. Even the pipeline itself is often run as a containerized platform. The motivation here is a higher-quality product, based on the immutable server pattern where containers are built once and promoted through the pipeline. In other words, the application is not re-installed on a new virtual server between each step in the pipeline. Instead, the same container image is pulled from a central repository and run in the next step. Environment variables and configuration files are used to account for variations between the environments.
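As a simple sketch of the build-once, promote-everywhere idea, the registry name, image tag, and environment files below are hypothetical; only the pattern matters:

docker build -t registry.example.com/myapp:1.4.2 .                        # built once by CI
docker push registry.example.com/myapp:1.4.2                               # stored in a central registry
docker run -d --env-file staging.env registry.example.com/myapp:1.4.2      # the same image run in staging
docker run -d --env-file prod.env registry.example.com/myapp:1.4.2         # and promoted unchanged to production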

Operations

Since the application team has taken on the tasks of wiring up and configuring the application for deployment, the operations team can shift its focus toward the container platform. This includes setting up and maintaining the container platform's operational environments, monitoring and centralized logging for applications running in the cluster, and security/policy management within the cluster to ensure that applications and users are behaving as expected.

 

Container-first and strategic impact of containers


Containers add some great new possibilities when approaching your enterprise application strategy, primarily with respect to cloud migration and application modernization. Containers allow organizations to elevate this conversation from the tactics around cloud migration and application stacks to a common application platform where virtually any application stack runs more efficiently while at the same time making the applications cloud-portable. 

Container-first as a cloud adoption strategy

What if, before you started migrating all of your applications to a specific cloud provider, you instead containerized your applications first and then migrated them to the cloud? This is sometimes referred to as a container-first strategy. There are several significant benefits to this approach:

  • Abstracting platform-specific knowledge from application teams
  • Gaining operational efficiency (usually in the range of 15% to 70%) from containerized applications
  • The ability to move your applications between on-premises and any cloud provider with minimal effort

Note

Container-first thinking should reduce cloud-specific staffing needs; instead of needing a cloud admin per application, you need a cloud admin per container cluster. Once containerized, application migrations between cloud providers and on-premises should be measured in hours, not weeks or months.

Get ready to bring workloads back from the public cloud

Moving to the cloud is fun and cool! However, it can get very expensive and hard for an enterprise to control. Most of what I see from clients is that the cloud makes sense for highly variable workloads where elastic capacity is very important. However, when steady, predictable workloads are involved, many organizations find the public cloud to be too expensive and ultimately migrate them back to their data center or private cloud. This is referred to as workload repatriation and it's becoming a very common event.

Application modernization – the containerization path

Docker customers have documented significant reductions in operational costs achieved by simply containerizing traditional web-based (.NET 2.0+ and Java) applications and establishing a Docker Enterprise infrastructure to run them. This infrastructure can run in the cloud, on VMware, or even on bare metal. There is a methodology used by Docker solution architects to help enterprises migrate traditional applications. It starts with an accelerated PoC, moves to a pilot application, and finally to production, where each step builds on the last.

Support for microservices and DevOps

At this point in the game, most development teams won't even attempt to build and deploy microservices without containers and orchestrators. 12-factor style development teams have lots of moving parts and deploy often, a perfect fit for Docker and Kubernetes! These teams use containerized, decentralized CI/CD systems that come with built-in container support to achieve low error rates using high-speed automated deployments.

Compliance

While compliance for container platforms is achievable with third-party products such as Sysdig Secure, Twistlock, and AquaSec, Docker Enterprise 2.1 adds FIPS compliance support for Windows and RHEL platforms. This means the Docker platform is validated against widely accepted standards and best practices during Docker Enterprise product development. With this, companies and agencies gain additional confidence to adopt Docker containers. The Federal Information Processing Standard (FIPS) Publication 140-2, the most notable of these standards, verifies and permits the use of various security encryption modules within an organization's software stack, which now includes the Docker Enterprise Engine.

Note

For more information on FIPS, please visit Docker's website: https://docs.docker.com/compliance/nist/fips140_2/. For more information on general Docker compliance, please visit: https://docs.docker.com/compliance/.

 

How Docker Enterprise 2.0 has changed the game


In April of 2018, Docker Enterprise 2.0 was released. In this release, Docker added support for Kubernetes. Not some wrapped or repackaged version, but the real open source version. The advantage of running Kubernetes on Docker Enterprise 2.0 is simplicity. With Docker Enterprise 2.0, the Universal Control Plane includes pre-installed Kubernetes, which runs alongside Swarm. This means that enterprises do not need to choose between Kubernetes and Swarm; they can have both. This is a big deal for organizations that need to deal with both pockets of advanced microservice applications and simpler n-tier traditional applications. With Docker Enterprise 2.0, microservice teams are free to get their Kube on, while the rest of the teams get up to speed with Swarm. It also allows an enterprise to handle a more manageable learning curve by getting started with Swarm and later introducing more complex Kubernetes configurations as required.

Additionally, at DockerCon 2018, Docker announced some very exciting features on its near-term roadmap regarding integration between Docker Enterprise and cloud-based Kubernetes services. Essentially, Docker Enterprise will be able to take an on-premises Kubernetes app running on Docker Enterprise 2.0 and deploy it to a cloud-based Kubernetes provider such as Amazon or Google.

While Docker Enterprise 2.0 may not be the perfect choice for a small cloud-native startup, its flexibility, integrated security model, and single platform should make it a top consideration for on-premises and hybrid container platforms.

 

Summary


In the last 5 years, containers have come out of obscurity and into the spotlight across the software industry and DevOps. The profound organizational impact of containers spans software developers, IT admins, DevOps engineers, architects, and executives alike. From the beginning, Docker, Inc. has been at the center of the container movement and remains committed to its long-term success by supporting industry standards, the open source community, and most recently enterprise customers with a Kubernetes-capable Docker Enterprise 2.

In the next 5 years, the enterprise adoption of containers will blossom. Subsequently, most organizations will begin looking for an enterprise-grade solution that balances cost and security with speed and leading-edge platform features. Docker Enterprise's single pane of glass for hybrid cloud and on-premises clusters, along with support for the latest container technologies, including Kubernetes support, is very likely to draw the attention of astute IT leaders around the world. 

Coming up in Chapter 2, Docker Enterprise – an Architectural Overview, our journey continues as we explore the features and architecture of Docker Enterprise. 

 

Questions


  1. How long have containers been around?
  2. What is Docker, Inc. and what does it do?
  3. What is the difference between Docker Engine-Community and Docker Enterprise?
  4. What is the difference between Docker and Kubernetes?
  5. What is the best orchestrator for deploying simple n-tier web applications?
  6. How does Docker support Kubernetes?
  7. How does container-based development impact application developers? 
  8. Why would I need fewer cloud admins with a container-first strategy?
 
About the Author
  • Mark Panthofer

    Mark Panthofer earned a degree in computer engineering in 1991 and has since accumulated 20+ years of technology adoption experience in a wide variety of positions, ranging from software engineer to software executive. He is currently the vice president of NVISIA's technology centers, where he focuses on Docker-related technology adoption. As a Docker-accredited consultant and instructor, Mark's responsibilities include the following:
    • Providing training to leading commercial and government agencies on best practices with Docker Enterprise for developers, DevOps, and operations
    • Providing advisory services to regional IT organizations on enterprise container adoption
    • Co-organizing the Docker Chicago and Docker Milwaukee meetups
    • Collaborating with Docker's solution architects
