Mastering CoreOS

By Sreenivas Makam

About this book

CoreOS makes Google and Amazon-style Cloud infrastructure available for anyone building their own private Cloud. This book covers the CoreOS internals and the technologies used in the deployment of container-based distributed applications. It starts with an overview of CoreOS and distributed application development while sharing knowledge on related technologies. Critical CoreOS services and networking and storage considerations for CoreOS are covered next.

In the latter half of the book, you will learn about Container runtime systems such as Docker and Rkt and Container orchestration using Kubernetes. You will also find out about the integration of popular orchestration solutions such as OpenStack, the AWS Container service, and the Google Container Engine with CoreOS and Docker. Lastly, we cover troubleshooting as well as production considerations.

Publication date:
February 2016


Chapter 1. CoreOS Overview

CoreOS is a Container-optimized, Linux-based operating system for deploying distributed applications across a cluster of nodes. Along with providing a secure operating system, CoreOS provides services such as etcd and fleet that simplify Container-based distributed application deployment. Microservices is a software application development style in which applications are composed of small, independent services talking to each other over APIs. This chapter will provide you with an overview of Microservices and distributed application development concepts along with the basics of CoreOS, Containers, and Docker. After going through the chapter, you will be able to appreciate the role of CoreOS and Containers in the Microservices architecture.

The following topics will be covered in this chapter:

  • Distributed application development—an overview and components

  • Comparison of currently available minimalist Container-optimized OSes

  • Containers—technology and advantages

  • Docker—architecture and advantages

  • CoreOS—architecture and components

  • An overview of CoreOS components—systemd, etcd, fleet, flannel, and rkt

  • Docker versus Rkt

  • A workflow for distributed application development with Docker, Rkt, and CoreOS


Distributed application development

Distributed application development involves designing and coding a microservice-based application rather than creating a monolithic application. Each standalone service in the microservice-based application can be created as a Container. Distributed applications existed even before Containers were available. Containers provide the additional benefit of isolation and portability to each individual service in the distributed application. The following diagram shows you an example of a microservice-based application spanning multiple hosts:

Components of distributed application development

The following are the primary components of distributed application development. This assumes that individual services of the distributed application are created as Containers:

  • Applications or microservices.

  • Cloud infrastructure—public (AWS, GCE, and Digital Ocean) or private.

  • Base OS—CoreOS, Atomic, Rancher OS, and others.

  • Distributed data store and service discovery—etcd, consul, and Zookeeper.

  • Load balancer—NGINX and HAProxy.

  • Container runtime—Docker, Rkt, and LXC.

  • Container orchestration—Fleet, Kubernetes, Mesos, and Docker Swarm.

  • Storage—local or distributed storage. Some examples are GlusterFS and Ceph for cluster storage and AWS EBS for cloud storage. Flocker's upcoming storage driver plugin promises to work across different storage mechanisms.

  • Networking—using cloud-based networking such as AWS VPC, CoreOS Flannel, or Docker networking.

  • Miscellaneous—Container monitoring (cAdvisor, Sysdig, and New Relic) and logging (Logspout and Logentries).

  • An update strategy to update microservices, such as a rolling upgrade.

Advantages and disadvantages

The following are some advantages of distributed application development:

  • Application developers of each microservice can work independently. If necessary, different microservices can even have their own programming language.

  • Application component reuse becomes high. Different unrelated projects can use the same microservice.

  • Each individual service can be horizontally scaled. CPU and memory usage for each microservice can be tuned appropriately.

  • Infrastructure can be treated like cattle rather than a pet, and it is not necessary to differentiate between each individual infrastructure component.

  • Applications can be deployed in-house or on a public, private, or hybrid cloud.

The following are some problems associated with the microservices approach:

  • The number of microservices to manage can become huge and this makes it complex to manage the application.

  • Debugging can become difficult.

  • Maintaining integrity and consistency is difficult, so services must be designed to handle failures.

  • Tools are constantly changing, so there is a need to stay updated with current technologies.


A minimalist Container-optimized OS

This is a new OS category for developing distributed applications that has become popular in recent years. Traditional Linux-based OSes were bulky for Container deployment and did not natively provide the services that Containers need. The following are some common characteristics of a Container-optimized OS:

  • The OS needs to be bare-minimal and fast to boot up

  • It should have an automated update strategy

  • Application development should be done using Containers

  • Redundancy and clustering should be built-in

The following is a comparison of the features of four common Container-optimized OSes—CoreOS, Rancher OS, RedHat Atomic, and Ubuntu Snappy. Other OSes such as VMware Photon and Mesos DCOS have not been included.

CoreOS:

  • Company: CoreOS, Inc.
  • Container runtime: Docker and Rkt
  • Maturity: First release in 2013, relatively mature
  • Service management: Systemd and Fleet
  • Tools: Etcd, fleet, and flannel
  • Orchestration: Kubernetes and Tectonic
  • Update: Automatic, uses A and B partitions
  • Registry: Docker hub and Quay
  • Security: SELinux can be turned on

Rancher OS:

  • Company: Rancher Labs
  • Container runtime: Docker
  • Maturity: First release in early 2015, pretty new
  • Service management: System Docker manages system services and user Docker manages user containers
  • Tools: Rancher has tools for service discovery, load balancing, DNS, storage, and networking
  • Orchestration: Rancher's own orchestration and Kubernetes
  • Registry: Docker hub
  • Debugging: Rancher's own tools and external tools
  • Security: There is a plan to add SELinux and AppArmor support

RedHat Atomic:

  • Company: Red Hat
  • Container runtime: Docker
  • Maturity: First release in early 2015, pretty new
  • Service management: Systemd
  • Tools: Flannel and other RedHat tools
  • Orchestration: Kubernetes; Atomic App and Nulecule are also used
  • Update: Automatic, uses rpm-ostree
  • Registry: Docker hub
  • Debugging: RedHat tools
  • Security: SELinux enabled by default, additional security

Ubuntu Snappy:

  • Company: Canonical
  • Container runtime: Snappy packages and Docker
  • Maturity: First release in early 2015, pretty new
  • Service management: Systemd and Upstart
  • Tools: Ubuntu tools
  • Orchestration: Kubernetes and any other orchestration tool
  • Registry: Docker hub
  • Debugging: Ubuntu debug tools
  • Security: AppArmor security profile can be used



Containers do virtualization at the OS level while VMs do virtualization at the hardware level. Containers in a single host share the same kernel. As Containers are lightweight, hundreds of containers can run on a single host. In a microservices-based design, the approach taken is to split a single application into multiple small independent components and run each component as a Container. LXC, Docker, and Rkt are examples of Container runtime implementations.


The following are the two critical Linux kernel technologies that are used in Containers:

  • Namespaces: They virtualize processes, networks, filesystems, users, and so on

  • cgroups: They limit the usage of the CPU, memory, and I/O per group of processes
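
As a rough illustration of these two building blocks, the following commands show them in action; the commands and resource values are arbitrary examples:

# Namespaces: start a shell with its own PID, mount, and network namespaces
sudo unshare --fork --pid --mount-proc --net /bin/bash

# cgroups: cap the memory and CPU shares available to a Container
docker run -m 128m --cpu-shares 256 busybox sleep 60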


The following are some significant advantages of Containers:

  • Each container is isolated from other Containers. There is no issue of shared package management, shared libraries, and so on.

  • Compared to a VM, Containers have smaller footprints and are faster to load and run.

  • They provide an efficient usage of computing power.

  • They can work seamlessly across dev, test, and production. This makes Containers DevOps-friendly.

An overview of Docker architecture

Docker is a Container runtime implementation. Even though Containers were available for quite a long time, Docker revolutionized Container technology by making it easier to use. The following image shows you the main components of Docker (the Docker engine, Docker CLI, Docker REST, and Docker hub) and how they interact with each other:

Following are some details on the Docker architecture:

  • The Docker daemon runs in every host where Docker is installed and started.

  • Docker uses Linux kernel container facilities such as namespaces and cgroups through the libcontainer library.

  • The Docker client can run in the host machine or externally and it communicates with the Docker daemon using the REST interface. There is also a CLI interface that the Docker client provides.

  • The Docker hub is the repository for Docker images. Both private and public images can be hosted in the Docker hub repository.

  • Dockerfile is used to create Container images. A sample Dockerfile that creates a Container starting the Apache web service and exposing port 80 to the outside world is shown after this list.

  • The Docker platform as of release 1.9 includes orchestration tools such as Swarm, Compose, Kitematic, and Machine as well as native networking and storage solutions. Docker follows a batteries-included pluggable approach for orchestration, storage, and networking where a native Docker solution can be swapped with vendor plugins. For example, Weave can be used as an external networking plugin, Flocker can be used as an external storage plugin, and Kubernetes can be used as an external orchestration plugin. These external plugins can replace the native Docker solutions.
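
A minimal Dockerfile along these lines would do this; the base image and package choices are illustrative:

FROM ubuntu:14.04
# Install the Apache web server
RUN apt-get update && apt-get install -y apache2
# Expose port 80 to the outside world
EXPOSE 80
# Run Apache in the foreground so that the Container keeps running
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]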

Advantages of Docker

The following are some significant advantages of Docker:

  • Docker has revolutionized Container packaging and tools around Containers and this has helped both application developers and infrastructure administrators

  • It is easier to deploy and upgrade individual containers

  • It is more suitable for the microservices architecture

  • It works great across all Linux distributions as long as the kernel version is greater than or equal to 3.10

  • The Union filesystem makes it faster to download and keep different versions of container images

  • Container management tools such as Dockerfile, Docker engine CLI, Machine, Compose, and Swarm make it easy to manage containers

  • Docker provides an easy way to share Container images using public and private registry services



CoreOS belongs to the minimalist Container-optimized OS category. CoreOS is the first OS in this category and many new OSes have appeared recently in the same category. CoreOS's mission is to improve the security and reliability of the Internet. CoreOS is a pioneer in this space and its first alpha release was in July 2013. A lot of developments have happened in the past two years in the area of networking, distributed storage, container runtime, authentication, and security. CoreOS is used by PaaS providers (such as Dokku and Deis), Web application development companies, and many enterprise and service providers developing distributed applications.


The following are some of the key CoreOS properties:

  • The kernel is very small and fast to boot up.

  • The base OS and all services are open sourced. Services can also be used standalone in non-CoreOS systems.

  • No package management is provided by the OS. Libraries and packages are part of the application developed using Containers.

  • It enables secure, large server clusters that can be used for distributed application development.

  • It is based on principles from the Google Chrome OS.

  • Container runtime, SSH, and kernel are the primary components.

  • Every process is managed by systemd.

  • Etcd, fleet, and flannel are all controller units running on top of the kernel.

  • It supports both Docker and Rkt Container runtime.

  • Automatic updates are provided with A and B partitions.

  • The Quay registry service can be used to store public and private Container images.

  • CoreOS release channels (stable, beta, and alpha) are used to control the release cycle.

  • Commercial products include the Coreupdate service (part of the commercially managed and enterprise CoreOS), Quay enterprise, and Tectonic (CoreOS + Kubernetes).

  • It currently runs on x86 processors.


The following are some significant advantages of CoreOS:

  • The kernel auto-update feature protects the kernel from security vulnerabilities.

  • The CoreOS memory footprint is very small.

  • The management of CoreOS machines is done at the cluster level rather than at an individual machine level.

  • It provides service-level (using systemd) and node-level (using fleet) redundancy.

  • Quay provides you with a private and public Container repository. The repository can be used for both Docker and Rkt containers.

  • Fleet is used for basic service orchestration and Kubernetes is used for application service orchestration.

  • It is supported by all major cloud providers such as AWS, GCE, Azure, and DigitalOcean.

  • The majority of CoreOS components are open sourced, and customers can choose the combination of tools necessary for their specific application.

Supported platforms

The following are the official and community-supported CoreOS platforms. This is not an exhaustive list.


For an exhaustive list of CoreOS-supported platforms, please refer to the CoreOS documentation.

The platforms that are officially supported are as follows:

  • Cloud platforms such as AWS, GCE, Microsoft Azure, DigitalOcean, and OpenStack

  • Bare metal with PXE

  • Vagrant

The platforms that are community-supported are as follows:

  • CloudStack

  • VMware

CoreOS components

The following are the CoreOS core components and CoreOS ecosystem. The ecosystem can become pretty large if automation, management, and monitoring tools are included. These have not been included here.

  • Core components: Kernel, systemd, etcd, fleet, flannel, and rkt

  • CoreOS ecosystem: Docker and Kubernetes

The following image shows you the different layers in the CoreOS architecture:


CoreOS uses the latest Linux kernel in its distribution. The following screenshot shows the Linux kernel version running in the CoreOS stable release 766.3.0:
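
The running kernel and CoreOS version can be checked on a node with commands such as:

uname -r              # kernel version
cat /etc/os-release   # CoreOS build and version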


Systemd is an init system used by CoreOS to start, stop, and manage processes. SysVinit is one of the oldest init systems. The following are some of the common init systems used in the Unix world:

  • Systemd: CoreOS and RedHat

  • Upstart: Ubuntu

  • Supervisord: The Python world

The following are some of the common functionality performed by an init system:

  • It is the first process to start

  • It controls the ordering and execution of all the user processes

  • It takes care of restarting processes if they die or hang

  • It takes care of process ownership and resources

The following are some specifics of systemd:

  • Every process in systemd runs in one cgroup and this includes forked processes. If the systemd service is killed, all the processes associated with the service, including forked processes, are killed. This also provides you with a nice way to control resource usage. If we run a Container in systemd, we can control the resource usage even if the container contains multiple processes. Additionally, systemd takes care of restarting containers that die if we specify the restart option in systemd.

  • Systemd units are run and controlled on a single machine.

  • These are some systemd unit types—service, socket, device, and mount.

  • The Service type is the most common type and is used to define a service with its dependencies. The Socket type is used to expose services to the external world. For example, docker.service exposes external connectivity to the Docker engine through docker.socket. Sockets can also be used to export logs to external machines.

  • The systemctl CLI can be used to control Systemd units.

Systemd units

The following are some important systemd units in a CoreOS system.


The following is an example etcd2.service unit file:
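
This is a reconstruction for illustration; the unit shipped with a given CoreOS release may differ slightly:

[Unit]
Description=etcd2
Conflicts=etcd.service

[Service]
# %m is replaced with the machine ID of the node running the service
User=etcd
Type=notify
Environment=ETCD_DATA_DIR=/var/lib/etcd2
Environment=ETCD_NAME=%m
ExecStart=/usr/bin/etcd2
Restart=always
RestartSec=10s
LimitNOFILE=40000

[Install]
WantedBy=multi-user.target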

The following are some details about the etcd2 service unit file:

  • All units have the [Unit] and [Install] sections. There is a type-specific section such as [Service] for service units.

  • The Conflicts option ensures that either etcd or etcd2 can run, but not both.

  • The Environment option specifies the environment variables to be used by etcd2. The %m unit specifier allows the machine ID to be taken automatically based on where the service is running.

  • The ExecStart option specifies the executable to be run.

  • The Restart option specifies whether the service can be restarted. The RestartSec option specifies the time interval after which the service should be restarted.

  • LimitNOFILE specifies the limit on the number of open files.

  • The WantedBy option in the Install section specifies the group to which this service belongs. The grouping mechanism allows systemd to start up groups of processes at the same time.


The following is an example of the fleet.service unit file:
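
This is a reconstruction for illustration; the unit shipped with a given CoreOS release may differ slightly:

[Unit]
Description=fleet daemon
# Fleet depends on etcd (either version) and on its socket unit
After=etcd.service
After=etcd2.service
Wants=fleet.socket
After=fleet.socket

[Service]
ExecStart=/usr/bin/fleetd
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target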

In the preceding unit file, we can see two dependencies for fleet.service. etcd.service and etcd2.service are specified as dependencies because Fleet depends on them to communicate between fleet agents in different nodes. The fleet.socket socket unit is also specified as a dependency as it is used by external clients to talk to Fleet.


The Docker service consists of the following components:

  • Docker.service: This starts the Docker daemon

  • Docker.socket: This allows communication with the Docker daemon from the CoreOS node

  • Docker-tcp.socket: This allows communication with the Docker daemon from external hosts with port 2375 as the listening port

The following docker.service unit file starts the Docker daemon:
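
A simplified sketch is shown here; the unit shipped with a given CoreOS release contains additional options and may use different paths:

[Unit]
Description=Docker Application Container Engine
After=docker.socket early-docker.target network.target
Requires=docker.socket early-docker.target

[Service]
# Simplified for illustration
Environment=TMPDIR=/var/tmp
MountFlags=slave
LimitNOFILE=1048576
LimitNPROC=1048576
ExecStart=/usr/lib/coreos/dockerd --daemon --host=fd:// $DOCKER_OPTS

[Install]
WantedBy=multi-user.target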

The following docker.socket unit file starts the local socket stream:
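
A representative docker.socket, closely following the upstream Docker unit, is:

[Unit]
Description=Docker Socket for the API
PartOf=docker.service

[Socket]
# The local Unix socket used by the docker CLI by default
ListenStream=/var/run/docker.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target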



The following docker-tcp.socket unit file sets up a listening socket for remote client communication:
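
A docker-tcp.socket along the lines of the example in the CoreOS documentation is:

[Unit]
Description=Docker Socket for the API

[Socket]
# Listen on TCP port 2375 for remote Docker clients
ListenStream=2375
BindIPv6Only=both
Service=docker.service

[Install]
WantedBy=sockets.target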

The docker ps command uses docker.socket, while pointing the client at the TCP port (for example, docker -H tcp://127.0.0.1:2375 ps) uses the docker-tcp.socket unit to communicate with the Docker daemon running in the local system.

The procedure to start a simple systemd service

Let's start a simple hello1.service unit that runs a Docker busybox container, such as the following:
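
A minimal unit file along these lines matches the behavior described in this section:

[Unit]
Description=hello1

[Service]
# Reconstructed for illustration; Restart and RestartSec match the HA demonstration later in this section
Restart=always
RestartSec=30s
ExecStartPre=-/usr/bin/docker kill hello1
ExecStartPre=-/usr/bin/docker rm hello1
ExecStartPre=/usr/bin/docker pull busybox
ExecStart=/usr/bin/docker run --name hello1 busybox /bin/sh -c "while true; do echo Hello World; sleep 1; done"
ExecStop=/usr/bin/docker stop hello1

[Install]
WantedBy=multi-user.target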

The following are the steps to start hello1.service:

  1. Copy hello1.service as sudo to /etc/systemd/system.

  2. Enable the service:

    sudo systemctl enable /etc/systemd/system/hello1.service
  3. Start hello1.service:

    sudo systemctl start hello1.service

This creates the following link:

/etc/systemd/system/multi-user.target.wants $ ls -la
lrwxrwxrwx 1 root root   34 Aug 12 13:25 hello1.service -> /etc/systemd/system/hello1.service

Now, we can see the status of hello1.service with systemctl status hello1.service:

In the preceding output, we can see that the service is in the active state. At the end, we can also see stdout where the echo output is logged.

Let's look at the running Docker containers:


When starting Docker Containers with systemd, it is necessary to avoid using the -d option as it prevents the Container process from being monitored by systemd. More details can be found in the CoreOS documentation.

Demonstrating systemd HA

In the hello1.service created, we specified the Restart and RestartSec options. This means that the service should be restarted after 30 seconds in case the service exits for some reason.

Let's stop the Docker hello1 container:

The service gets restarted automatically after 30 seconds, as shown in the following screenshot:

The following screenshot shows you that the hello1 container is running again. From the Container status output, we can see that the container is up only for a minute:

We can also confirm the service restart from the systemd logs associated with the service (journalctl -u hello1.service). In the following output, we can see that the service exited and restarted after 30 seconds:


Etcd is a distributed key-value store used by all machines in the CoreOS cluster to read/write and exchange data. Etcd uses the Raft consensus algorithm to maintain a highly available cluster. Etcd is used to share configuration and monitoring data across CoreOS machines and for doing service discovery. All other CoreOS services such as Fleet and Flannel use etcd as a distributed database. Etcd can also be used standalone outside CoreOS. In fact, many complex distributed application projects such as Kubernetes and Cloudfoundry use etcd for their distributed key-value store. The etcdctl utility is the CLI frontend for etcd.

The following are two sample use cases of etcd.

  • Service discovery: Service discovery can be used to communicate service connectivity details across Containers. Let's take an example WordPress application with a WordPress application Container and a MySQL database Container. If the machine running the database Container wants to communicate its service IP address and port number, it can use etcd to write the relevant key and value; the WordPress Container on another host can then read the key's value and connect to the appropriate database (see the sketch after this list).

  • Configuration sharing: The Fleet master talks to Fleet agents using etcd to decide which node in the cluster will execute the Fleet service unit.
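
The following is a minimal sketch of the service discovery pattern described above; the key name, IP address, and port are illustrative and not part of any fixed schema:

# On the host running the MySQL Container: publish the connection details
etcdctl set /services/db '{"host": "10.1.10.5", "port": 3306}'

# On the host running the WordPress Container: look them up
etcdctl get /services/db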

Etcd discovery

The members in the cluster discover themselves using either a static or dynamic approach. In the static approach, we need to mention the IP addresses of all the neighbors statically in every node of the cluster. In the dynamic approach, we use the discovery token approach, where we get a discovery token from a central etcd server and use it in all the members of the cluster so that the members can discover each other.

A discovery token is requested from the central discovery service by specifying the required cluster size.


The following is an example of getting a discovery token for a cluster size of three:
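
The token is requested from the public discovery service; the command returns a unique discovery URL that is then passed to every member of the cluster:

# Request a discovery URL for a three-node cluster
curl -w "\n" 'https://discovery.etcd.io/new?size=3'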

The discovery token feature is hosted by CoreOS and is implemented as an etcd cluster as well.

Cluster size

It is preferable to have an odd-sized etcd cluster as it gives a better failure tolerance. The following table shows the majority count and failure tolerance for common cluster sizes up to five. With a cluster size of two, we cannot determine majority.

Cluster size    Majority    Failure tolerance

1               1           0

3               2           1

5               3           2

The Majority count tells us the number of nodes that is necessary to have a working cluster, and failure tolerance tells us the number of nodes that can fail and still keep the cluster operational.

Etcd cluster details

The following screenshot shows the Etcd member list in a 3 node CoreOS cluster:
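
The member list is obtained with the etcdctl CLI:

etcdctl member list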

We can see that there are three members that are part of the etcd cluster with their machine ID, machine name, IP address, and port numbers used for etcd server-to-server and client-to-server communication.

The following output shows you the etcd cluster health:
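
The cluster health can be checked with:

etcdctl cluster-health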

Here, we can see that all three members of the etcd cluster are healthy.

The following output shows you etcd statistics with the cluster leader:
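
These statistics are exposed through the etcd v2 HTTP API; a member's own statistics, which include its ID and the current leader ID, can be queried as follows on any member, including the leader (the address is an example):

curl http://127.0.0.1:2379/v2/stats/self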

We can see that the member ID matches with the leader ID, 41419684c778c117.

The following output shows you etcd statistics with the cluster member:

Simple set and get operations using etcd

In the following example, we will set the /message1 key to the Book1 value and then later retrieve the value of the /message1 key:
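
Using the etcdctl CLI, this looks as follows:

# Set the key
etcdctl set /message1 Book1
# Retrieve it later
etcdctl get /message1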


Fleet is a cluster manager/scheduler that controls service creation at the cluster level. Just as systemd is the init system for a node, Fleet serves as the init system for a cluster. Fleet uses etcd for internode communication.

The Fleet architecture

The following image shows you the components of the Fleet architecture:

  • Fleet uses a master-slave model, with the Fleet engine playing the master role and the Fleet agent playing the slave role. The Fleet engine is responsible for scheduling Fleet units, and the Fleet agent is responsible for executing the units as well as reporting the status back to the engine.

  • One master engine is elected among the CoreOS cluster using etcd.

  • When the user starts a Fleet service, each agent bids for that service. Fleet uses a very simple least-loaded scheduling algorithm to schedule the unit to the appropriate node. Fleet units can also carry metadata that is useful to control where the unit runs, based on node properties as well as on the other services running on that particular node.

  • The Fleet agent processes the unit and gives it to systemd for execution.

  • If any node dies, a new Fleet engine is elected and the scheduled units in that node are rescheduled to a new node. Systemd provides HA at the node level; Fleet provides HA at the cluster level.

Considering that CoreOS and Google are working closely on the Kubernetes project, a common question that comes up is the role of Fleet if Kubernetes is going to do container orchestration. Fleet is typically used for the orchestration of critical system services using systemd while Kubernetes is used for application container orchestration. Kubernetes is composed of multiple services such as the kubelet server, API server, scheduler, and replication controller and they all run as Fleet units. For smaller deployments, Fleet can also be used for application orchestration.

A Fleet scheduling example

The following is a three-node CoreOS cluster with some metadata present for each node:
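
The machines and their metadata can be listed with fleetctl:

fleetctl list-machines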

A global unit example

A global unit executes the same service unit on all the nodes in the cluster.

The following is a sample helloglobal.service:

[Unit]
Description=My Service

[Service]
ExecStartPre=-/usr/bin/docker kill hello
ExecStartPre=-/usr/bin/docker rm hello
ExecStartPre=/usr/bin/docker pull busybox
ExecStart=/usr/bin/docker run --name hello busybox /bin/sh -c "while true; do echo Hello World; sleep 1; done"
ExecStop=/usr/bin/docker stop hello

[X-Fleet]
Global=true

Let's execute the unit as follows:
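
Assuming the unit file is in the current directory:

fleetctl start helloglobal.service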

We can see that the same service is started on all three nodes:
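
This can be verified by listing the unit states across the cluster:

fleetctl list-units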

Scheduling based on metadata

Let's say that we have a three-node CoreOS cluster with the following metadata:

  • Node1 (compute=web, rack=rack1)

  • Node2 (compute=web, rack=rack2)

  • Node3 (compute=db, rack=rack3)

We have used the compute metadata to identify the type of machine as web or db. We have used the rack metadata to identify the rack number. Fleet metadata for a node can be specified in the Fleet section of the cloud-config.

Let's start a web service and database service with each having its corresponding metadata and see where they get scheduled.

This is the web service:

[Unit]
Description=Apache web server service

[Service]
EnvironmentFile=/etc/environment
ExecStartPre=-/usr/bin/docker kill nginx
ExecStartPre=-/usr/bin/docker rm nginx
ExecStartPre=/usr/bin/docker pull nginx
ExecStart=/usr/bin/docker run --name nginx -p ${COREOS_PUBLIC_IPV4}:8080:80 nginx
ExecStop=/usr/bin/docker stop nginx

[X-Fleet]
MachineMetadata=compute=web


This is the database service:

[Unit]
Description=Redis DB service

[Service]
ExecStartPre=-/usr/bin/docker kill redis
ExecStartPre=-/usr/bin/docker rm redis
ExecStartPre=/usr/bin/docker pull redis
ExecStart=/usr/bin/docker run --name redis redis
ExecStop=/usr/bin/docker stop redis

[X-Fleet]
MachineMetadata=compute=db


Let's start the services using Fleet:
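
Assuming both unit files are in the current directory:

fleetctl start nginxweb.service
fleetctl start nginxdb.service
# Check where each unit was scheduled
fleetctl list-units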

As we can see, nginxweb.service got started on Node1 and nginxdb.service got started on Node3. This is because Node1 and Node2 were of the web type and Node3 was of the db type.

Fleet HA

When any of the nodes has an issue and does not respond, Fleet automatically takes care of scheduling the service units to the next appropriate machine.

From the preceding example, let's reboot Node1, which has nginxweb.service. The service gets scheduled to Node2 and not to Node3 because Node2 has the web metadata:

In the preceding output, we can see that nginxweb.service is rescheduled to Node2 and that Node1 is not visible in the Fleet cluster.


Flannel uses an Overlay network to allow Containers across different hosts to talk to each other. Flannel is not part of the base CoreOS image. This is done to keep the CoreOS image size minimal. When Flannel is started, the flannel container image is retrieved from the Container image repository. The Docker daemon is typically started after the Flannel service so that containers can get the IP address assigned by Flannel. This represents a chicken-and-egg problem as Docker is necessary to download the Flannel image. The CoreOS team has solved this problem by running a master Docker service whose only purpose is to download the Flannel container.

The following image shows you how Flannel agents in each node communicate using etcd:

The following are some Flannel internals:

  • Flannel runs without a central server and uses etcd for communication between the nodes.

  • As part of starting Flannel, we need to supply a configuration that contains the IP subnet to be used for the cluster as well as the backend protocol method (such as UDP and VXLAN). A sample configuration that specifies the subnet range and UDP as the backend protocol is shown after this list.

  • Each node in the cluster requests an IP address range for containers created in that host and registers this IP range with etcd.

  • As every node in the cluster knows the IP address range allocated for every other node, it knows how to reach containers created on any node in the cluster.

  • When containers are created, containers get an IP address in the range allocated to the node.

  • When Containers need to talk across hosts, Flannel does the encapsulation based on the backend encapsulation protocol chosen. Flannel, in the destination node, de-encapsulates the packet and hands it over to the Container.

  • By not using port-based mapping to talk across containers, Flannel simplifies Container-to-Container communication.
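
The network configuration referenced above is stored as JSON; a sample along these lines, with the subnet range as an example value, selects UDP as the backend:

{
  "Network": "10.1.0.0/16",
  "Backend": {
    "Type": "udp"
  }
}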

The following image shows the data path for Container-to-Container communication using Flannel:

A Flannel service unit

The following is an example of configuring the flannel service unit with the IP range to be used for the flannel network.
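
One common way to do this on CoreOS is a cloud-config drop-in that writes the network configuration into etcd before flanneld starts; the drop-in name and subnet range below are illustrative:

coreos:
  units:
    - name: flanneld.service
      drop-ins:
        - name: 50-network-config.conf
          content: |
            [Service]
            # The subnet range is an example value
            ExecStartPre=/usr/bin/etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'
      command: start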

In a three-node etcd cluster, the following is a sample output that shows the Container IP address range picked by each node. Each node requests an IP range with a 24-bit mask, and nodes A, B, and C each pick a distinct /24 subnet from the configured range.


Rkt is the Container runtime developed by CoreOS. Rkt does not have a daemon and is managed by systemd. Rkt uses the Application Container Image (ACI) format, which follows the APPC specification. Rkt's execution is split into three stages. This approach was taken so that some of the stages can be replaced by a different implementation if needed. The following are details on the three stages of Rkt execution:

Stage 0:

This is the first stage of Container execution. This stage does image discovery and retrieval, and sets up the filesystem for stages 1 and 2.

Stage 1:

This stage sets up the execution environment for Containers (such as the Container namespaces and cgroups) using the filesystem set up by stage 0.

Stage 2:

This stage executes the Container using the execution environment set up by stage 1 and the filesystem set up by stage 0.
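
As a quick illustration, fetching and running a publicly available ACI looks as follows; the image name and version are examples, and the rkt CLI has changed across releases:

# Trust the image publisher's signing key (the prefix is an example)
sudo rkt trust --prefix=coreos.com/etcd
# Discover, fetch, and run the image; these steps map to stages 0, 1, and 2
sudo rkt run coreos.com/etcd:v2.2.0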

As of release 0.10.0, Rkt is still under active development and is not ready for production.

The CoreOS cluster architecture

Nodes in the CoreOS cluster are used to run critical CoreOS services such as etcd, fleet, Docker, systemd, flannel, and journald as well as application containers. It is important to avoid using the same host to run critical services as well as application containers so that there is no resource contention for critical services. This kind of scheduling can be achieved using the Fleet metadata to separate the core machines and worker machines. The following are two cluster approaches.

The development cluster

The following image shows a development cluster with three CoreOS nodes:

To try out CoreOS and etcd, we can start with a single-node cluster. With this approach, there is no need to have dynamic discovery of cluster members. Once this works fine, we can expand the cluster size to three or five to achieve redundancy. The static or dynamic discovery approach can be used to discover CoreOS members. As CoreOS critical services and application containers run in the same cluster, there could be resource contention in this approach.

The production cluster

The following image shows a production cluster with a three-node master cluster and five-node worker cluster:

We can have a three or five-node master cluster to run critical CoreOS services and then have a dynamic worker cluster to run application Containers. The master cluster will run etcd, fleet, and other critical services. In worker nodes, etcd will be set up to proxy to master nodes so that worker nodes can use master nodes for etcd communication. Fleet, in worker nodes, will also be set up to use etcd in master nodes.
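
A worker node's cloud-config can point etcd and fleet at the master cluster along these lines; the master addresses and metadata are examples:

#cloud-config
coreos:
  etcd2:
    # Run etcd in proxy mode and forward requests to the master cluster
    proxy: on
    listen-client-urls: http://0.0.0.0:2379
    initial-cluster: master1=http://10.0.0.101:2380,master2=http://10.0.0.102:2380,master3=http://10.0.0.103:2380
  fleet:
    metadata: "role=worker"
  units:
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start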


Docker versus Rkt

As this is a controversial topic, I will try to take a neutral stance here.


The CoreOS team started the Rkt project for the following reasons:

  • The Container interoperability issue needed to be addressed, since the Docker runtime was not fully following the Container manifest specification

  • Getting Docker to run under systemd had some issues because Docker runs as a daemon

  • Container image discovery and image signing required improvements

  • Security model for Containers needed to be improved

APPC versus OCI

APPC and OCI define Container standards.

The APPC specification is primarily driven by CoreOS along with a few other community members. The APPC specification defines the following:

  • Image format: Packaging and signing

  • Runtime: How to execute the Container

  • Naming and Sharing: Automatic discovery

APPC is implemented by Rkt, Kurma, Jetpack, and others.

OCI (the Open Container Initiative) is an open container project started in April 2015 and has members from all major companies, including Docker and CoreOS. Runc is an implementation of OCI. The following image shows you how APPC, OCI, Docker, and Rkt are related:

The current status

Based on the latest developments, there is consensus in the community on having a common container specification called the Open Container Specification. Anyone can develop a Container runtime based on this specification. This will allow Container images to be interoperable. Docker, Rkt, and Odin are examples of Container runtimes.

The original APPC container specification proposed by CoreOS covers four different elements of container management—packaging, signing, naming (sharing the container with others), and runtime. As per the latest CoreOS blog update, APPC and OCI will intersect only on runtime; APPC will continue to focus on image format, signing, and distribution. Runc is an implementation of OCI, and Docker uses Runc.

Differences between Docker and Rkt

Following are some differences between Docker and Rkt Container runtimes:

  • Docker uses LibContainer APIs to access the Linux kernel Container functionality while Rkt uses the Systemd-nspawn API to access the Linux kernel Container functionality. The following image illustrates this:

  • Docker requires a daemon to manage Container images, remote APIs, and Container processes. Rkt is daemonless and Container resources are managed by systemd. This makes Rkt integrate better with init systems such as systemd and upstart.

  • Docker has a complete platform to manage Containers, with tools such as Machine, Compose, and Swarm. CoreOS uses some of its own tools, such as Flannel for networking, and combines them with tools such as Kubernetes for orchestration.

  • Docker is pretty mature and production-ready as compared to Rkt. As of the Rkt release 0.10.0, Rkt is not yet ready for production.

  • For the Container image registry, Docker has the Docker hub and Rkt has Quay. Quay also has Docker images.

CoreOS is planning to support both Docker and Rkt and users will have a choice to use the corresponding Container runtime for their applications.


A workflow for distributed application development with Docker and CoreOS

The following is a typical workflow to develop microservices using Docker and CoreOS:

  • Select applications that need to be containerized. These could be greenfield or legacy applications. For legacy applications, reverse engineering might be required to split the monolithic application and containerize the individual components.

  • Create a Dockerfile for each microservice. The Dockerfile defines how to create the Container image from the base image. Dockerfile itself could be source-controlled.

  • Split the stateless and stateful pieces of the application. For stateful applications, a storage strategy needs to be decided.

  • Microservices need to talk to each other and some of the services should be reachable externally. Assuming that basic network connectivity between services is available, services can talk to each other either statically by defining a service name to IP address and port number mapping or by using service discovery where services can dynamically discover and talk to each other.

  • Docker container images need to be stored in a private or public repository so that they can be shared among development, QA, and production teams.

  • The application can be deployed in a private or public cloud. An appropriate infrastructure has to be selected based on the business need.

  • Select the CoreOS cluster size and cluster architecture. It's better to make infrastructure dynamically scalable.

  • Write CoreOS unit files for basic services such as etcd, fleet, and flannel.

  • Finalize a storage strategy—local versus distributed versus cloud.

  • For orchestration of smaller applications, fleet can be used. For complex applications, an orchestration solution such as Kubernetes will be necessary.

  • For production clusters, appropriate monitoring, logging, and upgrading strategies also need to be worked out.



In this chapter, we covered the basics of CoreOS, Containers, and Docker and how they help in distributed application development and deployment. These technologies are under active development and will revolutionize and create a new software development and distribution model. We will explore each individual topic in detail in the following chapters. In the next chapter, we will cover how to set up the CoreOS development environment in Vagrant as well as in a public cloud.




About the Author

  • Sreenivas Makam

    Sreenivas Makam is currently working as a senior engineering manager at Cisco Systems, Bangalore. He has a master's in electrical engineering and around 18 years of experience in the networking industry. He has worked in both start-ups and big established companies. His interests include SDN, NFV, Network Automation, DevOps, and cloud technologies, and he likes to try out and follow open source projects in these areas. His blog can be found at and his hacky code at

    Sreenivas is part of the Docker bloggers forum, and his blog articles have been published in Docker weekly newsletters. He has done technical reviewing for Mastering Ansible (Packt Publishing) and the Ansible Networking Report (O'Reilly). He has also given presentations at Docker meetups in Bangalore. Sreenivas has one approved patent.

