
Getting Started with Flocker

2016-11-11

In this article by Russ McKendrick, the author of the book Docker Data Management with Flocker, we are going to look at the problems Flocker has been developed to solve, as well as jump in at the deep end and perform our first installation of Flocker.


By the end of the article, you will have:

  • Configured and used the default Docker volume driver
  • Learned a little about how Flocker came about and who wrote it
  • Installed and configured a Flocker control and storage node
  • Integrated your Flocker installation with Docker
  • Visualized your volumes using the Volume Hub
  • Interacted with your Flocker installation using the Flocker CLI
  • Installed and used dvol

However, before we start to look at both Docker and Flocker, we should talk about compute and storage resources.

Compute and Storage Resource

In later articles, we will be looking at running Flocker on public cloud services such as Amazon Web Services and Google Cloud, so we will not be using those services for the practical examples in our early articles; instead, I will be using server instances launched in DigitalOcean.

DigitalOcean has quickly become the platform of choice for both developers and system administrators who want to experiment with new technologies and services.

While their cloud service is simple to use, their SSD-backed virtual machines offer the performance required to run modern software stacks at a fraction of the price of other public clouds, making them perfect for launching prototyping and sandbox environments.

For more information on DigitalOcean, please see their website at: https://www.digitalocean.com/.

The main reason I chose to use an external cloud provider rather than local virtual machines is that we will need to launch multiple hosts to run Flocker.

You do not have to use DigitalOcean; any provider will do, as long as you are able to launch multiple virtual machines and attach some sort of block storage to your instances as a secondary device.
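
Once your instances are up, a quick way to confirm that a secondary block device is attached is to list the block devices on the host. This is a sanity check I am adding here, not part of the original walk-through, and the device names shown in the comments are only examples, as they vary by provider:

```shell
# List the block devices attached to this host; on a DigitalOcean droplet the
# root disk is typically /dev/vda, with attached volumes appearing as /dev/sda.
# Device names are provider-specific, so treat these names as examples only.
lsblk || echo "lsblk is not available on this system"
```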

Finally, where we are doing manual installations of Flocker and Docker, I will give instructions which cover both CentOS 7.2 and Ubuntu 16.04.

Docker

If you are reading this article then Docker does not need much of an introduction; it is one of the most talked-about technologies of recent years, quickly gaining support from pretty much all of what I would consider the big players in software.

Companies such as Google, Microsoft, Red Hat, VMware, IBM, Amazon Web Services, and Hewlett Packard all offer solutions based on Docker's container technology or have contributed towards its development.

Rather than me giving you a current state of the union on Docker, I would recommend you watch the opening keynote from the 2016 Docker Conference; it gives a very good rundown of how far Docker has come over the last few years. The video can be found at the following URL: https://www.youtube.com/watch?v=vE1iDPx6-Ok.

Now you are all caught up with what's going on in the world of Docker, let's take a look at what storage options you get out of the box with the core Docker Engine.

Installing Docker Engine

Let's start with a small server instance, install Docker, and then run through a couple of scenarios where we may need storage for our containerized application:

  1. Before installing Docker, it is always best to check that you have the latest updates installed; to do this, run one of the following commands. On CentOS you need to run:
    yum -y update
    and for Ubuntu run:
    apt-get update && apt-get -y upgrade

    On CentOS, yum will check for and automatically install any available updates; on Ubuntu, apt-get update refreshes the package lists and apt-get upgrade then installs any available updates.

    Once your server instance is up-to-date, you can go ahead and install Docker. The simplest way of doing this is to run the installation script provided by Docker by entering the following command:

    curl -fsSL https://get.docker.com/ | sh

    This method of installation works on both CentOS and Ubuntu; the script checks to see which packages need to be installed and then installs the latest version of the Docker Engine from Docker's own repositories.

    If you would prefer to check the contents of the script before running it on your server instance, you can go to https://get.docker.com/ in your browser; this will display the full bash script, so you can see exactly what it will do when executed on your server instance.

  2. If everything worked as expected, then you will have the latest stable version of Docker installed. You can check this by running the following command:
    docker --version

    This will return the version and build of Docker which was installed; in my case this is Docker version 1.12.0, build 8eab29e.

  3. On both operating systems, you can check that Docker is running by using the following command:
    systemctl status docker
  4. If Docker is stopped then you can use the following command to start it:
    systemctl start docker
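
Beyond checking the version, a quick end-to-end test is to run Docker's hello-world image. This is a common sanity check rather than part of the original walk-through, and it assumes the install script above completed successfully:

```shell
# Run Docker's minimal test image; a successful run confirms that the client
# can talk to the daemon and that images can be pulled and containers started.
if command -v docker >/dev/null 2>&1; then
    docker run --rm hello-world
else
    echo "docker is not installed on this machine"
fi
```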

The Docker local volume driver

Now we have Docker installed and running on our server instance, we can start launching containers. From here, the instructions are the same if you are running CentOS or Ubuntu.

We are going to be launching an application called Moby Counter; this was written by Kai Davenport to demonstrate maintaining state between containers. It is a Node.js application which allows you to draw (using Docker logos) in your browser window.


The coordinates for your drawing are stored in a Redis key-value store, the idea being that if you stop, start, and even remove the Redis service, your drawing should persist:

  1. Let's start by launching our Redis container; to do this, run the following command:
    docker run -itd \
      --name redis \
      redis:alpine \
      redis-server --appendonly yes

    Using a backslash: As we will sometimes have a lot of options to pass to the commands we are going to be running, we are going to be using a backslash (\) to split the command over multiple lines so that it's easier to follow what is going on, like in the command above.

    The Docker Hub is the registry service provided by Docker. The service allows you to distribute your container images, either by uploading images directly or by building from your Dockerfiles which are stored in either GitHub or Bitbucket. We will be using the Docker Hub throughout the article; for more information on the Docker Hub, see the official documentation at: https://docs.docker.com/docker-hub/.

  2. This will download the official Redis image (https://hub.docker.com/_/redis/) from the Docker Hub and then launch a container called redis running the redis-server --appendonly yes command.
  3. Adding the --appendonly yes flag to the Redis server command means that each time Redis receives a command such as SET, it will not only execute the command against the dataset held in memory, it will also append it to what's called an AOF (append-only file). When Redis is restarted, the AOF file is replayed to restore the state of the dataset held in RAM.
  4. Note that we are using the Alpine Linux version of the container to keep the download size down. Alpine Linux is a Linux distribution which is designed to be small, in fact the official Docker image based on Alpine Linux is only 5 MB in size!
  5. Compare this to the sizes of other base operating system images, such as the official Debian, Ubuntu, and CentOS images, which you can see in the following terminal session:

    getting-started-flocker-img-0

    It is easy to see why the majority of official Docker images are using Alpine Linux as their base image.

  6. Now we have Redis up and running, let's launch our application container by running:
    docker run -itd --name moby-counter \
      -p 80:80 \
      --link redis:redis \
      russmckendrick/moby-counter
    
  7. You can check that your two containers are running using the docker ps command; you should see something like the following terminal session:

    getting-started-flocker-img-1

  8. Now you have the Moby Counter application container running, open your browser and go to the IP address of your server instance; in my case this was http://159.203.185.102. Once the page loads, you should see a white page which says Click to add logos… in the center.
  9. As per the instructions, click around the page to add some Docker logos:

    getting-started-flocker-img-2

  10. Now we have some data stored in the Redis store, let's do the equivalent of yanking the power cord out of the container by running the following command:
    docker rm -f redis
  11. This will cause your browser to look something like the following screen capture:

    getting-started-flocker-img-3

  12. Relaunching the Redis container using the following command:
    docker run -itd \
      --name redis \
      redis:alpine \
      redis-server --appendonly yes

    and refreshing our browser takes us back to the page which says Click to add logos…

  13. So what gives; why didn't you see your drawing? It's quite simple: we never told Docker to start the container with any persistent storage. Luckily, the official Redis image is smart and assumes that you should really be using persistent storage for your data, even if you don't tell it that you want to, as no one likes data loss.
  14. Using the docker volume ls command, you should be able to see that there are two volumes, both with really long names; in my case, the output of the command looked like the following terminal capture:

    getting-started-flocker-img-4

    So we now have two Docker volumes: one which contains our original "masterpiece" and a second which is blank.

  15. Before we remove our Redis container again, let's check to see which of the two Docker volumes is currently mounted on our Redis container; we can do this by running:
    docker inspect redis
  16. There is quite a bit of information shown when running the docker inspect command; the information we are after is in the Mounts section, and we need to know the Name:

    getting-started-flocker-img-5

    As you can see from the terminal output above, the currently mounted volume in my setup is called b76101d13afc2b33206f5a2bba9a3e9b9176f43ce57f74d5836c824c22c.

  17. Now we know the name of the blank volume, we know that the other volume has the data for our drawing. Let's terminate the running Redis container by running:
    docker rm -f redis

    and now launch our Redis container again, but this time telling Docker to use the local volume driver and also use the volume we know contains the data for our drawing:

    docker run -itd \
      --name redis \
      --volume-driver=local \
      -v 37cb395253624782836cc39be1aa815682b70f73371abb6d500a:/data \
      redis:alpine \
      redis-server --appendonly yes

    Make sure you replace the name of the volume which immediately follows the -v on the fourth line with the name of the volume you know contains your own data.

  18. After a few moments, refresh your browser and you should see your original drawing.
  19. Our volume was automatically created by the Redis container as it launched, which is why it has a unique identifier as its name. If we wanted to, we could give it a more friendly name by running the following command to launch our Redis container:
    docker run -itd \
      --name redis \
      --volume-driver=local \
      -v redis_data:/data \
      redis:alpine \
      redis-server --appendonly yes
  20. As you can see when running the docker volume ls command again, this is a lot more identifiable:

    getting-started-flocker-img-6
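
If you would rather not copy a long volume name by hand, as we did in step 17, you can capture it with a little shell instead. This is a sketch I am adding, not part of the original walk-through; the DATA_VOLUME variable and the head -n 1 filter are my own, and the filter assumes the volume you want happens to be listed first, so check docker volume ls before relying on it:

```shell
# Capture a volume name into a variable rather than pasting the long ID; the
# "head -n 1" filter is an assumption - adjust it so it selects the volume
# that actually holds your drawing's data.
if command -v docker >/dev/null 2>&1; then
    DATA_VOLUME=$(docker volume ls -q | head -n 1)
    echo "Mounting volume: ${DATA_VOLUME}"
    docker run -itd \
      --name redis \
      -v "${DATA_VOLUME}:/data" \
      redis:alpine \
      redis-server --appendonly yes
else
    echo "docker is not installed on this machine"
fi
```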

As we have seen with the simple example above, it is easy to create volumes to persist your data using Docker Engine; however, there is one drawback to using the default volume driver, and that is that your volumes are only available on a single Docker host.
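
You can see this single-host limitation for yourself by asking Docker where a volume physically lives; the Mountpoint it reports is a directory on this one machine's filesystem. A minimal sketch, assuming the redis_data volume from the previous step exists on your host:

```shell
# Print the host path backing the named volume; local volumes are just
# directories under Docker's data root, typically /var/lib/docker/volumes/.
if command -v docker >/dev/null 2>&1; then
    docker volume inspect --format '{{ .Mountpoint }}' redis_data
else
    echo "docker is not installed on this machine"
fi
```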

For most people just setting out on their journey into using containers this is fine; however, for people who want to host their applications in production or would like a more resilient persistent data store, using the local driver quickly becomes a single point of failure.

This is where Flocker comes in.

Summary

In this article, we have worked through the most basic Flocker installation possible; we have integrated the installation with Docker and the Volume Hub, and used the Flocker CLI to interact with the volumes created and managed by Flocker.

In the next article, we are going to look at how Flocker can be used to provide more resilient storage for a cluster of Docker hosts, rather than just the single host we have been using in this article.
