In this chapter, we will cover the following recipes:
Adding an Ubuntu Xenial (16.04 LTS) Vagrant box
Using a disposable Ubuntu Xenial (16.04) in seconds
Enabling VirtualBox Guest Additions in Vagrant
Using a disposable CentOS 7.x with VMware in seconds
Extending the VMware VM capabilities
Enabling multiprovider Vagrant environments
Customizing a Vagrant VM
Using Docker with Vagrant
Using Docker in Vagrant for a Ghost blog behind NGINX
Using Vagrant remotely with AWS EC2 and Docker
Simulating dynamic multiple host networking
Simulating a networked three-tier architecture app with Vagrant
Showing your work on the LAN while working with Laravel
Sharing access to your Vagrant environment with the world
Simulating Chef upgrades using Vagrant
Using Ansible with Vagrant to create a Docker host
Using Docker containers on CoreOS with Vagrant
Vagrant is a free and open source tool by HashiCorp aimed at building a repeatable development environment inside a virtual machine, using simple Ruby code. You can then distribute this simple file to other people, team members, and external contributors, so that they immediately have a working environment, as long as they have virtualization on their laptop. It also means that you can use a Mac laptop and, with a simple command, launch a fully configured Linux environment to use locally. Everyone can work in the same environment, regardless of their own local machine. Vagrant is also very useful for simulating full production environments, with multiple machines and specific operating system versions. Vagrant is compatible with most hypervisors, such as VMware, VirtualBox, or Parallels, and can be largely extended using plugins.
Vagrant uses boxes to run. These boxes are just packaged virtual machine images that are available, for example, from https://atlas.hashicorp.com/boxes/search, or you can alternatively build your own using various tools.
Vagrant can be greatly extended using plugins. There are plugins for almost anything you can think of, and most of them are community supported. From specific guest operating systems to remote IaaS providers, features around sharing, caching, or snapshotting, networking, testing, or specifics of Chef/Puppet, a lot can be done through plugins in Vagrant.
A list of all available plugins, including all Vagrant providers is available on the Vagrant wiki here: https://github.com/mitchellh/vagrant/wiki/Available-Vagrant-Plugins.
More information about all integrated providers can be found on Vagrant's website: https://www.vagrantup.com/docs/providers/.
You can download a Vagrant installer for your platform from https://www.vagrantup.com/downloads.html.
Vagrant boxes are referred to by their names, usually following the username/boxname naming scheme. A 64-bit Precise box released by Ubuntu is named ubuntu/precise64, while the centos/7 box will always be the latest official CentOS 7 box.
To step through this recipe, you will need the following:
A working Vagrant installation using the free and open source VirtualBox hypervisor
An Internet connection
Open a terminal and type the following code:
$ vagrant box add ubuntu/xenial64
==> box: Loading metadata for box 'ubuntu/xenial64'
    box: URL: https://atlas.hashicorp.com/ubuntu/xenial64
==> box: Adding box 'ubuntu/xenial64' (v20160815.0.0) for provider: virtualbox
    box: Downloading: https://atlas.hashicorp.com/ubuntu/boxes/xenial64/versions/20160815.0.0/providers/virtualbox.box
==> box: Successfully added box 'ubuntu/xenial64' (v20160815.0.0) for 'virtualbox'!
Vagrant knows where to look for the latest version of the requested box on the Atlas service and automatically downloads it over the Internet. All boxes are stored by default in ~/.vagrant.d/boxes.
If you're interested in creating your own base Vagrant boxes, refer to Packer (https://www.packer.io/) and the Chef Bento project (http://chef.github.io/bento/).
We want to access and use an Ubuntu Xenial system (16.04 LTS) as quickly as possible.
To do that, Vagrant uses a file named Vagrantfile to describe the Vagrant infrastructure. This file is in fact pure Ruby that Vagrant reads to manage your environment. Everything related to Vagrant is done inside a block such as the following:
Vagrant.configure("2") do |config|
  # all your Vagrant configuration here
end
To step through this recipe, you will need the following:
A working Vagrant installation
A working VirtualBox installation
An Internet connection
Create a folder for the project:
$ mkdir vagrant_ubuntu_xenial_1 && cd $_
Using your favorite editor, create this very minimal Vagrantfile to launch an ubuntu/xenial64 box:

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"
end
Now you can execute Vagrant, explicitly using the VirtualBox hypervisor:
$ vagrant up --provider=virtualbox
Within seconds, you'll have a running Ubuntu 16.04 Vagrant box on your host and you can do whatever you want with it. For example, start by logging into it via Secure Shell (SSH) by issuing the following vagrant command, and use the system normally:

$ vagrant ssh
Welcome to Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-34-generic x86_64)
[…]
ubuntu@ubuntu-xenial:~$ hostname
ubuntu-xenial
ubuntu@ubuntu-xenial:~$ free -m
ubuntu@ubuntu-xenial:~$ cat /proc/cpuinfo
When you're done with your Vagrant VM, you can simply destroy it:
$ vagrant destroy
==> default: Forcing shutdown of VM...
==> default: Destroying VM and associated drives...
Alternatively, we can just stop the Vagrant VM with the goal of restarting it later in its current state using vagrant halt:
$ vagrant halt
When you started Vagrant, it read the Vagrantfile, which asked for a specific box to run (Ubuntu Xenial). If you previously added the box, Vagrant launches it right away through the default hypervisor (in this case, VirtualBox); if it's a new box, Vagrant downloads it for you automatically. Vagrant then creates the required virtual network interfaces, and the Ubuntu VM gets a private IP address. Vagrant takes care of configuring SSH by exposing an available port and inserting a default key, so you can log into the VM via SSH without problems.
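Port forwarding isn't limited to the SSH port Vagrant sets up for you; you can forward additional guest ports in the Vagrantfile. A minimal sketch (the guest port 80 and host port 8080 here are arbitrary example values, not from this recipe):

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"
  # Make a web server in the guest reachable at http://localhost:8080 on the host
  config.vm.network "forwarded_port", guest: 80, host: 8080
end
```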
The VirtualBox Guest Additions are a set of drivers and applications to be deployed on a virtual machine to have better performance and enable features such as folder sharing. While it's possible to include the Guest Additions directly in the box, not all the boxes you'll find have it, and even when they do, they can be outdated very quickly.
The solution is to automatically deploy the VirtualBox Guest Additions on demand, through a plugin.
Note
The downside to using this plugin is that the Vagrant box may now take longer to boot, as it may need to download and install the right guest additions for the box.
To step through this recipe, you will need the following:
A working Vagrant installation
A working VirtualBox installation
An internet connection
The Vagrantfile from the previous recipe
Follow these steps to enable VirtualBox Guest Additions in Vagrant:
Install the vagrant-vbguest plugin:

$ vagrant plugin install vagrant-vbguest
Installing the 'vagrant-vbguest' plugin. This can take a few minutes...
Installed the plugin 'vagrant-vbguest (0.13.0)'!
Confirm that the plugin is installed:
$ vagrant plugin list
vagrant-vbguest (0.13.0)
Start Vagrant and see that the VirtualBox Guest Additions are installed:
$ vagrant up
[…]
Installing Virtualbox Guest Additions 5.0.26
[…]
Building the VirtualBox Guest Additions kernel modules ...done.
Doing non-kernel setup of the Guest Additions ...done.
Now, maybe you don't want to do this every time you start your Vagrant box, because it takes time and bandwidth, or because the minor difference between your host's VirtualBox version and the one already installed in the Vagrant box isn't a problem for you. In this case, you can simply tell Vagrant to disable the auto-update feature right from the Vagrantfile:
config.vbguest.auto_update = false
An even better way to keep your code compatible with people without this plugin is to use this plugin configuration only if the plugin is found by Vagrant itself:
if Vagrant.has_plugin?("vagrant-vbguest")
  config.vbguest.auto_update = false
end
The full Vagrantfile now looks like this:
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"

  if Vagrant.has_plugin?("vagrant-vbguest")
    config.vbguest.auto_update = false
  end
end
Vagrant plugins are automatically installed from the vendor's website, and made available globally on your system for all other Vagrant environments you'll run. Once the virtual machine is ready, the plugin will detect the operating system, decide if the Guest Additions need to be installed or not, and if they do, install the necessary tools to do that (compilers, kernel headers, and libraries), and finally download and install the corresponding Guest Additions.
Using Vagrant plugins also extends what you can do with the Vagrant CLI. In the case of the VirtualBox Guest Addition plugin, you can do a lot of things such as status checks, manage the installation, and much more:
$ vagrant vbguest --status
[default] GuestAdditions 5.0.26 running --- OK.
The plugin can later be called through Vagrant directly; here it's triggering the Guest Additions installation in the virtual machine:
$ vagrant vbguest --do install
Vagrant supports both VMware Workstation and VMware Fusion through official plugins available on the Vagrant store (https://www.vagrantup.com/vmware). Follow the indications from the official website to install the plugins.
Vagrant boxes depend on the hypervisor: a VirtualBox image won't run on VMware. You need to use dedicated images for each hypervisor you choose to use. For example, official Ubuntu releases only provide VirtualBox images. If you try to create a Vagrant box with one provider while using an image built for another provider, you'll get an error.
To step through this recipe, you will need the following:
A working Vagrant installation
A working VMware Workstation (PC) or Fusion (Mac) installation
A working Vagrant VMware plugin installation
An Internet connection
The Chef Bento project provides various multiprovider images we can use. For example, let's use a CentOS 7.2 with Vagrant (bento/centos-7.2) with this simplest Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.box = "bento/centos-7.2"
end
Start your CentOS 7.2 virtual environment and specify the hypervisor you want to run:
$ vagrant up --provider=vmware_fusion
$ vagrant ssh
You're now running a CentOS 7.2 Vagrant box using VMware!
Vagrant is powered by plugins extending its usage and capabilities. In this case, the Vagrant plugin for VMware delegates all the virtualization features to the VMware installation, removing the need for VirtualBox.
If VMware is your primary hypervisor, you'll soon tire of always specifying the provider on the command line. By setting the VAGRANT_DEFAULT_PROVIDER environment variable to the corresponding plugin, you will never have to specify the provider again; VMware will be the default:
$ export VAGRANT_DEFAULT_PROVIDER=vmware_fusion
$ vagrant up
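Since the Vagrantfile is plain Ruby, the same default can also be set from within the file itself, so everyone sharing it gets it automatically. A sketch, assuming the VMware plugin is installed (`||=` keeps any value already set in the user's shell):

```ruby
# Top of the Vagrantfile: default to VMware Fusion unless the
# environment already names a provider.
ENV['VAGRANT_DEFAULT_PROVIDER'] ||= 'vmware_fusion'

Vagrant.configure("2") do |config|
  config.vm.box = "bento/centos-7.2"
end
```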
The Chef Bento Project at http://chef.github.io/bento/
A community VMware vSphere plugin at https://github.com/nsidc/vagrant-vsphere
A community VMware vCloud Director plugin at https://github.com/frapposelli/vagrant-vcloud
A community VMware vCenter plugin at https://github.com/frapposelli/vagrant-vcenter
A community VMware vCloud Air plugin at https://github.com/frapposelli/vagrant-vcloudair
The hardware specifications of a Vagrant box vary from image to image, as they're specified at creation time. However, they're not fixed forever: they're just defaults. You can set the requirements right in the Vagrantfile, so you can keep a small Vagrant box for daily use and give it more resources on demand.
To step through this recipe, you will need the following:
A working Vagrant installation
A working VMware Workstation (PC) or Fusion (Mac) installation
A working Vagrant VMware plugin installation
An internet connection
The Vagrantfile from the previous recipe using a bento/centos-7.2 box
The VMware provider can be configured inside the following configuration blocks:
# VMware Fusion configuration
config.vm.provider "vmware_fusion" do |vmware|
  # enter all the vmware configuration here
end

# VMware Workstation configuration
config.vm.provider "vmware_workstation" do |vmware|
  # enter all the vmware configuration here
end
If the configuration is the same, you'll end up with a lot of duplicated code. Take advantage of the Ruby nature of the Vagrantfile and use a simple loop to iterate through both values:
["vmware_fusion", "vmware_workstation"].each do |vmware| config.vm.provider vmware do |v| # enter all the vmware configuration here end end
Our default Bento CentOS 7.2 image has only 512 MB of RAM and one CPU. Let's double that for better performance, using the vmx["numvcpus"] and vmx["memsize"] keys:
["vmware_fusion", "vmware_workstation"].each do |vmware| config.vm.provider vmware do |v| v.vmx["numvcpus"] = "2" v.vmx["memsize"] = "1024" end end
Start or restart your Vagrant machine to apply the changes:
$ vagrant up […]
Your box is now using two CPUs and 1 GB of RAM.
Virtual machine configuration is the last thing Vagrant does before starting up. Here, it simply tells VMware to allocate two CPUs and 1 GB of RAM to the virtual machine it's launching, the way you would have done manually from inside the software.
Vagrant's authors may merge both plugins into one at some point in the future. The current 4.x version of the plugins is still split.
The VMX format is not very well documented by VMware. The possible keys and values can be found across VMware's documentation on VMX configuration.
You might be running VMware on your laptop, but your coworker might not. Alternatively, you want people to have the choice, or you simply want both environments to work! We'll see how to build a single Vagrantfile to support them all.
To step through this recipe, you will need the following:
A working Vagrant installation
A working VirtualBox installation
A working VMware Workstation (PC) or Fusion (Mac) installation
A working Vagrant VMware plugin installation
An internet connection
The Vagrantfile from the previous recipe using a bento/centos-7.2 box
Some Vagrant boxes are available for multiple hypervisors, such as the CentOS 7 Bento box we previously used. This way, we can simply choose which one to use.
Let's start with our previous Vagrantfile including customizations for VMware:
Vagrant.configure("2") do |config|
  config.vm.box = "bento/centos-7.2"

  ["vmware_fusion", "vmware_workstation"].each do |vmware|
    config.vm.provider vmware do |v|
      v.vmx["numvcpus"] = "2"
      v.vmx["memsize"] = "1024"
    end
  end
end
How would we add the same configuration on VirtualBox as we have on VMware? Here's how to customize VirtualBox similarly in the Vagrantfile:
config.vm.provider :virtualbox do |vb|
  vb.memory = "1024"
  vb.cpus = "2"
end
Add this to your current Vagrantfile and reload; you'll get the requested resources from your hypervisor, be it VMware or VirtualBox.
It's nice, but we're still repeating the values, leading to possible errors or omissions in the future. Let's take advantage once again of the Ruby nature of our Vagrantfile and declare some meaningful variables at the top of the file:
vm_memory = 1024
vm_cpus = 2
Now replace the four values with their variable names and you're done: you're centrally managing the characteristics of the Vagrant environment you're using and distributing, whatever hypervisor is used.
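Put together, the variable-driven multi-hypervisor Vagrantfile might look like this (a sketch assembling the fragments above):

```ruby
vm_memory = 1024
vm_cpus = 2

Vagrant.configure("2") do |config|
  config.vm.box = "bento/centos-7.2"

  # VirtualBox takes plain memory/cpus settings...
  config.vm.provider :virtualbox do |vb|
    vb.memory = vm_memory
    vb.cpus = vm_cpus
  end

  # ...while the VMware providers expect VMX keys; both read the same variables
  ["vmware_fusion", "vmware_workstation"].each do |vmware|
    config.vm.provider vmware do |v|
      v.vmx["memsize"] = vm_memory.to_s
      v.vmx["numvcpus"] = vm_cpus.to_s
    end
  end
end
```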
Vagrant supports many configuration options through the Vagrantfile. Here are the most useful ones for daily use.
To step through this recipe, you will need the following:
A working Vagrant installation (with a hypervisor)
An Internet connection
The Vagrantfile from the previous recipe using a bento/centos-7.2 box
Here are some possible customizations for your Vagrant Virtual Machine.
If you want to specify the VM name right from Vagrant, just add the following:
config.vm.hostname = "vagrant-lab-1"
This will also add an entry with the hostname to the /etc/hosts file.
You may be on a slow internet connection, you may know you want to use the currently installed box, or maybe you're in a hurry and just want to get the job done; you can remove the check for a new version of the box at startup by adding the following:
config.vm.box_check_update = false
If you know you want to use a specific version of the box (maybe for debugging purposes or compliance) and not the latest, you can simply declare it as follows:
config.vm.box_version = "2.2.9"
A useful feature is to display some basic but relevant information to the user launching the Vagrant box, such as usage or connection information. Don't forget to escape the special characters. As it's Ruby, you can access all available variables, so the message can be even more dynamic and useful to the user:
config.vm.post_up_message = "Use \"vagrant ssh\" to log into the box. This VM uses #{vm_cpus} CPUs and #{vm_memory}MB of RAM."
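Combined, these customizations fit in a single Vagrantfile like this (a sketch; the hostname and the commented-out box version are only examples):

```ruby
vm_memory = 1024
vm_cpus = 2

Vagrant.configure("2") do |config|
  config.vm.box = "bento/centos-7.2"
  config.vm.hostname = "vagrant-lab-1"
  # skip the update check for faster, offline-friendly startups
  config.vm.box_check_update = false
  # pin a specific box version only when you need reproducibility
  # config.vm.box_version = "2.2.9"
  config.vm.post_up_message = "Use \"vagrant ssh\" to log into the box. " \
    "This VM uses #{vm_cpus} CPUs and #{vm_memory}MB of RAM."
end
```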
Development environments are often mixed, using both virtual machines and Docker containers. While virtual machines include everything needed to run a full operating system (memory, CPU, a kernel, and all required libraries), a container is much more lightweight and can share all this with its host, while keeping good isolation through special kernel features named cgroups. Docker containers help developers use, share, and ship a bundle including everything needed to run their application. Here, we'll show how to use Vagrant to start containers. Since Docker usage differs a little between Linux hosts and other platforms, the reference used here is the native Docker platform: Linux.
To step through this recipe, you will need the following:
A working Vagrant installation (no hypervisor needed)
A working Docker installation and basic Docker knowledge
An Internet connection
We'll see how to use, access, and manipulate an NGINX container in Vagrant using Docker as a provider.
Let's start with the simplest Vagrantfile possible, using the nginx:stable container with the Docker Vagrant provider:

Vagrant.configure("2") do |config|
  config.vm.hostname = "vagrant-docker-1"
  config.vm.post_up_message = "HTTP access: http://localhost/"

  config.vm.provider "docker" do |docker|
    docker.image = "nginx:stable"
  end
end
Simply start it up with the following code:
$ vagrant up --provider=docker
Bringing machine 'default' up with 'docker' provider...
==> default: Creating the container...
[…]
==> default: HTTP access: http://localhost/
Let's remove the need to specify the provider on the command line by setting the environment variable from Ruby at the top of the Vagrantfile:
ENV['VAGRANT_DEFAULT_PROVIDER'] = 'docker'
Now you can distribute your Vagrantfile and not worry about people forgetting to explicitly specify the Docker provider.
Okay, the previous example wasn't terribly useful as we didn't expose any ports. Let's tell Vagrant to expose the Docker container HTTP (TCP/80) port to our host's HTTP (TCP/80) port:
config.vm.provider "docker" do |docker| docker.image = "nginx:stable" docker.ports = ['80:80'] end
Restart Vagrant and verify you can access your NGINX container:
$ curl http://localhost/
What about sharing a local folder so you can code on your laptop and see the result processed by the Vagrant environment? The default NGINX configuration reads files from /usr/share/nginx/html. Let's put our own index.html in there.
Create a simple src/index.html file, containing some text:

$ mkdir src; echo "<h1>Hello from Docker via Vagrant</h1>" > src/index.html
Add the Docker volume configuration to our Docker provider block in Vagrant:
config.vm.provider "docker" do |docker| docker.image = "nginx:stable" docker.ports = ['80:80'] docker.volumes = ["#{Dir.pwd}/src:/usr/share/nginx/html"] end
Note
#{Dir.pwd} is the Ruby way of finding the current directory, so you don't hardcode paths, making the Vagrantfile highly distributable.
Restart the Vagrant environment and see the result:
$ curl http://localhost
<h1>Hello from Docker via Vagrant</h1>
You can choose not to use your local or default Docker installation, but instead use a dedicated VM, maybe to reflect production or a specific OS (such as CoreOS). In this case, you can specify a dedicated Vagrantfile as follows:
config.vm.provider "docker" do |docker| docker.vagrant_vagrantfile = "docker_host/Vagrantfile" end
Vagrant with Docker can be put to better use simulating traditional setups, such as an application behind a load balancer or a reverse proxy. We've already set up NGINX, so what about using it as a reverse proxy in front of a blog engine such as Ghost? We'll end by showing how to do something similar with docker-compose.
To step through this recipe, you will need the following:
A working Vagrant installation (no hypervisor needed)
A working Docker installation and basic Docker knowledge
An Internet connection
The previous example allows only one container to be launched at a time, which is sad considering the power of Docker. Let's define multiple containers, and start by creating a front container (our previous NGINX):

config.vm.define "front" do |front|
  front.vm.provider "docker" do |docker|
    docker.image = "nginx:stable"
    docker.ports = ['80:80']
    docker.volumes = ["#{Dir.pwd}/src:/usr/share/nginx/html"]
  end
end
Now how about creating an application container, maybe a blog engine such as Ghost? Ghost publishes a ready-to-use container on the Docker Hub, so let's use that (version 0.9.0 at the time of writing) and expose the application container, listening on TCP/2368, on TCP/8080:
config.vm.define "app" do |app| app.vm.provider "docker" do |docker| docker.image = "ghost:0.9.0" docker.ports = ['8080:2368'] end end
Check if you can access the blog on http://localhost:8080 and NGINX on http://localhost:
$ curl -IL http://localhost:8080
HTTP/1.1 200 OK
X-Powered-By: Express
[…]
$ curl -IL http://localhost
HTTP/1.1 200 OK
Server: nginx/1.10.1
Now let's use NGINX for what it's made for: serving the application. Configuring NGINX as a reverse proxy is beyond the scope of this book, so just use the following simple configuration for the nginx.conf file at the root of your working folder:
server {
  listen 80;

  location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $http_host;
    proxy_pass http://app:2368;
  }
}
Change the configuration of the front container in Vagrant to use this configuration, remove the old index.html as we're not using it anymore, and link this container to the app container:

config.vm.define "front" do |front|
  front.vm.provider "docker" do |docker|
    docker.image = "nginx:stable"
    docker.ports = ['80:80']
    docker.volumes = ["#{Dir.pwd}/nginx.conf:/etc/nginx/conf.d/default.conf"]
    docker.link("app:app")
  end
end
Linking the app container makes it available to the front container, so there's no longer any need to expose the Ghost blog container directly; let's make it simpler and more secure behind the reverse proxy:

config.vm.define "app" do |app|
  app.vm.provider "docker" do |docker|
    docker.name = "app"
    docker.image = "ghost:0.9.0"
  end
end
We're close! But this setup will eventually fail for a simple reason: our systems are too fast, and Vagrant parallelizes the startup of virtual machines by default, doing the same for containers. Containers start so fast that the app container may not be ready for NGINX when it starts. To ensure sequential startup, set the VAGRANT_NO_PARALLEL environment variable at the top of the Vagrantfile:
ENV['VAGRANT_NO_PARALLEL'] = 'true'
Now you can browse to http://localhost/admin and start using your Ghost blog in a container, behind an NGINX reverse proxy container, with the whole thing managed by Vagrant!
You can access the container logs directly using Vagrant:
$ vagrant docker-logs --follow
==> app: > ghost@0.9.0 start /usr/src/ghost
==> app: > node index
==> app: Migrations: Creating tables...
[…]
==> front: 172.17.0.1 - - [21/Aug/2016:10:55:08 +0000] "GET / HTTP/1.1" 200 1547 "-" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:48.0) Gecko/20100101 Firefox/48.0" "-"
==> app: GET / 200 113.120 ms - -
[…]
Docker Compose is a tool to orchestrate multiple containers and manage Docker features from a single YAML file. So if you're more familiar with Docker Compose, or if you'd like to do something similar with this tool, here's what the code would look like in the docker-compose.yml file:
version: '2'

services:
  front:
    image: nginx:stable
    volumes:
      - "./nginx.conf:/etc/nginx/conf.d/default.conf"
    restart: always
    ports:
      - "80:80"
    depends_on:
      - app
    links:
      - app
  app:
    image: ghost:0.9.0
    restart: always
Another powerful use of Vagrant is with remote IaaS resources such as Amazon EC2. Amazon Web Services Elastic Compute Cloud (EC2) and similar Infrastructure-as-a-Service providers, such as Google Cloud, Azure, or DigitalOcean, to name a few, sell virtual machines with varying compute power and network bandwidth for a fee. You don't always have all the CPU and memory you need on your laptop, or you need specific computing power for a task, or you just want to replicate part of an existing production environment: here's how you can leverage the power of Vagrant using Amazon EC2.
Here, we'll deploy a Ghost blog with an NGINX reverse proxy, all on Docker, using an Ubuntu Xenial 16.04 on AWS EC2! This is to simulate a real deployment of an application, so you can see if it is working in real conditions.
To step through this recipe, you will need the following:
A working Vagrant installation (no hypervisor needed)
An Amazon EC2 account (or create one for free at https://aws.amazon.com/ if you don't have one already), with valid access keys, a keypair named iac-lab, and a security group named iac-lab allowing at least HTTP and SSH access
An Internet connection
Begin by installing the plugin:
$ vagrant plugin install vagrant-aws
A requirement of this plugin is the presence of a dummy Vagrant box that does nothing:
$ vagrant box add dummy https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box
Remember how we configured the Docker provider in the previous recipes? This is no different:
config.vm.provider :aws do |aws, override|
  # AWS Configuration
  override.vm.box = "dummy"
end
Then, defining an application VM will consist of specifying which provider it's using (AWS in our case), the Amazon Machine Image (AMI) (Ubuntu 16.04 LTS in our case), and a provisioning script that we creatively named script.sh.
You can find other AMI IDs at http://cloud-images.ubuntu.com/locator/ec2/:
config.vm.define "srv-1" do |config| config.vm.provider :aws do |aws| aws.ami = "ami-c06b1eb3" end config.vm.provision :shell, :path => "script.sh" end
So what is the AWS-related information we need to fill in so Vagrant can launch servers on AWS?
We need the AWS Access Keys, preferably from environment variables so you don't hardcode them in your Vagrantfile:
aws.access_key_id = ENV['AWS_ACCESS_KEY_ID']
aws.secret_access_key = ENV['AWS_SECRET_ACCESS_KEY']
Indicate the region and availability zone where you want the instance to start:
aws.region = "eu-west-1"
aws.availability_zone = "eu-west-1a"
Include the instance type; here, we've chosen the one included in the AWS free tier plan so it won't cost you a dime with a new account:
aws.instance_type = "t2.micro"
Indicate in which security group this instance will live (it's up to you to adapt the requirements to your needs):
aws.security_groups = ['iac-lab']
Specify the AWS keypair name, and override the default SSH username and keys:
aws.keypair_name = "iac-lab"
override.ssh.username = "ubuntu"
override.ssh.private_key_path = "./keys/iac-lab.pem"
Under some circumstances, you can run into a bug with NFS while using Vagrant and AWS EC2, so we choose to disable this feature:
override.nfs.functional = false
Finally, it's a good practice to tag the instances, so you can later find out where they come from:
aws.tags = { 'Name' => 'Vagrant' }
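Assembled, the AWS provider block built up over the preceding steps looks like this:

```ruby
config.vm.provider :aws do |aws, override|
  override.vm.box = "dummy"

  # credentials come from the environment, never hardcoded
  aws.access_key_id = ENV['AWS_ACCESS_KEY_ID']
  aws.secret_access_key = ENV['AWS_SECRET_ACCESS_KEY']

  aws.region = "eu-west-1"
  aws.availability_zone = "eu-west-1a"
  aws.instance_type = "t2.micro"
  aws.security_groups = ['iac-lab']
  aws.keypair_name = "iac-lab"
  aws.tags = { 'Name' => 'Vagrant' }

  # SSH specifics for the Ubuntu AMI
  override.ssh.username = "ubuntu"
  override.ssh.private_key_path = "./keys/iac-lab.pem"

  # work around the NFS issue mentioned in this recipe
  override.nfs.functional = false
end
```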
Add a simple shell script that will install Docker and docker-compose, then execute the Docker Compose file:
#!/bin/sh
# install Docker
curl -sSL https://get.docker.com/ | sh
# add ubuntu user to docker group
sudo usermod -aG docker ubuntu
# install docker-compose
curl -L https://github.com/docker/compose/releases/download/1.8.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
# execute the docker compose file
cd /vagrant
docker-compose up -d
Include both the NGINX configuration and docker-compose.yml files from the previous recipe and you're good to go:
$ vagrant up
Bringing machine 'srv-1' up with 'aws' provider...
[…]
==> srv-1: Launching an instance with the following settings...
==> srv-1: -- Type: t2.micro
==> srv-1: -- AMI: ami-c06b1eb3
==> srv-1: -- Region: eu-west-1
[…]
==> srv-1: Waiting for SSH to become available...
==> srv-1: Machine is booted and ready for use!
[…]
==> srv-1: docker version
[…]
==> srv-1: Server:
==> srv-1: Version: 1.12.1
[…]
==> srv-1: Creating vagrant_app_1
==> srv-1: Creating vagrant_front_1
Open your browser at http://a.b.c.d/ (using the EC2 instance's public IP) and you'll see your Ghost blog behind an NGINX reverse proxy, running in Docker containers, driven by Vagrant on Amazon EC2.
A common use for such a setup is for the developer to test the application in conditions close to real production, maybe to show a new feature to a remote product owner, to replicate a bug seen only in this setup, or at some point in the CI pipeline. Once Docker containers have been built, smoke test them on EC2 before going any further.
Vagrant is also very useful for simulating multiple hosts on a network. This way you can have full systems able to talk to each other on the same private network, and easily test connectivity between systems.
To step through this recipe, you will need the following:
A working Vagrant installation
A working VirtualBox installation
An Internet connection
Here's how to create one CentOS 7.2 machine with 512 MB of RAM and one CPU, in a private network with the fixed IP 192.168.50.11, and a simple shell output:
vm_memory = 512
vm_cpus = 1

Vagrant.configure("2") do |config|
  config.vm.box = "bento/centos-7.2"

  config.vm.provider :virtualbox do |vb|
    vb.memory = vm_memory
    vb.cpus = vm_cpus
  end

  config.vm.define "srv-1" do |config|
    config.vm.provision :shell, :inline => "ip addr | grep \"inet\" | awk '{print $2}'"
    config.vm.network "private_network", ip: "192.168.50.11", virtualbox__intnet: "true"
  end
end
To add a new machine to this network, we could simply duplicate the srv-1 machine definition, as in the following code:

config.vm.define "srv-2" do |config|
  config.vm.provision :shell, :inline => "ip addr | grep \"inet\" | awk '{print $2}'"
  config.vm.network "private_network", ip: "192.168.50.12", virtualbox__intnet: "true"
end
That's not very DRY, so let's take advantage of the Ruby nature of the Vagrantfile to create a loop that will dynamically and simply create as many virtual machines as we want.
First, declare a variable with the number of virtual machines we want (2):
vm_num = 2
Then iterate through that value, so it can generate values for an IP and for a hostname:
(1..vm_num).each do |n|
  # a lan lab in the 192.168.50.0/24 range
  lan_ip = "192.168.50.#{n+10}"

  config.vm.define "srv-#{n}" do |config|
    config.vm.provision :shell, :inline => "ip addr | grep \"inet\" | awk '{print $2}'"
    config.vm.network "private_network", ip: lan_ip, virtualbox__intnet: "true"
  end
end
This will create two virtual machines (srv-1 at 192.168.50.11 and srv-2 at 192.168.50.12) on the same internal network, so they can talk to each other.
Now you can simply change the value of vm_num and you'll easily spawn new virtual machines in seconds.
We can optionally go even further, using the following cloning and networking features.
Linked clones is a feature that enables new VMs to be created from an initial existing disk image, without the need to duplicate everything. Each VM stores only its delta state, allowing very fast virtual machine boot times.
As we're launching many machines, you can optionally enable linked clones to speed things up:
config.vm.provider :virtualbox do |vb|
vb.memory = vm_memory
vb.cpus = vm_cpus
vb.linked_clone = true
end
VirtualBox lets you define your own networks for later reference or reuse. Configure them under Preferences | Network | NAT Networks. Luckily, Vagrant can work with those named networks too. To test the feature, create a network in VirtualBox (such as iac-lab) and assign it the 192.168.50.0/24 range.
Just change the network configuration from the preceding Vagrantfile to launch the VMs in this specific network:
config.vm.network "private_network", ip: lan_ip, virtualbox__intnet: "iac-lab"
Vagrant is a great tool to help simulate systems in isolated networks, allowing us to easily mock architectures found in production. The idea behind the multiple tiers is to separate the logic and execution of the various elements of the application, and not centralize everything in one place. A common pattern is to get a first layer that gets the common user requests, a second layer that does the application job, and a third layer that stores and retrieves data, usually from a database.
In this simulation, we'll have the traditional three tiers, each running CentOS 7 virtual machines on their own isolated network:
Front: NGINX reverse proxy
App: a Node.js app running on two nodes
Database: Redis
Virtual Machine Name | front_lan IP | app_lan IP | db_lan IP
---|---|---|---
front-1 | 10.10.0.11/24 | 10.20.0.101/24 | N/A
app-1 | N/A | 10.20.0.11/24 | 10.30.0.101/24
app-2 | N/A | 10.20.0.12/24 | 10.30.0.102/24
db-1 | N/A | N/A | 10.30.0.11/24
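The application-tier rows in this table follow a simple arithmetic pattern that the Vagrantfile will compute for each node. Here is a hypothetical Ruby helper (for illustration only; the method name is ours) reproducing that scheme:

```ruby
# Hypothetical helper (illustration only): reproduce the table's addressing
# scheme for app-N, as computed later in the Vagrantfile loop.
def app_addresses(n)
  {
    app_lan: "10.20.0.#{n + 10}",   # where app-N is reached by the front tier
    db_lan:  "10.30.0.#{n + 100}"   # where app-N reaches the database tier
  }
end

puts app_addresses(1).inspect  # app-1
puts app_addresses(2).inspect  # app-2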
You will access the reverse proxy (NGINX), which alone can contact the application servers (Node.js), which in turn are the only ones able to connect to the database.
To step through this recipe, you will need the following:
A working Vagrant installation
A working VirtualBox installation
An Internet connection
Follow these steps for simulating a networked three-tier architecture app with Vagrant.
The database lives in a db_lan private network with the IP 10.30.0.11/24.
This application will use a simple Redis installation. Installing and configuring Redis is beyond the scope of this book, so we'll keep it as simple as possible (install it, configure it to listen on the LAN port instead of 127.0.0.1, and start it):
config.vm.define "db-1" do |config|
  config.vm.hostname = "db-1"
  config.vm.network "private_network", ip: "10.30.0.11", virtualbox__intnet: "db_lan"
  config.vm.provision :shell, :inline => "sudo yum install -q -y epel-release"
  config.vm.provision :shell, :inline => "sudo yum install -q -y redis"
  config.vm.provision :shell, :inline => "sudo sed -i 's/bind 127.0.0.1/bind 127.0.0.1 10.30.0.11/' /etc/redis.conf"
  config.vm.provision :shell, :inline => "sudo systemctl enable redis"
  config.vm.provision :shell, :inline => "sudo systemctl start redis"
end
This tier is where our application lives, backed by an application (web) server. The application can connect to the database tier, and will be available to the end user through tier 1 proxy servers. This is usually where all the logic is done (by the application).
This will be simulated with the simplest Node.js code I could produce to demonstrate the usage, displaying the server hostname (the filename is app.js).
First, it creates a connection to the Redis server on the db_lan network:
#!/usr/bin/env node
var os = require("os");
var redis = require('redis');
var client = redis.createClient(6379, '10.30.0.11');
client.on('connect', function() {
  console.log('connected to redis on '+os.hostname()+' 10.30.0.11:6379');
});
// without an 'error' handler, any Redis error would crash the process
client.on('error', function(err) {
  console.log('redis error: ' + err);
});
Then, if all goes well, it creates an HTTP server listening on :8080, displaying the server's hostname:
var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Running on '+os.hostname()+'\n');
}).listen(8080);
console.log('HTTP server listening on :8080');
Start the app with the simplest of systemd service files (systemd unit files are out of the scope of this book):
[Unit]
Description=Node App
After=network.target

[Service]
ExecStart=/srv/nodeapp/app.js
Restart=always
User=vagrant
Group=vagrant
Environment=PATH=/usr/bin
Environment=NODE_ENV=production
WorkingDirectory=/srv/nodeapp

[Install]
WantedBy=multi-user.target
Let's iterate through the deployment of a number of application servers (in this case: two) to serve the app. Once again, deploying Node.js applications is out of the scope of this book, so I kept it as simple as possible—simple directories and permissions creation and systemd unit deployment. In production, this would probably be done through a configuration management tool such as Chef or Ansible and maybe coupled with a proper deployment tool:
# Tier 2: a scalable number of application servers
vm_app_num = 2
(1..vm_app_num).each do |n|
  app_lan_ip = "10.20.0.#{n+10}"
  db_lan_ip = "10.30.0.#{n+100}"
  config.vm.define "app-#{n}" do |config|
    config.vm.hostname = "app-#{n}"
    config.vm.network "private_network", ip: app_lan_ip, virtualbox__intnet: "app_lan"
    config.vm.network "private_network", ip: db_lan_ip, virtualbox__intnet: "db_lan"
    config.vm.provision :shell, :inline => "sudo yum install -q -y epel-release"
    config.vm.provision :shell, :inline => "sudo yum install -q -y nodejs npm"
    config.vm.provision :shell, :inline => "sudo mkdir /srv/nodeapp"
    config.vm.provision :shell, :inline => "sudo cp /vagrant/app.js /srv/nodeapp"
    config.vm.provision :shell, :inline => "sudo chown -R vagrant.vagrant /srv/"
    config.vm.provision :shell, :inline => "sudo chmod +x /srv/nodeapp/app.js"
    config.vm.provision :shell, :inline => "cd /srv/nodeapp; npm install redis"
    config.vm.provision :shell, :inline => "sudo cp /vagrant/nodeapp.service /etc/systemd/system"
    config.vm.provision :shell, :inline => "sudo systemctl daemon-reload"
    config.vm.provision :shell, :inline => "sudo systemctl start nodeapp"
  end
end
Tier 1 is represented here by an NGINX reverse proxy configuration on CentOS 7, as simple as it could be for this demo. Configuring an NGINX reverse proxy with a pool of servers is out of the scope of this book:
events {
  worker_connections 1024;
}
http {
  upstream app {
    server 10.20.0.11:8080 max_fails=1 fail_timeout=1s;
    server 10.20.0.12:8080 max_fails=1 fail_timeout=1s;
  }
  server {
    listen 80;
    server_name _;
    location / {
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header Host $http_host;
      proxy_pass http://app;
    }
  }
}
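By default, NGINX distributes requests across the upstream pool in round-robin fashion. Here's a quick hypothetical Ruby sketch of that behavior (not NGINX code; real NGINX also tracks failures via max_fails/fail_timeout, which is omitted here):

```ruby
# Hypothetical sketch of round-robin balancing across the two app servers.
# NGINX's failure handling (max_fails/fail_timeout) is deliberately omitted.
servers = ["10.20.0.11:8080", "10.20.0.12:8080"]
requests = 4.times.map { |i| servers[i % servers.size] }
puts requests
```

Each consecutive request goes to the next server in the pool, which is why reloading the page alternates the displayed hostname.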
Now let's create the reverse proxy VM that will serve http://localhost:8080 through the pool of application servers. This VM listens on 10.10.0.11/24 on its own LAN (front_lan), and on 10.20.0.101/24 on the application servers' LAN (app_lan):
# Tier 1: an NGINX reverse proxy VM, available on http://localhost:8080
config.vm.define "front-1" do |config|
  config.vm.hostname = "front-1"
  config.vm.network "private_network", ip: "10.10.0.11", virtualbox__intnet: "front_lan"
  config.vm.network "private_network", ip: "10.20.0.101", virtualbox__intnet: "app_lan"
  config.vm.network "forwarded_port", guest: 80, host: 8080
  config.vm.provision :shell, :inline => "sudo yum install -q -y epel-release"
  config.vm.provision :shell, :inline => "sudo yum install -q -y nginx"
  config.vm.provision :shell, :inline => "sudo cp /vagrant/nginx.conf /etc/nginx/nginx.conf"
  config.vm.provision :shell, :inline => "sudo systemctl enable nginx"
  config.vm.provision :shell, :inline => "sudo systemctl start nginx"
end
Start this up (vagrant up) and navigate to http://localhost:8080, where the app displays the application server's hostname, so you can confirm that load balancing across networks is working (while the application servers talk to the Redis backend).
You're working on your application using Laravel, the free and open source PHP framework (https://laravel.com/), and you'd like to showcase your work to your colleagues. Using a Vagrant development environment can help keep your work machine clean and allow you to use your usual tools and editors while using an infrastructure close to production.
In this example, we'll deploy a CentOS 7 server, with NGINX, PHP-FPM, and MariaDB, all the PHP dependencies, and install Composer. You can build from this example and others in this book to create an environment that mimics production (three-tier, multiple machines, and other characteristics).
This environment will be available for access to all your coworkers on your network, and the code will be accessible to you locally.
To step through this recipe, you will need the following:
A working Vagrant installation
A working VirtualBox or VMware installation
An Internet connection
Let's start with the simplest Vagrant environment we know:
Vagrant.configure("2") do |config|
  config.vm.box = "bento/centos-7.2"
  config.vm.define "srv-1" do |config|
    config.vm.hostname = "srv-1"
  end
end
Configuring NGINX for Laravel is out of the scope of this book, but for reference, here's a simple NGINX configuration that will work well for us, listening on HTTP, serving files located in /srv/app/public, and using PHP-FPM (the file name is nginx.conf):
events {
  worker_connections 1024;
}
http {
  sendfile off;
  server {
    listen 80;
    server_name _;
    root /srv/app/public;
    try_files $uri $uri/ /index.php?q=$uri&$args;
    index index.php;
    location / {
      try_files $uri $uri/ /index.php?$query_string;
    }
    location ~ \.php$ {
      try_files $uri /index.php =404;
      fastcgi_split_path_info ^(.+\.php)(/.+)$;
      fastcgi_pass 127.0.0.1:9000;
      fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
      fastcgi_param PATH_INFO $fastcgi_script_name;
      include fastcgi_params;
    }
  }
}
We'll create a provisioning script, named provision.sh, which contains all the steps we need for a fully working Laravel environment. The details are out of the scope of this book, but here are the steps:
We want Extra Packages for Enterprise Linux (EPEL):
sudo yum install -q -y epel-release
We want PHP-FPM:
sudo yum install -q -y php-fpm
We want PHP-FPM to run as the Vagrant user so we have the rights:
sudo sed -i 's/user = apache/user = vagrant/' /etc/php-fpm.d/www.conf
Install a bunch of PHP dependencies:
sudo yum install -q -y php-pdo php-mcrypt php-mysql php-cli php-mbstring php-dom
Install Composer:
curl -sS https://getcomposer.org/installer | php
sudo mv composer.phar /usr/local/bin/composer
sudo chmod +x /usr/local/bin/composer
Install and ship a good enough NGINX configuration:
sudo yum install -q -y nginx
sudo cp /vagrant/nginx.conf /etc/nginx/nginx.conf
Install MariaDB Server:
sudo yum install -q -y mariadb-server
Start all the services:
sudo systemctl enable php-fpm
sudo systemctl start php-fpm
sudo systemctl enable nginx
sudo systemctl start nginx
sudo systemctl enable mariadb
sudo systemctl start mariadb
To enable provisioning using our script, add the following code in the VM definition block:
config.vm.provision :shell, :path => "provision.sh"
To share the src folder between your host and the Vagrant VM under /srv/app, you can add the following code:
config.vm.synced_folder "src/", "/srv/app"
The last thing we need to do is add a network interface to our Vagrant virtual machine that will be bridged on the real LAN, so our coworkers can access it easily through the network:
config.vm.network "public_network", bridge: "en0: Wi-Fi (AirPort)"
Adapt the name of the network adapter to your needs (this was on a Mac, as you can guess). Another solution is not to specify any adapter name, in which case you'll be presented with a list of possible adapters to bridge:
==> srv-1: Available bridged network interfaces:
1) en0: Wi-Fi (AirPort)
[...]
Start the Vagrant environment (vagrant up), and when it's available, you can execute commands such as finding out the network information: vagrant ssh -c "ip addr". Your mileage will vary, but on this network, the public IP of this Vagrant box is 192.168.1.106, so our work is available there.
Now you can start coding in the ./src/ folder. This is not a Laravel book, but a way to create a new project in a clean directory is as follows:
cd /srv/app
composer create-project --prefer-dist laravel/laravel .
Don't forget to remove all files from the folder beforehand. Navigate to http://local-ip/ and you'll see the default Laravel welcome screen.
To verify that the file sharing sync is working correctly, edit the ./resources/views/welcome.blade.php file and reload your browser to see the change reflected.
If you include the Vagrantfile directly with your project's code, coworkers or contributors will only have to run vagrant up to see it running.
Other synced folder options include Windows sharing (SMB), rsync (useful with remote virtual machines, such as on AWS EC2), and even NFS.
A noticeable bug in the VirtualBox sharing feature can lead to corrupted or non-updating files. The workaround is to deactivate sendfile in the web server configuration. Using NGINX:
sendfile off;
Using Apache, it is as follows:
EnableSendfile Off
You're working on your project with your local Vagrant environment, and you'd like to show the status of the job to your customer who's located in another city. Maybe you have an issue configuring something and you'd like some remote help from your coworker on the other side of the planet. Alternatively, maybe you'd like to access your work Vagrant box from home, hotel, or coworking space? There's a neat Vagrant sharing feature we'll use here, working with a Ghost blog on CentOS 7.2.
To step through this recipe, you will need the following:
A working Vagrant installation
A working VirtualBox installation
A free HashiCorp Atlas account (https://atlas.hashicorp.com/account/new)
An Internet connection
Let's start with this simple Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.box = "bento/centos-7.2"
  config.vm.define "blog" do |config|
    config.vm.hostname = "blog"
  end
end
We know we'll have to install some packages, so let's add a provisioning script to be executed:
config.vm.provision :shell, :path => "provision.sh"
We'll want to hack locally on our Ghost blog, adding themes and more, so let's sync our src/ folder to the remote /srv/blog folder:
config.vm.synced_folder "src/", "/srv/blog"
We want a local private network so we can access the virtual machine, with the 2368 TCP port (Ghost's default) redirected to our host's 8080 HTTP port:
config.vm.network "private_network", type: "dhcp"
config.vm.network "forwarded_port", guest: 2368, host: 8080
To configure our new box, we'll first need to enable EPEL:
sudo yum install -q -y epel-release
Then install the requirements, node, npm, and unzip:
sudo yum install -q -y node npm unzip
Download the latest Ghost version:
curl -L https://ghost.org/zip/ghost-latest.zip -o ghost.zip
Uncompress it in the /srv/blog folder:
sudo unzip -uo ghost.zip -d /srv/blog/
Install the Ghost dependencies:
cd /srv/blog && sudo npm install --production
Put all those commands in the provision.sh script and we're good to go: vagrant up.
As you would do normally, log in to your Vagrant box to launch the node server:
vagrant ssh
cd /srv/blog && sudo npm start --production
[…]
Ghost is running in production...
Your blog is now available on http://my-ghost-blog.com
Ctrl+C to shut down
Change the host IP from 127.0.0.1 to 0.0.0.0 in the generated config.js file so the server listens on all interfaces:
server: {
    host: '0.0.0.0',
    port: '2368'
}
Restart the node server:
cd /srv/blog && sudo npm start --production
You now have direct access to the blog through your box's LAN IP (adapt the IP to your case): http://172.28.128.3:2368/.
Now that you can access your application locally through your Vagrant box, let's give others access to it over the Internet using vagrant share:
The default is to share through HTTP, so your work is available through a web browser:
$ vagrant share
==> srv-1: Detecting network information for machine...
[...]
==> srv-1: Your Vagrant Share is running! Name: anxious-cougar-6317
==> srv-1: URL: http://anxious-cougar-6317.vagrantshare.com
This URL is the one you can give to anyone to access your work publicly, with Vagrant's servers acting as a proxy.
Another possible sharing option is by SSH (deactivated by default). The program will ask you for a password that you'll need to connect to the box remotely:
$ vagrant share --ssh
==> srv-1: Detecting network information for machine...
[...]
srv-1: Please enter a password to encrypt the key:
srv-1: Repeat the password to confirm:
[...]
==> srv-1: You're sharing with SSH access. This means that another user
==> srv-1: simply has to run `vagrant connect --ssh subtle-platypus-4976`
==> srv-1: to SSH to your Vagrant machine.
[...]
Now, at home or at the coworking space, you can simply connect to your work Vagrant box (if needed, the default Vagrant password is vagrant):
$ vagrant connect --ssh subtle-platypus-4976
Loading share 'subtle-platypus-4976'...
[...]
[vagrant@srv-1 ~]$ head -n1 /srv/blog/config.js
// # Ghost Configuration
You or your coworker are now remotely logged into your own Vagrant box over the Internet!
Wouldn't it be awesome to simulate production changes quickly? Chances are you're using Chef in production. We'll see how to use both Chef cookbooks with Vagrant, as well as how to simulate Chef version upgrades between environments. This kind of setup is the beginning of a good combination of infrastructure as code.
To step through this recipe, you will need the following:
A working Vagrant installation
A working VirtualBox installation
An Internet connection
Let's start with a minimal virtual machine named prod that simply boots CentOS 7.2, like we have in our production environment:
Vagrant.configure("2") do |config|
  config.vm.box = "bento/centos-7.2"
  config.vm.define "prod" do |config|
    config.vm.hostname = "prod"
    config.vm.network "private_network", type: "dhcp"
  end
end
Now, if we want to use Chef code (Ruby files organized in directories that form a unit called a cookbook, which configures and maintains a specific area of a system), we first need to install Chef on the Vagrant box. There are many ways to do this, from provisioning shell scripts to using boxes with Chef already installed. A clean, reliable, and repeatable way is to use a Vagrant plugin to do just that: vagrant-omnibus. Omnibus is a packaged Chef. Install it like any other Vagrant plugin:
$ vagrant plugin install vagrant-omnibus
Installing the 'vagrant-omnibus' plugin. This can take a few minutes...
Installed the plugin 'vagrant-omnibus (1.4.1)'!
Then, just add the following configuration in your VM definition of the Vagrantfile and you'll always have the latest Chef version installed on this box:
config.omnibus.chef_version = :latest
However, our goal is to mimic production; maybe we're still using the latest of the 11.x series of Chef instead of the latest 12.x, so instead let's specify exactly which version we want:
config.omnibus.chef_version = "11.18.12"
Now that we're using a new plugin, our Vagrantfile won't work out of the box for everybody. Users will have to install this vagrant-omnibus plugin. If you care about consistency and repeatability, an option is to add the following Ruby check at the beginning of your Vagrantfile:
%w(vagrant-vbguest vagrant-omnibus).each do |plugin|
  unless Vagrant.has_plugin?(plugin)
    raise "#{plugin} plugin is not installed! Please install it using `vagrant plugin install #{plugin}`"
  end
end
This code snippet simply iterates over each plugin name to verify that Vagrant reports it as installed. If one isn't, it stops there with a helpful message on how to install the required plugin.
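Outside of a Vagrantfile, the guard's logic boils down to filtering a required list against an installed list. Here's a hypothetical standalone sketch, with a stubbed installed-plugin list standing in for Vagrant.has_plugin?:

```ruby
# Hypothetical standalone sketch of the plugin guard; "installed" is a stub
# standing in for what Vagrant.has_plugin? would report.
installed = ["vagrant-vbguest"]                    # pretend only vbguest is there
required  = ["vagrant-vbguest", "vagrant-omnibus"]
missing   = required.reject { |p| installed.include?(p) }
missing.each { |p| puts "#{p} plugin is not installed!" }
```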
This part of the book isn't about writing Chef recipes (read more about them later in the book!), so we'll keep that part simple. Our objective is to install the Apache 2 web server on CentOS 7 (the httpd package) and start it. Here's what our sample recipe looks like (cookbooks/apache2/recipes/default.rb); it does exactly what it says in plain English:
package "httpd"

service "httpd" do
  action [ :enable, :start ]
end
Here's how, in our VM definition block, we'll tell Vagrant to work with Chef Solo (a way of running Chef in standalone mode, without the need of a Chef server) to provision our box:
config.vm.provision :chef_solo do |chef|
  chef.add_recipe 'apache2'
end
As simple as that. Vagrant this up (vagrant up), and you'll end up with a fully provisioned VM, using the old 11.18.12 version, and a running Apache 2 web server.
Our manual tests can include checking that the chef-solo version is the one we requested:
$ chef-solo --version
Chef: 11.18.12
They can also check that we have httpd installed:
$ httpd -v
Server version: Apache/2.4.6 (CentOS)
Also, we can check that httpd is running:
$ pidof httpd
13029 13028 13027 13026 13025 13024
So we simulated our production environment locally, with the same CentOS version, the apache2 cookbook used in production, and the old Chef version 11. Our next task is to test whether everything still runs smoothly after an upgrade to the new version 12. Let's create a second staging VM, very similar to our production setup, except that we want to install the current latest Chef version (12.13.37 at the time of writing; feel free to use :latest instead):
config.vm.define "staging" do |config|
  config.vm.hostname = "staging"
  config.omnibus.chef_version = "12.13.37"
  config.vm.network "private_network", type: "dhcp"
  config.vm.provision :chef_solo do |chef|
    chef.add_recipe 'apache2'
  end
end
Launch this new machine (vagrant up staging) and we'll see whether our setup still works with the new major Chef version:
$ vagrant ssh staging
$ chef-solo --version
Chef: 12.13.37
$ httpd -v
Server version: Apache/2.4.6 (CentOS)
$ pidof httpd
13029 13028 13027 13026 13025 13024
So we can safely assume, as far as our testing goes, that the newest Chef version still works correctly with our production Chef code.
Here are more ways of controlling a Vagrant environment and using even better Chef tooling inside it.
You may not always want to boot both the production and staging Vagrant virtual machines, especially when you just want to work on the default production setup. To specify a default VM:
config.vm.define "prod", primary: true do |config|
[…]
end
To prevent a VM from starting automatically when issuing the vagrant up command:
config.vm.define "staging", autostart: false do |config|
[…]
end
Chances are, if your production environment is using Chef, you're also using Berkshelf for dependency management and not 100% local cookbooks (if you aren't, you should!).
Vagrant works pretty well with a Berkshelf-enabled Chef environment, using the vagrant-berkshelf plugin.
Note
Your workstation will need the Chef Development Kit (Chef DK: https://downloads.chef.io/chef-dk/) for this to work correctly.
Ansible (https://www.ansible.com/) is a very simple and powerful open source automation tool. While using and creating Ansible playbooks is off-topic for this book, we'll use a very simple playbook to install and configure Docker on a CentOS 7 box. Starting from here, you'll be able to iterate through more complex Ansible playbooks.
To step through this recipe, you will need the following:
A working Vagrant installation
A working hypervisor
A working Ansible installation on your machine (an easy way is pip install ansible, or use your usual package manager, such as APT or YUM/DNF)
An Internet connection
Because writing complex Ansible playbooks is out of the scope of this book, we'll use a very simple one, so you can learn more about Ansible later and still reuse this recipe.
Our playbook file (playbook.yml) is a plain YAML file, and we'll do the following, in this order:
Install EPEL.
Create a Docker Unix group.
Add the default Vagrant user to the new Docker group.
Install Docker from CentOS repositories.
Enable and start Docker Engine.
Here's how the playbook.yml file looks:
---
- hosts: all
  become: yes
  tasks:
    - name: Enable EPEL
      yum: name=epel-release state=present
    - name: Create a Docker group
      group: name=docker state=present
    - name: Add the vagrant user to Docker group
      user: name=vagrant groups=docker append=yes
    - name: Install Docker
      yum: name=docker state=present
    - name: Enable and Start Docker Daemon
      service: name=docker state=started enabled=yes
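Before booting anything, you can sanity-check that the playbook parses as valid YAML. Here's a hypothetical sketch using Ruby's standard library, with an inline copy of the playbook (in practice you would load the playbook.yml file itself):

```ruby
require "yaml"

# Inline copy of the recipe's playbook.yml, used only for this sanity check
playbook_yaml = <<~YAML
  ---
  - hosts: all
    become: yes
    tasks:
      - name: Enable EPEL
        yum: name=epel-release state=present
      - name: Create a Docker group
        group: name=docker state=present
      - name: Add the vagrant user to Docker group
        user: name=vagrant groups=docker append=yes
      - name: Install Docker
        yum: name=docker state=present
      - name: Enable and Start Docker Daemon
        service: name=docker state=started enabled=yes
YAML

play = YAML.safe_load(playbook_yaml).first
puts "hosts: #{play["hosts"]}, tasks: #{play["tasks"].size}"
```

A YAML syntax error here would surface before a slow vagrant up run does.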
To use our Ansible playbook, let's start with a simple Vagrantfile starting a CentOS 7 box:
Vagrant.configure("2") do |config|
  config.vm.box = "bento/centos-7.2"
  config.vm.define "srv-1" do |config|
    config.vm.hostname = "srv-1"
    config.vm.network "private_network", type: "dhcp"
  end
end
Simply add Ansible provisioning like this to the VM definition so it will load and apply your playbook.yml file:
config.vm.provision "ansible" do |ansible|
  ansible.playbook = "playbook.yml"
end
You can now run vagrant up and use the CentOS 7 Docker Engine version right away:
$ vagrant ssh
[vagrant@srv-1 ~]$ systemctl status docker
[vagrant@srv-1 ~]$ docker --version
Docker version 1.10.3, build d381c64-unsupported
[vagrant@srv-1 ~]$ docker run -it --rm alpine /bin/hostname
0f44a4d7afcd
What if, for some reason, you don't or can't have Ansible installed on your host machine? Alternatively, maybe you need a specific Ansible version on your Vagrant box to mimic production, and you don't want to mess with your local Ansible installation. There's an interesting variant, the ansible_local provisioner: it will use Ansible directly from the guest VM, and if it's not installed there, it will install it from official repositories or PIP. You can use this very simple default configuration:
config.vm.provision "ansible_local" do |ansible|
ansible.playbook = "playbook.yml"
end
You can also use the following command:
$ vagrant up
[…]
==> srv-1: Running provisioner: ansible_local...
srv-1: Installing Ansible...
srv-1: Running ansible-playbook...
[…]
Log in to the box via SSH and check that Ansible is locally installed with the latest version:
$ vagrant ssh
$ ansible --version
ansible 2.1.1.0
If your use case is different, you can use more precise deployment options, such as pinning an Ansible version number using PIP (here, version 1.9.6 instead of the latest 2.x series):
Note
It will take noticeably longer to start, as it needs to install many packages on the guest system.
config.vm.provision "ansible_local" do |ansible|
  ansible.version = "1.9.6"
  ansible.install_mode = :pip
  ansible.playbook = "playbook.yml"
end
You can also use the following command:
$ vagrant up
[…]
==> srv-1: Running provisioner: ansible_local...
srv-1: Installing Ansible...
srv-1: Installing pip... (for Ansible installation)
srv-1: Running ansible-playbook...
Inside the Vagrant guest, you can now check for the PIP and Ansible versions:
$ pip --version
pip 8.1.2 from /usr/lib/python2.7/site-packages (python 2.7)
$ ansible --version
ansible 1.9.6
You can also check that our playbook was applied correctly by the old 1.x Ansible version:
$ docker version
Also check that Docker is installed, and verify that it now works as the vagrant user:
$ docker run -it --rm alpine ping -c2 google.com
PING google.com (216.58.211.78): 56 data bytes
64 bytes from 216.58.211.78: seq=0 ttl=61 time=22.078 ms
64 bytes from 216.58.211.78: seq=1 ttl=61 time=21.061 ms
Vagrant can help in simulating environments, and Docker containers are no exception. We'll use one of the best platforms to run containers: the free and open source lightweight operating system CoreOS. Based on Linux and targeting easy container and clustered deployments, it also provides official Vagrant boxes. We'll deploy the official WordPress container, with MariaDB in another container, using the Vagrant Docker provisioner (and not the Vagrant Docker provider).
To step through this recipe, you will need the following:
A working Vagrant installation
A working hypervisor
An Internet connection
CoreOS doesn't host its official images at the default location on Atlas; it hosts them itself. So we have to name the box and specify the full URL to it in our Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.box = "coreos-stable"
  config.vm.box_url = "https://stable.release.core-os.net/amd64-usr/current/coreos_production_vagrant.box"
end
As CoreOS is a minimal OS, it doesn't support any of the VirtualBox guest addition tools, so we'll disable them, and won't try anything if we (most likely) have the vagrant-vbguest plugin:
config.vm.provider :virtualbox do |vb|
  vb.check_guest_additions = false
  vb.functional_vboxsf = false
end

if Vagrant.has_plugin?("vagrant-vbguest") then
  config.vbguest.auto_update = false
end
Let's create a new VM definition, using the CoreOS Vagrant box:
config.vm.define "core-1" do |config|
  config.vm.hostname = "core-1"
  config.vm.network "private_network", type: "dhcp"
end
We now need to run the mariadb and wordpress official containers from the Docker Hub. Using Docker directly, we would have run the following:
$ docker run -d --name mariadb -e MYSQL_ROOT_PASSWORD=h4ckm3 mariadb
$ docker run -d -e WORDPRESS_DB_HOST=mariadb -e WORDPRESS_DB_PASSWORD=h4ckm3 --link mariadb:mariadb -p 80:80 wordpress
Let's translate this into our Vagrantfile:
db_root_password = "h4ckm3"

config.vm.provision "docker" do |docker|
  docker.run "mariadb",
    args: "--name 'mariadb' -e 'MYSQL_ROOT_PASSWORD=#{db_root_password}'"
  docker.run "wordpress",
    args: "-e 'WORDPRESS_DB_HOST=mariadb' -e 'WORDPRESS_DB_PASSWORD=#{db_root_password}' --link 'mariadb:mariadb' -p '80:80'"
end
Vagrant this up ($ vagrant up), and you'll access a ready-to-use WordPress installation running on CoreOS:
$ curl -IL http://172.28.128.3/wp-admin/install.php
HTTP/1.1 200 OK
Date: Thu, 25 Aug 2016 10:54:17 GMT
Server: Apache/2.4.10 (Debian)
X-Powered-By: PHP/5.6.25
Expires: Wed, 11 Jan 1984 05:00:00 GMT
Cache-Control: no-cache, must-revalidate, max-age=0
Content-Type: text/html; charset=utf-8
The CoreOS team proposes a full Vagrant environment to try out and manipulate a CoreOS cluster: https://github.com/coreos/coreos-vagrant. You'll then be able to try all CoreOS features and configuration options for all release channels (alpha, beta, or stable).
Other operating systems, such as Ubuntu or CentOS, are fully supported for provisioning Docker containers, even if Docker isn't installed at first on the base image. Vagrant will install Docker for you, so it works transparently and runs the containers as soon as it's installed.