Ansible 2 Cloud Automation Cookbook

By Aditya Patawari , Vikas Aggarwal

About this book

Ansible has a large collection of built-in modules to manage various cloud resources. The book begins with the concepts needed to safeguard your credentials and explains how to interact with cloud providers to manage resources. Each chapter begins with an introduction and the prerequisites for using the right modules to manage a given cloud provider. You will learn about Amazon Web Services, Google Cloud, Microsoft Azure, and other providers. Each chapter shows you how to create basic computing resources, which you can then use to deploy an application. Finally, you will deploy a sample application to demonstrate various usage patterns and utilities of resources.

Publication date: February 2018
Publisher: Packt
Pages: 200
ISBN: 9781788295826

 

Getting Started with Ansible and Cloud Management

In this chapter, we will cover the following recipes:

  • Installing Ansible
  • Executing the Ansible command line to check connectivity
  • Working with cloud providers
  • Executing playbooks locally
  • Managing secrets with Ansible Vault
  • Understanding sample application
  • Using dynamic inventory
 

Introduction

Ansible is a modern automation tool that makes our lives easier by helping us manage our servers, deployments, and infrastructure. We declare what we want and let Ansible do the hard work. Some of the things that Ansible can do are as follows:

  • Install and configure software
  • Manage users and databases
  • Deploy applications
  • Execute commands remotely
  • Manage Infrastructure as Code

We will focus on the Infrastructure as Code part of Ansible for a significant part of this book.

Ansible has certain distinct advantages over other similar tools.

  • Ansible is agentless, so we do not need to install any agent software on the servers being managed. It does, however, require a Python runtime and an SSH server on the remote hosts.
  • Ansible supports both push and pull modes, so we can execute Ansible code from a central control machine to make changes on remote machines, or the remote machines can pull configuration from a well-defined source periodically.
  • Ansible code is written in YAML (http://yaml.org/), which stands for YAML Ain't Markup Language. Ansible did not create and manage a language (or DSL) from scratch. YAML is easy to read, write, and understand, which makes Ansible code largely self-documenting and significantly reduces Ansible's learning curve.
  • Ansible does not try to reinvent the wheel; hence, it uses SSH as a transport and YAML as a Domain Specific Language (DSL). In typical cases, there are two entities involved: a system (A) where the playbook execution is initiated, and another system (B), usually remote, which is configured using Ansible.

In a nutshell, Ansible helps us manage various components of servers, deployments, and infrastructure in a repeatable manner. Its self-documenting nature helps with understanding and auditing the true state of the infrastructure.

Infrastructure as Code

Traditionally, infrastructure has been managed manually. At best, there would be a user interface to assist in creating and configuring compute instances. For most users who begin their journey with a cloud, a web-based dashboard is the first and most convenient way to interact. However, such a manual method is error prone. Some of the most commonly faced problems are:

  • Requirement for more personnel to manage infrastructure round the clock
  • Probability of errors and inconsistencies due to human involvement
  • Lack of repeatability and auditability

Creating Infrastructure as Code addresses these concerns and helps in more than one way. A well-maintained code base allows us to refer to the state of the infrastructure not only at present but also at various points in the past.

Ansible helps us code various aspects of infrastructure including provisioning, configuring, and eventually, retiring. Ansible supports coding over 20 cloud providers and self-managed infrastructure setups. Due to its open nature, existing providers can be enhanced and customized and new providers can be added easily.

Once we start managing Infrastructure as Code, we open ourselves to the possibility of a lot of automation. While this book focuses on creating and managing the infrastructure, the possibilities are limitless. We can:

  • Raise an alarm if a critical machine becomes unreachable.
  • Let personnel who do not have access to the infrastructure contribute by writing infrastructure code; a code review exercise can help enforce best practices.
  • Scale infrastructure dynamically based on our requirements.
  • Create replacements quickly in case of a disaster.
  • Pass on knowledge of best practices within and outside the organization more easily.

Throughout this book, we will create our infrastructure from Ansible code and demonstrate its usability and repeatability.

Introduction of Ansible entities

Before we start diving into the Ansible world, we need to know some basics:

  • Inventory: We need a list of the hosts that we want to manage; the inventory is that list. In its simplest form, this can be a manually created text file that just lists the IP addresses of the servers. This is usually enough for small infrastructure, or if the infrastructure is static in nature. It follows the INI syntax, and a typical inventory looks like this:
[webservers]
server1
[application]
server1
server2

If our infrastructure is dynamic, where we add and remove servers frequently, we can use dynamic inventory. This would allow us to generate the inventory in real time. Ansible provides dynamic inventory scripts for many cloud providers and allows us to create dynamic inventory scripts as per our need for non-standard setups. We will use dynamic inventory in this book since it is better suited to cloud based environments.

  • Modules: Ansible modules are executable plugins that get the real job done. Ansible comes with thousands of modules, which can do everything from installing packages to creating a server. Most modules accept parameters, based upon which they take suitable actions to move the server towards a desired state. While this book uses modules primarily in YAML code, it is also possible to use modules on the command line as arguments to the ansible ad hoc command.
  • Tasks: A task is a call to an Ansible module, along with all of its requirements, such as parameters and variables. For example, a task may call upon the template module with a Jinja template and a set of variables to generate the desired state of a file for the remote server.
  • Roles: An Ansible role describes the particular role that a server is going to play. It consists of YAML code defining tasks, along with the dependencies for executing those tasks, such as the required files, templates, and variables.
  • Playbooks: A playbook is a YAML file where we associate roles and hosts. It is possible to write the tasks in the playbook itself and not use roles but we would strongly discourage this practice. For the sake of readability and better code management, we suggest that playbooks should be kept small with just the name of the hosts or groups to target as defined in inventory and calls to the roles.
  • Variables: Just like any other programming language, Ansible also has good old-fashioned variables. They hold values which can be used in tasks and templates to perform certain actions. Variables can be created before execution in a file or generated and modified during runtime. A very common use case is to invoke certain tasks if a variable holds a certain value or to store the output of a task in a variable and use it in one of the subsequent tasks.
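For instance, the template module mentioned above combines a Jinja template with variables to produce a file. Real Ansible uses Jinja2 for this; the tiny substitution below, with made-up template text, is only an illustration of the idea:

```python
import re

def render(template, variables):
    # Replace each {{ name }} marker with the corresponding variable,
    # mimicking (very loosely) what the template module does with Jinja2.
    return re.sub(r"\{\{\s*(\w+)\s*\}\}",
                  lambda m: str(variables[m.group(1)]), template)

print(render("listen_port: {{ port }}\napp_name: {{ name }}",
             {"port": 8080, "name": "phonebook"}))
# listen_port: 8080
# app_name: phonebook
```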
 

Installing Ansible

There are many ways to install Ansible. Most Linux distributions have Ansible packages in their repositories. Compiling from source is also an option. However, for the sake of uniformity, we are going to use Python pip to install the same version of Ansible for all readers.

How to do it...

The following command will fetch the Ansible source and install Ansible 2.4.0.0 on our working machine. We have used this version of Ansible throughout the book and we urge our readers to install the same:

$ sudo pip install ansible==2.4.0.0
 

Executing the Ansible command line to check connectivity

The simplest way to use Ansible is with the ansible ad hoc command-line tool. We can execute tasks using modules without actually writing code in a file. This is great for quick testing or one-off tasks, but we should not turn it into a habit, since this type of usage is not easily documented or audited.

How to do it...

We just have to use the ansible command and pass the ping module as an argument to the -m parameter. A successful execution will return the string pong. This signifies that Ansible can reach the server and execute tasks, subject to the authorization level of the user, of course:

$ ansible localhost -m ping
localhost | SUCCESS => {
    "changed": false,
    "failed": false,
    "ping": "pong"
}
 

Working with cloud providers

Under normal circumstances, users execute the ansible-playbook command from a system, say A. This system has inventory, playbooks, roles, variable definitions and other information required to configure a remote system, say B, to a desired state.

When we talk about building infrastructure using Ansible, things change a bit. Now, we are not configuring a remote system. We are actually interacting with a cloud provider to create or allocate certain resources to us. We may, at a later point in time, choose to configure these resources using Ansible as well. Interacting with a cloud provider is slightly different from executing a regular playbook. There are two important points that we need to keep in mind:

  • A lot of the tasks will execute on the local machine and will interact with API provided by a cloud provider. In principle, we won't need SSH setup because, in typical cases, requests will go from our local machine to the cloud provider using HTTPS.
  • The cloud provider will need to authenticate and authorize our requests. Usually this is done by providing a set of secrets, or keys, or tokens. Since these tokens are sensitive, we should learn a little bit about Ansible Vault.
 

Executing playbooks locally

Most of the time, during the course of this book, our playbooks will run locally and interact with a cloud provider. The cloud provider usually exposes an API over HTTPS. Generally, Ansible needs an inventory file, which has a record of all the hosts, to run playbooks. Let us try to work around that.

The easiest way of running a playbook locally is by using the keyword localhost in the playbook as a value for the hosts key. This will save us from creating and managing the inventory file altogether.

How to do it...

Consider the following playbook which we can execute without an inventory:

---
- hosts: localhost
  tasks:
    - name: get value
      debug:
        msg: "The value is: secret-value"

To execute this, we can just run the ansible-playbook command with this playbook:

$ ansible-playbook playbook.yml
[WARNING]: Host file not found: /etc/ansible/hosts
[WARNING]: provided hosts list is empty, only localhost is available

PLAY [localhost] ***************************************************************

TASK [setup] *******************************************************************
ok: [localhost]

TASK [get value] ***************************************************************
ok: [localhost] => {
    "msg": "The value is: secret-value"
}

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=0
 

Managing secrets with Ansible Vault

Secret management is an important aspect of any configuration management tool. Ansible comes with a tool called Ansible Vault, which encrypts secrets (technically, it can encrypt any arbitrary file, but we will focus on secrets) at rest with 256-bit AES encryption. These secrets can then be used in tasks in various ways.

To understand this better, let us create a sample secret and use it in a task.

How to do it...

We will begin with a standard variable file, let us call it secret.yml, in Ansible:

---
mysecret: secret-value

To use this in a playbook, we can include the file as a variable and call it in a task:

---
- hosts: localhost
  tasks:
    - name: include secret
      include_vars: secret.yml

    - name: get value
      debug:
        msg: "The value is: {{ mysecret }}"

Let us run our playbook to verify that everything is good:

$ ansible-playbook playbook.yml
PLAY [localhost] ***************************************************************

TASK [setup] *******************************************************************
ok: [localhost]

TASK [include secret] **********************************************************
ok: [localhost]

TASK [get value] ***************************************************************
ok: [localhost] => {
    "msg": "The value is: secret-value"
}

PLAY RECAP *********************************************************************
localhost                  : ok=3    changed=0    unreachable=0    failed=0

Our goal is to protect the content of secret.yml. So let us use ansible-vault and encrypt it:

$ ansible-vault encrypt secret.yml
Vault password:
Encryption successful

Now the content of secret.yml should look something like this:

$ANSIBLE_VAULT;1.1;AES256
64656138356263336432653663323966373961363637383035393631383963643363343162393764
6634663662333863373937373139326230326366643862390a643435663237333832366336323861
31666565333937343333373133353838396166356233316435643363356161366536356230396534
3038316565336630630a393938613764616530336565653866346130666466346130633563346564
33313230336265383532313033653237643662616437636263633039373065346537
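The first line is a plain-text header recording the vault format version and the cipher; the rest is hex-encoded ciphertext that only the vault password can decrypt. A quick sketch of reading that header:

```python
# Header line from the encrypted secret.yml above: three fields
# separated by semicolons (magic string, format version, cipher).
header = "$ANSIBLE_VAULT;1.1;AES256"
magic, version, cipher = header.split(";")
print(magic, version, cipher)
# $ANSIBLE_VAULT 1.1 AES256
```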

Executing the playbook as before will fail because our variable file is encrypted. Ansible provides a way to read encrypted files on the fly without decrypting them on disk. The --ask-vault-pass flag will request the password from the user and execute the playbook normally when the correct password is provided:

$ ansible-playbook --ask-vault-pass playbook.yml
Vault password:
PLAY [localhost] ***************************************************************

TASK [setup] *******************************************************************
ok: [localhost]

TASK [include secret] **********************************************************
ok: [localhost]

TASK [get value] ***************************************************************
ok: [localhost] => {
    "msg": "The value is: secret-value"
}

PLAY RECAP *********************************************************************
localhost                  : ok=3    changed=0    unreachable=0    failed=0

We will be using Ansible Vault throughout this book to store secrets.

Code Layout

We follow a standard code layout to make it easy for everyone to understand the roles. Each chapter has two playbooks and two roles. One playbook and role contains code specific to managing our cloud resources; the other contains code for deploying the phonebook application. Since each chapter has secrets, our final layout will look more or less like this:

└── chapter1
    └── roles
        ├── <cloud provider>
        │   ├── files
        │   ├── tasks
        │   │   └── main.yml
        │   ├── templates
        │   └── vars
        │       ├── main.yml
        │       └── secrets.yml
        └── phonebook
            ├── files
            │   └── phone-book.service
            ├── tasks
            │   └── main.yml
            ├── templates
            │   └── config.py
            └── vars
                └── secrets.yml

 

Understanding sample application

Throughout the book, we will give examples of how to deploy an application on the infrastructure created by the Ansible playbooks. We have written a simple phone book application using Python's Flask framework (http://flask.pocoo.org). The phone book application listens on port 8080, and we can use any browser to interact with it. The app has two variations: one uses SQLite as the database, whereas the other uses MySQL. The application code remains the same; we have just used different databases to demonstrate the application running on a single compute instance versus running across multiple instances, or even different components of a cloud provider.

The application code can be obtained from:

How to do it...

The application deployment can be done using Ansible. If we are going to deploy the application using SQLite, then the following tasks for the phonebook role are sufficient:

---
- name: install epel repository
  package:
    name: epel-release
    state: present

- name: install dependencies
  package:
    name: "{{ item }}"
    state: present
  with_items:
    - git
    - python-pip
    - gcc
    - python-devel

- name: install python libraries
  pip:
    name: "{{ item }}"
    state: present
  with_items:
    - flask
    - flask-sqlalchemy
    - flask-migrate
    - uwsgi

- name: get the application code
  git:
    repo: git@github.com:ansible-cookbook/phonebook-sqlite.git
    dest: /opt/phone-book

- name: upload systemd unit file
  copy:
    src: phone-book.service
    dest: /etc/systemd/system/phone-book.service

- name: start phonebook
  systemd:
    state: started
    daemon_reload: yes
    name: phone-book
    enabled: yes

In the case of MySQL, we need to add a few more tasks and some additional information:

---
- name: include secrets
  include_vars: secrets.yml

- name: install epel repository
  package:
    name: epel-release
    state: present

- name: install dependencies
  package:
    name: "{{ item }}"
    state: present
  with_items:
    - git
    - python-pip
    - gcc
    - python-devel
    - mysql-devel

- name: install python libraries
  pip:
    name: "{{ item }}"
    state: present
  with_items:
    - flask
    - flask-sqlalchemy
    - flask-migrate
    - uwsgi
    - MySQL-python

- name: get the application code
  git:
    repo: git@github.com:ansible-cookbook/phonebook-mysql.git
    dest: /opt/phone-book
    force: yes

- name: upload systemd unit file
  copy:
    src: phone-book.service
    dest: /etc/systemd/system/phone-book.service

- name: upload app config file
  template:
    src: config.py
    dest: /opt/phone-book/config.py

- name: create phonebook database
  mysql_db:
    name: phonebook
    state: present
    login_host: "{{ mysql_host }}"
    login_user: root
    login_password: "{{ mysql_root_password }}"

- name: create app user for phonebook database
  mysql_user:
    name: app
    password: "{{ mysql_app_password }}"
    priv: 'phonebook.*:ALL'
    host: "%"
    state: present
    login_host: "{{ mysql_host }}"
    login_user: root
    login_password: "{{ mysql_root_password }}"

- name: start phonebook
  systemd:
    state: started
    daemon_reload: yes
    name: phone-book
    enabled: yes

Accordingly, we will create a secrets.yml in the vars directory and encrypt it using ansible-vault. The unencrypted data will look like this:

---
mysql_app_password: appSecretPassword
mysql_root_password: secretPassword
mysql_host: 35.199.168.191
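The config.py template uploaded by the role above would consume these variables. Its exact contents are not shown in this chapter, so the following is only a hypothetical sketch (made-up variable names aside from the vaulted ones, using Flask-SQLAlchemy's standard settings):

```python
# Hypothetical sketch of templates/config.py; the actual file is not
# shown in this chapter. The {{ ... }} markers are Jinja placeholders
# filled in by the template module from the vaulted variables above.
SQLALCHEMY_DATABASE_URI = (
    "mysql://app:{{ mysql_app_password }}@{{ mysql_host }}/phonebook"
)
SQLALCHEMY_TRACK_MODIFICATIONS = False
```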

The phone-book.service unit will take care of initializing the database and running the uWSGI server to serve the application for both the SQLite and MySQL based setups:

[Unit]
Description=Simple Phone Book

[Service]
WorkingDirectory=/opt/phone-book
ExecStartPre=/bin/bash /opt/phone-book/init.sh
ExecStart=/usr/bin/uwsgi --http-socket 0.0.0.0:8080 --manage-script-name --mount /phonebook=app:app
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Throughout the coming chapters, we will use this role to deploy our phone book application.

 

Using dynamic inventory

We have talked about dynamic inventory a little bit in this chapter. Throughout this book, in every chapter, we are going to talk about and use dynamic inventory. So let us explore the concept in a bit more depth.

Reiterating what we wrote earlier, dynamic inventory is useful for infrastructures that are dynamic in nature, or for cases where we do not want to, or cannot, maintain a static inventory. A dynamic inventory script queries a data source and builds the inventory in real time. For the sake of this book, we will query cloud providers to get data and build the inventory. Ansible provides dynamic inventory scripts for most of the popular cloud providers.

However, it is simple to create a dynamic inventory script ourselves. Any executable script that returns JSON listing inventory host groups and hosts in a predetermined format when passed the --list parameter can be used as an inventory script. A very simple inventory would output something like this:

{
    "application": ["10.0.0.11", "10.0.0.12"],
    "database": ["10.0.1.11"]
}
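To make that contract concrete, here is a minimal sketch of such an executable (the addresses are the illustrative ones above; a real script would query a data source instead of returning canned data):

```python
#!/usr/bin/env python3
import json
import sys

def build_inventory():
    # A real dynamic inventory would query an API here; this sketch
    # returns canned groups in the format Ansible expects.
    return {
        "application": ["10.0.0.11", "10.0.0.12"],
        "database": ["10.0.1.11"],
    }

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "--list":
        print(json.dumps(build_inventory()))
    else:
        # Ansible may also call the script with --host <hostname> for
        # per-host variables; we have none, so return an empty hash.
        print(json.dumps({}))
```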

More elaborate inventory scripts output much more information, such as instance tags, names, operating systems, and geographical locations, also known as host facts.

How to do it...

To present a realistic example, we have created a simple inventory script for Amazon Web Services in Python. The code is available on GitHub (https://github.com/ansible-cookbook/ec2_tags_inventory):

#!/usr/bin/env python
import boto3
import json
import ConfigParser
import os
import sys

def get_address(instance):
    # Prefer the public IP; fall back to the private one.
    if "PublicIpAddress" in instance:
        address = instance["PublicIpAddress"]
    else:
        address = instance["PrivateIpAddress"]
    return address

# Look for ec2.ini in the current directory, then the home directory.
if os.path.isfile('ec2.ini'):
    config_path = 'ec2.ini'
elif os.path.isfile(os.path.expanduser('~/ec2.ini')):
    config_path = os.path.expanduser('~/ec2.ini')
else:
    sys.exit("ec2.ini not found")

config = ConfigParser.ConfigParser()
config.read(config_path)
id = config.get("credentials", "aws_access_key_id", raw=True)
key = config.get("credentials", "aws_secret_access_key", raw=True)

client = boto3.client('ec2', aws_access_key_id=id, aws_secret_access_key=key, region_name="us-east-1")

inventory = {}

# Group instance addresses by the comma-separated ansible_role tag.
reservations = client.describe_instances()['Reservations']
for reservation in reservations:
    for instance in reservation['Instances']:
        address = get_address(instance)
        for tag in instance['Tags']:
            if tag['Key'] == "ansible_role":
                roles = tag['Value'].split(",")
                for role in roles:
                    if role in inventory:
                        inventory[role].append(address)
                    else:
                        inventory[role] = [address]

print json.dumps(inventory)

This script reads a file called ec2.ini for the AWS access and secret keys. For the sake of simplicity, we have hardcoded the region to us-east-1, but this can be changed easily. The script goes through the AWS EC2 instances in the us-east-1 region and looks for any instance that has a tag named ansible_role with any valid value, such as webserver or database. It adds the IP addresses of those instances to a Python dictionary variable called inventory. In the end, this variable is printed as JSON.
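The grouping logic just described can be exercised offline by feeding it a canned describe_instances-style response instead of a live boto3 call; the field names mirror the EC2 API, and the sample addresses are made up:

```python
def build_inventory(reservations):
    # Same traversal as the script above: walk reservations and
    # instances, grouping addresses by the ansible_role tag.
    inventory = {}
    for reservation in reservations:
        for instance in reservation["Instances"]:
            address = instance.get("PublicIpAddress",
                                   instance.get("PrivateIpAddress"))
            for tag in instance.get("Tags", []):
                if tag["Key"] == "ansible_role":
                    for role in tag["Value"].split(","):
                        inventory.setdefault(role, []).append(address)
    return inventory

# Canned response shaped like boto3's describe_instances()['Reservations'].
sample = [{
    "Instances": [
        {"PrivateIpAddress": "10.0.0.11",
         "Tags": [{"Key": "ansible_role", "Value": "application"}]},
        {"PrivateIpAddress": "10.0.1.11",
         "Tags": [{"Key": "ansible_role", "Value": "application,database"}]},
    ],
}]
print(build_inventory(sample))
# {'application': ['10.0.0.11', '10.0.1.11'], 'database': ['10.0.1.11']}
```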

We can test this by executing:

$ python ec2_tags_inventory.py --list
{"application": ["10.0.0.11", "10.0.0.12"], "database": ["10.0.1.11"]}

Note that the output may vary depending on the instances that are tagged in EC2. To use this in an Ansible command, we need to make the script executable and pass it, instead of an inventory file, to the -i flag, like this:

$ chmod +x ec2_tags_inventory.py
$ ansible -i ec2_tags_inventory.py database -m ping
database | SUCCESS => {
    "changed": false,
    "failed": false,
    "ping": "pong"
}

Needless to say, this is a very simple example; the actual dynamic inventory script provided by Ansible is much more comprehensive, and it looks beyond EC2 to other services, such as RDS.

About the Authors

  • Aditya Patawari

    Aditya Patawari is a systems engineer by profession who loves to play around with Linux and other open source technologies. He works on various parts of system life cycles, handling infrastructure automation and the scaling of applications. He is also a contributor to the Fedora Project, and can be heard talking about it, along with Linux systems automation, at several conferences and events. He has worked on Ansible both at BrowserStack.com, where he leads a team of systems engineers, and at the Fedora Project.

    I would like to thank my family for being patient with me. I also appreciate my colleagues at BrowserStack for their support, and my fellow contributors at the Fedora Project, who taught me so much. Lastly, a big thanks to all my friends for being there for me when I just could not manage it all.

  • Vikas Aggarwal

    Vikas Aggarwal is an infrastructure engineer and SRE at HelpShift Technologies, where he is part of the Operations team, writing automation to manage numerous deployed application clusters across different cloud services. He also helped his previous firm, BrowserStack, introduce a great deal of automation to deploy and manage a hybrid cloud of thousands of servers. He has matured his cloud and automation skills over the course of his career by deploying cloud automation with tools such as Ansible, Terraform, and more.

