Introduction to Ansible Automation Platform
Ansible Automation Platform (AAP) comprises several components, the most prominent of which is the Automation controller, previously known as Ansible Tower. First, we will cover the different components of AAP and how they interact with each other.
In addition, there are multiple ways of interacting with the platform, from manually working in the web interface to using Ansible modules and roles to configure its different parts. These roles can be used to implement Configuration as Code (CaC).
In this chapter, we will cover the following topics:
- AAP overview
- Key differences between upstream and official Red Hat products
- Overview of the methods that will be used in this book
- Execution environments and Ansible Navigator
Technical requirements
In this chapter, we will cover the platform and methods that will be used in this book. In the Overview of the methods that will be used in this book section, the code that will be referenced can be found at https://github.com/PacktPublishing/Demystifying-Ansible-Automation-Platform/tree/main/ch01. It is assumed that you have Ansible installed to run the code provided. Additional Python packages will be referenced for installation.
AAP overview
This book will mainly focus on the Ansible Automation controller, which is the largest part of AAP. Most of AAP's components have related upstream projects where you can report issues, look at the code, and even contribute.
The platform is made up of the following parts:
- Automation controller (formerly Red Hat Ansible Tower)
- Automation execution environments
- Automation hub
- Automation services catalog
- Red Hat Insights for Red Hat AAP
- Ansible content tools
This chapter will provide a brief description of each of these parts and how they relate to the Automation controller. A model of these relationships can be seen in the following diagram:
Figure 1.1 – AAP relationship model
In the next section, we will provide an overview of AAP, as shown in the preceding diagram.
Automation controller (Red Hat Ansible Tower)
The Automation controller is the workhorse of AAP. It is the central place for users to run their Ansible automation. It provides a GUI, Role-Based Access Control (RBAC), and an API. It allows you to scale and schedule your automation, log events, link together playbooks in workflows, and much more. Later in this book, we will look at the functions of the controller in more detail.
Until recently, the Automation controller was known as Ansible Tower. With AAP 2.0, Tower was split into the Automation controller and execution environments. This removed the reliance on Python virtual environments and allowed both the platform and the command line to use execution containers.
Note
This book will refer to the Ansible Automation controller, though 90% of the time, it will still apply to the older versions of Tower.
But what does the Automation controller have to do with Ansible? The short answer is that the controller runs Ansible. It provides an environment that allows jobs to be repeated in an idempotent fashion – in other words, a repeatable way that does not change the final state. It does so by storing inventories, credentials, jobs, and other related objects in a centralized place. Add in RBAC, credential management, logging, schedules, and notifications and you can execute Ansible so that it's more like a service, rather than a script.
The upstream version of the Automation controller is AWX; its companion collection can be found at https://galaxy.ansible.com/awx/awx. AWX is an unsupported version of the controller and is designed to be installed on a Kubernetes cluster. It is frequently updated, and upgrades between versions are not tested. Fixes are not backported to previous AWX versions as they are with the Automation controller and Tower. For these reasons, enterprise users are recommended to use the Automation controller.
The Automation controller will pull collections and roles from Ansible Galaxy or a configured Automation hub. It will also pull container images from an image repository or Automation hub if none are specified.
Automation execution environments
Automation execution environments were introduced with AAP 2.0. Previously, Ansible Tower relied on Python virtual environments to execute everything, which was not portable and sometimes made development difficult. With this change, Ansible can now be run inside portable containers called execution environments. These containers are built from base images provided by Ansible and can be customized; the details of that customization will be covered in Chapter 8, Creating Execution Environments.
Execution environments are built using the ansible-builder CLI tool, which takes a definition file describing what to add to a base image. The resulting images can then be used with the ansible-navigator CLI/TUI tool to run local playbooks. The same containers can be used inside the Ansible controller to execute playbooks, narrowing the gap between running something locally and on the controller.
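As a sketch of what such a definition looks like (the three referenced dependency files are assumptions, and the exact schema is covered in Chapter 8), an ansible-builder version 1 definition file might be:

```yaml
# execution-environment.yml – a minimal sketch of an ansible-builder
# (version 1) definition; the three referenced files are assumptions
version: 1
dependencies:
  galaxy: requirements.yml   # collections to bake into the image
  python: requirements.txt   # additional Python packages
  system: bindep.txt         # OS-level packages, in bindep format
```

Running ansible-builder build -t my_ee in a directory containing these files would produce a container image (tagged my_ee here for illustration) that ansible-navigator can then run playbooks in.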
Automation hub
The Automation hub is an on-premises deployment of Ansible Galaxy (https://galaxy.ansible.com/). It can host collections and container images. It can be configured to pull certified collections from Red Hat's hosted Automation hub at https://console.redhat.com/ansible/automation-hub, from the public Ansible Galaxy, or from any valid collection uploaded by an administrator. It is also a place for users to discover collections created by other groups inside an organization. The upstream project of the Automation hub is Galaxy_ng (https://github.com/ansible/galaxy_ng). It is based on Pulp and has many of the same features as the Automation hub.
Collections have replaced standalone roles as the standard unit of distributed content. They contain playbooks, roles, modules, and plugins, and should be used so that code can be reused across playbooks. The Automation hub was built so that users could store and curate collections on-premises without resorting to storing tarballs in a Git repository or depending on a direct connection to Ansible Galaxy.
When you're installing Automation hub, it is possible to set up an image repository as well to host execution environment container images. This is not present when you're using the OpenShift operator as it is assumed that if you have an OpenShift cluster, you should already have an image repository.
Automation services catalog
The Automation services catalog provides a frontend for users to run automation using a simplified GUI that is separate from the Ansible Automation controller. It allows for multiple controllers to be linked to it, and for users to order products that will launch jobs. It also allows users to approve jobs through the services catalog. It is an extension of the Automation controller. A good example of this service can be found at https://www.youtube.com/watch?v=Ry_ZW78XYc0.
Red Hat Insights for Red Hat AAP
Red Hat Insights provides a dashboard that contains health information and statistics about jobs and hosts. It can also calculate savings from automation to create reports. Go to the following website to access the Insights dashboard: https://console.redhat.com/ansible/ansible-dashboard.
This includes the following:
- Monitoring the Automation controller's cluster health
- Historical job success/failure over time
- An automation calculator to approximate time and money that's been saved by using automation
Ansible content tools
While not directly related to the Ansible controller, Ansible has tools that assist in creating and developing playbooks and execution environments. These include ansible-navigator, a CLI/TUI tool that allows users to run playbooks in an execution environment; ansible-builder, a CLI tool that can be used to create execution environments; and ansible-lint, a linter that checks your code to make sure it follows best practices and conventions, as well as identifying errors.
Another tool is the Ansible VS Code Extension for Visual Studio Code at https://marketplace.visualstudio.com/items?itemName=redhat.ansible. This is an IDE extension for syntax highlighting, keywords, module names, and module options. While there are several code editors out there, including Visual Studio Code, Atom, and PyCharm, to name a few, the Ansible Visual Studio Code Extension is a great way to double-check your Ansible work.
That concludes our overview of AAP. Now, let's address how this book goes about interacting with the different parts of the platform.
Key differences between upstream and official Red Hat products
Earlier, we briefly mentioned upstream projects. The key ones are AWX and Galaxy_ng. These projects are bleeding edge: developers from both the community and Red Hat make rapid changes and improvements. Things are expected to break, the upgrade path from one version to another is not guaranteed or tested, and bug fixes are not backported to previous versions. Their downstream counterparts, the Automation controller and Automation hub, go through rigorous testing, including testing of upgrades from one version to another. Not all the changes made upstream make it into the next release of the downstream product, but most bug fixes do get backported to previous versions.
Because of these caveats, the upstream products are fine for a home lab, a proof of concept, or development, but they are not recommended for production.
Overview of the methods that will be used in this book
This book is built around defining CaC. This means that, alongside showing you how to define something in the GUI, a more Ansible-centric approach will be presented. Ansible is a method of automation, yet time and time again I have seen folks create and configure their automation by hand. In that vein, this book will serve as a guide to automating your automation through code.
The benefits of CaC are as follows:
- Standardized settings across multiple instances, such as development and production.
- Version control is inherent to storing your configuration in a Git repository.
- Easy to track changes and troubleshoot problems caused by a change.
- Ability to use CI/CD processes to keep deployments up to date and prevent drift.
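As an illustration of the last point, a CI job can reapply the CaC definitions on every change. The following workflow is a hypothetical sketch (GitHub Actions is just one option, and the playbook name and secrets shown are assumptions; the configuration collections read CONTROLLER_* environment variables for credentials):

```yaml
# .github/workflows/controller-config.yml – a hypothetical CI sketch
name: Apply controller configuration
on:
  push:
    branches: [main]
jobs:
  configure:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install ansible-core
      - run: ansible-galaxy collection install -r requirements.yml
      # config_controller.yml is a placeholder for your CaC playbook
      - run: ansible-playbook config_controller.yml
        env:
          CONTROLLER_HOST: ${{ secrets.CONTROLLER_HOST }}
          CONTROLLER_USERNAME: ${{ secrets.CONTROLLER_USERNAME }}
          CONTROLLER_PASSWORD: ${{ secrets.CONTROLLER_PASSWORD }}
```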
A best practice is to create a development environment in which to make changes, perform initial deployments, and run tests. Nothing ever runs perfectly the first time – it is through iteration that things improve, including your automation. The methods described here help prevent drift and make it simple to keep multiple instances of the Automation controller, such as development and production, configured and in sync.
There are three approaches you can take to managing the services that make up AAP: the manual approach, Ansible modules, and Ansible roles.
Introduction to the roles and modules that will be used in this book
Throughout this book, various roles and modules will be referred to that belong to collections. Collections are a grouping of modules, roles, and plugins that can be used in Ansible.
awx.awx, ansible.tower, and ansible.controller are module collections that can be used interchangeably. These are all built off the same code base; each is built to be used with its respective product – that is, AWX, Tower, and the Automation controller.
redhat_cop.controller_configuration is a role-based collection – a set of roles built to use one of the three aforementioned collections to take definitions of objects and push their configuration to the Automation controller/Tower/AWX.
The redhat_cop.ah_configuration collection is built to manage the Automation hub. It contains a combination of modules and roles designed to manage the hub and push configuration to it. It is built on the code of the previous collections but is tailored specifically to the Automation hub.
redhat_cop.ee_utilities is built to help create execution environments. Its roles help migrate from Tower to the Automation controller and build execution environments from definition variables.
The last one we will mention is redhat_cop.aap_utilities. This collection was built to help with installation, backup, and restore of the Automation controller, along with other useful tools that don't belong in the other controller collections.
The manual approach
Nearly everything after the installer can be set manually through the GUI. This involves navigating to the object page, making a change, and saving it, which works fine if you are only managing a handful of things, or making one small change. For example, to create an organization using the Ansible controller web interface, follow these steps:
- Navigate to the Ansible controller web interface; for example, https://10.0.0.1/.
- Log in using your username and password.
- On the home page, in the left-hand menu, select Organizations | Add.
- Fill in the name, description, and any other pertinent fields.
- Click Save.
The following screenshot shows an example of the page where you can add an organization:
Figure 1.2 – Create New Organization
This method can be repeated for all the objects in the Automation controller. Although it is prone to mistakes, it can be useful for making quick changes when you're testing before committing the changes to code.
Using Ansible to manage the configuration
The best method is using Ansible to automate and define your deployment. The Ansible team has created a collection of modules that you can use to interact with the Automation controller. Upstream, this is known as awx.awx, and the official collection is named ansible.controller. The code is roughly the same between the two, but the latter has gone through additional testing and is supported by Red Hat. ansible-galaxy will need to be used to install the collections. You can use either of the following two commands to do so:

ansible-galaxy collection install awx.awx redhat_cop.controller_configuration

ansible-galaxy collection install -r requirements.yml
This file can be found in /ch01/requirements.yml
in this book's GitHub repository: https://github.com/PacktPublishing/Demystifying-Ansible-Automation-Platform.
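For reference, such a requirements file follows the standard ansible-galaxy format; a minimal sketch consistent with the install commands above would be:

```yaml
# requirements.yml – a sketch; pin versions as appropriate for your environment
collections:
  - name: awx.awx
  - name: redhat_cop.controller_configuration
```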
There is a module for each object in the Automation controller. For example, the following module is used to create an organization:
// create_organization_using_module.yml
---
- name: Create Organization
  hosts: localhost
  connection: local
  gather_facts: false
  collections:
    - ansible.controller
  tasks:
    - name: Create Organization
      ansible.controller.organization:
        name: Satellite
        controller_host: https://10.0.0.1
        controller_username: admin
        controller_password: password
        validate_certs: false
...
These modules do the heavy lifting of finding object IDs and creating the related links between various objects. This especially simplifies the creation of an object such as a job template.
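For example, a job template ties together a project, an inventory, and credentials; with the module, each is referenced by name and resolved to its ID behind the scenes. A hedged sketch, in which the object names are assumptions:

```yaml
- name: Create Job Template
  ansible.controller.job_template:
    name: Deploy Web Servers      # hypothetical template name
    job_type: run
    organization: Satellite
    inventory: Production         # referenced by name, resolved to an ID
    project: Test Project         # likewise resolved automatically
    playbook: site.yml
    credentials:
      - Machine Credential        # hypothetical credential name
    state: present
    controller_host: https://10.0.0.1
    controller_username: admin
    controller_password: password
    validate_certs: false
```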
Alongside this collection of modules, consultants at Red Hat, and a few other people working with the Ansible Automation controller, came up with the redhat_cop.controller_configuration collection. A series of roles was created to wrap around either the awx.awx or ansible.controller collection, to make it easier to invoke the modules and define a controller instance, as well as several other collections to help manage other parts of AAP. This book will assume you are using one of these two collections in conjunction with the redhat_cop collections.
The basic idea of the controller configuration collection is to have a designated top-level variable to loop over and create each object in the controller. The following is an example of using the controller configuration collection:
// create_organization_using_role.yml
---
- name: Playbook to push organizations to controller
  hosts: localhost
  connection: local
  vars:
    controller_host: 10.0.0.1
    controller_username: admin
    controller_password: password
    controller_validate_certs: false
    controller_organizations:
      - name: Satellite
      - name: Default
  collections:
    - awx.awx
    - redhat_cop.controller_configuration
  roles:
    - redhat_cop.controller_configuration.organizations
...
This allows you to define the objects as variables, invoke the roles, and have everything created in the controller. It is often easier to import a folder full of variable files, such as the following task:
// create_objects_using_role_include_files.yaml
---
- name: Playbook to push objects to controller
  hosts: localhost
  connection: local
  collections:
    - awx.awx
    - redhat_cop.controller_configuration
  pre_tasks:
    - name: Include vars from configs directory
      include_vars:
        dir: ./configs
        extensions: ["yml"]
  roles:
    - redhat_cop.controller_configuration.organizations
    - redhat_cop.controller_configuration.projects
...
The included files define the organizations exactly like the previous task, but the projects are defined as follows:
// configs/projects.yaml
---
controller_projects:
  - name: Test Project
    scm_type: git
    scm_url: https://github.com/ansible/tower-example.git
    scm_branch: master
    scm_clean: true
    description: Test Project 1
    organization: Default
    wait: true
    update: true
  - name: Test Project 2
    scm_type: git
    scm_url: https://github.com/ansible/ansible-examples.git
    description: Test Project 2
    organization: Default
...
Because the GUI, modules, and roles are all the same information in different forms, each section will contain details about creating the YAML definitions and how to use them.
Using these methods is the primary way to interact with AAP as a whole. The focus will be on CaC since it is the recommended way of interacting with the Automation services. Like most tasks in Ansible, it is one of many ways to do the same thing.
Execution environments and Ansible Navigator
A newer feature of Ansible is the addition of execution environments. These are prebuilt containers made to run Ansible playbooks, replacing Python virtual environments as the standard way of managing different versions of Python and Python packages. The Ansible controller takes advantage of these environments to scale and run job templates as well. They solve the "it works for me" problem, the burden of maintaining identical environments across all nodes, and other issues that arose from the previous solution. They also double as a simplified development stand-in for the Automation controller: you can test a job template locally using the same container that the controller uses:
Figure 1.3 – Ansible Navigator inputs
ansible-navigator was built to replace ansible-playbook; it allows you to run playbooks in a container on the command line, similar to how jobs are run inside the controller. To install ansible-navigator, use the pip3 install 'ansible-navigator[ansible-core]' command on your desired machine. Afterward, you can run the demo.yml playbook in the ch01 folder:
// demo.yml
---
- name: Demo Playbook
  hosts: localhost
  gather_facts: false
  tasks:
    - debug:
        msg: Hello world
...
To run this playbook in a container, use the ansible-navigator run demo.yml -m stdout command. It should output a Hello world message. The execution environment image can be specified with the --eei option (--ee toggles whether an execution environment is used at all). This allows the user to test and develop with the same execution environment that is used on the controller.
Additional Python libraries and collections can be added to an execution environment, which will be covered in Chapter 8, Creating Execution Environments. Additional information can also be found at https://ansible-builder.readthedocs.io/en/stable/.
Summary
Now that you know how AAP interacts with services and have been introduced to the methods that will be used in this book, you are armed with the information you need to go further.
In the next chapter, we will cover how to install the controller and AAP on physical machines.