Mastering SaltStack

By Joseph Hall
About this book

SaltStack is known as a popular configuration management system, but that barely scratches the surface. It is, in fact, a powerful automation suite, which is designed not only to help you manage your servers, but to help them manage themselves. SaltStack is used worldwide by organizations ranging from those with just a few servers to those with tens of thousands of nodes across data centers on multiple continents. This award-winning software is fast becoming the standard for systems management in the cloud world.

This book will take you through the advanced features of SaltStack, bringing forward capabilities that will help you excel in the management of your servers.

You will be taken through the mind of the modern systems engineer, and discover how they use Salt to manage their infrastructures, and why those design decisions are so important. The inner workings of Salt will be explored, so that as you advance your knowledge of Salt, you will be able to swim with the current, rather than against it.

Various subsystems of Salt are explained in detail, including Salt SSH, Salt Cloud, and external pillars, filesystems, and job caches.

You will be taken through an in-depth discussion of how to effectively scale Salt to manage thousands of machines, and how to troubleshoot issues when things don't go exactly the way you expect them to.

You will also be taken through an overview of RAET, Salt's new transport protocol, and given an insight into how this technology improves Salt, and the possibilities that it brings with it.

Publication date:
August 2015


Chapter 1. Reviewing a Few Essentials

Salt is a very powerful automation framework. Before we delve into the more advanced topics that this book covers, it may be wise to go back and review a few essentials. In this chapter, we will cover the following topics:

  • Using remote execution

  • Basic SLS file tree structure

  • Using States for configuration management

  • Basics of Grains, Pillars, and templates

This book assumes that you already have root access on a device with a common distribution of Linux installed. The machine used in the examples in this book is running Ubuntu 14.04, unless stated otherwise. Most examples should run on other major distributions, such as recent versions of Fedora, RHEL 5/6/7, or Arch Linux.


Executing commands remotely

The underlying architecture of Salt is based on the idea of executing commands remotely. This is not a new concept; all networking is designed around some aspect of remote execution. This could be as simple as asking a remote Web server to display a static Web page, or as complex as using a shell session to interactively issue commands against a remote server.

Under the hood, Salt is an example of one of the more complex types of remote execution. But whereas most Internet users are used to interacting with only one server at a time (so far as they are aware), Salt is designed to allow users to explicitly target and issue commands to multiple machines directly.

Master and Minions

Salt is based around the idea of a Master, which controls one or more Minions. Commands are normally issued from the Master to a target group of Minions, which then execute the tasks specified in the commands and then return any resulting data back to the Master.

Targeting Minions

The first facet of the salt command is targeting. A target must be specified with each execution, which matches one or more Minions. By default, the type of target is a glob, which is the style of pattern matching used by many command shells. Other types of targeting are also available by adding a flag. For instance, to target a group of machines inside a particular subnet, the -S option is used:

# salt -S 192.168.0.0/24 test.ping

The following are most of the available target types, along with some basic usage examples. Not all target types are covered here; Range, for example, extends beyond the scope of this book. However, the most common types are covered.


Glob

This is the default target type for Salt, so it does not have a command line option. The Minion ID of one or more Minions can be specified, using shell wildcards if desired.

When the salt command is issued from most command shells, wildcard characters must be protected from shell expansion:

# salt '*' test.ping
# salt \* test.ping

When using Salt from an API or from other user interfaces, quoting and escaping wildcard characters is generally not required.
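Glob targeting amounts to shell-style wildcard matching against Minion IDs. The following is a rough stdlib sketch of that behavior (an illustration, not Salt's actual matcher):

```python
from fnmatch import fnmatch

def match_glob(target, minion_ids):
    """Return the Minion IDs that match a shell-style glob target."""
    return [m for m in minion_ids if fnmatch(m, target)]

minions = ["web1", "web2", "db1", "minion"]
print(match_glob("web*", minions))  # only the web servers
print(match_glob("*", minions))     # every minion
```

Because the shell would expand an unquoted `*` against filenames before salt ever saw it, the quoting shown above is about the shell, not about Salt itself.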

Perl Compatible Regular Expression (PCRE)

Short Option: -E

Long Option: --pcre

When more complex pattern matching is required, a Perl Compatible Regular Expression (PCRE) can be used. This type of targeting was added to the earliest versions of Salt, and was meant largely to be used inside shell scripts. However, its power can still be realized from the command line:

# salt -E '^[m|M]in.[e|o|u]n$' test.ping


List

Short Option: -L

Long Option: --list

This option allows multiple Minions to be specified as a comma-separated list. The items in this list do not use pattern matching such as globbing or regular expressions; they must be declared explicitly:

# salt -L web1,web2,db1,proxy1 test.ping


Subnet/IP address

Short Option: -S

Long Option: --ipcidr

Minions may be targeted based on a specific IPv4 address, or on an IPv4 subnet in CIDR notation:

# salt -S 192.168.0.42 test.ping
# salt -S 192.168.0.0/24 test.ping

As of Salt version 2015.5, IPv6 addresses cannot be targeted by a specific command line option. However, there are other ways to target IPv6 addresses. One way is to use Grain matching.


Grains

Short Option: -G

Long Option: --grain

Salt can target Minions based on individual pieces of information that describe the machine. This can range from the OS to CPU architecture to custom information (covered in more detail later in this chapter). Because some network information is also available as Grains, IP addresses can also be targeted this way.

Since Grains are specified as key/value pairs, both the name of the key and the value must be specified. These are separated by a colon:

# salt -G 'os:Ubuntu' test.ping
# salt -G 'os_family:Debian' test.ping

Some Grains are returned in a multi-level dictionary. These can be accessed by separating each key of the dictionary with a colon:

# salt -G 'ip_interfaces:eth0:192.168.0.42' test.ping
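The colon-delimited lookup can be pictured as walking a nested dictionary one key at a time. A simplified sketch of the idea (not Salt's actual implementation):

```python
def traverse(data, expr, delimiter=":"):
    """Walk a nested dict using a delimiter-separated key path."""
    for key in expr.split(delimiter):
        if not isinstance(data, dict) or key not in data:
            return None
        data = data[key]
    return data

grains = {"ip_interfaces": {"eth0": ["192.168.0.42"], "lo": ["127.0.0.1"]}}
print(traverse(grains, "ip_interfaces:eth0"))
```

Note that a naive split like this cannot, by itself, express Grain keys or values that contain colons; Salt has to handle that case specially.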

Grains which contain colons may also be specified, though it may look strange. The following will match the local IPv6 address (::1). Note the number of colons used:

# salt -G 'ipv6:::1' test.ping

Grain PCRE

Short Option: (not available)

Long Option: --grain-pcre

Matching by Grains can be powerful, but the ability to match by a more complex pattern is even more so.

# salt --grain-pcre 'os:red(hat|flag)' test.ping


Pillar

Short Option: -I

Long Option: --pillar

It is also possible to match based on Pillar data. Pillars are described in more detail later in the chapter, but for now we can just think of them as variables that look like Grains.

# salt -I 'my_var:my_val' test.ping


Compound

Short Option: -C

Long Option: --compound

Compound targets allow the user to specify multiple target types in a single command. By default, globs are used, but other target types may be specified by preceding the target with the corresponding letter followed by the @ sign:

G    Grains glob
E    PCRE Minion ID
P    PCRE Grains
L    List of Minions
I    Pillar glob
S    Subnet/IP address
R    SECO Range

The following command will target the Minions that are running Ubuntu, have the role Pillar set to web, and are in the 192.168.100.0/24 subnet.

# salt -C 'G@os:Ubuntu and I@role:web and S@192.168.100.0/24' test.ping

Boolean grammar may also be used to join target types, including and, or, and not operators.

# salt -C 'min* or *ion' test.ping
# salt -C 'web* or *qa,G@os:Arch' test.ping


Nodegroups

Short Option: -N

Long Option: --nodegroup

While node groups are used internally in Salt (all targeting ultimately results in the creation of an on-the-fly nodegroup), it is much less common to explicitly use them from the command line. Node groups must be defined as a list of targets (using compound syntax) in the Salt Master's configuration before they can be used from the command line. Such a configuration might look like the following:

nodegroups:
  webdev: 'I@role:web,G@cluster:dev'
  webqa: 'I@role:web,G@cluster:qa'
  webprod: 'I@role:web,G@cluster:prod'

Once a nodegroup is defined and the master configuration reloaded, it can be targeted from Salt:

# salt -N webdev test.ping

Using module functions

After a target is specified, a function must be declared. The preceding examples all use the test.ping function but, obviously, other functions are available. Functions are actually defined in two parts, separated by a period:

<module>.<function>

Inside a Salt command, these follow the target, but precede any arguments that might be added for the function:

salt <target> <module>.<function> [arguments...]

For instance, the following Salt command will ask all Minions to return the text, "Hello world":

salt '*' test.echo 'Hello world'

A number of execution modules ship with the core Salt distribution, and it is possible to add more. Version 2015.5 of Salt ships with over 200 execution modules. Not all modules are available for every platform; in fact, by design, some modules will only be available to the user if they are able to detect the required underlying functionality.

For instance, all functions in the test module are necessarily available on all platforms. These functions are designed to test the basic functionality of Salt and the availability of Minions. Functions in the Apache module, however, are only available if the necessary commands are located on the Minion in question.

Execution modules are the basic building blocks of Salt; other modules in Salt use them for their heavy lifting. Because execution modules are generally designed to be used from the command line, an argument for a function can usually be passed as a string. However, some arguments are designed to be used from other parts of Salt. To use these arguments from the command line, a Python-like data structure is emulated using a JSON string.

This makes sense, since Salt is traditionally configured using YAML, and all JSON is syntactically-correct YAML. Be sure to surround the JSON with single quotes on the command line to avoid shell expansion, and use double quotes inside the string. The following examples will help.

A list is declared using brackets:

# salt '*' test.arg '["item1", "item2"]'

A dictionary is declared using braces (that is, curly brackets):

# salt '*' test.arg '{"key1": "value1", "key2": "value2"}'

A list can include a dictionary, and a dictionary can include a list:

# salt '*' test.arg '[{"key1": "value1"}, "item2"]'
# salt '*' test.arg '{"key1": ["item1", "item2"]}'

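When building such command lines programmatically, the stdlib can produce the required quoting: json.dumps emits the double-quoted JSON, and shlex.quote wraps it in single quotes for the shell. A small sketch (the salt invocation shown is illustrative):

```python
import json
import shlex

def json_arg(value):
    """Serialize a Python structure for use as a salt CLI argument."""
    return shlex.quote(json.dumps(value))

pkgs = [{"apache2": "2.4.7"}, "vim"]
print("salt '*' pkg.install pkgs=" + json_arg(pkgs))
```

This guarantees the double-quotes-inside, single-quotes-outside pattern described above, no matter how deeply nested the structure is.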
There are a few modules which can be considered core to Salt, and a handful of functions in each that are widely used.

test.ping

This is the most basic Salt command. Ultimately, it only asks the Minion to return True. This function is widely used in documentation because of its simplicity, and to check whether a Minion is responding. Don't worry if a Minion doesn't respond right away; that doesn't necessarily mean it's down. A number of variables could cause a slower-than-usual return. However, successive failed attempts may be cause for concern.


test.echo

This function does little more than the test.ping function; it merely asks the Minion to echo back a string that is passed to it. A number of other functions exist that perform similar tasks, including test.arg, test.kwarg, test.arg_type, and test.arg_repr.


test.sleep

A slightly more advanced testing scenario may require a Minion to sleep for a number of seconds before returning True. This is often used to test or demonstrate the utilities that make use of the jobs system. The test.rand_sleep function is also useful for test cases where it is desirable to check the return from a large number of Minions, with the return process spread out.


test.version

In a large enough infrastructure, a number of Minions are bound to be running a different version of Salt than the others. When troubleshooting issues specific to certain versions of Salt, it helps to be able to take a quick look at the Salt version on each Minion. This is the simplest way to check that. Checking the version of other packages that are maintained by the system packaging system can be performed with pkg.version.


pkg.install

Every package manager in Salt (as of version 2015.5) supports installing a package. This function can be as simple as asking for a single package name, or as complex as passing through a list of packages, each with a specific version. When using an execution module, you generally do not need to specify more than just a single package name, but inside the State module (covered later) the advanced functionality becomes more important.


pkg.remove

This matches the pkg.install function, allowing a certain package to be removed. Because versions are not as important when removing packages, this function is not as complex. It does, however, allow passing a list of packages to be removed (using the pkgs argument) as a Python list. From the command line, this can be done using a JSON string.


file.replace

The sed command is one of the oldest members of the Unix administrator's toolkit. It has long been the go-to command for tasks that involve editing files in-line and performing search and replace operations. There have been a few attempts over the years to duplicate the functionality of the sed command. Initially, the file.sed function simply wrapped the Unix sed command. The file.psed function provided a Python-based replacement. However, sed is more than just a find/replace tool; it is a full language that can be problematic when used incorrectly. The file.replace function was designed from the ground up to provide the find/replace functionality that most users need, while avoiding the subtle nuances that can be caused by wrapping sed.

Other file functions

A number of common Unix commands have been added to the file module. The following functions complement the Unix command set for managing files and their metadata: file.chown, file.chgrp, file.get_mode, file.set_mode, file.symlink, file.rename, file.copy, file.move, file.remove, file.mkdir, file.makedirs, file.mknod, and a number of others.

Various user and group functions

The Unix toolset for managing users and groups is also available in Salt and includes user.add, user.delete, group.add, group.delete, user.chuid, user.chgid, user.chshell, user.chhome, user.chgroups, and many, many more.


sys.doc

By design, every public function in every execution module must be self-documenting. The documentation that appears at the top of the function should contain a description just long enough to describe the general use of the function, and must include at least one CLI example demonstrating the usage of that function.

This documentation is available from the Minion using the sys.doc function. Without any arguments, it will display all the functions available on a particular Minion. Adding the name of a module will show only the available functions in that module, and adding the name of a function will show only the documentation for that function, if it is available. This is an extremely valuable tool, both for providing simple reminders of how to use a function and for discovering which modules are available.


SLS file trees

There are a few subsystems in Salt that use an SLS file tree. The most common one, of course, is /srv/salt/, which is used for Salt States. Right after States are Pillars (/srv/pillar/), which use a different file format, but the same directory structure. Let's take a moment to talk about how these directories are put together.

SLS files

SLS stands for SaLt State, which was the first type of file inside Salt to use this kind of file structure. While they can be rendered in a number of different formats, by far the widest use is the default, YAML. Various templating engines are also available to help form the YAML (or other data structure) and again, the most popular is the default, Jinja.

Keep in mind that Salt is all about data. YAML is a serialization format that, in Python, represents a data structure in a dictionary format. When thinking about how SLS files are designed, remember that they are key/value pairs: each item has a unique key, which is used to refer to a value. The value can in turn contain a single item, a list of items, or another set of key/value pairs.

The key to a stanza in an SLS file is called an ID. If no name inside the stanza is explicitly declared, the ID is copied to the name. Remember that IDs must be globally unique; duplicate IDs will cause errors.

Tying things together with top files

Both the State and the Pillar system use a file called top.sls to pull the SLS files together and serve them to the appropriate Minions, in the appropriate environments.

Each key in a top.sls file defines an environment. Typically, a base environment is defined, which includes all the Minions in the infrastructure. Then other environments are defined that contain only a subset of the Minions. Each environment includes a list of the SLS files that are to be included. Take the following top.sls file:

base:
  '*':
    - common
    - vim
qa:
  '*_qa':
    - jenkins
web:
  'web_*':
    - apache2

With this top.sls, three environments have been declared: base, qa, and web. The base environment will execute the common and vim States across all Minions. The qa environment will execute the jenkins State across all the Minions whose ID ends with _qa. The web environment will execute the apache2 State across all the Minions whose ID starts with web_.
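The matching performed by a top file like this can be sketched with stdlib glob matching. This illustration handles only glob targets (real top files also support compound matchers and other target types), and the top data is hardcoded to mirror the example above:

```python
from fnmatch import fnmatch

top = {
    "base": {"*": ["common", "vim"]},
    "qa":   {"*_qa": ["jenkins"]},
    "web":  {"web_*": ["apache2"]},
}

def states_for(minion_id, top_data):
    """Collect the SLS files each environment assigns to a Minion."""
    matched = []
    for env, targets in top_data.items():
        for target, sls_list in targets.items():
            if fnmatch(minion_id, target):
                matched.extend(sls_list)
    return matched

print(states_for("web_1", top))   # base states plus apache2
print(states_for("db1_qa", top))  # base states plus jenkins
```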

Organizing the SLS directories

SLS files may be named either as a standalone SLS file (that is, apache2.sls) or as an init.sls file inside a directory with the SLS name (that is, apache2/init.sls).


Note that apache2.sls will be searched for first; if it is not there, then apache2/init.sls will be used.

SLS files may be hierarchical, and there is no imposed limit on how deep directories may go. When defining deeper directory structures, each level is appended to the SLS name with a period (that is, apache2/ssl/init.sls becomes apache2.ssl). It is considered best practice to keep directory structures shallow; don't make your users search through your SLS tree to find things.
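The dotted-name-to-path mapping, including the search order mentioned above, can be sketched in a few lines (an illustration of the rule, not Salt's fileserver code):

```python
def sls_candidates(sls_name):
    """Possible file paths for a dotted SLS name, in search order."""
    path = sls_name.replace(".", "/")
    return [path + ".sls", path + "/init.sls"]

print(sls_candidates("apache2"))      # apache2.sls, then apache2/init.sls
print(sls_candidates("apache2.ssl"))  # apache2/ssl.sls, then apache2/ssl/init.sls
```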


Using States for configuration management

The files inside the /srv/salt/ directory define the Salt States. This is a configuration management format that enforces the State that a Minion will be in: package X needs to be installed, file Y needs to look a certain way, service Z needs to be enabled and running, and so on. For example:

apache2:
  pkg:
    - installed
  service:
    - running
  file:
    - managed
    - name: /etc/apache2/apache2.conf

States may be saved in a single SLS file, but it is far better to separate them into multiple files, in a way that makes sense to you and your organization. SLS files can use include blocks that pull in other SLS files.

Using include blocks

In a large SLS tree, it often becomes reasonable to have SLS files include other SLS files. This is done using an include block, which usually appears at the top of an SLS file:

include:
  - base
  - emacs

In this example, the SLS file in question will replace the include block with the contents of base.sls (or base/init.sls) and emacs.sls (or emacs/init.sls). This imposes some important restrictions on the user. Most importantly, the SLS files that are included may not contain IDs that already exist in the SLS file that includes them.
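Conceptually, an include block merges dictionaries of IDs, and a duplicate ID is an error. A simplified sketch of that rule (not Salt's actual compiler):

```python
def merge_sls(main, *included):
    """Merge included SLS dicts into main, rejecting duplicate IDs."""
    merged = {}
    for sls in (*included, main):
        for state_id, body in sls.items():
            if state_id in merged:
                raise ValueError("conflicting ID " + repr(state_id))
            merged[state_id] = body
    return merged

base_sls = {"vim": {"pkg": ["installed"]}}
my_sls = {"emacs": {"pkg": ["installed"]}}
print(sorted(merge_sls(my_sls, base_sls)))
```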

It is also important to remember that include itself, being a top-level declaration, cannot exist twice in the same file. The following is invalid:

include:
  - base
include:
  - emacs

Ordering with requisites

State SLS files are unique among configuration management formats in that they are both declarative and imperative. They are imperative, as each State will be evaluated in the order in which it appears in the SLS file. They are also declarative because States may include requisites that change the order in which they are actually executed. For instance:

web_service:
  service:
    - running
    - name: apache2
    - require:
      - pkg: web_package
web_package:
  pkg:
    - installed
    - name: apache2

If a service is declared, which requires a package that appears after it in the SLS file, the pkg State will be executed first. However, if no requirements are declared, Salt will attempt to start the service before installing the package, because its code block appears before the pkg code block. The following will require two executions to complete properly:

web_service:
  service:
    - running
    - name: apache2
web_package:
  pkg:
    - installed
    - name: apache2
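The reordering that require performs can be modeled as a depth-first topological sort over the declared dependencies. A minimal sketch (Salt's actual State compiler is far more involved):

```python
def order_states(states):
    """states: list of (id, [required_ids]); returns execution order."""
    ordered, done = [], set()
    reqs_map = {sid: reqs for sid, reqs in states}

    def visit(state_id, seen):
        if state_id in done:
            return
        if state_id in seen:
            raise ValueError("requisite cycle at " + state_id)
        seen.add(state_id)
        for req in reqs_map.get(state_id, []):
            visit(req, seen)  # requirements execute first
        done.add(state_id)
        ordered.append(state_id)

    for sid, _ in states:
        visit(sid, set())
    return ordered

# web_service appears first in the file but requires web_package
print(order_states([("web_service", ["web_package"]), ("web_package", [])]))
```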

Requisites point to a list of items elsewhere in the SLS file that affect the behavior of the State. Each item in the list contains two components: the name of the module and the ID of the State being referenced.

The following requisites are available inside Salt States and other areas of Salt that use the State compiler.


require

The require requisite is the most basic; it dictates that the State in which it is declared is not executed until every item in the list that has been defined for it has executed successfully. Consider the following example:

apache2:
  pkg:
    - installed
    - require:
      - file: apache2
  service:
    - running
    - require:
      - pkg: apache2
  file:
    - managed
    - name: /etc/apache2/apache2.conf
    - source: salt://apache2/apache2.conf

In this example, a file will be copied to the Minion first, then a package installed, then the service started. Obviously, the service cannot be started until the package that provides it is installed. But Debian-based operating systems such as Ubuntu automatically start services the moment they are installed, which can be problematic if the default configuration files aren't correct. This State will ensure that Apache is properly configured before it is even installed.


watch

In the preceding example, a new Minion will be properly configured the first time. However, if the configuration file changes, the apache2 service will need to be restarted. Adding a watch requisite to the service will force that State to perform a specific action when the State that it is watching reports changes.

apache2:
  service:
    - running
    - require:
      - pkg: apache2
    - watch:
      - file: apache2

The watch requisite is not available for every type of State module. This is because it performs a specific action, depending on the type of module. For instance, when a service is triggered with a watch, Salt will attempt to start a service that is stopped. If it is already running, it will attempt either a service.reload, service.full_restart, or service.restart, as appropriate.

As of version 2015.5, the following State modules support using the watch requisite: service, cmd, event, module, mount, supervisord, docker, tomcat, and test.


onchanges

The onchanges requisite is similar to watch, except that it does not require any special support from the State module that is using it. If changes happen, which should only occur when a State completes successfully, then the list of items referred to with onchanges will be evaluated.


onfail

In a simple State tree, the onfail requisite is less commonly used. However, a more advanced State tree, which is written to alert the user or to perform auto-correcting measures, can make use of onfail. When a State is evaluated and fails to execute correctly, every item listed under onfail will be evaluated. Assuming that the PagerDuty service is properly configured via Salt and an apache_failure State has been written to use it, the following State can notify the operations team if Apache fails to start:

apache2:
  service:
    - running
    - onfail:
      - pagerduty: apache_failure


use

It is possible to declare default values in one State and then inherit them into another State. This typically occurs when one State file has an include statement that refers to another file.

If an item in the State that is being used has been redeclared, it will be overwritten with the new value. Otherwise, the item that is being used will appear unchanged. Requisites will not be inherited with use; only non-requisite options will be inherited. Therefore, in the following SLS, the mysql_conf State will safely inherit the user, group, and mode from the apache2_conf State, without also triggering Apache restarts:

apache2_conf:
  file:
    - managed
    - name: /etc/apache2/apache2.conf
    - user: root
    - group: root
    - mode: 755
    - watch_in:
      - service: apache2

mysql_conf:
  file:
    - managed
    - name: /etc/mysql/my.cnf
    - use:
      - file: apache2_conf
    - watch_in:
      - service: mysql
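The inheritance rule of use can be sketched as a dictionary merge that copies only non-requisite options and never overwrites keys the using State already declares (an illustration of the rule described above, not Salt's code):

```python
# requisite keywords, which use never copies across
REQUISITES = {"require", "watch", "onchanges", "onfail", "use", "prereq",
              "require_in", "watch_in", "use_in", "prereq_in", "onchanges_in",
              "onfail_in"}

def apply_use(using, used):
    """Copy non-requisite defaults from `used` into `using`."""
    merged = dict(using)
    for key, value in used.items():
        if key in REQUISITES or key in merged:
            continue  # requisites are not inherited; local values win
        merged[key] = value
    return merged

apache2_conf = {"name": "/etc/apache2/apache2.conf", "user": "root",
                "group": "root", "mode": 755,
                "watch_in": [{"service": "apache2"}]}
mysql_conf = {"name": "/etc/mysql/my.cnf",
              "watch_in": [{"service": "mysql"}]}
print(apply_use(mysql_conf, apache2_conf))
```

Running this shows the MySQL State picking up user, group, and mode, while keeping its own name and its own watch_in, so no Apache restart is triggered.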


prereq

There are some situations in which a State does not need to run unless another State is expected to make changes. For example, consider a Web application that makes use of Apache. When the codebase on a production server changes, Apache should be turned off, so as to avoid errors with the code that has not yet finished being installed.

The prereq requisite was designed exactly for this kind of use. When a State makes use of prereq, Salt will first perform a test run of the State to see if the items referred to in the prereq are expected to make changes. If so, then Salt will flag the State with the prereq as needing to execute.

apache2:
  service:
    - running
    - watch:
      - file: codebase

codebase:
  file:
    - recurse

shutdown_apache:
  service:
    - dead
    - name: apache2
    - prereq:
      - file: codebase

In the preceding example, the shutdown_apache State will only make changes if the codebase State reports that changes need to be made. If they do, then Apache will shutdown, and then the codebase State will execute. Once it is finished, it will trigger the apache2 service State, which will start up Apache again.

Inverting requisites

Each of the aforementioned requisites can be used inversely, by adding _in at the end. For instance, rather than State X requiring State Y, an SLS can be written so that State X declares that it is required by State Y, as follows:

apache2:
  pkg:
    - installed
    - require_in:
      - service: apache2
  service:
    - running

It may seem silly to add inverses of each of the requisites, but there is in fact a very good use case for doing so: include blocks.

SLS files cannot use requisites that point to code that does not exist inside them. However, using an include block will cause the contents of other SLS files to appear inside the SLS file. Therefore, a generic (but valid) configuration can be defined in one SLS file, included in another, and modified to be more specific with a use_in requisite.

Extending SLS files

In addition to an include block, State SLS files can also contain an extend block that modifies SLS files that appear in the include block. Using an extend block is similar to a use requisite, but there are some important differences.

Whereas a use or use_in requisite will copy defaults to or from another State, the extend block will only modify the State that has been extended.

(In generic_apache/init.sls)
apache2_conf:
  file:
    - managed
    - name: /etc/apache2/apache2.conf
    - source: salt://apache2/apache2.conf

(In django_server/init.sls)
include:
  - generic_apache

extend:
  apache2_conf:
    file:
      - source: salt://django/apache2.conf

(In image_server/init.sls)
include:
  - generic_apache

extend:
  apache2_conf:
    file:
      - source: salt://images/apache2.conf

The preceding example makes use of a generic Apache configuration file, which will be overridden as appropriate for either a Django server or a Web server that is only serving images.


The basics of Grains, Pillars, and templates

Grains and Pillars provide a means of allowing user-defined variables to be used in conjunction with a Minion. Templates can take advantage of those variables to create files on a Minion that are specific to that Minion.

Before we get into details, let me start off by clarifying a couple of things: Grains are defined on the Minion to which they are specific, while Pillars are defined on the Master. Either can be defined statically or dynamically (this book will focus on static), but Grains are generally used to provide data that is unlikely to change, at least without restarting the Minion, while Pillars tend to be more dynamic.

Using Grains for Minion-specific data

Grains were originally designed to describe the static components of a Minion, so that execution modules could detect how to behave appropriately. For instance, Minions which contain the Debian os_family Grain are likely to use the apt suite of tools for package management. Minions which contain the RedHat os_family Grain are likely to use yum for package management.

A number of Grains will automatically be discovered by Salt. Grains such as os, os_family, saltversion, and pythonversion are likely to be always available. Grains such as shell, systemd, and ps are not likely to be available on, for instance, Windows Minions.

Grains are loaded when the Minion process starts up, and then cached in memory. This improves Minion performance, because the salt-minion process doesn't need to rescan the system for every operation. This is critical to Salt, because it is designed to execute tasks immediately, and not wait several seconds on each execution.

To discover which Grains are set on a Minion, use the grains.items function:

salt myminion grains.items

To look at only a specific Grain, pass its name as an argument to grains.item:

salt myminion grains.item os_family

Custom Grains can be defined as well. Previously, static Grains were defined in the Minion configuration file (/etc/salt/minion on Linux and some Unix platforms):

grains:
  foo: bar
  baz: qux

However, while this is still possible, it has fallen out of favor. It is now more common to define static Grains in a file called grains (/etc/salt/grains on Linux and some Unix platforms). Using this file has some advantages:

  • Grains are stored in a central, easy-to-find location

  • Grains can be modified by the Grains execution module

That second point is important: whereas the Minion configuration file is designed to accommodate user comments, the Grains file is designed to be rewritten by Salt as necessary. Hand-editing the Grains file is fine, but don't expect any comments to be preserved. Other than not including the grains top-level declaration, the Grains file looks like the Grains configuration in the Minion file:

foo: bar
baz: qux

To add or modify a Grain in the Grains file, use the grains.setval function:

salt myminion grains.setval mygrain 'This is the content of mygrain'

Grains can contain a number of different types of values. Most Grains contain only strings, but lists are also possible:

my_items:
  - item1
  - item2

In order to add an item to this list, use the grains.append function:

salt myminion grains.append my_items item3

In order to remove a Grain from the grains file, use the grains.delval function:

salt myminion grains.delval my_items

Centralizing variables with Pillars

In most instances, Pillars behave in much the same way as Grains, with one important difference: they are defined on the Master, typically in a centralized location. By default, this is the /srv/pillar/ directory on Linux machines. Because one location contains information for multiple Minions, there must be a way to target that information to them; as with States, SLS files are used.

The top.sls file for Pillars is identical in configuration and function to the top.sls file for states: first an environment is declared, then a target, then a list of SLS files that will be applied to that target:

base:
  '*':
    - bash

Pillar SLS files are much simpler than State SLS files, because they serve only as a static data store. They define key/value pairs, which may also be hierarchical.

skel_dir: /etc/skel/
role: web
web_types:
  - jpg
  - png
  - gif
  - css
  - js

Like State SLS files, Pillar SLS files may also include other Pillar SLS files.

include:
  - users

To view all Pillar data, use the pillar.items function:

salt myminion pillar.items

Take note that, when running this command, by default the Master's configuration data will appear as a Pillar item called master. This can cause problems if the Master configuration includes sensitive data. To disable this output, add the following line to the Master configuration:

pillar_opts: False

This is also a good time to mention that, outside the master configuration data, Pillars are only viewable to the Minion or Minions to which they are targeted. In other words, no Minion is allowed to access another Minion's Pillar data, at least by default. It is possible to allow a Minion to perform Master commands using the Peer system, but that is outside the scope of this chapter.

Managing files dynamically with templates

Salt is able to use templates, which take advantage of Grains and Pillars, to make the State system more dynamic. A number of templating engines are available, including (as of version 2015.5) the following:

  • jinja

  • mako

  • wempy

  • cheetah

  • genshi

These are made available via Salt's rendering system. The preceding list only contains Renderers that are typically used as templates to create configuration files and the like. Other Renderers are available as well, but are designed more to describe data structures:

  • yaml

  • yamlex

  • json

  • msgpack

  • py

  • pyobjects

  • pydsl

Finally, the following Renderer can decrypt GPG data stored on the Master, before passing it through another renderer:

  • gpg

By default, State SLS files will be sent through the Jinja renderer, and then the yaml renderer. There are two ways to switch an SLS file to another renderer. First, if only one SLS file needs to be rendered differently, the first line of the file can contain a shebang line that specifies the renderer:

#!py
The shebang can also specify multiple Renderers, separated by pipes, in the order in which they are to be used. This is known as a render pipe. To use Mako and JSON instead of Jinja and YAML, use:

#!mako|json
To change the system default, set the renderer option in the Master configuration file. The default is:

renderer: yaml_jinja
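To make the rendering system more concrete, the following is a minimal sketch of an SLS file written for the py renderer mentioned above. The file name and the state it declares are illustrative; the mechanism itself is real: Salt imports the file and calls its run() function, using the returned dictionary exactly as if it had been parsed from YAML.

```python
#!py
# Hypothetical web.sls written for the py renderer. Instead of parsing
# YAML, Salt calls run() and treats the returned dictionary as state data.
def run():
    # In a real py-rendered SLS, Salt also injects dunders such as
    # __grains__ and __pillar__, which can be used for dynamic lookups.
    return {
        'apache2': {
            'pkg': ['installed'],
            'service': ['running'],
        }
    }
```

Because the file is plain Python, any logic the language offers is available when building the state dictionary.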

It is also possible to specify the templating engine to be used on a file that is created on the Minion using the file.managed State:

apache2_conf:
  file:
    - managed
    - name: /etc/apache2/apache2.conf
    - source: salt://apache2/apache2.conf
    - template: jinja

A quick Jinja primer

Because Jinja is by far the most commonly-used templating engine in Salt, we will focus on it here. Jinja is not hard to learn, and a few basics will go a long way.

Variables can be referred to by enclosing them in double-braces. Assuming a Grain called user is set, the following will access it:

The user {{ grains['user'] }} is referred to here.

Pillars can be accessed in the same way:

The user {{ pillar['user'] }} is referred to here.

However, if the user Pillar or Grain is not set, the template will not render properly. A safer method is to use the salt built-in to cross-call an execution module:

The user {{ salt['grains.get']('user', 'larry') }} is referred to here.
The user {{ salt['pillar.get']('user', 'larry') }} is referred to here.

In both of these examples, if the user has not been set, then larry will be used as the default.

We can also make our templates more dynamic by having them search through Grains and Pillars for us. Using the config.get function, Salt will first look inside the Minion's configuration. If it does not find the requested variable there, it will check the Grains. Then it will search Pillar. If it can't find it there, it will look inside the Master configuration. If all else fails, it will use the default provided.

The user {{ salt['config.get']('user', 'larry') }} is referred to here.
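As a sketch of how this lookup fits into a State file (the state ID, target file, and message are illustrative), the same config.get call can drive a managed file:

```yaml
{% set user = salt['config.get']('user', 'larry') %}

motd:
  file.managed:
    - name: /etc/motd
    - contents: Welcome, {{ user }}!
```

If user is defined anywhere in the lookup chain, that value is used; otherwise the file greets larry.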

Code blocks are enclosed within braces and percent signs. To set a variable that is local to a template (that is, not available via config.get), use the set keyword:

{% set myvar = 'My Value' %}

Because Jinja is based on Python, most Python data types are available. For instance, lists and dictionaries:

{% set mylist = ['apples', 'oranges', 'bananas'] %}
{% set mydict = {'favorite pie': 'key lime', 'favorite cake': 'sachertorte'} %}

Jinja also offers logic that can help define which parts of a template are used, and how. Conditionals are performed using if blocks. Consider the following example:

{% if grains['os_family'] == 'Debian' %}
{% set apache = 'apache2' %}
{% elif grains['os_family'] == 'RedHat' %}
{% set apache = 'httpd' %}
{% endif %}

{{ apache }}:
  pkg:
    - installed
  service:
    - running

The Apache package is called apache2 on Debian-style systems, and httpd on RedHat-style systems. However, everything else in the State is the same. This template will auto-detect the type of system that it is on, install the appropriate package, and start the appropriate service.

Loops can be performed using for blocks, as follows:

{% set berries = ['blue', 'rasp', 'straw'] %}
{% for berry in berries %}
{{ berry }}berry
{% endfor %}
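In a State file, a for loop like this is often used to generate several similar states at once. As a sketch (the user names are illustrative), the following declares one user.present state per name:

```yaml
{% for username in ['larry', 'moe', 'curly'] %}
{{ username }}:
  user.present
{% endfor %}
```

After rendering, Salt sees three separate states, exactly as if each had been written out by hand.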


Summary

Salt is designed first and foremost for remote execution. Most tasks in Salt are performed as a type of remote execution. One of the most common types of remote execution in Salt is configuration management, using States. Minion-specific data can be declared in Grains and Pillars, and used in State files and templates.

With a basic foundation of Salt behind us, let's move on to the good stuff. In the next chapter, we will dive into the internals of Salt, and discuss why and how Salt does what it does.

About the Author
  • Joseph Hall

    Starting as a support technician and progressing to being a web programmer, QA engineer, systems administrator, Linux instructor, and cloud engineer, Joseph Hall has touched just about every area of the modern technology world. He is currently a senior cloud and integrations engineer at SaltStack. Joseph enjoys working with some of the best minds in the business with his coworkers and SaltStack's partners. He is also the author of Extending SaltStack, Packt Publishing.

