
Managing Application Configuration


In this article by Sean McCord, author of the book CoreOS Cookbook, we will explore some of the options available to help bridge the configuration divide, covering the following topics:

  • Configuring by URL
  • Translating etcd to configuration files
  • Building EnvironmentFiles
  • Building an active configuration manager
  • Using fleet globals


Configuring by URL

One of the most direct ways to obtain application configurations is by URL. You can generate a configuration and store it as a file somewhere, or construct a configuration from a web request, returning the formatted file.

In this section, we will construct a dynamic redis configuration by web request and then run redis using it.

Getting ready

  • First, we need a configuration server. This can be S3, an object store, etcd, a Node.js application, a Rails web server, or just about anything. The details don't matter, as long as it speaks HTTP. We will construct a simple one here using Go, just in case you don't have one ready.

Make sure your GOPATH is set and create a new directory named configserver.

Then, create a new file in that directory called main.go with the following contents:

package main
   import (
      "html/template"
      "log"
      "net/http"
   )
   func init() {
      redisTmpl = template.Must(template.New("rcfg").Parse(redisString))
   }
   func main() {
      http.HandleFunc("/config/redis", redisConfig)
      log.Fatal(http.ListenAndServe(":8080", nil))
   }
   func redisConfig(w http.ResponseWriter, req *http.Request) {
      // TODO: pull configuration from database
      redisTmpl.Execute(w, redisConfigOpts{
         Save:       true,
         MasterIP:   "192.168.25.100",
         MasterPort: "6379",
      })
   }
   type redisConfigOpts struct {
      Save       bool   // Should redis save db to file?
      MasterIP   string // IP address of the redis master
      MasterPort string // Port of the redis master
   }
   var redisTmpl *template.Template
   const redisString = `
   {{if .Save}}
   save 900 1
   save 300 10
   save 60 10000
   {{end}}
   slaveof {{.MasterIP}} {{.MasterPort}}
`

For our example, we simply statically configure the values, but it is easy to see how we could query etcd or another database to fill in the appropriate values on demand.
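
As a rough sketch of how that might look, the handler below replaces the static values with a lookup against etcd. It is only an illustration: it assumes the /redis/config/master key used later in this article, the github.com/coreos/etcd/clientv3 package, and an ETCD_ENDPOINTS variable naming the cluster, and it reuses redisTmpl and redisConfigOpts from main.go (you would also add "os" and the clientv3 package to the imports).

   // redisConfigFromEtcd is a hypothetical variant of redisConfig that reads
   // the redis master address from etcd instead of hard-coding it.
   func redisConfigFromEtcd(w http.ResponseWriter, req *http.Request) {
      cli, err := clientv3.NewFromURL(os.Getenv("ETCD_ENDPOINTS"))
      if err != nil {
         http.Error(w, "etcd unavailable", http.StatusServiceUnavailable)
         return
      }
      defer cli.Close()
      // Look up the master address; the key layout is just an example.
      resp, err := cli.Get(req.Context(), "/redis/config/master")
      if err != nil || len(resp.Kvs) == 0 {
         http.Error(w, "no master configured", http.StatusInternalServerError)
         return
      }
      redisTmpl.Execute(w, redisConfigOpts{
         Save:       true,
         MasterIP:   string(resp.Kvs[0].Value),
         MasterPort: "6379",
      })
   }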

Now, build and run the config server, and we are ready to implement our URL-based configuration.
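
For example, from the configserver directory created earlier (the port matches the listener in main.go):

   cd configserver
   go build -o configserver .
   ./configserver &
   curl http://localhost:8080/config/redis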

How to do it...

By design, CoreOS is a very stripped-down OS. However, one of the tools it does ship with is curl, which we can use to download our configuration. All we have to do is add it to our systemd/fleet unit file.

For redis-slave.service, input the following:

   [Unit]
   Description=Redis slave server
   After=docker.service
   [Service]
   ExecStartPre=/usr/bin/mkdir -p /tmp/config/redis-slave
   ExecStartPre=/usr/bin/curl -s \
      -o /tmp/config/redis-slave/redis.conf \
      http://configserver-address:8080/config/redis
   ExecStartPre=-/usr/bin/docker kill %p
   ExecStartPre=-/usr/bin/docker rm %p
   ExecStart=/usr/bin/docker run --rm --name %p \
      -v /tmp/config/redis-slave/redis.conf:/tmp/redis.conf \
      redis:alpine /tmp/redis.conf

We have written the config server's address as configserver-address in the preceding unit, so make certain you fill in the appropriate IP (or hostname) of the system running the config server.
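
With the address filled in, the unit can be loaded like any other fleet (or plain systemd) unit; for example, with fleet:

   fleetctl submit redis-slave.service
   fleetctl start redis-slave.service
   fleetctl list-units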

How it works...

We outsource the work of generating the configuration to the web server or beyond. This is a common idiom in modern cluster-oriented systems: many small pieces work together to make the whole.

The idea of using a configuration URL is very flexible. In this case, it allows us to use a pre-packaged, official Docker image for an application that has no knowledge of the cluster, in its standard, default setup. While redis is fairly simple, the same concept can be used to generate and supply configurations for almost any legacy application.

Translating etcd to configuration files

In CoreOS, we have a database whose suitability for configuration is evidenced by its very name: etcd (while etc is an abbreviation of the Latin et cetera, in common UNIX usage /etc is where the system configuration is stored). It presents a standard HTTP interface, which is easy to access from nearly anything.

This makes storing application configuration in etcd a natural choice. The only problem is devising methods of storing the configuration in ways that are sufficiently expressive, flexible, and usable.

Getting ready

A naive but simple way of using etcd is to treat it as a key-oriented file store, as follows:

   etcdctl set myconfig "$(cat mylocalconfig.conf | base64)"
   etcdctl get myconfig | base64 -d > mylocalconfig.conf

However, this method stores the configuration file in the database as a static, opaque blob; all we can do is store and retrieve it wholesale.

Decoupling the generation of the configuration from its consumption yields much more flexibility, both in adapting the configuration content to multiple producers and consumers and in scaling out to multiple access patterns.

How to do it...

We can store and retrieve an entire configuration blob very simply, as follows:

   etcdctl set /redis/config "$(cat redis.conf | base64)"
   etcdctl get /redis/config | base64 -d > redis.conf

Or we can store more general, structured data as follows:

   etcdctl set /redis/config/master 192.168.9.23
   etcdctl set /redis/config/loglevel notice
   etcdctl set /redis/config/dbfile dump.rdb

And use it in different ways:

   REDISMASTER=$(curl -s http://localhost:2379/v2/keys/redis/config/master \
      | jq -r .node.value)
   cat <<ENDHERE >/etc/redis.conf
   slaveof $REDISMASTER
   loglevel $(etcdctl get /redis/config/loglevel)
   dbfile $(etcdctl get /redis/config/dbfile)
   ENDHERE
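
Because etcd speaks plain HTTP, the same keys can be written and read by anything that can make a web request; for example, using the v2 keys API directly (the endpoint address is illustrative):

   curl -s -X PUT http://localhost:2379/v2/keys/redis/config/loglevel -d value=notice
   curl -s http://localhost:2379/v2/keys/redis/config/loglevel | jq -r .node.value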

Building EnvironmentFiles

Environment variables are a popular choice for configuring container executions because nearly anything can read or write them, especially shell scripts. Moreover, they are always ephemeral and, by widely accepted convention, they override configuration file settings.

Getting ready

Systemd provides an EnvironmentFile directive that can be issued multiple times in a service file. This directive takes the argument of a filename that should contain key=value pairs to be loaded into the execution environment of the ExecStart program.

CoreOS provides (in most non-bare metal installations) the file /etc/environment, which is formatted to be included with an EnvironmentFile statement. It typically contains variables describing the public and private IPs of the host.
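
The exact contents vary by provider, but a typical /etc/environment looks something like the following (the addresses here are placeholders):

   COREOS_PUBLIC_IPV4=203.0.113.10
   COREOS_PRIVATE_IPV4=10.0.0.10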

(Image: Environment file)

A common misunderstanding when starting out with Docker is about environment variables. Docker does not inherit the environment variables of the environment that calls docker run. Environment variables that are to be passed to the container must be explicitly stated using the -e option. This can be particularly confounding since systemd units do much the same thing. Therefore, to pass environments into Docker from a systemd unit, you need to define them both in the unit and in the docker run invocation.

So this will work as expected:

[Service]
Environment=TESTVAR=testVal
ExecStart=/usr/bin/docker run -e TESTVAR=$TESTVAR nginx

Whereas this will not:

[Service]
Environment=TESTVAR=unknowableVal
ExecStart=/usr/bin/docker run nginx

How to do it...

We will start by constructing an environment file generator unit.

For testapp-env.service use the following:

   [Unit]
   Description=EnvironmentFile generator for testapp
   Before=testapp.service
   BindsTo=testapp.service
   [Install]
   RequiredBy=testapp.service
   [Service]
   ExecStart=/bin/sh -c "echo NOW=$(date +'%%'s) >/run/now.env"
   Type=oneshot
   RemainAfterExit=yes

You may note the odd syntax for the date format. Systemd expands %s internally, so it needs to be escaped to be passed to the shell unmolested.

For testapp.service use the following:

   [Unit]
   Description=My Amazing test app, configured by EnvironmentFile
   [Service]
   EnvironmentFile=/run/now.env
   ExecStart=/usr/bin/docker run --rm -p 8080:8080 \
      -e NOW=${NOW} \
      ulexus/environmentfile-demo

If you are using fleet, you can submit these service files with fleetctl. If you are using raw systemd, you will need to install them into /etc/systemd/system. Then issue the following:

systemctl daemon-reload
   systemctl enable testapp-env.service
   systemctl start testapp.service
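
Once the units are up, you can verify that the generator actually produced the environment file and that both units started cleanly (purely a sanity check; output will vary):

   cat /run/now.env
   systemctl status testapp-env.service testapp.service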

(Image: testapp output)

How it works...

The first unit writes the current UNIX timestamp to the file /run/now.env, and the second unit reads that file, parsing its contents into environment variables. We then pass the desired environment variables into the docker execution.

Taking apart the first unit, there are a number of important components. They are as follows:

  • The Before statement tells systemd that this unit should be started before the main testapp unit. This is important so that the environment file exists before the service is started. Otherwise, the main unit would either fail because the file does not exist or read stale data from a previous run.
  • The BindsTo setting tells systemd that the unit should be stopped and started with testapp.service. This makes sure that it is restarted when testapp is restarted, refreshing the environment file.
  • The RequiredBy setting tells systemd that this unit is required by the other unit. Stating the relationship in this direction allows the helper unit to be enabled or disabled separately, without any modification to the target unit. While that wouldn't matter in this case, in cases where the target service is a standard unit file that knows nothing about the helper unit, it allows us to use the add-on without fear of altering the official, standard service unit.
  • The Type and RemainAfterExit combination of settings tells systemd to expect that the unit will exit, but to treat it as up even after it has exited. This keeps the dependency satisfied even though the unit is no longer running.

In the second unit, the main service, the key thing to note is the EnvironmentFile line. It simply takes a file as an argument. We reference the file that was created (or updated) by the first unit. Systemd reads it into the environment for any Exec* statements. Because Docker separates its containers' environments, we still have to manually pass that variable into the container with the -e flag to docker run.

There's more...

You might be wondering why we don't combine the units and simply set the environment variable with an ExecStartPre statement. Modifications to the environment made within one Exec* statement are isolated from all other Exec* statements: you can change the environment inside an Exec* statement, but those changes will not carry over to any other Exec* statement. Also, you cannot execute commands in an Environment or EnvironmentFile statement, nor can those statements expand any variables themselves.
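
To see that isolation in action, a throwaway unit along these lines (the names and values are made up purely for the demonstration) shows that a variable exported in ExecStartPre never reaches ExecStart:

   [Service]
   Type=oneshot
   # The export only affects this ExecStartPre's own shell process.
   ExecStartPre=/bin/sh -c "export TESTVAR=preValue"
   # Prints an empty value: $$ passes a literal ${TESTVAR} to the shell, and
   # nothing in this statement's environment ever set it.
   ExecStart=/bin/sh -c "echo TESTVAR is: $${TESTVAR}"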

Building an active configuration manager

Dynamic systems are, well, dynamic. They will often change while a dependent service is running. In such cases, simple start-time configuration schemes such as the ones we have discussed thus far are insufficient. We need the ability to tell our dependent services to use the new, changed configuration.

For such cases as this, we can implement active configuration management. In an active configuration, some processes monitor the state of dynamic components and notify or restart dependent services with the updated data.

Getting ready

Much like the active service announcer, we will be building our active configuration manager in Go, so a functional Go development environment is required.

To increase readability, we have broken each subroutine into a separate file.

How to do it...

  1. First, we construct the main routine, as follows:
    main.go:
       package main
       import (
          "log"
          "os"
          "github.com/coreos/etcd/clientv3"
          "golang.org/x/net/context"
       )
       var etcdKey = "web:backends"
       func main() {
          ctx := context.Background()
          log.Println("Creating etcd client")
          c, err := clientv3.NewFromURL(os.Getenv("ETCD_ENDPOINTS"))
          if err != nil {
             log.Fatal("Failed to create etcd client: ", err) // log.Fatal already exits
          }
          defer c.Close()
          w := c.Watch(ctx, etcdKey, clientv3.WithPrefix())
          for resp := range w {
             if resp.Canceled {
                log.Fatal("etcd watcher died")
             }
             go reconfigure(ctx, c)
          }
       }
    
  2. Next, our reconfigure routine, which pulls the current state from etcd, writes the configuration to file, and restarts our service, as follows:
    reconfigure.go:
       package main
       import (
          "github.com/coreos/etcd/clientv3"
          "golang.org/x/net/context"
       )
       // reconfigure haproxy
       func reconfigure(ctx context.Context, c *clientv3.Client) error {
          backends, err := get(ctx, c)
          if err != nil {
             return err
          }
          if err = write(backends); err != nil {
             return err
          }
          return restart()
       }
    

    The reconfigure routine just calls get, write and restart, in sequence. Let's create each of those as follows:

     get.go:
       package main
       import (
          "github.com/coreos/etcd/clientv3"
          "golang.org/x/net/context"
       )
       // get the present list of backends stored under the etcdKey prefix
       func get(ctx context.Context, c *clientv3.Client) ([]string, error) {
          // use WithPrefix so we retrieve every backend key, matching the watcher
          resp, err := clientv3.NewKV(c).Get(ctx, etcdKey, clientv3.WithPrefix())
          if err != nil {
             return nil, err
          }
          var backends = []string{}
          for _, node := range resp.Kvs {
             if node.Value != nil {
                backends = append(backends, string(node.Value))
             }
          }
          return backends, nil
       }
     write.go:
     package main
     import (
         "os"
         "text/template"
     )
     var configTemplate *template.Template
     func init() {
         configTemplate = template.Must(template.New("config").Parse(configTemplateString))
     }
     // Write the updated config file
     func write(backends []string) error {
         cf, err := os.Create("/config/haproxy.conf")
         if err != nil {
             return err
         }
         defer cf.Close()
         return configTemplate.Execute(cf, backends)
     }
     var configTemplateString = `
     frontend public
         bind 0.0.0.0:80
         default_backend servers
     backend servers
     {{range $index, $ip := .}}
         server srv-{{$index}} {{$ip}}
     {{end}}
     `
     restart.go:
        package main
        import "github.com/coreos/go-systemd/dbus"
        // restart haproxy via the systemd D-Bus API
        func restart() error {
           conn, err := dbus.NewSystemdConnection()
           if err != nil {
              return err
           }
           defer conn.Close()
           _, err = conn.RestartUnit("haproxy.service", "ignore-dependencies", nil)
           return err
        }
    
  3. With our active configuration manager available, we can now create a service unit to run it, as follows:
     haproxy-config-manager.service:
        [Unit]
        Description=Active configuration manager
        [Service]
        EnvironmentFile=/etc/environment
        ExecStart=/usr/bin/docker run --rm --name %p \
           -v /data/config:/config \
           -v /var/run/dbus:/var/run/dbus \
           -v /run/systemd:/run/systemd \
           -e ETCD_ENDPOINTS=http://${COREOS_PUBLIC_IPV4}:2379 \
           quay.io/ulexus/demo-active-configuration-manager
        Restart=always
        RestartSec=10
        [X-Fleet]
        MachineOf=haproxy.service
    

How it works...

First, we monitor the pertinent keys in etcd. It helps to have all of the keys under one prefix, but if that isn't the case, we can simply add more watchers. When a change occurs, we pull the present values for all the pertinent keys from etcd and then rebuild our configuration file.
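
For example, a reconfiguration can be triggered simply by writing a new backend under the watched prefix (the key layout and address are illustrative; older etcdctl builds need ETCDCTL_API=3 to reach the v3 API our watcher uses):

   ETCDCTL_API=3 etcdctl put web:backends/web1 10.0.0.11:8080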

Next, we tell systemd to restart the dependent service. If the target service has a valid ExecReload, we could tell systemd to reload, instead. In order to talk to systemd, we have passed in the dbus and systemd directories, to enable access to their respective sockets.
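
As a sketch of that alternative, the restart routine could call ReloadUnit from the same github.com/coreos/go-systemd/dbus package; this is only useful if haproxy.service actually defines ExecReload:

   // reload asks systemd to reload haproxy rather than restart it.
   func reload() error {
      conn, err := dbus.NewSystemdConnection()
      if err != nil {
         return err
      }
      defer conn.Close()
      _, err = conn.ReloadUnit("haproxy.service", "ignore-dependencies", nil)
      return err
   }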

Using fleet globals

When you have a set of services that should run on each of a set of machines, it can be tedious to manage discrete, separate unit instances for each node. Fleet provides a reasonably flexible way to run these kinds of services, and when nodes are added, it will automatically start any declared globals on those machines as well.

Getting ready

In order to use fleet globals, you will need fleet running on each machine on which the globals will be executed. This is usually a simple matter of enabling fleet within the cloud-config as follows:

   #cloud-config
   coreos:
     fleet:
       metadata: service=nginx,cpu=i7,disk=ssd
       public-ip: "$public_ipv4"
     units:
       - name: fleet.service
         command: start
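
After fleet starts, you can confirm that each node registered with the expected metadata (the command is just a check, not part of the configuration):

   # each machine should appear with the metadata declared in its cloud-config
   fleetctl list-machines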

How to do it...

To make a fleet unit a global, simply declare the Global=true parameter in the [X-Fleet] section of the unit as follows:

[Unit]
   Description=My global service
   
   [Service]
   ExecStart=/usr/bin/docker run --rm -p 8080:80 nginx
   [X-Fleet]
   Global=true

Globals can also be filtered with other keys. For instance, a common filter is to run globals on all nodes that have certain metadata:

[Unit]
   Description=My partial global service
   
   [Service]
   ExecStart=/usr/bin/docker run --rm -p 8080:80 nginx
   [X-Fleet]
   Global=true
   MachineMetadata=service=nginx

Note that the metadata that is being referred to here is the fleet metadata, which is distinct from the instance metadata of your cloud provider or even the node tags of Kubernetes.
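
To run the filtered global, start it as usual; fleet then schedules an instance on every machine whose metadata matches. The unit file name below is simply what we might call the example above:

   fleetctl start my-partial-global.service
   fleetctl list-units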

How it works...

Unlike most fleet units, with a global there is not a one-to-one correspondence between the fleet unit and an actual running service: the single unit definition results in one running instance on every matching machine.

This has the side effect that modifications to a fleet global have immediate global effect. In other words, there is no rolling update with a fleet global. There is an immediate, universal replacement only. Hence, do not use globals for services that cannot be wholly down during upgrades.

Summary

In this article, we addressed the challenges facing administrators who come from traditional, static deployment environments. We learned that configuration in a dynamic cluster cannot simply be built once and deployed: it must be managed actively in the running environment, and any changes must be propagated and reloaded by the services that depend on them.
