Connecting to backend servers (Should know)

Instant Varnish Cache How-to


January 2013


Hands-on recipes to improve your website's load speed and overall user experience with Varnish


Getting ready

If you have a server architecture diagram, that's a good place to start listing and grouping all the required servers, but you'll also need some technical data about them. You may find this information in your server monitoring setup: the IP addresses, the ports, and, ideally, a probing URL for health checks.

In our case, the main VCL configuration file, default.vcl, is located at /etc/varnish and defines the configuration that Varnish Cache uses during the life cycle of each request, including the list of backend servers.

How to do it...

  1. Open the default.vcl file using the following command:

    $ sudo vim /etc/varnish/default.vcl

  2. A simple backend declaration would be:

    backend server01 {
        .host = "localhost";
        .port = "8080";
    }

    This small block of code defines a backend named server01 and tells Varnish which host (a hostname or an IP address) and which port to connect to.

  3. Save the file and reload the configuration using the following command:

    $ sudo service varnish reload

    At this point, Varnish will proxy every request to the first backend declared in its default VCL file. Give it a try: access a known URL (such as the index of your website) through Varnish Cache and make sure that the content is delivered just as it would be without Varnish.

    For testing purposes, this is an okay backend declaration, but we need to make sure that our backend servers are up and waiting for requests before we really start to direct web traffic to them.

  4. Let's include a probing request to our backend:

    backend website {
        .host = "localhost";
        .port = "8080";
        .probe = {
            .url = "/favicon.ico";
            .timeout = 60ms;
            .interval = 2s;
            .window = 5;
            .threshold = 3;
        }
    }

    Varnish will now probe the backend server at the given URL every two seconds, with a timeout of 60 ms per probe.

    To determine whether a backend is healthy, Varnish analyzes the last five probes (the window). If at least three of them (the threshold) returned 200 OK, the backend is marked as Healthy and requests are forwarded to it; otherwise the backend is marked as Sick and will not receive any incoming requests until it is Healthy again.
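The window/threshold decision described above can be sketched in a few lines of Python. This is an illustrative simulation of the bookkeeping, not Varnish's actual implementation:

```python
from collections import deque

def is_healthy(recent_probes, window=5, threshold=3):
    """Return True if at least `threshold` of the last `window` probes succeeded."""
    last = list(recent_probes)[-window:]
    return sum(last) >= threshold

# Each entry is True for a probe that returned 200 OK within the timeout.
probes = deque(maxlen=5)
for ok in [True, True, False, True, False]:
    probes.append(ok)

# Three of the last five probes succeeded, so the backend counts as healthy.
print(is_healthy(probes))
```

With window=5 and threshold=3, a single failed probe never takes a backend out of rotation; it takes a sustained run of failures to mark it Sick, which keeps transient glitches from causing flapping.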

  5. Probe the backend servers that require additional information:

    If your backend server requires extra headers or uses HTTP basic authentication, you can switch the probe from .url to .request and specify a raw HTTP request. When using a .request probe, you must always provide a Connection: close header, or the probe will not work. This is shown in the following code snippet:

    backend api {
        .host = "localhost";
        .port = "8080";
        .probe = {
            .request =
                "GET /status HTTP/1.1"
                "Host: www.yourhostname.com"
                "Connection: close"
                "X-API-Key: e4d909c290d0fb1ca068ffaddf22cbd0"
                "Accept: application/json";
            .timeout = 60ms;
            .interval = 2s;
            .window = 5;
            .threshold = 3;
        }
    }

  6. Choose a backend server based on incoming data:

    After declaring your backend servers, you can start directing the clients' requests. The most common way to choose which backend server will respond to a request is according to the incoming URL, as shown in the following code snippet:

    sub vcl_recv {
        if (req.url ~ "/api/") {
            set req.backend = api;
        } else {
            set req.backend = website;
        }
    }

    Based on the preceding configuration, all requests that contain /api/ (www.yourdomain.com/api/) in the URL will be sent to the backend named api and the others will reach the backend named website.

    You can also pick the correct backend server based on the User-Agent header, the client IP address (for geo-based routing), and virtually any other information that arrives with the request.
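For instance, a hypothetical rule (assuming a backend named mobile has been declared alongside website) could route mobile browsers to their own backend:

```vcl
sub vcl_recv {
    # Case-insensitive match on common mobile User-Agent tokens.
    if (req.http.User-Agent ~ "(?i)mobile|android|iphone") {
        set req.backend = mobile;
    } else {
        set req.backend = website;
    }
}
```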

How it works...

By probing your backend servers, you automate the removal of a sick backend from your cluster, and by doing so, you avoid delivering broken pages to your customers. As soon as a backend passes enough probes again, Varnish adds it back to the cluster pool.

Directing requests to the appropriate backend server is a great way to make sure that every request reaches its destination, and it gives you the flexibility to serve content based on the incoming data, such as requests from a mobile device or calls to an API.

There's more...

If you have many servers to declare as backends, you can define each probe as a separate, named configuration block and reference it from the backend definitions, avoiding repetition and improving the configuration's readability.

probe favicon {
    .url = "/favicon.ico";
    .timeout = 60ms;
    .interval = 2s;
    .window = 5;
    .threshold = 3;
}

probe robots {
    .url = "/robots.txt";
    .timeout = 60ms;
    .interval = 2s;
    .window = 5;
    .threshold = 3;
}

backend server01 {
    .host = "localhost";
    .port = "8080";
    .probe = favicon;
}

backend server02 {
    .host = "localhost";
    .port = "8080";
    .probe = robots;
}

The server01 backend will use the probe named favicon, and the server02 backend will use the probe named robots.

Summary

This article explained how to connect Varnish to backend servers, probe them for health, and direct incoming requests to the right backend.
