Acting as a proxy (HttpProxyModule)

by Usama Dar | December 2013 | Open Source

This article written by Usama Dar, the author of Nginx Module Extension, is a reference to the standard and optional HTTP modules, their synopsis, directives as well as practical configuration examples.

 


The HttpProxyModule allows Nginx to act as a proxy and pass requests to another server.

location / {
    proxy_pass http://app.localhost:8000;
}

Note that when using the HttpProxyModule (and also when using FastCGI), the entire client request is buffered in Nginx before being passed on to the proxied server.

Explaining directives

Some of the important directives of the HttpProxyModule are as follows.

proxy_pass

The proxy_pass directive sets the address of the proxy server and the URI to which the location will be mapped. The address may be given as a hostname or an address and port, for example:

proxy_pass http://localhost:8000/uri/;

Or, the address may be given as a UNIX socket path:

proxy_pass http://unix:/path/to/backend.socket:/uri/;

The path is given after the word unix, between two colons.

You can use the proxy_set_header directive to forward headers from the client request to the proxied server, for example:

proxy_set_header Host $host;

While passing requests, Nginx replaces the part of the request URI matched by the location with the URI specified by the proxy_pass directive.
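A minimal sketch of this replacement (the path /app/ and port 8000 are illustrative):

```nginx
# A request for /app/some/page is proxied to
# http://127.0.0.1:8000/some/page: the matched prefix /app/
# is replaced by the URI part ("/") of proxy_pass.
location /app/ {
    proxy_pass http://127.0.0.1:8000/;
}
```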

If the URI is changed inside a proxied location by the rewrite directive, this configuration is used to process the request. For example:

location /name/ {
    rewrite      /name/([^/]+)  /users?name=$1  break;
    proxy_pass   http://127.0.0.1;
}

A request URI is passed to the proxy server after normalization as follows:

  • Double slashes are replaced by a single slash.
  • Any references to the current directory, such as "./", are removed.
  • Any references to the previous directory, such as "../", are removed.

If proxy_pass is specified without a URI (for example, in http://example.com/request, /request is the URI part), the request URI is passed to the server in the same form as it was sent by the client:

location /some/path/ {
    proxy_pass http://127.0.0.1;
}

If you need the proxy connection to an upstream server group to use SSL, your proxy_pass rule should use https:// and you will also have to set your SSL port explicitly in the upstream definition. For example:

upstream backend-secure {
    server 10.220.129.20:443;
}

server {
    listen 10.220.129.1:443;

    location / {
        proxy_pass https://backend-secure;
    }
}

proxy_pass_header

The proxy_pass_header directive allows passing to the client header lines from the proxied server that are otherwise disabled.

For example:

location / {
    proxy_pass_header X-Accel-Redirect;
}

proxy_connect_timeout

The proxy_connect_timeout directive sets the timeout for establishing a connection with the upstream server. This value cannot be more than 75 seconds. Please remember that this is only a connection timeout, not a response timeout.

It is not the time until the server returns a page; that is configured through the proxy_read_timeout directive. If your upstream server is up but hanging, this directive will not help, as the connection to the server has already been made.
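The distinction can be made concrete in a sketch; the timeout values and address here are illustrative, not recommendations:

```nginx
location / {
    proxy_pass            http://127.0.0.1:8000;
    proxy_connect_timeout 5s;    # covers the TCP connect only
    proxy_read_timeout    60s;   # max wait between two successive reads
    proxy_send_timeout    30s;   # max wait between two successive writes
}
```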

proxy_next_upstream

The proxy_next_upstream directive determines in which cases the request will be transmitted to the next server:

  • error: An error occurred while connecting to the server, sending a request to it, or reading its response
  • timeout: The timeout occurred during the connection with the server, transferring the request, or while reading the response from the server
  • invalid_header: The server returned an empty or incorrect response
  • http_500: The server responded with code 500
  • http_502: The server responded with code 502
  • http_503: The server responded with code 503
  • http_504: The server responded with code 504
  • http_404: The server responded with code 404
  • off: Disables request forwarding

Passing the request to the next server is only possible while nothing has yet been sent to the client. If the transmission of the response to the client was interrupted partway through, whether due to an error or some other reason, the request will not be passed to another server.
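Combining the directive with an upstream group might look like the following sketch (the server addresses are hypothetical):

```nginx
# Retry on connection errors, timeouts, and selected 5xx
# responses from the first server tried.
upstream backends {
    server 10.0.0.1:8000;
    server 10.0.0.2:8000;
}

server {
    location / {
        proxy_pass          http://backends;
        proxy_next_upstream error timeout http_500 http_502 http_503;
    }
}
```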

proxy_redirect

The proxy_redirect directive allows you to manipulate the HTTP redirection by replacing the text in the response from the upstream server. Specifically, it replaces text in the Location and Refresh headers.

The HTTP Location header field is returned in response from a proxied server for the following reasons:

  • To indicate that a resource has moved temporarily or permanently.
  • To provide information about the location of a newly created resource. This could be the result of an HTTP PUT.

Let us suppose that the proxied server returned the following:

Location: http://localhost:8080/images/new_folder

If you have the proxy_redirect directive set to the following:

proxy_redirect http://localhost:8080/images/ http://xyz/;

The Location text will be rewritten to be similar to the following:

Location: http://xyz/new_folder

It is possible to use some variables in the redirected address:

proxy_redirect http://localhost:8000/ http://$host:$server_port/;

You can also use regular expressions in this directive:

proxy_redirect ~^(http://[^:]+):\d+(/.+)$ $1$2;

The value off disables all the proxy_redirect directives at its level.

proxy_redirect off;
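For context, the implicit default setting, proxy_redirect default;, rewrites using the values of the enclosing location and its proxy_pass directive. A sketch (the host name and paths are illustrative):

```nginx
location /one/ {
    proxy_pass http://backend:8000/two/;

    # The implicit "proxy_redirect default;" here is equivalent to:
    # proxy_redirect http://backend:8000/two/ /one/;
}
```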

proxy_set_header

The proxy_set_header directive allows you to redefine and add new HTTP headers to the request sent to the proxied server.

You can use a combination of static text and variables as the value of the proxy_set_header directive.

By default, the following two headers will be redefined:

proxy_set_header Host $proxy_host;
proxy_set_header Connection close;

You can forward the original Host header value to the server as follows:

proxy_set_header Host $http_host;

However, if this header is absent in the client request, nothing will be transferred.

It is better to use the $host variable; its value equals the Host request header, or the primary server name if that header is absent from the client request.

proxy_set_header Host $host;

You can transmit the name of the server together with the port of the proxied server:

proxy_set_header Host $host:$proxy_port;

If you set the value to an empty string, the header is not passed to the upstream proxied server. For example, if you want to disable the gzip compression on upstream, you can do the following:

proxy_set_header  Accept-Encoding  "";

proxy_store

The proxy_store directive sets the path under which files received from the upstream server are stored, with paths corresponding to the alias or root directives. The off value disables local file storage. Please note that proxy_store is different from proxy_cache: it is simply a method of saving proxied files on disk. It may be used to construct cache-like setups (usually involving an error_page-based fallback). The proxy_store directive is off by default. The value can contain a mix of static strings and variables.

proxy_store   /data/www$uri;

The modification date of the file is set to the value of the Last-Modified header in the response. A response is first written to a temporary file in the path specified by proxy_temp_path and then renamed. It is recommended to keep this temporary path and the storage path on the same file system, so that the operation is a simple rename rather than creating two copies of the file.

Example:

location /images/ {
    root         /data/www;
    error_page   404 = @fetch;
}

location @fetch {
    internal;
    proxy_pass           http://backend;
    proxy_store          on;
    proxy_store_access   user:rw group:rw all:r;
    proxy_temp_path      /data/temp;
    alias                /data/www;
}

In this example, proxy_store_access defines the access rights of the created file.

In the case of a 404 error, the @fetch named location proxies the request to a remote server and stores a local copy under /data/www; responses are first written to the /data/temp folder and then renamed into place.

proxy_cache

The proxy_cache directive either turns off caching when you use the value off or sets the name of the cache. This name can then be used subsequently in other places as well. Let's look at the following example to enable caching on the Nginx server:

http {
    proxy_cache_path  /var/www/cache levels=1:2 keys_zone=my-cache:8m
                      max_size=1000m inactive=600m;
    proxy_temp_path   /var/www/cache/tmp;

    server {
        location / {
            proxy_pass         http://example.net;
            proxy_cache        my-cache;
            proxy_cache_valid  200 302  60m;
            proxy_cache_valid  404      1m;
        }
    }
}

The previous example creates a cache named my-cache. It sets the validity of cached entries to 60 minutes (60m) for response codes 200 and 302, and to 1 minute (1m) for 404.

The cached data is stored in the /var/www/cache folder. The levels parameter sets the number of subdirectory levels in the cache. You can define up to three levels.

The keys_zone name is paired here with the inactive parameter: any item in my-cache that is not accessed for 600 minutes (600m) is purged. The default inactive interval is 10 minutes.
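To illustrate what levels=1:2 means on disk (the hash shown is a hypothetical example; cache file names are derived from a hash of the cache key):

```nginx
# With levels=1:2, a cached response whose key hashes to a value
# ending in "...29c" is stored under a path such as:
#   /var/www/cache/c/29/<hash-of-key>
# (the last character of the hash forms the first directory level,
# the previous two characters form the second).
proxy_cache_path /var/www/cache levels=1:2 keys_zone=my-cache:8m
                 max_size=1000m inactive=600m;
```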

 

Chapter 5 of the book, Creating Your Own Module, is inspired by the work of Evan Miller, which can be found at http://www.evanmiller.org/nginx-modules-guide.html.

 

Summary

In this article we looked at several standard HTTP modules. These modules provide a very rich set of functionality by default. You can disable them at configuration time if you wish; otherwise they are installed by default. The list of modules and their directives in this article is by no means exhaustive; Nginx's online documentation provides more detail.

About the Author :


Usama Dar

Usama Dar has over 13 years' experience working with software systems. During this period, he has not only worked with large companies such as Nortel Networks, Ericsson, and Huawei, but has also been involved with successful startups such as EnterpriseDB. He has worked with systems with mission-critical requirements of scalability and high availability. He writes actively on this website: www.usamadar.com. These days, Usama works at the Huawei Research Centre in Munich, where he spends most of his time researching highly scalable, high-performing infrastructure software (such as operating systems), databases, and web and application servers.
