Instant Redis Persistence
Everything you need to know about configuring, maintaining, and optimizing your Redis data storage with this book and ebook
In this article by Matt Palmer, author of Instant Redis Persistence, we'll step through a number of threat scenarios, and how you can go about protecting against each one.
While securing your data isn't strictly a persistence topic, it is an important one to consider when planning your data storage strategy. If your stored data can be accessed improperly, you're in a whole world of pain.
A common complaint about Redis is that it has no means of controlling who has access to the Redis server—if you can connect to the Redis server, you've got the ability to fully access and manipulate the data stored in that server.
How to do it...
Anyone who can read the files that Redis uses to persist your dataset has a full copy of all your data. Worse, anyone who can write to those files can, with a minimal amount of effort and some patience, change the data that your Redis server contains. Both of these things are probably not what you want, and thankfully it isn't particularly difficult to prevent.
All you have to do is prevent anyone but the user running your Redis server from accessing the data directory your Redis instance is using. The simplest way to achieve this is by changing the owner of the directory to the user who runs your Redis server, and then disallow all privileges to everyone else, like this:
- Determine the user under whom you are running your Redis instance. You can typically find this out by running ps caux | grep redis-server. The name in the first column is the user under which Redis is running.
- Determine the directory in which Redis is storing its files. If you don't already know this, you can ask Redis by running CONFIG GET dir from within redis-cli.
- Ensure that the user running your Redis instance owns its data directory:
chown <redisuser> /path/to/redis/datadir
- Restrict permissions on the data directory so that only the owner can access it at all:
chmod 0700 /path/to/redis/datadir
It is important that you protect the Redis data directory, and not individual data files, because Redis is regularly rewriting those data files, and the permissions you choose won't necessarily be preserved on the next rewrite. It is also a good practice to restrict access to your redis.conf file, because in some cases it can contain sensitive data. This is simply achieved:
chmod 0600 /path/to/redis.conf
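The steps above can be wrapped up into a small shell function. This is a sketch only; the user name, data directory, and config path in the comments are assumptions, so substitute the values you found with ps and CONFIG GET dir.

```shell
#!/bin/sh
# lock_down_redis USER DATADIR CONF -- apply the ownership and permission
# changes described above. USER must be the account Redis runs as.
lock_down_redis() {
    user="$1"; datadir="$2"; conf="$3"
    chown "$user" "$datadir"   # data directory owned by the Redis user
    chmod 0700 "$datadir"      # nobody else can list or enter it
    chmod 0600 "$conf"         # redis.conf can contain sensitive data
}

# Demonstration against scratch files; for real use you would run
# something like: lock_down_redis redis /var/lib/redis /etc/redis/redis.conf
demo_dir=$(mktemp -d)
demo_conf=$(mktemp)
lock_down_redis "$(id -un)" "$demo_dir" "$demo_conf"
```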
If you run your Redis using applications on a server which is shared with other people, your Redis instance is at pretty serious risk of abuse. The most common way of connecting to Redis is via TCP, which can only limit access based on the address connecting to it. On a shared server, that address is shared amongst everyone using it, so anyone else on the same server as you can connect to your Redis. Not cool!
If, however, the programs that need to access your Redis server are on the same machine as the Redis server, there is another, more secure, method of connection called Unix sockets. A Unix socket looks like a file on disk, and you can control its permissions just like a file, but Redis can listen on it (and clients can connect to it), in a very similar way to a TCP socket.
Enabling Redis to listen on a Unix socket is fairly straightforward:
- Set the port parameter to 0 in your Redis configuration file. This will tell Redis to not listen on a TCP socket. This is very important to prevent miscreants from still being able to connect to your Redis server while you're happily using a Unix socket.
- Set the unixsocket parameter in your Redis configuration file to a fully-qualified filename where you want the socket to exist. If your Redis server runs as the same user as your client programs (which is common in shared-hosting situations), I recommend making the name of the file redis.sock, in the same directory as your Redis dataset. So, if you keep your Redis data in /home/joe/redis, set unixsocket to /home/joe/redis/redis.sock.
- Set the unixsocketperm parameter in your Redis configuration file to 600, or a more relaxed permission set if you know what you're doing. Again, this assumes that your Redis server and Redis-using programs are running as the same user. If they're not, you'll probably need a dedicated group and things get a lot more complicated—and beyond the scope of what can be covered in this guide.
- Once you've changed those configuration parameters and restarted Redis, you should find that the file you specified for unixsocket has magically appeared, and you can no longer connect to Redis using TCP.
- All that remains to do now is to configure your Redis-using programs to connect using the Unix socket, which is something you should find how to do in the manual for your particular Redis client library or application.
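Put together, the configuration changes above amount to three lines in redis.conf. This sketch writes them to a scratch file for illustration; the socket path follows the /home/joe/redis example from the text, so substitute your own data directory.

```shell
#!/bin/sh
# Append the Unix-socket settings described above to a (scratch) config
# file: disable TCP entirely, name the socket, and restrict its permissions.
conf=$(mktemp)
cat >>"$conf" <<'EOF'
port 0
unixsocket /home/joe/redis/redis.sock
unixsocketperm 600
EOF
# After restarting Redis, a client such as redis-cli would connect with:
#   redis-cli -s /home/joe/redis/redis.sock ping
```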
Configuring Redis to use Unix sockets is all well and good when it's practical, but what about if you need to connect to Redis over a network? In that case, you'll need to let Redis listen on a TCP socket, but you should at least limit the computers that can connect to it with a suitable firewall configuration.
While the properly paranoid systems administrator runs their systems with a default deny firewalling policy, not everyone shares this philosophy. However, given that by default, anyone who can connect to your Redis server can do anything they want with it, you should definitely configure a firewall on your Redis servers to limit incoming TCP connections to those which are coming from machines that have a legitimate need to talk to your Redis server. While it won't protect you from all attacks, it will cut down significantly on the attack surface, which is an important part of a defense-in-depth security strategy.
Unfortunately, it is hard to give precise commands to configure a firewall ruleset, because there are so many firewall management tools in common use on systems today. In the interest of addressing the greatest common factor, though, I'll provide a set of Linux iptables rules, which should be translatable to whatever means of managing your firewall (whether it be an iptables wrapper of some sort on Linux, or a pf-based system on a BSD).
In all of the following commands, replace the word <port> with the TCP port that your Redis server listens on. Also, note that these commands will temporarily stop all traffic to your Redis instance, so you'll want to avoid doing this on a live server. Setting up your firewall in an init script is the best course of action.
- Insert a rule that will drop all traffic to your Redis server port by default:
iptables -I INPUT -p tcp --dport <port> -j DROP
- For each IP address you want to allow to connect, run these two commands to let the traffic in:
iptables -I INPUT -p tcp --dport <port> -s <clientIP> -j ACCEPT
iptables -I OUTPUT -p tcp --sport <port> -d <clientIP> -j ACCEPT
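For more than one or two clients, it's easy to generate the ruleset rather than type it. This sketch prints the iptables commands (rather than executing them) so the result can be reviewed first; the port and addresses are placeholder examples. Note that because -I prepends, the ACCEPT rules generated after the DROP rule still end up above it, which is the ordering we want.

```shell
#!/bin/sh
# Emit the iptables rules described above for a Redis port and a list of
# client IPs. Commands are printed rather than run, so they can be
# reviewed; pipe the output to sh (as root) to apply them.
redis_fw_rules() {
    port="$1"; shift
    echo "iptables -I INPUT -p tcp --dport $port -j DROP"
    for ip in "$@"; do
        echo "iptables -I INPUT -p tcp --dport $port -s $ip -j ACCEPT"
        echo "iptables -I OUTPUT -p tcp --sport $port -d $ip -j ACCEPT"
    done
}

redis_fw_rules 6379 192.0.2.10 192.0.2.11
```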
A firewall is great, but sometimes you can't trust everyone with access to a machine that needs to talk to your Redis instance. In that case, you can use authentication to provide a limited amount of protection against miscreants:
- Select a very strong password. Redis is not hardened against repeated password guessing, so you want to make this very long and very random. If you make the password too short, an attacker can just write a program that tries every possible password very quickly, and guess your password that way. Not cool! Thankfully, since humans should rarely be typing this password, it can be a complete jumble, and very long. I like the command pwgen -sy 32 1 for all my "generating very strong password" needs.
- Configure all your clients to authenticate against the server, by sending the following command when they first connect to the server:
AUTH <your very strong password>
- Edit your Redis configuration file to include a line like this:
requirepass "<your very strong password>"
If your selected password contains any double-quotes, you'll need to escape them with a backslash (so " would become \"). You'll also need to double any actual backslashes (so \ becomes \\).
- Let the configuration changes take effect by restarting Redis. The authentication password cannot be changed at runtime.
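The escaping rules for requirepass can be sketched as a small shell helper. The password here is a made-up example, chosen only because it contains both of the troublesome characters:

```shell
#!/bin/sh
# Escape a password for use inside a requirepass "..." directive, per the
# rules above: double any backslashes first, then backslash-escape any
# double-quotes.
escape_requirepass() {
    printf '%s' "$1" | sed -e 's/\\/\\\\/g' -e 's/"/\\"/g'
}

# Hypothetical example password containing both troublesome characters:
pw='ab"cd\ef'
echo "requirepass \"$(escape_requirepass "$pw")\""
```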
If you don't need certain commands, or want to limit the use of certain commands to a subset of clients, you can use the rename-command configuration parameter. Like firewalling, restricting, or disabling commands reduces your attack surface, but is not a panacea.
The simplest solution to the risk of a dangerous command is to disable it. For example, if you want to stop anyone from accidentally (or deliberately) nuking all the data in your Redis server with a single command, you might decide to disable the FLUSHDB and FLUSHALL commands, by putting the following in your Redis config file:
rename-command FLUSHDB ""
rename-command FLUSHALL ""
This doesn't stop someone from enumerating all the keys in your dataset with KEYS * and then deleting them all one-by-one, but it does raise the bar somewhat. If you never wanted to delete keys (but, say, only let them expire) you could disable the DEL command; although all that would probably do is encourage the wily cracker to enumerate all your keys and run PEXPIRE <key> 1 on each of them. Arms races are a terrible thing...
While disabling commands entirely is great when it can be done, you sometimes need a particular command, but you'd prefer not to give access to it to absolutely everyone—commands that can cause serious problems if misused, such as CONFIG. For those cases, you can rename the command to something hard-to-guess, as shown in the following command:
rename-command CONFIG somegiantstringnobodywouldguess
It's important to not make the new name of the command something easy-to-guess. As with the AUTH command, which we discussed previously, someone who wanted to do bad things could easily write a program to repeatedly guess what you've renamed your commands to.
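One low-effort way to get a hard-to-guess name is to generate it rather than invent it. This sketch uses openssl (which we'll be using again shortly) to produce a 40-character hex string; the choice of CONFIG as the command to rename follows the example above.

```shell
#!/bin/sh
# Generate a random, hard-to-guess replacement name for a command.
# 20 random bytes -> 40 hex characters: far too much entropy for anyone
# to guess by brute force.
newname=$(openssl rand -hex 20)
echo "rename-command CONFIG $newname"
```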
For any environment in which you can't trust the network (which these days is pretty much everywhere, thanks to the NSA and the Cloud), it is important to consider the possibility of someone watching all your data as it goes over the wire. There's little point configuring authentication, or renaming commands, if an attacker can watch all your data and commands flow back and forth.
The least-worst option we have for generically securing network traffic from eavesdropping is still the Secure Sockets Layer (SSL). Redis doesn't support SSL natively; however, through the magic of the stunnel program, we can create a secure tunnel between Redis clients and servers. In the setup we will build, each client talks to a local stunnel instance, which carries the traffic over SSL to an stunnel instance on the server, which in turn forwards it to the local Redis server.
In order to set this up, you'll need to do the following:
- In your redis.conf, ensure that Redis is only listening on 127.0.0.1, by setting the bind parameter:
bind 127.0.0.1
- Create a private key and certificate, which stunnel will use to secure the network communications. First, create a private key and a certificate request, by running:
openssl req -out /etc/ssl/redis.csr \
    -keyout /etc/ssl/redis.key \
    -nodes -newkey rsa:2048
This will ask you all sorts of questions which you can answer with whatever you like.
- Create the self-signed certificate itself, by running:
openssl x509 -req -days 3650 \
    -signkey /etc/ssl/redis.key \
    -in /etc/ssl/redis.csr \
    -out /etc/ssl/redis.crt
- Finally, stunnel expects to find the private key and the certificate in the same file, so we'll concatenate the two together into one file:
cat /etc/ssl/redis.key /etc/ssl/redis.crt \
    >/etc/ssl/redis.pem
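Before concatenating, it's worth a quick sanity check (an extra step, not part of the original recipe) that the key and certificate actually belong together, by comparing their RSA moduli. This sketch generates scratch files for demonstration; point the paths at /etc/ssl/redis.key and /etc/ssl/redis.crt instead.

```shell
#!/bin/sh
# Verify that an RSA private key and a certificate match by comparing
# their moduli: both commands print "Modulus=<hex>", and the two lines
# must be identical. A throwaway key/cert pair is generated here purely
# for demonstration.
key=$(mktemp); crt=$(mktemp)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=redis-test" -keyout "$key" -out "$crt" 2>/dev/null

key_mod=$(openssl rsa -noout -modulus -in "$key")
crt_mod=$(openssl x509 -noout -modulus -in "$crt")
[ "$key_mod" = "$crt_mod" ] && echo "key and certificate match"
```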
- Now that we've got our SSL keys, we can start stunnel on the server side, configuring it to listen for SSL connections and forward them to our local Redis server:
stunnel -d 46379 -r 6379 -p /etc/ssl/redis.pem
If your local Redis instance isn't listening on port 6379, or if you'd like to change the public port that stunnel listens on, you can, of course, adjust the preceding command line to suit. Also, don't forget to open up your firewall for the port you're listening on!
Once you run the preceding command, you should be returned to a command line pretty quickly, because stunnel runs in the background. If you examine your listening ports with netstat -ltn, you should find that port 46379 is listening. If that's the case, you're done configuring the server.
On the client(s), the process is somewhat simpler, because you don't have to create a whole new key pair. However, you do need the certificate from the server, because you want to be able to verify that you're connecting to the right SSL-enabled service. There's little point in using SSL if an attacker can just set up a fake SSL service and trick you into connecting to it. To set up the client, do the following:
- Copy /etc/ssl/redis.crt from the server to the same location on the client.
- Start stunnel on the client, as shown in the following code snippet:
stunnel -c -v 3 -A /etc/ssl/redis.crt \
    -d 127.0.0.1:56379 -r 192.0.2.42:46379
Replace 192.0.2.42 with the IP address of your Redis server.
- Verify that stunnel is listening correctly by running netstat -ltn, and look for something listening on port 56379.
- Reconfigure your client to connect to 127.0.0.1:56379, rather than directly to the remote Redis server.
This article contains an assortment of quick enhancements that you can deploy to your systems to protect them from various threats frequently encountered on the Internet today.
About the Author :
Matt Palmer is an experienced systems engineer and software developer, with over a decade's experience in building and running large-scale Internet-facing systems. He has dealt with the infrastructure of the likes of Engine Yard and GitHub. As an avid open source contributor, he has been involved in the development and maintenance of everything from the Linux kernel to web-based asset tracking systems, as well as volunteering as a Debian developer. He is currently the CTO of Anchor, where he is guiding the development of the next generation of Internet application infrastructure.