
Real config on staging, featuring nginx and dante

27th June 2020

I’m a firm believer in keeping staging as close to production as possible. It just makes sense to me: if the two environments differ, there’s too much room for error, and you’re not really testing what you think you are.

Now this can get a bit tricky when it comes to configuration files, especially when they contain hostnames, doubly so if you’re using HTTPS, which of course you are, aren’t you?

Platform Overview

OK, “platform” might be stretching it for my current setup. I look after the technical side of a pretty niche website, Ninehertz. It’s been around for a long time now, and still gets a decent amount of traffic but nothing too taxing.

For the staging side I use a virtual machine. Like a lot of people these days I manage it with Vagrant using the libvirt backend.

The site itself is served by nginx.

TLS Complications

If we’re going to use the exact same config on live and staging, then we’re going to need to use TLS (fka SSL), and if we’re going to use TLS, we need certificates for it. Now, we don’t really want genuine private keys on staging; access to those should be restricted. So we’re going to make some dummy certificates ourselves.

We could simply use self-signed certificates, but then we’d get annoying warnings in the browser. So what I do is create my own CA certificate.

This is really easy to do, and completely safe as long as you follow some precautions:

- Keep the CA key (CA.key below) somewhere safe; anyone holding it can issue certs that your machines will trust.
- Only install the CA cert on machines you control, never on production clients.
- Make the subject obviously fake, so nobody can mistake it for a real CA.

First we create a private key for the CA:

openssl genrsa -out CA.key 4096

Next the CA itself:

openssl req -x509 -days 365 -key CA.key -subj '/CN=dummy CA cert' -out CA.crt

Adjust the subject and days to taste, though I recommend making it obvious in the subject that this isn’t a real CA. 365 days is a year, so the CA cert expires after that time, but renewing it is just a case of re-running the command.
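One extra step: for the browser warnings to actually disappear, the CA cert needs trusting wherever you browse from. On a Debian-ish system that means the system trust store, something like the below (Firefox keeps its own certificate store, so it needs the cert imported separately via its settings):

# Install the dummy CA into the system trust store (Debian/Ubuntu).
# update-ca-certificates only picks up files ending in .crt.
sudo cp CA.crt /usr/local/share/ca-certificates/dummy-staging-CA.crt
sudo update-ca-certificates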

Now we need to create a key and a Certificate Signing Request for the server to use. For this example we’ll use www.example.com and will create a cert that’s also valid for example.com:

openssl genrsa -out server.key 2048
openssl req -key server.key -new -subj '/CN=www.example.com' -out server.csr

Keep hold of these two files, server.key and server.csr. The key will be needed to use the certificate we create and the CSR can be used to renew the cert later.

Now to sign the certificate request to generate a certificate:

printf 'basicConstraints=CA:FALSE
keyUsage=nonRepudiation, digitalSignature, keyEncipherment
subjectAltName=DNS:www.example.com,DNS:example.com' | \
openssl x509 -req -days 365 -in server.csr -out server.crt -CA CA.crt -CAkey CA.key \
    -set_serial 1 \
    -extfile -

Some things to note:

- Extensions aren’t copied over from the CSR when signing, which is why basicConstraints, keyUsage and subjectAltName are supplied again here, via -extfile reading from stdin.
- The subjectAltName matters: modern browsers ignore the CN and only look at the SANs, so list every hostname the cert should cover there.
- -set_serial gives the cert its serial number. If you reissue a cert, bump the serial; some clients complain about two different certs from one CA sharing a serial.

Now you have a brand new server.crt and server.key to use in the nginx config in place of the real certs. Keep the paths the same as on live so the config itself doesn’t change.
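It’s worth a quick sanity check before deploying. This sketch just re-runs the steps above in a throwaway directory (writing the extensions to a file rather than piping them, to keep it self-contained), then verifies the cert chains back to the CA and actually carries the SANs:

```shell
# Re-create the CA, key, CSR and signed cert in a scratch directory,
# then verify the result against the CA.
set -e
dir=$(mktemp -d)
cd "$dir"

openssl genrsa -out CA.key 4096 2>/dev/null
openssl req -x509 -new -days 365 -key CA.key -subj '/CN=dummy CA cert' -out CA.crt

openssl genrsa -out server.key 2048 2>/dev/null
openssl req -key server.key -new -subj '/CN=www.example.com' -out server.csr

cat > ext.cnf <<'EOF'
basicConstraints=CA:FALSE
keyUsage=nonRepudiation, digitalSignature, keyEncipherment
subjectAltName=DNS:www.example.com,DNS:example.com
EOF

openssl x509 -req -days 365 -in server.csr -out server.crt \
    -CA CA.crt -CAkey CA.key -set_serial 1 -extfile ext.cnf 2>/dev/null

# Should print "server.crt: OK", and the SAN line should list both names.
openssl verify -CAfile CA.crt server.crt
openssl x509 -in server.crt -noout -text | grep -A1 'Subject Alternative Name'
```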

Approach 1: Hostfiles

The simplest way to approach this problem is to add entries to /etc/hosts, or the equivalent on your host OS, pointing the relevant hostnames at your VM.
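For example, assuming the VM got 192.168.121.10 on vagrant-libvirt’s default network (the address here is illustrative; vagrant ssh-config will tell you the real one), the entries would look like:

# /etc/hosts on the host OS
192.168.121.10  www.example.com example.com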

Downsides to this are:

- Editing /etc/hosts on the host needs root, and it’s fiddly to keep in sync.
- It’s easy to forget the entries are in place and hit staging when you meant to hit live, or vice versa.
- It affects every application on the host at once; you can’t point just one browser at staging.

For these reasons I decided I needed something better - a proxy.

Approach 2: SOCKS proxy via SSH

You may or may not know this already, but ssh includes a pretty usable SOCKS proxy: you just invoke it with -D and a port number, then use that port as your proxy. Combined with adding hosts records redirecting to the loopback device, this worked great.
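In practice that looks something like this (the user, host and port are placeholders; --socks5-hostname makes curl hand the hostname to the proxy for resolution, so the VM’s hosts entries apply):

# Open a SOCKS5 proxy on local port 1080, tunnelled through the VM.
# -N means no remote command, just the tunnel.
ssh -N -D 1080 vagrant@staging-vm

# In another terminal: browse via the proxy, trusting our dummy CA.
curl --socks5-hostname localhost:1080 --cacert CA.crt https://www.example.com/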

But hang on! You said you’d moved away from using the hosts file!?

Indeed I had, but my main problem was editing /etc/hosts on the host OS, and this change is in the VM. Since clients can use the SOCKS proxy to resolve hostnames too, this is a nice solution: whenever you’re using the proxy, you know you’re hitting staging.

I used this approach for a little while, but I wasn’t completely happy with it for the following reasons:

- I had to remember to start the ssh session, and keep it running, every time I wanted to use staging.
- There was no access control at all: while the tunnel was up, it would happily forward any traffic anywhere.

With these concerns in mind I decided I needed a SOCKS proxy that would start automatically when the VM started, and had some access control. After some searching I found…

Approach 3: dante

Dante is a free SOCKS server, developed by Inferno Nettverk A/S. It’s included in Debian, my OS of choice, so was simple to install. It also supports allowing and denying access to resources, so I quickly set up some rules. I knew it would resolve the hostnames I was interested in to the local loopback address, so I just added a rule to allow traffic to that interface and deny all other traffic.

This worked really well; I didn’t need to ssh in any more. But something looked a bit strange. The fonts weren’t right.

I knew straight away why this was: Ninehertz uses Google Fonts, which were now being blocked. For a while I lived with that, but I got curious and started thinking of a better way.

Approach 4: nginx proxy_pass for specific hostnames

My next approach was to add specific hostnames to the VM’s hosts file and direct them to the local nginx instance, with a small config snippet using proxy_pass to redirect the traffic. I’d also create certs for these hostnames, signed with my CA. With a bit of Ansible this was pretty easy to do so I got it up and running fairly quickly.
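A sketch of what such a snippet can look like, with the font host pinned to 127.0.0.1 in the VM’s hosts file so requests for it land here. The cert paths and the 9.9.9.9 resolver are illustrative; the resolver directive matters because it makes nginx look the name up via real DNS at request time, rather than tripping over our own hosts entry:

server {
  listen 443 ssl;
  server_name fonts.gstatic.com;

  # cert for fonts.gstatic.com, signed by our dummy CA
  ssl_certificate ssl/fonts.gstatic.com.crt;
  ssl_certificate_key ssl/fonts.gstatic.com.key;

  # resolve the real address, not our own hosts entry
  resolver 9.9.9.9;
  set $upstream https://fonts.gstatic.com;

  location / {
    proxy_pass $upstream;
    proxy_ssl_server_name on;   # send SNI upstream
  }
}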

This worked great for Google Fonts, but I hit another problem when I loaded a page with a SoundCloud embed, which was trying to fetch things from loads of subdomains. This wasn’t going to scale nicely at all; I really needed a way to use wildcards.

Approach 5: nginx stream proxy_pass, with dante redirecting

I had a bit of a think and came up with what I thought was a cunning plan. I’m a bit of an nginx enthusiast and like to stay up to date with new features, even if I don’t really use them. One such feature popped into my head: ssl_preread.

Combined with the nginx map directive, this allows us to read the target hostname from the TLS handshake (amongst other things) and route the traffic to different ports depending on the values it finds. We can even use wildcards here, which makes it even easier.

This uses the nginx stream module, which operates at the TCP level, so we don’t need to terminate TLS at all: we can forward the traffic, still encrypted, straight on to the hosts we want to allow.

For other hosts we’ll just forward it to a dummy HTTPS server, also in nginx, with an invalid certificate.

All I need to do now is route every HTTPS request passing through dante to this nginx stream server. We can’t use port 443 here without changing our own http server config, which would defeat the point, so we run the stream server on a different port.

Looking at the dante documentation it looked really easy to redirect this traffic to our arbitrary port, but when I put the config in place, the danted service failed to start. I’d missed a key part of the documentation, from Redirection:

Some additional functionality is however offered as modules that can be purchased separately. The redirect module can be used to modify traffic by redirecting it to a different location than the client requested.

OK, so dante can do it, but not without a proprietary module. This was a little disappointing, but if that’s their model then that’s fair enough.

Finished setup: nginx stream proxy_pass, dante, and iptables

Finally I arrived at the setup I’m using today. The final piece in the puzzle was to simply use iptables to redirect all http and https traffic to the local nginx instance, with the exception of traffic from nginx itself as that would cause a loop.

Here’s the nginx configuration for the stream module:

resolver 127.0.0.1;

map $ssl_preread_server_name $backend {
  hostnames;

  # sites staged on this VM go to the local https server
  .ninehertz.co.uk 127.0.0.1:443;

  # external hosts we allow through, passed on untouched
  fonts.googleapis.com $ssl_preread_server_name:443;
  fonts.gstatic.com $ssl_preread_server_name:443;
  .soundcloud.com $ssl_preread_server_name:443;
  .sndcdn.com $ssl_preread_server_name:443;

  # everything else hits the dummy https server on 8001
  default 127.0.0.1:8001;
}

server {
  listen 8000;
  proxy_pass $backend;
  ssl_preread on;
}

server {
  # plain http, redirected here from port 80 by iptables
  listen 8002;
  proxy_pass 127.0.0.1:80;
}
I should probably mention that I use PowerDNS on this box (and on live) as a local resolver; that’s what the resolver directive is referring to. We need to set a resolver up here so nginx can look up the hostnames it needs to proxy the external traffic.

Most of the work here is done in the map section. $ssl_preread_server_name is the server name passed in the TLS negotiation. If it matches the hostnames listed and it’s one of the sites that are being staged on the VM then the variable $backend is set to the loopback device. If it’s a site we don’t host, but want to allow access to $backend is set to the original hostname.

We are using the hostnames option in the map (which needs to be listed before the list of values we’re checking) so we can use the special leading . form to match a domain and any subdomains at the same time. e.g. .ninehertz.co.uk matches ninehertz.co.uk, www.ninehertz.co.uk, foo.bar.ninehertz.co.uk etc.

Finally default is a special record which sets the value for anything else, i.e. the hosts we want requests to fail for. These will be redirected to another nginx server block in the http scope, which is running on port 8001.

The first server block simply listens for connections on port 8000, and passes them to the $backend set by the map. We need to have ssl_preread on here or the special $ssl_preread_server_name variable won’t get set.

The next server block handles plain http: requests originally destined for port 80 arrive here on port 8002, courtesy of the iptables rules below, and are passed straight to the real web server on the loopback address.

Here’s the config of the server that non-matching results are forwarded to:

server {
  listen 8001 default_server ssl;

  ssl_certificate ssl/default.crt;
  ssl_certificate_key ssl/default.key;

  return 403 "No\n";
}

This is very simple: we just have a cert signed by our CA, and it’ll return 403 for all requests.

Dante-wise it’s pretty simple: we pass anything going to ports 80 or 443, and reject everything else.
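Roughly, the rules look something like this. This is a sketch rather than my exact config; interface names and the socksmethod/clientmethod keywords vary a bit between Dante versions:

# /etc/danted.conf (sketch)
logoutput: syslog
internal: eth0 port = 1080
external: eth0

# no authentication on our private staging network
socksmethod: none
clientmethod: none

client pass {
    from: 0.0.0.0/0 to: 0.0.0.0/0
}

# allow http and https through; the iptables rules below
# take care of capturing this traffic
socks pass {
    from: 0.0.0.0/0 to: 0.0.0.0/0 port = 80
    command: connect
}
socks pass {
    from: 0.0.0.0/0 to: 0.0.0.0/0 port = 443
    command: connect
}

# reject everything else
socks block {
    from: 0.0.0.0/0 to: 0.0.0.0/0
    log: connect error
}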

Finally, here’s the iptables rules to make it all work:

iptables -t nat -A OUTPUT -m owner --uid-owner nginx -j ACCEPT
iptables -t nat -A OUTPUT -d 127.0.0.0/8 -j ACCEPT
iptables -t nat -A OUTPUT -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8002
iptables -t nat -A OUTPUT -p tcp -m tcp --dport 443 -j REDIRECT --to-ports 8000

It simply accepts (i.e. leaves untouched) anything sent by the nginx user or addressed to the local loopback, then redirects any remaining port 80 traffic to port 8002 and any remaining port 443 traffic to port 8000, both on the loopback device where the nginx stream config above is listening.

And a nice feature of this approach is that we no longer need to touch /etc/hosts on either our host machine or the VM.

I hope you might find this useful, if you have any feedback drop me a message on any of the platforms listed at the bottom of each page.