Caching How To | Flanga Keyserver

This guide shows a simple way to cache requests to the SKS backend, reducing response times and improving the availability of the service. In our case, we use multiple keyserver backends distributed around the world, with NGINX as caching and reverse proxy.
NGINX will cache all requested keys for a period of time, reducing the load on the backends.


Prerequisites

  • VPS with at least 2 GB RAM

  • Root access

The Cache

The complete keyserver database contains about 10 GB of key material, and each key is only a few kilobytes in size. This small size allows us to store the cache in RAM, where keys can be retrieved much faster.

Let's create a new RAM disk - choose a size depending on the amount of RAM in your system. The RAM disk should be mounted at the path where NGINX stores its caching data.
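If the mount point does not exist yet, create it first - a quick sketch assuming the common Debian-style cache path used in the mount command below:

```shell
# Create the mount point for the RAM disk (path assumed from the guide)
mkdir -p /var/cache/nginx
```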

mount -t tmpfs -o size=2G tmpfs /var/cache/nginx

To mount the RAM disk automatically when the system restarts, edit /etc/fstab and add the following line.

tmpfs       /var/cache/nginx tmpfs   size=2G   0 0
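You can verify that the tmpfs is actually mounted; a small check script (the path is assumed to match the mount command above):

```shell
#!/bin/sh
# Report whether a tmpfs is mounted at the NGINX cache path
CACHE_DIR=/var/cache/nginx
if findmnt -n -t tmpfs "$CACHE_DIR" >/dev/null 2>&1; then
    echo "tmpfs mounted at $CACHE_DIR"
else
    echo "no tmpfs at $CACHE_DIR"
fi
```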

That's it! In the next step, we will configure NGINX to use our RAM disk as cache.

Configure NGINX to use the new RAM Disk

Locate your nginx.conf file and open it. Inside the http block, add the line:

proxy_cache_path /var/cache/nginx/proxy_cache levels=1:2 keys_zone=ramcache:1024m max_size=1g inactive=1d use_temp_path=off;

Update the path to your cache location.

  • The levels option sets the depth of the folder hierarchy in your cache directory.

  • The keys_zone parameter configures the name of your cache (we will use it later) and the amount of RAM used to store the cache keys (not the cached items themselves). 1 GB of RAM can store about 8 million keys, which is more than the total number of keys, so less RAM would also be enough.

  • The max_size parameter configures the maximum size of your cache - you should set it to the size of your RAM disk.

  • The inactive parameter defines how long an item is kept in the cache without being accessed.
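As a rough sanity check on the zone size: per the NGINX documentation, one megabyte of keys_zone memory stores about 8 thousand keys, so the 1024m zone above can index roughly 8 million entries:

```shell
# ~8000 cache keys fit in 1 MB of keys_zone memory (per the NGINX docs),
# so a 1024m zone indexes about 8 million entries.
echo $((1024 * 8000))   # prints 8192000
```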

Our cache is now ready; however, we still need to tell NGINX to cache requests to our SKS backend.

Configure Caching on SKS Backend

Open the configuration file where your SKS configuration is located. You should already be serving the default page with NGINX and forwarding only requests to /pks to the SKS backend. By default, SKS does not add any cache control headers, and with the default configuration NGINX simply passes its responses through to the client uncached, so we must override this and tell NGINX to cache all requests. Do not forget to set this for all /pks locations!

location /pks {
    proxy_pass         http://sksservers/pks;
    proxy_pass_header  Server;
    add_header         Via "2.0";
    proxy_ignore_client_abort on;
    client_max_body_size 8m;
    proxy_cache ramcache;
    proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
    proxy_cache_background_update on;
    proxy_cache_lock on;
    proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
    proxy_cache_valid any 120m;
}
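
For reference, proxy_pass above points at an upstream group named sksservers; a minimal sketch with placeholder addresses (replace them with your own backends; 11371 is the standard HKP port):

```nginx
# Hypothetical upstream group referenced by proxy_pass above.
# The addresses are placeholders (TEST-NET range) - use your own backends.
upstream sksservers {
    server 192.0.2.10:11371;
    server 192.0.2.11:11371;
}
```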

Let's take a quick look at the options here:

  • proxy_pass tells NGINX where it should forward requests - here, to the internal SKS backend. If you use multiple servers, you should create an upstream directive (referenced as sksservers in the example above)

  • proxy_pass_header tells NGINX to put the header of the upstream SKS server into its response instead of NGINX's own header. This is important because your server will be automatically removed from the pool if it does not serve the default SKS headers.

  • add_header adds a Via header to the response, containing the protocol version (and, optionally, the name of our reverse proxy host)

  • proxy_ignore_client_abort determines what happens to the forwarded request on the backend when the client aborts the request. Since we want as many responses as possible to fill our cache, we let the backend request complete and add the result to the cache.

  • proxy_cache tells NGINX which cache should be used. The name must match the name in your nginx.conf

  • proxy_cache_use_stale allows NGINX to serve requests from its cache even if the backend is down.

  • proxy_cache_background_update updates expired cache items in the background and keeps serving the stale entries until the new ones have been fetched.

  • proxy_cache_lock combines multiple simultaneous requests for the same key into a single request to the backend; the other requests wait and are then served from the cache.

  • proxy_ignore_headers tells NGINX to ignore cache-disabling headers from our SKS backend, so responses are cached anyway.

  • proxy_cache_valid sets the expiration time of the keys in our cache - we say that a key should be updated every 2 hours.

That's it! Check your configuration with nginx -t and restart NGINX - now it's time to test your setup!

Testing and Verification

Our NGINX server is now ready to cache requests - but we have to verify that everything is working as expected. Open your nginx.conf file and add the $upstream_cache_status variable to your log_format directive. This adds the cache status to your access.log.

Your log_format should look like:

log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    '$status $body_bytes_sent "$http_referer" '
    '"$http_user_agent" "$http_x_forwarded_for" "$upstream_cache_status"';

Restart NGINX again and request a key from your server - your access log should report that the key was not found in the cache ("MISS") but that the request was still successful (Status Code 200):

"GET /pks/lookup?search=Flanga HTTP/2.0" 200 7223 "" "-" "-" "MISS"

Reload the site and check your access log again - your server should now respond with "HIT" which means that the key was found in your cache.

"GET /pks/lookup?search=Flanga HTTP/2.0" 200 7223 "-" "-" "-" "HIT"
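Besides the access log, you can expose the cache status directly in the response - a sketch using a custom header name of my choosing (add it inside the /pks location):

```nginx
# $upstream_cache_status is a built-in NGINX variable (MISS/HIT/EXPIRED/...)
add_header X-Cache-Status $upstream_cache_status;
```

A quick check is then possible with curl -sI "https://your-server/pks/lookup?search=Flanga" and looking for the X-Cache-Status header.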

If a key is found in your cache but has expired, the log will show "EXPIRED" - the key is requested from the backend and added to the cache again.
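Once the log format is in place, a quick way to summarize hit rates is to count the last quoted field of each access log line - a sketch assuming the default log path:

```shell
# Count cache statuses: $upstream_cache_status is the last quoted
# field in the log_format shown above. Adjust the log path as needed.
awk -F'"' '{ counts[$(NF-1)]++ } END { for (s in counts) print s, counts[s] }' /var/log/nginx/access.log
```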

This tutorial is licensed with CC-BY-SA-4.0. Last Updated: 2018-07-06