Monthly Archives: June 2017

Tiered Fallback Images

A few years back, I wrote about using nginx to serve fallback images from another domain when those images were not available on the local filesystem. Today, I ran into a very similar need, but with more than one level of fallback servers to try for the images on a staging site.

For some background on the setup, the images are all stored on S3 using Human Made’s S3 Uploads plugin on production as well as on the staging site. Every now and then, the production database is synced over to the staging site so that there is a complete set of production content to work with on staging. As part of this sync, all the image records come over as well, but since staging is pointed to a different S3 bucket, the images don’t work. A simple solution would be to copy the images from the production bucket to the staging bucket, but this results in a 2x cost increase for storage, which is less than ideal. Instead, I wanted a tiered image fallback approach that would serve the first image found, in this order:

  1. Local Files (on the staging server)
  2. Staging S3 Bucket
  3. Production S3 Bucket

In this way, all images ultimately fall back to the production S3 bucket, which means that any image records that come over in the production sync still work.


By default, the S3 Uploads plugin replaces all media URLs with the S3 bucket URL. Since I wanted more control over where the images are served from, I needed to disable this behavior. Luckily, all this requires is defining a constant in wp-config.php:
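A minimal sketch of that definition, assuming a recent version of the S3 Uploads plugin (the constant name here is taken from the plugin's README; verify it against your installed version):

// In wp-config.php: keep uploading media to S3, but stop the plugin
// from rewriting media URLs to point at the bucket.
// Constant name assumed from the S3 Uploads README.
define( 'S3_UPLOADS_DISABLE_REPLACE_UPLOAD_URL', true );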


The addition of this constant prevents rewriting of the media URLs, so they all pointed back to my staging site domain.

Now that all the images were pointing back to my server, I just needed to set up the fallback logic in nginx. In the original approach, I defined an @image_fallback location block that used proxy_pass to proxy images from the other server. With that approach, however, if the upstream returns a 404 error, that error is passed directly on to the client. I needed a way to detect that error and try yet another fallback. It turns out there are a couple of nginx directives that allow me to do just that: proxy_intercept_errors and error_page.

Here’s a modified version of the old image fallback location blocks, with a tiered fallback strategy:

location ~* ^.+\.(svg|svgz|jpg|jpeg|gif|png|ico|bmp)$ {
    try_files $uri $uri/ @stage;
}

location @stage {
    rewrite ^/wp-content/(.*) /$1; # In S3, the path starts with /uploads
    proxy_intercept_errors on;
    error_page 404 = @production;
    # Placeholder: use your staging bucket's static website hosting endpoint
    proxy_pass http://staging-bucket.s3-website-us-east-1.amazonaws.com;
}

location @production {
    rewrite ^/wp-content/(.*) /$1; # In S3, the path starts with /uploads
    # Placeholder: use your production bucket's static website hosting endpoint
    proxy_pass http://production-bucket.s3-website-us-east-1.amazonaws.com;
}

By enabling proxy_intercept_errors, nginx is able to detect the 404 error when the staging bucket does not have a copy of the image. The error_page declaration then instructs nginx to pass any 404 errors to the @production block, where we try the production bucket.

S3 Gotchas

If you’re using S3 for the fallbacks, make sure to keep the following things in mind, as they caused a few snags along the way. First, you’ll need to enable static website hosting on your bucket and use that URL in the proxy_pass declarations, or else S3 will throw 403 errors. Second, watch out for unintentional duplicate slashes in your URLs. S3 is very literal in its parsing of URLs: the path /uploads/1/image.jpg is treated differently than //uploads/1/image.jpg.
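For the first gotcha, the difference shows up in the proxy_pass declarations; a sketch with a placeholder bucket name and region:

# Static website hosting endpoint: serves objects anonymously
proxy_pass http://example-bucket.s3-website-us-east-1.amazonaws.com;

# REST endpoint: anonymous requests to a private bucket get 403s
# proxy_pass http://example-bucket.s3.amazonaws.com;

For the second, check that the rewrite leaves exactly one leading slash on the path handed to S3, since the website endpoint will look up //uploads/1/image.jpg as a different key and miss.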