Monthly Archives: April 2016

Home Server KVM Base Image

At home, I have a server that I use for all sorts of random things, and because I like to complicate things (just ask Zach, he knows), I run a bunch of VMs inside of the server, to keep unrelated things separate from each other. I’ve worked on this over time, fixing pain points and bottlenecks, and currently have it so I can spin up a new VM in about a minute (with the help of Ansible). Here’s how it all currently works.

On the host, I have everything required for KVM virtualization installed (qemu, libvirt, virtualization tools, probably some others – I really need to get the hypervisor config into Ansible…). I use LVM to manage all of the storage volumes for guests: there's a volume group (vg_vps) on the host, and inside of that, a bunch of logical volumes. Each logical volume is attached as the disk for one guest.
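As a quick sketch of that storage layout (the disk device here is an assumption – use whatever disk actually backs your guests):

```shell
# Turn the backing disk into an LVM physical volume, and build the
# vg_vps volume group on it. /dev/sdb is a placeholder device.
pvcreate /dev/sdb
vgcreate vg_vps /dev/sdb

# Each guest then gets its own logical volume as its disk, e.g.:
lvcreate --size=30G --name=someguest vg_vps
```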

Initially, the process to create a new VM was slow and painful – I'd create the new logical volume, attach it to the VM, attach a CentOS installation ISO to the VM, boot, manually install CentOS… it was a pain. Over the past week, I worked on getting a base image set up that I can clone new VMs from rather than doing the manual installation step, and the time savings are awesome.

Creating the Base Image

Creating the base image was very similar to spinning up any other VM. First, I created a new logical volume, but this time made it pretty small (4GB) to keep the clone time as quick as possible, and then booted the VM with the installation ISO. I named the logical volume after the CentOS version so that I know what version I'm working with (centos7_1511). I installed CentOS, made sure the network interfaces start automatically, and configured partitions for the guest (I use LVM inside the guest as well, mostly because that's what the installer wanted to do and I didn't want to fight it – it's probably not really necessary). Once installed, I booted into the OS, installed my public key, and turned off SELinux. Then I just shut off the VM, deleted it (making sure NOT to delete the logical volume), and I have my base image!
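For reference, that initial boot against the base-image volume can be done with virt-install; this is a sketch, and the ISO path, memory size, and OS variant are assumptions:

```shell
# Create the small base-image volume described above.
lvcreate --size=4G --name=centos7_1511 vg_vps

# Boot a throwaway VM from the installation ISO, with the base-image
# logical volume attached as its disk. ISO path is a placeholder.
virt-install \
  --name centos7_1511 \
  --memory 1024 \
  --disk path=/dev/vg_vps/centos7_1511 \
  --cdrom /tmp/CentOS-7-x86_64-Minimal-1511.iso \
  --os-variant centos7.0
```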

Creating a VM

Once I have the base image, creating a new VM is easy.

  1. First, I create a new logical volume that is at least 4GB:
    lvcreate --size=30G --name=mynewvm vg_vps
  2. Once I have that, I copy the base image to the new logical volume with the virt-resize tool:
    virt-resize --expand vda2 /dev/vg_vps/centos7_1511 /dev/vg_vps/mynewvm
    In this command, vda2 is the partition *inside* the VM that you want to expand (in my case, vda1 is just /boot, and vda2 contains everything else).
  3. Then, I remove anything specific to the base VM (network configuration, SSH host keys, log files, mail spool, cron spool, etc.):
    virt-sysprep -a /dev/vg_vps/mynewvm --enable=cron-spool,dhcp-client-state,dhcp-server-state,logfiles,mail-spool,net-hwaddr,rhn-systemid,ssh-hostkeys,udev-persistent-net,utmp,yum-uuid,customize

At this point, you just associate the “mynewvm” logical volume with a new VM definition, and you have a fully working VM.
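That last association step can also be done from the command line with virt-install's --import mode, which defines a VM around an existing disk instead of running an installer (a sketch – memory and CPU values are assumptions):

```shell
# Define and start a new VM backed by the freshly cloned volume.
# --import skips the install phase since the disk already has an OS.
virt-install \
  --name mynewvm \
  --memory 2048 \
  --vcpus 2 \
  --disk path=/dev/vg_vps/mynewvm \
  --import \
  --noautoconsole
```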

LVM Gotchas

Remember above when I said I use LVM inside the guest? It turns out there's one more step you have to do to actually expand the guest filesystem because of this, which I don't think you'd otherwise have to do. To handle this, I created a script in my base image (under /root/). Here's what's in the script:

#!/bin/bash
# Expands the vda2 filesystem to fill up the available space
# Does *NOT* expand the actual partition - assuming this is done with virt tools on the host side

echo "Expanding Physical Volume"
pvresize /dev/vda2

echo "Expanding Logical Volume centos-root"
lvextend -l +100%FREE /dev/mapper/centos-root

echo "Growing filesystem"
xfs_growfs /

To make sure I don't forget to run it, I also have Ansible set the script up to run at first boot, again using the virt-sysprep tool (this tool is seriously handy):
virt-sysprep -a /dev/vg_vps/mynewvm --firstboot-command /root/

Centralized Let’s Encrypt Management

Updated March 16, 2017 to reflect current webroot settings

Recently I set out to see how I could manage Let's Encrypt certificates from one central server, even though the actual websites don't live on that server. My reasoning was basically "this is how I did it with SSLMate, so let's keep doing it," but it should also be helpful in situations where you have a cluster of webservers, and probably some other situations I can't think of at this time.

Before I get too in depth with how this all works, I'm going to define the two servers we have to work with:

  • Cert Manager: This is the server that actually runs Let’s Encrypt, where we run commands to issue certificates.
  • Client Server: This is the server serving the website, say… 😉

Additionally, I have a domain set up that I point to the Cert Manager. For the purposes of this article, let's just call it

High Level Overview

At a high level, here’s how it works with the web root verification strategy:

  1. I set up nginx on the Cert Manager to listen for requests to that domain, and if the request is for anything under the path /.well-known/, I serve up the file the request is asking for.
  2. On the client servers, I have a common nginx include that matches the /.well-known/ location and proxies that request over to the Cert Manager server.

Nginx Configuration

Here’s what the configuration files look like, for both the Cert Manager Server as well as the common include for the client servers:

Cert Manager Nginx Conf:

server {
    listen 80;
    access_log /var/log/nginx/cert-manager.access.log;
    error_log /var/log/nginx/cert-manager.error.log;

    root /etc/letsencrypt/webroot;

    location /.well-known {
        try_files $uri $uri/ =404;
    }

    location / {
        return 403;
    }
}

Client Server Common Nginx Include:

location ~ /\.well-known {
    # Proxy challenge requests over to the Cert Manager.
    # The upstream hostname was elided here; cert-manager.example.com is a placeholder.
    proxy_pass http://cert-manager.example.com;
}

Issuing a Certificate

Now let's say I want to issue a certificate for a site – here is what the process would look like.
I'm assuming the site is already set up to be served from a client server by this point.

SSH to the Cert Manager server, and run the following command:

letsencrypt certonly -a webroot --webroot-path /etc/letsencrypt/webroot -d -d

Eventually, this command generates a verification file under the /etc/letsencrypt/webroot/.well-known/ directory, and then Let's Encrypt makes an HTTP request for that file on the domain being verified to confirm ownership.

Since the client server hosting is set up to proxy requests under /.well-known/ to the Cert Manager server (using the common include above), the file that was just created on the Cert Manager server is transparently served to Let’s Encrypt, and ownership of the domain is verified. Now, I have some fancy new certificates sitting in /etc/letsencrypt/live/

At this point, you just have to move the certificates to the final web server, reload nginx, and you’re in business.
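A sketch of that last step – the hostname, site name, and target path here are all assumptions:

```shell
# Copy the issued certificates from the Cert Manager out to the web
# server. -L dereferences the symlinks in live/ so real files are sent.
# webserver1 and example.com are placeholders.
rsync -avL /etc/letsencrypt/live/example.com/ \
    webserver1:/etc/nginx/ssl/example.com/

# Sanity-check the config before reloading nginx on the web server.
ssh webserver1 'nginx -t && systemctl reload nginx'
```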

In practice, I actually use Ansible to manage all of this – I'll work on a follow-up post explaining how that all works, but generally I end up issuing SSL certificates as part of the site provisioning process on the Client Servers, in combination with `delegate_to`. Also, Ansible makes steps like moving the certificates to the final web server much less labor-intensive 🙂

Things to Figure Out

I’m still trying to figure out the best strategy to keep the certificates updated. I can run the Let’s Encrypt updater on the Cert Manager server and get new certificates automatically, but since it’s not the web server that actually serves the websites, I need to figure out how I want to distribute new certificates to appropriate servers when they are updated. Feel free to comment if you have a brilliant idea 😉
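One blunt approach, sketched here with assumed hostnames and paths: renew everything on the Cert Manager on a schedule, then unconditionally push the certificates back out and reload nginx on each web server.

```shell
#!/bin/bash
# Hypothetical renewal cron job for the Cert Manager.
# Hostnames and destination paths are placeholders.
letsencrypt renew --quiet

for host in webserver1 webserver2; do
    # -L dereferences the symlinks in live/ so real files are copied.
    rsync -avL /etc/letsencrypt/live/ "$host:/etc/nginx/ssl/"
    ssh "$host" 'nginx -t && systemctl reload nginx'
done
```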