It’s been six months since my last update. Wow, I knew it had been some time, but that’s obviously way longer than I expected. I’ve had plenty to say and plenty of updates, but I was waiting for a specific event. Let’s take a step back so I can explain:

Six months ago I ran into an issue where LDAP broke after a TLS certificate expired. It expired because it had not been set up to renew automatically. In the post about that incident, I mentioned there were other mistakes I had probably made when initially setting up the cluster - also known as technical debt. As I started digging, the primary issues were:

  1. RancherOS - I used RancherOS as the operating system for the nodes because it seemed purpose-built for running Kubernetes, was lightweight, and I didn’t know any better. It has since been end-of-life’d and there are no new updates. It’s also based on Docker, and the dockershim for Kubernetes has been deprecated. I deployed RKE (Rancher Kubernetes Engine) using the Rancher management web interface, and RKE is also based on Docker. RKE2 has replaced RKE and is based on containerd.

  2. Manual deployments - Since I was a complete Kubernetes newbie, I used the Rancher UI to create all of the early deployments directly instead of writing manifests and managing deployments by updating and applying them. This became a huge problem when the management VM running the Rancher web UI went down and I had no idea how to recover it. I did eventually recover the management VM, but it was a tense time. Fortunately, I switched to using Ansible to manage and deploy Kubernetes manifests for the applications I deployed later (there’s a small sketch of that approach after this list).

  3. Single points of failure - I started with a single Proxmox node with only one small SSD, so storage for the Kubernetes cluster came primarily from my QNAP NAS over NFS. The QNAP NAS needs to be rebooted regularly to apply firmware updates, which means downtime for the entire cluster.

  4. Not using pinned versions - Despite my best efforts, I didn’t do a perfect job of pinning everything to specific versions. Pinning is a best practice because pulling the latest version can introduce breaking changes or instability. I took a few shortcuts, mostly because the manual deployments made it harder to control which versions were running.
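
To make items 2 and 4 concrete, here’s a minimal sketch of what an Ansible-driven deployment looks like. The names, namespace, registry, and image tag are made up for illustration; the point is that the manifest lives under version control, the image is pinned to a specific tag, and re-running the play is idempotent:

```yaml
# Illustrative only - requires the kubernetes.core collection and a kubeconfig
# on the control host. Names, namespace, and image tag are placeholders.
- name: Deploy the blog (example)
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Apply the Deployment manifest
      kubernetes.core.k8s:
        state: present          # apply is idempotent; re-runs are safe
        definition:
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: blog
            namespace: blog
          spec:
            replicas: 2
            selector:
              matchLabels:
                app: blog
            template:
              metadata:
                labels:
                  app: blog
              spec:
                containers:
                  - name: nginx
                    # pinned to a specific tag, never "latest"
                    image: registry.example.com/blog:1.0.3
```

Re-running the playbook simply re-applies the same manifest, which makes changes repeatable in a way the click-through Rancher deployments never were.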

Looking into these issues, I concluded that fixing these initial mistakes in place would take much more time and effort than stepping back and building a new cluster. To eliminate some of the single points of failure, I started by building a new server with an Intel Core i9-9900 Coffee Lake (8-Core) CPU, 64GB of memory, and 4x 8TB disks. Later I also added 4x 8TB disks to the original server.

I added this new server, along with a QDevice to maintain quorum, and created a Proxmox cluster. Initially, I set up GlusterFS on ZFS for shared cluster storage, migrated all VM disks to that storage, and set up high availability to allow migration between nodes. However, I ran into some scaling issues, which I’ll cover in more detail in future updates.

Now that the hardware single points of failure were addressed, I set about tackling the other issues. As I researched the rebuild, my goal was for everything to be (as much as possible) redundant, fault-tolerant, automated, and idempotent. I was already using Ansible, so it made sense to centralize on Ansible as the primary driver for everything. I plan to go into much more detail in posts over the next few weeks, but I have now completed enough of the project to relaunch this blog on the new cluster:

The Life of Lachlan blog was previously running on Ghost, first on a Digital Ocean droplet, then on Kubernetes. I have no complaints about Ghost itself, but there was a shift in philosophy on my part: I want digital sovereignty and I want to use simple, plain-text data formats. In addition, any code running server-side increases the attack surface.

I used the Ghost to Hugo converter to export all of the posts and create a new Hugo site. The code for the site is hosted in my own Git repo using Gitea. When I write a new post in Markdown and check it into the Git repository, it kicks off a Drone pipeline that generates the site, builds a container with nginx as the web server, pushes it to a private container registry, and deploys it to the staging site in the cluster. Once I proofread the post and make sure it looks good, I tell Drone to promote the build to production and the same steps run again, this time deploying to the live site.
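
To give a sense of what that pipeline looks like, here’s a stripped-down sketch of a .drone.yml. The builder images, registry URL, namespaces, secrets, and Dockerfile are illustrative assumptions rather than my exact configuration:

```yaml
# Illustrative .drone.yml sketch - images, registry, and secrets are placeholders.
kind: pipeline
type: kubernetes
name: blog

steps:
  - name: build-site
    image: klakegg/hugo:ext-alpine        # assumed Hugo builder image
    commands:
      - hugo --minify                     # writes the static site to ./public

  - name: publish-image
    image: plugins/docker                 # Drone Docker plugin
    settings:
      registry: registry.example.com      # private registry (placeholder)
      repo: registry.example.com/blog     # assumes a Dockerfile that copies ./public into nginx
      tags:
        - ${DRONE_COMMIT_SHA:0:8}
      username:
        from_secret: registry_username
      password:
        from_secret: registry_password

  - name: deploy-staging
    image: bitnami/kubectl:1.27           # any image with kubectl works here
    commands:
      # assumes cluster credentials are provided to the step (secret or service account)
      - kubectl -n blog-staging set image deployment/blog nginx=registry.example.com/blog:${DRONE_COMMIT_SHA:0:8}

  - name: deploy-production
    image: bitnami/kubectl:1.27
    commands:
      - kubectl -n blog set image deployment/blog nginx=registry.example.com/blog:${DRONE_COMMIT_SHA:0:8}
    when:
      event:
        - promote                         # only runs when a build is promoted
      target:
        - production
```

The final step only runs on a promote event targeting production, which is what “tell Drone to promote it” maps to in practice.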