A Tour of the "Datacenter"

This will be a quick article - I wanted to show off generally what my little mini hackerspace in the woods looks like. I'm incredibly proud of it, especially considering I'm the only one who lives here and I own the property outright (so I'm responsible for maintenance, construction, design, purchases, etc). Note that this article has a lot of ~2MB images so it may take some time to load if your internet isn't fast.

Service Abuse - April 8th, 2023

Unfortunately, some recently uploaded porn is eating too much bandwidth again (6 Gbps and counting), and it's starting to degrade my services a bit. I don't have a good way to reroute this traffic or throttle its usage, so the fastest way to restore service is to just dump the files.

The following files were affected:

aqkjaxqc.mp4
s1c1fe27.mp4
crsalu9.mp4
l0eqatl.mp4
f3ntd94.mp4

March 2023 Updates and Metrics

It's been a while since I've done one of these! Hello again, dear metrics nerds. A lot has happened since November's metrics update; I'll run through it all, and then go over the near-term future projects:

Updates:

Historical updates are as follows:

Hosting Stuff From Home (Safely)

It occurs to me that I've never written an article on the most important foundational architecture decision I made: the method I use to publish things from my home safely and reliably.

Let's dive in. I'll try to keep this one simple so it's more accessible to the masses. Self hosting is a good thing for anyone to do, after all, as it fights centralization!

Note: This article was last updated on August 4th, 2023.

The Redundant LAN Project

Another project complete! This one stretched one of my weakest skills - networking.

The Problem:

So, every server previously had two network connections - one dedicated to the management network, and one dedicated to the host network. Both were 10Gbit links via SFP+ fiber transceivers. However, if a transceiver, cable, NIC, or even the entire switch crapped out, that host was toast, because both connections needed to stay up at all times. This scenario happened twice in recent memory:
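To sketch what removing that single point of failure can look like: one common approach on Linux is to bond two NICs in active-backup mode, which needs no special switch support, so each link can terminate on a different switch. This is a hypothetical sketch, not my exact setup - the interface names and address are placeholders.

```shell
# Hypothetical sketch: active-backup bond over two SFP+ NICs.
# enp1s0f0 / enp1s0f1 and 10.0.10.5/24 are placeholder names, not
# my real interfaces or addressing.
ip link add bond0 type bond mode active-backup miimon 100
ip link set enp1s0f0 down
ip link set enp1s0f1 down
# Enslave both NICs to the bond; if the active link (or its switch)
# dies, traffic fails over to the other within ~100ms of detection.
ip link set enp1s0f0 master bond0
ip link set enp1s0f1 master bond0
ip link set bond0 up
ip addr add 10.0.10.5/24 dev bond0
```

Active-backup trades half the aggregate bandwidth for simplicity; LACP (mode 802.3ad) can use both links at once but requires switch cooperation, which matters if the two links land on separate switches.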

Caching LVM for Pomf

After the slice range improvement, IOWait (the fraction of time the CPU sits idle waiting for storage I/O to complete) across the four edge nodes used for Pomf traffic went up quite a bit, due to the need to address a larger number of fragmented files on the slow storage cache disk. Before, it wasn't really a problem, but with over 100,000 slices to manage, the IOPS requirements change quite a bit.
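For reference, attaching a fast-disk cache to a slow LV with lvmcache looks roughly like the sketch below. The volume group, LV, and device names are all placeholders, not my actual layout, and the sizes are made up.

```shell
# Hypothetical lvmcache sketch: put an SSD in front of a slow LV.
# vg0, pomf, and /dev/nvme0n1 are placeholder names.
pvcreate /dev/nvme0n1
vgextend vg0 /dev/nvme0n1
# Carve a cache volume out of the fast disk...
lvcreate -L 200G -n pomf_cache vg0 /dev/nvme0n1
# ...and attach it to the slow LV. Writethrough means a dead cache
# disk can't lose writes, at the cost of write latency.
lvconvert --type cache --cachevol pomf_cache --cachemode writethrough vg0/pomf
```

dm-cache promotes frequently read blocks to the SSD over time, which is exactly the access pattern a pile of hot, fragmented slices creates.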