I've grappled recently with the thought that my peers may look down on me because of some proprietary software running in Lain.la's stack. The purists among you may completely discredit my infrastructure because of non-free software. Usually I pay these people no mind, because their fanaticism isolates precisely those they wish to convert, but I needed to justify to myself and others exactly why this project isn't entirely FOSS and why I disappointed ol' RMS.

To start: Let's catalog everything that isn't free that could have been.

  • 2 Windows 10 VMs.
  • vCenter Hypervisor Management Software and VMware ESXi Hypervisors.
  • Synology NAS Software.
  • Dell iDRAC Firmware (Kinda).
  • Brocade Switch Firmware (Kinda).
  • Intel CPUs (Honorable mention).

That's pretty much it. Let's go one by one.

Windows 10 VMs

One is used for my proprietary Veeam backup system (I have free NFR licenses). I chose Veeam because of its integration with vCenter, its incredible deduplication rates, its vast customization potential, and the fact that I have freebie licenses. I also chose it so I could learn it on my own for career development.

As a personal rule, I never screw around with backups/data loss. I am perfectly comfortable offloading the risk of data loss to software that is tested by big fat corporations who would sic their entire legal teams on Veeam if it failed. To isolate and secure this proprietary software, the entire VM is cut off from the internet and the Lain.la network, and the backups are encrypted a second time outside of Veeam in case there are backdoors in Veeam's own encryption.

The other VM runs the proprietary CyberPowerPC PowerPanel software that talks to my UPS systems, which CAN run on Linux but is probably a pain in the ass. Plus, I configured it with the ProtonMail Bridge so I can get emails (SMTP) for power events. On Linux, running a headless ProtonMail Bridge means using a piece of software called Hydroxide, and it's not... ideal.
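For the curious, the Bridge side of this is just plain SMTP on loopback. Here's a minimal Python sketch, assuming Bridge's default SMTP endpoint (127.0.0.1:1025); the addresses, credentials, and event name are illustrative examples, not my actual setup:

```python
# Hedged sketch: emailing UPS power events through the local ProtonMail
# Bridge SMTP endpoint. Host/port are Bridge's documented defaults;
# the addresses and credentials below are made-up examples.
import smtplib
import ssl
from email.message import EmailMessage

BRIDGE_HOST = "127.0.0.1"  # Bridge only listens on loopback
BRIDGE_PORT = 1025         # Bridge's default SMTP port

def build_alert(event: str, sender: str, recipient: str) -> EmailMessage:
    """Build a plain-text power-event notification."""
    msg = EmailMessage()
    msg["Subject"] = f"UPS power event: {event}"
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content(f"PowerPanel reported a power event: {event}")
    return msg

def send_alert(msg: EmailMessage, user: str, password: str) -> None:
    """Hand the message to the Bridge, which relays it via ProtonMail."""
    # Bridge presents a self-signed certificate on loopback, so we
    # relax verification for this local-only connection.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with smtplib.SMTP(BRIDGE_HOST, BRIDGE_PORT) as smtp:
        smtp.starttls(context=ctx)
        smtp.login(user, password)  # Bridge-generated credentials
        smtp.send_message(msg)

if __name__ == "__main__":
    alert = build_alert("On Battery", "alerts@example.com", "admin@example.com")
    print(alert["Subject"])
    # send_alert(alert, "bridge-user", "bridge-password")  # needs a running Bridge
```

The same idea works from any script or monitoring hook that can reach the Bridge's loopback port, which is exactly what PowerPanel's event actions do on the Windows side.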

VMware Stuff

Ah yes, probably the most controversial choice considering the existence of Proxmox, KVM, Xen, etc. I just... didn't want to learn another hypervisor. I have free keys for everything VMware, and it was too attractive an option. A lot of the reliability and scalability plans for my systems start at the hypervisor level - if you blow up a host, you're not having a fun day. VMware is (mostly) top notch when it comes to validating its software against hardware and general stability. I can count on one finger the number of times I've had a software issue with a VMware host in a very large deployment with multiple terabytes of RAM.

I also pick and choose when I want to learn and when I want to lean on past experience. I'm happy learning pfSense, web caching, and OpenVPN - but not hypervisors. VMware has a very compelling, simple-to-use product, and I've met the people there and everything. They're expensive (well, not in my case), but they continue to have the most solid virtualization platform out there.

Long story short: I'm taking a personal "I didn't want to deal with rolling my own shit" mulligan here.

Update: With the acquisition of VMware by Broadcom, this may change. Idk. I could also just yar-har it.

Synology NAS

(All storage moved to TrueNAS! Huzzah!) To be blunt, I never expected to use my NAS as a storage device. I bought it YEARS ago and only found out it had iSCSI support AFTER starting my deployment. Now it fills a core storage role for a lot of bulk, low-performance data.

There is an opportunity here for me to buy a big fat 24-drive 2U server and make that into a NAS, as well as a 2U 12-drive 3.5in NAS, so that I can have automatic host failover in case of a server crash - but that is a VERY expensive project. The Synology will have to do. It's pretty good at what it does anyway.

Dell iDRAC Firmware

Well there's not really a way around this one, is there? I like the remote management tools and virtual console.

Brocade Switch Firmware

Mail me when there's a FOSS 48-port RJ45 switch with SFP+ and QSFP+ ports.

Intel CPUs

Yes, I would have liked to use AMD Epyc CPUs but they are WAY too new to be cheap on the used market. Oh, and have you SEEN how cheap Intel chips are on eBay?! You can get 8 core or more Broadwell Xeons for something like $30. Seriously. When the market for Epyc CPUs has aged a bit more and they are in more general rotation, first generation Epyc CPUs will be a great buy, I think. Maybe some day.