It begins. I’m finally getting to a point that I’d consider “ready”. So currently, I have three servers set up as virtualization hosts with XCP-ng. I was planning on using ESXi, but XCP-ng offers me the option to run the latest version on nearly any piece of kit I throw at it without having to worry about version incompatibility and the like. Had I gone with ESXi, I’d have the PE1900 on 4.x and the other two on 5.5, missing out on all the relevant features added in 6.x. Nope. I try to run the PE1900 as little as possible because even with the L5335s I dropped in there, it still slurps power like crazy. The rack and rails are in the mail, so the R310 and my networking gear can finally move from their temporary home on or next to an old end table I have down in the basement to their permanent home in the laundry room. I’m sure the girl will appreciate not having a line hanging out of a ceiling tile to get the AP over to the router anymore.

As it stands right now, I’m okay with how things are, for the most part. Since I’m on a pretty tight budget, I’m limited on expansion, but there are only a few more pieces I want to add before I’d consider the setup to be the ideal minimum.

Hardware Upgrade Plans:

  • UPS of some sort, leaning toward a CyberPower rackmount unit
  • Proper PDU. Power strips won’t cut it much longer, and I desperately need surge protection.
  • Either an R510 or a DL180se, unless I can find a nice DAS for cheap. I wanted to go with a Norco 3216, but it’d probably be 3-4x more expensive. I’d also need a new CPU and mobo sooner rather than later, since the X10SLA-F only has two PCIe slots, and getting all 16 drives live means four miniSAS connectors (each SFF-8087 connector handles four drives). So unless I get a PCI NIC, which I really don’t want to do, I’m already out of PCIe slots. On the plus side though, I could build something designed to be as power efficient as possible and put a super low draw CPU with ECC support in there. I’m still mulling over my options right now though, so who knows.
  • An end-all, be-all virt host. The R310 is serving me well right now, but it caps out at 32 GB of RAM. Ideally, I’ll max that out and run it 24/7, and then get another host with iLO or iDRAC so I can turn it on and off at will and only power it up when I’m doing lab stuff. But I’ll probably wind up getting lazy and just leaving it on 24/7, to be completely honest. I’m not sold on anything yet, but a DL3{6/8}0e G8 or an R420 both sound like great options. The E5-2400 series chips are abundant and cheap, and the low power SKUs are really enticing. I’ve also been considering an R710, since it’s kind of the homelabber staple machine. It’d be nice to get some newer-ish gear though.

Network Upgrade Plans:

  • I need to properly implement VLANs. Currently I’m cheating and have my LAN on a flat 192.168.0.0/22: DHCP is enabled on 192.168.1.0/24, while 192.168.0.0/24, 192.168.2.0/24, and 192.168.3.0/24 are where my VMs, management interfaces, and OOB management devices live with manually assigned IPs. I’ve been lazy about setting this up since it breaks my entire network when I do it wrong, so I’ll worry about it later. (There’s a rough sketch of the XCP-ng side after this list.)
  • Long term goal: 10Gbit networking for at least my hosts, and maybe my rig. Not super important at the moment, as I have a reasonable amount of VM storage in each of my machines, but I’d like to move my VMs to a network share eventually and cut down on internal storage almost entirely.
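
As a note to future me, the XCP-ng side of the VLAN work should be pretty painless with the xe CLI. A minimal sketch, assuming eth0 is the trunked NIC; the network name, tag, and UUIDs are placeholders, and the switch ports would obviously need matching tagged config:

```sh
# Find the UUID of the physical interface (PIF) that carries the trunk
xe pif-list device=eth0 params=uuid,device,host-name-label

# Create a network object, then bind it to the PIF with a VLAN tag
xe network-create name-label="mgmt-vlan"
xe vlan-create network-uuid=<network-uuid> pif-uuid=<pif-uuid> vlan=20
```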

VM Upgrade Plans:

  • s e c u r i t y a u d i t
  • Actually get Logstash/Elasticsearch/Grafana set up. I just installed them, said “okay cool, I’ll be back later”, and that hasn’t happened yet.
  • Sane backup strategy for all my mission critical data. I haven’t figured out which provider to go with yet. I might go with Backblaze, since their hard drive failure reports are what led me to my HGST drives, which haven’t let me down yet. And they’re just generally pretty awesome guys. rsync.net is also an option, but I don’t think I need 200 gigs of storage. (There’s a sketch of what the push could look like after this list.)
  • Getting all my external services hidden away behind a reverse proxy (also sketched below). None of them are exposed on this domain, but you can scan it if you want. (please don’t dunk my boxes)
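
Since rsync.net is just SSH under the hood, the actual backup push could be a one-line cron job along these lines. The paths, remote user, and schedule here are hypothetical, not my real setup; a Backblaze B2 push via rclone would look much the same:

```sh
# Nightly at 3am: mirror the critical data offsite over SSH
# (remote user/host and paths are placeholders)
0 3 * * * rsync -az --delete /srv/critical/ user@rsync.net:backups/critical/
```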
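And for the reverse proxy, a minimal nginx server block along these lines is probably where I’d start. The hostname and upstream address are made up:

```nginx
server {
    listen 443 ssl;
    server_name service.example.com;

    ssl_certificate     /etc/nginx/certs/example.crt;
    ssl_certificate_key /etc/nginx/certs/example.key;

    # TLS terminates here; the backing VM is never exposed directly
    location / {
        proxy_pass http://192.168.2.10:8080;  # internal service VM
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```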

Other Plans:

  • Writing my own IPAM/DCIM service. phpIPAM’s lack of port management is frustrating, and NetBox is geared toward actual DCs and has way more features than I need. I want to build something in the middle, a more or less “barebones” implementation: device management and port tagging for switches, firewall appliances, routers, patch panels, and endpoint devices, plus a robust IPAM system. Nothing I’ve found really fills that niche.

Since we just moved into a new house, I had the idea of building a networked audio system using some Raspberry Pis and HiFiBerry HATs. I’d set up some cheap tablets (maybe some Fire 7s or old iPads) and have them show the internal web applications for remote mopidy management. Mopidy is the server of choice because of its integration with Google Play Music All Access, which I use heavily (it comes free with YouTube Red, so I might as well…). I have three main design ideas:

Option 1: VM hosting a single mopidy instance with every RPi client acting as a PulseAudio remote sink

  • Pros:
    • (presumably) Easier to manage and add clients to; creating a new output should be as simple as imaging the SD card, reconfiguring the networking, and powering it on. (See the sketch after this list.)
    • Less resource consumption
  • Cons:
    • Only a single audio stream is available
    • Single point of failure
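
The plumbing for option 1 should be pretty minimal. A sketch, assuming stock PulseAudio on the Pis and placeholder addresses; mopidy just pushes its GStreamer output at a networked sink:

```ini
# mopidy.conf on the VM: aim the pulsesink at a Pi on the LAN
[audio]
output = pulsesink server=192.168.2.41
```

plus one line in /etc/pulse/default.pa on each Pi to accept audio from the network:

```
# Allow the LAN to stream to this Pi's sink
load-module module-native-protocol-tcp auth-ip-acl=192.168.0.0/22
```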

Option 2: Single VM with nginx running a web interface for each client; mopidy on the server only acts as a remote db, with the mopidy instance that does the actual playback living on each client

  • Pros:
    • Independent audio streams per client, which should eliminate “fighting over the aux cord”
    • Less web interface configuration; just get nginx + PHP working and create subdomains for each device pointing to a separate installation (which would be easily duplicated; see the sketch after this list)
  • Cons:
    • More time consuming to provision a new device than option 1, though less so than option 3.
    • Heavier computationally; the Pi Zero (which I planned on using for most clients) may not have the grunt to handle mopidy and audio streams
    • Still a single point of failure
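
For what it’s worth, the per-client vhosts in option 2 wouldn’t be much work either. A hypothetical example with made-up hostnames and paths; each subdomain serves its own copy of the web client:

```nginx
server {
    listen 80;
    server_name kitchen.audio.home;  # one vhost per client device
    root /var/www/clients/kitchen;   # separate, easily-duplicated install
    index index.php;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```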

Option 3: Single VM with nginx running a web interface for each client, a master mopidy instance acting as the remote db, and a mopidy instance per client on the server (with clients acting as remote PA sinks)

  • Pros:
    • Independent audio streams per client, as in option 2
    • Heavier on the server, but nowhere near as taxing as it would be on the clients
    • Same (presumed) ease of management as option 1
  • Cons:
    • More time consuming than option 2 on the server side, though the client setup stays simpler
    • Still a single point of failure

I’m choosing not to run a mopidy server with a db on each client, as the last time I tried mounting my music share on an RPi 3 over NFS and building the mpd database, it took nearly 3 hours and wasn’t even 3/4 done. It might be feasible to build the database once when I create the image and then just incrementally update each client, but I’m not sure the Zero would be able to handle doing everything.
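
If I did go that route, the incremental idea would probably look something like this, assuming Mopidy-Local as the library backend; the schedule is arbitrary:

```sh
# Run once while building the master image, with the music share mounted:
mopidy local scan

# Then a crontab entry on each client to pick up new files
# (later scans should only have to process new or changed tracks):
0 4 * * 1  mopidy local scan
```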

I just wrote all that for an audience of zero, but it helped me flesh out some of my ideas! Maybe someone will read this eventually. I’m going to keep an upgrade report going in a non-chronological post that you can see up top next to my current homelab setup instead of constantly updating the list in one of these posts. On an unrelated note: Jekyll owns, and writing your blog posts in Vim owns even more.