Evolution of My Home Lab: Past, Present, and Future
08 Apr 2025
Historical Background
I got started with homelabbing when I was pretty young, running Puppy Linux on an old iMac. That early curiosity grew into a deeper interest as I built a basic NAS using a Supermicro Intel Atom (X7) board running FreeBSD. At some point, a friend and I built a Proxmox server, and I was playing with BeagleBone boards for side projects. Around the same time, I was forcing myself to use Arch Linux on a laptop while working professionally in a Windows Server environment. That mix of self-learning and hands-on exposure to different platforms helped me get comfortable picking up new technologies quickly.
Current Setup
These days, my home lab lives across two small 19-inch racks. The main rack, mounted high on the wall, holds our core networking gear: a Ubiquiti UDM-PRO and an Enterprise 24-port PoE switch. That powers our home's Wi-Fi (recently upgraded to Wi-Fi 6E) and a handful of PoE security cameras. There's also an SFP+ fiber line that runs across the office to a UDM Enterprise 8-port switch in a second 12U rack.
That rack also holds a UNVR. I added that after running into memory issues with the UDM-PRO doing too much; it was right at its suggested limits, and it showed. The star of the setup, though, is our Unraid server, a project I built with my brother and a good friend:
- Motherboard: Supermicro H11SSL-NC
- Processor: AMD EPYC 7281 16-Core
- Memory: 64GB ECC RAM
- GPU: NVIDIA RTX A2000
Storage Configuration:
- Main Storage: 8x 16TB SATA HDDs in an Unraid array with dual parity
- SSD Pools: 2x 1TB consumer SSDs in ZFS mirrors, split across three pools for VMs, Docker containers, and caching
It runs everything from media services and game servers to tools like Pi-hole, budget trackers, Nextcloud, and, more recently, Ollama models. Pushing into AI workloads has been fun, but it has also turned out to be the tipping point for this system.
Identified Limitations
Once I started running LLMs through Ollama, I noticed performance hitting a ceiling: IOPS, memory, and CPU were all getting pushed. I've also been using Docker Compose more heavily, and while Unraid's UI was great when I was starting out, it now feels like it's getting in the way more than helping.
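To give a sense of the workflow I've been drifting toward, here's a minimal Compose sketch for the kind of Ollama container I run; the image tag, port, and volume path are illustrative placeholders rather than my exact configuration.

```yaml
# docker-compose.yml (sketch): paths and GPU settings are examples only
services:
  ollama:
    image: ollama/ollama:latest
    restart: unless-stopped
    ports:
      - "11434:11434"                             # Ollama's default API port
    volumes:
      - /mnt/user/appdata/ollama:/root/.ollama    # model storage (example Unraid path)
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia                      # hand the GPU to the container
              count: 1
              capabilities: [gpu]
```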
To be honest, I also went through a phase where tech as a hobby just wasn’t doing it for me anymore. But over the last 6–8 months, that spark came back—mostly thanks to AI tools helping me break through a few roadblocks. That motivation has been key to where I’m going next.
Future Plans and Learning Opportunities
I stumbled onto the idea of a mini-rack build while browsing setups—and the concept really clicked. The plan is to build out a new 10-inch rack and move most of the workloads over to that, while the Unraid server continues as a NAS and VM host.
New Mini-Rack Infrastructure
Networking Backbone
- 10Gbps uplink to the main rack via an 8-port Ubiquiti switch
- 2.5Gbps ports for internal communication across the new stack
Storage & Kubernetes Traffic
- A MikroTik 8-port SFP+ switch for 10Gbps communication between storage and k8s nodes
Control Plane
- 3x Raspberry Pi 5 (8GB) units with active cooling and M.2 SATA HATs, plus 1x 16GB unit as a jump/admin box
- Connected over a UniFi Lite 1Gbps switch
Compute Nodes
Two Minisforum MS01 boxes will act as the primary nodes:
- Intel Core i9-12900H
- 96GB RAM (2x48GB Crucial)
- 1TB NVMe (Inland) for the OS
- 2x 2TB SK Hynix NVMe drives each for Ceph storage
The goal is to run Kubernetes on bare metal, using Ceph for distributed storage. We’re still doing hardware testing and burn-in now, but once that’s done, I’ll start migrating workloads and services over. Eventually, I’d love to add a small SSD-only NAS to complement the setup—but we’ll see what budget and time allow.
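I haven't locked in the exact Ceph tooling yet, but if it ends up being Rook (a common way to run Ceph on Kubernetes), the NVMe drives would likely be exposed through a replicated block pool along these lines; the names and replica count here are a sketch, not a final design.

```yaml
# Rook-Ceph block pool sketch: replicate data across the two MS01 nodes
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: k8s-rbd            # hypothetical pool name
  namespace: rook-ceph
spec:
  failureDomain: host      # keep replicas on different physical nodes
  replicated:
    size: 2                # one copy per MS01
```

A StorageClass on top of that pool would then let persistent volume claims land on the SK Hynix drives automatically.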
Learning Goals and Opportunities
This project is giving me a real-world sandbox to explore:
- Running a high-availability k8s control plane on ARM-based nodes (see the kubeadm sketch after this list)
- Building and maintaining Ceph storage in a home environment
- Learning container orchestration hands-on
- Using GitOps to manage and deploy infrastructure and applications (see the Argo CD example after this list)
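For the control plane piece, one option I'm weighing is plain kubeadm on the three Pis with stacked etcd and a shared API endpoint (kube-vip or similar) in front of them. A minimal sketch, with a hypothetical endpoint name:

```yaml
# kubeadm ClusterConfiguration sketch for a stacked-etcd HA control plane
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: "k8s-api.lab.internal:6443"   # VIP/DNS name is a placeholder
networking:
  podSubnet: "10.244.0.0/16"                        # example pod CIDR
```

The first Pi gets initialized against that endpoint, and the other two join with `kubeadm join ... --control-plane`, so any of them can serve the API.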
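On the GitOps side I haven't picked between Flux and Argo CD yet; if it's Argo CD, each migrated service would be declared as an Application roughly like this, with the repo URL, path, and namespaces as placeholders.

```yaml
# Hypothetical Argo CD Application: repo, path, and namespaces are placeholders
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: media-stack
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/homelab-gitops
    targetRevision: main
    path: apps/media
  destination:
    server: https://kubernetes.default.svc
    namespace: media
  syncPolicy:
    automated:
      prune: true        # remove resources deleted from Git
      selfHeal: true     # revert manual drift back to the Git state
```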
The new setup is a chance to go deeper into container orchestration and storage architectures, while still scratching that tinkering itch. It’s been a lot of fun diving back in with a more intentional approach—and I’m excited to see where it leads.