StoutPanda: The Bamboo Grove

HomeLab Revamp Network Plan: Game Plan

Below is an initial overview of my plan for implementing the updated network design to support Kubernetes.

Network Topology Overview with VLANs and Ports

Internet
    │
    ▼
UDM-PRO (10.0.1.1)
    │ 
    │ DAC → Port 1 (VLANs: 1,2,3,4,5,6,10,16,18,28,38,48,58)
    ▼
Unifi Enterprise 24 Port (10.0.1.238) - Core Switch
    │
    │ SFP+ → Port 24 (VLANs: 1,16,18,28,38,48,58)
    ▼
Ubiquiti Flex 2.5G (10.0.1.80)
    │
    ├───── Port 1: Uplink to Enterprise 24 Port (Trunk: All VLANs, VLAN 1 native)
    │
    ├───── Port 2-3: MS-01 Node 1 (10.8.16.90) 2.5G ports (VLAN 16 native, VLAN 18 tagged)
    │
    ├───── Port 4-5: MS-01 Node 2 (10.8.16.91) 2.5G ports (VLAN 16 native, VLAN 18 tagged)
    │
    ├───── Port 6: Connection to Flex Mini (VLAN 16 native, VLAN 18 tagged)
    │      │
    │      ▼
    │     Ubiquiti Flex Mini (10.0.1.81)
    │       │
    │       ├── Port 1: Uplink to Flex 2.5G (VLAN 16 native, VLAN 18 tagged)
    │       │
    │       ├── Port 2: K8s-CP-01 (10.8.16.86) (VLAN 16 native, VLAN 18 tagged)
    │       │
    │       ├── Port 3: K8s-CP-02 (10.8.16.87) (VLAN 16 native, VLAN 18 tagged)
    │       │
    │       └── Port 4: K8s-CP-03 (10.8.16.88) (VLAN 16 native, VLAN 18 tagged)
    │
    ├───── Port 7: MikroTik CRS309 Management (VLAN 1 only)
    │      │
    │      ▼
    │     MikroTik CRS309-1G-8S+ (10.0.1.82)
    │       │
    │       ├── Port 1: Management to Flex 2.5G (VLAN 1 only)
    │       │
    │       ├── Port 2-3: MS-01 Node 1 SFP+ (VLANs: 28,38,48)
    │       │
    │       └── Port 4-5: MS-01 Node 2 SFP+ (VLANs: 28,38,48)
    │
    └───── Port 8: Admin Box (10.8.16.85) (VLAN 16 native, VLAN 18 tagged)

VLAN Structure

VLAN ID  Purpose                     Subnet        Gateway    Notes
16       Kubernetes Management       10.8.16.0/27  10.8.16.1  Native VLAN for K8s components
18       Kubernetes Control Plane    10.8.18.0/27  10.8.18.1  For API server, etcd, scheduler
28       Kubernetes Pod Network      10.8.28.0/23  10.8.28.1  512 IPs for pod allocation
38       Kubernetes Service Network  10.8.38.0/26  10.8.38.1  64 IPs for Kubernetes services
48       Storage Network             10.8.48.0/27  10.8.48.1  Dedicated for Ceph traffic
58       Load Balancer IPs           10.8.58.0/27  10.8.58.1  32 IPs for external service access
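On the node side, "VLAN 16 native, VLAN 18 tagged" translates to an untagged interface plus one tagged subinterface. Here is a minimal Linux sketch; the interface name (enp2s0) and the VLAN 18 address are my assumptions, not part of the plan:

# The native VLAN needs nothing beyond the plain interface, which carries
# untagged traffic. VLAN 18 rides a tagged subinterface.
ip link add link enp2s0 name enp2s0.18 type vlan id 18
ip addr add 10.8.18.5/27 dev enp2s0.18    # assumed address on the control-plane VLAN
ip link set enp2s0.18 up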

Mermaid Diagram

graph TD
    %% VLAN information - positioned at the top
    subgraph "VLAN Structure"
        VLAN1["VLAN 1: Management (10.0.1.0/24)"]
        VLAN16["VLAN 16: K8s Management (10.8.16.0/27)"]
        VLAN18["VLAN 18: K8s Control Plane (10.8.18.0/27)"]
        VLAN28["VLAN 28: K8s Pod Network (10.8.28.0/23)"]
        VLAN38["VLAN 38: K8s Service Network (10.8.38.0/26)"]
        VLAN48["VLAN 48: Storage Network (10.8.48.0/27)"]
        VLAN58["VLAN 58: Load Balancer Network (10.8.58.0/27)"]
    end
    
    %% Main network path
    Internet[Internet] --> UDMPRO[UDM-PRO\n10.0.1.1]
    UDMPRO --"DAC → Port 1\nVLANs: 1,2,3,4,5,6,10,16,18,28,38,48,58"--> Enterprise24["Unifi Enterprise 24 Port\n10.0.1.238"]
    Enterprise24 --"SFP+ → Port 24\nVLANs: 1,16,18,28,38,48,58"--> UbiquitiFlex25G["Ubiquiti Flex 2.5G\n10.0.1.80"]
    
    %% Flex 2.5G connections
    UbiquitiFlex25G --"Port 2-3\nVLAN 16 native, VLAN 18 tagged"--> MS01Node1["MS-01 Node 1\n10.8.16.90"]
    UbiquitiFlex25G --"Port 4-5\nVLAN 16 native, VLAN 18 tagged"--> MS01Node2["MS-01 Node 2\n10.8.16.91"]
    UbiquitiFlex25G --"Port 6\nVLAN 16 native, VLAN 18 tagged"--> UbiquitiFlexMini["Ubiquiti Flex Mini\n10.0.1.81"]
    UbiquitiFlex25G --"Port 7\nVLAN 1 only"--> MikroTikCRS309["MikroTik CRS309-1G-8S+\n10.0.1.82"]
    UbiquitiFlex25G --"Port 8\nVLAN 16 native, VLAN 18 tagged"--> AdminBox["Admin Box\n10.8.16.85"]
    
    %% Flex Mini connections
    UbiquitiFlexMini --"Port 2\nVLAN 16 native, VLAN 18 tagged"--> K8sCP01["K8s-CP-01\n10.8.16.86"]
    UbiquitiFlexMini --"Port 3\nVLAN 16 native, VLAN 18 tagged"--> K8sCP02["K8s-CP-02\n10.8.16.87"]
    UbiquitiFlexMini --"Port 4\nVLAN 16 native, VLAN 18 tagged"--> K8sCP03["K8s-CP-03\n10.8.16.88"]
    
    %% MikroTik connections
    MikroTikCRS309 --"Port 2-3\nVLANs: 28,38,48"--> MS01Node1SFP["MS-01 Node 1 SFP+"]
    MikroTikCRS309 --"Port 4-5\nVLANs: 28,38,48"--> MS01Node2SFP["MS-01 Node 2 SFP+"]

    %% Device-type styling
    classDef router fill:#f96,stroke:#333,stroke-width:2px
    classDef switch fill:#69b,stroke:#333,stroke-width:2px
    classDef client fill:#ddd,stroke:#333,stroke-width:1px
    classDef vlan fill:#e8f4f8,stroke:#333,stroke-width:1px,stroke-dasharray: 5 5
    
    %% Apply classes
    class UDMPRO router
    class Enterprise24,UbiquitiFlex25G,UbiquitiFlexMini,MikroTikCRS309 switch
    class MS01Node1,MS01Node2,K8sCP01,K8sCP02,K8sCP03,AdminBox,MS01Node1SFP,MS01Node2SFP client
    class VLAN1,VLAN16,VLAN18,VLAN28,VLAN38,VLAN48,VLAN58 vlan

Evolution of My Home Lab: Past, Present, and Future

Historical Background

I got started with homelabbing when I was pretty young, using an old iMac and running Puppy Linux. That early curiosity grew into a deeper interest as I built a basic NAS using a Supermicro Intel Atom (X7) board running FreeBSD. At some point, a friend and I built a Proxmox server, and I was playing with BeagleBone boards for side projects. Around the same time, I was forcing myself to use Arch Linux on a laptop, while working professionally in a Windows Server environment. That mix of self-learning and hands-on exposure to different platforms helped me get comfortable picking up new technologies quickly.

Current Setup

These days, my home lab lives across two small 19-inch racks. The main rack, mounted high on the wall, has our core networking gear: a Ubiquiti UDM-PRO and an Enterprise 24-port PoE switch. That powers our home’s Wi-Fi (upgraded recently to Wi-Fi 6E) and a handful of PoE security cameras. There’s also an SFP+ fiber line that runs across the office to a UDM Enterprise 8-port switch in a second 12U rack.

That rack also holds an UNVR. I added that after running into memory issues with the UDM-PRO doing too much—it was right at the suggested limits, and it showed. The star of the setup, though, is our Unraid server—a project I built with my brother and a good friend:

  • Motherboard: Supermicro H11SSL-NC
  • Processor: AMD EPYC 7281 16-Core
  • Memory: 64GB ECC RAM
  • GPU: NVIDIA RTX A2000

Storage Configuration:

  • Main Storage: 8x 16TB SATA HDDs in an Unraid array with dual parity
  • SSD Pools: 2x 1TB consumer SSDs in ZFS mirrors, split across three pools for VMs, Docker containers, and caching

It runs everything from media services and game servers to tools like Pi-hole, budget trackers, Nextcloud, and more recently, Ollama models. It’s been fun pushing into AI workloads, but also kind of the tipping point for this system.

Identified Limitations

Once I started running LLMs through Ollama, I noticed performance hitting a ceiling—IOPS, memory, and CPU were all getting pushed. I’ve also been using Docker Compose more heavily, and while Unraid’s UI was great when I was starting out, now it feels like it’s getting in the way more than helping.

To be honest, I also went through a phase where tech as a hobby just wasn’t doing it for me anymore. But over the last 6–8 months, that spark came back—mostly thanks to AI tools helping me break through a few roadblocks. That motivation has been key to where I’m going next.

Future Plans and Learning Opportunities

I stumbled onto the idea of a mini-rack build while browsing setups—and the concept really clicked. The plan is to build out a new 10-inch rack and move most of the workloads over to that, while the Unraid server continues as a NAS and VM host.

New Mini-Rack Infrastructure

Networking Backbone

  • 10Gbps uplink to the main rack via an 8-port Ubiquiti switch
  • 2.5Gbps ports for internal communication across the new stack

Storage & Kubernetes Traffic

  • A MikroTik 8-port SFP+ switch for 10Gbps communication between storage and k8s nodes
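Once the Ceph cluster exists, its replication traffic can be pinned to that dedicated VLAN. A hedged sketch using the subnets from the plan above; the public-network choice is my assumption:

# Pin Ceph replication (OSD-to-OSD) traffic to the storage VLAN (VLAN 48).
ceph config set global cluster_network 10.8.48.0/27
# Assumption: clients reach Ceph over the K8s management subnet (VLAN 16).
ceph config set global public_network 10.8.16.0/27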

Control Plane

  • 3x Raspberry Pi 5 units (8GB each) with active cooling and M.2 SATA HATs, plus 1x 16GB unit as a jump/admin box
  • Connected over a Unifi Lite 1Gbps switch

Compute Nodes

Two Minisforum MS01 boxes will act as the primary nodes:

  • Intel Core i9-12900H
  • 96GB RAM (2x48GB Crucial)
  • 1TB NVMe (Inland) for the OS
  • 2x 2TB SK Hynix NVMe drives each for Ceph storage

The goal is to run Kubernetes on bare metal, using Ceph for distributed storage. We’re still doing hardware testing and burn-in now, but once that’s done, I’ll start migrating workloads and services over. Eventually, I’d love to add a small SSD-only NAS to complement the setup—but we’ll see what budget and time allow.
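As a rough sketch of where the network plan plugs in, here is what bootstrapping the first control-plane node could look like, assuming a kubeadm-based install (my assumption; the tooling isn’t settled yet). The API endpoint is a placeholder VIP on VLAN 18, while the pod and service CIDRs come straight from the VLAN table above:

# Hypothetical kubeadm bootstrap; 10.8.18.30:6443 is an assumed API VIP.
sudo kubeadm init \
  --control-plane-endpoint "10.8.18.30:6443" \
  --pod-network-cidr "10.8.28.0/23" \
  --service-cidr "10.8.38.0/26" \
  --upload-certs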

Learning Goals and Opportunities

This project is giving me a real-world sandbox to explore:

  • Running a high-availability k8s control plane on ARM-based nodes
  • Building and maintaining Ceph storage in a home environment
  • Learning container orchestration in more depth
  • Using GitOps to manage and deploy infrastructure and applications

The new setup is a chance to go deeper into container orchestration and storage architectures, while still scratching that tinkering itch. It’s been a lot of fun diving back in with a more intentional approach—and I’m excited to see where it leads.

Self-Hosting Pangolin on Unraid: Transitioning to a VM-based Setup

Pangolin is an exciting new tool designed to simplify remote access to self-hosted resources, bypass CGNAT issues, and seamlessly integrate with security tools such as Crowdsec. With its recent 1.0 release, Pangolin continues to evolve, incorporating powerful new features.

Why Switch to a VM-Based Setup?

In my experience, the optimal approach to using Pangolin on Unraid is to bypass the official Unraid installation guide provided by Fossorial and instead deploy Pangolin within an isolated virtual machine (VM). This method simplifies the integration with Crowdsec and leverages Docker Compose for easier management.

Steps to My Transition

Network Configuration

  • Created a dedicated VLAN specifically for Pangolin.
  • Configured firewall rules to allow outbound traffic only, preventing communication with other internal networks.
  • Enabled administrative access from trusted local networks.

Unraid Configuration

  • Established a new network interface linked directly to the VLAN.

Virtual Machine Setup

  • VM Specs: 4 CPU cores, 4 GB RAM (additional resources allocated for Crowdsec).
  • Set networking to default settings within the VM for simplicity.
  • Mounted shared host files via VirtioFS at /pangolin to enable easy backups of configurations and logs.
  • Installed Pangolin using the Fossorial Quick Install Guide.
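For the VirtioFS share above, the in-VM mount is minimal; the share tag "pangolin" is an assumption based on the mount point:

# Mount the VirtioFS share (tag assumed to be "pangolin") at /pangolin.
sudo mkdir -p /pangolin
sudo mount -t virtiofs pangolin /pangolin
# Optional: persist the mount across reboots.
echo 'pangolin /pangolin virtiofs defaults 0 0' | sudo tee -a /etc/fstab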

Cleanup and Streamlining

Transitioning to a VM allowed me to simplify my setup by removing several Docker containers previously managed in Unraid:

  • Pangolin
  • Gerbil
  • Traefik
  • Crowdsec

Newt Configuration

  • Adjusted the networking setup within Unraid Community Applications.
  • Added two custom networks and left the built-in Unraid networking field blank:
    • New VLAN (br3), assigned a manual IP due to networking conflicts.
    • Docker network (fossorial) for containers accessed through Pangolin and Newt.
  • Generated a Newt ID and secret, configuring these as container variables from the new Pangolin install.
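For anyone recreating the Docker side, the custom network is just a user-defined bridge; the container name below is hypothetical:

# Create the shared network and attach a container that Newt should reach.
docker network create fossorial
docker network connect fossorial some-service   # "some-service" is a placeholder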

Crowdsec Integration

The VM setup significantly simplified Crowdsec integration, particularly through tools like Crowdsec Manager, enabling straightforward management of enrollments, whitelists, and custom security scenarios including Cloudflare Turnstile.

Enhanced Security with Cloudflare Tunnels

To further enhance security, I’m utilizing Cloudflare Tunnels with their WAF. The tunnel is hosted via Unraid Community Applications within the dedicated VLAN. The traffic flows as follows:

Cloudflare → Cloudflare Tunnel → Pangolin VM (with Crowdsec) → Newt → Docker Containers (Unraid)
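If you deploy the connector as a plain container rather than through Community Applications, a sketch using Cloudflare’s official image looks like this; the token is a placeholder you get from the Zero Trust dashboard:

# Hypothetical connector deployment; substitute your real tunnel token.
docker run -d --name cloudflared --restart unless-stopped \
  cloudflare/cloudflared:latest tunnel run --token <TUNNEL_TOKEN>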

For accurate IP address detection through Cloudflare, I’m using the Traefik Real-IP Plugin.

Future Plans

My next step is integrating Crowdsec directly with Cloudflare’s WAF to shift blocking responsibilities away from Traefik. Previous attempts at IPS and geoblocking on the UDM-PRO proved resource-intensive, reaffirming my preference for Cloudflare’s robust security.

18XX Primer

18xx 101

Here’s a tutorial for 1889 on the 18xx.games platform: 18xx Games Tutorial. It walks you through turns of a three-player game, illustrating typical gameplay elements.

For 18xx games, most are derived from 1830 with modifications (sometimes significant ones that the community refers to as “McGuffins”). Key variations include the maps, available companies, tile availability, and train rules. Additionally, auction methods for companies, stock market mechanisms, and company-specific rules often differ, leading to uniquely challenging games.

Although 1830 is one of the earliest 18xx games, it followed the lesser-known 1829. 1830 is the foundation for most other 18xx games.

18XX Chart

The more introductory games, like 1889, 18Chesapeake, and 18MS, are based on the financial strategies of 1830. These games focus on timing investments and manipulating stock prices. The most significant game in this financial category is 1817, which can last over 12 hours due to its complex stock market dynamics.

Another category within the 18xx series focuses on operational games, such as 1846, designed by the creator of Race for the Galaxy. 1846 emphasizes efficient company operation, route competition, and critical timing of train upgrades, making it an accessible entry point for those new to operational strategies within the 18xx series. 1822, another significant game in this genre, is known for its lengthy gameplay and intricate management of private companies, but it may not be as suited for beginners due to its complexity.

Gameplay typically alternates between Stock Rounds and Operating Rounds, which may vary in number as the game progresses. During Stock Rounds, players buy and sell shares, affecting company valuations. In Operating Rounds, companies operate trains, manage routes, handle dividends, and invest in new trains. A crucial tip for beginners is to prioritize train purchases, which can accelerate game progression and disrupt opponents’ strategies.

Trains, which have finite lifespans, need careful management to prevent obsolescence and ensure operational efficiency. Newer trains make older models obsolete (“rusting”), forcing players to continually adapt their strategies. Companies are usually divided into public and private. Private companies, often starting smaller, can later be integrated into public ones, providing strategic financial benefits. Company funds stem solely from share purchases, and operational decisions during rounds can significantly impact overall success.

Tracks are categorized into three types: Yellow, Green, and Brown, representing different stages of development. Starting with Yellow, players can upgrade to Green and then Brown, each providing different strategic options. Tile availability is typically limited, encouraging strategic placement to potentially hinder opponents’ expansion.

During Operating Rounds, each train can run one route, and a company cannot reuse the same stretch of track across its routes; trains also cannot normally backtrack. The number of stations a train can visit, given by its numeric value (a “2 train” or a “4 train”), is crucial, as it dictates the potential revenue from that route. Each station adds a set value to the route’s total income, influencing strategic decisions about train operations and route planning.

Here is an example board from a completed 1889 game: 1889 Finished Game

Example: If you had a station in Nahari and two “2” trains, you could run two routes: one from Muroto to Nahari for 40 dollars, and one from Nangoku to Nahari for 30, for a total of 70 for the operating round.

If you had a 3 train, you could run from Muroto through Nahari to Nangoku for 50. If you additionally had a station in Kouchi, a 3 train could instead run from Kouchi through Nangoku to Nahari for 90.

You may not reuse track during a route, but multiple trains may visit the same station, as long as no track is reused.

TigerVNC Server & Ubuntu 16.04 Xenial Xerus

Today I tried to install TigerVNC Server on Ubuntu 16.04, and was surprised to find that Ubuntu still uses TightVNC in their main repos. Additionally, there are no PPAs for TigerVNC and Ubuntu 16.04 Xenial Xerus yet.

Installing it is simple:

Download the latest binary from TigerVNC. At the time of writing, version 1.7.0 for Ubuntu is located here for 64-bit and here for x86. Latest release information can be found on the TigerVNC GitHub page: https://github.com/TigerVNC/tigervnc/releases.

Then run:

sudo dpkg -i tigervncserver_filename.deb

sudo apt-get install -f

vncserver

You will be prompted with a few questions the first time you start it up, but that’s it. Your VNC server is up and running on port 5901 by default (display :1 maps to port 5900 + 1). You can connect to it from another computer using the TigerVNC viewer by pointing it at IP:5901.

There are a lot of options for TigerVNC server, but one of my favorite things about TigerVNC is that the connections are encrypted by default. However, I recommend only connecting to VNC servers over VPNs or SSH tunnels just to be sure.
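For example, to tunnel VNC over SSH (the user and host names are placeholders):

# Forward local port 5901 to the VNC server's port 5901 over SSH.
ssh -L 5901:localhost:5901 user@vnc-host
# Then point the TigerVNC viewer at localhost:5901.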