StoutPanda The Bamboo Grove

HomeLab Revamp Network -- Revised Plan & Diagrams

I recently built out our new minilab hardware cluster. After several 3D prints, I’m not 100% satisfied with it, but it’s good enough to start provisioning.

I ran into a few issues. There isn’t really a standard for minilabs yet, though two widths seem to dominate: 10 inch and 10.5 inch. If you’re diving into this, be aware! Jeff Geerling recently started an excellent collection of resources: Project Mini Rack. If I’d started with that, I wouldn’t have run into the same problems.

This led to a problem: my MikroTik switch is a few millimeters too wide to fit in the DeskPi T2. I have a few ideas (more 3D printing, or some basic woodworking to attach it to a case), but I haven’t decided on a final path. Honestly, I have enough space for a 6-8U 19-inch rack on the wall next to my current rack, and long term I may just do that.

It also seems pineboard.io has gone out of business, so my original plan to use their NVMe + PoE hat may not be possible. For now, I’m powering everything in the rack with some of the excellent new GaN-based USB-C power supplies, precariously mounted on mini shelves. (A caveat if you research GaN PD: these supplies renegotiate power delivery when new devices start drawing, which could cause issues down the line. But it’s a home lab, and I’m hyped on GaN and Navitas as a company, so I’m doing it anyway!)

Enough lamentations—here’s a picture of the in-progress build we’re starting with:

Minilab Build

Once everything is finalized, I’ll take a better photo and post more info on where I landed with the final setup.

For now, here are the 3D printed resources I’ve landed on for the DeskPi T2:

Here is the finalized implementation plan, and diagram of my network:

Kubernetes Network Infrastructure Implementation Plan

1. Network Device Standardization

Network Equipment Naming & IP Assignment

| Current Name | Current IP | New Name | New Static IP | Model |
| --- | --- | --- | --- | --- |
| Usw-flex-2.5-k8s-main | 10.0.1.215 | USW-Flex-2.5G-K8s-Main | 10.0.1.80 | Ubiquiti Networks Flex 2.5G (USW-Flex-2.5G-8) |
| Use-flex-k8s-control | 10.0.1.226 | USW-Flex-Mini-K8s-Control | 10.0.1.81 | Ubiquiti UniFi Flex Mini |
| New Device | N/A | MikroTik-CRS309-K8s-Storage | 10.0.1.82 | MikroTik CRS309-1G-8S+ |
| USW-Flex-Printer | 10.0.1.204 | USW-Flex-Printer | 10.0.1.204 | Ubiquiti USW-Flex |
| U6-Enterprise-LivingArea | 10.0.1.207 | U6-Enterprise-LivingArea | 10.0.1.207 | Ubiquiti U6 Enterprise |
| USW-Flex-Media | 10.0.1.208 | USW-Flex-Media | 10.0.1.208 | Ubiquiti USW-Flex |
| USW-Flex-SimRoom | 10.0.1.212 | USW-Flex-SimRoom | 10.0.1.212 | Ubiquiti USW-Flex |
| U6-Enterprise-IW-Upstairs-Living | 10.0.1.227 | U6-Enterprise-IW-Upstairs-Living | 10.0.1.227 | Ubiquiti U6 Enterprise In-Wall |
| Enterprise 24 PoE - Main | 10.0.1.239 | Enterprise-24-PoE-Main | 10.0.1.239 | Ubiquiti UniFi Enterprise 24 Port PoE |
| USW-Enterprise-8-PoE-Server | 10.0.1.238 | USW-Enterprise-8-PoE-Server | 10.0.1.238 | Ubiquiti USW Enterprise 8 PoE |
| U6-Lite-Game | 10.0.1.242 | U6-Lite-Game | 10.0.1.242 | Ubiquiti U6 Lite |
| UDM-PRO | 10.0.1.1 | UDM-PRO | 10.0.1.1 | Ubiquiti Dream Machine Pro |
| New Virtual Device | N/A | Vault-Backup-Unraid | 10.0.1.110 | VM/Container on Unraid |

K8s Node Integration

| Current Device | Current IP | Role in New Design | New Management IP | New Control Plane IP | vPro Management IP |
| --- | --- | --- | --- | --- | --- |
| k8s-admin | 10.0.1.223 | K8s-Admin-Box | 10.8.16.85 | 10.8.18.85 (tagged) | N/A |
| k8s-cp-02 | 10.0.1.229 | K8s-CP-02 | 10.8.16.87 | 10.8.18.87 (tagged) | N/A |
| k8s-cp-03 | 10.0.1.245 | K8s-CP-03 | 10.8.16.88 | 10.8.18.88 (tagged) | N/A |
| New Node | N/A | K8s-CP-01 | 10.8.16.86 | 10.8.18.86 (tagged) | N/A |
| New Node | N/A | K8s-MS-01-Node-1 | 10.8.16.90 | 10.8.18.90 (tagged) | 10.8.16.190 |
| New Node | N/A | K8s-MS-01-Node-2 | 10.8.16.91 | 10.8.18.91 (tagged) | 10.8.16.191 |
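
In the table above, each node’s control-plane address reuses the management host octet and only swaps the VLAN-numbered third octet (10.8.16.x → 10.8.18.x). A minimal sketch of that derivation (the function name is mine):

```python
# Derive the tagged VLAN 18 control-plane IP from a VLAN 16 management IP.
# The plan keeps the host octet identical across both VLANs.
def control_plane_ip(mgmt_ip: str) -> str:
    a, b, vlan, host = mgmt_ip.split(".")
    assert (a, b, vlan) == ("10", "8", "16"), "expected a VLAN 16 management address"
    return f"{a}.{b}.18.{host}"

print(control_plane_ip("10.8.16.85"))  # 10.8.18.85, matching K8s-Admin-Box
```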

2. VLAN Structure

Network Segmentation Plan

| VLAN ID | Name | Subnet | Gateway | Purpose |
| --- | --- | --- | --- | --- |
| 1 | Default | 10.0.1.0/24 | 10.0.1.1 | Network Management |
| 2 | IoT | 10.0.2.0/24 | 10.0.2.1 | Smart Home Devices |
| 3 | Family | 10.0.3.0/24 | 10.0.3.1 | Personal Devices |
| 4 | Work-Devices | 10.0.4.0/24 | 10.0.4.1 | Work Equipment |
| 5 | Unifi-Protect | 10.0.5.0/24 | 10.0.5.1 | Security Cameras |
| 6 | Reverse-Proxy | 10.6.6.0/24 | 10.6.6.1 | External Access |
| 10 | Public-Services | 10.10.10.0/24 | 10.10.10.1 | Media & Shared Resources |
| 16 | K8s-Management | 10.8.16.0/27 | 10.8.16.1 | Kubernetes Management |
| 18 | K8s-Control-Plane | 10.8.18.0/27 | 10.8.18.1 | API Server, etcd |
| 28 | K8s-Pod-Network | 10.8.28.0/23 | 10.8.28.1 | Container Traffic |
| 38 | K8s-Service-Network | 10.8.38.0/26 | 10.8.38.1 | Service Discovery |
| 48 | K8s-Storage-Network | 10.8.48.0/27 | 10.8.48.1 | Ceph Storage Traffic |
| 58 | K8s-LoadBalancer | 10.8.58.0/27 | 10.8.58.1 | External Service Access |
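
The K8s subnets deliberately vary in size; Python’s `ipaddress` module is a quick way to double-check how many usable hosts each prefix actually provides:

```python
import ipaddress

# Usable host counts (network and broadcast excluded) for the K8s prefixes above.
for name, cidr in [
    ("K8s-Management", "10.8.16.0/27"),
    ("K8s-Control-Plane", "10.8.18.0/27"),
    ("K8s-Pod-Network", "10.8.28.0/23"),
    ("K8s-Service-Network", "10.8.38.0/26"),
    ("K8s-Storage-Network", "10.8.48.0/27"),
    ("K8s-LoadBalancer", "10.8.58.0/27"),
]:
    net = ipaddress.ip_network(cidr)
    print(f"{name}: {net.num_addresses - 2} usable hosts")
```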

3. Implementation Phases

Phase 0: Automation & Security Infrastructure Setup

  1. Basic Network Connectivity Setup
    • Configure basic network access to K8s-Admin-Box (10.0.1.223)
    • Ensure reliable SSH access for subsequent automation tasks
    • Create backup of current network configurations
    • Document pre-implementation state for rollback purposes
  2. Ansible Configuration Management Setup
    • Install Ansible on K8s-Admin-Box (10.0.1.223)
    • Create Ansible directory structure:
      /opt/ansible/
      ├── inventory/
      │   ├── hosts.yml
      │   └── group_vars/
      ├── playbooks/
      │   ├── network/
      │   ├── kubernetes/
      │   └── verification/
      ├── roles/
      │   ├── unifi/
      │   ├── mikrotik/
      │   └── kubernetes/
      └── ansible.cfg
      
    • Configure Ansible inventory with network device groups:
      • unifi_devices
      • mikrotik_devices
      • kubernetes_nodes
      • control_plane_nodes
      • worker_nodes
    • Create UniFi API integration roles and playbooks
    • Create MikroTik API integration roles and playbooks
    • Create Kubernetes node configuration roles and playbooks
  3. Secrets Management Implementation
    • Primary Vault Installation:
      • Install HashiCorp Vault on K8s-Admin-Box
      • Configure Vault storage backend
      • Initialize Vault and set up unseal keys
      • Create secret paths for different environments:
        • network/unifi
        • network/mikrotik
        • kubernetes/certificates
        • kubernetes/tokens
      • Store critical credentials:
        • UniFi Controller credentials
        • SSH keys for network devices
        • MikroTik API credentials
        • VPN certificates and keys
        • Kubernetes sensitive configurations
      • Set up Ansible Vault integration
      • Configure automated credential rotation policies
    • Vault Replication to Unraid:
      • Create VM or container on Unraid (10.0.1.7) with IP 10.0.1.110
      • Install Vault on Unraid VM/container
      • Configure Performance Replication from the primary Vault (note: performance replication is a Vault Enterprise feature; an open-source deployment would need an alternative such as periodic snapshot restores):
        # On primary (K8s-Admin-Box)
        vault write -f sys/replication/performance/primary/enable
        vault write sys/replication/performance/primary/secondary-token id=unraid-backup
               
        # On secondary (Unraid VM/Container)
        vault write sys/replication/performance/secondary/enable token=<token-from-primary>
        
      • Configure automated health checks for replication status
      • Set up periodic sealed backups of Vault data to off-site storage
  4. Version Control & CI/CD Setup
    • Initialize Git repository for network configurations
    • Create branching strategy for changes:
      • master (production)
      • staging (testing)
      • feature branches (development)
    • Set up pre-commit hooks for validation
    • Configure GitLab CI/CD pipeline for automated testing
    • Create deployment workflows for network changes
  5. Backup Infrastructure
    • Configure automated backups for:
      • Vault data
      • Network device configurations
      • UDM-PRO settings
      • Ansible playbooks and inventory
    • Set up off-site backup destination
    • Implement backup testing and verification
  6. Phase 0 Verification
    • Verify Ansible connectivity to all targeted devices
    • Confirm Vault initialization and unsealing processes
    • Test Vault replication to Unraid backup
    • Validate backup and restore procedures
    • Test Git workflow with sample configuration change
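
The inventory groups defined back in step 2 can also be served to Ansible as a dynamic inventory script that prints JSON. A minimal sketch, seeded with a few hosts from the addressing tables in this plan (the file layout and grouping are my own illustration):

```python
#!/usr/bin/env python3
# Minimal dynamic-inventory sketch, usable as `ansible-playbook -i inventory.py ...`.
# Hosts and IPs come from the device and node tables earlier in this plan.
import json

inventory = {
    "unifi_devices": {"hosts": ["10.0.1.80", "10.0.1.81", "10.0.1.239"]},
    "mikrotik_devices": {"hosts": ["10.0.1.82"]},
    "control_plane_nodes": {"hosts": ["10.8.16.86", "10.8.16.87", "10.8.16.88"]},
    "worker_nodes": {"hosts": ["10.8.16.90", "10.8.16.91"]},
    "kubernetes_nodes": {"children": ["control_plane_nodes", "worker_nodes"]},
    "_meta": {"hostvars": {}},  # empty: no per-host variables in this sketch
}

if __name__ == "__main__":
    print(json.dumps(inventory, indent=2))
```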

Phase 1: DHCP Reconfiguration

  • Current DHCP Range: 10.0.1.200 - 10.0.1.250
  • New DHCP Range: 10.0.1.200 - 10.0.1.220
  • Infrastructure Range: 10.0.1.1 - 10.0.1.99 (static)
  • Static Client Range: 10.0.1.221 - 10.0.1.254 (reserved)
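
A small helper makes the new carve-up easy to check when reassigning addresses (the function name and labels are mine):

```python
# Classify a 10.0.1.x address under the Phase 1 addressing plan above.
def classify(ip: str) -> str:
    assert ip.startswith("10.0.1."), "plan only covers the default VLAN"
    octet = int(ip.rsplit(".", 1)[1])
    if 1 <= octet <= 99:
        return "infrastructure (static)"
    if 200 <= octet <= 220:
        return "DHCP pool"
    if 221 <= octet <= 254:
        return "static client (reserved)"
    return "unassigned"

print(classify("10.0.1.80"))   # infrastructure (static)
print(classify("10.0.1.210"))  # DHCP pool
print(classify("10.0.1.223"))  # static client (reserved)
```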

Phase 1 Verification:

  • Verify DHCP assignments are properly updated
  • Test connectivity for existing clients
  • Confirm static IP reservations are functioning

Phase 2: Core Infrastructure Configuration

  1. Pre-Configuration Backup
    • Create backup of UDM-PRO configuration
    • Document current firewall rules and VLAN configurations
    • Set restore point for Enterprise-24-PoE-Main switch
  2. UDM-PRO Configuration (10.0.1.1)
    • Create all VLANs (1, 2, 3, 4, 5, 6, 10, 16, 18, 28, 38, 48, 58)
    • Define inter-VLAN routing policies
    • Set up DHCP services for each VLAN
    • Configure firewall rules for VLAN isolation:
      • Allow 10.0.1.0/24 (Default) subnet access to K8s-Admin-Box (10.8.16.85)
      • Allow 10.0.3.0/24 (Family) subnet access to K8s-Admin-Box (10.8.16.85)
      • Implement appropriate ACLs for other inter-VLAN communication
      • Add specific rules for vPro management traffic (10.8.16.190-191)
      • Configure rules for Vault replication traffic between 10.0.1.223 and 10.0.1.110
    • Enable routing between VLAN 1 and VLAN 16 for management
  3. Enterprise-24-PoE-Main Switch (10.0.1.239)
    • Assign static IP (confirm 10.0.1.239)
    • Configure port 1 as trunk to UDM-PRO with all VLANs
    • Configure port 24 as trunk to USW-Flex-2.5G-K8s-Main with K8s VLANs
    • Configure remaining ports according to network diagram

    Port Profiles Configuration:

    • Create “All VLANs Trunk” profile: All VLANs tagged, VLAN 1 native, full PoE, 1Gbps
    • Create “K8s Trunk” profile: VLANs 1, 16, 18, 28, 38, 48, 58 tagged, VLAN 1 native, auto-negotiate speed
    • Create “AP Profile”: VLANs 1, 2, 3 tagged, VLAN 1 native, PoE enabled, 2.5Gbps
    • Create “IoT Profile”: VLAN 2 native, no tagging, PoE enabled, 1Gbps
    • Create “Family Profile”: VLAN 3 native, no tagging, PoE disabled, 1Gbps
    • Create “Work Profile”: VLAN 4 native, no tagging, PoE disabled, 1Gbps
    • Create “Security Profile”: VLAN 5 native, no tagging, PoE enabled, 1Gbps
    • Create “Server Profile”: VLANs 1, 6, 10 tagged, VLAN 1 native, PoE disabled, 10Gbps
    • Create “Management-AMT” profile: VLAN 16 native only, PoE disabled, 1Gbps

Phase 2 Verification:

  • Verify VLAN creation and tagging on trunk ports
  • Test inter-VLAN routing according to defined policies
  • Validate firewall rules by testing connectivity between segments
  • Confirm port profiles are correctly applied
  • Document validation results

Phase 3: K8s Network Infrastructure Setup

  1. Pre-Configuration Backup
    • Backup configurations of USW-Flex-2.5-k8s-main and Use-flex-k8s-control
    • Document connection matrix of existing devices
  2. USW-Flex-2.5G-K8s-Main (10.0.1.80)
    • Change IP from 10.0.1.215 to 10.0.1.80
    • Configure port 1 as trunk to Enterprise-24-PoE-Main (all VLANs, native VLAN 1)
    • Configure ports 2-3: K8s-MS-01-Node-1 (VLAN 16 native, VLAN 18 tagged)
    • Configure ports 4-5: K8s-MS-01-Node-2 (VLAN 16 native, VLAN 18 tagged)
    • Configure port 6: Trunk to USW-Flex-Mini-K8s-Control (VLAN 16 native, VLAN 18 tagged)
    • Configure port 7: MikroTik-CRS309-K8s-Storage management (VLAN 1 only)
    • Configure port 8: K8s-Admin-Box (VLAN 16 native, VLAN 18 tagged)

    Port Profiles Configuration:

    • Create “K8s-Main-Uplink” profile: All VLANs tagged, VLAN 1 native, 2.5Gbps
    • Create “K8s-Management” profile: VLAN 16 native, VLAN 18 tagged, 2.5Gbps
    • Create “K8s-Control-Mini-Uplink” profile: VLAN 16 native, VLAN 18 tagged, PoE enabled, 1Gbps
    • Create “MikroTik-Management” profile: VLAN 1 only, 1Gbps
    • Create “K8s-Admin” profile: VLAN 16 native, VLAN 18 tagged, 1Gbps
    • Create “K8s-vPro” profile: VLAN 16 native only, 1Gbps
  3. USW-Flex-Mini-K8s-Control (10.0.1.81)
    • Change IP from 10.0.1.226 to 10.0.1.81
    • Configure port 1: Uplink to USW-Flex-2.5G-K8s-Main (VLAN 16 native, VLAN 18 tagged)
    • Configure port 2: K8s-CP-01 (VLAN 16 native, VLAN 18 tagged)
    • Configure port 3: K8s-CP-02 (VLAN 16 native, VLAN 18 tagged)
    • Configure port 4: K8s-CP-03 (VLAN 16 native, VLAN 18 tagged)
    • Configure port 5: Spare

    Port Profiles Configuration:

    • Create “K8s-Control-Uplink” profile: VLAN 16 native, VLAN 18 tagged, 1Gbps
    • Create “K8s-CP-Node” profile: VLAN 16 native, VLAN 18 tagged, PoE enabled, 1Gbps
  4. MikroTik-CRS309-K8s-Storage (10.0.1.82)
    • Set initial IP to 10.0.1.82
    • Configure port 1: Management to USW-Flex-2.5G-K8s-Main (VLAN 1 only)
    • Configure ports 2-3: K8s-MS-01-Node-1 SFP+ (VLANs 28, 38, 48 tagged)
    • Configure ports 4-5: K8s-MS-01-Node-2 SFP+ (VLANs 28, 38, 48 tagged)
    • Configure ports 6-8: Reserved for future expansion

    MikroTik-Specific Configuration:

    • Configure RouterOS with appropriate VLAN trunking (shown for sfp-sfpplus2; repeat for sfp-sfpplus3 through sfp-sfpplus5 to cover both ports on each node):
      /interface vlan add interface=sfp-sfpplus2 name=vlan28-pod vlan-id=28
      /interface vlan add interface=sfp-sfpplus2 name=vlan38-service vlan-id=38
      /interface vlan add interface=sfp-sfpplus2 name=vlan48-storage vlan-id=48
      
    • Enable jumbo frames for storage network:
      /interface ethernet set sfp-sfpplus2 mtu=9000
      /interface ethernet set sfp-sfpplus3 mtu=9000
      /interface ethernet set sfp-sfpplus4 mtu=9000
      /interface ethernet set sfp-sfpplus5 mtu=9000
      
    • Configure bridge interfaces as needed:
      /interface bridge add name=bridge-storage protocol-mode=none
      /interface bridge port add bridge=bridge-storage interface=vlan48-storage
      

    Port Profiles Configuration:

    • Create “MikroTik-Management-Port” profile: VLAN 1 only, no VLAN tagging, 1Gbps
    • Create “K8s-Data-SFP” profile: VLANs 28, 38, 48 tagged, 10Gbps
    • Create “Future-Expansion” profile: No configuration, disabled
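
When testing jumbo frames later, it helps to know the largest ICMP payload that fits in a 9000-byte frame; a quick sketch (assuming IPv4 with no options):

```python
# ICMP payload size for a ping that exactly fills the MTU:
# MTU minus 20-byte IPv4 header minus 8-byte ICMP header.
def max_icmp_payload(mtu: int) -> int:
    return mtu - 20 - 8

# With MTU 9000 this gives 8972, i.e. `ping -M do -s 8972 <storage IP>`
# (-M do sets don't-fragment, so an undersized link fails loudly).
print(max_icmp_payload(9000))
```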

Phase 3 Verification:

  • Test connectivity between switch management interfaces
  • Verify VLAN tagging on all configured ports
  • Confirm MikroTik bridging and VLAN configuration
  • Test jumbo frame capability on storage network
  • Document network device MAC addresses and physical connections

Phase 4: Switch Management Standardization

  1. Pre-Configuration Backup
    • Create backups of all UniFi device configurations
    • Document current port assignments and VLAN configurations
  2. Access Switches & APs
    • Convert all UniFi devices to static IPs:
      • USW-Flex-Printer: 10.0.1.204
      • U6-Enterprise-LivingArea: 10.0.1.207
      • USW-Flex-Media: 10.0.1.208
      • USW-Flex-SimRoom: 10.0.1.212
      • U6-Enterprise-IW-Upstairs-Living: 10.0.1.227
      • USW-Enterprise-8-PoE-Server: 10.0.1.238
      • U6-Lite-Game: 10.0.1.242

    Port Profiles Assignment:

    • Assign appropriate profiles to access switches based on connected devices:
      • For AP connections: “AP Profile”
      • For IoT device connections: “IoT Profile”
      • For family device connections: “Family Profile”
      • For work device connections: “Work Profile”
      • For security camera connections: “Security Profile”
      • For server connections: “Server Profile”

Phase 4 Verification:

  • Confirm all devices have proper static IP assignment
  • Verify port profiles are correctly applied
  • Test device connectivity on appropriate VLANs
  • Validate AP SSID to VLAN mappings

Phase 5: K8s Node Configuration

  1. Pre-Configuration Backup
    • Create full system backups of all existing Kubernetes nodes
    • Document current network configurations and service assignments
  2. Admin Box Configuration
    • Migrate k8s-admin (10.0.1.223) to:
      • Management: 10.8.16.85/27 (VLAN 16)
      • Control Plane access: 10.8.18.85/27 (VLAN 18)
    • Update network configuration files for new IP addressing
    • Test Vault operation with new network configuration
  3. Control Plane Setup
    • Migrate existing k8s-cp-02 (10.0.1.229) to new IPs:
      • Management: 10.8.16.87
      • Control Plane: 10.8.18.87
    • Migrate existing k8s-cp-03 (10.0.1.245) to new IPs:
      • Management: 10.8.16.88
      • Control Plane: 10.8.18.88
    • Configure new K8s-CP-01:
      • Management: 10.8.16.86
      • Control Plane: 10.8.18.86
    • Update etcd peer addresses and API server configurations
  4. Worker Node Setup
    • Configure K8s-MS-01-Node-1:
      • Management: 10.8.16.90/27 (VLAN 16)
      • Control Plane: 10.8.18.90/27 (VLAN 18)
      • Pod Network: 10.8.28.90/23 (VLAN 28)
      • Service Network: 10.8.38.90/26 (VLAN 38)
      • Storage Network: 10.8.48.90/27 (VLAN 48)
      • vPro Management: 10.8.16.190 (VLAN 16)
    • Configure K8s-MS-01-Node-2:
      • Management: 10.8.16.91/27 (VLAN 16)
      • Control Plane: 10.8.18.91/27 (VLAN 18)
      • Pod Network: 10.8.28.91/23 (VLAN 28)
      • Service Network: 10.8.38.91/26 (VLAN 38)
      • Storage Network: 10.8.48.91/27 (VLAN 48)
      • vPro Management: 10.8.16.191 (VLAN 16)
  5. Intel vPro Configuration
    • Configure Intel AMT on worker nodes:
      • K8s-MS-01-Node-1: Enable AMT and assign 10.8.16.190
      • K8s-MS-01-Node-2: Enable AMT and assign 10.8.16.191
    • Set admin credentials in Vault
    • Configure network access policies and security settings
    • Test remote management capabilities
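
Since every per-VLAN address in the worker tables reuses the node’s host octet, the full interface plan for a node can be generated from one number (a sketch; the names and structure are mine):

```python
# Per-role (VLAN third octet, prefix length) pairs from the Phase 5 worker tables.
PLAN = {
    "management":    (16, 27),
    "control_plane": (18, 27),
    "pod":           (28, 23),
    "service":       (38, 26),
    "storage":       (48, 27),
}

def node_addresses(host_octet: int) -> dict:
    """Return the CIDR address for each role, reusing the node's host octet."""
    return {role: f"10.8.{vlan}.{host_octet}/{plen}"
            for role, (vlan, plen) in PLAN.items()}

print(node_addresses(90)["storage"])  # 10.8.48.90/27, K8s-MS-01-Node-1
```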

Phase 5 Verification:

  • Test control plane connectivity
  • Verify worker node communication on all network segments
  • Validate Intel vPro/AMT configurations
  • Test API server access from management network
  • Verify Kubernetes service discovery across new network segments

4. Transition Planning

  1. Service Migration Strategy
    • Document all running services and dependencies
    • Create downtime schedule for each component
    • Develop rollback procedures for each step
    • Define success criteria for each migration
  2. Maintenance Windows
    • Schedule Phase 1-2: [Date], [Time] (estimated 2 hours)
    • Schedule Phase 3: [Date], [Time] (estimated 3 hours)
    • Schedule Phase 4: [Date], [Time] (estimated 2 hours)
    • Schedule Phase 5: [Date], [Time] (estimated 4 hours)
  3. Component Transition Order
    • Network infrastructure first (switches, routers)
    • Control plane nodes (one at a time)
    • Worker nodes
    • Service reconfiguration
  4. Post-Migration Verification
    • Define test cases for each critical service
    • Verify external connectivity
    • Validate internal service discovery
    • Confirm backup systems are functioning

5. Verification & Testing

  1. Network Connectivity Tests
    • Verify trunk configurations with tcpdump on each VLAN
    • Test routing between allowed VLANs
    • Validate VLAN isolation for security
    • Verify jumbo frame support on storage network
    • Test vPro management interface connectivity
  2. Kubernetes Functionality Tests
    • Test control plane HA configuration
    • Verify pod networking across nodes
    • Validate service discovery
    • Test storage network performance
    • Confirm external connectivity via load balancer network
    • Verify Vault access from cluster nodes
    • Test remote management via Intel AMT

6. Backup & Rollback Procedures

  1. Backup Methodology
    • Create full device configuration backups before each change
    • Maintain version-controlled configuration files in Git
    • Document physical port connections with photos/diagrams
  2. Rollback Procedures
    • Network Device Rollback:
      • Documented procedure for restoring switch configs via controller
      • CLI-based restoration for MikroTik devices
    • Kubernetes Node Rollback:
      • System state restoration procedure
      • Network configuration reset process
    • Service Restoration:
      • Documented process for each critical service
      • Data persistence verification steps
  3. Testing Recovery Procedures
    • Schedule periodic recovery drills
    • Verify backup integrity
    • Document recovery time objectives (RTOs)

7. Documentation

  1. Network Documentation
    • Complete IP address inventory
    • VLAN assignments
    • Switch port configurations
    • Physical and logical network diagrams
    • Port profile definitions and assignments
    • vPro management interfaces and access procedures
    • Vault replication configuration and management
  2. Kubernetes Configuration
    • Node inventory
    • Network configurations per node
    • Service subnet allocations
    • HA configuration details
    • Detailed MikroTik configuration for storage network
    • Recovery and rollback procedures

Updated Planned Final Network Diagram

Network Diagram (SVG)

Download/view the SVG version of the network diagram

Homelab Scripts & Documentation

All scripts, documentation, and ongoing updates for this homelab project will be maintained in a dedicated repository:
https://github.com/stoutpanda/homelab

Check there for the latest scripts, guides, and notes as the build progresses.

HomeLab Revamp Network Plan: Game Plan

Below is an initial overview of my plan for implementing the updated network design to support Kubernetes.

Network Topology Overview with VLANs and Ports

Internet
    │
    ▼
UDM-PRO (10.0.1.1)
    │ 
    │ DAC → Port 1 (VLANs: 1,2,3,4,5,6,10,16,18,28,38,48,58)
    ▼
Unifi Enterprise 24 Port (10.0.1.239) - Core Switch
    │
    │ SFP+ → Port 24 (VLANs: 1,16,18,28,38,48,58)
    ▼
Ubiquiti Flex 2.5G (10.0.1.80)
    │
    ├───── Port 1: Uplink to Enterprise 24 Port (Trunk: All VLANs, VLAN 1 native)
    │
    ├───── Port 2-3: MS-01 Node 1 (10.8.16.90) 2.5G ports (VLAN 16 native, VLAN 18 tagged)
    │
    ├───── Port 4-5: MS-01 Node 2 (10.8.16.91) 2.5G ports (VLAN 16 native, VLAN 18 tagged)
    │
    ├───── Port 6: Connection to Flex Mini (VLAN 16 native, VLAN 18 tagged)
    │      │
    │      ▼
    │     Ubiquiti Flex Mini (10.0.1.81)
    │       │
    │       ├── Port 1: Uplink to Flex 2.5G (VLAN 16 native, VLAN 18 tagged)
    │       │
    │       ├── Port 2: K8s-CP-01 (10.8.16.86) (VLAN 16 native, VLAN 18 tagged)
    │       │
    │       ├── Port 3: K8s-CP-02 (10.8.16.87) (VLAN 16 native, VLAN 18 tagged)
    │       │
    │       └── Port 4: K8s-CP-03 (10.8.16.88) (VLAN 16 native, VLAN 18 tagged)
    │
    ├───── Port 7: MikroTik CRS309 Management (VLAN 1 only)
    │      │
    │      ▼
    │     MikroTik CRS309-1G-8S+ (10.0.1.82)
    │       │
    │       ├── Port 1: Management to Flex 2.5G (VLAN 1 only)
    │       │
    │       ├── Port 2-3: MS-01 Node 1 SFP+ (VLANs: 28,38,48)
    │       │
    │       └── Port 4-5: MS-01 Node 2 SFP+ (VLANs: 28,38,48)
    │
    └───── Port 8: Admin Box (10.8.16.85) (VLAN 16 native, VLAN 18 tagged)

VLAN Structure

| VLAN ID | Purpose | Subnet | Gateway | Notes |
| --- | --- | --- | --- | --- |
| 16 | Kubernetes Management | 10.8.16.0/27 | 10.8.16.1 | Native VLAN for K8s components |
| 18 | Kubernetes Control Plane | 10.8.18.0/27 | 10.8.18.1 | For API server, etcd, scheduler |
| 28 | Kubernetes Pod Network | 10.8.28.0/23 | 10.8.28.1 | 512 IPs for pod allocation |
| 38 | Kubernetes Service Network | 10.8.38.0/26 | 10.8.38.1 | 64 IPs for Kubernetes services |
| 48 | Storage Network | 10.8.48.0/27 | 10.8.48.1 | Dedicated for Ceph traffic |
| 58 | Load Balancer IPs | 10.8.58.0/27 | 10.8.58.1 | 32 IPs for external service access |

Mermaid Diagram

graph TD
    %% VLAN information - positioned at the top
    subgraph "VLAN Structure"
        VLAN1["VLAN 1: Management (10.0.1.0/24)"]
        VLAN16["VLAN 16: K8s Management (10.8.16.0/27)"]
        VLAN18["VLAN 18: K8s Control Plane (10.8.18.0/27)"]
        VLAN28["VLAN 28: K8s Pod Network (10.8.28.0/23)"]
        VLAN38["VLAN 38: K8s Service Network (10.8.38.0/26)"]
        VLAN48["VLAN 48: Storage Network (10.8.48.0/27)"]
        VLAN58["VLAN 58: Load Balancer Network (10.8.58.0/27)"]
    end
    
    %% Main network path
    Internet[Internet] --> UDMPRO[UDM-PRO\n10.0.1.1]
    UDMPRO --"DAC → Port 1\nVLANs: 1,2,3,4,5,6,10,16,18,28,38,48,58"--> Enterprise24["Unifi Enterprise 24 Port\n10.0.1.239"]
    Enterprise24 --"SFP+ → Port 24\nVLANs: 1,16,18,28,38,48,58"--> UbiquitiFlex25G["Ubiquiti Flex 2.5G\n10.0.1.80"]
    
    %% Flex 2.5G connections
    UbiquitiFlex25G --"Port 2-3\nVLAN 16 native, VLAN 18 tagged"--> MS01Node1["MS-01 Node 1\n10.8.16.90"]
    UbiquitiFlex25G --"Port 4-5\nVLAN 16 native, VLAN 18 tagged"--> MS01Node2["MS-01 Node 2\n10.8.16.91"]
    UbiquitiFlex25G --"Port 6\nVLAN 16 native, VLAN 18 tagged"--> UbiquitiFlexMini["Ubiquiti Flex Mini\n10.0.1.81"]
    UbiquitiFlex25G --"Port 7\nVLAN 1 only"--> MikroTikCRS309["MikroTik CRS309-1G-8S+\n10.0.1.82"]
    UbiquitiFlex25G --"Port 8\nVLAN 16 native, VLAN 18 tagged"--> AdminBox["Admin Box\n10.8.16.85"]
    
    %% Flex Mini connections
    UbiquitiFlexMini --"Port 2\nVLAN 16 native, VLAN 18 tagged"--> K8sCP01["K8s-CP-01\n10.8.16.86"]
    UbiquitiFlexMini --"Port 3\nVLAN 16 native, VLAN 18 tagged"--> K8sCP02["K8s-CP-02\n10.8.16.87"]
    UbiquitiFlexMini --"Port 4\nVLAN 16 native, VLAN 18 tagged"--> K8sCP03["K8s-CP-03\n10.8.16.88"]
    
    %% MikroTik connections
    MikroTikCRS309 --"Port 2-3\nVLANs: 28,38,48"--> MS01Node1SFP["MS-01 Node 1 SFP+"]
    MikroTikCRS309 --"Port 4-5\nVLANs: 28,38,48"--> MS01Node2SFP["MS-01 Node 2 SFP+"]
    

    
    %% Device-type styling
    classDef router fill:#f96,stroke:#333,stroke-width:2px
    classDef switch fill:#69b,stroke:#333,stroke-width:2px
    classDef client fill:#ddd,stroke:#333,stroke-width:1px
    classDef vlan fill:#e8f4f8,stroke:#333,stroke-width:1px,stroke-dasharray: 5 5
    
    %% Apply classes
    class UDMPRO router
    class Enterprise24,UbiquitiFlex25G,UbiquitiFlexMini,MikroTikCRS309 switch
    class MS01Node1,MS01Node2,K8sCP01,K8sCP02,K8sCP03,AdminBox,MS01Node1SFP,MS01Node2SFP client
    class VLAN1,VLAN16,VLAN18,VLAN28,VLAN38,VLAN48,VLAN58 vlan

Evolution of My Home Lab: Past, Present, and Future

Historical Background

I got started with homelabbing when I was pretty young, using an old iMac and running Puppy Linux. That early curiosity grew into a deeper interest as I built a basic NAS using a Supermicro Intel Atom (X7) board running FreeBSD. At some point, a friend and I built a Proxmox server, and I was playing with BeagleBone boards for side projects. Around the same time, I was forcing myself to use Arch Linux on a laptop, while working professionally in a Windows Server environment. That mix of self-learning and hands-on exposure to different platforms helped me get comfortable picking up new technologies quickly.

Current Setup

These days, my home lab lives across two small 19-inch racks. The main rack, mounted high on the wall, holds our core networking gear: a Ubiquiti UDM-PRO and an Enterprise 24-port PoE switch. That powers our home’s Wi-Fi (recently upgraded to Wi-Fi 6E) and a handful of PoE security cameras. An SFP+ fiber line also runs across the office to a UniFi Enterprise 8-port switch in a second 12U rack.

That rack also holds a UNVR. I added it after running into memory issues with the UDM-PRO doing too much; it was right at the suggested limits, and it showed. The star of the setup, though, is our Unraid server, a project I built with my brother and a good friend:

  • Motherboard: Supermicro H11SSL-NC
  • Processor: AMD EPYC 7281 16-Core
  • Memory: 64GB ECC RAM
  • GPU: NVIDIA RTX A2000

Storage Configuration:

  • Main Storage: 8x 16TB SATA HDDs in an Unraid array with dual parity
  • SSD Pools: 2x 1TB consumer SSDs in ZFS mirrors, split across three pools for VMs, Docker containers, and caching

It runs everything from media services and game servers to tools like Pi-hole, budget trackers, Nextcloud, and more recently, Ollama models. It’s been fun pushing into AI workloads, but also kind of the tipping point for this system.

Identified Limitations

Once I started running LLMs through Ollama, I noticed performance hitting a ceiling—IOPS, memory, and CPU were all getting pushed. I’ve also been using Docker Compose more heavily, and while Unraid’s UI was great when I was starting out, now it feels like it’s getting in the way more than helping.

To be honest, I also went through a phase where tech as a hobby just wasn’t doing it for me anymore. But over the last 6–8 months, that spark came back—mostly thanks to AI tools helping me break through a few roadblocks. That motivation has been key to where I’m going next.

Future Plans and Learning Opportunities

I stumbled onto the idea of a mini-rack build while browsing setups—and the concept really clicked. The plan is to build out a new 10-inch rack and move most of the workloads over to that, while the Unraid server continues as a NAS and VM host.

New Mini-Rack Infrastructure

Networking Backbone

  • 10Gbps uplink to the main rack via an 8-port Ubiquiti switch
  • 2.5Gbps ports for internal communication across the new stack

Storage & Kubernetes Traffic

  • A MikroTik 8-port SFP+ switch for 10Gbps communication between storage and k8s nodes

Control Plane

  • 3x Raspberry Pi 5 (8GB each) with active cooling and M.2 SATA hats, plus 1x 16GB unit as a jump/admin box
  • Connected over a UniFi Lite 1Gbps switch

Compute Nodes

Two Minisforum MS01 boxes will act as the primary nodes:

  • Intel Core i9-12900H
  • 96GB RAM (2x48GB Crucial)
  • 1TB NVMe (Inland) for the OS
  • 2x 2TB SK Hynix NVMe drives each for Ceph storage

The goal is to run Kubernetes on bare metal, using Ceph for distributed storage. We’re still doing hardware testing and burn-in now, but once that’s done, I’ll start migrating workloads and services over. Eventually, I’d love to add a small SSD-only NAS to complement the setup—but we’ll see what budget and time allow.

Learning Goals and Opportunities

This project is giving me a real-world sandbox to explore:

  • Running a high-availability k8s control plane on ARM-based nodes
  • Building and maintaining Ceph storage in a home environment
  • Learning container orchestration in greater depth
  • Using GitOps to manage and deploy infrastructure and applications

The new setup is a chance to go deeper into container orchestration and storage architectures, while still scratching that tinkering itch. It’s been a lot of fun diving back in with a more intentional approach—and I’m excited to see where it leads.

Self-Hosting Pangolin on Unraid: Transitioning to a VM-based Setup

Self-Hosting Pangolin on Unraid

Pangolin is an exciting new tool designed to simplify remote access to self-hosted resources, bypass CGNAT issues, and seamlessly integrate with security tools such as Crowdsec. With its recent 1.0 release, Pangolin continues to evolve, incorporating powerful new features.

Why Switch to a VM-Based Setup?

In my experience, the optimal approach to using Pangolin on Unraid is to bypass the official Unraid installation guide provided by Fossorial and instead deploy Pangolin within an isolated virtual machine (VM). This method simplifies the integration with Crowdsec and leverages Docker Compose for easier management.

Steps to My Transition

Network Configuration

  • Created a dedicated VLAN specifically for Pangolin.
  • Configured firewall rules to allow outbound traffic only, preventing communication with other internal networks.
  • Enabled administrative access from trusted local networks.

Unraid Configuration

  • Established a new network interface linked directly to the VLAN.

Virtual Machine Setup

  • VM Specs: 4 CPU cores, 4 GB RAM (additional resources allocated for Crowdsec).
  • Set networking to default settings within the VM for simplicity.
  • Mounted shared host files via VirtioFS at /pangolin to enable easy backups of configurations and logs.
  • Installed Pangolin using the Fossorial Quick Install Guide.

Cleanup and Streamlining

Transitioning to a VM allowed me to simplify my setup by removing several Docker containers previously managed in Unraid:

  • Pangolin
  • Gerbil
  • Traefik
  • Crowdsec

Newt Configuration

  • Adjusted the networking setup within Unraid Community Applications.
  • Added two custom networks, and left the built in unraid networking field blank.:
    • New VLAN (br3), assigned a manual IP due to networking conflicts.
    • Docker network (fossorial) for containers accessed through Pangolin and Newt. Unraid Newt CA Picture
  • Generated a Newt ID and secret, configuring these as container variables from the new Pangolin install.
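For reference, the equivalent `docker run` for Newt looks roughly like the following. All values are placeholders, and the environment variable names are my recollection of the Fossorial docs — check the official Newt documentation before copying this:

```shell
# Hypothetical values; the ID and secret come from adding a site in the Pangolin UI
docker run -d --name newt \
  --network fossorial \
  -e PANGOLIN_ENDPOINT=https://pangolin.example.com \
  -e NEWT_ID=your-newt-id \
  -e NEWT_SECRET=your-newt-secret \
  fosrl/newt
```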

Crowdsec Integration

The VM setup significantly simplified Crowdsec integration, particularly through tools like Crowdsec Manager, which enables straightforward management of enrollments, whitelists, and custom security scenarios, including Cloudflare Turnstile.

Enhanced Security with Cloudflare Tunnels

To further enhance security, I’m utilizing Cloudflare Tunnels with their WAF. The tunnel is hosted via Unraid Community Applications within the dedicated VLAN. The traffic flows as follows:

Cloudflare → Cloudflare Tunnel → Pangolin VM (with Crowdsec) → Newt → Docker Containers (Unraid)

For accurate IP address detection through Cloudflare, I’m using the Traefik Real-IP Plugin.

Future Plans

My next step is integrating Crowdsec directly with Cloudflare’s WAF to shift blocking responsibilities away from Traefik. Previously attempted IPS and geoblocking solutions with UDM Pro proved resource-intensive, reaffirming my preference for Cloudflare’s robust security.

18XX Primer

18xx 101

Here’s a tutorial for 1889 on the 18xx.games platform: 18xx Games Tutorial. It walks you through turns of a three-player game, illustrating typical gameplay elements.

For 18xx games, most are derived from 1830 with modifications (sometimes significant ones that the community refers to as “McGuffins”). Key variations include the maps, available companies, tile availability, and train rules. Additionally, auction methods for companies, stock market mechanisms, and company-specific rules often differ, leading to uniquely challenging games.

Although 1830 is one of the earliest 18xx games, it was preceded by the lesser-known 1829; even so, 1830 is the foundation for most other 18xx titles.

18XX Chart

The more introductory games, like 1889, 18Chesapeake, and 18MS, are based on the financial strategies of 1830. These games focus on timing investments and manipulating stock prices. The most significant game in this financial category is 1817, which can last over 12 hours due to its complex stock market dynamics.

Another category within the 18xx series focuses on operational games, such as 1846, designed by the creator of Race for the Galaxy. 1846 emphasizes efficient company operation, route competition, and critical timing of train upgrades, making it an accessible entry point for those new to operational strategies within the 18xx series. 1822, another significant game in this genre, is known for its lengthy gameplay and intricate management of private companies, but it may not be as suited for beginners due to its complexity.

Gameplay typically alternates between Stock Rounds and Operating Rounds, which may vary in number as the game progresses. During Stock Rounds, players buy and sell shares, affecting company valuations. In Operating Rounds, companies operate trains, manage routes, handle dividends, and invest in new trains. A crucial tip for beginners is to prioritize train purchases, which can accelerate game progression and disrupt opponents’ strategies.

Trains, which have finite lifespans, need careful management to prevent obsolescence and ensure operational efficiency. Newer trains make older models obsolete (“rusting”), forcing players to continually adapt their strategies. Companies are usually divided into public and private. Private companies, often starting smaller, can later be integrated into public ones, providing strategic financial benefits. Company funds stem solely from share purchases, and operational decisions during rounds can significantly impact overall success.

Tracks are categorized into three types: Yellow, Green, and Brown, representing different stages of development. Starting with Yellow, players can upgrade to Green and then Brown, each providing different strategic options. Tile availability is typically limited, encouraging strategic placement to potentially hinder opponents’ expansion.

During Operating Rounds, each train can run one route, and your company's routes cannot reuse the same stretch of track. Trains normally cannot backtrack. A train's numeric value (a "2 train," a "4 train") sets the number of stations it may visit, which dictates the potential revenue from that route: each station adds a set value to the route's total income, influencing strategic decisions about train operations and route planning.

Here is an example board from a completed 1889 game: 1889 Finished Game

Example: If you had a station in Nahari and two "2" trains, you could run two routes: one from Muroto to Nahari for 40, and one from Nangoku to Nahari for 30, for a total of 70 for the operating round.

If you had a 3 train instead, you could run Muroto to Nahari to Nangoku for 50; or, if you additionally had a station in Kouchi, you could run Kouchi to Nangoku to Nahari for 90.

You may not reuse track during a route, but multiple trains may visit the same station, as long as no track is reused.
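The route arithmetic above is just a sum of station values. Here is a toy sketch in Python — the city values are illustrative placeholders chosen to match the totals in the example, not official 1889 tile values (which change as cities are upgraded):

```python
# Illustrative city values chosen to reproduce the example totals above;
# real 1889 revenues depend on the tiles currently on the board.
CITY_VALUES = {"Muroto": 20, "Nahari": 20, "Nangoku": 10, "Kouchi": 60}

def route_revenue(route):
    """A route's revenue is the sum of the values of the stations it visits."""
    return sum(CITY_VALUES[city] for city in route)

# Two "2" trains out of a Nahari station:
print(route_revenue(["Muroto", "Nahari"]))               # 40
print(route_revenue(["Nangoku", "Nahari"]))              # 30
# A "3" train can visit three stations:
print(route_revenue(["Muroto", "Nahari", "Nangoku"]))    # 50
print(route_revenue(["Kouchi", "Nangoku", "Nahari"]))    # 90
```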