There Has Been a Slight Change to My Lab: An Upgrade

by Eric Zarnosky

🔧 My Reasoning for the Upgrade

Running Kubernetes on a single machine was a great way to get started. It was easy to manage, simple to test deployments, and perfect for learning. But over time, I ran into several limitations.

With just one node, I couldn’t truly explore:

  • 🔍 Scheduling across multiple nodes
  • 🛰️ Cluster communication and service discovery
  • ⚙️ High Availability (HA) deployments
  • 🔄 Disaster recovery scenarios
  • 📈 Horizontal scaling and load balancing

💸 The Upgrade

I decided to invest in a few low-cost Mini PCs to build out a proper cluster. I found a great option on Amazon: the OUMAX Mini PC, currently priced at just $141.54 USD.

They offer a solid set of specs for a home lab:

  • 🧠 CPU: Intel N150 (4 cores, 4 threads)
  • 💾 RAM: 16GB
  • 📦 Storage: 500GB NVMe SSD
  • 🌐 Networking: 2 x 2.5GbE NICs

šŸ—ļøĀ The New Lab Design

With three of these units, I’m now running a multi-node Kubernetes cluster where each node acts as both a control plane and a worker. The dual NICs let me physically separate two kinds of traffic (a rough config sketch follows the list):

  • 🔒 Internal traffic: cluster and storage communications
  • 🌐 External traffic: ingress/egress to the internet
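
For a sense of what that separation looks like in practice, here’s a rough kubeadm-style sketch rather than my exact configuration: the 10.10.0.0/24 internal subnet, the addresses, and the pod CIDR below are placeholders for illustration. The idea is simply to advertise the API server and kubelet on the internal NIC so cluster chatter never touches the external network.

```yaml
# Hypothetical kubeadm config: pin the control plane and kubelet to the internal NIC.
# All addresses are placeholders, not my real network.
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "10.10.0.11"      # this node's IP on the internal (cluster/storage) NIC
nodeRegistration:
  kubeletExtraArgs:
    node-ip: "10.10.0.11"             # keep kubelet traffic off the external NIC
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: "10.10.0.10:6443"   # shared endpoint on the internal network
networking:
  podSubnet: "10.244.0.0/16"
```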

This setup allows me to:

  • 🔄 Simulate node failure and recovery (a sample drill is sketched below)
  • 🛠️ Add/remove nodes dynamically
  • 📡 Test Longhorn and NFS over an isolated backend network
  • 📊 Analyze service behavior under load
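
To make the failure drill concrete, the gentle version is just cordoning and draining a node and watching everything reschedule; the hostname below is a placeholder. The rough version is pulling the power cord on one of the boxes and seeing how long recovery actually takes.

```bash
# Hypothetical failure drill; "node2" is a placeholder hostname.
kubectl cordon node2                                              # stop new pods landing here
kubectl drain node2 --ignore-daemonsets --delete-emptydir-data    # evict running pods
kubectl get pods -A -o wide --watch                               # watch workloads reschedule
kubectl uncordon node2                                            # bring the node back
```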

🖥️ What About the Old Machine?

The original system isn’t going away. I’ll be dedicating it to GPU-related workloads. It’s perfect for testing things like:

  • 🧠 AI models with the NVIDIA toolkit (see the sample pod spec below)
  • 🎥 Media workloads like transcoding and inference
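
If that box ends up joining the cluster as a GPU worker, the usual smoke test once the NVIDIA device plugin is in place is a pod that requests a GPU and runs nvidia-smi. This is a generic example rather than my setup; the pod name and image tag are illustrative.

```yaml
# Hypothetical GPU smoke test: request one GPU and print nvidia-smi output.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvidia/cuda:12.4.1-base-ubuntu22.04   # illustrative tag
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1    # resource exposed by the NVIDIA device plugin
```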

🧪 What’s Next?

I’m considering picking up a fourth Mini PC and running Windows Server 2019/2022 on it. This would let me experiment with Windows containers and hybrid clusters.

💡 I know Microsoft recommends Azure or Azure Stack HCI for Windows-based pods, but I’m curious to see what’s possible in a pure local setup. Even if it’s not ideal, the experience alone will be valuable.
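
If the hybrid experiment happens, scheduling mostly comes down to OS-aware placement: Windows pods get pinned to the Windows node with a nodeSelector, and the container image has to match the host’s Windows Server build. A minimal sketch, with an illustrative name and image:

```yaml
# Hypothetical Windows workload pinned to a Windows node via the standard os label.
apiVersion: v1
kind: Pod
metadata:
  name: iis-test
spec:
  nodeSelector:
    kubernetes.io/os: windows
  containers:
    - name: iis
      image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2022
      ports:
        - containerPort: 80
```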

🧵 TL;DR

My lab just leveled up. Going from one node to a real multi-node Kubernetes setup opens the door to high availability, better simulation of production-grade environments, and hands-on experimentation with real-world scenarios — all without breaking the bank.