Small Homelab After 1 Year
Introduction
When you first set out to build a homelab, the mental picture is often a single rack‑mount server, a handful of virtual machines, and a modest network switch. The reality, however, tends to evolve faster than the original plan. After a year of tinkering, testing, and scaling, many hobbyists find their 10U rack filling up with devices they never imagined they’d need.
For seasoned sysadmins and DevOps engineers, a small homelab is more than a playground—it’s a sandbox for validating infrastructure‑as‑code pipelines, experimenting with self‑hosted services, and sharpening troubleshooting skills without risking production workloads. The challenge lies in balancing three core principles that most successful labs share: good performance, low power consumption, and future‑proof scalability.
In this guide we’ll walk through the entire lifecycle of a one‑year‑old homelab, from the hardware choices that got you here to the automation frameworks that keep it running smoothly. You’ll learn:
- How to evaluate the current state of a modest 10U rack and decide what stays, what goes, and what can be consolidated.
- The essential prerequisites—hardware, OS, networking, and security—that underpin a reliable self‑hosted environment.
- A step‑by‑step installation workflow for the most common components (Proxmox VE, Docker, Ansible, and Prometheus) with real‑world configuration snippets.
- Optimization techniques that keep power draw under control while delivering the performance needed for CI/CD pipelines, monitoring stacks, and personal cloud services.
- Daily operational practices, backup strategies, and scaling considerations that keep the lab healthy for the next year and beyond.
Whether you’re looking to prune an overgrown rack, replicate a production‑grade CI pipeline at home, or simply document the lessons learned from a year of rapid growth, this post provides a practical, end‑to‑end roadmap for managing a small homelab in a DevOps‑centric world.
Understanding the Topic
What Is a “Small Homelab”?
A homelab is a privately owned collection of servers, networking gear, and storage devices used for learning, testing, and running self‑hosted applications. The qualifier small typically refers to a footprint that fits within a single 10U rack or a compact cabinet, consuming less than 500 W on average. Despite its size, a small homelab can host a surprisingly rich stack: virtualization hosts, container orchestration, monitoring, CI/CD, and even edge‑AI workloads.
Historical Context
The concept of a home‑based lab dates back to the early 2000s when hobbyists repurposed old PCs to run Linux services. The rise of virtualization (VMware ESXi, Xen) and later containerization (Docker, Kubernetes) dramatically reduced the hardware footprint required for complex environments. In the last decade, open‑source platforms like Proxmox VE, TrueNAS, and Home Assistant have made it possible to run production‑grade services on low‑power ARM or Intel NUC devices, turning a single rack unit into a full‑featured data center.
Core Features and Capabilities
| Feature | Typical Implementation in a Small Homelab | Benefits |
|---|---|---|
| Virtualization | Proxmox VE or VMware ESXi on a single Intel Xeon or AMD EPYC node | Consolidates multiple VMs, isolates workloads |
| Container Runtime | Docker Engine with Docker Compose, or lightweight K3s | Fast deployment, low overhead |
| Infrastructure as Code | Ansible playbooks, Terraform for cloud‑linked resources | Reproducibility, version control |
| Monitoring & Alerting | Prometheus + Grafana, Loki for logs | Real‑time visibility, proactive issue detection |
| Self‑Hosted Services | Nextcloud, Gitea, Home Assistant, Pi-hole | Data sovereignty, privacy |
| Backup & Disaster Recovery | Restic, BorgBackup, ZFS snapshots | Data integrity, quick restores |
| Network Segmentation | VLANs on a managed switch, pfSense firewall | Security isolation, traffic shaping |
Pros and Cons
| Pros | Cons |
|---|---|
| Hands‑on learning – Real‑world experience with production tools | Power consumption – Even low‑power devices add up |
| Cost‑effective – Reuse of old hardware, open‑source stack | Complexity – Managing many services can become overwhelming |
| Privacy & control – No reliance on third‑party SaaS | Maintenance overhead – Regular updates, backups, and security patches |
| Rapid prototyping – Test CI/CD pipelines, IaC scripts | Limited resources – CPU, RAM, and storage must be carefully allocated |
Ideal Use Cases
- CI/CD sandbox – Run GitLab Runner or Jenkins agents for personal projects.
- Self‑hosted cloud – Deploy Nextcloud, Bitwarden, and Syncthing for personal data.
- Network services – Pi‑hole, Unbound DNS, and a pfSense firewall for ad‑blocking and VPN.
- Home automation – Home Assistant with MQTT broker and Zigbee/Z‑Wave integration.
- Learning platform – Practice Ansible playbooks, Terraform modules, and Kubernetes manifests.
Current State and Future Trends
As of 2024, the homelab ecosystem is gravitating toward edge‑native workloads and low‑power ARM platforms (e.g., Raspberry Pi 5, RockPro64). Projects like K3s and MicroK8s enable Kubernetes on a single board, while ZFS on Linux continues to dominate for reliable storage. Expect tighter integration with GitOps workflows (Argo CD, Flux) and more AI‑in‑the‑edge experiments using TensorFlow Lite on ARM devices.
Comparison with Alternatives
| Alternative | Typical Size | Power Draw | Management Overhead | Best For |
|---|---|---|---|---|
| Full‑size rack (42U) | Large data‑center scale | >2 kW | High (multiple switches, PDUs) | Enterprise labs |
| Single‑box NAS (e.g., Synology) | 1U or desktop | 30‑80 W | Low (GUI‑driven) | Simple storage |
| Cloud‑only dev environment | No physical hardware | Variable (cloud spend) | Low (managed services) | Rapid scaling, no hardware |
| Small homelab (10U) | 1‑3 servers + networking | 150‑500 W | Medium (self‑managed) | Balanced learning & production |
Real‑World Success Stories
- The “Raspberry Pi Cluster” – A 4‑node Pi 4 cluster running K3s, serving as a personal CI runner for Go projects.
- The “Proxmox‑Powered Home Lab” – A single Intel NUC hosting Proxmox VE with 64 GB RAM, running VMs for Nextcloud, Gitea, and a pfSense firewall. Power consumption stays under 120 W.
- The “10U Rack of Dreams” – An enthusiast combined a Dell PowerEdge R640, a Synology DS1621+, and a Netgear managed switch, achieving 99.9 % uptime for a personal dev environment.
These examples illustrate that a small homelab can deliver production‑grade reliability while staying within a modest power envelope.
Prerequisites
Hardware Requirements
| Component | Recommended Model | Minimum Specs |
|---|---|---|
| Compute Node | Dell PowerEdge R640, Intel NUC 11, or AMD Ryzen 7 Mini‑PC | 2 × CPU cores, 8 GB RAM, 256 GB SSD |
| Storage | 2 × Seagate IronWolf 4 TB (ZFS mirror) | 1 TB usable, RAID‑1 or ZFS mirror |
| Network | Managed 24‑port Gigabit switch (VLAN‑capable) | 1 Gbps uplink, PoE optional |
| Power | 80 PLUS Gold 500 W PSU, optional UPS | 100‑200 W average load |
| Optional Edge Devices | Raspberry Pi 5 (4 GB) or RockPro64 | 2 GB RAM, micro‑SD 32 GB |
Operating System & Software Versions
| Software | Version (as of 2024) | Purpose |
|---|---|---|
| Proxmox VE | 8.2‑1 | Hypervisor & container host |
| Debian | 12 (bookworm) | Base OS for VMs/containers |
| Docker Engine | 26.0.0 | Container runtime |
| Docker Compose | 2.23.0 | Multi‑container orchestration |
| Ansible | 9.5.0 | IaC automation |
| Prometheus | 2.53.0 | Metrics collection |
| Grafana | 11.2.0 | Visualization |
| pfSense | 2.7.0‑RELEASE | Firewall / router |
| ZFS on Linux | 2.2.5 | Storage pool management |
Network & Security Considerations
- VLAN Segmentation – Separate management, IoT, and guest traffic.
- Firewall Rules – Default‑deny inbound, allow only required ports (e.g., 22, 80, 443).
- SSH Hardening – Disable root login, enforce key‑based authentication, change default port.
- TLS Everywhere – Use Let’s Encrypt certificates for all web services.
- Backup Network – Isolate backup traffic on a dedicated VLAN or physical NIC.
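The SSH‑hardening bullet above translates directly into a small sshd drop‑in. The following is a minimal sketch, assuming a Debian‑based host with OpenSSH; the file path and alternate port are illustrative choices, not requirements:

```
# /etc/ssh/sshd_config.d/10-hardening.conf  (example path)
Port 2222                      # non-default port; open it in the firewall first
PermitRootLogin no             # no direct root logins
PasswordAuthentication no      # key-based auth only
PubkeyAuthentication yes
```

Reload with `systemctl reload sshd` and keep an existing session open while testing, so a typo cannot lock you out.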
User Permissions
| Role | Access Level | Typical Commands |
|---|---|---|
| admin | Full root on hypervisor, sudo on VMs | apt update, pveam download |
| devops | Sudo for Docker/Ansible, read‑only on storage | docker compose up, ansible-playbook |
| monitor | Read‑only Prometheus/Grafana | curl http://localhost:9090/api/v1/query |
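One way to encode the devops role from the table is a scoped sudoers drop‑in. This is a sketch assuming the stock Debian binary paths; adjust them to the output of `command -v` on your hosts:

```
# /etc/sudoers.d/devops  (always edit via: visudo -f /etc/sudoers.d/devops)
devops ALL=(root) NOPASSWD: /usr/bin/docker, /usr/bin/ansible-playbook
```

Note that password‑less access to the Docker CLI is effectively root‑equivalent, so reserve this role for trusted operators.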
Pre‑Installation Checklist
- Verify rack space and power distribution (PDU layout).
- Confirm firmware updates for server BIOS/UEFI and network switch.
- Create a Git repository for all IaC files (Ansible playbooks, Terraform state).
- Generate SSH key pairs for password‑less login across all nodes.
- Document the IP scheme (e.g., 10.0.0.0/24 for the management network, with 10.0.10.0/24 and 10.0.20.0/24 for VLAN‑segmented services).
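The SSH key item on the checklist takes two commands; the key path, comment, and target address below are examples:

```shell
# Generate a dedicated ed25519 key pair for the lab (empty passphrase here;
# add one and use ssh-agent if you prefer)
ssh-keygen -t ed25519 -f ~/.ssh/homelab_ed25519 -C "homelab-admin" -N ""

# Push the public key to each node (repeat per host), then verify key-based login
ssh-copy-id -i ~/.ssh/homelab_ed25519.pub root@10.0.0.10
```

Once every node accepts the key, password authentication can be disabled in sshd_config.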
Installation & Setup
1. Hypervisor – Proxmox VE
1.1. Download and Verify ISO
```bash
wget https://enterprise.proxmox.com/iso/proxmox-ve_8.2-1.iso
sha256sum proxmox-ve_8.2-1.iso
# Verify against the checksum published on the Proxmox download page
```
1.2. Install Proxmox on the Compute Node
- Boot from the ISO via the server’s iLO/iDRAC console.
- Follow the installer prompts: select ZFS (RAID‑1) on the two 4 TB drives for resilience.
- Set a static management IP (e.g., 10.0.0.10/24).
1.3. Post‑Installation Network Configuration
Edit /etc/network/interfaces to enable VLANs:
```
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 10.0.0.10/24
    gateway 10.0.0.1
    dns-nameservers 1.1.1.1 8.8.8.8

# VLAN 10 – Management
auto eth0.10
iface eth0.10 inet static
    address 10.0.10.1/24

# VLAN 20 – Services
auto eth0.20
iface eth0.20 inet static
    address 10.0.20.1/24
```
Apply changes:
```bash
systemctl restart networking
# or, since Proxmox VE ships ifupdown2:
ifreload -a
```
1.4. Enable Subscription‑Free Repository
```bash
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list
# Comment out the enterprise repo as well, or apt will return 401 errors without a subscription
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
apt update && apt full-upgrade -y
```
1.5. Verify Installation
```bash
pveversion -v
# Expected output begins with something like:
# pve-manager: 8.2-1 (running kernel: 6.8.x-pve)
```
2. Container Runtime – Docker Engine
2.1. Install Docker in an LXC Container
Create an LXC container (Debian 12) via the Proxmox UI, enable the nesting feature under Options → Features (required for Docker inside LXC), then SSH into it:
```bash
apt update && apt install -y ca-certificates curl gnupg lsb-release
mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
  https://download.docker.com/linux/debian \
  $(lsb_release -cs) stable" > /etc/apt/sources.list.d/docker.list
apt update
apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
```
2.2. Verify Docker Installation
```bash
docker version
# Client: Docker Engine - Community 26.0.0
# Server: Docker Engine - Community 26.0.0
```
2.3. Test a Simple Container
```bash
docker run --rm -d --name web-test -p 8080:80 nginx:alpine
# List running containers
docker ps
```
Sample output (replace placeholders with actual values):
```
CONTAINER ID    IMAGE          COMMAND                  CREATED         STATUS         PORTS                  NAMES
$CONTAINER_ID   nginx:alpine   "/docker-entrypoint.…"   5 seconds ago   Up 4 seconds   0.0.0.0:8080->80/tcp   $CONTAINER_NAMES
```
3. Orchestration – Docker Compose
Create a docker-compose.yml for a self‑hosted Gitea instance:
```yaml
services:
  db:
    image: mariadb:10.11
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: gitea
      MYSQL_USER: gitea
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    volumes:
      - db_data:/var/lib/mysql

  gitea:
    image: gitea/gitea:1.22
    restart: unless-stopped
    depends_on:
      - db
    environment:
      USER_UID: 1000
      USER_GID: 1000
      DB_TYPE: mysql
      DB_HOST: db:3306
      DB_NAME: gitea
      DB_USER: gitea
      DB_PASSWD: ${MYSQL_PASSWORD}
    ports:
      - "3000:3000"
      - "222:22"
    volumes:
      - gitea_data:/data

volumes:
  db_data:
  gitea_data:
```
Deploy:
```bash
export MYSQL_ROOT_PASSWORD='StrongRootPass!'   # single quotes prevent history expansion of "!"
export MYSQL_PASSWORD='GiteaPass!'
docker compose up -d
```
Verify:
```bash
docker compose ps
# Both the db and gitea services should report a STATUS of "running" (or "Up …")
```
4. Automation – Ansible
Create an inventory file inventory.ini: