What Do I Do With 4 Prodesks
INTRODUCTION
You’ve just scored four HP ProDesk 600 G3 mini PCs – a sysadmin’s dream windfall. These compact workhorses (typically equipped with 6th/7th-gen Intel Core i5 processors, 8-32GB DDR4 RAM, and NVMe/SATA SSD support) present an ideal playground for building enterprise-grade infrastructure at home. But now comes the critical question: How do you transform this hardware quartet into a cohesive, production-resilient system that advances your DevOps skills?
Homelabs have become the proving ground for modern infrastructure strategies. A 2023 SysAdmin Survey by Spiceworks revealed that 68% of infrastructure professionals use personal labs to test technologies before deployment. With four identical nodes, you’re uniquely positioned to explore fault-tolerant architectures that mirror real-world environments – without cloud costs or corporate red tape.
In this comprehensive guide, we’ll transform your ProDesk fleet into:
- A high-availability Proxmox VE cluster with Ceph distributed storage
- A bare-metal Kubernetes lab using Talos Linux
- An automated backup/replication target with ZFS
- A distributed services platform for CI/CD pipelines and self-hosted apps
By the end, you’ll have a battle-tested environment for experimenting with infrastructure-as-code, container orchestration, and disaster recovery – using 100% open-source tooling.
UNDERSTANDING THE TOPIC
Homelab Architectures: From Single Node to Distributed Systems
Four identical nodes unlock cluster topologies that are impractical with heterogeneous hardware. Consider these configurations:
| Pattern | Nodes Required | Use Case | ProDesk Implementation |
| --- | --- | --- | --- |
| Active/Active HA | 3+ | Proxmox VE Cluster | 3 nodes for quorum + 1 backup |
| Primary/Replica | 2+ | PostgreSQL Streaming Replication | 2 DB nodes + 2 app nodes |
| Control/Worker Split | 4+ | Kubernetes | 1 control plane + 3 workers |
| Erasure Coding | 4+ | Ceph Object Storage | 4 OSD nodes |
Technology Comparison: Virtualization vs Orchestration
Proxmox VE (Virtualization)
- Strengths: Full hardware abstraction, live migration, ZFS/Ceph integration
- Weaknesses: Overhead for containerized workloads, steeper RAM requirements
- ProDesk Fit: Ideal when running mixed workloads (VMs + LXCs)
Kubernetes (Orchestration)
- Strengths: Declarative infrastructure, cloud-native ecosystem, scalability
- Weaknesses: Steep learning curve, hardware abstraction limitations
- ProDesk Fit: Optimal for container-focused pipelines and microservices
Hybrid Approach
- Proxmox hosts K8s VMs: Combines hardware flexibility with orchestration
- Requires nested virtualization (enable Intel VT-x/AMD-V and IOMMU in the BIOS; a quick host-side check is sketched below)
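If you go the hybrid route, it is worth confirming that nested virtualization is actually exposed to guests before building Kubernetes VMs. A minimal sketch of that check on the Proxmox host, assuming the Intel-based ProDesks with the stock kvm_intel module and an example VM ID of 9000:

```bash
# Check whether nested virtualization is enabled on the host (Intel CPUs)
cat /sys/module/kvm_intel/parameters/nested   # expect Y (or 1)

# If it reports N, enable it and reload the module (no VMs may be running)
echo "options kvm-intel nested=1" > /etc/modprobe.d/kvm-intel.conf
modprobe -r kvm_intel && modprobe kvm_intel

# Pass the host CPU type through so the guest sees VT-x (VM 9000 is an example ID)
qm set 9000 --cpu host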
Real-World Cluster Sizing
Each ProDesk 600 G3 (i5-6500T, 32GB RAM, 512GB NVMe + 1TB SATA SSD) can handle:
- Proxmox: 6-8 LXC containers or 3-4 lightweight VMs per node
- Kubernetes: 15-20 pods/node (with 1GB RAM buffer for system)
- Ceph: ~400MB/s OSD throughput (SATA SSD), with 1GbE networking as the practical bottleneck (see the fio sketch below)
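These sizing numbers are rough estimates, so it is worth benchmarking a disk yourself before committing it to Ceph. A hedged sketch with fio, writing to a scratch file rather than a raw device so no partitions are touched; the mount path and test size are placeholders to adjust for your layout:

```bash
# Sequential write test against a scratch file on the SATA SSD mount
# (/mnt/sata is a placeholder path; direct=1 bypasses the page cache)
fio --name=seq-write --filename=/mnt/sata/fio-test \
    --rw=write --bs=1M --size=2G --direct=1 --numjobs=1 --ioengine=libaio

# Clean up the scratch file afterwards
rm /mnt/sata/fio-test
```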
PREREQUISITES
Hardware Preparation
- BIOS Configuration (All Nodes):
```
Advanced → Integrated Peripherals →
  [X] SR-IOV Support
  [X] VT-d
  [X] AES-NI
Power Management →
  [X] Wake on LAN
```
- Disk Layout (Recommended):
```
/dev/nvme0n1   # Proxmox OS / Kubernetes OS (128GB partition)
/dev/sda       # Ceph OSD / ZFS Mirror (1TB SSD)
```
Network Requirements
- Switch: Managed L2+ with 802.1Q VLAN support (e.g., Ubiquiti USW-Lite-8)
- IP Schema:
```
Node1: 192.168.10.11/24
Node2: 192.168.10.12/24
Node3: 192.168.10.13/24
Node4: 192.168.10.14/24
```
- Firewall Rules:
```bash
# Proxmox cluster communication (allow all intra-cluster traffic from the lab subnet)
ufw allow from 192.168.10.0/24
ufw allow 8006/tcp        # Web UI
ufw allow 5404:5405/udp   # Corosync
# Kubernetes
ufw allow 6443/tcp        # Kube API
ufw allow 2379:2380/tcp   # etcd
```
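On the Proxmox side, each node needs a bridge that matches this schema. A minimal sketch of the Node1 stanza for /etc/network/interfaces, assuming the onboard NIC is eno1 and the default vmbr0 bridge; since the installer already defines vmbr0, merge this into the existing stanza rather than appending, then apply with `ifreload -a`:

```bash
# Write a reference copy of the intended vmbr0 stanza for Node1
# (eno1, vmbr0 and the gateway address are assumptions; adjust per node)
cat > /tmp/vmbr0-example <<'EOF'
auto vmbr0
iface vmbr0 inet static
    address 192.168.10.11/24
    gateway 192.168.10.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
EOF
```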
Software Matrix
| Tool | Version | Purpose |
| --- | --- | --- |
| Proxmox VE | 8.1 | Virtualization Platform |
| Talos Linux | 1.5 | Kubernetes OS |
| Ceph Quincy | 17.2.6 | Distributed Storage |
| k0s | 1.28 | Lightweight Kubernetes |
INSTALLATION & SETUP
Proxmox VE Cluster with Ceph
- Base Installation (Repeat per Node):
```bash
# Download ISO: https://www.proxmox.com/en/downloads
# Write to USB:
dd if=proxmox-ve_8.1.iso of=/dev/sdX bs=4M status=progress
```
- Partitioning: `/dev/nvme0n1` as ext4, `/dev/sda` left as an unused disk (it becomes a Ceph OSD later)
- Hostname: `proxmox-01`, IP: `192.168.10.11`
- Cluster Initialization (First Node):
```bash
pvecm create PROXCLUSTER
```
- Join Subsequent Nodes:
```bash
pvecm add 192.168.10.11 -fingerprint [SHA-256]
```
- Ceph Deployment:
```bash
# On first node
pveceph install
pveceph init --network 192.168.10.0/24
pveceph createmon

# Add OSDs to each node
pveceph osd create /dev/sda
```
- Verify Cluster Status:
```bash
pvecm status
ceph -s
```
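With the cluster healthy, a sensible next step (not covered by the commands above) is to carve out an RBD pool and register it as Proxmox storage so VMs and containers can live on Ceph. A short sketch; the pool and storage names are arbitrary examples:

```bash
# Create an RBD pool for guest disks and expose it as Proxmox storage
# ("vm-pool" is just an example name)
pveceph pool create vm-pool
pvesm add rbd vm-pool --pool vm-pool --content images,rootdir
```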
Bare-Metal Kubernetes with Talos
- Prepare Bootstrap Node (Proxmox VM):
```bash
qm create 9000 -name talos-bootstrap -memory 2048 -cores 2 \
  -net0 virtio,bridge=vmbr0 -scsihw virtio-scsi-pci
qm importdisk 9000 talos-amd64.iso local-lvm
qm set 9000 -scsi1 local-lvm:vm-9000-disk-0
qm start 9000
```
- Generate Machine Configs:
```yaml
# controlplane.yaml
version: v1alpha1
machine:
  type: controlplane
  install:
    disk: /dev/nvme0n1
  network:
    interfaces:
      - interface: eno1
        dhcp: true
cluster:
  clusterName: prodesk-cluster
  controlPlane:
    endpoint: https://192.168.10.11:6443
```
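Rather than hand-writing the full machine config, Talos can generate matching controlplane and worker configs that you then patch with the details above. A sketch, reusing the cluster name and endpoint from the YAML:

```bash
# Generate controlplane.yaml, worker.yaml and talosconfig in the current directory
talosctl gen config prodesk-cluster https://192.168.10.11:6443

# Point talosctl at the generated client config for the commands that follow
export TALOSCONFIG=$PWD/talosconfig
```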
- Apply Configuration:
```bash
# --insecure is needed for the first apply, while the node is still in maintenance mode
talosctl apply-config --insecure -n 192.168.10.11 -f controlplane.yaml
```
- Bootstrap Cluster:
```bash
talosctl bootstrap -n 192.168.10.11
```
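Once etcd has bootstrapped, pull a kubeconfig and confirm the node registers. A quick check, assuming talosctl is already pointed at the generated talosconfig:

```bash
# Fetch an admin kubeconfig from the control plane node and merge it locally
talosctl kubeconfig -n 192.168.10.11

# The control plane node should appear (Ready may take a minute while the CNI starts)
kubectl get nodes -o wide
```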
CONFIGURATION & OPTIMIZATION
Proxmox Tuning
- ZFS ARC Size Adjustment:
```bash
# Cap the ZFS ARC at 4 GiB so guests keep most of the RAM
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
update-initramfs -u
```
- Ceph CRUSH Map Optimization:
```bash
ceph osd crush tunables optimal
ceph osd set-require-min-compat-client jewel
```
- VM CPU Pinning (For NUMA-aware ProDesks):
```bash
# Expose 1 GiB pages and NUMA topology to VM 101, then pin its vCPUs to host cores 0-3
qm set 101 --cpu host,flags=+pdpe1gb
qm set 101 --numa 1
qm set 101 --affinity 0-3
```
Kubernetes Hardening
- Pod Security Admission (a lighter per-namespace alternative is sketched after this list):
```yaml
# psps.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: PodSecurity
    configuration:
      apiVersion: pod-security.admission.config.k8s.io/v1
      kind: PodSecurityConfiguration
      defaults:
        enforce: "restricted"
        enforce-version: "latest"
```
- Network Policies:
```yaml
# default-deny.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```
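If wiring an AdmissionConfiguration file into the API server feels heavyweight for a lab, the same Pod Security levels can be applied per namespace with labels. A minimal example against a hypothetical `apps` namespace:

```bash
# Enforce the "restricted" Pod Security Standard on a single namespace
kubectl create namespace apps
kubectl label namespace apps \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/enforce-version=latest
```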
USAGE & OPERATIONS
Proxmox Day-2 Operations
- Live Migration Test:
```bash
qm migrate 101 proxmox-02 --online
```
- Backup Automation:
```
# /etc/pve/vzdump.conf
tmpdir: /mnt/backups/tmp
storage: nas-backups
mode: snapshot
compress: lzo
```
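Before relying on scheduled jobs, it is worth running a one-off backup against a single guest to confirm the storage target and compression behave as expected. VM ID 101 and the storage name mirror the examples above:

```bash
# Ad-hoc snapshot-mode backup of VM 101 to the nas-backups storage
vzdump 101 --storage nas-backups --mode snapshot --compress lzo

# List the resulting archives
pvesm list nas-backups --content backup
```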
Kubernetes Workload Deployment
- MetalLB Configuration:
```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: prodesk-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.10.200-192.168.10.220
```
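With the CRD-based MetalLB (v0.13+), the address pool on its own announces nothing; an L2Advertisement that references it is also required. A sketch, assuming MetalLB was installed into the metallb-system namespace:

```bash
# Tell MetalLB to announce the pool via layer-2 ARP/NDP
kubectl apply -f - <<'EOF'
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: prodesk-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - prodesk-pool
EOF
```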
- Persistent Storage with Ceph CSI:
```bash
kubectl create -f https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-config-map.yaml
kubectl create -f https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-provisioner-rbac.yaml
```
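Those two manifests only lay the groundwork; ceph-csi also needs its ConfigMap populated with the cluster fsid and monitors, credentials in a Secret, and a StorageClass that points at the pool. A hedged sketch of the StorageClass only; the clusterID, pool, and secret names are placeholders you must align with your ceph-csi deployment:

```bash
# StorageClass backed by the Proxmox Ceph pool via the RBD CSI driver
# (replace <ceph-fsid> with the output of `ceph fsid`; secret names are assumptions)
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <ceph-fsid>
  pool: vm-pool
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Delete
allowVolumeExpansion: true
EOF
```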
TROUBLESHOOTING
Common Cluster Issues
- Proxmox Corosync Problems:
```bash
systemctl status corosync
corosync-cmapctl | grep members
pvecm expected 3   # Reset quorum
```
- Ceph HEALTH_WARN:
```bash
ceph health detail
ceph osd df tree
ceph osd pool autoscale-status
```
- Kubernetes Node Not Ready:
```bash
kubectl describe node proxmox-03
journalctl -u kubelet -n 100 --no-pager
talosctl logs -k kubelet
```
CONCLUSION
Four ProDesk 600 G3s provide an ideal balance between capability and manageability for serious homelab experimentation. By implementing either a Proxmox HA cluster or bare-metal Kubernetes environment (or both via nesting), you’ve created a platform to safely test:
- Infrastructure-as-Code workflows with Terraform/Ansible
- GitOps pipelines using Flux or Argo CD
- Distributed storage systems like Ceph/GlusterFS
- High-availability service topologies
To deepen your expertise, keep treating the lab like production. The true value lies not in the hardware itself, but in the operational discipline gained from maintaining production-like systems on constrained resources – skills that translate directly to enterprise environments.