What Would You Do With This: Optimizing Mixed Hardware Homelabs for DevOps Excellence

Introduction

When staring at a collection of aging enterprise hardware like Lenovo M920q Tiny PCs and Dell Optiplex 9020 units, many engineers face the same challenge: how to transform these decommissioned workhorses into a powerful DevOps training ground. This hardware mix - a relatively modern compact machine alongside true legacy systems - presents both constraints and opportunities for infrastructure automation practice.

Homelabs have become essential proving grounds for DevOps professionals. A 2023 DevOps Skills Survey found that 68% of engineers use personal labs to test infrastructure-as-code, container orchestration, and CI/CD pipelines. The heterogeneous hardware described in our scenario (M920q with PCIe expansion capabilities alongside 4th-gen Intel Optiplex units) mirrors real-world enterprise environments where teams must optimize mixed infrastructure.

In this comprehensive guide, we’ll explore how to:

  1. Architect services across asymmetric hardware generations
  2. Leverage Proxmox’s virtualization capabilities for workload isolation
  3. Implement infrastructure-as-code practices on constrained hardware
  4. Build a multi-tier environment mimicking production systems
  5. Implement monitoring and automation across diverse nodes

Whether you’re preparing for cloud certifications or building production-like test environments, these techniques will transform your hardware collection into a potent learning platform.

Understanding Homelab Hardware Capabilities

Hardware Analysis: Generational Differences

Our hardware pool consists of three distinct classes:

| Model | CPU | RAM | PCIe Slots | TDP | Release Year |
| --- | --- | --- | --- | --- | --- |
| Lenovo M920q | i5-8500T (8th gen) | 16GB | 1 x x16 | 35W | 2018 |
| Optiplex 9020 | i7-4790 (4th gen) | 8GB | 1 x x16 | 84W | 2014 |
| Optiplex 9020 | i5-4590 (4th gen) | 16GB | 1 x x16 | 84W | 2014 |

Key technical considerations:

  • PCIe Generations: M920q supports PCIe 3.0 (~985MB/s per lane) vs the 9020’s PCIe 2.0 (500MB/s per lane)
  • Memory Bandwidth: DDR4-2666 (M920q) vs DDR3-1600 (9020) - roughly 42.7GB/s vs 25.6GB/s theoretical dual-channel maximum
  • CPU Features: both generations support AVX2 and AES-NI (AVX2 arrived with Haswell), so the 8th-gen part’s real advantage is six cores versus four plus higher IPC
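
If you want to confirm what each node actually exposes before committing to a placement plan, a quick check with standard Linux tools (nothing Proxmox-specific) is enough:

# List the CPU feature flags relevant to workload placement
lscpu | grep -oE 'avx2|aes|sha_ni' | sort -u

# Report the configured memory speed (field name varies across dmidecode versions)
dmidecode -t memory | grep -iE 'configured (memory|clock) speed' | sort -u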

Proxmox Virtualization Platform

Proxmox VE (Virtual Environment) is an open-source server virtualization platform combining KVM hypervisor and LXC containerization. Its web-based management interface and API make it ideal for homelabs:

# Check Proxmox version and kernel
pveversion -v

Key features for our hardware:

  • PCI Passthrough: Critical for dedicating expansion cards to VMs
  • ZFS Support: Enables advanced storage configurations on limited disks
  • Cluster Capabilities: Allows pooling dissimilar hardware (with caveats)

Workload Suitability Analysis

Not all workloads perform equally on mixed hardware. Consider these benchmarks from Phoronix Test Suite runs:

| Workload | i5-8500T Score | i7-4790 Score | Difference |
| --- | --- | --- | --- |
| NGINX Requests/sec | 38,201 | 28,743 | +33% |
| PostgreSQL Transactions | 1,892 | 1,305 | +45% |
| FFmpeg Encoding (1080p) | 42 fps | 33 fps | +27% |

These differences dictate workload placement strategies discussed in later sections.
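
To reproduce a comparable (if rougher) CPU comparison on your own nodes without the full Phoronix Test Suite, sysbench gives a quick relative number:

# Quick relative CPU benchmark; compare "events per second" across nodes
apt install -y sysbench
sysbench cpu --threads=$(nproc) --time=30 run | grep "events per second"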

Prerequisites for Homelab Implementation

Hardware Preparation Checklist

  1. BIOS Updates:
    # Check current BIOS version
    dmidecode -t bios
    

    Update to latest versions from Lenovo and Dell

  2. BIOS Settings:
    • Enable Virtualization Technology (VT-x)
    • Disable Secure Boot for PCI passthrough
    • Configure power management to “Maximum Performance”
  3. Hardware Modifications:
    • M920q: Install PCIe riser for add-in cards
    • Optiplex: Replace mechanical drives with SSDs
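
Once the firmware settings are in place, it's worth confirming from Linux that virtualization is actually exposed before installing Proxmox:

# Non-zero count means the VT-x (vmx) flag is visible to the OS
grep -c vmx /proc/cpuinfo

# The KVM modules should load once Proxmox (or any recent kernel) is running
lsmod | grep kvm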

Network Requirements

A dedicated lab network is essential. Recommended configuration:

# /etc/netplan/01-lab-net.yaml
network:
  version: 2
  ethernets:
    eno1:
      addresses: [192.168.10.2/24]
      routes:
        - to: default
          via: 192.168.10.1
      nameservers:
        addresses: [192.168.10.53]

Key considerations:

  • Separate VLAN for management traffic
  • Jumbo frames (MTU 9000) for storage networks
  • Reverse DNS configured for all nodes
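
Proxmox VE nodes manage networking through ifupdown2 (/etc/network/interfaces) rather than netplan, so on the nodes themselves the same addressing plus a VLAN-aware bridge and a jumbo-frame storage VLAN might look like the following sketch (interface names and VLAN 20 are assumptions, and the switch must carry MTU 9000):

# /etc/network/interfaces (Proxmox node)
auto eno1
iface eno1 inet manual
    mtu 9000

auto vmbr0
iface vmbr0 inet static
    address 192.168.10.2/24
    gateway 192.168.10.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
    mtu 9000

# Storage VLAN carried on the same bridge, jumbo frames end to end
auto vmbr0.20
iface vmbr0.20 inet static
    address 192.168.20.2/24
    mtu 9000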

Security Foundations

Implement these before exposing services:

  1. SSH Key Authentication:
    # Generate ED25519 key
    ssh-keygen -t ed25519 -a 100 -f ~/.ssh/proxmox_admin
    
  2. Firewall Baseline:
    # UFW basic rules
    ufw default deny incoming
    ufw allow proto tcp from 192.168.10.0/24 to any port 22
    ufw enable
    
  3. Encrypted Backups:
    # Generate backup encryption key
    openssl rand -base64 32 > /etc/proxmox/backup.key
    chmod 600 /etc/proxmox/backup.key
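
If you prefer Proxmox Backup Server's native client-side encryption over a raw OpenSSL key, the PBS client can generate and store the key itself; a minimal sketch (the output path is an arbitrary choice):

# Generate a PBS client encryption key without a passphrase-derived KDF
proxmox-backup-client key create /root/pbs-encryption.json --kdf none
chmod 600 /root/pbs-encryption.json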
    

Installation & Configuration Walkthrough

Proxmox Cluster Deployment

  1. Install Proxmox on all nodes:
    # Download latest ISO
    wget https://enterprise.proxmox.com/iso/proxmox-ve_8.1-1.iso
       
    # Verify checksum
    sha512sum proxmox-ve_8.1-1.iso
    
  2. Post-install configuration on first node:
    # Replace enterprise repo with community
    rm /etc/apt/sources.list.d/pve-enterprise.list
    echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-no-enterprise.list
       
    # Update and upgrade
    apt update && apt dist-upgrade -y
    
  3. Create cluster:
    # On first node
    pvecm create LAB-CLUSTER
    
    # On subsequent nodes
    pvecm add 192.168.10.2
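
Once the last node has joined, confirm membership and quorum before moving on:

# Verify all three nodes joined and the cluster is quorate
pvecm status
pvecm nodes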
    

Asymmetric Cluster Configuration

Our mixed hardware requires special cluster tuning:

# /etc/pve/corosync.conf
quorum {
    provider: corosync_votequorum
    expected_votes: 5
    two_node: 0
}

nodelist {
    node {
        name: m920q-01
        nodeid: 1
        quorum_votes: 3
        ring0_addr: 192.168.10.2
    }
    node {
        name: optiplex-i7
        nodeid: 2
        quorum_votes: 1
        ring0_addr: 192.168.10.3
    }
    node {
        name: optiplex-i5
        nodeid: 3
        quorum_votes: 1
        ring0_addr: 192.168.10.4
    }
}

This configuration weights votes toward the most capable node: holding three of the five total votes, the M920q keeps the cluster quorate even while both Optiplex nodes are down for maintenance. The trade-off is that the cluster loses quorum whenever the M920q itself is offline, so schedule its maintenance last.

PCIe Expansion Implementation

For the M920q’s PCIe slot, we’ll implement an HBA for shared storage:

  1. Install LSI 9207-8i HBA in M920q
  2. Configure PCI passthrough:
    # Identify PCI device
    lspci -nn | grep LSI
    04:00.0 Serial Attached SCSI controller [0107]: Broadcom / LSI SAS2308 PCI-Express Fusion-MPT SAS-2 [1000:0087] (rev 05)
    
    # Add to kernel modules
    echo "options vfio-pci ids=1000:0087" > /etc/modprobe.d/vfio.conf
    update-initramfs -u -k all
    
  3. Attach to VM:
    qm set 100 -hostpci0 04:00.0
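
Passthrough only behaves well when the HBA sits in its own IOMMU group; a small bash helper (plain sysfs inspection, not a Proxmox command) makes that easy to confirm:

# Print each PCI device with its IOMMU group; 04:00.0 should not share a group
for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d#*/iommu_groups/}; g=${g%%/*}
    printf 'IOMMU group %s: %s\n' "$g" "$(lspci -nns ${d##*/})"
done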
    

Configuration & Optimization Strategies

Workload Placement Strategy

Proxmox has no first-class “resource class” object, so approximate one with HA groups (pinning workloads to node tiers) and per-VM CPU weights:

# Define HA groups that pin workloads to each hardware tier
ha-manager groupadd performance --nodes m920q-01
ha-manager groupadd standard --nodes optiplex-i7,optiplex-i5

# Weight CPU shares and size memory per VM
# (higher cpuunits wins more CPU time under contention)
qm set 100 --cpuunits 2048 --memory 14336
qm set 101 --cpuunits 1024 --memory 7168

Assign HA-managed VMs to the matching group:

ha-manager add vm:100 --group performance
ha-manager add vm:101 --group standard

Storage Architecture

Implement a tiered storage solution:

  1. M920q: NVMe ZFS mirror for high-IOPS workloads
    zpool create fastpool mirror /dev/nvme0n1 /dev/nvme1n1
    zfs create -o recordsize=16K -o compression=lz4 fastpool/postgres
    
  2. Optiplex: SSD Ceph cluster for bulk storage
    # Install Ceph packages on each Optiplex node (Reef ships with PVE 8.1)
    pveceph install --version reef
    # Initialise the Ceph config once, from a single node
    pveceph init --network 192.168.20.0/24
    # Create an OSD from the spare SSD on every OSD node
    pveceph osd create /dev/sdb
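
Once both tiers exist, register them with Proxmox so VMs can actually consume them, and sanity-check Ceph health (the storage ID below simply reuses the pool name):

# Expose the ZFS pool to Proxmox as VM storage
pvesm add zfspool fastpool --pool fastpool --content images,rootdir

# Verify Ceph reaches HEALTH_OK once the OSDs are in
ceph -s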
    

Performance Tuning

For 4th-gen CPUs, enable performance governors:

# /etc/systemd/system/cpu-performance.service
[Unit]
Description=CPU Performance Governor

[Service]
Type=oneshot
ExecStart=/usr/bin/cpupower frequency-set -g performance

[Install]
WantedBy=multi-user.target
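
The unit only takes effect once enabled, and on Debian/Proxmox the cpupower binary ships in the linux-cpupower package:

apt install -y linux-cpupower
systemctl daemon-reload
systemctl enable --now cpu-performance.service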

Kernel parameters for memory-constrained nodes:

# /etc/sysctl.d/10-homelab.conf
vm.swappiness=10
vm.vfs_cache_pressure=50
vm.dirty_ratio=20
vm.dirty_background_ratio=5
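
Apply the settings without rebooting:

sysctl --system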

Usage Patterns & Operational Procedures

Infrastructure-as-Code Implementation

Use Terraform for Proxmox management:

# main.tf
resource "proxmox_vm_qemu" "k8s-master" {
  name        = "k8s-master-01"
  target_node = "m920q-01"
  clone       = "ubuntu-jammy"
  cores       = 2
  memory      = 4096
  scsihw      = "virtio-scsi-single"
  
  disk {
    size    = "30G"
    storage = "fastpool"
    type    = "scsi"
  }
  
  network {
    model  = "virtio"
    bridge = "vmbr0"
  }
  
  lifecycle {
    ignore_changes = [
      network,
    ]
  }
}
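
This snippet assumes a Proxmox provider (commonly the community Telmate/proxmox provider authenticated with an API token) is configured elsewhere; applying it is the usual Terraform loop:

# Review and apply the plan against the Proxmox API
terraform init
terraform plan -out=lab.plan
terraform apply lab.plan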

Monitoring Stack Deployment

Prometheus configuration for mixed clusters:

# prometheus.yml
scrape_configs:
  - job_name: 'proxmox'
    static_configs:
      - targets: ['192.168.10.2:9221', '192.168.10.3:9221', '192.168.10.4:9221']
    
  - job_name: 'node'
    static_configs:
      - targets: ['192.168.10.2:9100', '192.168.10.3:9100', '192.168.10.4:9100']
    metric_relabel_configs:
      - source_labels: [__name__]
        regex: '(node_memory_MemAvailable_bytes|node_cpu_seconds_total)'
        action: keep
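
The node targets assume prometheus-node-exporter on every host (port 9100); the 9221 targets assume a Proxmox API exporter such as the community prometheus-pve-exporter project, which is installed separately:

# Host metrics on :9100 (Debian/Proxmox package)
apt install -y prometheus-node-exporter

# Proxmox API metrics on :9221 - e.g. the community prometheus-pve-exporter
# project (typically installed via pip in a venv or run as a container)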

Backup Strategy

Proxmox Backup Server configuration:

# The PBS client has no built-in scheduler; drive the nightly run from cron
# /etc/cron.d/pve-config-backup:
# 0 1 * * * root proxmox-backup-client backup pve-config.pxar:/etc/pve \
#     --repository backup-storage --ns homelab

# Retention policy for this host's backup group
proxmox-backup-client prune host/$(hostname) \
    --repository backup-storage \
    --ns homelab \
    --keep-daily 7 \
    --keep-weekly 4 \
    --keep-monthly 12
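
To confirm backups are actually landing, list the snapshot groups in the repository:

proxmox-backup-client list --repository backup-storage --ns homelab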

Troubleshooting Common Issues

PCI Passthrough Errors

Symptoms:

  • VM fails to start with “Cannot allocate memory” errors
  • dmesg shows “VFIO - Cannot reset device”

Solutions:

  1. Verify IOMMU is enabled:
    dmesg | grep -i iommu
    # On Intel systems, look for "DMAR: IOMMU enabled"
    
  2. Check interrupt remapping:
    dmesg | grep "remapping"
    # Look for "Enabled IRQ remapping in x2apic mode"
    
  3. Add kernel parameters:
    # /etc/default/grub
    GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on iommu=pt"
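
Grub changes only take effect after regenerating the config and rebooting:

update-grub
reboot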
    

Cluster Communication Issues

Diagnostic commands:

# Check corosync status
corosync-cmapctl | grep members

# Verify quorum
pvecm status

# Test network latency
fping -C 10 192.168.10.2 192.168.10.3 192.168.10.4

Common fixes:

  • Increase corosync timeout:
    # /etc/pve/corosync.conf
    totem {
        token: 30000
        token_retransmits_before_loss_const: 10
    }
    
  • Configure dedicated corosync network
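
One caveat when editing corosync.conf by hand: the file carries a config_version in its totem block that must be incremented, or the change is ignored. After the edit, confirm the new token timeout is live:

# Check the runtime token value corosync is actually using
corosync-cmapctl | grep totem.token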

Conclusion

Transforming mixed-generation hardware into a cohesive DevOps lab requires careful planning but yields immense educational value. By implementing the strategies outlined:

  • The M920q becomes a high-performance core for database and CI/CD workloads
  • Optiplex units provide cost-effective capacity for distributed storage and ephemeral workloads
  • Proxmox clustering enables enterprise-grade features on consumer hardware

Next steps for lab expansion:

  1. Implement Kubernetes with kubeadm across all nodes
  2. Build GitLab CI/CD pipelines testing infrastructure changes
  3. Deploy distributed monitoring with Thanos or VictoriaMetrics

The true value of homelabs lies not in raw hardware capabilities, but in learning to architect resilient systems within constraints - a skill directly transferable to production environments.

This post is licensed under CC BY 4.0 by the author.