Picked Up 6 Lenovo ThinkCentre M910q Tiny PCs for $100 Total: The Ultimate Homelab Infrastructure Guide
Introduction
Finding enterprise-grade hardware at bargain prices is the holy grail for homelab enthusiasts and DevOps practitioners. When you score 6 Lenovo ThinkCentre M910q Tiny PCs with 16GB RAM and 1TB SSDs for just $100 total, you’re holding the keys to a powerful distributed system that rivals commercial cloud infrastructure in flexibility at a fraction of the cost.
In modern DevOps and infrastructure management, the ability to design, deploy, and maintain distributed systems is no longer optional - it’s a core competency. These compact powerhouses (measuring just 7.2” x 7.0” x 1.4”) present unique opportunities for building resilient, self-hosted environments that mirror production setups while consuming minimal power (35W TDP).
This comprehensive guide will transform your $100 hardware windfall into a professional-grade infrastructure platform. We’ll explore:
- Cluster orchestration with Kubernetes and Docker Swarm
- Hyperconverged infrastructure via Proxmox VE
- Automated provisioning using Terraform and Ansible
- Enterprise network design for mini-PC clusters
- Monitoring and maintenance best practices
Whether you’re preparing for CKA certification, testing cloud-native applications, or building a self-hosted SaaS platform, this hardware foundation paired with proper configuration delivers unparalleled learning and operational value.
Understanding Homelab Infrastructure Design
Why Mini-PC Clusters Matter in DevOps
Modern infrastructure demands have shifted from monolithic servers to distributed systems. The Lenovo M910q Tiny's specifications make these machines ideal for that paradigm:
| Specification | Value | DevOps Relevance |
|---|---|---|
| CPU | Intel Core i5-6500T (4c/4t) | Adequate for containerized workloads |
| RAM | 16GB DDR4 | Multiple Node.js/Python/Java containers |
| Storage | 1TB SSD | Fast container storage (<1ms latency) |
| Networking | Intel I219-LM Gigabit | Suitable for East-West traffic |
| TDP | 35W | 24/7 operation at ~$3/month/node |
Architectural Considerations
When working with multiple identical nodes, we must choose between:
- Hyperconverged Infrastructure (Proxmox VE + Ceph cluster)
  - Combines compute and storage resources
  - Enables live migration and high availability (HA)
  - Requires 3+ nodes for quorum
- Container Orchestration (Kubernetes/Docker Swarm)
  - Optimized for microservices
  - Built-in service discovery
  - Declarative configuration
- Hybrid Approach
  - Proxmox host OS with nested Kubernetes
  - Maximizes hardware utilization
  - Adds operational complexity
Performance Characteristics
Benchmark comparisons show how these mini PCs handle real workloads:
| Workload Type | Single Node Capacity | 6-Node Cluster Capacity |
|---|---|---|
| Web Servers | 150 concurrent (Nginx) | 900 concurrent |
| Databases | 5K TPS (PostgreSQL) | 30K TPS (sharded) |
| CI/CD Workers | 4 parallel jobs | 24 parallel jobs |
| Memory Cache | 12GB usable (Redis) | 72GB distributed |
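These figures will vary with tuning and workload mix. If you want to validate them on your own nodes, standard load generators are a reasonable starting point; the commands below are a sketch, with the target URL and database name as placeholder assumptions:

```
# HTTP concurrency against one Nginx node (adjust the URL to your service)
wrk -t4 -c150 -d60s http://192.168.30.10/

# PostgreSQL throughput on one node (initialize first with: pgbench -i benchdb)
pgbench -c 16 -j 4 -T 60 benchdb
```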
Prerequisites
Hardware Preparation
Before software deployment, complete these hardware checks:
- BIOS Configuration (all nodes)
  - Enable VT-x/VT-d (virtualization)
  - Activate Intel vPro/AMT for remote management
  - Set power policy to "Always On"
  - Disable Secure Boot for flexibility
- Physical Connectivity

```
# Verify network link speed and duplex
ethtool eth0 | grep -E 'Speed|Duplex'

# Check disk health for reallocated or pending sectors
smartctl -a /dev/sda | grep -i 'reallocated\|pending'
```

- Power Management
  - Use a quality UPS (APC 1500VA recommended)
  - Configure power sequencing to prevent boot storms
Software Requirements
- Base OS Options:
  - Proxmox VE 8.1 (Debian 12 base)
  - Ubuntu Server 22.04 LTS
  - Rocky Linux 9.3
- Cluster Software:
  - Kubernetes 1.28+ (with Cilium CNI)
  - Docker Swarm 24.0+
  - HashiCorp Nomad 1.6+
- Management Tools:
  - Ansible Core 2.15+
  - Terraform 1.6+
  - FluxCD 2.1+
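Since Ansible anchors the management toolchain above, a minimal inventory for the six nodes is worth sketching now; the hostnames and the 192.168.10.0/24 management addresses are assumptions that match the VLAN plan later in this guide:

```
# inventory.ini: the six M910q nodes on the management VLAN
[proxmox_nodes]
m910q-0 ansible_host=192.168.10.10
m910q-1 ansible_host=192.168.10.11
m910q-2 ansible_host=192.168.10.12
m910q-3 ansible_host=192.168.10.13
m910q-4 ansible_host=192.168.10.14
m910q-5 ansible_host=192.168.10.15

[proxmox_nodes:vars]
ansible_user=devops-admin
```

A quick connectivity test: `ansible -i inventory.ini proxmox_nodes -m ping`.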
Installation & Configuration
Proxmox VE Cluster Setup
Step 1: Install Proxmox on First Node
```
# Download the installer ISO
wget https://enterprise.proxmox.com/iso/proxmox-ve_8.1-1.iso

# Create a bootable USB (on Linux; replace /dev/sdX with your USB device)
dd if=proxmox-ve_8.1-1.iso of=/dev/sdX bs=4M status=progress conv=fsync
```
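Before flashing, it's worth comparing the ISO against the SHA-256 sum published on the Proxmox download page (the expected value differs per release, so check the site):

```
# Verify the download against the published checksum
sha256sum proxmox-ve_8.1-1.iso
```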
Step 2: Create Cluster Foundation
```
# Initialize the cluster on the first node
pvecm create CLUSTER-01

# Run on each additional node, passing an existing cluster member's IP
pvecm add IP_NODE_1
```
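Once all six nodes have joined, confirm membership and quorum from any node:

```
# Expect 6 nodes and "Quorate: Yes"
pvecm status
pvecm nodes
```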
Step 3: Configure Ceph Storage
```
# /etc/pve/ceph.conf (minimal)
[global]
osd_pool_default_size = 3
mon_allow_pool_delete = true

[osd]
osd_memory_target = 4G
```
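With the config in place, one possible bring-up uses Proxmox's pveceph wrapper. This sketch assumes the storage VLAN from the network section below and a spare disk or partition per node for the OSD (the M910q's single SSD may force you to carve out a partition instead):

```
# One-time: install Ceph and bind it to the storage network
pveceph install
pveceph init --network 192.168.20.0/24

# Per node: create a monitor, then an OSD (destructive; verify the device path)
pveceph mon create
pveceph osd create /dev/sdb
```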
Kubernetes Cluster Deployment
Using kubeadm (Production-Grade)
```
# Initialize control plane
kubeadm init --pod-network-cidr=10.244.0.0/16 \
  --control-plane-endpoint "LOAD_BALANCER_IP:6443"

# Join worker nodes
kubeadm join LOAD_BALANCER_IP:6443 --token <token> \
  --discovery-token-ca-cert-hash <hash>
```
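kubeadm prints the exact join command at the end of init; the standard post-init kubeconfig setup and a quick sanity check look like this:

```
# Make kubectl usable for your admin user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Nodes report NotReady until a CNI is installed (next step)
kubectl get nodes
```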
Cilium CNI Installation
```
# Add the Cilium chart repository (one-time)
helm repo add cilium https://helm.cilium.io/

helm install cilium cilium/cilium --version 1.14.3 \
  --namespace kube-system \
  --set ipam.mode=kubernetes \
  --set tunnel=vxlan
```
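If you have the optional cilium CLI installed, a one-line health check confirms the rollout:

```
# Block until all Cilium components report healthy
cilium status --wait
```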
Docker Swarm Alternative
Initialize Swarm Cluster
```
# On manager node
docker swarm init --advertise-addr 192.168.1.10

# Join workers
docker swarm join --token SWMTKN-1-... 192.168.1.10:2377
```
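From the manager, verify that all six nodes registered:

```
# Expect one Leader plus five workers (or three managers if you want HA)
docker node ls
```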
Service Deployment Example
```
# docker-compose.yml
version: '3.8'
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 6
      resources:
        limits:
          memory: 256M
    ports:
      - "80:80"
```
Configuration & Optimization
Network Architecture
Implement this VLAN structure for production-like isolation:
| VLAN ID | Purpose | Subnet | Access |
|---|---|---|---|
| 10 | Management | 192.168.10.0/24 | SSH, API |
| 20 | Storage | 192.168.20.0/24 | Ceph, iSCSI |
| 30 | Application | 192.168.30.0/24 | Frontends |
| 40 | DMZ | 192.168.40.0/24 | Public apps |
Configure Linux Bridges (Proxmox)
```
# /etc/network/interfaces
auto vmbr10
iface vmbr10 inet static
    address 192.168.10.2/24
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
```
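To trunk all four VLANs from the table above over each node's single NIC, a VLAN-aware bridge is one option; this sketch assumes your switch presents VLANs 10-40 tagged to every node:

```
# /etc/network/interfaces (VLAN-aware variant)
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 10 20 30 40
```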
Performance Tuning
SSD Optimization (/etc/fstab)
```
UUID=... / ext4 defaults,noatime,nodiratime,discard 0 1
```
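Inline discard adds per-write overhead on some SSDs; a common alternative is the weekly trim timer that ships with systemd:

```
# Trim on a schedule instead of using the 'discard' mount option
systemctl enable --now fstrim.timer
```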
Kernel Parameters (/etc/sysctl.conf)
```
# Increase TCP buffers
net.core.rmem_max=16777216
net.core.wmem_max=16777216

# Ceph optimization
vm.swappiness=10
vm.vfs_cache_pressure=50

# Apply without rebooting: sysctl -p
```
Security Hardening
SSH Configuration (/etc/ssh/sshd_config)
```
PermitRootLogin no
PasswordAuthentication no
AllowUsers devops-admin
ClientAliveInterval 300
```
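Validate the syntax before reloading so a typo cannot lock you out of a headless node:

```
# Test the config, then reload only if it parses cleanly
sshd -t && systemctl reload sshd
```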
Kubernetes Pod Security
PodSecurityPolicy was removed in Kubernetes 1.25, so on the 1.28+ clusters targeted here, enforce the equivalent restrictions with the built-in Pod Security Admission controller via namespace labels (the production namespace name is just an example):

```
# Enforce the "restricted" Pod Security Standard on a namespace
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted
```
Usage & Operations
Cluster Monitoring Stack
Deploy Prometheus + Grafana
```
# Add the chart repository (one-time)
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

helm install prometheus prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace \
  --set alertmanager.enabled=false \
  --set grafana.adminPassword='StrongPass!2024'
```
Key Metrics to Track
| Metric | Warning Threshold | Critical Threshold |
|---|---|---|
| Node CPU Usage | 70% | 90% |
| Memory Pressure | 80% | 95% |
| Storage Space | 75% | 85% |
| Network Saturation | 60% | 80% |
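As a concrete example, the node CPU thresholds map to a PrometheusRule expression like the following (assuming the node-exporter metrics that kube-prometheus-stack ships by default):

```
# Alert when a node's CPU usage crosses the critical threshold
- alert: NodeCPUCritical
  expr: 100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 90
  for: 10m
  labels:
    severity: critical
```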
Automated Provisioning
Terraform Example (Proxmox VM)
```
resource "proxmox_vm_qemu" "k8s_worker" {
  count       = 3
  name        = "worker-${count.index}.lab"
  target_node = "m910q-${count.index % 6}"
  os_type     = "cloud-init"
  cores       = 4
  memory      = 12288

  disk {
    type    = "scsi"
    storage = "ceph-ssd"
    size    = "50G"
  }

  network {
    model  = "virtio"
    bridge = "vmbr30"
  }
}
```
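The proxmox_vm_qemu resource comes from the community Telmate/proxmox provider, so Terraform also needs a provider block along these lines; the API URL and token ID below are placeholders for your environment:

```
terraform {
  required_providers {
    proxmox = {
      source  = "Telmate/proxmox"
      version = "~> 2.9"
    }
  }
}

variable "pm_token_secret" {
  type      = string
  sensitive = true
}

provider "proxmox" {
  pm_api_url          = "https://192.168.10.2:8006/api2/json"
  pm_api_token_id     = "terraform@pve!provisioner"
  pm_api_token_secret = var.pm_token_secret
}
```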
Troubleshooting
Common Issues and Solutions
Problem: Ceph Cluster Health WARN
```
# Check OSD status
ceph osd tree

# Typical fix
ceph osd out osd.$OSD_NUM
systemctl restart ceph-osd@$OSD_NUM
ceph osd in osd.$OSD_NUM
```
Problem: Kubernetes Node Not Ready
```
# Inspect kubelet logs
journalctl -u kubelet -n 100 --no-pager

# Common resolution
systemctl restart containerd kubelet
```
Problem: Docker Swarm Network Connectivity
```
# Check overlay network
docker network inspect $NETWORK_ID

# Last-resort reset of swarm networking (disruptive: flushes firewall rules
# and removes the node from the swarm, so it must rejoin afterwards)
docker swarm leave --force
iptables --flush
systemctl restart docker
```
Conclusion
Your $100 Lenovo M910q cluster now represents a formidable infrastructure platform capable of hosting production-grade workloads. By implementing the architectures and configurations outlined in this guide, you’ve created:
- A hyperconverged Proxmox VE cluster with enterprise storage
- A production-style Kubernetes environment built with kubeadm
- An automation-ready foundation with Terraform/Ansible
- A monitored and secured system following DevOps best practices
The true value of this homelab isn’t just in the cost savings - it’s in the operational experience gained managing distributed systems. Use this platform to experiment with service meshes (Istio, Linkerd), CI/CD pipelines (Argo CD, Tekton), and infrastructure-as-code patterns that will accelerate your professional DevOps journey.