First Homelab - Built From Old School Computers I Bought At A Charity Auction

Introduction

The dream of building a personal DevOps sandbox—a homelab where infrastructure experimentation, automation testing, and self-hosted services can flourish without budget constraints—is often hindered by hardware costs and supply chain shortages. As a DevOps engineer with years of experience managing enterprise infrastructure, I’ve witnessed firsthand how hardware procurement bottlenecks can stall innovation. That changed when I discovered a hidden gem: local charity auctions offering decommissioned school hardware. This guide documents how I transformed a stack of Lenovo mini PCs acquired for next to nothing into a functional homelab ecosystem, demonstrating that budget constraints need not be a barrier to infrastructure experimentation.

Homelabs serve as invaluable playgrounds for DevOps professionals to hone skills in containerization, orchestration, monitoring, and infrastructure-as-code practices. By leveraging old school computers from charity auctions, we not only reduce electronic waste but also create sustainable learning environments. In this comprehensive guide, you’ll learn how to evaluate vintage hardware, design a cost-effective architecture, deploy essential services, and maintain your homelab with production-grade practices—all while maximizing the potential of previously depreciated assets.

Understanding the Topic

A homelab represents a self-hosted infrastructure environment typically located in a personal space, designed for experimentation, development, and learning. Unlike cloud environments, homelabs provide physical control over resources, unrestricted access to services, and zero subscription costs—making them ideal for DevOps professionals seeking hands-on experience with infrastructure management.

The evolution of homelabs traces back to the early days of personal computing when enthusiasts repurposed old desktops as basic servers. With the advent of virtualization technologies like VMware and VirtualBox in the mid-2000s, homelabs gained significant capabilities. The containerization revolution further democratized homelab deployment, allowing lightweight, isolated services on modest hardware. Today’s homelabs often incorporate orchestration tools like Kubernetes, monitoring stacks like Prometheus/Grafana, and automation frameworks like Ansible—all previously accessible only in enterprise environments.

Key characteristics of a well-designed homelab include:

  • Resource Efficiency: Optimizing workloads for limited hardware capabilities
  • Scalability Design: Planning for future expansion without architectural overhaul
  • Security Baselines: Implementing network segmentation, access controls, and encryption
  • Automation Integration: Using Infrastructure-as-Code (IaC) for reproducible deployments
  • Cost Sustainability: Maximizing utility from depreciated hardware

The pros of using charity auction hardware for homelabs are substantial:

  • Minimal Investment: Hardware costs often approach zero
  • Eco-Friendly: Extending device lifecycle reduces e-waste
  • Unique Challenges: Provides experience with legacy hardware constraints
  • Community Support: Schools often include documentation and configuration details

However, limitations exist:

  • Power Efficiency: Older hardware consumes more power per unit of compute
  • Noise and Heat: Requires adequate ventilation and noise management
  • Performance Bottlenecks: Limited CPU/RAM may constrain intensive workloads
  • Driver Support: Potential compatibility issues with modern software
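
To put the power-efficiency point in numbers, here is a quick back-of-the-envelope estimate. Every figure is an assumption (roughly 35 W average draw per mini PC, three nodes, $0.15/kWh); substitute your own measurements and tariff:

```shell
# All figures below are illustrative assumptions
WATTS=35                 # average draw per mini PC
NODES=3                  # number of nodes
RATE_CENTS_PER_KWH=15    # electricity price in cents/kWh

# Integer arithmetic: watts * nodes * hours/year / 1000 = kWh/year
KWH_PER_YEAR=$(( WATTS * NODES * 24 * 365 / 1000 ))
COST_PER_YEAR=$(( KWH_PER_YEAR * RATE_CENTS_PER_KWH / 100 ))
echo "~${KWH_PER_YEAR} kWh/yr, ~\$${COST_PER_YEAR}/yr"
```

With these assumptions the stack costs on the order of $10–12 a month to run—worth knowing before you commit to leaving it on 24/7.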

Real-world applications for such homelabs include:

  • Development environments mirroring production stacks
  • Testing automation pipelines before deployment
  • Self-hosting services like Nextcloud, Plex, or Home Assistant
  • Practicing disaster recovery and backup strategies
  • Experimenting with emerging technologies like edge computing

Prerequisites

Before embarking on your homelab journey, ensure you have the following components prepared:

Hardware Requirements

  • Decommissioned Computers: Minimum 4GB RAM, 64GB SSD, dual-core CPU (e.g., Lenovo ThinkCentre M700/M900)
  • Network Infrastructure: Managed switch (10/100/1000Mbps), router with VLAN support
  • Storage Solutions: External drives or NAS for shared storage
  • Power Management: UPS for critical components, surge protectors
  • Cooling: Adequate ventilation space, potentially additional fans for older hardware

Software Requirements

  • Operating System: Ubuntu Server 22.04 LTS (minimal installation)
  • Container Runtime: Docker Engine 24.0+ or Podman
  • Orchestration: Kubernetes (k3s for lightweight deployment) or Docker Swarm
  • Monitoring: Prometheus + Grafana stack
  • Automation: Ansible 2.10+ for configuration management
  • Network Tools: net-tools, iproute2, nmap

Network Configuration

  • Static IP assignment for all homelab components
  • DNS resolution (local BIND or Pi-hole)
  • Firewall rules restricting external access while allowing internal communication
  • VLAN segmentation for isolating services
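
For the static-IP item, a netplan file along these lines works on Ubuntu Server. This is a sketch: the interface name, addresses, gateway, and local DNS server are all assumptions—check your NIC name with `ip link` first.

```yaml
# /etc/netplan/00-homelab.yaml -- apply with: sudo netplan apply
# eno1 and all addresses below are placeholders
network:
  version: 2
  ethernets:
    eno1:
      dhcp4: false
      addresses: [192.168.1.10/24]
      routes:
        - to: default
          via: 192.168.1.1
      nameservers:
        addresses: [192.168.1.2]
```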

Security Considerations

  • SSH key-based authentication (passwordless login)
  • VPN access for remote management (WireGuard or OpenVPN)
  • Regular security updates and patch management
  • Network intrusion detection (Suricata or Snort)
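
A minimal sketch of the first item, key-based SSH authentication (the key path, comment, user, and node address are illustrative choices):

```shell
# Generate a dedicated Ed25519 key pair for homelab access
# (path and comment are illustrative)
mkdir -p "$HOME/.ssh"
ssh-keygen -t ed25519 -N "" -f "$HOME/.ssh/homelab_ed25519" -C "homelab-admin" -q

# Push the public key to each node, then set PasswordAuthentication no
# in /etc/ssh/sshd_config on the node and restart sshd:
#   ssh-copy-id -i "$HOME/.ssh/homelab_ed25519.pub" admin@192.168.1.10
```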

Pre-Installation Checklist

  1. Verify hardware compatibility with chosen OS/container runtime
  2. Ensure all devices can boot from network (PXE) if using centralized deployment
  3. Document existing hardware specifications and interfaces
  4. Prepare backup strategy for configuration data
  5. Test network connectivity between all components
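
Item 5 is easy to script. A small sketch (the node addresses are assumptions—substitute your own):

```shell
# Report reachability of each planned node with a single ping
check_node() {
  if ping -c 1 -W 1 "$1" > /dev/null 2>&1; then
    echo "$1 reachable"
  else
    echo "$1 UNREACHABLE"
  fi
}

# Hypothetical node addresses -- substitute your own
for IP in 192.168.1.10 192.168.1.11 192.168.1.12; do
  check_node "$IP"
done
```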

Installation & Setup

Operating System Installation

Begin with a minimal Ubuntu Server installation on each Lenovo mini PC:

# Download Ubuntu Server ISO
wget https://releases.ubuntu.com/22.04.3/ubuntu-22.04.3-live-server-amd64.iso

# Create bootable USB (replace sdX with your USB device; check with lsblk)
sudo dd if=ubuntu-22.04.3-live-server-amd64.iso of=/dev/sdX bs=4M status=progress conv=fsync

During installation:

  1. Select “Minimal installation” option
  2. Configure static IP addressing (e.g., 192.168.1.10/24)
  3. Install only OpenSSH server for remote access
  4. Disable unnecessary services during boot

Docker Engine Installation

Install Docker CE with the following commands:

# Update package index
sudo apt update

# Install prerequisites
sudo apt install -y ca-certificates curl gnupg

# Add Docker's official GPG key
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

# Set up the repository
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install Docker Engine
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Add user to docker group
sudo usermod -aG docker $USER

Docker Configuration

Create /etc/docker/daemon.json for optimized resource usage:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "storage-driver": "overlay2",
  "live-restore": true,
  "default-shm-size": "128M",
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}

Note: a per-container size cap via the overlay2.size storage option is tempting on small SSDs, but it only works when /var/lib/docker sits on an xfs filesystem mounted with pquota; on a default ext4 install it prevents the daemon from starting, so it is omitted here.

Restart Docker and verify installation:

sudo systemctl restart docker
docker --version
docker run hello-world

Kubernetes (k3s) Deployment

For multi-node orchestration, install k3s (lightweight Kubernetes):

# On master node
curl -sfL https://get.k3s.io | sh -

# On worker nodes (read the token from /var/lib/rancher/k3s/server/node-token on the master)
curl -sfL https://get.k3s.io | K3S_URL=https://MASTER_IP:6443 K3S_TOKEN=SECRET_TOKEN sh -

# Verify cluster status
kubectl get nodes
kubectl get pods -A

Initial Service Deployment

Deploy Portainer for container management:

docker run -d \
  --name=portainer \
  --restart=always \
  -p 8000:8000 \
  -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:2.21.4

Access at https://<homelab-ip>:9443

Common Installation Pitfalls

  1. Hardware Compatibility: Older Lenovo models may require kernel parameters for USB 3.0 controllers
  2. Resource Contention: Adjust Docker’s memory limits on low-RAM systems
  3. Network Conflicts: Ensure no IP address duplication between nodes
  4. Permission Issues: Verify user group membership for Docker operations
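
For pitfall 4, group membership can be checked in one line. Note that the docker group from the earlier usermod only takes effect after a fresh login (or newgrp docker):

```shell
# in_group USER GROUP -> exit 0 if USER's current groups include GROUP
in_group() { id -nG "$1" | tr ' ' '\n' | grep -qx "$2"; }

if in_group "$USER" docker; then
  echo "docker group active"
else
  echo "log out and back in (or run: newgrp docker)"
fi
```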

Configuration & Optimization

Resource Management

Configure Docker resource limits in service definitions:

# docker-compose.yml example
version: '3.8'
services:
  webapp:
    image: node:18-alpine
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
        reservations:
          cpus: '0.25'
          memory: 256M
    ports:
      - "80:8080"

Security Hardening

Implement non-root containers and image scanning:

# Build Dockerfile with non-root user
FROM node:18-alpine
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nextjs -u 1001
USER nextjs

Scan images with Trivy:

docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v $(pwd):/root/.cache/ \
  aquasec/trivy:latest image your-image:tag

Performance Optimization

For legacy hardware, prioritize these configurations:

  1. Filesystem: Use XFS instead of ext4 for large datasets
  2. Network: Enable Jumbo Frames (MTU 9000) on local network
  3. Caching: Implement Redis for database query caching
  4. Compression: Enable Brotli for web services
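
For item 2, a small sketch. The interface name is an assumption, and jumbo frames only help if every switch port on the path supports MTU 9000 as well:

```shell
# extract_mtu: pull the MTU number out of `ip link show` output
extract_mtu() { sed -n 's/.*mtu \([0-9]*\).*/\1/p'; }

# Raise the MTU, then verify (requires root; eno1 is a placeholder):
#   sudo ip link set dev eno1 mtu 9000
#   ip link show eno1 | extract_mtu    # expect 9000
```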

Monitoring Stack Deployment

Deploy Prometheus and Grafana:

# monitoring-stack/docker-compose.yml
version: '3.8'
services:
  prometheus:
    image: prom/prometheus:v2.45.0
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/etc/prometheus/console_libraries'
      - '--web.console.templates=/etc/prometheus/consoles'
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus_data:/prometheus
  grafana:
    image: grafana/grafana:10.2.0
    ports:
      - "3000:3000"
    volumes:
      - grafana_data:/var/lib/grafana

Backup Configuration

Automate Docker volume backups:

#!/bin/bash
# backup-volumes.sh
DATE=$(date +%Y%m%d)
BACKUP_DIR="/backups/docker-volumes"
mkdir -p $BACKUP_DIR

for VOLUME in $(docker volume ls -q); do
  docker run --rm \
    -v $VOLUME:/data \
    -v $BACKUP_DIR:/backup \
    alpine tar czf /backup/$VOLUME-$DATE.tar.gz -C /data .
done
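
A companion sketch for retention—the 14-day window is an assumption. Schedule both from cron, e.g. 0 2 * * * /usr/local/bin/backup-volumes.sh:

```shell
# prune_backups DIR DAYS: delete .tar.gz archives older than DAYS days
prune_backups() {
  find "$1" -name '*.tar.gz' -mtime "+$2" -delete
}

# e.g. prune_backups /backups/docker-volumes 14
```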

Usage & Operations

Common Operations

Essential Docker commands for daily management:

# List running containers
docker ps

# View container details
docker inspect $CONTAINER_ID

# Follow logs
docker logs -f $CONTAINER_NAME

# Execute commands in a running container
docker exec -it $CONTAINER_ID bash

# Stop and remove containers
docker stop $CONTAINER_NAME
docker rm $CONTAINER_NAME

# Manage images
docker pull $CONTAINER_IMAGE
docker images
docker rmi $CONTAINER_IMAGE

Monitoring Maintenance

Regular health checks:

# Check disk usage
df -h

# Monitor Docker resource usage
docker stats

# Check container status
docker inspect --format='{{.State.Status}}' $CONTAINER_ID

Scaling Considerations

Add worker nodes to the k3s cluster:

# Read the join token on the master
sudo cat /var/lib/rancher/k3s/server/node-token

# On the new worker node
curl -sfL https://get.k3s.io | K3S_URL=https://<master-ip>:6443 K3S_TOKEN=<token> sh -

Daily Management Tasks

  1. Review security alerts
  2. Update container images
  3. Monitor resource utilization
  4. Verify backup integrity
  5. Check service health endpoints
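
Item 5 can be scripted with curl. A sketch—Grafana's /api/health endpoint is real, but the address is a placeholder:

```shell
# check_health URL -> prints the HTTP status code (000 on failure)
check_health() {
  curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$1" || true
}

# e.g. check_health http://192.168.1.10:3000/api/health   # Grafana
```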

Troubleshooting

Common Issues

  1. Container Failure: Check logs with docker logs $CONTAINER_ID
  2. Resource Exhaustion: Verify limits with docker inspect $CONTAINER_ID | grep -A 10 "Resources"
  3. Network Issues: Test connectivity between containers
  4. Image Pull Errors: Verify registry access and DNS resolution

Debug Commands

Essential diagnostic tools:

# View container IP address
docker inspect $CONTAINER_ID | grep IPAddress

# Check container mounts
docker inspect $CONTAINER_ID | grep Mount

This post is licensed under CC BY 4.0 by the author.