Saved 32 SFF And 5 Minis From Being Scrapped: Next Steps For A Novice

Introduction

When I rescued 32 small form factor (SFF) computers and 5 mini PCs from being scrapped, I knew I had stumbled upon a goldmine for homelab enthusiasts and DevOps practitioners. These compact machines, often discarded by businesses upgrading their infrastructure, represent incredible value for learning, experimentation, and building a robust homelab environment.

This comprehensive guide will walk you through transforming your rescued hardware into a powerful infrastructure playground. Whether you’re interested in self-hosting services, learning DevOps practices, or simply want to breathe new life into these machines, you’ll find actionable steps to maximize their potential.

The beauty of working with SFF and mini PCs lies in their versatility—they’re power-efficient, space-saving, and capable of running various services simultaneously. From personal cloud storage to development environments and even small-scale production workloads, these machines can handle it all with the right configuration.

Understanding Your Hardware

Small Form Factor (SFF) computers and mini PCs are compact computing solutions that pack impressive capabilities into tiny footprints. Your rescued fleet likely includes various generations of Intel NUCs, Dell Optiplex Micro, HP EliteDesk Mini, or similar enterprise-grade compact systems.

These machines typically feature:

  • Low power consumption (15-65W under load)
  • Multiple USB ports and display outputs
  • M.2 and SATA storage options
  • Up to 64GB DDR4 RAM support
  • Integrated or dedicated graphics
  • Multiple NICs for networking flexibility

The diversity in your collection presents both opportunities and challenges. Different CPU architectures (Intel vs AMD), varying RAM capacities, and storage options mean you’ll need to strategically allocate workloads based on each machine’s strengths.

Prerequisites and Initial Assessment

Before diving into deployment, you need to inventory and assess your hardware. This critical first step will determine how to best utilize your rescued fleet.

Hardware Assessment Checklist

Start by examining each machine’s specifications:

# Check CPU information
lscpu | grep -E 'Model name|CPU\(s\)|Thread|Core'

# Check RAM details
free -h
sudo dmidecode --type memory | grep -E 'Size|Type|Speed'

# Check storage information
lsblk
sudo fdisk -l

Document the following for each machine:

  • CPU model and core count
  • RAM capacity and type
  • Storage devices and capacities
  • Network interface details
  • Available ports and expansion options
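
With 37 machines, tracking this by hand gets tedious fast. Here's a rough sketch of a per-machine inventory snippet that appends one summary row to a shared CSV (the `inventory.csv` filename and field choices are just examples):

```shell
# Append one summary row per machine to a shared CSV:
# hostname, CPU model, core count, total RAM, disk list
host=$(hostname)
cpu=$(lscpu | awk -F': +' '/Model name/ {print $2; exit}')
cores=$(nproc)
ram=$(free -h | awk '/^Mem:/ {print $2}')
disks=$(lsblk -dn -o NAME,SIZE 2>/dev/null | paste -sd';' -)
echo "$host,$cpu,$cores,$ram,$disks" >> inventory.csv
```

Run it once per box (e.g. over SSH) and you end up with a single spreadsheet-friendly file covering the whole fleet.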

Initial Setup Steps

Once you’ve assessed your hardware, perform these initial setup tasks:

  1. Update BIOS/UEFI: Check for firmware updates on manufacturer websites
  2. Install a base OS: Ubuntu Server LTS is recommended for its stability and extensive documentation
  3. Configure networking: Set static IPs for easier management
  4. Update packages: Ensure all systems are current

Installation and Base Configuration

With your hardware assessed, it’s time to establish a solid foundation. I recommend standardizing on Ubuntu Server 22.04 LTS for consistency across your fleet.

Operating System Installation

# Create bootable USB (double-check /dev/sdX -- dd will overwrite the target)
sudo dd if=ubuntu-22.04-live-server-amd64.iso of=/dev/sdX bs=4M status=progress

# Boot from USB and install
# During installation:
# - Choose "OpenSSH server" for remote management
# - Configure static IP during network setup
# - Create a standard user with sudo privileges

Base System Configuration

After installation, configure each machine with these essential settings:

# Update system packages
sudo apt update && sudo apt upgrade -y

# Install essential tools
sudo apt install -y \
    vim \
    git \
    curl \
    wget \
    htop \
    iotop \
    iftop \
    nmap \
    net-tools \
    build-essential \
    openssh-server

# Configure SSH for key-based authentication
ssh-keygen -t ed25519 -C "homelab-key"
ssh-copy-id user@machine-ip
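
Copying the key to 37 machines one at a time is painful. A sketch that builds a host list (the `.100`-`.136` range and `user` login are assumptions; adjust to your subnet) and prints one `ssh-copy-id` command per host so you can review before running:

```shell
# Generate one IP per machine (example range), then emit the key-copy
# commands; review the output, then pipe it to "bash" to actually run it
seq -f '192.168.1.%g' 100 136 > hosts.txt
while read -r ip; do
  echo "ssh-copy-id -i ~/.ssh/id_ed25519.pub user@$ip"
done < hosts.txt
```

The same `hosts.txt` can drive later fleet-wide loops (updates, reboots, inventory collection).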

Strategic Hardware Optimization

With 32 SFF and 5 mini PCs, you have significant computing resources. The key is strategic allocation based on each machine’s capabilities.

RAM Optimization Strategy

Memory is often the limiting factor in compact systems. Consider this RAM allocation strategy:

# Check current RAM usage
free -h

# Upgrade RAM where possible
# Target configurations:
# - 16GB for general-purpose machines
# - 32GB for development/build servers
# - 64GB for database or memory-intensive workloads

For machines with expandable RAM, prioritize upgrades for:

  • Build servers and CI/CD runners
  • Database servers
  • Development environments
  • Virtualization hosts
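
Before ordering DIMMs, check which boxes actually have free slots. A rough sketch using `dmidecode` (counts may read 0 on systems where `dmidecode` is unavailable or needs root):

```shell
# Count total vs. empty DIMM slots; an empty count above zero means the
# machine can take more RAM without discarding its existing sticks
total=$(sudo dmidecode --type memory 2>/dev/null | grep -c '^Memory Device$')
empty=$(sudo dmidecode --type memory 2>/dev/null | grep -c 'No Module Installed')
echo "DIMM slots: $total total, $empty empty"
```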

Storage Configuration

Storage optimization is crucial for performance and data integrity:

# Check SMART status for all drives (requires the smartmontools package)
sudo apt install -y smartmontools
sudo smartctl -a /dev/sda

# Configure RAID for critical data
# Example: RAID 1 mirror across two spare data disks (never include the OS disk)
sudo apt install -y mdadm
sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

Consider these storage strategies:

  • Use SSDs for OS and frequently accessed data
  • Implement RAID 1 for critical services
  • Use larger HDDs for bulk storage and backups
  • Consider network-attached storage (NAS) configurations
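
A quick way to sort drives into those tiers: `lsblk`'s ROTA column tells SSDs apart from spinning disks without opening a single case.

```shell
# ROTA=0 is solid state, ROTA=1 is a spinning disk -- handy when deciding
# which rescued box already has an SSD for the OS tier
lsblk -d -o NAME,ROTA,SIZE,MODEL
```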

CPU Vendor Standardization

The Reddit comment about standardizing CPU vendors is spot-on. Having machines with the same CPU vendor simplifies management and allows for consistent workload distribution.

If you have mixed Intel and AMD systems:

  • Group machines with identical CPUs into the same cluster or node pool
  • Favor Intel systems for workloads that benefit from Quick Sync (e.g. media transcoding)
  • Keep AMD systems together for general compute workloads
  • Set CPU affinity for containerized workloads on mixed-use hosts
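
For that last point, Docker Compose can express CPU affinity directly. A minimal sketch (the service name, placeholder image, and core numbers are illustrative):

```yaml
# Pin a service to two cores and cap its memory so one noisy container
# can't starve the rest of the box
services:
  transcoder:
    image: nginx:latest   # placeholder image
    cpuset: "0,1"
    mem_limit: 1g
```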

Containerization and Orchestration

With your hardware optimized, it’s time to implement containerization for efficient resource utilization.

Docker Installation and Configuration

# Install Docker
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Add user to docker group
sudo usermod -aG docker $USER

# Install the Docker Compose v2 plugin (the legacy v1 "docker-compose"
# binary is end-of-life)
sudo apt install -y docker-compose-plugin

# Configure Docker daemon for production
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json > /dev/null <<EOF
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "storage-driver": "overlay2",
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Hard": 64000,
      "Soft": 64000
    }
  }
}
EOF

# Restart Docker so the daemon picks up the new settings
sudo systemctl restart docker

Kubernetes Setup

For orchestrating your fleet, Kubernetes provides excellent scalability:

# Install Kubernetes on all machines (the legacy apt.kubernetes.io repo is
# retired; use the community-owned pkgs.k8s.io repo instead)
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
# kubeadm also requires a container runtime (e.g. containerd) on every node

# Initialize master node
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Configure kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install CNI (Flannel)
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# Join worker nodes
# On master node: kubeadm token create --print-join-command
# On worker nodes: Run the join command
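
Once workers have joined, labeling nodes by hardware class lets you steer workloads to the machines that suit them. A sketch (node names and the `hw-class` label are assumptions for this fleet):

```yaml
# After "kubectl label node sff-01 hw-class=sff-32gb", a Deployment can
# target only the 32GB boxes via a nodeSelector:
spec:
  template:
    spec:
      nodeSelector:
        hw-class: sff-32gb
```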

Service Deployment Strategy

With your infrastructure ready, let’s deploy practical services across your fleet.

Self-Hosted Services

# docker-compose.yml for common services
version: '3.8'

services:
  # Nextcloud for personal cloud storage
  nextcloud:
    image: nextcloud:latest
    container_name: nextcloud
    ports:
      - "8080:80"
    volumes:
      - nextcloud_data:/var/www/html
    environment:
      - MYSQL_HOST=db
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_PASSWORD=your_password

  # MariaDB database
  db:
    image: mariadb:latest
    container_name: nextcloud_db
    volumes:
      - db_data:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=your_root_password
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_PASSWORD=your_password

  # Nginx reverse proxy
  nginx:
    image: nginx:latest
    container_name: nginx
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./certs:/etc/nginx/certs

volumes:
  nextcloud_data:
  db_data:
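
Rather than hard-coding passwords in the compose file, Docker Compose automatically reads a `.env` file from the same directory. A sketch (variable names mirror the file above; the values are placeholders):

```
# .env -- loaded automatically by docker compose; keep it out of version control
MYSQL_PASSWORD=change_me
MYSQL_ROOT_PASSWORD=change_me_too
```

Then reference them in the compose file as `- MYSQL_PASSWORD=${MYSQL_PASSWORD}`.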

Development and Testing Environments

# GitLab Runner for CI/CD
version: '3.8'

services:
  gitlab-runner:
    image: gitlab/gitlab-runner:latest
    container_name: gitlab-runner
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - gitlab-runner-config:/etc/gitlab-runner
    environment:
      - CI_SERVER_URL=https://gitlab.com/
      - RUNNER_REGISTRATION_TOKEN=your_token
      - RUNNER_EXECUTOR=docker
    # Note: these variables feed "gitlab-runner register"; registration is
    # still a one-time step run inside the container, not automatic on start

volumes:
  gitlab-runner-config:

Network Configuration and Security

With multiple machines, proper network configuration is essential.

Network Setup

# Configure static IP
sudo nano /etc/netplan/01-netcfg.yaml

# Example configuration
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: no
      addresses:
        - 192.168.1.100/24
      routes:
        - to: default
          via: 192.168.1.1
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]

# Validate and apply; "netplan try" rolls back automatically if the new
# config cuts off your connection
sudo netplan try

Security Hardening

# Install and configure firewall
sudo apt install -y ufw
sudo ufw default deny incoming
sudo ufw default allow outgoing
# Allow SSH only from the LAN (a bare "allow ssh" would open it to any source)
sudo ufw allow from 192.168.1.0/24 to any port 22 proto tcp
sudo ufw enable

# Install fail2ban for intrusion prevention
sudo apt install -y fail2ban
sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
sudo systemctl enable fail2ban
sudo systemctl start fail2ban

# Configure SSH for security
sudo nano /etc/ssh/sshd_config
# Set:
# Port 2222
# PermitRootLogin no
# PasswordAuthentication no
# AllowUsers your_username

# If you change the port, open it in the firewall first, then validate the
# config before restarting so a typo can't lock you out
sudo ufw allow 2222/tcp
sudo sshd -t && sudo systemctl restart ssh

Monitoring and Maintenance

Effective monitoring ensures your rescued fleet operates reliably.

Monitoring Stack

# docker-compose.yml for monitoring stack
version: '3.8'

services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus_data:/prometheus

  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    ports:
      - "3000:3000"
    volumes:
      - grafana_data:/var/lib/grafana

  node-exporter:
    image: prom/node-exporter:latest
    container_name: node-exporter
    ports:
      - "9100:9100"
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro

volumes:
  prometheus_data:
  grafana_data:
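
The compose file mounts a `./prometheus.yml` that doesn't exist yet. A minimal sketch that scrapes node-exporter on each machine (the target IPs are examples from this fleet's addressing scheme):

```yaml
# prometheus.yml -- scrape every node-exporter in the fleet
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'fleet-nodes'
    static_configs:
      - targets:
          - '192.168.1.100:9100'
          - '192.168.1.101:9100'
          - '192.168.1.102:9100'   # ...one entry per machine
```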

Automated Maintenance

#!/bin/bash
# Automated maintenance script

# System updates
sudo apt update && sudo apt upgrade -y

# Docker cleanup
docker system prune -a -f

# Log rotation
sudo logrotate -f /etc/logrotate.conf

# Backup critical data
# Implement your backup strategy here

# Send status report (requires a configured MTA, e.g. the mailutils package)
echo "Maintenance completed on $(date)" | mail -s "Homelab Status" your_email@example.com
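
To run this hands-off, drop a cron entry on each machine. A sketch assuming the script lives at a hypothetical `/usr/local/bin/homelab-maintenance.sh`:

```
# /etc/cron.d/homelab-maintenance -- run nightly at 03:00 as root
0 3 * * * root /usr/local/bin/homelab-maintenance.sh >> /var/log/homelab-maintenance.log 2>&1
```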

Scaling and Future Expansion

With your initial setup complete, consider these scaling strategies:

Load Balancing

# HAProxy for load balancing
version: '3.8'

services:
  haproxy:
    image: haproxy:latest
    container_name: haproxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg
    depends_on:
      - web1
      - web2

  web1:
    image: nginx:latest
    container_name: web1

  web2:
    image: nginx:latest
    container_name: web2
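
HAProxy won't start without the mounted `haproxy.cfg`. A minimal sketch that round-robins HTTP across the two backends above (timeouts are illustrative defaults):

```
# haproxy.cfg -- round-robin HTTP across web1 and web2
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend http_in
    bind *:80
    default_backend web_pool

backend web_pool
    balance roundrobin
    server web1 web1:80 check
    server web2 web2:80 check
```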

High Availability

# Install Keepalived for virtual IP management
sudo apt install -y keepalived

# Configure keepalived.conf
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass your_password
    }
    virtual_ipaddress {
        192.168.1.200
    }
}

# Restart and confirm the virtual IP is bound on the master
sudo systemctl restart keepalived
ip addr show eth0

Troubleshooting Common Issues

Even with careful planning, issues arise. Here are solutions to common problems:

Performance Issues

# Monitor system resources
htop
iotop
iftop

# Check for memory leaks
ps aux --sort=-%mem | head -10

# Analyze disk I/O
iostat -x 1

# Check CPU temperature (requires the lm-sensors package)
sensors

Network Connectivity Problems

# Test network connectivity
ping -c 4 google.com
traceroute google.com

# Check firewall status
sudo ufw status verbose

# Analyze network traffic (needs root)
sudo tcpdump -i eth0 -n port 80

Container Issues

# Check container status
docker ps -a

# View container logs
docker logs $CONTAINER_ID

# Inspect container configuration
docker inspect $CONTAINER_ID

# Check resource usage
docker stats

Conclusion

Your rescued fleet of 32 SFF and 5 mini PCs represents a tremendous opportunity for learning, experimentation, and building a robust homelab environment. By following this comprehensive guide, you’ve transformed potentially scrapped hardware into a powerful infrastructure playground.

The journey from rescue to production-ready infrastructure involves careful planning, strategic allocation of resources, and continuous learning. Your efforts not only save these machines from e-waste but also provide you with invaluable experience in system administration, DevOps practices, and infrastructure management.

Remember that this is just the beginning. As you become more comfortable with your setup, explore advanced topics like:

  • Infrastructure as Code with Terraform
  • Continuous Integration/Continuous Deployment pipelines
  • Advanced networking with VLANs and VPNs
  • Security hardening and compliance
  • Performance optimization and benchmarking

The skills you develop with this homelab will directly translate to professional DevOps and system administration roles. Your rescued fleet is more than just hardware—it’s a gateway to mastering modern infrastructure management.

Your homelab journey is just beginning, and the knowledge you’ll gain from working with this diverse hardware fleet will serve you well in your DevOps career. Happy hosting!

This post is licensed under CC BY 4.0 by the author.