Accidentally Won 4 Mac Minis On Ebay Oops: Building a Homelab Cluster on a Budget

INTRODUCTION

We’ve all been there - scrolling through online marketplaces late at night, only to find ourselves accidentally committing to infrastructure purchases. In this case, four Mac Minis acquired through an impulsive eBay bidding war. While this might seem like a predicament, it’s actually a golden opportunity to explore enterprise-grade infrastructure management techniques in a homelab environment.

For DevOps engineers and system administrators, homelabs serve as critical sandboxes for testing configurations, implementing automation workflows, and experimenting with clustering technologies without risking production environments. With Apple Silicon Mac Minis offering impressive performance-per-watt ratios (Apple cites up to 3x faster CPU performance and 6x faster GPU performance for the M1 Mac Mini versus the prior Intel-based model), these compact machines present an intriguing platform for building cost-efficient, energy-conscious infrastructure clusters.

This guide will transform our accidental eBay acquisition into a fully functional DevOps lab capable of running containerized workloads, automated deployments, and distributed systems. We’ll cover:

  1. Cluster design considerations for heterogeneous hardware
  2. macOS-specific infrastructure challenges and solutions
  3. Container orchestration options for ARM-based architectures
  4. Automated provisioning using infrastructure-as-code (IaC) tools
  5. Performance optimization for resource-constrained environments

By the end, you’ll understand how to leverage inexpensive or repurposed hardware to create production-like environments for testing Kubernetes configurations, CI/CD pipelines, and distributed system architectures - all while keeping typical power consumption under 100W for the entire cluster.

UNDERSTANDING MAC MINI CLUSTERING

What Is Homelab Clustering?

A homelab cluster is a small-scale implementation of enterprise infrastructure patterns using consumer-grade hardware. Unlike commercial data centers that focus on horizontal scalability, homelab clusters prioritize:

  • Educational value: Hands-on experience with distributed systems
  • Cost efficiency: Low-cost nodes ($100-$300 used) with minimal power consumption
  • Flexibility: Mixed architecture support (ARM/x86) and multi-OS environments
  • Real-world simulation: Production-like environments for testing IaC configurations

Why Mac Minis for DevOps Workloads?

The 2020 M1 Mac Mini revolutionized homelab economics:

| Specification        | M1 Mac Mini (2020)      | Comparable x86 SFF PC |
|----------------------|-------------------------|-----------------------|
| Cost (used)          | $100-$300               | $150-$400             |
| Idle Power Draw      | 6.8W                    | 15-30W                |
| Max Power Draw       | 39W                     | 65-150W               |
| CPU Performance (ST) | 1500+ pts (Geekbench 5) | 1000-1400 pts         |
| Memory Capacity      | 8-16GB unified          | 16-64GB DDR4          |
| Storage Options      | NVMe SSD (256GB-2TB)    | SATA/NVMe (varies)    |

Key advantages for infrastructure workloads:

  • ARM64 architecture: Native support for modern container workloads
  • Metal API: GPU acceleration for ML workloads (TensorFlow/PyTorch)
  • macOS/Linux duality: Test cross-platform deployment scenarios
  • Near-silent operation: The fan rarely spins up under moderate loads

Cluster Architecture Options

For our 4-node cluster, we have several architectural patterns to consider:

  1. Docker Swarm Mode:
    
    # Initialize swarm on first node
    docker swarm init --advertise-addr 192.168.1.10
       
    # Join subsequent nodes
    docker swarm join --token SWMTKN-1-0zvzf1... 192.168.1.10:2377
    
  2. Kubernetes with K3s (lightweight Kubernetes by Rancher):
    
    # Server node
    curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE=644 sh -s - server \
      --cluster-init
       
    # Agent nodes
    curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.10:6443 \
      K3S_TOKEN=mynodetoken sh -
    
  3. Nomad Cluster (Hashicorp’s workload orchestrator):
    
    # Server configuration (nomad.hcl)
    server {
      enabled          = true
      bootstrap_expect = 1
    }
       
    client {
      enabled = true
      servers = ["192.168.1.10"]
    }
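
Whichever orchestrator you choose, smoke-test it before building further. A quick sketch with Swarm, for instance (the service name is arbitrary and any small image works):

# Spread a trivial service across all four nodes, then inspect placement
docker service create --name ping --replicas 4 alpine ping 1.1.1.1
docker service ls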
    

Performance Considerations

When working with ARM-based clusters, pay special attention to:

  • Multi-architecture container support: Use docker buildx for cross-platform builds (see the sketch after this list)
  • Memory constraints: Configure resource limits in orchestrators
  • Storage performance: Apple’s NVMe SSDs reach 2.8GB/s read speeds
  • Network bottlenecks: 1GbE built-in (consider USB3-to-2.5GbE adapters)
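
A minimal buildx sketch for dual-architecture images follows; the builder name and image tag are placeholders:

# Create a multi-platform builder once, then build for both architectures
docker buildx create --name lab-builder --use
docker buildx build --platform linux/arm64,linux/amd64 \
  -t registry.example.com/webapp:latest --push .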

PREREQUISITES

Hardware Requirements

Our 4-node cluster setup requires:

  1. Network Infrastructure:
    • Gigabit switch with at least 5 ports
    • CAT6 cables (1 per node + uplink)
    • Router with static DHCP reservation capabilities
  2. Power Management:
    • Smart power strip (TP-Link Kasa HS300 recommended)
    • UPS battery backup (1500VA minimum)
  3. Peripheral Sharing:
    • USB-C KVM switch (TESmart HKS0401A1)
    • Bluetooth keyboard/mouse combo

Software Requirements

  1. Base Operating System Options:
    • macOS Ventura (13.5+) with Xcode Command Line Tools
    • Ubuntu Server 22.04 LTS for ARM
    • Fedora CoreOS (for immutable infrastructure)
  2. Cluster Management Tools:
    
    # Common CLI dependencies (k3sup bootstraps K3s remotely)
    brew install ansible kubectl helm k3sup
       
    # Docker Desktop, plus HashiCorp tools from their tap
    brew install --cask docker
    brew tap hashicorp/tap
    brew install hashicorp/tap/terraform hashicorp/tap/nomad hashicorp/tap/consul
    
  3. Security Baseline:
    • SSH key authentication (ed25519 keys recommended; a key-distribution sketch follows this list)
    • Firewall rules (pfctl on macOS, ufw on Linux)
    • Encrypted ZFS filesystem for persistent storage
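
Assuming an admin account on each node (hostnames from the table below), key generation and distribution might look like:

# Generate one ed25519 key and push it to every node
ssh-keygen -t ed25519 -f ~/.ssh/homelab -C homelab-cluster
for host in mini-ctl1 mini-wrk1 mini-wrk2 mini-str1; do
  ssh-copy-id -i ~/.ssh/homelab.pub admin@$host
done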

Network Configuration

Static IP allocation table:

| Hostname  | IP Address   | Role          | Resources      |
|-----------|--------------|---------------|----------------|
| mini-ctl1 | 192.168.1.10 | Control Plane | 8GB RAM, 256GB |
| mini-wrk1 | 192.168.1.11 | Worker Node   | 8GB RAM, 256GB |
| mini-wrk2 | 192.168.1.12 | Worker Node   | 8GB RAM, 256GB |
| mini-str1 | 192.168.1.13 | Storage Node  | 16GB RAM, 1TB  |
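
With the DHCP reservations in place, a matching hosts entry on each node (and your workstation) keeps name resolution working without a local DNS server:

# Append to /etc/hosts
192.168.1.10 mini-ctl1
192.168.1.11 mini-wrk1
192.168.1.12 mini-wrk2
192.168.1.13 mini-str1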

INSTALLATION & SETUP

macOS-Specific Preparation

  1. Disable SIP for Kernel Extensions:
    
    # Boot into Recovery (on Apple Silicon, hold the power button) and open Terminal
    csrutil disable
    spctl kext-consent add EG7KH642X6
    
  2. Enable SSH Remote Login:
    
    sudo systemsetup -setremotelogin on
    sudo launchctl load -w /System/Library/LaunchDaemons/ssh.plist
    
  3. Configure Performance Settings:
    
    # Prevent the machine from sleeping (these nodes run headless)
    sudo systemsetup -setcomputersleep Off
       
    # Disable Spotlight indexing on data volumes
    sudo mdutil -i off /Volumes/Data
    

Automated Provisioning with Ansible

Create inventory.ini:

[control]
mini-ctl1 ansible_host=192.168.1.10

[workers]
mini-wrk1 ansible_host=192.168.1.11
mini-wrk2 ansible_host=192.168.1.12

[storage]
mini-str1 ansible_host=192.168.1.13

[cluster:children]
control
workers
storage

Base playbook (cluster-setup.yml):

---
- name: Configure Mac Mini cluster
  hosts: all
  vars:
    docker_version: "24.0.6"
    k3s_channel: "v1.27"
  tasks:
    - name: Install Homebrew (macOS)
      when: ansible_os_family == 'Darwin'
      become: no  # Homebrew refuses to run as root
      shell: |
        NONINTERACTIVE=1 /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
        echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zprofile

    - name: Install Docker Desktop (macOS)
      when: ansible_os_family == 'Darwin'
      become: no
      shell: |
        brew install --cask docker
        sudo /Applications/Docker.app/Contents/MacOS/Docker --unattended --install-privileged-components

    - name: Configure Docker daemon (Linux nodes; Docker Desktop manages its own daemon)
      when: ansible_os_family != 'Darwin'
      become: yes
      copy:
        dest: /etc/docker/daemon.json
        content: |
          {
            "experimental": true,
            "features": {"buildkit": true},
            "builder": {"gc": {"enabled": true}},
            "default-address-pools": [{"base":"10.10.0.0/16","size":24}]
          }

    - name: Enable Docker service (Linux nodes; macOS has no systemd)
      when: ansible_os_family != 'Darwin'
      become: yes
      systemd:
        name: docker
        state: started
        enabled: yes
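
Run the playbook against the inventory; a --check pass first is cheap insurance:

ansible-playbook -i inventory.ini cluster-setup.yml --check
ansible-playbook -i inventory.ini cluster-setup.yml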

Kubernetes Installation with K3s

Control plane configuration. Note that K3s reads a flat file at /etc/rancher/k3s/config.yaml rather than a Kubernetes-style manifest, and K3s itself only runs on Linux, so this applies to nodes booted into Ubuntu Server (or a Linux VM on a macOS host):

# /etc/rancher/k3s/config.yaml on mini-ctl1
cluster-init: true
service-cidr: 10.43.0.0/16
cluster-cidr: 10.42.0.0/16
flannel-backend: host-gw
disable:
  - traefik

# Additions for mini-str1's config.yaml: label and taint the storage node
# so that only IO-tolerant workloads schedule there
node-label:
  - "storage=ssd"
node-taint:
  - "io=true:NoSchedule"

Apply configuration:

k3sup install --ip 192.168.1.10 --user admin --cluster \
  --k3s-extra-args "--flannel-backend host-gw --disable traefik" \
  --k3s-channel stable
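
The server install only brings up mini-ctl1; join the remaining nodes with k3sup as well, then confirm registration (k3sup writes a kubeconfig to the working directory by default):

# Join the workers and the storage node as agents
for ip in 192.168.1.11 192.168.1.12 192.168.1.13; do
  k3sup join --ip $ip --server-ip 192.168.1.10 --user admin
done

export KUBECONFIG=$(pwd)/kubeconfig
kubectl get nodes -o wide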

CONFIGURATION & OPTIMIZATION

Storage Configuration with ZFS

Create a mirrored pool and an NFS-shared dataset (assumes OpenZFS is installed on the storage node):

# On storage node
diskutil list
sudo zpool create datapool mirror disk4 disk5
sudo zfs create -o recordsize=128k -o compression=zstd datapool/k8s
sudo zfs set sharenfs=on datapool/k8s
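
A quick sanity check from a worker node (the mount point is arbitrary):

# Verify pool health, then test the NFS export remotely
zpool status datapool
showmount -e 192.168.1.13
sudo mkdir -p /mnt/k8s && sudo mount -t nfs 192.168.1.13:/datapool/k8s /mnt/k8s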

Network Performance Tuning

Adjust MTU and buffer sizes:

# macOS (run on each node; substitute your Ethernet interface from ifconfig,
# and note that jumbo frames also require switch support)
sudo ifconfig en0 mtu 9000
sudo sysctl -w net.inet.tcp.sendspace=1048576
sudo sysctl -w net.inet.tcp.recvspace=1048576

# Persistent settings
echo 'net.inet.tcp.sendspace=1048576' | sudo tee -a /etc/sysctl.conf
echo 'net.inet.tcp.recvspace=1048576' | sudo tee -a /etc/sysctl.conf
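
Measure before and after tuning; iperf3 (installable via Homebrew) gives a quick baseline:

# On one node
iperf3 -s

# On another node: four parallel streams against it
iperf3 -c 192.168.1.10 -P 4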

Kubernetes StorageClass Configuration

Define ZFS-based StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zfs-local
provisioner: zfs.csi.openebs.io
parameters:
  recordsize: "128k"
  compression: "zstd"
  dedup: "off"
  poolname: "datapool/k8s"
allowVolumeExpansion: true
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
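
This provisioner is supplied by the OpenEBS ZFS-LocalPV CSI driver, which must be installed on the cluster first; a minimal sketch using the project's published Helm chart (check upstream for the current repo and chart names):

helm repo add openebs-zfslocalpv https://openebs.github.io/zfs-localpv
helm install zfs-localpv openebs-zfslocalpv/zfs-localpv -n kube-system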

Security Hardening

  1. Pod Security Admission:
    
    apiVersion: apiserver.config.k8s.io/v1
    kind: AdmissionConfiguration
    plugins:
      - name: PodSecurity
        configuration:
          apiVersion: pod-security.admission.config.k8s.io/v1beta1
          kind: PodSecurityConfiguration
          defaults:
            enforce: "restricted"
            enforce-version: "latest"
          exemptions:
            usernames: ["system:serviceaccount:kube-system:calico-kube-controllers"]
    
  2. Network Policies:
    
    apiVersion: projectcalico.org/v3
    kind: NetworkPolicy
    metadata:
      name: default-deny
    spec:
      selector: all()
      types:
        - Ingress
        - Egress

USAGE & OPERATIONS

Deploying Sample Workload

Stateful web application deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 3  # all replicas co-locate on the volume's node (node-local RWO PVC)
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 3000
        fsGroup: 2000
      containers:
      - name: web
        image: nginx:1.25-alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: webdata
          mountPath: /usr/share/nginx/html
      volumes:
      - name: webdata
        persistentVolumeClaim:
          claimName: webdata-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: webdata-pvc
spec:
  storageClassName: zfs-local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
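
Apply and verify (assuming the manifest is saved as webapp.yaml):

kubectl apply -f webapp.yaml
kubectl get pods -l app=webapp -o wide
kubectl get pvc webdata-pvc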

Monitoring Stack

Loki/Prometheus/Grafana installation:

helm repo add grafana https://grafana.github.io/helm-charts
helm install observability grafana/loki-stack \
  --namespace monitoring --create-namespace \
  --set grafana.enabled=true \
  --set prometheus.enabled=true \
  --set loki.persistence.enabled=true \
  --set loki.persistence.storageClassName=zfs-local \
  --set loki.persistence.size=50Gi
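
Grafana's admin password lands in a release-named secret (the chart's usual <release>-grafana convention); retrieve it and port-forward to reach the UI:

kubectl get secret -n monitoring observability-grafana \
  -o jsonpath='{.data.admin-password}' | base64 -d; echo
kubectl port-forward -n monitoring svc/observability-grafana 3000:80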

Backup Strategy

Velero backup configuration:

velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.7.0 \
  --bucket mac-mini-backups \
  --secret-file ./credentials-velero \
  --use-volume-snapshots=false \
  --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://192.168.1.13:9000

# Schedule daily backups at 02:00
velero schedule create daily-backup --schedule "0 2 * * *"
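
The s3Url above assumes MinIO is listening on the storage node. One hypothetical way to stand that up, backed by the ZFS dataset (the mount path and credentials are placeholders):

docker run -d --name minio \
  -p 9000:9000 -p 9001:9001 \
  -v /Volumes/datapool/k8s/minio:/data \
  -e MINIO_ROOT_USER=velero \
  -e MINIO_ROOT_PASSWORD=change-me-now \
  minio/minio server /data --console-address ":9001"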