My Company Is Offering Me 9 Laptops For 180
Introduction
When a corporate IT department decides to liquidate a batch of lightly‑used laptops for a fraction of the retail price, the offer can feel like a hidden treasure for anyone who runs a homelab, manages self‑hosted services, or builds DevOps pipelines. The Reddit thread that sparked this article describes exactly that scenario: nine Lenovo ThinkPad L14 units (a mix of AMD Ryzen 5 4500U and Intel i5‑10210U CPUs) for $180 total.
At first glance the hardware looks modest—8 GB RAM, 256 GB SSD, integrated graphics—but the real value lies in the aggregate compute, storage, and network resources you can unlock when you treat the laptops as a distributed infrastructure rather than a set of disposable workstations.
In this guide we will:
- Examine why a cluster of inexpensive laptops is a viable platform for infrastructure‑as‑code, CI/CD, monitoring, and edge computing.
- Walk through the prerequisites, installation, and configuration steps needed to turn the nine machines into a functional, production‑grade environment.
- Provide concrete Docker, Kubernetes, Ansible, and Terraform examples for each stage of the build-out.
- Highlight security hardening, performance tuning, and troubleshooting techniques that seasoned sysadmins expect.
By the end of this article you will have a repeatable blueprint for converting a low‑cost laptop bundle into a self‑hosted DevOps playground that can support everything from a personal GitLab Runner farm to a full‑scale K3s edge cluster.
Keywords: self‑hosted, homelab, DevOps, infrastructure, automation, open‑source, CI/CD, monitoring, Kubernetes, Docker, Ansible, Terraform
Understanding the Topic
What Are We Building?
The core idea is to treat each laptop as a node in a larger, software‑defined infrastructure. Rather than installing a single OS and using the machine for everyday tasks, we will:
- Standardize the OS – Ubuntu Server 22.04 LTS (or Rocky Linux 9) across all nodes.
- Install a lightweight container runtime – Docker Engine or containerd.
- Orchestrate containers – Docker Swarm for simplicity or K3s for a full Kubernetes experience.
- Automate provisioning – Ansible playbooks to enforce configuration drift‑free state.
- Manage infrastructure as code – Terraform to provision cloud‑linked resources (e.g., DNS, VPN).
The result is a distributed, self‑hosted platform that can run CI pipelines, host internal services, and act as a testbed for production workloads.
History and Development
| Year | Milestone | Relevance to Laptop Clusters |
|---|---|---|
| 2013 | Docker released (originating as an internal project at dotCloud) | Introduced containerization, enabling lightweight workloads on low‑end hardware. |
| 2015 | Kubernetes 1.0 GA | Provided a declarative API for managing containers at scale, later adapted for edge devices via K3s. |
| 2018 | Ansible 2.5 | Simplified agent‑less configuration management, perfect for heterogeneous laptop fleets. |
| 2019 | K3s reaches GA | A certified Kubernetes distribution optimized for ARM and low‑resource x86, ideal for laptops. |
| 2022 | Terraform 1.3 | Added robust provider ecosystem for on‑prem resources, allowing hybrid cloud‑on‑prem workflows. |
These tools have matured to the point where a nine‑node cluster can be provisioned, monitored, and updated with a handful of commands.
Key Features and Capabilities
| Feature | Docker | K3s | Ansible | Terraform |
|---|---|---|---|---|
| Container orchestration | Swarm mode (native) | Full Kubernetes API | N/A | N/A |
| Agent‑less configuration | N/A | N/A | SSH‑based playbooks | N/A |
| Infrastructure as code | Docker Compose | Helm charts | N/A | HCL (HashiCorp Configuration Language) |
| Resource footprint | ~200 MB daemon | ~150 MB binary | ~30 MB on control node | CLI only |
| Edge‑ready | Limited | Designed for edge | Yes | Yes (via providers) |
Pros and Cons
| Aspect | Advantages | Disadvantages |
|---|---|---|
| Cost | $20 per laptop → $180 total; low CAPEX | Limited CPU cores, 8 GB RAM per node |
| Scalability | Easy to add more laptops or Raspberry Pis | Network bandwidth limited to Wi‑Fi/Ethernet 1 Gbps |
| Power consumption | ~15 W per unit, suitable for 24/7 operation | Battery wear from sitting permanently at full charge; set a charge threshold if the firmware supports it |
| Management overhead | Uniform OS + Ansible reduces drift | Physical maintenance (dust, fan cleaning) |
| Performance | Sufficient for CI jobs, monitoring, small web services | Not suitable for heavy ML training or large DB clusters |
Ideal Use Cases
- CI/CD Runner Farm – Deploy GitLab Runner or GitHub Actions self‑hosted runners on each laptop to parallelize builds.
- Edge Kubernetes Cluster – Run K3s to host IoT gateways, lightweight micro‑services, or AI inference at the edge.
- Network Monitoring Hub – Install Prometheus, Grafana, and Loki to collect metrics from home network devices.
- Development Sandbox – Provide isolated environments for developers to test Terraform modules or Helm charts.
- Learning Platform – Perfect for certification prep (CKA, CKS, Ansible) without impacting production resources.
Current State and Future Trends
The open‑source ecosystem continues to push container runtimes and orchestration tools toward lower resource footprints. Projects like MicroK8s, k0s, and Nomad are gaining traction for edge deployments. As eBPF matures, future monitoring agents will consume even less CPU, making nine laptops a viable observability stack for small teams.
Comparison to Alternatives
| Alternative | Cost | Complexity | Typical Use |
|---|---|---|---|
| Single high‑end server (e.g., Dell PowerEdge) | $2,000+ | Moderate (single OS) | Production workloads |
| Cloud VM burst (AWS t3.micro) | $0.01/hr ≈ $7/mo | Low (managed) | Temporary CI jobs |
| Raspberry Pi 4 cluster (4 GB) | $300 for 9 units | Low (ARM) | IoT, learning |
| 9 Lenovo L14 laptops | $180 total | Medium (needs OS sync) | Homelab, edge, CI/CD |
The laptop bundle offers a sweet spot between raw power and flexibility, especially for teams that already have on‑prem networking and want to avoid recurring cloud costs.
Prerequisites
Hardware Requirements
| Item | Minimum Spec | Recommended Spec |
|---|---|---|
| CPU | AMD Ryzen 5 4500U or Intel i5‑10210U | 4 cores, 8 threads |
| RAM | 8 GB DDR4 | 16 GB (upgrade if possible) |
| Storage | 256 GB SSD | 512 GB SSD or add external HDD for backups |
| Network | 1 Gbps Ethernet (preferred) | Dual NICs for management + data plane |
| Power | AC adapter (keep plugged in) | UPS for graceful shutdowns |
All nine laptops should be identical or close enough to avoid driver inconsistencies.
Software Requirements
| Software | Minimum Version | Purpose |
|---|---|---|
| Ubuntu Server | 22.04 LTS | Base OS (or Rocky Linux 9) |
| Docker Engine | 24.0.5 | Container runtime |
| K3s | 1.27.4 | Lightweight Kubernetes |
| Ansible | 2.14 | Configuration management |
| Terraform | 1.6.0 | IaC for cloud resources |
| Git | 2.34 | Source control |
| OpenSSH | 8.9p1 | Remote management |
Network & Security Considerations
- Static IP addressing – Reserve a /24 subnet (e.g., 192.168.100.0/24) for the cluster.
- Firewall – Enable `ufw` on each node, allowing only SSH (22), HTTP/HTTPS (80/443), and the Kubernetes API (6443).
- VPN – Optional WireGuard tunnel for remote admin access.
- User Permissions – Create a dedicated `devops` user with `sudo` rights; disable password login in favor of SSH keys.
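To keep these rules identical on all nine nodes, the firewall policy above can be expressed as an Ansible task file — a minimal sketch using the `community.general.ufw` module (the file name `firewall.yml` is illustrative):

```yaml
# firewall.yml — baseline firewall policy for every node (sketch)
- name: Allow SSH, HTTP/HTTPS, and the Kubernetes API
  community.general.ufw:
    rule: allow
    port: "{{ item }}"
    proto: tcp
  loop: ["22", "80", "443", "6443"]

- name: Enable ufw with a default-deny incoming policy
  community.general.ufw:
    state: enabled
    policy: deny
    direction: incoming
```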
Pre‑Installation Checklist
- Verify BIOS is set to boot from USB and disable Secure Boot (if using Ubuntu).
- Update firmware on all laptops (`fwupd`).
- Create a bootable Ubuntu Server 22.04 LTS USB stick.
- Generate an SSH key pair (`ssh-keygen -t ed25519`) for the `devops` user.
- Prepare a CSV inventory for Ansible (hostname, IP, role).
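That CSV can later be converted into Ansible's INI inventory format with a short shell snippet — a sketch assuming a `hostname,ip,role` header row (file names are illustrative):

```bash
# nodes.csv: one row per laptop, filled in during the hardware audit
cat > nodes.csv <<'EOF'
hostname,ip,role
master,192.168.100.10,master
worker1,192.168.100.11,worker
worker2,192.168.100.12,worker
EOF

# Emit an INI inventory: a [masters] and a [workers] section, keyed on the role column
{
  echo "[masters]"
  awk -F',' 'NR > 1 && $3 == "master" { print $1 " ansible_host=" $2 }' nodes.csv
  echo ""
  echo "[workers]"
  awk -F',' 'NR > 1 && $3 == "worker" { print $1 " ansible_host=" $2 }' nodes.csv
} > inventory.ini

cat inventory.ini
```

Regenerating the inventory from the CSV keeps a single source of truth as laptops are added or retired.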
Installation & Setup
1. OS Deployment
Boot each laptop from the Ubuntu Server installer and follow the guided partitioning (use the entire disk). During the “User setup” step:
```bash
# Create the devops user with sudo privileges
adduser devops
usermod -aG sudo devops
mkdir -p /home/devops/.ssh
chmod 700 /home/devops/.ssh
```
Copy the public SSH key to /home/devops/.ssh/authorized_keys and set proper permissions:
```bash
chmod 600 /home/devops/.ssh/authorized_keys
chown -R devops:devops /home/devops/.ssh
```
Repeat for all nine nodes or use a preseed file to automate the process.
2. Network Configuration
Edit /etc/netplan/01-netcfg.yaml on each node (replace eth0 with the actual interface name):
```yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: no
      addresses:
        - 192.168.100.10/24   # Increment for each node
      gateway4: 192.168.100.1
      nameservers:
        addresses: [8.8.8.8, 1.1.1.1]
```
Apply the configuration:
```bash
sudo netplan apply
```
Verify connectivity:
```bash
ping -c 3 192.168.100.1
```
3. Docker Engine Installation
Run the official Docker convenience script (validated for Ubuntu 22.04):
```bash
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
```
Add the devops user to the docker group:
```bash
sudo usermod -aG docker devops
newgrp docker
```
Test Docker:
```bash
docker run --rm hello-world
```
4. Docker Swarm (Optional)
If you prefer a simple orchestrator, initialize Swarm on the first node (manager):
```bash
docker swarm init --advertise-addr 192.168.100.10
```
The command outputs a join token. On each worker node, run:
```bash
docker swarm join --token <WORKER_JOIN_TOKEN> 192.168.100.10:2377
```
Verify the cluster:
```bash
docker node ls
```
You should see all nine nodes listed with STATUS Ready.
5. K3s Installation (Full Kubernetes)
For a more feature‑rich environment, install K3s on the first node (master):
```bash
curl -sfL https://get.k3s.io | sh -s - server --cluster-init --node-ip 192.168.100.10
```
Retrieve the K3s token:
```bash
sudo cat /var/lib/rancher/k3s/server/node-token
```
On each worker node, install K3s as an agent:
```bash
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.100.10:6443 K3S_TOKEN=<TOKEN> sh -
```
Check the cluster status from the master:
```bash
sudo k3s kubectl get nodes -o wide
```
All nodes should appear Ready.
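A quick way to confirm that workloads actually spread across the cluster is a small replicated deployment — a sketch manifest (the name and image are illustrative, with resource requests sized conservatively for 8 GB nodes):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: smoke-test        # illustrative name
spec:
  replicas: 9
  selector:
    matchLabels:
      app: smoke-test
  template:
    metadata:
      labels:
        app: smoke-test
    spec:
      containers:
        - name: web
          image: nginx:alpine
          resources:
            requests:
              cpu: 50m
              memory: 32Mi
```

Apply it with `sudo k3s kubectl apply -f smoke-test.yaml`, then run `sudo k3s kubectl get pods -o wide`; the default scheduler spreads replicas by available resources, so most nodes should receive a pod.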
6. Ansible Control Node Setup
On your workstation (or a dedicated management laptop), install Ansible:
```bash
sudo apt update && sudo apt install -y ansible
```
Create an inventory file inventory.ini:
```ini
[masters]
master ansible_host=192.168.100.10

[workers]
worker1 ansible_host=192.168.100.11
worker2 ansible_host=192.168.100.12
worker3 ansible_host=192.168.100.13
worker4 ansible_host=192.168.100.14
worker5 ansible_host=192.168.100.15
worker6 ansible_host=192.168.100.16
worker7 ansible_host=192.168.100.17
worker8 ansible_host=192.168.100.18

[all:vars]
ansible_user=devops
ansible_ssh_private_key_file=~/.ssh/id_ed25519
```
Run a quick ping test:
```bash
ansible -i inventory.ini all -m ping
```
All hosts should return pong.
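From here, a short playbook can enforce a common baseline across the fleet — a sketch assuming Ubuntu nodes (the file name `baseline.yml` and the package list are illustrative):

```yaml
# baseline.yml — run with: ansible-playbook -i inventory.ini baseline.yml
- hosts: all
  become: true
  tasks:
    - name: Ensure core tooling is present
      ansible.builtin.apt:
        name: [curl, git, htop]
        state: present
        update_cache: true

    - name: Keep clocks in sync across the cluster
      ansible.builtin.systemd:
        name: systemd-timesyncd
        state: started
        enabled: true
```

Because Ansible is idempotent, re-running the playbook is the standard way to correct drift after manual changes.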
7. Terraform Provider for Local Resources
Terraform can manage local resources via the null and local providers. Create a main.tf:
```hcl
terraform {
  required_version = ">= 1.6.0"
  required_providers {
    null = {
      source  = "hashicorp/null"
      version = "~> 3.2"
    }
    local = {
      source  = "hashicorp/local"
      version = "~> 2.4"
    }
  }
}

resource "null_resource" "docker_swarm_init" {
  provisioner "local-exec" {
    command = "ssh -i ~/.ssh/id_ed25519 devops@192.168.100.10 'docker swarm init --advertise-addr 192.168.100.10'"
  }
}
```
Initialize and apply:
```bash
terraform init
terraform apply -auto-approve
```
Terraform will execute the Swarm init command on the manager node, demonstrating infrastructure as code for on‑prem resources.
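The `local` provider declared above can also render the Ansible inventory itself, keeping the node IPs in a single source of truth — a sketch with illustrative variable names:

```hcl
variable "worker_ips" {
  type    = list(string)
  default = ["192.168.100.11", "192.168.100.12"] # extend with the remaining workers
}

resource "local_file" "ansible_inventory" {
  filename = "${path.module}/inventory.ini"
  content = join("\n", concat(
    ["[masters]", "master ansible_host=192.168.100.10", "", "[workers]"],
    [for i, ip in var.worker_ips : "worker${i + 1} ansible_host=${ip}"]
  ))
}
```

After `terraform apply`, the generated `inventory.ini` can be fed directly to `ansible -i inventory.ini all -m ping`.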
Common Pitfalls & How to Avoid Them
| Symptom | Likely Cause | Fix |
|---|---|---|
| Docker daemon fails to start | cgroup driver mismatch | Ensure `systemd` is set as the cgroup driver in `/etc/docker/daemon.json`. |
| K3s node remains NotReady | Incorrect node-ip or firewall blocking 6443 | Verify `ufw allow 6443/tcp` and that `K3S_URL` points to the master's IP. |
| Ansible "UNREACHABLE" error | SSH key not copied or wrong permissions | `chmod 600 ~/.ssh/id_ed25519` and ensure `authorized_keys` contains the public key. |
| Terraform "connection refused" | Provider cannot reach the host (network issue) | Ping the host and check that the firewall allows SSH (port 22). |
Configuration & Optimization
Docker Daemon Tuning
Create /etc/docker/daemon.json with performance‑oriented settings:
{
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "3"
},
"storage-driver": "overlay2",
"default-ulimits": {
"nofile": {