
My Company Is Offering Me 9 Laptops For 180


Introduction

When a corporate IT department decides to liquidate a batch of lightly‑used laptops for a fraction of the retail price, the offer can feel like a hidden treasure for anyone who runs a homelab, manages self‑hosted services, or builds DevOps pipelines. The Reddit thread that sparked this article describes exactly that scenario: nine Lenovo ThinkPad L14 units (a mix of AMD Ryzen 5 4500U and Intel i5‑10210U CPUs) for $180 total.

At first glance the hardware looks modest—8 GB RAM, 256 GB SSD, integrated graphics—but the real value lies in the aggregate compute, storage, and network resources you can unlock when you treat the laptops as a distributed infrastructure rather than a set of disposable workstations.

In this guide we will:

  • Examine why a cluster of inexpensive laptops is a viable platform for infrastructure‑as‑code, CI/CD, monitoring, and edge computing.
  • Walk through the prerequisites, installation, and configuration steps needed to turn the nine machines into a functional, production‑grade environment.
  • Provide concrete Docker, Kubernetes, Ansible, and Terraform examples that you can adapt to your own environment.
  • Highlight security hardening, performance tuning, and troubleshooting techniques that seasoned sysadmins expect.

By the end of this article you will have a repeatable blueprint for converting a low‑cost laptop bundle into a self‑hosted DevOps playground that can support everything from a personal GitLab Runner farm to a full‑scale K3s edge cluster.

Keywords: self‑hosted, homelab, DevOps, infrastructure, automation, open‑source, CI/CD, monitoring, Kubernetes, Docker, Ansible, Terraform


Understanding the Topic

What Are We Building?

The core idea is to treat each laptop as a node in a larger, software‑defined infrastructure. Rather than installing a single OS and using the machine for everyday tasks, we will:

  1. Standardize the OS – Ubuntu Server 22.04 LTS (or Rocky Linux 9) across all nodes.
  2. Install a lightweight container runtime – Docker Engine or containerd.
  3. Orchestrate containers – Docker Swarm for simplicity or K3s for a full Kubernetes experience.
  4. Automate provisioning – Ansible playbooks to keep node configuration in a drift‑free state.
  5. Manage infrastructure as code – Terraform to provision cloud‑linked resources (e.g., DNS, VPN).

The result is a distributed, self‑hosted platform that can run CI pipelines, host internal services, and act as a testbed for production workloads.

History and Development

| Year | Milestone | Relevance to Laptop Clusters |
|------|-----------|------------------------------|
| 2013 | Docker released (originally an internal project at dotCloud) | Introduced containerization, enabling lightweight workloads on low‑end hardware. |
| 2015 | Kubernetes 1.0 GA | Provided a declarative API for managing containers at scale, later adapted for edge devices via K3s. |
| 2018 | Ansible 2.5 | Simplified agent‑less configuration management, perfect for heterogeneous laptop fleets. |
| 2019 | K3s 1.0 | A certified Kubernetes distribution optimized for ARM and low‑resource x86, ideal for laptops. |
| 2022 | Terraform 1.3 | Added a robust provider ecosystem for on‑prem resources, allowing hybrid cloud/on‑prem workflows. |

These tools have matured to the point where a nine‑node cluster can be provisioned, monitored, and updated with a handful of commands.

Key Features and Capabilities

| Feature | Docker | K3s | Ansible | Terraform |
|---------|--------|-----|---------|-----------|
| Container orchestration | Swarm mode (native) | Full Kubernetes API | N/A | N/A |
| Agent‑less configuration | N/A | N/A | SSH‑based playbooks | N/A |
| Infrastructure as code | Docker Compose | Helm charts | N/A | HCL (HashiCorp Configuration Language) |
| Resource footprint | ~200 MB daemon | ~150 MB binary | ~30 MB on control node | CLI only |
| Edge‑ready | Limited | Designed for edge | Yes | Yes (via providers) |

Pros and Cons

| Aspect | Advantages | Disadvantages |
|--------|------------|---------------|
| Cost | $20 per laptop → $180 total; low CAPEX | Limited CPU cores, 8 GB RAM per node |
| Scalability | Easy to add more laptops or Raspberry Pis | Network bandwidth limited to 1 Gbps Ethernet (or Wi‑Fi) |
| Power consumption | ~15 W per unit, suitable for 24/7 operation | Battery wear when left constantly plugged in |
| Management overhead | Uniform OS + Ansible reduces drift | Physical maintenance (dust, fan cleaning) |
| Performance | Sufficient for CI jobs, monitoring, small web services | Not suitable for heavy ML training or large DB clusters |

Ideal Use Cases

  1. CI/CD Runner Farm – Deploy GitLab Runner or GitHub Actions self‑hosted runners on each laptop to parallelize builds.
  2. Edge Kubernetes Cluster – Run K3s to host IoT gateways, lightweight micro‑services, or AI inference at the edge.
  3. Network Monitoring Hub – Install Prometheus, Grafana, and Loki to collect metrics from home network devices.
  4. Development Sandbox – Provide isolated environments for developers to test Terraform modules or Helm charts.
  5. Learning Platform – Perfect for certification prep (CKA, CKS, Ansible) without impacting production resources.

The open‑source ecosystem continues to push container runtimes and orchestration tools toward lower resource footprints. Projects like MicroK8s, k0s, and Nomad are gaining traction for edge deployments. As eBPF matures, future monitoring agents will consume even less CPU, making nine laptops a viable observability stack for small teams.

Comparison to Alternatives

| Alternative | Cost | Complexity | Typical Use |
|-------------|------|------------|-------------|
| Single high‑end server (e.g., Dell PowerEdge) | $2,000+ | Moderate (single OS) | Production workloads |
| Cloud VM burst (AWS t3.micro) | $0.01/hr ≈ $7/mo | Low (managed) | Temporary CI jobs |
| Raspberry Pi 4 cluster (4 GB) | $300 for 9 units | Low (ARM) | IoT, learning |
| 9 Lenovo L14 laptops | $180 total | Medium (needs OS sync) | Homelab, edge, CI/CD |

The laptop bundle offers a sweet spot between raw power and flexibility, especially for teams that already have on‑prem networking and want to avoid recurring cloud costs.


Prerequisites

Hardware Requirements

| Item | Minimum Spec | Recommended Spec |
|------|--------------|------------------|
| CPU | AMD Ryzen 5 4500U or Intel i5‑10210U | 4 cores, 8 threads |
| RAM | 8 GB DDR4 | 16 GB (upgrade if possible) |
| Storage | 256 GB SSD | 512 GB SSD or add external HDD for backups |
| Network | 1 Gbps Ethernet (preferred) | Dual NICs for management + data plane |
| Power | AC adapter (keep plugged in) | UPS for graceful shutdowns |

All nine laptops should be identical or close enough to avoid driver inconsistencies.

Software Requirements

| Software | Minimum Version | Purpose |
|----------|-----------------|---------|
| Ubuntu Server | 22.04 LTS | Base OS (or Rocky Linux 9) |
| Docker Engine | 24.0.5 | Container runtime |
| K3s | 1.27.4 | Lightweight Kubernetes |
| Ansible | 2.14 | Configuration management |
| Terraform | 1.6.0 | IaC for cloud resources |
| Git | 2.34 | Source control |
| OpenSSH | 8.9p1 | Remote management |

Network & Security Considerations

  • Static IP addressing – Reserve a /24 subnet (e.g., 192.168.100.0/24) for the cluster.
  • Firewall – Enable ufw on each node, allowing only SSH (22), HTTP/HTTPS (80/443), and the Kubernetes API (6443).
  • VPN – Optional WireGuard tunnel for remote admin access.
  • User Permissions – Create a dedicated devops user with sudo rights; disable password login in favor of SSH keys.
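
To illustrate the optional WireGuard tunnel, here is a minimal `/etc/wireguard/wg0.conf` sketch for the admin workstation. The keys, addresses, and port are placeholders; generate real keys with `wg genkey | tee privatekey | wg pubkey > publickey`:

```ini
; /etc/wireguard/wg0.conf on the admin workstation (sketch)
[Interface]
PrivateKey = <ADMIN_PRIVATE_KEY>
Address    = 10.10.0.1/24
ListenPort = 51820

[Peer]
; One [Peer] section per laptop reachable over the tunnel
PublicKey  = <NODE_PUBLIC_KEY>
AllowedIPs = 10.10.0.2/32
```

Bring the tunnel up with `sudo wg-quick up wg0` and add a matching peer entry on each node you want to administer remotely.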

Pre‑Installation Checklist

  1. Verify the BIOS is set to boot from USB and disable Secure Boot (if the installer requires it).
  2. Update firmware on all laptops (fwupd).
  3. Create a bootable Ubuntu Server 22.04 LTS USB stick.
  4. Generate an SSH key pair (ssh-keygen -t ed25519) for the devops user.
  5. Prepare a CSV inventory for Ansible (hostname, IP, role).
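
The CSV inventory from step 5 can be generated in one pass. This sketch assumes the addressing scheme used later in this guide (master at 192.168.100.10, workers at .11 through .18):

```shell
# Generate nodes.csv (hostname,ip,role) for the nine laptops.
# Assumes master = 192.168.100.10 and workers = .11 through .18.
echo "hostname,ip,role" > nodes.csv
echo "master,192.168.100.10,master" >> nodes.csv
for i in $(seq 1 8); do
  echo "worker${i},192.168.100.$((10 + i)),worker" >> nodes.csv
done
cat nodes.csv
```

Keep this file in version control; it becomes the single source of truth when you later build the Ansible inventory.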

Installation & Setup

1. OS Deployment

Boot each laptop from the Ubuntu Server installer and follow the guided partitioning (use the entire disk). During the “User setup” step:

```shell
# Create the devops user with sudo privileges
adduser devops
usermod -aG sudo devops
mkdir -p /home/devops/.ssh
chmod 700 /home/devops/.ssh
```

Copy the public SSH key to /home/devops/.ssh/authorized_keys and set proper permissions:

```shell
chmod 600 /home/devops/.ssh/authorized_keys
chown -R devops:devops /home/devops/.ssh
```

Repeat for all nine nodes, or use a cloud-init autoinstall configuration to automate the process.
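
On Ubuntu Server 22.04 the unattended path is a cloud-init autoinstall file (Subiquity) rather than a classic preseed. A minimal `user-data` sketch might look like this; the hostname, password hash, and key are placeholders:

```yaml
#cloud-config
autoinstall:
  version: 1
  identity:
    hostname: worker1                      # change per node
    username: devops
    password: "<SHA-512 crypted hash, e.g. from mkpasswd -m sha-512>"
  ssh:
    install-server: true
    authorized-keys:
      - "ssh-ed25519 AAAA... devops@admin"  # your public key
```

Serve this file (together with an empty `meta-data`) from a second USB stick labelled `CIDATA`, or over HTTP, to install all nine nodes hands-free.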

2. Network Configuration

Edit /etc/netplan/01-netcfg.yaml on each node (replace eth0 with the actual interface name):

```yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: no
      addresses:
        - 192.168.100.10/24   # Increment for each node
      routes:
        - to: default
          via: 192.168.100.1
      nameservers:
        addresses: [8.8.8.8, 1.1.1.1]
```

(The older `gateway4` key still works on 22.04 but is deprecated in favor of a default route.)

Apply the configuration:

```shell
sudo netplan apply
```

Verify connectivity:

```shell
ping -c 3 192.168.100.1
```

3. Docker Engine Installation

Run the official Docker convenience script (validated for Ubuntu 22.04):

```shell
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
```

Add the devops user to the docker group:

```shell
sudo usermod -aG docker devops
newgrp docker
```

Test Docker:

```shell
docker run --rm hello-world
```

4. Docker Swarm (Optional)

If you prefer a simple orchestrator, initialize Swarm on the first node (manager):

```shell
docker swarm init --advertise-addr 192.168.100.10
```

The command outputs a join token. On each worker node, run:

```shell
docker swarm join --token <WORKER_JOIN_TOKEN> 192.168.100.10:2377
```

Verify the cluster:

```shell
docker node ls
```

You should see all nine nodes listed with a STATUS of Ready.
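
Once the Swarm is up, a compose file with a `deploy` section spreads replicas across the nodes. This sketch uses the public `traefik/whoami` image as a stand-in workload; swap in your own service:

```yaml
# stack.yml -- one replica per laptop (sketch)
version: "3.8"
services:
  whoami:
    image: traefik/whoami
    deploy:
      replicas: 9
      restart_policy:
        condition: on-failure
    ports:
      - "8080:80"   # reachable on every node via the Swarm routing mesh
```

Deploy it with `docker stack deploy -c stack.yml demo` and confirm placement with `docker service ps demo_whoami`.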

5. K3s Installation (Full Kubernetes)

For a more feature‑rich environment, install K3s on the first node (master):

```shell
curl -sfL https://get.k3s.io | sh -s - server --cluster-init --node-ip 192.168.100.10
```

Retrieve the K3s token:

```shell
sudo cat /var/lib/rancher/k3s/server/node-token
```

On each worker node, install K3s as an agent:

```shell
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.100.10:6443 K3S_TOKEN=<TOKEN> sh -
```

Check the cluster status from the master:

```shell
sudo k3s kubectl get nodes -o wide
```

All nodes should appear Ready.
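
To exercise the cluster, a small Deployment manifest spreads one lightweight pod per laptop. As above, `traefik/whoami` is just a stand-in test image:

```yaml
# whoami.yaml -- spread a lightweight test workload across the K3s nodes
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
spec:
  replicas: 9
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: traefik/whoami
          resources:
            limits:
              memory: "64Mi"   # keep headroom on the 8 GB nodes
```

Apply it with `sudo k3s kubectl apply -f whoami.yaml`, then watch pod placement with `sudo k3s kubectl get pods -o wide`.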

6. Ansible Control Node Setup

On your workstation (or a dedicated management laptop), install Ansible:

```shell
sudo apt update && sudo apt install -y ansible
```

Create an inventory file inventory.ini:

```ini
[masters]
master ansible_host=192.168.100.10

[workers]
worker1 ansible_host=192.168.100.11
worker2 ansible_host=192.168.100.12
worker3 ansible_host=192.168.100.13
worker4 ansible_host=192.168.100.14
worker5 ansible_host=192.168.100.15
worker6 ansible_host=192.168.100.16
worker7 ansible_host=192.168.100.17
worker8 ansible_host=192.168.100.18

[all:vars]
ansible_user=devops
ansible_ssh_private_key_file=~/.ssh/id_ed25519
```

Run a quick ping test:

```shell
ansible -i inventory.ini all -m ping
```

All hosts should return pong.
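
With the inventory verified, a short playbook keeps all nine nodes patched. This is a sketch using only builtin modules; the file name `baseline.yml` is my own choice:

```yaml
# baseline.yml -- apply pending updates to every node
- name: Patch all cluster nodes
  hosts: all
  become: true
  tasks:
    - name: Update apt cache and upgrade packages
      ansible.builtin.apt:
        update_cache: true
        upgrade: dist

    - name: Check whether a reboot is required
      ansible.builtin.stat:
        path: /var/run/reboot-required
      register: reboot_flag

    - name: Reboot when required
      ansible.builtin.reboot:
      when: reboot_flag.stat.exists
```

Run it with `ansible-playbook -i inventory.ini baseline.yml`, ideally from cron or a CI schedule so the fleet stays current without manual SSH sessions.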

7. Terraform Provider for Local Resources

Terraform can manage local resources via the null and local providers. Create a main.tf:

```hcl
terraform {
  required_version = ">= 1.6.0"
  required_providers {
    null = {
      source  = "hashicorp/null"
      version = "~> 3.2"
    }
    local = {
      source  = "hashicorp/local"
      version = "~> 2.4"
    }
  }
}

resource "null_resource" "docker_swarm_init" {
  provisioner "local-exec" {
    command = "ssh -i ~/.ssh/id_ed25519 devops@192.168.100.10 'docker swarm init --advertise-addr 192.168.100.10'"
  }
}
```

Initialize and apply:

```shell
terraform init
terraform apply -auto-approve
```

Terraform will execute the Swarm init command on the manager node, demonstrating infrastructure as code for on‑prem resources.
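
The `local` provider declared above can also render the Ansible inventory from a single list of addresses, so node definitions live in one place. This is a sketch; the variable name and file layout are my own:

```hcl
variable "worker_ips" {
  type    = list(string)
  default = [
    "192.168.100.11", "192.168.100.12", "192.168.100.13", "192.168.100.14",
    "192.168.100.15", "192.168.100.16", "192.168.100.17", "192.168.100.18",
  ]
}

# Write inventory.ini next to the Terraform configuration.
resource "local_file" "ansible_inventory" {
  filename = "${path.module}/inventory.ini"
  content = join("\n", concat(
    ["[masters]", "master ansible_host=192.168.100.10", "", "[workers]"],
    [for idx, ip in var.worker_ips : "worker${idx + 1} ansible_host=${ip}"],
  ))
}
```

After `terraform apply`, point Ansible at the generated file; adding a tenth laptop then becomes a one-line change to `worker_ips`.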

Common Pitfalls & How to Avoid Them

| Symptom | Likely Cause | Fix |
|---------|--------------|-----|
| Docker daemon fails to start | cgroup driver mismatch | Ensure systemd is set as the cgroup driver in /etc/docker/daemon.json. |
| K3s node remains NotReady | Incorrect node-ip or firewall blocking 6443 | Run ufw allow 6443/tcp and verify K3S_URL points to the master's IP. |
| Ansible "UNREACHABLE" error | SSH key not copied or wrong permissions | chmod 600 ~/.ssh/id_ed25519 and ensure authorized_keys contains the public key. |
| Terraform "connection refused" | Provider cannot reach the host (network issue) | Ping the host and check that the firewall allows SSH (port 22). |

Configuration & Optimization

Docker Daemon Tuning

Create /etc/docker/daemon.json with performance‑oriented settings:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "storage-driver": "overlay2",
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Hard": 64000,
      "Soft": 64000
    }
  }
}
```

Restart the daemon with `sudo systemctl restart docker` for the settings to take effect.
This post is licensed under CC BY 4.0 by the author.