Did I Just Strike Gold? Found Two Amfeltec PCIe Carrier Boards With 4X 1TB Samsung 960 Pros In A $10 Flea Market Junk Box
Introduction
The thrill of discovering high-performance hardware in unexpected places is every sysadmin's hidden fantasy. When a $10 box of "junk" yielded two Amfeltec carrier boards loaded with eight 1TB Samsung 960 Pro NVMe SSDs, it became more than luck: it became a masterclass in repurposing enterprise-grade hardware for modern DevOps workflows.
For homelab enthusiasts and self-hosted infrastructure architects, such finds represent critical opportunities to:
- Deploy high-IOPS storage without cloud costs
- Experiment with PCIe bifurcation and hardware passthrough
- Build durable storage arrays using MLC NAND – a rarity in today’s TLC/QLC-dominated market
This guide explores how to transform reclaimed enterprise storage into a hyper-converged powerhouse, covering:
- Technical analysis of Amfeltec’s PCIe carrier architecture
- Maximizing endurance on aged-but-legendary Samsung 960 Pro SSDs
- Implementation patterns for Kubernetes persistent storage
- Real-world performance tuning for NVMe arrays
Understanding the Technology
Amfeltec PCIe Carrier Boards (SKU-086-34)
These PCIe 3.0 x16 carrier boards implement hardware multiplexing to host four M.2 NVMe SSDs per card. Unlike software-based solutions, Amfeltec’s design:
- Uses an onboard PCIe packet switch rather than relying on host-side lane splitting
- Requires no special driver support (appears as four discrete NVMe devices)
- Supports bifurcation-unaware motherboards
Key Specifications:
| Parameter      | Value                |
|----------------|----------------------|
| PCIe Version   | Gen 3.0 (8 GT/s)     |
| Host Interface | x16 (split to 4x x4) |
| SSD Support    | M.2 22110/2280 NVMe  |
| Power Delivery | 12V via PCIe + SATA  |
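A quick way to confirm the switch-based design on a live system is to look at the PCIe topology: each carrier should show up as a bridge with four downstream ports, one NVMe controller behind each. A minimal check (no vendor tooling assumed; bus addresses will differ per host):
# Print the PCIe device tree; each carrier appears as a switch with
# four NVMe controllers hanging off its downstream ports
lspci -tv

# Count the NVMe controllers (PCI class 0108) the kernel can see
lspci -d ::0108 | wc -l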
Samsung 960 Pro 1TB NVMe SSDs
These 2016-era flagships remain prized for:
- MLC NAND: 3,000 P/E cycles (vs. 1,000 in modern TLC)
- V-NAND Architecture: 48-layer 3D stacking
- Sustained Performance: 3,500/2,100 MB/s seq. R/W
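Those sequential figures are easy to sanity-check on a salvaged drive before trusting it in an array. A rough, read-only pass with direct I/O (a sketch assuming the drive sits at /dev/nvme0n1 and is otherwise idle):
# Sequential read sanity check that bypasses the page cache (non-destructive)
sudo dd if=/dev/nvme0n1 of=/dev/null bs=1M count=8192 iflag=direct status=progress
Throughput far below the rated figure usually points at a degraded link width, thermal throttling, or a failing drive, and is worth chasing down before building the array.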
Endurance Metrics:
# Check remaining lifespan and total host reads/writes via SMART
sudo smartctl -a /dev/nvme0n1 | grep -E "Percentage Used|Data Units"
Percentage Used: 13%
Data Units Read: 15,246,123
Data Units Written: 22,561,495
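Those raw counters convert directly into bytes written: per the NVMe specification, one data unit is 1,000 512-byte blocks (512,000 bytes), so the drive above has absorbed roughly 11.5 TB against the 1TB 960 Pro's rated 800 TBW. A small sketch of the conversion (the awk field positions assume smartctl's standard output format):
# Convert "Data Units Written" into terabytes (1 data unit = 512,000 bytes)
sudo smartctl -a /dev/nvme0n1 | awk '/Data Units Written/ {
    gsub(/,/, "", $4);                      # strip thousands separators
    printf "%.2f TB written\n", $4 * 512000 / 1e12
}'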
Prerequisites
Hardware Requirements
- PCIe 3.0 x16 slot (x8 electrically acceptable with performance penalty)
- Supplemental SATA power connectors (12V/2A per board minimum)
- Adequate cooling (4x NVMe SSDs can dissipate 15W+ under load)
Software Requirements
- Linux 4.15+ kernel (mature NVMe driver stack; NVMe-oF support if you later export the array over the network)
- nvme-cli v1.8+
- mdadm or OpenZFS (2.0+ if you want the zstd compression used below) for software RAID
Verification Checklist:
- Confirm PCIe lane allocation:
lspci -vv -s $PCI_ADDRESS | grep LnkSta
        LnkSta: Speed 8GT/s, Width x16
- Validate SSD health:
sudo nvme smart-log /dev/nvme0n1
Installation & Configuration
Hardware Initialization
- Power the carrier board properly:
- Connect both PCIe slot and SATA power
- Verify stable power by confirming the drives enumerate without errors:
sudo dmesg | grep "nvme nvme"
[    7.123456] nvme nvme0: pci $PCI_ADDRESS
[    7.123789] nvme nvme0: 4/0/0 default/read/poll queues
- Configure PCIe bifurcation if your platform exposes it (optional; the onboard switch already splits the lanes):
BIOS Settings → PCIe Configuration → x16 Slot → 4x4x4x4
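With both boards powered, a quick sanity check confirms every namespace is visible and each controller negotiated a Gen 3 x4 link. A sketch, assuming the eight drives enumerate as nvme0 through nvme7:
# List every NVMe controller and namespace the kernel detected
sudo nvme list

# Check negotiated speed/width for each NVMe endpoint behind the switches
for addr in $(lspci -d ::0108 | awk '{print $1}'); do
    echo "== $addr =="
    sudo lspci -vv -s "$addr" | grep LnkSta:
done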
Filesystem Optimization
For mixed read/write workloads typical in DevOps environments:
# XFS with optimal stripe alignment (su = 64 KiB stripe unit, sw = 4 data drives)
mkfs.xfs -d su=64k,sw=4 -l version=2,su=64k /dev/md0

# Mount options for the NVMe array (add to /etc/fstab)
/dev/md0  /mnt/array  xfs  defaults,noatime,nodiratime,discard,logbufs=8,logbsize=256k  0 2
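The su=64k,sw=4 geometry above assumes /dev/md0 is a four-drive striped array with a 64 KiB chunk. If the array does not exist yet, a minimal sketch of creating it (device names are assumptions; adjust to your enumeration, and remember RAID 0 has no redundancy):
# Four-drive RAID 0 with a 64 KiB chunk, matching su=64k,sw=4
sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 --chunk=64 \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

# Persist the array definition so it assembles at boot (Debian/Ubuntu paths)
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
The ZFS layout below trades half the capacity for mirroring; pick based on how replaceable the data is.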
ZFS Alternative Configuration:
zpool create -o ashift=12 tank mirror nvme0n1 nvme1n1 mirror nvme2n1 nvme3n1
zfs set compression=zstd-9 tank
zfs set atime=off tank
Performance Tuning
Kernel-Level Optimizations
# /etc/sysctl.d/99-nvme-optimize.conf
vm.dirty_ratio = 10
vm.dirty_background_ratio = 5

# nvme_core.io_timeout is a module parameter, not a sysctl;
# set it in /etc/modprobe.d/nvme.conf instead:
options nvme_core io_timeout=300
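Applying both without a full reboot takes two commands; the initramfs rebuild matters because nvme_core loads early in boot (assuming Debian/Ubuntu-style tooling):
sudo sysctl --system          # reload all sysctl drop-in files
sudo update-initramfs -u      # bake the nvme_core option into the initramfs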
IRQ Balancing
# Pin every NVMe IRQ to CPU core 3 (example; pick cores on the same NUMA node as the slot)
for i in $(grep nvme /proc/interrupts | awk '{print $1}' | sed 's/://'); do
    echo 3 > /proc/irq/$i/smp_affinity_list
done
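If irqbalance is running it will periodically rewrite these affinity masks, so either stop it or exclude the NVMe IRQs from balancing. The blunt option, as a sketch:
# Stop irqbalance so the manual affinity settings stick
sudo systemctl disable --now irqbalance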
Usage Patterns for DevOps
Kubernetes Persistent Storage
# storage-class-nvme.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nvme-raid0
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
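Because kubernetes.io/no-provisioner does no dynamic provisioning, each filesystem carved out of the array must be published as a PersistentVolume by hand. A minimal sketch, with the node name storage-node-1 and the mount path /mnt/array/pv0 as placeholder assumptions:
# pv-nvme-local.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nvme-local-pv0
spec:
  capacity:
    storage: 500Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nvme-raid0
  local:
    path: /mnt/array/pv0
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - storage-node-1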
Ansible Provisioning Playbook
- name: Configure NVMe array
  hosts: storage_nodes
  tasks:
    - name: Update initramfs for NVMe
      command: update-initramfs -u
      when: ansible_kernel == '5.4.0-162-generic'

    - name: Enable IOMMU for passthrough
      lineinfile:
        path: /etc/default/grub
        regexp: '^GRUB_CMDLINE_LINUX='
        line: 'GRUB_CMDLINE_LINUX="intel_iommu=on iommu=pt"'
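One gap worth closing: lineinfile only edits /etc/default/grub, so GRUB must be regenerated and the node rebooted before intel_iommu=on takes effect. A follow-on play sketch (task names are illustrative):
- name: Apply IOMMU kernel parameters
  hosts: storage_nodes
  tasks:
    - name: Regenerate GRUB configuration
      command: update-grub

    - name: Reboot to pick up intel_iommu=on
      reboot:
        reboot_timeout: 600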
Troubleshooting
Common Issues and Solutions
Problem: NVMe devices not detected
dmesg | grep -i "Controller reset"
Solution: Power-cycle the board with the supplemental SATA power connector attached
Problem: Performance degradation during sustained writes
iostat -x 1
# Watch the %util and await columns for the nvme devices
Solution: Implement write throttling:
# Lower the writeback throttling latency target (microseconds) to rein in background writes
echo 100 > /sys/block/nvme0n1/queue/wbt_lat_usec
Conclusion
Salvaged enterprise hardware like Amfeltec carrier boards paired with Samsung’s MLC-based NVMe drives offers exceptional value for:
- Building high-density Kubernetes storage nodes
- Creating low-latency testbeds for distributed systems
- Developing cost-effective CI/CD infrastructure
Next Steps:
- Benchmark with fio using real DevOps workloads (a starting job is sketched below)
- Implement SMART monitoring with Prometheus NVMe exporter
- Explore hardware passthrough to KVM/QEMU VMs
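As a starting point for that fio run, a mixed random read/write job against the mounted array; the 4 KiB block size, 70/30 mix, and /mnt/array path are assumptions to tune toward your real workload:
# 70/30 random read/write at 4 KiB, QD32 x 4 jobs, direct I/O, 2-minute run
fio --name=devops-mix --directory=/mnt/array \
    --rw=randrw --rwmixread=70 --bs=4k \
    --ioengine=libaio --iodepth=32 --numjobs=4 \
    --size=8G --runtime=120 --time_based \
    --direct=1 --group_reporting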
The true value in such finds lies not just in the hardware’s sticker price, but in the engineering education gained through implementing enterprise-grade solutions on reclaimed infrastructure.