
2.5G and a Few Mods on My Proxmox Mini PC: Lenovo P360 Tiny


Introduction

The surge of interest in affordable, low‑power homelab platforms has pushed many enthusiasts to repurpose compact workstations as full‑featured virtualization hosts. The Lenovo ThinkStation P360 Tiny, with its small footprint and decent expansion options, has become a popular base for self‑hosted environments. This article dissects the practical steps required to turn a P360 Tiny into a reliable Proxmox VE host that can comfortably run multiple high‑performance virtual machines.

Readers will explore why adding a 2.5 Gb Ethernet adapter, augmenting cooling, and expanding storage are not merely cosmetic upgrades but critical moves that unlock higher network throughput, better thermal stability, and scalable disk layouts. The guide also covers the rationale behind increasing RAM to 64 GB, configuring Proxmox networking, and optimizing VM resource allocation.

By the end of this piece you will understand:

  • How each hardware modification aligns with Proxmox’s design goals.
  • The exact configuration changes needed to make the new NIC, fan, and SSDs usable within Proxmox.
  • Best practices for VM creation, storage pool management, and network bonding.
  • Real‑world performance expectations and troubleshooting tips for common pitfalls.

Keywords such as self‑hosted, homelab, DevOps, infrastructure automation, and open‑source appear throughout to support search visibility while keeping the focus on actionable technical content.

Understanding the Topic

What is Proxmox VE?

Proxmox Virtual Environment (VE) is an open‑source platform that combines KVM hypervisor, LXC containers, and software‑defined storage into a single management interface. It is widely adopted in homelab and small‑business settings because it offers a unified web UI, robust API, and the ability to run both full VMs and lightweight containers.

Historical Context

Proxmox VE was first released in 2008 as a Debian‑based virtualization platform. Over the past decade the project has tracked new kernel features and steadily expanded its clustering capabilities. The 8.x releases brought a newer kernel and improved GPU passthrough, making more demanding workloads viable on modest hardware.

Key Features Relevant to This Build

| Feature | Why It Matters for a Mini PC | Typical Use Cases |
|---|---|---|
| KVM with hardware acceleration | Enables near‑native performance for Windows and Linux guests | Development environments, CI pipelines |
| LXC containers | Lower overhead for stateless services | CI runners, monitoring agents |
| Ceph or ZFS storage stacks | Flexible pool management and data integrity | Multi‑node storage, backup targets |
| Built‑in network bonding and VLAN support | Allows aggregation of multiple NICs or VLAN tagging | High‑throughput LAN, isolated VM networks |
| Web UI + API | Simplifies provisioning and monitoring | Automated deployments via Ansible or Terraform |
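The Web UI + API row is what makes automated deployments practical. As a language‑agnostic illustration (pure Python standard library; the helper and its name are hypothetical, not part of any Proxmox API), a provisioning script can compose `qm` invocations from a declarative spec:

```python
# Hypothetical helper (not a Proxmox API): compose a `qm create` command
# from a declarative spec so provisioning can be scripted or templated.
def build_qm_create(vmid: int, spec: dict) -> list:
    cmd = ["qm", "create", str(vmid)]
    for key, value in sorted(spec.items()):  # stable flag order
        cmd += [f"--{key}", str(value)]
    return cmd

if __name__ == "__main__":
    print(" ".join(build_qm_create(101, {"name": "ubuntu-vm", "memory": 4096, "cores": 2})))
    # qm create 101 --cores 2 --memory 4096 --name ubuntu-vm
```

Real automation would normally go through the REST API or the Ansible/Terraform providers mentioned above; this sketch only shows the declarative idea.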

Pros and Cons of Using a Mini PC for Proxmox

Pros

  • Low power consumption – often under 30 W idle.
  • Small physical footprint fits into tight spaces.
  • Cost‑effective entry point for learning virtualization.

Cons

  • Limited PCIe lanes – may require creative use of M.2 slots.
  • Thermal headroom can be constrained under sustained loads.
  • Memory ceiling may restrict large‑scale VM fleets without upgrades.

Real‑World Success Stories

Several community members have documented successful migrations from traditional rack servers to compact platforms like the Intel NUC or, in this case, the Lenovo P360 Tiny. In many instances, adding a 2.5 Gb NIC and upgrading RAM to 64 GB transformed the device from a “toy” into a production‑grade hypervisor capable of hosting multiple CI runners, a GitLab Runner, and a personal Nextcloud instance without noticeable latency.

Comparison to Alternatives

| Platform | Typical RAM | PCIe Slots | 2.5 Gb NIC Availability | Typical Cost |
|---|---|---|---|---|
| Lenovo P360 Tiny | Up to 64 GB | 1 (occupied by GPU) | Requires M.2 A+E adapter | $300–$500 |
| Intel NUC 13 | Up to 32 GB | 2 | Often built‑in | $400–$600 |
| Raspberry Pi 5 | 8 GB max | 0 | No native 2.5 Gb | $75–$100 |

While the NUC offers more PCIe lanes, the P360 Tiny’s price point and existing GPU make it an attractive option when paired with targeted modifications.

Prerequisites

Hardware Requirements

| Component | Minimum Specification | Recommended |
|---|---|---|
| CPU | Intel Xeon E‑2224 or equivalent | Intel Core i5‑1240P |
| RAM | 32 GB | 64 GB DDR4 |
| Storage | 2 × 2.5" SSD (≈500 GB each) | 2 × M.2 NVMe SSD (1 TB each) |
| Network | 1 GbE onboard | 2.5 GbE NIC via M.2 A+E |
| Cooling | Stock fan | Additional 60 mm Noctua fan |
| Power | 65 W PSU | 90 W PSU for sustained loads |

All components should be compatible with the P360 Tiny’s chassis and power budget. Verify that the M.2 slot used for the NIC is of type A+E (key E) to accommodate the Realtek RTL8125‑B 2.5 Gb adapter.

Software Requirements

| Item | Version | Notes |
|---|---|---|
| Proxmox VE | 8.2 or later | Use the latest stable ISO for security patches |
| Debian | 12 (Bookworm) | Base OS for Proxmox; ensure all repositories are enabled |
| Firmware | Latest BIOS | Update via Lenovo Vantage to enable M.2 hot‑plug support |
| SSH | OpenSSH 9.x | For remote management; disable root login if possible |
| Text Editor | nano or vim | Preference for configuration editing |

Network and Security Considerations

  • Assign a static IP to the management interface (e.g., 192.168.1.10) to avoid DHCP churn.
  • Enable firewall rules that restrict inbound traffic to trusted management networks only.
  • Consider VLAN tagging for VM traffic isolation if multiple networks are required.
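As an illustration of the firewall bullet above, a cluster‑level rule set might look like the sketch below; the subnet and the rules themselves are examples for this article, not taken from an actual deployment:

```
# /etc/pve/firewall/cluster.fw -- illustrative example
[OPTIONS]
enable: 1

[RULES]
# Allow the web UI (8006) and SSH (22) only from the trusted management subnet
IN ACCEPT -source 192.168.1.0/24 -p tcp -dport 8006
IN ACCEPT -source 192.168.1.0/24 -p tcp -dport 22
```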

User Permissions

  • Create a dedicated pveadmin user with sudo privileges for routine operations.
  • Use role‑based access control (RBAC) in the web UI to limit destructive actions to authorized accounts.
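The two points above can also be done from the CLI with `pveum`; the user name follows the pveadmin example, while the role choice is an assumption for illustration:

```bash
# Assumes a matching Linux account already exists for the PAM realm
adduser pveadmin
pveum user add pveadmin@pam
# Grant admin rights at the root of the permission tree
pveum aclmod / -user pveadmin@pam -role Administrator
```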

Installation & Setup

Step 1 – Prepare Installation Media

Download the latest Proxmox VE ISO from the official repository. Verify the checksum to ensure integrity.

```bash
# Check proxmox.com/en/downloads for the current ISO URL before running
wget https://repo.proxmox.com/proxmox/8.x/iso-live/pve-enterprise-8.2.iso
sha256sum pve-enterprise-8.2.iso
```

Burn the ISO to a USB stick using dd or a tool like Rufus on a separate workstation.

Step 2 – BIOS Configuration

Enter the Lenovo BIOS (F1 during boot) and:

  1. Enable “Virtualization Technology (VT‑x/AMD‑V)”.
  2. Set “Secure Boot” to “Disabled” (required for Proxmox).
  3. Configure the M.2 slot to “PCIe” mode to ensure the NIC is recognized at boot.

Step 3 – Disk Layout

During installation, select the primary SSD for the Proxmox boot partition. Use the second SSD for VM storage. Create a dedicated partition for VM images and another for container root filesystems if desired.

Step 4 – Network Installation

When prompted for network configuration, choose “No network” if you plan to configure bonding later, or set a static IP for initial access.

Step 5 – Post‑Installation Reboot

After installation, reboot into the new Proxmox installation.

```bash
reboot
```

Step 6 – Verify Hardware Detection

Access the console (via the attached monitor, or remotely if Intel vPro/AMT is provisioned) and run `lspci` to confirm the presence of the new 2.5 Gb NIC.

```bash
lspci | grep -i Ethernet
```

You should see an entry similar to “Realtek RTL8125/RTL8125B”.

Step 7 – Install Additional Cooling

Mount the Noctua 60 mm fan using the provided screws and connect its 4‑pin PWM connector to the free fan header. Verify fan speed via `pwmconfig` (part of the fancontrol package).
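If temperature and fan readings are not visible yet, the lm‑sensors tooling from Debian's repositories can expose them; a quick check might look like:

```bash
apt install -y lm-sensors fancontrol   # Debian packages
sensors-detect                         # probe for hwmon chips (answer the prompts)
sensors                                # list temperatures and fan RPM
```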

Step 8 – Configure Bonded Network

Create a bonding interface named bond0 that combines the onboard 1 GbE port and the new 2.5 Gb NIC. Note that 802.3ad (LACP) requires switch support and generally expects same‑speed members; active‑backup is a safer fallback on unmanaged switches.

```bash
cat <<EOF > /etc/network/interfaces.d/bond0.cfg
auto bond0
iface bond0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-miimon 100
    bond-primary eno1
EOF
```

Replace eno1 and eno2 with the actual interface names identified by ip link.

Apply the configuration (on Proxmox, which ships ifupdown2, `ifreload -a` also works without a full restart):

```bash
systemctl restart networking
```

Verify connectivity with ping 8.8.8.8.
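Beyond a simple ping, the kernel's bonding status file shows whether LACP actually negotiated (interface names as configured above):

```bash
cat /proc/net/bonding/bond0   # shows mode, MII status, and active slaves
ip -br link show bond0        # brief link state
```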

Step 9 – Set Up Storage Pools

Create a ZFS pool for VM disks using the second SSD.

```bash
# Replace with the actual /dev/disk/by-id path of the second SSD
zpool create vmpool /dev/disk/by-id/usb-SSD2
zfs set compression=lz4 vmpool
```

Add the pool to Proxmox by editing /etc/pve/storage.cfg.

```
zfspool: vmpool
    pool vmpool
    content images,rootdir
```

Refresh the storage list in the web UI.
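A quick sanity check that both ZFS and Proxmox see the pool:

```bash
zpool status vmpool   # pool health and vdev layout
pvesm status          # Proxmox's view of all storages, including vmpool
```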

Step 10 – Verify System Resources

Navigate to the Proxmox web UI, select the host, and confirm that RAM shows 64 GB, CPU cores are recognized, and the two storage pools are online.

Configuration & Optimization

Optimizing VM Creation

When provisioning VMs, allocate CPU and memory based on actual workload requirements. For high‑throughput tasks, enable CPU host‑passthrough to reduce overhead.

```
# Illustrative /etc/pve/qemu-server/<vmid>.conf excerpt
cores: 4
cpu: host      # host passthrough for near-native performance
numa: 0
memory: 8192
balloon: 0     # disable memory ballooning
```

Use `qm set` to adjust parameters after creation.

Security Hardening

  • Disable unnecessary services (e.g., pve-ha-lrm and pve-ha-crm if HA is not used).
  • Apply firewall rules that only allow SSH from trusted IPs.
  • Keep AppArmor enabled (Debian’s default mandatory access control framework; SELinux is not shipped with Proxmox).

Performance Tuning

  • Consider raising zfs_vdev_async_write_max_active (e.g., toward 1000) to improve write concurrency on fast SSDs; the default is conservative.
  • Lower zfs_txg_timeout (default 5 s) only with care; shorter transaction group intervals trade throughput for latency.
  • Adjust net.core.rmem_max and net.core.wmem_max to accommodate 2.5 Gb throughput:

```bash
sysctl -w net.core.rmem_max=2500000
sysctl -w net.core.wmem_max=2500000
```

Persist these settings in /etc/sysctl.d/99-proxmox-tuning.conf.
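Persisted, the settings above would look like this (file name as given in the text, values as in the sysctl example):

```
# /etc/sysctl.d/99-proxmox-tuning.conf
net.core.rmem_max = 2500000
net.core.wmem_max = 2500000
```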

Monitoring and Alerts

Integrate Prometheus + node_exporter for metric collection, and configure alertmanager to notify when CPU temperature exceeds 80 °C or when RAM utilization surpasses 90 %.

```yaml
# Sample Prometheus alerting rule
groups:
  - name: host-health
    rules:
      - alert: HighTemperature
        expr: node_thermal_zone_temp{zone="0"} > 80
        for: 2m
        labels:
          severity: warning
        annotations:
          summary: "Host temperature critical"
          description: "Temperature on {{ $labels.instance }} exceeded 80°C"
```

Usage & Operations

Creating and Managing VMs

Use the web UI or the `qm` CLI to create VMs. Note that installer ISOs are attached as CD‑ROM drives rather than imported as disks; the example assumes the ISO has been uploaded to the `local` storage's ISO library.

```bash
qm create 101 --name ubuntu-vm --memory 4096 --cores 2 --scsi0 vmpool:64
qm set 101 --ide2 local:iso/ubuntu-22.04-live-server-amd64.iso,media=cdrom
qm set 101 --boot 'order=ide2;scsi0'
qm start 101
```

Backup Strategies

Schedule incremental backups of VM disks using `vzdump`. The target storage must allow the `backup` content type (a directory or Proxmox Backup Server storage); a ZFS image pool such as `vmpool` cannot hold backup archives. Example:

```bash
vzdump 101 --mode snapshot --storage local --compress zstd
```
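Restores go through `qmrestore`; the archive path below is an illustrative pattern, not a file produced by this guide:

```bash
# Restore a backup archive into a new VMID on the ZFS pool
qmrestore /var/lib/vz/dump/vzdump-qemu-101-backup.vma.zst 102 --storage vmpool
```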
This post is licensed under CC BY 4.0 by the author.