Just Got My Cease & Desist Letter From Broadcom: Migrating Away From VMware in 2024

Introduction

The email subject line no infrastructure administrator wants to see: “Notice of License Violation” from Broadcom. For many organizations running VMware virtualization stacks, this has become an unfortunate reality following Broadcom’s acquisition of VMware in November 2023. As highlighted in the Reddit discussion, companies are reporting 400%+ price increases for VMware renewals - an untenable situation for small-to-medium businesses.

This tectonic shift in the virtualization landscape has forced countless organizations to reevaluate their infrastructure strategy. The scenario described - a small manufacturing company with an on-premises Hyper-V migration underway after receiving a $25,000 renewal quote (up from $5,000 three years prior) - reflects a growing industry trend.

For DevOps engineers and system administrators, this presents both challenge and opportunity. The forced migration from VMware ESXi pushes us to:

  1. Re-examine modern virtualization alternatives
  2. Implement infrastructure-as-code practices
  3. Build cloud-agnostic architectures
  4. Optimize resource utilization
  5. Future-proof our environments

In this comprehensive guide, we’ll analyze:

  • The current VMware licensing landscape under Broadcom
  • Production-ready open-source alternatives
  • Migration strategies for vSphere environments
  • Hyper-V deployment best practices
  • Infrastructure automation approaches
  • Cost optimization techniques

Whether you’re managing a 6-VM homelab or enterprise-grade infrastructure, these skills have become essential for modern infrastructure management.

Understanding the Broadcom/VMware Shift

Historical Context

VMware has dominated the virtualization market since the release of ESX 1.0 in 2001. By 2016, VMware controlled over 80% of the hypervisor market. Its technology became the backbone of enterprise data centers with features like:

  • vMotion live migrations
  • Distributed Resource Scheduler (DRS)
  • High Availability (HA) clusters
  • Software-Defined Networking (NSX)

The Broadcom Acquisition Impact

Following the $61 billion acquisition finalized in November 2023, Broadcom immediately:

  1. Discontinued perpetual licenses
  2. Ended all partner reseller agreements
  3. Transitioned to subscription-only model
  4. Consolidated product bundles
  5. Increased prices 3-10x for many customers

As per Broadcom’s “Simplify Your Journey” transition guide, they’ve eliminated:

  • VMware vSphere Standard
  • VMware vCenter Server Standard
  • All Essentials Kits

Existing customers now face:

  • Mandatory transition to VMware Cloud Foundation (VCF)
  • Minimum commitment of 3 years
  • Core-based pricing model
  • Audit enforcement via cease-and-desist letters

The Financial Reality

The Reddit user’s experience mirrors widespread reports:

| Period | License Cost | Increase |
|---------------|--------------|----------|
| 3 Years Ago   | $5,000       | Baseline |
| Current Quote | $25,000      | 400%     |

For many small businesses, this pricing model eliminates VMware as a viable option. The 10-day compliance deadline in cease-and-desist letters creates urgent migration timelines.

Alternatives Comparison Matrix

| Feature | VMware ESXi | Hyper-V | Proxmox VE | KVM/QEMU |
|---------|-------------|---------|------------|----------|
| Hypervisor Type | Bare Metal | Type 1 | Bare Metal | Type 1 |
| Live Migration | vMotion | Storage+VM | Yes | Yes |
| High Availability | Included | Failover Clustering | Yes | Pacemaker |
| Max Host RAM | 24TB | 48TB | 4PB | 4PB |
| vGPU Support | Yes | Limited | Yes | Yes |
| Storage Backends | 20+ | 5 | 15+ | 15+ |
| Licensing Cost (3yr) | $25k+ | Free* | Free | Free |
| Commercial Support | Broadcom | Microsoft | Enterprise | Red Hat |

*Windows Server license required for full Hyper-V features

Prerequisites for Migration

Hardware Requirements

All hypervisors require:

  • 64-bit x86 CPU with VT-x/AMD-V
  • Minimum 8GB RAM (32GB+ recommended)
  • 100GB+ storage for hypervisor

Specific considerations:

Hyper-V:

  • Windows Server 2022 Standard/Datacenter
  • SLAT-capable processor (Intel EPT/AMD RVI)
  • UEFI firmware with Secure Boot
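
Before committing to an install, it's worth confirming the target host actually exposes SLAT and the other virtualization features. A minimal check from an existing Windows installation on that hardware (Get-ComputerInfo requires PowerShell 5.1 or later):

# Report the Hyper-V capability flags: SLAT, VM Monitor Mode extensions, DEP, and firmware virtualization
Get-ComputerInfo -Property "HyperV*"

# The same summary appears at the end of systeminfo output
systeminfo | Select-String "Hyper-V Requirements" -Context 0,4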

Proxmox VE:

  • Debian 12 compatible hardware
  • ZFS support recommended
  • Intel AMT (for out-of-band management)

Network Preparation

  1. Document current VMware network topology:
    # Export VMware networking config
    esxcli network ip interface list
    esxcli network vswitch standard list
    esxcli network vswitch standard portgroup list
    
  2. Ensure physical network supports:
    • VLAN tagging (802.1Q)
    • Jumbo frames (9000 MTU)
    • Link aggregation (LACP)
    • IGMP snooping for multicast
  3. Reserve IP ranges for:
    • Hypervisor management
    • VM migration networks
    • Storage networks (iSCSI/NFS)
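
On the receiving Hyper-V host, the jumbo frame setting above is easy to verify per adapter before cutover. A short sketch, where "Ethernet1" and the ping target stand in for your storage-facing NIC and iSCSI/NFS endpoint:

# Check and set the jumbo packet size (exact value is driver-dependent; 9014 is common for Intel NICs)
Get-NetAdapterAdvancedProperty -Name "Ethernet1" -RegistryKeyword "*JumboPacket"
Set-NetAdapterAdvancedProperty -Name "Ethernet1" -RegistryKeyword "*JumboPacket" -RegistryValue 9014

# Confirm a 9000-byte frame crosses the path unfragmented (8972 = 9000 minus IP/ICMP headers)
ping -f -l 8972 192.168.30.50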

Pre-Migration Checklist

  1. Inventory Documentation:
    • VM names, IPs, resource allocations
    • Storage locations and sizes
    • Network bindings and VLANs
    • Backup schedules and retention
  2. Compatibility Verification:
    # List registered VMs (ID, name, datastore path)
    vim-cmd vmsvc/getallvms | awk '{print $1,$2,$3}'
    
    # Verify guest OS support on target hypervisor
    
  3. Resource Allocation Plan:
    • CPU core pinning requirements
    • Memory reservations
    • Latency-sensitive storage workloads
  4. Downtime Window Scheduling:
    • Critical application maintenance periods
    • User communication plan
    • Rollback procedures
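
Most of the inventory in item 1 can be pulled straight from vCenter or the ESXi host instead of collected by hand. A minimal export sketch, assuming the VMware.PowerCLI module is installed and the server name is adjusted to your environment:

# Connect to vCenter or a standalone ESXi host (hostname is illustrative)
Connect-VIServer -Server vcenter01.example.local

# Dump name, CPU, memory, provisioned storage, power state, and guest IPs to CSV
Get-VM |
    Select-Object Name, NumCpu, MemoryGB, ProvisionedSpaceGB, PowerState,
        @{Name = "IPAddresses"; Expression = { $_.Guest.IPAddress -join ";" }} |
    Export-Csv -Path .\vm-inventory.csv -NoTypeInformation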

Hyper-V Migration Walkthrough

Windows Server 2022 Installation

  1. Prepare installation media:
    # Create bootable USB (Linux example; for reliable UEFI boot a dedicated tool such as Rufus or Ventoy may be needed)
    sudo dd if=Windows_Server_2022.iso of=/dev/sdX bs=4M status=progress
    
  2. Boot with UEFI mode enabled
  3. Select Windows Server 2022 Datacenter (Desktop Experience)
  4. Configure partitions:
    • 100GB OS partition (NTFS)
    • Remaining space for storage pool
  5. Post-installation configuration:
    # Rename server
    Rename-Computer -NewName "HYPERV-01" -Restart
    
    # Enable Hyper-V role
    Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
    
    # Set the High Performance power plan
    powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c
    

Network Configuration

Define virtual switches via PowerShell:

# Create external vSwitch
New-VMSwitch -Name "EXTERNAL-LAN" -NetAdapterName "Ethernet0" -AllowManagementOS $true

# Create private migration network
New-VMSwitch -Name "MIGRATION-NET" -SwitchType Internal
Get-NetAdapter "vEthernet (MIGRATION-NET)" | Rename-NetAdapter -NewName "MIGRATION"
New-NetIPAddress -InterfaceAlias "MIGRATION" -IPAddress 172.16.1.1 -PrefixLength 24
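
If the physical uplinks carry tagged traffic, the management vNIC (and later each VM adapter) needs a VLAN assignment matching the topology documented earlier. A small sketch, with VLAN 10 as a placeholder:

# Tag the management OS adapter that New-VMSwitch created on the external switch
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "EXTERNAL-LAN" -Access -VlanId 10

# Verify switch and VLAN configuration
Get-VMSwitch | Format-Table Name, SwitchType, NetAdapterInterfaceDescription
Get-VMNetworkAdapterVlan -ManagementOS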

Storage Configuration

# Create storage pool
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "HYPERV-POOL" -StorageSubsystemFriendlyName "Windows Storage*" -PhysicalDisks $disks

# Create virtual disk
New-VirtualDisk -StoragePoolFriendlyName "HYPERV-POOL" -FriendlyName "VMSTORAGE" -Size 10TB -ResiliencySettingName Mirror -ProvisioningType Fixed

# Initialize, partition, and format the volume
Get-VirtualDisk -FriendlyName "VMSTORAGE" | Get-Disk | Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize -DriveLetter V
Format-Volume -DriveLetter V -FileSystem ReFS -AllocationUnitSize 65536 -NewFileSystemLabel "VMSTORE"

VMware to Hyper-V Conversion

  1. Export VMs from ESXi:
    # Power off VM first
    vim-cmd vmsvc/power.off $VMID
    
    # Export OVA package
    ovftool --noSSLVerify vi://$ESXI_USER:$ESXI_PASSWORD@$ESXI_IP/$VM_NAME /export/$VM_NAME.ova
    
  2. Convert using Microsoft Virtual Machine Converter:
    Import-Module "C:\Program Files\Microsoft Virtual Machine Converter\MvmcCmdlet.psd1"
    # Extract the OVA (a tar archive) first; MVMC converts the VMDK inside it, not the OVA itself (disk filename varies)
    tar -xf C:\export\vm01.ova -C C:\export\
    ConvertTo-MvmcVirtualHardDisk -SourceLiteralPath "C:\export\vm01-disk1.vmdk" -DestinationLiteralPath "V:\VMs\" -VhdType DynamicHardDisk -VhdFormat Vhdx
    
  3. Create Hyper-V VM:
    New-VM -Name "VM01" -MemoryStartupBytes 8GB -Generation 2 -VHDPath "V:\VMs\vm01.vhdx" -Path "V:\VMs\VM01"
    Set-VMProcessor -VMName "VM01" -Count 4
    Connect-VMNetworkAdapter -VMName "VM01" -SwitchName "EXTERNAL-LAN"
    
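One caveat on the New-VM command in step 3: Generation 2 VMs boot via UEFI, so a source VM that used legacy BIOS on ESXi will not boot from the converted disk as-is. A hedged sketch of the two usual adjustments:

# If the source VM booted via BIOS, create a Generation 1 VM instead
New-VM -Name "VM01" -MemoryStartupBytes 8GB -Generation 1 -VHDPath "V:\VMs\vm01.vhdx" -Path "V:\VMs\VM01"

# For Generation 2 Linux guests, switch to the UEFI CA Secure Boot template (or disable Secure Boot)
Set-VMFirmware -VMName "VM01" -EnableSecureBoot On -SecureBootTemplate MicrosoftUEFICertificateAuthority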

Cluster Configuration (Optional)

For high availability:

# Install Failover Clustering
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

# Validate cluster configuration
Test-Cluster -Node HYPERV-01,HYPERV-02 -ReportName "C:\ClusterValidation.html"

# Create cluster
New-Cluster -Name HYPERV-CLUSTER -Node HYPERV-01,HYPERV-02 -StaticAddress 192.168.1.100

# Configure Cluster Shared Volumes (CSV)
Add-ClusterSharedVolume -Name "Cluster Disk 1"
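
A two-node cluster also needs a witness to keep quorum when one host is down. A file share witness is the simplest option for a small shop; the UNC path below is illustrative:

# Point quorum at a small SMB share hosted outside the cluster
Set-ClusterQuorum -Cluster HYPERV-CLUSTER -FileShareWitness "\\FILESERVER01\ClusterWitness"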

Proxmox VE Alternative Setup

Bare-Metal Installation

  1. Download ISO from Proxmox VE Official Site
  2. Boot from installation media
  3. Configure storage:
    • ZFS recommended for production
    • RAID level based on workload:
      RAID10: High performance
      RAIDZ1: Balanced capacity/performance
      RAIDZ2: High redundancy
      
  4. Set network:
    Management IP: 192.168.1.10/24
    Gateway: 192.168.1.1
    DNS: 8.8.8.8
    

Post-Installation Configuration

  1. Update repositories:
    sed -i 's|security.debian.org|mirror.rackspace.com|g' /etc/apt/sources.list
    # Proxmox VE 8 is based on Debian 12 ("bookworm"); the enterprise repo requires an active subscription
    echo "deb https://enterprise.proxmox.com/debian/pve bookworm pve-enterprise" > /etc/apt/sources.list.d/pve-enterprise.list
    # Without a subscription, use the pve-no-subscription repository instead
    apt update && apt dist-upgrade -y
    
  2. Configure storage via CLI:
    # Add NFS share
    pvesm add nfs nfs_vmstore --server 192.168.1.50 --export /mnt/vmstore --content images,iso
    
    # Create ZFS pool
    zpool create -f -o ashift=12 vm_pool /dev/sdb /dev/sdc /dev/sdd /dev/sde
    

VM Migration Using QEMU

  1. Export VMware disks as raw format:
    qemu-img convert -f vmdk -O raw vm01.vmdk vm01.raw
    
  2. Transfer to Proxmox storage:
    scp vm01.raw root@proxmox:/var/lib/vz/images/100/
    
  3. Create Proxmox VM:
    qm create 100 --name vm01 --memory 8192 --cores 4 --net0 virtio,bridge=vmbr0
    qm importdisk 100 vm01.raw local-zfs
    qm set 100 --scsihw virtio-scsi-pci --scsi0 local-zfs:vm-100-disk-0
    # Boot from the imported disk
    qm set 100 --boot order=scsi0
    qm start 100
    

Infrastructure-as-Code Implementation

Terraform Hypervisor Management

Define infrastructure in version-controlled configs:

# hyperv.tf
resource "hyperv_virtual_switch" "external" {
  name = "EXTERNAL-LAN"
  notes = "Main network bridge"
  allow_management_os = true
}

resource "hyperv_virtual_machine" "app_server" {
  name = "APP01"
  generation = 2
  memory_startup_bytes = 8589934592 # 8 GB

  network_adaptors {
    name = "NIC1"
    switch_name = hyperv_virtual_switch.external.name
  }

  hard_disk_drives {
    path = "V:\\VMs\\APP01\\disk0.vhdx"
    size = 10737418240 # 10GB
  }
}

Ansible Configuration Management

Automate post-migration configuration:

# hyperv_config.yml
- name: Configure Hyper-V host
  hosts: hyperv_servers
  tasks:
    - name: Set power plan
      win_shell: powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c

    - name: Configure NUMA spanning
      win_shell: Set-VMHost -NumaSpanningEnabled $true

    - name: Enable Live Migration
      win_shell: |
        Enable-VMMigration
        Set-VMHost -UseAnyNetworkForMigration $true
        Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos
        Set-VMHost -VirtualMachineMigrationPerformanceOption SMB

    - name: Set storage paths
      win_shell: |
        Set-VMHost -VirtualHardDiskPath "V:\VHDs"
        Set-VMHost -VirtualMachinePath "V:\VMs"

Performance Optimization Techniques

Hyper-V Tuning

  1. Control SMT (Hyper-Threading) exposure to guests (a value of 1 presents one thread per core to the VM; 0 inherits the host topology):
    Set-VMProcessor -VMName $VM_NAME -HwThreadCountPerCore 1
    
  2. Configure Dynamic Memory:
    Set-VMMemory -VMName $VM_NAME -DynamicMemoryEnabled $true -MinimumBytes 2GB -MaximumBytes 16GB -StartupBytes 4GB
    
  3. Storage QoS Policies:
    New-StorageQosPolicy -Name "Gold" -MinimumIops 1000 -MaximumIops 5000 -PolicyType Dedicated
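
The policy only takes effect once its ID is attached to a VM's virtual disks, and Storage QoS generally requires the VM storage to live on a Cluster Shared Volume or Scale-Out File Server. A sketch using the "Gold" policy above:

# Attach the policy to every disk of a VM (VM name is illustrative)
$policy = Get-StorageQosPolicy -Name "Gold"
Get-VMHardDiskDrive -VMName "VM01" | Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId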
    
This post is licensed under CC BY 4.0 by the author.