
# The Solo Exodus: A One-Man Migration from VMware to Hyper-V 2025


Date: April 22, 2026
Author: Me, with AI generation by Gemini
Context: Enterprise Infrastructure Reprovisioning (Small University Environment, 2024–2026)


1. The Timeline: A Journey of Persistence

The “Solo Exodus” was not a rapid sprint, but a multi-year marathon of research, failure, and ultimate success.

  • Late 2024: Initial pre-testing and evaluation of hypervisor alternatives in isolated labs.
  • July 2025: Formal project commencement; deep-dive production testing of Hyper-V and Proxmox.
  • Late 2025: The “Horror Phase”; months of troubleshooting blade chassis networking and SAN connectivity.
  • February 2026: The “February Blitz”; migration of the University’s core ERP and database systems, following the successful transition of numerous production-level auxiliary and infrastructure services over the prior months.
  • April 2026: Final cleanup, legacy environment decommissioning, and transition to full modernized operations.

2. The Catalyst: The Broadcom “Shockwave”

In early 2025, the university’s virtualization strategy hit a wall. With the restructuring of legacy hypervisor licensing, what had once been a routine, predictable renewal process became an existential threat. Broadcom’s aggressive shift in licensing models was perceived locally as a blatant extortion tactic, designed to outprice all but the Fortune 500.

For a solo administrator at a small university, the implications were immediate and dire. A renewal that once fit within a standard operational budget had ballooned into a figure that threatened the department’s ability to maintain even basic services. The mandate was clear and non-negotiable: Total evacuation of the legacy environment by April 30th. This was more than a technical migration; it was a high-stakes race against a licensing expiration date that carried catastrophic financial penalties.

3. The Crucible of Choice: Evaluating the Alternatives

The migration wasn’t a blind jump; it was a calculated escape. The administrator conducted deep-dive evaluations of several “post-VMware” homes, re-testing both Hyper-V and Proxmox on production hardware after initial successes in isolated DR testing.

The Nutanix Wall

While Nutanix offered a powerful hyperconverged alternative, it presented a fatal flaw: its requirement for local storage. Adopting Nutanix would have meant abandoning the university’s significant existing investment in iSCSI SAN hardware. Combined with a “half-million dollar” price tag, Nutanix was fundamentally incompatible with the university’s fiscal and physical reality.

The OpenShift Experiment

Testing Red Hat OpenShift revealed a platform described as “over-complex and convoluted” for a solo shop. The administrative overhead of managing a container orchestration layer just to run traditional VMs was deemed excessive for a one-man team. Furthermore, concerns regarding storage driver compatibility for live migrations—a critical requirement for maintaining university uptime—effectively ended this path as a viable production hypervisor home.

The Proxmox Nightmare

Proxmox initially showed great promise at the DR site, but re-testing on production hardware turned into a cautionary tale. A simple SSD upgrade broke the cluster, revealing that the platform’s manual resource management and “touchy” configuration made it “potentially unrecoverable” in a solo-admin emergency. Being the only person in the organization capable of fixing a platform that could fail so catastrophically during a routine hardware change was deemed an unacceptable risk to the university’s continuity.

The Winner: Windows Server 2025 Core. It offered a reduced attack surface, modernized Failover Clustering, and native integration with the existing SAN hardware.

4. The Migration Engine: Veeam B&R v12

Veeam Backup & Replication v12 was the definitive saving grace of the entire migration. Its robust V2V (Virtual-to-Virtual) capabilities and instant recovery features provided the safety net required for a solo operation of this scale.

The migration was not a simple 1:1 move; it was a total environment lifecycle event. While 126 VMs comprise the final migrated fleet, the process involved the decommissioning of some obsolete legacy systems and the reprovisioning of some infrastructure and template VMs to meet the new standards of the Hyper-V 2025 environment. Without Veeam’s reliability and its ability to handle the “heavy lifting” of data conversion and repository management, the administrator faced a grim reality: outsourcing the project, forced retirement, or seeking new employment. Veeam didn’t just move data; it preserved the administrator’s sanity and the project’s viability.

5. The Engineering Engine: AI-Human Peer Collaboration

Behind the migration was a massive, high-intensity collaboration between the Human administrator and an A.I. peer. This partnership produced an entire Git project of scripts that transformed manual infrastructure into code.

This wasn’t just a collection of helper scripts; it was an entire ecosystem designed to turn a blank Server Core install into a cluster-ready node. While the legacy environment could often be provisioned in an hour or two, the Hyper-V 2025 “Solo Exodus” was a more deliberate, full-day process. From the initial Windows installation to the final cluster-join and configuration, every step was automated yet intensive, potentially requiring a full workday per node to ensure a production-ready state. The collaboration involved:

  • Iterative Design: Prompting and refining scripts through hundreds of cycles to handle the nuances of Windows Server 2025.
  • A 7-Phase Workflow: Developing a sequential process covering everything from bare-metal NIC renaming and Switch Embedded Teaming (SET) to complex iSCSI fabric orchestration and MPIO configuration.
  • Operational Autonomy: Creating 100+ VM operational workflows that allowed a single person to manage a massive fleet with the same efficiency as a much larger team.
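The early phases of that workflow can be sketched in PowerShell. This is a minimal illustration, not the project's actual scripts: the adapter names ("Port1-Mgmt", "Port2-Mgmt") and switch name ("SET-Prod") are hypothetical placeholders.

```powershell
# Phase 1: deterministic NIC renaming (a bare Server Core install has no GUI,
# so predictable names are essential for everything that follows)
Rename-NetAdapter -Name "Ethernet"   -NewName "Port1-Mgmt"
Rename-NetAdapter -Name "Ethernet 2" -NewName "Port2-Mgmt"

# Phase 2: Switch Embedded Teaming (SET) across both ports
New-VMSwitch -Name "SET-Prod" -NetAdapterName "Port1-Mgmt","Port2-Mgmt" `
    -EnableEmbeddedTeaming $true -AllowManagementOS $true

# Phase 3: enable MPIO and claim iSCSI-attached devices with Round Robin
Enable-WindowsOptionalFeature -Online -FeatureName MultiPathIO
New-MSDSMSupportedHW -VendorId "MSFT2005" -ProductId "iSCSIBusType_0x9"
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
```

Later phases (iSCSI fabric orchestration, cluster join) build on these deterministic names, which is why the renaming step comes first.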

6. The Scale of the Exodus: Environment Sizing

This project was a massive infrastructure reprovisioning across two geographically separated sites, involving both a comprehensive infrastructure refresh and a total hypervisor shift. While the server hardware had been updated in a previous 2023 cycle, the “Solo Exodus” was an entirely data-oriented and software-oriented overhaul of the university’s virtual ecosystem.

Compute Infrastructure

  • Production Cluster (On-Site): 5 x Dell PowerEdge R650 nodes with 512 GB RAM each, running Windows Server 2025 Core.
  • Disaster Recovery (DR) Cluster (Off-Site): 4 x Dell PowerEdge FC630 blade nodes with 384 GB RAM each.
  • Storage Shift: The only physical hardware change required was the installation of SSD OS disks for the DR hosts to replace the legacy SD cards used for VMware booting.
  • Capacity: Over 4 Terabytes of aggregate physical RAM across the clusters, providing the headroom for the final 126 production, testing, and template VMs.

Storage Infrastructure (iSCSI SAN)

  • Production (Dell PowerVault ME5084): 61.4 TB SSD / 526 TB 7K Spindles.
  • DR (Dell Compellent SC4020 + Expansions): 41 TB SSD / 196 TB 7K Spindles.
  • The ReFS “Size Tax” & Tiering: Standardizing on ReFS (Resilient File System) required a mandated 10-15% “size tax” buffer to ensure volume stability. This forced a strategic tiering model where mission-critical databases remained on SSD, while auxiliary systems were provisioned to 7K disks to manage the massive storage footprint.
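The “size tax” math is simple but easy to forget during provisioning. The sketch below uses hypothetical figures to show the planning calculation, plus the volume format step; the 64K allocation unit is a common choice for Hyper-V workloads, assumed here rather than taken from the source.

```powershell
# Illustrative sizing for the 10-15% ReFS buffer (figures hypothetical)
$workloadTB = 10       # planned VM footprint on the volume, in TB
$refsTax    = 0.15     # upper end of the observed 10-15% overhead
$volumeTB   = [math]::Ceiling($workloadTB * (1 + $refsTax))   # provision 12 TB, not 10

# Formatting a LUN as ReFS with a 64K allocation unit (assumed, not mandated)
Format-Volume -DriveLetter V -FileSystem ReFS -AllocationUnitSize 64KB -Confirm:$false
```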

7. The Technical Battleground: Storage & Networking

The migration’s “horror phase” centered on aging blade chassis hardware and the intricacies of the Windows iSCSI initiator. While Linux could ping iSCSI targets in minutes, Windows 2025 Core proved a stubborn adversary.

The 0.0.0.0 Binding Fix

The critical breakthrough for iSCSI/MPIO was the discovery that Windows was binding iSCSI sessions to 0.0.0.0 by default, effectively routing storage traffic over the management network and breaking multipathing. Success came only after hard-blocking these defaults through PowerShell and explicitly binding every session to the correct iSCSI NIC IPs.
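A minimal sketch of the explicit-binding approach, assuming placeholder addresses (the portal IP, initiator IP, and topology below are hypothetical, not the university's actual values):

```powershell
$tgtPortal = "10.10.50.10"     # SAN controller iSCSI portal (placeholder)
$iniIP     = "10.10.50.21"     # this node's dedicated iSCSI NIC (placeholder)

# Register the portal against the correct initiator interface...
New-IscsiTargetPortal -TargetPortalAddress $tgtPortal -InitiatorPortalAddress $iniIP

# ...then connect every session pinned to that NIC, never the 0.0.0.0 wildcard
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true `
    -TargetPortalAddress $tgtPortal -InitiatorPortalAddress $iniIP

# Verify no session fell back to the wildcard binding
Get-IscsiSession | Select-Object InitiatorPortalAddress, TargetNodeAddress
```

Repeating the `Connect-IscsiTarget` call once per iSCSI NIC/portal pair is what gives MPIO distinct paths to balance across.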

The “Miracle” Firmware

DR hardware success was blocked for months by the vendor’s refusal to release firmware for the DR SAN hardware, on the claim that no updates existed to resolve the VLAN tagging issues encountered on the blade chassis. The breakthrough came when the administrator discovered, by chance, that a firmware update had been quietly released without announcement. Immediate installation resolved the long-standing VLAN tagging issues, breathing new life into the DR site.

The VLAN Tangle

Tracing tagged ARP packets through the chassis IOMs with pktmon was a week-long battle to pinpoint where the traffic was being dropped, ultimately resolved by the “miracle” firmware discovery.

8. Modernized Resilience: Backup & Recovery

The migration necessitated a complete redesign and “surgical” reprovisioning of the entire backup infrastructure to support the new Hyper-V environment.

The Repository Reprovisioning Marathon

Rebuilding the Veeam backup infrastructure on the DR SAN was a high-stakes challenge due to the SAN’s near-total capacity saturation. The administrator couldn’t simply “wipe and start over.” Instead, the process involved days of meticulous data-moves paired with volume-level data UNMAPPING (TRIM/UNMAP) in small multi-terabyte chunks. This was a “surgical” operation to slowly reclaim space and reprovision the Veeam repositories without crashing the existing volumes.
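The chunked move-then-reclaim loop can be sketched as follows; the drive letters, paths, and chunk granularity are hypothetical, and `Optimize-Volume -ReTrim` is the standard Windows mechanism for issuing TRIM/UNMAP to a thin-provisioned volume.

```powershell
# Move a few multi-terabyte slices off the saturated repository volume (R:),
# reclaiming freed blocks on the SAN after each move before taking the next.
$chunks = Get-ChildItem "R:\VeeamRepo\Chunk*" | Select-Object -First 3
foreach ($chunk in $chunks) {
    # Relocate one slice to the rebuilt repository
    Move-Item -Path $chunk.FullName -Destination "S:\NewRepo\"

    # Issue TRIM/UNMAP so the thin-provisioned SAN volume actually shrinks
    Optimize-Volume -DriveLetter R -ReTrim -Verbose
}
```

Working in small chunks keeps the near-full source volume from tipping over mid-operation, which was the whole point of the “surgical” approach.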

Tiered Architecture & Cloud Strategy

  • Primary Backup Architecture: A 64 GB RAM Veeam Backup & Replication server (2023 refresh) managing a 28 TB repository located in the same rack as the Production hosts.
  • Tiered Offsite Recovery: A 64 GB RAM offsite repo manages three distinct backup targets:
    • SSD Immutable Tier: 7-day immutable backups for rapid, secure recovery.
    • Long-Term Storage (LTS): 60-to-90 day retention for historical compliance.
    • VeeamZips: Specialized archival for legacy and point-in-time snapshots.
  • Scale-Out Strategy: Implementation of a Scale-Out Backup Repository (SOBR) targeting Backblaze.com for modern, cloud-native offsite protection.
  • Hardware Repurposing: The offsite tier was built by repurposing hardware from the pre-2023 production environment, transforming obsolete legacy assets into a functional recovery infrastructure.

9. The February Blitz: Moving the University’s Heart

The true test of the migration came in February 2026, when the administrator began moving the university’s entire Enterprise Resource Planning (ERP) ecosystem—the mission-critical systems that handle everything from student registration to financial records.

Within a two-week window, the following were successfully ported:

  • Production Databases: The primary Enterprise Resource Planning (ERP) and reporting backends.
  • Application Layers: The complete ERP application suite, including administrative portals, APIs, and student/staff self-service interfaces.
  • Critical Middleware: Single Sign-On (SSO) authentication services, document management systems, and enterprise file transfer protocols.

A mirror-image migration of the “EIS-TEST” stack was completed first to validate the V2V process, ensuring that the move of the production “heart” would be seamless.

10. Lessons Learned for the Solo Administrator

  1. Trust Automation, Not Defaults: Standard MPIO settings and iSCSI bindings failed where PowerShell-driven custom policies succeeded.
  2. Persistence through Failure: Success often lies just beyond the point where you want to quit; multiple failed attempts on production hardware were required to find the final technical breakthroughs.
  3. The Snapshot Trap: Unlike the old environment, Hyper-V gives no warning about stale checkpoints, which can silently degrade performance; monitoring must be a manual discipline.
  4. Capacity Planning is Key: The 10-15% ReFS capacity overhead is a critical planning factor. It forced a tiering shift where non-essential auxiliary systems were moved to 7K disks to preserve high-performance SSD space for the core ERP and database workloads.
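The checkpoint lesson above lends itself to a scheduled sweep. This is a minimal sketch; the 7-day threshold is an assumption, not a figure from the project.

```powershell
# List checkpoints older than 7 days across the fleet; Hyper-V will not
# flag these itself, so a sweep like this has to be run on a schedule.
$limit = (Get-Date).AddDays(-7)
Get-VM | Get-VMSnapshot |
    Where-Object { $_.CreationTime -lt $limit } |
    Select-Object VMName, Name, CreationTime |
    Format-Table -AutoSize
```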

11. Acknowledgements

This project, while a solo execution, was shaped by the insights and experiences of a broader community of professionals.

  • Peer Insights: Sincere thanks to my peers for the many conversations that helped refine the strategic directions and critical decision-making throughout the migration process.
  • External Guidance: A special note of gratitude to a fellow administrator from a partner university, whose early advice on the absolute necessity of Failover Clustering and the overall viability of the Hyper-V ecosystem provided the foundational confidence needed to commit to this path.
  • Family Support: My deepest gratitude to my family for their unwavering support and their incredible tolerance of the many self-induced stresses and headaches that accompanied this demanding technical odyssey.

Status: Legacy Environment Purge Complete. Success. 🚀 🙌 🥳


Produced via AI-Human Collaboration (Gemini)