Reusing Your Existing VMware Hosts

Hardware Compatibility and Repurposing Strategy

The servers sitting in your datacenter right now, the Dell PowerEdges, the HPE ProLiants, and the Lenovo ThinkSystems, were designed to run hypervisors, not a specific hypervisor. Any hypervisor.

This might seem obvious, but it’s worth stating clearly: enterprise server hardware is hypervisor-agnostic. The same CPUs, memory, storage controllers, and network adapters that run ESXi today will run Hyper-V tomorrow. You’re not abandoning hardware investments when you change virtualization platforms; you’re simply loading different software.

If you’re reading this series, you’ve likely already decided to move off VMware. The question isn’t whether to leave—it’s where to land. And when it comes to hardware flexibility, not all landing zones are equal. Azure Local requires validated catalog hardware, specific NIC and storage configurations, and mandatory Azure connectivity. Hyper-V runs on the servers you already own—no catalog restrictions, no hardware refresh, no mandatory cloud dependency.

In this fourth post of the Hyper-V Renaissance series, we’re going to demonstrate that your existing VMware infrastructure is ready for Hyper-V—and that the hardware freedom Hyper-V offers is a significant advantage over both VMware and Azure Local.


Why Server Hardware is Universal

Modern enterprise servers from Dell, HPE, Lenovo, Cisco, and other major vendors are designed against industry standards—not proprietary hypervisor requirements:

  • x86-64 architecture with virtualization extensions (Intel VT-x, AMD-V)
  • UEFI firmware with Secure Boot capability
  • Standard storage interfaces (SAS, SATA, NVMe, Fibre Channel, iSCSI)
  • Standard networking (Ethernet, with optional RDMA capabilities)
  • IPMI/BMC management (iDRAC, iLO, XCC, CIMC)

VMware ESXi and Windows Server both run on this same hardware foundation. The hypervisor is just software that sits on top. Your Dell R750 doesn’t care whether it’s booting ESXi or Windows Server 2025—it presents the same CPUs, the same memory, the same PCIe devices to either operating system.

This isn’t theoretical. Every major server OEM certifies their hardware for both VMware and Windows Server. The same hardware models appear in both the Broadcom Compatibility Guide (BCG) and the Windows Server Catalog.


The VCF Hardware Deprecation Reality

Here’s where the story gets interesting—and where the pressure on your hardware investment is actually coming from.

Broadcom has replaced the legacy VMware Hardware Compatibility List (HCL) with the Broadcom Compatibility Guide (BCG). With the transition to VCF 9.0 and ESXi 9.0, Broadcom has deprecated and end-of-lifed hardware that was previously supported—both I/O devices (KB 391170) and CPU platforms (KB 428874). Some deprecated hardware will be removed entirely in the next major VCF release.

What does this mean in practice? Servers and components that ran ESXi 7 and 8 perfectly well are being pushed toward end-of-support under VCF 9.0. If you want to stay current on VMware, you may be forced into a hardware refresh—not because the hardware is failing, but because the software vendor decided to stop supporting it.

Meanwhile, that same hardware runs Windows Server 2025 without issue. Microsoft’s hardware requirements are straightforward and broadly compatible. Your servers may actually have a longer supported life under Windows Server 2025 than they would under future VMware releases.

The hardware refresh pressure is coming from the VMware side, not the Hyper-V side.

And it’s not just hardware deprecation. VCF 9.0 also deprecated vSphere Virtual Volumes (vVols), which will be removed in a future release. If your storage architecture was built around vVols, that’s another forced change coming from the VMware side—one that has nothing to do with hardware age or capability.


The Azure Local Hardware Question

If you’re evaluating Hyper-V, you’re probably also looking at Azure Local (formerly Azure Stack HCI). It’s Microsoft’s own product, it’s deeply integrated with Azure, and it runs Hyper-V under the hood. So why not just go with Azure Local?

Hardware constraints. Azure Local is a great platform for the right use case, but it comes with significant hardware restrictions that Hyper-V doesn’t.

Validated catalog hardware required. Azure Local requires servers from the Azure Local Solutions Catalog—validated nodes, integrated systems, or premier solutions from specific OEMs. You can’t just install it on whatever server is sitting in your rack. Microsoft support may only be provided for hardware listed in the catalog. If your existing VMware servers aren’t on that list, Azure Local means a hardware purchase.

No external SAN storage. This is the big one for many VMware shops. Azure Local uses Storage Spaces Direct (S2D) with local disks in each node. RAID controllers, Fibre Channel SANs, iSCSI arrays, shared SAS enclosures, and any form of MPIO are explicitly not supported. If you have a Pure Storage FlashArray, a NetApp AFF, a Dell PowerStore, or any other external storage array—Azure Local can’t use it. That investment sits idle or gets abandoned. Hyper-V works with all of them: iSCSI, Fibre Channel, SMB 3.x, whatever your SAN speaks.

Specific NIC and switch requirements. Azure Local requires NICs validated for S2D traffic, and your physical network switches need to support specific configurations for RDMA (DCB/PFC for RoCEv2, or iWARP). This isn’t “any managed Ethernet switch”—it’s validated switch configurations with specific firmware requirements. Hyper-V works with any managed Ethernet switch in your environment.

Azure connectivity required. Azure Local requires periodic connectivity to Azure for billing validation. Disconnect for more than 30 days and your VMs stop. For air-gapped environments, isolated security zones, or organizations that simply don’t want a cloud dependency on their on-premises infrastructure, this is a non-starter. Hyper-V has zero cloud dependency.

Specific disk configurations. S2D requires direct-attached drives in specific configurations—all-flash or hybrid with cache-to-capacity ratios, specific drive endurance requirements, and the Microsoft NVMe driver (not vendor drivers). Your existing RAID arrays and disk configurations from VMware likely don’t match these requirements.

The Three-Way Hardware Comparison

Here’s the table that tells the story:

| Requirement | VMware VCF | Azure Local | Hyper-V (Windows Server) |
| --- | --- | --- | --- |
| Hardware catalog | BCG (shrinking) | Azure Local Catalog (restrictive) | Windows Server Catalog (broad) + any x86-64 that meets basic requirements |
| Existing servers | Deprecated hardware being dropped | Must match validated catalog entry | Works if it ran ESXi |
| External SAN storage | ✅ Supported (but vVols deprecated) | ❌ Not supported (S2D only) | ✅ Supported (iSCSI, FC, SMB) |
| RAID controllers | ✅ Supported | ❌ Not supported for S2D | ✅ Supported |
| NIC requirements | Standard | Validated NICs + specific switch config | Standard (vendor drivers for RDMA) |
| Network switches | Standard (NSX has own requirements) | Validated configs for RDMA traffic | Any managed Ethernet switch |
| Cloud connectivity | Optional | Required (30-day grace) | Optional |
| Minimum cores | 72 per CPU socket | 16 per host | 16 per host |
| Edge / single-server | Impractical (cost) | Supported (single-node) | ✅ No restrictions |
| Air-gapped deployment | Possible | Special arrangements required | ✅ Fully supported |

Hyper-V wins the hardware flexibility comparison in almost every category. Azure Local wins on Azure integration—but that’s a feature decision, not a hardware decision.

Your Existing Storage Investment

This deserves emphasis because it’s often the largest infrastructure investment at stake. Many VMware shops have significant investments in external storage arrays—Pure Storage FlashArrays, NetApp AFF/FAS systems, Dell PowerStore or PowerVault, HPE Nimble or Alletra. These arrays often represent hundreds of thousands of dollars in investment with years of useful life remaining.

Hyper-V preserves that investment completely. Your existing SAN connects to Hyper-V hosts via the same iSCSI, Fibre Channel, or SMB protocols you’re using today. MPIO works. Vendor-specific PowerShell modules work. The storage array doesn’t care what hypervisor the host is running—it just presents LUNs or shares.

Azure Local abandons that investment. S2D uses only local disks in the cluster nodes. Your existing SAN can’t participate. We’ll cover the full storage integration story in Post 6: Three-Tier Storage Integration.
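
To make that concrete, here is a minimal sketch of attaching an existing iSCSI LUN to a Windows Server host; the portal address is a placeholder, your array-side host mappings still apply, and MPIO policy tuning is out of scope until Post 6.

```powershell
# Sketch only: connect an existing iSCSI LUN to a Windows Server / Hyper-V host.
# 192.0.2.50 is a placeholder portal address.
Install-WindowsFeature -Name Multipath-IO
Start-Service MSiSCSI
Set-Service MSiSCSI -StartupType Automatic
New-IscsiTargetPortal -TargetPortalAddress 192.0.2.50
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true
Get-Disk | Where-Object BusType -eq 'iSCSI'    # the LUN shows up as an ordinary disk
```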

Edge, ROBO, and Small Deployments

This is where Hyper-V’s hardware freedom is most powerful. Consider a remote office or edge location:

A single standalone server running Windows Server 2025 with Hyper-V. No shared storage requirements. No cloud connectivity requirements. No minimum core count penalties. No hardware catalog restrictions. Even a two-node workgroup cluster (no Active Directory required) with certificate-based live migration works.

VCF’s 72-core minimum per socket makes it absurd for small sites. Azure Local needs Azure connectivity and validated hardware. Hyper-V just needs a Windows Server license and a server.
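
What does that look like in practice? A minimal sketch (the adapter name is a placeholder; everything else is stock Windows Server):

```powershell
# Minimal sketch: a standalone edge host. No catalog, no cloud, no cluster required.
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
# After the reboot, one external switch on the host's primary NIC ('Ethernet' is a placeholder name):
New-VMSwitch -Name 'External' -NetAdapterName 'Ethernet' -AllowManagementOS $true
```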


Windows Server 2025 Hardware Requirements

Let’s look at what Windows Server 2025 actually requires. If your servers meet these specs—and any enterprise hardware from the last 5 years almost certainly does—you’re good.

| Component | Minimum | Recommended for Hyper-V |
| --- | --- | --- |
| CPU | 1.4 GHz 64-bit with SLAT | 2+ GHz, multiple cores |
| CPU Instructions | NX, DEP, CMPXCHG16b, LAHF/SAHF, PrefetchW, SSE4.2, POPCNT | Same (mandatory) |
| RAM | 512 MB (Server Core), 2 GB (Desktop Experience) | 4+ GB for host OS; size for VM density |
| RAM Type | ECC (Error Correcting Code) or similar technology for physical hosts | ECC required |
| Storage | 32 GB minimum | 64+ GB recommended; separate VM storage |
| Network | Gigabit Ethernet (PCIe-compliant) | 10+ GbE; redundant paths |
| Virtualization Extensions | Required for Hyper-V role | Intel VT-x/VT-d or AMD-V/AMD-Vi |
| UEFI | Required for Secure Boot | UEFI 2.3.1c+ recommended |
| TPM | TPM 2.0 recommended | Required for Shielded VMs, BitLocker |

New in Windows Server 2025: The SSE4.2 and POPCNT CPU instruction requirements are new compared to Windows Server 2022. POPCNT has been available since AMD Barcelona (2007) and Intel Nehalem (2008). SSE4.2 has been available since Intel Nehalem (2008) and AMD Bulldozer (2011). Any enterprise server from the last 13+ years supports both. For AMD EPYC servers (Zen-based, 2017+), this is a non-issue.

ECC RAM: Windows Server 2025 requires ECC (Error Correcting Code) or similar technology for physical host deployments. Enterprise servers universally use ECC—this is only a concern if you’re repurposing desktop or workstation hardware for lab use.

Secured-core Server: For organizations with advanced security requirements, Windows Server 2025 supports Secured-core server, which requires UEFI Secure Boot, TPM 2.0, IOMMU (VT-d/AMD-Vi), and Dynamic Root of Trust for Measurement (DRTM). Most current-generation enterprise servers support this, but verify if it’s a requirement for your environment.

The bottom line: if your hardware runs ESXi today, it meets or exceeds Windows Server 2025 requirements.
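
If you would rather confirm that from the OS than from a spec sheet, the checks are built in. A quick sketch using standard cmdlets (Sysinternals Coreinfo can additionally dump instruction-set flags such as SSE4.2 and POPCNT if you want to see them explicitly):

```powershell
# Quick sanity check of the Hyper-V prerequisites (SLAT, virtualization firmware, DEP).
# Note: once the Hyper-V role is installed, these simply report that a hypervisor is present.
Get-ComputerInfo -Property "Hyper*" | Format-List
Get-CimInstance Win32_Processor |
    Select-Object Name, VirtualizationFirmwareEnabled, SecondLevelAddressTranslationExtensions
```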


Component Compatibility: What Actually Matters

While the servers themselves are universal, individual components deserve attention. Not every component that works under ESXi has full feature parity under Windows. Here’s where to focus your evaluation.

Storage Controllers: Non-Issue

Enterprise storage controllers from Dell (PERC), HPE (Smart Array), Lenovo (ThinkSystem RAID), and Broadcom (MegaRAID) have excellent Windows Server support with inbox drivers. This is rarely a problem area.

| Vendor | Common Controllers | Windows Server 2025 | Notes |
| --- | --- | --- | --- |
| Dell | PERC H755, H755N, H355, BOSS-S2 | ✅ Supported | Inbox drivers; vendor drivers add management tools |
| HPE | P816i-a, P408i-a, NS204i-p | ✅ Supported | Inbox drivers; HPE SSA for advanced diagnostics |
| Lenovo | 930-8i, 930-16i, 940-8i | ✅ Supported | Inbox drivers; XClarity integration with vendor drivers |
| Broadcom (LSI) | MegaRAID 9500 series | ✅ Supported | Vendor drivers recommended for StorCLI tools |

Boot controllers (Dell BOSS, HPE NS204i, Lenovo ThinkSystem M.2) are all well-supported. If your servers boot ESXi from these devices, they’ll boot Windows Server too.
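
On a test install, a quick way to confirm this is to ask Plug and Play what it found:

```powershell
# Sketch: list detected storage controllers and flag anything without a working driver.
Get-PnpDevice -Class SCSIAdapter, HDC -PresentOnly |
    Select-Object FriendlyName, Status
Get-PnpDevice -PresentOnly -Status Error, Unknown    # empty output is what you want
```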

Fibre Channel HBAs: Non-Issue

| Vendor | Common HBAs | Windows Server 2025 | Notes |
| --- | --- | --- | --- |
| Broadcom/Emulex | LPe35000, LPe36000 | ✅ Supported | Mature Windows support |
| Marvell/QLogic | QLE2770, QLE2870 | ✅ Supported | Mature Windows support |

FC HBAs from major vendors have had Windows support for decades. This is a non-issue.
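
If you want to double-check on a test host anyway, the Windows storage stack lists the initiator ports it sees:

```powershell
# Sketch: list Fibre Channel initiator ports (WWNNs/WWPNs) visible to Windows.
Get-InitiatorPort | Where-Object ConnectionType -eq 'Fibre Channel' |
    Select-Object NodeAddress, PortAddress, ConnectionType
```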

Network Adapters: The One Area to Validate Carefully

This is where VMware-to-Hyper-V transitions most commonly encounter friction. Basic connectivity is almost never a problem—inbox Windows drivers work for virtually all enterprise NICs. The issues arise with advanced features, particularly RDMA.

| Vendor | Common NICs | Basic Connectivity | RDMA Support | Notes |
| --- | --- | --- | --- | --- |
| Intel | E810, X710, XXV710 | ✅ Inbox driver | ✅ iWARP, RoCEv2 | Vendor driver required for RDMA/SR-IOV |
| Mellanox/NVIDIA | ConnectX-6, ConnectX-7 | ✅ Inbox driver | ✅ RoCEv2 | WinOF-2 driver required for full features |
| Broadcom | P2100G, N1100T | ✅ Inbox driver | ✅ RoCEv2 | Good Windows support |
| QLogic/Marvell | FastLinQ QL45000 | ✅ Inbox driver | ✅ RoCEv2, iWARP | Good Windows support |
| Chelsio | T6 series | ✅ Inbox driver | ✅ iWARP | Good Windows support |

The key distinction: inbox Windows drivers get your NICs connected and passing traffic. But if you need RDMA for live migration acceleration or SMB Direct storage traffic, you’ll need the vendor’s full driver package—not the inbox driver from Windows Update. This isn’t a compatibility problem; it’s a “download the right driver” problem.

RDMA protocol matters: Some NICs support RDMA under ESXi but require different firmware or configuration for Windows RDMA. If RDMA is part of your architecture, validate it during your proof of concept (Post 8). If you’re not planning to use RDMA, this is irrelevant.
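
When you do validate RDMA, the checks themselves are short. A sketch of what to look at once the vendor driver is installed:

```powershell
# Sketch: confirm RDMA is exposed by the driver and negotiated by SMB.
Get-NetAdapterRdma | Select-Object Name, Enabled
Get-SmbClientNetworkInterface | Select-Object FriendlyName, RdmaCapable
# With SMB traffic flowing (file copy, live migration), check whether RDMA was actually used:
Get-SmbMultichannelConnection | Select-Object ServerName, ClientRdmaCapable
```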

VMware-specific NICs: Rare, but if you have NICs specifically designed for VMware (certain Cisco VIC configurations, for example), verify Windows driver availability. Most enterprise NICs aren’t VMware-specific, so this affects very few environments.

GPUs: Good News with Nuance

If your VMware hosts have GPUs for VDI, AI/ML, or graphics workloads, they’ll work under Windows Server 2025. The question is how they’ll work—specifically whether they support GPU partitioning (GPU-P) or only full passthrough (DDA).

| Vendor | GPU Series | DDA (Passthrough) | GPU-P (Partitioning) | Notes |
| --- | --- | --- | --- | --- |
| NVIDIA | A2, A10, A16, A40, L2, L4, L40, L40S | ✅ | ✅ Validated | Microsoft-documented GPU-P models |
| NVIDIA | A100 | ✅ | MIG only | Uses MIG, not SR-IOV partitioning |
| NVIDIA | T4 | ✅ | DDA only | Not in official GPU-P supported list; use DDA |
| NVIDIA | RTX 4000/5000 series | ✅ | DDA only | Workstation GPUs; GPU-P not validated |
| AMD | Instinct MI series | ✅ | Limited | Verify specific model with AMD |

GPU-P is Microsoft’s native GPU partitioning technology in Windows Server 2025. It uses SR-IOV to divide a physical GPU into isolated fractions shared across multiple VMs—no additional Microsoft licensing required. However, for live migration with GPU-P, Microsoft’s documentation requires NVIDIA vGPU Software v18.x or later drivers, obtained from NVIDIA’s enterprise licensing program. If your GPUs don’t support GPU-P, DDA (Discrete Device Assignment) provides full GPU passthrough to a single VM using existing drivers.
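
For reference, here is a rough sketch of how this looks from PowerShell on a Windows Server 2025 host; the VM name is a placeholder, and the driver requirements above still apply:

```powershell
# Sketch: check whether the host exposes a partitionable GPU, then hand a partition to a VM.
Get-VMHostPartitionableGpu | Select-Object Name, ValidPartitionCounts
Add-VMGpuPartitionAdapter -VMName 'vdi-01'          # 'vdi-01' is a placeholder VM name
Get-VMGpuPartitionAdapter -VMName 'vdi-01'
# If the GPU is DDA-only, the whole device is dismounted from the host and assigned instead
# (Dismount-VMHostAssignableDevice / Add-VMAssignableDevice).
```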

As we covered in Post 3, GPU-P with live migration and HA support is a significant Hyper-V differentiator that vSphere doesn’t offer natively.


BIOS/UEFI: The One Setting That’s Probably Different

Most BIOS settings that work for ESXi also work for Hyper-V. Virtualization extensions, IOMMU, boot mode—these should already be configured correctly on your VMware hosts. But there’s one common difference worth calling out.

Secure Boot is often disabled on VMware hosts because ESXi historically had complications with Secure Boot on some platforms. Windows Server 2025 benefits significantly from Secure Boot being enabled—it’s a requirement for Secured-core server and Shielded VM host attestation. When you repurpose a VMware host, enabling Secure Boot is a recommended step.

Other settings to verify (most should already be correct):

| Setting | ESXi Typical | Windows/Hyper-V Optimal | Action Needed? |
| --- | --- | --- | --- |
| VT-x/AMD-V | Enabled | Enabled | No change |
| VT-d/AMD-Vi (IOMMU) | Enabled | Enabled | No change |
| SR-IOV | Varies | Enabled (if using RDMA/GPU-P) | Verify |
| Secure Boot | Often Disabled | Enabled | Likely change needed |
| TPM | Varies | Enabled | Verify |
| Boot Mode | UEFI | UEFI | No change |
| Power Profile | Performance | Performance | No change |
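
Once Windows Server is on the box, you can confirm Secure Boot, TPM, and boot mode from inside the OS. A short sketch (run elevated):

```powershell
# Sketch: verify Secure Boot, TPM, and boot mode from inside Windows.
Confirm-SecureBootUEFI                                   # True when Secure Boot is enabled
Get-Tpm | Select-Object TpmPresent, TpmReady, ManufacturerVersion
$env:firmware_type                                       # 'UEFI' on a UEFI-booted host
```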

Vendor Resources: Your OEM Has You Covered

Every major server vendor provides tools and documentation for Windows Server deployment. These are worth bookmarking—they’ll be valuable when you reach the hands-on posts in this series.

Dell Technologies

Dell provides Dell OpenManage for server management, Lifecycle Controller for driver injection during OS deployment, and Dell Repository Manager for creating custom driver repositories. Dell Command | Deploy Driver Packs provide pre-packaged Windows Server driver bundles. Start at Dell Support or the Dell InfoHub for deployment guides.

HPE (Hewlett Packard Enterprise)

HPE’s Service Pack for ProLiant (SPP) is a comprehensive firmware and driver bundle that includes Windows-tested drivers. Intelligent Provisioning (embedded in iLO) automates driver injection during OS install. Start at HPE Support Center.

Lenovo

Lenovo’s XClarity Administrator handles fleet-wide firmware updates, and Lenovo Press publishes excellent deployment guides—including a detailed Implementing Hyper-V on Windows Server 2025 paper that’s worth reading before you start building. Start at Lenovo Press.

Cisco (UCS)

For Cisco UCS environments, service profiles can be reconfigured for Hyper-V without hardware changes. Verify VIC (Virtual Interface Card) firmware supports Windows RDMA if needed—VIC RDMA under Windows may require specific firmware versions different from what you ran under ESXi. Use the Cisco UCS HCL Tool to verify compatibility.


Firmware: Update Before You Switch

One piece of practical advice before you change hypervisors: update your server firmware while you’re still running ESXi.

This might sound counterintuitive—why invest in firmware updates for a platform you’re leaving? Three reasons:

Reduce variables. If you update firmware and switch hypervisors simultaneously, and something doesn’t work, you won’t know which change caused the problem. Update firmware on ESXi where you know it works, confirm the servers are stable, then change the OS. Two separate changes, each independently validated.

Vendor firmware bundles are OS-independent. Dell Lifecycle Controller, HPE Service Pack for ProLiant, and Lenovo XClarity all apply firmware updates at the BMC level—they don’t care what OS is running. You can apply the latest firmware bundle to your ESXi hosts today, and that same firmware will be there when Windows Server boots tomorrow.

Current firmware means current Windows drivers. Storage controllers and NICs with updated firmware are more likely to work with inbox Windows drivers without requiring manual driver intervention. Older firmware revisions occasionally have quirks that require specific driver versions to work around.

The process is straightforward:

  • Dell: Use iDRAC Lifecycle Controller or Dell Repository Manager to download and apply the latest firmware bundle for your PowerEdge model
  • HPE: Download the latest Service Pack for ProLiant (SPP) and apply via iLO or boot media
  • Lenovo: Use XClarity Administrator for fleet-wide updates or UpdateXpress for individual servers
  • Cisco: Apply the latest Host Upgrade Utility (HUU) bundle via CIMC

Document your firmware versions after updating. This becomes your baseline for the Hyper-V deployment.
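
One way to capture that baseline without touching the running OS is the BMC's Redfish API, which iDRAC, iLO, XCC, and CIMC all expose. A sketch in PowerShell 7 (the BMC address is a placeholder, and the exact endpoint layout can vary slightly by vendor):

```powershell
# Sketch: dump the firmware inventory from a Redfish-capable BMC to CSV (PowerShell 7+).
$bmc  = 'https://bmc-host01.example.com'        # placeholder BMC address
$cred = Get-Credential
$inv  = Invoke-RestMethod -Uri "$bmc/redfish/v1/UpdateService/FirmwareInventory" `
        -Authentication Basic -Credential $cred -SkipCertificateCheck
$inv.Members | ForEach-Object {
    Invoke-RestMethod -Uri "$bmc$($_.'@odata.id')" -Authentication Basic -Credential $cred -SkipCertificateCheck
} | Select-Object Name, Version | Export-Csv firmware-baseline.csv -NoTypeInformation
```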


The Validation Approach (Preview)

We’re not going to walk through the full validation process here—that’s what Post 5: Build and Validate a Cluster-Ready Host is for. But here’s the approach you’ll follow:

Phase 1: Inventory — Document your current VMware host hardware from vCenter (models, CPUs, NICs, storage controllers, GPUs). We’ll provide PowerCLI scripts for this.
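
As a preview (Post 5 has the complete scripts), the core of that inventory is only a few PowerCLI calls; the vCenter name and output paths below are placeholders:

```powershell
# Preview sketch of the Phase 1 inventory; Post 5 has the complete version. Requires VMware PowerCLI.
Connect-VIServer vcenter.example.com            # placeholder vCenter name
Get-VMHost |
    Select-Object Name, Manufacturer, Model, ProcessorType, NumCpu, MemoryTotalGB |
    Export-Csv esxi-hosts.csv -NoTypeInformation
Get-VMHost | Get-VMHostNetworkAdapter -Physical |
    Select-Object VMHost, Name, Mac, BitRatePerSec |
    Export-Csv esxi-nics.csv -NoTypeInformation
```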

Phase 2: Firmware — Update firmware to current versions using the vendor tools described above. Do this while still running ESXi.

Phase 3: Test — Install Windows Server 2025 on a representative test server. Verify all hardware is detected, install Hyper-V, create a test VM. We’ll provide a comprehensive validation script.
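
The shape of that test is simple even before you have the full script. A sketch, assuming the Hyper-V role from the earlier example is already installed and the paths are placeholders:

```powershell
# Sketch: Phase 3 smoke test on a freshly installed host.
Get-PnpDevice -PresentOnly -Status Error, Unknown        # should return nothing
New-VM -Name 'smoke-test' -Generation 2 -MemoryStartupBytes 2GB `
       -NewVHDPath 'D:\VMs\smoke-test.vhdx' -NewVHDSizeBytes 40GB
Start-VM -Name 'smoke-test'
Get-VM -Name 'smoke-test' | Select-Object Name, State, Uptime
```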

Phase 4: Validate specifics — Test RDMA if you need it, verify storage connectivity (iSCSI/FC/SMB), confirm all NICs work with vendor drivers. This feeds into your proof-of-concept in Post 8.

Series Assets: The pre-migration validation script and driver compatibility matrix template for your environment are available in the series repository, and we’ll walk through using them hands-on in Post 5.


The Hardware Compatibility Checklist

Use this checklist to do a quick assessment of each host you’re considering for Hyper-V. If you can check most of these boxes from your existing vCenter inventory—without even touching the hardware—you have your answer.

| Item | Expected Result | Notes |
| --- | --- | --- |
| Server from major OEM (Dell, HPE, Lenovo, Cisco) | Yes | Enterprise hardware is universally compatible |
| Purchased within last 5 years | Yes | Meets all WS2025 CPU instruction requirements |
| 64-bit CPU with VT-x/AMD-V | Already enabled for ESXi | Required for Hyper-V |
| ECC RAM | Standard on all enterprise servers | Required for WS2025 physical hosts |
| UEFI boot mode | Likely already configured | Required for Secure Boot, Gen2 VMs |
| TPM 2.0 present | Most servers since ~2017 | Recommended; required for Shielded VMs |
| NICs from major vendor (Intel, Mellanox, Broadcom) | Yes | Inbox drivers for connectivity; vendor drivers for RDMA |
| Storage controllers from OEM (PERC, Smart Array, etc.) | Yes | Inbox drivers work; vendor drivers for management tools |
| Listed in Windows Server Catalog | Check windowsservercatalog.com | Not listed ≠ won’t work; just means formal cert may be pending |

If you’re running enterprise hardware from the last 5 years, the answer to almost every item above is “yes” before you even check.


The Bottom Line

Three posts ago, we established that the money makes sense. Two posts ago, we showed that the technology holds up. Now we’ve addressed the hardware question—and the answer is clear.

Your existing VMware servers run Hyper-V. Your existing SAN works with Hyper-V. Your existing network switches work with Hyper-V. Your edge and remote office hardware works with Hyper-V. No catalog restrictions. No cloud dependencies. No forced hardware refresh.

Azure Local is a strong platform when you need deep Azure integration—AKS, AVD, Azure Portal management. We’ll cover when it makes sense in Post 18. But if you’re looking for the most flexible, cost-effective landing zone for your VMware migration—one that preserves your existing hardware and storage investments—Hyper-V with Windows Server 2025 is it.

The business case for the Hyper-V Renaissance rests on three pillars: cost advantage, technical capability, and hardware freedom. We’ve now covered all three.


What’s Next

With the “Case for Change” complete, it’s time to start building. The next four posts shift from why to how.

In Post 5: Build and Validate a Cluster-Ready Host, we’ll deploy Windows Server 2025 with Hyper-V from scratch using PowerShell—including the hardware validation scripts, driver installation, networking setup, and comprehensive testing procedures we previewed here. Everything scripted, everything repeatable.

Time to turn hardware into a hypervisor.


Resources

Vendor Resources

Microsoft Documentation

VMware/Broadcom Documentation (for reference)


Series Navigation
← Previous: Post 3 - The Myth of “Old Tech”
→ Next: Post 5 - Build and Validate a Cluster-Ready Host


Post 4 of 18 in The Hyper-V Renaissance series
Last updated: March 2026


Disclaimer: Hardware compatibility information is based on vendor documentation and general enterprise server standards as of early 2026. Always verify specific model compatibility with your hardware vendor and test on representative hardware before production deployment. VMware/VCF hardware deprecation status should be verified against the Broadcom Compatibility Guide at evaluation time.