Marvell Octeon 10 Gets Wild at OCP Summit 2022


MarcoP82

New Member
Feb 3, 2016
I'm confused. I thought DPUs were developed to separate the host server's hardware and resources from the hypervisor kernel to improve performance, security, and manageability — in the sense that ESXi would be installed on the DPU, using the card's own CPU and RAM for the hypervisor kernel, while freeing the host's CPU and RAM for workloads and allowing hardware passthrough.

But it seems DPUs are just a hypervisor-on-a-card. Doesn't this make maintenance and repair a lot more demanding, since replacing a single card with multiple points of failure means shutting down the entire host? Or is the best practice to install DPUs in a different chassis than the host running the base hypervisor? I would expect the EDSFF E1.S form factor to be better suited to this kind of scalability than a clunky PCIe card.