Hey
Felt the need for a new project, so I figured I'd reattempt something I tried a few years ago: hosting multiple Windows-based media PCs within one physical machine running ESXi 6, with the graphics cards passed through to the VMs using VT-d.
When I last attempted this in December 2012 I got the basics working, but things like DXVA (DirectX Video Acceleration) didn't work, which resulted in slightly laggy playback. The host machine would also occasionally crash, which isn't ideal when watching a film! Here is the setup running three TVs, each driven by a Windows 7 VM with a GPU passed through, all on a single physical box. https://pbs.twimg.com/media/A93MtsBCIAAiiti.jpg:large
My main reason for attempting this is that my rack is full of systems, which feels a little overkill: currently there are three media PCs in there and one ESXi processing box (not really used enough). The plan is to combine all these systems into one box, leaving me with three servers: a primary server which runs 24/7, a file server, and an ESXi box (with the media PCs virtualised).
The most important point is that I'm trying to carry out all testing without spending any additional money until I know it's going to work!
System spec:
- X-Case RM 400/10 V4
- Unknown 5x3.5" hot swap SATA
- Corsair RM850i PSU
- Asus Z9PE-D8 WS
- 2x Xeon E5-2660 ES
- 2x Noctua NH-U9DX i4 CPU coolers
- 3x Arctic Cooling F12 PWM
- 2x Arctic Cooling F8 PWM
- 4x Kingston 8GB DDR3 ECC 1600MHz (hoping to upgrade to 64GB once some pops up on eBay)
- OCZ Agility 3 60GB
- 2x Asus HD6450
- 2x 4 port USB3 PCIe adapters
- ESXi 6 booted off OCZ Agility 3 60GB (primarily to speed up boot times)
- Intel X520-DA2 10GbE NIC
- Test 1 - Concept testing
- Test 2 - Baremetal CPU usage comparison
- Test 3 - USB Passthrough (keyboards)
- Test 4 - Multiple GPUs
- Completed Build
- Install ESXi
- Configure the GPU for passthrough
- Create a Windows 10 VM, with no passthrough configured yet
- (Optional) Disable User Account Control and enable auto login
- Make sure all ATI drivers are installed and ready
- Shut down the VM and add the GPU as a PCI device (noting that both the HDMI audio device and the graphics card need to be passed through)
- Reboot the VM; the TV went blank, and finally I had a Windows boot screen
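For reference, the "add the GPU as a PCI device" step above ends up as entries like these in the VM's .vmx file. The vSphere client normally writes them for you, so this is just a sketch: the PCI addresses are placeholders for my setup, and the device IDs are the ones I believe belong to the HD 6450 (Caicos), so double-check against your host's own device list.

```
# Sketch of the .vmx entries produced by adding the GPU and its HDMI
# audio function as passthrough devices (IDs/addresses are examples).
pciPassthru0.present = "TRUE"
pciPassthru0.vendorId = "0x1002"   # AMD
pciPassthru0.deviceId = "0x6779"   # assumed device ID for Radeon HD 6450 (Caicos)
pciPassthru0.id = "02:00.0"        # placeholder PCI address of the GPU function
pciPassthru1.present = "TRUE"
pciPassthru1.vendorId = "0x1002"
pciPassthru1.deviceId = "0xaa98"   # assumed Caicos HDMI audio function ID
pciPassthru1.id = "02:00.1"        # audio function, passed through alongside the GPU
```

Both functions (xx:xx.0 graphics and xx:xx.1 audio) have to go to the same VM, which matches the note in the step above.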
Power usage idle: 102W
Power usage with movie playing: 128W
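For a rough sense of what that idle figure means over a year of 24/7 running, here's a quick calculation. The electricity tariff is an assumed example value, not my actual rate:

```python
# Rough annual energy use of the consolidated box at idle.
# 102 W is the measured idle figure above; the price per kWh is an
# assumed example, so substitute your own tariff.
idle_watts = 102
hours_per_year = 24 * 365
kwh_per_year = idle_watts * hours_per_year / 1000
price_per_kwh = 0.15  # assumed tariff in GBP

annual_cost = kwh_per_year * price_per_kwh
print(f"{kwh_per_year:.2f} kWh/year, ~£{annual_cost:.2f}/year at the assumed rate")
# → 893.52 kWh/year, ~£134.03/year at the assumed rate
```

Not nothing, but still less than three separate media PCs idling alongside a rarely-used ESXi box.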
Next steps
- Test CPU usage using bare metal
- Triple check DXVA is actually working
- Investigate how to pass a USB keyboard/mouse through to the VM
- Test multiple graphics cards simultaneously