HBA passthrough best practice

denywinarto

Active Member
Aug 11, 2016
Hi all,
I'm trying to virtualize my 3 servers together onto one host, and I'm wondering if DDA can achieve this:
1. Server 2019 DrivePool server connected to an HGST 4U60 JBOD rack with an LSI 9300 card
2. Server 2019 diskless CCBoot server with an Intel dual 10G X520 and a U.2 Intel SSD
3. Mikrotik server, probably with a dual 1G NIC

The board is a Supermicro X11SCA-F with a Xeon E-2278G and 128GB of UDIMM memory.
I've already got the hardware up and running, but after reading some forum posts my primary concern is the PCIe passthrough. With my build I'd need to pass through:
- LSI 9300-8e
- X520 dual 10G
- Dual 1G NIC (probably non-Intel, since Mikrotik is pretty picky with Intel NICs)

All three to three different VMs. Hardware-wise there should be more than enough CPU and memory for these three servers.

Does anyone have experience passing through multiple PCIe devices on the same host? Especially controller devices.
I'm still trying to decide whether to go the ESXi or the Hyper-V route.
I googled a bit and it doesn't seem like many people go the Hyper-V route, but since almost all of my environment is Windows-based I'm leaning towards Hyper-V.
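From what I've read, the DDA flow on the Hyper-V side would look roughly like this for each device (just a sketch pieced together from the docs, untested on my box; the VM name and the device filter are placeholders):

Code:
# find the device and grab its location path (example: the LSI 9300-8e)
$dev = Get-PnpDevice -FriendlyName "*LSI*SAS*" | Select-Object -First 1
$loc = (Get-PnpDeviceProperty -InstanceId $dev.InstanceId -KeyName DEVPKEY_Device_LocationPaths).Data[0]

# disable it on the host, dismount it, and hand it to the VM
Disable-PnpDevice -InstanceId $dev.InstanceId -Confirm:$false
Dismount-VMHostAssignableDevice -LocationPath $loc -Force
Add-VMAssignableDevice -LocationPath $loc -VMName "DrivePool-VM"

# DDA also wants these set on the VM itself
Set-VM -VMName "DrivePool-VM" -AutomaticStopAction TurnOff -GuestControlledCacheTypes $true -LowMemoryMappedIoSpace 3Gb -HighMemoryMappedIoSpace 33280Mb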

Also, are there any potential stability issues with this sort of build?
 

Dreece

Active Member
Jan 22, 2019
In theory, passthrough works fine as long as the device plays by the standards and isn't sharing an IOMMU group with anything else, I believe. Microsoft just doesn't support splitting IOMMU groups very well, unlike KVM. I have no idea about ESXi; I've never used passthrough on it.
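On a Linux host you can sanity-check how the groups fall out with something like this (quick sketch; needs VT-d on in the BIOS and intel_iommu=on on the kernel command line first):

Code:
# list every IOMMU group and the devices sitting in it
for g in /sys/kernel/iommu_groups/*/devices/*; do
    n=${g#*/iommu_groups/*}; n=${n%%/*}
    printf 'IOMMU group %s: ' "$n"
    lspci -nns "${g##*/}"
done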

Have you considered Proxmox? You can do a lot more cutting-edge hacks with it than you can with Hyper-V/ESXi.
 

denywinarto

Active Member
Aug 11, 2016
I did some research but couldn't find any way to pass through the Intel UHD, so most likely I'm going to go the ESXi route.
Proxmox was on my list, but I'm worried about its stability and I'm not good with command lines, tbh (hence why I prefer Hyper-V).
 

Dreece

Active Member
Jan 22, 2019
I believe the Intel UHD shares a core IOMMU group, so Hyper-V probably won't play ball in that particular case. KVM will, though, and probably ESXi too; if it does, do update for the record ;)
 

gb00s

Well-Known Member
Jul 25, 2018
.... most likely I'm going to go the ESXi route. Proxmox was on my list, but I'm worried about its stability and I'm not good with command lines, tbh (hence why I prefer Hyper-V).
Proxmox's stability is good enough for companies, and you are rarely forced to use the command line. I don't see any problems with passing through your specific hardware. There are tons of how-tos on the net about enabling the IOMMU and setting up passthrough in Proxmox.
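The usual recipe is only a few lines anyway (rough sketch; assumes an Intel CPU and a GRUB-based install, and the PCI address and VM ID are just examples):

Code:
# add the IOMMU flags to the kernel command line and rebuild GRUB
sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT="quiet"/GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"/' /etc/default/grub
update-grub

# load the VFIO modules at boot, then reboot
printf 'vfio\nvfio_iommu_type1\nvfio_pci\n' >> /etc/modules
reboot

# after the reboot, hand the HBA to the VM (example: PCI device 01:00.0 to VM 101)
qm set 101 --hostpci0 01:00.0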
 

denywinarto

Active Member
Aug 11, 2016
Proxmox's stability is good enough for companies, and you are rarely forced to use the command line. I don't see any problems with passing through your specific hardware. There are tons of how-tos on the net about enabling the IOMMU and setting up passthrough in Proxmox.
How often will I have to troubleshoot through the command line in my case, for 24/7 usage?
I don't mind using a command line as long as it's Windows-based, which most of my servers are,
but Linux... eh, I haven't touched it in about 10 years.
I wish MS could just get passthrough right with W2021.
 

Dreece

Active Member
Jan 22, 2019
With Proxmox, as long as you don't do something you shouldn't from the UI (which can happen quite easily!), there isn't really a need to hit the command line. Although Proxmox is rock solid (I rely on it heavily to provide my other half, my kid and the living room a desktop computer each), there can be problems such as latency issues, especially with graphics cards and audio, and you may have to tweak further by pinning cores, turning on hugepages and so on. None of that configuration is available via the UI; you would have to hit the shell for it.
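For the record, most of that tuning is just a couple of one-liners on the host anyway (the values and the VM ID below are placeholders; the affinity option only exists on newer Proxmox releases):

Code:
# reserve 2 MiB hugepages on the host (8192 x 2 MiB = 16 GiB, size it to the VM's RAM)
sysctl vm.nr_hugepages=8192

# have VM 101 use them, expose the host CPU type, and enable NUMA (needed for hugepages)
qm set 101 --numa 1 --hugepages 2 --cpu host

# pin the VM to specific host cores (Proxmox 7.3+; older versions need taskset workarounds)
qm set 101 --affinity 0-7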

You're not asking for a desktop or for playing games, so Proxmox should fit your use case quite well without having to go into the shell, but this depends on whether all of your hardware is supported by the in-box drivers. From the sound of things you're passing most of it through to VMs, so you should be fine. If something needs fine-tuning for better performance, then you will have to hit the shell and get your hands covered in penguin poo.

Give it a go and see if it grows on you. A fresh format is only a USB key away, so it can't hurt to try.
 

denywinarto

Active Member
Aug 11, 2016
Update:
I decided to give ESXi a go first; I thought I'd try something simpler before trying Proxmox.


It wasn't exactly a smooth process. I had to google some errors, and I had to use specific ESXi versions and tweak some settings to get (almost) what I want.
Why almost?
Well, with the iGPU passed through, the VM requires SVGA to be disabled, and with SVGA disabled the only way to access the VM is through remote desktop.
This isn't ideal for a 24/7 server. Remote desktop sometimes fails me, and IPMI has been a huge help.
So basically, compared to my physical servers, I can combine them, but I also had to sacrifice IPMI (or the VMware SVGA, in this case).
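If it helps anyone searching later, the SVGA piece is basically a .vmx override along these lines (rough sketch from memory; exact keys can differ between ESXi versions):

Code:
svga.present = "FALSE"          # drops the emulated SVGA adapter (this is what kills the console)
pciPassthru0.present = "TRUE"   # the passed-through iGPU shows up as a pciPassthru entry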

So I'm wondering, @Dreece and @gb00s, does this behavior also occur in Proxmox? From my understanding ESXi and Proxmox are both based on Linux, right? I'd appreciate your input on this.
 

Dreece

Active Member
Jan 22, 2019
Proxmox is a bunch of services and Perl scripts behind a web interface that control QEMU/KVM (the hypervisor).
ESXi is a proprietary hypervisor designed by the VMware guys.

I have no idea whether the behaviour is the same or not, as I've never passed through an iGPU in KVM, only dedicated cards.

Give it a whirl and you'll soon find out, my friend.
 

gb00s

Well-Known Member
Jul 25, 2018
Well, with the iGPU passed through, the VM requires SVGA to be disabled, and with SVGA disabled the only way to access the VM is through remote desktop.
This isn't ideal for a 24/7 server. Remote desktop sometimes fails me, and IPMI has been a huge help.
So basically, compared to my physical servers, I can combine them, but I also had to sacrifice IPMI (or the VMware SVGA, in this case).
That is the only expected and logical behavior. If you pass the iGPU through to a VM, of course it's not available to the host anymore. That's the same expected behavior in Proxmox. Anything else would involve a second dedicated GPU.

I'm just curious why you need to pass through your iGPU. You can work in the VMs directly from the Proxmox console. Btw, if remote desktop fails you, you may have other issues as well; it has worked for me without issues for months on Windows + Linux. Another solution I'm using is Guacamole as a remote entry point.

(screenshot attached: PMX_1.png)
You could use fullscreen mode as well.
 


denywinarto

Active Member
Aug 11, 2016
Proxmox is a bunch of services and Perl scripts behind a web interface that control QEMU/KVM (the hypervisor).
ESXi is a proprietary hypervisor designed by the VMware guys.

I have no idea whether the behaviour is the same or not, as I've never passed through an iGPU in KVM, only dedicated cards.

Give it a whirl and you'll soon find out, my friend.
Do you have to disable the emulated VGA when your GPU is passed through? The principle should be similar to the iGPU.

That is the only expected and logical behavior. If you pass the iGPU through to a VM, of course it's not available to the host anymore. That's the same expected behavior in Proxmox. Anything else would involve a second dedicated GPU.

I'm just curious why you need to pass through your iGPU. You can work in the VMs directly from the Proxmox console. Btw, if remote desktop fails you, you may have other issues as well; it has worked for me without issues for months on Windows + Linux. Another solution I'm using is Guacamole as a remote entry point.

You could use fullscreen mode as well.
It's not a connection problem; sometimes the RDP service needs to be restarted, or else it shows an "internal error".
It happens maybe 2-3 times a month, and IPMI is extremely helpful for cases like this.
I need the iGPU for Emby and Blue Iris; the Intel UHD does the job in my current setup.
I'm out of PCIe lanes to add another GPU
(LSI 9300-8e + Intel dual 10G card + Intel quad NIC).

By a second dedicated GPU, do you mean we can assign the second GPU to replace the emulated VGA?
That doesn't seem to be possible in VMware, at least I haven't seen it mentioned anywhere.
 

Dreece

Active Member
Jan 22, 2019
Ah, I see what your aim is now: you're effectively trying to pass through the Intel CPU's iGPU to get hardware media encoding/decoding in the relevant VM, yet at the same time you would like to be able to administer the host via the physical video connections on the motherboard.

Whilst using dedicated GPU cards inside VMs I've never disabled the host motherboard's basic video; that is always hooked up to a KVM-over-Ethernet box on each server and functions as expected.

Or..

Are you talking about the actual emulated video inside the VM itself? I've never used that; I only have the passthrough GPUs, which are hooked up to monitors/TVs. If and when I need to remotely access the VMs, I simply use either a shell or RDP. I've never had a problem with anything not working; the VMs run 24/7 with full remote access, never needing any services restarted. The only time I have emulated video is when I'm first setting up a VM; once I transfer the GPU in and enable it via drivers etc., I then remove the emulated video card from the VM in the Proxmox configuration. From that point on it's all good.
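In Proxmox terms that last step is a one-liner (sketch only; the VM ID and PCI address are placeholders, and pcie=1 needs the q35 machine type):

Code:
# pass the GPU through to VM 101 and drop the emulated display adapter
qm set 101 --hostpci0 00:02.0,pcie=1 --vga none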

Are you running all of this on a box with non-ECC memory? I note you wrote 'UDIMM' in your spec list above; that could be where your "internal error" issue comes from. With really good non-ECC memory running at or below factory spec you should be OK, but it's still not recommended. For 100% error-free uptime you really should consider ECC memory, because even with weaknesses in other areas, ECC can keep process memory corruption at bay. I'd also want either two PSUs working together via a controller (just like servers with shared/redundant PSUs) or a top-class PSU, plus effective cooling of all the hardware, especially any hardware RAID card, though HBAs should be fine with regular desktop airflow.

Sometimes, when everything works just great and all the hardware is running well below its peak ability, regular desktop hardware can maintain uptime. But if any little hardware weakness shows its face, you have a hell of a time trying to figure out which component is causing the process corruption/crashes. Even mixing server components with desktop components can still be a royal pain.

I'd personally run a memtest on your box. On 128GB it will take a while, and it won't necessarily tell you anything other than that the memory is good, because it could be other issues affecting voltages etc. that lead to memory corruption on a 24/7 system, effectively impacting always-on services such as RDP.

@denywinarto, you're probably aware of all the above, so please forgive any preaching, but I do agree with @gb00s: it does sound like you have some issues that need investigating to determine why your remote desktop service is failing a few times a month.
 

denywinarto

Active Member
Aug 11, 2016
Are you running all of this on a box with non-ECC memory? I note you wrote 'UDIMM' in your spec list above; that could be where your "internal error" issue comes from. [...]
No need to apologize, I appreciate your input, but the "internal error" happened on my non-ECC machine.
The ECC machine is my spare machine and is where I test Hyper-V and VMware.
I will test it again on the ECC machine, but I doubt it's a memory issue, since I've seen it posted on some forums; try googling "internal error" rdesktop.

Also a small update: I just tested Hyper-V Server 2019 (the bare-metal one) and unfortunately it doesn't report all of my PCIe devices as passthrough-capable. So I'm left with either VMware or Proxmox; I'm going to test them tomorrow and update again.

Update: this morning my server got nuked after a blackout; a Samsung 970 EVO is just not suitable for a 24/7 Emby server.
And IPMI once again saved the day. Well, that and my daily Macrium backup... I replaced the NVMe drive and did the restore through the IPMI KVM.
 