Remote access with IPMI and discrete GPU for games


Chris Beasley

New Member
Jun 5, 2015
Hi,

Sorry for the slightly confusing title!

I've screwed up my Q77 (BCQ77M) & i5 2500 setup and it no longer works; neither the chip nor the motherboard works with anything else... Therefore I'm after a 'newer' setup.

The purpose of this setup is to use Steam in-home streaming to stream from this machine to HTPCs located around the house. The system lives remotely (there's no space for a desk) with the rest of my lab gear, in a rack under my house, so remote access is critical: if the system fails, it needs to be rebooted, accessed and dealt with without me crawling under the house.

I've priced up a Haswell E3-1220 v3 and an appropriate Supermicro motherboard with a PCIe gen3 x16 slot and IPMI, which I think would fit my needs for a more powerful gaming PC (plus it can run vSphere nicely should I wish to virtualise it).

So does anybody know if, using Windows 10 for example, I can run my GTX 1070 streaming to my clients around the house, and then if something goes wrong or I need to tinker with something, boot up the IPMI KVM and use that instead? Obviously I can't do both at the same time, but I assume that if I set the desktops to mirror across the GPUs, I won't disrupt the resolution of the discrete GPU's display when going back to streaming?

Any thoughts are appreciated, and if anyone wants any clarification I'm happy to try and explain further...

Many thanks,

Chris
 

KioskAdmin

Active Member
Jan 20, 2015
You'd just need to set the VGA priority in the BIOS for this to work, right? Or virtualize the Windows OS with GPU passthrough?
 

Chris Beasley

New Member
Jun 5, 2015
KioskAdmin said:
You'd just need to set the vga priority in BIOS for this to work right? Or virtualize the Windows OS and GPU passthrough?
Yes, I assume so, although I'm not sure what the BIOS features are for an X10-class Supermicro LGA1150 motherboard.

Indeed, I could virtualise the machine, except that presently, once I've finished streaming a game, the system goes to sleep after about 10 minutes. Virtualised, unless the hypervisor itself went to sleep, it would continue to draw large amounts of power 24/7 unless I manually shut down the VM and then the hypervisor, and then manually booted them both up again before I could do anything.

Does vSphere 6 (or 6.5) have a sleep-on-VM-idle feature to bring power right down to a bare-metal idle state?
 

nk215

Active Member
Oct 6, 2015
I don't think you can virtualize a GTX GPU with ESXi (passthrough or not).

My E3-1270 v2 with six 4TB HDDs, an HBA card and two Quadro 4000 GPUs idles at less than 200 watts.
 

Chris Beasley

New Member
Jun 5, 2015
I've virtualised my old AMD Radeon 270X before and it was fine. I haven't tried the GTX 1070 yet, but there shouldn't be a reason why not, unless nVidia have gimped it?
 

gbeirn

Member
Jun 23, 2016
Chris Beasley said:
I've virtualised my old AMD Radeon 270x before and it was fine, I haven't tried the GTX 1070 yet but there shouldn't be a reason why not, unless nVidia have gimped it?
Which I believe they do: the drivers detect they are running in a virtual machine and refuse to load.
 

Chris Beasley

New Member
Jun 5, 2015
Hmm, looks like nVidia do get sneaky in the drivers. It appears there is a flag you can change in ESXi that stops the guest OS from detecting that it is actually a VM.

Anyway, I'd rather not use ESXi as this limits sleep behaviour etc., and as this machine is for gaming I'd rather not have it up when it's not needed. My other 4 machines are already pulling 600W 24/7, so I don't need another one!
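For scale, that kind of always-on draw adds up quickly. A quick back-of-the-envelope calculation (the per-kWh rate below is an assumed example, not a figure from this thread):

```python
# Annual energy and cost of a constant 600 W draw.
watts = 600
hours = 24 * 365                  # 8760 hours in a (non-leap) year
kwh = watts * hours / 1000        # watt-hours -> kilowatt-hours
rate = 0.15                       # assumed example price per kWh
cost = kwh * rate
print(kwh, cost)                  # 5256.0 kWh, 788.4 per year
```

So leaving another idle-but-running box under the house is a real, measurable cost, which is why sleep behaviour matters here.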
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
It's because NVIDIA makes cards specifically 'made' for virtualization and doesn't want you using a cheaper (i.e. home) graphics card instead; that's why the consumer cards have issues, and why people use the AMD cards. At least, that's my understanding from the last time I looked.
 

Patrick

Administrator
Staff member
Dec 21, 2010
I am building my wife a small CUDA compute machine this week. She works at NVIDIA and has been reading deep learning books non-stop for the last week so I am doing NVIDIA not AMD for her.

I will try to set this up when I do the machine. I am going to put her on Ubuntu Server since that is what she is used to on the Linux side. I will see if I can put Windows 10 + Steam in-home streaming beforehand.
 

Chris Beasley

New Member
Jun 5, 2015
T_Minus said:
It's because NVIDIA makes cards 'made' for virtualization and they don't want you to use a cheaper ie: home gfx card is why they have issues, and why people use the AMD cards. At-least that's my udnerstanding last time I looked.
Yep, although it is a bit weird, as consumer GPUs can only pass through the entire GPU and not slices the way their virtualisation-focused cards can, so really it's just a bit odd.
Patrick said:
I am building my wife a small CUDA compute machine this week. She works at NVIDIA and has been reading deep learning books non-stop for the last week so I am doing NVIDIA not AMD for her.

I will try to set this up when I do the machine. I am going to put her on Ubuntu Server since that is what she is used to on the Linux side. I will see if I can put Windows 10 + Steam in-home streaming beforehand.
From a Reddit post, it appears that you need to change a flag to get the card's drivers installed.
Reddit said:
You have to set "hypervisor.cpuid.v0 = FALSE" for the Nvidia drivers to recognize that they aren't running in a VM so the 1050Ti should work depending on the rest of your hardware.
Full thread here: PCI passthrough for VMWare ESXi? • /r/nvidia
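For anyone trying it, a minimal sketch of where that setting lives, assuming a standalone ESXi host with the VM powered off (the datastore path and VM name here are made-up examples, not from the thread):

```
# Line appended to the VM's configuration file, e.g.
# /vmfs/volumes/datastore1/Win10-Steam/Win10-Steam.vmx
# Hides the hypervisor CPUID bit so the GeForce driver will load:
hypervisor.cpuid.v0 = "FALSE"
```

The same key can reportedly be added through the vSphere client as an advanced configuration parameter rather than by editing the file directly.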

So, on topic :) has anybody tried this? I'm also investigating the Intel Q-series chipsets for AMT, but it appears they disable the iKVM when a discrete GPU is installed, even if the onboard graphics are enabled and connected to a display!