PCI passthrough to bhyve on SmartOS


knorrhane

New Member
Dec 17, 2016
Hi,

I'm attempting to pass through an NVIDIA card to a bhyve VM on SmartOS, and my understanding is that I need to disable the card at boot to avoid a conflict with the global zone. Any pointers from anyone on how I should configure my loader.conf to achieve this? I have the PCI address, but I'm not sure how to proceed and can't really find any good documentation other than this post from 2018: PCI pass-through support with bhyve and SmartOS - John Levon's blog

Here's the PCI info:
pci10de,11d8 (pciex10de,1bb3) [NVIDIA Corporation GP104GL [Tesla P4]]
Thanks for any help!
 

knorrhane

New Member
Dec 17, 2016
I would check and ask at
Thanks gea, I've posted a question there as well. It's just really hard to find documentation. I'm thinking of migrating to Proxmox and passing through my HBA card to TrueNAS in a VM. It feels like it's getting harder to get help with SmartOS for home use.
 

gea

Well-Known Member
Dec 31, 2010
DE
Manuals for SmartOS are here: SmartOS Documentation
The community is small but quite helpful. SmartOS is much more minimalistic than Proxmox; resource-wise it is more like ESXi, but as a Solaris fork with perfect ZFS integration, LX Linux zones/Docker support, bhyve and KVM. As a USB-stick distribution it is a very stable, minimalistic system with a commercial background. I still prefer ESXi with an OmniOS storage VM, though, as it is nearly as minimalistic, has the best support for any guest OS, and gives the most minimalistic full-featured ZFS storage VM with the Solaris kernel-based SMB server that I always prefer over SAMBA.
 

knorrhane

New Member
Dec 17, 2016
Manuals for SmartOS are here: SmartOS Documentation
The community is small but quite helpful. SmartOS is much more minimalistic than Proxmox; resource-wise it is more like ESXi, but as a Solaris fork with perfect ZFS integration, LX Linux zones/Docker support, bhyve and KVM. As a USB-stick distribution it is a very stable, minimalistic system with a commercial background. I still prefer ESXi with an OmniOS storage VM, though, as it is nearly as minimalistic, has the best support for any guest OS, and gives the most minimalistic full-featured ZFS storage VM with the Solaris kernel-based SMB server that I always prefer over SAMBA.
Thanks for taking the time to reply gea!

I've actually run ESXi and OmniOS in the past, but I didn't like the overhead, and the licensing was a bit complicated as I remember it. That was about 8 years ago, though, and I've run SmartOS since.

I've read the SmartOS documentation but I can't find any information on PCI passthrough; I only find it for Triton.

I appreciate the simplicity of SmartOS and that I don't need a VM to handle my ZFS pool, and as you say, the community is small but dedicated and very helpful. However, I feel like the focus in recent years has been on Triton and datacenters rather than SmartOS. For example, I think lx zones are a great feature, but last I checked the latest officially supported Ubuntu version was 16.04 LTS, with the community stepping in to make Ubuntu 22.04 installable. As far as I'm aware, official development to update lx zones has stopped, but this is from memory and I might be mistaken.

I've also tried napp-it, which I liked, but I wanted more focus on VM management. I don't quite remember, but aren't you the creator of napp-it? If so, well done on that as well as all the other work you've been doing!

Back when I tried different hypervisors there was also the issue of ZFS versioning and support, especially with OpenZFS, which also had performance issues at the time. I believe this is solved now: OpenZFS is more mature and ZFS version numbers have been replaced by feature flags, right? So you can import your pool on almost any OS.
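At least that's how I picture it: portability mostly comes down to which feature flags a pool has enabled, so a quick sanity check before moving a pool between operating systems would look something like this (the pool name tank is just a placeholder):

Code:
# List the feature flags enabled on the pool
zpool get all tank | grep 'feature@'

# Export cleanly on the old host, then import on the new one
zpool export tank
zpool import tank

# Optionally enable any newer flags the new platform supports (one-way!)
zpool upgrade tank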

Another reason I want to try Proxmox is the web GUI, which would be really useful for me since I mainly manage things remotely. A web GUI makes that a lot easier, as I don't need to install an SSH client or any other software on the computer I'm using.

I think I'll continue working on getting PCI passthrough to bhyve on SmartOS (I don't have a ton of experience with Solaris/FreeBSD inner workings), but Proxmox as a hypervisor with my HBA passed through is getting more and more interesting.

Thanks again for your reply, and if you happen to find any information on PCI passthrough on SmartOS, please let me know. I'd really appreciate it, thanks!
 

SharkShark

New Member
May 18, 2023
Lost Lake Local
OP it may boil down to Nvidia firmware, and the fact that most BSD and SmartOS Bhyve types are running rack servers and would have to fire up a desktop box to get started, or deal with stupid or impossible kvm cabling, or crank up the Bieber in a worthy set of closed-back cans to dev it in the server closet annex. Pretty sure DisplayPort spec max cable length is 2m, so there's that too.

But you may well need the physics processors more than FPS, which is a very good reason for the Bhyve crew to prioritize their action-item agendas.

OP thanks and I hope you get it resolved and that your efforts of late have reached the right ears! :cool:
 
Last edited:

knorrhane

New Member
Dec 17, 2016
OP it may boil down to Nvidia firmware, and the fact that most BSD and SmartOS Bhyve types are running rack servers and would have to fire up a desktop box to get started, or deal with stupid or impossible kvm cabling, or crank up the Bieber in a worthy set of closed-back cans to dev it in the server closet annex. Pretty sure DisplayPort spec max cable length is 2m, so there's that too.

And on top of that, for years Nvidia specifically has been a deliberate PITA inre virtualization of their consumer gear [so they could make billion$ selling it to hyperscalers]...which is why you even have to pass it through in the first place, and likely why the eventual solution to your problem may well be specific to your gpu or its closed source firmware rev.




Imho you'd be stepping back in time for a pumpkinspice clicky on top of sausage factory kvm and unsupported ZoL...but windows gaming is a way of life, puTTY is hard, X11 is even harder, ESXi is Dad-style, and surely Proxmox will passthrough your hawt gpu to your olskool hardware vm with the influencer-based awesomeness of its incredible gui, amirite?


j/k - I've been where you are. You can still go to VBox forums and see where I was the only person on the planet who had ever had ipv6 Duplicate Address Detection logspam from a Linux guest running slaac in a dual-stack zone host...which was sum heady shit back in 2010.
Haha, maybe. I do have a rack server with IPMI, so luckily I'm good on remote access ;)

You're right that NVIDIA makes virtualization hard, but I'm going to use a Tesla P4 for Plex transcoding rather than gaming, so I think it should be OK. We will see! I got some pointers on the SmartOS subreddit, so I think I know what to do now; I just need to find the time to test it. I'd be happy to continue with SmartOS if I can get PCI passthrough to work, since I don't really want to migrate all my VMs...

Anyway, here's the link to the Reddit post, and I'll report back if I get it to work:
https://www.reddit.com/r/smartos/comments/163wqaz
 

SharkShark

New Member
May 18, 2023
Lost Lake Local
Haha, maybe. I do have a rack server with IPMI, so luckily I'm good on remote access ;)

You're right that NVIDIA makes virtualization hard, but I'm going to use a Tesla P4 for Plex transcoding rather than gaming, so I think it should be OK. We will see! I got some pointers on the SmartOS subreddit, so I think I know what to do now; I just need to find the time to test it. I'd be happy to continue with SmartOS if I can get PCI passthrough to work, since I don't really want to migrate all my VMs...

Anyway, here's the link to the Reddit post, and I'll report back if I get it to work:
https://www.reddit.com/r/smartos/comments/163wqaz

Yeah, I had a brilliant flash of insight and was busy editing... :D


Not to distract but you can transcode your cloud storage objects at rest for free some places.

And you can clone/restore all your host's complete zones, their applications, filesystems, and network configs in a single sc_profile.xml, so migration is not necessarily the chore it could be...


Tribblix will give you SmartOS zones and Bhyve in a desktop environment that weighs very little and gives you the tools to automate and organize your hypervisor and zones. But Tribblix is a real desktop OS, not a web gui script builder like Prox.

Danube Cloud has heaps of web GUI and comes with an OPNsense zone for your virtual edge, but it's a big-iron OS with high Ordnung and datacenter discipline...

...and you're prob well aware of all this already too....:D
 
Last edited:

Rttg

Member
May 21, 2020
Did you get the right overlay files added to your USB boot drive? I'm no longer using PCIe passthrough after an initial test a while back, but I remember that being part of the effort.
 

nwilkens

New Member
Sep 12, 2023
I posted a bit more detailed response here, but also found this related thread recently.

PCI passthrough to bhyve on SmartOS, or using Triton DataCenter, is possible (at least in my recent testing with a Tesla T4). I found the docs need some updating, which we will be doing in the coming days.

In your case with the PCI ID output, you will need to add the following to ppt_aliases and then reboot:

Code:
ppt "pci10de,11d8"
Subsequently, `pptadm list` should show an attached device, like this:

Code:
[root@gpu-test (hq-monroe-1) ~]# pptadm list
DEV        VENDOR DEVICE PATH
/dev/ppt0  10de   1eb8   /pci@7a,0/pci8086,2f04@2/pci10de,12a2@0
I would also suggest reviewing the Triton DataCenter documentation for Bhyve PCI passthrough for the final steps once the ppt driver is attached to the passthrough device.
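In short, the last step is handing the attached ppt device to the bhyve zone. The exact property names are in the docs, but as a rough sketch (assuming the device came up as /dev/ppt0 and <vm_uuid> is your bhyve VM's UUID):

Code:
# Sketch only - see the Triton/SmartOS docs for the exact syntax on your platform
zonecfg -z <vm_uuid>
add device
set match=/dev/ppt0
end
commit
exit

# Reboot the VM so bhyve picks up the new device
vmadm reboot <vm_uuid>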

Thanks!
 

knorrhane

New Member
Dec 17, 2016
I posted a bit more detailed response here, but also found this related thread recently.

PCI passthrough to bhyve on SmartOS, or using Triton DataCenter, is possible (at least in my recent testing with a Tesla T4). I found the docs need some updating, which we will be doing in the coming days.

In your case with the PCI ID output, you will need to add the following to ppt_aliases and then reboot:

Code:
ppt "pci10de,11d8"
Subsequently, `pptadm list` should show an attached device, like this:

Code:
[root@gpu-test (hq-monroe-1) ~]# pptadm list
DEV        VENDOR DEVICE PATH
/dev/ppt0  10de   1eb8   /pci@7a,0/pci8086,2f04@2/pci10de,12a2@0
I would also suggest reviewing the Triton DataCenter documentation for Bhyve PCI passthrough for the final steps once the ppt driver is attached to the passthrough device.

Thanks!
Thank you! I posted the same question on /r/smartos and got the same response. It worked as you described, and using zonecfg to pass through the graphics card also worked as stated in the documentation. It's still a bit of a hassle setting up the bhyve VM (creating the VM, expanding the quota to copy the ISO to root, modifying the bhyve boot disk, etc.), but I got it all working. Thanks again!
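For anyone finding this thread later, the VM-creation part (before the passthrough step) boiled down to a vmadm payload roughly like the one below, saved as e.g. bhyve-vm.json; the alias, sizes and nic_tag are just placeholders, so adjust them for your own setup:

Code:
{
  "brand": "bhyve",
  "alias": "plex-gpu",
  "bootrom": "uefi",
  "ram": 8192,
  "vcpus": 4,
  "disks": [
    { "boot": true, "model": "virtio", "size": 51200 }
  ],
  "nics": [
    { "nic_tag": "admin", "ips": ["dhcp"], "model": "virtio" }
  ]
}
The VM is then created with `vmadm create -f bhyve-vm.json`, followed by the zonecfg passthrough step from the documentation.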
 

knorrhane

New Member
Dec 17, 2016
Did you get the right overlay files added to your USB boot drive? I'm no longer using PCIe passthrough after an initial test a while back, but I remember that being part of the effort.
I couldn't get it quite right initially, but after some trial and error, just adding the ppt_aliases entry made it work. Thanks!