X9SRH-7TF (S2011, LSI 2308, Dual 10GbE ~$500)

ant

New Member
Jul 16, 2013
Hi Gea,

Thank you very much for the instructions. I have done very similar procedures numerous times with various hypervisors and Solaris-based guests and have not succeeded yet. I will try OmniOS as a guest, as I have not tried that one yet. I will also try the same build of ESXi.

As of this writing, you don't actually mention passing the 2308 through to the OmniOS guest - though I assume you did, since you mention creating a 6-disk pool. I also assume you did this before installing OmniOS - correct me if I am wrong.

Anyway, I will follow your procedure and see if it works for me.

Ant
 

gea

Well-Known Member
Dec 31, 2010
DE
ant said:
Hi Gea,

Thank you very much for the instructions. I have done very similar procedures numerous times with various hypervisors and Solaris-based guests and have not succeeded yet. I will try OmniOS as a guest, as I have not tried that one yet. I will also try the same build of ESXi.

As of this writing, you don't actually mention passing the 2308 through to the OmniOS guest - though I assume you did, since you mention creating a 6-disk pool. I also assume you did this before installing OmniOS - correct me if I am wrong.

Anyway, I will follow your procedure and see if it works for me.

Ant
I will update ESXi and OmniOS today.

PS: in step 4 I activated pass-through, but in step 6 you still need to add the device to the OmniOS VM as a PCI adapter (I have updated the thread).
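For anyone doing this without the vSphere client: the pass-through device ends up as pciPassthru entries in the VM's .vmx file. This is a sketch only - the vendor/device IDs and PCI address below are placeholders, to be replaced with what the host's PCI listing reports for your 2308:

```text
# Illustrative .vmx entries for a passed-through PCI device.
# All values here are placeholders, not the real IDs of your 2308.
pciPassthru0.present = "TRUE"
pciPassthru0.vendorId = "0x1000"
pciPassthru0.deviceId = "0x0086"
pciPassthru0.id = "02:00.0"
```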
 

ant

New Member
Jul 16, 2013
Hi Gea,

Some notes on my testing.

I reset my BIOS to optimised defaults - I had previously changed BIOS settings heaps of times, so I thought it better to start from scratch. I checked that virtualization and VT-d were enabled in the BIOS, and they were.

I installed ESXi 5.1 build 799733, enabled pass-through of the LSI 2308, and rebooted.

I set up a guest VM (Solaris 10 64-bit guest OS type) and passed the 2308 through to it.

I installed OmniOS from OmniOS_Text_r151006j.iso (not the latest) onto that VM. It was able to see the 2308 and completed the install. It could make use of the 2308 and its attached disks. You have no idea how happy that made me.
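For reference, checking the passed-through controller from inside an OmniOS guest looks roughly like this - a sketch, with the pool and disk names as examples only (the c#t#d# names will differ on any given system):

```shell
# Confirm the mpt_sas driver has attached to the passed-through 2308
prtconf -D | grep -i mpt

# List the disks the controller exposes (format prints a menu of disks)
echo | format

# Build a 6-disk pool from them (disk names are placeholders)
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
zpool status tank
```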

For further testing I shut down the guest, removed its 2308, then set up a Solaris 11 64-bit guest. I passed the 2308 through to this guest and attempted to install Solaris 11.1. It again stalled early in the install process, just like I experienced previously. No good.

So it looks like this is a problem with 2308 PCI passthrough under Solaris 11.1, and also SmartOS (tested previously). Yet passthrough of the 2308 with OmniOS seems fine.

As a further test I booted an older version of ESXi - 5.0u2 (which, from memory, needs a recent ixgbe driver added for the X9SRH-7TF). I imported the previous OmniOS install and passed the 2308 through to the guest. It booted fine and could use the 2308.

I was hoping to be able to use Solaris 11.1, but I will be using OmniOS instead now - and I am glad that I can.

Thank you so much, Gea. If I had not heard of your success, I probably would not have tried OmniOS; I would have assumed it would behave the same as the other Solaris-based operating systems. This is such a relief after weeks of frustration trying to get this one thing working. At least I have learnt how to set up various hypervisors in the process.

Now I have a lot more work to do to turn this into an all-in-one replacing my MythTV backend and frontend (lots more PCI passthrough), mail server, web server and test server.

Ant
 

mrkrad

Well-Known Member
Oct 13, 2012
Why aren't you using a modern version of ESXi, 5.1u1? That version is ancient.

Also remember the classic client tends to create virtual machine hardware version 8 (5.0), versus the web client (virtual machine hardware version 9, aka 5.1). There are big differences - some good, some bad.

I still don't understand why you don't run bare metal. ESXi is not designed to push network cards as hard as, say, Windows on bare metal.
 

ant

New Member
Jul 16, 2013
Hi mrkrad,

Not sure if you are addressing me, but I will give my answers.

This had nothing to do with pushing network cards. I was hoping to get the onboard LSI 2308 SAS/SATA controller working via PCI passthrough to a VM, to help me build what people seem to call an all-in-one.

For my all-in-one, I hope to have the following:

- a hypervisor, so I can run virtual machines for home use and for testing different software versions and operating systems.
- a VM running an operating system that does ZFS (specifically ZFS, not something ZFS-like) well, so it is stable and I can make use of various ZFS features, e.g. data integrity, snapshots, filesystem compression, maybe even deduplication. To do this it needs fairly direct access to the disk controller via PCI passthrough. This VM would serve various ZFS shares over NFS (and possibly CIFS and AFP) for use by the hypervisor as a datastore, and by my VMs and other computers on my home network.
- a VM running the MythTV backend that can access my DVB-T PCI cards (TV tuner cards), and also one of the USB controllers via PCI passthrough for my DVB-T USB device.
- a VM running the MythTV frontend that can access my PCIe graphics card via PCI passthrough for output to my TV, plus a second USB controller via PCI passthrough for my USB remote control dongle and my amplifier's async USB sound device. Both MythTV functions could be on one VM rather than two, if possible.
- a VM running Debian Linux for other general server duties, e.g. mail server, web server.
- other VMs as desired.
- consolidation of various servers into one box.
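The ZFS features I have in mind for the storage VM map onto a handful of commands on OmniOS. A sketch against a hypothetical pool/filesystem called tank/vmstore:

```shell
# Filesystem compression, and (RAM-hungry -- test first) deduplication
zfs set compression=on tank/vmstore
zfs set dedup=on tank/vmstore

# Point-in-time snapshot before an update, and a rollback if it goes wrong
zfs snapshot tank/vmstore@before-update
zfs rollback tank/vmstore@before-update

# Share over NFS (e.g. as an ESXi datastore) and over CIFS for other machines
zfs set sharenfs=on tank/vmstore
zfs set sharesmb=on tank/vmstore
```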

No guarantees I can get all of the above working, but I will try very hard. I want this setup, rather than trying to get it all working in a single operating system on bare metal, for the following reasons:

- I am not sure a single operating system exists that can do all of that - and I am not interested in ZFS on Linux in its current state.
- I want to be able to apply operating system security updates safely and easily, with an easy way to revert. My current MythTV server, installed on bare metal, was _very_ fiddly to get working with suitable performance and functionality, so it now never gets updated. My mail server is on the same box, so, scarily, it never gets updated either. With VMs I can split server functions out and upgrade them individually.
- for learning. I am learning a lot trying to get all these tricky things working together.
- because I just want to do it this way.

As to why I am not using a modern version of ESXi (5.1u1): the posts above were just testing to get PCI passthrough of the LSI 2308 working. I was trying to replicate gea's success by using the same software versions, and also testing alternate versions to work out whether it was the specific version of ESXi that made it work, or the guest OS.

Beyond that testing, I do intend to keep using the older ESXi 5.0u2, because ESXi 5.1 and 5.1u1 have lost the ability to pass USB controllers through to VM guests via PCI passthrough, and I need that functionality. I have tried standard USB device passthrough of my amplifier's USB sound device to a guest in ESXi 5.1u1; it tries to work, but the sound output just fails after half a second. I would expect the same issue with my USB DVB-T card. From many reports I have read, PCI passthrough of a whole USB controller to the guest is needed for these kinds of devices.
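For what it's worth, identifying which of the board's USB controllers to hand over (on an ESXi build that still allows it) starts with listing the host's PCI devices from the ESXi shell - a sketch; the exact output format varies between ESXi versions:

```shell
# On the ESXi host shell: find the USB controllers among the PCI devices
lspci | grep -i usb

# More detail, including the addresses used when configuring pass-through
esxcli hardware pci list
```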

If anyone is going to say the above is not the best way to do things - you are probably right. It is, however, the way I want to do this project, and that is what I hope to accomplish.

Thank you for your interest.

Ant
 

Jeggs101

Well-Known Member
Dec 29, 2010
Ant, did you see any reason why they took that capability away? USB passthrough was always a reason I looked at VMware.
 

ant

New Member
Jul 16, 2013
Hi Jeggs101,

I have just skimmed through this massive thread on the VMware forums, so I don't really know:

Esxi 5.1 pci passthrough broken | VMware Communities

They talk about a lot of things there, but the predominant issue is that USB controllers specifically can't be passed through to guests, or there are problems when they are. The ability to pass through individual USB devices from the host, or from the vSphere client, is still supported for devices on the hardware compatibility list.

This is not confirmed, but I think the response from VMware is that passthrough of USB controllers was never a supported feature, and they are surprised to hear it worked at all in previous versions of ESXi. They urge users to use USB device passthrough instead (including passthrough from a vSphere client), and say that if you want USB controller passthrough as a feature, you should make a feature request. In other words, it is not a bug. I am paraphrasing from memory of reading parts of that thread, and I don't speak for VMware - I could be wrong.

My guess is that they are moving on from older versions of ESXi and updating the architecture, which probably necessitates cutting out or replacing old code. As this was never a supported feature, they have made a business decision to let it be a casualty of their progress. I am fine with that - but it means I will be using an ESXi 5.0 variant rather than a newer version, provided it suits my needs. I am just glad they let me use their product for free. I'd probably have a different attitude if I were paying them, though.

Ant
 

gea

Well-Known Member
Dec 31, 2010
DE
ant said:
For further testing I shut down the guest, removed its 2308, then set up a Solaris 11 64-bit guest. I passed the 2308 through to this guest and attempted to install Solaris 11.1. It again stalled early in the install process, just like I experienced previously. No good.

So it looks like this is a problem with 2308 PCI passthrough under Solaris 11.1, and also SmartOS (tested previously). Yet passthrough of the 2308 with OmniOS seems fine.
The LSI 2308 is quite new and supported at least in the newest Illumos versions. With OmniOS you have the most up-to-date Illumos implementation currently available (a new stable every six months and bugfixes every two weeks). This is the reason I have moved to OmniOS almost completely.

For older Solarish systems, an LSI 9211 is a better choice.
 

mrkrad

Well-Known Member
Oct 13, 2012
Everything is moving towards proper function-level reset and virtual functions (SR-IOV) - you have to remember that ESXi is tuned for running as a cluster with a SAN. It was never meant to be a replacement for Fusion/Workstation. The old method of pass-through, giving direct access, obviously has security and stability implications that can be addressed with virtual functions.

Simply put, you should never be able to do something that will compromise the hypervisor or other VMs. Directly accessing hardware is a surefire way to break that promise.
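As an illustration of that SR-IOV direction: later ESXi builds let a supported ixgbe NIC expose virtual functions that can be assigned to VMs individually, instead of passing the whole device through. A sketch, assuming the module name and parameter apply to your ESXi build and NIC:

```shell
# On the ESXi host: ask the ixgbe driver to create virtual functions
esxcli system module parameters set -m ixgbe -p "max_vfs=8"

# After a reboot, each VF appears as its own PCI device
esxcli hardware pci list
```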
 

ant

New Member
Jul 16, 2013
There is a BIOS update available for this motherboard now: R 3.0, X9SRH3_705.zip.

It looks like it will add support for the E5-2600/1600 v2 product family. Yay - I was hoping they would support this in case I want to upgrade the processor one day.

Ant
 

Jeggs101

Well-Known Member
Dec 29, 2010
I think most of these boards will support v2. ASUS and MSI boards have been updated too.
 

33_viper_33

Member
Aug 3, 2013
Has anyone determined whether this board works with engineering samples yet? I unintentionally bought one that was advertised as a production model, and it doesn't work in my server's second CPU slot. It would be nice to put it to work.
 

33_viper_33

Member
Aug 3, 2013
Has anyone run a watt meter on this setup? I'm curious what the idle power consumption of this board with an E5-1650 is. I would guess 70+ watts, given the 10GbE NICs and LSI controller on board. I also can't wait to see how much of an improvement the E5-1650 v2 is power-wise.
 

Aluminum

Active Member
Sep 7, 2012
33_viper_33 said:
Has anyone run a watt meter on this setup? I'm curious what the idle power consumption of this board with an E5-1650 is. I would guess 70+ watts, given the 10GbE NICs and LSI controller on board. I also can't wait to see how much of an improvement the E5-1650 v2 is power-wise.
Some numbers to play with. You can't directly assume these to be actual power draw, but it should be fairly close:

The X540-T2 claims 13.4 W TDP as a standalone card; onboard might be a little less because it can reuse some existing power circuitry.

The equivalent LSI card, the 9207-8i, claims "9.8 W typical, airflow 200 LFM".
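Just adding up those two quoted datasheet figures gives a floor for what the onboard extras contribute; CPU, RAM, drives and PSU overhead all come on top:

```shell
# Sum of the two figures quoted above: X540-T2 (13.4 W) + 9207-8i-class (9.8 W)
awk 'BEGIN { printf "%.1f\n", 13.4 + 9.8 }'
```

So the onboard NIC and HBA alone would account for roughly a third of a 70 W idle guess.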
 

ant

New Member
Jul 16, 2013
Not sure when this became available, but there is an IPMI firmware update for this motherboard now: version R 2.40, SMT_X9_240.zip.

I have not tried updating to it yet, and I don't have a changelog.

Ant
 

jpasint

New Member
Oct 20, 2013
ant said:
Not sure when this became available, but there is an IPMI firmware update for this motherboard now: version R 2.40, SMT_X9_240.zip.

I have not tried updating to it yet, and I don't have a changelog.

Ant
That's been out for a while - I've had it at least a few months now.

I couldn't tell you what is different, though, since I only had the previous version for a couple of weeks on my new board.

I can say for sure it works like a charm.

Joe