Darn, I was really hoping passthrough would work. This MB's dual 82576 reports SR-IOV unavailable.
BIOS:
VT-d: enabled
SR-IOV: enabled
ESXi dmesg:
2013-03-17T11:15:04.130Z cpu11:4612)<6>igb 0000:01:00.0: eth0: PBA No: 82576B-001
2013-03-17T11:15:04.130Z cpu11:4612)<6>igb 0000:01:00.0: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue(s)
2013-03-17T11:15:04.130Z cpu11:4612)PCI: driver igb claimed device 0000:01:00.0
2013-03-17T11:15:04.131Z cpu11:4612)<6>igb: : igb_validate_option: max_vfs - SR-IOV VF devices set to 7
2013-03-17T11:15:04.131Z cpu11:4612)<4>igb 0000:01:00.1: Failed to initialize SR-IOV virtualization
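For anyone retrying this, the max_vfs option is passed through the igb module parameters. A hedged sketch for ESXi 5.x follows; the exact esxcli namespace can differ by ESXi version, and the dual-port "7,7" value is an assumption, not something confirmed in this thread:

```shell
# Show the igb driver's current module parameters (ESXi 5.x syntax assumed)
esxcli system module parameters list -m igb
# Request 7 VFs on each port; a reboot is required before it takes effect
esxcli system module parameters set -m igb -p "max_vfs=7,7"
# After reboot, check whether any virtual functions appeared on the bus
lspci | grep -i "01:00"
```

Even with max_vfs set, SR-IOV can still fail to initialize if the BIOS doesn't fully expose the capability, which is consistent with the log above.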
What power BIOS settings are you using? Do you have a blank in the 4th node spot? My idle consumption is quite a bit higher than quoted in Patrick's post. Has anybody else tested theirs?
I'm running 3 of the 4 nodes.
1: 2x L5520, 48GB RAM, 2 SSD, 1 SATA
2: 2x L5520, 48GB RAM, 3 SATA
3: 2x L5520, 24GB RAM, 3 SATA
Is ~350W accurate? I don't remember the exact wattage, but the draw was close to 3.1A. I know I'm running more disks and RAM than Patrick, but if this thing truly idles at 174W, I'm using exactly double!
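For what it's worth, the arithmetic roughly checks out. A quick sketch, assuming ~115 V mains (an assumption; your line voltage may differ) and a power factor near 1 so that watts ≈ volts × amps:

```shell
# Rough check of the numbers above; 115 V mains is an assumption, and
# watts ~= volts * amps only holds when the power factor is near 1.
awk 'BEGIN {
  printf "%.1f %.1f\n", 3.1 * 115, 2 * 174
}'
# prints: 356.5 348.0
```

Both results land near the ~350 W figure, so the "exactly double 174 W" observation is internally consistent.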
Here's another data point to add to the mix, based on readings from my Kill-A-Watt meter.
I have also chosen to set up the C6100 with 2 nodes, 6 drives per node. With that said, I think I know what all I need to make this setup work fully, but just wanted to verify it with you guys:
2x Dell 6J3R2 6-SATA harness
2x 0.5m Mini-SAS (SFF-8087) to 4x SATA breakout
Backplane to midplane breakout cables:
1) Cable for two drives - part number is 334VV.
2) Cable for three drives - part number is 3R7FF or 3T15J
3) Cable for four drives - part number V91FW.
In the 3.5" ones, I wonder if you could use a SATA DOM like this: KingSpec 8GB SATA DOM MLC Flash SSD Disk on Module Network PC Solid State Drive | eBay
If so, that would help free up SATA ports for storage.
I think PigLover or someone on here has the commands needed.
I have power management set to "OS control". I couldn't really tell the difference between that and node-managed, which seems to just enable the Intel management stuff. I have a node in the 4th spot as a cold spare, but it's powered off.
You can see in my Colo post two chassis with six nodes, plus two 24-port switches, running on the DC PDU at 5 amps.
These numbers are a bit different from Patrick's, imo, and much closer to mine. If you're using 240W with two nodes, then me using 350W with 3 (2 of which have double the RAM) is well within reason. Using your math of 0.8A per additional node, each chassis with 3 nodes will use 2.86A. For two equally configured servers, you're at 5.72A, and that's not including his two switches.
With 2 nodes:
Node 1: 2x L5520, 24GB RAM, 1x SSD + 4x SATA; OS: Server 2012
Node 2: 2x L5520, 24GB RAM, 1x SATA; OS: Server 2012
Off - 18W
Power Up - 340W peak
Idle in Server 2012, at the Start menu - about 240W (actual current draw indicated by the Kill-A-Watt is 2.06A)
So rough math says that adding a second node increased the idle power draw by 100W, or about 0.8A
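A quick sketch chaining these readings with the ~0.8 A-per-node figure (a rough estimate from this thread, not a measurement of every configuration):

```shell
# Chain the thread's numbers: 2.06 A measured with two nodes idle,
# ~0.8 A estimated per additional node, then two such chassis.
awk 'BEGIN {
  three_node = 2.06 + 0.8          # amps for one chassis with 3 nodes
  two_chassis = 2 * three_node     # amps for two identical chassis
  printf "%.2f %.2f\n", three_node, two_chassis
}'
# prints: 2.86 5.72
```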
Thank you for the detailed response. I will definitely go with the original 6-SATA harness. I think I will take my chances with the Monoprice 1x4 cables; they're cheap enough, and I can even return them if needed. I will try to get the 0.5m of cable nicely organized between the backplane and the fans. While rewiring the original 1x3 wires to only use 2 nodes, I discovered that freeing the screws that hold the entire HD assembly goes a long way toward having a little more room to work on the cabling.
If you want to use the 6 SATA ports on the motherboard, then the Dell 6J3R2 harness is a must (I purchased mine from the same source). This will connect the 6 SATA ports on the motherboard to the "interposer board". Having torn apart the C6100 to rework it, I now appreciate the effort and diligence that the Dell engineers put in on this product. The Dell cables are just the right length, just the right fit. You could probably use something else, but IMHO it's not worth the time and effort. Buy the right cables and be done with it.
So the next concern is the midplane breakouts. The interposer board connects to the midplane and (thanks to the 6J3R2 harness) passes all 6 SATA connections along. The midplane then has the SFF-8087 connector as you indicated, but I'm not sure whether the Monoprice cables will work. They do need to be forward breakout cables, so functionally they should work.
0.5m cables, however, are much longer than needed. Care is needed with the SFF-8087 to SATA cable, as it needs to be routed from the midplane through the fan housing assembly and then to the backplane.
For each node, the midplane has 2x SFF-8087 connectors, one for disks 1~4 and the other for disks 5&6. Dell makes specific cables for the 4- and 2-disk connections. I purchased mine from a fellow forum member (dba) who was parting out his C6100. Unfortunately I've installed them already and didn't make a note of their actual p/n, but the V91FW cable certainly looks like the 4x drive breakout cable.
Per dba in post #84 in this thread:
I think that this cable from the same seller on eBay is the cable for the remaining 2 ports (5&6)
Again, the Dell cables are precisely the right length, not too long or too short (hate to go all Goldilocks on you), but they do work. I connected the V91FW cable to one "column" of 3 disks and stretched the 4th connector to the bottom of the other column, then used the 2-port cable for the remaining 2 drives in that column.
Care should be taken when routing the cables to the backplane. I was a little careless with the cable routing (big hands and fat thumbs) and ended up with some of the cable labels sticking too far into the fan assy... startup was a little noisy!!
Not to say that the Monoprice cables won't work, but space is tight behind the backplane and the fan assy.
One thing that I do need to sort out is that the drive power/access LEDs only light up for the column with the 1~3 "effective" SATA connections. The others don't light up, but the drives are recognized and work just fine. Likely backplane-specific; I'm just not sure how yet.
And BTW, the replacement fan mod identified by PigLover in the Taming the C6100 thread is certainly worth considering. Fan noise is reduced considerably, but at idle the fans run as low as maybe 1200~1400 rpm, which causes a "low fan" alarm to be generated. It's nice and quiet though. Next job is to figure out the IPMI commands to reset the low fan threshold, just one more thing to learn!
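If it helps, lower fan thresholds are usually adjustable over IPMI. A hedged sketch with ipmitool; the sensor name "FAN_1", the RPM values, and the credentials are assumptions, so check your own `ipmitool sensor` output first:

```shell
# List fan sensors and their current thresholds (sensor names vary by BMC)
ipmitool -I lanplus -H <bmc-ip> -U root -P <password> sensor | grep -i fan
# Lower the non-recoverable / critical / non-critical thresholds (in RPM)
# so the ~1200-1400 rpm idle speed no longer trips the "low fan" alarm
ipmitool -I lanplus -H <bmc-ip> -U root -P <password> sensor thresh "FAN_1" lower 400 600 800
```

Repeat for each fan sensor the BMC reports; whether the change survives a BMC reset varies by firmware.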
One reason why I chose to ensure that the 6x SATA connections from the motherboard to the interposer board can be passed through to 6 physical drives is that I may choose to use a different RAID controller (LSI) over the onboard ICH10R. Theoretically, I think you could even "mix and match" through the interposer board, in that SATA 1&2 could come from the motherboard and SATA 3~6 could come from something like an LSI 9260-4i. This is pure conjecture at this time; I haven't gotten around to testing it yet, so take it with a grain of salt.
I received two systems today. I just looked up the service tag on Dell's website, and both say they have next business day warranty service through 6/29/2014.
If I had a problem, could I expect Dell to come out to fix it? Looks like both of them were placed into service on 6/29/2011.
Jeff
I want to mount an Intel SAS expander in the area where a node has been removed. Any ideas on how to get power to the back of the chassis to an empty bay?
Been thinking about this, and my "guess" is that you could get power from the harness that supplies power to the motherboard. But I don't know the pin-outs, and want to disclaim that it is likely a bad idea.