> Breakout cable: SATA in, SFF-8087 out

That's interesting, since when did Dell start using standard parts?
So it might be possible to "steal" more HDD slots for one of the nodes, hmm...
> That's interesting, since when did Dell start using standard parts? So it might be possible to "steal" more HDD slots for one of the nodes, hmm...

Very much so. Adding 3x more drives for a system would be fairly easy.
One correction to Patrick's post above. The SFF-8087 cable from the SAS mezzanine card is also a forward breakout cable, not reverse, as it goes from the HBA as 8087 to the interposer board with SATA.
> Very much so. Adding 3x more drives for a system would be fairly easy.

Well, this just keeps getting better and better: the six-drive node can run a properly cached, redundant, multi-TB ZFS pool, while the other three minions do their own magic with two mirrored SSDs each. When the pool gets full, just plug the 9202-16e in and keep going, and with the InfiniBand module I don't even have to use local cache on the other nodes to reduce latency.
Might sound dumb... but doesn't it look similar to two of the three drive cables?
> Might sound dumb... but doesn't it look similar to two of the three drive cables?

Not dumb at all. The reason isn't at all obvious.
Thanks, you are of course correct - I updated my post. By the way, did you get SAS1068 or SAS2008 controllers on your 'mez cards? Also, do you have the part number for the Dell six-disk cable(s)?
Jeff
Update: I found this information on eBay and elsewhere:
Backplane to Midplane breakout cables:
1) Cable for two drives - part number 334VV.
2) Cable for three drives - part number 3R7FF or 3T15J.
3) Cable for four drives - part number V91FW.
The standard four-node C6100 has four three-drive cables. You can re-allocate drives by swapping in different cables from the above with the caveat that the largest number of drives per node is six, which is achieved with one four-drive and one two-drive cable - not two three-drive cables.
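For anyone juggling combinations, here's a tiny sketch of the arithmetic - a hypothetical helper, not anything Dell ships. The bay count, cable sizes, and the 4+2 caveat are taken from the list above; everything else (two-cables-per-node limit, function name) is my assumption:

```python
# Sketch only: sanity-check a proposed C6100 drive allocation using the
# part numbers above. Assumptions: 12 backplane bays total, at most two
# breakout cables per node, and (per the caveat above) six drives on a
# node means one four-drive plus one two-drive cable, not two
# three-drive cables.

CABLE_SIZES = {2, 3, 4}  # 334VV (2), 3R7FF/3T15J (3), V91FW (4)
TOTAL_BAYS = 12
MAX_PER_NODE = 6

def valid_allocation(nodes):
    """nodes: one tuple of cable sizes per node, e.g. [(4, 2), (3,), (3,)]."""
    if sum(sum(cables) for cables in nodes) > TOTAL_BAYS:
        return False  # only 12 bays on the backplane
    for cables in nodes:
        if len(cables) > 2 or any(c not in CABLE_SIZES for c in cables):
            return False
        if sum(cables) > MAX_PER_NODE:
            return False
        if sorted(cables) == [3, 3]:
            return False  # 3+3 doesn't work; six drives requires 4+2
    return True

# Stock config: four nodes, one three-drive cable each.
print(valid_allocation([(3,), (3,), (3,), (3,)]))  # True
# One six-drive node (4+2) plus two three-drive nodes.
print(valid_allocation([(4, 2), (3,), (3,)]))      # True
```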
Motherboard to Interposer cables:
1) Three-drive harness - part number ?
2) Six-drive harness - part number 6J3R2
3) Six-drive harness for SAS mezzanine card - part number HYJ6F
The standard four-node C6100 has four of the three-drive cables. To add drives you need to switch to one of the six-drive cables. You can of course use off-the-shelf cables instead, but be aware that the routing is very tight, that there is little room to snake excess cabling, and that one of the cables must be a right-angle cable or it will block access to the PCIe slot.
Damn it, how about empty trays from other server chassis? I just can't get over the fact that 12 empty Dell 3.5" trays are actually worth 1/3 of a server that has an 1100W PSU, 4 motherboards, 8 CPUs, and 96GB of RAM in it.

Don't get those! They place the drive in the middle of the 3.5" spot. It will not fit the C6100, as the 2.5" drive becomes offset from the normal SATA position when you use them. I have made this mistake.
> Damn it, how about empty trays from other server chassis? I just can't get over the fact that 12 empty Dell 3.5" trays are actually worth 1/3 of a server that has an 1100W PSU, 4 motherboards, 8 CPUs, and 96GB of RAM in it.

Yeah - you have to accept that market pricing often bears little relationship to cost (or sometimes even to value). If you forget this truth, just try figuring out airline pricing (like why a one-way ticket SFO to London costs $1,800, but a round-trip on the same airline with the same first leg costs $1,050...).
> Damn it, how about empty trays from other server chassis? I just can't get over the fact that 12 empty Dell 3.5" trays are actually worth 1/3 of a server that has an 1100W PSU, 4 motherboards, 8 CPUs, and 96GB of RAM in it.

So... here's an interesting note. You can just manually insert SSDs on the bottom row of connectors (1 per). That seems to work well. I think dba has a line on 3.5" converters.
> Yeah - you have to accept that market pricing often bears little relationship to cost (or sometimes even to value). If you forget this truth, just try figuring out airline pricing (like why a one-way ticket SFO to London costs $1,800, but a round-trip on the same airline with the same first leg costs $1,050...).

Errr, they do that? What is stopping people from just ordering round-trip tickets anyway?
In any case - Patrick - could you share some info on the 3.5" to 2.5" carrier you used to fit the SSDs into the Dell drive cages?
> So... here's an interesting note. You can just manually insert SSDs on the bottom row of connectors (1 per). That seems to work well. I think dba has a line on 3.5" converters.

I am worried the SSD may pop out some random month a year down the road due to fan vibrations...
You are right though... pricing is somewhat annoying.
http://www.servethehome.com/Server-...y3-cloud-server-2u-4-node-8-sockets/#comments
> Dale February 13, 2013 at 11:51 am
> Patrick,
> Thanks for sharing this information.
> Do you have any recommendations for tray/adapter to mount 2.5″ SSDs into the 3.5″ bays?
> Any pointers appreciated. Thanks.
So... here's an interesting note. You can just manually insert SSDs on the bottom row of connectors (1 per). That seems to work well. I think dba has a line on 3.5" converters.
You are right though... pricing is somewhat annoying.
This. I have a name for one method of inserting a 2.5" drive into a 3.5" chassis - it's called using a "DST Bracket" - double-sided tape.