I can't answer all of these, but:

Two questions about the C6100 platform in this thread:
1) I saw on the first post that they are configured for up to three drives per node. Are there any other possibilities for changing that configuration? Specifically, one node with all 12 drives?
2) Are the nodes interconnected in any way that can be utilized from a Linux environment? And if so, at what speed does this interconnect operate?
I'm specifically thinking of putting all drives on one host, doing a RAID-10, then sharing that disk to the other nodes. Another option would be to RAID-0 or RAID-1 the 3 drives within each node, then do the opposite between the nodes (mirror across nodes if striped within, or stripe across nodes if mirrored within). Both of these depend on some sort of connection between the nodes.
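For what it's worth, the first idea (all drives on one node, shared out over the network) would look roughly like this on Linux with mdadm and an NFS export. This is only a sketch: the device names (/dev/sd[b-m]), the export path, and the subnet are made-up placeholders, and whether it works at all depends on the answers to the two questions above.

```shell
# Sketch only: device names, paths, and subnet are hypothetical.

# On the storage node: build a RAID-10 array from the 12 drives
mdadm --create /dev/md0 --level=10 --raid-devices=12 /dev/sd[b-m]

# Put a filesystem on it and mount it
mkfs.ext4 /dev/md0
mkdir -p /srv/shared
mount /dev/md0 /srv/shared

# Export it to the other nodes (append to /etc/exports, then reload)
echo '/srv/shared 10.0.0.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra

# On each of the other nodes:
# mount -t nfs storage-node:/srv/shared /mnt/shared
```

The second idea (RAID within each node, then RAID across nodes) would need something like network block devices or DRBD on top of the per-node arrays, which is more involved.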
I flipped through the pages of this thread, and googled a bit, but wasn't able to find the answers to these. If they're here and I missed it, I apologize. Could be that I'm using the wrong terms to search.
Any info would be appreciated!
This might be worth knowing if you plan on populating all bays and running all nodes: the manual says you can't run these things in a full configuration (all 4 nodes fully loaded with disks, memory, and CPUs) on the 1100W power supplies. Not really sure what that means for those wanting to run full disk stacks. In the HPC model you would run them diskless, or with only one set of drives, so the supplies are never fully loaded.
Each node can easily connect up to 6x 3.5'' drives (i.e., 2 drive bays). Check this image I took with my phone: http://img46.imageshack.us/img46/6668/201303281508151.jpg
Edit: in the picture you can see one node with 2 drive bays connected to it, while the node to the right has no drive bays connected.