> Those are really iffy in a C6100. I put four into a C6100: two worked fine, one worked until you bumped the chassis, and one never worked. It turns out, surprisingly, that they position the drive a few millimeters further forward than other solutions, which means that your drives aren't fully inserted into the backplane.

Strange, I have sold around 20 of these and no complaints or returns (if they didn't work, I would expect to hear about it).
> I noticed that the bottom nodes (#2 and #4) will not power on if the top nodes (#1 or #3) are not powered on. Has anyone else observed this?

I haven't noticed that exactly. Do you have 2 power supplies or 1? I have dual 1100 W power supplies, and I've noticed that nodes 1 and 3 run off the top power supply, and nodes 2 and 4 run off the lower supply.
> I noticed that the bottom nodes (#2 and #4) will not power on if the top nodes (#1 or #3) are not powered on. Has anyone else observed this?

Thinking about this...
> Yea, that seller seems to use the same picture for all their 6105s. I am also concerned/confused about the IPMI. Maybe I should consider spending a bit more and getting something I have more confidence in. I see the XS24s are selling now. Worth the upgrade, or stick with a 23?

Is this really an upgrade?
> Is this really an upgrade?

No, the XS24 model is NOT an upgrade. It is an alternate and older-generation model.
XS24s on eBay are using socket 771 CPUs...
But they are selling really cheap (@!#? the shipping to Germany -.-)
> I noticed that the bottom nodes (#2 and #4) will not power on if the top nodes (#1 or #3) are not powered on. Has anyone else observed this?
Nope, never seen this.
You may need to check the wiring to the fan controller board. Some of the cables may be connected incorrectly.
RB
> I haven't noticed that exactly. Do you have 2 power supplies or 1? I have dual 1100 W power supplies, and I've noticed that nodes 1 and 3 run off the top power supply, and nodes 2 and 4 run off the lower supply.
I can power on nodes 2 and 4 without having node 1 or 3 on.
Yeah, I can run any node independently, or all at once, or even random selections. Heck, I even tested it: I can pull either PSU and all 4 nodes keep running. I'm running twin 1100 W PSUs in both my C6100s. Both seem to work that way.
Technically they should, too, if you look at the power distribution wiring: it is set up so that either PSU can power every node. That is basically why redundant power supplies exist. Plug them into two different power sources, and if one fails, the other powers everything.
Yes (to confirm even further) I can power on any combination of the 4 nodes, with only one of the two PSUs connected to a power outlet.
> Thinking about this... You may want to check that the power distribution boards (the ones the PSUs slot into) actually have the bridging piece installed (the vertical piece connecting the bottom PDB to the top one). I would imagine a missing bridge may give the same issues.
>
> RB

Thank you all! RimBlock, you were correct. One of the Molex connectors on the PDB was disconnected. I must have done that months ago and forgotten about it. I can now power on nodes 1/2/3/4 individually.
> Hello. I saw this thread, and this looks like a great idea for some non-production ESXi hosts. I read through a lot of the pages here but am having a hard time confirming a few things, considering how many configurations there can be. On eBay, I'm looking at one of these servers:
>
> Dell PowerEdge C6100 XS23 TY3 Server 24 Bay 4 Node 8x Intel Xeon L5520 96GB RAID | eBay
>
> It looks like I should be able to throw 24 of these drives in it:
>
> Dell 146GB 10K 2 5" 6Gbps SAS Hard Drive for PowerEdge C6100 | eBay
>
> Then have 6 drives in a RAID 10 configuration per node. Am I correct in assuming this? I know little about mezzanine cards etc., unfortunately. Is the idea to have one RAID card that serves all 4 nodes, but each node sees it as its own?
>
> If this has been covered, I'm sorry, but I was unable to find it.

You can throw 24 drives in it, but the default wiring will only allow each of the 4 nodes to access 6 drives (24 drives / 4 nodes). Each node has one RAID mezzanine card (Dell LSI) whose physical ports are 1 x SAS connector (equal to 4 SATA ports) and 2 x SATA connectors, for the 6 drives total. Also, the drives will only work as SATA; see below.
> You can throw 24 drives in it, but the default wiring will only allow each of the 4 nodes to access 6 drives (24 drives / 4 nodes). Each node has one RAID mezzanine card (Dell LSI) whose physical ports are 1 x SAS connector (equal to 4 SATA ports) and 2 x SATA connectors, for the 6 drives total. Also, the drives will only work as SATA; see below.
>
> RAID will depend on the OS installed, as it is not a hardware ("proper") RAID card but software. If you use Windows, in general, you could do RAID 0, 1, or 10, depending on the version and drivers. Other OSes (without the support/drivers) will not understand the setup; this is why people who install something like ESXi do not see the aggregated logical "RAID" disk, but instead see each individual disk.
>
> For RAID that will work with any (supported) OS, do proper hardware (accelerated) RAID, AND allow SAS drives to work (at their fullest spec), you need to get the upgraded Dell LSI mezzanine SAS card.

Thank you, that is very helpful. So I would need one of these for each node to get SAS and hardware RAID?
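As a quick sanity check on the drive math discussed above (24 bays shared by 4 nodes, RAID 10 built per node), here is a rough sketch; the 146 GB drive size is taken from the linked listing, and the layout is an assumption based on the posts above:

```python
# Capacity math for the C6100 setup discussed above.
# Assumptions (from the thread): 24 bays shared evenly by 4 nodes,
# 146 GB drives, and RAID 10 (mirrored pairs, striped together) per node.

TOTAL_BAYS = 24
NODES = 4
DRIVE_GB = 146

drives_per_node = TOTAL_BAYS // NODES   # 6 drives available to each node
mirrored_pairs = drives_per_node // 2   # RAID 10 pairs the drives into mirrors
usable_gb = mirrored_pairs * DRIVE_GB   # half the raw capacity is usable

print(f"{drives_per_node} drives/node, ~{usable_gb} GB usable per node")
# → 6 drives/node, ~438 GB usable per node
```

So each node would see roughly 438 GB usable from its 6-drive RAID 10, at the cost of half the raw capacity for the mirroring.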
> Thank you, that is very helpful. So I would need one of these for each node to get SAS and hardware RAID?
>
> Dell 6g SAS SATA LSI SAS2008 MEZZANINE Daughter Card PE C6100 C6145 XX2X2 | eBay

The unit you linked lists the LSI 1068 mezzanine card as being included (Y8Y69).