Dell C6100 XS23-TY3 2U 4-Node (8 CPU) Cloud Server


dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
I got the switch on eBay at a fantastic price, but the seller I bought it from doesn't have it listed anymore.

This was my backup plan in case something fell through with the one I got:

QLogic Silverstorm Infiniband Edge 9024 FC24 ST1 DDR 24 Port Rack Mount Switch | eBay

QDR Infiniband has twice the bandwidth of DDR, and lower latency, but in practice you get a smaller throughput improvement. QDR PCIe2 x8 Infiniband cards are good for around 3.2GB/s, limited by the PCIe bus, while DDR cards are good for around 1.9GB/s. So DDR is about 60% as fast as QDR in the C6100 but costs less than half as much, which makes the DDR equipment a pretty good deal.

Nonetheless, it seems that everyone here with QDR dreams of FDR.
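
To make that price/performance arithmetic concrete, here is a minimal sketch using the throughput figures above; the dollar amounts are hypothetical placeholders, not prices from the thread:

```python
# Price/performance arithmetic for DDR vs QDR HCAs, using the
# real-world throughput numbers quoted above. The prices are
# hypothetical placeholders; substitute actual eBay prices.
QDR_GBS, QDR_PRICE = 3.2, 200.0  # price is a placeholder
DDR_GBS, DDR_PRICE = 1.9, 80.0   # price is a placeholder

print(f"DDR is {DDR_GBS / QDR_GBS:.0%} as fast as QDR")        # ~59%
print(f"QDR: {QDR_GBS / QDR_PRICE * 1000:.1f} MB/s per dollar")
print(f"DDR: {DDR_GBS / DDR_PRICE * 1000:.1f} MB/s per dollar")
```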
 

alan

New Member
Oct 24, 2013
20
0
0
Alan,

It looks like the mezzanine cards that you bought don't have the mounting brackets. Any chance that you can return them?

Here is a card that does have the rear mounting bracket that surrounds the ports and extends to attach to the sled:

The server has the punch-outs. Isn't that all I need?
 

alan

New Member
Oct 24, 2013
20
0
0
QDR Infiniband has twice the bandwidth of DDR, and lower latency, but in practice you get a smaller throughput improvement. QDR PCIe2 x8 Infiniband cards are good for around 3.2GB/s, limited by the PCIe bus, while DDR cards are good for around 1.9GB/s. So DDR is about 60% as fast as QDR in the C6100 but costs less than half as much, which makes the DDR equipment a pretty good deal.

Nonetheless, it seems that everyone here with QDR dreams of FDR.
This is my first time with Infiniband; the setup I'm replacing has four separate servers that communicate over gigabit Ethernet, so even DDR would have been a major upgrade. I realize one port of QDR saturates the PCIe bus, but I had wondered whether hooking up both ports to a 20Gbit Infiniband switch would give 40Gbit of bandwidth.
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
This is my first time with Infiniband; the setup I'm replacing has four separate servers that communicate over gigabit Ethernet, so even DDR would have been a major upgrade. I realize one port of QDR saturates the PCIe bus, but I had wondered whether hooking up both ports to a 20Gbit Infiniband switch would give 40Gbit of bandwidth.
If you have a number of threads working simultaneously, then yes - dual DDR can approach or equal QDR speeds. The one thing that I haven't tested is whether Ethernet RDMA works when aggregating links, for example using SMB3.
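
As a back-of-the-envelope check on the dual-port question, here is a sketch under the assumption that both DDR ports share the same ~3.2GB/s PCIe2 x8 ceiling quoted above for the QDR cards:

```python
# Estimate aggregate throughput of a dual-port DDR HCA under load.
# Each DDR port manages ~1.9 GB/s alone, but both share one PCIe bus,
# so the total is capped near the ~3.2 GB/s PCIe2 x8 ceiling.
DDR_PORT_GBS = 1.9       # per-port real-world figure from the thread
PCIE_CEILING_GBS = 3.2   # practical PCIe2 x8 limit from the thread

aggregate = min(2 * DDR_PORT_GBS, PCIE_CEILING_GBS)
print(f"Dual-port DDR, many threads: ~{aggregate:.1f} GB/s")  # ~3.2, QDR-class
```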
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
Yes, I found it for $499. I didn't realize just how good a deal that was until later, or I would have bought both that they had and flipped one to pretty much pay for my whole setup.
Insanely good price for a 32-port QDR switch - or did you get the 18-port version, which would still be a good deal at $500? Let us know when you get it working; we know that the used Mellanox switches are easy enough to get running, but I think you are the first to try a QLogic. It would be great to have two options.
 

JoelC707

New Member
Oct 25, 2013
4
0
0
38
Lanett, AL
So, I got my C6100 in yesterday and I've got a slight problem that doesn't seem to make sense. Each node has a single 750GB SATA drive connected, all drives were verified good before I used them here, and all four nodes see their respective drives in the BIOS. On nodes 1 and 2, Hyper-V Server saw the drive and installed just fine. Nodes 3 and 4 do not see their drives and won't install.

I dug around on the Dell drivers site and downloaded and extracted the INF files from the chipset driver. I loaded the ICH10 driver and it still doesn't see any drives. I've tried switching the BIOS from AHCI to IDE and back, but it still doesn't see them. I have not tried leaving it on IDE because I'd prefer it stay on AHCI.

Anyone have any ideas what's going on with these two nodes?
 

JoelC707

New Member
Oct 25, 2013
4
0
0
38
Lanett, AL
Never mind, figured it out. Well, I haven't actually figured it out, but I got past it. It was something to do with the drives in nodes 3 and 4. I swapped the drive from node 3 to node 1 and it too wouldn't see the drive on install. I grabbed another drive, put it in node 3, and it sees it. No idea what's up with these two drives, because I literally just ran SMART tests on them and they passed with no problems. I'll run more tests on them later.
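
For those follow-up tests, here is a minimal sketch scripting smartctl from smartmontools, assuming the suspect drives get attached to a Linux box for testing; /dev/sdb is a hypothetical device name:

```python
# Run a SMART extended self-test on a suspect drive and poll until done.
# Requires smartmontools and root. /dev/sdb is a hypothetical device name.
import subprocess
import time

DEVICE = "/dev/sdb"  # hypothetical; point this at the suspect drive

subprocess.run(["smartctl", "-t", "long", DEVICE], check=True)

while True:
    status = subprocess.run(["smartctl", "-c", DEVICE],
                            capture_output=True, text=True).stdout
    if "Self-test routine in progress" not in status:
        break
    time.sleep(60)  # an extended test takes an hour or more on a 750GB drive

# Print the self-test log; a healthy drive shows "Completed without error".
print(subprocess.run(["smartctl", "-l", "selftest", DEVICE],
                     capture_output=True, text=True).stdout)
```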

My C6100 is an L5520 (2 CPUs per node) model with 48GB per node (192GB total). I did some power usage tests for my own benefit and thought I'd share them with you all here.

Chassis powered, no nodes online: 22W, 44VA, 0.38A
Node 1 booting: 276W peak. In BIOS: 156W, 161VA, 1.36A
Node 2 booting: 298W peak. In BIOS: 263W, 268VA, 2.30A
Node 3 booting: 407W peak. In BIOS: 373W, 378VA, 3.22A
Node 4 booting: 531W peak. In BIOS: 483W, 487VA, 4.19A

Initial power-on, all nodes booting simultaneously: 680W peak.

All four nodes booted and logged into Windows, idle: 375W, 377VA, 3.20A

Amperage values are at 120V. I do not have any readings with only a single CPU, sorry.
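
Those idle numbers translate into running cost fairly directly. A minimal sketch of the arithmetic; the electricity rate is a hypothetical placeholder:

```python
# Power factor and annual energy cost from the idle readings above.
# The $/kWh rate is a hypothetical placeholder; use your utility's rate.
IDLE_WATTS, IDLE_VA = 375.0, 377.0
RATE_USD_PER_KWH = 0.15  # hypothetical

print(f"Power factor:  {IDLE_WATTS / IDLE_VA:.2f}")   # ~0.99, active PFC supplies
annual_kwh = IDLE_WATTS / 1000 * 24 * 365              # ~3285 kWh/year
print(f"Annual energy: {annual_kwh:.0f} kWh")
print(f"Annual cost:   ${annual_kwh * RATE_USD_PER_KWH:.0f}")  # ~$493 at $0.15/kWh
```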
 

alan

New Member
Oct 24, 2013
20
0
0
Alan,

It looks like the mezzanine cards that you bought don't have the mounting brackets. Any chance that you can return them?

Here is a card that does have the rear mounting bracket that surrounds the ports and extends to attach to the sled:

I checked today, they ones I ordered didn't show the mounting brackets in the ebay photo, but did come with them. Everything has arrived now except for the C6100 itself, which is on it's way and should be here monday.
 

c6100

Member
Oct 22, 2013
163
1
18
USA
#1. No, you cannot use 16GB DIMMs with any L55xx or X55xx CPU. You must be using L56xx or X56xx CPUs in order to use 16GB DIMMs. With the L5520 you are limited to using 8GB or smaller DIMMs.
Just FYI, I put in 4 x 16GB DIMMs today and the BIOS sees them just fine. I have an L5520 processor. In other words, this server certainly does support 16GB DIMMs!
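
If anyone wants to double-check what actually got detected beyond the BIOS summary screen, here is a minimal sketch parsing dmidecode output on Linux (requires root):

```python
# List detected DIMM sizes by parsing `dmidecode -t memory` (Linux, root).
import subprocess

out = subprocess.run(["dmidecode", "-t", "memory"],
                     capture_output=True, text=True, check=True).stdout

for line in out.splitlines():
    line = line.strip()
    if line.startswith("Size:"):
        print(line)  # e.g. "Size: 16384 MB" or "Size: No Module Installed"
```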
 

Bumpy2020

New Member
Oct 5, 2013
6
0
0
Never mind, figured it out. Well, I haven't actually figured it out, but I got past it. It was something to do with the drives in nodes 3 and 4. I swapped the drive from node 3 to node 1 and it too wouldn't see the drive on install. I grabbed another drive, put it in node 3, and it sees it. No idea what's up with these two drives, because I literally just ran SMART tests on them and they passed with no problems. I'll run more tests on them later.

My C6100 is an L5520 (2 CPUs per node) model with 48GB per node (192GB total). I did some power usage tests for my own benefit and thought I'd share them with you all here.

Chassis powered, no nodes online: 22W, 44VA, 0.38A
Node 1 booting: 276W peak. In BIOS: 156W, 161VA, 1.36A
Node 2 booting: 298W peak. In BIOS: 263W, 268VA, 2.30A
Node 3 booting: 407W peak. In BIOS: 373W, 378VA, 3.22A
Node 4 booting: 531W peak. In BIOS: 483W, 487VA, 4.19A

Initial power-on, all nodes booting simultaneously: 680W peak.

All four nodes booted and logged into Windows, idle: 375W, 377VA, 3.20A

Amperage values are at 120V. I do not have any readings with only a single CPU, sorry.
Thanks Joel, very good info there.
Unfortunately for my home lab setup that is too much power usage :( (power prices here are going through the roof).
Someone reported a node with 1 CPU sitting in the BIOS was consuming 80W, which is close to 50% of your single-node, 2-CPU figure.

Given I don't need lots of grunt, it looks like I'm back to the low-power i7/E3 setups.
 

alan

New Member
Oct 24, 2013
20
0
0
My server will arrive Monday. I've spent the weekend figuring out how to configure it.

one node has an LSI 9265-8i plus CacheCade 2.0,
all 4 nodes have a QDR VPI dual-port 40Gb/s Infiniband daughter card,
and there is an Infiniband switch.

Here's what I want to do.

- configure the RAID array as a 16TB RAID 50 with CacheCade
- set up 4 small logical volumes as boot devices, one for each of the 4 nodes, plus one large volume that gets shared by all of the nodes
- configure Infiniband so each node can boot off the RAID using SRP or iSER (or should I use something else?)
- install Debian 7.2 on each of the boot volumes
- install MySQL on one node
- have the other nodes communicate with the MySQL server using IP over Infiniband (IPoIB)

Are there instructions anywhere for any of these steps?

Will these cards work with Debian 7.2?

Why can't I find any 9265-8i specific info on the LSI web site?

This is my first time configuring anything remotely like this. I hope not to do anything that makes things run significantly slower than they should. The server will run our own websites and will replace servers we currently lease that were set up for us.
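
For the IPoIB step at the end of that list, here is a minimal stdlib-only sketch verifying that a worker node can reach the MySQL node over the Infiniband network; the 10.10.10.x address is hypothetical:

```python
# Quick check that MySQL answers over the IPoIB network (stdlib only).
import socket

MYSQL_HOST = "10.10.10.1"  # hypothetical address on the MySQL node's ib0
MYSQL_PORT = 3306          # MySQL's default port

try:
    with socket.create_connection((MYSQL_HOST, MYSQL_PORT), timeout=5):
        print("MySQL is reachable over IPoIB")
except OSError as exc:
    print(f"Connection failed: {exc}")
```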
 

darkconz

Member
Jun 6, 2013
193
15
18
I just received my server and was able to rack it up last night. Now when I try to use the iKVM, the connection disconnects every minute or so (the fps counter at the top drops to 0fps every now and then). It seems like the node is rebooting itself because there is no OS.

Now my real question is: can I configure everything headless? Or do I need to bring in a monitor and a keyboard to do the initial setup (upgrade firmware, set the BMC IP, etc.)? I'd appreciate insight from somebody who has done this remotely from the start on where I can look. Worst case is for me to haul a 17" monitor home from work... on the bus...

Thanks!
 

TangoWhiskey9

Active Member
Jun 28, 2013
402
59
28
I just received my server and was able to rack it up last night. Now when I try to use the iKVM, the connection disconnects every minute or so (the fps counter at the top drops to 0fps every now and then). It seems like the node is rebooting itself because there is no OS.

Now my real question is: can I configure everything headless? Or do I need to bring in a monitor and a keyboard to do the initial setup (upgrade firmware, set the BMC IP, etc.)? I'd appreciate insight from somebody who has done this remotely from the start on where I can look. Worst case is for me to haul a 17" monitor home from work... on the bus...

Thanks!
Upgrade the BMC firmware first. Also, which browser and Java version are you using?
 

darkconz

Member
Jun 6, 2013
193
15
18
Upgrade the BMC firmware first. Also, which browser and Java version are you using?
Can I upgrade the BMC headless, remotely? I am using Chrome and Java 7. I can get into the iKVM no problem; it's just that it won't stay connected. Is the Firmware Revision under System Information --> Summary the BMC firmware version? If so, it is 01.32.28014.
 

TangoWhiskey9

Active Member
Jun 28, 2013
402
59
28
Can I upgrade the BMC headless, remotely? I am using Chrome and Java 7. I can get into the iKVM no problem; it's just that it won't stay connected. Is the Firmware Revision under System Information --> Summary the BMC firmware version? If so, it is 01.32.28014.
You can. Try using IE and/or Firefox to see if that works. It's usually a pretty simple step.
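
On the headless question above: once the BMC has any IP (e.g. a DHCP lease you can find on your router), the rest can be scripted over the network with ipmitool. A minimal sketch; all addresses are hypothetical, and root/root is only the commonly reported C6100 default, so verify the credentials:

```python
# Set a static BMC IP over the LAN with ipmitool; no monitor required.
# Addresses are hypothetical; root/root is the commonly reported default.
import subprocess

BMC = ["ipmitool", "-I", "lanplus",
       "-H", "192.168.1.50",        # hypothetical current DHCP address
       "-U", "root", "-P", "root"]  # verify these credentials

def bmc(*args):
    subprocess.run(BMC + list(args), check=True)

bmc("lan", "set", "1", "ipsrc", "static")
bmc("lan", "set", "1", "ipaddr", "192.168.1.60")    # hypothetical new IP
bmc("lan", "set", "1", "netmask", "255.255.255.0")
bmc("lan", "set", "1", "defgw", "ipaddr", "192.168.1.1")
bmc("lan", "print", "1")  # confirm the new settings
```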