Dell C6100 XS23-TY3 2U 4-Node (8 CPU) Cloud Server


TangoWhiskey9

Active Member
Jun 28, 2013
402
59
28
There is one. Look at the post, under the person's name - title - join date - posts. At the bottom, each post has a triangle with a "!" in it; that's the vB report feature.
 

tby

Active Member
Aug 22, 2013
222
111
43
Snellville, GA
set-inform.com
Anyone know if the QLogic FC cards will fit? I'm planning to use my C6100 as a VMware home lab with ESOS or LIO for shared storage. Dual-port 4Gb FC cards seem the most economical route to >1Gb/s connectivity.
 

jared

New Member
Aug 22, 2013
38
0
0
Greetings Patrick, or anyone who knows,

Long-time viewer, first-time poster...
I noticed you put spare part numbers for the C6100, including the 2.5" parts, in the first post.
I am currently pricing out a C6100 with L5639s and 96GB RAM... however, it is made for 3.5" HDDs.

I can't find any L5639 setups that include the 2.5" SAS backplane, so my question is this:

Is it possible to take a C6100 made for 3.5" HDDs, remove the 3.5" backplane, and insert a 2.5" HDD backplane along with a few 2.5" caddies, or is there more to it than just that?

Edit: the server is $1,199, the 2.5" backplane is $99, and the spare HP DL-series caddies that will fit (according to your first post) are roughly $6 each; that is about my budget.

Thank you, sir!

--
Jared
 

RimBlock

Active Member
Sep 18, 2011
837
28
28
Singapore
It seems like the pricing on these is going up. I'm glad we ended up buying a few prior to the price hikes, but I wouldn't mind one of these for my home lab. L5520s have increased in price a bit as well, while the L5639 CPUs continue to drop in price. Are the C6100s not coming off lease in large quantities anymore?
The prices are going through the roof.

Where they were US$699 for a basic build (8x L5520 & 96GB RAM), the two main sellers' cheapest listings are now US$899 (MCP, but with E5540s) and US$879 (PeterN). ESISOdotCOM are doing a 2-node version for US$638 (no trays though) and are still doing L5520s for US$35. L5639s seem to be around US$100 at the moment, unless anyone knows of people accepting 'Best offers'.

If anyone knows of anything cheaper, then post it up.

RB
 

zane

Member
Aug 22, 2013
70
0
6
Just got my C6100 in, and all four boards are missing the mezzanine slot on the main board. Has anyone else had this issue or heard of it?

Order specs from NES International (eBay, in TX):
C6100 XS23-TY3
4 nodes
12x 3.5" hard drive bays
No hard drives
No RAID
1 PSU
8x 2.26GHz quad-core L5520
192GB of dual-rank RAM

 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
You must mean ZFS+Linux isn't production ready. ZFS+Solaris is rock solid.

I know the new Dell VRTX has SAS SR-IOV when you install the new Dell PERC8 card, but I have never heard of that feature on a C6100. What have you heard, and what have you tested yourself?
Oh no, I was just curious. The only way to share one PCIe card among multiple blades is SR-IOV (hopefully with FLR).

For some reason I thought the C6100 had shared PCIe slots, but I guess you are saying they are mezz-only!

Odd that SR-IOV on the VRTX works but nobody can get this to work on ESXi! The hardware supports it!

I will check the P420/1GB FBWC to see if it does SR-IOV as well. It has always supported zoning/clustering.

I am going to see if I can find a way to make a target! I know the Marvell chipset can be a target, with which you can make your own MSA :)

But then again, I've got a boatload of LeftHand VSAs, and with IPoIB or just 10GbE Ethernet you can create a really powerful, hyper-redundant storage setup.

All NICs from the past several years support Virtual Functions, and more recently Function Level Reset, so you can restart a hung VF without killing the rest of the VM's functions.
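
For reference, here's a minimal sketch of what "a NIC exposing virtual functions" looks like on a plain Linux box via the kernel's sysfs interface (not ESXi; the interface name and VF count are just placeholders):

```python
#!/usr/bin/env python3
"""Sketch: enumerate and enable SR-IOV virtual functions through Linux sysfs.
The NIC name and requested VF count are placeholders; run as root."""
from pathlib import Path

IFACE = "eth0"       # placeholder NIC name
WANTED_VFS = 8       # how many VFs to request

dev = Path(f"/sys/class/net/{IFACE}/device")
total = int((dev / "sriov_totalvfs").read_text())   # max VFs the hardware exposes
print(f"{IFACE} supports up to {total} virtual functions")

# Writing to sriov_numvfs asks the driver to spawn that many VFs
# (the value must be 0 before changing it to a new non-zero count).
(dev / "sriov_numvfs").write_text(str(min(WANTED_VFS, total)))

# Each VF appears as a virtfnN symlink pointing at its own PCI address,
# which is what gets passed through to a guest.
for vf in sorted(dev.glob("virtfn*")):
    print(vf.name, "->", vf.resolve().name)
```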

It would be very cool to set up my Solarflare NIC, which supports 2,048 VFs, so each LUN gets its own dedicated path (or two, or a dozen). This would allow a truly dedicated path for iSCSI storage so the hypervisors won't try to do their awful timeslicing.

I did some testing, and ESXi 5.1 seems to be about 20-100x faster with "thick provision, VAAI, single target/LUN per VM, single NIC per VM" versus one big shared LUN with many VMs and one shared NIC across many VMs.

1. Thin provisioning requires massive CPU to svMotion, since it effectively expands the thin VMDK (say 700GB with 20GB used), then crunches it back down. This kills the CPU and lags out the svMotion. It takes about 40 minutes over 10GbE with a large thin VM versus 4 minutes with VAAI/thick zeroed!!

2. DAS-to-DAS vMotion goes like this with thin: send, wait many minutes, send RAM, wait minutes, sync RAM, wait minutes. VERY SLOW!

3. SIOC alters queue depth based on shared-storage latency! NIOC can further control iSCSI (but not FC). Only available for shared SAN. Queue depth is constantly adjusted against a defined latency threshold (5ms for SSD, 30ms for a SATA SAN) - see the toy sketch after this list.

4. DAS uses none of this, just a simple timeslicing method - 16-256 QD per VM, 16-256 QD per target/LUN, and simple time swapping. For instance, if you issue 6 random seeks each with >2000 sectors of difference, your share is up (regardless of latency!). This is stupid for RAM/SSD storage!
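
To make point 3 concrete, here's a toy sketch of the idea - each host watches datastore latency and shrinks or grows its own device queue depth around a threshold. This is not VMware's actual algorithm, just an illustration with made-up numbers:

```python
# Toy model of SIOC-style congestion control: back off the device queue depth
# when observed datastore latency crosses a threshold, creep back up otherwise.
# Thresholds and step sizes are illustrative only.

LATENCY_THRESHOLD_MS = 30.0   # e.g. ~30 ms for a SATA-backed SAN, ~5 ms for SSD
MIN_QD, MAX_QD = 4, 64

def adjust_queue_depth(current_qd: int, observed_latency_ms: float) -> int:
    """Halve the queue depth under congestion, otherwise recover slowly."""
    if observed_latency_ms > LATENCY_THRESHOLD_MS:
        return max(MIN_QD, current_qd // 2)
    return min(MAX_QD, current_qd + 1)

qd = 32
for latency in (12.0, 18.0, 45.0, 50.0, 22.0, 9.0):
    qd = adjust_queue_depth(qd, latency)
    print(f"latency={latency:5.1f} ms -> queue depth {qd}")
```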

SR-IOV is critical for network and storage to give ESXi "1 VM gets 1 RAID controller with 1 LUN" and "1 dedicated VF NIC per VM" - in that case no timeslicing is necessary, since it is all offloaded to hardware.

With 2 VMs running on the LSI SAS controller you can push maybe 64 QD across 8 SSDs!! So that's maybe 8 QD per SSD, MAX!

With 1 VM running on the LSI SAS controller you can push 16 QD ingress (VM side) and nearly 1000 QD to the SSDs! You can imagine which one is going to be fast!
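
Rough math behind those two cases (the 64 / 16 / ~1000 figures are my ballpark numbers from above, not measurements):

```python
# Ballpark queue-depth math for a virtual LSI SAS controller in front of 8 SSDs.
SSD_COUNT = 8

# Shared case: multiple VMs behind one controller, ~64 commands in flight total.
shared_total_qd = 64
print(f"shared:    ~{shared_total_qd / SSD_COUNT:.0f} outstanding I/Os per SSD")

# Dedicated case: one VM owns the controller; the guest submits ~16 QD but the
# physical HBA can keep close to 1,000 commands in flight against the SSDs.
dedicated_backend_qd = 1000
print(f"dedicated: ~{dedicated_backend_qd / SSD_COUNT:.0f} outstanding I/Os per SSD")
```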

Also, with high-end storage they throttle QD=1 to force coalescing, which results in higher QD; there are maybe two SSDs on the planet that emulate this behavior. Well, three with the Samsung EVO TLC drive!

I am not sure how Hyper-V timeslices, but it requires immense tuning to make multiple VMs per host go fast. Without an SR-IOV SAS controller you will never realize bare-metal potential while hosting more than one VM per host.

So right now, just like you, I run multiple SAS controllers to achieve higher speed, but this is a serious waste of money!

For some reason I was thinking the C6100 had shared PCIe cards, which led me to believe SR-IOV was in play, but since that is not the case, sorry.

The VRTX is silly! HP did this years ago with the BLc3000 - they are just blades in a small, single-phase, on-wheels design. People are so impressed with ancient technology that HP has been doing forever! That is how far behind Dell is!
 

RimBlock

Active Member
Sep 18, 2011
837
28
28
Singapore
Just got my C6100 in, and all four boards are missing the mezzanine slot on the main board. Has anyone else had this issue or heard of it?

Order specs from NES International (eBay, in TX):
C6100 XS23-TY3
4 nodes
12x 3.5" hard drive bays
No hard drives
No RAID
1 PSU
8x 2.26GHz quad-core L5520
192GB of dual-rank RAM

Yes, same for me, and I think someone else has also had the same issue.

Contact them and get either a replacement unit or replacement nodes.

I was told they were very surprised, but this is the third case I have heard of (including my own), so I suspect they got a lower-spec DCS batch and are trying to sell it off, hoping people don't complain.

RB
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
Greetings Patrick, or anyone who knows,

Long-time viewer, first-time poster...
I noticed you put spare part numbers for the C6100, including the 2.5" parts, in the first post.
I am currently pricing out a C6100 with L5639s and 96GB RAM... however, it is made for 3.5" HDDs.

I can't find any L5639 setups that include the 2.5" SAS backplane, so my question is this:

Is it possible to take a C6100 made for 3.5" HDDs, remove the 3.5" backplane, and insert a 2.5" HDD backplane along with a few 2.5" caddies, or is there more to it than just that?

Edit: the server is $1,199, the 2.5" backplane is $99, and the spare HP DL-series caddies that will fit (according to your first post) are roughly $6 each; that is about my budget.

Thank you, sir!

--
Jared
You can convert from 3.5" drives to 2.5" drives, but it is usually too expensive. You need the 2.5" metal drive cage, the backplane, and, to connect the midplane to the backplane, four each of two different types of very specific SAS to SATA cables.
 

zane

Member
Aug 22, 2013
70
0
6
Yes, same for me, and I think someone else has also had the same issue.

Contact them and get either a replacement unit or replacement nodes.

I was told they were very surprised, but this is the third case I have heard of (including my own), so I suspect they got a lower-spec DCS batch and are trying to sell it off, hoping people don't complain.

RB
Kind of makes me mad, because I told them (NES Int.) I was going to use the mezz slot for the 6Gb XX2X2 RAID card, and they even tried to sell me the cards but did not have them. Other than that, the system is in really good shape and clean.

Thanks for the quick reply; this is the best forum around, glad I found it...
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,519
5,826
113
Please look at the note on page 105.

"NOTE: Following is the replacement procedure of SATA2 and SAS backplane for
3.5-inch hard drive systems. Replacement procedure for 2.5-inch of SATA2 and
SAS backplane is similar to backplane for 3.5-inch hard drive systems."

Dell Reference: ftp://ftp.dell.com/Manuals/all-prod...ucts/poweredge-c6100_Owner's Manual_en-us.pdf

The onboard SATA ports are also SATA 2.
So a quick note here: the drive connections will run at SATA 3 speed IF you have a SATA 3 controller installed. The onboard motherboard ports themselves run at SATA II speeds since the platform is Intel 5500-based. They are just 7-pin connectors, and there is no expander in the chassis.

On the replacement, I think you can do it, but I would suggest contacting a seller directly for a 2.5" chassis. You would need to replace, at minimum:
1. Cage (to accept 2.5" drives)
2. Backplane (for the 24-bay version)
3. Cables, since you need 6 SATA connections per node, not 3
4. Drive trays

I am probably forgetting something, but it seems like that is much harder to do.
 

zane

Member
Aug 22, 2013
70
0
6
Why don't you use the 9260 RAID controller?
I need a mixture of options: 2 nodes with the XX2X2 plus an extra 4-port GbE card in the PCIe slot, and 2 nodes with the Dell Mellanox QDR VPI 2-port 40Gb/s mezzanine card and dual GbE in the PCIe slot. Without the mezz slots I have very limited options.
 

rork

New Member
Aug 23, 2013
5
0
0
Check ebay item 160987485157, my PDU board auction. It's listed at $25 plus shipping, but if you buy it I'll send you an invoice for just $2 plus shipping. The $2 is just to cover eBay/PayPal costs.

I *may* have the PDU cables as well. I bundled them for electronics recycling, but I don't think that I turned them in yet. If I can find them, they are yours as well.

I have the midplane as well. It's auction item 160996861412. Another $2 for fees, but of course I'll ship it in the same box.

Hi,
I am searching for this midplane (47X9Y) with cables.
Can anybody help me with this?
 

jared

New Member
Aug 22, 2013
38
0
0
Thanks for all the responses. Good thing I found an alternative way.

Deepdiscountserver.com just shipped my server after they took the 4 nodes out of a server that had L5639s in a 3.5" chassis, put them in a 2.5" chassis, and confirmed they worked.

Woo hoo!

My server is on its way.

The only things missing are drive trays and fillers, so I will just order 24 HP DL-series caddies.

You guys are awesome, thank you so much!

--
Jared
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,519
5,826
113
Thanks for all the responses. Good thing I found an alternative way.

Deepdiscountserver.com just shipped my server after they took the 4 nodes out of a server that had L5639s in a 3.5" chassis, put them in a 2.5" chassis, and confirmed they worked.
Oh yeah. You can take the motherboard sleds and place them in 2.5" and 3.5" chassis interchangeably. I think I and a few others have done this multiple times.