AVOID - 4U, 2x Node, 4x E5 V3/V4, 56x LFF SAS3 3.5" bay - $299 - CISCO UCS C3260


Cruzader

Well-Known Member
Jan 1, 2021
540
544
93
1. is there some proprietary cable needed to connect a KVM via those ports you show?
It has a custom cable.
As for networking:

It's either the SIOC with QSFP+ or the PCIe SIOC with a card, and it's clearly a blank plate with no PCIe card installed in the pic.

A bit of a fun-looking box, but unless you're planning to run its original/intended software setup, I would not touch it with a 10-foot pole.
It's just massively limited by its design if you don't plan to do that.
 

Markess

Well-Known Member
May 19, 2018
1,146
761
113
Northern California
$300 shipping though... Thought about contacting them to see if they would use my UPS account number.
Shipping on the listing page gives me the same $300 to send to me on the other side of the country in California. But if I click on "see details", along with the $300 freight option, the chart also gives me the slightly cheaper option of
US $250.00 - United States - Standard Shipping (FedEx Ground or FedEx Home Delivery®)

It's not a lot less, but $50 is $50.

Oddly, the spec sheet says its noise level is 38 dB. That's... really, really quiet!?
Maybe they're measuring the noise level with the special noise reduction system engaged... the one you get to by pressing and holding this button till it goes quiet.



That KVM cable looks just like the HP ones from the same general generation. I suppose it would be asking too much for it to be a common, interchangeable design.
 

eduncan911

The New James Dean
Jul 27, 2015
648
506
93
eduncan911.com
Just got this reply from the seller:

We can include the 60 caddys for 200$
100-240VAC 50/60 Hz PSUs, comes with 4.
For shipping cost I can get a price quote please send me shipping details
Full setup would only allow 56 LFF drives and 4 SSDs
and come without any OS
Bios would be unlocked
 
  • Like
Reactions: Samir

Samir

Post Liker and Deal Hunter Extraordinaire!
Jul 21, 2017
3,257
1,445
113
49
HSV and SFO
That KVM cable looks just like the HP ones from the same general generation. I suppose it would be asking too much for it to be a common, interchangeable design.
This makes sense to me as the front grille reminds me of HP as well. I know Dell used to make the older Cisco servers, so maybe they moved to HP now.
 

Cruzader

Well-Known Member
Jan 1, 2021
540
544
93
This makes sense to me as the front grille reminds me of HP as well. I know Dell used to make the older Cisco servers, so maybe they moved to HP now.
Some of the Cisco stuff from around the HP G5/G6 era is pretty much just HP with darker plastic. Same cage layouts, same blanks, tray locks, etc.

They put in the bare minimum effort beyond just taping over the HP logo :D
 

eduncan911

The New James Dean
Jul 27, 2015
648
506
93
eduncan911.com
It has a custom cable.
Figured so, as I've seen this across Supermicro.

No word if it comes with it or not.

As for networking:

It's either the SIOC with QSFP+ or the PCIe SIOC with a card, and it's clearly a blank plate with no PCIe card installed in the pic.
Awesome! So they are dual 40 Gbps QSFP+, which is exactly what I need.

However, can I use just any transceivers and active cables out there? Hmm... I'd really like to get my hands on one.

The other option sounds even better: a single x8 PCIe half-height slot (for a dual 40G card), plus two NVMe slots. ZFS SLOG, yes please!


A bit of a fun-looking box, but unless you're planning to run its original/intended software setup, I would not touch it with a 10-foot pole.
It's just massively limited by its design if you don't plan to do that.
What do you mean by original intended software?

I thought it's a SAS3 backplane expander chip, hopefully supported by the Linux kernel. I saw it listed in the docs but haven't been able to research its GNU/Linux support. Then there's the Intel C612 chipset (I'm guessing that's what the M4 has; haven't had time to read up on it). Some network drivers, etc.

Will these boxes not run a normal GNU/Linux distro?

Please don't tell me they need like a half-dozen custom-compiled kernel modules or drivers or something. Lol.
 
  • Like
Reactions: Samir

Cruzader

Well-Known Member
Jan 1, 2021
540
544
93
What do you mean by original intended software?

I thought it's a SAS3 backplane expander chip, hopefully supported by the Linux kernel. I saw it listed in the docs but haven't been able to research its GNU/Linux support. Then there's the Intel C612 chipset (I'm guessing that's what the M4 has; haven't had time to read up on it). Some network drivers, etc.

Will these boxes not run a normal GNU/Linux distro?

Please don't tell me they need like a half-dozen custom-compiled kernel modules or drivers or something. Lol.
It's like any SAN shelf: you can run it fine with just one host connected to it and whatever drives/distro you like. To actually split it across 2 hosts you need DP (dual-port) drives, and not just a regular distro.

There is a reason these types of boxes get dumped at a low price.
You get one host with a high drive capacity but very limited I/O/expansion, and a 2nd node that is even more limited, without drives.
And the cooling, scaled for 800-1000 W, uses more power than a regular server would by itself.

As much as I find them somewhat interesting, it's the type of hardware I would not run even if I got it for free.
Though for free I'd probably take one just to gut it and try converting it into a pure shelf.
 
  • Like
Reactions: Samir

Alex0220

Active Member
Feb 13, 2021
176
43
28
That KVM cable looks just like the HP ones from the same general generation. I suppose it would be asking too much for it to be a common, interchangeable design.
Yes, most of the blade solutions from all vendors use this type of connector and cable. Supermicro also uses it, along with HPE and Cisco.
 
  • Like
Reactions: Markess and Samir

eduncan911

The New James Dean
Jul 27, 2015
648
506
93
eduncan911.com
It's like any SAN shelf: you can run it fine with just one host connected to it and whatever drives/distro you like. To actually split it across 2 hosts you need DP (dual-port) drives, and not just a regular distro.

There is a reason these types of boxes get dumped at a low price.
You get one host with a high drive capacity but very limited I/O/expansion, and a 2nd node that is even more limited, without drives.
And the cooling, scaled for 800-1000 W, uses more power than a regular server would by itself.

As much as I find them somewhat interesting, it's the type of hardware I would not run even if I got it for free.
Though for free I'd probably take one just to gut it and try converting it into a pure shelf.
The more I read on this box, the more it seems different from that.

For example, it uses an LSI expander chip, and the manual talks about assigning disk groups and which nodes they are assigned to.

You shouldn't have to use only dual-port drives with LSI expanders like this. I have an LSI expander chip in a Supermicro 2U dual-node X9 chassis here.

If you insert a dual-port SAS drive, you can assign groups and failover groups for it in the LSI BIOS config at boot. Yes, there are some custom Windows drivers for it. Ubuntu was supported, but I never looked into it.

However, you can also insert a regular 2.5" SATA drive. Inside the LSI expander config, you only have the option to assign the disk to a node/group. No splitting the disk.

The point is, you should be able to assign each HDD slot to a specific node. Or just run one node with 56 drives!
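
If anyone ends up testing the zoning on one of these, a rough way to sanity-check it from the OS side is to walk the kernel's SES enclosure class and see which slots this node can actually reach. A minimal sketch in Python, assuming the chassis presents a normal SES enclosure to Linux (I haven't confirmed that on this exact box) and the ses/enclosure modules are loaded:

```python
#!/usr/bin/env python3
# Minimal sketch: list SES enclosure slots and which ones this node can see.
# Assumes a Linux host with the ses/enclosure modules loaded; nothing here is
# Cisco-specific, and the slot naming depends on the enclosure firmware.
from pathlib import Path

for encl in sorted(Path("/sys/class/enclosure").glob("*")):
    components = (encl / "components").read_text().strip()
    print(f"Enclosure {encl.name}: {components} slots")
    for slot in sorted(p for p in encl.iterdir() if (p / "status").is_file()):
        status = (slot / "status").read_text().strip()
        disk = "empty"
        dev_link = slot / "device"  # symlink exists only if a drive is attached
        if dev_link.exists():
            block = list((dev_link / "block").glob("sd*"))
            disk = block[0].name if block else "present"
        print(f"  {slot.name:24s} status={status:12s} {disk}")
```

The count of slots with a disk attached should match however many bays you assigned to that node.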

I just counted... Holy smokes. I have 51 drives I could take out of 3 chassis and slam all into this one box.

And sell the SC846s to cover the costs. So tempting...

Let's see what deal the seller gives on non-freight shipping.
 
  • Like
Reactions: Samir

Cruzader

Well-Known Member
Jan 1, 2021
540
544
93
You still end up with an exotic box with future parts issues if you ever need anything, and an "oof" of a wattage.
No chance of expansion, very limited I/O, layers of possible compatibility issues, and pretty much the same pricing as just going with a regular 2U box + a shelf or two.
(Especially if using some 2.5" drives like you mentioned; there's so much 24-bay stuff in the $70-150 range with trays.)

Beyond the novelty of running a top loader (which most people regret getting), I don't see any upsides to this box.
Either I'm boring or I've dealt with too many boxes like these, but I would not wish it on my worst enemy for lab use with a generic distro.

Though it's probably somewhat of a ritual to buy at least one top loader before ditching it again.
I've got a small stack of 84-bay units in storage from my old "fun" with them before replacing them :D
 
  • Like
Reactions: Samir

Indecided

Active Member
Sep 5, 2015
163
83
28
I can imagine that this will sort of kill my DC power budget, but the thought is growing on me. I do like Cisco computing gear, except for the "fabric interconnect/nexus" bit, which we've managed to dodge all these years.

I don't have a need for much cold/warm storage right now, and while that may change in the coming years, I'm still trying to hold off... I hope I am somewhat succeeding. Although $200 more for trays is fairly reasonable... great.

I guess I haven't experienced the joys of a top loader, having missed out on the last HGST 4U60 deal, but then again we've been trying to move away from spinning disks for several years now.

However, it seems that getting parts for the S3X60 will be a challenge. For example, the IT-mode cards like UCSC-S3X60-HBA / UCSC-S3X60-DHBA don't appear to have any current or recent transaction history on eBay. I wonder if generics would work/fit. Cisco has a habit of using on-board, semi-custom "riserless" layout slots for their LSI storage controllers.

I'd still be keen to hear back from whoever buys one, though.
 
Last edited:
  • Like
Reactions: Samir

oddball

Active Member
May 18, 2018
206
121
43
42
We have one of these in our DC. It's a great box.

The chassis supports two server nodes. The nodes can be M3/M4/M5 models. The M3 uses Ivy Bridge chips, the M4 supports the V3/V4 stuff, and the M5 Skylake/Cascade Lake, etc. Both nodes have to be the same model, but chips and configs can be different.

You have these IOM cards; they handle the network connections and management of the server.

RAID cards are in the server node itself. You can have dual RAID cards and set up some crazy failover situations.

The drive trays are cheap, $5-10 on eBay, even cheaper if you ping some of the sellers and ask to buy in bulk.

These accept ANY SAS or SATA drives. We've tested it; it's unsupported, but you aren't buying support anyway. The server nodes use four separate drive bays in the back on the bottom with a software RAID setup. Server nodes can hold a few NVMe drives too.

You can rack this on any shelf-type rails.

As for noise, they ARE quiet. We had ours ramp up like crazy because the firmware on an NVMe drive was out of date; we updated it and it became silent.

If you have dual IOMs you can get 160 Gbps out of the chassis at once. There are trays for SAS SSDs. I think you can do 24 SSDs; if you did a RAID 10 you could easily max the bandwidth.

Cisco has a doc on their site showing how to saturate a 40GbE link with just spinning disks.
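
Rough numbers to put that in perspective (back-of-the-envelope only; the ~200 MB/s per HDD and ~1 GB/s per SAS SSD figures are assumptions, not Cisco's numbers):

```python
# Back-of-the-envelope chassis throughput vs. the uplinks.
# Assumed sequential rates (guesses, not from the Cisco doc):
#   ~200 MB/s per 7.2K LFF HDD, ~1000 MB/s per 12G SAS SSD.

def aggregate_gbit(drives: int, mb_per_s_each: float) -> float:
    """Aggregate sequential throughput of N identical drives, in Gbit/s."""
    return drives * mb_per_s_each * 8 / 1000

print(f"56 HDDs, raw aggregate:      {aggregate_gbit(56, 200):6.1f} Gbit/s")
print(f"24 SAS SSDs, raw aggregate:  {aggregate_gbit(24, 1000):6.1f} Gbit/s")
print(f"24 SSDs as RAID 10 (writes): {aggregate_gbit(12, 1000):6.1f} Gbit/s")
print("Single 40GbE uplink:           40.0 Gbit/s")
print("Dual SIOCs, 4x 40GbE:         160.0 Gbit/s")
```

Even the spinning disks alone can, on paper, push past a single 40GbE link, which is consistent with that doc.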
 

oddball

Active Member
May 18, 2018
206
121
43
42
Regarding power, ours is about half full with a server node and it's using 550 watts.

Why the hate on the top loader? Maybe we're exceptional, but I don't think the box has been slid forward or opened in eight months to maybe a year. Probably about the same as most servers: once set up, they rarely change.

If you're looking at this, make sure it's the M4 model. I think there's a slight change in the chassis style, and the newer style can support the M4 and M5 nodes.

We manage ours with the fabric interconnects and the Cisco manager. But you could manage it directly via the web UI; there are no licensing costs associated with it.
 

Cruzader

Well-Known Member
Jan 1, 2021
540
544
93
Why the hate on the top loader? Maybe we're exceptional, but I don't think the box has been slid forward or opened in eight months to maybe a year. Probably about the same as most servers: once set up, they rarely change.
The top loader part is just a bit of a meme by now.
But it's not actually the top-loading part; it's that these are always the boxes that, due to density, have gone over to 4-6-8 expanders layered on another expander, with zoning to split up the drives, the layering limiting compatibility further, etc.

They're a bit of an "if I had a dollar for every time..." case, given the number of people who buy them and never get them working the way they wanted/expected (or at all), because they expected them to behave like a dumb single-backplane device with direct pass-through to the HBA, and most don't work like that out of the box.
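
If you want to see how deep that layering goes on a given box, the kernel's SAS transport class will show it. A quick sketch, assuming a Linux host whose HBA uses that class (e.g. mpt3sas); the sysfs paths are standard, but what you see obviously depends on the hardware:

```python
#!/usr/bin/env python3
# Quick sketch: enumerate the SAS expanders and end devices the HBA discovered.
# Assumes a Linux host with an HBA driven through the kernel's SAS transport
# class; on a layered top-loader you should see several expanders listed.
from pathlib import Path

SAS_CLASS = Path("/sys/class")

expanders = sorted((SAS_CLASS / "sas_expander").glob("expander-*"))
end_devices = sorted((SAS_CLASS / "sas_end_device").glob("end_device-*"))

print(f"{len(expanders)} expander(s) visible:")
for exp in expanders:
    vendor = (exp / "vendor_id").read_text().strip() if (exp / "vendor_id").is_file() else "?"
    product = (exp / "product_id").read_text().strip() if (exp / "product_id").is_file() else "?"
    print(f"  {exp.name}: {vendor} {product}")

print(f"{len(end_devices)} end device(s) (drives, SES processors, ...)")
```

If several expanders show up chained behind one another, that's the zoned/layered setup that trips people up.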
 
  • Like
Reactions: Samir

oddball

Active Member
May 18, 2018
206
121
43
42
The top loader part is just a bit of a meme by now.
But it's not actually the top-loading part; it's that these are always the boxes that, due to density, have gone over to 4-6-8 expanders layered on another expander, with zoning to split up the drives, the layering limiting compatibility further, etc.

They're a bit of an "if I had a dollar for every time..." case, given the number of people who buy them and never get them working the way they wanted/expected (or at all), because they expected them to behave like a dumb single-backplane device with direct pass-through to the HBA, and most don't work like that out of the box.
Ah, makes sense.

I believe the Cisco one actually can work like that. You can create a single large virtual drive and then have all of that hang off a single server node.

Some of the documentation describes a use case for these boxes as giant drives for storing video surveillance footage, so it just needs to be drives hooked together.
 
  • Like
Reactions: Samir

eduncan911

The New James Dean
Jul 27, 2015
648
506
93
eduncan911.com
So the latest negotiations with the seller...

Shipping for one of the $299 chassis to the East Coast will be about $135, not $300 as in the listing. The $300 is for general freight but they will put it in a box for cheaper shipping.

They confirmed the $299 listing comes with both nodes, though with no specifics on whether they're M3 or M4. They did offer "the same build and nodes with V4 CPUs" in another message, so they're most likely M4 nodes.

It does NOT come with the KVM breakout cable. They will add it for a surcharge.

I would absolutely pull the trigger right now... if I knew that the LSI expander chip/built-in RAID card would work with ZFS and/or Ceph, which are quite picky about RAID controllers, even in pass-through mode.

And these M4 nodes come configured as pass-through with the onboard RAID controllers they have.
 
Last edited:
  • Like
Reactions: Samir

oneplane

Well-Known Member
Jul 23, 2021
844
484
63
That might indicate that the "$2,400.00 Standard International Shipping" is actually general freight shipping as well and not just mailing a box... hmm. That said, it's a big chunky metal unit so it will probably still be very expensive to get one of those to the EU either way.
 
Last edited:
  • Like
Reactions: Samir

eduncan911

The New James Dean
Jul 27, 2015
648
506
93
eduncan911.com
I asked the seller to clarify what exact storage controller comes with the M4 nodes. There are two models, one for HBA and one for MRAID (LSI Megaraid based on the Broadcom

Found this in the documentation:

Choosing Between RAID 0 and JBOD
The RAID controller supports JBOD mode (non-RAID) on physical drives that are in pass-through mode and directly exposed to the OS. We recommend that you use JBOD mode instead of individual RAID 0 volumes when possible.

It seems the problem with ZFS and RAID controllers is that the typical ZFS-unsupported RAID controller doesn't pass the disks through to the OS. Instead, the controller only gives an option to configure each drive as a standalone RAID 0 device, which ZFS does not like.

However, the quote above suggests this RAID controller can be configured as a real JBOD.
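
A quick way to check from Linux whether the drives really come through raw before building a pool on them: in JBOD/pass-through mode the OS should see each disk's real vendor/model string, while single-drive RAID 0 volumes typically show up under the controller's name instead. A rough sketch (the vendor-string heuristic is my own guess, not anything from the Cisco docs):

```python
#!/usr/bin/env python3
# Rough sketch: flag disks that look like RAID virtual volumes rather than
# pass-through physical drives. The vendor/model hints below are heuristic
# guesses, not an authoritative list; adjust for whatever the controller
# actually reports.
from pathlib import Path

VIRTUAL_HINTS = ("LSI", "AVAGO", "BROADCOM", "CISCO", "VIRTUAL", "MR9")

for blk in sorted(Path("/sys/block").glob("sd*")):
    vendor = (blk / "device" / "vendor").read_text().strip()
    model = (blk / "device" / "model").read_text().strip()
    looks_virtual = any(h in f"{vendor} {model}".upper() for h in VIRTUAL_HINTS)
    verdict = "looks like a RAID virtual volume" if looks_virtual else "pass-through"
    print(f"{blk.name:6s} {vendor:10s} {model:20s} -> {verdict}")
```

If everything shows up with the actual drive model strings, that's at least a good sign for ZFS/Ceph.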
 
  • Like
Reactions: Samir

jtaj

Member
Jul 13, 2021
74
38
18
@oddball or anyone who has this server, please help. We got this server a couple of months back and we can't seem to boot from the 2.5" SATA SSDs. The SSDs get picked up by the MegaRAID controller and are seen in the BIOS and RAID menu, but the BIOS does not show them as a bootable option, which differs from the old M3.

The boot screen (see the attached "bios menu boot.png" / "boot menu.png" screenshots) only shows network, the PCI RAID adapter, and the UEFI shell. Setting up a virtual drive or JBOD via the RAID controller still doesn't make that array/drive show up as a boot option in the BIOS. What could the issue be?
 
Last edited:
  • Like
Reactions: Samir