AVOID - 4U, 2x Node, 4x E5 V3/V4, 56x LFF SAS3 3.5" bay - $299 - CISCO UCS C3260


feffrey

New Member
Oct 3, 2014
11
14
3
Having worked a ton on UCS Fabric Interconnects and Cisco servers in the past, the VICs typically need to be configured before they will actually link up. I don't know about the C3260 systems, however; I never got a chance to work on those.

Really looking forward to hearing about people using them. My Synology has no empty slots and I'm really tempted to buy one of these and try Ceph/ZFS, etc., but I'm not willing to be the guinea pig! :)
 

Slothstronaut

Member
Apr 27, 2022
29
58
13
Mine arrived today. Packing was pretty good, no damage at all.

The configuration, however, is way off. They were sent with 8 drives per server instead of 16, and one of the nodes in each server has no RAM installed. I sent an email this morning as soon as I noticed but no reply yet.

The servers are still very much configured; no reset has been performed. The gigabit management NIC is enabled (not shared with the QSFP+), which is nice, but the password hasn't been reset, so you'll need to go through the steps to clear it. Looking for documentation now to just factory-default everything and start over.

Mine are both labeled "Content Delivery Engine", but appear to be C3260. M4 nodes in each.

More info to come as I dive deeper into them!
 

pcmantinker

Member
Apr 23, 2022
21
32
13
Having worked a ton on UCS Fabric Interconnects and Cisco servers in the past, the VICs typically need to be configured before they will actually link up. I don't know about the C3260 systems, however; I never got a chance to work on those.

Really looking forward to hearing about people using them. My Synology has no empty slots and I'm really tempted to buy one of these and try Ceph/ZFS, etc., but I'm not willing to be the guinea pig! :)
I'm hoping we get some answers here soon with a few of us now being guinea pigs, myself included! This machine has huge potential!
 

TLN

Active Member
Feb 26, 2016
523
84
28
34
Except shipping shows as $650 for me. I've got no use for something like that, but I'd buy one for $300-500.
 
  • Like
Reactions: Samir

eduncan911

The New James Dean
Jul 27, 2015
648
506
93
eduncan911.com
Except shipping shows as $650 for me. I've got no use for something like that, but I'd buy one for $300-500.
That's eBay's flat-rate freight charge, based on a simple weight the seller entered (and those estimates aren't accurate for these listings and vary wildly).

Contact the seller directly, as they will box them up one at a time for shipping. For me in the Northeast, it was around $160 or so (posted earlier in the thread).
 
  • Like
Reactions: Samir

jtaj

Member
Jul 13, 2021
74
38
18
I just bought 2 of these servers with some extra drives; the seller was very easy to work with and does some good deals if you contact them directly. Shipping for 2 machines was about $500 to the Midwest. They arrive tomorrow, so I will provide some updates on what exactly they are.
If you don't mind, please put together a step-by-step guide for dumb people like me. Installing Windows isn't doing me any wonders; it only picks up half the drive bays, and the Ethernet/internet connection via the SIOC does not work (even with the driver installed).
 
  • Like
Reactions: Samir

Slothstronaut

Member
Apr 27, 2022
29
58
13
I am still working thru a lot of things but here is what I have to report after a day of tinkering.

Good news:
The servers work perfectly fine; no issues to report, and they seem well taken care of. According to the remains of the config I've found, these were owned by AT&T, maybe storage for their cloud DVR?
After several hours of firmware updates, they are now both on the latest versions of everything that Cisco offers. (Version 4.1.3d)
These are, in fact, UCS S3260M4 machines. The CDE 6032 moniker on the front appears to be the branding Cisco uses for these custom builds.
They have the large RAID cards installed with the full 4GB cache, all features enabled.
The RAID cards DO support JBOD for unconfigured drives. I am still getting an OS installed to verify that they pass through without any adulteration from the RAID controller (see the quick check at the end of this post).

Bad news:
The drives are assigned in CIMC to each blade; you can pick and choose any/all drives to assign to either blade, but you cannot assign drives to BOTH blades. So there goes my master plan of an HA file server. The multipath on the drives is just backplane connectivity to the SAS expander.
The SIOC cards require Cisco optics/DACs. Arista and no-name DACs/optics are detected, but no link.
The boot drives on the back (2 per blade) are connected to the RAID card, so you cannot pass the RAID card thru to a VM or anything cheeky. These are meant for bare-metal file server installs (or enormous VM hosts)
The passwords and config are easy to reset, but you WILL need the KVM breakout cable. I have a Supermicro one because literally everything else I own is SM, but you won't get anywhere without it, as you can't reset the password/config from anything but the CIMC BIOS.

All in all, I'm happy with them so far (aside from the mixup with the order itself). I am working on installing Windows so I can do some poking around, run some load tests on the machines, and beat on the drives that came with it as well as the 56 12TB drives going in the other server. My thoughts now turn to the second blade. What could I do with that? It can't use any of the drives, and it doesn't have any PCIe. I guess I can use it for ESXi compute with iSCSI storage. One thought I had was to keep another install of the OS/config used by the first blade, so if something blows up on that one, I can import the drives to the RAID card and get back up and running a bit quicker, kind of a cold standby.

Anyways, that's my progress today. I will do a writeup, when I get some time, on how I did all this for those playing along. The VIC networking is a bit of a deep concept to wrap your head around if you aren't familiar with it. It's VERY similar to ESXi with physical NICs/vNICs. The 40GbE ports on the back are expected to be big fat trunks to A/B switches that carry all the VLANs you would need. Then you CREATE vNICs on the SIOC that get presented to the blades. It's network/storage virtualization at the hardware layer. Neat stuff, but it is just another layer of abstraction to troubleshoot when standing it up.
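
If it helps anyone, here's the mental model I use for it - purely illustrative Python, not actual Cisco config or the CIMC API, just the ESXi-style analogy above written out: physical QSFP+ uplinks are trunks, and the vNICs carved out on the SIOC are what each blade actually sees.

```python
# Toy model of the SIOC/VIC idea (NOT Cisco's API, just the analogy above).
from dataclasses import dataclass

@dataclass
class Uplink:                  # physical 40G QSFP+ port on the SIOC, a "big fat trunk"
    name: str
    allowed_vlans: set

@dataclass
class VNic:                    # virtual NIC defined on the VIC, presented to a blade
    name: str
    blade: int                 # which server node sees this NIC
    vlan: int                  # VLAN it rides on inside the trunk
    uplink: Uplink

uplink_a = Uplink("SIOC1/port1", allowed_vlans={10, 20, 30})

vnics = [
    VNic("eth0-mgmt",    blade=1, vlan=10, uplink=uplink_a),
    VNic("eth1-storage", blade=1, vlan=20, uplink=uplink_a),
    VNic("eth0-mgmt",    blade=2, vlan=10, uplink=uplink_a),
]

for v in vnics:
    # A vNIC only works if its VLAN is actually carried on the trunk it maps to.
    assert v.vlan in v.uplink.allowed_vlans, f"{v.name}: VLAN not carried on trunk"
    print(f"blade {v.blade} sees '{v.name}' on VLAN {v.vlan} via {v.uplink.name}")
```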

As to your question jtaj, I am assuming you are not getting a link because of the Cisco-branded requirement for the optics. Even if you have no vNICs configured, the link should still come up.
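
And here's the quick pass-through check I mentioned under the good news above, for when I get an OS on a blade. Just a sketch: it assumes a Linux install and that the controller exposes JBOD disks as plain SCSI devices, and it only walks sysfs (no vendor tools).

```python
# List whole disks and what the OS thinks they are. JBOD/pass-through drives
# should show the disk maker's strings (SEAGATE, HGST, TOSHIBA, ...); RAID
# virtual drives typically show the controller vendor instead (LSI/AVAGO).
from pathlib import Path

for dev in sorted(Path("/sys/class/block").glob("sd*")):
    if (dev / "partition").exists():      # skip partitions, keep whole disks
        continue
    vendor = (dev / "device" / "vendor").read_text().strip()
    model = (dev / "device" / "model").read_text().strip()
    print(f"{dev.name}: {vendor} {model}")
```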
 

jtaj

Member
Jul 13, 2021
74
38
18
The drives are assigned in CIMC to each blade; you can pick and choose any/all drives to assign to either blade, but you cannot assign drives to BOTH blades.
Some questions. So with the CIMC (via browser, I assume, after setting up the DHCP thing), you can have it set all the LFF bays to work with a single node (blade), yes? And from what @oddball mentioned, fan control can be done in there as well.

Are you installing Windows on it? Could you also take a picture of your SIOC and the ports you connect your Cisco DAC/optic to? There seem to be different variations of the SIOC, and mine doesn't match what's shown in the manual, so I have no idea which port is which.
 
  • Like
Reactions: Samir

Slothstronaut

Member
Apr 27, 2022
29
58
13
I will have to get back to you on the networking part; I just had an "oh shit" moment. I am starting to think these servers REQUIRE a Fabric Interconnect to work. They are marketed as standalone, but every single design reference calls out a 6300-series FI. I really, really hope that is not the case. I am not fluent with UCS; we use it at work, but I don't get this far into the weeds. Does anyone know how UCS and standalone servers work?

The SIOC I have is the one in the listing photos. It has dual 40G ports, a serial RJ45, and a management RJ45. I am currently doing all my work over the management RJ45. Windows is installing right now, but if I can't get the 40G ports to come up connected to my Arista, then this whole project is dead in the water... I am using OEM Cisco DACs, but the port remains "No Link". No errors that I can find anywhere; it just never negotiates a link.
 
  • Like
Reactions: Samir

feffrey

New Member
Oct 3, 2014
11
14
3
I will have to get back to you on the networking part; I just had an "oh shit" moment. I am starting to think these servers REQUIRE a Fabric Interconnect to work. They are marketed as standalone, but every single design reference calls out a 6300-series FI. I really, really hope that is not the case. I am not fluent with UCS; we use it at work, but I don't get this far into the weeds. Does anyone know how UCS and standalone servers work?

The SIOC I have is the one in the listing photos. It has dual 40G ports, a serial RJ45, and a management RJ45. I am currently doing all my work over the management RJ45. Windows is installing right now, but if I can't get the 40G ports to come up connected to my Arista, then this whole project is dead in the water... I am using OEM Cisco DACs, but the port remains "No Link". No errors that I can find anywhere; it just never negotiates a link.
They definitely support standalone mode. When in UCS-managed mode, the CIMC is disabled and you won't be able to log in to it.

Here is a UI guide for the CIMC:
Cisco UCS S-Series Integrated Management Controller GUI Configuration Guide for S3260 Servers, Release 4.1
I would bet something isn't right with the network settings for the VICs. Also, on the Fabric Interconnects it was possible to pull interface information via the UI or CLI for troubleshooting; I would assume these can do the same.

I did find this video on setting up the CIMC; it looks like out of the box it defaults to using the 40Gb NICs for management rather than the dedicated 1Gb interface.

For those that already have them: they are 208V only, right?
 
Last edited:
  • Love
  • Like
Reactions: Samir and jtaj

jtaj

Member
Jul 13, 2021
74
38
18
@feffrey, that video is an excellent find, ty. And to answer the voltage question: at our place we only have 120V, not 220V. It definitely works on 120V; that's what the power supply sticker shows.

What I find is that a lot of big servers will say 220V only, but I'm assuming the power supply is what really determines it, and some are dual 120V/220V. Unless the server pulls a lot of power right at startup, these storage servers most likely all have delayed and staggered spin-up.
 
  • Like
Reactions: Samir

eduncan911

The New James Dean
Jul 27, 2015
648
506
93
eduncan911.com
The power supplies for this beast are in a 2 + 2 configuration. The first number in a "# + #" PSU setup is how many PSUs are required for the system to operate at full capacity; the second number is how many PSUs are redundant. IOW, the C3260 requires two PSUs to run at max capacity.

Therefore, this is a 2100-watt system (2x 1050W PSUs configured as 2 + 0). That exceeds a standard 120VAC 15A circuit. However, that's under max load. If you don't fill the system up, you're looking at far, far less power - around 700-1400 Watts MAX under full load with 30 or so 7200rpm HDDs (at best guess). An earlier user posted that the system idled around ~550W with two nodes, 16x DIMMs each, and about half the drive bays filled with 7200rpm drives. But that's "idle"; those 4x CPUs will ramp up another 600W+ when you load up the CPUs and try to write to all of the HDDs at the same time.

You could technically run on a single PSU in a fault mode (as @jtaj found out when he plugged in just one PSU). Most 2+0 server configurations allow the use of a single PSU, operating at half capacity in a limp mode, as the system will only max out at 1050W - not its rated 2100 Watts. If you exceed the limit, the PSU shuts off. And you are already at ~600W with just the system turned on - 2x nodes, 4x CPUs, and 32x DIMMs, and no HDDs. The point here is: plan for the max load of your configuration, not the idle wattage.

As mentioned, the second 2 in 2 + 2 means redundancy: you can lose two PSUs and the system will still have enough power from the other two PSUs to continue operating at full capacity.

For completeness' sake: yes, most redundant two-PSU servers are classified as 1 + 1 (meaning you can lose a single PSU and the system can still operate at full capacity).

However, a lot of servers - especially GPU and "storage" systems with lots of 3.5" LFF HDD capacity, like some Supermicro systems I have - are 2 + 0 systems. Meaning, they need both PSUs at full capacity to supply enough power to the chassis.

This all depends on the server's PDU and how it handles 1+0, 1+1, 2+0, and 2+2 configurations and system power draw. What I have posted above is typical of Supermicro, Dell, and HP systems.

Now, about 220VAC... Since two PSUs under max load exceed a 15A 110VAC circuit, you need a larger circuit. 20A 110VAC is not common in data centers; however, 20A and 30A 220VAC is. Hence why I think they just say 208V - it's assumed to be at least a 20A circuit, whereas when someone says 110VAC, it's largely assumed to mean a max of 15A.

Oh, and you need 2x 20A circuits, to cover all 4 PSUs at max load - if you ever had the need for a 2000W HDD storage server. Hehe.

---

Do you need all of this? Hell no! Go ahead and plug both PSUs into a single 15A 110VAC circuit - there's no issue for most of us home labbers here. What this means is that the system will pull power from both PSUs until it trips the circuit breaker, at ~1680 Watts (which is the typical magic number for GFCI breakers IME). That's way more than enough to plop in 30x or 50x or so 7200rpm drives. So yeah, feel free to plug two of those PSU suckers into a single 110VAC 15A outlet (with a really good sine-wave UPS, though!) and not worry about the 208V or 4x PSU demands of this server.

You could do this with a single PSU as well. However, remember that you are tiptoeing around a single 1050W PSU. IOW, if you load up the chassis with 56x LFF drives and 4x 145W CPUs, you may experience a number of immediate power-offs of the server under heavy load testing. Operating two nodes with 4x 145W CPUs under max load is going to eat a lot of your 1050W budget (~600 to 700W for just the two nodes!).

IME, a 2x E5 V3 system with 8x normal non-LRDIMM DIMMs pulls about 85W idle with no other components, no BMC/IPMI, and no power-hungry onboard NICs. So add in the RAID controller, backplane, PDU, SAS expander chips (this system has two!), SIOC (network card(s), BMC, etc.), and you're looking at a bare-system idle of around 110W to 140W, at my best guess - multiplied by 2x because you have two nodes! That's idle. At full 100% CPU load, you'll be looking at around 300-350W per node. So that's about 600W to 700W you want to dedicate to the nodes when calculating HDD wattage under a single 1050W PSU.

IOW, you have a budget of around 300 to 400W for HDDs under a single PSU. Or, remove a node and its SIOC and gain another ~300W for more HDDs - off of a single 110VAC connection.
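
If you want to play with the numbers yourself, here's a rough back-of-the-napkin sketch using the ballpark figures above. They're my guesses, not measurements - swap in your own node, chassis, and per-drive wattages.

```python
# Back-of-the-napkin HDD budget for one 1050W PSU, using the guesses above.
PSU_CAPACITY_W = 1050        # one 1050W PSU (the 2+0 pair gives you 2100W total)
NODE_FULL_LOAD_W = 325       # ~300-350W per node at 100% CPU (2x E5 + DIMMs)
CHASSIS_OVERHEAD_W = 100     # RAID card, expanders, SIOC, fans (rough guess)
HDD_ACTIVE_W = 9             # 7200rpm LFF drive under load; spin-up surge is
                             # higher, but staggered spin-up spreads it out

def hdd_budget(nodes: int, psu_w: int = PSU_CAPACITY_W) -> int:
    """How many active 7200rpm HDDs fit in the remaining PSU budget."""
    remaining = psu_w - nodes * NODE_FULL_LOAD_W - CHASSIS_OVERHEAD_W
    return max(0, remaining // HDD_ACTIVE_W)

for nodes in (1, 2):
    print(f"{nodes} node(s) at full load: ~{hdd_budget(nodes)} HDDs on one 1050W PSU")

# Two PSUs on one 120V/15A circuit: the ceiling becomes the breaker, not the
# PSUs (1800W peak, and roughly 80% of that for a continuous load).
print(f"15A/120V circuit: {120 * 15}W peak, ~{120 * 15 * 0.8:.0f}W continuous")
```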

Best to use a normal single 15A 110V wall outlet, and connect two of the PSUs for a power budget of 1680W to play with (with nothing else on that circuit, that is). Or, run a 220VAC 30A circuit connected to a nice big and fat UPS. This is what I've done for my "server closet."

Remember, PSUs are a lot more efficient running at 220V than 110V (and produce less heat!).
 
Last edited:
  • Love
Reactions: Samir

Slothstronaut

Member
Apr 27, 2022
29
58
13
Well, I am at a loss here.

Did another full factory reset, blew away everything in the SIOC config, and put it all back to out-of-box defaults. Powered up the node and configured management to use the gigabit port because the 40G links don't come up. Reconfigured CIMC so I can manage the server, but the 40G links just will not come up.

Going through the debug commands, they detect that something is plugged in, but that is all I can see. The Arista switch on the other end sees that a DAC is plugged in, but no link. Verified in CIMC that the VIC ports are in "CE" or Classic Ethernet mode, which should "just work" with any switch, but I'll be damned if it does anything. Trying to connect server to server doesn't work either. I don't have a Cisco switch with 40G to test with, in case it hates Arista for some reason. I have already read all the documentation I can find about disabling FIP mode, setting the speed manually, etc.; no change.

Ideas? I've never come across this before. Either the SIOC needs some specific configuration to work or whatever I am doing is just not a supported configuration. Dozens of servers with Mellanox CX3/CX4 connected to this switch without issue.
 
  • Like
Reactions: Samir

Jaket

Active Member
Jan 4, 2017
232
119
43
Seattle, New York
purevoltage.com
So far we are just looking to get some of these racked up. However, we already see missing caddies on a few systems even though we paid extra to get all of the caddies. :(
Hopefully when we get around to the rest, they'll have the parts we ordered.
 
  • Like
Reactions: Samir

feffrey

New Member
Oct 3, 2014
11
14
3
The power supplies for this beast are in a 2 + 2 configuration. The first number in a "# + #" PSU setup is how many PSUs are required for the system to operate at full capacity; the second number is how many PSUs are redundant. IOW, the C3260 requires two PSUs to run at max capacity.

Therefore, this is a 2100-watt system (2x 1050W PSUs configured as 2 + 0). That exceeds a standard 120VAC 15A circuit. However, that's under max load. If you don't fill the system up, you're looking at far, far less power - around 700-1400 Watts MAX under full load with 30 or so 7200rpm HDDs (at best guess). An earlier user posted that the system idled around ~550W with two nodes, 16x DIMMs each, and about half the drive bays filled with 7200rpm drives. But that's "idle"; those 4x CPUs will ramp up another 600W+ when you load up the CPUs and try to write to all of the HDDs at the same time.

You could technically run on a single PSU in a fault mode (as @jtaj found out when he plugged in just one PSU). Most 2+0 server configurations allow the use of a single PSU, operating at half capacity in a limp mode, as the system will only max out at 1050W - not its rated 2100 Watts. If you exceed the limit, the PSU shuts off. And you are already at ~600W with just the system turned on - 2x nodes, 4x CPUs, and 32x DIMMs, and no HDDs. The point here is: plan for the max load of your configuration, not the idle wattage.

As mentioned, the second 2 in 2 + 2 means redundancy: you can lose two PSUs and the system will still have enough power from the other two PSUs to continue operating at full capacity.

For completeness' sake: yes, most redundant two-PSU servers are classified as 1 + 1 (meaning you can lose a single PSU and the system can still operate at full capacity).

However, a lot of servers - especially GPU and "storage" systems with lots of 3.5" LFF HDD capacity, like some Supermicro systems I have - are 2 + 0 systems. Meaning, they need both PSUs at full capacity to supply enough power to the chassis.

This all depends on the server's PDU and how it handles 1+0, 1+1, 2+0, and 2+2 configurations and system power draw. What I have posted above is typical of Supermicro, Dell, and HP systems.

Now, about 220VAC... Since two PSUs under max load exceed a 15A 110VAC circuit, you need a larger circuit. 20A 110VAC is not common in data centers; however, 20A and 30A 220VAC is. Hence why I think they just say 208V - it's assumed to be at least a 20A circuit, whereas when someone says 110VAC, it's largely assumed to mean a max of 15A.

Oh, and you need 2x 20A circuits, to cover all 4 PSUs at max load - if you ever had the need for a 2000W HDD storage server. Hehe.

---

Do you need all of this? Hell no! Go ahead and plug both PSUs into a single 15A 110VAC circuit - there's no issue for most of us home labbers here. What this means is that the system will pull power from both PSUs until it trips the circuit breaker, at ~1680 Watts (which is the typical magic number for GFCI breakers IME). That's way more than enough to plop in 30x or 50x or so 7200rpm drives. So yeah, feel free to plug two of those PSU suckers into a single 110VAC 15A outlet (with a really good sine-wave UPS, though!) and not worry about the 208V or 4x PSU demands of this server.

You could do this with a single PSU as well. However, remember that you are tiptoeing around a single 1050W PSU. IOW, if you load up the chassis with 56x LFF drives and 4x 145W CPUs, you may experience a number of immediate power-offs of the server under heavy load testing. Operating two nodes with 4x 145W CPUs under max load is going to eat a lot of your 1050W budget (~600 to 700W for just the two nodes!).

IME, a 2x E5 V3 system with 8x normal non-LRDIMM DIMMs pulls about 85W idle with no other components, no BMC/IPMI, and no power-hungry onboard NICs. So add in the RAID controller, backplane, PDU, SAS expander chips (this system has two!), SIOC (network card(s), BMC, etc.), and you're looking at a bare-system idle of around 110W to 140W, at my best guess - multiplied by 2x because you have two nodes! That's idle. At full 100% CPU load, you'll be looking at around 300-350W per node. So that's about 600W to 700W you want to dedicate to the nodes when calculating HDD wattage under a single 1050W PSU.

IOW, you have a budget of around 300 to 400W for HDDs under a single PSU. Or, remove a node and its SIOC and gain another ~300W for more HDDs - off of a single 110VAC connection.

Best to use a normal single 15A 110V wall outlet, and connect two of the PSUs for a power budget of 1680W to play with (with nothing else on that circuit, that is). Or, run a 220VAC 30A circuit connected to a nice big and fat UPS. This is what I've done for my "server closet."

Remember, PSUs are a lot more efficient running at 220V than 110V (and produce less heat!).
Appreciate the detailed write-up. I have been burnt in the past with a Dell 6850 I bought and set up at home, only to realize that it was 200-240V only! Glad I'll be able to avoid buying a new UPS just for this system. :)

Well, I am at a loss here.

Did another full factory reset, blew away everything in the SIOC config, and put it all back to out-of-box defaults. Powered up the node and configured management to use the gigabit port because the 40G links don't come up. Reconfigured CIMC so I can manage the server, but the 40G links just will not come up.

Going through the debug commands, they detect that something is plugged in, but that is all I can see. The Arista switch on the other end sees that a DAC is plugged in, but no link. Verified in CIMC that the VIC ports are in "CE" or Classic Ethernet mode, which should "just work" with any switch, but I'll be damned if it does anything. Trying to connect server to server doesn't work either. I don't have a Cisco switch with 40G to test with, in case it hates Arista for some reason. I have already read all the documentation I can find about disabling FIP mode, setting the speed manually, etc.; no change.

Ideas? I've never come across this before. Either the SIOC needs some specific configuration to work or whatever I am doing is just not a supported configuration. Dozens of servers with Mellanox CX3/CX4 connected to this switch without issue.
Are you able to try a 40Gb optic and fiber? I've personally had issues between Intel and Mikrotik and had to use optics + fiber; no brand or combination of twinax would work. In the past when I was doing UCS stuff, the only time I was connecting to non-Cisco gear on both ends was a UCS FI and Extreme switches, and there were no issues using Cisco or Extreme twinax cables.
 
  • Like
Reactions: Samir

Jaket

Active Member
Jan 4, 2017
232
119
43
Seattle, New York
purevoltage.com
Appreciate the detailed write-up. I have been burnt in the past with a Dell 6850 I bought and set up at home, only to realize that it was 200-240V only! Glad I'll be able to avoid buying a new UPS just for this system. :)



Are you able to try a 40Gb optic and fiber? I've personally had issues between Intel and Mikrotik and had to use optics + fiber; no brand or combination of twinax would work. In the past when I was doing UCS stuff, the only time I was connecting to non-Cisco gear on both ends was a UCS FI and Extreme switches, and there were no issues using Cisco or Extreme twinax cables.
Hopefully next week I'll be able to test this with an AOC cable and see whether or not it works. For now it's just about getting them all racked up; we have 80 other nodes going in today, plus a few racks of these units.
 
  • Like
Reactions: Samir

Slothstronaut

Member
Apr 27, 2022
29
58
13
I don't have any OEM Cisco optics, so I got some ordered today. Came across this post describing the same problem I am having - same generation of VIC, too. Fingers crossed this solves my problem!
 
  • Like
Reactions: Samir

Slothstronaut

Member
Apr 27, 2022
29
58
13
SUCCESS!! Kinda...

So I got to thinking about the cables I am using and the post I linked above, and wondered what could be the problem. The DACs I have are 10GTek branded and "Cisco Compatible". I got to digging into just what that cable reports to the switch/SIOC and found out that it is NOT encoded correctly. It is being reported as OEM QSFP-H40G-CU3M, when it is supposed to say CISCO QSFP-H40G-CU3M.
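
For anyone who wants to check what their own DACs claim to be before ordering OEM parts: if you can plug the cable into a Linux box, `ethtool -m <iface> raw on > module.bin` will dump the module EEPROM, and a few lines of Python can pull the vendor strings out of it. The offsets below are from my reading of SFF-8636 (the QSFP+ memory map), so treat them as assumptions rather than gospel.

```python
# Pull the vendor name / part number a QSFP+ module advertises, from a raw
# EEPROM dump (lower page + upper page 00h, 256 bytes). Offsets per SFF-8636
# as I understand it: vendor name at bytes 148-163, part number at 168-183.
import sys

VENDOR_NAME = slice(148, 164)   # 16 ASCII chars, space padded
VENDOR_PN   = slice(168, 184)   # 16 ASCII chars, space padded

def module_identity(path: str):
    raw = open(path, "rb").read()
    if len(raw) < 256:
        raise ValueError("expected at least 256 bytes of QSFP EEPROM")
    name = raw[VENDOR_NAME].decode("ascii", "replace").strip()
    pn = raw[VENDOR_PN].decode("ascii", "replace").strip()
    return name, pn

if __name__ == "__main__":
    name, pn = module_identity(sys.argv[1])
    print(f"vendor: {name!r}  part number: {pn!r}")
    # My 10GTek DACs report a vendor of 'OEM' here; a genuine cable reports
    # a Cisco vendor string, which appears to be what the SIOC is checking for.
```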

In my endless empire of dirt, I was able to source a genuine, bonafide OEM Cisco QSFP+ to 4x SFP+ cable, splitting 40G into 4x 10G. I have an old PoE switch with some SFP+ ports, so I figured what the heck, let's try it.

Link came up immediately, 4x10G happy as can be...

Annoyed, I put my Arista optic back in the SIOC, with a 1M OM3 cable and an Arista optic at the other end. Link came up.

WTF????

Long story short, you can "fool" the SIOC into accepting any optic as long as it sees an OEM Cisco one first. The link is solid and working fine, as long as it doesn't flap. Not ideal, but it lets me move forward. OEM optics are on the way; we're back on track.