AVOID - 4U, 2x Node, 4x E5 V3/V4, 56x LFF SAS3 3.5" bay - $299 - CISCO UCS C3260


Jaket

Active Member
Jan 4, 2017
232
119
43
Seattle, New York
purevoltage.com
SUCCESS!! Kinda...

So I got to thinking about the cables I am using and the post I linked above, and wondered what the problem could be. The DACs I have are 10Gtek branded and "Cisco compatible". I got to digging into just what that cable reports to the switch/SIOC and found out that it is NOT encoded correctly. It is being reported as OEM QSFP-H40G-CU3M, when it's supposed to say CISCO QSFP-H40G-CU3M.

In my endless empire of dirt, I was able to source a genuine, bona fide Cisco OEM QSFP+ to 4x SFP+ cable, splitting 40G into 4x10G. I have an old PoE switch with some SFP+ ports, so I figured what the heck, let's try it.

Link came up immediately, 4x10G happy as can be...

Annoyed, I put my Arista optic back in the SIOC, with a 1M OM3 cable and an Arista optic at the other end. Link came up.

WTF????

Long story short, you can "fool" the SIOC into accepting any optic as long as it sees an OEM one first. The link is solid and working fine, as long as it doesn't flap. Not ideal, but it lets me move forward. OEM optics are on the way; we're back on track.
Good to hear. We had this happen in the past, and then the link went down. I would try flapping the port, rebooting, etc., and see if it keeps working for you. We had it happen in production where something didn't come back up, and it was not fun trying to fix. :)
Glad to hear it, though. We have some 40G to 4x10G Cisco breakouts and a lot of other things, but sadly those are for a large project a few months from now. Hopefully we will get to play with them more ahead of time.

I believe we have 20 of these systems at the moment.
 

jtaj

Member
Jul 13, 2021
74
38
18
SUCCESS!! Kinda...

So I got to thinking about the cables I am using and the post I linked above, and wondered what the problem could be. The DACs I have are 10Gtek branded and "Cisco compatible". I got to digging into just what that cable reports to the switch/SIOC and found out that it is NOT encoded correctly. It is being reported as OEM QSFP-H40G-CU3M, when it's supposed to say CISCO QSFP-H40G-CU3M.

In my endless empire of dirt, I was able to source a genuine, bona fide Cisco OEM QSFP+ to 4x SFP+ cable, splitting 40G into 4x10G. I have an old PoE switch with some SFP+ ports, so I figured what the heck, let's try it.

Link came up immediately, 4x10G happy as can be...

Annoyed, I put my Arista optic back in the SIOC, with a 1M OM3 cable and an Arista optic at the other end. Link came up.

WTF????

Long story short, you can "fool" the SIOC into accepting any optic as long as it sees an OEM one first. The link is solid and working fine, as long as it doesn't flap. Not ideal, but it lets me move forward. OEM optics are on the way; we're back on track.
Can you share the 40G Cisco cable part number, in case we need to purchase them?
 

Slothstronaut

Member
Apr 27, 2022
29
58
13
Yea, this is just a temporary setup until the correct optics arrive. As you said, it's not stable: you can run it for a while, then it will drop. But I just wanted to get into these machines to test their functions before it's too late. The cable I used to "trick" it was a Cisco QSFP-4SFP10G-CU5M. Anything from that series will work.

Table of supported optics/DACs from the S3260 Server Manual:
QSFP-40G-SR-BD – QSFP40G bidirectional short-reach optical transceiver
QSFP-40G-SR4 – 40GBASE-SR4 QSFP optical transceiver module with MPO connector
QSFP-H40G-CU1M – 40GBASE-CR4 Passive Copper Cable, 1m
QSFP-H40G-CU3M – 40GBASE-CR4 Passive Copper Cable, 3m
QSFP-H40G-CU5M – 40GBASE-CR4 Passive Copper Cable, 5m
QSFP-H40G-ACU7M – 40GBASE-CR4 Active Copper Cable, 7m
QSFP-H40G-ACU10M – 40GBASE-CR4 Active Copper Cable, 10m
QSFP-4SFP10G-CU1M – QSFP to 4xSFP10G Passive Copper Splitter Cable, 1m
QSFP-4SFP10G-CU3M – QSFP to 4xSFP10G Passive Copper Splitter Cable, 3m
QSFP-4SFP10G-CU5M – QSFP to 4xSFP10G Passive Copper Splitter Cable, 5m
QSFP-4X10G-AC7M – QSFP to 4xSFP10G Active Copper Splitter Cable, 7m
QSFP-4X10G-AC10M – QSFP to 4xSFP10G Active Copper Splitter Cable, 10m
QSFP-40G-LR4 – QSFP 40GBASE-LR4 transceiver module, LC, 10km
QSFP-4X10G-LR-S – 4x10GBASE-LR transceiver module, SM MPO, 10km
QSFP-H40G-AOC1M – 40-Gbps QSFP active optical cable, 1m
QSFP-H40G-AOC2M – 40-Gbps QSFP active optical cable, 2m
QSFP-H40G-AOC3M – 40-Gbps QSFP active optical cable, 3m
QSFP-H40G-AOC5M – 40-Gbps QSFP active optical cable, 5m
QSFP-H40G-AOC7M – 40-Gbps QSFP active optical cable, 7m
QSFP-H40G-AOC10M – 40-Gbps QSFP active optical cable, 10m
QSFP-4X10G-AOC1M – QSFP to four SFP+ active optical breakout cable, 1m
QSFP-4X10G-AOC2M – QSFP to four SFP+ active optical breakout cable, 2m
QSFP-4X10G-AOC3M – QSFP to four SFP+ active optical breakout cable, 3m
QSFP-4X10G-AOC5M – QSFP to four SFP+ active optical breakout cable, 5m
QSFP-4X10G-AOC7M – QSFP to four SFP+ active optical breakout cable, 7m
QSFP-4X10G-AOC10M – QSFP to four SFP+ active optical breakout cable, 10m

Interestingly, mine actually reports a part number of "L45593-D178-B50". I am starting to think that as long as the vendor ID is Cisco, it doesn't care about the rest. I am tempted to purchase an SFP+/QSFP+ EEPROM writer and just start fixing my own optics to get around this silly problem. I would be done by now!
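
For reference, the vendor string being checked lives at a fixed offset in the module EEPROM. Here is a minimal sketch of how you could inspect it, assuming you have a raw 256-byte dump of the QSFP+ page 00h (e.g. saved via `ethtool -m <iface> raw on`) laid out per SFF-8636, where the vendor name occupies bytes 148-163 and the part number bytes 168-183; the dump filename is hypothetical:

```python
# Minimal sketch: inspect the vendor fields in a QSFP+ EEPROM dump (SFF-8636).
# Assumes a raw 256-byte image of page 00h; offsets per the SFF-8636 spec:
# vendor name = bytes 148-163, vendor part number = bytes 168-183.
import sys

VENDOR_NAME = slice(148, 164)  # 16 ASCII bytes, space-padded
VENDOR_PN = slice(168, 184)    # 16 ASCII bytes, space-padded

def check_module(path: str) -> None:
    with open(path, "rb") as f:
        eeprom = f.read(256)
    if len(eeprom) < 184:
        sys.exit(f"{path}: dump too short to contain the vendor fields")
    vendor = eeprom[VENDOR_NAME].decode("ascii", "replace").strip()
    part = eeprom[VENDOR_PN].decode("ascii", "replace").strip()
    print(f"Vendor: {vendor!r}  Part: {part!r}")
    # The behavior observed in this thread suggests the SIOC keys on the
    # vendor string, not the part number, so this is the field that matters.
    if vendor.upper() != "CISCO":
        print("-> vendor field is not CISCO; the SIOC will likely reject it")

if __name__ == "__main__":
    check_module(sys.argv[1])  # e.g. qsfp_dump.bin (hypothetical file)
```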

Basically, once you get the physical links up, the vNICs inside the OS will come up as well; they mirror what the physical links are doing. You can factory-default the CMC/SIOC and, with correct optics, they will link up and pass all the VLANs through to the OS right out of the box, no tinkering needed.

I will likely be installing bare-metal Windows, so I will be tweaking the vNICs to pass through a specific VLAN instead of the whole trunk. I will also likely be integrating this into NSX-T, which, allegedly, is supported by the VIC 1300.

I was also able to verify that the JBOD mode functions like an HBA and passes through the raw drives, perfect for ZFS or other software RAID. Just a shame it can't present the drives to both nodes.
 

Jaket

Active Member
Jan 4, 2017
232
119
43
Seattle, New York
purevoltage.com
I was also able to verify that the JBOD mode functions like an HBA and passes through the raw drives, perfect for ZFS or other software RAID. Just a shame it can't present the drives to both nodes.
Interesting, I was told that both nodes could share the drives half and half. Hopefully this is true, as that was a big part of the usage we had planned. If not, we can just use all the storage for one node and use the other as redundancy, which is why we got a large number of these.

Looks like we can't find the dongle. I knew I should have ordered a few more. Looks like no testing for a few more days here.
 

Slothstronaut

Member
Apr 27, 2022
29
58
13
Oh, you can assign the drives any way you like. The 56 drives in the top-loader can be assigned 28/28, or 1/55, or 14/42, or anything in between. The two boot drives at the back cannot be "zoned" and are always assigned to their respective node.

My comment was in regard to the fact that you cannot assign a drive to BOTH nodes at the same time, for multipath SAS HA. Reading all the documentation before buying, it made it sound like that was the whole idea of 2 nodes in one chassis, but it seems that was not the case.

This will be my third attempt at this design and after falling short yet again, I will just stick with a single file server and be done with it.
 

oddball

Active Member
May 18, 2018
206
121
43
42
You need to put the SIOC in trunk mode; if there is a virtual NIC with a VLAN set, the link won't come up. Then trunk the other side.

These will work fine without a fabric interconnect.

We have a single node in ours. The SIOCs are configured for failover.

We've run Cisco servers for a LONG time and have only had one failure: a C240 M4's motherboard had some error and went bad. The machine still ran, but the logs were filled with errors.

If anything, the most amazing thing about Cisco machines is how they will charge forward with faulty gear. If RAM goes bad, the server simply disables it, throws a warning, and keeps working. Same with drives, even a bad CPU.
 

jtaj

Member
Jul 13, 2021
74
38
18
@Slothstronaut were you able to get it on the network with internet access using the cable you used? Also, which driver is required under Windows, or was it recognized by default? Lastly, which Windows version are you using?
 

Slothstronaut

Member
Apr 27, 2022
29
58
13
@Slothstronaut were you able to get it on the network with internet access using the cable you used? Also, which driver is required under Windows, or was it recognized by default? Lastly, which Windows version are you using?
Kinda. The Arista optics will work for a while, sometimes an hour or so, sometimes just a few minutes. It was really just a test to confirm that these are capable of connecting, the hardware is good, it's configured correctly, etc. On that note, some 10Gtek optics arrived today from Amazon; they have the exact same problem as the 10Gtek DACs in that the vendor field is incorrect and reads "OEM" instead of "CISCO", so they do not work. I kinda expected that, but they were available for next-day shipping, so I thought I'd rule that out. My Cisco OEM optics are still on the way, so it will probably be next week before I can do any further tests. Takeaway from this:

DO NOT USE NON-OEM DACS/OPTICS ON THESE SERVERS! THEY MUST BE GENUINE CISCO

I am running Windows Server 2022. None of the drivers are inbox for UCS, so you will need the ISO with the driver pack that matches the firmware version the chassis is on. The VIC vNICs won't actually show up in Device Manager until you install the chipset and UCS drivers and reboot; then they appear and you can install their drivers. I wonder if there is some kind of hardware validation going on where it won't present the VIC vNICs until a specific driver loads and it can verify the hardware.

I am still using my workaround to poke along and test these out. Far from ideal, but until my genuine QSFP+ optics arrive, it's all I've got.
 

pcmantinker

Member
Apr 23, 2022
21
32
13
I have my unit up and running happily! However, I am seeing these voltage faults. I've tried replacing all the PSUs with the same result. Has anyone seen this, and do you know how to proceed? Is it fixable, or something I should worry about?
EDIT:
I am running everything on 208V from my UPS/PDU. I haven't tried 120V yet to see if that matters.
 


Slothstronaut

Member
Apr 27, 2022
29
58
13
I have my unit up and running happily! However, I am seeing these voltage faults. I've tried replacing all the PSUs with the same result. Has anyone seen this, and do you know how to proceed? Is it fixable, or something I should worry about?
EDIT:
I am running everything on 208V from my UPS/PDU. I haven't tried 120V yet to see if that matters.
The important part of the error is cut off on the right, but it appears to be a CPU voltage problem. Either the CPU has a fault or the motherboard does. The PSUs shouldn't have anything to do with it, as the CPU is fed from the 12V rail(s) through the CPU VRM, which is located on the motherboard itself. It's odd to see BOTH boards failing in the same way. You could check that the CPUs are installed correctly, with no FOD (foreign object debris) in the socket or on the contact pads.
 

pcmantinker

Member
Apr 23, 2022
21
32
13
The important part of the error is cut off on the right, but it appears to be a CPU voltage problem. Either the CPU has a fault or the motherboard does. The PSUs shouldn't have anything to do with it, as the CPU is fed from the 12V rail(s) through the CPU VRM, which is located on the motherboard itself. It's odd to see BOTH boards failing in the same way. You could check that the CPUs are installed correctly, with no FOD (foreign object debris) in the socket or on the contact pads.
Here are the full fault messages:
[screenshot: fault messages]
Voltage readings:
[screenshot: voltage sensor readings]

I will try reseating and cleaning the CPUs/sockets and see if that makes a difference. Thanks for the tips.
 

pcmantinker

Member
Apr 23, 2022
21
32
13
Sadly, reseating the CPUs and cleaning the pins didn't seem to improve the reported faults. I do plan to upgrade these CPUs at some point. It could be that one or more of the CPUs has a problem, but it would be difficult to diagnose which one exactly. I remember reading that the passive heatsinks can support up to 120W TDP CPUs. Does that sound right? That would mean something like the E5-2683 v4 if I'm not mistaken. Having 4x 16 cores and 4x 32 threads would make this server quite powerful.
 

Slothstronaut

Member
Apr 27, 2022
29
58
13
That seems a bit high, yea. I would hit up the seller before doing anything else. He has a BUNCH of these and might just swap out the boards with you.

I can't speak to the server's max capacity. Cisco builds these configured to a handful of SKUs and doesn't really expect a lot of aftermarket changes. From looking at it, those heatsinks are not exactly huge; I would say 120W TDP is OK as long as you have datacenter cooling.

Along the same lines: since I was planning on doing a dual-head NAS and now can't, I have a blade not doing anything. I did see that some of the storage expanders showed up on eBay for $200. That would add 4 additional drives to the back.

Or I will throw a truckload of RAM in it and make it another ESXi node in the cluster. My weekend project will be to try to force the backplane to allow drives to be visible to both blades; I don't have high hopes for that, however.

New OEM QSFP+ optics arrive today!
 

Digital Spaceport

New Member
Apr 17, 2022
11
10
3
Austin, Tx
digitalspaceport.com
Sadly, reseating the CPUs and cleaning the pins didn't seem to improve the reported faults. I do plan to upgrade these CPUs at some point. It could be that one or more of the CPUs has a problem, but it would be difficult to diagnose which one exactly. I remember reading that the passive heatsinks can support up to 120W TDP CPUs. Does that sound right? That would mean something like the E5-2683 v4 if I'm not mistaken. Having 4x 16 cores and 4x 32 threads would make this server quite powerful.
I would try swapping the CPUs around and see if you can isolate the fault. Do you have any other machines that can handle v3/v4 CPUs?
 

pcmantinker

Member
Apr 23, 2022
21
32
13
That seems a bit high, yea. I would hit up the seller before doing anything else. He has a BUNCH of these and might just swap out the boards with you.

I can't speak to the server's max capacity. Cisco builds these configured to a handful of SKUs and doesn't really expect a lot of aftermarket changes. From looking at it, those heatsinks are not exactly huge; I would say 120W TDP is OK as long as you have datacenter cooling.

Along the same lines: since I was planning on doing a dual-head NAS and now can't, I have a blade not doing anything. I did see that some of the storage expanders showed up on eBay for $200. That would add 4 additional drives to the back.

Or I will throw a truckload of RAM in it and make it another ESXi node in the cluster. My weekend project will be to try to force the backplane to allow drives to be visible to both blades; I don't have high hopes for that, however.

New OEM QSFP+ optics arrive today!
I contacted the seller to see if they can send replacement boards/nodes. I'm hoping it's a simple exchange and the faults will be resolved. The listing does state there is a 90-day warranty.

Good to know about the 120W TDP probably being OK. I was referencing CPUs from their spec sheet, and they should be compatible.

While you can't necessarily share drives between nodes, you can split the drive allocation between them: you could have 28 on one node and 28 on the other. I'm planning a dual unRAID setup for managing all 56 3.5" HDDs. Then I can use the internal U.2 SSDs for cache and other write-intensive tasks.

I'll probably max these out at 256GB each eventually, too. 16GB DDR4 DIMMs aren't too expensive!

OEM cables for the win! I am currently using 4x10Gb breakout copper cables and will be switching to 40Gb copper soon.
 

pcmantinker

Member
Apr 23, 2022
21
32
13
I would try swapping the CPUs around and see if you can isolate the fault. Do you have any other machines that can handle v3/v4 CPUs?
I only have this one Cisco UCS server that supports v3/v4 CPUs, but I could swap the CPUs between the two nodes and test each of them. I'll give that a shot later today.
 

Slothstronaut

Member
Apr 27, 2022
29
58
13
Cisco optics are in and work perfectly!

Got to install HDSentinel on the machine. I only have 16 of the 32 drives I am supposed to have (the rest have yet to be mailed...), but they all show less than 20 days of power-on time and less than 20TB written! They are basically brand new!

Doing my stress tests now. I'll let this run for a while to get an idea of how they will do, but that's awesome to see.

I will need to work on it further, but when the drives are in RAID they aren't visible to HDSentinel, which is odd; other LSI cards I have in use show them as part of an array and still let me see the individual stats.
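
For anyone hitting the same wall: the usual workaround is to ask the controller for each physical drive directly instead of querying the virtual disk. Here is a minimal sketch of the idea, assuming a Linux box with smartmontools installed and an LSI MegaRAID-family controller; the device path and slot count are assumptions, and whether the UCS pass-through behaves the same way is an open question.

```python
# Minimal sketch: read per-drive SMART data from behind an LSI MegaRAID-family
# controller using smartctl's pass-through addressing.
import subprocess

def smart_behind_raid(block_dev: str = "/dev/sda", max_slots: int = 16) -> None:
    for n in range(max_slots):
        # "-d megaraid,N" asks smartctl to address the Nth physical drive
        # behind the controller, even when only a virtual disk is exposed.
        result = subprocess.run(
            ["smartctl", "-a", "-d", f"megaraid,{n}", block_dev],
            capture_output=True, text=True,
        )
        # ATA drives report "Device Model"; SAS drives report "Product:".
        if "Device Model" in result.stdout or "Product:" in result.stdout:
            print(f"--- physical drive slot {n} ---")
            print(result.stdout)

if __name__ == "__main__":
    smart_behind_raid()
```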