Cisco storage server IPMI/node config options


bilbo1337

Member
Sep 18, 2020
I have a question regarding this Cisco storage server; I mostly know Dell's servers and iDRAC. There are two server nodes on the Cisco system, and I'm assuming there's an option in Cisco's equivalent of iDRAC, or in IPMI, to make the nodes redundant/failover or to treat it all as one server? I also have a question about Cisco's management interface: does it cost money per year, or is it like iDRAC, where it's unlocked if you have the enterprise version and you don't have to renew a license? I just don't want to imagine a nightmare scenario where this peculiar box requires other special hardware or software, or is gated by how many cores are allocated or other nonsense.
 

oddball

Active Member
May 18, 2018
We have one of these.

It's three things:
1) A chassis
2) IOM cards
3) Servers

The chassis holds the disks, the IOMs (which are network fabric cards), and the servers. The chassis has a bunch of sensors for the disks.

You load this thing with disks and it will support just about anything. The spec sheet says drives need to be installed in groups, but we've played with it; you can put anything you want in there: SAS, SATA, single drives, multiples, etc.

The IOMs support failover in a single-node setup. This is what we have: you have IOM A and IOM B, with one primary and one secondary. You'll be limited to 160Gbps of bandwidth (80Gbps full duplex), but I think it's worth it.

The nodes have the RAID cards in them. You can have dual RAID cards with failover as well: you run a single server node plus the port-expander node, which holds the second RAID card. That way you have dual RAID cards, dual IOMs, etc. CIMC can shut off bad memory and a bad CPU while the system keeps operating. Not that I've ever seen a CPU go bad, but I digress.
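If you ever want to watch for those failed-DIMM or sensor events from a script, CIMC speaks standard IPMI once you enable IPMI over LAN in its settings. Here's a minimal sketch driving ipmitool from Python; the host address and credentials are placeholders:

```python
import subprocess

CIMC_HOST = "192.0.2.10"  # placeholder CIMC address
CIMC_USER = "admin"       # placeholder credentials
CIMC_PASS = "password"

def ipmi(*args):
    """Run an ipmitool command against the CIMC over IPMI-over-LAN."""
    cmd = ["ipmitool", "-I", "lanplus",
           "-H", CIMC_HOST, "-U", CIMC_USER, "-P", CIMC_PASS, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# Sensor data repository: temperatures, fans, voltages, drive/DIMM sensors
print(ipmi("sdr", "list"))

# System event log: this is where ECC errors and hardware faults show up
print(ipmi("sel", "list"))
```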

What's great about Cisco is there are NO licenses or any restrictions on hardware mishmash. You can run these managed by UCS Manager (this is how we do it), but they also work perfectly fine standalone.

If you run it standalone you'd have access to both nodes, the chassis, and the networking, all through the web UI. You can install any drives you want, and if you go the expander route instead of dual nodes, you can dump whatever PCIe cards and NVMe drives in there you want as well.
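If you'd rather script against a standalone box than click through the web UI, newer CIMC firmware also exposes a Redfish REST API. A rough sketch, assuming Redfish is enabled on your firmware level; the paths are the standard Redfish ones (not verified against a specific CIMC release), and the address/credentials are placeholders:

```python
import requests
import urllib3

urllib3.disable_warnings()            # CIMC ships a self-signed cert

CIMC = "https://192.0.2.10"           # placeholder CIMC address
AUTH = ("admin", "password")          # placeholder credentials

def get(path):
    """GET a Redfish resource from the CIMC."""
    r = requests.get(f"{CIMC}{path}", auth=AUTH, verify=False, timeout=10)
    r.raise_for_status()
    return r.json()

# Enumerate the server nodes the controller exposes
for member in get("/redfish/v1/Systems")["Members"]:
    system = get(member["@odata.id"])
    print(system["Id"], system.get("Model"), system.get("PowerState"))

# The chassis (disk bays, sensors) lives in its own collection
for member in get("/redfish/v1/Chassis")["Members"]:
    print(member["@odata.id"])
```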

There are four drive bays on the back of the chassis that support boot drives. We have some 100GB SSDs in there, and it's great for the hypervisor.

We have an M4 node, and I believe it has a slot for a single NVMe drive; the M5 nodes have two slots.

In the case of the listing, I believe each IOM goes to a single server, so there would be a CIMC connection per IOM. According to the lid of the drive cover, when you have two servers you can configure the chassis so that both servers share the drives or split them in half.

This is a really solid unit. Cisco has some docs on their website showing how quickly you can saturate a 40GbE connection with this thing.
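The math is easy to sanity-check. A quick back-of-the-envelope sketch; the 56-drive count and ~200 MB/s per-disk sequential figure are my assumptions, not numbers from Cisco's docs:

```python
# Rough estimate: aggregate spinning-disk throughput vs. a 40GbE link
DRIVES = 56            # assumed top-load bay count for this class of chassis
MB_PER_DRIVE = 200     # assumed sequential MB/s per modern nearline HDD

aggregate_gbps = DRIVES * MB_PER_DRIVE * 8 / 1000   # MB/s -> Gbps
print(f"Aggregate disk throughput: ~{aggregate_gbps:.0f} Gbps")  # ~90 Gbps
print(f"Saturates 40GbE: {aggregate_gbps > 40}")                 # True
```

Even with conservative per-drive numbers, a full shelf of spinning disks blows well past a single 40GbE link.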