$127 Cisco ENCS5412/K9, Xeon D-1557 (12 core), 32GB RAM

RobstarUSA

Active Member
Sep 15, 2016
233
104
43
Anyone know anything about these? Can any OS be installed on them? If so, these might make a great light server/VM host.

13 available as of my post. No software is included, so I'm not sure whether that makes these unusable without a support contract.

Edit: Looks like these CPUs are Broadwell generation, so a bit old.

eBay
Some info
 

ccie4526

Member
Jan 25, 2021
93
59
18
Interesting. I remember being really interested when these first came out, but never had the opportunity to play with one. This might make for an opportunity.... :D
 

newabc

Active Member
Jan 20, 2019
469
243
43
Looks like a similar idea to the earlier VMware edge appliances with an Atom C3000-series CPU.
Probably intended for SDN, SD-WAN, or cloud virtual networking.
 

Mymlan

Clean, Friendly, and In Stock.
Oct 1, 2013
32
73
18
I would kill for noise and real-world power data on these boxes. At a 125 W typical power rating with no modules and only three 40mm fans, I worry it wouldn't be possible to swap in Noctuas for small rooms.
 

bvd

Member
Jan 2, 2021
93
89
18
I would kill for noise and real-world power data on these boxes. At a 125 W typical power rating with no modules and only three 40mm fans, I worry it wouldn't be possible to swap in Noctuas for small rooms.
Seems practically guaranteed to be loud, so I passed for that reason. Also because Cisco BIOSes (in my experience) are so locked down, I wouldn't put it past them to block boot if "unexpected hardware" is detected :-/
 

oneplane

Well-Known Member
Jul 23, 2021
845
484
63
Looks like most of the front LEDs are FPGA-controlled. The block of 8 ports on the left hangs off a Marvell Alaska V 88E1680 (a highly integrated, ultra-low-power eight-port 10/100/1000 Mbps transceiver), so that's a 'switch within a switch', probably with one or two upstream ports to the CPU.

The block in the middle suggests different interfaces as well:

two serial ports (BMC and Xeon), two local ports (I210 to Xeon, and another to the BMC), and four ports that are actually just two dual-personality ports (you can't use them all, only 2 at a time).

So realistically:

(update: that Marvell 88E1680 uses QSGMII, so it's four 1 Gbps links to the switch at best, which means that more than 4 Gbps won't be possible using those ports if you do routing or multi-VLAN firewalling, for example)
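The QSGMII ceiling above can be sanity-checked with a back-of-the-envelope sketch (assuming a standard QSGMII lane, which multiplexes four 1 Gbps SGMII links; port counts are taken from the post):

```python
# Rough sanity check of the port-bandwidth ceiling described above.
# Assumption: one QSGMII lane carries four 1 Gbps SGMII links, so the
# front ports behind it can never move more than 4 Gbps combined
# through the CPU, regardless of how traffic is spread.
PORTS_ON_LANE = 4
GBPS_PER_PORT = 1.0

aggregate_gbps = PORTS_ON_LANE * GBPS_PER_PORT  # 4.0 Gbps ceiling

# Worst case: all 8 front switch ports funnel through that uplink.
FRONT_PORTS = 8
oversubscription = (FRONT_PORTS * GBPS_PER_PORT) / aggregate_gbps

print(aggregate_gbps)    # 4.0
print(oversubscription)  # 2.0 (2:1 oversubscribed at line rate)
```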

Xeon-to-Marvell has one link, or maybe two, probably KR-type links which require sideband or I2C configuration to expose on the Marvell ports
Xeon-to-I210 is a separate PCIe controller, a 'real' link
Xeon-to-dual-personality has two links, but these might be 'real' links

The LEDs on the other side are all connected to the FPGA as well, except the Power status, that's MCU-controlled.

This leads me to believe that this is indeed a similar construction to the NFV/VEP-type boxes (like newabc suggested), where you have at least four different network systems in one box, most of it controlled over I2C via an FPGA, and an embedded microcontroller managing some power and thermals. Normally that's something the SIO does (like a Nuvoton), but embedded systems often appear not to need a SIO and use a general MCU instead.

It's definitely cheap for the resources you get, but it will take a bit of debugging and getting our hands on the 'normal' software to find out how all the internal I/O configuration works. As a firewall/routing appliance it will work either way, as long as you don't see it as a "12-port device"; it really is more like a 4-port device, with a basic L2 switch permanently attached on at least one of those ports.

I wouldn't mind having one of them just to hack on it to see what it can do. But shipping+tax to the EU is too much for me just for a hack-it box :confused:
 
Last edited:

autoturk

Active Member
Sep 1, 2022
165
113
43
I have a bunch of these, and if you live in the SF Bay Area and can pick up, I can give you a pretty good deal on them if you want one. I bought six of them from the same seller for a cluster that I played with but ultimately ended up going in a different direction.

Having said that -- these are a bit of an oddball. You can install whatever OS you want on them, but there are some features (like PoE on the left-hand side switch) that I believe you can only enable using the Cisco software, which you need a support contract for. I can provide some pictures of the inside if you are curious -- perhaps someone who is a bit more experienced with Cisco stuff can help me figure out how to get PoE working.

Idle power usage is about 60 watts. There's no way to use a PCIe or OCP card as far as I can tell, so you are stuck with 1Gb/s.
 
Last edited:

oneplane

Well-Known Member
Jul 23, 2021
845
484
63
You could get 10G via the M.2 Slot using an adapter in theory. As for the PoE, that would need some sideband commands to the chip on the daughterboard to get that working, likely routed via the FPGA.
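To make the "sideband commands via the FPGA" idea concrete, here is a purely hypothetical sketch of the kind of I2C poke PoE enablement might need. The FPGA address and register below are placeholders, NOT a documented Cisco register map; on real hardware you would first hunt for the device with something like `i2cdetect`:

```python
# Hypothetical: neither address nor register is from any Cisco doc.
FPGA_I2C_ADDR = 0x41   # placeholder FPGA address on the sideband bus
POE_ENABLE_REG = 0x10  # placeholder "PoE port enable" bitmask register

def enable_poe_ports(bus, port_mask):
    """OR `port_mask` into the (assumed) PoE enable register.

    `bus` is anything with SMBus-style byte-data methods, e.g.
    smbus2.SMBus(1) on a Linux host with i2c-dev loaded.
    Returns the register value read back after the write.
    """
    current = bus.read_byte_data(FPGA_I2C_ADDR, POE_ENABLE_REG)
    bus.write_byte_data(FPGA_I2C_ADDR, POE_ENABLE_REG, current | port_mask)
    return bus.read_byte_data(FPGA_I2C_ADDR, POE_ENABLE_REG)
```

Taking the bus object as a parameter keeps the sketch testable against a fake bus before risking writes to unknown hardware.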
 

mattventura

Active Member
Nov 9, 2022
447
217
43
Does anyone have disassembly/board photos of these? Would be interesting to see if you could just ditch the case and put it in something more standard.
 

autoturk

Active Member
Sep 1, 2022
165
113
43
Does anyone have disassembly/board photos of these? Would be interesting to see if you could just ditch the case and put it in something more standard.
you definitely cannot. It's not anything standard as far as I can tell. I'll try to take some photos tonight.
 
  • Like
Reactions: KingFrodo

autoturk

Active Member
Sep 1, 2022
165
113
43
You could get 10G via the M.2 Slot using an adapter in theory. As for the PoE, that would need some sideband commands to the chip on the daughterboard to get that working, likely routed via the FPGA.
I think the M.2 is SATA only. I know for sure the drive is SATA, but I don't know if the port allows for an NVMe drive. I'll give it a shot.
 

Mymlan

Clean, Friendly, and In Stock.
Oct 1, 2013
32
73
18
I have a bunch of these, and if you live in the SF Bay Area and can pick up, I can give you a pretty good deal on them if you want one. I bought six of them from the same seller for a cluster that I played with but ultimately ended up going in a different direction.

Having said that -- these are a bit of an oddball. You can install whatever OS you want on them, but there are some features (like PoE on the left-hand side switch) that I believe you can only enable using the Cisco software, which you need a support contract for. I can provide some pictures of the inside if you are curious -- perhaps someone who is a bit more experienced with Cisco stuff can help me figure out how to get PoE working.

Idle power usage is about 60 watts. There's no way to use a PCIe or OCP card as far as I can tell, so you are stuck with 1Gb/s.
Can you give us an idea of what the stock noise factor is like? I may be interested in a few of your units (I'm in the south bay).
 

autoturk

Active Member
Sep 1, 2022
165
113
43
Can you give us an idea of what the stock noise factor is like? I may be interested in a few of your units (I'm in the south bay).
It sounds loud at boot but calms down substantially. Not something you'd like to keep in your office but manageable. I'll try to take a video or something, but don't really have a good reference point. I'll also try to get the model number of the fans -- I think they were Deltas.
 

turbo

New Member
Mar 17, 2022
26
22
3
I have been playing with one of these for a few weeks. I was actually working on documenting some of my findings but haven't gotten far enough to post something yet :p but I can answer some questions.

It runs ESXi 8
1x Intel I210 is connected to the management port
2x Intel I350 ports are G0 and G1- only the SFP ports work, NFVIS software must do some magic to make the RJ45s work
2x Intel XL710 ports are connected to the internal Marvell switch @ 10Gbps
Onboard Intel X552 does not seem to be connected to anything, boo
Power draw is 28W with only the CIMC booted, 65W with the system running, and 85W max I've seen (excluding PoE)

The internal Marvell switch has 4x 10Gbps ports, 2 are connected to the main system XL710. The switch does not have any flash, it's booted with a special procedure from the host CPU (more on this later if anybody cares). The XL710 supports VM-to-VM accelerated networking and I am able to get about 30Gb/s between VMs using SR-IOV.
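For anyone wanting to reproduce the SR-IOV setup above under a Linux hypervisor (turbo is on ESXi 8, where the equivalent is done through vSphere), the stock kernel sysfs knob is `sriov_numvfs`. The sysfs interface is the standard one; the PCI address in the usage comment is a placeholder for whatever `lspci` reports for the XL710 on this box:

```python
from pathlib import Path

def set_sriov_numvfs(device_dir, num_vfs):
    """Reset then set the VF count under <device_dir>/sriov_numvfs.

    The kernel requires writing 0 before changing a nonzero VF count,
    so we always clear first.
    """
    node = Path(device_dir) / "sriov_numvfs"
    node.write_text("0")
    node.write_text(str(num_vfs))
    return int(node.read_text())

# e.g. set_sriov_numvfs("/sys/bus/pci/devices/0000:03:00.0", 4)
```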

There are 2 expansion slots- 1 Cisco NIM slot and 1 internal "RAID" slot. I'm guessing these are PCIe with some sort of proprietary pinout.

I could not get an NVMe drive to work in the M.2 slot despite the BIOS having NVMe options and bifurcation settings available; it may be wired for SATA only.
 

tjk

Active Member
Mar 3, 2013
481
199
43
Odd, the docs don't list any 10G modules, but you are saying there are 2x X710 10G SFP+ ports?
 

oneplane

Well-Known Member
Jul 23, 2021
845
484
63
Odd, the docs don't list any 10G modules, but you are saying there are 2x X710 10G SFP+ ports?
Those are the internal ports. The Marvell switch has the 8 ports you see on the front, but internally it has backplane links back to the management CPU. On C3000 systems those are KR links for example (2x10G) but on D1500 they can do more.

The ports that you can see on the outside that have SFP+ cages are dual-personality ports, most likely controlled by the FPGA, so you need to write to some I2C or MDIO address to have control switched from one port type to another. Since on the backend those are I350-controlled anyway, you'll never get 10GbE because that chip doesn't support it ;-)

The only way to get an external high-speed port is to use their custom form-factor PCIe connection.

I have been playing with one of these for a few weeks, I was actually working on documenting some of my findings but haven't gotten far enough to post something yet :p but I can answer some questions

It runs ESXi 8
1x Intel I210 is connected to the management port
2x Intel I350 ports are G0 and G1- only the SFP ports work, NFVIS software must do some magic to make the RJ45s work
2x Intel XL710 ports are connected to the internal Marvell switch @ 10Gbps
Onboard Intel X552 does not seem to be connected to anything, boo
So they have a SoC with decent ports and they just ignore them? That's just rude. Bloody Cisco. I'm surprised they have an XL710 in there instead, considering that's exactly what the on-SoC ports were meant for; they are almost always used in combination with some skanky switch chip as an internal backplane connection, unless there is a need for exactly 2x 10GbE, and then they put a bunch of Ixxx-series controllers in there for the other ports. Weird.

The internal Marvell switch has 4x 10Gbps ports, 2 are connected to the main system XL710. The switch does not have any flash, it's booted with a special procedure from the host CPU (more on this later if anybody cares).
It's a bunch of I2C/MDIO commands to tell it how to configure itself, isn't it? I have some hopes that more switch chips get upstream support via switchdev, but so far it's a bit of a shitshow and almost all non-Mellanox chips need some device-tree hints for automatic detection.
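As an illustration of what a flashless-switch boot like turbo describes tends to look like, here is a hypothetical sketch of a host streaming a config blob to the switch over I2C. The address, register, and chunk size are placeholders, not the real Marvell procedure:

```python
# Hypothetical: all three constants are placeholders, not Marvell's.
SWITCH_I2C_ADDR = 0x5C  # placeholder switch address on the sideband bus
LOAD_REG = 0x00         # placeholder load-window register
CHUNK = 16              # placeholder per-transfer size

def stream_switch_config(bus, blob):
    """Push `blob` to the switch chunk by chunk; returns chunks written.

    `bus` is anything with an SMBus-style write_i2c_block_data method,
    e.g. smbus2.SMBus(1) on a Linux host.
    """
    chunks = 0
    for off in range(0, len(blob), CHUNK):
        bus.write_i2c_block_data(SWITCH_I2C_ADDR, LOAD_REG,
                                 list(blob[off:off + CHUNK]))
        chunks += 1
    return chunks
```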

It'd be interesting to see what I2C, SPI and MDIO (or other sideband) interfaces the thing has. PoE, port personality and perhaps hardware control like power, fans and LEDs could live on some magic FPGA I2C address.
 
Last edited:

Mymlan

Clean, Friendly, and In Stock.
Oct 1, 2013
32
73
18
I have been playing with one of these for a few weeks, I was actually working on documenting some of my findings but haven't gotten far enough to post something yet :p but I can answer some questions

It runs ESXi 8
1x Intel I210 is connected to the management port
2x Intel I350 ports are G0 and G1- only the SFP ports work, NFVIS software must do some magic to make the RJ45s work
2x Intel XL710 ports are connected to the internal Marvell switch @ 10Gbps
Onboard Intel X552 does not seem to be connected to anything, boo
Power draw is 28W with only CIMC booted, 65W with system running, 85W max I've seen (excluding POE)

The internal Marvell switch has 4x 10Gbps ports, 2 are connected to the main system XL710. The switch does not have any flash, it's booted with a special procedure from the host CPU (more on this later if anybody cares). The XL710 supports VM-to-VM accelerated networking and I am able to get about 30Gb/s between VMs using SR-IOV.

There are 2 expansion slots- 1 Cisco NIM slot and 1 internal "RAID" slot. I'm guessing these are PCIe with some sort of proprietary pinout.

I could not get an NVME to work in the M.2 slot despite the BIOS having NVME options and bifurcation settings available, may be wired for SATA only.
How is the remote management/IPMI on the system? I don't have any experience with Cisco CIMCs outside of basic switches/routers.