The ECC and iLO make this a "true enterprise NUC" that, IMHO and at the $150 price point, is a better deal than the Tiny/Mini/Micro parts. In some SMB environments, a 1GbE network is still plenty for simple file sharing, printing, browsing, etc. I think this little box is aimed at those spaces.
With link aggregation across the optional 4 x 1GbE ports to run file services, print server, and AD, and the 2 x 1GbE ports used for the gateway, this is pretty awesome.
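For the LACP side of that setup, a Linux bond would do the job. Here's a minimal netplan sketch, assuming the four optional ports enumerate as eno1-eno4 (interface names and the address are placeholders, and the switch needs a matching 802.3ad port-channel configured):

```yaml
# /etc/netplan/01-bond.yaml - aggregate the 4 x 1GbE ports into one LACP bond
network:
  version: 2
  ethernets:
    eno1: {}
    eno2: {}
    eno3: {}
    eno4: {}
  bonds:
    bond0:
      interfaces: [eno1, eno2, eno3, eno4]
      parameters:
        mode: 802.3ad            # LACP
        lacp-rate: fast
        transmit-hash-policy: layer3+4
      addresses: [192.168.1.10/24]
```

Worth remembering that a single flow still tops out at 1Gbps with LACP; the win is aggregate throughput across many clients, which fits the file/print/AD use case described above.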
If run as a cluster setup, the 2 x HDD limitation is easily overcome.
Also, on the issue of BIOS updates...well, subject to correction, I think this box is EOL. If it is, then the whole BIOS-update worry is moot.
My only issue is the curious software licences tied to the optional parts, together with Zynstra's role in the whole thing.
If the licences are there to support the SMB space, fine. If, however, the box is really locked down to those licences, as some have suggested, then that would be the huge "gotcha" here.
Eh, "true enterprise NUC"? There are actual enterprise NUCs out there, like the Lenovo P320/330s and the Z2 Mini G3/G4 (they have workstation graphics with either PCIe or MXM slots, vPro/DASH, and support for ECC RAM up to 64GB). I think my t740 thin client could qualify, too (the Ryzen Embedded chip supports ECC as well) - that said, I don't want to pay 90 bucks for an 8GB DDR4 ECC SODIMM just to test that one out.
Calling this machine a NUC is a little unfair to the NUCs - at least the NUCs have size on their side. Most can add an extra Ethernet port through their M.2 Key-E slot, some can support 2 drives (M.2 SATA + SATA, M.2 NVMe + M.2 NVMe, dual NVMe, etc.), and others (like the Lenovo m720q) have useful PCIe breakouts.
Alright, let's do some math here. To justify the name "enterprise", I'll need to add 2 drives and 16, 32, or 64GB of DDR4-2133/2400 RDIMM (you can use UDIMM, but only up to 2x16GB configurations - about 5-10% cheaper if you do).
For 16GB, it's 2x8GB, which according to Newegg is ~50 USD/stick for RDIMM, so roughly 100 bucks.
For 32GB, it's 2x16GB, which according to Newegg is ~65 USD/stick for RDIMM, around 130 in total. (Yes, you can do 16GB in one stick, but why run only a single memory channel?) Value-wise, this is the sweet spot.
For 64GB, it's 2x32GB, which according to Newegg is ~135USD/stick for RDIMM, or 270.
Note that this assumes that you don't have compatible DDR4 server RAM lying around. If your work upgraded their servers and you have a bunch of 8GB DDR4 RAM units ready for recycling, that changes the metrics by a bit.
Let's say the drive cables are not included and run 20 USD each (a posting above quoted 30), and so do the carriers. That's 80 USD for two of each.
So at a very minimum you'll need to spend 150+100+80 USD, or roughly 330 USD, to get it working "by the books" with 2 drives loaded. Say you raided the corporate junk pool for 2 sticks of 8GB and you're willing to zip-tie the drives to the chassis somehow - the drive data cables (no indication they're included) still push it to 190 USD, not 150.

That's almost the same price as a secondary-market Tiny/Mini/Micro machine (or its bigger SFF cousin, which will have more RAM slots and PCIe slots), and those almost always come with RAM, a basic SSD, and a Windows license. So about the best thing you can say for the EC200A is...more NICs? My intuition tells me they could've built it smaller or packed more parts in there, but they chose not to - my guess is to keep costs as low, and the box as quiet, as possible.
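The back-of-the-envelope math above can be sketched as a quick script (all prices are the thread's ballpark USD figures, not live quotes):

```python
# Rough cost sketch for the EC200A build-out scenarios discussed above.

BASE = 150  # used EC200A

# RDIMM options, per-stick Newegg ballpark from the thread
ram_options = {
    "16GB (2x8GB)": 2 * 50,
    "32GB (2x16GB)": 2 * 65,
    "64GB (2x32GB)": 2 * 135,
}

# Drive plumbing: 2 data cables + 2 carriers at ~20 USD each
drive_kit = 4 * 20

for label, ram_cost in ram_options.items():
    print(f"{label}: {BASE + ram_cost + drive_kit} USD total")

# "Junk pool" scenario: free RAM, zip-tied drives, still need 2 data cables
print(f"Scavenged build: {BASE + 2 * 20} USD total")
```

The 16GB row lands at the 330 USD "by the books" figure, and the scavenged build at 190 USD, matching the totals above.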
A brand new HP MicroServer Gen10 Plus with a base 3-year depot warranty, a Xeon E-2224, 16GB of RAM, and 4 empty caddies ready for drives will probably run around 600-620 USD on eBay, and Amazon seems to confirm this.
The MSG10+ is still not a great machine in my opinion, mind you, but it has several things going for it over the EC200A - a 3-year warranty, 4 drive bays ready to go, a known Intel quad-port NIC setup with SR-IOV support ready for ESXi, a PCIe slot, and a current-generation CPU. With 4 bays you can push more I/O, and that PCIe slot is always welcome.
The question is...which would perform better and be more power efficient: two of those EC200As with their D-1518s clustered together, or a single MSG10+ running the E-2224?