Supermicro A2SDi (Atom C3000) NAS and virtualization server


ullbeking

Active Member
Jul 28, 2017
I'm curious to know if anybody here has tried building a virtualization server or NAS out of one of the new Supermicro A2SDi (Atom C3000) boards. They look awesome. I'm thinking about how I'm going to set things up at home and have so many options.

Option 1

Simply two boxes, one for each function: I'm wondering about using one of the boards for a NAS (e.g., FreeNAS, FreeBSD+ZFS, or Debian+ZoL) and one for a lightweight virtualization host (e.g., Proxmox VE or Debian+KVM).

The mini-ITX form factor is particularly appealing. If I were to provision an exclusive virt host, then ideally it would be a passively cooled SFF server with the VMs running on the NAS (or on local disks for simplicity). Moreover, this way I can get away with using non-ECC RAM on the virt host, which translates to more RAM and a better virtualization experience. The NAS would have 16 GB of ECC.

Option 2

Maybe it will be possible to run both functions as an all-in-one system, off one board, in one box. This would be convenient, and could be implemented in either of two ways, as far as I can see:
  • Debian+ZoL: A standard Debian install serving files to the LAN from its ZFS volumes, and running KVM VMs managed using the standard libvirt tools. In other words, a file server running side-by-side with up to a dozen lightly loaded VMs.
  • FreeNAS in a Proxmox VM: I've been told this is complicated to set up and requires fiddling with VT-d/IOMMU to get storage passed directly through to FreeNAS. My main concern, however, is that everything then becomes a single point of failure; moreover, recovering from failure could be complicated depending on what has failed.
Please note that the hyperconvergence buzzword just won't leave me alone and ideally I would like to get some experience with this. Does this require running your NAS in a VM?
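For reference, here's roughly what the Debian+ZoL all-in-one route would look like on the command line. This is only a sketch; the pool, dataset, and VM names are placeholders I made up, and it assumes a Debian install with the contrib repo enabled for the ZFS packages:

```shell
# Sketch only: pool "tank" and VM "testvm" are hypothetical names.
# ZFS and KVM/libvirt tooling from the Debian repos (contrib enabled).
apt install zfs-dkms zfsutils-linux qemu-kvm libvirt-daemon-system virtinst

# Mirrored pool with lz4 compression, with one dataset shared to the LAN over NFS
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb
zfs set compression=lz4 tank
zfs create -o sharenfs=on tank/media

# A dataset for VM images, then a lightly loaded KVM guest managed via libvirt
zfs create tank/vmimages
virt-install --name testvm --memory 2048 --vcpus 2 \
  --disk path=/tank/vmimages/testvm.qcow2,size=20 \
  --cdrom /tank/media/debian-netinst.iso --os-variant debian9
```

Storing the VM disks as qcow2 files on a dataset keeps things simple; zvols are the other obvious choice if you want ZFS snapshots per VM disk.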

I'm leaning towards one of the 16-core offerings (e.g., A2SDi-H-TP4F). I'm mindful of cooling, especially since it needs to be practically silent; I don't know how difficult that will be with chassis-based cooling. Currently I have a Fractal Design Node 304 waiting, and I'm wondering if it's a viable option for a compact solution.

One of the 8-core C3000 boards comes with a CPU fan, but I have a feeling 8 cores for both serving files and running VMs might be pushing it. Then again, the 8-core option might be OK if I really load it up with RAM and use SSDs and caching layers judiciously.

Option 3

Stick with what's known and tested by the community: the C2000 series, then implement one of the above options. These boards are bound to be cheaper, but they lack VT-d/IOMMU and demand DDR3 RAM (which is expensive and getting harder to find by the day).

Option 4

On a different level, change gears and build a system in a larger box, based on a Xeon E3-1200L v5 (a low-power CPU to ease cooling). For an all-in-one box or a large NAS, this may be a better option due to more expansion options, a stronger processor, etc. For a virtualization host, the cost of having fewer cores might be made up for by loading it up with more RAM (although, practically speaking, mini-ITX is no longer an option).

Does anybody have any opinions or thoughts? I can't be the first person who's thought of doing this.
 

Evan

Well-Known Member
Jan 6, 2016
Xeon-D also has lots of mini-ITX options. If you can step up to mATX, that also opens up the chance to use E5 systems and, as you point out, E3, etc.
Honestly I would not use C2000 if it's a new purchase today (unless you can get a used one with memory cheap).

C3000 seems pretty cool for low power, but it lacks AVX; is that important to your planned workload? I don't think it is.
The 8-core and 16-core have the same total cache, so the 16-core has half the per-core cache; it remains to be seen what impact this has, waiting for the review. The other issue is finding the boards; there's still not much stock around yet.

One advantage of two systems is that if one breaks you can move the disks to the other if you really need to get going again.
 

Patrick

Administrator
Staff member
Dec 21, 2010
We got our second C3955 mITX board today. Going to be another 2 weeks or so for the 8C and C3958 16C boards.

I do not think that anyone has these boards in house yet other than us. The chips were released to manufacturers mid-August, and they need to be shipped, mounted on PCBs, tested, and then sent to distribution. Assume the second half of September for general C3000 availability.

The A2SDi-H-TP4F we already reviewed: Supermicro A2SDi-H-TP4F Review 16 Core SoC With Power Consumption

We tested a version with the C3955. We will have the updated C3958 board reviewed in the next 2-3 weeks.

Here is the board with Proxmox: Proxmox VE 5 with Intel Atom C3000 Series Denverton


On the options:

Option 1: Always good. If you do get the C3000 series, get ECC RDIMMs. There is also the C3338, which can make an OK low-end NAS box, but ideally you want 8 cores to get full C3000 functionality.

Option 2: Proxmox is Debian with ZOL so no need to roll your own or do pass-through. Or run FreeNAS and use bhyve. Proxmox (Debian) is much better at virtualization. FreeNAS has a better NAS GUI.
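To illustrate the no-passthrough route (a sketch, not from the review; the storage ID and pool name here are made up): on a stock Proxmox VE 5 host you can create a pool with the bundled ZoL tools and register it as VM storage directly:

```shell
# Mirrored ZFS pool on two spare disks ("tank" is a hypothetical name)
zpool create -o ashift=12 tank mirror /dev/sdb /dev/sdc

# Register it with Proxmox as VM/container storage ("tankstore" is made up)
pvesm add zfspool tankstore --pool tank --content images,rootdir

# The same host can serve files from the pool directly, NAS-style
zfs create -o sharenfs=on tank/share
```

No VT-d/IOMMU fiddling needed, and the pool is importable on any other ZoL/FreeBSD box if the board dies.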

On noise: these use the same coolers as the Xeon D series so you can buy and swap a small cooler, or you can just plop a Noctua fan on there (guide here: Near silent powerhouse: Making a quieter MicroLab platform ) and be done with it. You may need an extra zip tie, but this is a low effort task.

Option 3: DDR3 is cheap. But unless I truly did not need more performance, more RAM, more SATA, or more networking, I would wait for the C3000 series or get a Xeon D.

Option 4: With the Xeon E3's, just get a retail cooler. They generally run near silent unless you are actually at 100% utilization.

Other options not discussed: E5 V3/V4 are slightly more expensive. Likewise, the Intel Xeon Silver 4108 is a super chip for the price, albeit you would need to go ATX rather than mITX. This should have informative benchmarks for you: Intel Xeon Silver 4112 Linux Benchmarks and Review

Finally, you could also consider a Synology/ QNAP. They have some fairly nice offerings these days.
 

Marco

New Member
Sep 23, 2013
Hi Patrick, I'm trying to get a better picture of the power consumption figures for Denverton, and I'm a bit surprised by the "single thread" test when compared with the X10SDV-2-TLN2F, which actually consumes less (admittedly it should be compared against the A2SDi-4C-HLN4F, but that's not available yet). My guess is that an Atom core should "weigh" around 2W while a D1508 core should be about 5W, and I also fail to see major testbed differences... am I missing anything?

Another unrelated question: I guess those boards may not have the final BIOS release yet, but is it currently possible to enable x2x2 bifurcation on the x4 PCIe slot?

Thank you!
 

Evan

Well-Known Member
Jan 6, 2016
@Marco the Xeon-D core ≈ 2x C3000 Denverton cores, is what I think you're trying to say, right?
At idle the two platforms will be very similar, as baseline items such as the IPMI make up a good proportion of the idle power consumption; at max load the C3000 seems to use a decent amount less. For a given workload I expect the actual total power consumed will be similar, unless you're talking 100% loaded. Cost-wise, the cost for X performance is also reasonably similar.

C3000 gets you more SATA if you want a bigger NAS, and some board and CPU combinations get more network, but overall they are rather similar platforms. For light virtualisation I still feel the Xeon-D is a better pick, but it depends on your needs.
 

Marco

New Member
Sep 23, 2013
@Marco the Xeon-D core ≈ 2x C3000 Denverton cores, is what I think you're trying to say, right?
Yes, roughly. It largely depends on the workload, but it looks like a Broadwell core can be almost twice as fast, so in many cases the different Xeon D and C3000 SKUs are comparable after performing the appropriate core translation (ignoring HT). For simpler "streaming" usage, that is, network or storage, Denverton is probably preferable and more efficient, with a lower max consumption. For heavier computation, Broadwell-DE is.

A really rough estimate for C3558:
- 1.5/2W per core (7/8W total)
- QAT 1/2W max
- 20Gbit MAC + 2Gbit, ~2.5W
- the rest (memory controller, 20 HSIO lanes, eMMC, etc.) 4/5W

And for a competing D1508:
- 5W per core (10/11W total)
- 20Gbit MAC, ~2W
- the rest (memory controller, x24 PCIe 3.0 + x8 PCIe 2.0, 6x SATA, 4x USB 3.0, etc.) 9/11W

So, depending on the system configuration, power consumption at idle could well be similar, especially when including the constant load of a BMC. Pricing is also very similar, with Denverton being just a bit cheaper, so it mainly boils down to expandability and max power envelope when choosing one over the other.
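Summing the midpoints of those per-component ranges as a quick sanity check (just arithmetic; the split itself is my estimate, not measured data):

```shell
# Midpoints of the per-component estimates above, summed with awk
c3558=$(awk 'BEGIN { print 7.5 + 1.5 + 2.5 + 4.5 }')  # cores + QAT + MACs + rest
d1508=$(awk 'BEGIN { print 10.5 + 2.0 + 10.0 }')      # cores + MAC + rest
echo "C3558 total estimate: ${c3558} W"   # 16 W
echo "D1508 total estimate: ${d1508} W"   # 22.5 W
```

The 16 W total happens to match the C3558's listed TDP, and 22.5 W is in the ballpark of the D-1508's 25 W TDP, so the per-component split is at least plausible.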

That said, I think Supermicro did a terrible job with these Denverton boards. Some random considerations:
  • They simply ignored one of the most important characteristics of Denverton, that is, the reconfigurable HSIO lanes. They made the lane assignments fixed, which means the boards are not very expandable, not convertible between storage and network usage depending on your present or future needs. For example, it is not possible to have one M.2 boot device, up to two x4 NVMe drives, and (up to) 8 SATA disks. Gigabyte and ASRock got this right instead. Why on earth does the X11SBA series have a single PCIe lane in a x8 slot, while A2SDi boards only come with a x4 slot? Ridiculous! Have a look at some other boards: none of them has less than a x8 PCIe slot. Even the Avoton A1SAi series had a x8 slot.

  • On the A2SDi-*-HLN4F and A2SDi-H-* models they placed the only USB 3.0 port on the motherboard, which is a joke because even in a data center it's way more useful to have it on the outside. If you still want to use a USB dongle as a system drive (which is neither a production-quality nor an embedded-world thing) you can still plug it in on the back. The only production-quality options for a small, separate boot drive are M.2, SATA DoM, and eMMC. Besides M.2 disks, which are the most demanding in terms of HSIO lanes, SATA and eMMC (hint: the 'e' stands for 'embedded') are the most sensible options, with similar speeds, reliability, sizes, and power consumption; except that a SATA port consumes 1 precious HSIO lane while eMMC does not, making eMMC way preferable. So why, on a system with a very limited number of HSIO lanes, has Supermicro forgotten to place an eMMC connector??? Instead of buying a SATA DoM, people could simply have bought an eMMC DoM. Again, why did Gigabyte get this right instead with the MA10-ST0? Total nonsense. Also, on the USB port, have a look at this.

  • The product line is confusing and incoherent. For example, the A2SDi-LN4F and A2SDi-12C-HLN4F are basically the same product. If you want SFP+ ports you have to choose between the A2SDi-H-TP4F and A2SDi-TP8F only, and surprisingly the FlexATX boards have an external X557 controller with Base-T ports only! On these same FlexATX boards they use the extended-temp model for the 8-core (C3708), but regular C3x58 SKUs for the 12- and 16-core; to them, a lower-frequency C3708 can drive 40Gb but a C3758 cannot. No 4-core models with a single 10Gbit port, even at higher frequencies. I could keep going with a ton of other arguable design decisions. It's a total mess; there is no clear definition of what each board is designed for, and for sure they could have covered most needs by simply avoiding the 4Gbit PHY and offering HSIO lane redistribution, plus a bunch of other more specific options. BTW, the website reflects the total chaos they have around these chips, with wrong CPU models and wrong descriptions. Prices also seem a bit random; for example the A2SDi-H-TF seems to be 200/250 euro more than the A2SDi-4C-HLN4F: seriously? Too many products, weird options, overlapping content, little flexibility, high idle consumption; this is not really what Denverton should have been. They really got it wrong.

  • They could have avoided a BMC on the very low power SKUs; this contributes to making Xeon D a very similar and preferable option.
I know most people consider them excellent, but they are really not so great if you consider what they could have been. The whole Xeon D line is so much better conceived... I'm not going to buy any Denverton board from Supermicro this round.
 

ullbeking

Active Member
Jul 28, 2017
Xeon-D also has lots of mini-ITX options.
I'm looking at these and considering a Xeon-D board for experimentation.

If you can step up to mATX, that also opens up the chance to use E5 systems and, as you point out, E3, etc.
Indeed. I love the mini-ITX form factor, but if I break out of it, micro-ATX, ATX, etc. are all effectively the same to me, and I'm happy to go all the way up to E-ATX if it gets me lots of expansion options.

Having said that, when I started thinking along those lines, I was under the impression that a large, roomy case is good for cooling and, therefore, noise, because the components are not all crammed into a hot, small space. Then somebody told me the other day that this is not true either, and that a large case requires large and/or many chassis fans to actually move the air in and out of it. So it's not necessarily quieter; however, I have had people swear to me that their gigantic E-ATX towers are barely audible.

Honestly I would not use C2000 if it's a new purchase today (unless you can get a used one with memory cheap).
I will nevertheless look for one cheap on the second-hand market for experimentation purposes, once the C3000 becomes widespread. Either that or this one: X11SBA-LN4F | Motherboards | Products - Super Micro Computer, Inc. The four NICs are really useful.

C3000 seems pretty cool for low power, but it lacks AVX; is that important to your planned workload? I don't think it is.
Please could you elaborate on this? I don't know anything useful about AVX or how it helps with a virtualization server or NAS. How would AVX potentially be useful for me?

The 8-core and 16-core have the same total cache, so the 16-core has half the per-core cache; it remains to be seen what impact this has, waiting for the review.
This is an interesting point I keep forgetting, as these days I take CPU cache for granted. I can't honestly say I know what effect CPU cache has on overall performance.

One advantage of two systems is that if one breaks you can move the disks to the other if you really need to get going again.
Yes. The first thing one of my colleagues said when I described my proposed all-in-one system was that it seemed like a single point of failure. I can imagine this would be even worse if I were to run the NAS inside a VM.