ASUSTOR FLASHSTOR NAS with 6x/12x M.2 SSDs, 2.5/10 GbE


Patrick

Administrator
Staff member
Dec 21, 2010
12,533
5,855
113
In the 12-port version, wouldn't the M.2 drives need at least x2 each to saturate the 10GbE network port?
That would be true for a single drive. For a RAID array of 12 drives, the networking side is likely outmatched 3:1.
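To put rough numbers on the lane math (a sketch with assumed figures, not measured on the box), assuming PCIe 3.0 at ~0.985 GB/s per lane and 10GbE at 1.25 GB/s line rate, both before protocol overhead:

# Raw lane bandwidth per drive vs. a 10GbE port (protocol overhead ignored).
PCIE3_PER_LANE = 0.985   # GB/s usable per PCIe 3.0 lane (assumed)
TEN_GBE = 10 / 8         # 10 Gbit/s -> 1.25 GB/s

for lanes in (1, 2, 4):
    drive = lanes * PCIE3_PER_LANE
    print(f"x{lanes} drive: {drive:.2f} GB/s ({drive / TEN_GBE:.1f}x the port)")

# Raw lanes for 12 drives; real array throughput lands well below this,
# but the single port is still outmatched several times over.
print(f"12 x1 drives: {12 * PCIE3_PER_LANE:.1f} GB/s aggregate")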

Still working on that one, but I'm heading out early tomorrow to record something in the Bay Area.
 

oldpenguin

Member
Apr 27, 2023
33
12
8
EU
Thank you both for clarifying - I don't mean to hijack the thread, but with the above-mentioned TI ASM multiplexer or any other similar parts, are there any PCIe server cards available to buy that can connect a bunch of M.2 drives? The 25W limit on an x8 port could surely accommodate at least 8 multiplexed NVMe drives, while a full x16 could probably handle a minimum of 16 (if not more; I didn't actually check the IC specs, I'm just trying to figure out how nice it would be to have an external enclosure equipped like this).
 

mrpasc

Well-Known Member
Jan 8, 2022
579
320
63
Munich, Germany
There are some cards from HighPoint which can accommodate 8 M.2 drives in an x16 port. But they are very expensive and a niche product.
The card needs to have VRMs, as most of the 25W/75W slot budget is delivered at 12V (66W for the x16) while the NVMe drives need 3.3V.
So even with only 4 enterprise/datacenter NVMe drives you can easily oversubscribe the 3.3V budget of an x16 slot.
For servers that need lots of NVMe storage, U.2/U.3 with its per-device power connection is the way to go.
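A minimal sketch of that 3.3V math, assuming the CEM limit of 3A at 3.3V per slot and ~8W per enterprise M.2 drive (both assumptions; check your drive's spec sheet):

# 3.3V budget of a PCIe slot vs. M.2 drives powered from the 3.3V rail.
SLOT_3V3 = 3.3 * 3.0   # ~9.9 W at 3.3V per slot; the remaining budget is 12V
DRIVE_3V3 = 8.0        # assumed draw of one enterprise/datacenter M.2 drive

for drives in (1, 2, 4, 8):
    need = drives * DRIVE_3V3
    verdict = "fits" if need <= SLOT_3V3 else "needs 12V->3.3V VRMs"
    print(f"{drives} drive(s): {need:.1f} W on 3.3V -> {verdict}")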
 

Smudgeous

New Member
Jan 27, 2016
2
0
1
41
The last of my drives just showed up.
With 12 of the 2TB Intel 670P drives in a RAID 6 array on ext4, I ran e7db/diskmark in a Docker container: "sudo docker run -it --rm -e PROFILE=nvme -v /volume1:/disk e7db/diskmark". The results were interesting, to say the least:

Sequential 1M Q8T1:
<= Read: 3127 MB/s, 3127 IO/s
=> Write: 836 MB/s, 836 IO/s

Sequential 128K Q32T1:
<= Read: 3144 MB/s, 25159 IO/s
=> Write: 760 MB/s, 6080 IO/s

Random 4K Q32T16:
<= Read: 1568 MB/s, 401420 IO/s
=> Write: 106 MB/s, 27303 IO/s

Random 4K Q1T1:
<= Read: 41 MB/s, 10683 IO/s
=> Write: 0 MB/s, 21 IO/s

Looks like this is only about 12.5% faster than a single 2TB model that STH previously saw in CrystalDiskMark, and those random 4K writes look horrendous.
 

Smudgeous

New Member
Jan 27, 2016
2
0
1
41
I've also had no issues so far running the TeamGroup T-Force Zeus DDR4 SODIMM 64GB (2x32GB) 3200MHz (PC4-25600) 260-pin CL22 kit for a couple of days.
That said, my 10GbE NIC hasn't shown up yet, so perhaps the issue Patrick ran into only appears when the CPU is more stressed. I'll revisit that once the hardware arrives.
 

abufrejoval

Member
Sep 1, 2022
39
11
8
I've just discovered and ordered a Sabrent PC-P3X4 quad M.2 to PCIe x4 board, which is very likely using this ASM1480 PCIe switch chip as well, which will do 16 x 8 lanes (obviously also 16 x 4), and it seems to be economical enough to sell at €170 including VAT, much less than what HighPoint-Tech is charging for their higher-end models. Interestingly there is also an ASM2480 chip that would support PCIe 4.0, but I guess that doesn't have a product yet.

This would be by far the cheapest PCIe switch chip I've seen in a long time, and generally good news. But of course having a PCIe 5.0 capable switch would be even better, especially if it could aggregate four x4 PCIe 3.0 NVMe drives into x4 of PCIe 5.0 bandwidth.

Now a box like this Jasper Lake NAS just makes me shudder, to be honest: it marks the point where NVMe is no longer about higher performance than SATA, but quite simply about being cheaper, too, because the SSD controllers are natively PCIe and adding SATA just adds overhead.

And you even save a plastic enclosure...

Honestly, a 10Gbit link or dual 2.5Gbit links for 6 or 12 NVMe drives is like a bunch of Ferraris stuck in first gear: each individual drive completely overwhelms the 10Gbit port; even two SATA drives would max it out.

But I guess there are consumers who simply don't care and just want a faster NAS that's also small and near silent...

My N6005 Jasper Lake took 64GB of Kingston DDR4-3200 quite well, just clocking it a little lower: broad support for many timings is something that ever fewer vendors offer, to discourage re-use, I guess.

Intel has been lying about RAM support on Atoms for years: 16GB worked with all DDR3 models and 32GB with everything DDR4. It's really a shame that the next-gen Atoms have reverted to single-channel RAM, especially with an 8-core i3-N305 at Skylake IPC.
 

abufrejoval

Member
Sep 1, 2022
39
11
8
The last of my drives just showed up. With 12 of the 2TB Intel 670P drives in a RAID 6 array on ext4 [...] those random 4K writes look horrendous.
Honestly, with the write amplification of RAID6 on QLC drives, that array doesn't sound like it will last long, either. Now if you had really large non-volatile write caches to assemble large stripes and eliminate the write amplification (pretty much what Pure does, in my understanding), this wouldn't be all that bad. But without any of that, the RAID6 code has little choice but to read-modify-write at whatever the lowest granularity is, hopefully not 512 bytes or even 4K, but far from the size of the 12 full erase blocks that would be ideal for maximum lifetime.
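A minimal illustration of that read-modify-write cost (device I/Os only, before any flash-internal amplification), assuming a 12-drive RAID6 with 10 data + 2 parity members:

# RAID6 write cost: a sub-stripe update reads old data + P + Q and writes
# new data + P + Q; a full-stripe write only pays the parity overhead.
DATA, PARITY = 10, 2

sub_stripe_ios = 3 + 3                   # 3 reads + 3 writes per updated chunk
sub_stripe_amp = 3                       # 3 device writes per user write
full_stripe_amp = (DATA + PARITY) / DATA # 12 writes carrying 10 data chunks

print(f"sub-stripe: {sub_stripe_ios} device I/Os, {sub_stripe_amp}x write amp")
print(f"full stripe: {full_stripe_amp:.1f}x write amp")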
 

michaelzxp

New Member
May 13, 2023
2
0
1
The only regret is the processor and network card. It would be perfect with an N305 processor and a 25Gb network card.
 

abufrejoval

Member
Sep 1, 2022
39
11
8
The Sabrent PC-P3X4 finally arrived and I've had a chance to look at it.

It's using one ASM2812 chip with 2 PCIe 3.0 lanes allocated to each M.2 slot. That's a bit of a compromise between a "perfect" switch, which would allow full device bandwidth at 4:1 host-link oversubscription, and a "cheapo" solution that doesn't oversubscribe at all.
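For the oversubscription numbers, a quick sketch (again assuming ~0.985 GB/s per PCIe 3.0 lane, a x4 uplink, and four M.2 slots behind the switch):

# Downstream vs. upstream bandwidth for the three possible 4-slot layouts.
LANE = 0.985        # GB/s per PCIe 3.0 lane (assumed)
UPLINK = 4 * LANE   # x4 host link

for name, per_slot in (("x1 per slot", 1), ("x2 (PC-P3X4)", 2), ("x4 per slot", 4)):
    down = 4 * per_slot * LANE
    print(f"{name}: {down:.1f} GB/s downstream -> {down / UPLINK:.0f}:1 oversubscription")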

A year ago that would have been extremely attractive when PCIe v4 and >2TB NVMe were the pricing cliff.

Today the premium for v4 is almost gone, and a v3 device with x2 device links is much less attractive: if capacity is your driver, anything 8TB or bigger will be at least PCIe v4 and the bandwidth impact hard to stomach, while quad 2TB v3 drives aren't really compelling any more once you add €170 for the card. That leaves mostly 4TB drives attractive for the 16TB capacity point, but you'd really want 4 lanes of PCIe v5 or v4 out of that setup for bandwidth...

But still, it's the only device of its kind around, and at €170 including VAT it's much more economical than any of the HighPoint variants, which also require x8 slots that are rarely free in desktop mainboards, or require bifurcation.
 

abufrejoval

Member
Sep 1, 2022
39
11
8
The only regret is the processor and network card. It would be perfect with an N305 processor and a 25Gb network card.
The N305 only has one extra PCIe lane (9 vs. 8 on Jasper Lake): there just isn't enough bandwidth to make a 25 Gbit/s NIC work.
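The back-of-the-envelope (a sketch, assuming 9 PCIe 3.0 lanes at ~0.985 GB/s each and a NIC fed its full 25 Gbit/s ≈ 3.1 GB/s):

import math

# Lane budget: how much of Alder Lake-N's I/O a 25GbE NIC would eat.
LANE = 0.985          # GB/s per PCIe 3.0 lane (assumed)
TOTAL_LANES = 9
NIC_NEED = 25 / 8     # 25 Gbit/s -> ~3.1 GB/s

nic_lanes = math.ceil(NIC_NEED / LANE)   # -> a x4 link
print(f"25GbE NIC: x{nic_lanes}, leaving {TOTAL_LANES - nic_lanes} lanes "
      f"for 12 M.2 slots (i.e. a switch and heavy oversubscription)")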

I agree the N305 seems a nice chip, very much like a great replacement for, say, a Xeon D-1541, and I'd like to get my fingers on one, but it's rather limited by its I/O capabilities, which you can't really expand without pushing beyond its power envelope. PCIe lanes have a fixed minimum energy footprint which, unlike anything on-chip, cannot be compressed while staying standards-compliant.

The N305 may be suited to running more Docker containers, and one might be tempted to do a bit of firewalling and content checking with it, but mixing a firewall and a storage box may not be such a great idea unless you trust your hypervisor and add extra physical network ports, e.g. via USB dongles... That quickly turns it into a monster that is hard to manage, unless you run it with an orchestrator like oVirt, XCP-ng, Proxmox or vSphere.

What I find most interesting is that I cannot imagine Intel producing distinct chips for the entire Alder Lake-N range: the uniform cache size and PCIe lane counts, as well as process maturity and the small size of these cores relative to all the I/O parts, make it seem very unlikely.

Yet there is no way I can imagine the 2- or 4-core chips being the result of defect-based binning, so nearly all of these lesser SKUs are really just culled dies to fill market niches, something that I believe Intel has been prohibited from doing.

Pricing on the full range should be interesting and rather disconnected from production cost. When Intel can profitably sell you a 2-core part in a sub-€100 board, they could also sell you the 8-core one at the same price.

Fat chance of that happening...

And I keep hearing that vendors don't really like selling boards for any of these at €100 or less, because even if Intel gave the chips away for free, they would still make a loss. Again, some really interesting things are going on there that should concern anti-trust watchdogs all across the planet.
 

adamj

New Member
May 29, 2023
1
0
1
Has anyone been able to successfully install and run Proxmox or TrueNAS SCALE on one of these? Even with the shortcomings mentioned above, the Flashstor 6 is a good fit for a project of mine that needs all-flash storage, but that depends on whether it can run Proxmox (preferably) or TrueNAS SCALE.
 

michaelzxp

New Member
May 13, 2023
2
0
1
The N305 only has one extra PCIe lane (9 vs. 8 on Jasper Lake): there just isn't enough bandwidth to make a 25 Gbit/s NIC work. [...]
The number of PCIe lanes isn't a hard limit of the chipset: just as ASUSTOR does here, it can be solved by adding an expansion (switch) chip.
 

haiki

New Member
Jun 3, 2023
1
0
1
I couldn't find anything on the web about life expectancy for this type of NAS offering, and this was the only place with an active discussion of the product. So, hypothetically, if I were to get the 12-drive version, would I run into a scenario where all the NVMe SSDs need to be replaced simultaneously due to the TRIM limitations of an all-SSD RAID 5? My config would be 12 of the Crucial P3 4TB or TeamGroup M34 4TB drives.