System-Suggestions for all flash (SATA) TrueNAS array


TrumanHW

Active Member
Sep 16, 2018
For those who'd like 'BREVITY' ....

- What sustained performance (100% read or 100% write) can the drives deliver if the rest of the hardware doesn't bottleneck them?
- Is there a CPU + motherboard combo you'd recommend that's quiet, relatively inexpensive, and has IPMI (or an IPMI equivalent)?
- Any cases that don't cost more than an entire used Dell system and would go well with it?
- Should I avoid the extra heat of a SAS controller? Or should I still use SAS to ensure performance, despite the extra heat + wattage?

- Is this likely to exceed 10Gb (SFP+) performance? I'm assuming it can't exceed SFP28, however.

Important data will be backed up to my (spinning) RAIDz2 array.
(Any suggestions on how I can selectively back up only what's changed would be appreciated.)
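On backing up only what's changed: ZFS incremental replication sends just the blocks modified since the previous snapshot, which is the usual answer here. A minimal sketch (pool, dataset, and snapshot names below are made-up placeholders, not anything from this thread):

```shell
# One-time: full send of the flash dataset to the spinning RAIDz2 pool
zfs snapshot flash/media@base
zfs send flash/media@base | zfs recv backup/media

# Thereafter: snapshot again and send only the delta since the last snapshot
zfs snapshot flash/media@daily1
zfs send -i flash/media@base flash/media@daily1 | zfs recv backup/media
```

TrueNAS wraps this same mechanism in its Periodic Snapshot + Replication Tasks UI, so you'd normally configure it there rather than by hand.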


I did see some SuperMicro machines (used on eBay) that could be good ... and I have an R730xd I could use.



FOR THOSE WHO'D LIKE MORE INFO:


Priorities: Quiet // Minimize TDP (lower TDP means fewer watts to cool, which helps keep it quiet)
Use case: Personal // Resilience is a luxury, not a necessity ... making single parity (RAIDz1) seem adequate.


Media:
7x Samsung 4TB SATA SSD
- 5x Evo 870
- 2x Evo 860

Usually I use Dell PowerEdge: they're cheap (used) & have most features I use, like iDRAC, hot-swappable HDDs & PSUs, and Xeon // ECC RAM.
That said, perhaps coupled with the E5-2430L v2 (60-watt TDP), and possibly replacing the system fan with a quieter one (if possible).
I have 2x Optane P5800X ... but don't know if they'd improve performance; I'd planned on using them in an all-flash NVMe array.
 

i386

Well-Known Member
Mar 18, 2016
Germany
My workstation with 4x 10TB HDDs in RAID 5 can saturate 10GbE with large IO.
With SSDs it should easily be able to do >10GbE.
Personally, I would be scared to use Samsung consumer devices in RAID/ZFS: recent Samsung SSDs have many issues, and I think in a RAID they will die really fast.
 

TrumanHW

Active Member
Sep 16, 2018
Just to make sure we're on the same page ....

All the data is already backed up on a 2x-parity (RAIDz2) array ... and even then, it isn't a big deal to lose the content I'm keeping on here ...

Media content (videos), etc. ... and I do data recovery (I have equipment to put SSDs in IT mode).

Last, I wouldn't use RAID-0 ... but RAIDz1 ... which would allow a single drive to fail without loss ... and I have other 4TB SSDs which I could then use to rebuild any data lost.

I of course agree with you if it weren't data that's already sync'd up, but this is data that, if lost, would be a small inconvenience to re-create.

In fact, so long as I keep a list of the content I keep on it...? I wouldn't even care if it were lost. (I just hate not knowing what the content was that I lost.) ... :)

Thank you again for making sure I knew the consequences, though.


As far as 10GbE or SFP28, etc. ... what bandwidth / throughput would you guess it can realistically sustain?

Regards, TrumaN
 

mrpasc

Well-Known Member
Jan 8, 2022
Munich, Germany
As far as 10GbE or SFP28, etc. ... what bandwidth / throughput would you guess it can realistically sustain?
Have a quite similar setup (6x 4TB 870 Evo in RAIDz1):
Writing to them (via CIFS/SMB) doesn't saturate even a 10Gbit network, as you are limited to the IOPS/bandwidth of one SSD. Reading could fill a 25Gbit network in theory (6x 500MB/s = 24Gbit/s), but with all that Samba overhead I wouldn't expect this in real life.
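The back-of-envelope read figure above can be sanity-checked in a couple of lines (assuming ~500MB/s sequential read per SATA SSD, which is roughly the interface ceiling):

```shell
# 6 SSDs reading at ~500 MB/s each, converted to gigabits per second
mb_per_s=$((6 * 500))                 # aggregate throughput in MB/s
gbit_per_s=$((mb_per_s * 8 / 1000))   # 8 bits per byte, 1000 MB/s per Gbit/s scale
echo "${mb_per_s} MB/s ~= ${gbit_per_s} Gbit/s"
```

So a theoretical ~24Gbit/s read ceiling, before SMB and ZFS overhead, which is why 25GbE (SFP28) is the natural fit for the optimistic case and 10GbE the realistic floor.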
 

TrumanHW

Active Member
Sep 16, 2018
awesome!! Thank you.

Alternatively, I'm thinking about getting an Epyc server, maybe selling the 4TB SATA drives, and configuring an all-NVMe array ...

I'm assuming it's more in the 'consistency' that makes it nice, as well as not having to listen to a buncha spinning rust..? :)


I did just find a deal on a ~$2,000 Epyc server configured to support up to 24 NVMe SSDs. I could sell the SATA drives I have and pick up more NVMe drives (for an 8x array in RAIDz1), and add some special vDevs, as I have the 2x P5800X (not quite sure of the best way to use them: small-blocks special vDev, L2ARC, etc.) as well as 2x 905P (900GB) ... which would still only use half the NVMe slots it supports.

The advantage of not using them for small blocks or the like is that if I don't see much value in them as an L2ARC, I could just remove them; in contrast, I don't know of a way to write the 'small blocks' data on a special vDev back to the main NVMe volume.
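For what it's worth, the removability concern cuts exactly the way described: cache (L2ARC) devices can be detached at any time, but a special vdev in a pool whose data vdevs are RAIDz cannot be removed afterwards (ZFS top-level device removal only works on pools built from mirrors/stripes). A hypothetical layout sketch, with placeholder device and pool names:

```shell
# Data vdev is RAIDz1; the two Optanes form a mirrored special vdev.
# NOTE: with a RAIDz data vdev, this special vdev is permanent.
zpool create tank \
  raidz1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1 \
  special mirror /dev/nvme5n1 /dev/nvme6n1

# Route blocks at or below 64K to the special vdev (tunable per dataset)
zfs set special_small_blocks=64K tank

# An L2ARC, by contrast, can be added and removed freely:
zpool add tank cache /dev/nvme7n1
zpool remove tank /dev/nvme7n1
```

So if keeping the option to repurpose the P5800Xs matters, L2ARC is the reversible choice; a special vdev is the committed one.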

Obviously, like you, I thought I'd only get 1x a single drive's speed when writing to a 'parity' volume (RAIDz1, 2, etc).
But my spinning rust writes faster than it reads (at ~600MB/s sustained).

Now, a mirror can never write faster than a single drive's write speed ... but RAIDz? That seems (logically) inaccurate, and it's corroborated by tests ... as I know I don't have spinning drives that can exceed 230MB/s.

Have you tested yours and only gotten 1x the write speed for the pool that you'd get writing to a single drive it's made from?

Either way, I'll report back once I get the last 2 SATA SSDs in (maybe today).

And thanks again for your info, it's greatly appreciated!
 

mrpasc

Well-Known Member
Jan 8, 2022
Munich, Germany
Obviously, like you, I thought I'd only get 1x a single drive's speed when writing to a 'parity' volume (RAIDz1, 2, etc).
But my spinning rust writes faster than it reads (at ~600MB/s sustained).
Did you measure sync writes or async writes?
An async write test measures the bandwidth of your RAM (write cache), not the real write speed of your spinning-rust pool.

Have you tested yours and only gotten 1x the write speed for the pool that you'd get writing to a single drive it's made from?
Did only some small tests; this pool is only used for non-critical temporary media files. So I did some tests with sync=always and got exactly what I expected: around 450 to 500MB/s for writes, 2500MB/s for reads (with a cold ARC).
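For anyone wanting to repeat a test like this so the numbers reflect the disks rather than the RAM write cache, a rough sketch (dataset name and mount path are placeholders):

```shell
# Force every write on the test dataset to be synchronous
zfs set sync=always tank/bench

# Write 10 GiB; conv=fsync includes the final flush in the timing
dd if=/dev/zero of=/mnt/tank/bench/testfile bs=1M count=10240 conv=fsync

# Restore the default behavior when done
zfs set sync=standard tank/bench
```

For the read side, the ARC needs to be cold first (e.g. export/import the pool or reboot), otherwise you're measuring RAM again.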