Switching from Synology to TrueNAS - Need build help

VirtualBacon

Member
Aug 21, 2017
54
7
8
28
I currently have a Synology DS1817+ and I love it, apart from the encrypted performance, which flat out sucks.

I looked into the DS1821+, and it turns out the encrypted performance isn't much better. So I think I will switch to TrueNAS. I already have some hardware, so I'd like some advice, if possible, on how to use it.

I have 2 x 800GB Intel DC S3700 SSDs, a bunch of 12TB SATA drives, all sorts of IT-flashed HBAs, and a Supermicro CSE-826BE1C-R920LPB, which has 12 x 3.5" bays and 2 x 2.5" bays.

I have a few questions:

  1. What should I use those 2 x 800GB SSDs for? Caching?
  2. RAM - How much should I get? This seems to be a hot topic for TrueNAS. Is 32GB enough?
  3. Best-supported 10GbE NICs? My go-to would be a CX3, but I'm open to other suggestions.
  4. Is there currently a best bang-for-the-buck CPU? It needs to be supported by a Supermicro board with IPMI, so I'm guessing some kind of Xeon E series.
  5. Is it worth getting an NVMe cache?

Thanks!
 

cheezehead

Active Member
Sep 23, 2012
723
176
43
Midwest, US
It all depends on your actual usage; if the hardware is lying around, then you may want to play a bit.

1) Depending on workflow, they could be used for a ZIL (SLOG) or L2ARC.
2) 32GB might be OK, or it could be very undersized. Given the chassis maxes out at 12 x 12TB drives, you could be pushing 256GB of RAM in the right workload. Remember, the bulk of RAM goes to ARC (read caching).
3) CX3s are good for now... if you have them lying around, use what you've got. If you're buying, and this is for a business, you may want to look at CX4s for longer lifecycle support (or plan on replacing NICs in a few years when they get cheap). I have some very early Emulex ones which don't work anymore (I ended up nesting TrueNAS within ESXi to get them working... not ideal).
4) Cheap would be an X9-generation motherboard paired with Sandy or Ivy Bridge; that also lets you leverage DDR3, which is just cheap. X10 or X11 boards would gain you some more headroom, but keep in mind the secondary market on processors is what it is due to the pandemic.
5) All depends on your actual workflow.
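For reference, attaching either device type to an existing pool is a one-liner. This is a sketch only; the pool name (`tank`) and device paths are placeholders, not anything from the build above:

```shell
# Mirrored SLOG (sync-write log) from two SSDs -- placeholder device IDs
zpool add tank log mirror \
  /dev/disk/by-id/ata-INTEL_S3700_A /dev/disk/by-id/ata-INTEL_S3700_B

# Or a single NVMe L2ARC (read cache). L2ARC needs no redundancy:
# if the cache device dies, reads just fall back to the pool.
zpool add tank cache /dev/nvme0n1

# Confirm where the devices landed
zpool status tank
```

Note the asymmetry: a SLOG holds in-flight sync writes, so it's worth mirroring; an L2ARC only holds copies of pool data, so a single device is fine.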

ZFS filesystem encryption has very little impact on performance in general, as long as the CPU supports AES-NI (Celerons need not apply).
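A quick way to check that a candidate CPU actually exposes AES-NI on Linux (assumes `/proc/cpuinfo` is available, as on TrueNAS SCALE):

```shell
# Prints "aes" if the CPU advertises the AES-NI flag;
# otherwise warns that encryption will fall back to software.
grep -m1 -o -w aes /proc/cpuinfo || echo "no AES-NI -- expect slow ZFS encryption"
```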

ZFS caching primer: ZFS Caching

IXSystems ZFS cache performance talk https://www.snia.org/sites/default/...ices_for_OpenZFS_L2ARC_in_the_Era_of_NVMe.pdf

The black box of ZFS performance sizing really comes down to not knowing how your workload actually behaves.
 

VirtualBacon

Member
Aug 21, 2017
54
7
8
28
Thanks for that information.

I've done a lot of research, and this is where I'm at. Do you notice anything out of place? So far I think this should end up pretty good. It seems like Optane would be best for the ZIL/SLOG, and for L2ARC I should probably go NVMe.

I realized all my S3700 SSDs are in use, but I did find a 60GB Intel 330, and I ordered another for $15 so I can do mirrored boot drives. I also found 3 x 800GB DC S3610s.

My overall goal here is for this build to be as reliable as possible.

1626720798027.png

I may be over-correcting, but damn I hate working on that slow Synology
 

VirtualBacon

Member
Aug 21, 2017
54
7
8
28
I'm not too happy about the board and CPU, though; they're both aging a bit. Ideally I'd like them to be newer, but damn, newer SM boards are pricey.
 

nasbdh9

Active Member
Aug 4, 2019
104
39
28
The relatively high frequency of an E3 provides better single-threaded performance, which is important for most SMB workloads.
If you only use SMB, you don't need to buy a SLOG; I recommend buying a reliable UPS instead, and then using the NVMe slot for L2ARC.
The boot drive doesn't need those outdated SSDs; buying two SanDisk Ultra Fit USB 3.1 drives is more reliable, and they're easy to replace.
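The reasoning here is that SMB writes are asynchronous by default, so they batch in RAM and never touch the ZIL/SLOG; a SLOG only matters where sync writes are forced. You can inspect and control that per dataset (pool and dataset names below are placeholders):

```shell
# SMB shares typically run with sync=standard, so their writes bypass the SLOG
zfs get sync tank/smb-share

# Only sync-heavy consumers (NFS-backed VMs, databases) make a SLOG pay off
zfs set sync=always tank/vm-storage
```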
 

VirtualBacon

Member
Aug 21, 2017
54
7
8
28
Interesting. I'll read more into SLOG and probably skip it. The UPS is top quality, an SRT3000RMXLA.

Are you sure about those flash drives? I've seen more flash drives fail than I can remember, but never an SSD like that.
 

sovap

New Member
Sep 10, 2018
1
0
1
A bit late on this, but one option for those 2 x 800GB SSDs would be to use them as a special allocation vdev for metadata and small-block files. Chances are you'd get more benefit out of it than an L2ARC, which is very workload-dependent. That would keep the small files on the SSDs, which handle random read/write better. And you can set the special small-block size per dataset, so you could even force a whole dataset to live primarily on the SSDs by setting it equal to the recordsize.
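A sketch of what that looks like, with placeholder pool/dataset/device names. One caveat worth stating in the commands themselves: unlike an L2ARC, a special vdev holds pool-critical data, so it must be mirrored:

```shell
# Mirror the two 800GB SSDs as a special vdev -- losing it loses the pool
zpool add tank special mirror \
  /dev/disk/by-id/ata-INTEL_S3700_A /dev/disk/by-id/ata-INTEL_S3700_B

# Route blocks of 64K or smaller on this dataset to the SSDs
zfs set special_small_blocks=64K tank/projects

# Setting it equal to recordsize pushes the whole dataset onto the SSDs
zfs set recordsize=128K tank/fastset
zfs set special_small_blocks=128K tank/fastset
```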