Thanks for confirming the integrated controller setup. The integrated controller is a normal SATA III (6 Gb/s) controller with soft RAID that can be disabled in the BIOS.
The EP400 is a SAS 12 Gb/s controller.
Which one to use depends on the type of disk you want to use; if you stay with mechanical drives, the integrated one is fine.
If I were you, I would try to get some used 2 TB enterprise 2.5" disks; you can find them on eBay at a reasonable price.
With four of them you will have a 6 TB RAID-Z system, which should be what you want.
From my personal experience, I would add a small SSD as a write cache; it will increase performance without spending too much.
Again, you can look for enterprise SSDs (you can find 120 GB ones for about 30-35 euros), which are fine for the job, or you can put an NVMe disk in a PCI slot.
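As a sketch, setting up that pool from the command line might look something like this (the pool name and device paths are placeholders, and note that a separate log device only speeds up synchronous writes):

```shell
# Create a RAID-Z pool from the four 2 TB disks (example device names)
zpool create tank raidz /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Add a small SSD as a separate log device (SLOG) for sync-write performance
zpool add tank log /dev/sde

# Check the resulting layout
zpool status tank
```

With raidz1 over four 2 TB disks, one disk's worth of capacity goes to parity, leaving roughly 6 TB usable.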
It is easy if you have the parallel hardware; it's a bit of a nuisance if you don't. I plan to have cloud backups, but I would not willingly create a situation where the only remaining copy of my data is in one place, in the cloud only, even temporarily. It will also take some time to transfer terabytes of data from the cloud back to the home server.

I think a lot of people go with Unraid because of the ease of adding new drives one at a time. Like many, I've decided that for the added benefits it will be worth going with ZFS even with the cons. My plan is to run this storage server for the next 5-7 years, if it survives without failures for that long. I would be very happy if I don't need to reconfigure pools or go through any hassle during the server's lifecycle. So that is why I'd rather (and this is just a personal preference) get a bit of extra space than take the risk of needing to "repool".

About "sizing up later is going to be difficult with ZFS": in practice it's very easy, you just need to back up the data, destroy the pool, recreate a bigger pool, and restore the data.
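The backup/destroy/recreate cycle described above might look roughly like this on the command line (pool names and the existence of a second pool called `backup` are assumptions for the example):

```shell
# Snapshot the whole pool recursively and stream it to a backup pool
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -F backup/tank

# Destroy the old pool and recreate it over the bigger disks (example devices)
zpool destroy tank
zpool create tank raidz /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Restore the data from the backup pool
zfs send -R backup/tank@migrate | zfs receive -F tank
```

The same `zfs send` stream can also be piped over SSH to a second machine instead of a local backup pool, which is what makes the "parallel hardware" part easy.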
I don't see how you can get "and also probably have a positive impact on the SSD endurance", considering that every write affects all disks, so the "consumption" of SSD cells is the same on every disk. And that is the bigger disadvantage of using SSDs: they are much more likely to all fail at the same time compared to HDDs. This can be mitigated by having a backup, or by starting to replace the disks with new ones on a regular basis, for example changing one disk in the array every year...
The SSD cells will fail after a certain number of writes, so as you have said, in a scenario where you use the NAS more for storing static data (photos/videos/music/PDFs etc.) instead of dynamic data (virtual machine images, transactional databases etc.), you will be fine.

Mixed up months and weeks above, but I think it is still understandable. In any case, my data in the NAS is going to be fairly static (personal photos, small business paperwork archive, etc.), so looking at the endurance figures for even the lower-end consumer devices, it seems highly unlikely that I will ever reach the write endurance limits. I'm guessing the SSD drives can still fail, but in my case probably for reasons other than reaching their write endurance. Now this is getting a bit off-topic, I would agree...
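The endurance reasoning above can be sanity-checked with a back-of-the-envelope calculation (the TBW rating and daily write volume below are made-up example figures, not from any particular datasheet):

```shell
# Rough estimate of how long it takes to exhaust an SSD's rated endurance.
# 300 TBW and 20 GB/day are assumed example numbers, not real drive specs.
tbw=300                              # rated terabytes written
gb_per_day=20                        # assumed daily write volume
days=$(( tbw * 1000 / gb_per_day ))  # days until the rating is reached
echo "$(( days / 365 )) years"       # → 41 years
```

Even with pessimistic inputs, the wear-out horizon for a mostly static archive tends to land well beyond the planned 5-7 year server lifetime.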
Back to the subject: getting 4x 16 GB UDIMM memory sticks seemed to be even more challenging than I initially believed for these Fujitsus. I wanted to get the Crucial MTA18ASF2G72AZ-3G2R1R memory modules, as these are listed on Crucial's site as compatible specifically with the TX1320 M2. The order I placed for these came back after a few days with "no stock with any of our providers and no idea if there will be any more". Now I've ordered 4x Kingston KSM32ED8/16MR modules, and at least this time I got a "we should probably get these into our stock in 10-16 days". These are not specifically listed as compatible with the TX1320, but I'm hoping they will be fine.