So I have taken the plunge and am building a Linux lab server around an E-2100 Xeon. I have six 8TB SATA drives that I plan to put into a ZFS RAIDZ2 pool. I am considering adding some L2ARC and ZIL/SLOG cache to increase IOPS, and I have a 240GB M.2 SSD (Corsair MP510) for that. To complete the picture, I'll have 32GB RAM initially and plan to upgrade when larger UDIMMs are available: either increase to 96GB or replace with the maximum possible, 128GB. Network connectivity is 1Gb Ethernet, and the system will be used mostly by myself for storage, containers and virtual machines, plus 2 or 3 others who will mostly store files over NFS and/or SMB.
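For concreteness, here's a minimal sketch of how I picture creating the main pool. The device names are placeholders, not real IDs, and ashift=12 assumes 4K-sector drives:

```shell
# Hypothetical sketch: six 8TB drives in a RAIDZ2 pool named "tank".
# The ata-DRIVEn names are placeholders -- substitute the actual
# /dev/disk/by-id entries, which stay stable across reboots.
zpool create -o ashift=12 tank raidz2 \
    /dev/disk/by-id/ata-DRIVE1 \
    /dev/disk/by-id/ata-DRIVE2 \
    /dev/disk/by-id/ata-DRIVE3 \
    /dev/disk/by-id/ata-DRIVE4 \
    /dev/disk/by-id/ata-DRIVE5 \
    /dev/disk/by-id/ata-DRIVE6

# Confirm layout and health:
zpool status tank
```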
I also plan to port over another two 4TB SATA drives that I already have (so I may as well use them), either as a mirror pool or as two separate single-drive pools (I know there's no "RAIDZ0" as such; a single-drive vdev is just a stripe with no redundancy). I'm not sure which to do, as I think the drives may have different RPMs. This space will be used for ad-hoc unimportant stuff (so lack of redundancy is acceptable) and secondary backups of some datasets on the main pool. I could also put in a second MP510 SSD for more cache if that makes sense.
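Either layout is a one-liner; again the device names are placeholders:

```shell
# Option A: one mirrored pool -- redundancy, but only one drive's capacity (4TB).
zpool create scratch mirror \
    /dev/disk/by-id/ata-OLDDRIVE1 /dev/disk/by-id/ata-OLDDRIVE2

# Option B: two independent single-drive pools -- full 8TB, no redundancy.
zpool create scratch1 /dev/disk/by-id/ata-OLDDRIVE1
zpool create scratch2 /dev/disk/by-id/ata-OLDDRIVE2
```

On the RPM question, my understanding is that a mixed-RPM mirror simply runs writes at the slower drive's pace, so it works but wastes some of the faster drive; with two separate pools the mismatch doesn't matter at all.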
I'm not sure how to arrange the cache SSDs. They will need to be partitioned (the first one is also the boot drive) and potentially used for both ZIL/SLOG and L2ARC, which I may also want for the other pools. Not sure yet.
How big do the cache partitions need to be? I've read that the SLOG need be no more than 1GB (based on the network bandwidth), and the L2ARC no more than about 4×RAM, which is 128GB. I understand I need separate cache devices for each pool that I want to cache. I've also read that log/cache devices can be added to and removed from pools, so these things could be changed around later if necessary.
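The SLOG sizing rule of thumb works out like this (the 5-second figure is the default ZFS transaction-group flush interval, `zfs_txg_timeout`; worth verifying for your version):

```shell
# Sync writes can arrive no faster than the network link, and the SLOG only
# has to buffer them until the next transaction-group flush (~5s by default).
LINK_MBIT=1000                              # 1 GbE link speed in megabits/s
FLUSH_SECS=5                                # default zfs_txg_timeout
SLOG_MB=$(( LINK_MBIT / 8 * FLUSH_SECS ))   # megabits -> megabytes, times seconds
echo "worst-case SLOG usage: ~${SLOG_MB} MB"
```

That gives roughly 625MB, so a 1GB partition has comfortable headroom. And yes, log and cache devices can be attached and detached at runtime: `zpool add tank log <dev>`, `zpool add tank cache <dev>`, and `zpool remove tank <dev>`, so nothing here is a permanent commitment.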
I could mirror the log partitions and use unmirrored partitions for the cache. That would allow me, with two SSDs, to provide L2ARC for each of two pools and a mirrored ZIL/SLOG for all pools. That seems like a reasonable compromise given the budget and intended use.
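One way that could be laid out, assuming two identically partitioned NVMe SSDs (the partition numbers and sizes below are my guesses, not a recommendation):

```shell
# Hypothetical layout on each 240GB MP510 (nvme0n1 / nvme1n1):
#   p1  ~50GB   boot/OS (first SSD only; spare space on the second)
#   p2    1GB   SLOG partition, mirrored across both SSDs
#   p3  ~128GB  L2ARC partition, one per pool, unmirrored

# Mirrored SLOG for the main pool:
zpool add tank log mirror /dev/nvme0n1p2 /dev/nvme1n1p2

# Unmirrored L2ARC, one partition per pool:
zpool add tank cache /dev/nvme0n1p3
zpool add scratch cache /dev/nvme1n1p3
```

The asymmetry makes sense to me: a failed L2ARC device is harmless (reads just fall back to the pool), so mirroring the cache buys nothing, whereas losing an unmirrored SLOG at the wrong moment can drop the last few seconds of sync writes, which is why the log is the part worth mirroring.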
I realise using partitions isn't what one would do on a production commercial system, but this is for a lab environment with a limited budget. Given that, is the above reasonable?
Thoughts/comments/suggestions appreciated.