Hello everyone!!
I ended up purchasing (10) 2TB Hitachi drives from eBay when they were $30 each. Of the 10, 2 had a bunch of bad blocks, leaving me 8 that I can use. I want to build an all-in-one Napp-it server on ESXi 6. This is going to be for my home environment, with 30 VMs max, about 10-15 of which will be on all the time. I'm going to have a 2012 R2 DC, a 2012 R2 file server, several desktop operating systems (Windows, Mac, and Linux), Steam dedicated game servers, a UML VM, a GNS3 VM, a torrent client, Plex, and maybe a lite database for Kodi or an Arduino project.
I already have a Xeon E3-1240 v2 with 32GB of ECC RAM, a Supermicro X9SCM-F, and a Dell H310 flashed to IT mode. I would like to maximize all 3 (capacity, redundancy, performance) with ZFS, but I know that isn't possible. With (8) 2TB drives I'm thinking of making 2 vdevs. I will buy a few more drives if I have to, but I would like to stick with 8 if I can, because otherwise I would have to buy another HBA. I do have (7-9) 3TB Hitachi 7K4000 drives I can use until I replace them with 2TB drives instead. I bought the 3TB drives a few years back for ZFS but never did anything with them; I will eventually use them, but at a later date.
Option 1
vdev 1 Striping of (3) 2TB drives
vdev 2 RAIDZ-2 of (5) 2TB drives
Mirror vdev 1 and 2
That gives me a total of 6TB usable? I read somewhere that having multiple vdevs will increase the IOPS of the pool. Is this correct? Would I get better performance than an 8-drive RAIDZ vdev? Is option 2 better?
Option 2
vdev 1 RAIDZ of (4) 2TB drives
vdev 2 RAIDZ of (4) 2TB drives
Mirror vdev 1 and 2
That gives me a total of 8TB usable?
Option 3
vdev 1 Striping of (4) 2TB drives
vdev 2 RAIDZ of (4) 2TB and (1) 3TB drive
Mirror vdev 1 and 2
That gives me a total of 8TB usable?
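To sanity-check the usable-space numbers for the three options above, here's a quick back-of-the-envelope sketch. It takes the "mirror vdev 1 and 2" idea at face value (usable space = the smaller of the two vdevs); note that ZFS itself doesn't actually let you mirror two existing vdevs of different types, since a pool stripes across its vdevs and mirroring happens at the disk level inside a vdev. Numbers are raw TB, ignoring ZFS overhead:

```python
# Rough usable-capacity sketch for the three proposed layouts.
# "Mirroring" two vdevs is taken to mean usable space = the smaller vdev.
# Raw TB only; real pools lose a bit more to metadata/slop space.

def stripe_tb(disks):
    """Striped vdev: all member capacity is usable."""
    return sum(disks)

def raidz_tb(disks, parity=1):
    """RAIDZ: every member counts as the smallest disk,
    and `parity` disks' worth of space goes to parity."""
    return min(disks) * (len(disks) - parity)

options = {
    "Option 1": (stripe_tb([2, 2, 2]),    raidz_tb([2] * 5, parity=2)),
    "Option 2": (raidz_tb([2] * 4),       raidz_tb([2] * 4)),
    "Option 3": (stripe_tb([2, 2, 2, 2]), raidz_tb([2, 2, 2, 2, 3])),
}

for name, (vdev1, vdev2) in options.items():
    print(f"{name}: vdev1={vdev1}TB, vdev2={vdev2}TB,"
          f" mirrored pair -> {min(vdev1, vdev2)}TB usable")
```

Worth noting: under this math, Option 2 comes out at about 6TB usable rather than 8TB, because each 4-drive RAIDZ only has 3 data disks; Option 3 lands at 8TB (the 3TB drive in the RAIDZ is truncated to 2TB).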
I'm not really sure how many IOPS I should shoot for, and I know that depends on the type of data I'm reading and writing. I did give some idea above of what kind of VMs I'm going to have. I plan on giving the Napp-it VM 16GB of RAM. I don't think I will use an SSD for L2ARC or for a ZIL. I will probably use an SSD in the ESXi host for cache.
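On the multiple-vdevs-equals-more-IOPS question: a common rule of thumb is that a RAIDZ vdev delivers roughly the random IOPS of a single member disk, and the pool's random IOPS scale with the number of vdevs. A rough sketch of what that rule predicts, where the 75 IOPS per 7200 RPM disk figure is an assumption rather than a measured value:

```python
# Back-of-the-envelope random-IOPS estimate, assuming the rule of thumb
# that one RAIDZ vdev ~ one disk's worth of random IOPS, and that vdevs
# in a pool add up. 75 IOPS per 7200 RPM SATA disk is an assumed figure.

DISK_IOPS = 75  # assumed random IOPS for a single 7200 RPM SATA disk

def pool_random_iops(num_raidz_vdevs, per_disk_iops=DISK_IOPS):
    """Each RAIDZ vdev contributes ~one disk of random IOPS."""
    return num_raidz_vdevs * per_disk_iops

# One 8-drive RAIDZ vdev vs. two 4-drive RAIDZ vdevs:
print("1 x 8-drive RAIDZ vdev  :", pool_random_iops(1), "IOPS (approx)")
print("2 x 4-drive RAIDZ vdevs :", pool_random_iops(2), "IOPS (approx)")
```

So by this rule, two 4-drive RAIDZ vdevs should give roughly double the random IOPS of a single 8-drive RAIDZ, at the cost of one extra parity disk; sequential throughput scales with data disks either way.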
Let me know what you guys think. Thanks!!!