I'm in the process of designing a backup storage server. This box will serve as a backup to our primary backup server. The only things we need out of it are the following:
1) Be able to fully saturate two Gigabit network cards (250 MB/s)
2) Lots of storage space (roughly 100 TB)
The box will be connected via iSCSI to my primary server. I plan on using rsync (or something similar) to copy data onto this box, and on using OI+Napp-IT as the OS and storage management interface. We will only write data to this server, never read from it or serve data out; the only time a read will happen is in the event of a major disaster.
With that in mind, I did some research and have specced out the following items:
Chassis - Supermicro SC847E16-R1400LBP
Motherboard - SuperMicro X9SCM-F-O
CPU - Intel Xeon E3-1230 v2, 3.3 GHz
RAM - 32 GB DDR3-1600 unbuffered
HBA - LSI SAS 9207-8i controller card
HDD - 36 x 3 TB (possibly 4 TB) 5400 RPM 6 Gb/s SATA drives (model TBD)
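Since the drive model and pool layout are still TBD, here is a rough sanity check on the ~100 TB goal. The vdev layout below (six 6-disk raidz2 vdevs) is purely a hypothetical assumption for the arithmetic, not part of the spec:

```python
# Rough usable-capacity estimate for the 36-bay build.
# Assumption (not from the spec): six raidz2 vdevs of six drives each.
DRIVES = 36
DRIVE_TB = 3                         # decimal TB, as drive vendors count
VDEVS = 6
DRIVES_PER_VDEV = DRIVES // VDEVS    # 6 drives per vdev
PARITY_PER_VDEV = 2                  # raidz2 gives up two drives per vdev

raw_tb = DRIVES * DRIVE_TB                                    # 108 TB raw
usable_tb = VDEVS * (DRIVES_PER_VDEV - PARITY_PER_VDEV) * DRIVE_TB  # 72 TB

print(f"raw: {raw_tb} TB, usable before ZFS overhead: {usable_tb} TB")
```

Under that layout, 3 TB drives land at 72 TB usable, well short of 100 TB; the same math with 4 TB drives gives 6 × 4 × 4 = 96 TB, so hitting the target likely means 4 TB drives or wider vdevs, and ZFS reserves some space on top of this.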
Now to my questions:
1) Will I be able to saturate both my NICs and get a full 250 MB/s into this box?
Obviously my main bottleneck is the network. As far as I can tell, with the specs above, I should be able to get at least 250 MB/s. Am I missing something? My understanding is that rsync issues asynchronous writes, so I shouldn't really need an SSD as a dedicated ZIL (SLOG) device. Please correct me if I'm wrong here. My understanding of the above system's performance is this:
|HDD x 36 = 27,000 MB/s| ---> |Expander = 3,000 MB/s| ---> |PCIe 3.0 x8 HBA = 8,000 MB/s| ---> |Motherboard/CPU/RAM = plenty| ---> |Bonded NICs = 250 MB/s| ---> Network
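As a sanity check on the NIC end of that chain: two bonded gigabit links are 250 MB/s of raw line rate, but Ethernet framing and TCP/IP headers shave a few percent off. A rough sketch, assuming a standard 1500-byte MTU and IPv4/TCP without options:

```python
# Payload throughput of two bonded 1 GbE links, minus protocol overhead.
LINKS = 2
LINE_RATE_BPS = 1_000_000_000             # 1 Gbit/s per link

MTU = 1500                                # standard Ethernet MTU
TCP_IP_HEADERS = 20 + 20                  # IPv4 + TCP headers, no options
ETH_OVERHEAD = 14 + 4 + 8 + 12            # header + FCS + preamble + inter-frame gap

payload_per_frame = MTU - TCP_IP_HEADERS          # 1460 data bytes per frame
wire_bytes_per_frame = MTU + ETH_OVERHEAD         # 1538 bytes on the wire

raw_mb_s = LINKS * LINE_RATE_BPS / 8 / 1e6                        # 250.0
effective_mb_s = raw_mb_s * payload_per_frame / wire_bytes_per_frame  # ~237

print(f"raw: {raw_mb_s:.0f} MB/s, effective TCP payload: {effective_mb_s:.0f} MB/s")
```

One bonding caveat: LACP-style aggregation typically hashes each TCP flow onto a single link, so one rsync stream may top out around 118 MB/s; filling both links usually takes multiple parallel streams.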
Beyond the NICs, the biggest bottleneck appears to be the expander, and that's what I'm a little worried about. The SuperMicro E16 backplane is a SAS2 6 Gb/s expander. According to SuperMicro tech support, the expander has a single SFF-8087 port with four 6 Gb/s lanes. According to the internet, each 6 Gb/s lane = 750 MB/s, which would make the single SFF-8087 port capable of 3,000 MB/s. Is this correct? Aside from the technical bugs I've read about regarding expanders and ZFS, will this bottleneck be an issue for ZFS or other system functions (resilvering, self-healing, scrubbing/verification, defragmenting, etc.)?
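One wrinkle in that lane math worth checking: 750 MB/s is just the raw 6 Gbit/s divided by 8. SAS2 uses 8b/10b line coding (10 bits on the wire per data byte), so the usable figure per lane is closer to 600 MB/s:

```python
# SFF-8087 bandwidth: four SAS2 lanes, with and without 8b/10b line coding.
LANES = 4
LINE_RATE_GBPS = 6.0                 # raw SAS2 signalling rate per lane

raw_mb_per_lane = LINE_RATE_GBPS * 1000 / 8              # 750 MB/s, ignoring coding
coded_mb_per_lane = LINE_RATE_GBPS * 1000 / 10           # 600 MB/s after 8b/10b

print(f"per lane: raw {raw_mb_per_lane:.0f} MB/s, usable ~{coded_mb_per_lane:.0f} MB/s")
print(f"per SFF-8087 port: raw {LANES * raw_mb_per_lane:.0f} MB/s, "
      f"usable ~{LANES * coded_mb_per_lane:.0f} MB/s")
```

Even at ~2,400 MB/s usable, the port is still nearly ten times the 250 MB/s network ceiling, so it should only matter for disk-local work like scrubs and resilvers, where 36 drives streaming in parallel could collectively outrun the single uplink.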
2) Is the box I've specced out above too powerful?
While I realize a lot of you build home storage servers that run ESXi and do all sorts of awesome things, this box only needs to do one thing: write data to the disks and verify that the data is correct. I want to make sure I have enough horsepower for what I've outlined above, but I also want to make sure I'm not buying a Xeon when I only need an i3. (Backblaze, for example, only puts a Core i3 in their pods.) I just need the box to keep up with the NICs and let ZFS do its thing and keep the data from rotting.
Thanks for helping me out. This forum has a wealth of information, and I have found so many great posts here with very helpful advice. Any and all suggestions are welcome. Once I get all the parts, I plan on posting the build here for everyone to see. It should be a lot of fun and a good experience.