Is my ZFS server too powerful? It only needs to saturate two Gbit NICs.

Discussion in 'DIY Server and Workstation Builds' started by MattRK, Jul 26, 2012.

  1. MattRK

    MattRK New Member

    Joined:
    Jul 26, 2012
    Messages:
    2
    Likes Received:
    0
    I'm in the process of designing a backup storage server. This box will serve as a backup to our primary backup server. The only things we need out of it are the following:

    1) Be able to fully saturate two Gigabit network cards (250 MB/s)
    2) Lots of storage space (100TB ish)


    The box will be connected via iSCSI to my primary server. I plan on using rsync (or something similar) to copy data onto this box. I plan on using OI+Napp-IT as the OS and storage management interface. We will only be writing data to this server. Never reading or serving out data. The only time a read will happen is in the event of a major disaster.
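    Something like this is what I have in mind for the copy (hostname and paths here are just placeholders, not final):

```shell
# Rough sketch of the nightly push from the primary backup server.
# "backup2" and the paths are made-up names for illustration.
rsync -a --delete --inplace \
    /backups/primary/ backup2:/tank/backups/
```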

    With that in mind, I did some research and have specced out the following items:

    Chassis - Supermicro SC847E16-R1400LBP
    Motherboard - SuperMicro X9SCM-F-O
    CPU - Intel Xeon E3-1230V2 3.3Ghz
    Ram - 32GB DDR3 1600 Unbuffered Ram
    HBA - LSI SAS 9207-8i Controller Card
    HDD - 36 x 3TB (possibly 4TB) 5400 RPM 6Gb/s Sata drives (Model TBD)​

    Now to my questions:

    1) Will I be able to saturate both my NICs and get a full 250MB/s into this box?

    Obviously my main bottleneck is the network. As far as I can tell, with the specs above, I should be able to get at least 250MB/s. Am I missing something? My understanding is that rsync uses asynchronous writes, so I shouldn't really need any sort of SSD for a ZIL. Please correct me if I'm wrong here. My understanding of the above system's performance is this:

    |HDD x 36 = 27,000 MB/s| -------> |Expander = 3,000 MB/s| ------> |PCIe 3.0 x8 HBA = 8,000 MB/s| ------->|Motherboard/CPU/Ram = A lot| -----------> |Bonded NICS = 250MB/s| ------> Network data

    Beyond the NICs, the biggest bottleneck appears to be the expander, and that's what I'm a little worried about. The SuperMicro E16 expander is a SAS2 6Gb/s backplane. According to SuperMicro tech support, the expander has a single SFF-8087 port with four 6Gb/s lanes. According to the internet, each 6Gb lane = 750 MB/s, which would make the single SFF-8087 port capable of 3,000 MB/s. Is this correct? Aside from the technical bugs I've read about regarding expanders and ZFS, will this bottleneck be an issue for ZFS or other system functions (resilvering, scrubbing, verification, etc.)?
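    Though I've also read that SAS2 uses 8b/10b encoding, which would make it 600 MB/s of payload per lane rather than 750. Quick sanity check on the math:

```shell
# SAS2 lane is 6000 Mb/s raw; 8b/10b encoding means divide by 10, not 8
echo $((6000 / 10))       # payload per lane, MB/s -> 600
echo $((6000 / 10 * 4))   # four lanes on one SFF-8087 -> 2400
```

    Either way, at roughly 2.4GB/s the expander is still about 10x what the network can push, so I doubt it matters for my workload.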

    2) Is the box I've specced out above too powerful?

    While I realize a lot of you build home storage servers that run ESXi and do all sorts of awesome things, this box only needs to do one thing. It only needs to write data to the disks and verify that the data is correct. I want to make sure I have enough horsepower to do what I've outlined above, but I also want to make sure I'm not buying a Xeon proc when I only need an i3 proc. (Backblaze, for example, only puts a Core i3 proc in their pods.) I just need the box to be able to keep up with the NICs and make sure ZFS will be able to do its thing and keep the data from rotting.



    Thanks for helping me out. This forum has a wealth of information, and I have found so many great posts here with very helpful advice. Any and all suggestions are welcome. Once I get all the parts I am planning on posting the build up here for everyone to see. It should be a lot of fun and a good experience.
     
    #1
  2. Patrick

    Patrick Administrator
    Staff Member

    Joined:
    Dec 21, 2010
    Messages:
    11,559
    Likes Received:
    4,489
    Realistically, we have seen a 6.0gbps connection put out 550MB/s (or thereabouts), so 4x 550MB/s is about 2.2GB/s per SFF-8087 hooked up to the expander. Getting 250MB/s over the network is going to be very difficult. Not that it isn't possible, but it is close to the limit. If you need more than 200MB/s then you probably want another NIC or two. Also important: you need that data coming from different origins. Bonded gigabit Ethernet is not RAID 0 for networking; a single stream still rides a single link.

    Also, assuming you are going to have some striped RAID-Z or RAID-Z2 scenario running, you are basically going to bang the drives a lot. Each drive can do maybe 150MB/s on the outer tracks and 70MB/s on the inner tracks these days. With 36 drives in RAID 0 doing sequential reads/writes, that is more like 36 * 150 = 5,400MB/s down to 36 * 70 = 2,520MB/s maximum. Real life will be lower, but you are still spec'ing 10x to 20x the disk performance you need. One major factor is what you are writing: for random 4K I/O, remember a spindle disk is sub-1MB/s, so if that is your access pattern you might be happy to achieve 30MB/s.

    Will you be using deduplication? Compression? What are your pool setups going to be? E.g. are you doing four-drive RAID-Z * 9? Those are factors that will influence processor selection. That CPU may be overkill, but on the other hand, Intel's power gating is so good that if half of the chip is only used 5% of the time, the power consumption penalty for being able to scale to that 5% scenario is pretty negligible. Using the Xeon you can also be sure you have ECC support, which you want for reliability.

    One thing I would certainly do is make sure that controller is supported on that expander backplane with SATA drives. Ensuring the expander and the controller are compatible is a major consideration.

    1400w power supply seems a bit much to me though.
     
    #2
  3. MattRK

    MattRK New Member

    Joined:
    Jul 26, 2012
    Messages:
    2
    Likes Received:
    0
    Thanks for the response. Sorry for the delay in responding. I got busy at work and haven't had time to read this thread.

    I completely forgot about the overhead of the gig NIC. I'll be happy with 200 MB/s. As for the origins of the data, most of it will be coming from different sources, so it should be fine. If I do bond the NICs, I'll probably go with an LACP link aggregation setup that bonds the two NICs into one big pipe. I'm assuming Solaris supports such a thing, as I know Linux/BSD/Windows/Mac all do.
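    From a quick look, it seems dladm handles this on OI/Solaris. Something like the below (the igb0/igb1 link names are guesses, I'd check mine first):

```shell
# Hypothetical LACP aggregate on OpenIndiana; link names are placeholders.
dladm show-phys                                # find the real link names
dladm create-aggr -L active -l igb0 -l igb1 aggr0
dladm show-aggr                                # verify the aggregate
```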

    I am planning on setting it up in a RAID-Z or possibly Z2 setup (most likely 3 separate RAID-Z pools). The data should be fairly small, but I'm not 100% sure on that one. I'll definitely look into it.
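    Though I've also seen people do one pool with multiple RAID-Z2 vdevs instead of separate pools, so writes stripe across all of them. Something like this (disk names made up, and the real build would list all 36):

```shell
# Hypothetical: one pool, multiple raidz2 vdevs (disk names are placeholders).
# ZFS stripes across vdevs, so throughput scales with vdev count.
zpool create tank \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
    raidz2 c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0
zpool status tank
```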

    Feature-wise, I'm not planning on doing dedupe (not nearly enough memory for that). Compression, on the other hand, looks like a good idea; from what I've read it helps performance. Though I'm not very familiar with ZFS.
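    Looks like turning it on would just be a one-liner (dataset name here is made up; lzjb seems to be the lightweight option in OI-era ZFS):

```shell
# Hypothetical dataset name; lzjb is the cheap compression codec in
# OpenIndiana's ZFS at this time.
zfs set compression=lzjb tank/backups
zfs get compression,compressratio tank/backups
```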

    And I have verified compatibility. So far as I can tell, OI/Napp-IT supports my mobo, HBA, and expander. I've also read several success stories with this backplane/expander and my HBA. Though if I have problems I can always send the HBA back and try something else.
     
    #3
  4. MiniKnight

    MiniKnight Well-Known Member

    Joined:
    Mar 30, 2012
    Messages:
    2,950
    Likes Received:
    859
    Compression squeezes more data through a connection. If you can move 200MB/s of data over a connection, you effectively move more data if you can turn 400MB into 200MB for the journey.
     
    #4
  5. sboesch

    sboesch Member

    Joined:
    Aug 3, 2012
    Messages:
    370
    Likes Received:
    22
    I would be curious where you are finding this RAM; most of the resellers I am familiar with carry only one model, and they are ridiculously expensive.
     
    #5
  6. MiniKnight

    MiniKnight Well-Known Member

    Joined:
    Mar 30, 2012
    Messages:
    2,950
    Likes Received:
    859
    Kingston KVR1600D3D4R11S/8G on ebay

    You need 4 though.
     
    #6
  7. sboesch

    sboesch Member

    Joined:
    Aug 3, 2012
    Messages:
    370
    Likes Received:
    22
    Those are registered DIMMs and will not work. They need to be UDIMMs, or unbuffered if you will.
     
    #7
  8. MiniKnight

    MiniKnight Well-Known Member

    Joined:
    Mar 30, 2012
    Messages:
    2,950
    Likes Received:
    859
    #8
  9. sboesch

    sboesch Member

    Joined:
    Aug 3, 2012
    Messages:
    370
    Likes Received:
    22
    OUCH! That is expensive. Looks like I am sticking with 4x4GB modules. Almost looks like a poor justification to upgrade my motherboard and processor! :)
     
    #9
  10. Patrick

    Patrick Administrator
    Staff Member

    Joined:
    Dec 21, 2010
    Messages:
    11,559
    Likes Received:
    4,489
    Or you could get a LGA 2011 board with more slots :)
     
    #10
  11. MiniKnight

    MiniKnight Well-Known Member

    Joined:
    Mar 30, 2012
    Messages:
    2,950
    Likes Received:
    859
    I thought there were 16GB kits. Just don't know p/n
     
    #11
  12. sboesch

    sboesch Member

    Joined:
    Aug 3, 2012
    Messages:
    370
    Likes Received:
    22
    Now that would be nice! I can't justify it since I just built a new server!
     
    #12