Building a new FreeNAS box, thoughts/feedback?

Discussion in 'DIY Server and Workstation Builds' started by GCM, Sep 28, 2015.

  1. GCM

    GCM Active Member

    Joined:
    Aug 24, 2015
    Messages:
    137
    Likes Received:
    43
    Nothing purchased yet, this is all in the planning phase:

    So, any feedback would be greatly appreciated!

    Here is the plan:

    Chassis/barebones: SYS-6018R-TD8

    HD: 8x UltraStar 7K4000

    SSD: 2x 240GB (1 write cache 1 read cache)

    RAM: 64 GB

    CPU: E5-2603V3

    Thoughts?
     
    #1
  2. Keljian

    Keljian Active Member

    Joined:
    Sep 9, 2015
    Messages:
    429
    Likes Received:
    71
    Why do you need 64 gig of ram? :)
     
    #2
  3. BlueLineSwinger

    BlueLineSwinger Active Member

    Joined:
    Mar 11, 2013
    Messages:
    146
    Likes Received:
    53
    Without more detail, it's kinda hard to say. What's the environment? How are you planning to use it? What applications? How many users? Are you planning to run any plugins/jails?
     
    #3
  4. GCM

    GCM Active Member

    Joined:
    Aug 24, 2015
    Messages:
    137
    Likes Received:
    43
    Graphics design storage (Video, assets, etc) about 10 users. No plugins planned as of now.

    Using ZFS, from what I've read, the more RAM the better.
     
    #4
  5. Keljian

    Keljian Active Member

    Joined:
    Sep 9, 2015
    Messages:
    429
    Likes Received:
    71
If you have the money, 64GB of RAM is nice, but if you don't, 1GB per TB is plenty (8 drives x 4TB = 32TB, so 32GB).

I would prioritise network performance over an extra 32GB of RAM.

A 10Gbit NIC and suitable transceivers/cable/switch would be a worthwhile investment for that number of users/workload.
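The "1GB of RAM per TB of storage" rule of thumb works out like this for the proposed build (a quick sketch; drive count and size are taken from the first post, and the rule is a community guideline, not a hard requirement):

```python
# RAM sizing per the "1 GB RAM per TB raw storage" rule of thumb.
drives = 8
tb_per_drive = 4  # UltraStar 7K4000 = 4 TB

raw_tb = drives * tb_per_drive   # 32 TB raw
min_ram_gb = raw_tb * 1          # 1 GB RAM per TB -> 32 GB baseline

print(raw_tb, min_ram_gb)
```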
     
    #5
    Last edited: Sep 29, 2015
  6. Keljian

    Keljian Active Member

    Joined:
    Sep 9, 2015
    Messages:
    429
    Likes Received:
    71
It might actually be worthwhile going all 10Gbit with multiple teamed NICs in the server (and in the clients) if you are hitting it with video, depending on whether you plan to work directly from it or not.

It would also be worth considering high-IOPS drives for the cache, e.g. PCIe SSDs.
     
    #6
  7. GCM

    GCM Active Member

    Joined:
    Aug 24, 2015
    Messages:
    137
    Likes Received:
    43
    Thanks for the tips. Any chassis/mobo combo you can recommend?

    The main issue is pricing, client is trying to come in at 5k or under.
     
    #7
  8. Deslok

    Deslok Well-Known Member

    Joined:
    Jul 15, 2015
    Messages:
    1,027
    Likes Received:
    110
Why the separate read and write caches? Also, from my understanding of ZFS, that CPU might be a bit slow in that environment.
     
    #8
  9. T_Minus

    T_Minus Moderator

    Joined:
    Feb 15, 2015
    Messages:
    6,832
    Likes Received:
    1,489
    IMHO, this needs further explaining before we can give you accurate guidance.

    Are projects going to be "worked on" from the storage or placed there when done?

    Are all 10 users going to be doing video editing and graphic design on this storage at once or could they all be backing up at 4PM before they head home at once? Could there be streaming from storage to conference room and other users while others are doing intensive tasks on the storage at the same time?

    Can you explain a bit more how it will be utilized exactly?

    Is sound an issue for chassis? What about size? What about drive capacity?
     
    #9
  10. GCM

    GCM Active Member

    Joined:
    Aug 24, 2015
    Messages:
    137
    Likes Received:
    43

    Are projects going to be "worked on" from the storage or placed there when done?
It's currently a mix. It's mostly Photoshop/InDesign work, but some Premiere/After Effects work will be done. Same mix for local vs. NAS working.

    Are all 10 users going to be doing video editing and graphic design on this storage at once or could they all be backing up at 4PM before they head home at once? Could there be streaming from storage to conference room and other users while others are doing intensive tasks on the storage at the same time?
    Most likely the answer is no. Sporadic use, perhaps a few consecutive use scenarios, but the real world example would probably be 2-3 concurrent users, with one of them being video.

    Is sound an issue for chassis? What about size? What about drive capacity?
If by "issue" you mean "blazing fans all day," then yes. The usual hum of an average server is fine. And we're shooting for 18TB minimum in usable space.
     
    #10
  11. GCM

    GCM Active Member

    Joined:
    Aug 24, 2015
    Messages:
    137
    Likes Received:
    43
I'd like to keep as much space as possible for the caches. Originally, I was planning 2x read and 1x write.
     
    #11
  12. Patrick

    Patrick Administrator
    Staff Member

    Joined:
    Dec 21, 2010
    Messages:
    11,559
    Likes Received:
    4,490
    If this is just a file server with ZFS then that CPU is more than ample.
     
    #12
  13. T_Minus

    T_Minus Moderator

    Joined:
    Feb 15, 2015
    Messages:
    6,832
    Likes Received:
    1,489
    I'd change the chassis for sure then.

If you're just using it as a file server for image manipulation and asset storage, then the rest is fine, and likely pretty overkill.

    Another option may be:
    - E5-1620 V3
    - 32GB RAM
    - No Cache Drives

    Monitor usage and add cache drives and ram when/where needed.
     
    #13
    whitey likes this.
  14. GCM

    GCM Active Member

    Joined:
    Aug 24, 2015
    Messages:
    137
    Likes Received:
    43
    Good idea, but they're on the other side of the country ;)

    Trying to hammer down a suitable setup that would require minimal hardware tinkering post setup.
     
    #14
  15. Keljian

    Keljian Active Member

    Joined:
    Sep 9, 2015
    Messages:
    429
    Likes Received:
    71
You will find that it is possible to wire up the server with 10GbE and put a MikroTik CRS226 switch into production for under $500 total, if eBay is an option for the (dual-port Chelsio) network card, even with optical links.

The real question is: "Are people going to be editing video that is on the server, using the server as their storage?" If the answer is yes, then fast storage is justified, as is a fast network.
     
    #15
    Last edited: Sep 29, 2015
  16. GCM

    GCM Active Member

    Joined:
    Aug 24, 2015
    Messages:
    137
    Likes Received:
    43
Yeah, I was looking at the Mikrotik switch. This is one of my first "low budget" builds, so I'm a bit out of my element!
     
    #16
  17. miraculix

    miraculix Member

    Joined:
    Mar 6, 2015
    Messages:
    116
    Likes Received:
    24
    I recommend reading Cyberjock's ZFS noob guide here. He's the official FreeNAS Mr. Crankypants but he provides very helpful info.

    I may have totally misread/misunderstood something (hopefully someone corrects me if I did) but I think these are main points...

    • ZIL
  • There's no "write cache" SSD per se. Async writes are buffered in RAM (in transaction groups) before being flushed to the pool.
      • There is the concept of SLOG for sync writes, and using a dedicated ZIL device that's faster than the pool it supports
        • Use an SSD ZIL with good write endurance (or underprovision it) and write performance to support a pool of HDDs
        • Use an SSD ZIL with *really* good write performance/endurance to support a pool of SSDs (and maybe higher performance mirrored vdev setups).
      • The need for a ZIL device depends on what you're using FreeNAS for
        • NFS syncs writes by default, so definitely add a ZIL if you use NFS.
        • vSphere via iSCSI does not sync writes by default.
        • I'm not sure about CIFS/SMB... anyone?
      • Check the FreeNAS forum for specific ZIL device recommendations. I noticed Intel S3700 is recommended very often but there are other more exotic possibilities like ZeusRAM.
    • L2ARC
      • Primary read cache is in RAM ("ARC") and secondary read cache ("L2ARC") is optional.
      • Any SSD with good read performance is probably fine for L2ARC though there are specific recommendations (and specific SSDs to avoid)
      • Increasing RAM for more ARC generally provides better performance gains than adding L2ARC.
      • However, adding L2ARC actually increases RAM consumption, and you can kill performance if you are not careful and don't have adequate RAM.
  • 1GB RAM per 5GB L2ARC is the rule of thumb, so a 240GB L2ARC already implies a 48GB minimum... therefore go with 64GB RAM or more if you do decide to use that SSD for L2ARC.
    • vdevs
      • A ZFS pool is made up of one or more vdevs
      • Each vdev consists of multiple physical drives, and an individual vdev corresponds to non-ZFS RAID volumes you may be more familiar with. The most common examples:
    • Z1 uses 1 parity drive per vdev, similar to RAID5
    • Z2 uses 2 parity drives per vdev, similar to RAID6
        • A pool with a single mirrored vdev is effectively RAID1
        • A pool with multiple mirrored vdevs is effectively RAID10 since there's striping across those multiple mirror vdevs. This arrangement is most recommended for situations requiring high performance, high availability, or both (iSCSI based vSphere datastore for VMs, 10GE networking etc.)
      • Just remember that performance (specifically IOPS) is constrained to the slowest disk within the vdev, striping happens across multiple vdevs in a pool, and striping is good (increases performance). Therefore for 8 drives...
        • A pool of 4 mirror vdevs (2 drives each) yields the best performance but lowest usable capacity.
        • A pool of two Z1 or Z2 vdevs (4 drives per vdev) is an alternative with lower performance but higher usable capacity (Z2 yielding less usable capacity than Z1 due to the extra parity disk)
      • Adding one more drive (nine total) to your system might be a decent performance/capacity compromise of one pool striping across three Z1 vdevs (3 drives per vdev). This is something I want to test myself, but for now I don't know how well the performance would compare to a pool of multiple mirrored vdevs.
    Good luck!
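A quick sketch of the 8-drive layout trade-offs described above (capacity figures assume the OP's 4TB drives and ignore ZFS metadata/slop overhead; the IOPS column is a rough proxy, since random IOPS scale with vdev count and each RAIDZ/mirror vdev performs roughly like a single drive):

```python
# Compare usable capacity vs. rough IOPS for 8-drive pool layouts.
def layout(vdevs, drives_per_vdev, parity, tb=4):
    data_drives = drives_per_vdev - parity
    usable_tb = vdevs * data_drives * tb
    return usable_tb, vdevs  # (capacity TB, relative IOPS ~ vdev count)

layouts = {
    "4x mirror (2-wide)":  layout(4, 2, 1),
    "2x RAIDZ1 (4-wide)":  layout(2, 4, 1),
    "2x RAIDZ2 (4-wide)":  layout(2, 4, 2),
    "1x RAIDZ2 (8-wide)":  layout(1, 8, 2),
}
for name, (tb, iops) in layouts.items():
    print(f"{name}: {tb} TB usable, ~{iops}x single-vdev IOPS")
```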

    EDIT: attempted to make this more readable :p
     
    #17
    Last edited: Sep 30, 2015
  18. markarr

    markarr Active Member

    Joined:
    Oct 31, 2013
    Messages:
    391
    Likes Received:
    101
Also, to keep with ZFS best practices, follow the guidelines below when sizing each vdev for best performance:

    RAIDZ1 vdevs should have 3, 5, or 9 devices in each vdev
    RAIDZ2 vdevs should have 4, 6, or 10 devices in each vdev
    RAIDZ3 vdevs should have 5, 7, or 11 devices in each vdev

    You can make them whatever size you want, but stripe performance starts to drop; FreeNAS by default will only show you the above options.
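Those widths come from the "power-of-two data drives plus parity" heuristic, so that records split evenly across the data drives (a sketch of the pattern, not a hard FreeNAS rule):

```python
# Recommended RAIDZ widths = 2^n data drives + parity drives.
for parity, level in ((1, "RAIDZ1"), (2, "RAIDZ2"), (3, "RAIDZ3")):
    widths = [2**n + parity for n in (1, 2, 3)]
    print(level, widths)
# RAIDZ1 [3, 5, 9], RAIDZ2 [4, 6, 10], RAIDZ3 [5, 7, 11]
```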
     
    #18
  19. GCM

    GCM Active Member

    Joined:
    Aug 24, 2015
    Messages:
    137
    Likes Received:
    43
    Thank you everyone for the information!

    I've changed the config around a ton, based on reading and based on hardware I already have.

    I already had a Lenovo TS440 NIB, so I'll be utilizing that. It'll be much quieter than the last option.

    So here is what I have planned out:


    Lenovo TS440 (With extra 4 hotswap bays)
    32GB RAM
    8x 7k4's (Or perhaps I can step up to 5TB?)
    Chelsio T4 variant

@markarr From my understanding, an 8-disk RAIDZ2 shouldn't have too much performance impact, and unless I up my drive capacity to 6TB, a narrower vdev would put me under my target storage number.
     
    #19
  20. Keljian

    Keljian Active Member

    Joined:
    Sep 9, 2015
    Messages:
    429
    Likes Received:
    71
You should probably look at an LSI card to support the hard drives. Something along the lines of a 9240-8i or 9211-8i in IT mode.
     
    #20
    Last edited: Sep 30, 2015