FreeNAS - actual computer power required?

Discussion in 'FreeBSD and FreeNAS' started by Diavuno, Mar 17, 2017.

  1. Diavuno

    Diavuno Active Member

    With the release of V10 I'm curious to play with ZFS again.

    I have a large stash of 846 and 836 chassis and a huge surplus of X8 gear.

    I'm thinking I want to set up an 846 with 8x 4TB drives and 4-8 of the 600GB 15Ks I have. The 15Ks will be for VM stuff; the exact number depends on how many SSDs I need to put into the hot-swap bays.

    The 846s DO have 2x 2.5" internal bays (not hot swap, but this is a home box anyway).

    I'll probably use one of the X8DTN+ boards with dual internal USB-A (to mirror the boot) and 18 DIMM slots (I hear ZFS is a RAM hog, and I have piles of 4GB DIMMs).


    The question I have, though, is how much compute power do I actually need? Should I toss in any old L5520 I find, or should I put in some nicer E5640s?

    Do I need both chips?

    An HT quad seems like it would have ample power for a storage box, and the only reason I can think of for wanting a second chip is the 9 DIMM slots attached to it... but that's also more power (not cheap in CA).
     
    #1
  2. Patrick

    Patrick Administrator
    Staff Member

    Either will have enough compute. I would go E5640 or L5640 if you have them on hand.

    With FreeNAS 9.10, using the machine to host VMs was less practical. With FreeNAS Corral you can put extra CPU power to work.
     
    #2
    Diavuno likes this.
  3. K D

    K D Well-Known Member

    @Patrick

    I was looking for an SC936 on eBay, and the best I could find came with an X8DTi-F, 2x L5630, 48GB, and an LSI 9201-16i HBA. I got it planning to discard the internals and rebuild with an X10SDV-4C-7TP4F, but the internals were surprisingly good and the system works like a charm. Tested with Windows Server 2012 R2: no memory/CPU issues, and IPMI shows no faults. I see you recommended an L5640 in the previous post. Does it make sense to just use the system as is? All I have to do is rack it and plug it in. I can use the X10SDV-4C-7TP4F for something different.
     
    #3
  4. Diavuno

    Diavuno Active Member

    Excellent!

    Does it prefer threads or frequency?
    I have some L5639s somewhere...

    Also, will it really benefit from an extra chip (and the extra 36GB of RAM)?

    Or should I drop in a few of my S3500s?
     
    #4
  5. cheezehead

    cheezehead Active Member

    Depends what you're doing with it. If it's straight block storage, more threads; if you're using jails/Docker/bhyve, then it depends on the workload (some Plex audio transcoding is single-threaded with the jail implementation). If you have the parts lying around, just start with a single low-power proc, watch the CPU load, and adjust from there.

    More RAM is better, but each DIMM adds 1-3W of load. The more RAM you have, the larger your ARC will be. RAM sizing is generally 8GB minimum for FreeNAS, and the ZFS rule of thumb is roughly 1GB per TB of storage (depending on your workload). I'd start with 36GB and see how it runs for a few days... then try dropping to 24GB and see how it does; if you don't notice much of a difference, you just saved some power draw.
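
    If it helps to see that sizing math in one place, here is a rough back-of-the-envelope sketch in Python. It just encodes the rules of thumb above (the 8GB floor, roughly 1GB per TB, and a couple of watts per DIMM); the numbers are ballpark assumptions, not hard limits.

    Code:
# Rough FreeNAS/ZFS RAM sizing sketch based on the rules of thumb above.
# Assumptions: 8 GB minimum, ~1 GB RAM per TB of raw storage, ~2 W per DIMM.

def suggested_ram_gb(storage_tb, min_gb=8, gb_per_tb=1.0):
    """Ballpark RAM target for a FreeNAS/ZFS box."""
    return max(min_gb, storage_tb * gb_per_tb)

def dimm_power_w(dimm_count, watts_per_dimm=2.0):
    """Estimate extra idle draw from populated DIMMs (1-3 W each)."""
    return dimm_count * watts_per_dimm

if __name__ == "__main__":
    storage_tb = 8 * 4  # e.g. the 8x 4TB pool from the first post
    print(f"~{suggested_ram_gb(storage_tb):.0f} GB RAM suggested for {storage_tb} TB raw")
    # Dropping from nine 4GB DIMMs (36GB) to six (24GB) saves roughly:
    print(f"~{dimm_power_w(3):.0f} W")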
     
    #5
    Patrick likes this.
  6. Rand__

    Rand__ Well-Known Member

    There are still a bunch of single-threaded items in Corral IIRC (rsync, SSH replication, single-client SMB transfers, of course), so a higher clock helps out sometimes. Of course, if you're looking to serve many users rather than a few high-speed ones, then more threads are better :)
     
    #6
    cheezehead and Patrick like this.
  7. MiniKnight

    MiniKnight Well-Known Member

    @Diavuno if you're going to virtualize, L5639s.

    Here is the rule with ZFS:
    ADD AS MUCH RAM AS POSSIBLE

    Here's our rule with virtualization servers:
    ADD AS MUCH RAM AS WE CAN AFFORD

    Patterns abound :cool:;):D
     
    #7
    wsuff and pgh5278 like this.
  8. Diavuno

    Diavuno Active Member

    I'm not going to make an AIO.

    Right now I've set it up like so:
    Dual X8 Twin chassis as compute, each with a single L5639 and 48GB (in HA) (ESXi 6.5)
    Single 4x GbE card (cheap Chinese knockoff) for data
    Single Intel SSD for boot (32GB) and one for scratch (200GB)

    The 846 has a single E5640 and 36GB for storage (can double up on both if needed) (FreeNAS 10):
    Dual 4x GbE cards for data
    4x S3500 300GB
    4x WD Red 4TB
    4x white-label (Reds) (GoHardDrive) 4TB
    4x 2TB Hitachi (can grow as high as 14 as I consolidate)

    All of this will be hooked up to a PowerConnect 2848 or 6248.


    This is all personal stuff; mostly I'm just having a bit of fun playing with it. I'll likely strip it down for power and keep most of the compute turned off.


    Now, I get to tinker with the storage and see what works best!
     
    #8
  9. Patrick

    Patrick Administrator
    Staff Member

    @K D great question. If you are happy with the setup, maybe just keep it? I think the Xeon D is great and will draw much less power, so it's a tough call. Building new, I'd go Xeon D. The only reason it's really a question is that you already have that L5630 setup running.
     
    #9
  10. K D

    K D Well-Known Member

    Agreed. I'm using the server as is. No sense in wasting a perfectly good mobo/CPU combo.
     
    #10
  11. cheezehead

    cheezehead Active Member

    The only reason would be operating cost reduction. If power is cheap, keep running with what you have... if power is crazy expensive (e.g. Hawaii or South Australia), take a look at the ROI on new gear.
     
    #11
  12. K D

    K D Well-Known Member

    Looking at the power consumption, it uses around 162W at idle with just one SSD and two 1TB Ultrastars. Definitely needs some tinkering. I'm waiting for the Xeon D cooler to be delivered from Wiredzone. Once I get it, I'll transplant the Xeon D board into this chassis and check it out. With an actively cooled Xeon D, I think I can get away with slower fans for the middle fan wall in the SC835 and get rid of the two rear fans, which should bring the noise down to acceptable levels.
     
    #12
  13. K D

    K D Well-Known Member

    Set it up with a Xeon D-1518. Power consumption is 76W. I'm actually using an LSI 16-port HBA right now as I don't have the 8643-to-8087 cables needed. I'm guessing the power consumption will only go down further once I remove it and start using the onboard 2116.
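
    For anyone weighing that 162W-to-76W drop against cheezehead's ROI point above, here is a quick sketch in Python. The electricity rate and the gear cost in it are assumptions picked for illustration; plug in your own numbers.

    Code:
# Rough annual power-cost comparison for the two configs above:
# ~162 W idle on the X8DTi-F vs ~76 W idle on the Xeon D-1518.
# The $/kWh rate and the gear cost below are assumptions, not thread data.

def annual_cost_usd(watts, usd_per_kwh=0.20):
    """Cost of running a constant load 24/7 for a year at the given rate."""
    return watts / 1000 * 24 * 365 * usd_per_kwh

if __name__ == "__main__":
    old_w, new_w = 162, 76
    saving = annual_cost_usd(old_w) - annual_cost_usd(new_w)
    print(f"~${saving:.0f}/year saved at $0.20/kWh")
    gear_cost = 500.0  # hypothetical cost of the newer board/CPU
    print(f"payback in ~{gear_cost / saving:.1f} years")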
     
    #13
  14. sfbayzfs

    sfbayzfs Active Member

    I plan to test Corral this weekend on as many hardware platforms as I have time for. Depending on what you want to do, FreeNAS 9.* works great with a wide range of hardware; I have personally set these configs up with friends:
    • 5 drives in Z2 on an N40L 1.5GHz dual-core MicroServer, 8GB RAM
    • 5 drives in Z2 on the N54L 2.2GHz version, 8GB RAM
    • 10/20 drives in Z2 (second vdev was added later) with a Pentium G620T in an X9SCL-F and 20GB RAM
    • same as above with a Core i3-2120T
    • 16 drives in Z2 with a Xeon E3-1220L v2, 32GB RAM
    • 10 drives in Z2 plus a 2-drive mirror on an X7 board with an L5420 40W quad core and 32GB RAM
    In most of these cases the drives are 4TB Hitachi 5400RPM, with some 3TB and smaller drives in some setups.

    I am going to try to test on some HP MicroServers, Supermicro X7, X8, X9, and X10 boards, and maybe a Fujitsu microserver.

    I generally use ZFSonLinux for my larger systems since I'm way more comfortable with Linux than FreeBSD, but I'm hoping FreeNAS Corral may be the killer convergence app here... If so, I intend to merge all of my pools onto one system with some extra CPU and RAM.

    For CPU, as long as your vdevs are under 10-20 drives per vdev, you shouldn't need more than 2-4 CPU threads for ZFS's parity calculations, and with the higher-efficiency hashing algorithms in FreeBSD 11's ZFS used in FreeNAS Corral, maybe fewer.

    On RAM sizing, the old 1GB of RAM per 1TB of storage rule is not remotely true, though; I have had 100+TB zpools run very well with only 16 or 32GB of RAM.

    Also, someone closer to the ZFS internals warned not to use more than 128GB of RAM for a ZFS system, which I intend to heed, although I may test with more RAM on a scratch dataset if I can.
     
    #14
    wsuff likes this.
  15. Leo Levosky

    Leo Levosky New Member

    I just wondered how you got on with the HP MicroServers? I'm looking to replace WHS 2011 and wondered if FreeNAS would work on a MicroServer with 16GB of RAM.
     
    #15
  16. acquacow

    acquacow Active Member

    I don't think you need much in terms of CPU. I can max out 10GbE from my virtualized FreeNAS on a 1.8GHz CPU...
     
    #16
  17. Rand__

    Rand__ Well-Known Member

    Which protocol you use might matter for that statement ;)
     
    #17
  18. WANg

    WANg Active Member

    Yes. I am using FreeNAS 11.1 U6 on an N40L with 16GB of RAM. Works fine for me serving out iSCSI via 10GbE.
     
    #18
  19. Leo Levosky

    Leo Levosky New Member

    What does that actually mean? I know what SCSI is but have no idea what iSCSI is. Does it mean that it appears to other machines as if it were a local SCSI drive, or something like that?

    At the moment I am using WHS 2011, and accessing the drives from Windows PCs is fine. Do I need to run Samba to access the drives on a FreeNAS box, or is iSCSI some sort of alternative?
     
    #19
  20. WANg

    WANg Active Member

    iSCSI is basically sending/receiving SCSI commands over the network. In the case of FreeNAS, you turn the N40L into an iSCSI target (a virtual drive that offers up blocks of storage on a storage network). I have an HP t730 thin client set up as an ESXi hypervisor, and it's the iSCSI initiator: it mounts the virtual drive over the storage network and writes blocks of data to the iSCSI target as if it were a directly attached SCSI drive (that's my datastore for storing VMs). If you are using FreeNAS to replace WHS 2011, then yes, you can use NFS or CIFS (via Samba) to serve out your files. You don't really want to use iSCSI unless you need block-based rather than file-based remote storage.
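
    To make the block-vs-file distinction a bit more concrete, here is a tiny Python sketch. The device node and mount point in it are made-up examples for illustration (not paths FreeNAS or ESXi creates for you), and reading a raw device needs root.

    Code:
# Block vs. file access from a client's point of view.
# /dev/da1 stands in for an iSCSI LUN attached on the initiator side;
# /mnt/nas stands in for a CIFS/NFS share mounted from the NAS.
# Both paths are hypothetical examples.

BLOCK_DEVICE = "/dev/da1"   # raw blocks: you bring your own filesystem
SHARE_MOUNT = "/mnt/nas"    # files: the NAS owns the filesystem

# Block storage: read raw sectors straight off the LUN (needs root).
with open(BLOCK_DEVICE, "rb") as dev:
    first_sector = dev.read(512)
    print(f"read {len(first_sector)} raw bytes from the LUN")

# File storage: just work with ordinary files on the mounted share.
with open(f"{SHARE_MOUNT}/notes.txt", "w") as f:
    f.write("hello from the client\n")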
     
    #20