Help new build with Supermicro Storage

Discussion in 'FreeBSD and FreeNAS' started by minhneo, Jan 1, 2019.

  1. minhneo

    minhneo New Member

    Joined:
    Oct 3, 2018
    Messages:
    7
    Likes Received:
    0
    Hi All,

    I need some advice about the hardware config for production storage.

    1. Services: my company has about 300 employees
    - Nextcloud: 16TB. I will build Nextcloud on CentOS instead of using a FreeNAS jail - is that necessary?
    - SQL Server for: CRM, accounting, company management, ...
    - Voice server
    - 3 dynamic websites with about 2,000 - 5,000 concurrent users
    - pfSense firewall
    - 2 Active Directory / DNS / DHCP servers (primary - secondary)
    - Monitoring server
    - Some other services

    2. Primary Storage: 18k USD
    - Supermicro SuperStorage Server 5049P-E1CTR36L - 36x SATA/SAS - LSI 3008 12G SAS - 8x DDR4 - 1200W Redundant
    - Intel® Xeon® Silver 4114 Processor 10-core 2.20GHz 13.75MB Cache (85W)
    - 4 x 64GB PC4-21300 2666MHz DDR4 ECC Registered DIMM --> 256GB for ARC; I can upgrade to 512GB - is that necessary?
    - 2 x 1.0TB Samsung 970 PRO M.2 PCIe 3.0 x4 NVMe Solid State Drive --> striped (RAID 0) for L2ARC; I can upgrade to 2 x Optane 905P 960GB - is that necessary?
    - Micron NVDIMM 16GB --> SLOG (ZIL)
    - 3 x 1.0TB SAS 3.0 12.0Gb/s 7200RPM - 3.5" - Seagate Exos 7E8 Series (512n) --> RAID 1 boot pool, 1 disk as spare
    - 28 x 6.0TB SAS 3.0 12.0Gb/s 7200RPM - 3.5" - Seagate Exos 7E8 Series (512e) --> RAID 10 data pool (RAID 1 groups of 3 disks, RAID 0 across 8 RAID 1 groups), capacity (26 × 6)/3 ≈ 52TB, 2 disks as spares
    - 2 x Intel® 10-Gigabit Ethernet Converged Network Adapter X710-DA4 (4 x SFP+) --> direct connection to 4 ESXi hosts (iSCSI)
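As a quick sanity check on the capacity arithmetic above (the split between in-pool disks and spares is my reading of the post, not confirmed by it), sketched in Python:

```python
# Usable capacity of a ZFS pool striped across N-way mirror vdevs:
# each mirror vdev contributes one disk's worth of space.
def mirror_pool_capacity_tb(data_disks, mirror_width, disk_tb):
    vdevs = data_disks // mirror_width  # full mirror vdevs that fit
    return vdevs * disk_tb

# 8 three-way mirrors consume 24 of the 28 disks (leaving spares):
print(mirror_pool_capacity_tb(24, 3, 6.0))  # 48.0

# The post's own figure instead divides 26 disks by the mirror width:
print(26 * 6.0 / 3)  # 52.0
```

Note the two figures disagree (48TB vs ~52TB), which matches the confusion about the pool description raised in the replies below.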

    3. Backup Storage:
    - Supermicro SuperStorage Server 5049P-E1CTR36L - 36x SATA/SAS - LSI 3008 12G SAS - 8x DDR4 - 1200W Redundant
    - Intel® Xeon® Silver 4110 Processor 8-core 2.10GHz 11MB Cache (85W)
    - 4 x 16GB PC4-21300 2666MHz DDR4 ECC Registered DIMM
    - 1 x 512GB Samsung 970 PRO M.2 PCIe 3.0 x4 NVMe Solid State Drive
    - Micron NVDIMM 16GB --> ZIL log
    - 2 x 1.0TB SAS 3.0 12.0Gb/s 7200RPM - 3.5" - Seagate Exos 7E8 Series (512n) --> Boot
    - 10 x 12TB SAS 3.0 12.0Gb/s 7200RPM - 3.5" - Seagate Exos 7E8 Series (512e) --> RAID-Z2
    - 2 x Intel® 10-Gigabit Ethernet Converged Network Adapter X710-DA4 (4 x SFP+) --> direct connection to 4 ESXi hosts (iSCSI)
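If both boxes end up on ZFS with plain HBAs rather than RAID cards, the layouts described in (2) and (3) would look roughly like the sketch below. This is only illustrative: the pool names and the `da*`/`nvd*` device names are placeholders, not from the post.

```shell
# Primary: striped 3-way mirrors (the "RAID 10" above) plus hot spares.
# da0..da25 stand in for the 6TB SAS drives; substitute real device IDs.
zpool create tank \
  mirror da0 da1 da2 \
  mirror da3 da4 da5 \
  mirror da6 da7 da8 \
  mirror da9 da10 da11 \
  spare da24 da25

# SLOG on the NVDIMM, L2ARC on the NVMe drives:
zpool add tank log nvd0
zpool add tank cache nvd1 nvd2

# Backup: a single RAID-Z2 vdev over the ten 12TB drives.
zpool create backup raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9
```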

    Can you give me some advice on optimizing the parameters, such as: network, ZIL, ARC, L2ARC, ...?

    Sorry, English is not my first language.

    Many thanks!
     
    #1
  2. dswartz

    dswartz Active Member

    Joined:
    Jul 14, 2011
    Messages:
    350
    Likes Received:
    26
    L2ARC is most likely not useful (certainly not worth spending the $$$ on Optanes!). You might want to spend some of that $$$ on a redundant SLOG device - can you use 2 NVDIMMs instead? Rather than a spare for the boot pool, use that 3rd drive in the root pool (i.e. a 3-way mirror). I don't understand the data pool description: you say RAID 10, but are talking about RAID 1 and RAID 0? Also, what OS will run on the primary storage server?
     
    #2
  3. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    2,812
    Likes Received:
    379
    Ok, just to get it straight-
    You are trying to get a new storage box for your company with a usage portfolio as in (1).
    Your current plan is described in (2) with (3) as backup?

    You then describe a ZFS-based setup but imply the use of RAID cards? Or are those supposed to be RAID-Z vdevs? I assume you are looking at FreeNAS as the storage OS?

    Connection to 4 ESXi hosts (for the VMs from (1), I assume) via direct connection? Why not a switched setup? 10G got cheap...

    Then - you have a single spinner RAID 10 array serving very different access types - SQL databases (maybe not heavy use), web pages (potentially lots of cacheable data unless you are highly dynamic); are you sure you will be able to satisfy most reads from cache?

    This does not look sensible to me to be honest...
     
    #3
  4. minhneo

    minhneo New Member

    Joined:
    Oct 3, 2018
    Messages:
    7
    Likes Received:
    0
    I can use 2 NVDIMMs.
    I will use a 3-way mirror for boot.
    The data pool is RAID 10: striped 3-way mirrors.
    I will use FreeNAS, but it has problems with VMware backup and replication.
     
    #4
  5. minhneo

    minhneo New Member

    Joined:
    Oct 3, 2018
    Messages:
    7
    Likes Received:
    0
    Yes, (2) is primary, (3) is the backup via replication.

    Can you suggest some models for me?

    I think my data will live in the ARC and L2ARC caches for about a month before they fill up and get flushed; the dynamic web cache stays in memory.
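For reference, the ARC/L2ARC behavior being discussed here is controlled on FreeNAS via FreeBSD loader tunables. The values below are purely illustrative examples for a 256GB box, not recommendations from this thread:

```shell
# /boot/loader.conf (FreeBSD/FreeNAS loader tunables) - example values only
vfs.zfs.arc_max="206158430208"         # cap ARC at ~192 GiB, leave RAM for the OS
vfs.zfs.l2arc_write_max="67108864"     # 64 MiB/s steady-state L2ARC fill rate
vfs.zfs.l2arc_write_boost="134217728"  # 128 MiB/s while the ARC is still warming up
vfs.zfs.l2arc_noprefetch="0"           # also cache prefetched (streaming) reads
```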
     
    #5
  6. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    2,812
    Likes Received:
    379
    What model do you mean? A non-RAID card? A switch? A possible setup?
     
    #6
  7. minhneo

    minhneo New Member

    Joined:
    Oct 3, 2018
    Messages:
    7
    Likes Received:
    0
    I will use a "Supermicro AOM-SAS3-8I8E-LP SAS 3.0 12Gb/s 8-port Host Bus Adapter".
    I will connect the hosts directly to FreeNAS.
     
    #7
  8. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    2,812
    Likes Received:
    379
    So 4 1:1 connections (mirrored but not shared).

    Are you sure ESXi will provide vMotion/HA capability in that setup? I don't think it will, since those would be seen as 4 individual (multipathed) storage arrays.
    Or are the 4 ESXi boxes not in a cluster setup?
     
    #8
  9. minhneo

    minhneo New Member

    Joined:
    Oct 3, 2018
    Messages:
    7
    Likes Received:
    0
    I tested it; FreeNAS only supports 1 path (not multipath) with iSCSI. Using the multipath option from vCenter is slower and has higher latency than a single path.
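For anyone reproducing this test, the path selection policy on the vSphere side is set per device with esxcli; the `naa.` device identifier below is a placeholder for the real iSCSI LUN ID:

```shell
# List the device and its paths, then switch it to round-robin
esxcli storage nmp device list
esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR

# Optionally rotate paths every I/O instead of the default 1000 IOPS
esxcli storage nmp psp roundrobin deviceconfig set \
  --device naa.xxxxxxxxxxxxxxxx --type iops --iops 1
```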
     
    #9
    Last edited: Jan 10, 2019
  10. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    2,812
    Likes Received:
    379
    You could try napp-it on a Solaris derivative as an alternative, unless you need FreeNAS/FreeBSD-specific features.
    Or you could plan on a switch (or two for redundancy); that would not provide multipath with FreeNAS, but would at least enable building a proper cluster with HA/vMotion.
     
    #10
  11. minhneo

    minhneo New Member

    Joined:
    Oct 3, 2018
    Messages:
    7
    Likes Received:
    0
    I tested Solaris 11.4 with multipath: bandwidth nearly doubled and latency decreased ~300% (100% read, 0% write workload).
     
    #11
  12. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    2,812
    Likes Received:
    379
    But Solaris requires a license for commercial use ... :)
     
    #12