AIO ESXi/Napp-it advice requested

Discussion in 'DIY Server and Workstation Builds' started by nicklebon, Apr 4, 2018.

  1. nicklebon

    nicklebon Member

    Joined:
    Jun 14, 2017
    Messages:
    51
    Likes Received:
    2
    I am finally putting together the server I've been putting off for a year. Thus far I have, or have ordered, the following parts:

    SM 836 chassis w/ SAS2-EL2 backplane
    LSI 9300 16i HBA
    SM X10DRH-iLN4 motherboard (on back order till late April)
    2x E5-2695 v3 CPUs (ES chips marked QEY6, replacing the original QFQKs I had, for the higher base clock)
    2x HUSSL40108SS600 100GB SSDs (intended for use as ZIL until I can get something better)

    Where I need advice:

    1. I am leaning toward an SM SATA DOM, the SSD-DM128-SMCMVN1 specifically, for booting ESXi. I realize it's way larger than needed for the intended purpose, but its write speed is so much higher than the other modules'. Am I right in thinking this will make a difference?

    2. Primary storage will be spinners connected to the LSI 9300 via the SAS2-EL2 backplane. The 2 HGST SSDs will also be connected here. Is using SATA drives behind the expander a bad idea? The 9300 and all attached drives will be passed through to Napp-it for ZFS and, for now anyway, ZIL (a rough sketch of the layout I have in mind is below, after question 3).

    3. Memory prices ATM are causing me to scale back my 128GB plans and go with 64GB. From what I've read, not populating this board with 8x DIMMs will put a serious damper on memory performance. Is that an accurate statement?
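
    For reference, here's the rough sketch of the pool layout I mentioned in question 2, written as a small Python wrapper around the zpool commands I'd run inside the Napp-it VM. Nothing is built yet, so the pool name and device names are just placeholders:

        # Sketch of the planned pool: spinners behind the 9300 in mirrored pairs,
        # with the two HUSSL4010 100GB SSDs as a mirrored SLOG for now.
        # Pool name and cXtYd0 device names are placeholders -- nothing is built yet.
        import subprocess

        def zpool(*args):
            """Run a zpool command inside the Napp-it (OmniOS) storage VM."""
            subprocess.run(["zpool", *args], check=True)

        # Main pool from the spinners.
        zpool("create", "tank",
              "mirror", "c0t0d0", "c0t1d0",
              "mirror", "c0t2d0", "c0t3d0")

        # Mirrored SLOG (ZIL device) on the two HGST SSDs.
        zpool("add", "tank", "log", "mirror", "c0t4d0", "c0t5d0")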

    If you see any blatant errors in this, please by all means point them out.

    Thanks
     
    #1
  2. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    1,804
    Likes Received:
    604
    1.
    The SM SATA DOM is ok, but I would prefer

    - an Intel DC S35x0 SSD (80-120 GB):
    faster, more reliable and with powerloss protection, or

    - an Intel Optane 800P or 900P:
    you can use this for L2Arc and Slog as well
    (no guaranteed powerloss protection, but it should be ok if the use case is not extremely critical)

    In my own setups I put ESXi and the storage VM onto the bootdisk.
    With an Optane (800P and up) you can also place L2Arc and Slog there (see the sketch after point 3).

    2.
    If you buy new, choose SAS drives for use behind an expander (e.g. HGST He)

    3.
    I would not see the memory performance difference between 4x DIMMs and 8x DIMMs as essential.
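
    A minimal sketch of what point 1 looks like from inside the storage VM, assuming the Slog and L2Arc are handed to it as two small Optane-backed vdisks (pool and device names are placeholders, not from a real setup):

        # Sketch: attach a Slog and an L2Arc to an existing pool from inside the storage VM.
        # c1t1d0 / c1t2d0 stand in for two small vdisks carved out of the Optane datastore;
        # real device names will differ.
        import subprocess

        def zpool(*args):
            subprocess.run(["zpool", *args], check=True)

        zpool("add", "tank", "log", "c1t1d0")    # Slog: separate sync-write log
        zpool("add", "tank", "cache", "c1t2d0")  # L2Arc: read cache extension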
     
    #2
  3. i386

    i386 Well-Known Member

    Joined:
    Mar 18, 2016
    Messages:
    1,325
    Likes Received:
    302
    Not really. Write speed is only "important" during the installation (copying boot files from another medium to the SATA DOM). After the installation, almost all IO operations are reads. You could even use USB sticks for ESXi (I do this at work for a production ESXi host, and it has now run for over 2 years without the USB stick failing).
     
    #3
  4. nicklebon

    nicklebon Member

    Joined:
    Jun 14, 2017
    Messages:
    51
    Likes Received:
    2
    I'm not too worried about PLP, as both power supplies have UPS protection. Also, despite the size of the chassis, I have very few storage options outside the SAS backplane. That said, you lost me with the 800P/900P boot disk. I am working under the impression that any device used as an L2Arc or Slog has to be passed through and owned exclusively by the VM running the Napp-it image. Have I completely missed the boat here?

    Anyone have any thoughts on the best way to get an 800P or two into this system? How about the ASRock Ultra Quad M.2 Card? I am specifically asking whether it has the ability to present 4 separate PCIe NVMe devices that could each be passed through to separate VMs, or not.
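
    Once the card is here, I figure a quick check from the ESXi shell would tell me whether four separate NVMe controllers show up. The "Non-Volatile memory" match against the esxcli output is a guess on my part, so treat this as a rough sanity check only:

        # Rough check from the ESXi shell (assumes its bundled Python is available):
        # count the NVMe controllers ESXi can see. If the quad M.2 card and the
        # slot's x4x4x4x4 bifurcation work out, each 800P should appear as its own
        # PCI device that can be toggled for passthrough.
        import subprocess

        out = subprocess.run(["esxcli", "hardware", "pci", "list"],
                             stdout=subprocess.PIPE, universal_newlines=True,
                             check=True).stdout
        nvme_lines = [l for l in out.splitlines() if "Non-Volatile memory" in l]
        print("NVMe-looking PCI entries found: %d" % len(nvme_lines))
        for l in nvme_lines:
            print("  " + l.strip())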
     
    #4
  5. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    1,804
    Likes Received:
    604
    #5
  6. nicklebon

    nicklebon Member

    Joined:
    Jun 14, 2017
    Messages:
    51
    Likes Received:
    2
    #6
  7. nicklebon

    nicklebon Member

    Joined:
    Jun 14, 2017
    Messages:
    51
    Likes Received:
    2
    After a lengthy delay due to backorder, the Supermicro X10DRH-iLN4 arrived today. Is it normal for SM boards to include zero documentation, not even an optical disc with the docs? I literally received a box with a motherboard, I/O shield and 6x SATA cables, nothing more. Please note this is the -O aka retail SKU. Seems odd to me.
     
    #7
  8. britinpdx

    britinpdx Active Member

    Joined:
    Feb 8, 2013
    Messages:
    342
    Likes Received:
    145
    I'm pretty sure that's the way they come, and on later (X10) motherboards that I've purchased there's usually a "quick install guide" and a "checklist" on a green sheet of paper to verify the box contents.

    The SM X10DRH-iLN4 Specification Page (towards the bottom of the page) indicates that the parts list is the motherboard, I/O shield and SATA cables, and contains links to documentation and the latest BIOS/firmware/drivers, etc.
     
    #8
  9. nicklebon

    nicklebon Member

    Joined:
    Jun 14, 2017
    Messages:
    51
    Likes Received:
    2
    Wasn't a shred of paper in my box. Most box openings I could find showed the green sheet and the quick start guide, so I'm pretty sure the quick install guide should have been included. I got it done, but it was a pita walking from the bench to the desk to refer to the version on my screen. I should note that Supermicro heat sinks come with documentation but the mb doesn't, lol.
     
    #9