Home Setup - Design changes

Discussion in 'General Chat' started by marcoi, Jun 13, 2018.

  1. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    2,812
    Likes Received:
    379
    Does that compare against post #18 or #21? Have you found the comment I referenced about why reads were slower?
    Looking better than before at least, and if all you changed was the power settings then it's a significant difference - whether it warrants the higher power draw... well, that's your choice :)

    Of course this is still not the maximum that you see locally, but it's still an apples-to-oranges comparison.
    Run fio on FreeNAS and fio on a Linux box (ideally the same fio version) with the same command to see the impact of the network (ESXi or physical).
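    For reference, a minimal sketch of that comparison: one run directly on FreeNAS against the pool, and the identical command on the Linux box against the network-mounted path (the paths here are made up):
    Code:
    # locally on FreeNAS, against the pool itself
    fio --name=seqwrite --directory=/mnt/tank/fio-test --rw=write --direct=1 --blocksize=128k --numjobs=8 --size=4G --runtime=600 --group_reporting --iodepth=128

    # same command on the Linux client, against the NFS/iSCSI-mounted path
    fio --name=seqwrite --directory=/mnt/remote/fio-test --rw=write --direct=1 --blocksize=128k --numjobs=8 --size=4G --runtime=600 --group_reporting --iodepth=128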
     
    #61
  2. marcoi

    marcoi Well-Known Member

    Joined:
    Apr 6, 2013
    Messages:
    1,067
    Likes Received:
    160
    Post #60 compares to post #35.
    I also went back and removed the max power setting - I couldn't justify the extra 80 watts of idle power when I didn't see any improvement.
    I also re-enabled HT after making sure the BIOS, ESXi and vCenter were on the latest versions (it had been disabled for the L1TF bug).
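    (For anyone else re-enabling HT post-L1TF: my understanding is that the ESXi 6.7 side-channel-aware scheduler is controlled by a kernel setting along these lines - treat the exact option name as an assumption and check VMware's L1TF guidance:)
    Code:
    # enable the side-channel-aware scheduler so HT can stay on (host reboot required)
    esxcli system settings kernel set -s hyperthreadingMitigation -v TRUE

    # check the current value
    esxcli system settings kernel list -o hyperthreadingMitigation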
     
    #62
  3. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    2,812
    Likes Received:
    379
    I'd disagree since one is Q32T1, the other Q128T8 ;)
     
    #63
    marcoi likes this.
  4. TeleFragger

    TeleFragger New Member

    Joined:
    Oct 26, 2016
    Messages:
    27
    Likes Received:
    1
    The only thing I can say on this is that, while mine are not IP, 1080p (2MP) is old tech as of today... you should be looking at 5MP cameras. That is what I'm going to be upgrading to... soon.
     
    #64
  5. marcoi

    marcoi Well-Known Member

    Joined:
    Apr 6, 2013
    Messages:
    1,067
    Likes Received:
    160
    Thanks for that. I ended up going with Ring wireless cams for outside - they work well enough, and it's a quiet neighborhood, so they're fine. For inside, I went with two Amcrest 2K cameras that record to SD only. I don't have them uploading to the cloud, and for now I use their app on the internal network to access them.
     
    #65
    TeleFragger likes this.
  6. TeleFragger

    TeleFragger New Member

    Joined:
    Oct 26, 2016
    Messages:
    27
    Likes Received:
    1
    It works, so that's good. For me, I don't like wireless cameras (my choice), and especially Ring...

    Years ago my wife wanted cameras as we were going to Disney, and I knew nothing about CCTV. She picked up an 8-channel DVR, 4-camera Samsung CCTV kit, BNC-based... I ran the 4 cameras and got it up and running. Over time the remote-view app got upgraded and now it stinks... but it still works. I have one side of the house that is not covered and is dark, and a heroin addict has been getting into cars - unlocked cars. I just had my Anker 13.5k mAh battery stolen, along with my Sony wireless headset... so I'm going to get cameras up. Since I'm on coax and not PoE, my bud, who I met recently, said I'm good to 5MP with those cables and anything more is too expensive right now, so he told me to go with an 8+2 DVR (8 BNC plus 2 IP cam inputs) for $200, a 4TB Purple drive for $120, and 5MP cameras. There are 2 types: the cheap one is $38 each and the expensive one is $60... heck, the expensive one is dual-light or something... so I'm gonna go with that.

    I'll put 3 in place to cover what's missing, then replace my other (2MP) cameras with 5MP later.

    Not affiliated with these people...
    Thought I'd just share with you, since it seems you, like the rest of us, like keeping up with things.

    Cameras:
    Buy LTS LTCMHT1752-28, Platinum Starlight Turret HD-TVI Camera 5MP / 2.8mm - MegaDepot

    DVR:
    Buy LTS LTD8508K-ST, Platinum Professional Level 8 Channel Video Recorder - MegaDepot


    I want 2 IP cameras for face recognition...
    A guy at work went with Nest IP cams. It cost him $1,800 installed for 4, plus a $300/yr subscription. To me that's too much, but he's non-technical and doesn't do anything on his own, AND it paid for itself the first night... someone in cuffs.

    With face recognition he was able to put his family in the DB, and he got a text that someone unknown was out back... cops in 3 minutes and boom... cuffed, locked and loaded...

    So I want face recognition so I know when specific family members are home, etc. I could even do neighbors, so I can know when they truly let the dogs out... etc.
     
    #66
  7. marcoi

    marcoi Well-Known Member

    Joined:
    Apr 6, 2013
    Messages:
    1,067
    Likes Received:
    160
    I hear you - everyone's got different situations. I wanted to get "enough" for right now. Eventually I'll update the system and get a full PoE IP cam setup around the house running on Blue Iris or some other software platform.
     
    #67
    TeleFragger likes this.
  8. marcoi

    marcoi Well-Known Member

    Joined:
    Apr 6, 2013
    Messages:
    1,067
    Likes Received:
    160
    New storage pool: 8x 800GB SAS3 SSD in Z2
    sync=standard
    fio testing on the FreeNAS server on the new mount.
    Seems like it did better than the RAID0 tests (not sure how).

    [attached screenshots: fio write and read results]

    Write test:
    Code:
    fio --output=128K_Seq_Write.txt --name=seqwrite --write_bw_log=128K_Seq_Write_sec_by_sec.csv --filename=nvme0n1p1 --rw=write --direct=1 --blocksize=128k --norandommap --numjobs=8 --randrepeat=0 --size=4G --runtime=600 --group_reporting --iodepth=128
    Code:
    seqwrite: (g=0): rw=write, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=psync, iodepth=128
    ...
    fio-3.5
    Starting 8 processes
    seqwrite: Laying out IO file (1 file / 4096MiB)
    seqwrite: Laying out IO file (1 file / 4096MiB)
    seqwrite: Laying out IO file (1 file / 4096MiB)
    seqwrite: Laying out IO file (1 file / 4096MiB)
    seqwrite: Laying out IO file (1 file / 4096MiB)
    seqwrite: Laying out IO file (1 file / 4096MiB)
    seqwrite: Laying out IO file (1 file / 4096MiB)
    seqwrite: Laying out IO file (1 file / 4096MiB)
    
    seqwrite: (groupid=0, jobs=8): err= 0: pid=73509: Fri Jan 11 10:47:01 2019
      write: IOPS=27.9k, BW=3485MiB/s (3655MB/s)(32.0GiB/9402msec)
        clat (usec): min=20, max=84356, avg=268.07, stdev=1128.03
         lat (usec): min=21, max=84357, avg=272.10, stdev=1130.45
        clat percentiles (usec):
         |  1.00th=[   27],  5.00th=[   51], 10.00th=[   52], 20.00th=[   57],
         | 30.00th=[   65], 40.00th=[   76], 50.00th=[   82], 60.00th=[   98],
         | 70.00th=[  151], 80.00th=[  314], 90.00th=[  478], 95.00th=[  816],
         | 99.00th=[ 2573], 99.50th=[ 4178], 99.90th=[13829], 99.95th=[22676],
         | 99.99th=[47973]
       bw (  MiB/s): min=    1, max= 6364, per=41.78%, avg=1456.15, stdev=929.15, samples=262144
       iops        : min= 1582, max= 6784, avg=3434.78, stdev=1291.01, samples=141
      lat (usec)   : 50=3.29%, 100=57.53%, 250=14.70%, 500=15.51%, 750=3.51%
      lat (usec)   : 1000=1.86%
      lat (msec)   : 2=2.17%, 4=0.90%, 10=0.36%, 20=0.11%, 50=0.05%
      lat (msec)   : 100=0.01%
      cpu          : usr=2.17%, sys=28.21%, ctx=245771, majf=0, minf=1736
      IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=128
    
    Run status group 0 (all jobs):
      WRITE: bw=3485MiB/s (3655MB/s), 3485MiB/s-3485MiB/s (3655MB/s-3655MB/s), io=32.0GiB (34.4GB), run=9402-9402msec
    
    Read Test:
    Code:
    fio --output=128K_Seq_Read.txt --name=seqread --write_bw_log=128K_Seq_Read_sec_by_sec.csv --filename=nvme0n1p1 --rw=read --direct=1 --blocksize=128k --norandommap --numjobs=8 --randrepeat=0 --size=4G --runtime=600 --group_reporting --iodepth=128
    
    Code:
    seqread: (g=0): rw=read, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=psync, iodepth=128
    ...
    fio-3.5
    Starting 8 processes
    seqread: Laying out IO file (1 file / 4096MiB)
    
    seqread: (groupid=0, jobs=8): err= 0: pid=73897: Fri Jan 11 10:51:42 2019
       read: IOPS=42.7k, BW=5339MiB/s (5599MB/s)(32.0GiB/6137msec)
        clat (usec): min=39, max=1164, avg=182.16, stdev=17.89
         lat (usec): min=39, max=1165, avg=182.67, stdev=17.90
        clat percentiles (usec):
         |  1.00th=[  143],  5.00th=[  159], 10.00th=[  163], 20.00th=[  172],
         | 30.00th=[  178], 40.00th=[  180], 50.00th=[  184], 60.00th=[  186],
         | 70.00th=[  188], 80.00th=[  192], 90.00th=[  200], 95.00th=[  204],
         | 99.00th=[  219], 99.50th=[  255], 99.90th=[  281], 99.95th=[  326],
         | 99.99th=[  523]
       bw (  KiB/s): min=112569, max=3342650, per=13.31%, avg=727565.93, stdev=100786.59, samples=262144
       iops        : min= 5231, max= 5448, avg=5391.65, stdev=54.73, samples=96
      lat (usec)   : 50=0.08%, 100=0.13%, 250=99.20%, 500=0.58%, 750=0.01%
      lat (usec)   : 1000=0.01%
      lat (msec)   : 2=0.01%
      cpu          : usr=3.37%, sys=53.70%, ctx=229108, majf=0, minf=1992
      IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         issued rwts: total=262144,0,0,0 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=128
    
    Run status group 0 (all jobs):
       READ: bw=5339MiB/s (5599MB/s), 5339MiB/s-5339MiB/s (5599MB/s-5599MB/s), io=32.0GiB (34.4GB), run=6137-6137msec
    
     
    #68
  9. marcoi

    marcoi Well-Known Member

    Joined:
    Apr 6, 2013
    Messages:
    1,067
    Likes Received:
    160
    Here is CDM with the Z2 pool (8x 800GB), no SLOG, and sync=standard.
    Datastore presented to the local ESXi host over iSCSI, with a 200GB drive added to the W7 VM.
    [attached screenshot: CDM results]

    Q128T8:
    [attached screenshot: CDM Q128T8 results]
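    For context, a rough sketch of how that 200GB iSCSI drive might be carved out of the pool, assuming a zvol-backed extent (pool/zvol names are made up; on FreeNAS the extent and target are configured in the GUI):
    Code:
    # create a 200G zvol to back the iSCSI extent
    zfs create -V 200G tank/esxi-w7test

    # the FreeNAS iSCSI extent then points at /dev/zvol/tank/esxi-w7test,
    # and the target is added to ESXi and attached to the W7 VM as a 200GB disk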
     
    #69
  10. marcoi

    marcoi Well-Known Member

    Joined:
    Apr 6, 2013
    Messages:
    1,067
    Likes Received:
    160
    New storage pool: 8x 800GB SAS3 SSD in Z2
    sync=always
    fio testing on the FreeNAS server on the new mount.
    Huge drop in write speed compared to sync=standard, even on the local mount. Reads stayed the same, which is to be expected.
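    For reference, the sync behaviour is just a ZFS dataset property; a minimal sketch of flipping it (the pool/dataset name here is made up):
    Code:
    # force every write to be committed as a synchronous write (to stable storage / SLOG)
    zfs set sync=always tank/vmstore

    # go back to honouring only the syncs the client actually requests
    zfs set sync=standard tank/vmstore

    # check the current setting
    zfs get sync tank/vmstore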

    [attached screenshots: fio write and read results]

    Write test:
    Code:
    seqwrite: (g=0): rw=write, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=psync, iodepth=128
    ...
    fio-3.5
    Starting 8 processes
    seqwrite: Laying out IO file (1 file / 4096MiB)
    seqwrite: Laying out IO file (1 file / 4096MiB)
    seqwrite: Laying out IO file (1 file / 4096MiB)
    seqwrite: Laying out IO file (1 file / 4096MiB)
    seqwrite: Laying out IO file (1 file / 4096MiB)
    seqwrite: Laying out IO file (1 file / 4096MiB)
    seqwrite: Laying out IO file (1 file / 4096MiB)
    seqwrite: Laying out IO file (1 file / 4096MiB)
    
    seqwrite: (groupid=0, jobs=8): err= 0: pid=82736: Fri Jan 11 12:21:27 2019
      write: IOPS=2234, BW=279MiB/s (293MB/s)(32.0GiB/117312msec)
        clat (usec): min=672, max=74411, avg=3570.26, stdev=1644.94
         lat (usec): min=674, max=74418, avg=3574.84, stdev=1645.25
        clat percentiles (usec):
         |  1.00th=[ 1012],  5.00th=[ 1483], 10.00th=[ 1844], 20.00th=[ 2474],
         | 30.00th=[ 2802], 40.00th=[ 3097], 50.00th=[ 3392], 60.00th=[ 4293],
         | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 4817],
         | 99.00th=[ 6915], 99.50th=[11338], 99.90th=[26608], 99.95th=[27919],
         | 99.99th=[31589]
       bw (  KiB/s): min= 1761, max=194984, per=15.22%, avg=43536.24, stdev=21516.84, samples=262144
       iops        : min=  162, max=  791, avg=271.41, stdev=95.66, samples=1872
      lat (usec)   : 750=0.03%, 1000=0.90%
      lat (msec)   : 2=10.46%, 4=41.41%, 10=46.48%, 20=0.54%, 50=0.18%
      lat (msec)   : 100=0.01%
      cpu          : usr=0.33%, sys=8.12%, ctx=475081, majf=0, minf=1736
      IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=128
    
    Run status group 0 (all jobs):
      WRITE: bw=279MiB/s (293MB/s), 279MiB/s-279MiB/s (293MB/s-293MB/s), io=32.0GiB (34.4GB), run=117312-117312msec
    
    Read test:
    Code:
    seqread: (g=0): rw=read, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=psync, iodepth=128
    ...
    fio-3.5
    Starting 8 processes
    seqread: Laying out IO file (1 file / 4096MiB)
    
    seqread: (groupid=0, jobs=8): err= 0: pid=83251: Fri Jan 11 12:23:54 2019
       read: IOPS=41.5k, BW=5186MiB/s (5438MB/s)(32.0GiB/6318msec)
        clat (usec): min=43, max=2236, avg=187.69, stdev=26.12
         lat (usec): min=43, max=2237, avg=188.23, stdev=26.14
        clat percentiles (usec):
         |  1.00th=[   62],  5.00th=[  165], 10.00th=[  169], 20.00th=[  178],
         | 30.00th=[  184], 40.00th=[  188], 50.00th=[  190], 60.00th=[  194],
         | 70.00th=[  196], 80.00th=[  200], 90.00th=[  206], 95.00th=[  212],
         | 99.00th=[  233], 99.50th=[  262], 99.90th=[  306], 99.95th=[  343],
         | 99.99th=[  783]
       bw (  KiB/s): min=58599, max=3023087, per=13.58%, avg=721305.50, stdev=207796.63, samples=262144
       iops        : min= 4906, max= 5153, avg=5006.45, stdev=65.84, samples=96
      lat (usec)   : 50=0.20%, 100=1.40%, 250=97.65%, 500=0.72%, 750=0.01%
      lat (usec)   : 1000=0.01%
      lat (msec)   : 2=0.01%, 4=0.01%
      cpu          : usr=2.83%, sys=55.71%, ctx=227066, majf=0, minf=1992
      IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         issued rwts: total=262144,0,0,0 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=128
    
    Run status group 0 (all jobs):
       READ: bw=5186MiB/s (5438MB/s), 5186MiB/s-5186MiB/s (5438MB/s-5438MB/s), io=32.0GiB (34.4GB), run=6318-6318msec
    
     
    #70
  11. marcoi

    marcoi Well-Known Member

    Joined:
    Apr 6, 2013
    Messages:
    1,067
    Likes Received:
    160
    CDM for the setup in post #70:
    [attached screenshot: CDM results]

    Q128T8:
    [attached screenshot: CDM Q128T8 results]
     
    #71
  12. marcoi

    marcoi Well-Known Member

    Joined:
    Apr 6, 2013
    Messages:
    1,067
    Likes Received:
    160
    New storage pool: 8x 800GB SAS3 SSD in Z2
    sync=always
    Added Optane SLOG

    fio testing on the FreeNAS server on the new mount.
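    A minimal sketch of what adding the Optane SLOG looks like at the command line (pool and device names are assumed; on FreeNAS this is normally done through the GUI):
    Code:
    # add the Optane NVMe device as a dedicated log vdev (SLOG)
    zpool add tank log nvd0

    # confirm it shows up under "logs"
    zpool status tank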

    [attached screenshots: fio write and read results]

    Write test:
    Code:
    seqwrite: (g=0): rw=write, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=psync, iodepth=128
    ...
    fio-3.5
    Starting 8 processes
    seqwrite: Laying out IO file (1 file / 4096MiB)
    seqwrite: Laying out IO file (1 file / 4096MiB)
    seqwrite: Laying out IO file (1 file / 4096MiB)
    seqwrite: Laying out IO file (1 file / 4096MiB)
    seqwrite: Laying out IO file (1 file / 4096MiB)
    seqwrite: Laying out IO file (1 file / 4096MiB)
    seqwrite: Laying out IO file (1 file / 4096MiB)
    seqwrite: Laying out IO file (1 file / 4096MiB)
    
    seqwrite: (groupid=0, jobs=8): err= 0: pid=88415: Fri Jan 11 13:09:28 2019
      write: IOPS=3141, BW=393MiB/s (412MB/s)(32.0GiB/83440msec)
        clat (usec): min=736, max=93663, avg=2535.79, stdev=3492.95
         lat (usec): min=740, max=93667, avg=2540.89, stdev=3493.07
        clat percentiles (usec):
         |  1.00th=[ 1565],  5.00th=[ 1827], 10.00th=[ 1926], 20.00th=[ 2040],
         | 30.00th=[ 2180], 40.00th=[ 2278], 50.00th=[ 2376], 60.00th=[ 2442],
         | 70.00th=[ 2540], 80.00th=[ 2638], 90.00th=[ 2769], 95.00th=[ 2933],
         | 99.00th=[ 3884], 99.50th=[ 5800], 99.90th=[81265], 99.95th=[82314],
         | 99.99th=[90702]
       bw (  KiB/s): min= 1399, max=178047, per=14.07%, avg=56561.47, stdev=9484.65, samples=262144
       iops        : min=  288, max=  519, avg=382.01, stdev=45.67, samples=1328
      lat (usec)   : 750=0.01%, 1000=0.01%
      lat (msec)   : 2=16.00%, 4=83.03%, 10=0.72%, 20=0.04%, 50=0.01%
      lat (msec)   : 100=0.20%
      cpu          : usr=0.58%, sys=13.40%, ctx=539129, majf=0, minf=1736
      IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=128
    
    Run status group 0 (all jobs):
      WRITE: bw=393MiB/s (412MB/s), 393MiB/s-393MiB/s (412MB/s-412MB/s), io=32.0GiB (34.4GB), run=83440-83440msec
    
    Read test:
    Code:
    seqread: (g=0): rw=read, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=psync, iodepth=128
    ...
    fio-3.5
    Starting 8 processes
    seqread: Laying out IO file (1 file / 4096MiB)
    
    seqread: (groupid=0, jobs=8): err= 0: pid=89039: Fri Jan 11 13:13:50 2019
       read: IOPS=41.2k, BW=5152MiB/s (5402MB/s)(32.0GiB/6360msec)
        clat (usec): min=43, max=9576, avg=188.88, stdev=50.10
         lat (usec): min=43, max=9577, avg=189.43, stdev=50.13
        clat percentiles (usec):
         |  1.00th=[   58],  5.00th=[  163], 10.00th=[  169], 20.00th=[  178],
         | 30.00th=[  184], 40.00th=[  188], 50.00th=[  190], 60.00th=[  194],
         | 70.00th=[  198], 80.00th=[  202], 90.00th=[  208], 95.00th=[  215],
         | 99.00th=[  260], 99.50th=[  273], 99.90th=[  478], 99.95th=[  734],
         | 99.99th=[ 1811]
       bw (  KiB/s): min=13687, max=3044009, per=13.74%, avg=725137.79, stdev=236162.49, samples=262144
       iops        : min= 4806, max= 5092, avg=4960.06, stdev=81.61, samples=96
      lat (usec)   : 50=0.35%, 100=1.69%, 250=96.84%, 500=1.03%, 750=0.04%
      lat (usec)   : 1000=0.02%
      lat (msec)   : 2=0.02%, 4=0.01%, 10=0.01%
      cpu          : usr=3.94%, sys=54.61%, ctx=226104, majf=0, minf=1992
      IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         issued rwts: total=262144,0,0,0 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=128
    
    Run status group 0 (all jobs):
       READ: bw=5152MiB/s (5402MB/s), 5152MiB/s-5152MiB/s (5402MB/s-5402MB/s), io=32.0GiB (34.4GB), run=6360-6360msec
    
     
    #72
  13. marcoi

    marcoi Well-Known Member

    Joined:
    Apr 6, 2013
    Messages:
    1,067
    Likes Received:
    160
    Storage observations so far:
    Reads are always consistent, so it must just be the FreeNAS cache (ARC) kicking in - the VM has 64GB of RAM.

    sync=disabled/standard gives the best results in fio/CDM testing.
    sync=always without a SLOG gives the lowest write performance.
    sync=always with the Optane SLOG gives about 400-600 MB/s write performance regardless of the pool setup.

    My VM-to-VM network on the same ESXi host seems to be limited to a max of 3GB/s, averaging about 2GB/s.
    My physical 10GbE host-to-host network does hit a max of 1GB/s both read and write.

    So I'm done with testing - I need to move on at some point, lol.

    So the next question is which pool structure should I go with? The pool is for VM storage for two hosts. The max I can hope for over the network is 1GB/s read/write. But I know that for VMs IOPS matter more, and most people suggest mirror vdevs.

    I would also like to make sure I have enough storage for the next few years, so I might trade off redundancy for space.
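    To make the two candidate layouts concrete, a rough sketch of what each would look like (disk names are placeholders):
    Code:
    # Option A: one 8-disk RAIDZ2 vdev - more usable space, roughly one vdev's worth of write IOPS
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7

    # Option B: four striped 2-way mirrors - four vdevs' worth of IOPS, half the raw capacity
    zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7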
     
    #73
  14. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    2,812
    Likes Received:
    379
    I'd suggest some testing for a change ;)

    Basically your options are Z2 and mirrors - a stripe of RAIDZ vdevs doesn't work with the number of disks you have.

    So what's your primary requirement?

    vMotion, concurrent access, or something else?

    Set up both pool types, add them to ESXi, then perform some vMotions or spin up 6-8 VMs and run fio concurrently. The better perf/space ratio wins.
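    A rough sketch of that concurrent test, assuming the same fio version inside each of the 6-8 test VMs (the mount point and job parameters are placeholders):
    Code:
    # run inside every test VM at the same time (ssh loop, Ansible, etc.)
    fio --name=vmload --directory=/mnt/test --rw=randrw --rwmixread=70 --blocksize=16k --numjobs=4 --size=4G --runtime=300 --time_based --direct=1 --iodepth=32 --group_reporting

    # then compare the aggregate bandwidth/IOPS across all VMs between the Z2 pool and the mirror pool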
     
    #74
  15. marcoi

    marcoi Well-Known Member

    Joined:
    Apr 6, 2013
    Messages:
    1,067
    Likes Received:
    160
    Concurrent access. I want all my VMs to run off this storage pool instead of local storage (some of the dev VMs will stay on local storage for now). Since the storage is central, I should only need to move the compute resource when moving VMs, instead of moving both compute and storage.
     
    #75
  16. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    2,812
    Likes Received:
    379
    Then IOPS means mirrors. These drives are not the fastest anyway, so don't throw away more performance than you have to. Is 2.2TB usable (at a 70% fill level) enough?
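    (That figure lines up with the raw math: 8x 800GB as four 2-way mirrors is 4 x 800GB ≈ 3.2TB raw, and keeping the pool at or below a 70% fill level leaves roughly 3.2TB x 0.7 ≈ 2.24TB of comfortably usable space, before ZFS overhead.)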
     
    #76
  17. marcoi

    marcoi Well-Known Member

    Joined:
    Apr 6, 2013
    Messages:
    1,067
    Likes Received:
    160
    Yeah, that's what I've been reading.

    According to the wintel ZFS calc:

    Z2
    [attached screenshots: Z2 capacity and IOPS estimates]


    Mirrors
    [attached screenshots: mirror capacity and IOPS estimates]

    So it's losing about 1TB of storage (vs. Z2, if I go mirrors) against losing around 60k IOPS (vs. mirrors, if I go Z2). IDK which is the better trade-off.
    (The IOPS calc isn't ZFS-specific, but it's probably good enough to give an idea of the loss.)
     
    #77
  18. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    2,812
    Likes Received:
    379
    You lose 25% of your capacity but 40% of your performance...

    I never went with anything but mirrors ;)
     
    #78
  19. marcoi

    marcoi Well-Known Member

    Joined:
    Apr 6, 2013
    Messages:
    1,067
    Likes Received:
    160
    I guess that's what the great buys section is for - to get more down the road, lol.

    I'm thinking of leaving sync=disabled, though, and keeping iSCSI.
     
    #79
  20. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    2,812
    Likes Received:
    379
    That's an option. You're aware of the pitfalls, I assume :)
     
    #80