Findings and Exploration of the Intel Optane 900P Under ESXi

Discussion in 'Hard Drives and Solid State Drives' started by marcoi, Nov 8, 2017.

  1. marcoi

    marcoi Well-Known Member

    Joined:
    Apr 6, 2013
    Messages:
    1,334
    Likes Received:
    205
    Decided to create a new thread for the Optane 900P under ESXi so as not to overload other threads with specific data.

    If you'd like to post your results and findings here, please include some base details such as ESXi version, general host details, how the 900P is set up in ESXi, VM specs, etc.
     
    #1
    MiniKnight and Patrick like this.
  2. marcoi

    marcoi Well-Known Member

    Joined:
    Apr 6, 2013
    Messages:
    1,334
    Likes Received:
    205
    System Specs:
    upload_2017-11-8_10-11-7.png

    Dell r720 has the following cards
    Slot 1: Processor 2 - HBA 9300-8i
    Slot 4: Processor 2 - Intel 900P AIC
    Slot 5: Processor 1 - 10GbE NIC
    Slot 6: Processor 1 - USB 3.0 4 port card
    Slot 7: Processor 1- HBA 9300-8e

    BIOS does show the card connecting at the correct PCIe link:
    upload_2017-11-8_19-0-46.png

    Current card setup: passthrough mode
    upload_2017-11-8_10-15-51.png

    VM testing: Win 10 VM with passthrough.
    upload_2017-11-8_10-17-40.png
    upload_2017-11-8_10-23-27.png


    Initial testing:
    coming soon
     
    #2
    Last edited: Nov 8, 2017
  3. marcoi

    marcoi Well-Known Member

    Joined:
    Apr 6, 2013
    Messages:
    1,334
    Likes Received:
    205
    I updated Windows 10 to the latest build, just to make sure the OS version didn't affect the benchmarks. So far I don't see any changes with the latest update.

    CDM, both old and new versions, for comparison:
    upload_2017-11-8_11-31-33.png

    The 4K Q32T1 result seems lower than it should be.

    SiSoftware Sandra benchmark - read test (1MB blocks)
    Code:
    SiSoftware Sandra
    
    Benchmark Results
    Drive Score : 2.33GB/s
    Results Interpretation : Higher Scores mean Better Performance.
    Binary Numeral System (base 2) : 1GB(/s) = 1024MB(/s), 1MB(/s) = 1024kB(/s), 1kB(/s) = 1024 bytes(/s), etc.
    
    Benchmark Results
    Random Access Time : 38µs
    Results Interpretation : Lower Scores mean Better Performance.
    Decimal Numeral System (base 10) : 1s = 1000ms, 1ms = 1000µs, 1µs = 1000ns, etc.
    
    Benchmark Timings
    Time to Read Capacity : 1 minute(s), 58 second(s)
    Results Interpretation : Lower Scores mean Better Performance.
    
    Performance per Thread
    Drive Score : 74.41MB/s
    No. Threads : 32
    Results Interpretation : Higher Scores mean Better Performance.
    Binary Numeral System (base 2) : 1GB(/s) = 1024MB(/s), 1MB(/s) = 1024kB(/s), 1kB(/s) = 1024 bytes(/s), etc.
    
    Performance vs. Speed
    Drive Score : 152.39kB/s/rpm
    Random Access Time : 0.002µs/rpm
    Results Interpretation : Higher Scores mean Better Performance.
    
    Detailed Results
    Speed at position 0% : 2.5GB/s (195.77MB/s - 2.5GB/s) (108%)
    Speed at position 3% : 2.3GB/s (152.92MB/s - 2.3GB/s) (99%)
    Speed at position 7% : 2.3GB/s (134.7MB/s - 2.3GB/s) (99%)
    Speed at position 10% : 2.33GB/s (144.63MB/s - 2.33GB/s) (100%)
    Speed at position 13% : 2.32GB/s (142.78MB/s - 2.32GB/s) (100%)
    Speed at position 17% : 2.29GB/s (117.5MB/s - 2.29GB/s) (98%)
    Speed at position 20% : 2.31GB/s (142.14MB/s - 2.31GB/s) (99%)
    Speed at position 23% : 2.28GB/s (129MB/s - 2.28GB/s) (98%)
    Speed at position 27% : 2.52GB/s (155.58MB/s - 2.52GB/s) (108%)
    Speed at position 30% : 2.31GB/s (126.9MB/s - 2.31GB/s) (99%)
    Speed at position 33% : 2.27GB/s (125.42MB/s - 2.27GB/s) (98%)
    Speed at position 37% : 2.31GB/s (121.32MB/s - 2.31GB/s) (99%)
    Speed at position 40% : 2.28GB/s (116.77MB/s - 2.28GB/s) (98%)
    Speed at position 43% : 2.26GB/s (288.61MB/s - 2.26GB/s) (97%)
    Speed at position 47% : 2.3GB/s (158.89MB/s - 2.3GB/s) (99%)
    Speed at position 50% : 2.28GB/s (240.72MB/s - 2.28GB/s) (98%)
    Speed at position 53% : 2.49GB/s (156.92MB/s - 2.49GB/s) (107%)
    Speed at position 57% : 2.31GB/s (153.93MB/s - 2.31GB/s) (99%)
    Speed at position 60% : 2.28GB/s (125.54MB/s - 2.28GB/s) (98%)
    Speed at position 63% : 2.31GB/s (136.78MB/s - 2.31GB/s) (99%)
    Speed at position 67% : 2.29GB/s (129MB/s - 2.29GB/s) (99%)
    Speed at position 70% : 2.29GB/s (133.1MB/s - 2.29GB/s) (98%)
    Speed at position 73% : 2.3GB/s (126.81MB/s - 2.3GB/s) (99%)
    Speed at position 77% : 2.3GB/s (127.09MB/s - 2.3GB/s) (99%)
    Speed at position 80% : 2.52GB/s (157.56MB/s - 2.52GB/s) (108%)
    Speed at position 83% : 2.31GB/s (128.33MB/s - 2.31GB/s) (99%)
    Speed at position 87% : 2.3GB/s (128.34MB/s - 2.3GB/s) (99%)
    Speed at position 90% : 2.33GB/s (140.45MB/s - 2.33GB/s) (100%)
    Speed at position 93% : 2.3GB/s (132MB/s - 2.3GB/s) (99%)
    Speed at position 97% : 2.3GB/s (134.82MB/s - 2.3GB/s) (99%)
    Speed at position 100% : 2.32GB/s (132.16MB/s - 2.32GB/s) (100%)
    Random Access Time : 38µs (38µs - 130µs)
    Full Stroke Access Time : 38µs (38µs - 182µs)
    
    Benchmark Status
    Result ID : INTEL SSDPED1D28 (280GB, PCIe2x32/NVMe)
    Firmware : 0325032503250325
    Computer : VMware Virtual Platform (Intel 440BX Desktop Reference Platform)
    Platform Compliance : x64
    System Timer : 1.86MHz
    Use Overlapped I/O : Yes
    I/O Queue Depth : 32 request(s)
    Block Size : 1MB
    Bytes Per Sector : 512bytes
    
    SiSoftware Sandra benchmark - read test (4kB blocks)
    Code:
    SiSoftware Sandra
    
    Benchmark Results
    Drive Score : 133.78MB/s
    Results Interpretation : Higher Scores mean Better Performance.
    Binary Numeral System (base 2) : 1GB(/s) = 1024MB(/s), 1MB(/s) = 1024kB(/s), 1kB(/s) = 1024 bytes(/s), etc.
    
    Benchmark Results
    Random Access Time : 38µs
    Results Interpretation : Lower Scores mean Better Performance.
    Decimal Numeral System (base 10) : 1s = 1000ms, 1ms = 1000µs, 1µs = 1000ns, etc.
    
    Benchmark Timings
    Time to Read Capacity : 34 minute(s), 54 second(s)
    Results Interpretation : Lower Scores mean Better Performance.
    
    Performance per Thread
    Drive Score : 4.18MB/s
    No. Threads : 32
    Results Interpretation : Higher Scores mean Better Performance.
    Binary Numeral System (base 2) : 1GB(/s) = 1024MB(/s), 1MB(/s) = 1024kB(/s), 1kB(/s) = 1024 bytes(/s), etc.
    
    Performance vs. Speed
    Drive Score : 8.56kB/s/rpm
    Random Access Time : 0.002µs/rpm
    Results Interpretation : Higher Scores mean Better Performance.
    
    Detailed Results
    Speed at position 0% : 113.22MB/s (17.55MB/s - 113.22MB/s) (85%)
    Speed at position 3% : 118.15MB/s (14.07MB/s - 118.15MB/s) (88%)
    Speed at position 7% : 123.09MB/s (19.17MB/s - 123.09MB/s) (92%)
    Speed at position 10% : 119.75MB/s (14MB/s - 119.75MB/s) (90%)
    Speed at position 13% : 121.47MB/s (17.55MB/s - 121.47MB/s) (91%)
    Speed at position 17% : 119.35MB/s (21MB/s - 119.35MB/s) (89%)
    Speed at position 20% : 124.87MB/s (19.43MB/s - 124.87MB/s) (93%)
    Speed at position 23% : 119.4MB/s (14MB/s - 119.4MB/s) (89%)
    Speed at position 27% : 120.3MB/s (17MB/s - 120.3MB/s) (90%)
    Speed at position 30% : 123.22MB/s (19.28MB/s - 123.22MB/s) (92%)
    Speed at position 33% : 121.37MB/s (16.18MB/s - 121.37MB/s) (91%)
    Speed at position 37% : 120.38MB/s (11.34MB/s - 120.38MB/s) (90%)
    Speed at position 40% : 124.12MB/s (21.9MB/s - 124.12MB/s) (93%)
    Speed at position 43% : 118.27MB/s (15.23MB/s - 118.27MB/s) (88%)
    Speed at position 47% : 123.17MB/s (21.83MB/s - 123.17MB/s) (92%)
    Speed at position 50% : 120.69MB/s (18.26MB/s - 120.69MB/s) (90%)
    Speed at position 53% : 121.28MB/s (20.36MB/s - 121.28MB/s) (91%)
    Speed at position 57% : 125.37MB/s (19MB/s - 125.37MB/s) (94%)
    Speed at position 60% : 120.4MB/s (5.58MB/s - 120.4MB/s) (90%)
    Speed at position 63% : 116.76MB/s (15.83MB/s - 116.76MB/s) (87%)
    Speed at position 67% : 122.9MB/s (19MB/s - 122.9MB/s) (92%)
    Speed at position 70% : 117.2MB/s (21.7MB/s - 117.2MB/s) (88%)
    Speed at position 73% : 122.52MB/s (16.4MB/s - 122.52MB/s) (92%)
    Speed at position 77% : 123.87MB/s (16.25MB/s - 123.87MB/s) (93%)
    Speed at position 80% : 131.92MB/s (19.07MB/s - 131.92MB/s) (99%)
    Speed at position 83% : 138.87MB/s (27.15MB/s - 138.87MB/s) (104%)
    Speed at position 87% : 150MB/s (22MB/s - 164.73MB/s) (112%)
    Speed at position 90% : 224.44MB/s (36.6MB/s - 224.44MB/s) (168%)
    Speed at position 93% : 225.52MB/s (31.93MB/s - 225.52MB/s) (169%)
    Speed at position 97% : 229.77MB/s (30.58MB/s - 229.77MB/s) (172%)
    Speed at position 100% : 145.39MB/s (17.38MB/s - 176.78MB/s) (109%)
    Random Access Time : 38µs (38µs - 170µs)
    Full Stroke Access Time : 40µs (40µs - 146µs)
    
    Benchmark Status
    Result ID : INTEL SSDPED1D28 (280GB, PCIe2x32/NVMe)
    Firmware : 03250325032503250325
    Computer : VMware Virtual Platform (Intel 440BX Desktop Reference Platform)
    Platform Compliance : x64
    System Timer : 1.86MHz
    Use Overlapped I/O : Yes
    I/O Queue Depth : 32 request(s)
    Block Size : 4kB
    Bytes Per Sector : 512bytes
    
    Volume Information
    Capacity : 260.83GB
    
    Physical Disk
    Model : INTEL SSDPED1D28
    Firmware : 0325
    Interface : PCIe/NVMe2
    Removable Drive : No
    Queueing On : Yes
    
    SiSoftware Sandra benchmark - write test (1MB blocks)
    Code:
    SiSoftware Sandra
    
    Benchmark Results
    Drive Score : 1.35GB/s
    Results Interpretation : Higher Scores mean Better Performance.
    Binary Numeral System (base 2) : 1GB(/s) = 1024MB(/s), 1MB(/s) = 1024kB(/s), 1kB(/s) = 1024 bytes(/s), etc.
    
    Benchmark Results
    Random Access Time : 38µs
    Results Interpretation : Lower Scores mean Better Performance.
    Decimal Numeral System (base 10) : 1s = 1000ms, 1ms = 1000µs, 1µs = 1000ns, etc.
    
    Benchmark Timings
    Time to Write Capacity : 3 minute(s), 23 second(s)
    Results Interpretation : Lower Scores mean Better Performance.
    
    Performance per Thread
    Drive Score : 43.1MB/s
    No. Threads : 32
    Results Interpretation : Higher Scores mean Better Performance.
    Binary Numeral System (base 2) : 1GB(/s) = 1024MB(/s), 1MB(/s) = 1024kB(/s), 1kB(/s) = 1024 bytes(/s), etc.
    
    Performance vs. Speed
    Drive Score : 88.28kB/s/rpm
    Random Access Time : 0.002µs/rpm
    Results Interpretation : Higher Scores mean Better Performance.
    
    Detailed Results
    Speed at position 0% : 2GB/s (80MB/s - 2GB/s) (150%)
    Speed at position 3% : 1.32GB/s (46.44MB/s - 1.32GB/s) (98%)
    Speed at position 7% : 1.29GB/s (45.31MB/s - 1.29GB/s) (96%)
    Speed at position 10% : 1.28GB/s (44.64MB/s - 1.28GB/s) (95%)
    Speed at position 13% : 1.31GB/s (45.71MB/s - 1.31GB/s) (97%)
    Speed at position 17% : 1.29GB/s (45.13MB/s - 1.29GB/s) (96%)
    Speed at position 20% : 1.33GB/s (46.66MB/s - 1.33GB/s) (99%)
    Speed at position 23% : 1.29GB/s (45MB/s - 1.29GB/s) (96%)
    Speed at position 27% : 2GB/s (76.32MB/s - 2GB/s) (153%)
    Speed at position 30% : 1.29GB/s (44.74MB/s - 1.29GB/s) (96%)
    Speed at position 33% : 1.35GB/s (47.38MB/s - 1.35GB/s) (100%)
    Speed at position 37% : 1.3GB/s (45.26MB/s - 1.3GB/s) (96%)
    Speed at position 40% : 1.2GB/s (41.31MB/s - 1.2GB/s) (89%)
    Speed at position 43% : 1.21GB/s (42.14MB/s - 1.21GB/s) (90%)
    Speed at position 47% : 1.21GB/s (42MB/s - 1.21GB/s) (90%)
    Speed at position 50% : 1.2GB/s (41.65MB/s - 1.2GB/s) (89%)
    Speed at position 53% : 1.89GB/s (69.5MB/s - 1.89GB/s) (140%)
    Speed at position 57% : 1.22GB/s (42.31MB/s - 1.22GB/s) (91%)
    Speed at position 60% : 1.23GB/s (42.86MB/s - 1.23GB/s) (91%)
    Speed at position 63% : 1.24GB/s (43.09MB/s - 1.24GB/s) (92%)
    Speed at position 67% : 1.2GB/s (41.71MB/s - 1.2GB/s) (89%)
    Speed at position 70% : 1.2GB/s (45.38MB/s - 1.2GB/s) (89%)
    Speed at position 73% : 1.23GB/s (46.71MB/s - 1.23GB/s) (91%)
    Speed at position 77% : 1.23GB/s (46.47MB/s - 1.23GB/s) (91%)
    Speed at position 80% : 1.92GB/s (82.47MB/s - 1.92GB/s) (142%)
    Speed at position 83% : 1.22GB/s (47MB/s - 1.22GB/s) (91%)
    Speed at position 87% : 1.19GB/s (44.68MB/s - 1.19GB/s) (88%)
    Speed at position 90% : 1.21GB/s (46.29MB/s - 1.21GB/s) (90%)
    Speed at position 93% : 1.2GB/s (45.62MB/s - 1.2GB/s) (89%)
    Speed at position 97% : 1.3GB/s (49.91MB/s - 1.3GB/s) (96%)
    Speed at position 100% : 1.32GB/s (50MB/s - 1.32GB/s) (98%)
    Random Access Time : 38µs (38µs - 164µs)
    Full Stroke Access Time : 42µs (42µs - 219µs)
    
    Benchmark Status
    Result ID : INTEL SSDPED1D28 (280GB, PCIe2x32/NVMe)
    Firmware : 032503250325032503250325032503250325032503250325
    Computer : VMware Virtual Platform (Intel 440BX Desktop Reference Platform)
    Platform Compliance : x64
    System Timer : 1.86MHz
    Use Overlapped I/O : Yes
    I/O Queue Depth : 32 request(s)
    Block Size : 1MB
    Bytes Per Sector : 512bytes
    
    Volume Information
    Capacity : 260.83GB
    
    Physical Disk
    Model : INTEL SSDPED1D28
    Firmware : 0325
    Interface : PCIe/NVMe2
    Removable Drive : No
    Queueing On : Yes
    
    SiSoftware Sandra benchmark - write test (4kB blocks)
    Code:
    SiSoftware Sandra
    
    Benchmark Results
    Drive Score : 110.71MB/s
    Results Interpretation : Higher Scores mean Better Performance.
    Binary Numeral System (base 2) : 1GB(/s) = 1024MB(/s), 1MB(/s) = 1024kB(/s), 1kB(/s) = 1024 bytes(/s), etc.
    
    Benchmark Results
    Random Access Time : 41µs
    Results Interpretation : Lower Scores mean Better Performance.
    Decimal Numeral System (base 10) : 1s = 1000ms, 1ms = 1000µs, 1µs = 1000ns, etc.
    
    Benchmark Timings
    Time to Write Capacity : 42 minute(s), 10 second(s)
    Results Interpretation : Lower Scores mean Better Performance.
    
    Performance per Thread
    Drive Score : 3.46MB/s
    No. Threads : 32
    Results Interpretation : Higher Scores mean Better Performance.
    Binary Numeral System (base 2) : 1GB(/s) = 1024MB(/s), 1MB(/s) = 1024kB(/s), 1kB(/s) = 1024 bytes(/s), etc.
    
    Performance vs. Speed
    Drive Score : 7.09kB/s/rpm
    Random Access Time : 0.003µs/rpm
    Results Interpretation : Higher Scores mean Better Performance.
    
    Detailed Results
    Speed at position 0% : 106.83MB/s (1.65MB/s - 106.83MB/s) (96%)
    Speed at position 3% : 105.54MB/s (9.83MB/s - 105.54MB/s) (95%)
    Speed at position 7% : 106.36MB/s (1.76MB/s - 106.36MB/s) (96%)
    Speed at position 10% : 105.08MB/s (2.68MB/s - 105.08MB/s) (95%)
    Speed at position 13% : 106MB/s (10MB/s - 106MB/s) (96%)
    Speed at position 17% : 104.5MB/s (11.27MB/s - 104.5MB/s) (94%)
    Speed at position 20% : 103.48MB/s (7.39MB/s - 103.48MB/s) (93%)
    Speed at position 23% : 106.42MB/s (6.85MB/s - 106.42MB/s) (96%)
    Speed at position 27% : 130.85MB/s (19.17MB/s - 130.85MB/s) (118%)
    Speed at position 30% : 107.26MB/s (3.38MB/s - 107.26MB/s) (97%)
    Speed at position 33% : 106.54MB/s (1.71MB/s - 106.54MB/s) (96%)
    Speed at position 37% : 106.81MB/s (1.64MB/s - 106.81MB/s) (96%)
    Speed at position 40% : 107.4MB/s (1.7MB/s - 107.4MB/s) (97%)
    Speed at position 43% : 105.28MB/s (2.73MB/s - 105.28MB/s) (95%)
    Speed at position 47% : 105MB/s (10.54MB/s - 105MB/s) (95%)
    Speed at position 50% : 105.9MB/s (1.65MB/s - 105.9MB/s) (96%)
    Speed at position 53% : 219.67MB/s (37.36MB/s - 219.67MB/s) (198%)
    Speed at position 57% : 106.32MB/s (2.83MB/s - 115MB/s) (96%)
    Speed at position 60% : 102.51MB/s (9.66MB/s - 102.51MB/s) (93%)
    Speed at position 63% : 104MB/s (9.66MB/s - 104MB/s) (94%)
    Speed at position 67% : 105.28MB/s (11.41MB/s - 105.28MB/s) (95%)
    Speed at position 70% : 104.38MB/s (10.15MB/s - 104.38MB/s) (94%)
    Speed at position 73% : 105.19MB/s (10.08MB/s - 105.19MB/s) (95%)
    Speed at position 77% : 105.43MB/s (7.18MB/s - 105.43MB/s) (95%)
    Speed at position 80% : 122.48MB/s (27.56MB/s - 122.48MB/s) (111%)
    Speed at position 83% : 107.07MB/s (1.65MB/s - 107.07MB/s) (97%)
    Speed at position 87% : 106.48MB/s (1.71MB/s - 106.48MB/s) (96%)
    Speed at position 90% : 105.73MB/s (2.7MB/s - 105.73MB/s) (95%)
    Speed at position 93% : 106.28MB/s (1.63MB/s - 106.28MB/s) (96%)
    Speed at position 97% : 105.77MB/s (1.7MB/s - 105.77MB/s) (96%)
    Speed at position 100% : 106.25MB/s (3MB/s - 106.25MB/s) (96%)
    Random Access Time : 41µs (41µs - 155µs)
    Full Stroke Access Time : 43µs (43µs - 139µs)
    
    Benchmark Status
    Result ID : INTEL SSDPED1D28 (280GB, PCIe2x32/NVMe)
    Firmware : 0325032503250325032503250325032503250325032503250325
    Computer : VMware Virtual Platform (Intel 440BX Desktop Reference Platform)
    Platform Compliance : x64
    System Timer : 1.86MHz
    Use Overlapped I/O : Yes
    I/O Queue Depth : 32 request(s)
    Block Size : 4kB
    Bytes Per Sector : 512bytes
    
    Volume Information
    Capacity : 260.83GB
    
    Physical Disk
    Model : INTEL SSDPED1D28
    Firmware : 0325
    Interface : PCIe/NVMe2
    Removable Drive : No
    Queueing On : Yes
    
     
    #3
    Last edited: Nov 8, 2017
  4. marcoi

    marcoi Well-Known Member

    Joined:
    Apr 6, 2013
    Messages:
    1,334
    Likes Received:
    205
    Any thoughts or other testing requests while I have it in pass-through? I plan to test pass-through to a FreeNAS box to see what happens. I know others have had issues with pass-through.

    Can someone provide me with a fio command line to test the drive under Ubuntu?
     
    #4
    Last edited: Nov 8, 2017
  5. cheezehead

    cheezehead Active Member

    Joined:
    Sep 23, 2012
    Messages:
    697
    Likes Received:
    169
    When you're not running it in pass-through, what kind of VMDK performance do you get using the paravirtual controller?
     
    #5
  6. marcoi

    marcoi Well-Known Member

    Joined:
    Apr 6, 2013
    Messages:
    1,334
    Likes Received:
    205
    I have a few more tests to do in pass-through, then I'll switch it back and try this out.

    So on the ESXi host, do I need to configure anything for the paravirtual controller?
     
    #6
  7. cheezehead

    cheezehead Active Member

    Joined:
    Sep 23, 2012
    Messages:
    697
    Likes Received:
    169
    It's just the high-performance SCSI controller for the VM.
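    For reference, a minimal sketch of what the VM's .vmx entries end up looking like once a disk hangs off a paravirtual (PVSCSI) controller. The controller number and VMDK name below are hypothetical; in practice you just add a second SCSI controller in the host client, set its type to "VMware Paravirtual", and make sure VMware Tools is installed so the guest has the pvscsi driver.
    Code:
    scsi1.present = "TRUE"
    scsi1.virtualDev = "pvscsi"
    scsi1:0.present = "TRUE"
    scsi1:0.fileName = "optane-test.vmdk"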
     
    #7
  8. K D

    K D Well-Known Member

    Joined:
    Dec 24, 2016
    Messages:
    1,411
    Likes Received:
    300
    What are the dimensions of the drive? Is there any way it will fit in a Supermicro SYS-E300?
     
    #8
  9. marcoi

    marcoi Well-Known Member

    Joined:
    Apr 6, 2013
    Messages:
    1,334
    Likes Received:
    205
    I wish I had taken a pic of it installed, but it's small and it fit in my R720 without any issues. I would say it's about the size of a RAID card.
     
    #9
  10. coolrunnings82

    coolrunnings82 Active Member

    Joined:
    Mar 26, 2012
    Messages:
    395
    Likes Received:
    85
    Those results look like my U.2 P3700 in my R720. Mine has the card that goes to 4x front drive slots for U.2, which limits it to PCIe 2.0 instead of 3.0.
     
    #10
  11. marcoi

    marcoi Well-Known Member

    Joined:
    Apr 6, 2013
    Messages:
    1,334
    Likes Received:
    205
    Weird thing: my card is the PCIe add-in card version, and I confirmed from the BIOS that it's running at 3.0 x4. I think the next test is to install a temporary OS on the server and see whether it's a hardware limitation or the OS.
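    If that temp OS ends up being Linux, one way to double-check the negotiated link from inside the OS is lspci (a generic approach; the PCI address below is just an example and will differ on your system):
    Code:
    lspci | grep -i 'Non-Volatile memory'
    sudo lspci -vv -s 04:00.0 | grep -E 'LnkCap|LnkSta'
    A LnkSta line showing "Speed 8GT/s, Width x4" corresponds to PCIe 3.0 x4.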
     
    #11
  12. marcoi

    marcoi Well-Known Member

    Joined:
    Apr 6, 2013
    Messages:
    1,334
    Likes Received:
    205
    Tested with the paravirtual controller:
    upload_2017-11-9_11-17-47.png

    upload_2017-11-9_11-19-39.png


    Same drive but on the standard SCSI controller, to compare results:
    upload_2017-11-9_11-56-55.png

    Vs. pass-through from the prior test:
    Capture.JPG
     
    #12
    Last edited: Nov 9, 2017
    cheezehead likes this.
  13. vinceflynow

    vinceflynow New Member

    Joined:
    May 3, 2017
    Messages:
    29
    Likes Received:
    4
    Measuring random read/write IOPS
    Code:
    fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
    This will create a 4 GB file and perform 4KB reads and writes using a 75%/25% split within the file, with 64 operations running at a time.

    The output will look something like:
    Starting 1 process
    Jobs: 1 (f=1): [m] [100.0% done] [43496K/14671K /s] [10.9K/3667 iops] [eta 00m:00s]
    test: (groupid=0, jobs=1): err= 0: pid=31214: Fri May 9 16:01:53 2014
    read : io=3071.1MB, bw=39492KB/s, iops=9873 , runt= 79653msec
    write: io=1024.7MB, bw=13165KB/s, iops=3291 , runt= 79653msec
    cpu : usr=16.26%, sys=71.94%, ctx=25916, majf=0, minf=25
    IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
    submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
    issued : total=r=786416/w=262160/d=0, short=r=0/w=0/d=0
    Run status group 0 (all jobs):
    READ: io=3071.1MB, aggrb=39492KB/s, minb=39492KB/s, maxb=39492KB/s, mint=79653msec, maxt=79653msec
    WRITE: io=1024.7MB, aggrb=13165KB/s, minb=13165KB/s, maxb=13165KB/s, mint=79653msec, maxt=79653msec

    Measuring random read IOPS
    Code:
    fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randread
    Measuring random write IOPS
    Code:
    fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randwrite
    In general, you can play around with the parameters, changing bs, iodepth, etc., to tune the test. The I/O workload pattern can be changed via the readwrite parameter.

    From the fio man page, the accepted readwrite I/O patterns are:
    Code:
    readwrite=str, rw=str
           Type of I/O pattern.  Accepted values are:
                  read   Sequential reads.
                  write  Sequential writes.
                  trim   Sequential trim (Linux block devices only).
                  randread
                         Random reads.
                  randwrite
                         Random writes.
                  randtrim
                         Random trim (Linux block devices only).
                  rw, readwrite
                         Mixed sequential reads and writes.
                  randrw Mixed random reads and writes.
                  trimwrite
                         Trim  and  write  mixed  workload. Blocks will be trimmed first, then the same blocks will be
                         written to.
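    As an example of playing with those parameters, a sequential-throughput run (larger block size, sequential read pattern from the list above) could look like this; the values are just illustrative starting points, not recommendations:
    Code:
    fio --name=seqread --ioengine=libaio --direct=1 --bs=1M --iodepth=32 --size=4G --readwrite=read --filename=test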
     
    #13
  14. marcoi

    marcoi Well-Known Member

    Joined:
    Apr 6, 2013
    Messages:
    1,334
    Likes Received:
    205
    Testing using a 200GB virtual disk on the Optane datastore, added to an Ubuntu server VM under SCSI 0 and formatted as a single ext4 partition in Ubuntu.
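    For anyone reproducing this, the prep inside the guest amounts to partitioning, formatting, and mounting the virtual disk; a rough sketch (assuming the disk shows up as /dev/sdb, which matches the disk stats in the output below, and an arbitrary mount point):
    Code:
    sudo parted --script /dev/sdb mklabel gpt mkpart primary ext4 0% 100%
    sudo mkfs.ext4 /dev/sdb1
    sudo mkdir -p /mnt/optane
    sudo mount /dev/sdb1 /mnt/optane
    cd /mnt/optane
    The fio commands were then run from the mounted filesystem so the test file lands on the Optane-backed disk.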


    fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75

    Code:
    test: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
    test: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
    fio-2.16
    Starting 2 processes
    test: Laying out IO file(s) (1 file(s) / 4096MB)
    Jobs: 1 (f=1): [m(1),_(1)] [100.0% done] [179.6MB/61104KB/0KB /s] [45.1K/15.3K/0 iops] [eta 00m:00s]
    test: (groupid=0, jobs=1): err= 0: pid=5339: Thu Nov  9 16:08:40 2017
      read : io=3070.4MB, bw=99600KB/s, iops=24900, runt= 31566msec
      write: io=1025.8MB, bw=33274KB/s, iops=8318, runt= 31566msec
      cpu          : usr=8.68%, sys=85.08%, ctx=136662, majf=0, minf=8
      IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
         issued    : total=r=785996/w=262580/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
         latency   : target=0, window=0, percentile=100.00%, depth=64
    test: (groupid=0, jobs=1): err= 0: pid=5340: Thu Nov  9 16:08:40 2017
      read : io=3072.5MB, bw=110643KB/s, iops=27660, runt= 28435msec
      write: io=1023.7MB, bw=36862KB/s, iops=9215, runt= 28435msec
      cpu          : usr=8.67%, sys=82.31%, ctx=153927, majf=0, minf=7
      IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
         issued    : total=r=786533/w=262043/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
         latency   : target=0, window=0, percentile=100.00%, depth=64
    
    Run status group 0 (all jobs):
       READ: io=6142.8MB, aggrb=199268KB/s, minb=99600KB/s, maxb=110642KB/s, mint=28435msec, maxt=31566msec
      WRITE: io=2049.4MB, aggrb=66479KB/s, minb=33273KB/s, maxb=36862KB/s, mint=28435msec, maxt=31566msec
    
    Disk stats (read/write):
      sdb: ios=1566494/522834, merge=1/9, ticks=101848/36048, in_queue=136192, util=99.40%
    

    fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randread
    Code:
    test: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
    fio-2.16
    Starting 1 process
    Jobs: 1 (f=0): [f(1)] [100.0% done] [263.2MB/0KB/0KB /s] [67.4K/0/0 iops] [eta 00m:00s]
    test: (groupid=0, jobs=1): err= 0: pid=5402: Thu Nov  9 16:11:53 2017
      read : io=4096.0MB, bw=263909KB/s, iops=65977, runt= 15893msec
      cpu          : usr=12.23%, sys=87.58%, ctx=2422, majf=0, minf=74
      IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
         issued    : total=r=1048576/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
         latency   : target=0, window=0, percentile=100.00%, depth=64
    
    Run status group 0 (all jobs):
       READ: io=4096.0MB, aggrb=263908KB/s, minb=263908KB/s, maxb=263908KB/s, mint=15893msec, maxt=15893msec
    
    Disk stats (read/write):
      sdb: ios=1032273/0, merge=0/0, ticks=42976/0, in_queue=42544, util=99.37%
    
    fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randwrite
    Code:
    test: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
    fio-2.16
    Starting 1 process
    Jobs: 1 (f=1): [w(1)] [100.0% done] [0KB/246.3MB/0KB /s] [0/63.4K/0 iops] [eta 00m:00s]
    test: (groupid=0, jobs=1): err= 0: pid=5427: Thu Nov  9 16:13:16 2017
      write: io=4096.0MB, bw=220000KB/s, iops=55000, runt= 19065msec
      cpu          : usr=12.56%, sys=86.72%, ctx=14913, majf=0, minf=10
      IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
         issued    : total=r=0/w=1048576/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
         latency   : target=0, window=0, percentile=100.00%, depth=64
    
    Run status group 0 (all jobs):
      WRITE: io=4096.0MB, aggrb=220000KB/s, minb=220000KB/s, maxb=220000KB/s, mint=19065msec, maxt=19065msec
    
    Disk stats (read/write):
      sdb: ios=0/1038569, merge=0/3, ticks=0/52532, in_queue=51972, util=98.86%
    
     
    #14
  15. marcoi

    marcoi Well-Known Member

    Joined:
    Apr 6, 2013
    Messages:
    1,334
    Likes Received:
    205
    Same datastore/VM as above, now attached to SCSI 1 (paravirtual).
    Code:
    test: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
    test: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
    fio-2.16
    Starting 2 processes
    test: Laying out IO file(s) (1 file(s) / 4096MB)
    Jobs: 2 (f=2): [m(2)] [95.5% done] [268.8MB/90720KB/0KB /s] [68.8K/22.7K/0 iops] [eta 00m:01s]
    test: (groupid=0, jobs=1): err= 0: pid=1430: Thu Nov  9 16:17:38 2017
      read : io=3070.4MB, bw=150689KB/s, iops=37672, runt= 20864msec
      write: io=1025.8MB, bw=50341KB/s, iops=12585, runt= 20864msec
      cpu          : usr=10.47%, sys=81.15%, ctx=95628, majf=0, minf=10
      IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
         issued    : total=r=785996/w=262580/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
         latency   : target=0, window=0, percentile=100.00%, depth=64
    test: (groupid=0, jobs=1): err= 0: pid=1431: Thu Nov  9 16:17:38 2017
      read : io=3072.5MB, bw=157087KB/s, iops=39271, runt= 20028msec
      write: io=1023.7MB, bw=52335KB/s, iops=13083, runt= 20028msec
      cpu          : usr=9.54%, sys=80.11%, ctx=113337, majf=0, minf=9
      IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
         issued    : total=r=786533/w=262043/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
         latency   : target=0, window=0, percentile=100.00%, depth=64
    
    Run status group 0 (all jobs):
       READ: io=6142.8MB, aggrb=301481KB/s, minb=150689KB/s, maxb=157086KB/s, mint=20028msec, maxt=20864msec
      WRITE: io=2049.4MB, aggrb=100579KB/s, minb=50341KB/s, maxb=52335KB/s, mint=20028msec, maxt=20864msec
    
    Disk stats (read/write):
      sda: ios=1562967/521540, merge=22/4, ticks=466604/157132, in_queue=622340, util=99.55%
    
    Code:
    test: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
    fio-2.16
    Starting 1 process
    Jobs: 1 (f=1): [r(1)] [93.8% done] [310.6MB/0KB/0KB /s] [79.5K/0/0 iops] [eta 00m:01s]
    test: (groupid=0, jobs=1): err= 0: pid=1439: Thu Nov  9 16:19:04 2017
      read : io=4096.0MB, bw=282597KB/s, iops=70649, runt= 14842msec
      cpu          : usr=15.09%, sys=80.10%, ctx=11056, majf=0, minf=72
      IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
         issued    : total=r=1048576/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
         latency   : target=0, window=0, percentile=100.00%, depth=64
    
    Run status group 0 (all jobs):
       READ: io=4096.0MB, aggrb=282596KB/s, minb=282596KB/s, maxb=282596KB/s, mint=14842msec, maxt=14842msec
    
    Disk stats (read/write):
      sda: ios=1032232/2, merge=0/1, ticks=156188/0, in_queue=155800, util=99.01%
    
    Code:
    test: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
    fio-2.16
    Starting 1 process
    Jobs: 1 (f=1): [w(1)] [100.0% done] [0KB/322.3MB/0KB /s] [0/82.5K/0 iops] [eta 00m:00s]
    test: (groupid=0, jobs=1): err= 0: pid=1499: Thu Nov  9 16:19:54 2017
      write: io=4096.0MB, bw=306556KB/s, iops=76639, runt= 13682msec
      cpu          : usr=14.02%, sys=82.31%, ctx=8375, majf=0, minf=8
      IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
         issued    : total=r=0/w=1048576/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
         latency   : target=0, window=0, percentile=100.00%, depth=64
    
    Run status group 0 (all jobs):
      WRITE: io=4096.0MB, aggrb=306556KB/s, minb=306556KB/s, maxb=306556KB/s, mint=13682msec, maxt=13682msec
    
    Disk stats (read/write):
      sda: ios=0/1045793, merge=0/2, ticks=0/154132, in_queue=153764, util=99.02%
    
     
    #15
  16. vinceflynow

    vinceflynow New Member

    Joined:
    May 3, 2017
    Messages:
    29
    Likes Received:
    4
    Extracting some info from your tests ..

    Random read/write (75%/25%) at 4KB and 64 iodepth:
    read : bw=157087KB/s (157MB/s), iops=39271
    write: bw=52335KB/s (52MB/s), iops=13083

    This random read/write test is a rough approximation of a database workload. These results show 39,271 read operations per second and 13,083 write operations per second. For comparison, a high-end ($$$) local SAS SSD might reach 40,000 and 10,000 respectively if the system is lightly loaded. A local non-SSD (spinning rust) will probably get somewhere around 500 read / 200 write.

    The pure random read and pure random write tests are even better.

    Random read (100%) at 4K and 64 iodepth:
    read: bw=282597KB/s (282MB/s), iops=70649

    Random write (100%) at 4K and 64 iodepth:
    write: bw=306556KB/s (306MB/s), iops=76639

    I would say that, for a paravirtualized disk, the Optane 900P is on par with a local high-end SSD.
     
    #16
    marcoi likes this.
  17. marcoi

    marcoi Well-Known Member

    Joined:
    Apr 6, 2013
    Messages:
    1,334
    Likes Received:
    205
    OK, I did a bare-metal install of Windows 10 to check the drive outside ESXi. I installed the Intel drivers. Fresh install of W10 Pro, not updated.

    It's connecting at PCIe 3.0 x4
    upload_2017-11-10_13-1-46.png

    Test results
    upload_2017-11-10_13-7-32.png

    Seems like pass-through comes closest to native performance, then paravirtual. Still not sure why this drive isn't able to hit the ~500 MB/s range for 4K Q32T1 that's been seen on other 900P drives. Could it be due to the capacity, 280GB vs. 480GB?
     
    #17
  18. vrod

    vrod Active Member

    Joined:
    Jan 18, 2015
    Messages:
    233
    Likes Received:
    33
    You should try the NVMe controller instead of the paravirtual one. It's made for backend NVMe storage.
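    A sketch of what the .vmx might contain with a virtual NVMe controller, assuming the VM is at hardware version 13 or later (the exact keys and the VMDK name here are assumptions; adding an NVMe controller through the host client UI is the supported route):
    Code:
    nvme0.present = "TRUE"
    nvme0:0.present = "TRUE"
    nvme0:0.fileName = "optane-test.vmdk"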
     
    #18
  19. i386

    i386 Well-Known Member

    Joined:
    Mar 18, 2016
    Messages:
    1,676
    Likes Received:
    409
    Increase the thread count in CDM
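    On the Linux/fio side, the equivalent of raising the thread count is numjobs; a sketch of a 4-worker 4K random-read run at QD32 per worker (so 128 outstanding I/Os in total), with combined reporting:
    Code:
    fio --name=qd-scaling --ioengine=libaio --direct=1 --gtod_reduce=1 --bs=4k --iodepth=32 --numjobs=4 --group_reporting --size=4G --readwrite=randread --filename=test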
     
    #19
  20. vinceflynow

    vinceflynow New Member

    Joined:
    May 3, 2017
    Messages:
    29
    Likes Received:
    4
    Yes, something is not quite right with the CrystalDiskMark 4K Q32T1 results for the 900P on bare metal. I've seen a Samsung 960 PRO M.2 NVMe, with the Samsung NVMe driver, hit around 360 MB/s on the 4K Q32T1 test.

    Are Win 10 and CrystalDiskMark installed on a separate boot drive?
     
    #20
