I'm going crazy over here troubleshooting a performance issue that I can't seem to iron out. Wondering if I am overlooking something simple.
Configurations:
Hard Drives: 24x Seagate 10TB 7.2K SAS drives, connected at 12Gb/s – Model: ST10000NM0096 (revision E001)
HBA: Dell PERC H830 – latest firmware (25.5.0.0018) (Dell's rebranded LSI SAS3108 chipset)
- Have 2 cards, both with the same configuration; wiped and recreated the config – same results
Server: Dell R620 (2x servers attempted)
Chassis: CSE-847E1C-R1K28JBOD
OS: Windows Server 2016 Standard – fully patched, vanilla configuration
RAID configuration tested:
- RAID 10 (24x drives)
- Block size: 1MB
Performance: subpar 600-700MB/s
Performance tests with DiskBench / IOMeter both show 600-700MB/s (see the quick multi-stream read sketch after this list)
Copy/paste between two virtual disks in Windows: the same, if not slower
Across the network: the same
Disk queue length is very low – 0.x-1.5 under heavy load (basic data copy)
CPUs: 2x quad-core Xeon E5 (8 cores total)
196GB RAM – swapped out with smaller amounts for ECC memory tests; all came back clean
Tried different SAS DAC cables between the controller and the JBOD
Drives show up in OpenManage as connected at 12Gb/s – everything is green and looks happy
Tried a few different driver versions as well
Swapped the HBA with another working card of the exact same model
Swapped servers with another R620 with a similar CPU/RAM configuration
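If it helps anyone sanity-check the queue-depth angle, something like this rough Python sketch can fire several concurrent 1MB sequential read streams at the volume and report aggregate MB/s – a quick cross-check on whether more outstanding I/O pushes past the 600-700MB/s wall. The path and thread count are placeholders, and the OS cache will inflate the numbers unless the test file is much larger than RAM; DiskBench/IOMeter remain the primary tools.

import os
import threading
import time

TEST_FILE = r"E:\bench\testfile.bin"   # placeholder: large pre-created file on the RAID 10 volume
CHUNK = 1024 * 1024                    # 1MB reads, matching the tested block size
THREADS = 8                            # number of concurrent sequential streams

def reader(offset, length, results, idx):
    # Read `length` bytes sequentially starting at `offset`.
    done = 0
    with open(TEST_FILE, "rb", buffering=0) as f:
        f.seek(offset)
        while done < length:
            data = f.read(min(CHUNK, length - done))
            if not data:
                break
            done += len(data)
    results[idx] = done

def run():
    size = os.path.getsize(TEST_FILE)
    span = size // THREADS
    results = [0] * THREADS
    threads = [threading.Thread(target=reader, args=(i * span, span, results, i))
               for i in range(THREADS)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - start
    total_mb = sum(results) / (1024 * 1024)
    print(f"{total_mb:.0f} MB in {elapsed:.1f}s -> {total_mb / elapsed:.0f} MB/s")

if __name__ == "__main__":
    run()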
We called Supermicro and asked for help, since we suspect this could be an enclosure issue – perhaps the backplane firmware or some kind of bottleneck on the JBOD itself. IPMI is basically useless here: it has no configurable functions that pertain to this, and it can't even tell me what the hell is going on.
What do you guys think? The Supermicro whitepaper says I should be getting 2.4GB/s average with 24 drives at minimum, and I'm getting nowhere near that. I HAVE to put this thing in production soon.
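For reference, here's the back-of-envelope math behind that 2.4GB/s figure, assuming roughly 200MB/s sustained sequential per 7.2k drive (my rough assumption, not the ST10000NM0096 spec sheet number):

drives = 24
per_drive_mb_s = 200           # assumed sustained sequential rate per 7.2k drive
mirror_pairs = drives // 2     # RAID 10: 12 mirrored pairs, striped together

# Writes hit each mirror pair once, so ~12 spindles' worth of bandwidth;
# reads can pull from both copies, so this is a floor rather than a ceiling.
expected_mb_s = mirror_pairs * per_drive_mb_s
print(f"Expected: ~{expected_mb_s} MB/s")                                       # ~2400 MB/s
print(f"Observed 650 MB/s is ~{650 / per_drive_mb_s:.1f} drives' worth of throughput")

By that math, 600-700MB/s is only three or four drives' worth of bandwidth actually making it through, which is why I keep coming back to a bottleneck somewhere between the controller and the drives.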