We had previously been running FreeNAS, but I wasn't getting the kind of performance I wanted from it, so I stripped it down and installed Linux. The plan is to tune the RAID volumes for maximum throughput, then use iSCSI with SNS iSANmp to manage the shares. All the hardware is set up and running, so I sat down yesterday to do some initial testing.
It's using an older Adaptec 52445 card that we previously had in a workstation, where we were getting over 1.5GB/s for reads and writes as DAS. The SAN has a total of 128TB of drives (all 8TB WD Reds). (Other specs: Supermicro X10 motherboard, 64GB ECC RAM, Xeon E5-1620 v3, Chelsio 40GbE NIC, IBM 40GbE switch.)
Because we'll be accessing this storage from Windows and Macs, I'm doing my initial I/O testing with the AJA Disk Speed application, as it simulates our usage pretty nicely. And here's where it gets weird...
In my initial setup I have 4x 8TB drives in a RAID 0. This is just to start benchmarking the hardware in a best-case scenario, before we make RAID 5 pools, which I expect will be slower. In any case, I'm getting solid 800MB/s writes on this setup over SMB from Windows. Not bad for a 4-drive array. In the AJA tool, I get these speeds regardless of the size of the test set (so it's the same whether it's a 4GB test file or a 64GB test file).
My read speeds are in line with writes - basically the same numbers - UNTIL I make the test set 64GB. Then the read speed drops to about 70MB/s, roughly 11x slower than writes. Occasionally there are faster bursts up to about 200MB/s, but they don't last.
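One thing I still need to rule out is kernel read-ahead on the Linux side, since a small read-ahead setting mainly hurts exactly this kind of large sequential read. A quick way to inspect it, assuming the array shows up as a standard block device under /sys/block:

```shell
# Print the kernel read-ahead setting for every block device, in KiB.
# A default like 128 KiB can be too small for big sequential streams;
# it can be raised with e.g. `blockdev --setra 16384 /dev/sdX` as root.
for q in /sys/block/*/queue/read_ahead_kb; do
  [ -e "$q" ] || continue   # skip if the glob matched nothing
  printf '%s: %s KiB\n' "$q" "$(cat "$q")"
done
```

(The device names and the 16384-sector example value are illustrative; the right setting depends on the stripe size and workload.)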
I've tried enabling and disabling read caching on the card, but I get basically the same numbers either way. It's also worth noting that in the AJA tool I've tested both single files and file sequences (that is, one 64GB file vs. a couple thousand smaller ones that amount to 64GB).
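To figure out whether the slowdown is the array itself or the SMB path, I can also benchmark a large sequential read locally on the server. A rough sketch with dd - TESTDIR is a placeholder and should point at a directory on the RAID 0 volume (e.g. /mnt/raid0); note that without dropping the page cache first, a re-read may come from RAM rather than the disks:

```shell
# Write a test file, then read it back sequentially; dd reports MB/s on its
# last status line. TESTDIR is an assumed path -- set it to the array mount.
TESTDIR="${TESTDIR:-/tmp}"
dd if=/dev/zero of="$TESTDIR/probe.bin" bs=1M count=64 conv=fsync 2>&1 | tail -n1
# For a cold read, drop caches first (as root): echo 3 > /proc/sys/vm/drop_caches
dd if="$TESTDIR/probe.bin" of=/dev/null bs=1M 2>&1 | tail -n1
rm -f "$TESTDIR/probe.bin"
```

If the local read holds up at a 64GB size but the SMB read doesn't, that points at the network/SMB layer rather than the controller or drives.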
Any idea what might be happening here? I'd expect to see slightly faster reads than writes if anything, so something is clearly wrong.