LSI 9240-8i: RAID 10 slow performance

voodooFX

Active Member
Jan 26, 2014
New day, new performance problem :)

LSI 9240-8i (latest firmware) with 6x 2.0 TB drives (Seagate Constellation ES) in RAID 10 (three 2-disk RAID 1 mirrors striped together as RAID 0). Stripe size is 64 KB (the maximum allowed).

Sequential performance: ~200 MB/s (read and write), often lower...

I don't know what to tweak, because almost all the virtual drive parameters allow only a single value, so there isn't much to choose from...

To rule out the controller, I created a RAID 0 virtual drive with all the drives, and the speed is OK, about 700 MB/s.
So what's wrong with RAID 10?
I was also surprised to see that the controller is not smart enough to read from all 6 drives; it only reads from 3. :(
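
For reference, a minimal sketch in Python of the kind of sequential-throughput check behind the numbers in this thread. The test path is hypothetical; the 4 GB size matches the tests quoted later on.

Code:
#!/usr/bin/env python3
"""Rough sequential-write throughput check (sketch only)."""
import os
import time

PATH = "/mnt/array/testfile"  # hypothetical; point this at the RAID volume
BLOCK = 1 << 20               # 1 MiB per write() call
TOTAL = 4 << 30               # 4 GiB total, matching the 4 GB tests below

def seq_write(path):
    buf = os.urandom(BLOCK)
    # O_SYNC forces writes through to the array so the page cache does
    # not inflate the result (O_DIRECT would be stricter, but requires
    # aligned buffers).
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o644)
    written = 0
    start = time.monotonic()
    while written < TOTAL:
        written += os.write(fd, buf)
    elapsed = time.monotonic() - start
    os.close(fd)
    return written / elapsed / 1e6  # MB/s

if __name__ == "__main__":
    print(f"sequential write: {seq_write(PATH):.0f} MB/s")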
 

Rhinox

Member
May 27, 2013
The 9240 controller does not have any onboard cache. If you use RAID 1 (or RAID 10), I think the controller must compare the data written to both mirrors, which might be quite difficult without a cache.

With RAID 0 there is nothing to compare; it's the mirroring that's causing problems. I would not be surprised if you got quite low RAID 1 performance even compared to a single disk...
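
To put rough numbers on that, a back-of-envelope sketch assuming each Constellation ES sustains about 120 MB/s sequentially (in line with the single-drive results posted later in this thread):

Code:
per_disk = 120  # MB/s, assumed sustained sequential rate per drive

raid0_read   = 6 * per_disk  # ~720 MB/s, matches the ~700 MB/s RAID 0 test
raid10_write = 3 * per_disk  # ~360 MB/s, each block lands on one mirror pair
raid10_read6 = 6 * per_disk  # ~720 MB/s, if reads were spread over all 6 drives
raid10_read3 = 3 * per_disk  # ~360 MB/s, reading only one side of each mirror

print(raid10_write, raid10_read3)  # both well above the ~200 MB/s reported

Even the pessimistic read-from-one-side figure is well above the ~200 MB/s reported, which suggests the cacheless controller, rather than the disks, is the bottleneck.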
 

voodooFX

Active Member
Jan 26, 2014
If this is true, I can't really understand what this controller is made for.
 

neo

Well-Known Member
Mar 18, 2015
voodooFX said:
> If this is true, I can't really understand what this controller is made for.
RAID 0 seems like the only safe bet, or flashing it to IT mode and using a filesystem's RAID 10 function (ZFS, Btrfs, etc.).
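
As a sketch of that filesystem-level RAID 10 (assuming the HBA passes the disks through and ZFS is installed; the pool name and device names here are hypothetical):

Code:
# Builds "zpool create tank mirror sda sdb mirror sdc sdd mirror sde sdf":
# three 2-way mirrors striped together, i.e. ZFS's equivalent of RAID 10.
import subprocess

disks = ["sda", "sdb", "sdc", "sdd", "sde", "sdf"]  # hypothetical names
cmd = ["zpool", "create", "tank"]
for a, b in zip(disks[0::2], disks[1::2]):
    cmd += ["mirror", f"/dev/{a}", f"/dev/{b}"]
subprocess.run(cmd, check=True)  # run as root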
 

canta

Well-Known Member
Nov 26, 2014
neo said:
> RAID 0 seems like the only safe bet, or flashing it to IT mode and using a filesystem's RAID 10 function (ZFS, Btrfs, etc.).
If you are on Linux, there is no need to flash it to 9211 (IT) firmware: all drives can be treated as JBOD, no need to mess around.
 

voodooFX

Active Member
Jan 26, 2014
OK, in the end I found the optimal solution: all drives in JBOD -> datastore -> FreeNAS (VM)

I also did a little test: single-drive RAID 0 vs. JBOD drive

Write (seq. 4 GB)
R0: 115 MB/s
JB: 133 MB/s

Read (seq. 4 GB)
R0: 118 MB/s
JB: 137 MB/s

Latency (ioping): no difference
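
For anyone who wants to reproduce the latency check without ioping, a rough Python sketch; the device path is hypothetical and it needs root:

Code:
# Rough ioping-style probe: time random 4 KiB reads from the raw device.
# The page cache is not bypassed (no O_DIRECT), so treat the result as
# indicative only.
import os, random, time

DEV = "/dev/sdb"  # hypothetical JBOD disk
fd = os.open(DEV, os.O_RDONLY)
size = os.lseek(fd, 0, os.SEEK_END)
samples = []
for _ in range(100):
    offset = random.randrange(0, size - 4096) & ~4095  # 4 KiB aligned
    os.lseek(fd, offset, os.SEEK_SET)
    t0 = time.monotonic()
    os.read(fd, 4096)
    samples.append((time.monotonic() - t0) * 1000.0)
os.close(fd)
print(f"avg latency: {sum(samples) / len(samples):.2f} ms")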

Thank you all for the suggestions :)
 

canta

Well-Known Member
Nov 26, 2014
voodooFX said:
> OK, in the end I found the optimal solution: all drives in JBOD -> datastore -> FreeNAS (VM)
>
> I also did a little test: single-drive RAID 0 vs. JBOD drive
>
> Write (seq. 4 GB)
> R0: 115 MB/s
> JB: 133 MB/s
>
> Read (seq. 4 GB)
> R0: 118 MB/s
> JB: 137 MB/s
>
> Latency (ioping): no difference
>
> Thank you all for the suggestions :)
Did you test RAID 1 on a 9260 or 9265? I am just curious.

Thanks!
 

voodooFX

Active Member
Jan 26, 2014
Hi, I ended up using all my drives on the 9240 as JBOD.
Now the 9240 has just two Intel 730 480 GB SSDs connected, and the 9260 has 8x Constellation ES 2.0 TB in RAID 50 (which gives me about 1 GB/s read/write :cool:). I hope this is really my final stable configuration, because I have been mixing and testing stuff for weeks...

canta: the test was on the 9240
 

canta

Well-Known Member
Nov 26, 2014
voodooFX said:
> Hi, I ended up using all my drives on the 9240 as JBOD.
> Now the 9240 has just two Intel 730 480 GB SSDs connected, and the 9260 has 8x Constellation ES 2.0 TB in RAID 50 (which gives me about 1 GB/s read/write :cool:). I hope this is really my final stable configuration, because I have been mixing and testing stuff for weeks...
>
> canta: the test was on the 9240
Thanks...
I'll just stick with the 9240 for RAID 1 on Proxmox; I'm very satisfied with the speed. Two RAID 1 arrays: one of 240 GB SSDs (where Proxmox resides, along with some small VMs) and one of 750 GB WD Blacks (for a big VM and a ZoneMinder VM with 4x 720p + 2x 480p cameras).
The good thing is that I can pass a command via smartctl to set APM=254 on those WD Blacks.
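
For reference, a minimal sketch of issuing that from Python; the device names are hypothetical, and it assumes a smartmontools build whose smartctl supports the "-s apm,<level>" setter:

Code:
# Sketch: raise APM to 254 (maximum performance without disabling APM)
# on the WD Blacks so they stop parking heads aggressively.
# Device names are examples; run as root.
import subprocess

for dev in ("/dev/sdc", "/dev/sdd"):  # hypothetical WD Black pair
    subprocess.run(["smartctl", "-s", "apm,254", dev], check=True)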