LSI 9260-8i - poor random read/write performance with SSDs?


lunadesign

Active Member
Aug 7, 2013
256
34
28
What sort of random read/write performance are people seeing with SSDs connected to the LSI 9260-8i?

I've been testing a *single* Plextor M5 Pro SSD on a Supermicro X9SRE-F:
1) Connected to the onboard SATA III controller in AHCI mode
2) Connected to the LSI 9260-8i in RAID 0 with no read-ahead or write caching

I ran a bunch of Iometer tests at queue depths ranging from 1-32 and found:
-- 4K Random Read: LSI card has 43-73% less IOPS than onboard
-- 4K Random Write: LSI card has 41-63% less IOPS than onboard

Is this normal or am I doing something wrong?

If anyone wants to run a quick comparison, I can post my Iometer .ICF file.
 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
nope. How is linear? That is what matters when you publish your numbers to others ;) lol
 

lunadesign

Active Member
Aug 7, 2013
256
34
28
nope. How is linear? That is what matters when you publish your numbers to others ;) lol
I'm not quite sure I understand your question... I think you're asking how the numbers vary at different queue depths? If so, here's what I'm seeing:

4K Random Read:
QD1 = 19786 8197 --> 59% decrease
QD2 = 41159 21510 --> 48% decrease
QD4 = 74940 34412 --> 54% decrease
QD8 = 96529 26066 --> 73% decrease
QD16 = 97143 42625 --> 56% decrease
QD32 = 97162 55212 --> 43% decrease

4K Random Write:
QD1 = 22583 8327 --> 63% decrease
QD2 = 36791 21534 --> 41% decrease
QD4 = 58101 32897 --> 43% decrease
QD8 = 69692 26999 --> 61% decrease
QD16 = 75976 39070 --> 49% decrease
QD32 = 82801 34771 --> 58% decrease

In each case, the onboard IOPS are listed first, then the LSI 9260-8i IOPS.

I'm using the same SSD in both tests and performed a secure erase before each round of testing.
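
For anyone who wants to re-check the percentages, here's a small Python sketch that recomputes the decrease figures from the random-read pairs above (the write table works the same way):

```python
# Recompute the "% decrease" figures from the posted Iometer IOPS numbers.
# Each pair is (onboard AHCI IOPS, LSI 9260-8i IOPS) at that queue depth.
random_read = {
    1: (19786, 8197),
    2: (41159, 21510),
    4: (74940, 34412),
    8: (96529, 26066),
    16: (97143, 42625),
    32: (97162, 55212),
}

def pct_decrease(onboard: int, lsi: int) -> int:
    """Percent drop going from the onboard controller to the LSI card."""
    return round(100 * (onboard - lsi) / onboard)

for qd, (onboard, lsi) in random_read.items():
    print(f"QD{qd}: {pct_decrease(onboard, lsi)}% decrease")
```

Running this reproduces the posted 59/48/54/73/56/43% figures for 4K random read.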
 

mobilenvidia

Moderator
Sep 25, 2011
1,956
212
63
New Zealand
The 9260 is a RAID caching controller, it's designed to run with cache
If you want better performance with no cache a SAS2008 in IT mode would be a better choice.
Or a SAS2108 which at least can make the Drive JBOD.

You are comparing apples with oranges
 

lunadesign

Active Member
Aug 7, 2013
256
34
28
The 9260 is a RAID caching controller, it's designed to run with cache
If you want better performance with no cache a SAS2008 in IT mode would be a better choice.
Or a SAS2108 which at least can make the Drive JBOD.

You are comparing apples with oranges
I understand it's designed to run with cache. I had all the advanced features like the write back cache and read-ahead turned on and wasn't seeing great performance, so I turned them off to simplify the situation and minimize the variables in the hopes of finding the cause of the performance issues. It seems like the "base" performance is weak, so even if I turn those advanced features on, I'll still be held back by the "base" performance.

What's interesting is that I ran the same Iometer scripts with read-ahead and write back enabled yesterday and saw the random read/write performance actually get worse. This could mean that read-ahead and write back aren't helpful for the particular workloads that I've got Iometer generating. Or it could mean I've got serious problems with this card (although I've already tried another and saw the same results). I'm not sure.
 

lunadesign

Active Member
Aug 7, 2013
256
34
28
SSD's
Write back on (always)
IO Direct
No Read Ahead
Cache (Drive) off (840 pros like this on)

works best for me.
Good to know. So other than the write back setting, we've got the same. But even if you had that off, would you expect random reads/writes to be half as fast as an onboard AHCI controller? I'm guessing no but since I don't know what's reasonable to expect, I'm looking for guidance from people with a lot more experience with LSI cards than I have.
 

mobilenvidia

Moderator
Sep 25, 2011
1,956
212
63
New Zealand
LSI controllers aren't faster in normal situations,
as they add a layer between drive and OS

BUT Intel's Mobo's until LGA1150 had only 2x SATA3 ports
LSI controllers have up to 8x normally
Here is where LSI came into its own.
It will do it again shortly with SATA4

<= than 2x SATA3 drives use Intel
2+ (or 6+ LGA1150) use LSI or other SAS2/SATA3 controllers

RAID is where the LSI 9260 excels,
being on 24/7 in a server situation.
 

iq100

Member
Jun 5, 2012
68
3
8
I am in contact with LSI technical support. They sent me an email stating there is a defect in the LSI HBA. The defect is that it does NOT allow a write policy of Write Back. LSI's (Atlanta, GA) technical manager of support said he requested an engineering change to allow users of the LSI HBA to enable Write Back.

My reading of these things is that for 4K, QD=1, writes, the SSD controller (NOT the LSI HBA) needs to see more than one write request in order to write in a parallel/overlapped fashion to its multichannel flash chips. Until LSI allows enabling Write Back (Windows Device Manager says LSI has disabled use of write back, aka 'write caching on the drive'), 4K, QD=1, writes are serialized by LSI HBA, meaning LSI has taken away the multichannel overlapped SSD writing the buyer has bought. In my case that causes the AS SSD and Anvil QD=1 4k writes to be approx 2MBytes/sec. Taking the LSI HBA out of the equation, with write back, aka Windows write caching, I get 50 MBytes/sec with SSD connected to old 3gbps sata2 Dell Poweredge T110 Server motherboard.

As most people here know, there are two controllers involved in this data path. LSI's and the SSD's. There are two possible write policies: WriteThrough and WriteBack (aka Windows 'write caching'). Write Through waits for the SSD controller to complete the write to its flash chips BEFORE making available to the SSD subsystem the next write request. Writeback tells the host OS to send another write request as soon as the LSI controller has sent the first write request on to the SSD subsystem. In my case, my Samsung 840 Pro has its own 512MByte ram to be used for its own controller.
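
The write-through vs. write-back distinction above can be illustrated with a deliberately simplified toy model. This is NOT LSI firmware behavior and the constants are illustrative assumptions only; it just shows why serializing QD=1 writes idles the SSD's parallel channels:

```python
# Toy model (not LSI's actual firmware): with write-through the host waits
# for each flash program to finish before issuing the next 4K request, so
# only one channel is ever busy. With write-back the drive's RAM buffers
# requests and keeps all channels working. Constants are assumptions.
FLASH_PROGRAM_MS = 0.5   # assumed NAND program latency per 4K page
CHANNELS = 8             # assumed parallel flash channels in the SSD

def throughput_mb_s(write_back: bool) -> float:
    """4K write throughput under each policy in this toy model."""
    iops = 1000 / FLASH_PROGRAM_MS        # one outstanding write at a time
    if write_back:
        iops *= CHANNELS                  # buffering keeps every channel busy
    return iops * 4 / 1024                # 4 KiB per write -> MB/s

print(f"write-through: {throughput_mb_s(False):.1f} MB/s")
print(f"write-back:    {throughput_mb_s(True):.1f} MB/s")
```

The model gives roughly an 8x gap, which is the same order of magnitude as the ~2 MB/s vs. ~50 MB/s difference reported in this thread.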

In my case, 9212 4i4e, I tried to use LSI wdcfg (a command line interface LSI provided as part of its Windows driver package in order to comply with Microsoft certified driver requirements). I launched wdcfg in a cmd.exe window:
>wdcfg -s wb=1
>wdcfg -a
Although wdcfg said the change to writeback would be used with next restart of driver, rebooting did NOT cause writeback.
LSI support wrote that they verified this defect using Seagate Enterprise SSDs as well, and that it will be fixed ASAP.
 
Last edited:

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
Don't hold your breath.

If you get a cheap lsi 9260/m5014/m5015 - this problem does not exist.

The firmware was intentionally set to block changing of disk write cache. Which can be fixed with MEGASCU :)

Our server fleet is all samsung SSD 840 PRO, they all have physical drive write cache enabled. They are solid.

Curious this problem seems to have surfaced about the time that LSI bought sand force. No problems with LSI Sandforce controllers ;)

hmmmmmm.
 

iq100

Member
Jun 5, 2012
68
3
8
As always, thanks for your response mrkrad.
You wrote>"... The firmware was intentionally set to block changing of disk write cache. Which can be fixed with MEGASCU "

According to mobilenvidia this is NOT correct. My 9212 4i4e is a LSI HBA card, not one of LSI MegaRaid cards.
Mobilenvidia stated MEGASCU does not work with HBA cards, like mine.

mrkrad, I will play guinea pig.
If you will reply here with keystrokes/mouse clicks starting from Windows 7 Pro, or from a bootable DOS USB stick to force WRITEBACK on my LSI 9212 4i4e, then I will try that in my 9212 4i4e.
Remember I have P16 IR firmware and two Samsung 840Pros in raid0.

If you are correct, mrkrad, will Windows Device Manager write policy then show that the writeback check box, the box Microsoft labels 'enable write caching on the device' works?
Currently the Microsoft DM Policies tab reads "This device does not allow its write-caching setting to be changed".

If you reply today, I will test now. Thanks.
 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
You have some issues with how this all works:

1. The raid card you have has zero cache. It doesn't read nor write cache.
2. Windows device manager write policy is saying "LSI Driver is in control of this setting and it has no function any more".
3. The only place for caching to occur in this case is on the 840 drive itself.
4. Typically you enable the DISK CACHE POLICY using MSM - but it can be done on the command line equally. This is the only setting you should worry about. It must be enabled or the Samsung drive is crippled.
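
For the command-line route mentioned in point 4, the usual MegaCLI invocation is sketched below. Treat the flag names as assumptions to verify: the binary name (MegaCli, MegaCli64, megacli) and flag casing vary by platform and package, and `build_enable_dskcache_cmd` is just a hypothetical helper for illustration:

```python
# Sketch only: build the MegaCLI argv that enables the physical drive
# cache on all logical drives of all adapters. Flag names follow LSI's
# MegaCLI documentation but should be checked against `MegaCli -h` on
# your system before running anything.
def build_enable_dskcache_cmd(binary: str = "MegaCli") -> list:
    """Return the argv for enabling the disk (drive) cache policy."""
    return [binary, "-LDSetProp", "-EnDskCache", "-LAll", "-aAll"]

cmd = build_enable_dskcache_cmd()
print(" ".join(cmd))
# To actually run it (needs the MegaCLI package and admin rights):
#   subprocess.run(cmd, check=True)
```

The same `-LDSetProp` verb with `-DisDskCache` reverses the setting.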

Here is MSM:

 
Last edited:

mobilenvidia

Moderator
Sep 25, 2011
1,956
212
63
New Zealand
You don't need MegaSCU on the command line to enable the drive cache
It's in MegaCLI and StorCLI as well
But if LSI has disabled it deeper down it won't work

Any 'Mega' utility is for MegaRAID cards 9260/65/70 etc etc

HBA's need SAS2FLASH (sas2flsh DOS) or LSIUTIL
Neither of which can enable drive cache
The only way is via MSM

Lunadesign, ignore the rantings above about the 9212 as this is completely unrelated to your situation
The LSI 9260 and 9212 are like chalk and cheese; the only thing they share is the ability to set the drive cache
 
Last edited:

iq100

Member
Jun 5, 2012
68
3
8
Thanks for your response, mrkrad :).

You wrote nothing about MEGASCU being usable on an HBA like my LSI 9212 4i4e. Are you retracting that statement? You only showed MSM screenshots.
Is Mobilenvidia correct that MEGASCU can only be used for MegaRaid cards, not HBAs?

You wrote "... 1. The raid card you have has zero cache. It doesn't read nor write cache."
As I wrote in my earlier post, I know the HBA has no ram, so no cache structure can physically be stored on LSI 9212 4i4e PCI-e card.
LSI calls this setting Disk Cache Policy.

BUT the Samsung 840 Pro has 512 MB RAM. That is the RAM we want to use to store multiple 4K QD=1 Write requests.
When the Samsung 840 Pro is connected to my Dell T110 3gbps sata2 connector, Windows 7 Device Manager allows checking the box called 'Enable Write Caching On The Device"
Since the LSI HBA card is not involved at all when the SSD is connected directly to the sata2 motherboard, "DEVICE" as used here by Microsoft MUST MEAN using the Samsung 840 Pro 2.5 inch subsystem's 512MB RAM (well, not all of it).
In spite of the connection being 3gbps sata2, both AS SSD and Anvil report aprox 50 MByte/sec with Microsoft's Enabled write caching on the device (aka writeback).
This shows that if the upstream system will stream multiple 4K Write requests to the Samsung 840 Pro, Samsung can use its own 512 MB RAM and controller to simultaneously drive multiple channels and cause data to be stored on the physical flash chips in a parallel, overlapped fashion. This is crucial to QD=1, 4K Write performance and is in fact part of what the customer is paying for when buying the Samsung 840 Pro controller and storage subsystem.

When two of my Samsung 840 Pros are connected to LSI 9212 4i4e PCI-e card 6gbps sata3 connectors, Windows 7 Pro Device Manager does NOT allow checking the box Microsoft calls 'Enable Write Caching On The Device".
Instead Windows 7 Pro Device Manager says "THIS DEVICE DOES NOT ALLOW ITS WRITE-CACHING SETTING TO BE CHANGED"

MSM uses two different terms of art:
SSD Cache Policy (SCP)
Disk Cache Policy (DCP)

SCP is referred to in the MSM Physical Tab and again in the MSM Logical Tab. Mine shows ENABLED. That makes sense, since my Samsung 840 Pro needs to see more than one 4K Write to be able to drive its multiple overlapped flash channel write capability.
DCP is referenced in the MSM Logical Tab; further down the left panel tree the entry is 'Virtual Drive: 0, 474.973 GB, Optimal'. The MSM right panel shows that DCP, aka Disk Cache Policy, is set to Disabled. We expect this, since there is NO RAM available on the LSI PCI-e card to be used for LSI card resident cache.

There is a bug in MSM 12.05.03.00 GUI, that would seem to allow Enabling the DCP setting, but upon exiting from MSM and re-launching MSM the DCP setting reverts to Disabled, which is the only value that makes sense for a non-RAM resident HBA, if the setting is referring only to LSI card resident RAM resources available for caching.

I have attached screen snapshots below. Note that the MSM Logical Tab shows a Write Policy of 'Write Through' and not 'Write Back' on the Logical node that also shows DCP is Disabled. Probably, the way LSI uses these terms, only Write Through makes sense as far as storing multiple 4K write requests on the PCI-e card, because there is no RAM available to the HBA. Nevertheless, as the directly motherboard-connected SSD case in Microsoft Device Manager shows, there is a way for 'WRITE BACK', in the system FUNCTIONAL sense, to be used. I don't care if the LSI HBA cannot itself implement an on-card LSI PCI-e writeback functionality. WE NEED A WAY TO SEND THESE multiple 4K WRITE REQUESTS ON TO THE DOWNSTREAM SAMSUNG 840 PRO's 512 MB RAM without LSI waiting for the Samsung 840 Pro to signal it has completed processing each single 4K write request, which is what I think LSI is doing, and what LSI Technical Support confirmed is now going to be changed.

So mrkrad, do you have any way to do this on any LSI HBA you have actually used??
The answer cannot be "use MSM", if the goal is to enable a write policy of writeback. The LSI certified driver needs to be modified in a way that the LSI driver informs Microsoft's Device Manager that Enabling Write Caching On The Device IS POSSIBLE. Not because the DEVICE IS the LSI HBA (it has no RAM), but because the downstream DEVICE subsystem consists of both the LSI HBA + Samsung 840 Pro, and the Samsung 840 Pro does have 512 MB RAM and can accept more than a single write command, as the 50 MByte/sec 4K Write AS SSD and Anvil scores show when the LSI HBA is out of the equation and the Samsung 840 Pro is directly connected to even an Intel 3gbps sata2 motherboard connector.

So I am all ears and eyes, mrkrad, if you can show screens that work to enable the ability for multiple 4K Writes to be streamed to the LSI Raid0 of two Samsung 840 Pros. MSM does not provide that.
The goal is quite clear: duplicate the 50MByte/sec AS SSD and Anvil scores, my screen shots show is possible when SSD in connected directly to 3gbps Intel sata2 connectors using same Samsung 840 Pro product.
LSI is crippling the purchased Samsung 840 PRO SSD subsystem as far as 4K, QD=1, writes.
My scores for large sequential transfer are approx. 1GByte/sec for LSI raid0. So that is good.
You claim to have a 4K, QD=1, workaround. But do you have screen shots that work for a LSI HBA card?

https://www.dropbox.com/s/tlc3c2xx5qof6ck/Physcial Tab-SSD Disk Cache Setting is Enabled.jpg
https://www.dropbox.com/s/vp7mnu7em... node SAS9212 4i4e bus2 dev 0-SCP Enabled.jpg
https://www.dropbox.com/s/vp1d19r8m... Diable and Write Policy is Write Through.jpg
https://www.dropbox.com/s/817hyu7qt...ALLOW Write-Caching Setting To Be Changed.jpg
https://www.dropbox.com/s/f8w0anmxu...ALLOWS ENABLE Write-Caching ON THE DEVICE.jpg
https://www.dropbox.com/s/n8tfp4ccm...led or Enabled-always reverts to Disabled.jpg

Anyone else able to solve this with tools that work?

mrkrad, I did not see your last post until I posted this large post.
You wrote>"... Any 'Mega' utility is for MegaRAID cards 9260/65/70 etc etc". That corrects your previous recommendation to use MegaSCU for LSI HBA cards. Thanks.
You also wrote>"... HBA's need SAS2FLASH (sas2flsh DOS) or LSIUTIL ... Neither of which can enable drive cache ... The only way is via MSM"
Yes, I have successfully used SAS2* to flash my LSI 9212 4i4e to P16 firmware.
As I wrote, MSM will NOT work to make any changes. The MSM GUI offers the apparent option to change Disk Cache Policy from Disable to Enable, but this reverts to Disabled (which is the only setting that makes sense for an HBA, as LSI uses these terms).

I think Mobilenvidia mentioned somewhere that he uses the Microsoft and NOT the LSI driver? Anyone know how to do that? If possible, would this then allow Microsoft Device Manager to allow checking its Enable Write Caching box, since the LSI driver would be out of the equation? Does it make sense to use only the Microsoft driver (which one?) to successfully write to an LSI IR mode raid0 volume?
Mobilenvidia? Anyone?
 
Last edited:

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
Sounds like if you have your drivers and settings correct, this controller is bad.

I have seen this odd behavior on a cross-flash "FAILURE" and the card could not be fixed by anyone but LSI (RMA). Was not me!

Since you are so thorough with your drivers and checking boxes like the ANVIL DISABLE CACHE setting, I'd call it a day and RMA the defective controller.

It is well known that that combination will do 50MB/s with LSI (any), which is less than intel SATA-2 software raid. It is also well known that 2MB/sec is what you will see if you have the cache settings wrong in the operating system/drivers or in the BENCHMARK SOFTWARE ANVIL.

Sorry - I can't help you - you keep babbling but do not provide useful information.

I do not use non-raid controllers myself. Megaraid for $66 9260-8i is the least powerful raid card I have in operation today. I will not go any lower than that.
 

iq100

Member
Jun 5, 2012
68
3
8
mrkrad wrote>"... checking the boxes like ANVIL setting DISABLE CACHE ..."
I do NOT see anything labeled "DISABLE CACHE" in Anvil 1.0.51 RC6.
In my Anvil Settings Tab there is a box labeled "Enable Write-Through". Checked or unchecked my 4K Write is 2.32MByte/sec.
Using Anvil to test my motherboard connected Intel sata2, my Samsung 840 Pro, Anvil's 4K Write score is 54MByte/sec. Stays approx. same whether Anvil's "Enable Write-Through" is checked or unchecked.

mrkrad, where is the "DISABLE CACHE" setting in ANVIL? Or, did you mean the Anvil box labeled "Enable Write-Through"?

In MSM there are two references to cache settings: SSD Cache Policy, which is ENABLED, and Disk Cache Policy, which is disabled. Neither can be changed in MSM. It would not make sense to enable the latter (Disk Cache Policy), since there is no RAM resource on an HBA and hence no way for LSI to create a cache on its PCI-e controller card, if that is what the LSI setting means.

mrkrad, could you please confirm that when you referred to "ANVIL setting DISABLE CACHE" above, you meant Anvil's "Enable Write-Through" checkbox?

Because the data path involves memory and CPU algorithms on the motherboard, on the PCI-e card, and on the 2.5 inch SSD subsystem, it is important to use terms that identify what/whose resource/algorithm is being modified.
For instance, Anvil's "Enable Write Through" setting probably only results in Anvil requesting Windows 7 to do the same thing that occurs when a user manually uses Device Manager to check/uncheck "Enable Write Caching On The Device", and I already reported Windows Device Manager states "This device does not allow its write-caching setting to be changed".

Can anyone confirm that in Windows 7 Device Manager that the "Enable Write Caching On The Device" is checked with LSI HBA PCI-e card?

Thanks!
 
Last edited:

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
Yeah same difference. Enable Write-through is equal in meaning to disable write cache. Try to be a little more open minded please.

Did you not see Patrick's article on the LSI HBA (newer model) where it showed some drives such as the Samsung 840 dropped to 2MB/sec in anvil depending on that caching mode? Normally 50MB/sec.

But of course you will get faster using intel sata-2 ports lol :)

Have you benchmarked anvil using sata2 raid-0? I found it is faster. I used this for many years with hard drive and with sata-2 ssd since two SATA2 ssd in raid-0 was faster.

Remember SAS controllers do not speed up SATA!! The overhead you will pay is great. TRIM works for some new motherboards with Intel raid-0.

If I were you I would just use the intel software raid-0 and be happy. It will give you a better experience.

Please explain why to spend money on an HBA to get worse results!?! Especially with a server that's worthless. I've got a couple of those junker servers; they are not worth it!
 

iq100

Member
Jun 5, 2012
68
3
8
Thanks for your response mrkrad.

You wrote>"... Did you not see Patrick's article on the LSI HBA (newer model) where it showed some drives such as the Samsung 840 dropped to 2MB/sec in anvil depending on that caching mode? Normally 50MB/sec. "
NO, I did not. I will look for Patrick's article. If you have a link that would be great :).

You also wrote>"... Please explain why to spend money on HBA to get worst results !?! especially with a server that worthless. I've got a couple of those junker servers they are not worth it! ..."

We support over ten turnkey systems (our software) that run on Dell PowerEdge T110. They have been reliable and Dell provides next day onsite service (only used once). We are considering whether using SSDs and 6gbps sata would improve the (already fast) user experience. The LSI 9212 4i4e was bought used (one of a lot of ten) to test. If our tests are favorable they will only cost $20 each. So far the raid0 LSI P16 IR mode reads are 1GByte/sec with two Samsung 840 Pros. There is a theoretical limit to 4K QD=1 Write scores IN THE LONG RUN. The NAND flash chips used have (at the NAND die level) an inherent write latency of 0.4-1.0 msec. So at 0.5 msec latency, 2000 4KByte writes/sec equals 8MByte/sec, and at 1 msec latency, 1000 writes/sec equals 4MByte/sec. With eight write channels this would increase to 64-32 MByte/sec. I am not sure that the 4K QD=1 Write scores are that important to perceived speed for our proprietary application software.
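
The back-of-envelope ceiling above can be written as a short Python sketch (the 0.4-1.0 msec latency and eight-channel figures are the assumptions stated in the post, not measured values):

```python
# Back-of-envelope ceiling for QD=1 4K writes from NAND program latency.
# Assumptions from the post: 0.4-1.0 ms per die program, up to 8 channels.
def qd1_write_mb_s(latency_ms: float, channels: int = 1) -> float:
    """Upper bound on 4K write throughput given per-program latency."""
    writes_per_sec = channels * 1000 / latency_ms
    return writes_per_sec * 4 / 1000      # 4 KB per write -> MB/s

# Single channel, as when writes are strictly serialized:
print(qd1_write_mb_s(0.5))       # 8.0 MB/s
print(qd1_write_mb_s(1.0))       # 4.0 MB/s
# With eight overlapped channels:
print(qd1_write_mb_s(0.5, 8))    # 64.0 MB/s
print(qd1_write_mb_s(1.0, 8))    # 32.0 MB/s
```

This matches the 8/4 MByte/sec serialized and 64-32 MByte/sec overlapped figures in the post.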

Do your 9260-8i MegaRaid cards use cache RAM on the PCI-e card? If not, then there is no reason 9212 4i4e cards should not get 4K Write scores that match.
 

MiniKnight

Well-Known Member
Mar 30, 2012
3,072
973
113
NYC
I don't have the link handy but thought it was a forum post here? Also iirc it was resolved using iometer?