Should be sticky: Samsung 840 and 840 Pro are not LSI MegaRAID compatible


mrkrad

Well-Known Member
Oct 13, 2012
Oh, LSI is going to feel the pain for not supporting the most popular stable drive on the planet! I'll be glad to post everywhere (and retract should they fix it) that the Samsung 840 isn't supported. This is going to hit all the major websites soon! I hope STH can spread the news.

The more negative press, the more likely they'll get off their butts and fix their firmware.

Honestly, I think they pulled support before Samsung released the new firmware. If you google around, you'll see the original 840 firmware had massive problems: garbage collection issues that caused extremely poor performance.

They also note on WebHostingTalk that the Cache Enable bug is present with the Intel DC S3700, which I'd say is a pretty big deal - and that's a drive still on the supported list!

I was able to enable the cache on the 9266 after putting in the FastPath/CacheCade trial key - have you tried that? Seriously, entering the key is what let me enable it.

Have you noticed the 9266 has a "performance mode" (Best for IOPS, Best for Latency) and tunable parameters?

The "Random drop" of one 840 pro after 1 week for no reason is odd.

The 9266 runs 10 degrees centigrade hotter than the P420 and, quite honestly, isn't any faster.
 

KamiCrazy

New Member
Apr 13, 2013
Hi,

I registered on this forum to post this reply. I was looking for other users' experiences of the Samsung 840 Pro in server applications.

I'm using IBM HS23 blades with LSI SAS2004 controllers. The firmware does not allow me to change the Disk Cache policy. When I have the drives in pass-through mode, the UEFI firmware reports that the drives have Disk Cache enabled. Once I create a RAID1 array it turns off Disk Cache; this is documented as default behaviour for RAID1 arrays.

With no Disk Cache I was getting horrible performance out of my RAID1 array. Further investigation led me to a forum post elsewhere: a person who had contacted Samsung support found out that Disk Cache is required for decent performance, as turning it off essentially disables much of the drive firmware's ability to do wear leveling, garbage collection, etc.

I was able to enable Disk Cache by simply installing MSM and turning it on under virtual disk properties. On my controller that is the only setting I have.
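For anyone who prefers the command line, the same toggle should also be reachable through MegaCLI (a sketch, assuming a single adapter and virtual disk at a0/L0; check your own numbering first):

# enable the drives' on-board write cache for virtual disk 0 on adapter 0
MegaCli -LDSetProp -EnDskCache -L0 -a0
# verify the current setting
MegaCli -LDGetProp -DskCache -LAll -aALL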

I notice sometimes, though, that write performance drops. When the disk queue goes to 5, write performance seems to drop to 100MB/sec. I haven't figured this one out yet.

Running my drives through ATTO benchmarks, I pretty much get the same scores as the drive specs.
 

mrkrad

Well-Known Member
Oct 13, 2012
Yeah, I've done benchmarks that show the P420/1GB FBWC performs the same as the LSI 9266 w/FastPath - without any of this bullshit. It is odd they are willing to certify the SandForce-based Agility 3 POS and the SM843 (the enterprise version of the 840 Pro), but not the 840 Pro itself.

Given that these drives are solid on the P420 (PMC, like the Adaptec 7 series, i.e. 71605E/71605H), I suggest it is LSI's problem.

To date, I've had one 840 Pro drop from a 9260 array for no reason, and none from the 9266 - but no drops at all from the Samsung 830.

Perhaps the drives are just too fast.
 

0egp8

New Member
Apr 9, 2013
KamiCrazy said:
I registered on this forum to post this reply. I was looking for other users' experiences of the Samsung 840 Pro in server applications.
There's another thread on WebHostingTalk with a lot of responses if you want to look there:
Issues with 840 Pros for CacheCade? - Web Hosting Talk

KamiCrazy said:
Further investigation led me to a forum post elsewhere: a person who had contacted Samsung support found out that Disk Cache is required for decent performance, as turning it off essentially disables much of the drive firmware's ability to do wear leveling, garbage collection, etc.
Do you remember the name of that forum? I'm trying to document the issue, any source material would be of help.

KamiCrazy said:
I notice sometimes, though, that write performance drops. When the disk queue goes to 5, write performance seems to drop to 100MB/sec. I haven't figured this one out yet.

Running my drives through ATTO benchmarks, I pretty much get the same scores as the drive specs.
With the disk cache enabled, would you be willing to run a test in IOMeter with the following settings?

Transfer Size Request: 8 KB
Read: 80%
Random: 80%
Outstanding I/Os: 64
Number of Workers: 24
Maximum Disk Size: 1 GB
Run Time: 10 MIN
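For anyone on Linux without IOMeter, a roughly equivalent fio job might look like this (my approximation; mapping the read and random percentages onto fio's rwmixread and percentage_random options is an assumption, and /dev/sdX is a placeholder):

# WARNING: writes raw to the device and will destroy any data on /dev/sdX
fio --name=840pro-mix --filename=/dev/sdX --direct=1 --ioengine=libaio \
    --bs=8k --rw=randrw --rwmixread=80 --percentage_random=80 \
    --iodepth=64 --numjobs=24 --size=1g --time_based --runtime=600 \
    --group_reporting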

I've run this on the LSI 9271-8iCC with a RAID 0 VD of 6x Samsung 840 Pros with disk cache enabled a total of 4 times, and each time after about 3-4 minutes a different drive would drop out of the array.

I'll warn beforehand that there is a risk of data corruption, but so far I haven't seen any signs of it. A cold boot makes the dropped disk appear in WebBIOS under Physical Drives. Change its status to Unconfigured Good, then reinsert it into the array (the controller sees the previous configuration).
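The same recovery can also be done from MegaCLI instead of WebBIOS (a sketch; E and S are placeholders for the drive's enclosure and slot):

# scan for the dropped drive's old (foreign) configuration
MegaCli -CfgForeign -Scan -a0
# mark the dropped drive Unconfigured Good
MegaCli -PDMakeGood -PhysDrv[E:S] -a0
# import the foreign configuration so the drive rejoins the array
MegaCli -CfgForeign -Import -a0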

mrkrad said:
To date, I've had one 840 Pro drop from a 9260 array for no reason, and none from the 9266 - but no drops at all from the Samsung 830.
Could you also run the above test with drive cache enabled on your 9260 and 9266? I think the reason LSI sets their controllers to disable the drive cache on the 840 (Pro) is that there's some instability they haven't been able to resolve, and this test exposed it for the 9271-8i w/FastPath. The 9260 actually allows enabling the 840 (Pro) drive cache, so seeing it be more unstable than the 9266 (which doesn't) is strange.

mrkrad said:
Perhaps the drives are just too fast.
Not so sure about that. I tried interpreting the error logs from the tests I ran and found that what caused the drives to drop was repeated read timeouts culminating in an unresolved read error. Right before the fatal error, the random 8K read/write speeds had dropped to 60,000/10,000 IOPS. If you want to see my results, they are here:
Web Hosting Talk - View Single Post - Issues with 840 Pros for CacheCade?
 

mrkrad

Well-Known Member
Oct 13, 2012
We checked our R610, which runs the 9260 with Samsung 830s fine.

Point #1: The R610 shipped with faulty SAS cables (source: WebHostingTalk). GRRRRR!!!
Point #2: The cables were kind of loose going into the M5014.
Point #3: The older sleds do not fit the same. I wonder if the 830 fits tighter. I did a 300dpi scan of the drives, and you can see that drives 0/1 have different wear marks.

Point #4: The R610's faulty SAS cable presents as follows: failure in slot 1, failure in slot 0, then the drive is invisible (!!) - not even Unconfigured Bad. After that (a hang? power loss?), the controller and machine will never see the drive again, even after a restart; it is impossible to re-introduce the drive (which then comes back as Unconfigured Bad) without physically shutting down or pulling the drive and pushing it back in.

The R610, which uses funky right-angle cables, was originally designed for the PERC 6/i using the older SFF cables at 3Gb/s max. They did not change the backplane (!?!) when going to 6Gb/s.

Perhaps dropping the drives down to PHY 3.0Gb/s would cure the problem?

The backplane is not bolted down like HP's, and the drive lights do not function (unlike every HP). This is some ghetto backplane!

We have noted issues with batteries and have pulled them. It seems the drive faults around the time the failed battery recharges; since the battery is considered dying, it keeps trying to cycle over and over. Battery removed.

So, #1: The R610 was designed with the 3Gb/s PERC 6/i backplane and cables. Dell shipped faulty cables (or perhaps cables rated at 3Gb/s at first) and would replace them given event logs proving the problem. I'm out of warranty, so I'm trying my own cables.

I feel the cables going into the M5014 are somewhat wobbly and not latched hard like usual.

The odd backplane is clipped in. This may allow tool-less removal, but the backplane could shift back under pressure, or the plastic clips could just be bad.

We noticed the sleds were different: the older, worn-looking ones were in slots 0/1, so perhaps they do not fit the same. They look fake, or generationally older and of poorer construction.

We will put the H700 without battery back in place (they all have FastPath enabled - did you know that?) and re-test, since it uses a special slot. Perhaps the charging of the battery is overwhelming something?

Perhaps the janky, loose backplane is dirty, or not rated for 6Gb/s?

Perhaps the cabling is rated at 3Gb/s?

Any way to ask the controller for stats on PHY errors? Downlinks? I will try forcing the PHY to 1.5 and 3.0Gb/s with MegaCLI.
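For the PHY stats, MegaCLI can dump per-PHY error counters (assuming adapter 0; I'd watch for invalid-DWORD and disparity-error counts climbing on the suspect slots):

MegaCli -PhyErrorCounters -a0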

The Samsung 830s are half as fast but 1000% solid.

We had one 840 Pro drop out at one week; nothing in Magician showed any abnormalities. We replaced it with a new Samsung 840 Pro (newest firmware).

The second failure occurred during a Veeam backup (SCSI hot-add) to a network destination.

The third failure occurred during a live VM svMotion, 15 minutes after the battery recharged (transparent learn cycle)!!

Perhaps the 840s exceed the speeds the 830s ran at, and the crap Dell design fails us?

Dropping to PHY 3.0Gb/s is acceptable to me; the performance is fine as long as stability exists.

FastPath is enabled: NORA, WT, Direct IO, 128KB stripe (oops - should be 64KB). I noticed background init was disabled, which is odd; I re-enabled it.
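For reference, those VD policies can be set from MegaCLI as well (a sketch, assuming the VD is L0 on adapter 0; note the stripe size can only be chosen at VD creation time):

# no read-ahead, write-through, direct IO - the usual FastPath-friendly policy
MegaCli -LDSetProp -NORA -L0 -a0
MegaCli -LDSetProp -WT -L0 -a0
MegaCli -LDSetProp -Direct -L0 -a0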

The other R610 works fine, same M5014.

I am wondering if there is a BIOS difference. IRQs cause massive issues - say, if 6 NICs and 10 USB ports are on the same IRQ. ALWAYS move everything away from the RAID controller!! Do not share NIC/USB/iDRAC with the RAID controller's IRQ. Trust me here.

Have you tried PHY 3.0Gb/s, and what server are you using? I have seen no problems with HP servers, only Dell shitboxes (the R610). Notice the R610 is no longer listed as supported? Odd, huh.
 

TheBay

New Member
Feb 25, 2013
I have had issues lately with Supermicro SFF-8087 cables not fitting in the socket properly; they only latch when pulled out slightly, making a poor connection - the metal tabs are in the wrong place! Can't remember whether they were Molex or Amphenol, though.

Maybe there's a bad batch of plugs/cables out there, as an SFF-8087 should be the same regardless of card/cable.
 

mrkrad

Well-Known Member
Oct 13, 2012
No shit - I was feeling the same with the M5014. The cable should lock hard and not wiggle, but I could wiggle it. I wonder if a bad fitment issue is causing this. I could go get some of that stuff you put on threaded pipes to prevent vacuum/boost leaks...

Interesting. Maybe that is what they meant by janky cables; the ends are unique since they have a 90-degree connector at one end.
 

supermacro

Member
Aug 31, 2012
TheBay said:
I have had issues lately with Supermicro SFF-8087 cables not fitting in the socket properly; they only latch when pulled out slightly, making a poor connection - the metal tabs are in the wrong place! Can't remember whether they were Molex or Amphenol, though.

Maybe there's a bad batch of plugs/cables out there, as an SFF-8087 should be the same regardless of card/cable.
Ahh... I'm assuming you were using the cables from Supermicro. What you have to do is actually push the metal part in; otherwise it will not latch onto the connector. You will see that the metal part on the cable pushes out when you connect it to the socket, so you have to manually push it in until it clicks. I don't know why Supermicro made them like this - I've had no issues with others, like those from LSI/3ware - but I figured it out eventually (I knew it had to fit my Supermicro E16/E26 chassis since it's from the same manufacturer).
 

mrkrad

Well-Known Member
Oct 13, 2012
Check the other thread about the guy flashing 26 drives.

Samsung said a while ago, "no DAS for non-enterprise drives". I'm guessing they may have ported the SM843 firmware back to the 840 without the extra OP.

I'm going to engage Samsung's engineers, but I doubt I can repeat anything they say to me.

I will try flashing all of the drives and setting the OP% to their recommendation.
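If the firmware route fails, extra OP can also be faked from the host by shrinking the drive's reported capacity with hdparm before the array is built (a sketch; the sector count below is a placeholder for roughly 90% of the native max, and the drive should be secure-erased first so the freed space actually helps):

# show the native and current max sector counts
hdparm -N /dev/sdX
# clip the drive to a placeholder sector count (p = make it permanent)
hdparm -N p450000000 /dev/sdX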

The 840 Pros are dropping for me and will not respond until power cycled, which indicates the drive is hanging. Hopefully the new firmware with 10-25% OP will cure this.

All the 840 Pros that have dropped have shown no SMART indicators other than attribute 235 (POR Recovery Count) - as in, fresh, perfect shape after failing out of the array and being checked in a regular PC.
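For anyone wanting to read that attribute without pulling the drive into another box, smartmontools can query disks behind a MegaRAID controller in place (N is the device ID that MegaCli reports for the drive):

smartctl -A -d megaraid,N /dev/sda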
 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
SM843 = 840 Pro with more OP and a DAS light
SM843 (super-cap) = like the above, but with a super-cap
SM843T = same, but with eMLC (960GB!!) and a super-cap
PM843T = 840 (non-Pro) with more OP and a DAS light

There are a ton of variants! Find the datasheet, find the ROM, cross-flash 840 Pros to SM843 :) Win?
 

0egp8

New Member
Apr 9, 2013
Latest firmware resolves issue

Updated 6x 840 Pros and the LSI 9271-8iCC with the latest firmware.

Disk cache can now be enabled. Performance during the previously mentioned IOMeter test was stable at ~100,000 IOPS.
 

KamiCrazy

New Member
Apr 13, 2013
I only have two 840 Pros in RAID. We only bought two because, after the first pair didn't work with LSI RAID, we stopped buying them.

I flashed to the firmware released in May 2013. It has fixed all the issues I was having with my LSI SAS2004 controller (I am using HS23 blades).

I don't have any specific numbers, but I am seeing 5x improved performance.
 

abackbone

New Member
May 30, 2013
Great read here, fellows. We picked up a pair of 9271-8iCC cards last month and have basically thrown them in the bin. So many issues, I dare not list them all. Long story short, though, we found a cheap and good investment in a handful of SATA-to-SAS converters, allowing our SATA Intel X25-E 32GB drives to talk to the LSI card correctly. Still, we gave up before fighting with the firmware.

This week we are racing a brand new Adaptec/PMC 71605Q against the LSI, and I will undertake many firmware updates to verify the results. I expect the LSI to wipe the floor with the Adaptec, but history has so often proven the more stable card to be the victor. Three years ago we dumped two 9260-8i cards and put in an Adaptec 5805ZQ - a beast in its day, and today compatible with just about anything you plug into it. I have one array made up entirely of WD Green Power disks: 2 years, 0 errors.

Anyway, our LSI box presently languishes at 130MB/s with the cache enabled - hence the PMC card.

Let the races begin!
 

supermacro

Member
Aug 31, 2012
I still can't enable "Disk Cache" with the new firmwares. This is the firmware I have:

9271-4i: 23.16.0-0012 (dated 4/15/13)

840 Pro: DXM05B0Q

Anyone else with 9270/9271 having the same issue?
 

mrkrad

Well-Known Member
Oct 13, 2012
Yeah, I found the PMC Adaptec (a.k.a. the HP Smart Array P420) tends to run best with 0% read-ahead and 100% write-back.

So far I've found:

1. Patrol Read and Consistency Check rates must be set to 1%, or they will destabilize the server during the checks (commands sketched after this list).

2. The consistency check finds inconsistent parity on RAID-10 every week or so.

3. Batteries - ditch them; they destabilize the system and draw tremendous power and heat while charging.
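The rate changes from point #1, for reference (a MegaCLI sketch; I believe these property names are right, but check your version's help output):

MegaCli -AdpSetProp PatrolReadRate 1 -aALL
MegaCli -AdpSetProp CCRate 1 -aALL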

The firmware and design of the LSI is pure crap, and I've found that FastPath and the non-caching IT cards without BBWC are not stable in ESXi with certain drives ;)

Fastpath doesn't work under N conditions. Nobody knows N.

LSI are a bunch of fools who have no idea how to support their product.

(There is nothing in DXM05 other than the DAS light enable. Dream all you want, but that is the word from the man.)
 

0egp8

New Member
Apr 9, 2013
supermacro said:
I still can't enable "Disk Cache" with the new firmwares. This is the firmware I have:

9271-4i: 23.16.0-0012 (dated 4/15/13)

840 Pro: DXM05B0Q

Anyone else with 9270/9271 having the same issue?
Update your 9271-4i to the latest firmware (23.12.0-0013), and you should be fine.

abackbone said:
Great read here, fellows. We picked up a pair of 9271-8iCC cards last month and have basically thrown them in the bin. So many issues, I dare not list them all. Long story short, though, we found a cheap and good investment in a handful of SATA-to-SAS converters, allowing our SATA Intel X25-E 32GB drives to talk to the LSI card correctly. Still, we gave up before fighting with the firmware.

This week we are racing a brand new Adaptec/PMC 71605Q against the LSI, and I will undertake many firmware updates to verify the results. I expect the LSI to wipe the floor with the Adaptec, but history has so often proven the more stable card to be the victor. Three years ago we dumped two 9260-8i cards and put in an Adaptec 5805ZQ - a beast in its day, and today compatible with just about anything you plug into it. I have one array made up entirely of WD Green Power disks: 2 years, 0 errors.

Anyway, our LSI box presently languishes at 130MB/s with the cache enabled - hence the PMC card.

Let the races begin!
Make sure there's adequate airflow across the LSI card's heatsink (at least 200 LFM per the spec). Temps can be outrageously high without it; I've seen as high as 94C idle with bad placement, and the climate here is cool. There may be temperature throttling under load, which would skew the results. I'll test whether cooling makes a difference on my 9271-8iCC after some 25mm fans arrive.
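To check whether a card is cooking, the adapter info dump includes a ROC temperature field on firmware that exposes it (a sketch, assuming adapter 0; the field is absent on some cards):

MegaCli -AdpAllInfo -a0 | grep -i temperature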
 

supermacro

Member
Aug 31, 2012
0egp8 said:
Update your 9271-4i to the latest firmware (23.12.0-0013), and you should be fine.
There is actually another firmware that was released on 5/30/13 (23.16.0-0012). I've tried both 23.16.0-0012 and 23.12.0-0013 (which was released on 5/7/13), and I can't get the disk cache to be enabled; it's just grayed out in WebBIOS. I even tried a CacheCade trial key hoping it would make a difference, but no go...

Is it enabled by default, or did you have to choose "Enable" from a drop-down menu?
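I'll also try forcing it from MegaCLI in case only WebBIOS is blocking the option (assuming my VD is L0 on adapter 0):

MegaCli -LDSetProp -EnDskCache -L0 -a0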
 