IBM M5110 with M5016 CacheVault


serverguy

New Member
Hi guys,

So I got an IBM M5110 card with no cache for cheap on eBay the other day, and I took the 1GB CacheVault module off my IBM M5016 and put it on the M5110.

Everything works fine. It booted and detected my RAID 5 and RAID 1 arrays with no issue.

I updated the firmware on it, and the CacheVault feature showed up in the MegaRAID Storage software.

Question:

- What are the advantages of the M5110 over the M5016 with my current setup? Is PCIe 3.0 the only difference between the cards? Should I keep it this way or go back to the M5016 setup? Are there any disadvantages compared to the M5016?

I do not have any feature keys installed. My RAID 5 has 3 SSDs and my RAID 1 has 2 x 1TB 7.2K RPM SAS drives.



Thanks.
 

mrkrad

Well-Known Member
I'll tell you a little secret.

I did a test with 8 x 146GB drives in RAID-10 (sorry, I think RAID-5/6 sucks in its current design).

HP P400 - LSI 1078e controller, PCIe x8 1.0, 256MB (half-cache) DDR ECC cache.

HP P410 - PMC-Sierra, PCIe x8 2.0, 512MB BBWC (full-bandwidth cache), DDR2.

HP P420 - PMC-Adaptec, PCIe x8 3.0, 1GB FBWC (full bandwidth), DDR3.

LSI 9260 / M5014 (half-cache) / 9266

So the first was pretty easy, just move the controller and cabling - the speed increase was maybe 10% in real-life use (heavy SQL-based apps).

So P400 -> P410 -> P420 = a 0-10% increase in speed, likely due to caching, and the P410/P420 ran at 6Gbps (direct to the drives, no SAS expander).

So LSI? Yeah, well, pretty much just as slow, and a serious PITA since the RIS data wasn't compatible.

Was I surprised? Not really - the laws of physics dictate that the speed of the drives is the limiting factor.


I can positively say RAID-5/6 would have shown a difference in performance, but quite honestly, at my price point it is not worth the risk. I've been bitten hard by RAID-5: losing the cache board forced backups that normally ran in 6 hours to take 36 hours. Plus I somehow managed to crash the box during the N rounds of replacing gear to fix it, forcing the controller to do a rescan of the RAID set. Any improper shutdown causes a rescan, and that puts immense pressure on the drives and controllers.

What can you do about this? Well, read-only caching can greatly benefit RAID-5/6, and using SAS-only drives, preferably with PI, definitely helps.

ZFS nuts know this: regular RAID cards take your word for it that the data is not corrupt. SATA drives and SAS expanders can LIE.

But SAS drives with PI allow the RAID controller to check every read block in hardware (at some cost!) for bit rot, just like ZFS. This feature has been around for a while but is only now being implemented in more 'prosumer' drives.
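For anyone curious what that per-block check looks like in principle, here is a minimal Python sketch of the idea: a guard checksum stored alongside each block and re-verified on every read. This is a toy model only - the block size and CRC choice are assumptions for illustration, not the actual T10 DIF/PI field layout.

```python
# Toy model of per-block protection information (PI).
# Assumption: 512-byte blocks with a CRC guard stored next to each block;
# real T10 DIF uses a specific 8-byte field (guard CRC, app tag, ref tag).
import zlib

BLOCK_SIZE = 512

def write_block(storage, lba, data):
    """Store the block together with a guard checksum."""
    assert len(data) == BLOCK_SIZE
    storage[lba] = (data, zlib.crc32(data))

def read_block(storage, lba):
    """Re-verify the guard on every read so silent corruption is caught."""
    data, guard = storage[lba]
    if zlib.crc32(data) != guard:
        raise IOError(f"bit rot detected at LBA {lba}")
    return data

# Demo: flip a byte behind the "controller's" back, then read it again.
disk = {}
write_block(disk, 0, b"\x00" * BLOCK_SIZE)
data, guard = disk[0]
disk[0] = (b"\x01" + data[1:], guard)   # silent single-byte corruption
try:
    read_block(disk, 0)
except IOError as err:
    print(err)                          # -> bit rot detected at LBA 0
```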

So I wouldn't expect much. I would never do RAID-5/6 on SSD. RAID-1 or bust.
 

mobilenvidia

Moderator
Don't forget - your CacheVault lacks the capacitor, so it's just NAND cache without the ability to keep things stored through a power cut.
But both cards are virtually the same except for PCIe 3.0.
They are copies of LSI cards with crippled firmware that needs all sorts of keys to enable anything useful.
 

mrkrad

Well-Known Member
That doesn't make sense. The module is designed to copy the RAM buffer to NAND in the event of power loss; the capacitor gives it enough juice to do that copy. It would be completely useless without a power source to give it time to transfer the cache.
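To make that sequence concrete, here is a rough Python sketch of the power-loss path being described: the capacitor keeps the module alive just long enough to copy the DRAM write cache into NAND, and the dirty data comes back on the next boot. The class and method names are made up for illustration, not real controller firmware.

```python
# Rough sketch of the flash-backed/CacheVault power-loss path.
# All names and structures here are illustrative, not real firmware.

class CacheVaultModule:
    def __init__(self):
        self.dram_cache = {}   # volatile write-back cache (lost without power)
        self.nand = {}         # non-volatile flash, retains data for months

    def on_power_loss(self):
        # The capacitor only has to supply power for the second or two this
        # copy takes; without it the DRAM contents are simply gone.
        self.nand = dict(self.dram_cache)
        self.dram_cache.clear()

    def on_next_boot(self):
        # Dirty cache lines come back from NAND and get flushed to the array.
        recovered, self.nand = self.nand, {}
        return recovered

module = CacheVaultModule()
module.dram_cache = {0x100: b"dirty data not yet written to disk"}
module.on_power_loss()
print(module.on_next_boot())   # {256: b'dirty data not yet written to disk'}
```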
 

mobilenvidia

Moderator
You are correct, but serverguy's M5016 controller (I have one) lacks the capacitor, making it useless when the power does go down, as there is nothing to keep the cache alive long enough to copy it to NAND.

I don't understand why the CacheVault needs a 'capacitor' that in reality looks like 4 x AAA rechargeable batteries.
Why doesn't it use the BBU09 battery to do this job?

My LSI 9266 with no NAND cache module also shows the CacheVault key as enabled, so that on its own means nothing.
 

mrkrad

Well-Known Member
The P410 and P420 flash-backed write cache use the same dual 'battery-like' 5.4V supercapacitor. Why would you need a battery? They are prone to failure, require constant maintenance, and you only need a second of juice, tops, to transfer the RAM contents to NAND.

The supercapacitor charges in 2 seconds tops, lasts for 10 years, requires no 'cycling', and provides the 1-2 seconds needed to transfer the RAM contents to NAND.

That's why I swap to P420/FBWC when my batteries die (Dell ~1 year, HP ~3 years old) - now I have 10 years to not worry about it.

Battery-backed write cache (BBWC) is RAM plus a charging circuit and battery that keep the RAM refreshed for 24 (Dell) to 72 (HP) hours. There is ZERO NAND in this solution, and those hours assume a brand-new battery - knock off about a third every year.

Flash-backed write cache (FBWC) is RAM plus a supercap plus NAND, with a simple circuit that senses a loss of voltage and then transfers the RAM to the NAND, which will retain the information for a few months (before bit rot gets you). You can probably lose power for a month or two versus 24/72 hours.

Change a battery every year (Dell) or every 3 years (HP) versus every 10 years... I know which technology I want.

Of course, if you are using SSDs, this is all pointless since you are not using write-back cache.

So BBWC -> 24 to 72 hours before all data is lost, with a 1-3 year open-up-the-server-and-replace-the-battery interval.

FBWC -> up to 2 months before all data is lost, with a 10-year interval before you open up the server (?? really, just junk it by then ??).
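Just to put numbers on that comparison, here is a quick back-of-the-envelope sketch in Python using the figures above (24/72 hours when new and losing roughly a third per year for BBWC; roughly two months of NAND retention for FBWC). The decay model is only the rule of thumb stated above, not a manufacturer spec.

```python
# Back-of-the-envelope retention comparison using the figures in this post:
# BBWC starts at 24h (Dell) / 72h (HP) and loses roughly a third per year;
# FBWC parks the flushed cache in NAND for on the order of two months.

def bbwc_holdup_hours(new_hours, age_years):
    """Approximate hold-up time of an aging BBWC battery (-1/3 per year)."""
    return new_hours * (2 / 3) ** age_years

for brand, new_hours in (("Dell", 24), ("HP", 72)):
    for age in range(4):
        print(f"{brand} BBWC at {age} yr: ~{bbwc_holdup_hours(new_hours, age):.0f} h")

fbwc_retention_hours = 2 * 30 * 24   # ~2 months, no battery involved
print(f"FBWC NAND retention: ~{fbwc_retention_hours} h")
```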
 

serverguy

New Member
I did some testing, and the only difference is in the ATTO write scores, which have increased by almost 40%.

I did RAID 5 on my 3 SSDs because I needed the space and redundancy - 2 of them in RAID 1 would not be enough space. At some point I will upgrade and go back to RAID 1.
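For what it's worth, the space trade-off works out like the quick sketch below, assuming equal-sized SSDs (the drive size here is a placeholder - the actual capacity isn't stated in the thread).

```python
# Usable capacity for the layouts being discussed, assuming equal-sized SSDs.

def usable_capacity_gb(level, drives, size_gb):
    if level == "raid1":    # mirrored pair: capacity of a single drive
        return size_gb
    if level == "raid5":    # n drives: one drive's worth of space goes to parity
        return (drives - 1) * size_gb
    if level == "raid10":   # striped mirrors: half the raw total
        return drives * size_gb // 2
    raise ValueError(level)

SSD_GB = 256   # hypothetical drive size, not from the thread
print("RAID 5 on 3 SSDs:", usable_capacity_gb("raid5", 3, SSD_GB), "GB usable")  # 512
print("RAID 1 on 2 SSDs:", usable_capacity_gb("raid1", 2, SSD_GB), "GB usable")  # 256
```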
 

serverguy

New Member
The before-and-after config is the same - the M5110 just imported my config automatically.

ATTO write at, let's say, the 1024 size was around 1,500,000 and is now around 2,400,000. The read has not changed and shows around 3,100,000. The 8192 run now shows 3,200,000 for both write and read. These numbers are on my RAID 5 SSD drives.

The same test on my RAID 1 SAS drives gives 140,000 write and 2,700,000 read. Terrible write!
 

mrkrad

Well-Known Member
You have to use Anvil with a caching controller, man. It's on the XtremeSystems forum. ATTO is a poor-quality linear benchmark; I use it for benching network I/O over Windows shares.
 

serverguy

New Member
I installed Anvil RC6 and ran the test on my RAID 5 SSDs. Read score: 2374.41, write score: 825.63, total: 3200.04.

Anyone know why the write scores are so low? I guess they're not bad scores for a RAID 5.
 

mrkrad

Well-Known Member
RAID-5 can require a bit more work, which leads to latency. RAID-5 will read faster than RAID-1/10, but RAID-5 writes without caching are brutal.
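The usual back-of-the-envelope arithmetic behind that is sketched below in Python: a small RAID-5 write costs four I/Os (read data, read parity, write data, write parity), while RAID-1/10 just writes both copies. The per-drive IOPS figure is a placeholder, not a measured number from this thread.

```python
# Classic small-write penalty arithmetic: effective write IOPS of an array.
WRITE_PENALTY = {"raid0": 1, "raid1": 2, "raid10": 2, "raid5": 4, "raid6": 6}

def effective_write_iops(level, drives, per_drive_iops):
    """Random-write IOPS the array can sustain once the penalty is paid."""
    return drives * per_drive_iops / WRITE_PENALTY[level]

PER_DRIVE_IOPS = 5000   # placeholder figure for a single SATA/SAS SSD
for level, drives in (("raid5", 3), ("raid1", 2), ("raid10", 4)):
    print(f"{level} x{drives}: "
          f"{effective_write_iops(level, drives, PER_DRIVE_IOPS):.0f} write IOPS")
```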

I'd try it with one RAID-10 of 4 drives, then a RAID-1 of 2 drives.

Most of my workload is 90% read / 10% write, but for those times I need to do ETL (100% write), I sure do love the RAID-1/10 speed.

Which is why you might slice some of that SSD into RAID 1, 10, and 5 for the task at hand.