PERC H310 - LSI 9211-8i - $50


fractal

Active Member
Jun 7, 2016
I was a good boy. I only got one. I was going to get two ;)

I should be looking for 12G HBAs these days but couldn't pass this up.
 

BLinux

cat lover server enthusiast
Jul 7, 2016
artofserver.com
Question for you guys who know better: does an H310 flashed to IT mode work just as well as an LSI 9211-8i? I read on some FreeNAS forum that the H310 is sort of crippled in that its command queue depth is very shallow versus other HBAs. I was thinking of using a few H310s in IT mode for a ZFS storage server, but wondering if that's not the best choice here relative to getting another LSI 9211-8i based card?
 

ttabbal

Active Member
Mar 10, 2016
I seem to recall reading that flashing them restores the full queue depth. If you know a Linux command to test that, I'd be willing to try it.
 

frogtech

Well-Known Member
Jan 4, 2016
Yes, the native Dell firmware has a low queue depth. The LSI IT mode firmware has a queue depth of 600, I believe.

 

pricklypunter

Well-Known Member
Nov 10, 2015
Canada
Question for you guys who know better: does an H310 flashed to IT mode work just as well as an LSI 9211-8i? I read on some FreeNAS forum that the H310 is sort of crippled in that its command queue depth is very shallow versus other HBAs. I was thinking of using a few H310s in IT mode for a ZFS storage server, but wondering if that's not the best choice here relative to getting another LSI 9211-8i based card?
I'm sure I read somewhere that the H310 was set to a queue depth of 25 with the native Dell firmware, but was capable of something like 600 when running the LSI firmware? Maybe I'm wrong...
 

pricklypunter

Well-Known Member
Nov 10, 2015
Canada
I seem to recall reading that flashing them restores the full queue depth. If you know a Linux command to test that, I'd be willing to try it.
In ESXi I think you can SSH in and run esxtop, press 'd' for the disk adapter view, then 'f' to bring up QSTATS, and look down the AQLEN column for your adapter :)
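
On plain Linux, the quickest check I know of is just reading the driver's reported queue depths out of sysfs once the card is claimed by mpt2sas (paths below assume a stock kernel, so treat it as a rough sketch):

    # adapter-level queue depth as reported by the HBA driver
    cat /sys/class/scsi_host/host*/can_queue

    # per-drive queue depth for the disks hanging off the HBA
    cat /sys/block/sd*/device/queue_depth

On the Dell firmware the first number should come back tiny; after the cross-flash it should jump to the ~600 mentioned above.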
 

whitey

Moderator
Jun 30, 2014
You are all correct, 600QD and performance where it should be after cross-flash. I probably have close to 10 of these in several boxes and they are a solid 6Gbps HBA once the magic is sprinkled on them.
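
If anyone wants to verify the result after flashing, LSI's sas2flash utility (assuming you have it installed on the host) will show which firmware the card is actually running - the product ID should now read as a 9211-8i IT image rather than the Dell one:

    # list every SAS2 controller with its firmware and BIOS versions
    sas2flash -listall

    # detailed info for controller 0; look for "IT" in the firmware description
    sas2flash -c 0 -list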
 

BLinux

cat lover server enthusiast
Jul 7, 2016
artofserver.com
So, once these are flashed to IT mode and the queue depth problem is gone, am I going to be able to get full bandwidth from 8x SATA 6Gbps drives? For example, if I stripe them in ZFS (no parity) or mdadm raid-0, am I going to see >800 MB/sec sequential reads?
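
My plan for testing it would be a read-only fio run against the striped device, something like the sketch below (the /dev/md0 path is just an example; point it at whatever your md or zvol device actually is):

    # 1M sequential reads across the stripe; read-only, so it won't touch the data
    fio --name=seqread --filename=/dev/md0 --rw=read --bs=1M \
        --direct=1 --ioengine=libaio --iodepth=32 \
        --runtime=60 --time_based --group_reporting

If the aggregate comes in well under 8x a single drive's sequential rate, I'd assume the bottleneck is somewhere else (PCIe slot, expander, CPU) rather than the HBA.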
 

pricklypunter

Well-Known Member
Nov 10, 2015
Canada
Experience, but there are loads of things that come into play here if you want to actually make use of that performance, not least of which are the number of disks in the array, disk access times, latency, read cache, stripe size, workload, etc. For performance you need low latency and really fast access times, and for that you'll want good SSDs rather than magnetic media.

That being said, the rest of the hardware and software also has to be able to keep up, otherwise you'll be playing the "push the bottleneck about" game :)
 

Terry Kennedy

Well-Known Member
Jun 25, 2015
New York City
www.glaver.org
is that based on experience or a guess?
Experience with a slower configuration (RAIDZ1 of 5 * 8TB) on a 9201-16i. Those systems replicated over the LAN and got about 700 MB/sec. Given that the graph shows a transfer of some 16TB, this is purely disk-limited, as there's no way to cache all those writes on an SSD - eventually things will "bog down" as the SSD fills and is flushed to spinning rust. The downward trend is due to slower data transfers as you move toward the middle of the drives from the outside.
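
For anyone curious, ZFS replication over a LAN is typically just a zfs send piped into zfs receive on the other box (over SSH, or mbuffer/netcat on a trusted network); a minimal sketch with made-up pool, host, and snapshot names:

    # snapshot the source pool, then replicate that snapshot to the receiving host
    zfs snapshot -r tank@repl1
    zfs send -R tank@repl1 | ssh backupbox zfs receive -Fu backup/tank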