HP MSA G2 Array - $499 with dual SAS controllers

mason736

Member
Mar 17, 2013
111
1
18
I assume so, but I can't say for sure - at least not yet. Another STH user may have some sleds that he's willing to trade or sell me, but right now I don't have any that I can use for testing. I'm hoping to test some Hitachi drives and some SSDs if I get a chance.
OK cool. I'm not picking up the p4300 for a few weeks. Hopefully there will be some developments before that
 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
The p4300 g2 is just a dl180 g6 with an ilo nic, 4gb of ram (1 dimm slot out of 6 populated), an e5520, and a p410/512 bbwc [fbwc?].

The Lefthand o/s just stripes the 8 drives with raid-10 for the o/s and temp, and then raid (5/6/10) for the volume partition.

I just took 3 of these offline (wanna buy?) and i'm going to pimp the ram to 48gb (8gb rdimm ecc x 6) and maybe even swap in an e5620, since all of my compute is on the 5600 series for vmotion.

What some folks have done is run two VSA's on one lefthand, say dev and prod. With VSA software you are limited to 5 volumes on controller 1:0 (10tb in the case of esxi) - kind of dumb. In theory it may be possible on hyper-v or using RDM to pimp this out more but they have a bunch of checks to prevent the haxors from optimizing their systems.

The p4300g2 has an amazing 10gb nic upgrade, which is a 2gb stick of ram and an nc550sfp card for like $2000 lol. It's likely the bare-metal o/s does not have drivers for all known cards, but you can probably throw in a gen8 nic or fibre channel card and go to town. Either way, if you hypervise and use the VSA (note the bare-metal image has tons of lame checks in bash scripts to prevent using non-hp gear, so don't bother), you could attach fibre or 10gb or 40gb nics all day long as the hardware is neutral.

Anyhoo, you could run dev/prod/test on lefthand1vsa and dev/prod/test on lefthand2vsa and rock 30tb off each physical machine. You can also reserve capacity for the lefthand vsa's and run light (read: really light i/o) vm's on those machines too. If you haven't noticed, 2 nodes is the highest-performing setup: a 3-node system with network raid-1 has a 33% chance of mpio hitting a node with no local data, so that node has to go back over the same nics, ask another server for the data, get it, and serve it up to you.
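The 33% figure is just the chance that a uniformly chosen node is the one of three holding no copy of the block. A quick simulation to sanity-check it (a toy model, assuming MPIO picks nodes uniformly and each block is mirrored onto 2 of the 3 nodes):

```python
import random

def miss_rate(nodes=3, copies=2, trials=1_000_000):
    """Fraction of requests that land on a node holding no local copy."""
    misses = 0
    for _ in range(trials):
        # Network raid-1 mirrors each block onto `copies` of the nodes.
        holders = set(random.sample(range(nodes), copies))
        # MPIO picks a node uniformly, unaware of data placement.
        if random.randrange(nodes) not in holders:
            misses += 1
    return misses / trials

print(round(miss_rate(), 2))  # about 0.33 for a 3-node, 2-copy cluster
```

With 2 nodes the miss rate is zero (every node holds a copy), which is the intuition behind 2 nodes being the sweet spot.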

My testing shows that our 3-node p4300 g2 cluster (each node with 8 x 450GB 15K SAS, 24 drives total), with disk raid-5 and network raid-1 over gigabit networking (ALB) on a 2910al with full flow control, performs about the same as 1 lsi 9260-8i with fastpath and 6 samsung 830 ssds in raid-10.

Now you might say load up 8 ssds and 10gb nics, but keep in mind the p4300 g2 wants to do 10gb primary / 1gb secondary rather than 10/10 :(
 
  • Like
Reactions: dba

mason736

Member
Mar 17, 2013
111
1
18
Mrkrad, do you happen to know the answer to my previous question about replacing the HP SAS drives with another drive, such as the WD RE4s, in the event of a failure?
 

RimBlock

Active Member
Sep 18, 2011
838
28
28
Singapore
mrkrad said:
p4300 g2 is just a dl180 g6 with ilo nic... (full post quoted above)
Great info to help avoid costly mistakes.

Thanks for that.

RB
 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
HP gives no ****s about the drives. During the drive shortage I could only get my hands on P2000 G3 2TB drives, which are basically the same Hitachi 512-sector 2TB drives they sell in servers. Really the exact same shit.

Now if you are asking whether you can use 3tb/4tb - I don't know about the advanced-format sector shit. I've never had anything bigger than 2TB. If there's a limit, it would be the P410's. I know they sell 3TB drives.

As long as the number of sectors is equal or greater, the P410 gives no ****s. It is a fact of life that drives go EOL. The biggest problem I see would be a different sector size: you replace a drive with the same sector size, and the logical number of sectors must be greater than or equal to the previous drive's.
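The replacement rule above boils down to two comparisons. A trivial sketch (the drive names and sector counts here are illustrative, not vendor specs):

```python
def is_valid_replacement(old, new):
    """Same logical sector size, and at least as many logical sectors."""
    return (new["sector_bytes"] == old["sector_bytes"]
            and new["sectors"] >= old["sectors"])

# Illustrative numbers for 512-byte-sector 2TB drives vs a 4K-sector drive.
failed_hp_2tb = {"sector_bytes": 512, "sectors": 3_907_029_168}
wd_re4_2tb    = {"sector_bytes": 512, "sectors": 3_907_029_168}
some_4k_2tb   = {"sector_bytes": 4096, "sectors": 488_378_646}

print(is_valid_replacement(failed_hp_2tb, wd_re4_2tb))  # True
print(is_valid_replacement(failed_hp_2tb, some_4k_2tb))  # False: sector size differs
```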

But I just took apart a dev monster that ran two dell 15K sas drives, two dell 10K sas drives, and two hp 10K drives just fine in raid-10. The only problem is you won't get firmware updates if you use non-hp drives; it will not apply an HPD9 update to a non-oem drive. These patches enhance the drives and work around errata during stress/failure that can tank an array, which is why hp oem drives are superior on HP controllers. They have extra juju to make them fail more gracefully.

If i had the same sector size (512 to 512, 4K to 4K) i'd most certainly stick a WD RE4 2TB in rather than go without, since folks don't run spares in lefthands.


there are a few things i'd love to try:

1. raid-0 - why not? seems if you do raid-5 network raid-0 on the server makes a lot of sense (VSA)

2. jbod - why not? Seems if you have 5 slots for drives, why not use this as a way of turning 5 direct SATA ports into usable storage? (vsa)

3. The new HPVSA driver for esxi is basically a software Smart Array controller that has nothing to do with the lefthand vsa. It takes motherboard ports and turns them into a b-series controller with full raid-0/1/5/10 in esxi, with options for 512meg FBWC!! Crazy, huh? Confusing name.

The P2000 G3/MSA does software raid crc iirc. Be careful with this, since raid-6 is very slow. If you haven't noticed, the lineage goes dl320s -> msa60/70 -> msa2312sa/msa2324sa.

Likewise, the dl180 G6 -> d2600/d2700/p2000 g3 sas.

It's why the dl320s overheats so quickly. It was meant to have a crossbar (msa60) or a simple controller (msa), not a full xeon, in the chassis. Thermal shutdown comes very quickly. Component-wise you can probably turn an msa60 into a dl320 or msa2312sa.
 

dba

Moderator
Feb 20, 2012
1,478
183
63
San Francisco Bay Area, California, USA
mrkrad, the STH resident HP expert, sent me a set of blank MSA interposer sleds - huge favor. Thanks again! With these, I've filled up the MSA with my existing 1TB drives for use as a bulk storage solution.

I also took some time to run a few experiments:

1) I slotted 1TB Hitachi consumer drives into the interposers. HP sells Hitachi drives in their interposers, but these are not the same drives. Result: They work great, even though they are not using HP firmware. I have an eleven drive RAID6 initializing right now.

2) I then tried a 4TB Hitachi drive. Oddly, the drive shows up as 2TB. HP supports 3TB SATA drives, so I'm a bit surprised at this result. Perhaps this version of the HP interposer is limited to 2TB?

3) Finally, I tried a Samsung 840 Pro SSD drive in the MSA. The drive shows up just fine and - nice - the interposers gave it full dual-port capabilities. I ran some IOMeter tests on the SSD drive just for fun. With the cache disabled, I get maximum throughput of 253MB/Second - not surprising since the MSA is limited to SATA2 drive speeds. The MSA was not designed for SSD. Maximum 4kb IOPS was 40,300. This is far lower than the SSD itself, and represents the best that the mirrored MSA controllers can do period. This isn't bad if you are used to 100-200 IOPS spinning drives, but in the SSD era it seems a bit anemic.
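As a sanity check on those two numbers: 40,300 IOPS at 4kb works out to about 165MB/second, comfortably under the 253MB/second sequential ceiling, which fits the reading that the controllers, not the SATA2 link, cap small-block performance. Quick arithmetic:

```python
iops = 40_300                # measured 4kb IOPS from the test above
block_bytes = 4 * 1024       # 4kb per request
throughput_mb = iops * block_bytes / 1_000_000
print(f"{throughput_mb:.1f} MB/s")  # 165.1 MB/s
```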

Overall I am very impressed by the MSA, especially at eBay prices. I ended up with 12TB of disk at 10 Gigabit speeds with redundancy, clustering, and connectivity to up to eight different hosts for less than $1K. On the other hand, Gigabit iSCSI is even cheaper and supports more than eight hosts while Infiniband is much faster. In the end, the MSA (or other small SAN array) occupies a middle ground between DIY iSCSI and a 10GbE or IB scale-out NAS. As technology changes, however, this middle ground seems to be getting narrower.

An update on this array after a week or so of use:

1) Power consumption isn't that bad - 1.6A when running full out.
2) No snapshots are included in the base license. Snapshot licenses are absurdly expensive.
3) Nice web GUI.
4) 1TB dual-port drives are surprisingly reasonable if you have patience. I bought six of them at $60 each with a 2012 manufacturing date - still under warranty! I found other new 1TB drives for $104 to $160 depending on whether they are Cheetah, Constellation, or Hitachi versions of the drive. These prices are a bit above the base price for a 1TB "enterprise" (midline) SATA drive, but remember that you get a fancy interposer sled that turns them into a dual-port drive.
5) I forgot how long it takes to format a large RAID-6 array! Three days!
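For what it's worth, the 1.6A figure in item 1 converts to wall power as follows (assuming 120V mains; the post doesn't state the voltage, so treat that as an assumption):

```python
amps = 1.6
volts = 120                  # assumed mains voltage, not stated in the post
watts = amps * volts
print(f"{watts:.0f} W")      # 192 W
```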
 
Last edited:

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
Please hook up the P420/1GB FBWC with SAAP 2.0 key (the trial is 60 days; you can uninstall/reinstall endlessly) and set up a pair/quad of 256gb cache drives, say, locally. A P822 would be nice but big bucks.

If you had a 750GB read cache of fast internal SSD, and perhaps one connection to the array per host with no lun sharing, do you think you could get stupidly astounding raid-6 performance?

Can the MSA present jbod alongside raid?

Definitely jealous of your hot deal there, man. $499 is fantastic. Any thoughts about upgrading the controller(s) to P2000 G3? I heard you can pop the two out and poor-man install just one P2000 G3 controller to get 6gb/s sas. Never seen it done.

Pictures of this rig would be really cool. Most folks haven't seen the inside!!
 

dba

Moderator
Feb 20, 2012
1,478
183
63
San Francisco Bay Area, California, USA
I naturally thought about upgrading to G3, but I think that would ruin the economics of the MSA. For what it would cost, I would rather have a ZFS box.

I'll try the P420 with the MSA and SAAP/SmartCache when I get a chance - it would be a great experiment. For everyone: SmartCache is a licensed feature of recent HP RAID cards that uses your own SSD drives as a read cache for a disk array. Imagine four SSD drives adding 1TB of read cache at 2,000MB/s to a great big array of hard drives.
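To build intuition for why an SSD read cache in front of spinning disks pays off, here is a toy average-latency model (the 0.1ms/8ms latencies are illustrative assumptions, not SmartCache measurements):

```python
def effective_latency_ms(hit_rate, ssd_ms=0.1, hdd_ms=8.0):
    """Average read latency given the fraction of reads served from SSD cache."""
    return hit_rate * ssd_ms + (1 - hit_rate) * hdd_ms

for hit_rate in (0.0, 0.5, 0.9, 0.99):
    print(f"hit rate {hit_rate:.0%}: {effective_latency_ms(hit_rate):.2f} ms")
```

Even a modest hit rate cuts average latency sharply, which is why a big read cache can make a raid-6 spindle array feel much faster.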

mrkrad said:
Please hook up the P420/1GB FBWC with SAAP 2.0 key... (full post quoted above)
 

TheBay

New Member
Feb 25, 2013
220
1
0
UK
HP raid cards love Hitachi; they don't like Samsung spinners though!

Well jealous of the deal you got! And mrkrad, I wonder what the hell you have running in your home :eek:
 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
I've noticed that since the ole DL320s G1 (AIO1200), where every damn drive was WD, all of mine have been Hitachi lately, with some Seagate 2.5" 10K SFF SAS DP thrown in here or there.