HP StorageWorks MDS600 Questions


Biren78

Active Member
Jan 16, 2013
550
94
28
Hi - I read here that the HP StorageWorks MDS600 holds 70 drives in 5U. At $775 with shipping, it seems like an OK deal.
HP StorageWorks MDS 600 Dualport I O Module Hard Drive Array AJ866A | eBay

Was thinking of making a drive convalescent home out of an HP MDS600. Seems like I can buy 2x Norco 24-bay chassis for $400 each, then 2x SAS expanders for $225 each and 2x nice 850W PSUs for $125 each, for a total of around $1,500. It would hold only 48 drives but would be OK for my purpose. Seems like the MDS600 will take more drives and cost half as much.

I had a few questions:
  1. Is the MDS600 hot-swap? How does this work? I don't see how you can fit 70x 3.5" drives in the front of a 5U chassis.
  2. Does anyone have pictures?
  3. How many SFF ports do I need to connect?
  4. Will this work with both SATA and SAS drives because I have both?
  5. Are these really cheaper than Norco enclosures?

TIA
 

Scout255

Member
Feb 12, 2013
58
0
6
The chassis itself may be cheap, but the drive trays will add to the price. I believe they are just regular HP trays, which run $5-10 apiece, so for 70 of them you're looking at an additional $350-$700 plus shipping.

You can see how they are installed here: HP SSA70 MDS600 Modular Disk System 451018 B21 70x 3 5" HDD Trays | eBay
It's basically a pull-out drawer concept (the drives still function with the drawer pulled out; there is a rather expensive-looking ribbon cable attachment that connects to the drive drawer). All drives are hot-swap.

Without using a SAS switch, you are limited to 4x 3G SAS bandwidth per drawer (meaning you would need an 8e SAS card and run 2 SAS cables to connect it).

I'm really not sure on #4. All the specs show that it will support SATA drives, but it may require interposers, which would mean the standard trays may not work. Lots of HP gurus here though; I'm sure someone else can clarify this one.

With trays I think it would be very similar in cost, though the HP unit would have a much smaller footprint (5U instead of 8U). The advantage Norco has is that if one component fails, you can generally buy a replacement relatively cheap (say your SAS expander, or a backplane, etc.), whereas with the HP unit some of the components, such as the IO module, cost about the same as what that seller is asking for the whole setup.
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
I believe they are just regular HP trays, which run $5-10 apiece, so for 70 of them you're looking at an additional $350-$700 plus shipping.

You can see how they are installed here: HP SSA70 MDS600 Modular Disk System 451018 B21 70x 3 5" HDD Trays | eBay
It's basically a pull-out drawer concept (the drives still function with the drawer pulled out; there is a rather expensive-looking ribbon cable attachment that connects to the drive drawer). All drives are hot-swap.

I'm really not sure on #4. All the specs show that it will support SATA drives, but it may require interposers, which would mean the standard trays may not work. Lots of HP gurus here though; I'm sure someone else can clarify this one.

With trays I think it would be very similar in cost, though the HP unit would have a much smaller footprint (5U instead of 8U). The advantage Norco has is that if one component fails, you can generally buy a replacement relatively cheap (say your SAS expander, or a backplane, etc.), whereas with the HP unit some of the components, such as the IO module, cost about the same as what that seller is asking for the whole setup.
They are regular HP trays, so plan on $5 or so every time you add a drive - there are millions of these floating around so there is no need to buy them until you need them. They are very simple trays - no interposers.

If you are using SATA drives, you'll use one SFF-8088 port for each bank of 35 drives. With dual-ported SAS drives you'd double up for speed and redundancy - using two SFF-8088 ports per 35 drives. Your maximum throughput will be around 1GB/second per bank of 35 drives, 2GB/second total** - or double that if you are using dual-ported SAS drives. If you think about it, most STH readers would buy one of these to get an ultra high quality place to stuff a huge quantity of large SATA drives to use as bulk storage. For that use case, 2GB/second is more than enough.

**HP also talks about a "high performance cabling" option which is an x8 SAS connection through a SAS switch that will offer double throughput, but that appears to work only with very specific combinations of equipment.
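For anyone who wants to sanity-check those numbers, here's a quick back-of-the-envelope sketch. These are my own rough assumptions, not HP figures: 4 lanes per SFF-8088 cable, SAS1 at 3 Gb/s per lane, and 8b/10b line coding eating 20% of the raw rate.

```python
# Rough usable throughput for one MDS600 bank over a single SFF-8088 cable.
# Assumptions (mine, not HP's): 4 lanes per cable, SAS1 at 3 Gb/s per lane,
# 8b/10b line coding (8 data bits carried per 10 bits on the wire).
LANES = 4
LINE_RATE_GBITS = 3.0
ENCODING = 8 / 10

usable_gbytes = LANES * LINE_RATE_GBITS * ENCODING / 8  # bits -> bytes
print(f"~{usable_gbytes:.1f} GB/s per bank before protocol overhead")
```

That lands at ~1.2 GB/s raw, which drops to roughly the 1GB/second figure once protocol overhead is counted.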
 
Last edited:

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
Would be nice to swap out my Norcos with one of these and reclaim 3U. Looks heavy though! 160 lbs+
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
Do you need to buy these to get the MDS600 to work?
HP 600 Modular Disk System Dual I O Module Option Kit AP763A | eBay
They seem more expensive than the whole thing in that auction. What are these for?

The I/O module is just a fancy way of saying "wiring and an expander chip". OK, so it's more complicated than that, but not much. The auction includes two of them already, which is all you need unless you plan on using dual-ported disks, in which case you need two more.
 

MaxCFM

Harder, Better, Faster, Stronger!
Apr 11, 2013
30
2
8
Atlanta,GA
They are regular HP trays, so plan on $5 or so every time you add a drive - there are millions of these floating around so there is no need to buy them until you need them. They are very simple trays - no interposers.

If you are using SATA drives, you'll use one SFF-8088 port for each bank of 35 drives. With dual-ported SAS drives you'd double up for speed and redundancy - using two SFF-8088 ports per 35 drives. Your maximum throughput will be around 1GB/second per bank of 35 drives, 2GB/second total** - or double that if you are using dual-ported SAS drives. If you think about it, most STH readers would buy one of these to get an ultra high quality place to stuff a huge quantity of large SATA drives to use as bulk storage. For that use case, 2GB/second is more than enough.

**HP also talks about a "high performance cabling" option which is an x8 SAS connection through a SAS switch that will offer double throughput, but that appears to work only with very specific combinations of equipment.
A few questions
1. If using an MDS600 with only SATA drives, would a 9211-8e card running in an x4 slot have enough bandwidth to satisfy 1GB/sec per bank, or would the card need to be in an x8 slot?
2. Would the MDS600 work behind an expander port on a RES2CV360? Say I had 2 free ports off the expander and all SATA on the rest of the expander already.

Just trying to align my future purchases up and educate myself.
 

nitrobass24

Moderator
Dec 26, 2010
1,087
131
63
TX
A few questions
1. If using an MDS600 with only SATA drives, would a 9211-8e card running in an x4 slot have enough bandwidth to satisfy 1GB/sec per bank, or would the card need to be in an x8 slot?
2. Would the MDS600 work behind an expander port on a RES2CV360? Say I had 2 free ports off the expander and all SATA on the rest of the expander already.

Just trying to align my future purchases up and educate myself.
1. Yes - PCIe 2.0 x4 has 2GB/s of bandwidth in each direction, 4GB/s total.
2. Technically it should work with an expander, since SAS connections can be daisy-chained. However, compatibility issues may crop up, as things were not as standardized in SAS1 as they are in SAS2.
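For reference, the arithmetic behind #1 works out like this (a rough sketch with my own assumed numbers: 5 GT/s per PCIe 2.0 lane, 4 lanes, and 8b/10b encoding, which PCIe 2.0 shares with SAS1):

```python
# Back-of-envelope check that a PCIe 2.0 x4 slot covers a ~1 GB/s SAS1 bank.
# Assumptions: 5 GT/s per PCIe 2.0 lane, 4 lanes, 8b/10b line coding.
GT_PER_LANE = 5.0
LANES = 4
ENCODING = 8 / 10

gb_per_direction = GT_PER_LANE * LANES * ENCODING / 8  # GB/s each way
print(f"{gb_per_direction:.0f} GB/s per direction")
```

2 GB/s each way, so an x4 slot has headroom for the ~1GB/sec a single SATA-populated bank can deliver.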
 

MaxCFM

Harder, Better, Faster, Stronger!
Apr 11, 2013
30
2
8
Atlanta,GA
1. Yes - PCIe 2.0 x4 has 2GB/s of bandwidth in each direction, 4GB/s total.
2. Technically it should work with an expander, since SAS connections can be daisy-chained. However, compatibility issues may crop up, as things were not as standardized in SAS1 as they are in SAS2.
Thank you this helps me greatly!
 

OnCall

New Member
Aug 6, 2013
7
0
0
Don't know if you're still looking for answers regarding the MDS600 (or SSA70, as it apparently was previously known), but I have one and can answer some questions about it. (I was actually lurking in these forums to learn more about it, in case others had already gone there.) I registered so that I could add my experiences to this thread.

My SSA70/MDS600 has recognized both SAS and SATA drives that I've plugged into it, in the same drawer and at the same time. The LSI 9280-4i4e that I had connected even gave me a choice of which drive to boot from. Note that this unit is functionally 2 separate drawers of 35 drives each, so you'd need a dual-port external HBA or RAID/SAS controller to use both sides at once.

I haven't measured the power usage of this, but it sounds similar to standing out on the tarmac next to a 727 with engines at idle, so it may not be for the faint-hearted, noise-wise.

I've read through the HP manuals online for these. Apparently, if yours identifies itself to the HBA as an SSA70, you may need either a firmware update or an I/O module replacement to bring it up to date, but you can't update the firmware on these without an HP controller card, and you also can't edit the zoning without a genuine HP card or HP SAS switch. I did buy an HP P800, which does work, but I don't care much for the HP RAID cards or their software interface, so I keep it just in case I need to edit the features of this system.

Another caution: the manuals and the unit itself say in multiple places that you should have all of the slots populated with drives, fillers, or blanks (including the redundant fans/power supplies/I/O modules in the back), or you risk overheating your drives. If you're buying one of these in an eBay auction (which I did without doing my homework first), try to get the seller to include the drive trays if you can, as it will save you a bit down the road.

I haven't yet done any link testing of it, but I suspect that mixing SAS & SATA forces the entire drawer to SATA 1.5 Gb/s speeds, so if you were looking for slow, but cheap storage, this may still work for you. It will only do SAS 3G speeds if all of the drives are SAS 3G, which could get pretty expensive.

I don't have any expanders, so I can't check how it works with those, but I do have many LSI 9280-4i4e cards, a 9280-8e on the way, and various others with internal-only ports and no good way to export them.

I got lucky enough to only pay $300 for mine with all the trays, and I drove 4 hours to pick it up, rather than pay $175 in truck-freight charges, & still have to hassle with the trucking company about scheduling a residential delivery. It is fairly heavy, at 160 lbs empty, and probably close to twice that with all drives installed. It has 4 redundant hot-swap 1200 watt power supplies, and 4 hot-swap fan modules, which turn on as soon as the power cords are plugged in, whether or not the unit is turned on.

If it comes with dual I/O modules, that's all you need, unless you have a SAS switch and are using dual-port drives; otherwise you'll get no benefit from adding more. The modern version of this is the D6000 storage array, which is 6G SAS all the way and is reported to not even work with SATA drives installed. They look the same, although HP warns that the I/O modules for the D6000 will NOT work in an MDS600 or SSA70 at all.

Hope this info is of some use to others. I'm not sure whether I should put mine up for sale and look for a set of 3G MSA60s, or go faster and more current by looking for a set of D2600s from HP; although they won't hold as many drives, they're probably much more energy efficient to operate...
 

BThunderW

Active Member
Jul 8, 2013
242
25
28
Canada, eh?
www.copyerror.com
I measured my Dell MD1000 with 14x15K drives at just over 300W at the wall. With 70 drives, this sucker would draw over 1600W! Now I imagine people wouldn't be filling it with 15K drives, but holy crap that's still gonna suck down some serious juice. Any way you slice it, 70 drives are going to put a big dent in your cooling and hydro costs. Things get a little more reasonable if going with standard consumer drives. But even at 6-7W average per drive that still adds up to 400-500W+.
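A rough sketch of that estimate, extrapolating from my MD1000 reading (these are ballpark figures, not measurements of an actual MDS600; real draw varies by drive model and load):

```python
# Hedged power estimates for a fully loaded 70-bay MDS600.
# Per-drive figures are ballpark: ~21 W for a 15K SAS drive (the 300 W / 14
# drives measured on my MD1000) vs. ~6.5 W average for a consumer SATA drive.
DRIVES = 70
watts_15k = DRIVES * (300 / 14)   # extrapolated from the MD1000 reading
watts_consumer = DRIVES * 6.5

print(f"15K SAS: ~{watts_15k:.0f} W, consumer SATA: ~{watts_consumer:.0f} W")
```

Drive power alone comes to ~1500 W for 15K SAS; chassis fans and PSU losses push it past 1600 W. Consumer drives land in the 400-500W range.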
 

Biren78

Active Member
Jan 16, 2013
550
94
28
This is some great info! I was thinking about getting one and this helps.

Any pics of your setup? I think these are so cool
 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
The P8xx series is capable of handling these; it is perhaps Windows 2008/ESXi that cannot deal with 4 paths properly (Unix definitely can).

All HP Smart Array controllers have zoning (SSP) built in; they always have, ever since the days of two SCSI cards in two PCs sharing drives over one Ultra-SCSI cable (MSA1500).

Though a SAS switch would be a spiffy way to give a blade many DAS enclosures.

The MDS6000 doubles those speeds.

Drives eat up a ton of power; 15K SAS drives seem to chew about 15-18 watts each.
 
Last edited:

OnCall

New Member
Aug 6, 2013
7
0
0
No pics of the setup yet, but should I decide to actually 'set up' this thing, I can surely take pics. Right now, I've placed it and a DL160 on a pallet for testing, 'cause I needed to know more about what I'd just bought before it was too late to return it, or before I sunk more money into something that wasn't going to be worthwhile or accomplish my goals. Part of me says to cut my losses and just go shopping for some MSA60s, or even better, some D2600s... ;)

If there is anything specifically that you would like a better look at, LMK, and I can take some pictures of it. I'm not likely to get rid of it anytime soon, 'cause right now, I could still write off the roughly $400 I've got in it, and not hurt too much, so it isn't likely to be worth the time to put it back on ebay or package it for shipping. I've got plenty of rack space, so it could just sit in a rack and stay out of the way, if I decide not to use it.

I'd really prefer something much faster than 1.5 Gb/s. Physically, I don't see enough of a difference between this and the newer D6000 to rule out converting the important parts over to the 6G version, but that may be my ignorance too. I will have to look into the availability of the internal components for the D6000 to find out how much an 'upgrade' would cost, 'cause the big heavy parts look to be the same in all the manuals I can find so far. I'm thinking it will only entail replacing the I/O modules and the backplanes, but I need to further dissect this one for more info (yes, if/when I take this one apart, I WILL take plenty of pictures!).

Per the HP manual, this model can't be zone-edited without an HP SAS switch, although I don't know how accurate that is. All of their instructions on how to edit zones start with the web console of the SAS switch, but much like their other stuff, they never intended for these to be used by anyone who didn't drink all of the HP BladeSystem Kool-Aid, so the docs are understandably not written for anyone doing otherwise, nor is anything else supported. We're pretty much on our own :)

They do specify that using many other HP controllers is OK for the smaller, non-BladeCenter-specific storage arrays, but I didn't check the dates on that documentation, so I don't know what may supersede what. These do seem to have some features that force them to be much more Blade-friendly, and I'm not willing to purchase an HP SAS switch just to be able to use those features, as long as there is no pre-existing zoning on mine that I'd need to remove to access some bays.

If the interface was faster, I wouldn't mind filling it up with 10K or even 15K drives, but first, I need to get a faster interface :)
 

OnCall

New Member
Aug 6, 2013
7
0
0
I measured my Dell MD1000 with 14x15K drives at just over 300W at the wall. With 70 drives, this sucker would draw over 1600W! Now I imagine people wouldn't be filling it with 15K drives, but holy crap that's still gonna suck down some serious juice. Any way you slice it, 70 drives are going to put a big dent in your cooling and hydro costs. Things get a little more reasonable if going with standard consumer drives. But even at 6-7W average per drive that still adds up to 400-500W+.

I'm still trying to figure out ways to cut down on the power use & the noise, if possible. Since I won't be using more than 1 drawer for a while, trying to disable one side of it seems to be my best bet. The power supplies are truly redundant, so no matter which side you pull, all fans always get power. I will try to find some fan blanks, and possibly power supply blanks, for one side, or possibly just remove the drawer on one side along with all the modules on that side. I will check the chassis power draw with no drives and all modules in before starting, so that I have a good reference on whether it is even worth cutting the power, 'cause even 4x 1200 watt power supplies aren't drawing 4800 watts when there is no demand for it.

I will post my results as soon as I get them, although I'm in the midst of a new server deployment at a customer's, and a migration of their old SBS 2003 server to the replacement, which will severely limit my available free time to play with my new toys.
 

BThunderW

Active Member
Jul 8, 2013
242
25
28
Canada, eh?
www.copyerror.com
I'm still trying to figure out ways to cut down on the power use & the noise, if possible. Since I won't be using more than 1 drawer for a while, trying to disable one side of it seems to be my best bet. The power supplies are truly redundant, so no matter which side you pull, all fans always get power. I will try to find some fan blanks, and possibly power supply blanks, for one side, or possibly just remove the drawer on one side along with all the modules on that side. I will check the chassis power draw with no drives and all modules in before starting, so that I have a good reference on whether it is even worth cutting the power, 'cause even 4x 1200 watt power supplies aren't drawing 4800 watts when there is no demand for it.

I will post my results as soon as I get them, although I'm in the midst of a new server deployment at a customer's, and a migration of their old SBS 2003 server to the replacement, which will severely limit my available free time to play with my new toys.

When running multiple redundant power supplies, the power "losses" are going to be greater due to power supply inefficiency. I've verified this myself, so even though most of my servers at home support redundant power supplies, I only run one and keep the other as a cold spare. PSU failures are rare enough that I'll take the risk of running a single supply. Of course, I wouldn't do that in a datacenter.

The fan wattage adds up pretty fast too (some are >30W each), especially if they're running all out.
 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
You don't have to run all the power supplies, eh?

Honestly, the DL180 G6 seems to be the best bang-for-the-buck 12 LFF server: two dual-port 10GbE NICs (fully redundant) plus a P420/1GB FBWC, and you have more power than some mighty big SANs. And it's stupid easy to set up HP StoreVirtual VSA on free ESXi 5.1! You can run, say, DEV/PROD/TEST on each machine, with two servers (or more).

I've got 10 reserved ESXi 5.1 MAC address licenses, so I can build up VSAs on the fly. Heck, if you ran a vSwitch you could perhaps even use MAC NAT masquerading, but that might be for proof of concept only!
 

BThunderW

Active Member
Jul 8, 2013
242
25
28
Canada, eh?
www.copyerror.com
You don't have to run all the power supplies, eh?

Honestly, the DL180 G6 seems to be the best bang-for-the-buck 12 LFF server: two dual-port 10GbE NICs (fully redundant) plus a P420/1GB FBWC, and you have more power than some mighty big SANs. And it's stupid easy to set up HP StoreVirtual VSA on free ESXi 5.1! You can run, say, DEV/PROD/TEST on each machine, with two servers (or more).

I've got 10 reserved ESXi 5.1 MAC address licenses, so I can build up VSAs on the fly. Heck, if you ran a vSwitch you could perhaps even use MAC NAT masquerading, but that might be for proof of concept only!

Yup. DL180 G6 + optional MSA60 sounds like a killer SAN config.
 

OnCall

New Member
Aug 6, 2013
7
0
0
Right now, I only have a 1U DL160 G6, not a DL180, so I only have 4 LFF bays, and I'd really rather have a RAID controller that will email me about any problems. I haven't yet found any part of the HP RAID setup that allows me to configure SMTP settings for notification. I may end up getting a DL180 at some point down the road, but I think I prefer the DL380s for the extra features.

I've found a few DL360 G6's on eBay that have 6 LFF bays instead of 10 SFF, so I'm watching them for future prospects no matter how things work out with this MDS600, 'cause we need a new (low-volume) production server that this would work well for.
 
Last edited: