Underappreciated 14-bay 2U storage server - HP DL180 G6 for $168 shipped


PersonalJ

Member
May 17, 2013
Picking one of these up now - I have a few SAS drives in storage I really need to test so I can sell them. If anyone wants the L5630s, let me know; I am going to replace them.

mrkrad

Well-Known Member
Oct 13, 2012
I've got a dozen 146GB HP 15K SAS drives - perfect for RAID-5/50. I've got some G4s with SCSI drives in RAID-5 - 8 years and no bad sectors.

Jeggs101

Well-Known Member
Dec 29, 2010
PersonalJ said:
Picking one of these up now - I have a few SAS drives in storage I really need to test so I can sell them. If anyone wants the L5630s, let me know; I am going to replace them.

How much for those?

Wonder how they compare to the L5520 and L5530.

ecosse

Active Member
Jul 2, 2013
I have a DL180 G6 - the noise is the problem for me. They are fine until you put a PCIe card in, whereupon the IPMI ramps up the fans and they get pretty loud. I've not really found a way to lessen the noise, other than putting it in a noise-reducing case. Just something to be aware of... but if you find a fix I'd be most interested!

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
That's odd. I built mine up and then added an InfiniBand card, and there was no significant increase in fan speed. I do note that there are two versions of the chassis - one with four fans and one with eight.


PigLover

Moderator
Jan 26, 2011
What version of the BMC firmware do you have? I've noticed the later revisions manage the fans better. Mine is not exceptionally loud even with 14 drives and two PCIe cards plugged in, an M1015 and an Intel 10GbE. Before somebody points out that this is impossible... the M1015 is loose behind the rear drive tray, connected to the PCIe slot under the drive tray using a 12-inch flexible PCIe riser - totally ghetto, but functional for now.

At least it wasn't exceptionally loud before this recent heat wave. With ambient too high for comfort it's actually something of a screamer. :(

NetWise

Active Member
Jun 29, 2012
Edmonton, AB, Canada
Do tell more about this flexible PCIe riser... one of the reasons I haven't jumped on one of these servers is the single PCIe slot (or so I understood). With two, I could certainly make the switch.

PigLover

Moderator
Jan 26, 2011
I had an acquaintance at a cable shop make up a custom riser for me. This one is expensive (~$90 for a 12" cable) but it gets the idea across. Basically, plug it into the PCIe slot tucked under the rear drive cage, make a careful 90-degree bend in the cable, route it out under the drive cage, and make another 90-degree bend to seat the card. Ghetto-rig it as securely as you can behind the drive cage.

Make sure you use a PCIe 2.0 rated cable. There are lots of cheap cables available on eBay - don't use them, as they are 1.0 cables and you won't get the performance you want from them.
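To put rough numbers on why the 1.0 cables hurt (a back-of-envelope sketch; the ~250/500 MB/s per-lane figures are the usual post-8b/10b estimates, not measurements from this rig):

```python
# Rough per-slot bandwidth: why a PCIe 1.0 riser cable throttles a 2.0 card.
# Assumes ~250 MB/s usable per lane for PCIe 1.0 and ~500 MB/s for PCIe 2.0
# (after 8b/10b encoding overhead).
MB_PER_LANE = {"1.0": 250, "2.0": 500}

def slot_bandwidth(gen: str, lanes: int) -> int:
    """Approximate usable one-way bandwidth in MB/s."""
    return MB_PER_LANE[gen] * lanes

for gen in ("1.0", "2.0"):
    print(f"x8 PCIe {gen}: ~{slot_bandwidth(gen, 8)} MB/s")
# x8 PCIe 1.0: ~2000 MB/s
# x8 PCIe 2.0: ~4000 MB/s
```

A 1.0-only cable silently halves what the slot can feed a 2.0 card.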

I also velcro-mounted two SSDs to the inside wall of the case and connected them to the motherboard SATA ports. Used for a mirrored boot.

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
You get up to four PCIe slots in a DL180 G6.

The models with two hot-swap drives in back are limited to one half-height PCIe slot, but all others have three full-height and one half-height slot, and you can replace the two-bay rear cage with a different cage that offers three additional PCIe slots. You can then add a one-slot, two-slot, or three-slot PCIe riser.

Right now I'm running a twelve-bay DL180 G6 with a RAID card plus an InfiniBand card.


Patrick

Administrator
Staff member
Dec 21, 2010
And for those wondering why, the DL180 G6 has large PCIe slots meant to house risers and provide enough lanes for the riser cards. The rear drive cage only takes a few screws to remove.

mrkrad

Well-Known Member
Oct 13, 2012
There are some options:
1. DVD kit - removes the x16 card and leaves you with 12 LFF bays and two x8 slots
2. Left riser with two x8 and one x4
3. Left riser with two x8 ($10 I paid)

So I'm going to rock the P420/1GB FBWC with HP SmartCache ($249 RAID controller) with 4 SSDs in read-only cache mode via a fan-out cable, plus the 12 drives, and run two or three dual 10GbE NICs - the QLE8152 ($75-99) cards run at x4 only, so you can fit two Emulex x8 dual 10GbE NICs in there plus one QLE8152 for six 10GbE ports.
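For what it's worth, x4 is the pinch point on the QLE8152 - a rough estimate, assuming the usual ~500 MB/s usable per PCIe 2.0 lane:

```python
# Can a dual-port 10GbE NIC keep up on a x4 PCIe 2.0 link? Back-of-envelope,
# assuming ~500 MB/s usable per PCIe 2.0 lane after encoding overhead.
lanes = 4
link_mb_s = 500 * lanes                     # ~2000 MB/s across the x4 link
dual_port_line_rate_mb_s = 2 * 10_000 / 8   # 20 Gb/s -> 2500 MB/s
print(f"x4 link: ~{link_mb_s} MB/s vs dual-port line rate: ~{dual_port_line_rate_mb_s:.0f} MB/s")
```

So the x4 link covers one saturated port with headroom, but falls a bit short of both ports at full line rate - fine for a third NIC you mostly want for extra ports.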

If you are good at fabricating, it would not be hard to get the dual x8 riser and fit it in - use some Legos :) The only place I've found selling the retrofit kit to put it back to a proper 12 LFF is a dude in Germany who won't ship over here.

Why? Well, I've got 36 drives that were in old DL320s storage servers which are getting long in the tooth.

Why did I choose 5600-series CPUs? Because they support SR-IOV and every other server here is on the 5600 series. You can get non-ES 1.6GHz quad-core E5603 CPUs for $70 on eBay. Sure they are slow, but they support hardware AES.
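If you want to verify a box actually has those two features, here's a quick Linux-side sketch (assumes /proc/cpuinfo and, on reasonably recent kernels, the sriov_totalvfs sysfs attribute):

```python
# Quick Linux check for the two features above: AES-NI shows up as the "aes"
# CPU flag, and SR-IOV-capable PCI devices expose sriov_totalvfs in sysfs.
from pathlib import Path

flags = Path("/proc/cpuinfo").read_text().split()
print("AES-NI:", "aes" in flags)

sriov = [p.parent.name for p in Path("/sys/bus/pci/devices").glob("*/sriov_totalvfs")]
print("SR-IOV capable devices:", sriov or "none")
```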

Software-defined storage (AKA LeftHand VSA or NexentaStor CE VM): you can run multiple VSAs per box and other light VMs on the same machine.

One of the NICs I'm selling does 1024 vNICs per port to Linux (dual port) - that's 2048 NICs - a web hoster's wet dream!

It is truly amazing how much power you can get for so little $$ these days!

8GB DIMMs $45 each - 6 yields 48GB
E5603 $70 - 5600-series CPU with AES-NI and SR-IOV
P420/1GB FBWC $249 (HP SmartCache is $249 per SERVER)
Two 10GbE NICs $150 ($75 each)
1 barebones DL180 G6 $$cheap$$ with redundant power ($$CHEAP$$ if you did the sell-the-old-CPUs trick!)
4 SSDs to boot and cache the 12 drives (750GB read-only cache, 125GB for Windows 2012/Linux)!! $400
Fan-out cable $15 for the SSDs to the P420
12 RE4 SATA/SAS drives (2/3/4TB) - not cheap, but a ton of storage!
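Adding that up (just the priced items; the barebones chassis and the 12 bulk drives have no dollar figures above, so they're left out):

```python
# Rough tally of the parts list above. The chassis and the 12 bulk drives are
# intentionally omitted - no prices are given for them in the list.
parts = {
    "6x 8GB DIMM (48GB)": 6 * 45,
    "E5603 CPU": 70,
    "P420/1GB FBWC + SmartCache": 249,
    "2x dual 10GbE NIC": 150,
    "4x SSD (cache + boot)": 400,
    "fan-out cable": 15,
}
print(f"priced parts: ${sum(parts.values())}")  # priced parts: $1154
```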

Serious powerhouse here!

I priced out a newer DL380p and it was well over $10K for the same config!!

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
Oh well, hell then :) I thought they all had the two bays out back and the single PCIe slot. Well, this changes things :)
HP ProLiant DL180 G6 2x L5520 Quad 2.26GHz P410 1x PS Rail Kit No HD/RAM | eBay has the PCIe cage instead of the two rear drives. They accepted my offer of $300.

The details on that auction:
* Includes the HP P410 RAID card with 256MB RAM and a battery
* Includes a 750W power supply - the 460W Platinum would have been better
* Includes the four-slot PCIe cage with three full-height and one half-height slot
* Includes the half-height riser and a single x16 full-height riser. Needs a different riser card if you want to use the third and fourth PCIe slots
* Includes four fans - other versions have eight fans
* Front configuration is eight hot-swap drive slots with no expanders - the backplane has dual SFF-8087 ports
* There are two blanks in the upper left of the front panel. Remove these and you can slot in two more HP 3.5" drive trays - the power connectors are right there waiting - but there is no backplane, so no hot swap on these drives. The drive trays even latch properly, so it's not a hack.
* You have lots of room inside the chassis for SSDs. You can add sub-trays to the two non-hot-swap slots for four drives, you can fit four or maybe even eight SSDs in the empty part of the rear PCIe cage, and there is room, and power, for at least two SSDs forward of the power supply cage.

So here is the performance-focused storage server that I am building:
* DL180 G6 as above, with ten 3.5" drives for bulk storage and eight 2.5" SSDs for boot and VM storage
* Swap the one-slot PCIe riser for the two-slot PCIe riser card to get three total PCIe slots
* HP P410 RAID card for bulk storage
* LSI HBA (ZFS) or HP P420 RAID card (Windows) for the SSDs
* 10GbE card *plus* InfiniBand QDR card to serve up all of that data
* The platform will be ZFS (which has the best features by far) or Windows (best IPoIB RDMA performance) - I'm testing both to see how far I can push ZFS performance for VM storage

NetWise

Active Member
Jun 29, 2012
Edmonton, AB, Canada
That sounds like a decently fun box :) I need to get a primer on ZFS. I don't have the luxury of the NetApp or EQL from the office in the basement, and I could really use something that allows 10GbE/40Gb IB connectivity. Thanks for the extra information, very much appreciated.

mrkrad

Well-Known Member
Oct 13, 2012
About to fire up a pair with the P420/1GB FBWC in the x16 slot and two dual-port 10GbE NICs on the x8 side.

The SE1220 LeftHand boxes run the fans full blast, and so far we have had ZERO issues running the P420s in the x16 slot under heavy load with ESXi and eight 15K SAS drives.

Will be interesting to see how the dual 460 watt supplies handle 12 drives and all this networking.

Dual 10GbE NICs are more for redundancy, since even VMware recommends using 10GbE NICs from two vendors due to the general bugginess of NICs.

Intel tends to be reliable; everyone else tends to have issues - usually related to virtual functions/FCoE/iSCSI/advanced features.

I do plan on wiring up an SSD or four for P420 read caching, so bursting up to PCIe x8 2.0 speeds may be possible.

The goal is to reduce restore time on backups, which is why I picked RAID-50 - 12 drives in RAID-5 would be too dangerous, especially with 2TB SATA drives.
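To put a rough number on why 12 drives in one RAID-5 is scary (back-of-envelope: assumes the 1-in-10^14-bit URE rate typical of consumer SATA specs, a full end-to-end rebuild read, and two 6-drive legs for the RAID-50 case):

```python
# Back-of-envelope odds of hitting an unrecoverable read error (URE) during a
# degraded-array rebuild. Assumes a 1e-14 per-bit URE rate (typical consumer
# SATA spec) and that the rebuild reads every surviving drive end to end.
URE_PER_BIT = 1e-14
DRIVE_BITS = 2e12 * 8  # 2 TB drive, in bits

def p_rebuild_ure(surviving_drives: int) -> float:
    """Probability of at least one URE across the whole rebuild read."""
    bits_read = surviving_drives * DRIVE_BITS
    return 1 - (1 - URE_PER_BIT) ** bits_read

print(f"12-drive RAID-5 (11 survivors): ~{p_rebuild_ure(11):.0%}")   # ~83%
print(f"RAID-50, 6-drive leg (5 survivors): ~{p_rebuild_ure(5):.0%}")  # ~55%
```

Crude numbers, but they show the direction: a RAID-50 rebuild only re-reads one leg, so the exposure window is far smaller.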
 

mrkrad

Well-Known Member
Oct 13, 2012
Good news! Solid!

I had to have the two x8 slots for NICs. We are running an Intel dual 10GBASE-T and an Emulex BE2 in the x8 slots, and the P420/1GB FBWC in the x16.

The old server would throw latency errors under heavy load and eventually drop the datastore.

1. Remove the ESXi 5.1 USB boot stick
2. Remove the drives
3. Stuff everything into the DL180 G6
4. Boot and wire up. Done.

DL320s / Core 2-era Xeon / 8GB / P410 BBWC / dual gigabit onboard -> DL180 G6 / 24GB RAM / 5600-series quad core / two dual 10-gigabit NICs / P420/1GB FBWC -> AWESOME FAST! And I still have the power cable to hook up a pair of SSDs for HP SmartCache read acceleration when I have a few minutes, so we can burst to 20GbE reading!

The server is D2D, so RAID-50 provides OUTSTANDING performance. The drives are HP Hitachi 2TB (512-byte sector), and dba has/had the P2000 G3 sleds from them.

Awesome in all aspects. I may look for the x4 riser option and see if I can stuff something in that slot, like the QLE8152 dual 10GbE NIC, which only needs x4 PCIe 2.0 to operate (and they are dirt cheap).

One might think of it as a storage server with a built-in 6-port 10GbE switch ;) How interesting would that be?

Software-defined networking and software-defined storage. Oh my, that sounds a lot like a P2000 G3 or PowerVault iSCSI ;)

Imagine not having any switches, but only having servers... (star/mesh topology distributed virtual switch)

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
Nice server! Does it need all of that CPU power? I was able to push >3GB/s over SMB3 with a single L5520, and the CPU was barely working. I guess it might be useful for compression.

A note for all of you tinkerers: HP cable 536647-001 is really useful for those wishing to re-wire the HP DL180 G6. That cable plugs into the existing wiring harness and provides a new Molex power connector with a nice long lead - perfect for powering SSDs tucked into some corner of the chassis.

Thanks again for the sleds, mrkrad. While I like the HP MSA, I really, really like the DL180 G6 storage server even better. Now I just wish there were a way to do SAN-like volume replication across two of these DIY storage servers. Does the HP VSA do that?

mrkrad

Well-Known Member
Oct 13, 2012
Send me an email. LeftHand VSA definitely does full SRM synchronous replication with ESXi.

Think two servers and two LeftHands, with one wire between server 1 + LeftHand 1 and server 2 + LeftHand 2 (preferably with a quorum).

Remove either LeftHand: everything continues running. Cut the wire between the two sides: everything continues running.

They also support low-speed snapshot-based replication if you can't afford the latency of synchronous replication.
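The quorum is the part doing the heavy lifting there. A sketch of the general majority-vote idea (a hypothetical model, not HP's actual implementation): when the link between sites dies, only the side that can still see a majority of voters keeps serving writes, which is what prevents split-brain.

```python
# Sketch of majority-quorum failover - the general idea behind running a
# quorum witness alongside two VSAs (hypothetical model, not HP's code).
def keeps_serving(reachable_voters: int, total_voters: int) -> bool:
    """A side keeps serving storage only if it sees a strict majority."""
    return reachable_voters > total_voters // 2

TOTAL = 3  # two VSAs plus one quorum witness
print(keeps_serving(2, TOTAL))  # VSA that still reaches the witness: True
print(keeps_serving(1, TOTAL))  # isolated VSA: False -> stands down, no split-brain
```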


If you want to use my StoreVirtual VSAs, I am no longer using them, and I think it would be fine to use/review them. I have 10 ESXi MAC addresses set up in the manual zone for 5.1, and if you want to do Hyper-V, just tell me which 10 manual MAC addresses to use and I can re-generate them (I have no idea what Hyper-V 2012 reserves as its manual MAC range).

It's very reliable, but of course you know iSCSI requires heavy network infrastructure to get decent speeds.

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
I just looked at HP VSA - I know that you've been using the LeftHand stuff quite a bit. It looks very very cool. I'll email you to get your thoughts on it.
