The "Dirt Cheap Data Warehouse" - Database and Storage Servers on the cheap


Smalldog

Member
Mar 18, 2013
62
2
8
Goodyear, AZ
Infiniband and Ethernet Networking:

My only wish is that I had a good way to bridge the IB and Ethernet networks without using a server. If a Mellanox 4026E ever comes up for auction, and is cheap, I'll go for it.
I think you meant 4036E. Would love to have one myself, but they are way too expensive for hobby/tinkering use.

Have you tried bridging under any flavor of Linux?
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
Thanks for the proof-read; it was indeed a typo.

True, it is quite easy to set up bridging, but I have way too many machines running already so I haven't gone that route. For now I have a separate subnet for IB/IP and, since the storage-heavy servers are all connected to both subnets, things work OK. It would just be a cleaner wiring setup to have 1x or 2x IB connections per server instead of having both IB and Cat5 cables.
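
If you want to double-check which subnet a given connection will actually take in a dual-homed setup like this, a quick Python sketch along these lines does the trick. This is just an illustration - the two addresses below are made-up placeholders for a storage server with one leg on the Ethernet subnet and one on the IPoIB subnet, so substitute your own.

Code:
import socket

# Made-up addresses for a dual-homed storage server:
# one leg on the Ethernet subnet, one on the IPoIB subnet.
STORAGE_ETH = "192.168.1.50"
STORAGE_IPOIB = "10.10.10.50"

def source_address_for(target_ip, port=445):
    """Return the local source IP the kernel picks when routing to
    target_ip. Connecting a UDP socket resolves the route without
    actually sending any packets."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect((target_ip, port))
        return s.getsockname()[0]
    finally:
        s.close()

print("Traffic to the Ethernet address leaves from", source_address_for(STORAGE_ETH))
print("Traffic to the IPoIB address leaves from", source_address_for(STORAGE_IPOIB))

If the second lookup comes back with an address on the ib0 interface, the storage traffic is staying on the Infiniband subnet as intended.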

 

MiniKnight

Well-Known Member
Mar 30, 2012
3,073
974
113
NYC
This is awesome! Lots of documentation, but I still want to know more.

Ever think about running a private cloud on there?
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
Cloud ops is a very exciting field, and all of the other STH folks working with ProxMox, OpenStack, etc. keep tempting me, but I'm holding out. My needs for my VM farm are very simple, and they are completely met now that I have enough CPU and RAM, manual migration, and VM replication for DR.

I'd like to write up a bit more detail if there is interest. What would people like to hear more about?

 

chune

Member
Oct 28, 2013
119
23
18
Couple months late to the party here, but I would like to hear more about the Infiniband setup! I almost pulled the trigger on some 10Gb SFP NICs the other night, but I remember people talking about how good/cheap Infiniband is. Are you doing IP over IB? I would prefer all my ESXi servers to run off NFS datastores rather than block-level anything. I played with iSCSI and FC and wasn't a fan. How much did the IB cards run you? Are they all PCIe, or did they make an IB daughter card for some of those?
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
Asking a VMware-specific question as a new topic will probably get you more relevant answers. For my non-virtualization use of Infiniband, I have QDR Infiniband (FDR soon), utilizing SRP in some cases and IPoIB in others. SRP is faster, while IPoIB is really easy on Windows and, in Windows 2012 with RDMA, fast enough to max out the PCIe 2.0 bus. Even DDR Infiniband with IPoIB will make 10GbE look silly; I benchmarked a $75 DDR Mellanox card at 1,910 MB/s when testing sequential reads.
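
If you want to get a ballpark sequential-read number of your own, a throughput check of that kind is easy to script. The Python sketch below is just an illustration, not the actual test I ran: point the placeholder path at a large file on the SRP- or IPoIB-backed volume (ideally bigger than RAM, or with a cold cache) and it reports MB/s.

Code:
import time

# Placeholders - point PATH at a large file on the IB-backed volume.
PATH = "/mnt/ib_storage/testfile.bin"
BLOCK = 1024 * 1024                  # read in 1 MiB chunks
LIMIT = 8 * 1024 * 1024 * 1024       # stop after 8 GiB to keep the run short

def sequential_read_mb_s(path):
    """Stream the file front to back and return throughput in MB/s."""
    total = 0
    start = time.time()
    with open(path, "rb", buffering=0) as f:  # unbuffered: skip Python-level buffering
        while total < LIMIT:
            chunk = f.read(BLOCK)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.time() - start
    return (total / (1024.0 * 1024.0)) / elapsed

print("%.0f MB/s sequential read" % sequential_read_mb_s(PATH))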

My budget recommendation, as of this date, is to buy the Mellanox ConnectX-2 generation of cards, either DDR or QDR. The ConnectX-3 generation is even better but still quite expensive. The first-generation ConnectX cards are temptingly cheap, but they don't support RDMA. If you do buy a ConnectX-2 card, you will probably have to custom flash it to get RDMA working - there is information available elsewhere on this forum.

I have used both Mellanox PCIe cards and Mellanox/Dell mezzanine cards. They are the same silicon, but in a different physical format, and use the same firmware and drivers.

And lastly, neither iSCSI nor FC is all that complex in the grand scheme of things. If they seemed like more trouble than they were worth to you, then IPoIB is probably your best choice.
 

chune

Member
Oct 28, 2013
119
23
18
It's not that they were too complex; I just didn't like the failure behavior of iSCSI and FC in ESXi, or Windows for that matter. NFS fails gracefully in ESXi - it just renders the datastore inaccessible and auto-reconnects when possible - while iSCSI and FC flip out if you even think about disconnecting them and pretty much require a host reboot at some point. I also used iSCSI in Windows and ended up having to use a hard drive recovery tool to get my data back when things went south. Slightly scary.

Have you made any progress on the Solaris/IB testing? Specifically, I'm wondering how convoluted it is to set up NFS+RDMA on the ConnectX-2 cards in Solaris vs. getting ConnectX-3 cards, which might play a little nicer with 10/40Gb Ethernet connectivity?
 

AndrewTBense

New Member
Jan 6, 2014
5
0
1
I just wanted to check in here. I have been piecing together my own HP DL585 G7 to tinker with, and my build is almost complete. I am hoping it will be sufficient to host my GPU farm. This is the *only* thread I have been able to find where an individual has decided to purchase one of these monsters for themselves.

Here is a picture of my progress so far. I have obtained a pair of ES Opteron 6272s. From the research I have done, these engineering sample CPUs have unlocked multipliers... hopefully I'll be able to take advantage of these unique Opteron properties ;). Of the 10 GPUs I have for this machine, three have waterblocks, so watercooling the CPUs is in store ;)

That's all for now, just wanted to say hello.

-Andrew
Clemson, SC
 

Jeggs101

Well-Known Member
Dec 29, 2010
1,529
241
63
Andrew - welcome. That looks so cool! You need your own build thread for something of that magnitude. DIY Server Builds Forum
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
That is quite a beast you have there! Any reason for not choosing the Intel version of this server, or an ML370 G7?
Hi mrkrad,

I actually did buy a DL580 G7, the more popular Intel version of the DL585 G7, and stuffed it with four 8-core/16 thread CPUs and a massive 64 DIMMS. The IO performance was absolutely dismal, and the overall database throughput was shockingly bad. I sold it. I suppose that I was being limited by the two rather modest IO chips in the 580 versus the four IO chips in the 585.