3-node 10GbE (1 x X520-DA2, 2 x X520-DA1, 2 x DAC) $175 - doable?


ealvar

Member
Mar 4, 2013
Looking for a cheap 10GbE solution for a 3-node lab (1x storage server, 2x vSphere servers).

I've spent some time reading and I'm pretty sure this will work, but I want to bounce it off y'all to confirm.

Storage server - X520-DA2 (port A 172.0.10.1, port B 172.0.20.1)
vSphere server #1 - X520-DA1 connected to storage server port A (172.0.10.2)
vSphere server #2 - X520-DA1 connected to storage server port B (172.0.20.2)
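
On the storage server this is just two point-to-point /24s, one per port. A minimal sketch of the addressing, assuming the box runs Linux and the two X520-DA2 ports show up as enp3s0f0 and enp3s0f1 (made-up names - check ip link for yours):

Code:
# Hypothetical interface names for the two X520-DA2 ports - verify with `ip link`
ip link set enp3s0f0 up
ip link set enp3s0f1 up

# Port A faces vSphere host #1, port B faces vSphere host #2
ip addr add 172.0.10.1/24 dev enp3s0f0
ip addr add 172.0.20.1/24 dev enp3s0f1

These commands don't persist across reboots, so put the equivalent into whatever your distro or storage appliance uses for permanent network config.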

I have an HP ProCurve switch for my 1GbE needs, and the 10GbE links will only be used for iSCSI traffic.
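
On each ESXi host the single X520 port just needs a vmkernel interface on its point-to-point subnet, bound to the software iSCSI adapter. A rough CLI sketch for host #1, assuming vmnic4 is the X520 port and vmhba33 is the software iSCSI adapter (both names are guesses - check esxcli network nic list and esxcli iscsi adapter list):

Code:
# Hypothetical names: vmnic4 = the X520 port, vmhba33 = the software iSCSI adapter
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic4
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-10G
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-10G
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=172.0.10.2 --netmask=255.255.255.0 --type=static

# Enable software iSCSI, bind the vmkernel port, and point discovery at the storage server
esxcli iscsi software set --enabled=true
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=172.0.10.1

Host #2 is identical with 172.0.20.2 and 172.0.20.1 substituted.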

I found an X520-DA2 with DAC cables for $99 shipped. Natex has the X520-DA1 for $73 a pair with shipping.

Like I said, I'm fairly certain this will work easily but wanted a sanity check.

Any cheaper options out there?
 

Patrick

Administrator
Staff member
Dec 21, 2010
You are probably OK with that. One lesson many of us have learned (some of us repeatedly, myself included) is that adding a switch adds expense but is also much more flexible for the future, e.g. what happens when you need a second storage server?
 

ealvar

Member
Mar 4, 2013
I hear you :) I'm pretty certain the lab environment will stay static, per the lady's wishes. I do have a second storage server, but it's for ZFS snapshot replication only and is fine on 1GbE links.
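
For reference, that replication is just a zfs send/receive pair over SSH, and once the initial full send is done the incrementals sit comfortably on 1GbE. Pool, dataset, and host names below are made up:

Code:
# Initial full send of a snapshot (hypothetical names: tank/vmstore -> backup-host:backup/vmstore)
zfs snapshot tank/vmstore@rep-1
zfs send tank/vmstore@rep-1 | ssh backup-host zfs receive -F backup/vmstore

# Later runs only ship the delta between two snapshots
zfs snapshot tank/vmstore@rep-2
zfs send -i tank/vmstore@rep-1 tank/vmstore@rep-2 | ssh backup-host zfs receive backup/vmstore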
 

Blinky 42

Active Member
Aug 6, 2015
You should be fine, and it's a great way to get the speed now while staying in a position to add a switch for more ports in the future. Sticking to all SFP+ will also make it easier to source a compatible switch down the road. If your ProCurve can take 10G expansion modules (like the old 2900s), you can pick up XFP or SFP+ SR modules for it and Intel SR modules for the X520s to join the 1G and 10G sides.

I did something similar at several sites when we had only 2 or 3 servers with 10G, even bridging 10G ports, or 10G and bonded 1G ports, together under Linux to avoid disconnected subnets and to make the transition to a physical switch easier once we had the $.
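
A minimal sketch of that kind of bridge with iproute2, using made-up interface names (enp1s0 = the 10G port, eno1/eno2 = the bonded 1G pair) and a made-up address:

Code:
# Hypothetical names: enp1s0 = 10G port, eno1/eno2 = the two 1G ports
# 802.3ad bonding needs a matching LACP trunk configured on the 1G switch
ip link add bond0 type bond mode 802.3ad
ip link set eno1 down
ip link set eno2 down
ip link set eno1 master bond0
ip link set eno2 master bond0

# Bridge the 10G port and the 1G bond so everything sits on one L2 segment
ip link add br0 type bridge
ip link set enp1s0 master br0
ip link set bond0 master br0
ip link set eno1 up
ip link set eno2 up
ip link set enp1s0 up
ip link set bond0 up
ip link set br0 up

# The box's own address lives on the bridge
ip addr add 192.168.1.10/24 dev br0

The Linux box effectively acts as the switch for that segment, which is what makes dropping in a real switch later a non-event.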

The only issue I have had with direct-connecting 10G parts was with a pair of Supermicro X10 servers running Linux, with the onboard X540-AT2 10GbE copper ports back-to-back: when one of the machines locked up and would not empty the 10G Ethernet adapter's queue anymore, it also impacted the sending machine, causing that NIC to lock up until the link went down or was reset. With a switch in the middle that isn't an issue (it only blocks the switch port connected to the impacted server), but it does happen back-to-back. It is a corner case in that one box is totally locked up, so there are bigger issues to worry about, and I have not been able to recreate the same behavior with X520s or ConnectX-3 cards, but I thought it was worth a mention in case it crops up for someone else.

I would also avoid passive DAC cables over 5m and just go for SR SFP+ modules & fiber to avoid hassle.
 

ealvar

Member
Mar 4, 2013
55
14
8
Thanks for the feedback @Blinky 42. Unfortunately I just have the 2824, which offers no expansion. But I think this setup will serve my needs for a while.