Jumping on the 10Gbe wagon


Mrlie

Member
Jan 1, 2011
Oslo, Norway
For a while now I have been thinking about upgrading some servers and a workstation to a faster network, and for me 10GbE is the next logical step.

This is for my home network, so nothing mission critical. For me noise is a bigger concern than cost, but then again I will have to pay for everything myself and nothing will be tax deductible.
My networking skills are basic; at work I play around with Windows, Linux and IBM z/Series, and I feel very comfortable around computers. However, I'd rather have a "plug'n'play" system than spend my spare time debugging some hardware or software config. I value my spare time too much :)

My servers are currently running OpenIndiana and ESXi, and the workstation is an older Windows 7 machine, but it will probably be replaced with a new workstation running Windows 10 during the winter.
I might replace the OpenIndiana installation with ESXi or a bare-metal ZoL install, but there is no decision or timeframe on this.

From a compatibility standpoint I was thinking about getting some Intel NICs, probably the Intel X520 series, since I assume most platforms will have driver support for these out of the box?

At work an older EMC system was thrown out today, so I got my hands on some SFP+ modules from Finisar, model number FTLX8571D3BNL-E5 (10GBASE-SR 300m SFP+ Extended Temperature Optical Transceiver | Finisar Corporation).

Anyone know if these will play nice with an Intel X520-series NIC?
And will they work with the D-Link DGS-1510-28X switch I am considering? I picked that one both from a cost point of view and because the fan can easily be replaced with a quiet Noctua fan.

Am I on the right track here, in terms of hardware?
If I am not able to use the SFP+ modules I got from work for free, would it be better to find new compatible SFP+ modules or to buy DACs? The distance from the switch to the servers is 1-2 meters, and about 3-5 meters to the workstation.

I'm not trying to get 10GbE for the cheapest possible price, but I don't want to waste money by buying the wrong hardware. If I go for the D-Link switch mentioned above I will probably buy it new here in Norway; the rest will be bought through eBay, Fiberstore.com etc. as long as the price is right and international shipping is available.
 
Sep 22, 2015
I don't see why you'd need to buy Intel NICs. Go on eBay and get some of the older Mellanox cards for 30 bucks each. I bought two, one for my ESXi server and one for my workstation, and they both went in very easily. I had to download a single driver for my Win7 machine, but the ESXi server picked it up and added it as a physical NIC right away.

The Mellanox NICs took some Avago transceivers I got off eBay, plug and play; they don't seem to care what kind of transceivers they get. I used a DAC cable from my server to the switch, and a long fiber patch cord from my switch to the workstation. Performance is fine, 7 Gbps from the workstation to a Win 10 VM running on my server. Total cost, not including the switch, was 130 USD.
 

Blinky 42

Active Member
Aug 6, 2015
PA, USA
I would go with Mellanox cards as well, at least to start; you can pick them up cheap and they work just fine. I have several of the dual-port ConnectX-3s and they are great.
I also have a mix of Intel 10G SFP+ and copper cards (X540s, X520s, and the quad-port X710 10G cards) which work out of the box and are well supported in Linux. Recommended, but they are more expensive.
With the short distances you should be fine with DAC cables. Poke around on eBay and you should be able to pick some up pretty cheap. Just be sure they are passive cables, not active.
No experience with that D-Link switch yet; I did pick up a Netgear XS712T to bridge between the copper and DAC 10G networks. It works fine for basic SAN use. My biggest complaint is that the management web interface is pretty non-intuitive and limited.
 

Mrlie

Member
Jan 1, 2011
Oslo, Norway
An update,

I went ahead and bought a D-Link DGS-1510-28X, a few Intel X520-DA2 cards, and a couple of transceivers from Fiberstore.com.
The DGS-1510-28X accepts both the D-Link-compatible transceiver I bought from Fiberstore.com and the Finisar FTLX8571D3BNL-E5 I already had. The Intel NIC accepted the Finisar transceiver as well as the second transceiver I bought from Fiberstore.com.

So far I have just connected one computer (Win7) and verified the connection; I have yet to install NICs in the other servers and play with jumbo frames etc. to tweak performance. I also plan to change the fan in the D-Link switch to a Noctua NF-A4x10 FLX, like @Kristian did here: https://forums.servethehome.com/index.php?threads/d-link-dgs-1510-28x-24-g-4-10g.4277/#post-44386
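
For the jumbo frame experiments, my understanding is that the MTU has to match end to end: the switch ports, every NIC, and on the ESXi side both the vSwitch and the VMkernel interface. Roughly along these lines (untested on my gear, and the interface/vSwitch names are just placeholders):

ip link set dev eth0 mtu 9000                                             # Linux-style host
esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=9000    # ESXi vSwitch
esxcli network ip interface set --interface-name=vmk1 --mtu=9000          # ESXi VMkernel port

On the Windows box it would be the "Jumbo Packet" setting under the adapter's advanced properties, and the DGS-1510 should have a jumbo frame toggle in its web UI.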

I've heard way worse fans than this one, and it doesn't have any annoying pitch, but if I can make it quiet with a simple fan swap then that's what I am going to do, since it's going to sit in the spare bedroom I have converted to an office/datacenter/world HQ.
 

jfeldt

Member
Jul 19, 2015
I have two of the Intel X520s. I like them a lot, but their drivers are locked down to only accept Intel-branded SFP+ modules. That said, if you are using a Unix derivative, I see a line in the driver config to let them use unbranded ones, but I haven't tried that. Is your working connection an X520 with one of those transceivers you listed? If so, that's awesome. If not, and you end up with problems with those transceivers and the X520, that could be one cause.
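
For what it's worth: on Linux the stock ixgbe driver has an allow_unsupported_sfp module parameter for exactly this. A minimal sketch, untested by me, assuming the usual modprobe.d layout (on some driver versions the value is a comma-separated, per-port list):

# /etc/modprobe.d/ixgbe.conf -- let ixgbe bring up non-Intel SFP+ modules
options ixgbe allow_unsupported_sfp=1

Then reload the driver (or reboot) for it to take effect. No idea whether ESXi or OpenIndiana expose an equivalent knob.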
 

Mrlie

Member
Jan 1, 2011
Oslo, Norway
On my workstation (Win7) I installed the newest driver and my card accepted both non-Intel transceivers without any issue.

I also put another card into an ESXi box I have running, and everything worked after just booting the machine back up again. I haven't tested any performance yet, just that I can still read/write to a FreeNAS VM I have running on it.
I did install the vmxnet3 adapter on the VM late last night, though, when I installed the X520.
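
A quick way to sanity-check raw throughput once the other NICs are in is an iperf3 run between two hosts, something like this (assuming iperf3 is installed on both ends; the address is just a placeholder):

iperf3 -s                      # on the VM / server end
iperf3 -c 192.168.1.50 -P 4    # on the workstation; -P 4 runs four parallel streams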

I might replace ESXi on this box and run FreeNAS on bare metal instead, and play with ESXi on another dual-socket box I have running. The hardware there is more suitable at least, so it makes sense to consolidate my VMs on one platform on one machine (I'm running both ESXi and VirtualBox now).
 

Pete L.

Member
Nov 8, 2015
Beantown, MA
I was going to say that some 10G cards are "vendor locked" when it comes to SFP+ modules, and so are some switches (Cisco). I've had tremendous luck (so far, knock on wood) with the Mellanox ConnectX-2 and ConnectX-3 cards I picked up off eBay. The first two were Dell OEM and I didn't really realize it, so I picked up a couple of branded ones as well; they all work well. I also picked up a Dell X1052P switch, which has 4 SFP+ ports (so does the non-PoE version), and to my surprise my generic SFP+ modules work perfectly in it as well. 10G is addictive =)
 

inkysea

New Member
Dec 2, 2015
blog.inkysea.com
I can't justify the 10GbE cost for a home lab hosting VMs. Inexpensive quad 1GbE NICs with a low-latency switch should perform well in a lab, even for hosting a VM storage workload. In my lab I'm not network constrained with 1GbE, I'm storage IO constrained. Even in an enterprise environment where 10GbE is needed for thousands of NAS-hosted VMs, the same rule typically applies. If you are network constrained then it's most likely your network configuration. Yes, I've seen NAS networks that are routed to a core switch with the rest of the network. LOL!

Two 1GbE NICs at 50% utilization can easily handle 21k random read IOPS. If you're thinking of investing in 10GbE for your home lab to host VMs, then save $$ with quad 1GbE and put the $$ into storage.
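
Rough math behind that claim, assuming 4 KiB random reads (block size is my assumption, it isn't stated above):

21,000 IOPS x 4 KiB x 8 bits ≈ 0.69 Gbit/s
2 x 1 GbE at 50% utilization = 1 Gbit/s (~125 MB/s)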
 
Sep 22, 2015
inkysea said:
"I can't justify the 10GbE cost for a home lab hosting VMs. [...] Save $$ with quad 1GbE and put the $$ into storage."
People have other reasons for wanting 10GbE. My NAS VM will write to its storage at around 400 MB/s. My workstation, full of SSDs, can easily read and write that fast. It's often a single-stream file copy, so NIC aggregation won't cut it. That's why I converted part of my network to 10Gb, and yes, it's one giant network. For a home lab it works great.
 

inkysea

New Member
Dec 2, 2015
blog.inkysea.com
Quoting the reply above: "People have other reasons for wanting 10GbE. My NAS VM will write to its storage at around 400 MB/s. [...] For a home lab it works great."

Your use case sounds like two nodes with 10GbE. Switches with two 10GbE SFP+ ports can be found cheap on eBay, so it costs less than $300 to build a two-node 10GbE network. Definitely the best route to take for that use case.

Scaling beyond two 10GbE nodes gets expensive quickly. If you know of a cheap switch with four 10GbE SFP+ ports, let me know!

My use case is four ESX nodes and a storage node; five nodes total would be well over $700 for a 10GbE network. I've opted for the ~$300 route of 1GbE using iSCSI MPIO for my storage network.

I do have to point out that in your example you are also storage constrained, getting 3.2 Gb/s over a 10GbE network. MPIO using four 1GbE links has a theoretical throughput of 4 Gb/s, and it can definitely achieve 3.2 Gb/s to storage. I'm also storage constrained at this point, so adding more NICs, 1GbE or 10GbE, won't help. In most cases spending the $$ on storage enhancements will result in better performance.
 

Blinky 42

Active Member
Aug 6, 2015
PA, USA
All depends on the intended use case; my dev file servers can flood a 10Gb network link without breaking a sweat, and it was a massive improvement going from bonded quad 1G ports between several servers to point-to-point 10G, and then to switched 10G. I do a lot of work with huge video files, and the time saved moving multiple TB around faster is well worth it.

For a 4 VM + 1 storage setup it is pretty inexpensive to get single-port ConnectX-2s ($18-20 on eBay) for each ESXi host plus 2x dual-port cards ($50 on eBay) for the storage box, and run a direct link between the storage box and each VM host.
For example:
Dual-port ConnectX-2: Mellanox ConnectX-2 VPI 10Gbe Dual-Port Adapter Card MHQH29C-XTR
Single ConnectX-2 + cable: RT8N1 0RT8N1 DELL MNPA19-XTR CONNECTX-2 10GB PCIE SERVER ADAPTER W/CABLE
That would be about $250 after shipping for 10G between the storage box and the 4 VM servers.
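
Rough breakdown of that estimate (my reading of the prices above):

4 x ~$20 (single-port) + 2 x $50 (dual-port) ≈ $180 in cards; cables and shipping make up the rest of the ~$250.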
 

inkysea

New Member
Dec 2, 2015
blog.inkysea.com
Blinky 42 said:
"All depends on the intended use case [...] about $250 after shipping for 10G between the storage box and the 4 VM servers."
Very nice! I've stayed away from these types of cards as I'm running FreeNAS. It looks like FreeNAS will be supporting them in 10... What are you running for a NAS?
 

Blinky 42

Active Member
Aug 6, 2015
PA, USA
inkysea said: "Very nice! [...] What are you running for a NAS?"
No NAS distro for me; all CentOS boxes, running anywhere between C5 and C7 depending on the project and age of the box.
 

Mrlie

Member
Jan 1, 2011
Oslo, Norway
One of the motivations for upgrading to 10GbE is that I need to move around approximately 30TB of media back and forth, due to upgrading one of my storage servers with new, bigger drives. I know it's overkill, and on a day-to-day basis I will not be able to utilize the full potential of 10GbE between the servers and my workstation. But for me this seemed like the simpler solution, rather than buying quad-port 1GbE cards, since not all my software supports SMB3.

Paying the premium is worth it to me, rather than spending the extra time and effort to tinker and try to make a more "ghetto" setup work. 10-15 years ago I would perhaps have chosen differently in the same situation, but work and other things take up much more of my time today than they did before.

Besides, this is much cheaper than having a "Ferrari account" (the account you hide from your wife/bf/gf and save up so that you might one day have enough to buy a Ferrari). Yes, I am one of those guys who perhaps spends too much time working with and thinking about computers, but I can think of worse fates than having my work and my hobby be the same thing :)