Sorry another 10gb setup question


TeleFragger

Active Member
Oct 26, 2016
247
52
28
51
Howdy all,
First post - but you dont have to be nice as i have thick skin...

anywho..

I fired up a new machine and installed Server 2016 Standard. While copying all of my files over from my old 2012 box, it was taking FOREVER. That got me googling, and I hit on 10Gb on the cheap.
I saw that you can just take two 10Gb NICs and a cable and you're golden. I'm done with the file moves now, but I'd still love to look into this.

Talking to a guy at work, he told me to do it right or not at all. He said don't do direct connect; get a switch. I was looking at the Quanta LB6M switch for around $300. I then saw a 3Com 4500G. Looking into it, I read in a 3Com PDF that the SFP ports are only good for a few meters.

Well lets see if you guys can help me

my setup is
  • Video/photo editing box that I do many of my moves from, 8-10 meters away
  • File server in the utility room - houses home automation (HomeSeer), personal files, Plex movies and DVR shows (Plex itself runs off-box)
  • ESXi test lab in the utility room
Question 1: I see 1m, 3m, and 5m cables on eBay. There are some longer ones, so I asked a seller a question, and this is what I got back...

1-5 meter, Passive DAC
5-15 meter, Active DAC
15-30 meter, AOC (Active Optical Cable)
30-300 meter, SFP+ SR Transceiver
300m-10KM, SFP+ LR Transceiver
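For what it's worth, that seller's table boils down to a simple threshold lookup. Here's a quick sketch in Python (the function name is mine; the break points are straight from the quote above, and real products vary by vendor):

```python
# Rough lookup of the seller's distance-to-media guidance for 10G SFP+ links.
# Thresholds are the ones quoted above; check actual product specs before buying.

def media_for_run(meters):
    """Return the suggested SFP+ cabling option for a given run length."""
    if meters <= 5:
        return "Passive DAC"
    elif meters <= 15:
        return "Active DAC"
    elif meters <= 30:
        return "AOC (Active Optical Cable)"
    elif meters <= 300:
        return "SFP+ SR transceiver + multimode fiber"
    elif meters <= 10_000:
        return "SFP+ LR transceiver + single-mode fiber"
    else:
        raise ValueError("beyond typical 10G LR reach")

print(media_for_run(12))  # the OP's 8-10 m run plus slack lands in "Active DAC"
```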


I am looking at using the Mellanox ConnectX-2 cards, but I don't know whether they can do active DACs, as I need longer than 5 meters...

Question 2: Could I use a 3Com 4500G switch instead of direct connect, or is my 8-10 meters too far?

thanks for the beginning of my questions...

I did search and read through, but when your brain turns to mush... well.
 

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
DAC/twinax is good for short runs (top-of-rack) or short runs within the same rack; that being said, I don't have any experience with active DACs/longer distances. Use fiber/transceivers in 10G SFP+ ports for longer runs (or 10G copper and Cat 7 cabling for longer distances as well, but you get a lil' higher latency and power usage - nominal on a home lab, significant at scale - and those 10G copper NICs are typically more $$$).

That's the long and short of it. LB4M for 2 ports or less of 10G, LB6M for more (you're probably gonna want more). Plenty of other 10G options around/available as well - look at the "10G under $550" thread.

10G SFP+ NICs are plenty cheap these days - don't pay more than $50 per device in my book, maybe even less: Mellanox ConnectX-2, Intel X520-DA2 (most cross-platform/OS compatible IMHO), or Chelsio cards for BSD.

No opinion on 3com other than not my cup o' tea.

2cents.
 

Jeff Robertson

Active Member
Oct 18, 2016
429
115
43
Chico, CA
Morning, I just started upgrading my network to 10Gb as well, so here are my two very inexperienced cents. Your worry about distance only relates to DACs; if you purchase those ConnectX-2 cards (I've got two - they work great and were $20 each on eBay with a short DAC) you can build your own fiber run. See FS.COM - Data Center, Fiber Cabling & Connectivity Optics Supplier for some reasonably priced fiber cables that can cover long distances. The transceivers can be found on eBay on the cheap in bulk. You may also want to look into the new Ubiquiti EdgeSwitch 16 XG - it has 12 SFP+ ports and four 10Gb copper ports so you can mix cabling ($600 new). Good luck and post your results!
 

TeleFragger

Active Member
Oct 26, 2016
thx for the reply...

Yeah, I know 3Com is what it is... if it's cheap enough I don't mind. YEARS ago I got a refurb Dell PowerConnect 2724 (24-port gig unmanaged switch) and love it. I've had it so long I can't remember when I bought it... 13 years ago? So I could just keep that and go direct connect on the NICs.
 

whitey

Moderator
Jun 30, 2014
Home slice ain't lying on the 'do it right' comment though - a 10G switch will make life MUCH more bearable. If it's 2 hosts it's not too bad, but any more than that and you start throwing more add-in cards/10G NICs at it to make it work, and the topology gets gnarly IMHO.

GL w/ 10G adventures!
 

ttabbal

Active Member
Mar 10, 2016
743
207
43
47
Point-to-Point is ok for linking 2 machines. I wouldn't do much more than that though. I have one server and an LB4M, so I used a DAC to connect that. Then I have 2 SFP+ transceivers and a fiber to connect my main workstation to the switch, once I get off my duff and run the fiber. Everything else will use 1G for now.

The NICs are Mellanox ConnectX2s, they work just fine in Linux, Windows (I think), ESXi, and BSD. Even FreeNAS supports them now. And they are widely available for decent pricing. It's the switches that make the costs go way up.
 

TeleFragger

Active Member
Oct 26, 2016
Point-to-Point is ok for linking 2 machines. ....

The NICs are Mellanox ConnectX2s, they work just fine in Linux, Windows (I think), ESXi, and BSD. Even FreeNAS supports them now. And they are widely available for decent pricing. It's the switches that make the costs go way up.
my real question here then to start is..

Mellanox ConnectX-2 cards in 2 machines connected by a 12m cable... will this work? Do I have to make sure to get an active cable (if that exists)? Or is it handled transparently so you don't see it?
 

ttabbal

Active Member
Mar 10, 2016
my real question here then to start is..

Mellanox connectX2 card in 2 machines connected by a 12m cable.. will this work? do I have to make sure to get an active cable (if that exists)? or is it native that you dont see it..
12m is getting close to as far as I would try to run a DAC. You can't really tell how they're connected; that's the card's job. If it can hold a link over the connection, it will. You might try looking around a bit for modules and fiber - it might be close cost-wise and will be more reliable over a distance. I think my DAC is 2m. :)

With a point-to-point link, the other kind of annoying thing is having to run 2 subnets. This prevents issues in the networking stack that could cause packets to go over the wrong interface. My main network was 10.0.0.0/24, so I used 10.11.12.0/24 for the 10G link. Then you just make sure any higher-speed traffic goes to that new address. That's one nice thing about using a switch: just connect them up.
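To make that concrete, here's a quick sanity check of the two-subnet scheme with Python's ipaddress module (the /24s are the ones mentioned above; the individual host addresses are just example picks of mine):

```python
import ipaddress

# Main 1G LAN and the dedicated 10G point-to-point subnet.
main_lan = ipaddress.ip_network("10.0.0.0/24")
p2p_link = ipaddress.ip_network("10.11.12.0/24")

# The second subnet must not overlap the first, so the OS can never
# accidentally route 10G traffic out the 1G interface.
assert not main_lan.overlaps(p2p_link)

# One host address per machine on the point-to-point subnet (example picks).
server_10g = ipaddress.ip_interface("10.11.12.1/24")
workstation_10g = ipaddress.ip_interface("10.11.12.2/24")
assert server_10g.network == workstation_10g.network == p2p_link
```

Any traffic you want on the fast path then just targets the 10.11.12.x address of the other machine.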
 

Blinky 42

Active Member
Aug 6, 2015
615
232
43
48
PA, USA
I would avoid a DAC for 12m; you would need an active one, and support for those is far more hit-and-miss than for passives.
Get a pair of SFP+ SR modules and a 15m LC-LC OM3 or OM4 multimode cable and you will be all set, and in a better position if you change your switch or NICs out down the road.
Brand-new, a pair of SFP+ SR transceivers is ~$32 and a 15m cable is under $10. Make the most of the shipping time and cost if buying from China and get extra modules, so you can use new hardware you pick up on eBay right away.
 

fractal

Active Member
Jun 7, 2016
309
69
28
33
Just to throw a wrench in the works...

How many x4 or better PCIe slots do you have available in your file server?

Hear me out here. It looks like you will probably put any switch in the utility room with your gear. If it were me, spending money today, I would run fiber from your workstation to your utility room.

Then, if I had the slots in the file server, I would put in a pair of dirt-cheap 10G cards (or a dual-port card if I only had one slot). I would connect one port to the fiber to the workstation, and the other, with a short DAC cable, to the ESXi server.

Now, IF you have two slots, then you are wasting zero dollars. You will probably want the extra single port card down the road about the time you buy a switch. If you only have one, then tell yourself you will be running redundancy to the server when you do get a switch ;)

Unfortunately, from what I can see, there are no cheap, reliable, quiet, low power, compatible switches out there. They are getting close but they aren't quite there yet. I am holding off on buying a switch until things get just a little bit better using PTP links as needed.

Of course, ymmv.
 

Jerry Renwick

Active Member
Aug 7, 2014
200
36
28
43
What you have mentioned about the 10G solutions is right. Passive DAC copper cables are limited to really short runs (1-5m), while active DAC copper cables can support up to 15m. AOC cables are usually more expensive than DACs but cover longer distances. Beyond that, you can use fiber cables with SFP+ transceiver modules. For short links, DAC cables are the cost-effective solution for 10G connectivity.
 

tullnd

Member
Apr 19, 2016
59
7
8
USA
With your setup, you really only seem to need the bandwidth between two computers, so I wouldn't bother with a switch right now. Affordable 10Gb-enabled switches are out there with one or two SFP+ ports, which is great for interlinking switches where you may aggregate more than 1Gb of bandwidth between them, but not necessarily what you need for an individual connection. You want 10Gb (or at least more than 1Gb) between your two PCs.

I just bought all the fiber to do a switch interconnect in my home, having wired the house with Cat6. The run from basement to second floor was easier to do in fiber (due to the space concerns of running possibly 10 Cat6 drops up to the attic), just running SFP interconnects between two switches.

I managed to buy some 1Gb MM fiber SFPs for about $8 each for my switches, but even the 10Gb counterparts would only run maybe $25-30 each. The fiber for 125m (I bought excess) was about $28 or so for OM4 armored cable. I even ran some single-mode at the same time, because it was so cheap... just in case I one day want something more. At that price I ran two sets of OM4 plus two individual SM runs, which gives me plenty of redundancy (overkill). The cabling cost me about $100. You could just get two sets of OM4 MM cable and run it that 15-25 meter distance (probably like $15 each for the runs), then the Mellanox cards and two SFP+ modules.

That would get you 10Gb connectivity between those two hosts for dirt cheap. Go ahead and do a dual SFP+ card in the server (only like $25 more for the dual card, right?) and you can connect it to your ESXi lab later on if you want. You can always re-route this all to a 10Gb switch down the road when the costs come down.

Actually, now that you're on this site, you'll be jonesing for a 10Gb switch at some point anyway. This place does horrible things to my wallet. I'm just trying to mitigate it a little.
 

mervincm

Active Member
Jun 18, 2014
159
39
28
Even if you just have 2 devices, a switch is very handy. You can use a cheap $220 (new) MikroTik with 2 10-gig SFP+ ports and 8-24 1-gig ports. This lets you have a completely flat network - one IP per device, no tracking "use this IP to get there on the 10-gig path, or this IP to get there on the 1-gig path". This is what I landed on (after trying a few switchless configs) for my environment, where I have a NAS with 10-gig storage (and Plex) and a workstation on 10-gig, and everything else happy on 1-gig or bonded 1-gig channels.

RouterBoard.com : CRS210-8G-2S+IN