Link Aggregation support? (cross-platform/multi-OS for several 1GigE links)

So while perusing the cost of 10gig Ethernet equipment I'm finding it a little steeper than I'd like, enough that I find myself eyeing alternatives. Having seen network cards with 4x 1gig ports on a single card going insanely cheap (like $40), I wondered if there was a way to take advantage of those instead. Yes, it would mean extra cabling and a notably bigger switch, but I'd perhaps gain failover, and it's probably less hassle than my original consideration of 4/8gig Fibre Channel or 10gig SDR InfiniBand home networking over short distances for high speed in a small cluster.

The problem is that the only solid information I've found so far suggests it's supported in Windows Server 2012 and various virtual machine hypervisors, but I can't find anything clear about whether it's usable on desktop or workstation grade computers, whether Win XP/7/8/10, Mac OS X, or Linux flavors (each of the three runs different software I need for my planned workflow). I am considering running virtualization on some computers but not all; conventional desktops going right back to 32-bit Windows XP would be using certain other tools on the network (my 24bit/96khz audio gear only has XP drivers), served by a SnapRAID Linux box (or Windows, but I'd prefer Linux) that I want running faster than 1gig speeds.

Even if true Link Aggregation weren't possible on both sides, I'm wondering if I could at least set up something client side using 2-4 channels, even if the workaround is that I manually access different drives through each link (somehow). That way, operations like reading from drive NAS_INPUTS_A while writing to drive NAS_OUTPUTS_B would be faster than sharing one link to the NAS for everything, just like I might use different drives instead of the same drive to process large workstreams and avoid drive thrash.
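That manual-split workaround can be sketched on Linux by putting each NIC on its own subnet and mounting each share through the matching interface. All interface names, addresses, and share paths below are hypothetical examples, assuming the NAS exposes one address per subnet:

```shell
# Give each 1GbE NIC its own subnet (eth0/eth1 and all addresses are examples).
ip addr add 192.168.10.2/24 dev eth0    # link dedicated to NAS_INPUTS_A
ip addr add 192.168.20.2/24 dev eth1    # link dedicated to NAS_OUTPUTS_B
ip link set eth0 up
ip link set eth1 up

# Mount each share via the NAS address on the matching subnet, so reads
# from one share and writes to the other travel over different cables.
mount -t cifs //192.168.10.1/NAS_INPUTS_A  /mnt/inputs  -o guest
mount -t cifs //192.168.20.1/NAS_OUTPUTS_B /mnt/outputs -o guest
```

Windows can do roughly the same thing without any bonding support: give each adapter a static IP on a different subnet and map each network drive by IP address rather than hostname.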
 

whitey

Moderator
Jun 30, 2014
Quick comment as I can't type novels NEARLY as well as you good sir... why get quad port GigE cards for $40 when you can get single port 10G cards for $17?

MNPA19-XTR 10GB MELLANOX CONNECTX-2 PCIe X8 10Gbe SFP+ NETWORK CARD

QTY 2 for $34 and 5x the BW/throughput vs. your 'make my life hell' LACP/bonding scenario... jus sayin'. :-D

I've said it before and I'll say it again until I am blue in the face, 'gotta pay to play'. I think a lot of us serious 'labbers' have left 1GbE in the dust LONG ago.

Garbage in...garbage out :-D
 

Blinky 42

Active Member
Aug 6, 2015
PA, USA
I second @whitey's suggestion: don't waste your time, effort, and $ on multi-port 1G when you can get 10G cards cheap.
When moving video around, even 10G gets slow, and you will want to jump up to 40G between the storage box and the boxes consuming the most video. Even with 2x quad port cards per computer, you're still slower than a single 10G port, you burn two slots on each side, and it's a pain to cable and set up.

That said, to do a multi-port LACP setup you need a managed switch that supports several LACP groups, plus any Linux/BSD from this decade. Windows is a bit more of an issue: as I recall, only the recent server flavors have it built in, so otherwise you need the vendor's own utilities and drivers to configure an LACP group/team across the ports on a card.
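On the Linux side, an 802.3ad (LACP) bond can be sketched with plain iproute2 commands like the following. Interface names and the address are examples, and the switch ports the NICs plug into must be configured as a matching LACP group:

```shell
# Create an 802.3ad (LACP) bond device; eth0/eth1 and the address are examples.
ip link add bond0 type bond mode 802.3ad miimon 100 lacp_rate fast \
    xmit_hash_policy layer3+4

# Slave interfaces must be down before they can be enslaved to the bond.
ip link set eth0 down
ip link set eth0 master bond0
ip link set eth1 down
ip link set eth1 master bond0

# Bring the bond up and address it like any normal interface.
ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0

# Verify LACP negotiation with the switch:
cat /proc/net/bonding/bond0
```

Note that `xmit_hash_policy layer3+4` spreads flows across links by IP and port, but any single TCP stream still rides one physical link, so a lone file copy won't exceed the speed of one member port.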
 
Wow. ^@_@^ Honestly my problem was that I had not seen any 10gig cards at that cost so far, though I guess I wasn't perusing eBay for them. The other thing was seeing switches and such at like $900, unless there is some lower cost option. But this even gives an alternative to my consideration of using dual port InfiniBand without a switch, just plugging port to port. If I had mobo support I could just directly crosslink a couple of workstations on the fast link and keep it simple.

Thanks for straightening me out.

Just to be fully sure: is this SFP+ the same 8p8c connector that would work with slower networks (until I got a 10gig switch), the same as normal?

(Though if I'm fully honest, dual port InfiniBand could still be in the running, because even the QDR and some FDR cards are affordable now :) but at least this lets me design around a 10gig minimum better.)

40gig I won't be able to saturate without stripes of SSDs right now, though yes, that would be the next future step up. I assume with consumer SSDs doing up to 500MB/sec, a stripe of 8 of them off a SAS RAID card would be close?
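A rough back-of-envelope check on that stripe estimate, assuming the stated ~500MB/s per drive and ideal scaling (which a real RAID stripe won't quite reach):

```shell
# 8 consumer SSDs at ~500 MB/s each, striped, assuming ideal scaling:
echo "$((8 * 500)) MB/s aggregate"        # 4000 MB/s
echo "$((8 * 500 * 8 / 1000)) Gbit/s"     # 32 Gbit/s, close to 40GbE line rate
```

So yes: an 8-wide stripe of such drives lands in the right neighborhood to make 40G worthwhile, while a single 10G link (~1250 MB/s raw) would already be the bottleneck.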
 

ttabbal

Active Member
Mar 10, 2016
Nope. SFP+ is for fiber, which is also not as expensive as you might think. Look for SR modules and OM3 fiber. I have a couple of machines using those ConnectX-2 cards, and the cards, modules, and fiber were under $100 total.

Switches: eBay, LB4M (2x 10G + 48x 1G) or LB6M (24x 10G).
 

Blinky 42

Active Member
Aug 6, 2015
PA, USA
SFP+ is not the same as the RJ45 connector used for 10/100/Gigabit/10G Ethernet (google it; it is a cage that modules slide into and lock in place).
You can get 10GBase-T with Cat6 cables, but it honestly isn't as popular a standard as SFP+, for several reasons (power, total solution cost, latency).
You can find Intel X540 cards on eBay, but you will more commonly see copper 10G in new server motherboard designs where they have swapped the 1G ports for 10G ports. The Xeon-D family and most of Supermicro's X10 and later motherboards have variants with 10Gbit copper or SFP+ ports on board.
 

Tom5051

Active Member
Jan 18, 2017
Don't waste your money on link aggregation in the home. You will never get more than 1Gbps between two machines.
It's only ever useful when you have lots of client machines connecting to many server machines at the same time.
 

whitey

Moderator
Jun 30, 2014
Don't waste your money on link aggregation in the home. You will never get more than 1Gbps between two machines.
It's only ever useful when you have lots of client machines connecting to many server machines at the same time.
Golf clap. This is 95-98% true in my experience as well, unless you get REAL tricky w/ IP/MAC hashing XOR nonsense, or multiple subnets across different VLANs between hypervisor and backend SAN in a bond/LACP. You can force the matter that way too, but it's all piss-poor IMHO... go to 10G and be done w/ it!