$99 10Gb NIC from Asus


Jeggs101

Well-Known Member
Dec 29, 2010
1,529
241
63
Long time coming IMO. I'm still torn on whether I want these versus just getting newer Intel server parts. Reading STH, it sounds like all the servers will have 10G built-in.
 

Evan

Well-Known Member
Jan 6, 2016
3,346
598
113
10GbE in servers is going to be like 1GbE soon.
I never understood why the chipset for E5 v3/v4 didn't end up with some form of native 10G.
In any case, who actually deploys a server without at least 10G? (If for no other reason than it's cheap and almost standard, and you can always use some bandwidth for backup tasks.)
 

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
At present the industry is transitioning to a more integrated 25/50/100 approach. It makes some sense that there is limited integration in the current chip generation (outside of SoCs intended mostly for appliances and embedded, like Xeon-D) - but I expect that to change in the next cycle.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,513
5,804
113
@Evan Intel has a big networking business. As @PigLover mentions, as the industry moves to 25GbE, 10GbE becomes previous-gen and OK to integrate.
 
  • Like
Reactions: eva2000

StevenDTX

Active Member
Aug 17, 2016
493
173
43
I work for a Fortune 50 company and I would venture to guess that less than 15% of our servers are connected to 10GbE. Probably 90% of those are ESXi. Only about 50% of our storage is connected to 10GbE.
 

Evan

Well-Known Member
Jan 6, 2016
3,346
598
113
You have a point: once 10G is integrated, Intel will lose a lot of network sales. Not that many of the adapters we buy are Intel, but I am sure a lot of companies do prefer Intel over, say, QLogic or Broadcom.

Sitting just outside the Fortune 50 that @StevenDTX is in, I would say we are 50-60% 10G or better. Even today, though, we do order some servers without 10G, like domain controllers, where it's just not needed; but all racks have 10GBASE-T top of rack, so if it were zero $ it would be used. Any blade servers etc. use 10/25G SFP+.
 

Jerry Renwick

Active Member
Aug 7, 2014
200
36
28
43
I would reckon this is the sign: 10Gb will undoubtedly replace 1Gb, and will in turn be replaced by higher rates like 25Gb/40Gb/100Gb in the near future. For my office network, I have already deployed 10Gb devices in my server room.
 
  • Like
Reactions: eva2000

Evan

Well-Known Member
Jan 6, 2016
3,346
598
113
Aside from storage servers, backup servers, a few special roles, and maybe huge virtualised servers, what actually needs more than 10G now, or even in the next few years? 1G to 10G is a huge leap, and that's likely all most servers will need. Sure, 40/100G on the network side, I am sure, just less so on the server, at least for now. NVMe over the higher-bandwidth connections will be somewhat interesting, but also limited, I guess.
 

Net-Runner

Member
Feb 25, 2016
81
22
8
41
I don't want to wait until 10GbE switches come down in price. I have two servers in my home lab (currently connected directly with Mellanox ConnectX-3) and will definitely try these switch-less. No RDMA, probably, but still :) 10GbE over Cat6 sounds like sci-fi :)
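For anyone who wants to sanity-check a switch-less link like that, below is a rough Python sketch of a crude point-to-point throughput test; iperf3 is the better tool, and the port and the 192.168.10.2 address are only placeholders for a back-to-back link.

# Crude point-to-point throughput check for a direct (switch-less) link.
# The port and peer address are placeholders; iperf3 is the proper tool.
import socket, sys, time

PORT = 5201           # placeholder port
CHUNK = 1 << 20       # 1 MiB per send/recv

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            total, start = 0, time.time()
            while (data := conn.recv(CHUNK)):
                total += len(data)
            secs = time.time() - start
            print(f"received {total / 1e9:.2f} GB in {secs:.1f} s "
                  f"= {total * 8 / secs / 1e9:.2f} Gbps")

def client(host, seconds=10):
    buf = b"\0" * CHUNK
    with socket.create_connection((host, PORT)) as conn:
        end = time.time() + seconds
        while time.time() < end:
            conn.sendall(buf)

if __name__ == "__main__":
    # run "python tput.py server" on one box and
    # "python tput.py client 192.168.10.2" on the other
    server() if sys.argv[1] == "server" else client(sys.argv[2])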
 

techtoys

Active Member
Feb 25, 2016
189
50
28
58
I purchased 2x each
Aquantia AQC-108 5G cards
Aquantia AQC-107 10G cards - Asus XG-C100C

The AQC-107 are the Asus cards in this thread.

Over Cat-5e in the home, using iperf3:
AQC-108 5G: 4.30 Gbps
AQC-107 10G: 6.39 Gbps

For the 10G cards, speed was measured connected through the 2 10G switches.
The switches are connected to each other in the home via Cat-5e.
Card #1 (Bedroom) is connected to the Asus switch over Cat-5e.
Card #2 (Garage) is connected to the Netgear switch over 5' of Cat-6.

I ended up purchasing 2 switches to connect these cards.
  1. Netgear XS708v2
    placed in Garage with Home Lab
  2. Asus XG-U2008
    placed in Bedroom walk-in closet
    all home wiring runs into master bedroom closet
The AQC-108 card speed was measured via direct connection, no switches.
However, direct connection was across 2 longer Cat-5e runs.

I could not get the AQC-108 5G cards to connect at 5G to the 10G switches.
The 10G switches treat these cards as 1G, and speed is limited to 1G (949 Mbps).

I may be able to return the AQC-108 cards. I can't see much use for them now.
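If anyone wants to repeat these runs, here is a small sketch of how the iperf3 tests could be scripted and logged. It assumes iperf3 is installed on both ends, a server is already running with "iperf3 -s" on the far box, and the hostname "garage-server" is just a placeholder.

# Run an iperf3 client against a remote server and report throughput.
# Assumes "iperf3 -s" is already running on the other end.
import json
import subprocess

def iperf3_gbps(server_host: str, seconds: int = 10) -> float:
    """Return the receiver-side throughput of an iperf3 run, in Gbps."""
    out = subprocess.run(
        ["iperf3", "-c", server_host, "-t", str(seconds), "--json"],
        capture_output=True, text=True, check=True,
    )
    result = json.loads(out.stdout)
    return result["end"]["sum_received"]["bits_per_second"] / 1e9

if __name__ == "__main__":
    # e.g. the AQC-107 path through the two switches
    print(f"{iperf3_gbps('garage-server'):.2f} Gbps")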
 
  • Like
Reactions: eva2000

pyro_

Active Member
Oct 4, 2013
747
165
43
Makes sense that the switches would see them as 1G cards. I don't believe those switches support 2.5G or 5G speeds, only 1G/10G.
 

techtoys

Active Member
Feb 25, 2016
189
50
28
58
Here are a few more setup details.
Test was run between:
  1. Workstation in Bedroom
    Windows 10 (1703) x64
    Aquantia Drivers 1.40.42
  2. Server in Garage
    Windows 2016
    Aquantia Drivers 1.40.42 (same drivers)
The higher data rate of 6.4 Gbps was through the 2 10G switches mentioned above.
A direct connection between the 2 cards was only 5.3 Gbps.
This would be due to signal loss over a substantially longer Cat-5e run.

Samsung 960 M.2 (x4) file transfer to the server was ~730 MB/s.
The disk on the server was likely 2 SSDs in RAID 0 (striped).
Most of my higher-performance disk arrays run between 500 and 800 MB/s.

This makes larger file transfers to/from the workstation much more palatable.
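A quick back-of-the-envelope check shows why the ~730 MB/s copy, the 6.4 Gbps link, and the 500-800 MB/s arrays all land in the same ballpark; the 5% protocol overhead figure below is only an assumption.

# Rough conversion from measured line rate to usable file-copy throughput.
line_rate_gbps = 6.4                           # measured iperf3 throughput
raw_mbytes_per_s = line_rate_gbps * 1000 / 8   # = 800 MB/s of payload bandwidth

protocol_overhead = 0.05                       # assumed ~5% for SMB/TCP/IP framing
usable_mbytes_per_s = raw_mbytes_per_s * (1 - protocol_overhead)

print(f"{line_rate_gbps} Gbps = {raw_mbytes_per_s:.0f} MB/s raw, "
      f"about {usable_mbytes_per_s:.0f} MB/s after overhead")
# A ~730 MB/s transfer therefore sits near the limit of either the 6.4 Gbps
# link or a 500-800 MB/s SSD array, so disks and network are roughly matched.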
 

techtoys

Active Member
Feb 25, 2016
189
50
28
58
pyro_ said: Makes sense that the switches would see them as 1G cards. I don't believe those switches support 2.5G or 5G speeds, only 1G/10G.
Yup, 1 or 10.
Some press reports indicate that Aquantia is working with partners to release lower cost switches.
I expect these would support 2.5G & 5G speeds.
 

techtoys

Active Member
Feb 25, 2016
189
50
28
58
Over Cat-5e on 2 legs of the trip:
iperf3 AQC-107 10G: 7.89 Gbps
ntttcp AQC-107 10G: 9.35 Gbps

One of the 2 Asus cards died and I had to RMA it back to Newegg.
This test is from the Asus 10G in the bedroom to a MNPA19-XTR through the 2 switches.

I retested the link using iperf3 and ntttcp.
I am guessing that iperf3 on Windows is not the best test.

Based on some other posts, I tweaked the send/receive buffers and turned on jumbo frames.
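For what it's worth, here is a rough sketch of the same kind of buffer tweak done at the socket level rather than in the NIC driver; the 4 MB size is an assumed value for illustration, and jumbo frames (MTU 9000) are set per adapter in the driver/OS, not per socket, so they are not shown.

# Create a TCP socket with enlarged send/receive buffers; bigger buffers
# let TCP keep more data in flight on a high-bandwidth link.
import socket

BUF_BYTES = 4 * 1024 * 1024  # 4 MB, assumed value for illustration

def tuned_socket() -> socket.socket:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF_BYTES)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF_BYTES)
    # Report what the OS actually granted (it may clamp or double the request).
    print("SO_SNDBUF:", s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
    print("SO_RCVBUF:", s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
    return s

if __name__ == "__main__":
    tuned_socket().close()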
 
  • Like
Reactions: eva2000