Chelsio T320 dual SFP+ & 2 transceivers - $32


frogtech

Well-Known Member
Jan 4, 2016
1,482
272
83
35
Doesn't look like it has RDMA. You typically only find that (at least RoCE) on more expensive / newer adapters.
 

i386

Well-Known Member
Mar 18, 2016
4,221
1,540
113
34
Germany
There are different RDMA protocols: iWARP and RoCE v1/v2. iWARP is the IETF-standardized one and is used by Intel (I think they have just two NICs with that feature) and Chelsio, while RoCE v1 started as a proprietary Mellanox protocol. RoCE v2 is a more open standard and is available on the ConnectX-3 Pro and some newer Intel NICs.
 

epicurean

Active Member
Sep 29, 2014
785
80
28
Anyone who does NOT need the tall brackets for this card, please let me know so I can take it off your hands. Thanks!
 

BLinux

cat lover server enthusiast
Jul 7, 2016
2,669
1,081
113
artofserver.com
@BLinux, did the cards show up with both brackets??


Sent from my iPhone using Tapatalk
Yes, both brackets. The full-profile bracket comes attached, and the plastic casing has a section that holds the low-profile bracket. The SFP+ modules came installed in the card; no rubber dust caps, though.
 

HorizonXP

Member
May 23, 2016
68
1
8
38
I'm having a hell of a time getting these to work. Ordered 5, got them yesterday. Apparently they come with firmware version 5.0, so I flashed a few to 7.11, the latest.

My pfSense box wouldn't recognize the card at all until I changed a BIOS setting for PCIe bifurcation from Auto to x4x4. That card seems to be working fine now.

My FreeNAS box picked it up just fine, but it seems to drop connections at times.

I have the pfSense and FreeNAS box connected via SFP+ DAC cables to an H3C S5120 switch. I tried running iPerf tests between the boxes, and can barely reach 1Gbps speeds, let alone 10Gbps. Previously, with my Intel X520s, 10Gbps was achievable.

Tried installing another Chelsio card into my ESXi machine, but ESXi 6.5 refuses to load the driver, and I got nowhere.

Finally, swapped out my Intel X520 from my Windows 10 desktop, and installed the Chelsio card. Driver installed OK, but as soon as I connect my OM4 fiber cable, Windows BSODs, and now won't boot. Boots fine when the cable is disconnected, which obviously defeats the purpose of all of this.

At $20 apiece, I'm not sure this "deal" was worth it so far. If anyone has any pointers, I'm all ears!
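In case it helps with isolating the slow iperf numbers: single-stream TCP throughput is capped by the bandwidth-delay product (window size divided by round-trip time), which is one common reason a 10GbE link tests out near 1 Gbps. A back-of-the-envelope sketch — the window and RTT figures below are illustrative assumptions, not measurements from these boxes:

```python
# Sketch: why a single TCP stream can fall far short of 10GbE.
# Throughput is bounded by window_bytes / RTT (the bandwidth-delay
# product). Window and RTT values are assumptions for illustration.

def max_tcp_throughput_gbps(window_bytes: int, rtt_seconds: float) -> float:
    """Upper bound on single-stream TCP throughput, in Gbit/s."""
    return window_bytes * 8 / rtt_seconds / 1e9

# A default 64 KiB window over a 0.5 ms LAN round trip caps out
# just above 1 Gbit/s:
print(max_tcp_throughput_gbps(64 * 1024, 0.0005))       # ~1.05 Gbit/s

# A 4 MiB window over the same path comfortably clears 10 Gbit/s:
print(max_tcp_throughput_gbps(4 * 1024 * 1024, 0.0005)) # ~67 Gbit/s
```

That's also why multiple parallel streams (iperf's `-P`) or larger socket buffers (`-w`) often recover the missing bandwidth even when nothing is wrong with the NIC itself.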
 

BLinux

cat lover server enthusiast
Jul 7, 2016
2,669
1,081
113
artofserver.com
I'm having a hell of a time getting these to work. Ordered 5, got them yesterday. Apparently they come with firmware version 5.0, so I flashed a few to 7.11, the latest.

My pfSense box wouldn't recognize the card at all until I changed a BIOS setting for PCIe bifurcation from Auto to x4x4. That card seems to be working fine now.

My FreeNAS box picked it up just fine, but it seems to drop connections at times.

I have the pfSense and FreeNAS box connected via SFP+ DAC cables to an H3C S5120 switch. I tried running iPerf tests between the boxes, and can barely reach 1Gbps speeds, let alone 10Gbps. Previously, with my Intel X520s, 10Gbps was achievable.

Tried installing another Chelsio card into my ESXi machine, but ESXi 6.5 refuses to load the driver, and I got nowhere.

Finally, swapped out my Intel X520 from my Windows 10 desktop, and installed the Chelsio card. Driver installed OK, but as soon as I connect my OM4 fiber cable, Windows BSODs, and now won't boot. Boots fine when the cable is disconnected, which obviously defeats the purpose of all of this.

At $20 apiece, I'm not sure this "deal" was worth it so far. If anyone has any pointers, I'm all ears!
Hmm... that's not good.

Have you tried isolating the issues? For example, if the pfSense box is OK but the FreeNAS box is dropping connections, then a speed test between them would come in low if there's packet loss.

Have you tried one in a Linux box yet? I'm kind of thinking about returning mine now...
 

frogtech

Well-Known Member
Jan 4, 2016
1,482
272
83
35
There are different RDMA protocols: iWARP and RoCE v1/v2. iWARP is the IETF-standardized one and is used by Intel (I think they have just two NICs with that feature) and Chelsio, while RoCE v1 started as a proprietary Mellanox protocol. RoCE v2 is a more open standard and is available on the ConnectX-3 Pro and some newer Intel NICs.
The spec sheet doesn't say it has RoCE, though it does say it has iWARP. iWARP is outdated AFAIK.
 

Craash

Active Member
Apr 7, 2017
160
27
28
I will say that I purchased a 3rd one to put in my Windows 10 workstation (currently has a ConnectX-2) JUST because of the great performance I saw in both my FreeNAS and pfSense boxes, which both have the exact same 110-1088-30. Both of those are Optiplex 9010 i7s.

pfSense v2.3.4-RELEASE-p1
FreeNAS-11.0-U3 (c5dcf4416)

All tie into my beloved T1700G-28TQ with FS.com DACs.
 

BLinux

cat lover server enthusiast
Jul 7, 2016
2,669
1,081
113
artofserver.com
I will say that I purchased a 3rd one to put in my Windows 10 workstation (currently has a ConnectX-2) JUST because of the great performance I saw in both my FreeNAS and pfSense boxes, which both have the exact same 110-1088-30. Both of those are Optiplex 9010 i7s.

pfSense v2.3.4-RELEASE-p1
FreeNAS-11.0-U3 (c5dcf4416)

All tie into my beloved T1700G-28TQ with FS.com DACs.
So you're able to get near-10Gbps performance?
 

Craash

Active Member
Apr 7, 2017
160
27
28
So you're able to get near-10Gbps performance?
I actually sat down to do a couple of benchmarks: iPerf2, and a real-world file transfer.

First, machine specs.

Windows 10 Workstation
i7-7800x
Twin 500GB Evo 960s in RAID 0, putting out 2890/3126 MB/s read/write.
Mellanox Connectx-2 10Gb MNPA19-XTR SFP+

FreeNAS
Optiplex 7010
Intel i7-3770 @3.4GHz
32GB DDR3 (FreeNAS LOVES RAM)
Chelsio 10gb 110-1088-30 SFP+ 2-Port PCIe Controller – LAN
Drive Config:
HP 9207-8e PCIe 2.0 SAS6G HBA
PROAVIO E8-MS 8 bay 6G SAS Drive Enclosure
8 Each Western Digital WD4000FDYZ 4TB 64MB 7200RPM

With FreeNAS as the server and my Windows 10 box as the client, I see 9.11 Gbps +/- 0.5.

I see a good solid 10 Gbps from my FreeNAS to my pfSense (both with Chelsio cards).

But, the cool thing? Check out the screenshot of a 40GB file transferring from my workstation array to the FreeNAS array: 7.5 Gb/s AVERAGE. That's at 93% of a 40GB file, WAY past any cache impact.


40GB from Win10 to FreeNAS



iPerf2 - FreeNAS Host, Win10 Client. SINGLE INSTANCE.


Twin EVO 960's on x299 Chipset (RAID 0)
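For anyone sanity-checking those screenshots, the unit conversion works out (decimal GB/Gb assumed throughout; Windows Explorer may actually report binary GiB/MiB, so treat these as approximations):

```python
# Convert the observed 7.5 Gbit/s average to bytes/sec and estimate
# the wall-clock time for a 40 GB transfer. Decimal units assumed.

gbps = 7.5
bytes_per_sec = gbps * 1e9 / 8          # 937,500,000 B/s ~= 937.5 MB/s
transfer_secs = 40e9 / bytes_per_sec    # ~42.7 s for a 40 GB file

print(round(bytes_per_sec / 1e6, 1))  # 937.5 (MB/s)
print(round(transfer_secs, 1))        # 42.7 (seconds)
```

So ~937 MB/s sustained is consistent with the RAID 0 NVMe source and the 8-disk SAS pool on the receiving end.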
 

HorizonXP

Member
May 23, 2016
68
1
8
38
Ok, I'm not seeing anything like that at all.

I just reverted my boxes back to their original configurations.

Windows 10 Workstation
i3-2120
Twin 250GB Evo 950s in RAID 0
Intel X520-DA1 SFP+

FreeNAS
Intel Xeon E3-1230v2 @ 3.3 GHz
32GB DDR3 ECC
Intel X520-DA1 SFP+

pfSense
SuperMicro SYS-5018A-FTN4
Intel Atom C2758 @ 2.4 Ghz
4GB DDR3
Intel X520-DA1 SFP+

In between the 3 machines is an H3C S5120 switch. They're all on the same VLAN and IP subnet, so no inter-VLAN routing should be occurring. The workstation is connected via fiber, the other 2 are direct attach.

I've just been running tests using iPerf2 between all 3 machines.

  • From Win10 (Intel X520) -> FreeNAS (Intel X520) - 9.47 Gbps
  • From Win10 (Intel X520) -> FreeNAS (Chelsio N320) - 1.8 Gbps
  • From Win10 (Intel X520) -> pfSense (Intel X520) - 716 Mbps
  • From Win10 (Intel X520) -> pfSense (Chelsio N320) - 553 Mbps
  • From FreeNAS (Intel X520) -> pfSense (Intel X520) - 2.42 Gbps
  • From FreeNAS (Chelsio N320) -> pfSense (Chelsio N320) - 564 Mbps
To me, this indicates that there's some tuning issues on the pfSense box, and maybe even the FreeNAS box.
 

Craash

Active Member
Apr 7, 2017
160
27
28
I have 9000 MTU set on my pfsense, freenas, ESXi, Windows Clients, and on my Switch. That is the extent of any tuning on my part (eh, a bit on the ConnectX-2 Card on the WinTel box).

I suspect that if you increase the number of streams in your iPerf2 test, you'll saturate the link. If I recall correctly, this might have been a somewhat 'known' issue.

Finally, what about a DAC straight between FreeNAS-pfSense to test?
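For what it's worth, jumbo frames only buy a few percent at 10GbE; the bigger win is usually fewer frames (and thus fewer interrupts) per second. A quick sketch of the framing arithmetic, assuming standard header sizes and no TCP options or VLAN tags:

```python
# Rough per-frame goodput efficiency of TCP over Ethernet at MTU 1500
# vs 9000. Assumed overheads: 7B preamble + 1B SFD + 14B Ethernet
# header + 4B FCS + 12B inter-frame gap on the wire, plus 20B IP and
# 20B TCP headers inside the MTU (simplifying assumptions).

def tcp_efficiency(mtu: int) -> float:
    payload = mtu - 40          # strip IP + TCP headers
    on_wire = mtu + 38          # add L2 framing, preamble, and gap
    return payload / on_wire

print(round(tcp_efficiency(1500), 4))  # 0.9493 -> ~9.49 Gbit/s goodput
print(round(tcp_efficiency(9000), 4))  # 0.9914 -> ~9.91 Gbit/s goodput
```

So MTU 9000 takes the ceiling from roughly 9.5 to 9.9 Gbit/s; it won't explain a gap between 1 Gbit/s and 10 Gbit/s on its own.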
 

HorizonXP

Member
May 23, 2016
68
1
8
38
Increasing the streams by adding -P 2 to 8 on iPerf had no impact. Same speeds.

I'll try a DAC between pfSense and FreeNAS, but a "direct connect" fiber between my workstation and FreeNAS yielded the same speeds. 9.5 Gbps from Win10 to FreeNAS, but only 2 Gbps from FreeNAS to Win10. This was observed in Samba file transfers too. 1 GB/s writes to FreeNAS, but only 180 MB/s reads. Super odd, considering these are HDDs, not SSDs. I'm happy with the write speeds, but would really like read speeds to be better.

Anyway, if I'm seeing these speeds with Intel NICs, I'm not sure the Chelsio would be better. Moreover, I don't think the Chelsio card is the problem. But I'm going to try again.

I have 9000 MTU set on my pfsense, freenas, ESXi, Windows Clients, and on my Switch. That is the extent of any tuning on my part (eh, a bit on the ConnectX-2 Card on the WinTel box).

I suspect if you increase the streams on your iPerf2 test, you'll saturate the link. If I recall, might have been a somewhat 'known' issue.

Finally, what about a DAC straight between FreeNAS-pfSense to test?
 

manfri

Member
Nov 19, 2015
45
7
8
56
Does this card support SRIOV and RDMA (esp. for HyperV 2016)?

Sent from my SM-G950F using Tapatalk
Yes and yes.
Ok, here's an update. I received my 4 Chelsio cards from this purchase. They are N320E cards, and the specs are here:

https://www.chelsio.com/assetlibrary/products/N320E Product Brief 090630.pdf

Based on that information, the typical power consumption is 14W, which makes it 7~8W more than the Mellanox ConnectX-2 dual port.
N320E... I don't think they support RDMA.

Terminator 3 (T3) family Ethernet Adapter Driver Archives
Terminator 3 drivers support the following adapters: S310E, S320E, S320X, R310E, N310E, N320E
TOE and RDMA functionality is not supported on N3xx adapters.

https://service.chelsio.com/nec.html


And from a "quick" search I've found NO drivers beyond 2008 R2 and ESXi 5.x, but maybe they're supported natively in the OS.

And NO VMQ or SR-IOV is noted in the product spec: https://www.chelsio.com/assetlibrary/products/N320E Product Brief 090630.pdf


And in all the docs I've found for S2D, the minimum requirement is a ConnectX-3.
 

HorizonXP

Member
May 23, 2016
68
1
8
38
Just reinstalled the Chelsio NIC back into the FreeNAS box, and I'm seeing the exact same performance numbers. So there's something about FreeNAS that's messed up on the networking side; the NIC doesn't matter. I might have to resort to posting on the FreeNAS forums for help, but they're generally not as nice as the folks here.

So I may reinstall the Chelsio card into the pfSense box too, since that's not doing great with the Intel NIC either. Maybe I need to install it from scratch or something.

Increasing the streams by adding -P 2 to 8 on iPerf had no impact. Same speeds.

I'll try a DAC between pfSense and FreeNAS, but a "direct connect" fiber between my workstation and FreeNAS yielded the same speeds. 9.5 Gbps from Win10 to FreeNAS, but only 2 Gbps from FreeNAS to Win10. This was observed in Samba file transfers too. 1 GB/s writes to FreeNAS, but only 180 MB/s reads. Super odd, considering these are HDDs, not SSDs. I'm happy with the write speeds, but would really like read speeds to be better.

Anyway, if I'm seeing these speeds with Intel NICs, I'm not sure the Chelsio would be better. Moreover, I don't think the Chelsio card is the problem. But I'm going to try again.
 

Craash

Active Member
Apr 7, 2017
160
27
28
@HorizonXP,

Forgive me, I'm not sure where we are with your setup. I THINK we've decided that the NIC doesn't matter. I'm assuming the box has an on-board 1GbE NIC; if you use it, can you at least saturate that link? Solidly? We have a lot of things in the mix here, and I'm just trying to figure out how we can eliminate items that might be throwing a wrench in the works.

It took me a while to get my FreeNAS sorted out, but I'm pretty happy with it now. One of the things that got me was using my Chelsio in a physical x8 (PCIe 2.0) slot that was wired as a x4. This gave me pretty bad performance. Moving it to a x8 slot (wired as a x8) fixed it right up.
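That x4-wired slot would explain it. Assuming a T3-generation card negotiates PCIe 1.x (an assumption — check the card's negotiated link with `lspci -vv` under LnkSta), the lane arithmetic works out like this, ignoring PCIe protocol overhead which shaves off another 15-20%:

```python
# Why a x8 card in a x4-wired slot can bottleneck a 10GbE port.
# Per-lane raw rates and 8b/10b encoding are standard PCIe Gen 1/2;
# the "Chelsio T3 = PCIe 1.x" premise is an assumption for this sketch.

RATES_GTPS = {1: 2.5, 2: 5.0}   # transfers/sec per lane, by PCIe gen

def link_gbps(gen: int, lanes: int) -> float:
    """Raw usable link bandwidth in Gbit/s after 8b/10b encoding."""
    return RATES_GTPS[gen] * 0.8 * lanes   # 8 data bits per 10 on the wire

print(link_gbps(1, 8))  # 16.0 Gbit/s - enough headroom for one 10GbE port
print(link_gbps(1, 4))  # 8.0 Gbit/s - below line rate; throughput suffers
print(link_gbps(2, 4))  # 16.0 Gbit/s - a Gen2 x4 link would have sufficed
```

So a Gen1 card dropped to x4 tops out around 8 Gbit/s raw, before protocol overhead, which matches "pretty bad performance" on a 10GbE link.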
 

Craash

Active Member
Apr 7, 2017
160
27
28
What. A. Day. I put my new Chelsio in. Windows DOES NOT find a driver for it, so you have to use the Unified installer v1.5.13.0 from service.chelsio.com. As soon as I loaded the driver, BAM. Windows 10 BSODs. Reboot, repeat.

So, I figured I'd format real quickly, load the chipset drivers, basic drivers, and updates, and see if I could get it to load. After multiple clean installs and different installation orders (chipset, driver, update; driver, update, chipset; etc.) without success, I finally tried port 2 on the card, and what do you know, all my issues vanished. Really?? Hours wasted on what might be a bad port?

Anyway, I went ahead and finished my workstation setup without another BSOD, and once everything was set up I benchmarked with the same 40GB file. Pretty much the same results.

So, there doesn't seem to be any advantage to it on a Windows machine, at least in my case. Of course, the downsides are no native driver and more power usage.

One odd thing: Windows shows its link speed as 10.7 Gbps. Not 10, 10.7.

Oh, I did update the firmware to v7.11.0 too.