10Gb SFP+ single port = cheaper than dirt


Keljian

Active Member
Sep 9, 2015
Melbourne Australia
For anyone wondering, these are pretty much plug and play with ESXi 6.0U1 - you just need to set the link rate (10,000 Mb/s).

One curiosity: I had the option of setting 40Gbit. I didn't try it, as I don't have any 40Gbit hardware.
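
For anyone who wants the exact commands, something like this from the ESXi shell does it (vmnic1 is an assumption - check the real name with the list command first):

    # List NICs to find the Mellanox port's name (assumed to be vmnic1 below)
    esxcli network nic list
    # Force the link to 10,000 Mb/s, full duplex
    esxcli network nic set -n vmnic1 -S 10000 -D full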
 

xbliss

Member
Sep 26, 2015
For any new readers who want a simplified summary of this thread - someone correct me if I'm wrong.

Connectors/connections needed if using this card (or a similar SFP+ card):
In a nutshell, if someone is connecting this from server to server, a Twinax SFP+ DAC cable is all that's needed.
From server to switch (or vice versa) you need 10Gb SFP+ modules (which take LC-type fibre connectors).

Mellanox cards seem to be SFP+ module agnostic, but the switch you use may not be. If you're using a MikroTik switch, here is a link that may help you out:
Supported Hardware - MikroTik Wiki
These are tempting at the price, even though I wouldn't be able to use them for a while.
Should I just wait until I'm ready - will the prices stay at this level, or go down in the future?

I guess I won't be able to saturate the link until I have a fast RAID array or serious SSDs that surpass SATA bandwidth.
 

Keljian

Active Member
Sep 9, 2015
Melbourne Australia
Better/newer cards will drop in price as time goes on, naturally. These cards likely won't drop much more.

NVMe drives can saturate 10Gbit, and a pair of conventional SSDs could also. (Remember we are talking about 1.25 gigabytes per second; most SATA SSDs get close to 550 megabytes a second, which is about 4.4Gbit/s on SATA's 6Gbit/s link.)

Latency is the key though. You can have a 1000Gbit link, but if you have big latency, then when you trigger something off, it will still take a while.
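
Back-of-envelope for anyone checking the maths (plain shell arithmetic, figures as above):

    # 10Gbit/s expressed in megabytes per second (before protocol overhead)
    echo $((10000 / 8))    # 1250 MB/s
    # Two SATA SSDs at ~550 MB/s each, against that line rate
    echo $((550 * 2))      # 1100 MB/s - close to saturating the link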
 

J Hart

Active Member
Apr 23, 2015
Keljian said: "Latency is the key though..."
Latency is the main problem with SFP+ DAC cables in the end. Unlike 10GBASE-SR SFP+ modules (~100ns), the DACs take a certain amount of time to gain carrier (~500ns). It adds up, especially if your applications are sending many small frames. Of course it is nowhere near as bad as 10GBASE-T, which takes substantially longer (2000-2500ns). And like you said, 1000BASE-T is even worse in this department at ~4000ns. Anyway, I always find these sorts of things interesting, especially because you get situations where it is faster to read the memory of another machine than to wait for a local SSD to return some data (SSD latency ~100us).

http://www.datacenterknowledge.com/...benefits-of-deploying-sfp-fiber-vs-10gbase-t/

http://www.anandtech.com/show/8104/...ew-the-pcie-ssd-transition-begins-with-nvme/3

And the ever-popular latency comparison numbers from Dean and Norvig:

https://gist.github.com/jboner/2841832
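
If you want to measure this on your own links rather than take the datasheet numbers, qperf works well (a sketch - it assumes qperf is installed on both ends and 10.0.0.2 is the far host):

    # On the far host: run the qperf server
    qperf
    # On the near host: measure TCP round-trip latency and bandwidth
    qperf 10.0.0.2 tcp_lat tcp_bw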
 

Keljian

Active Member
Sep 9, 2015
Melbourne Australia
J Hart said: "Latency is the main problem with SFP+ DAC cables in the end..."

Thanks for your post! I haven't tested fibre to fibre yet - still waiting on a second functional card - but gigabit to 10Gig fibre latency is about 0.5 ms across the MikroTik CRS226.
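
For anyone wanting to reproduce that sort of figure, plain ping across the switch is enough (10.0.0.2 standing in for the host on the far side):

    # 100 pings at 0.2s intervals; the avg in the rtt min/avg/max summary is the number to watch
    ping -c 100 -i 0.2 10.0.0.2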
 

Fritz

Well-Known Member
Apr 6, 2015
I may have set a new record yesterday. Picked up 5 Chelsio S310E-CR for $70 with free shipping. That's $14 apiece.
 

Keljian

Active Member
Sep 9, 2015
Melbourne Australia
Fritz said: "Picked up 5 Chelsio S310E-CR for $70 with free shipping. That's $14 apiece."
Not going to beat anyone here: I got a Chelsio T420 for $80. Shipping is taking ages though - I ordered about a month ago, with an estimated arrival two weeks from now.
 

ehfortin

Member
Nov 1, 2015
Keljian said: "these are pretty much plug and play with ESXi 6.0U1 - you just need to set the link rate..."
Does this apply to the HP 671798? If so, do we have to flash the firmware back to the original Mellanox one?

I'm asking because I noticed that a few ConnectX-2 cards (MNPH29D-XTR, MT26478) are supported with ESXi 6.0U1, and the MNPA19-XTR is not on the list.

Thank you.

Ehfortin
 

ehfortin

Member
Nov 1, 2015
That's what I ordered a few months ago. Didn't try to flash them but they are working fine with Windows Server 2012 R2 and VMware ESXi 6.x.
 

DonJon

Member
Apr 9, 2016
Thanks for your inputs. Got some of those HP-branded cards, and they work flawlessly. The firmware was at version 2.9.1000, but the boot ROM is old and drops to a CLI when Ctrl-B is entered. The newer FlexBoot versions have a GUI and support iSCSI boot. Here is the documentation for it:

https://www.mellanox.com/related-docs/prod_software/FlexBoot_user_manual.pdf

But the problem I have is that these cards carry HP's Device ID and PSID, so I cannot update the ROM with the image file downloaded from Mellanox (and I can never find the firmware updates on HP's site). I tried to update the firmware using the Mellanox .ini config with the HP PSID and the Device ID of the Mellanox card, but after that there is no PXE boot screen. Below is the doc on how to customize these cards for OEM firmware:

http://www.mellanox.com/page/custom_firmware_table

It would be great to have the iSCSI boot option on these cards. Has anyone tried and succeeded in updating the boot ROM/firmware on them?
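
For reference, this is roughly what I've been attempting with Mellanox's flint tool (the device path is an example - list yours with mst status - and -allow_psid_change is the flag the whole OEM question hinges on, so treat it as at-your-own-risk):

    mst start                                    # load the Mellanox tools driver
    mst status                                   # shows the device path (assumed below)
    flint -d /dev/mst/mt26448_pci_cr0 q          # query current firmware version and PSID
    # Burn only the expansion (boot) ROM, leaving the main firmware untouched:
    flint -d /dev/mst/mt26448_pci_cr0 brom FlexBoot-3.4.460.rom
    # Or burn a full Mellanox image over the HP PSID (risky):
    flint -d /dev/mst/mt26448_pci_cr0 -i fw-ConnectX2.bin -allow_psid_change burn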
 

DonJon

Member
Apr 9, 2016
The Device ID for both the generic Mellanox and the HP-branded card is 26448. The iSCSI boot-option GUI is included with the FlexBoot 3.4.460 ROM image, available at this link under Archive:
http://www.mellanox.com/page/products_dyn?product_family=34

But the problem is that the compiled ROM images in the 3.4.460 .tgz do not include one for ConnectX-2 cards (i.e. Device ID 26448). They are available in the prior 3.4.306 package, but the boot GUI was not included in that version - only PXE boot, with iSCSI through the PXE server. So no luck getting the iSCSI boot config pages with that version.

However, the source code for FlexBoot 3.4.460 is available for download from the same link. Has anyone got experience compiling the source for this specific card, ConnectX-2 with Device ID 26448? Any pointers are welcome. I hope to get iSCSI configuration directly in the boot options rather than depending on a PXE server.
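
I haven't built it successfully myself, but since FlexBoot is an iPXE derivative, I'd expect the build to follow iPXE conventions, where ROM targets are named after the PCI vendor/device IDs in hex (Mellanox is 15b3, and 26448 decimal is 0x6750 - the target name below is a guess):

    tar xzf FlexBoot-3.4.460-src.tar.gz
    cd flexboot-3.4.460/src       # directory name may differ
    make bin/15b36750.rom         # vendor 15b3 + device 6750, iPXE-style target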
 

Boddy

Active Member
Oct 25, 2014
To tell you the truth, maximizing 10Gb is hard. It is hard to tell the difference between 1Gb and 10Gb since most things you normally do are so bursty as it is. The real advantages are seen with large data transfers with parallel streams, and lower latency. The bad part is the cost of switches and whatnot.

For most users and uses, 10Gb is worthless. If you are doing it for experimentation and fun, then sure... but it costs you. Go back through the threads first though, my fingers can't retype everything :)
How many SSDs would you need to saturate a 10Gb link? Just for interest's sake.

PS: There are some 10Gb cards with 3-meter cables for $20 available on eBay, and the seller is accepting offers of $18:

Mellanox Single Port with 3m DAC
 

hsben

Member
Sep 10, 2014
Pardon my ignorance, but how are you guys using these if you aren't buying expensive switches for these cards?
 
Sep 22, 2015
I'd imagine most people are connecting two together.

Some of us are buying semi-expensive switches: $300 or so for a MikroTik, or $500 or so for the D-Link/TP-Link options.

Some others are buying cheap but high-capacity ex-datacenter gear and just accepting the power and noise that come with it.