ConnectX-2 EN cards do a good job at offload. They are high-performance 10GbE NICs. Unfortunately, they do not support RDMA over Ethernet; you need ConnectX-3 for that.
That's fine. RDMA is a bit "overkill" for my home servers... not that 10GbE isn't!
Quote: "Yep, we bought a pair and tested with our cards and switches, worked great. So we bought another 50... so far everything seems to work in our new design. Thanks for the comment though, we will be sure to watch carefully."

Out of curiosity, what kind of SFPs were you using? I tried both a passive DAC cable and some generic Fiberstore ones, connected to my MikroTik switch, and they didn't want to work. The same SFPs work fine in the ConnectX-2 ENs. Could be PEBKAC too.
Quote: "Hopefully you have better luck than I did. I couldn't get them to work in ConnectX-2 VPI cards (would not bring up a link)."

@Entz - Followup: we got our transceivers in today, and everything works great. These are what we used. We also used some ER single-mode ones, and those worked in the adapters as well.
Mine came in too. The JDSU PLRXPL-SC-S43-22-N SFPs seem to work fine. I haven't tested up to 10Gb yet, as I maxed out the SSD read speed in my workstation.
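If you want to check the link itself without the disks getting in the way, a memory-to-memory test takes the SSD out of the picture entirely (iperf does the same thing). Here is a minimal sketch in Python; the port number and transfer sizes are arbitrary placeholders, not anything from this thread:

```python
# Memory-to-memory throughput test so SSD speed doesn't cap the result.
# Run "python3 linktest.py server" on one box, then
# "python3 linktest.py client <server-ip>" on the other.
# Port and sizes are arbitrary choices, not anything from this thread.
import socket
import sys
import time

PORT = 5201              # also iperf3's default port; any free port works
CHUNK = 1 << 20          # 1 MiB per send/recv
TOTAL = 10 * (1 << 30)   # push 10 GiB across the link

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        with conn:
            received, start = 0, time.time()
            while data := conn.recv(CHUNK):
                received += len(data)
        rate = received / (time.time() - start) / 1e6
        print(f"received {received / 1e9:.1f} GB from {addr[0]} at {rate:.0f} MB/s")

def client(host):
    buf = b"\x00" * CHUNK
    with socket.create_connection((host, PORT)) as conn:
        sent, start = 0, time.time()
        while sent < TOTAL:
            conn.sendall(buf)
            sent += CHUNK
    print(f"sent {sent / 1e9:.1f} GB in {time.time() - start:.1f} s")

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[2])
```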
Quote: "Any reason to purchase transceivers and fiber separately instead of either pre-made passive copper or active cables, if length isn't a concern (short runs only)? (Sorry for all the questions... learning is fun!)"

Similar reasoning to what Chuckleb said - I'm running 25 feet between the workstation and server. I think a DAC cable of that length would have been more expensive, and definitely bulkier, than the SFPs plus a patch cable. I also tend to move things around a lot, so being able to swap the patch cable is nice.
Quote: "Is the reason people (home users / non-enterprise) buy these to connect their main system to a NAS/SAN (like a cheap box running FreeNAS with several HDDs, for example)?"

I use InfiniBand to connect my lab ESX hosts to an iSCSI NAS (SAN?). I am currently just doing host-to-host-to-host, but I'm going to expand to an IB switch at some point in the coming months. Some people are doing true 10GbE networking; I chose IB because it was a cheap way to get ultra-fast, low-latency connectivity in the home. I bought three used dual-port ConnectX cards and cable sets at the close of 2013... the whole setup cost me less than $250.
2 (3 really) questions:
1) I'm trying to get an idea of why non-enterprise people need more than full-duplex gigabit (which just about every motherboard has), and I'm assuming it's so they can access file shares from a NAS? Who needs more than ~120 MB/s in a home setup, even on the uplink, unless of course you are accessing a disk box directly?
2) Would two of these (plus two GBICs) work in this scenario: I have a Windows 2008 R2 server and a 12-bay FreeNAS server with several disks. Put one in the Win 2008 box and assign the adapter an internal/private IP (192.168.2.1); put one in the FreeNAS box and assign it an IP on the same subnet as the 2008 box; then connect to the storage of the FreeNAS box via IP in Windows. Would that work, or am I missing something / some part? (See the addressing sketch after this post.)
3) Please see Mr. F's question above mine, as he needs an answer as well.
THANKooouuuuuuu!
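On question 2: a direct cable between two hosts needs no switch and no default gateway - the two adapters just need addresses in the same subnet. A minimal sketch of that addressing logic with Python's stdlib ipaddress module; only 192.168.2.1 comes from the question, while the second address, the /24 mask, and the share name are hypothetical:

```python
# Point-to-point addressing check for question 2: two adapters, one cable,
# no switch, no gateway. 192.168.2.1 is from the question; the .2 address,
# the /24 mask, and the share name below are hypothetical examples.
import ipaddress

win2008 = ipaddress.ip_interface("192.168.2.1/24")   # Windows 2008 R2 box
freenas = ipaddress.ip_interface("192.168.2.2/24")   # FreeNAS box

# Same network => the hosts reach each other directly over the cable.
assert win2008.network == freenas.network
print(f"both on {win2008.network}")
print(f"map a share at \\\\{freenas.ip}\\tank or point iSCSI at {freenas.ip}")
```

From there it's plain CIFS or iSCSI against that IP; leaving the gateway blank on both adapters keeps normal traffic on the gigabit network.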
Quote: "What do all you people connect these cards to? Is there a "relatively" cheap switch that can handle more than two 10GbE SFP+ connections without needing to spend a ridiculous amount of money? I'm trying to figure out how I could interconnect 3 nodes from my Dell C6005 server (all running ESXi) and my NAS without needing to install 3 x 2-port cards..."

I have a ConnectX-2 EN connected to a MikroTik CRS226-24G-2S+IN (24x gigabit ports plus 2x SFP+, RouterOS v6 with a Level 5 license).
Quote: "That's an extremely interesting switch/router. That would only solve half of my connections, though (1 for my NAS, 1 for 1 ESXi box, leaving 2 ESXi boxes with a single NIC each)."

Mostly a switch. The CPU is pretty weak.
Currently, I have three C6100 nodes (two ESXi nodes and one storage node, rewired for 6x 3.5" drives in the front of the C6100) connected in a triangle fashion with dual-port ConnectX-2 cards. ESXi vSwitches won't function as bridges, unfortunately, so the connection between the ESXi nodes is on a separate subnet and is used for vMotion traffic only. I just got everything installed yesterday, in fact. VM storage over 10GbE is amazing.
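Since the vSwitches won't bridge, each of the three cables in that triangle is its own little network. One tidy way to plan it is a /30 per link - exactly two usable addresses, one per end. A sketch with made-up ranges and node names (nothing here is from the actual setup):

```python
# One /30 per cable in the 3-node triangle; each link is its own subnet
# because ESXi vSwitches won't bridge between the two ports of a card.
# Subnet ranges and node names here are illustrative, not from the thread.
import ipaddress
from itertools import combinations

nodes = ["esxi1", "esxi2", "storage"]
base = ipaddress.ip_network("10.0.0.0/24")

# A /30 leaves exactly two usable host addresses: one per end of the cable.
links = zip(combinations(nodes, 2), base.subnets(new_prefix=30))
for (a, b), subnet in links:
    ip_a, ip_b = list(subnet.hosts())[:2]
    print(f"{a} <-> {b}: {subnet}  ({a}={ip_a}, {b}={ip_b})")
```

Each node then gets a vmkernel port per attached link, and nothing has to route between the /30s - which is exactly why the vMotion traffic can stay on its own subnet.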
4+ devices is tricky without a switch, unfortunately. I wish there were more affordable 10GbE switch options!
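To put numbers on why 4+ devices is tricky: a switchless full mesh of n machines needs one cable per pair and n-1 ports in every box. Quick arithmetic:

```python
# Why a triangle is the practical ceiling for dual-port cards:
# a switchless full mesh of n machines needs n-1 ports per box.
for n in range(3, 7):
    links = n * (n - 1) // 2          # one cable per pair of machines
    ports = n - 1                     # ports (and subnets) needed per box
    cards = -(-ports // 2)            # dual-port cards per box, rounded up
    print(f"{n} machines: {links} cables, {ports} ports/box, {cards} dual-port card(s)/box")
```

At four machines you're already at two dual-port cards per box, which is about where a switch starts to win.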
Quote: "I have a ConnectX-2 EN connected to a MikroTik CRS226-24G-2S+IN. And a small storage server on the other 10GbE port."

That's really interesting - I totally missed that while searching around for an inexpensive 2x SFP+ switch. Is it fanless? And is that an LCD screen on it?
Quote: "Double affirmative regarding your questions. It is fanless and it is an LCD screen."

Thanks! I see they have a rackmount version as well - do you happen to know if that one is fanless too?