> Hmmm, InfiniBand looks like an interesting option with amazing bandwidth, but after reading up a bit on it, it sounds like it could get complicated with subnet managers?

And it gets even more complicated when you start to think about what protocol(s) to run on top of InfiniBand - whether you go IPoIB and use standard iSCSI on that, or let the storage stay on more native IB and use SRP or something. I believe there's also an FCoIB standard out there, though I might be wrong. I have a few boxes with onboard IB adapters at home, but I don't have a switch, so my IB experience is near-zero.
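To make the IPoIB option concrete, here is a rough bring-up sketch on Linux. This is illustrative only: it assumes in-kernel (or OFED) InfiniBand drivers, and the interface name and addresses are placeholders.

```shell
# Minimal IPoIB bring-up sketch (placeholder interface name and IP).
modprobe ib_ipoib                     # load the IP-over-InfiniBand module
ip addr add 192.168.50.10/24 dev ib0  # IPoIB interfaces appear as ibN
ip link set ib0 up

# Note: an IB fabric still needs a subnet manager running somewhere
# (a managed IB switch, or opensm on a host) before ports come active.
```

With an IP on `ib0`, a standard iSCSI initiator runs over it unchanged; SRP instead talks to the target natively over IB and skips the IP layer entirely.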
> Will these cables work? There seem to be so many options for cables.

Not sure on that answer - but that is an important question. SFP+ compatibility (cables or transceivers) can be very picky in some products.
> ESXi 6 is supported but Windows Server 2016 has beta drivers... this could be a problem?

Considering how new Server 2016 is, if they've already released a beta driver I would consider that a good thing. That indicates to me that they at least plan to put out a driver for the new OS, even though those cards are very old.
Yeah, so I have read. I've never used or worked with SFP+, so I want to make sure before purchasing.
I don't know how long it will be before Mellanox releases an RTM driver, but it may be worth waiting. Can't say I want to run my iSCSI storage on a beta driver!
> Mellanox already has a NON-BETA driver for Windows 2016; you have to ignore the beta page and go directly to the download page.

Thank you, I didn't see this. I just saw the beta driver and assumed they were still working on it.
> What happened to the HP card that you linked yesterday?

Here they are:
If you're paying more than $45 per card, then look at the Sun Oracle CX-3 cards.
The Sun Oracle CX-3 cards are VPI cards, so you have a choice between IB or EN on each port. You could run IB on one port and EN on the other.
Thanks for the post, appreciate it. I'm a bit confused by the Sun Oracle 40Gb/sec cards. These are InfiniBand cards, right? After reading the PDF/links you posted, does this mean they can operate in Ethernet mode OR InfiniBand mode (where a subnet manager is needed)?

@BSDguy I'll let Wikipedia explain; it does a much better job than me.
RDMA over Converged Ethernet - Wikipedia
VPI (Virtual Protocol Interconnect):
http://www.mellanox.com/pdf/prod_architecture/Virtual_Protocol_Interconnect_VPI.pdf
eBay search: Oracle 7046442, 40GbE for less than $100 USD.
Changing a config file and flashing the firmware can get you to 56GbE. The card also works in IB mode.
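For anyone curious what that looks like in practice, a rough sketch using the Mellanox Firmware Tools (MFT) package follows. The device path and firmware image name are placeholders for your actual card - verify the image matches your exact board/PSID before burning anything.

```shell
mst start      # load the MST driver and enumerate devices
mst status     # find the device path, e.g. /dev/mst/mt4099_pci_cr0

# Burn new firmware (the risky step - double-check the image first):
flint -d /dev/mst/mt4099_pci_cr0 -i fw-ConnectX3.bin burn

# Set each VPI port to InfiniBand (1), Ethernet (2), or auto/VPI (3):
mlxconfig -d /dev/mst/mt4099_pci_cr0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2
```

A reboot is needed before the new port types take effect.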
Sun Oracle 40Gb/Sec Dual Port QDR Infiniband PCIe HCA Adapter M3 Card 7046442 | eBay
One more thing. I installed 2 of the Mellanox CX-2 cards, loaded up Windows 2016, and installed the Mellanox driver. No tweaking; I just ran NTttcp and was getting line speed (10Gb/s) out of the box.
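For reference, a minimal NTttcp run between two Windows boxes looks roughly like this; the IP is a placeholder, and `-m 8,*,<ip>` requests 8 threads on any CPU targeting the receiver's address.

```shell
:: Receiver side (start this first) - 192.168.1.10 is the receiver's IP:
ntttcp.exe -r -m 8,*,192.168.1.10

:: Sender side, pointed at the same receiver address:
ntttcp.exe -s -m 8,*,192.168.1.10
```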
> Just passed my VCP6-DCV exam and as a result of that I purchased two of the following servers:

Congratulations!
Thanks, I worked seriously hard to pass it, considering the long hours I have been working! Thanks again!
> I'm considering the Ubiquiti EdgeSwitch ES-16-XG for use in my virtualisation environment. It will be used for iSCSI-only storage traffic with 2 ESXi hosts and my SAN (with a third host added later this year). Considering the issues I had with the Cisco switches' iSCSI performance (mentioned earlier in this thread), has anyone used the Ubiquiti EdgeSwitch ES-16-XG for iSCSI traffic? Does it perform well?

Yes and yes.
Ok, so yes you've used it for iSCSI and yes it performs well... great!
It is a bit picky when it comes to GBICs or DACs, though.
> Thanks, but after reading the whole thread, did I miss your post about stress-testing this switch with iSCSI traffic? I did see "On to putting iSCSI hurt on my switches" - was there some further testing you did?

Ah sorry, I thought I had posted them here...
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 1.09 GBytes 9.40 Gbits/sec 564 614 KBytes
[ 4] 1.00-2.00 sec 1.09 GBytes 9.38 Gbits/sec 507 624 KBytes
[ 4] 2.00-3.00 sec 1.09 GBytes 9.38 Gbits/sec 507 607 KBytes
[ 4] 3.00-4.00 sec 1.09 GBytes 9.39 Gbits/sec 507 601 KBytes
[ 4] 4.00-5.00 sec 1.09 GBytes 9.38 Gbits/sec 508 594 KBytes
[ 4] 5.00-6.00 sec 1.09 GBytes 9.39 Gbits/sec 504 445 KBytes
[ 4] 6.00-7.00 sec 1.09 GBytes 9.38 Gbits/sec 505 472 KBytes
[ 4] 7.00-8.00 sec 1.09 GBytes 9.38 Gbits/sec 461 590 KBytes
[ 4] 8.00-9.00 sec 1.09 GBytes 9.38 Gbits/sec 506 605 KBytes
[ 4] 9.00-10.00 sec 1.09 GBytes 9.38 Gbits/sec 507 626 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 10.9 GBytes 9.38 Gbits/sec 5076 sender
[ 4] 0.00-10.00 sec 10.9 GBytes 9.38 Gbits/sec receiver
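Those numbers hold up to a quick sanity check. iperf3 reports Transfer in binary GBytes (2^30 bytes) but Bandwidth in decimal Gbits/sec, which is why 10.9 GBytes over 10 seconds works out to roughly 9.4 Gbits/sec rather than 8.7:

```shell
# 10.9 GiB transferred over 10 s, converted to decimal Gbits/sec:
awk 'BEGIN { printf "%.2f\n", 10.9 * (2^30) * 8 / 10 / 1e9 }'
```

This prints 9.36, in line with the reported 9.38 Gbits/sec once per-interval rounding is accounted for - essentially wire speed for 10GbE after TCP/IP overhead.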