Exactly, the whole point of a DAC cable is to avoid the cost of the two optics when the distance is short enough... Thanks Matt.
So you don't need optics with a DAC? I guess the direct attach part makes sense now. When I was looking at FS, their DACs had a different option for each manufacturer - does that matter if optics aren't involved, or is it just more of a marketing thing? Should I just get one that matches the NIC, or would I need a dual compatible one?
Similarly, I read that Intel NICs are optics-locked, so if I went that route I'd want an Intel transceiver at each end with an LC-LC OM1 cable? If I went Mellanox instead, would the Brocade transceiver work on that end and still give the extra monitoring details?
The vendor "signature" on a DAC works the same way as on optics: it's how the vendor recognizes that you're using a chain of "verified" components. If a switch or a NIC won't bring up a link with a generic or other-vendor-coded optic, it won't bring up the link with a non-verified DAC either (I've tried this with a genuine Cisco SFP+ DAC in an HP ProCurve 2810: the switch refused to link and reported the module as a tampered-with optic).
In your case you could buy an Intel-Intel DAC or an Intel-Brocade one; for your switch it doesn't make any difference, and with DACs you don't get any monitoring anyway, so... no big deal.
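If you're curious what that vendor "signature" actually is: it's just a handful of ASCII fields in the module's EEPROM (the A0h page defined by the SFF-8472 spec), which the switch reads and compares against its allow-list. On Linux you can dump these fields with `ethtool -m <iface>`. A minimal sketch of the decoding, using made-up sample bytes (the vendor name and part number below are illustrative, not a real dump):

```python
# Sketch: decoding the vendor identification fields from an SFP/SFP+
# module's EEPROM (A0h page), per SFF-8472. A vendor-locked switch or NIC
# compares these fields against an allow-list before bringing up the link.

def decode_vendor_fields(eeprom: bytes) -> dict:
    """Extract the vendor fields a switch typically checks."""
    return {
        # SFF-8472, A0h page: vendor name is 16 ASCII bytes at offset 20.
        "vendor_name": eeprom[20:36].decode("ascii").strip(),
        # Vendor part number: 16 ASCII bytes at offset 40.
        "vendor_pn": eeprom[40:56].decode("ascii").strip(),
    }

# Fake 256-byte EEPROM dump with only the fields above populated.
fake = bytearray(256)
fake[20:36] = b"CISCO-FINISAR   "
fake[40:56] = b"SFP-H10GB-CU3M  "

print(decode_vendor_fields(bytes(fake)))
```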
If you go the dual-optic + fiber route, you'll need a Brocade SFP+ 10G-SR, an Intel SFP+ 10G-SR and an OM3 or OM4 patch cable. OM1 has a bigger core of 62.5 µm (instead of the 50 µm of the "modern" OM3+ multimode fibers). You might need to "roll the fiber" (i.e. swap the two strands left-to-right inside the LC connector) at one end, in order to connect the TX of one side to the RX of the other... it's harder to describe than to do, and once you've done it once it becomes second nature. And since the SR optics run at 850 nm, 99% of the time you can actually see the light emitted, so you won't get it wrong.
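The reason OM1 is a poor fit here is reach, not just core size: the commonly quoted IEEE 802.3 figures for 10GBASE-SR over each multimode grade make the point, and they're easy to keep around as a small lookup table (the reach numbers are the usual spec values, rounded):

```python
# Approximate 10GBASE-SR (850 nm) reach over each multimode fiber grade,
# per the commonly quoted IEEE 802.3 figures.
# Format: grade -> (core diameter in micrometres, reach in metres)
MMF_GRADES = {
    "OM1": (62.5, 33),
    "OM2": (50.0, 82),
    "OM3": (50.0, 300),
    "OM4": (50.0, 400),
}

for grade, (core_um, reach_m) in MMF_GRADES.items():
    print(f"{grade}: {core_um} um core, ~{reach_m} m at 10G")
```

So even though an OM1 patch lead may link up over a couple of metres, OM3/OM4 is the safe choice and costs about the same for short patches.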
Edit: I've read conflicting reports - if I connect my NAS and workstation via SFP+, will I be able to transfer at 10Gb?

Well... it depends. For short bursts, yes: the link runs at 10G, so as long as the data fits in buffered memory you'll get around 1.2GB/s. Once you've depleted the buffers, you're obviously limited by the slowest device in the chain, which most of the time is a spinning disk, so you'll be hovering around the 150-180MB/s mark (for bigger files; if you're going to transfer a lot of small files the performance will obviously drop).
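That burst-then-sustained behaviour can be sketched as a back-of-envelope model. The cache size and disk speed below are illustrative assumptions (and the model is simplified: in reality the disk drains the cache while the transfer is still running), but it shows why a small file feels like 10G and a big one doesn't:

```python
# Rough model of the transfer behaviour described above: the first chunk
# lands in the NAS's RAM write cache at near line rate, after which the
# spinning disk becomes the bottleneck. All sizes/speeds are assumptions.

LINK_BPS = 10e9 / 8   # 10 Gb/s link, roughly 1.25 GB/s of payload
DISK_BPS = 160e6      # ~160 MB/s sustained for a single spinning disk
CACHE_B  = 4e9        # assume ~4 GB of usable write cache on the NAS

def transfer_time(size_bytes: float) -> float:
    """Seconds to move size_bytes: burst into cache, then disk-limited."""
    burst = min(size_bytes, CACHE_B)   # absorbed at line rate
    rest = size_bytes - burst          # throttled by the disk
    return burst / LINK_BPS + rest / DISK_BPS

print(f"2 GB file:  {transfer_time(2e9):.1f} s")   # fits in cache: line rate
print(f"50 GB file: {transfer_time(50e9):.1f} s")  # mostly disk-limited
```

With an all-flash NAS (or enough striped disks) the sustained rate moves much closer to the 1.2GB/s line rate.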
I hope that what I've written makes sense...