Anyone notice cheap Brocade 10GbE?

Discussion in 'Networking' started by mrkrad, Jul 18, 2013.

  1. mrkrad

    mrkrad Well-Known Member

    Joined:
    Oct 13, 2012
    Messages:
    1,237
    Likes Received:
    50
    Brocade 1020 CNAs are $40 on eBay.
    Brocade 24-port TurboIrons are $2,000 on eBay.

    I know Brocade requires Brocade-branded optics, but that's pretty dang cheap.

    Anyone ever use the brocade 1020 cna for networking?

    24 × $40 in CNAs for 12 servers (two cards each, so no single point of failure)
    $2,000 for the switch
    ??? for optics

    Win!!??
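    That math in Python, for anyone pricing it out (note the optic price is a placeholder guess, not a quote; nobody in this thread has a real number yet):

```python
# Back-of-the-envelope for the eBay build above: 12 servers with two
# Brocade 1020 CNAs each (24 cards, so no single NIC is a single point
# of failure) and one used TurboIron 24X. Optic pricing is the open
# question, so it is left as a parameter.

def build_cost(servers=12, cnas_per_server=2, cna_price=40,
               switch_price=2000, optic_price=0):
    links = servers * cnas_per_server          # one uplink per CNA
    return (links * cna_price                  # 24 cards at $40
            + switch_price                     # one used TurboIron 24X
            + links * 2 * optic_price)         # an optic at each end of each link

print(build_cost())                # 2960 with optics still unpriced
print(build_cost(optic_price=30))  # 4400 if optics ran $30 each (a guess)
```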
     
    #1
  2. dba

    dba Moderator

    Joined:
    Feb 20, 2012
    Messages:
    1,478
    Likes Received:
    181
    If you can actually buy a Brocade Turboiron 24x for $2K, then it might be worth it. They seem to be more expensive than that, for some reason.
    On the other hand, the cards are - as you point out - inexplicably cheap, especially for current-generation equipment.
     
    #2
  3. Jeggs101

    Jeggs101 Well-Known Member

    Joined:
    Dec 29, 2010
    Messages:
    1,466
    Likes Received:
    216
    Can you get 3 of these, stick them in a low-cost machine, and make a higher-latency switch out of it? Or maybe the better question is: can you direct-connect 2 clients to one server with 3 cards (a single link to each client, of course) if it is 10GigE?
     
    #3
  4. PigLover

    PigLover Moderator

    Joined:
    Jan 26, 2011
    Messages:
    2,771
    Likes Received:
    1,115
    Yes to both questions, though the direct-connect option is probably better. At 10GbE speeds the latency added by a CPU-based switch can be devastating to throughput.

    Of course, if you are going to do this you should really consider some of the QDR InfiniBand cards instead. You can do the same thing at 40Gb for the same or less money.
     
    #4
  5. dba

    dba Moderator

    Joined:
    Feb 20, 2012
    Messages:
    1,478
    Likes Received:
    181
    I hope that someone tries out a point-to-point setup with the Brocade 1020 cards and then posts their results here. They aren't as fast as QDR Infiniband, but they do appear to be cheaper.

     
    #5
  6. mrkrad

    mrkrad Well-Known Member

    Joined:
    Oct 13, 2012
    Messages:
    1,237
    Likes Received:
    50
    Open vSwitch is like a subnet manager in that it turns your NIC into a switch. Remember that most CNAs have acceleration (a vswitch in silicon), so you can think of a two-port NIC as a two-port switch. As long as your traffic stays within the NIC, latency should be pretty low.

    Pretty much all of those 2009-2010 10GbE switches go for $1,500-2,500 these days, all built on Broadcom Strata silicon.

    Now if you want even cheaper: CX4, or the 2- or 4-port switches with 20-gigabit ports :)

    But let's think about that. A switch with 20-gigabit ports, and 2 two-port 10GbE NICs. Hmm. Sounds a lot like what I just said up there.

    The point I'm trying to make is that the whole fast-networking market is a sham. It's stagnant, overpriced technology. I'd even say it's price-fixed.

    The used-price fallout makes it obvious: 24-port switches that cost $10-20K new are selling for a tenth of that (and are just as powerful as newer models).

    The NICs are the same story: $1,000-1,500 new, $40-75 used.

    We test all of our NICs by looping them back or connecting them to another card in the same box (or a nearby box). I've never tried Brocade, since I can get Emulex (equal to if not better than Intel) for $75. The Intel dual-SFP+ with SR optics ($129) and the Intel X520 10GBase-T ($150-175) are really ancient technology.
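    If you want to script that kind of loop test, here's a minimal sketch of the idea: push bytes through a TCP socket and time it. For a real NIC test you would bind HOST to the address on the looped interface; over 127.0.0.1 it only exercises the software path, so treat it as the method, not a NIC benchmark.

```python
# Minimal throughput check in the spirit of "loop the card and test it".
import socket
import threading
import time

HOST = "127.0.0.1"          # replace with the looped NIC's address for a real test
TOTAL = 256 * 1024 * 1024   # bytes to push
CHUNK = 64 * 1024

def sink(srv):
    # Accept one connection and drain it until the sender closes.
    conn, _ = srv.accept()
    with conn:
        while conn.recv(CHUNK):
            pass
    srv.close()

srv = socket.socket()
srv.bind((HOST, 0))          # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
t = threading.Thread(target=sink, args=(srv,))
t.start()

buf = b"\0" * CHUNK
start = time.perf_counter()
with socket.create_connection((HOST, port)) as c:
    for _ in range(TOTAL // CHUNK):
        c.sendall(buf)
elapsed = time.perf_counter() - start
t.join()

gbps = TOTAL * 8 / elapsed / 1e9
print(f"pushed {TOTAL >> 20} MiB in {elapsed:.2f}s = {gbps:.2f} Gbit/s")
```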

    Buying new at street price, you would likely pay more for the cabling (not the NIC, not the switch) than for a whole "deal used eBay" server!!

    But I've got every fricken brand of NIC here except Mellanox and Brocade.

    My suggestion: always hunt for cards that come with optics. Fiber is stupid cheap: 50 meters of super-high-quality OM3 (laser-optimized) runs about $61, and for short hauls you can use the junky old OM1 or OM2 that people throw away. Many folks don't know what they're selling and will let a NIC with dual optics go for $130, which is about what the two optics alone go for. $5 of cabling and boom, you're in business.

    DAC cables are used in place of fiber optics for direct connections up to about 5 meters. Most cards support them, though not all, and you may find that some NICs work just fine with off-brand DAC cables. Take a close look sometime at SAS and SFP+ cables (fiber, InfiniBand, Ethernet) and you might find they're pretty much the same. Obviously SAS, QSFP, and CX4 are different, but some are really cheap to buy used and work just fine.

    For the most part fiber uses less power than DAC cables. I think for most people 10GBase-T will suffice. The major benefit I see from 10GBase-T is backwards compatibility: most cards can drop back to gigabit, so if your switch or wiring fails you can always fall back in speed.

    Google pictures of the inside of your favorite expensive high-speed cable and you'll be surprised at what you see. I accidentally used Monoprice 350MHz Cat5e over 60 feet and, well, it ran perfectly! Swapped it for Cat6a and, guess what, no difference in speed or latency. A $6 60-foot cable pushing 8-9 megabit over a $40 used Broadcom NIC from 2007, lol!
     
    #6
    Last edited: Jul 18, 2013
  7. MiniKnight

    MiniKnight Well-Known Member

    Joined:
    Mar 30, 2012
    Messages:
    2,941
    Likes Received:
    857
    8-9 gigabit?

    Good read though. I have been waiting a long time for 10GBase-T and now am really rethinking. Thinking direct cable OR optics, but optical switches cost more.

    80% ready to just tell myself the world should be QDR InfiniBand, but I'm debating if the power draw is too much.

    Anyone know if you can run optics through the cheap QDR cards? It would make long-haul segments cheaper.
     
    #7
  8. dba

    dba Moderator

    Joined:
    Feb 20, 2012
    Messages:
    1,478
    Likes Received:
    181
    I have heard that InfiniBand uses less power than most 10GbE gear, especially the older-generation gear, and yes, you can use either copper or optical QSFP cables with InfiniBand.

     
    #8
  9. mrkrad

    mrkrad Well-Known Member

    Joined:
    Oct 13, 2012
    Messages:
    1,237
    Likes Received:
    50
    Just remember that there is overhead for everything. It is common to use VLANs, LACP/LAG trunking, and QoS so that two ports give you redundancy, aggregated speed, and security, and none of that comes free! Stack iSCSI and/or FCoE on top of that and there's more overhead!

    TCP/IP adds even more overhead, plus you may be running software RAID and other applications on the same machine.

    These are real features people use every day on servers to get "two wires from each server to handle it all." You may have 20 VMs with 40 vNICs in an ESXi server running over those two wires. This is where theoretical benchmark speed veers from real life.

    It is very easy to make 4 VMs do 2.5Gbps each! It is very, very hard to make ONE VM reach a full 10Gbps with the overhead of secure, redundant trunking!

    Also, many NICs were designed for asymmetrical performance, e.g. 10Gbps out to 4 machines, 6Gbps return rate.

    A file server will spend 90% of its time sending at 10Gbps to 4 PCs reading data; 10% of the traffic will be return traffic (writes).

    How often do you actually do 10Gbps read and 10Gbps write at the same time? Probably never.

    I have to use a ramdisk to push 10Gbps in both directions, and that's without any fancy features (QoS/VLANs/trunking)!!
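    Even before any of those layers, plain framing already shaves the headline number. A quick sketch of that arithmetic (standard Ethernet/TCP header sizes, nothing specific to any card here):

```python
# Framing arithmetic behind the "overhead isn't free" point: Ethernet and
# TCP/IP headers alone keep usable goodput below 10Gbit/s line rate.
LINE_RATE = 10e9                      # bits/s on the wire
MTU = 1500                            # bytes of IP packet per frame
ETH_OVERHEAD = 7 + 1 + 14 + 4 + 12    # preamble+SFD, MAC header, FCS, inter-frame gap
IP_TCP = 20 + 20                      # plain IPv4 + TCP headers, no options

def goodput(vlan_tag=0, tcp_options=0):
    """TCP payload bits/s at line rate with full-size frames."""
    payload = MTU - IP_TCP - tcp_options
    on_wire = MTU + ETH_OVERHEAD + vlan_tag
    return LINE_RATE * payload / on_wire

print(f"plain TCP:             {goodput() / 1e9:.2f} Gbit/s")   # ~9.49
print(f"with 802.1Q + tstamps: {goodput(vlan_tag=4, tcp_options=12) / 1e9:.2f} Gbit/s")
```

    And that is the best case with full-size frames; iSCSI/FCoE headers and hypervisor vswitch work only subtract further.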
     
    #9
  10. Patrick

    Patrick Administrator
    Staff Member

    Joined:
    Dec 21, 2010
    Messages:
    11,545
    Likes Received:
    4,471
    Have a bunch of IB cards I need to get running at this point. I think dba is correct on power consumption.
     
    #10
  11. mrkrad

    mrkrad Well-Known Member

    Joined:
    Oct 13, 2012
    Messages:
    1,237
    Likes Received:
    50
    Want to trade? :) I've got a bunch of various 10GbE adapters with 5m DAC cables. I'd love to try 40Gbps.

    I'd want to try the 40 or 56Gbps rate: two cards with a reasonable cable length. ESXi and Windows support preferred, but definitely Windows support.

    I've got some QLE8152s, Solarflare (1,024 vNICs per PORT! Linux), NetXen QLogic, NetXen with Intel SR optics, a Broadcom single-port 10GBase-T, and Dell Broadcom dual 10GBase-T (SMBus tape trick on non-Dell boxes).

    I could probably do a pair of Broadcom 10GBase-T, or a pair of SFP+ cards including a pair of DAC cables so you can try out bonding (using both ports to get 20Gbps send and 20Gbps receive simultaneously on Windows 2012).

    I just want to try out this InfiniBand stuff with a 100% proper setup.

    P.S. I might have some ProCurve gigabit switches with quad 10GbE under lifetime warranty to trade for, say, a used Voltaire 36-port 40Gb InfiniBand switch? Anyone? Fully warranted, not grey-market.
     
    #11
  12. jtreble

    jtreble Member

    Joined:
    Apr 16, 2013
    Messages:
    88
    Likes Received:
    10
    Does anyone know if there is a standardized way of measuring the level of NIC CPU offload/acceleration, so one could compare 10GbE NIC types when building one's own "reduced latency" switch/gateway/bridge?
     
    #12
    Last edited: Jul 19, 2013
  13. Jeggs101

    Jeggs101 Well-Known Member

    Joined:
    Dec 29, 2010
    Messages:
    1,466
    Likes Received:
    216
    John, I like the idea but can't offer a great way. One approach might be to install two dual-port NICs and compare traffic going between two ports of the same card against two ports on different cards.
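    A rough sketch of that comparison: time TCP ping-pongs between two endpoints and compare the medians. In a real run the endpoint addresses would be on the ports under test; over 127.0.0.1 (as written) it only demonstrates the measurement, not the hardware.

```python
# Same-card vs cross-card latency sketch: measure median TCP round-trip time.
import socket
import statistics
import threading
import time

def echo_server(srv):
    # Accept one connection and echo everything back until the peer closes.
    conn, _ = srv.accept()
    with conn:
        while True:
            data = conn.recv(64)
            if not data:
                break
            conn.sendall(data)
    srv.close()

def median_rtt_us(host="127.0.0.1", rounds=2000):
    srv = socket.socket()
    srv.bind((host, 0))   # for a real test, bind to a port-under-test address
    srv.listen(1)
    threading.Thread(target=echo_server, args=(srv,), daemon=True).start()
    rtts = []
    with socket.create_connection(srv.getsockname()) as c:
        c.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # no batching
        for _ in range(rounds):
            t0 = time.perf_counter()
            c.sendall(b"x")
            c.recv(64)
            rtts.append(time.perf_counter() - t0)
    return statistics.median(rtts) * 1e6

print(f"median round trip: {median_rtt_us():.1f} us")
```

    Run it once per path (same-card pair, then cross-card pair) and the difference between the two medians is a crude proxy for how much the on-card silicon shortcuts the trip.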
     
    #13
  14. dba

    dba Moderator

    Joined:
    Feb 20, 2012
    Messages:
    1,478
    Likes Received:
    181
    Hands off my QDR switch! It's too precioussssss (accidentally slips into Gollum voice)
     
    #14
