As a single user, is InfiniBand worth the money?

Discussion in 'Networking' started by Dajinn, Jun 3, 2015.

  1. Dajinn

    Dajinn Active Member

    Joined:
    Jun 2, 2015
    Messages:
    512
    Likes Received:
    78
    I'm just now wading into the waters of network attached storage, and all of these connectors and networking protocols I didn't know existed now...exist...to me, lol.

    Anyway, I was attracted to the prospect of a multi-node VM host with separate SM847s for storage; however, I'll likely be using them as network attached storage.

    Really, as far as heavy usage goes, the most reading the storage will see when I'm not testing things out in VMs will be Plex streaming for me and a few family members. That's it.

    That being said, I don't want to run into a bottleneck if I decide I want to do some more heavy lifting.

    Also, if I go with a 4-node Supermicro server, I'm going to be limited to one low-profile PCIe device per node. I want to avoid having to buy a bunch of different network adapters every time I want to upgrade. I was considering a few options (rough numbers below):

    -Quad gigabit LP NIC, teaming the ports for 4 Gbps over Ethernet
    -10GbE NIC
    -DDR InfiniBand adapters (DDR switches are relatively cheap compared to their QDR counterparts)
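
    For scale, this is the rough back-of-the-envelope math I've been doing. The stream bitrates are my own guesses for direct-play Plex, not measurements, and the nominal rates are just the headline figures for each option:

    Code:
    # Rough back-of-the-envelope: estimated Plex load vs. the nominal rate of each option.
    # Stream bitrates are guesses for direct-play 1080p remuxes, not measurements.

    streams = 4                     # me plus a few family members
    bitrate_mbps = 25               # assumed worst-case direct-play bitrate per stream
    plex_load = streams * bitrate_mbps

    options = {
        "1x GbE":                  1_000,
        "4x GbE teamed (LACP)":    4_000,   # a single flow still tops out at ~1 Gbps
        "10GbE":                  10_000,
        "DDR InfiniBand (IPoIB)": 16_000,   # 20 Gbps signaling, ~16 Gbps data rate
    }

    print(f"Estimated Plex load: {plex_load} Mbps")
    for name, nominal in options.items():
        print(f"{name:>24}: {nominal:>6} Mbps nominal -> {plex_load / nominal:.1%} used by Plex")

    Even with pessimistic bitrates, Plex alone is nowhere near saturating gigabit; the headroom question is really the VM and storage traffic.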

    Thoughts?
     
    #1
    T_Minus likes this.
  2. TeeJayHoward

    TeeJayHoward Active Member

    Joined:
    Feb 12, 2013
    Messages:
    374
    Likes Received:
    106
    I played with both InfiniBand and fiber. In the end, I like fiber; InfiniBand is foreign to me and doesn't play by the rules I grew up with, so my InfiniBand network is just 10Gb IPoIB these days. IB really shines with RDMA, so if you do decide to do InfiniBand, make sure your entire setup works with RDMA. For me, that meant no ESXi, and only Server OSes for the Windows systems.
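
    If you do go the IB route, this is roughly how I sanity-check that the RDMA side is even alive on a Linux box before blaming anything else. The /sys/class/infiniband paths are what the Mellanox/OFED stack exposes on my machines (the same files ibstat reads, as far as I know), so treat it as a sketch rather than gospel:

    Code:
    # Quick sanity check: list RDMA-capable devices and their port state/rate via sysfs.
    # Assumes a Linux host with the IB/RDMA kernel modules loaded (ib_core plus the HCA driver).
    from pathlib import Path

    IB_SYSFS = Path("/sys/class/infiniband")

    if not IB_SYSFS.is_dir():
        print("No RDMA devices found - is ib_core / the HCA driver loaded?")
    else:
        for dev in sorted(IB_SYSFS.iterdir()):
            print(f"Device: {dev.name}")
            for port in sorted((dev / "ports").iterdir()):
                state = (port / "state").read_text().strip()   # e.g. "4: ACTIVE"
                rate = (port / "rate").read_text().strip()     # e.g. "20 Gb/sec (4X DDR)"
                print(f"  port {port.name}: {state}, {rate}")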

    tl;dr: If I could do it all over again, I'd take the money I spent on my 40Gb IB network and instead spend it on a 10GbE network.
     
    #2
  3. markpower28

    markpower28 Active Member

    Joined:
    Apr 9, 2013
    Messages:
    393
    Likes Received:
    98
    ESXi does support two types of RDMA storage: SRP and iSER.
     
    #3
    ehorn likes this.
  4. dswartz

    dswartz Active Member

    Joined:
    Jul 14, 2011
    Messages:
    376
    Likes Received:
    28
    Depends on what IB adapters you have. I got a couple of ConnectX-3 EN adapters. They are 10GbE/40GbE out of the box, and ESXi supports them as such. ESXi also presents iSER storage adapters for me; I can't take advantage of that yet because my storage backend (SCST-based) doesn't yet support iSER.
     
    #4
    T_Minus likes this.
  5. T_Minus

    T_Minus Moderator

    Joined:
    Feb 15, 2015
    Messages:
    6,784
    Likes Received:
    1,459
    What's the cheapest 40Gb switch with like 4 or 8 ports you can buy? Are they still big $$ ??
     
    #5
  6. dswartz

    dswartz Active Member

    Joined:
    Jul 14, 2011
    Messages:
    376
    Likes Received:
    28
    I haven't seen anything with fewer than 8 ports. Leastways not on eBay, which is the only source of affordable HW :)
     
    #6
    T_Minus likes this.
  7. Entz

    Entz Active Member

    Joined:
    Apr 25, 2013
    Messages:
    269
    Likes Received:
    62
    If you are looking at needing more than 4 ports, it is hard to beat InfiniBand; DDR switches are quite a bit cheaper.

    RDMA is amazing, but IPoIB is a pig IME in both CPU usage and overhead. CX4-to-QSFP cables can also be extremely expensive.

    I started with IB and moved to pure Ethernet, but I have fewer nodes than you are planning. 10GbE is just cleaner/easier, especially when integrating with your existing network (you would need to bridge from Ethernet to IPoIB, which is slow and doesn't always work).

    Do you have any adapters now? You may find that round-robin iSCSI is more than enough for what you are doing.
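
    Once you do have adapters in hand, a crude single-stream test like the sketch below will show both what a link actually delivers (GbE vs. 10GbE vs. IPoIB) and how much CPU the receive side burns doing it. Plain TCP, standard library only; the port and transfer size are placeholders, and the CPU figure includes Python's own overhead, so treat it as a rough comparison rather than a proper benchmark:

    Code:
    # Crude single-stream TCP throughput test, plus CPU time burned on the receive side.
    # Run "receive" on the storage box, then "send <storage-host>" on a compute node.
    import socket
    import sys
    import time

    PORT = 5201                     # placeholder port
    CHUNK = 1 << 20                 # 1 MiB per send/recv call
    TOTAL = 8 * 1024 * CHUNK        # 8 GiB per run

    def send(host):
        buf = bytearray(CHUNK)
        with socket.create_connection((host, PORT)) as s:
            for _ in range(TOTAL // CHUNK):
                s.sendall(buf)

    def receive():
        with socket.create_server(("0.0.0.0", PORT)) as srv:
            conn, _ = srv.accept()
            with conn:
                got = 0
                t0, c0 = time.monotonic(), time.process_time()
                while got < TOTAL:
                    data = conn.recv(CHUNK)
                    if not data:
                        break
                    got += len(data)
                wall = time.monotonic() - t0
                cpu = time.process_time() - c0
                print(f"{got / wall / 125_000_000:.2f} Gbit/s, "
                      f"receive side used {cpu / wall:.0%} of one core")

    if __name__ == "__main__":
        receive() if sys.argv[1] == "receive" else send(sys.argv[2])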
     
    #7
  8. Dajinn

    Dajinn Active Member

    Joined:
    Jun 2, 2015
    Messages:
    512
    Likes Received:
    78
    Nah, I don't have any adapters now. I actually don't really have anything now; I'm just trying to save myself future pain. I think I might just get the barebones SS6026TT-HDTRF from Mr Rackables, because each node can support two full-size PCIe cards and I want that expandability if I choose to use IB or a quad NIC or something. I also want to play around with a Dell J23, which will require an external HBA card, so I need more expandability than a 2U 4-node server can offer me.

    Don't DDR IB ports support link aggregation? If I went with a cheaper IB option, I could at least squeeze 20 Gbps out of two ports per node and then just attach those to a DDR switch that my storage servers would also be attached to with two DDR ports each.
     
    #8
  9. dswartz

    dswartz Active Member

    Joined:
    Jul 14, 2011
    Messages:
    376
    Likes Received:
    28
    I'm running 10GbE, but using ConnectX-3 EN cards in point-to-point mode. Haven't needed a switch yet, thankfully...
     
    #9
  10. T_Minus

    T_Minus Moderator

    Joined:
    Feb 15, 2015
    Messages:
    6,784
    Likes Received:
    1,459
    I'm running 10GbE too, and I have a switch, but I was hoping to do 40Gb between the ESXi hosts and the datastores.
     
    #10
  11. dswartz

    dswartz Active Member

    Joined:
    Jul 14, 2011
    Messages:
    376
    Likes Received:
    28
    The ConnectX-3 cards handshake at 40GbE point to point :)
     
    #11
  12. Entz

    Entz Active Member

    Joined:
    Apr 25, 2013
    Messages:
    269
    Likes Received:
    62
    No. A single DDR port is already 20 Gbps signaling, which works out to about 16 Gbps of usable data (and IPoIB usually gets less than that in practice). You may be able to bond at a higher level, but anything Ethernet-related needs eIPoIB, which is not widely supported; this is also what makes bridging a PITA. Generally you will just use a single link (or two for redundancy), though you can do things like round robin.
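
    The arithmetic, for reference: SDR/DDR/QDR all use 8b/10b encoding on the wire, so the usable data rate is 80% of the signaling rate, and IPoIB typically lands below even that.

    Code:
    # Nominal 4X InfiniBand link rates: signaling rate vs. usable data rate.
    # SDR/DDR/QDR use 8b/10b line coding, so 2 of every 10 bits on the wire are overhead.
    signaling_gbps = {"SDR": 10, "DDR": 20, "QDR": 40}

    for gen, signal in signaling_gbps.items():
        data = signal * 8 / 10      # usable data rate is 80% of the signaling rate
        print(f"{gen}: {signal} Gbps signaling -> {data:.0f} Gbps data rate")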
     
    #12
