Mellanox ConnectX-6 Brings 200GbE and HDR InfiniBand Fabric to HPC

Discussion in 'STH Main Site Posts' started by Patrick Kennedy, Nov 23, 2018.

  1. #1
  2. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    3,143
    Likes Received:
    433
    "Instead of needing two separate cards, each NUMA node can directly attach to the fabric with this method. This is important for Intel Cascade Lake-SP which will come out in a few months with PCIe Gen3." -> Gen4?

    And I thought those had been available for a while? But I am not following closely, tbh :)
     
    #2
  3. Patrick

    Patrick Administrator
    Staff Member

    Joined:
    Dec 21, 2010
    Messages:
    11,403
    Likes Received:
    4,351
    Cascade Lake is still listed as PCIe Gen3, not Gen4. That is what has people buzzing about Rome: the first x86 with PCIe Gen4, ahead of Cooper Lake / Ice Lake.
     
    #3
    gigatexal likes this.
  4. Stephan

    Stephan IT Professional

    Joined:
    Apr 21, 2017
    Messages:
    85
    Likes Received:
    29
    Oh man. Wake up, Intel.
     
    #4
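
On the NUMA point in the article text quoted in post #2: with a dual-connection (Socket Direct style) adapter, each socket gets a local PCIe path to the fabric instead of one socket always crossing the inter-socket link. A minimal sketch of how to see where a NIC actually sits, assuming a Linux host with sysfs (interface names will be whatever your system exposes):

    # Minimal sketch: print the NUMA node behind each physical network interface
    # by reading Linux sysfs. Virtual interfaces (lo, bridges, ...) are skipped
    # because they have no PCIe device behind them.
    from pathlib import Path

    for dev in sorted(Path("/sys/class/net").iterdir()):
        numa_file = dev / "device" / "numa_node"
        if numa_file.exists():
            node = numa_file.read_text().strip()  # -1 means the platform did not report a node
            print(f"{dev.name}: NUMA node {node}")

If both ports (or both PCIe connections) of an adapter report different nodes, traffic can be kept NUMA-local on each socket.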
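To put rough numbers on the PCIe Gen3 vs Gen4 discussion in post #3, here is a back-of-the-envelope sketch (illustrative only; it accounts for 128b/130b line encoding but ignores protocol overhead) of why a single Gen3 x16 slot cannot feed a 200Gb/s port, while one Gen4 x16 slot, or two Gen3 x16 slots in a Socket Direct layout, can:

    # Rough usable bandwidth of a PCIe x16 link, Gen3 vs Gen4 (illustrative only).
    def pcie_x16_gbps(gt_per_s: float) -> float:
        """Usable Gb/s of an x16 link, assuming 128b/130b encoding and no protocol overhead."""
        return gt_per_s * (128 / 130) * 16

    gen3 = pcie_x16_gbps(8.0)   # PCIe Gen3: 8 GT/s per lane  -> ~126 Gb/s for x16
    gen4 = pcie_x16_gbps(16.0)  # PCIe Gen4: 16 GT/s per lane -> ~252 Gb/s for x16
    port = 200.0                # 200GbE / HDR InfiniBand port speed in Gb/s

    print(f"Gen3 x16: {gen3:.0f} Gb/s, feeds a 200G port: {gen3 >= port}")
    print(f"Gen4 x16: {gen4:.0f} Gb/s, feeds a 200G port: {gen4 >= port}")
    print(f"2x Gen3 x16 (Socket Direct): {2 * gen3:.0f} Gb/s, feeds a 200G port: {2 * gen3 >= port}")

That gap is why dual-slot / Socket Direct options matter on Gen3 platforms like Cascade Lake, and why a single Gen4 x16 slot is enough on Rome.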