Mellanox ConnectX-6 Brings 200GbE and HDR Infiniband Fabric to HPC

Discussion in 'STH Main Site Posts' started by Patrick Kennedy, Nov 23, 2018.

  2. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    2,646
    Likes Received:
    357
    Instead of needing two separate cards, each NUMA node can directly attach to the fabric with this method. This is important for Intel Cascade Lake-SP, which will come out in a few months with PCIe Gen3 -> 4.

    And I thought those had been available for a while? But I am not following closely, tbh :)
     
    #2
  3. Patrick

    Patrick Administrator
    Staff Member

    Joined:
    Dec 21, 2010
    Messages:
    11,231
    Likes Received:
    4,187
    Cascade Lake is still listed as PCIe Gen3, not Gen4. That is what has people buzzing about Rome: the first x86 part with PCIe Gen4, ahead of Cooper Lake / Ice Lake. (A quick bandwidth sketch follows the thread below.)
     
    #3
    gigatexal likes this.
  4. Stephan

    Stephan IT Professional

    Joined:
    Apr 21, 2017
    Messages:
    77
    Likes Received:
    25
    Oh man. Wake up, Intel.
     
    #4
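To put rough numbers on the Gen3 vs Gen4 point discussed above, here is a minimal back-of-envelope sketch (not from the article; it models only 128b/130b line encoding and ignores all protocol overhead, so real throughput is lower). It shows why a single PCIe Gen3 x16 slot cannot feed a 200 Gb/s port, and why either a PCIe Gen4 host or splitting the card across two x16 connections matters for a 200GbE / HDR InfiniBand adapter.

```python
# Back-of-envelope PCIe throughput vs. a 200 Gb/s port (200GbE / HDR InfiniBand).
# Raw-link estimate: only 128b/130b encoding is modeled, no protocol overhead,
# so treat these as rough upper bounds rather than measured numbers.

PCIE_GT_PER_LANE = {"Gen3": 8.0, "Gen4": 16.0}   # transfer rate per lane in GT/s
ENCODING = 128 / 130                              # 128b/130b line encoding efficiency
PORT_GBPS = 200.0                                 # one 200 Gb/s port

def x16_bandwidth_gbps(gen: str) -> float:
    """Approximate usable bandwidth of a x16 slot for a given PCIe generation."""
    return PCIE_GT_PER_LANE[gen] * ENCODING * 16

for gen in ("Gen3", "Gen4"):
    bw = x16_bandwidth_gbps(gen)
    verdict = "enough" if bw >= PORT_GBPS else "NOT enough"
    print(f"PCIe {gen} x16 ~ {bw:.0f} Gb/s -> {verdict} for a {PORT_GBPS:.0f} Gb/s port")

# Approximate output:
#   PCIe Gen3 x16 ~ 126 Gb/s -> NOT enough for a 200 Gb/s port
#   PCIe Gen4 x16 ~ 252 Gb/s -> enough for a 200 Gb/s port
```

Two Gen3 x16 connections, one per NUMA node as described in the quoted article text, reach roughly the same aggregate (~252 Gb/s) as a single Gen4 x16 link, which is the point of the dual-connection option on a Gen3-only platform.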
Similar Threads: Mellanox ConnectX-6
Forum | Title | Date
STH Main Site Posts | Mellanox BlueField BF1600 and BF1700 4 Million IOPS NVMeoF Controllers | Aug 6, 2018
STH Main Site Posts | MiTAC HillTop NVMeoF JBOF Storage Powered by Mellanox BlueField | Aug 6, 2018
STH Main Site Posts | Mellanox Spectrum for Microsoft Azure SONiC at OCP Summit 2018 | Mar 20, 2018
STH Main Site Posts | Mellanox Innova-2 with 25GbE and Xilinx FPGA | Nov 7, 2017
STH Main Site Posts | Mellanox BlueField NVMeoF SoC Solution | Aug 21, 2017
