Mellanox ConnectX-6 Brings 200GbE and HDR InfiniBand Fabric to HPC

Discussion in 'STH Main Site Posts' started by Patrick Kennedy, Nov 23, 2018.

  1. #1
  2. Rand__

    Rand__ Well-Known Member

    "Instead of needing two separate cards, each NUMA node can directly attach to the fabric with this method. This is important for Intel Cascade Lake-SP which will come out in a few months with PCIe Gen3." -> Gen4?

    And I thought those had been available for a while? But I am not following closely, tbh :)
     
    #2
  3. Patrick

    Patrick Administrator
    Staff Member

    Cascade Lake is still listed as PCIe Gen3, not Gen4. That is what has people buzzing about Rome: the first x86 CPU with PCIe Gen4, ahead of Cooper Lake / Ice Lake.
     
    #3
  4. Stephan

    Stephan IT Professional

    Oh man. Wake up, Intel.
     
    #4
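
For anyone following the Gen3 vs. Gen4 point above: a PCIe Gen3 x16 link tops out around 15.75 GB/s (~126 Gbit/s), short of a single 200GbE port, while Gen4 x16 roughly doubles that to ~31.5 GB/s. That arithmetic is presumably why the adapter attaches each NUMA node to the fabric over its own PCIe connection (Mellanox's Socket Direct style approach) on Gen3 platforms. The sketch below is illustrative only, not a Mellanox tool and not from this thread; it reads standard Linux PCI sysfs attributes (numa_node, current_link_speed, current_link_width) to show each NIC's NUMA locality and negotiated link, which is one way to check how an adapter is actually wired.

# Minimal sketch (illustrative, not a Mellanox utility): list each network
# interface's backing PCI device, its NUMA node, and the negotiated PCIe link.
# Uses only standard Linux sysfs attributes; the values in the comments are
# examples, not guarantees.

from pathlib import Path

SYSFS_NET = Path("/sys/class/net")


def read_attr(dev: Path, name: str) -> str:
    """Read one sysfs attribute, returning 'n/a' if the kernel does not expose it."""
    try:
        return (dev / name).read_text().strip()
    except OSError:
        return "n/a"


def main() -> None:
    for iface in sorted(SYSFS_NET.iterdir()):
        dev = iface / "device"  # PCI device behind this interface (absent for lo, bridges)
        if not dev.exists():
            continue
        numa = read_attr(dev, "numa_node")            # -1 means no NUMA affinity reported
        speed = read_attr(dev, "current_link_speed")  # e.g. "8.0 GT/s PCIe" = Gen3
        width = read_attr(dev, "current_link_width")  # e.g. "16"
        print(f"{iface.name:12s} NUMA node {numa:>3}  PCIe {speed} x{width}")


if __name__ == "__main__":
    main()

On an adapter with two PCIe connections you would expect its two functions to report different NUMA nodes, while a single Gen3 x16 slot will show the 8.0 GT/s x16 cap regardless of locality.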