Mellanox ConnectX-6 Brings 200GbE and HDR Infiniband Fabric to HPC

Discussion in 'STH Main Site Posts' started by Patrick Kennedy, Nov 23, 2018.

  2. Rand__

    Rand__ Well-Known Member

    Mar 6, 2014
    "Instead of needing two separate cards, each NUMA node can directly attach to the fabric with this method. This is important for Intel Cascade Lake-SP which will come out in a few months with PCIe Gen3." -> Gen4?

    And I thought those had been available for a while? But I am not following closely, tbh :)
  3. Patrick

    Patrick Administrator
    Staff Member

    Dec 21, 2010
    Cascade Lake is still listed as Gen3, not Gen4. That is what has people buzzing about Rome: the first x86 CPU with PCIe Gen4, ahead of Cooper Lake / Ice Lake.
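    For context on why the Gen3/Gen4 distinction matters for this card: a single 200GbE port exceeds what one PCIe Gen3 x16 slot can carry, which is why the article's point about each NUMA node attaching directly to the fabric is relevant on Gen3 hosts. A quick back-of-the-envelope check in Python (a sketch using only the standard line rates and the 128b/130b encoding overhead, ignoring packet/protocol overhead):

    ```python
    # Usable PCIe bandwidth from line rate alone, ignoring protocol overhead.
    # Gen3 and Gen4 both use 128b/130b encoding; Gen3 runs 8 GT/s per lane,
    # Gen4 runs 16 GT/s per lane.
    def pcie_gbps(gt_per_s, lanes, encoding=128 / 130):
        """Return raw data bandwidth in Gbit/s for one direction of a PCIe link."""
        return gt_per_s * encoding * lanes

    gen3_x16 = pcie_gbps(8, 16)    # ~126 Gbit/s: short of a 200GbE port
    gen4_x16 = pcie_gbps(16, 16)   # ~252 Gbit/s: comfortably covers 200GbE
    print(f"Gen3 x16: {gen3_x16:.1f} Gbit/s, Gen4 x16: {gen4_x16:.1f} Gbit/s")
    ```

    So on a Gen3 platform the only way to feed the full 200Gb/s is to spread the card across two x16 links, the method the article describes; a Gen4 host can do it from a single slot.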
  4. Stephan

    Stephan IT Professional

    Apr 21, 2017
    Oh man. Wake up, Intel.