Impact of module placement & number on memory bandwidth

Discussion in 'Processors and Motherboards' started by Rand__, Jun 30, 2019.

  1. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    3,468
    Likes Received:
    502
    So I have been browsing the manual of one of my boards today in an attempt to understand the potential usage of the black DIMM slots on Supermicro X11 SP boards...

    I wondered how badly it would impact performance if they are used, and while I have yet to hear back from SM support, I found this document hinting at the potential impact - and it's actually quite large. I never realized that not having a balanced or full config mattered that much...

    Of course this is in regard to Lenovo servers with 12 modules per CPU and not SM with 6/8, but it at least gives an impression for my use case - and it makes a very good case for thinking about this more than I have been doing before...


    Xeon E5v4: http://lenovopress.com/lp0501.pdf
    Xeon SP: https://lenovopress.com/lp0742.pdf

    [attached image: upload_2019-6-30_12-11-58.png]
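    For anyone wanting to quantify the gap on their own box before and after moving DIMMs around, a STREAM-style triad loop is the usual quick check. Below is a minimal sketch in C - not the official STREAM benchmark, and the array size and rep count here are just assumptions; size the arrays well past the L3 cache of the CPU under test. Build with something like gcc -O2 -fopenmp triad.c -o triad.

    /* STREAM-style triad: a[i] = b[i] + s*c[i], 2 reads + 1 write per element */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1L << 26)   /* 64M doubles per array, 512 MiB each */
    #define REPS 10

    int main(void)
    {
        double *a = malloc(N * sizeof(double));
        double *b = malloc(N * sizeof(double));
        double *c = malloc(N * sizeof(double));
        if (!a || !b || !c) return 1;

        /* touch everything once so pages are actually allocated */
        for (long i = 0; i < N; i++) { a[i] = 0.0; b[i] = 1.0; c[i] = 2.0; }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int r = 0; r < REPS; r++) {
            #pragma omp parallel for
            for (long i = 0; i < N; i++)
                a[i] = b[i] + 3.0 * c[i];
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        double gb   = 3.0 * 8.0 * (double)N * REPS / 1e9;  /* bytes moved */
        printf("triad: %.1f GB/s\n", gb / secs);

        free(a); free(b); free(c);
        return 0;
    }

    Run it once per DIMM layout and compare; the relative drop between configs is what matters, not the absolute number.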
     
    #1
  2. Evan

    Evan Well-Known Member

    Joined:
    Jan 6, 2016
    Messages:
    2,804
    Likes Received:
    407
    HPE has similar info, also indicating that the non-optimal configs suffer a lot. While I have not tested it myself, I simply only populate 6 or 12 DIMMs per socket - see the quick channel arithmetic below.
    (The exception is really small servers that only need 1 or 2 DIMMs, but those also don't need peak memory performance.)

    The logic behind lots of Scalable systems having 8 slots is to not lose capacity compared to the older E5 systems; SM should really have found a way to get 12 DIMM sockets per CPU on their boards.
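    The channel math behind that rule of thumb, as a quick sketch (DDR4-2666 is an assumption here for first-gen Xeon SP - substitute your actual speed grade): each populated channel moves 8 bytes per transfer, so theoretical peak scales linearly with channel count.

    #include <stdio.h>

    /* Back-of-the-envelope peak memory bandwidth per socket:
     * 8 bytes per transfer per DDR4 channel. */
    int main(void)
    {
        const double mts = 2666.0;       /* DDR4-2666, megatransfers/s */
        const int counts[] = {2, 4, 6};  /* populated channels per socket */
        for (int i = 0; i < 3; i++)
            printf("%d channels: %.0f GB/s theoretical peak\n",
                   counts[i], counts[i] * mts * 8.0 / 1000.0);
        return 0;
    }

    That prints roughly 43 / 85 / 128 GB/s, which is why leaving channels unpopulated costs so much more than people tend to expect.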
     
    #2
  3. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    3,468
    Likes Received:
    502
    Yes, that would have been great. Only 6 slots available for peak performance on a single-CPU board is kind of limiting...
    Of course 6 x 32GB is not too shabby, but as soon as you don't use 32GB+ modules...
     
    #3
  4. Evan

    Evan Well-Known Member

    Joined:
    Jan 6, 2016
    Messages:
    2,804
    Likes Received:
    407
    Normally, for general-purpose VM or DB workloads, I use 384GB per socket (20 cores per socket for VMs, generally fewer for other workloads). Using only 2R 32GB RDIMMs, that takes 12 slots per socket (12 x 32GB = 384GB); now that you can get 64GB RDIMMs or LRDIMMs at a reasonable price, you can do the same in 6 (6 x 64GB).
     
    #4
  5. cheezehead

    cheezehead Active Member

    Joined:
    Sep 23, 2012
    Messages:
    693
    Likes Received:
    163
    The memory bandwidth impacts can be fairly large.

    Dell has some performance numbers comparing Scalable processor memory bandwidth on their C6420 blades, I'm assuming most vendors will suffer similar deficits.

    Modular system CPU and memory configurations can affect performance | Dell Canada

    For SM, you may want to email support to get the answer... their email support has actually been very good in my experience.
     
    #5
    vanfawx likes this.