Improving Power Consumption & Bus Speed on Dell R710


ViciousXUSMC

Active Member
Nov 27, 2016
I just moved my R710 from FreeNAS as my "hypervisor" to ESXi. This means my VMs are no longer part of FreeNAS but are instead VMs in ESXi that must communicate through virtual switches.

At work, on an HP Z400, I can get about 6.5Gb/s between VMs with iperf using a normal 1500 MTU and about 16Gb/s using a 9000 MTU.

At home on the R710 I only get about 3.5Gb/s with either configuration, so the "bus" appears to be my limitation.
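
For anyone wanting to reproduce the numbers, this is roughly the kind of test involved (a minimal sketch; the 10.0.0.x address and eth0 are placeholders, and for the jumbo-frame run the vSwitch/port group in ESXi has to be set to MTU 9000 as well as the guest NICs):
Code:
# on VM 1: start the iperf server
iperf -s

# on VM 2: run the client against VM 1 (placeholder address)
iperf -c 10.0.0.10

# for the 9000 MTU run, raise the guest NIC MTU on both VMs first
ip link set dev eth0 mtu 9000
iperf -c 10.0.0.10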

Granted, none of my storage could probably provide anything near 3.5Gb/s, so this is adequate; but for the sake of learning and experimenting I present the following questions.

With a dual-socket, triple-channel memory configuration, I am not running the optimal setup right now: 64GB of RAM made up of a mix of 4GB and 2GB DIMMs filling 8 of the 9 slots on each CPU, so there is no way I am getting triple-channel operation at the moment.
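
If it helps to confirm what is populated where, dmidecode run on the host itself (for example from a live Linux USB stick; it will not see the host's DIMMs from inside a guest) lists every slot with its size and speed:
Code:
# run on bare metal, not inside a VM
dmidecode -t memory | grep -E "Locator|Size|Speed"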

Question #1 - If I moved to an optimal memory configuration, would this 3.5Gb/s number likely go up, since the data has to pass over the CPU/memory bus, or is this more of a chipset limitation that I have probably already reached, so there would be no difference?

Question #2 - I found 6x16GB kits for a decent price. If I stripped out my current RAM and went from 16 DIMMs to 6 DIMMs, what kind of power savings (wattage) could I expect? For a 24/7 server, would anybody think the cost of the RAM is worth it for the power/heat savings?

I do not recall the exact numbers, but when I went from 24GB of RAM to 64GB of RAM my wattage went up by a good margin. I do not plan to replace this server anytime soon, so the savings could add up to quite a bit over time.
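
As a rough back-of-envelope only (the per-DIMM wattage is an assumption and real draw varies with DIMM type and load), dropping from 16 DIMMs to 6 should be on the order of a few tens of watts, which can be checked before and after via the iDRAC's IPMI power readings:
Code:
# read chassis power draw from the iDRAC (IP, user, and password are placeholders)
ipmitool -I lanplus -H 192.168.1.120 -U root -P calvin sdr type "Current"

# rough math, assuming ~4W per DDR3 RDIMM:
#   16 DIMMs -> 6 DIMMs  =  10 fewer DIMMs x ~4W  ~= 40W
#   40W x 24h x 365 days ~= 350 kWh/year
#   at $0.12/kWh         ~= $40/year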

So in essence, it's a bit of "for science" reasoning here: I am looking to see whether spending the money would be a total waste, or whether there is some reasonable purpose, in which case I can look into implementing the change.
 

Terry Kennedy

Well-Known Member
Jun 25, 2015
New York City
www.glaver.org
ViciousXUSMC said:
At work, on an HP Z400, I can get about 6.5Gb/s between VMs with iperf using a normal 1500 MTU and about 16Gb/s using a 9000 MTU.

At home on the R710 I only get about 3.5Gb/s with either configuration, so the "bus" appears to be my limitation.

So in essence, it's a bit of "for science" reasoning here: I am looking to see whether spending the money would be a total waste, or whether there is some reasonable purpose, in which case I can look into implementing the change.

I don't use ESXi, but if I understand correctly this is the same test as on bare metal, just with two guests each going through ESXi. Under FreeBSD on my R710 I get 29Gbit/sec using iperf on the loopback interface with no tuning whatsoever:
Code:
[0:1] host:~> iperf -c 127.0.0.1
------------------------------------------------------------
Client connecting to 127.0.0.1, TCP port 5001
TCP window size: 47.8 KByte (default)
------------------------------------------------------------
[  3] local 127.0.0.1 port 36295 connected with 127.0.0.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  33.8 GBytes  29.0 Gbits/sec
The hardware is dual X5680 CPUs with 6 * 8GB M393B1K70CHD-CH9 DDR3-1333 located in sockets A1-A3 and B1-B3, if that helps.
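
One way to separate vSwitch overhead from a CPU/memory limit would be to run the same loopback test inside one of the ESXi guests (a sketch, not specific advice for this box): if loopback inside a single VM is also stuck near 3.5Gb/s, the host itself is the bottleneck; if it is much faster, the virtual switch path is the more likely suspect.
Code:
# inside one ESXi guest: start an iperf server in the background,
# then test against loopback so no vSwitch is involved
iperf -s &
iperf -c 127.0.0.1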