Current best bang-for-buck 55XX or 56XX CPU?


PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
Do be careful with dual-CPU systems for transcoding. Unless your transcoding app is NUMA aware (and most of them are not), you will see much lower performance from a dual-CPU 12-core/24-thread system than from a slightly faster 6-core/12-thread single-CPU system that doesn't have any NUMA issues.

I was doing video processing on a dual X5550 system for quite a while. When I went to a single-CPU 4-core/8-thread Sandy Bridge, my encoding times dropped like a rock. Faster single-threaded CPU, yes, but half as many cores. It was only later that I learned NUMA was the issue.

I'd go with a single X5650.
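If the encoder itself isn't NUMA aware, you can at least fence it onto one socket from the OS side. A minimal Linux sketch of the idea (the node number, filenames, and HandBrakeCLI preset are placeholders, not from this thread); note it only pins CPUs, and memory then mostly lands on the local node via first-touch allocation. numactl --cpunodebind/--membind is the stricter tool:

```python
# Confine a transcode job to NUMA node 0's cores so a non-NUMA-aware
# encoder never pays cross-socket memory latency. Linux-only sketch.
import os
import subprocess

def node_cpus(node):
    """Parse a sysfs cpulist like '0-5,12-17' into a set of CPU ids."""
    with open(f"/sys/devices/system/node/node{node}/cpulist") as f:
        cpus = set()
        for part in f.read().strip().split(","):
            lo, _, hi = part.partition("-")
            cpus.update(range(int(lo), int(hi or lo) + 1))
    return cpus

# Pin this process to node 0; children (the encoder) inherit the affinity.
os.sched_setaffinity(0, node_cpus(0))
subprocess.run(["HandBrakeCLI", "-i", "in.mkv", "-o", "out.mp4",
                "--preset", "iPad"])
```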
 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
ESXi will shut down the 2nd CPU ("dormant") when your load is at 20% or less, unless you pin VMs to that CPU!

Which is why most folks disable power management in the BIOS, set it to maximum performance, and don't let ESXi manage power; it really eats up a ton of performance!
 

Chuckleb

Moderator
Mar 5, 2013
1,017
331
83
Minnesota
This has been bugging me all night, so I have been reading up on it. Is this still valid in current implementations? VMware recommends leaving power management enabled unless you see significant performance problems, which they say you'd only see in latency-dependent applications. In that case they would disable power management on the host and in ESXi.

With the host OS scheduler, wouldn't all cores be used somewhat during normal operation? And wouldn't you want idle cores powered down so the remaining cores can turbo boost?

I would say that in home environments power management is good and should be used, even at some performance cost, because many of us are concerned about power and noise. If you truly have a whole CPU idle or lightly used, you may want it off.


 

Chuckleb

Moderator
Mar 5, 2013
1,017
331
83
Minnesota
Do be careful with dual-CPU systems for transcoding. Unless your transcoding app is NUMA aware (and most of them are not), you will see much lower performance from a dual-CPU 12-core/24-thread system than from a slightly faster 6-core/12-thread single-CPU system that doesn't have any NUMA issues.
In general I would agree, but in the OP's case, where he runs it inside a VM, would this matter? The host OS abstracts the memory layer, and you get whatever the host provisions for you. You gain efficiency from virtualization at the expense of some performance regardless.
 

bookemdano

New Member
Jun 29, 2011
15
0
1
Thanks for the replies, all. Good thoughts for me to chew on. PigLover, from what I've seen from googling, ESXi itself is NUMA-aware, so wouldn't it be able to handle routing everything efficiently? Or, if not, I guess I could overcome any issues by using CPU affinity and only using one CPU.

Speaking of CPU affinity, mrkrad, that's really interesting about ESXi shutting down CPUs with <20% usage. I had no idea its power management was so aggressive with multi-CPU setups. Do you happen to have a reference for this? I am using ESXi 5.1 U2. Still, as Chuckleb says, in a home scenario where I only occasionally need the extra CPU power, maybe ESXi shutting down the second processor when it's not needed is a good thing.

OBasel, when you say 20-40W idle, are you guesstimating that I'd see a 20-40W difference in idle wattage between the L5630 and X5650, or are you saying that the X5650 likely consumes 20-40W at idle? And is that per CPU or for both processors combined? Sorry, probably stupid questions, but I have no idea how much power CPUs consume when idle. I was hoping it was quite a bit less than they use at 100% (I know the newer chips are a lot better in that regard).

Chuckleb and Patrick, thanks for the price points. I'm going to try to buy today or tomorrow because eBay's running another one of their 2x/3x/4x eBay Bucks promos. So that will "save" me another $10 or $20.

I'm usually a "happy medium" kind of guy. That's why I am so ticked that I missed the deals on the L5639; that would have been the no-brainer way to go. But now, with them running $150+ each, the miser in me no longer feels they are a good deal when I could get an L5640 or even an X5650 for $20-30 less.

So I guess that leaves 2x L5640 as the happy medium between the L5630 and the X5650. Still, the X5650 feels like the better deal (just due to the glut of them on the market, I guess). In all the excitement of owning my first DP mobo I never even considered using it in a UP configuration with a single X5650. I suppose that could be a different sort of happy medium. What would I be sacrificing, though, besides losing the extra 4-6 cores? Aren't certain PCIe slots connected directly to one IOH or the other? I assume I lose the other six memory slots (not really a problem at the moment since I've got 6x 8GB DIMMs). Anything else?

Thanks again everyone.
 

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
ESXi is NUMA aware, but it can only help you avoid memory performance penalties if each of your VMs 'fits' on a single CPU and its direct memory footprint. E.g., if you use 2x 4-core CPUs but your transcoding VM needs 6 vCores, then ESXi has no choice but to span the NUMA boundary. Similarly, if each physical CPU has 32GB but your VM requires 48GB, then some of the assigned memory will be from the 'wrong' CPU.

ESXi does help manage many NUMA issues, but if your core application has NUMA issues all by itself, there is no magic ESXi can do to 'fix' it.
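To make the 'fit' rule concrete, here is a toy check using the example numbers above (nothing measured, just the arithmetic the scheduler is effectively doing):

```python
# A VM only avoids remote memory if its vCPUs AND its RAM both fit
# inside one NUMA node. Numbers are the examples from this post.
def fits_one_node(vm_vcpus, vm_gb, cores_per_node, gb_per_node):
    return vm_vcpus <= cores_per_node and vm_gb <= gb_per_node

print(fits_one_node(6, 16, 4, 32))  # False: 6 vCores span two 4-core CPUs
print(fits_one_node(4, 48, 4, 32))  # False: 48GB spills past one CPU's 32GB
print(fits_one_node(4, 24, 4, 32))  # True: stays local, no NUMA penalty
```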
 


britinpdx

Active Member
Feb 8, 2013
367
184
43
Portland OR
When you say 20-40W idle, are you guesstimating that I'd see a 20-40W difference in idle wattage between the L5630 and X5650, or are you saying that the X5650 likely consumes 20-40W at idle? And is that per CPU or for both processors combined? Sorry, probably stupid questions, but I have no idea how much power CPUs consume when idle. I was hoping it was quite a bit less than they use at 100% (I know the newer chips are a lot better in that regard).
From memory (I don't have my notes in front of me), from a very recent test with the following setup ...

Supermicro 823TQ-650LPB chassis (I believe the PSU is 80 Plus rated)
Tyan S7012 DP 1366 motherboard (has quad NICs standard)
6x Micron 4GB PC3-10600R memory
2x case fans operational (other 2 disconnected)
2x Intel STS100C active CPU coolers
1x Seagate 160G SATA drive
Server 2012R2, OS booted and idle after login

Using a Kill-A-Watt to measure wall plug draw ..

With 2x L5639s the system idles at about 125W.
With 2x X5650s the system idles at about 130W.

So there's very little idle power difference between the two.
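For anyone who wants a dollar figure on that gap, a back-of-envelope calc (the $0.12/kWh rate is just an assumed tariff; substitute your own):

```python
# Yearly cost of the ~5W idle delta measured at the wall above.
delta_w = 130 - 125                       # 2x X5650 vs 2x L5639, at idle
kwh_per_year = delta_w * 24 * 365 / 1000  # 43.8 kWh
print(f"{kwh_per_year:.1f} kWh/yr ≈ ${kwh_per_year * 0.12:.2f}/yr at $0.12/kWh")
```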

Now, the UP and DP 1366 motherboards typically implement the Intel 5520 (IOH) chipset, supporting 36 PCIe lanes. I know from experience that the 5520 runs hot and needs to be well cooled.
The Supermicro X8DTH-6F that you have purchased is an I/O monster, with 2x 5520 IOHs on board as well as an LSI SAS controller, so I would expect idle power consumption to be higher than the figures I have mentioned here. (The Tyan S7012 only has one 5520 IOH.)

What would I be sacrificing, though, besides losing the extra 4-6 cores? Aren't certain PCIe slots connected directly to one IOH or the other? I assume I lose the other six memory slots (not really a problem at the moment since I've got 6x 8GB DIMMs).
I don't think you will lose anything running the X8DTH-6F with only one CPU. Clearly you only get access to the 6 memory slots that are hard-wired to that CPU socket. However, the dual 5520 IOHs are connected via QPI and "present" themselves as a single IOH to the CPU(s), so even though 4x PCIe slots are hard-wired to one 5520 and 3x PCIe slots to the other, you can still reach all 7 PCIe slots from one CPU.

This certainly changed with LGA2011 designs, where the PCIe lanes are driven directly from the CPU, and in DP 2011 designs the PCIe slots are hard-wired to specific CPUs.
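Side note: on Linux you can see whether the kernel ties a given PCIe device to a NUMA node, since each device exposes a numa_node hint in sysfs. Quick sketch (on a dual-IOH 1366 board like this you may well see -1, i.e. no declared affinity, while DP LGA2011 boards usually report real node numbers):

```python
# Print the NUMA node the firmware/kernel associates with each PCI device.
# A value of -1 means no affinity was declared for that device.
import glob

for path in sorted(glob.glob("/sys/bus/pci/devices/*/numa_node")):
    addr = path.split("/")[-2]  # e.g. 0000:03:00.0
    with open(path) as f:
        print(addr, "-> node", f.read().strip())
```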
 

cptbjorn

Member
Aug 16, 2013
100
19
18
Wow that is great info britinpdx, much appreciated.

I was just going to do some swapping with my DL180 and DL380 this weekend to decide what I want to do with my L5639 and L5520 pairs and the X5650 pair I just ordered, but if the difference is really that small I'm not going to bother with true A/B tests.
 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
Let us know how that dual-IOH setup works out! I've yet to find a non-buggy implementation. The DL370 G6 had this setup and it was finicky at best with both sockets populated. Very buggy as far as power-saving modes go; basically, disable all power savings if you value your uptime! :)
 

RimBlock

Active Member
Sep 18, 2011
837
28
28
Singapore
DayZ servers run fine on X5650s. The L5639 is a bit slow for larger servers, as Arma2oaserver.exe is single-threaded.
So I just bit the bullet and sprang for a matched pair of X5570s @ $200 (for the pair).

With Arma2oaserver.exe being single-threaded, more cores are not really going to help, and the X5570 runs at 2.93GHz with a 95W TDP. The node will be at the bottom of the chassis without a node above it (rear blocked, though), so I could potentially put in a 2U passive cooler if needed.
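As a first-order sanity check on the single-threaded reasoning (this ignores turbo and the small Nehalem-vs-Westmere IPC difference, so treat it as a rough estimate, not a benchmark):

```python
# With one hot thread, clock speed is the main lever between these chips.
x5570_ghz, l5639_ghz = 2.93, 2.13
print(f"X5570 ≈ {x5570_ghz / l5639_ghz:.0%} of the L5639's per-thread speed")
# -> X5570 ≈ 138% of the L5639's per-thread speed
```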

Thanks for all the info and suggestions everyone.

RB
 

bookemdano

New Member
Jun 29, 2011
15
0
1
Ditto, thanks everyone (and sorry, RimBlock, for butting in on your thread!). I ended up with 2x X5650 for $230 (minus about $18 in eBay Bucks thanks to the 4x promo), so I am pretty happy with that. I may just install one of them for now and test out a UP configuration for a while, and see what my idle wattage and temps are. Then I can install the other one and see what I get. britinpdx, your awesome post is what pretty much convinced me to go for the X5650 over the lower-TDP L line. If the difference in real-world power consumption is that minimal, then I'm not going to sweat it. Due to the glut of them on the market, I think the X5650 offers more bang for the buck than the L5640. Now, if the L5639 were still available at $75, that would be a different story!

Thanks again all. Once I get the system built and put through its paces I will report back.
 

britinpdx

Active Member
Feb 8, 2013
367
184
43
Portland OR
I had time to go back and review my notes from the 5639 & 5650 testing and verify the configuration. I had the config mostly correct; the only (minor) correction to the earlier post is the HD type ... it wasn't an older 3.5" Seagate 160GB SATA II as I had thought, it was a newer 2.5" Hitachi 500GB SATA III.

To recap, the chassis was a Supermicro 823 with a PWS-652-2H (80 Plus) power supply. I suspect that the "2" in the 652 indicates it has a Silver rating.

Aida64 reports the current config as..

Motherboard:
CPU Type 2x HexaCore Intel Xeon X5650, 2133 MHz (16 x 133)
Motherboard Name Tyan S7012 (5 PCI-E x8, 18 DDR3 DIMM, Video, 4 Gigabit LAN)
Motherboard Chipset Intel Tylersburg 5520, Intel Westmere
System Memory 24567 MB (DDR3 SDRAM)

Storage:
Disk Drive HGST HTS725050A7E630 (500 GB, 7200 RPM, SATA-III)

Network:
Network Adapter Intel(R) 82574L Gigabit Network Connection (192.168.1.114)
Network Adapter Intel(R) 82574L Gigabit Network Connection (192.168.1.109)
Network Adapter Intel(R) 82576 Gigabit Dual Port Network Connection (192.168.1.124)
Network Adapter Intel(R) 82576 Gigabit Dual Port Network Connection (192.168.1.118)

I was using Aida64 running on Server 2012 R2 for testing & reports, and I typically run the "System Stability Test" to monitor temps, fan speeds, etc.

2x L5639 configuration
Idle Power draw = 125W
Average Idle Core Temp = 30C

Load Power draw = 256W
Average Load Core Temp = 50C

2x X5650 configuration
Idle Power draw = 130W
Average Idle Core Temp = 30C

Load Power draw = 295W
Average Load Core Temp = 60C
 

mobilenvidia

Moderator
Sep 25, 2011
1,956
212
63
New Zealand
Supermicro X8DTH-6F LGA 1366 2x Xeon L5639 Hexcore 2.13 GHz 12GB DDR3 | eBay

Might be of interest $699 or OBO
SuperMicro X8DTH-6F LGA 1366 + 2 x Xeon L5639 Hexcore 2.13 Ghz + 12GB DDR3

Motherboard has been used for about 8 months. Bare board with IO shield only. No accessories are included.

CPU = 2 x L5639 12M Cache, 2.13 GHz, 5.86 GT/s Intel® QPI SLBZJ

RAM = 6 x 2GB Kingston KVR1333D3S8R9S/2G DDR3, 1333MHz, ECC, CL9, Single Rank, x8, 1.5V, Registered DIMM
 

bookemdano

New Member
Jun 29, 2011
15
0
1
Thanks, mobilenvidia. Depending on what kind of offer they would accept, that could be a good deal for somebody. I already purchased the board for $250 and the 2x X5650 for $230. Not sure what 6x 2GB RDIMMs are worth these days, but I'd want minimum 4GB DIMMs at this stage (I mean, part of the reason for these DP boards is so you can go beyond 32GB of RAM).

But yeah, the X8DTH-6F still retails for a pretty penny, so that could be a good deal for someone when weighed against retail prices.
 

britinpdx

Active Member
Feb 8, 2013
367
184
43
Portland OR
Just for grins, here's a little more data on the performance of 2x L5639s and 2x X5650s (I'm sure this data must be on some benchmark site) ...

Cinebench R15 CPU test
2x L5639 = 974
2x X5650 = 1214

Passmark 8.0 CPU Mark
2x L5639 = 10194
2x X5650 = 12313

Do be careful with dual-CPU systems for transcoding. Unless your transcoding app is NUMA aware (and most of them are not), you will see much lower performance from a dual-CPU 12-core/24-thread system than from a slightly faster 6-core/12-thread single-CPU system that doesn't have any NUMA issues.
I'm currently ripping my HD DVD and Blu-ray collection to MKV format for playback at home, and also using Handbrake to transcode the MKVs down to MP4, using the "iPad" target setting.

It appears that Handbrake (v0.9.9) does make use of as many cores/threads as it can access...



For the same input file, at about the same point (44%) in the transcoding process, the 2x X5650 averaged 86fps and the 2x L5639 averaged 68.2fps.

For my typical usage, it seems that the X5650 is the better bang for the buck. Ironically, I paid more for a pair of L5639s about a year ago than I did for a pair of X5650s a month ago!
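Putting the thread's numbers together for a rough bang-for-buck figure (the $300/pair for the L5639s is assumed from today's ~$150-each asking prices; the $230/pair for the X5650s is what was actually paid here):

```python
# Passmark CPU Mark per dollar and per load-watt, from the figures above.
pairs = {
    "2x L5639": {"cpumark": 10194, "load_w": 256, "price": 300},
    "2x X5650": {"cpumark": 12313, "load_w": 295, "price": 230},
}
for name, d in pairs.items():
    print(f"{name}: {d['cpumark'] / d['price']:.0f} marks/$, "
          f"{d['cpumark'] / d['load_w']:.1f} marks/W")
# 2x L5639: 34 marks/$, 39.8 marks/W
# 2x X5650: 54 marks/$, 41.7 marks/W
```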
 

bookemdano

New Member
Jun 29, 2011
15
0
1
That's what I like to hear, britinpdx! Your posts have been totally reassuring that I made the right decision in going with the X5650. Thanks a ton.