Dual E5-2670 Build Advice Needed For Home PC/VMware Lab


Fodmidoid

Member
Dec 29, 2016
Greetings,

I've recently picked up two E5-2670 (v1) processors with the intention of building an everyday workstation PC that would also be used as a VMware Workstation Lab, plus some occasional gaming. I would also like 128 GB RAM (minimum of 64 GB to start).

I need advice on what motherboard to buy for this type of use. A few that I was looking at were:

ASRock Rack EP2C602-4L/D16
SuperMicro X9DR3-LN4F+
ASUS Z9PA-D8

Should I just consider building a system with one processor and keep the other for a spare, or is two CPUs a good way to go for what I want?

Any other advice is welcomed as well, such as memory, power supply, video and sound cards, storage controller, case, liquid cooling, etc.

Thanks very much for all your help!
 

SycoPath

Active Member
Oct 8, 2014
You're short a lot of relevant info for anyone to make recommendations. How many VMs? How many PCIe slots do you need? Many dual-socket motherboards don't have all expansion slots available unless both processors are installed. Does power draw matter to you? What OS are you using for storage? Are you using something like Napp-it or FreeNAS to host your storage as a VM, or are you just using storage local to VMware and presenting virtual disks to the installed OSes?
 

fractal

Active Member
Jun 7, 2016
Here are a few things I would consider before I started picking motherboards.

1 - Many dual socket motherboards tie PCIe slots to individual processors, as SycoPath mentioned. This means you must populate both processor sockets to use all the PCIe slots.
2 - Many dual socket motherboards tie memory slots to individual processors. This means you must populate both processor sockets to use all the memory slots. This limits the total memory with one processor and may make it expensive to reach your target capacity by requiring higher-density (read: more expensive per megabyte) memory.
3 - Many dual socket motherboards tie interrupts to individual processors. You need to be careful which slots you use for high-performance I/O cards to avoid saturating the inter-processor (QPI) bus.
4 - The 2670 chews a fair bit of power, and two of them even more. It can be tricky to keep two of them cool and quiet at the same time.

Does it make sense for your use case to have two systems? One for virtualization with oodles of memory shoved in a closet / garage where noise is not an issue and a second with more modest processor / memory but appropriately upmarket graphics card for gaming? Many find this a less expensive and more palatable choice if space is available.
 

Fodmidoid

Member
Dec 29, 2016
Hmmm...that's interesting. Thanks for the reply.

Well, I already bought the two 2670's months ago.

I do already have a Dell R715 racked in a server cabinet with dual Opteron processors and 64 GB of RAM, but with ESXi installed I was having trouble following the video lecture labs (CBT Nuggets, for example), which do everything in VMware Workstation. I figured I could just use ESXi instead, but I found it difficult, in particular when creating ten NICs per host like they do in the lectures.

So, since I was wanting to build a new, powerful PC anyway, and I had read about the fantastic deal on the Xeon E5-2670s, I thought: why not build a 16-core, 32-thread machine with maybe 128 GB RAM that I could use for personal everyday use, as well as for a VMware Workstation lab, plus some gaming, and more.

But this is something that I would be leaving on 24/7, and I was curious how much money that could add up to running the dual Xeons. Converting watts into dollars isn't my strong suit.

Part of me wants to put everything in a nice case with a window, while another part of me wants to possibly put it in a rack mountable 4u case and rack it in my server cabinet, though I guess it would have to be close to my desk if it's going to be my home workstation.

So, knowing all that, should I still be considering splitting them up? I'm definitely open to suggestions. Sometimes it's hard to see things from the inside.

Thanks.
 

Fodmidoid

Member
Dec 29, 2016
Thanks, SycoPath.

Since I am planning to use this as an everyday workstation, I guess I will need a substantial number of PCIe slots to accommodate a sound card, video card, USB 3.0, etc., as I realize that most of these boards will be server boards.

Power draw is something I was curious about, though not a deal breaker. My electricity is included in the rent, but that doesn't mean I want to run up a huge bill for the guy either. Plus, if I were to move, that situation would most likely change. I just don't know how much more per month I could be looking at, leaving this running 24/7.

Storage-wise, I currently have a 5-bay Synology NAS with around 8 TB of usable space, plus an old HP server running OpenFiler, though I would like to add local SSD and SATA storage to this build and play around with vSAN, among other things.

Thanks.
 

Cole

Member
Jul 29, 2015
How about a Dell T5600/T7600 workstation? It has USB 3.0 and sound, and it's made to house graphics cards.
 

RobertFontaine

Active Member
Dec 17, 2015
Winterpeg, Canuckistan
Still haven't heard the budget, so.... The ASUS Z10 or Z9PE-D8 WS is the shiznet and fits in an E-ATX case. The Supermicro X9/X10 DRG is super peachy keen, but you need a case with 11 PCIe slots, so expect to pay another $300+ for the enclosure. This puts you in a nice spot with dual NICs, lots of PCIe slots, and an easy upgrade to 256 GB of RAM because "it's my PC". The v1 E5 is going to limit you to PCIe 2.0 rather than PCIe 3.0, so you may want to rethink your 2670s once you build out your machine. The Z9/X9 motherboards lack DDR4, if that is important to you. The ASUS motherboards come with audio, are a tiny bit overclockable, and also come with USB 3.0 (IIRC). The Supermicro motherboards come with 5 double-spaced slots and a couple of little ones, IPMI, and more enterprisey features. Any of these boards is in the $500-600 range if you shop around.

A middle ground is one of the X9 or X10 Supermicro motherboards with three x16 slots and a couple of little ones; those are more like a normal dual-CPU workstation and can be found in the $300-400 range.

You can go super cheap, like one of the Natex boards, but as a home workstation I think it would be noisy and limiting.
 

nk215

Active Member
Oct 6, 2015
Still haven't heard the budget, so.... The v1 E5 is going to limit you to PCIe 2.0 rather than PCIe 3.0, so you may want to rethink your 2670s once you build out your machine.
I just want to correct the information that the E5-2670 v1 limits PCIe to 2.0. This is only true with the Intel motherboard (popular with the Natex deals), which uses the PCIe lanes from the chipset itself. The E5-2670 v1 works fine with PCIe 3.0 on Supermicro boards.

For a general-purpose desktop, I recommend going with an E-ATX board. The Supermicro X9DR3-F is a very good candidate; the X9DR7-LN4F is also a good choice, as is the X9DRi-F, etc.

All these boards are in the $250-$300 range on a good day. If energy is not a real concern, it's very hard to beat the power and cost of a dual E5-2670 setup.

In the US, a good estimate is $100/year in energy cost for every 100 watts of idle power drawn 24/7.
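To put numbers on that rule of thumb, here is a quick back-of-the-envelope sketch (the $0.12/kWh rate and the 250 W example figure are illustrative assumptions, not measurements):

Code:
# Rough annual electricity cost for a machine left running 24/7.
# The $0.12/kWh rate is an assumed, roughly US-average figure.
def annual_cost(watts, price_per_kwh=0.12):
    kwh_per_year = watts / 1000 * 24 * 365   # 8,760 hours per year
    return kwh_per_year * price_per_kwh

print(round(annual_cost(100)))   # ~105 -> roughly $100/year per 100 W, as above
print(round(annual_cost(250)))   # ~263 -> e.g. a dual-socket box idling near 250 W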
 

superempie

Member
Sep 25, 2015
The Netherlands
Intel Ark says the E5-2670 is PCI Express Revision 3.0 and the X9DRG-QF product page says it supports PCI-E 3.0 too, so it should work.

edit:
Checked my server build which has a SM X9DAi in it with 2x E5-2670 SR0KX, and it says:
Code:
# dmidecode | grep "PCI Express"
Type: x16 PCI Express 3 x16
Type: x4 PCI Express 3 x4
Type: x16 PCI Express 3 x16
Type: x8 PCI Express 3 x8
Type: x16 PCI Express 3 x16
Type: x8 PCI Express 3 x8

Still have a X9DRG-QF over here, but can't currently check it.
 

Fodmidoid

Member
Dec 29, 2016
Thanks a lot for the replies. I've been checking out these things and trying to figure out a build.

I can't help but wonder, though, if I should be embracing my R715 more?

Dual Opteron processors and 64 GB RAM. Does anybody have any thoughts on this compared to building a dual E5-2670 setup? Should I instead use the Dell R715 as a VMware lab server, and then put something separate together for daily desktop and gaming use? Should I think about building a single E5-2670 machine for that?
 

Use This

New Member
Jan 6, 2017
Hi, first-timer here, and I am having trouble with my S2600CP2J build from the get-go. I apologize for hijacking the thread - I did not want to clutter up the front page.

I have purchased a kit from Natex, which includes the board, 2x E5-2670s, and 32 GB (4x 8 GB) of memory. When I boot the system I get a symptom like the one in this video:

[embedded YouTube video of the POST failure]

I am getting almost exactly the same LED behavior, including the beeps, except mine has the System Status light blinking amber afterwards. I have rotated all 4 sticks (Samsung 2Rx4 PC3L-10600R ECC DIMMs) and I get the same result. I have yet to see the BIOS screen.

Originally I tried booting with a video card (a GTX 960 and a Radeon HD 6870, on separate occasions) installed, and I did not get anything on my monitor and got the above error. Later I removed the video cards and booted the system with only the CPU and memory stick(s) installed. Unfortunately I have no monitor with analog input, so I had no display connected to the system's onboard video. I thought the system would still go through POST without a display attached, but I still got the same error.

The comments on the YouTube video mention a memory problem, but could all 4 of my sticks be bad? All 4 sticks gave me the same error individually.

The only step I have skipped is finding a monitor that accepts analog input and connecting it to the onboard GPU. I will have to borrow one (or buy one) for that, so I figured I would ask here before I take that step.
 

Tdunbug

New Member
Apr 13, 2016
Does your motherboard support remote management via the BMC (IPMI)? If so, you can try connecting to the board remotely to get a graphical display of any BIOS errors. I know when I had BIOS issues with my SM X9DRL-iF, being able to log in remotely from a laptop was very useful for diagnosis. The issue with my board was that the video drivers were not supported by Windows 10 Pro.

From your YouTube video I noticed that you only have one CPU installed. This may mean that only specific memory slots and PCIe slots are available. Check your manual for single-CPU memory/PCIe configurations, as each CPU has its own memory and PCIe controllers. This will help confirm whether it is a memory/hardware failure. Did you try to boot with two sticks at a time and no PCIe video?

As for the no-display issue, you may have to reseat your graphics card into each slot by trial and error until the graphical display works (that is what I had to do, or use the onboard video until you can install your drivers). Also, some server-grade boards will not use a PCIe card until the onboard video is disabled within the BIOS, so you will have to go into your BIOS first to disable it. Finally, confirm that your power supply is capable of handling the load, as most run-of-the-mill PSUs just don't cut it.
 

whitey

Moderator
Jun 30, 2014
But this is something that I would be leaving on 24/7, and I was curious how much money that could add up to running the dual Xeons. Converting watts into dollars isn't my strong suit.
Late to the party, but as a rough reference: I have three of these nodes (X9SRL though, so single-socket E5-2670 v1) and they cost me roughly $45 a month to leave humming 24/7 at my kWh rate of 12 cents. Close to 4 amps steady-state with 3 nodes, each with 128 GB of memory, plus a Juniper switch to drive everything.
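For anyone who wants to sanity-check figures like these against their own rate, here is a rough sketch; the 120 V mains and near-unity power factor are assumptions, and the 4 A draw is simply the steady-state figure quoted above:

Code:
# Approximate monthly cost from a steady-state current draw.
# Assumes ~120 V mains, a power factor near 1, and $0.12/kWh.
volts = 120
amps = 4.0                                 # 3 nodes + switch, steady state
price_per_kwh = 0.12
watts = volts * amps                       # ~480 W
kwh_per_month = watts / 1000 * 24 * 30     # ~346 kWh
print(round(kwh_per_month * price_per_kwh))   # ~41 -> in the same ballpark as ~$45/month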
 

Use This

New Member
Jan 6, 2017
@Tdunbug: I managed to borrow a monitor with an analog connector, and that solved the problem. The YouTube video is not mine - I happened to find it by chance, and since my board behaves just like that, I figured a video could replace my thousand words. Thank you very much for your kind help. I cannot believe that the board does not recognize a graphics card until its own onboard GPU is acknowledged first. With an ancient connector, to boot!

I have another question. I would like to update the BIOS so that hopefully I can avoid this kind of problem in the future. The board also has problems with S3 sleep as well as fan speed control. Hopefully a newer BIOS can fix them. My question is, can I use the following package, which appears to be the latest BIOS?

02.06.0005

The readme says:

Code:
To update the system firmware to the versions included in this update package, the firmware
currently loaded on the given server system must meet the following:
 
  - BIOS 01.01.1002, or later Version
  - ME 02.01.05.069 or later Version
  - BMC 01.00.2612 or later Version
  - FRUSDR 1.01 or later version

The utilities used to update the System Firmware are:
  - Iflash32 11.0 build 11
  - FWPIAUPD 11.0 build 9
  - FRUSDR 11.0 build 19
And my current board has the following:

BIOS: SE5C600.86B.01.02.0003
BMC Firmware Version: 1.04.2896
SDR Version: SDR Package 1.03
ME Firmware Version: 2.1.5.69

It appears to me that I am good to go with the latest BIOS, even though the current BIOS on the board seems quite old? Am I reading it correctly? I took a picture because, gosh, I am a little nervous. If there is anything I need to be aware of before flashing the BIOS, please let me know. Thank you again so very much!
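For what it's worth, a quick way to double-check dotted version strings like these (just an illustrative sketch; the version numbers are copied from this post, with the BIOS version taken as the numeric tail of the SE5C600.86B string):

Code:
# Compare dotted firmware version strings numerically (leading zeros ignored).
def ver(s):
    return tuple(int(part) for part in s.split("."))

checks = {
    # name: (currently installed, minimum required by the readme)
    "BIOS":   ("01.02.0003", "01.01.1002"),
    "ME":     ("2.1.5.69",   "02.01.05.069"),
    "BMC":    ("1.04.2896",  "01.00.2612"),
    "FRUSDR": ("1.03",       "1.01"),
}
for name, (current, required) in checks.items():
    print(name, "OK" if ver(current) >= ver(required) else "too old")
# All four print "OK", so the prerequisites appear to be met.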

 

Fodmidoid

Member
Dec 29, 2016
Yes, you have completely hijacked this thread, and the conversation has nothing to do with my dilemma anymore, yet I am still getting notifications that someone replied to my thread, only to log in and see it was about your issue again instead. I would appreciate it if you started a separate thread for your own problem, especially since my issue has still not been resolved.

Thanks.
 

Dave_B

New Member
Dec 7, 2016
Hmmm...that's interesting. Thanks for the reply.

Well, I already bought the two 2670's months ago.

Part of me wants to put everything in a nice case with a window, while another part of me wants to possibly put it in a rack mountable 4u case and rack it in my server cabinet, though I guess it would have to be close to my desk if it's going to be my home workstation.
I put together a dual E5-2670 system from the Natex deal for a home workstation. Since it would be sitting on my desk, I picked up a compact windowed Corsair Carbide Clear 400C case for $50 to put it in. I did have to get a little creative with the case, which, although E-ATX size, did not have the correct mounting holes. But I did not want a behemoth server case looming over me. The inlet and outlet fans that came with the case provide plenty of airflow, combined with the fact that the top is completely vented. CPU cooling is supplied by a pair of $20 Cooler Master Hyper 212s, with the CPU2 fan mounted on the back side in a pull configuration. Doing that provided extra separation between the two heatsinks, which reduced CPU2's max temperature by 5C, so both run at 65C max under 32-thread 100% load. I used Noctua PWM NA-RC7 Low Noise Adapters to slow the fans down and keep everything nice and quiet. The PSU is a Supermicro 665W server supply I've had for several years, which powered my previous X79 and X99 builds. I stuck in my mini GTX 1060 and a $15 USB 3.0 card, hooked up my USB sound "card" to complete the system, and installed a few SSDs I had to get it going.

It functions very well as a general-purpose computer and is even a capable gamer. So you can do this inexpensively, and it is not noisy at all sitting inches from me on the right side of my desk. A photo of dubious quality is provided for your viewing below.