D-1540 or E5 v3 - Torn and need more info for use case


Ramos

Member
Mar 2, 2016
Hi StH members,
I am a new posting member here, but I have been lurking on and off for months.

I need a new server for the home, but I am torn between the Xeon D-1540 and a dual-socket E5 platform in various configurations.

Goal / Purpose:
- The platform will be bought privately and will serve several purposes:
--- Primary: work experiments; training and gaining experience with on-premise, self-managed cluster installations, as well as software that runs on clusters, like Hadoop and various new Spark-based products and NoSQL databases.
--- Secondary: entertainment at home. Just a virtual Win10 Pro for movies, browsing, development in VS/Eclipse, etc.
--- Secondary: file server hosting. A virtual FreeNAS for RAID-Z2 experiments and easy management of media and datasets.
- I seek to gain experience with auto-deployment of clusters for companies that have strict no-cloud governance and little in-house cluster experience, and to learn from self-managed clusters (with totally free hands, unlike the shared clusters at work). On top of that, I need to test various cluster products, like Hadoop performance insights.
- I prefer to have one physical box to sit somewhere in a small flat (expensive area).

Software implementation plan:
- I plan on doing this via a Win 2012R2/2016 base and then running a VM server off that. I already have experience with VMware, Oracle VirtualBox, and Hyper-V.
--- An on/off VM: a Win10 Pro guest for home/entertainment.
--- Up to 6 simultaneous VMs (2+4 nodes) running cluster installations of various kinds. I cannot do this via AWS/Azure because I would not have totally free rein, I would pay license fees for things I otherwise get for free, and I NEED to gain more experience with on-premise cluster installations, especially deployment of Kickstart installations via PXE servers and Chef/Ansible management of the rest once the 1-click deployment stuff runs (rough sketch below this list).
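
To make the Kickstart part concrete, here is a rough sketch of the kind of per-node generation script I have in mind. The hostnames, IPs, mirror URL, and template fields are placeholders for illustration, not a finished deployment:

Code:
# Render a minimal CentOS 7 Kickstart file per cluster node.
# All names and addresses below are illustrative placeholders.
KICKSTART_TEMPLATE = """\
install
url --url=http://{mirror}/centos/7/os/x86_64/
lang en_US.UTF-8
keyboard us
network --bootproto=static --ip={ip} --hostname={hostname}
rootpw --plaintext changeme
reboot

%packages
@core
%end
"""

NODES = {  # hypothetical 2+4 layout: two masters, four workers
    "master1": "192.168.10.11",
    "master2": "192.168.10.12",
    "worker1": "192.168.10.21",
    "worker2": "192.168.10.22",
    "worker3": "192.168.10.23",
    "worker4": "192.168.10.24",
}

def write_kickstarts(mirror="mirror.example.org"):
    # One <hostname>.ks file per node, for the PXE/TFTP side to serve.
    for hostname, ip in NODES.items():
        with open(f"{hostname}.ks", "w") as f:
            f.write(KICKSTART_TEMPLATE.format(mirror=mirror, ip=ip, hostname=hostname))

if __name__ == "__main__":
    write_kickstarts()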

Background / Situation:
- I work in Big Data and have vast experience with ML, Hadoop (IBM/Hortonworks (ODPi) and Cloudera), and VM setups (though not much organized cluster management yet, like Puppet, Chef, Ansible, or CFEngine), plus AWS experience, so I know many of the options I face.
- I already have two on-premise clusters (2+4 nodes each) at work for experiments, but as these are not owned by me or my department and are shared with other users, I only have limited admin rights on them and am bound by rules for what I can and cannot do. They are also datacenter-hosted, so planning an OS install takes weeks, if it is allowed at all.

Budget:
- 2k-3k USD/EUR excluding external disks, i.e., OS disk only (4k absolute max = 2x D-1540 systems)
(EU-based, but if something costs 1 USD in the US it usually costs 1 EUR in the EU, so potato, potato)

Needs:
- Host a VM server with 6-8 concurrent VMs running.
- ECC memory for computations, so Xeon.
- A low power bill, as I will be footing it. My current 130W-TDP i7-950 has already cost more in power over 5 years than it cost to buy ($2,000), and it has never been overclocked.
- A slot for a cheap GTX 960 for 4K 4:4:4 60Hz admin work (I need/want the screen real estate for VM windows and comfortable multitasking, and I already have the monitor).

-----

Solutions thought of:
- D-1540 (the list is a purchase list with prices; ignore the fan/case stuff)
D-1540 - VM server and virtual Hadoop workstation Preisvergleich | geizhals.eu EU
Pro: Small, modern ports, low power, fastest OS disk available, a cheap 8-core/16-thread setup, a great NIC setup, and it comes in microblade form should I need to expand.
Con: Low on CPU power; the only PCIe slot is taken by the 960, so no HBA (1015); might be too low on memory, with 64GB being the feasible limit.

- E5 v3 platform, using 2x E5-2618L v3 bought new through legitimate channels
Hadoop Data node 2x2600v3 - 2x D1540 budget Preisvergleich | geizhals.eu EU
Pro: As modern as the D-1540, DDR4, flexible clocks (2.3 GHz base to 3.4 GHz turbo), low power, more versatile.
Con: Expensive to buy and run.

- E5 v3 platform, same as above, but using 2x 26xx engineering samples (ES) off eBay (not sure if ES chips are even legal to buy?)
Pro: Cheaper than the 2618L. Lots more power; might be competitive with 2x D-1540s.
Con: No warranty, might be dodgy to buy, high power bill, possibly other issues.

- E5 v1 platform, same as above, but with used E5-2670 v1's or similar
Pro: Very cheap to buy, lots of core/memory horsepower.
Con: Old tech, a regular SATA SSD for the OS, expensive power-wise, slow memory, used parts, no warranty.

- Physical second-hand setup
(a cheap desktop/laptop system for fun, a cheap Xeon for the disk/server role + 6 cheap physical Opteron nodes for the experiments)
Pro: A real physical environment, flexible setup, no VM problems, no big loss if hardware dies, given the low cost.
Con: Slow memory, slow disks, high power bill, and the hardware might be so old it no longer reflects reality.


Any insights on which system to pick for this purpose? (Thanks in advance.)
 

MiniKnight

Well-Known Member
Mar 30, 2012
I'm not sure if this helps, but the motherboard you picked for the E5-2618L v3 is not a great fit for a standard case, as it is meant for 1U or 2U systems with horizontal expansion card mounting.

If it is purely a home development system, the D-1540/D-1541, or maybe even the D-1528, is nice because it uses less power. The dual E5 v3 is what I would go with if I were loading more disks into the nodes. With your budget, I don't think you are going to be using 16 or more disks per node.
 

Ramos

Member
Mar 2, 2016
MiniKnight said:
I'm not sure if this helps, but the motherboard you picked for the E5-2618L v3 is not a great fit for a standard case, as it is meant for 1U or 2U systems with horizontal expansion card mounting.

If it is purely a home development system, the D-1540/D-1541, or maybe even the D-1528, is nice because it uses less power. The dual E5 v3 is what I would go with if I were loading more disks into the nodes. With your budget, I don't think you are going to be using 16 or more disks per node.
Thanks for the answer, and a good point I had forgotten. Rack units won't matter to me, as I won't be hosting the box in a datacenter (on purpose, since I don't have access to something like HP iLO-enabled systems).

The cases were merely stuffed into the list to remind me of a fixed overhead cost that I also need to remember when comparing numbers.

As the impact of physical disk performance on jobs running on individual nodes is something I can test at work via VPN, I do not need physical disks on each node; I can virtualize those with a couple of big SSDs hosting virtual drives in the 200 GB range, which should suffice. My biggest test dataset so far is 1 TB. OS disks for the nodes will just be hosted on the host's OS disk as virtual drives, which should have enough IOPS for that (I assume).

In short, I think you are right that the 6x SATA ports (plus the OS on M.2/M-key) of the D-1540 should suffice, so the missing HBA isn't much of an issue.

I am more worried about whether 8 cores at just 2 GHz (albeit 14nm, and with the same Linpack mark as a stock-clocked 6700K) will be enough to virtualize the whole system simultaneously.
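
As a quick sanity check on that worry, this is the back-of-envelope oversubscription math I am doing; the VM mix below is just my assumed layout, not a recommendation:

Code:
# Rough vCPU oversubscription check for a single D-1540 host (8C/16T).
# The VM mix is my own planned layout, purely an assumption.
PHYSICAL_CORES = 8

VMS = {  # name: vCPUs
    "win10-entertainment": 2,
    "freenas": 2,
    "master1": 2,
    "master2": 2,
    "worker1": 2,
    "worker2": 2,
    "worker3": 2,
    "worker4": 2,
}

total = sum(VMS.values())
print(f"{total} vCPUs on {PHYSICAL_CORES} cores -> {total / PHYSICAL_CORES:.1f}:1 oversubscription")
# ~2:1 is usually tolerable for bursty dev work, but four workers all
# crunching a Hadoop/Spark job at once will contend hard for the 2 GHz cores.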
 

Ramos

Member
Mar 2, 2016
- Does anyone know if there are any issues with buying the E5 v1 CPUs, other than no warranty and the power bill?
I've read about the ES CPUs, how they don't work in brand-name servers, and that Intel doesn't want people using them, so I'm afraid of BIOS issues and such.

I have been avoiding the v1's, as the CPU benchmark results show them to be quite slow relative to what one pours into the power bill compared to the v3's, and I was wondering whether saving $1,200 now will cost me $1,500 in added power bills to get the same compute out. I.e., can twin E5-2618L v3's at just 2x 75W deliver >= the FLOPS of the cheap 2x 2670 v1's at 2x 120W, per another thread on this forum?

I did some math, for those who wonder why I focus so much on power bills.
- Using a realistic kWh price in my area and a "24-7 / 2" pattern (running it 12 hours a day on average) for the annual bill, and comparing, as an example, 2x 75W 2618L v3's to 2x 120W 2670 v1's, holding other variables the same and assuming the PSU is efficient enough to pull the same efficiency under both systems' usage patterns:

Delta(2x 2670 v1, 2x 2618L v3) = 2 sockets x (120 - 75) W x 12 hrs/day / 1000 W/kW x 365 days/year = 394 kWh/year
394 kWh/year x $0.3927/kWh ≈ $155/year difference

Delta(D-1540, 2x 2618L v3) = 460 kWh/year ≈ $181/year

But that is of course not a fair comparison; we should compare against 2x D-1540 nodes, as a 16-core to 16-core comparison:
Delta(2x D-1540 systems, 2x 2618L v3) = 263 kWh/year ≈ $103/year (manageable, if the dual E5 can handle my needs)

Basically, 2x D-1540 systems could earn back their price premium over even the 2618L v3 build through power savings alone, given a few years of near-24-7 running.
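
For anyone who wants to rerun the math with their own rate or hours, here are the same deltas in a few lines of Python (TDP stands in for average draw here, which is a crude assumption; wall power will differ):

Code:
# Annual running-cost delta vs. a dual E5-2618L v3 baseline,
# using TDP as a crude stand-in for average power draw.
PRICE_PER_KWH = 0.3927  # my local rate
HOURS_PER_DAY = 12      # the "24-7 / 2" pattern

def annual_cost(watts):
    return watts * HOURS_PER_DAY * 365 / 1000 * PRICE_PER_KWH

BASELINE = annual_cost(2 * 75)  # 2x E5-2618L v3

for name, watts in {
    "2x E5-2670 v1": 2 * 120,
    "1x D-1540": 45,
    "2x D-1540": 2 * 45,
}.items():
    print(f"{name}: {annual_cost(watts) - BASELINE:+.0f} USD/year vs 2x 2618L v3")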

- Are there any cheap alternatives to the 2618L v3's out there on the used market (E5 v1, E5 v2), in the same price range or below, and running at no more than 75W TDP?
(Even if TDP does not scale the same way for every CPU in terms of turbo wattage, I'd prefer to compare non-turbo watts to non-turbo watts.)

- Are there any solution options I am forgetting in my setup thoughts?
(Like whole used servers or so? ... bulk blades?)
 

Patrick

Administrator
Staff member
Dec 21, 2010
@Ramos - since the E5 generation, the chips have been fairly good at powering down at low utilization percentages.
 

Ramos

Member
Mar 2, 2016
Patrick said:
@Ramos - since the E5 generation, the chips have been fairly good at powering down at low utilization percentages.
Okay, thanks, but that complicates things a bit. Are there any tables or manuals that can tell/predict how far they can throttle down in each generation?
 

nrtc

New Member
Dec 3, 2015
Ramos said:
Okay, thanks, but that complicates things a bit. Are there any tables or manuals that can tell/predict how far they can throttle down in each generation?
Things are actually more complicated, since you're interested in the power consumption at the wall rather than just the energy consumption of the CPU. Xeon D motherboards will definitely be more efficient than a dual E5-26xx motherboard. As ballpark figures: a dual E5-2670 v1 system should idle somewhere between 100 and 200W and peak up to 400W under load (see here or here). The top-spec Xeon D idles somewhere around 30W and peaks around 100W (see here). Since a test system will spend most of its time idling, the power difference will be somewhere in the 100 to 150W range.
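
To put that gap into money terms at the rate quoted earlier in the thread and the OP's 12 h/day pattern (just plugging the numbers in):

Code:
# Annual cost of a 100-150 W wall-power gap at 12 h/day.
PRICE_PER_KWH = 0.3927  # the OP's quoted rate
HOURS_PER_DAY = 12

for gap_watts in (100, 150):
    kwh = gap_watts * HOURS_PER_DAY * 365 / 1000
    print(f"{gap_watts} W gap -> {kwh:.0f} kWh/year, about ${kwh * PRICE_PER_KWH:.0f}/year")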
 

Ramos

Member
Mar 2, 2016
Good links, thanks, and you're right. Though I did see a thread on StH where people claimed their E5 v1's were idling under 100W for a dual system?

I will go and ponder more, and I'll probably come up with a hybrid approach before too long. Those Xeon D's really are gods in terms of what they deliver, except perhaps for pricing (for us mortals).

I also need to ask the company if they have some E5 v1/v2 boxes they want to get rid of, and at what prices. The thing is, what I will do with them could be memory-intensive (Spark jobs, MemSQL, other NoSQL databases), so maybe plain good old DDR3 ECC in large quantities, like 256 GB for cheap, could be the solution, combined with a D-1540/D-1567 that I'll get for minor stuff and for running everything BUT the data nodes in the experiments. Either D-15xx could probably run as a master node in any cluster setup with ease.

I'll get back with the results of what I did and how it worked out, as a feedback loop and for info, but that might not be until June because of work schedules.
 

Evan

Well-Known Member
Jan 6, 2016
E5-2670 @ 65 watts idle vs. D-1541 @ ~30 watts idle.
One is a lot cheaper! (A complete in-case SM system is ~$1,100, just add RAM, vs. building an E5 for maybe less than $500 if you choose wisely.)

Still, it's heat, noise, and size, plus the complexity of the build (I don't consider the build at all difficult, but some may be worried about bent pins on CPU sockets, etc.).
 

tuatara

Member
Mar 2, 2016
Note that normal dual E5 v1 server motherboards are more likely to idle in the 100-200 W range, as @nrtc stated. The Open Compute boards are stripped down and optimized for low power consumption (and cost), and they use a very specific form factor with specific input power requirements (~208 V AC or 48 V DC).
 

Evan

Well-Known Member
Jan 6, 2016
Just as a reference point, my dual E5's of the HP DL380 variety normally draw ~180 watts at low load (not exactly idle, but not far from it), though they do have a lot of memory in them, which consumes some watts.
 

kroem

Active Member
Aug 16, 2014
I just want to chime in on the gfx card - Nvidia consumer cards and ESXi might not play nicely.
 

Patrick

Administrator
Staff member
Dec 21, 2010
tuatara said:
Note that normal dual E5 v1 server motherboards are more likely to idle in the 100-200 W range, as @nrtc stated. The Open Compute boards are stripped down and optimized for low power consumption (and cost), and they use a very specific form factor with specific input power requirements (~208 V AC or 48 V DC).
I think 200W is a bit high.

Here is the V3 launch piece I wrote for Tom's, which had idle numbers for dual E5-2690 V1 chips:
Power Consumption Results - Intel Xeon E5-2600 V3 Review: Haswell-EP Redefines Fast

That system was a big 2U Lenovo as well.
 

nrtc

New Member
Dec 3, 2015
Evan said:
E5-2670 @ 65 watts idle vs. D-1541 @ ~30 watts idle.
One is a lot cheaper! (A complete in-case SM system is ~$1,100, just add RAM, vs. building an E5 for maybe less than $500 if you choose wisely.)

Still, it's heat, noise, and size, plus the complexity of the build (I don't consider the build at all difficult, but some may be worried about bent pins on CPU sockets, etc.).
This. The OP mentions he's in a small flat and expects to run the system 12 hours a day. Unless you're running a system 24/7, the cheaper second-hand system with higher consumption will typically be the more economical option, in my experience. However, the noise, size, and heat that come with increased power consumption are probably more important than the financial savings. You really don't want to have to sleep each night with a rack server blazing across the hall.

Secondly, the biggest contribution to power savings for me comes from simply suspending the system when I do not need it. So you might keep an eye out for ACPI S3 (suspend-to-RAM) support, as I believe it is not always standard on servers.
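
A quick way to check from Linux is to read the sleep states the kernel exposes; a small sketch (Linux-only, and note that on newer kernels /sys/power/mem_sleep tells you whether "mem" really means S3 rather than s2idle):

Code:
# List the ACPI sleep states the Linux kernel/platform exposes.
# "mem" in /sys/power/state generally indicates suspend-to-RAM support.
def sleep_states(path="/sys/power/state"):
    try:
        with open(path) as f:
            return f.read().split()
    except FileNotFoundError:  # not Linux, or sysfs not mounted
        return []

states = sleep_states()
print("suspend-to-RAM available" if "mem" in states
      else f"no suspend-to-RAM; exposed states: {states or 'none'}")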
 

0dd

New Member
Oct 25, 2014
kroem said:
I just want to chime in on the gfx card - Nvidia consumer cards and ESXi might not play nicely.
I can confirm that Nvidia cards and passthrough do not like each other. A few of the virtualization systems (KVM and Xen) now have patches to help with this, but expect to spend some time getting them to run. The error commonly seen with virtualization and Nvidia is Code 43.

Hope this can save you some time.

0dd
 

Ramos

Member
Mar 2, 2016
kroem said:
I just want to chime in on the gfx card - Nvidia consumer cards and ESXi might not play nicely.
0dd said:
I can confirm that Nvidia cards and passthrough do not like each other. A few of the virtualization systems (KVM and Xen) now have patches to help with this, but expect to spend some time getting them to run. The error commonly seen with virtualization and Nvidia is Code 43. Hope this can save you some time.
Thanks for the heads-up. The GFX card is not for passthrough on anything but the Win10 entertainment VM, and it's mostly just to get host-admin screen real estate, with 10-16 windows (VMs, admin windows, and misc dashboards) open at a time.
 

Ramos

Member
Mar 2, 2016
The current plan is to buy a D-1567, unless the price is too high, in which case I'll default to a D-1541 and simply get one more if I run low on power. I plan on getting it in May.

The extra (4-6) play nodes will be ODROID XU4s, but I will probably wait and see if an XU5 is coming soon. Hopefully PXE will be in that one, as it is in the brand-new C2, if I read correctly. The XU4/5s will be data nodes only, unless the facts on minimum requirements change, in which case I will run the master nodes as VMs on the D-15xx.

I hope the CentOS 7 image for the C2/XU4 can run in 64-bit mode on them. I was a bit confused when the specs said 64-bit CPU but 32-bit RAM.

Those 4+4 big.LITTLE CPUs (4 big cores and 4 small ones, run as an on-chip power-management cluster) are awesome, though!
 

Ramos

Member
Mar 2, 2016
I thought it was the RPi 3 that could PXE boot?
I think they do, but they are way weaker than the C2's, and those have 55-60% of the CPU power of the XU4, which can now run all 8 cores at once rather than "these 4 or those 4" as in previous generations. Also, the C2/XU4's have 2 GB of DDR3 RAM, where the RPi 3 has 1 GB of DDR2.

The XU4's are probably the cheapest (and weakest) thing I can use (purchase- and power-wise) for play nodes in a physical environment for my needs. Ideally I would prefer 4-8 GB of RAM, which I hope the XU5 will provide (I'm hoping for 4 GB).

I looked at used gear and at J1900s for this too, but those seemed too weak.