v2 server as workstation/desktop


fp64

Member
Jun 29, 2019
71
21
8
Hello,

I am looking to post-process something like 30-40 TB of big binary files (about 40-60 GB each), with the possibility of graphical display (ParaView), i.e. a GPU must be used for the display. Speed of post-processing is not crucial, but lots of memory channels (8) and 256 GB of RAM seem necessary. A 2P Xeon v2 server seems about right in terms of cost, but the graphical part is doubtful. Is there a v2 server that can be coaxed into running desktop Linux?
 

i386

Well-Known Member
Mar 18, 2016
4,220
1,540
113
34
Germany
Servers with IPMI have an integrated GPU. It's nowhere near the performance of an Nvidia/AMD GPU, but it should be enough for simple GUI environments.
 

fp64

Member
Jun 29, 2019
71
21
8
The idea is to be able to switch off the meager onboard graphics and have an add-on GPU take over. I seem to remember coming across a discussion somewhere saying this is doable on an R720xd. Even if true, those servers are hard to find and not that cheap, and therefore not an option, so I am looking for alternatives.
 

kapone

Well-Known Member
May 23, 2015
1,095
642
113
Maybe I'm not getting it...

If the server/motherboard does in fact support an add-on GPU, the BIOS should be easily configurable to set the Graphics Priority (or something similar) to "Offboard" (i.e. the add-on GPU) rather than "Onboard".

What am I missing??
 

fp64

Member
Jun 29, 2019
71
21
8
GPUs on servers are normally intended as coprocessors, not as display terminal drivers.
 

MBastian

Active Member
Jul 17, 2016
205
59
28
Düsseldorf, Germany
There might be branded servers out there that do not allow non-branded components, but as long as the card fits and has enough power/airflow it should work, regardless of whether the BIOS has an on/offboard boot priority setting. Actually, I've left my IPMI GPU active on my Supermicro X9 board so I don't lose the advanced remote management options. It is also connected via a VGA-to-HDMI adapter to my main display, showing kernel messages and systemd logs. Just deactivate that source in the advanced display management so your display manager does not use it.
 

Markess

Well-Known Member
May 19, 2018
1,146
761
113
Northern California
GPUs on servers are normally intended as coprocessors, not as display terminal drivers.
If you use a desktop/workstation GPU (and not a Tesla, for example) it shouldn't be a problem. I've got three dual-CPU E5 v2 systems: one is a Dell R620, one has a Supermicro X9 board, and the third a Foxconn board. All three work with a desktop and/or workstation GPU just fine, and all three have run desktop versions of Ubuntu at one time or another.

Are you intending to connect a monitor directly? That's what I was doing, and it wasn't difficult. If you're planning on using a desktop GPU to process images and remoting in over a network to access the GPU's output, that's a bit more complex, I'd assume.
 

fp64

Member
Jun 29, 2019
71
21
8
The monitor has to be connected to the GPU. The aim is to have the GPU driver installed; otherwise, shading and rotating 3D objects made up of a few million points is impractical.

As a test, I installed a 12-year-old Nvidia card in the third PCI slot of an ML350 G9 and, through the BIOS, switched to GPU display only (without the card's driver). It worked, but the cooling fans went wild. That's no bother, since that machine is my compute engine; I would like to offload the post-processing to another (cheaper) machine. Hence the question.
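One way to verify whether the accelerated driver is actually in use, rather than Mesa's software rasterizer, is to check the renderer string that `glxinfo` reports. A minimal sketch, assuming the usual `OpenGL renderer string:` output format; the helper name is mine:

```python
# Detect Mesa software rendering (llvmpipe/softpipe/swrast) in the output
# of `glxinfo`; any of these means no hardware GPU driver is active.
SOFTWARE_RENDERERS = ("llvmpipe", "softpipe", "swrast")

def is_software_rendering(glxinfo_output):
    for line in glxinfo_output.splitlines():
        if "OpenGL renderer string" in line:
            return any(s in line.lower() for s in SOFTWARE_RENDERERS)
    return False  # no renderer line found: treat as inconclusive
```

In practice `glxinfo | grep "OpenGL renderer"` by hand tells you the same thing; the helper is only useful if you want to script the check.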
 

Markess

Well-Known Member
May 19, 2018
1,146
761
113
Northern California
The monitor has to be connected to the GPU. The aim is to have the GPU driver installed; otherwise, shading and rotating 3D objects made up of a few million points is impractical.

As a test, I installed a 12-year-old Nvidia card in the third PCI slot of an ML350 G9 and, through the BIOS, switched to GPU display only (without the card's driver). It worked, but the cooling fans went wild. That's no bother, since that machine is my compute engine; I would like to offload the post-processing to another (cheaper) machine. Hence the question.
As others have said, if you're connecting a monitor it shouldn't be much of an issue. You mentioned the Dell R720 a few posts earlier as an example. I can tell you that for the Rx20 series, if you install a non-whitelisted card (any non-whitelisted card, not just GPUs), the system will run the fans at 100%. It will still work with the GPU, but your fans will be on high, as the system doesn't know what to do cooling-wise with an unknown card at the back of the chassis. As you can imagine, only older GPUs are on the whitelist for these older systems. For the R620 it was the Quadro 600 & 2000, plus the FirePro V5800. The R720 has a wider range to choose from, as there's a 6/8-pin PCIe power cable available, but they're still old cards. If you have an Enterprise iDRAC license, you can tinker with the fan curves, but that probably makes it less likely to be the cheaper option.

So, if noise is a factor, you may wish to go with something a little less proprietary (like Supermicro) that won't have whitelist restrictions.
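For reference, the iDRAC fan override on these Dells is usually done with raw IPMI commands rather than the fan-curve GUI. The opcodes below are community-reported, not officially documented by Dell, so treat them as an assumption and test carefully on your own hardware; the helper just assembles the `ipmitool` argument lists:

```python
# Assemble the community-reported (unofficial) ipmitool raw commands that
# switch a Dell iDRAC (12th-gen era) to manual fan control and set a
# fixed duty cycle. Verify the opcodes for your generation before use.
def dell_fan_commands(percent):
    if not 0 <= percent <= 100:
        raise ValueError("fan duty cycle must be 0-100")
    manual = ["ipmitool", "raw", "0x30", "0x30", "0x01", "0x00"]  # disable automatic control
    speed = ["ipmitool", "raw", "0x30", "0x30", "0x02", "0xff",
             "0x{:02x}".format(percent)]                          # all fans to <percent>%
    return [manual, speed]
```

Running `dell_fan_commands(20)` yields the two command lines for a fixed 20% duty cycle; sending `0x30 0x30 0x01 0x01` instead hands control back to the automatic curve.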
 

MBastian

Active Member
Jul 17, 2016
205
59
28
Düsseldorf, Germany
So, if noise is a factor, you may wish to go with something a little less proprietary (like Supermicro) that won't have whitelist restrictions.
I second that. But: as far as I know, there is no Supermicro X9 board that can drive the memory channels at 1866 MT/s with more than eight DIMMs. At least some HP systems can. IMHO the theoretical 16% memory performance hit is not worth shelling out significantly more cash to avoid. YMMV.
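The ~16% figure falls straight out of the transfer-rate ratio; a quick sketch of the theoretical peak numbers (decimal GB/s, 4 DDR3 channels per E5 v2 socket):

```python
# Theoretical peak DDR3 bandwidth: channels x MT/s x 8 bytes per transfer.
def peak_bw_gbs(channels, mts):
    return channels * mts * 8 / 1000.0  # GB/s, decimal

per_socket_1600 = peak_bw_gbs(4, 1600)           # 51.2 GB/s
per_socket_1866 = peak_bw_gbs(4, 1866)           # ~59.7 GB/s
uplift = per_socket_1866 / per_socket_1600 - 1   # ~16.6%
```

Real-world throughput will be lower on both sides, but the ratio between the two speed grades is what the 16% refers to.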
 

fp64

Member
Jun 29, 2019
71
21
8
Thank you very much, this is helpful. But I lack the experience to choose components and build servers.

The Dell example concerned the R720XD, of which I am not certain.
 

MBastian

Active Member
Jul 17, 2016
205
59
28
Düsseldorf, Germany
The monitor has to be connected to the GPU. The aim is to have the GPU driver installed; otherwise, shading and rotating 3D objects made up of a few million points is impractical.
I am not sure what you mean by that. Typically the drivers won't initialize a consumer card properly if there is no monitor connected, but there are ways around that. If you do not actually need to look at a physical display, you can either fake a monitor in software or just buy an HDMI dummy plug. There are plenty of ways to stream the desktop to a remote computer.
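For the "fake a monitor in software" route with the proprietary NVIDIA driver, the usual approach is to tell the driver in xorg.conf that a panel is connected and hand it a saved EDID. A sketch only; the option names are from NVIDIA's driver README, and the EDID path is a placeholder you'd fill with a dump captured from a real monitor:

```
Section "Device"
    Identifier "nvidia_gpu"
    Driver     "nvidia"
    # Pretend a flat panel is attached to the first digital output and
    # supply a captured EDID so the driver can validate display modes.
    Option "ConnectedMonitor" "DFP-0"
    Option "CustomEDID"       "DFP-0:/etc/X11/edid.bin"
EndSection
```

An HDMI dummy plug achieves the same thing with no configuration at all, which is why it's often the easier option.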
 
Last edited:

fp64

Member
Jun 29, 2019
71
21
8
The key is to be able to use the display with the GPU driver installed; otherwise there is no point in using a GPU.
 

Markess

Well-Known Member
May 19, 2018
1,146
761
113
Northern California
@fp64 just to clarify: Your goal is to have a Workstation/Desktop that can take 256GB of RAM and have the CPU computing power to use it for memory intensive applications? I think this is what you want?

You mentioned the E5 v2 server not because your goal is to have an E5 v2 server with a GPU, but because you thought it was a possible economical solution to your goal of having a Workstation/Desktop with 256GB of RAM and the CPU power to use it?
 

Markess

Well-Known Member
May 19, 2018
1,146
761
113
Northern California
Okay, so I think my thoughts above about noise come into effect: Even though many 1U & even 2U height servers will work (because many have a riser to mount a GPU horizontally), the small(er) fans will make them noisy as a "desktop", which is going to be positioned close to your ears.

I think you're right that an E5 v2 system will give you some economy. With a couple of outlying exceptions, it's the last of the Intel line to use DDR3, and DDR4 for newer systems is still quite expensive.

A couple options:

1. Get a taller 3U/4U system. These will have larger, slower fans, so they should be quieter.
2. Get a "tower" server. These should be quieter as well, as they also have bigger, slower fans. Have you looked into dual-CPU workstations from that generation? There were several brands/models as I recall (I didn't own any). For example, the HP "Z" series, although I think only the Z820 has the RAM capacity you'd want. I'm sure there are others though. HP Z820 Workstation Product Specifications | HP® Customer Support
 

Markess

Well-Known Member
May 19, 2018
1,146
761
113
Northern California
Not necessarily the approach you'd want to take, but I think this is the kind of thing you're wanting to achieve?

A few months back, my son and I did a "Desktop" experiment using parts left over from various upgrades:
  • This Server motherboard ASUS Motherboard Z9PR-D12 Intel LGA 2011 Socket with 1U Heatsink | eBay
  • A pair of E5-2640v2 (not the fastest, but what we had on hand)
  • A pair of Intel BXTS13A heatsink/fans instead of passive ones (for less need for overall high airflow and noise)
  • 196GB of RAM (all we had handy)
  • An Nvidia GTX 970
  • An old desktop case big enough to hold the EEB format motherboard
  • A desktop power supply (with an adapter to convert two Molex to a second 8 Pin CPU Power (EPS12V) plug)
  • A few 120mm fans.
This ran fine on Ubuntu 20.04. Quiet and stable. What we were trying to see was if lots of cores/threads and RAM would make for a better gaming experience than a newer CPU & a small amount of DDR4.

It didn't. But I suppose that's more because of the limitations of the games in utilizing the extra CPU and RAM.

In any case, it drew a lot of power at idle compared to the single processor E5 v3 desktop we were comparing it to, but it was a fun experiment.
 

fp64

Member
Jun 29, 2019
71
21
8
This is not an always-on system; it is just for doing post-processing calculations and displaying the results. I would have built a desktop Ryzen system, but the size and cost of memory make one look for alternatives. My 24/7 system is an HP ML350 G9 with two 2690 v4, 256 GB @ 2400T, and (hopefully) 3 Radeon VII.
 
Last edited: