LGA 2011-3 Workstation with Xeon E5 1600v4, ECC RAM, and two GeForce GTX 1080s


zir_blazer

A friend is currently in the USA and wants to build a new Workstation. Since the budget allows for it, I decided to recommend that he go for a true Workstation with a Xeon and ECC RAM. Additionally, he wants two GeForce GTX 1080s, which will be used for GPGPU purposes. The system will also be used as a gaming machine, but that is not relevant.

This is a small list of the parts which I have pretty much nailed down.
The full list (With outdated prices) is HERE: Part List - Pastebin.com



PROCESSOR - LGA 2011-3
Intel Xeon E5-1620v4 - Broadwell-E / 4C-8T / 3.5-3.8 GHz / 40 PCIe Lanes - 306 U$D https://www.amazon.com/dp/B01GUAJQ08
Intel Xeon E5-1650v4 - Broadwell-E / 6C-12T / 3.6-4.0 GHz / 40 PCIe Lanes
Intel Core i7 6800K - Broadwell-E / 6C-12T / 3.4-3.6 GHz / 28 PCIe Lanes
Intel Core i7 6850K - Broadwell-E / 6C-12T / 3.6-3.8 GHz / 40 PCIe Lanes

The Xeon E5-1620v4 wins hands down on price and features. It is actually an excellent alternative to high end LGA 1151 (Core i7 6700K), and I would say anyone building such a machine should consider going entry level LGA 2011-3 with that Xeon.

The Ci7 6800K is around 100 U$D more expensive, has 6 Cores and can be overclocked (not important since he said he doesn't like to do that), but sacrifices ECC RAM support and PCIe Lanes.
The 6 Core / 40 Lane Processors are around 600 U$D: too expensive.



HEATSINK - LGA 2011-3
Cooler Master Hyper 212 EVO - RR-212E-20PK-R2

Will be purchased locally since there is little difference between Amazon and some local vendors.



MOTHERBOARD - LGA 2011-3
AsRock X99 Taichi - ATX / X99 / 12 VRM / 2 BIOS / Realtek ALC1150 / Intel i218-V + Intel i211-AT / 8x DDR4 DIMM / 3x PCIe 16x / 2x PCIe 1x / 2x M.2 Key M - 220 U$D https://www.amazon.com/dp/B01ITOLDQI
Gigabyte GA-X99P-SLI - ATX / X99 / 2 BIOS / Realtek ALC1150 / Intel NIC? / 8x DDR4 DIMM / 4x PCIe 16x / 2x PCIe 1x / 1x M.2 Key M / Thunderbolt 3
Supermicro X10SRA - ATX / C612 / 8 VRM / Realtek ALC1150 / Intel i210-AT / 8x DDR4 DIMM / 4x PCIe 16x / 2x PCIe 1x
Supermicro X10SRA-F - ATX / C612 / 8 VRM / Realtek ALC1150 / Intel i210-AT / 8x DDR4 DIMM / 4x PCIe 16x / 2x PCIe 1x / IPMI / VGA

The initial idea was using a Supermicro X10SRA or X10SRA-F, but after the launch of the "X99 Refresh" Motherboards for Broadwell-E, I find these to be too dull and expensive. I still love Supermicro Motherboards, and the fact that their Manuals always have a diagram with the full I/O topology so you know exactly what connects where, something that all the other manufacturers lack.

After searching a while, I found the AsRock X99 Taichi, which seems to be simple, solid and cheap, has great reviews, and supersedes the Supermicros as my first choice. It has 3 PCIe 16x slots (16x/8x/8x) and two M.2 Key M PCIe 4x slots coming from the Processor, filling the 40 PCIe Lanes. The topology seems to be quite simple and devoid of lane switches, which is how I like Motherboards.
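
Just to double-check the lane math, a quick sketch (the slot widths are taken from the spec sheet above, not verified on hardware):

Code:
# Sanity check of the CPU PCIe lane budget on the X99 Taichi, assuming the
# 16x/8x/8x slot layout plus two CPU-attached M.2 x4 ports from the spec sheet.
CPU_LANES = 40  # a Xeon E5-1600 v4 provides 40 PCIe 3.0 lanes

consumers = {
    "PCIe slot 1 (x16 electrical)": 16,
    "PCIe slot 2 (x8 electrical)": 8,
    "PCIe slot 3 (x8 electrical)": 8,
    "M.2 Key M slot 1 (x4)": 4,
    "M.2 Key M slot 2 (x4)": 4,
}

used = sum(consumers.values())
print(f"Lanes used: {used}/{CPU_LANES}, spare: {CPU_LANES - used}")
# Prints: Lanes used: 40/40, spare: 0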

The disadvantage of Haswell-E Motherboards is that they may NOT work out of the box with a Broadwell-E Processor without flashing the Firmware, since they could be leftover parts with older Firmware. Since chances are that he travels back here without testing the parts (he will not be able to assemble a full system), we risk having to procure a 200-300 U$D Haswell-E Processor here JUST for flashing (unless there are alternatives like removing the Firmware chip and flashing it with a Flash programmer, but I would have to purchase that too). The AsRock X99 Taichi, being an X99 Refresh board, works with Broadwell-E out of the box, which gives me a lot of peace of mind.

The advantage of the Supermicro X10SRA-F with the BMC is that it doubles as a basic integrated VGA, which may be useful if he ever decides to set up his system to do VGA Passthrough as I do with mine. Otherwise, he would have to set it up headless. IPMI itself doesn't serve any purpose for him.

I considered the Gigabyte GA-X99P-SLI because having an Alpine Ridge controller for Thunderbolt 3 makes it special, and it is just slightly more expensive than the AsRock X99 Taichi. It also has four PCIe 16x slots, but the specifications don't really make the topology clear (it can't be 8x/16x/16x/8x; more likely 8x/16x/8x/8x?). The M.2 Key M slot should also be PCIe 2.0 4x, since all the Processor PCIe Lanes should go to the PCIe 16x slots.
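
For what it's worth, once a board is actually running, the real electrical widths can be read back from the negotiated PCIe link status instead of guessed from the spec sheet. A minimal sketch, assuming Linux with pciutils installed (lspci usually needs root to show the link capabilities):

Code:
# Sketch: report the negotiated PCIe link speed/width of each VGA device,
# so the real slot topology can be verified instead of guessed.
# Assumes Linux with lspci (pciutils); run as root to see the LnkSta lines.
import re, subprocess

out = subprocess.check_output(["lspci", "-vv"], text=True)
device = None
for line in out.splitlines():
    if line and not line.startswith(("\t", " ")):
        device = line  # a new PCI device header line (starts at column 0)
    m = re.search(r"LnkSta:\s+Speed\s+([\d.]+GT/s).*Width\s+(x\d+)", line)
    if m and device and "VGA" in device:
        name = device.split(":", 2)[-1].strip()
        print(f"{name}: {m.group(1)}, {m.group(2)}")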

Finally, there is the issue of the DDR4 RAM type. I read elsewhere that AsRock support claims that some of their X99 Motherboards work with ECC RAM, but only if it is x8 wide (1Rx8 or 2Rx8), as they fail to POST with x4. The Supermicro X10SRA/X10SRA-F also seems to be picky, since the Manual says that it supports non-ECC UDIMMs ONLY with a Core i7 plugged in; with Xeons it requires RDIMMs. I'm not sure if that just means "not supported or validated to work" or that it will not POST at all.
Moreover, I recently made a Thread because there doesn't seem to be hard info on whether ECC works as intended on X99 Motherboards (or other consumer Chipsets on LGA 1150/1151), or whether Intel restricts it to C Series Chipsets only.
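
Since the open question is whether ECC is actually active rather than merely tolerated, one way to check on an assembled Linux system is to read the kernel's EDAC counters. A minimal sketch, assuming the EDAC driver for the memory controller is loaded (sb_edac on Broadwell-E) and the standard sysfs layout:

Code:
# Minimal sketch: list EDAC memory controllers and their error counters.
# If no controller shows up, ECC reporting is likely inactive (or the
# driver is missing), which is exactly the X99 doubt described above.
from pathlib import Path

EDAC_ROOT = Path("/sys/devices/system/edac/mc")

controllers = sorted(EDAC_ROOT.glob("mc[0-9]*")) if EDAC_ROOT.exists() else []
if not controllers:
    print("No EDAC memory controllers found: ECC likely not active")
for mc in controllers:
    name = (mc / "mc_name").read_text().strip()
    ce = (mc / "ce_count").read_text().strip()  # corrected (single-bit) errors
    ue = (mc / "ue_count").read_text().strip()  # uncorrected errors
    print(f"{mc.name}: {name} - corrected: {ce}, uncorrected: {ue}")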



RAM MEMORY
As mentioned previously, this depends on both the Processor (Core i7 or Xeon) and how picky the Motherboard is with x4, x8, and so on. So far, assuming I'll go for the Xeon and that ECC works on X99 (and thus the extra price is justified), I was going to use either DDR4 UDIMM ECC or RDIMM ECC.
Everything I found with ECC (both UDIMM and RDIMM) seems to be either 2133 or 2400 MHz; there is nothing higher than that.
Since with 8 slots you can go up to 128 GiB using 8 * 16 GiB, and these are available as ECC UDIMMs (which could also be used for an LGA 1151 Xeon E3 v5), I don't know if going for the slightly slower and less compatible RDIMM is worth it. I doubt that he will ever upgrade the RAM. 64 GiB already seems overkill, so depending on price, 4 * 8 GiB could be enough, or 4 * 16 GiB could fit just for the sake of spending the budget.
I also found that RAM prices have been on the rise in the last few months: a 16 GiB ECC UDIMM module could be found for 80 U$D some months ago, but they're 100 U$D or more currently. That's sad.
Overall, I'm rather clueless about what RAM to buy and would take suggestions, since I can't decide between UDIMM and RDIMM. I prefer first party modules (from vendors that produce the DRAM and also assemble modules with it, like Samsung, Hynix and Micron), but they are hard to find and highly expensive...

DDR4 UDIMM with ECC
Kingston ValueRAM KVR24E17D8/16 - 16 GiB x 1 - 2400 MHz / 17-17-17 / 1.2V / 2Rx8 / 2Gx72 / (1Gx8)x18 - 110 U$D https://www.amazon.com/dp/B01FM3GBC0/
Crucial CT16G4WFD824A - 16 GiB x 1 - 2400 MHz / 17 / 1.2V / 2Rx8 / 2Gx72 - 100 U$D (PROVANTAGE)
Micron MTA18ASF2G72AZ-2G3 - 16 GiB x 1 - 2400 MHz / 17-17-17 / 1.2V / 2Rx8 / 2Gx72 / (1Gx8)x18 - 102 U$D (SuperBiiz)



VIDEO CARDS
The plan is to use two GeForce GTX 1080s. The question is... which one?
I'm assuming that all GeForces are designed with gaming in mind, something that you do at most 12 hours daily, and in conditions that may not actually max out the GPU at all. GPGPU instead means 24/7 computing that will actually keep them at Full Load, as if it were a continuous stress test. This means that I prefer beefed up models with strong VRMs and cooling. The Founders Edition seems to be the worst of the lot in that regard.
There is also the issue that most 1080s were suffering from coil whine, but I can't fix that...
I used this as reference for PCBs: Comparison of custom GeForce GTX 1080 PCBs | VideoCardz.com

ZOTAC GeForce GTX 1080 AMP Edition - 1822 MHz - 8+3? - 570 U$D https://www.amazon.com/dp/B01GCAVRSU
MSI GEFORCE GTX 1080 ARMOR 8G OC - 1797 MHz - 10? - 599 U$D (Plus 20 U$D rebate) https://www.amazon.com/dp/B01GXOWUDQ
EVGA GeForce GTX 1080 CLASSIFIED GAMING ACX 3.0 08G-P4-6386-KR - 1860 MHz - 14+3 - 700 U$D (Micro Center)

I will urge him to buy the Zotac NOW, since it was 620 U$D yesterday and is currently by far the most beefed up model for less than 600 U$D. I doubt that can be beaten. The EVGA Classified is supposed to be the best, but also the most expensive one.
The Zotac suffers from overheating, but that may be fixable: Zotac 1080 amp overheating fix • /r/nvidia



The other component that will be purchased in the USA that I'm missing is the SSD; I will edit in what I find later. I don't know if the Intel 750 PCIe is still the top dog. I could also use a Samsung 960 PRO M.2, maybe with an M.2-to-PCIe adapter so I can put a beefy heatsink on it.
The Power Supply will be procured locally, since you can't freely import those. It will probably be a high end Seasonic model (Platinum 1000W or such). The Case will be a Tower, probably a rather sad one. I'm starting to like the idea of going for a 4U Rack as a Tower replacement...


Well, that's all. I hope that other indecisive people can use all my research for reference, but for the moment I would prefer if other people solved my indecisions first :D
 

MBastian

Imho, and without knowing more about the intended main purpose (Software), this setup looks like an oversized gaming rig to me.
Building a new system from scratch can be full of pitfalls, and I wouldn't recommend it to someone who wants to use it for anything resembling serious work.
Why not go for a used Dell T3600/T3610 or T5600/T5610 with one fast E5 v2 quad or hexacore? The gap between E5 v2 and v4 CPUs is not as large as Intel would like us to believe, and prices for DDR3 modules are really, really low. He could even use a cheap v1 CPU for a few months, until v2 parts drop in price.
Hell, if you really want something jaw dropping, why not go for a T7600/T7610 with two v2 CPUs and four GTX 1070 cards? The chassis is somewhat expensive compared to the smaller parts, but well worth it if you go for four cards or maybe multiple NVMe cards.

DELL PRECISION T7610 Barebone Build your own best Workstation | eBay

Edit: It seems prices for older v3 capable Dell and HP workstations are dropping too.
 

zir_blazer

Sadly, this build will be postponed for at least one month. Beginning in January 2017, a 35% import tax on computer parts imported by businesses will be removed (not sure how this affects the current 50% customs tax that we pay, unless you import via a middleman...), so my friend thinks that purchasing the parts right now in the USA would be more expensive than waiting a few weeks.
Since by January my friend will be back here, he will only be able to buy from vendors that ship internationally, which Amazon mostly does. I'm not sure how much the shipping costs may add up to.


Always RDIMM if you've got a chance.
Gotcha.

I'm still curious about why people may prefer RDIMM ECC over UDIMM ECC or vice versa, assuming the same density and price per module. RDIMM is not compatible with consumer platforms, so it will be harder to reuse the modules when the system eventually gets disassembled (well, that could be in... 5 years?), it is slightly slower, and I think that the buffer chip also slightly increases power consumption. The only advantage on paper is that it reduces the load on the Memory Controller, which may increase reliability. I doubt that he will ever add more RAM than whatever he builds it with, so being able to fit more modules is not a usable advantage.
Are RDIMMs considered higher quality than UDIMM ECC, being manufactured specifically for more expensive Servers? Maybe fewer chances of DOA parts? Is there any other tangible difference? It's not like there would be much of a difference anyway if things "just work"...


Have you tested your GPGPU app? There are some on NVIDIA GPUs that can say 100% when querying NVIDIA-SMI but they're only using about 60-70% of TDP.
Imho, and without knowing more about the intended main purpose (Software), this setup looks like an oversized gaming rig to me.
Probably the most interesting questions, for which I don't have an answer...
My friend gets extremely evasive every time I ask him about his "work app", so even I don't know what he is going to use the GPUs for. The only logical answer is that he is under NDA, so everything related to it is guesswork.
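
At least the utilization-versus-TDP point is easy to measure without knowing anything about the app. A small sketch polling nvidia-smi (these query fields are standard; assumes the NVIDIA driver and nvidia-smi are installed):

Code:
# Sketch: compare reported GPU utilization against actual power draw, since
# a GPU can report 100% utilization while only pulling 60-70% of its TDP.
import subprocess, time

QUERY = ["nvidia-smi",
         "--query-gpu=index,utilization.gpu,power.draw,power.limit",
         "--format=csv,noheader,nounits"]

for _ in range(5):  # five samples, one per second
    out = subprocess.check_output(QUERY, text=True)
    for row in out.strip().splitlines():
        idx, util, draw, limit = [f.strip() for f in row.split(",")]
        pct = 100 * float(draw) / float(limit)
        print(f"GPU {idx}: util {util}%, power {draw}W ({pct:.0f}% of limit)")
    time.sleep(1)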

As far as I know, he currently has 2 GF980s and was intending to upgrade to a new system with 2 GF1080s. Without knowing his GPGPU Software, I can't look for benchmarks on how it scales with multiple GPUs, so I don't know if 4 GF1070s or a single Titan Pascal would be better for the same money. Since I suppose he already did his research when he decided on 2 GF1080s, my idea was merely to choose the GF1080 model that looked the most solid for heavy compute usage.
His 2 GF980s are going to other family systems, so I don't think he needs a Motherboard with 4 PCIe 16x slots nor a Case big enough for four Video Cards in the same system.


The oversized gaming rig is a half truth: since it will be his main rig, when not doing compute it will be a standard gaming system. It is still a step into the Workstation segment (Xeon E5 1600 / RDIMM or UDIMM ECC), and it is less of a typical gaming rig than if he used an overclocked Ci7 6700K as other people would. But yes, as he will not use any of the more interesting Server features, it is a rather simple build that looks very consumerish.


Why not go for a used Dell T3600/T3610 or T5600/T5610 with one fast E5 v2 quad or hexacore? The gap between E5 v2 and v4 CPUs is not as large as Intel would like us to believe, and prices for DDR3 modules are really, really low. He could even use a cheap v1 CPU for a few months, until v2 parts drop in price.
Hell, if you really want something jaw dropping, why not go for a T7600/T7610 with two v2 CPUs and four GTX 1070 cards? The chassis is somewhat expensive compared to the smaller parts, but well worth it if you go for four cards or maybe multiple NVMe cards.
I'm aware that the Server folks LOVE hunting for previous gen parts, because used or refurbished they can be found dirt cheap, but my friend and I are the type of guys who simply prefer to buy new, latest gen parts.
Also remember that since we are not in the USA, we have the international shipping cost issue, plus the fact that the seller must be willing to ship to South America. Purchasing new parts directly from Amazon (not third party sellers, IIRC) is actually simpler than buying used most of the time.

I still recall the craze for the overclockable Dual LGA 1366 Nehalem platform with the Xeon X or L Series around 3 years ago, but I never bit the bullet there. I'm also aware that Sandy Bridge-E parts are in that same spot at the moment, but I don't find them interesting, since he can't take advantage of the extra CPU power of a Dual Processor system, yet gaming would be hurt a lot by the low Frequency of the Xeon E5 2600 parts.
I think it may be possible to build a Haswell-E system with used parts rather close to my proposed Broadwell-E but much cheaper, but I would absolutely not like to step down to SB-E/IB-E.

The Tower or rackmount would have to be purchased locally, since a tower like the one on those Dell systems would be ridiculously expensive to ship. Power Supplies (or anything else that connects to the 220V mains) need a special permit to import. Besides, I usually avoid OEM systems like the plague due to proprietary parts (Motherboard...), which is why I became a fan of Supermicro retail parts. If I were going for a Dual Processor system, chances are that I would go for something like a Supermicro X10DAX, but again, there is no use in going Dual Processor for his system.



I will admit that the Intel P3600 SSD, even refurbished, made me drool, since it is also current gen, but at 700 U$D it is probably too expensive, and I don't think he needs a 1.2 TiB SSD.
The good thing is that that SSD inspired me to google, and I found that a new Intel P3500 400 GB PCIe is a realistic contender, as it is slightly better than the Intel 750 400 GB PCIe and currently CHEAPER than it. It costs 280 U$D from an Amazon third party seller: Intel SSDPEDMX400G401 P3500 Series 400GB
There is also an Intel P3520 1.2 TB PCIe for 660 U$D: Intel SSDPEDMX012T701 SSD DC P3520 Series 1.2TB
The more mainstream choices would be a Samsung 960 EVO 500 GB M.2 for 250 U$D: Samsung 960 EVO Series - 500GB PCIe NVMe - M.2 (MZ-V6E500BW)
Or a Samsung 960 PRO 512 GB M.2 for around 330 U$D, but Amazon currently doesn't have it.
 

zir_blazer

*BUMP*

Two weeks have passed, so we're getting closer to 2017. Since the reduced import taxes seem to significantly affect lots of things across the board, the budget for this system increased. Now he wants to upscale to FOUR GF1080s instead of two. This brings several more challenges...


1 - We've been talking about de-coupling his GPGPU work onto a dedicated Server system instead of a hybrid all-in-one, so that he keeps using his current system with 2 GF980s for daily usage and gaming. De-coupling would be useful only if the work system needs just the Video Cards, a big Power Supply and cooling, and can then go cheap on all the other components. Since, as mentioned previously, I don't know what Software he will be using, I can't research myself what other variables may affect performance.

To begin with, I don't know if the application requires SLI support to use multiple GPUs (GF1080s were capped to 2-way SLI, if I recall correctly), or if it can use them individually in multiple parallel instances. Since back when BitCoin mining was all the rage applications could work with each GPU individually, I suppose that SLI support is not relevant, but I'm not 100% sure.
I know that CPU and RAM don't matter, but I don't know if PCIe bandwidth does. Assuming it doesn't, I could pick a budget Pentium with an H110 Motherboard and 2 * 4 GiB UDIMM ECC, buy a few PCIe 1x-to-16x Powered Riser adapters, and call it a day. Now instead of an oversized gaming machine, it looks like your typical oldschool cryptocurrency mining rig...
If PCIe bandwidth DOES matter, then the previous system with a Xeon E5 and its 40 PCIe Lanes is still the best option.
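
If the app does turn out to run one independent instance per GPU, mining-style, the usual trick is to pin each process to a single card via CUDA_VISIBLE_DEVICES. A sketch (the worker command is a placeholder, since the actual application is unknown):

Code:
# Sketch: launch one independent worker per GPU, mining-rig style, by
# restricting each process to a single card with CUDA_VISIBLE_DEVICES.
# "worker.py" is a hypothetical stand-in for the unknown GPGPU app.
import os, subprocess

NUM_GPUS = 4
WORKER_CMD = ["python", "worker.py"]

procs = []
for gpu in range(NUM_GPUS):
    env = os.environ.copy()
    env["CUDA_VISIBLE_DEVICES"] = str(gpu)  # this process only sees GPU #gpu
    procs.append(subprocess.Popen(WORKER_CMD, env=env))

for p in procs:
    p.wait()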


2 - Assuming that the system is better suited to LGA 2011-3 and that it needs the PCIe bandwidth, the AsRock X99 Taichi no longer suits this system, since it only has 3 PCIe 16x slots (16x/8x/8x). The Supermicro X10SRA/X10SRA-F with 4 PCIe slots (16x/8x/8x/8x) seems to be the next best option, at the cost of no M.2. I'm not particularly worried about not having an M.2 SSD, because I prefer the PCIe ones anyway, but the X10SRA-F PCIe arrangement is not friendly for 4 cards plus a PCIe SSD, since the remaining slots are PCIe 2.0 1x from the Chipset. It would have to be a Motherboard with 5 PCIe slots at 8x/8x/8x/8x/8x for 4 Video Cards plus a PCIe SSD...

Another issue is that the Supermicro X10SRA-F doesn't come with Broadwell-E support out of the box. Newer production runs include the latest BIOS, but there would be no guarantee about what I'm getting. Worse yet, the BIOS chip seems to be soldered instead of socketed, so it seems that I can't work around this with an EEPROM reprogrammer (which would be a nice tool for me to have anyway), but would instead need a Haswell-E Processor (at the very least a 200 U$D paperweight in the budget).
I barely looked around for proper Server Motherboards instead of Workstation ones (the X10SRA/X10SRA-F has Audio and thus can double as a Desktop computer, which is why I picked it in the first place). As a pure Server Motherboard with 5 PCIe 8x slots, the Supermicro X10SRL-F could be interesting.


3 - Four Video Cards means much more planning on how to power and cool them. A lot more...
Back when BitCoin mining was still popular, I recall that there were issues if you had more than three Video Cards in the same system: each can draw up to 75W from its PCIe slot, and that overloaded the two 12V wires of the ATX 24 Pin connector (whose safe upper limit seemed to be around 225-250W), so it was usual to see them burned like this: https://ip.bitcointalk.org/?u=http://i.imgur.com/mkTKa.jpg&t=571&c=wSreaZYG_Sasqg
So, for more cards in the same system, you had to go for Powered Risers.
I recently read that on modern Video Cards nearly all the power draw comes from the PCIe 6/8 Pin power connectors, so it is safe to run all of them plugged directly into the Motherboard without anything fancy. Just to make sure, does anyone know since which Radeon/GeForce generations things work like that? If I can go that way, it would simplify matters a lot.
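
The back-of-envelope math for the worst case looks like this (a sketch using the 75W per-slot spec limit and the ~225W figure quoted above; the real per-card slot draw varies by model):

Code:
# Rough slot-power budget: every PCIe slot may deliver up to 75W, and all of
# it goes through the ATX 24 Pin connector's two 12V wires, whose safe limit
# is quoted above as roughly 225-250W.
SLOT_LIMIT_W = 75        # PCIe spec: maximum per-slot power
ATX24_12V_SAFE_W = 225   # conservative end of the quoted range

for cards in range(1, 5):
    worst_case = cards * SLOT_LIMIT_W
    status = "OK" if worst_case <= ATX24_12V_SAFE_W else "OVER BUDGET"
    print(f"{cards} card(s): up to {worst_case} W through the 24 Pin -> {status}")
# Four cards pulling the full 75W each would mean 300W, past the safe limit,
# which is why powered risers were common on mining rigs with more than 3 cards.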


4 - Regardless of power, for cooling reasons I was looking for risers anyway, so I could reposition the cards suspended in midair and allow for more spacing between them, like these systems:
Suspending a video card with wire
They could be standard ribbon risers instead of powered ones. Yet, using risers to suspend cards means that I can't use a standard case anymore...

As a side project, my friend and I were looking around for racks to consolidate our family Tower systems, since they take up a bit of space, and it is worse when we have to lay them horizontally to swap parts and such. The thing is that suspending Video Cards is rather non standard, so I can't just go out, purchase a 4U rack case (which should be Tower size equivalent) and use the ribbon cables. Does anyone know if there are production 4U/5U racks intended for suspended cards, or do I have to go for a fully custom, modded solution?

To be honest, I don't know a lot about racks, but lately I have been reading about them, and I'm sad that I don't have more disposable income to buy things to test without spending weeks researching. I would take as much advice as possible regarding tower-to-rack conversions (particularly regarding cable extenders for monitors/USB if the rack sits too far from the desk), or from those with mining rig experience.


I expect this system to be up and running around February, assuming I get past the planning stage...
 

zir_blazer

*BUMP*

6 months later, my friend wants to resume this build. A lot of things have changed, but not much for him. He doesn't like AMD, so Ryzen and Naples are out of the question, and he doesn't seem to want to wait for Skylake-E or Coffee Lake either. This leaves only the current LGA 1151 or LGA 2011-3 platforms, which is basically the same as half a year ago.
Sadly, it seems that Ryzen wasn't even helpful in getting Intel to lower their prices, so going LGA 2011-3 leaves me with a sour taste. The current HEDT offerings look bad in price/performance against Ryzen, and although the Xeons E5 are still better in some regards, it is not enough to justify buying one at the same price as 6 months ago. Worse with Skylake-E around the corner.


PROCESSOR
LGA 2011-3 (Intel Ark)
Intel Xeon E5-1620v4 - Broadwell-E / 4C-8T / 3.5-3.8 GHz / 40 PCIe Lanes - 309 U$D Amazon
Intel Xeon E5-1650v4 - Broadwell-E / 6C-12T / 3.6-4.0 GHz / 40 PCIe Lanes - 655 U$D Amazon
Intel Xeon E5-1660v4 - Broadwell-E / 8C-16T / 3.2-3.8 GHz / 40 PCIe Lanes
Intel Core i7 6800K - Broadwell-E / 6C-12T / 3.4-3.6 GHz / 28 PCIe Lanes
Intel Core i7 6850K - Broadwell-E / 6C-12T / 3.6-3.8 GHz / 40 PCIe Lanes
Intel Core i7 6900K - Broadwell-E / 8C-16T / 3.2-3.7 GHz / 40 PCIe Lanes
LGA 1151 (Intel Ark)
Intel Xeon E3-1245v6 - Kaby Lake / 4C-8T / 3.7-4.1 GHz / 16 PCIe Lanes
Intel Xeon E3-1275v6 - Kaby Lake / 4C-8T / 3.8-4.2 GHz / 16 PCIe Lanes
Intel Core i7 7700 - Kaby Lake / 4C-8T / 3.6-4.2 GHz / 16 PCIe Lanes
Intel Core i7 7700K - Kaby Lake / 4C-8T / 4.2-4.5 GHz / 16 PCIe Lanes

The build absolutely leans toward HEDT; LGA 1151 is just there for reference. The 1650v4 is enough. He knows that if he wants more Cores and PCIe Lanes, he has to sacrifice Single Threaded performance.


MOTHERBOARD:
AsRock X99 Taichi - ATX / X99 / 12 VRM / 2 BIOS / Realtek ALC1150 / Intel i218-V + Intel i211-AT / 8x DDR4 DIMM / 3x PCIe 16x / 2x PCIe 1x / 2x M.2 Key M - 220 U$D Amazon
Gigabyte GA-X99P-SLI - ATX / X99 / 2 BIOS / Realtek ALC1150 / Intel NIC? / 8x DDR4 DIMM / 4x PCIe 16x / 2x PCIe 1x / 1x M.2 Key M / Thunderbolt 3 - 193 U$D Amazon
Supermicro X10SRA - ATX / C612 / 8 VRM / Realtek ALC1150 / Intel i210-AT / 8x DDR4 DIMM / 4x PCIe 16x / 2x PCIe 1x
Supermicro X10SRA-F - ATX / C612 / 8 VRM / Realtek ALC1150 / Intel i210-AT / 8x DDR4 DIMM / 4x PCIe 16x / 2x PCIe 1x / IPMI / VGA - 309 U$D Amazon

His current choice is the Gigabyte. Previously it was the AsRock, but some vendor told him that they had issues with the brand (not even with particular models...), so he dropped it. However, the Gigabyte has rather awful 1 star reviews. Also, AsRock says that it supports RDIMM ECC, but Gigabyte says that it works with ECC "in non-ECC mode"; I'm not sure if that applies to the AsRock as well. I'm rather convinced that, as a worst case scenario, RDIMM ECC should boot and work, but I still don't know if X99 works with ECC at all.
I personally want to push him toward Supermicro, which is the correct choice considering that he is going for Workstation gear, and the Motherboard should match. However, he doesn't want Supermicro "due to the lack of reviews". It is rather hard to convince someone to buy something with a lack of brand recognition...
I don't see the point of the X10SRA when the X10SRA-F is not much more expensive and has IPMI. While I doubt that he will ever use it, he is spending so much money that I consider it worth taking that feature.


RAM MEMORY:
DDR4 RDIMM ECC
Samsung M393A2G40EB1-CRC - 16 GiB x 1 - 2400 MHz / 17-17-17 / 1.2V / 2Rx4 / 4Gx72 / (1Gx4)x36 - 155 U$D Amazon (Third party seller)
Samsung M393A2K43BB1-CRC - 16 GiB x 1 - 2400 MHz / 17-17-17 / 1.2V / 2Rx4 / 2Gx72 / (1Gx8)x18 - 147 U$D Amazon (Third party seller)

RAM is at least 50% more expensive than it used to be. Worst of all, my friend is toying with the idea of going for 128 GiB instead of just 64. What for? No idea. When a 16 GiB stick was 100 U$D I didn't mind someone spending disposable income on more RAM; not so much at current prices.
Ironically, I managed to find RAM that I am confident in. The 155 U$D Samsung module appears in the Tested Memory List of the Supermicro X10SRA-F, and it seems that Supermicro sells a rebrand of it. It is also among the most affordable modules I could find, whereas previously Samsung carried a premium.
I'm not sure if it is worth looking for better modules. There are also 2666 MHz and LRDIMM options from Samsung, and I didn't check the other offerings (Hynix and Micron, which both produce DRAM and sell sticks). Any point in going 4 * 32 GiB LRDIMM or 8 * 16 GiB RDIMM?


SSD:
Intel SSD DC P3520 PCIe 1.2 TiB - 704 U$D Amazon but no stock

Previously I suggested an Intel 750 or, better, a P3500, which at some point was cheaper, but that is no longer the case. Since I doubt he wants to buy from eBay, used parts, etc., other previously mentioned models like the P3600, which only has good prices when someone is getting rid of extra stock, aren't useful in this case. So basically, it's about whether the P3520 is the best option as a brand new product with stock at major retailers. I don't like that it is slower than the part it replaces, but it also has twice the endurance and a much better price per GiB, making 1.2 TiB "affordable" enough.
Someone else suggested a Samsung 960 EVO, but I absolutely dislike the M.2 format due to how often those drives overheat, besides the fact that it is consumer oriented, while the Intel is full blown Server gear. The P3520 U.2 version seems interesting, but he would still need an adapter of some sort, while the PCIe card is overall easier to plug into any system with a PCIe 4x+ slot. Any reason to go for the U.2 version?


VIDEO CARDS:
eVGA GeForce GTX 1080 Ti Founders Edition 11G-P4-6390 - 1480/1582 MHz / 7+2 VRMs / 1 BIOS / 8-Pin + 6-Pin / 250W TDP - 700 U$D Amazon

This is what he had in mind; I hadn't researched the GF1080 Tis. Since it is a mere FE, I suppose there should be better cards with custom PCBs and cooling.
It seems easier to just go with two of these than with four 1080s.


CASE:
Corsair Carbide Quiet 600Q - EATX (12" x 10.6") - 160 U$D Amazon
Since he is going to buy just two Video Cards and doesn't intend to use more in this system, there is no need for anything fancy like the custom cryptocurrency mining cases. Just two plugged into the Motherboard and that's it.
As this is more about personal taste than functionality, I'm not arguing with his choice.


I'm still missing the Power Supply and Heatsink (probably a Seasonic of some sort, and a Cooler Master Hyper 212 EVO).
 