New E5 26xx vt-d server, thinking X9DR7-LN4F-JBOD and SC743TQ-1200B-SQ


ynari (New Member, joined Sep 21, 2016)

I'm looking to jump on the cheap E5-26xx (either 2680 or 2690) dual Xeon bandwagon for comparatively cheap dual-CPU goodness (compared to v4), with dual GPU passthrough to two separate VMs on Xen. I currently run dual Quadro 6000s (reflashed GTX 480s) on a much older system.

I'm thinking about pairing the X9DR7-LN4F-JBOD with the SC743TQ-1200B-SQ. I need a motherboard that can handle two dual-width PCI-e x16 cards and 6Gb/s drives (all SATA at the moment, but why not use a SAS controller?), so either the X9DR7-LN4F-JBOD or the X9DA7 seems ideal. The X9DA7 has USB3, audio, and another PCI-e x16 slot, but the X9DR7-LN4F-JBOD has IPMI and VGA for glass console access to Xen, which seems very useful.

Just waiting on confirmation from Supermicro that the SC743TQ-1200B-SQ is suitable. It's a bit pricey, but when I factor in that it has a 1200W PSU and eight hot-swap SAS bays, it starts to look quite reasonable. I'd rather not use the 3U case they suggest for the board, as this will sit in a home office.

I've heard that all the PCI-e devices are easy to pass through to separate VMs - as long as I'm not forced to pass through the onboard VGA to a VM, I'll be happy.
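For what it's worth, the usual route on Xen with the xl toolstack is to make the secondary GPU assignable in dom0 and then list it in the guest's config. A minimal sketch, assuming xl and the xen-pciback module; the 0000:03:00.0 address is just a placeholder for whatever lspci reports for the card:

# confirm VT-d is actually active (virt_caps should list hvm_directio)
xl info | grep virt_caps

# in dom0: make the GPU assignable to guests
modprobe xen-pciback
xl pci-assignable-add 0000:03:00.0

# in the guest's xl config file:
pci = [ '0000:03:00.0' ]

The same lines with the second card's address go in the second VM's config; the onboard VGA stays with dom0 as long as it is never assigned, so the glass console keeps working.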

My current system (an Intel S3210SHLC with an X3370, a hodgepodge of PCI-e slot adapters, and enough cheap hot-swap adapters to handle RAID 10 across 4 drives with a RAID 1 cache) is OK after a lot of faffing, but it's rather old and a bit of a pain. I'm hoping for a much newer system that 'just works'.

Also hoping that the Supermicro case really is reasonably quiet, as this will be used in a home setting.

Any comments welcome..
 

pricklypunter (Well-Known Member, Canada, joined Nov 10, 2015)

I like your thinking :)

Given your choices, I would opt for the X9DR7-LN4F board with onboard IPMI. I might even be tempted to couple it with a good used 826 chassis, giving me 12 hot-swap bays rather than the 8 the tower case offers. Cooling will be easier and quieter in the 4U tower, but I already have other racked gear that the 826 would snuggle in with; you may not want to rack your gear, of course :)
 

StammesOpfer (Active Member, joined Mar 15, 2016)

The biggest problem is probably going to be power connectors for those video cards. Usually you don't find those in a server unless it is built specifically for GPGPU compute.
 

wildpig1234 (Well-Known Member, joined Aug 22, 2016)

StammesOpfer said: The biggest problem is probably going to be power connectors for those video cards. Usually you don't find those in a server unless it is built specifically for GPGPU compute.
Nearly any 700W+ PSU nowadays would have at least 4 of those 6/8-pin VGA power connectors. Make sure you use a power calculator and leave a good margin, though, because those two cards use a lot of power, almost 250W each. Depending on how many drives and other things you add, I'd want at least an 850W PSU, if not more.
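As a rough back-of-the-envelope sum, using ~250W per card as above, the 135W TDP of an E5-2690, and ballpark figures for the rest:

2 GPUs x ~250W  = ~500W
2 CPUs x ~135W  = ~270W
8 drives x ~10W = ~80W
board/RAM/fans  = ~75W
peak total      = ~925W

So the 1200W unit in the SC743TQ-1200B-SQ leaves comfortable headroom, while an 850W PSU would be running close to its limit at full load.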
 

ynari

The case is designed to take two GPUs, as is the PSU, although it took me a while to find the right document:

http://www.supermicro.com/products/nfo/files/power_supply/psu_cablelist.pdf

PWS-1K25P-PQ: 2x 6-pin PCI-e, 4x 6+2-pin PCI-e.

Wavering over the X9DA7 vs the X9DR7-LN4F-JBOD, as I've realised the DA7 has both USB3 and FireWire, which is a definite advantage.

In case it helps anyone, Supermicro confirmed the X9DR7-LN4F-JBOD should work in the chassis, but they haven't explicitly tested it.
 

wildpig1234

Man, that's nice but a little pricey at 500+... it does say it has suspend-to-RAM sleep, so that's nice.
 

ynari

Important question: I'm presuming the dual 8 GT/s QPI links compensate for PCI-e traffic being routed between the processors. The X9DA7 has three PCI-e x16 slots (only two usable for dual-width cards due to their position): one hangs off CPU1, the other off CPU2 (the third slot, which can't take a dual-width card, is also on CPU1), and the NICs are on CPU1.

The X9DR7 has both x16 slots off CPU2, while the NICs are on CPU1.

If I'm running VMs with GPU passthrough, is it perhaps more efficient to pin each VM to the CPU whose slot hosts its GPU, instead of moving the traffic over QPI? Although the X9DAE (SLI capable) puts a GPU on each CPU.
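If it comes to that, the pinning itself is straightforward with xl. A minimal sketch, assuming a guest named gpu-vm1 and that CPU2's cores show up as 8-15 (the guest name, the BDF address and the core range are all placeholders):

# check which NUMA node the GPU's slot hangs off (may report -1 on some setups)
cat /sys/bus/pci/devices/0000:83:00.0/numa_node

# pin the guest's vCPUs to that socket at runtime...
xl vcpu-pin gpu-vm1 all 8-15

# ...or permanently in its xl config file:
cpus = "8-15"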

Yeah, it's a bit pricey, but I've not had a proper upgrade since 2005. Besides, a Xeon v4 system would cost 3K+ for the CPUs alone, whereas this will cost under 2K fully loaded.

Noting the NH-U9DX seems a decent cooler, possibly better than the Supermicro alternative, as it ships with decent fans and fits in a 4U case.