RyC's Consolidated Build Thread


RyC

Active Member
Oct 17, 2013
Background: This is a comprehensive build log/history of almost everything computer related I can think of, so it may be a little long. This is the first time I’ve documented the history of my entire system (for myself even), so hopefully it’s relatively on topic. The posts are also split since there’s a maximum of 30 images per post.

I'm a software developer, and all of this is purely a hobby. I’m a year out of college now, but most of the equipment I have was bought (and bought used) while I was still in college, so I tried to get the best deal I could on essentially one major hardware purchase a year. In 2012, I caught the virtualization bug and consolidated every computer function except my desktop and laptop onto ESXi. Four years later, my desktop is virtualized too. Virtualization has been an amazing tool for someone like me: it let me experiment and learn a huge amount with just a small set of hardware. Even today, I sometimes think about how amazing it is that a single physical computer can be running my desktop, television, NAS, and more, all at the same time!

I have 3 ESXi hosts currently, and they’re listed below in the order they came online.

Build’s Name: Larken
Operating System/ Storage Platform: VMware ESXi 6.0U2
CPU: Intel Xeon E3-1245v3
Motherboard: Supermicro X10SAT
Chassis: Supermicro SC846TQ
TV VM Datastore: 1x 128GB Samsung 850 EVO
TV Scratch + SnapRAID: 1x 160GB WD VelociRaptor, 1x 5TB Toshiba (from the shucking deal here), 3x 4TB WD Red, 2x 3TB WD Red, 2x 3TB WD Green
NAS Scratch + Storage Spaces: 1x 320GB WD Blue, 4x 4TB White Label (WD RE)
RAM: 32GB (4x 8GB) Crucial Unbuffered ECC 1600MHz DDR3
Add-in Cards: 2x IBM M1015 (flashed to LSI/Avago/Broadcom 9211), Mellanox ConnectX-2 EN, Intel RES2SV240 SAS expander
Power Supply: 1x Supermicro PWS-920P-1R, Power at Idle: ~200W
Other Bits: Online in 2013

Usage Profile: Television and NAS storage, television operations

Other information: A little history: I have a love of TV that my girlfriend says borders on the obsessive. But oddly, it’s not necessarily the TV shows themselves, but the systems and technologies that are used to watch TV. Windows Media Center is a DVR application that Microsoft developed during the 2000s. Although abandoned by Microsoft after the release of Windows 7, even today it is still the best platform (IMO) for watching and recording television on a PC, and the only option at all if you watch certain channels (usually HBO/Showtime/etc.) on certain cable providers. Microsoft put a lot of time, effort, and money into WMC and, while some of that made it into the Xbox 360 and Xbox One, it’s disappointing to see it abandoned. The biggest benefits of running WMC for me (compared to a TiVo, for example) are automatic commercial skipping and vastly increased storage capacity, which brings me to:

I’m a digital hoarder, and untranscoded MPEG2 recordings are 5-8GB per hour, depending on the channel. As a result, I had to buy more and more hard drives and was soon out of drive slots in the random desktop case I was using as a NAS at the time. This server was meant to be the solution to the storage shortage. It's also the only one for which I bought a few components at retail (as opposed to eBay/forum members). The X10SAT motherboard and E3-1245v3 were purchased new, because the Haswell E3 platform was relatively new in 2013. The X10SAT is technically a workstation motherboard, but it was the only one Supermicro had at the time with all PCIe slots (all of their Haswell E3 server boards back then had at least 1 legacy PCI slot, which would have been a complete waste for me). I do sometimes wish it had IPMI/BMC, but other than that, I’ve encountered no issues running on a workstation board. I found the Supermicro SC846TQ case from TAMS, which has popped up around here a few times. I probably could have bought used parts for an older platform, but I had a little extra money that summer. Larken normally runs 3 VMs:

The TV VM runs on Windows 7 since WMC runs best on 7. WMC was never meant to be run in a VM, but Microsoft designed devices called “WMC Extenders”, which are much like TiVo Minis in that they “extend” the WMC session across ethernet to wherever your TV is (it even uses a special form of RDP). The WMC VM can then stream live and recorded television to these Extender boxes, completely bypassing the need to pass through a GPU (and possibly place a noisy server in the living room). The tuners are network based, also bypassing the need to pass through a USB or PCIe tuner. What is passed through to the TV VM is an M1015 connected to an Intel RES2SV240 SAS expander, and one of the onboard NICs. Storage runs on a combination of SnapRAID + DrivePool, which works well since the media files are generally only added and don’t change. DrivePool lets me spread TV files across several hard drives but present one unified drive/folder to WMC, so all Recorded TV items show up in one view (currently ~3400 episodes recorded). I used to run FlexRAID RAID-F for a combination of drive pooling and parity, but there were unexplained error messages and the software just felt rickety; SnapRAID + DrivePool feels much more robust and solid. I also have a Windows 10 VM that runs Emby, which serves ripped Blu-rays and such to an Nvidia Shield (running Kodi), as well as any computer/mobile device.
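
For anyone curious, a minimal snapraid.conf for this kind of pooled, write-once media setup looks roughly like the sketch below. The drive letters and paths are purely illustrative, not my exact layout:

Code:
# snapraid.conf - minimal sketch; drive letters/paths are illustrative
parity E:\snapraid.parity              # parity file lives on the largest disk

content C:\snapraid\snapraid.content   # keep copies of the content file in more than one place
content F:\snapraid\snapraid.content

data d1 F:\                            # the same disks DrivePool merges into one pool
data d2 G:\
data d3 H:\

A scheduled "snapraid sync" after new recordings land, plus an occasional "snapraid scrub", is all the maintenance it needs; since recordings are write-once, the snapshot-style parity model fits well.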

The NAS VM runs Windows Server 2012 R2 with the other M1015 passed through. I’ve got 4x 4TB White Label (WD RE) drives in 2 separate mirrored Storage Spaces pools (2x 2x 4TB two-way mirrors). One pool is for general file storage, and the other is for backups. I used to run another FlexRAID product, tRAID, instead of Storage Spaces, and while it did successfully recover files in the past, it too threw unexplained error messages and didn’t feel solid.
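
For reference, each pool is just a plain two-way mirror; here's a sketch of how one could be built with the standard Storage Spaces cmdlets (the pool and volume names are made up):

Code:
# PowerShell sketch - pool/volume names are illustrative
$disks = Get-PhysicalDisk -CanPool $true | Select-Object -First 2
New-StoragePool -FriendlyName "FilesPool" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem)[0].FriendlyName `
    -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "FilesPool" -FriendlyName "Files" `
    -ResiliencySettingName Mirror -NumberOfDataCopies 2 -UseMaximumSize
Get-VirtualDisk -FriendlyName "Files" | Get-Disk |
    Initialize-Disk -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Files"

Two separate pools rather than one big pool keeps the backup mirror completely independent of the file-storage mirror.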

The NAS VM runs Veeam Backup & Replication every month, which backs up the entire VM environment. The Veeam backups are then uploaded to CrashPlan, although lately so much data changes every month that the backups don’t finish uploading before it’s time to make the next set (I only have 4Mbps upload on Charter). One cool thing about the SC846TQ case is that its backplane is direct-wired per drive (as opposed to an expander backplane), so I can pass some drive slots to the TV VM and others to the NAS VM. That means I’ve got 2 completely separate storage systems, running different software, for completely different purposes, all in the same case and computer!
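
To put numbers on why the uploads can't keep up: at 4Mbps the ceiling is only about 1.3TB per month, so any backup set larger than that can never finish before the next one starts. Quick sanity check (the 2TB backup-set size below is a made-up example):

Code:
# Rough ceiling on what a 4 Mbps uplink can push to CrashPlan in a month
uplink_mbps = 4                                  # Charter upload speed
seconds_per_month = 30 * 24 * 3600

# 4 Mbit/s = 0.5 MB/s; convert to TB per month (decimal units)
max_tb_per_month = uplink_mbps / 8 * seconds_per_month / 1e6
print(f"Upload ceiling: {max_tb_per_month:.2f} TB/month")      # ~1.30 TB

backup_set_tb = 2.0                              # hypothetical monthly backup size
print(f"Months to upload a {backup_set_tb} TB set: {backup_set_tb / max_tb_per_month:.1f}")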

This server was also my “test” bench for various VMs and hardware (until I got Luna below), and ran my “work” computer for around a year using a Quadro 2000 and an HP zero client. With a Quadro 2000, 2x IBM M1015, and a Mellanox ConnectX-2, I was out of PCIe slots that could physically fit all the cards (there are 3x x16 and 3x x4 slots, and all the cards are x8 except for the Quadro 2000). I bought a StarTech adapter (PCI Express x1 to x16 Low Profile Slot Extension Adapter) that lets an x16 low profile card fit into an x1 full height slot, and placed one of the M1015s on there. I didn’t notice any performance issues, probably because I wasn’t doing anything too disk intensive, and the Quadro 2000 is retired now, freeing up the slot for the M1015 again.

Photos:
SC846:
img_4803.jpg

Supermicro X10SAT:
img_3871.jpg img_3875.jpg

X10SAT installed:
img_0020.jpg

Intel SAS expander mounted "creatively" to the side:
img_0018.jpg
 

RyC

Active Member
Oct 17, 2013
Build’s Name: Laurel2
Operating System/ Storage Platform: VMware ESXi 6.0U2
CPU: 2x Intel Xeon L5520
System: Dell PowerEdge C1100
Drives: 2x 2TB WD RE4 in ZFS mirror for NFS, 1x 250GB WD RE4 for local Datastore
RAM: 72GB (18x 4GB) Registered ECC 1066MHz DDR3
Add-in Cards: Intel X520-DA2, Dell LSI 2008 mezzanine card (85M9R)
Power Usage: ~250W at idle
Other Bits: Online in 2014

Usage Profile: VM storage, VM management, network management, other management

Other information: This one’s not a DIY build, but is included for completeness. In 2013/2014 the C6100 and C1100 started coming up cheap(ish) with gobs of RAM, and in 2014 I finally saved enough to buy one. This was a great platform to implement a “management” host, where I could give OmniOS a large amount of RAM for ZFS (I chose 32GB), and run all the “management” services, such as vCenter.

pfSense runs virtualized, and provides internet/routing/DHCP/DNS/etc for the whole house. Running pfSense on the C1100 was somewhat of an issue since it only has 2 built-in ethernet ports, and I wanted to dedicate 1 port to serving out NFS storage. So rather than have 1 port dedicated to WAN, I trunked the connection from the cable modem over a dedicated VLAN, so the pfSense LAN and WAN connections could share the same ethernet port on the host, leaving the other port dedicated to storage and other management traffic.

This worked great for a while, but when we moved to a new place, we got a new cable modem that refused to connect unless everything was powered on in a specific order, and even then it would disconnect randomly, requiring everything (including the physical ESXi host) to be shut down and powered on again to restore internet. It was getting extremely annoying, so I bought a Dell 85M9R LSI 2008 mezzanine card off eBay to free up the PCIe slot where an M1015 was. I was able to flash it to 9211 IT firmware with no issues using one of the procedures floating around here. I also bought some cheap Mellanox ConnectX-2 PCIe cards to carry the storage traffic, freeing up one of the onboard ethernet ports to dedicate to WAN. See this post for more info on that: Starting out with 10gb help. The mezzanine card (and the M1015 before it) is passed through to an OmniOS VM running napp-it to host ZFS storage for all VMs.
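
For anyone wanting to copy the original WAN-on-a-VLAN trick, the ESXi side is just a second port group on the same vSwitch/uplink, tagged with the WAN VLAN (the physical switch has to carry that VLAN between the modem's port and the host's uplink). A rough sketch, with the port group name and VLAN ID made up:

Code:
# Sketch only - port group name and VLAN ID are illustrative
esxcli network vswitch standard portgroup add --portgroup-name=WAN --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name=WAN --vlan-id=100
# pfSense then gets two vNICs: one on the regular LAN port group, one on WAN

pfSense just sees two ordinary adapters; all the tagging happens at the vSwitch and the physical switch.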

I also run Active Directory and tie as many authentication-requiring services into it as I can, including Windows accounts, Wi-Fi (WPA2 Enterprise), OpenVPN, vCenter/vSphere, NAS login/permissions, and probably a few more. Additional “management” VMs that run on Laurel2 include: the Ubiquiti UniFi (Wi-Fi) controller, vCenter (vCSA), vRealize, VUM, an ADS-B airplane tracker, an Asterisk+FreePBX phone controller, an nginx web server, and an OpenVPN server.

I would like to replace this server at some point since its power consumption is a little high (around 250W idle). Perhaps a 1U Xeon-D system? Laurel3 here I come...

Photos:
Mezzanine card
img_1014.jpg img_1038.jpg
 

RyC

Active Member
Oct 17, 2013
Build’s Name: Luna
Operating System/ Storage Platform: VMware ESXi 6.0U2
CPU: 2x Intel E5-2670v1
Motherboard: Intel S2600CP2J
Chassis: Intel P4000M (P4308XXMHGR)
Drives: 1x 256GB Samsung 850 Pro, 1x 1TB WD Black
RAM: 64GB (8x 8GB) Registered ECC 1600MHz DDR3
Add-in Cards: AMD Radeon HD 7870, Mellanox ConnectX-2 EN
Power Supply: 2x Intel CRPS 750W, Power at Idle ~160W
Other Bits: Online in 2016

Usage Profile: Workstation desktop virtualization, test bench

Other information: To go along with my drive to virtualize as much as I could, in 2013, I had a “moonshot” goal of virtualizing both Mac OS X and Windows on the same computer, with full graphics acceleration for both VMs, and running at the same time. The Mac OS X part was purely just to see if it could be done, I didn’t plan on using it as a daily driver since I already have a Mac laptop. While ESXi was more or less able to accomplish this on 5.5 in 2013, the hardware I had back then was not (I never got Radeon GPU passthrough to work on Larken or Laurel, Laurel2’s predecessor, but a Quadro 2000 worked great on Larken). Others have been successful with running Mac and Windows with GPUs and posted about it here, and I ended up being mostly successful.

DETOUR DOWN VDI IN SPOILERS

A detour on some VDI thoughts: There were two options I looked into for a “virtual desktop”: one was to pass a GPU and USB ports directly to a VM and “tether” myself to the server; the other was a “proper” VDI solution like VMware Horizon and zero clients. The whole zero client concept was very appealing to me, since I could essentially place a powerful desktop anywhere in the house. With VMware Horizon + a compatible video card such as a Quadro 2000, I got full graphics acceleration to zero clients and Horizon software clients (with some limitations). This was great since I’m an Android developer and use the Android Emulator, which requires OpenGL support beyond what the VMware software GPU provides. I used such a solution for about a year as my “work” computer, with a zero client at home and the Horizon software client on my laptop when I was at school or work. I’m in the camp that believes GPUs are becoming a necessity for even basic “office” tasks. After adding the Quadro 2000, everything just seemed so much snappier, such as launching the start menu and bringing up the alt-tab switcher.

Some limitations on VMware Horizon I ran into: without the Quadro 2000, the framerate could barely reach 30fps, and that was only when there was a limited amount of screen movement. YouTube would be unwatchable full screen, and barely watchable windowed. After adding the Quadro 2000, full screen YouTube was now possible, but framerates would still peak at around 30-35fps. This was more or less fine for YouTube and office work, but attempting to play even light games resulted in a poor experience since the low frame rate is much more noticeable. I normally have Windows 10 animations turned off, but animations (desktop switch, window minimize/maximize, etc) even with the Quadro 2000 were still abysmally slow. This was the result of the PCoIP encoder, not the graphics card. The graphics card renders at acceptable frame rates, but the PCoIP encoder is still in software, and it can’t keep up. The potential solution to this is a hardware PCoIP encoder, which supposedly can keep up with high frame rates.

However, the equipment list for a “proper” VDI solution for my requirements was starting to become a little too expensive, and perhaps a little too proprietary: I would ideally need a more modern Quadro video card and most likely a hardware PCoIP encoder. But I don’t need a Quadro for what I do, and the proprietary Teradici equipment would start to pile up. Who knows how long VMware is going to stick with Teradici? I don’t want to re-buy a GPU (especially a Quadro) when the GPU I already had works fine. The decision then was to simply pass through the existing Radeon GPU and USB ports and be “tethered” to the server in close proximity (which ended up being fine, actually).
 

RyC

Active Member
Oct 17, 2013
Back to Luna (and apparently there's a 10,000 character limit per post too, which is why this is split again): The opportunity to virtualize my desktop finally arose in 2016 when the Intel E5-2670 deals and the combo package at Natex showed up. The chance to finally have a powerful workstation-class desktop was too much to resist (not to mention my old desktop was from 2009 and had DDR2 RAM :O). While I came close to accomplishing my “moonshot” (and hence, “Luna”), limitations of the Intel S2600CP board prevented running two GPUs at the same time under ESXi (see: Intel S2600CP dual GPU passthrough issue). In the end, since the Mac OS X part was just for fun to see if it would run, I’m still extremely happy with the system.

Windows 10 GPU passthrough with the AMD 7870 works great, and I’m able to run the latest AMD drivers with no issues. Passing through one of the onboard Intel USB controllers (for the internal headers) also works, aside from one (resolved) glitch: the front USB ports on the P4000M (which were passed through via the internal header) seemed to cut out for a few milliseconds every few minutes. The mouse would jump around and the keyboard backlight would flicker. I ended up getting a StarTech internal-to-external USB PCI bracket (4 Port USB A Female Slot Plate Adapter) since I wanted all ports in the back anyway, and I haven’t seen the issue at all since I switched it over. I have an Apple Cinema Display with USB audio (and also a USB hub), with a Bluetooth 4.0 dongle for the wireless mouse. I use a 10 foot combo mini-DisplayPort+USB cable and another 10 foot DVI extension cable (for dual monitors with an older DVI Apple Cinema Display) to reach my desk from the server cabinet. Sometimes I forget that I’m working on a VM until I look down and see that the hulking computer that used to be next to my desk is gone! The performance and experience as a whole is as good as native (although I’m not running anything too intensive).

Minor issues: I originally bought a StarTech 4 port PCIe USB card (4 Port PCI Express (PCIe) SuperSpeed USB 3.0 Card Adapter w/ 4 Dedicated 5Gbps Channels - UASP - SATA / LP4 Power). This one has an onboard PCIe switch, so you can pass the USB ports individually to VMs. However, I could not get it to work on Windows at all; the drivers simply refused to load. I ended up passing one of the onboard controllers to the VM instead, where the drivers loaded automatically (in both Windows and Mac OS X), and I didn’t need USB 3.0 anyway.

The primary, more annoying issue is that if I shut down/restart a VM with the GPU attached, occasionally the entire host will freeze up and reboot itself. The BMC then goes into an error condition (the triangle indicator light on the front panel turns orange), and the only way I’ve found to clear it is to power cycle the whole host at the wall (there doesn’t seem to be an actual need to clear the error condition, but I just like to have everything in a good state). Once everything comes back up, there don’t appear to be any lingering issues. I’ve read it might have something to do with the PCIe reset not happening correctly, and that one solution is to disable the GPU in Device Manager before shutdown, but since it’s just an annoyance that doesn’t happen regularly, I’m fine with it as is. Another caveat: the HDMI audio device is not passed through because it causes the host restart issue on every restart/shutdown, but since I don’t use HDMI audio, it doesn’t matter to me. This doesn’t cause an issue with driver installation; the “HDMI Audio” driver option simply doesn’t show up. Other than that, every application I’ve run has been solid, stable, and fast, just like on a non-virtualized desktop (GTA V runs great, for example).
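
The other fix that comes up in passthrough threads for this kind of reset problem is overriding the card's reset method in ESXi's /etc/vmware/passthru.map. Purely as a sketch of what that file looks like (1002 is AMD's vendor ID; the device ID and reset method below are placeholders, not a tested fix):

Code:
# /etc/vmware/passthru.map - format sketch only, not a tested fix
# vendor-id   device-id   resetMethod   fptShareable
1002          6818        d3d0          false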

Here’s the part list:
Intel AXX2IOS I/O Shield for S2600CP2, S2400SC2, and S2400GP2 New Bulk Packaging - $6
Intel AUPSRCBTP Single Passive Heat Sink New Bulk Packaging - $13 (x2)
Intel P4308XXMHGR Server Chassis 4U, 750W, New Bulk Packaging - $115
Intel FCPUPMAD Air Duct Required for Intel Server Board S2600CP In P4000M New - $15

$40 shipping

You can probably go even lower, but it seems kalleyomalley is running out of stock on some items.

A little while later, kalleyomalley had some additional P4000M parts in stock:
Intel AXX3U5UPRAIL Advanced Rail Kit For Server Chassis P4000 New Bulk Packaging - $20
Intel AUPBEZEL4UF Rack Bezel Frame (No Door) New Damaged Box - $10
Intel AUPBEZEL4UD Rack Bezel Door For Server Chassis P4000 Family, New Bulk - $8
Intel AUPLGPUBR Full Length GPGPU Support Bracket New Bulk Packaging - $5 (just to get a proper GPU power cable)

Overall, I was really impressed with Intel’s build quality, and how all the parts seemed to fit together so well. The BIOS/IPMI/ME/FRUSDR update process was seamless as well, especially since I was using the P4000M case and didn’t have to modify the SDR.

Photos:
Testing the motherboard and GPU passthrough
img_1730_1.jpg img_1731_1.jpg img_1733_1.jpg img_1737_1.jpg

Weird Rackable Systems identifier before reflashing BIOS/etc

img_1732_1.jpg

Preparing CPUs for the new Intel passive heatsinks:

img_1741_1.jpg img_1742_1.jpg img_1743_1.jpg

The case and accessories arrive from kalleyomalley:

img_1744_1.jpg img_1745_1.jpg img_1746_1.jpg img_1747_1.jpg img_1748_1.jpg img_1751_1.jpg

Here’s part of the conversion process to rack mount case:

img_1978.jpg img_1979.jpg img_1980.jpg img_1981.jpg
 

RyC

Active Member
Oct 17, 2013
Photos in the rack:
Front:
img_2005.jpg
Rear:
img_2003.jpg img_1998.jpg img_2001.jpg
Side:
img_2008.jpg
Closed up with cable tether coming out the back:
img_2012.jpg

Other misc photos in spoilers:
Kaia the Cat hiding out before anything was installed
img_0354.jpg
Rack (mostly) empty:
img_1995.jpg
Old rack using Lack tables from Ikea (and super old hardware):
img_6998.jpg

Total Power Consumption: 650-750W, 850W when doing something CPU/GPU intensive. This doesn’t include the monitors, which are on a different outlet, but it does include all other accessories such as switches, the NetShelter CX fans, etc.

Other Parts:
KVM:
Iogear MiniView Ultra+ 8 Port (TAA Compliance)
This is probably the best deal I’ve ever gotten at WeirdStuff Warehouse in Sunnyvale - $25 for the KVM and 4 of the special combo VGA/USB/audio cables

Switch: Cisco SG200-26
No complaints, other than I need SFP+ ports now!

Rack: APC NetShelter CX 18U
Honestly, I only got the NetShelter CX because I found it on Craigslist for free if I picked it up myself (and in San Francisco, when I was near LA at the time). Then, even though I had read the specs and knew it was 350 pounds, I came by myself, in a pickup truck with a 5 foot lift into the bed. I ended up paying some people off the street to help me get it in :eek:

This is probably the best Craigslist deal I’ve ever gotten, the condition is maybe B-, but it’s almost entirely cosmetic. The biggest issues are that a number of the clips holding the panels on have snapped off, but I can order replacements on eBay. All the panels and fan wall were included. The noise reduction is incredible. Without having to modify any servers, I can sleep literally 3 feet away from the cabinet with no issues. It sounds like a very dull fan when closed up and a screaming monster when open.

UPS: CyberPower CP1500PFCLCD UPS System
I bought it on a Newegg Shell Shocker deal; it gives about 7-10 minutes to shut everything down when the power goes out. I try to shut down Luna and Larken as quickly as possible to give Laurel2 more time to shut down vCenter and the associated services (which takes forever).

Raspberry Pi 2: I won this in a raffle at the last local VMUG meeting here (thanks PernixData!). My intention is to eventually write a script that will automatically shut everything down cleanly when the power goes out. Any tips or other Pi ideas appreciated!
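
To make the question more concrete, below is roughly the kind of script I have in mind, assuming the UPS's USB cable gets moved to the Pi and NUT (Network UPS Tools) is set up on it to talk to the CyberPower. The hostnames and per-host shutdown commands are placeholders; the ESXi hosts would really need their VMs shut down gracefully first (vim-cmd or the vCenter APIs) rather than a blunt poweroff.

Code:
#!/usr/bin/env python3
"""Rough sketch: poll the UPS via NUT and shut the hosts down in order on battery."""
import subprocess
import time

UPS = "cyberpower@localhost"            # NUT UPS name - placeholder
# Shut the big hosts down first so Laurel2 keeps vCenter up as long as possible
HOSTS = [
    ("luna.example.lan",    "poweroff"),    # shutdown commands are illustrative;
    ("larken.example.lan",  "poweroff"),    # ESXi hosts need their VMs stopped
    ("laurel2.example.lan", "poweroff"),    # gracefully first (vim-cmd/vCenter)
]

def on_battery() -> bool:
    """True if NUT reports the UPS on battery ("OB"); "OL" means on line power."""
    result = subprocess.run(["upsc", UPS, "ups.status"],
                            capture_output=True, text=True)
    return "OB" in result.stdout

def shutdown_everything() -> None:
    for host, command in HOSTS:
        try:
            # Assumes passwordless SSH keys are already set up for root on each host
            subprocess.run(["ssh", f"root@{host}", command], timeout=300)
        except subprocess.TimeoutExpired:
            pass                        # keep going; better to try the next host anyway
        time.sleep(60)                  # crude stagger between hosts

if __name__ == "__main__":
    while True:
        if on_battery():
            time.sleep(120)             # ride out short blips before pulling the trigger
            if on_battery():
                shutdown_everything()
                break
        time.sleep(30)

I'd probably just run it under systemd so it comes back up after the Pi reboots.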

Future Improvements - any tips appreciated:
VSAN:
Now that I have 3 ESXi hosts, I would love to move storage onto VSAN. However, this would probably require a fair amount of money to buy SSDs for the caching tier, and possibly more hard drives to fill out the storage tier. But possibly the biggest sticking point is the 10Gb Ethernet situation.

10Gb Ethernet: I’ve already bought all the NICs and transceivers and fiber, but I need a switch with at least 3 SFP+ ports, especially if I’m going to start experimenting with VSAN. Mikrotik makes some switches that come close, but only have 2 SFP+ ports. Perhaps I need to spring for a Quanta LB6M, but the number of ports seems way overkill for my needs.

KVM (the hypervisor): I’ve always used VMware, but it’s important to branch out and learn new platforms too. I’ve read that PCI passthrough of video cards may be more flexible under KVM, and also that Nvidia GeForce cards can function there (they’re artificially limited by the driver on ESXi).
Possible limitations: I’d have to replace the Veeam backup system? Also, it’s my impression that there are a lot of manual workarounds/tweaks (and config editing) required for certain things, such as PCI passthrough, but maybe that’s not necessarily true. I would probably start off with Proxmox when I begin diving in.
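
From what I've read, the config editing for GPU passthrough on KVM/Proxmox mostly comes down to enabling the IOMMU and binding the card to vfio-pci before the host driver grabs it. A rough sketch of what I'd expect that to look like (the PCI IDs come from lspci -nn; the ones below are just placeholders):

Code:
# /etc/default/grub - enable the IOMMU on the kernel command line (Intel CPU here)
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

# /etc/modprobe.d/vfio.conf - claim the GPU and its HDMI audio function for vfio-pci
# (IDs are placeholders; use whatever lspci -nn reports for the card)
options vfio-pci ids=1002:6818,1002:aab0

# Then attach the device to the guest, e.g. in a Proxmox VM config:
# hostpci0: 01:00,pcie=1

After editing those files you'd rerun update-grub and update-initramfs and reboot the host.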

Management Network Split: The management VLAN consists mostly of things that are joined to, or require access to, AD. The DC handles DNS for the management VLAN (for AD purposes), but pfSense handles DHCP and duplicates the management DNS entries for the other VLANs (the other VLANs have pfSense set as their DNS server, whereas the management VLAN is given the DC). Perhaps I can forward the other VLANs' DNS requests for the management domain to the DC directly, and also hand the DHCP service off too? I’m waiting until Windows Server 2016 comes out to try this (and possibly any new Storage Spaces features).
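
If I go down that road, the forwarding half at least looks simple: pfSense's DNS Resolver (Unbound) supports domain overrides, which boil down to a forward-zone stanza like this sketch (the AD domain name and DC address are placeholders):

Code:
# Unbound "domain override": send queries for the AD domain straight to the DC
forward-zone:
    name: "ad.example.lan"
    forward-addr: 10.10.10.5

In the pfSense GUI that's just a Domain Override under Services > DNS Resolver, so no manual config editing is needed.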

PfSense/Suricata/Traffic Shaping: Running some sort of intrusion detection or traffic shaping for buffer bloat seems to be popular with users of pfSense, but I haven’t begun looking into this in any depth.

Moving: In a few months, I’m packing everything up and moving to Oregon (from California, so not too far at least). I’ve kept as much of the packaging for the servers as I could, but any advice on how to move all this equipment without damage is appreciated!


Thanks for reading (and possibly reading it all)!
 

frogtech

Well-Known Member
Jan 4, 2016
pretty sweet. curious why you did an intel tower case build w/ rackmount conversion kit instead of just getting another supermicro 846 chassis. for the same amount of rack space you could've fit a lot more hard drives. rack mounting tower chassis just seems ... ?

but you did get it from kalleyomalley aka oemxs, i can only assume you got it for a ridiculous price. but even if it was 100 or 150 i would've rather spent a bit more to get a more formidable chassis.
 

RyC

Active Member
Oct 17, 2013
pretty sweet. curious why you did an intel tower case build w/ rackmount conversion kit instead of just getting another supermicro 846 chassis. for the same amount of rack space you could've fit a lot more hard drives. rack mounting tower chassis just seems ... ?

but you did get it from kalleyomalley aka oemxs, i can only assume you got it for a ridiculous price. but even if it was 100 or 150 i would've rather spent a bit more to get a more formidable chassis.
Thanks!

The insane kalleyomalley/oemxs deal was indeed the primary reason I went with the P4000M instead of a Supermicro 846 or similar. The secondary reason is that I'm not anticipating using a lot of drives for this build, but hopefully I don't regret not choosing a case with more drive slots later.
 

TheKingOfKats

New Member
Aug 27, 2016
Wow, really nice build @RyC. I'm doing something very similar. Are those the prices that kalleyomalley took for best offer, or were they actually listed that low? Also, can the 8x 3.5" bay hotswap cage use both SAS and SATA drives? Does it connect with a single mini-SAS? There just isn't enough documentation on the Intel accessories, so it's near impossible trying to find accurate information. What do the GPU power cables look like, and where do they actually connect on the distribution board? What did you do with the blue GPU fan duct when you put the full size GPUs in?

Sorry to bombard you with questions, but you've done almost exactly what I want to do in an Intel build.
 

RyC

Active Member
Oct 17, 2013
Wow, really nice build @RyC. I'm doing something very similar. Are those the prices that kalleyomalley took for best offer, or were they actually listed that low? Also, can the 8x 3.5" bay hotswap cage use both SAS and SATA drives? Does it connect with a single mini-SAS? There just isn't enough documentation on the Intel accessories, so it's near impossible trying to find accurate information. What do the GPU power cables look like, and where do they actually connect on the distribution board? What did you do with the blue GPU fan duct when you put the full size GPUs in?

Sorry to bombard you with questions, but you've done almost exactly what I want to do in an Intel build.
Thank you and no problem!

The prices I listed were the best offer prices kalleyomalley accepted, so you might be able to try lower if they still have what you want in stock. I believe the 8x 3.5" bay hot swap cage can use both SAS and SATA, but probably not at the same time. It connects via 2 SFF-8087 cables (I think), and the case included 2 reverse breakout cables (I think - someone correct me on the direction) to plug into regular SATA/SAS ports on the motherboard; reverse breakout runs from discrete SATA ports on the host side to the SFF-8087 on the backplane, so that should be the right direction here.

The GPU cables came in the Intel AUPLGPUBR GPU bracket kit, but depending on your GPU, you need revision 2 of the kit. Here's an Intel PDF with an illustration of the new and old cable: Product Change Notification 111735 - 00 - Intel® Quality Document ...
The revision 2 8-pin GPU connector has a little separating tab so you can make it a 6-pin connector if you need 2x 6-pin instead of 1x 8-pin and 1x 6-pin (thanks @britinpdx). If you look at the PDF, you can see the non-GPU end of the cable looks kind of like a 4-pin CPU power connector. The PDU (at least for the hot swap PSUs) has a couple of open ports for that 4-pin connector.

The blue GPU fan duct thing actually swings up and kind of out of the case for when you're installing a GPU. After you install a GPU or any other PCIe device, it just swings back down and it has enough clearance to not touch full height PCIe cards. The 7870 in there has top facing GPU power connectors and everything fits without being tight.
 

TheKingOfKats

New Member
Aug 27, 2016
The GPU cables came in the Intel AUPLGPUBR GPU bracket kit, but depending on your GPU, you need revision 2 of the kit. Here's an Intel PDF with an illustration of the new and old cable: Product Change Notification 111735 - 00 - Intel® Quality Document ...
The revision 2 8-pin GPU connector has a little separating tab so you can make it a 6-pin connector if you need 2x 6-pin instead of 1x 8-pin and 1x 6-pin (thanks @britinpdx). If you look at the PDF, you can see the non-GPU end of the cable looks kind of like a 4-pin CPU power connector. The PDU (at least for the hot swap PSUs) has a couple of open ports for that 4-pin connector.
Thanks, I didn't see the extra 4-pin connectors on the PDU; that explains a lot! I ordered the last of kalley's AUPLGPUBR stock, but it seems to be version 1. I suppose this will make do, but I'll have to look into just making my own GPU power cables. Did you just buy multiple AUPLGPUBR kits to support your 2 GPUs?
 

RyC

Active Member
Oct 17, 2013
The kit comes with at least 2 cables, maybe even 4, so one kit was plenty! Version 1 will work fine as long as you don't need 2x 6-pin connectors. You could possibly just run 2 cables too if you do.