Yeah, but the hassle of eBay, shipping, bent-pin claims, etc.
--
BTW, that X10DAL-i is a VERY special mobo (not to mention inexpensive!): I carefully read just about every Supermicro X8, X9 and X10 manual (I prefer Supermicro due to drivers and stability and warranty) looking for one special arrangement:
PCIe assignments to CPU1 and CPU2.
This was the ONLY board that:
* wires all 3 x16 PCIe 3.0 slots to CPU1 (the middle slot is wired at x8)
* wires the x16 slot closest to the CPU, SLOT5, with a full x16 data path (on every other board, the fully wired slot seemed to be the one closest to the end)
* adds an extra x8 slot, wired at x4 and connected to the PCH, next to the CPUs
Translation: 3-way x16 plus an x4 slot to run an HBA, all available off of a single CPU. Not even the single-CPU boards were wired like that; they seemed to have their critically limited number of PCIe slots placed where they would be covered by dual-width GPUs. This arrangement gives you 3x x16 slots and one free x8 (wired at x4, connected to the PCH) available for your HBA card. Just plain perfect!
And, as if that weren't enough, the one PCIe slot that is useless without a 2nd CPU is covered up by a dual-width GPU anyway.
Dare I say the perfect PCIe wiring? Now if only they had moved the x16 slots one slot closer to the CPU, like on my Asus Rampage IV Black Edition, I could fit 3x full dual-width GPUs in this chassis.
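Side note: once a board like this is running Linux, the CPU1-vs-CPU2 slot wiring is easy to double-check from software. Here's a small Python sketch (my own, not anything Supermicro ships) that reads each PCI device's NUMA node from sysfs; on a dual-socket box, node 0 vs node 1 tells you which CPU a given slot actually hangs off:

```python
# Sketch: map PCI devices to CPU sockets via the kernel's sysfs
# numa_node attribute (/sys/bus/pci/devices/<addr>/numa_node).
# A value of -1 means the kernel couldn't determine the node
# (typical on single-socket systems).
import glob
import os

def pci_numa_map(sysfs_root="/sys/bus/pci/devices"):
    """Return {pci_address: numa_node} for every PCI device found."""
    mapping = {}
    for dev in sorted(glob.glob(os.path.join(sysfs_root, "*"))):
        node_file = os.path.join(dev, "numa_node")
        if os.path.isfile(node_file):
            with open(node_file) as f:
                mapping[os.path.basename(dev)] = int(f.read().strip())
    return mapping

if __name__ == "__main__":
    # On the X10DAL-i, all three x16 slots should report the same node.
    for addr, node in pci_numa_map().items():
        print(f"{addr}: NUMA node {node}")
```

`lspci -tv` gives the same picture from the bus topology side, but the sysfs route is handy when you want to script a sanity check after installing the GPUs.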
I looked closely at the X10DAX (Nvidia SLI license in the BIOS), but the PCIe slot assignments were a deal breaker. And THE COST! Ouch!
I am going to hack the X10DAL-i BIOS to embed the Nvidia SLI license (or attempt to). That kind of stuff is fun for me.
Oh, FYI, I do FAH, rastering, password cracking, Nvidia CUDA testing, occasional Altcoin mining, and more recently Tor hashing experiments.
I like having the option of moving that stuff (currently 3x Titans and 4x AMD 280s) to the server. I tried that before with my current chassis, but the Nvidia desktop Windows drivers ticked me off: they kept locking up and crashing the system. Going Arch Linux on the next build.