BANGING 24-bay barebones deal


Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
Mine was unpacked today (along with 4x 160GB Intel S3500 drives). I got the zip-tie version of the AOC-USAS2-L8i. The backplane is a SAS2 846EL1.

Heatsinks and shroud were included, which was nice. Here is the strange part - it did not come with rack handles! I also did not get the 2x SATA cables for the 2.5" bay area.
 

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
The AOC-USAS2-L8i is a UIO card. This is an SM proprietary design with the parts soldered on the wrong side of the board compared to standard PCIe. The standard mounting bracket has to be removed in order to fit it into a "standard" PCIe chassis. Even if you find a bracket it won't fit in this chassis. I plan to remove it and use an LSI 2308-based card.

Also note that the slot they have it plugged into is PCIe 2.0 x4 (*). It won't run full speed in that slot. Likely OK even with 24 SATA drives on the expander, but if you try to use the other port for SSDs (log/cache, etc.) then it will bottleneck. You need to move it to one of the other three slots, each of which is PCIe 3.0 x8.

(*) See the block diagram on page 1-10 of the manual for the motherboard. Note that the x4 slot is derived from the PCH chip and not directly from either CPU. This means that IO on this slot is 1 QPI hop away from CPU1 and 2 hops away from CPU2. For any kind of high-performance cards (good HBAs, 10GbE or InfiniBand) this is deadly. Too much latency. SuperMicro did not expose any PCIe from CPU2 on this board. All three PCIe 3.0 x8 slots are from CPU1 and the 2.0 x4 and x1 slots are from the PCH chip. Nice because it means all of the PCIe slots are active even with just a single CPU installed - but somewhat performance limiting for dual CPU setups.
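
To put rough numbers on it - a quick back-of-the-envelope sketch, using the usual per-lane throughput approximations after encoding overhead and an assumed ~175 MB/s per spinning drive:

Code:
# Back-of-the-envelope check of the x4 gen2 slot vs. 24 drives on the expander.
# Per-lane figures are the usual usable-throughput approximations after encoding
# overhead (8b/10b for gen2, 128b/130b for gen3); the per-drive speed is assumed.

PCIE2_LANE_MBPS = 500      # PCIe 2.0: ~500 MB/s usable per lane
PCIE3_LANE_MBPS = 985      # PCIe 3.0: ~985 MB/s usable per lane

slot_2_0_x4 = 4 * PCIE2_LANE_MBPS   # ~2,000 MB/s
slot_3_0_x8 = 8 * PCIE3_LANE_MBPS   # ~7,880 MB/s

drives = 24
hdd_seq_mbps = 175                  # assumed sequential speed per SATA HDD
aggregate = drives * hdd_seq_mbps   # ~4,200 MB/s if everything streams at once

print(f"PCIe 2.0 x4 slot : {slot_2_0_x4:>5} MB/s")
print(f"PCIe 3.0 x8 slot : {slot_3_0_x8:>5} MB/s")
print(f"24 HDDs streaming: {aggregate:>5} MB/s")
# -> the gen2 x4 slot caps out at roughly half of what 24 spinners can push,
#    while a gen3 x8 slot has plenty of headroom.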
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
The AOC-USAS2-L8i is a UIO card. This is an SM proprietary design with the parts soldered on the wrong side of the board compared to standard PCIe. The standard mounting bracket has to be removed in order to fit it into a "standard" PCIe chassis. Even if you find a bracket it won't fit in this chassis. I plan to remove it and use an LSI 2308-based card.

Also note that the slot they have it plugged into is PCIe 2.0 x4 (*). It won't run full speed in that slot. Likely OK even with 24 SATA drives on the expander, but if you try to use the other port for SSDs (log/cache, etc.) then it will bottleneck. You need to move it to one of the other three slots, each of which is PCIe 3.0 x8.
I have had UIO cards in a standard slot before. IIRC I just bent a bracket to fit and that worked reasonably well.

My planned config is 1x E5-2603 V1 + 16GB or 32GB of RAM. Then just using it as a ZFS server with all SAS2 drives. I did at least *think* about taking out the AOC and motherboard and just using a JBOD board for this. Then decided that I might as well run it in a more standard config just in case I ever need the CPU power.

@PigLover did you happen to try a second SFF-8087 cable? There are two more ports on the backplane. If you could get an 8-lane back-haul that would be ideal.
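
Rough math on why the 8-lane back-haul would matter (the SAS2 per-lane figure and per-drive speed are assumed round numbers):

Code:
# One SFF-8087 carries 4 SAS2 lanes; dual-linking to the expander would give 8.
# ~600 MB/s per 6Gbps lane and ~175 MB/s per HDD are assumptions.

SAS2_LANE_MBPS = 600

single_link = 4 * SAS2_LANE_MBPS   # ~2,400 MB/s
dual_link   = 8 * SAS2_LANE_MBPS   # ~4,800 MB/s

drives, hdd_mbps = 24, 175
aggregate = drives * hdd_mbps      # ~4,200 MB/s

print(f"single 8087 back-haul: {single_link} MB/s")
print(f"dual 8087 back-haul  : {dual_link} MB/s")
print(f"24 HDDs streaming    : {aggregate} MB/s")
# -> a single 4-lane link would be the choke point, not the HBA's PCIe 3.0 x8
#    slot; an 8-lane back-haul roughly matches what the drives can deliver.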
 

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
Supermicro manual for the backplane, such as it is, here. The manual discusses using the 2nd 8087 for cascading chassis but is silent on any possible use for the third port. When I get things put together I'll give dual-linking a try and see what happens.
 

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
Got it spun up with a pair of E5-4640s (E5-2665 equiv). 8/16 cores @ 2.4GHz. 8x 8GB DDR3-1600. Interestingly, it recognized the 1600 memory speed even though the chips are V1/Sandy Bridge (OK - just looked - the E5-4640 is rated for this memory speed - odd - never looked to see if it ran @ 1600 on the Intel board I pulled them from).

The BIOS is v1.0. Guessing they all will be - you'll need to get a V1 CPU to flash it with, since you have to be on BIOS 3.0 or above to support V2 CPUs.

Still testing...memtest now and that will take a while.
 

Stanza

Active Member
Jan 11, 2014
205
41
28
The AOC-USAS2-L8i is a UIO card. This is an SM proprietary design with the parts soldered on the wrong side of the board compared to standard PCIe. The standard mounting bracket has to be removed in order to fit it into a "standard" PCIe chassis. Even if you find a bracket it won't fit in this chassis. I plan to remove it and use an LSI 2308-based card.

Also note that the slot they have it plugged into is PCIe 2.0 x4 (*). It won't run full speed in that slot. Likely OK even with 24 SATA drives on the expander, but if you try to use the other port for SSDs (log/cache, etc.) then it will bottleneck. You need to move it to one of the other three slots, each of which is PCIe 3.0 x8.

(*) See the block diagram on page 1-10 of the manual for the motherboard. Note that the x4 slot is derived from the PCH chip and not directly from either CPU. This means that IO on this slot is 1 QPI hop away from CPU1 and 2 hops away from CPU2. For any kind of high-performance cards (good HBAs, 10GbE or InfiniBand) this is deadly. Too much latency. SuperMicro did not expose any PCIe from CPU2 on this board. All three PCIe 3.0 x8 slots are from CPU1 and the 2.0 x4 and x1 slots are from the PCH chip. Nice because it means all of the PCIe slots are active even with just a single CPU installed - but somewhat performance limiting for dual CPU setups.
Looking at TD_Trader's pics it seems it's random which PCIe slot they have whacked the SAS cards in... one of his boxes has it in the far left slot, the other in the far right slot.

Excellent

Wonder who initially built them :rolleyes:

As for the SAS ports...

I don't think from memory they support dual link...

Just a one in and two out type setup.

E.g. in JBOD config...
Server to JBOD... JBOD to two more JBODs...

Kinda a weird way to do it.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
Looking at TD_Trader's pics it seems it's random which PCIe slot they have whacked the SAS cards in... one of his boxes has it in the far left slot, the other in the far right slot.

Excellent

Wonder who initially built them :rolleyes:

As for the SAS ports...

I don't think from memory they support dual link...

Just a one in and two out type setup.

E.g. in JBOD config...
Server to JBOD... JBOD to two more JBODs...

Kinda a weird way to do it.
Mine came not even in a slot but instead hovering above one. It seems the zip ties failed to keep the card in its slot through shipping. It too was in a different slot.
 

Triggerhappy

Member
Nov 18, 2012
51
11
8
Heatsinks yes, shroud no for me. Pretty much the same layout as PigLover's shroudless one. One rack handle was attached to the case and the other was randomly taped to the side of the box without screws. I cut myself nicely on the rails, which were attached with a piece of tape to the underside of the case, completely exposed.

Need to source a CPU to actually test it now. Unless someone wants to please my wallet and buy the mobo / SAS card as-is so I can just transplant the guts of my current system.


Side note: Figured out why the wife didn't mind picking up my box. The one she brought back home was easily 4x larger. :)

Was looking for this case for a while now. Thanks again!
 

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
Regarding the airflow shroud: the shroud is only there to allow the use of passive (no fan) heatsinks in the chassis. It is designed to concentrate active airflow from the chassis fans across the heatsinks. The units Liquid8 is shipping all seem to have active cooling heatsinks - they have fans and are rated for 145W TDP CPUs.

You really don't need the shroud when using these active heatsinks. Just food for thought for folks building on these chassis.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
Regarding the airflow shroud: the shroud is only there to allow the use of passive (no fan) heatsinks in the chassis. It is designed to concentrate active airflow from the chassis fans across the heatsinks. The units Liquid8 is shipping all seem to have active cooling heatsinks - they have fans and are rated for 145W TDP CPUs.

You really don't need the shroud when using these active heatsinks. Just food for thought for folks building on these chassis.
That is what I was thinking also. I was also going to see if I could use my radial-style "silent" coolers with the shroud to keep air moving, or if that would just muck stuff up.

I am going to propose an activity. What if we Linux-Bench'd our builds? I am guessing that I will have the slowest but I am very interested to see what others are coming up with. I will also do a mini-build log on mine.
 

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
The fans on the Intel heatsinks Liquid8 appears to be shipping are inaudible compared to the chassis fans. The whole system settles down nicely to what I consider 'tolerable server' noise levels. If you put the silent heatsinks on you won't notice any difference.
 

paranoidAndroid

New Member
Oct 23, 2014
12
1
3
44
Anyone check the power usage at start-up and while relatively idle? I want to replace 2 Dell cloud servers (FS12/CS24) that seem to pull a constant 165-170 watts idle each, 24/7... just using a basic Kill-A-Watt device to check. Looking to get something more efficient sometime soon and cut down on the power bill.
 

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
Anyone check the power usage at start-up and while relatively idle? I want to replace 2 Dell cloud servers (FS12/CS24) that seem to pull a constant 165-170 watts idle each, 24/7... just using a basic Kill-A-Watt device to check. Looking to get something more efficient sometime soon and cut down on the power bill.
With 2x E5-4640 (E5-2665 equiv), 64GB DDR3-1600 1.35V & 2x SanDisk Extreme II 240GB SSDs I'm pulling 114 watts idle running Server 2012 R2. Peaked at 290W running a stress test.

Note that this is with the LSI RAID card plugged in but no 3.5" drives. Expect to add 3-6W per drive at idle.

You can probably get it 30-50W lower using a single lower-rated CPU, especially if it is a V2.
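
If it helps with the comparison, a rough annual-cost sketch using the idle figures in this thread (the electricity rate and per-drive draw are assumptions - plug in your own):

Code:
# Rough yearly running cost of the idle numbers mentioned above.
# RATE_PER_KWH and the 4.5W-per-HDD figure are assumptions.

RATE_PER_KWH = 0.12                 # assumed electricity price, USD/kWh

def yearly_cost(watts):
    kwh_per_year = watts * 24 * 365 / 1000
    return kwh_per_year * RATE_PER_KWH

dell_pair   = 2 * 167               # two FS12/CS24 boxes at ~165-170W idle each
this_box    = 114                   # dual E5-4640, 64GB, 2x SSD, no 3.5" drives
with_drives = this_box + 24 * 4.5   # add 3-6W per spinner; 4.5W assumed

for label, watts in [("2x Dell cloud servers", dell_pair),
                     ("this chassis, no HDDs", this_box),
                     ("this chassis, 24 HDDs", with_drives)]:
    print(f"{label}: {watts:>5.0f} W idle -> ${yearly_cost(watts):,.0f}/yr")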
 

paranoidAndroid

New Member
Oct 23, 2014
12
1
3
44
With 2x E5-4640 (E5-2665 equiv), 64GB DDR3-1600 1.35V & 2x SanDisk Extreme II 240GB SSDs I'm pulling 114 watts idle running Server 2012 R2. Peaked at 290W running a stress test.

Note that this is with the LSI RAID card plugged in but no 3.5" drives. Expect to add 3-6W per drive at idle.

You can probably get it 30-50W lower using a single lower-rated CPU, especially if it is a V2.
Thanks PL!
 

TD_Trader

Member
Feb 26, 2013
63
7
8
Just received my eight Gigabyte MD70-HB0 mainboards today! :) :) :)

I'll be tearing out the SuperMicro mainboard(s) and installing the Gigabyte MD70-HB0 mainboard. Hopefully I can re-use the same heatsinks, fans and air shroud. I'll probably start the upgrade this weekend (I'll take/post photos of the upgrade process). I'm still waiting on my CPUs and DDR4 memory. Haven't decided what to get yet; I'm leaning towards maybe an E5-2630v3, and I'm still not sure which DDR4 memory (possibly 16GB modules?) I'm going to use. Looking for suggestions and good deals/prices if anyone finds anything. Thanks!

Gigabyte MD70-HB0 dual Xeon Haswell-EP DDR4 motherboard - Imgur

I ordered mine about 2 months ago, but I believe Patrick also did a review about two weeks ago on the MD70-HB0 mainboard here:
http://www.servethehome.com/Server-...hb0-review-dual-10gbe-lsi-12gbps-sas-onboard/

Information on the MD70-HB0 mainboard can be found here:
GIGABYTE B2B Service - Server Motherboard - Socket 2011-3 - MD70-HB0 (rev. 1.2)
 

Hank C

Active Member
Jun 16, 2014
644
66
28
So I have moved my ASRock server board into this case - which header is for the power-on / power switch?
 

Triggerhappy

Member
Nov 18, 2012
51
11
8
Finally fired mine up last night with an E5-2609. Mine came with the latest 3.0a BIOS, so I could've popped in a V2 directly :(

These things must've been run with little RAM, because the location of the SAS card makes it impossible to insert a DIMM in the slot closest to the first PCIe slot. That would probably limit the number of DIMMs per CPU to 2.

Overall this thing seems whisper quiet for a server (once it calms down after initial boot). We'll see what happens when I populate the drives.