Intro & Built notes


Andreas

Member
Aug 21, 2012
127
1
18
Greetings to all.

I've been reading the site for a while for its great content, and being a software guy myself, I wanted to get into some hardware stuff again - time has definitely moved on since the '90s :) I really appreciate the good info I've gathered here.

As stated in one of the comments on an article by Patrick, I am in the process of building a few systems for a personal study project around I/O. While I use some benchmarking tools to isolate the performance of individual components, I am mostly focused on a balanced system for the intended workloads.

Phase 1 of this project is a workstation, followed by a small 2-socket server in phase 2. The components of the workstation arrived this week; so far it has been a lot of fun, and I had my fair share of surprises when hitting "the edge".

Some personal lessons learned so far:
  • Power supplies and many SSDs don't fit well together, regardless of the PSU's size (tried 600 W - 1500 W)
  • 4 LSI 9207-8i cards in one system produce more heat than the 6-core CPU when the system idles
  • Quite a few benchmark utilities don't scale well with many SSDs (while producing almost identical data for single-drive measurements, at 32 drives the gap widens in one case to 1:5)
  • Investigated 9 different SSDs for performance and power consumption. Lesson: benchmark kings might not be the best solution for your workload
  • PCIe 3.0 and LGA2011 are really a boon
  • SAS flash utilities are quite "picky" with regard to modern motherboards

As said, I value all the great content here and hope to spend some good time chatting about hardware. If there is interest, I can share some of my experiences.

regards,
Andreas

To close off:
Here is the first sign of life from my workstation.
The system is based on an i7-3930K and an ASUS X79 WS.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,517
5,812
113
Which SSDs and power supplies have you been having issues with? I have not had an issue thus far.
 

ehorn

Active Member
Jun 21, 2012
342
52
28
Thanks for sharing, Andreas.

Very nice results there... Yea, those cards get warm :)

In addition to Patrick's questions - what types of issues did you run into with the X79 and the 9207s?

Did you run into any Option ROM flakiness? Did you need some weird UEFI/BIOS settings to get these to play nicely?

Feel free to share some details of your trials and achievements in assembling such a nice storage array.

Pics are always nice too... (geek porn) :)

peace,
 

Andreas

Member
Patrick,
here are a few reasons why I ran into issues:

  • It's an issue only above 24 SSDs in one system - in my case 32, and later 48 in the 2-socket server.
  • All 9 different SSDs I measured for power consumption use only the 5V supply, not the 12V line.
  • Based on the performance behaviour, the intended performance and the price, I chose the Samsung 830 with 128GB. I compared it to the SanDisk Extreme (120, 240, 480GB), OCZ Vertex 3 (120, 240GB), OCZ Agility 4 (128, 256GB), OCZ Vertex 4 (128, 256GB) and Samsung 830 (128, 256GB).
  • As expected, power consumption varies. While the Samsungs have the highest consumption at full-speed write (ca. 860mA @5V), they are by far the most power-efficient at idle (1/3 vs. the OCZ Vertex 4 and Agility 4). Back to the Samsungs: 860mA @5V = 4.3W x 32 SSDs = 137.6W. At 5 volts!
  • Pretty much all power supplies in the 300 - 1500W range deliver the biggest part of their power on the 12V rail. 3.3V and 5V are usually limited to 20A or 25A; the one exception I could find has 40A (Silverstone Strider 1500W). The Corsair Pro Gold 850W has 125W at 5V.
  • All of my SSDs are directly connected to the 32 ports of the 4 LSI controllers. No port expander is in between to limit the data rates of individual SSDs (and hence their power consumption).
  • Idle power of my whole setup is in the 100W range, no issue there.
  • The Samsung SSDs need 440mA for full-speed read, which is within the power envelope of the 5V part of the current PSU when all 32 fire off (Enermax Platimax 600W). BTW, the OCZs need more power during read than the Samsungs.

As said, I found a solution for the 32-SSD setup by using a completely oversized PSU for my system, but I still need to find one for 48 SSDs. If someone knows of a PSU with 60-80A of 5V power, I would appreciate a small note.
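To make the 5V budget concrete, here is a small Python sketch. The per-drive currents are the ones measured in this thread; the 100 W combined 3.3V/5V cap is taken from the Platimax 600W spec sheet:

```python
# 5 V rail budget for N SSDs, using the per-drive currents measured
# in this thread (Samsung 830 128GB: ~0.86 A write, ~0.44 A read @ 5 V).

def rail_load_watts(n_drives, amps_per_drive, volts=5.0):
    """Total draw on one rail for n identical drives."""
    return n_drives * amps_per_drive * volts

for n in (24, 32, 48):
    write_w = rail_load_watts(n, 0.86)   # full-speed write
    read_w = rail_load_watts(n, 0.44)    # full-speed read
    print(f"{n} drives: {write_w:.1f} W write / {read_w:.1f} W read on 5 V")

# The Platimax 600W caps 3.3 V + 5 V combined at 100 W: 32 drives
# reading (~70 W) fit, 32 drives writing (~138 W) do not.
```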

rgds,
Andreas
 

ehorn

Active Member
Backplanes typically receive Molex power connectors - and Molex power almost always comes from the 12V rail (at least I thought)... Shouldn't a single (high current) rail be sufficient to deliver the necessary wattage?

<scratches head> Watts is watts (A*V)... 12V or 5V or 3.3V
 

Andreas

Member
Hi ehorn.
Thanks for your comments.

On the LSI cards: warm would be an understatement - I burned a finger on the heatsink :) Had to add an extra fan just for the LSI cards.

Compatibility of the X79 and the LSI cards seems to be fine so far - I had no issues getting the cards recognized and running on the motherboard. To compare CPU load at full speed, I reprogrammed the firmware to IR mode. The screenshot above is with the 4 LSI cards in IR mode, each card running its own RAID 0 with 8 SSDs. From initial checks, this had the lowest latency and CPU load at 14 GB/s read.
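As a quick cross-check of that aggregate number (assuming ~440 MB/s per Samsung 830 behind the HBA, the single-drive figure measured in this post):

```python
# Aggregate read throughput: 4 HBAs x 8 SSDs at the per-drive read
# speed observed behind the 9207-8i (~440 MB/s for the Samsung 830).

drives = 4 * 8                   # 4x LSI 9207-8i, 8 drives each
per_drive_mbs = 440              # MB/s per drive behind the HBA
aggregate_gbs = drives * per_drive_mbs / 1000.0
print(f"{drives} x {per_drive_mbs} MB/s = {aggregate_gbs:.2f} GB/s")  # ~14.08 GB/s
```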

I still have the 1333 DDR3 RAM in the system, which provides ca. 34 GB/s memory bandwidth to applications (ca. 80% efficiency). 1600 DDR3 is ordered to check whether there is a difference. Probably not in the transfer rates themselves, but in the bandwidth left over for the CPU to do something useful with all that data.
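The ~80% efficiency figure falls out of the quad-channel peak bandwidth; a small sketch (the formula is the standard transfer rate x 8 bytes x channels, and 34 GB/s is the measured value above):

```python
# Quad-channel LGA2011 peak memory bandwidth vs. the measured value.
# Peak = transfer rate (MT/s) x 8 bytes per transfer x channels.

def peak_bandwidth_gbs(mts, channels=4, bytes_per_transfer=8):
    return mts * bytes_per_transfer * channels / 1000.0

peak_1333 = peak_bandwidth_gbs(1333)    # ~42.7 GB/s theoretical
efficiency = 34.0 / peak_1333           # ~0.80, matching the quoted ~80%
print(f"DDR3-1333 peak: {peak_1333:.1f} GB/s, efficiency {efficiency:.0%}")
print(f"DDR3-1600 peak: {peak_bandwidth_gbs(1600):.1f} GB/s")
```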

There is one glitch with the LSI 9207-8i (in both IT and IR mode). My Samsungs lose about 20% of their peak read performance, even when the SSD is accessed as a single drive. Read speed of the 830 128GB goes down from 510 MB/s to 440 MB/s. I haven't had time to look into this more deeply, but will do so. Any idea?

The Option ROM thing was supposed to be straightforward. On 2 modern motherboards the DOS version refused to load, giving the PAL error message. I had to use an older system to reprogram the 4 LSI cards, one by one (slot limitations). It could have been so easy with the -fwall option on the original system.

Here is one of the 32 data highways during assembly - sorry for the quality:


rgds,
Andreas
 

john4200

New Member
Jan 1, 2011
152
0
0
As said, I found a solution for the 32-SSD setup by using a completely oversized PSU for my system, but I still need to find one for 48 SSDs. If someone knows of a PSU with 60-80A of 5V power, I would appreciate a small note.
What about using two PSUs, each supplying 30A @5V ?
 

Andreas

Member
Backplanes typically receive Molex power connectors - and Molex power almost always comes from the 12V rail (at least I thought)... Shouldn't a single (high current) rail be sufficient to deliver the necessary wattage?

<scratches head> Watts is watts (A*V)... 12V or 5V or 3.3V
Correct: watt is watt, but PSUs have deliberate limits on the different voltage rails.
Take the Enermax Platimax 600W as an example:
http://www.enermax.com/home.php?fn=eng/product_a1_1_2&lv0=1&lv1=52&no=182
The combined 3.3V and 5V are specced at 100W maximum, the 12V rail at 600W.

In the "old" days of high-powered graphics cards the load was on 12V, which such a PSU serves fine. In a system "optimized" for many SSDs which ONLY use 5V, the limit is much lower. It's pretty much the same with all PSUs.

rgds,
Andy
 

Andreas

Member
What about using two PSUs, each supplying 30A @5V ?
This is a route I have started to look at. There seem to be issues with the minimum load such a PSU needs (on the 12V rail) - most of them are switching PSUs which need a minimum load. Second, a PSU normally only starts with a connected motherboard. If possible, I would prefer a single-PSU solution, but as a last resort, a 2-PSU setup might be the only option.

rgds,
Andreas
 

john4200

New Member
This is a route I have started to look at. There seem to be issues with the minimum load such a PSU needs (on the 12V rail) - most of them are switching PSUs which need a minimum load. Second, a PSU normally only starts with a connected motherboard. If possible, I would prefer a single-PSU solution, but as a last resort, a 2-PSU setup might be the only option.
I've never heard of a minimum 12V power draw for a PSU to operate. I know the efficiency will be lower at low power draw, but I think the PSU would still operate.

You can get a wire harness so that you can operate two PSUs with one motherboard.

I just noticed that the Silverstone ST1200 also has 40A @ 5V, and it is only $195 at Newegg ($165 after rebate). Two of those would seem to fit your needs, wouldn't you say?

http://www.newegg.com/Product/Product.aspx?Item=N82E16817256041
 

ehorn

Active Member
Correct: watt is watt, but PSUs have deliberate limits on the different voltage rails.
Take the Enermax Platimax 600W as an example:
http://www.enermax.com/home.php?fn=eng/product_a1_1_2&lv0=1&lv1=52&no=182
The combined 3.3V and 5V are specced at 100W maximum, the 12V rail at 600W.

In the "old" days of high-powered graphics cards the load was on 12V, which such a PSU serves fine. In a system "optimized" for many SSDs which ONLY use 5V, the limit is much lower. It's pretty much the same with all PSUs.

rgds,
Andy
Right, my bad... Did not stop and consider the draw, rather than the supply (i.e. 5V vs 12V)...

You gotta love ~ 100W for a race-car rig at idle.

I have no idea why you are seeing such drops on peak BW. I assume you are measuring this drop against Intel RST?

Any plans for a chassis for that beast?

Nice component selection. Love the pic...

peace,
 

Andreas

Member
I've never heard of a minimum 12V power draw for a PSU to operate. I know the efficiency will be lower at low power draw, but I think the PSU would still operate.

You can get a wire harness so that you can operate two PSUs with one motherboard.

I just noticed the the Silverstone ST1200 also has 40A @ 5V, and it is only $195 at newegg ($165 after rebate). Two of those would seem to fit your needs, wouldn't you say?

http://www.newegg.com/Product/Product.aspx?Item=N82E16817256041
Thanks for the link to the 1200W version. I hadn't seen it on the typical price comparison sites here in Austria. Will check again.

Could you elaborate a bit about the wire harness? Haven't heard about that.

thanks,
Andreas
 

Andreas

Member
Right, my bad... Did not stop and consider the draw, rather than the supply (i.e. 5V vs 12V)...

You gotta love ~ 100W for a race-car rig at idle.

I have no idea why you are seeing such drops on peak BW. I assume you are measuring this drop against Intel RST?

Any plans for a chassis for that beast?

Nice component selection. Love the pic...

peace,
I just haven't had enough time for my pet project to look deeper into these issues. I start vacation tomorrow. The comparison is with the SSD connected to either an X79 or a Z77 6Gbit SATA port, with Intel drivers.

Case:
To keep things somewhat portable, a Sharkoon 12 case is the current home. Sharkoon also has some nice 6-slot SATA cages for a 5.25" bay with 6 passive SATA connectors - needed for my directly connected lanes.
Fans are Noctua 120mm, very quiet.

During assembly:
The case could in theory handle 66 SSDs (11 x 6-slot cages). The current limitations are the 4 x 8-port SAS controllers and the PSU limits. While tempting, I will not go down this road with this system. The PCIe lanes would easily provide enough bandwidth, but the whole memory bus / CPU balance would collapse under any useful workload. But I am tinkering with the 2-socket server for such a setup :)

On the left side the different SSDs for performance evaluation and on the right side the stack of Samsungs for the rig.


rgds,
Andy
 

cactus

Moderator
Jan 25, 2011
830
75
28
CA
You are going to have a hard time finding a single PSU to supply that much 5V unless it is a dedicated 5V PSU. Even looking at Supermicro's SC216 2.5" chassis, their 1200W 1+1 PSU only gives you 250W (50A) on the 5V rail. Consumer SSDs use 5V to allow them to work with laptops, which don't always provide 12V for HDDs/SSDs. If you look at enterprise SSDs like the STEC ZeusRAM or the Seagate Pulsar, they use the 12V rail along with the 5V rail.

12V has been heavily used for higher-power computer components like video cards and the CPU because you have less resistive loss: for a fixed delivered power P = I*V, a higher voltage means a lower current, and the loss in the wiring is P_loss = I^2 * R. IIRC Intel CPUs are powered exclusively off the 8-pin 12V connector.
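To put numbers on the resistive-loss point (the 0.01-ohm wiring resistance is an illustrative assumption, not a measured value; the 138 W load is the 32-SSD write figure from earlier in the thread):

```python
# Why 12 V wins for high-power loads: for a fixed delivered power P,
# the current is I = P / V and the wiring dissipates P_loss = I^2 * R,
# so the loss falls with the square of the rail voltage.

def wiring_loss_watts(power_w, volts, wire_ohms=0.01):
    current = power_w / volts           # I = P / V
    return current ** 2 * wire_ohms     # P_loss = I^2 * R

for v in (12.0, 5.0, 3.3):
    loss = wiring_loss_watts(138, v)    # 138 W ~ 32 SSDs writing
    print(f"{v:>4} V rail, 138 W load: {loss:.1f} W lost in the wiring")
```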

ehorn, the 4-pin Molex connector gives you 5V and 12V. The SATA/SAS power connector is specced to supply 3.3V, 5V, and 12V, with micro SATA using only 3.3V and 5V.
 

cactus

Moderator
Andreas, another option would be to make a couple of 2.5" drive JBODs, each with its own PSU.
 

Andreas

Member
Andreas, another option would be to make a couple of 2.5" drive JBODs, each with its own PSU.
Cactus,
I thought about that but could not find a proper case. Most 8-drive cases have 2 eSATA connectors with port expanders - this would kill the I/O speed I am looking for. Another topic is overall power efficiency, which I currently enjoy by not using external enclosures.

Would you know of an 8-drive case with 8 individual SATA 6G connectors? Any ideas welcome.

thanks,
Andy
 

Patrick

Administrator
Staff member
First... wow!!! Looks awesome! On the below:

There is one glitch with the LSI 9207-8i (in both IT and IR mode). My Samsungs lose about 20% of their peak read performance, even when the SSD is accessed as a single drive. Read speed of the 830 128GB goes down from 510 MB/s to 440 MB/s. I haven't had time to look into this more deeply, but will do so. Any idea?
Is that an issue due to the lack of TRIM? That would impact even a single drive.
 

Patrick

Administrator
Staff member
I think you could get a second SC216 with a JBOD card and wire it to the main system. Then you basically use SFF-8088 cables plus SFF-8088 to SFF-8087 converters to do the runs between machines.
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
That is really great throughput for a single-CPU system. I'm more and more impressed with the latest LGA2011 Intel CPUs.

You might want to give up on running that many SSDs in a chassis not designed for it. Pick up a JBOD case and you'll never have to think about power again.
I have two 24-bay Supermicro 216 chassis and they have been great. Their tiny 900W power supplies put out only 4A on the 5V rail (20 watts), but I've never had a problem running 24 SSDs at full throttle. In fact, for a brief time I jury-rigged 38 drives off one of these power supplies by running half of the Molex power lines to a second chassis.
The SAS/SATA backplanes in these cases are fed by Molex power connectors, and I have to assume the backplane converts some 12V power, of which the 216 has plenty, down to 5V. After all, a Supermicro 216 chassis filled with 24 "enterprise" 2.5" drives could easily pull 12A on the 5V bus (along with another 12A on the 12V bus), which is more than double the 5V rating of the power supply.
