Intro & Build notes


Andreas

Member
Aug 21, 2012
@jcl333,
first - I wish you a Happy New Year!

Sorry for the delay in responding, had been away and busy.

To address your questions:

1) I realized that this community has a strong affinity for LSI, but I do not have any preset preferences with regard to RAID controllers. Most of the controllers I used in the past were fine for the job at hand, but the speed and performance of multiple SSDs put a different level of stress on all parts of the chain (hardware and software alike).

For the high-performance workstation, LSI proved to be the better solution:
1) When I started the project, it was the only PCIe 3.0 x8 capable controller - a prerequisite in my case, as 8 fast SSDs would saturate a PCIe 2.0 x8 link (see the quick arithmetic sketch after this list)
2) The LSI 9207-8i controllers were reasonably priced
3) The LSI 2308 RoC is a good performer
4) The drivers for WS2012 are rock solid, even at 2.2 million IOPS or more than 20 GB/sec sequential transfers (I did not know that upfront, but it proved true in hindsight)
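
A quick back-of-the-envelope check of that saturation claim (a minimal sketch; the ~500 MB/s per SSD and ~80% usable link efficiency are my assumptions, not figures from this build):

```python
# Rough check: do 8 fast SATA SSDs saturate a PCIe 2.0 x8 link but not a PCIe 3.0 x8 link?
# Assumed values (not from the build): ~500 MB/s per SSD, ~80% usable link efficiency.

PCIE2_PER_LANE = 0.5    # GB/s per lane (5 GT/s with 8b/10b encoding)
PCIE3_PER_LANE = 0.985  # GB/s per lane (8 GT/s with 128b/130b encoding)
LANES = 8
EFFICIENCY = 0.8        # rough allowance for protocol/packet overhead
SSD_GBPS = 0.5          # ~500 MB/s sequential per SATA 6G SSD
NUM_SSDS = 8

ssd_aggregate = NUM_SSDS * SSD_GBPS                 # ~4.0 GB/s from the drives
pcie2_usable = PCIE2_PER_LANE * LANES * EFFICIENCY  # ~3.2 GB/s usable
pcie3_usable = PCIE3_PER_LANE * LANES * EFFICIENCY  # ~6.3 GB/s usable

print(f"8 SSDs aggregate  : {ssd_aggregate:.1f} GB/s")
print(f"PCIe 2.0 x8 usable: {pcie2_usable:.1f} GB/s -> link is the bottleneck")
print(f"PCIe 3.0 x8 usable: {pcie3_usable:.1f} GB/s -> headroom left")
```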

The reason to go for the Adaptec was easy: it seemed to be a better fit for the power/port ratio I was trying to optimize. Most of the energy-saving 1155 boards don't have multiple PCIe x8 slots, and multiple LSI controllers would consume much more energy than the rest of the system. So the goal was to use only one controller with as many ports as possible. As I am probably among the first batch of customers using this new controller, I can't claim that I based my decision on previous experience. I hoped and expected that the new controller would be a good fit for the purpose, at a reasonable price.

The system has now been running for 6 weeks, and the controller has not shown any issues during operation with my setup.
When I tested the controller in the high-performance workstation before I built it into the server, I connected it to 24 SSDs - just out of curiosity. Pushed to the limits, the controller did not exhibit the same stability as the LSI controller. It is hard to say from the superficial check I did, but my assumption is that the driver hasn't undergone the same level of regression testing as the LSI drivers, which have been on the market much longer. As an example: when pushed to max IOPS, in some cases the driver disconnected the controller from the OS and the only remedy was to reboot the system. That test was way beyond the usage envelope of my intended use, though, and all experience so far with 24 hard disks is very positive. It does what it is meant to do.

In my case, I configured 2 RAID 5 volumes defined in the controller to reduce the load on the CPU (keeping the lower power envelope when transferring data). This decision is based on the observation that the power consumption of the Intel CPU fluctuates more strongly between idle and fully loaded than the controller's does.
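
To illustrate what gets offloaded by defining the RAID 5 volumes in the controller, here is a minimal sketch of the XOR parity a RAID 5 set computes on every full-stripe write; the 12-disks-per-group split and the 64 KiB stripe unit are assumptions for illustration, not the actual array settings:

```python
# Minimal RAID 5 parity illustration: parity = XOR of all data chunks in a stripe.
# Computing this on the host costs CPU cycles and memory bandwidth; a hardware RoC
# does it on the card instead. Assumed config: 12 drives per group, 64 KiB stripe unit.
import os
from functools import reduce

DRIVES_PER_GROUP = 12
STRIPE_UNIT = 64 * 1024  # bytes per drive per stripe

def xor_chunks(a, b):
    """Byte-wise XOR of two equally sized chunks."""
    return (int.from_bytes(a, "little") ^ int.from_bytes(b, "little")).to_bytes(len(a), "little")

def raid5_parity(data_chunks):
    """Parity of one stripe is the XOR of all its data chunks."""
    return reduce(xor_chunks, data_chunks)

# One full stripe: 11 data chunks plus 1 parity chunk spread over 12 drives.
data = [os.urandom(STRIPE_UNIT) for _ in range(DRIVES_PER_GROUP - 1)]
parity = raid5_parity(data)

# Any single lost chunk can be rebuilt by XOR-ing the surviving chunks with the parity.
assert raid5_parity(data[1:] + [parity]) == data[0]
print(f"{DRIVES_PER_GROUP - 1} data + 1 parity chunk per stripe, "
      f"{(DRIVES_PER_GROUP - 1) * STRIPE_UNIT // 1024} KiB of user data per full stripe")
```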

I did not have any airflow in the case where the RoC of the Adaptec is located. Consequently, the temperature of the RoC was reported by the RAID utilities to be in the 85 degree Celsius range. While that seemed hot to me, the utility reported the temperature to be "ok". Because I was concerned about long-term issues at this temperature, I installed a tiny 4 cm fan on an improvised three-bracket mount to direct the airflow over the chip and connected the fan to the motherboard. Energy consumption: not measurable. The temperature dropped by 40 degrees and is now at 45 degrees, which I like much more.

I haven't played with JBOD, software RAID or RAID 6 configurations yet, but the performance with RAID 5 is more than sufficient to saturate my installed network bandwidth (4 GBit/sec).
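
For scale, the 4 GBit/sec figure works out to roughly 500 MB/sec, which even one of the two RAID 5 groups should comfortably exceed on sequential transfers (sketch only; the ~150 MB/s per-disk streaming rate and the 12-disk group size are my assumptions):

```python
# Why a RAID 5 group of spinning disks easily saturates a 4 Gbit/s network link.
# Assumed values: ~150 MB/s sequential per HDD, 12 disks per RAID 5 group.

NETWORK_GBIT = 4
link_mb_s = NETWORK_GBIT * 1000 / 8                 # ~500 MB/s on the wire

DISKS_PER_GROUP = 12
PER_DISK_MB_S = 150
# Large sequential reads stream from all 11 data members of the group in parallel.
array_mb_s = (DISKS_PER_GROUP - 1) * PER_DISK_MB_S  # ~1650 MB/s per group

print(f"4 Gbit/s link  : ~{link_mb_s:.0f} MB/s")
print(f"one RAID 5 set : ~{array_mb_s:.0f} MB/s sequential -> the network is the bottleneck")
```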

hope that helps & regards,
Andy
 

jcl333

Active Member
May 28, 2011
@jcl333,
first - I wish you a Happy New Year!

Sorry for the delay in responding, had been away and busy.
No worries, I must apologize myself: I only just saw your reply. I am having trouble getting the message-reply-notification thingy working :-(

To address your questions:

1) I realized that this community has a strong affinity for LSI, but I do not have any preset preferences with regard to RAID controllers. Most of the controllers I used in the past were fine for the job at hand, but the speed and performance of multiple SSDs put a different level of stress on all parts of the chain (hardware and software alike).
Fair enough. The RAID controllers I have used the most are actually Adaptec, or the ones built into HP and Dell servers, some of which I am sure are LSI anyway.

For the high-performance workstation, LSI proved to be the better solution:
1) When I started the project, it was the only PCIe 3.0 x8 capable controller - a prerequisite in my case, as 8 fast SSDs would saturate a PCIe 2.0 x8 link
2) The LSI 9207-8i controllers were reasonably priced
3) The LSI 2308 RoC is a good performer
4) The drivers for WS2012 are rock solid, even at 2.2 million IOPS or more than 20 GB/sec sequential transfers (I did not know that upfront, but it proved true in hindsight)
Yup, I am considering one of these.

The reason to go for the Adaptec was easy: it seemed to be a better fit for the power/port ratio I was trying to optimize. Most of the energy-saving 1155 boards don't have multiple PCIe x8 slots, and multiple LSI controllers would consume much more energy than the rest of the system. So the goal was to use only one controller with as many ports as possible. As I am probably among the first batch of customers using this new controller, I can't claim that I based my decision on previous experience. I hoped and expected that the new controller would be a good fit for the purpose, at a reasonable price.
Hmm, I see that LSI has 16-, 20-, and 24-port RAID controllers on their site; was that not true at the time you looked? For the HBAs they don't seem to go past 8 ports except for the PCIe 2.0 cards.

This is something I am considering as well. I am wondering if something like the 24-port model just has a built-in expander, but it is looking like it doesn't.

I am concerned about power consumption and heat production. The energy saving features of the Adaptec are part of what makes it attractive.

The system has now been running for 6 weeks, and the controller has not shown any issues during operation with my setup.
When I tested the controller in the high-performance workstation before I built it into the server, I connected it to 24 SSDs - just out of curiosity. Pushed to the limits, the controller did not exhibit the same stability as the LSI controller. It is hard to say from the superficial check I did, but my assumption is that the driver hasn't undergone the same level of regression testing as the LSI drivers, which have been on the market much longer. As an example: when pushed to max IOPS, in some cases the driver disconnected the controller from the OS and the only remedy was to reboot the system. That test was way beyond the usage envelope of my intended use, though, and all experience so far with 24 hard disks is very positive. It does what it is meant to do.

In my case, I configured 2 RAID 5 volumes defined in the controller to reduce the load on the CPU (keeping the lower power envelope when transferring data). This decision is based on the observation that the power consumption of the Intel CPU fluctuates more strongly between idle and fully loaded than the controller's does.
Hmmm, but wouldn't a 22nm Ivy Bridge CPU be more efficient than the RoC on the Adaptec card, which is probably 40nm or larger?

I did not have any airflow in the case where the RoC of the Adaptec is located. Consequently, the temperature of the RoC was reported by the RAID utilities to be in the 85 degree Celsius range. While that seemed hot to me, the utility reported the temperature to be "ok". Because I was concerned about long-term issues at this temperature, I installed a tiny 4 cm fan on an improvised three-bracket mount to direct the airflow over the chip and connected the fan to the motherboard. Energy consumption: not measurable. The temperature dropped by 40 degrees and is now at 45 degrees, which I like much more.
That is a big difference. I have been talking with Adaptec about some of these things; they actually have a fan kit for the previous generation of cards, and might come out with one for this version as well. In the meantime they encouraged just putting a fan on there yourself - they said there are even standard mounting holes on the heat sink. But it sounds like it takes very little air movement to keep this under control, good to hear.

I haven't played with JBOD, software RAID or RAID 6 configurations yet, but the performance with RAID 5 is more than sufficient to saturate my installed network bandwidth (4 GBit/sec).
Yeah, since I will be using either consumer drives or maybe WD Red drives, I want to have RAID 6, and might even play with ZFS.

hope that helps & regards,
Andy
Thanks very much for your reply.

-JCL
 

Andreas

Member
Aug 21, 2012
Haven't posted for a while in this thread.

While most systems here are configured for storage space and I/O speed, I thought you might be interested in this system from the compute server end of the spectrum.

4 x NVIDIA GTX Titans

10,752 CUDA cores
24 GB GDDR5 RAM
> 1 TB/sec aggregated memory bandwidth (across the cards)
approx. 6 TFlop/s (double precision), approx. 18 TFlop/s (single precision)
(This is roughly comparable to the #1 position of the Top500 list in 2000, the ASCI White machine, which cost approx. 110 million US$)
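
The aggregate figures above follow directly from the per-card GTX Titan (GK110) specs; here is a small sanity-check sketch using the published per-card numbers (2688 CUDA cores, 6 GB GDDR5, 288.4 GB/s, ~1.5/4.5 TFlop/s) rather than measurements from this system:

```python
# Sanity check of the aggregate figures for 4x GTX Titan (GK110).
# Per-card values are the published specs, not measurements from this box.
NUM_CARDS = 4
CUDA_CORES = 2688      # per card
MEM_GB = 6             # GDDR5 per card
MEM_BW_GB_S = 288.4    # per card
DP_TFLOPS = 1.5        # per card, with full-rate FP64 enabled in the driver
SP_TFLOPS = 4.5        # per card

print(f"CUDA cores: {NUM_CARDS * CUDA_CORES}")                             # 10,752
print(f"GDDR5     : {NUM_CARDS * MEM_GB} GB")                              # 24 GB
print(f"Memory BW : {NUM_CARDS * MEM_BW_GB_S / 1000:.2f} TB/s aggregate")  # ~1.15 TB/s
print(f"FP64      : ~{NUM_CARDS * DP_TFLOPS:.0f} TFlop/s")                 # ~6 TFlop/s
print(f"FP32      : ~{NUM_CARDS * SP_TFLOPS:.0f} TFlop/s")                 # ~18 TFlop/s
```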

When PCIe 3.0 support is turned on, each card can read/write at about 11 GB/sec over the PCIe bus.
For full concurrent PCIe bandwidth of all 4 cards, a dual-socket SB machine is needed, with its 80 PCIe lanes and better main memory bandwidth.
(With 1600 MHz DDR3, my dual-socket SB delivers approx. 80 GB/sec in the STREAM benchmark.)

So, depending on the GPU workload, an LGA 2011 system might be ok (when compute or device-memory bound), or a dual-SB board is needed when I/O bound.



cheers,
Andy