Intro & Build notes


Andreas

Member
Aug 21, 2012
This thread has not been active for a while, so I'd like to provide an update if people are interested:

1) The bigger workstation now has 256GB of ECC RAM and runs 48 SSDs on six LSI 9207-8i HBAs when maximum I/O performance is needed. Unfortunately, the six HBAs consume all six PCIe slots of the ASUS motherboard, leaving no slot for the Mellanox 40GBit card - a pity. So for day-to-day work I run the workstation in a 4 HBA / 32 SSD configuration. The two E5-2687W CPUs are great: whatever workload I throw at them, they deliver sustained performance at very high levels. A joy to use in a workstation.

1a) The Corsair H100 water coolers are effective. I managed to keep CPU temps under 60 degrees Celsius with all workloads. To give you some perspective: Intel's Linpack - known for its heat generation - tops out around 53-55 degrees.

1b) With 6 HBAs in one system, heat in the PCIe slot area becomes an issue. I solved it with a relatively slow-spinning 20 cm fan.

2) Scalability with 4 HBAs on the ASUS Z9PE-D16 is excellent, levelling off somewhat when moving to 6 HBAs. My current guess is that my apps are hitting a QPI limit: when I switch off the power-save mode of the QPI interconnect, application performance goes up.

3) The power supply issue is solved (more than 24 SSDs "kill" most power supplies). Since I changed to the SilverStone Strider PSU with its 40 amps on the 5 volt rail, the sudden shutdowns are a thing of the past.

4) The system based on the ASUS P9X79 WS/i7-3930K with non-ECC RAM modules is far less stable than the dual-socket machine with ECC memory. I ran applications with built-in self-checking algorithms for days and weeks: at least one error per day was detected, versus zero on the dual-socket machine. If you need rock-solid stability, be conscious of your component selection.

5) The choice of the Samsung 830 was in hindsight a good one. The SSDs are very reliable, and stable and predictable from a performance perspective. I've also got 4 x Samsung 840 (256GB), but it is too early to judge their characteristics.

6) Today another nice controller arrived: the new Adaptec 72405 with 24 SAS ports and 6 cables (SFF-8643 to 4x SATA). As soon as I have time, I'll give this controller with its new RoC a try. Out of curiosity, I intend to run it once with the Samsung SSDs, but its final destination will be the new home server with 24 x 3TB HDDs (a 32GB E3-1245v2 system). The goal is to get the idle power consumption of that system as low as possible. I won't be able to get under 100 watts (with all disks spinning), but anything above 150 watts would be disappointing. The AFM-700 flash module (the flash-based successor to the classic BBU) will arrive tomorrow.

7) a few pictures

The new Adaptec controller is a bit larger than the LSI 9207-8i - 24 ports vs. 8 ports.


The SFF-8643 connectors are rather square in shape. Not sure why it was necessary to replace SFF-8087, but so be it.


rgds,
Andy
 

Patrick

Administrator
Staff member
Dec 21, 2010
Andy,

Thanks for the update! The change to SFF-8643 and SFF-8644 is being made over the next two years or so to allow higher-density solutions, especially with low-profile cards.

Super interested in how you like the Adaptec card. The 6000 series saw a huge adoption drop-off from the 5000 series so it will be interesting to see if Adaptec has changed course.

BTW, here is a crazy one for you: I have been testing single drives on an LSI SAS 2308 controller and have been seeing poor performance. Any thoughts would be appreciated.

Cheers,
Patrick
 

Andreas

Member
Aug 21, 2012
BTW, here is a crazy one for you: I have been testing single drives on an LSI SAS 2308 controller and have been seeing poor performance. Any thoughts would be appreciated.
Patrick,
what is your iometer performance with the SSDs?
Normal, or lower as well?
I got much more consistent performance metrics with iometer than with the benchmark apps when running against the LSI cards. If iometer reports normal performance, I'd rather assume a software issue in those apps (too much optimization for certain desktop SATA controllers).
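For a third data point next to iometer and the benchmark apps, a crude 4K random-write loop is easy to script. The Python sketch below is only a rough sanity check, not a calibrated benchmark - the file name and sizes are placeholders, and without unbuffered I/O the OS cache will make the numbers optimistic:

import os, random, time

PATH = "testfile.bin"       # placeholder: put this on the volume under test
SIZE = 1 << 30              # 1 GiB test area
BLOCK = 4096                # 4 KiB writes, matching the iometer runs
IOS = 20000                 # number of random writes to issue

with open(PATH, "wb") as f:               # preallocate the test area
    f.truncate(SIZE)

buf = os.urandom(BLOCK)
fd = os.open(PATH, os.O_RDWR | getattr(os, "O_BINARY", 0))   # no O_DIRECT/FILE_FLAG_NO_BUFFERING here
start = time.time()
for _ in range(IOS):
    os.lseek(fd, random.randrange(SIZE // BLOCK) * BLOCK, os.SEEK_SET)
    os.write(fd, buf)
os.fsync(fd)                              # flush once at the end
elapsed = time.time() - start
os.close(fd)
print(f"{IOS / elapsed:,.0f} IOPS, {IOS * BLOCK / elapsed / 1e6:.1f} MB/s")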

rgds,
Andy
 

Patrick

Administrator
Staff member
Dec 21, 2010
I ran IOMeter a few times yesterday and saw similar results on 4K random writes. I also tried ATTO and AS SSD, and those look strange as well in the smaller transfer size range.
 

PigLover

Moderator
Jan 26, 2011
Andreas - do you know if that Adaptec 72405 runs SATA-III/6Gbps when connected to SATA devices? Or does it still fall back to SATA-II when running in SATA mode like the Adaptec 6-series did?

Could be very interesting info since few to none of the readers here actually run SAS devices.
 

Patrick

Administrator
Staff member
Dec 21, 2010
Interesting... so I set up a new test (remotely) using 4K, 100% write, 100% random. The Samsung 830 is showing 16K IOPS and 60MB/s - much closer to what I would expect. Still not able to get those numbers higher, though.
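As a quick plausibility check, 16K IOPS at a 4KB transfer size works out to roughly the reported throughput:

# 4K random writes: IOPS x block size ~ throughput
iops = 16_000
block = 4096                              # bytes per I/O
print(f"{iops * block / 1e6:.0f} MB/s")   # ~66 MB/s, consistent with the ~60 MB/s reading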
 

Patrick

Administrator
Staff member
Dec 21, 2010
4 workers, outstanding I/O of 4 each:
Samsung 830 - 16K IOPS
Samsung 840 Pro - 23K IOPS
SanDisk Extreme - 18K IOPS
Kingston V+200 - 14.5K IOPS
Vertex 4 - 17K IOPS

On a relative scale, those numbers would look about right for a QD1 test.
 

Andreas

Member
Aug 21, 2012
Tonight I had some time to give the new Adaptec 72405 RAID controller a test. I connected 24 x Samsung 830 128GB SSDs to it to check different configurations.

Basically, I removed the 3 LSI controllers connected to the PCIe subsystem of CPU0 (slots 1, 2 and 4 on the ASUS motherboard) and left the other 3 LSI controllers in slots 3, 5 and 6, connected to CPU1.

Nothing spectacular to report about the BIOS utility. Setting up a RAID is simple and easy - I like it more than the LSI way of doing it. Benchmarkers beware: there is an item called Selectable Performance Mode, which defaults to "Dynamic". "Dynamic" is full of surprises :) I quickly switched to another mode called "Big Block Bypass" to get repeatable and better performance (for my usage pattern).

24 ports provide lots of connectivity. To sum it up, the Adaptec seems to be optimized for HDD usage, not SSD usage.

12 reasonably fast SSDs like my Samsungs saturate the PCIe 3.0 x8 bus; speeds above 6GB/sec are easy to achieve on sequential transfers. On random I/O the Adaptec seems to be limited by a less powerful on-board CPU compared to the LSI 2308: I could not achieve more than 185,000 IOPS (4KB blocks) in any configuration, while 450k IOPS are possible with the LSI 9207-8i.
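As a back-of-envelope check, here is the PCIe 3.0 x8 ceiling next to the aggregate sequential throughput of 12 SATA SSDs (the ~500 MB/s per-drive figure is an assumed value for a drive like the Samsung 830):

# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding, before protocol overhead
lanes = 8
per_lane_GBs = 8e9 * (128 / 130) / 8 / 1e9     # ~0.985 GB/s per lane
bus_GBs = lanes * per_lane_GBs                 # ~7.9 GB/s raw for an x8 link
ssds = 12
per_ssd_GBs = 0.5                              # assumption: ~500 MB/s sequential per SATA SSD
print(f"bus ceiling ~{bus_GBs:.1f} GB/s, {ssds} SSDs ~{ssds * per_ssd_GBs:.1f} GB/s")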

Saturation with fast SSDs occurs quite early: the first SSD provides 80k IOPS, two SSDs reach 140k IOPS, and the third SSD hits the ceiling of the on-board RoC. From then on, adding more SSDs doesn't provide higher IOPS. With hard disks, I assume the controller scales much better all the way through the 24-port configuration, although I did not check this today.
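A toy model of that scaling, using the figures from this post (the real curve is slightly sub-linear - two drives measured 140k rather than 2 x 80k):

PER_SSD_IOPS = 80_000      # one Samsung 830 on its own
ROC_CEILING = 185_000      # 4K IOPS ceiling observed on the 72405

def aggregate_iops(n_ssds: int) -> int:
    # aggregate IOPS grow per drive until the controller's RoC becomes the limit
    return min(n_ssds * PER_SSD_IOPS, ROC_CEILING)

for n in range(1, 6):
    print(n, aggregate_iops(n))    # flat from the third SSD onwards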

The driver isn't as solid as the LSI driver.

There seem to be some hidden reserves in the PCIe 3.0 x8 connector: in one test, iometer kept reporting more than 7 GB/sec, and this was repeatable. Amazing, isn't it? ;)


I have a ton of iometer screenshots of different configurations: raw and through NTFS, RAID and non-RAID, dual-layer configurations, etc.

With 8 SSDs in a RAID 0 configuration on the Adaptec and 8 SSDs connected to one LSI 9207-8i, creating a single 34 GB file took 50-60 seconds on the Adaptec and about 20 seconds with the LSI controller. Reading the same file took 19 seconds on the Adaptec and 16 seconds on the LSI. Just a rough guideline, nothing scientific.
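Converted into rough throughput figures (simple arithmetic, taking the midpoint of the 50-60 second range for the Adaptec write):

size_GB = 34
for label, seconds in [("Adaptec write", 55), ("LSI write", 20),
                       ("Adaptec read", 19), ("LSI read", 16)]:
    print(f"{label}: ~{size_GB / seconds:.1f} GB/s")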

I do think that the part of the driver responsible for writing would still benefit from some further development.

Connecting 24 SSDs to one RAID controller with an x8 connector is probably beyond a useful configuration and is rather an edge case that puts severe stress on the controller. I look forward to checking this controller with the hard disks in my home server; I expect it will be much more balanced.

Under NTFS, saturating the PCIe bus is easy to achieve. RAID functionality for this run was handled entirely by the controller, not in software.



Here are 2 high watermark screenshots:

reading off the SSDs (1MB, QD=32, seq)


Same settings, write


Quite a few SSDs connected to one raid controller


.... even with room to grow beyond 24 ports .....


regards,
Andy
 

Andreas

Member
Aug 21, 2012
Wow! That is some great throughput. Over 7GB/s. Nice.
I'd rather believe the 6.1 GB/sec, which was genuinely sustainable.
But seeing iometer report 7.4 and 6.8 GB/sec respectively on a PCIe 3.0 x8 controller was fun, and I wanted to share it with the community. (I don't think it is real...)

cheers,
Andy
 

Andreas

Member
Aug 21, 2012
Man, that is smoking!
Due to constant "open heart surgery" on these systems, I externalized the storage components for now. Consequently, there are currently some cables around.
Silver = Adaptec
Red = LSI



cheers,
Andy
 

Patrick

Administrator
Staff member
Dec 21, 2010
Are you running power from the main chassis to the external one? What does that look like on the backplane? Is it just a bunch of hot swap bays?
 

Andreas

Member
Aug 21, 2012
Are you running power from the main chassis to the external one? What does that look like on the backplane? Is it just a bunch of hot swap bays?
Patrick,
my setup is optimized for flexibility, not layout.
All power is supplied by one PSU, wherever the components are.

On noise:
1) All my SATA cables are 1 meter long - usually too long for internal use, especially when 48 of them are in the way of the airflow.
2) The densest SSD carriers I could find for a reasonable price are the Sharkoon QuickPorts (30 euro apiece). 6 hot-plug SSDs with active cooling fit in one 5.25" bay.
3) I started with all 8 Sharkoon carriers built into the machine, but when I switched to the two water coolers with big radiators, space for quick access became an issue.
4) Replacing the Sharkoons' standard 4cm fans with Papst fans reduced the noise significantly - from office-level noise to inaudible.
5) Swapping the standard fans of the Corsair CPU water cooler for Noctua fans had a similar effect.
6) Currently I would consider my workstation with 32 logical cores, 256GB RAM, 6 HBAs and 48 SSDs quiet and thermally very stable. Under full CPU load, the noise stays at the lower end of office level.

On secure erase:
Last weekend I tried to secure erase some SSDs. A few Samsung SSDs had become unstable (performance-wise, no data loss). The good thing is that there is no BIOS security freeze blocking secure erase when the SSDs are connected to the LSI HBAs, and erasing 8 SSDs in parallel accelerates the process significantly.
8 of the 48 SSDs had issues with the secure erase: performance did not recover to factory speed. I used a separate system to secure erase and TRIM those drives (the LSI controller does not support TRIM). That helped quite a bit, but the 8 drives are still well below their initial ratings (write performance is suffering). I don't know what else can be done to "reset" the SSDs to full performance. Any ideas welcome.
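For reference, a minimal sketch of such a secure-erase plus TRIM pass, assuming the separate system is a Linux box with hdparm and blkdiscard available - the device path and password are placeholders, the drive must not be security-frozen (a suspend/resume cycle usually clears that), and the commands destroy all data on the drive:

import subprocess

DEV = "/dev/sdX"    # placeholder: the SSD to reset
PWD = "p"           # throwaway ATA security password

def run(*args):
    print("+", " ".join(args))
    subprocess.run(args, check=True)

run("hdparm", "-I", DEV)     # check the security section first ("not frozen" is required)
run("hdparm", "--user-master", "u", "--security-set-pass", PWD, DEV)
run("hdparm", "--user-master", "u", "--security-erase", PWD, DEV)
run("blkdiscard", DEV)       # TRIM the whole device afterwards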

rgds,
Andy
 

Andreas

Member
Aug 21, 2012
Some comments on the Adaptec 72405 controller:

With no cooling, the RoC approaches 75 degrees Celsius. While this seemed hot to me, the software utility stated otherwise - 75 degrees is OK.
To reduce heat stress I installed a 4 cm fan in the neighbouring slot. The temperature is now about 45 degrees (and I feel better).

Got the AFM-700 module this week. Installation is a snap, and it takes about 5 minutes to charge the capacitor fully. The package contained a warning that it contains material which may cause cancer in people in California. According to that paper, everyone in the rest of the world should be fine. So if you live in California, don't buy the AFM-700 module...

As I use this controller in a file server scenario, the high port count is a boon for getting energy consumption down. 3 x 8-port LSI controllers would consume much more energy (and would need more intensive cooling).

Performance is stable. A 12-drive WD 3TB RAID 5 volume writes between 1200 MB/sec and 1500 MB/sec, depending on the zone the heads were in during the test. Writing at full speed keeps the E3-1245v2 CPU below 3% (idling at 1.6 GHz). While the LSI 9207 seems to be faster with SSDs, the 72405 is for sure not the bottleneck in my 24-HDD setup. It's the disks - which is good news.
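A quick check of what those array numbers imply per drive (a 12-drive RAID 5 carries the payload of 11 drives), which also matches the zone dependence of a 3TB disk:

data_drives = 12 - 1     # one drive's worth of capacity goes to parity
for array_MBs in (1200, 1500):
    print(f"{array_MBs} MB/s array -> ~{array_MBs / data_drives:.0f} MB/s per drive")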

I would not use the 72405 in a pure SSD environment (unless you need the capacity without expanders):
1) The 24 6Gbps ports have far more bandwidth than the PCIe bus interface.
2) 24 SSDs would produce more IOPS than one controller can handle.

I haven't tried the 72405 with 24 fast enterprise 2.5" hard disks - I don't have those. Based on initial observations, I could imagine the controller should be able to handle both high bandwidth and high transaction rates.
I did not try RAID 6 write performance.
The command line interface is OK - nothing special to report.
I don't have enough experience with respect to data loss and corruption - the controller is simply too new on the market.

Two things to report about the Adaptec driver for Windows Server 2012:
1) All disks connected to the 72405 are listed in Disk Management before the OS disk (connected to the SATA port on the motherboard). This differs from the LSI driver layout.
2) Raw disks are not visible in IOMeter (they are when connected to the LSI controller).

rgds,
Andy
 

Patrick

Administrator
Staff member
Dec 21, 2010
Andy - I wanted to do that exact setup which is why I asked about power. Great to see that you did it!

Very interesting on the SE issue. What procedures did you use?
 

OBasel

Active Member
Dec 28, 2010
I hope those Adaptec cards find their way to eBay.

I always wonder why it is so hard to find anything other than 3ware or LSI on da 'bay.
 

StuckInTexas

New Member
Dec 13, 2012
Andy - I haven't yet tried SSDs on the 72405, but I have done extensive testing with 24 Seagate Savvio 15K.3 drives. Prior to that I was using the LSI 9265/66/71. I had so many issues with the LSI solution and the SAS expander I was using. The Adaptec solution is much cleaner and I haven't run into any of our previous problems. I deal mostly with sequential reads/writes and the Adaptec card is hands down a better product. Writes are higher and more consistent, while reads are night and day better. The LSI cards always topped out around 2.4-2.6GB/s for reads. With the Adaptec card, I am drive limited around 4.4-4.5GB/s, which is the same as writes. I should get a set of 6Gbps SSDs shortly, so I will see if I have the same issues with IOPS that you had.
 

Andreas

Member
Aug 21, 2012
@StuckInTexas
The server is now running continuously, and so far the stability with HDDs is good - not a single issue like an unexpected performance drop or anything similar. The "usability" of the BIOS settings is easier to manage than with the LSI controllers.
I am running 2 x RAID 5 on the 24 drives.
One RAID is deduplicated and holds a few million files; 4.5 TB of savings (30%) is not bad at all.




I do not intend to run it with SSDs, so beyond the initial test out of curiosity, I won't go that route.

regards,
Andy
 

jcl333

Active Member
May 28, 2011
Hello Andreas,

I was just pointed to this thread by Patrick as I am considering the Adaptec controller for my new server build. Your thread is great, with the comparisons between the Adaptec and LSI; I almost wish you had an Areca in there just for completeness ;-)

Here is my current build thinking:
http://forums.servethehome.com/showthread.php?1130-Server-2012-multi-array-build

Some questions for you if I may:
* What made you decide to try the Adaptec controller? This forum is dominated by LSI and Areca.
* From your comparison, it almost seems like you prefer the LSI, or are you just listing pros/cons? Which do you like better overall?

Of course, your "research" project has a completely different usage pattern from what I am doing. I am actually more interested in your other project, the low-power storage server with a 1155 chip you are building - that is exactly what I am after.

I was actually thinking of getting the 16-port Adaptec 71605, because I was considering a combination of it and an HBA in one 24-bay chassis, so that I could run one RAID 6 array and then play around with other things such as Server 2012 storage pools or ZFS. But after reading your thread I am not so sure; I might go with the 24-port model instead, because then I could drive all 24 bays without an expander if I wanted to, or have ports left over to go out to an external JBOD.

I was actually talking with Adaptec support about the heat from the RoC, since that was a concern of mine given that they design the cards for a "high airflow" server environment. On the 6xxx series cards they actually make a fan accessory; they have not made one for the 7xxx series yet, but they encouraged me to go ahead and put a third-party fan on there. Even so, it was very interesting how hot the LSI cards get and how much power they consume. This is making me think I might not go with a second HBA at all and just run the whole thing on the Adaptec - more in line with getting the overall system power draw down.

Thanks very much - I look forward to hearing more about your projects.

-JCL