This thread has not been active for a while, so I'd like to provide an update if people are interested:
1) The bigger workstation now has 256GB of ECC RAM and runs with 48 SSDs on 6 LSI 9207-8i HBAs when maximum I/O performance is needed. Unfortunately, the 6 HBAs consume all 6 PCIe slots of the ASUS motherboard, leaving no slot available for the Mellanox 40Gbit card - a pity. So I ended up running the workstation for day-to-day work in a 4 HBA / 32 SSD config. The two E5-2687W CPUs are great. Whatever workload I throw at them, they deliver sustained performance at very high levels. A joy to use in a workstation.
1a) The Corsair H100 water coolers are effective. I managed to keep CPU temps under 60 degrees Celsius with all workloads. To give you some perspective: Intel's Linpack - known for its "heat generation" - peaks around 53-55 degrees.
1b) With 6 HBAs in one system, heat in the PCIe section becomes an issue. I solved it with a relatively slow-spinning 20 cm fan.
2) Scalability with 4 HBAs on the Asus Z9PE-D16 is excellent, but levels off somewhat when moving to 6 HBAs. My current guess is that my apps are hitting a QPI limit: when I switch off the power-save mode of the QPI interconnect, application performance goes up.
3) The power supply issue is solved (more than 24 SSDs "kill" most power supplies). Since I changed to the Silverstone Strider PSU with its 40 amps on the 5V rail, the sudden shutdowns are a thing of the past.
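For anyone wondering where the "kill" comes from, here is the back-of-the-envelope 5V rail math. The per-drive draw is my assumption, not a datasheet value - check your own drives before sizing a PSU:

```python
# Rough 5V rail estimate. SATA SSDs pull nearly all their power
# from the 5V rail, and a typical consumer PSU only rates ~20 A
# there - hence more than ~24 SSDs overloads it.

amps_per_ssd = 0.8   # assumed peak write-load draw per drive (amps)
rail_rating = 40.0   # Silverstone Strider 5V rail, per its label (amps)

for drives in (24, 32, 48):
    load = drives * amps_per_ssd
    print(f"{drives:2d} SSDs: ~{load:4.1f} A on 5V, "
          f"{rail_rating - load:+5.1f} A headroom on a {rail_rating:.0f} A rail")
```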
4) The system based on the ASUS P9X79WS/i7-3930K and normal (non-ECC) RAM modules is far less stable than the dual-socket machine with ECC memory. I ran applications with built-in self-detection algorithms for days and weeks: at least one error per day was detected on that machine, versus zero on the dual-socket machine. If you need rock-solid stability, be conscious of your component selection.
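To illustrate what I mean by self-detection, here is a minimal sketch of the principle (not my actual application): do the work twice on the same in-memory data and compare the results. On flaky non-ECC RAM, the two passes eventually disagree.

```python
import hashlib

def computation(data: bytes) -> str:
    # Stand-in for the real workload; any deterministic function
    # of the input serves as a self-check.
    return hashlib.sha256(data).hexdigest()

def self_check(data: bytes) -> bool:
    # Run the work twice on the same buffer and compare. On healthy
    # hardware the results always match; a mismatch points to a bit
    # flipped somewhere between the two passes.
    return computation(data) == computation(data)

buffer = bytes(64 * 1024 * 1024)  # 64 MiB buffer held in RAM
if not self_check(buffer):
    print("self-check mismatch - possible memory error")
```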
5) The choice of the Samsung 830 was in hindsight a good one. The SSDs are very reliable, and stable & predictable from a performance perspective. I've also got 4 x Samsung 840 (256GB), but it is too early to tell their characteristics.
6) Today another nice controller arrived: the new Adaptec 72405 with 24 SAS ports, plus 6 cables (SFF-8643 to 4x SATA). As soon as I have time, I'll give this controller with its new RoC a try. Out of curiosity I intend to run it once with the Samsung SSDs, but its final destination will be the new home server with 24 x 3TB HDDs (a 32GB E3-1245v2 system). The goal is to get the idle power consumption of the system as low as possible. I won't be able to get under 100 watts (all disks spinning), but everything above 150 watts would be disappointing. The AFM-700 flash module (the new BBU type) will arrive tomorrow.
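A quick sanity check on that budget (both wattage figures below are assumptions, not measurements):

```python
# Back-of-the-envelope idle budget for the 24-disk home server.
# Typical 3.5" drives idle around 4-5 W spinning - check the
# datasheet of your model.

drives = 24
watts_per_drive_idle = 4.5   # assumed spinning-idle draw per disk
platform_watts = 35.0        # assumed board + CPU + RAM + PSU losses

total = drives * watts_per_drive_idle + platform_watts
print(f"Estimated idle draw with all disks spinning: ~{total:.0f} W")
```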
7) A few pictures:
The new Adaptec controller is a bit larger than the LSI 9207-8i (24 ports vs. 8 ports).
The SFF-8643 connectors are rather square in shape. Not sure why it was necessary to replace the SFF-8087, but so be it.
rgds,
Andy