Intro & Build notes


Andreas

Member
Aug 21, 2012
127
1
18
I agree with your choice of the Samsung 830 for your experiments since it is an excellent value. I just wanted to point out that the Plextor M3P is similarly consistent in performance, actually writes a little faster than the Samsung 830, and uses significantly less power doing so. For anyone who wants even better performance and lower power consumption than the Samsung 830, and is willing to pay for it (the Plextor M3P is more expensive), have a look at the Plextor M3P (or the M5P, which should be widely available in a month).
It would be interesting to see whether the Plextor M3P or M5P, which also use a Marvell-based controller, show more consistent write behavior than the Vertex/Agility 4.
 

Andreas

Member
Aug 21, 2012
127
1
18
As written before, my initial tests writing to all SSDs simultaneously were limited by my current power supply, which maxes out at the power consumption of 24 Samsung SSDs in write mode.

Before I left, I played around with 4 independent logical drives (one logical volume of 8x Samsung SSDs per LSI-9217i in IR mode, RAID 0). Copying a file from one RAID 0 to a second RAID 0 delivered 2 GB/s write speed during a file copy in Explorer, out of the box. Max speed would have been 2.56 GB/s (8x 320 MB/s). I did not have time to do any tweaking or tuning to potentially reduce the roughly 20% "loss" (software RAID, Storage Spaces, etc.). CPU load is negligible.
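For those who want to redo the ~20% arithmetic, a minimal sketch (nothing but the figures quoted above):

```python
# Back-of-the-envelope check of the RAID 0 copy result above.
# Assumes the ~320 MB/s per-drive write figure and 8 drives per volume from the post.
drives = 8
per_drive_write_mb_s = 320        # MB/s per Samsung 830
measured_copy_gb_s = 2.0          # GB/s observed during the Explorer copy

theoretical_gb_s = drives * per_drive_write_mb_s / 1000   # 2.56 GB/s
efficiency = measured_copy_gb_s / theoretical_gb_s

print(f"theoretical max: {theoretical_gb_s:.2f} GB/s")
print(f"measured:        {measured_copy_gb_s:.2f} GB/s "
      f"({efficiency:.0%} of max, ~{1 - efficiency:.0%} loss)")
```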


(This graph was impacted by a previous write, which had not completely finished when this one was started.)

Shuffling 50 GB files (think of VHDs) around on a desktop in 25-30 seconds is really cool ...

Andy
 

john4200

New Member
Jan 1, 2011
152
0
0
It would be interesting to see whether the Plextor M3P or M5P, which also use a Marvell-based controller, show more consistent write behavior than the Vertex/Agility 4.
All the Plextor drives use a Marvell controller, and the performance is consistent.

The inconsistent write behavior on the Vertex 4 is due to firmware, not the controller. OCZ used a gimmick to gain a few points on empty-SSD benchmarks, with the huge downside that performance becomes erratic and unpredictable on drives more than half full. It's just another case of OCZ doing something to help market their products rather than working on improving quality and reliability.
 

Andreas

Member
Aug 21, 2012
127
1
18
Came back this morning from vacation and had a chance to do some further stuff with the workstation.

1) The overall bandwidth is now at the speed limit of the Samsung SSDs. With IOMeter set to 1 MB transfers, overall read speed is almost 16,700 MB/s, which corresponds to the individual maximum speed of the SSDs of 520 MB/s (times 32). CPU load is 14% at 1200 MHz.



2) There is also progress and insight at the IOPS end.
First, I checked, as discussed, the individual bottleneck of the LSI 9207-8i controllers. While the SAS 2308 ROC is able to deliver the full bandwidth of the SSDs with larger block sizes, I could not get it to the advertised IOPS rate of 700,000. With 8 SSDs connected and IOMeter set to 512 B random read at QD32, the controller levels off after 5 SSDs (adding drives one at a time from 1 to 8). In the table below, the "delta" column shows the incremental IOPS added by the last drive. Please consider that the data was not taken from long-run averaged performance readings, so the variability per drive might be a bit higher.
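For reference, the "delta" column is nothing more than the difference between consecutive cumulative readings as drives are added. A minimal sketch; the cumulative values below are illustrative placeholders, not my measured numbers:

```python
# Incremental IOPS ("delta") contributed by each added drive.
# The cumulative values are illustrative placeholders, not measured data.
cumulative_iops = [80_000, 160_000, 240_000, 320_000, 395_000, 425_000, 440_000, 445_000]

previous = 0
for n, total in enumerate(cumulative_iops, start=1):
    delta = total - previous
    print(f"{n} drives: {total:>7,} IOPS  (delta {delta:>6,})")
    previous = total
```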

To check for SSD variability such as GC or wear levelling, I ran the 8-SSDs-per-LSI-HBA configuration on all 4 controllers. All 4 sets delivered nearly identical results of 444,000 to 447,000 IOPS. Further evidence for the initial observation that the Samsungs deliver predictable performance.

Alternatively, I measured 8 SSDs connected to 2 LSI HBAs, adding the SSDs alternately to the 2 controllers (right box in the table below). As each LSI HBA only had to deal with 4 SSDs, performance scales almost linearly up to the higher level of 608,000 IOPS. I would be interested in how LSI set up their cards to get to 700k IOPS. One possibility is reflashing the cards back to IT mode; they are currently on IR firmware, but the BIOS was disabled and no RAID functionality was configured.



Next, I used a setup of 5 SSDs connected to each of the 4 LSI HBAs, for a total of 20 SSDs.

Max IOPS with 512-byte read transfers on 20 SSDs connected to 4 LSI HBAs was 1,305,000. This time it was limited by the CPU, which was at 90% load at 3.5 GHz. To get to higher transaction rates, a 2-socket SB platform is required (in the end, some useful processing is desired as well). By carefully spreading the SSDs across the 4 LSI adapters and limiting to 5 SSDs per adapter, each SSD provided a bit more than 65,000 IOPS (the figure of 81,000 below is due to a formula error: divided by 16 instead of 20). This rate is approximately 20% below the maximum advertised rate. Nice to see the low I/O response time in this setup.
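To make the corrected per-drive arithmetic explicit (just the totals from above, nothing else):

```python
# Per-drive IOPS for the 20-SSD run above, with the corrected divisor.
total_iops = 1_305_000
drives = 20

per_drive_correct = total_iops / drives    # ~65,250 IOPS per SSD
per_drive_wrong = total_iops / 16          # ~81,600 - the formula error in the screenshot

print(f"divided by 20: {per_drive_correct:,.0f} IOPS per drive")
print(f"divided by 16: {per_drive_wrong:,.0f} IOPS per drive  <- the erroneous 81k figure")
```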



Same 20 SSD setup for write.
While random write IOPS with 512 B transfers did not exceed 600k, sequential 512 B writes at QD32 peaked at 1,325,000 IOPS on these 20 SSDs.



rgds,
Andy
 
Last edited:

Patrick

Administrator
Staff member
Dec 21, 2010
12,519
5,827
113
Andy, I think dba has been seeing something very similar. He is currently testing a motherboard with an onboard SAS 2308, a 9207-8e, and a few other things. Great update.
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
Hi Andy,

Like you, I've been finding that the new LSI SAS2308 is capable of some amazing IOPS, more than I'll ever need and certainly higher than the SAS2008 chip or anything else I've measured, but not as high as the specified 700K.

Also, can you share your IOMeter setup, specifically the "Maximum Disk Size" in sectors from the Disk Targets tab?
 

ehorn

Active Member
Jun 21, 2012
342
52
28
Thanks for sharing those metrics, Andy. I recall you mentioned you are in the process of setting up a dual-proc system. Nevertheless, I think these are incredible results for a single CPU.

peace,
 

mobilenvidia

Moderator
Sep 25, 2011
1,956
213
63
New Zealand
To get 700k IOPS from the SAS2308 you would need 8x SSDs capable of at least 90k IOPS each.
The Vertex 4 supposedly maxes out at 125k IOPS, but I doubt it will ever get that high without a tailwind on a downhill test :)
 

Andreas

Member
Aug 21, 2012
127
1
18
Andy, I think dba has been seeing something very similar. He is currently testing a motherboard with an onboard SAS 2308, a 9207-8e, and a few other things. Great update.
You are welcome. Which motherboards are we talking about: single, dual or quad socket? ;)

I'm building a single- and a dual-socket system; unfortunately, a quad Sandy Bridge rig is outside the envelope for my project. I haven't come across a quad motherboard that routes all 160 PCIe 3.0 lanes to I/O slots. Intel's own board seems to hold the high-water mark for now with 6 x16 PCIe 3.0 slots (plus 2 internal x8 slots). The trend in RAID/HBA adapters seems to converge on the x8 PCIe form factor rather than x16 (i.e. with 24-32 SATA/SAS ports).

Hi Andy,

Like you, I've been finding that the new LSI SAS2308 is capable of some amazing IOPS, more than I'll ever need and certainly higher than the SAS2008 chip or anything else I've measured, but not as high as the specified 700K.

Also, can you share your IOMeter setup, specifically the "Maximum Disk Size" in sectors from the Disk Targets tab?
There are 2 things I'd like to understand better.

1) The scaling from 1 to 8 SSDs per controller. Even with the lower spec of the Samsung 128GB drives (80,000 IOPS), perfect scaling would get us to 640,000 IOPS per controller. I am currently at 445,000, so I "lost" roughly 200,000 somewhere. If the HBA can't get above, let's say, 500,000, that's fine with me. But LSI somehow manages 700,000 with appropriate SSDs. Not sure if I need to start with 120k IOPS drives (8x 120k = 960k IOPS) so that even with some loss I get to the architectural limit of the HBA.

Or, 2) are MLC-based drives, with their higher overhead, simply unable to work in concert with the 2308 controller towards 700k? Could that be the exclusive realm of SLC SSDs? I don't want to spend that amount per GB, but I might take a closer look at the SuperSSpeed S301 drives, which should show up soon.

My settings in IOMeter are simple, nothing fancy:
For random read I use a test file of at least 50% of the drive's capacity, maximising the likelihood that all flash chips get some work to do. To be on the safe side, one can use 100%. For data transfer tests I use 10 GB (for a single SSD) or 50 GB (for a RAID 0).

(Note: This is the area where I saw significant differences with quite a few drives that are faster than the Samsungs on public benchmarks and spec sheets. Writing the full drive in one sweep either showed very low write speeds in the second half (OCZ Vertex 4, Agility 4), or the drive benefited from a cool-down period after the write to show better performance than immediately after the write operation (SanDisk, Vertex 3). Due to many rearrangements of the drive/RAID/HBA settings, the amount of data written to the individual drives is not synchronized; some of my drives get far more write cycles than others. I don't have enough drives of all models, but so far the Samsungs don't exhibit much variation, independent of the structure and combination of setups I am currently checking. I'm not sure whether this would be the case with all drives, nor am I sure it is an issue at all. It is just a comment for those who assemble setups with more than 8 SSDs.)

Data transfer was measured with: 1 MB blocks, 100% sequential, 100% read, QD16.
Data transfer for write is currently limited to 24 SSDs, as I still haven't got my new PSU with enough power on the 5-volt rail. It should be here any day now.
IOPS tests benefit far more from a larger test file than sequential transfers do. Settings for read: 512 B (or 4 KB), 100% random, 100% read, QD128 (single SSD), QD >512 (on RAID). I was too lazy to map out the "perfect" QD curve; my only observation was that QD64 does not deliver the peak value.

Beware of the CPU load when measuring many individual SSDs :). For production work, I'd rather take the lower absolute transfer and IOPS figures of RAIDing in the HBA and keep the CPU free for its main duty.

Tip: The test file (e.g. 10 GB) is not written very fast by IOMeter itself. It is usually faster to take one template file and copy it with the file manager to the SSDs under test.
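If you prefer to script the copy instead of using the file manager, something along these lines does the job (a minimal sketch; the paths are placeholders for your own template file and the drives under test):

```python
# Copy one pre-built template test file to every SSD under test.
# Paths are placeholders - adjust to your template and to the drives being tested.
import shutil

template = r"D:\template_10GB.bin"
targets = [r"E:\testfile.bin", r"F:\testfile.bin", r"G:\testfile.bin"]

for target in targets:
    shutil.copyfile(template, target)      # large sequential copy, much faster than
    print(f"copied template to {target}")  # letting the benchmark create the file
```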

Thanks for sharing those metrics, Andy. I recall you mentioned you are in the process of setting up a dual-proc system. Nevertheless, I think these are incredible results for a single CPU.
You are welcome, ehorn. Agreed, it is just mind-boggling what the combination of recent developments (CPU, I/O architecture, controllers, software, ...) delivers these days. 1,300,000 IOPS used to require 13,000 disk drives (and quite a few CPUs).

BTW, today I "tuned" my workstation. A simple change of CPU cooler got me about 10% more performance. I have some heavyweight apps that triggered CPU throttling when the CPU hit 90 degrees Celsius, and it's probably not good to run the CPU at 90 degrees for hours anyway. This evening I picked up a Corsair H100 CPU water cooler kit. Installation is a snap, and the impact has been incredible so far. With a decent air cooler the CPU hit 90 degrees when running at 3.5 GHz and sometimes throttled down to 3.2 or even 3.0 GHz. With the water cooler, the CPU (without overclocking) stays rock solid at 3.8 GHz, which is the maximum turbo frequency of the 3930K. The temperature at full 100% load on all cores and all functional units never exceeded 58 degrees. With normal workloads (ca. 30%) the temperature stays below 40 degrees. Amazing.

A screenshot of HWiNFO while the CPU was really maxed out. Think of something like Intel's Linpack benchmark taxing the CPU, plus 3-4 GB/s of I/O on top. Runtime this time was 3 hrs. Note the CPU temperature, which is at least 30 degrees lower, plus I get 10% more CPU frequency than before. If you're interested: with this load on the CPU, memory, 4 LSI HBAs, 33 SSDs and fans, power consumption is approx. 300 watts at the wall socket.

With this setup:
Intel's Linpack delivers 149.5 GFLOPS vs. 135 GFLOPS before (settings: 20000, 20000, 4K).
STREAM shows 41 GB/s on the memory bus.




Second workstation:
It is in the making. All components are ordered except the SSDs; some are already here, some will arrive this week. I've made up my mind on the CPUs (2x E5-2687W), memory (16x 8 GB reg. ECC 1600 MHz) and motherboard (Asus Z9PE-D16 - not the L version, but the one with 4x GBit ports).

On SSDs, I am still thinking. The Samsungs were a good decision a few weeks ago, but there are new drives coming that I'd like to test first before making a decision. Neutron GTX, Plextor M5 Pro and SuperSSpeed S301 are the ones I will probably buy one of first and compare to the other drives in my closet. I'll see.


To get 700k IOPS from the SAS2308 you would need 8x SSDs capable of at least 90k IOPS each.
The Vertex 4 supposedly maxes out at 125k IOPS, but I doubt it will ever get that high without a tailwind on a downhill test :)
Pieter,
you are right, 90k IOPS drives would be needed to get me to 700,000 IOPS per HBA. But you might have seen the "poor" scaling of the 2308 beyond 5 drives (the Samsungs are spec'd at 80k IOPS). Like probably most of us, I don't need those last IO/s beyond the 445k I already measured; it is rather a matter of curiosity to understand the "why". Do I need to start with 150k IOPS SSDs so that the unavoidable scaling effect still produces an outstanding 700k, or is there something else I've overlooked? Is it the IR firmware that limits this exercise, or do I need (your favourite) IT firmware to avoid the scaling penalty currently seen? Or is there some magic in the drives, and an undocumented setting in the HBA driver unleashes the last bit of performance?

Needless to say, but I'll say it anyway :) The workstation is great fun to work with in its current state of development.

rgds,
Andy
 
Last edited:

mobilenvidia

Moderator
Sep 25, 2011
1,956
213
63
New Zealand
The SSD Review is getting near-perfect scaling with the LSI 9207-8i and 8x M4s.

You may need to wait for the hopefully-soon-to-be-released LSI 9206-16e with dual SAS2308s.
You can get high overall IOPS from cheaper, lower-IOPS SSDs,
i.e. 16x Samsungs (80k IOPS) = 1,280k IOPS per PCIe x8 slot.
You will be limited by the x8 PCIe Gen3 bus speed the LSI 9206 uses, but 8 GT/s with <2% overhead should be good for a heap of data.
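A rough headroom check, if anyone wants the numbers (a quick sketch assuming 8 GT/s per lane and 128b/130b encoding; protocol overhead ignored):

```python
# Does 1,280k IOPS fit through a PCIe Gen3 x8 link?
# Assumes 8 GT/s per lane and 128b/130b encoding; protocol overhead is ignored.
lanes = 8
transfers_per_s = 8e9                                          # 8 GT/s per lane
link_bytes_per_s = lanes * transfers_per_s * (128 / 130) / 8   # ~7.88 GB/s usable

iops = 1_280_000
for block_bytes in (512, 4096):
    needed = iops * block_bytes
    print(f"{block_bytes:>4} B blocks: {needed / 1e9:5.2f} GB/s needed "
          f"of {link_bytes_per_s / 1e9:.2f} GB/s available")
```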
 
Last edited:

ehorn

Active Member
Jun 21, 2012
342
52
28
...If you're interested: with this load on the CPU, memory, 4 LSI HBAs, 33 SSDs and fans, power consumption is approx. 300 watts at the wall socket.
Incredible performance/watt...

Anand blogged from an Intel conference today discussing Haswell. It was a high-level discussion, but much was said about the improvements in power consumption coming with that platform.

peace,
 

ehorn

Active Member
Jun 21, 2012
342
52
28
Neutron GTX, Plextor M5Pro and SuperSSpeed s301 are those I will probably buy one first and compare it to the other drives in my closet. I'll see.
Those SuperSSpeeds look pretty sweet (albeit fully priced for their performance).
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,519
5,827
113
Actually, you know what could be somewhat cool: what if you used Vertex 4 256GB drives as 127.99GB drives? You would never run in "storage" mode. If you think about it, $180 for a 128GB drive performing at that level is not a bad deal.
 

ehorn

Active Member
Jun 21, 2012
342
52
28
Actually, you know what could be somewhat cool: what if you used Vertex 4 256GB drives as 127.99GB drives? You would never run in "storage" mode. If you think about it, $180 for a 128GB drive performing at that level is not a bad deal.
That was my thinking in going with the 240GB SanDisk drives. I only require ~3TB of "hot" data, and (24) x 240GB provides plenty of room for OP (~50%). Even though the SanDisks do not show the same "penalty" as other drives as they fill, the $/GB on the SanDisks was compelling enough to move to the larger capacity and gain the benefits of large OP.
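For reference, the rough OP math behind that (a trivial sketch; 240 GB taken at face value per drive):

```python
# Rough over-provisioning math for the 24 x 240 GB SanDisk array above.
drives = 24
capacity_gb = 240
hot_data_tb = 3.0                          # ~3 TB of "hot" data

raw_tb = drives * capacity_gb / 1000       # ~5.76 TB raw
op_fraction = 1 - hot_data_tb / raw_tb     # ~48% of the array left as OP headroom

print(f"raw: {raw_tb:.2f} TB, hot data: {hot_data_tb:.1f} TB, effective OP: {op_fraction:.0%}")
```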
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,519
5,827
113
That was my thinking in going with the 240GB SanDisk drives. I only require ~3TB of "hot" data, and (24) x 240GB provides plenty of room for OP (~50%). Even though the SanDisks do not show the same "penalty" as other drives as they fill, the $/GB on the SanDisks was compelling enough to move to the larger capacity and gain the benefits of large OP.
The Vertex 4 drives behave a bit differently. Gains in performance mode (when you do not fill more than half the capacity) are bigger than even on the venerable Intel 320 SSDs (but you do not get the super-cap).

BTW -- We need benchmarks of the SanDisk array!!!
 

Andreas

Member
Aug 21, 2012
127
1
18
The H100 is a really nice package...
It works so well with the i7-3930K CPU that I picked up 2 more today for the dual-socket board. I would not use them in pure server environments, but for my workstation setup they are just right. The 3930K CPU has now been under full load for 24 hrs; it never left the 3.8 GHz mark and never went above 60 degrees Celsius.

Incredible performance/watt...
Anand blogged from an Intel conference today discussing Haswell. It was a high-level discussion, but much was said about the improvements in power consumption coming with that platform.
Isn't that a pity?
There I am, getting the new CPUs from the parcel service today and looking forward to putting them to use, and here comes the news that I've got old, outdated and basically inefficient technology..... ;);)



Those SuperSSpeeds look pretty sweet (albeit fully priced for their performance).
Not sure which price you are referring to, but if I remember correctly, the 128GB models might be in the $200 ballpark. Other SLC drives are significantly more expensive. These drives are very interesting for high-write-IO scenarios. As said, I will check one first and then decide.

Actually, you know what could be somewhat cool: what if you used Vertex 4 256GB drives as 127.99GB drives? You would never run in "storage" mode. If you think about it, $180 for a 128GB drive performing at that level is not a bad deal.
Interesting approach. Technically, it will work. On the cost dimension: looking at the recent price drops of 120/128GB drives, it would still be a 1:2 difference, almost in the ballpark of the SLC SuperSSpeed. But used in a "dual mode" setup, 128GB fast and 256GB "slow" when needed, it might be a compelling approach for some.

Andy
 

ehorn

Active Member
Jun 21, 2012
342
52
28
... BTW -- We need benchmarks of the SanDisk array!!!
Yes... yes we (meaning me) do... :)

But before I add any more drives, I am adding a UPS... especially after reading about Pieter's lightning incident, which (BTW) I was happy to hear did not fry everything in his hardware lineup....
 

ehorn

Active Member
Jun 21, 2012
342
52
28
It works so well with the i7-3930K CPU that I picked up 2 more today for the dual-socket board. I would not use them in pure server environments, but for my workstation setup they are just right. The 3930K CPU has now been under full load for 24 hrs; it never left the 3.8 GHz mark and never went above 60 degrees Celsius.
Nice!

Isn't that a pity?
There I am, getting the new CPUs from the parcel service today and looking forward to putting them to use, and here comes the news that I've got old, outdated and basically inefficient technology..... ;);)

ROFL!!! Yeah man... "Outdated tech"... I feel for ya! ;)

Nice looking gear!

Not sure which price you are referring to, but if I remember correctly, the 128GB models might be in the $200 ballpark. Other SLC drives are significantly more expensive. These drives are very interesting for high-write-IO scenarios. As said, I will check one first and then decide.
My bad, I was thinking of the MLCs (302s)... I have not seen/looked at pricing in the US for these drives, only overseas.

peace,