Storage system with 200k+ IOPS


elhoim

New Member
Jan 26, 2015
At work I need to send a hardware request for a storage system for an ArcSight ESM server.
Given some rules of thumb from ArcSight professional services, we need to aim for at least 200k write IOPS.

The goal is to have at least 7TB usable with RAID-10 and to have some capacity to grow over the next 5 years.

My first idea was a Lenovo System Storage EXP2524 DAS enclosure with 10x 1.6TB SAS 2.5" MLC SS Enterprise SSD (49Y6200) connected to a ServeRAID M5225 card in the server.

A colleague mentioned that the RAID card might not handle that level of IOPS, but I cannot find any documentation on this.

Some questions:

Is it worth adding the FastPath option to the card?
Will it support the number of IOPS required?
Will it have some room for growth in the future?
Should I consider PCIe-based SSDs?


Sorry if this is a lot of questions; I am not much of a hardware guy, but I was put in charge of the hardware ordering phase, so I thought I would pick the brains of the most hardcore hardware enthusiasts I could find... :)
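As a rough sanity check on those numbers, here is a back-of-the-envelope sketch (assumptions only: the usual RAID-10 write penalty of 2, i.e. every host write lands on two drives, and the drive count and capacity proposed above):

# Rough RAID-10 sizing check; assumptions, not a vendor sizing tool.
drive_count = 10
drive_capacity_tb = 1.6       # 49Y6200 nominal capacity
target_write_iops = 200_000   # ArcSight rule-of-thumb target

raw_tb = drive_count * drive_capacity_tb
usable_tb = raw_tb / 2        # RAID-10: usable capacity is half of raw
print(f"usable capacity: {usable_tb:.1f} TB")

# RAID-10 write penalty: each host write becomes two drive-level writes.
per_drive_writes = (target_write_iops * 2) / drive_count
print(f"each SSD must sustain ~{per_drive_writes:,.0f} 4k write IOPS")

Under those assumptions, 10x 1.6TB gives 8TB usable and each SSD needs to sustain roughly 40k small-block writes, so the open question is really the controller rather than the drives.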
 

Patriot

Moderator
Apr 18, 2011
Why are you looking at Lenovo for an ESM server? Just curious...
Not as familiar with the Lenovo-branded RAID cards... but you would need FastPath to get up to the 300k IOPS max... those cards are not fast, but you are not asking for much.

https://lenovopress.com/pdfs/1069/tips0992.pdf
You should be able to hit 200k IOPS with 10 of those drives if the LSI RAID stack isn't complete crap.

A PCIe SSD would hit the 200k mark with one drive...
 

Deslok

Well-Known Member
Jul 15, 2015
deslok.dyndns.org
Why are you looking at Lenovo for an ESM server? Just curious...
Not as familiar with the Lenovo-branded RAID cards... but you would need FastPath to get up to the 300k IOPS max... those cards are not fast, but you are not asking for much.

https://lenovopress.com/pdfs/1069/tips0992.pdf
You should be able to hit 200k IOPS with 10 of those drives if the LSI RAID stack isn't complete crap.

A PCIe SSD would hit the 200k mark with one drive...
Lenovo RAID cards are just rebranded LSI cards. I have one in our storage server and just use the LSI tools (different application, though, and it isn't anywhere near 200k write IOPS).
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
A single 2TB P3700 NVMe drive can handle 175,000 4k write IOPS and 265,000 mixed read/write IOPS. The 800GB model can do 90,000 4k write IOPS and 200,000 at 70/30 mixed read/write.

7TB usable of P3700s in RAID 10 is not going to be cheap, especially in a pre-configured setup, but it will outperform your needs.

If you do go this route, then maybe the lower-performing P3600 2TB, which can handle 160,000 mixed read/write and 56,000 write IOPS, would work because of the number of drives you'll need to get 7TB usable. It will still have good write endurance too.
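Putting rough numbers on the drive counts (a sketch only, using the per-drive write IOPS figures quoted above and assuming aggregate write IOPS in RAID 10 scales with the number of mirror pairs, which real arrays only approximate):

import math

# Drives needed for 7TB usable in RAID 10, and the aggregate 4k write IOPS
# that buys, using the per-drive figures quoted above (estimates only).
usable_target_tb = 7.0
options = {
    "P3700 2TB": {"capacity_tb": 2.0, "write_iops": 175_000},
    "P3600 2TB": {"capacity_tb": 2.0, "write_iops": 56_000},
}

for name, spec in options.items():
    pairs = math.ceil(usable_target_tb / spec["capacity_tb"])  # mirror pairs
    drives = pairs * 2
    agg_write_iops = pairs * spec["write_iops"]  # each pair absorbs ~one drive's writes
    print(f"{name}: {drives} drives, ~{agg_write_iops:,} aggregate 4k write IOPS")

On paper either option clears 200k write IOPS with eight drives; the P3600 just does it with far less headroom.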
 

Patriot

Moderator
Apr 18, 2011
You also need to know at what queue depth you need to hit 200k IOPS... 200k IOPS at QD1 is significantly harder to hit than 200k IOPS at, say, QD32 or QD64.

Lenovo RAID cards are just rebranded LSI cards. I have one in our storage server and just use the LSI tools (different application, though, and it isn't anywhere near 200k write IOPS).
I know they are LSI... I just don't know the mapping... or how any custom firmware affects performance.
I remember doing competitive analysis against the LSI 9200-8i-based MegaRAID cards, and they were able to hit about 330k IOPS with FastPath; I don't remember how bad it was without it. The competing P420 hit 450k IOPS though... the next gen for both was about double: 750k IOPS on the 9300 series and 1M IOPS on the P430.

That is all RAID 0 or HBA performance though... hitting what you want with RAID 10 is asking for more.
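One way to see where queue depth puts you is a quick fio sweep against a scratch file on the storage under test (an illustrative sketch only: it assumes fio is installed, the file path is a hypothetical example, and the run times are far too short for a serious evaluation):

import json
import subprocess

# Sweep 4k random-write IOPS across queue depths with fio.
for qd in (1, 8, 32, 64):
    cmd = [
        "fio", "--name=qd-sweep",
        "--filename=/mnt/testvol/fio.test",  # hypothetical scratch path
        "--rw=randwrite", "--bs=4k", "--direct=1",
        "--ioengine=libaio", f"--iodepth={qd}",
        "--size=10G", "--runtime=60", "--time_based",
        "--output-format=json",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    iops = json.loads(out.stdout)["jobs"][0]["write"]["iops"]
    print(f"QD{qd}: ~{iops:,.0f} 4k random-write IOPS")

Comparing that curve with the queue depth the application actually generates (the average queue size column in iostat -x during peak ingest is a decent proxy) tells you whether the 200k figure is a QD1 number or a QD32+ number.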
 

elhoim

New Member
Jan 26, 2015
Why are you looking at Lenovo for an ESM server? Just curious...
Not as familiar with the Lenovo-branded RAID cards... but you would need FastPath to get up to the 300k IOPS max... those cards are not fast, but you are not asking for much.

https://lenovopress.com/pdfs/1069/tips0992.pdf
You should be able to hit 200k IOPS with 10 of those drives if the LSI RAID stack isn't complete crap.

A PCIe SSD would hit the 200k mark with one drive...
The company I work for has a framework contract with Lenovo for standardized procurement.

Would you have other RAID cards to recommend, or other SSDs with better $/IOPS and $/GB ratios?
 

Patriot

Moderator
Apr 18, 2011
lol... No, you don't get better $/IOPS and $/GB when going up in performance... you get one or the other.
Huh, I just found it odd that you were putting HPE software on a Lenovo box... there is some irony in putting security software alongside Superfish.
 

elhoim

New Member
Jan 26, 2015
A single 2TB P3700 NVMe drive can handle 175,000 4k write IOPS and 265,000 mixed read/write IOPS. The 800GB model can do 90,000 4k write IOPS and 200,000 at 70/30 mixed read/write.

7TB usable of P3700s in RAID 10 is not going to be cheap, especially in a pre-configured setup, but it will outperform your needs.

If you do go this route, then maybe the lower-performing P3600 2TB, which can handle 160,000 mixed read/write and 56,000 write IOPS, would work because of the number of drives you'll need to get 7TB usable. It will still have good write endurance too.
Plus it would take a lot of PCIe slots: eight, if I count correctly.


You also need to know at what queue depth you need to hit 200k IOPS... 200k IOPS at QD1 is significantly harder to hit than 200k IOPS at, say, QD32 or QD64.
How can I get an idea of the queue depth?
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
One idea: If you do not need the reliability of a SAN with redundant controllers, consider running ArcSight on a beefy server with a good number of NVMe drive slots. With six NVMe drives @ 3.2TB each, you have >8TB of usable space in software RAID10 after formatting, with a cost of around $50K, and will never have anyone complain about IOPS... ever. $50K is quite a bit, but any ArcSight deployment that needs 200K IOPS is already costing you much more than that. The advantage of this approach is simplicity - no RAID cards, no cables, no external boxes, etc. The disadvantages are limited redundancy compared to a SAN and limited ability to add capacity later.
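For illustration, the software RAID-10 piece of that is only a few commands; here is a minimal sketch (hypothetical device names and mount point, adapt to the real NVMe enumeration, and note that it destroys whatever is on those drives):

import subprocess

# Sketch: build a 6-drive md RAID-10 array and put a filesystem on it.
# Device names and mount point are hypothetical examples.
devices = [f"/dev/nvme{i}n1" for i in range(6)]

subprocess.run(
    ["mdadm", "--create", "/dev/md0", "--run",
     "--level=10", f"--raid-devices={len(devices)}", *devices],
    check=True,
)
subprocess.run(["mkfs.xfs", "/dev/md0"], check=True)
subprocess.run(["mount", "/dev/md0", "/opt/arcsight"], check=True)  # example mount point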

I don't know the EXP2524, but it looks like a common 24-drive enclosure with a single backplane and a SAS switch. You are limited to SAS2 6Gbit speeds given your chosen drives. You'll be able to connect the enclosure to just one or two SAS2 RAID cards, which will be both a throughput bottleneck and an IOPS bottleneck. Current RAID cards can handle quite a bit, but if you really do need real-world 4k IOPS that high, I think that it's asking too much. Then again, if you really need 200K IOPS for ArcSight, then you should be engaging a consultant, not taking free advice from a web forum. That's a lot of logging!!

If you do go with an external SAS disk enclosure instead of internal NVMe drives, consider going with SAS3 backplane and SAS3 RAID cards and SSD drives. I'd still prefer NVMe for this one, but SAS3 is going to get much closer to your goals than SAS2.

At work I need to send a hardware request for a storage system for an ArcSight ESM server.
Given some rules of thumb from ArcSight professional services, we need to aim for at least 200k write IOPS.

The goal is to have at least 7TB usable with RAID-10 and to have some capacity to grow over the next 5 years.

My first idea was a Lenovo System Storage EXP2524 DAS enclosure with 10x 1.6TB SAS 2.5" MLC SS Enterprise SSD (49Y6200) connected to a ServeRAID M5225 card in the server.

A colleague mentioned that the RAID card might not handle that level of IOPS, but I cannot find any documentation on this.

Some questions:

Is it worth adding the FastPath option to the card?
Will it support the number of IOPS required?
Will it have some room for growth in the future?
Should I consider PCIe-based SSDs?


Sorry if this is a lot of questions; I am not much of a hardware guy, but I was put in charge of the hardware ordering phase, so I thought I would pick the brains of the most hardcore hardware enthusiasts I could find... :)
 

Patrick

Administrator
Staff member
Dec 21, 2010
One idea: If you do not need the reliability of a SAN with redundant controllers, consider running ArcSight on a beefy server with a good number of NVMe drive slots. With six NVMe drives @ 3.2TB each, you have >8TB of usable space in software RAID10 after formatting, with a cost of around $50K, and will never have anyone complain about IOPS... ever. $50K is quite a bit, but any ArcSight deployment that needs 200K IOPS is already costing you much more than that. The advantage of this approach is simplicity - no RAID cards, no cables, no external boxes, etc. The disadvantages are limited redundancy compared to a SAN and limited ability to add capacity later.

I don't know the EXP2524, but it looks like a common 24-drive enclosure with a single backplane and a SAS switch. You are limited to SAS2 6Gbit speeds given your chosen drives. You'll be able to connect the enclosure to just one or two SAS2 RAID cards, which will be both a throughput bottleneck and an IOPS bottleneck. Current RAID cards can handle quite a bit, but if you really do need real-world 4k IOPS that high, I think that it's asking too much. Then again, if you really need 200K IOPS for ArcSight, then you should be engaging a consultant, not taking free advice from a web forum. That's a lot of logging!!

If you do go with an external enclosure, consider going with SAS3 backplane and SAS3 RAID cards and SSD drives. I'd still prefer NVMe for this one, but SAS3 is going to get much closer to your goals than SAS2.
Adding to this a bit - there are now quite a few larger 24+ bay 2.5" NVMe chassis available. We are also going to see 10TB+ NVMe 2.5" SSDs this year. If you end up with one of these larger chassis, you will have quite a bit of room to expand.

Also - we should be getting time on a full NVMe system very soon, using lower-cost 2TB SSDs for a total of 48TB raw capacity. They are P3320s, so relatively slow, but I am still excited to see what they can do.
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
Adding to this a bit - there are now quite a few larger 24+ bay 2.5" NVMe chassis available. We are also going to see 10TB+ NVMe 2.5" SSDs this year. If you end up with one of these larger chassis, you will have quite a bit of room to expand.

Also - we should be getting time on a full NVMe system very soon, using lower-cost 2TB SSDs for a total of 48TB raw capacity. They are P3320s, so relatively slow, but I am still excited to see what they can do.
Any chance that I could install an Oracle DB on that NVMe machine to see what it can do? Would love to run the SLOB benchmark and some data warehouse test queries on a machine with 38GB/s raw throughput. I am wondering if it can beat my 80 SATA SSD system - which it should be able to do if it allocates 48 PCIe 3 lanes to the backplane and spreads those lanes evenly across both CPUs.
 

Evan

Well-Known Member
Jan 6, 2016
I did a demo of an EXP30 Ultra I/O drawer on an IBM POWER7 server a few years ago... 320,000-480,000 IOPS in 1U, connected over the GX++ bus, and ran Oracle on it :) Pity that at the time it was 10+TB of SSD but cost $100k!!!

Now with NVMe, I wonder if IBM has a crazy I/O option... I am sure it costs plenty, but there must be some demand for these units in the market.
 

Patrick

Administrator
Staff member
Dec 21, 2010
Any chance that I could install an Oracle DB on that NVMe machine to see what it can do? Would love to run the SLOB benchmark and some data warehouse test queries on a machine with 38GB/s raw throughput. I am wondering if it can beat my 80 SATA SSD system - which it should be able to do if it allocates 48 PCIe 3 lanes to the backplane and spreads those lanes evenly across both CPUs.
That might be possible. Shoot me a mail. It is not set up yet, but I want to keep this top of mind.