Intel 910 Series!


KenE

New Member
Feb 14, 2012
34
0
0
Intel SSD on PCIe, 400GB, MSRP of $1,900.

http://hothardware.com/Reviews/Intel-Announces-PCIExpress-SSD-Product-Family-Meet-the-910/?page=1
http://www.anandtech.com/show/5743/intels-ssd-910-400800gb-mlchet-pcie-shipping-in-1h-2012

Getting down into the meat, this looks like it's perfect for my application: long-term reliability, PCIe, an LSI 2008 RAID controller, and it has speed and room!

It would be nice to have the OCZ Z-Drive, but this seems to hit the happy middle ground of speed and stability, if Intel keeps up the enterprise line. If this thing is solid, I'm not going to look back; I'll take the SSD plunge on the server.

:cool:
 

sotech

Member
Jul 13, 2011
305
1
18
Australia
That does look like an excellent product - it should have fewer compatibility headaches than the RevoDrives thanks to the 2008 chip!
 

mobilenvidia

Moderator
Sep 25, 2011
1,956
212
63
New Zealand
Again, why not use the SAS2308 and get PCIe 3.0 support?
Bring out a brand-new, kick-butt product but use yesterday's interface.

SAS2008: 290k IOPS vs SAS2308: 600k IOPS.
But I suppose it would have added $5 to a $1,900 device, a real deal breaker :)

Even better would have been this on an internal version of the LSI9202, with each 400GB module connected to 8 SAS ports (16 total).
The LSI9202 is PCIe 2.0 x16, so there's bandwidth to burn.

I wonder if the 910 can be flashed to IR mode to make it bootable?
 

mobilenvidia

Moderator
Sep 25, 2011
1,956
212
63
New Zealand
I cannot for the life of me remember if IT mode has the ability to set a drive as bootable; pretty sure it can.
Bugger, I'd have to flash mine to IT just to find out for sure. I'll not get any sleep wondering.
This would then prove the AnandTech review wrong about it not being bootable.

I doubt Intel will be marketing this towards 'home' users; I for one could not afford $1,900 for 400GB.
 

mobilenvidia

Moderator
Sep 25, 2011
1,956
212
63
New Zealand
Why do I doubt myself? I just flashed an M1015 to IT mode and added the Option ROM.

The SAS2008 in IT mode with the MPTSAS2 BIOS can set any drive as the first boot device and another drive as secondary.
Unless Intel/LSI have changed their firmware for the 910, it should also be able to boot.

Anyway I'm looking forward to receiving one in the mail to review and test various Firmware options :cool:
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
Good for them, and I hope they sell a million of them. I won't be buying one - the value just isn't there for me.

Consider this: the Intel 910 800GB (with 896GB of flash) is $3,800 for 2,000MB/sec reads and 1,500MB/sec writes.

Spend that same amount of money on a different solution and you get 2X the capacity and 4X the performance (rough tally below):
16 Intel 520 series 120GB SSDs at $200 each = $3,200.
4 IBM M1015 cards from eBay = $300.
Total: $3,500 (still $300 less than the Intel 910, enough to buy another drive and an M1015 as spares).
Performance: approximately 8,000MB/sec reads and 5,000MB/sec writes.
Capacity: 1,600GB (each drive formatted to 100GB).
Overprovisioning capacity: 448GB (compared to 96GB for the Intel 910).
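Here's that tally as a quick Python sketch, using only the figures quoted above (MSRPs and spec-sheet numbers, not measurements; the 128GB of raw flash per 120GB drive is my assumption about where the 448GB spare figure comes from):

Code:
# Back-of-the-envelope tally: 16-drive build vs. the Intel 910 800GB.
# All prices and capacities are the figures quoted in this post, not measured results.
DRIVES = 16
DRIVE_PRICE = 200              # Intel 520 120GB, USD each
HBA_COST_TOTAL = 300           # four used IBM M1015 cards
RAW_FLASH_PER_DRIVE = 128      # GB of NAND assumed on a "120GB" drive
FORMATTED_PER_DRIVE = 100      # GB after under-formatting for extra spare area

array_cost = DRIVES * DRIVE_PRICE + HBA_COST_TOTAL            # $3,500
array_capacity = DRIVES * FORMATTED_PER_DRIVE                 # 1,600 GB usable
array_spare = DRIVES * RAW_FLASH_PER_DRIVE - array_capacity   # 448 GB spare area

print(f"16x Intel 520 + 4x M1015: ${array_cost}, {array_capacity}GB usable, {array_spare}GB spare")
print("Intel 910 800GB:          $3800, 800GB usable, 96GB spare")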

Granted, 16 drives would take up much more room than a single Intel 910 card. That could be important for some deployments, but it isn't for mine. Also, while the MLC-HET flash used in the Intel 910 extends the lifetime of the device compared to the MLC used in the consumer-grade 520 SSDs, I wonder if simply leaving almost 0.5TB unformatted in the 520-based solution might provide enough lifespan extension for server use, even if not quite as much as the HET flash.

 

Patrick

Administrator
Staff member
Dec 21, 2010
12,513
5,802
113
dba... my thoughts exactly! The other really important thing to consider with a lower-density solution like the 910 is that there is often a per-server license cost, so density matters. The Intel 910 gives you 800GB for one PCIe slot. In a 2U you can fit 24 drives and maybe 6 PCIe cards. You can also go higher-capacity internally with the LSI + SSD solution and use SFF-8088 for other drives if needed.

I'm also pretty interested in a new Vertex 4. Could be neat to play with.
 

KenE

New Member
Feb 14, 2012
34
0
0
The problem with a 16-drive array is that it eats PCIe slots, and then you have the reliability issues of 16 parts, 4 drive cables, and 2 cards, and then you have to make accommodations for 16 drives. For those of us in the SMB market, this 910 drive could be manna from heaven.

I'm seeing an E3 server with an SSD OS drive, a 910 drive, and a 5K drive running as an internal backup for both drives. Simple, cool, and quiet. It would be cool to have a smokin' storage subsystem, but getting the data off the server is always the problem.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,513
5,802
113
Ken, just some thoughts: the 910 series has 2 or 4 NAND controllers plus a SAS 2008 controller and multiple PCBs. If something breaks, you are going to end up RMA'ing the whole thing. You are also limited to 800GB per PCIe slot, whereas 8x 512GB drives per card gets you 4TB per PCIe slot (quick comparison below). Using fewer PCIe slots for a given capacity means more room for 10GbE NICs!
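A two-line sketch of the capacity-per-slot point (the 8x 512GB layout is the hypothetical build described above, one HBA per slot):

Code:
# Capacity per PCIe slot: one Intel 910 card vs. one 8-port HBA with 512GB SSDs.
INTEL_910_GB = 800
HBA_GB = 8 * 512   # 4,096 GB, i.e. roughly 4TB per slot
print(f"Intel 910: {INTEL_910_GB}GB per slot; 8x 512GB SSD on one HBA: {HBA_GB}GB per slot")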
 

mobilenvidia

Moderator
Sep 25, 2011
1,956
212
63
New Zealand
You could always get an LSI9202 controller, run 16 drives from a single PCIe slot (it needs an x16 slot), and get 8TB per slot.
With dual SAS2008 controllers on an x16 PCIe card, now you can pump data.
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
I'd say that such a solution *may* eat PCIe slots. You need as few as one RAID card (and just one slot) if you are looking to match the speed of the 910 or as many as four if you are looking for maximum throughput. And while it's true that more cables means more opportunity for things to break, the multiple SSD architecture almost always includes some level of redundancy - RAID1E, RAID10, RAID60, etc. My particular solution using 25 SSD drives over five controllers and with RAID1E can tolerate the failure of a cable, a controller, or a drive. If I wanted more capacity and did not need quite as much speed, I could have used RAID60 instead. And further, I can add capacity at any time simply by adding another drive.

Ken, if you do opt for the 910, and it's a production server, definitely get a pair and mirror them. As Patrick pointed out, a single 910 does not give you any kind of redundancy.

 

mobilenvidia

Moderator
Sep 25, 2011
1,956
212
63
New Zealand
Mirroring a pair of 910s as dba suggests means $3,800 for 400GB of software RAID 1.

Personally, I'd rather spread that money (and the risk) over 24 drives with either a 24-port controller or an 8-port card plus a SAS expander; a rough sketch is below.
An SSD dies? Replace it (RAID 5/6).
A controller dies? Replace it.

If an Intel 910 dies, say goodbye to the data unless both NAND boards can be moved to a new controller board and still work.
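To put rough numbers on spreading the risk (24 drives as above; the 120GB drive size and simple parity layout are just illustrative assumptions, not a quote):

Code:
# Usable capacity and failure tolerance when the spend is spread over many drives.
# Drive size and layout are illustrative only.
def raid_usable_gb(drives: int, drive_gb: int, parity_drives: int) -> int:
    """Usable capacity for a simple parity RAID (RAID 5: 1 parity drive, RAID 6: 2)."""
    return (drives - parity_drives) * drive_gb

DRIVES, DRIVE_GB = 24, 120
print("RAID 5:", raid_usable_gb(DRIVES, DRIVE_GB, 1), "GB usable, survives 1 drive failure")
print("RAID 6:", raid_usable_gb(DRIVES, DRIVE_GB, 2), "GB usable, survives 2 drive failures")
# A single 910 has no such redundancy: lose the card and the data goes with it.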
 

KenE

New Member
Feb 14, 2012
34
0
0
Sorry about dropping out of the conversation; I've been in a conference all day. We are set up with DFS on our servers, and our data gets replicated 800 miles away in another state in real time, so I have a little more flexibility when it comes to data integrity. Since it's just me hitting the server, I only need about 2,000 IOPS and 200GB of hot storage; I'm in that weird region of needing more than spinners but less than SSDs. So a simple solution (that is quiet) is my goal.
Heck, if I could get a better handle on LSI's new Nytro MegaRAID card I would look at that as well. I hope they release that analysis software soon; I'd be interested to see what's really happening when I run my GIS software.
 

KenE

New Member
Feb 14, 2012
34
0
0
Patrick, got the PM.

Thanks for the info. We are probably 90 days away from a server purchase (if we can continue in this current economy).
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
For a single-user GIS server the 910 would be a pretty good solution - assuming that you have the budget and don't have the room for multiple SSDs. If I had the room I'd still go with the multiple-SSD solution - better performance and lower cost. If I had no room and no budget, an OCZ Vertex3 x2 480GB formatted to 300GB or so to provide more OP space would also be an option - half the price and equivalent performance, but almost certainly worse reliability than the 910.

Be careful not to "fill up" any SSD too much. Figure out how many GB you write per day and try to leave at least that much free space on your drive*. I go so far as to enforce this by under-formatting my SSD drives. If you have 200GB of data but write 400GB in a day, I'd recommend making sure that you have more than 600GB of SSD space available.

*It's not that simple with controllers that do compression like the SandForce, but it's close enough.
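Here's that rule of thumb as a tiny sketch (the function name is mine, and per the footnote this ignores SandForce-style compression):

Code:
# Keep at least one day's worth of writes free on the SSD.
def min_ssd_capacity_gb(data_gb: float, daily_writes_gb: float) -> float:
    """Smallest formatted capacity that still leaves a full day's writes as free space."""
    return data_gb + daily_writes_gb

# The example from the post: 200GB of data, 400GB written per day.
print(min_ssd_capacity_gb(200, 400))  # 600.0 -> provision more than this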

 

Patrick

Administrator
Staff member
Dec 21, 2010
12,513
5,802
113
Heck, if I could get a better handle on LSI's new Nytro MegaRAID card I would look at that as well. I hope they release that analysis software soon; I'd be interested to see what's really happening when I run my GIS software.
What is this analysis software? Keep me in the loop. I'm a hardware guy at heart, so I need help finding relevant benchmarks.