M1015s and SSDs?


KenE

New Member
Feb 14, 2012
34
0
0
OK, I am looking to spec out my file server here at the office, but I had some questions when it comes to SSDs, RAID cards, and bus speeds.

Right now I'm looking at a Supermicro socket 1155 mid-tower server with the C202 chipset.
2 x8 slots plus 1 x4 slot (wired x1) on the board: http://www.supermicro.com/products/motherboard/Xeon/C202_C204/X9SCL.cfm
CPU would be the Core i3-2100 series (it's only a file server)
8GB of ECC RAM
Crucial/Mushkin 60GB SSD for the OS (Server 2008 R2)
M1015 controller (not sure which flash level), RAID 10
4x 128GB Samsung 830 SSDs in an iStar 5.25" drive bay
2x 500GB HDDs (RAID 1) on the motherboard chipset to run hourly backups of the OS/data drives
Mellanox InfiniHost III 10GbE card (x8 slot)
I've got a WHS to do major backups, and this server does DFS with the home office (so we've got two copies of the data)

So the questions are:
What firmware does the IBM card need to be flashed to?
To spread out the pain, could I just get two drives (RAID 1) and migrate to RAID 10 without losing data on this LSI card? And as I need space, can I add more drives to the RAID 10 array?
The Samsungs aren't set in stone, but I was looking for solid SSDs that can live outside of a TRIM environment. Any other thoughts?
Can the LSI controller (with SATA III SSDs) at least get close to filling the 10GbE pipe, or will I hit a bus/CPU limit?
(My workstation is on an x4-wired bus on the other end, so my max limit is going to be 5GbE, I assume.)

Any thoughts from our resident M1015 expert?

Ken
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,518
5,819
113
Ken,

On vacation, so I will let others discuss firmware. One thing is that the iStarUSA 4-in-1 5.25" bay adapter is something I used to use with my 15K rpm 2.5" SAS drives. I ended up dropping all of this in favor of bare 2.5" disks due to noise. Thermaltake just sent their versions of the 4-in-1 and 6-in-1, and I will be reviewing them in the next week or two. SSDs actually don't run too hot, so I'm looking for something that has some airflow but is not trying to cool 15K rpm SAS drives.
 

KenE

New Member
Feb 14, 2012
34
0
0
Thanks Patrick! I just looked at the Thermaltake stuff. Pretty cool. The iStarUSA stuff did look a little 'floppy', but I figured since they were going to be SSDs it wouldn't make much difference. I'm still probably two months out on the server upgrade. (We need some work before we drop $7K on migrating to new servers.)

Ken
 

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
I'd stay away from the iStar 4-in-1. They are flimsy/crappy: the trays bend, the mounting screws actually intrude into the space for the drive if they aren't seated perfectly, and the fans are failure-prone. Because of the flimsy trays you can get poor seating of the SATA and SATA-power connections on the drives. I used two of them in my build to mount 8x SSDs and eventually replaced both with a SuperMicro 4-in-1, which I like much, much better.

See here: http://www.supermicro.com/products/accessories/mobilerack/CSE-M14.cfm

If you shop around they are only about 20% more expensive than the iStar units - and worth every penny. The fan is LOUD, but if you slow it down with a resistor it runs smooth and quiet (plus - unlike the iStar unit - the fan will alarm if it fails). The chassis comes with a breakout cable to SATA connectors, but you'll have to look around a bit to find the right cable if you need to get to an SFF-8087 (you need a SAS-multilane to SFF-8087 cable - I'll get the exact specs later today and edit them into this post).
 

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
On your other questions:

- Since you are using Server 2008 R2, I assume you actually want to run the RAID 10 on the M1015 (and not software RAID). In that case, I'd probably run the M1015 with the current firmware from IBM (for this card) or the LSI 9240-8i firmware. Use the LSI MegaRaid drivers with it. Works great (see the MegaCli sketch after this list). Unless you are using software RAID (like ZFS), there is no need to re-flash to something supporting IT mode (i.e., no need to do the 9211-8i IT-mode flash).

- Can't really advise on the exact SSDs. Since you are running RAID, you'll need something supporting garbage collection. I've used OCZ Vertex drives for this type of application with good luck, but lots of info out there says OCZ is a 'don't touch', so I don't know what to advise. The Samsung 830s are probably not a bad choice, though their garbage collection gets somewhat mixed reviews.

- No, your 4x SSD RAID 10 won't get close to saturating a 10GbE link - especially if your client side is limited. But it will kick a 1GbE link's butt to the floor. I don't know exactly which Mellanox card you are looking at, but beware of the cheaper ones that do not do packet-processing offload, as it is very difficult to get anything close to 10G throughput when using CPU-based packet processing.
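For reference, here is a minimal MegaCli sketch of the RAID 10 setup above; the enclosure:slot IDs (252:x) are placeholders, so check your own with -PDList first:

Code:
:: list the physical drives to get their enclosure:slot IDs
MegaCli -PDList -a0
:: RAID 10 on MegaRaid firmware = spanned mirrors, two drives per span
:: (the 252:x IDs are examples - substitute your own)
MegaCli -CfgSpanAdd -r10 -Array0[252:0,252:1] -Array1[252:2,252:3] -a0
:: verify the new logical drive
MegaCli -LDInfo -Lall -a0

The same commands work with the IBM firmware or the LSI 9240 firmware, since both run the MegaRaid stack.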
 
Last edited:

KenE

New Member
Feb 14, 2012
34
0
0
Yeah, my thought was to use the M1015 for the SSDs.
The Samsungs seem to be used by Dell for RAID, so I thought they would work.
I have a pair of InfiniHost III cards (the type without memory, i.e. cheap). They were quick in my server: not fast, but VERY LOW latency. I'm hoping that using these cards in Sandy Bridge boxes with SSDs will result in better performance than a Sandy Bridge and a Conroe with HDDs.
I run a GIS workstation, and I have lots of little files and really big ones with project databases. Now, with the MegaRaid drivers, can I add drives to the array (in pairs, of course) without losing data?
 

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
I wouldn't expect the SSDs/newer CPU to make those Mellanox cards much quicker. Your issue is that the cards need to use the system CPU for network processing (packet header encapsulation, protocol-stack processing, checksums, etc.) rather than having it done onboard on the NIC. No matter what you do, you are still stuck with bus latencies and CPU interrupt/scheduling delays. You need to consider whether you are spending many $hundreds on a problem you could fix with a couple of better NICs.
 

mobilenvidia

Moderator
Sep 25, 2011
1,956
212
63
New Zealand
See below for my thoughts
OK, I am looking to spec out my file server here at the office, but I had some questions when it comes to SSDs, RAID cards, and bus speeds.

Right now I'm looking at a Supermicro socket 1155 mid-tower server with the C202 chipset.
2 x8 slots plus 1 x4 slot (wired x1) on the board: http://www.supermicro.com/products/motherboard/Xeon/C202_C204/X9SCL.cfm
CPU would be the Core i3-2100 series (it's only a file server)
8GB of ECC RAM
Crucial/Mushkin 60GB SSD for the OS (Server 2008 R2)
M1015 controller (not sure which flash level), RAID 10
4x 128GB Samsung 830 SSDs in an iStar 5.25" drive bay
2x 500GB HDDs (RAID 1) on the motherboard chipset to run hourly backups of the OS/data drives
Mellanox InfiniHost III 10GbE card (x8 slot)
I've got a WHS to do major backups, and this server does DFS with the home office (so we've got two copies of the data)

So the questions are:
What firmware does the IBM card need to be flashed to?
I would run it in LSI9211 IR mode for best results: you get RAID, and everything else just passes through.
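If you go that route, the usual crossflash sequence from a DOS boot disk looks like this (a sketch: 2118ir.bin/2118it.bin and mptsas2.rom come from the LSI 9211-8i firmware package, and the SAS address placeholder needs the number from the sticker on your card):

Code:
:: wipe the IBM SBR so the LSI flasher will accept the card
megarec -writesbr 0 sbrempty.bin
megarec -cleanflash 0
:: reboot to DOS again, then flash the 9211 IR firmware
:: (add -b only if you need to boot from the card)
sas2flsh -o -f 2118ir.bin -b mptsas2.rom
:: restore the card's SAS address
sas2flsh -o -sasadd 500605bXXXXXXXXX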

To spread out the pain, could I just get two drives (RAID 1) and migrate to RAID 10 without losing data on this LSI card? And as I need space, can I add more drives to the RAID 10 array?
I haven't tried to go from RAID 1 to 10 via migrate; I'm currently running Win8 on the SSDs, so I can't test it yet.

The Samsungs aren't set in stone, but I was looking for solid SSDs that can live outside of a TRIM environment. Any other thoughts?
TRIM doesn't work in RAID with any chipset yet; Intel will probably be the first to offer it, on its SW RAID chipsets.

Can the LSI controller (with SATA III SSDs) at least get close to filling the 10GbE pipe, or will I hit a bus/CPU limit?
(My workstation is on an x4-wired bus on the other end, so my max limit is going to be 5GbE, I assume.)
In RAID 0, 4x SSDs can pump out approx. 1.6GB/s (10GbE = approx. 1GB/s actual).
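The rough numbers behind that, assuming ~400MB/s reads per SSD:

Code:
4x SSD reads:      4 x ~400MB/s = ~1.6GB/s (RAID 0; RAID 10 reads similar, writes roughly half)
10GbE line rate:   10Gb/s / 8 = 1.25GB/s raw, ~1GB/s actual after protocol overhead
PCIe 2.0 x8 slot:  8 x 500MB/s = ~4GB/s, so the M1015's slot is not the bottleneck

So on the server side the network pipe, not the controller or the bus, is the limit.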

Any thoughts from our resident M1015 expert?

Ken
 
Last edited:

KenE

New Member
Feb 14, 2012
34
0
0
I wouldn't expect the SSDs/newer CPU to make those Mellanox cards much quicker. Your issue is that the cards need to use the system CPU for network processing (packet header encapsulation, protocol-stack processing, checksums, etc.) rather than having it done onboard on the NIC. No matter what you do, you are still stuck with bus latencies and CPU interrupt/scheduling delays. You need to consider whether you are spending many $hundreds on a problem you could fix with a couple of better NICs.
Well, the server needs to be upgraded anyway; we installed this one in the spring of 2007 (our SBS server has been in service since April of 2006!), so the servers need to be retired. Since I handle all the GIS, I wanted my file server to spit data out as fast as possible.
When I used the Mellanox cards with the old server, it was fast with the little files (I could tell the latency was very low), but as soon as the files got over 100MB it would drop back down to almost 1GbE speeds. I'm assuming that this is because of my old 3ware card with HDDs and the memory limitations of that RAID card.
 

KenE

New Member
Feb 14, 2012
34
0
0
mobilenvidia, do you feel that using a flashed (IR) M1015 is safe for a production environment? I can't seem to tell the difference between the 9211 and the 9240.
 

mobilenvidia

Moderator
Sep 25, 2011
1,956
212
63
New Zealand
Safe as houses :)

LSI9240 = RAID controller, or JBOD mode for single drives; no passthrough at all.

LSI9211 (IR) = RAID controller with passthrough for single drives.

LSI9211 (IT) = passthrough for single drives; a simple HBA.
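If you can't tell what a card is currently running, each firmware family answers to its own tool (a sketch; adapter 0 assumed):

Code:
:: MegaRaid (9240-style) firmware reports its FW package to MegaCli
MegaCli -AdpAllInfo -a0
:: IT/IR (9211-style) firmware shows up in sas2flsh instead
sas2flsh -listall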
 

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
LSI9240 = RAID controller, or JBOD mode for single drives; no passthrough at all.
Curious why you say this. The 9240 does straight passthrough of any unconfigured drives. It reports them as single-drive JBODs in the MMI, but it is in fact a sector-for-sector perfect passthrough. Besides, what is the difference between a single-drive JBOD and a passthrough? This is true for any of the MegaRaid-series controllers (9240, 9260, etc.).

If this were not true, I am wondering how my ZFS server works (and why the pools export/import perfectly to another system using only onboard SATA controllers).
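For what it's worth, that export/import test is trivial to repeat (pool name 'tank' is just an example):

Code:
# on the box with the 9240/M1015, drives unconfigured/JBOD
zpool export tank
# move the disks to onboard SATA ports (or another machine), then:
zpool import tank

If the controller were remapping or reserving sectors, the import would not find the ZFS labels where it expects them.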
 

mobilenvidia

Moderator
Sep 25, 2011
1,956
212
63
New Zealand
JBOD needs to be set; a drive does not become JBOD by default.

On the LSI9211, any single drive not in an array is passed through; nothing on the controller needs to be set.
On the other RAID controllers, the drive is not shown to the OS/BIOS unless it's set to JBOD.

JBOD and passthrough might very well be the same thing, but it requires an extra step with JBOD on the LSI9240.
I found the LSI9260 doesn't like JBOD; I even tried to force-enable it with an NVRAM tweak, but it refused to work. A single-drive RAID 0 is the best it can do.
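For anyone following along, the extra step on the 9240/M1015 MegaRaid firmware looks like this (a sketch; the enclosure:slot ID is an example, check yours with -PDList):

Code:
:: allow JBOD on the adapter
MegaCli -AdpSetProp -EnableJBOD -1 -a0
:: flag a specific unconfigured drive as JBOD
MegaCli -PDMakeJBOD -PhysDrv[252:2] -a0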
 

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
JBOD needs to be set; a drive does not become JBOD by default.

On the LSI9211, any single drive not in an array is passed through; nothing on the controller needs to be set.
On the other RAID controllers, the drive is not shown to the OS/BIOS unless it's set to JBOD.

JBOD and passthrough might very well be the same thing, but it requires an extra step with JBOD on the LSI9240.
I found the LSI9260 doesn't like JBOD; I even tried to force-enable it with an NVRAM tweak, but it refused to work. A single-drive RAID 0 is the best it can do.
No, it doesn't. Any unconfigured drive is treated as a JBOD of one drive - AKA passthrough. Personally tested and confirmed multiple times, using 'real' 9240s, M1015s (both with IBM firmware and LSI 9240 firmware), and 9260s. LSI's MegaRaid manual confirms that this is the expected behavior for all MegaRaid cards.
 

mobilenvidia

Moderator
Sep 25, 2011
1,956
212
63
New Zealand
That's odd, as I've just flashed my card (M1015) to the latest LSI9240 FW.

One of my spindle HDDs is set to JBOD, which the OS happily sees.
The other is 'unconfigured good', which the OS can't see; in LSI9211 mode the OS does see it (as unconfigured good).

I'm pretty sure that the LSI9260 can't do JBOD. I'll put this card in and confirm, or eat my words :)
 
Last edited:

mobilenvidia

Moderator
Sep 25, 2011
1,956
212
63
New Zealand
Just plugged in the M5015 (LSI9260) and confirmed there is no option to set drives to JBOD.
'Unconfigured good' drives don't show to the OS; the only option for a single drive is RAID 0.

So far I've still got my bootable RAID 0 Win8 partition on the SSDs; it's at least good that the LSI cards are interchangeable with the arrays.
 

KenE

New Member
Feb 14, 2012
34
0
0
Well, I just figured out that on our budget SSD RAID isn't going to work. I've got 220GB of active storage on our network, and 4x 128GB in RAID 10 only yields roughly 256GB usable, which leaves no room for over-provisioning.

So, re-looking at everything, I'm now looking at a 4-disk RAID 10 using the Hitachi 7K3000 2TB drives on either an LSI 9260 or 9265, so that if I need the speed bump I can get CacheCade with some SSDs and a BBU.
Mobile, do you know if flashing an M5015 to the 9260 will make it 'look' like an LSI card? Can the software be tricked?
 

mobilenvidia

Moderator
Sep 25, 2011
1,956
212
63
New Zealand
Mobile, do you know if flashing an M5015 to the 9260 will make it 'look' like an LSI card? Can the software be tricked?
Nope, the M5015 is an M5015 no matter what SBR you put in.
It will only look like an LSI9260 in MSM and at boot time (where it's faster as a non-M5015).

I have an Intel CacheCade card and it doesn't work on the M5015; it complains about a wrong iButton (key).
You'd be best to go down the LSI9260 (from LSI) or Intel-equivalent route, so that if you purchase a key it will work.

The keys have a chip with a security code that needs to correspond to the chip on the controller.
There is no way around this (well, not without unsoldering and then burning the tiny EEPROM with the code from another).