Best RAID controller for new virtualization setup


lpallard

Member
Aug 17, 2013
276
11
18
I have to concur with you, Rhinox, once again on the price observations... In Canada, LSI controllers are just crazy unreachable for home/SOHO users...

The 250-euro deal for a brand-new M5016 with the CacheVault & supercap sealed in the box is very tempting...

Supermicro replied to my inquiry; they simply mentioned that they have never tried IBM products on their motherboards and cannot confirm compatibility...

Do you guys think it's fairly safe to assume the M5016 will work on a Supermicro board?

Otherwise, I'll resell it and cry a little bit... ;)

It's good that the M5016 supports both SAS & SATA at the same time, as long as they are not within the same RAID array... So I take it that one array with SAS drives & one array with SATA drives is feasible...
 

gea

Well-Known Member
Dec 31, 2010
3,161
1,195
113
DE
I suppose your board supports IOMMU, so why not build an ESXi all-in-one system with a virtualized ZFS NAS or SAN? For such a system, your IBM M1015 is perfect and unbeaten. With ZFS and its software RAID, no cache or BBU is needed or wanted, as ZFS uses main memory for caching, plus a ZIL device if you need power-loss-safe write logging. ZFS mirrors are very fast, ZFS RAID-Z is mostly faster than RAID 5/6, and you do not have the RAID 5/6 write-hole problem, thanks to its copy-on-write filesystem.
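For a sense of how little setup the software RAID side needs, here is a minimal sketch of a RAID-Z pool shared over NFS. The pool and filesystem names and the device names are placeholders (device naming differs between Solaris-type systems and Linux):

# create a RAID-Z pool from three whole disks (device names are placeholders)
zpool create tank raidz c0t1d0 c0t2d0 c0t3d0
# create a filesystem for VM storage and export it over NFS for ESXi
zfs create tank/vmstore
zfs set sharenfs=on tank/vmstore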

If you have ESXi running, you may download and import my preconfigured web-based ESXi VM appliance and try such a system out for NAS use and as a shared NFS datastore for your VMs.

See my how-to at http://www.napp-it.org/doc/downloads/napp-in-one.pdf
 

Rhinox

Member
May 27, 2013
144
26
18
...Do you guys think it's fairly safe to assume the M5016 will work on a Supermicro board?...
I think it will. LSI is not tied to any particular brand; it makes controllers that are expected to work in every possible server. And the M5016 is just a re-branded LSI 9266. You can even flash it with the original LSI BIOS. I'm pretty sure the M5016 will work in any server board (and most desktop boards). And as you said: if not, you can still sell it for the same price. Maybe even for more! ;-)
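If you do try the cross-flash, MegaCli can write a firmware image from within the OS. A minimal sketch, assuming you have downloaded a matching 9266 firmware package from LSI (the image file name here is just a placeholder):

# flash the MegaRAID firmware image onto adapter 0
./MegaCli64 -AdpFwFlash -f mr2208fw.rom -a0

Double-check that the image really matches the card before flashing; a failed flash can brick the controller.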
 

chune

Member
Oct 28, 2013
119
23
18
Why not get an L5639 DL180 G6 or DL380? The key part of this unit's success is not the RAID controller (yes, that is important) but the overall cost!

A DL180 G6 with dual L5639s at $399 with rails cannot be beat on price. Throw enough RAM at your solution and pick fast drives. 15K SAS 3.5" or 10K SAS 2.5" is key!

The RAID controller with RAID 10 will not be the limiting factor; the drives will be. For cache, you want to go with the 512MB BBWC or 1GB FBWC (I suggest the FBWC); the P410 is usually included in the HP solutions and works just fine with RAID 10! Just make sure the BATTERY IS new, or get the supercapacitor version (!!).

6. HP rules. Sorry, the rest of the guys are junk. I never experienced problems until the day I got my first 3 Dell servers. PSOD in ESXi? Never saw that until my first R610 came online with the PERC H700 firmware being bad for 6 months!
Clearly you are an HP fanboy, so I'm not going to try to change your religion here. However, could you explain to me why HP chose to allocate ONE SFF-8087 port to power FOURTEEN drive bays on the DL180?? I was about to drop the hammer on three of them for napp-it boxes until I realized this fact. And the fact that I would have to battle for any additional PCIe slots. Also, they have ugly caddies :D. The Dell C2100 comes at a slightly steeper price ($650, but it comes with an H700 + BBU), but it has dual SFF-8087 powering its expander backplane by default and can be upgraded to a 1:1 backplane sporting 3x SFF-8087 connectors. I also enjoy the option of dual 10Gb NICs and an LSI SAS2008 daughtercard, all without using any of the PCIe slots. But most importantly, it has very handsome drive caddies. Funny about the PSOD comment: the first and only time I saw a PSOD was trying to get ESXi 5.5 loaded on my HP MicroServer =P
 

lpallard

Member
Aug 17, 2013
276
11
18
The M5016 is on its way... for pretty cheap, too: 268 euros including shipping! I will post back when it's installed!

Regarding 15k SAS drives... It's been a bigger challenge than I expected to find relatively cheap drives... I need at least 300GB (officially about 189GB plus contingency), but I'd be more comfortable with 450GB...

300GB SAS drives are crazy expensive... so imagine 450GB...


Some questions regarding SAS drives:

- Would I see a major difference with 7.2k drives instead of 15k? The premium for 15k is more than triple the price!
- Or could I use my existing SATA drives for now, until SAS drives come down in price (will that ever happen?), then swap them for SAS ones? We're talking about both boot drives and the ESXi datastore...
- Are there any drive brands I should NOT touch because they're unreliable or incompatible with the M5016? In other words, would HP, Dell, IBM, etc. branded drives work with the M5016?

My other question is regarding my large Linux mdadm RAID5 array... If I move the data to a real hardware array, will I be able to proceed as follows:

1. Connect 3 drives to the controller and create a RAID5 array
2. Move some stuff to the array
3. Add more drives
4. Expand the array
5. Move the remaining stuff...

All that is because I cannot temporarily store 6TB of data (it's too much) while I obliterate the Linux RAID array and rebuild the array on the M5016...
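For reference, the mdadm side of freeing up drives in stages would look roughly like this. This is only a rough sketch, assuming an ext4 filesystem directly on /dev/md0; every size, device name, and drive count below is a placeholder, and you want backups of anything irreplaceable before reshaping:

# 1. shrink the filesystem well below the target array size (example size)
resize2fs /dev/md0 3500G
# 2. shrink the array's usable size so mdadm can verify the data still fits
mdadm --grow /dev/md0 --array-size=3800G
# 3. reshape the RAID5 down from 6 member drives to 4
mdadm --grow /dev/md0 --raid-devices=4 --backup-file=/root/md0-reshape.bak
# 4. once the reshape completes, remove a freed drive
mdadm /dev/md0 --fail /dev/sdf --remove /dev/sdf

The freed drives can then go onto the M5016 and be added to the hardware array.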

Cheers!!!
 

Chuckleb

Moderator
Mar 5, 2013
1,017
331
83
Minnesota
Yes, you can expand the array on LSI cards with Online Capacity Expansion (OCE).
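A minimal sketch of what that looks like with MegaCli, assuming logical drive 0 on adapter 0 and a new drive in enclosure 252, slot 4 (all placeholders):

# start an online reconstruction, adding the new drive to the RAID5 logical drive
./MegaCli64 -LDRecon -Start -r5 -Add -PhysDrv[252:4] -L0 -a0
# check reconstruction progress
./MegaCli64 -LDRecon -ShowProg -L0 -a0

You still have to grow the partition/filesystem on top once the controller reports the new size.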

Back to a question I asked earlier: how are you attaching the SAS drives to the card? They have a different connector; I didn't want you to get surprised later on.

Certain desktop drives do not work well: the Seagate 3TB Barracudas, for example. The NAS drives almost always work; I have used the WD Red drives and have the Seagate NAS ones now. Just be careful with the desktop drives.
 

lpallard

Member
Aug 17, 2013
276
11
18
Yes, you can expand the array on LSI cards with Online Capacity Expansion (OCE).

Back to a question I asked earlier: how are you attaching the SAS drives to the card? They have a different connector; I didn't want you to get surprised later on.

Certain desktop drives do not work well: the Seagate 3TB Barracudas, for example. The NAS drives almost always work; I have used the WD Red drives and have the Seagate NAS ones now. Just be careful with the desktop drives.
Based on your comments, I may be in trouble... I have NO NAS or Red drives, only desktop drives. And a lot of them.

The server currently hosts:

2TB WDC WD20EARX-00PASB0 <= DANGER!! Imminent self-destruct ;)
2TB ST2000DL003-9VT166
2TB HDS5C3020ALA632
2TB HDS5C3020ALA632
2TB ST32000542AS
2TB HDS5C3020ALA632
2TB ST2000DM001-1CH164
2TB ST2000DM001-1CH164
320G WD3200AAKS-75VYA0
320G ST3320620AS
250G ST3250820AS

And 2 other 2TB Hitachis like the Deskstar 5K3000, and a 3TB Seagate (can't remember the model number, but definitely a desktop drive)...

I have three scenarios to use all these drives while keeping the setup as simple as possible (nothing beats the KISS principle!):

Scenario 1:

The M5016 is connected to a SAS expander. One 300GB SAS drive is connected to the first SAS port of the M5016, and the expander is connected to the other SAS port. From the expander, a second SAS drive is connected to a port and assembled as RAID1 with the SAS drive on the M5016. ESXi and critical data go onto that array.

Then, from the expander, I connect all other drives (except the smaller/older ones) using SAS-to-SATA fanout cables... All those drives are assembled as a RAID5 array at the controller level, then passed to a VM for use...

Finally, the two 320GB drives & the 250GB are assembled as a RAID0 array, or discarded and later replaced by an SSD (which is the intent soon enough...)

Scenario 2:

2x 300GB SAS drives are connected to the M5016 and assembled as RAID1; ESXi and critical data go onto that array.

The M1015 remains in the server with all 2TB drives connected to it as they are now; the M1015 is passed through to the main VM, and I continue to use the Linux RAID5 array... The problem with this option is that the large RAID5 array won't benefit from the M5016's cache and supercap...

Scenario 3:

Same as scenario 2, but the M1015 is eliminated (reused in another build), and all 2TB drives are connected to the motherboard's SATA ports and passed through to the main VM for a Linux RAID5 array. The problem with this is the lack of SATA ports on this motherboard: it only has 6, so growing the RAID5 array won't be possible...

Overall I much prefer scenario 1, because all drives will be able to benefit from the controller's cache and supercap. The only downside is the need for a SAS expander, but I understand a basic one is not too expensive...

Do you see significant issues with scenario 1? For example, SAS and SATA drives together, but NOT in the same arrays...
 

Chuckleb

Moderator
Mar 5, 2013
1,017
331
83
Minnesota
We bought about 150 Seagate ST3000DM001 drives and had many, many problems with them and LSI controllers in Supermicro chassis. We tried to get them to work with Adaptec cards, single- and dual-channel expanders, and all different firmware versions. Random drops, unrecognized drives, etc. We switched to WD Red SATA drives and never had a problem.

I think we even had problems using IT mode as a passthrough. Not the best day for us...

So the configs you have may work out; I'm just not sure. I would not be worried about mixing SAS and SATA on the same card. The card is smart enough.
 

lpallard

Member
Aug 17, 2013
276
11
18
We bought about 150 Seagate ST3000DM001 drives and had many, many problems with them and LSI controllers in Supermicro chassis. We tried to get them to work with Adaptec cards, single- and dual-channel expanders, and all different firmware versions. Random drops, unrecognized drives, etc. We switched to WD Red SATA drives and never had a problem.

I think we even had problems using IT mode as a passthrough. Not the best day for us...

So the configs you have may work out; I'm just not sure. I would not be worried about mixing SAS and SATA on the same card. The card is smart enough.
I wonder what may have caused your problems... Firmware? With my M1015 in IT mode on my Supermicro motherboard, not a single glitch in about 8 months... The only drive that had a hiccup, about a week or so after I installed the drives on the M1015, was the WD Green drive, but I blamed the nature of the drive rather than the M1015...

All the Hitachis and Seagates still work flawlessly... (touching wood).

Would you use branded drives with this controller (HP, Dell, IBM, etc.), or should I stick to Hitachi, WD, Seagate?

Also, 15k or 7.2k?
 

Chuckleb

Moderator
Mar 5, 2013
1,017
331
83
Minnesota
It could have been the expanders, though I think I tested with the breakout cables as well. This was from the card direct to the chassis, though, and it could have been bad batches, since these were early in the product cycle. Also, I had the 3TB models; you have the 2TB drives. We tried every firmware, even harassing Seagate to give us beta firmware. Nothing. We finally stripped them all out and repurposed them as data delivery drives, which work fine.

The green drives and the like, with their spin-downs, can cause problems. I experienced that in the TiVo and also in some Drobos; I switched to regular drives and they seemed better. Another fun story is with WD RE drives: we had about 60 of the 2TB RE SATA drives, and after about 3 years they all started to randomly drop. I think the failure rate after 3 years was on the order of 25-30%, which is pretty high. We RMA'd them to WD, who gave us the newer RE3 drives without the "Green Power", and they are rock solid. Another group that I work with made WD replace a huge bunch, I think 40-50 drives. These are the enterprise RAID-enhanced drives with the 5-year warranty.

I'm not a fan of branded drives, since they usually mean a markup for us, and they are nothing more than regular drives that are whitelisted in the vendor's products. Seagate, WD, and Hitachi are all fine. I have some of each in my personal systems; at work we stick with a single brand because we buy in large batches, no other reason.

I'm running 7200RPM SATA drives; I can't afford or need the performance of 15k drives, nor do I want the cost of SAS. Ideally, after 3-4 years, I'd replace all the drives, since they keep getting denser and I could run with fewer drives, saving power but losing performance since I'd have fewer spinning platters. I'm too lazy, though, so I probably won't. If I want speed, I'll do SSD... and I have stories there too, but another day. ;)

You're going to have to do some testing. The card is fine, you'll just need to play and see what it likes.

I wonder what may have caused your problems... Firmware? With my M1015 in IT mode on my Supermicro motherboard, not a single glitch in about 8 months... The only drive that had a hiccup, about a week or so after I installed the drives on the M1015, was the WD Green drive, but I blamed the nature of the drive rather than the M1015...

All the Hitachis and Seagates still work flawlessly... (touching wood).

Would you use branded drives with this controller (HP, Dell, IBM, etc.), or should I stick to Hitachi, WD, Seagate?

Also, 15k or 7.2k?
 

Chuckleb

Moderator
Mar 5, 2013
1,017
331
83
Minnesota
Interesting note: I just looked at the settings on my 9271 and saw this:

--
Allowed Mixing:
Mix in Enclosure Allowed
Mix of SAS/SATA of HDD type in VD Allowed

... also

Allowed Device Type : SAS/SATA Mix
Allow Mix in Enclosure : Yes
Allow HDD SAS/SATA Mix in VD : Yes
Allow SSD SAS/SATA Mix in VD : No
Allow HDD/SSD Mix in VD : No
Allow SATA in Cluster : No
--

It looks like my card will allow any drives to be used to build a virtual disk/array. You'll have to confirm after you get your card in. Here's the command I used: ./MegaCli64 -adpallinfo -a0
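The adpallinfo dump is huge, by the way, so filtering for the relevant lines works well:

./MegaCli64 -adpallinfo -a0 | grep -i mix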
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
I ran a mirrored pair of 5K4000 drives (the HDS5Cxxxx line, like yours) for about six months with no issues. Now I have a dozen Hitachi/HGST Deskstar 5K2000 drives in RAID6 and another eight 5K4000 drives, also in RAID6. As a test this past week, I beat them up for 48 hours straight, reads and writes, using IOMeter. No problems found. These are supposed to be low-power desktop drives, not NAS drives, but I have come to trust them in my NAS.
 

Rhinox

Member
May 27, 2013
144
26
18
We bought about 150 Seagate ST3000DM001 drives and had many, many problems with them and LSI controllers in Supermicro chassis. We tried to get them to work with Adaptec cards, single- and dual-channel expanders, and all different firmware versions. Random drops, unrecognized drives, etc. We switched to WD Red SATA drives and never had a problem...
FYI, random drops of common desktop drives from a RAID array can sometimes be caused by error recovery. As a drive is used, some sectors may become "weak", and the drive then needs several attempts to read the data. When this happens, the drive tries to remap the weak sector to one of its spare sectors (yes, even traditional drives have them), and you can even see whether any sectors have been remapped in the SMART status. This is quite common and normal for every drive (even the most expensive ones). The problem is that desktop drives either have no time limit for this recovery, or a very long one: it can take anywhere from a few seconds up to a minute! If the drive does not report the write/read as finished within the controller's timeout, the controller marks the drive as defective.

The so-called "RAID drives" have a feature called "time-limited error recovery" (TLER, Western Digital) or "command completion time limit" (CCTL, Samsung/Hitachi): the maximum time the drive firmware is allowed to spend on error recovery. This limit is set below the RAID controller's timeout, so the drive does not drop from the array. Some desktop drives have this feature too, but disabled (it can be enabled), and some common desktop drives have no way to limit recovery time at all. So it does make sense to use "RAID drives" in RAID arrays, or at least to limit TLER/CCTL to a few seconds (some vendors provide utilities for this)...
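On drives that expose it, you can check and set this from Linux with smartctl (values are in tenths of a second, so 70 = 7 seconds; /dev/sda is a placeholder):

# show the current SCT error recovery control (TLER/CCTL) settings
smartctl -l scterc /dev/sda
# limit read and write error recovery to 7 seconds each
smartctl -l scterc,70,70 /dev/sda

Note that on many drives the setting does not survive a power cycle, so it has to be reapplied at every boot.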
 

lpallard

Member
Aug 17, 2013
276
11
18
I have not yet received my M5016, but in the meantime I am starting to envision how I will transfer my current server to the virtualized one running on the same hardware... I am concerned about the process, mainly because I will have to convert a running RAID1 system array to separate drives and transfer terabytes of data from a Linux mdadm array to a hardware array without adding temporary drives... But that is another topic in itself...

Anyway, before I start, I need to pick my virtualization platform. I didn't want to create a new thread because there are already dozens, but instead of asking which platform I should go for, I'd rather ask whether it is possible to manage ESXi from a Linux client... I read that VMware offers only Windows-based vSphere clients... I have no Windows computers; that is one of the points of virtualization in my case...

Can you manage ESXi from a Linux machine reliably? Reliably, to me, means production-ready.

If not, then Proxmox is the only hypervisor left for me... I don't like Xen because of the dom0 concept...

Are the guys using ESXi all running Windows machines to manage the hypervisor?
 

Chuckleb

Moderator
Mar 5, 2013
1,017
331
83
Minnesota
ESXi 5.5 starts to limit how you manage it by slowly removing support for the vSphere client and moving you to the vSphere Web Client. This is a problem for free users, but if you are paying, the web app is available and should be cross-platform, though I have heard of troubles.

I do believe most others have a Windows machine to manage it otherwise, and I hear the web client works on the Mac.
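For basic tasks from Linux without any client at all, the host's own CLI over SSH goes a long way. A minimal sketch, assuming SSH is enabled on the ESXi host (host name and VM ID are placeholders):

# list registered VMs with their IDs
ssh root@esxi-host vim-cmd vmsvc/getallvms
# query and change power state by VM ID
ssh root@esxi-host vim-cmd vmsvc/power.getstate 12
ssh root@esxi-host vim-cmd vmsvc/power.on 12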
 

lpallard

Member
Aug 17, 2013
276
11
18
Hmmm... sounds like finding the right hypervisor will be a sizable challenge for me...

ESXi seems to be getting more & more proprietary/limited, and for non-profit use like mine, it won't be possible to pay for critical features... Also, it seems to require a Windows machine to manage (nonsense, but then again, Windows is popular, I guess...), and their apparent intention to replace the vSphere client with a paid web client is also a push to pay for everything...

While I am waiting for the M5016..

Quickly, without getting into the details of my usage, hardware, etc., which hypervisor would you recommend? Proxmox, XenServer, something else?

I will need to virtualize my current server, which is fairly I/O-intensive (MySQL databases, monitoring services, web apps, web frameworks, etc.), and also:

2 Windows XP virtual machines (one for production, one for testing)

2 Linux machines (an Ubuntu Server machine and a Slackware machine)

a pfSense machine with heavy RAM usage (Snort, Squid, HAVP)
 

bwillcox

Member
Jan 20, 2013
32
0
6
Tejas
Straight-up KVM would be a good fit: you can just run virt-manager on your Linux system, connect to your hypervisor, and manage the VMs from there. This is what I do for my hypervisors running CentOS... I connect to them from virt-manager on my Fedora workstation, and/or use virsh from an SSH session if needed.
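A minimal sketch of that remote workflow, assuming libvirt is reachable over SSH (host and guest names are placeholders):

# list all guests on a remote KVM host
virsh -c qemu+ssh://root@kvm-host/system list --all
# start and gracefully shut down a guest
virsh -c qemu+ssh://root@kvm-host/system start myvm
virsh -c qemu+ssh://root@kvm-host/system shutdown myvm

virt-manager connects with the same qemu+ssh URI, so the GUI and the CLI manage the identical hosts.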

-b
 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
ESXi requires Flash to manage these days, via the horrible web interface. You can of course use other tools for simple tasks, including the command line, but the fact is that ESXi is in a class of its own for general-purpose Linux hosting.

If you want to pass a workstation through and run some VMs on the side, then perhaps ESXi is not for you; but if you want to run 50 CentOS VMs off a single server, 100% non-stop, forever, then ESXi is for you. It just costs money, $$, or something like that.
 

lpallard

Member
Aug 17, 2013
276
11
18
I may give the major platforms a spin and see which one fits best... it's just super time-consuming, and I'd like to keep the server's downtime to a minimum for this migration...

Proxmox is very attractive right now.
 

Rhinox

Member
May 27, 2013
144
26
18
ESXi requires Flash to manage these days, via the horrible web interface...
Not quite. You can still manage a stand-alone ESXi server with the "native" vSphere client (which, unfortunately, is nearly as bad as the Flash client, except that you do not need vCenter Server as a big fat "middleman")...