You must mean ZFS+Linux isn't production ready. ZFS+Solaris is rock solid.
I know the new Dell VRTX has SAS SR-IOV when you install the new Dell PERC8 card, but I have never heard of that feature on a C6100. What have you heard, and what have you tested yourself?
Oh no, I was just curious. The only way to share one PCIe card among multiple blades is SR-IOV (hopefully with FLR).
For some reason I thought the C6100 had shared PCIe slots, but I guess you are saying they are mezzanine-only!
Odd that SR-IOV on the VRTX works but nobody can get it to work on ESXi! The hardware supports it!
I will check the P420/1GB FBWC to see if it uses SR-IOV as well. It has always supported zoning/clustering.
I am going to see if I can find a way to make a target! I know the Marvell chipset can act as a target, so you can build your own MSA.
But then again, I've got a boatload of LeftHand VSAs, and with IPoIB or just 10GbE Ethernet you can create a really powerful, hyper-redundant storage setup.
Most NICs from the past several years support Virtual Functions, and more recent ones add Function Level Reset, so you can reset a hung VF without killing the rest of the VM's functions.
It would be very cool to set up my Solarflare NIC, which supports 2048 VFs, so each LUN gets its own dedicated path (or two, or a dozen). That would give iSCSI storage a truly dedicated path so the hypervisor won't try to do its awful time slicing.
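If you want to sanity-check what a NIC actually exposes before handing VFs out, here's a rough Python sketch for a Linux box (not ESXi); it assumes the standard sriov_totalvfs/sriov_numvfs sysfs files and uses eth0 as a placeholder interface name:

```python
# Minimal sketch: inspect SR-IOV virtual-function support on a Linux host.
# Assumes the physical function shows up as "eth0"; adjust for your NIC.
from pathlib import Path

nic = "eth0"
dev = Path(f"/sys/class/net/{nic}/device")

total_vfs = int((dev / "sriov_totalvfs").read_text())  # VFs the PF can expose
active_vfs = int((dev / "sriov_numvfs").read_text())   # VFs currently enabled
print(f"{nic}: {active_vfs} of {total_vfs} virtual functions enabled")

# Enabling 8 VFs (needs root, and the driver must support SR-IOV):
# (dev / "sriov_numvfs").write_text("8")
```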
I did some testing, and ESXi 5.1 seems to be about 20-100x faster with "thick provision, VAAI, single target/LUN per VM, single NIC per VM" versus one big shared LUN with many VMs and one shared NIC for many VMs. A few observations:
1. Thin provisioning requires massive CPU to svMotion, since it effectively expands the thin VMDK (say 700GB with 20GB used), then crunches it back down. This hammers the CPU and drags out the svMotion. A large thin VM takes about 40 minutes over 10GbE versus 4 minutes with VAAI and thick eager-zeroed!
2. DAS-to-DAS vMotion with thin goes like this: send, wait many minutes, send RAM, wait minutes, sync RAM, wait minutes. VERY SLOW!
3. SIOC adjusts queue depth based on shared-storage latency! NIOC can further control iSCSI (but not FC). This is only available for shared SAN storage. Queue depth is continuously adjusted against a configured latency threshold (5ms for SSD, 30ms for a SATA SAN); see the sketch after this list.
4. DAS uses none of this, just a simple time-slicing scheme: 16-256 QD per VM, 16-256 QD per target/LUN, and the VMs simply swap turns. For instance, after roughly 6 seeks, each random with a >2000-sector difference, your share is up (regardless of latency!). That is stupid for RAM/SSD storage!
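Here's a rough sketch of the latency-driven queue-depth idea from item 3, purely illustrative and not VMware's actual SIOC algorithm; I'm assuming a simple halve-on-congestion / creep-back-up policy:

```python
# Rough sketch of latency-driven queue-depth control (SIOC-style idea, not VMware's code).
def adjust_queue_depth(qd, observed_latency_ms, threshold_ms, min_qd=4, max_qd=256):
    if observed_latency_ms > threshold_ms:
        return max(min_qd, qd // 2)   # back off hard when the datastore is congested
    return min(max_qd, qd + 1)        # creep back up while latency stays under the threshold

qd = 64
for latency in (3, 4, 9, 12, 6, 4, 3):   # e.g. an SSD datastore with a 5 ms threshold
    qd = adjust_queue_depth(qd, latency, threshold_ms=5)
    print(f"latency {latency} ms -> queue depth {qd}")
```

Contrast that with item 4: the DAS path keeps a fixed queue depth and just rotates whose turn it is, so latency never feeds back into the schedule.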
SR-IOV is critical for network and storage because it gives ESXi "1 VM gets 1 RAID controller with 1 LUN" and "1 dedicated VF NIC per VM". In that case no time slicing is necessary, since it is all offloaded to hardware.
With 2 VMs sharing one LSI SAS controller you can push maybe 64 QD across 8 SSDs, so that's maybe 8 QD per SSD max!
With 1 VM owning the LSI SAS controller you can push 16 QD ingress (from the VM) and nearly 1000 QD to the SSDs! You can imagine which one is going to be faster!
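Back-of-the-envelope version of that, with assumed numbers (the per-SSD device queue of 128 is my guess, not a measurement):

```python
# Toy illustration of the queue-depth ceilings above (assumed numbers, not a benchmark).
def shared_per_ssd_qd(adapter_qd, num_ssds):
    # with a shared controller the adapter queue is the bottleneck, spread across the SSDs
    return adapter_qd / num_ssds

def passthrough_total_qd(per_ssd_device_qd, num_ssds):
    # with a passthrough/SR-IOV controller the guest can fill each SSD's own queue
    return per_ssd_device_qd * num_ssds

print("shared controller: ", shared_per_ssd_qd(adapter_qd=64, num_ssds=8), "QD per SSD")
print("passthrough:       ", passthrough_total_qd(per_ssd_device_qd=128, num_ssds=8), "QD total")
```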
Also, high-end storage arrays throttle hosts to QD=1 to force coalescing, which results in a higher effective QD; there are maybe two SSDs on the planet that emulate this behavior. Well, three with the Samsung EVO TLC drive!
I am not sure how Hyper-V time slices, but it takes immense tuning to make multiple VMs per host go fast. Without an SR-IOV SAS controller you will never get bare-metal performance while hosting more than one VM per host.
So right now I, like you, run multiple SAS controllers to get higher speed, but that is a serious waste of money!
For some reason I was thinking the C6100 had shared PCIe cards, which led me to believe it supported SR-IOV, but since that is not the case, sorry.
The VRTX is silly! HP did this years ago with the BLc3000: just blades in a small single-phase, on-wheels chassis. People are so impressed with ancient technology that HP has been doing forever! That is how far behind Dell is!