CacheCade 1.0 requires read-ahead enabled. The cache drives can be RAID-0 or single, the idea being that the SSDs should always be faster than the array behind them. For example, one slow 250GB SSD will drag down six 15K SAS drives in linear reads, but random I/O may be far faster thanks to the SSD's lower latency.
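To make that sequential-vs-random tradeoff concrete, here's a rough back-of-envelope sketch. All the per-drive figures are my own assumptions (typical ballpark numbers for 15K SAS and a slow SATA SSD), not measurements:

```python
# Back-of-envelope: sequential throughput vs random IOPS.
# Every figure below is an assumed ballpark, not a measurement.
SAS_DRIVES = 6
SAS_SEQ_MBPS = 180      # one 15K SAS drive, sequential read (assumed)
SAS_IOPS = 250          # one 15K SAS drive, random 4K (assumed)
SSD_SEQ_MBPS = 250      # one slow SATA SSD, sequential (assumed)
SSD_IOPS = 40_000       # same SSD, random 4K (assumed)

array_seq = SAS_DRIVES * SAS_SEQ_MBPS   # array striped sequential throughput
array_iops = SAS_DRIVES * SAS_IOPS      # array aggregate random IOPS

print(f"sequential: array {array_seq} MB/s vs SSD {SSD_SEQ_MBPS} MB/s")
print(f"random:     array {array_iops} IOPS vs SSD {SSD_IOPS} IOPS")
```

With these numbers the array wins sequential by ~4x while the single SSD wins random by ~25x, which is why a too-slow cache SSD only hurts linear reads.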
CacheCade 1.0 can suffer the loss of the SSD and continue to operate normally.
CacheCade 2.0 with read/write caching requires a redundant SSD setup - RAID 10 or RAID 1. The same rule applies, but since you are now writing as well as reading, you want both redundancy and speed, up to the 512GB cache limit. If the CacheCade drive(s) fail, the entire array fails. One SSD in 2.0 read/write mode is INSANE and would be too slow imo.
Two 512GB drives (480GB after OP) in RAID-1 would be okay, but 4 x 256GB with a little OP in RAID-10 would be best.
IIRC you can connect an MSA2312sa ($499 on eBay) to one external port and 4 x 256GB SSDs to the internal ports, and the external SAS (4x3Gbps) array can benefit from the hot caching.
Now if you have four servers (in a cluster box?) that can do this, you can connect four machines with CacheCade to the same SAN (different LUNs, of course) and they can have a good ole time.
It's highly SMART to overprovision for caching with CacheCade 2.0 read/write. That Samsung 830/840 Pro will retain its speed and lifespan if you knock it down from, say, 512GB to 256GB. If you use the entire 512GB you will wear it out in a year or so, and disk IOPS will drop by 10x under continual load.
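Rough math on why the 50% OP matters so much for endurance: spare area keeps write amplification low during garbage collection. The P/E cycle count and write-amplification factors below are assumed, illustrative values, not Samsung specs:

```python
# Illustrative endurance estimate. NAND_PE_CYCLES and the write
# amplification (WA) figures are assumptions, not vendor numbers.
NAND_PE_CYCLES = 3000      # assumed MLC program/erase cycles
RAW_GB = 512               # raw NAND in the drive

def host_writes_tb(raw_gb: int, pe_cycles: int, write_amp: float) -> float:
    # Total host writes the NAND can absorb, in TB:
    # raw capacity * cycles, divided by how much the controller
    # multiplies each host write internally.
    return raw_gb * pe_cycles / write_amp / 1000

# Whole 512GB in use: little spare area for GC, WA climbs
# under sustained random writes (assume ~5x).
no_op = host_writes_tb(RAW_GB, NAND_PE_CYCLES, write_amp=5.0)
# Knocked down to 256GB: half the drive is spare, WA stays
# near ~1.2x even under continual load.
with_op = host_writes_tb(RAW_GB, NAND_PE_CYCLES, write_amp=1.2)
print(f"no OP:  ~{no_op:.0f} TB of host writes")
print(f"50% OP: ~{with_op:.0f} TB of host writes")
```

Under these assumptions the overprovisioned drive absorbs roughly 4x the host writes before wearing out, and the lower write amplification is also what keeps steady-state IOPS from collapsing.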
Seriously - I'd pick a Crucial M4 ($299, 512GB) and run it at 50% before I'd pick an OCZ and run it without OP.
I wish they made a cluster server where you had two or four 2.5" internal drives per "blade" and then a slot for a 4i/4e CacheCade controller. Then plug those four servers into a single $499 MSA2312SA with 12 x 4TB SAS RE4 drives (SAS is key here). Build four separate LUNs, one per cluster server, so there are no cache coherency issues, and go to town. Maybe this would let you run RAID-5/6 with nearline RE4 SAS drives without feeling the pain of 7200rpm speeds.