As much as I have fun building bargain-basement SANs, unless you've got an absolute requirement for shared storage and HA (and from everything you've said so far, you don't), I'm going to side with T_Minus and the rest and say don't waste your time and money on one at this stage. It's adding a significant amount of complexity that, reading between the lines, I'm not sure you're quite ready for.
RAID1 for the OS because I had a VM machine with Windows Server 2008 that blew through a 128GB RAID1 disk array with just OS upgrade files and such. I wanted to make sure that didn't happen again. I was also thinking I wanted it to have plenty of high-speed storage for swap files if needed.
Semi-offtopic, and possibly a contentious opinion given the frequent arguments I still have with some people about it, but personally I make people apply for special dispensation if they want boxes with more than 4GB of swap (Windows or *nix). If you don't have enough physical memory and any significant portion of the dataset that should be in RAM ends up in swap, performance will tank and then you're also chewing into your precious IO. Large swathes of swap are only useful IMHO if you've got applications habitually leaking memory, whereupon the leaks will (hopefully) eventually get paged out to disk and never read back. But if you've got memory leaks it's far more efficient (not to mention cheaper) to fix them or come up with a mitigation strategy than it is to waste money on "fast" swap space.
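If you want actual numbers rather than gut feel before carving out a big swap partition, a quick sketch along these lines will tell you whether a host ever really touches swap under normal load. This assumes the psutil package is available, and note that the sin/sout counters are only meaningful on Linux (psutil reports them as 0 on Windows):

```python
# Rough sanity check: sample swap usage and page-in/out deltas over a short
# window. If used stays low and sin/sout barely move under normal load,
# provisioning tens of GB of "fast" swap buys you nothing.
import time
import psutil

def sample_swap(interval_s: float = 5.0, samples: int = 12) -> None:
    """Print swap usage and swap-in/out byte deltas every interval_s seconds."""
    prev = psutil.swap_memory()
    for _ in range(samples):
        time.sleep(interval_s)
        cur = psutil.swap_memory()
        print(
            f"used={cur.used / 2**30:.2f} GiB ({cur.percent:.1f}%)  "
            f"swapped in={cur.sin - prev.sin} B  swapped out={cur.sout - prev.sout} B"
        )
        prev = cur

if __name__ == "__main__":
    sample_swap()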
The 4x SAS 6TB drives are to be RAID5 so I can start with some 18TB of storage. I currently use about 12ish TB of storage for both virtual machines and the data they use/produce. This is also the total current size of my data.
Four spindles in an 18TB RAID5 will likely be pretty dire for random IO unless it's got a lot of RAM and SSD in front of it as cache (and with drives that size you're at real risk of losing the array to a URE during a drive replacement/rebuild - that's a topic for another thread, but look into RAID6 or better to lessen those odds). Even with caching, ultimately you're still going to be limited by the rate at which things can actually be written to the backing discs. If your IO requirements are as high as you seem to think then four SAS drives for the bulk of your storage is not enough (unless I've misunderstood and they're 6TB SAS SSDs?).
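The back-of-envelope maths is easy enough to sketch. All the per-drive figures below are assumptions on my part (~200 random IOPS per enterprise SAS spindle, a 1-in-10^15-bits URE spec) so check your drives' datasheet, but it gives a feel for what four 6TB spindles in RAID5 actually buy you:

```python
# Back-of-envelope RAID5 numbers for 4x 6TB SAS spindles (assumed figures).
DRIVES = 4
DRIVE_TB = 6
SPINDLE_IOPS = 200          # rough random IOPS per enterprise SAS HDD (assumption)
RAID5_WRITE_PENALTY = 4     # each random write = 2 reads + 2 writes
URE_RATE = 1e-15            # unrecoverable read errors per bit read (typical SAS spec)

read_iops = DRIVES * SPINDLE_IOPS
write_iops = DRIVES * SPINDLE_IOPS / RAID5_WRITE_PENALTY
print(f"~{read_iops:.0f} random read IOPS, ~{write_iops:.0f} random write IOPS")

# A rebuild after a failure has to read every bit on all the surviving drives.
bits_read = (DRIVES - 1) * DRIVE_TB * 1e12 * 8
p_ure_during_rebuild = 1 - (1 - URE_RATE) ** bits_read
print(f"~{p_ure_during_rebuild:.0%} chance of hitting a URE during a rebuild")
```

With those assumed figures you're looking at roughly 800 random read IOPS, 200 random write IOPS, and a double-digit-percent chance of tripping over a URE on any given rebuild - hence the RAID6 suggestion.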
By way of comparison, my humble home server-cum-NAS uses 6x 6TB drives in RAID10 with a 128GB SSD writeback cache - I've used it as an iSCSI target before, but if I was building a SAN for the sort of VM hosts you're talking about I'd want at least 16 spindles and 1TB of SSD fronting it (see many other threads on this site for people using FreeNAS and similar systems to do this very effectively).
What sort of numbers are we actually talking here, in terms of required IOPS and throughput?
My existing setup's virtual machines are slow and it appears the network and NAS are the bottleneck.
What exactly was it they were bottlenecking on, network or NAS, throughput or IOPS? I've seen plenty of people waste money on 10GbE when the NAS behind their NICs couldn't even saturate 1GbE - usually because it had nowhere near the number of platters/SSDs needed to keep up with lots of random IO - and any IO from more than a couple of VMs will be as near to random as makes no odds. Your average bog-standard Linux NAS box with less than a dozen spindles and little in the way of cache will choke on that pretty quickly.
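If you're not sure which it is, it's worth measuring before buying anything. A crude sketch like this (again assuming psutil; run it on the NAS itself while the VMs are doing their worst) will show whether it's the NIC or the discs that are pinned, and whether the problem is IOPS or raw throughput:

```python
# Sample disk and network counters over an interval to see where the load is.
# If the NIC is nowhere near line rate but the discs are stuck at a few
# hundred IOPS, 10GbE won't help you.
import time
import psutil

def sample(interval_s: float = 5.0) -> None:
    d0, n0 = psutil.disk_io_counters(), psutil.net_io_counters()
    time.sleep(interval_s)
    d1, n1 = psutil.disk_io_counters(), psutil.net_io_counters()

    iops = ((d1.read_count - d0.read_count) + (d1.write_count - d0.write_count)) / interval_s
    disk_mib_s = ((d1.read_bytes - d0.read_bytes) + (d1.write_bytes - d0.write_bytes)) / interval_s / 2**20
    net_mib_s = ((n1.bytes_sent - n0.bytes_sent) + (n1.bytes_recv - n0.bytes_recv)) / interval_s / 2**20

    print(f"disk: {iops:.0f} IOPS, {disk_mib_s:.1f} MiB/s | network: {net_mib_s:.1f} MiB/s")

if __name__ == "__main__":
    while True:
        sample()
```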
Hence a lot of people recommending you keep it local if you can - VMs running off local SSDs are pretty much as good as it gets performance-wise; add a SAN layer (be it FC, iSCSI or anything else) and you're immediately sacrificing potential performance and reliability for the sake of having a SAN. If it's not a crucial component, try and leave it out - your hairline will thank you.
I'd propose you segregate your storage and your VMs: put some of the SSDs into one host and run a local SSD datastore there, and use that box for the VMs with heavy random IO. Then put your big fat HDD RAID array plus some SSDs in the other host (possibly as an SSD-fronted array if Windows or your RAID controller supports that) and use that for processing your large-dataset stuff.
There's a lot of ways to skin this particular cat, but jumping straight from "NAS is slow" to "I need to build an RDMA-enabled hyperconverged SAN on Windows using only four SAS drives and no dedicated storage fabric" seems like you might be trying to skin the cat with a water pistol whilst the cat is still alive and with its claws perilously close to your nether regions.
(All my £0.02, YMMV, IMHO, yadda yadda)