You sir... are what we call an 'I/O MANIAC'! hahah j/k, very nice work... a whole other level. What have you got for backend networking again? Better be 40GbE or some insane IB network?
What kind of VM workloads are you gonna be driving on that Hyper-V cluster? Did you have specs in your other thread on the compute nodes for the cluster, since this one seems to be storage-centric?
Yes, I have an FDR IB back end.
@Patrick recommended this switch (kinda, I think he just wanted it):
Mellanox MSX6015F-1SFS (SX6015, 100-586-011), 18-port FDR 2Tb/s InfiniBand switch
4 ports are in this SAN,
3 ports in each node of SAN 2 (JBOD SOFS),
4 Hyper-V hosts with 2 ports each.
Specs on the hosts: 48 cores (2683 v3), 256GB of RAM, mirrored boot drives, and either one more dual-port 40GbE card or two 10GbE cards. Those ports attach to the Gnodal 4008 for the SAN dedicated to backups / replicas / DPM.
SAN 3 has one 40GbE port, 36 3.5" bays, 11x 1TB SSDs (RAID 50, used for replicas), and 6x 256GB SSDs for caching the incoming backup / DPM traffic.
The workload is enterprise monitoring / "I need to stand up several of everything so I can ensure I am monitoring it."
Monitoring 300+ metrics on every process on every server puts a huge strain on the monitoring solution.
I consult mostly in the System Center suite (SCOM, SCCM, SCORCH, SCVMM, and soon DPM), and I'm hoping to be able to explain to my clients HOW to get I/O out of their SQL Servers / Hyper-V hosts. For most of my clients, SQL Enterprise is usually 100K per node, yet they run it on crap hardware that just gets bogged down by the amount of data thrown at it. This would then be where I say... hmm... we can fix that... and here is what you need.
I wish I had some of the queries / workloads that
@dba uses in his DCDW, to compare / bench this server against his.
I'll know next week if I can get the 1 million IOPS + 10GB/s to the hosts. (Crossing fingers.)
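For anyone wanting to run the same kind of validation, here's a rough DiskSpd sketch (Microsoft's diskspd.exe). The file path, sizes, and thread/queue counts are my assumptions for illustration, not the actual parameters used on this cluster; tune them to your own hardware:

```shell
:: Hypothetical DiskSpd run for peak 8K random read IOPS from a host.
:: All values below are assumed examples, not the thread author's settings.
::   -b8K   8KB blocks (SQL-style random I/O)
::   -d60   run for 60 seconds
::   -o32   32 outstanding I/Os per thread
::   -t16   16 worker threads
::   -r     random access pattern
::   -Sh    disable software caching and hardware write caching,
::          so you measure the SAN path, not local RAM
::   -L     capture latency statistics
::   -c50G  create a 50GB test file if it doesn't exist
diskspd.exe -b8K -d60 -o32 -t16 -r -Sh -L -c50G X:\testfile.dat
```

A second pass with large sequential blocks (e.g. `-b512K -si` instead of `-b8K -r`) is the usual way to chase the GB/s throughput number separately from the IOPS number.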