F2uSoe: She is Hungry for 20 GB/s -- 22.1 GB/s Achieved!!


dba
Moderator · Joined Feb 20, 2012 · San Francisco Bay Area, California, USA
Well here ya go. 22.1 GB/s achieved!! 9 x 1.2TB NVMe drives: 8 x 2.5" and 1 PCIe.
As expected, the writes were much improved when I only used the 1.2TB drives.


The spacing of ATTO leaves something to be desired
[ATTO benchmark screenshots attached]
Looking good! You probably want to test with IOMeter and with a test file much larger than 4GB to get the most accurate numbers.
 
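Purely as an illustration of the "test file bigger than any cache" point -- not a stand-in for IOMeter or DiskSpd, and the path and file size below are made-up placeholders -- a single-threaded sketch like this shows the shape of the test, even though it won't come anywhere near saturating an array like yours:

```python
import os
import time

# Rough sequential-read sketch with a test file much larger than 4GB, so drive
# caches can't cover the whole working set. PATH and FILE_SIZE are placeholders.
# Note: this does NOT bypass the OS file cache (Python has no simple direct-I/O
# switch), which is exactly why IOMeter/DiskSpd with unbuffered I/O and real
# queue depth give truer numbers -- treat this as an illustration only.
PATH = r"D:\bench\testfile.bin"   # hypothetical location on the NVMe volume
FILE_SIZE = 32 * 1024**3          # 32 GiB
BLOCK = 1024 * 1024               # 1 MiB sequential blocks

# Create the test file once (the write pass is not timed).
if not os.path.exists(PATH) or os.path.getsize(PATH) < FILE_SIZE:
    chunk = os.urandom(BLOCK)
    with open(PATH, "wb") as f:
        for _ in range(FILE_SIZE // BLOCK):
            f.write(chunk)

# Timed sequential read pass.
start = time.perf_counter()
total = 0
with open(PATH, "rb", buffering=0) as f:
    while True:
        data = f.read(BLOCK)
        if not data:
            break
        total += len(data)
elapsed = time.perf_counter() - start
print(f"Read {total / 1024**3:.1f} GiB in {elapsed:.1f} s = "
      f"{total / elapsed / 1024**3:.2f} GiB/s")
```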

whitey
Moderator · Joined Jun 30, 2014
You sir...are what we call an 'I/O MANIAC'! hahah j/k, very nice work...a whole other level. What have you got for backend networking again? Better be 40G Ethernet or some insane IB network.

What kind of VM workloads are you gonna be driving on that Hyper-V cluster? Did you have specs in your other thread on the compute nodes for the cluster, since this seems to be storage-centric?
 

Naeblis
Active Member · Joined Oct 22, 2015 · Folsom, CA
Yes, I have an FDR IB back end. @Patrick recommended this switch (kinda, I think he wanted it).

Mellanox MSX6015F-1SFS (SX6015, 100-586-011), 18-port FDR 2Tb/s InfiniBand switch

4 ports are in this SAN,
3 ports in each node of SAN 2 (JBOD SOFS),
4 Hyper-V hosts with 2 ports each.

Specs on the hosts are 48 cores (2683 v3), 256GB of RAM, mirrored boot drives, and either 1 more dual-port 40GbE card or 2 10GbE cards; those ports attach to the Gnodal 4008 for the SAN dedicated to backups / replicas / DPM.

SAN 3 has 1 x 40GbE port, 36 3.5" bays, 11 x 1TB SSDs (R50, used for replicas), and 6 x 256GB SSDs for caching the incoming backup / DPM traffic.

The workload is enterprise monitoring / "I need to stand up several of everything so I can ensure I am monitoring it."

Monitoring 300+ metrics on every process on every server creates a huge strain on the monitoring solution.
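To put rough numbers on that kind of load -- every figure below is an illustrative guess, not data from this environment:

```python
# Back-of-envelope on metric volume (all figures are illustrative guesses,
# not measurements from this environment).
servers = 100                 # monitored servers (hypothetical)
processes_per_server = 80     # processes per server (hypothetical)
metrics_per_process = 300     # "300+ metrics on every process"
interval_s = 60               # sample interval in seconds (hypothetical)
bytes_per_sample = 100        # rough stored row size per sample (guess)

samples_per_sec = servers * processes_per_server * metrics_per_process / interval_s
mb_per_day = samples_per_sec * bytes_per_sample * 86400 / 1024**2

print(f"{samples_per_sec:,.0f} samples/sec")          # 40,000 samples/sec
print(f"{mb_per_day:,.0f} MB/day of raw samples")     # ~330,000 MB/day
```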

I consult mostly in the System Center suite (SCOM, SCCM, SCORCH, SCVMM and soon DPM), and hopefully this gives me the ability to explain to my clients HOW to get IO out of their SQL Servers / Hyper-V hosts. For most of my clients, SQL Enterprise is usually 100K per node, yet they run it on crap hardware that just gets bogged down with the amount of data thrown at it. This would then be where I say.. hmm.. we can fix that.. and here is what you need.

I wish I had some of the queries / workloads that @dba uses in his DCDW to compare / bench this server against his.

I'll know next week if I can get the 1 million IOPS + 10GB/s to the hosts. (Crossing fingers.)
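Rough math on whether the FDR fabric can even carry that -- assuming roughly 6.8 GB/s usable per FDR port (56Gb/s link rate less 64b/66b encoding overhead; real-world RDMA throughput lands a bit under that):

```python
# Quick sanity check on the 10 GB/s-to-the-hosts target over FDR InfiniBand.
# Assumes ~6.8 GB/s usable per FDR port; real-world RDMA is typically a bit lower.
usable_per_fdr_port_gbs = 6.8
san_ports = 4                      # FDR ports in this SAN
host_ports = 4 * 2                 # 4 Hyper-V hosts x 2 ports each

san_ceiling = san_ports * usable_per_fdr_port_gbs
host_ceiling = host_ports * usable_per_fdr_port_gbs

print(f"SAN-side ceiling:  {san_ceiling:.1f} GB/s")   # ~27 GB/s
print(f"Host-side ceiling: {host_ceiling:.1f} GB/s")  # ~54 GB/s
# So the fabric itself shouldn't be the limiter for a 10 GB/s target;
# the drives and CPUs will decide it.
```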