Sorry if this is a long post, but I could use some help, please.
I just passed my VCP6-DCV exam and as a result of that I purchased two of the following servers:
Supermicro 5028D-TN4T
Each server has 128GB of RAM, and I also bought a Cisco SG300-28 managed gigabit switch. I am using my Netgate SG-2440 as the firewall/router with pfSense. I already had a custom-built server based on a Supermicro X10SL7-F motherboard that I will be using for the SAN, with StarWind Virtual SAN as the iSCSI target.
When I studied for my VCP6 exam I did everything nested in VMware Workstation 12.5, so this is my first "real" vSphere 6 cluster setup with shared storage. I've set up my SAN as follows:
- Windows Server 2016 RTM with the MPIO feature installed (and I enabled MPIO for iSCSI devices; see the commands after this list)
- 32GB of RAM
- For now, two Samsung SM863 480GB SATA SSD drives to be used as datastores (no RAID)
- Two Samsung 840 Pro 128GB drives mirrored for the boot volume (not used for VMs)
- BIOS, IPMI and LSI firmware are all at the latest versions as of today
- Installed StarWind Virtual SAN 8.0.9996.0
- Installed a quad-port HP NC364T gigabit network card to be used for iSCSI traffic
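For reference, this is roughly how I enabled MPIO for iSCSI on the Windows Server 2016 box (PowerShell, using the built-in MPIO module; the feature install needs a reboot before the claim takes effect):

  # Install the MPIO feature (needs a reboot)
  Install-WindowsFeature -Name Multipath-IO

  # Have the Microsoft DSM automatically claim iSCSI devices
  Enable-MSDSMAutomaticClaim -BusType iSCSI

  # Confirm the claim setting took
  Get-MSDSMAutomaticClaimSettings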
On the ESXi side I have ESXi 6 on the latest build as of today, and I also installed a quad-port low-profile HP NC364T network card which will be used for iSCSI traffic only. On the Cisco switch, 12 of the ports are in a VLAN for iSCSI storage traffic only.
In vCenter I created four VMkernel ports with the following IP addresses: 192.168.60.6 to .9 (all on one subnet) and am using network port binding for the software iSCSI initiator (all the vmks are showing as compliant). On the two datastores (each backed by an SM863 SSD) I have enabled Round Robin. I also have a dedicated virtual switch with four port groups (one for each iSCSI vmk), and each port group is set to use exactly one NIC from the quad-port card. The commands I used to check the binding and path policy are below.
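For anyone who wants to sanity-check my config, these are the esxcli commands I ran from the ESXi shell (vmhba33 and the naa.xxx device ID are just placeholders; substitute whatever your host shows):

  # List the VMkernel ports bound to the software iSCSI adapter
  esxcli iscsi networkportal list --adapter=vmhba33

  # Show the current path selection policy for a datastore device
  esxcli storage nmp device list --device=naa.xxx

  # Set the device to Round Robin if it isn't already
  esxcli storage nmp device set --device=naa.xxx --psp=VMW_PSP_RR

  # Confirm all four paths show up as active for the device
  esxcli storage core path list --device=naa.xxx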
On the local SAN side I have benchmarked the disks and always get 400-500MB/s using ATTO/CrystalDiskMark.
After creating a test VM on one of the datastores I get about 450MB/s when I run CrystalDiskMark and about 300-350MB/s when running ATTO, but when I copy a largish file (a 3GB zip) from one folder to another WITHIN the VM I get sub-100MB/s speeds. Is this normal or acceptable?
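One thing I'm planning to try, in case queue depth is the difference: compare a copy-like workload (one thread, one outstanding I/O) against a benchmark-like workload (several outstanding I/Os) using Microsoft's diskspd inside the VM. Something like this (path and file size are just examples):

  # Copy-like: 1 thread, 1 outstanding I/O, 1MB sequential reads, caching off
  diskspd.exe -b1M -t1 -o1 -d30 -w0 -Sh -c5G C:\test\testfile.dat

  # Benchmark-like: 4 threads, 8 outstanding I/Os each
  diskspd.exe -b1M -t4 -o8 -d30 -w0 -Sh -c5G C:\test\testfile.dat

I realise a file copy inside the VM is also a simultaneous read and write against the same datastore, which presumably doesn't help either.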
Since I am new to iSCSI, MPIO and shared storage, I guess what I am wondering is: what sort of speeds should I see, considering I am using SSD-only storage with iSCSI/MPIO? I think MPIO is working, as I can see traffic going across all four vmks when I run esxtop on the ESXi 6 host. I'm just baffled that a benchmark tool shows 300-450MB/s in a VM but a real-world file copy never goes above 100MB/s. I only have two VMs currently (vCenter and my test Windows VM), so there is no load on this system. Oh, I also changed the default Round Robin IOPS value from 1000 to 1 (command below).
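This is the command I used for the IOPS change, plus the one to verify it (again, naa.xxx is a placeholder for the actual device ID):

  # Rotate Round Robin paths after every single I/O instead of every 1000
  esxcli storage nmp psp roundrobin deviceconfig set --device=naa.xxx --type=iops --iops=1

  # Verify the setting stuck
  esxcli storage nmp psp roundrobin deviceconfig get --device=naa.xxx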
So what I'm trying to get at is: how do I know my underlying iSCSI storage (with MPIO and Round Robin) is performing correctly before putting 20 or more VMs on it? I can't help but feel something isn't quite right with the performance. Storage vMotion is painfully slow (about 80MB/s), but I believe StarWind is bringing out a new version next week that may resolve this.
In StarWind Virtual SAN I have set each disk device to a 4GB write-back cache.
I'm hoping someone can say, "haha, you missed this or that." Please help! I just assumed that having 4Gb/s of bandwidth across the four 1Gb NICs with iSCSI and MPIO would give me at least 300MB/s on an SSD-backed datastore. I know there's overhead etc., so I'll never get the full 4Gb/s (roughly 500MB/s), but I assumed 300MB/s would be realistic?
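My back-of-envelope maths, for what it's worth: 4 x 1Gb/s = 4Gb/s = 500MB/s raw, and knocking off roughly 10% for Ethernet/TCP/iSCSI headers leaves a ceiling somewhere around 440-450MB/s across all four paths combined. That's why 300MB/s felt like a safe expectation to me.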