Does anyone know if you can buy a PCIe 3.0 x16 low-profile card that supports FOUR M.2 NVMe drives?
I found this Supermicro card, which supports TWO drives, but I am looking for a card that supports FOUR drives...
Yeah it does, thanks. I've enabled VBS in all my VMs, so it will be interesting to see how this goes.
Can VBS be enabled/installed on the Core edition of Windows Server 2016/2019?
I've just upgraded to vSphere 6.7 Update 1, and one of the first things I wanted to start experimenting with is Virtualization Based Security (VBS) in my VMs. I have Win2016 and Win2019 VMs installed with hardware version 14 and VMware Tools 10338. Windows is patched with October 2018's...
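If anyone wants to compare notes, this is roughly how I'm sanity-checking it from the ESXi side once VBS is ticked in the VM's settings. The key names I'm grepping for are my assumption based on what the option toggles (EFI firmware, nested HV, virtual IOMMU), and the path is a placeholder for your own datastore/VM:

# look for the VBS-related settings the UI should have written into the vmx
grep -iE "vbs|vhv|vvtd|firmware" /vmfs/volumes/datastore1/Win2019/Win2019.vmx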
I've been reading the vSphere 6.5 Host Resources Deep Dive by Frank Denneman and have just finished the chapter on power management, where he talks about Intel Turbo Boost and Turbo Bins.
The Turbo rates look like this for an E5-2450:
5/5/6/6/7/7/8/8
I currently have the...
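To make sure I'm reading those bins right, here's a rough back-of-the-envelope sketch. My assumptions (not from the book): a 2.1 GHz base clock for the E5-2450, 100 MHz per bin, and the list running from all 8 cores active down to 1 core active:

base=2100   # E5-2450 base clock in MHz (assumed)
cores=8
for bin in 5 5 6 6 7 7 8 8; do
  echo "$cores cores active: $((base + bin * 100)) MHz"
  cores=$((cores - 1))
done

If those assumptions hold, a single active core gets 2.9 GHz and all 8 cores active still get 2.6 GHz.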
It would have been defaults, as I didn't change a thing after creating the pool (i.e. all defaults).
What are your thoughts on using lz4 compression with iSCSI and VMs? Good? Bad? Use it? Don't use it?
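For context, this is the sort of thing I mean (pool/zvol names here are placeholders for whatever backs the iSCSI extent):

zfs set compression=lz4 tank/vmstore               # enable lz4 on the zvol behind the extent
zfs get compression,compressratio tank/vmstore     # check what it's actually achieving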
I have deleted the zpools until the new disks arrive, but how do I check if iSCSI is using sync=always? I'll check this when I create the new pools next week, when I have all the drives.
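My assumption is that it's just the ZFS dataset property on the zvol (names are placeholders), so something like:

zfs get sync tank/vmstore            # shows standard | always | disabled
zfs set sync=always tank/vmstore     # force every write to be synchronous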
Yeah, I think I am going to install the Pros in the hosts themselves rather than in FreeNAS.
I considered using all the SM863 drives in a single pool, but I would like to have two pools: one for performance (where most VMs will run) and one to maximise space (for templates etc.).
I've got my 4...
Now that my storage seems stable, I would like to rethink how I configure the ZFS volumes on my FreeNAS server.
Next week some new SSDs will arrive, so I'll have the following (I've sketched a possible layout below the list):
4 x Samsung SM863 480GB SSDs
4 x Samsung SM863 960GB SSDs
2 x Samsung Pro 840/850 512GB...
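This is the sort of layout I'm leaning towards; a rough sketch only, with hypothetical pool and device names, and whether to mirror or raidz is exactly what I'm trying to decide:

# performance pool: striped mirrors across the 4 x 960GB SM863s
zpool create perf mirror da0 da1 mirror da2 da3
# capacity pool: raidz across the 4 x 480GB SM863s
zpool create bulk raidz da4 da5 da6 da7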
What I should have asked in my previous post is:
If I have the disconnect happen again, what NICs should I consider for storage traffic if I continue to use direct connect?
PS: Thanks for the Juniper mention!
Yip, learned that the hard way!
When I still had jumbo frames enabled, the disconnects funnily enough didn't happen at all while running some benchmarking tests. They also didn't happen when I manually ran a Veeam full backup job. I wasn't looking in the logs at that stage, though, but after the crash I...
Well thanks for typing up such a great reply!
This is really interesting, because I had an issue with iSCSI performance when I tried to use my Cisco SG300-28 switch for storage traffic. The performance was terrible (there's a post on this forum about it). That's why I direct-connected all my...
Glad everyone's enjoying my pain ;-)
I'm not ready to throw in the towel just yet and move over to NFS haha
This entire troubleshooting exercise has taught me loads, and I'm just wondering if all the issues I had with StarWind Virtual SAN were due to the jumbo frames too.
Will be interesting...
Thanks for the links.
First, an update! On Wednesday, I removed jumbo frames from ALL of the storage network. On the vDS, the vmkernel ports and the FreeNAS storage NICs I set the MTU to 1500, and so far ALL of these errors have gone away:
WARNING: 192.168.61.3 (iqn.1998-01.com.vmware:esxi2-4d9a7f4c)...
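For anyone wanting to do the same, this is roughly what I ran on each host (the vmkernel name is a placeholder for whatever your storage vmkernel is called; the vDS MTU itself I changed in vCenter):

esxcli network ip interface list                   # confirm current vmkernel MTUs
esxcli network ip interface set -i vmk1 -m 1500    # drop the storage vmkernel back to 1500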
So I've been keeping my eye on the errors in vmkernel.log and I get LOADS of these iSCSI sense codes:
2017-11-08T21:54:29.391Z cpu15:66050)ScsiDeviceIO: 2962: Cmd(0x43950118d940) 0x89, CmdSN 0x62c from world 134150 to dev "naa.6589cfc000000a63b770ad1ddd260d2a" failed H:0x0 D:0x2 P:0x0 Valid...
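For my own notes while digging: opcode 0x89 is COMPARE AND WRITE, the VAAI ATS primitive VMFS uses for locking, and sense 0xe/0x1d decodes to MISCOMPARE during a verify operation, i.e. the ATS compare failing against the target. One workaround I've seen referenced (this is my reading of it, so check VMware's docs before flipping it) is telling VMFS to fall back to plain reads/writes for its heartbeat:

esxcli system settings advanced list -o /VMFS3/UseATSForHBOnVMFS5     # see the current value
esxcli system settings advanced set -i 0 -o /VMFS3/UseATSForHBOnVMFS5 # 0 = don't use ATS for heartbeat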
I am grasping because I have been struggling with this for a year now. Had endless storage issues. When my storage crashed yesterday morning, at least I could log in to the FreeNAS GUI and see that all the ZFS pools/volumes were online, so I think it's safe to say that the disks and controllers are...
Ok, so I'm not getting anywhere with my troubleshooting and am pretty tired of the hosts getting disconnected from the iSCSI storage. I can only think it's the dual-port 10Gb NIC in the SAN and/or the cables between the hosts and SAN, as when this happens it affects BOTH hosts at the same time...
Some further info from the vmkernel.log on one of the hosts:
2017-11-07T21:56:14.464Z cpu5:66376)ScsiDeviceIO: 2962: Cmd(0x43950120b400) 0x89, CmdSN 0x2bd5e2 from world 66556 to dev "naa.6589cfc0000006f22a5c1eb41598028b" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0xe 0x1d 0x0...
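If anyone wants to see how frequent these are on their own hosts, this is roughly how I'm pulling them out of the standard ESXi log location:

grep "failed H:" /var/log/vmkernel.log | tail -n 20   # most recent SCSI command failures
grep -c "failed H:" /var/log/vmkernel.log             # how many in the current log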