You can go with StarWind VSAN with Storage Spaces underneath. VSAN will replicate data across nodes, while Storage Spaces handles local redundancy. I wouldn't recommend parity spaces, because in my experience their performance is really low. Hope this helps: Build a 2-node Hyper-V...
Optane drives are really reliable these days, I can confirm. I run consumer-grade drives in my lab and they have held up fine, though datacenter-grade drives are still more reliable.
For a lab you can take a look at vSAN alternatives; there are options with free versions. Ceph can be deployed on three nodes and provide storage to ESXi hosts, and it is open-source, so free of charge. Ceph storage on VMware | Ubuntu
As an alternative, I have used StarWind VSAN for...
Good luck with your project :) Not exactly the case, but I've had a chance to test SR-IOV on Supermicro X10DRH servers running Windows Server 2019. Everything worked as it should.
That's weird. As an example, I've found the following guide showing how SR-IOV can be enabled in the BIOS.
https://dlcdnets.asus.com/pub/ASUS/mb/LGA1200/ROG_STRIX_Z490-H_GAMING/E16511_ROG_STRIX_Z490-H_GAMING_BIOS_manual_EM_WEB.pdf
You can try looking at Supermicro motherboards. E.g., the following one has an...
As mentioned above, StarWind VSAN as a storage backend for the hyperconverged cluster would be a nice solution. Since you do not have much money, you can go with the free version; it will require some PowerShell skills to manage. You can actually do storage/NVMe passthrough to their appliance...
SR-IOV is not available on Windows 10, as far as I know. I have configured it successfully on Windows Server 2019 on Dell R640 servers with Mellanox CX5 NICs. I think the issue you are facing is related to the motherboard.
Since you are using NVMe drives, you should try looking at NVMe over Fabrics. VMware added support for NVMe-oF in vSphere 7.0.
https://storagehub.vmware.com/t/vsphere-7-core-storage/nvmeof/
I have plans to test it, just looking for suitable hardware. There are several target implementations...
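To give an idea of what the initiator side looks like, here is a minimal sketch of discovering and connecting to an NVMe-oF target from a Linux host using nvme-cli over RDMA (vSphere 7.0 supports FC and RDMA transports). The IP address and NQN below are placeholders, not from any real setup:

```shell
# Load the RDMA transport module (requires an RDMA-capable NIC)
modprobe nvme-rdma

# Discover subsystems exported by the target (placeholder address)
nvme discover -t rdma -a 192.168.0.10 -s 4420

# Connect to one of the discovered subsystems by its NQN (placeholder)
nvme connect -t rdma -n nqn.2014-08.org.example:nvme:target1 \
    -a 192.168.0.10 -s 4420

# The remote namespace should now appear as a local block device
nvme list
```

On the ESXi side the equivalent configuration is done through the vSphere storage adapters UI or esxcli rather than nvme-cli.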
A couple of links:
https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/bug-check-0x149--refs-file-system
https://forums.veeam.com/veeam-agent-for-windows-f33/bsod-pnp-detected-fatal-error-t64833.html
The last time I faced a BSOD with ReFS was in March or April; I am not using it...
LSI 9261 should support CacheCade.
https://docs.broadcom.com/doc/12352140
However, it might require an additional license key to activate this feature. According to the documentation, CacheCade will cache reads only.
CacheCade 2.0 will do both reads and writes. https://docs.broadcom.com/doc/12351884
In addition, there are multiple BSOD issues with ReFS. I would avoid using it in production; it could create more issues than it solves. ZFS with properly configured caching can be a great option.
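For reference, a minimal sketch of what "properly configured caching" can look like on ZFS, assuming a pool named tank and spare NVMe devices (the pool and device names are placeholders):

```shell
# Add an L2ARC (read cache) device to the pool "tank"
zpool add tank cache /dev/nvme0n1

# Add a mirrored SLOG (synchronous write log). Mirroring it is worth the
# extra device, since losing an unmirrored SLOG at the wrong moment can
# cost recent synchronous writes.
zpool add tank log mirror /dev/nvme1n1 /dev/nvme2n1

# Verify the resulting pool layout
zpool status tank
```

Whether L2ARC and SLOG actually help depends on the workload; a SLOG only matters for synchronous writes, and L2ARC only pays off once the working set outgrows RAM.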
Check out this video, which covers the process almost entirely and will hopefully help you a bit.
Note that if you have only a single ESXi host, this approach will not work, since vCenter needs to put the host into maintenance mode to upgrade it. Closed for Maintenance: Things...
I am managing a bunch of ESXi servers, and most of them have storage controllers that are off the VMware HCL for 7.0. I have figured out three possible ways to proceed with them. The first, expensive, and obvious one is a storage controller upgrade. The second one is just to stay...