You can go with StarWind VSAN with Storage Spaces underneath. VSAN will replicate data across nodes, while Storage Spaces can handle local redundancy. I wouldn't recommend parity spaces, because in my experience their performance is really low. Hope this will help: Build a 2-node Hyper-V...
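For reference, the mirror-instead-of-parity setup can be sketched in PowerShell roughly like this (the pool and virtual disk names here are made-up examples, not anything StarWind-specific):

```powershell
# Pool all poolable physical disks and carve a mirrored (not parity) space out of them.
# "VsanPool" and "MirrorVD" are placeholder names for illustration.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "VsanPool" `
    -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "VsanPool" -FriendlyName "MirrorVD" `
    -ResiliencySettingName Mirror -UseMaximumSize
```

Swapping `Mirror` for `Parity` is exactly the thing to avoid here.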
Optane drives are really reliable these days, I can confirm. I am running consumer-grade drives in my lab and they have held up fine, though datacenter-grade drives are still more reliable.
For a lab you can take a look at vSAN alternatives. There are options with free versions. Ceph can be deployed on three nodes and provide storage to ESXi hosts, and Ceph is open source, so it is free of charge. Ceph storage on VMware | Ubuntu
As an alternative, I have used StarWind VSAN for...
Good luck with your project :)

Not exactly the same case, but I've had a chance to test SR-IOV on Supermicro X10DRH servers running Windows Server 2019. Everything worked as it should.
That's weird. As an example, I've found the following guide showing where SR-IOV can be enabled in the BIOS.
https://dlcdnets.asus.com/pub/ASUS/mb/LGA1200/ROG_STRIX_Z490-H_GAMING/E16511_ROG_STRIX_Z490-H_GAMING_BIOS_manual_EM_WEB.pdf
You can try looking at Supermicro mobos. e.g. the following one has an...
As mentioned above, StarWind VSAN as a storage backend for the hyperconverged cluster would be a nice solution. Since you do not have much money, you can go with the free version; it will require some PowerShell skills to manage. You can actually do a storage/NVMe passthrough to their Appliance...
SR-IOV is not available on Windows 10, as far as I know. I have configured it successfully on Windows Server 2019 on Dell R640 servers with Mellanox CX5 NICs. I think the issue you are facing is related to the MoBo.
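To help rule the motherboard in or out, it is worth checking what the host itself reports. A quick PowerShell sketch on the Hyper-V host (the adapter name is a placeholder):

```powershell
# Does the platform support IOV, and if not, why not?
Get-VMHost | Format-List IovSupport, IovSupportReasons

# Per-NIC SR-IOV capability and current state
Get-NetAdapterSriov

# Enable it on a capable adapter ("Ethernet 2" is an example name)
Enable-NetAdapterSriov -Name "Ethernet 2"
```

If `IovSupportReasons` complains about chipset/BIOS support, that points at the MoBo rather than the NIC.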
Since you are using NVMe drives, you should take a look at NVMe over Fabrics. VMware added support for NVMe-oF in vSphere 7.0.
https://storagehub.vmware.com/t/vsphere-7-core-storage/nvmeof/
I have plans to test it, just looking for suitable hardware. There are several target implementations...
A couple of links:
https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/bug-check-0x149--refs-file-system
https://forums.veeam.com/veeam-agent-for-windows-f33/bsod-pnp-detected-fatal-error-t64833.html
The last time I faced a BSOD with ReFS was in March or April; I am not using it...
The LSI 9261 should support CacheCade.
https://docs.broadcom.com/doc/12352140
However, it might require an additional license key to activate this feature. According to the documentation, CacheCade 1.0 caches reads only.
CacheCade 2.0 will do both reads and writes. https://docs.broadcom.com/doc/12351884
In addition, there are multiple BSOD issues with ReFS. I would avoid using it in production; it could create more problems than it solves. ZFS with properly configured caching can be a great option instead.
Check out this video; it covers the process almost entirely and hopefully will help you a bit.
Note that if you have only a single ESXi host, this approach will not work, since to upgrade the host, vCenter will need to put it into maintenance mode. Closed for Maintenance: Things...
I am managing a bunch of ESXi servers, and most of them have storage controllers that are out of the VMware HCL for 7.0. I have figured out three possible options on how to proceed with them. The first, expensive, and obvious one is a storage controller upgrade. The second one is just to stay...
In the case when RDMA/RoCE is not mandatory and only bandwidth matters, you could potentially pass through the unsupported network adapter to an OpenWRT or pfSense virtual machine and route it back down to ESXi over a regular VMXNET adapter. That is how a WiFi adapter for a small ESXi host works in my...
I have a Windows Server 2012 R2 box currently running in my home lab on top of my good old buddy, a WD Sentinel DX4000. It was Windows Storage Server 2008 originally, but I have managed to upgrade it a little bit. There is no way I can update this hardware further, and basically, that is the only reason...
Have you tried running diskpart and using the clean command to wipe the GPT/MBR records completely and then re-initialize the drive? That may lead to the generation of a new UniqueID.
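The same wipe can be done straight from PowerShell instead of an interactive diskpart session. A sketch, assuming the stuck drive shows up as disk 1 (triple-check the number against `Get-Disk` first, since `Clear-Disk` is destructive):

```powershell
# Identify the disk first -- clean/Clear-Disk wipes the partition table and is not undoable.
Get-Disk

# Wipe the GPT/MBR records on disk 1 (example number) and re-initialize the drive.
Clear-Disk -Number 1 -RemoveData -Confirm:$false
Initialize-Disk -Number 1 -PartitionStyle GPT
```

In diskpart terms this is `select disk 1` followed by `clean`, then re-initializing in Disk Management.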
Second this one. It is damn expensive, storage efficiency is questionable, and performance is adequate only up to a certain level (it is capped). The vSphere integration is impressive, though. And it is stable as hell if configured and managed properly.
Unlike FreeNAS, I didn't have a chance to...
Do not underestimate Jason's proposal: make sure you have the latest firmware and drivers you can get, especially if we are talking about running ESXi. Windows Server by itself can tolerate a lot of weird, somewhat crappy hardware; ESXi will throw you a PSOD on every sneeze.