I also saw this: Community NVMe Driver for ESXi. Not sure if this would help us with this issue in the future as well.
Thanks for this method, it's very useful!

I just found out that ESXi 7.0 is really particular about the brand of NVMe drive you try to use.
It was an issue on earlier versions too, but it seems to be even stricter now.
My Intel 660P works perfectly, but my ADATA XPG SX8200 and HP EX920 do not work at all.
I believe most Intel and Samsung drives, like the 970s, should work fine at least.
YMMV with other brands; the official HCL lists Cisco, Dell, HPE, HGST, Hitachi, Huawei, Intel, Lenovo, Micron, Oracle, Samsung, ScaleFlux, SK hynix, and WD.
If anyone has other brands working please comment and we can start to create a list.
In previous versions of ESXi you could run a couple of commands to load older NVMe drivers, like this:
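The original commands are missing from the post; a commonly cited variant of the old-driver swap looked roughly like this (the VIB path and filename here are placeholders, not the actual ones from the thread):

```shell
# Sketch only: copy an older NVMe driver VIB to the host (e.g. via scp),
# then force-install it over the bundled driver. Path/filename are hypothetical.
esxcli software vib install -v /tmp/older-nvme-driver.vib --no-sig-check -f
# Reboot so the replacement driver is loaded:
reboot
```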
But on ESXi 7.0 that somehow kills all of your network card drivers, and your box is now a shiny paperweight.
Anyone know of any further hacks/tweaks to get it to work?
> I also saw this: Community NVMe Driver for ESXi. Not sure if this would help us with this issue in the future as well.

Thank you so much! I had no luck with a copy of an old driver, but this driver seems to work.
> Thank you so much! I had no luck with a copy of an old driver, but this driver seems to work.

Which version of ESXi are you on? This driver still works fine for me on the latest patch of ESXi 7.
That is because the drives have Samsung and Phison controllers.
> Well, nowhere in the thread did it specify the Phison controller so...

Honestly, I came up with this last night based on the screenshot you posted, versus my own ESXi instances using SMI controllers.
Perhaps the thread title should be changed.
Have you tried this?
> The title of this post shouldn't be changed, because upon further investigation it doesn't seem to matter which brand your drive is, unless maybe it's Samsung; otherwise it is hit and miss.

That’s the change I was referring to. “Some” consumer drives…
> That’s the change I was referring to. “Some” consumer drives…

Yeah, it is definitely some consumer drives. I haven't figured out what actually makes it work or not, but I thought I had figured it out last night. Either way, I'm glad that the driver still works today and my servers are running fine.
> Wait a minute, people are still willingly using VMware????

Yes. Unfortunately, the alternatives are not as good: very limited/complicated/non-scalable GPU/PCIe passthrough, virtual NICs that can barely support 10G, and literally no good option to replace vSAN. Not to mention the shitload of management features from vCenter.
> Yes. Unfortunately, the alternatives are not as good: very limited/complicated/non-scalable GPU/PCIe passthrough, virtual NICs that can barely support 10G, and literally no good option to replace vSAN. Not to mention the shitload of management features from vCenter.

Hmm, in my experience Proxmox is killing it in all of those areas, and is far better than VMware.
Until we have good replacements, we are hostages. People suggest Nutanix, XCP-ng, Proxmox. Good initiatives, but in reality none of them has a tenth of the features ESXi has, unfortunately. Even core things like those I mentioned either don't exist or their alternatives are clumsy/hacky and/or limited.
Anyway, I hope some day this scenario changes. Until then, the company will stay on VMware, and at home VMUG still gives me everything for a cheap yearly subscription.
> Hmm, in my experience Proxmox is killing it in all of those areas, and is far better than VMware.

As I said, they are cool initiatives, but they are far from something that companies would rely on. Not to mention the shady techniques from Proxmox to "require a subscription to update apt repos".
I'd argue that it is the opposite and Proxmox has ten times more features than ESXi:
- GPU passthrough works great
- PCIe passthrough is even better and is trivial to get working
- Virtual NICs could be faster, but ~10 Gbps is fine by me
- SR-IOV is easy to get working if you want faster "virtual" NICs
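For context on the SR-IOV point, enabling virtual functions on a Linux/Proxmox host is just a sysfs write; the NIC name below is a placeholder, and the NIC, firmware, and IOMMU must actually support SR-IOV:

```shell
# Sketch only: enp1s0f0 is a hypothetical NIC name; adjust to your hardware.
# How many virtual functions the NIC supports:
cat /sys/class/net/enp1s0f0/device/sriov_totalvfs
# Create 4 VFs; they show up as separate PCI devices you can pass to VMs:
echo 4 > /sys/class/net/enp1s0f0/device/sriov_numvfs
# Confirm the VFs are visible:
lspci | grep -i "virtual function"
```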
What do you need vSAN for?
Clustering? vMotion? Management? Storage? SDN?
Proxmox is awesome here
Shared Storage?
Ceph and GlusterFS are rock solid for me, and if you want a hacky way to do it, just name your storage the same on each node
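The "name your storage the same" trick boils down to the storage ID in Proxmox's cluster-wide /etc/pve/storage.cfg; a minimal sketch, with a made-up ID and path, where each node backs the same ID with its own local disk so migrations find a matching target:

```
# /etc/pve/storage.cfg sketch (ID and path are hypothetical)
dir: vmstore
    path /mnt/local-ssd
    content images,rootdir
```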
Yeah, maybe for your use case it is "fine", but that doesn't make it better than what is available on ESXi, unfortunately.

> Virtual NICs could be faster, but ~10 Gbps is fine by me
> I'd argue that it is the opposite and Proxmox has ten times more features than ESXi

Sorry, but I'm not in the fanboy/enthusiast zone; I'm grounded in facts. The features available in vCenter for management, workload distribution, clustering, etc. have nothing like them on the market. That is why ESXi is (now unfortunately) the most used hypervisor outside the public cloud; there's no arguing against that. It is just sad that we had this bad acquisition which f'ed up everything.
> What do you need vSAN for?

I hope that was a rhetorical question. There are many uses for vSAN beyond just storing VM disks.
> Shared Storage?
> Ceph and GlusterFS are rock solid for me, and if you want a hacky way to do it, just name your storage the same on each node

GlusterFS is dead; even TrueNAS deprecated its support. Ceph... well, the cost of REALLY operating Ceph in production for a company of reasonable size makes a VMware subscription look cheap. Not saying it is bad: it is really good, and I have followed it since the initial versions with the design sessions etc., but it is still miles away from vSAN and its distributed storage in terms of reliability, performance, and security.