I totally agree with you. I also did not find a reliable / safe alternative. The only review I have read so far that seemed promising was S2D from Microsoft.
Regarding the switch I have to go a little off topic. My plan was to create / experiment with / learn a Ceph cluster. So I thought I'd use a few old DL380p Gen8 servers with 12 LFF bays (some 8TB SATA3 HDDs) plus some cheap Samsung NVMe drives as cache disks, and a fast storage network. This is why I thought of 40Gbps for the storage cluster. My compute nodes, however, will still have single or dual local NVMe and will stay on the 10Gbps switch for now. The plan was to use the power of multiple iSCSI paths from the 40Gbps storage side down to the 10Gbps ESXi compute nodes. In a later step I would upgrade the compute nodes to 40Gbps as well, which is why I wanted a bigger QSFP+ switch. I was also under the impression that the cheap Mellanox switches you can find are only some kind of HBA / storage-mode switches...
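For the multipath iSCSI part, a minimal sketch of what that looks like on the ESXi side: switching the path selection policy to round-robin so I/O is actually spread across all paths instead of using one fixed path. This assumes a standard iSCSI target presenting an `naa.*` device; the device identifier below is a placeholder, not from my setup.

```shell
# List iSCSI devices and their current multipathing policy
esxcli storage nmp device list

# Make round-robin the default path selection policy for new
# active/active devices (applies to devices claimed after this change)
esxcli storage nmp satp set --satp VMW_SATP_DEFAULT_AA --default-psp VMW_PSP_RR

# Or set round-robin on an existing device explicitly
# (naa.xxxxxxxxxxxxxxxx is a placeholder device identifier)
esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR
```

With round-robin active, each 10Gbps compute node can keep several paths to the 40Gbps storage side busy at once, which is the whole point of the asymmetric link speeds.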
That sounds very, very promising. I might write up a short outline of my plans and post it somewhere in the forum, so if there are any pitfalls I don't see, others might spot them.