Hmm... I agree, Net-Runner, but does it really work that way? Doesn't the problem relate to iSCSI's design itself? Would that mean there's a 50k IOPS limit per iSCSI connection?
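If there really is a per-connection ceiling like that, the usual workaround is MPIO with multiple iSCSI sessions per LUN. Here's a back-of-the-envelope sketch of how that would scale; the 50k figure and the 300k backend number are just illustrative assumptions, not measured limits:

```python
# Back-of-the-envelope: if a single iSCSI session tops out at some fixed
# IOPS ceiling, adding MPIO sessions raises the aggregate until the
# backing storage itself becomes the bottleneck.

def aggregate_iops(per_session_cap, sessions, backend_iops):
    """Achievable IOPS is bounded by both the summed per-session caps
    and what the backing storage can actually deliver."""
    return min(per_session_cap * sessions, backend_iops)

# Assumed 50k IOPS per-session ceiling, NVMe backend good for ~300k IOPS:
print(aggregate_iops(50_000, 1, 300_000))  # one session: 50000
print(aggregate_iops(50_000, 4, 300_000))  # four MPIO sessions: 200000
print(aggregate_iops(50_000, 8, 300_000))  # backend-limited: 300000
```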
From the videos I've seen of StarWind in a Windows cluster, they say there's a Quick Migration if there's a power failure on a host... which, my guess, means a reboot of the VM.
Thanks for the heads-up, Net-Runner. I also noticed that StarWind VSAN has some measure of fault tolerance? I saw a video on YouTube where a tech failed a node and the VMs live-migrated off it somehow... have you been able to reproduce this?
Storage would be fed to the VMware (VSAN) cluster via RAID controllers in JBOD mode, but if I went StarWind VSAN, I would probably just go Hyper-V for the cost savings. Drive compatibility is a bit more forgiving on Windows, so I wouldn't have to worry as much about HCLs with commodity drives...
Ahh, but we're an education institution in Canada, and we get Datacenter licenses for $45 a pop lol... so that's actually really interesting now. The money I could save on HCI software could buy me that 4th node... hmmm!
That's very interesting, Chuntzu! Thanks for all the great input, guys, much appreciated.
I read somewhere that S2D will run roughly $6,500/node in licensing, though... and a minimum of 4 nodes gets costly...
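A quick cost sketch using those numbers. The $6,500/node is the rough figure from this thread and the $45 is the academic Datacenter price mentioned earlier, so treat both as assumptions rather than official pricing:

```python
# Rough licensing cost comparison at S2D's 4-node minimum.
# All figures come from the thread, not official price lists.

s2d_per_node = 6_500    # rough S2D licensing figure quoted in the thread
edu_dc_per_node = 45    # academic Datacenter license price mentioned above
min_nodes = 4           # S2D's minimum node count

s2d_total = s2d_per_node * min_nodes
edu_total = edu_dc_per_node * min_nodes
print(s2d_total, edu_total, s2d_total - edu_total)  # 26000 180 25820
```

At retail pricing the licensing alone could roughly pay for another node; at academic pricing it's a rounding error.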
Also, an update for this build... the Samsung 950 Pro in 512GB has almost double the rated TBW of the smaller M.2 model...
Thanks for the replies!!
How do you like StarWind in terms of stability? Any problems? And for the hypervisor you use, do you connect via iSCSI to the local LUNs? Any performance issues?
Hey Everyone,
I'm hopefully going to build a few HCI nodes with this chassis:
Supermicro | Products | SuperServers | 2U | 6028UX-TR4
I'd like to do 3 nodes, hyperconverged with VSAN software. Per node:
2× E5-2680 v3
128GB DDR4
2× Samsung 950 Pro 512GB M.2 in RAID 1 (Storage Spaces mirror)...
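For what that mirror works out to in usable space, a quick sketch, assuming a standard two-way mirror (two copies of all data) over the 512GB drives listed above:

```python
# Usable-capacity sketch for a two-way mirror: two copies of the data,
# so usable space is raw capacity divided by the copy count.

def mirror_usable_gb(drive_gb, drives, copies=2):
    """Usable capacity of a mirrored set, in GB."""
    return drive_gb * drives / copies

# Two 512GB drives in a two-way mirror:
print(mirror_usable_gb(512, 2))  # 512.0 GB usable per node
```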