Will it run CentOS Stream 9 or any other recent Linux distro? I can get a bunch of these for real cheap (the company pays the utility bill, so I don't care much), it's just the long UK -> USA delivery time that prevents me from finding out.
Western Digital doesn't share your point of view. Go argue with them :)
https://documents.westerndigital.com/content/dam/doc-library/en_us/assets/public/western-digital/collateral/white-paper/white-paper-graid-supremeraid-with-openflex-data24.pdf
Chris Mellor has a name in the IT community and beyond, while you act anonymously... Blocks & Files, The Register, etc. are trusted and respected sources, and you're not. Your "I asked them and they told me..." is so childish and naive! I feel like I want to hug you :)
Dude, I gave a link to the article on Blocks & Files. That's it! You disagree? Go argue with Chris Mellor about what he wrote!
P.S. There's some GRAID POV in there as well, just in case you want to talk tech.
StarWind has no in-house functionality to pool the NVMe drives. It uses either Linux MDRAID or ZFS for that purpose, neither of which was designed to handle that amount of IOPS. To make things worse, iSCSI is another sad story - it's a CPU hog. Don't get me wrong: they do have a properly...
I've heard a different story. RAIDIX sold its Russian business to some third parties just to avoid being sanctioned. Xinnor, their new company based out of Haifa, Israel, has launched worldwide operations with the original team and codebase. You don't expect anyone to build such a complex...
That's a smart move! We work with federal contracts and have been “strongly advised” by the government to avoid dealing with Russian companies even if they aren't legally covered by sanctions law. Better safe than sorry!
You can use Windows Server Standard on the storage controller nodes, but that means you need some sort of shared storage back end for it. Back in the day that was Clustered Storage Spaces, JBODs and all that stuff. Unfortunately CSS doesn't work anymore (unreliable with the most...
Man, you have a point! The less software runs at an escalated privilege level, the fewer grey hairs admins will develop while running it. I'd be very careful (read: concerned) about a storage stack having "root" rights, even with VM-level isolation.
This is very naive :) I wish software engineering happened in your universe, where unicorns eat rainbows and poop butterflies... In our universe the environment is much harsher :( In a nutshell: properly written system software will benefit from all the architectural features particular...
You isolate performance-crucial primitives (like, say, mutexes and critical sections: every lock that isn't acquired right on the call puts the calling thread into a wait state, so you pay at least a scheduler timeout, which is ~30 ms; done through WINE you get even more thread context switches and re-queueing, so...
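To make the lock-cost point concrete, here's a minimal Win32 sketch (my own illustration, not anybody's shipping code; the spin count and loop sizes are made-up numbers): a critical section with a spin count is taken entirely in user mode while contention stays short, and only falls back to the kernel wait - the scheduler round-trip above - once the spin is exhausted, which is exactly the path that gets much more expensive under WINE.

```c
/* Hedged sketch: spin-then-wait critical section on Win32.
 * Short contention is resolved by spinning in user mode; only when the spin
 * count is exhausted does EnterCriticalSection fall back to a kernel wait. */
#include <windows.h>
#include <stdio.h>

static CRITICAL_SECTION g_lock;
static volatile LONG g_counter = 0;

static DWORD WINAPI worker(LPVOID arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        EnterCriticalSection(&g_lock);  /* spins first, kernel wait only on long contention */
        g_counter++;                    /* keep the critical region tiny so the spin pays off */
        LeaveCriticalSection(&g_lock);
    }
    return 0;
}

int main(void)
{
    /* 4000 spins is an illustrative ballpark, not a tuning recommendation */
    InitializeCriticalSectionAndSpinCount(&g_lock, 4000);

    HANDLE threads[4];
    for (int i = 0; i < 4; i++)
        threads[i] = CreateThread(NULL, 0, worker, NULL, 0, NULL);
    WaitForMultipleObjects(4, threads, TRUE, INFINITE);

    printf("counter = %ld\n", g_counter);
    DeleteCriticalSection(&g_lock);
    return 0;
}
```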
You might end up with iSER working, but from experience iSER on VMware is so badly implemented that you'll hardly notice any CPU usage difference even on 10 GbE networking... TL;DR: don't waste your time on it :)
We can all keep our fingers crossed, but from multiple discussions with Microsoft developers this won't ever happen. Unlike spinning disks, SSDs don't tolerate the "Force Unit Access" flag on writes, which forces the ACK to be returned only after the write buffer has "touched" the actual storage medium.
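For anyone wondering what FUA actually buys you, here's a minimal Linux userland sketch (my illustration only, nothing to do with Microsoft's code; the device path and sizes are placeholders): opening with O_DIRECT | O_DSYNC asks the kernel to complete each write only once the data is on stable media, which it does with FUA writes where the drive supports them and a cache flush where it doesn't.

```c
/* Hedged sketch: a synchronous, cache-bypassing write on Linux.
 * O_DIRECT | O_DSYNC asks the kernel to complete the write only once the
 * data is on stable media (FUA where the device supports it, otherwise a
 * flush). The device path and buffer size are placeholders. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const size_t len = 4096;                 /* one 4 KiB block, aligned for O_DIRECT */
    void *buf = NULL;
    if (posix_memalign(&buf, 4096, len) != 0) {
        perror("posix_memalign");
        return 1;
    }
    memset(buf, 0xAB, len);

    /* /dev/nvme0n1 is a placeholder - point this at a scratch device only,
     * it overwrites the first 4 KiB! */
    int fd = open("/dev/nvme0n1", O_WRONLY | O_DIRECT | O_DSYNC);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    ssize_t n = pwrite(fd, buf, len, 0);     /* returns only after the write is durable */
    if (n != (ssize_t)len)
        perror("pwrite");

    close(fd);
    free(buf);
    return 0;
}
```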
This statement from OSNEXUS means their virtual storage is so fundamentally slow that it doesn't benefit from the low latency RDMA networking can provide! VMware vSAN is not much different here: vSAN doesn't do RoCE(v2) / iWARP because it's slow as a pig, but... This is V2 of their design and their...
VMware vSAN has been able to do RAID5/6 since... forever? ...so I don't see why you're complaining about low usable capacity; it's definitely on the level. Performance is on the slow side compared to the other guys, though. The good thing is it's in-kernel and supported by the hypervisor vendor, so one throat...
If you like the StarWind performance, you could think about replacing the non-RDMA Intel NICs with Mellanox CX3 (they're cheap on eBay) or CX4 cards to get iSER rather than iSCSI, for both the east-west traffic and the vSphere uplinks. RDMA is king :)
Open-E is a pretty shitty product in terms of performance, software quality/maturity, and especially support, which is pretty much AWOL. If you plan to stick with ZFS, I'd recommend either plain vanilla FreeBSD or Linux + ZoL done right (don't be afraid of ZoL; the next version of FreeBSD is going to have...