My USB boot drive crapped out and I had to reinstall ESXi and re-import the .vmx files.
Prior to the crapout, I had my workstation running perfectly with a K2200 GPU passed through and HW virtualization enabled (for nested VirtualBox, BlueStacks, etc.).
Now I cannot for the life of me get it to work...
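For the nested-virt half at least, the flags live in the .vmx and may not have survived the re-import. A minimal sketch of the two standard ESXi .vmx options to check (the .vmx format doesn't take comments, so the explanation follows):

    vhv.enable = "TRUE"
    hypervisor.cpuid.v0 = "FALSE"

vhv.enable is what re-exposes VT-x/EPT to the guest for nested VirtualBox; hypervisor.cpuid.v0 = "FALSE" hides the hypervisor from the guest, which some nested apps insist on. If a copy of the old .vmx survived the crash, diffing it against the re-imported one is the quickest way to see what got dropped.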
Hence my need for a frontend. My SAN doesn't have SMB/CIFS. It's an enterprise SAN, so it does offer cache, cache protection, and a BBU.
So yes, I was simply looking for an easy *nix frontend to share out an iSCSI SAN via SMB/CIFS.
This is a specific re-use case of an existing 40TB SAN that we already own. I'm very familiar with ZFS on HBA-based systems, but ZFS has all kinds of issues when there are lots of layers in between (SAN / cache / datastore is a lot of layers). I specifically don't want Windows because of the virus / ransomware risk...
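For anyone following along, the Linux-side plumbing is short. A sketch with open-iscsi, assuming a portal at 10.0.0.50 and a placeholder IQN (both stand in for whatever the SAN actually presents):

    # discover the targets the SAN portal is offering
    iscsiadm -m discovery -t sendtargets -p 10.0.0.50
    # log in to the target (IQN and portal are placeholders)
    iscsiadm -m node -T iqn.2001-05.com.example:veeam-lun -p 10.0.0.50 --login
    # the LUN shows up as an ordinary block device (check dmesg for the name)
    mkfs.xfs /dev/sdb
    mount /dev/sdb /srv/veeam

From there it's just a local filesystem as far as the frontend is concerned, and the SAN's cache and BBU keep doing their jobs underneath.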
I've got a production hyperconverged ESXi system, and a fairly loaded (but out of warranty) HP SAN that I'd like to use for Veeam backups.
However, I don't want to direct-mount it as a Windows drive... too much risk with Windows. SMB / CIFS would be fine.
It's already an iSCSI datastore... I...
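The share itself is then a few lines of smb.conf. A minimal sketch (share name, path, and user are placeholders):

    [veeam-backups]
        path = /srv/veeam
        read only = no
        browseable = no
        valid users = veeam
        # forcing encryption costs some throughput but fits a ransomware-averse setup
        smb encrypt = required

Create the one account with smbpasswd -a veeam and point the Veeam repository at the share; a compromised Windows box then never holds credentials that reach anything beyond this path.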
The 40G-only ports are still stacked; 1/2/2 to 1/2/5, 1/2/7 to 1/2/10, 2/2/2 to 2/2/5, and 2/2/7 to 2/2/10 are set up as breakouts, unstacked. Like I said, I've done a mess of these, and these are the first ones to show this EXACT behavior. It's very odd.
So I've rolled a bunch of these out in stacks, and I can now confirm that I have a stack which is exhibiting the same behavior. If I tag VLANs on 1/2/2 or 2/2/2, the whole port goes south. BOTH switches.
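One way to take the switch config out of the equation while chasing this: hang a Linux box off the suspect port and do the tagging from the host side. A sketch with iproute2 (interface name, VLAN ID, and addresses are placeholders):

    # add a tagged subinterface for VLAN 100 on the test NIC
    ip link add link eno1 name eno1.100 type vlan id 100
    ip addr add 192.168.100.10/24 dev eno1.100
    ip link set eno1.100 up
    # if tagged traffic dies here too, the port is eating tags regardless of switch config
    ping -c 4 192.168.100.1

If untagged traffic keeps working on the same port while this fails, that points at the tagging path on the port rather than the host or cabling.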
Yup, familiar with all of them. We're just using enough of them internally that I'd like to get some ears. I've used these as well:
2 Post Rack Rails
At that point I might as well just buy switches with the ears included.
But I'm going to price out having someone replicate the ears from...