It might work on the Cisco side and not on the HP side; switches typically don't mind. For a direct connection, fiber is sometimes best since you can get compatible adapters for both ends. I may also be wrong, I don't have any HP equipment to test. I know a good chunk of Cisco gear will throw a...
If you're making a direct connection via a twinax cable you may have compatibility issues. Not sure about HP, but Cisco doesn't always play nice with non-Cisco SFP/QSFP modules.
If you want automated failover you will need some sort of shared storage; StarWind can do this for you. If you can live with manual failover and a 5-minute RPO, then you could just use native Hyper-V replication with no shared storage, only storage local to the servers. That would be the KISS option.
At this point I would remove the host from vCenter and reinstall/reset the config on the host.
There is a VMware Host Performance sensor that gives you metrics on the host, if that's what you're looking for.
HA or autostarts: it's one or the other, you can't have both.
#1 is a new one that I haven't seen before. Are you registering it from vCenter? I would remove it from vCenter, make sure it's not on the host, reboot, and then re-register it from vCenter.
As long as your cluster doesn't have HA configured, you go to the host > Configure > VM Startup/Shutdown.
Not sure...
I think all of your issues are related to managing the host directly rather than from vCenter. vCenter keeps its own config for each host internally and can overwrite any changes you make on the host itself, setting it back to what it thinks it should be.
I would try searching for the model number on Google; you will probably find some R/C forums, as they use server PSUs for battery charging quite often.
You will have to jumper one or more of the small pins to turn it on and get the 12 V out.
So if you were to lose one vdev, yes, the whole pool will become unavailable and you will have data loss. This is why, even though it is "wasteful", people were suggesting RAIDZ2.
You would create a pool with one set of disks, and once that is created you can expand the pool, which adds another vdev to it.
You can have drives of different sizes in zpools. The stipulation is that you have to add the same vdev type at the same size. So if you create a 7-disk RAIDZ1, you can expand the pool by another 7-disk RAIDZ1, not an 8-disk RAIDZ1. IIRC you cannot shrink a zpool.
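To make that concrete, here's a minimal sketch of creating a pool and then growing it with a second matching vdev. The pool name `tank` and the disk names are placeholders; use your own device names (ideally /dev/disk/by-id paths):

```shell
# Create the initial pool with one 7-disk raidz1 vdev
# (pool/disk names are hypothetical examples):
zpool create tank raidz1 sda sdb sdc sdd sde sdf sdg

# Later, expand the pool by adding a second 7-disk raidz1 vdev:
zpool add tank raidz1 sdh sdi sdj sdk sdl sdm sdn

# Confirm both vdevs show up:
zpool status tank
```

Note that `zpool add` is permanent; you can't pull the vdev back out later, which is the "can't shrink" part.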
I don't sleep the drives. I have gone back and forth on that, but most of my drives are from older enterprise storage, so the power-on hours were already high when I got them and I didn't want to take a chance of the spin cycles taking them out.
I have only added Docker, SnapRAID, mergerfs, and msmtp, so my installs on the base OS are limited. Proxmox has a built-in data collector that I send to a Grafana LXC, and I have also tweaked the Unraid Plex Grafana dashboard to work with my Plex LXC.
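The built-in collector mentioned above is Proxmox's external metric server feature (Datacenter > Metric Server, or `/etc/pve/status.cfg`). A minimal sketch of the config, assuming an InfluxDB instance feeding Grafana; the name and address here are placeholders:

```
influxdb: grafana-metrics
        server 192.168.1.50
        port 8089
```

Once that's in place, every node in the cluster ships host, VM, and storage metrics without any agent inside the guests.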
The one thing I don't know that you have with your...
Any reason you don't want to use Proxmox as the base? It does ZFS boot (whatever disks you want) out of the box with no fuss. Have your rootfs and your 10-disk pools for VMs and LXCs; for Docker, install it on the base OS or put it in an LXC or VM. I am currently running all of the above. I have...
If you're asking whether you can access the web GUI from the Proxmox box itself, then you are correct that you need a second computer, as there is no GUI installed with Proxmox. It is Debian, so in theory you could install a desktop and manage it from itself, but that's not great practice.
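For reference, the web GUI listens on HTTPS port 8006, so from any other machine on the network you just point a browser at the host. A quick sketch, with the IP as a placeholder for your host's address:

```shell
# Proxmox web GUI URL (open in a browser from another machine):
#   https://192.168.1.10:8006
# Or sanity-check from a shell; -k skips the self-signed cert warning:
curl -k https://192.168.1.10:8006
```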