Morgonaut converting me to virtualization...

So I'm posting this in server/workstation builds because it will definitely affect my future server/workstation builds. I've decided this is something I need to learn more about before I commit to a less efficient path; it seems like something worth planning for, or at least trying.

For those who don't know, Morgonaut Hackintosh is a YouTube channel that covers a lot of hackintosh material, with a very strong pitch to virtualize everything. I'll be honest, I'm probably sold; my only question is whether it works as well in reality as it does in the sales pitch. :)

My virtualization question potentially extends beyond hackintosh or Mac stuff, though - it got me interested enough that I wouldn't rule out running a dedicated Windows or NAS box this way. An extra layer of protection from malware, increased hardware compatibility, easier standardized backups, migrating an already-working OS+apps install... hmm.


SO... my question for the group. There are a bunch of virtualization sub-boards here covering specific virtualization solutions - but I don't even know which one would be appropriate to try or use! So I'm seeking suggestions and feedback on which to look at, and maybe a comparison.

- Most of my interest concerns audiovisual production, which is VERY sensitive to things like latency for realtime work. That fear is what kept me from ever looking much into virtualization in the past. I use software like Avid Pro Tools, and my assumption was that some unexpected CPU use UNDER the OS, where the software can't see it and didn't expect it, might throw off timing - but since I don't have anyone else to ask, I could be totally wrong.

- Are there any specific recommendations to follow when choosing hardware for virtualization? I assumed all older Xeon gear would support what's needed, but I'm unsure about newer Ryzen parts from 1st through 5th gen.

- Mac virtualization is a specific need, because only the Mac version of Pro Tools supports Dolby Atmos in software right now.

- I'd love to virtualize Mac on AMD, but that's increasingly discouraged by the bare-metal guides, including talk of audio sync problems and such; if anyone has feedback on using a Ryzentosh for AV production, I'm all ears. It definitely seems like better bang for the buck and worth experimenting with.
 
Okay, no responses yet, so maybe I was too open-ended, or needed to show I've been trying to do my own research too..?

The current state of my understanding, to start the convo - if I say something wrong, please correct me:

VMware ESXi is the most common type 1 hypervisor, with spots 2 and 3 alternating between Microsoft Hyper-V and Citrix's Xen, but there are at least a half dozen others too. There are both free and paid versions of most of these - I'm not against spending money (it's not about spending zero), though being able to learn and test for free is important, since I don't even know whether what I want to do will work properly. My #1 concern is LATENCY (more than spending nothing) and how a hypervisor on the bare metal under the OS might screw up timing in certain sensitive AV software.

Most of the fancy features of these systems I probably DON'T need - it might be interesting to know they exist, or to learn about in the future, but I'm the reluctant sysadmin who just wants to learn the minimum necessary to get back to my real job of video. Minimum time investment to solve problems that won't paint me into a corner, basically, so I can get back to work. Maybe I'll expand my knowledge later, but I don't want to play with hypervisors, I want to play with video software.

Proxmox, based on Debian, seems to be the hypervisor Morgonaut uses, and Linus Tech Tips did a Mac virtualization video involving KVM, which is built into the Linux kernel. Since both of these are free/open source and have public instructional information, I'm willing to look at those first. I'm also very keen on the idea that both are Linux-based, because that means the hardware compatibility of Linux should apply instead of the compatibility of hackintoshing. :)


Hardware compatibility is probably my #2 concern, which is part of what steers me toward the Linux-based Proxmox and KVM. VMware sounds like it has a narrower hardware compatibility list, which could be a concern, and the free version may also have a 32GB per-VM RAM limit, which is limiting for 4-8K video work. Microsoft Hyper-V sounds like it's included with all Server editions from 2008 up, which should mean good hardware compatibility, so I'm probably eyeing that more than VMware to experiment with. Are there any other type 1 hypervisors I should look at?

The only CPU requirements I'm aware of are Intel's VT-x (plus VT-d for device passthrough) and AMD's equivalents, AMD-V/SVM (plus AMD-Vi), along with Intel SSE4.2 instructions for newer macOS - all the hardware I'm looking at should support these, but I'm aware some CPU models lack them.
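As a quick sanity check on a Linux host, the kernel reports the CPU virtualization features in `/proc/cpuinfo`. A minimal sketch (the helper function name is my own, and the sample flags line is made up for illustration):

```shell
# Check a cpuinfo "flags" line for the virtualization features above.
# vmx = Intel VT-x, svm = AMD-V, sse4_2 = SSE4.2; these are the flag
# names the Linux kernel actually reports.
check_virt_flags() {
  # $1: contents of /proc/cpuinfo (or a sample line when testing)
  echo "$1" | grep -qw vmx    && echo "Intel VT-x: yes"
  echo "$1" | grep -qw svm    && echo "AMD-V (SVM): yes"
  echo "$1" | grep -qw sse4_2 && echo "SSE4.2: yes"
  true  # don't let a missing flag set a failing exit status
}

# On a real host you would run:
#   check_virt_flags "$(cat /proc/cpuinfo)"
check_virt_flags "flags : fpu vmx sse4_2 aes"
```

Note that VT-d/AMD-Vi (the IOMMU, needed for passthrough) does not appear in the cpuinfo flags list; it shows up in firmware settings and in `dmesg` after boot.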

GPU passthru is said to sometimes be tricky with both Linux-based systems - but works well IF it works. I don't know exactly what that means in practice, whether it's finicky about exact GPU models, or how it usually goes.

Passthru of other things is a question too - for audio interfaces under Pro Tools that means FireWire, USB, and now Thunderbolt. I don't know whether there are any problems getting this to work as well as running Mac native.
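For what it's worth, on a Linux host the usual first step in judging passthrough feasibility is checking whether the GPU (or USB/FireWire/Thunderbolt controller) sits in its own IOMMU group. A minimal sketch over the standard sysfs layout (the function name is mine; on a real host you would call it with no argument):

```shell
# List devices per IOMMU group from the standard sysfs tree.
# A device sharing its group with unrelated devices is usually
# harder to pass through cleanly to a single VM.
list_iommu_groups() {
  local base="${1:-/sys/kernel/iommu_groups}"
  local dev group
  for dev in "$base"/*/devices/*; do
    [ -e "$dev" ] || continue
    group=$(basename "$(dirname "$(dirname "$dev")")")
    echo "IOMMU group $group: $(basename "$dev")"
  done
}

list_iommu_groups  # empty output means no IOMMU is active on this host
```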


If someone has suggestions of:
- other hypervisors I might want to consider
- which of these hypervisors might be best suited to my stated AV use (with latency being #1 and hardware compatibility #2), especially if it differs from my own reading so far - it might save me some wasted time
- "hardware list-narrowing guidelines" - potential problem areas, things that could trip up my planned usage, or suggested research topics to delve into before choosing hardware

Really that last one is, I guess, the purpose - what do you suggest I research next, and where do I find that information? It is, for instance, difficult to find much public information on the Avid Pro Tools forums about hackintoshing or running under virtualization, because neither is officially supported and they won't give help for either, yet I know people already do both.
 

audiophonicz

Member
Jan 11, 2021
So just in general: I think AV encoding-type work is better on bare metal.
Not sure how many VMs you're trying to run here, but unless you're going for enterprise virtualization, I'd stick with the free flavors. Also on the how-many-VMs note: GPU/PCI/USB/etc. 'passthru' passes a physical device through the host OS to a specific VM. It is not a sharing mechanism - everything else loses access to that device except the VM it's passed to, including the host.

I built a hackintosh for audio encoding once. Same exact hardware as Apple would use, just a hackintosh OS. Ran like poop. I can't imagine a virtual hackintosh would be any better, let alone for 4K video.

Edit: this just happened to come up in another conversation - apparently Xen is built on KVM, not Hyper-V. I stand corrected.
 

jjh11

New Member
Dec 8, 2022
I have virtualized two separate macOS Monterey VMs, on VMware vSphere 7 and Proxmox 7.2, both running on a Zen 3 server. The installation is quite straightforward with OpenCore (except for some trickery to get the unsupported AMD CPU to work with the macOS installer), but I haven't tried any PCI device passthrough, so I don't know if there are issues there.
For hardware passthrough, the list of modern GPUs supported by macOS is very limited; for audio devices the situation might be better.
One thing to note is that macOS VMs are very easy to break when a major OS update arrives, so if you're planning to do any productive work, you need to be careful.
I have separate Macs for work; those VMs are mostly a playground for me, where it doesn't matter if something breaks.
 
audiophonicz said:
So just in general: I think AV encoding-type work is better on bare metal.
Not sure how many VMs you're trying to run here, but unless you're going for enterprise virtualization, I'd stick with the free flavors. Also on the how-many-VMs note: GPU/PCI/USB/etc. 'passthru' passes a physical device through the host OS to a specific VM. It is not a sharing mechanism - everything else loses access to that device except the VM it's passed to, including the host.

I built a hackintosh for audio encoding once. Same exact hardware as Apple would use, just a hackintosh OS. Ran like poop. I can't imagine a virtual hackintosh would be any better, let alone for 4K video.

Edit: this just happened to come up in another conversation - apparently Xen is built on KVM, not Hyper-V. I stand corrected.
AV encoding being better on bare metal is what I would always assume - I'd think neither the OS nor the app can predict latency happening 'under' their awareness, and even if a hypervisor is a very slim load, maybe it's an unpredictable load. Since I want to use timing-critical software synthesizers and such, I'm very conscious of this, but I can't disagree that Morgonaut's virtualization method has many nice advantages if it could be made to work with the audio software.

That said... all the other features make attempting a hypervisor potentially beneficial: improving compatibility, being able to easily snapshot a working system, having more than one working system (i.e. a new upgraded one) so you can test a new configuration and still instantly use the old one to get work done if it flakes out. I've heard stories of stuff that won't run on a bare-metal Mac working under virtualization, which has me very curious - like if, instead of passthru, you let the hypervisor control that hardware. I don't know whether that performs the same even when it works, though - maybe without video acceleration the graphics card is useless, but for something like a USB audio device there's no issue?

I won't know until I test. I'm intending to buy two systems sometime this spring, hopefully (one Intel and one Ryzen), and test both vanilla and virtualized Mac installs on each to see what happens.


jjh11 said:
I have virtualized two separate macOS Monterey VMs, on VMware vSphere 7 and Proxmox 7.2, both running on a Zen 3 server. The installation is quite straightforward with OpenCore (except for some trickery to get the unsupported AMD CPU to work with the macOS installer), but I haven't tried any PCI device passthrough, so I don't know if there are issues there.
For hardware passthrough, the list of modern GPUs supported by macOS is very limited; for audio devices the situation might be better.
One thing to note is that macOS VMs are very easy to break when a major OS update arrives, so if you're planning to do any productive work, you need to be careful.
I have separate Macs for work; those VMs are mostly a playground for me, where it doesn't matter if something breaks.
Did you find reasons to prefer one hypervisor over the other? Was the experience inside the virtualized machine any different?

I know macOS VMs can break - but hackintoshes break on updates, period. At least virtualized, I can freeze a working install, create a test clone, upgrade it, and see how it works for months before committing to changing my workflow. If something unexpected happens, I roll back to get work done, then mess with it on a weekend or evening to figure out what broke.
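To make that concrete, the freeze/clone/rollback workflow maps onto Proxmox's real `qm` subcommands (`snapshot`, `clone`, `listsnapshot`, `rollback`). Sketched here with a dry-run stub so it can be read without a live Proxmox host; the VM id 100 and the snapshot names are made up:

```shell
# Dry-run stub: echo what would be run instead of calling the real
# Proxmox CLI. Delete this function on an actual Proxmox host.
qm() { echo "would run: qm $*"; }

qm snapshot 100 known-good            # freeze the working install
qm clone 100 101 --name test-upgrade  # clone to trial the OS update on
qm listsnapshot 100                   # review existing snapshots
qm rollback 100 known-good            # roll back if the update misbehaves
```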
 

bigfellasdad

Member
Apr 10, 2018
In my work, it's a case of virtualise everything unless a good reason exists not to. We look after over 7,000 on-premise servers, the large majority now virtualised, and if done correctly it's absolutely fine; we also have thousands of cloud VMs, all of which are of course virtualised.
I would agree with the others above and suggest your requirements belong on physical machines. I would also imagine you have strong IO requirements, so unless many people need access to the same data, I'd even use local storage. I'd avoid contention at all costs, and the only way to do that is with dedicated hosts for the VMs and dedicated network/storage for those hosts, which in my mind is not worth the added complexity.

But, like anybody, I could be wrong... the devil is always in the detail!
 
That's kind of what I'm going to try to do. The benefits outweigh the costs - at least enough to justify ATTEMPTING to virtualize. If virtualization doesn't work, I fall back to a bare-metal hackintosh install, after carefully picking parts for compatibility with both OSX and type 1 hypervisors.

It's just a question of "should I look beyond Proxmox, or am I overcomplicating it?" Is there any reason to try five different ways to virtualize OSX? Are there critical hypervisor features that are really worth having but only available on Xen, or only on Hyper-V, for instance? And are there critical differences in the virtualized environment (including VM portability to or from other hypervisors - I don't know if I'd need that, I'm just aware it's possible) that might sway my decision? I don't even know all the right questions to ask, because I don't know what I don't know about hypervisors right now. I'm trying to work out what I definitely want, what's worth testing, and what absolutely won't work.
 

Parallax

Active Member
Nov 8, 2020
London, UK
The only hypervisor I have seen published latency numbers for, on the impact of virtualisation, is VMware ESXi, where it's a high priority for certain verticals they serve (like the financials). However, I think it will be more difficult for you to run and to find community support for Hackintosh-style solutions on ESXi than on Proxmox, so you probably need to consider that angle as your main concern rather than a few ms of latency here or there.
 