Hyper-V networking - for a media server (Plex)


BestGear

Member
Aug 25, 2014
Hi Guys


Just want to see what you think is the best approach for a Hyper-V VM running Plex.

My host is a file and print server only, running Windows Server 2012 R2.

Plex is in a VM, and uses SMB to reach the shares where the media live across multiple volumes.

Now, since Plex is in a VM, how should I configure the networking to be as optimal as possible?

I did have the Plex VM on its own virtual switch with a dedicated physical NIC, which was great, but it makes that NIC busy: it's used by the Plex clients connecting to Plex, and of course by Plex to access the media shares, so it's hitting the real "physical" network with its traffic twice.

So... what's the best route?

Should I configure a private virtual switch in which the Plex VM and the host are the only players, which would let Plex access the media shares without any traffic leaving the host, and keep the external switch and dedicated NIC for supporting the Plex clients?

I guess the risk in that is that Plex would have two routes with the same hop count/cost to reach the host's file and print shares (the media). I guess I could fudge it so that the Plex VM only knows the host by its private-switch IP address (a hosts entry rather than DNS to resolve the host); that way the route would go out of the private-switch virtual NIC.
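(For anyone wanting to try the same, here is a minimal PowerShell sketch of that host-only path. The switch name, VM name and subnet are just placeholders; also note that in Hyper-V terms an "internal" switch is the type that includes the host, while a "private" switch is VM-only.)

```powershell
# Internal switch: connects the VMs *and* the management OS
New-VMSwitch -Name "PlexInternal" -SwitchType Internal

# The host automatically gets a "vEthernet (PlexInternal)" adapter; give it an
# address on a private, non-routed subnet
New-NetIPAddress -InterfaceAlias "vEthernet (PlexInternal)" -IPAddress 192.168.250.1 -PrefixLength 24

# Add a second NIC to the Plex VM and hang it off the same switch
Add-VMNetworkAdapter -VMName "Plex" -SwitchName "PlexInternal"
# (inside the guest, give that adapter e.g. 192.168.250.2/24 with no gateway)
```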

I don't want to change the way Plex reaches the media shares, so discount presenting the disks directly (raw pass-through) to the Plex VM rather than to the host. I originally had it that way, but that led to other issues and flexibility challenges (it worked fine, though!).

Anyway... if you have read this far, thanks.... and ideas welcome!

David
 

Mike

Member
May 29, 2012
Maybe I'm missing something, but isn't this mostly unneeded, since it's a full-duplex medium and the paravirtual NIC is 10 Gb anyway?
 

TuxDude

Well-Known Member
Sep 17, 2011
You say it is a "busy" NIC - but have you actually looked at its utilization levels at all? I would guess it probably has a fair bit of bandwidth to spare, in which case, unless you will be adding more clients soon, there is no reason to change anything at all.

As a general architecture (not knowing how many NICs you have or at what speeds), it's always good to keep things as simple as possible. If I were to rework things, I would probably take all of the NICs in the host, configure them as a single team (with multiple VLANs if needed), and connect the host, that NIC team, and all of the VMs to a single virtual switch - done. But following the rule of "if it's not broke, don't fix it" - as I said above, it's probably fine to just let it keep running as-is.
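(A rough PowerShell sketch of that converged layout, assuming the built-in LBFO teaming on 2012 R2 - the team, switch, NIC and VLAN values below are only placeholders:)

```powershell
# Team the physical NICs (adapter names are examples)
New-NetLbfoTeam -Name "LanTeam" -TeamMembers "NIC1","NIC2","NIC3" -TeamingMode SwitchIndependent

# One external switch on top of the team, shared with the management OS
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "LanTeam" -AllowManagementOS $true

# Attach the VMs, tagging a VLAN per VM if some traffic really must stay separate
Connect-VMNetworkAdapter -VMName "Plex" -SwitchName "ConvergedSwitch"
Set-VMNetworkAdapterVlan -VMName "Plex" -Access -VlanId 10
```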
 

PigLover

Moderator
Jan 26, 2011
You say it is a "busy" NIC - but have you actually looked at its utilization levels at all?
+1

Blu-ray HD is 54 Mbit/s max - and most discs are much less than this. Most Plex users re-encode their video library and end up with much better compression. Unless you are streaming raw 3D Blu-ray to 10+ simultaneous users from your Plex server, your NIC isn't actually very busy at all.

Keep it simple.
 

markpower28

Active Member
Apr 9, 2013
I have been running a Plex VM on Hyper-V with a team of two physical NICs, which is shared by other VMs. The load is very light. I only stream in the house, no external access.
 

BestGear

Member
Aug 25, 2014
Hi Guys - thanks for the replies.

I do like *simple*, but I do want and need to separate the data for other reasons, hence my multiple NICs (the host has 6x Pro/1000 with SR-IOV) - reasons I don't want to go into.

The NIC *is* busy. It's not so much the data being sent from Plex to the viewing clients (of which there are quite a few, including external ones), but the data going from the host-provided file shares, where the media files are, to Plex.

You will appreciate that many *normal* Blu-rays are 25-35 GB in size, so when streaming from several of them, Plex is reading a good heap of data off the disks, even if it's transcoding to a lower-quality client.

The question (forget Plex) is really how to provide an optimal path for a VM to reach host-provided shares: out to the real physical world and back in, or via a host-only switch. An internal switch seems the way to go (with a notional 10 Gbps internal link speed), but I was looking for some real-world experience and ideas.

I just feel that there should be no reason for the Plex data being read off the host shares to actually cross a physical network.

I think I will give it a whirl with the internal switch on another host and see how it compares.
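(When I do, a simple comparison along these lines should show the difference - the paths and file names are entirely hypothetical, and I'll use two different large files so Windows caching doesn't flatter the second run:)

```powershell
# From inside the Plex VM: time the same sort of copy over each path
Measure-Command { Copy-Item "\\192.168.250.1\media\test1.mkv" "C:\Temp\" }   # internal-switch address
Measure-Command { Copy-Item "\\FILESERVER\media\test2.mkv" "C:\Temp\" }      # normal LAN name/address
```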


David
 

PigLover

Moderator
Jan 26, 2011
Still think you are trying to "solve" a problem you don't have. Have you collected any stats/data on the utilization of your NIC under load? Do you have any evidence of network-load-related problems during playback?

Back of the envelope, it doesn't pencil out to more than 100-200 Mbps, even streaming large Blu-rays to multiple clients with the datastore remote from the Plex server. And your network should be able to handle that load without any problems or side effects whatsoever.
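(If you want actual numbers rather than a guess, something along these lines on the host will show sustained throughput per NIC - the counter path is the standard perfmon one:)

```powershell
# Sample total throughput (bytes in + out) on every NIC, once a second for a minute
Get-Counter -Counter "\Network Interface(*)\Bytes Total/sec" -SampleInterval 1 -MaxSamples 60 |
    ForEach-Object { $_.CounterSamples | Where-Object CookedValue -gt 0 }
```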
 

TuxDude

Well-Known Member
Sep 17, 2011
It doesn't matter how big the Blu-ray images are - the point PigLover made above is that to play them you only need to stream them at 54 Mbps max. There is no need to get the entire 30+ GB file to the client instantly; the end of a two-hour movie doesn't need to arrive until two hours have passed. Assuming the worst bandwidth case (all local clients, no transcoding, all video at max bitrate all the time), you've still got bandwidth for almost 20 clients (that's around 1 Gbps receive from SMB plus 1 Gbps send to clients - you can do full duplex). In reality, the sum total of bandwidth to your external clients is probably going to be minimal (at least significantly limited relative to your LAN bandwidth), as I'm guessing you don't have gigabit internet upload speed. In fact, unless you have a ton of CPU power allocated to that VM, I'm willing to bet you run out of CPU for transcoding before you run out of bandwidth.
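(The arithmetic, as a quick PowerShell back-of-the-envelope - 54 Mbit/s is the assumed worst-case stream, gigabit in each direction:)

```powershell
$streamMbps = 54      # worst-case Blu-ray bitrate per playing client
$linkMbps   = 1000    # gigabit each way (full duplex)

# Each client costs ~54 Mbit/s in from SMB and ~54 Mbit/s out to the client,
# but those flows run in opposite directions on the link.
[math]::Floor($linkMbps / $streamMbps)   # => 18 full-rate streams per direction
```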

As to the question - yes, the fastest way to transfer data from a host SMB share to a guest is via an internal switch. The fact that it reports 10 Gbps as the link speed doesn't matter, as there is no link - it's just a place the OS expects to see a number. The speed is as fast as memory copies can move the data around, probably limited by how fast the host can read the data from disk or the VM can process it. I remember old VMware versions that used to emulate an ancient 10 Mbit NIC because every OS on the planet had drivers for it, and we still got multi-Gbps transfer rates between VMs with them.
 

BestGear

Member
Aug 25, 2014
Still think you are trying to "solve" a problem you don't have.
LOL!

That's often part of the fun! I *know* it's not a big problem, but I wanted to balance things out a bit.

Back of the envelope, it doesn't pencil out to more than 100-200 Mbps
Perfmon/Resource Monitor shows an average of 300-350 Mbps with a fairly typical load on the NIC... which I know is not stressed, but it's certainly busy, as it's sustained.

As to the question - yes, the fastest way to transfer data from a host SMB share to a guest is via an internal switch.
Thanks... that is what I wanted to confirm. Is there a best way of ensuring that routing from the guest (with one external NIC and one internal/private-switch NIC) to the host goes via the internal switch, or is a simple hosts entry - so the VM knows to route via the internal switch to reach the host's private-switch IP address - OK?

I like the point re CPU horsepower for transcoding... I don't actually have many issues with that unless it's several crappy phones watching big Blu-rays. The host has a hex-core Xeon and 24 GB RAM, and it purrs along quite happily.

Thanks Guys....
 

TuxDude

Well-Known Member
Sep 17, 2011
Well, the easy way of ensuring that guest-to-host traffic goes via the internal switch is to have that as the only option. Going back to my first post in this thread: if everything is connected through a single virtual switch, which is then linked to the rest of the LAN over a teamed connection, the traffic between the host and VMs will stay in RAM, while the traffic destined for outside will go external. If you really have requirements for keeping some traffic separate, preferably use VLANs on that team, or move just those few things off to dedicated NICs.

If you are sticking with multiple NICs/IPs for the guest and host (their private internal connection plus their own external connections), you might be able to get away with just a hosts file entry or two, or you might have to configure a few static routes. You might have to do some testing to get it working correctly.
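(As a sketch of that - the name and addresses below are made up, with the internal-switch subnet being whatever you assigned to it:)

```powershell
# In the Plex guest: make the file server's name resolve to its internal-switch IP
Add-Content -Path "$env:SystemRoot\System32\drivers\etc\hosts" -Value "192.168.250.1  FILESERVER"

# Optionally pin a host route so that address is only reachable via the internal adapter
New-NetRoute -DestinationPrefix "192.168.250.1/32" -InterfaceAlias "Ethernet 2"   # alias of the guest's internal NIC
```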
 

BestGear

Member
Aug 25, 2014
Hi

OK - I tried a couple of things with NetCPS.

Standard gigabit LAN, PC to server = 94 mps

VM to host server over the "reported" (I know it's notional) 10 Gbps private switch = 24 mps

I checked the physical switch stats and can confirm that the VM definitely did not hit copper.

So - VM to server over the internal switch is a quarter of the speed!

I need some downtime to set it up again over the physical LAN and retest, or I may do it on another host.

That is certainly not the result I expected!

I also tried NetCPS from two VMs sitting on another virtual switch (external, via another physical NIC), VM to VM... and only got 24 mps there too!

I know NetCPS is old... but I would not have expected any duff results in a VM....

I need to go and understand why the virtual switches are so pants... and yes, VMQ is disabled.
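(A few quick checks on the host before blaming the virtual switches themselves - purely a diagnostic sketch, and the adapter name is a placeholder:)

```powershell
# Confirm VMQ really is off on the physical NICs behind the switches
Get-NetAdapterVmq | Format-Table Name, Enabled

# Look at the other offloads that often misbehave on Pro/1000-class NICs under a vSwitch
Get-NetAdapterAdvancedProperty -Name "NIC1" | Format-Table DisplayName, DisplayValue

# And check what each virtual switch is actually bound to
Get-VMSwitch | Format-Table Name, SwitchType, NetAdapterInterfaceDescription
```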

David
 