Odroid H2 Proxmox host


blood

Member
Apr 20, 2017
I received an Odroid H2 system last week and have it stood up with Proxmox. I bought this system so that I could consolidate some virtualization functions onto it, with the intent of powering down a few other systems that consume more power. One big thing I wanted was hardware transcoding for Plex, as I had been relying purely on software previously and had some trouble with reliable playback at times.

My configuration is:
  • Odroid H2 board (Celeron J4105 proc)
  • 2 x 16GB DDR4 SODIMMs (the bulk of the cost)
  • 15V/4A power supply
  • H2 Type 4 case
  • 256GB Samsung 950 Pro M.2 (pulled from another system when I upgraded)
  • 4TB HDD (some consumer-grade drive I had lying around)
Overall I'm pretty happy with it. The BIOS has a rough feel to it - probably because Odroid is just getting started with x86/x64 stuff. Settings are a bit scattered and inconsistent - though I got it all working, so all is forgiven. For instance, there's a section for NVMe that doesn't show my drive - yet I can select it in the boot screen and it works fine beyond that. There isn't an option for "legacy boot", which meant I couldn't boot the Proxmox installer, so I tossed Debian Stretch on there with UEFI and then followed the procedure for laying Proxmox down on top of it. It's all working well now.

(Proxmox needs to add support for UEFI. Seriously)

The NICs are Realtek, which kinda made the hair on the back of my neck stand on end, but when I sloshed a few TB of data over to it with rsync, the transfer was bouncing off the 1Gbps redline, so I suppose this is alright (I'm surprised the HDD could sustain this, but it did). I've got Open vSwitch set up nicely on the second NIC with VLAN tagging, which I use for connecting VMs and LXC containers, with the first NIC dedicated to management.
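
For reference, the relevant chunk of /etc/network/interfaces for a setup like this would look roughly like the sketch below - the NIC names and addresses are placeholders, and whether you use auto or the allow-ovs/allow-vmbrX stanzas depends on your Proxmox/Open vSwitch version:

Code:
# first NIC: plain Linux bridge, management only
auto enp2s0
iface enp2s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports enp2s0
        bridge_stp off
        bridge_fd 0

# second NIC: Open vSwitch bridge; guests attach to it with their own VLAN tags
allow-vmbr1 enp3s0
iface enp3s0 inet manual
        ovs_type OVSPort
        ovs_bridge vmbr1

auto vmbr1
allow-ovs vmbr1
iface vmbr1 inet manual
        ovs_type OVSBridge
        ovs_ports enp3s0
Guests then just get their net0 pointed at vmbr1 with whatever VLAN tag they need.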

I also connected a USB serial adapter to the first UART on the expansion header and enabled a console on it. I need to explore this more, as I think I recall seeing an option for console redirection in the BIOS which, while certainly not full IPMI, would be quite welcome.
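
For anyone wanting to replicate the serial console piece, the OS side is just the usual GRUB and systemd bits - a minimal sketch, assuming the header UART shows up as ttyS0 and 115200 baud:

Code:
# /etc/default/grub (excerpt) - mirror boot output to the serial port
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8"
GRUB_TERMINAL="console serial"
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1"

# then apply it and start a login getty on the port
update-grub
systemctl enable --now serial-getty@ttyS0.service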

The first thing I tested was whether I could get hardware-accelerated transcoding working with Plex in a container, and I'm delighted to say that it works well so far. The Proxmox kernel is new enough to recognize the Gemini Lake CPU fully, and all the right stuff in /dev/dri was present. The userland wasn't new enough for vainfo to detect anything, unfortunately, but I built an Ubuntu 18.04 LXC container (2 CPUs, 8GB of RAM), configured it to pass through the stuff under /dev/dri/ and /dev/fb0, created the right devices under /dev in the container (which I then automated), and passed in the storage for some movies before firing up Plex. It smoothly streams 1080p movies to devices on the LAN and to some WAN clients that required transcoding, without skipping a beat, and I verified that all streams that weren't direct-played were transcoded fully in hardware. I didn't test how many simultaneous streams it can support, or 4K - but it's working much better than it did with bigger/badder Xeon setups, so I'm considering it a win.
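
For anyone wanting to double-check their own setup: besides the Plex dashboard tagging the stream as "(hw)", the GPU's video engine should show activity in intel_gpu_top on the host while a transcode runs - roughly like this (package name from the standard Debian repos; older versions label the engines a bit differently):

Code:
apt install intel-gpu-tools
intel_gpu_top    # watch the video/bitstream engine while a client forces a transcode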

I need to finish migrating a bunch of containers to it now, but it seems more than capable of handling the always-on infrastructure services for my house.

Enough CPU to run light server duty? Check
Enough RAM to let me overprovision the CPU? Check
Access to fast storage? Check
Access to bulk storage? Check
Quick Sync video? Check
Low power? Check

I need to pick up a fan for it - I didn't realize when I bought it that the fan header isn't a standard 4-pin one. While it's not overheating, I'd like to keep it cool.

I'm now shutting down an older D525 system (X7SPA-H-D525) which was really getting long in the tooth. It was reasonably low power, but really feels weak these days. I have a few ARM SBCs that are much more responsive, but this was actual server-grade hardware, so I kept it around. I'm also going to power down an X10SLM+-LN4F system. I liked that box, except I wish it had more (smaller) PCIe slots for expansion.

The last "big" x64 system that's running is my OpenIndiana NAS. Everything else is either ARM, or embedded-esque like this H2. I'm looking forward to seeing my energy bill next month!
 

PigLover

Moderator
Jan 26, 2011
Very nice. I'm excited about the Odroid H2 but missed out on the first block of sales. Too bad they undershot the market and sold out so fast! And doubly too bad that Intel's manufacturing snafus mean they can't get more CPU parts until April-ish next year for another production run.

I am confused by the no UEFI support in the installer thing - I've installed Proxmox with UEFI many times. You have to in order to boot from NVMe on SuperMicro motherboards. So there must be some odd incompatibility between the Proxmox installer and the H2 UEFI BIOS.
 

blood

Member
Apr 20, 2017
I've installed Proxmox with UEFI many times.
Really? I've never been able to get it working before, having tried on a handful of systems over the years. I've always just switched to legacy booting, but I couldn't here. I've googled around for this before and ran into solutions involving rEFInd and such, but mostly found people just saying to use legacy boot.

I went down a bit of a rabbit hole that I didn't mention in my opening post, where I tried to use ZFS for my root filesystem with UEFI. I got it working well with stock Debian until I laid Proxmox down on top of it - at which point it wouldn't boot (blank screen where I expected to see GRUB). Nothing I could do from a rescue disk would revive it short of a complete reinstall (I could mount all the filesystems and reconfigure/reinstall GRUB, and everything looked right), but I could never figure out what was breaking it. I eventually gave up on ZFS root.
 

PigLover

Moderator
Jan 26, 2011
I've got 5 systems running, booted from NVMe via UEFI and installed using the Proxmox installer in UEFI mode. It does work, at least with the right UEFI BIOS (e.g., SuperMicro's). However I've never gotten a ZFS root to boot via UEFI. So at least one part is consistent!
 

Kal G

Active Member
Oct 29, 2014
Very nice. I'm excited about the Odroid H2 but missed out on the first block of sales. Too bad they undershot the market and sold out so fast! And doubly too bad that Intel's manufacturing snafus mean they can't get more CPU parts until April-ish next year for another production run.
Doesn't help the hobbyists when somebody buys up 100 of the initial run to sell on eBay.
 

amalurk

Active Member
Dec 16, 2016
Wonder if they are going to sell the next batch at the same price... The Udoo Bolt is interesting too, but it's probably going to be 3x the price to get to the 4-core version, so it only makes sense if you need the better GPU.
 

blood

Member
Apr 20, 2017
I also connected a USB serial adapter to the first UART on the expansion header and enabled a console on it. I need to explore this more, as I think I recall seeing an option for console redirection in the BIOS which, while certainly not full IPMI, would be quite welcome.
I could have sworn that I was able to connect to a console over the first serial port a few days ago, but as of now I don't get anything - and I've tried all sorts of things to get it working. I can see "agetty" listening on ttyS0, and I have a 3.3V FTDI USB serial adapter connected to the pins for UART1 according to this, but I'm not getting any data now. Maybe it was never working. No amount of messing with kermit's flow control settings or other options seems to make it (or UART2) work. That's annoying.
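
The next step is probably a dumb loopback test - something like this, with the UART's TX pin jumpered to its own RX pin (device name assumed, and agetty stopped first so it isn't holding the port):

Code:
systemctl stop serial-getty@ttyS0.service
stty -F /dev/ttyS0 115200 raw -echo
cat /dev/ttyS0 &
echo "loopback" > /dev/ttyS0    # should come back on the cat if the UART itself is alive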

This didn't seem like a project where I'd need to use my oscilloscope, but maybe it is...
 

Metteus

New Member
Jan 22, 2016
Hello Blood,

Thank you for sharing this fantastic overview; I've bought an Odroid H2 after reading it :)
Could you please provide any additional information about how to configure Plex with hardware acceleration in a container?
I use Proxmox too, so the scenario is the same.

Thanks!
 

blood

Member
Apr 20, 2017
Hello Blood,

Thank you for sharing this fantastic overview; I've bought an Odroid H2 after reading it :)
Could you please provide any additional information about how to configure Plex with hardware acceleration in a container?
I use Proxmox too, so the scenario is the same.

Thanks!
Sorry for the belated reply to this...

I should have taken better notes when I did this originally, but looking at the current setup, here are the things I can recall having to do:

First, make sure that the right devices exist. On the host, I have this:

Code:
root@h2:~# ls -l /dev/dri/
total 0
crw-rw---- 1 root video 226,   0 Dec 21  2018 card0
crw-rw---- 1 root video 226, 128 Dec 21  2018 renderD128
I recall that running vainfo on the host didn't show me what I wanted, but that was because the userland tools in Debian 9 were too old for the device. Maybe the version in Buster will work better.

Then, define the LXC container to run the Plex server out of. I used Ubuntu 18.04 because Plex supports it explicitly.
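
Creating the container from the CLI looks something like this - a sketch only, since the template filename, VMID, storage, and bridge names here are just whatever your setup uses:

Code:
pct create 100 local:vztmpl/ubuntu-18.04-standard_18.04.1-1_amd64.tar.gz \
  --hostname plex --cores 2 --memory 8192 \
  --rootfs local-lvm:16 \
  --net0 name=eth0,bridge=vmbr1,ip=dhcp
pct start 100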

Next, assuming your media doesn't exist within your container, you'll need to bind-mount it in from the host. Crack open the LXC config for the container under /etc/pve/lxc and add a line like this:

Code:
mp0: /raid/video,mp=/var/lib/video
That'll make /raid/video on the host available as /var/lib/video in the container. I also have this at the end of that same file:

Code:
lxc.cgroup.devices.allow: c 226:0 rwm
lxc.cgroup.devices.allow: c 226:128 rwm
lxc.cgroup.devices.allow: c 29:0 rwm
lxc.autodev: 1
lxc.hook.autodev: /var/lib/lxc/100/mount_hook
I forget precisely what all of that does - but it's there to make the graphics devices available to the container with the right permissions: as I understand it, the devices.allow lines whitelist the DRM character devices (major 226) and the framebuffer (major 29) for the container, lxc.autodev has LXC populate a minimal /dev, and lxc.hook.autodev points at a script to run when the container starts. Lastly, /var/lib/lxc/100/mount_hook contains:

Code:
#!/bin/bash

mkdir -p ${LXC_ROOTFS_MOUNT}/dev/dri
mknod -m 666 ${LXC_ROOTFS_MOUNT}/dev/dri/card0 c 226 0
mknod -m 666 ${LXC_ROOTFS_MOUNT}/dev/dri/renderD128 c 226 128
mknod -m 666 ${LXC_ROOTFS_MOUNT}/dev/fb0 c 29 0
On my system, that file has a mode of 755 (-rwxr-xr-x) and is owned by root. I believe it's a script that runs when the container comes up and ensures that the right device nodes exist for the graphics hardware that was passed through, so that Plex sees something when it scans for devices and can attach to it.
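
If vainfo isn't already present in the container, the Ubuntu 18.04 bits should be roughly these - I didn't keep notes on exactly what got pulled in, so treat the package list as a guess:

Code:
apt install vainfo i965-va-driver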

Inside of the container, I see this when I run vainfo:

Code:
root@plex:~# vainfo
error: can't connect to X server!
libva info: VA-API version 1.1.0
libva info: va_getDriverName() returns 0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so
libva info: Found init function __vaDriverInit_1_1
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.1 (libva 2.1.0)
vainfo: Driver version: Intel i965 driver for Intel(R) Gemini Lake - 2.1.0
vainfo: Supported profile and entrypoints
      VAProfileMPEG2Simple            : VAEntrypointVLD
      VAProfileMPEG2Main              : VAEntrypointVLD
      VAProfileH264ConstrainedBaseline: VAEntrypointVLD
      VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
      VAProfileH264ConstrainedBaseline: VAEntrypointEncSliceLP
      VAProfileH264Main               : VAEntrypointVLD
      VAProfileH264Main               : VAEntrypointEncSlice
      VAProfileH264Main               : VAEntrypointEncSliceLP
      VAProfileH264High               : VAEntrypointVLD
      VAProfileH264High               : VAEntrypointEncSlice
      VAProfileH264High               : VAEntrypointEncSliceLP
      VAProfileH264MultiviewHigh      : VAEntrypointVLD
      VAProfileH264MultiviewHigh      : VAEntrypointEncSlice
      VAProfileH264StereoHigh         : VAEntrypointVLD
      VAProfileH264StereoHigh         : VAEntrypointEncSlice
      VAProfileVC1Simple              : VAEntrypointVLD
      VAProfileVC1Main                : VAEntrypointVLD
      VAProfileVC1Advanced            : VAEntrypointVLD
      VAProfileNone                   : VAEntrypointVideoProc
      VAProfileJPEGBaseline           : VAEntrypointVLD
      VAProfileJPEGBaseline           : VAEntrypointEncPicture
      VAProfileVP8Version0_3          : VAEntrypointVLD
      VAProfileVP8Version0_3          : VAEntrypointEncSlice
      VAProfileHEVCMain               : VAEntrypointVLD
      VAProfileHEVCMain               : VAEntrypointEncSlice
      VAProfileHEVCMain10             : VAEntrypointVLD
      VAProfileHEVCMain10             : VAEntrypointEncSlice
      VAProfileVP9Profile0            : VAEntrypointVLD
      VAProfileVP9Profile0            : VAEntrypointEncSlice
      VAProfileVP9Profile2            : VAEntrypointVLD
If you get stuck on something, let me know and I'll see if I can help unravel it.
 

Metteus

New Member
Jan 22, 2016
Hello Blood,
Thank you very much for these details! The only thing I suppose was missing was making "/var/lib/lxc/100/mount_hook" executable; otherwise the container didn't start. So I ran the following command and now it seems to work great!

Code:
chmod 755 /var/lib/lxc/100/mount_hook
vainfo result:
Code:
plexserver:~$ vainfo
error: can't connect to X server!
libva info: VA-API version 1.1.0
libva info: va_getDriverName() returns 0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so
libva info: Found init function __vaDriverInit_1_1
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.1 (libva 2.1.0)
vainfo: Driver version: Intel i965 driver for Intel(R) Gemini Lake - 2.1.0
vainfo: Supported profile and entrypoints
      VAProfileMPEG2Simple            : VAEntrypointVLD
      VAProfileMPEG2Main              : VAEntrypointVLD
      VAProfileH264ConstrainedBaseline: VAEntrypointVLD
      VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
      VAProfileH264ConstrainedBaseline: VAEntrypointEncSliceLP
      VAProfileH264Main               : VAEntrypointVLD
      VAProfileH264Main               : VAEntrypointEncSlice
      VAProfileH264Main               : VAEntrypointEncSliceLP
      VAProfileH264High               : VAEntrypointVLD
      VAProfileH264High               : VAEntrypointEncSlice
      VAProfileH264High               : VAEntrypointEncSliceLP
      VAProfileH264MultiviewHigh      : VAEntrypointVLD
      VAProfileH264MultiviewHigh      : VAEntrypointEncSlice
      VAProfileH264StereoHigh         : VAEntrypointVLD
      VAProfileH264StereoHigh         : VAEntrypointEncSlice
      VAProfileVC1Simple              : VAEntrypointVLD
      VAProfileVC1Main                : VAEntrypointVLD
      VAProfileVC1Advanced            : VAEntrypointVLD
      VAProfileNone                   : VAEntrypointVideoProc
      VAProfileJPEGBaseline           : VAEntrypointVLD
      VAProfileJPEGBaseline           : VAEntrypointEncPicture
      VAProfileVP8Version0_3          : VAEntrypointVLD
      VAProfileVP8Version0_3          : VAEntrypointEncSlice
      VAProfileHEVCMain               : VAEntrypointVLD
      VAProfileHEVCMain               : VAEntrypointEncSlice
      VAProfileHEVCMain10             : VAEntrypointVLD
      VAProfileHEVCMain10             : VAEntrypointEncSlice
      VAProfileVP9Profile0            : VAEntrypointVLD
      VAProfileVP9Profile0            : VAEntrypointEncSlice
      VAProfileVP9Profile2            : VAEntrypointVLD
Now the only thing missing is the Plex Pass to see the results ;)!

I'll let you know - thank you for your support and time!
 

Metteus

New Member
Jan 22, 2016
Do you have any problems when HW acceleration is running?
I can't watch anything because the quality is too poor; everything is "blocky", "pixelated", "blurred".
[Attached screenshot: Capture.PNG]

Any suggestions?
 

Patriot

Moderator
Apr 18, 2011
The price is low because of the Realtek NICs; the build looks nice outside of factors outside your control lol.
 

blood

Member
Apr 20, 2017
Do you have any problems when HW acceleration is running?
I can't watch anything because the quality is too poor; everything is "blocky", "pixelated", "blurred".
Ouch, that does look bad. I definitely don't have that problem; with hardware transcoding, things look effectively indistinguishable from software. For what it's worth, my media collection is all Blu-rays and DVDs - I recall reading about issues with 4K transcoding, but I don't own any of that material.

Does it look that way if you disable HW transcoding? I'm still running an older build of Proxmox VE - 5.3-6. I don't recall having to do anything exotic inside the container for drivers or anything - I just built the container based on Ubuntu 18.04 and took whatever VA-API libs were in the stable repo.

I'm actually rebuilding my Plex server on a different system with an older Nvidia card, because I believe there is support for both encoding and decoding now, as well as a hack to enable unlimited transcoding sessions. I find their documentation... lacking... when it comes to what packages need to be installed to make it work, so I feel fortunate that I got it working on my H2 without too much mucking around.
 

Metteus

New Member
Jan 22, 2016
Problem solved.
The following was the solution I found:

Code:
# move Plex's bundled iHD VA-API driver out of the way so it falls back to the i965 driver
mv /usr/lib/plexmediaserver/lib/dri/iHD_drv_video.so /usr/lib/plexmediaserver/lib/dri/iHD_drv_video.so.bak