Jellyfin GPU Passthrough on Proxmox
Overview
Hardware transcoding for Jellyfin via GPU passthrough into a Proxmox LXC — starting with an Intel UHD 630 using QuickSync/VA-API, eventually moving the whole media stack to a dedicated machine with an NVIDIA MX550 and NVENC.
Why I Built It
Software transcoding works until you have more than one stream or the host is doing something else. An Intel GPU sitting idle inside a CPU that’s already in the homelab is the obvious thing to use — QuickSync hardware transcoding is fast and draws almost nothing extra.
The harder part is getting it working inside a Proxmox LXC. LXCs aren’t VMs — they share the host kernel — so GPU passthrough works differently than PCI passthrough in a full VM. You’re doing device bind mounts and cgroup2 rules, and most of the documentation is scattered across forum posts with conflicting advice about kernel versions.
How It Works
| Component | Details |
|---|---|
| Original approach | Intel UHD 630 → Proxmox LXC, QuickSync via VA-API |
| LXC config | cgroup2 device rules + /dev/dri/ bind mounts |
| Driver verification | lsmod for i915, vainfo with env vars set |
| Current setup | HP Pavilion (nk-celebrimbor, 192.168.1.68), NVIDIA MX550, NVENC/NVDEC |
Getting Intel passthrough working in an LXC requires two things: cgroup2 device allow rules for the DRI devices, and bind mount entries for the specific device nodes. In /etc/pve/lxc/<id>.conf:
```
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
```
The device numbers (226:0, 226:128) correspond to card0 and renderD128, but verify on your host with ls -la /dev/dri/ first — they can differ.
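Rather than hardcoding 226:0 and 226:128, you can derive the rule lines from the device nodes themselves. A sketch, assuming GNU coreutils `stat` on the Proxmox host:

```shell
#!/bin/sh
# Derive the cgroup2 allow rule for a device node from the node itself,
# so the numbers in the LXC config always match this host.
allow_rule() {
  dev="$1"
  maj=$(stat -c '%t' "$dev")   # device major, printed in hex
  min=$(stat -c '%T' "$dev")   # device minor, printed in hex
  printf 'lxc.cgroup2.devices.allow: c %d:%d rwm\n' "0x$maj" "0x$min"
}

# Emit rules for whatever DRI nodes actually exist on this host
for dev in /dev/dri/card* /dev/dri/renderD*; do
  [ -e "$dev" ] && allow_rule "$dev"
done
```

Paste the output into /etc/pve/lxc/<id>.conf alongside the matching lxc.mount.entry lines.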
Verify inside the container with lsmod | grep i915. If you see i915, drm_buddy, ttm, and drm_display_helper, the host's driver is loaded and visible to the container; since LXCs share the host kernel, lsmod inside the container reflects the host's modules.
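A scripted version of that check (a sketch; it reads /proc/modules directly, so it works even in a minimal container image without lsmod):

```shell
#!/bin/sh
# Report whether a host kernel module is visible from inside the container.
check_module() {
  mod="$1"
  if grep -q "^$mod " /proc/modules 2>/dev/null; then
    echo "$mod: loaded"
  else
    echo "$mod: not found"
  fi
}

check_module i915
check_module drm_buddy
```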
VA-API in headless LXCs is the thing that trips people up. vainfo throws error: can't connect to X server!, which makes it look like the passthrough is broken. It usually isn't. Set these environment variables before running it:
```shell
export XDG_RUNTIME_DIR=/tmp/runtime-root
export LIBVA_DRIVER_NAME=iHD
vainfo
```
With those set, vainfo returns actual driver and codec information instead of an X11 error.
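A slightly fuller sketch that also creates the runtime dir (libva expects it to exist with private permissions). The /tmp/runtime-root path is an arbitrary choice, and the --display/--device flags assume a reasonably recent libva-utils build; if yours lacks them, the env vars alone are enough:

```shell
#!/bin/sh
# Prepare the environment vainfo needs in a headless container.
export XDG_RUNTIME_DIR=/tmp/runtime-root
mkdir -p "$XDG_RUNTIME_DIR"
chmod 700 "$XDG_RUNTIME_DIR"          # libva warns if the dir is world-readable
export LIBVA_DRIVER_NAME=iHD          # Intel media driver; older gens use i965

# Point vainfo straight at the DRM render node instead of an X display
command -v vainfo >/dev/null 2>&1 \
  && vainfo --display drm --device /dev/dri/renderD128 \
  || echo "vainfo not installed or no VA-API device"
```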
The media stack eventually moved to nk-celebrimbor — an HP Pavilion laptop repurposed as a dedicated Proxmox cluster node. It has an NVIDIA MX550 that does NVENC/NVDEC and handles more formats than QuickSync. The full Servarr stack — Radarr, Sonarr, Prowlarr, qBittorrent behind Gluetun VPN, and Jellyfin — all runs on nk-celebrimbor now, with 16GB RAM and 12 cores.
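The NVIDIA side of the LXC config isn't shown above; a common pattern looks like the fragment below. This is a hypothetical sketch: device names are standard, but the majors (195 for the nvidia nodes, 509 here for nvidia-uvm) vary per host and driver version, so check with ls -la /dev/nvidia* first. The container also needs user-space NVIDIA libraries matching the host's driver version.

```
# Hypothetical NVIDIA passthrough for the MX550 container.
# Verify majors on the host first: ls -la /dev/nvidia*
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 509:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
```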
Challenges
- vainfo X11 errors in headless LXCs. Looks like a broken passthrough. Almost always isn’t — set the env vars before drawing conclusions.
- intel_gpu_top and debugfs. intel_gpu_top needs debugfs access inside the container and sometimes host-side i915 kernel parameters. A working GPU passthrough can still leave it complaining; the two tools test different things.
- Device numbers vary. 226:0 and 226:128 are common but not guaranteed. Verify on the host before writing the LXC config.
- Scattered documentation. LXC GPU passthrough docs span Proxmox forums, Reddit, and GitHub issues across multiple kernel versions. A lot of it is outdated or hardware-specific and you have to triangulate.
Result
Jellyfin is doing hardware transcoding and the Proxmox host CPU is free for what it actually needs. nk-celebrimbor handles the full media stack now — multiple simultaneous streams transcode fine with the MX550.
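One way to confirm the VA-API path end to end is a short ffmpeg transcode against the render node. A sketch; the input path, the 720p target, and the h264_vaapi codec choice are all assumptions, so substitute your own media:

```shell
#!/bin/sh
# Hypothetical VAAPI smoke test: transcode 5 seconds through the GPU,
# discarding the output. Prints OK only if the hardware path worked.
vaapi_smoke_test() {
  in="$1"
  if command -v ffmpeg >/dev/null 2>&1 && [ -f "$in" ]; then
    ffmpeg -v error -init_hw_device vaapi=va:/dev/dri/renderD128 \
      -hwaccel vaapi -hwaccel_output_format vaapi -i "$in" -t 5 \
      -vf scale_vaapi=w=1280:h=720 -c:v h264_vaapi -f null - \
      && echo "VAAPI transcode OK"
  else
    echo "skipped: ffmpeg or input missing"
  fi
}

vaapi_smoke_test /media/sample.mkv
```

If it prints OK while intel_gpu_top on the host shows the Video engine busy, the hardware path is genuinely in use.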
If you’re running Jellyfin in a Proxmox LXC with an Intel iGPU, the passthrough works. Don’t trust vainfo’s X11 errors, verify with lsmod, and get the cgroup2 device rules right.