
GPU Passthrough in Proxmox: Gaming, AI, and Transcoding in VMs

Virtualization · 2026-02-09 · 10 min read · gpu-passthrough · proxmox · vfio · iommu · gaming · ai · transcoding

GPU passthrough lets you give a virtual machine direct access to a physical GPU, achieving near-native performance for graphics-intensive tasks. In a homelab context, this means you can run a Windows gaming VM with full GPU performance, dedicate a GPU to AI/ML workloads in a Linux VM, or give your Jellyfin instance hardware transcoding capabilities — all while other VMs and containers share the host CPU and RAM.

It sounds straightforward, but GPU passthrough is one of the trickier homelab configurations. IOMMU groups, VFIO drivers, vendor quirks, and BIOS settings all need to align. This guide walks through the entire process on Proxmox, covering the common pitfalls that trip people up.


Prerequisites

Before starting, verify your hardware supports passthrough:

CPU Requirements

You need a CPU with hardware virtualization (Intel VT-x or AMD-V) plus I/O virtualization (Intel VT-d or AMD-Vi/IOMMU). Most CPUs from the last decade support both, but check your specific model — budget CPUs (Celeron, some Pentiums) sometimes lack VT-d.
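
A quick way to check from any running Linux system (the CPU flag is vmx on Intel, svm on AMD; VT-d/AMD-Vi itself is verified later in Step 3):

# A count greater than 0 means hardware virtualization is exposed by the CPU
grep -cE 'vmx|svm' /proc/cpuinfo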

Motherboard Requirements

The board's firmware must expose an IOMMU option (VT-d or AMD-Vi), and the chipset and slot layout determine IOMMU grouping — ideally the slot holding the GPU ends up in its own group (see Step 4).

GPU Requirements

Almost any discrete GPU can be passed through, but vendor quirks differ:

| GPU Vendor | Passthrough Support | Notes |
|---|---|---|
| NVIDIA (consumer) | Good with workarounds | Code 43 error fixed by hiding VM status |
| NVIDIA (Quadro/Tesla) | Excellent | No workarounds needed |
| AMD (consumer) | Good | Reset bug on some models (RX 5000/6000) |
| AMD (Radeon Pro) | Excellent | Reliable |
| Intel Arc | Emerging | SR-IOV support for sharing |

What You Need

  • A Proxmox VE host with the hardware above
  • A GPU you can dedicate entirely to the VM — once bound to VFIO, the host can no longer use it
  • SSH access to the host (or a second GPU/iGPU for the host console)
  • An ISO for the guest OS (Windows 11, or a Linux distribution for AI/transcoding)

Step 1: Enable IOMMU in BIOS

Enter your BIOS/UEFI settings and enable:

Intel systems: VT-d (Intel Virtualization Technology for Directed I/O), plus VT-x if it isn't already enabled

AMD systems: IOMMU (sometimes labeled AMD-Vi), plus SVM Mode

The exact location varies by motherboard manufacturer. Look under "Advanced," "CPU Configuration," or "Northbridge Configuration."

Step 2: Enable IOMMU in the Bootloader

SSH into your Proxmox host and edit the bootloader configuration.

For GRUB (most installations):

nano /etc/default/grub

Find the GRUB_CMDLINE_LINUX_DEFAULT line and add IOMMU parameters:

# Intel CPU
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# AMD CPU
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"

The iommu=pt (passthrough) flag improves host performance by skipping DMA remapping for devices the host keeps for itself; the IOMMU is only applied to devices that are actually passed through.

Update GRUB:

update-grub

For systemd-boot (ZFS root):

nano /etc/kernel/cmdline

Add to the existing line:

# Intel
root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on iommu=pt

# AMD
root=ZFS=rpool/ROOT/pve-1 boot=zfs amd_iommu=on iommu=pt

Update the boot config:

proxmox-boot-tool refresh

Reboot:

reboot

Step 3: Verify IOMMU Is Active

After reboot, verify IOMMU is working:

dmesg | grep -e DMAR -e IOMMU

You should see lines like:

[    0.000000] DMAR: IOMMU enabled
[    0.123456] DMAR-IR: Queued invalidation will be enabled to support x2apic and target CPUs.

For AMD:

[    0.000000] AMD-Vi: IOMMU performance counters supported
[    0.000000] AMD-Vi: Found IOMMU at 0000:00:00.2 cap 0x40

Step 4: Check IOMMU Groups

This is where things get interesting. Devices are organized into IOMMU groups, and you must pass through an entire group — you can't pass through individual devices if they share a group with other devices you need.

#!/bin/bash
# List IOMMU groups and their devices
shopt -s nullglob
for g in $(find /sys/kernel/iommu_groups/* -maxdepth 0 -type d | sort -V); do
    echo "IOMMU Group ${g##*/}:"
    for d in $g/devices/*; do
        echo -e "\t$(lspci -nns ${d##*/})"
    done
done

Ideal output for your GPU looks like this:

IOMMU Group 14:
    01:00.0 VGA compatible controller [0300]: NVIDIA Corporation ... [10de:2684]
    01:00.1 Audio device [0403]: NVIDIA Corporation ... [10de:22ba]

The GPU and its audio device are alone in their IOMMU group — perfect. If you see other devices mixed in (SATA controllers, USB controllers, etc.), you need the ACS override patch or a different PCIe slot.
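
To check a single device's group without the full listing (PCI address illustrative):

# Prints a path ending in the device's IOMMU group number, e.g. .../iommu_groups/14
readlink /sys/bus/pci/devices/0000:01:00.0/iommu_group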

ACS Override Patch (If Groups Are Bad)

If your IOMMU groups aren't isolated, the Proxmox kernel ships with the ACS override patch built in; enable it by adding to your kernel parameters:

# In GRUB_CMDLINE_LINUX_DEFAULT or /etc/kernel/cmdline
pcie_acs_override=downstream,multifunction

This forces devices into separate groups. It weakens the DMA isolation the IOMMU normally guarantees between devices that were artificially split apart, but for a homelab this is usually an acceptable trade-off.

Step 5: Configure VFIO

VFIO is the kernel framework that manages device passthrough. You need to tell it to claim your GPU before the host's graphics driver does.

Identify Your GPU

lspci -nn | grep -i nvidia
# Or for AMD:
lspci -nn | grep -iE "vga|display" | grep -iE "amd|ati"

Output:

01:00.0 VGA compatible controller [0300]: NVIDIA Corporation AD102 [GeForce RTX 4090] [10de:2684] (rev a1)
01:00.1 Audio device [0403]: NVIDIA Corporation AD102 High Definition Audio Controller [10de:22ba] (rev a1)

Note the vendor:device IDs: 10de:2684 and 10de:22ba.

Load VFIO Modules

# /etc/modules
vfio
vfio_iommu_type1
vfio_pci

Bind GPU to VFIO

Create a VFIO configuration file with your GPU's vendor:device IDs:

echo "options vfio-pci ids=10de:2684,10de:22ba disable_vga=1" > /etc/modprobe.d/vfio.conf

Blacklist Host GPU Drivers

Prevent the host from loading GPU drivers for the passthrough card:

# /etc/modprobe.d/blacklist.conf
blacklist nvidia
blacklist nouveau
blacklist nvidiafb
# For AMD GPUs:
# blacklist amdgpu
# blacklist radeon
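
Blacklisting is the blunt approach. An alternative (or addition) is a softdep rule, which forces vfio-pci to load before the GPU driver so the ID binding above wins the race — a minimal sketch, file name arbitrary:

# /etc/modprobe.d/vfio-softdep.conf
softdep nouveau pre: vfio-pci
softdep nvidia pre: vfio-pci
# For AMD GPUs:
# softdep amdgpu pre: vfio-pci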

Update Initramfs

update-initramfs -u -k all

Reboot and verify VFIO has claimed the GPU:

lspci -nnk -s 01:00

You should see Kernel driver in use: vfio-pci for both the GPU and its audio device.

Step 6: Create the VM

Basic VM Settings

In the Proxmox web UI:

  1. Create VM > General: Give it a name, set the VM ID
  2. OS: Upload your Windows 11 ISO (or Linux ISO for AI/transcoding)
  3. System:
    • Machine: q35
    • BIOS: OVMF (UEFI)
    • Add EFI Disk
    • Add TPM State (Windows 11 requires it)
  4. Disks: VirtIO SCSI, at least 64GB for Windows
  5. CPU: Host type (not default kvm64), allocate cores as needed
  6. Memory: At least 8GB for gaming, 16GB+ for AI workloads

Add the GPU

In the VM's Hardware tab:

  1. Click Add > PCI Device
  2. Select your GPU from the dropdown
  3. Check the following options:
    • All Functions: Yes (passes through both GPU and audio)
    • ROM-Bar: Yes
    • Primary GPU: Yes (if this is the VM's only display output)
    • PCI-Express: Yes
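
The same configuration from the CLI, if you prefer — the VM ID and PCI address are illustrative, and omitting the function number (.0/.1) passes all functions:

qm set <vmid> --hostpci0 0000:01:00,pcie=1,x-vga=1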

NVIDIA-Specific Configuration

Older NVIDIA consumer drivers (before the 465 series) refuse to initialize when they detect a hypervisor — the infamous "Code 43" error. Recent drivers have dropped this restriction, but hiding the VM is still a common precaution and is required for older driver/GPU combinations. Add these CPU flags to hide the VM:

# Edit the VM config directly
nano /etc/pve/qemu-server/<vmid>.conf

Add or modify the cpu line:

cpu: host,hidden=1,flags=+pcid
args: -cpu 'host,+kvm_pv_unhalt,+kvm_pv_eoi,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed,kvm=off,hv_vendor_id=proxmox'

The key parts are hidden=1 (in the cpu line) and kvm=off plus the custom hv_vendor_id (in the args line) — they prevent NVIDIA's driver from detecting the hypervisor. In many cases cpu: host,hidden=1 alone is enough; add the args override only if Code 43 persists.

AMD GPU Reset Bug Workaround

Some AMD GPUs (particularly RX 5000 and 6000 series) have a reset bug where the GPU doesn't properly reset when the VM shuts down, requiring a host reboot. The vendor-reset kernel module fixes this:

apt install dkms git pve-headers-$(uname -r)
git clone https://github.com/gnif/vendor-reset.git
cd vendor-reset
dkms install .

echo "vendor-reset" >> /etc/modules
echo "options vendor-reset" > /etc/modprobe.d/vendor-reset.conf
update-initramfs -u
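
On newer kernels (5.15+), you may also need to select vendor-reset's device-specific reset method after each boot. The sysfs write below is illustrative — substitute your AMD GPU's PCI address:

# Check the available/active reset method, then switch it
cat /sys/bus/pci/devices/0000:03:00.0/reset_method
echo device_specific > /sys/bus/pci/devices/0000:03:00.0/reset_method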

Use Case 1: Windows Gaming VM

A GPU passthrough gaming VM gives you near-native gaming performance while keeping your homelab host running Linux.

Optimizations for Gaming

CPU pinning (prevents the VM from being moved between cores):

# In /etc/pve/qemu-server/<vmid>.conf
cores: 8
cpu: host,hidden=1
numa: 1
# Pin the VM's vCPUs to specific host cores (check your CPU topology with lscpu)
affinity: 0-7

Hugepages (reduces memory access latency):

# Reserve hugepages for the VM
echo 8192 > /proc/sys/vm/nr_hugepages  # 8192 x 2MB = 16GB

# In VM config
memory: 16384
hugepages: 2
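
The nr_hugepages write above does not survive a reboot. One way to make it persistent is a sysctl drop-in (file name illustrative, value sized for the 16GB VM above):

echo "vm.nr_hugepages=8192" > /etc/sysctl.d/80-hugepages.conf
sysctl --system   # apply immediately without rebooting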

USB passthrough for keyboard/mouse:

# Find your USB devices
lsusb
# Bus 001 Device 003: ID 046d:c52b Logitech USB Receiver

# Add to VM config or use Proxmox UI: Add > USB Device
usb0: host=046d:c52b

VirtIO drivers for disk and network performance:

Download the VirtIO ISO and mount it as a second CD-ROM during Windows installation. Install VirtIO drivers for storage, network, and balloon.

Performance Expectations

| Benchmark | Bare Metal | GPU Passthrough | Overhead |
|---|---|---|---|
| 3DMark Time Spy | 100% | 95-98% | 2-5% |
| Game FPS (average) | 100% | 93-97% | 3-7% |
| GPU compute | 100% | 98-99% | 1-2% |
| Storage I/O | 100% | 85-95% | 5-15% |

The small overhead comes from the virtualization layer. With proper CPU pinning and hugepages, you'll rarely notice it.

Use Case 2: AI/ML Workloads

GPU passthrough is ideal for running AI inference or training in a dedicated VM.

Linux VM for AI

# After creating a Linux VM with GPU passthrough:

# Install NVIDIA drivers
sudo apt install nvidia-driver-550

# Verify GPU is detected
nvidia-smi
# Should show your GPU model, driver version, CUDA version

# Install CUDA toolkit
sudo apt install nvidia-cuda-toolkit

# Install PyTorch with CUDA support
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
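
A quick sanity check that PyTorch actually sees the passed-through GPU from inside the VM:

python3 -c "import torch; print(torch.cuda.is_available())"
# Should print: True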

Docker with GPU Access Inside the VM

# Install NVIDIA Container Toolkit
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

sudo apt update
sudo apt install nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
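
A quick smoke test of the Docker-to-GPU wiring — pick an nvidia/cuda image tag that matches your installed driver/CUDA version:

docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
# Should print the same GPU table as running nvidia-smi on the VM itself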

Then run GPU-accelerated containers:

services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    restart: unless-stopped
    ports:
      - "11434:11434"
    volumes:
      - ./ollama-data:/root/.ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
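
Bring the stack up and pull a model to confirm GPU inference is working (model name illustrative):

docker compose up -d
docker exec -it ollama ollama run llama3.2
# While it responds, nvidia-smi on the VM should show the ollama process holding VRAM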

Use Case 3: Media Transcoding

For dedicated transcoding (Plex/Jellyfin), you often don't need full GPU passthrough. Intel iGPUs can be shared using GVT-g or SR-IOV, and for LXC containers you can simply bind-mount the render device into the container.

Passing Through Intel iGPU Render Device

Instead of full passthrough, share the Intel iGPU with the host while giving VMs/LXC containers access:

# On the Proxmox host, verify the render device exists
ls -la /dev/dri/
# renderD128 is the GPU render device

# For LXC containers, add to the container config:
# /etc/pve/lxc/<ctid>.conf
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
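
Inside the container, the account running Jellyfin also needs permission on the device node. A minimal check, assuming a native Jellyfin install with a jellyfin service user (names illustrative; in unprivileged containers the render group's GID must also line up with the host's):

# Inside the LXC container
ls -la /dev/dri/                     # renderD128 should be visible
usermod -aG render,video jellyfin    # give the service user access to the device
vainfo                               # lists supported VA-API profiles (vainfo package)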

NVIDIA GPU for Transcoding

If you're passing through an NVIDIA GPU specifically for Jellyfin/Plex transcoding:

# In your Jellyfin/Plex docker-compose inside the VM
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    devices:
      - /dev/dri:/dev/dri         # Intel iGPU
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia       # NVIDIA GPU
              count: 1
              capabilities: [gpu, video]

Troubleshooting

Common Issues and Fixes

| Problem | Cause | Fix |
|---|---|---|
| VM won't start, "IOMMU not found" | IOMMU not enabled in BIOS/bootloader | Check BIOS settings, verify kernel params |
| NVIDIA Code 43 | Driver detects VM | Add hidden=1 and Hyper-V flags |
| Black screen after boot | GPU not set as primary | Enable "Primary GPU" in PCI device settings |
| VM crashes on shutdown | AMD reset bug | Install vendor-reset module |
| Poor performance | CPU not pinned, no hugepages | Configure CPU pinning and hugepages |
| Audio crackling | High DPC latency | Use PulseAudio/PipeWire passthrough or USB audio |
| USB devices not working | Wrong USB controller | Pass through entire USB controller, not individual devices |

Checking VFIO Binding

# Verify VFIO is loaded
lsmod | grep vfio

# Verify GPU is bound to VFIO
lspci -nnk -s 01:00
# Should show: Kernel driver in use: vfio-pci

# If still bound to nvidia/nouveau:
dmesg | grep -i vfio
# Look for errors about why VFIO couldn't claim the device

IOMMU Group Issues

If important devices share an IOMMU group with your GPU:

# Try a different PCIe slot (different slots often map to different IOMMU groups)

# Or use ACS override (less secure but functional)
# Add to kernel params: pcie_acs_override=downstream,multifunction

# Or pass through all devices in the group (if they're all expendable for the host)

VM Doesn't See the GPU

# Inside the VM, check PCI devices
lspci | grep -i vga    # Linux
# Device Manager > Display adapters    # Windows

# If the GPU isn't listed, check:
# 1. PCI device is added in Proxmox VM hardware
# 2. "All Functions" is checked
# 3. Machine type is q35, BIOS is OVMF

Alternative: GPU Sharing (SR-IOV and vGPU)

Full passthrough dedicates the entire GPU to one VM. For sharing a GPU among multiple VMs:

Intel SR-IOV (Arc and newer)

Intel's 12th-gen and newer iGPUs, and some Arc-based GPUs, support SR-IOV — though on many kernels this still requires out-of-tree drivers such as i915-sriov-dkms. SR-IOV splits the GPU into virtual functions (VFs) that can be assigned to different VMs:

# Enable SR-IOV (Intel Arc example)
echo 4 > /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs

# Each VF appears as a separate PCI device
lspci | grep -i vga
# Shows original GPU + 4 virtual functions
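
The sriov_numvfs setting resets on reboot. One way to persist it is the sysfsutils package (PCI address illustrative):

apt install sysfsutils
echo "bus/pci/devices/0000:03:00.0/sriov_numvfs = 4" >> /etc/sysfs.conf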

NVIDIA vGPU (Enterprise Only)

NVIDIA vGPU requires enterprise licensing and data-center or professional cards (Tesla, many Quadro/RTX A-series). The vgpu_unlock community project can enable vGPU on some consumer cards, though this is entirely unsupported.

Resource Planning

| Use Case | GPU Recommendation | VRAM | Power Draw |
|---|---|---|---|
| Windows gaming (1080p) | GTX 1070+ / RX 580+ | 6GB+ | 150-250W |
| Windows gaming (4K) | RTX 3080+ / RX 6800 XT+ | 10GB+ | 250-350W |
| AI inference (small models) | RTX 3060 12GB | 12GB | 170W |
| AI inference (large models) | RTX 3090 / 4090 | 24GB | 350W |
| AI training | RTX 4090 / A6000 | 24-48GB | 350-450W |
| Media transcoding | Intel N100 (iGPU) | Shared | 10W |
| Media transcoding (heavy) | NVIDIA T400/T600 | 4-8GB | 30-70W |

Final Thoughts

GPU passthrough transforms your Proxmox host from a server into a versatile workstation platform. A single machine can run your homelab services, a Windows gaming VM, and an AI workload — each with dedicated GPU access and near-native performance.

The initial setup has a learning curve, especially around IOMMU groups and VFIO configuration. But once it's working, it's remarkably stable. Many homelabbers run GPU passthrough VMs for months without issues.

Start with a Linux VM and a known-compatible GPU to verify your hardware setup works. Once you've confirmed passthrough functions correctly, move on to Windows VMs (which need the NVIDIA workarounds) or more complex multi-GPU configurations.

The ability to consolidate gaming, AI, and server workloads onto a single machine is one of the most compelling reasons to run Proxmox in a homelab.