Monopolyman
This tutorial was originally written for Debian-based systems. However, that is no longer my daily OS, and I consider Arch-based distros to be a much better host system for VFIO, especially for beginners. If you want to view the Debian-based tutorial, see the second post in this thread.
Prerequisites
IOMMU compatible CPU and motherboard
You need a CPU that is capable of IOMMU. On Intel this is called VT-d. To check if your CPU has VT-d, search for it here. On AMD it's called AMD-Vi, and most AMD CPUs and APUs have it. It's also important that your motherboard supports it and that it's enabled in the firmware. You may need to flash a different BIOS version for it to work.
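Once IOMMU is enabled in the firmware (and on the kernel command line, which step 2 covers), one quick sanity check is grepping the kernel log. This is a convenience sketch, not part of the original steps; `check_iommu` is just an illustrative helper name:

```shell
# Sketch: check the kernel log for IOMMU initialisation after enabling
# VT-d (Intel) or AMD-Vi in the firmware. Non-empty output means it's active.
check_iommu() {
    grep -i -e DMAR -e 'AMD-Vi' -e IOMMU
}

# Real use:  dmesg | check_iommu
# Demo on a canned log line:
echo '[    0.012345] DMAR: IOMMU enabled' | check_iommu
```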
Arch-based distro
This tutorial was written based on what I've done on Arch Linux. The steps should be very similar in most other Arch-based distros (e.g. Manjaro). This can certainly be done in other distros, but the steps will be different aside from the XML.
Two GPUs (including integrated)
In order for this to work, you need two GPUs. One is used by the host and one is used by the guest. iGPUs can be used for the host.
Tutorial
1. Install necessary packages
1a. Terminal: "pacman -S gedit"
1b. Download and install pacaur. If you used an Arch-based distro like Manjaro, it likely comes pre-installed with an AUR helper, such as yaourt. If you installed Arch from scratch, you need to build it yourself. A tutorial on how to build AUR packages in general can be found here. Here is a script to automate the process, however I do not maintain this script and it could be out of date.
Note: You don't have to use gedit and pacaur and they can be substituted by any alternative. However, this tutorial only uses pacaur and gedit.
1c. Terminal: "pacaur -S qemu libvirt bridge-utils linux-vfio-lts"
Note: This command can take a while (normally between 30 minutes and two hours depending on your system) since you are compiling the kernel.
Note: linux-vfio-lts can be substituted with linux-vfio for the bleeding edge kernel. However, I strongly recommend that you stick with linux-vfio-lts because bugs can be introduced in newer kernels.
Note: linux-vfio-lts or linux-vfio is not needed if you are not using an Intel iGPU for the host and you are sure your processor has ACS capabilities. If you don't have a HEDT CPU or a Xeon, or you are unsure if your CPU supports ACS, I recommend installing the vfio kernel just in case.
2. Load necessary modules
2a. Terminal: "sudo gedit /etc/default/grub" and add "rd.modules-load=vfio-pci intel_iommu=on" inside the quotes on the GRUB_CMDLINE_LINUX_DEFAULT line. Example:
Code:
GRUB_CMDLINE_LINUX_DEFAULT="rd.modules-load=vfio-pci intel_iommu=on"
Note: Add "iommu=pt iommu=1" instead of "intel_iommu=on" for AMD CPUs.
2b. Terminal: "sudo grub-mkconfig -o /boot/grub/grub.cfg"
2c. Terminal: "sudo gedit /etc/mkinitcpio.conf"
2d. Add the necessary modules in the MODULES line as shown below
Code:
MODULES="vfio vfio_iommu_type1 vfio_pci vfio_virqfd"
2e. Terminal: "sudo mkinitcpio -p linux-vfio-lts"
3. Detach GPU from host
3a. Terminal: "lspci -nn | grep AMD" (grep NVIDIA if you are passing through an NVIDIA GPU). The output should be something like this
Code:
01:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Curacao XT [Radeon R9 270X] [1002:6810]
01:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Cape Verde/Pitcairn HDMI Audio [Radeon HD 7700/7800 Series] [1002:aab0]
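If you'd rather not copy the bracketed [vendor:device] IDs by hand, they can be pulled out of the lspci output with a short pipeline. This is a convenience sketch, not part of the original steps; `extract_vfio_ids` is a made-up helper name:

```shell
# Sketch: turn "lspci -nn" output into the comma-separated ID list that
# the vfio-pci "ids=" option in step 3b expects.
extract_vfio_ids() {
    grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]' | tr -d '[]' | paste -sd, -
}

# Real use (adjust 01:00 to your card's PCI address):
#   lspci -nn -s 01:00 | extract_vfio_ids
# Demo on the example output from step 3a:
printf '%s\n' \
  '01:00.0 VGA compatible controller [0300]: AMD [1002:6810]' \
  '01:00.1 Audio device [0403]: AMD [1002:aab0]' | extract_vfio_ids
# prints 1002:6810,1002:aab0
```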
3b. Terminal: "sudo gedit /etc/modprobe.d/vfio.conf". Add "options vfio-pci ids=" followed by the numbers in the brackets at the end of the lspci output, with each device separated by a comma. Be sure to only include the IDs (one for the GPU, one for the HDMI audio) for the card you want to use in the VM. Example:
Code:
options vfio-pci ids=1002:aab0,1002:6810
3c. Terminal: "sudo mkinitcpio -p linux-vfio-lts"
3d. Reboot and ensure the GPU is being claimed by vfio. Do this by running "dmesg | grep vfio" and if you see something like below, you are good to go.
Code:
[ 0.484086] VFIO - User Level meta-driver version: 0.3
[ 0.497805] vfio_pci: add [1002:67b1[ffff:ffff]] class 0x000000/00000000
[ 0.511123] vfio_pci: add [1002:aac8[ffff:ffff]] class 0x000000/00000000
[ 3985.682858] vfio-pci 0000:01:00.0: enabling device (0000 -> 0003)
[ 3985.683003] vfio_ecap_init: 0000:01:00.0 hiding ecap 0x19@0x270
[ 3985.683010] vfio_ecap_init: 0000:01:00.0 hiding ecap 0x1b@0x2d0
Note: At this point you should have cables plugged into both the host video card and the card that you want to use for gaming.
4. Create virtual disk(s) for VM
Terminal: "qemu-img create [Desired Location] [Desired Size]G". So a 50 GiB image in my Documents folder would look like "qemu-img create ~/Documents/disk.img 50G"
5. Define the virtual machine
5a. Make an xml file and paste the following code into it.
Code:
<?xml version="1.0" encoding="UTF-8"?>
<domain xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0" type="kvm">
<name>gamingvm</name>
<uuid>01bd2ed1-b465-4eba-b6e4-47c6ac8171c6</uuid>
<memory unit="GB">[Memory]</memory>
<currentMemory unit="GB">[Memory]</currentMemory>
<vcpu placement="static">[Cores*Threads]</vcpu>
<cpu mode="host-passthrough">
<topology sockets="1" cores="[Core]" threads="[Threads]" />
</cpu>
<os>
<type arch="x86_64" machine="q35">hvm</type>
<loader>/usr/share/qemu/bios.bin</loader>
<bootmenu enable="yes" />
</os>
<features>
<hyperv>
<relaxed state="on" />
<vapic state="on" />
<spinlocks state="on" retries="8191" />
</hyperv>
<acpi />
</features>
<clock offset="localtime">
<timer name="hypervclock" present="yes" />
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<interface type='bridge'>
<mac address='52:54:00:a0:41:92'/>
<source bridge='br0'/>
<model type='rtl8139'/>
<rom bar='off'/>
</interface>
<controller type="usb" index="0" />
<controller type="usb" index="1" />
<controller type="usb" index="2" />
<controller type="usb" index="3" />
<controller type="sata" index="0" />
<controller type="pci" index="0" model="pcie-root" />
<controller type="pci" index="1" model="dmi-to-pci-bridge" />
<controller type="pci" index="2" model="pci-bridge" />
<memballoon model="none" />
<sound model="ich6" />
<disk type="file" device="cdrom">
<driver name="qemu" type="raw" />
<source file="[OS disc image]" />
<target dev="hdc" bus="sata" />
<readonly />
<address type="drive" controller="0" bus="1" unit="0" />
</disk>
<disk type="file" device="disk">
<source file="[Primary HDD image]" />
<target dev="vda" bus="sata" />
<address type="drive" bus="0" />
</disk>
<disk type="file" device="disk">
<source file="[Secondary HDD image]" />
<target dev="vdb" bus="sata" />
<address type="drive" bus="1" />
</disk>
</devices>
<qemu:commandline>
<qemu:env name="QEMU_PA_SAMPLES" value="4096" />
<qemu:env name="QEMU_AUDIO_DRV" value="pa" />
<qemu:arg value="-device" />
<qemu:arg value="ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1" />
<qemu:arg value="-device" />
<qemu:arg value="vfio-pci,host=[GPU],bus=root.1,addr=00.0,multifunction=on,x-vga=on" />
<qemu:arg value="-device" />
<qemu:arg value="vfio-pci,host=[Audio],bus=root.1,addr=00.1" />
<qemu:arg value="-cpu" />
<qemu:arg value="host" />
</qemu:commandline>
</domain>
5b. Replace [OS disc image] with the path to an .iso for an install disc. I highly recommend using Windows 10 considering its future support for DX12, and it has given me far fewer problems than Windows 8. This tutorial is also written for Win10, but Windows 8 is practically the same.
5c. Replace [Memory] with the amount of memory you want the VM to have. Note the value is in gigabytes. Be sure not to give more than your system RAM amount, and leave some for the host.
5d. Replace [Primary HDD image] with the path to the image you created in step four. You can also add the path for a second HDD in place of [Secondary HDD image]. You can add more HDDs by copying the <disk></disk> elements, incrementing the bus number, and incrementing the letter in vdx (so a third disk would be bus=2, vdc; a fourth would be bus=3, vdd; etc.).
5e. Choose the amount of cores and threads you want your VM to have. If you're on a processor with 8 threads, I recommend 3 cores and 2 threads (2 threads per core). In this instance, I would change [Cores*Threads] to 6, [Cores] to 3, and [Threads] to 2.
5f. Change [GPU] and [Audio] to the slot/function numbers from lspci. For example, I would use "01:00.0" and "01:00.1"
5g. Terminal: "virsh define [Path to XML file]". The terminal should return this if everything is correct.
Code:
Domain gamingvm defined from [Path to XML file]
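If you script your setup, the bracketed placeholders from steps 5b-5g can be substituted with sed before defining. A sketch using example values; `fill_placeholders` and the file names are made up:

```shell
# Sketch: fill the bracketed placeholders in the domain XML with sed.
# All values below are examples; substitute your own sizes and addresses.
fill_placeholders() {
    sed -e 's|\[Memory\]|10|g' \
        -e 's|\[Cores\*Threads\]|6|g' \
        -e 's|\[Cores\]|3|g' \
        -e 's|\[Threads\]|2|g' \
        -e 's|\[GPU\]|01:00.0|g' \
        -e 's|\[Audio\]|01:00.1|g'
}

# Real use:  fill_placeholders < gamingvm-template.xml > gamingvm.xml
# Demo on one template line:
echo '<memory unit="GB">[Memory]</memory>' | fill_placeholders
# prints: <memory unit="GB">10</memory>
```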
6. Setting up input devices
6a. Create a new XML file like the following
Code:
<hostdev mode='subsystem' type='usb' managed='no'>
<source>
<vendor id='0x[Before Colon]'/>
<product id='0x[After Colon]'/>
</source>
</hostdev>
6b. Terminal: "lsusb". Identify which device is your keyboard. Then look at the numbers after "ID" from the output. In the XML file, replace [Before Colon] with the set of numbers/letters before the colon, then replace [After Colon] with the set of numbers/letters after the colon. Be sure to keep the 0x. It sounds a bit confusing, but here is an example:
My keyboard from lsusb
Code:
Bus 002 Device 005: ID 413c:2011 Dell Computer Corp. Multimedia Pro Keyboard
My XML file would look like this
Code:
<hostdev mode='subsystem' type='usb' managed='no'>
<source>
<vendor id='0x413c'/>
<product id='0x2011'/>
</source>
</hostdev>
Note: You could just add the hostdev element in the devices element in the XML file you created in step four and define the vm again. However, putting the USB devices in separate files allows you to attach/detach much easier. This is especially useful if the VM fails to boot and you have no way of interacting with your host OS.
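The lsusb-to-hostdev conversion from 6a/6b can also be scripted. This is a sketch, not part of the original steps; `make_hostdev` is an illustrative helper name:

```shell
# Sketch: generate the hostdev XML snippet from an lsusb ID pair
# such as "413c:2011" (the numbers after "ID" in lsusb output).
make_hostdev() {
    vendor=${1%%:*}    # part before the colon
    product=${1##*:}   # part after the colon
    cat <<EOF
<hostdev mode='subsystem' type='usb' managed='no'>
  <source>
    <vendor id='0x$vendor'/>
    <product id='0x$product'/>
  </source>
</hostdev>
EOF
}

# Real use:  make_hostdev 413c:2011 > keyboard.xml
make_hostdev 413c:2011
```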
6c. Repeat that for your mouse. If your mouse/keyboard has multiple entries, be sure to do it all of them (for instance, a wireless mouse would show up twice. Once for the USB cord, and once for the wireless adapter).
6d. Create a file with the following content
Code:
#!/bin/bash
if [[ $(virsh list | grep gamingvm) != "" ]]; then
    vm=gamingvm
fi
virsh attach-device $vm [Mouse]
virsh attach-device $vm [Keyboard]
6e. Replace [Mouse] and [Keyboard] with paths to the files you created in the previous steps. If you had to make more than two files, just add lines with the path to those at the end
6f. Make this file executable by either right clicking it and going into the properties or using "chmod +x".
6g. Make a copy of the file from 6d, except replace the word “attach” with “detach” so it says “virsh detach-device $vm ...”. This script can be used for detaching your input devices. Make sure it is executable.
7. Preparing virtual machine
7a. Create the following file
Code:
#!/bin/bash
if [[ $EUID -ne 0 ]]; then
    echo "This script must be run as root" 1>&2
    su
    exit 0
fi
set -x
echo 1 > /sys/module/kvm/parameters/ignore_msrs
virsh start gamingvm
if [[ $(virsh list | grep gamingvm) == "" ]]; then
    exit 0
fi
set +x
exit 0
7b. Terminal: "sudo gedit /etc/libvirt/qemu.conf"
7c. Add the following lines (or find them and uncomment them)
Code:
user = "root"
group = "root"
cgroup_device_acl = [
"/dev/null", "/dev/full", "/dev/zero",
"/dev/random", "/dev/urandom",
"/dev/ptmx", "/dev/kvm", "/dev/kqemu",
"/dev/rtc","/dev/hpet", "/dev/vfio/vfio",
"/dev/vfio/[GROUP]"
]
clear_emulator_capabilities = 0
7d. Terminal: "find /sys/kernel/iommu_groups/ -type l". Now look for which directory the GPU you are passing through is in. Replace "[GROUP]" with the number of the group your device is in.
8. Preparing PulseAudio
Note: If you are going to use HDMI audio, you do not need to follow these steps.
8a. Terminal: "cp /etc/pulse/default.pa ~/.pulse/"
8b. Terminal: "gedit ~/.pulse/default.pa"
8c. Add the following to the end of the file: "load-module module-native-protocol-tcp auth-ip-acl=127.0.0.1"
8d. Terminal: "su"
8e. Terminal: "mkdir ~/.pulse/"
8f. Terminal: "echo "default-server = 127.0.0.1" > ~/.pulse/client.conf"
9. Networking
Note: This uses netctl which should already be installed. If for some reason it's not, install it via "pacman -S netctl" and configure it by referring to the Arch Wiki page.
9a. Terminal: “cd /etc/netctl”
9b. Terminal: "sudo gedit [profile name]". [profile name] is the profile for the device that you use to get your internet connection. It is likely called wired. If you are unsure, run the command "ls" to view the available profiles.
9c. Change the "IP=... " line to "IP=no". If the Address, DNS, or Gateway lines are present, delete them. Take note of the "Interface=..." line for future reference.
9d. Terminal: "sudo gedit bridge". This will create a new bridge profile.
9e. Add the following to the file
Code:
Description="Bridge connection"
Interface=br0
Connection=bridge
BindsToInterfaces=([Interface])
IP=static
Address='192.168.1.202/24'
Gateway='192.168.1.1'
## Ignore (R)STP and immediately activate the bridge
SkipForwardingDelay=yes
9f. Change [Interface] to the name of the interface you took note of in step 9c. Also feel free to change the IP address to what you want it to be.
9g. Terminal: "netctl enable bridge"
9h. Reboot to ensure networking on the host is still working.
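After the reboot, one quick way to confirm the bridge came up. A sketch; "br0" is the bridge name from the profile above:

```shell
# Sketch: list bridge interfaces; once the profile is enabled,
# br0 should appear here in state UP.
ip link show type bridge
```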
10. Install The OS
10a. Look at the "Kernel Patches" section of this tutorial below. Check to see if you need to enable those patches. If you do, enable the patches before you proceed.
10b. Make the file from step seven executable and run it. If everything goes smoothly, switching to the input of your GPU should display the installation screen. If not, scroll down to the troubleshooting portion of the thread.
10c. Run the file you made in step six to enable keyboard and mouse.
10d. Install the OS like usual.
10e. You now (hopefully) have a VM capable of playing Windows games!
Kernel Patches
Note: Both of these patches require the linux-vfio-lts or linux-vfio kernel to be installed. This should have been installed in step 1.
ACS override patch
Note: This patch is necessary if your CPU doesn't support ACS capabilities. None of the regular consumer processors (i3, i5, or i7) support ACS. The only processors that support it are Xeons and some HEDT CPUs.
1. Terminal: "sudo gedit /etc/default/grub"
2. On the GRUB_CMDLINE_LINUX_DEFAULT line, add "pcie_acs_override=downstream". Example:
Code:
GRUB_CMDLINE_LINUX_DEFAULT="pcie_acs_override=downstream rd.modules-load=vfio-pci intel_iommu=on"
3. Terminal: "sudo grub-mkconfig -o /boot/grub/grub.cfg"
i915 VGA arbiter patch
Note: This patch is needed if you are using any Intel iGPU for your host system.
1. Terminal: "sudo gedit /etc/default/grub"
2. On the GRUB_CMDLINE_LINUX_DEFAULT line, add "i915.enable_hd_vgaarb=1". Example:
Code:
GRUB_CMDLINE_LINUX_DEFAULT="i915.enable_hd_vgaarb=1 rd.modules-load=vfio-pci intel_iommu=on"
3. Terminal: "sudo grub-mkconfig -o /boot/grub/grub.cfg"
Troubleshooting
This blog post (http://vfio.blogspot.com/2014/08/vfiovga-faq.html) covers some of the common errors you can run into and their solutions. I suggest it be the first place you turn if you run into an issue.
Black screen on guest and/or host has corrupted graphics
If you are using an Intel iGPU for the host, make sure you have the i915 VGA arbiter patch enabled. If you don't want to use the VGA patch, you can set up OVMF.
Sound isn’t working properly
The first thing I recommend doing is playing around with the QEMU_PA_SAMPLES value set in the XML from step 5a. Some may find that a lower value, like 128 or 512, will yield better results (less choppiness).
NVIDIA drivers not working in guest (or code 43)
NVIDIA doesn't like their cards being used in virtual machines. To get around this, you need to add ",kvm=off" to your -cpu argument (i.e. "host,kvm=off"). If you are still getting issues, you may also need to disable the Hyper-V enlightenments by deleting or commenting out the following elements.
Code:
<hyperv>
<relaxed state='off'/>
<vapic state='off'/>
<spinlocks state='off'/>
</hyperv>
...
<clock>
<timer name='hypervclock' present='no'/>
</clock>
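For reference, with the <qemu:commandline> block from step 5a, the -cpu change looks like this (replace the existing pair of arguments):

```xml
<!-- Replace the existing "-cpu" / "host" argument pair in <qemu:commandline> -->
<qemu:arg value="-cpu" />
<qemu:arg value="host,kvm=off" />
```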
Tweaks and Optimizations
GPU performance can’t really get much better (it's already almost bare-metal), but there is a lot of room for improvement in other areas, especially the CPU. There are also some ways that can increase the usability and functionality of your VM.
Setting up Synergy
Synergy is a way to share your mouse and keyboard between two computers over a network. This allows you to use both the guest OS and host OS at the same time and have your input work between the two as if it’s a dual monitor setup.
1. Set a static IP on your guest OS. (http://portforward.com/networking/static-ip-windows-8.htm)
2. Download Synergy on the guest OS from https://synergy-project.org/. Alternatively, you can compile the source yourself for free.
3. Configure it so it will act as a server (share mouse+keyboard) and have your linux hostname in the appropriate place when configuring screens.
4. Download Synergy on the host (either from the website or by using “pacaur -S synergy”)
5. In the file you created in step 6d, add the line “sudo synergyc [Guest IP]:24800”.
6. In the detach file created in step 6g, add the lines “killall synergy” and “killall synergyc”.
Backing memory with hugepages
This can only be done if you have plenty of free RAM on your system. Using hugepages ensures that the VM's memory does not get swapped, which improves performance. Certain processors also support larger IOMMU page tables, which are taken advantage of when using hugepages.
1. Terminal: "sudo gedit /etc/fstab"
2. Add the line "hugetlbfs /dev/hugepages hugetlbfs mode=1770,gid=78 0 0". Make sure this is the only hugetlbfs line.
3. Terminal: "su"
4. Terminal: "umount /dev/hugepages"
5. Terminal: "mount /dev/hugepages" and then run "exit" to exit superuser.
6. Terminal: "grep Hugepagesize /proc/meminfo". Take note of this size.
7. Use a calculator to determine how many hugepages you need to back your VM. Be sure to include video RAM. For example, my hugepage size is 2048 kB (2 MB), which is what yours will likely be. I want the VM to have 10 GB and my GPU has 4 GB of RAM. That's 14 GB total. 14 GB = 14336 MB. 14336 MB / 2 MB = 7168.
8. Terminal: "echo [Number from the step above] > /proc/sys/vm/nr_hugepages"
9. Terminal: "grep HugePages_Total /proc/meminfo" this will display the amount of hugepages. If it is less than you set, you don't have enough free RAM on your computer.
10. In the script you made in step 7a add "echo [Number from the step above] > /proc/sys/vm/nr_hugepages"
Note: You can optionally reclaim the memory for the host after you shut off the VM by running "echo 0 > /proc/sys/vm/nr_hugepages".
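The arithmetic from step 7 can be done in the shell. A sketch using the example numbers above; plug in your own guest RAM and VRAM amounts:

```shell
# Sketch: compute the hugepage count needed to back the VM.
# hugepage_kb comes from "grep Hugepagesize /proc/meminfo" (2048 on most systems).
hugepage_kb=2048
vm_ram_mb=$((10 * 1024))   # 10 GB for the guest
vram_mb=$((4 * 1024))      # 4 GB of video RAM on the GPU
pages=$(( (vm_ram_mb + vram_mb) * 1024 / hugepage_kb ))
echo "$pages"
# prints 7168
```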
Improve CPU performance with Cpuset and CPU pinning
CPU pinning allows you to assign real cores to virtual cores, thus increasing the performance of the virtual cores. However, we also need to use cpusets to ensure that other host processes don’t interfere with those cores.
1. Terminal: "sudo gedit [XML file from step five]"
2. Below the CPU element, add the following:
Code:
<cputune>
<vcpupin vcpu="0" cpuset="2"/>
<vcpupin vcpu="1" cpuset="3"/>
<vcpupin vcpu="2" cpuset="4"/>
<vcpupin vcpu="3" cpuset="5"/>
<vcpupin vcpu="4" cpuset="6"/>
<vcpupin vcpu="5" cpuset="7"/>
</cputune>
This setup is for a VM with six cores, and it pins each virtual core to a real core. You can change it however you want to fit your needs; just know vcpu is the virtual core and cpuset is the host core(s).
3. Define the VM again with "virsh define".
4. Terminal: “pacaur -S cpuset”
5. In the executable file you use to attach input devices, add the lines
Code:
cset set -c 0-1 -s system
cset proc -m -f root -t system
cset proc -k -f root -t system
6. Change the "0-1" to the cores that you want to reserve for your host OS (the ones you didn't use in step two).
7. In the script you use to detach input devices, add the line "cset set -d system"
Improve CPU performance via cpupower governor (frequency)
Note: This is for when your CPU doesn't reach its max frequency (boost) when gaming. This can force the CPU to boost and give you better performance. It may not be beneficial on all systems depending on the default governor.
1. Terminal: "pacaur -S cpupower"
2. Terminal: "cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor". Keep the output handy.
3. In the attach-device file from step six add the following line
Code:
sudo cpupower -c 2-7 frequency-set -g performance
If you used CPU pinning, replace "2-7" with the numbers of the real cores you pinned the virtual cores to.
4. In the detach-device file from step six, add the following line
Code:
sudo cpupower -c 2-7 frequency-set -g [The output you got from step two]
Using xrandr to control monitors
xrandr allows us to control which monitors are on and off on Linux. This allows us to automatically turn off the monitor(s) that the GPU uses to play games so we don't have to manually switch inputs.
1. Terminal: "xrandr". This will tell you the names of your displays.
2. Identify the name of the monitor(s) you wish to turn off, then in the script you use to start the VM add the line "xrandr --output [NAME OF DISPLAY] --off", with [NAME OF DISPLAY] being the monitor you want to turn off (and switch to the Windows GPU).
3 (Optional). Depending on how many monitors you have, you may also need to specify which monitor should become your primary (assuming the monitor you're using for gaming was your primary). To do this, add the line "xrandr --output [NAME OF DISPLAY] --primary".
4. Save the script and you should be ready to go.
Using UEFI (OVMF) instead of legacy VGA
This mainly is useful if you are running into VGA arbitration issues and don't want to use the VGA arbiter patch.
1. Verify your GPU has a UEFI capable firmware by checking on https://www.techpowerup.com/vgabios/
2. Download edk2.git-ovmf-x64 from here.
3. Extract the files to /usr/share. You should now have the following files:
Code:
/usr/share/edk2.git/ovmf-x64/OVMF-pure-efi.fd
/usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd
/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd
4. In your VM's XML file, change the OS element to the following
Code:
<os>
<loader readonly='yes' type='pflash'>/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
<nvram template='/usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd'/>
</os>
Note: You may need to boot with emulated graphics (ie. qxl) in order to install the GPU drivers