virtualbox.org
End user forums for VirtualBox
PCI Passthrough on Windows Host
by abishur » 1. Aug 2016, 20:49
Re: PCI Passthrough on Windows Host
by socratis » 2. Aug 2016, 10:35
Re: PCI Passthrough on Windows Host
by mpack » 2. Aug 2016, 10:41
Re: PCI Passthrough on Windows Host
by socratis » 2. Aug 2016, 10:51
You sir are correct, I did mean the Linux host only part.
Funny thing is, I was teaching my daughter never to answer an «A or B» question with yes/no. Teacher failure.
Re: PCI Passthrough on Windows Host
by mpack » 2. Aug 2016, 10:53
Re: PCI Passthrough on Windows Host
by abishur » 3. Aug 2016, 22:33
Re: PCI Passthrough on Windows Host
by mpack » 4. Aug 2016, 17:38
I think you can be pretty sure that if they went to the trouble of implementing a feature which the user manual currently says is not available, then the manual would be updated to pass on the good news.
Frankly I wouldn’t hold my breath waiting for this feature to arrive on Windows hosts. Far more likely IMHO is that it’ll be quietly dropped from Linux hosts. That’s a personal opinion btw: I’m not on the devteam.
Re: PCI Passthrough on Windows Host
by admin@dwaves.de » 15. Mar 2019, 12:52
IMHO please do KEEP this feature, it would be a shame if it were dropped.
Without it you could not pass through network cards:
Sometimes you want server VMs to operate on physically separate networks (internal, DMZ) without the host itself being exposed to all of those networks.
(Yes, of course, if the VM gets hacked, a VM escape could happen. Not cool.)
And you could not pass through GPUs:
«VMware’s PCI Passthrough solution is by far the best I have used. The other virtualization platforms (eg. Microsoft Hyper-V, Xen, Citrix XenServer, Oracle VM, KVM, etc) provide little, or no, PCI Passthrough support. I have found that ESXi 5.0 had the best PCI Passthrough support, so stick with that version. (ESXi 5.1 was not quite as stable in this area)»
Oracle® VM VirtualBox
Administrator’s Guide for Release 6.0
2.5. PCI Passthrough
When running on Linux hosts with a kernel version later than 2.6.31, experimental host PCI device passthrough is available.
The PCI passthrough module is shipped as an Oracle VM VirtualBox extension package, which must be installed separately. See Installing Oracle VM VirtualBox and Extension Packs.
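For example, installing a downloaded extension pack from the command line looks roughly like this (the file name is illustrative and depends on the release you downloaded):

$ VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-6.0.0.vbox-extpack
# Verify that the pack is registered
$ VBoxManage list extpacks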
This feature enables a guest to directly use physical PCI devices on the host, even if the host does not have drivers for the particular device. Both regular PCI and some PCI Express cards are supported. AGP and certain PCI Express cards are currently not supported if they rely on Graphics Address Remapping Table (GART) unit programming for texture management, as this involves rather non-trivial page remapping operations that interfere with the IOMMU. This limitation may be lifted in future releases.
To be fully functional, PCI passthrough support in Oracle VM VirtualBox depends upon an IOMMU hardware unit. If the device uses bus mastering, for example if it performs DMA to OS memory on its own, then an IOMMU is required. Otherwise such DMA transactions may write to the wrong physical memory address, because the device's DMA engine is programmed using a device-specific protocol to perform memory transactions. The IOMMU functions as a translation unit, mapping the device's physical memory access requests to host physical addresses using its knowledge of the guest-physical to host-physical address translation rules.
Intel’s solution for the IOMMU is called Intel Virtualization Technology for Directed I/O (VT-d), and AMD’s solution is called AMD-Vi. Check your motherboard datasheet for the appropriate technology. Even if your hardware does not have an IOMMU, certain PCI cards may work, such as serial PCI adapters, but the guest will show a warning on boot and VM execution will terminate if the guest driver attempts to enable bus mastering on the card.
It is very common for the BIOS or the host OS to disable the IOMMU by default. So before any attempt to use it, please make sure that the following apply:
Your motherboard has an IOMMU unit.
Your CPU supports the IOMMU.
The IOMMU is enabled in the BIOS.
The VM must run with VT-x/AMD-V and nested paging enabled.
Your Linux kernel was compiled with IOMMU support, including DMA remapping. See the CONFIG_DMAR kernel compilation option. The PCI stub driver (CONFIG_PCI_STUB) is required as well.
Your Linux kernel recognizes and uses the IOMMU unit. The intel_iommu=on boot option may be needed. Search for DMAR and PCI-DMA in the kernel boot log. (A few quick checks are sketched after this list.)
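The following is a minimal sketch of how these points can be verified on a typical Linux host; file paths and messages vary by distribution and kernel, so treat it as illustrative rather than authoritative:

# Check CPU virtualization support (VT-x/AMD-V): look for the vmx or svm flag
$ grep -E -c 'vmx|svm' /proc/cpuinfo

# Check that the kernel was built with the required options
# (the config file location differs between distributions)
$ grep -E 'CONFIG_DMAR|CONFIG_PCI_STUB' /boot/config-$(uname -r)

# If the IOMMU is not enabled automatically, add intel_iommu=on (or amd_iommu=on on AMD hosts)
# to the kernel command line, e.g. GRUB_CMDLINE_LINUX in /etc/default/grub, then update-grub and reboot.

# After rebooting, look for DMAR and PCI-DMA messages in the kernel boot log
$ dmesg | grep -i -e DMAR -e PCI-DMA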
Once you have made sure that the host kernel supports the IOMMU, the next step is to select the PCI card and attach it to the guest. To see the list of available PCI devices, use the lspci command. The output will look similar to the following:
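For illustration only (device names and addresses will differ on your system), an lspci listing looks roughly like this:

$ lspci
00:00.0 Host bridge: Intel Corporation Core Processor DMI (rev 11)
00:1b.0 Audio device: Intel Corporation 5 Series/3400 Series Chipset High Definition Audio (rev 06)
02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168 PCI Express Gigabit Ethernet controller (rev 06)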
The first column is the PCI address, in the format bus:device.function. This address can be used to identify the device for further operations. For example, to attach a PCI network controller on the system listed above to the second PCI bus in the guest, as device 5, function 0, use a command like the following:
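As a sketch, using the illustrative 02:00.0 network controller address from the listing above; substitute your own VM name and device address:

$ VBoxManage modifyvm "VM name" --pciattach 02:00.0@01:05.0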
To detach the same device, use:
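Again using the illustrative host address from above:

$ VBoxManage modifyvm "VM name" --pcidetach 02:00.0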
Please note that both the host and the guest can freely assign a different PCI address to the attached card at runtime, so these addresses only describe the card's address at the moment of attachment on the host, and during BIOS PCI initialization on the guest.
If the virtual machine has a PCI device attached, certain limitations apply:
Only PCI cards with non-shared interrupts, such as those using MSI on the host, are supported at the moment (a quick way to check this is sketched after this list).
No guest state can be reliably saved or restored. The internal state of the PCI card cannot be retrieved.
Teleportation, also called live migration, does not work. The internal state of the PCI card cannot be retrieved.
No lazy physical memory allocation. The host will preallocate the whole RAM required for the VM on startup, as we cannot catch physical hardware accesses to the physical memory.
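One illustrative way to see whether a card uses MSI on the host is to inspect its capabilities with lspci; the address, capability offset, and flags below are example values only:

$ sudo lspci -vv -s 02:00.0 | grep -i msi
        Capabilities: [50] MSI: Enable+ Count=1/1 Maskable- 64bit+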
Just a programmer's notes.
May 21, 2013
How I got PCI passthrough working on an Intel platform.
Initial setup
- Motherboard: H55M-E33 (MS-7636)
- CPU: Intel® Core™ i5 650
- OS: Linux 3.2.0-43-generic x86_64, Ubuntu 12.04.2 LTS
- VirtualBox 4.2.12r84980
Process
Intel-IOMMU: enabled
DMAR: DRHD base: 0x000000fed90000 flags: 0x0
IOMMU 0: reg_base_addr fed90000 ver 1:0 cap c9008020e30272 ecap 1000
vboxpci: IOMMU found
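These are kernel log excerpts; on a setup like this they can typically be obtained with something along the following lines (illustrative; the exact messages depend on the kernel and VirtualBox build):

$ dmesg | grep -i -e DMAR -e IOMMU     # IOMMU detection at boot
$ sudo modprobe vboxpci                # VirtualBox PCI passthrough host module
$ dmesg | grep vboxpci                 # should report that the IOMMU was found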
$ sudo lspci -vv | grep -i flreset+
ExtTag- RBE- FLReset+
00:1b.0 Audio device: Intel Corporation 5 Series/3400 Series Chipset High Definition Audio (rev 06)
	Subsystem: Micro-Star International Co., Ltd. Device 7636
	Kernel driver in use: snd-hda-intel
	Kernel modules: snd-hda-intel
$ vboxmanage modifyvm u1204 --pciattach 00:1b.0@01:05.0
$ lspci -s 01:05
01:05.0 Audio device: Intel Corporation 5 Series/3400 Series Chipset High Definition Audio (rev 06)
The sound card was detected by the guest OS and, most importantly, it actually worked: the guest system's sound is played through the host's physical device. During my experiments many devices were passed through successfully (without FLReset+) and were seen by the guest kernel, but refused to work. There were cases where the host system froze, so be careful (at step 9) if you try to repeat this. Unfortunately I have not yet managed to pass through a GPU or USB, which is my ultimate goal.
Updates to the note:
$lspci | grep USB
00:1a.0 USB controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 06)
00:1d.0 USB controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 06)