
Re: [PATCH v6] xen/pt: reserve PCI slot 2 for Intel igd-passthru



On Mon, 2 Jan 2023 18:10:24 -0500
Chuck Zmudzinski <brchuckz@xxxxxxxxxxxx> wrote:

> On 1/2/23 12:46 PM, Michael S. Tsirkin wrote:
> > On Sun, Jan 01, 2023 at 06:52:03PM -0500, Chuck Zmudzinski wrote:  
> > > Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
> > > as noted in docs/igd-assign.txt in the Qemu source code.
> > > 
> > > Currently, when the xl toolstack is used to configure a Xen HVM guest with
> > > Intel IGD passthrough to the guest with the Qemu upstream device model,
> > > a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will
> > > occupy a different slot. This problem often prevents the guest from
> > > booting.
> > > 
> > > The only available workaround is not good: Configure Xen HVM guests to use
> > > the old and no longer maintained Qemu traditional device model available
> > > from xenbits.xen.org which does reserve slot 2 for the Intel IGD.
> > > 
> > > To implement this feature in the Qemu upstream device model for Xen HVM
> > > guests, introduce the following new functions, types, and macros:
> > > 
> > > * XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
> > > * XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
> > > * typedef XenPTQdevRealize function pointer
> > > * XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
> > > * xen_igd_reserve_slot and xen_igd_clear_slot functions
> > > 
> > > The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
> > > member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
> > > the xl toolstack with the gfx_passthru option enabled, which passes the
> > > igd-passthru=on option to Qemu for the Xen HVM machine type.
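
(For reference, a minimal sketch of what the reservation amounts to;
xen_igd_gfx_pt_enabled() and slot_reserved_mask already exist in the
tree, the rest just follows the naming in the description above and is
not the actual patch:)

    #define XEN_PCI_IGD_SLOT_MASK  (1UL << 2)   /* bit for slot 2 */

    void xen_igd_reserve_slot(PCIBus *pci_bus)
    {
        if (!xen_igd_gfx_pt_enabled()) {
            return;     /* only for igd-passthru=on */
        }
        /* Keep emulated devices from being auto-assigned to slot 2. */
        pci_bus->slot_reserved_mask |= XEN_PCI_IGD_SLOT_MASK;
    }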
> > > 
> > > The new xen_igd_reserve_slot function also needs a do-nothing stub in
> > > hw/xen/xen_pt_stub.c to prevent an FTBFS at the link stage for the case
> > > when Qemu is configured with --enable-xen and
> > > --disable-xen-pci-passthrough.
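
(i.e., presumably the stub reduces to an empty body so the link
succeeds when the passthrough code is compiled out:)

    /* hw/xen/xen_pt_stub.c: built with --disable-xen-pci-passthrough */
    void xen_igd_reserve_slot(PCIBus *pci_bus)
    {
    }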
> > > 
> > > The new xen_igd_clear_slot function overrides qdev->realize of the parent
> > > PCI device class to enable the Intel IGD to occupy slot 2 on the PCI bus
> > > since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
> > > created in hw/i386/pc_piix.c for the case when igd-passthru=on.
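
(Roughly like the following sketch; xen_igd_check and the
pci_qdev_realize class member are names guessed from the description,
not taken from the patch:)

    static void xen_igd_clear_slot(DeviceState *qdev, Error **errp)
    {
        XenPCIPassthroughState *s = XEN_PT_DEVICE(qdev);
        XenPTDeviceClass *xpdc = XEN_PT_DEVICE_GET_CLASS(s);
        PCIBus *pci_bus = pci_get_bus(PCI_DEVICE(qdev));

        if (xen_igd_check(s)) {     /* hypothetical; checks below */
            /* Un-reserve slot 2 so this device may be placed there. */
            pci_bus->slot_reserved_mask &= ~XEN_PCI_IGD_SLOT_MASK;
        }

        /* Chain to the PCI device class realize saved at class_init. */
        xpdc->pci_qdev_realize(qdev, errp);
    }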
> > > 
> > > Move the call to xen_host_pci_device_get, and the associated error
> > > handling, from xen_pt_realize to the new xen_igd_clear_slot function to
> > > initialize the device class and vendor values, which allows the checks for
> > > the Intel IGD to succeed. The verification that the host device is an
> > > Intel IGD to be passed through is done by checking the domain, bus, slot,
> > > and function values, as well as by checking that gfx_passthru is enabled,
> > > the device class is VGA, and the device vendor is Intel.
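
(Something along these lines, assuming the existing XenHostPCIDevice
fields and PCI constants, with xen_igd_check being the hypothetical
helper from the sketch above:)

    static bool xen_igd_check(XenPCIPassthroughState *s)
    {
        XenHostPCIDevice *d = &s->real_device;

        return xen_igd_gfx_pt_enabled() &&
               d->domain == 0 && d->bus == 0 &&
               d->dev == 2 && d->func == 0 &&
               d->vendor_id == PCI_VENDOR_ID_INTEL &&
               (d->class_code >> 8) == PCI_CLASS_DISPLAY_VGA;
    }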
> > > 
> > > Signed-off-by: Chuck Zmudzinski <brchuckz@xxxxxxx>  
> >
> > I'm not sure why is the issue xen specific. Can you explain?
> > Doesn't it affect kvm too?  
> 
> Recall from docs/igd-assign.txt that there are two modes for
> igd passthrough: legacy and upt. The igd needs to be at
> slot 2 only when using legacy mode, which gives a single
> guest exclusive access to the Intel igd.
> 
> It's only xen specific insofar as xen does not have support
> for the upt mode, so xen must use legacy mode, which
> requires the igd to be at slot 2. I am not an expert with

UPT mode never fully materialized for direct assignment; the folks at
Intel championing this scenario have left.

> kvm, but if I understand correctly, with kvm one can use
> the upt mode with the Intel i915 kvmgt kernel module
> and in that case the guest will see a virtual Intel gpu
> that can be at any arbitrary slot when using kvmgt, and
> also, in that case, more than one guest can access the
> igd through the kvmgt kernel module.

This is true; IIRC an Intel vGPU does not need to be in slot 2.

> Again, I am not an expert and do not have as much
> experience with kvm, but if I understand correctly it is
> possible to use the legacy mode with kvm. I think you
> are correct that if one uses kvm in legacy mode without
> the Intel i915 kvmgt kernel module, then it would be
> necessary to reserve slot 2 for the igd on kvm.

It's necessary to configure the assigned IGD at slot 2 to make it
functional, yes, but I don't really understand this notion of
"reserving" slot 2.  If something occupies address 00:02.0 in the
config, it's the user's or management tool's responsibility to move it
to make this configuration functional.  Why does QEMU need to play a
part in reserving this bus address?  IGD devices are not generally
hot-pluggable either, so it doesn't seem we need to reserve an address
in case an IGD device is added dynamically later.
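
With kvm/vfio the device is simply placed at the required address on
the command line, for example (IGD-specific options omitted):

    -device vfio-pci,host=0000:00:02.0,bus=pci.0,addr=0x2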
 
> Your question makes me curious, and I have not been able
> to determine if anyone has tried igd passthrough using
> legacy mode on kvm with recent versions of linux and qemu.

Yes, it works.

> I will try reproducing the problem on kvm in legacy mode with
> current versions of linux and qemu and report my findings.
> With kvm, there might be enough flexibility to specify the
> slot number for every pci device in the guest. Such a

I think this is always the recommendation; libvirt will do this by
default in order to make sure the configuration is reproducible.  This
is what we generally rely on for kvm/vfio IGD assignment to place the
GPU at the correct address.
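
e.g., the hostdev entry libvirt generates pins the guest address
explicitly, something like:

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </hostdev>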

> capability is not available using the xenlight toolstack
> for managing xen guests, so I have been using this patch
> to ensure that the Intel igd is at slot 2 with xen guests
> created by the xenlight toolstack.

Seems like a deficiency in xenlight.  I'm not sure why QEMU should take
on this burden to support tool stacks that lack such basic features.
 
> The patch as is will only fix the problem on xen, so if the
> problem exists on kvm also, I agree that the patch should
> be modified to also fix it on kvm.

AFAICT, it's not a problem on kvm/vfio because we generally make use of
invocations that specify bus addresses for each device by default,
making this a configuration requirement for the user or management tool
stack.  Thanks,

Alex




 

