
Re: [Xen-devel] Question about qemu interrupt delivery.



On 29/11/06 08:21, "Xu, Anthony" <anthony.xu@xxxxxxxxx> wrote:

> How about the second one?
> #define hvm_pci_intx_gsi(dev, intx)  \
>     (((((dev)<<2) + ((dev)>>3) + (intx)) & 31) + 16)
> 
> Why is this logic implemented inside the hypervisor?

Since the hypervisor exposes a device-level interface for interrupt
assertion, the mapping from device INTx line to GSI naturally ends up in the
hypervisor. This makes sense IMO -- GSIs can be shared, and to correctly
implement wire-OR semantics the guest needs to disambiguate which devices
are (or are not) asserting a particular GSI at any time. Obviously all
interrupt-capable PCI devices are currently implemented in qemu-dm, so this
could be worked around and a GSI-level interface exposed by Xen. But I think
putting it in Xen is cleaner and more flexible long term. There is always
the option of allowing the mapping to be dynamically specified to Xen in
future (e.g., hvmloader could make a choice, install the appropriate ACPI
DSDT, and use a new hypercall to dynamically modify PCI->link and PCI->GSI
information). It's not clear that that level of flexibility will be
warranted, though -- 32 non-legacy GSIs should be plenty to avoid sharing
even with a static barber-pole INTx->GSI mapping.

 -- Keir


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
