
RE: [Xen-devel] [PATCH 1/5] Add MSI support to XEN



>From: Keir Fraser [mailto:keir.fraser@xxxxxxxxxxxxx] 
>Sent: 28 March 2008 16:52
>
>I'm not sure why this would be a prerequisite for the rest of the MSI
>support. Still I have a feeling that I may have asked for this a long
>time ago on a previous iteration of this patchset... :-) It looks
>pretty sensible, but PHYSDEVOP_map_irq shouldn't take an IRQ_TYPE_IRQ
>-- 'IRQ' is a meaningless thing architecturally-speaking, and I think
>instead we should allow to specify a 'GSI' or an 'ISA IRQ'.

I think the reverse. :-) Here 'IRQ' is just an allocatable namespace,
not bound to any hard-wired platform logic. Each MSI only needs one
IRQ placeholder to hook into the evtchn core, since event channels are
already built on top of the IRQ namespace. A 'GSI' or 'ISA IRQ', by
contrast, denotes a platform attribute, which doesn't fit the purpose
here, even though the GSI space is also stretched for this in some
versions of the Linux kernel.
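
To make this concrete, roughly the kind of physdev op I have in mind
(just a sketch for this mail; the structure and constant names below
are illustrative, not the actual interface in the patch):

#include <stdint.h>

/*
 * Illustrative only: names are made up for this mail, not taken from
 * the patch.  The point is that 'index' is either a platform GSI or an
 * entry allocated from the generic IRQ namespace for an MSI, and the
 * resulting pirq is what the evtchn core binds to in either case.
 */
#define MAP_IRQ_TYPE_GSI   0   /* hard-wired platform interrupt line  */
#define MAP_IRQ_TYPE_MSI   1   /* placeholder from the IRQ namespace  */

struct physdev_map_irq_sketch {
    /* IN */
    uint16_t domid;        /* domain the mapping is created for       */
    uint8_t  type;         /* MAP_IRQ_TYPE_GSI or MAP_IRQ_TYPE_MSI    */
    uint32_t index;        /* GSI number, or -1 to allocate an IRQ    */
    uint8_t  bus;          /* MSI only: device owning the vector      */
    uint8_t  devfn;
    /* IN/OUT */
    uint32_t pirq;         /* pirq requested / pirq actually assigned */
};

Binding that pirq to an event channel then works exactly as it does
for GSIs today, which is why a plain IRQ placeholder seems sufficient.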

>
>As for mapping pirq to MSI, I'm not sure about making real interrupt
>vectors visible to the guest. But maybe that's unavoidable. The way I
>could imagine this working is to teach Xen a bit about accessing PCI
>space, and then have the guest relinquish control of critical MSI
>control fields in the config space to Xen. The guest would tell Xen
>where the fields are, and then Xen can freely configure the target
>APIC, mask, etc. Seems neater to me, but is this a nuts idea?
>

This should work, and it may also solve the issue Yunhong described in
another mail, since Xen could mask the device directly when a spurious
interrupt arrives. It also looks like it needs fewer changes to the
Linux code. The only concern is how complex the interface may end up
being; and in this model Xen still has to synchronise PCI config space
accesses with the guest's port-I/O-style (0xCF8/0xCFC) accesses.
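
As a sketch of that direction (everything below is hypothetical,
including the config-space accessors, which just stand in for whatever
Xen would provide): the guest registers the offset of the device's MSI
capability, and Xen can then mask the device itself, e.g. on a
spurious interrupt:

#include <stdint.h>

/* Stand-ins for Xen's PCI config accessors; real names/locking differ. */
extern uint16_t pci_conf_read16(uint8_t bus, uint8_t devfn, uint16_t reg);
extern void pci_conf_write16(uint8_t bus, uint8_t devfn, uint16_t reg,
                             uint16_t val);

/* Hypothetical op: guest tells Xen where the MSI capability lives. */
struct physdev_msi_handover_sketch {
    uint8_t bus;
    uint8_t devfn;
    uint8_t msi_cap_offset;  /* MSI capability offset in config space */
};

/* Standard MSI capability layout (PCI spec), relative to the capability. */
#define MSI_CTRL_OFFSET   0x02      /* Message Control register */
#define MSI_CTRL_ENABLE   0x0001    /* MSI enable bit           */

/*
 * What Xen could do on a spurious MSI once it owns the control fields:
 * clear the enable bit (or set the per-vector mask bit where the device
 * supports per-vector masking) without any round trip to the guest.
 */
static void mask_msi_on_spurious(uint8_t bus, uint8_t devfn, uint8_t cap)
{
    uint16_t ctrl = pci_conf_read16(bus, devfn, cap + MSI_CTRL_OFFSET);
    pci_conf_write16(bus, devfn, cap + MSI_CTRL_OFFSET,
                     ctrl & ~MSI_CTRL_ENABLE);
}

Whether the interface stays this small once MSI-X and per-vector
masking are covered is of course the open question.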

Thanks,
Kevin

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

