
RE: [Xen-devel] use of struct hvm_mirq_dpci_mapping.gmsi vs. HVM_IRQ_DPCI_*_MSI flags



>>> On 28.04.11 at 22:27, "Kay, Allen M" <allen.m.kay@xxxxxxxxx> wrote:
> 2) HVM_IRQ_DPCI_GUEST_MSI and HVM_IRQ_DPCI_MACH_MSI usage:  These handle
> the various combinations of host and guest interrupt types -
> host_msi/guest_msi, host_intx/guest_intx, and host_msi/guest_intx.  The
> last one requires the translation flag; it is there to support guests
> that are not MSI capable.  The engineer who originally worked on this is
> no longer on the project.  You are welcome to clean this up if necessary.
> However, testing the various host/guest interrupt combinations and making
> sure everything still works is quite a bit of work.
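
For reference, the three combinations correspond to flag pairs roughly as
in the standalone sketch below. The bit values are made up purely for
illustration; the real definitions live in Xen's hvm/irq.h header:

    #include <stdio.h>

    /* Illustrative bit values only - not the real Xen definitions. */
    #define HVM_IRQ_DPCI_MACH_MSI   (1u << 0) /* host side uses MSI   */
    #define HVM_IRQ_DPCI_MACH_PCI   (1u << 1) /* host side uses INTx  */
    #define HVM_IRQ_DPCI_GUEST_MSI  (1u << 2) /* guest sees an MSI    */
    #define HVM_IRQ_DPCI_GUEST_PCI  (1u << 3) /* guest sees PCI INTx  */
    #define HVM_IRQ_DPCI_TRANSLATE  (1u << 4) /* host MSI, guest INTx */

    static const char *describe(unsigned int flags)
    {
        if ( (flags & HVM_IRQ_DPCI_MACH_MSI) &&
             (flags & HVM_IRQ_DPCI_GUEST_MSI) )
            return "host MSI / guest MSI";
        if ( (flags & HVM_IRQ_DPCI_MACH_PCI) &&
             (flags & HVM_IRQ_DPCI_GUEST_PCI) )
            return "host INTx / guest INTx";
        /* The translated case: MSI on the host, INTx in the guest. */
        if ( (flags & HVM_IRQ_DPCI_MACH_MSI) &&
             (flags & HVM_IRQ_DPCI_GUEST_PCI) &&
             (flags & HVM_IRQ_DPCI_TRANSLATE) )
            return "host MSI / guest INTx (translated)";
        return "inconsistent flag combination";
    }

    int main(void)
    {
        printf("%s\n", describe(HVM_IRQ_DPCI_MACH_MSI |
                                HVM_IRQ_DPCI_GUEST_MSI));
        printf("%s\n", describe(HVM_IRQ_DPCI_MACH_MSI |
                                HVM_IRQ_DPCI_GUEST_PCI |
                                HVM_IRQ_DPCI_TRANSLATE));
        return 0;
    }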

The problem is that, from what I can tell, the current use is
inconsistent, and that inconsistency gets in the way of the data layout
change. IIRC there's no problem as long as the PCI/gMSI fields don't get
overlaid in a union, so I'm going to leave that part out of the first
step.
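
To make concrete what I mean by overlaying: something along the lines of
the sketch below. The stand-in types are simplified so the fragment is
self-contained, and apart from gmsi the field names are from memory and
may not match the tree exactly:

    #include <stdint.h>

    /* Simplified stand-ins for the real Xen types, just so the
     * fragment compiles on its own. */
    struct list_head { struct list_head *next, *prev; };

    struct hvm_gmsi_info {
        uint32_t gvec;    /* guest vector */
        uint32_t gflags;
    };

    /* Hypothetical layout change: the INTx-only and MSI-only fields
     * share storage.  This is only safe if the HVM_IRQ_DPCI_*_MSI
     * flags are used consistently, which is the point at issue. */
    struct hvm_mirq_dpci_mapping {
        uint32_t flags;
        int pending;
        union {
            struct list_head digl_list;   /* INTx: guest IRQ list   */
            struct hvm_gmsi_info gmsi;    /* MSI: guest vector info */
        };
    };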

As to testing - I'll have to rely on someone at your end doing the full
testing of these changes anyway; I simply don't have the hardware
(and time) to do all that.

> 3) I believe the locking mechanism was originally implemented by 
> Espen@netrome(?), so we are not sure why the unlock is needed between the 
> two iterations.  We have also encountered several issues that we would 
> like to clean up.  However, we have left this as a low-priority task, as 
> the locking mechanisms are quite complex and the amount of testing 
> required after a cleanup is quite a bit of work.
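
For the record, the pattern in question has roughly the shape below. This
is a generic sketch using pthreads rather than the actual Xen code, just
to show why dropping the lock between iterations looks suspicious:

    #include <pthread.h>

    static pthread_mutex_t map_lock = PTHREAD_MUTEX_INITIALIZER;

    struct mapping { struct mapping *next; /* ... */ };
    static struct mapping *mappings;

    static void process(struct mapping *m) { (void)m; /* ... */ }

    /* The suspicious shape: the lock is dropped and re-taken between
     * iterations, so the list can change under our feet and both m and
     * m->next may be stale by the time we re-acquire. */
    static void walk_mappings(void)
    {
        struct mapping *m;

        pthread_mutex_lock(&map_lock);
        for ( m = mappings; m != NULL; m = m->next )
        {
            pthread_mutex_unlock(&map_lock);
            process(m);                    /* runs without the lock held */
            pthread_mutex_lock(&map_lock); /* list may have changed */
        }
        pthread_mutex_unlock(&map_lock);
    }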

Then we'll have to see how it goes with the change - see above for
the testing part.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

