Re: [Xen-devel] [PATCH v8 for-xen-4.5 1/2] dpci: Move from an hvm_irq_dpci (and struct domain) to an hvm_dirq_dpci model.
On Fri, Oct 24, 2014 at 10:49:32AM +0100, Jan Beulich wrote:
> >>> On 24.10.14 at 03:58, <konrad.wilk@xxxxxxxxxx> wrote:
> > On Thu, Oct 23, 2014 at 09:58:34AM +0100, Jan Beulich wrote:
> >> >>> On 21.10.14 at 19:19, <konrad.wilk@xxxxxxxxxx> wrote:
> >> > @@ -156,6 +165,7 @@ int pt_irq_create_bind(
> >> >          {
> >> >              pirq_dpci->gmsi.gflags = 0;
> >> >              pirq_dpci->gmsi.gvec = 0;
> >> > +            pirq_dpci->dom = NULL;
> >> >              pirq_dpci->flags = 0;
> >> >              pirq_cleanup_check(info, d);
> >> >              spin_unlock(&d->event_lock);
> >>
> >> Just like this error path needing adjustment, the other one following
> >> failure of pirq_guest_bind() (after
> >>
> >> > @@ -232,7 +242,6 @@ int pt_irq_create_bind(
> >> >          {
> >> >              unsigned int share;
> >> >
> >> > -            pirq_dpci->dom = d;
> >> >              if ( pt_irq_bind->irq_type == PT_IRQ_TYPE_MSI_TRANSLATE )
> >> >              {
> >> >                  pirq_dpci->flags = HVM_IRQ_DPCI_MAPPED |
> >>
> >> ) would seem to need adjustment too.
> >
> > However, I am at a loss as to what you mean here. If by adjustment you
> > mean leave it alone, I concur on the latter.
>
> Indeed I managed to overlook that ->dom is being cleared there
> already. Over time I've been trying to make the code in this file at
> least a little more legible, but it's still lacking: in the case at
> hand, proper grouping of the cleanup done on the different data
> structures would probably have made this more obvious, e.g. by having
> the clearing of pirq_dpci->dom and pirq_dpci->flags side by side.
>
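Right - something along these lines is what the error path below amounts
to, keeping the teardown of each structure together (just a sketch using
the field names from the patch, not a further change):

    /* MSI-specific state first ... */
    pirq_dpci->gmsi.gflags = 0;
    pirq_dpci->gmsi.gvec = 0;
    /* ... then the generic dpci state, with ->dom right next to ->flags. */
    pirq_dpci->dom = NULL;
    pirq_dpci->flags = 0;
    pirq_cleanup_check(info, d);
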
> > @@ -144,6 +141,18 @@ int pt_irq_create_bind(
> >                                 HVM_IRQ_DPCI_GUEST_MSI;
> >              pirq_dpci->gmsi.gvec = pt_irq_bind->u.msi.gvec;
> >              pirq_dpci->gmsi.gflags = pt_irq_bind->u.msi.gflags;
> > +            /*
> > +             * 'pt_irq_create_bind' can be called after 'pt_irq_destroy_bind'.
> > +             * The 'pirq_cleanup_check' which would free the structure is only
> > +             * called if the event channel for the PIRQ is active. However
> > +             * OS-es that use event channels usually bind PIRQs to eventds
> > +             * and unbind them before calling 'pt_irq_destroy_bind' - with the
> > +             * result that we re-use the 'dpci' structure. This can be
> > +             * reproduced with unloading and loading the driver for a device.
> > +             *
> > +             * As such on every 'pt_irq_create_bind' call we MUST set it.
> > +             */
> > +            pirq_dpci->dom = d;
> >              /* bind after hvm_irq_dpci is setup to avoid race with irq handler*/
> >              rc = pirq_guest_bind(d->vcpu[0], info, 0);
> >              if ( rc == 0 && pt_irq_bind->u.msi.gtable )
> > @@ -156,6 +165,7 @@ int pt_irq_create_bind(
> >          {
> >              pirq_dpci->gmsi.gflags = 0;
> >              pirq_dpci->gmsi.gvec = 0;
> > +            pirq_dpci->dom = NULL;
> >              pirq_dpci->flags = 0;
> >              pirq_cleanup_check(info, d);
> >              spin_unlock(&d->event_lock);
>
> Wait - is this correct even when pirq_guest_bind() succeeded but
> msixtbl_pt_register() failed? At the first glance I would say no. But
Keep in mind that if 'msixtbl_pt_register' fails, we immediately call
'pirq_guest_unbind' and then land in here.
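
That is, the flow looks roughly like this (a sketch following this
patch's version of the MSI path, not the verbatim source):

    rc = pirq_guest_bind(d->vcpu[0], info, 0);
    if ( rc == 0 && pt_irq_bind->u.msi.gtable )
    {
        rc = msixtbl_pt_register(d, info, pt_irq_bind->u.msi.gtable);
        if ( unlikely(rc) )
            /* Undo the (successful) pirq_guest_bind() first. */
            pirq_guest_unbind(d, info);
    }
    if ( unlikely(rc) )
    {
        /*
         * Either pirq_guest_bind() failed, or it succeeded and was
         * undone above - so the PIRQ is unbound by the time we clear
         * ->dom along with the rest of the state.
         */
        pirq_dpci->gmsi.gflags = 0;
        pirq_dpci->gmsi.gvec = 0;
        pirq_dpci->dom = NULL;
        pirq_dpci->flags = 0;
        pirq_cleanup_check(info, d);
        spin_unlock(&d->event_lock);
        return rc;
    }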
> apart from that needing sorting out I think the patch is fine now
> (and I hope the latest re-shuffling didn't break anything).
Whew. I think so too.
>
> Jan
>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel