
Re: [Xen-devel] [PATCH v11 1/2] vmx: VT-d posted-interrupt core logic handling




> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: Tuesday, February 2, 2016 5:53 PM
> To: Wu, Feng <feng.wu@xxxxxxxxx>
> Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>; Dario Faggioli
> <dario.faggioli@xxxxxxxxxx>; George Dunlap <george.dunlap@xxxxxxxxxxxxx>;
> Tian, Kevin <kevin.tian@xxxxxxxxx>; xen-devel@xxxxxxxxxxxxx; Keir Fraser
> <keir@xxxxxxx>
> Subject: RE: [PATCH v11 1/2] vmx: VT-d posted-interrupt core logic handling
> 
> >>> On 02.02.16 at 05:48, <feng.wu@xxxxxxxxx> wrote:
> 
> >
> >> -----Original Message-----
> >> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> >> Sent: Friday, January 29, 2016 5:32 PM
> >> To: Wu, Feng <feng.wu@xxxxxxxxx>
> >> Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>; Dario Faggioli
> >> <dario.faggioli@xxxxxxxxxx>; George Dunlap <george.dunlap@xxxxxxxxxxxxx>;
> >> Tian, Kevin <kevin.tian@xxxxxxxxx>; xen-devel@xxxxxxxxxxxxx; Keir Fraser
> >> <keir@xxxxxxx>
> >> Subject: RE: [PATCH v11 1/2] vmx: VT-d posted-interrupt core logic handling
> >>
> >> >>> On 29.01.16 at 02:53, <feng.wu@xxxxxxxxx> wrote:
> >> >> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> >> >> Sent: Friday, January 29, 2016 12:38 AM
> >> >> >>> On 28.01.16 at 06:12, <feng.wu@xxxxxxxxx> wrote:
> >> >> > --- a/xen/arch/x86/hvm/vmx/vmx.c
> >> >> > +++ b/xen/arch/x86/hvm/vmx/vmx.c
> >> >> > @@ -83,7 +83,140 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content);
> >> >> >  static void vmx_invlpg_intercept(unsigned long vaddr);
> >> >> >  static int vmx_vmfunc_intercept(struct cpu_user_regs *regs);
> >> >> >
> >> >> > +static void vmx_vcpu_block(struct vcpu *v)
> >> >> > +{
> >> >> > +    unsigned long flags;
> >> >> > +    unsigned int dest;
> >> >> > +
> >> >> > +    struct pi_desc *pi_desc = &v->arch.hvm_vmx.pi_desc;
> >> >>
> >> >> Stray blank line between declarations.
> >> >>
> >> >> > +    ASSERT(v->arch.hvm_vmx.pi_block_list_lock == NULL);
> >> >>
> >> >> I think this needs to be done after acquiring the lock.
> >> >
> >> > I am not sure what you mean here. The ASSERT here means that
> >> > the vCPU is not on any pCPU list when we get here.
> >>
> >> My point is that until you've taken the lock, whether the vCPU is on
> >> any such list may change. All state updates are done under lock, so
> >> checking for valid incoming state should be done under lock too, to
> >> be most useful.
> >
> > I don't think so. We reach this function via vcpu_block()/do_poll() ->
> > arch_vcpu_block() -> ... -> vmx_vcpu_block(), and the parameter
> > of this function should point to the current vCPU, so when we
> > get here, v->arch.hvm_vmx.pi_block_list_lock should be NULL,
> > which means the vCPU is not on any blocking list, and no one can
> > change it behind our back at this moment.
> 
> Well, if we only considered how the code currently looks, then
> ASSERT()s like this one could be left out. Yet they're there to
> also catch unexpected state when, later, another caller or another
> place setting pi_block_list_lock to non-NULL gets introduced.
> Given what you write, an alternative to what I've suggested above
> might be to _also_ ASSERT(v == current), but this would still be
> a weaker check than if the assertion was moved into the locked
> region.

Sorry for the late response; I have just come back from holiday.

v->arch.hvm_vmx.pi_block_list_lock is assigned a value after this
ASSERT, which means that inside the locked region
'v->arch.hvm_vmx.pi_block_list_lock' already has a value and hence is
not NULL. So there would be no need for the ASSERT there, and adding
it would defeat my original purpose. If you insist on moving the
ASSERT into the locked region, I would suggest removing it instead.
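
To make the placement question concrete, here is a minimal sketch of
the ordering Jan describes, with the check done inside the locked
region but before the assignment. The per-CPU names used here
(pi_blocked_vcpu, pi_blocked_vcpu_lock) and the pi_blocked_vcpu_list
field are illustrative assumptions, not necessarily the literal v11
code:

static void vmx_vcpu_block(struct vcpu *v)
{
    unsigned long flags;
    /* Hypothetical per-pCPU blocking-list lock; name assumed for illustration. */
    spinlock_t *lock = &per_cpu(pi_blocked_vcpu_lock, v->processor);

    /* Weaker check that is safe to do without the lock held. */
    ASSERT(v == current);

    spin_lock_irqsave(lock, flags);

    /*
     * Checked under the lock, so concurrent writers of
     * pi_block_list_lock are excluded while the incoming state is
     * validated.  This must precede the assignment below, or the
     * assertion would trivially fail.
     */
    ASSERT(v->arch.hvm_vmx.pi_block_list_lock == NULL);
    v->arch.hvm_vmx.pi_block_list_lock = lock;

    /* Queue the vCPU on this pCPU's blocking list (names assumed). */
    list_add_tail(&v->arch.hvm_vmx.pi_blocked_vcpu_list,
                  &per_cpu(pi_blocked_vcpu, v->processor));

    /* ... NV/NDST updates of the PI descriptor omitted ... */

    spin_unlock_irqrestore(lock, flags);
}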

> >> >> > --- a/xen/drivers/passthrough/vtd/iommu.c
> >> >> > +++ b/xen/drivers/passthrough/vtd/iommu.c
> >> >> > @@ -2293,6 +2293,8 @@ static int reassign_device_ownership(
> >> >> >          pdev->domain = target;
> >> >> >      }
> >> >> >
> >> >> > +    vmx_pi_hooks_reassign(source, target);
> >> >> > +
> >> >> >      return ret;
> >> >> >  }
> >> >>
> >> >> I think you need to clear source's hooks here, but target's need to
> >> >> get set ahead of the actual assignment.
> >> >
> >> > I think this is the place where the device is moved from
> >> > &source->arch.pdev_list to &target->arch.pdev_list. If that is
> >> > the case, we should clear source and set target here, right?
> >>
> >> No - target needs to be ready to deal with events from the device
> >> _before_ the device actually gets assigned to it.
> >
> > I still don't get your point. I still think this is the place where
> > the device actually gets assigned. Could you share more detail about
> > the place "_before_ the device actually gets assigned to it"?
> 
> The issue is with the sequence of events. Right now you assign
> the device and only then prepare target to deal with it having a
> device assigned. Whereas the needed sequence is the other
> way around - prepare target, and only then assign the device.
> I.e. you need to do things in this order (all inside this function)
> - prepare target
> - re-assign device
> - clean up source

Do you mean something like the following?

@@ -2279,13 +2280,21 @@ static int reassign_device_ownership(
             }
     }
 
+    vmx_pi_hooks_assign(target);
+
     ret = domain_context_unmap(source, devfn, pdev);
     if ( ret )
+    {
+        vmx_pi_hooks_deassign(target);
         return ret;
+    }
 
     ret = domain_context_mapping(target, devfn, pdev);
     if ( ret )
+    {
+        vmx_pi_hooks_deassign(target);
         return ret;
+    }
 
     if ( devfn == pdev->devfn )
     {
@@ -2293,6 +2302,8 @@ static int reassign_device_ownership(
         pdev->domain = target;
     }
 
+    vmx_pi_hooks_deassign(source);
+
     return ret;
 }
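
Note that target's hooks are also torn down on both error paths here,
so a failed domain_context_unmap() or domain_context_mapping() does
not leave target with the PI hooks installed but no device assigned.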
 
Thanks,
Feng

> 
> Jan
