
Re: [Xen-devel] [PATCH v2 0/4] VMX: Properly handle pi descriptor and per-cpu blocking list



On Tue, 2016-05-31 at 10:19 +0000, Wu, Feng wrote:
> > -----Original Message-----
> > From: Dario Faggioli [mailto:dario.faggioli@xxxxxxxxxx]
>
> > So, if you could try explaining a bit what your line of reasoning
> > was when doing things this way, that would be helpful (at least
> > to me).
> 'pi_hotplug_lock' is trying to protect the following scenario:
> vmx_pi_blocking_cleanup() gets called either when the last assigned
> device is detached or when the vCPU is going to be destroyed, and at
> the same time vmx_vcpu_block() is running. We need to make sure
> that, after all the blocked vCPUs have been cleaned up, no new vCPU
> is added to the per-cpu blocking list. That is why I introduce
> 'pi_blocking_cleaned_up' for each vCPU: when it is set to 1, the
> vCPU can no longer be added to the blocking list in
> vmx_vcpu_block().
> 
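Just to make sure I understand: the pattern you are describing would, I
guess, look roughly like the sketch below. The field names are taken
from your explanation and the rest is reconstructed from memory, so
this is only my reading of it, not the actual patch:

  static void vmx_vcpu_block(struct vcpu *v)
  {
      unsigned long flags;
      unsigned int cpu = v->processor;
      spinlock_t *lock = &per_cpu(vmx_pi_blocking, cpu).lock;

      spin_lock_irqsave(lock, flags);

      /*
       * Hypothetical checks, as per the description above: if cleanup
       * has already run for this vCPU, or the pCPU's list is being
       * dismantled, refuse to enqueue.
       */
      if ( v->arch.hvm_vmx.pi_blocking_cleaned_up ||
           per_cpu(vmx_pi_blocking, cpu).down )
      {
          spin_unlock_irqrestore(lock, flags);
          return;
      }

      list_add_tail(&v->arch.hvm_vmx.pi_blocking.list,
                    &per_cpu(vmx_pi_blocking, cpu).list);

      spin_unlock_irqrestore(lock, flags);
  }
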
By "the vCPU is going to be destroyed", to what code path do you refer?
Because, for instance, there's this:

  domain_kill() --> domain_destroy() --> complete_domain_destroy()
    --> vcpu_destroy() --> hvm_vcpu_destroy()

in which case, the vCPUs are not running --and hence can't block--
during their own destruction.

I take it that you've found a path where that does not hold, and which
hence requires this kind of protection?

For the other race (last device being unassigned), I'll look more into
it, but, in general, I share George's and Jan's views that we need
simpler, more consistent and easier to maintain solutions.

> For the other flag, 'down', it is used for the following scenario:
> when a pCPU is going away and, meanwhile, vmx_vcpu_block() is
> called, we should not put the vCPU on the per-cpu blocking list of
> that pCPU, since the list itself is going away.
> 
But, in this case, as George basically said already, if a pCPU is going
away, there should be no vCPU running on it, and hence no vCPU that,
upon blocking, would need to be added to that pCPU's blocking list.

> > For instance, arch_vcpu_block() now returns a value and, as you say
> > yourself in a comment, that is for (potentially) preventing a vcpu
> > from blocking. So the behavior of schedule.c:vcpu_block() now
> > depends on your new flag per_cpu(vmx_pi_blocking, v->processor).down.
> > Again, I'll look into it more closely, but this has little chance
> > of being right (at least logically).
> Like in vcpu_block(): it checks for events before actually blocking
> the vCPU, and here we just introduce another case in which the vCPU
> cannot be blocked. I don't see why you think this is problematic.
> 
Well, but, right now, it's like this:
 - the vCPU should block, waiting for an event
 - it turns out the event has already arrived
 - we can avoid blocking

In your case, AFAIUI, it's:
 - the vCPU should block, waiting for an event
 - the event has _not_ arrived, so we indeed should block
 - we do *not* block, due to some other reason

That does not look right to me... what about the fact that we wanted to
actually wait for the event? :-O
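
To make the difference more concrete, here is, from memory, roughly
how schedule.c:vcpu_block() is structured right now (so, a sketch;
details may well be off):

  void vcpu_block(void)
  {
      struct vcpu *v = current;

      set_bit(_VPF_blocked, &v->pause_flags);

      arch_vcpu_block(v);

      /*
       * So far, the only reason for not actually blocking is that the
       * event we wanted to wait for has been delivered already, in
       * which case blocking would mean missing it.
       */
      if ( local_events_need_delivery() )
          clear_bit(_VPF_blocked, &v->pause_flags);
      else
          raise_softirq(SCHEDULE_SOFTIRQ);
  }

If arch_vcpu_block() starts vetoing the blocking even when
local_events_need_delivery() returns false, we end up with a vCPU that
is runnable while it is, in fact, still waiting for something, which
is exactly the situation described above.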

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

