
Re: [Xen-devel] [PATCH v2 2/3] xen: add hypercall option to temporarily pin a vcpu

On 02/03/16 10:27, Dario Faggioli wrote:
> On Wed, 2016-03-02 at 08:14 +0100, Juergen Gross wrote:
>> On 01/03/16 16:52, George Dunlap wrote:
>>> Also -- have you actually tested the "cpupool move while pinned"
>>> functionality to make sure it actually works?  There's a weird bit
>>> in
>>> cpupool_unassign_cpu_helper() where after calling
>>> cpu_disable_scheduler(cpu), it unconditionally sets the cpu bit in
>>> the
>>> cpupool_free_cpus mask, even if it returns an error.  That can't be
>>> right, even for the existing -EAGAIN case, can it?
>> That should be no problem. Such a failure can be repaired easily by
>> adding the cpu to the cpupool again. 
> And there's not much else one can do, I would say. When we are in
> cpu_disable_scheduler(), coming from
> cpupool_unassign_cpu()-->cpupool_unassign_cpu_helper(), we're already
> halfway through removing the cpu from the pool (e.g., we already
> cleared the relevant bit from the cpupool's cpu_valid mask).
> And we don't actually want to revert that, as doing so would allow the
> scheduler to start again moving vcpus to that cpu (and the following
> attempts will risk failing with EAGAIN again :-D).
> FWIW, I've also found that part rather weird for quite some time... But
> it does indeed make sense, IMO.
>> Adding a comment seems to be a
>> good idea. :-)
> Yep. Should we also add an error message for the user to be able to see
> it, even if she can't read the comment in the source code? (Not
> necessarily right there, if that would make it trigger too much... just
> in a place where it can be seen in case the user actually needs to
> do something).
>> What is wrong, and even worse: schedule_cpu_switch() returning an
>> error will leak domlist_read_lock.
> Indeed, good catch. :-)
>>> And, in general, what happens if the device driver gets mixed up
>>> and
>>> forgets to unpin the vcpu?  Is the only recourse to reboot your
>>> host (or
>>> deal with the fact that you can't reconfigure your cpupools)?
>> Unless we add a "forced" option to "xl vcpu-pin", yes.
> Which would be fine to have, IMO. I'm not sure if it would better be an
> `xl vcpu-pin' flag, or a separate utility (as Jan is also saying).
> A separate utility would fit better the "emergency nature" of the
> thing, avoiding having to clobber xl for that (as this will be the
> only, pretty uncommon, case where such flag would be needed).
> However, an xl flag is easier to add, easier to document and easier and
> more natural to find, from the point of view of a user that really
> needs it. And perhaps it could turn out useful for other situations in
> future. So, I guess I'd say:
>  - yes, let's add that
>  - let's do it as a "force flag" of `xl vcpu-pin'.

Which raises the question: how to do that on the libxl level?

a) expand libxl_set_vcpuaffinity() with another parameter (is this even
   possible? I could do some ifdeffery, but the API would change...)

b) add a libxl_set_vcpuaffinity_force() variant

c) imply the force flag by specifying both hard and soft maps as NULL
   (it _is_ basically just that: keep both affinity sets), implying that
   it makes no sense to specify any affinities with the -f flag (which
   renders the "force" meaning rather strange; it would be more a
   "restore")


> Regards,
> Dario

Xen-devel mailing list
