
Re: [Xen-devel] [PATCH v3 11/14] libxl: get and set soft affinity

On mer, 2013-11-20 at 14:56 +0000, Ian Campbell wrote:
> On Wed, 2013-11-20 at 15:50 +0100, Dario Faggioli wrote:
> > Again, I'm not getting it. What's the window where you're worried about
> > races, if on/offlining is involved? What do you refer to with "during
> > all this"?
> Between getting the maximum CPU number and checking the results of a pin
> call. What happens if a CPU went away, such that when checking you think
> there are 16 CPUs (based on old information) but when the pin hypercall
> was made there were only 15? Or, conversely, if a CPU was plugged in.
Well, for the sake of this patch, all we risk is printing a spurious
warning or missing one.

Anyway, I see what you mean now, and I guess I can try to mitigate it
by moving the check on the number of CPUs to after the affinity-setting
call. That would at least make it more likely for the information about
the number of CPUs to be consistent with the result of the call, but it
won't eliminate the possibility of races.

In fact, I don't think I can avoid that with 100% certainty, as there
is no way to get both the result of the affinity-setting call and the
number of CPUs atomically, and I don't think it's worth introducing one
for the sake of this...

> Also, does this check fail if the cpumask is sparse? Is that something
> which can happen e.g. unplugging CPU#8 in a 16 CPU system?
Well, in that case I guess it'd be fine to print the warning. I mean, if
the user wanted affinity to CPU#8 and that CPU went away, it's good to
tell them they're not getting (exactly) what they asked for, isn't it?


<<This happens because I choose it to happen!>> (Raistlin Majere)
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

