
Re: [Xen-users] cpupool-numa-split broken with pinned cpus?

On 13/06/16 10:43, Wei Liu wrote:
> CC Dario and Juergen
> On Thu, Jun 02, 2016 at 04:01:23PM +1200, Glenn Enright wrote:
>> Seeing this output, previously (around xen 4.4) it used to work despite the
>> pinned cpus.
>> # xl cpupool-numa-split
>> libxl: error: libxl.c:5516:libxl_set_vcpuonline: Requested 12 VCPUs, however
>> maxcpus is 2!: Success
>> error on removing vcpus for Domain-0
>> And then nothing has changed.
>> # xl cpupool-list
>> Name               CPUs   Sched     Active   Domain count
>> Pool-node0          24    credit       y          7
>> # xl vcpu-list 0
>> Name                                ID  VCPU   CPU State   Time(s) Affinity
>> (Hard / Soft)
>> Domain-0                             0     0    0   r--      65.2  0 / all
>> Domain-0                             0     1    1   -b-      67.8  1 / all
>> Two issues. One is that the message says success, when no such thing
>> occurred.

Uuh, this seems to be a can of worms. The specific issue here is simple
to fix by using LOG() instead of LOGE() in libxl_set_vcpuonline().

There are many more users of LOGE() which I suspect to be wrong,
especially those checking for an error in aodev->rc.

I'd like to address all those cases in a separate series.

>> I'm also thinking this may be wrong behaviour, since the number of pinned
>> cores is lower than the number of cores on the first node. It may be that
>> the test in xl.c around line 5515 should allow this case?

In case there are fewer online vcpus than the first node has cpus,
libxl_set_vcpuonline() shouldn't be called at all. A patch is coming.

Are you okay with me adding you as the reporter of this problem in the
patch?


Xen-users mailing list


