
Re: [Xen-devel] Hypervisor crash(!) on xl cpupool-numa-split



Andre,

Can you try again with the attached patch?

Thanks,
 -George

On Tue, Feb 8, 2011 at 12:08 PM, George Dunlap
<George.Dunlap@xxxxxxxxxxxxx> wrote:
> On Tue, Feb 8, 2011 at 5:43 AM, Juergen Gross
> <juergen.gross@xxxxxxxxxxxxxx> wrote:
>> On 02/07/11 16:55, George Dunlap wrote:
>>>
>>> Juergen,
>>>
>>> What is supposed to happen if a domain is in cpupool0, and then all of
>>> the cpus are taken out of cpupool0?  Is that possible?
>>
>> No. Cpupool0 can't be without any cpu, as Dom0 is always a member of cpupool0.
>
> If that's the case, then since Andre is running this immediately after
> boot, he shouldn't be seeing any vcpus in the new pools; and all of
> the dom0 vcpus should be migrated to cpupool0, right?  Is it possible
> that migration process isn't happening properly?
>
> It looks like schedule.c:cpu_disable_scheduler() will try to migrate
> all vcpus, and if it fails to migrate, it returns -EAGAIN so that the
> tools will try again.  It's probably worth instrumenting that whole
> code-path to make sure it actually happens as we expect.  Are we
> certain, for example, that a hypercall continued on another cpu
> will actually return the new error value properly?
>
> Another minor thing: In cpupool.c:cpupool_unassign_cpu_helper(), why
> is the cpu's bit set in cpupool_free_cpus without checking to see if
> the cpu_disable_scheduler() call actually worked?  Shouldn't that also
> be inside the if() statement?
>
>  -George
>

Attachment: cpupools-vcpu-migrate-debug.diff
Description: Text Data

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

