
Re: Crash when using cpupools



On 03.09.21 17:41, Bertrand Marquis wrote:
Hi,

While doing some investigation with cpupools I encountered a crash when trying 
to isolate a guest to its own physical cpu.

I am using the current staging branch.

I did the following (on FVP with 8 cores):
- start dom0 with dom0_max_vcpus=1
- remove core 1 from dom0 cpupool: xl cpupool-cpu-remove Pool-0 1
- create a new pool: xl cpupool-create name=\"NetPool\"
- add core 1 to the pool: xl cpupool-cpu-add NetPool 1
- create a guest in NetPool using the following in the guest config: 
pool="NetPool"
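
For reference, a minimal guest config along these lines might look like the sketch
below; the domain name, kernel path and memory size are placeholders I made up,
only the pool line is taken from the report:

    name   = "netguest"                # placeholder domain name
    kernel = "/path/to/guest-kernel"   # placeholder kernel path
    memory = 512                       # placeholder size in MB
    vcpus  = 1
    pool   = "NetPool"                 # put the guest into the NetPool cpupool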

I end up with a crash during guest creation, with the following call trace:
(XEN) Xen call trace:
(XEN)    [<0000000000234cb0>] credit2.c#csched2_alloc_udata+0x58/0xfc (PC)
(XEN)    [<0000000000234c80>] credit2.c#csched2_alloc_udata+0x28/0xfc (LR)
(XEN)    [<0000000000242d38>] sched_move_domain+0x144/0x6c0
(XEN)    [<000000000022dd18>] cpupool.c#cpupool_move_domain_locked+0x38/0x70
(XEN)    [<000000000022fadc>] cpupool_do_sysctl+0x73c/0x780
(XEN)    [<000000000022d8e0>] do_sysctl+0x788/0xa58
(XEN)    [<0000000000273b68>] traps.c#do_trap_hypercall+0x78/0x170
(XEN)    [<0000000000274f70>] do_trap_guest_sync+0x138/0x618
(XEN)    [<0000000000260458>] entry.o#guest_sync_slowpath+0xa4/0xd4

After some debugging I found out that unit->vcpu_list is NULL after the assignment
unit->vcpu_list = d->vcpu[unit->unit_id]; with unit_id 0 in common/sched/core.c:688.
This makes the call to is_idle_unit(unit) in csched2_alloc_udata trigger the crash.
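
For context, the code path involved looks roughly like the simplified excerpt below
(paraphrased from the scheduling code, with details trimmed; not the exact source):

    /* common/sched/core.c, sched_move_domain(): the new unit's vcpu_list is
     * taken straight from the domain's vcpu array. */
    unit->vcpu_list = d->vcpu[unit->unit_id];  /* NULL if d->vcpu[0] is not set up */

    /* common/sched/private.h: is_idle_unit() looks at vcpu_list ... */
    static inline bool is_idle_unit(const struct sched_unit *unit)
    {
        return is_idle_vcpu(unit->vcpu_list);
    }

    /* include/xen/sched.h: ... and is_idle_vcpu() dereferences the vcpu pointer,
     * so a NULL vcpu_list faults here. */
    #define is_idle_vcpu(v)   (is_idle_domain((v)->domain))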

So there is no vcpu 0 in that domain? How is this possible?


Juergen
