
Re: Crash when using cpupools

  • To: Juergen Gross <jgross@xxxxxxxx>
  • From: Bertrand Marquis <Bertrand.Marquis@xxxxxxx>
  • Date: Mon, 6 Sep 2021 08:30:56 +0000
  • Accept-language: en-GB, en-US
  • Authentication-results-original: suse.com; dkim=none (message not signed) header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
  • Cc: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Dario Faggioli <dfaggioli@xxxxxxxx>
  • Delivery-date: Mon, 06 Sep 2021 08:31:18 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Nodisclaimer: true
  • Thread-index: AQHXoNo0twqKn1RLAkCotEsfXja8k6uWr0+AgAACAYA=
  • Thread-topic: Crash when using cpupools

Hi Juergen,

> On 6 Sep 2021, at 09:23, Juergen Gross <jgross@xxxxxxxx> wrote:
> On 03.09.21 17:41, Bertrand Marquis wrote:
>> Hi,
>> While doing some investigation with cpupools I encountered a crash when 
>> trying to isolate a guest to its own physical cpu.
>> I am using the current staging branch.
>> I did the following (on FVP with 8 cores):
>> - start dom0 with dom0_max_vcpus=1
>> - remove core 1 from dom0 cpupool: xl cpupool-cpu-remove Pool-0 1
>> - create a new pool: xl cpupool-create name=\"NetPool\"
>> - add core 1 to the pool: xl cpupool-cpu-add NetPool 1
>> - create a guest in NetPool using the following in the guest config: 
>> pool="NetPool"
>> I end with a crash with the following call trace during guest creation:
>> (XEN) Xen call trace:
>> (XEN)    [<0000000000234cb0>] credit2.c#csched2_alloc_udata+0x58/0xfc (PC)
>> (XEN)    [<0000000000234c80>] credit2.c#csched2_alloc_udata+0x28/0xfc (LR)
>> (XEN)    [<0000000000242d38>] sched_move_domain+0x144/0x6c0
>> (XEN)    [<000000000022dd18>] cpupool.c#cpupool_move_domain_locked+0x38/0x70
>> (XEN)    [<000000000022fadc>] cpupool_do_sysctl+0x73c/0x780
>> (XEN)    [<000000000022d8e0>] do_sysctl+0x788/0xa58
>> (XEN)    [<0000000000273b68>] traps.c#do_trap_hypercall+0x78/0x170
>> (XEN)    [<0000000000274f70>] do_trap_guest_sync+0x138/0x618
>> (XEN)    [<0000000000260458>] entry.o#guest_sync_slowpath+0xa4/0xd4
>> After some debugging I found out that unit->vcpu_list is NULL after 
>> unit->vcpu_list = d->vcpu[unit->unit_id]; with unit_id 0 in 
>> common/sched/core.c:688
>> This makes the call to is_idle_unit(unit) in csched2_alloc_udata trigger the 
>> crash.
> So there is no vcpu 0 in that domain? How is this possible?

No idea. I will need to dig deeper, as the state I end up in does not make sense.

Could you confirm that my steps are correct and that this should work, before
I start digging?


> Juergen
> <OpenPGP_0xB0DE9DD628BF132F.asc>


