
Re: [RFC] Xen crashes on ASSERT on suspend/resume, suggested fix


  • To: Stefano Stabellini <stefano.stabellini@xxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Tue, 23 May 2023 15:54:36 +0200
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx, jbeulich@xxxxxxxx, andrew.cooper3@xxxxxxxxxx, xenia.ragiadakou@xxxxxxx
  • Delivery-date: Tue, 23 May 2023 13:55:07 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Thu, May 18, 2023 at 04:44:53PM -0700, Stefano Stabellini wrote:
> Hi all,
> 
> After many PVH Dom0 suspend/resume cycles we are seeing the following
> Xen crash (it is random and doesn't reproduce reliably):
> 
> (XEN) [555.042981][<ffff82d04032a137>] R arch/x86/irq.c#_clear_irq_vector+0x214/0x2bd
> (XEN) [555.042986][<ffff82d04032a74c>] F destroy_irq+0xe2/0x1b8
> (XEN) [555.042991][<ffff82d0403276db>] F msi_free_irq+0x5e/0x1a7
> (XEN) [555.042995][<ffff82d04032da2d>] F unmap_domain_pirq+0x441/0x4b4
> (XEN) [555.043001][<ffff82d0402d29b9>] F arch/x86/hvm/vmsi.c#vpci_msi_disable+0xc0/0x155
> (XEN) [555.043006][<ffff82d0402d39fc>] F vpci_msi_arch_disable+0x1e/0x2b
> (XEN) [555.043013][<ffff82d04026561c>] F drivers/vpci/msi.c#control_write+0x109/0x10e
> (XEN) [555.043018][<ffff82d0402640c3>] F vpci_write+0x11f/0x268
> (XEN) [555.043024][<ffff82d0402c726a>] F arch/x86/hvm/io.c#vpci_portio_write+0xa0/0xa7
> (XEN) [555.043029][<ffff82d0402c6682>] F hvm_process_io_intercept+0x203/0x26f
> (XEN) [555.043034][<ffff82d0402c6718>] F hvm_io_intercept+0x2a/0x4c
> (XEN) [555.043039][<ffff82d0402b6353>] F arch/x86/hvm/emulate.c#hvmemul_do_io+0x29b/0x5f6
> (XEN) [555.043043][<ffff82d0402b66dd>] F arch/x86/hvm/emulate.c#hvmemul_do_io_buffer+0x2f/0x6a
> (XEN) [555.043048][<ffff82d0402b7bde>] F hvmemul_do_pio_buffer+0x33/0x35
> (XEN) [555.043053][<ffff82d0402c7042>] F handle_pio+0x6d/0x1b4
> (XEN) [555.043059][<ffff82d04029ec20>] F svm_vmexit_handler+0x10bf/0x18b0
> (XEN) [555.043064][<ffff82d0402034e5>] F svm_stgi_label+0x8/0x18
> (XEN) [555.043067]
> (XEN) [555.469861]
> (XEN) [555.471855] ****************************************
> (XEN) [555.477315] Panic on CPU 9:
> (XEN) [555.480608] Assertion 'per_cpu(vector_irq, cpu)[old_vector] == irq' failed at arch/x86/irq.c:233
> (XEN) [555.489882] ****************************************
> 
> Looking at the code in question, the ASSERT looks wrong to me.
> 
> Specifically, if you see send_cleanup_vector and
> irq_move_cleanup_interrupt, it is entirely possible to have old_vector
> still valid and also move_in_progress still set, but only some of the
> per_cpu(vector_irq, me)[vector] cleared. It seems to me that this could
> happen especially when an MSI has a large old_cpu_mask.
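
To make the quoted scenario concrete, here is a minimal standalone sketch
(plain C, not the Xen code; vector_irq, old_vector and the CPU numbering are
simplified stand-ins): the move-cleanup interrupt is handled independently on
each CPU, and each CPU only clears its own entry, so a snapshot taken part-way
through can see a mix of cleared and still-set entries while old_vector is
still valid.

#include <stdio.h>

#define NR_CPUS    4
#define NR_VECTORS 256
#define IRQ        42
#define OLD_VECTOR 0x30

/* Simplified stand-in for the per-CPU vector_irq tables. */
static int vector_irq[NR_CPUS][NR_VECTORS];

/* Stand-in for what the move-cleanup interrupt does on one CPU:
 * each CPU clears only its *own* entry for the old vector. */
static void cleanup_on_cpu(unsigned int cpu)
{
    vector_irq[cpu][OLD_VECTOR] = ~IRQ;
}

int main(void)
{
    unsigned int cpu;

    /* The IRQ was previously routed through OLD_VECTOR on every CPU. */
    for ( cpu = 0; cpu < NR_CPUS; cpu++ )
        vector_irq[cpu][OLD_VECTOR] = IRQ;

    /* Cleanup IPIs are processed independently: suppose only CPUs 0 and 1
     * have handled theirs so far. */
    cleanup_on_cpu(0);
    cleanup_on_cpu(1);

    /* A walk over the old CPU set at this instant finds some entries already
     * cleared and some still pointing at the irq. */
    for ( cpu = 0; cpu < NR_CPUS; cpu++ )
        printf("cpu%u: vector_irq[old_vector] %s irq\n", cpu,
               vector_irq[cpu][OLD_VECTOR] == IRQ ? "==" : "!=");

    return 0;
}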

I guess the only way to get into such a situation would be if you happen
to execute _clear_irq_vector() with a cpu_online_map smaller than the
one in old_cpu_mask, at which point the vector_irq entries at old_vector
on the CPUs that are no longer online are left not updated.
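
As a standalone illustration of that (again plain C with simplified
stand-ins, using bitmasks instead of cpumask_t, not the actual Xen code):
when the walk over old_cpu_mask is restricted to the CPUs that are
currently online, the entries belonging to offline CPUs are simply skipped
and keep pointing at the irq.

#include <stdio.h>

#define NR_CPUS 4
#define IRQ     42

/* One "vector_irq[old_vector]" slot per CPU. */
static int vector_irq_old[NR_CPUS];

static void clear_old_vector(unsigned int old_cpu_mask,
                             unsigned int cpu_online_map)
{
    /* Equivalent of restricting old_cpu_mask to cpu_online_map. */
    unsigned int tmp_mask = old_cpu_mask & cpu_online_map;
    unsigned int cpu;

    for ( cpu = 0; cpu < NR_CPUS; cpu++ )
    {
        if ( tmp_mask & (1u << cpu) )
            vector_irq_old[cpu] = ~IRQ;
        /* CPUs in old_cpu_mask but not online are never visited. */
    }
}

int main(void)
{
    unsigned int cpu;

    for ( cpu = 0; cpu < NR_CPUS; cpu++ )
        vector_irq_old[cpu] = IRQ;

    /* old_cpu_mask = {0,1,2,3}, but CPU 3 is offline (e.g. mid suspend). */
    clear_old_vector(0xf, 0x7);

    for ( cpu = 0; cpu < NR_CPUS; cpu++ )
        printf("cpu%u: %s\n", cpu,
               vector_irq_old[cpu] == IRQ ? "stale (still == irq)"
                                          : "cleared");

    /* The stale entry leaves the per-CPU table inconsistent with the
     * irq_desc state, which is the kind of mismatch the failing ASSERT
     * is meant to catch. */
    return 0;
}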

Maybe somehow you get into this situation when doing suspend/resume?

Could you try to add a:

ASSERT(cpumask_equal(tmp_mask, desc->arch.old_cpu_mask));

Before the `for_each_cpu(cpu, tmp_mask)` loop?
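
For clarity, this is roughly where the suggested check would sit; the
snippet below is a paraphrase of the loop structure implied by this thread
(tmp_mask holding old_cpu_mask restricted to cpu_online_map), not a
verbatim extract from arch/x86/irq.c:

    /* In _clear_irq_vector(), when a vector move was in progress. */
    cpumask_and(tmp_mask, desc->arch.old_cpu_mask, &cpu_online_map);

    /* Suggested diagnostic: if this fires, some CPU in old_cpu_mask is
     * offline, so its vector_irq[old_vector] entry will not be cleared
     * by the loop below. */
    ASSERT(cpumask_equal(tmp_mask, desc->arch.old_cpu_mask));

    for_each_cpu(cpu, tmp_mask)
    {
        /* The assertion that fired in the reported crash. */
        ASSERT(per_cpu(vector_irq, cpu)[old_vector] == irq);
        per_cpu(vector_irq, cpu)[old_vector] = ~irq;
    }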

Thanks, Roger.
