
Re: [PATCH v2 1/2] libxl: Fix guest kexec - skip cpuid policy


  • To: Jason Andryuk <jandryuk@xxxxxxxxx>
  • From: Anthony PERARD <anthony.perard@xxxxxxxxxx>
  • Date: Wed, 25 Jan 2023 15:45:55 +0000
  • Cc: <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Juergen Gross <jgross@xxxxxxxx>, Dongli Zhang <dongli.zhang@xxxxxxxxxx>
  • Delivery-date: Wed, 25 Jan 2023 15:46:49 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Mon, Jan 23, 2023 at 09:59:38PM -0500, Jason Andryuk wrote:
> When a domain performs a kexec (soft reset), libxl__build_pre() is
> called with the existing domid.  Calling libxl__cpuid_legacy() on the
> existing domain fails since the cpuid policy has already been set, and
> the guest isn't rebuilt and doesn't kexec.
> 
> xc: error: Failed to set d1's policy (err leaf 0xffffffff, subleaf 0xffffffff, msr 0xffffffff) (17 = File exists): Internal error
> libxl: error: libxl_cpuid.c:494:libxl__cpuid_legacy: Domain 1:Failed to apply CPUID policy: File exists
> libxl: error: libxl_create.c:1641:domcreate_rebuild_done: Domain 1:cannot (re-)build domain: -3
> libxl: error: libxl_xshelp.c:201:libxl__xs_read_mandatory: xenstore read failed: `/libxl/1/type': No such file or directory
> libxl: warning: libxl_dom.c:49:libxl__domain_type: unable to get domain type for domid=1, assuming HVM
> 
> During a soft_reset, skip calling libxl__cpuid_legacy() to avoid the
> issue.  Before the fixes commit, the libxl__cpuid_legacy() failure would

s/fixes/fixed/, or maybe better, just write: "before commit 34990446ca91".

> have been ignored, so kexec would continue.
> 
> Fixes: 34990446ca91 "libxl: don't ignore the return value from xc_cpuid_apply_policy"

FYI, the tag's format is with () around the commit title:
    Fixes: 34990446ca91 ("libxl: don't ignore the return value from xc_cpuid_apply_policy")
I have this in my git config file to help generate those:
[alias]
    fixes = log -1 --abbrev=12 --format=tformat:'Fixes: %h (\"%s\")'
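With that alias in place, running "git fixes 34990446ca91" prints the properly formatted line shown above, ready to paste into the commit message.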


> Signed-off-by: Jason Andryuk <jandryuk@xxxxxxxxx>
> ---
> Probably a backport candidate since this has been broken for a while.
> 
> v2:
> Use soft_reset field in libxl__domain_build_state. - Juergen
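
For anyone following along, a minimal sketch of the kind of guard the description and the v2 note imply (illustrative only; the call arguments are approximated and the authoritative change is the patch itself):

    /* In libxl__build_pre(): on soft reset the domain keeps its existing
     * CPUID policy, and applying it again fails with EEXIST, so only set
     * the policy when the domain is actually being built from scratch.
     * (Sketch only; not the exact hunk from the patch.) */
    if (!state->soft_reset)
        rc = libxl__cpuid_legacy(ctx, domid, false, info);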

Reviewed-by: Anthony PERARD <anthony.perard@xxxxxxxxxx>

Thanks,

-- 
Anthony PERARD
