[xen staging] x86/cpuid: do not expand max leaves on restore
commit 111c8c33a8a18588f3da3c5dbb7f5c63ddb98ce5
Author:     Roger Pau Monné <roger.pau@xxxxxxxxxx>
AuthorDate: Thu Apr 29 16:04:11 2021 +0200
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Thu Apr 29 16:04:11 2021 +0200

    x86/cpuid: do not expand max leaves on restore

    When restoring, limit the maximum leaves to the ones supported by
    Xen 4.12, in order not to expand the maximum leaves a guest sees.
    Note this is unlikely to cause real issues.

    Guests restored from Xen versions 4.13 or greater will contain CPUID
    data in the stream that will override the values set by
    xc_cpuid_apply_policy.

    Reported-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
    Acked-by: Jan Beulich <jbeulich@xxxxxxxx>
---
 tools/libs/guest/xg_cpuid_x86.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
index 5ea69ad3d5..bf9a3750b5 100644
--- a/tools/libs/guest/xg_cpuid_x86.c
+++ b/tools/libs/guest/xg_cpuid_x86.c
@@ -498,18 +498,23 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
         goto out;
     }
 
-    /*
-     * Account for feature which have been disabled by default since Xen 4.13,
-     * so migrated-in VM's don't risk seeing features disappearing.
-     */
     if ( restore )
     {
+        /*
+         * Account for feature which have been disabled by default since Xen 4.13,
+         * so migrated-in VM's don't risk seeing features disappearing.
+         */
         p->basic.rdrand = test_bit(X86_FEATURE_RDRAND, host_featureset);
 
         if ( di.hvm )
         {
             p->feat.mpx = test_bit(X86_FEATURE_MPX, host_featureset);
         }
+
+        /* Clamp maximum leaves to the ones supported on 4.12. */
+        p->basic.max_leaf = min(p->basic.max_leaf, 0xdu);
+        p->feat.max_subleaf = 0;
+        p->extd.max_leaf = min(p->extd.max_leaf, 0x1cu);
     }
 
     if ( featureset )
--
generated by git-patchbot for /home/xen/git/xen.git#staging
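For readers who want to see the clamping logic in isolation, below is a
minimal, self-contained C sketch of the restore-path clamp. The struct
policy, min_u32 helper, and field names here are hypothetical
simplifications for illustration only; the real code operates on
libxenguest's CPUID policy inside xc_cpuid_apply_policy, with that
library's own field semantics.

    #include <stdint.h>
    #include <stdio.h>

    /*
     * Hypothetical stand-in for the three CPUID policy fields the patch
     * touches; NOT the real libxenguest struct layout.
     */
    struct policy {
        uint32_t basic_max_leaf;   /* highest basic leaf advertised */
        uint32_t feat_max_subleaf; /* highest leaf 7 subleaf advertised */
        uint32_t extd_max_leaf;    /* highest extended leaf advertised */
    };

    static uint32_t min_u32(uint32_t a, uint32_t b)
    {
        return a < b ? a : b;
    }

    /*
     * Mirrors the shape of the patch's restore-path clamp: never
     * advertise more leaves than Xen 4.12 did, so a migrated-in guest
     * without CPUID data in its stream cannot see new leaves appear.
     */
    static void clamp_leaves_to_4_12(struct policy *p)
    {
        p->basic_max_leaf = min_u32(p->basic_max_leaf, 0xdu);
        p->feat_max_subleaf = 0;
        p->extd_max_leaf = min_u32(p->extd_max_leaf, 0x1cu);
    }

    int main(void)
    {
        /* Example: a newer host default policy exposing more leaves. */
        struct policy p = { 0x1f, 1, 0x21 };

        clamp_leaves_to_4_12(&p);
        printf("basic=%#x feat_subleaf=%u extd=%#x\n",
               (unsigned)p.basic_max_leaf, (unsigned)p.feat_max_subleaf,
               (unsigned)p.extd_max_leaf);
        return 0;
    }

Running the sketch prints basic=0xd feat_subleaf=0 extd=0x1c. Note that
max_subleaf is clamped all the way to 0 rather than min()-ed, presumably
because Xen 4.12 only exposed subleaf 0 of leaf 7.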