
Re: [PATCH 1/2] xen: fix setting of max_pfn in shared_info


  • To: Juergen Gross <jgross@xxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Wed, 16 Jun 2021 12:56:37 +0200
  • Cc: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Thomas Gleixner <tglx@xxxxxxxxxxxxx>, Ingo Molnar <mingo@xxxxxxxxxx>, Borislav Petkov <bp@xxxxxxxxx>, "H. Peter Anvin" <hpa@xxxxxxxxx>, stable@xxxxxxxxxxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxxx, linux-kernel@xxxxxxxxxxxxxxx, x86@xxxxxxxxxx
  • Delivery-date: Wed, 16 Jun 2021 10:56:58 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 16.06.2021 12:37, Juergen Gross wrote:
> On 16.06.21 11:52, Jan Beulich wrote:
>> On 16.06.2021 09:30, Juergen Gross wrote:
>>> Xen PV guests specify the highest used PFN via the max_pfn
>>> field in shared_info. This value is used by the Xen tools when
>>> saving or migrating the guest.
>>>
>>> Unfortunately this field is misnamed: in reality it specifies the
>>> number of pages of the guest (including any memory holes), i.e.
>>> the highest used PFN + 1. Renaming isn't possible, as this is a
>>> public Xen hypervisor interface which needs to be kept stable.
>>>
>>> The kernel sets the value correctly at boot time, but when adding
>>> more pages (e.g. due to memory hotplug or ballooning) a plain PFN
>>> is stored in max_pfn instead. This happens when expanding the p2m
>>> array, and the PFN stored there may even be the wrong one, as it
>>> should be the last possible PFN of the just added P2M frame, not
>>> the one which triggered the P2M expansion.
>>>
>>> Fix that by setting shared_info->max_pfn to the last possible PFN + 1.
>>>
>>> Fixes: 98dd166ea3a3c3 ("x86/xen/p2m: hint at the last populated P2M entry")
>>> Cc: stable@xxxxxxxxxxxxxxx
>>> Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
>>
>> The code change is fine, so
>> Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
>>
>> But I think even before the rename you would want to clarify the comment
>> next to the variable's definition, to make clear what it really holds.
> 
> It already says: "Number of valid entries in the p2m table(s) ..."
> What do you think is unclear about that? Or do you mean another
> variable?

I mean the variable whose value the patch corrects, i.e.
xen_p2m_last_pfn. What I see in the current source is

/*
 * Hint at last populated PFN.
 *
 * Used to set HYPERVISOR_shared_info->arch.max_pfn so the toolstack
 * can avoid scanning the whole P2M (which may be sized to account for
 * hotplugged memory).
 */
static unsigned long xen_p2m_last_pfn;
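
For reference, the update of this variable in the P2M expansion path
then becomes something along the following lines. This is a sketch
reconstructed from the patch description above, assuming the kernel's
ALIGN() macro and the P2M_PER_PAGE constant are used to round up to
the end of the just added P2M frame; see the patch itself for the
exact diff:

/*
 * Advance xen_p2m_last_pfn to the first PFN beyond the newly added
 * P2M frame, so that shared_info->arch.max_pfn holds a page count
 * ("highest used PFN + 1") rather than a raw PFN.
 */
if (pfn >= xen_p2m_last_pfn) {
	/* Round up to the end of the just added P2M frame. */
	xen_p2m_last_pfn = ALIGN(pfn + 1, P2M_PER_PAGE);
	HYPERVISOR_shared_info->arch.max_pfn = xen_p2m_last_pfn;
}

The ">=" comparison and the "pfn + 1" inside ALIGN() together keep the
stored value one past the last PFN covered by the populated P2M
frames, matching the "number of pages" semantics described above.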

Jan
