
Re: [PATCH] xl: relax freemem()'s retry calculation


  • To: Anthony PERARD <anthony.perard@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Tue, 12 Jul 2022 09:01:48 +0200
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Wei Liu <wl@xxxxxxx>
  • Delivery-date: Tue, 12 Jul 2022 07:02:18 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 11.07.2022 18:21, Anthony PERARD wrote:
> On Fri, Jul 08, 2022 at 03:39:38PM +0200, Jan Beulich wrote:
>> While this is in principle possible under other conditions as well (as
>> long as other parallel operations potentially consuming memory aren't
>> "locked out"), in particular with IOMMU large page mappings used in
>> Dom0 (for PV when in strict mode; for PVH when not sharing page tables
>> with HAP) ballooning out individual pages can actually lead to less
>> free memory being available afterwards. This is because splitting a
>> large page requires one or more page table pages (one per level that
>> is split).
>>
>> When rebooting a guest I've observed freemem() fail: A single page
>> needed to be ballooned out (presumably because of heap fragmentation
>> in the hypervisor). Ballooning out a single page of course went fast,
>> but freemem() then found that it needed to balloon out yet another
>> page. After this repeated just one more time, the function signaled
>> failure to the caller - without having come anywhere near the
>> designated 30s that the whole process is allowed to go without making
>> any progress.
>>
>> Convert from a simple retry count to actually calculating elapsed
>> time, subtracting from an initial credit of 30s. Don't go as far as
>> limiting the "wait_secs" value passed to
>> libxl_wait_for_memory_target(), though. While this means the overall
>> process may now take longer (if the previous iteration ended very
>> close to the intended 30s), it compensates to some degree for the
>> value passed really meaning "allowed to run for this long without
>> making progress".
>>
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>> ---
>> I further wonder whether the "credit expired" loop exit wouldn't
>> better be placed in the middle of the loop, immediately after
>> "return true". That way, having reached the goal on the last iteration
>> would be reported as success to the caller, rather than as "timed out".
> 
> That would sound like a good improvement to the patch.

Oh. I would have made that a separate patch, if deemed sensible. Order
shouldn't matter, as I'd consider both to be backporting candidates.
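
For illustration, below is a minimal sketch (not the actual xl code) of
the loop shape being discussed: the fixed retry count replaced by an
elapsed-time credit of 30s, with the "credit expired" exit placed right
after the success check, so that reaching the goal on the final
iteration is still reported as success. free_memory_sufficient() and
balloon_out_some_memory() are hypothetical stand-ins for the libxl calls
(libxl_set_memory_target(), libxl_wait_for_memory_target(), ...) that
the real freemem() makes.

#include <stdbool.h>
#include <time.h>

/* Hypothetical placeholders for the libxl calls of the real freemem(). */
static bool free_memory_sufficient(void) { return true; }
static bool balloon_out_some_memory(void) { return true; }

static bool freemem_sketch(void)
{
    const time_t credit = 30;   /* overall "no progress" budget, seconds */
    time_t start = time(NULL);

    for ( ; ; )
    {
        if ( free_memory_sufficient() )
            return true;        /* success, even on the last iteration */

        if ( time(NULL) - start >= credit )
            return false;       /* credit expired */

        if ( !balloon_out_some_memory() )
            return false;       /* ballooning request failed outright */
    }
}

The sketch deliberately omits the per-iteration wait interval passed to
libxl_wait_for_memory_target(); the point is only where the elapsed-time
check sits relative to the success check. It also reflects the scenario
from the description: each balloon step may free only a single page
(and, with IOMMU superpage mappings in Dom0, splitting a large page can
consume more page-table pages than the one page freed), so progress has
to be judged against elapsed time rather than a fixed number of retries.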

Jan



 

