
Re: [PATCH v2 2/5] x86/paging: drop set-allocation from final-teardown


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Thu, 16 Mar 2023 13:34:58 +0100
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Tim Deegan <tim@xxxxxxx>
  • Delivery-date: Thu, 16 Mar 2023 12:35:14 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Mon, Jan 09, 2023 at 02:39:52PM +0100, Jan Beulich wrote:
> The fixes for XSA-410 have arranged for P2M pages freed by P2M code to
> be freed directly, rather than being put back on the paging pool list.
> Therefore whatever p2m_teardown() may return no longer needs taking
> care of here. Drop that code, leaving the assertions in place and
> adding "total" back to the PAGING_PRINTK() message.
> 
> With merely the (optional) log message and the assertions left, there
> is really no point in holding the paging lock there anymore, so drop
> that too.
> 
> Requested-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
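
For context, here is a rough, purely illustrative sketch of what the
simplified teardown tail described above could look like. This is not
the patch itself: the function name final_teardown_tail() is made up,
and the d->arch.paging.{p2m,free,total}_pages counters and the exact
PAGING_PRINTK() wording are assumptions based on the existing paging
pool bookkeeping.

    /*
     * Illustrative sketch only: with p2m_teardown() now freeing pages
     * directly, the final-teardown tail is reduced to an (optional)
     * log message plus the assertions, and no longer takes the paging
     * lock.
     */
    static void final_teardown_tail(struct domain *d)
    {
        if ( d->arch.paging.total_pages )
            PAGING_PRINTK("%pd still has %u paging pool pages\n",
                          d, d->arch.paging.total_pages);

        ASSERT(!d->arch.paging.p2m_pages);
        ASSERT(!d->arch.paging.free_pages);
        ASSERT(!d->arch.paging.total_pages);
    }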

Reviewed-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>

> ---
> The remaining parts of hap_final_teardown() could be moved as well, at
> the price of a CONFIG_HVM conditional. I wasn't sure whether that was
> deemed reasonable.

I think it's cleaner to leave them as-is.

Thanks, Roger.